Is defragmentation a meme?
It's a conspiracy to wear out your hard drive faster so you have to buy a new one.
Improving HDD health with it is a meme.
Improving access speed is not.
Everywhere other than Windows
yes
How is it that there are so many posters on 4chan who don't know a goddamn thing about hard drives?
>his filesystem is so bad it has to be defragged
mkfs.{ext4,f2fs,xfs,btrfs,madeinthiscenturyfs} /dev/disk/by*/*
It's not necessary if you have an SSD
Ah yes, let's use a filesystem that ignores the fact that desktop hard drive firmware is optimized for localized access.
>what is swapping
Most retarded thread of the day and that includes the ricing one
No. It's very peaceful to watch.
ah, yeah, that's what I'm talking about.
It is probably a net gain anyways
It's not. It's why the idea of putting server drives in desktop computers disappeared. A 7200 RPM PATA/SATA drive could keep pace with a 10k RPM SCSI drive for desktop workloads, since its caching was designed for localized access while the SCSI's was designed for random access. Spreading your files across the entire span of the drive in an attempt to minimize fragmentation on the initial write makes every workload truly random access, and there desktop drives fall far behind.
Fragmentation is a killer, but defragmentation takes care of that. You can't fix a filesystem that throws all your files to the winds. The individual files may not be fragmented but if you need to read multiple files the overall read is fragmented as fuck.
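You can actually check this on Linux instead of guessing: filefrag from e2fsprogs prints each file's extent map, and the physical_offset column shows where the data really sits on disk. Rough sketch, paths are just placeholders:

sudo filefrag -v /home/anon/song1.flac /home/anon/song2.flac
# compare the physical_offset columns: "adjacent" files can land
# hundreds of GB apart on the platter even with zero per-file fragmentation
sudo e4defrag -c /home/anon/Music   # ext4 only: prints a fragmentation score for a whole tree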
Nah, it's just another feature that Linux lacks because, as is customary with open source, things stopped being maintained.
marc.info
tl;dr: Development on the old ext defrag utility became inactive so long ago that it's now unusable. Over time, Linux users just rationalized that their filesystems magically didn't fragment with use, as if physics simply didn't apply to them.
Only on windows.
>Drive fails
>fragmented as fuck
>you try to recover a file
>it's in 1000 parts all over the drive
hmm
>random access magnetic disks inevitably fragment over time regardless of the filesystem for the simple fact that contiguity is destroyed by deletions
>Linux lacks the most basic functionality to mitigate that
>free software developers are always struggling and playing catch-up with proprietary software, they can't even develop file picker thumbnails, so of course they fell behind as usual
>Linux users just go like "meh, w-we didn't even need that anyway because g-grapes are sour *nervous laugh*"
You can't make this shit up.
How do I undelete files from ext4?
I accidentally deleted a file using rm!
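If that's a serious question: extundelete reads the ext3/ext4 journal and can sometimes get a file back, but only if you stop writing to the partition immediately. Device and path below are placeholders:

sudo umount /dev/sdb1                                              # stop all writes first
sudo extundelete /dev/sdb1 --restore-file home/anon/important.txt  # path is relative to the fs root
sudo extundelete /dev/sdb1 --restore-all                           # or grab everything it can find
# whatever it recovers lands in ./RECOVERED_FILES/

No guarantees; once the blocks get reused, the file is gone.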
Defragment your PS4, anon.
>tfw watching this as a kid
>tfw this doesn't even mean you're underage
Heh, I just defragmented my laptop's HDD today (:
>today
hello there, time traveler!
My Seagate HDD is almost 6 years old and it was spinning and clicking a lot; after defragmenting it I don't hear it anymore. Why? Yes, I'm dumb, I built my first computer in 2012.
I wonder if moving all the files to an external HD and copying them back is faster and puts less strain on the drive.
Pretty much always.
It's better too, since you can keep directories together, while AFAIK most defragmentation programs just stack files willy-nilly.
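A rough sketch of the copy-off-and-back approach, assuming the external drive is mounted at /mnt/ext and the data lives in /home/anon (both placeholders):

rsync -aHAX /home/anon/ /mnt/ext/backup/   # -a keeps perms/times, -H hard links, -AX ACLs/xattrs
diff -r /home/anon /mnt/ext/backup         # verify the copy BEFORE deleting anything
rm -rf /home/anon/*
rsync -aHAX /mnt/ext/backup/ /home/anon/   # written back in one pass, directories land together

The write-back is mostly sequential, so the heads aren't thrashing the way they do during an in-place defrag.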
not really. at any given time, hard drives can be doing a linear transfer or a seek, but not both. and each seek costs about a full platter rotation of time on average.
so 7200 rpm drive = ~120 seeks per second. with a high end modern drive that can get ~250 MB/s sustained linear reads, that means that each seek costs about 2 MB of theoretical bandwidth. E.g., reading 10 scattered 4 kB files takes as long as reading an entire 20 MB file stored contiguously.
But SSD access times make this less of an issue for non-poorfags nowadays, at least.
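The arithmetic above as a throwaway awk one-off (the ~250 MB/s and one-full-rotation-per-seek figures are the assumptions from the post, not measurements):

awk 'BEGIN {
  rps  = 7200 / 60               # 120 rotations/sec -> ~120 seeks/sec
  seek = 1 / rps                 # ~8.3 ms per seek
  bw   = 250e6                   # sustained linear read, bytes/sec
  printf "bandwidth lost per seek: %.1f MB\n", seek * bw / 1e6
  n = 10; sz = 4096              # 10 scattered 4 kB files
  t = n * seek + n * sz / bw
  printf "10 scattered 4 kB files: %.1f ms, same as a %.0f MB contiguous read\n", t * 1e3, t * bw / 1e6
}'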
is Defraggler broken? it just "finished" defragging a drive, but there are still tons of red blocks, and when I hit defrag again it says it's going to take another five hours to finish
yeah... Windows, where it's still the 1990s