defrag

Christian Baer christian.baer at uni-dortmund.de
Sat Mar 3 14:19:29 UTC 2007


On Thu, 1 Mar 2007 17:21:57 -0500 Bill Moran wrote:

> But this also makes it _easy_ for the filesystem to avoid causing the type
> of fragmentation that _does_ degrade performance.  For example, when the
> first block is on track 10, then the next block is on track 20, then we're
> back to track 10 again, then over to track 35 ... etc, etc

Fragmentation *this* bad doesn't happen on MS systems either. Although
those systems are much more prone to creating a big mess on the drive,
there is some logic built in to reduce it, such as only allowing the
track numbers to rise or fall (possibly per file access), but not to
jump back and forth across the drive.
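
A rough sketch of that idea in Python (my own illustration, not how the
MS allocator actually works - the track numbers are made up):

# Order pending block reads so the head sweeps in one direction per
# file access instead of bouncing back and forth across the drive.
def sweep_order(tracks, head_at=0):
    # Everything at or beyond the head first (ascending), then the
    # rest on the way back (descending).
    outward = sorted(t for t in tracks if t >= head_at)
    inward = sorted((t for t in tracks if t < head_at), reverse=True)
    return outward + inward

# The badly fragmented pattern quoted above: track 10, 20, 10, 35, ...
print(sweep_order([10, 20, 10, 35, 12], head_at=12))
# -> [12, 20, 35, 10, 10]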

I can remember experimenting on my Commodore 64 (can anyone remember
that ol' thing?) and its floppy drive. I stored a file all over the
disc, one sector per track. The idea was to find out how much time it
actually took to load a file "fragmented" like this - it made a really
cool loading sound as well, especially if you had a floppy speeder like
Dolphin DOS. :-) I actually wanted to make the drive go from track 1
to 40 and then back again while loading a single file, but that didn't
work: if I started on track a and was now on track c, jumping back to
track b (with a < b < c) resulted in an error from the drive. Mind you,
this was not a load command that I programmed - it's just the way the
file was allocated on the disc.
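
If I had to guess at the rule the drive was enforcing (purely my
reconstruction of the behaviour above, in Python):

# Guess: while chaining through a file's sectors, once the head has
# covered the span from track a to track c, stepping back to a track
# strictly in between is rejected.
def drive_accepts(track_chain):
    lo = hi = track_chain[0]
    for t in track_chain[1:]:
        if lo < t < hi:      # backward jump into already-covered span
            return False
        lo, hi = min(lo, t), max(lo, t)
    return True

print(drive_accepts([1, 5, 20, 40]))  # True  - head keeps moving outward
print(drive_accepts([1, 40, 20]))     # False - the 1-to-40-and-back case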

A certain logic to how files are laid out on discs (whether hard or
floppy) has been around for a fair while.

Regards
Chris
