Optimizing UFS 1/2 for non-rotating / compressed storage

Maxim Sobolev sobomax at freebsd.org
Sun Aug 7 08:09:43 UTC 2016


Thanks, Kirk, I hope you had a great time off down there!

So far we've settled on the following, which seems to pessimize the
compression ratio slightly but greatly reduces the size of an incremental
upgrade using rsync after we change just a few files and re-pack:

newfs -n -b 65536 -f $((65536 / 2)) -m 0 -L "${FW_LABEL}" "/dev/${MD_UNIT}"

Unfortunately, 64k is the maximum block size we can get out of it (128k is
rejected), and we run out of inodes if we set the fragment size to 64k as
well. Is there a fundamental limitation on the size of the block? I am
curious to see how 128k/32k might work, considering that the compressor
prefers a bigger block size. We'll try to play with the other options too,
as you've suggested.
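The inode shortfall at a 64k fragment size follows from newfs's default
inode density of one inode per (4 * fragsize) bytes of data space; a
small sketch of the arithmetic (the 2 GB image size is an assumption
taken from the description below):

```shell
#!/bin/sh
# Why doubling the fragment size halves the inode count: newfs's default
# inode density is one inode per (4 * fragsize) bytes.
IMG_BYTES=$((2 * 1024 * 1024 * 1024))   # assumed 2 GB image
for FRAG in 32768 65536; do
    DENSITY=$((4 * FRAG))
    echo "frag=${FRAG}: ~$((IMG_BYTES / DENSITY)) inodes"
done
# The density can also be forced explicitly at creation time, e.g.:
#   newfs -b 65536 -f 65536 -i 131072 "/dev/${MD_UNIT}"
```

Running this prints ~16384 inodes for 32k fragments versus ~8192 for 64k,
which is why a lower `-i` value would be needed to keep 64k fragments.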

-Max

On Thu, Aug 4, 2016 at 10:04 AM, Kirk McKusick <mckusick at mckusick.com>
wrote:

> > From: Maxim Sobolev <sobomax at freebsd.org>
> > Date: Wed, 20 Jul 2016 11:45:03 -0700
> > Subject: Optimizing UFS 1/2 for non-rotating / compressed storage
> > To: Kirk McKusick <mckusick at mckusick.com>,
> >         FreeBSD Filesystems <freebsd-fs at freebsd.org>
> >
> > Hi Kirk et al,
> >
> > Do you by any chance have some hints of what parameters we need to set in
> > newfs to maximally fit the following criteria:
> >
> > 1. Minimize free space fragmentation, i.e. we start with a huge array of
> > zeroes and we want to end up with as few contiguous zero areas as
> > possible (i.e. minimize free space discontinuity).
> >
> > 2. Blocks that belong to the same file should be as contiguous as
> > possible "on disk".
> >
> > 3. Each individual file should preferably start at a block offset that
> > is a multiple of a certain pre-defined power-of-two size from the start
> > of the partition, e.g. 64k, 128k etc.
> >
> > The file system in question is write-mostly. We create the image from
> > scratch every time and then populate it with installworld + pkg add.
> > Any free space is subsequently erased with dd if=/dev/zero
> > of=/myfs/bigfile; rm /myfs/bigfile, then the image is unmounted and
> > compressed. We also grossly over-provision space, i.e. a 2GB UFS image
> > is created and less than 1GB is used at the end.
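For reference, the build-and-shrink cycle described above might be
sketched as follows (the md unit, mount point, label, and choice of xz as
the compressor are all assumptions; this requires root on FreeBSD):

```shell
#!/bin/sh
set -e
# Attach the image file as a vnode-backed memory disk and build the fs.
mdconfig -a -t vnode -f fw.img -u 4
newfs -n -b 65536 -f 32768 -m 0 -L FW /dev/md4
mount /dev/md4 /myfs
# ... populate with installworld / pkg add ...
# Overwrite all free space with zeroes so it compresses away; dd is
# expected to stop with ENOSPC, hence the "|| true".
dd if=/dev/zero of=/myfs/bigfile bs=1m || true
rm /myfs/bigfile
umount /myfs
mdconfig -d -u 4
xz -9 fw.img
```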
> >
> > Any hints would be appreciated. Thanks!
> >
> > -Maxim
>
> Just back from spending the month of July in Tasmania (Australia)
> and trying to get caught up on email...
>
> Unfortunately UFS/FFS is not well designed for what you want to do.
> It splits the filesystem space up into "cylinder groups" and then
> tries to place the files evenly across the cylinder groups. At least
> it packs the files into the front of each cylinder group, so you
> will tend to get a big block of unallocated space at the end of
> each cylinder group.
>
> You could benefit from allocating the fewest cylinder groups
> possible, which is what newfs does by default.  But you could
> help this along by creating a filesystem with no fragments (just
> full-sized blocks) as that keeps the bitmaps small (the bitmap needs
> one bit per possible fragment). I will note that going without
> fragments will blow up your disk usage if you have many small files,
> as a small file will use 8x as much space as it would if you had
> fragments.
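The bitmap-size effect is easy to quantify: the free-space map needs one
bit per fragment, so larger (or absent) fragments shrink it linearly. A
rough sketch for the whole filesystem (the 2 GB size and the 8k baseline
fragment size are assumptions, and in practice the bitmap is split across
cylinder groups):

```shell
#!/bin/sh
# One bit of free-space bitmap per fragment: fewer, larger fragments
# mean smaller per-cylinder-group bitmaps.
FS_BYTES=$((2 * 1024 * 1024 * 1024))    # assumed 2 GB filesystem
for FRAG in 8192 65536; do
    BITS=$((FS_BYTES / FRAG))
    echo "frag=${FRAG}: $((BITS / 8)) bytes of free-space bitmap"
done
```

With these numbers, going from 8k fragments to full-sized 64k "fragments"
cuts the total bitmap from 32768 bytes to 4096 bytes.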
>
> Use the `-e maxbpg' parameter to newfs (or tunefs after the fact)
> to set a huge value for contiguous blocks before being forced to
> move to a new cylinder group. Note that doing this will penalize
> your small file read performance, so you may want to leave this
> alone.
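The maxbpg knob can be set either at creation time or after the fact; a
hedged sketch (the device name and the value 2097152 are placeholder
assumptions, not tuned recommendations):

```shell
# Raise maxbpg so a file can keep allocating contiguous blocks in one
# cylinder group before being forced to move to the next one.
# At creation time:
#   newfs -e 2097152 -b 65536 -f 32768 -m 0 /dev/md4
# After the fact, on an unmounted filesystem:
#   tunefs -e 2097152 /dev/md4
```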
>
> To get all files to start on a particular block boundary, set your
> filesystem block size to the starting offset boundary you desire
> (e.g., if you want files to start on a 32k offset, use a 32k block
> size for your filesystem). If you create a filesystem with no
> fragments, then all files will by definition start on a block boundary.
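A quick sanity check of the alignment claim: with fragsize equal to
blocksize, every allocation is a whole block, so a file's first byte sits
at (block number * blocksize) from the start of the partition, which is
always blocksize-aligned (block numbers here are illustrative):

```shell
#!/bin/sh
# Any whole-block start offset is a multiple of the block size.
BS=65536
for BLK in 1 7 1000; do
    OFF=$((BLK * BS))
    echo "block ${BLK}: offset ${OFF}, 64k-aligned: $((OFF % 65536 == 0))"
done
# Creation-time flags for a fragment-free filesystem (device assumed):
#   newfs -b 65536 -f 65536 "/dev/${MD_UNIT}"
```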
>
>         Kirk McKusick
>
>