tunefs question

Rick C. Petty rick-freebsd at kiwi-computer.com
Sat Jun 9 02:02:10 UTC 2007


On Sat, Jun 09, 2007 at 12:30:04AM +0200, Richard Noorlandt wrote:
> Thanks for the reactions. They cleared up quite a bit, and my conclusion is
> that tweaking the FS isn't a very good idea. They're defaults for a reason,

Sure it is; there are just a few well-known bugs to steer around.

> although I still have some doubts about the appropriateness of the defaults
> for large filesystems. Large filesystems don't seem to be very well
> supported at the moment.

How did you arrive at this conclusion?  UFS2 supports large filesystems
very well.  The only known problems have to do with snapshots and full
filesystems, and these affect large filesystems as well as small.

> I hope (and believe) ZFS will settle this. It

Then you are mistaken.  ZFS has large memory requirements, and as you grow
the filesystem you need more memory.  UFS has a small memory footprint:
it only requires memory for one cylinder group (superblock plus free
bitmaps), plus all indirect blocks, in addition to the referenced blocks
for each open file.

This is not meant as criticism of ZFS...  each file system has its own
perks and problems.  UFS has been tested and used for much longer, at least
with the default newfs parameters.  Also, it's called the fast file system
for a reason.

> sounds promising :-) Unfortunately I don't think it's stable enough at the
> moment.

That's an understatement.  However, it's receiving active attention and
development.  I doubt it will ever replace UFS, nor do I think that is the
intention.

> >If you know the precise files (i.e. total number of files + number of
> >directories --> number of inodes, average filesize --> inode density),
> >this
> >helps you squeeze more space without sacrificing anything.
> 
> I don't really understand what you're trying to say here. How exactly do
> you determine the number of inodes needed?

Since my description is rather long, I've put it at the end of this email.

Although I suggest leaving block & fragment size alone, I highly recommend
specifying the inode density parameter, at least if you know the average
file size ahead of time.

> And when you change the number of inodes at filesystem creation, what effect
> will it have when you run growfs later on? Will it expand the filesystem
> with an equal inode density, or is it expanded with the default density?

growfs currently doesn't look at inode density.  If you run growfs on a
filesystem without modifying the size, it will allocate extra metadata
blocks and could end up failing if the filesystem is nearly full.  growfs
wasn't developed at the same time as newfs, and the authors decided to
ignore inode density as an option.  IMO, this is a bug.  Although the
density is not stored as a parameter in the superblock, it is trivial to
compute.  Someday I'll probably get around to sending them a patch.
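
For instance, dumpfs(8) prints the relevant superblock fields (ipg and fpg
among them), so with hypothetical values ipg=23552, fpg=94064, and
fsize=2048 the effective density works out to:

	# bytes of data per inode = frags per group * frag size / inodes per group
	echo $(( 94064 * 2048 / 23552 ))    # -> 8179, roughly the default -i 8192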

If your plan is to use growfs on a filesystem someday, you shouldn't use
anything but the default newfs options.  However, then you're wasting all
that space for metadata you will never use.  For UFS2, to figure out how
much space is spent on metadata, do the following calculation:

	ncg = number of cylinder groups (reported by newfs)
	ipg = number of inodes per group (reported by newfs)
	bsize = block size (specified to, reported by newfs)

	metadata = ncg * (65536 + 2 * bsize + 256 * ipg) bytes

Using the newfs defaults, for a 70 GiB volume, 2.23 GiB (3.2%) is spent on
metadata.  NOTE:  specifying the inode density parameter to newfs greatly
reduces the metadata size!
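
For example, plugging hypothetical newfs output (379 cylinder groups with
23552 inodes each, 16384-byte blocks) into that formula:

	echo "379 * (65536 + 2 * 16384 + 256 * 23552)" | bc   # -> 2322366464 bytes (~2.2 GiB)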

-- 

When I'm migrating non-dynamic filesystems, I start with:

find /path/to/current/mountpoint | wc -l

This will tell you how many files and directories are on the current file
system.  Number of files plus number of directories == (approx) number of
inodes used.  With hard links, you may actually need fewer inodes than
shown here, but it's still a good estimate in most cases.  Then:

df -k /path/to/current/mountpoint

will tell you how much storage is in use on said filesystem, in KiB.
Divide this by the number of inodes (and multiply by 1024 for bytes) and
you have the average filesize, which is also an approximate inode density
(inode density being how much storage is required per inode, or
equivalently how many inodes to create for a given storage size).  I take
this number and round it down to the nearest multiple of the block size,
erring in favor of extra inodes rather than too few.
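
Spelled out as a rough sh sketch (the mountpoint and block size below are
placeholders; substitute your own):

	mnt=/path/to/current/mountpoint
	n=$(find $mnt | wc -l)                      # approx. inodes in use
	kb=$(df -k $mnt | awk 'NR==2 {print $3}')   # KiB in use
	bsize=16384                                 # match your -b choice
	avg=$(( kb * 1024 / n ))                    # average bytes per file
	echo $(( avg / bsize * bsize ))             # suggested -i value

(If the last number is 0, the average file is smaller than one block; use
avg itself, or a fragment multiple, instead.)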

Then I create a new slice/volume of a specified size, greater than the
total used space, including extra room if I think the filesystem will
grow.  I create the filesystem, specifying the -i option (and -b and -f
options, if desired).  I typically try a few settings and use the -N
option as well (which prints the resulting parameters without actually
creating the filesystem), until I like the results.  I look at the output
of newfs:

...
	using _#_CG_ cylinder groups of _CG_SZ_, _CG_BLKS_, _I_ inodes.

I check that the number of inodes per CG times the total number of CGs is
greater than or equal to the number of inodes I need.  For example, if I
found 1000 files taking up 10 GB of space and I decide to allocate a 15
GB filesystem, I ensure there are at least 1500 inodes (scaling the inode
count with the size, assuming the average file size stays the same).
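
As an sh one-off, with made-up numbers from a newfs -N run:

	needed=$(( 1000 * 15 / 10 ))    # scale old inode count to new size -> 1500
	ncg=115; ipg=14                 # hypothetical values from newfs -N output
	[ $(( ncg * ipg )) -ge $needed ] && echo OK || echo "need a lower -i"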

As for tweaking the block/frag sizes, I wouldn't go less than 8192/1024 
and you can't go above 65536/8192.  Things to consider:  Remember every
file allocates space one block at a time.  The last "block" of a file can
be a fragment, which is 1/8 of a block.  If you have lots of small files,
you'll want a small frag size (and thus small blocks).  If there are few
files and they are all large, larger block sizes can be used.  The
defaults of 16384/2048 are generally sufficient.  I only tweak these when
I've calculated that I actually save space by tweaking them.  Remember
every directory will allocate at least one fragment.  Also, larger blocks
mean more inodes will fit into every cylinder group, and a cylinder group
can describe more blocks, since the free bitmap is part of the CG.
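
To see the small-file effect concretely, here's a throwaway sh function
that computes how much space a single file actually allocates (the sizes
below are just examples):

	# allocated bytes = whole blocks, plus fragments covering the tail
	alloc() {  # usage: alloc filesize blocksize fragsize
		full=$(( $1 / $2 * $2 ))
		rem=$(( $1 - full ))
		[ $rem -gt 0 ] && full=$(( full + (rem + $3 - 1) / $3 * $3 ))
		echo $full
	}
	alloc 5000 16384 2048    # -> 6144  (three 2048-byte frags)
	alloc 5000 65536 8192    # -> 8192  (one 8192-byte frag)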


-- Rick C. Petty

