filesystem size after newfs

Rick C. Petty rick-freebsd2008 at kiwi-computer.com
Tue Aug 11 19:18:38 UTC 2009


On Tue, Aug 11, 2009 at 12:41:06AM +0000, Naeem Afzal wrote:
> 
> resending to FS mailing list: I created this small partition of 512K bytes on disk, and I am noticing that about 24% is used up before the filesystem can be mounted and used. My assumption was that only about 4% would be used if minfree is set to 0.
>     #newfs -U -l -m 0 -n -o space /dev/ad1d
>     /dev/ad1d: 0.5MB (1024 sectors) block size 16384, fragment size 2048 using 1 cylinder groups of 0.50MB, 32 blks, 64 inodes with soft updates
>     super-block backups (for fsck -b #) at:
>     160
>     #mount /dev/ad1d /test
>     #df -H /test
>     Filesystem    Size    Used  Avail Capacity  Mounted on
>     /dev/ad1d      391k    2.0k    389k    1%    /test
>     Could someone explain where the 512-391=121K of disk space went? What is the relation between this use of space and the total partition size, or is it some fixed ratio?

When you use newfs(8), it leaves 64k at the front for bootstrap code.
This is followed by at least one "block" for the superblock, one block for
the superblock backup, one block for the cylinder group, and at least one
block for inodes.  Since your block size is 16k (the default), this means
that your filesystem uses 64k for filesystem metadata.  This isn't a
problem with larger filesystems, but yours is 512k so 128k is "wasted"
meaning you cannot even use the space.  I'm not sure how you are seeing a
filesystem of 391k..  I performed these same steps and I have a 382k
filesystem: 512 - 128 - 2 = 382, so I'm not surprised with my numbers.
That extra 2k is one fragment allocated to the root directory.
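The arithmetic above can be sketched as a quick back-of-the-envelope check (all sizes in kilobytes; 16k blocks and 2k fragments assumed, as in the quoted newfs output):

```shell
bootstrap=64                 # area reserved at the front for boot code
blocks=4                     # superblock + backup + cylinder group + inodes
reserved=$((bootstrap + blocks * 16))
rootdir=2                    # one 2k fragment for the root directory
avail=$((512 - reserved - rootdir))
echo "reserved ${reserved}k, available ${avail}k"   # reserved 128k, available 382k
```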

If you want to better conserve space on your small partition, you should
probably use UFS1 (which only reserves 8k for bootstrap) instead of UFS2
and specify smaller block and fragment sizes.  I would also specify inode
density.  I tried the following:

% newfs -O 1 -U -l -m 0 -n -o space -f 512 -b 4096 -i 1048576 /dev/md0
/dev/md0: 0.5MB (1024 sectors) block size 4096, fragment size 512
	using 1 cylinder groups of 0.50MB, 128 blks, 32 inodes.
	with soft updates
super-block backups (for fsck -b #) at:
 32

After mounting, it shows:

Filesystem              Size    Used   Avail Capacity  Mounted on
/dev/md0                480K    512B    479K     0%    /mnt

There are a number of things you should be careful about.  With UFS1,
you won't be able to use bootstrap code larger than 8k, and you won't be
able to use large files (not a problem, because your filesystem is only
512k).  You also won't get snapshots, which you apparently don't want anyway.

Specifying inode density can put you in a bind if you need a lot of
inodes.  In my example there are exactly 32 inodes (one 4096-byte block
of 128-byte UFS1 inodes, as the newfs output above shows), which is
somewhat limited.  The first three inodes are reserved (2 is the root
inode), which leaves you with a maximum of 29 files and/or directories.
I'm assuming this isn't a problem since you're using such a small
filesystem.  The smaller block and fragment sizes help reduce the "wasted
space" taken up by filesystem metadata, but will require some tuning if
you want more inodes.  Be sure that only one cylinder group is created,
or you'll waste 16k or more for each additional cylinder group.
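A sketch of the inode arithmetic, assuming UFS1's 128-byte on-disk inodes and the 4096-byte block size used in the example above:

```shell
block_size=4096
inode_size=128               # UFS1; UFS2 inodes are 256 bytes
inodes=$((block_size / inode_size))
usable=$((inodes - 3))       # inodes 0 and 1 are reserved, 2 is the root
echo "$inodes inodes, $usable usable"   # 32 inodes, 29 usable
```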

I also recommend keeping the 8:1 ratio of blocks to fragments.  If you do
wish to tweak that, here are a few things to note.  Minimum blocksize is
4096 and at least 4 blocks are allocated for each cylinder group (in
addition to the leading 64k).  More blocks are allocated if the inode
density is higher (specifying a lower number to "newfs -i").  UFS1 can fit
twice as many inodes in the same space as UFS2, which is why I recommend
using it with very small filesystems.  Since filesystem metadata is always
allocated in blocks, it doesn't really help to tweak the fragment size.
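The inode-density point is easy to see numerically: with 256-byte inodes, UFS2 fits half as many inodes per block as UFS1, so the same inode count costs twice the metadata (a sketch, assuming a 4096-byte block):

```shell
bs=4096
ufs1=$((bs / 128))           # UFS1 inode is 128 bytes
ufs2=$((bs / 256))           # UFS2 inode is 256 bytes
echo "UFS1: $ufs1 inodes/block, UFS2: $ufs2 inodes/block"   # 32 vs 16
```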

At one time I was thinking of writing a patch to newfs to allow you to
specify the superblock offset, so you could save 16-64k per cylinder
group.  But there are limitations, since the FFS code searches for
superblocks at specific offsets, namely (in order): 64k, 8k, 0, 256k.
I also had thoughts about patching it to remove the superblock backup, so
that fs_sblkno could be 0 instead of 144 or 32.  Because of its structure,
at least 16k (8k bootstrap plus 8k initial superblock) is unused for every
cylinder group in UFS1 (at least 72k for UFS2).

There isn't much to be gained in such a patch except for very small
filesystems such as in your case.  When you're dealing with 512k, that
extra 16k (or more) is starting to look significant (3%).
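The 3% figure is just the ratio of that fixed overhead to the filesystem size (integer arithmetic rounds 3.125% down):

```shell
overhead=16                  # k unused per UFS1 cylinder group
total=512                    # k total filesystem size
echo "$((overhead * 100 / total))%"   # 3%
```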

HTH,

-- Rick C. Petty

