UFS2 optimization for many small files

Wojciech Puchar wojtek at wojtek.tensor.gdynia.pl
Sun Jul 1 18:14:08 UTC 2007

> We're going to build a server with some 1 TB of over 500 million small
> files with sizes from 0.5k to 4k. I wonder if UFS2 can handle this
> kind of system well. From newfs(8) the minimum block size is 4k. This
> is not optimal in our case; a 1k or 0.5k block would be more effective
> IMHO. I'd be happy if anyone could explain what the fragment (block/8)
> in UFS2 means and how this parameter works. I know it's better to read the

Exactly like a block/cluster in Windows: the fragment is the smallest 
allocation unit, and a "block" is a group of 8 fragments, used to make 
allocation of larger files faster.
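To make the fragment idea concrete, here is a tiny arithmetic sketch (the 4096/512 numbers are taken from the newfs line further down; the file size is just an example):

```shell
# Sketch: how much space a small file really takes with -b 4096 -f 512.
# UFS rounds a file's tail up to whole fragments, not whole blocks.
frag=512
file_size=1000                                 # a typical "1k" file
frags_used=$(( (file_size + frag - 1) / frag ))
space=$(( frags_used * frag ))
echo "a ${file_size}-byte file occupies ${space} bytes (${frags_used} fragments)"
```

So a 1000-byte file costs two 512-byte fragments (1024 bytes) rather than a whole 4096-byte block, which is why the 4k minimum block size is less painful than it first looks.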

> full ufs2 specification, but hope that someone here can give a hint.
> Please advise on optimizations or tricks.

Please DO NOT make a single partition like that. Try to divide it into 
3-4 partitions. It will work as a single one, but waiting for fsck will 
kill you ;)

AFAIK fsck time grows nonlinearly with filesystem size, to some extent.

The newfs options will look something like this:

newfs -m <A> -i <B> -b 4096 -f 512 -U /dev/partition

where A is the minimum free space percentage. With mostly small files and 
a huge partition, don't be afraid to set it to 1 or even 0.
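For scale, here is what -m holds back on a 1 TB partition (decimal TB assumed, purely illustrative; 8 is the traditional newfs default):

```shell
# Sketch: bytes reserved from non-root users by the -m (minfree) setting
disk_bytes=1000000000000      # ~1 TB, decimal
minfree_pct=8                 # the traditional default
reserved=$(( disk_bytes * minfree_pct / 100 ))
echo "-m ${minfree_pct} keeps back $(( reserved / 1000000000 )) GB"
minfree_pct=1
reserved=$(( disk_bytes * minfree_pct / 100 ))
echo "-m ${minfree_pct} keeps back $(( reserved / 1000000000 )) GB"
```

Dropping from the default to -m 1 frees roughly 70 GB on a disk this size.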

B is the bytes-per-inode density: size of the partition (bytes) divided 
by the number of inodes you want.

The default is probably 2048; you may use 1024 or 4096 in your case. Make 
a rough estimate of how many files you will have (you gave sizes between 
0.5k and 4k, but what is the average?). Making too many inodes = wasted 
space (256 bytes per UFS2 inode); making too few = big problem :)
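The estimate can be sketched numerically; the figures below (1 TB decimal, 5E8 files, 256 bytes per UFS2 on-disk inode) are assumptions from this thread, not measurements:

```shell
# Sketch: does -i 2048 leave enough inodes for 500 million files?
disk_bytes=1000000000000                    # ~1 TB
expected_files=500000000
bytes_per_inode=2048                        # candidate -i value
inodes=$(( disk_bytes / bytes_per_inode ))
table_gb=$(( inodes * 256 / 1000000000 ))   # rough inode table size
echo "inodes: ${inodes}, inode table: ~${table_gb} GB"
if [ "$inodes" -ge "$expected_files" ]; then
    echo "enough inodes"
else
    echo "too few inodes: use a smaller -i"
fi
```

Note the trade-off both ways: -i 2048 on exactly 1 TB falls slightly short of 500 million inodes, yet the inode table at this density already eats roughly an eighth of the disk.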

Another question: HOW do you plan to make backups of such data? With 
dump, rsync, tar etc. it's clearly "mission impossible".

Feel free to mail me; I have had such cases, not with 5E8 but with over 
1E8 files :)

More information about the freebsd-questions mailing list