UFS2 optimization for many small files

Claus Guttesen kometen at gmail.com
Sun Jul 1 07:30:41 UTC 2007


> We're going to build a server with some 1 TB of over 500 million
> small files ranging in size from 0.5k to 4k.  I'm wondering whether
> UFS2 can handle this kind of system well.  According to newfs(8) the
> minimum block size is 4k, which is not optimal in our case; a 1k or
> 0.5k block would be more effective IMHO.  I'd be happy if anyone
> could explain what the fragment (block/8) in UFS2 means and how this
> parameter works.  I know it's better to read the full UFS2
> specification, but I hope someone here can give a hint.  Please
> advise on optimizations or tricks.
> Thank you very much.
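
On the fragment question: UFS2 allocates full blocks for a file's
data, but the last, partial block of a file (and hence a very small
file) is stored in a fragment, which is block-size/8 by default.  With
the default 16k blocks and 2k fragments a 0.5k file still ties up a
2k fragment, so 4k blocks with 512-byte fragments get much closer to
your file sizes.  With 500 million files on 1 TB you also want a far
denser inode allocation than the default.  Purely as a sketch (the
device name is a placeholder, and the numbers should be sanity-checked
against newfs(8) before you trust them):

  # UFS2, soft updates, 4k blocks, 512-byte fragments (block/8),
  # and one inode per 2k of data space (1 TB / 500M files ~ 2k/file)
  newfs -O 2 -U -b 4096 -f 512 -i 2048 /dev/da0s1d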

Try ZFS on amd64, unless your application doesn't work well with ZFS
or your organization doesn't allow running -CURRENT.  -CURRENT is
remarkably stable, considering that ZFS is fairly new, was ported
from Solaris, and only runs on -CURRENT at the moment.  I'm using it
on an 8.2 TB Nexsan storage array and have had no crashes during
testing and during a limited time in production.
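
With ZFS you are also not locked into a block size at newfs time: the
recordsize is a per-dataset property, and files smaller than it are
stored in a single block sized to fit the file.  A rough sketch only
(pool and dataset names are made up; pick devices and redundancy to
suit your hardware):

  # pool plus a dataset tuned for lots of small files
  zpool create tank da0 da1
  zfs create tank/smallfiles
  zfs set recordsize=4K tank/smallfiles
  zfs set atime=off tank/smallfiles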

Some years ago I used FreeBSD 5.2 as an NFS server (with UFS2) on
approximately 15 partitions ranging from 400 GB to 2 TB in size.
Whenever the server crashed for some reason, the webservers were
unable to access the NFS-mounted partitions while the server took a
snapshot of each partition in order to perform a background fsck, and
our website was down for that whole period.  In that respect UFS2 did
not scale well for us.
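
If you do end up on UFS2 and hit the same problem, the background
fsck (and the snapshot it takes) can be switched off, at the cost of
a long foreground fsck after a crash.  A sketch of the rc.conf knobs,
worth double-checking against rc.conf(5) on your release:

  # /etc/rc.conf
  background_fsck="NO"          # fsck in the foreground at boot
  #background_fsck_delay="60"   # or just delay the background fsck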

-- 
regards
Claus

When lenity and cruelty play for a kingdom,
the gentlest gamester is the soonest winner.

Shakespeare

