Improving old-fashioned UFS2 performance with lots of inodes...

George Sanders gosand1982 at yahoo.com
Mon Jun 27 23:44:16 UTC 2011


I have a very old-fashioned file server running a 12-disk raid6 array on a 3ware 
9650SE.  2TB disks, so the size comes out to 18TB.

I newfs the raw device with:

newfs -i 65535 /dev/xxx

and I would consider jumping to 131072 ... that way my fsck should take no 
longer than it would on a smaller disk, since the total number of inodes is 
no higher.
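The arithmetic behind that: -i is bytes of data space per inode, so on ~18 TB 
the inode count scales inversely. A back-of-the-envelope sketch (newfs rounds 
per cylinder group, so the real numbers differ somewhat):

```shell
# Rough inode counts for an ~18 TB filesystem: one inode per -i bytes
# of data space.  Treating 18 TB as 18 * 10^12 bytes here.
bytes=$((18 * 1000 * 1000 * 1000 * 1000))
for density in 65535 131072; do
    echo "-i $density -> ~$((bytes / density)) inodes"
done
```

So doubling -i roughly halves the inode table fsck has to walk.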

BUT ...

with over 100 million inodes on the filesystem, things go slow.  Overall 
throughput is fine, and I have no complaints there, but any kind of operation 
on the files is quite slow: building a file list with rsync, doing a cp, or 
an ln -s of a big dir tree, etc.
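For the big-tree copies specifically, a tar pipe is a traditional alternative 
to cp -R for metadata-heavy trees. A minimal sketch with throwaway paths (on 
the real box, src and dst would be directories on the array):

```shell
# Demonstrate the tar pipe on a throwaway tree.  The first tar streams
# the whole tree to stdout; the second unpacks it, preserving permissions.
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/a/b"
echo "payload" > "$src/a/b/file.txt"
tar -cf - -C "$src" . | tar -xpf - -C "$dst"
ls "$dst/a/b"
```

Whether it actually wins here depends on where the time goes, but it is cheap 
to try.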

Let's assume that the architecture is not changing ... it's going to be FreeBSD 
8.x, using UFS2, and raid6 on actual spinning (7200rpm) disks.

What can I do to speed things up?

Right now I have these in my loader.conf:

kern.maxdsiz="4096000000"        # for fsck
vm.kmem_size="1610612736"        # for big rsyncs
vm.kmem_size_max="1610612736"    # for big rsyncs

and I also set:

vfs.ufs.dirhash_maxmem=64000000

but that's it.
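(For the record, since dirhash_maxmem is a runtime sysctl rather than a 
loader tunable, it only sticks across reboots if it also goes in 
/etc/sysctl.conf:

```
# /etc/sysctl.conf -- persist the dirhash budget across reboots
vfs.ufs.dirhash_maxmem=64000000
```

)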

What bugs me is, the drives have 64 MB of cache each, and the 3ware 
controller has 224 MB (or so), but the system itself has 64 GB of RAM ... is 
there no way to use that RAM to increase performance?  I don't see a way to 
actually throw hardware resources at UFS2, other than faster disks, which 
are uneconomical for this application ...
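The only knobs I have found so far that point more of that RAM at the 
filesystem are these (values are illustrative, untested at this scale; 
check the names with sysctl -d on your 8.x box before relying on them):

```
# /boot/loader.conf -- illustrative values, not tested on an 18 TB array
kern.maxvnodes="2000000"   # cache more vnodes, so metadata for more files
                           # stays resident in RAM
vfs.read_max="64"          # larger cluster read-ahead, in blocks
```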

Yes, 3ware write cache is turned on, and storsave is set to "balanced".

Is there anything that can be done?


More information about the freebsd-fs mailing list