how big can kern.maxvnodes get?

Chris Peiffer bsdlists at cabstand.com
Wed Dec 29 22:35:00 UTC 2010


I have a backend server running 8.2-PRERELEASE with lots of
independent files that randomly grow and then get truncated to
zero. (Think popserver.)

Roughly 2.5 million inodes on each of 4 Intel SSD disks, and 24 GB of
RAM in the system. I want to maximize the buffer cache in RAM.

I doubled kern.maxvnodes to 942108; reads/second went down and
memory use went up (as I expected), but right now there's still about
15 GB of RAM marked as free.

vfs.numvnodes crept up to 821704 and has hovered there. File sizes
range up to 1 MB, but most are in the 0-10 KB range. Since the
server's operations are so simple, kern.openfiles hovers in the
range 100-200.
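For anyone following along, here's roughly how I'm watching the
counters (vfs.freevnodes is an assumption on my part -- I believe
it's present on 8.x, but check your system):

```
# one-shot snapshot of vnode usage vs. the cap, plus open files
sysctl vfs.numvnodes kern.maxvnodes vfs.freevnodes kern.openfiles
```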

Obviously, all things being equal, I'd like to give the filesystem
buffer cache access to that free RAM by allowing more vnodes to stay
cached.

Can I increase kern.maxvnodes by another factor of 2? More? Are there
any known problems with stepping it up, besides general memory
exhaustion? With so much free RAM I'd like to turn the dial a little,
but I wonder if there are other linked things I should watch out
for.

Thanks.

Here are some lines from vmstat -z that might be relevant:

ITEM                     SIZE     LIMIT      USED      FREE  REQUESTS   FAILURES

VNODE:                    472,        0,   779555,    69229, 163219829,        0
VNODEPOLL:                112,        0,        2,       64,        4,        0
S VFS Cache:              108,        0,   761856,    63606, 504696076,        0
L VFS Cache:              328,        0,        0,      228,      300,        0

More information about the freebsd-fs mailing list