system slowdown - vnode related
Mike Harding
mvh at ix.netcom.com
Mon May 26 09:24:42 PDT 2003
On my system, with 512 meg of memory, I have the following (default)
vnode-related values:
bash-2.05b$ sysctl -a | grep vnode
kern.maxvnodes: 36079
kern.minvnodes: 9019
vm.stats.vm.v_vnodein: 140817
vm.stats.vm.v_vnodeout: 0
vm.stats.vm.v_vnodepgsin: 543264
vm.stats.vm.v_vnodepgsout: 0
debug.sizeof.vnode: 168
debug.numvnodes: 33711
debug.wantfreevnodes: 25
debug.freevnodes: 5823
...is this really low? Is this something that should go into
tuning(7)? I searched on Google and found basically nothing about
adjusting vnodes - although I am admittedly flogging the system - I have
leafnode+ running, a mirrored CVS tree, an experimental CVS tree, a
mount_union'd /usr/ports in a jail, and so on. Damn those $1-a-gigabyte
drives!
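
For the archives, in case someone else goes searching for this later:
the ceiling itself can be inspected and raised at runtime with sysctl,
and made persistent via /etc/sysctl.conf. Something along these lines
should work - the 100000 below is just an example number I pulled out
of the air, not a recommendation:

# see the current ceiling next to what's actually in use
sysctl kern.maxvnodes debug.numvnodes debug.freevnodes

# raise the ceiling on the fly (example value only)
sysctl -w kern.maxvnodes=100000

# make it stick across reboots
echo 'kern.maxvnodes=100000' >> /etc/sysctl.conf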
On Mon, 2003-05-26 at 09:12, Marc G. Fournier wrote:
> On Mon, 26 May 2003, Mike Harding wrote:
>
> > Er - are any changes made to RELENG_4_8 that aren't made to RELENG_4? I
> > thought it was the other way around - that 4_8 only got _some_ of the
> > changes to RELENG_4...
>
> Ack, my fault ... sorry, wasn't thinking :( RELENG_4 is correct ... I
> should have confirmed my settings before blathering on ...
>
> One of the scripts I used extensively while debugging this ... a quite
> simple one ... was:
>
> #!/bin/tcsh
> while ( 1 )
> echo `sysctl debug.numvnodes` - `sysctl debug.freevnodes` - `sysctl debug.vnlru_nowhere` - `ps auxl | grep vnlru | grep -v grep | awk '{print $20}'`
> sleep 10
> end
>
> which outputs this:
>
> debug.numvnodes: 463421 - debug.freevnodes: 220349 - debug.vnlru_nowhere: 3 - vlruwt
>
> I have my maxvnodes set to 512k right now ... now, when the server "hung",
> the output would look something like (this would be with 'default' vnodes):
>
> debug.numvnodes: 199252 - debug.freevnodes: 23 - debug.vnlru_nowhere: 12 - vlrup
>
> with the critical bit being the vlruwt -> vlrup change ...
>
> with unionfs, you are using two vnodes per file, instead of one in
> non-union mode, which is why I went to 512k vs the default of ~256k vnodes
> ... it doesn't *fix* the problem, it only reduces its occurrence ...
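
For anyone following along at home, I believe the equivalent of what
Marc is running boils down to the commands below - untested on my box,
and 524288 is just 512*1024 spelled out:

# bump the vnode ceiling to ~512k, as Marc describes
sysctl -w kern.maxvnodes=524288

# watch the vnlru kernel process's wait channel:
# vlruwt means it's idle/waiting, vlrup means it's struggling to reclaim
ps -axo pid,wchan,command | grep '[v]nlru'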