system slowdown - vnode related
Matthew Dillon
dillon at apollo.backplane.com
Tue May 27 14:05:55 PDT 2003
:I'll try this if I can tickle the bug again.
:
:I may have just run out of freevnodes - I only have about 1-2000 free
:right now. I was just surprised because I have never seen a reference
:to tuning this sysctl.
:
:- Mike H.
The vnode subsystem is *VERY* sensitive to running out of KVM, meaning
that setting too high a kern.maxvnodes value is virtually guaranteed to
lock up the system under certain circumstances.  If you can reliably
reproduce the lockup with maxvnodes set fairly low (e.g. less than
100,000) then it ought to be easier to track the deadlock down.
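If you want to watch the counters while you reproduce it, a minimal
userland sketch along these lines should work (assuming the read-only
counters are exported as vfs.numvnodes and vfs.freevnodes; on some
releases they live under debug.* instead):

    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <err.h>
    #include <stdio.h>

    /* Read an int-sized sysctl by name, bailing out on error. */
    static int
    get_int(const char *name)
    {
            int val;
            size_t len = sizeof(val);

            if (sysctlbyname(name, &val, &len, NULL, 0) == -1)
                    err(1, "%s", name);
            return (val);
    }

    int
    main(void)
    {
            printf("kern.maxvnodes: %d\n", get_int("kern.maxvnodes"));
            printf("vfs.numvnodes:  %d\n", get_int("vfs.numvnodes"));
            printf("vfs.freevnodes: %d\n", get_int("vfs.freevnodes"));
            return (0);
    }

Lowering the limit for a test run is just
'sysctl -w kern.maxvnodes=100000' as root.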
Historically speaking, systems did not have enough physical memory to
actually run out of vnodes; they would run out of physical memory
first, which would cause VM pages to be reused and their underlying
vnodes to be deallocated when the last page went away.  Hence the
amount of KVM being used to manage vnodes (vnode and inode structures)
was kept under control.
But today's Intel systems have far more physical memory relative to
available KVM and it is possible for the vnode management to run
out of KVM before the VM system runs out of physical memory.
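A back-of-the-envelope calculation shows the scale of the problem.  The
per-vnode sizes below are hypothetical placeholders (the real
sizeof(struct vnode) and per-filesystem inode cost vary by release),
but the arithmetic is the point:

    #include <stdio.h>

    int
    main(void)
    {
            const size_t vnode_sz = 256;   /* assumed sizeof(struct vnode) */
            const size_t inode_sz = 256;   /* assumed per-fs inode cost */
            const size_t n = 500000;       /* an aggressive kern.maxvnodes */

            /* KVM consumed by vnode/inode bookkeeping alone. */
            printf("~%zu MB of KVM for %zu vnodes\n",
                n * (vnode_sz + inode_sz) / (1024 * 1024), n);
            return (0);
    }

A machine with several gigabytes of RAM can easily cache enough pages
to keep half a million vnodes alive, yet the ~244 MB of bookkeeping
above is roughly a quarter of the 1 GB of KVM a stock i386 kernel has
to work with.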
The vnlru kernel thread is an attempt to control this problem, but it
has had only mixed success in complex vnode management situations
like unionfs, where an operation on a vnode may cause accesses to
additional underlying vnodes.  In other words, vnlru can potentially
shoot itself in the foot in such situations while trying to flush out
vnodes.
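A crude way to see whether vnlru is keeping up is to poll the vnode
count against the limit; if it stays pinned at kern.maxvnodes the
reclaim thread is losing the race (again assuming the vfs.numvnodes
counter name):

    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <err.h>
    #include <stdio.h>
    #include <unistd.h>

    static int
    get_int(const char *name)
    {
            int val;
            size_t len = sizeof(val);

            if (sysctlbyname(name, &val, &len, NULL, 0) == -1)
                    err(1, "%s", name);
            return (val);
    }

    int
    main(void)
    {
            int max = get_int("kern.maxvnodes");

            /* Once a second, report how close we are to the limit. */
            for (;;) {
                    printf("%d / %d vnodes in use\n",
                        get_int("vfs.numvnodes"), max);
                    sleep(1);
            }
    }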
-Matt