cvs commit: src/sys/kern vfs_subr.c
David O'Brien
obrien at freebsd.org
Mon Aug 2 19:24:17 PDT 2004
On Mon, Aug 02, 2004 at 08:36:28PM -0500, Alan Cox wrote:
> On Mon, Aug 02, 2004 at 07:11:03PM -0600, Scott Long wrote:
> > David E. O'Brien wrote:
> >
> > >obrien 2004-08-02 21:52:43 UTC
> > >
> > > FreeBSD src repository
> > >
> > > Modified files:
> > > sys/kern vfs_subr.c
> > > Log:
> > > Put a cap on the auto-tuning of kern.maxvnodes.
> > >
> > > Cap value chosen by: scottl
> > >
> > > Revision Changes Path
> > > 1.518 +8 -0 src/sys/kern/vfs_subr.c
> >
> > Well, the number that I gave was really only a suggestion and is
> > far too low to be useful in a production environment like
> > squid or a mail/imap server. What we should really be doing is
> > scaling based on the size of the kmem_map. We should also be
> > scaling kmem_map based on the size of physical RAM and not capping
> > it to such relatively low values like we do right now. I'm also
> > quite afraid of what might happen to something like squid that
> > will be exerting both vnode and mbuf pressure at the same time.
>
> It does scale with the amount of physical memory. There is, however,
> an architecture-specific cap to account for the KVA size. This cap
> is now too low, particularly on i386.
>
> In short, VM_KMEM_SIZE_MAX needs to increase on i386. I just don't
> know how large of an increase is safe. Do you have access to an i386
> with 4+ GB of RAM?
I do -- and it is the machine that was panicking with:
panic: kmem_malloc(4096): kmem_map too small: 209715200 total allocated
Scottl told me the root cause was probably too high a kern.maxvnodes
value, and that he's been telling many people to limit kern.maxvnodes
to 100000 (but I see he now likes a larger number...).
vm.kmem_size: 209715200
hw.physmem: 3883290624
hw.usermem: 3751428096
are my current values. What can I do to help choose a better
VM_KMEM_SIZE_MAX cap?
--
-- David (obrien at FreeBSD.org)