nmbclusters: how do we want to fix this for 8.3 ?

Josh Paetzel jpaetzel at freebsd.org
Thu Feb 23 16:13:37 UTC 2012


On 02/22/2012 13:51, Jack Vogel wrote:
> On Wed, Feb 22, 2012 at 1:44 PM, Luigi Rizzo <rizzo at iet.unipi.it 
> <mailto:rizzo at iet.unipi.it>> wrote:
> On Wed, Feb 22, 2012 at 09:09:46PM +0000, Ben Hutchings wrote:
>> On Wed, 2012-02-22 at 21:52 +0100, Luigi Rizzo wrote:
> ...
>>> I have hit this problem recently, too. Maybe the issue
>>> mostly/only exists on 32-bit systems.
>> No, we kept hitting mbuf pool limits on 64-bit systems when we
>> started working on FreeBSD support.
> OK, never mind then; the mechanism would be the same, though the
> limits (especially VM_LIMIT) would be different.
>>> Here is a possible approach:
>>> 1. nmbclusters consume kernel virtual address space, so
>>> there must be some upper limit, say
>>>    VM_LIMIT = 256000 (translates to 512MB of address space)
>>> 2. You also don't want the clusters to take up too much of the
>>> available memory. This one would only trigger on minimal-memory
>>> systems or virtual machines, but still...
>>>    MEM_LIMIT = (physical_ram / 2) / 2048
>>> 3. One may try to set a suitably large, desirable number of
>>> buffers:
>>>    TARGET_CLUSTERS = 128000
>>> 4. And finally we could use the current default as the absolute
>>> minimum:
>>>    MIN_CLUSTERS = 1024 + maxusers*64
>>> Then at boot the system could say
>>>    nmbclusters = min(TARGET_CLUSTERS, VM_LIMIT, MEM_LIMIT)
>>>    nmbclusters = max(nmbclusters, MIN_CLUSTERS)
>>> In turn, I believe interfaces should do their part and by
>>> default never try to allocate more than a fraction of the total
>>> number of buffers,
>> Well what fraction should that be?  It surely depends on how
>> many interfaces are in the system and how many queues the other
>> interfaces have.
>>> if necessary reducing the number of active queues.
>> So now I have too few queues on my interface even after I
>> increase the limit.
>> There ought to be a standard way to configure numbers of queues
>> and default queue lengths.
> Jack raised the problem that there is a poorly chosen default for 
> nmbclusters, causing one interface to consume all the buffers. If
> the user explicitly overrides the value, then the number of clusters
> should be what the user asks for (memory permitting). The next step
> is on devices: if there are no overrides, the default for a driver
> is to be lean. I would say that capping the request at between 1/4
> and 1/8 of the total buffers is surely better than the current
> situation. Of course, if there is an explicit override, then use it
> whatever happens to the others.
> cheers luigi
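The "lean driver" idea could look something like this sketch: cap a NIC's cluster request at a fraction of nmbclusters, dropping queues until the request fits. The function name and the 1/4 fraction are illustrative choices from the range Luigi mentions, not an existing driver API:

```c
#include <assert.h>

int
nic_fit_queues(int want_queues, int clusters_per_queue, long nmbclusters)
{
	/* lean default: claim at most 1/4 of the cluster pool */
	long budget = nmbclusters / 4;
	int nq = want_queues;

	/* reduce the number of active queues until the request fits */
	while (nq > 1 && (long)nq * clusters_per_queue > budget)
		nq--;
	return (nq);
}
```

With nmbclusters = 25600 (the stock default at maxusers 384) and 4096-buffer rings, an 8-queue request shrinks to a single queue; raise nmbclusters to 131072 and all 8 queues fit.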
> Hmmm, well, I could make the default use only 1 queue or something
> like that; I was thinking more of what actual users of the hardware
> would want.
> After the installed kernel is booted and the admin has made
> whatever post-install modifications they wish, it could be changed,
> along with nmbclusters.
> This is why I sought opinions on the algorithm itself, but also
> from anyone using ixgbe and igb under heavy load: what would you
> find most convenient?
> Jack

The default setting is a thorn in our (with my iXsystems
servers-for-FreeBSD hat on) side.  A system with a quad-port igb card
and two onboard igb NICs won't boot stable/8 or 8.x-RELEASE to
multiuser.  Ditto for a dual-port Chelsio or ixgbe card alongside dual
onboard igb interfaces.

My vote would be for systems over some minimum threshold of RAM to
come up with a higher default for nmbclusters.  You don't see many
10GbE NICs in systems with 2GB of RAM....
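That threshold idea boils down to a one-line conditional; here is a sketch where the 8GB cutoff and the 262144-cluster value are hypothetical numbers for illustration, not figures from this thread:

```c
#include <assert.h>

long
nmbclusters_for_ram(long long ram_bytes)
{
	/* hypothetical cutoff: a box with >= 8GB likely has 10GbE-class NICs */
	if (ram_bytes >= 8LL * 1024 * 1024 * 1024)
		return (262144);
	/* otherwise keep the stock default (1024 + maxusers*64, maxusers = 384) */
	return (1024 + 384 * 64);
}
```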

-- 

Josh Paetzel
FreeBSD -- The power to serve

