svn commit: r254520 - in head/sys: kern sys

Scott Long scott4long at yahoo.com
Wed Aug 21 19:41:42 UTC 2013


On Aug 21, 2013, at 8:59 AM, Andre Oppermann <andre at freebsd.org> wrote:

> On 19.08.2013 23:45, Navdeep Parhar wrote:
>> On 08/19/13 13:58, Andre Oppermann wrote:
>>> On 19.08.2013 19:33, Navdeep Parhar wrote:
>>>> On 08/19/13 04:16, Andre Oppermann wrote:
>>>>> Author: andre
>>>>> Date: Mon Aug 19 11:16:53 2013
>>>>> New Revision: 254520
>>>>> URL: http://svnweb.freebsd.org/changeset/base/254520
>>>>> 
>>>>> Log:
>>>>>    Remove the unused M_NOFREE mbuf flag.  It didn't have any in-tree
>>>>> users
>>>>>    for a very long time, if ever.
>>>>> 
>>>>>    Should such a functionality ever be needed again the appropriate and
>>>>>    much better way to do it is through a custom EXT_SOMETHING
>>>>> external mbuf
>>>>>    type together with a dedicated *ext_free function.
>>>>> 
>>>>>    Discussed with:    trociny, glebius
>>>>> 
>>>>> Modified:
>>>>>    head/sys/kern/kern_mbuf.c
>>>>>    head/sys/kern/uipc_mbuf.c
>>>>>    head/sys/sys/mbuf.h
>>>>> 
>>>> 
>>>> Hello Andre,
>>>> 
>>>> Is this just garbage collection or is there some other reason for this?
>>> 
>>> This is garbage collection and removal of not quite right, rotten,
>>> functionality.
>>> 
>>>> I recently tried some experiments to reduce the number of mbuf and
>>>> cluster allocations in a 40G NIC driver.  M_NOFREE and EXT_EXTREF proved
>>>> very useful and the code changes to the kernel were minimal.  See
>>>> user/np/cxl_tuning.  The experiment was quite successful and I was
>>>> planning to bring in most of those changes to HEAD.  I was hoping to get
>>>> some runtime mileage on the approach in general before tweaking the
>>>> ctors/dtors for jumbop, jumbo9, jumbo16 to allow for an mbuf+refcnt
>>>> within the cluster.  But now M_NOFREE has vanished without a warning...
>>> 
>>> I'm looking through your experimental code and those are some really
>>> good numbers you're achieving there!
>>> 
>>> However, a couple of things don't feel quite right, hackish even, and
>>> not fit for HEAD.  This is much the same situation we had with some of
>>> the first 1GigE cards quite a number of years back (mostly ti(4)).
>>> There we ended up with a couple of just-good-enough hacks to make it
>>> fast.  Most of those remains are what I've cleaned up today.
>> 
>> If M_NOFREE and EXT_EXTREF are properly supported in the tree (and I'm
>> arguing that they were, before r254520) then the changes are perfectly
>> legitimate.  The only hackish part was that I was getting the cluster
>> from the jumbop zone while bypassing its normal refcnt mechanism.  I did
>> this to use the same zone as m_uiotombuf and keep it "hot" for all
>> consumers (driver + network stack).
> 
> If you insist I'll revert the commit removing M_NOFREE.  EXT_EXTREF isn't
> touched yet, but should get better support.
> 
> The hackish part for me is that the driver again manages its own memory
> pool.  Windows works that way, and NetBSD is moving towards it, while
> FreeBSD has and remains with a central network memory pool.  The latter
> (our current) approach seems more efficient overall, especially on heavily
> loaded networked machines.  Significant queues may build up (think of a
> blocked application with many socket buffers filling up), delaying the
> freeing and return of network memory resources.  Together with
> fragmentation this can lead to very bad outcomes.  Router applications
> with many interfaces
> also greatly benefit from central memory pools.
> 
> So I'm really not sure that we should move back toward driver-owned
> pools, with lots of code duplication and copy-pasting (see NetBSD).
> Also it is kinda weird to have a kernel-based pool for data going down
> the stack and another one in each driver for data going up.
> 
> Actually I'm of the opinion that we should stay with the central memory
> pool and fix it so that it works just as well in those cases where a
> driver pool currently performs better.

The central memory pool approach is too slow, unfortunately.  There's a
reason that other OSes are moving to private pools.  At Netflix we are
currently working on some approaches to private memory pools in order to
achieve better efficiency, and we're closely watching and anticipating
Navdeep's work.

Scott


