svn commit: r246204 - head/sys/arm/include

Alan Cox alc at rice.edu
Sat Feb 2 19:23:48 UTC 2013


On 02/01/2013 15:57, Andre Oppermann wrote:
> On 01.02.2013 22:16, Juli Mallett wrote:
>> On Fri, Feb 1, 2013 at 1:01 PM, Andre Oppermann <andre at freebsd.org>
>> wrote:
>>> On 01.02.2013 21:23, Adrian Chadd wrote:
>>>>
>>>> .. before you make that assumption, please re-visit some of the ..
>>>> lower-end integrated Ethernet MACs in embedded chips.
>>>>
>>>> I don't know whether the Atheros stuff does (I think it does, but I
>>>> don't know under what conditions it's possible.)
>>>>
>>>> Maybe have it by default not return jumbo mbufs, and if a driver wants
>>>> jumbo mbufs it can explicitly ask for them.
>>>
>>>
>>> Jumbo frames do not see wide-spread use.  When they are used, it is
>>> in data-center LAN environments and possibly also inter-datacenter
>>> links; that is, high-performance environments.
>>>
>>> I seriously doubt that the lower-end Ethernet MACs you're referring
>>> to fit that bill.
>>
>> These are silly generalizations, Andre.  I know of low-end systems in
>> jumbo frame environments.  Adrian's implication that Atheros hardware
>> can't do scatter-gather into multiple buffers for jumbo frames is
>> probably mistaken, but if we do have hardware that requires jumbo
>> mbufs, we should obviously keep supporting them to some extent.
>
> My generalizations are about as silly as Adrian's handwaving.  ;)
>
> The reason jumbo mbufs (> PAGE_SIZE) ever came into existence was for
> non-s/g DMA network cards, the Alteon Tigon (II) in particular, IIRC.
> Jumbo mbufs are much more taxing on the VM and the kernel_map because
> of the contigmalloc requirement.

Not really.  The lowest layer in the physical memory management system
has used a binary buddy system-based allocator since FreeBSD 7.0.  It
is no more costly to ask for 16KB than for 4KB and, if a free 16KB
block exists, to allocate it.  It just may not exist.  Only then will
the cost be greater than that of an ordinary multipage allocation, such
as a kmem_malloc() call.  However, 16KB of contiguous pages typically
do exist, especially on machines with larger physical memories.

> ...  They should only be used when really
> necessary in the receive path and not as general purpose extra large
> mbufs.  That's what we've got mbuf chains for.  The send path from
> userspace uses PAGE_SIZE (jumbo) mbufs whenever possible.  The same
> is true for sendfile() because it wraps file object pages to mbufs.
>
>> Hypotheticals are somewhat irrelevant, but I find it surprising that
>> you're being so glib about breaking FreeBSD networking just because of
>> an idea you have about where jumbo frame use is appropriate and what
>> kinds of hardware should be connected to jumbo frame networks.
>
> First, I said I'd love to remove it, not that I'm doing it now or
> soon.  It's an opinion and an expression of desire.  Second, very
> little would break, because every contemporary network card (that I
> have looked at so far) supports jumbo-frame s/g into 2K or 4K mbufs
> on receive.  Jumbo frames are only available with GigE and 10GigE;
> they were never supported at 100M.  Any patch removing large jumbo
> support wouldn't go unnoticed. ;)
>



More information about the svn-src-head mailing list