igb and jumbo frames

Jack Vogel jfvogel at gmail.com
Fri Dec 3 22:05:35 UTC 2010


Since you're already configuring the system in a special, non-standard
way you are playing the admin, so I'd expect you to also configure memory
pool resources, not to have the driver do so. It's also going to depend on
the number of queues you have; you can reduce those manually as well.
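
For example, a minimal sketch for /boot/loader.conf (the values are
illustrative only, not recommendations, and hw.igb.num_queues assumes a
driver recent enough to expose that tunable):

    kern.ipc.nmbclusters="131072"    # grow the overall cluster pool
    kern.ipc.nmbjumbo9="32768"       # 9k clusters to cover the rx rings
    hw.igb.num_queues="2"            # cap the queues instead of one per core

With the pools sized at boot, the usual ifconfig igb0 mtu 8192 should
then come up without the allocation failure.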

I'm glad you're trying this out, however; the 9K cluster use is new, and
not uncontroversial either. I decided to put it in, but if problems occur,
or someone has a strong, valid-sounding argument against using them, I
could be persuaded to take it out and just use the 2K and 4K sizes.

So... any feedback is good right now.

Jack


On Fri, Dec 3, 2010 at 11:00 AM, Tom Judge <tom at tomjudge.com> wrote:

> Hi,
>
> So I have been playing around with some new hosts I have been deploying
> (Dell R710s).
>
> The systems have a single dual port card in them:
>
> igb0 at pci0:5:0:0:        class=0x020000 card=0xa04c8086 chip=0x10c98086
> rev=0x01 hdr=0x00
>    vendor     = 'Intel Corporation'
>    class      = network
>    subclass   = ethernet
>    cap 01[40] = powerspec 3  supports D0 D3  current D0
>    cap 05[50] = MSI supports 1 message, 64 bit, vector masks
>    cap 11[70] = MSI-X supports 10 messages in map 0x1c enabled
>    cap 10[a0] = PCI-Express 2 endpoint max data 256(512) link x4(x4)
> igb1 at pci0:5:0:1:        class=0x020000 card=0xa04c8086 chip=0x10c98086
> rev=0x01 hdr=0x00
>    vendor     = 'Intel Corporation'
>    class      = network
>    subclass   = ethernet
>    cap 01[40] = powerspec 3  supports D0 D3  current D0
>    cap 05[50] = MSI supports 1 message, 64 bit, vector masks
>    cap 11[70] = MSI-X supports 10 messages in map 0x1c enabled
>    cap 10[a0] = PCI-Express 2 endpoint max data 256(512) link x4(x4)
>
>
> Running 8.1, these cards panic the system at boot when initializing the
> jumbo MTU, so to solve this I back-ported the stable/8 driver to 8.1 and
> booted with that kernel.  So far so good.
>
> However, when configuring the interfaces with an MTU of 8192, the system
> is unable to allocate the required mbufs for the receive queue.
>
> I believe the message was from here:
> http://fxr.watson.org/fxr/source/dev/e1000/if_igb.c#L1209
>
> After a little digging and playing with just one interface, I discovered
> that the default tuning for kern.ipc.nmbjumbo9 was insufficient to run a
> single interface with jumbo frames, as it seemed that just the TX queue
> consumed 90% of the available 9k jumbo clusters.
>
> So my question is (well, 2 questions really):
>
> 1) Should igb be auto-tuning kern.ipc.nmbjumbo9 and kern.ipc.nmbclusters
> up to suit its needs?
>
> 2) Should this be documented in igb(4)?
>
> Tom
>
> --
> TJU13-ARIN
>
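To put rough numbers on the shortfall Tom describes (an illustration,
assuming 8 RX queues and 1024 descriptors per ring, both of which vary
with the hardware and tunables): the driver populates every RX descriptor
with a 9k cluster at init, so

    2 ports x 8 queues x 1024 descriptors = 16384 clusters

are needed for the receive rings alone, before anything else in the
system buffers a packet. If the default kern.ipc.nmbjumbo9 pool is
smaller than that, the allocation fails just as reported.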

