Is netmap jumbo frames broken in STABLE?

Andrew Vylegzhanin avv314 at gmail.com
Tue Jun 7 12:22:27 UTC 2016


Just to support Luigi's assumption:

I've tested on 11.0-ALPHA1 (r301204).
Same situation with frame size 5166, and it works _well_ with frame size 4032.

--
Andrew

2016-06-07 1:47 GMT+03:00 Ryan Stone <rysto32 at gmail.com>:

> The use of mbuf clusters larger than a single page really doesn't work.
> The problem is that over time physical memory becomes fragmented and
> eventually 9K of contiguous memory can't be allocated anymore.  This is why
> many drivers now limit themselves to page-sized clusters.
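
For illustration, a minimal sketch of what "limiting to page-sized clusters"
looks like on the allocation side (assuming the standard m_getjcl(9) KPI; this
is not code from the thread):

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/mbuf.h>

    /*
     * Sketch only: an MJUMPAGESIZE cluster needs a single physical page,
     * so it can still be allocated after memory has fragmented, while
     * MJUM9BYTES needs ~9 KB of physically contiguous memory.
     */
    static struct mbuf *
    alloc_rx_cluster(void)
    {
            return (m_getjcl(M_NOWAIT, MT_DATA, M_PKTHDR, MJUMPAGESIZE));
    }

The trade-off is that a jumbo frame must then span more than one buffer, which
is exactly what the netmap application below is not expecting.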
>
> On Mon, Jun 6, 2016 at 10:03 AM, Luigi Rizzo <rizzo at iet.unipi.it> wrote:
>
>> On Mon, Jun 6, 2016 at 3:22 PM, Andrew Vylegzhanin <avv314 at gmail.com>
>> wrote:
>>
>> > Hello all,
>> >
>> >
>> > I have an application that uses netmap to capture jumbo frames. The
>> > frames are of fixed size and arrive at a fixed rate (for example, size
>> > 5166 at 50000 pps). The frames are pure Ethernet, without an IP header.
>> >
>> >
>> > Everything works fine in 10.0-RELEASE, 10.1-RELEASE.
>> >
>> >
>> > Starting from 10.3 and the current 10-STABLE I get wrong data from the
>> > netmap ring. It looks like the packet data is corrupted and the packet
>> > is split into two parts of 4092 and 1070 bytes, where the original
>> > size was 5166.
>> >
>> > The ring-processing code is based on macros from netmap_user.h:
>> >
>> >
>> >         cur = ring->cur;                    /* first slot to read */
>> >         n = nm_ring_space(ring);            /* slots ready in the ring */
>> >         if (limit > n)
>> >                 limit = n;
>> >         for (rx = 0; rx < limit; rx++) {
>> >                 struct netmap_slot *slot = &ring->slot[cur];
>> >                 char *p = NETMAP_BUF(ring, slot->buf_idx);
>> >
>> >                 process_payload(p, slot->len, datapx);
>> >                 cur = nm_ring_next(ring, cur);
>> >         }
>> >         ring->head = ring->cur = cur;       /* release consumed slots */
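
If the driver really hands each frame over in two pieces, netmap normally
signals this with NS_MOREFRAG in slot->flags. A minimal check (a sketch
against the stock netmap_user.h API; whether the new driver actually sets the
flag for this 4k split is exactly what needs verifying):

    #include <stdint.h>
    #include <stdio.h>
    #include <net/netmap_user.h>

    /*
     * Sketch: walk the RX ring and report any frame that arrives split
     * across more than one slot instead of in a single 5166-byte slot.
     */
    static void
    check_ring(struct netmap_ring *ring)
    {
            unsigned int cur = ring->cur;
            unsigned int n = nm_ring_space(ring);

            while (n-- > 0) {
                    struct netmap_slot *slot = &ring->slot[cur];

                    if (slot->flags & NS_MOREFRAG)
                            fprintf(stderr, "split frame: first part is %u bytes\n",
                                (unsigned int)slot->len);
                    cur = nm_ring_next(ring, cur);
            }
            ring->head = ring->cur = cur;
    }

If the flag never shows up but slot->len is still only 4092, the driver is
splitting the frame without telling netmap, which would match the corruption
described above.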
>> >
>> >
>> > Here are the netmap sysctls:
>> >
>> > dev.netmap.buf_num=81920
>> >
>> > dev.netmap.ring_size=73728
>> >
>> > dev.netmap.buf_size=5248
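
With these settings, dev.netmap.buf_size=5248 is large enough to hold a whole
5166-byte frame, so every frame should fit in a single slot. A quick runtime
check of the size the kernel actually granted (a sketch using the nm_open()
helper from netmap_user.h; "netmap:ix0" is just an example port name):

    #define NETMAP_WITH_LIBS
    #include <stdio.h>
    #include <net/netmap_user.h>

    int
    main(void)
    {
            struct nm_desc *d = nm_open("netmap:ix0", NULL, 0, NULL);

            if (d == NULL)
                    return (1);
            /* a single slot must hold the whole frame: need >= 5166 here */
            printf("nr_buf_size = %u\n", (unsigned int)
                NETMAP_RXRING(d->nifp, d->first_rx_ring)->nr_buf_size);
            nm_close(d);
            return (0);
    }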
>> >
>> >
>> > Hardware is a Dell R720 (2x E5-2643 v2) with four Intel Ethernet 10G 2P
>> > X520 adapters. I use only one hardware queue per interface.
>> >
>> >
>> > BTW, maybe the new version of the Intel ixgbe driver (3.1.13-k) is the
>> > reason?
>> >
>> >
>> Hi,
>> yes, I suspect the problem may be in the new ixgbe driver:
>> it probably programs the hardware to limit buffer sizes to 4k
>> even when large MTUs are in use, so the receiver splits the
>> incoming frame into two buffers while netmap is expecting
>> only one.
>> I suggest having a look at the ioctl handler (in the driver)
>> that handles the MTU setting and comparing it with the code
>> in the previous driver.
>>
>> cheers
>> luigi
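
To make that comparison concrete, this is the kind of buffer-size decision to
look for in the driver's MTU/re-init path (a hypothetical sketch, not actual
if_ix.c code): if the receive cluster is capped at one page regardless of MTU,
every 5166-byte frame comes back as a ~4 KB slice plus a remainder.

    /*
     * Hypothetical sketch, not actual driver code: receive buffer size
     * chosen from the MTU.  Capping the cluster at one page (4096 bytes)
     * means every jumbo frame is split across two descriptors.
     */
    static unsigned int
    rx_buf_size_for_mtu(unsigned int mtu)
    {
            if (mtu > 1500)
                    return (4096);  /* one page, even for jumbo MTUs */
            return (2048);          /* standard 2 KB cluster */
    }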
>>
>>
>> > Does it make sense to try with 11-CURRENT?
>> >
>> >
>> > Thank you in advance.
>> >
>> >
>> > --
>> >
>> > Andrew
>>
>>
>>
>> --
>> -----------------------------------------+-------------------------------
>>  Prof. Luigi RIZZO, rizzo at iet.unipi.it  . Dip. di Ing. dell'Informazione
>>  http://www.iet.unipi.it/~luigi/        . Universita` di Pisa
>>  TEL      +39-050-2217533               . via Diotisalvi 2
>>  Mobile   +39-338-6809875               . 56122 PISA (Italy)
>> -----------------------------------------+-------------------------------
>
>

