Introducing netmap: line-rate packet send/receive at 10Gbit/s
rizzo at iet.unipi.it
Fri Jun 3 13:42:28 UTC 2011
On Fri, Jun 03, 2011 at 10:20:50AM -0300, Patrick Tracanelli wrote:
> On 02/06/2011, at 19:31, Luigi Rizzo wrote:
> > Hi,
> > we have recently worked on a project, called netmap, which lets
> > FreeBSD send/receive packets at line rate even at 10 Gbit/s with
> > very low CPU overhead: one core at 1.33 GHz does 14.88 Mpps with a
> > modified ixgbe driver, which gives plenty of CPU cycles to handle
> > multiple interfaces and/or do useful work (packet forwarding, analysis, etc.)
> > You can find full documentation, source code, and even a picobsd image at
> > http://info.iet.unipi.it/~luigi/netmap/
> > The system uses memory-mapped packet buffers to reduce the cost of
> > data movement, but this alone would not be enough to make it useful
> > or novel. Netmap uses many other small but important tricks to make
> > the system fast, safe and easy to use, and to support transmission,
> > reception, and communication with the host stack.
> > You can see full details in documentation at the above link.
> > Feedback welcome.
> Dear Rizzo,
> Which packet length did you transmit at 14.8 Mpps? I could not find it in figure 5 or the description. Did you test TCP?
The paper gives all details in sec. 6; please read it carefully.
The test is done with individual streams of packets (either tx or rx);
the protocol is irrelevant.
In fig. 5 of the paper, packet size is on the X axis and pps is on
the Y axis. You get the maximum pps rate with min-sized packets
(60 bytes + 4-byte CRC).
If you want to do both send and receive, perhaps on multiple
interfaces, you should make sure there are enough resources (CPU
cycles, bus bandwidth, bus transactions, etc.) for the task. In my
tests, CPU does not seem to be a problem (I can send about 27 Mpps
with just one core and two interfaces), but bus cycles perhaps are
(e.g. receiving with some of the "bad" packet sizes also slows down
the sender on the same bus, no matter how many cores I put in).
> How did you perform this test? Multihomed with forwarding between the NICs, or did you generate the data from userland to the wire and let it flow? If not, tell me how you believe netmap may impact our current forwarding rate (especially the pps limit) and how FreeBSD should be changed to take advantage of netmap for packet forwarding.
> Thank you for your time, code and all the stuff in between :)
> Patrick Tracanelli
More information about the freebsd-current mailing list