Can netmap be more efficient when it just does bridging between NIC and Linux kernel?

Vincenzo Maffione v.maffione at gmail.com
Fri Dec 9 10:50:09 UTC 2016


Hi,


2016-12-09 10:02 GMT+01:00 Xiaoye Sun <Xiaoye.Sun at rice.edu>:

> Hi Vincenzo,
>
> Thank you for your suggestion. I think attaching only a subset of NIC
> queues to netmap is a brilliant idea!!!
>
> I am going through the instructions on the blog post you sent me:
> https://blog.cloudflare.com/single-rx-queue-kernel-bypass-with-netmap/
>
> Now, I can use the "ethtool -N eth3" command (Configure Rx network flow
> classification) to set up filters so that type 1 data goes to the netmap
> NIC queues and type 2 data goes to the other queues at the receiver side.
>
> However, it seems that my NIC (Intel 10G IXGBE) does not support an
> indirection table: when I use the command "ethtool -X eth3 weight 0 1 1 1",
> I get an error message like
>   Cannot get RX flow hash indirection table size: Operation not supported
> so I cannot isolate the queues given to netmap from the kernel.
>

I see, but I have never personally tried these hardware flow steering
configurations, so I cannot help here. Maybe you can ask the Cloudflare
folks who did that setup; they may have run into similar problems.
In the end these are hardware-specific features, not related to netmap.


>
> In that case, outgoing packets from the kernel stack get stuck and are never
> sent out, since (I guess) these packets may be directed to the TX NIC queues
> that have been given to netmap.
>

Netmap intercepts all the packets that the kernel wants to transmit on a
NIC opened in netmap mode (to be precise, in general only a subset of the
TX/RX NIC queues is opened in netmap mode). However, by the time the TX
packet gets intercepted, the kernel has already chosen the designated TX
(hardware) NIC queue for the packet: if the designated queue is opened in
netmap mode, the packet is put in the host RX ring (and waits there until a
netmap application consumes it); on the other hand, if the designated queue
is not open in netmap mode, netmap just lets it go through transparently,
as if the packet had never been intercepted.

Pay attention to the nm_open() modifiers: the T and R modifiers can be used
to open only the TX or only the RX queues in netmap mode.
So for instance:
   "netmap:ethX" --> opens all TX and RX queues in netmap mode; all packets
that the network stack tries to transmit will end up in the host RX ring
   "netmap:ethX/T" --> opens only the TX queues in netmap mode; all packets
that the network stack tries to transmit will end up in the host RX ring
   "netmap:ethX/R" --> opens only the RX queues in netmap mode; this means
that the network stack can still transmit on ethX, and its packets won't
end up in the host RX ring

Of course you can also play with the ring ids, to open just specific
queues: e.g. "netmap:ethX-3/T", "netmap:ethX-0/R", etc.


Cheers,
  Vincenzo


> I am wondering if there is a way to work around this issue.
>
> Best,
> Xiaoye
>
> On Thu, Dec 8, 2016 at 5:39 AM, Vincenzo Maffione <v.maffione at gmail.com>
> wrote:
>
>> Hi,
>>
>> 2016-12-07 2:36 GMT+01:00 Xiaoye Sun <Xiaoye.Sun at rice.edu>:
>>
>>> Hi,
>>>
>>> I am wondering if there is a way to reduce the CPU usage of a netmap
>>> program similar to the bridge.c example.
>>>
>>> In my use case, I have a distributed application/framework (e.g. Spark
>>> or Hadoop) running on a cluster of machines (each machine runs Linux and
>>> has an Intel 10Gbps NIC). The application is both computation and
>>> network intensive, so there are a lot of data transfers between
>>> machines. I divide the data into two types (type 1 and type 2). Packets
>>> of type 1 data are sent through netmap (these packets don't go through
>>> the Linux network stack). Packets of type 2 data are sent through the
>>> Linux network stack. Both type 1 and type 2 data can be small or large.
>>>
>>> My netmap program runs on all the machines in the cluster. It processes
>>> the packets of type 1 data (it creates, sends and receives them) and
>>> forwards packets of type 2 data between the NIC and the kernel by
>>> swapping the pointer to the NIC slot with the pointer to the kernel
>>> stack slot (similar to the bridge.c example in the netmap repository).
>>>
>>> With my netmap program running on the machines, for an application that
>>> has no type 1 data (so the netmap program behaves like a bridge that
>>> only does slot pointer swapping), the total running time of the
>>> application is longer than in the case where no netmap program runs on
>>> the machines.
>>>
>>
>> Yes, but this is not surprising. If the only thing your netmap
>> application is doing is forwarding all the traffic between the network
>> stack and the NIC, then your netmap application is a process doing a
>> useless job: netmap is intercepting packets from the network stack and
>> reinjecting them back into the network stack path (where they go on as if
>> they had never been intercepted). It's just wasting resources. Netmap is
>> designed to let netmap applications use the NICs efficiently and/or talk
>> efficiently to each other (e.g. using the VALE switch or the
>> virtualization extensions).
>> The "host rings" are instead useful in some use cases, for example: (1)
>> you want to implement a high-performance input packet filter for your
>> network stack, one that can cope with DDoS attacks: your netmap
>> application would receive something like 10 Mpps from the NIC, drop 99%
>> of it (once it realizes it is not legitimate traffic) and forward the
>> remaining packets to the network stack; (2) you want to handle (forward,
>> drop, modify, etc.) most of the traffic in your netmap application, but
>> there are some low-bandwidth protocols that you want to handle with
>> standard socket applications (e.g. SSH). See the sketch below.
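>>
>> To make these use cases concrete, here is a rough, untested sketch of the
>> zero-copy forwarding step used by bridge.c: it moves a batch of packets
>> from a NIC RX ring to the host TX ring (i.e. towards the network stack)
>> by swapping buffer indices. The two rings would be obtained with
>> NETMAP_RXRING()/NETMAP_TXRING() on a port opened with something like
>> nm_open("netmap:ethX*", ...), which exposes both the hardware and the
>> host rings:
>>
>>     #include <stdint.h>
>>     #include <net/netmap_user.h>
>>
>>     /* Forward up to "limit" packets from rxring to txring by swapping
>>      * buffer indices (zero copy). Returns the number of packets moved. */
>>     static unsigned int
>>     forward_ring(struct netmap_ring *rxring, struct netmap_ring *txring,
>>                  unsigned int limit)
>>     {
>>         unsigned int j = rxring->head, k = txring->head, m = 0;
>>
>>         while (m < limit && j != rxring->tail && k != txring->tail) {
>>             struct netmap_slot *rs = &rxring->slot[j];
>>             struct netmap_slot *ts = &txring->slot[k];
>>             uint32_t idx = ts->buf_idx;
>>
>>             /* swap the buffers instead of copying the payload */
>>             ts->buf_idx = rs->buf_idx;
>>             rs->buf_idx = idx;
>>             ts->len = rs->len;
>>             /* tell the kernel the buffers have changed */
>>             ts->flags |= NS_BUF_CHANGED;
>>             rs->flags |= NS_BUF_CHANGED;
>>
>>             j = nm_ring_next(rxring, j);
>>             k = nm_ring_next(txring, k);
>>             m++;
>>         }
>>         rxring->head = rxring->cur = j;
>>         txring->head = txring->cur = k;
>>         return m;   /* a later poll()/ioctl() pushes the updates to the kernel */
>>     }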
>>
>>
>>>
>>> It seems to me that the netmap program either slows down the network
>>> transfer for type 2 data, or it eats up too many CPU cycles and competes
>>> with the application process. However, with my netmap program running,
>>> iperf can reach 10Gbps bandwidth with 40-50% CPU usage on the netmap
>>> program (the netmap program is doing pointer swapping for iperf packets).
>>> I also found that after each poll returns, most of the time the program
>>> might swap just one pointer, so there is a lot of system call overhead.
>>>
>>
>> This is also not surprising, since iperf is probably generating large
>> packets (1500 bytes or more). As a consequence, the packet rate is
>> something like 800 Kpps, which is not extremely high (netmap applications
>> can work with workloads of 5, 10, 20 or more Mpps). Since the packet rate
>> is not high, the interval between two packet arrivals is greater than the
>> time needed to do a poll()/ioctl() syscall and process the packet, and so
>> the batches don't get formed.
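>>
>> (Back-of-the-envelope check: a 1500-byte frame occupies roughly 1538
>> bytes on the wire including Ethernet framing, so at 10 Gbps the rate is
>> about 10^10 / (8 * 1538) ~= 810 Kpps, i.e. one packet every ~1.2 us,
>> which is comparable to the cost of a poll() wakeup plus the per-packet
>> processing.)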
>>
>>
>>> Can anybody help me diagnose the source of the problem, or is there a
>>> better way to write such a program?
>>
>>
>>
>>> I am wondering if there is a way to tune the configuration so that the
>>> netmap program won't take up too much extra CPU when it runs like the
>>> bridge.c program.
>>>
>>
>> The point is that when you have only type 2 data you shouldn't use
>> netmap at all, as it does not make sense. Unfortunately, whether packet
>> batches (with more than one packet) get formed or not depends on the
>> external traffic input patterns: it's basically a producer/consumer
>> problem, and there are no tunables for this. One thing you can do is
>> rate-limit the calls to poll()/ioctl() in order to artificially create
>> the batches (see the sketch below); in this way you trade a bit of
>> latency for energy efficiency.
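>>
>> A rough, untested sketch of that rate-limiting idea (BATCH_INTERVAL_US is
>> a made-up knob, not a netmap parameter): sleep for a fixed interval
>> between syscalls, so that several packets accumulate in the rings before
>> each wakeup:
>>
>>     #include <poll.h>
>>     #include <unistd.h>
>>     #define NETMAP_WITH_LIBS
>>     #include <net/netmap_user.h>
>>
>>     #define BATCH_INTERVAL_US 100   /* ~80 packets per batch at 800 Kpps */
>>
>>     /* Pace the syscalls instead of calling poll() once per packet. */
>>     static void paced_loop(struct nm_desc *d)
>>     {
>>         struct pollfd pfd = { .fd = d->fd, .events = POLLIN };
>>
>>         for (;;) {
>>             usleep(BATCH_INTERVAL_US);  /* let a batch build up in the rings */
>>             poll(&pfd, 1, 0);           /* non-blocking: just sync the rings */
>>             /* ... now drain everything that is available, e.g. with
>>              * nm_nextpkt() or by swapping slots as in bridge.c ... */
>>         }
>>     }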
>>
>> Another approach you may be interested in is using NIC hardware features
>> like "flow director" or "receive flow steering" to classify input packets
>> and steer different classes into specific NIC queues. In this way you
>> could open with netmap just a subset of the NIC queues (the type 1 data
>> traffic), and let the network stack directly process the traffic on the
>> other queues (type 2 data). There are some blog posts about this kind of
>> setup, here is one:
>> https://blog.cloudflare.com/single-rx-queue-kernel-bypass-with-netmap/
>>
>> Cheers,
>>   Vincenzo
>>
>>>
>>>
>>> Best,
>>> Xiaoye
>>> _______________________________________________
>>> freebsd-net at freebsd.org mailing list
>>> https://lists.freebsd.org/mailman/listinfo/freebsd-net
>>> To unsubscribe, send any mail to "freebsd-net-unsubscribe at freebsd.org"
>>>
>>
>>
>>
>> --
>> Vincenzo Maffione
>>
>
>


-- 
Vincenzo Maffione

