Advice on a multithreaded netisr patch?

Ivan Voras ivoras at
Sun Apr 5 05:21:00 PDT 2009


I'm developing an application that needs a high rate of small TCP
transactions on multi-core systems, and I'm hitting a limit where a
kernel task, usually swi:net (though it depends on the driver), saturates
one CPU at some transactions/s rate and caps any further performance
increase even though the other cores are 100% idle.

So I've got an idea and tested it out, but it fails in an unexpected
way. I'm not very familiar with the network code so I'm probably missing
something obvious. The idea was to locate where the packet processing
takes place and offload packets to several new kernel threads. I see
this can happen in several places - netisr, ip_input and tcp_input, and
I chose netisr because I thought maybe it would also help other uses
(routing?). Here's a patch against CURRENT:

It's fairly simple - it starts a configurable number of threads in
start_netisr(), assigns a circular queue to each, and modifies what I
think are the packet entry points to dispatch into those queues. I also
try to have TCP and UDP traffic from the same host+port processed by the
same thread. It has some rough edges but I think this is enough to test
the idea. I know there are several people officially working in this
area and I'm not an expert in it, so think of it as a weekend hack for
learning purposes :)
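To make the dispatch idea concrete, here is a minimal userspace sketch of
it - not the patch itself. It hashes a packet's source host+port to pick a
worker, so packets from one flow always land on the same thread's ring,
keeping per-connection processing ordered. All names (netd_hash, netq_*),
the thread count, and the ring size are illustrative assumptions, and the
single-producer ring omits the locking a real kernel version would need:

```c
/*
 * Sketch of flow-hashed dispatch into per-worker circular queues.
 * Names and sizes are hypothetical, not taken from the actual patch.
 */
#include <assert.h>
#include <stdint.h>

#define NETD_NTHREADS 4      /* number of worker threads (tunable) */
#define NETQ_SIZE     512    /* per-thread ring capacity, power of two */

/* Simplified stand-in for an mbuf, carrying only the flow identifiers. */
struct pkt {
	uint32_t src_ip;
	uint16_t src_port;
};

/*
 * Unlocked single-producer/single-consumer ring; a kernel version would
 * need a mutex or atomics between the netisr producer and the worker.
 */
struct netq {
	struct pkt	ring[NETQ_SIZE];
	unsigned	head;	/* next slot to enqueue */
	unsigned	tail;	/* next slot to dequeue */
};

static struct netq queues[NETD_NTHREADS];

/* Same host+port always maps to the same worker thread. */
static unsigned
netd_hash(uint32_t src_ip, uint16_t src_port)
{
	return ((src_ip ^ src_port) % NETD_NTHREADS);
}

/*
 * Returns 0 on success, -1 if the worker's ring is full (at which point
 * a real patch would have to drop the packet or dispatch it directly).
 */
static int
netq_enqueue(struct pkt *p)
{
	struct netq *q = &queues[netd_hash(p->src_ip, p->src_port)];

	if (q->head - q->tail == NETQ_SIZE)
		return (-1);
	q->ring[q->head++ % NETQ_SIZE] = *p;
	return (0);
}

/* Worker side: returns 0 and fills *p, or -1 if the ring is empty. */
static int
netq_dequeue(unsigned worker, struct pkt *p)
{
	struct netq *q = &queues[worker];

	if (q->head == q->tail)
		return (-1);
	*p = q->ring[q->tail++ % NETQ_SIZE];
	return (0);
}
```

Because the indices grow monotonically and NETQ_SIZE is a power of two,
head - tail gives the fill level even across unsigned wraparound.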

These parameters are needed in loader.conf to test it:

I expected things like contention in the upper layers (TCP) to keep
performance from improving one bit, but I can't explain what I'm getting
here. While testing the application on a plain kernel, I get approx.
100,000 - 120,000 packets/s per direction (judging by "netstat 1")
and a similar number of transactions/s in the application. With the
patch I get up to 250,000 packets/s in netstat (3 mtdispatch threads),
but for some weird reason the actual number of transactions processed by
the application drops to less than 1,000 at the beginning (~~ 30
seconds), then jumps to close to 100,000 transactions/s, with netstat
also showing a drop to roughly this number of packets/s. In the first
phase, the new threads (netd0..3) are using almost 100% CPU time; in the
second phase I can't see where the CPU time is going (using top).

I thought this had something to do with interrupt moderation on the NIC
(em) but that can't really explain it. The bad performance phase (though
not the jump) is also visible over the loopback interface.

Any ideas?


More information about the freebsd-net mailing list