FreeBSD IP Forwarding performance (question,
and some info) [7-stable, current, em, smp]
andre at freebsd.org
Mon Jul 7 16:20:10 UTC 2008
Bruce Evans wrote:
> On Mon, 7 Jul 2008, Andre Oppermann wrote:
>> to get a systematic analysis of the performance please do the following
>> tests and put them into a table for easy comparison:
>> 1. inbound pps w/o loss with interface in monitor mode (ifconfig em0
> I won't be running many of these tests, but found this one interesting --
> I didn't know about monitor mode. It gives the following behaviour:
> -monitor ttcp receiving on bge0 at 397 kpps: 35% idle (8.0-CURRENT) 13.6
> monitor ttcp receiving on bge0 at 397 kpps: 83% idle (8.0-CURRENT) 5.8
> -monitor ttcp receiving on em0 at 580 kpps: 5% idle (~5.2) 12.5
> monitor ttcp receiving on em0 at 580 kpps: 65% idle (~5.2) 4.8
> cm/p = k8-dc-misses (bge0 system)
> cm/p = k7-dc-misses (em0 system)
> So it seems that the major overheads are not near the driver (as I already
> knew), and upper layers are responsible for most of the cache misses.
> The packet header is accessed even in monitor mode, so I think most of
> the cache misses in upper layers are not related to the packet header.
> Maybe they are due mainly to perfect non-locality for mbufs.
Monitor mode doesn't access the payload packet header. It only looks
at the mbuf (which has a structure called the mbuf packet header). The
mbuf header is hot in the cache because the driver just touched it and
filled in the information. The packet content (the payload) is cold and
has just arrived via DMA into DRAM.