ix (Intel) vs mlxen (Mellanox) 10Gb performance

Hans Petter Selasky hps at selasky.org
Thu Oct 8 09:32:42 UTC 2015


Hi,

I've now MFC'ed r287775 to 10-stable and 9-stable. I hope this
resolves the issues with m_defrag() being called on too-long mbuf
chains due to an off-by-one in the driver TSO parameters, and that
these parameters will be easier to maintain in the future.
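To illustrate the failure mode (a hedged sketch, not the actual
driver code): a driver's transmit path typically counts the mbufs in
a TSO chain against the hardware's segment limit and falls back to
m_defrag() when the chain is too long. If the TSO parameters
advertised to the stack are off by one relative to that check,
otherwise-valid chains get defragmented on every transmit. A minimal
userland sketch, with a simplified mbuf structure standing in for
the kernel's and an assumed limit of 24 segments:

#include <stdio.h>

/* Simplified stand-in for the kernel's struct mbuf chain. */
struct mbuf {
        struct mbuf *m_next;
        int m_len;
};

/* Count the DMA segments a chain would need, assuming one
 * segment per mbuf (the same assumption made for mlxen below). */
static int
count_segments(const struct mbuf *m)
{
        int n = 0;

        for (; m != NULL; m = m->m_next)
                n++;
        return (n);
}

int
main(void)
{
        const int hw_limit = 24;        /* hypothetical HW limit */
        struct mbuf chain[24];
        int i, segs;

        /* Build a chain that exactly fills the hardware limit. */
        for (i = 0; i < 24; i++) {
                chain[i].m_next = (i < 23) ? &chain[i + 1] : NULL;
                chain[i].m_len = 2048;
        }
        segs = count_segments(&chain[0]);

        /* Correct check: defrag only when the limit is exceeded. */
        printf("defrag needed (correct check):    %s\n",
            segs > hw_limit ? "yes" : "no");

        /* Off-by-one: advertising one segment too few to the stack
         * (equivalent to checking with >=) defragments chains that
         * the hardware could in fact handle. */
        printf("defrag needed (off-by-one check): %s\n",
            segs >= hw_limit ? "yes" : "no");
        return (0);
}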

Some comments were made that we might want an option to select
whether the IP header should be counted against the TSO segment
limit or not. Certain network drivers require the whole ETH/TCP/IP
header to be copied into a separate memory area and can then handle
one more data payload mbuf per TSO packet; others require DMA-ing of
the whole mbuf TSO chain. I think it is acceptable to leave one
TX-DMA segment slot free in the case where 2K mbuf clusters are used
for TSO.

From my experience the limitation typically kicks in when 2K mbuf
clusters are used for TSO instead of 4K mbuf clusters: a 65536-byte
TSO payload needs 65536 / 4096 = 16 clusters at 4K, but 65536 / 2048
= 32 clusters at 2K. If an ethernet hardware driver is limited to 24
data segments (mlxen), and assuming each mbuf represents a single
segment, then if the majority of the mbufs being transmitted are 2K
clusters, reserving one of the 24 slots costs us a small amount,
1/24 = 4.2%, of TX capability per TSO packet. From what I've seen
using iperf, which calls m_uiotombuf(), which in turn calls
m_getm2(), MJUMPPAGESIZE'ed mbuf clusters are preferred for large
data transfers, so this issue should only show up when TCP_NODELAY
is set on the socket and the application's writes are small. If an
application writes small amounts of data per send() system call,
degraded performance is expected anyway.
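For concreteness, here is the arithmetic above as a small sketch;
the 24-segment limit is the mlxen figure and the reserved slot is
the one proposed above:

#include <stdio.h>

int
main(void)
{
        const int tso_max = 65536;      /* maximum TSO payload */
        const int hw_segs = 24;         /* mlxen data segment limit */

        /* Clusters needed to hold a full TSO payload. */
        printf("4K clusters per TSO packet: %d\n",
            tso_max / 4096);                            /* 16 */
        printf("2K clusters per TSO packet: %d\n",
            tso_max / 2048);                            /* 32 */

        /* With 2K clusters the chain (32) already exceeds the
         * limit (24), so reserving one of the 24 slots costs one
         * cluster's worth of payload per TSO packet. */
        printf("capacity lost per packet:   1/%d = %.1f%%\n",
            hw_segs, 100.0 / hw_segs);                  /* 4.2% */
        return (0);
}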

Please file a PR if it becomes an issue.

Someone asked me to MFC r287775 to the 10.X release as well. Is this
still required?

--HPS

