netmap and mlx4 driver status (linux)

HelpDesk SEALNET helpdesk at sealnet.de
Tue Jun 2 16:23:24 UTC 2015



 --
sent on the go.

> Am 02.06.2015 um 17:40 schrieb Adrian Chadd <adrian at freebsd.org>:
> 
> Hi,
> 
> You'll likely want to poke the Linux Mellanox driver maintainer for
> some help.
> 
> 
> 
> -adrian
> 
> 
>> On 1 June 2015 at 17:08, Blake Caldwell <caldweba at colorado.edu>
>> wrote:
>> Wondering if those experienced with other netmap drivers might be
>> able to comment on what is limiting the performance of mlx4. It seems
>> that the reason pkt-gen only reaches 2.4 Mpps with mlx4 at 40G is
>> that pkt-gen is saturating a core. This clearly shouldn’t be the
>> case, as evidenced by the netmap papers (14.8 Mpps at a 900 MHz
>> core). As would be expected, the output from ‘perf top’ shows that
>> sender_body and poll() are the largest userspace CPU hogs (measured
>> in % of samples, over 24 CPUs):
>> 
>> 29.65%  [netmap]               [k] netmap_poll
>> 12.47%  [mlx4_en]              [k] mlx4_netmap_txsync
>>  8.69%  libc-2.19.so           [.] poll
>>  6.15%  pkt-gen                [.] sender_body
>>  2.26%  [kernel]               [k] local_clock
>>  2.12%  [kernel]               [k] context_tracking_user_exit
>>  1.87%  [kernel]               [k] select_estimate_accuracy
>>  1.81%  [kernel]               [k] system_call
>> ….
>>  1.24%  [netmap]               [k] nm_txsync_prologue
>> ….
>>  0.63%  [mlx4_en]              [k] mlx4_en_arm_cq
>>  0.61%  [kernel]               [k] account_user_time
>> 
>> 
>> Furthermore, annotating the code in pkt-gen.c with utilization shows
>> that about 50% of sender_body is spent on this line while iterating
>> through the rings:
>>
>> https://github.com/caldweba/netmap/blob/master/examples/pkt-gen.c#L1091
>>
>>                        if (nm_ring_empty(txring))
>> 
>> Does this mean it is waiting for free slots most of the time, and
>> that increasing from 8 rings might help?
>> 
>> Here are the current module parameters in case they shed light on
>> the issue. Also, netmap config kernel messages are shown below.
>> 
>> Thanks in advance.
>> 
>> /sys/module/netmap/parameters/adaptive_io: 0
>> /sys/module/netmap/parameters/admode: 0
>> /sys/module/netmap/parameters/bridge_batch: 1024
>> /sys/module/netmap/parameters/buf_curr_num: 163840
>> /sys/module/netmap/parameters/buf_curr_size: 2048
>> /sys/module/netmap/parameters/buf_num: 163840
>> /sys/module/netmap/parameters/buf_size: 2048
>> /sys/module/netmap/parameters/default_
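
[Editorial note: parameters like the above are read-only snapshots under
sysfs; to change the tunables at module load time, a modprobe options file
can be used. The fragment below is illustrative only, echoing the values
already shown above rather than recommending new ones:]

```
# /etc/modprobe.d/netmap.conf -- illustrative, values taken from the
# sysfs snapshot above, not tuning recommendations
options netmap buf_num=163840 buf_size=2048 bridge_batch=1024
```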



More information about the freebsd-net mailing list