bad throughput performance on multiple systems: Re: Fwd: Re: Disappointing packets-per-second performance results on a Dell PE R530

Slawa Olhovchenkov slw at zxy.spb.ru
Sun Mar 12 23:18:36 UTC 2017


On Sun, Mar 12, 2017 at 06:13:46PM -0400, John Jasen wrote:

> I think I am able to confirm Mr. Caraballo's findings.
> 
> I pulled a Dell PowerEdge 720 out of production, and upgraded it to
> 11-RELEASE-p8.
> 
> Currently, as in the R530, it has a single Chelsio T5-580, but has two
> v2 Intel E5-26xx CPUs versus the newer ones in the R530.
> 
> Both ports are configured for jumbo frames, and lro/tso are off. One is
> pointed at 172.16.2.0/24 as the load receivers; the other is pointed to
> 172.16.1.0/24 where the generators reside. Each side has 24 systems.
> 
> I've played around a little with the number of queues, cpuset interrupt
> binding, and net.isr values -- the only differences were going from
> pathetic scores (1.7 million packets-per-second) to absolutely pathetic
> (1.3 million when QPI was hit).
> 
> In these runs, it seems that no matter what we try on the system, not
> all the CPUs are engaged, and the receive queues are also unbalanced. As
> an example, in the last run, only 4 of the CPUs were engaged, and
> tracking rx queues using
> https://github.com/ocochard/BSDRP/blob/master/BSDRP/Files/usr/local/bin/nic-queue-usage,
> they ranged from 800k/second to 0/second, depending on the queue (this
> run used Chelsio defaults of 8 rx queues/16 tx queues). Interrupts also
> seem to confirm there is an imbalance, as current totals on the
> 'receive' chelsio port range from 935,000 to 9,200,000 (vmstat -ai).
> 
> Any idea what's going on?

What traffic did you generate (TCP? UDP? ICMP? other?), and what is
reported by dmesg | grep txq?
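That dmesg line normally carries the per-port queue counts; a sketch of
pulling them out, with the exact message wording assumed (it varies by
driver version) and the numbers illustrative:

```shell
# Hypothetical cxgbe boot message; the exact format varies by driver
# version, so treat this line as an assumed example.
line='cxl0: 16 txq, 8 rxq (NIC)'

# Extract the NIC txq/rxq counts from the message.
printf '%s\n' "$line" \
    | sed -E 's/.*: ([0-9]+) txq, ([0-9]+) rxq.*/txq=\1 rxq=\2/'
```

Comparing those counts against the number of queues that actually take
interrupts in vmstat -ai shows whether all configured queues are in use.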


More information about the freebsd-net mailing list