basic questions on using Mellanox IB/40GbE converged cards

John Jasen jjasen at gmail.com
Tue May 27 13:46:45 UTC 2014


I'm not quite sure if this belongs in freebsd-infiniband or another
list, such as freebsd-net or -questions. If it should be filed
elsewhere, my apologies.

I am attempting to prototype a 40GbE router/packet filter using FreeBSD
10.0 as the base operating system; the test platform is a Dell R820.

I have three dual-port Mellanox cards in the system (one a ConnectX-3,
the others OEM-branded) and am now attempting performance tests.

I'm hoping to get answers to the following questions:

A) As the cards come up, each interface logs "64B EQEs/CQEs supported
by the device but not enabled". My googling has not turned up anything
FreeBSD-specific on this.

Can 64B EQEs and CQEs be enabled under FreeBSD, and if so, how? Will
enabling them help performance across multiple cards?
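For comparison, Linux controls this through the mlx4_core module
parameter enable_64b_cqe_eqe. I don't know whether the FreeBSD driver
exposes an equivalent knob, but if it does, I'd expect it to be a
loader tunable, something along these lines (the tunable name below is
purely a guess modeled on the Linux parameter):

    # look for any 64B CQE/EQE knob the driver registers
    sysctl -a | grep -i mlx4
    # if such a tunable exists, set it in /boot/loader.conf, e.g.:
    hw.mlx4.enable_64b_cqe_eqe="1"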

B) I also see "Using 5 TX rings" and "Using 2 RX rings" as the cards
come up.

Is this tunable under FreeBSD?

If it is tunable, should it be rebalanced for a router workload, say
4 TX and 3 RX rings?

And if it should be tuned, how is that accomplished?

It appears hw.mlxen$N.conf.rx_rings and hw.mlxen$N.conf.tx_rings are
read-only sysctls. Does that mean they have to be set as loader
tunables in /boot/loader.conf?
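If they are indeed loader tunables, I assume the entries would look
something like this (mlxen0 and the counts here are just examples):

    # /boot/loader.conf -- assuming rx_rings/tx_rings are boot-time tunables
    hw.mlxen0.conf.tx_rings="4"
    hw.mlxen0.conf.rx_rings="3"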

C) Network tuning guides for FreeBSD are easy to find; guides that
discuss 40GbE are not. Are there specific recommendations, either
general or specific to the mlx modules, that I should follow to get
higher performance out of these cards?
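For reference, the generic knobs the usual 10GbE guides suggest look
like the following; I don't know which of them still matter, or are
sized correctly, at 40GbE:

    # /etc/sysctl.conf -- generic settings lifted from 10GbE tuning guides
    kern.ipc.maxsockbuf=16777216       # allow larger socket buffers
    net.inet.tcp.sendbuf_max=16777216
    net.inet.tcp.recvbuf_max=16777216
    # a router needs forwarding on (or gateway_enable="YES" in rc.conf)
    net.inet.ip.forwarding=1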

D) In my tests I am seeing TCP retransmissions, which reduce
throughput. What can be done to mitigate them?
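By retransmissions I mean what shows up in the standard TCP counters,
e.g.:

    # snapshot the retransmit counters before and after a test run
    netstat -s -p tcp | grep -i retrans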

Thanks in advance for any assistance!

-- John Jasen (jjasen at gmail.com)

