Supported NICs

Jason Bacon jwbacon at tds.net
Thu Oct 17 13:25:21 UTC 2013


It's a Dell PowerEdge R410, which is 1U.  Only one PCIe slot.  I'd have
to go into the BIOS settings to see if NUMA is even enabled.  There's a
setting that toggles between node interleaving and NUMA, and I suspect
we have interleaving enabled.
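
If I remember right, the BIOS choice shows up in what ACPI publishes,
so something like this should reveal whether NUMA is enabled without a
reboot (untested on this box; acpidump is in the base system):

acpidump -t | grep -i srat

My understanding is that an SRAT table is only published when node
interleaving is disabled, i.e. when the OS is handed the NUMA layout.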

In any case, I configured and tuned an NFS server on the BSD machine and 
was able to saturate the local hard disk on NFS reads from a CentOS 
client, so I know ib0 can do at least 1.2 gigabits/sec outgoing, much
better than it did with iperf.  I'm going to repeat the test with a RAM 
disk and eventually with a good RAID to see what it can really do for 
NFS.  That's what we're ultimately interested in.
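
For the RAM disk test, the plan is roughly the following (a sketch,
untested; size and paths are arbitrary):

# Create and mount an 8 GB swap-backed RAM disk
mdconfig -a -t swap -s 8g -u 1
newfs -U /dev/md1
mkdir -p /mnt/ramdisk
mount /dev/md1 /mnt/ramdisk

# Export it to the IB subnet and pick up the new export
echo '/mnt/ramdisk -network 10.1.2.0 -mask 255.255.255.0' >> /etc/exports
service mountd reload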

I'll still try to figure out what's going on with iperf.  It seems to
be an anomaly, but it must be indicating *something* that should 
probably be addressed.
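
The first knobs I plan to twist are socket buffer and message sizes,
in case iperf's defaults are just a bad fit for IPoIB (values below
are guesses):

iperf -c 10.1.2.39 -w 1M            # larger TCP window
iperf -c 10.1.2.39 -P 4             # parallel streams
iperf -c 10.1.2.39 -l 128K          # larger application writes
sysctl net.inet.tcp.sendbuf_max     # FreeBSD's send buffer cap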

I also plan to try a better HCA than this old InfiniHost I scrounged
up.  Motorola is giving us a rack full of servers with IB in the next 
couple of weeks.  I don't know the details yet, but I'm hoping it has 
ConnectX cards...
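
When they arrive, something like this should identify the exact HCA
model (pciconf is in the base system; I haven't checked which of the
OFED diagnostics get built):

pciconf -lv | grep -B 4 -i mellanox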

Thanks again for your assistance,

     JB

On 10/17/13 3:49 AM, Oded Shanoon wrote:
> Are you sure the card is connected in a slot attached to the same NUMA node as cpu0?
> How many cores per CPU do you have?
>
>
>
> Regards,
>
> Oded Shanoon
> OFED-FreeBSD Team Leader
> Mellanox Technologies, Raanana
>
>
> -----Original Message-----
> From: owner-freebsd-infiniband at freebsd.org [mailto:owner-freebsd-infiniband at freebsd.org] On Behalf Of Jason Bacon
> Sent: Wednesday, October 16, 2013 8:22 PM
> To: Anthony Cornehl; freebsd-infiniband at freebsd.org
> Subject: Re: Supported NICs
>
> On 10/14/13 22:53, Anthony Cornehl wrote:
>>
>> On Oct 14, 2013 11:36 AM, "Jason Bacon" <bacon at uwm.edu> wrote:
>>>
>>> Some initial test results...
>>>
>>> I installed an old InfiniHost DDR HCA in one of our compute nodes
>>> running FreeBSD 9.1.
>>> RHEL nodes are using QLogic IB HCAs.
>>>
>>> 10.1.1 is gigabit Ethernet, 10.1.2 is IB.
>>>
>>> Running iperf server on FreeBSD and client on one of our RHEL nodes
>>> shows OK performance:
>>> bacon at infinibsd:/home/bacon % iperf -s
>>> ------------------------------------------------------------
>>> Server listening on TCP port 5001
>>> TCP window size: 64.0 KByte (default)
>>> ------------------------------------------------------------
>>> [  4] local 10.1.1.140 port 5001 connected with 10.1.1.39 port 35947
>>> [ ID] Interval       Transfer     Bandwidth
>>> [  4]  0.0-10.0 sec  1.10 GBytes   947 Mbits/sec
>>>
>>> [  5] local 10.1.2.140 port 5001 connected with 10.1.2.39 port 60090
>>> [  5]  0.0-10.0 sec  7.20 GBytes  6.18 Gbits/sec
>>>
>>> RHEL to RHEL gives us about 8 Gbits/sec.
>>>
>>> Iperf server on RHEL and client on FreeBSD shows very poor
>>> performance for IB, while GigE is fine:
>>> bacon at infinibsd:/home/bacon % iperf -c 10.1.1.39
>>> ------------------------------------------------------------
>>> Client connecting to 10.1.1.39, TCP port 5001
>>> TCP window size: 32.8 KByte (default)
>>> ------------------------------------------------------------
>>> [  3] local 10.1.1.140 port 60066 connected with 10.1.1.39 port 5001
>>> [ ID] Interval       Transfer     Bandwidth
>>> [  3]  0.0-10.0 sec  1.10 GBytes   943 Mbits/sec
>>>
>>> bacon at infinibsd:/home/bacon % iperf -c 10.1.2.39
>>> ------------------------------------------------------------
>>> Client connecting to 10.1.2.39, TCP port 5001
>>> TCP window size: 32.0 KByte (default)
>>> ------------------------------------------------------------
>>> [  3] local 10.1.2.140 port 14608 connected with 10.1.2.39 port 5001
>>> [ ID] Interval       Transfer     Bandwidth
>>> [  3]  0.0-32.7 sec   768 KBytes   192 Kbits/sec
>>>
>>> Any ideas what might cause this?
>>>
>> - Does the performance change when you pin iperf to cpu0 on the
>> FreeBSD host? (Assuming NUMA)
>>
> Unfortunately, no:
>
> FreeBSD infinibsd bacon ~ 39: cpuset -l 0 iperf -c 10.1.2.116
> ------------------------------------------------------------
> Client connecting to 10.1.2.116, TCP port 5001
> TCP window size: 88.0 KByte (default)
> ------------------------------------------------------------
> [  3] local 10.1.2.140 port 50193 connected with 10.1.2.116 port 5001
> [ ID] Interval       Transfer     Bandwidth
> [  3]  0.0-32.7 sec   768 KBytes   192 Kbits/sec
>
> For verification, "top" showed iperf consistently on CPU 0.
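>
> Next on my list is to check where the HCA's interrupts are landing,
> in case they're serviced by a different package than iperf (from
> memory, untested here):
>
> vmstat -i                # look for the HCA's interrupt rows
> cpuset -g -x <irq>       # which CPUs may service a given IRQ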
>> - Is there a MTU mismatch between the interfaces? (Assuming
>> connected-mode is broken)
>>
> Nope:
>
> FreeBSD:
>
> ib0: flags=8043<UP,BROADCAST,RUNNING,MULTICAST>  metric 0 mtu 65520
>       options=80018<VLAN_MTU,VLAN_HWTAGGING,LINKSTATE>
>       lladdr 80.0.4.4.fe.80.0.0.0.0.0.0.0.2.c9.2.0.23.15.d1
>       inet 10.1.2.140 netmask 0xffffff00 broadcast 10.1.2.255
>       inet6 fe80::226:b9ff:fe2e:207e%ib0 prefixlen 64 scopeid 0xc
>       nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
>
> RHEL:
>
> ib0       Link encap:InfiniBand  HWaddr 80:00:00:02:FE:80:00:00:00:00:00:00:00:00:00:00:00:00:00:00
>             inet addr:10.1.2.116  Bcast:10.1.2.255  Mask:255.255.255.0
>             inet6 addr: fe80::211:7500:ff:5f80/64 Scope:Link
>             UP BROADCAST RUNNING MULTICAST  MTU:65520  Metric:1
>             RX packets:656228332 errors:0 dropped:0 overruns:0 frame:0
>             TX packets:631393406 errors:0 dropped:5026 overruns:0 carrier:0
>             collisions:0 txqueuelen:256
>             RX bytes:1096675565577 (1021.3 GiB)  TX bytes:4942093939442 (4.4 TiB)
>
> Also verified with iperf's -m flag (which reports the MSS in use) on both server and client.
>
>> - What is the profile when going from FreeBSD to FreeBSD? (Assuming
>> there are some cross-IB-stack discrepancies)
>>
> Can't tell right now.  We only have the one Mellanox card at the moment, so I can only configure one FreeBSD host with IB.  Our RHEL nodes are using QLogic cards.  I'll try it out as soon as I get my hands on another usable HCA.
>
> I get better performance out of an scp transfer, so this appears to be some sort of interaction between iperf and the IB stack.  I'll play around with some other benchmarks and report my findings.
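>
> For a second opinion that takes iperf out of the picture entirely,
> I'll probably push zeros through nc and let dd report the rate
> (sketch; the port number is arbitrary):
>
> # On the RHEL node:
> nc -l 5002 > /dev/null
> # On the FreeBSD node:
> dd if=/dev/zero bs=1m count=1024 | nc 10.1.2.116 5002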
>
> Thanks for the feedback,
>
>       JB
>>> I'm going to install 9.2-RELEASE and retest in any case, plus
>>> explore the ib config tools, but any feedback in the meantime would be
>>> appreciated.
>>> Thanks,
>>>
>>>      JB
>>>
>>>
>>> On 10/07/13 01:51, Oded Shanoon wrote:
>>>> Hi Jason,
>>>>
>>>> IB support in 9.2 is supposed to be stable enough.
>>>> Please note that Mellanox started supporting FreeBSD only recently.
>>>> The driver in 9.2 was ported from OFA-1.5.3 by someone from Isilon
>>>> (Jeff Roberson).
>>>> Since we started our involvement we have contributed some fixes to
>>>> that driver (which were submitted into 9.2).
>>>> We also mapped various issues which need to be fixed in the future.
>>>> We are now working on a major "face lift" of the driver - making it
>>>> much more stable, with improved performance and features.
>>>> Regards,
>>>>
>>>> Oded Shanoon
>>>> OFED-FreeBSD Team Leader
>>>> Mellanox Technologies, Raanana
>>>>
>>>>
>>>> -----Original Message-----
>>>> From: owner-freebsd-infiniband at freebsd.org
>>>> [mailto:owner-freebsd-infiniband at freebsd.org] On Behalf Of Jason Bacon
>>>> Sent: Monday, September 16, 2013 4:32 PM
>>>> To: Anthony Cornehl
>>>> Cc: freebsd-infiniband at freebsd.org
>>>> Subject: Re: Supported NICs
>>>>
>>>>
>>>> Thanks Anthony & Oded!
>>>>
>>>>
>>>> Would you say that IB support in 9.2 is stable enough for a
>>>> production HPC file server?
>>>> Will do plenty of my own testing first, of course.
>>>>
>>>> Regards,
>>>>
>>>>        Jason
>>>>
>>>> On 09/15/13 22:38, Anthony Cornehl wrote:
>>>>>
>>>>> On Sep 15, 2013 8:17 PM, "Anthony Cornehl" <accornehl at gmail.com> wrote:
>>>>>>
>>>>>> On Sep 15, 2013 1:20 PM, "Jason Bacon" <jwbacon at tds.net> wrote:
>>>>>>>
>>>>>>> Is there a list of supported IB NICs out there somewhere?
>>>>>>>
>>>>>>> I followed the wiki instructions for rebuilding with IB support
>>>>>>> and now have mlx4ib, mlxen, etc.
>>>>>>> Was hoping there would be man pages for the drivers that list
>>>>>>> known working cards, but there don't seem to be.  I'm hoping to
>>>>>>> test a file server using IPoIB and possibly roll a FreeNAS ISO
>>>>>>> with IB support if it works out.
>>>>>>> Thanks,
>>>>>>>
>>>>>>> --
>>>>>>> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>>>>>>     Jason W. Bacon
>>>>>>> jwbacon at tds.net
>>>>>>>     Circumstances don't make a man:
>>>>>>>     They reveal him.
>>>>>>> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>>>>>>
>>>>>>> _______________________________________________
>>>>>>> freebsd-infiniband at freebsd.org mailing list
>>>>>>> http://lists.freebsd.org/mailman/listinfo/freebsd-infiniband
>>>>>>> To unsubscribe, send any mail to
>>>>>>> "freebsd-infiniband-unsubscribe at freebsd.org"
>>>>>> Hey Jason,
>>>>>>
>>>>>> Mellanox ConnectX series cards are the only ones supported
>>>>>> currently...
>>>>>> http://www.mellanox.com/page/infiniband_cards_overview
>>>>>>
>>>>>> Don't forget to compile with IPoIB as well, since the IPoIB
>>>>>> support is compiled into the driver, unlike in Linux.
>>>>>> Just be aware that IPoIB performance is also improved by enabling
>>>>>> connected mode when you compile the kernel module. The IB code in
>>>>>> FreeBSD is a few years older than what is in Linux, but the
>>>>>> following forum thread is probably relevant...
>>>>>>
>>>>>> http://forums.servethehome.com/networking/1554-infiniband-ipoib-performance-problems.html
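>>>>>>
>>>>>> From memory, the relevant kernel config lines look something like
>>>>>> this (untested; the wiki has the authoritative list):
>>>>>>
>>>>>> options OFED
>>>>>> options IPOIB_CM
>>>>>> device  mlx4ib
>>>>>> device  mlxen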
>>>>>> Cheers!
>>>>> It also appears that Jeff fixed SDP a few weeks ago, which is more
>>>>> capable of reaching line-speed for IB-connected devices.
>>>>>
>>>>> http://pkg-ofed.alioth.debian.org/howto/infiniband-howto-7.html
>>>>>
>>>>> Cheers!
>>>>>
>>>> --
>>>> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>>>      Jason W. Bacon
>>>> jwbacon at tds.net
>>>>
>>>>      Circumstances don't make a man:
>>>>      They reveal him.
>>>> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>>>
>>>> _______________________________________________
>>>> freebsd-infiniband at freebsd.org mailing list
>>>> http://lists.freebsd.org/mailman/listinfo/freebsd-infiniband
>>>> To unsubscribe, send any mail to
>>>> "freebsd-infiniband-unsubscribe at freebsd.org"
>>>
>>>
>>> --
>>>
>>> -------------------------------------
>>>    Jason W. Bacon
>>>    Systems Programmer
>>>    Research Computing Support
>>>    University of Wisconsin Milwaukee
>>> bacon at uwm.edu
>>> -------------------------------------
>>>
>>>
>


-- 
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   Jason W. Bacon
   jwbacon at tds.net

   Circumstances don't make a man:
   They reveal him.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


