Just joined the infiniband club

Jason Bacon bacon4000 at gmail.com
Sun Sep 8 01:26:51 UTC 2019

On 2019-09-07 19:00, John Fleming wrote:
> Hi all, I've recently joined the club. I have two Dell R720s connected
> directly to each other. The card is a ConnectX-4. I was having a lot
> of problems with network drops. Where I'm at now is I'm running
> FreeBSD 12-STABLE as of a week ago, and the cards have been
> cross-flashed with OEM firmware (these are Lenovo, I think) and I'm no
> longer getting network drops. This box is basically my storage server.
> It's exporting a RAID 10 ZFS volume to a Linux (compute 19.04,
> 5.0.0-27-generic) box which is running GNS3 for a lab.
> So many questions... sorry if this is a bit rambly!
> From what I understand, this card is really 4 x 25 gig lanes. If I
> understand that correctly, then one data transfer should be able to do
> at most 25 gig (best case), correct?
> I'm not getting what the difference between connected mode and
> datagram mode is. Does this have anything to do with the card
> operating in InfiniBand mode vs Ethernet mode? FreeBSD is using
> modules compiled in connected mode with the shell script (which is
> really a bash script, not an sh script) from the freebsd-infiniband
> wiki page.

Nothing to do with Ethernet... Connected vs. datagram is an IPoIB 
setting, independent of whether the port itself runs in InfiniBand or 
Ethernet mode.  In datagram mode, packets go over an unreliable-datagram 
queue pair, so the interface MTU is capped at the IB link MTU minus the 
4-byte IPoIB header (2048 - 4 = 2044 with the usual default).  Connected 
mode sets up reliably-connected queue pairs per peer and allows a much 
larger interface MTU, which is what you want for throughput.

Google turned up a brief explanation here:


Those are my module-building scripts on the wiki.  What bash extensions 
did you see?
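On the Linux end, by contrast, the IPoIB mode is a runtime knob rather 
than a build-time one. A minimal sketch, assuming the stock in-kernel 
ipoib driver and an interface named ib0 (adjust the name for your box):

```shell
# Show which IPoIB mode the interface is in ("datagram" or "connected"):
cat /sys/class/net/ib0/mode

# Switch to connected mode and raise the MTU; some kernels require the
# interface to be down while the mode changes:
ip link set ib0 down
echo connected > /sys/class/net/ib0/mode
ip link set ib0 up
ip link set ib0 mtu 16384
```

On FreeBSD the choice is baked in when the modules are built (that's 
what the connected-mode build on the wiki is doing), so only the MTU is 
adjustable at runtime there, via ifconfig.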
> The Linux box complains if the MTU is over 2044, with a warning about
> expected multicast drops or something like that, so the MTU on both
> boxes is set to 2044.
> Everything I'm reading makes it sound like there is no RDMA support in
> FreeBSD, or maybe that was no NFS-over-RDMA support. Is that correct?
RDMA is inherent in InfiniBand AFAIK.  Last I checked, there was no 
support in FreeBSD for NFS over RDMA, but news travels slowly in this 
group, so a little digging might prove otherwise.
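If NFS over RDMA does land on the FreeBSD server side some day, the 
Linux client end is already there. A hypothetical mount would look 
roughly like this (server name and export path are placeholders, not 
from your setup):

```shell
# Hypothetical NFS-over-RDMA mount on the Linux client (see nfs(5)):
# proto=rdma selects the RDMA transport; 20049 is the IANA-assigned
# port for NFS over RDMA. "storage:/tank/lab" is a made-up export.
mount -t nfs -o vers=4.1,proto=rdma,port=20049 storage:/tank/lab /mnt/lab
```

In the meantime, a plain TCP mount over IPoIB works, and it benefits 
from connected mode's larger MTU.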
> So far it seems like these cards struggle to fill a 10 gig pipe. Using
> iperf (2), the best I'm getting is around 6 Gbit/sec. The interfaces
> aren't showing drops on either end. It doesn't seem to matter if I run
> 1, 2 or 4 threads in iperf.
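For what it's worth, a multi-stream run with a bigger socket buffer is 
the usual iperf2 sanity check; a sketch, with "storage" standing in for 
whatever the other box is called:

```shell
# On the storage server:
iperf -s -w 1M

# On the compute box: 4 parallel TCP streams, 1 MB window, 30-second run.
iperf -c storage -P 4 -w 1M -t 30
```

If 1, 2 and 4 streams all land at the same total, that usually points at 
per-packet overhead (i.e. a small MTU) rather than a single-stream limit.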
You'll need both ends in connected mode with a fairly large MTU to get 
good throughput.  CentOS defaults to 64k, but FreeBSD is unstable at 
that size last I checked.  I got good results with 16k.
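That 2044 limit the Linux box enforces in datagram mode is just 
arithmetic on the default IB link MTU:

```shell
# Datagram-mode IPoIB MTU = IB link MTU minus the 4-byte IPoIB
# encapsulation header (RFC 4391). 2048 is the common default link MTU.
IB_LINK_MTU=2048
IPOIB_HEADER=4
echo $((IB_LINK_MTU - IPOIB_HEADER))    # prints 2044
```

Connected mode isn't tied to the link MTU, which is why 16k (or 64k on 
CentOS) becomes possible there.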

My FreeBSD ZFS NFS server performed comparably to the CentOS servers, 
with some buffer space errors causing the interface to shut down (under 
the same loads that caused CentOS servers to lock up completely).  
Someone mentioned that this buffer space bug has been fixed, but I no 
longer have a way to test it.



Earth is a beta site.

More information about the freebsd-infiniband mailing list