Kernel modules

Jason Bacon bacon4000 at gmail.com
Tue Aug 27 13:26:20 UTC 2019


On 2019-08-26 23:41, Justin Clift wrote:
> On 2019-08-26 23:39, Jason Bacon wrote:
>> On 2019-08-26 08:13, Hans Petter Selasky wrote:
> <snip>
>>> Mellanox found a bug in ipoib which can lead to symptoms similar 
>>> to those you are seeing. Can you try the attached patch?
>>>
>>> Thank you!
> <snip>
>> That's great to hear...
>>
>> Unfortunately, I no longer have admin access to a large cluster with
>> Mellanox HCAs, as I just changed jobs.
>>
>> I did my best to thoroughly test FreeBSD IB before I left my old
>> position.  This was the only outstanding issue with the FreeBSD file
>> server I was testing, so if this fix resolves it, I would say that
>> FreeBSD is production-ready for IB clusters.
>
> That would be welcome news.  People still turn up in the FreeNAS Forums
> from time to time, looking for IB support.  It'd be nice to have stable
> IB drivers, suitable for adding to the FreeNAS image. :)
>
> + Justin
I'll add that iperf throughput fell short of identical CentOS nodes by 
something like 15%, but NFS nevertheless outperformed CentOS in some 
aspects and averaged out about the same.  The hardware was a PowerEdge 
R720xd with RAID 6 on a PERC controller across twelve 2 TB SAS disks, 
using the mrsas driver on FreeBSD.  I ran ZFS on top of the hardware 
RAID and did not try RAIDZ*.
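For anyone wanting to repeat that throughput comparison, a basic iperf test over the IPoIB interface looks like the following; the server address is a placeholder, and exact options may differ between iperf versions:

```shell
# On the file server (10.0.0.1 is a placeholder IPoIB address):
iperf -s

# On a client node, run a 30-second TCP throughput test
# against the server's IPoIB address:
iperf -c 10.0.0.1 -t 30
```

Running the same pair of commands on the FreeBSD and CentOS nodes gives a like-for-like comparison of raw IPoIB throughput, separate from NFS behavior.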

Also note: the loads that triggered the issue (e.g. a canu de novo 
assembly using very poor-quality sequence data) also caused problems 
with the CentOS file servers: the server would hang temporarily and 
clients would hang indefinitely, even after the load subsided.  I did 
not see the same client issues with the FreeBSD file server, only the 
buffer space issue.  I worked around the issue on the CentOS servers 
by switching to NFS over RDMA.
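For reference, on a CentOS client NFS over RDMA is typically set up by loading the RDMA transport module and mounting with the rdma option; the server name and paths below are placeholders:

```shell
# Client side: load the NFS/RDMA transport module
modprobe xprtrdma

# Mount the export over RDMA; 20049 is the IANA-registered NFS/RDMA port.
# server:/export and /mnt/data are placeholders for the real export.
mount -t nfs -o rdma,port=20049 server:/export /mnt/data
```

The server must also have its RDMA-capable NFS service enabled (svcrdma on Linux), so this is a sketch of the client side only.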

I intended to try a parallel filesystem at some point (e.g. Gluster, 
Ceph), but wasn't able to find time before I left.

-- 
Earth is a beta site.

More information about the freebsd-infiniband mailing list