Mellanox MT25418 performance IPOIB?

Jason Bacon jwbacon at tds.net
Thu Apr 17 18:08:02 UTC 2014


OK, so 4X at 5.0 Gbps per lane implies a DDR network: 20 Gbps of 
signaling, or 16 Gbps of data after 8b/10b encoding.  The best 
throughput I've seen on TCP/IP (IPoIB) connections over DDR is around 
12 Gbps.  I get around 11 Gbps in both directions, CentOS to CentOS.
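
For anyone following along, those numbers are plain TCP over the IPoIB 
interface; a quick way to check on your own hardware is iperf, e.g. 
(the address below is just a placeholder for the server's IPoIB 
address):

    # on the server node
    iperf -s

    # on the client node: 30-second run with 4 parallel streams
    iperf -c 192.168.100.1 -t 30 -P 4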

OpenMPI, which uses native IB verbs rather than IPoIB, will reach 
about 16 Gbps.
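
If you want to see the raw verbs ceiling without MPI in the picture, 
the perftest tools give a quick number; something like the following 
(the hostname is a placeholder for the remote node):

    # on one node
    ib_send_bw

    # on the other node, pointing at the first
    ib_send_bw node1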

Given that, I'm hopeful that an FDR network will be able to exceed the 
local throughput of our 12-disk RAIDs even without IPOIB.
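
A rough way to sanity-check the local baseline is a sequential dd on 
the array.  A minimal sketch (path and size are placeholders; use a 
file larger than RAM or caching will inflate the read number, and note 
that /dev/zero writes compress to nothing if ZFS compression is on):

    # sequential write
    dd if=/dev/zero of=/tank/testfile bs=1M count=32768

    # sequential read back
    dd if=/tank/testfile of=/dev/null bs=1M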

Regards,

     JB

On 04/17/14 12:22, Karl Pielorz wrote:
>
> --On 17 April 2014 08:01:47 -0500 Jason Bacon <jwbacon at tds.net> wrote:
>
>> Hi Karl,
>>
>> What type of network are you running?  DDR?  QDR?  FDR?  Switch and HCA
>> models?
>
> Ok, I'm still 'new' to IB - I've basically just got two MT25418 cards 
> running back-to-back in two machines.
>
> 'ibportstate' seems to tell me I'm running LinkWidthActive 4X, 
> LinkSpeedActive 5.0Gbps.
>
>> FYI, with some tuning effort, I was able to get up to 600 megabytes/sec
>> both ways over DDR IB from a CentOS NFS server with 12 SATA disks on a
>> PERC H710, ext4, RAID 6.  Might be able to do a little better with
>> FreeBSD and ZFS, where IB is providing 7.5 Gbit/sec.
>
> I'm planning on using these cards for HAST between two boxes running 
> ZFS (so they're just doing TCP/IP duties), though at the moment I've 
> not been overly impressed with HAST's performance (even running with 
> 'none' as the secondary) - it seems to degrade the I/O performance of 
> any disks it's using, even when just reading.
>
> I'm going to give it the benefit of the doubt, though, and just build 
> the boxes up, sling a bunch of drives in, and see how the performance 
> comes out.
>
> -Karl
>


