Request updated/unified patch for OFED v3.2 update

David P. Discher dpd at dpdtech.com
Tue May 10 02:12:18 UTC 2016


Sorry for being MIA - I got seriously sick and am still working to get healthy, but I'm feeling better now.


> On Apr 25, 2016, at 11:32 PM, Hans Petter Selasky <hps at selasky.org> wrote:
> 
> On 04/25/16 18:41, David P. Discher wrote:
>> A couple notes that frustrated the hell out me this weekend:
>> 
>> I could not get the modules to compile in-kernel.  They have to be loadable modules.  I kept getting linking errors, to what looks like references in the linux modules.
> 
> You'll have to use:
> 
> options LINUXKPI
> 
> When building all into the kernel.


Figured something like that would do it … however, this is a difference from when I was doing it previously with -HEAD and with 10-stable, where all of the IB modules could be defined in the KERNCONF without the Linux bits. Not a problem, but something noteworthy for the README or change log.
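
For the archives, this is roughly what I think the in-kernel config needs to look like now. Just a sketch: LINUXKPI is the option Hans Petter mentioned, and the OFED option and device lines are what I'd try based on sys/conf/NOTES - I haven't verified this exact set builds.

	# KERNCONF sketch (cross-check names against sys/conf/NOTES)
	options 	LINUXKPI	# Linux KPI layer the IB stack now depends on
	options 	OFED		# core OFED/InfiniBand stack
	device		mthca		# Mellanox InfiniHost HCA
	device		ipoib		# IP-over-InfiniBand interface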


>> patch.diff was created with:
>> 
>> cat D5790.diff.txt D5791.diff.txt D5792.diff.txt D5793.diff.txt D5794.diff.txt D5795.diff.txt D5796.diff.txt D5797.diff.txt D5798.diff.txt D5799.diff.txt > patch.diff
>> 
> 
> It might be easier if you have a git checkout of the FreeBSD kernel.
> 
> git am xxx
> 
> handles these issues automatically.


Yes - this might be much easier … do you have a branch forked on GitHub, or something similar with the patches already applied?
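
In the meantime, here's roughly what I'd do against a git checkout - a sketch only, assuming the Phabricator diffs apply cleanly with git apply (they aren't mbox-format patches, so I'm not sure git am will take them directly):

	# sketch: apply the review diffs on top of a git checkout of head
	git clone https://github.com/freebsd/freebsd.git
	cd freebsd
	for d in ../D57*.diff.txt ; do
		git apply --check "$d" && git apply "$d"
	done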

So, I dove into this again and refreshed all the patches as of May 7th, 1:15pm PDT. I had one rejection:

	- https://gist.github.com/daviddpd/6249a0fe8df328ceede052a2299670f6

It seemed straightforward, so I resolved it by hand.
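
For the record, the workflow I'm actually using looks like this (plain patch(1) against a source checkout; the -p level depends on how the diff paths are prefixed):

	# concatenate the review diffs and apply them in one pass
	cat D57*.diff.txt > patch.diff
	cd /usr/src
	patch -p0 < /path/to/patch.diff		# or -p1 if the paths are a/ b/ prefixed
	find . -name '*.rej'			# any hunks left to hand-merge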

However, against r298518 I'm getting compile errors. It looks like some kind of header file problem? The errors are only in mlx5ib … and since I don't have that card, I was testing without it.


	- https://gist.github.com/daviddpd/0ebb569da3a57f801afaf67d2a7f54c5


I’m running on older cards - using the mthca driver.
	ib_mthca0: <ib_mthca> mem 0xfea00000-0xfeafffff,0xd0000000-0xd07fffff irq 18 at device 0.0 on pci1
	ib_mthca: Mellanox InfiniBand HCA driver v1.0-ofed1.5.2 (August 4, 2010)
	ib_mthca: Initializing ib_mthca
	ib_mthca0: HCA FW version 5.2.916 is old (5.3.000 is current).
	ib_mthca0: If you have problems, try updating your HCA FW.

ib_mthca0 at pci0:1:0:0:	class=0x0c0600 card=0x628215b3 chip=0x628215b3 rev=0x20 hdr=0x00
    vendor     = 'Mellanox Technologies'
    device     = 'MT25208 [InfiniHost III Ex]'
    class      = serial bus

Sidestepping that compile error by building the kernel with only the modules I need:

	MODULES_OVERRIDE=ipoib linuxkpi ibcore mthca
	WITH_OFED='yes'
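
To be explicit about where those knobs live in my setup (MODULES_OVERRIDE with more than one module needs quoting):

	# /etc/make.conf
	MODULES_OVERRIDE="ipoib linuxkpi ibcore mthca"

	# /etc/src.conf
	WITH_OFED=yes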

I got the same performance both pre- and post-OFED 3.2: about 4.8-5 Gbits/sec with iperf.


	root at amd:~ # iperf -c 172.16.0.1 -i 8 -t 300 -P 1
	------------------------------------------------------------
	Client connecting to 172.16.0.1, TCP port 5001
	TCP window size: 33.3 KByte (default)
	------------------------------------------------------------
	[  3] local 172.16.0.2 port 23663 connected with 172.16.0.1 port 5001
	[ ID] Interval       Transfer     Bandwidth
	[  3]  0.0- 8.0 sec  4.64 GBytes  4.98 Gbits/sec
	[  3]  8.0-16.0 sec  4.64 GBytes  4.99 Gbits/sec
	[  3] 16.0-24.0 sec  4.64 GBytes  4.99 Gbits/sec
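
Next time I'll also run multiple parallel streams, to see whether a single TCP stream (on a single interrupt) is the limit rather than the fabric itself:

	# same test, 4 parallel streams instead of 1
	iperf -c 172.16.0.1 -i 8 -t 300 -P 4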

The firmware is out of date … not sure if that will make a difference. I am running these tests on slower desktop machines right now:

	CPU: AMD A4-6300 APU with Radeon(tm) HD Graphics     (3693.17-MHz K8-class CPU)

	CPU: Intel(R) Core(TM)2 Duo CPU     E8600  @ 3.33GHz (3325.07-MHz K8-class CPU)

I’ll move to my 8-core machines later, but I’m not sure this will make a huge difference.

Questions:
	Will OFED 3.2 speed up these InfiniHost III cards?

	Do these cards have multiple queues? Can they use multiple MSI-X interrupts? I see support for them in the driver, and a few months ago (maybe a year ago) I enabled them there … but of course I couldn't get the systems to actually use them, and then gave up. The single IRQ for this card, intr{irq274: ib_mthca}, runs at 100%, and I'm guessing that single-threaded interrupt handling is what limits performance right now (see the check sketched after the next paragraph).

These are dual-ported cards … when I test both ports at the same time, they share/split the 5 Gbps of bandwidth. (Again, I think both ports are using the same interrupt.)
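
For what it's worth, this is how I'm watching the interrupt load - nothing card-specific, just the standard tools:

	# interrupt counts/rates for the HCA
	vmstat -i | grep mthca
	# per-thread, per-CPU view, to watch the irq274 ithread peg a core
	top -HSP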

Also, something I'm noticing … after running an iperf, some buffer is filling up or not getting cleaned up:

	root at amd:~ # ping 172.16.0.1
	PING 172.16.0.1 (172.16.0.1): 56 data bytes
	ping: sendto: No buffer space available
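
When that happens, my plan is to look at mbuf usage and interface drops, something like:

	# mbuf/cluster usage and requests denied
	netstat -m
	# per-interface statistics including dropped packets
	netstat -i -d -n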

I'll move back to the bigger machines (8+ cores and 32 GB of RAM) and retest later in the week. Maybe the PCIe slot is limiting it, though the card is in an x16 slot.


-
David P. Discher
http://davidpdischer.com/
AIM: DavidDPD | Y!M: daviddpdz
Mobile: 408.368.3725





