Some MSI are not routed correctly

Maxim Sobolev sobomax at FreeBSD.org
Thu Oct 8 14:33:35 UTC 2015


Hi John & others,

We've come across a weird MSI routing issue on one of our newest dual
E5-2690v3 (Haswell) Supermicro X10DRL-i boxes running the latest 10.2-p4.
It is fitted with a dual-port Intel I350 card, in addition to the built-in
I210 chip, which is not used. hw.igb.num_queues is set to 4, and the driver
reports binding to CPUs 0-3 for the first port and CPUs 4-7 for the second.
However, when verified with top -P under load, interrupts are only
delivered to CPUs 0-3; no interrupt time is recorded on CPUs 4-7. systat
-vm shows that all 8 queues are firing interrupts, so my guess is that for
whatever reason bus_bind_intr() is not doing what it's expected to do for
half of those interrupts.
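
For reference, the per-queue binding the driver performs follows roughly
the pattern below (a simplified sketch of the MSI-X setup path in an
igb-style driver, not the exact if_igb.c code; struct adapter, struct
igb_queue, igb_handle_que_sketch and the cpu_id assignment are
illustrative placeholders for the driver's own types and logic):

/*
 * Kernel context: roughly what runs while the driver sets up its
 * per-queue MSI-X vectors during attach.
 */
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/bus.h>
#include <sys/rman.h>
#include <machine/resource.h>

static void	igb_handle_que_sketch(void *);	/* per-queue handler */

static int
igb_allocate_msix_sketch(device_t dev, struct adapter *adapter)
{
	struct igb_queue *que;
	int cpu_id, error, i, rid;

	/* First port binds its queues to CPUs 0..n-1, the second to n..2n-1. */
	cpu_id = device_get_unit(dev) * adapter->num_queues;

	for (i = 0; i < adapter->num_queues; i++, cpu_id++) {
		que = &adapter->queues[i];
		rid = i + 1;		/* MSI-X vectors start at rid 1 */

		que->res = bus_alloc_resource_any(dev, SYS_RES_IRQ, &rid,
		    RF_ACTIVE);
		if (que->res == NULL)
			return (ENXIO);

		error = bus_setup_intr(dev, que->res,
		    INTR_TYPE_NET | INTR_MPSAFE, NULL,
		    igb_handle_que_sketch, que, &que->tag);
		if (error != 0)
			return (error);

		/*
		 * The call in question: ask the interrupt subsystem to
		 * bind this vector to cpu_id.  The "Bound queue X to
		 * cpuY" line is only printed on success; nothing
		 * re-checks later where the interrupt actually fires.
		 */
		error = bus_bind_intr(dev, que->res, cpu_id);
		if (error == 0)
			device_printf(dev, "Bound queue %d to cpu%d\n",
			    i, cpu_id);
	}
	return (0);
}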

What's interesting is that on a similar box (same chassis/mobo/CPU), but
equipped with a quad-port X540-AT2 10GigE card, interrupts are routed
properly. The latter is running with hw.ix.num_queues="3".

pcib2: <ACPI Host-PCI bridge> port 0xcf8-0xcff on acpi0
pci0: <ACPI PCI bus> on pcib2
pcib3: <ACPI PCI-PCI bridge> irq 26 at device 1.0 on pci0
pci1: <ACPI PCI bus> on pcib3
igb0: <Intel(R) PRO/1000 Network Connection version - 2.4.0> mem
0xc7200000-0xc72fffff,0xc7304000-0xc7307fff irq 26 at device 0.0 on pci1
igb0: Using MSIX interrupts with 5 vectors
igb0: Ethernet address: a0:36:9f:76:af:20
igb0: Bound queue 0 to cpu0
igb0: Bound queue 1 to cpu1
igb0: Bound queue 2 to cpu2
igb0: Bound queue 3 to cpu3
igb0: netmap queues/slots: TX 4/4096, RX 4/4096
igb1: <Intel(R) PRO/1000 Network Connection version - 2.4.0> mem
0xc7100000-0xc71fffff,0xc7300000-0xc7303fff irq 28 at device 0.1 on pci1
igb1: Using MSIX interrupts with 5 vectors
igb1: Ethernet address: a0:36:9f:76:af:21
igb1: Bound queue 0 to cpu4
igb1: Bound queue 1 to cpu5
igb1: Bound queue 2 to cpu6
igb1: Bound queue 3 to cpu7
igb1: netmap queues/slots: TX 4/4096, RX 4/4096

pcib2: <ACPI Host-PCI bridge> port 0xcf8-0xcff on acpi0
pci0: <ACPI PCI bus> on pcib2
pcib3: <ACPI PCI-PCI bridge> irq 26 at device 1.0 on pci0
pci1: <ACPI PCI bus> on pcib3
pcib4: <ACPI PCI-PCI bridge> irq 32 at device 2.0 on pci0
pci2: <ACPI PCI bus> on pcib4
pcib5: <ACPI PCI-PCI bridge> irq 40 at device 3.0 on pci0
pci3: <ACPI PCI bus> on pcib5
ix0: <Intel(R) PRO/10GbE PCI-Express Network Driver, Version - 2.8.3> port
0x6020-0x603f mem 0xc7c00000-0xc7dfffff,0xc7e04000-0xc7e07fff irq 40 at
device 0.0 on pci3
ix0: Using MSIX interrupts with 4 vectors
ix0: Bound queue 0 to cpu 0
ix0: Bound queue 1 to cpu 1
ix0: Bound queue 2 to cpu 2
ix0: Ethernet address: 0c:c4:7a:5e:be:64
ix0: PCI Express Bus: Speed 5.0GT/s Width x8
ix0: netmap queues/slots: TX 3/4096, RX 3/4096
ix1: <Intel(R) PRO/10GbE PCI-Express Network Driver, Version - 2.8.3> port
0x6000-0x601f mem 0xc7a00000-0xc7bfffff,0xc7e00000-0xc7e03fff irq 44 at
device 0.1 on pci3
ix1: Using MSIX interrupts with 4 vectors
ix1: Bound queue 0 to cpu 3
ix1: Bound queue 1 to cpu 4
ix1: Bound queue 2 to cpu 5
ix1: Ethernet address: 0c:c4:7a:5e:be:65
ix1: PCI Express Bus: Speed 5.0GT/s Width x8
ix1: netmap queues/slots: TX 3/4096, RX 3/4096

Some extra debugging output is here:

http://sobomax.sippysoft.com/haswell_bug/bad.dmesg
http://sobomax.sippysoft.com/haswell_bug/lstopo_bad.png
http://sobomax.sippysoft.com/haswell_bug/systat_vm_bad.png
http://sobomax.sippysoft.com/haswell_bug/top_P_bad.png

http://sobomax.sippysoft.com/haswell_bug/good.dmesg
http://sobomax.sippysoft.com/haswell_bug/lstopo_good.png
http://sobomax.sippysoft.com/haswell_bug/systat_vm_good.png
http://sobomax.sippysoft.com/haswell_bug/top_P_good.png

Any ideas on how to debug this further are welcome. The box is in
production, but we can remove traffic from it during off-peak hours to run
some test/debug code, such as the check below.
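
One quick check we can run during an off-peak window is to dump what the
kernel reports as the effective CPU mask of each MSI-X vector, i.e. the
same information cpuset -g -x <irq> prints, with the IRQ numbers taken
from vmstat -ia. A minimal userland sketch (irqmask.c, a hypothetical
helper, not something in the tree):

/*
 * Print the CPUs an interrupt vector is allowed to run on, as the
 * kernel reports it via cpuset_getaffinity(2).
 * Usage: ./irqmask <irq-number>    (IRQ numbers as shown by vmstat -ia)
 */
#include <sys/param.h>
#include <sys/cpuset.h>

#include <err.h>
#include <stdio.h>
#include <stdlib.h>

int
main(int argc, char *argv[])
{
	cpuset_t mask;
	int i, irq;

	if (argc != 2)
		errx(1, "usage: %s <irq>", argv[0]);
	irq = atoi(argv[1]);

	CPU_ZERO(&mask);
	if (cpuset_getaffinity(CPU_LEVEL_WHICH, CPU_WHICH_IRQ, irq,
	    sizeof(mask), &mask) != 0)
		err(1, "cpuset_getaffinity(irq %d)", irq);

	printf("irq %d mask:", irq);
	for (i = 0; i < CPU_SETSIZE; i++)
		if (CPU_ISSET(i, &mask))
			printf(" %d", i);
	printf("\n");
	return (0);
}

If the masks come back as 4-7 for igb1's vectors while top -P still shows
no interrupt time there, the binding itself is probably fine and the
problem is further down, e.g. in how the MSI-X messages are actually
programmed or routed.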

Thanks!

