[Bug 265714] igc(4) drops link under high traffic
Date: Sun, 17 Aug 2025 23:50:00 UTC
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=265714
karl@denninger.net changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |karl@denninger.net
--- Comment #32 from karl@denninger.net ---
I've got a little N150 "minimal PC" box here with dual I226-V ports -- it
comes up with EEPROM version 2.17.
igc0: <Intel(R) Ethernet Controller I226-V> mem 0x80c00000-0x80cfffff,0x80d00000-0x80d03fff at device 0.0 on pci1
igc0: EEPROM V2.17-0 eTrack 0x80000303
igc0: Using 1024 TX descriptors and 1024 RX descriptors
igc0: Using 4 RX queues 4 TX queues
igc0: Using MSI-X interrupts with 5 vectors
igc0: Ethernet address: c8:ff:bf:05:95:e2
igc0: netmap queues/slots: TX 4/1024, RX 4/1024
....
$ ifconfig igc0
igc0: flags=1008843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST,LOWER_UP> metric 0 mtu 1500
	options=4e427bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,WOL_MAGIC,VLAN_HWTSO,RXCSUM_IPV6,TXCSUM_IPV6,HWSTATS,MEXTPG>
	ether c8:ff:bf:05:95:e2
	inet 192.168.10.50 netmask 0xffffff00 broadcast 192.168.10.255
	media: Ethernet autoselect (2500Base-T <full-duplex>)
	status: active
	nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
If I turn off TSO and LRO (the usual thing to do on the outside interface of a
gateway/firewall), the only change is a slight increase in CPU use, as expected.
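(For reference, roughly what that amounts to on this box -- the interface name
and static address match the ifconfig output above; adjust for your setup:)
$ ifconfig igc0 -tso -lro          # as root: drop TSO4/TSO6 and LRO on the live interface
$ ifconfig igc0 | grep options     # confirm TSO4, TSO6 and LRO are gone from the options list
# to make it persistent, in /etc/rc.conf:
ifconfig_igc0="inet 192.168.10.50/24 -tso -lro"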
I've had zero trouble under 14.3-STABLE (built about a week ago), and I've
hammered it pretty hard with iperf3 over 2.5Gbps connections -- zero hangs,
drops, or other misbehavior.
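(Roughly the kind of test runs involved -- the server address here is just a
placeholder for the 10GbE box described below:)
$ iperf3 -c 192.168.10.40 -t 60 -P 4       # four parallel streams, client -> server
$ iperf3 -c 192.168.10.40 -t 60 -P 4 -R    # same test in the reverse direction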
The "other end" I'm testing against is a server on a 10GbE link through a
Mellanox card (mce driver) into a switch with both 10G and 2.5G ports.
(This is in contrast to a similar machine from a different vendor that I have
with dual Realtek 2.5G ports, which is unusably bad: the in-kernel driver
resets randomly under load, and with the kmod driver, some older versions dug
out of the package archives sort of work, but the "best" ones -- the ones that
seem "stable" -- produce insane retry counts in iperf and wildly variable
latency on real workloads.)
I am not setting any special tunables (e.g. hw.pci.enable_aspm=0 or similar).
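(In other words ASPM is left at its default; if I were going to force it off,
the boot-time tunable would go in /boot/loader.conf -- shown only for
completeness, since I'm not doing it here:)
# /boot/loader.conf -- NOT set on this box
hw.pci.enable_aspm=0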