Proposed 6.2 em RELEASE patch

Mike Tancsa mike at sentex.net
Thu Nov 9 22:01:20 UTC 2006


At 10:51 AM 11/9/2006, Scott Long wrote:
>Mike Tancsa wrote:
>>At 08:19 PM 11/8/2006, Jack Vogel wrote:
>>
>>>BUT, I've added the FAST_INTR changes back into the code, so
>>>if you go into your Makefile and add -DEM_FAST_INTR you will
>>>then get the taskqueue stuff.
>>It certainly does make a difference performance-wise.  I did some
>>quick testing with netperf and netrate.  Back-to-back boxes, using
>>an AMD X2 with a bge NIC and one Intel box:
>>
>>CPU: AMD Athlon(tm) 64 X2 Dual Core Processor 3800+ (2009.27-MHz 686-class CPU)
>>CPU: Intel(R) Core(TM)2 CPU          6400  @ 2.13GHz (2144.01-MHz 686-class CPU)
>>
>>The Intel is a DG965SS with an integrated em NIC, the AMD a Tyan with
>>integrated bge.  Both are running SMP kernels with pf built in, no inet6.
>>
>>Intel box as sender.  This test is with the patch from yesterday:
>>the first set with the patch as is, the second with -DEM_FAST_INTR.
>
>Thanks for the tests.  One thing to note is that Gleb reported a higher
>rate of dropped packets with INTR_FAST.  He is the only one who has
>reported this, so I'd like to find out if there is something unique to
>his environment, or if there is a larger problem to be addressed.  There
>are ways that we can change the driver to not drop any packets at all
>for Gleb, but they expose the system to risk if there is ever an
>accidental (or malicious) RX flood on the interface.
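
For context, the EM_FAST_INTR knob Jack mentions (added to CFLAGS in the
if_em module Makefile, per his note above) switches the driver over to the
fast-interrupt plus taskqueue model: the interrupt handler itself does
almost nothing and the packet processing is deferred to a taskqueue
thread.  Very roughly, and only as an illustrative sketch (the "mydev"
names below are hypothetical, not the actual if_em code, and API details
differ between releases), the shape of it is:

/*
 * Sketch of the INTR_FAST + taskqueue pattern.  The handler would be
 * hooked up at attach time with something like:
 *
 *   TASK_INIT(&sc->rxtx_task, 0, mydev_handle_rxtx, sc);
 *   bus_setup_intr(dev, sc->irq_res, INTR_TYPE_NET | INTR_FAST,
 *       mydev_intr_fast, sc, &sc->irq_cookie);
 */
#include <sys/param.h>
#include <sys/bus.h>
#include <sys/taskqueue.h>

struct mydev_softc {
	struct task	rxtx_task;	/* deferred RX/TX work */
	/* ... resources, rings, stats, ... */
};

/* Placeholder hardware helpers (hypothetical). */
static void	mydev_disable_intr(struct mydev_softc *);
static void	mydev_enable_intr(struct mydev_softc *);
static void	mydev_rxeof(struct mydev_softc *);
static void	mydev_txeof(struct mydev_softc *);

/* Runs in primary interrupt context: must be short and may not sleep. */
static void
mydev_intr_fast(void *arg)
{
	struct mydev_softc *sc = arg;

	mydev_disable_intr(sc);		/* mask the NIC's interrupt */
	/* (older releases spell this taskqueue_enqueue_fast() for fast queues) */
	taskqueue_enqueue(taskqueue_fast, &sc->rxtx_task);
}

/* Runs later in a taskqueue thread: does the real packet processing. */
static void
mydev_handle_rxtx(void *arg, int pending)
{
	struct mydev_softc *sc = arg;

	mydev_rxeof(sc);		/* drain the receive ring */
	mydev_txeof(sc);		/* reclaim transmit descriptors */
	mydev_enable_intr(sc);		/* re-arm the NIC */
}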

With a high rate of packets, I am able to live-lock the box.  I set up
the following test:


b1a ------|
b2a ------R1------------b1b
           |------------b2b


R1 has a dual-port PCIe em PRO/1000 PT:

em0: <Intel(R) PRO/1000 Network Connection Version - 6.2.9> port 0x9000-0x901f mem 0xd7020000-0xd703ffff,0xd7000000-0xd701ffff irq 17 at device 0.0 on pci6
em0: Ethernet address: 00:15:17:0b:70:98
em0: [FAST]
em1: <Intel(R) PRO/1000 Network Connection Version - 6.2.9> port 0x9400-0x941f mem 0xd7040000-0xd705ffff,0xd7060000-0xd707ffff irq 18 at device 0.1 on pci6
em1: Ethernet address: 00:15:17:0b:70:99
em1: [FAST]

b1a = 192.168.44.1   - onboard bge0
b1b = 192.168.88.218 - onboard em (82547EI Gigabit Ethernet Controller)

b2a = 192.168.88.176 - single-port PCIe em0
b2b = 192.168.44.244 - onboard em0 (DG965SS)

R1 has 192.168.44.223 and 192.168.88.223.  Routing across R1, with
b1a blasting b1b and b2a blasting b2b with netrate, will lock up R1
even though the total throughput is only about 500 Mbit/s.


While on b1a I run:

# ./netblast 192.168.88.218 500 10 1000

I see the following on R1 (bge1 is my management interface):

R1 # ifstat -b
        em0                 em1                 bge1
  Kbps in  Kbps out   Kbps in  Kbps out   Kbps in  Kbps out
273770.1      0.00      0.00  237269.1      1.40      3.51
273509.8      0.00      0.00  237040.2      1.73      2.76
273694.9      0.00      0.00  237202.6      0.94      2.34
274258.6      0.00      0.00  237690.4      1.40      2.34
273623.8      0.00      0.00  237140.7      0.94      2.34

If I then start up netblast on b2b or on b2a (either direction, it
doesn't matter), R1 locks up.  This was with R1 in an SMP config.

Without INTR_FAST it is not as fast, but R1 does not lock up, or at
least does not lock me out of my management interface.
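
One way to read Scott's tradeoff above in light of this: if the deferred
handler insists on draining everything the wire delivers, a sustained
flood keeps that thread runnable continuously and everything else on the
box starves.  The usual counter-measure is to bound the work done per
pass, accepting drops when the ring overflows.  Purely as a sketch (same
hypothetical "mydev" names as before, with mydev_rxeof() now taking a
per-pass limit and returning nonzero if the ring still has work; not a
claim about what if_em actually does):

#define	MYDEV_RX_BUDGET	100	/* illustrative per-pass frame limit */

static void
mydev_handle_rxtx(void *arg, int pending)
{
	struct mydev_softc *sc = arg;
	int more;

	more = mydev_rxeof(sc, MYDEV_RX_BUDGET);
	mydev_txeof(sc);

	if (more)
		/* ring still has packets: requeue instead of looping here */
		taskqueue_enqueue(taskqueue_fast, &sc->rxtx_task);
	else
		mydev_enable_intr(sc);	/* re-arm the NIC */
}

Even with a bound like that, if the handler just requeues itself and runs
at a higher priority than the rest of the system, a sustained flood can
still starve other work, which would be consistent with the lockup I am
seeing here.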

         ---Mike



