dummynet, em driver, device polling issues :-((

Benjamin Rosenblum ben at benswebs.com
Tue Oct 4 07:58:11 PDT 2005


I have been messing with the em driver for over a month now, and I've 
come to the conclusion that it is a piece of crap. If you watch this 
list, every other day someone reports that their em driver is causing 
some sort of error; that should not happen on a NIC from a company 
like Intel. I am sadly contemplating moving over to Fedora right now 
just so I can get work done until 6.0 comes out (which I doubt will 
solve the problem anyway, since I am using the drivers from 6.0 now 
and they are not helping either). Somebody really needs to look into 
this and find out what the hell is going on, as I consider this a 
major problem right now.

Ferdinand Goldmann wrote:

> Kevin Day wrote:
>
>> This is pretty odd. We've got dozens of servers using various 
>> versions of 5.x, and many different em cards, and have no problems, 
>> even when shoving near-line-rate speeds out of them.
>
> The maximum transfer rates we see in MRTG are around 320 Mbit/s 
> (with polling disabled).
>
>> em0: <Intel(R) PRO/1000 Network Connection, Version - 1.7.35> port 
>> 0xecc0-0xecff mem 0xdfae0000-0xdfafffff irq 64 at device 7.0 on pci6
>
> em0: <Intel(R) PRO/1000 Network Connection, Version - 1.7.35> port 
> 0x2280-0x22bf mem 0xeffc0000-0xeffdffff irq 20 at device 5.0 on pci1
>
> Pretty much the same here, even the driver version.
>
> em0 at pci1:5:0:   class=0x020000 card=0x10028086 chip=0x10268086 
> rev=0x04 hdr=0x00
>     vendor   = 'Intel Corporation'
>     device   = '82545GM Gigabit Ethernet Controller'
>
>> After you experience your problems, can you do "sysctl -w 
>> hw.em0.stats=1" and "sysctl -w hw.em0.debug_info=1" and post what 
>> gets dumped to your syslog/dmesg output?
>
> em0: Excessive collisions = 0
> em0: Symbol errors = 0
> em0: Sequence errors = 0
> em0: Defer count = 11
> em0: Missed Packets = 0
> em0: Receive No Buffers = 0
> em0: Receive length errors = 0
> em0: Receive errors = 0
> em0: Crc errors = 0
> em0: Alignment errors = 0
> em0: Carrier extension errors = 0
> em0: XON Rcvd = 11
> em0: XON Xmtd = 0
> em0: XOFF Rcvd = 11
> em0: XOFF Xmtd = 0
> em0: Good Packets Rcvd = 283923273
> em0: Good Packets Xmtd = 272613648
> em0: Adapter hardware address = 0xc12cfb48
> em0:CTRL  = 0x58f00249
> em0:RCTL  = 0x8002 PS=(0x8402)
> em0:tx_int_delay = 66, tx_abs_int_delay = 66
> em0:rx_int_delay = 0, rx_abs_int_delay = 66
> em0: fifo workaround = 0, fifo_reset = 0
> em0: hw tdh = 173, hw tdt = 173
> em0: Num Tx descriptors avail = 256
> em0: Tx Descriptors not avail1 = 0
> em0: Tx Descriptors not avail2 = 0
> em0: Std mbuf failed = 0
> em0: Std mbuf cluster failed = 0
> em0: Driver dropped packets = 0
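>
> (For reference, roughly how the dump above was produced; writing 1 to 
> each knob makes if_em print its counters to the kernel log:
>
>    sysctl -w hw.em0.stats=1
>    sysctl -w hw.em0.debug_info=1
>    dmesg | grep '^em0' | tail -30
>
> The grep/tail part is just one way of pulling the freshly dumped 
> block back out of dmesg.)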
>
>> We're using polling on nearly all the servers, and don't see ierrs at 
>> all. 
>
> Hm. That's strange. The above values were gathered with polling 
> disabled. As soon as I enable polling, the ierrs on the em0 
> interface start rising:
>
> em0: Excessive collisions = 0
> em0: Symbol errors = 0
> em0: Sequence errors = 0
> em0: Defer count = 11
> em0: Missed Packets = 39
> em0: Receive No Buffers = 2458
> em0: Receive length errors = 0
> em0: Receive errors = 0
> em0: Crc errors = 0
> em0: Alignment errors = 0
> em0: Carrier extension errors = 0
> em0: XON Rcvd = 11
> em0: XON Xmtd = 4
> em0: XOFF Rcvd = 11
> em0: XOFF Xmtd = 43
> em0: Good Packets Rcvd = 315880003
> em0: Good Packets Xmtd = 303985941
> em0: Adapter hardware address = 0xc12cfb48
> em0:CTRL  = 0x58f00249
> em0:RCTL  = 0x8002 PS=(0x8402)
> em0:tx_int_delay = 66, tx_abs_int_delay = 66
> em0:rx_int_delay = 0, rx_abs_int_delay = 66
> em0: fifo workaround = 0, fifo_reset = 0
> em0: hw tdh = 57, hw tdt = 57
> em0: Num Tx descriptors avail = 249
> em0: Tx Descriptors not avail1 = 0
> em0: Tx Descriptors not avail2 = 0
> em0: Std mbuf failed = 0
> em0: Std mbuf cluster failed = 0
> em0: Driver dropped packets = 0
>
>
> Can you tell me what settings you are using for polling? I have set 
> HZ=1000 and burst_max=300.
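>
> (For reference, a minimal sketch of the polling setup used here; on 
> 5.x both DEVICE_POLLING and HZ are compile-time kernel options, and 
> burst_max is the kern.polling.burst_max sysctl:
>
>    # kernel config, needs a rebuild:
>    #   options DEVICE_POLLING
>    #   options HZ=1000
>
>    sysctl kern.polling.enable=1        # turn polling on globally (5.x)
>    sysctl kern.polling.burst_max=300   # max packets handled per poll
>
> On 6.0 the global enable sysctl goes away and polling is switched on 
> per-interface with "ifconfig em0 polling" instead.)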
>
> I have now noticed another thing which might point to one of the 
> possible causes of the problem: until now this box ran FreeBSD 4.x, 
> which did not support ipfw tables for locking out whole lists of IP 
> addresses, so there were quite a few inefficient rules doing that 
> job. I have now put all the blocked IP addresses into a single table 
> which is referenced by only one rule. Since I did this, the ierrs 
> seem to rise more slowly with polling enabled.
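>
> (A minimal sketch of the change, with placeholder addresses standing 
> in for the real list:
>
>    ipfw table 1 add 192.0.2.17
>    ipfw table 1 add 198.51.100.0/24
>    # one rule consults the whole table instead of one rule per host:
>    ipfw add 100 deny ip from 'table(1)' to any
>
> Table lookups go through a radix tree, so a single rule plus a table 
> scales much better than a long linear chain of deny rules.)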
>
>> Have you tried contacting Intel directly about this? 
>> freebsdnic at mailbox.intel.com has been pretty helpful with 
>> em-specific problems in the past.
>
> Not yet, thank you for the hint.
>



