bce packet loss

David Christensen davidch at broadcom.com
Wed Jul 6 21:45:56 UTC 2011


> You had 282 RX buffer shortages and these frames were dropped. This
> may explain why you see occasional packet loss. 'netstat -m' will
> show which sizes of cluster allocation failed.
> However, it seems you have 0 com_no_buffers, which indicates the
> controller was able to receive all packets destined for this host.
> Your host may have lost some packets (i.e., a non-zero
> mbuf_alloc_failed_count), but your controller and system were still
> responsive to the network traffic.
> 
> Data sheet says IfHCInBadOctets indicates number of octets received
> on the interface, including framing characters for packets that
> were dropped in the MAC for any reason. 

The IfHCInBadOctets counter says the controller received X bytes 
that were bad on the wire (collisions, FCS errors, etc.).  A value
of 539,369 would equal about 355 frames @ 1518 bytes per frame.
How bad that is really depends on how long the server has been
running.  The specified bit-error rate (BER) for 1000Base-T is 
10^-12, so running at line rate you'd expect to see an error every
1000 seconds according to the following link:

http://de.brand-rex.com/LinkClick.aspx?fileticket=TFxnnLPedAg%3D&tabid=1956&mid=5686
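For reference, the arithmetic above can be checked with a quick
back-of-envelope script (a sketch only; it assumes full-size
1518-byte Ethernet frames and a 1 Gb/s line rate, as in the text):

```python
# Values taken from the discussion above.
bad_octets = 539_369      # IfHCInBadOctets reported by the controller
frame_size = 1518         # max-size standard Ethernet frame, in bytes

frames = bad_octets / frame_size
print(f"~{frames:.0f} bad frames")            # ~355 frames

ber = 1e-12               # specified BER for 1000Base-T
line_rate = 1e9           # bits per second at line rate

seconds_per_error = 1 / (ber * line_rate)
print(f"one bit error every {seconds_per_error:.0f} s at line rate")
```

At 1 Gb/s and a BER of 10^-12, that works out to one expected bit
error every 1000 seconds of saturated traffic.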

Most vendors design for better than 10^-12, and you're probably not
running at line rate all the time, so you should see fewer errors.
In my testing I can go for days without seeing any errors, but if
you run long enough or have marginal interconnects/cabling the 
error rate will rise.

Dave



More information about the freebsd-net mailing list