bge dropping packets issue

Bruce Evans brde at optusnet.com.au
Fri Apr 18 23:24:01 UTC 2008


On Fri, 18 Apr 2008, Alexander Sack wrote:

> Here are my results:
>
> Good news:
>
> Well after fiddling around with it, it seems that if I bump the
> number of rx_bds to 512, disable polling, and use net.isr.direct=1,
> bge no longer drops packets (as verified by a counter incremented
> within bge_ticks() whenever the hardware registers report a dropped
> packet).  What's interesting is that there is also an outOfRxBDs
> register you can read if you suspect chain starvation, which I
> discovered after looking at the Linux driver's more complete stat
> structure.

This register seems to be spelled NoMoreRxBDs in FreeBSD (~7.0 and later):

     dev.bge.0.stats.NoMoreRxBDs: 0

(This is slightly better spelling.  A data book spells it
nicNoMoreRxBDs.)
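
If you want to watch that counter without instrumenting the driver,
something like the following should do it.  (A quick sketch, not from
the thread: it assumes the ~7.0 sysctl name above, unit 0, and that
the stat is exported as an unsigned 32-bit value, matching the 32-bit
hardware register.)

/*
 * Poll the bge ring-starvation counter once a second via sysctl(3).
 */
#include <sys/types.h>
#include <sys/sysctl.h>

#include <stdio.h>
#include <unistd.h>

int
main(void)
{
        u_int val;
        size_t len;

        for (;;) {
                len = sizeof(val);
                if (sysctlbyname("dev.bge.0.stats.NoMoreRxBDs",
                    &val, &len, NULL, 0) != 0) {
                        perror("sysctlbyname");
                        return (1);
                }
                printf("NoMoreRxBDs: %u\n", val);
                sleep(1);
        }
}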

> Packets still get dropped, but this time by BPF.  It seems I pushed
> the problem upstream (in terms of the stack).  The user land software
> listening in this instance is using BPF.  I guess my next adventure is
> to understand how much BPF can take before dumping packets due to lack
> of buffer space - currently net.bpf.bufsize is 1048576, which is the
> maxbufsize.  Is it commonplace for BPF to drop packets?  (Forgive me,
> I have not searched the mailing list, as I just confirmed these
> results by instrumenting BPF.)  Could I raise the maxbufsize and still
> operate safely?  (I do have 8GB on a 64-bit system.)

I didn't notice that you needed bpf.  I can't offer any hope of
avoiding packet loss at rates near the ethernet limit with bpf or any
other heavy upstream processing.  My main systems (old ~2GHz UP
machines, one A64 and one AXP) are completely incapable of keeping up
with each other when bpf is enabled on the receiver.  With bpf, the
slowest one with an em receiver drops about 90% of packets at a send
rate of about 600 kpps (with the packets being looked at by a simple
tcpdump >/dev/null), while the fastest one with a bge receiver drops
"only" about 60% of packets at a send rate of about 400 kpps.  This is
consistent with there being no CPU and/or memory bandwidth to spare
without bpf, and with bpf increasing CPU/memory overheads by more
than 50%.
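
For completeness: net.bpf.bufsize is only the default per-descriptor
buffer size, and net.bpf.maxbufsize is the clamp, so raising the
latter only helps if the listener also requests the larger size with
BIOCSBLEN before binding to the interface.  Roughly like this (a
sketch; the bpf device node, the interface name and the 8MB figure
are all assumptions, and since bpf double-buffers, expect each open
descriptor to cost about twice the granted length):

/*
 * Request a BPF buffer larger than net.bpf.bufsize.  BIOCSBLEN must
 * precede BIOCSETIF; the kernel clamps the request to
 * net.bpf.maxbufsize.
 */
#include <sys/types.h>
#include <sys/ioctl.h>

#include <net/bpf.h>
#include <net/if.h>

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
        struct ifreq ifr;
        u_int blen = 8 * 1024 * 1024;   /* assumed request: 8MB */
        int fd;

        if ((fd = open("/dev/bpf0", O_RDONLY)) < 0) {
                perror("open");
                return (1);
        }
        if (ioctl(fd, BIOCSBLEN, &blen) < 0)
                perror("BIOCSBLEN");

        memset(&ifr, 0, sizeof(ifr));
        strlcpy(ifr.ifr_name, "bge0", sizeof(ifr.ifr_name));
        if (ioctl(fd, BIOCSETIF, &ifr) < 0) {
                perror("BIOCSETIF");
                return (1);
        }
        /* See what the kernel actually granted after clamping. */
        if (ioctl(fd, BIOCGBLEN, &blen) == 0)
                printf("granted %u bytes\n", blen);
        close(fd);
        return (0);
}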

Bruce

