dummynet dropping too many packets

Luigi Rizzo rizzo at iet.unipi.it
Mon Oct 5 12:25:51 UTC 2009


On Mon, Oct 05, 2009 at 05:12:11PM +0500, rihad wrote:
> Luigi Rizzo wrote:
> >On Mon, Oct 05, 2009 at 04:29:02PM +0500, rihad wrote:
> >>Luigi Rizzo wrote:
> >...
> >>>you keep omitting the important info i.e. whether individual
> >>>pipes have drops, significant queue lengths and so on.
> >>>
> >>Sorry. Almost every pipe has 0 in the last Drp column, but some are
> >>above zero. I'm just not sure how this can be helpful to anyone.
> >
> >because you were complaining about 'dummynet causing drops and
> >waste of bandwidth'.
> >Now, drops could be due to either
> >1) some saturation in the dummynet machine (memory shortage, cpu
> >   shortage, etc.) which causes unwanted drops;
> >
> I too think the box is hitting some other global limit and dropping
> packets. If not, then why isn't there a single drop between 4 a.m. and
> 10 a.m., when the traffic load is 250-330 Mbit/s?

there may be different reasons, e.g. the big offenders were
idle when you saw no drops. You still do not have enough
information on which packets are dropped and where,
so you cannot prove your assumptions.
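
A sketch of how to gather it (the paths and the one-minute
interval are just examples): snapshot the per-pipe counters
twice and diff them, so you see which pipes dropped during the
interval rather than since the counters were last reset:

    ipfw pipe show > /tmp/pipes.1
    sleep 60
    ipfw pipe show > /tmp/pipes.2
    diff /tmp/pipes.1 /tmp/pipes.2

and watch mbuf usage at the same time with 'netstat -m'.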

Also, below:
1. increasing the queue size won't help at all. A flow that
   overflows a queue of 1000 slots will also overflow a queue
   of 10k slots: a sustained overload fills any finite queue,
   so the extra slots only delay the first drop (roughly two
   minutes for 9000 extra 1500-byte slots at 1 Mbit/s of excess
   traffic) and add queueing latency in the meantime.

2. your test with 'ipfw allow ip from any to any' does not
   prove that the interface queue is not saturating, because
   you also remove the burstiness that dummynet introduces,
   and so the queue is driven differently.
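
If you want to look at the interface output queue directly,
netstat can report drop counts per interval; something like
this (the bce1 name is from your setup, the 1-second interval
is arbitrary):

    netstat -w 1 -d -I bce1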

good luck
luigi

> >2) intentional drops introduced by dummynet because a flow exceeds
> >   its queue size. These drops are those shown in the 'Drop'
> >   column in 'ipfw pipe show' (they are cumulative, so you
> >   should do an 'ipfw pipe delete; ipfw pipe 5120 config ...'
> >   whenever you want to re-run the stats, or compute the
> >   differences between subsequent reads, to figure out what
> >   happens).
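> >   For example (5120 is the pipe number from your own rules;
> >   the bandwidth and queue values are just placeholders):
> >
> >      ipfw pipe 5120 delete
> >      ipfw pipe 5120 config bw 512Kbit/s queue 1000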
> >
> >If all drops you are seeing are of type 2, then there is nothing
> >you can do to remove them: you set a bandwidth limit, the
> >client is sending faster than it should, perhaps with UDP
> >so even RED/GRED won't help you, and you see the drops
> >once the queue starts to fill up.
> >Examples below: the entries in bucket 4 and 44
> >
> Then I guess I'm left with increasing the slots and seeing how it
> goes. Currently it's set to 10000 for each pipe. Thanks for your and
> Eugene's efforts, I appreciate it.
> 
> >If you are seeing drops that are not listed in 'pipe show'
> >then you need to investigate where the packets are lost,
> >again it could be on the output queue of the interface
> >(due to the burstiness introduced by dummynet), or shortage
> >of mbufs (but this did not seem to be the case from your
> >previous stats) or something else.
> >
> This indeed is not a problem, as proved by the fact that, like I said,
> short-circuiting with "ipfw allow ip from any to any" before the
> dummynet pipe rules instantly eliminates all drops, and the bce0 and
> bce1 loads even out (bce0 is used for input, and bce1 for output).
> 
> >It's all up to you to run measurements, possibly
> >without omitting potentially significant data
> >(e.g. sysctl -a net.inet.ip)
> >or making assumptions (e.g. you have configured
> >5000 slots per queue, but with only 50k mbufs in total
> >there is no chance to guarantee 5000 slots to each
> >queue -- all you will achieve is to give a lot of slots
> >to the greedy nodes, and very little to the other ones)
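> >(to put numbers on it: just ten queues that fill their 5000
> >slots would by themselves pin the entire 50k mbuf budget)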
> >
> Well, I've been monitoring this stuff. It has never risen above 20000
> mbufs (111111 is the current limit).

