Slow speeds experienced with Dummynet

Luigi Rizzo rizzo at iet.unipi.it
Sat Feb 20 09:50:37 UTC 2010


On Fri, Feb 19, 2010 at 10:48:32PM +0400, rihad wrote:
> Hi, all,
> 
> Recalling my old posting "dummynet dropping too many packets" dated 
> October 4, 2009, the problem isn't over just yet. This time, there are 
> no interface i/o drops (just a reminder: we have 2 bce(4) GigE cards 
> connected to a Cisco router, one for input, and one for output. The box 
> itself does some traffic accounting and enforces speed limits w/ 
> ipfw/dummynet. There are normally around 5-6k users online).

If I remember correctly, the previous discussion ended when you
raised intr_queue_maxlen (and perhaps increased HZ) so that the
bursts produced by the periodic invocation of dummynet_io()
would not overflow that queue.
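
For reference, a minimal sketch of that tuning, assuming the stock
FreeBSD 7.x tunables (the values below are the ones from your post):

    # /etc/sysctl.conf: enlarge the IP input queue so the per-tick
    # bursts emitted by dummynet are not dropped before ip_input()
    net.inet.ip.intr_queue_maxlen=5000

    # /boot/loader.conf: a higher HZ makes the dummynet timer fire
    # more often, so each tick releases a smaller burst
    kern.hz="4000"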

From the rest of your post it is not completely clear whether you
have found no working configuration at all, or whether there is some
setting (e.g. "queue 1000" or larger) which does produce a smooth
experience for your customers.

Another thing I'd like to understand is whether all of your pipes
have a /32 mask, or whether some cover multiple hosts.
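
To illustrate the difference, a sketch assuming the usual ipfw
dummynet syntax (the pipe numbers are arbitrary):

    # one dynamic queue per destination host (/32):
    ipfw pipe 601 config bw 512kbit/s mask dst-ip 0xffffffff queue 100
    # one queue shared by an entire /24 -- far easier to overflow:
    ipfw pipe 602 config bw 512kbit/s mask dst-ip 0xffffff00 queue 100

    # show the pipes together with the dynamic queues they created:
    ipfw pipe show
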
A typical TCP connection has around 50 packets in flight when the
window is fully open (which in turn is unlikely to happen on a 512k
pipe), so a queue of 100-200 slots is unlikely to overflow.
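For example, at 512kbit/s with a 100ms round-trip time the
bandwidth-delay product is only 64000 bytes/s * 0.1s = 6400 bytes,
about four full-size 1500-byte packets, so an open window mostly
sits in the pipe's queue rather than in flight.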

In fact, long queues are very detrimental to customers because
they increase the delay of the congestion control loop -- as a rule
of thumb, you should limit the queue size to at most 1-2s worth
of data.
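Worked out for a 512kbit/s pipe: one second of data is 512000/8 =
64000 bytes, i.e. roughly 42 full-size 1500-byte packets, so 40-90
slots already covers 1-2s; by the same arithmetic, a 1000-slot queue
of 1500-byte packets holds about 23 seconds of backlog.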

cheers
luigi

> Traffic shaping is accomplished by this ipfw rule:
> pipe tablearg ip from any to table(0) out
> where table(0) contains those 5-6k IP addresses. The pipes themselves 
> are GRED (or taildrop, it doesn't matter):
> ipfw pipe 512 config bw 512kbit/s mask dst-ip 0xffffffff gred 0.002/900/1000/0.1 queue 1000
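
(In dummynet's gred w_q/min_th/max_th/max_p notation that is
w_q=0.002, min_th=900, max_th=1000, max_p=0.1; with min_th that
close to the 1000-slot queue limit, GRED behaves almost like plain
taildrop, which would explain why the choice doesn't matter.)
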
> Taking this template, the speeds range from 512kbit/s to tens of mbps.
> With the setup as above, very many users complain about very slow
> downloads and slow browsing. systat -ifstat, refreshed every 5 seconds,
> does reveal large differences between successive displays: where around
> 800-850 mbps is what's to be expected, the load doesn't stay within
> those limits, jumping to as low as 620-odd mbps and to somewhere in the
> 700s. Now imagine this: once I turn dummynet off (by short-circuiting
> an "allow ip from any to any" rule before the "pipe tablearg" rule) all
> user complaints stop, and the traffic load stays stable at around
> 930-950 mbps.
> 
> Does this have anything to do with "dummynet bursts"? How can I beat
> that? If I keep the pipe queue size at 2000 slots, the
> net.inet.ip.dummynet.io_pkt_drops sysctl stops increasing; once I
> tweak the value down to as low as 100 slots, it starts rising
> steadily at about 300-500 pps. I had hoped that smaller queue sizes
> would counter the negative effects of dummynet burstiness, and I
> guess they did, but not in a very decisive manner.
> 
> 
> FreeBSD 7.1-RELEASE-p10
> kern.hz=4000
> kern.ipc.nmbclusters=111111
> net.inet.ip.fastforwarding=1
> net.inet.ip.dummynet.io_fast=1
> net.isr.direct=0
> net.inet.ip.intr_queue_maxlen=5000
> net.inet.ip.dummynet.hash_size=512
> net.inet.ip.dummynet.max_chain_len=16
> 
> net.inet.ip.intr_queue_drops: 0
> systat -ip shows zero output drops at times of trouble. The netstat -s
> counter "output packets dropped due to no bufs, etc." is also fine, and
> netstat -m shows nothing suspicious.
> 
> P.S. Two "bloody expensive" Intel 10 GigE cards are on their way to us
> to replace the Broadcom ones; in the meantime, what should I try?
> Thanks for reading.

