dummynet queues hanging

Mark Knight lists at knigma.org
Fri Feb 21 19:13:03 UTC 2014


I'm trying to use Dummynet to throttle bandwidth at "peak" times. However, my configuration seems to be behaving very oddly. Before I go much further with debugging, can anyone see anything obviously wrong with it?

I have tried two similar configurations. The first works very reliably but doesn't make any attempt to distribute the available bandwidth between different flows:

  queue 40 config pipe 40 queue 5 mask src-ip 0xffffffff src-port 0xffff    (essentially redundant here: rule 525 sends traffic straight to the pipe)
  pipe 40 config bw 600Kbit/s type QFQ queue 5 mask dst-ip 0xffffffff
  add 525 pipe 40 ip from any to any via em0 out
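
For reference, this is how I sanity-check the first setup under load (rule and pipe numbers as above):

  sudo ipfw show 525        # packet/byte counters on rule 525 should climb
  sudo ipfw pipe 40 show    # per-flow state and drop counts for the pipe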

In the second configuration I pass traffic through a queue first, rather than directly through the pipe. My end goal is to cap the bandwidth of each host while also distributing each host's share fairly between the applications on that host.

  queue 40 config pipe 40 queue 5 mask src-ip 0xffffffff src-port 0xffff
  pipe 40 config bw 600Kbit/s type QFQ queue 5 mask dst-ip 0xffffffff
  add 525 queue 40 ip from any to any via em0 out
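
In case the legacy pipe/queue shorthand is part of the problem, I may also try spelling out the scheduler object explicitly. This is an untested sketch based on my reading of ipfw(8), so treat the exact keywords with suspicion:

  pipe 40 config bw 600Kbit/s mask dst-ip 0xffffffff
  sched 40 config type qfq
  queue 40 config sched 40 queue 5 mask src-ip 0xffffffff src-port 0xffff
  add 525 queue 40 ip from any to any via em0 out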

When I switch to using the queue, I start to see very odd behaviour as traffic levels increase. Typically, after just a few minutes, the scheduler or queues seem to get "stuck":

mkn at shrewd$ sudo ipfw sched list
00040: 600.000 Kbit/s    0 ms burst 0
 sched 40 type QFQ flags 0x1 256 buckets 1 active
    mask:  0x00 0x00000000/0x0000 -> 0xffffffff/0x0000
   Children flowsets: 40
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
181 ip           0.0.0.0/0      217.169.23.231/0     11701  3844619 1123 72289 2112

If I look at "ipfw queue list", I see that many of the flows have full buffers and are dropping packets like crazy; essentially, the queues seem to have stopped draining. I also get a few kernel messages when I list the queues:

	Feb 21 18:38:00 shrewd kernel: [29168] copy_obj_q ERROR type 5 queue -1 have 32 need 96
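
For anyone trying to reproduce this, I watch for the wedge with a crude loop while generating load:

	#!/bin/sh
	# Dump scheduler and queue state every 2 seconds so the moment
	# the queues stop draining shows up in the output.
	while sleep 2; do
	    date
	    ipfw sched list
	    ipfw queue list
	done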

I saw the same behaviour with FreeBSD 9.2 and now see it with FreeBSD 10.0. I have another queue with a slightly different configuration running on another interface and it's absolutely rock solid. The only obvious difference is that I have an inbound "ipfw fwd" rule running on em0, though that could be a red herring:

	00476 fwd 81.2.102.154,8090 tcp from any to any dst-port 80 via em0 in not tagged 2 // lan
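
One test still on my to-do list is ruling that fwd rule out by deleting it temporarily and re-running the load (untested so far):

	sudo ipfw delete 476
	# ... generate load, see whether the queues still wedge ...
	sudo ipfw add 476 fwd 81.2.102.154,8090 tcp from any to any dst-port 80 via em0 in not tagged 2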

Thanks in advance for any insight.

PS: The problem occurs with either QFQ or WF2Q+.
-- 
Mark Knight
Mobile: +44 7753 250584.  http://www.knigma.org/
Email: markk at knigma.org.  Skype: knigma

