multiple pipes cause slowdown

Sten Daniel Sørsdal sten.daniel.sorsdal at wan.no
Wed Dec 3 17:10:29 PST 2003


I read somewhere that dummynet was designed to simulate different kinds of
network connections, so dummynet itself is not at fault here; the effects
congestion has on TCP are. When data and ACKs share a throttled pipe, the
small ACKs get delayed behind bulk traffic and the sender's window stalls.
Use queues to give the small ACKs higher priority through the pipes.
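Something along these lines, for client A's upstream direction (untested
sketch; the queue numbers, rule numbers and weights are mine to illustrate,
and the iplen match for bare 40-byte ACKs requires ipfw2; ACKs carrying TCP
options such as timestamps are larger):

ipfw pipe 5100 config bw 1024Kbit/s
ipfw queue 1 config pipe 5100 weight 90
ipfw queue 2 config pipe 5100 weight 10
# bare ACKs (20-byte IP header + 20-byte TCP header, no payload) go first
ipfw add 100 queue 1 tcp from 192.168.1.50 to any recv wi0 tcpflags ack iplen 40
# everything else from the client shares the rest of the pipe
ipfw add 110 queue 2 all from 192.168.1.50 to any recv wi0

Both queues drain through the same 1Mbit/s pipe (WF2Q+), so the ACKs only
jump ahead when there is an actual backlog. Note that with
net.inet.ip.fw.one_pass=0, as in your sysctls, a packet leaving queue 1
re-enters the ruleset and would be queued again by rule 110, so set
one_pass=1 or arrange the rules to avoid the double match.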
The effect you describe for the wireless is the same thing, only with a few
more variables (packet loss, retransmissions, etc.).
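One more thing worth checking: dummynet's default per-pipe queue is 50
slots, which at 1Mbit/s is roughly 600 ms of full-size packets. That much
buffering inflates the RTT inside the pipe and slows TCP's window growth.
Shrinking the queue is a cheap experiment (the value below is only a guess):

ipfw pipe 100 config bw 1024Kbit/s queue 10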


> -----Original Message-----
> From: Vector [mailto:freebsd at itpsg.com] 
> Sent: 26 November 2003 21:43
> To: freebsd-ipfw at freebsd.org
> Subject: multiple pipes cause slowdown
> 
> I've got a FreeBSD system set up and I'm using dummynet to manage
> bandwidth. Here is what I am seeing:
> 
> We are communicating with a server on a 100Mbit Ethernet segment (fxp0 on
> the FreeBSD box), and 11Mbit wireless clients on wi0 are getting throttled
> with ipfw pipes. If I add two pipes limiting my two clients A and B to
> 1Mbit each, here is what happens:
> 
> Client A does a transfer to/from the server and gets 1Mbps up and 1Mbps down.
> Client B does a transfer to/from the server and gets 1Mbps up and 1Mbps down.
> Clients A & B do simultaneous transfers to the server and each get between
> 670 and 850Kbps.
> 
> If I delete the pipes and the firewall rules, they behave like regular
> 11Mbit unthrottled clients sharing the available wireless bandwidth
> (although not necessarily equally).
> 
> It gets worse when I start doing 3 or 4 clients, each at 1Mbit. I've also
> tried setting up 4 clients at 512Kbps and the performance does the same
> thing: essentially it gets cut significantly the more pipes we have. Here
> are the rules I'm using:
> 
> ipfw add 100 pipe 100 all from any to 192.168.1.50 xmit wi0
> ipfw add 100 pipe 5100 all from 192.168.1.50 to any recv wi0
> ipfw pipe 100 config bw 1024Kbits/s
> ipfw pipe 5100 config bw 1024Kbits/s
> 
> ipfw add 101 pipe 101 all from any to 192.168.1.51 xmit wi0
> ipfw add 101 pipe 5101 all from 192.168.1.51 to any recv wi0
> ipfw pipe 101 config bw 1024Kbits/s
> ipfw pipe 5101 config bw 1024Kbits/s
> 
> I've played with using in/out instead of recv/xmit, and even with not
> specifying a direction at all (which makes traffic to the client get cut
> in half, while traffic from the client remains as high as when I specify
> which interface to throttle on). ipfw pipe list shows no dropped packets
> and looks like it's behaving normally, other than the slowdown for
> multiple clients. I'm not specifying a delay, and latency does not seem
> abnormally high.
> 
> I am using 5.0-RELEASE and have HZ=1000 compiled into the kernel.
> Here are my sysctl vars:
> net.inet.ip.fw.enable: 1
> net.inet.ip.fw.autoinc_step: 100
> net.inet.ip.fw.one_pass: 0
> net.inet.ip.fw.debug: 0
> net.inet.ip.fw.verbose: 0
> net.inet.ip.fw.verbose_limit: 1
> net.inet.ip.fw.dyn_buckets: 256
> net.inet.ip.fw.curr_dyn_buckets: 256
> net.inet.ip.fw.dyn_count: 2
> net.inet.ip.fw.dyn_max: 4096
> net.inet.ip.fw.static_count: 72
> net.inet.ip.fw.dyn_ack_lifetime: 300
> net.inet.ip.fw.dyn_syn_lifetime: 20
> net.inet.ip.fw.dyn_fin_lifetime: 1
> net.inet.ip.fw.dyn_rst_lifetime: 1
> net.inet.ip.fw.dyn_udp_lifetime: 10
> net.inet.ip.fw.dyn_short_lifetime: 5
> net.inet.ip.fw.dyn_keepalive: 1
> net.link.ether.bridge_ipfw: 0
> net.link.ether.bridge_ipfw_drop: 0
> net.link.ether.bridge_ipfw_collisions: 0
> net.link.ether.bdg_fw_avg: 0
> net.link.ether.bdg_fw_ticks: 0
> net.link.ether.bdg_fw_count: 0
> net.link.ether.ipfw: 0
> net.inet6.ip6.fw.enable: 0
> net.inet6.ip6.fw.debug: 0
> net.inet6.ip6.fw.verbose: 0
> net.inet6.ip6.fw.verbose_limit: 1
> 
> 
> net.inet.ip.dummynet.hash_size: 64
> net.inet.ip.dummynet.curr_time: 99067502
> net.inet.ip.dummynet.ready_heap: 16
> net.inet.ip.dummynet.extract_heap: 16
> net.inet.ip.dummynet.searches: 0
> net.inet.ip.dummynet.search_steps: 0
> net.inet.ip.dummynet.expire: 1
> net.inet.ip.dummynet.max_chain_len: 16
> net.inet.ip.dummynet.red_lookup_depth: 256
> net.inet.ip.dummynet.red_avg_pkt_size: 512
> net.inet.ip.dummynet.red_max_pkt_size: 1500
> 
> Am I just doing something stupid, or does the dummynet/QoS implementation
> in FreeBSD need some work? If so, I may be able to help and contribute.
> Thanks,
> 
> vec
> 

