simple, adaptive bandwidth throttling with ipfw/dummynet ?

Ian Smith smithi at nimnet.asn.au
Sat Mar 1 22:31:44 PST 2008


On Sun, 2 Mar 2008, Peter Jeremy wrote:
 > On Fri, Feb 29, 2008 at 02:28:04PM -0800, Juri Mianovich wrote:
 > >"after 30 minutes of maxed dummynet rule, add X mbps
 > >to the rule for every active TCP session, with a max
 > >ceiling of Y mbps"
 > >
 > >and:
 > >
 > >"after 30 minutes of less than max usage, subtract X
 > >mbps from the rule every Y minutes, with a minimum
 > >floor of Z"
 > >
 > >Make sense ?
 > 
 > It doesn't really make sense to me but it's your firewall and you are
 > free to implement whatever rules you like.

:)
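Mechanically, though, what Juri describes could be sketched in sh along
these lines.  All the numbers are invented, and deciding whether the
pipe has actually been saturated over the window is left as a
placeholder -- this is an outline of the control loop, nothing tested:

```shell
#!/bin/sh
# Sketch only: adaptive bandwidth adjustment as described above.
# step/ceil/floor correspond to X/Y/Z; all values are invented.

step=1000       # Kbit/s to add or remove per adjustment (X)
ceil=10000      # Kbit/s ceiling (Y)
floor=2000      # Kbit/s floor (Z)
bw=$floor       # current pipe bandwidth in Kbit/s

adjust() {      # $1 = 1 if the pipe was saturated over the last window
        if [ "$1" -eq 1 ]; then
                bw=$(( bw + step ))
                [ $bw -gt $ceil ] && bw=$ceil
        else
                bw=$(( bw - step ))
                [ $bw -lt $floor ] && bw=$floor
        fi
        # apply it (needs a live ipfw, so commented out here):
        # ipfw pipe 1 config bw ${bw}Kbit/s
        echo $bw
}
```

Run from cron or a sleep loop, with the saturation test built from
sampled counters.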

 > >If I wanted to do this myself with a shell script, is
 > >there any way to test a particular dummynet rule for
 > >its current "fill rate" - OR - a simple way to test if
 > >a particular dummynet rule is currently in enforcement
 > >?
 > 
 > The system doesn't maintain stats on the instantaneous "fill rate"
 > of pipes/queues.  All it will report is total counts of traffic
 > through and in the pipe/queue.  Since the format wasn't clear to
 > me from a quick read of the man page, the following is a breakdown
 > of the output, with added notes:
 > fwall# ipfw pipe list
 > 00001:   6.400 Mbit/s    0 ms   50 sl. 1 queues (1 buckets) droptail
 >     mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000
 > BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
 >   0 tcp  192.168.123.200/56599   150.101.135.3/61455 122097  6353558  0    0 397
 >    |----- dummynet accumulation bucket details -----|---- Totals ---|Queued |

 > 'dummynet accumulation bucket details' gives the details of the most
 >  recent (I think) packet matching the specific bucket mask

Yes, but I'm not sure whether it's the last packet into or out of the queue.

 > 'Totals' is total bytes and packets through that particular bucket
 > 'Queued' refers to bytes and packets for that bucket currently queued
 > 'Drp' is the number of packets dropped.
 > 
 > You would need to calculate a rate by periodically sampling the
 > counts.  You can get a rough idea of whether a particular dummynet rule is
 > restricting traffic flow by looking for non-zero queued counts (though
 > keep in mind that it is normal for a packet to occasionally be queued).

Also, if there's any burstiness in the flow (i.e. the queue fully or
partially emptying between samples), you could easily misinterpret the
overall flow.
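To put rough numbers on that, here's a small sh sketch of the periodic
sampling idea, assuming the field layout shown in the breakdown above
(total bytes in column 6 of the per-bucket line -- that may well differ
between versions, and 'ipfw pipe N list' syntax is assumed to work on
yours):

```shell
#!/bin/sh
# Sketch: derive an average rate from two samples of dummynet's
# total-bytes counter.

# rate in bit/s, given old and new byte counts and the interval (s)
rate_bps() {
        echo $(( ($2 - $1) * 8 / $3 ))
}

# total bytes through pipe $1: sum column 6 of the per-bucket lines
# (numeric BKT in column 1), skipping config/mask/header lines
pipe_bytes() {
        ipfw pipe $1 list | \
            awk 'NF >= 9 && $1 ~ /^[0-9]+$/ {sum += $6} END {print sum + 0}'
}

# Usage (commented out -- needs a live ipfw):
# old=`pipe_bytes 1`; sleep 10; new=`pipe_bytes 1`
# echo "pipe 1: `rate_bps $old $new 10` bit/s"
```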

 > Assuming you have the TCP sessions spread across distinct buckets
 > (either with multiple pipes/queues or with masks to split them up), my

I think this would be the way to go.  Juri said he only has one pipe
defined, and managing multiple sessions through it would have to be
handled by some tricky out-of-band means.

Personally I've found it easier to monitor recv/sent throughput per
host over a period by parsing the output of 'ipfw show' on rules
numbered by IP address than by trying to parse 'ipfw pipe show' output,
using sh rather than perl, but everyone's mileage varies.  An extract:

subnet="192.168.0"
base=27000                      # ctc 'preweb' skipto rules
# $ip holds the host octet, set earlier in the script
if [ $ip -eq 1 ]; then ip="*"; recvrule=26890; sentrule=26900
else recvrule=$(($base + $ip * 10)); sentrule=$(($recvrule + 5)); fi
getbytes() {                    # byte counter (third column) of rule $1
        ipfw show $1 2>/dev/null | awk '{print $3}'
}
oldrx=`getbytes $recvrule` ; oldtx=`getbytes $sentrule`
[..]
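A hypothetical sketch of how such sampled counters turn into rates
('getbytes' and the old* variables as in the extract above; the
interval and the output format here are invented, not from the actual
script):

```shell
#!/bin/sh
# Re-sample the per-IP rule counters after an interval and report
# throughput.  Assumes getbytes, recvrule, sentrule, oldrx and oldtx
# are already set up as in the extract above.

interval=60                     # seconds between samples (assumed)

sample() {
        newrx=`getbytes $recvrule` ; newtx=`getbytes $sentrule`
        # bytes over the interval -> bit/s
        rxbps=$(( (newrx - oldrx) * 8 / interval ))
        txbps=$(( (newtx - oldtx) * 8 / interval ))
        echo "$subnet.$ip recv $rxbps bit/s sent $txbps bit/s"
        oldrx=$newrx ; oldtx=$newtx
}
```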

 > suggestion would be a perl script that regularly does 'ipfw pipe list'
 > or 'ipfw queue list' and use change_in_total_bytes/time to calculate
 > average throughput per session.  Then use a leaky bucket on the
 > average throughput to trigger pipe/queue re-configurations as desired.

Please explain 'leaky bucket'?

Someone on questions@ recently mentioned using one pipe with masks to
limit traffic per-host, then feeding that through another pipe limiting
overall bandwidth for the lot or for distinct subgroups, but due to a
crash I'm several days behind and haven't yet caught up with how that's
done, or indeed whether it can be done on a filtering bridge using
ipfw1 and old bridge(4) on 4.8-RELEASE, which I'm stuck with using for
a while yet.
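For the record, my understanding of that two-stage arrangement, with
invented rule numbers, addresses and bandwidths.  It relies on
net.inet.ip.fw.one_pass=0 so that packets leaving the first pipe
re-enter the ruleset and can match the second; whether ipfw1 on 4.8
supports all of this I don't know:

```shell
# Sketch only -- numbers and addresses are made up for illustration.

# Let packets continue through the ruleset after a pipe:
sysctl net.inet.ip.fw.one_pass=0

# Stage 1: one dynamic queue per source host (mask on the low byte)
ipfw pipe 1 config bw 512Kbit/s mask src-ip 0x000000ff
ipfw add 1000 pipe 1 ip from 192.168.0.0/24 to any out

# Stage 2: aggregate cap for the whole subnet
ipfw pipe 2 config bw 6400Kbit/s
ipfw add 2000 pipe 2 ip from 192.168.0.0/24 to any out
```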

cheers, Ian
