IPFW DUMMYNET: Several pipes after each other

Ian Smith smithi at nimnet.asn.au
Tue Jan 27 07:41:30 PST 2009


On Mon, 26 Jan 2009, Sebastian Mellmann wrote:
 > Ian Smith wrote:
 > > On Thu, 22 Jan 2009 08:10:09 +0100 (CET)
 > >  >
 > >  > So far I've got those rules:
 > >  >
 > >  > in_if="em0"
 > >  > out_if="em1"
 > >  > management_if="em2"
 > >  > in_ip="100.100.100.1"
 > >  > out_ip="200.200.200.1"
 > >  > management_ip="172.16.0.201"
 > >  > client1_subnet="192.168.5.0/26"
 > >  > client2_subnet="192.168.6.0/26"
 > >  > server_subnet="192.168.7.0/24"
 > >  >
 > >  > download_bandwidth="6144Kbit/s"
 > >  > upload_bandwidth="1024Kbit/s"
 > >  > delay="0"
 > >  > queue_size="10"
 > >
 > > 10 slots ie packets is likely too small a queue size at these rates.
 > > You want to check the dropped packet stats from 'ipfw pipe show' re
 > > that; see the section in ipfw(8) about calculating sizes / delays.
 > >
 > 
 > I had a look at the ipfw howto on the freebsd site [1], but I'm not 100%
 > sure how to choose a "good" value for the queue size.

Neither am I :) but I'm using some values that seem to work ok.  Well, 
actually, on checking, since we went from 1500/256kbps to 8192/384kbps I 
might play with it a bit more, as I'm noticing 0.6% or so drops on a 
couple of pipes.

 > [1] http://www.freebsd-howto.com/HOWTO/Ipfw-HOWTO

That's a very good ipfw tutorial; parts of it are a bit outdated 
(FreeBSD 4.x) but it covers a lot of useful background.  I just skimmed 
much of it now and nothing I read jarred, unlike the Handbook section.

 > If I choose the default (50 packets) it means that it takes approx. 100ms
 > (50 * 1500 bytes = 600 kbit; 600 kbit / 6144 kbit/s) to fill the queue.
 > So the question is: which value should I choose for the queue?

It's going to depend on lots of things: your workload, upstream push .. 
you could start closer to the default and adjust as necessary?
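
If it helps, a rough starting point using the usual bandwidth * delay 
arithmetic (the numbers below are only illustrative, not a 
recommendation): at 6144 kbit/s, 100ms worth of queue is

  6144 kbit/s * 0.1 s ~= 614 kbit ~= 77 KB ~= 50 full-size packets

so something like

  ipfw pipe 1 config bw 6144Kbit/s queue 50
  ipfw pipe 1 show    # last column is drops; adjust if it keeps climbing

then grow or shrink the queue depending on whether you're seeing drops 
or more latency than you can live with.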

 > > I suggest using 'in recv' and 'out xmit' rather than via for these, for
 > > the sake of clarity.  'in recv' and 'in via' come to the same thing, as
 > > only the receive interface is known on inbound packets, but 'out via'
 > > applies to packets that were *received* on the specified interface as
 > > well as those going out on that interface after routing, which can lead
 > > to surprising results sometimes, and being more specific never hurts ..
 > 
 > Thanks for the hint.
 > I'll change that.

Also, I'd take both that howto's and ipfw(8)'s advice about your faster 
inside pipe, and use one pipe per traffic direction (i.e. full duplex).
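
Something along these lines, purely as a sketch (pipe/rule numbers and 
queue sizes are arbitrary, and I'm assuming em1 is your outside 
interface, per your out_if above):

  # upload: client1 traffic leaving on the outside interface
  ipfw pipe 10 config bw 1024Kbit/s queue 50
  ipfw add 1000 pipe 10 ip from 192.168.5.0/26 to any out xmit em1

  # download: client1 traffic arriving on the outside interface
  ipfw pipe 11 config bw 6144Kbit/s queue 50
  ipfw add 1010 pipe 11 ip from any to 192.168.5.0/26 in recv em1

One pipe and one rule per direction also keeps the counters separate, 
which makes 'ipfw pipe show' much easier to read.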

 > >  > But when I have a look at the pipes with 'ipfw show' I can only see
 > >  > packets going through pipe 50 and nothing going through the other pipes
 > >  > (which makes sense actually, since IPFW works that way?).
 > >
 > > IPFW works that way if you (likely) have net.inet.ip.fw.one_pass=1 .. so
 > > that packets exiting from pipes aren't seen by the firewall again.  If
 > > you set one_pass=0, packets are reinjected into the firewall at the rule
 > > following the pipe (or queue) action, which is what you want to do here.
 > 
 > Actually this is also described in the manpage of ipfw(8).
 > Shame on me ;-)

As penance, read 7 times before sleeping with it under your pillow :)
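
For the record, roughly what that looks like (again only a sketch, with 
made-up rule numbers):

  sysctl net.inet.ip.fw.one_pass=0

  # with one_pass=0, a packet matched by rule 1000 re-enters the
  # ruleset at rule 1010 once it leaves pipe 10, so later pipe/queue
  # rules still get a chance at it
  ipfw add 1000 pipe 10 ip from 192.168.5.0/26 to any out xmit em1
  ipfw add 1010 count ip from 192.168.5.0/26 to any out xmit em1

If you decide to keep it, net.inet.ip.fw.one_pass=0 can go in 
/etc/sysctl.conf so it survives a reboot.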

 > > And you'll surely need a much larger queue for this pipe, at 100Mbit/s.
 > 
 > As already asked above:
 > 
 > How do I know the queue is large or small enough for my needs?

I'm never sure, so I tend to experiment.  How fast your hardware is and 
the kern.hz setting could be significant factors, as could the TCP/UDP 
mix and other factors I know little about.  Reducing reported packet 
drops is about all I've used as a guide so far.  This one is a FreeBSD 
4.8 box, a 2.4GHz P4 doing little but acting as a filtering bridge 
between an 8192/384kbps ADSL link and nests of mostly XP boxes in 3 LAN 
groups:

!ipfw pipe show | egrep 'tcp|bit'
00010: 256.000 Kbit/s    0 ms  30 KB 1 queues (1 buckets) droptail
  0 tcp     192.168.0.23/1043     207.46.17.61/80    7196387 2897628161  0    0 9706
00020:   5.120 Mbit/s    0 ms  50 KB 1 queues (1 buckets) droptail
  0 tcp     207.46.17.61/80       192.168.0.23/1043  9977802 12858014698  0    0 63260

00040:  96.000 Kbit/s    0 ms  20 KB 1 queues (1 buckets) droptail
  0 tcp     192.168.0.45/1037    66.249.89.147/443   2315107 299340364  0    0 2086
00050:   1.536 Mbit/s    0 ms  40 KB 1 queues (1 buckets) droptail
  0 tcp    66.249.89.147/443      192.168.0.45/1037  3279021 3802388928  0    0 22433

00060: 192.000 Kbit/s    0 ms  30 KB 1 queues (1 buckets) droptail
  0 tcp     192.168.0.64/1032    207.46.106.36/1863  1847947 563209421  0    0 141
00070:   3.072 Mbit/s    0 ms  40 KB 1 queues (1 buckets) droptail
  0 tcp    207.46.106.36/1863     192.168.0.64/1032  2438211 3075075035  0    0 4550

It's nearly all streaming rather than more interactive traffic, so pipe 
latency isn't so much of a concern.  Anyway, I rarely catch any traffic 
still in-queue (the two columns before the final drop count), which is 
what you'd stare at for tuning.
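
For your 100Mbit/s internal pipe the same sort of arithmetic applies, 
just with whatever queueing delay you're prepared to tolerate (5ms here 
is only my guess): 100 Mbit/s * 5 ms ~= 500 kbit ~= 62 KB, so something 
like 'queue 64Kbytes' might be a saner starting point there than a 
packet count.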

Also, that's aggregate traffic, not per IP as with your masks (which 
look wider than necessary; 0x0000ffff covers a /16), so you may wind up 
with lots of separate dynamic queues sharing a pipe, which may look very 
different.  How many hosts, and how much memory can you spare for each?
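
E.g. if what you're after is one dynamic queue per client host within a 
/26, a narrower mask is enough (sketch only, pipe number arbitrary):

  # 0x0000003f keeps just the 6 host bits of a /26, so at most 64
  # dynamic queues per pipe rather than the 65536 a 0x0000ffff mask allows
  ipfw pipe 2 config bw 6144Kbit/s queue 50 mask dst-ip 0x0000003f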

HTH, Ian

