PF/ALTQ Issues

Shane James shane at phpboy.co.za
Mon Jan 24 01:14:34 PST 2005


I'm running FreeBSD 5.3-STABLE. The only change I've made to the GENERIC kernel is adding the following devices and options:

device          pf
device          pflog
device          pfsync 

options         ALTQ
options         ALTQ_CBQ        # Class Based Queueing
options         ALTQ_RED        # Random Early Drop
options         ALTQ_RIO        # RED In/Out
options         ALTQ_HFSC       # Hierarchical Packet Scheduler
options         ALTQ_CDNR       # Traffic conditioner
options         ALTQ_PRIQ       # Priority Queueing
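
For completeness, after adding those options I rebuilt and installed the kernel the usual way (the KERNCONF name below is just what I called my copy of GENERIC):

cd /usr/src
make buildkernel KERNCONF=PFALTQ
make installkernel KERNCONF=PFALTQ
shutdown -r now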

This box is a P4 2.4GHz with 512MB of RAM.

Here is the output of 'netstat -m':
270 mbufs in use
267/32768 mbuf clusters in use (current/max)
0/3/4496 sfbufs in use (current/peak/max)
601 KBytes allocated to network
0 requests for sfbufs denied
0 requests for sfbufs delayed
0 requests for I/O initiated by sendfile
0 calls to protocol drain routines

Just to show that I'm not maxing out my mbufs.
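
For the record, the 32768 cluster ceiling is just the stock default; I only checked it, I haven't tuned anything:

sysctl kern.ipc.nmbclusters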

Please excuse how untidy the ALTQ limits/rules are; I've been playing around with them quite a bit to try to solve this issue.

#tables
table <zaips> persist file "/etc/zaips"   # all South African routes (my home country)
table <sodium> { 196.23.168.136, 196.14.164.130, 196.46.187.69 }
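
In case the tables themselves are suspect, this is roughly how I've been confirming that they load and contain what I expect (the address in the test line is just one ZA IP as an example):

pfctl -t zaips -T show | wc -l
pfctl -t zaips -T test 196.25.1.1
pfctl -t sodium -T show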

#############################
# AltQ on Uplink Interface
#############################
altq on $uplink_if hfsc bandwidth 100Mb queue { dflt_u, lan_u, local_u, intl_u, monitor_u }
        queue dflt_u bandwidth 64Kb hfsc(default realtime 512Kb upperlimit 512Kb)
        queue lan_u bandwidth 10Mb hfsc(realtime 10Mb upperlimit 10Mb)
        queue monitor_u bandwidth 64Kb hfsc(realtime 256Kb upperlimit 256Kb)

queue local_u bandwidth 10Mb hfsc(upperlimit 10Mb) { windows_u_l, blueworld-l_u, mail_u_l, unix_u_l }
        queue windows_u_l bandwidth 64Kb hfsc(realtime 192Kb upperlimit 320Kb)
        queue blueworld-l_u bandwidth 64Kb hfsc(realtime 64Kb upperlimit 192Kb)
        queue mail_u_l bandwidth 64Kb hfsc(realtime 256Kb upperlimit 320Kb)
        queue unix_u_l bandwidth 256Kb hfsc(realtime 256Kb upperlimit 256Kb)

queue intl_u bandwidth 10Mb hfsc(upperlimit 10Mb) { windows_u_i, blueworld_u_i, mail_u_i, unix_u_i }
        queue windows_u_i bandwidth 64Kb hfsc(upperlimit 64Kb)
        queue blueworld_u_i bandwidth 64Kb hfsc(upperlimit 64Kb)
        queue mail_u_i bandwidth 64Kb hfsc(realtime 64Kb upperlimit 64Kb)
        queue unix_u_i bandwidth 64Kb hfsc(upperlimit 64Kb)

#############################
# AltQ on Hosting Interface
#############################
altq on $hosting_if hfsc bandwidth 100Mb queue { dflt_d, lan_d, local_d, intl_d, sodium_d }
        queue dflt_d bandwidth 64Kb hfsc(default realtime 512Kb upperlimit 512Kb)
        queue lan_d bandwidth 10Mb hfsc(realtime 10Mb upperlimit 10Mb)

queue local_d bandwidth 10Mb hfsc(upperlimit 10Mb) { windows_ld, monitor_d, blueworld_ld, mail_d_l, unix_d_l }
        queue windows_ld bandwidth 64Kb hfsc(realtime 192Kb upperlimit 256Kb)
        queue monitor_d bandwidth 64Kb hfsc(realtime 256Kb upperlimit 256Kb)
        queue blueworld_ld bandwidth 64Kb hfsc(realtime 64Kb upperlimit 128Kb)
        queue mail_d_l bandwidth 64Kb hfsc(realtime 256Kb upperlimit 320Kb)
        queue unix_d_l bandwidth 256Kb hfsc(realtime 256Kb upperlimit 256Kb)

queue intl_d bandwidth 10Mb hfsc(upperlimit 10Mb) { windows_d_i, monitor_d_i, blueworld_d_i, mail_d_i, unix_d_i }
        queue windows_d_i bandwidth 64Kb hfsc(realtime 64Kb upperlimit 64Kb)
        queue monitor_d_i bandwidth 64Kb hfsc(upperlimit 64Kb)
        queue blueworld_d_i bandwidth 64Kb hfsc(realtime 32Kb upperlimit 64Kb)
        queue mail_d_i bandwidth 64Kb hfsc(upperlimit 64Kb)
        queue unix_d_i bandwidth 64Kb hfsc(upperlimit 64Kb)
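
Whenever I change the queue definitions I parse-check the whole ruleset before loading it, along these lines:

pfctl -nvf /etc/pf.conf        # parse and print only, don't load
pfctl -f /etc/pf.conf          # load for real once it parses cleanly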


Here is an example of how I'm assigning traffic to some of the queues:

# International queues
pass out on $uplink_if from <sodium> to any keep state queue mail_u_i
pass out on $hosting_if from any to <sodium> keep state queue mail_d_i

# Local queues
pass out on $uplink_if from <sodium> to <zaips> keep state queue mail_u_l
pass out on $hosting_if from <zaips> to <sodium> keep state queue mail_d_l
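
To double-check that traffic is actually matching these rules (and so landing in the intended queues), I've been watching the per-rule counters, e.g.:

pfctl -vsr | grep -A 1 mail_u_i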

Also, I am running Intel PRO/100 S (Intel EtherExpress) server cards on both interfaces. Both cards have been swapped out to confirm that this is not a hardware-related issue, which it isn't.

'pfctl -vsq' output for these 4 queues:


queue   mail_u_l bandwidth 256Kb hfsc( realtime 256Kb upperlimit 256Kb ) 
  [ pkts:       3592  bytes:    3624366  dropped pkts:      0 bytes:      0 ]
--

queue   mail_u_i bandwidth 64Kb hfsc( realtime 64Kb upperlimit 64Kb ) 
  [ pkts:       1277  bytes:     230620  dropped pkts:      0 bytes:      0 ]
--

queue   mail_d_l bandwidth 256Kb hfsc( realtime 256Kb upperlimit 256Kb ) 
  [ pkts:       3933  bytes:     856087  dropped pkts:      0 bytes:      0 ]
--
queue   mail_d_i bandwidth 64Kb hfsc( upperlimit 64Kb ) 
  [ pkts:       1185  bytes:    1559939  dropped pkts:      0 bytes:      0 ]
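
For what it's worth, the queues can also be watched live: 'pfctl -vvsq' loops and re-prints the statistics every few seconds, including a measured rate per queue.

pfctl -vvsq     # loops, showing measured packets/s and bits/s per queue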


Now, here is the issue.
With all of the queues in place, upstream ($uplink_if) bandwidth runs quite a bit slower than it's supposed to. Downstream ($hosting_if) runs at the correct speeds, and sometimes even faster than what I've assigned to it. Another strange thing is that traffic doesn't always seem to be assigned to the correct queues: sometimes a queue uses more bandwidth than I've allocated to it, and sometimes a lot less, even though it has all of that bandwidth at its disposal at test time.

The way I've been measuring the usage is with MRTG, plus lftp transfers to hosts on my peering network.
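
The lftp side is nothing fancy, just a straight download of a large file, something like this (host and filename are placeholders):

lftp -e 'get test-100M.bin; quit' ftp://mirror.on.peering.net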

Any help would be much appreciated.

Kind Regards,
Shane James
VirTek - http://www.virtek.co.za
O: 0861 10 1107
M: +27 (0) 82 786 3878
F: +27 (0) 11 388 5626

