kern/154428: xn0 network interface and PF - Massive performance drop
Alex
alex at ahhyes.net
Tue Feb 1 01:30:10 UTC 2011
>Number: 154428
>Category: kern
>Synopsis: xn0 network interface and PF - Massive performance drop
>Confidential: no
>Severity: serious
>Priority: medium
>Responsible: freebsd-bugs
>State: open
>Quarter:
>Keywords:
>Date-Required:
>Class: sw-bug
>Submitter-Id: current-users
>Arrival-Date: Tue Feb 01 01:30:09 UTC 2011
>Closed-Date:
>Last-Modified:
>Originator: Alex
>Release: FreeBSD 8.2-RC2
>Organization:
>Environment:
FreeBSD srv.mydomain.net 8.2-RC2 FreeBSD 8.2-RC2 #4: Sun Jan 30 10:15:26 EST 2011 alex at srv.mydomain.net:/usr/obj/usr/src/sys/custom-server amd64
>Description:
Hi Guys,
I have been forced to file a PR, as I have had no answer on this from the freebsd-xen mailing list.
I am running FreeBSD under a Xen HVM environment with a commercial VPS provider. I recently went from running a GENERIC-style kernel to one that includes the XENHVM options. I now have a network interface called xn0 instead of re0, so it was obviously necessary to update my pf.conf to reflect the new interface name.
All I did was edit the pf.conf file and replace all instances of re0 with xn0. The performance since then has been awful. I was wondering why network connectivity was so slow: a download test from Apache struggled to reach 2KB/s. I disabled pf and the speed suddenly skyrocketed. Any ideas where to look? I have the following in my kernel for PF:
device pf
device pflog
device pfsync
options ALTQ
options ALTQ_CBQ # Class Based Queueing (CBQ)
options ALTQ_RED # Random Early Detection (RED)
options ALTQ_RIO # RED In/Out
options ALTQ_HFSC # Hierarchical Packet Scheduler (HFSC)
options ALTQ_PRIQ # Priority Queuing (PRIQ)
options ALTQ_NOPCC # Required for SMP build
and pf.conf (very basic setup):
--------------------------------
mailblocklist = "{ 69.6.26.0/24 }"
#blacklist = "{ 202.16.0.11 }"
# Rule 0 (xn0)
#pass in quick on xn0 inet proto icmp from any to (xn0) label "RULE 0 -- ACCEPT "
#block mail server(s) that continue to try and send me junk
block in quick on xn0 inet proto tcp from $mailblocklist to (xn0) port 25
#block anyone else who's in the blacklist
#block in quick on xn0 inet from $blacklist to (xn0)
pass in quick on xn0 inet proto tcp from any to (xn0) port { 110, 25, 80, 443, 21, 53 } flags any label "RULE 0 -- ACCEPT "
pass in quick on xn0 inet proto udp from any to (xn0) port 53 label "RULE 0 -- ACCEPT "
#
# Rule 1 (lo0)
pass quick on lo0 inet from any to any no state label "RULE 1 -- ACCEPT "
#
# Rule 2 (xn0) -- allow all outbound connectivity
pass out quick on xn0 inet from any to any label "RULE 2 -- ACCEPT "
# Rule 3 (xn0)
# deny all not matched by above
block in quick on xn0 inet from any to any no state label "RULE 3 -- DROP "
--------------------------
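For reference, a ruleset like the one above can be syntax-checked and reloaded without rebooting, which makes it easy to rule out a simple rule mistake (standard pfctl usage; this assumes the default /etc/pf.conf path):

```shell
# Parse the ruleset without loading it -- catches syntax errors only
pfctl -nf /etc/pf.conf

# Load the ruleset, then confirm what the kernel actually has
pfctl -f /etc/pf.conf
pfctl -sr    # show the loaded filter rules
pfctl -si    # show state-table counters and memory limits
```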
Any ideas why I would be seeing such a performance hit? I need to get to the bottom of this, as leaving a public-facing machine with its firewall disabled is bad news.
I am not sure whether this is a PF or network interface issue.
>How-To-Repeat:
Install FreeBSD 8.2-RC2 in a Xen HVM environment (this could also affect other versions of FreeBSD), build the XENHVM kernel, then enable a simple PF ruleset like the one above. Test network throughput with PF enabled and again with PF disabled, and witness the difference.
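The comparison can be scripted roughly as follows (a sketch only; it assumes root, fetch(1), and a reachable test URL -- TESTURL below is a placeholder, not a real host):

```shell
#!/bin/sh
# Rough throughput comparison with PF enabled vs. disabled.
# TESTURL is a placeholder; point it at a large file served
# from outside the VM.
TESTURL="http://example.com/testfile"

measure() {
    start=$(date +%s)
    fetch -o /dev/null "$TESTURL"
    end=$(date +%s)
    echo "elapsed: $((end - start))s"
}

pfctl -e                  # enable PF (ruleset already loaded)
echo "== PF enabled =="
measure

pfctl -d                  # disable PF
echo "== PF disabled =="
measure

pfctl -e                  # re-enable the firewall when done
```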
>Fix:
>Release-Note:
>Audit-Trail:
>Unformatted: