FreeBSD 7, bridge, PF and syn flood = very bad performance
max at love2party.net
Sun Jan 27 06:49:58 PST 2008
On Sunday 27 January 2008, Stefan Lambrev wrote:
> Max Laier wrote:
> >> Well I think the interesting lines from this experiment are:
> >>     max     total  wait_total    count     avg  wait_avg  cnt_hold  cnt_lock  name
> >>      39  25328476    70950955  9015860       2         7   5854948   6309848  /usr/src/sys/contrib/pf/net/pf.c:6729 (sleep mutex:pf task mtx)
> >>  936935  10645209         350       50  212904         7       110        47  /usr/src/sys/contrib/pf/net/pf.c:980 (sleep mutex:pf task mtx)
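For anyone who wants to reproduce this kind of measurement: the output
above is FreeBSD 7's lock profiling statistics. Roughly - assuming a
kernel rebuilt with the LOCK_PROFILING option - the procedure is:

    # kernel config, then rebuild and reboot
    options LOCK_PROFILING

    # at runtime
    sysctl debug.lock.prof.enable=1    # start collecting
    # ... apply the packet load ...
    sysctl debug.lock.prof.enable=0    # stop collecting
    sysctl debug.lock.prof.stats       # dump the per-lock-site stats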
> > Yeah, those two are mostly the culprit, but a quick fix is not really
> > available. You can try to raise "set timeout interval" to something
> > bigger (e.g. 60 seconds), which will decrease the average hold time of
> > the second lock instance at the cost of increased peak memory usage.
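In pf.conf that would be the following, with 60 seconds just being the
example value from above (the default is 10):

    # expire states once every 60 seconds instead of every 10
    set timeout interval 60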
> I'll try this. At least memory doesn't seem to be a problem :)
> > I have ideas for how to fix this, but it will take much more time
> > than I currently have for FreeBSD :-\ In general this requires a
> > bottom-up redesign of pf's locking and of some data structures
> > involved in the state tree handling.
> > The first (= main) lock instance is also far from optimal, i.e. pf
> > is a congestion point in the bridge forwarding path. For this I also
> > have a plan to make at least the state table lookups run in parallel
> > to some extent, but again the lack of free time to spend coding
> > prevents me from doing it at the moment :-\
> Well, now we know where the issue is. The same problem seems to affect
> synproxy state btw.
> Can I expect better performance with IPFW's dynamic rules?
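That is expected - synproxy state runs under the same pf task mutex as
everything else in pf. For reference, a typical synproxy rule looks
like this ($ext_if and the port are just illustrative):

    # pf completes the TCP handshake itself before the server sees it
    pass in on $ext_if proto tcp to any port 80 flags S/SA synproxy state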
Not significantly better, I'd predict. IPFW's dynamic rules are also
protected by a single mutex, leading to congestion problems similar to
pf's. There should be a measurable constant improvement, as IPFW does
far fewer sanity checks - i.e. better performance at the expense of
less security. Which one is better suited for your setup really
depends on your needs.
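For comparison, a minimal stateful IPFW ruleset using dynamic rules
would look like this (rule numbers and the outbound-only policy are
just illustrative):

    # match packets against existing dynamic states first
    ipfw add 100 check-state
    # create a dynamic state for each new outbound TCP connection
    ipfw add 200 allow tcp from me to any setup keep-state
    # drop everything else
    ipfw add 65000 deny ip from any to any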
> I wonder how one can protect himself on a gigabit network and service
> more than 500pps.
> For example, in my test lab I see ~400k incoming packets per second,
> but if I activate PF, I see only 130-140k packets per second. Is this
> expected behavior if PF cannot handle that many packets?
As you can see from the hwpmc trace that started this thread, we don't
spend that much time in pf itself. The culprit is the pf task mutex,
which forces serialization in pf and congests the whole forwarding
path. Under different circumstances pf can handle more pps.
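For reference, the trace was taken with hwpmc, roughly along these
lines (the event name is illustrative and depends on your CPU):

    kldload hwpmc
    pmcstat -S instructions -O /tmp/samples.out   # sample system-wide
    # ... apply the packet load, then stop pmcstat with ^C ...
    pmcstat -R /tmp/samples.out -g                # convert to gprof(1) format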
> The missing 250k+ are not listed as discarded or other errors, which is
> strange.
As you slow down the forwarding, protocols like TCP will automatically
slow down as well, so the "missing" packets are simply never sent.
Unless you have UDP bombs blasting at your network, this is quite
expected.
/"\ Best regards, | mlaier at freebsd.org
\ / Max Laier | ICQ #67774661
X http://pf4freebsd.love2party.net/ | mlaier at EFnet
/ \ ASCII Ribbon Campaign | Against HTML Mail and News