pf: BAD state happens often with portsnap fetch update

Daniel Hartmeier daniel at benzedrine.cx
Thu Oct 5 09:20:54 PDT 2006


On Thu, Oct 05, 2006 at 12:08:27PM -0400, Adam McDougall wrote:

> (44.18 is the squid server (trident), 37.163 is the system running portsnap (ice))
> 
> Oct  5 11:22:03 jolly-fw1 kernel: pf: BAD state: TCP 35.9.44.18:3128 35.9.44.18:3128 35.9.37.163:55357 
> [lo=646710754 high=646777361 win=33304 modulator=0 wscale=1] [lo=4033525074 high=4033590770 win=33304 
> modulator=0 wscale=1] 9:9 S seq=650709460 ack=4033525074 len=0 ackskew=0 pkts=5:4 dir=in,fwd
> Oct  5 11:22:03 jolly-fw1 kernel: pf: State failure on: 1       | 5

The client (37.163) is running out of random high source ports and
starts re-using ports from previous connections before the 2MSL wait
has expired.
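
A quick way to confirm this on the client (just a sketch, assuming the
stock FreeBSD netstat; how many is "too many" depends on the size of
your ephemeral port range) is to count sockets lingering in TIME_WAIT:

  # sockets from recently closed connections still in TIME_WAIT
  netstat -an -p tcp | grep TIME_WAIT | wc -l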

pf keeps states of closed connections around for a while (default is
90s), so late packets related to the old connection can be associated
with the state. Creating a second, concurrent state entry for the same
source/destination address:port quadruple is not possible.
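
The timeout values currently in effect on the firewall, including
tcp.closed, can be listed with pfctl:

  # list all pf state timeouts; tcp.closed defaults to 90 seconds
  pfctl -s timeouts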

You can

  a) lower pf's tcp.closed timeout, so states of closed connections get
     purged sooner (pf.conf example after this list).

  b) give the client more random high ports (sysctl net.inet.ip.portrange.*)
     or add aliases, if the client can make use of them concurrently
     (sysctl example after this list).

  c) reduce the connection establishment rate of the client. If portsnap
     needs one connection for every single file, that's a poor protocol
     when you expect a single client to fetch thousands of files in a
     few seconds.
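
For a), the timeout is set in pf.conf; the 10-second value below is only
an illustration, pick whatever suits your traffic:

  # pf.conf: purge states of fully closed connections after 10 seconds
  # instead of the default 90
  set timeout tcp.closed 10

then reload the ruleset with pfctl -f /etc/pf.conf. For b), a rough
sketch of widening the client's ephemeral port range with sysctl (the
values, and which of the portrange sysctls the application actually
allocates from, are assumptions to verify on your system):

  # let outgoing connections draw from a larger pool of source ports
  sysctl net.inet.ip.portrange.first=10000
  sysctl net.inet.ip.portrange.last=65535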

Daniel

