Large scale NAT with PF - some weird problem
Milan Obuch
freebsd-pf at dino.sk
Mon Jun 29 11:05:23 UTC 2015
On Mon, 29 Jun 2015 12:46:14 +0200
Daniel Hartmeier <daniel at benzedrine.ch> wrote:
> On Sun, Jun 21, 2015 at 01:32:36PM +0200, Milan Obuch wrote:
>
> > One observation, on pfctl -vs info output - when src-limit counters
> > rises to 30 or so, I am getting first messages someone has problem.
> > Is it only coincidence or is there really some relation to my
> > problem?
>
> This might be a clue. That counter shouldn't increase. It means
> something triggered a PFRES_SRCLIMIT.
>
OK, I will keep an eye on this for some time, too. I do not have much
knowledge of pf internals, so my observations may or may not be
relevant, just like my questions.
> Are you using source tracking for anything else besides the NAT sticky
> address feature?
>
I recently reviewed some pfctl output, and I think this mechanism is
used in other scenarios as well, namely the following one for ssh
protection:
block in quick on $if_ext inet proto tcp from <abusive_ips> to any port 22
pass in on $if_ext proto tcp to x.y.24.0/22 port ssh flags S/SA \
    keep state (max-src-conn 10, max-src-conn-rate 5/5, \
    overload <abusive_ips> flush)
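For comparison, a NAT rule using the sticky-address feature (the other
consumer of source tracking discussed above) might look like the sketch
below. The interface name and address pool here are placeholders, not
taken from my actual ruleset:

```
# pf.conf sketch (names are assumptions): sticky-address makes pf map
# each source IP to the same translation address across connections,
# which requires allocating a source node entry per source
ext_pool = "{ a.b.c.1 - a.b.c.254 }"
nat on $if_ext from 10.0.0.0/8 to any -> $ext_pool round-robin sticky-address
```

If the source node for a client cannot be allocated, the translation
fails, which would match the PFRES_SRCLIMIT symptom described above.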
> If not, the only explanation for a PFRES_SRCLIMIT in a translation
> rule is a failure of pf.c pf_insert_src_node(), which could only be an
> allocation failure with uma_zalloc().
>
> Do you see any allocation failures? Log entries about uma, "source
> nodes limit reached"? How about vmstat -m?
>
Where would these failures show up? I see nothing in /var/log/messages.
As for 'vmstat -m', I think the following lines could be of some
interest:
        Type InUse MemUse HighUse Requests  Size(s)
     pf_hash     3  1728K       -        3
     pf_temp     0     0K       -      955  32,64
    pf_ifnet    21     7K       -      282  128,256,2048
     pf_osfp  1130   102K       -     6780  32,128
     pf_rule   222   129K       -      468  128,1024
    pf_table     9    18K       -       35  2048
but I have no idea how to interpret this.
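One way to get a rough number out of this is a quick awk pass over the
'vmstat -m' output, summing the MemUse column for the pf_* malloc
types. This is just a sketch; the sample input below mirrors the table
above rather than live output:

```shell
# Sum the MemUse column (field 3, in kilobytes) for pf_* malloc types.
# On a live system you would pipe 'vmstat -m' instead of the sample.
vmstat_sample='pf_hash 3 1728K - 3
pf_temp 0 0K - 955 32,64
pf_ifnet 21 7K - 282 128,256,2048
pf_osfp 1130 102K - 6780 32,128
pf_rule 222 129K - 468 128,1024
pf_table 9 18K - 35 2048'

echo "$vmstat_sample" | awk '
/^pf_/ { sub(/K$/, "", $3); total += $3 }
END { printf "pf total MemUse: %dK\n", total }'
```

For the table above this prints "pf total MemUse: 1984K" — a few
megabytes, which by itself does not look like memory exhaustion.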
Regards,
Milan