[Bug 219316] Wildcard matching of ipfw flow tables

bugzilla-noreply at freebsd.org
Sun May 21 22:41:30 UTC 2017


https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=219316

--- Comment #12 from lutz at donnerhacke.de ---
Ah, I missed the previous comment.

>> 1) Large Scale NAT violates the happy-eyeballs requirement that a given client
>> should always use the same external IP while communicating with a given service.

> On what timescale? Forever?

As long as the client keeps the same CGN IP (from 100.64.0.0/10).

> If a client is idle for 5 minutes (no sessions) can
> it start using a new IP?

No. That violates the happy-eyeballs constraint. Several web services bind the
session to the externally visible IP. If this IP changes, the customer has to
log in again and again. We already made this mistake (when using LSN).

>> 2) Mapping all customers to a single IP does not work either, because there
>> are too many connections originating from those customers.

> How many remote addresses are you talking to?
> You can reuse the same address and port to many different remote addresses.

That would surprise me. Such an implementation would require dynamic memory
for the NAT tables, and I do not see such memory usage on my FreeBSD machines.
I did see this effect on a Cisco ASA, though.
See: https://lutz.donnerhacke.de/Blog/High-memory-with-extended-PAT-on-ASA

>> Consequently a deterministically selected group of clients has to share the
>> same NAT table using a single external IP. A typical approach is to use 
>> wildcards to match the right NAT instance:

> you just said that "Mapping all customers to a single IP does not work .."
> and yet that is what you show here.. Am I misreading it?

A classical NAT setup does not distinguish between client IPs and therefore
ends up with either a single external IP or LSN.

My setup partitions the clients by their IPs and then applies a "single IP
per partition" NAT.

> How many clients are we talking about here? 10? 100? 1000? 10K? 100K? 1M?
> and are these clients all on separate hardware? or are they coming from a
> small number of session aggregator machines?

Currently I have ~10k clients per machine; the setup scales horizontally. If I
get more clients, I add additional machines and tell the new clients via DHCP
to use a different gateway (the next machine); see the sketch below.
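
Roughly like this, assuming an ISC dhcpd server (all addresses and pool
boundaries here are invented for the example):

  # hypothetical dhcpd.conf fragment: each pool hands out a different
  # gateway, i.e. a different NAT machine
  subnet 100.64.0.0 netmask 255.192.0.0 {
      pool {
          range 100.64.0.10 100.64.31.250;
          option routers 100.64.0.1;     # NAT machine 1
      }
      pool {
          range 100.64.32.10 100.64.63.250;
          option routers 100.64.32.1;    # NAT machine 2
      }
  }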

>> add 2100 nat 100 ipv4 from 100.64.0.0:255.192.0.63 to any xmit ext out
>> add 2101 nat 101 ipv4 from 100.64.0.1:255.192.0.63 to any xmit ext out
>> add 2102 nat 102 ipv4 from 100.64.0.2:255.192.0.63 to any xmit ext out
>>
>> This approach is inefficient; tables could help. But tables do not support
>> wildcard masking of lookup data. With such a wildcard mask, the flow tables
>> in particular could greatly improve performance.

> I don't quite understand this bit
> my memory is that you can have a table
> 100.64.0.0:255.192.0.63  0
> 100.64.0.1:255.192.0.63  1
> ...
> nat tablearg ip from table (x) to any out xmit XX0

You are right. That's the setup I used before switching to this flow-based
NAT. I only quoted the very early setup to demonstrate the problem. My fault.
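
For reference, a sketch of that tablearg variant with the plain prefixes that
ipfw tables support today (table number and nat values are invented). Note
that addr tables only do longest-prefix matching, so the partitions become
contiguous /26 blocks instead of the interleaved wildcard groups above:

  ipfw table 1 create type addr
  ipfw table 1 add 100.64.0.0/26 100    # first block of 64 clients -> nat 100
  ipfw table 1 add 100.64.0.64/26 101   # next block -> nat 101
  ipfw add 2100 nat tablearg ipv4 from 'table(1)' to any xmit ext out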

> what am I missing?

You are missing the privacy expectations and the Law Enforcement Agencies
(LEAs). For privacy, we would like to use different external IPs for the same
client when it reaches different services. That's why flows.

For the LEAs we need to tell exactly which user was involved in a specific
session, so we need to log some data about the NAT mappings. This is an
overwhelmingly large amount of data, so we want to reduce the necessary
logging. This can be done by allocating blocks of ports to a customer instead
of individual ports.

To be safely assignable, such port ranges need to be large (at least 300
ports per customer, otherwise Google Maps cannot be accessed without errors).
With roughly 64000 usable ports per external IP, 300 ports per customer
covers only about 200 customers per IP. That's why we need to heavily reuse
the port (ranges), and this requires multiple NAT tables per customer. The
only separation method left is to include the destination address, port, and
protocol.

That's why we switched to flows.
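
To make that concrete: with today's flow tables every entry is an exact
match, so one entry per client and destination would be needed (everything
below is invented for the example). The wildcard masking requested in this
bug would let a single entry cover a whole client partition:

  # exact-match flow table keyed on source, protocol, destination and port
  ipfw table fl create type flow:src-ip,proto,dst-ip,dst-port
  ipfw table fl add 100.64.0.17,tcp,203.0.113.5,443 101   # one flow -> nat 101
  ipfw add 2200 nat tablearg ipv4 from any to any flow 'table(fl)' out xmit ext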

-- 
You are receiving this mail because:
You are the assignee for the bug.

