Extending FIBs to support multi-tenancy

Jonathan T. Looney jtl at freebsd.org
Sat Dec 19 01:32:27 UTC 2015


On 12/18/15, 5:26 PM, "owner-freebsd-transport at freebsd.org on behalf of
Ryan Stone" <owner-freebsd-transport at freebsd.org on behalf of
rysto32 at gmail.com> wrote:

>- they may use independent routing tables
[...]
>- traffic from different tenant networks is not guaranteed to be
>segregated
>in any way -- it might all come in the same network interface, without any
>vlan tagging or any other encapsulation that might differentiate tenant
>networks

The combination of these two requirements seems slightly odd to me.
Usually, you need separate routing tables because you have separate
interfaces. When you have shared interfaces, you can usually use the same
routing table.

I think it might help to have more information about the reasoning for
these requirements, as it seems that this combination is what is leading
you towards making the FIB assignment be an address property.



>1)
>We don't really want to change all of our services to instantiate one
>listening socket for every tenant network.  Instead we're looking at
>implementing (and upstreaming) a kernel extension that allows a listening
>socket to be wildcarded across all FIBs (note: yesterday I described this
>feature as allowing us to pick-and-choose FIBs, but people internally have
>convinced me that a wildcard match would make their lives significantly
>easier).  When a new connection attempt to a listening socket in this mode
>is accepted, the socket would not inherit its FIB from the listening
>socket.  Instead, it would be set based on the local IP address of the
>connection.

Makes sense. My employer does something similar in their stack: listen
sockets can be assigned to a particular FIB or be wildcard entries that
listen in all FIBs. We haven't noticed any scaling problems, but we
typically don't have high connection setup rates, either.

In any case, this approach seems reasonable to me.
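
(For reference, FreeBSD already lets you pin a listener to a single FIB with
the SO_SETFIB socket option, roughly as sketched below. The "listen in every
FIB" wildcard being proposed does not exist in the stock kernel, so the
all-FIBs value mentioned in the comment is purely illustrative.)

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <err.h>
    #include <stdint.h>
    #include <string.h>

    /*
     * Sketch: bind a TCP listener into a single FIB.  Today, sockets
     * accepted from it inherit that FIB.  Under the proposal above, a
     * wildcard setting (hypothetically fib == -1) would instead make the
     * listener visible in every FIB and set each accepted socket's FIB
     * from its local address.
     */
    int
    make_listener(int fib, uint16_t port)
    {
        struct sockaddr_in sin;
        int s;

        if ((s = socket(AF_INET, SOCK_STREAM, 0)) == -1)
            err(1, "socket");
        if (setsockopt(s, SOL_SOCKET, SO_SETFIB, &fib, sizeof(fib)) == -1)
            err(1, "SO_SETFIB");

        memset(&sin, 0, sizeof(sin));
        sin.sin_len = sizeof(sin);
        sin.sin_family = AF_INET;
        sin.sin_port = htons(port);
        sin.sin_addr.s_addr = htonl(INADDR_ANY);

        if (bind(s, (struct sockaddr *)&sin, sizeof(sin)) == -1)
            err(1, "bind");
        if (listen(s, 128) == -1)
            err(1, "listen");
        return (s);
    }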


>2)
>Currently, FIBs are a property of an interface (struct ifnet).  We aren't
>very enthusiastic about the prospect of having to create thousands of
>interfaces to support thousands of tenant networks.  We would instead
>like to make the FIB a property of the interface address.

I don't understand the motivation for this. It would help if you would
provide more context for the use case. (See my earlier comments.)

At minimum, before proceeding, you should connect with the folks who had
talked about wanting to make changes to ifnet. (Among other things, I
think they had considered creating separate physical interface, logical
interface, and interface address constructs.) I'm not sure where that
project stands, but I believe it is still ongoing. I think Gleb (cc'd)
was involved in it, so you might want to check with him.


>3)
>The idea of a per-thread FIB has gotten the most pushback so far, and I
>understand the objection.  I'll explain the problem that we're trying to
>solve with this.  When a new request comes in, we may need to perform
>authentication through LDAP or Kerberos.  The problem is that the existing
>open-source implementations that we are using manage sockets directly.  We
>really don't want to have to go through them and make their APIs entirely
>FIB-aware -- that is far too much churn.  By moving awareness of the
>current FIB into the kernel, existing calls to socket() can do the right
>thing transparently.
>
>We're not entirely happy with the solution, but the "right" way to solve
>the problem involves rototilling a number of libraries.  Even if we could
>convince the upstream projects to take patches, it's far more work than
>we're willing to take on.

Thanks for sharing more details on the use case. It certainly helps
clarify the reasoning.
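
As an aside, note that setfib(2) already provides this sort of transparency
at process granularity: once the process default FIB is changed, subsequent
socket() calls in unmodified libraries inherit it. A minimal sketch (with the
tenant lookup and everything else omitted):

    #include <sys/socket.h>
    #include <err.h>

    /*
     * Sketch: set the process-wide default FIB before handing control to
     * library code (LDAP, Kerberos, ...) that opens its own sockets.  The
     * libraries never need to know about FIBs.  setfib(2) affects the whole
     * process, which is exactly the granularity problem a per-thread FIB is
     * meant to address.
     */
    static void
    enter_tenant_fib(int fib)
    {
        if (setfib(fib) == -1)
            err(1, "setfib");
        /* ...call into unmodified library code here... */
    }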

However, I wonder if this really solves all of your problems. For example,
you talk about needing to perform LDAP or Kerberos authentication. You are
already going to need to make your application smart enough to figure out
which servers to use based on the source of the incoming request. That may
or may not require adding intelligence to your libraries so that they can
tell you where each incoming connection came from.

Further, per-thread FIBs may not solve your scaling problem. You initially
stated that your objection to VNET was that you would need a minimum of "A
* B * C threads to ensure that any given service on any single tenant
network could fully utilize the system's resources to process requests".
If you assign threads to a particular FIB, then you are back in the A * B
* C scaling model that you didn't want.

On the other hand, if you maintain a smaller pool of threads and
continually reassign their FIBs, you could hit interesting problems if any
of your libraries implement their own thread pools or event-driven
frameworks (e.g., libisc2). In those cases, they may switch contexts
between connections as events occur. How will you ensure the thread's FIB
is always assigned correctly? It seems like this could become quite
complicated, depending on the exact situation.
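
To make that concrete, here is roughly what every event handler would have
to do (the set_thread_fib() call is hypothetical, standing in for whatever
per-thread primitive gets added; process_request() is a placeholder for
opaque library code):

    /*
     * Hypothetical sketch only: stock FreeBSD has no per-thread FIB, and
     * set_thread_fib() does not exist -- it stands in for the proposed
     * primitive.  process_request() is a placeholder for library code that
     * opens sockets on its own.
     */
    struct conn {
        int fd;
        int fib;        /* tenant FIB this connection belongs to */
    };

    int  set_thread_fib(int fib);           /* hypothetical */
    void process_request(struct conn *c);   /* placeholder */

    static void
    on_event(struct conn *c)
    {
        /*
         * The worker must re-assert the connection's FIB before any opaque
         * library underneath it calls socket().  If an event-driven library
         * switches to another connection without doing this, follow-up
         * traffic silently uses whatever FIB the thread had last.
         */
        if (set_thread_fib(c->fib) != 0)
            return;
        process_request(c);
    }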

Per-thread FIBs raise a lot of potential concerns, ESPECIALLY when used by
programs or libraries that aren't expecting to work this way. The biggest
concerns I see are complexity and troubleshooting: you need to make sure
that every thread knows which FIB it is using and only handles connections
for that FIB. Make one mistake and a connection can suddenly go to the
wrong place.

Just my 2c. Others may disagree.

Jonathan



