[long] Network stack -> NIC flow (was Re: MQ Patch.)

Andre Oppermann andre at freebsd.org
Wed Oct 30 21:30:35 UTC 2013


On 30.10.2013 06:00, Luigi Rizzo wrote:
> On Tue, Oct 29, 2013 at 06:43:21PM -0700, Adrian Chadd wrote:
>> Hi,
>>
>> We can't assume the hardware has deep queues _and_ we can't just hand
>> packets to the DMA engine.
>> [Adrian explains why]
>
> I have the feeling that the various folks who stepped into this
> discussion have completely different (and orthogonal) goals, and
> as such these goals should be discussed separately.

It looks like it and it is great to have this discussion. :)

> Below is the architecture i have in mind and how i would implement it
> (and it would be extremely simple since we have most of the pieces
> in place).

[Omitted the good and thorough QoS description cited further down,
  to be replied to separately]

> It would be useful if people could discuss what problem they are
> addressing before coming up with patches.

Right now Glebius and I are working on the struct ifnet abstraction,
which has become severely bloated and blurred over the years.  The
first step is to make it opaque to the drivers for better API/ABI
stability.

When looking at struct ifnet and its place in the kernel, it
becomes evident that its actual purpose is to serve as the abstraction
of a logical layer 3 protocol interface towards the layer 2 mapping
and encapsulation, and, somewhat tangentially, the real hardware.

Now ifnet has become very complex and large and should be brought
back to its original purpose of being the logical layer 3 interface
abstraction.  There isn't necessarily a 1:1 mapping from one ifnet
instance to one hardware interface.  In fact there are purely logical
ifnets (gre, tun, ...), direct hardware ifnets (simple network interfaces
like fxp(4)), and multiple logical interfaces on top of a single piece
of hardware (vlan, lagg, ...).  Depending on the ifnet's purpose the
backend can be very different.  Thus I want to decouple the current
implicit notion of ifnet==hardware, with its associated queuing and
such.  Instead it should become a layer 3 abstraction inside the
kernel again and delegate all lower layers to appropriate protocol,
layer 2, and hardware specific implementations.

From this comes the following *rough* implementation approach to be
tested (ignore naming for now):

/* Function pointers for packets descending into layer 2 */
   (*if_l2map)(ifnet, mbuf, sockaddr, [route]);	/* from upper stack */
   (*if_tx)(ifnet, mbuf);			/* to driver or qos */
   (*if_txframe)(ifnet, mbuf);			/* to driver */
   (*if_txframedone)(ifnet);			/* callback to qos */

/* Function pointers for packets coming up from layer 1 */
   (*if_l2demap)(ifnet, mbuf);			/* l2/l3 unmapping */
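
To make the intent concrete, here is a minimal sketch of how these
pointers might hang off an opaque ifnet.  The struct layout and field
names below are purely my illustration, not part of the actual patch:

   struct mbuf;
   struct sockaddr;
   struct route;

   /*
    * Purely illustrative: an opaque ifnet exposing only the hooks
    * listed above; everything else stays hidden from the drivers.
    */
   struct ifnet {
           /* packets descending into layer 2 */
           int   (*if_l2map)(struct ifnet *, struct mbuf *,
                     const struct sockaddr *, struct route *);
           int   (*if_tx)(struct ifnet *, struct mbuf *);
           int   (*if_txframe)(struct ifnet *, struct mbuf *);
           void  (*if_txframedone)(struct ifnet *);
           /* packets coming up from layer 1 */
           void  (*if_l2demap)(struct ifnet *, struct mbuf *);
           void   *if_softc;             /* driver private state */
   };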

When a packet comes down the stack, (*if_l2map) gets called to map
and encapsulate a layer 3 packet into an appropriate layer 2 frame.
For IP this would be ether_output() together with ARP and so on.
The result of that step is the ethernet header in front of the IP
packet.  ether_output() then calls (*if_tx) to have the frame sent
out on the wire(less); this is the driver handoff point for DMA
ring addition.  Normally (*if_tx) and (*if_txframe) are the same
and the job is done.  When software QoS is active, (*if_tx) points
into the soft-QoS enqueue implementation, which will eventually use
(*if_txframe) to push out those packets onto the wire as it sees fit.
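
A rough sketch of that handoff, building on the illustrative struct
above (a real ether_output() naturally does much more than this):

   /*
    * Sketch only: the l2map step builds the frame and hands it to
    * (*if_tx).  In the normal case if_tx == if_txframe, i.e. the
    * driver; a QoS discipline interposes by pointing if_tx at its
    * own enqueue routine and later draining through if_txframe.
    */
   static int
   example_l2map(struct ifnet *ifp, struct mbuf *m,
       const struct sockaddr *dst, struct route *ro)
   {
           /* resolve the L2 address (ARP/ND) for dst/ro and prepend
            * the ethernet header to m here ... */
           (void)dst;
           (void)ro;
           return (ifp->if_tx(ifp, m));  /* driver or QoS handoff */
   }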

In addition the drivers have to expose functions to manage the number
and depth of their DMA rings, or rather the number/size of packets
that can be enqueued onto them, and they have to invoke the
(*if_txframedone) callback so that a soft-queue or QoS discipline can
clock out further packets.  When QoS is active it probably wants to
make the DMA rings small and the software queue(s) large to be
effective.
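
On the driver side this could look roughly like the following; the
softc layout, the function names, and the ENOBUFS convention are
assumptions of mine for illustration only:

   #include <errno.h>
   #include <stddef.h>

   struct example_softc {
           unsigned int tx_ring_free;    /* free DMA descriptors */
   };

   /* Refuse new frames once the DMA ring is full so the caller
    * (stack, soft-queue or QoS) backs off instead of buffering. */
   static int
   example_driver_txframe(struct ifnet *ifp, struct mbuf *m)
   {
           struct example_softc *sc = ifp->if_softc;

           if (sc->tx_ring_free == 0)
                   return (ENOBUFS);
           /* place m on the DMA ring and kick the hardware ... */
           (void)m;
           sc->tx_ring_free--;
           return (0);
   }

   /* Called from the tx completion interrupt: reclaim descriptors
    * and let a soft-queue/QoS discipline clock out more packets. */
   static void
   example_driver_txintr(struct ifnet *ifp)
   {
           struct example_softc *sc = ifp->if_softc;

           sc->tx_ring_free++;
           if (ifp->if_txframedone != NULL)
                   ifp->if_txframedone(ifp);
   }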

In the default setup, and when running as a server, no QoS will be
active or inserted.  No, or only very small, software queues exist to
handle concurrency (except for ieee80211, which does sophisticated
frame management inside (*if_txframe)).  Whenever the DMA ring is full
there is no point in queuing up more packets.  Instead the socket
buffers act as the buffers and also provide flow control and
backpressure up to userspace, limiting kernel memory usage from double
and triple buffering.

How the packets are efficiently pushed out onto the wire is up to the
drivers and depends on the hardware's capabilities.  That can mean
multiple hardware DMA rings, or just a single ring with an efficient
concurrent-access method.
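
As a trivial illustration of the multi-ring case (the flowid and ring
count handling here are hypothetical, not from the patch):

   /*
    * Illustration only: with several hardware DMA rings a per-flow
    * hash spreads transmit work across CPUs; with a single ring all
    * CPUs end up serialized on that ring's access method instead.
    */
   static unsigned int
   example_pick_tx_ring(unsigned int flowid, unsigned int nrings)
   {
           return (flowid % nrings);
   }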

-- 
Andre


