Adding Flow Director sysctls to ixgbe(4)

K. Macy kmacy at freebsd.org
Fri Sep 9 11:23:13 UTC 2011


> What this means is that we have
> a failure of abstraction.  Abstraction has a cost, and some of the people who want
> access to low level queues are not interested in paying an extra abstraction cost.

I think it can be argued that this isn't necessarily true; it depends on
how well the abstraction is defined. As an example, I don't believe
there is any performance penalty for the higher level of abstraction in
the BSD pmap API versus exposing the MD layer directly as a multi-level
page table.
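
To make the pmap point concrete, here is a minimal, illustrative sketch
(not the actual FreeBSD pmap code; names like example_pmap_enter and
md_pte_lookup_alloc are hypothetical). The point is that the MI entry
point is a thin wrapper around the same page-table walk the MD code
would do directly, so the abstraction boundary adds no per-mapping cost:

    /*
     * Illustrative only: a machine-independent, pmap-style entry point
     * that forwards to machine-dependent page-table code.  All names
     * here are made up for the example.
     */
    #include <stdint.h>

    typedef uint64_t pt_entry_t;
    struct example_pmap { pt_entry_t *pml_top; };

    /* MD helpers: walk/allocate the multi-level table, build a PTE. */
    pt_entry_t *md_pte_lookup_alloc(struct example_pmap *pm, uintptr_t va);
    pt_entry_t  md_make_pte(uintptr_t pa, int prot);

    /*
     * MI interface: callers never see the table layout.  The MD helpers
     * can be inlined, so the abstraction costs nothing at runtime.
     */
    static inline int
    example_pmap_enter(struct example_pmap *pm, uintptr_t va, uintptr_t pa,
        int prot)
    {
            pt_entry_t *pte = md_pte_lookup_alloc(pm, va);

            if (pte == NULL)
                    return (-1);    /* out of page-table pages */
            *pte = md_make_pte(pa, prot);
            return (0);
    }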

>
> I think that some of the abstractions we need are tied up in the work that Takuya did
> for SoC and some of it is in the work done by Luigi on netmap.  I'd go so far as to say
> that what we should do is try to combine those two pieces of code into a set of
> low level APIs for programs to interact with high speed NICs.

I'm inclined to agree. Although I fairly recently changed the ifnet API
for multi-queue, I have received very little in the way of useful
responses to my inquiries with various individuals about the interfaces
exported by current cards. Based on my limited understanding of netmap
as it exists now, I think that going forward netmap should develop into
a general API for safely exporting queues to userspace, with its current
limitations applying only to cards that don't support the more advanced
features. I am only familiar with the documentation for Solarflare's
quasi-IOMMU, and I have access to an implementation that exports queues
to userspace on ixgbe cards. It appears that, for a broader
understanding of current NIC feature sets, I will have to resort to
spelunking through the network driver sources on kernel.org. This is
probably reasonable when it comes to directing flows, since Linux is
converging on a single API, but I am not aware of a similarly general
API for exporting queues.
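
For reference, the basic shape of the netmap usage model as I understand
it: a process registers an interface through /dev/netmap, mmaps the
shared ring memory, and then synchronizes with the kernel via poll() and
ioctl(). The sketch below is only an approximation; struct nmreq fields
and the ring bookkeeping have differed across netmap versions, so treat
it as the pattern rather than an exact program:

    /*
     * Rough sketch of the netmap-style "export a NIC queue to
     * userspace" pattern.  Details vary between netmap versions.
     */
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <fcntl.h>
    #include <poll.h>
    #include <string.h>
    #include <net/netmap.h>
    #include <net/netmap_user.h>

    int
    attach_queue(const char *ifname)
    {
            struct nmreq req;
            int fd = open("/dev/netmap", O_RDWR);

            if (fd < 0)
                    return (-1);
            memset(&req, 0, sizeof(req));
            strncpy(req.nr_name, ifname, sizeof(req.nr_name) - 1);
            req.nr_version = NETMAP_API;            /* version handshake */
            if (ioctl(fd, NIOCREGIF, &req) < 0)     /* bind the rings */
                    return (-1);

            /* Map the shared region holding rings and packet buffers. */
            void *mem = mmap(NULL, req.nr_memsize, PROT_READ | PROT_WRITE,
                MAP_SHARED, fd, 0);
            if (mem == MAP_FAILED)
                    return (-1);

            struct netmap_if *nifp = NETMAP_IF(mem, req.nr_offset);
            struct netmap_ring *rxring = NETMAP_RXRING(nifp, 0); /* queue 0 */

            /* Block until the kernel has filled slots, then consume them. */
            struct pollfd pfd = { .fd = fd, .events = POLLIN };
            poll(&pfd, 1, -1);
            (void)rxring;   /* slot consumption omitted; varies by version */
            return (fd);
    }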

> The one thing most
> people do not talk about is extending our socket API to do two things that I think would
> be a win for 80% of our users.  If a socket, and also a kqueue, could be pinned
> to a CPU as well as a NIC queue that should improve overall bandwidth for a large
> number of our users.  The API there is definitely an ioctl() and the hard part is
> doing the tying together.  To do this we need to also work out our low level story.

This is clearly a useful application, perhaps the most
straightforward, but I think there is a much broader set of possible
uses.
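
To illustrate the kind of interface being proposed for the common case,
here is a purely hypothetical sketch. Neither option name exists today,
the quoted proposal suggests an ioctl() rather than the setsockopt()
used here, and a real design would have to work out how the CPU binding
and the NIC queue binding are kept consistent with each other:

    /*
     * Hypothetical: pin a socket to a CPU and a NIC receive queue.
     * SO_HYP_CPU_AFFINITY and SO_HYP_RXQ_AFFINITY are made-up names
     * standing in for whatever a real API would define.
     */
    #include <sys/types.h>
    #include <sys/socket.h>

    #define SO_HYP_CPU_AFFINITY     0x4000  /* made-up option numbers */
    #define SO_HYP_RXQ_AFFINITY     0x4001

    static int
    pin_socket(int s, int cpu, int rxq)
    {
            /* Run this socket's stack processing on 'cpu'. */
            if (setsockopt(s, SOL_SOCKET, SO_HYP_CPU_AFFINITY, &cpu,
                sizeof(cpu)) < 0)
                    return (-1);
            /* Steer this flow to hardware receive queue 'rxq'. */
            if (setsockopt(s, SOL_SOCKET, SO_HYP_RXQ_AFFINITY, &rxq,
                sizeof(rxq)) < 0)
                    return (-1);
            return (0);
    }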


Cheers

