MPLS

Sami Halabi sodynet1 at gmail.com
Tue Apr 2 13:01:54 UTC 2013


>At least, the per-CPU netisr and the other related per-CPU network state
>(e.g. the routing table) work quite well, as we _expected_ (the
>measured bi-directional IPv4 forwarding performance w/ fastforwarding
>is 5.6Mpps+, w/o fastforwarding 4.6Mpps+
Are you talking about the work Luigi did with netmap, or about the
out-of-the-box performance of GENERIC?

Sami


On Tue, Apr 2, 2013 at 9:16 AM, Sepherosa Ziehau <sepherosa at gmail.com> wrote:

> On Mon, Mar 18, 2013 at 9:41 PM, Andre Oppermann <andre at freebsd.org>
> wrote:
> > On 18.03.2013 13:20, Alexander V. Chernikov wrote:
> >>
> >> On 17.03.2013, at 23:54, Andre Oppermann <andre at freebsd.org> wrote:
> >>
> >>> On 17.03.2013 19:57, Alexander V. Chernikov wrote:
> >>>>
> >>>> On 17.03.2013 13:20, Sami Halabi wrote:
> >>>>>>
> >>>>>> OTOH OpenBSD has a complete implementation of MPLS out of the box;
> >>>>>> maybe
> >>>>
> >>>> Their control-plane code is mostly useless due to its design approach
> >>>> (routing daemons talk via the kernel).
> >>>
> >>>
> >>> What's your approach?
> >>
> >> It is actually not mine. We discussed this a bit in the radix-related
> >> thread. Generally, quagga/bird (and other high-performance
> >> hardware-accelerated and software routers) have a feature-rich RIB from
> >> which the best routes (possibly multipath) are installed into the
> >> kernel FIB. The kernel's main task should be to do efficient lookups,
> >> while every other advanced feature should be implemented in userland.
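> >>
> >> To make the split concrete, a rough sketch (plain C, all names
> >> invented, not a proposal for the actual structures): the kernel side
> >> could shrink to lookup-only entries like
> >>
> >>   #include <stdint.h>
> >>
> >>   /* The userland RIB keeps attributes, policy and all candidate
> >>    * paths; only the winning next hops get installed down here. */
> >>   struct fib_nexthop {
> >>           uint32_t nh_addr;      /* IPv4 next hop */
> >>           uint16_t nh_ifindex;   /* egress interface */
> >>           uint16_t nh_flags;
> >>   };
> >>
> >>   struct fib_entry {
> >>           uint32_t fe_prefix;    /* network address */
> >>           uint8_t  fe_plen;      /* prefix length */
> >>           uint8_t  fe_nhcount;   /* >1 means multipath */
> >>           struct fib_nexthop fe_nh[2];
> >>   };
> >>
> >> Anything the data path does not touch on lookup stays out of the
> >> kernel entirely.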
> >
> >
> > Yes, we have started discussing it, but we haven't reached a conclusion
> > between the two philosophies.  We have also agreed that the current
> > radix code is horrible in terms of cache misses per lookup.  That,
> > however, doesn't preclude an agnostic FIB+RIB approach.  It's mostly a
> > matter of structure layout to keep it efficient.
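> >
> > To illustrate the layout point (hypothetical node, invented names):
> > every pointer chased during a radix walk is a potential cache miss,
> > so the first step is making each node fit in one 64-byte line:
> >
> >   /* Key bits, branch pointers and leaf payload in one cache
> >    * line: a walk then costs at most one miss per level. */
> >   struct rt_node {
> >           uint32_t        key;        /* prefix bits */
> >           uint8_t         plen;       /* prefix length */
> >           uint8_t         pad[3];
> >           struct rt_node *child[2];   /* left/right branch */
> >           void           *leaf;       /* next-hop data, NULL if
> >                                        * internal node */
> >   } __aligned(64);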
> >
> >
> >>>> Their data-plane code, well... Yes, we can use some defines from
> >>>> their headers, but that's all :)
> >>>>>>
> >>>>>> porting it would be quicker and more straightforward than porting
> >>>>>> the Linux LDP implementation of BIRD.
> >>>>
> >>>>
> >>>> It is not a 'Linux' implementation; LDP itself is cross-platform.
> >>>> The trickiest part here is the control plane.
> >>>> However, making _fast_ MPLS switching is tricky too, since it
> >>>> requires changes in our netisr/ethernet handling code.
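> >>>>
> >>>> For scale: the data-plane format itself is trivial; per RFC 3032 a
> >>>> label stack entry is a single 32-bit word (a sketch, host byte
> >>>> order after ntohl()):
> >>>>
> >>>>   #include <stdint.h>
> >>>>
> >>>>   #define MPLS_LABEL(s) (((s) >> 12) & 0xfffff) /* 20-bit label */
> >>>>   #define MPLS_TC(s)    (((s) >> 9) & 0x7)      /* traffic class */
> >>>>   #define MPLS_BOS(s)   (((s) >> 8) & 0x1)      /* bottom of stack */
> >>>>   #define MPLS_TTL(s)   ((s) & 0xff)            /* time to live */
> >>>>
> >>>> The hard part is not the label swap itself but fitting it into the
> >>>> netisr/ethernet input path without slowing that path down.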
> >>>
> >>>
> >>> Can you explain what changes you think are necessary and why?
> >
> >>
> >>
> >> We definitely need the ability to dispatch a chain of mbufs - this was
> >> already discussed in the Intel RX ring lock thread on -net.
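> >>
> >> As a sketch of the idea (hypothetical interface, not a patch):
> >> instead of one if_input() call per packet, a driver would hand the
> >> stack a whole chain collected in one RX ring sweep:
> >>
> >>   struct mbuf *m, *head = NULL, **tail = &head;
> >>
> >>   /* Collect everything the ring completed in this sweep ... */
> >>   while ((m = rxring_next(sc)) != NULL) {    /* invented helper */
> >>           *tail = m;
> >>           tail = &m->m_nextpkt;
> >>   }
> >>   /* ... and dispatch it with a single call, one lock trip. */
> >>   if (head != NULL)
> >>           ifp->if_input_chain(ifp, head);    /* invented hook */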
> >
> >
> > Actually I'm not so convinced of that.  Packet handling is a tradeoff
> > between doing process-to-completion on each packet and doing context
> > switches on batches of packets.
> >
> > Every few years the balance tilts back and forth between
> > process-to-completion and batch processing.  DragonFly went with a
> > batch-like token-passing approach throughout their kernel.  It seems it
> > didn't work out to the extent they expected.  Now many parts are moving
> > back to the more traditional locking approach.
>
> At least, the per-CPU netisr and the other related per-CPU network state
> (e.g. the routing table) work quite well, as we _expected_ (the
> measured bi-directional IPv4 forwarding performance w/ fastforwarding
> is 5.6Mpps+, w/o fastforwarding 4.6Mpps+, w/ 4 igb(4) on an i7-2600,
> using 90% CPU time on each HT in Dfly's polling(4) mode); it is _not_
> using the traditional locking approach on the major network paths at
> all, and for IPv4 forwarding Dfly is _not_ doing "process-to-completion".
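>
> Conceptually the dispatch is just (a simplified sketch, not our actual
> code; flow_hash() stands in for the real Toeplitz hash):
>
>   static int
>   rx_dispatch_cpu(uint32_t saddr, uint32_t daddr, int ncpus)
>   {
>           /* Each flow is hashed to a CPU once at input; the
>            * netisr, the routing table replica and any flow state
>            * for it then live on that CPU only, so the forwarding
>            * path runs without taking locks. */
>           return (flow_hash(saddr, daddr) % ncpus);
>   }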
>
> And as a side note: there was a paper that compared a message-based
> parallelism TCP implementation, a connection-based thread-serialization
> TCP implementation (which Dfly uses) and a connection-based
> lock-serialization TCP implementation.  Its conclusion was that the
> connection-based thread-serialization implementation (Dfly's model) had
> too much scheduling cost.  That conclusion _no longer_ holds for Dfly
> nowadays; we have wiped out the major scheduling costs on the hot TCP
> paths.  So as far as I can see, sometimes it is _not_ a problem of the
> model itself, but of how the model is implemented.
>
> Best Regards,
> sephe
>
> --
> Tomorrow Will Never Die
>



-- 
Sami Halabi
Information Systems Engineer
NMS Projects Expert
FreeBSD SysAdmin Expert

