Vector Packet Processing (VPP) portability on FreeBSD

Luigi Rizzo rizzo at iet.unipi.it
Thu May 13 13:02:39 UTC 2021


On Thu, May 13, 2021 at 2:57 PM Luigi Rizzo <rizzo at iet.unipi.it> wrote:
>
> On Thu, May 13, 2021 at 1:27 PM Francois ten Krooden <ftk at nanoteq.com> wrote:
> >
> >
> > On Thursday, 13 May 2021 13:05 Luigi Rizzo wrote:
> > >
> > > On Thu, May 13, 2021 at 10:42 AM Francois ten Krooden
> > > <ftk at nanoteq.com> wrote:
> > > >
> > > > Hi
> > > >
> > > > Just for info, I ran a test using TREX (https://trex-tgn.cisco.com/),
> > > > where I just sent traffic in one direction through the box running
> > > > FreeBSD with VPP using the netmap interfaces.
> > > > These were the results we found before significant packet loss
> > > > started occurring.
> > > > +-------------+------------------+
> > > > | Packet Size | Throughput (pps) |
> > > > +-------------+------------------+
> > > > |   64 bytes  |   1.008 Mpps     |
> > > > |  128 bytes  |   920.311 kpps   |
> > > > |  256 bytes  |   797.789 kpps   |
> > > > |  512 bytes  |   706.338 kpps   |
> > > > | 1024 bytes  |   621.963 kpps   |
> > > > | 1280 bytes  |   569.140 kpps   |
> > > > | 1440 bytes  |   547.139 kpps   |
> > > > | 1518 bytes  |   524.864 kpps   |
> > > > +-------------+------------------+
> > >
> > > Those numbers are way too low for netmap.
> > >
> > > I believe you are either using the emulated mode, or issuing a system call on
> > > every single packet.
> > >
> > > I am not up to date (Vincenzo may know better) but there used to be a sysctl
> > > variable to control the operating mode:
> > >
> > > https://www.freebsd.org/cgi/man.cgi?query=netmap&sektion=4
> > >
> > > SYSCTL VARIABLES AND MODULE PARAMETERS
> > >      Some aspects of the operation of netmap and VALE are controlled
> > >      through sysctl variables on FreeBSD (dev.netmap.*) and module
> > >      parameters on Linux (/sys/module/netmap/parameters/*):
> > >
> > >      dev.netmap.admode: 0
> > >      Controls the use of native or emulated adapter mode.
> > >
> > >      0 uses the best available option;
> > >
> > >      1 forces native mode and fails if not available;
> > >
> > >      2 forces emulated hence never fails.
> > >
> > > If it still exists, try setting it to 1. If the program then fails, you
> > > should figure out why native netmap support is not compiled in.
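
(Side note: if it helps, the mode can also be checked or forced from
within the test program - a minimal sketch, assuming FreeBSD's
sysctlbyname(3), equivalent to "sysctl dev.netmap.admode=1" from the
shell, and needing root to actually set the value:)

    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <stdio.h>

    int
    main(void)
    {
        int want = 1;                   /* 1 = force native mode */
        int old;
        size_t oldlen = sizeof(old);

        /* read the old value and write the new one in a single call */
        if (sysctlbyname("dev.netmap.admode", &old, &oldlen,
            &want, sizeof(want)) == -1) {
            perror("sysctlbyname(dev.netmap.admode)");
            return (1);
        }
        printf("dev.netmap.admode: %d -> %d\n", old, want);
        return (0);
    }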
> >
> > Thank you.  I explicitly set this to 1 now and it still works, so it should be running in native mode.
> >
> > I will dig a bit into the function that processes the incoming packets.
> > The code I currently use was added to VPP somewhere before 2016, so there might be a bug in that code.
>
> Then try to instrument the code and see how many packets
> you are getting on every RXSYNC system call.
>
> If the value is mostly/always 0-1 then there is some bug
> with the (user) code that frees slots in the queue.
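
To check that, counting the slots returned by each RXSYNC should be
enough - a rough sketch, not the actual VPP input node, assuming the
net/netmap_user.h helpers and an already-open netmap descriptor:

    #define NETMAP_WITH_LIBS
    #include <net/netmap_user.h>   /* struct nm_desc, NETMAP_RXRING, nm_ring_space() */
    #include <sys/ioctl.h>
    #include <stdio.h>

    static void
    count_batches(struct nm_desc *d)
    {
        struct netmap_ring *ring = NETMAP_RXRING(d->nifp, d->first_rx_ring);
        unsigned long calls = 0, pkts = 0;

        for (;;) {
            ioctl(d->fd, NIOCRXSYNC, NULL);

            unsigned int n = nm_ring_space(ring);  /* packets ready in this batch */
            calls++;
            pkts += n;

            /* ... consume the n slots, advancing ring->head and ring->cur ... */

            if (calls % 1000000 == 0)
                fprintf(stderr, "avg %.2f packets per RXSYNC\n",
                    (double)pkts / calls);
        }
    }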

Or another issue could be that your application spends too much
time processing packets, so the bottleneck is user processing.
The thing to monitor would be the time between system calls,
divided by the number of packets processed in between:
    ...
    ioctl(fd, NIOCRXSYNC);
    t1 = get_nanoseconds();
    <process packets>
    n = <number of packets processed>;
    t2 = get_nanoseconds();
    time_per_packet = (t2 - t1) / n;
    <1/time_per_packet is an upper bound on your packet rate>
    ioctl(fd, NIOCRXSYNC);
    ...
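
For completeness, a concrete version of that sketch - assuming
clock_gettime(2) with CLOCK_MONOTONIC for the timestamps and, again,
the net/netmap_user.h helpers; the actual packet processing is left
as a stub:

    #define NETMAP_WITH_LIBS
    #include <net/netmap_user.h>
    #include <sys/ioctl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    static inline uint64_t
    get_nanoseconds(void)
    {
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ((uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec);
    }

    static void
    rx_loop(struct nm_desc *d)
    {
        struct netmap_ring *ring = NETMAP_RXRING(d->nifp, d->first_rx_ring);

        for (;;) {
            ioctl(d->fd, NIOCRXSYNC, NULL);
            uint64_t t1 = get_nanoseconds();
            unsigned int n = nm_ring_space(ring);

            if (n == 0)
                continue;

            /* <process the n packets, advancing ring->head and ring->cur> */

            uint64_t t2 = get_nanoseconds();
            fprintf(stderr, "%.1f ns/pkt -> at most %.2f Mpps\n",
                (double)(t2 - t1) / n, n * 1e3 / (t2 - t1));
        }
    }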

cheers
luigi

