Vector Packet Processing (VPP) portability on FreeBSD

Vincenzo Maffione vmaffione at freebsd.org
Sun May 16 07:22:38 UTC 2021


Hi,
  Yes, you are not using emulated netmap mode.

  In the test setup depicted here
https://github.com/ftk-ntq/vpp/wiki/VPP-throughput-using-netmap-interfaces#test-setup
I think you should really try to replace VPP with the netmap "bridge"
application (tools/tools/netmap/bridge.c), and see what numbers you get.

You would run the application this way:
# bridge -i ix0 -i ix1
This will forward any traffic between ix0 and ix1 (in both directions).
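
For reference, what bridge.c does is conceptually very simple: poll() both
ports, then swap buffer indices between the RX ring of one and the TX ring
of the other, so whole batches of packets are forwarded with a couple of
system calls and no copies. A rough sketch of the idea (simplified, error
handling omitted, not the actual bridge.c code):

/*
 * Minimal netmap zero-copy bridge sketch.
 */
#define NETMAP_WITH_LIBS
#include <net/netmap_user.h>
#include <poll.h>
#include <stdint.h>

/* Move packets from the RX rings of src to the TX rings of dst by
 * swapping buffer indices, so nothing is copied. */
static void
forward(struct nm_desc *src, struct nm_desc *dst)
{
        unsigned int si = src->first_rx_ring, di = dst->first_tx_ring;

        while (si <= src->last_rx_ring && di <= dst->last_tx_ring) {
                struct netmap_ring *rx = NETMAP_RXRING(src->nifp, si);
                struct netmap_ring *tx = NETMAP_TXRING(dst->nifp, di);

                if (nm_ring_empty(rx)) { si++; continue; }  /* no more input */
                if (nm_ring_empty(tx)) { di++; continue; }  /* no more space */

                while (!nm_ring_empty(rx) && !nm_ring_empty(tx)) {
                        struct netmap_slot *rs = &rx->slot[rx->cur];
                        struct netmap_slot *ts = &tx->slot[tx->cur];
                        uint32_t idx = ts->buf_idx;

                        ts->buf_idx = rs->buf_idx;      /* swap buffers */
                        ts->len = rs->len;
                        ts->flags |= NS_BUF_CHANGED;
                        rs->buf_idx = idx;
                        rs->flags |= NS_BUF_CHANGED;
                        rx->head = rx->cur = nm_ring_next(rx, rx->cur);
                        tx->head = tx->cur = nm_ring_next(tx, tx->cur);
                }
        }
}

int
main(void)
{
        struct nm_desc *a = nm_open("netmap:ix0", NULL, 0, NULL);
        struct nm_desc *b = nm_open("netmap:ix1", NULL, 0, NULL);
        struct pollfd pfd[2] = {
                { .fd = a->fd, .events = POLLIN },
                { .fd = b->fd, .events = POLLIN },
        };

        for (;;) {
                /* One system call wakes us up with whole batches of packets. */
                poll(pfd, 2, -1);
                forward(a, b);
                forward(b, a);
        }
}

The real tool also checks that the two ports share the same netmap memory
region (and falls back to copying when they do not), but the data path is
essentially this.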

These numbers would give you a better idea of where to look next (e.g. VPP
code improvements or system tuning such as NIC interrupts, CPU binding,
etc.).

Cheers,
  Vincenzo

On Thu, 13 May 2021 at 15:02, Francois ten Krooden <ftk at nanoteq.com> wrote:

> On Thursday, 13 May 2021 13:59, Jacques Fourie wrote:
>
> >
> > On Thu, May 13, 2021 at 7:27 AM Francois ten Krooden <ftk at nanoteq.com>
> > wrote:
> >
> > On Thursday, 13 May 2021 13:05 Luigi Rizzo wrote:
> > >
> > > On Thu, May 13, 2021 at 10:42 AM Francois ten Krooden
> > > <ftk at nanoteq.com> wrote:
> > > >
> > > > Hi
> > > >
> > > > Just for info, I ran a test using TREX (https://trex-tgn.cisco.com/)
> > > > where I just sent traffic in one direction through the box running
> > > > FreeBSD with VPP using the netmap interfaces.
> > > > These were the results we found before significant packet loss
> > > > started occurring.
> > > > +-------------+------------------+
> > > > | Packet Size | Throughput (pps) |
> > > > +-------------+------------------+
> > > > |   64 bytes  |   1.008 Mpps     |
> > > > |  128 bytes  |   920.311 kpps   |
> > > > |  256 bytes  |   797.789 kpps   |
> > > > |  512 bytes  |   706.338 kpps   |
> > > > | 1024 bytes  |   621.963 kpps   |
> > > > | 1280 bytes  |   569.140 kpps   |
> > > > | 1440 bytes  |   547.139 kpps   |
> > > > | 1518 bytes  |   524.864 kpps   |
> > > > +-------------+------------------+
> > >
> > > Those numbers are way too low for netmap.
> > >
> > > I believe you are either using the emulated mode, or issuing a system
> > > call on every single packet.
> > >
> > > I am not up to date (Vincenzo may know better) but there used to be a
> > > sysctl variable to control the operating mode:
> > >
> > > https://www.freebsd.org/cgi/man.cgi?query=netmap&sektion=4
> > >
> > > SYSCTL VARIABLES AND MODULE PARAMETERS
> > >      Some aspects of the operation of netmap and VALE are controlled
> > >      through sysctl variables on FreeBSD (dev.netmap.*) and module
> > >      parameters on Linux (/sys/module/netmap/parameters/*):
> > >
> > >      dev.netmap.admode: 0
> > >      Controls the use of native or emulated adapter mode.
> > >
> > >      0 uses the best available option;
> > >
> > >      1 forces native mode and fails if not available;
> > >
> > >      2 forces emulated hence never fails.
> > >
> > > If it still exists, try setting it to 1.  If the program fails, then
> > > you should figure out why native netmap support is not compiled in.
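> > >
> > > For example (assuming the sysctl is still there):
> > >
> > >      # sysctl dev.netmap.admode
> > >      dev.netmap.admode: 0
> > >      # sysctl dev.netmap.admode=1
> > >      dev.netmap.admode: 0 -> 1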
> >
> > Thank you.  I explicitly set this to 1 now and it still works, so it
> > should be running in native mode.
> >
> > I will dig a bit into the function that processes the incoming packets.
> > The code I currently use was added to VPP somewhere before 2016, so it
> > might be that there is a bug in that code.
> >
> > Will try and see if I can find anything interesting there.
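> >
> > In particular I want to check whether that code ends up issuing a netmap
> > sync system call for every packet rather than once per batch, as Luigi
> > suspects.  A rough sketch of the difference, with a hypothetical
> > copy_to_slot() helper standing in for the actual VPP buffer handling:
> >
> > #include <net/netmap_user.h>
> > #include <sys/ioctl.h>
> > #include <stdint.h>
> > #include <string.h>
> >
> > /* Hypothetical helper: copy one packet into the current TX slot
> >  * and advance the ring pointers. */
> > static void
> > copy_to_slot(struct netmap_ring *ring, const void *pkt, uint16_t len)
> > {
> >         struct netmap_slot *slot = &ring->slot[ring->cur];
> >
> >         memcpy(NETMAP_BUF(ring, slot->buf_idx), pkt, len);
> >         slot->len = len;
> >         ring->head = ring->cur = nm_ring_next(ring, ring->cur);
> > }
> >
> > /* Slow pattern: one NIOCTXSYNC system call per packet. */
> > static void
> > tx_burst_slow(int fd, struct netmap_ring *ring, void **pkts,
> >     uint16_t *lens, int n)
> > {
> >         for (int i = 0; i < n && !nm_ring_empty(ring); i++) {
> >                 copy_to_slot(ring, pkts[i], lens[i]);
> >                 ioctl(fd, NIOCTXSYNC, NULL);
> >         }
> > }
> >
> > /* Fast pattern: fill the ring, then sync the whole batch at once. */
> > static void
> > tx_burst_batched(int fd, struct netmap_ring *ring, void **pkts,
> >     uint16_t *lens, int n)
> > {
> >         for (int i = 0; i < n && !nm_ring_empty(ring); i++)
> >                 copy_to_slot(ring, pkts[i], lens[i]);
> >         ioctl(fd, NIOCTXSYNC, NULL);
> > }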
> >
> > >
> > > cheers
> > > luigi
> > >
> > A couple of questions / suggestions:
>
> Thank you for the suggestions.
>
> > Will it be possible to test using the netmap bridge app or a VALE switch
> > instead of VPP?
> I did perform a test using netmap-fwd
> (https://github.com/Netgate/netmap-fwd).
> Looking at the code, it appears that the packets are read from netmap as a
> batch, but each packet is then passed through the application's complete
> IP stack before the next one is processed.
> With this application it was possible to reach about 1.4 Mpps for 64-byte
> packets, and 812 kpps for 1518-byte packets.
> I haven't done any other tweaking on the FreeBSD box yet.  It is running
> FreeBSD 13.0.
>
> > Did you verify that the TREX setup can perform at line rate when
> > connected back to back?
> We did tests with TREX back to back yesterday and we reached the following:
> +-------------+------------------+
> | Packet Size | Throughput (pps) |
> +-------------+------------------+
> |   64 bytes  |    14.570 Mpps   |
> |  128 bytes  |     8.466 Mpps   |
> |  256 bytes  |     4.542 Mpps   |
> |  512 bytes  |     2.354 Mpps   |
> | 1024 bytes  |     1.200 Mpps   |
> | 1280 bytes  |   965.042 kpps   |
> | 1440 bytes  |   857.795 kpps   |
> | 1518 bytes  |   814.690 kpps   |
> +-------------+------------------+
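>
> (These figures are essentially 10 GbE line rate: counting the 20 bytes of
> preamble and inter-frame gap per frame, line rate is 10^10 / ((size + 20) * 8)
> packets per second, i.e. roughly 14.88 Mpps at 64 bytes and 813 kpps at
> 1518 bytes.)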
>
> > Which NICs are you using?
> We are using Intel X552 10 GbE SFP+ NICs, which are part of the Intel Xeon
> D-1537 SoC, on a SuperMicro X10SDV-8C-TLN4F+ board.
>
> I will also put the results on the GitHub repository
> https://github.com/ftk-ntq/vpp/wiki
> and will update them as we get more information.
>
> Kind Regards
> Francois
>

