From: Francois ten Krooden <ftk@nanoteq.com>
To: Marko Zec
CC: Vincenzo Maffione, freebsd-net@freebsd.org, Jacques Fourie
Subject: RE: Vector Packet Processing (VPP) portability on FreeBSD
Date: Thu, 20 May 2021 05:10:26 +0000

Thanks, will give these a shot.

Yes, we are working on FreeBSD 13.0, as some of the memory management functionality it provides is required.
I suspected there might have been some impact from implementing iflib.

On Monday, 17 May 2021 19:21, Marko Zec wrote:
>
> On Mon, 17 May 2021 09:53:25 +0000
> Francois ten Krooden wrote:
>
> > On 2021/05/16 09:22, Vincenzo Maffione wrote:
> > >
> > > Hi,
> > > Yes, you are not using emulated netmap mode.
> > >
> > > In the test setup depicted here
> > > https://github.com/ftk-ntq/vpp/wiki/VPP-throughput-using-netmap-
> > > interfaces#test-setup
> > > I think you should really try to replace VPP with the netmap
> > > "bridge" application (tools/tools/netmap/bridge.c), and see what
> > > numbers you get.
> > >
> > > You would run the application this way
> > >   # bridge -i ix0 -i ix1
> > > and this will forward any traffic between ix0 and ix1 (in both
> > > directions).
> > >
> > > These numbers would give you a better idea of where to look next
> > > (e.g. VPP code improvements or system tuning such as NIC interrupts,
> > > CPU binding, etc.).
> >
> > Thank you for the suggestion.
> > I ran a test with the bridge this morning, and updated the results
> > as well.
> >
> > +-------------+------------------+
> > | Packet Size | Throughput (pps) |
> > +-------------+------------------+
> > | 64 bytes    | 7.197 Mpps       |
> > | 128 bytes   | 7.638 Mpps       |
> > | 512 bytes   | 2.358 Mpps       |
> > | 1280 bytes  | 964.915 kpps     |
> > | 1518 bytes  | 815.239 kpps     |
> > +-------------+------------------+
>
> I assume you're on 13.0, where netmap throughput is lower compared to
> 11.x due to the migration of most drivers to iflib (apparently increased
> overhead) and different driver defaults. On 11.x I could move 10G line
> rate from one ix to another at low CPU freqs, whereas on 13.x the CPU
> must be set to max speed, and still can't do 14.88 Mpps.
>
> #1 thing which changed: the default number of packets per ring dropped
> from 2048 (11.x) to 1024 (13.x).
> Try changing this in /boot/loader.conf:
>
>   dev.ixl.0.iflib.override_nrxds=2048
>   dev.ixl.0.iflib.override_ntxds=2048
>   dev.ixl.1.iflib.override_nrxds=2048
>   dev.ixl.1.iflib.override_ntxds=2048
>   etc.
>
> For me this increases the throughput of
>   bridge -i netmap:ixl0 -i netmap:ixl1
> from 9.3 Mpps to 11.4 Mpps.
>
> #2: the default interrupt moderation delays seem to be too long. Combined
> with increasing the ring sizes, reducing dev.ixl.0.rx_itr from 62
> (default) to 40 increases the throughput further, from 11.4 to 14.5 Mpps.
>
> Hope this helps,
>
> Marko
>
> > Except for the 64-byte and 128-byte packets, the other sizes were
> > matching the maximum rates possible on 10 Gbps. This was when the
> > bridge application was running on a single core, and the CPU core was
> > maxing out at 100%.
> >
> > I think there might be a bit of system tuning needed, but I suspect
> > most of the improvement would be needed in VPP.
> >
> > Regards
> > Francois

Important Notice: This e-mail and its contents are subject to the Nanoteq (Pty) Ltd e-mail legal notice available at: http://www.nanoteq.com/AboutUs/EmailDisclaimer.aspx
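[Archive note] The tuning steps Marko describes can be collected into one sketch. This assumes an ixl(4) NIC with two ports (ixl0/ixl1), as in his example; the device names, the rx_itr value of 40, and the ring size of 2048 are taken from the thread and would need adjusting for other hardware:

```shell
# /boot/loader.conf additions -- a sketch of Marko's suggestions.
# Device names (ixl0/ixl1) follow his example; substitute your own.

# Restore the 11.x-era ring sizes (takes effect at next boot):
dev.ixl.0.iflib.override_nrxds=2048
dev.ixl.0.iflib.override_ntxds=2048
dev.ixl.1.iflib.override_nrxds=2048
dev.ixl.1.iflib.override_ntxds=2048

# Then, at runtime, shorten the interrupt moderation delay and run
# the netmap bridge between the two ports:
#   sysctl dev.ixl.0.rx_itr=40
#   sysctl dev.ixl.1.rx_itr=40
#   bridge -i netmap:ixl0 -i netmap:ixl1
```

The loader.conf overrides alone accounted for the 9.3 -> 11.4 Mpps jump in Marko's measurements; the rx_itr change added the rest.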