[Bug 272944] Vnet performance issues

From: <bugzilla-noreply_at_freebsd.org>
Date: Fri, 04 Aug 2023 18:56:30 UTC
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=272944

            Bug ID: 272944
           Summary: Vnet performance issues
           Product: Base System
           Version: 13.2-RELEASE
          Hardware: Any
                OS: Any
            Status: New
          Severity: Affects Only Me
          Priority: ---
         Component: kern
          Assignee: bugs@FreeBSD.org
          Reporter: jSML4ThWwBID69YC@protonmail.com

Hello, 

During testing, I noticed that switching to VNET jails causes a significant
reduction in network performance. I tested using iperf3 from the jail to
another node on the local network. Here are the results.

This is the performance on a shared network interface. The test is run from
inside a freshly created jail with no services running.

Command: iperf3 -c 192.168.1.24 -4 -P 10
[SUM]   0.00-10.00  sec  42.6 GBytes  36.6 Gbits/sec   21             sender
[SUM]   0.00-10.00  sec  42.6 GBytes  36.6 Gbits/sec                  receiver

These are the results for the same jail using VNET.
[SUM]   0.00-10.00  sec  17.6 GBytes  15.1 Gbits/sec  363             sender
[SUM]   0.00-10.00  sec  17.5 GBytes  15.0 Gbits/sec                  receiver
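
One detail that stands out in the numbers: the VNET run shows 363 retransmits
vs. 21 on the shared interface. To separate raw per-packet overhead from
retransmit effects, a single-stream run may be worth comparing as well, e.g.:

Command: iperf3 -c 192.168.1.24 -4 -P 1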

Here is the relevant jail configuration for shared networking vs. VNET.
# Shared network configuration 
     interface = "lagg0";
     ip4.addr = 192.168.1.140;

# Vnet configuration
    $id     = "140";
    $ipaddr = "192.168.1.${id}";
    $mask   = "255.255.255.0";
    $gw     = "192.168.1.1";
    vnet;
    vnet.interface = "epair${id}b";
    exec.prestart   = "ifconfig epair${id} create up";
    exec.prestart  += "ifconfig epair${id}a up descr vnet-${name}";
    exec.prestart  += "ifconfig epair${id}a mtu 9000";
    exec.prestart  += "ifconfig epair${id}b mtu 9000";
    exec.prestart  += "ifconfig bridge0 addm epair${id}a up";
    exec.start      = "/sbin/ifconfig lo0 127.0.0.1 up";
    exec.start     += "/sbin/ifconfig epair${id}b ${ipaddr} netmask ${mask} up";
    exec.start     += "/sbin/route add default ${gw}";
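
For reference, here is how I sanity-check the wiring once the jail is up (a
quick sketch; the interface names follow the $id = 140 example above, and
<jailname> is a placeholder):

# On the host: bridge0 should list lagg0 and epair140a as members, and
# every segment should agree on mtu 9000.
ifconfig bridge0
ifconfig lagg0
ifconfig epair140a
# Inside the jail:
jexec <jailname> ifconfig epair140b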

Other data:
The underlying network is a 40Gb LACP lagg (lagg0) with VLANs on top,
configured as follows on the base system (IP addresses redacted). Note that
the VLANs are not used in the jail at all.

ifconfig_mlxen0="up mtu 9000"
ifconfig_mlxen1="up mtu 9000"
cloned_interfaces="lagg0 vlan0 vlan1 vlan2 bridge0"
ifconfig_lagg0="laggproto lacp laggport mlxen0 laggport mlxen1 IP-ADDR/24"
ifconfig_bridge0="addm lagg0 up"
ifconfig_vlan0="inet IP-ADDR/24 vlan 3 vlandev lagg0"
ifconfig_vlan1="inet IP-ADDR/24 vlan 4 vlandev lagg0"
ifconfig_vlan2="inet IP-ADDR/24 vlan 5 vlandev lagg0"
defaultrouter="192.168.1.1"

Epair interfaces:
For some reason the epair${id}(a|b) interfaces report a 10Gb link speed even
though they sit on a 40Gb bridge. Despite the reported 10Gb, the test sends
data faster than that, e.g. the 15.1 Gbits/sec result above.

My question is: why the huge performance difference?
Is my configuration wrong?
Is the VNET overhead simply that high?
Are there network interface flags I should be using for VNET (txcsum/rxcsum,
lro, tso, etc.)? See the sketch below.
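
To make the last question concrete, this is the kind of experiment I have in
mind (a sketch only; interface names follow the $id = 140 example above,
<jailname> is a placeholder, and I do not know which of these capabilities
the epair driver actually honors):

# List the capabilities each epair half advertises:
ifconfig -m epair140a
# Disable the common offloads on both halves, then re-run iperf3:
ifconfig epair140a -txcsum -rxcsum -tso -lro
jexec <jailname> ifconfig epair140b -txcsum -rxcsum -tso -lro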

I am reporting this as a bug because I assume a 50%+ reduction in performance
is not intended.

-- 
You are receiving this mail because:
You are the assignee for the bug.