[Bug 219672] vmxnet3: with LRO enabled under FreeBSD 11 as a router, outgoing speed of forwarded traffic becomes slower

bugzilla-noreply at freebsd.org bugzilla-noreply at freebsd.org
Wed May 31 03:44:32 UTC 2017


https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=219672

            Bug ID: 219672
           Summary: vmxnet3: with LRO enabled under FreeBSD 11 as a
                    router, outgoing speed of forwarded traffic becomes
                    slower
           Product: Base System
           Version: 11.0-RELEASE
          Hardware: amd64
                OS: Any
            Status: New
          Severity: Affects Some People
          Priority: ---
         Component: kern
          Assignee: freebsd-bugs at FreeBSD.org
          Reporter: jwolfe at vmware.com

The following vmxnet3 driver performance issue was reported to open-vm-tools in
https://github.com/vmware/open-vm-tools/issues/166

Since vmxnet3 is a community-maintained driver on FreeBSD, the issue is being
cross-filed with FreeBSD. This bug number will be forwarded to the reporter, who
will be encouraged to provide the information needed for this problem report.

=====
Thank you for the excellent open-vm-tools package!

With Large Receive Offload (LRO) enabled in a FreeBSD 11 virtual machine acting
as a router, the outgoing speed of forwarded traffic becomes about 500 times
slower with VMXNET3 on HP ProLiant G8/G9 servers (Broadcom BCM5719 Ethernet
controller chipset)!

We use pfSense (based on FreeBSD 11) virtual appliances under VMware ESXi hosts
on HP ProLiant G8/G9 servers; all virtual machines have 1-2 VMXNET3 adapters.

We have tried pfSense versions from 2.3.0-RELEASE to 2.4.0-BETA (built on Fri
May 26 19:15:04 CDT 2017), the Open-VM-Tools package 10.1.0,1, and FreeBSD
11.0-RELEASE-p10.

We have tried VMware ESXi versions from 6.0 to 6.5.0 with all Hewlett-Packard
drivers (the highest version of ESXi that we have used is HPE Customized Image
ESXi 6.5.0 version 650.9.6.5.27, released in May 2017 and based on ESXi 6.5.0
Vmkernel Release Build 5146846).

Regardless of the pfSense or VMware version, on FreeBSD 11.0-RELEASE-p10, if I
un-check the pfSense option "Disable hardware large receive offload" (thereby
enabling hardware large receive offload), the virtual machines that are routed
via pfSense (FreeBSD) have very low upload speed (about 1/500th of their normal
speed) or drop connections. To get their speed back to normal, I have to check
this option again.
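On plain FreeBSD, the pfSense checkbox corresponds to toggling the interface's
LRO capability flag with ifconfig; a minimal sketch, assuming the VMXNET3
interface is named vmx0 (the name is an assumption and varies per system):

```shell
# Disable LRO on the vmxnet3 interface (the equivalent of checking
# pfSense's "Disable hardware large receive offload" option).
# The interface name vmx0 is an assumption; adjust for your system.
ifconfig vmx0 -lro

# Re-enable LRO (the configuration that triggers the slowdown here):
ifconfig vmx0 lro
```

Note that changes made with ifconfig do not persist across reboots; pfSense
applies its checkbox setting at boot instead.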

The other hardware offload options do not cause problems -- I have them
unchecked, enabling hardware offload of checksums and TCP segmentation.
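Which offload capabilities are currently active can be read from the
interface's options flags; a diagnostic sketch, again assuming the interface
name vmx0:

```shell
# Show the interface's enabled capabilities; with LRO enabled the
# "options=" line lists LRO alongside flags such as TXCSUM, RXCSUM
# and TSO4. The interface name vmx0 is an assumption.
ifconfig vmx0 | grep -i options
```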

The Broadcom BCM5719 chipset, which supports Large Receive Offload (LRO), is
quite cheap and ubiquitous, and was released in 2013. VMware also added
hardware LRO support to VMXNET3 in 2013. Windows has supported LRO since
Windows Server 2012 and Windows 8 (2012). FreeBSD has supported it since
version 8 (2009).

Open-VM-Tools version 10.1.5 is already available at
https://github.com/vmware/open-vm-tools/ ; maybe it fixes the issue with Large
Receive Offload (LRO) under FreeBSD with VMXNET3?

I have seen forum posts where people discourage using the VMXNET3 adapter in
favour of the E1000 adapter; quoting from
https://forum.pfsense.org/index.php?topic=98309.0 : "We saw much better
performance from the E1000 than VMXnet2 and 3".

There is a VMware blog post on the benefits of LRO for Linux and Windows; see
https://blogs.vmware.com/performance/2015/06/vmxnet3-lro.html . According to
this post, LRO saves valuable CPU cycles and is also very beneficial for
VM-to-VM local traffic, where VMs located on the same host communicate with
each other through a virtual switch.

I suspect that the problem is somewhere in the open-vm-tools-nox11 package;
maybe it includes VMware drivers for VMXNET3 that are not fully compatible or
not fully stable. Windows machines on our servers, connected to the Internet
either directly or via pfSense, have LRO enabled and show no performance
degradation.

There is definitely an incompatibility issue in open-vm-tools with VMXNET3
under FreeBSD when Large Receive Offload (LRO) is enabled. The other hardware
TCP offloads work properly, and VMXNET3 under Windows handles LRO correctly.

-- 
You are receiving this mail because:
You are the assignee for the bug.
