Re: bhyve VM not getting as much upload bandwidth as the host

From: Odhiambo Washington <odhiambo_at_gmail.com>
Date: Mon, 14 Aug 2023 11:32:08 UTC
On Mon, Aug 14, 2023 at 12:38 PM Patrick M. Hausen <hausen@punkt.de> wrote:

> Hi all,
>
> > Am 14.08.2023 um 11:30 schrieb Miroslav Lachman <000.fbsd@quip.cz>:
> >
> > On 14/08/2023 10:49, Odhiambo Washington wrote:
> >> I had the following in sysctl.conf:
> >> net.link.tap.up_on_open=1
> >> net.link.bridge.pfil_member=0
> >> net.link.bridge.pfil_bridge=0
> >> net.link.bridge.pfil_local_phys=1
> >> So I only did:
> >> sysctl -w net.link.bridge.pfil_onlyip=0
> >> sysctl -w net.link.bridge.pfil_local_phys=0
> >> Nothing has changed for the linux VM.
> >> Windows11 VM is getting download/upload speed of 40Mbps/37Mbps while a
> >> Debian12 VM is getting download/upload of 37Mbps/45Kbps.
> >> Maybe there is an issue with Linux itself?
> >
> > I never had this solved. Even my FreeBSD guest on a FreeBSD host with
> > VirtualBox is as slow as a few hundred kB/s.
> > It was like 10Mbps with bhyve. I only use VMs for testing, but installing
> > packages is always so slow. So you are not alone. I would really like to
> > know how to improve the network speed in a virtualized environment.
>
> This looks weird to me. I run lots of VMs in production on TrueNAS CORE
> - essentially FreeBSD 13-STABLE with bhyve and all of them get near gigabit
> speed with bridged networking.
>
> Guests:
>
> Windows
> Ubuntu
> FreeBSD (OPNsense)
>
> Specifically the OPNsense VM can route 700-800 Mbit/s across gigabit
> interfaces.
>
> All my VMs use VirtIO network drivers - do yours?
>
> Odhiambo, another minor thing:
>
> > ifconfig_em1="inet w.x.y.z netmask 255.255.255.0 mtu 1492 -tso -lro
> > -txcsum -rxcsum"
>
> A bridge member interface must not have a layer 3 address. You must put
> the IP address on the bridge interface itself and only configure
>
> ifconfig_em1="mtu 1492 -tso -lro -txcsum -rxcsum up"
>

em1 is NOT a bridge member. It's the host's interface that is connected to
the Internet.
So the suggestion from @Wanpeng was "you have to turn off TSO, LRO, TXCSUM,
etc. for the host interface which is bridged to the VM", which I understood
to mean my main interface.
My bridge configuration is as follows:
cloned_interfaces="bridge0 tap0 tap1 tap2 tap3"
ifconfig_bridge0_name="em1bridge"
ifconfig_em1bridge="addm em1 addm tap0 addm tap1 addm tap2 addm tap3 up"
ifconfig_tap0="inet 172.16.1.1/24"
ifconfig_tap1="inet 172.16.2.1/24"
ifconfig_tap2="inet 172.16.3.1/24"
ifconfig_tap3="inet 172.16.4.1/24"

When I create a VM and tie it on tap0, I give the VM an IP like
172.16.1.10/24, with a gateway of 172.16.1.1.
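For completeness, here is a rough sketch of the equivalent runtime commands, i.e. roughly what the rc.conf lines above do at boot (untested as typed here; the offload flags follow the earlier suggestion, and all interface names match my config):

```shell
# Create the bridge and rename it (normally done at boot via cloned_interfaces)
ifconfig bridge0 create
ifconfig bridge0 name em1bridge

# Create one tap per VM (tap0 shown; tap1-tap3 are the same)
ifconfig tap0 create

# Turn off offloads on the physical NIC, per the earlier suggestion
ifconfig em1 -tso -lro -txcsum -rxcsum up

# Add the physical NIC and the tap as bridge members, bring the bridge up
ifconfig em1bridge addm em1 addm tap0 up

# Host-side gateway address for VMs attached to tap0
ifconfig tap0 inet 172.16.1.1/24

# Verify: the bridge should list its members, and em1 should carry no inet address
ifconfig em1bridge
ifconfig em1
```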

> But this is most probably not connected to your performance problem. It
> just breaks other things if you have an IP address on a bridge member ;-)
>

So is the setup above problematic, or am I on the right track?


> Why are you lowering the MTU of em1?

I don't quite remember why I did that; it's a relic from the past :)


> Does that carry over to the bridge interface?
>

Of course, because em1 is a bridge member, no?


> What's the MTU of the emulated interface in your Linux VM?
>

Removing the MTU change on em1 has resolved the problem. The Debian12 VM
now gets download/upload of 45Mbps/39Mbps!
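For anyone who lands on this thread later: as far as I understand if_bridge, all members end up sharing one MTU, so the mtu 1492 on em1 was inherited along the whole bridged path. A quick way to confirm the values line up (the guest interface name is just an example):

```shell
# MTU on the physical NIC, the bridge, and a tap member (host side)
ifconfig em1 | grep -o 'mtu [0-9]*'
ifconfig em1bridge | grep -o 'mtu [0-9]*'
ifconfig tap0 | grep -o 'mtu [0-9]*'

# Inside the Debian guest (VirtIO interface; name may differ, e.g. enp0s5)
# ip link show enp0s5 | grep -o 'mtu [0-9]*'
```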

One last question for today (although I should just go and RTFM): Do I
really need several tap devices? Can't I just have all my VMs on tap0, each
with its own IP in that range?
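In case it helps a future reader: as far as I know bhyve still needs a dedicated tap per VM, but the taps can all be members of one bridge carrying a single host-side address, so every VM lives in one /24. A sketch (untested; addresses are just examples):

```shell
# /etc/rc.conf fragment: one bridge, one tap per VM, one gateway address
cloned_interfaces="bridge0 tap0 tap1 tap2"
ifconfig_bridge0="inet 172.16.1.1/24 addm tap0 addm tap1 addm tap2 up"
ifconfig_tap0="up"
ifconfig_tap1="up"
ifconfig_tap2="up"
# Each VM then gets 172.16.1.x/24 with gateway 172.16.1.1
```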

-- 
Best regards,
Odhiambo WASHINGTON,
Nairobi, KE
+254 7 3200 0004/+254 7 2274 3223
"Oh, the cruft.", egrep -v '^$|^.*#' ¯\_(ツ)_/¯ :-)
[How to ask smart questions:
http://www.catb.org/~esr/faqs/smart-questions.html]