Re: FreeBSD TCP (with iperf3) comparison with Linux

From: Murali Krishnamurthy <muralik1_at_vmware.com>
Date: Fri, 30 Jun 2023 15:03:44 UTC
Richard,

Please see my answers to your queries below.


Q. Since you mention two hypervisors - what is the physical network topology in between these two servers? What theoretical link rates would be attainable?


Here is the topology

The iperf3 endpoints are on two different hypervisors.



+------------+  +--------------+        +------------+  +--------------+
| Linux VM1  |  | BSD 13 VM 1  |        | Linux VM2  |  | BSD 13 VM 2  |
+------------+  +--------------+        +------------+  +--------------+
      |                |                      |                |
+------------------------------+        +------------------------------+
|       ESX Hypervisor 1       |        |       ESX Hypervisor 2       |
+------------------------------+        +------------------------------+
               |                                       |
               +-- 10G link connected via L2 Switch --+




The NICs on both ESX servers are 10G and have the following config:



Name    PCI           Driver  Link  Speed      Duplex       MAC Address        MTU   Description
vmnic4  0000:81:00.0  ixgben  Up    10000Mbps  Full Duplex  a0:36:9f:61:ca:d4  1500  Intel(R) Ethernet 10G 2P X520 Adapter



iperf3 was run between,

  1.  BSD 13 VM 1 <-> BSD 13 VM 2
  2.  Linux VM 1 <-> Linux VM 2



BDP for 16MB socket buffer: 16 MB * (1000 ms / 100 ms latency) * 8 bits / 1024 = 1.25 Gbps

So theoretically we should see a bitrate close to 1.25 Gbps, and we see Linux reaching close to this number.
But BSD is not able to do that.
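
The same arithmetic as a minimal sketch (note the 1024 divisor above treats Gbps as binary; with decimal units the cap works out to ~1.34 Gbps):

# Back-of-the-envelope BDP check: a socket buffer of S bytes allows at
# most S bytes in flight per RTT, capping throughput at S * 8 / RTT.
buf_bytes = 16 * 1024 * 1024   # 16 MB socket buffer
rtt_s = 0.100                  # 100 ms ping RTT

cap_bps = buf_bytes * 8 / rtt_s
print(f"cap: {cap_bps / 1024**3:.2f} Gbps (binary) / {cap_bps / 1e9:.2f} Gbps (decimal)")
# -> cap: 1.25 Gbps (binary) / 1.34 Gbps (decimal)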


Q. Did you run iperf3? Did the transmitting endpoint report any retransmissions between Linux or FBSD hosts?

Yes, we used iperf3. I see Linux doing far fewer retransmissions than BSD.
On BSD, the best performance was around a 600 Mbps bitrate, with around 32K retransmissions for that run.
On Linux, the best performance was around a 1.15 Gbps bitrate, with only about 2K retransmissions.
So, as you pointed out, the number of retransmissions on BSD could be the real issue here.
Is there a way to reduce this packet loss by fine-tuning some parameters, w.r.t. the ring buffer or any other areas?
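
For a sense of scale, here is a rough conversion of those retransmission counts into loss probabilities; the 30 s test duration and 1448-byte MSS are assumptions for illustration, not recorded values:

# Rough loss-rate estimate from iperf3 retransmission counts.
# Assumed: 30 s test duration and a 1448-byte MSS (not recorded values).
def loss_prob(rate_bps: float, duration_s: float, retrans: int, mss: int = 1448) -> float:
    segments = rate_bps * duration_s / 8 / mss  # total segments sent
    return retrans / segments

print(f"BSD:   ~{loss_prob(600e6, 30, 32_000):.2%} loss")   # ~2.1%
print(f"Linux: ~{loss_prob(1.15e9, 30, 2_000):.3%} loss")   # ~0.07%
# Even under these rough assumptions, BSD's loss rate is ~30x higher.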

Q. Did you contrast your cc_cubic findings with cc_newreno?
NewReno does not perform well on long-RTT networks; we could only reach a 200 Mbps bitrate even with no packet loss.
So we did not want to pursue NewReno for long-RTT connections.
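
That matches what the AIMD arithmetic predicts. A rough sketch of NewReno's recovery time after a single loss (the 1448-byte MSS is an assumed value; the window and RTT are from our setup):

# Rough NewReno recovery estimate: after one loss, cwnd is halved and then
# grows by ~1 MSS per RTT in congestion avoidance, so regrowing from W/2
# back to W takes about (W/2) RTTs.
mss = 1448                        # assumed MSS for a 1500-byte MTU path
rtt_s = 0.100                     # 100 ms RTT
window_bytes = 16 * 1024 * 1024   # 16 MB socket buffer / target window

w_segments = window_bytes / mss             # window in segments (~11586)
recovery_s = (w_segments / 2) * rtt_s       # one MSS per RTT to regrow
print(f"~{recovery_s:.0f} s to refill the window after a single loss")
# -> ~579 s, so even rare losses keep the pipe half-empty for minutes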


Hope the answers are clear. Please let me know if you have any other suggestions to reduce the packet loss.

Regards
Murali



On 29/06/23, 9:44 PM, "Scheffenegger, Richard" <rscheff@freebsd.org> wrote:
Hi Murali,


> Hello FreeBSD Transport experts,
>
> We are evaluating the performance of a FreeBSD 13 VM on an ESX hypervisor in a long-RTT setup, and happened to compare it with a Linux VM on the same hypervisor.
> We see substantially better performance with Linux, which gets close to the BDP limit, whereas BSD 13 does not fill up the pipe.
> We are trying to figure out what could lead to such a huge difference and feel we could be missing something here.
>
> Could you please help us to know if there is a way to make it perform better?
>
> Setup details:
>
> We have 2 ESX hypervisors where 2 VMs (one FreeBSD 13 and one Ubuntu 23.04/Linux kernel 6.2) were launched on each hypervisor.
> Then we ran iperf between,
> a)         BSD 13 <-> BSD 13
> b)        Ubuntu <-> Ubuntu

Since you mention two hypervisors - what is the physical network topology in between these two servers? What theoretical link rates would be attainable?

Or did you want to say that each test was run within one and the same hypervisor, presumably with the same hardware and virtual switch/hypervisor in both test scenarios?

So the theoretical bandwidth would be whatever rate packets can be shuffled in memory between VM1, vSwitch (with delay simulator), and VM2?


Did you run iperf3? Did the transmitting endpoint report any retransmissions between Linux or FBSD hosts?

In theory, in a true back-to-back, lossless environment, it should be fairly irrelevant which CC you are running. If there are packet drops (retransmissions) in the iperf3 output, please compare their frequency between the Linux and FBSD sides. A slightly higher incidence of packet drops (e.g. ring-buffer overflows used to be a prominent issue at high link speeds) could explain what appears to be such a huge differential: classic TCP throughput scales with the inverse square root of the loss probability, so a minor change in loss rate can have a huge impact.
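
As an illustration of that sensitivity, here is a minimal sketch using the Mathis et al. model (rate ~ MSS/RTT * 1.22/sqrt(p)); the MSS and loss probabilities are illustrative assumptions, not measurements from this setup:

import math

# Mathis et al. model for loss-limited, Reno-style TCP throughput:
#   rate ~= (MSS / RTT) * 1.22 / sqrt(p)
def mathis_bps(mss_bytes: int, rtt_s: float, p: float) -> float:
    return (mss_bytes * 8 / rtt_s) * 1.22 / math.sqrt(p)

mss, rtt = 1448, 0.100            # assumed MSS; 100 ms RTT from this setup
for p in (1e-6, 1e-5, 1e-4):      # illustrative loss probabilities
    print(f"p={p:.0e}: ~{mathis_bps(mss, rtt, p) / 1e6:.0f} Mbps")
# Each 10x increase in loss probability costs ~3.2x (sqrt(10)) in
# throughput, so a small extra drop source (e.g. ring-buffer overflow)
# can easily halve the achievable rate.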

Did you contrast your cc_cubic findings with cc_newreno?

> Even though the network environment was the same in both cases, we see Ubuntu performing much better.
>
> Below are connection parameters:
> Socket buffer: 16MB
> TCP CC Algo: Cubic. We used this as it is suitable for long fat networks.
> Ping RTT:  100 ms between the two end points.
> We kept all other parameters to default on both Linux and BSD.
>
> BDP for 16MB socket buffer: 16 MB * (1000 ms / 100 ms latency) * 8 bits / 1024 = 1.25 Gbps
>
> Ubuntu consistently hits a bitrate of around 1 Gbps, almost reaching the BDP limit.
> FreeBSD 13 shows a bitrate of only 300-600 Mbps, so it does about half as well as Linux.
> With a smaller socket buffer of 4MB, FreeBSD and Linux perform the same, both consistently meeting the ~300 Mbps BDP limit.
> Only the larger socket buffer seems to have an issue.
>
> Please let us know if there are ways to fine-tune the system parameters to make BSD perform better.
> Any other suggestions or queries are welcome.
>
> Regards
> Murali