Re: Low performance of BBR compared to cubic

From: Zhenlei Huang <zlei_at_FreeBSD.org>
Date: Tue, 21 Nov 2023 02:02:08 UTC
> Zhenlei,
>
> Maybe you want to characterize the TX vs. RX codepaths independently of
> each other, and run tests with different stacks on either end (could be
> tuned via setsockopt, or tcpsso; or maybe by toggling the default in
> between starting the iperf3 server side, before running the client, if
> this is done via loopback).

I'll take care of that.
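
For the setsockopt route, I assume it is roughly the following (an
untested sketch; the alternate stack has to be available first, e.g. via
kldload tcp_rack):

    /*
     * Sketch: pin a test socket to a specific TCP stack ("freebsd",
     * "rack" or "bbr") via TCP_FUNCTION_BLK, instead of toggling the
     * net.inet.tcp.functions_default sysctl between runs.
     */
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <string.h>
    #include <err.h>

    static void
    set_tcp_stack(int s, const char *name)
    {
            struct tcp_function_set tfs;

            memset(&tfs, 0, sizeof(tfs));
            strlcpy(tfs.function_set_name, name, sizeof(tfs.function_set_name));
            if (setsockopt(s, IPPROTO_TCP, TCP_FUNCTION_BLK,
                &tfs, sizeof(tfs)) == -1)
                    err(1, "setsockopt(TCP_FUNCTION_BLK, %s)", name);
    }

For iperf3 itself, flipping net.inet.tcp.functions_default between
starting the server and the client (or using tcpsso on the established
connections) is probably easier, as you suggested.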

>
> Currently, the expectation is that the TX codepath of RACK is more
> optimized vs. the RX codepath - thus a RACK sender to a base stack
> receiver should show the highest performance.

So clients will benefit when RACK is enabled on the server side.
That's interesting.

>
> Best regards,
>    Richard
>
>
> On 20.11.2023, at 13:29, Scheffenegger, Richard wrote:
> >
> >
> > BBR has not been looked after for quite a while (and there are no
> > plans to invest there).
> >
> > Among other things, the blatant disregard for flow fairness which BBRv1
> > shows makes it a poor protocol for general use.
> >
> > Similar issues also show up with BBRv2 (the current version), but it is
> > still not considered a reasonable and stable enough protocol - thus the
> > IETF is working on BBRv3.
> >
> > Both of these would require an effective re-implementation of the BBR stack.
> >
> >
> >
> > RACK is expected to perform better across congested paths, and in the
> > presence of various pathological network issues. However, its receive
> > path is certainly not as performant as the base stack's currently.
> >
> > In short: If you want to use BBR, please don't use the current code, which
> > is at best a variation of BBRv1 - and it's generally known that this
> > version of BBR is not "good".
> >
> >
> > (I presume your testing was across a green field / zero-loss network,
> > with ample bandwidth - maybe even the loopback interface).

Yes, tested with lo0 and one thread.

> >
> > Richard
> >
> >
> >
> > -----Original Message-----
> >
> >
> > Hi,
> >
> > While testing TCP RACK functions, I also tested BBR, BTW.
> >
> > This is a quick test with iperf3 on bare metal (an old MBP: i5, 2 cores /
> > 4 threads).
> > The kernel is 15-CURRENT with debug options disabled. Following are the
> > performance results:
> >
> > stack           bitrate         cwnd
> > freebsd:        37.2 Gbits/sec  1.32 MBytes
> > RACK:           27.9 Gbits/sec  1.34 MBytes
> > BBR:            2.34 Gbits/sec  223 KBytes
> >
> > For the freebsd and RACK stacks, the CC is cubic.
> >
> > The last column is Cwnd. BBR's Cwnd looks quite small.
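
Side note on the Cwnd column: to cross-check the numbers iperf3 prints, I
assume the congestion window can also be read directly off a connected
socket via the TCP_INFO socket option, something like this (untested
sketch):

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <stdio.h>
    #include <err.h>

    static void
    print_cwnd(int s)
    {
            struct tcp_info ti;
            socklen_t len = sizeof(ti);

            /* TCP_INFO returns a snapshot of the connection state;
             * tcpi_snd_cwnd is the sender's congestion window (bytes). */
            if (getsockopt(s, IPPROTO_TCP, TCP_INFO, &ti, &len) == -1)
                    err(1, "getsockopt(TCP_INFO)");
            printf("snd_cwnd: %u bytes\n", (unsigned)ti.tcpi_snd_cwnd);
    }
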
> >
> > There's also a report in the Taiwan FreeBSD Telegram chat group, but
> > without performance details.
> >
> > I believe there is something wrong with BBR. This is not reasonably good
> > performance compared with other TCP congestion control algorithms.
> >
> > Or am I missing something?
> >
> > Best regards,
> > Zhenlei
> >
> >
>