Re: Slow WAN traffic to FreeBSD hosts but not to Linux hosts---how to debug/fix?

From: Paul Mather <>
Date: Wed, 01 Feb 2023 20:29:37 UTC
On Jan 31, 2023, at 9:46 PM, David <> wrote:

> On 1/31/23 13:38, Marek Zarychta wrote:
>> W dniu 31.01.2023 o 19:31, Paul Mather pisze:
>>>> While playing with different mod_cc(4) algorithms might bring some improvement, to get a real boost I'd suggest enabling tcp_rack(4) if feasible.
>>> I am interested in trying this out, but believe it is more feasible in my case for the -STABLE and -CURRENT systems I am using, not so much for the -RELEASE systems that are kept up to date via binary freebsd-update updates.  My reading of the tcp_rack(4) man page is that you have to build a custom kernel as, unlike the cc_* congestion control algorithms, the loadable tcp_rack module is not built by default.  Is that an accurate reading?
>> Yes, this gift from Netflix is probably better suited for -STABLE and -CURRENT, as it is easier to set up there. There is an excellent, up-to-date article about it by Klara Systems writers[1]. In my experience, tcp_rack(4) is well suited to congested, lossy, or redundant network paths where losses, duplicated packets, or races between packets occur. It is not a panacea, but it is a very performant TCP stack based on a _fair_ algorithm. In some instances, it might help you saturate the bandwidth of the link. The TCP stack can be loaded/unloaded/changed on the fly. In FreeBSD 14-CURRENT you can change it on an active socket with the tcpsso(8) utility; in FreeBSD 12 and 13 you have to restart the app bound to the socket.
>> Please feel free to play with TCP stacks and congestion algos with the help of benchmarks/iperf3 to find out what prevents the link from being saturated and give us some feedback here.
>> [1]
>> Cheers
> I compiled a custom kernel (releng/13.1) and followed the Klara Systems instructions. The results are quite good. I would hope the RACK stack will be included in the upcoming 13.2 release, as it is a significant upgrade.

I heartily concur with this.  It would be very nice if the extra TCP stacks were available and able to be loaded in the upcoming 13.2 release.

As I mentioned recently in this thread, I built and enabled the extra TCP stacks on a -CURRENT system and got much better performance than with the default "freebsd" stack.  I've just done the same on a 13-STABLE system and got the same result.  Using the tcp_bbr stack appears to solve the problem I was having.
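For anyone wanting to try the same thing, this is roughly how switching stacks at runtime works once the module exists; the module name and sysctl knobs below are the standard ones described in tcp_bbr(4) and the Klara article, so double-check them against your particular release:

```shell
# Load the BBR stack module (requires a kernel built with options TCPHPTS)
kldload tcp_bbr

# List the TCP stacks the running kernel knows about
sysctl net.inet.tcp.functions_available

# Make BBR the default stack for new connections
sysctl net.inet.tcp.functions_default=bbr

# To make this persistent across reboots:
#   /boot/loader.conf:  tcp_bbr_load="YES"
#   /etc/sysctl.conf:   net.inet.tcp.functions_default=bbr
```

Existing connections keep whatever stack they were created with; only new sockets pick up the new default (unless you move them with tcpsso(8) on 14-CURRENT, as Marek noted).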

It would be great if the TCPHPTS and RATELIMIT options could be added to the GENERIC kernel and WITH_EXTRA_TCP_STACKS enabled by default in src.conf.  That way, the tcp_rack and tcp_bbr modules would be built by default and people would have the option of loading them on -RELEASE systems that are updated via freebsd-update, without having to build a custom kernel.
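Until then, the build-it-yourself route is (as I understand it from the Klara article) roughly the following; treat the exact file names and knobs as a sketch to verify against your own source tree:

```shell
# /etc/src.conf -- ask the build to produce the extra TCP stack modules
WITH_EXTRA_TCP_STACKS=1

# Custom kernel config, e.g. sys/amd64/conf/RACK, based on GENERIC:
#   include GENERIC
#   ident   RACK
#   options TCPHPTS
```

After the usual `make buildkernel KERNCONF=RACK && make installkernel KERNCONF=RACK` and a reboot, the tcp_rack and tcp_bbr modules should be available to load, at which point the runtime steps above apply.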