Fast recovery ssthresh value

Scheffenegger, Richard Richard.Scheffenegger at netapp.com
Thu Sep 10 09:49:17 UTC 2020


Hi Liang,

Yes, you are absolutely correct about this observation. SACK loss recovery will only send one MSS per received ACK right now - and when ACK thinning is present, it will fail to recover all the missing packets in time, eventually receiving no more ACKs to clock out further retransmissions...

I have a Diff in review to implement Proportional Rate Reduction (PRR):

https://reviews.freebsd.org/D18892

It should address not only the ACK thinning issue, but also the issue that the current SACK loss recovery has to wait until pipe drops below ssthresh before the retransmissions are clocked out. And even then, they are clocked out at the same rate as the incoming ACKs - the same rate at which the overload happened (barring any ACK thinning); as a secondary effect, it was observed that this behavior, too, can lead to self-inflicted loss - of the retransmissions.
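
For illustration, the core of PRR (RFC 6937) per incoming ACK is roughly the following - a simplified sketch of the RFC pseudocode in kernel style, not the actual D18892 code (prr_delivered, prr_out, delivered_data and recover_fs are illustrative names):

    /*
     * prr_delivered:  bytes newly cum-ACKed or SACKed in this recovery
     * prr_out:        bytes (re)transmitted in this recovery
     * delivered_data: bytes newly delivered by this ACK
     * recover_fs:     FlightSize at the start of recovery
     */
    if (pipe > ssthresh) {
            /* Reduction phase: spread the cwnd reduction over the
             * whole recovery, proportional to the delivered data. */
            sndcnt = howmany(prr_delivered * ssthresh, recover_fs) -
                prr_out;
    } else {
            /* Catch-up phase (PRR-SSRB): grow back toward ssthresh,
             * bounded by what this ACK newly delivered. */
            limit = imax(prr_delivered - prr_out, delivered_data) +
                maxseg;
            sndcnt = imin(ssthresh - pipe, limit);
    }
    sndcnt = imax(sndcnt, 0);

The important property for your scenario is that sndcnt scales with the amount of newly delivered/SACKed data per ACK, not with the number of ACKs.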

If you have the ability to patch your kernel with D18892 and observe how it reacts in your dramatic ACK thinning scenario, that would be good to know! The assumption of the patch was that - as per TCP RFC requirements - there is one ACK for each received out-of-sequence data segment, and that ACK drops / thinning do not happen on the massive scale you describe.

Best regards,

Richard Scheffenegger

-----Original Message-----
From: owner-freebsd-transport at freebsd.org <owner-freebsd-transport at freebsd.org> On Behalf Of Liang Tian
Sent: Wednesday, 9 September 2020 19:16
To: Scheffenegger, Richard <Richard.Scheffenegger at netapp.com>
Cc: FreeBSD Transport <freebsd-transport at freebsd.org>
Subject: Re: Fast recovery ssthresh value

Hi Richard,

Thanks for the explanation and sorry for the late reply.
I've been investigating SACK loss recovery and I think I'm seeing an issue similar to the ABC L value issue that I reported previously (https://reviews.freebsd.org/D26120), and I do believe there is a deviation from RFC 3517:
The issue happens when a DupAck is received during SACK loss recovery in the presence of ACK thinning, or with a receiver that has LRO enabled. In that case the SACK block edges can expand by more than 1 SMSS (we've seen 30*SMSS), i.e. a single DupAck can decrement `pipe` by more than 1 SMSS.
In RFC 3517:

    (C) If cwnd - pipe >= 1 SMSS, the sender SHOULD transmit one or more segments...
        (C.5) If cwnd - pipe >= 1 SMSS, return to (C.1)

So based on the RFC, the sender should be able to send more segments when such a DupAck is received, because of the big change to `pipe`.
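
In loop form, step (C) is essentially the following (a sketch; nextseg() and transmit() stand in for the RFC's NextSeg() rules and the actual output path):

    /* Retransmissions are gated only by cwnd - pipe, so a single
     * DupAck that shrinks pipe by 30*SMSS should allow up to ~30
     * segments to go out. */
    while (cwnd - pipe >= maxseg) {
            seg = nextseg();        /* (C.1) RFC 3517 NextSeg() */
            if (seg == NULL)
                    break;          /* nothing eligible to send */
            transmit(seg);          /* (C.2)/(C.3) */
            pipe += maxseg;         /* (C.4) */
    }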

In the current implementation, the cwin variable, which controls the amount of data that can be transmitted based on the new information, is dictated by snd_cwnd, and snd_cwnd is incremented by only 1 SMSS for each DupAck received. I believe this effectively limits the retransmission triggered by each DupAck to 1 SMSS - a deviation:
    cwin =
        imax(min(tp->snd_wnd, tp->snd_cwnd) - sack_bytes_rxmt, 0);
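
To put numbers on it (illustrative, not from a trace):

    /* One DupAck SACKs 30 new segments (ACK thinning / LRO):
     *     pipe            -= 30 * SMSS   (scoreboard update)
     *     tp->snd_cwnd    +=  1 * SMSS   (one DupAck, one step)
     * cwin therefore grows by only 1 SMSS, so a single
     * retransmission is clocked out where RFC 3517 step (C) would
     * permit up to 30. */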

As a result, SACK is not doing enough recovery in this scenario and loss has to be recovered by RTO.
Again, I'd appreciate feedback from the community.

Regards,
Liang Tian




On Sun, Aug 23, 2020 at 3:56 PM Scheffenegger, Richard <Richard.Scheffenegger at netapp.com> wrote:
>
> Hi Liang,
>
> In SACK loss recovery, you can recover up to ssthresh (prior cwnd/2 [or 70% in case of cubic]) lost bytes - at least in theory.
>
> In comparison, (New)Reno can only recover one lost packet per window, and then keeps on transmitting new segments (ack + cwnd), even before the retransmitted packet itself is acked.
>
> For historic reasons, the semantics of the cwnd variable are overloaded during loss recovery: it doesn't "really" indicate cwnd, but rather indicates if/when retransmissions can happen.
>
>
> In both cases (also the simple one, with only one packet loss), cwnd should be equal (or nearly equal) to ssthresh by the time loss recovery is finished - but NOT before! While it may look like slow-start, the value of the cwnd variable really increases only by acked_bytes per ACK (not acked_bytes + SMSS), since the left edge (snd_una) doesn't move right - unlike during slow-start. But numerically, these different phases (slow-start / SACK loss recovery) may appear very similar.
>
> You could check this using the (loadable) SIFTR module, which captures t_flags (indicating if cong/loss recovery is active), ssthresh, cwnd, and other parameters.
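>
> For example (from memory; see siftr(4) for the exact knob names):
>
>     # kldload siftr
>     # sysctl net.inet.siftr.logfile=/var/log/siftr.log
>     # sysctl net.inet.siftr.enabled=1
>     ... reproduce the transfer ...
>     # sysctl net.inet.siftr.enabled=0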
>
> That is at least how things are supposed to work; or have you investigated the timing and behavior of SACK loss recovery and found a deviation from RFC 3517? Note that FreeBSD currently has not fully implemented RFC 6675 support (which deviates slightly from 3517 under specific circumstances); I have a patch pending to implement the 6675 rescue retransmission, but haven't tweaked the other aspects of 6675 vs. 3517.
>
> BTW: While freebsd-net is not the wrong list per se, TCP-, UDP-, and SCTP-specific questions can also be posted to freebsd-transport, which is more narrowly focused.
>
> Best regards,
>
> Richard Scheffenegger
>
> -----Original Message-----
> From: owner-freebsd-net at freebsd.org <owner-freebsd-net at freebsd.org> On 
> Behalf Of Liang Tian
> Sent: Sunday, 23 August 2020 00:14
> To: freebsd-net <freebsd-net at freebsd.org>
> Subject: Fast recovery ssthresh value
>
> Hi all,
>
> When 3 DupAcks are received and TCP enters fast recovery, if SACK is used, CWND is set to maxseg:
>
> 2593                     if (tp->t_flags & TF_SACK_PERMIT) {
> 2594                         TCPSTAT_INC(
> 2595                             tcps_sack_recovery_episode);
> 2596                         tp->snd_recover = tp->snd_nxt;
> 2597                         tp->snd_cwnd = maxseg;
> 2598                         (void) tp->t_fb->tfb_tcp_output(tp);
> 2599                         goto drop;
> 2600                     }
>
> Otherwise (SACK is not in use), CWND is set to maxseg before
> tcp_output() and then set back to snd_ssthresh + inflation:
> 2601                     tp->snd_nxt = th->th_ack;
> 2602                     tp->snd_cwnd = maxseg;
> 2603                     (void) tp->t_fb->tfb_tcp_output(tp);
> 2604                     KASSERT(tp->snd_limited <= 2,
> 2605                         ("%s: tp->snd_limited too big",
> 2606                         __func__));
> 2607                     tp->snd_cwnd = tp->snd_ssthresh +
> 2608                          maxseg *
> 2609                          (tp->t_dupacks - tp->snd_limited);
> 2610                     if (SEQ_GT(onxt, tp->snd_nxt))
> 2611                         tp->snd_nxt = onxt;
> 2612                     goto drop;
>
> I'm wondering: in the SACK case, should CWND be set back to ssthresh (which has been reduced in cc_cong_signal() a few lines above) before line 2599, like in the non-SACK case, instead of doing slow start from maxseg?
> I read RFC 6675 and a few others, and it looks like that's the case. I'd appreciate your opinion, again.
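>
> For concreteness, the alternative I'm asking about would be something like this (an untested sketch, not a proposed patch):
>
>     tp->snd_recover = tp->snd_nxt;
>     /* Keep cwnd at the value ssthresh was cut to in cc_cong_signal()
>      * and let cwnd - pipe gate the retransmissions, instead of
>      * growing cwnd from a single maxseg. */
>     tp->snd_cwnd = tp->snd_ssthresh;
>     (void) tp->t_fb->tfb_tcp_output(tp);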
>
> Thanks,
> Liang

