reno cwnd growth while app limited...
Richard.Scheffenegger at netapp.com
Wed Sep 11 12:26:02 UTC 2019
Thanks Randall for the quick response.
I was just re-reading RFC7661 (NewCWV)....
While I still like the idea of decaying cwnd over time when a flow doesn't utilize its cwnd, I think that, as a first step, constraining the growth of cwnd when it is >= 2 * flightsize (instantaneous) may already be a good thing.
Basically, a simplistic version of the two paragraphs of section 4.4 https://tools.ietf.org/html/rfc7661#section-4.4
In NewCWV, additional timers and smoothing (three recent, per-RTT samples of flightsize) are used... But I don't want to mess with the tcpcb right now (again), before some superfluous variables and excessively large counters get fixed to reduce its size.
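A minimal sketch of that first step, holding cwnd once it reaches twice the instantaneous flightsize instead of NewCWV's timer-and-smoothing machinery. This is illustrative only; the function name, signature, and the plain `cwnd + mss` growth step are assumptions, not the actual FreeBSD cc_newreno code:

```c
#include <stdint.h>

/*
 * Hypothetical helper: decide the new cwnd on an ACK, in the spirit of
 * RFC 7661 section 4.4 but without NewCWV's extra timers and per-RTT
 * flightsize smoothing.  All values are in bytes.
 */
static uint32_t
cwnd_after_ack(uint32_t cwnd, uint32_t flightsize, uint32_t mss)
{
	/* App-limited: cwnd already covers twice what is in flight; hold. */
	if (cwnd >= 2 * flightsize)
		return (cwnd);
	/* Otherwise grow as plain NewReno congestion avoidance would. */
	return (cwnd + mss);
}
```

With mss = 1460, a flow with 5000 bytes in flight and cwnd = 20000 would stop growing, while one with 9000 bytes in flight and cwnd = 10000 would still open its window.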
Consulting Solution Architect
NAS & Networking
+43 1 3676 811 3157 Direct Phone
+43 664 8866 1857 Mobile Phone
Richard.Scheffenegger at netapp.com
From: Randall Stewart <rrs at netflix.com>
Sent: Mittwoch, 11. September 2019 14:18
To: Scheffenegger, Richard <Richard.Scheffenegger at netapp.com>
Cc: Lawrence Stewart <lstewart at netflix.com>; Michael Tuexen <tuexen at FreeBSD.org>; Jonathan Looney <jtl at netflix.com>; freebsd-transport at freebsd.org; Cui, Cheng <Cheng.Cui at netapp.com>; Tom Jones <thj at freebsd.org>; bz at freebsd.org; Eggert, Lars <lars at netapp.com>
Subject: Re: reno cwnd growth while app limited...
Interesting graph :)
I know that years ago I had a discussion along these lines (talking about burst limits) with Kacheong Poon and Mark Allman. IIRR, Kacheong said that at the time Sun limited the cwnd to something like 4 MSS more than the flight size (I could have that mixed up, though, and it might have been Mark proposing that... it's been a while; Sun was still a company then :D).
On the other hand, I am not sure that such a tight limit takes into account all of the ACK artifacts that seem to be rampant in the Internet now... BBR took the approach of limiting its cwnd to 2x BDP (or at least what it thought the BDP was), which is more along the lines of your 0.5 if I am reading you right.
It might be something worth looking into but I would want to contemplate it for a while :)
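The two burst-limit styles mentioned above could be sketched as follows. Both helpers are hypothetical (names, signatures, and the recalled "flightsize + 4 MSS" rule are assumptions drawn only from this mail, not from any real stack's source):

```c
#include <stdint.h>

/* Sun-style (as recalled): never let cwnd exceed flightsize + 4 MSS. */
static uint32_t
cap_cwnd_sun(uint32_t cwnd, uint32_t flightsize, uint32_t mss)
{
	uint32_t cap = flightsize + 4 * mss;

	return (cwnd < cap ? cwnd : cap);
}

/* BBR-style: cap cwnd at twice the estimated BDP. */
static uint32_t
cap_cwnd_bbr(uint32_t cwnd, uint32_t est_bdp)
{
	uint32_t cap = 2 * est_bdp;

	return (cwnd < cap ? cwnd : cap);
}
```

The Sun-style cap tracks flightsize directly, so an app-limited flow can never accumulate a large burst allowance; the BBR-style cap is looser but tolerates ACK compression and stretch ACKs better, which is the trade-off raised above.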
> On Sep 11, 2019, at 8:04 AM, Scheffenegger, Richard <Richard.Scheffenegger at netapp.com> wrote:
> I was just looking at some graph data running two parallel dctcp flows against a cubic receiver (some internal validation) with traditional ecn feedback.
> Now, in the beginning, a single flow cannot overutilize the link capacity and never runs into any loss/mark… but snd_cwnd grows unbounded (since DCTCP uses the NewReno “cc_ack_received” mechanism).
> However, newreno_ack_received only grows snd_cwnd when CCF_CWND_LIMITED is set, which remains set as long as snd_cwnd < snd_wnd (the receiver-signaled receive window).
> But is this still* the correct behavior?
> Say the data flow rate is application limited (every n milliseconds, a
> few kB), and the receiver has signaled a large window – cwnd will
> grow until it matches the receiver's window. If the application then
> chooses to no longer restrict itself, it could burst out
> significantly more data than the queuing of the path can handle…
> So, shouldn't there be a second condition for cwnd growth, e.g. that
> pipe (flightsize) is close to cwnd (factor 0.5 during slow start, and
> say 0.85 during congestion avoidance), to prevent sudden large bursts
> when a flow comes out of being application limited? The intention here
> would be to restrict the worst-case burst that could be sent out
> (which is dealt with differently in other stacks), to ideally still
> fit into the path's queues…
> RFC 5681 is silent on application-limited flows, though (but one could
> think of application-limiting a flow as another form of congestion,
> during which cwnd shouldn't grow…)
> In the example above, growing cwnd to about 500 kB and then
> remaining there should be approximately the expected setting – based
> on the average of two competing flows hovering at around 200-250 kB…
> *) I'm referring to the much higher likelihood nowadays that the application's own pacing and transfer volume violate the design principle of TCP, whose implicit assumption was that the sender has unlimited data to send, with the timing fully at the discretion of TCP.
> Richard Scheffenegger
> Consulting Solution Architect
> NAS & Networking
> +43 1 3676 811 3157 Direct Phone
> +43 664 8866 1857 Mobile Phone
> Richard.Scheffenegger at netapp.com
rrs at netflix.com
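The "second condition" proposed in the quoted message could be sketched as a simple gate on cwnd growth. The function name and the 0.5 / 0.85 factors are the hypothetical values from the mail, expressed in integer arithmetic to stay kernel-friendly:

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Illustrative gate: only allow cwnd growth when pipe (flightsize) is
 * close enough to cwnd, so an app-limited flow cannot keep inflating
 * its burst allowance.  All values are in bytes.
 */
static bool
cwnd_may_grow(uint32_t cwnd, uint32_t flightsize, bool in_slow_start)
{
	if (in_slow_start)
		return ((uint64_t)flightsize * 2 >= cwnd);   /* pipe >= 0.5 * cwnd  */
	return ((uint64_t)flightsize * 100 >=
	    (uint64_t)cwnd * 85);                            /* pipe >= 0.85 * cwnd */
}
```

Plugged in ahead of the existing CCF_CWND_LIMITED check, this would let cwnd track actual utilization rather than the receiver's advertised window.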