TCP stack lock contention with short-lived connections

Julien Charbon jcharbon at
Wed May 28 16:44:18 UTC 2014


On 23/05/14 22:52, Julien Charbon wrote:
> On 23/05/14 14:06, Julien Charbon wrote:
>> On 27/02/14 11:32, Julien Charbon wrote:
>>> On 07/11/13 14:55, Julien Charbon wrote:
>>>> On Mon, 04 Nov 2013 22:21:04 +0100, Julien Charbon
>>>> <jcharbon at> wrote:
>>>>> I have put the technical and how-to-repeat details in the PR below:
>>>>> kern/183659: TCP stack lock contention with short-lived connections
>>>>>   We are currently working on this performance improvement effort;
>>>>> it will impact only the TCP locking strategy, not the TCP stack
>>>>> logic itself.  We will share the patches we made on freebsd-net for
>>>>> review and improvement proposals;  in any case, this change might
>>>>> also need enough eyeballs to avoid introducing tricky race
>>>>> conditions into the TCP stack.
>   Attached are the two cumulative patches (tcp-scale-inp-list-v1.patch
> and tcp-scale-pcbinfo-rlock-v1.patch) that we discussed the most at
> BSDCan 2014.

  At BSDCan 2014 we were also asked to provide flame graphs [1][2] to 
highlight the impact of these TCP changes.  The DTrace sampling was done 
on a core bound to a NIC receive queue IRQ.
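
  For reference, below is a minimal sketch of the kind of per-CPU 
sampling script this refers to, assuming the usual stackcollapse.pl + 
flamegraph.pl post-processing from the FlameGraph tools;  the CPU id 
and the duration are placeholders, not the exact setup used here:

/*
 * sample.d -- aggregate kernel stacks seen on one CPU, to feed a CPU
 * flame graph.  Sketch only:  CPU id 2 and the 60s duration are
 * examples.
 */
#pragma D option stackframes=100

profile-997
/cpu == 2/
{
	/* Count kernel stacks sampled on the IRQ-bound core. */
	@stacks[stack()] = count();
}

tick-60s
{
	exit(0);
}

  Run with e.g. 'dtrace -s sample.d -o out.stacks', then fold the 
aggregated stacks with stackcollapse.pl and render them with 
flamegraph.pl.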

  o First, the CPU flame graph on 10.0-RELENG at 40k TCP connections/sec:


  - __rw_wlock_hard contention on ipi_lock is clear, as usual;  a DTrace 
sketch to measure this lock time directly follows the list below.

  o Second, the same test with all our patches applied (thus from the 
10.0-next branch [3]):


  - Almost all of the __rw_wlock_hard contention on ipi_lock is 
converted into idle time.

  o Third, still on the 10.0-next branch, the flame graph when doubling 
the rate to 80k TCP connections/sec:
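
  Coming back to the __rw_wlock_hard hotspot in the first graph, the 
time actually spent in the write-lock slow path can also be measured 
directly rather than inferred from profile samples.  A minimal sketch, 
assuming fbt probes are available for __rw_wlock_hard on the running 
kernel (this is not the exact script we used):

/*
 * rwhard.d -- nanoseconds spent in the rw write-lock slow path, keyed
 * by the blocking kernel stack.  Stacks going through the TCP input
 * or usrreq paths should correspond to the ipi_lock contention
 * discussed above.
 */
fbt::__rw_wlock_hard:entry
{
	self->ts = timestamp;
}

fbt::__rw_wlock_hard:return
/self->ts/
{
	@wait[stack()] = sum(timestamp - self->ts);
	self->ts = 0;
}

tick-30s
{
	/* Aggregations are printed on exit, largest totals last. */
	exit(0);
}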

  My 2 cents.


