SO_REUSEPORT: strange kernel balancer behaviour

trafdev trafdev at
Mon Jul 22 21:26:36 UTC 2013

Actually the overhead is almost zero; the real problem is the uneven load
distribution between processes.
As mentioned:
"At Google, they have seen a factor-of-three difference between the
thread accepting the most connections and the thread accepting the
fewest connections;"
I'm getting almost the same results.

On Mon Jul 22 13:02:05 2013, John-Mark Gurney wrote:
> trafdev wrote this message on Mon, Jul 15, 2013 at 13:04 -0700:
>> Yep, I think it's a waste of resources; the poll manager should somehow
>> be configured to notify only one process/thread.
>> Does anyone know how to do that?
> This isn't currently possible w/o a shared kqueue, since the event is
> level triggered, not edge..  You could do it w/ a shared kqueue using
> _ONESHOT (but then you'd also have a shared listen fd which obviously
> isn't what the OP wants)...
> I guess it wouldn't be too hard to do a wake one style thing, where
> kqueue only delivers the event once per "item/level", but right now
> kqueue doesn't know anything about the format of data (which would be
> number of listeners waiting)...  Even if it did, there would be this
> dangerous contract that if an event is returned that the user land
> process would handle it...  How is kqueue supposed to handle a server
> that crashes/dies between getting the event and accepting a connection?
> How is userland supposed to know that an event wasn't handled, or is
> just taking a long time?
> Sadly, if you want to avoid the thundering herd problem, I think
> blocking on accept() is the best method, or using an fd-passing scheme
> where only one process accepts connections...
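The fd-passing scheme mentioned above is normally built on sendmsg()/recvmsg() with SCM_RIGHTS ancillary data: the one accepting process hands each accepted connection to a worker over a Unix-domain socket. A minimal sketch of the two halves (function names are illustrative, not from any particular server):

```c
#include <assert.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <unistd.h>

/* Send the descriptor fd to the peer of the Unix-domain socket chan,
 * carried as SCM_RIGHTS ancillary data alongside one dummy byte.
 * Returns 0 on success, -1 on error. */
int send_fd(int chan, int fd) {
    char dummy = 'f';
    char buf[CMSG_SPACE(sizeof(int))];
    struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
    struct msghdr msg = {0};
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = buf;
    msg.msg_controllen = sizeof(buf);
    struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);
    cm->cmsg_level = SOL_SOCKET;
    cm->cmsg_type = SCM_RIGHTS;
    cm->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cm), &fd, sizeof(int));
    return sendmsg(chan, &msg, 0) == 1 ? 0 : -1;
}

/* Receive a descriptor sent by send_fd(); returns it, or -1 on error.
 * The received fd is a new descriptor referring to the same open file. */
int recv_fd(int chan) {
    char dummy;
    char buf[CMSG_SPACE(sizeof(int))];
    struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
    struct msghdr msg = {0};
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = buf;
    msg.msg_controllen = sizeof(buf);
    if (recvmsg(chan, &msg, 0) != 1)
        return -1;
    struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);
    if (cm == NULL || cm->cmsg_type != SCM_RIGHTS)
        return -1;
    int fd;
    memcpy(&fd, CMSG_DATA(cm), sizeof(int));
    return fd;
}
```

In the accept-in-one-process design, the dispatcher calls accept() and then send_fd() on a per-worker socketpair(); each worker blocks in recv_fd() and serves the connection it receives, so no worker ever sees EAGAIN from a lost accept race.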
>> On Mon Jul 15 12:53:55 2013, Adrian Chadd wrote:
>>> i've noticed this when doing this stuff in a threaded program with
>>> each thread listening on the same port.
>>> All threads wake up on each accepted connection, one thread wins and
>>> the other threads get EAGAIN.
>>> -adrian
>>> On 15 July 2013 12:31, trafdev <trafdev at> wrote:
>>>> Thanks for the reply.
>>>> This approach produces a lot of "resource temporarily unavailable"
>>>> (EAGAIN) errors on accept-ing connections in the other N-1 processes.
>>>> Is it possible to avoid this by e.g. tweaking kqueue?
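With level-triggered kqueue and a non-blocking listen socket, the EAGAIN described above is expected and harmless: it just means another process won the race for that connection. A minimal sketch of an accept path that treats it that way (helper names are illustrative):

```c
#include <assert.h>
#include <errno.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Illustrative helper: a non-blocking loopback listener on a
 * kernel-chosen ephemeral port.  Returns the fd, or -1 on failure. */
int make_nonblocking_listener(void) {
    int s = socket(AF_INET, SOCK_STREAM, 0);
    if (s < 0)
        return -1;
    struct sockaddr_in sa;
    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_addr.s_addr = htonl(INADDR_LOOPBACK);  /* sin_port 0: kernel picks */
    if (bind(s, (struct sockaddr *)&sa, sizeof(sa)) < 0 || listen(s, 16) < 0) {
        close(s);
        return -1;
    }
    fcntl(s, F_SETFL, fcntl(s, F_GETFL, 0) | O_NONBLOCK);
    return s;
}

/* Try to accept one connection; -1 with no error means another worker
 * already took it, so the caller just returns to its event loop. */
int try_accept(int s) {
    int c = accept(s, NULL, NULL);
    if (c < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
        return -1;  /* lost the race; not a real failure */
    return c;
}
```

This does not remove the wasted wakeups, but it keeps the N-1 losers from treating the EAGAIN as an error.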
>>>> On Sun Jul 14 19:37:59 2013, Sepherosa Ziehau wrote:
>>>>> On Sat, Jul 13, 2013 at 1:16 PM, trafdev <trafdev at> wrote:
>>>>>> Hello.
>>>>>> Could someone help with following problem of SO_REUSEPORT.
>>>>> The most portable "load balance" between processes listening on the
>>>>> same TCP addr/port probably is:
>>>>> s=socket();
>>>>> bind(s);
>>>>> listen(s);
>>>>> /* various socketopt and fcntl as you needed */
>>>>> pid=fork();
>>>>> if (pid==0) {
>>>>>       server_loop(s);
>>>>>       exit(1);
>>>>> }
>>>>> server_loop(s);
>>>>> exit(1);
>>>>> Even in Linux and DragonFly, SO_REUSEPORT "load balancing" between
>>>>> processes listening on the same TCP addr/port was introduced only
>>>>> recently, so you probably won't want to rely on it.
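Expanded into a compilable sketch (helper names are illustrative), the fork-and-share-the-listen-socket approach quoted above looks like this: the listen socket is created once before fork(), and each worker blocks in accept() on it, so the kernel wakes exactly one blocked worker per connection.

```c
#include <arpa/inet.h>
#include <assert.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

/* Create a listening TCP socket on 127.0.0.1 with a kernel-chosen port.
 * Returns the socket fd; *port receives the chosen port (host order). */
int make_listener(int *port) {
    int s = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in sa;
    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    sa.sin_port = 0;  /* let the kernel pick a free port */
    bind(s, (struct sockaddr *)&sa, sizeof(sa));
    listen(s, 16);
    socklen_t len = sizeof(sa);
    getsockname(s, (struct sockaddr *)&sa, &len);
    *port = ntohs(sa.sin_port);
    return s;
}

/* Fork one worker that blocks in accept() on the shared listen socket,
 * echoes one byte back, and exits.  Returns the worker's pid. */
pid_t fork_worker(int s) {
    pid_t pid = fork();
    if (pid == 0) {
        int c = accept(s, NULL, NULL);  /* blocking: no thundering herd */
        char b;
        read(c, &b, 1);
        write(c, &b, 1);
        close(c);
        _exit(0);
    }
    return pid;
}

/* Client side: connect to the worker, send a byte, return what it echoed. */
char ping(int port) {
    int c = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in sa;
    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    sa.sin_port = htons((unsigned short)port);
    connect(c, (struct sockaddr *)&sa, sizeof(sa));
    char b = 'x', r = 0;
    write(c, &b, 1);
    read(c, &r, 1);
    close(c);
    return r;
}
```

A real server would call fork_worker() N times and loop around accept() in each worker; the sketch serves a single connection per worker to keep it short.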

More information about the freebsd-net mailing list