irq cpu binding

Slawa Olhovchenkov slw at zxy.spb.ru
Sun Mar 29 15:59:04 UTC 2015


On Sun, Mar 29, 2015 at 08:20:25AM -0700, Adrian Chadd wrote:

> >> The other half of the network stack - the sending side - also needs to
> >> be either on the same or nearby CPU, or you still end up with lock
> >> contention and cache thrashing.
> >
> > For incoming connections this will be automatic -- sending will be
> > done from the CPU bound to the receiving queue.
> >
> > Outgoing connections are the more complex case, yes.
> > We need to transfer the FD (with re-binding) and signal (from kernel
> > to application) the preferred CPU. The preferred CPU is the CPU that
> > sent the SYN-ACK. And this needs assistance from the application. But
> > I currently can't remember an application serving massive numbers of
> > outgoing connections.
> 
> Or you realise you need to rewrite your userland application so it
> doesn't have to do this, and instead uses an IOCP/libdispatch style IO
> API to register for IO events and get IO completions to occur in any
> given completion thread.

nginx is a multi-process application, not multi-threaded, for example.
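The completion-callback style Adrian describes can be sketched as a minimal event loop: the application registers a callback per descriptor and the loop fires it whenever IO completes, so no code tracks which CPU "owns" a socket. This is an illustrative sketch in Python (the `on_readable` name and the socketpair standing in for an outbound connection are invented here), not how nginx, IOCP, or libdispatch are actually implemented:

```python
import selectors
import socket

# Minimal completion-callback loop: handlers run wherever the loop
# happens to run, so the application never moves descriptors between
# workers itself.
sel = selectors.DefaultSelector()
results = []

def on_readable(conn):
    # "Completion" handler: consume whatever arrived on this socket.
    results.append(conn.recv(1024))

# A connected socket pair stands in for an established outbound socket.
a, b = socket.socketpair()
sel.register(b, selectors.EVENT_READ, on_readable)

a.send(b"hello")
for key, _events in sel.select(timeout=1):
    key.data(key.fileobj)   # fire the registered callback

print(results)
```

If RSS rebalancing moves the flow to another queue, only where the loop (or one of several loops) picks up the event changes; the callback itself is unchanged.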

> Then it doesn't have to care about moving descriptors around - it just
> creates an outbound socket, and then the IO completion callbacks will
> happen wherever they need to happen. If that needs to shuffle around
> due to RSS rebalancing then it'll "just happen".
> 
> And yeah, I know of plenty of applications doing massive outbound
> connections - anything being an intermediary HTTP proxy. :)

Hmm, yes and no :)
Yes, a proxy makes outbound connections, but a proxy pairs each inbound
connection with an outbound one, and in general these two connections
are pinned to different CPUs. Is this a performance gain?..
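The mismatch follows from how RSS picks a queue: the NIC Toeplitz-hashes the flow's 4-tuple, and a proxy's inbound and outbound legs are different 4-tuples, so they generally land on different queues (and thus different CPUs). A software sketch of the Toeplitz computation, using an arbitrary made-up 40-byte key and example addresses (real NICs use a configured secret key, e.g. the Microsoft default):

```python
import socket
import struct

def toeplitz_hash(key: bytes, data: bytes) -> int:
    """Standard Toeplitz hash: for each set input bit (MSB first),
    XOR in the 32-bit window of the key starting at that bit."""
    key_int = int.from_bytes(key, "big")
    key_bits = len(key) * 8
    result = 0
    for i, byte in enumerate(data):
        for b in range(8):
            if byte & (0x80 >> b):
                bit_index = i * 8 + b
                window = (key_int >> (key_bits - 32 - bit_index)) & 0xFFFFFFFF
                result ^= window
    return result

def flow_bytes(src, dst, sport, dport):
    # IPv4 RSS input: src addr, dst addr, src port, dst port.
    return socket.inet_aton(src) + socket.inet_aton(dst) + struct.pack(">HH", sport, dport)

key = bytes(range(1, 41))   # arbitrary example key, NOT a real NIC key
queues = 8

# Proxy at 192.0.2.1: client-facing leg vs. origin-facing leg.
inbound  = flow_bytes("198.51.100.7", "192.0.2.1", 49152, 80)
outbound = flow_bytes("203.0.113.9", "192.0.2.1", 80, 33333)

q_in  = toeplitz_hash(key, inbound) % queues
q_out = toeplitz_hash(key, outbound) % queues
print("inbound queue:", q_in, "outbound queue:", q_out)
```

Nothing ties the two hashes together, so keeping both halves of a proxied session on one CPU needs something beyond plain RSS.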


More information about the freebsd-hackers mailing list