IPsec performance - netisr hits 100%

Özkan KIRIK ozkan.kirik at gmail.com
Thu May 6 14:07:44 UTC 2021


I wonder if you received the flame graphs?

I also tested the system with multiple if_ipsec interfaces using different
source-destination tunnel addresses.
This way, the system can utilize all CPU cores.
But for a single if_ipsec interface, is there a way to speed up the transfer?
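For reference, the multi-interface setup might look like this sketch (all
addresses are illustrative placeholders, not taken from this thread):

```shell
# Sketch: create two if_ipsec interfaces with distinct outer tunnel
# endpoints, so flows hash to different netisr/crypto contexts.
# All addresses below are placeholders, not from this thread.
ifconfig ipsec0 create
ifconfig ipsec0 tunnel 192.0.2.1 198.51.100.1
ifconfig ipsec0 inet 10.1.0.1/30 10.1.0.2

ifconfig ipsec1 create
ifconfig ipsec1 tunnel 192.0.2.2 198.51.100.2
ifconfig ipsec1 inet 10.2.0.1/30 10.2.0.2
```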

Thanks!

On Mon, May 3, 2021 at 10:31 PM Mark Johnston <markj at freebsd.org> wrote:

> On Sun, May 02, 2021 at 04:08:18PM +0300, Andrey V. Elsukov wrote:
> > On 30.04.2021 23:32, Mark Johnston wrote:
> > > Second, netipsec unconditionally hands rx processing off to netisr
> > > threads for some reason; that's why changing the dispatch policy doesn't
> > > help.  Maybe it's to help avoid running out of kernel stack space or to
> > > somehow avoid packet reordering in some case that is not clear to me.  I
> > > tried a patch (see below) which eliminates this and it helped somewhat.
> > > If anyone can provide an explanation for the current behaviour I'd
> > > appreciate it.
> >
> > Previously we have had reports of kernel stack overflow during IPsec
> > processing. In your example only one IPsec transform is
> > configured, but it is possible to configure several in a bundle;
> > AFAIR, it is limited to 4 transforms. E.g. if you configure ESP+AH, that
> > is a bundle of two transforms, and it will grow the kernel stack requirements.
>
> Is it only a problem for synchronous crypto ops?  With hardware drivers
> I'd expect the stack usage to be reset after each transform, since
> completions are handled by a dedicated thread.  There is also the
> net.inet.ipsec.async_crypto knob, which has a similar effect I think.
>

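For anyone experimenting with the knob mentioned above, a minimal sketch of
setting it (assuming the standard FreeBSD sysctl mechanism; the tunable name
is the one quoted in the thread):

```shell
# Sketch: dispatch IPsec crypto operations asynchronously to crypto
# worker threads instead of processing them inline.
sysctl net.inet.ipsec.async_crypto=1

# To persist across reboots (standard FreeBSD mechanism, assumed):
echo 'net.inet.ipsec.async_crypto=1' >> /etc/sysctl.conf
```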
