openvpn and system overhead

Jim Thompson jim at netgate.com
Wed Apr 17 23:40:08 UTC 2019



> On Apr 17, 2019, at 6:11 PM, Eugene Grosbein <eugen at grosbein.net> wrote:
> 
> 17.04.2019 22:08, Wojciech Puchar wrote:
> 
>> I'm running an OpenVPN server on a Xeon E5-2620.
>> 
>> When receiving 100 Mbit/s of traffic over the VPN it uses 20% of a single core,
>> and at least 75% of that is system time.
>> 
>> It seems like 500 Mbit/s is the maximum for a single OpenVPN process.
>> 
>> Can anything be done about that to improve performance?
> 
> Anyone concerned about performance should stop using solutions that process payload
> traffic in a userland daemon over the ordinary system network interfaces, because of
> the unavoidable and substantial overhead of constant context switching between
> userland and the kernel. That applies to OpenVPN or any other userland daemon.
> 
> You need either a netmap-based solution or a kernel-side VPN such as IPsec (maybe with L2TP).
> For me, an IKE daemon plus net/mpd5 works just fine. mpd5 is a userland daemon too,
> but it handles only signalling traffic such as session-establishment packets,
> and then it sets up kernel structures (netgraph nodes) so that payload traffic is processed entirely in-kernel.
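
To put a concrete shape on the overhead argument above: here is a minimal sketch (not OpenVPN's
actual code; the tun device path, port, peer address and encrypt_pkt() are placeholders) of the
per-packet path a tun-based userland VPN takes.  Every data packet costs at least one read() and
one sendto(), i.e. two user/kernel crossings plus the copies across that boundary, and that is
only one direction.

/*
 * Sketch only: the per-packet loop of a tun-based userland VPN.
 * The device path, UDP port, peer address and encrypt_pkt() are
 * placeholders, not OpenVPN internals.
 */
#include <arpa/inet.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* stand-in for the real cipher/HMAC work done in userland */
static size_t
encrypt_pkt(const char *in, size_t len, char *out)
{
    memcpy(out, in, len);
    return len;
}

int
main(void)
{
    char plain[2048], cipher[2048 + 64];
    int tun = open("/dev/tun0", O_RDWR);        /* cleartext side */
    int udp = socket(AF_INET, SOCK_DGRAM, 0);   /* encrypted side */
    struct sockaddr_in peer = { .sin_family = AF_INET, .sin_port = htons(1194) };

    inet_pton(AF_INET, "192.0.2.1", &peer.sin_addr);   /* placeholder peer */

    for (;;) {
        ssize_t n = read(tun, plain, sizeof(plain));   /* kernel -> user copy + switch */
        if (n <= 0)
            break;
        size_t clen = encrypt_pkt(plain, (size_t)n, cipher);
        sendto(udp, cipher, clen, 0,                   /* user -> kernel copy + switch */
            (struct sockaddr *)&peer, sizeof(peer));
    }
    return 0;
}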


Addendum to previous message to freebsd-hackers:

We have (also) considered a netmap-enhanced (enabled?) OpenVPN.  You still have the problem that the ‘stack’ inside OpenVPN is single-threaded and handles a single packet at a time.
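
For reference, the batching netmap buys you looks roughly like the loop below (a sketch against
the classic netmap_user.h wrappers; "netmap:ix0" and process_packet() are placeholders).  One
poll() wakeup hands you whole rings of frames, which removes the per-packet syscall cost, but a
single-threaded consumer still walks the ring one frame at a time, so the crypto and framing
work doesn't parallelize by itself.

/*
 * Sketch of a netmap RX loop (classic NETMAP_WITH_LIBS helpers).
 * "netmap:ix0" is an arbitrary example interface; process_packet()
 * stands in for the decrypt/forward work the VPN would do.
 */
#define NETMAP_WITH_LIBS
#include <net/netmap_user.h>
#include <poll.h>

static void
process_packet(const char *buf, unsigned int len)
{
    (void)buf; (void)len;   /* decrypt + hand to the VPN 'stack' here */
}

int
main(void)
{
    struct nm_desc *d = nm_open("netmap:ix0", NULL, 0, NULL);

    if (d == NULL)
        return 1;

    struct pollfd pfd = { .fd = NETMAP_FD(d), .events = POLLIN };

    for (;;) {
        poll(&pfd, 1, -1);      /* one syscall per batch, not per packet */
        for (int i = d->first_rx_ring; i <= d->last_rx_ring; i++) {
            struct netmap_ring *ring = NETMAP_RXRING(d->nifp, i);
            while (!nm_ring_empty(ring)) {
                struct netmap_slot *slot = &ring->slot[ring->cur];
                process_packet(NETMAP_BUF(ring, slot->buf_idx), slot->len);
                ring->head = ring->cur = nm_ring_next(ring, ring->cur);
            }
        }
    }
}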

Also, you’ll need to multiplex more than one instance of OpenVPN, maybe using the programmability of VALE (aka ‘mswitch’).
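
Roughly, that multiplexing could look like the sketch below (all names are made up: the "vale0"
switch, the port names, the worker count and the toy hash; a real design might instead push the
demux into the switch itself via VALE/mSwitch's pluggable lookup functions).  A small front-end
attaches the NIC plus N VALE ports and sprays flows across the ports, with one OpenVPN instance
behind each port.

/*
 * Sketch of a userland demuxer spreading traffic across N OpenVPN
 * instances, each attached to its own port of a VALE switch.
 * Names ("netmap:ix0", "vale0:ovpnN") are examples only.
 */
#define NETMAP_WITH_LIBS
#include <net/netmap_user.h>
#include <poll.h>
#include <stdio.h>

#define NWORKERS 4

int
main(void)
{
    struct nm_desc *in, *out[NWORKERS];
    struct nm_pkthdr h;
    const u_char *buf;
    char name[32];

    in = nm_open("netmap:ix0", NULL, 0, NULL);      /* wire side */
    if (in == NULL)
        return 1;
    for (int i = 0; i < NWORKERS; i++) {
        snprintf(name, sizeof(name), "vale0:ovpn%d", i);
        out[i] = nm_open(name, NULL, 0, NULL);      /* one port per OpenVPN instance */
        if (out[i] == NULL)
            return 1;
    }

    struct pollfd pfd = { .fd = NETMAP_FD(in), .events = POLLIN };

    for (;;) {
        poll(&pfd, 1, -1);
        while ((buf = nm_nextpkt(in, &h)) != NULL) {
            /* toy demux: low bits of the source MAC; a real one would
               hash the IP 5-tuple so a flow sticks to one instance */
            unsigned int idx = buf[11] % NWORKERS;
            nm_inject(out[idx], buf, h.len);
        }
    }
}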

Linaro’s Open Data Plane (ODP) project did a shim that talks via shared memory (ODP queues) between OpenVPN and ODP.  ODP queues aren’t too different from netmap rings,
so a PoC based on this code would be straightforward.  Per the repo: "This demo was not intend to improve performance, it was only for basic functionality."
https://github.com/repu1sion/odp_apps/tree/master/openvpn
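
To gauge the effort: the handoff such a shim needs is conceptually just a shared-memory ring
between the OpenVPN process and the fast-path process.  Below is a generic sketch (plain POSIX
shm and a single-producer/single-consumer ring, not the actual ODP queue API; all names and
sizes are made up).  Both sides move packets by bumping head/tail indices with no syscall on the
data path, which is why ODP queues and netmap rings feel so similar.

/*
 * Generic sketch of the shared-memory handoff such a shim needs: a
 * single-producer/single-consumer ring of fixed-size slots in a POSIX
 * shm segment.  This is NOT the ODP queue API; the segment name and
 * sizes are made up.
 */
#include <fcntl.h>
#include <stdatomic.h>
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define RING_SLOTS 1024u
#define SLOT_SIZE  2048u

struct pkt_ring {
    _Atomic uint32_t head;      /* advanced by the producer */
    _Atomic uint32_t tail;      /* advanced by the consumer */
    struct { uint32_t len; uint8_t data[SLOT_SIZE]; } slot[RING_SLOTS];
};

struct pkt_ring *
ring_attach(const char *name)   /* e.g. "/ovpn-fastpath" (made-up name) */
{
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    if (fd < 0)
        return NULL;
    if (ftruncate(fd, sizeof(struct pkt_ring)) < 0)
        return NULL;
    void *p = mmap(NULL, sizeof(struct pkt_ring),
                   PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);                  /* mapping stays valid after close */
    return p == MAP_FAILED ? NULL : p;
}

/* producer side, e.g. the packet-I/O process handing frames to OpenVPN */
int
ring_put(struct pkt_ring *r, const void *buf, uint32_t len)
{
    uint32_t h = atomic_load(&r->head);
    if (h - atomic_load(&r->tail) == RING_SLOTS || len > SLOT_SIZE)
        return -1;              /* full, or frame too large */
    memcpy(r->slot[h % RING_SLOTS].data, buf, len);
    r->slot[h % RING_SLOTS].len = len;
    atomic_store(&r->head, h + 1);  /* publish the slot */
    return 0;
}

/* consumer side */
int
ring_get(struct pkt_ring *r, void *buf, uint32_t *len)
{
    uint32_t t = atomic_load(&r->tail);
    if (t == atomic_load(&r->head))
        return -1;              /* empty */
    *len = r->slot[t % RING_SLOTS].len;
    memcpy(buf, r->slot[t % RING_SLOTS].data, *len);
    atomic_store(&r->tail, t + 1);
    return 0;
}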

Jim

