Re: Performance issues with vnet jails + epair + bridge
- In reply to: Sad Clouds : "Re: Performance issues with vnet jails + epair + bridge"
Date: Fri, 13 Sep 2024 07:36:59 UTC
On Fri, 13 Sep 2024 08:03:56 +0100
Sad Clouds <cryintothebluesky@gmail.com> wrote:
> I built a new kernel with "options RSS", however TCP throughput has
> now decreased from 128 MiB/sec to 106 MiB/sec.
>
> Looks like the problem has shifted from epair to netisr
>
> PID USERNAME PRI NICE SIZE RES STATE C TIME WCPU COMMAND
> 12 root -56 - 0B 272K CPU3 3 3:45 100.00% intr{swi1: netisr 0}
> 11 root 187 ki31 0B 64K RUN 0 9:00 62.41% idle{idle: cpu0}
> 11 root 187 ki31 0B 64K CPU2 2 9:36 61.23% idle{idle: cpu2}
> 11 root 187 ki31 0B 64K RUN 1 8:24 55.03% idle{idle: cpu1}
> 0 root -64 - 0B 656K - 2 0:50 21.50% kernel{epair_task_2}
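Since the netisr thread is now pegged at 100% on a single CPU, it may
be worth checking whether netisr is allowed to run more than one
thread. A rough sketch of the relevant knobs (the values below are
untested assumptions, not something verified on this hardware):

$ netstat -Q                # per-CPU netisr workstream/queue statistics

# /boot/loader.conf (loader tunables, take effect after reboot)
net.isr.maxthreads="-1"     # -1 auto-scales to one netisr thread per CPU
net.isr.bindthreads="1"     # pin each netisr thread to its own CPU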
That said, I think the issue may be with the genet driver itself; the
hardware appears to be limited to one CPU per send or receive
interrupt. On Linux the best I can do is set SMP affinity for send
interrupts on CPU0 and receive interrupts on CPU1, but that still
leaves the other two CPUs idle.
$ cat /proc/interrupts
CPU0 CPU1 CPU2 CPU3
...
37: 74141 0 0 0 GICv2 189 Level eth0
38: 43174 0 0 0 GICv2 190 Level eth0
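For anyone wanting to reproduce the affinity setup, it looks roughly
like this (IRQ numbers taken from the output above; whether 37 is the
TX interrupt and 38 the RX interrupt is an assumption, so check
against the driver before copying):

echo 1 > /proc/irq/37/smp_affinity   # eth0 TX -> CPU0 (mask 0x1, needs root)
echo 2 > /proc/irq/38/smp_affinity   # eth0 RX -> CPU1 (mask 0x2)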