bhyve behavior under cpuset_setaffinity
Oleg Ginzburg
olevole at olevole.ru
Fri Apr 25 19:05:13 UTC 2014
Hi,
I planned to profile bhyve with PMC and noticed bad behavior when a cpuset
mask is applied to the bhyve process: the guest process on the host starts
consuming up to 40-50% CPU without any load inside the guest.
For example, after running the following command on the bhyve PID:
% cpuset -l 2 -p 3476
% top
 PID USERNAME  THR PRI NICE   SIZE    RES STATE  C   TIME   WCPU COMMAND
3082 root        3  75    0  2075M  5016K CPU2   2   0:01 10.06% bhyve
% ktrace -p 3476
% ktrace -C
% kdump
at this point kdump shows only entries like these:
...
3476 vcpu 0 RET ioctl 0
3476 vcpu 0 CALL ioctl(0x3,0xc0787601,0x7fffff9fce40)
3476 vcpu 0 RET ioctl 0
3476 vcpu 0 CALL ioctl(0x3,0xc0787601,0x7fffff9fce40)
3476 vcpu 0 RET ioctl 0
3476 vcpu 0 CALL ioctl(0x3,0xc0787601,0x7fffff9fce40)
3476 vcpu 0 RET ioctl 0
3476 vcpu 0 CALL ioctl(0x3,0xc0787601,0x7fffff9fce40)
3476 vcpu 0 RET ioctl 0
3476 vcpu 0 CALL ioctl(0x3,0xc0787601,0x7fffff9fce40)
3476 vcpu 0 RET ioctl 0
3476 vcpu 0 CALL ioctl(0x3,0xc0787601,0x7fffff9fce40)
3476 vcpu 0 RET ioctl 0
3476 vcpu 0 CALL ioctl(0x3,0xc0787601,0x7fffff9fce40)
3476 vcpu 1 RET ioctl 0
3476 vcpu 1 CALL ioctl(0x3,0xc0787601,0x7fffff7fbe40)
3476 vcpu 1 RET ioctl 0
3476 vcpu 1 CALL ioctl(0x3,0xc0787601,0x7fffff7fbe40)
3476 vcpu 0 RET ioctl 0
3476 vcpu 0 CALL ioctl(0x3,0xc0787601,0x7fffff9fce40)
3476 vcpu 0 RET ioctl 0
3476 vcpu 0 CALL ioctl(0x3,0xc0787601,0x7fffff9fce40)
3476 vcpu 0 RET ioctl 0
3476 vcpu 0 CALL ioctl(0x3,0xc0787601,0x7fffff9fce40)
3476 vcpu 0 RET ioctl 0
3476 vcpu 0 CALL ioctl(0x3,0xc0787601,0x7fffff9fce40)
3476 vcpu 1 RET ioctl 0
3476 vcpu 1 CALL ioctl(0x3,0xc0787601,0x7fffff7fbe40)
3476 vcpu 1 RET ioctl 0
3476 vcpu 1 CALL ioctl(0x3,0xc0787601,0x7fffff7fbe40)
3476 vcpu 0 RET ioctl 0
..
All processes in the guest are very slow, and any activity
pushes the CPU load to >= 100%.
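For reference, the ioctl request 0xc0787601 that dominates both traces can be decoded with FreeBSD's _IOC encoding from sys/ioccom.h: it is a read/write ioctl in group 'v' (the vmm(4) control device), command number 1, with a 120-byte argument, which appears to be the VM_RUN ioctl that each vcpu thread issues in its run loop. A minimal decoding sketch (field layout taken from sys/ioccom.h):

```python
# Decode a FreeBSD ioctl request number (layout from sys/ioccom.h):
# | dir (3 bits) | parameter length (13 bits) | group (8 bits) | num (8 bits) |
IOCPARM_MASK = 0x1fff
IOC_VOID  = 0x20000000
IOC_OUT   = 0x40000000
IOC_IN    = 0x80000000
IOC_INOUT = IOC_IN | IOC_OUT

def decode_ioctl(cmd):
    if cmd & IOC_INOUT == IOC_INOUT:
        direction = "inout"
    elif cmd & IOC_OUT:
        direction = "out"
    elif cmd & IOC_IN:
        direction = "in"
    else:
        direction = "void"
    length = (cmd >> 16) & IOCPARM_MASK  # size of the argument struct
    group = chr((cmd >> 8) & 0xff)       # driver group letter
    num = cmd & 0xff                     # command number within the group
    return direction, length, group, num

print(decode_ioctl(0xc0787601))  # ('inout', 120, 'v', 1)
```

So in both traces the vcpu threads are simply cycling through VM_RUN; the difference under cpuset is not in what the threads do, but in how much host CPU the same loop burns when both are pinned to one core.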
Without cpuset_setaffinity, under the same conditions, the ktrace output is similar:
..
22506 vcpu 1 CALL ioctl(0x3,0xc0787601,0x7fffff7fbe40)
22506 vcpu 0 RET ioctl 0
22506 vcpu 0 CALL ioctl(0x3,0xc0787601,0x7fffff9fce40)
22506 vcpu 0 RET ioctl 0
22506 vcpu 0 CALL ioctl(0x3,0xc0787601,0x7fffff9fce40)
22506 vcpu 0 RET ioctl 0
22506 vcpu 0 CALL ioctl(0x3,0xc0787601,0x7fffff9fce40)
22506 vcpu 0 RET ioctl 0
22506 vcpu 0 CALL ioctl(0x3,0xc0787601,0x7fffff9fce40)
22506 vcpu 1 RET ioctl 0
22506 vcpu 0 RET ioctl 0
22506 vcpu 1 CALL ioctl(0x3,0xc0787601,0x7fffff7fbe40)
22506 vcpu 0 CALL ioctl(0x3,0xc0787601,0x7fffff9fce40)
22506 vcpu 1 RET ioctl 0
22506 vcpu 1 CALL ioctl(0x3,0xc0787601,0x7fffff7fbe40)
22506 vcpu 1 RET ioctl 0
22506 vcpu 1 CALL ioctl(0x3,0xc0787601,0x7fffff7fbe40)
22506 vcpu 1 RET ioctl 0
22506 vcpu 1 CALL ioctl(0x3,0xc0787601,0x7fffff7fbe40)
22506 vcpu 1 RET ioctl 0
22506 vcpu 1 CALL ioctl(0x3,0xc0787601,0x7fffff7fbe40)
..
but CPU usage on the core stays close to 5-10%, and
guest processes are very responsive.
More information about the freebsd-virtualization
mailing list