New bhyve user

Jakub Chromy hicks at cgi.cz
Fri Sep 28 17:47:40 UTC 2018


> You seem to have heard incorrectly.  There are few to no issues
> overcommitting CPUs in bhyve.  I have a 2-core, 4-thread system with
> 6 VMs, each VM using 1 vCPU; this is a 50% overcommit and it is my
> baseline load.

No, I have not.  As long as you stick with 1 vCPU per virtual machine, you
should be fine.  The problem is with multi-core VMs and spinlocks:

https://lists.freebsd.org/pipermail/freebsd-virtualization/2018-July/006613.html

Quote from Alan Somers below:

An anonymous BHyve expert has explained things to me off-list.  Details
below.

On Tue, Jul 24, 2018 at 3:30 PM, Alan Somers <asomers at freebsd.org> wrote:

> What are people's experiences with overcommitting CPUs in BHyve?  I have
> an 8-core machine that often runs VMs totalling up to 5 allocated CPUs
> without problems.  But today I got greedy.  I assigned 8 cores to one VM
> for a big build job.  Obviously, some of those were shared with the host.
> I also assigned it 8GB of RAM (out of 16 total).  Build performance fell
> through the floor, even though the host was idle.  Eventually I killed the
> build and restarted it with a more modest 2 make jobs (but the VM still
> had 8 cores).  Performance improved.  But eventually the system seemed to
> be mostly hung, while I had a build job running on the host as well as in
> the VM.  I killed both build jobs, which resolved the hung processes.
> Then I restarted the host's build alone, and my system completely hung,
> with top(1) indicating that many processes were in the pfault state.
>
> So my questions are:
> 1) Is it a known problem to overcommit CPUs with BHyve?
Yes it's a problem, and it's not just BHyve.  The problem comes from stuff
like spinlocks.  Unlike normal userland locks, when two CPUs contend on a
spinlock both are running at the same time.  When two vCPUs are contending
on a spinlock, the host has no idea how to prioritize them.  Normally
that's not a problem, because physical CPUs are always supposed to be able
to run.  But when you overcommit vCPUs, some of them must get swapped out
at all times.  If a spinlock is being contended by both a running vCPU and
a swapped out vCPU, then it might be contended for a long time.  The host's
scheduler simply isn't able to fix that problem.  The problem is even worse
when you're using hyperthreading (which I am) because those eight logical
cores are really only four physical cores, and spinning on a spinlock
doesn't generate enough pipeline stalls to cause a hyperthread switch.  So
it's probably best to stick with the n - 1 rule.  Overcommitting is ok if
all guests are single-cored because then they won't use spinlocks.  But my
guests aren't all single-cored.
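
To make the spinlock point concrete, here is a minimal userland sketch in C
(just an illustration, not bhyve or kernel code): two threads contend on a
test-and-set lock.  The waiter burns its whole time slice in the busy-wait
loop, so if the side holding the lock has been descheduled by the host (as a
swapped-out vCPU would be), the spinning side makes no progress and gives the
host scheduler no hint that it should run the holder instead.

/*
 * Minimal sketch of spinlock contention (illustration only, not bhyve code).
 * Build with: cc -O2 -pthread spin.c
 */
#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;
static long counter;

static void spin_lock(void)
{
	/*
	 * Busy-wait until the flag is clear.  A vCPU stuck here keeps
	 * consuming its time slice even if the lock holder's vCPU has
	 * been swapped out on the host, which is the pathology described
	 * above.
	 */
	while (atomic_flag_test_and_set_explicit(&lock, memory_order_acquire))
		;
}

static void spin_unlock(void)
{
	atomic_flag_clear_explicit(&lock, memory_order_release);
}

static void *worker(void *arg)
{
	for (int i = 0; i < 1000000; i++) {
		spin_lock();
		counter++;		/* critical section */
		spin_unlock();
	}
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, worker, NULL);
	pthread_create(&t2, NULL, worker, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	printf("counter = %ld\n", counter);
	return 0;
}

In practice the "n - 1 rule" just means leaving at least one physical core
for the host, e.g. giving a single guest at most 7 vCPUs (bhyve -c 7) on an
8-core machine.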

> 2) Could this be related to the pfault hang, even though the guest was
> idle at the time?
The expert suspects the ZFS ARC was competing with the guest for RAM.
IIUC, ZFS will sometimes greedily grow its ARC by swapping out idle parts
of the guest's RAM.  But the guest isn't aware of this behavior, and will
happily allocate memory from the swapped-out portion.  The result is a
battle between the ARC and the guest for physical RAM.  The best solution
is to limit the maximum amount of RAM used by the ARC with the
vfs.zfs.arc_max sysctl.
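
For what it's worth, the usual way to apply that limit is a loader tunable;
the 4 GiB figure below is only an example, pick whatever leaves enough RAM
for your guests:

# /boot/loader.conf -- cap the ZFS ARC at 4 GiB (value is in bytes)
vfs.zfs.arc_max="4294967296"

On newer releases the same knob can, as far as I know, also be adjusted at
runtime with sysctl(8), but the loader.conf entry is what makes it persist
across reboots.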

More info: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=222916

Thanks to everyone who commented, especially the Anonymous Coward.

-Alan




Jakub

