Overcommitting CPUs with BHyve?
Jason Tubnor
jason at tubnor.net
Wed Jul 25 00:14:34 UTC 2018
On Wed, 25 Jul 2018 at 08:12, Shawn Webb <shawn.webb at hardenedbsd.org> wrote:
> On Tue, Jul 24, 2018 at 03:30:32PM -0600, Alan Somers wrote:
> > What are people's experiences with overcommitting CPUs in BHyve? I have
> > an 8-core machine that often runs VMs totalling up to 5 allocated CPUs
> > without problems. But today I got greedy. I assigned 8 cores to one VM
> > for a big build job. Obviously, some of those were shared with the host.
> > I also assigned it 8GB of RAM (out of 16 total). Build performance fell
> > through the floor, even though the host was idle. Eventually I killed
> > the build and restarted it with a more modest 2 make jobs (but the VM
> > still had 8 cores). Performance improved. But eventually the system
> > seemed to be mostly hung, while I had a build job running on the host
> > as well as in the VM. I killed both build jobs, which resolved the hung
> > processes. Then I restarted the host's build alone, and my system
> > completely hung, with top(1) indicating that many processes were in the
> > pfault state.
> >
> > So my questions are:
> > 1) Is it a known problem to overcommit CPUs with BHyve?
> > 2) Could this be related to the pfault hang, even though the guest was
> > idle at the time?
>
>
1) Not that I have experienced.
2) More likely RAM pressure. Are you running ZFS? What is your ARC capped
at? (Total guest RAM + system + ARC < total system RAM)
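As a sketch of that budget (the 4 GB figures below are illustrative, not a recommendation), the ARC on FreeBSD can be capped with the vfs.zfs.arc_max tunable so that guest RAM plus host workload plus ARC stays under physical RAM:

```shell
# Illustrative budget for a 16 GB host running an 8 GB guest:
#   guest (8 GB) + host/system headroom (~4 GB) + ARC cap (4 GB) <= 16 GB

# Check the current ARC maximum (reported in bytes):
sysctl vfs.zfs.arc_max

# Cap the ARC at 4 GB across reboots via /boot/loader.conf:
echo 'vfs.zfs.arc_max="4G"' >> /boot/loader.conf
```

On newer FreeBSD releases vfs.zfs.arc_max can also be changed at runtime with sysctl(8), without a reboot.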
> VMWare's ESXi uses a special scheduler to do what it does. I wonder if
> it would be worthwhile to investigate implementing a scheduler in
> FreeBSD that provides decent performance for virtualized workloads.
>
More information about the freebsd-virtualization mailing list