Virtual performance

Ivan Voras ivoras at fer.hr
Sat Feb 17 01:43:48 UTC 2007


I haven't been using virtual machines for production much, but this is
likely to change in the near future. After running some benchmarks, it
looks like something is very wrong with performance under VMWare.

I've tried two products: the big "VMWare Infrastructure" package,
version 3.0.1, and the small, free (as in beer) VMWare Server, but the
findings are the same. As an illustration, consider this header from "top":

last pid: 29892;  load averages:  1.20,  1.16,  1.09    up 0+00:45:22  01:03:37
38 processes:  3 running, 35 sleeping
CPU states:  3.8% user,  0.0% nice, 96.2% system,  0.0% interrupt,  0.0% idle
Mem: 29M Active, 251M Inact, 117M Wired, 2080K Cache, 112M Buf, 3363M Free
Swap: 5120M Total, 5120M Free

Note the "system" time - while this statistic might look like it came
from a getpid() loop, it is, in fact, a sample from the middle of a
`make buildkernel` on 6.2-RELEASE. For comparison, on a non-virtual
machine a getpid() loop comes out roughly 60% sys time + 40% user time.
I don't know why compilation, a memory- and CPU-intensive process,
would require so much sys time, and the same is noticeable when running
practically any program.
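For reference, by a "getpid() loop" I mean a trivial program like the
sketch below (my own paraphrase, and the iteration count is arbitrary);
getrusage(2) reports the user/sys split directly:

#include <stdio.h>
#include <sys/types.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <unistd.h>

int
main(void)
{
    struct rusage ru;
    volatile pid_t pid;     /* volatile so the loop isn't optimized away */
    long i;

    for (i = 0; i < 10000000L; i++)
        pid = getpid();     /* one kernel entry per iteration */
    (void)pid;

    if (getrusage(RUSAGE_SELF, &ru) == -1)
        return (1);
    printf("user %ld.%06ld  sys %ld.%06ld\n",
        (long)ru.ru_utime.tv_sec, (long)ru.ru_utime.tv_usec,
        (long)ru.ru_stime.tv_sec, (long)ru.ru_stime.tv_usec);
    return (0);
}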

None of this time is spent in kernel threads - running top with system
threads displayed shows them all as quiescent.

Running unixbench, I see that performance is mainly lost on benchmarks
that do context switching, while numeric performance stays approximately
the same. The biggest losses are on the "context1" and "pipe" benchmarks.
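For context, the "pipe" test is essentially one process writing a small
buffer into a pipe and immediately reading it back, so each iteration is
two kernel entries and little else. A rough sketch of the idea (not the
actual unixbench source):

#include <err.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

int
main(void)
{
    char buf[512];
    int fds[2];
    long i, iters = 1000000L;
    time_t t0, t1;

    memset(buf, 0, sizeof(buf));
    if (pipe(fds) == -1)
        err(1, "pipe");

    t0 = time(NULL);
    for (i = 0; i < iters; i++) {
        /* write then read back the same 512 bytes: two syscalls/loop */
        if (write(fds[1], buf, sizeof(buf)) != (ssize_t)sizeof(buf))
            err(1, "write");
        if (read(fds[0], buf, sizeof(buf)) != (ssize_t)sizeof(buf))
            err(1, "read");
    }
    t1 = time(NULL);
    printf("%ld loops in ~%ld s\n", iters, (long)(t1 - t0));
    return (0);
}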

Here's a typical result of running unixbench's pipe and context1
benchmarks on the above machine:

                     INDEX VALUES
TEST                                        BASELINE     RESULT      INDEX
Pipe Throughput                              12440.0    48528.7       39.0
Pipe-based Context Switching                  4000.0     9593.8       24.0

On this machine, the INDEX values should be somewhere between 350 and
500. Other tests that make frequent kernel calls (execl, file system,
syscall overhead) are also very much affected.
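Similarly, "context1" boils down to two processes bouncing a token back
and forth over a pair of pipes, so each iteration forces at least two
context switches - which is exactly where the time seems to go under
VMWare. Another rough sketch under the same caveat:

#include <err.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int
main(void)
{
    int p2c[2], c2p[2];     /* parent-to-child and child-to-parent pipes */
    long i, iters = 100000L, token;
    pid_t pid;

    if (pipe(p2c) == -1 || pipe(c2p) == -1)
        err(1, "pipe");
    if ((pid = fork()) == -1)
        err(1, "fork");

    if (pid == 0) {         /* child: echo every token straight back */
        for (i = 0; i < iters; i++) {
            if (read(p2c[0], &token, sizeof(token)) != (ssize_t)sizeof(token))
                err(1, "child read");
            if (write(c2p[1], &token, sizeof(token)) != (ssize_t)sizeof(token))
                err(1, "child write");
        }
        _exit(0);
    }

    for (i = 0; i < iters; i++) {   /* parent: one round trip per loop */
        token = i;
        if (write(p2c[1], &token, sizeof(token)) != (ssize_t)sizeof(token))
            err(1, "parent write");
        if (read(c2p[0], &token, sizeof(token)) != (ssize_t)sizeof(token))
            err(1, "parent read");
    }
    waitpid(pid, NULL, 0);
    printf("%ld round trips\n", iters);
    return (0);
}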

Some things I've tried (without luck):

- several different machines and CPUs (Intel, AMD), all on i386 arch
- tried both RELENG_6 and HEAD (WITNESS, INVARIANTS disabled) **
- disabling and enabling SMP **
- disabling and enabling PREEMPTION, ADAPTIVE_MUTEXES and ADAPTIVE_GIANT
- disabling and enabling kern.sched.ipiwakeup.enabled
- decreasing kern.hz
- changing timecounter to TSC

[**] : There's a special case here: I've just finished compiling a UP
kernel for HEAD, and when running unixbench, the "Pipe throughput"
benchmark gets an order of magnitude better (to ~480). I'm too tired
now to dig further into it.

The peculiar thing is that such a slowdown is not present in Linux and
Windows guests - there it's something like a 10% slowdown at most
(compared to the same machine in non-virtual mode), while the same
comparison on FreeBSD comes out on the order of 4x - 5x slower.

I think this finding should be easily reproducible and verifiable -
anyone with a Windows or Linux machine can install the free VMWare
Server and load up FreeBSD as a guest.

I don't know whose fault this is, VMWare's or FreeBSD's, but
virtualization is popular, and since FreeBSD is already lagging behind
on server-side virtualization (Xen, VMWare, etc. - jails and vimage
don't count since they're FreeBSD-specific), it would be a shame if it
also got marked as a bad guest.

Any ideas on what to try next to improve the performance?
