VMWare ESX and FBSD 7.2 AMD64 guest
mahlerrd at yahoo.com
Fri Jul 24 15:56:05 UTC 2009
> From: John Nielsen <lists at jnielsen.net>
> Subject: Re: VMWare ESX and FBSD 7.2 AMD64 guest
> To: freebsd-questions at freebsd.org
> Cc: "Steve Bertrand" <steve at ibctech.ca>
> Date: Friday, July 24, 2009, 10:22 AM
> On Thursday 23 July 2009 19:44:15 Steve Bertrand wrote:
> > This message has a foot that has nearly touched down over the OT borderline.
> > We received an HP ProLiant DL360 G5 colocation box yesterday that has two processors and 8GB of memory.
> >
> > All the client wants to use this box for is a single instance of Windows web hosting. Knowing the sites the client wants to aggregate into IIS, I know that the box is far over-rated.
> >
> > Making a long story short, they have agreed to allow us to put their Windows server inside of a virtualized container, so we can use the unused horsepower for other VMs (test servers etc.).
> >
> > My problem is performance. I'm only willing to make this box virtual if I can keep the abstraction performance loss to <25% (my ultimate goal would be 15%).
> > The following is what I have, followed by my benchmark:
> >
> > # 7.2-RELEASE AMD64
> > FreeBSD 7.2-RELEASE #0: Fri May  1 07:18:07 UTC
> >     root at driscoll.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC
> > Timecounter "i8254" frequency 1193182 Hz quality 0
> > CPU: Intel(R) Xeon(R) CPU 5150 @ 2.66GHz (2666.78-MHz K8-class CPU)
> >   Origin = "GenuineIntel"  Id = 0x6f6  Stepping = 6
> > usable memory = 8575160320 (8177 MB)
> > avail memory  = 8273620992 (7890 MB)
> > FreeBSD/SMP: Multiprocessor System Detected: 4 CPUs
> >  cpu0 (BSP): APIC ID:  0
> >  cpu1 (AP): APIC ID:  1
> >  cpu2 (AP): APIC ID:  6
> >  cpu3 (AP): APIC ID:  7
> Did you give the VM 4 virtual processors as well? How much RAM did it have? What type of storage does the server have? Did the VM just get a .vmdk on VMFS? What version of ESX?
> > Benchmarks:
> >
> > # time make -j4 buildworld (under vmware)
> > 5503.038u 3049.500s 1:15:46.25 188.1% 5877+1961k 3298+586716io 2407pf+0w
> >
> > # time make -j4 buildworld (native)
> > 4777.568u 992.422s 33:02.12 291.1% 6533+2099k 25722+586485io 3487pf+0w
> Note that the "user" time is within your 15% margin (if you round to the nearest percent). The system time is what's running away. My guess is that that is largely due to disk I/O and virtualization of same. What you can do to address this depends on what hardware you have. Giving the VM a raw slice/LUN/disk instead of a .vmdk file may improve matters somewhat. If you do use a disk file, be sure that it lives on a stripe (or whatever unit is relevant) boundary of the underlying storage. Ways to do that (if any) depend on the storage. Improving the RAID performance, etc. of the storage will improve your benchmark overall, and may or may not narrow the divide.
>
> The (virtual) storage driver (mpt IIRC) might have some parameters you could tweak, but I don't know about that off the top of my head.
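My recollection is that the raw-disk route on ESX is done with a Raw Device Mapping. From (possibly faulty) memory the command looks roughly like the following; the device path and datastore/VM names are made-up examples, so double-check the syntax and your actual device before running anything:

```shell
# On the ESX host: create a physical-compatibility Raw Device Mapping (RDM)
# pointer file on a VMFS datastore, then attach it to the VM as a disk.
# List your real device paths first:
#   ls /vmfs/devices/disks/
# "vmhba1:0:1:0" and "datastore1/fbsd72" below are placeholders only.
vmkfstools -z /vmfs/devices/disks/vmhba1:0:1:0 \
    /vmfs/volumes/datastore1/fbsd72/fbsd72-rdm.vmdk
```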
> > ...both builds were from the exact same sources, and both runs were running with the exact same environment. I was extremely careful to ensure that the environments were exactly the same.
> >
> > I'd appreciate any feedback on tweaks that I can make (either to VMWare, or FreeBSD itself) to make the virtualized environment much more efficient.
> See above about storage. Similar questions come up periodically; searching the archives if you haven't already may prove fruitful. You may want to try running with different kernel HZ settings, for instance.
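(In case it helps: the HZ knob goes in the guest's /boot/loader.conf. kern.hz=100 is the value I usually see suggested for VMs, since the FreeBSD 7.x default of 1000 timer interrupts per second is expensive for a hypervisor to emulate; the best value for your load is a guess until you benchmark it.)

```shell
# /boot/loader.conf (inside the FreeBSD guest)
# Lower the clock interrupt rate from the default of 1000/sec;
# 100/sec is the value commonly suggested for virtual machines.
kern.hz=100
```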
> I would also try to isolate the performance of different components and evaluate their importance for your actual intended load. CPU and RAM probably perform like you expect out of the box. Disk and network I/O won't be as close to native speed, but the difference and the impact are variable depending on your hardware and load.
> A lightly-loaded Windows server is the poster child of virtualization candidates. If your decision is to dedicate the box to Winders or to virtualize and use the excess capacity for something else, I would say it's a no-brainer if the cost of ESX isn't a factor (or if ESXi gives you similar performance). If that's already a given and your decision is between running a specific FreeBSD instance on the ESX host or on its own hardware, then you're wise to spec out the performance differences.
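Putting rough numbers on John's user-vs-system observation, here's a quick awk pass over the two time(1) lines quoted above (figures copied straight from the benchmark):

```shell
#!/bin/sh
# Percentage increase of the VM run over the native run, per time(1) field.
# pct a b = how much larger a is than b, in percent (rounded).
pct() { awk -v a="$1" -v b="$2" 'BEGIN { printf "%.0f\n", (a/b - 1) * 100 }'; }

pct 5503.038 4777.568   # user time:   ~15%, within the stated margin
pct 3049.500  992.422   # system time: ~207%, i.e. roughly triple
pct 4546.25  1982.12    # wall clock (1:15:46.25 vs 33:02.12): ~129%
```

The user-time hit rounds to 15%, but system time roughly triples, and that is what blows out the wall-clock figure.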
If I recall correctly from ESX (well, VI) training*, there may be a minor scheduling issue affecting things here. If you set up the VM with 4 virtual processors, ESX schedules the VM onto the physical CPUs only when there are 4 things to execute (well, there's another time period it also uses, so even a single thread will get run eventually, but anyway...). The physical instance will run one thread immediately even if there's nothing else waiting, whereas the VM will not necessarily execute a single thread right away. I would retry with perhaps -j8 or even -j12 to make sure the 4 CPUs see plenty of work to do, and see if the numbers don't slide closer to one another.
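A sketch of that retry, as a dry run: the function below only prints the three command lines (pipe them to sh, or call make directly, to actually kick off the builds).

```shell
#!/bin/sh
# Sweep the -j level past the vCPU count so the scheduler always has
# runnable threads for all 4 virtual CPUs. Dry run: the commands are
# printed, not executed.
gen_cmds() {
    for j in 4 8 12; do
        printf 'time make -j%s buildworld > /tmp/buildworld-j%s.log 2>&1\n' "$j" "$j"
    done
}
gen_cmds
```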
For what it's worth, if there were a raw LUN available and presented to the VM, the disk performance of that LUN should very nearly match native performance, because it IS native performance. VMware (if I understood right in the first place and remember correctly as well; I suppose I should * this too :) ) doesn't add anything to slow that down. Plugging a USB drive into the host and making it available to the guest would also run at native USB/drive speeds, assuming you can do that (I've never tried to use USB drives on our blade center!).
*Since I'm recalling it, the standard caveats about my bad memory apply. In this case, there are also the caveats about the VI instructor's bad memory. :)