VMWare ESX and FBSD 7.2 AMD64 guest

Dean Weimer dweimer at orscheln.com
Fri Jul 24 14:49:05 UTC 2009


> This message has a foot that has nearly touched down over the OT
> borderline.
> 
> We received an HP Proliant DL360G5 collocation box yesterday that has
> two processors, and 8GB of memory.
> 
> All the client wants to use this box for is a single instance of
> Windows web hosting. Knowing the sites the client wants to aggregate
> into IIS, I know that the box is far over-rated.
> 
> Making a long story short, they have agreed to allow us to put their
> Windows server inside of a virtualized container, so we can use the
> unused horsepower for other VMs (test servers, etc.).
> 
> My problem is performance. I'm only willing to make this box virtual if
> I can keep the abstraction performance loss to <25% (my ultimate goal
> would be 15%).
> 
> The following is what I have, followed by my benchmark findings:
> 
> # 7.2-RELEASE AMD64
> 
> FreeBSD 7.2-RELEASE #0: Fri May  1 07:18:07 UTC 2009
>     root at driscoll.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC
> 
> Timecounter "i8254" frequency 1193182 Hz quality 0
> CPU: Intel(R) Xeon(R) CPU            5150  @ 2.66GHz (2666.78-MHz
> K8-class CPU)
>   Origin = "GenuineIntel"  Id = 0x6f6  Stepping = 6
> 
> usable memory = 8575160320 (8177 MB)
> avail memory  = 8273620992 (7890 MB)
> 
> FreeBSD/SMP: Multiprocessor System Detected: 4 CPUs
>  cpu0 (BSP): APIC ID:  0
>  cpu1 (AP): APIC ID:  1
>  cpu2 (AP): APIC ID:  6
>  cpu3 (AP): APIC ID:  7
> 
> Benchmarks:
> 
> # time make -j4 buildworld (under vmware)
> 
> 5503.038u 3049.500s 1:15:46.25 188.1%   5877+1961k 3298+586716io
> 2407pf+0w
> 
> # time make -j4 buildworld (native)
> 
> 4777.568u 992.422s 33:02.12 291.1%	6533+2099k 25722+586485io 3487pf+0w
> 
> ...both builds were from the exact same sources, and both runs were
> running with the exact same environment. I was extremely careful to
> ensure that the environments were exactly the same.
> 
> I'd appreciate any feedback on tweaks that I can make (either to
> VMware, or FreeBSD itself) to make the virtualized environment much
> more efficient.
> 
> Off-list is fine.
> 
> Cheers,
> 
> Steve

I haven't actually done any benchmarks to compare the performance, but I have been running production FreeBSD servers on VMware for a couple of years. I currently have two 6.2 systems running CUPS, one on VMware Server and the other on ESX 3.5, plus a 7.0 system and two 7.1 systems running Squid on ESX 3.5. The biggest bottleneck I have noticed for any guest within VMware is disk I/O (with the exception of video, which isn't an issue for a server). Compiling software does take longer because of this; however, if you tune your disks properly, performance under real application load doesn't seem to be an issue. Using soft updates on the file system seems to help a lot, but be aware of the consequences.
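As a rough sketch of what I mean by soft updates (the device and mount point below, /dev/da0s1f and /usr, are only examples; substitute your own, and note the filesystem has to be unmounted or mounted read-only when you run tunefs):

# umount /usr
# tunefs -n enable /dev/da0s1f
# mount /usr
# mount | grep /usr    (the options should now include soft-updates)

The consequence to keep in mind is that soft updates delay metadata writes, so a crash or power loss can lose the last few seconds of writes even though the filesystem itself stays consistent.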
That being said, on the systems I have running Squid we average 9 GB of traffic a day on the busiest one, with about an 11% cache hit rate, and the proxies sit close to idle after hours. Looking at systat -vmstat, the system is almost idle during the day under full load as well; you just can't touch FreeBSD with only two DSL lines of web traffic. It's faster than the old native system was, although there is an iSCSI SAN behind the ESX server for disk access, and we went from a Dell PowerEdge 850 to a Dell PowerEdge 2950. It shares that host with around 15 or more other servers (mostly Windows, some Linux) depending on the current load, which brings up another point: it seems to do just fine when VMware VMotion moves it between hosts.
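If you want to watch the same numbers, a refresh interval is all it takes (the 5 here is just an example, in seconds):

# systat -vmstat 5

Under VMware the columns worth watching are the disk figures (KB/t, tps, %busy) against the CPU idle percentage; if the disks sit near 100% busy while the CPU is mostly idle, the guest is I/O bound rather than CPU bound.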
Not sure if this information helps you out any, but my recommendation would be: if your application will be very disk intensive, avoid the virtual machine. In my case with Squid, gaining the redundancy of VMware coupled with VMotion was worth the potential hit in performance. We are also soon implementing a second data center across town that will house additional VMware servers and, thanks to a 10G fiber ring, will let us migrate running servers between data centers. Also keep in mind that as of vSphere 4 (we will be upgrading to this once the new data center is complete; just waiting on the shipment of the racks at this point), VMware officially supports FreeBSD 7.1, so you might want to go with that instead of 7.2. There may be a performance issue with 7.2, but it's just as likely a matter of release timing that 7.1 is listed as supported and 7.2 isn't. As of ESXi 4.0 (released 5-21-2009), I believe it shares the same code base as vSphere 4, so the same guests should be supported.

Thanks,
     Dean Weimer
     Network Administrator
     Orscheln Management Co

