Virtualizing FreeBSD...

Florian Heigl florian.heigl at gmail.com
Mon Jul 29 18:51:57 UTC 2013


Hi,

On 29.07.2013, at 16:23, Karl Pielorz wrote:
> Anyone got any recommended / "works for them" advice on what's the best virtualization platform to run FreeBSD under? (apart from FreeBSD itself) - we're looking for something commercial / with management etc. (e.g. HyperV/ESXi etc.)
> 
> We've got some experience running it under VMware - but are looking for one that offers good driver support (i.e. not 'emulated hardware' for NIC / disks)?
> 
> The experience we've had so far hasn't been brilliant from an I/O point of view (hence the push to find out who/what supports FreeBSD better for I/O drivers) - there's only so far an emulated E1000 can go :)

I've been running FreeBSD on two platforms so far:
- Xen ("real" Xen without commercial support) in paravirtualized and HVM modes; for my own use I've stuck with that since the days of Xen2.
The performance and low overhead of PV mode were what kept me trying it, but most of the time there have been intolerable bugs, regressions etc. that went unfixed for years.
The worst thing is that bugs generally don't get fixed until the next FreeBSD or Xen version somehow makes them go away.
This may sound bitter, but I've been working with FreeBSD on Xen for 8 years now, and I'm just saying I've seen this happen.

PV benefits I also cared about, e.g. online RAM increase/decrease, at least worked for some of those years; with HVM that's not possible. (A quick toolstack sketch follows below.)
PV is still mostly dead / buggy, so it's out of the question. (FYI, XenServer can run both modes, too.)
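
That kind of online RAM change is just one toolstack command on the dom0; a minimal sketch, assuming a hypothetical domU named "freebsd-pv" and a working balloon driver in the guest:

    # set the guest's memory target to 2048 MB at runtime
    xm mem-set freebsd-pv 2048      # classic xend toolstack
    xl mem-set freebsd-pv 2048m     # newer xl toolstack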

So, if I want a FreeBSD VM for compiling, testing etc., I'll fire up a Xen HVM domU; this is exactly what you'd get on XenServer, too.
Running XENHVM has worked nicely for me ever since FreeBSD 8, with performance being very good for networking and "ok" for disk I/O.

A really stupid fact is that those two tiny net/disk drivers you need for Xen HVM haven't made it into GENERIC in more than two years.
You still need to boot GENERIC off horribly slow emulated disk devices just so you can finally build your real kernel (a config sketch follows below).
Really, there's no single non-stupid reason for that.
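
For anyone who hasn't built it yet: the "real" kernel is basically GENERIC plus the Xen glue. A rough sketch of what the stock XENHVM config looked like in the 8.x/9.x era (check sys/amd64/conf/XENHVM in your own source tree rather than trusting my memory):

    # /usr/src/sys/amd64/conf/XENHVM (sketch)
    include         GENERIC
    ident           XENHVM
    options         XENHVM          # Xen HVM support
    device          xenpci          # Xen platform PCI device, enables the PV net/disk drivers

    # then the usual:
    # cd /usr/src && make buildkernel installkernel KERNCONF=XENHVM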

Xen(Server) has a nice advantage over VMware in not having per-vCPU memory overhead.

I'll also add that I've supported a XenServer farm from 3.x to 6.x, and the time we spent supporting / fixing it was significantly more than expected.


- VMWare ESXi
This is the other platform I've tried. I'm quite sure I had the e1000/vmxnet autoswitching working fine, and the overall experience was just fine.
No bugs, no troubles.
The platform is perfectly robust and plays nice with FreeBSD (what I'm trying to say: you can completely skip worrying about platform issues).
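
In case it saves someone a search: on that era of FreeBSD the paravirtual NIC came from the VMware tools drivers rather than the base system. A minimal sketch, assuming the modules from the emulators/open-vm-tools port are installed and the interface shows up as vxn0 (both names are assumptions, check your own setup):

    # /boot/loader.conf -- load the paravirtual NIC driver at boot
    vmxnet_load="YES"

    # /etc/rc.conf -- configure the resulting interface
    ifconfig_vxn0="DHCP"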

I've now also inherited management of a medium-sized platform running FreeBSD on ESXi, some 45 VMs I guess.
What I notice there is that disk throughput is not great, and disk I/O latency feels abysmal, especially to someone used to PV Xen VMs.
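
If you want numbers instead of a feeling, gstat's ms/r and ms/w columns make the latency visible; a quick sketch, with the device name only as an example:

    # live per-provider latency, only devices with activity
    gstat -a
    # one-off seek/transfer benchmark on the (emulated) disk
    diskinfo -t /dev/da0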

My tinderboxes run around 2 days for under 1500 packages.

I suspect that my own servers are a bit more powerful than what's under those VMs.
Still, it's often wiser to have one physical, super-responsive FreeBSD box with jails than many tiny, not-so-snappy FreeBSD VMs on VMware.

Pros I didn't mention, for VMware:
- Page sharing is easier to use than in Xen (and Hyper-V just went to a corner, crying)
- Snapshots, and things like Veeam, VAAI, or better yet filer-side snapshots on NFS with the VM quiesced, are aeons ahead of XenServer (imho)
- HA really exists, if you can afford it. (XenServer HA exists, but the scope differs.)


That brings me to the last points, lessons learned:
- Alignment:
For all I can tell, stock 9.x FreeBSD hasn't gotten the news about disk alignment, meaning you need to invest some time to fix that or you'll waste a lot of performance (see the gpart sketch after this list).
- Avoid optimized CFLAGS if you go with XenServer, or you might run into bad surprises, e.g. if you change your hardware platform (see the make.conf note after this list).
I ended up having to recompile for AMD on the old server before being able to move to the new hosts.
VMware would have just run GENERIC and never caused me that trouble in the first place.
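
The alignment fix itself is cheap once you know it's needed; a minimal sketch, assuming a fresh GPT on a hypothetical da0 (-a 1m keeps the partition 1 MB-aligned, which covers 4k sectors and common array stripe sizes):

    gpart create -s gpt da0
    gpart add -t freebsd-ufs -a 1m -l vmdisk0 da0
    newfs -U /dev/gpt/vmdisk0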
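
On the CFLAGS point, the usual trap is /etc/make.conf; a sketch of what to leave out (or commented out) on VMs that might move between Intel and AMD hosts, values shown purely as examples:

    # /etc/make.conf
    #CPUTYPE?=core2        # don't pin binaries to a CPU family you might move away from
    #CFLAGS+=-O3           # aggressive flags buy little inside a VM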



Greetings,
Florian

