FreeBSD 9 On ESXi 5.5?

dweimer dweimer at
Tue Dec 3 05:46:05 UTC 2013

On 12/02/2013 6:15 pm, Drew Tomlinson wrote:
> I recently purchased an HP Proliant ML310 to replace an old server
> that happily ran FBSD for 13 years before it died last month.
> Now because the hardware will support it, I'd like to learn about
> vSphere or ESXi or whatever they are calling it these days as we are
> beginning to migrate that way at work.  Thus I would like to rebuild
> my FBSD box as a guest.
> My new server has 4, 1TB drives.  The onboard RAID controller will
> only do mirroring or striping without parity.  It will not do what I
> know of as RAID5 where parity info is spread across the disks.  I had
> hoped to use the 4 drives as one logical 3 TB drive with parity.  But
> since I can't, I have set it up as 4 single drives and therefore have
> 4 different drives in which I could create virtual drives for my FBSD
> guest.  It was my thought that then I could use these 4 virtual drives
> and build my FBSD on ZFS, just as I would if it were bare metal.
> I used to use ZFS and like the redundancy it provides.  However I've
> googled and there seems to be a lot of posts about ZFS not working
> well in a virtual machine.
> Does anyone have any insight on this?  Good idea?  Bad idea that will
> bite me later?  It is more important to me to have a machine that is
> reliable and just runs like my old one did than to do anything fancy.
> I like a raidz1 pool because I could lose a disk and not lose data.
> However I do not want to cause problems by using it in a vm since they
> are so easy to restore from backups.
> Thanks,
> Drew
> _______________________________________________
> freebsd-questions at mailing list
> To unsubscribe, send any mail to 
> "freebsd-questions-unsubscribe at"
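For reference, the pool Drew describes would look something like the sketch below. The device names (da1..da4) are assumptions and will depend on how ESXi presents the virtual disks:

```shell
# Create a raidz1 pool from four virtual disks; any one disk can fail
# without data loss.  Device names are a guess -- check yours first with
# `camcontrol devlist` or `geom disk list`.
zpool create tank raidz1 da1 da2 da3 da4

# Confirm the layout and redundancy
zpool status tank
```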

I have multiple FreeBSD 9.0, 9.1, and 9.2 machines running on ESX 5.1, and 
a single 9.2 plus a 10 beta build on ESXi 5.5.  All are running ZFS, but 
not raidz or even mirrors, just single-disk pools (the disk storage is on 
a redundant iSCSI SAN or direct-attached RAID); my systems boot from ZFS 
for the benefit of boot environments.  It's stable, but I haven't done any 
performance benchmarks, as the systems I'm running are not I/O heavy.  If 
the system supports DirectPath I/O it should work without a whole lot of 
overhead and give you near-native speed to the disks, but the supported 
hardware is fairly limited and on the costly side.
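For what it's worth, the boot environments mentioned above are managed with the sysutils/beadm port on a ZFS-on-root install; a rough sketch (environment names here are made up):

```shell
# Snapshot the current boot environment before an upgrade,
# so you can roll back if the new world misbehaves
beadm create pre-upgrade

# List environments; flags mark the active-now and active-on-reboot ones
beadm list

# Roll back by activating the old environment and rebooting
beadm activate pre-upgrade
shutdown -r now
```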

If you already have the hardware and the time to test it, give it a shot 
and see if the performance is where you need it.  Whether it works out is 
all going to depend on your use case.
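If you want a quick first look at whether the virtual disks are fast enough, something as simple as dd against the pool gives a rough sequential number. The pool name and sizes below are arbitrary; use a file larger than RAM so ARC caching doesn't flatter the result:

```shell
# Rough sequential write check -- a sanity test, not a real benchmark.
# 4 GB file written in 1 MB blocks (FreeBSD dd accepts bs=1m).
dd if=/dev/zero of=/tank/testfile bs=1m count=4096

# Rough sequential read check; reboot or export/import the pool first
# for a number less distorted by the ARC
dd if=/tank/testfile of=/dev/null bs=1m
rm /tank/testfile
```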

    Dean E. Weimer
