Storage overhead on zvols
Dustin Wenz
dustinwenz at ebureau.com
Tue Dec 5 16:50:46 UTC 2017
> On Dec 5, 2017, at 10:41 AM, Rodney W. Grimes <freebsd-rwg at pdx.rh.CN85.dnsmgr.net> wrote:
>
>>
>>
>> Dustin Wenz wrote:
>>> I'm not using ZFS in my VMs for data integrity (the host already
>>> provides that); it's mainly for the easy creation and management of
>>> filesystems, and the ability to do snapshots for rollback and
>>> replication.
>>
>> Snapshots and replication work fine on the host, acting on the zvol.
>
> I suspect he is snapshotting and doing send/recvs of something
> much less than the zvol, probably some datasets, maybe boot
> environments. A snapshot of the whole zvol is fine if you're managing
> data at the VM level, but not so good if you've got lots of stuff
> going on inside the VM.
Exactly; it's useful to have discrete control over each filesystem.
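For example (pool and dataset names here are illustrative, not from my
actual deployments), the host can only snapshot the zvol as one opaque
disk, while inside the guest each filesystem is handled on its own,
including delegated permissions:

    # On the host: one snapshot covering the entire virtual disk
    zfs snapshot tank/vols/guest0@nightly

    # Inside the guest: per-filesystem snapshots and incremental sends
    zfs snapshot guestpool/var/db@pre-upgrade
    zfs send -i @nightly guestpool/var/db@pre-upgrade | \
        ssh backuphost zfs recv -u backup/guest0/var/db

    # Delegation, also granted per-dataset
    zfs allow -u backupuser snapshot,send guestpool/var/db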
>>> Some of my deployments have hundreds of filesystems in
>>> an organized hierarchy, with delegated permissions and automated
>>> snapshots, send/recvs, and clones for various operations.
>>
>> What kind of zpool do you use in the guest, to avoid unwanted
>> additional redundancy?
>
> Just a simple stripe of one device would be my guess, though you're
> still gonna have metadata redundancy.
Also correct; just using the zvol virtual device as a single-disk pool.
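Roughly what that looks like from inside the guest, assuming the zvol
is attached as a virtio-blk disk (which a FreeBSD guest sees as vtbd0);
the redundant_metadata tweak is optional and only reduces, rather than
eliminates, the duplicated metadata Rodney mentions:

    # Single-disk pool on the zvol-backed virtual disk
    zpool create guestpool vtbd0

    # Optional: keep fewer redundant metadata copies, since the
    # host pool already provides data integrity
    zfs set redundant_metadata=most guestpool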
>>
>> Did you benchmark the space or time efficiency of ZFS vs. UFS?
>>
>> At a BSD-related meeting this year I asked Allan Jude for a bhyve
>> level null mount, so that we could access some subtree of the host
>> at / inside the guest, and avoid block devices and file systems
>> altogether. Right now I have to use NFS for that, which is
>> irritating.
>
> This is not as simple as it seems. Remember, bhyve is just presenting
> a hardware environment, and hardware environments don't have a file
> system concept per se, unlike jails, which provide a software
> environment.
>
> In effect, what you're asking for is what NFS does, so use NFS and
> accept that this is the way to get what you want. Sure, you could
> implement a virt-vfs, but I wonder how close its spec would end up
> being to the spec of NFS.
>
> Or maybe that's the answer: implement virt-vfs as a more efficient
> way to transport NFS calls in and out of the guest.
I've not done any deliberate comparisons of latency or throughput. What I've decided to virtualize doesn't have any exceptional performance requirements; if I needed the best possible I/O, I would lean toward using jails instead of a hypervisor.
- .Dustin