Storage overhead on zvols
Dustin Marquess
dmarquess at gmail.com
Mon Dec 4 23:22:11 UTC 2017
I doubt it's best practice, and I'm sure I'm just crazy for doing it,
but personally I try to match the ZVOL blocksize to whatever the
underlying filesystem's blocksize is. To me that just makes the most
logical sense.
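
For instance, a minimal sketch of that approach, assuming a Linux
guest whose ext4 filesystem uses 4k blocks (the pool and dataset
names here are hypothetical):

  # match volblocksize to the guest filesystem's 4k block size
  zfs create -o volmode=dev -o volblocksize=4k -V 20g tank/vm/linux0

The guest's block size can be confirmed from inside the VM, e.g. with
tune2fs -l /dev/vda1 | grep 'Block size' for ext4.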
-Dustin
On Mon, Dec 4, 2017 at 5:19 PM, Dustin Wenz <dustinwenz at ebureau.com> wrote:
> I'm starting a new thread based on the previous discussion in "bhyve uses all available memory during IO-intensive operations" relating to size inflation of bhyve data stored on zvols. I've done some experimenting with this, and I think it will be useful for others.
>
> The zvols listed here were created with this command:
>
> zfs create -o volmode=dev -o volblocksize=Xk -V 30g vm00/chyves/guests/myguest/diskY
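>
> For example, with concrete values substituted for X and Y (the disk
> number here is hypothetical), the 16k case would be:
>
>   zfs create -o volmode=dev -o volblocksize=16k -V 30g vm00/chyves/guests/myguest/disk0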
>
> The zvols were created on a raidz1 pool of four disks. For each zvol, I created a basic zfs filesystem in the guest using all default tuning (128k recordsize, etc). I then copied the same 8.2GB dataset to each filesystem.
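>
> One way to gauge the amplification (a sketch of the method; the guest
> path is hypothetical, and these may not be the exact commands used) is
> to compare the space the zvol references on the host against the
> logical size of the data inside the guest:
>
>   # host: physical space referenced by the zvol, including raidz overhead
>   zfs get referenced vm00/chyves/guests/myguest/diskY
>   # guest: logical size of the copied dataset
>   du -sh /data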
>
> volblocksize    size amplification
>
> 512B            11.7x
> 4k              1.45x
> 8k              1.45x
> 16k             1.5x
> 32k             1.65x
> 64k             1x
> 128k            1x
>
> The worst case is the 512B volblocksize, where the space used is more than 11 times the size of the data stored within the guest. Efficiency does not improve steadily as the block size doubles from 4k upward: amplification actually climbs through 16k and 32k, making 32k the second-worst case, before dropping to 1x. Wasted space was minimized with 64k and 128k blocks.
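>
> A plausible explanation (my assumption; the pool's ashift isn't shown
> above) is raidz allocation overhead: raidz1 stores parity for every
> block and rounds each allocation up to a multiple of (parity + 1)
> sectors, so a tiny volblocksize pays a large fixed cost per block.
> Assuming 4k-sector disks (ashift=12):
>
>   512B volblock -> 1 data sector + 1 parity sector = 8k allocated
>   8192 / 512 = 16x raw inflation
>
> which is the same order of magnitude as the observed 11.7x.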
>
> It would appear that 64k is a good choice for volblocksize if you are using a zvol to back your VM, and the VM is using the virtual device for a zpool. Incidentally, I believe this is the default when creating VMs in FreeNAS.
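>
> For example (a sketch; the disk name is hypothetical), creating a
> 64k-backed guest disk and verifying the property:
>
>   zfs create -o volmode=dev -o volblocksize=64k -V 30g vm00/chyves/guests/myguest/disk0
>   zfs get volblocksize vm00/chyves/guests/myguest/disk0
>
> Note that volblocksize can only be set when the zvol is created; it cannot be changed afterward.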
>
> - Dustin
>