bhyve and vfs.zfs.arc_max, and zfs tuning for a hypervisor

Mike Gerdts mike.gerdts at joyent.com
Thu Mar 21 01:47:44 UTC 2019


On Tue, Mar 19, 2019 at 3:07 AM Patrick M. Hausen <hausen at punkt.de> wrote:

> Hi!
>
> > Am 19.03.2019 um 03:46 schrieb Victor Sudakov <vas at mpeks.tomsk.su>:
> > 1. Does ARC actually cache zfs volumes (not files/datasets)?
>
> Yes it does.
>
> > 2. If ARC does cache volumes, does this cache make sense on a hypervisor,
> > because guest OSes will probably have their own disk cache anyway.
>
> IMHO not much, because the guest OS is relying on the fact that when
> it writes its own cached data out to „disk“, it will be committed to
> stable storage.
>

I'd recommend caching at least metadata (primarycache=metadata).  The guest
will not cache zfs metadata, and not having metadata in the cache can lead
to a big hit in performance.  The metadata in question here consists of
things like block pointers that track where the data is stored - zfs can't
find the data without its metadata.
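
For example (zfs/vm here is just the dataset name borrowed from the example
further down; substitute whatever holds your zvols):

        zfs set primarycache=metadata zfs/vm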

I think the key decision as to whether you use primarycache=metadata or
primarycache=all comes down to whether you are after predictable
performance or optimal performance.  You will likely get worse performance
with primarycache=metadata (and especially with primarycache=none) than
with primarycache=all, presuming the host has RAM to spare.  But as you
pack the system with more VMs or allocate more disk to existing VMs, you
will probably find that primarycache=metadata leads to steadier performance
regardless of how much storage is used or how active other VMs are.


> > 3. Would it make sense to limit vfs.zfs.arc_max to 1/8 or even less of
> > total RAM, so that most RAM is available to guest machines?
>
> Yes, if you build your own solution on plain FreeBSD. No, if you are running
> FreeNAS, which already tries to autotune the ARC size according to the
> memory committed to VMs.
>
> > 4. What other zfs tuning measures can you suggest for a bhyve
> > hypervisor?
>
> e.g.
>         zfs set sync=always zfs/vm
>
> if zfs/vm is the dataset under which you create the ZVOLs for your emulated
> disks.
>
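
As an aside on question 3: on plain FreeBSD, the usual way to cap the ARC is
a loader tunable.  A minimal sketch - the 16G figure is only a placeholder,
and on some versions you may need to spell the value out in bytes:

        # /boot/loader.conf
        vfs.zfs.arc_max="16G"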

I'm not sure what the state of this is in FreeBSD, but in SmartOS we allow
the guests to benefit from write caching if they negotiate FLUSH.  Guests
that do negotiate FLUSH are expected to use proper barriers to flush the
cache at critical times.  When a FLUSH arrives, SmartOS bhyve issues an
fsync().  To be clear, SmartOS bhyve is not actually caching writes in
memory; it is just delaying transaction group commits.  This avoids
significant write inflation and its associated latency.  Support for FLUSH
negotiation has greatly improved I/O performance - to the point that some
tests show parity with running directly on the host pool.  If this is not
already in FreeBSD, it would probably be of relatively high value to pull
in.

If you do go the route of sync=always and primarycache=<none|metadata>, be
sure your guest block size and host volblocksize match.  ZFS (on platforms
I'm more familiar with, at least) defaults to volblocksize=8k.  Most guest
file systems these days seem to default to a block size of 4 KiB.  If the
guest file system issues a 4 KiB aligned write, that will turn into a
read-modify-write cycle to stitch that 4 KiB guest block into the host's
8 KiB block.  If the adjacent guest block within the same 8 KiB host block
is written next, that write will also turn into a read-modify-write cycle.
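
One way to avoid that - assuming a 4 KiB guest block size and using
zfs/vm/guest0 as a placeholder name - is to create the zvol with a matching
volblocksize up front, since volblocksize cannot be changed after creation:

        zfs create -V 20G -o volblocksize=4k zfs/vm/guest0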

If you are using ZFS in the guest, this can be particularly problematic
because the guest ZFS will align writes with the guest pool's ashift, not
with a guest dataset's recordsize or volblocksize.  I discovered this
during extended benchmarking of zfs-on-zfs - primarily with
primarycache=metadata and sync=always.  The write inflation was quite
significant: 3x was common.  I tracked some of it down to alignment issues,
and part of it was due to sync writes causing the data to be written twice.
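
If you want to check how the guest pool and the host zvol line up, something
like this works on the platforms I'm familiar with (pool and dataset names
are placeholders):

        # on the host
        zfs get volblocksize zfs/vm/guest0

        # inside the guest
        zdb -C guestpool | grep ashift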

George Wilson has a great talk where he describes the same issues I hit.

https://www.youtube.com/watch?v=_-QAnKtIbGc

I've mentioned write inflation related to sync writes a few times.  One
point that I think is poorly understood is that when ZFS is rushed into
committing a write with fsync() or similar, the immediate write consists of
ZIL blocks going to the intent log.  The intent log can be on a separate
device (a slog, added as a log vdev) or it can be on the disks that hold
the pool's data.  When the intent log is on the data disks, the data is
written to the same disks twice: once as ZIL blocks and once as data
blocks.  Between these writes there will be full-disk head movement as the
uberblocks are updated at the beginning and end of the disk.
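
A dedicated log device avoids that head movement on the data disks.  A
sketch, with zfs as the pool name from the earlier example and ada2 as a
hypothetical device:

        zpool add zfs log ada2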

What I say above is based on experience with kernel zones on Solaris and
bhyve on SmartOS.  There are enough similarities that I expect bhyve on
FreeBSD will be the same, but FreeBSD may have some strange-to-me zfs
caching changes.

Regards,
Mike

