svn commit: r294329 - in head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs: . sys

Alan Somers asomers at freebsd.org
Tue Jan 19 18:55:07 UTC 2016


On Tue, Jan 19, 2016 at 10:00 AM, Alan Somers <asomers at freebsd.org> wrote:
> Author: asomers
> Date: Tue Jan 19 17:00:25 2016
> New Revision: 294329
> URL: https://svnweb.freebsd.org/changeset/base/294329
>
> Log:
>   Disallow zvol-backed ZFS pools
>
>   Using zvols as backing devices for ZFS pools is fraught with panics and
>   deadlocks. For example, attempting to online a missing device in the
>   presence of a zvol can cause a panic when vdev_geom tastes the zvol.  Better
>   to completely disable vdev_geom from ever opening a zvol. The solution
>   relies on setting a thread-local variable during vdev_geom_open, and
>   returning EOPNOTSUPP during zvol_open if that thread-local variable is set.
>
>   Remove the check for MUTEX_HELD(&zfsdev_state_lock) in zvol_open. Its intent
>   was to prevent a recursive mutex acquisition panic. However, the new check
>   for the thread-local variable also fixes that problem.
>
>   Also, fix a panic in vdev_geom_taste_orphan. For an unknown reason, this
>   function was set to panic. But it can occur that a device disappears during
>   tasting, and it causes no problems to ignore this departure.
>
>   Reviewed by:  delphij
>   MFC after:    1 week
>   Relnotes:     yes
>   Sponsored by: Spectra Logic Corp
>   Differential Revision:        https://reviews.freebsd.org/D4986
>
> Modified:
>   head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/sys/vdev_impl.h
>   head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_geom.c
>   head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ioctl.c
>   head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zvol.c
>
> Modified: head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/sys/vdev_impl.h
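
To make that concrete, here is a minimal sketch of the mechanism the
log describes (not the committed diff; it assumes the illumos-style
thread-specific-data API, tsd_get()/tsd_set(), from the OpenSolaris
compat layer, and the signatures are from memory, so details may
differ from the tree):

/* vdev_geom.c: key is created with tsd_create() at module init */
uint_t zfs_geom_probe_vdev_key;

static int
vdev_geom_open(vdev_t *vd, uint64_t *psize, uint64_t *max_psize,
    uint64_t *logical_ashift, uint64_t *physical_ashift)
{
        int error;

        /* Mark this thread as probing GEOM providers on behalf of a vdev. */
        VERIFY(tsd_set(zfs_geom_probe_vdev_key, vd) == 0);

        error = 0;      /* existing attach/taste/label logic elided */

        /* Clear the mark before returning to the caller. */
        VERIFY(tsd_set(zfs_geom_probe_vdev_key, NULL) == 0);
        return (error);
}

/* zvol.c */
static int
zvol_open(struct g_provider *pp, int flag, int count)
{

        /*
         * If this thread is inside vdev_geom_open(), refuse the open so
         * that a pool can never be placed on top of a zvol.
         */
        if (tsd_get(zfs_geom_probe_vdev_key) != NULL)
                return (EOPNOTSUPP);

        /* existing zvol open logic elided */
        return (0);
}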

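The vdev_geom_taste_orphan() part is simpler.  A provider can
legitimately depart while it is being tasted, so the orphan callback
only needs to note the event instead of panicking.  Again, just a
sketch of the idea:

static void
vdev_geom_taste_orphan(struct g_consumer *cp)
{

        /* Losing a provider mid-taste is harmless; note it and move on. */
        ZFS_LOG(0, "WARNING: Orphan %s while tasting its VDev GUID.",
            cp->provider->name);
}
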
Due to popular demand, I will conditionalize this behavior on a
sysctl, and I won't MFC it.  The sysctl must default to off (ZFS on
zvols not allowed), because merely having the ability to put pools on
zvols can cause panics even for users who aren't using it; a rough
sketch of the knob follows the list below.  And let me clear up some
confusion:

1) Having the ability to put a zpool on a zvol can cause panics and
deadlocks, even if that ability is unused.
2) Putting a zpool atop a zvol causes unnecessary performance problems
because there are two layers of COW involved, with all their software
complexities.  This also applies to putting a zpool atop files on a
ZFS filesystem.
3) A VM guest putting a zpool on its virtual disk, where the VM host
backs that virtual disk with a zvol, will work fine.  That's the ideal
use case for zvols.
3b) Using ZFS on both host and guest isn't ideal for performance, as
described in item 2.  That's why I prefer to use UFS for VM guests.
4) Using UFS on a zvol as Stefan Esser described works fine.  I'm not
aware of any performance problems associated with mixing UFS and ZFS.
Perhaps Stefan was referring to duplication between the ARC and UFS's
vnode cache.  The same duplication would be present in a ZFS on top of
zvol scenario.
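
For the sysctl, I have in mind something along the lines of the sketch
below.  The name (vfs.zfs.vol.recursive) and its placement are
illustrative, not final; the point is an off-by-default tunable that
zvol_open() consults alongside the thread-local check:

static int zvol_allow_recursive = 0;    /* off: no zpools on zvols */
SYSCTL_DECL(_vfs_zfs_vol);
SYSCTL_INT(_vfs_zfs_vol, OID_AUTO, recursive, CTLFLAG_RWTUN,
    &zvol_allow_recursive, 0,
    "Allow zpools to be built on top of zvols (can panic or deadlock)");

...and in zvol_open(), the guard from the commit becomes:

        /* Refuse the open only when the knob is off and a vdev is tasting. */
        if (!zvol_allow_recursive &&
            tsd_get(zfs_geom_probe_vdev_key) != NULL)
                return (EOPNOTSUPP);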

-Alan

