svn commit: r344569 - in head/cddl/contrib/opensolaris: cmd/zfs lib/libzfs/common

Benjamin Kaduk bjkfbsd at gmail.com
Tue Feb 26 17:28:57 UTC 2019


On Tue, Feb 26, 2019 at 11:19 AM John Baldwin <jhb at freebsd.org> wrote:

> On 2/26/19 8:59 AM, Rodney W. Grimes wrote:
> >> On Tue, Feb 26, 2019 at 10:14 AM Cy Schubert <Cy.Schubert at cschubert.com>
> >> wrote:
> >>
> >>> On February 26, 2019 7:48:27 AM PST, Cy Schubert <
> >>> Cy.Schubert at cschubert.com> wrote:
> >>>> On February 26, 2019 12:18:35 AM PST, Baptiste Daroussin
> >>>> <bapt at FreeBSD.org> wrote:
> >>>
> >>
> >> [trimming the unneeded pile of commit body]
> >>
> >>
> >>>> This broke my systems: many filesystems fail to mount, causing
> >>>> nullfs late mounts to fail. No details until tonight.
> >>>>
> >>>> Suggest we back this out until it is properly tested.
> >>>
> >>> Nested zfs filesystems seem not to be handled properly or possibly not
> >>> supported any more. This explains my mail gateway also not mounting all
> >>> filesystems in /home. It was odd that dovecot stopped working.
> >>>
> >>> The symptom of the problem is that zfs mount -a no longer mounts all
> >>> filesystems. zfs mount fails, saying the filesystem is already
> >>> mounted. The workaround is to zfs umount each affected zfs dataset by
> >>> hand and zfs mount it by hand.
> >>>
> >>> Generally this has screwed up sites that have hundreds of zfs
> >>> datasets (in my case 122). The workaround might be to script testing
> >>> each mount, unmounting and remounting if necessary.
> >>>
> >>> I'm being sarcastic about creating an rc script to clean this up. This
> >>> needs to be backed out and tested properly before being committed.
> >>>
> >>>
> >> I don't know what you mean by "nested zfs filesystems" -- do you mean a
> >> zpool within a zvol?
> >> That has been unsupported for a long time, IIRC.  And
> > That had better not be unsupported; that is the preferred technology
> > for all of the virtualization stuff: bhyve, virtualbox, qemu, etc.
>
> I think Ben is referring to using the nested zpool on the host itself
> rather
> than in the guest.  We do actually let you do such crazy things, I think (I
> use UFS in my VMs usually and fsck on the host against
> /dev/zvol/bhyve/<foo>p2
> can be faster than fsck in the booted guest), but normally the host just
> hosts
> the zvol and the guest manages filesystems in the volume.  Mounting the
> nested zpool on the host is probably best characterized as running with
> scissors.
>
>
Exactly so; thanks for clarifying.

-Ben
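[Editor's note: the manual workaround Cy describes above (zfs umount each affected dataset, then zfs mount it again) could be scripted roughly as follows. This is an untested sketch, not anything posted in the thread; the `mounted` and `canmount` property names are standard zfs(8) properties, but the stale-mount check is an assumption about the failure mode.]

```shell
#!/bin/sh
# Rough sketch of the manual workaround from the thread: find ZFS datasets
# that ZFS reports as mounted but that are absent from the live mount
# table, then unmount and remount each one by hand.  Untested illustration.

remount_if_stale() {
    # $1 = dataset name; remount it if it is missing from mount(8) output.
    if ! mount -p | awk '{print $1}' | grep -qx "$1"; then
        echo "remounting $1"
        zfs umount "$1" 2>/dev/null || true
        zfs mount "$1"
    fi
}

# Walk every dataset that should be mounted (canmount=on, mounted=yes).
# Guarded behind an explicit "run" argument so that sourcing or reading
# this sketch does nothing destructive.
if [ "${1:-}" = "run" ]; then
    zfs list -H -o name,mounted,canmount |
    while read -r name mounted canmount; do
        [ "$canmount" = "on" ] && [ "$mounted" = "yes" ] &&
            remount_if_stale "$name"
    done
fi
```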

