ZFS bug in v28 - temporary clones are not automatically
destroyed on error
Luke Marsden
luke-lists@hybrid-logic.co.uk
Tue Jul 12 10:14:41 UTC 2011
On Mon, 2011-07-11 at 12:25 +0100, Luke Marsden wrote:
> Hi all,
>
> I'm experiencing this bug on mm's ZFS v28 image from 19.06.2011
> r222557M:
>
> cannot destroy 'hpool/hcfs/fs@snapshot': dataset already exists
>
> That is on a v4-formatted ZFS filesystem on a v28-formatted pool. If I
> zfs upgrade the filesystem to v5, the error changes to "snapshot has
> dependent clones" (from memory), which is more informative but
> otherwise behaves the same. See:
>
> http://serverfault.com/questions/66414
> http://opensolaris.org/jive/thread.jspa?messageID=484242&tstart=0
>
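For context, the "dependent clones" wording points at the underlying mechanism: some clone still records that snapshot as its 'origin', so the snapshot cannot be destroyed. A hedged sketch of how one might locate such a clone with standard zfs(8) usage (the dataset name is the one from the error above; the script is deliberately a no-op on machines without zfs):

```shell
#!/bin/sh
# Sketch: find clones whose 'origin' property is the snapshot that
# refuses to be destroyed. Dataset name taken from the error message;
# everything else is illustrative.
SNAP='hpool/hcfs/fs@snapshot'

if command -v zfs >/dev/null 2>&1; then
    # -H: script-friendly output; each clone's 'origin' names the
    # snapshot it was cloned from.
    culprits=$(zfs list -H -o name,origin -t filesystem \
        | awk -v s="$SNAP" '$2 == s {print $1}')
    echo "clones blocking $SNAP:"
    echo "$culprits"
    # Destroying (or promoting) each such clone frees the snapshot:
    #   zfs destroy <clone>    # or: zfs promote <clone>
else
    echo "zfs not available; commands above shown for illustration"
fi
```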
Just an update on this for posterity, I found this:
http://www.freebsd.org/cgi/query-pr.cgi?pr=157728
The workaround indicated there, which in our case we implemented as a
semaphore around 'zfs list' and 'zfs recv' operations (so they never run
in parallel for the same filesystem), seems to have worked perfectly and
we're not seeing any more stray clones.
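The semaphore itself can be as simple as a per-filesystem mutex in the driving script. A minimal, portable sketch using mkdir's atomicity (all names here, FS, LOCKDIR, with_fs_lock, are illustrative, not from our actual implementation):

```shell
#!/bin/sh
# Sketch of the workaround: serialize `zfs list` and `zfs recv` for the
# same filesystem behind a per-filesystem lock. mkdir is atomic, so it
# doubles as a simple mutex without extra tooling.
FS="hpool/hcfs/fs"
LOCKDIR="${TMPDIR:-/tmp}/zfs-lock-$(echo "$FS" | tr '/' '_')"

with_fs_lock() {
    # Spin until we own the lock directory.
    while ! mkdir "$LOCKDIR" 2>/dev/null; do
        sleep 1
    done
    "$@"                      # the serialized command, e.g. zfs list/recv
    status=$?
    rmdir "$LOCKDIR"          # release the lock
    return $status
}

# Usage: wrap both operations so they never overlap for this filesystem,
# e.g.  with_fs_lock zfs list -H -o name "$FS"
#       ... | with_fs_lock zfs recv -F "$FS"
with_fs_lock echo "serialized: $FS"
```

On FreeBSD, lockf(1) offers the same serialization without a hand-rolled mutex, at the cost of portability.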
It would be good to fix this properly, of course :-)
--
Best Regards,
Luke Marsden
CTO, Hybrid Logic Ltd.
Mobile: +447791750420
www.hybrid-cluster.com - Cloud web hosting platform
More information about the freebsd-fs mailing list