zfs destroy dry-run lying about reclaimable space?
Marcus Müller
znek@mulle-kybernetik.com
Tue Mar 17 10:25:20 UTC 2015
Hi ZFS experts,
I just stumbled across this one:
root@tank:(/)# zfs-destroy-snapshots.sh -n tank/usr/local
would destroy tank/usr/local@zfs-auto-snap_monthly-2014-04-01-00h28
would reclaim 53.6M
would destroy tank/usr/local@zfs-auto-snap_monthly-2014-05-01-00h28
would reclaim 8.32M
would destroy tank/usr/local@zfs-auto-snap_monthly-2014-06-01-00h28
would reclaim 81.4M
would destroy tank/usr/local@zfs-auto-snap_monthly-2014-07-01-00h28
would reclaim 170M
[...]
root@tank:(/)# zfs-destroy-snapshots.sh tank/usr/local
will destroy tank/usr/local@zfs-auto-snap_monthly-2014-04-01-00h28
will reclaim 53.6M
will destroy tank/usr/local@zfs-auto-snap_monthly-2014-05-01-00h28
will reclaim 191M
will destroy tank/usr/local@zfs-auto-snap_monthly-2014-06-01-00h28
will reclaim 1.17G
will destroy tank/usr/local@zfs-auto-snap_monthly-2014-07-01-00h28
will reclaim 177M
[...]
zfs-destroy-snapshots.sh iterates over all snapshots of the given filesystem and calls zfs destroy -v [-n] <snapshot> on each one; no magic involved.
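In case it helps, this is roughly what the script boils down to. A minimal sketch reconstructed from the description above; the ZFS variable and the function wrapper are my additions (so the zfs binary can be swapped out for testing), and the real script may well differ in detail:

```shell
# Sketch of the zfs-destroy-snapshots.sh loop described above.
# ZFS and the function name are assumptions of this sketch, not the
# real script's interface.
ZFS="${ZFS:-zfs}"

destroy_snapshots() {
    fs="$1"       # filesystem whose snapshots are to be destroyed
    dryrun="$2"   # "-n" for a dry run, empty otherwise

    # List all snapshots of the filesystem, oldest first, then destroy
    # each one individually and verbosely.
    "$ZFS" list -H -t snapshot -o name -s creation -r "$fs" |
    while IFS= read -r snap; do
        "$ZFS" destroy -v $dryrun "$snap"
    done
}
```

Called as destroy_snapshots tank/usr/local -n, each zfs destroy -v -n prints the "would destroy"/"would reclaim" pair shown above; without -n it prints "will destroy"/"will reclaim" and actually destroys the snapshot.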
I was a bit stumped that the sum of the reclaimable space reported by the dry run of zfs-destroy-snapshots.sh was nowhere near the filesystem's supposed "usedbysnapshots" value, so I went ahead and deleted all the snapshots for real to see what would happen. Unfortunately that means I can no longer zfs diff any of them for clues about why the dry run was so far off, but maybe someone else knows? Is this a known bug already?
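For reference, the comparison I made amounts to something like the following (a sketch; ZFS2 and the helper names are my inventions, the dataset name is just the one from this post):

```shell
# Compare the filesystem's usedbysnapshots property against the sum of
# the per-snapshot "used" figures, both in raw bytes (-p).
# ZFS2 and the helper names are assumptions of this sketch.
ZFS2="${ZFS2:-zfs}"

# Print usedbysnapshots for a filesystem, in bytes.
usedbysnapshots() {
    "$ZFS2" get -Hp -o value usedbysnapshots "$1"
}

# Sum the per-snapshot "used" column for a filesystem, in bytes.
snapshot_used_sum() {
    "$ZFS2" list -Hp -t snapshot -o used -r "$1" |
    awk '{ s += $1 } END { print s + 0 }'
}
```

As far as I understand, these two numbers are not expected to match exactly anyway: a snapshot's "used" column only counts blocks unique to that snapshot, so space referenced by several snapshots at once shows up in usedbysnapshots without being charged to any single snapshot's "used".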
Cheers,
Marcus
--
Marcus Müller . . . http://www.mulle-kybernetik.com/znek/