ZFS: destroying snapshots without compromising boot environments

Allan Jude allanjude at freebsd.org
Sat Mar 28 15:19:36 UTC 2020


On 2020-03-28 03:24, Graham Perrin wrote:
> I imagine that some of the 2019 snapshots below are redundant.
> 
> Can I safely destroy any of them?
> 
> $ zfs list -t snapshot
> NAME                                                    USED  AVAIL  REFER  MOUNTPOINT
> copperbowl/ROOT/Waterfox@2020-03-20-06:19:45           67.0M      -  59.2G  -
> copperbowl/ROOT/r359249b@2019-08-18-04:04:53           5.82G      -  40.9G  -
> copperbowl/ROOT/r359249b@2019-08-18-11:28:31           4.32G      -  40.7G  -
> copperbowl/ROOT/r359249b@2019-09-13-18:45:27-0         9.43G      -  43.4G  -
> copperbowl/ROOT/r359249b@2019-09-19-20:03:26           5.13G      -  43.3G  -
> copperbowl/ROOT/r359249b@2019-09-24-20:45:59-0         7.67G      -  44.6G  -
> copperbowl/ROOT/r359249b@2020-01-09-17:05:57-0         7.66G      -  55.2G  -
> copperbowl/ROOT/r359249b@2020-01-11-14:15:47           7.41G      -  56.2G  -
> copperbowl/ROOT/r359249b@2020-03-17-21:57:17           12.0G      -  59.2G  -
> copperbowl/iocage/releases/12.0-RELEASE/root@jbrowsers    8K      -  1.24G  -
> copperbowl/poudriere/jails/head@clean                   328K      -  1.89G  -
> $ beadm list
> BE       Active Mountpoint  Space Created
> Waterfox -      -           12.2G 2020-03-10 18:24
> r357746f -      -            1.3G 2020-03-20 06:19
> r359249b NR     /          148.9G 2020-03-28 01:19
> $ beadm list -aDs
> BE/Dataset/Snapshot                              Active Mountpoint  Space Created
> 
> Waterfox
>   copperbowl/ROOT/Waterfox                       -      -          137.0M 2020-03-10 18:24
>     r359249b@2020-03-17-21:57:17                 -      -           59.2G 2020-03-17 21:57
>   copperbowl/ROOT/Waterfox@2020-03-20-06:19:45   -      -           67.0M 2020-03-20 06:19
> 
> r357746f
>   copperbowl/ROOT/r357746f                       -      -            1.2G 2020-03-20 06:19
>     Waterfox@2020-03-20-06:19:45                 -      -           59.2G 2020-03-20 06:19
> 
> r359249b
>   copperbowl/ROOT/r359249b@2019-08-18-04:04:53   -      -            5.8G 2019-08-18 04:04
>   copperbowl/ROOT/r359249b@2019-08-18-11:28:31   -      -            4.3G 2019-08-18 11:28
>   copperbowl/ROOT/r359249b@2019-09-13-18:45:27-0 -      -            9.4G 2019-09-13 18:45
>   copperbowl/ROOT/r359249b@2019-09-19-20:03:26   -      -            5.1G 2019-09-19 20:03
>   copperbowl/ROOT/r359249b@2019-09-24-20:45:59-0 -      -            7.7G 2019-09-24 20:45
>   copperbowl/ROOT/r359249b@2020-01-09-17:05:57-0 -      -            7.7G 2020-01-09 17:05
>   copperbowl/ROOT/r359249b@2020-01-11-14:15:47   -      -            7.4G 2020-01-11 14:15
>   copperbowl/ROOT/r359249b@2020-03-17-21:57:17   -      -           12.0G 2020-03-17 21:57
>   copperbowl/ROOT/r359249b                       NR     /            59.0G 2020-03-28 01:19
> $
> 


You can simply try to destroy the snapshot. If it is the origin of a clone
(i.e. another boot environment was created from it), you will get an error
telling you that the dependent BE would have to be destroyed first, and you
might then decide to keep that snapshot. As long as you don't pass the -R
flag to 'zfs destroy dataset@snapshot', it will not destroy the clones.
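For example (only a sketch, using dataset names from your listing; the exact
output will differ):

$ zfs list -r -o name,origin copperbowl/ROOT
  # shows which snapshot, if any, each boot environment dataset was cloned from
$ zfs destroy -nv copperbowl/ROOT/r359249b@2019-08-18-04:04:53
  # -n = dry run, -v = verbose: reports what would be destroyed
  # without actually destroying anything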

You can also use 'zfs promote' to turn the clone into the parent, which makes
the original parent into a clone. That lets you destroy the original dataset
and the snapshot while keeping the (now promoted) clone.
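A minimal sketch with hypothetical BE names (oldBE is the original, newBE the
clone you want to keep; neither exists on your system):

$ zfs promote copperbowl/ROOT/newBE
  # newBE becomes the parent; the snapshot it was cloned from migrates to
  # newBE, and oldBE becomes a clone of that snapshot
$ zfs destroy -r copperbowl/ROOT/oldBE
  # the former parent (and any snapshots it still holds) can now go;
  # -r covers its own snapshots, and without -R no clones are touched

Once nothing else depends on it, the migrated snapshot under newBE can then be
destroyed as well.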


-- 
Allan Jude
