From bcr at FreeBSD.org Thu Apr 17 20:07:00 2014
From: bcr at FreeBSD.org (Benedict Reuschling)
Date: Thu, 17 Apr 2014 20:06:59 +0000 (UTC)
Subject: svn commit: r44599 -
projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs
Message-ID: <201404172006.s3HK6x1N023670@svn.freebsd.org>
Author: bcr
Date: Thu Apr 17 20:06:59 2014
New Revision: 44599
URL: http://svnweb.freebsd.org/changeset/doc/44599
Log:
Update and expand the sections on ZFS snapshots and clones.
It describes:
- what they are, what they can do and how they can be helpful,
- how to create them
- how to compare snapshots using zfs diff
- how to do rollbacks
- the .zfs directory and how to control its visibility using the ZFS property
- promoting clones to real datasets and what the origin property shows
A bunch of examples are also added to follow along with the descriptions.
Modified:
projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml
Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml
==============================================================================
--- projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml Thu Apr 17 18:24:40 2014 (r44598)
+++ projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml Thu Apr 17 20:06:59 2014 (r44599)
@@ -1246,19 +1246,18 @@ Filesystem Size Used Avail Cap
Renaming a Dataset
- The name of a dataset can be changed with
- zfs rename. rename can
- also be used to change the parent of a dataset. Renaming a
- dataset to be under a different parent dataset will change the
- value of those properties that are inherited by the child
- dataset. When a dataset is renamed, it is unmounted and then
- remounted in the new location (inherited from the parent
- dataset). This behavior can be prevented with
- . Due to the nature of snapshots, they
- cannot be renamed outside of the parent dataset. To rename a
- recursive snapshot, specify , and all
- snapshots with the same specified snapshot will be
- renamed.
+ The name of a dataset can be changed with zfs
+ rename. rename can also be
+ used to change the parent of a dataset. Renaming a dataset to
+ be under a different parent dataset will change the value of
+ those properties that are inherited by the child dataset.
+ When a dataset is renamed, it is unmounted and then remounted
+ in the new location (inherited from the parent dataset). This
+ behavior can be prevented with -u. Due to
+ the nature of snapshots, they cannot be renamed outside of the
+ parent dataset. To rename a recursive snapshot, specify
+ -r, and all snapshots with the same name on
+ child datasets will be renamed.
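+
+ For example, a dataset could be renamed under a different
+ parent, and a snapshot renamed recursively on all child
+ datasets, like this (the dataset and snapshot names are only
+ illustrative):
+
+ &prompt.root; zfs rename bigpool/work/joe bigpool/staff/joe
+&prompt.root; zfs rename -r bigpool/staff/joe@backup bigpool/staff/joe@backup_old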
@@ -1309,36 +1308,350 @@ tank custom:costcenter -
Snapshots are one
of the most powerful features of ZFS. A
- snapshot provides a point-in-time copy of the dataset. The
- parent dataset can be easily rolled back to that snapshot
- state. Create a snapshot with zfs snapshot
- dataset@snapshotname.
- Adding creates a snapshot recursively,
- with the same name on all child datasets.
-
- Snapshots are mounted in a hidden directory
- under the parent dataset: .zfs/snapshots/snapshotname.
- Individual files can easily be restored to a previous state by
- copying them from the snapshot back to the parent dataset. It
- is also possible to revert the entire dataset back to the
- point-in-time of the snapshot using
- zfs rollback.
-
- Snapshots consume space based on how much the parent file
- system has changed since the time of the snapshot. The
- written property of a snapshot tracks how
- much space is being used by the snapshot.
-
- Snapshots are destroyed and the space reclaimed with
- zfs destroy
- dataset@snapshot.
- Adding recursively removes all
- snapshots with the same name under the parent dataset. Adding
- to the command
- displays a list of the snapshots that would be deleted and
- an estimate of how much space would be reclaimed without
- performing the actual destroy operation.
+ snapshot provides a read-only, point-in-time copy of the
+ dataset. Due to ZFS' Copy-On-Write (COW) design,
+ snapshots can be created quickly, simply by preserving the
+ older version of the data on disk. When no snapshot exists,
+ ZFS simply reclaims the space for future use.
+ Snapshots preserve disk space by recording only the
+ differences between snapshots. ZFS allows
+ snapshots only on whole datasets, not on individual files or
+ directories. When a snapshot is created from a dataset,
+ everything contained in it, including the filesystem
+ properties, files, directories, and permissions, is
+ duplicated.
+
+ Snapshots offer capabilities that other filesystems
+ with snapshot functionality lack. A typical example
+ for snapshots is to have a quick way of backing up the current
+ state of the filesystem when a risky action like a software
+ installation or a system upgrade is performed. If the
+ action fails, the snapshot can be rolled back and the system
+ returns to the same state as when the snapshot was created. If the
+ upgrade was successful, the snapshot can be deleted to free up
+ space. Without snapshots, a failed upgrade often requires a
+ restore from backup, which is tedious, time consuming and
+ may require downtime during which the system cannot be used
+ normally. Snapshots can be rolled back quickly, even while the
+ system is running in normal operation, with little or
+ no downtime. The time savings are enormous for
+ multi-terabyte storage systems, considering the time required
+ to copy the data from backup. Snapshots are not a replacement
+ for a complete backup of a pool, but offer a quick and easy
+ way to store a copy of the dataset at a specific point in
+ time.
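+
+ The upgrade scenario described above can be sketched with
+ the example dataset used later in this chapter (the snapshot
+ name is only illustrative). A snapshot is taken before the
+ risky action and rolled back if the action fails:
+
+ &prompt.root; zfs snapshot bigpool/work/joe@pre_upgrade
+&prompt.root; zfs rollback bigpool/work/joe@pre_upgrade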
+
+
+ Creating Snapshots
+
+ Create a snapshot with zfs snapshot
+ dataset@snapshotname.
+ Adding -r creates a snapshot recursively,
+ with the same name on all child datasets. The following
+ example creates a snapshot of a home directory:
+
+ &prompt.root; zfs snapshot bigpool/work/joe@backup
+&prompt.root; zfs list -t snapshot
+NAME USED AVAIL REFER MOUNTPOINT
+bigpool/work/joe@backup 0 - 85.5K -
+
+ Snapshots are not listed by a normal zfs
+ list operation. In order to list the snapshot
+ that was just created, the option -t
+ snapshot has to be appended to zfs
+ list. The output indicates that
+ snapshots cannot be mounted directly into the system, as
+ there is no path shown in the MOUNTPOINT
+ column. Additionally, there is no mention of available disk
+ space in the AVAIL column, as snapshots
+ cannot be written to after they are created. It becomes
+ clearer when comparing the snapshot with the original dataset
+ from which it was created:
+
+ &prompt.root; zfs list -rt all bigpool/work/joe
+NAME USED AVAIL REFER MOUNTPOINT
+bigpool/work/joe 85.5K 1.29G 85.5K /usr/home/joe
+bigpool/work/joe@backup 0 - 85.5K -
+
+ Displaying both the dataset and the snapshot in one
+ output using zfs list -rt all reveals how
+ snapshots work in COW fashion. They save only the changes
+ (delta) that were made and not the whole filesystem contents
+ all over again. This means that snapshots take up
+ little space when few changes were made in
+ the meantime. This becomes more apparent when creating a
+ second snapshot after making a change, like copying a file to
+ the dataset after the first snapshot was taken.
+
+ &prompt.root; cp /etc/passwd bigpool/work/joe
+&prompt.root; zfs snapshot bigpool/work/joe@after_cp
+&prompt.root; zfs list -rt all bigpool/work/joe
+NAME USED AVAIL REFER MOUNTPOINT
+bigpool/work/joe 115K 1.29G 88K /usr/home/joe
+bigpool/work/joe@backup 27K - 85.5K -
+bigpool/work/joe@after_cp 0 - 88K -
+
+ The second snapshot contains only the changes on the
+ dataset after the copy operation. This yields enormous
+ space savings. Note that the snapshot
+ bigpool/work/joe@backup
+ also changed in the output of the USED
+ column to indicate the changes between itself and the
+ snapshot taken afterwards.
+
+
+
+ Comparing Snapshots
+
+ ZFS provides a built-in command to compare the
+ differences in content between two snapshots. This is
+ helpful when many snapshots were taken over time and the
+ user wants to see how the filesystem has changed.
+ For example, a user can determine the latest snapshot
+ that still contains a file that was accidentally deleted
+ using zfs diff. Doing this for the two
+ snapshots that were created in the previous section yields
+ the following output:
+
+ &prompt.root; zfs list -rt all bigpool/work/joe
+NAME USED AVAIL REFER MOUNTPOINT
+bigpool/work/joe 115K 1.29G 88K /usr/home/joe
+bigpool/work/joe@backup 27K - 85.5K -
+bigpool/work/joe@after_cp 0 - 88K -
+&prompt.root; zfs diff bigpool/work/joe@backup
+M /usr/home/joe/
++ /usr/home/joe/passwd
+
+ The command lists the changes between the most recent
+ snapshot (in this case
+ bigpool/work/joe@after_cp)
+ and the one provided as a parameter to zfs
+ diff. The first column indicates the type of
+ change according to the following table:
+
+
+
+
+
+ +
+ The path or file was added.
+
+
+
+ -
+ The path or file was deleted.
+
+
+
+ M
+ The path or file was modified.
+
+
+
+ R
+ The path or file was renamed.
+
+
+
+
+
+ By comparing the output with the table, it becomes clear
+ that passwd
+ was added after the snapshot
+ bigpool/work/joe@backup
+ was created. This resulted also in a modification of the
+ parent dataset mounted at
+ /usr/home/joe
+ because, among other things, the directory listing would now
+ include the new file.
+
+ Comparing the contents of two snapshots is helpful when
+ using ZFS' replication feature to transfer a dataset to a
+ different host for backup purposes. A backup administrator
+ can compare the two snapshots just received from the
+ sending host and determine the actual changes in the
+ dataset (provided the dataset is not encrypted). See
+ the Replication section
+ for more information.
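+
+ zfs diff can also compare two snapshots
+ directly when both are given, instead of comparing a
+ snapshot against the live dataset. Using the snapshots from
+ this section, the same change is reported:
+
+ &prompt.root; zfs diff bigpool/work/joe@backup bigpool/work/joe@after_cp
+M /usr/home/joe/
++ /usr/home/joe/passwd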
+
+
+
+ Snapshot Rollback
+
+ Once at least one snapshot is available, it can be
+ rolled back to at any time. Most often, this is the
+ case when the current state of the dataset is no longer
+ required and an older version is preferred. Scenarios such
+ as local development tests gone wrong, botched system
+ updates hampering the system's overall functionality, or the
+ need to restore accidentally deleted files or
+ directories are all too common occurrences. Luckily, rolling
+ back a snapshot is just as easy as typing zfs
+ rollback
+ snapshotname.
+ Depending on how many changes are involved, the operation
+ will finish in a certain amount of time. During that time,
+ the dataset always remains in a consistent state, much like
+ a database that conforms to ACID principles would during a
+ rollback. This happens while the dataset is live and
+ accessible, without requiring downtime. Once the snapshot
+ has been rolled back, the dataset has the same state as it
+ had when the snapshot was originally taken. All other data
+ in that dataset that was not part of the snapshot is
+ discarded. Taking a snapshot of the current state of the
+ dataset before rolling back to a previous one is a good idea
+ when some data is required later. This way, the user can
+ roll back and forth between snapshots without losing data
+ that is still valuable.
+
+ In the first example, a snapshot is rolled back because
+ of a careless rm operation that removes
+ more data than was intended.
+
+ &prompt.root; zfs list -rt all bigpool/work/joe
+NAME USED AVAIL REFER MOUNTPOINT
+bigpool/work/joe 115K 1.29G 88K /usr/home/joe
+bigpool/work/joe@santa 27K - 85.5K -
+bigpool/work/joe@summerplan 0 - 88K -
+&prompt.user; ls
+santaletter.txt summerholiday.txt
+&prompt.user; rm s*
+&prompt.user; ls
+&prompt.user;
+
+ At this point, the user realizes that too many files
+ were deleted and wants them back. ZFS provides an easy way
+ to get them back using rollbacks, but only when snapshots of
+ important data are performed on a regular basis. To get the
+ files back and start over from the last snapshot, issue the
+ following command:
+
+ &prompt.root; zfs rollback bigpool/work/joe@summerplan
+&prompt.user; ls
+santaletter.txt summerholiday.txt
+
+ The rollback operation restored the dataset to the state
+ of the last snapshot. It is also possible to roll back to a
+ snapshot that was taken much earlier and has other snapshots
+ following after it. When trying to do this, ZFS will issue
+ the following warning:
+
+ &prompt.root; zfs list -t snapshot
+NAME USED AVAIL REFER MOUNTPOINT
+bigpool/work/joe@santa 27K - 85.5K -
+bigpool/work/joe@summerplan 0 - 88K -
+&prompt.root; zfs rollback bigpool/work/joe@santa
+cannot rollback to 'bigpool/work/joe@santa': more recent snapshots exist
+use '-r' to force deletion of the following snapshots:
+bigpool/work/joe@summerplan
+
+ This warning means that when snapshots exist between the
+ current state of the dataset and the snapshot the user wants
+ to roll back to, these snapshots must be deleted. This is
+ because ZFS cannot track all the changes between different
+ states of the dataset in time, since snapshots are read-only.
+ As a precaution, ZFS will not delete the affected snapshots,
+ but requires the -r parameter to confirm
+ that this is the desired action. If that is the intention,
+ and the consequences of losing all intermediate snapshots
+ are understood, the command can be issued as follows:
+
+ &prompt.root; zfs rollback -r bigpool/work/joe@santa
+&prompt.root; zfs list -t snapshot
+NAME USED AVAIL REFER MOUNTPOINT
+bigpool/work/joe@santa 27K - 85.5K -
+&prompt.user; ls
+santaletter.txt
+
+ The output from zfs list -t snapshot
+ confirms that the snapshot
+ bigpool/work/joe@summerplan
+ was removed as a result of zfs rollback
+ -r.
+
+
+
+ Restoring Individual Files from Snapshots
+
+ Snapshots are mounted in a hidden directory under the
+ parent dataset: .zfs/snapshot/snapshotname.
+ By default, these directories will not be displayed even
+ when a standard ls -a is issued.
+ Although the directory is not displayed, it is there
+ nevertheless and can be accessed like any normal directory.
+ ZFS maintains a property named snapdir
+ that controls whether these hidden directories show up in a
+ directory listing. Setting the property to
+ visible makes them show up in the
+ output of ls and any other commands that
+ deal with directory contents.
+
+ &prompt.root; zfs get snapdir bigpool/work/joe
+NAME PROPERTY VALUE SOURCE
+bigpool/work/joe snapdir hidden default
+&prompt.user; ls -a
+. santaletter.txt
+.. summerholiday.txt
+&prompt.root; zfs set snapdir=visible bigpool/work/joe
+&prompt.user; ls -a
+. .zfs santaletter.txt
+.. summerholiday.txt
+
+ Individual files can easily be restored to a previous
+ state by copying them from the snapshot back to the parent
+ dataset. The directory structure below .zfs/snapshot has directories
+ named exactly like the snapshots taken earlier, making it
+ easier to identify them. In the following example, it is
+ assumed that a file should be restored from the hidden
+ .zfs directory by
+ copying it from the snapshot that contained the latest
+ version of the file:
+
+ &prompt.root; ls .zfs/snapshot
+santa summerplan
+&prompt.root; ls .zfs/snapshot/summerplan
+summerholiday.txt
+ &prompt.root; cp .zfs/snapshot/summerplan/summerholiday.txt /bigpool/work/joe
+
+ Note that even when the property
+ snapdir is set to hidden, running
+ ls .zfs/snapshot still lists
+ the contents of that directory. It is up to the
+ administrator to decide whether these directories should be
+ displayed. It is possible to display them for certain
+ datasets and prevent it for others. Copying files or
+ directories from this hidden .zfs/snapshot
+ directory is simple enough. Trying it the other way around
+ results in the following error:
+
+ &prompt.root; cp /etc/rc.conf .zfs/snapshot/santa/
+cp: .zfs/snapshot/santa/rc.conf: Read-only file system
+
+ This error reminds the user that snapshots are read-only
+ and can not be changed after they have been created. No
+ files can be copied into or removed from snapshot
+ directories because that would change the state of the
+ dataset they represent.
+
+ Snapshots consume space based on how much the parent
+ file system has changed since the time of the snapshot. The
+ written property of a snapshot tracks how
+ much space is being used by the snapshot.
+
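+ For example, the written property of the
+ snapshot created earlier can be queried with zfs
+ get (the value shown is illustrative):
+
+ &prompt.root; zfs get written bigpool/work/joe@backup
+NAME PROPERTY VALUE SOURCE
+bigpool/work/joe@backup written 27K -
+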
+ Snapshots are destroyed and the space reclaimed with
+ zfs destroy
+ dataset@snapshot.
+ Adding -r recursively removes all snapshots
+ with the same name under the parent dataset. Adding
+ -n -v to the command displays a list of the
+ snapshots that would be deleted and an estimate of how much
+ space would be reclaimed without performing the actual
+ destroy operation.
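+
+ For example, a dry run of destroying the snapshot created
+ earlier would look like this (the reclaimed space shown is
+ illustrative):
+
+ &prompt.root; zfs destroy -n -v bigpool/work/joe@backup
+would destroy bigpool/work/joe@backup
+would reclaim 27K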
+
@@ -1347,14 +1660,98 @@ tank custom:costcenter -
A clone is a copy of a snapshot that is treated more like
a regular dataset. Unlike a snapshot, a clone is not read
only, is mounted, and can have its own properties. Once a
- clone has been created, the snapshot it was created from
- cannot be destroyed. The child/parent relationship between
- the clone and the snapshot can be reversed using
- zfs promote. After a clone has been
- promoted, the snapshot becomes a child of the clone, rather
- than of the original parent dataset. This will change how the
- space is accounted, but not actually change the amount of
- space consumed.
+ clone has been created using zfs clone, the
+ snapshot it was created from cannot be destroyed. The
+ child/parent relationship between the clone and the snapshot
+ can be reversed using zfs promote. After a
+ clone has been promoted, the snapshot becomes a child of the
+ clone, rather than of the original parent dataset. This will
+ change how the space is accounted, but not actually change the
+ amount of space consumed. The clone can be mounted at any
+ point within the ZFS filesystem hierarchy, not just below the
+ original location of the snapshot.
+
+ To demonstrate the clone feature, the following example
+ dataset is used:
+
+ &prompt.root; zfs list -rt all camino/home/joe
+NAME USED AVAIL REFER MOUNTPOINT
+camino/home/joe 108K 1.3G 87K /usr/home/joe
+camino/home/joe@plans 21K - 85.5K -
+camino/home/joe@backup 0K - 87K -
+
+ A typical use case for clones is to experiment with a
+ specific dataset while keeping the snapshot around to fall
+ back to in case something goes wrong. Since snapshots cannot
+ be changed, a clone of a snapshot is created to perform the
+ changes in. Once the desired result is achieved, the old
+ filesystem can be removed after promoting the clone to a
+ dataset to replace it. This is not strictly necessary, as the
+ clone and dataset can coexist side by side
+ without causing problems.
+
+ &prompt.root; zfs clone camino/home/joe@backup camino/home/joenew
+&prompt.root; ls /usr/home/joe*
+/usr/home/joe:
+backup.txz plans.txt
+
+/usr/home/joenew:
+backup.txz plans.txt
+&prompt.root; df -h /usr/home
+Filesystem Size Used Avail Capacity Mounted on
+usr/home/joe 1.3G 31k 1.3G 0% /usr/home/joe
+usr/home/joenew 1.3G 31k 1.3G 0% /usr/home/joenew
+
+ After a clone is created it is an exact copy of the state
+ the dataset was in when the snapshot was taken. The clone can
+ now be changed independently from its originating dataset.
+ The only connection between the two is the snapshot. ZFS
+ records this connection in the property
+ origin. Once the dependency between the
+ snapshot and the clone has been removed by promoting the clone
+ using zfs promote, the
+ origin of the clone is removed as it is now
+ an independent dataset. The following example demonstrates
+ this:
+
+ &prompt.root; zfs get origin camino/home/joenew
+NAME PROPERTY VALUE SOURCE
+camino/home/joenew origin camino/home/joe@backup -
+&prompt.root; zfs promote camino/home/joenew
+&prompt.root; zfs get origin camino/home/joenew
+NAME PROPERTY VALUE SOURCE
+camino/home/joenew origin - -
+
+ After making some changes, like copying
+ loader.conf to the promoted clone, the
+ old dataset becomes obsolete in this case.
+ Instead, the promoted clone should replace it. This can be
+ achieved by two consecutive commands: zfs
+ destroy on the old dataset and zfs
+ rename on the clone to name it like the old
+ dataset (it could also get an entirely different name).
+
+ &prompt.root; cp /boot/defaults/loader.conf /usr/home/joenew
+&prompt.root; zfs destroy -f camino/home/joe
+&prompt.root; zfs rename camino/home/joenew camino/home/joe
+&prompt.root; ls /usr/home/joe
+backup.txz loader.conf plans.txt
+&prompt.root; df -h /usr/home
+Filesystem Size Used Avail Capacity Mounted on
+usr/home/joe 1.3G 128k 1.3G 0% /usr/home/joe
+
+ The cloned snapshot is now handled by ZFS like an ordinary
+ dataset. It contains all the data from the original snapshot
+ plus the files that were added to it like
+ loader.conf. Clones can be used in
+ different scenarios to provide useful features to ZFS users.
+ For example, jails could be provided as snapshots containing
+ different sets of installed applications. Users can clone
+ these snapshots and add their own applications as they see
+ fit. Once they are satisfied with the changes, the clones can
+ be promoted to full datasets and provided to end users to work
+ with as they would with a real dataset. This saves time and
+ administrative overhead when providing these jails.
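+
+ A sketch of this jail workflow, with hypothetical dataset
+ names, might look like this:
+
+ &prompt.root; zfs snapshot camino/jails/template@base
+&prompt.root; zfs clone camino/jails/template@base camino/jails/www
+&prompt.root; zfs promote camino/jails/www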
@@ -2459,7 +2856,8 @@ vfs.zfs.vdev.cache.size="5M"ZFS.
-
+
+