ZFS Handbook Update

Allan Jude freebsd at allanjude.com
Tue Nov 5 05:14:35 UTC 2013


On 2013-11-05 00:08, Allan Jude wrote:
> Attached find ~320 new lines and 87 modified lines of the ZFS chapter of
> the FreeBSD Handbook that I wrote on the plane to and from the FreeBSD
> 20th Anniversary Party.
>
> Note: this is for, and is a patch against, the projects/zfsupdate-201307
> branch.
>
After talking with wblock on IRC, here is a version of the above patch
with all my nasty whitespace changes included (since this is an
untranslated project branch).

-- 
Allan Jude

-------------- next part --------------
Index: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml
===================================================================
--- projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml	(revision 43100)
+++ projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml	(working copy)
@@ -22,22 +22,35 @@
 	<surname>Reuschling</surname>
 	<contrib>Written by </contrib>
       </author>
+      <author>
+	<firstname>Warren</firstname>
+	<surname>Block</surname>
+	<contrib>Written by </contrib>
+      </author>
     </authorgroup>
   </chapterinfo>
 
   <title>The Z File System (<acronym>ZFS</acronym>)</title>
 
   <para>The <emphasis>Z File System</emphasis>
-    (<acronym>ZFS</acronym>) was developed at &sun; to address many of
-    the problems with current file systems.  There were three major
-    design goals:</para>
+    (<acronym>ZFS</acronym>) was originally developed at &sun; to
+    address many of the problems with then-current file systems.
+    Development has since moved to the OpenZFS Project.  For more on
+    past and future development, see
+    <link linkend="zfs-history">the history of
+    <acronym>ZFS</acronym></link>.  The three major design goals of
+    <acronym>ZFS</acronym> are:</para>
 
   <itemizedlist>
     <listitem>
-      <para>Data integrity: checksums are created when data is written
-	and checked when data is read.  If on-disk data corruption is
-	detected, the user is notified and recovery methods are
-	initiated.</para>
+      <para>Data integrity: All data that is stored on
+	<acronym>ZFS</acronym> includes a <link
+	linkend="zfs-term-checksum">checksum</link> of the data.  When
+	data is written, the checksum is calculated and written along
+	with the data.  When that data is later read back, the
+	checksum is calculated again, and if the values do not match,
+	an error is returned.  <acronym>ZFS</acronym> will attempt to
+	automatically correct the error if there is sufficient
+	redundancy available.</para>
     </listitem>
 
     <listitem>
@@ -48,7 +61,13 @@
     </listitem>
 
     <listitem>
-      <para>Performance:</para>
+      <para>Performance: <acronym>ZFS</acronym> features a number of
+	optional caching mechanisms to provide increased performance.
+	In addition to an advanced read cache known as the <link
+	linkend="zfs-term-arc">ARC</link> in memory, there is also the
+	optional <link linkend="zfs-term-l2arc">L2ARC</link> read
+	cache and the <link linkend="zfs-term-zil">ZIL</link>
+	synchronous write cache.</para>
     </listitem>
   </itemizedlist>
 
@@ -243,8 +262,8 @@
 	method of avoiding data loss from disk failure is to
 	implement <acronym>RAID</acronym>.  <acronym>ZFS</acronym>
 	supports this feature in its pool design.
-	<acronym>RAID-Z</acronym> pools require three or more disks but
-	yield more usable space than mirrored pools.</para>
+	<acronym>RAID-Z</acronym> pools require three or more disks
+	but yield more usable space than mirrored pools.</para>
 
       <para>To create a <acronym>RAID-Z</acronym> pool, use this
 	command, specifying the disks to add to the
@@ -345,7 +364,8 @@
 
       <para>This completes the <acronym>RAID-Z</acronym>
 	configuration.  Daily status updates about the file systems
-	created can be generated as part of the nightly &man.periodic.8; runs:</para>
+	created can be generated as part of the nightly
+	&man.periodic.8; runs:</para>
 
       <screen>&prompt.root; <userinput>echo 'daily_status_zfs_enable="YES"' >> /etc/periodic.conf</userinput></screen>
     </sect2>
@@ -360,13 +380,15 @@
 
       <screen>&prompt.root; <userinput>zpool status -x</userinput></screen>
 
-      <para>If all pools are healthy and everything is normal, the
-	message indicates that:</para>
+      <para>If all pools are <link
+	  linkend="zfs-term-online">Online</link> and everything is
+	normal, the message indicates that:</para>
 
       <screen>all pools are healthy</screen>
 
-      <para>If there is an issue, perhaps a disk has gone offline,
-	the pool state will look similar to:</para>
+      <para>If there is an issue, perhaps a disk is in the <link
+	  linkend="zfs-term-offline">Offline</link> state, the pool
+	state will look similar to:</para>
 
       <screen>  pool: storage
  state: DEGRADED
@@ -393,10 +415,9 @@
 
       <screen>&prompt.root; <userinput>zpool offline storage da1</userinput></screen>
 
-      <para>Now the system can be powered down to replace <devicename>da1</devicename>.
-	When the system is
-	back online, the failed disk can replaced
-	in the pool:</para>
+      <para>Now the system can be powered down to replace
+	<devicename>da1</devicename>.  When the system is back online,
+	the failed disk can be replaced in the pool:</para>
 
       <screen>&prompt.root; <userinput>zpool replace storage da1</userinput></screen>
 
@@ -418,8 +439,8 @@
 	    da2     ONLINE       0     0     0
 
 errors: No known data errors</screen>
-      <para>In this example, everything is
-	normal.</para>
+
+      <para>In this example, everything is normal.</para>
     </sect2>
 
     <sect2>
@@ -441,8 +462,9 @@
 
       <screen>&prompt.root; <userinput>zpool scrub storage</userinput></screen>
 
-      <para>The duration of a scrub depends on the
-	amount of data stored.  Large amounts of data can take a considerable amount of time to verify.  It is also very <acronym>I/O</acronym>
+      <para>The duration of a scrub depends on the amount of data
+	stored.  Large amounts of data can take a considerable amount
+	of time to verify.  It is also very <acronym>I/O</acronym>
 	intensive, so much so that only one scrub may be run at any
 	given time.  After the scrub has completed, the status is
 	updated and may be viewed with a status request:</para>
@@ -474,7 +496,14 @@
   <sect1 id="zfs-zpool">
     <title><command>zpool</command> Administration</title>
 
-    <para></para>
+    <para>The administration of ZFS is divided between two main
+      utilities: the <command>zpool</command> utility, which controls
+      the operation of the pool and deals with adding, removing,
+      replacing, and managing disks, and the <link
+      linkend="zfs-zfs"><command>zfs</command></link> utility, which
+      deals with creating, destroying, and managing datasets (both
+      <link linkend="zfs-term-filesystem">filesystems</link> and <link
+	linkend="zfs-term-volume">volumes</link>).</para>
 
     <sect2 id="zfs-zpool-create">
       <title>Creating & Destroying Storage Pools</title>
@@ -490,33 +519,40 @@
 	vdev types do not allow additional disks to be added to the
 	vdev.  The exceptions are mirrors, which allow additional
 	disks to be added to the vdev, and stripes, which can be
-	upgraded to mirrors by attaching an additional disk to the vdev.
-	Although additional vdevs can be added to a pool, the layout
-	of the pool cannot be changed once the pool has been created,
-	instead the data must be backed up and the pool
+	upgraded to mirrors by attaching an additional disk to the
+	vdev.  Although additional vdevs can be added to a pool, the
+	layout of the pool cannot be changed once the pool has been
+	created, instead the data must be backed up and the pool
 	recreated.</para>
 
-      <para></para>
+      <para>A ZFS pool that is no longer needed can be destroyed so
+	that the disks making up the pool can be reused in another
+	pool or for other purposes.  Destroying a pool involves
+	unmounting all of the datasets in that pool.  If the datasets
+	are in use, the unmount operation will fail and the pool will
+	not be destroyed.  The destruction of the pool can be forced
+	with the <option>-f</option> parameter, but this can cause
+	undefined behavior in the applications which had open files on
+	those datasets.</para>
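+
+      <para>For example, the following commands create a hypothetical
+	pool named <replaceable>mypool</replaceable> as a mirror of
+	two disks, and then destroy it again once it is no longer
+	needed:</para>
+
+      <screen>&prompt.root; <userinput>zpool create <replaceable>mypool</replaceable> mirror <replaceable>ada1</replaceable> <replaceable>ada2</replaceable></userinput>
+&prompt.root; <userinput>zpool destroy <replaceable>mypool</replaceable></userinput></screen>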
     </sect2>
 
     <sect2 id="zfs-zpool-attach">
       <title>Adding & Removing Devices</title>
 
-      <para>Adding disks to a zpool can be broken down into
-	two separate cases: attaching a disk to an
-	existing vdev with the <literal>zpool attach</literal>
-	command, or adding vdevs to the pool with the
-	<literal>zpool add</literal> command.  Only some
-	<link linkend="zfs-term-vdev">vdev types</link> allow disks to
-	be added to the vdev after creation.</para>
+      <para>Adding disks to a zpool can be broken down into two
+	separate cases: attaching a disk to an existing vdev with
+	<command>zpool attach</command>, or adding vdevs to the pool
+	with <command>zpool add</command>.  Only some <link
+	linkend="zfs-term-vdev">vdev types</link> allow disks to be
+	added to the vdev after creation.</para>
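+
+      <para>As an illustration, assuming an existing pool named
+	<replaceable>mypool</replaceable>, a single-disk stripe could
+	be upgraded to a mirror by attaching a second disk, or an
+	additional mirror vdev could be added to grow the pool:</para>
+
+      <screen>&prompt.root; <userinput>zpool attach <replaceable>mypool</replaceable> <replaceable>ada1</replaceable> <replaceable>ada2</replaceable></userinput>
+&prompt.root; <userinput>zpool add <replaceable>mypool</replaceable> mirror <replaceable>ada3</replaceable> <replaceable>ada4</replaceable></userinput></screen>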
 
       <para>When adding disks to the existing vdev is not
 	an option, as in the case of RAID-Z, the other option is
 	to add a vdev to the pool.  It is possible, but
 	discouraged, to mix vdev types.  ZFS stripes data across each
-	of the vdevs.  For example, if there are two mirror vdevs, then
-	this is effectively a RAID 10, striping the writes across the
-	two sets of mirrors.  Because of the way that space is
+	of the vdevs.  For example, if there are two mirror vdevs,
+	then this is effectively a RAID 10, striping the writes across
+	the two sets of mirrors.  Because of the way that space is
 	allocated in ZFS to attempt to have each vdev reach
 	100% full at the same time, there is a performance penalty if
 	the vdevs have different amounts of free space.</para>
@@ -525,52 +561,63 @@
 	can only be removed from a mirror if there is enough remaining
 	redundancy.</para>
 
-      <para>Creating a ZFS Storage Pool (<acronym>zpool</acronym>)
-	involves making a number of decisions that are relatively
-	permanent.  Although additional vdevs can be added to a pool,
-	the layout of the pool cannot be changed once the pool has
-	been created, instead the data must be backed up and the pool
-	recreated.  Currently, devices cannot be removed from a
-	zpool.</para>
     </sect2>
 
     <sect2 id="zfs-zpool-replace">
-      <title>Replacing a Working Devices</title>
+      <title>Replacing a Functioning Device</title>
 
       <para>There are a number of situations in which it may be
 	desirable to replace a disk with a different disk.  This
 	process requires connecting the new disk at the same time as
-	the disk to be replaced.  The
-	<literal>zpool replace</literal> command will copy all of the
-	data from the old disk to the new one.  After this operation
-	completes, the old disk is disconnected from the vdev.  If the
-	new disk is larger than the old disk, it may be possible to grow the zpool, using the new space.  See
-	<link linkend="zfs-zpool-online">Growing a Pool</link>.</para>
+	the disk to be replaced.  <command>zpool replace</command>
+	will copy all of the data from the old disk to the new one.
+	After this operation completes, the old disk is disconnected
+	from the vdev.  If the new disk is larger than the old disk,
+	it may be possible to grow the zpool, using the new space.
+	See <link linkend="zfs-zpool-online">Growing a
+	  Pool</link>.</para>
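+
+      <para>For example, with a hypothetical pool named
+	<replaceable>mypool</replaceable>, the old disk
+	<replaceable>ada1</replaceable> could be replaced by the new
+	disk <replaceable>ada5</replaceable> like this:</para>
+
+      <screen>&prompt.root; <userinput>zpool replace <replaceable>mypool</replaceable> <replaceable>ada1</replaceable> <replaceable>ada5</replaceable></userinput></screen>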
     </sect2>
 
     <sect2 id="zfs-zpool-resilver">
       <title>Dealing with Failed Devices</title>
 
-      <para>When a disk fails and the physical device is replaced, ZFS
-	must be told to begin the <link
+      <para>When a disk in a ZFS pool fails, the vdev that the disk
+	belongs to will enter the <link
+	linkend="zfs-term-degraded">Degraded</link> state.  In this
+	state, all of the data stored on the vdev is still available,
+	but performance may be impacted because missing data will need
+	to be calculated from the available redundancy.  To restore
+	the vdev to a fully functional state, the failed physical
+	device will need to be replaced, and ZFS must be
+	instructed to begin the <link
 	linkend="zfs-term-resilver">resilver</link> operation, where
 	data that was on the failed device will be recalculated
-	from the available redundancy and written to the new
-	device.</para>
+	from the available redundancy and written to the replacement
+	device.  Once this process has completed, the vdev will return
+	to <link linkend="zfs-term-online">Online</link> status.  If
+	the vdev does not have any redundancy, or if multiple devices
+	have failed and there is insufficient redundancy to
+	compensate, the pool will enter the <link
+	linkend="zfs-term-faulted">Faulted</link> state.  If a
+	sufficient number of devices cannot be reconnected to the
+	pool, the pool will be inoperative, and data will need to be
+	restored from backups.</para>
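+
+      <para>As a sketch, assuming a failed disk
+	<replaceable>ada1</replaceable> in a pool named
+	<replaceable>mypool</replaceable> has been swapped for a new
+	disk in the same location, the replacement and resilver can
+	be started and then monitored:</para>
+
+      <screen>&prompt.root; <userinput>zpool replace <replaceable>mypool</replaceable> <replaceable>ada1</replaceable></userinput>
+&prompt.root; <userinput>zpool status <replaceable>mypool</replaceable></userinput></screen>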
     </sect2>
 
     <sect2 id="zfs-zpool-online">
       <title>Growing a Pool</title>
 
       <para>The usable size of a redundant ZFS pool is limited by the
-	size of the smallest device in the vdev.  If each device in the vdev is replaced sequentially,
-	after the smallest device
-	has completed the replace or resilver operation, the pool
-	can grow based on the size of the new smallest device.
-	This expansion can be triggered with the
-	<literal>zpool online</literal> command with the -e flag on
-	each device.  After the expansion of each device,
-	the additional space will be available in the pool.</para>
+	size of the smallest device in the vdev.  If each device in
+	the vdev is replaced sequentially, after the smallest device
+	has completed the <link
+	linkend="zfs-zpool-replace">replace</link> or <link
+	linkend="zfs-term-resilver">resilver</link> operation, the
+	pool can grow based on the size of the new smallest device.
+	This expansion can be triggered by using <command>zpool
+	online</command> with the <option>-e</option> parameter on
+	each device.  After the expansion of each device, the
+	additional space will become available in the pool.</para>
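+
+      <para>For example, after every disk in a hypothetical pool
+	named <replaceable>mypool</replaceable> has been replaced with
+	a larger one, the expansion could be triggered on each device
+	in turn:</para>
+
+      <screen>&prompt.root; <userinput>zpool online -e <replaceable>mypool</replaceable> <replaceable>ada1</replaceable></userinput>
+&prompt.root; <userinput>zpool online -e <replaceable>mypool</replaceable> <replaceable>ada2</replaceable></userinput></screen>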
     </sect2>
 
     <sect2 id="zfs-zpool-import">
@@ -582,19 +629,20 @@
 	by other disk subsystems.  This allows pools to be imported on
 	other machines, other operating systems that support ZFS, and
 	even different hardware architectures (with some caveats, see
-	&man.zpool.8;).  When a dataset has open files, <option>-f</option> can be used to force the
-	export of a pool.
-	<option>-f</option> causes the datasets to be forcibly
-	unmounted.  This can have unexpected side effects.</para>
+	&man.zpool.8;).  When a dataset has open files,
+	<option>-f</option> can be used to force the export
+	of a pool.  <option>-f</option> causes the datasets to be
+	forcibly unmounted, which can cause undefined behavior in the
+	applications which had open files on those datasets.</para>
 
-      <para>Importing a pool automatically mounts the datasets.
-	This may not be the desired behavior, and can be prevented with <option>-N</option>.
-	<option>-o</option> sets
-	temporary properties for this import only.  <option>altroot=</option>
-	allows importing a zpool with a base
-	mount point instead of the root of the file system.  If the
-	pool was last used on a different system and was not properly
-	exported, an import might have to be forced with <option>-f</option>.
+      <para>Importing a pool automatically mounts the datasets.  This
+	may not be the desired behavior, and can be prevented with
+	<option>-N</option>.  <option>-o</option> sets temporary
+	properties for this import only.  <option>altroot=</option>
+	allows importing a zpool with a base mount point instead of
+	the root of the file system.  If the pool was last used on a
+	different system and was not properly exported, an import
+	might have to be forced with <option>-f</option>.
 	<option>-a</option> imports all pools that do not appear to be
 	in use by another system.</para>
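+
+      <para>To illustrate, a hypothetical pool named
+	<replaceable>mypool</replaceable> could be exported on one
+	system and then imported on another under a temporary mount
+	point:</para>
+
+      <screen>&prompt.root; <userinput>zpool export <replaceable>mypool</replaceable></userinput>
+&prompt.root; <userinput>zpool import -o altroot=<replaceable>/mnt</replaceable> <replaceable>mypool</replaceable></userinput></screen>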
     </sect2>
@@ -602,18 +650,18 @@
     <sect2 id="zfs-zpool-upgrade">
       <title>Upgrading a Storage Pool</title>
 
-      <para>After upgrading &os;, or if a pool has been
-	imported from a system using an older version of ZFS, the pool
-	must be manually upgraded to the latest version of ZFS.  This
-	process is unreversible.  Consider whether the pool may ever need
-	to be imported on an older system before upgrading.  An upgrade
-	cannot be undone.</para>
+      <para>After upgrading &os;, or if a pool has been imported from
+	a system using an older version of ZFS, the pool can be
+	manually upgraded to the latest version of ZFS.  Consider
+	whether the pool may ever need to be imported on an older
+	system before upgrading.  The upgrade process is irreversible
+	and cannot be undone.</para>
 
       <para>The newer features of ZFS will not be available until
-	the <literal>zpool upgrade</literal> command has completed.
-	will the newer features of ZFS be available.
-	<option>-v</option> can be used to see what new
-	features will be provided by upgrading.</para>
+	<command>zpool upgrade</command> has completed.
+	<option>-v</option> can be used to see what new features will
+	be provided by upgrading, as well as which features are
+	already supported by the existing version.</para>
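+
+      <para>For example, the features offered by an upgrade can be
+	reviewed first, and then a hypothetical pool named
+	<replaceable>mypool</replaceable> can be upgraded:</para>
+
+      <screen>&prompt.root; <userinput>zpool upgrade -v</userinput>
+&prompt.root; <userinput>zpool upgrade <replaceable>mypool</replaceable></userinput></screen>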
     </sect2>
 
     <sect2 id="zfs-zpool-status">
@@ -627,40 +675,42 @@
 
       <para>ZFS has a built-in monitoring system that can display
 	statistics about I/O happening on the pool in real-time.
-	Additionally, it shows the free and used space on the pool and
-	how much I/O bandwidth is currently utilized for read and
-	write operations.  By default, all pools in the system will be
-	monitored and displayed.  A pool name can be provided to monitor
-	just that single pool.  A basic example:</para>
+	It shows the amount of free and used space on the pool, how
+	many read and write operations are being performed per second,
+	and how much I/O bandwidth is currently being utilized for
+	read and write operations.  By default, all pools in the
+	system will be monitored and displayed.  A pool name can be
+	provided as part of the command to monitor just that specific
+	pool.  A basic example:</para>
 
-<screen>&prompt.root; <userinput>zpool iostat</userinput>
+      <screen>&prompt.root; <userinput>zpool iostat</userinput>
                capacity     operations    bandwidth
 pool        alloc   free   read  write   read  write
 ----------  -----  -----  -----  -----  -----  -----
 data         288G  1.53T      2     11  11.3K  57.1K</screen>
 
-	<para>To continuously monitor I/O activity on the pool, specify
-	  a number as the last parameter, indicating the number of seconds
-	  to wait between updates.  ZFS will print the next
-	  statistic line after each interval.  Press
-	  <keycombo
-	  action="simul"><keycap>Ctrl</keycap><keycap>C</keycap></keycombo>
-	  to stop this continuous monitoring.  Alternatively, give a second
-	  number on the command line after the
-	  interval to specify the total number of statistics to
-	  display.</para>
+      <para>To continuously monitor I/O activity on the pool, a
+	number can be specified as the last parameter, indicating
+	the interval in seconds between updates.  ZFS will
+	print the next statistic line after each interval.  Press
+	<keycombo
+	action="simul"><keycap>Ctrl</keycap><keycap>C</keycap></keycombo>
+	to stop this continuous monitoring.  Alternatively, give a
+	second number on the command line after the interval to
+	specify the total number of statistics to display.</para>
 
-	<para>Even more detailed pool I/O statistics can be
-	  displayed with <option>-v</option> parameter.
-	  Each storage device in the pool will be shown with a
-	  separate statistic line.  This is helpful to
-	  determine reads and writes on devices that slow down I/O on
-	  the whole pool.  The following example shows a
-	  mirrored pool consisting of two devices.  For each of these,
-	  a separate line is shown with the current I/O
-	  activity.</para>
+      <para>Even more detailed pool I/O statistics can be
+	displayed with <option>-v</option>.  In this case, each
+	storage device in the pool will be shown with a
+	corresponding statistics line.  This is helpful to
+	determine how many read and write operations are being
+	performed on each device, and can help determine if any
+	specific device is slowing down I/O on the entire pool.  The
+	following example shows a mirrored pool consisting of two
+	devices.  For each of these, a separate line is shown with
+	the current I/O activity.</para>
 
-<screen>&prompt.root; <userinput>zpool iostat -v </userinput>
+      <screen>&prompt.root; <userinput>zpool iostat -v </userinput>
                             capacity     operations    bandwidth
 pool                     alloc   free   read  write   read  write
 -----------------------  -----  -----  -----  -----  -----  -----
@@ -674,25 +724,86 @@
     <sect2 id="zfs-zpool-split">
       <title>Splitting a Storage Pool</title>
 
-      <para></para>
+      <para>A ZFS pool consisting of one or more mirror vdevs can be
+	split into a second pool.  The last member of each mirror
+	(unless otherwise specified) is detached and used to create a
+	new pool containing the same data.  It is recommended that
+	the operation first be attempted with the <option>-n</option>
+	parameter.  This will print out the details of the proposed
+	operation without actually performing it.  This helps
+	ensure the operation will happen as expected.</para>
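+
+      <para>As a sketch, a mirrored pool named
+	<replaceable>mypool</replaceable> could first be split in a
+	dry run and then for real, creating a new pool named
+	<replaceable>newpool</replaceable>:</para>
+
+      <screen>&prompt.root; <userinput>zpool split -n <replaceable>mypool</replaceable> <replaceable>newpool</replaceable></userinput>
+&prompt.root; <userinput>zpool split <replaceable>mypool</replaceable> <replaceable>newpool</replaceable></userinput></screen>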
     </sect2>
   </sect1>
 
   <sect1 id="zfs-zfs">
     <title><command>zfs</command> Administration</title>
 
-    <para></para>
+    <para>The <command>zfs</command> utility is responsible for
+      creating, destroying, and managing all <acronym>ZFS</acronym>
+      datasets that exist within a pool.  The pool is managed using
+      the <link linkend="zfs-zpool"><command>zpool</command></link>
+      command.</para>
 
     <sect2 id="zfs-zfs-create">
       <title>Creating & Destroying Datasets</title>
 
-      <para></para>
+      <para>Unlike with traditional disks and volume managers, space
+	in <acronym>ZFS</acronym> is not preallocated, allowing
+	additional file systems to be created at any time.  With
+	traditional file systems, once all of the space was
+	partitioned and assigned to a file system, there was no way to
+	add an additional file system without adding a new disk.
+	<acronym>ZFS</acronym> also allows you to set a number of
+	properties on each <link
+	linkend="zfs-term-dataset">dataset</link>.  These properties
+	include features like compression, deduplication, caching and
+	quotas, as well as other useful properties like readonly,
+	case sensitivity, network file sharing and mount point.  Each
+	separate dataset can be administered, <link
+	linkend="zfs-zfs-allow">delegated</link>, <link
+	linkend="zfs-zfs-send">replicated</link>, <link
+	linkend="zfs-zfs-snapshot">snapshotted</link>, <link
+	linkend="zfs-zfs-jail">jailed</link>, and destroyed as a unit.
+	These features make it advantageous to create a separate
+	dataset for each different type or set of files.  The only
+	drawback to having an extremely large number of datasets is
+	that some commands like <command>zfs list</command> will be
+	slower,
+	and the mounting of an extremely large number of datasets
+	(100s or 1000s) can make the &os; boot process take
+	longer.</para>
+
+      <para>Destroying a dataset is much quicker than deleting all
+	of the files that reside on the dataset, as it does not
+	involve scanning all of the files and updating all of the
+	corresponding metadata.  In modern versions of
+	<acronym>ZFS</acronym>, the <command>zfs destroy</command>
+	operation is asynchronous, so the free space may take several
+	minutes to appear in the pool.  The <literal>freeing</literal>
+	property, accessible with <command>zpool get freeing
+	<replaceable>poolname</replaceable></command>, indicates how
+	many datasets are having their blocks freed in the background.
+	If there are child datasets, such as <link
+	linkend="zfs-term-snapshot">snapshots</link> or other
+	datasets, then the parent cannot be destroyed.  To destroy a
+	dataset and all of its children, use the <option>-r</option>
+	parameter to recursively destroy them.  The
+	<option>-n -v</option> parameters can be used to list, without
+	actually performing the destruction, which datasets and
+	snapshots would be destroyed and, in the case of snapshots,
+	how much space would be reclaimed by proceeding with the
+	destruction.</para>
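+
+      <para>For example, a new dataset could be created on a
+	hypothetical pool named <replaceable>mypool</replaceable>, and
+	later destroyed together with its children after a dry run of
+	the destroy operation:</para>
+
+      <screen>&prompt.root; <userinput>zfs create <replaceable>mypool/usr/mydataset</replaceable></userinput>
+&prompt.root; <userinput>zfs destroy -r -n -v <replaceable>mypool/usr/mydataset</replaceable></userinput>
+&prompt.root; <userinput>zfs destroy -r <replaceable>mypool/usr/mydataset</replaceable></userinput></screen>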
     </sect2>
 
     <sect2 id="zfs-zfs-volume">
       <title>Creating & Destroying Volumes</title>
 
-      <para></para>
+      <para>A volume is a special type of <acronym>ZFS</acronym>
+	dataset.  Rather than being mounted as a file system, it is
+	exposed as a block device under
+	<devicename>/dev/zvol/<replaceable>poolname</replaceable>/<replaceable>dataset</replaceable></devicename>.
+	This allows the volume to be used for other file systems, to
+	back the disks of a virtual machine, or to be exported using
+	protocols like iSCSI or HAST.</para>
 
       <para>A volume can be formatted with any filesystem on top of
 	it.  This will appear to the user as if they are working with
@@ -714,18 +825,46 @@
 /dev/zvol/tank/fat32 249M  24k  249M     0%   /mnt
 &prompt.root; <userinput>mount | grep fat32</userinput>
 /dev/zvol/tank/fat32 on /mnt (msdosfs, local)</screen>
+
+      <para>Destroying a volume is much the same as destroying a
+	regular filesystem dataset.  The operation is nearly
+	instantaneous, but it may take several minutes for the free
+	space to be reclaimed in the background.</para>
+
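+      <para>Continuing the example above, the
+	<replaceable>tank/fat32</replaceable> volume would be
+	destroyed like any other dataset, after unmounting the file
+	system that was created on top of it:</para>
+
+      <screen>&prompt.root; <userinput>umount <replaceable>/mnt</replaceable></userinput>
+&prompt.root; <userinput>zfs destroy <replaceable>tank/fat32</replaceable></userinput></screen>
+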
     </sect2>
 
     <sect2 id="zfs-zfs-rename">
       <title>Renaming a Dataset</title>
 
-      <para></para>
+      <para>The name of a dataset can be changed using <command>zfs
+	  rename</command>.  The rename command can also be used to
+	change the parent of a dataset.  Renaming a dataset to be
+	under a different parent dataset will change the value of
+	those properties that are inherited by the child dataset.
+	When a dataset is renamed, it is unmounted and then remounted
+	in the new location (inherited from the parent dataset).  This
+	behavior can be prevented using the <option>-u</option>
+	parameter.  Due to the nature of snapshots, they cannot be
+	renamed outside of the parent dataset.  To rename snapshots
+	recursively, specify the <option>-r</option> parameter, and
+	all snapshots with the same name in child datasets will also
+	be renamed.</para>
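+
+      <para>For instance, a hypothetical dataset could be moved under
+	a different parent, and its recursive snapshots could be
+	renamed:</para>
+
+      <screen>&prompt.root; <userinput>zfs rename <replaceable>mypool/usr/mydataset</replaceable> <replaceable>mypool/var/mydataset</replaceable></userinput>
+&prompt.root; <userinput>zfs rename -r <replaceable>mypool/var/mydataset@old</replaceable> <replaceable>mypool/var/mydataset@new</replaceable></userinput></screen>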
     </sect2>
 
     <sect2 id="zfs-zfs-set">
       <title>Setting Dataset Properties</title>
 
-      <para></para>
+      <para>Each <acronym>ZFS</acronym> dataset has a number of
+	properties to control its behavior.  Most properties are
+	automatically inherited from the parent dataset, but can be
+	overridden locally.  Set a property on a dataset with
+	<command>zfs set
+	<replaceable>property</replaceable>=<replaceable>value</replaceable>
+	<replaceable>dataset</replaceable></command>.  Most properties
+	have a limited set of valid values.
+	<command>zfs get</command> will display each possible property
+	and its valid values.
+	Most properties can be reverted to their inherited values
+	using <command>zfs inherit</command>.</para>
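+
+      <para>For example, compression might be enabled on a
+	hypothetical dataset, checked, and later reverted to the
+	inherited value:</para>
+
+      <screen>&prompt.root; <userinput>zfs set compression=on <replaceable>mypool/usr/mydataset</replaceable></userinput>
+&prompt.root; <userinput>zfs get compression <replaceable>mypool/usr/mydataset</replaceable></userinput>
+&prompt.root; <userinput>zfs inherit compression <replaceable>mypool/usr/mydataset</replaceable></userinput></screen>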
 
       <para>It is possible to set user-defined properties in ZFS.
 	They become part of the dataset configuration and can be used
@@ -743,13 +882,55 @@
     <sect2 id="zfs-zfs-snapshot">
       <title>Managing Snapshots</title>
 
-      <para></para>
+      <para><link linkend="zfs-term-snapshot">Snapshots</link> are one
+	of the most powerful features of <acronym>ZFS</acronym>.  A
+	snapshot provides a point-in-time copy of the dataset to which
+	the parent can later be rolled back if required.  Create a
+	snapshot with <command>zfs snapshot
+	<replaceable>dataset</replaceable>@<replaceable>snapshotname</replaceable></command>.
+	Specifying the <option>-r</option> parameter will recursively
+	create a snapshot with the same name on all child
+	datasets.</para>
+
+      <para>By default, snapshots are mounted in a hidden directory
+	under the parent dataset: <filename
+	role="directory">.zfs/snapshot/<replaceable>snapshotname</replaceable></filename>.
+	Individual files can easily be restored to a previous state by
+	copying them from the snapshot back to the parent dataset.  It
+	is also possible to revert the entire dataset back to the
+	point-in-time of the snapshot using <command>zfs
+	  rollback</command>.</para>
+
+      <para>Snapshots consume space based on how much the parent file
+	system has changed since the time of the snapshot.  The
+	<literal>written</literal> property of a snapshot tracks how
+	much space the snapshot is using.</para>
+
+      <para>To destroy a snapshot and recover the space consumed by
+	the overwritten or deleted files, run <command>zfs destroy
+	<replaceable>dataset</replaceable>@<replaceable>snapshot</replaceable></command>.
+	The <option>-r</option> parameter will recursively remove all
+	snapshots with the same name under the parent dataset.  Adding
+	the <option>-n -v</option> parameters to the destroy command
+	will display a list of the snapshots that would be deleted and
+	an estimate of how much space would be reclaimed by proceeding
+	with the destroy operation.</para>
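+
+      <para>Putting this together, a recursive snapshot of a
+	hypothetical dataset and its children might be created, the
+	dataset later rolled back to it, and the snapshot eventually
+	destroyed after a dry run:</para>
+
+      <screen>&prompt.root; <userinput>zfs snapshot -r <replaceable>mypool/home@backup</replaceable></userinput>
+&prompt.root; <userinput>zfs rollback <replaceable>mypool/home@backup</replaceable></userinput>
+&prompt.root; <userinput>zfs destroy -r -n -v <replaceable>mypool/home@backup</replaceable></userinput></screen>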
     </sect2>
 
     <sect2 id="zfs-zfs-clones">
       <title>Managing Clones</title>
 
-      <para></para>
+      <para>A clone is a copy of a snapshot that is treated more like
+	a regular dataset.  Unlike a snapshot, a clone is writable, is
+	mounted, and can have its own properties.  Once a
+	clone has been created, the snapshot it was created from
+	cannot be destroyed.  The child/parent relationship between
+	the clone and the snapshot can be reversed using <command>zfs
+	promote</command>.  After a clone has been promoted, the
+	snapshot becomes a child of the clone, rather than of the
+	original parent dataset.  This will change how the space is
+	accounted, but not actually change the amount of space
+	consumed.</para>
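+
+      <para>As an illustration, a clone could be created from a
+	hypothetical snapshot and later promoted, making the snapshot
+	a child of the clone:</para>
+
+      <screen>&prompt.root; <userinput>zfs clone <replaceable>mypool/home@backup</replaceable> <replaceable>mypool/newhome</replaceable></userinput>
+&prompt.root; <userinput>zfs promote <replaceable>mypool/newhome</replaceable></userinput></screen>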
     </sect2>
 
     <sect2 id="zfs-zfs-send">
@@ -761,6 +942,18 @@
     <sect2 id="zfs-zfs-quota">
       <title>Dataset, User and Group Quotas</title>
 
+      <para><link linkend="zfs-term-quota">Dataset
+	  quotas</link> can be used to restrict the amount of space
+	that can be consumed by a particular dataset.  <link
+	linkend="zfs-term-refquota">Reference Quotas</link> work in
+	very much the same way, except they only count the space used
+	by the dataset itself, excluding snapshots and child
+	datasets.  Similarly, <link
+	linkend="zfs-term-userquota">user</link> and <link
+	linkend="zfs-term-groupquota">group</link> quotas can be used
+	to prevent users or groups from consuming all of the available
+	space in the pool or dataset.</para>
+
       <para>To enforce a dataset quota of 10 GB for
 	<filename>storage/home/bob</filename>, use the
 	following:</para>
@@ -861,7 +1054,13 @@
     <sect2 id="zfs-zfs-reservation">
       <title>Reservations</title>
 
-      <para></para>
+      <para><link linkend="zfs-term-reservation">Reservations</link>
+	guarantee a minimum amount of space will always be available
+	to a dataset.  The reserved space will not
+	be available to any other dataset.  This feature can be
+	especially useful to ensure that users cannot consume all of
+	the free space, leaving none for an important dataset or log
+	files.</para>
 
       <para>The general format of the <literal>reservation</literal>
 	property is
@@ -878,7 +1077,8 @@
 
       <para>The same principle can be applied to the
 	<literal>refreservation</literal> property for setting a
-	refreservation, with the general format
+	<link linkend="zfs-term-refreservation">Reference
+	Reservation</link>, with the general format
 	<literal>refreservation=<replaceable>size</replaceable></literal>.</para>
 
       <para>To check if any reservations or refreservations exist on
@@ -898,7 +1098,18 @@
     <sect2 id="zfs-zfs-deduplication">
       <title>Deduplication</title>
 
-      <para></para>
+      <para>When enabled, <link
+	  linkend="zfs-term-deduplication">Deduplication</link> uses
+	the checksum of each block to detect duplicate blocks.  When a
+	new block is about to be written and it is determined to be a
+	duplicate of an existing block, rather than writing the same
+	data again, <acronym>ZFS</acronym> just references the
+	existing data on disk an additional time.  This can offer
+	tremendous space savings if the data contains many duplicate
+	copies of the same information.  Deduplication requires an
+	extremely large amount of memory, and most of the space
+	savings can be had without the extra cost by enabling
+	compression instead.</para>
 
       <para>To activate deduplication, you simply need to set the
 	following property on the target pool.</para>
@@ -986,6 +1197,22 @@
 	thumb, compression should be used first before deduplication
 	due to the lower memory requirements.</para>
     </sect2>
+
+    <sect2 id="zfs-zfs-jail">
+      <title>ZFS and Jails</title>
+
+      <para><command>zfs jail</command> and the corresponding
+	<literal>jailed</literal> property are used to delegate a
+	<acronym>ZFS</acronym> dataset to a <link
+	linkend="jails">Jail</link>.  <command>zfs jail
+	<replaceable>jailid</replaceable></command> attaches a dataset
+	to the specified jail, and <command>zfs unjail</command>
+	detaches it.  In order for the dataset to be administered from
+	within a jail, the <literal>jailed</literal> property must be
+	set.  Once a dataset is jailed, it can no longer be mounted on
+	the host, because the jail administrator may have set
+	unacceptable mount points.</para>
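+
+      <para>For example, a hypothetical dataset could be marked as
+	jailed and then attached to the jail with ID
+	<replaceable>1</replaceable>:</para>
+
+      <screen>&prompt.root; <userinput>zfs set jailed=on <replaceable>mypool/data/jail1</replaceable></userinput>
+&prompt.root; <userinput>zfs jail <replaceable>1</replaceable> <replaceable>mypool/data/jail1</replaceable></userinput></screen>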
+    </sect2>
   </sect1>
 
   <sect1 id="zfs-zfs-allow">
@@ -1170,6 +1397,12 @@
 	    Best Practices Guide</ulink></para>
       </listitem>
     </itemizedlist>
+
+    <sect2 id="zfs-history">
+      <title>History of <acronym>ZFS</acronym></title>
+
+      <para></para>
+    </sect2>
   </sect1>
 
   <sect1 id="zfs-term">
@@ -1344,31 +1577,28 @@
 		  <para id="zfs-term-vdev-log">
 		    <emphasis>Log</emphasis> - <acronym>ZFS</acronym>
 		    Log Devices, also known as ZFS Intent Log
-		    (<acronym>ZIL</acronym>) move the intent log from
-		    the regular pool devices to a dedicated device.
-		    The <acronym>ZIL</acronym> accelerates synchronous
-		    transactions by using storage devices (such as
-		    <acronym>SSD</acronym>s) that are faster than
-		    those used for the main pool.  When data is being
-		    written and the application requests a guarantee
-		    that the data has been safely stored, the data is
-		    written to the faster <acronym>ZIL</acronym>
-		    storage, then later flushed out to the regular
-		    disks, greatly reducing the latency of synchronous
-		    writes.  Log devices can be mirrored, but
-		    <acronym>RAID-Z</acronym> is not supported.  If
-		    multiple log devices are used, writes will be load
-		    balanced across them.</para>
+		    (<link
+		    linkend="zfs-term-zil"><acronym>ZIL</acronym></link>),
+		    move the intent log from the regular pool devices
+		    to a dedicated device, typically an
+		    <acronym>SSD</acronym>.  Having a dedicated log
+		    device can significantly improve the performance
+		    of applications with a high volume of synchronous
+		    writes, especially databases.  Log devices can be
+		    mirrored, but <acronym>RAID-Z</acronym> is not
+		    supported.  If multiple log devices are used,
+		    writes will be load balanced across them.</para>
 		</listitem>
 
 		<listitem>
 		  <para id="zfs-term-vdev-cache">
 		    <emphasis>Cache</emphasis> - Adding a cache vdev
 		    to a zpool will add the storage of the cache to
-		    the <acronym>L2ARC</acronym>.  Cache devices
-		    cannot be mirrored.  Since a cache device only
-		    stores additional copies of existing data, there
-		    is no risk of data loss.</para>
+		    the <link
+		    linkend="zfs-term-l2arc"><acronym>L2ARC</acronym></link>.
+		    Cache devices cannot be mirrored.  Since a cache
+		    device only stores additional copies of existing
+		    data, there is no risk of data loss.</para>
 		</listitem>
 	      </itemizedlist></entry>
 	  </row>
@@ -1446,6 +1676,26 @@
 	  </row>
 
 	  <row>
+	    <entry
+	      id="zfs-term-zil"><acronym>ZIL</acronym></entry>
+
+	    <entry>The <acronym>ZIL</acronym> accelerates synchronous
+	      transactions by using storage devices (such as
+	      <acronym>SSD</acronym>s) that are faster than those used
+	      for the main storage pool.  When data is being written
+	      and the application requests a synchronous write (a
+	      guarantee that the data has been safely stored to disk
+	      rather than only cached to be written later), the data
+	      is written to the faster <acronym>ZIL</acronym> storage,
+	      then later flushed out to the regular disks, greatly
+	      reducing the latency and increasing performance.
+	      Only synchronous workloads, such as databases, will
+	      benefit from a <acronym>ZIL</acronym>.  Regular
+	      asynchronous writes, such as copying files, will not use
+	      the <acronym>ZIL</acronym> at all.</entry>
+	  </row>
+
+	  <row>
 	    <entry id="zfs-term-cow">Copy-On-Write</entry>
 
 	    <entry>Unlike a traditional file system, when data is
@@ -1481,12 +1731,24 @@
 	      properties on a child to override the defaults inherited
 	      from the parents and grandparents.
 	      <acronym>ZFS</acronym> also allows administration of
-	      datasets and their children to be delegated.</entry>
+	      datasets and their children to be <link
+	        linkend="zfs-zfs-allow">delegated</link>.</entry>
 	  </row>
 
 	  <row>
-	    <entry id="zfs-term-volum">Volume</entry>
+	    <entry id="zfs-term-filesystem">Filesystem</entry>
 
+	    <entry>A <acronym>ZFS</acronym> dataset is most often used
+	      as a file system.  Like most other file systems, a
+	      <acronym>ZFS</acronym> file system is mounted somewhere
+	      in the system's directory hierarchy and contains files
+	      and directories of its own with permissions, flags and
+	      other metadata.</entry>
+	  </row>
+
+	  <row>
+	    <entry id="zfs-term-volume">Volume</entry>
+
 	    <entry>In addition to regular file system datasets,
 	      <acronym>ZFS</acronym> can also create volumes, which
 	      are block devices.  Volumes have many of the same
@@ -1802,6 +2064,63 @@
 	      remaining drives) to the new drive is called
 	      <emphasis>resilvering</emphasis>.</entry>
 	  </row>
+
+	  <row>
+	    <entry id="zfs-term-online">Online</entry>
+
+	    <entry>A ZFS pool or vdev that is in the
+	      <literal>Online</literal> state has all of its member
+	      devices connected and fully operational.  Individual
+	      devices in the <literal>Online</literal> state are
+	      functioning normally.</entry>
+	  </row>
+
+	  <row>
+	    <entry id="zfs-term-offline">Offline</entry>
+
+	    <entry>Individual devices can be put in an
+	      <literal>Offline</literal> state by the administrator if
+	      there is sufficient redundancy to avoid putting the pool
+	      or vdev into a <link
+	      linkend="zfs-term-faulted">Faulted</link> state.  An
+	      administrator may choose to offline a disk in
+	      preparation for replacing it, or to make it easier to
+	      identify.</entry>
+	  </row>
+
+	  <row>
+	    <entry id="zfs-term-degraded">Degraded</entry>
+
+	    <entry>A ZFS pool or vdev that is in the
+	      <literal>Degraded</literal> state has one or more disks
+	      that have been disconnected or have failed.  The pool is
+	      still usable; however, if additional devices fail, the
+	      pool could become unrecoverable.  Reconnecting the
+	      missing device(s) or replacing the failed disks will
+	      return the pool to an <link
+	      linkend="zfs-term-online">Online</link> state after
+	      the reconnected or new device has completed the <link
+	      linkend="zfs-term-resilver">Resilver</link>
+	      process.</entry>
+	  </row>
+
+	  <row>
+	    <entry id="zfs-term-faulted">Faulted</entry>
+
+	    <entry>A ZFS pool or vdev that is in the
+	      <literal>Faulted</literal> state is no longer
+	      operational and the data residing on it can no longer
+	      be accessed.  A pool or vdev enters the
+	      <literal>Faulted</literal> state when the number of
+	      missing or failed devices exceeds the level of
+	      redundancy in the vdev.  If the missing devices can be
+	      reconnected, the pool will return to an <link
+	      linkend="zfs-term-online">Online</link> state.  If
+	      there is insufficient redundancy to compensate for the
+	      number of failed disks, then the contents of the pool
+	      are lost and will need to be restored from
+	      backups.</entry>
+	  </row>
 	</tbody>
       </tgroup>
     </informaltable>