ZFS handbook project patch

Allan Jude freebsd at allanjude.com
Thu Feb 20 07:07:31 UTC 2014


Attached is another patch to the project branch for the ZFS section of
the handbook.

It adds the missing documentation on the 'zpool status', 'zpool scrub',
and 'zpool clear' commands, and fills in the compression section (both
the 'zfs set compression' part and the expanded entry on the
terminology page).

It also adds a 'zpool status' example to the 'zpool upgrade' section, so
users know what a pool that needs to be upgraded will look like, and
adds a reminder to update the bootcode (one of the common problems
people run into, despite the fact that 'zpool upgrade' reminds
them).


It also fixes a paragraph that someone else wrote, which Warren had
pointed out made no sense.

It also adds some missing <acronym> tags, and replaces all of the
<userinput> tags that are actually commands with <command>.


Feedback welcome

-- 
Allan Jude
-------------- next part --------------
Index: zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml
===================================================================
--- zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml	(revision 44001)
+++ zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml	(working copy)
@@ -134,7 +134,7 @@
 
     <para>Then start the service:</para>
 
-    <screen>&prompt.root; <userinput>service zfs start</userinput></screen>
+    <screen>&prompt.root; <command>service zfs start</command></screen>
 
     <para>The examples in this section assume three
       <acronym>SCSI</acronym> disks with the device names
@@ -152,12 +152,12 @@
 	pool using a single disk device, use
 	<command>zpool</command>:</para>
 
-      <screen>&prompt.root; <userinput>zpool create <replaceable>example</replaceable> <replaceable>/dev/da0</replaceable></userinput></screen>
+      <screen>&prompt.root; <command>zpool create <replaceable>example</replaceable> <replaceable>/dev/da0</replaceable></command></screen>
 
       <para>To view the new pool, review the output of
 	<command>df</command>:</para>
 
-      <screen>&prompt.root; <userinput>df</userinput>
+      <screen>&prompt.root; <command>df</command>
 Filesystem  1K-blocks    Used    Avail Capacity  Mounted on
 /dev/ad0s1a   2026030  235230  1628718    13%    /
 devfs               1       1        0   100%    /dev
@@ -169,10 +169,10 @@
 	accessible as a file system.  Files may be created on it and
 	users can browse it, as seen in the following example:</para>
 
-      <screen>&prompt.root; <userinput>cd /example</userinput>
-&prompt.root; <userinput>ls</userinput>
-&prompt.root; <userinput>touch testfile</userinput>
-&prompt.root; <userinput>ls -al</userinput>
+      <screen>&prompt.root; <command>cd /example</command>
+&prompt.root; <command>ls</command>
+&prompt.root; <command>touch testfile</command>
+&prompt.root; <command>ls -al</command>
 total 4
 drwxr-xr-x   2 root  wheel    3 Aug 29 23:15 .
 drwxr-xr-x  21 root  wheel  512 Aug 29 23:12 ..
@@ -182,8 +182,8 @@
 	<acronym>ZFS</acronym> features.  To create a dataset on this
 	pool with compression enabled:</para>
 
-      <screen>&prompt.root; <userinput>zfs create example/compressed</userinput>
-&prompt.root; <userinput>zfs set compression=gzip example/compressed</userinput></screen>
+      <screen>&prompt.root; <command>zfs create example/compressed</command>
+&prompt.root; <command>zfs set compression=gzip example/compressed</command></screen>
 
       <para>The <literal>example/compressed</literal> dataset is now a
 	<acronym>ZFS</acronym> compressed file system.  Try copying
@@ -192,14 +192,14 @@
 
       <para>Compression can be disabled with:</para>
 
-      <screen>&prompt.root; <userinput>zfs set compression=off example/compressed</userinput></screen>
+      <screen>&prompt.root; <command>zfs set compression=off example/compressed</command></screen>
 
       <para>To unmount a file system, use
 	<command>zfs umount</command> and then verify by using
 	<command>df</command>:</para>
 
-      <screen>&prompt.root; <userinput>zfs umount example/compressed</userinput>
-&prompt.root; <userinput>df</userinput>
+      <screen>&prompt.root; <command>zfs umount example/compressed</command>
+&prompt.root; <command>df</command>
 Filesystem  1K-blocks    Used    Avail Capacity  Mounted on
 /dev/ad0s1a   2026030  235232  1628716    13%    /
 devfs               1       1        0   100%    /dev
@@ -210,8 +210,8 @@
 	use <command>zfs mount</command> and verify with
 	<command>df</command>:</para>
 
-      <screen>&prompt.root; <userinput>zfs mount example/compressed</userinput>
-&prompt.root; <userinput>df</userinput>
+      <screen>&prompt.root; <command>zfs mount example/compressed</command>
+&prompt.root; <command>df</command>
 Filesystem         1K-blocks    Used    Avail Capacity  Mounted on
 /dev/ad0s1a          2026030  235234  1628714    13%    /
 devfs                      1       1        0   100%    /dev
@@ -222,7 +222,7 @@
       <para>The pool and file system may also be observed by viewing
 	the output from <command>mount</command>:</para>
 
-      <screen>&prompt.root; <userinput>mount</userinput>
+      <screen>&prompt.root; <command>mount</command>
 /dev/ad0s1a on / (ufs, local)
 devfs on /dev (devfs, local)
 /dev/ad0s1d on /usr (ufs, local, soft-updates)
@@ -237,13 +237,13 @@
 	is created.  Important files will be stored here, the file
 	system is set to keep two copies of each data block:</para>
 
-      <screen>&prompt.root; <userinput>zfs create example/data</userinput>
-&prompt.root; <userinput>zfs set copies=2 example/data</userinput></screen>
+      <screen>&prompt.root; <command>zfs create example/data</command>
+&prompt.root; <command>zfs set copies=2 example/data</command></screen>
 
       <para>It is now possible to see the data and space utilization
 	by issuing <command>df</command>:</para>
 
-      <screen>&prompt.root; <userinput>df</userinput>
+      <screen>&prompt.root; <command>df</command>
 Filesystem         1K-blocks    Used    Avail Capacity  Mounted on
 /dev/ad0s1a          2026030  235234  1628714    13%    /
 devfs                      1       1        0   100%    /dev
@@ -264,9 +264,9 @@
       <para>To destroy the file systems and then destroy the pool as
 	they are no longer needed:</para>
 
-      <screen>&prompt.root; <userinput>zfs destroy example/compressed</userinput>
-&prompt.root; <userinput>zfs destroy example/data</userinput>
-&prompt.root; <userinput>zpool destroy example</userinput></screen>
+      <screen>&prompt.root; <command>zfs destroy example/compressed</command>
+&prompt.root; <command>zfs destroy example/data</command>
+&prompt.root; <command>zpool destroy example</command></screen>
     </sect2>
 
     <sect2>
@@ -283,7 +283,7 @@
 	command, specifying the disks to add to the
 	pool:</para>
 
-      <screen>&prompt.root; <userinput>zpool create storage raidz da0 da1 da2</userinput></screen>
+      <screen>&prompt.root; <command>zpool create storage raidz da0 da1 da2</command></screen>
 
       <note>
 	<para>&sun; recommends that the number of devices used in a
@@ -301,22 +301,22 @@
 	command makes a new file system in the pool called
 	<literal>home</literal>:</para>
 
-      <screen>&prompt.root; <userinput>zfs create storage/home</userinput></screen>
+      <screen>&prompt.root; <command>zfs create storage/home</command></screen>
 
       <para>Now compression and keeping extra copies of directories
 	and files can be enabled with these commands:</para>
 
-      <screen>&prompt.root; <userinput>zfs set copies=2 storage/home</userinput>
-&prompt.root; <userinput>zfs set compression=gzip storage/home</userinput></screen>
+      <screen>&prompt.root; <command>zfs set copies=2 storage/home</command>
+&prompt.root; <command>zfs set compression=gzip storage/home</command></screen>
 
       <para>To make this the new home directory for users, copy the
 	user data to this directory, and create the appropriate
 	symbolic links:</para>
 
-      <screen>&prompt.root; <userinput>cp -rp /home/* /storage/home</userinput>
-&prompt.root; <userinput>rm -rf /home /usr/home</userinput>
-&prompt.root; <userinput>ln -s /storage/home /home</userinput>
-&prompt.root; <userinput>ln -s /storage/home /usr/home</userinput></screen>
+      <screen>&prompt.root; <command>cp -rp /home/* /storage/home</command>
+&prompt.root; <command>rm -rf /home /usr/home</command>
+&prompt.root; <command>ln -s /storage/home /home</command>
+&prompt.root; <command>ln -s /storage/home /usr/home</command></screen>
 
       <para>Users now have their data stored on the freshly
 	created <filename class="directory">/storage/home</filename>.
@@ -325,7 +325,7 @@
       <para>Try creating a snapshot which can be rolled back
 	later:</para>
 
-      <screen>&prompt.root; <userinput>zfs snapshot storage/home@08-30-08</userinput></screen>
+      <screen>&prompt.root; <command>zfs snapshot storage/home@08-30-08</command></screen>
 
       <para>Note that the snapshot option will only capture a real
 	file system, not a home directory or a file.  The
@@ -333,7 +333,7 @@
 	file system name or the volume name.  When a user's home
 	directory is accidentally deleted, restore it with:</para>
 
-      <screen>&prompt.root; <userinput>zfs rollback storage/home@08-30-08</userinput></screen>
+      <screen>&prompt.root; <command>zfs rollback storage/home@08-30-08</command></screen>
 
       <para>To list all available snapshots, run
 	<command>ls</command> in the file system's
@@ -341,7 +341,7 @@
 	directory.  For example, to see the previously taken
 	snapshot:</para>
 
-      <screen>&prompt.root; <userinput>ls /storage/home/.zfs/snapshot</userinput></screen>
+      <screen>&prompt.root; <command>ls /storage/home/.zfs/snapshot</command></screen>
 
       <para>It is possible to write a script to perform regular
 	snapshots on user data.  However, over time, snapshots can
@@ -348,7 +348,7 @@
 	consume a great deal of disk space.  The previous snapshot can
 	be removed using the following command:</para>
 
-      <screen>&prompt.root; <userinput>zfs destroy storage/home@08-30-08</userinput></screen>
+      <screen>&prompt.root; <command>zfs destroy storage/home@08-30-08</command></screen>
 
       <para>After testing,
 	<filename class="directory">/storage/home</filename> can be
@@ -355,19 +355,19 @@
 	made the real <filename class="directory">/home</filename>
 	using this command:</para>
 
-      <screen>&prompt.root; <userinput>zfs set mountpoint=/home storage/home</userinput></screen>
+      <screen>&prompt.root; <command>zfs set mountpoint=/home storage/home</command></screen>
 
       <para>Run <command>df</command> and <command>mount</command> to
 	confirm that the system now treats the file system as the real
 	<filename class="directory">/home</filename>:</para>
 
-      <screen>&prompt.root; <userinput>mount</userinput>
+      <screen>&prompt.root; <command>mount</command>
 /dev/ad0s1a on / (ufs, local)
 devfs on /dev (devfs, local)
 /dev/ad0s1d on /usr (ufs, local, soft-updates)
 storage on /storage (zfs, local)
 storage/home on /home (zfs, local)
-&prompt.root; <userinput>df</userinput>
+&prompt.root; <command>df</command>
 Filesystem   1K-blocks    Used    Avail Capacity  Mounted on
 /dev/ad0s1a    2026030  235240  1628708    13%    /
 devfs                1       1        0   100%    /dev
@@ -380,7 +380,7 @@
 	created can be generated as part of the nightly
 	&man.periodic.8; runs:</para>
 
-      <screen>&prompt.root; <userinput>echo 'daily_status_zfs_enable="YES"' >> /etc/periodic.conf</userinput></screen>
+      <screen>&prompt.root; <command>echo 'daily_status_zfs_enable="YES"' >> /etc/periodic.conf</command></screen>
     </sect2>
 
     <sect2>
@@ -391,7 +391,7 @@
 	<acronym>RAID-Z</acronym> devices may be viewed with this
 	command:</para>
 
-      <screen>&prompt.root; <userinput>zpool status -x</userinput></screen>
+      <screen>&prompt.root; <command>zpool status -x</command></screen>
 
       <para>If all pools are
 	<link linkend="zfs-term-online">Online</link> and everything
@@ -425,19 +425,19 @@
       <para>This indicates that the device was previously taken
 	offline by the administrator with this command:</para>
 
-      <screen>&prompt.root; <userinput>zpool offline storage da1</userinput></screen>
+      <screen>&prompt.root; <command>zpool offline storage da1</command></screen>
 
       <para>Now the system can be powered down to replace
 	<filename>da1</filename>.  When the system is back online,
 	the failed disk can replaced in the pool:</para>
 
-      <screen>&prompt.root; <userinput>zpool replace storage da1</userinput></screen>
+      <screen>&prompt.root; <command>zpool replace storage da1</command></screen>
 
       <para>From here, the status may be checked again, this time
 	without <option>-x</option> so that all pools are
 	shown:</para>
 
-      <screen>&prompt.root; <userinput>zpool status storage</userinput>
+      <screen>&prompt.root; <command>zpool status storage</command>
  pool: storage
  state: ONLINE
  scrub: resilver completed with 0 errors on Sat Aug 30 19:44:11 2008
@@ -463,7 +463,7 @@
 	upon creation of file systems and may be disabled using the
 	following command:</para>
 
-      <screen>&prompt.root; <userinput>zfs set checksum=off storage/home</userinput></screen>
+      <screen>&prompt.root; <command>zfs set checksum=off storage/home</command></screen>
 
       <warning>
 	<para>Doing so is <emphasis>not</emphasis> recommended!
@@ -478,16 +478,16 @@
 	<quote>scrubbing</quote>.  Verify the data integrity of the
 	<literal>storage</literal> pool, with this command:</para>
 
-      <screen>&prompt.root; <userinput>zpool scrub storage</userinput></screen>
+      <screen>&prompt.root; <command>zpool scrub storage</command></screen>
 
       <para>The duration of a scrub depends on the amount of data
 	stored.  Large amounts of data can take a considerable amount
 	of time to verify.  It is also very <acronym>I/O</acronym>
-	intensive, so much so that only one scrub> may be run at any
+	intensive, so much so that only one scrub may be run at any
 	given time.  After the scrub has completed, the status is
 	updated and may be viewed with a status request:</para>
 
-      <screen>&prompt.root; <userinput>zpool status storage</userinput>
+      <screen>&prompt.root; <command>zpool status storage</command>
  pool: storage
  state: ONLINE
  scrub: scrub completed with 0 errors on Sat Jan 26 19:57:37 2013
@@ -502,9 +502,10 @@
 
 errors: No known data errors</screen>
 
-      <para>The completion time is displayed and helps to ensure data
-	integrity over a long period of time.</para>
-	<!-- WB: what does that mean? -->
+      <para>The completion date of the last scrub operation is
+	displayed to help track when another scrub is required.
+	Routine pool scrubs help protect data from silent corruption
+	and ensure the integrity of the pool.</para>
 
       <para>Refer to &man.zfs.8; and &man.zpool.8; for other
 	<acronym>ZFS</acronym> options.</para>
@@ -581,6 +582,53 @@
 	redundancy.</para>
     </sect2>
 
+    <sect2 xml:id="zfs-zpool-status">
+      <title>Checking the Status of a Pool</title>
+
+      <para>It is important to monitor the status of the
+	<acronym>ZFS</acronym> pool.  If a drive goes offline, a
+	read or write error is detected, or a checksum fails to match,
+	the corresponding counters in the <option>status</option>
+	display will be incremented.  The <option>status</option>
+	output shows the configuration and status of each device in
+	the pool, in addition to the status of the pool as a whole.
+	Also displayed are any actions that may need to be taken, and
+	details about when the last
+	<link linkend="zfs-zpool-scrub"><option>scrub</option></link>
+	operation was completed.</para>
+
+      <screen>&prompt.root; <command>zpool status</command>
+  pool: mypool
+ state: ONLINE
+  scan: scrub repaired 0 in 2h25m with 0 errors on Sat Sep 14 04:25:50 2013
+config:
+
+        NAME        STATE     READ WRITE CKSUM
+        mypool      ONLINE       0     0     0
+          raidz2-0  ONLINE       0     0     0
+            ada0p3  ONLINE       0     0     0
+            ada1p3  ONLINE       0     0     0
+            ada2p3  ONLINE       0     0     0
+            ada3p3  ONLINE       0     0     0
+            ada4p3  ONLINE       0     0     0
+            ada5p3  ONLINE       0     0     0
+
+errors: No known data errors</screen>
+    </sect2>
+
+    <sect2 xml:id="zfs-zpool-clear">
+      <title>Clearing Errors</title>
+
+      <para>If an error is detected with a device in a pool, the
+	corresponding read, write, or checksum counter will be
+	incremented.  Once the issue is resolved, or to track the
+	rate of errors, <command>zpool clear mypool</command> will
+	reset the counters.  This step can be important for automated
+	scripts that monitor the health of the pool and alert the
+	administrator when there is an error, because further errors
+	may not be reported if the old errors are not cleared.</para>
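+
+      <para>For example, to reset the error counters on the pool
+	<replaceable>mypool</replaceable> once the underlying problem
+	has been addressed:</para>
+
+      <screen>&prompt.root; <command>zpool clear <replaceable>mypool</replaceable></command></screen>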
+    </sect2>
+
     <sect2 xml:id="zfs-zpool-replace">
       <title>Replacing a Functioning Device</title>
 
@@ -622,8 +670,40 @@
 	restored from backups.</para>
     </sect2>
 
+    <sect2 xml:id="zfs-zpool-scrub">
+      <title>Scrubbing a Pool</title>
+
+      <para>It is strongly recommended that a
+	<link linkend="zfs-term-scrub">Scrub</link> operation be
+	performed regularly, ideally at least once each quarter.  The
+	<option>scrub</option> operation is very
+	<acronym>I/O</acronym> intensive and will reduce performance
+	while it is in progress, so it should be scheduled to avoid
+	high-demand periods.</para>
+
+      <screen>&prompt.root; <command>zpool scrub mypool</command>
+&prompt.root; <command>zpool status</command>
+  pool: mypool
+ state: ONLINE
+  scan: scrub in progress since Wed Feb 19 20:52:54 2014
+        116G scanned out of 8.60T at 649M/s, 3h48m to go
+        0 repaired, 1.32% done
+config:
+
+        NAME        STATE     READ WRITE CKSUM
+        mypool      ONLINE       0     0     0
+          raidz2-0  ONLINE       0     0     0
+            ada0p3  ONLINE       0     0     0
+            ada1p3  ONLINE       0     0     0
+            ada2p3  ONLINE       0     0     0
+            ada3p3  ONLINE       0     0     0
+            ada4p3  ONLINE       0     0     0
+            ada5p3  ONLINE       0     0     0
+
+errors: No known data errors</screen>
+    </sect2>
+
     <sect2 xml:id="zfs-zpool-selfheal">
-      <title>ZFS Self-Healing</title>
+      <title><acronym>ZFS</acronym> Self-Healing</title>
 
       <para><acronym>ZFS</acronym> utilizes the checkums stored with
 	each data block to provide a feature called self-healing.
@@ -651,8 +731,8 @@
 	two disks <filename>/dev/ada0</filename> and
 	<filename>/dev/ada1</filename> is created.</para>
 
-      <screen>&prompt.root; <userinput>zpool create <replaceable>healer</replaceable> mirror <replaceable>/dev/ada0</replaceable> <replaceable>/dev/ada1</replaceable></userinput>
-&prompt.root; <userinput>zpool status <replaceable>healer</replaceable></userinput>
+      <screen>&prompt.root; <command>zpool create <replaceable>healer</replaceable> mirror <replaceable>/dev/ada0</replaceable> <replaceable>/dev/ada1</replaceable></command>
+&prompt.root; <command>zpool status <replaceable>healer</replaceable></command>
   pool: healer
  state: ONLINE
   scan: none requested
@@ -665,7 +745,7 @@
        ada1     ONLINE       0     0     0
 
 errors: No known data errors
-&prompt.root; <userinput>zpool list</userinput>
+&prompt.root; <command>zpool list</command>
 NAME     SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
 healer   960M  92.5K   960M     0%  1.00x  ONLINE  -</screen>
 
@@ -674,12 +754,12 @@
 	A checksum of the pool is then created to compare it against
 	the pool later on.</para>
 
-      <screen>&prompt.root; <userinput>cp /some/important/data /healer</userinput>
-&prompt.root; <userinput>zfs list</userinput>
+      <screen>&prompt.root; <command>cp /some/important/data /healer</command>
+&prompt.root; <command>zfs list</command>
 NAME     SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
 healer   960M  67.7M   892M     7%  1.00x  ONLINE  -
-&prompt.root; <userinput>sha1 /healer > checksum.txt</userinput>
-&prompt.root; <userinput>cat checksum.txt</userinput>
+&prompt.root; <command>sha1 /healer > checksum.txt</command>
+&prompt.root; <command>cat checksum.txt</command>
 SHA1 (/healer) = 2753eff56d77d9a536ece6694bf0a82740344d1f</screen>
 
       <para>Next, data corruption is simulated by writing random data
@@ -700,12 +780,12 @@
 	  of the pool are created before running the command!</para>
       </warning>
 
-      <screen>&prompt.root; <userinput>zpool export <replaceable>healer</replaceable></userinput>
-&prompt.root; <userinput>dd if=/dev/random of=/dev/ada1 bs=1m count=200</userinput>
+      <screen>&prompt.root; <command>zpool export <replaceable>healer</replaceable></command>
+&prompt.root; <command>dd if=/dev/random of=/dev/ada1 bs=1m count=200</command>
 200+0 records in
 200+0 records out
 209715200 bytes transferred in 62.992162 secs (3329227 bytes/sec)
-&prompt.root; <userinput>zpool import healer</userinput></screen>
+&prompt.root; <command>zpool import healer</command></screen>
 
       <para>The <acronym>ZFS</acronym> pool status shows that one
 	device has experienced an error.  It is important to know that
@@ -717,7 +797,7 @@
 	easily as the <literal>CKSUM</literal> column contains a value
 	greater than zero.</para>
 
-      <screen>&prompt.root; <userinput>zpool status <replaceable>healer</replaceable></userinput>
+      <screen>&prompt.root; <command>zpool status <replaceable>healer</replaceable></command>
     pool: healer
    state: ONLINE
   status: One or more devices has experienced an unrecoverable error.  An
@@ -742,8 +822,8 @@
 	with the original one should reveal whether the pool is
 	consistent again.</para>
 
-      <screen>&prompt.root; <userinput>sha1 /healer >> checksum.txt</userinput>
-&prompt.root; <userinput>cat checksum.txt</userinput>
+      <screen>&prompt.root; <command>sha1 /healer >> checksum.txt</command>
+&prompt.root; <command>cat checksum.txt</command>
 SHA1 (/healer) = 2753eff56d77d9a536ece6694bf0a82740344d1f
 SHA1 (/healer) = 2753eff56d77d9a536ece6694bf0a82740344d1f</screen>
 
@@ -762,8 +842,8 @@
 	required to remove the falsely written data from
 	<filename>ada1</filename>.</para>
 
-      <screen>&prompt.root; <userinput>zpool scrub <replaceable>healer</replaceable></userinput>
-&prompt.root; <userinput>zpool status <replaceable>healer</replaceable></userinput>
+      <screen>&prompt.root; <command>zpool scrub <replaceable>healer</replaceable></command>
+&prompt.root; <command>zpool status <replaceable>healer</replaceable></command>
   pool: healer
  state: ONLINE
 status: One or more devices has experienced an unrecoverable error.  An
@@ -792,7 +872,7 @@
 	operation is complete, the pool status has changed to the
 	following:</para>
 
-      <screen>&prompt.root; <userinput>zpool status <replaceable>healer</replaceable></userinput>
+      <screen>&prompt.root; <command>zpool status <replaceable>healer</replaceable></command>
   pool: healer
  state: ONLINE
 status: One or more devices has experienced an unrecoverable error.  An
@@ -817,8 +897,8 @@
 	from the pool status by running <command>zpool
 	  clear</command>.</para>
 
-      <screen>&prompt.root; <userinput>zpool clear <replaceable>healer</replaceable></userinput>
-&prompt.root; <userinput>zpool status <replaceable>healer</replaceable></userinput>
+      <screen>&prompt.root; <command>zpool clear <replaceable>healer</replaceable></command>
+&prompt.root; <command>zpool status <replaceable>healer</replaceable></command>
   pool: healer
  state: ONLINE
   scan: scrub repaired 66.5M in 0h2m with 0 errors on Mon Dec 10 12:26:25 2012
@@ -890,17 +970,38 @@
 	need to be imported on an older system before upgrading.  The
 	upgrade process is unreversible and cannot be undone.</para>
 
+      <screen>&prompt.root; <command>zpool status</command>
+  pool: mypool
+ state: ONLINE
+status: The pool is formatted using a legacy on-disk format.  The pool can
+        still be used, but some features are unavailable.
+action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
+        pool will no longer be accessible on software that does not support feature
+        flags.
+  scan: none requested
+config:
+
+        NAME        STATE     READ WRITE CKSUM
+        mypool      ONLINE       0     0     0
+          mirror-0  ONLINE       0     0     0
+            ada0    ONLINE       0     0     0
+            ada1    ONLINE       0     0     0
+
+errors: No known data errors</screen>
+
       <para>The newer features of <acronym>ZFS</acronym> will not be
 	available until <command>zpool upgrade</command> has
 	completed.  <option>-v</option> can be used to see what new
 	features will be provided by upgrading, as well as which
 	features are already supported by the existing version.</para>
-    </sect2>
 
-    <sect2 xml:id="zfs-zpool-status">
-      <title>Checking the Status of a Pool</title>
-
-      <para></para>
+      <warning>
+	<para>If the system boots from the zpool, the boot code must
+	  also be updated to support the new zpool version.  Run
+	  <command>gpart bootcode</command> on the partition that
+	  contains the boot code.  See &man.gpart.8; for more
+	  information.</para>
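+
+	<para>For example, on a system using <acronym>GPT</acronym>
+	  partitioning where partition 1 of <filename>ada0</filename>
+	  is the <literal>freebsd-boot</literal> partition (adjust
+	  the disk device and partition index to match the actual
+	  layout), the boot code could be updated with:</para>
+
+	<screen>&prompt.root; <command>gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 <replaceable>ada0</replaceable></command></screen>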
+      </warning>
     </sect2>
 
     <sect2 xml:id="zfs-zpool-history">
@@ -917,7 +1018,7 @@
 	review this history is aptly named
 	<command>zpool history</command>:</para>
 
-      <screen>&prompt.root; <userinput>zpool history</userinput>
+      <screen>&prompt.root; <command>zpool history</command>
 History for 'tank':
 2013-02-26.23:02:35 zpool create tank mirror /dev/ada0 /dev/ada1
 2013-02-27.18:50:58 zfs set atime=off tank
@@ -939,7 +1040,7 @@
 	<option>-i</option> displays user initiated events as well
 	as internally logged <acronym>ZFS</acronym> events.</para>
 
-      <screen>&prompt.root; <userinput>zpool history -i</userinput>
+      <screen>&prompt.root; <command>zpool history -i</command>
 History for 'tank':
 2013-02-26.23:02:35 [internal pool create txg:5] pool spa 28; zfs spa 28; zpl 5;uts  9.1-RELEASE 901000 amd64
 2013-02-27.18:50:53 [internal property set txg:50] atime=0 dataset = 21
@@ -954,7 +1055,7 @@
 	including information like the name of the user who issued the
 	command and the hostname on which the change was made.</para>
 
-      <screen>&prompt.root; <userinput>zpool history -l</userinput>
+      <screen>&prompt.root; <command>zpool history -l</command>
 History for 'tank':
 2013-02-26.23:02:35 zpool create tank mirror /dev/ada0 /dev/ada1 [user 0 (root) on :global]
 2013-02-27.18:50:58 zfs set atime=off tank [user 0 (root) on myzfsbox:global]
@@ -992,7 +1093,7 @@
 	to limit monitoring to just that pool.  A
 	basic example:</para>
 
-      <screen>&prompt.root; <userinput>zpool iostat</userinput>
+      <screen>&prompt.root; <command>zpool iostat</command>
                capacity     operations    bandwidth
 pool        alloc   free   read  write   read  write
 ----------  -----  -----  -----  -----  -----  -----
@@ -1019,7 +1120,7 @@
 	pool.  This example shows a mirrored pool
 	consisting of two devices:</para>
 
-      <screen>&prompt.root; <userinput>zpool iostat -v </userinput>
+      <screen>&prompt.root; <command>zpool iostat -v</command>
                             capacity     operations    bandwidth
 pool                     alloc   free   read  write   read  write
 -----------------------  -----  -----  -----  -----  -----  -----
@@ -1122,16 +1223,16 @@
 	compression property on a 250 MB volume allows creation
 	of a compressed <acronym>FAT</acronym> filesystem.</para>
 
-      <screen>&prompt.root; <userinput>zfs create -V 250m -o compression=on tank/fat32</userinput>
-&prompt.root; <userinput>zfs list tank</userinput>
+      <screen>&prompt.root; <command>zfs create -V 250m -o compression=on tank/fat32</command>
+&prompt.root; <command>zfs list tank</command>
 NAME USED AVAIL REFER MOUNTPOINT
 tank 258M  670M   31K /tank
-&prompt.root; <userinput>newfs_msdos -F32 /dev/zvol/tank/fat32</userinput>
-&prompt.root; <userinput>mount -t msdosfs /dev/zvol/tank/fat32 /mnt</userinput>
-&prompt.root; <userinput>df -h /mnt | grep fat32</userinput>
+&prompt.root; <command>newfs_msdos -F32 /dev/zvol/tank/fat32</command>
+&prompt.root; <command>mount -t msdosfs /dev/zvol/tank/fat32 /mnt</command>
+&prompt.root; <command>df -h /mnt | grep fat32</command>
 Filesystem           Size Used Avail Capacity Mounted on
 /dev/zvol/tank/fat32 249M  24k  249M     0%   /mnt
-&prompt.root; <userinput>mount | grep fat32</userinput>
+&prompt.root; <command>mount | grep fat32</command>
 /dev/zvol/tank/fat32 on /mnt (msdosfs, local)</screen>
 
       <para>Destroying a volume is much the same as destroying a
@@ -1182,8 +1283,8 @@
 	(<literal>:</literal>) is used to create a custom namespace
 	for the property.</para>
 
-      <screen>&prompt.root; <userinput>zfs set <replaceable>custom</replaceable>:<replaceable>costcenter</replaceable>=<replaceable>1234</replaceable> <replaceable>tank</replaceable></userinput>
-&prompt.root; <userinput>zfs get <replaceable>custom</replaceable>:<replaceable>costcenter</replaceable> <replaceable>tank</replaceable></userinput>
+      <screen>&prompt.root; <command>zfs set <replaceable>custom</replaceable>:<replaceable>costcenter</replaceable>=<replaceable>1234</replaceable> <replaceable>tank</replaceable></command>
+&prompt.root; <command>zfs get <replaceable>custom</replaceable>:<replaceable>costcenter</replaceable> <replaceable>tank</replaceable></command>
 NAME PROPERTY           VALUE SOURCE
 tank custom:costcenter  1234  local</screen>
 
@@ -1193,11 +1294,11 @@
 	datasets, it will be removed completely (although the changes
 	are still recorded in the pool's history).</para>
 
-      <screen>&prompt.root; <userinput>zfs inherit -r <replaceable>custom</replaceable>:<replaceable>costcenter</replaceable> <replaceable>tank</replaceable></userinput>
-&prompt.root; <userinput>zfs get <replaceable>custom</replaceable>:<replaceable>costcenter</replaceable> <replaceable>tank</replaceable></userinput>
+      <screen>&prompt.root; <command>zfs inherit -r <replaceable>custom</replaceable>:<replaceable>costcenter</replaceable> <replaceable>tank</replaceable></command>
+&prompt.root; <command>zfs get <replaceable>custom</replaceable>:<replaceable>costcenter</replaceable> <replaceable>tank</replaceable></command>
 NAME    PROPERTY           VALUE              SOURCE
 tank    custom:costcenter  -                  -
-&prompt.root; <userinput>zfs get all <replaceable>tank</replaceable> | grep <replaceable>custom</replaceable>:<replaceable>costcenter</replaceable></userinput>
+&prompt.root; <command>zfs get all <replaceable>tank</replaceable> | grep <replaceable>custom</replaceable>:<replaceable>costcenter</replaceable></command>
 &prompt.root;</screen>
     </sect2>
 
@@ -1255,7 +1356,7 @@
     </sect2>
 
     <sect2 xml:id="zfs-zfs-send">
-      <title>ZFS Replication</title>
+      <title><acronym>ZFS</acronym> Replication</title>
 
       <para>Keeping data on a single pool in one location exposes
 	it to risks like theft, natural and human disasters.  Keeping
@@ -1265,12 +1366,13 @@
 	the data to standard output.  Using this technique, it is
 	possible to not only store the data on another pool connected
 	to the local system, but also to send it over a network to
-	another system that runs ZFS.  To achieve this replication,
-	<acronym>ZFS</acronym> uses filesystem snapshots (see the
-	section on <link
-	  linkend="zfs-zfs-snapshot">ZFS snapshots</link>) to send
-	them from one location to another.  The commands for this
-	operation are <command>zfs send</command> and
+	another system that runs <acronym>ZFS</acronym>.  To achieve
+	this replication, <acronym>ZFS</acronym> uses filesystem
+	snapshots (see the section on
+	<link linkend="zfs-zfs-snapshot"><acronym>ZFS</acronym>
+	  snapshots</link>) to send them from one location to another.
+	The commands for this operation are
+	<command>zfs send</command> and
 	<command>zfs receive</command>, respectively.</para>
 
       <para>The following examples will demonstrate the functionality
@@ -1357,7 +1459,7 @@
 mypool  984M  43.7M   940M     4%  1.00x  ONLINE  -</screen>
 
       <sect3 xml:id="zfs-send-incremental">
-	<title>ZFS Incremental Backups</title>
+	<title><acronym>ZFS</acronym> Incremental Backups</title>
 
 	<para>Another feature of <command>zfs send</command> is that
 	  it can determine the difference between two snapshots to
@@ -1365,12 +1467,12 @@
 	  saving disk space and time for the transfer to another pool.
 	  For example:</para>
 
-	<screen>&prompt.root; <userinput>zfs snapshot <replaceable>mypool</replaceable>@<replaceable>backup2</replaceable></userinput>
-&prompt.root; <userinput>zfs list -t snapshot</userinput>
+	<screen>&prompt.root; <command>zfs snapshot <replaceable>mypool</replaceable>@<replaceable>backup2</replaceable></command>
+&prompt.root; <command>zfs list -t snapshot</command>
 NAME                    USED  AVAIL  REFER  MOUNTPOINT
 mypool@backup1         5.72M      -  43.6M  -
 mypool@backup2             0      -  44.1M  -
-&prompt.root; <userinput>zpool list</userinput>
+&prompt.root; <command>zpool list</command>
 NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
 backup  960M  61.7M   898M     6%  1.00x  ONLINE  -
 mypool  960M  50.2M   910M     5%  1.00x  ONLINE  -</screen>
@@ -1377,20 +1479,20 @@
 
 	<para>A second snapshot called
 	  <replaceable>backup2</replaceable> was created.  This second
-	  snapshot contains only the changes on the ZFS filesystem
-	  between now and the last snapshot,
-	  <replaceable>backup1</replaceable>.  Using the
+	  snapshot contains only the changes on the
+	  <acronym>ZFS</acronym> filesystem between now and the last
+	  snapshot, <replaceable>backup1</replaceable>.  Using the
 	  <literal>-i</literal> flag to <command>zfs send</command>
 	  and providing both snapshots, an incremental snapshot can be
 	  transferred, containing only the data that has
 	  changed.</para>
 
-	<screen>&prompt.root; <userinput>zfs send -i <replaceable>mypool</replaceable>@<replaceable>backup1</replaceable> <replaceable>mypool</replaceable>@<replaceable>backup2</replaceable> > <replaceable>/backup/incremental</replaceable></userinput>
-&prompt.root; <userinput>zpool list</userinput>
+	<screen>&prompt.root; <command>zfs send -i <replaceable>mypool</replaceable>@<replaceable>backup1</replaceable> <replaceable>mypool</replaceable>@<replaceable>backup2</replaceable> > <replaceable>/backup/incremental</replaceable></command>
+&prompt.root; <command>zpool list</command>
 NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
 backup  960M  80.8M   879M     8%  1.00x  ONLINE  -
 mypool  960M  50.2M   910M     5%  1.00x  ONLINE  -
-&prompt.root; <userinput>ls -lh /backup</userinput>
+&prompt.root; <command>ls -lh /backup</command>
 total 82247
 drwxr-xr-x     1 root   wheel      61M Dec  3 11:36 backup1
 drwxr-xr-x     1 root   wheel      18M Dec  3 11:36 incremental</screen>
@@ -1407,7 +1509,7 @@
       </sect3>
 
       <sect3 xml:id="zfs-send-recv">
-	<title>Receiving ZFS Data Streams</title>
+	<title>Receiving <acronym>ZFS</acronym> Data Streams</title>
 
 	<para>Up until now, only the data streams in binary form were
 	  sent to other pools.  To get to the actual data contained in
@@ -1421,8 +1523,8 @@
 	  pool to another.  This way, the data can be used directly on
 	  the receiving pool after the transfer is complete.</para>
 
-	<screen>&prompt.root; <userinput>zfs send <replaceable>mypool</replaceable>@<replaceable>backup1</replaceable> | zfs receive <replaceable>backup/backup1</replaceable></userinput>
-&prompt.root; <userinput>ls -lh /backup</userinput>
+	<screen>&prompt.root; <command>zfs send <replaceable>mypool</replaceable>@<replaceable>backup1</replaceable> | zfs receive <replaceable>backup/backup1</replaceable></command>
+&prompt.root; <command>ls -lh /backup</command>
 total 431
 drwxr-xr-x     4219 root   wheel      4.1k Dec  3 11:34 backup1</screen>
 
@@ -1429,11 +1531,11 @@
 	<para>The directory <replaceable>backup1</replaceable> does
 	  contain all the data, which were part of the snapshot of the
 	  same name.  Since this originally was a complete filesystem
-	  snapshot, the listing of all ZFS filesystems for this pool
-	  is also updated and shows the
+	  snapshot, the listing of all <acronym>ZFS</acronym>
+	  filesystems for this pool is also updated and shows the
 	  <replaceable>backup1</replaceable> entry.</para>
 
-	<screen>&prompt.root; <userinput>zfs list</userinput>
+	<screen>&prompt.root; <command>zfs list</command>
 NAME                    USED  AVAIL  REFER  MOUNTPOINT
 backup                 43.7M   884M    32K  /backup
 backup/backup1         43.5M   884M  43.5M  /backup/backup1
@@ -1465,16 +1567,16 @@
 	  encryption of the data on the pool itself.  To make sure the
 	  network connection between both systems is securely
 	  encrypted, <application>SSH</application> can be used.
-	  Since ZFS only requires the stream to be redirected from
-	  standard output, it is relatively easy to pipe it through
-	  SSH.</para>
+	  Since <acronym>ZFS</acronym> only requires the stream to be
+	  redirected from standard output, it is relatively easy to
+	  pipe it through SSH.</para>
 
 	<para>A few settings and security precautions have to be made
-	  before this can be done.  Since this chapter is about ZFS
-	  and not about configuring SSH, it only lists the things
-	  required to perform the encrypted <command>zfs
-	  send</command> operation.  The following settings should
-	  be made:</para>
+	  before this can be done.  Since this chapter is about
+	  <acronym>ZFS</acronym> and not about configuring SSH, it
+	  only lists the things required to perform the encrypted
+	  <command>zfs send</command> operation.  The following
+	  settings should be made:</para>
 
 	<itemizedlist>
 	  <listitem>
@@ -1500,8 +1602,8 @@
 	  the receiving system, the encrypted stream can be sent using
 	  the following commands:</para>
 
-	<screen>&prompt.root; <userinput>zfs snapshot -r <replaceable>mypool/home</replaceable>@<replaceable>monday</replaceable></userinput>
-&prompt.root; <userinput>zfs send -R <replaceable>mypool/home</replaceable>@<replaceable>monday</replaceable> | ssh <replaceable>backuphost</replaceable> zfs recv -dvu <replaceable>backuppool</replaceable></userinput></screen>
+	<screen>&prompt.root; <command>zfs snapshot -r <replaceable>mypool/home</replaceable>@<replaceable>monday</replaceable></command>
+&prompt.root; <command>zfs send -R <replaceable>mypool/home</replaceable>@<replaceable>monday</replaceable> | ssh <replaceable>backuphost</replaceable> zfs recv -dvu <replaceable>backuppool</replaceable></command></screen>
 
 	<para>The first command creates a recursive snapshot (option
 	  <literal>-r</literal>) called
@@ -1549,13 +1651,13 @@
 	<filename>storage/home/bob</filename>, use the
 	following:</para>
 
-      <screen>&prompt.root; <userinput>zfs set quota=10G storage/home/bob</userinput></screen>
+      <screen>&prompt.root; <command>zfs set quota=10G storage/home/bob</command></screen>
 
       <para>To enforce a reference quota of 10 GB for
 	<filename>storage/home/bob</filename>, use the
 	following:</para>
 
-      <screen>&prompt.root; <userinput>zfs set refquota=10G storage/home/bob</userinput></screen>
+      <screen>&prompt.root; <command>zfs set refquota=10G storage/home/bob</command></screen>
 
       <para>The general format is
 	<literal>userquota@<replaceable>user</replaceable>=<replaceable>size</replaceable></literal>,
@@ -1589,11 +1691,11 @@
       <para>For example, to enforce a user quota of 50 GB for the
 	user named <replaceable>joe</replaceable>:</para>
 
-      <screen>&prompt.root; <userinput>zfs set userquota@joe=50G</userinput></screen>
+      <screen>&prompt.root; <command>zfs set userquota@joe=50G</command></screen>
 
       <para>To remove any quota:</para>
 
-      <screen>&prompt.root; <userinput>zfs set userquota@joe=none</userinput></screen>
+      <screen>&prompt.root; <command>zfs set userquota@joe=none</command></screen>
 
       <note>
 	<para>User quota properties are not displayed by
@@ -1611,13 +1713,13 @@
 	<replaceable>firstgroup</replaceable> to 50 GB,
 	use:</para>
 
-      <screen>&prompt.root; <userinput>zfs set groupquota@firstgroup=50G</userinput></screen>
+      <screen>&prompt.root; <command>zfs set groupquota@firstgroup=50G</command></screen>
 
       <para>To remove the quota for the group
 	<replaceable>firstgroup</replaceable>, or to make sure that
 	one is not set, instead use:</para>
 
-      <screen>&prompt.root; <userinput>zfs set groupquota@firstgroup=none</userinput></screen>
+      <screen>&prompt.root; <command>zfs set groupquota@firstgroup=none</command></screen>
 
       <para>As with the user quota property,
 	non-<systemitem class="username">root</systemitem> users can
@@ -1638,7 +1740,7 @@
 	<systemitem class="username">root</systemitem> can list the
 	quota for <filename>storage/home/bob</filename> using:</para>
 
-      <screen>&prompt.root; <userinput>zfs get quota storage/home/bob</userinput></screen>
+      <screen>&prompt.root; <command>zfs get quota storage/home/bob</command></screen>
     </sect2>
 
     <sect2 xml:id="zfs-zfs-reservation">
@@ -1657,11 +1759,11 @@
 	so to set a reservation of 10 GB on
 	<filename>storage/home/bob</filename>, use:</para>
 
-      <screen>&prompt.root; <userinput>zfs set reservation=10G storage/home/bob</userinput></screen>
+      <screen>&prompt.root; <command>zfs set reservation=10G storage/home/bob</command></screen>
 
       <para>To clear any reservation:</para>
 
-      <screen>&prompt.root; <userinput>zfs set reservation=none storage/home/bob</userinput></screen>
+      <screen>&prompt.root; <command>zfs set reservation=none storage/home/bob</command></screen>
 
       <para>The same principle can be applied to the
 	<literal>refreservation</literal> property for setting a
@@ -1672,16 +1774,10 @@
       <para>This command shows any reservations or refreservations
 	that exist on <filename>storage/home/bob</filename>:</para>
 
-      <screen>&prompt.root; <userinput>zfs get reservation storage/home/bob</userinput>
-&prompt.root; <userinput>zfs get refreservation storage/home/bob</userinput></screen>
+      <screen>&prompt.root; <command>zfs get reservation storage/home/bob</command>
+&prompt.root; <command>zfs get refreservation storage/home/bob</command></screen>
     </sect2>
 
-    <sect2 xml:id="zfs-zfs-compression">
-      <title>Compression</title>
-
-      <para></para>
-    </sect2>
-
     <sect2 xml:id="zfs-zfs-deduplication">
       <title>Deduplication</title>
 
@@ -1700,7 +1796,7 @@
       <para>To activate deduplication, set the
 	<literal>dedup</literal> property on the target pool:</para>
 
-      <screen>&prompt.root; <userinput>zfs set dedup=on <replaceable>pool</replaceable></userinput></screen>
+      <screen>&prompt.root; <command>zfs set dedup=on <replaceable>pool</replaceable></command></screen>
 
       <para>Only new data being written to the pool will be
 	deduplicated.  Data that has already been written to the pool
@@ -1708,7 +1804,7 @@
 	such, a pool with a freshly activated deduplication property
 	will look something like this example:</para>
 
-      <screen>&prompt.root; <userinput>zpool list</userinput>
+      <screen>&prompt.root; <command>zpool list</command>
 NAME  SIZE ALLOC  FREE CAP DEDUP HEALTH ALTROOT
 pool 2.84G 2.19M 2.83G  0% 1.00x ONLINE -</screen>
 
@@ -1719,7 +1815,7 @@
 	copied three times into different directories on the
 	deduplicated pool created above.</para>
 
-      <screen>&prompt.root; <userinput>zpool list</userinput>
+      <screen>&prompt.root; <command>zpool list</command>
 for d in dir1 dir2 dir3; do
 for> mkdir $d && cp -R /usr/ports $d &
 for> done</screen>
@@ -1726,7 +1822,7 @@
 
       <para>Redundant data is detected and deduplicated:</para>
 
-      <screen>&prompt.root; <userinput>zpool list</userinput>
+      <screen>&prompt.root; <command>zpool list</command>
 NAME SIZE  ALLOC FREE CAP DEDUP HEALTH ALTROOT
 pool 2.84G 20.9M 2.82G 0% 3.00x ONLINE -</screen>
 
@@ -1742,7 +1838,7 @@
 	<acronym>ZFS</acronym> can show potential space savings by
 	simulating deduplication on an existing pool:</para>
 
-      <screen>&prompt.root; <userinput>zdb -S <replaceable>pool</replaceable></userinput>
+      <screen>&prompt.root; <command>zdb -S <replaceable>pool</replaceable></command>
 Simulated DDT histogram:
 
 bucket              allocated                       referenced
@@ -1778,8 +1874,80 @@
 	due to the much lower memory requirements.</para>
     </sect2>
 
+    <sect2 xml:id="zfs-zfs-compression">
+      <title>Compression</title>
+
+      <para><acronym>ZFS</acronym> provides transparent compression.
+	Compressing data at the block level as it is written not only
+	saves storage space, but can also result in higher disk
+	throughput than would otherwise be possible.  If data is
+	compressed by 25%, then only 75% as much data needs to be
+	written to the disk, resulting in a correspondingly higher
+	effective write speed.  Compression can also be a great
+	alternative to
+	<link linkend="zfs-zfs-deduplication">Deduplication</link>
+	because it does not require additional memory to store a
+	<acronym>DDT</acronym>.</para>
+
+      <para><acronym>ZFS</acronym> offers a number of different
+	compression algorithms to choose from, each with different
+	trade-offs.  With the introduction of <acronym>LZ4</acronym>
+	compression in <acronym>ZFS</acronym> v5000, it is possible
+	to enable compression for the entire pool without the large
+	performance trade-off of other algorithms.  The biggest
+	advantage to <acronym>LZ4</acronym> is the
+	<literal>early abort</literal> feature.  If
+	<acronym>LZ4</acronym> does not achieve at least 12.5%
+	compression in the first part of the data, the block is
+	written uncompressed to avoid wasting CPU cycles trying to
+	compress data that is either already compressed or
+	uncompressible.  For details about the different compression
+	algorithms available in <acronym>ZFS</acronym>, see the
+	<link linkend="zfs-term-compression">Compression</link> entry
+	in the terminology section.</para>
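+
+      <para>For example, <acronym>LZ4</acronym> compression could be
+	enabled on a dataset (here the
+	<replaceable>mypool/compressed_dataset</replaceable> dataset
+	used in the example below) with:</para>
+
+      <screen>&prompt.root; <command>zfs set compression=lz4 <replaceable>mypool/compressed_dataset</replaceable></command></screen>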
+
+      <para>The administrator can monitor the effectiveness of
+	<acronym>ZFS</acronym> compression using a number of dataset
+	properties.</para>
+
+      <screen>&prompt.root; <command>zfs get used,compressratio,compression,logicalused mypool/compressed_dataset</command>
+NAME                       PROPERTY          VALUE     SOURCE
+mypool/compressed_dataset  used              449G      -
+mypool/compressed_dataset  compressratio     1.11x     -
+mypool/compressed_dataset  compression       lz4       local
+mypool/compressed_dataset  logicalused       496G      -</screen>
+
+      <para>The dataset is currently using 449 GB of storage
+	space (the <literal>used</literal> property).  If this
+	dataset were not compressed, it would have taken 496 GB
+	of space (the <literal>logicalused</literal> property).  This
+	results in a 1.11:1 compression ratio.</para>
+
+      <para>Compression can have an unexpected side effect when
+	combined with
+	<link linkend="zfs-term-userquota">User Quotas</link>.
+	<acronym>ZFS</acronym> user quotas restrict how much space
+	a user can consume on a dataset; however, the measurements
+	are based on how much data is stored after compression.  So
+	if a user has a quota of 10 GB and writes 10 GB of
+	compressible data, they will still be able to store additional
+	data.  If they later update a file, say a database, with more
+	or less compressible data, the amount of space available to
+	them will change.  This can result in the odd situation where
+	a user did not increase the actual amount of data (the
+	<literal>logicalused</literal> property), but the change in
+	compression means they have now reached their quota.</para>
+
+      <para>Compression can have a similar unexpected interaction with
+	backups.  Quotas are often used to limit how much data can be
+	stored to ensure there is sufficient backup space available.
+	However, since quotas do not consider compression, more data
+	may be written than will fit in the uncompressed backups.</para>
+    </sect2>
+
     <sect2 xml:id="zfs-zfs-jail">
-      <title>ZFS and Jails</title>
+      <title><acronym>ZFS</acronym> and Jails</title>
 
       <para><command>zfs jail</command> and the corresponding
 	<literal>jailed</literal> property are used to delegate a
@@ -1843,22 +2011,22 @@
   </sect1>
 
   <sect1 xml:id="zfs-advanced">
-    <title>ZFS Advanced Topics</title>
+    <title><acronym>ZFS</acronym> Advanced Topics</title>
 
     <sect2 xml:id="zfs-advanced-tuning">
-      <title>ZFS Tuning</title>
+      <title><acronym>ZFS</acronym> Tuning</title>
 
       <para></para>
     </sect2>
 
     <sect2 xml:id="zfs-advanced-booting">
-      <title>Booting Root on ZFS</title>
+      <title>Booting Root on <acronym>ZFS</acronym></title>
 
       <para></para>
     </sect2>
 
     <sect2 xml:id="zfs-advanced-beadm">
-      <title>ZFS Boot Environments</title>
+      <title><acronym>ZFS</acronym> Boot Environments</title>
 
       <para></para>
     </sect2>
@@ -1870,7 +2038,7 @@
     </sect2>
 
     <sect2 xml:id="zfs-advanced-i386">
-      <title>ZFS on i386</title>
+      <title><acronym>ZFS</acronym> on i386</title>
 
       <para>Some of the features provided by <acronym>ZFS</acronym>
 	are memory intensive, and may require tuning for maximum
@@ -1942,38 +2110,46 @@
     <itemizedlist>
       <listitem>
 	<para><link xlink:href="https://wiki.freebsd.org/ZFS">FreeBSD
-	    Wiki - ZFS</link></para>
+	    Wiki - <acronym>ZFS</acronym></link></para>
       </listitem>
 
       <listitem>
 	<para><link
 	    xlink:href="https://wiki.freebsd.org/ZFSTuningGuide">FreeBSD
-	    Wiki - ZFS Tuning</link></para>
+	    Wiki - <acronym>ZFS</acronym> Tuning</link></para>
       </listitem>
 
       <listitem>
 	<para><link
 	    xlink:href="http://wiki.illumos.org/display/illumos/ZFS">Illumos
-	    Wiki - ZFS</link></para>
+	    Wiki - <acronym>ZFS</acronym></link></para>
       </listitem>
 
       <listitem>
 	<para><link
 	    xlink:href="http://docs.oracle.com/cd/E19253-01/819-5461/index.html">Oracle
-	    Solaris ZFS Administration Guide</link></para>
+	    Solaris <acronym>ZFS</acronym> Administration
+	    Guide</link></para>
       </listitem>
 
       <listitem>
 	<para><link
-	    xlink:href="http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide">ZFS
+	    xlink:href="http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide"><acronym>ZFS</acronym>
 	    Evil Tuning Guide</link></para>
       </listitem>
 
       <listitem>
 	<para><link
-	    xlink:href="http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide">ZFS
+	    xlink:href="http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide"><acronym>ZFS</acronym>
 	    Best Practices Guide</link></para>
       </listitem>
+
+      <listitem>
+	<para><link
+	    xlink:href="https://calomel.org/zfs_raid_speed_capacity.html">Calomel
+	    Blog - <acronym>ZFS</acronym> Raidz Performance, Capacity
+	    and Integrity</link></para>
+      </listitem>
     </itemizedlist>
 
     <sect2 xml:id="zfs-history">
@@ -2449,10 +2625,68 @@
 	      and write throughput, as only the smaller compressed
 	      version of the file needs to be read or written.
 
-	      <note>
-		<para><acronym>LZ4</acronym> compression is only
-		  available after &os; 9.2.</para>
-	      </note></entry>
+	      <itemizedlist>
+		<listitem xml:id="zfs-term-compression-lz4">
+		  <para><emphasis><acronym>LZ4</acronym></emphasis> -
+		    was added in <acronym>ZFS</acronym> pool version
+		    5000 (feature flags), and is now the recommended
+		    compression algorithm.  <acronym>LZ4</acronym>
+		    compresses approximately 50% faster than
+		    <acronym>LZJB</acronym> when operating on
+		    compressible data, and is over three times faster
+		    when operating on uncompressible data.
+		    <acronym>LZ4</acronym> also decompresses
+		    approximately 80% faster than
+		    <acronym>LZJB</acronym>.  On modern
+		    <acronym>CPU</acronym>s, <acronym>LZ4</acronym>
+		    can often compress at over 500 MB/s, and
+		    decompress at over 1.5 GB/s (per single
+		    <acronym>CPU</acronym> core).</para>
+
+		  <note>
+		    <para><acronym>LZ4</acronym> compression is
+		      only available after &os; 9.2.</para>
+		  </note>
+		</listitem>
+
+		<listitem xml:id="zfs-term-compression-lzjb">
+		  <para><emphasis><acronym>LZJB</acronym></emphasis> -
+		    is the default compression algorithm in
+		    <acronym>ZFS</acronym>.  It was created by Jeff
+		    Bonwick, one of the original creators of
+		    <acronym>ZFS</acronym>.  <acronym>LZJB</acronym>
+		    offers good compression with less
+		    <acronym>CPU</acronym> overhead compared to
+		    <acronym>GZIP</acronym>.  In the future, the
+		    default compression algorithm will likely change
+		    to <acronym>LZ4</acronym>.</para>
+		</listitem>
+
+		<listitem xml:id="zfs-term-compression-gzip">
+		  <para><emphasis><acronym>GZIP</acronym></emphasis> -
+		    is a popular stream compression algorithm and is
+		    available in <acronym>ZFS</acronym>.  One of the
+		    main advantages of using <acronym>GZIP</acronym>
+		    is its configurable level of compression.  When
+		    setting the <literal>compression</literal>
+		    property, the administrator can choose which
+		    level of compression to use, ranging from
+		    <literal>gzip-1</literal>, the lowest level of
+		    compression, to <literal>gzip-9</literal>, the
+		    highest level of compression.  This gives the
+		    administrator control over how much
+		    <acronym>CPU</acronym> time to trade for saved
+		    disk space.</para>
+		</listitem>
+
+		<listitem xml:id="zfs-term-compression-zle">
+		  <para><emphasis><acronym>ZLE</acronym></emphasis> -
+		    (zero length encoding) is a special compression
+		    algorithm that only compresses continuous runs of
+		    zeros.  This compression algorithm is only useful
+		    when the dataset contains large areas where only
+		    zeros are written.</para>
+		</listitem>
+	      </itemizedlist></entry>
 	  </row>
 
 	  <row>
@@ -2511,7 +2745,9 @@
 	      at least once each quarter.  Checksums of each block are
 	      tested as they are read in normal use, but a scrub
 	      operation makes sure even infrequently used blocks are
-	      checked for silent corruption.</entry>
+	      checked for silent corruption, improving the safety of
+	      the data, especially in archival storage
+	      situations.</entry>
 	  </row>
 
 	  <row>