ZFS handbook project patch

Allan Jude freebsd at allanjude.com
Fri Feb 21 01:19:41 UTC 2014


On 2014-02-20 15:58, Benedict Reuschling wrote:
> Am 20.02.14 20:08, schrieb Warren Block:
>> On Thu, 20 Feb 2014, Allan Jude wrote:
>>>
>>> I've quickly switched those back to <userinput>
>>>
>>> I used simple logic: if talking about a command in a paragraph,
>>> use <command>; when doing it in a <screen>, use <userinput>, as in
>>> a paragraph it is usually never more than a subcommand like
>>> <command>zfs send</command>
> 
>> That's correct, IMO.  Semantically, command tags could be used
>> inside userinput, but that really does not seem to gain much, and
>> would suggest filenames and other content in <screen> sections
>> would become even more complicated.
> 
>>> Also, I just noticed that a bunch of the stuff from my previous
>>> zfs patch didn't get in (I sent 2, a whitespace and a content
>>> patch, and only the whitespace one got in), so I've included the
>>> updated zfs send stuff as well (how to do replication without
>>> root)
> 
>> bcr responded about that, and was waiting for feedback (I think).
> 
> 
> Indeed. But no worries. Now that it is one patch, we can review it
> together.
> 
> Good work, Allan! The chapter is beginning to look better and better.
> 
> Regards
> 
> Benedict


Minor update to the patch to fix a spelling mistake pointed out by bcr@
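
For reviewers who want to test the "replication without root" part of
the patch, the procedure boils down to roughly the following sketch
(using the example names mypool, recvpool/backup, someuser and
backuphost from the patch; adjust them for the systems at hand):

    # On the sending system (as root), delegate snapshot and send
    # permissions on the source pool to the unprivileged user:
    zfs allow -u someuser send,snapshot mypool

    # On the receiving system (as root), allow user mounts and
    # delegate the receiving dataset:
    sysctl vfs.usermount=1
    echo vfs.usermount=1 >> /etc/sysctl.conf
    zfs create recvpool/backup
    zfs allow -u someuser create,mount,receive recvpool/backup
    chown someuser /recvpool/backup

    # Then, as someuser (with SSH keys already set up), snapshot and
    # replicate over SSH:
    zfs snapshot -r mypool/home@monday
    zfs send -R mypool/home@monday | \
        ssh someuser@backuphost zfs recv -dvu recvpool/backup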

-- 
Allan Jude
-------------- next part --------------
Index: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml
===================================================================
--- projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml	(revision 44001)
+++ projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml	(working copy)
@@ -483,7 +483,7 @@
       <para>The duration of a scrub depends on the amount of data
 	stored.  Large amounts of data can take a considerable amount
 	of time to verify.  It is also very <acronym>I/O</acronym>
-	intensive, so much so that only one scrub> may be run at any
+	intensive, so much so that only one scrub may be run at any
 	given time.  After the scrub has completed, the status is
 	updated and may be viewed with a status request:</para>
 
@@ -502,9 +502,10 @@
 
 errors: No known data errors</screen>
 
-      <para>The completion time is displayed and helps to ensure data
-	integrity over a long period of time.</para>
-	<!-- WB: what does that mean? -->
+      <para>The completion date of the last scrub operation is
+	displayed to help track when another scrub is required.
+	Routine pool scrubs help protect data from silent corruption
+	and ensure the integrity of the pool.</para>
 
       <para>Refer to &man.zfs.8; and &man.zpool.8; for other
 	<acronym>ZFS</acronym> options.</para>
@@ -581,6 +582,53 @@
 	redundancy.</para>
     </sect2>
 
+    <sect2 xml:id="zfs-zpool-status">
+      <title>Checking the Status of a Pool</title>
+
+      <para>It is important to monitor the status of the
+	<acronym>ZFS</acronym> pool.  If a drive goes offline, a
+	read or write error is detected, or a checksum fails to match,
+	the corresponding counters in the <option>status</option>
+	display will be incremented.  The <option>status</option>
+	output shows the configuration and status of each device in
+	the pool, in addition to the status of the pool as a whole.
+	Also displayed are any actions that may need to be taken, and
+	details about when the last
+	<link linkend="zfs-zpool-scrub"><option>scrub</option></link>
+	operation was completed.</para>
+
+      <screen>&prompt.root; <userinput>zpool status</userinput>
+  pool: mypool
+ state: ONLINE
+  scan: scrub repaired 0 in 2h25m with 0 errors on Sat Sep 14 04:25:50 2013
+config:
+
+        NAME        STATE     READ WRITE CKSUM
+        mypool      ONLINE       0     0     0
+          raidz2-0  ONLINE       0     0     0
+            ada0p3  ONLINE       0     0     0
+            ada1p3  ONLINE       0     0     0
+            ada2p3  ONLINE       0     0     0
+            ada3p3  ONLINE       0     0     0
+            ada4p3  ONLINE       0     0     0
+            ada5p3  ONLINE       0     0     0
+
+errors: No known data errors</screen>
+    </sect2>
+
+    <sect2 xml:id="zfs-zpool-clear">
+      <title>Clearing Errors</title>
+
+      <para>If an error is detected with a device in a pool, the
+	corresponding read, write, or checksum counter will be
+	incremented.  Once the issue is resolved, or to track the
+	rate of errors, <command>zpool clear mypool</command> will
+	reset the counters.  This step can be important for automated
+	scripts that monitor the health of the pool and alert the
+	administrator when there is an error, as further errors may
+	not be reported if the old errors are not cleared.</para>
+    </sect2>
+
     <sect2 xml:id="zfs-zpool-replace">
       <title>Replacing a Functioning Device</title>
 
@@ -622,8 +670,40 @@
 	restored from backups.</para>
     </sect2>
 
+    <sect2 xml:id="zfs-zpool-scrub">
+      <title>Scrubbing a Pool</title>
+
+      <para>It is strongly recommended that a
+	<link linkend="zfs-term-scrub">Scrub</link> operation be
+	performed regularly, ideally at least once each quarter.
+	The <option>scrub</option> operation is very I/O intensive
+	and will reduce performance while it is in progress, so it
+	must be scheduled to avoid high demand periods.</para>
+
+      <screen>&prompt.root; <userinput>zpool scrub mypool</userinput>
+&prompt.root; <userinput>zpool status</userinput>
+  pool: mypool
+ state: ONLINE
+  scan: scrub in progress since Wed Feb 19 20:52:54 2014
+        116G scanned out of 8.60T at 649M/s, 3h48m to go
+        0 repaired, 1.32% done
+config:
+
+        NAME        STATE     READ WRITE CKSUM
+        mypool      ONLINE       0     0     0
+          raidz2-0  ONLINE       0     0     0
+            ada0p3  ONLINE       0     0     0
+            ada1p3  ONLINE       0     0     0
+            ada2p3  ONLINE       0     0     0
+            ada3p3  ONLINE       0     0     0
+            ada4p3  ONLINE       0     0     0
+            ada5p3  ONLINE       0     0     0
+
+errors: No known data errors</screen>
+    </sect2>
+
     <sect2 xml:id="zfs-zpool-selfheal">
-      <title>ZFS Self-Healing</title>
+      <title><acronym>ZFS</acronym> Self-Healing</title>
 
       <para><acronym>ZFS</acronym> utilizes the checkums stored with
 	each data block to provide a feature called self-healing.
@@ -890,17 +970,38 @@
 	need to be imported on an older system before upgrading.  The
 	upgrade process is unreversible and cannot be undone.</para>
 
+      <screen>&prompt.root; <userinput>zpool status</userinput>
+  pool: mypool
+ state: ONLINE
+status: The pool is formatted using a legacy on-disk format.  The pool can
+        still be used, but some features are unavailable.
+action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
+        pool will no longer be accessible on software that does not support feat
+        flags.
+  scan: none requested
+config:
+
+        NAME        STATE     READ WRITE CKSUM
+        mypool      ONLINE       0     0     0
+          mirror-0  ONLINE       0     0     0
+            ada0    ONLINE       0     0     0
+            ada1    ONLINE       0     0     0
+
+errors: No known data errors</screen>
+
       <para>The newer features of <acronym>ZFS</acronym> will not be
 	available until <command>zpool upgrade</command> has
 	completed.  <option>-v</option> can be used to see what new
 	features will be provided by upgrading, as well as which
 	features are already supported by the existing version.</para>
-    </sect2>
 
-    <sect2 xml:id="zfs-zpool-status">
-      <title>Checking the Status of a Pool</title>
-
-      <para></para>
+      <warning>
+	<para>If the system boots from the pool, the boot code must
+	  also be updated to support the new pool version.  Run
+	  <command>gpart bootcode</command> on the partition that
+	  contains the boot code.  See &man.gpart.8; for more
+	  information.</para>
+      </warning>
     </sect2>
 
     <sect2 xml:id="zfs-zpool-history">
@@ -1255,7 +1356,7 @@
     </sect2>
 
     <sect2 xml:id="zfs-zfs-send">
-      <title>ZFS Replication</title>
+      <title><acronym>ZFS</acronym> Replication</title>
 
       <para>Keeping data on a single pool in one location exposes
 	it to risks like theft, natural and human disasters.  Keeping
@@ -1265,12 +1366,13 @@
 	the data to standard output.  Using this technique, it is
 	possible to not only store the data on another pool connected
 	to the local system, but also to send it over a network to
-	another system that runs ZFS.  To achieve this replication,
-	<acronym>ZFS</acronym> uses filesystem snapshots (see the
-	section on <link
-	  linkend="zfs-zfs-snapshot">ZFS snapshots</link>) to send
-	them from one location to another.  The commands for this
-	operation are <command>zfs send</command> and
+	another system that runs <acronym>ZFS</acronym>.  To achieve
+	this replication, <acronym>ZFS</acronym> uses filesystem
+	snapshots (see the section on
+	<link linkend="zfs-zfs-snapshot"><acronym>ZFS</acronym>
+	  snapshots</link>) to send them from one location to another.
+	The commands for this operation are
+	<command>zfs send</command> and
 	<command>zfs receive</command>, respectively.</para>
 
       <para>The following examples will demonstrate the functionality
@@ -1277,7 +1379,7 @@
 	of <acronym>ZFS</acronym> replication using these two
 	pools:</para>
 
-      <screen>&prompt.root; <command>zpool list</command>
+      <screen>&prompt.root; <userinput>zpool list</userinput>
 NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
 backup  960M    77K   896M     0%  1.00x  ONLINE  -
 mypool  984M  43.7M   940M     4%  1.00x  ONLINE  -</screen>
@@ -1297,8 +1399,8 @@
 	<acronym>ZFS</acronym> only replicates snapshots, changes
 	since the most recent snapshot will not be replicated.</para>
 
-      <screen>&prompt.root; <command>zfs snapshot <replaceable>mypool</replaceable>@<replaceable>backup1</replaceable></command>
-&prompt.root; <command>zfs list -t snapshot</command>
+      <screen>&prompt.root; <userinput>zfs snapshot <replaceable>mypool</replaceable>@<replaceable>backup1</replaceable></userinput>
+&prompt.root; <userinput>zfs list -t snapshot</userinput>
 NAME                    USED  AVAIL  REFER  MOUNTPOINT
 mypool@backup1             0      -  43.6M  -
 
@@ -1305,11 +1407,11 @@
       <para>Now that a snapshot exists, <command>zfs send</command>
 	can be used to create a stream representing the contents of
 	the snapshot, which can be stored as a file, or received by
-	another pool.  The stream will be written to standard
-	output, which will need to be redirected to a file or pipe
-	otherwise <acronym>ZFS</acronym> will produce an error:</para>
+	another pool.  The stream will be written to standard output,
+	which will need to be redirected to a file or pipe otherwise
+	<acronym>ZFS</acronym> will produce an error:</para>
 
-      <screen>&prompt.root; <command>zfs send <replaceable>mypool</replaceable>@<replaceable>backup1</replaceable></command>
+      <screen>&prompt.root; <userinput>zfs send <replaceable>mypool</replaceable>@<replaceable>backup1</replaceable></userinput>
 Error: Stream can not be written to a terminal.
 You must redirect standard output.</screen>
 
@@ -1320,8 +1422,8 @@
 	data contained in the snapshot, not only the changes in that
 	snapshot.</para>
 
-      <screen>&prompt.root; <command>zfs send <replaceable>mypool</replaceable>@<replaceable>backup1</replaceable> > <replaceable>/backup/backup1</replaceable></command>
-&prompt.root; <command>zpool list</command>
+      <screen>&prompt.root; <userinput>zfs send <replaceable>mypool</replaceable>@<replaceable>backup1</replaceable> > <replaceable>/backup/backup1</replaceable></userinput>
+&prompt.root; <userinput>zpool list</userinput>
 NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
 backup  960M  63.7M   896M     6%  1.00x  ONLINE  -
 mypool  984M  43.7M   940M     4%  1.00x  ONLINE  -</screen>
@@ -1334,10 +1436,10 @@
 
       <para>Instead of storing the backups as archive files,
 	<acronym>ZFS</acronym> can receive them as a live file system,
-	allowing the backed up data to be accessed directly.
-	To get to the actual data contained in those streams, the
-	reverse operation of <command>zfs send</command> must be used
-	to transform the streams back into files and directories.  The
+	allowing the backed up data to be accessed directly.  To get
+	to the actual data contained in those streams, the reverse
+	operation of <command>zfs send</command> must be used to
+	transform the streams back into files and directories.  The
 	command is <command>zfs receive</command>.  The example below
 	combines <command>zfs send</command> and
 	<command>zfs receive</command> using a pipe to copy the data
@@ -1345,31 +1447,30 @@
 	directly on the receiving pool after the transfer is complete.
 	A dataset can only be replicated to an empty dataset.</para>
 
-      <screen>&prompt.root; <command>zfs snapshot <replaceable>mypool</replaceable>@<replaceable>replica1</replaceable></command>
-&prompt.root; <command>zfs send -v <replaceable>mypool</replaceable>@<replaceable>replica1</replaceable> | zfs receive <replaceable>backup/mypool</replaceable></command>
+      <screen>&prompt.root; <userinput>zfs snapshot <replaceable>mypool</replaceable>@<replaceable>replica1</replaceable></userinput>
+&prompt.root; <userinput>zfs send -v <replaceable>mypool</replaceable>@<replaceable>replica1</replaceable> | zfs receive <replaceable>backup/mypool</replaceable></userinput>
 send from @ to mypool@replica1 estimated size is 50.1M
 total estimated size is 50.1M
 TIME        SENT   SNAPSHOT
 
-&prompt.root; <command>zpool list</command>
+&prompt.root; <userinput>zpool list</userinput>
 NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
 backup  960M  63.7M   896M     6%  1.00x  ONLINE  -
 mypool  984M  43.7M   940M     4%  1.00x  ONLINE  -</screen>
 
       <sect3 xml:id="zfs-send-incremental">
-	<title>ZFS Incremental Backups</title>
+	<title><acronym>ZFS</acronym> Incremental Backups</title>
 
-	<para>Another feature of <command>zfs send</command> is that
-	  it can determine the difference between two snapshots to
-	  only send what has changed between the two.  This results in
-	  saving disk space and time for the transfer to another pool.
-	  For example:</para>
+	<para><command>zfs send</command> can also determine the
+	  difference between two snapshots and only send the changes
+	  between the two.  This saves disk space and transfer
+	  time.  For example:</para>
 
-	<screen>&prompt.root; <userinput>zfs snapshot <replaceable>mypool</replaceable>@<replaceable>backup2</replaceable></userinput>
+	<screen>&prompt.root; <userinput>zfs snapshot <replaceable>mypool</replaceable>@<replaceable>replica2</replaceable></userinput>
 &prompt.root; <userinput>zfs list -t snapshot</userinput>
 NAME                    USED  AVAIL  REFER  MOUNTPOINT
-mypool@backup1         5.72M      -  43.6M  -
-mypool@backup2             0      -  44.1M  -
+mypool@replica1        5.72M      -  43.6M  -
+mypool@replica2            0      -  44.1M  -
 &prompt.root; <userinput>zpool list</userinput>
 NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
 backup  960M  61.7M   898M     6%  1.00x  ONLINE  -
@@ -1376,77 +1477,59 @@
 mypool  960M  50.2M   910M     5%  1.00x  ONLINE  -</screen>
 
 	<para>A second snapshot called
-	  <replaceable>backup2</replaceable> was created.  This second
-	  snapshot contains only the changes on the ZFS filesystem
-	  between now and the last snapshot,
-	  <replaceable>backup1</replaceable>.  Using the
-	  <literal>-i</literal> flag to <command>zfs send</command>
-	  and providing both snapshots, an incremental snapshot can be
-	  transferred, containing only the data that has
-	  changed.</para>
+	  <replaceable>replica2</replaceable> was created.  This
+	  second snapshot contains only the changes on the
+	  <acronym>ZFS</acronym> filesystem between now and the
+	  previous snapshot, <replaceable>replica1</replaceable>.
+	  Using <option>-i</option> with <command>zfs send</command>
+	  and indicating the pair of snapshots, an incremental replica
+	  stream can be generated, containing only the data that has
+	  changed.  This can only succeed if the initial snapshot
+	  already exists on the receiving side.</para>
 
-	<screen>&prompt.root; <userinput>zfs send -i <replaceable>mypool</replaceable>@<replaceable>backup1</replaceable> <replaceable>mypool</replaceable>@<replaceable>backup2</replaceable> > <replaceable>/backup/incremental</replaceable></userinput>
+	<screen>&prompt.root; <userinput>zfs send -v -i <replaceable>mypool</replaceable>@<replaceable>replica1</replaceable> <replaceable>mypool</replaceable>@<replaceable>replica2</replaceable> | zfs receive <replaceable>backup/mypool</replaceable></userinput>
+send from @replica1 to mypool@replica2 estimated size is 5.02M
+total estimated size is 5.02M
+TIME        SENT   SNAPSHOT
+
 &prompt.root; <userinput>zpool list</userinput>
 NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
 backup  960M  80.8M   879M     8%  1.00x  ONLINE  -
 mypool  960M  50.2M   910M     5%  1.00x  ONLINE  -
-&prompt.root; <userinput>ls -lh /backup</userinput>
-total 82247
-drwxr-xr-x     1 root   wheel      61M Dec  3 11:36 backup1
-drwxr-xr-x     1 root   wheel      18M Dec  3 11:36 incremental</screen>
 
-	<para>The incremental stream was successfully transferred and
-	  the file on disk is smaller than any of the two snapshots
-	  <replaceable>backup1</replaceable> or
-	  <replaceable>backup2</replaceable>.  This shows that it only
-	  contains the differences, which is much faster to transfer
-	  and saves disk space by not copying the complete pool each
-	  time.  This is useful when having to rely on slow networks
-	  or when costs per transferred byte have to be
-	  considered.</para>
-      </sect3>
+&prompt.root; <userinput>zfs list</userinput>
+NAME                         USED  AVAIL  REFER  MOUNTPOINT
+backup                      55.4M   240G   152K  /backup
+backup/mypool               55.3M   240G  55.2M  /backup/mypool
+mypool                      55.6M  11.6G  55.0M  /mypool
 
-      <sect3 xml:id="zfs-send-recv">
-	<title>Receiving ZFS Data Streams</title>
+&prompt.root; <userinput>zfs list -t snapshot</userinput>
+NAME                                         USED  AVAIL  REFER  MOUNTPOINT
+backup/mypool@replica1                       104K      -  50.2M  -
+backup/mypool@replica2                          0      -  55.2M  -
+mypool@replica1                             29.9K      -  50.0M  -
+mypool@replica2                                 0      -  55.0M  -
 
-	<para>Up until now, only the data streams in binary form were
-	  sent to other pools.  To get to the actual data contained in
-	  those streams, the reverse operation of <command>zfs
-	    send</command> has to be used to transform the streams
-	  back into files and directories.  The command is called
-	  <command>zfs receive</command> and has also a short version:
-	  <command>zfs recv</command>.  The example below combines
-	  <command>zfs send</command> and <command>zfs
-	    receive</command> using a pipe to copy the data from one
-	  pool to another.  This way, the data can be used directly on
-	  the receiving pool after the transfer is complete.</para>
+	<para>The incremental stream was successfully transferred and
+	  only the data that has changed was replicated, rather than
+	  the entirety of <replaceable>replica1</replaceable> and
+	  <replaceable>replica2</replaceable>, which both contain
+	  mostly the same data.  The transmitted data contains only
+	  the differences, which takes much less time to transfer and
+	  saves disk space by not copying the complete pool each time.
+	  This is useful when relying on slow networks or when costs
+	  per transferred byte have to be considered.</para>
 
-	<screen>&prompt.root; <userinput>zfs send <replaceable>mypool</replaceable>@<replaceable>backup1</replaceable> | zfs receive <replaceable>backup/backup1</replaceable></userinput>
-&prompt.root; <userinput>ls -lh /backup</userinput>
-total 431
-drwxr-xr-x     4219 root   wheel      4.1k Dec  3 11:34 backup1</screen>
-
-	<para>The directory <replaceable>backup1</replaceable> does
-	  contain all the data, which were part of the snapshot of the
-	  same name.  Since this originally was a complete filesystem
-	  snapshot, the listing of all ZFS filesystems for this pool
-	  is also updated and shows the
-	  <replaceable>backup1</replaceable> entry.</para>
-
-	<screen>&prompt.root; <userinput>zfs list</userinput>
-NAME                    USED  AVAIL  REFER  MOUNTPOINT
-backup                 43.7M   884M    32K  /backup
-backup/backup1         43.5M   884M  43.5M  /backup/backup1
-mypool                 50.0M   878M  44.1M  /mypool</screen>
-
-	<para>A new filesystem, <replaceable>backup1</replaceable> is
-	  available and has the same size as the snapshot it was
-	  created from.  It is up to the user to decide whether the
-	  streams should be transformed back into filesystems directly
-	  to have a cold-standby for emergencies or to just keep the
-	  streams and transform them later when required.  Sending and
-	  receiving can be automated so that regular backups are
-	  created on a second pool for backup purposes.</para>
+	<para>A new filesystem,
+	  <replaceable>backup/mypool</replaceable>, is
+	  available and has all of the files and data from the pool
+	  <replaceable>mypool</replaceable>.  If <option>-p</option>
+	  is specified, the properties of the dataset will be copied,
+	  including compression settings, quotas, and mount points.
+	  If <option>-R</option> is specified, all child datasets of
+	  the indicated dataset will be copied, along with all of
+	  their properties.  Sending and receiving can be automated
+	  so that regular backups are created on the second pool.</para>
       </sect3>
 
       <sect3 xml:id="zfs-send-ssh">
@@ -1454,27 +1537,26 @@
 
 	<para>Although sending streams to another system over the
 	  network is a good way to keep a remote backup, it does come
-	  with a drawback.  All the data sent over the network link is
-	  not encrypted, allowing anyone to intercept and transform
-	  the streams back into data without the knowledge of the
-	  sending user.  This is an unacceptable situation, especially
-	  when sending the streams over the internet to a remote host
-	  with multiple hops in between where such malicious data
-	  collection can occur.  Fortunately, there is a solution
-	  available to the problem that does not require the
-	  encryption of the data on the pool itself.  To make sure the
-	  network connection between both systems is securely
+	  with a drawback.  Data sent over the network link is not
+	  encrypted, allowing anyone to intercept and transform the
+	  streams back into data without the knowledge of the sending
+	  user.  This is undesirable, especially when sending the
+	  streams over the internet to a remote host.  To make sure
+	  the network connection between both systems is securely
 	  encrypted, <application>SSH</application> can be used.
-	  Since ZFS only requires the stream to be redirected from
-	  standard output, it is relatively easy to pipe it through
-	  SSH.</para>
+	  Since <acronym>ZFS</acronym> only requires the stream to be
+	  redirected from standard output, it is relatively easy to
+	  pipe it through <application>SSH</application>.  To keep
+	  the contents of the file system encrypted on the remote
+	  system, consider using <link
+	    xlink:href="http://wiki.freebsd.org/PEFS">PEFS</link>.</para>
 
 	<para>A few settings and security precautions have to be made
-	  before this can be done.  Since this chapter is about ZFS
-	  and not about configuring SSH, it only lists the things
-	  required to perform the encrypted <command>zfs
-	  send</command> operation.  The following settings should
-	  be made:</para>
+	  before this can be done.  Since this chapter is about
+	  <acronym>ZFS</acronym> and not about configuring SSH, it
+	  only lists the things required to perform the
+	  <command>zfs send</command> operation.  The following
+	  configuration is required:</para>
 
 	<itemizedlist>
 	  <listitem>
@@ -1483,50 +1565,74 @@
 	  </listitem>
 
 	  <listitem>
-	    <para>The <systemitem class="username">root</systemitem>
-	      user needs to be able to log into the receiving system
-	      because only that user can send streams from the pool.
-	      <application>SSH</application> should be configured so
-	      that <systemitem class="username">root</systemitem> can
-	      only execute <command>zfs recv</command> and nothing
-	      else to prevent users that might have hijacked this
-	      account from doing any harm on the system.</para>
+	    <para>Normally, the privileges of the
+	      <systemitem class="username">root</systemitem> user are
+	      required to send and receive the <acronym>ZFS</acronym>
+	      stream.  This requires logging in to the receiving
+	      system as
+	      <systemitem class="username">root</systemitem>, which is
+	      disabled by default for security reasons.  Rather than
+	      enabling root login, it is possible to use the <link
+		linkend="zfs-zfs-allow">ZFS Delegation</link> system
+	      to allow a non-root user on each system to perform the
+	      respective send and receive operations.</para>
 	  </listitem>
+
+	  <listitem>
+	    <para>On the sending system:</para>
+	    <screen>&prompt.root; <userinput>zfs allow -u someuser send,snapshot mypool</userinput></screen>
+	  </listitem>
+
+	  <listitem>
+	    <para>In order for the pool to be mounted, the
+	      unprivileged user must own the directory, and regular
+	      users must be allowed to mount file systems.  On the
+	      receiving system:</para>
+
+	    <screen>&prompt.root; <userinput>sysctl vfs.usermount=1</userinput>
+vfs.usermount: 0 -> 1
+&prompt.root; <userinput>echo vfs.usermount=1 >> /etc/sysctl.conf</userinput>
+&prompt.root; <userinput>zfs create recvpool/backup</userinput>
+&prompt.root; <userinput>zfs allow -u someuser create,mount,receive recvpool/backup</userinput>
+&prompt.root; <userinput>chown someuser /recvpool/backup</userinput></screen>
+	  </listitem>
 	</itemizedlist>
 
-	<para>After these security measures have been put into place
-	  and <systemitem class="username">root</systemitem> can
-	  connect via passwordless <application>SSH</application> to
-	  the receiving system, the encrypted stream can be sent using
-	  the following commands:</para>
+	<para>After the above procedure and the setup of
+	  <application>SSH</application> keys, the unprivileged user
+	  on the sending machine can connect via passwordless
+	  <application>SSH</application> to the receiving system, and
+	  the pool can be replicated using the following
+	  commands:</para>
 
-	<screen>&prompt.root; <userinput>zfs snapshot -r <replaceable>mypool/home</replaceable>@<replaceable>monday</replaceable></userinput>
-&prompt.root; <userinput>zfs send -R <replaceable>mypool/home</replaceable>@<replaceable>monday</replaceable> | ssh <replaceable>backuphost</replaceable> zfs recv -dvu <replaceable>backuppool</replaceable></userinput></screen>
+	<screen>&prompt.user; <userinput>zfs snapshot -r <replaceable>mypool/home</replaceable>@<replaceable>monday</replaceable></userinput>
+&prompt.user; <userinput>zfs send -R <replaceable>mypool/home</replaceable>@<replaceable>monday</replaceable> | ssh <replaceable>someuser@backuphost</replaceable> zfs recv -dvu <replaceable>recvpool/backup</replaceable></userinput></screen>
 
 	<para>The first command creates a recursive snapshot (option
-	  <literal>-r</literal>) called
-	  <replaceable>monday</replaceable> of the filesystem named
+	  <option>-r</option>) called
+	  <replaceable>monday</replaceable> of the filesystem dataset
 	  <replaceable>home</replaceable> that resides on the pool
 	  <replaceable>mypool</replaceable>.  The second command uses
-	  the <literal>-R</literal> option to <command>zfs
-	    send</command>, which makes sure that all datasets and
-	  filesystems along with their children are included in the
-	  transmission of the data stream.  This also includes
+	  <option>-R</option> to <command>zfs send</command>, which
+	  makes sure that the dataset and all child datasets are
+	  included in the transmitted data stream.  This also includes
 	  snaphots, clones and settings on individual filesystems as
-	  well.  The output is piped directly to SSH that uses a short
-	  name for the receiving host called
-	  <replaceable>backuphost</replaceable>.  A fully qualified
-	  domain name or IP address can also be used here.  The SSH
-	  command to execute is <command>zfs recv</command> to a pool
-	  called <replaceable>backuppool</replaceable>.  Using the
-	  <literal>-d</literal> option with <command>zfs
-	    recv</command> will remove the original name of the pool
-	  on the receiving side and just takes the name of the
-	  snapshot instead.  The <literal>-u</literal> option makes
-	  sure that the filesystem is not mounted on the receiving
-	  side.  More information about the transfer—like the
-	  time that has passed—is displayed when the
-	  <literal>-v</literal> option is provided.</para>
+	  well.  The output is piped to the waiting
+	  <command>zfs receive</command> on the remote host
+	  <replaceable>backuphost</replaceable> via
+	  <application>SSH</application>.  A fully qualified domain
+	  name or IP address should be used here.  The receiving
+	  machine will write the data to the
+	  <replaceable>backup</replaceable> dataset on the
+	  <replaceable>recvpool</replaceable> pool.  Using
+	  <option>-d</option> with <command>zfs recv</command>
+	  removes the original name of the pool on the receiving
+	  side and uses the name of the snapshot instead.
+	  <option>-u</option> causes the filesystems to not be
+	  mounted on the receiving side.  Details about the transfer
+	  in progress, including the time elapsed and the amount of
+	  data sent, are displayed when <option>-v</option>
+	  is specified.</para>
       </sect3>
     </sect2>
 
@@ -1676,12 +1782,6 @@
 &prompt.root; <userinput>zfs get refreservation storage/home/bob</userinput></screen>
     </sect2>
 
-    <sect2 xml:id="zfs-zfs-compression">
-      <title>Compression</title>
-
-      <para></para>
-    </sect2>
-
     <sect2 xml:id="zfs-zfs-deduplication">
       <title>Deduplication</title>
 
@@ -1778,8 +1878,80 @@
 	due to the much lower memory requirements.</para>
     </sect2>
 
+    <sect2 xml:id="zfs-zfs-compression">
+      <title>Compression</title>
+
+      <para><acronym>ZFS</acronym> provides transparent compression.
+	Compressing data at the block level as it is written not only
+	saves storage space, but can also result in higher disk
+	throughput than would otherwise be possible.  If data is
+	compressed by 25%, then the compressed data can be written to
+	the disk at the same rate as the uncompressed version,
+	resulting in an effective write speed of roughly 133% of what
+	would normally be possible.  Compression can also be a great
+	alternative to
+	<link linkend="zfs-zfs-deduplication">Deduplication</link>
+	because it does not require additional memory to store a
+	<acronym>DDT</acronym>.</para>
+
+      <para><acronym>ZFS</acronym> offers a number of different
+	compression algorithms to choose from, each with different
+	trade-offs.  With the introduction of <acronym>LZ4</acronym>
+	compression in <acronym>ZFS</acronym> v5000, it is possible
+	to enable compression for the entire pool without the large
+	performance trade-off of other algorithms.  The biggest
+	advantage to <acronym>LZ4</acronym> is the
+	<literal>early abort</literal> feature.  If
+	<acronym>LZ4</acronym> does not achieve at least 12.5%
+	compression in the first part of the data, the block is
+	written uncompressed to avoid wasting CPU cycles trying to
+	compress data that is either already compressed or
+	incompressible.  For details about the different compression
+	algorithms available in <acronym>ZFS</acronym>, see the
+	<link linkend="zfs-term-compression">Compression</link> entry
+	in the terminology section.</para>
+
+      <para>The administrator can monitor the effectiveness of
+	<acronym>ZFS</acronym> compression using a number of dataset
+	properties.</para>
+
+      <screen>&prompt.root; <userinput>zfs get used,compressratio,compression,logicalused mypool/compressed_dataset</userinput>
+NAME                       PROPERTY          VALUE     SOURCE
+mypool/compressed_dataset  used              449G      -
+mypool/compressed_dataset  compressratio     1.11x     -
+mypool/compressed_dataset  compression       lz4       local
+mypool/compressed_dataset  logicalused       496G      -</screen>
+
+      <para>The dataset is currently using 449 GB of storage
+	space (the <literal>used</literal> property).  If this
+	dataset were not compressed, it would have taken 496 GB
+	of space (the <literal>logicalused</literal> property).
+	This results in a 1.11:1 compression ratio.</para>
+
+      <para>Compression can have an unexpected side effect when
+	combined with
+	<link linkend="zfs-term-userquota">User Quotas</link>.
+	<acronym>ZFS</acronym> user quotas restrict how much space
+	a user can consume on a dataset; however, the measurements
+	are based on the space used after compression.  If a
+	user has a quota of 10 GB and writes 10 GB of
+	compressible data, they will still be able to store additional
+	data.  If they later update a file, say a database, with more
+	or less compressible data, the amount of space available to
+	them will change.  This can result in the odd situation where
+	a user did not increase the actual amount of data (the
+	<literal>logicalused</literal> property), but the change in
+	compression means they have now reached their quota.</para>
+
+      <para>Compression can have a similar unexpected interaction with
+	backups.  Quotas are often used to limit how much data can be
+	stored to ensure there is sufficient backup space available.
+	However, since quotas do not consider compression, more data
+	may be written than will fit in uncompressed backups.</para>
+    </sect2>
+
     <sect2 xml:id="zfs-zfs-jail">
-      <title>ZFS and Jails</title>
+      <title><acronym>ZFS</acronym> and Jails</title>
 
       <para><command>zfs jail</command> and the corresponding
 	<literal>jailed</literal> property are used to delegate a
@@ -1843,22 +2015,22 @@
   </sect1>
 
   <sect1 xml:id="zfs-advanced">
-    <title>ZFS Advanced Topics</title>
+    <title><acronym>ZFS</acronym> Advanced Topics</title>
 
     <sect2 xml:id="zfs-advanced-tuning">
-      <title>ZFS Tuning</title>
+      <title><acronym>ZFS</acronym> Tuning</title>
 
       <para></para>
     </sect2>
 
     <sect2 xml:id="zfs-advanced-booting">
-      <title>Booting Root on ZFS</title>
+      <title>Booting Root on <acronym>ZFS</acronym></title>
 
       <para></para>
     </sect2>
 
     <sect2 xml:id="zfs-advanced-beadm">
-      <title>ZFS Boot Environments</title>
+      <title><acronym>ZFS</acronym> Boot Environments</title>
 
       <para></para>
     </sect2>
@@ -1870,7 +2042,7 @@
     </sect2>
 
     <sect2 xml:id="zfs-advanced-i386">
-      <title>ZFS on i386</title>
+      <title><acronym>ZFS</acronym> on i386</title>
 
       <para>Some of the features provided by <acronym>ZFS</acronym>
 	are memory intensive, and may require tuning for maximum
@@ -1942,38 +2114,46 @@
     <itemizedlist>
       <listitem>
 	<para><link xlink:href="https://wiki.freebsd.org/ZFS">FreeBSD
-	    Wiki - ZFS</link></para>
+	    Wiki - <acronym>ZFS</acronym></link></para>
       </listitem>
 
       <listitem>
 	<para><link
 	    xlink:href="https://wiki.freebsd.org/ZFSTuningGuide">FreeBSD
-	    Wiki - ZFS Tuning</link></para>
+	    Wiki - <acronym>ZFS</acronym> Tuning</link></para>
       </listitem>
 
       <listitem>
 	<para><link
 	    xlink:href="http://wiki.illumos.org/display/illumos/ZFS">Illumos
-	    Wiki - ZFS</link></para>
+	    Wiki - <acronym>ZFS</acronym></link></para>
       </listitem>
 
       <listitem>
 	<para><link
 	    xlink:href="http://docs.oracle.com/cd/E19253-01/819-5461/index.html">Oracle
-	    Solaris ZFS Administration Guide</link></para>
+	    Solaris <acronym>ZFS</acronym> Administration
+	    Guide</link></para>
       </listitem>
 
       <listitem>
 	<para><link
-	    xlink:href="http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide">ZFS
+	    xlink:href="http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide"><acronym>ZFS</acronym>
 	    Evil Tuning Guide</link></para>
       </listitem>
 
       <listitem>
 	<para><link
-	    xlink:href="http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide">ZFS
+	    xlink:href="http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide"><acronym>ZFS</acronym>
 	    Best Practices Guide</link></para>
       </listitem>
+
+      <listitem>
+	<para><link
+	    xlink:href="https://calomel.org/zfs_raid_speed_capacity.html">Calomel
+	    Blog - <acronym>ZFS</acronym> Raidz Performance, Capacity
+	    and Integrity</link></para>
+      </listitem>
     </itemizedlist>
 
     <sect2 xml:id="zfs-history">
@@ -2449,10 +2629,68 @@
 	      and write throughput, as only the smaller compressed
 	      version of the file needs to be read or written.
 
-	      <note>
-		<para><acronym>LZ4</acronym> compression is only
-		  available after &os; 9.2.</para>
-	      </note></entry>
+	      <itemizedlist>
+		<listitem xml:id="zfs-term-compression-lz4">
+		  <para><emphasis><acronym>LZ4</acronym></emphasis> -
+		    was added in <acronym>ZFS</acronym> pool version
+		    5000 (feature flags), and is now the recommended
+		    compression algorithm.  <acronym>LZ4</acronym>
+		    compresses approximately 50% faster than
+		    <acronym>LZJB</acronym> when operating on
+		    compressible data, and is over three times faster
+		    when operating on incompressible data.
+		    <acronym>LZ4</acronym> also decompresses
+		    approximately 80% faster than
+		    <acronym>LZJB</acronym>.  On modern CPUs,
+		    <acronym>LZ4</acronym> can often compress at over
+		    500 MB/s, and decompress at over
+		    1.5 GB/s (per single CPU core).</para>
+
+		  <note>
+		    <para><acronym>LZ4</acronym> compression is
+		      only available after &os; 9.2.</para>
+		  </note>
+		</listitem>
+
+		<listitem xml:id="zfs-term-compression-lzjb">
+		  <para><emphasis><acronym>LZJB</acronym></emphasis> -
+		    is the default compression algorithm in
+		    <acronym>ZFS</acronym>, created by Jeff Bonwick,
+		    one of the original creators of
+		    <acronym>ZFS</acronym>.  <acronym>LZJB</acronym>
+		    offers good compression with less
+		    <acronym>CPU</acronym> overhead compared to
+		    <acronym>GZIP</acronym>.  In the future, the
+		    default compression algorithm will likely change
+		    to <acronym>LZ4</acronym>.</para>
+		</listitem>
+
+		<listitem xml:id="zfs-term-compression-gzip">
+		  <para><emphasis><acronym>GZIP</acronym></emphasis> -
+		    is a popular stream compression algorithm and is
+		    available in <acronym>ZFS</acronym>.  One of the
+		    main advantages of using <acronym>GZIP</acronym>
+		    is its configurable level of compression.  When
+		    setting the <literal>compression</literal>
+		    property, the administrator can choose the level
+		    of compression to use, ranging from
+		    <literal>gzip-1</literal>, the lowest level of
+		    compression, to <literal>gzip-9</literal>, the
+		    highest level of compression.  This gives the
+		    administrator control over how much
+		    <acronym>CPU</acronym> time to trade for saved
+		    disk space.</para>
+		</listitem>
+
+		<listitem xml:id="zfs-term-compression-zle">
+		  <para><emphasis><acronym>ZLE</acronym></emphasis> -
+		    (zero length encoding) is a special compression
+		    algorithm that only compresses continuous runs of
+		    zeros.  This compression algorithm is only useful
+		    if the dataset contains large areas where only
+		    the zero byte is written.</para>
+		</listitem>
+	      </itemizedlist></entry>
 	  </row>
 
 	  <row>
@@ -2511,7 +2749,9 @@
 	      at least once each quarter.  Checksums of each block are
 	      tested as they are read in normal use, but a scrub
 	      operation makes sure even infrequently used blocks are
-	      checked for silent corruption.</entry>
+	      checked for silent corruption, protecting the integrity
+	      of the data, especially in archival storage
+	      situations.</entry>
 	  </row>
 
 	  <row>