svn commit: r43071 - projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs

Warren Block wblock at FreeBSD.org
Tue Oct 29 06:21:24 UTC 2013


Author: wblock
Date: Tue Oct 29 06:21:23 2013
New Revision: 43071
URL: http://svnweb.freebsd.org/changeset/doc/43071

Log:
  Make an edit pass up to line 696.  Fix spelling errors, remove redundancy,
  reorder passive sentences, add markup.

Modified:
  projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml

Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml
==============================================================================
--- projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml	Tue Oct 29 05:25:31 2013	(r43070)
+++ projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml	Tue Oct 29 06:21:23 2013	(r43071)
@@ -239,15 +239,15 @@ example/data        17547008       0 175
     <sect2>
       <title><acronym>ZFS</acronym> RAID-Z</title>
 
-      <para>There is no way to prevent a disk from failing.  One
-	method of avoiding data loss due to a failed hard disk is to
+      <para>Disks fail.  One
+	method of avoiding data loss from disk failure is to
 	implement <acronym>RAID</acronym>.  <acronym>ZFS</acronym>
 	supports this feature in its pool design.
-	<acronym>RAID-Z</acronym> pools require 3 or more disks but
+	<acronym>RAID-Z</acronym> pools require three or more disks but
 	yield more usable space than mirrored pools.</para>
 
-      <para>To create a <acronym>RAID-Z</acronym> pool, issue the
-	following command and specify the disks to add to the
+      <para>To create a <acronym>RAID-Z</acronym> pool, use this
+	command, specifying the disks to add to the
 	pool:</para>
 
       <screen>&prompt.root; <userinput>zpool create storage raidz da0 da1 da2</userinput></screen>
@@ -270,8 +270,8 @@ example/data        17547008       0 175
 
       <screen>&prompt.root; <userinput>zfs create storage/home</userinput></screen>
 
-      <para>It is now possible to enable compression and keep extra
-	copies of directories and files using the following
+      <para>Now compression can be enabled, and extra
+	copies of directories and files kept, with these
 	commands:</para>
 
       <screen>&prompt.root; <userinput>zfs set copies=2 storage/home</userinput>
@@ -286,11 +286,11 @@ example/data        17547008       0 175
 &prompt.root; <userinput>ln -s /storage/home /home</userinput>
 &prompt.root; <userinput>ln -s /storage/home /usr/home</userinput></screen>
 
-      <para>Users should now have their data stored on the freshly
+      <para>Users now have their data stored on the freshly
 	created <filename class="directory">/storage/home</filename>.
 	Test by adding a new user and logging in as that user.</para>
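+
+      <para>As one possible test (the user name
+	<literal>testuser</literal> is only a placeholder), a new
+	account could be created and logged into with:</para>
+
+      <screen>&prompt.root; <userinput>pw useradd testuser -m</userinput>
+&prompt.root; <userinput>su - testuser</userinput></screen>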
 
-      <para>Try creating a snapshot which may be rolled back
+      <para>Try creating a snapshot which can be rolled back
 	later:</para>
 
       <screen>&prompt.root; <userinput>zfs snapshot storage/home at 08-30-08</userinput></screen>
@@ -299,11 +299,11 @@ example/data        17547008       0 175
 	file system, not a home directory or a file.  The
 	<literal>@</literal> character is a delimiter between the
 	file system or volume name and the snapshot name.  When a user's home
-	directory gets trashed, restore it with:</para>
+	directory is accidentally deleted, restore it with:</para>
 
       <screen>&prompt.root; <userinput>zfs rollback storage/home at 08-30-08</userinput></screen>
 
-      <para>To get a list of all available snapshots, run
+      <para>To list all available snapshots, run
 	<command>ls</command> in the file system's
 	<filename class="directory">.zfs/snapshot</filename>
 	directory.  For example, to see the previously taken
@@ -312,8 +312,8 @@ example/data        17547008       0 175
       <screen>&prompt.root; <userinput>ls /storage/home/.zfs/snapshot</userinput></screen>
 
       <para>It is possible to write a script to perform regular
-	snapshots on user data.  However, over time, snapshots may
-	consume a great deal of disk space.  The previous snapshot may
+	snapshots on user data.  However, over time, snapshots can
+	consume a great deal of disk space.  The previous snapshot can
 	be removed using the following command:</para>
 
       <screen>&prompt.root; <userinput>zfs destroy storage/home at 08-30-08</userinput></screen>
@@ -344,9 +344,8 @@ storage       26320512       0 26320512 
 storage/home  26320512       0 26320512     0%    /home</screen>
 
       <para>This completes the <acronym>RAID-Z</acronym>
-	configuration.  To get status updates about the file systems
-	created during the nightly &man.periodic.8; runs, issue the
-	following command:</para>
+	configuration.  Daily status updates for the file systems
+	can be generated as part of the nightly &man.periodic.8;
+	runs:</para>
 
       <screen>&prompt.root; <userinput>echo 'daily_status_zfs_enable="YES"' >> /etc/periodic.conf</userinput></screen>
     </sect2>
@@ -362,7 +361,7 @@ storage/home  26320512       0 26320512 
       <screen>&prompt.root; <userinput>zpool status -x</userinput></screen>
 
       <para>If all pools are healthy and everything is normal, the
-	following message will be returned:</para>
+	message confirms it:</para>
 
       <screen>all pools are healthy</screen>
 
@@ -389,21 +388,21 @@ config:
 errors: No known data errors</screen>
 
       <para>This indicates that the device was previously taken
-	offline by the administrator using the following
+	offline by the administrator with this
 	command:</para>
 
       <screen>&prompt.root; <userinput>zpool offline storage da1</userinput></screen>
 
-      <para>It is now possible to replace <devicename>da1</devicename>
-	after the system has been powered down.  When the system is
-	back online, the following command may issued to replace the
-	disk:</para>
+      <para>Now the system can be powered down to replace
+	<devicename>da1</devicename>.  When the system is
+	back online, the failed disk can be replaced
+	in the pool:</para>
 
       <screen>&prompt.root; <userinput>zpool replace storage da1</userinput></screen>
 
       <para>From here, the status may be checked again, this time
-	without the <option>-x</option> flag to get state
-	information:</para>
+	without <option>-x</option> so that full status is
+	shown, even for a healthy pool:</para>
 
       <screen>&prompt.root; <userinput>zpool status storage</userinput>
  pool: storage
@@ -419,8 +418,7 @@ config:
 	    da2     ONLINE       0     0     0
 
 errors: No known data errors</screen>
-
-      <para>As shown from this example, everything appears to be
+      <para>In this example, everything is
 	normal.</para>
     </sect2>
 
@@ -434,20 +432,20 @@ errors: No known data errors</screen>
 
       <screen>&prompt.root; <userinput>zfs set checksum=off storage/home</userinput></screen>
 
-      <para>Doing so is <emphasis>not</emphasis> recommended as
-	checksums take very little storage space and are used to check
-	data integrity using checksum verification in a process is
-	known as <quote>scrubbing.</quote> To verify the data
-	integrity of the <literal>storage</literal> pool, issue this
+      <para>Doing so is <emphasis>not</emphasis> recommended.
+	Checksums take very little storage space and provide
+	data integrity.  Checksum verification is
+	known as <quote>scrubbing</quote>.  Verify the data
+	integrity of the <literal>storage</literal> pool with this
 	command:</para>
 
       <screen>&prompt.root; <userinput>zpool scrub storage</userinput></screen>
 
-      <para>This process may take considerable time depending on the
-	amount of data stored.  It is also very <acronym>I/O</acronym>
-	intensive, so much so that only one scrub may be run at any
+      <para>The duration of a scrub depends on the
+	amount of data stored.  Large amounts of data can take a
+	considerable time to verify.  It is also very <acronym>I/O</acronym>
+	intensive, so much so that only one scrub may be run at any
 	given time.  After the scrub has completed, the status is
-	updated and may be viewed by issuing a status request:</para>
+	updated and may be viewed with a status request:</para>
 
       <screen>&prompt.root; <userinput>zpool status storage</userinput>
  pool: storage
@@ -466,6 +464,7 @@ errors: No known data errors</screen>
 
       <para>The completion time is displayed and helps to ensure data
 	integrity over a long period of time.</para>
+	<!-- WB: what does that mean? -->
 
       <para>Refer to &man.zfs.8; and &man.zpool.8; for other
 	<acronym>ZFS</acronym> options.</para>
@@ -484,14 +483,14 @@ errors: No known data errors</screen>
 	involves making a number of decisions that are relatively
 	permanent because the structure of the pool cannot be
 	changed after the pool has been created.  The most important
-	decision is what type(s) of vdevs to group the physical disks
+	decision is what types of vdevs to group the physical disks
 	into.  See the list of <link
 	linkend="zfs-term-vdev">vdev types</link> for details about
-	the possible options.  Once the pool has been created, most
+	the possible options.  After the pool has been created, most
 	vdev types do not allow additional disks to be added to the
 	vdev.  The exceptions are mirrors, which allow additional
 	disks to be added to the vdev, and stripes, which can be
-	upgraded to mirrors by attaching an additional to the vdev.
+	upgraded to mirrors by attaching an additional disk to the vdev.
 	Although additional vdevs can be added to a pool, the layout
 	of the pool cannot be changed once the pool has been created,
 	instead the data must be backed up and the pool
@@ -503,22 +502,22 @@ errors: No known data errors</screen>
     <sect2 id="zfs-zpool-attach">
       <title>Adding & Removing Devices</title>
 
-      <para>Adding additional disks to a zpool can be broken down into
-	two separate cases, attaching an additional disk to an
+      <para>Adding disks to a zpool can be broken down into
+	two separate cases: attaching a disk to an
 	existing vdev with the <literal>zpool attach</literal>
-	command, or adding additional vdevs to the pool with the
+	command, or adding vdevs to the pool with the
 	<literal>zpool add</literal> command.  Only some
 	<link linkend="zfs-term-vdev">vdev types</link> allow disks to
-	be added to the vdev after the fact.</para>
+	be added to the vdev after creation.</para>
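+
+      <para>For illustration (the pool name <literal>mypool</literal>
+	and the disks <devicename>da1</devicename> and
+	<devicename>da2</devicename> are placeholders), a disk could
+	be attached to an existing single-disk vdev, turning the
+	stripe into a mirror:</para>
+
+      <screen>&prompt.root; <userinput>zpool attach mypool da1 da2</userinput></screen>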
 
-      <para>When adding additional disks to the existing vdev is not
-	an option, such as in the case of RAID-Z, the other option is
-	to add an additional vdev to the pool.  It is possible, but
+      <para>When adding disks to the existing vdev is not
+	an option, as in the case of RAID-Z, the alternative is
+	to add a vdev to the pool.  It is possible, but
 	discouraged, to mix vdev types.  ZFS stripes data across each
-	of the vdevs, for example if there are two mirror vdevs, then
+	of the vdevs.  For example, if there are two mirror vdevs, then
 	this is effectively a RAID 10, striping the writes across the
 	two sets of mirrors.  Because of the way that space is
-	allocated in ZFS in order to attempt to have each vdev reach
+	allocated in ZFS to attempt to have each vdev reach
 	100% full at the same time, there is a performance penalty if
 	the vdevs have different amounts of free space.</para>
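+
+      <para>As a sketch of that layout (the pool and disk names are
+	placeholders), a second mirror vdev could be added to a pool
+	that already contains one mirror:</para>
+
+      <screen>&prompt.root; <userinput>zpool add mypool mirror da2 da3</userinput></screen>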
 
@@ -539,24 +538,23 @@ errors: No known data errors</screen>
       <title>Replacing a Working Device</title>
 
       <para>There are a number of situations in which it may be
-	desirable to replacing a disk with a different disk.  This
+	desirable to replace a disk with a different disk.  This
 	process requires connecting the new disk at the same time as
 	the disk to be replaced.  The
 	<literal>zpool replace</literal> command will copy all of the
-	data from the old disk to the new one.  Once this operation
+	data from the old disk to the new one.  After this operation
 	completes, the old disk is disconnected from the vdev.  If the
-	newer disk is larger this may allow your zpool to grow, see
-	the <link linkend="zfs-zpool-online">Growing a Pool</link>
-	section.</para>
+	new disk is larger than the old disk, it may be possible
+	to grow the zpool using the new space.  See
+	<link linkend="zfs-zpool-online">Growing a Pool</link>.</para>
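+
+      <para>For example (again using placeholder names), a working
+	disk <devicename>da1</devicename> could be replaced by a
+	larger disk <devicename>da3</devicename> that is already
+	connected:</para>
+
+      <screen>&prompt.root; <userinput>zpool replace mypool da1 da3</userinput></screen>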
     </sect2>
 
     <sect2 id="zfs-zpool-resilver">
       <title>Dealing with Failed Devices</title>
 
       <para>When a disk fails and the physical device is replaced, ZFS
-	needs to be told to begin the <link
+	must be told to begin the <link
 	linkend="zfs-term-resilver">resilver</link> operation, where
-	the data that was on the failed device will be recalculated
+	data that was on the failed device will be recalculated
 	from the available redundancy and written to the new
 	device.</para>
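+
+      <para>As an illustration (placeholder names again), once the
+	failed <devicename>da1</devicename> has been physically
+	replaced by a new disk in the same location, the resilver
+	can be started with:</para>
+
+      <screen>&prompt.root; <userinput>zpool replace mypool da1</userinput></screen>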
     </sect2>
@@ -565,54 +563,57 @@ errors: No known data errors</screen>
       <title>Growing a Pool</title>
 
       <para>The usable size of a redundant ZFS pool is limited by the
-	size of the smallest device in the vdev.  If you sequentially
-	replace each device in the vdev then when the smallest device
+	size of the smallest device in the vdev.  If each device
+	in the vdev is replaced sequentially, after the smallest
+	device
 	has completed the replace or resilver operation, the pool
-	can then grow based on the size of the new smallest device.
+	can grow based on the size of the new smallest device.
 	This expansion can be triggered with the
 	<literal>zpool online</literal> command with the -e flag on
-	each device.  Once the expansion of each device is complete,
+	each device.  After the expansion of each device,
 	the additional space will be available in the pool.</para>
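+
+      <para>For example (placeholder names), after all disks have
+	been replaced with larger ones, the expansion could be
+	triggered on each device:</para>
+
+      <screen>&prompt.root; <userinput>zpool online -e mypool da1 da2</userinput></screen>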
     </sect2>
 
     <sect2 id="zfs-zpool-import">
       <title>Importing & Exporting Pools</title>
 
-      <para>Pools can be exported in preperation for moving them to
+      <para>Pools can be exported in preparation for moving them to
 	another system.  All datasets are unmounted, and each device
 	is marked as exported but still locked so it cannot be used
 	by other disk subsystems.  This allows pools to be imported on
 	other machines, other operating systems that support ZFS, and
 	even different hardware architectures (with some caveats, see
-	the zpool man page).  The -f flag can be used to force
-	exporting a pool, in cases such as when a dataset has open
-	files.  If you force an export, the datasets will be forcibly
-	unmounted such can have unexpected side effects.</para>
-
-      <para>Importing a pool will automatically mount the datasets,
-	which may not be the desired behavior.  The -N command line
-	param will skip mounting.  The command line parameter -o sets
-	temporary properties for this import only.  The altroot=
-	property allows you to import a zpool with a base of some
-	mount point, instead of the root of the file system.  If the
+	&man.zpool.8;).  When a dataset has open files,
+	<option>-f</option> can be used to force the export of
+	a pool.  Forcing an export unmounts the datasets
+	forcibly, which can have unexpected side effects.</para>
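+
+      <para>A brief sketch (the pool name is a placeholder) of a
+	normal and a forced export:</para>
+
+      <screen>&prompt.root; <userinput>zpool export mypool</userinput>
+&prompt.root; <userinput>zpool export -f mypool</userinput></screen>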
+
+      <para>Importing a pool automatically mounts the datasets.
+	This may not be the desired behavior, and can be
+	prevented with <option>-N</option>.
+	<option>-o</option> sets temporary properties for this
+	import only.  <option>altroot=</option> allows importing a
+	zpool with a base mount point instead of the root of the
+	file system.  If the
 	pool was last used on a different system and was not properly
-	exported, you may have to force an import with the -f flag.
-	The -a flag will import all pools that do not appear to be
+	exported, an import might have to be forced with <option>-f</option>.
+	<option>-a</option> imports all pools that do not appear to be
 	in use by another system.</para>
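+
+      <para>For example (the pool name and mount point are
+	placeholders), a pool could be imported without mounting its
+	datasets, or imported under a temporary alternate
+	root:</para>
+
+      <screen>&prompt.root; <userinput>zpool import -N mypool</userinput>
+&prompt.root; <userinput>zpool import -o altroot=/mnt mypool</userinput></screen>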
     </sect2>
 
     <sect2 id="zfs-zpool-upgrade">
       <title>Upgrading a Storage Pool</title>
 
-      <para>After FreeBSD has been upgraded, or if a pool has been
-	imported from a system using an older verison of ZFS, the pool
+      <para>After upgrading &os;, or if a pool has been
+	imported from a system using an older version of ZFS, the pool
 	must be manually upgraded to the latest version of ZFS.  This
-	process is unreversable, so consider if the pool may ever need
-	to be imported on an older system before upgrading.  Onle once
-	the <literal>zpool upgrade</literal> command has completed
-	will the newer features of ZFS be available.  An upgrade
-	cannot be undone.  The -v flag can be used to see what new
-	features will be supported by upgrading.</para>
+	process is irreversible.  Consider whether the pool may
+	ever need to be imported on an older system before
+	upgrading.</para>
+
+      <para>The newer features of ZFS will not be available until
+	the <literal>zpool upgrade</literal> command has completed.
+	<option>-v</option> can be used to see what new
+	features will be provided by upgrading.</para>
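+
+      <para>For example (the pool name is a placeholder), the new
+	features could be listed and the pool then upgraded
+	with:</para>
+
+      <screen>&prompt.root; <userinput>zpool upgrade -v</userinput>
+&prompt.root; <userinput>zpool upgrade mypool</userinput></screen>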
     </sect2>
 
     <sect2 id="zfs-zpool-status">
@@ -624,13 +625,13 @@ errors: No known data errors</screen>
     <sect2 id="zfs-zpool-iostat">
       <title>Performance Monitoring</title>
 
-      <para>ZFS has a built-in monitoring isystem that can display
+      <para>ZFS has a built-in monitoring system that can display
 	statistics about I/O happening on the pool in real-time.
 	Additionally, it shows the free and used space on the pool and
 	how much I/O bandwidth is currently utilized for read and
 	write operations.  By default, all pools in the system will be
-	monitored and displayed.  A pool name can be provided to just
-	monitor one pool.  A basic example is provided below:</para>
+	monitored and displayed.  A pool name can be provided to monitor
+	just that single pool.  A basic example:</para>
 
 <screen>&prompt.root; <userinput>zpool iostat</userinput>
                capacity     operations    bandwidth
@@ -638,23 +639,23 @@ pool        alloc   free   read  write  
 ----------  -----  -----  -----  -----  -----  -----
 data         288G  1.53T      2     11  11.3K  57.1K</screen>
 
-	<para>To monitor I/O activity on the pool continuously, a
-	  number indicating the seconds after which to refresh the
-	  display can be specified.  ZFS will then print the next
-	  statistic line after each interval has been reached.  Press
+	<para>To continuously monitor I/O activity on the pool, specify
+	  a number as the last parameter, indicating the number of seconds
+	  to wait between updates.  ZFS will print the next
+	  statistic line after each interval.  Press
 	  <keycombo
 	  action="simul"><keycap>Ctrl</keycap><keycap>C</keycap></keycombo>
-	  to stop this continuous monitoring.  Alternatively, a second
-	  whole number can be provided on the command line after the
-	  interval to indicate how many of these statistics should be
-	  displayed in total.</para>
+	  to stop this continuous monitoring.  Alternatively, give a second
+	  number on the command line after the
+	  interval to specify the total number of statistics to
+	  display.</para>
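+
+	<para>For example, to print statistics every five seconds and
+	  stop after three reports (both numbers are
+	  arbitrary):</para>
+
+	<screen>&prompt.root; <userinput>zpool iostat 5 3</userinput></screen>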
 
 	<para>Even more detailed pool I/O statistics can be
-	  displayed using the <literal>-v</literal> parameter.  For
-	  each storage device that is part of the pool ZFS will
-	  provide a separate statistic line.  This is helpful to
+	  displayed with the <option>-v</option> parameter.
+	  Each storage device in the pool will be shown with a
+	  separate statistic line.  This is helpful to
 	  determine reads and writes on devices that slow down I/O on
-	  the whole pool.  In the following example, we have a
+	  the whole pool.  The following example shows a
 	  mirrored pool consisting of two devices.  For each of these,
 	  a separate line is shown with the current I/O
 	  activity.</para>
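+
+	<para>The invocation for such a per-device view (its output
+	  is omitted here) would be:</para>
+
+	<screen>&prompt.root; <userinput>zpool iostat -v</userinput></screen>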

