svn commit: r44536 - head/en_US.ISO8859-1/books/handbook/geom

Dru Lavigne dru at FreeBSD.org
Fri Apr 11 17:06:12 UTC 2014


Author: dru
Date: Fri Apr 11 17:06:12 2014
New Revision: 44536
URL: http://svnweb.freebsd.org/changeset/doc/44536

Log:
  White space fix only. Translators can ignore.
  
  Sponsored by:	iXsystems

Modified:
  head/en_US.ISO8859-1/books/handbook/geom/chapter.xml

Modified: head/en_US.ISO8859-1/books/handbook/geom/chapter.xml
==============================================================================
--- head/en_US.ISO8859-1/books/handbook/geom/chapter.xml	Fri Apr 11 16:46:10 2014	(r44535)
+++ head/en_US.ISO8859-1/books/handbook/geom/chapter.xml	Fri Apr 11 17:06:12 2014	(r44536)
@@ -33,20 +33,21 @@
       <see><acronym>GEOM</acronym></see>
     </indexterm>
 
-    <para>In &os;, the <acronym>GEOM</acronym> framework permits access and control to classes, such as Master
-      Boot Records and <acronym>BSD</acronym> labels, through the use
-      of providers, or the disk devices in <filename>/dev</filename>.
-      By supporting various software <acronym>RAID</acronym>
-      configurations, <acronym>GEOM</acronym> transparently provides access  to the
+    <para>In &os;, the <acronym>GEOM</acronym> framework permits
+      access and control to classes, such as Master Boot Records and
+      <acronym>BSD</acronym> labels, through the use of providers, or
+      the disk devices in <filename>/dev</filename>.  By supporting
+      various software <acronym>RAID</acronym> configurations,
+      <acronym>GEOM</acronym> transparently provides access  to the
       operating system and operating system utilities.</para>
 
-    <para>This chapter covers the use of disks under the <acronym>GEOM</acronym>
-      framework in &os;.  This includes the major <acronym>RAID</acronym>
-      control utilities which use the framework for configuration.
-      This chapter is
-      not a definitive guide to <acronym>RAID</acronym> configurations
-      and only <acronym>GEOM</acronym>-supported <acronym>RAID</acronym> classifications
-      are discussed.</para>
+    <para>This chapter covers the use of disks under the
+      <acronym>GEOM</acronym> framework in &os;.  This includes the
+      major <acronym>RAID</acronym> control utilities which use the
+      framework for configuration.  This chapter is not a definitive
+      guide to <acronym>RAID</acronym> configurations and only
+      <acronym>GEOM</acronym>-supported <acronym>RAID</acronym>
+      classifications are discussed.</para>
 
     <para>After reading this chapter, you will know:</para>
 
@@ -68,8 +69,8 @@
       </listitem>
 
       <listitem>
-	<para>How to troubleshoot disks attached to the <acronym>GEOM</acronym>
-	  framework.</para>
+	<para>How to troubleshoot disks attached to the
+	  <acronym>GEOM</acronym> framework.</para>
       </listitem>
     </itemizedlist>
 
@@ -82,8 +83,8 @@
       </listitem>
 
       <listitem>
-	<para>Know how to configure and install a new kernel
-	  (<xref linkend="kernelconfig"/>.</para>
+	<para>Know how to configure and install a new kernel (<xref
+	    linkend="kernelconfig"/>.</para>
       </listitem>
     </itemizedlist>
   </sect1>
@@ -122,18 +123,18 @@
       <acronym>RAID</acronym> controllers.  The
       <acronym>GEOM</acronym> disk subsystem provides software support
       for disk striping, also known as <acronym>RAID0</acronym>,
-      without the need for a <acronym>RAID</acronym> disk 
+      without the need for a <acronym>RAID</acronym> disk
       controller.</para>
 
-    <para>In <acronym>RAID0</acronym>, data is split into
-      blocks that are written across all the drives in the array.  As
-      seen in the following illustration,
-      instead of having to wait on the system to write 256k to one
-      disk, <acronym>RAID0</acronym> can simultaneously write
-      64k to each of the four disks in the array, offering superior <acronym>I/O</acronym>
-      performance.  This performance can be enhanced further by using
-      multiple disk controllers.</para>
-   
+    <para>In <acronym>RAID0</acronym>, data is split into blocks that
+      are written across all the drives in the array.  As seen in the
+      following illustration, instead of having to wait on the system
+      to write 256k to one disk, <acronym>RAID0</acronym> can
+      simultaneously write 64k to each of the four disks in the array,
+      offering superior <acronym>I/O</acronym> performance.  This
+      performance can be enhanced further by using multiple disk
+      controllers.</para>
+
     <mediaobject>
       <imageobject>
 	<imagedata fileref="geom/striping" align="center"/>
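As a rough illustration of the arithmetic in the striping paragraph above: gstripe(8) lets the stripe size be chosen with -s when the volume is labeled. A minimal sketch, assuming four spare disks da0 through da3 (device names are placeholders) and a 64 kB stripe size:

    # kldload geom_stripe
    # gstripe label -v -s 65536 st0 /dev/da0 /dev/da1 /dev/da2 /dev/da3

With that layout, a single 256 kB write is divided into four 64 kB chunks, one per member disk, and the chunks are written in parallel.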
@@ -145,11 +146,12 @@
     </mediaobject>
 
     <para>Each disk in a <acronym>RAID0</acronym> stripe must be of
-      the same size, since <acronym>I/O</acronym> requests are interleaved to read or
-      write to multiple disks in parallel.</para>
+      the same size, since <acronym>I/O</acronym> requests are
+      interleaved to read or write to multiple disks in
+      parallel.</para>
 
     <note>
-     <para><acronym>RAID0</acronym> does <emphasis>not</emphasis>
+      <para><acronym>RAID0</acronym> does <emphasis>not</emphasis>
 	provide any redundancy.  This means that if one disk in the
 	array fails, all of the data on the disks is lost.  If the
 	data is important, implement a backup strategy that regularly
@@ -163,7 +165,8 @@
       to control an existing stripe.</para>
 
     <procedure>
-      <title>Creating a Stripe of Unformatted <acronym>ATA</acronym> Disks</title>
+      <title>Creating a Stripe of Unformatted <acronym>ATA</acronym>
+	Disks</title>
 
       <step>
 	<para>Load the <filename>geom_stripe.ko</filename>
@@ -203,11 +206,11 @@ Done.</screen>
 
       <step>
 	<para>This process should create two other devices in
-	  <filename>/dev/stripe</filename> in
-	  addition to <filename>st0</filename>.  Those include
-	  <filename>st0a</filename> and
-	  <filename>st0c</filename>.  At this point, a <acronym>UFS</acronym> file system
-	  can be created on <filename>st0a</filename> using
+	  <filename>/dev/stripe</filename> in addition to
+	  <filename>st0</filename>.  Those include
+	  <filename>st0a</filename> and <filename>st0c</filename>.  At
+	  this point, a <acronym>UFS</acronym> file system can be
+	  created on <filename>st0a</filename> using
 	  <command>newfs</command>:</para>
 
 	<screen>&prompt.root; <userinput>newfs -U /dev/stripe/st0a</userinput></screen>
@@ -218,30 +221,31 @@ Done.</screen>
       </step>
 
       <step>
-    <para>To manually mount the created disk stripe:</para>
+	<para>To manually mount the created disk stripe:</para>
 
-    <screen>&prompt.root; <userinput>mount /dev/stripe/st0a /mnt</userinput></screen>
+	<screen>&prompt.root; <userinput>mount /dev/stripe/st0a /mnt</userinput></screen>
       </step>
 
       <step>
-    <para>To mount this striped file system automatically during the
-      boot process, place the volume information in
-      <filename>/etc/fstab</filename>.  In this example, a permanent
-      mount point, named <filename>stripe</filename>, is
-      created:</para>
+	<para>To mount this striped file system automatically during
+	  the boot process, place the volume information in
+	  <filename>/etc/fstab</filename>.  In this example, a
+	  permanent mount point, named <filename>stripe</filename>, is
+	  created:</para>
 
-    <screen>&prompt.root; <userinput>mkdir /stripe</userinput>
+	<screen>&prompt.root; <userinput>mkdir /stripe</userinput>
 &prompt.root; <userinput>echo "/dev/stripe/st0a /stripe ufs rw 2 2" \</userinput>
-    <userinput>>> /etc/fstab</userinput></screen>
-    </step>
+<userinput>>> /etc/fstab</userinput></screen>
+      </step>
 
-    <step>
-    <para>The <filename>geom_stripe.ko</filename> module must also be
-      automatically loaded during system initialization, by adding a
-      line to <filename>/boot/loader.conf</filename>:</para>
+      <step>
+	<para>The <filename>geom_stripe.ko</filename> module must also
+	  be automatically loaded during system initialization, by
+	  adding a line to
+	  <filename>/boot/loader.conf</filename>:</para>
 
-    <screen>&prompt.root; <userinput>echo 'geom_stripe_load="YES"' >> /boot/loader.conf</userinput></screen>
-    </step>
+	<screen>&prompt.root; <userinput>echo 'geom_stripe_load="YES"' >> /boot/loader.conf</userinput></screen>
+      </step>
     </procedure>
   </sect1>
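For reference, once the stripe exists and a BSD label has been written on it (so that st0a is present), the remaining steps of this procedure reduce to a short command sequence. A minimal sketch using the same paths as the steps above:

    # newfs -U /dev/stripe/st0a
    # mount /dev/stripe/st0a /mnt
    # mkdir /stripe
    # echo "/dev/stripe/st0a /stripe ufs rw 2 2" >> /etc/fstab
    # echo 'geom_stripe_load="YES"' >> /boot/loader.conf

The -U flag to newfs enables soft updates on the new UFS file system, and the loader.conf entry loads geom_stripe.ko early enough at boot for the /etc/fstab entry to be mounted.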
 
@@ -1340,9 +1344,9 @@ Done.</screen>
   <sect1 xml:id="geom-ggate">
     <title><acronym>GEOM</acronym> Gate Network Devices</title>
 
-    <para><acronym>GEOM</acronym> supports the remote use of devices, such as disks,
-      CD-ROMs, and files through the use of the gate utilities.
-      This is similar to <acronym>NFS</acronym>.</para>
+    <para><acronym>GEOM</acronym> supports the remote use of devices,
+      such as disks, CD-ROMs, and files through the use of the gate
+      utilities.  This is similar to <acronym>NFS</acronym>.</para>
 
     <para>To begin, an exports file must be created.  This file
       specifies who is permitted to access the exported resources and

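As a rough sketch of the gate utilities mentioned above: ggated(8) runs on the machine exporting a device and ggatec(8) on the machine using it. The commands below are an illustration only, assuming a server at 192.168.1.1 exporting a CD device read-only to its local network; addresses and device names are placeholders, and /etc/gg.exports is ggated's default exports file:

    server# echo '192.168.1.0/24 RO /dev/cd0' > /etc/gg.exports
    server# ggated
    client# ggatec create -o ro 192.168.1.1 /dev/cd0
    ggate0
    client# mount_cd9660 /dev/ggate0 /mnt

ggatec create prints the name of the local ggate device it attaches, which can then be mounted or otherwise used like any other disk device.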
