New Handbook Section for Review - graid3

Warren Block wblock at wonkity.com
Tue Jan 31 20:58:45 UTC 2012


On Tue, 31 Jan 2012, Daniel Gerzo wrote:

>  A new Handbook section covering graid3 is now available for review;
>  the patch is attached.
>
>  The text is based on PR 164228. A built version is available at
>  http://people.freebsd.org/~danger/geom-raid3.html.
>
>  Comments are welcome.

Patch with suggested changes attached.  This is a full diff, not a 
diff to your diff.

Some general things I'm not sure about, but will suggest:

Personally, I prefer

   add this setting to configfile
   <programlisting>'blahblah'</programlisting>

to

   Run this command
   <screen>'echo blahblah >> configfile'</screen>

I did not change those, though.
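
As a concrete example, the loader.conf note near the end of the new
section could read something like this (my suggested markup, not part
of the attached patch):

```sgml
<para>To load the module automatically at boot, add this line to
  <filename>/boot/loader.conf</filename>:</para>

<programlisting>geom_raid3_load="YES"</programlisting>
```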


Various wording changes.  The sentence explaining total capacity could 
still be better, I think.  Paragraphs were intentionally not rewrapped, 
but should be.
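
For the capacity sentence, maybe a worked example would help readers;
this throwaway shell arithmetic (made-up drive sizes, not proposed for
the Handbook) shows the (n - 1)/n relationship:

```shell
# RAID3 dedicates one disk's worth of space to parity, so usable
# capacity is (n - 1)/n of the total.
# Hypothetical figures: n = 3 disks of 1000 MB each.
n=3
drive_mb=1000
total_mb=$(( n * drive_mb ))
usable_mb=$(( (n - 1) * drive_mb ))
echo "usable ${usable_mb} MB of ${total_mb} MB total"
```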

The paragraph starting "To retain this configuration across reboots" was 
unclear.  I hope it still means what it did.
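
As I read it, the intent of that paragraph is just these two
additions, copied from the patch:

```
# /boot/loader.conf
geom_raid3_load="YES"

# /etc/fstab
/dev/raid3/gr0p1	/multimedia	ufs	rw	2	2
```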

A couple of blank lines had whitespace.  Title capitalization was
changed a bit.  igor (http://www.wonkity.com/~wblock/igor/) can 
proofread those.
-------------- next part --------------
Index: chapter.sgml
===================================================================
RCS file: /home/dcvs/doc/en_US.ISO8859-1/books/handbook/geom/chapter.sgml,v
retrieving revision 1.51
diff -u -r1.51 chapter.sgml
--- chapter.sgml	21 Nov 2011 18:11:25 -0000	1.51
+++ chapter.sgml	31 Jan 2012 20:44:15 -0000
@@ -436,6 +436,162 @@
     </sect2>
   </sect1>
 
+  <sect1 id="GEOM-raid3">
+    <sect1info>
+      <authorgroup>
+	<author>
+	  <firstname>Mark</firstname>
+	  <surname>Gladman</surname>
+	  <contrib>Written by </contrib>
+	</author>
+	<author>
+	  <firstname>Daniel</firstname>
+	  <surname>Gerzo</surname>
+	</author>
+      </authorgroup>
+      <authorgroup>
+	<author>
+	  <firstname>Tom</firstname>
+	  <surname>Rhodes</surname>
+	  <contrib>Based on documentation by </contrib>
+	</author>
+	<author>
+	  <firstname>Murray</firstname>
+	  <surname>Stokely</surname>
+	</author>
+      </authorgroup>
+    </sect1info>
+
+    <indexterm>
+      <primary>GEOM</primary>
+    </indexterm>
+    <indexterm>
+      <primary>RAID3</primary>
+    </indexterm>
+
+    <title><acronym>RAID</acronym>3 - Byte-level Striping with Dedicated
+      Parity</title>
+
+    <para><acronym>RAID</acronym>3 is a method used to combine several
+      disk drives into a single volume with a dedicated parity
+      disk.  In a <acronym>RAID</acronym>3 system, data is split up
+      into a number of bytes that get written across all the drives in
+      the array except for one disk which acts as a dedicated parity
+      disk.  This means that reading 1024 kB from a
+      <acronym>RAID</acronym>3 implementation will access all disks in
+      the array.  Performance can be enhanced by using multiple
+      disk controllers.  The <acronym>RAID</acronym>3 array provides a
+      fault tolerance of 1 drive, while providing a usable capacity
+      equal to (n - 1)/n of the combined capacity of the drives
+      in the array, where n is the number of drives.  Such a
+      configuration is mostly suitable for storing larger files,
+      such as multimedia files.</para>
+
+    <para>At least 3 physical hard drives are required to build a
+      <acronym>RAID</acronym>3 array.  Each disk must be of the same
+      size, since I/O requests are interleaved to read or write to
+      multiple disks in parallel.  Also, due to the nature of
+      <acronym>RAID</acronym>3, the number of drives must be
+      equal to 3, 5, 9, 17, etc. (2^n + 1).</para>
+
+    <sect2>
+      <title>Creating a Dedicated <acronym>RAID</acronym>3 Array</title>
+
+      <para>In &os;, support for <acronym>RAID</acronym>3 is
+	implemented by the &man.graid3.8; <acronym>GEOM</acronym>
+	class.  Creating a dedicated
+	<acronym>RAID</acronym>3 array on &os; requires the following steps.</para>
+
+      <note>
+	<para>While it is theoretically possible to boot from a
+	  <acronym>RAID</acronym>3 array on &os;, such a configuration
+	  is not common and is not advised.  As such, this section
+	  does not describe how to accomplish that
+	  configuration.</para>
+      </note>
+
+      <procedure>
+	<step>
+	  <para>First, load the <filename>geom_raid3.ko</filename> kernel module.</para>
+
+	  <screen>&prompt.root; <userinput>graid3 load</userinput></screen>
+
+	  <para>Alternatively, it is possible to manually load the
+	    <filename>geom_raid3.ko</filename> module:</para>
+
+	  <screen>&prompt.root; <userinput>kldload geom_raid3.ko</userinput></screen>
+	</step>
+
+	<step>
+	  <para>Create or ensure that a suitable mount point
+	    exists:</para>
+
+	  <screen>&prompt.root; <userinput>mkdir <replaceable>/multimedia/</replaceable></userinput></screen>
+	</step>
+
+	<step>
+	  <para>Determine the device names for the disks which will be
+	    added to the array, and create the new
+	    <acronym>RAID</acronym>3 device.  The final device listed
+	    will act as the dedicated parity disk.  This
+	    example uses three unpartitioned
+	    <acronym>ATA</acronym> drives:
+	    <devicename><replaceable>ada1</replaceable></devicename>
+	    and <devicename><replaceable>ada2</replaceable></devicename>
+	    for data, and
+	    <devicename><replaceable>ada3</replaceable></devicename>
+	    for parity.</para>
+
+	  <screen>&prompt.root; <userinput>graid3 label -v gr0 /dev/ada1 /dev/ada2 /dev/ada3</userinput>
+Metadata value stored on /dev/ada1.
+Metadata value stored on /dev/ada2.
+Metadata value stored on /dev/ada3.
+Done.</screen>
+	</step>
+
+	<step>
+	  <para>Partition the newly created
+	    <devicename>gr0</devicename> device and put a UFS file
+	    system on it:</para>
+
+	  <screen>&prompt.root; <userinput>gpart create -s GPT /dev/raid3/gr0</userinput>
+&prompt.root; <userinput>gpart add -t freebsd-ufs /dev/raid3/gr0</userinput>
+&prompt.root; <userinput>newfs -j /dev/raid3/gr0p1</userinput></screen>
+
+	  <para>Many numbers will glide across the screen, and after a
+	    few seconds, the process will be complete.  The volume has
+	    been created and is ready to be mounted.</para>
+	</step>
+
+	<step>
+	  <para>The last step is to mount the file system:</para>
+
+	  <screen>&prompt.root; <userinput>mount /dev/raid3/gr0p1 /multimedia/</userinput></screen>
+
+	  <para>The <acronym>RAID</acronym>3 array is now ready to
+	    use.</para>
+	</step>
+      </procedure>
+
+      <note>
+	<para>The
+	  <filename>geom_raid3.ko</filename> module must be loaded
+	  before the array can be mounted.  To
+	  automatically load the kernel module during system initialization,
+	  invoke the following command:</para>
+
+	<screen>&prompt.root; <userinput>echo 'geom_raid3_load="YES"' >> /boot/loader.conf</userinput></screen>
+
+	<para>For
+	  automatic mounting of the array's file system during the boot
+	  process, add the volume information to the
+	  <filename>/etc/fstab</filename> file:</para>
+
+	<screen>&prompt.root; <userinput>echo "/dev/raid3/gr0p1 /multimedia ufs rw 2 2" >> /etc/fstab</userinput></screen>
+      </note>
+    </sect2>
+  </sect1>
+
   <sect1 id="geom-ggate">
     <title>GEOM Gate Network Devices</title>
 


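One more nit on the drive-count sentence: readers may wonder which
counts qualify, and the 2^n + 1 values are easy to enumerate (scratch
shell, again not for the Handbook):

```shell
# graid3 arrays need 2^k + 1 member disks; list the first few
# valid counts (3, 5, 9, 17).
counts=""
for k in 1 2 3 4; do
  counts="$counts$(( (1 << k) + 1 )) "
done
echo "valid graid3 drive counts: ${counts}"
```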