svn commit: r41174 - head/en_US.ISO8859-1/books/handbook/vinum

Dru Lavigne dru at FreeBSD.org
Tue Mar 12 13:30:15 UTC 2013


Author: dru
Date: Tue Mar 12 13:30:14 2013
New Revision: 41174
URL: http://svnweb.freebsd.org/changeset/doc/41174

Log:
  This patch addresses the following:
  
  - rewording for you
  
  - fixed xref and acronym tags
  
  - Vinum changed to <devicename>vinum</devicename>
  
  - merge intro and 22.2
  
  - merge 22.5.3 and 22.5.4
  
  - modernized 22.9.1 as gvinum used since 5.x
  
  - fixed filesystem to file system
  
  - remove "" between literals as the literal output does not include it
  
  Approved by:  gjb (mentor)

Modified:
  head/en_US.ISO8859-1/books/handbook/vinum/chapter.xml

Modified: head/en_US.ISO8859-1/books/handbook/vinum/chapter.xml
==============================================================================
--- head/en_US.ISO8859-1/books/handbook/vinum/chapter.xml	Tue Mar 12 11:32:45 2013	(r41173)
+++ head/en_US.ISO8859-1/books/handbook/vinum/chapter.xml	Tue Mar 12 13:30:14 2013	(r41174)
@@ -21,80 +21,54 @@
     </authorgroup>
   </chapterinfo>
 
-  <title>The Vinum Volume Manager</title>
+  <title>The <devicename>vinum</devicename> Volume Manager</title>
 
   <sect1 id="vinum-synopsis">
     <title>Synopsis</title>
 
-    <para>No matter what disks you have, there are always potential
-      problems:</para>
+    <para>No matter the type of disks, there are always potential
+      problems.  The disks can be too small, too slow, or too
+      unreliable to meet the system's requirements.  While disks are
+      getting bigger, so are data storage requirements.  Often a file
+      system is needed that is bigger than a disk's capacity.  Various
+      solutions to these problems have been proposed and
+      implemented.</para>
 
-    <itemizedlist>
-      <listitem>
-	<para>They can be too small.</para>
-      </listitem>
-
-      <listitem>
-	<para>They can be too slow.</para>
-      </listitem>
-
-      <listitem>
-	<para>They can be too unreliable.</para>
-      </listitem>
-    </itemizedlist>
-
-    <para>Various solutions to these problems have been proposed and
-      implemented.  One way some users safeguard themselves against
-      such issues is through the use of multiple, and sometimes
+    <para>One method is through the use of multiple, and sometimes
       redundant, disks.  In addition to supporting various cards and
-      controllers for hardware RAID systems, the base &os; system
-      includes the Vinum Volume Manager, a block device driver that
-      implements virtual disk drives.  <emphasis>Vinum</emphasis> is a
-      so-called <emphasis>Volume Manager</emphasis>, a virtual disk
-      driver that addresses these three problems.  Vinum provides more
-      flexibility, performance, and reliability than traditional disk
-      storage, and implements RAID-0, RAID-1, and RAID-5 models both
-      individually and in combination.</para>
+      controllers for hardware Redundant Array of Independent
+      Disks (<acronym>RAID</acronym>) systems, the base &os; system
+      includes the <devicename>vinum</devicename> volume manager, a
+      block device driver that implements virtual disk drives and
+      addresses these three problems.  <devicename>vinum</devicename>
+      provides more flexibility, performance, and reliability than
+      traditional disk storage and implements
+      <acronym>RAID</acronym>-0, <acronym>RAID</acronym>-1, and
+      <acronym>RAID</acronym>-5 models, both individually and in
+      combination.</para>
 
     <para>This chapter provides an overview of potential problems with
-      traditional disk storage, and an introduction to the Vinum
-      Volume Manager.</para>
+      traditional disk storage, and an introduction to the
+      <devicename>vinum</devicename> volume manager.</para>
 
     <note>
-      <para>Starting with &os; 5, Vinum has been rewritten in
-	order to fit into the GEOM architecture (<xref
-	  linkend="GEOM"/>), retaining the original ideas,
-	terminology, and on-disk metadata.  This rewrite is called
-	<emphasis>gvinum</emphasis> (for <emphasis> GEOM
-	  vinum</emphasis>).  The following text usually refers to
-	<emphasis>Vinum</emphasis> as an abstract name, regardless of
-	the implementation variant.  Any command invocations should
-	now be done using the <command>gvinum</command> command, and
-	the name of the kernel module has been changed from
+      <para>Starting with &os; 5, <devicename>vinum</devicename>
+	has been rewritten in order to fit into the <link
+	  linkend="GEOM">GEOM architecture</link>, while retaining the
+	original ideas, terminology, and on-disk metadata.  This
+	rewrite is called <emphasis>gvinum</emphasis> (for
+	  <emphasis>GEOM vinum</emphasis>).  While this chapter uses the term
+	<devicename>vinum</devicename>, any command invocations should
+	be performed with <command>gvinum</command>.  The name of the
+	kernel module has changed from the original
 	<filename>vinum.ko</filename> to
 	<filename>geom_vinum.ko</filename>, and all device nodes
 	reside under <filename
 	  class="directory">/dev/gvinum</filename> instead of
 	<filename class="directory">/dev/vinum</filename>.  As of
-	&os; 6, the old Vinum implementation is no longer
-	available in the code base.</para>
+	&os; 6, the original <devicename>vinum</devicename>
+	implementation is no longer available in the code base.</para>
     </note>
-
-  </sect1>
-
-  <sect1 id="vinum-intro">
-    <title>Disks Are Too Small</title>
-
-    <indexterm><primary>Vinum</primary></indexterm>
-    <indexterm><primary>RAID</primary>
-      <secondary>software</secondary></indexterm>
-
-    <para>Disks are getting bigger, but so are data storage
-      requirements.  Often you will find you want a file system that
-      is bigger than the disks you have available.  Admittedly, this
-      problem is not as acute as it was ten years ago, but it still
-      exists.  Some systems have solved this by creating an abstract
-      device which stores its data on a number of disks.</para>
   </sect1>
 
   <sect1 id="vinum-access-bottlenecks">
@@ -108,18 +82,18 @@
 
     <para>Current disk drives can transfer data sequentially at up to
       70 MB/s, but this value is of little importance in an
-      environment where many independent processes access a drive,
+      environment where many independent processes access a drive, and
       where they may achieve only a fraction of these values.  In such
-      cases it is more interesting to view the problem from the
-      viewpoint of the disk subsystem: the important parameter is the
-      load that a transfer places on the subsystem, in other words the
-      time for which a transfer occupies the drives involved in the
+      cases, it is more interesting to view the problem from the
+      viewpoint of the disk subsystem.  The important parameter is the
+      load that a transfer places on the subsystem, or the time for
+      which a transfer occupies the drives involved in the
       transfer.</para>
 
     <para>In any disk transfer, the drive must first position the
       heads, wait for the first sector to pass under the read head,
       and then perform the transfer.  These actions can be considered
-      to be atomic: it does not make any sense to interrupt
+      to be atomic as it does not make any sense to interrupt
       them.</para>
 
     <para><anchor id="vinum-latency"/> Consider a typical transfer of
@@ -134,14 +108,14 @@
       size.</para>
 
     <para>The traditional and obvious solution to this bottleneck is
-      <quote>more spindles</quote>: rather than using one large disk,
-      it uses several smaller disks with the same aggregate storage
+      <quote>more spindles</quote>:  rather than using one large disk,
+      use several smaller disks with the same aggregate storage
       space.  Each disk is capable of positioning and transferring
       independently, so the effective throughput increases by a factor
       close to the number of disks used.</para>
 
-    <para>The exact throughput improvement is, of course, smaller than
-      the number of disks involved: although each drive is capable of
+    <para>The actual throughput improvement is smaller than the
+      number of disks involved.  Although each drive is capable of
       transferring in parallel, there is no way to ensure that the
       requests are evenly distributed across the drives.  Inevitably
       the load on one drive will be higher than on another.</para>
@@ -168,8 +142,8 @@
       relationships.  It works well when the access to the virtual
       disk is spread evenly about its address space.  When access is
       concentrated on a smaller area, the improvement is less marked.
-      <xref linkend="vinum-concat"/> illustrates the sequence in which
-      storage units are allocated in a concatenated
+      <link linkend="vinum-concat"></link> illustrates the sequence in
+      which storage units are allocated in a concatenated
       organization.</para>
 
     <para>
@@ -187,7 +161,7 @@
       <secondary>striping</secondary>
     </indexterm>
     <indexterm>
-      <primary>RAID</primary>
+      <primary><acronym>RAID</acronym></primary>
     </indexterm>
 
     <para>An alternative mapping is to divide the address space into
@@ -196,20 +170,18 @@
       stored on the first disk, the next 256 sectors on the next disk
       and so on.  After filling the last disk, the process repeats
       until the disks are full.  This mapping is called
-      <emphasis>striping</emphasis> or <acronym>RAID-0</acronym>
+      <emphasis>striping</emphasis> or
+      <acronym>RAID-0</acronym>.</para>
 
-    <footnote>
-      <para><acronym>RAID</acronym> stands for <emphasis>Redundant
-	  Array of Inexpensive Disks</emphasis> and offers various
-	forms of fault tolerance, though the latter term is somewhat
-	misleading: it provides no redundancy.</para> </footnote>.
-
-      Striping requires somewhat more effort to locate the
-      data, and it can cause additional I/O load where a transfer is
-      spread over multiple disks, but it can also provide a more
-      constant load across the disks.  <xref linkend="vinum-striped"/>
-      illustrates the sequence in which storage units are allocated in
-      a striped organization.</para>
+    <para><acronym>RAID</acronym> offers various forms of fault
+      tolerance, though the name is somewhat misleading for
+      <acronym>RAID-0</acronym>, which provides no redundancy.  Striping requires
+      somewhat more effort to locate the data, and it can cause
+      additional I/O load where a transfer is spread over multiple
+      disks, but it can also provide a more constant load across the
+      disks.  <link linkend="vinum-striped"></link> illustrates the
+      sequence in which storage units are allocated in a striped
+      organization.</para>
 
     <para>
       <figure id="vinum-striped">
@@ -222,56 +194,55 @@
   <sect1 id="vinum-data-integrity">
     <title>Data Integrity</title>
 
-    <para>The final problem with current disks is that they are
-      unreliable.  Although disk drive reliability has increased
-      tremendously over the last few years, they are still the most
-      likely core component of a server to fail.  When they do, the
-      results can be catastrophic: replacing a failed disk drive and
-      restoring data to it can take days.</para>
+    <para>The final problem with disks is that they are unreliable.
+      Although reliability has increased tremendously over the last
+      few years, disk drives are still the most likely core component
+      of a server to fail.  When they do, the results can be
+      catastrophic, as replacing a failed disk drive and restoring
+      data can result in server downtime.</para>
 
     <indexterm>
       <primary>disk mirroring</primary>
     </indexterm>
-    <indexterm><primary>Vinum</primary>
+    <indexterm><primary>vinum</primary>
       <secondary>mirroring</secondary>
     </indexterm>
-    <indexterm><primary>RAID-1</primary>
+    <indexterm><primary><acronym>RAID</acronym>-1</primary>
     </indexterm>
 
-    <para>The traditional way to approach this problem has been
-      <emphasis>mirroring</emphasis>, keeping two copies of the data
-      on different physical hardware.  Since the advent of the
-      <acronym>RAID</acronym> levels, this technique has also been
-      called <acronym>RAID level 1</acronym> or
-      <acronym>RAID-1</acronym>.  Any write to the volume writes to
-      both locations; a read can be satisfied from either, so if one
-      drive fails, the data is still available on the other
+    <para>One approach to this problem is
+      <emphasis>mirroring</emphasis>, or
+      <acronym>RAID-1</acronym>, which keeps two copies of the
+      data on different physical hardware.  Any write to the volume
+      writes to both disks; a read can be satisfied from either, so if
+      one drive fails, the data is still available on the other
       drive.</para>
 
     <para>Mirroring has two problems:</para>
 
     <itemizedlist>
       <listitem>
-	<para>The price.  It requires twice as much disk storage as
-	  a non-redundant solution.</para>
+	<para>It requires twice as much disk storage as a
+	  non-redundant solution.</para>
       </listitem>
 
       <listitem>
-	<para>The performance impact.  Writes must be performed to
-	  both drives, so they take up twice the bandwidth of a
-	  non-mirrored volume.  Reads do not suffer from a
-	  performance penalty: it even looks as if they are
+	<para>Writes must be performed to both drives, so they take up
+	  twice the bandwidth of a non-mirrored volume.  Reads do not
+	  suffer from a performance penalty and can even be
 	  faster.</para>
       </listitem>
     </itemizedlist>
 
-    <para><indexterm><primary>RAID-5</primary></indexterm>An
-      alternative solution is <emphasis>parity</emphasis>, implemented
-      in the <acronym>RAID</acronym> levels 2, 3, 4 and 5.  Of these,
-      <acronym>RAID-5</acronym> is the most interesting.  As
-      implemented in Vinum, it is a variant on a striped organization
-      which dedicates one block of each stripe to parity one of the
-      other blocks.  As implemented by Vinum, a
+    <indexterm><primary><acronym>RAID</acronym>-5</primary></indexterm>
+
+    <para>An alternative solution is <emphasis>parity</emphasis>,
+      implemented in <acronym>RAID</acronym> levels 2, 3, 4, and 5.
+      Of these, <acronym>RAID-5</acronym> is the most interesting.  As
+      implemented in <devicename>vinum</devicename>, it is a variant
+      on a striped organization which dedicates one block of each
+      stripe to the parity of the other blocks.  As implemented by
+      <devicename>vinum</devicename>, a
       <acronym>RAID-5</acronym> plex is similar to a striped plex,
       except that it implements <acronym>RAID-5</acronym> by
       including a parity block in each stripe.  As required by
@@ -281,7 +252,7 @@
 
     <para>
       <figure id="vinum-raid5-org">
-	<title>RAID-5 Organization</title>
+	<title><acronym>RAID</acronym>-5 Organization</title>
 
 	<graphic fileref="vinum/vinum-raid5-org"/>
       </figure></para>
@@ -291,50 +262,52 @@
       access is similar to that of striped organizations, but write
       access is significantly slower, approximately 25% of the read
       performance.  If one drive fails, the array can continue to
-      operate in degraded mode: a read from one of the remaining
+      operate in degraded mode where a read from one of the remaining
       accessible drives continues normally, but a read from the
       failed drive is recalculated from the corresponding block from
       all the remaining drives.</para>
   </sect1>
 
   <sect1 id="vinum-objects">
-    <title>Vinum Objects</title>
+    <title><devicename>vinum</devicename> Objects</title>
 
-    <para>In order to address these problems, Vinum implements a
-      four-level hierarchy of objects:</para>
+    <para>In order to address these problems,
+      <devicename>vinum</devicename> implements a four-level hierarchy
+      of objects:</para>
 
     <itemizedlist>
       <listitem>
 	<para>The most visible object is the virtual disk, called a
 	  <emphasis>volume</emphasis>.  Volumes have essentially the
 	  same properties as a &unix; disk drive, though there are
-	  some minor differences.  They have no size
+	  some minor differences.  For one, they have no size
 	  limitations.</para>
       </listitem>
 
       <listitem>
 	<para>Volumes are composed of <emphasis>plexes</emphasis>,
 	  each of which represent the total address space of a
-	  volume.  This level in the hierarchy thus provides
-	  redundancy.  Think of plexes as individual disks in a
-	  mirrored array, each containing the same data.</para>
+	  volume.  This level in the hierarchy provides redundancy.
+	  Think of plexes as individual disks in a mirrored array,
+	  each containing the same data.</para>
       </listitem>
 
       <listitem>
-	<para>Since Vinum exists within the &unix; disk storage
-	  framework, it would be possible to use &unix; partitions
-	  as the building block for multi-disk plexes, but in fact
-	  this turns out to be too inflexible: &unix; disks can have
-	  only a limited number of partitions.  Instead, Vinum
-	  subdivides a single &unix; partition (the
-	  <emphasis>drive</emphasis>) into contiguous areas called
-	  <emphasis>subdisks</emphasis>, which it uses as building
-	  blocks for plexes.</para>
+	<para>Since <devicename>vinum</devicename> exists within the
+	  &unix; disk storage framework, it would be possible to use
+	  &unix; partitions as the building block for multi-disk
+	  plexes.  However, this turns out to be too inflexible as
+	  &unix; disks can have only a limited number of partitions.
+	  Instead, <devicename>vinum</devicename> subdivides a single
+	  &unix; partition, the <emphasis>drive</emphasis>, into
+	  contiguous areas called <emphasis>subdisks</emphasis>, which
+	  are used as building blocks for plexes.</para>
       </listitem>
 
       <listitem>
-	<para>Subdisks reside on Vinum <emphasis>drives</emphasis>,
-	  currently &unix; partitions.  Vinum drives can contain any
+	<para>Subdisks reside on <devicename>vinum</devicename>
+	  <emphasis>drives</emphasis>, currently &unix; partitions.
+	  <devicename>vinum</devicename> drives can contain any
 	  number of subdisks.  With the exception of a small area at
 	  the beginning of the drive, which is used for storing
 	  configuration and state information, the entire drive is
@@ -343,24 +316,25 @@
     </itemizedlist>
 
     <para>The following sections describe the way these objects
-      provide the functionality required of Vinum.</para>
+      provide the functionality required of
+      <devicename>vinum</devicename>.</para>
 
     <sect2>
       <title>Volume Size Considerations</title>
 
       <para>Plexes can include multiple subdisks spread over all
-	drives in the Vinum configuration.  As a result, the size of
-	an individual drive does not limit the size of a plex, and
-	thus of a volume.</para>
+	drives in the <devicename>vinum</devicename> configuration.
+	As a result, the size of an individual drive does not limit
+	the size of a plex or a volume.</para>
     </sect2>
 
     <sect2>
       <title>Redundant Data Storage</title>
 
-      <para>Vinum implements mirroring by attaching multiple plexes to
-	a volume.  Each plex is a representation of the data in a
-	volume.  A volume may contain between one and eight
-	plexes.</para>
+      <para><devicename>vinum</devicename> implements mirroring by
+	attaching multiple plexes to a volume.  Each plex is a
+	representation of the data in a volume.  A volume may contain
+	between one and eight plexes.</para>
 
       <para>Although a plex represents the complete data of a volume,
 	it is possible for parts of the representation to be
@@ -372,66 +346,45 @@
     </sect2>
 
     <sect2>
-      <title>Performance Issues</title>
+      <title>Which Plex Organization?</title>
 
-      <para>Vinum implements both concatenation and striping at the
-	plex level:</para>
+      <para><devicename>vinum</devicename> implements both
+	concatenation and striping at the plex level:</para>
 
       <itemizedlist>
 	<listitem>
 	  <para>A <emphasis>concatenated plex</emphasis> uses the
-	    address space of each subdisk in turn.</para>
+	    address space of each subdisk in turn.  Concatenated
+	    plexes are the most flexible as they can contain any
+	    number of subdisks, and the subdisks may be of different
+	    length.  The plex may be extended by adding additional
+	    subdisks.  They require less <acronym>CPU</acronym>
+	    time than striped plexes, though the difference in
+	    <acronym>CPU</acronym> overhead is not measurable.  On
+	    the other hand, they are most susceptible to hot spots,
+	    where one disk is very active and others are idle.</para>
 	</listitem>
 
 	<listitem>
 	  <para>A <emphasis>striped plex</emphasis> stripes the data
-	    across each subdisk.  The subdisks must all have the same
-	    size, and there must be at least two subdisks in order to
-	    distinguish it from a concatenated plex.</para>
+	    across each subdisk.  The subdisks must all be the same
+	    size and there must be at least two subdisks in order to
+	    distinguish it from a concatenated plex.  The greatest
+	    advantage of striped plexes is that they reduce hot spots.
+	    By choosing an optimum-sized stripe, about 256 kB,
+	    the load can be evened out on the component drives.
+	    Extending a plex by adding new subdisks is so complicated
+	    that <devicename>vinum</devicename> does not implement
+	    it.</para>
 	</listitem>
       </itemizedlist>
-    </sect2>
-
-    <sect2>
-      <title>Which Plex Organization?</title>
-
-      <para>The version of Vinum supplied with &os; &rel.current;
-	implements two kinds of plex:</para>
 
-      <itemizedlist>
-	<listitem>
-	  <para>Concatenated plexes are the most flexible: they can
-	    contain any number of subdisks, and the subdisks may be of
-	    different length.  The plex may be extended by adding
-	    additional subdisks.  They require less
-	    <acronym>CPU</acronym> time than striped plexes, though
-	    the difference in <acronym>CPU</acronym> overhead is not
-	    measurable.  On the other hand, they are most susceptible
-	    to hot spots, where one disk is very active and others are
-	    idle.</para>
-	</listitem>
-
-	<listitem>
-	  <para>The greatest advantage of striped
-	    (<acronym>RAID-0</acronym>) plexes is that they reduce hot
-	    spots: by choosing an optimum sized stripe (about
-	    256 kB), you can even out the load on the component
-	    drives.  The disadvantages of this approach are
-	    (fractionally) more complex code and restrictions on
-	    subdisks: they must be all the same size, and extending a
-	    plex by adding new subdisks is so complicated that Vinum
-	    currently does not implement it.  Vinum imposes an
-	    additional, trivial restriction: a striped plex must have
-	    at least two subdisks, since otherwise it is
-	    indistinguishable from a concatenated plex.</para>
-	</listitem>
-      </itemizedlist>
-
-      <para><xref linkend="vinum-comparison"/> summarizes the
+      <para><link linkend="vinum-comparison"></link> summarizes the
 	advantages and disadvantages of each plex organization.</para>
 
       <table id="vinum-comparison" frame="none">
-	<title>Vinum Plex Organizations</title>
+	<title><devicename>vinum</devicename> Plex
+	  Organizations</title>
 
 	<tgroup cols="5">
 	  <thead>
@@ -471,28 +424,32 @@
   <sect1 id="vinum-examples">
     <title>Some Examples</title>
 
-    <para>Vinum maintains a <emphasis>configuration
-	database</emphasis> which describes the objects known to an
-      individual system.  Initially, the user creates the
-      configuration database from one or more configuration files with
-      the aid of the &man.gvinum.8; utility program.  Vinum stores a
-      copy of its configuration database on each disk slice (which
-      Vinum calls a <emphasis>device</emphasis>) under its control.
-      This database is updated on each state change, so that a restart
-      accurately restores the state of each Vinum object.</para>
+    <para><devicename>vinum</devicename> maintains a
+      <emphasis>configuration database</emphasis> which describes the
+      objects known to an individual system.  Initially, the user
+      creates the configuration database from one or more
+      configuration files using &man.gvinum.8;.
+      <devicename>vinum</devicename> stores a copy of its
+      configuration database on each disk
+      <emphasis>device</emphasis> under its control.  This database is
+      updated on each state change, so that a restart accurately
+      restores the state of each
+      <devicename>vinum</devicename> object.</para>
 
     <sect2>
       <title>The Configuration File</title>
 
-      <para>The configuration file describes individual Vinum objects.
-	The definition of a simple volume might be:</para>
+      <para>The configuration file describes individual
+	<devicename>vinum</devicename> objects.  The definition of a
+	simple volume might be:</para>
 
       <programlisting>    drive a device /dev/da3h
     volume myvol
       plex org concat
         sd length 512m drive a</programlisting>
 
-      <para>This file describes four Vinum objects:</para>
+      <para>This file describes four <devicename>vinum</devicename>
+	objects:</para>
 
       <itemizedlist>
 	<listitem>
@@ -500,9 +457,8 @@
 	    partition (<emphasis>drive</emphasis>) and its location
 	    relative to the underlying hardware.  It is given the
 	    symbolic name <emphasis>a</emphasis>.  This separation of
-	    the symbolic names from the device names allows disks to
-	    be moved from one location to another without
-	    confusion.</para>
+	    symbolic names from device names allows disks to be moved
+	    from one location to another without confusion.</para>
 	</listitem>
 
 	<listitem>
@@ -514,7 +470,7 @@
 	<listitem>
 	  <para>The <emphasis>plex</emphasis> line defines a plex.
 	    The only required parameter is the organization, in this
-	    case <emphasis>concat</emphasis>.  No name is necessary:
+	    case <emphasis>concat</emphasis>.  No name is necessary as
 	    the system automatically generates a name from the volume
 	    name by adding the suffix
 	    <emphasis>.p</emphasis><emphasis>x</emphasis>, where
@@ -526,13 +482,13 @@
 	<listitem>
 	  <para>The <emphasis>sd</emphasis> line describes a subdisk.
 	    The minimum specifications are the name of a drive on
-	    which to store it, and the length of the subdisk.  As with
-	    plexes, no name is necessary: the system automatically
-	    assigns names derived from the plex name by adding the
-	    suffix <emphasis>.s</emphasis><emphasis>x</emphasis>,
-	    where <emphasis>x</emphasis> is the number of the subdisk
-	    in the plex.  Thus Vinum gives this subdisk the name
-	    <emphasis>myvol.p0.s0</emphasis>.</para>
+	    which to store it, and the length of the subdisk.  No name
+	    is necessary as the system automatically assigns names
+	    derived from the plex name by adding the suffix
+	    <emphasis>.s</emphasis><emphasis>x</emphasis>, where
+	    <emphasis>x</emphasis> is the number of the subdisk in
+	    the plex.  Thus <devicename>vinum</devicename> gives this
+	    subdisk the name <emphasis>myvol.p0.s0</emphasis>.</para>
 	</listitem>
       </itemizedlist>
 
@@ -547,29 +503,30 @@
       Plexes:         1 (8 configured)
       Subdisks:       1 (16 configured)
 
-	D a                     State: up       Device /dev/da3h        Avail: 2061/2573 MB (80%)
+	D a                     State: up       Device /dev/da3h      Avail: 2061/2573 MB (80%)
 
-	V myvol                 State: up       Plexes:       1 Size:        512 MB
+	V myvol                 State: up       Plexes:       1 Size:      512 MB
 
-	P myvol.p0            C State: up       Subdisks:     1 Size:        512 MB
+	P myvol.p0            C State: up       Subdisks:     1 Size:      512 MB
 
-	S myvol.p0.s0           State: up       PO:        0  B Size:        512 MB</programlisting>
+	S myvol.p0.s0           State: up       PO:        0  B Size:      512 MB</programlisting>
 
       <para>This output shows the brief listing format of
-	&man.gvinum.8;.  It is represented graphically in <xref
-	  linkend="vinum-simple-vol"/>.</para>
+	&man.gvinum.8;.  It is represented graphically in <link
+	  linkend="vinum-simple-vol"></link>.</para>
 
       <para>
 	<figure id="vinum-simple-vol">
-	  <title>A Simple Vinum Volume</title>
+	  <title>A Simple <devicename>vinum</devicename>
+	    Volume</title>
 
 	  <graphic fileref="vinum/vinum-simple-vol"/>
 	</figure></para>
 
       <para>This figure, and the ones which follow, represent a
-	volume, which contains the plexes, which in turn contain the
-	subdisks.  In this trivial example, the volume contains one
-	plex, and the plex contains one subdisk.</para>
+	volume, which contains the plexes, which in turn contain the
+	subdisks.  In this example, the volume contains one plex, and
+	the plex contains one subdisk.</para>
 
       <para>This particular volume has no specific advantage over a
 	conventional disk partition.  It contains a single plex, so it
@@ -597,11 +554,10 @@
 	    sd length 512m drive b</programlisting>
 
       <para>In this example, it was not necessary to specify a
-	definition of drive <emphasis>a</emphasis> again, since Vinum
-	keeps track of all objects in its configuration database.
-	After processing this definition, the configuration looks
-	like:</para>
-
+	definition of drive <emphasis>a</emphasis> again, since
+	<devicename>vinum</devicename> keeps track of all objects in
+	its configuration database.  After processing this definition,
+	the configuration looks like:</para>
 
       <programlisting width="97">
 	Drives:         2 (4 configured)
@@ -609,8 +565,8 @@
 	Plexes:         3 (8 configured)
 	Subdisks:       3 (16 configured)
 
-	D a                     State: up       Device /dev/da3h        Avail: 1549/2573 MB (60%)
-	D b                     State: up       Device /dev/da4h        Avail: 2061/2573 MB (80%)
+	D a                     State: up       Device /dev/da3h       Avail: 1549/2573 MB (60%)
+	D b                     State: up       Device /dev/da4h       Avail: 2061/2573 MB (80%)
 
     V myvol                 State: up       Plexes:       1 Size:        512 MB
     V mirror                State: up       Plexes:       2 Size:        512 MB
@@ -623,12 +579,13 @@
 	S mirror.p0.s0          State: up       PO:        0  B Size:        512 MB
 	S mirror.p1.s0          State: empty    PO:        0  B Size:        512 MB</programlisting>
 
-      <para><xref linkend="vinum-mirrored-vol"/> shows the structure
-	graphically.</para>
+      <para><link linkend="vinum-mirrored-vol"></link> shows the
+	structure graphically.</para>
 
       <para>
 	<figure id="vinum-mirrored-vol">
-	  <title>A Mirrored Vinum Volume</title>
+	  <title>A Mirrored <devicename>vinum</devicename>
+	    Volume</title>
 
 	  <graphic fileref="vinum/vinum-mirrored-vol"/>
 	</figure></para>
@@ -643,15 +600,15 @@
 
       <para>The mirrored volume in the previous example is more
 	resistant to failure than an unmirrored volume, but its
-	performance is less: each write to the volume requires a write
-	to both drives, using up a greater proportion of the total
-	disk bandwidth.  Performance considerations demand a different
-	approach: instead of mirroring, the data is striped across as
-	many disk drives as possible.  The following configuration
-	shows a volume with a plex striped across four disk
-	drives:</para>
+	performance is less as each write to the volume requires a
+	write to both drives, using up a greater proportion of the
+	total disk bandwidth.  Performance considerations demand a
+	different approach: instead of mirroring, the data is striped
+	across as many disk drives as possible.  The following
+	configuration shows a volume with a plex striped across four
+	disk drives:</para>
 
-	<programlisting>	drive c device /dev/da5h
+      <programlisting>        drive c device /dev/da5h
 	drive d device /dev/da6h
 	volume stripe
 	plex org striped 512k
@@ -661,8 +618,9 @@
 	  sd length 128m drive d</programlisting>
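The address arithmetic behind this striped layout can be sketched outside of
<devicename>vinum</devicename> itself.  Below is a minimal Python model,
assuming the 512 kB stripe size and four subdisks from the example
configuration; the function name is illustrative and not part of vinum's
actual code:

```python
STRIPE = 512 * 1024  # stripe size, from "plex org striped 512k"
NDISKS = 4           # four subdisks, one per drive

def locate(offset):
    """Map a byte offset within the striped plex to (subdisk index,
    byte offset within that subdisk).  Consecutive stripes rotate
    across the subdisks, so sequential I/O spreads over all drives."""
    stripe_no, within = divmod(offset, STRIPE)
    return stripe_no % NDISKS, (stripe_no // NDISKS) * STRIPE + within

# The first eight stripes land on subdisks 0 1 2 3 0 1 2 3.
print([locate(i * STRIPE)[0] for i in range(8)])
```

This also shows why the stripe size matters: transfers smaller than 512 kB
usually touch a single drive, while larger transfers span several.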
 
       <para>As before, it is not necessary to define the drives which
-	are already known to Vinum.  After processing this definition,
-	the configuration looks like:</para>
+	are already known to <devicename>vinum</devicename>.  After
+	processing this definition, the configuration looks
+	like:</para>
 
       <programlisting width="92">
 	Drives:         4 (4 configured)
@@ -694,15 +652,17 @@
 
       <para>
 	<figure id="vinum-striped-vol">
-	  <title>A Striped Vinum Volume</title>
+	  <title>A Striped <devicename>vinum</devicename>
+	    Volume</title>
 
 	  <graphic fileref="vinum/vinum-striped-vol"/>
 	</figure></para>
 
-      <para>This volume is represented in
-	<xref linkend="vinum-striped-vol"/>.  The darkness of the
-	stripes indicates the position within the plex address space:
-	the lightest stripes come first, the darkest last.</para>
+      <para>This volume is represented in <link
+	  linkend="vinum-striped-vol"></link>.  The darkness of the
+	stripes indicates the position within the plex address space,
+	where the lightest stripes come first and the darkest
+	last.</para>
     </sect2>
 
     <sect2>
@@ -729,16 +689,17 @@
         sd length 102480k drive b</programlisting>
 
       <para>The subdisks of the second plex are offset by two drives
-	from those of the first plex: this helps ensure that writes do
-	not go to the same subdisks even if a transfer goes over two
-	drives.</para>
+	from those of the first plex.  This helps to ensure that
+	writes do not go to the same subdisks even if a transfer goes
+	over two drives.</para>
 
-      <para><xref linkend="vinum-raid10-vol"/> represents the
+      <para><link linkend="vinum-raid10-vol"></link> represents the
 	structure of this volume.</para>
 
       <para>
 	<figure id="vinum-raid10-vol">
-	  <title>A Mirrored, Striped Vinum Volume</title>
+	  <title>A Mirrored, Striped <devicename>vinum</devicename>
+	    Volume</title>
 
 	  <graphic fileref="vinum/vinum-raid10-vol"/>
 	</figure></para>
@@ -748,28 +709,28 @@
   <sect1 id="vinum-object-naming">
     <title>Object Naming</title>
 
-    <para>As described above, Vinum assigns default names to plexes
-      and subdisks, although they may be overridden.  Overriding the
-      default names is not recommended: experience with the VERITAS
-      volume manager, which allows arbitrary naming of objects, has
-      shown that this flexibility does not bring a significant
-      advantage, and it can cause confusion.</para>
+    <para><devicename>vinum</devicename> assigns default names to
+      plexes and subdisks, although they may be overridden.
+      Overriding the default names is not recommended as it does not
+      bring a significant advantage, and it can cause
+      confusion.</para>
 
     <para>Names may contain any non-blank character, but it is
       recommended to restrict them to letters, digits and the
-      underscore characters.  The names of volumes, plexes and
+      underscore character.  The names of volumes, plexes, and
       subdisks may be up to 64 characters long, and the names of
       drives may be up to 32 characters long.</para>
 
-    <para>Vinum objects are assigned device nodes in the hierarchy
-      <filename class="directory">/dev/gvinum</filename>.  The
-      configuration shown above would cause Vinum to create the
-      following device nodes:</para>
+    <para><devicename>vinum</devicename> objects are assigned device
+      nodes in the hierarchy <filename
+	class="directory">/dev/gvinum</filename>.  The configuration
+      shown above would cause <devicename>vinum</devicename> to create
+      the following device nodes:</para>
 
     <itemizedlist>
       <listitem>
-	<para>Device entries for each volume.
-	  These are the main devices used by Vinum.  Thus the
+	<para>Device entries for each volume.  These are the main
+	  devices used by <devicename>vinum</devicename>.  The
 	  configuration above would include the devices
 	  <filename class="devicefile">/dev/gvinum/myvol</filename>,
 	  <filename class="devicefile">/dev/gvinum/mirror</filename>,
@@ -795,6 +756,7 @@
 
     <para>For example, consider the following configuration
       file:</para>
+
     <programlisting>	drive drive1 device /dev/sd1h
 	drive drive2 device /dev/sd2h
 	drive drive3 device /dev/sd3h
@@ -810,7 +772,8 @@
       following structure in <filename
 	class="directory">/dev/gvinum</filename>:</para>
 
-    <programlisting>	drwxr-xr-x  2 root  wheel       512 Apr 13 16:46 plex
+    <programlisting>	drwxr-xr-x  2 root  wheel       512 Apr 13 16:46 plex
 	crwxr-xr--  1 root  wheel   91,   2 Apr 13 16:46 s64
 	drwxr-xr-x  2 root  wheel       512 Apr 13 16:46 sd
 
@@ -826,58 +789,62 @@
     crwxr-xr--  1 root  wheel   91, 0x20300002 Apr 13 16:46 s64.p0.s3</programlisting>
 
     <para>Although it is recommended that plexes and subdisks should
-      not be allocated specific names, Vinum drives must be named.
-      This makes it possible to move a drive to a different location
-      and still recognize it automatically.  Drive names may be up to
-      32 characters long.</para>
+      not be allocated specific names,
+      <devicename>vinum</devicename> drives must be named.  This makes
+      it possible to move a drive to a different location and still
+      recognize it automatically.  Drive names may be up to 32
+      characters long.</para>
 
     <sect2>
       <title>Creating File Systems</title>
 
-	<para>Volumes appear to the system to be identical to disks,
-	  with one exception.  Unlike &unix; drives, Vinum does
-	  not partition volumes, which thus do not contain a partition
-	  table.  This has required modification to some disk
-	  utilities, notably &man.newfs.8;, which previously tried to
-	  interpret the last letter of a Vinum volume name as a
-	  partition identifier.  For example, a disk drive may have a
-	  name like <filename class="devicefile">/dev/ad0a</filename>
-	  or <filename class="devicefile">/dev/da2h</filename>.  These
-	  names represent the first partition
-	  (<devicename>a</devicename>) on the first (0) IDE disk
-	  (<devicename>ad</devicename>) and the eighth partition
-	  (<devicename>h</devicename>) on the third (2) SCSI disk
-	  (<devicename>da</devicename>) respectively.  By contrast, a
-	  Vinum volume might be called <filename
-	    class="devicefile">/dev/gvinum/concat</filename>, a name
-	  which has no relationship with a partition name.</para>
+      <para>Volumes appear to the system to be identical to disks,
+	with one exception.  Unlike &unix; drives,
+	<devicename>vinum</devicename> does not partition volumes,
+	which thus do not contain a partition table.  This has
+	required modification to some disk utilities, notably
+	&man.newfs.8;, so that it does not try to interpret the last
+	letter of a <devicename>vinum</devicename> volume name as a
+	partition identifier.  For example, a disk drive may have a
+	name like <filename class="devicefile">/dev/ad0a</filename>
+	or <filename class="devicefile">/dev/da2h</filename>.  These
+	names represent the first partition
+	(<devicename>a</devicename>) on the first (0) IDE disk
+	(<devicename>ad</devicename>) and the eighth partition
+	(<devicename>h</devicename>) on the third (2) SCSI disk
+	(<devicename>da</devicename>) respectively.  By contrast, a
+	<devicename>vinum</devicename> volume might be called
+	<filename class="devicefile">/dev/gvinum/concat</filename>,
+	which has no relationship with a partition name.</para>
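The naming convention just described can be checked mechanically.  This is a
small Python sketch of the distinction; the helper name is hypothetical, and
the point is only that classic BSD names encode a driver, unit, and
partition letter, while a <devicename>vinum</devicename> volume name
encodes nothing:

```python
import re

def parse_dev(name):
    """Parse a classic BSD disk device name such as 'ad0a' or 'da2h'
    into (driver, unit, partition index).  Partition letters a-h map
    to indices 0-7.  Names without that shape, such as a vinum volume
    name, yield None."""
    m = re.fullmatch(r"([a-z]+?)(\d+)([a-h])", name)
    if not m:
        return None
    driver, unit, part = m.groups()
    return driver, int(unit), ord(part) - ord("a")

print(parse_dev("ad0a"))    # first partition on the first IDE disk
print(parse_dev("da2h"))    # eighth partition on the third SCSI disk
print(parse_dev("concat"))  # a vinum volume carries no partition letter
```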
 
-	<para>In order to create a file system on this volume, use
-	  &man.newfs.8;:</para>
+      <para>In order to create a file system on this volume, use
+	&man.newfs.8;:</para>
 
-	<screen>&prompt.root; <userinput>newfs /dev/gvinum/concat</userinput></screen>
+      <screen>&prompt.root; <userinput>newfs /dev/gvinum/concat</userinput></screen>
     </sect2>
   </sect1>
 
   <sect1 id="vinum-config">
-    <title>Configuring Vinum</title>
+    <title>Configuring <devicename>vinum</devicename></title>
 
     <para>The <filename>GENERIC</filename> kernel does not contain
-      Vinum.  It is possible to build a special kernel which includes
-      Vinum, but this is not recommended.  The standard way to start
-      Vinum is as a kernel module (<acronym>kld</acronym>).  You do
-      not even need to use &man.kldload.8; for Vinum: when you start
-      &man.gvinum.8;, it checks whether the module has been loaded,
-      and if it is not, it loads it automatically.</para>
+      <devicename>vinum</devicename>.  It is possible to build a
+      custom kernel which includes <devicename>vinum</devicename>, but
+      this is not recommended.  The standard way to start
+      <devicename>vinum</devicename> is as a kernel module.
+      &man.kldload.8; is not needed because when &man.gvinum.8;
+      starts, it checks whether the module has been loaded, and if it
+      is not, it loads it automatically.</para>
 
 
     <sect2>
       <title>Startup</title>
 
-      <para>Vinum stores configuration information on the disk slices
-	in essentially the same form as in the configuration files.
-	When reading from the configuration database, Vinum recognizes
-	a number of keywords which are not allowed in the
+      <para><devicename>vinum</devicename> stores configuration
+	information on the disk slices in essentially the same form as
+	in the configuration files.  When reading from the
+	configuration database, <devicename>vinum</devicename>
+	recognizes a number of keywords which are not allowed in the
 	configuration files.  For example, a disk configuration might
 	contain the following text:</para>
 
@@ -902,14 +869,14 @@ sd name bigraid.p0.s3 drive d plex bigra
 sd name bigraid.p0.s4 drive e plex bigraid.p0 state initializing len 4194304b driveoff set 1573129b plexoffset 16777216b</programlisting>
 
 	<para>The obvious differences here are the presence of
-	  explicit location information and naming (both of which are
-	  also allowed, but discouraged, for use by the user) and the
-	  information on the states (which are not available to the
-	  user).  Vinum does not store information about drives in the
-	  configuration information: it finds the drives by scanning
-	  the configured disk drives for partitions with a Vinum
-	  label.  This enables Vinum to identify drives correctly even
-	  if they have been assigned different &unix; drive
+	  explicit location information and naming, both of which are
+	  allowed but discouraged, and the information on the states.
+	  <devicename>vinum</devicename> does not store information
+	  about drives in the configuration information.  It finds the
+	  drives by scanning the configured disk drives for partitions
+	  with a <devicename>vinum</devicename> label.  This enables
+	  <devicename>vinum</devicename> to identify drives correctly
+	  even if they have been assigned different &unix; drive
 	  IDs.</para>
 
 	<sect3 id="vinum-rc-startup">
@@ -922,110 +889,87 @@ sd name bigraid.p0.s4 drive e plex bigra
 	    <literal>geom_vinum_load="YES"</literal> to
 	    <filename>/boot/loader.conf</filename>.</para>
 
-	  <para>When you start Vinum with the <command>gvinum
-	      start</command> command, Vinum reads the configuration
-	    database from one of the Vinum drives.  Under normal
-	    circumstances, each drive contains an identical copy of
-	    the configuration database, so it does not matter which
-	    drive is read.  After a crash, however, Vinum must
-	    determine which drive was updated most recently and read
-	    the configuration from this drive.  It then updates the
-	    configuration if necessary from progressively older
+	  <para>When <devicename>vinum</devicename> is started with
+	    <command>gvinum start</command>,
+	    <devicename>vinum</devicename> reads the configuration
+	    database from one of the <devicename>vinum</devicename>
+	    drives.  Under normal circumstances, each drive contains
+	    an identical copy of the configuration database, so it
+	    does not matter which drive is read.  After a crash,
+	    however, <devicename>vinum</devicename> must determine
+	    which drive was updated most recently and read the
+	    configuration from this drive.  It then updates the
+	    configuration, if necessary, from progressively older
 	    drives.</para>
 	</sect3>
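The recovery rule described above (read the most recently updated drive
first, then fill in from progressively older drives) amounts to a
newest-wins merge.  The following Python model illustrates the policy only;
the data structures are invented for the sketch and bear no relation to
vinum's on-disk format:

```python
def recover_config(drives):
    """drives: list of (timestamp, {object name: definition}) pairs,
    one per vinum drive.  Walk the drives from newest to oldest; the
    first (newest) definition seen for each object wins."""
    merged = {}
    for _, config in sorted(drives, key=lambda d: d[0], reverse=True):
        for name, definition in config.items():
            merged.setdefault(name, definition)
    return merged

drives = [
    (100, {"myvol": "v1", "mirror": "old"}),
    (200, {"myvol": "v2"}),  # updated most recently
]
print(recover_config(drives))  # {'myvol': 'v2', 'mirror': 'old'}
```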
       </sect2>
     </sect1>
 
     <sect1 id="vinum-root">
-      <title>Using Vinum for the Root Filesystem</title>
+      <title>Using <devicename>vinum</devicename> for the Root
+	File System</title>
 
-      <para>For a machine that has fully-mirrored filesystems using
-	Vinum, it is desirable to also mirror the root filesystem.
-	Setting up such a configuration is less trivial than mirroring
-	an arbitrary filesystem because:</para>
+      <para>For a machine that has fully mirrored file systems using
+	<devicename>vinum</devicename>, it is desirable to also
+	mirror the root file system.  Setting up such a configuration
+	is more complex than mirroring an arbitrary file system
+	because:</para>
 
       <itemizedlist>
 	<listitem>
-	  <para>The root filesystem must be available very early
-	    during the boot process, so the Vinum infrastructure must
-	    alrqeady be available at this time.</para>
+	  <para>The root file system must be available very early
+	    during the boot process, so the
+	    <devicename>vinum</devicename> infrastructure must
+	    already be available at this time.</para>
 	</listitem>
 	<listitem>
-	  <para>The volume containing the root filesystem also
-	    contains the system bootstrap and the kernel, which must
-	    be read using the host system's native utilities (e. g.
-	    the BIOS on PC-class machines) which often cannot be
-	    taught about the details of Vinum.</para>
+	  <para>The volume containing the root file system also
+	    contains the system bootstrap and the kernel.  These must
+	    be read using the host system's native utilities, such as

*** DIFF OUTPUT TRUNCATED AT 1000 LINES ***


More information about the svn-doc-all mailing list