[Review request] improving (g)vinum documentation

Ben Kaduk minimarmot at gmail.com
Fri May 15 07:54:36 UTC 2009


On Tue, May 12, 2009 at 3:46 PM, Ulf Lilleengen
<ulf.lilleengen at gmail.com> wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> Hi,
>
> As part of SoC 2007, I extended the gvinum documentation in the handbook
> with some examples that I would like to commit. It would be good if
> someone could review the language. Thanks!
>
> Patch here:
> http://people.freebsd.org/~lulf/patches/doc/vinum_doc.diff

I'm not a vinum user, which makes me excellently suited to getting
confused and asking easy questions :)


--- chapter.sgml.orig	2008-12-22 22:51:29.000000000 +0100
+++ chapter.sgml	2009-05-11 21:17:38.986943400 +0200
@@ -742,6 +742,99 @@
         </figure>
       </para>
     </sect2>
+    <sect2>
+      <title>Rebuilding a RAID-5 volume</title>
+
+      <para><anchor id="vinum-rebuild">RAID-5 rebuilding is a frequent task for
+        many administrators, and gvinum supports online rebuild if RAID-5

"of".  I might s/rebuild/rebuilding/, too.


+        plexes. This means that the filesystem on your volume may very well be
+        mounted while this is going on. A typical RAID-5 configuration might

This sentence is fairly informal, but not very informational -- I
would remove it.

+        look like this:</para>
+
+      <programlisting>
+        drive a device /dev/ad1
+        drive b device /dev/ad2
+        drive c device /dev/ad3
+        volume raid5vol
+        plex org raid5 512k name raid5vol.p0
+        sd drive a name raid5vol.p0.s0
+        sd drive b name raid5vol.p0.s1
+        sd drive c name raid5vol.p0.s2</programlisting>
+
+      <para>If one of the drives fails (let's say ad3 for instance), the subdisk

s/let's say ad3/ad3,/

+        using that drive will fail.  When the drive is replaced, a new drive
+	will have to be created for vinum to use:</para>
+
+      <programlisting>
+        drive d device /dev/ad4</programlisting>


Do I create this drive by putting a line like that into some
configuration file?  If so, which file?  Do I need to send a signal to
a daemon after changing the file?  If not, what command do I need to
run?
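
For what it's worth, my guess from skimming gvinum(8) is that the line
goes into a file that is passed to "gvinum create", something like:

        # echo "drive d device /dev/ad4" > /tmp/newdrive.conf
        # gvinum create /tmp/newdrive.conf

(Untested, and /tmp/newdrive.conf is just a name I made up.)  Spelling
the steps out in the text would help readers like me.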

+
+      <para>When this drive is created, the subdisk using the failed drive will
+        have to be moved to the new drive. This can be done with the following
+        command:</para>
+
+      <programlisting>
+        gvinum move d raid5vol.p0.s2</programlisting>
+
+      <para>This will bind the subdisk to the new drive, and set it's state to

The subdisk is ... s2?  Maybe mention it explicitly for clarity?
Probably also say "the new drive d" to reinforce that this is the same
one added above.  (Also, s/it's/its/.)

+        'stale'. This means the plex is ready for rebuilding:</para>
+
+      <programlisting>
+        gvinum start raid5vol.p0</programlisting>
+
+      <para>This command initiates the rebuild of the plex. The status of the
+        rebuild can be checked with the 'list' command, which shows how big
+        precentage of the plex that is rebuilt.</para>

s/how big precentage/what percentage/ and s/that is rebuilt/has been
rebuilt/.  (Note the "precentage" typo.)

Maybe expand 'list' to 'gvinum list' ?
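
I.e., the reader would run something like:

        # gvinum list

right?  Showing a line or two of that output with the rebuild
percentage would be even better (I don't have a vinum setup handy to
generate it myself).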


+    </sect2>
+    <sect2>
+      <title>Growing a RAID-5 volume</title>
+
+      <para><anchor id="vinum-growing">Just like rebuilding, growing is a task
+        that is not that frequent, but rather very handy for an administrator.

The phrasing here is somewhat awkward.  Try "not very frequent" and
"can be handy" (no rather)

+        Gvinum supports online growing of RAID-5 plexes the same way it does
+        with rebuilds. Also note that growing striped (RAID-0) plexes is also

s/with/for/.  Also, "Also note that ... is also supported" has one
"also" too many.

+        supported, and the process of doing this is the same as for RAID-5
+        plexes. A typical configuration before expanding might look like

Is "growing" the technical term for this process?  If so, it should be
used in place
of "expanding", here.  Otherwise, I think s/expanding/the expansion/
would be more clear.

+        this:</para>
+
+      <programlisting>
+        drive a device /dev/ad1
+        drive b device /dev/ad2
+        drive c device /dev/ad3
+        volume raid5vol
+        plex org raid5 512k name raid5vol.p0
+        sd drive a name raid5vol.p0.s0
+        sd drive b name raid5vol.p0.s1
+        sd drive c name raid5vol.p0.s2</programlisting>
+
+      <para>Let us say we want to expand this array with a new drive. There are
+        two ways to do this. One way is to extend the configuration and create
+	the drive manually:</para>
+
+      <programlisting>
+        drive d device /dev/ad4
+        sd drive d name raid5vol.p0.s3 plex raid5vol.p0</programlisting>
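
Same question as in the rebuild section -- do these lines go into a
file that is passed to "gvinum create"?  E.g. (my guess, with a
made-up filename):

        # echo "drive d device /dev/ad4" > /tmp/grow.conf
        # echo "sd drive d name raid5vol.p0.s3 plex raid5vol.p0" >> /tmp/grow.conf
        # gvinum create /tmp/grow.conf

If so, saying that explicitly would help.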
+
+      <para>However, the following is a short version of the same:</para>
+
+      <programlisting>
+        grow raid5vol.p0 /dev/ad4</programlisting>
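
The other commands in this section are prefixed with "gvinum" -- should
this one be, too?  I.e. (my guess):

        gvinum grow raid5vol.p0 /dev/ad4

Or are both forms meant to be typed at an interactive gvinum prompt?
Either way, being consistent would avoid confusion.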
+
+      <para>After the configuration is created, the state of the plex will be
+        set to 'growable'. This state means that the plex is capable of being
+        expanded. The size of the plex is not changed until the growing is
+        complete. First, start the growing process:</para>
+
+      <programlisting>
+        gvinum start raid5vol.p0</programlisting>
+
+      <para>This command initiates the growing process. Just like when
+        rebuilding a plex, you are able to watch the status of the growing
+        process with the 'list' command, which shows how big precentage of the
+        plex that is grown. When the growing is finished, the plex will

s/how big/what/ and s/that is grown/has grown/ -- and the "precentage"
typo is here, too.

+        hopefully be up again, and the volume will have the new size. Remember
+        that if UFS is run on top of the volume, the filesystem itself will
+        also have to be grown using growfs.</para>

markup on growfs?
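
An example invocation might not hurt either -- presumably something
like:

        # growfs /dev/gvinum/raid5vol

(the device path is my guess).  I believe growfs also requires the
filesystem to be unmounted first, which seems worth mentioning given
the "online" emphasis above.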

+    </sect2>
   </sect1>

   <sect1 id="vinum-object-naming">



Thanks for writing this up -- it will be a really handy reference for
those of us who don't know what we're doing!

-Ben Kaduk


