gvinum raid10 stale

Dimitri Aivaliotis aglarond at gmail.com
Fri Dec 19 02:50:23 PST 2008


Hi Rick,

On Thu, Dec 18, 2008 at 7:23 PM, Rick C. Petty
<rick-freebsd2008 at kiwi-computer.com> wrote:
> On Thu, Dec 18, 2008 at 06:57:53PM +0100, Ulf Lilleengen wrote:
>> On tor, des 18, 2008 at 12:20:26pm +0100, Dimitri Aivaliotis wrote:

> I agree with Ulf.  Why are you creating so many subdisks?  It's pretty
> unnecessary and just adds confusion and trouble.

I agree with you about the confusion and trouble. :)

> Were the plexes and subdisks all up before you restarted?  After you create
> stuff in gvinum, sync'd subdisks are marked as stale until you start the
> plexes or force the subdisks up.  I'm not sure if you did this step in
> between.  Also, it is possible that gvinum wasn't marked clean because a
> drive was "disconnected" at shutdown or not present immediately at startup.
> Other than that, I've not seen gvinum mark things down inexplicably.

This wouldn't explain why all the subdisks on one plex of the server
that wasn't restarted were marked as stale.  As far as the logs show,
there's no reason for it.  I also don't know how long the one plex has
been down, as the volume itself remained up.  Both plexes were up
initially though.

Is a 'gvinum start' necessary after a 'gvinum create'?  I know that I
hadn't issued a start until just now, but I didn't see the need for it,
since gvinum itself was already running.  Perhaps I'm just confusing the
terminology.
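
In other words, should the sequence after creating the volume have been
something like this?  (The config path and volume name here are just
placeholders.)

    gvinum create /etc/gvinum.conf   # read in the volume definition
    gvinum start data                # sync the newly created plexes up
    gvinum list                      # check that every plex and sd is 'up'

I had assumed that the create alone would leave everything up.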

>> I don't see how the subdisks could go stale after inserting the disks unless
>> they changed names and the new disks you inserted were named with the old
>> disks' device numbers.
>
> This shouldn't happen unless the new disks had used vinum in the past and
> there was a name collision.  Unless a drive was marked down for a period of
> time or you didn't bring the plexes up after creating them, I don't know
> why this would happen.

The volume was up before the restart.  I can't speak to the state of
the individual plexes.  The new drives had not used vinum in the past.

> Agreed.  This is the proper procedure if one plex is good, but you should
> be able to mount that volume-- you can mount any volume that isn't down.  A
> volume is only down if all of its plexes are down.  A plex is down if any
> of its subdisks are down.  You can also mount a plex which I've done before
> when I didn't want vinum state to be changed but wanted to pull my data
> off.  You can also mount subdisks but when you use stripes (multiple
> subdisks per plex), this won't work.  This is one of the many reasons I
> gave up using stripes long ago.  =)
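
That's good to know.  If I ever need to pull data off the one good plex,
I take it that means mounting the plex device directly, roughly like this
(read-only, with a made-up plex name, and assuming gvinum exposes plex
nodes under /dev/gvinum/plex/ the way classic vinum did):

    mount -r /dev/gvinum/plex/data.p1 /mnt   # plex name is illustrative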

What would you recommend in a situation like this?  I had followed the
"Resilience and Performance" section of
http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/vinum-examples.html
when initially creating the volume.  I want a RAID10-like solution
which can be easily expanded in the future.
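
In other words, would something simpler along these lines be the way to
go: one subdisk per drive per plex, with two striped plexes mirrored?
(The drive names, devices and stripe size below are placeholders, not my
actual configuration.)

    # placeholder drives, not my real devices
    drive d1 device /dev/da1s1a
    drive d2 device /dev/da2s1a
    drive d3 device /dev/da3s1a
    drive d4 device /dev/da4s1a
    volume data
      plex org striped 512k
        sd length 0 drive d1
        sd length 0 drive d2
      plex org striped 512k
        sd length 0 drive d3
        sd length 0 drive d4

And when it comes time to expand, would I just add another pair of drives
and a subdisk to each plex?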


> Good luck,

Thanks!

- Dimitri

