gvinum raid10 stale

Rick C. Petty rick-freebsd2008 at kiwi-computer.com
Fri Dec 19 08:16:05 PST 2008


On Fri, Dec 19, 2008 at 11:50:22AM +0100, Dimitri Aivaliotis wrote:
> Hi Rick,
> 
> > Were the plexes and subdisks all up before you restarted?  After you create
> > stuff in gvinum, sync'd subdisks are marked as stale until you start the
> > plexes or force the subdisks up.  I'm not sure if you did this step in
> > between.  Also, it is possible that gvinum wasn't marked clean because a
> > drive was "disconnected" at shutdown or not present immediately at startup.
> > Other than that, I've not seen gvinum mark things down inexplicably.
> 
> This wouldn't explain why all the subdisks on one plex of the server
> that wasn't restarted were marked as stale.  As far as the logs show,
> there's no reason for it.  I also don't know how long the one plex has
> been down, as the volume itself remained up.  Both plexes were up
> initially though.

gvinum is pretty noisy about these things.  I would check in
/var/log/messages* to see if there are any lines containing "gvinum".
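Something along these lines should surface any state changes (a sketch only;
the sample log text below is fabricated, and real GEOM_VINUM messages vary by
FreeBSD release):

```shell
# Fabricated sample log lines, for illustration only.
printf '%s\n' \
  'Dec 19 03:11:02 host kernel: GEOM_VINUM: subdisk data.p1.s0 state change: up -> stale' \
  'Dec 19 03:11:02 host kernel: GEOM_VINUM: plex data.p1 state change: up -> degraded' \
  > /tmp/messages.sample

# On the real system, point this at the rotated logs instead:
#   grep -i vinum /var/log/messages*
grep -i vinum /tmp/messages.sample
```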

> Is a 'gvinum start' necessary after a 'gvinum create'?  I know that I
> hadn't issued a start until just now, but I didn't see the need for
> it, as gvinum was already started.  Perhaps this is a naming issue.

It is in some cases, I believe.  Ulf has a patch that no longer requires
this, at least in the case of mirrors, and it has saved me loads of time!
If you're saying the plexes were up at one point, my suspicion is that you
did start them at some point.
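For what it's worth, the sequence I'd expect after a fresh create looks
roughly like this (drive names, devices, and sizes here are made up; see the
handbook for the exact config syntax).  Without the patch, the newly created
subdisks sit stale until you start them:

```
# hypothetical config file, mirror.conf
drive d1 device /dev/ad4s1h
drive d2 device /dev/ad6s1h
volume data
  plex org concat
    sd length 0 drive d1
  plex org concat
    sd length 0 drive d2
```

    # gvinum create mirror.conf
    # gvinum start data      (bring the stale subdisks up / start the sync)
    # gvinum list            (verify the plexes show "up" before relying on them)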

> The volume was up before the restart.  I can't speak to the state of
> the individual plexes.  The new drives had not used vinum in the past.

If the volume is up, it should be mountable.

> > Agreed.  This is the proper procedure if one plex is good, but you should
> > be able to mount that volume-- you can mount any volume that isn't down.  A
> > volume is only down if all of its plexes are down.  A plex is down if any
> > of its subdisks are down.  You can also mount a plex which I've done before
> > when I didn't want vinum state to be changed but wanted to pull my data
> > off.  You can also mount subdisks but when you use stripes (multiple
> > subdisks per plex), this won't work.  This is one of the many reasons I
> > gave up using stripes long ago.  =)
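As a concrete sketch of the above (device names are illustrative; gvinum
exposes its objects under /dev/gvinum, with plexes and subdisks in
subdirectories):

```
# mount the volume -- works whenever the volume isn't down:
#   mount /dev/gvinum/data /mnt

# mount a single plex read-only, to pull data off without
# letting vinum state change:
#   mount -o ro /dev/gvinum/plex/data.p0 /mnt
```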
> 
> What would you recommend in a situation like this?  I had followed the
> "Resilience and Performance" section of
> http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/vinum-examples.html
> when initially creating the volume.  I want a RAID10-like solution
> which can be easily expanded in the future.

Well I personally always sacrifice performance for resilience and I just
use gvinum for mirrors and volume management.  With the speed and cost of
SATA drives, I hardly need those few extra seconds per day of use.

You should be able to add mirrors pretty easily, as long as they are
mirrors of stripes (since you can't stripe your mirrors in gvinum), but you
have to make sure each mirror (plex) comes out to the same total size,
regardless of your stripe (subdisk) sizes.
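To make the sizing point concrete (hypothetical names and sizes; only the
plex totals have to match):

```
volume data
  # one plex striped across two 500 GB subdisks...
  plex org striped 256k
    sd length 500g drive d1
    sd length 500g drive d2
  # ...mirrored by a single 1 TB concatenated subdisk:
  # different layout, same total size
  plex org concat
    sd length 1000g drive d3
```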

I add to my mirrors regularly, usually just to move volumes around.  I
think the confusion here comes from the number of subdisks per plex you
have, which is unnecessary.

-- Rick C. Petty


More information about the freebsd-geom mailing list