gvinum raid10 stale

Rick C. Petty rick-freebsd2008 at kiwi-computer.com
Thu Dec 18 10:23:54 PST 2008


On Thu, Dec 18, 2008 at 06:57:53PM +0100, Ulf Lilleengen wrote:
> On Thu, Dec 18, 2008 at 12:20:26PM +0100, Dimitri Aivaliotis wrote:
> > Hi,
> > 
> > I created a raid10 using gvinum with the following config:
> > 
> > drive a device /dev/da2
> > drive b device /dev/da3
> > volume raid10
> >    plex org striped 512k
> *SNIP*
> Why do you create 32 subdisks for each stripe? They are still on the same
> drive, and should not give you any performance increase as I see it. Just
> having one subdisk for each drive and mirroring them would give the same
> effect, and would allow you to expand the size.

I agree with Ulf.  Why are you creating so many subdisks?  It's pretty
unnecessary and just adds confusion and trouble.
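
For two drives, a plain mirror of two concat plexes should give the same
effect.  Something along these lines ought to do it (untested sketch; the
volume name is made up, the device paths are just copied from your config,
so adjust to taste):

  drive a device /dev/da2
  drive b device /dev/da3
  volume mirror
    plex org concat
      sd length 0 drive a
    plex org concat
      sd length 0 drive b

If I remember right, "length 0" tells vinum to use all remaining space on
the drive, which also leaves you room to grow things later.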

> > I wanted to add two additional disks to this raid10, so I shut down the
> > server, inserted the disks and brought it back up.  When the system
> > booted, it reported the filesystem as needing a check.  Doing a gvinum
> > list, I saw that all subdisks were stale, so both plexes were down.
> > After rebooting again (to remove the additional disks), the problem
> > persisted.  My assumption that the new disks caused the old subdisks
> > to be stale wasn't true, as I later noticed that a different server
> > with the same config has a plex down as well because all subdisks on
> > that plex are stale.  The servers are running 6.3-RELEASE-p1 and
> > 6.2-RELEASE-p9, respectively.

Were the plexes and subdisks all up before you restarted?  After you create
stuff in gvinum, the new subdisks are marked stale until you start the
plexes (which syncs them) or force the subdisks up.  I'm not sure if you did
this step in between.  Also, it is possible that gvinum wasn't marked clean
because a drive was "disconnected" at shutdown or not present immediately at
startup.  Other than that, I've not seen gvinum mark things down
inexplicably.
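
For reference, the sequence I'd expect after a fresh create is roughly this
(from memory, so double-check against gvinum(8) on your release; the config
file path is just an example):

  gvinum create /path/to/raid10.conf   # load the config
  gvinum start raid10                  # bring the plexes up and sync them
  gvinum list                          # subdisks should read "up" afterwards

If that start step never happened, the subdisks would have stayed stale
across reboots.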

> > (I wound up doing a 'gvinum setstate -f up raid10.p1.s<num>' 32 times
> > to bring one plex back up on the server that had both down.)
> > 
> > My questions:
> > 
> > - Why would these subdisks be set stale?
> I don't see how the subdisks could go stale after inserting the disks unless
> they changed names, and the new disks you inserted were named with the old
> disks' device numbers.

This shouldn't happen unless the new disks had used vinum in the past and
there was a name collision.  Unless a drive was marked down for a period of
time or you didn't bring the plexes up after creating them, I don't know
why this would happen.
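
A quick way to check for that sort of collision is to look at how the
drives attached, e.g. (commands from memory):

  gvinum ld     # list the vinum drives and the devices they sit on
  gvinum list   # full view of drives, volumes, plexes and subdisks

A leftover vinum label on one of the new disks should show up there under
its old drive name.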

> > - How can I recover the other plex, such that the data continues to be
> > striped+mirrored correctly?
> For the volume where you have one good plex, you can do:
> gvinum start raid10 
> 
> This command will sync the bad plex from the good one.

Agreed.  This is the proper procedure if one plex is good, but you should
also be able to mount that volume: you can mount any volume that isn't down.
A volume is only down if all of its plexes are down, and a plex is down if
any of its subdisks are down.  You can also mount a plex directly, which
I've done before when I wanted to pull my data off without changing the
vinum state.  Subdisks can be mounted as well, but that doesn't work when
you use stripes (multiple subdisks per plex).  This is one of the many
reasons I gave up using stripes long ago.  =)
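
In case it helps, the plex (and subdisk) device nodes should live under
/dev/gvinum/plex/ and /dev/gvinum/sd/, so pulling data off a good plex
read-only would look something like this (paths from memory, check what you
actually have under /dev/gvinum):

  mount -o ro /dev/gvinum/plex/raid10.p0 /mnt

Mounting the volume itself (/dev/gvinum/raid10) is of course the normal
case, and works as long as the volume isn't down.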

Good luck,

-- Rick C. Petty

