misc/124969: gvinum raid5 plex does not detect missing subdisk

lulf at stud.ntnu.no
Wed Jun 25 19:50:05 UTC 2008


The following reply was made to PR kern/124969; it has been noted by GNATS.

From: lulf at stud.ntnu.no
To: Dan Ports <drkp-f at ambulatoryclam.net>
Cc: freebsd-gnats-submit at FreeBSD.org
Subject: Re: misc/124969: gvinum raid5 plex does not detect missing subdisk
Date: Wed, 25 Jun 2008 21:42:06 +0200

 Quoting Dan Ports <drkp-f at ambulatoryclam.net>:
 
 >
 >> Number:         124969
 >> Category:       misc
 >> Synopsis:       gvinum raid5 plex does not detect missing subdisk
 >> Confidential:   no
 >> Severity:       serious
 >> Priority:       medium
 >> Responsible:    freebsd-bugs
 >> State:          open
 >> Quarter:
 >> Keywords:
 >> Date-Required:
 >> Class:          sw-bug
 >> Submitter-Id:   current-users
 >> Arrival-Date:   Wed Jun 25 01:30:01 UTC 2008
 >> Closed-Date:
 >> Last-Modified:
 >> Originator:     Dan Ports
 >> Release:        6.3-STABLE
 >> Organization:
 >> Environment:
 > FreeBSD clamshell.ambulatoryclam.net 6.3-STABLE FreeBSD 6.3-STABLE
 > #4: Sat Jun 14 10:05:12 PDT 2008
 > root at clamshell.ambulatoryclam.net:/usr/obj/usr/src/sys/CLAMSHELL i386
 >> Description:
 > I am using gvinum to create a RAID 5 array with three drives (i.e. a
 > single raid5 plex with three subdisks). Recently, one drive failed.
 > While the failed drive was still present at boot, the array continued
 > to work fine, albeit degraded, as one would expect. However, with
 > the drive removed, gvinum does not properly detect the plex's
 > configuration on boot:
 >
 > 2 drives:
 > D b                     State: up       /dev/ad11s1d    A: 0/474891 MB (0%)
 > D a                     State: up       /dev/ad10s1d    A: 0/474891 MB (0%)
 >
 > 1 volume:
 > V space                 State: up       Plexes:       1 Size:        463 GB
 >
 > 1 plex:
 > P space.p0           R5 State: degraded Subdisks:     2 Size:        463 GB
 >
 > 3 subdisks:
 > S space.p0.s2           State: down     D: c            Size:        463 GB
 > S space.p0.s1           State: up       D: b            Size:        463 GB
 > S space.p0.s0           State: up       D: a            Size:        463 GB
 >
 > Note that space.p0 has a capacity of 463 GB, the size of one drive,
 > when it should be twice that: a three-subdisk RAID 5 plex exposes
 > (3 - 1) x 463 GB = 926 GB of usable space. It seems as though the
 > plex isn't aware that the downed subdisk ever existed. As a result,
 > the volume is up, but its data is not valid.
 >
 > It seems a rather alarming flaw that a RAID 5 array fails to work
 > correctly when one drive is not present!
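 >
 > (One way to narrow down where the record is lost might be to compare
 > the live objects against the configuration gvinum would write back
 > to the drives; I haven't verified this, so treat it as a sketch:
 >
 >   # gvinum list -v       # live drives/plexes/subdisks, in detail
 >   # gvinum printconfig   # current configuration in create-file form
 >
 > If printconfig no longer records space.p0.s2 as belonging to
 > space.p0, the plex was reassembled without it rather than being
 > marked degraded.)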
 >> How-To-Repeat:
 > Create a gvinum raid5 plex with three subdisks, then remove the
 > drive corresponding to one of them.
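 >
 > A minimal sketch of how I set things up, in case it helps reproduce
 > this (the third device node and the /tmp/space.conf name are guesses
 > for illustration, and the 512k stripe size is an arbitrary choice;
 > the syntax follows the create-file format in gvinum(8)):
 >
 >   # cat /tmp/space.conf
 >   drive a device /dev/ad10s1d
 >   drive b device /dev/ad11s1d
 >   drive c device /dev/ad12s1d
 >   volume space
 >     plex org raid5 512k
 >       sd length 0 drive a
 >       sd length 0 drive b
 >       sd length 0 drive c
 >   # gvinum create /tmp/space.conf
 >   # newfs /dev/gvinum/space
 >
 > Then power down, disconnect the drive backing "c", and boot; gvinum
 > list shows the truncated plex from the listing above.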
 >> Fix:
 > No fix, but the following thread appears to describe the same
 > problem, and includes an analysis. However, the problem appears to
 > still exist. (I'm running 6.3-STABLE, and haven't tried either
 > 7-STABLE or -CURRENT, but a cursory examination of the cvs history
 > provides no indication that the problem has been fixed in other
 > branches.)
 >
 > http://lists.freebsd.org/pipermail/freebsd-geom/2007-March/002109.html
 >
 > I'm willing to poke at this problem a bit more, but am probably the
 > wrong person to do so, since I currently have neither the time nor
 > any geom experience.
 >
 >> Release-Note:
 >> Audit-Trail:
 >> Unformatted:
 
 This is a known issue, and I've fixed it in patches that are pending review
 (for a few months now... ;).  If it's very critical for you right now, I can
 create a patch for you and request a commit for it, but since there is some
 gvinum restructuring I'd like to get into the tree, I'd rather not fix the
 same issues twice. But I agree this is a special case, so I'll try to get a
 fix out soon. I'm sorry for the inconvenience.
 
 -- 
 Ulf Lilleengen
 
 

