Problem restarting gvinum raid-5

glz goran.lowkrantz at ismobile.com
Fri Jul 7 14:35:55 UTC 2006


First, some info I missed in my earlier mail:
 > uname -a
FreeBSD byleist.hq.ismobile.com 6.1-STABLE FreeBSD 6.1-STABLE #1: Mon Jun 26 20:37:45 CEST 2006 root at byleist.hq.ismobile.com:/usr/obj/usr/src/sys/BYLEISTSMP  i386

So I continued to dig into this, and it seems that the plex is not
rebuilding because the geom is open, i.e. this routine returns 1:

/* Check if any consumer of the given geom is open. */
int
gv_is_open(struct g_geom *gp)
{
        struct g_consumer *cp;

        if (gp == NULL)
                return (0);

        /* acr/acw/ace are the read, write and exclusive access counts. */
        LIST_FOREACH(cp, &gp->consumer, consumer) {
                if (cp->acr || cp->acw || cp->ace)
                        return (1);
        }

        return (0);
}
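
If I read this right, the rebuild path presumably refuses to run while
this returns 1, which would also explain the errno 16 (EBUSY) we get from
start. A hypothetical sketch of what I think the guard looks like
(gv_rebuild_guard is my name, not the real caller's):

/*
 * Hypothetical sketch of the guard I suspect sits in the rebuild path;
 * the real call site in geom_vinum may look different.
 */
static int
gv_rebuild_guard(struct gv_plex *p)
{
        /* Refuse to rebuild while any consumer holds the geom open. */
        if (gv_is_open(p->geom))
                return (EBUSY);         /* errno 16, as seen from gvinum. */

        /* ...otherwise the parity rebuild would start here... */
        return (0);
}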

What does this mean? How do I make sure the geom is not opened until I
can start the plex? I have tried single-user mode with the file system
not mounted, but it does not help.
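
Two userland checks worth doing from single-user mode (assuming the open
consumer comes from userland at all): whether any process still has the
volume's device node open, and whether the volume is used as a swap
device:

 > fstat /dev/gvinum/imap
 > swapinfo

If both come up empty, the open reference presumably comes from inside
GEOM itself; the kern.geom.confxml dump should show the r/w/e counts on
each consumer:

 > sysctl kern.geom.confxml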

 From the code I have read, this state should be the proper one to allow
a rebuild of the plex:
5 drives:
D disk5                 State: up       /dev/da6s1a     A: 0/17492 MB (0%)
D disk4                 State: up       /dev/da5s1a     A: 0/17492 MB (0%)
D disk3                 State: up       /dev/da4s1a     A: 0/17492 MB (0%)
D disk2                 State: up       /dev/da3s1a     A: 0/17492 MB (0%)
D disk1                 State: up       /dev/da2s1a     A: 0/17492 MB (0%)

1 volume:
V imap                  State: up       Plexes:       1 Size:         68 GB

1 plex:
P imap.p0            R5 State: degraded Subdisks:     5 Size:         68 GB

5 subdisks:
S imap.p0.s0            State: up       D: disk1        Size:         17 GB
S imap.p0.s1            State: up       D: disk2        Size:         17 GB
S imap.p0.s2            State: up       D: disk3        Size:         17 GB
S imap.p0.s3            State: up       D: disk4        Size:         17 GB
S imap.p0.s4            State: stale    D: disk5        Size:         17 GB
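
Given that state, the command that I would expect to trigger the rebuild,
and that instead fails with errno 16 (EBUSY), is:

gvinum -> start imap.p0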


/glz

Goran Lowkrantz wrote:
> Hi,
> 
> We have a gvinum raid-5 volume that we had to replace a disk on, and
> after that we can't get the new subdisk started.
> 
> Here are the things we did:
> 1: Replace the disk and boot single-user to fdisk and label the new disk:
> gvinum -> list
> 5 drives:
> D disk4                 State: up       /dev/da5s1a     A: 0/17492 MB (0%)
> D disk3                 State: up       /dev/da4s1a     A: 0/17492 MB (0%)
> D disk2                 State: up       /dev/da3s1a     A: 0/17492 MB (0%)
> D disk1                 State: up       /dev/da2s1a     A: 0/17492 MB (0%)
> 
> 1 volume:
> V imap                  State: up       Plexes:       1 Size:         68 GB
> 
> 1 plex:
> P imap.p0            R5 State: up       Subdisks:     5 Size:         68 GB
> 
> 5 subdisks:
> S imap.p0.s0            State: up       D: disk1        Size:         17 GB
> S imap.p0.s1            State: up       D: disk2        Size:         17 GB
> S imap.p0.s2            State: up       D: disk3        Size:         17 GB
> S imap.p0.s3            State: up       D: disk4        Size:         17 GB
> S imap.p0.s4            State: up       D: disk5        Size:         17 GB
> 
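> The fixing itself was roughly as follows (a sketch, assuming the usual
> tools; the exact invocations were not recorded):
> 
> # fdisk -BI da6
> # bsdlabel -w da6s1
> # bsdlabel -e da6s1    (create an 'a' partition with fstype vinum)
> 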
> After fixing the new disk's partitioning we did a saveconfig and rebooted:
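> That is, assuming the interactive syntax:
> 
> gvinum -> saveconfig
> gvinum -> quit
> # reboot
> 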
> gvinum -> list
> 5 drives:
> D disk5                 State: up       /dev/da6s1a     A: 0/17492 MB (0%)
> D disk4                 State: up       /dev/da5s1a     A: 0/17492 MB (0%)
> D disk3                 State: up       /dev/da4s1a     A: 0/17492 MB (0%)
> D disk2                 State: up       /dev/da3s1a     A: 0/17492 MB (0%)
> D disk1                 State: up       /dev/da2s1a     A: 0/17492 MB (0%)
> 
> 1 volume:
> V imap                  State: up       Plexes:       1 Size:         68 GB
> 
> 1 plex:
> P imap.p0            R5 State: up       Subdisks:     5 Size:         68 GB
> 
> 5 subdisks:
> S imap.p0.s4            State: stale    D: disk5        Size:         17 GB
> S imap.p0.s3            State: up       D: disk4        Size:         17 GB
> S imap.p0.s2            State: up       D: disk3        Size:         17 GB
> S imap.p0.s1            State: up       D: disk2        Size:         17 GB
> S imap.p0.s0            State: up       D: disk1        Size:         17 GB
> 
> Tried start on the plex and on the subdisk; neither worked. Finally, to
> get the plex into degraded mode, we did a setstate down imap.p0.s4.
> gvinum -> list
> 5 drives:
> D disk5                 State: up       /dev/da6s1a     A: 0/17492 MB (0%)
> D disk4                 State: up       /dev/da5s1a     A: 0/17492 MB (0%)
> D disk3                 State: up       /dev/da4s1a     A: 0/17492 MB (0%)
> D disk2                 State: up       /dev/da3s1a     A: 0/17492 MB (0%)
> D disk1                 State: up       /dev/da2s1a     A: 0/17492 MB (0%)
> 
> 1 volume:
> V imap                  State: up       Plexes:       1 Size:         68 GB
> 
> 1 plex:
> P imap.p0            R5 State: degraded Subdisks:     5 Size:         68 GB
> 
> 5 subdisks:
> S imap.p0.s4            State: down     D: disk5        Size:         17 GB
> S imap.p0.s3            State: up       D: disk4        Size:         17 GB
> S imap.p0.s2            State: up       D: disk3        Size:         17 GB
> S imap.p0.s1            State: up       D: disk2        Size:         17 GB
> S imap.p0.s0            State: up       D: disk1        Size:         17 GB
> 
> and here we are. Start on the volume or the plex gives errno 16; start on
> the subdisk gives "can't start: cannot start 'imap.p0.s4' - not yet supported".
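> 
> In other words, the attempts were of this form (a sketch, assuming the
> standard gvinum syntax):
> 
> gvinum -> start imap
> gvinum -> start imap.p0
> gvinum -> start imap.p0.s4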
> 
> Can't find any description of the proper way to do a disk replacement, so
> if this is wrong, I'd love to be corrected. And how do we get the current
> situation up and running?
> 
> Regards,
>     Göran


-- 
................................................... the future isMobile

  Goran Lowkrantz <goran.lowkrantz at ismobile.com>
  System Architect, isMobile, Aurorum 2, S-977 75 Luleå, Sweden
  Phone: +46(0)920-75559
  Mobile: +46(0)70-587 87 82 Fax: +46(0)70-615 87 82

http://www.ismobile.com ...............................................

