System freeze with gvinum

Ulf Lilleengen lulf at stud.ntnu.no
Wed Dec 3 23:34:18 PST 2008


On Thu, Dec 04, 2008 at 03:02:39AM +0100, Hilko Meyer wrote:
> Ulf Lilleengen wrote:
> >On Mon, Dec 01, 2008 at 12:32:22am +0100, Hilko Meyer wrote:
> >> Is gvinum in 7.1RC and 7.x the same? We considered updating to 7.1
> >> before it's released anyway, because we need nfe(4), and we wanted to
> >> try gvinum and zfs there.
> >Yes, they are the same.
> >> 
> >> But we can test a patch against 6.4 before the big update if you want.
> >> 
> >It's really up to you. If you're going to upgrade anyway, it will at least
> >save me a little bit of work :)
> 
> Unfortunately, I have some other work for you. After changing the
> BIOS setting to AHCI, I tried gvinum with 6.4 again, and strangely
> enough it worked. No freeze with newfs, and I could copy several GB to
> the volumes, but after a reboot gvinum list looks like this:
> 
> | D sata3                 State: up       /dev/ad10       A: 9/476939 MB (0%)
> | D sata2                 State: up       /dev/ad8        A: 9/476939 MB (0%)
> | D sata1                 State: up       /dev/ad4        A: 9/476939 MB (0%)
> | 
> | 2 volumes:
> | V homes_raid5           State: down     Plexes:       1 Size:        465 GB
> | V dump_raid5            State: down     Plexes:       1 Size:        465 GB
> | 
> | 2 plexes:
> | P homes_raid5.p0     R5 State: down     Subdisks:     3 Size:        465 GB
> | P dump_raid5.p0      R5 State: down     Subdisks:     3 Size:        465 GB
> | 
> | 6 subdisks:
> | S homes_raid5.p0.s0     State: stale    D: sata1        Size:        232 GB
> | S homes_raid5.p0.s1     State: stale    D: sata2        Size:        232 GB
> | S homes_raid5.p0.s2     State: stale    D: sata3        Size:        232 GB
> | S dump_raid5.p0.s0      State: stale    D: sata1        Size:        232 GB
> | S dump_raid5.p0.s1      State: stale    D: sata2        Size:        232 GB
> | S dump_raid5.p0.s2      State: stale    D: sata3        Size:        232 GB
> 
> Then we updated to FreeBSD 7.1-PRERELEASE, but nothing changed. After a
> reboot the volumes are down. In dmesg I found
> g_vfs_done():gvinum/dump_raid5[READ(offset=65536, length=8192)]error = 6
> but I think that occurred during an attempt to mount a volume.
> 
Well, this can happen if there were errors reading from or writing to the
volumes previously. When volumes are in the down state, it is not possible to
use them. You have a few options:

If you currently have any data on the volumes and would like to recover it
without reinitializing them, you can try to force the subdisk states to up by
doing the following:

1. Run 'gvinum setstate -f up <subdisk>' on all subdisks. The plexes should
then go into the up state once all the subdisks are up.
2. Run fsck on the volumes to ensure that they are OK. If they are, you are
ready to go again. Note that you might have to pass -t ufs to fsck, as vinum
volumes previously set their own disklabels and other oddities. See the
example below.
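For example, to bring homes_raid5 back up, something along these lines should
work (the subdisk names and device path are taken from your 'gvinum list'
output and the dmesg line above, so adjust them if yours differ):

  # force each subdisk of the plex up
  gvinum setstate -f up homes_raid5.p0.s0
  gvinum setstate -f up homes_raid5.p0.s1
  gvinum setstate -f up homes_raid5.p0.s2

  # the plex and volume should now show as up
  gvinum list

  # check the filesystem before mounting it again
  fsck -t ufs /dev/gvinum/homes_raid5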


If you don't have any valuable data yet, you can run 'gvinum start <volume>'
on all volumes, which should reinitialize the plexes, or you can just
recreate the entire config. Recreating the entire config might also work if
you have data, but I'd try the tip above first.
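For instance, with your volume names (just a rough sketch, not something I
have tested on your exact setup):

  # start the volumes; this should reinitialize the plexes
  gvinum start homes_raid5
  gvinum start dump_raid5

  # watch the state of the plexes and subdisks
  gvinum list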

In any case, I can't guarantee that any of these methods will work, but
forcing the state of the subdisks should do the trick. Preferably, try the
method on the subdisks of one of the volumes first and see if it works.
-- 
Ulf Lilleengen

