gvinum - problem on hard disk

Ulf Lilleengen lulf at stud.ntnu.no
Fri Oct 19 13:18:36 PDT 2007


On Fri, Oct 19, 2007 at 03:43:14 -0200, Felipe Neuwald wrote:
> Hi folks,
> 
> I have one gvinum raid on a FreeBSD 6.1-RELEASE machine. There are 4 
> disks running, as you can see:
> 
> [root at fileserver ~]# gvinum list
> 4 drives:
> D a                     State: up       /dev/ad4        A: 0/238474 MB (0%)
> D b                     State: up       /dev/ad5        A: 0/238475 MB (0%)
> D c                     State: up       /dev/ad6        A: 0/238475 MB (0%)
> D d                     State: up       /dev/ad7        A: 0/238475 MB (0%)
> 
> 1 volume:
> V data                  State: down     Plexes:       1 Size:        931 GB
> 
> 1 plex:
> P data.p0             S State: down     Subdisks:     4 Size:        931 GB
> 
> 4 subdisks:
> S data.p0.s3            State: stale    D: d            Size:        232 GB
> S data.p0.s2            State: up       D: c            Size:        232 GB
> S data.p0.s1            State: up       D: b            Size:        232 GB
> S data.p0.s0            State: up       D: a            Size:        232 GB
> 
> 
> But, as you can see, data.p0.s3 is "stale". What should I do to try to 
> recover this and get the RAID up again (and recover the information)?
> 
Hello,

Since your plex organization is RAID0 (striping), recovering after a drive
failure is a problem: you have no redundancy. However, if you didn't actually
replace any drives, this could just be gvinum getting confused. In that case,
running 'gvinum setstate -f up data.p0.s3' should bring the volume up again.
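
For reference, a rough sketch of the recovery steps (assuming no drive was
replaced, the volume is still named 'data', and it holds a UFS filesystem;
the /data mount point is just an example, adjust to your setup):

  # force the stale subdisk back up (only safe if the drive was not replaced)
  gvinum setstate -f up data.p0.s3

  # check that the plex and volume are now reported as up
  gvinum list

  # verify the filesystem before mounting (assumes UFS on the volume)
  fsck -t ufs /dev/gvinum/data

  # mount it again, e.g. at /data
  mount /dev/gvinum/data /data

If the subdisk immediately goes stale again or fsck reports heavy damage, the
underlying disk may really be failing, and with RAID0 there is no redundancy
to rebuild from.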

-- 
Ulf Lilleengen
