Has anybody EVER successfully recovered VINUM?

orville weyrich weyrich_comp at yahoo.com
Tue Dec 7 21:45:53 PST 2004


I have been trying to figure out how to get VINUM to
recognize a new disk after a disk failure, with no
luck at all.

I cannot find instructions in the official
documentation, nor in the FreeBSD Diary.

Lots of places explain how to build a VINUM system;
nobody ever talks about how to recover from a disk
failure.

Can someone PLEASE help me recover?  I have already
posted complete information to this list, with no
answer.  I will give a short version now and provide
more info if requested.

I am running FreeBSD 4.10.  My current vinum list
includes the following:

S raid.p0.s2            State: up       PO:     1024 kB Size:       2149 MB
S raid.p0.s3            State: up       PO:     1536 kB Size:       2149 MB
S raid.p0.s4            State: up       PO:     2048 kB Size:       2149 MB
S raid.p0.s5            State: up       PO:     2560 kB Size:       2149 MB
S raid.p0.s6            State: up       PO:     3072 kB Size:       2149 MB
S raid.p0.s7            State: up       PO:     3584 kB Size:       2149 MB
S raid.p0.s8            State: up       PO:     4096 kB Size:       2149 MB
S raid.p0.s9            State: crashed  PO:        0  B Size:       2150 MB
S raid.p1.s0            State: up       PO:        0  B Size:       2151 MB
S raid.p1.s1            State: up       PO:      512 kB Size:       2151 MB
S raid.p1.s2            State: up       PO:     1024 kB Size:       2151 MB
S raid.p1.s3            State: up       PO:     1536 kB Size:       2151 MB
S raid.p1.s4            State: obsolete (detached)      Size:       2150 MB
S raid.p1.s5            State: reborn   PO:     2560 kB Size:       2151 MB
S raid.p1.s6            State: up       PO:     3072 kB Size:       2151 MB
S raid.p1.s7            State: up       PO:     3584 kB Size:       2151 MB
S raid.p1.s8            State: up       PO:     4096 kB Size:       2151 MB
S raid.p1.s9            State: up       PO:     4608 kB Size:       2151 MB
S raid.p2.s0            State: stale    PO:     2304 MB Size:       2150 MB


The above represents a total of 10 drives in a
striped RAID configuration (half of each disk in each
plex); see the sketch below.
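The volume was built from a config file roughly like
the following (reconstructed here for illustration,
not copied from the real file: the drive names and
device paths are placeholders, and I am inferring the
512 kB stripe size from the PO column above):

    # placeholder drive names and device paths
    drive d0 device /dev/da0s1h
    drive d1 device /dev/da1s1h
    # ... and so on through drive d9 ...
    volume raid
      plex org striped 512k
        sd length 2149m drive d0
        sd length 2149m drive d1
        # ... one subdisk per drive, d0 through d9 ...
      plex org striped 512k
        sd length 2151m drive d0
        sd length 2151m drive d1
        # ... one subdisk per drive, d0 through d9 ...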

Subdisks p0.s0 and p1.s5 are on one failed disk;
subdisks p0.s9 and p1.s4 are on a second failed disk.

Subdisks raid.p2.s0 and raid.p2.s1 are on the
replacement disk that I was trying to install.

I tried detaching the replacement subdisk and the
failed subdisk, and then reattaching the replacement
in the failed position, which only made things worse.
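
Concretely, the commands were along these lines
(paraphrased from memory, using the subdisk names
from the listing above; the plex offset on the attach
is where I was guessing):

    vinum detach raid.p2.s0       # free the replacement subdisk
    vinum detach raid.p1.s4       # drop the already-obsolete failed subdisk
    vinum attach raid.p2.s0 raid.p1 2048k   # reattach in the failed slot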


Can somebody PLEASE tell me two things:

(1) What sequence of steps SHOULD I have taken to
replace the disks?  (I promise I will test it and
document it for all to see.)

(2) How can I recover NOW?  (I seem to recall reading
somewhere that it is actually possible to reset the
config and recreate it with the same config file as
used originally, without destroying any data, and
then judiciously use setstate to mark the valid
subdisks as up and the invalid ones as obsolete.  But
this is a drastic step that I don't want to take
without some guidance.)
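
If I understand that recipe, it would look something
like this (these subcommands are all in vinum(8), but
/etc/vinum.conf is just a stand-in for whatever the
original config file was, and the setstate lines
would have to be repeated for every subdisk):

    vinum resetconfig                    # DRASTIC: erases the on-disk vinum config
    vinum create /etc/vinum.conf         # re-create from the ORIGINAL config file
    vinum setstate up raid.p1.s0         # mark each known-good subdisk up
    vinum setstate obsolete raid.p0.s9   # mark each invalid subdisk obsolete
    # ... repeat setstate for the remaining subdisks ...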

Please (grovel grovel) help me!

Thanks

orville.
