Replacing failed drive in gvinum causes panic
Peter A. Giessel
peter_giessel at dot.state.ak.us
Thu Sep 14 09:11:51 PDT 2006
I'm getting the same problem Ludo reported about 9 months ago, on both
FreeBSD 5.4 and 6.1:
http://docs.freebsd.org/cgi/getmsg.cgi?fetch=55224+0+/usr/local/www/db/text/2005/freebsd-geom/20051225.freebsd-geom
I'm using a raid5:
---------------------------------
5 drives:
D five State: up /dev/ad10s1 A: 47/190779 MB (0%)
D four State: up /dev/ad11s1 A: 47/190779 MB (0%)
D three State: up /dev/ad12s1 A: 47/190779 MB (0%)
D two State: up /dev/ad14s1 A: 0/190732 MB (0%)
D eleven State: up /dev/ad16s1 A: 47/190779 MB (0%)
1 volume:
V array State: up Plexes: 1 Size: 745 GB
1 plex:
P array.p0 R5 State: degraded Subdisks: 5 Size: 745 GB
6 subdisks:
S array.p0.s5 State: up D: eleven Size: 186 GB
S array.p0.s4 State: up D: five Size: 186 GB
S array.p0.s3 State: up D: four Size: 186 GB
S array.p0.s2 State: up D: three Size: 186 GB
S array.p0.s1 State: up D: two Size: 186 GB
S array.p0.s0 State: down D: one Size: 186 GB
---------------------------------
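For reference, the array would have been created from a config along these
lines (the 512k stripe size and drive one's original device are placeholders;
the drive names and devices are taken from the listing above):

----------array.conf-------------
# Reconstruction for illustration only.
drive one device /dev/ad8s1      # placeholder; the failed drive's device is unknown
drive two device /dev/ad14s1
drive three device /dev/ad12s1
drive four device /dev/ad11s1
drive five device /dev/ad10s1
drive eleven device /dev/ad16s1
volume array
  plex org raid5 512k
    sd length 0 drive one
    sd length 0 drive two
    sd length 0 drive three
    sd length 0 drive four
    sd length 0 drive five
    sd length 0 drive eleven
---------------------------------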
Drive one failed, so we took it out, put a new drive in, repartitioned it,
and created the config file "driveone":
----------driveone---------
drive one device /dev/ad18s1h
---------------------------
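(Roughly, the new disk was prepared like this before writing the config;
the exact bsdlabel edit is approximate:)

---------------------------------
fdisk -BI ad18        # initialize the disk with a single slice
bsdlabel -w ad18s1    # write a standard label to the slice
bsdlabel -e ad18s1    # add an 'h' partition with fstype "vinum"
---------------------------------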
now, "gvinum create driveone" results in an immediate panic. Sometimes
gvinum will print:
"array.p0 degraded -> up" just before the panic.
This seems odd to me since it should take a while for array.p0.s0 to
come back online, so its like its skipping the rebuild, coming up in
an odd state, and immediately panic'ing.
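For reference, the sequence I expected to work (per gvinum(8); the exact
rebuild step may differ) was roughly:

---------------------------------
gvinum create driveone    # register the replacement drive (panics here)
gvinum start array.p0.s0  # revive the stale subdisk, starting the RAID5 rebuild
gvinum list               # s0 should only go "stale -> up" after the rebuild
---------------------------------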
Any advice that you could give would be very much appreciated.