gvinum loses drives
Matthias Schuendehuette
msch at snafu.de
Sat Jun 19 10:48:06 GMT 2004
Hi Lukas et al,
I decided to be brave and give geom_vinum a try...
Here's the typescript of the test, with my comments inserted:
------------------------8><-------------------------------
Script started on Sat Jun 19 12:25:24 2004
root@current - ~
501 # gvinum
gvinum -> list
5 drives:
D testdrive State: up /dev/da4 A: 15/2063 MB (0%)
D d1 State: up /dev/da0s3 A: 8/4133 MB (0%)
D d2 State: up /dev/da1s1 A: 227/4353 MB (5%)
D d3 State: up /dev/da2s1 A: 231/4356 MB (5%)
D d4 State: up /dev/da3s2 A: 478/4604 MB (10%)
2 volumes:
V mp3dev State: up Plexes: 1 Size: 2048 MB
V raid5 State: up Plexes: 1 Size: 12 GB
2 plexes:
P mp3dev.p0 C State: up Subdisks: 1 Size: 2048 MB
P raid5.p0 R5 State: up Subdisks: 4 Size: 12 GB
5 subdisks:
S mp3dev.p0.s0 State: up D: testdrive Size: 2048 MB
S raid5.p0.s3 State: up D: d4 Size: 4125 MB
S raid5.p0.s2 State: up D: d3 Size: 4125 MB
S raid5.p0.s1 State: up D: d2 Size: 4125 MB
S raid5.p0.s0 State: up D: d1 Size: 4125 MB
gvinum -> quit
# Well, looks good! All items found and "up 'n runnin'"
# Now mount it!
root@current - ~
502 # mount /dev/gvinum/mp3dev /mp3dev
mount: /dev/gvinum/mp3dev: Device not configured
# Oops... what's that?
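# ("Device not configured" is ENXIO; I assume the device node itself
# is there and gvinum refuses the open because the volume just went
# down. Something like
#
#     ls -l /dev/gvinum/mp3dev
#
# would confirm the node part, but I didn't capture it here, so take
# this as a guess.)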
root@current - ~
503 # gvinum list
5 drives:
D testdrive State: up /dev/da4 A: 15/2063 MB (0%)
D d1 State: up /dev/da0s3 A: 8/4133 MB (0%)
D d2 State: up /dev/da1s1 A: 227/4353 MB (5%)
D d3 State: up /dev/da2s1 A: 231/4356 MB (5%)
D d4 State: up /dev/da3s2 A: 478/4604 MB (10%)
2 volumes:
V mp3dev State: down Plexes: 1 Size: 2048 MB
V raid5 State: up Plexes: 1 Size: 12 GB
2 plexes:
P mp3dev.p0 C State: down Subdisks: 1 Size: 2048 MB
P raid5.p0 R5 State: up Subdisks: 4 Size: 12 GB
5 subdisks:
S mp3dev.p0.s0 State: stale D: testdrive Size: 2048 MB
S raid5.p0.s3 State: up D: d4 Size: 4125 MB
S raid5.p0.s2 State: up D: d3 Size: 4125 MB
S raid5.p0.s1 State: up D: d2 Size: 4125 MB
S raid5.p0.s0 State: up D: d1 Size: 4125 MB
# Hmm... 'testdrive' is 'up' but 'mp3dev.p0.s0' is 'stale'. As you can
# see below, geom_vinum lost 'testdrive' but still claims it's 'up'...
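# (If gvinum's setstate behaves like classic vinum's, which is an
# assumption on my part since I haven't dared to run it here yet,
# forcing the subdisk back up would look like
#
#     gvinum setstate up mp3dev.p0.s0
#
# but right after a clean start that shouldn't be necessary anyway.)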
root@current - ~
504 # mount /dev/gvinum/raid5 /raid/
mount: /dev/gvinum/raid5: Device not configured
# Same here...
root@current - ~
505 # gvinum list
5 drives:
D testdrive State: up /dev/da4 A: 15/2063 MB (0%)
D d1 State: up /dev/da0s3 A: 8/4133 MB (0%)
D d2 State: up /dev/da1s1 A: 227/4353 MB (5%)
D d3 State: up /dev/da2s1 A: 231/4356 MB (5%)
D d4 State: up /dev/da3s2 A: 478/4604 MB (10%)
2 volumes:
V mp3dev State: down Plexes: 1 Size: 2048 MB
V raid5 State: down Plexes: 1 Size: 12 GB
2 plexes:
P mp3dev.p0 C State: down Subdisks: 1 Size: 2048 MB
P raid5.p0 R5 State: down Subdisks: 4 Size: 12 GB
5 subdisks:
S mp3dev.p0.s0 State: stale D: testdrive Size: 2048 MB
S raid5.p0.s3 State: stale D: d4 Size: 4125 MB
S raid5.p0.s2 State: stale D: d3 Size: 4125 MB
S raid5.p0.s1 State: stale D: d2 Size: 4125 MB
S raid5.p0.s0 State: stale D: d1 Size: 4125 MB
# The drives are 'up' but the subdisks are 'stale'
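# (In classic vinum, 'start' is the documented way to revive a stale
# subdisk; assuming gvinum implements the same command, which I
# haven't verified, a recovery attempt would look like
#
#     gvinum start raid5.p0.s0
#
# With ALL subdisks stale there is nothing intact to rebuild from,
# though, so the state handling itself needs fixing first.)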
root@current - ~
507 # tail -n 256 /var/log/messages
Jun 19 12:24:54 current kernel: vinum: unloaded
Jun 19 12:25:39 current kernel: FOO: sd raid5.p0.s3 is up
Jun 19 12:25:39 current kernel: FOO: sd raid5.p0.s2 is up
Jun 19 12:25:40 current kernel: FOO: sd raid5.p0.s1 is up
Jun 19 12:25:40 current kernel: FOO: sd raid5.p0.s0 is up
Jun 19 12:25:40 current kernel: FOO: sd mp3dev.p0.s0 is up
Jun 19 12:26:47 current kernel: gvinum: lost drive 'testdrive'
Jun 19 12:26:47 current kernel: FOO: sd mp3dev.p0.s0 is down
Jun 19 12:26:47 current kernel: FOO: plex mp3dev.p0 is down
Jun 19 12:26:47 current kernel: FOO: sd mp3dev.p0.s0 is stale
Jun 19 12:26:47 current kernel: FOO: plex mp3dev.p0 is down
Jun 19 12:27:28 current kernel: gvinum: lost drive 'd4'
Jun 19 12:27:28 current kernel: FOO: sd raid5.p0.s3 is down
Jun 19 12:27:28 current kernel: FOO: plex raid5.p0 is degraded
Jun 19 12:27:28 current kernel: gvinum: lost drive 'd3'
Jun 19 12:27:28 current kernel: FOO: sd raid5.p0.s2 is down
Jun 19 12:27:28 current kernel: FOO: plex raid5.p0 is down
Jun 19 12:27:28 current kernel: gvinum: lost drive 'd2'
Jun 19 12:27:28 current kernel: FOO: sd raid5.p0.s1 is down
Jun 19 12:27:28 current kernel: FOO: plex raid5.p0 is down
Jun 19 12:27:28 current kernel: gvinum: lost drive 'd1'
Jun 19 12:27:28 current kernel: FOO: sd raid5.p0.s0 is down
Jun 19 12:27:28 current kernel: FOO: plex raid5.p0 is down
Jun 19 12:27:28 current kernel: FOO: sd raid5.p0.s3 is stale
Jun 19 12:27:28 current kernel: FOO: plex raid5.p0 is down
Jun 19 12:27:28 current kernel: FOO: sd raid5.p0.s2 is stale
Jun 19 12:27:28 current kernel: FOO: plex raid5.p0 is down
Jun 19 12:27:28 current kernel: FOO: sd raid5.p0.s1 is stale
Jun 19 12:27:28 current kernel: FOO: plex raid5.p0 is down
Jun 19 12:27:28 current kernel: FOO: sd raid5.p0.s0 is stale
Jun 19 12:27:28 current kernel: FOO: plex raid5.p0 is down
root@current - ~
508 # exit
Script done on Sat Jun 19 12:28:19 2004
------------------------8><-------------------------------
So, I'm quite sure you need additional information - please tell me
what. (Note the log timestamps, by the way: each drive apparently
gets 'lost' at the very moment I first try to mount the volume
living on it.)
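In case it helps, I can already send the obvious bits, assuming
'printconfig' is wired up in gvinum; otherwise I'll take the classic
vinum output of the same config:

    uname -a
    gvinum printconfig
    grep vinum /var/log/messages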
BTW: Will your fix for the GEOM bug that prevented unloading be
committed?
--
Ciao/BSD - Matthias
Matthias Schuendehuette <msch [at] snafu.de>, Berlin (Germany)
PGP-Key at <pgp.mit.edu> and <wwwkeys.de.pgp.net> ID: 0xDDFB0A5F