problem adding subdisk to vinum

Shawn Ostapuk flagg at slumber.org
Tue Aug 12 23:51:25 PDT 2003


(Sorry this is so long; I thought showing the complete vinum lists would
be less confusing and save additional questions.)

> > su-2.03# vinum lv -r pr0n
> > V pr0n                  State: up       Plexes:       1 Size:       1172 GB
> > P vinum0.p0           C State: corrupt  Subdisks:    11 Size:       1172 GB
> > S vinum0.p0.s0          State: up       PO:        0  B Size:        152 GB
> > ...
> 
> Hmm, that plex shouldn't have been corrupt.  But plex states are
> pretty meaningless anyway.

Actually, I may have posted the wrong line previously (I've been trying a
million things). This is what it normally looks like (before); the format
of the numbers has changed from 1 to 01 because I was trying everything :)

su-2.03# vinum lv -r
V pr0n                  State: up       Plexes:       1 Size:       1023 GB
P vinum0.p0           C State: up       Subdisks:    10 Size:       1023 GB
S vinum0.p0.s00         State: up       PO:        0  B Size:        152 GB
S vinum0.p0.s01         State: up       PO:      152 GB Size:         28 GB
S vinum0.p0.s02         State: up       PO:      181 GB Size:         76 GB
S vinum0.p0.s03         State: up       PO:      257 GB Size:         76 GB
S vinum0.p0.s04         State: up       PO:      333 GB Size:         76 GB
S vinum0.p0.s05         State: up       PO:      410 GB Size:         76 GB
S vinum0.p0.s06         State: up       PO:      486 GB Size:         76 GB
S vinum0.p0.s07         State: up       PO:      562 GB Size:         74 GB
S vinum0.p0.s08         State: up       PO:      637 GB Size:        233 GB
S vinum0.p0.s09         State: up       PO:      871 GB Size:        152 GB

> > after:
> > S vinum0.p0.s10         State: empty    PO:     1023 GB Size:        149 GB
> >
> > the old filesystem is not accessible at this point. invalid superblock
> > and other errors with fsck and growfs.
> 
> That's the kind of message I'd want to see.  But given that the
> subdisk isn't available, that's not surprising.

After adding the new disk, the listing always (confirmed) looks like this:

su-2.03# vinum lv -r
V pr0n                  State: up       Plexes:       1 Size:       1172 GB
P vinum0.p0           C State: corrupt  Subdisks:    11 Size:       1172 GB
S vinum0.p0.s00         State: up       PO:        0  B Size:        152 GB
S vinum0.p0.s01         State: up       PO:      152 GB Size:         28 GB
S vinum0.p0.s02         State: up       PO:      181 GB Size:         76 GB
S vinum0.p0.s03         State: up       PO:      257 GB Size:         76 GB
S vinum0.p0.s04         State: up       PO:      333 GB Size:         76 GB
S vinum0.p0.s05         State: up       PO:      410 GB Size:         76 GB
S vinum0.p0.s06         State: up       PO:      486 GB Size:         76 GB
S vinum0.p0.s07         State: up       PO:      562 GB Size:         74 GB
S vinum0.p0.s08         State: up       PO:      637 GB Size:        233 GB
S vinum0.p0.s09         State: up       PO:      871 GB Size:        152 GB
S vinum0.p0.s10         State: empty    PO:     1023 GB Size:        149 GB
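
For reference, the way I add the disk is with "vinum create" on a small
config file. Reconstructing it from memory (so the drive name and exact
keywords may not be exact), it is essentially:

drive newdrive device /dev/ad7e
sd name vinum0.p0.s10 drive newdrive plex vinum0.p0 len 0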

At this point I can either 1) reboot and run "vinum start vinum0.p0.s10"
to get it to list as up (it says "device busy" if I try before a reboot),
2) "setstate up vinum0.p0.s10", or 3) "init vinum0.p0.s10", which took
forever and did set the state to up, but still with no success.
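
In other words, the three things I've tried at that point are roughly:

su-2.03# vinum start vinum0.p0.s10      (only after a reboot; "device busy" before)
su-2.03# vinum setstate up vinum0.p0.s10
su-2.03# vinum init vinum0.p0.s10       (takes forever, but leaves the state up)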

At this point lv -r shows everything as up, the same as above but with
the total size as 1172 GB. vinum list _looks_ perfect:
su-2.03# vinum lv -r
V pr0n                  State: up       Plexes:       1 Size:       1172 GB
P vinum0.p0           C State: up       Subdisks:    11 Size:       1172 GB
S vinum0.p0.s00         State: up       PO:        0  B Size:        152 GB
S vinum0.p0.s01         State: up       PO:      152 GB Size:         28 GB
S vinum0.p0.s02         State: up       PO:      181 GB Size:         76 GB
S vinum0.p0.s03         State: up       PO:      257 GB Size:         76 GB
S vinum0.p0.s04         State: up       PO:      333 GB Size:         76 GB
S vinum0.p0.s05         State: up       PO:      410 GB Size:         76 GB
S vinum0.p0.s06         State: up       PO:      486 GB Size:         76 GB
S vinum0.p0.s07         State: up       PO:      562 GB Size:         74 GB
S vinum0.p0.s08         State: up       PO:      637 GB Size:        233 GB
S vinum0.p0.s09         State: up       PO:      871 GB Size:        152 GB
S vinum0.p0.s10         State: up       PO:     1023 GB Size:        149 GB

Either method results in an unmountable and inaccessible filesystem.

su-2.03# mount /dev/vinum/pr0n /data
mount: /dev/vinum/pr0n on /data: incorrect super block

su-2.03# fsck /dev/vinum/pr0n
** /dev/vinum/pr0n
 
 CANNOT READ: BLK 16
 CONTINUE? [yn] ^C

If I continue, it says it cannot read blk 17, 18, etc.
If I do fsck -b32, it does the same thing but for blocks >= 32.

I thought I saved the growfs error, but I guess I didn't. It was
basically something like "growfs: rdfs: something error 16".

Normally I test by trying to mount or fsck the old filesystem.
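
For completeness, the growfs invocation itself is just the standard one
against the volume device (growfs -N can be used first as a dry run):

su-2.03# growfs /dev/vinum/pr0n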

> > after changing the state to up with "setstate up vinum0.p0.s10", the
> > behaviour is also the same.
> 
> Really?  Then I definitely want to see it.
> 
> > (I am still unsure about the "empty" state it reports, as the man
> > page says that adding a subdisk to a plex in a volume that has no
> > other plexes would bring its state up immediately. I am also not
> > positive the state was always set to empty; I believe at least a few
> > times it did come up with State: up.)
> 
> It's best to check this.  It shouldn't be up immediately.

I haven't been able to get it "up" by default after adding the new disk,
so that looks okay then.

> > Also, I have more than double-verified that the drive is in the
> > right place and is functioning. After having trouble I even
> > re-fdisk'd the new drive (ad7e) and newfs'd it, resulting in zero
> > errors or problems. The BIOS detects it quickly with no delays as
> > well.
> 
> That's not the issue.  The real question is the states.  Getting
> states right is a black art, and it's possible that that's your real
> problem.

It didn't seem like a disk issue, but given the unusual problem I was
trying to make absolutely sure.

> OK.  Start from your last status (volume up, plex corrupt, last
> subdisk empty).  Set plex and last subdisk to up.  Run fsck and tell
> me what you get.  If that works, try growfs.  If that works, do a
> saveconfig.

I've probably added the new disk over 100 times now, trying every
possible option. I can get vinum to list everything as though it's
perfect, but once the new drive is listed I am always completely unable
to access the filesystem. I've tried init'ing the subdisk; I've made
smaller partitions on the new drive (tried adding a 25 GB partition) in
case I was reaching some limit; I've backported to the 4.7 release of
vinum (about 45 lines of difference, I think) since I had upgraded to
4.8 since the last time I successfully added a drive; and I've tried
the old kernel as well, all without any luck (just to give you an idea
of how thoroughly I've been trying every possible option). Basically,
I can get vinum list to _look_ good consistently, but I can never fsck
or mount the filesystem. Half the subdisks currently on there were
added this same way and never gave me any problems; the behaviour
definitely does not seem to be the same as before.

Doing a resetconfig followed by "vinum create -f origvinum.conf" always
restores access to the filesystem with no problems.
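
To spell out the recovery that always works:

su-2.03# vinum resetconfig              (after confirming the warning)
su-2.03# vinum create -f origvinum.conf
su-2.03# fsck /dev/vinum/pr0n
su-2.03# mount /dev/vinum/pr0n /data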

If this is interesting enough that you want to take a look, I'd be happy
to arrange access or provide any additional information. Thanks again.

shawn

