Panics with 5-stable - vinum? raid5?
daniel at flipse.org
Mon Feb 28 15:18:43 GMT 2005
On Fri, 25 Feb 2005 17:00:36 -0800 (PST), Doug White wrote
> On Tue, 22 Feb 2005 daniel at flipse.org wrote:
(..)
> > Turned to debugging kernel, unattended reboot and crash dumping via swap,
> > however this doesn't work at all. Kernel options added to GENERIC:
> > "makeoptions DEBUG=-g"
> > "options KDB, GDB, DDB, KDB_UNATTENDED"
>
> This is not the correct syntax. The options need to be on their own lines.
OK, thanks for that — I didn't know, and I'll put each option on its own line
from now on.
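For reference, a sketch of how those options would look on separate lines in the kernel configuration file (option names taken from the lines quoted above; the tab alignment is just the customary GENERIC style):

```
makeoptions     DEBUG=-g        # build kernel with debug symbols

options         KDB             # kernel debugger framework
options         DDB             # interactive in-kernel debugger
options         GDB             # remote gdb backend
options         KDB_UNATTENDED  # dump and reboot instead of waiting at the debugger prompt
```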
(..)
> > kldstat:
> > Id Refs Address Size Name
> > 1 3 0xc0400000 3b05c8 kernel
> > 2 1 0xc189c000 dd000 vinum.ko
> > 3 1 0xc1a0a000 7000 nullfs.ko
>
> I'd strongly, strongly suggest migrating to gvinum if you can. My
> experience has found it vastly more stable on 5.X and seems free of the
> odd config problems that have plagued vinum in the 4.X days.
>
> > vinum dumpconfig:
> > Drive vinumdrive0: Device /dev/ad0s1e
> > Drive vinumdrive1: Device /dev/ad2s1e
> > Drive vinumdrive2: Device /dev/ad4s1e
> > Drive vinumdrive3: Device /dev/ad6s1e
>
> This is ok, but:
>
> disklabel /dev/ad0s1:
> # /dev/ad0s1:
> 8 partitions:
> # size offset fstype [fsize bsize bps/cpg]
> a: 77075519 0 4.2BSD 2048 16384 28552
> b: 1048576 77075519 swap
> c: 398283417 0 unused 0 0
> e: 320159322 78124095 vinum
>
> Unfortunately since you've elected to share vinumdrive0 with your root
> partition you can't do the gvinum migration. I'd strongly suggest finding
> a separate root disk, migrate to it, then convert the remaining 4
> disks to gvinum. If you switched to gvinum now your root partition
> (ad0s1a) would be masked (and quite possibly destroyed) by the
> gvinum conversion.
>
> gvinum generally expects to have the whole disk (or slice?) to
> itself. le may be able to expound on the exact reasons for this and
> if your setup is actually supported.
Since replacing the Realtek NIC, I haven't had any more panics just yet. This
looks promising, but I'm keeping my fingers crossed. I have no experience with
odd config problems with vinum on either 4.X or 5.X, although I haven't
succeeded in having vinum loaded, started, and its devices fsck'ed
automatically at boot time on 5.X.
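For the gvinum side of this, the usual way to get the module loaded before the filesystems are checked is a loader.conf entry (an assumption on my part that nothing else in your boot setup interferes):

```
# /boot/loader.conf -- load the geom_vinum module at boot,
# so gvinum volumes exist before rc runs fsck on them
geom_vinum_load="YES"
```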
I did in fact give gvinum a try, presumably risking losing all my data, as I
did NOT convert anything (e.g. disklabels) other than using /dev/gvinum/*
instead of /dev/vinum/* as filesystem devices. I have been using gvinum for a
couple of weeks now, and I would quite naturally have noticed filesystem
corruption and/or RAID5 parity problems. A checkparity (with both vinum and
gvinum) always runs cleanly. However, gvinum's performance during such a
checkparity run, for instance, was ever so much slower than vinum's. Does that
make sense?
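For reference, the parity check mentioned above runs per plex; `myvol.p0` below is a hypothetical plex name, not one from this setup:

```
# list configured drives, volumes, plexes and subdisks to find the plex name
gvinum l

# verify (without rewriting) the RAID5 parity blocks of one plex
gvinum checkparity myvol.p0
```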
Daniel