Zpool on raw disk and weird GEOM complaint
Dan Naumov
dan.naumov at gmail.com
Mon Jun 29 10:15:13 UTC 2009
On Mon, Jun 29, 2009 at 12:43 PM, Patrick M. Hausen <hausen at punkt.de> wrote:
> Hi, all,
>
> I have a system with 12 S-ATA disks attached that I set up
> as a raidz2:
>
> %zpool status zfs
>   pool: zfs
>  state: ONLINE
>  scrub: scrub in progress for 0h5m, 7.56% done, 1h3m to go
> config:
>
>         NAME        STATE     READ WRITE CKSUM
>         zfs         ONLINE       0     0     0
>           raidz2    ONLINE       0     0     0
>             da0     ONLINE       0     0     0
>             da1     ONLINE       0     0     0
>             da2     ONLINE       0     0     0
>             da3     ONLINE       0     0     0
>             da4     ONLINE       0     0     0
>             da5     ONLINE       0     0     0
>             da6     ONLINE       0     0     0
>             da7     ONLINE       0     0     0
>             da8     ONLINE       0     0     0
>             da9     ONLINE       0     0     0
>             da10    ONLINE       0     0     0
>             da11    ONLINE       0     0     0
>
> errors: No known data errors
I can't address your issue at hand, but I would point out that building
a raidz/raidz2 vdev out of more than 9 disks is a BAD IDEA (tm). Sun's
documentation recommends using groups of 3 to 9 disks per raidz vdev.
There are known cases where exceeding that recommendation causes
performance degradation and, more importantly, parity computation
problems which can result in crashes and potential data loss. In your
case, I would build the pool as 2 x 6-disk raidz vdevs instead.
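A rough sketch of what that layout would look like at pool creation time
(assuming the same da0-da11 device names and the pool name from your
output; destroying and recreating the pool wipes its data, so this is
only illustrative):

```shell
# Create the pool as two 6-disk raidz vdevs instead of one
# wide 12-disk raidz2. ZFS stripes writes across both vdevs.
zpool create zfs \
    raidz da0 da1 da2 da3 da4 da5 \
    raidz da6 da7 da8 da9 da10 da11
```

Note the trade-off: two raidz1 vdevs give you one parity disk per
group, whereas a single raidz2 survives any two disk failures.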
Sincerely,
- Dan Naumov
More information about the freebsd-stable mailing list