ZFS RaidZ-2 problems
paul-freebsd at fletchermoorland.co.uk
Mon Nov 5 10:26:07 UTC 2012
I've already posted this to freebsd-fs@ but still have no idea why the
below has happened.
On 10/30/12 09:08, Paul Wootton wrote:
> I have had lots of bad luck with SATA drives and have had them fail on
> me far too often. I started with a 3-drive RAIDZ and lost 2 drives at
> the same time. I upgraded to a 6-drive RAIDZ and lost 2 drives within
> hours of each other, and finally had a 9-drive RAIDZ (1 parity) and
> lost another 2 drives (as luck would have it, this time I had a 90%
> backup on another machine, so I did not lose everything). I finally
> decided that I should switch to a RAIDZ2 (my current setup).
> Now I have lost 1 drive and the pool is showing as faulted. I have
> tried exporting and reimporting, but that did not help either.
> Is this normal? Has anyone got any ideas as to what has happened?
> The fault this time might be cabling, so I might not have lost the
> data, but my understanding was that with RAIDZ-2 you could lose 2
> drives and still have a working pool.
> I do know the fault could also be the power supply, controller, etc. I
> can take care of all the hardware.
> The issue I have is: I have a 9-drive RAIDZ-2 pool with only 1 disk
> showing as offline, yet the whole pool is showing as faulted.
> If the power supply was bouncing and a drive was giving bad data, I
> would expect ZFS to report that 2 drives were faulted (1 offline and 1
> with errors).
> Is there a way with ZDB that I can see why the pool is showing as
> faulted? Can it tell me which drives it thinks are bad, or has bad data?
> I do still have the 90% backup of the pool and nothing has really
> changed since that backup, so if someone wants me to try something and
> it blows the pool away, it's not the end of the world.
> pool: storage
> state: FAULTED
> status: One or more devices could not be opened. There are insufficient
> replicas for the pool to continue functioning.
> action: Attach the missing device and online it using 'zpool online'.
> see: http://illumos.org/msg/ZFS-8000-3C
> scan: resilvered 30K in 0h0m with 0 errors on Sun Oct 14 12:52:45 2012
>         NAME                      STATE     READ WRITE CKSUM
>         storage                   FAULTED      0     0     1
>           raidz2-0                FAULTED      0     0     6
>             ada0                  ONLINE       0     0     0
>             ada1                  ONLINE       0     0     0
>             ada2                  ONLINE       0     0     0
>             17777811927559723424  UNAVAIL      0     0     0  was
>             ada4                  ONLINE       0     0     0
>             ada5                  ONLINE       0     0     0
>             ada6                  ONLINE       0     0     0
>             ada7                  ONLINE       0     0     0
>             ada8                  ONLINE       0     0     0
>             ada10p4               ONLINE       0     0     0
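[Regarding the zdb question above: zdb can read the ZFS labels straight
off the member disks even when the pool refuses to import, which may show
whether the surviving drives agree on the vdev configuration. A sketch
only, assuming the pool is exported and using device names from the
status output above:]

```shell
# Dump the four ZFS labels from one member disk; each label records the
# pool GUID, the vdev tree, and the txg at which it was last updated.
# Comparing labels across disks can reveal a member with stale metadata.
zdb -l /dev/ada0

# Read pool-wide metadata for an exported pool by name, searching /dev
# for the member devices (-e = exported pool, -p = device search path).
zdb -e -p /dev/ storage
```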
> root at filekeeper:/storage # zpool export storage
> root at filekeeper:/storage # zpool import storage
> cannot import 'storage': I/O error
> Destroy and re-create the pool from
> a backup source.
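[Since the plain import fails with an I/O error, one avenue worth trying
is zpool's rewind import. This is a sketch only, not tested against this
pool:]

```shell
# -F asks zpool to discard the last few transaction groups and roll back
# to an older, hopefully consistent txg; -n is a dry run that only
# reports whether the rollback would succeed, without touching the disks.
zpool import -F -n storage

# If the dry run looks sane, a read-only import keeps anything from being
# written while the data is copied off to the backup machine.
zpool import -F -o readonly=on storage
```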
> root at filekeeper:/usr/home/paul # uname -a
> FreeBSD filekeeper.caspersworld.co.uk 10.0-CURRENT FreeBSD
> 10.0-CURRENT #0 r240967: Thu Sep 27 08:01:24 UTC 2012
> root at filekeeper.caspersworld.co.uk:/usr/obj/usr/src/sys/GENERIC amd64