Request for opinions - gvinum or ccd?

Tom Evans tevans.uk at googlemail.com
Mon Jun 1 14:15:50 UTC 2009


On Mon, 2009-06-01 at 14:19 +0100, krad wrote:
> No, you would only lose the data for that block. ZFS also checksums metadata,
> but by default keeps multiple copies of it, so that's fairly resilient.
> If you had copies set to > 1 then you wouldn't lose the block either,
> unless you were really unlucky. 
> 
> It's just about pushing the odds back further and further. If you are super
> paranoid, by all means put in 48 drives, group them into 5 x 8-drive raidz2
> vdevs, have a bunch of hot spares, and enable copies=5 for blocks and
> metadata, then duplicate the system, put the other box on another
> continent, and zfs send all your updates every 15 mins via a private
> dedicated link. This will all prove very resilient, but you will get a very
> small percentage of usable storage from your drives, and have quite a large
> bandwidth bill 8)
> 
> Oh, and don't forget to scrub your disks regularly. BTW, that would rebuild
> any missing copies as well (e.g. if you increase the number of copies after
> data is already stored on the fs).
> 
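For concreteness, the kind of build described above might look roughly like
this; the pool and device names are made up, and in practice the copies
property only accepts values up to 3:

    # Hypothetical layout: five 8-disk raidz2 vdevs plus hot spares.
    zpool create tank \
        raidz2 da0  da1  da2  da3  da4  da5  da6  da7  \
        raidz2 da8  da9  da10 da11 da12 da13 da14 da15 \
        raidz2 da16 da17 da18 da19 da20 da21 da22 da23 \
        raidz2 da24 da25 da26 da27 da28 da29 da30 da31 \
        raidz2 da32 da33 da34 da35 da36 da37 da38 da39 \
        spare  da40 da41 da42 da43 da44 da45 da46 da47

    # Extra copies of data blocks on top of the raidz2 parity
    # (copies only goes up to 3, so 2 here purely for illustration).
    zfs set copies=2 tank

    # Incremental replication to the off-site box (hostname and
    # dataset/snapshot names are invented for the example).
    zfs send -i tank/data@previous tank/data@latest | \
        ssh otherbox zfs recv backup/data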

Well, no, you wouldn't, because ZFS would never get the chance to recover
from that error. Since that one block is bad and you have lost a disk, the
underlying RAID-5 would not be able to rebuild, and you have just lost the
entire contents of the RAID-5 array. ZFS wouldn't be able to recover
anything from it. The only time ZFS could recover from this scenario is
if you had scrubbed before the disk failure, and disk failures are hard
to predict.
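
To put that concretely, a scrub forces ZFS to read and verify every block
while the redundancy underneath is still intact, so corruption gets repaired
before a disk dies rather than discovered afterwards. Assuming a pool named
tank, it is just:

    # Verify every block against its checksum and repair anything bad
    # from the remaining redundancy; worth running regularly from cron.
    zpool scrub tank

    # Check progress and see whether anything was unrecoverable.
    zpool status -v tank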

What I'm trying to say (badly) is that this is redundancy that ZFS knows
nothing about, so it cannot use it to recover in the way a 5-disk raidz
can. If the same thing happened on a 5-disk raidz, you would lose just
the corrupted block rather than all of your data.
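
Roughly speaking, the difference is between these two ways of building a
pool out of the same disks (a sketch only, with assumed device names --
pick one or the other):

    # One big device exported by a hardware RAID-5 controller: ZFS sees
    # a single vdev, so it can detect a checksum error but not repair it.
    zpool create tank da0

    # The same spindles handed to ZFS directly as a raidz: ZFS holds the
    # parity itself and can rebuild a bad block from the other disks.
    zpool create tank raidz da0 da1 da2 da3 da4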

PS, top posting is still bad. Thanks for making me cut the context out
of all these emails.

Cheers

Tom


