Request for opinions - gvinum or ccd?

krad kraduk at googlemail.com
Mon Jun 1 13:19:53 UTC 2009


No, you would only lose the data for that block. ZFS also checksums
metadata, and by default keeps multiple copies of it, so that's fairly
resilient. If you had copies set to > 1 then you wouldn't lose the block
either, unless you were really unlucky.
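For reference, copies is a per-dataset property; a minimal sketch (pool and
dataset names here are made up):

```shell
# Keep two copies of every data block in this dataset.
# Note: this only affects data written after the property is set.
zfs set copies=2 tank/important

# Verify the setting.
zfs get copies tank/important
```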

It's just about pushing the odds back further and further. If you are super
paranoid, by all means put in 48 drives, group them into 5 x 8-drive raidz2
vdevs, have a bunch of hot spares, enable copies=3 (the maximum) for data
and metadata, then duplicate the system, put the other box on another
continent, and zfs send all your updates every 15 minutes over a private
dedicated link. This will all prove very resilient, but you will get very
little % storage from your drives, and have quite a large bandwidth bill 8)
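A rough sketch of that kind of layout (all device, pool, and host names
below are hypothetical; da0-da47 stand in for the 48 drives):

```shell
# 5 raidz2 vdevs of 8 drives each (40 drives), plus hot spares.
zpool create tank \
  raidz2 da0  da1  da2  da3  da4  da5  da6  da7  \
  raidz2 da8  da9  da10 da11 da12 da13 da14 da15 \
  raidz2 da16 da17 da18 da19 da20 da21 da22 da23 \
  raidz2 da24 da25 da26 da27 da28 da29 da30 da31 \
  raidz2 da32 da33 da34 da35 da36 da37 da38 da39 \
  spare  da40 da41 da42 da43

# Replicate to the remote box: take a snapshot and send it over ssh
# (subsequent runs would use incremental sends with zfs send -i).
zfs snapshot tank/data@2009-06-01
zfs send tank/data@2009-06-01 | ssh otherbox zfs receive backup/data
```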

Oh, and don't forget to scrub your disks regularly. BTW, that would rebuild
any missing copies as well (e.g. if you increase the number of copies after
data is stored on the fs).
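Scrubbing is a one-liner (pool name assumed):

```shell
# Read and verify every block in the pool, repairing from redundancy
# (or extra copies) wherever a checksum fails.
zpool scrub tank

# Check scrub progress and any errors found.
zpool status tank
```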


-----Original Message-----
From: Tom Evans [mailto:tevans.uk at googlemail.com] 
Sent: 01 June 2009 13:50
To: krad
Cc: xorquewasp at googlemail.com; freebsd-hackers at freebsd.org
Subject: RE: Request for opinions - gvinum or ccd?

On Mon, 2009-06-01 at 09:32 +0100, krad wrote:
> ZFS has been designed for highly scalable redundant disk pools, so
> using it on a single drive kind of goes against its ethos. Remember, a
> lot of the blurb in the man page was written by Sun and is therefore
> written with corporates in mind: with the cost of the data vs an extra
> drive being so large, why wouldn't you make it redundant?
> 
> Having said that, SATA drives are cheap these days, so you would have
> to be on the tightest of budgets not to do a mirror.
> 
> Having said all this, we quite often use ZFS on a single drive, well,
> sort of. The Sun clusters have external storage for the shared file
> systems. These are usually a bunch of drives, RAID 5, 10 or whatever,
> which then export a single LUN that is presented to the various nodes.
> There is a zpool created on this LUN. So to all intents and purposes
> ZFS thinks it's on a single drive (the redundancy being provided by the
> external array). This is common practice and we see no issues with it.
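(For context, the setup described above amounts to a single-device pool,
roughly like this; the pool and device names are made up:)

```shell
# The array exports one LUN; ZFS sees it as a single disk, so the pool
# has no ZFS-level redundancy of its own -- it relies on the array.
zpool create shared /dev/da0
zpool status shared
```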

By doing this surely you lose a lot of the self healing that ZFS offers?
For instance, if the underlying vdev is just a raid5, then a disk
failure combined with an undetected checksum error on a different disk
would lead you to lose all your data. Or am I missing something?

(PS, top posting is bad)

Tom



