Gmirror/graid or hardware raid?
paul at kraus-haus.org
Thu Jul 9 17:10:23 UTC 2015
On Jul 9, 2015, at 12:39, kpneal at pobox.com wrote:
> On Thu, Jul 09, 2015 at 10:32:45AM -0400, Paul Kraus wrote:
>> I do NOT use RaidZ for anything except bulk backup data where capacity is all that matters and performance is limited by lots of other factors.
> A 4-drive raidz2 is more reliable than a pair of two drive mirrors, striped.
> But the pair of mirrors will perform much better.
Agreed. In terms of MTTDL (Mean Time To Data Loss), which Richard Elling did lots of work researching, from best to worst:
Stripe (no redundancy)
But … The MTTDL for a 2-way mirror and a 2-drive RAIDz1 is the same. The same is true of a 3-way mirror and a 3-drive RAIDz2, and of a 4-way mirror and a 4-drive RAIDz3. In practice, though, no one configures a RAIDz1 of 2 drives, a RAIDz2 of 3 drives, or a RAIDz3 of 4 drives. Take a look at Richard's blog post on this topic here: http://blog.richardelling.com/2010/02/zfs-data-protection-comparison.html
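The equal-width equivalences above follow from fault tolerance alone: an N-way mirror survives N-1 drive failures, and a RAIDzP survives P, so at the same width they match. A toy sketch (plain sh, not zfs itself; the function name is mine) that makes the comparison explicit:

```shell
# Toy sketch: how many simultaneous drive failures each single-vdev
# layout survives. An N-way mirror survives N-1; raidzP survives P
# regardless of width; a plain stripe survives none.
failures_tolerated() {
    kind=$1; drives=$2
    case $kind in
        mirror)     echo $((drives - 1)) ;;
        raidz[123]) echo "${kind#raidz}" ;;   # raidz1 -> 1, raidz2 -> 2, ...
        stripe)     echo 0 ;;
    esac
}

failures_tolerated mirror 2    # 1, same as a 2-drive raidz1
failures_tolerated raidz1 2    # 1
failures_tolerated mirror 3    # 2, same as a 3-drive raidz2
failures_tolerated raidz2 3    # 2
```

Note this only captures fault tolerance; real MTTDL also folds in drive MTBF and resilver time (MTTR), which is where the performance and rebuild-speed differences between mirrors and RAIDz show up.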
> It's all a balancing act of performance vs reliability. *shrug*
Don’t forget cost :-) Fast - Cheap - Reliable … maybe you can have two :-)
> My main server has a three-way mirror and that's it. Three because there
> are only three brands of server-grade SAS drives.
My home server has 3 stripes of 3-way mirrors. And yes, each vdev is made up of three different drives (in some cases from the same manufacturer, but different models and production dates).
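For anyone following along, a layout like that is a single zpool create with three mirror vdevs; the pool stripes across them automatically. A sketch with placeholder pool and device names (tank, da0..da8 — substitute your own):

```shell
# Three 3-way mirror vdevs; ZFS stripes writes across the vdevs.
# Pool name and device names are placeholders for illustration.
zpool create tank \
    mirror da0 da1 da2 \
    mirror da3 da4 da5 \
    mirror da6 da7 da8

# Verify the resulting layout.
zpool status tank
```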
>> I also create a “do-not-remove” dataset in every zpool with a 1 GB reservation and quota. ZFS behaves very, very badly when FULL. This gives me a cushion when things go badly so I can delete whatever used up all the space … Yes, ZFS cannot delete files if the FS is completely FULL. I leave the “do-not-remove” dataset unmounted so that it cannot be used.
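Because ZFS is copy-on-write, even an unlink needs a little free space to commit, which is why a full pool can wedge deletes. The cushion described above can be sketched like this (pool name "tank" is a placeholder):

```shell
# Reserve 1 GB that no other dataset can consume (reservation),
# cap the cushion itself at 1 GB (quota), and keep it from ever
# being mounted and written to (canmount=off).
zfs create -o reservation=1G -o quota=1G -o canmount=off tank/do-not-remove

# If the pool ever fills, release the cushion so deletes can proceed:
#   zfs set reservation=none tank/do-not-remove
# then restore it once space has been recovered.
```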
> Isn't this fixed in FreeBSD 10.2? Or was it 11? I can't remember because
> I haven't upgraded to that point yet. I do remember complaints from people
> who did upgrade and then saw they didn't have as much space free as they
> did before the upgrade.
I was not aware this had been accepted as a bug to fix :-) It has been a known caveat of ZFS from the very beginning. Do you know if this is a FreeBSD-specific fix or coming down from OpenZFS?