ZFS or UFS for 4TB hardware RAID6?

Richard Mahlerwein mahlerrd at yahoo.com
Tue Jul 14 16:23:08 UTC 2009


--- On Tue, 7/14/09, Matthew Seaman <m.seaman at infracaninophile.co.uk> wrote:

> From: Matthew Seaman <m.seaman at infracaninophile.co.uk>
> Subject: Re: ZFS or UFS for 4TB hardware RAID6?
> To: mahlerrd at yahoo.com
> Cc: "Free BSD Questions list" <freebsd-questions at freebsd.org>
> Date: Tuesday, July 14, 2009, 4:23 AM
> Richard Mahlerwein wrote:
> 
> > With 4 drives, you could get much, much higher
> performance out of
> > RAID10 (which is alternatively called RAID0+1 or
> RAID1+0 depending on
> > the manufacturer
> 
> Uh -- no.  RAID10 and RAID0+1 are superficially
> similar but quite different
> things.  The main differentiator is resilience to disk
> failure. RAID10 takes
> the raw disks in pairs, creates a mirror across each pair,
> and then stripes
> across all the sets of mirrors.  RAID0+1 divides the
> raw disks into two equal
> sets, constructs stripes across each set of disks, and then
> mirrors the
> two stripes.
> 
> Read/Write performance is similar in either case: both
> perform well for the sort of small randomly distributed IO
> operations you'd get when e.g.
> running an RDBMS.  However, consider what happens if
> you get a disk failure.
> In the RAID10 case *one* of your N/2 mirrors is degraded
> but the other N-1
> drives in the array operate as normal.  In the RAID0+1
> case, one of the
> 2 stripes is immediately out of action and the whole IO
> load is carried by
> the N/2 drives in the other stripe.
> 
> Now consider what happens if a second drive should
> fail.  In the RAID10
> case, you're still up and running so long as the failed
> drive is one of
> the N-2 disks that aren't the mirror pair of the 1st failed
> drive.
> In the RAID0+1 case, you're out of action if the 2nd disk
> to fail is one
> of the N/2 drives from the working stripe.  Or in
> other words, if two
> random disks fail in a RAID10, chances are the RAID will
> still work.  If
> two arbitrarily selected disks fail in a RAID0+1 chances
> are basically
> even that the whole RAID is out of action[*].
> 
> I don't think I've ever seen a manufacturer say RAID1+0
> instead of RAID10,
> but I suppose all things are possible.  My impression
> was that the 0+1 terminology was specifically invented to
> make it more visually distinctive
> -- ie to prevent confusion between '01' and '10'.
> 
>     Cheers,
> 
>     Matthew
> 
> [*] Astute students of probability can work the odds out
> exactly: even at N=4, two random drive failures take out a
> RAID0+1 with probability 2/3, versus 1/3 for a RAID10, and
> the gap persists for all N.
> 
> -- 
> Dr Matthew J Seaman MA, D.Phil.
> 7 Priory Courtyard, Flat 3
> Ramsgate, Kent, CT11 9PW
> PGP: http://www.infracaninophile.co.uk/pgpkey
> 


Sorry, you are correct.  Thanks for clearing that up.  

I *have,* by the way, stumbled across the RAID0+1 label a couple of times in the consumer/on-board market, which is why I tend to remember it and mention it even though it's incorrect now.  IIRC (which is NOT certain :), a major magazine tested some of these controllers back around 2000 and found the differences were purely nomenclature: everything sold as RAID10/1+0/0+1 was actually RAID10.

And, if I recall, that was back in the PATA days.

Anyway, no problem.  I could also be off my rocker.

(Oh, and thanks for the addendum -- I was following along, thinking "...now wait a minute...", and then you clarified that last bit.  :) )
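For anyone who wants to double-check the two-failure odds, here's a quick exhaustive enumeration I sketched up (Python; my own illustration, not from Matthew's post -- it assumes the conventional pair/stripe layouts, not any particular controller's behavior):

```python
import itertools

def raid10_survives(n, failed):
    # RAID10: adjacent disks form mirror pairs (0,1), (2,3), ...
    # The array dies only if BOTH disks of some pair have failed.
    pairs = [(i, i + 1) for i in range(0, n, 2)]
    return not any(a in failed and b in failed for a, b in pairs)

def raid01_survives(n, failed):
    # RAID0+1: disks split into two striped halves, which are mirrored.
    # A stripe is dead as soon as ANY of its disks fails, so the array
    # dies once each half contains at least one failed disk.
    lo, hi = set(range(n // 2)), set(range(n // 2, n))
    return not (lo & failed) or not (hi & failed)

def survival_rate(survives, n):
    # Enumerate every possible pair of failed disks and count survivors.
    combos = list(itertools.combinations(range(n), 2))
    return sum(survives(n, set(c)) for c in combos) / len(combos)

if __name__ == "__main__":
    for n in (4, 6, 8):
        print(f"N={n}: RAID10 survives {survival_rate(raid10_survives, n):.3f}, "
              f"RAID0+1 survives {survival_rate(raid01_survives, n):.3f}")
```

This works out to (N-2)/(N-1) survival for RAID10 versus (N-2)/(2(N-1)) for RAID0+1, i.e. a random double failure kills a RAID0+1 twice as often, at any array size.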
