ZFS RaidZ2 with 24 drives?

Thomas Burgess wonslung at gmail.com
Thu Dec 17 00:05:31 UTC 2009


On Wed, Dec 16, 2009 at 6:19 PM, Matt Simerson <matt at corp.spry.com> wrote:

>
> On Dec 16, 2009, at 1:20 PM, Thomas Burgess wrote:
>
>> On Wed, Dec 16, 2009 at 3:43 PM, Matt Simerson <matt at corp.spry.com>
>> wrote:
>>
>> On Dec 15, 2009, at 3:52 PM, Solon Lutz wrote:
>>
>> Why do you use JBOD? You can configure a passthrough for all drives,
>> explicitly degrading the Areca to a dumb sata controller...
>>
>> Why would I bother?  Both ways present each disk to FreeBSD.  Based on my
>> understanding (and an answer received from Areca support), the only reason
>> I'd bother manually configuring some disks for passthrough is if I wanted to
>> use some disks in a RAID array and others as raw disks. Configuring JBOD
>> mode configures ALL the disks on the controller as passthrough devices.
>>
>> I think the main reason is that ZFS is better when it has raw drives.
>>
>
> I've heard that numerous times. Perhaps it is true in some cases. Such as
> when using a RAID controller from 1999. Or a $30 RAID adapter.
>
> I've built several ZFS systems using on-board SATA/SAS controllers, a
> couple of 24-disk systems with the Marvell SATA controllers used in the Sun
> x4500, and three 24-disk systems using the Areca 1231ML. Using the Areca as
> a hardware RAID controller with RAID volumes has proven to perform better
> and be much more reliable than when using raw disks.
>
This is true regardless.  ZFS is, by design, a software RAID system.  The
performance of ZFS comes from CPU and RAM, not from expensive hardware RAID
cards.  The entire POINT of using ZFS is to get great performance and data
integrity with commodity hardware.  I'm not saying ZFS doesn't work with
hardware RAID; it does.  But ZFS's redundancy and self-healing features are
designed to work on raw drives.
If you trust your hardware, then by all means, use it.  I'll stick to using
ZFS the way its developers intended.
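For the 24-drive case in the subject line, that looks something like the
following (just a sketch; I'm assuming the disks show up as da0 through
da23, and three 8-disk raidz2 vdevs is only one reasonable layout):

    # pool of three 8-disk raidz2 vdevs, 24 drives total
    zpool create tank \
        raidz2 da0 da1 da2 da3 da4 da5 da6 da7 \
        raidz2 da8 da9 da10 da11 da12 da13 da14 da15 \
        raidz2 da16 da17 da18 da19 da20 da21 da22 da23

Each vdev tolerates two failed drives, and ZFS stripes writes across the
three vdevs, so you keep redundancy without giving up throughput.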


>
>> Some of the features of ZFS don't work as well without having access to
>> the drives in this way, and other features don't work at all.
>>
>
> The last time I compared the performance of ZFS using dumb Marvell SATA
> controllers versus the Areca with RAID, them features you speak of weren't
> worth the bits used to say them.  On the two systems I described in this
> thread, the one using RAID significantly outperforms the one configured as
> JBOD. And in the case of the Areca, JBOD = passthrough = raw disks.
>
So data integrity isn't worth the bits?  As far as your system's
performance goes, that's great.  I wasn't arguing with you about that.  I
was just pointing out that the guy who posted before me probably meant that
ZFS performs better with raw devices.
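The integrity part is easy to demonstrate on a pool built from raw disks
(again a sketch, assuming a pool named tank):

    # walk every block, verify checksums, repair from redundancy
    zpool scrub tank
    # the CKSUM column shows what was caught; -v lists damaged files
    zpool status -v tank

On a single hardware RAID volume, ZFS can still detect a bad block via its
checksums, but it has no second copy to repair it from unless you mirror or
raidz the volumes themselves.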

>
>> In general, it's always best to let ZFS handle the raid stuff and not use
>> the hardware raid settings.
>>
>
> Because you said so?
>
> I'd like to see some evidence to back that statement up. The only time I've
> seen better ZFS performance numbers than what I'm getting with FreeBSD 8 ZFS
> + Areca RAID6 is when I tested OpenSolaris with them Marvell SATA
> controllers. But that was in August of 2008, and ZFS on FreeBSD performs
> much better now. Some updated benchmarks would be welcome.
>
>
I'm not asking you to take my word for it; I'm just telling you what is
common knowledge among ZFS users and developers.
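If you do rerun the comparison, even a crude sequential test is a start
(rough sketch, not a real benchmark; assumes a dataset mounted at /tank,
compression left at its default of off, and a file bigger than RAM so the
ARC can't hide the disks):

    # sequential write, then read back
    dd if=/dev/zero of=/tank/bench.dat bs=1m count=32768
    dd if=/tank/bench.dat of=/dev/null bs=1m

Run it once against the Areca RAID6 volume and once against a raidz2 pool
on passthrough disks in the same box, and post both numbers.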

>  Matt
>
> _______________________________________________
> freebsd-fs at freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe at freebsd.org"
>

