ZFS on Hardware RAID controller

Devin Teske devin.teske at fisglobal.com
Wed Feb 19 02:01:14 UTC 2014



> -----Original Message-----
> From: andrew clarke [mailto:mail at ozzmosis.com]
> Sent: Tuesday, February 18, 2014 3:11 PM
> To: freebsd at fongaboo.com
> Cc: questions at freebsd.org; Eitan Adler
> Subject: Re: ZFS on Hardware RAID controller
> 
> On Tue 2014-02-18 17:27:52 UTC-0500, freebsd at fongaboo.com
> (freebsd at fongaboo.com) wrote:
> 
> > When we spoke, you noted that when installing ZFS on multiple disks
> > connected to a hardware RAID controller, it is best to config it to
JBOD.
> >
> > I tried to explain this to a colleague, but they were skeptical. Would
> > you (or anyone) be willing to give me a one or two line sales-pitch on
> > why one should abandon traditional notions of RAID performance in
> > favor of allowing ZFS to do disk management?
> 
> Without JBOD, the hardware RAID appears to FreeBSD as a single disk.

As one might expect, when ZFS incorrectly believes there is only one
spindle, it neglects the internal optimizations it would otherwise apply
when it knows it has access to multiple spindles.

But parallel read/write optimizations aside, the channel plays a big part
in the performance impact of adding the HW RAID layer. If you're using
10Gbps iSCSI, then serial access to the data is less of an issue than if
you have a 2Gbps Fibre Channel link. In many of the cases we've tested,
the channel is the bottleneck.

> There is little to nothing to be gained from running ZFS on a single disk.

Nothing? I wouldn't say that. ZFS brings CoW, snapshots, lack of an fsck,
and so much more. Saying that one is not interested in the raidz* vdev
types doesn't mean that ZFS has nothing to provide.

It may be more truthful to acknowledge that nothing has changed with
respect to a statement like the following (for which you can quote me):

"There is no guarantee of data safety when running any filesystem on a
single disk, where said disk is truly one directly attached drive."

You'll notice that mention of ZFS has been removed from it. Putting ZFS
on a single (real) disk is, I would imagine, perfectly fine if you:

a. don't care about redundancy
b. want CoW, snapshots, snapshot transfers, lack of fsck, and more features
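As a sketch of what those single-disk benefits look like in practice (pool, dataset, device, and host names here are all hypothetical):

```shell
# Create a single-disk pool -- no redundancy, but all the other
# ZFS features come along for free (da0 is a hypothetical device).
zpool create tank da0
zfs create tank/home

# Take a cheap copy-on-write snapshot before a risky change...
zfs snapshot tank/home@before-upgrade

# ...and roll back instantly if things go wrong -- no fsck required.
zfs rollback tank/home@before-upgrade

# Snapshots can also be transferred off-box for backup.
zfs send tank/home@before-upgrade | ssh backuphost zfs receive backup/home
```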

I might also add, that "zpool scrub" *is* able to detect errors even on a
single-disk pool.
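For example (a sketch, using a hypothetical pool name): even with no redundancy, every block carries a checksum, so a scrub can detect corruption even though there is no replica to repair from.

```shell
# Kick off a scrub; ZFS re-reads every block and verifies its checksum.
zpool scrub tank

# Inspect the result; on a single-disk pool, checksum errors are
# reported in the CKSUM column even though they cannot be self-healed.
zpool status -v tank
```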

> Without JBOD, should a disk in the RAID fail, "zpool status"
> won't be able to tell you which one.
> 

ZFS will only see said disk fail if the HW RAID volume stops being
provided, which will presumably only happen once the HW RAID is beyond
degraded and has lost more drives than its own parity [and/or hot spare
topology] can handle.

While there are a great many benefits to be had by running ZFS with
multiple disks in one or more raid-like vdev(s) comprising your pool,
I wouldn't exactly say that ZFS is worthless on a single-disk,
non-raid-like vdev.

just 2-cents
-- 
Devin



