ZFS on Hardware RAID controller

dweimer dweimer at dweimer.net
Wed Feb 19 04:57:20 UTC 2014

On 02/18/2014 10:20 pm, dteske at FreeBSD.org wrote:
>> -----Original Message-----
>> From: freebsd at fongaboo.com [mailto:freebsd at fongaboo.com]
>> Sent: Tuesday, February 18, 2014 7:58 PM
>> To: questions at freebsd.org
>> Subject: Re: ZFS on Hardware RAID controller
>> I was speaking to someone else about this today, and it eventually
>> became apparent that we were getting into a sort of
>> Abbott-and-Costello Who's-on-First confusion... because apparently
>> people can mean different things when they use the term 'JBOD'.
>>
>> What I have always meant when I say 'JBOD' is a (not really) RAID
>> mode that simply concatenates the drives into one volume in a serial
>> fashion, i.e. 'spanning'. Most RAID controllers and RAID-enabled NAS
>> units that I have interacted with in my life have offered this mode
>> and referred to it as 'JBOD'.
>
> This is not entirely correct. JBOD and RAID-SPAN are two different
> things. Your controller supports either one or both (alongside a host
> of other options, such as RAID-1 and RAID-10; often RAID-5, and maybe
> RAID-6).

JBOD stands for "just a bunch of disks"; it's defined as one or more 
disks spanned together, or concatenated in a linear fashion.

> RAID-SPAN is RAID-0, which is, as you describe, "simply concatenates
> the drives into one volume in a serial fashion, ie. 'spanning'."

RAID-0 is a stripe: it splits the data evenly across the disks instead 
of spanning them.
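On FreeBSD the two layouts actually map to two different GEOM classes, which makes the distinction concrete. A minimal sketch (the disk names da1/da2 are hypothetical spare disks, not from this thread):

```shell
# Load the relevant GEOM classes (run as root on FreeBSD).
kldload geom_stripe geom_concat

# RAID-0 / stripe: 128 KiB blocks are interleaved across both disks.
gstripe label -s 128k striped da1 da2   # creates /dev/stripe/striped

# Span / concatenation: da2 is logically appended after the end of da1.
gconcat label spanned da1 da2           # creates /dev/concat/spanned
```

Either way the OS ends up with one combined device; the difference is only in how blocks are laid out across the members.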

> JBOD, on the other hand, stands for "Just a Bunch of Disks" and is
> not the same as "concatenating the disks", but rather allows your
> controller to throw the drives at the Operating System with the
> attitude of "here, you do it then."
>
> We can actually therefore call ZFS's RAID capabilities "software
> RAID". However, it's a very _good_ software RAID that breaks the old
> adage that says "hardware RAID is faster than software RAID."

JBOD has generally been used to mean passing just one disk through to 
the O/S, and spanning is generally used to describe multiple disks 
concatenated linearly; in practice, though, the two terms are often 
used interchangeably.

FWIW, if the RAID controller doesn't have a JBOD/SPAN option, you can 
sometimes define a single-disk RAID-0 instead; when there is only one 
drive it's functionally equivalent.  Keep in mind, though, that the 
O/S will often see a RAID-controller LUN device, which likely won't 
accept SMART commands and might not report the correct sector size, 
etc.  It all depends on the RAID controller, so make sure you define 
partitions that correctly match the drives, or you may get sub-par 
performance.
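One hedged sketch of what "correctly define partitions" can look like in practice on FreeBSD (da0 is a hypothetical controller LUN; the exact values depend on your drives):

```shell
# See what sector size the controller's LUN reports to the OS:
diskinfo -v da0 | grep sectorsize

# Even if the LUN claims 512-byte sectors, it is safe to align the
# partition to 4 KiB in case the physical drives are 4K-sector:
gpart create -s gpt da0
gpart add -t freebsd-zfs -a 4k da0

# And tell ZFS not to create vdevs with blocks smaller than 4 KiB
# (ashift=12 means 2^12 = 4096-byte allocation blocks):
sysctl vfs.zfs.min_auto_ashift=12
```

A misaligned partition or an ashift of 9 on 4K-sector drives is a common cause of the sub-par performance mentioned above.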

> To highlight the difference of "JBOD" versus "spanning" (aka RAID-0,
> aka SPAN), take the following use-case example:
>
> 1. You take 12 drives and apply spanning logic on the controller
> 2. You take a controller and put it into JBOD mode
>
> In scenario one (1), your OS still sees a single drive.
> In scenario two (2), your OS sees all twelve drives.
>
> Just wanted to clarify that putting a controller into JBOD mode is
> not to be confused with RAID-0 or spanning (which would still be
> utilizing the controller's software; the best benefit of ZFS comes
> from letting it have direct access to each and every disk).
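The practical difference shows up immediately when you build the pool. A sketch assuming scenario two, with the twelve passed-through disks appearing as da0 through da11 (names are illustrative):

```shell
# In JBOD mode the OS sees every disk, so ZFS can own them directly:
camcontrol devlist            # should list da0 .. da11 individually

# Redundancy is then built in ZFS itself, e.g. two 6-disk raidz2 vdevs:
zpool create tank \
    raidz2 da0 da1 da2 da3 da4 da5 \
    raidz2 da6 da7 da8 da9 da10 da11
```

In scenario one the controller would present a single spanned LUN instead, leaving ZFS with one "disk", no redundancy of its own, and no per-disk knowledge to work with.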
>> In this kind of mode, the motherboard and the OS still think they
>> see only a single volume. So now I am gathering that this is also
>> not ideal for ZFS, since it would still not be aware of multiple
>> physical volumes and would be unable to optimize accordingly.
>>
>> I'm learning for the first time that sometimes 'JBOD' can also refer
>> to each individual drive being mounted separately, at least as far
>> as the controller and the motherboard are concerned.
>>
>> I just want to confirm 100% that this is how you are recommending
>> multiple drives be configured for ZFS. Because when I started the
>> thread I was thinking of JBOD as 'spanning'.
> Correct; JBOD never means span (which is RAID-0). Rather, JBOD
> usually means using non-RAID-capable hardware with a RAID-capable
> operating system (e.g., using ZFS). We are again in essence talking
> about using software RAID to create a software pool.
>
> I don't want to go drawing similarities to many other software RAID
> solutions, because ZFS is truly in a class of its own. But if you're
> familiar with the concepts of setting up a software RAID (mdadm,
> vinum, graid*, etc.) then ZFS should be more familiar. That's not to
> say that you need to know these things to use it (in practice, ZFS
> has predictable commands with predictable syntax), but if you have
> ever created a software RAID you will be in a unique position to
> better understand the JBOD mentality.
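To illustrate the "predictable commands with predictable syntax" point, a few hedged examples (pool and disk names are made up; the verbs stay the same regardless of layout):

```shell
zpool create tank mirror da0 da1    # 2-disk mirror (RAID-1-like)
zpool add tank mirror da2 da3       # add a second mirror (RAID-10-like)
zpool status tank                   # health and per-disk state
zpool replace tank da1 da4          # swap out a failing member
```

The same create/add/status/replace vocabulary applies whether the vdevs are mirrors, raidz, or plain disks, which is much of what makes ZFS administration approachable.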
> So in transitioning to a test platform that uses "JBOD" you
> essentially have a few choices that look attractive...
>
> + Ditch the RAID card and use a standard adapter for connecting your
>   drives
> + Find a way to change your RAID controller to export all the disks
>   NB: This may involve using an LSI-provided utility or flashing a
>   QLogic card
>   NB: If you give us your exact card info, someone may have
>   information on how to transition it into JBOD mode for ZFS
> + Create a bunch of single-disk RAID-0 arrays (12 disks? 12x
>   single-disk RAID-0 arrays, producing 12x single LUNs for use in
>   ZFS).
>
> Hope this gives some ideas.
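For the last option, on an LSI MegaRAID card under FreeBSD this can be scripted with mfiutil(8). Treat the following as a sketch only; the drive IDs are examples, and the exact syntax varies by controller and firmware:

```shell
# List the physical drives the controller knows about:
mfiutil show drives

# Create one single-disk RAID-0 volume per physical drive
# (the numeric drive IDs here are placeholders):
for d in 4 5 6 7 8 9 10 11 12 13 14 15; do
    mfiutil create raid0 "$d"
done
# Some firmware offers 'mfiutil create jbod ...' to do the same for
# all listed drives in one shot.
```

Each volume then appears to FreeBSD as its own mfid(4) LUN, which ZFS can use almost like a bare disk (minus SMART passthrough, as noted above).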
> --
> Devin
>> On Tue, 18 Feb 2014, Eitan Adler wrote:
>> > On Tue, Feb 18, 2014 at 5:27 PM,  <freebsd at fongaboo.com> wrote:
>> >>
>> >> When we spoke, you noted that when installing ZFS on multiple disks
>> >> connected to a hardware RAID controller, it is best to config it to
>> >
>> > There are a few reasons for this.
>> > (a) Hardware RAID serves as a single point of failure: if the
>> > controller dies you have neither disk
>> > (b) As Andrew noted, using hardware RAID means that ZFS won't be
>> > able to tell which disk is which.  The ZFS management tools won't
>> > work as expected (they will show only one disk).
>> > (c) Since ZFS implements RAID itself, it can use knowledge about
>> > the physical disks for better performance
>> >
>> > Also see: https://en.wikipedia.org/wiki/ZFS#ZFS_and_hardware_RAID
>> >
>> >> I tried to explain this to a colleague, but they were skeptical.
>> >> Would you (or anyone) be willing to give me a one or two line
>> >> sales-pitch on
>> >
>> > "ZFS does RAID better than the controller."
>> >
>> >> why one should abandon traditional notions of RAID performance in
>> >> favor of allowing ZFS to do disk management?
>> >
>> > The goal isn't to give up on RAID but to move its implementation
>> > to ZFS.
>> >
>> > --
>> > Eitan Adler
>> > Source, Ports, Doc committer
>> > Bugmeister, Ports Security teams
>> > _______________________________________________
>> > freebsd-questions at freebsd.org mailing list
>> > http://lists.freebsd.org/mailman/listinfo/freebsd-questions
>> > To unsubscribe, send any mail to
>> > "freebsd-questions-unsubscribe at freebsd.org"

    Dean E. Weimer

More information about the freebsd-questions mailing list