RAID - hardware vs. ZFS

Peter Eriksson pen at lysator.liu.se
Tue May 12 17:29:56 UTC 2020


We are running a number of HP DL380G9 servers with HP H241 “Smart HBA” controllers in “JBOD mode” and HP D6020 external SAS disk cabinets (70 drives per cabinet, 10 and 12 TB drives) using FreeBSD 11.3-12.1 and ZFS.

I would (and do) definitely stay away from RAID mode and use single-disk JBODs instead (if possible). ZFS handles errors in a much better and more secure way than typical hardware RAID controllers since it has “knowledge” of the data that is stored - something a RAID controller can never have.


As others have stated, the HP “Smart” controllers (using the FreeBSD “ciss” device driver) are a bit “finicky”… They (the HP H241 controller at least) behave a bit better when put into a “pure” JBOD mode (a setting you can change from the BIOS setup menu). Before we found that setting they were a pain to use.

I _think_ you can also put the modern HP “Smart RAID” controllers (Pxxx) into that JBOD mode, but I haven’t tried it. (I also run a couple of old DL380G5s with an HP P400 controller, but that one doesn’t support JBOD mode, so there I have to fake it with single-disk logical volumes.)


The only thing to watch out for (that I’ve run into so far) with many disks on HP controllers is a couple of bugs in the “ciss” device driver which prevent it from detecting more than about 48 drives per controller when in JBOD mode (at least with the H241 controller).


(The “ciss” device driver incorrectly caps “max_target” (the drive IDs to probe) at the number of logical volumes the controller supports.
For the H241 that is 64, and it seems to start enumerating physical disks from target 16, so 64 - 16 = a 48-drive limit…)

It also incorrectly sets the “initiator_id” to “max_logical_volumes”, so any physical drive that happens to get the same target number is silently skipped.

I’ve patched the ciss driver source code and filed a bug report, so I’m hoping someone gets around to it and applies it to the normal release kernel eventually (we try to stay away from custom kernels if possible)…

   https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=246279

(This is not a problem unless you have many physical drives behind a single controller in JBOD mode. We didn’t really notice this until we got our second fully stocked D6020 cabinet and tried to use both at the same time… )


We’ve been using these servers for a couple of years now and they work fine. That said - if I could choose I’d go for an LSI HBA controller instead of the HP ones, but HP apparently doesn’t sell those. (We use the Dell-branded LSI SAS3008 (Dell HBA330) controllers in our Dell servers and they are really nice.)

- Peter


> On 12 May 2020, at 18:07, Graham Allan via freebsd-fs <freebsd-fs at freebsd.org> wrote:
> 
> I have implemented ZFS on HPE SmartArray controllers, but not really out of choice - it was the hardware available. I'd prefer just to use a JBOD controller. You lose all the benefits of ZFS knowledge of hardware and drive state, but of course there are useful benefits to ZFS outside of that (volume management, snapshots etc) so it's not a total loss.
> 
> When I did create a ZFS pool on the hardware RAID, I just used a single large hardware RAID volume (I didn't try to expose the individual drives either as single-drive RAID-0 volumes, or drive pass-through). I found the drive pass-through on Gen9 SmartArray controllers to be very flaky, though the Gen10 ones look like they might be better.
> 
> On 5/12/2020 10:46 AM, D'Arcy Cain wrote:
>> I have to purchase new servers soon.  I am planning on getting Proliant
>> DL360 servers.  These come with hardware RAID.  I was wondering what
>> opinions people had about hardware RAID vs. using ZFS for RAID.  Is one
>> safer than the other?  What about performance?  What about hot swapping?
>> All opinions welcome.
> _______________________________________________
> freebsd-fs at freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe at freebsd.org"
