Dell branded storage controller for ZFS file-server | Advice requested

Steven Hartland killing at multiplay.co.uk
Fri Jan 23 12:57:07 UTC 2015


Dell's C6000 chassis has the option of LSI controllers; however, its
midplane is troublesome and will cause issues at 6Gbps speeds, so be careful.

     Regards
     Steve

On 23/01/2015 11:57, David Gwynne wrote:
>> On 23 Jan 2015, at 9:41 pm, Ahmed Kamal <email.ahmedkamal at googlemail.com> wrote:
>>
>> Thanks a lot for your reply .. I've got a couple of questions inline
>>
>> On Fri, Jan 23, 2015 at 12:21 PM, David Gwynne <david at gwynne.id.au> wrote:
>>
>>> On 23 Jan 2015, at 7:16 pm, Ahmed Kamal <email.ahmedkamal at googlemail.com> wrote:
>>>
>>> Hi everyone,
>>>
>>> I'm looking to build a ZFS file-server, unfortunately I'm locked into Dell
>>> branded hardware only. I know Dell sometimes ships LSI branded cards, but
>>> that too probably won't work for me, as I live outside the US and the local
>>> Dell rep is clueless (so whatever I want to buy, has to be on Dell's
>>> website). I am considering Dell R720xd or R730xd. I will use Intel i350
>>> NICs, and ECC RAM, but the biggest questions are around the SAS HBA.
>>>
>>> Per my research, I have the following notes and questions. I'd appreciate
>>> your comments on them.
>>>
>>> * H310 Card supports syspd (jbod) mode per
>>> http://svnweb.freebsd.org/base?view=revision&revision=254906 so it is
>>> probably the best card for what I want to do, right?
>> no.
>>
>>> * H310 is a low end card, it has very limited queue depth (25 I believe) ..
>>> Is this actually a problem when using the disks (20 x SAS 10k disks) in
>>> syspd/jbod mode? what about if I attach a couple of SSD disks?
>> there are two queue limits here, one for the controller and one for each disk attached to it.
>>
>> h310s have a controller queue depth of 31, ie, they can only support 31 commands at a time. if you have 24 disks and distribute the controller's command slots between them, that's about 1 command per disk.
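The arithmetic above can be sketched in shell, along with the camcontrol(8) query you would use on a live FreeBSD box to see a device's actual queue depth (the device name is hypothetical):

```shell
#!/bin/sh
# Back-of-envelope numbers from this thread: 31 controller command
# slots shared across 24 disks (integer division).
CTRL_SLOTS=31
DISKS=24
echo "slots per disk: $((CTRL_SLOTS / DISKS))"

# On FreeBSD, camcontrol reports how many outstanding commands a CAM
# device will accept ("device openings"), e.g. for a hypothetical da0:
#   camcontrol tags da0 -v
```

Compare that single shared slot per disk with a per-port NCQ queue on even a cheap SATA controller, and the bottleneck is obvious.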
>>
>> My initial line of thought was that in pass-through (jbod) mode, the HBA does *not* buffer commands or proxy disk responses internally, which would mean that in jbod mode the HBA basically becomes a communications channel connecting the OS to the disks, making its internal buffering limits inapplicable. Is this not correct?
> the h310 physically doesn't have cache, so it can't buffer your I/O even if it wanted to.
>
> so yes, in your use case it would just be an hba, but a very constrained one. even most sata controllers (eg, ahci or the silicon image ones) can keep more commands up in the air since they can have up to 32 command slots per port instead of 31 for the whole controller.
>
> i'm also not sure if dell will let you order an r720xd with an h310. i don't think they consider that a supported config.
>
>>
>>> * H310 can be flashed to IT mode (LSI firmware), and is supported by
>>> Illumos kernel (if I'll ever need that) .. so overall it looks like a good
>>> option. Any "watch-outs" I should be aware of ?
>> yes. if you flash the h310 to become an sas hba, the pci product/vendor and subproduct/subvendor ids on the controller change. the dell bios will refuse to boot if it detects an unsupported controller in its storage slot. you'll brick the machine until you remove the h310. because it is on a non-standard mezzanine connector you will not be able to flash it back to an h310, and you'll have a useless chunk of silicon to put on your desk as a monument to how annoying vendors are.
>>
>> pff nightmare scenario .. thanks for pointing this out. I stumbled across web posts where people successfully flashed h310, but meh, now I'm scared to buy that configuration
> there are h310s that are normal pci-e cards that go into dells lower end servers. the ones in the r720s sit on the custom mezzanine connector.
>
>>> * H710/p have no way of exposing JBOD/syspd in any way .. If using those
>>> cards, the best I can do is to create a raid-0 per disk, and put zfs on
>>> top of that. How bad is this? Is the only problem that I'm locked into H710
>>> firmware? Would the nvram and bbu on H710 improve performance significantly
>>> vs H310?
>> i wouldnt run zfs on that.
>>
>>> * H330 and H730, seem supported by mrsas driver. Is this driver able to
>> expose jbod/syspd mode? If yes, what's the overall advice, should I go with
>>> one of those, or the "trusty" H310 ?
>> thats a workable option. h330 should be fine if all you want to do is jbod.
>>
>> Now that you're pointing to the H330 as my best option given the situation, I've got some more questions:
>> * "if all you want is jbod" .. with zfs, that's all we need, right? just wondering if I'm missing something important
> nope.
>
>> * Any idea if jbod configuration on the h330 is done at the bios level, or using some freebsd tool (how?)
> you can do it in the bios.
>
>> * mrsas man page, mentions the driver appeared in 10.1 .. Does that mean if I decide to use FreeNAS (9.x based) it won't be there ?
> i have no idea :)
>
>> * If I go the h330 path, it'll be an r730xd server. Worst case, if the h330 gives me trouble, can I throw it out and plug an LSI 9207-8i in its place (same pci slot, cables, etc.)?
> the h330 sits on a mezzanine connector. you probably can get that card and drive the disks with it, but i haven't done it with an r730 or 730xd so i can't say for sure. i have done that with an r720 though, and it mostly works.
>
> we had to order those systems with an h310 and add the lsi card ourselves, and then we discovered that the disks they shipped in the system do not spin up when they're powered on. the raid controller (h310) explicitly spins them up before issuing io to them.
>
> if you attach those disks to a straight sas hba or use them via syspd on the h310, you can't boot off them because the machine's bios (not the raid controller) blindly issues io to them without spinning them up first.
>
> fun times.
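A hedged workaround sketch for the spin-up issue on data disks (not from the thread, and it does nothing for the BIOS boot problem): once the OS is up, camcontrol(8) can issue a SCSI START UNIT to each disk before the pool is imported. The function and disk names here are illustrative:

```shell
#!/bin/sh
# pick_sas_disks: filter a whitespace-separated disk list (as reported
# by FreeBSD's kern.disks sysctl) down to da(4) SCSI/SAS devices,
# skipping e.g. ada/nvd boot disks.
pick_sas_disks() {
    for d in $1; do
        case "$d" in
        da*) echo "$d" ;;
        esac
    done
}

# On a live FreeBSD box you would spin each one up before zpool import:
#   for d in $(pick_sas_disks "$(sysctl -n kern.disks)"); do
#       camcontrol start "$d"    # issues SCSI START STOP UNIT
#   done

# Demonstration with a hypothetical disk list:
pick_sas_disks "ada0 da0 da1 nvd0"
```

Running this, say, from an early rc script would only help for pool disks; a boot disk that won't spin up on power-on still needs the raid controller (or different drive firmware settings) in front of it.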
>
> dlg
>
>
>
>>
>> Thanks!
>>
>>> Your help and advice is most appreciated
>>> _______________________________________________
>>> freebsd-scsi at freebsd.org mailing list
>>> http://lists.freebsd.org/mailman/listinfo/freebsd-scsi
>>> To unsubscribe, send any mail to "freebsd-scsi-unsubscribe at freebsd.org"
>>


