Cannot replace broken hard drive with LSI HBA

Karli Sjöberg karli.sjoberg at slu.se
Wed Sep 30 06:27:52 UTC 2015


On Tue 2015-09-29 at 11:25 -0700, Freddie Cash wrote:
> On Tue, Sep 29, 2015 at 11:04 AM, Michael Fuckner
> <michael at fuckner.net> wrote:
> 
>         On 9/29/2015 3:51 PM, InterNetX - Juergen Gotteswinter wrote:
>                 From my experience using SATA disks on SAS
>                 controllers, no matter whether there's an expander
>                 in between or not, or a mix, those setups keep on
>                 being flaky / unreliable. It might work under
>                 certain conditions, but it's nothing you can bet on.
>                 
>                 Garrett D'Amore (Illumos Project) describes the
>                 problem in more detail here
>                 
>                 http://garrett.damore.org/2010/08/why-sas-sata-is-not-such-great-idea.html
>                 
>         
>         come on, the article is 5 years old; some things have
>         changed since then!
>         
>         - MUX boards are unreliable and expensive; it's been a long
>         time since I last saw those boards
>         - SAS disks are not just 10k/15k high-performance disks
>         anymore; most nearline disks are available with a native SAS
>         interface as well
>         - if you pick the right disks, there is no trouble using
>         SATA disks on SAS expanders or SAS controllers (they should
>         have R/V sensors, optimized firmware, ...)
>         - if you use desktop drives in a shelf with, let's say, 24
>         slots, you should not expect it to work ;-)
> 
> 
> 
> Why not?  ;)
> 
> 
> We use desktop-class drives in our backup storage servers without any
> issues.  Even the monster boxes with 90 drives in them (2 JBODs of 45
> drives each) run fine with desktop-class drives.
> 
> 
> We're using a mix of WD Black (1, 2, 4 TB), Toshiba (2 TB), and
> Seagate (1, 2 TB).
> 
> 
> 2 systems using 24 drive bays, 2 systems using 90 drive bays, plugged
> into SuperMicro SAS expanders and LSI 9211-8i or 9211-8e (I think
> that's the model number) controllers.  All SAS2008 chipsets using
> mps(4) drivers.
> 
> 
> We're not looking for uber-performance and millions of IOPS from these
> systems, as the gigabit NIC is the bottleneck (rsync and zfs send both
> saturate that link, but all operations still complete within the
> allotted 8-hour window).
> 
> 
> We replace maybe 6-8 drives per year across all 4 systems; a little
> more than that this year due to overheating in one location, but
> that's been fixed.
> 
> 
> When a 2 TB desktop-class hard drive is $80 CDN in bulk, and we're
> only replacing 8 drives per year (under warranty, of course), it just
> doesn't make sense to spend the extra money on server-class,
> RAID-aware, nearline, or SAS drives.  :)
> 
> 
> If you are building a storage server that requires millions of IOPS
> with multiple 10 Gbps connections, then sure, desktop-class drives
> won't cut it.  But for everything else, they're fine.
> 
> 
> -- 
> Freddie Cash
> fjwcash at gmail.com

Hey Freddie!

So with all that metal, you have never experienced a time where you've
had to reboot any one of them to replace a broken hard drive?
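
For reference, the sequence we would expect to be enough for such a
hot-swap on an mps(4) box, without any reboot ("tank" and "da12" are
just made-up example names):

# camcontrol devlist        <- list the disks CAM currently sees
# camcontrol rescan all     <- rescan the buses after swapping a disk
# zpool replace tank da12   <- resilver onto the swapped-in disk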

Check what driver/firmware it is:
# sysctl -a | egrep 'mps.[0-9].(driver|firmware)_version'
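
That prints one driver/firmware pair per controller, something like the
following (the version numbers here are only an illustration):

dev.mps.0.driver_version: 20.00.00.00-fbsd
dev.mps.0.firmware_version: 20.00.07.00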

So far, I've gathered that our setups are very similar, except you have
WD Blacks, while we are using Greens. We have Seagates and Toshibas as
well, and they all seem to behave this way.

That's not fair! :P (What are we doing wrong then?)

/K


