Cannot replace broken hard drive with LSI HBA

Philip Murray pmurray at nevada.net.nz
Wed Sep 30 00:47:21 UTC 2015



> On 30/09/2015, at 9:38 am, Graham Allan <allan at physics.umn.edu> wrote:
> 
> On 9/29/2015 1:17 AM, Karli Sjöberg wrote:
>>> 
>>> Regarding your experience with firmware 20, I believe it is "known bad",
>>> though some seem to disagree. Certainly when building my recent-ish
>>> large 9.3 servers I specifically tested it and got consistent data
>>> corruption. There is now a newer release of firmware 20, "20.00.04.00",
>>> which seems to be fixed - see this thread:
>>> 
>>> https://lists.freebsd.org/pipermail/freebsd-scsi/2015-August/006793.html
>> 
>> No, firmware 20.00.04.00 and driver 20.00.00.00-fbsd was the one that
>> was used when ZFS freaked out, so it's definitely not fixed.
>> 
>> I think this calls for a bug report.
> 
> That is curious, since I could rapidly get data corruption with firmware 20.00.00.00, yet ran a stress test for about a week with 20.00.04.00 with no issues. That was with FreeBSD 9.3, but I just updated my test system to 10.2, and it has been running the same stress test for 4-5 hours again with no issues. I don't doubt your experience at all, of course, but I wonder what is different?
> 
> For what it's worth, my test machine is a Dell R610 with Dell TYVGC HBA (unclear whether this is a 9207-8e or 9205-8e), and WD Red drives in a Supermicro SC847 chassis.

Just as an additional datapoint (this thread is giving me chills) with the LSI IT firmware version 20.00.02.00…

* FreeBSD 10.2-RELEASE
* Supermicro SC826 Chassis
* LSI SAS2004 Controller
	mps0: <Avago Technologies (LSI) SAS2004> port 0xe000-0xe0ff mem 0xf72c0000-0xf72c3fff,0xf7280000-0xf72bffff irq 16 at device 0.0 on pci1
	mps0: Firmware: 20.00.02.00, Driver: 20.00.00.00-fbsd
	mps0: IOCCapabilities: 1285c<ScsiTaskFull,DiagTrace,SnapBuf,EEDP,TransRetry,EventReplay,HostDisc>

* LSI SAS2X28 Expander/Backplane
	ses0: <LSI SAS2X28 0e12> Fixed Enclosure Services SPC-3 SCSI device 

* 12x SATA WD RE 2TB drives (WD2000FYYZ, arranged as 2x RAIDZ2 vdevs)

I've repeatedly filled it up with data, with regular scrubs, and seen no issues, and performance is pretty good, although I haven't had a disk fail yet.
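For anyone collecting datapoints from their own boxes: the firmware/driver pairing can be pulled straight out of the mps(4) probe lines. A minimal sketch (the sample line below is copied from the dmesg output above; the pool name is just a placeholder):

```shell
#!/bin/sh
# On a live system you would feed this from: dmesg | grep '^mps0'
line='mps0: Firmware: 20.00.02.00, Driver: 20.00.00.00-fbsd'

# Extract the controller firmware version from the probe line
fw=$(printf '%s\n' "$line" | sed -n 's/.*Firmware: \([0-9.]*\).*/\1/p')
echo "firmware: $fw"

# Then exercise the pool and check for checksum errors, e.g.
# (pool name "tank" is a placeholder):
#   zpool scrub tank && zpool status -v tank
```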

Cheers

Phil 


More information about the freebsd-fs mailing list