mps/LSI SAS2008 controller crashes when smartctl is run with upped disk tags

Peter Maloney peter.maloney at brockmann-consult.de
Wed Nov 2 08:56:30 UTC 2011


On 11/01/2011 09:32 PM, Jason Wolfe wrote:
> On Mon, Oct 31, 2011 at 12:17 PM, Peter Maloney
> <peter.maloney at brockmann-consult.de> wrote:
>
>     Dear Jason,
>
>     I get a similar problem on a system with an LSI 9211-8i with 20 SATA
>     disks attached (2 SSDs and 18 spinning disks). My system doesn't
>     hang, panic, or reset though. I just lose access to one disk, which
>     is then considered FAULTED in my zpool status (with the ZFS file
>     system). If I physically remove the FAULTED disk and run "gpart
>     recover da0", I get a panic. Otherwise, the system keeps running in
>     a degraded state. When I reboot and resilver, some data is found
>     damaged and repaired, not just refreshed with the latest state. The
>     server has 1 HBA and 2 backplanes, and I have the 2 mirrored root
>     disks on different backplanes. Maybe that is why mine runs degraded
>     and yours hangs.
>
>     This happened twice so far (in around a month or two), and both
>     times it was one of the mirrored root disks (SSDs) that faulted.
>
>     My tags are set to 255. I will try reproducing it as you said, and
>     then, if that fails, rebooting and trying again with tags set to 2
>     as you suggested.
>
>     And *thank you very much for this information*. This is the last
>     outstanding issue with this server. I hope this workaround helps.
>
>     # camcontrol tags /dev/da0
>     (pass0:mps0:0:7:0): device openings: 255
>
>
> Peter,
>
> Does this happen 'randomly' for you, or do you have some automated
> process running smartctl that trips the drives up occasionally?
It appears to be completely random, but it could be something specific
going on that I just haven't thought of. I don't know how to trigger it.
I once wrote a script that looped over the disks with smartctl (which I
installed from ports) and recorded the device ID, size of each disk, and
so on, but it didn't cause a crash, and I didn't try running it in a
constant loop to provoke one. Roughly, it looked like the sketch below.
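A rough reconstruction from memory (the device list da0-da19 and the
smartctl options are assumptions, not the exact script):

#!/bin/sh
# inventory loop: print identity info for each disk behind the HBA
for d in /dev/da[0-9] /dev/da1[0-9]; do
    echo "=== ${d} ==="
    smartctl -i "${d}"    # model, serial number, capacity, etc.
done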

The system uses "zfs send" to send the whole pool to another machine. It
uses rsync to back up some servers onto it. It serves a bunch of data
over NFS and also has Samba running, though not in use. The primary user
of the NFS shares is VMware ESXi, whose heavy use of synchronous writes
might put extra load on the system.
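The replication is essentially the usual snapshot-and-send pipeline,
something along these lines (pool name, snapshot name, and target host
here are placeholders, not my actual setup):

# illustrative only; "tank" and "backuphost" are placeholders
zfs snapshot -r tank@backup-20111102
zfs send -R tank@backup-20111102 | ssh backuphost zfs receive -dF tank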
> The way I'm getting around it currently is to just move
> /usr/local/sbin/smartctl elsewhere and replace it with a wrapper
> that simply drops the tags to 1, executes the relocated smartctl
> with the options passed, then moves the tags back to whatever you
> prefer. There will obviously be a small detriment here, but it should
> be fairly quick and hopefully not even noticeable in your case.
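For what it's worth, here is a minimal sketch of such a wrapper as I
understand it (the relocated binary name "smartctl.real" and the restore
value of 255 are assumptions on my part, not your actual script):

#!/bin/sh
# installed as /usr/local/sbin/smartctl after moving the real binary aside
DEV=$(echo "$@" | grep -o '/dev/da[0-9]*' | head -n 1)
if [ -n "$DEV" ]; then
    camcontrol tags "${DEV#/dev/}" -N 1       # drop to 1 outstanding command
fi
/usr/local/sbin/smartctl.real "$@"
RC=$?
if [ -n "$DEV" ]; then
    camcontrol tags "${DEV#/dev/}" -N 255     # 255 assumed as the old value
fi
exit $RC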
In my reading, I found that people think reducing the ZFS I/O queue
depth (via kernel tunables) actually improves performance (moving the
queueing into the OS, I guess), so if tags behave similarly, I wasn't
expecting too much of a drop. And luckily, this system of mine is not a
performance machine anyway... just a huge file server. So if it is
slower but more stable that way, I will leave tags set to 2 forever.
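If I remember right, the tunables people were referring to are the vdev
pending-I/O limits; I believe they can go in /boot/loader.conf or be set
at runtime with sysctl. The values here are only the ones I saw
suggested, not something I have tested:

# /boot/loader.conf -- example values, not a recommendation
vfs.zfs.vdev.min_pending="1"
vfs.zfs.vdev.max_pending="4"
# or at runtime:
# sysctl vfs.zfs.vdev.max_pending=4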
>
> If smartctl is not triggering these events for you, any idea what is?
I have no real clue, but my guess is that some NFS shares are using the
ZIL (the ZFS log device) heavily, and since that device is horribly
inefficient under ZIL-style writes (scoring around 1500 IOPS during ZIL
use on a disk that scores 50k-140k in other tests), it overloads the I/O
system and triggers the failure purely through load, rather than through
something particular like smartctl. So for now, I have disabled my ZIL
to see if it still crashes.
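(For reference, depending on the ZFS version this is either the old
loader tunable or, from pool version 28 on, the per-dataset sync
property; I am not suggesting either is a good idea for data you care
about.)

# older ZFS versions: in /boot/loader.conf, then reboot
#   vfs.zfs.zil_disable="1"
# pool version 28 and later: per-dataset property instead
zfs set sync=disabled tank/nfs    # "tank/nfs" is a placeholder dataset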

Also on my list of things to try:
- change to the IT firmware instead of IR, since ZFS prefers to have no
RAID layer in there at all
- change the tags to 2 (a one-liner per disk, shown below)
- try the LSI driver for the 9210-8i:
http://www.lsi.com/products/storagecomponents/Pages/LSISAS9210-8i.aspx
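Setting the tags is just one camcontrol command per disk, e.g. for da0:

# reduce to 2 outstanding commands on da0, then verify
camcontrol tags da0 -N 2
camcontrol tags da0 -v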

Here is my forum thread about it:

http://forums.freebsd.org/showthread.php?t=26656

Are you using ZFS? Is your root volume in hardware RAID or software
RAID? I am curious because you say your systems hang, and mine just runs
degraded.
>
> Jason


Peter

-- 

--------------------------------------------
Peter Maloney
Brockmann Consult
Max-Planck-Str. 2
21502 Geesthacht
Germany
Tel: +49 4152 889 300
Fax: +49 4152 889 333
E-mail: peter.maloney at brockmann-consult.de
Internet: http://www.brockmann-consult.de
--------------------------------------------


