mps and LSI SAS2308: controller resets on 12.0 - IOC Fault 0x40000d04, Resetting
Mark.Martinec+freebsd at ijs.si
Mon Dec 17 15:52:30 UTC 2018
One of our servers that was upgraded from 11.2 to 12.0 (to RC2
initially, then to RC3, and lastly to 12.0-RELEASE) is suffering severe
instability: the disk controller resets itself a couple of times a day,
usually associated with high disk activity (like poudriere builds, a zfs
scrub, or nightly file system scans). The machine was rock-solid under
11.2 (and still/again is).
The disk controller is an LSI SAS2308. It has four disks attached as JBODs,
one pair of SSDs and one pair of hard disks, each pair forming its own
zfs mirror pool. A controller reset can occur regardless of which pair
is in heavy use.
The following can be found in logs, just before the machine becomes
unusable (although it is not always logged, as disks may be dropped
before syslog has a chance of writing anything):
xxx kernel:  mps0: IOC Fault 0x40000d04, Resetting
xxx kernel:  mps0: Reinitializing controller
xxx kernel:  mps0: Firmware: 20.00.02.00, Driver: 21.02.00.00-fbsd
xxx kernel:  mps0: IOCCapabilities:
xxx kernel:  (da0:mps0:0:0:0): Invalidating pack
The IOC Fault location is always the same. Apparently the controller
reset does not recover cleanly: all disk devices are dropped and ZFS
finds itself with no disks. The machine still responds to ping, and if
logged in during the event and running 'zpool status -v 1',
zfs reports loss of all devices for each pool:
  status: One or more devices are faulted in response to IO failures.
  action: Make sure the affected devices are connected, then run 'zpool clear'.
    scan: scrub repaired 0 in 0 days 03:53:41 with 0 errors on Sat Nov 17

        NAME                     STATE    READ WRITE CKSUM
        data0                    UNAVAIL     0     0     0
          mirror-0               UNAVAIL     0    24     0
            2396428274137360341  REMOVED     0     0     0  was
            16738407333921736610 REMOVED     0     0     0  was
(and similar for the other pool)
At this point the machine is unusable and needs to be hard-reset.
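Since the machine wedges before syslog can record much, one workaround (a rough, untested sketch; the log path and capture directory are assumptions, not anything from my setup) would be a small watcher that tails the log and snapshots 'zpool status' and 'dmesg' the moment the IOC Fault line appears:

```shell
#!/bin/sh
# Sketch: watch syslog for the mps IOC Fault line and capture whatever
# diagnostic state is still reachable before the machine wedges.
# /var/log/messages and /var/tmp/mps-fault are assumed paths.

LOG=/var/log/messages
OUTDIR=/var/tmp/mps-fault

# True when a line looks like "mps0: IOC Fault 0x40000d04, Resetting".
is_ioc_fault() {
    case "$1" in
        *mps[0-9]*": IOC Fault 0x"*) return 0 ;;
        *) return 1 ;;
    esac
}

watch_log() {
    mkdir -p "$OUTDIR"
    tail -F "$LOG" | while read -r line; do
        if is_ioc_fault "$line"; then
            out="$OUTDIR/fault.$(date +%s)"
            echo "$line" > "$out"
            # Disks may already be gone; grab what we still can.
            zpool status -v >> "$out" 2>&1
            dmesg >> "$out" 2>&1
        fi
    done
}

# Only start tailing when invoked with "watch", so the functions can be
# exercised without blocking.
if [ "${1:-}" = "watch" ]; then
    watch_log
fi
```

Whether anything gets written in time obviously depends on how quickly the disks drop out from under the watcher itself.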
My guess is that after the controller resets, the disk devices come up
again (according to the report seen on the console, stating 'periph
destroyed' first, then listing full info on each disk), but zfs ignores them.
I don't see any mention of changes to the mps driver in the 12.0 release
notes, although diff-ing its sources between 11.2 and 12.0 shows plenty
of changes. After suffering this instability for some time, I finally
downgraded the machine to 11.2, and things are back to normal again!
This downgrade path was nontrivial, as I had foolishly upgraded the pool
features to what comes with 12.0, so downgrading involved hacking with
both zfs mirror pools: recreating the pools without the two new features
and copying data with zfs send/receive, while having the machine hang
during some of these operations. Not something for the faint of heart.
I know, foolish of me to upgrade the pools after just one day of uptime
with 12.0.
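For the record, the pool-rebuild part of that dance could be sketched roughly as below. This is a dry-run sketch under assumptions: the disk and new-pool names are placeholders, and 'example_feature' is a stand-in, since the two actual feature names would have to come from 'zpool upgrade' output.

```shell
#!/bin/sh
# Dry-run sketch of the downgrade dance: split a mirror, build a new
# pool with the post-11.2 feature(s) disabled, copy the data across.
# Disk/new-pool names and the feature name are placeholders.

POOL=data0              # pool from the post
SPARE=/dev/da2          # hypothetical disk split off from the mirror
NEWPOOL=data0new        # temporary name for the rebuilt pool

DRY_RUN=${DRY_RUN:-1}   # print the plan by default; set empty to execute
PLAN=""
run() {
    PLAN="${PLAN}+ $*
"
    printf '+ %s\n' "$*"
    [ -n "$DRY_RUN" ] || "$@"
}

# 1. Free one side of the mirror.
run zpool detach "$POOL" "$SPARE"

# 2. New pool with the offending feature(s) left disabled
#    ('example_feature' is a stand-in, not a real feature name).
run zpool create -o feature@example_feature=disabled "$NEWPOOL" "$SPARE"

# 3. Snapshot and copy everything over.
run zfs snapshot -r "$POOL@migrate"
run sh -c "zfs send -R $POOL@migrate | zfs receive -F $NEWPOOL"
```

The hard part in practice was not the commands but surviving the controller resets mid-copy.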
Some info on the controller:
kernel: mps0: <Avago Technologies (LSI) SAS2308> port 0xf000-0xf0ff mem
0xfbe4ffff,0xfbe00000-0xfbe3ffff irq 64 at device 0.0 numa-domain 1 on
kernel: mps0: Firmware: 20.00.02.00, Driver: 21.02.00.00-fbsd
Board Name: LSI2308-IT
Chip Name: LSISAS2308
Chip Revision: ALL
BIOS Revision: 7.39.00.00
Firmware Revision: 20.00.02.00
Integrated RAID: no
So, what has changed in the mps driver for this to be happening?
Would it be possible to take mps driver sources from 11.2, transplant
them to 12.0, recompile, and use that? Could the new mps driver be
using some new feature of the controller and hitting a firmware bug?
I have resisted upgrading SAS2308 firmware and its BIOS, as it is
working very well under 11.2.
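On the transplant idea: in principle the mps sources live in sys/dev/mps and build as a standalone module, so the experiment could look something like the dry-run sketch below. Paths are assumptions (a second 11.2 checkout at /usr/src-11.2 is hypothetical), and the old driver may not compile unchanged if kernel interfaces moved between 11.2 and 12.0.

```shell
#!/bin/sh
# Dry-run sketch: drop the 11.2 mps sources into a 12.0 tree and build
# just the module. /usr/src-11.2 is a hypothetical second checkout.

SRC12=/usr/src            # 12.0 source tree
SRC112=/usr/src-11.2      # 11.2 source tree (hypothetical path)

DRY_RUN=${DRY_RUN:-1}     # print the plan by default; set empty to execute
PLAN=""
run() {
    PLAN="${PLAN}+ $*
"
    printf '+ %s\n' "$*"
    [ -n "$DRY_RUN" ] || "$@"
}

# Keep the 12.0 driver around, then swap in the 11.2 sources.
run cp -R "$SRC12/sys/dev/mps" "$SRC12/sys/dev/mps.12-orig"
run cp -R "$SRC112/sys/dev/mps/" "$SRC12/sys/dev/mps/"

# Build only the mps module against the 12.0 tree.
run make -C "$SRC12/sys/modules/mps" clean all

# Load the rebuilt module at boot; this only helps if the kernel was
# built without 'device mps' compiled in.
run sh -c 'echo mps_load=\"YES\" >> /boot/loader.conf'
```

I have not tried this; it is just the shape the experiment would take.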
Anyone else seen problems with mps driver and LSI SAS2308 controller?
(btw, on another machine the mps driver with LSI SAS2004 is working
just fine under 12.0)