[Bug 224496] mpr and mps drivers seem to have issues with large Seagate drives

bugzilla-noreply at freebsd.org bugzilla-noreply at freebsd.org
Tue Aug 13 19:55:26 UTC 2019


https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=224496

Paul Thornton <freebsd-bugzilla at prt.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |freebsd-bugzilla at prt.org

--- Comment #15 from Paul Thornton <freebsd-bugzilla at prt.org> ---
I too have run into this issue on a NAS box, once it started taking any kind
of load.

Running 12.0-RELEASE p3

The server contains 8x Seagate IronWolf Pro 10TB SATA drives on an Avago 3008
HBA - 8 of these, basically:

da2 at mpr1 bus 0 scbus13 target 12 lun 0
da2: <ATA ST10000NE0004-1Z EN01> Fixed Direct Access SPC-4 SCSI device
da2: Serial Number ZA237AVY
da2: 1200.000MB/s transfers
da2: Command Queueing enabled
da2: 9537536MB (19532873728 512 byte sectors)
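As a side note, the capacity line CAM prints is internally consistent; a quick sanity check (my own sketch, not part of the original report) confirms the sector count matches both the MB figure and the drive's marketed 10 TB size:

```python
# Sanity-check the CAM capacity line for da2:
#   "da2: 9537536MB (19532873728 512 byte sectors)"
sectors = 19532873728
sector_size = 512                          # bytes per sector

mib = sectors * sector_size // (1024 * 1024)
print(mib)        # 9537536 -- matches the kernel's "MB" (MiB) figure exactly

tb = sectors * sector_size / 1e12          # decimal terabytes
print(round(tb, 2))  # 10.0 -- the marketed capacity of an ST10000NE0004
```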

Driver versions:
dev.mpr.1.driver_version: 18.03.00.00-fbsd
dev.mpr.1.firmware_version: 15.00.03.00

These drives are configured in a ZFS RAID10 setup (in case that datapoint
matters):
        NAME         STATE     READ WRITE CKSUM
        data0        ONLINE       0     0     0
          mirror-0   ONLINE       0     0     0
            da2.eli  ONLINE       0     0     0
            da3.eli  ONLINE       0     0     0
          mirror-1   ONLINE       0     0     0
            da4.eli  ONLINE       0     0     0
            da5.eli  ONLINE       0     0     0
          mirror-2   ONLINE       0     0     0
            da6.eli  ONLINE       0     0     0
            da7.eli  ONLINE       0     0     0
          mirror-3   ONLINE       0     0     0
            da8.eli  ONLINE       0     0     0
            da9.eli  ONLINE       0     0     0

I currently get about 25 days between reboots.  The machine hangs, (I'm
guessing here) kernel panics, and restarts - I don't have the panic
information, but the log messages look very similar to what other people are
seeing:

Jul 20 11:14:17 nas1a kernel:   (da2:mpr1:0:12:0): WRITE(10). CDB: 2a 00 62 81 f9 d0 00 00 30 00 length 24576 SMID 1484 Command timeout on target 12(0x000c), 60000 set, 60.703976195 elapsed
Jul 20 11:14:17 nas1a kernel: mpr1: At enclosure level 0, slot 2, connector name (    )
Jul 20 11:14:17 nas1a kernel: mpr1: Sending abort to target 12 for SMID 1484
Jul 20 11:14:17 nas1a kernel:   (da2:mpr1:0:12:0): WRITE(10). CDB: 2a 00 62 81 f9 d0 00 00 30 00 length 24576 SMID 1484 Aborting command 0xfffffe00bad0b540
Jul 20 11:14:17 nas1a kernel:   (da2:mpr1:0:12:0): SYNCHRONIZE CACHE(10). CDB: 35 00 00 00 00 00 00 00 00 00 length 0 SMID 1792 Command timeout on target 12(0x000c), 60000 set, 60.707504796 elapsed
Jul 20 11:14:17 nas1a kernel: mpr1: At enclosure level 0, slot 2, connector name (    )
Jul 20 11:14:18 nas1a kernel: mpr1: Controller reported scsi ioc terminated tgt 12 SMID 1792 loginfo 31140000
Jul 20 11:14:18 nas1a kernel: (da2:mpr1:0:12:0): WRITE(10). CDB: 2a 00 62 81 f9 d0 00 00 30 00
Jul 20 11:14:18 nas1a kernel: mpr1: Abort failed for target 12, sending logical unit reset
Jul 20 11:14:18 nas1a kernel: mpr1: (da2:mpr1:0:12:0): CAM status: CCB request aborted by the host
Jul 20 11:14:18 nas1a kernel: Sending logical unit reset to target 12 lun 0
Jul 20 11:14:18 nas1a kernel: (da2:mpr1:0:12:0): Retrying command, 3 more tries remain
Jul 20 11:14:18 nas1a kernel: mpr1: At enclosure level 0, slot 2, connector name (    )
Jul 20 11:14:18 nas1a kernel: (da2:mpr1:0:12:0): SYNCHRONIZE CACHE(10). CDB: 35 00 00 00 00 00 00 00 00 00
Jul 20 11:14:18 nas1a kernel: (da2:mpr1:0:12:0): CAM status: CCB request completed with an error
Jul 20 11:14:18 nas1a kernel: (da2:mpr1:0:12:0): Retrying command, 0 more tries remain
Jul 20 11:14:18 nas1a kernel: mpr1: mprsas_action_scsiio: Freezing devq for target ID 12
Jul 20 11:14:18 nas1a kernel: (da2:mpr1:0:12:0): SYNCHRONIZE CACHE(10). CDB: 35 00 00 00 00 00 00 00 00 00
Jul 20 11:14:18 nas1a kernel: (da2:mpr1:0:12:0): CAM status: CAM subsystem is busy
Jul 20 11:14:18 nas1a kernel: (da2:mpr1:0:12:0): Error 5, Retries exhausted
Jul 20 11:14:18 nas1a kernel: mpr1: mprsas_action_scsiio: Freezing devq for target ID 12
Jul 20 11:14:18 nas1a kernel: (da2:mpr1:0:12:0): WRITE(10). CDB: 2a 00 62 81 f9 d0 00 00 30 00
Jul 20 11:14:18 nas1a kernel: (da2:mpr1:0:12:0): CAM status: CAM subsystem is busy
Jul 20 11:14:18 nas1a kernel: (da2:mpr1:0:12:0): Retrying command, 2 more tries remain

[reboot happens here]
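For anyone trying to correlate these events across hosts or over time, the "Command timeout" lines are easy to pull apart mechanically. A minimal sketch (the regex and field names are mine, not anything from the driver):

```python
import re

# Extract target, configured timeout (ms), and elapsed seconds from an
# mpr(4) "Command timeout" kernel message like the ones quoted above.
TIMEOUT_RE = re.compile(
    r"Command timeout on target (\d+)\(0x[0-9a-f]+\), (\d+) set, ([\d.]+) elapsed"
)

def parse_timeout(line):
    """Return a dict of the timeout fields, or None if the line doesn't match."""
    m = TIMEOUT_RE.search(line)
    if m is None:
        return None
    target, set_ms, elapsed_s = m.groups()
    return {"target": int(target), "set_ms": int(set_ms),
            "elapsed_s": float(elapsed_s)}

# Sample line from the Jul 20 incident:
line = ("(da2:mpr1:0:12:0): WRITE(10). CDB: 2a 00 62 81 f9 d0 00 00 30 00 "
        "length 24576 SMID 1484 Command timeout on target 12(0x000c), "
        "60000 set, 60.703976195 elapsed")
info = parse_timeout(line)
print(info)  # target 12 hit the 60000 ms (60 s) limit after ~60.7 s
```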

And the most recent one, today:

Aug 13 08:58:55 nas1a kernel:   (da6:mpr1:0:16:0): SYNCHRONIZE CACHE(10). CDB: 35 00 00 00 00 00 00 00 00 00 length 0 SMID 998 Command timeout on target 16(0x0010), 60000 set, 60.109683189 elapsed
Aug 13 08:58:55 nas1a kernel: mpr1: At enclosure level 0, slot 6, connector name (    )
Aug 13 08:58:55 nas1a kernel: mpr1: Sending abort to target 16 for SMID 998
Aug 13 08:58:55 nas1a kernel:   (da6:mpr1:0:16:0): SYNCHRONIZE CACHE(10). CDB: 35 00 00 00 00 00 00 00 00 00 length 0 SMID 998 Aborting command 0xfffffe00bacdfaa0
Aug 13 08:58:55 nas1a kernel: mpr1: Abort failed for target 16, sending logical unit reset
Aug 13 08:58:55 nas1a kernel: (da6:mpr1:0:16:0): SYNCHRONIZE CACHE(10). CDB: 35 00 00 00 00 00 00 00 00 00
Aug 13 08:58:55 nas1a kernel: mpr1: Sending logical unit reset to target 16 lun 0
Aug 13 08:58:55 nas1a kernel: mpr1: At enclosure level 0, slot 6, connector name (    )
Aug 13 08:58:55 nas1a kernel: (da6:mpr1:0:16:0): CAM status: CCB request aborted by the host
Aug 13 08:58:55 nas1a kernel: (da6:mpr1:0:16:0): Retrying command, 0 more tries remain
Aug 13 08:58:55 nas1a kernel: mpr1: mprsas_action_scsiio: Freezing devq for target ID 16
Aug 13 08:58:55 nas1a kernel: (da6:mpr1:0:16:0): SYNCHRONIZE CACHE(10). CDB: 35 00 00 00 00 00 00 00 00 00
Aug 13 08:58:55 nas1a kernel: (da6:mpr1:0:16:0): CAM status: CAM subsystem is busy
Aug 13 08:58:55 nas1a kernel: (da6:mpr1:0:16:0): Error 5, Retries exhausted
Aug 13 08:58:55 nas1a kernel: mpr1: mprsas_action_scsiio: Freezing devq for target ID 16
Aug 13 08:58:55 nas1a kernel: (da6:mpr1:0:16:0): WRITE(10). CDB: 2a 00 92 b0 7d 70 00 00 48 00
Aug 13 08:58:55 nas1a kernel: (da6:mpr1:0:16:0): CAM status: CAM subsystem is busy
Aug 13 08:58:55 nas1a kernel: (da6:mpr1:0:16:0): Retrying command, 3 more tries remain

[reboot happens here]

After the reboot there's no sign of any problem and everything works fine.
ZFS never marks the pool as degraded or unavailable.

Looking at the FreeNAS threads, this seems to have been going on for ages.
Can anyone confirm that downgrading to 11.1 works around this issue, as that
seems to be the only thing that might help?
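One other straw to clutch at while debugging: the "60000 set" in the logs is CAM's default 60-second command timeout for da(4) devices, which is tunable from loader.conf. I haven't tried this myself, and it would only paper over the symptom rather than fix whatever the driver is doing, but for completeness:

```shell
# /boot/loader.conf -- untested sketch: raise CAM's per-command timeout for
# da(4) devices from the default 60 s (the "60000 set" above) to 90 s.
# See da(4); this does NOT fix the underlying mpr/mps problem.
kern.cam.da.default_timeout="90"
```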


More information about the freebsd-bugs mailing list