[Bug 283189] Sporadic NVMe DMAR faults since updating to 14.2-STABLE
- Reply: bugzilla-noreply_a_freebsd.org: "[Bug 283189] Sporadic NVMe DMAR faults since updating to 14.2-STABLE"
Date: Sun, 08 Dec 2024 03:23:11 UTC
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=283189
Bug ID: 283189
Summary: Sporadic NVMe DMAR faults since updating to 14.2-STABLE
Product: Base System
Version: 14.2-STABLE
Hardware: amd64
OS: Any
Status: New
Severity: Affects Only Me
Priority: ---
Component: kern
Assignee: bugs@FreeBSD.org
Reporter: jah@FreeBSD.org
As described in
https://lists.freebsd.org/archives/freebsd-stable/2024-November/002549.html
Within a few hours of updating from 13.4-STABLE to 14.2-STABLE, I started
seeing sporadic nvme DMAR faults (followed by nvme transfer errors that seemed
to be triggered by the IOMMU failing the DMA request). These errors always
look like the following:
Nov 24 16:17:52 corona kernel: DMAR4: Fault Overflow
Nov 24 16:17:52 corona kernel: nvme0: WRITE sqid:15 cid:121 nsid:1
lba:1615751416 len:256
Nov 24 16:17:52 corona kernel: DMAR4: nvme0: pci7:0:0 sid 700 fault acc 1 adt
0x0 reason 0x6 addr 42d000
Nov 24 16:17:52 corona kernel: nvme0: DATA TRANSFER ERROR (00/04) crd:0 m:1
dnr:1 p:1 sqid:15 cid:121 cdw0:0
Nov 24 16:17:52 corona kernel: (nda0:nvme0:0:0:1): WRITE. NCB: opc=1 fuse=0
nsid=1 prp1=0 prp2=0 cdw=604e68f8 0 ff 0 0 0
Nov 24 16:17:52 corona kernel: (nda0:nvme0:0:0:1): CAM status: Unknown (0x420)
Nov 24 16:17:52 corona kernel: (nda0:nvme0:0:0:1): Error 5, Retries exhausted
Nov 24 16:17:52 corona ZFS[11614]: vdev I/O failure, zpool=zroot
path=/dev/nda0p4 offset=824843563008 size=131072 error=5
I've seen 5 of these error sequences since the system was updated from 13 to 14
two weeks ago. Previously the same machine had been running various builds of
13-stable for nearly two years (with Intel DMAR enabled the entire time)
without issue. Other things to note:
--The failures are very sporadic, often with several days between them.
--They don't seem to be directly correlated with heavy disk I/O. I can run
-j16 buildworld/buildkernel without issue, yet in some cases these errors have
been logged while the system was nearly idle.
--In all cases, the transfer in question is an NVMe WRITE (host->device). The
transfer length reported by the nvme0 error log has so far always been 16-256
bytes, well below the 4K NVMe page size.
--In all cases the fault reason reported by the IOMMU is 6, which, if I'm
reading the VT-d spec correctly, means "no read access in the IOMMU paging
entry".
--In all cases the prp1/prp2 fields reported by the CAM error log are both 0,
which seems surprising given my (very limited) knowledge of the NVMe protocol.
--So far there have been no obvious ill effects (no kernel panics, no
*apparent* data corruption, no controller failures, etc.).
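For reference, the fault-reason interpretation in the list above can be sketched as a small lookup table. The descriptions are my paraphrase of the VT-d legacy-mode fault reasons (so treat them as illustrative, not authoritative), and the helper name `decode_fault` is invented for this sketch:

```python
# Sketch: decode a VT-d DMAR fault reason code as logged by the kernel
# ("reason 0x6" in the messages above). Descriptions paraphrased from my
# reading of the Intel VT-d spec; table is a subset, not exhaustive.
VTD_FAULT_REASONS = {
    0x1: "root table entry not present",
    0x2: "context table entry not present",
    0x3: "context table entry invalid",
    0x5: "write request: no write permission in the paging entry",
    0x6: "read request: no read access in the IOMMU paging entry",
}

def decode_fault(reason: int) -> str:
    """Return a human-readable description for a DMAR fault reason code."""
    return VTD_FAULT_REASONS.get(reason, f"unknown reason {reason:#x}")

print(decode_fault(0x6))  # the reason seen in every fault logged so far
```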
The SSD generating these errors:
nda0: <Micron_7300_MTFDHBG3T8TDF 95420260 21443277CB53>
There's no other NVMe device on the system in question. I don't suspect a
hardware failure here, as I never saw issues while running 13 but immediately
started seeing them after updating to 14.
From a quick look at the deltas between stable/13 and stable/14, I don't see
any DMAR or NVMe changes that would seem a likely culprit for this issue. The
biggest change appears to be the switch from nvd(4) to nda(4) for the disk
device, so I will probably try setting the hw.nvme.use_nvd tunable to see
whether going back to nvd eliminates the errors. If it does, that would
suggest the more sophisticated CAM I/O scheduling brought by nda(4) is
exposing a pre-existing issue.
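As a sketch of that experiment (assuming the hw.nvme.use_nvd tunable behaves as described in nvme(4)), reverting to the legacy driver would look like:

```shell
# /boot/loader.conf -- hypothetical test, not verified to fix this bug
# Revert from nda(4) to the legacy nvd(4) disk driver. Takes effect at
# boot; device nodes change from /dev/ndaX to /dev/nvdX, so anything
# that references the disk by device name (fstab, etc.) must be updated.
hw.nvme.use_nvd="1"
```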
--
You are receiving this mail because:
You are the assignee for the bug.