git: 504e85ade103 - stable/14 - mpr, mps: Establish busdma boundaries for memory pools

From: Kenneth D. Merry <>
Date: Wed, 20 Dec 2023 15:42:53 UTC
The branch stable/14 has been updated by ken:


commit 504e85ade103b0c2cafefb2d9dea86e94aef779f
Author:     Kenneth D. Merry <>
AuthorDate: 2023-12-14 20:05:17 +0000
Commit:     Kenneth D. Merry <>
CommitDate: 2023-12-20 15:40:42 +0000

    mpr, mps:  Establish busdma boundaries for memory pools

    Almost all of the memory used by the cards in the mpr(4) and mps(4)
    drivers is required, according to the specs and Broadcom developers,
    to be within a 4GB segment of memory.

    This includes:

    - System Request Message Frames pool
    - Reply Free Queues pool
    - ReplyDescriptorPost Queues pool
    - Chain Segments pool
    - Sense Buffers pool
    - SystemReply message pool

    We got a bug report from Dwight Engen, who ran into data corruption
    in the BAE port of FreeBSD:
    > We have a port of the FreeBSD mpr driver to our kernel and recently
    > I found an issue under heavy load where a DMA may go to the wrong
    > address. The test system is a Supermicro X10SRH-CLN4F with the
    > onboard SAS3008 controller setup with 2 enterprise Micron SSDs in
    > RAID 0 (striped). I have debugged the issue and narrowed down that
    > the errant DMA is one that has a segment that crosses a 4GB
    > physical boundary.  There are more details I can provide if you'd
    > like, but with the attached patch in place I can no longer
    > re-create the issue.
    > I'm not sure if this is a known limit of the card (have not found a
    > datasheet/programming docs for the chip) or our system is just
    > doing something a bit different. Any helpful info or insight would
    > be welcome.
    > Anyway, just thought this might be helpful info if you want to
    > apply a similar fix to FreeBSD. You can ignore/discard the commit
    > message as it is my internal commit (blkio is our own tool we use
    > to write/read every block of a device with CRC verification which
    > is how I found the problem).

    The commit message was:

    > [PATCH 8/9] mpr: fix memory corrupting DMA when sg segment crosses
    > 4GB boundary
    > Test case was two SSD's in RAID 0 (stripe). The logical disk was
    > then partitioned into two partitions. One partition had lots of
    > filesystem I/O and the other was initially filled using blkio with
    > CRCable data and then read back with blkio CRC verify in a loop.
    > Eventually blkio would report a bad CRC block because the physical
    > page being read-ahead into didn't contain the right data. If the
    > physical address in the arq/segs was for example 0x500003000 the
    > data would actually be DMAed to 0x400003000.

    The original patch was against mpr(4) before busdma templates were
    introduced, and only affected the buffer pool (sc->buffer_dmat) in
    the mpr(4) driver. After discussing it with Dwight and the
    LSI/Broadcom developers and looking through the driver, it appears
    that most of the queues are fine, because they already limit their
    memory to addresses below 4GB. The buffer pool and the chain frames
    appear to be the exceptions.

    This is essentially the same in the mpr(4) and mps(4) drivers, so
    apply a 4GB boundary limitation to the buffer and chain frame pools
    in both drivers.

    Reported by:    Dwight Engen <>
    Reviewed by:    imp
    Obtained from:  Dwight Engen <>
    Differential Revision:  <>
    (cherry picked from commit 264610a86e14f8e123d94c3c3bd9632d75c078a3)
 sys/dev/mpr/mpr.c | 6 ++++--
 sys/dev/mps/mps.c | 6 ++++--
 2 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/sys/dev/mpr/mpr.c b/sys/dev/mpr/mpr.c
index 23f7ff0c3e9a..d5c02f9608ca 100644
--- a/sys/dev/mpr/mpr.c
+++ b/sys/dev/mpr/mpr.c
@@ -1500,7 +1500,8 @@ mpr_alloc_requests(struct mpr_softc *sc)
 	rsize = sc->chain_frame_size * sc->num_chains;
 	bus_dma_template_init(&t, sc->mpr_parent_dmat);
-	    BD_MAXSEGSIZE(rsize), BD_NSEGMENTS((howmany(rsize, PAGE_SIZE))));
+	    BD_MAXSEGSIZE(rsize), BD_NSEGMENTS((howmany(rsize, PAGE_SIZE))),
+	    BD_BOUNDARY(BUS_SPACE_MAXSIZE_32BIT+1));
 	if (bus_dma_template_tag(&t, &sc->chain_dmat)) {
 		mpr_dprint(sc, MPR_ERROR, "Cannot allocate chain DMA tag\n");
 		return (ENOMEM);
@@ -1552,7 +1553,8 @@ mpr_alloc_requests(struct mpr_softc *sc)
 	    BD_FLAGS(BUS_DMA_ALLOCNOW), BD_LOCKFUNC(busdma_lock_mutex),
-	    BD_LOCKFUNCARG(&sc->mpr_mtx));
+	    BD_LOCKFUNCARG(&sc->mpr_mtx),
+	    BD_BOUNDARY(BUS_SPACE_MAXSIZE_32BIT+1));
 	if (bus_dma_template_tag(&t, &sc->buffer_dmat)) {
 		mpr_dprint(sc, MPR_ERROR, "Cannot allocate buffer DMA tag\n");
 		return (ENOMEM);
diff --git a/sys/dev/mps/mps.c b/sys/dev/mps/mps.c
index f358ab8a73a9..adad2450a3d4 100644
--- a/sys/dev/mps/mps.c
+++ b/sys/dev/mps/mps.c
@@ -1431,7 +1431,8 @@ mps_alloc_requests(struct mps_softc *sc)
 	rsize = sc->reqframesz * sc->num_chains;
 	bus_dma_template_clone(&t, sc->req_dmat);
-	    BD_NSEGMENTS(howmany(rsize, PAGE_SIZE)));
+	    BD_NSEGMENTS(howmany(rsize, PAGE_SIZE)),
+	    BD_BOUNDARY(BUS_SPACE_MAXSIZE_32BIT+1));
 	if (bus_dma_template_tag(&t, &sc->chain_dmat)) {
 		mps_dprint(sc, MPS_ERROR, "Cannot allocate chain DMA tag\n");
 		return (ENOMEM);
@@ -1473,7 +1474,8 @@ mps_alloc_requests(struct mps_softc *sc)
 	    BD_FLAGS(BUS_DMA_ALLOCNOW), BD_LOCKFUNC(busdma_lock_mutex),
-	    BD_LOCKFUNCARG(&sc->mps_mtx));
+	    BD_LOCKFUNCARG(&sc->mps_mtx),
+	    BD_BOUNDARY(BUS_SPACE_MAXSIZE_32BIT+1));
         if (bus_dma_template_tag(&t, &sc->buffer_dmat)) {
 		mps_dprint(sc, MPS_ERROR, "Cannot allocate buffer DMA tag\n");
 		return (ENOMEM);