[Bug 294132] arm64/RPi CM4: fresh UFS on NVMe fails first mount with superblock check-hash mismatch

From: <bugzilla-noreply_at_freebsd.org>
Date: Mon, 30 Mar 2026 07:17:01 UTC
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=294132

            Bug ID: 294132
           Summary: arm64/RPi CM4: fresh UFS on NVMe fails first mount
                    with superblock check-hash mismatch
           Product: Base System
           Version: 15.0-RELEASE
          Hardware: arm64
                OS: Any
            Status: New
          Severity: Affects Some People
          Priority: ---
         Component: kern
          Assignee: bugs@FreeBSD.org
          Reporter: r.zilli@me.com

Created attachment 269220
  --> https://bugs.freebsd.org/bugzilla/attachment.cgi?id=269220&action=edit
Operational steps

Environment
- FreeBSD 15.0-RELEASE-p5 GENERIC arm64
- Raspberry Pi CM4
- Swiss Tofu PCIe/NVMe carrier
- NVMe SSD: WDC WDS100T2B0C, NVMe 1.4
- System currently boots from eMMC
- NVMe detected as nda0

Problem
A freshly created UFS filesystem on the NVMe drive cannot be mounted.
The failure happens on the first mount immediately after newfs, before any data
is copied.

Observed result
Example:

  mount /dev/gpt/nvmroot /nvme
  Superblock check-hash failed: recorded check-hash 0xee9af130 != computed
      check-hash 0x7a6b2b14
  mount: /dev/gpt/nvmroot: Integrity check failed

Steps to reproduce
1. Boot FreeBSD 15.0-RELEASE-p5 from eMMC on the CM4.
2. Recreate the NVMe from scratch:
     gpart destroy -F nda0
     gpart create -s gpt nda0
     gpart add -a 1M -t freebsd-ufs -l nvmroot nda0
     newfs /dev/gpt/nvmroot
     mkdir -p /nvme
     mount /dev/gpt/nvmroot /nvme
3. The first mount fails immediately with the superblock check-hash mismatch.
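One low-level triage step that might help narrow this down (my assumption: either the superblock write during newfs or the later read is going wrong) is to read the superblock region back twice and compare, to see whether reads from the device are even stable. A sketch; DEV is a scratch file here so it runs anywhere, but on the CM4 it would be /dev/gpt/nvmroot:

```shell
# Sketch: compare two reads of the UFS2 primary superblock region.
# On the CM4, set DEV=/dev/gpt/nvmroot; a scratch file stands in here.
DEV=$(mktemp)
dd if=/dev/zero of="$DEV" bs=64k count=4 status=none
# UFS2 keeps its primary superblock 64 KiB into the partition
r1=$(dd if="$DEV" bs=64k skip=1 count=1 status=none | cksum)
r2=$(dd if="$DEV" bs=64k skip=1 count=1 status=none | cksum)
if [ "$r1" = "$r2" ]; then
    echo "reads stable"
else
    echo "reads differ: $r1 vs $r2"
fi
rm -f "$DEV"
```

If repeated reads of the real partition disagree, that would point at the read path (controller/PCIe) rather than at newfs.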

Additional testing
- Reproduced with 512-byte LBA format.
- Reproduced again after formatting the namespace to 4Kn / 4096-byte sectors.
- Reproduced with NVMe HMB enabled.
- Reproduced with HMB disabled by setting:
    hw.nvme.hmb_max="0"
- Reproduced with:
    hw.nvme.force_intx="1"
    hw.nvme.num_io_queues="1"
- Reproduced with total memory capped below 4 GB via the Raspberry Pi firmware
  setting (in config.txt):
    total_mem=4015
  Verified after boot:
    hw.physmem: 4105691136
- After disabling HMB, dmesg no longer shows:
    "Allocated 200MB host memory buffer"
  but the mount failure is unchanged.
- A control test using a memory disk works normally:
     mdconfig -a -t swap -s 512m -u 0
     newfs /dev/md0
     mount /dev/md0 /mnt
     umount /mnt
     mdconfig -d -u 0
  So UFS itself appears functional on this system.
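For anyone retesting, the NVMe tunables above are loader tunables; a combined /boot/loader.conf fragment covering the values tried (they were also tested in separate combinations, and total_mem=4015 goes in the firmware's config.txt instead) would look like:

```
# /boot/loader.conf values tested; the failure reproduced regardless
hw.nvme.hmb_max="0"         # disable the host memory buffer
hw.nvme.force_intx="1"      # fall back from MSI-X to INTx
hw.nvme.num_io_queues="1"   # single I/O queue pair
```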

Relevant boot/runtime messages
- nvme0: <Generic NVMe Device> irq 92 at device 0.0 on pci1
- nda0 at nvme0 bus 0 scbus0 target 0 lun 1
- When HMB is enabled, dmesg shows:
    nvme0: Allocated 200MB host memory buffer
- nvme0: unable to allocate MSI-X
- No obvious NVMe timeout/reset errors were observed during normal device
detection.

Expected result
The first mount after newfs should succeed.

Notes
This appears specific to the NVMe/PCIe path on this arm64 CM4 setup, because:
- the failure occurs on a fresh filesystem before data copy
- UFS on md(4) works
- stale on-disk Linux metadata was excluded by destroying and recreating GPT
and partitions
- the same CM4/carrier/NVMe hardware previously booted Ubuntu from NVMe

Also reproduced on FreeBSD 15.0-STABLE RPI snapshot dated March 26, 2026
(filename:
FreeBSD-15.0-STABLE-arm64-aarch64-RPI-20260326-4311217a039c-282698.img.xz)


Workarounds re-tested on that snapshot; the failure remained reproducible with
each:
- total_mem=4015
  verified after boot:
    hw.physmem: 4105678848
- hw.nvme.hmb_max="0"
  verified because dmesg no longer shows:
    nvme0: Allocated 200MB host memory buffer
- hw.nvme.force_intx="1"
- hw.nvme.num_io_queues="1"

Control test (repeated on that snapshot):
- UFS on md(4) works normally:
    mdconfig -a -t swap -s 512m -u 0
    newfs /dev/md0
    mount /dev/md0 /mnt
    umount /mnt
    mdconfig -d -u 0
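If someone wants to probe further, one hypothesis worth ruling out is a misdirected or overlapping write during newfs clobbering the already-written superblock. A rough sketch of the idea on a scratch file (on the real partition this would mean dd at the same offsets on the raw device, which is destructive):

```shell
# Sketch: plant a marker at the superblock offset, write elsewhere,
# and verify the marker survives. IMG is a scratch file stand-in.
IMG=$(mktemp)
dd if=/dev/zero of="$IMG" bs=64k count=8 status=none
# marker where UFS2 keeps its primary superblock (64 KiB in)
printf 'SBMARKER' | dd of="$IMG" bs=64k seek=1 conv=notrunc status=none
# write other regions, roughly as newfs does when laying out cylinder groups
dd if=/dev/urandom of="$IMG" bs=64k seek=2 count=4 conv=notrunc status=none
# the marker must still be intact
got=$(dd if="$IMG" bs=1 skip=65536 count=8 status=none)
[ "$got" = "SBMARKER" ] && echo "superblock region intact" \
                        || echo "clobbered: $got"
rm -f "$IMG"
```

On the scratch file this always passes; the point is the procedure, which on the NVMe device would distinguish a bad initial write from later corruption.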

-- 
You are receiving this mail because:
You are the assignee for the bug.