git: 7a4522d629 - main - Status/2023Q3/dpaa2.adoc: Add report

From: Lorenzo Salvadore <>
Date: Mon, 02 Oct 2023 08:33:25 UTC
The branch main has been updated by salvadore:


commit 7a4522d6295fddc50799a4e93864d0c8baf22f6e
Author:     Dmitry Salychev <>
AuthorDate: 2023-10-02 08:31:52 +0000
Commit:     Lorenzo Salvadore <>
CommitDate: 2023-10-02 08:31:52 +0000

    Status/2023Q3/dpaa2.adoc: Add report
    Reviewed by:    Graham Perrin <>
 .../en/status/report-2023-07-2023-09/dpaa2.adoc    | 35 ++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/website/content/en/status/report-2023-07-2023-09/dpaa2.adoc b/website/content/en/status/report-2023-07-2023-09/dpaa2.adoc
new file mode 100644
index 0000000000..ca15bfdaf0
--- /dev/null
+++ b/website/content/en/status/report-2023-07-2023-09/dpaa2.adoc
@@ -0,0 +1,35 @@
+=== NXP DPAA2 support
+Links: +
+link:[DPAA2 in the FreeBSD source tree] URL: link:[] +
+link:[DPAA2 on Github] URL: link:[]
+Contact: Dmitry Salychev <> +
+Contact: Bjoern A. Zeeb <>
+==== What is DPAA2?
+DPAA2 is a hardware-level networking architecture found in some NXP SoCs. It contains hardware blocks including the Management Complex (MC, a command interface for manipulating DPAA2 objects), the Wire Rate I/O Processor (WRIOP, packet distribution, queuing, and drop decisions), the Queues and Buffers Manager (QBMan, Rx/Tx queue control and Rx buffer pools), and others.
+The Management Complex runs NXP-supplied firmware which provides DPAA2 objects as an abstraction layer over those blocks to simplify access to the underlying hardware.
+==== Changes from the previous report
+* Isolation between DPAA2 channels link:[improved].
+* Panic under heavy network load link:[fixed].
+* FDT/ACPI MDIO support.
+* NFS root mount link:[no longer hangs] on netboot over DPAA2.
+* Drivers link:[started] to communicate with MC via their own command portals (DPMCP).
+* link:[List of all closed issues].
+==== Work in Progress
+Work has started on link:[dev/sff] to support SFF/SFP modules, in order to test DPAA2 drivers on links faster than 1 Gbit/s.
+==== Plan
+* Heavy network load tests (2.5 Gbit/s, 10 Gbit/s) and bottleneck mitigation.
+* Cached memory-backed software portals.
+* Driver resource deallocation, so that dpaa2.ko can be unloaded properly.
+* Support for further hardware blocks (DPSW, DCE, etc.).
+Sponsor: Traverse Technologies (providing Ten64 HW for testing)