git: 4222032e43 - main - Status/2023Q2/nvmf.adoc: Improvements

From: Lorenzo Salvadore <salvadore_at_FreeBSD.org>
Date: Wed, 12 Jul 2023 16:35:41 UTC
The branch main has been updated by salvadore:

URL: https://cgit.FreeBSD.org/doc/commit/?id=4222032e43c581f284023b917dff5dc3ade0e874

commit 4222032e43c581f284023b917dff5dc3ade0e874
Author:     Lorenzo Salvadore <salvadore@FreeBSD.org>
AuthorDate: 2023-07-12 16:28:25 +0000
Commit:     Lorenzo Salvadore <salvadore@FreeBSD.org>
CommitDate: 2023-07-12 16:28:25 +0000

    Status/2023Q2/nvmf.adoc: Improvements
    
    - Switch to one sentence per line.
    - Add one more use of the filename markup.
    
    Approved by:    dbaio (mentor, implicit)
---
 .../en/status/report-2023-04-2023-06/nvmf.adoc     | 75 +++++++---------------
 1 file changed, 22 insertions(+), 53 deletions(-)

diff --git a/website/content/en/status/report-2023-04-2023-06/nvmf.adoc b/website/content/en/status/report-2023-04-2023-06/nvmf.adoc
index 445119c7f9..a9ceefdaea 100644
--- a/website/content/en/status/report-2023-04-2023-06/nvmf.adoc
+++ b/website/content/en/status/report-2023-04-2023-06/nvmf.adoc
@@ -5,67 +5,36 @@ link:https://github.com/bsdjhb/freebsd/tree/nvmf2[nvmf2 branch]	URL: link:https:
 
 Contact: John Baldwin <jhb@FreeBSD.org>
 
-NVMe over Fabrics enables communication with a storage device using
-the NVMe protocol over a network fabric.
-This is similar to using iSCSI to export a storage device over a
-network using SCSI commands.
+NVMe over Fabrics enables communication with a storage device using the NVMe protocol over a network fabric.
+This is similar to using iSCSI to export a storage device over a network using SCSI commands.
 
-NVMe over Fabrics currently defines network transports for
-Fibre Channel, RDMA, and TCP.
+NVMe over Fabrics currently defines network transports for Fibre Channel, RDMA, and TCP.
 
-The work in the nvmf2 branch includes a userland library (lib/libnvmf)
-which contains an abstraction for transports and an implementation of
+The work in the nvmf2 branch includes a userland library ([.filename]#lib/libnvmf#) which contains an abstraction for transports and an implementation of
 a TCP transport.
-It also includes changes to man:nvmecontrol[8] to add 'discover',
-'connect', and 'disconnect' commands to manage connections to a remote
-controller.
+It also includes changes to man:nvmecontrol[8] to add 'discover', 'connect', and 'disconnect' commands to manage connections to a remote controller.
 
 The branch also contains an in-kernel Fabrics implementation.
-[.filename]#nvmf_transport.ko# contains a transport abstraction that
-sits in between the nvmf host (initiator in SCSI terms) and the
-individual transports.
-[.filename]#nvmf_tcp.ko# contains an implementation of the TCP
-transport layer.
-[.filename]#nvmf.ko# contains an NVMe over Fabrics host (initiator)
-which connects to a remote controller and exports remote namespaces as
-disk devices.
-Similar to the man:nvme[4] driver for NVMe over PCI-express,
-namespaces are exported via [.filename]#/dev/nvmeXnsY# devices which
-only support simple operations, but are also exported as ndaX disk
-devices via CAM.
-Unlike man:nvme[4], man:nvmf[4] does not support the man:nvd[4] disk
-driver.
-nvmecontrol can be used with remote namespaces and remote controllers,
-for example to fetch log pages, display identify info, etc.
-
-Note that man:nvmf[4] is currently a bit simple and some error cases
-are still a TODO.
-If an error occurs, the queues (and backing network connections) are
-dropped, but the devices stay around, but with I/O requests paused.
-'nvmecontrol reconnect' can be used to connect a new set of network
-connections to resume operation.
-Unlike iSCSI which uses a persistent daemon (man:iscsid[8]) to
-reconnect after an error, reconnections must be done manually.
+[.filename]#nvmf_transport.ko# contains a transport abstraction that sits in between the nvmf host (initiator in SCSI terms) and the individual transports.
+[.filename]#nvmf_tcp.ko# contains an implementation of the TCP transport layer.
+[.filename]#nvmf.ko# contains an NVMe over Fabrics host (initiator) which connects to a remote controller and exports remote namespaces as disk devices.
+Similar to the man:nvme[4] driver for NVMe over PCI-express, namespaces are exported via [.filename]#/dev/nvmeXnsY# devices which only support simple operations, but are also exported as ndaX disk devices via CAM.
+Unlike man:nvme[4], man:nvmf[4] does not support the man:nvd[4] disk driver.
+nvmecontrol can be used with remote namespaces and remote controllers, for example to fetch log pages, display identify info, etc.
+
+Note that man:nvmf[4] is currently a bit simple and some error cases are still a TODO.
+If an error occurs, the queues (and backing network connections) are dropped; the devices stay around, but with I/O requests paused.
+'nvmecontrol reconnect' can be used to connect a new set of network connections to resume operation.
+Unlike iSCSI, which uses a persistent daemon (man:iscsid[8]) to reconnect after an error, man:nvmf[4] requires reconnections to be done manually.
 
 The current code is very new and likely not robust.
 It is certainly not ready for production use.
-Experienced users who do not mind all their data vanishing in a puff
-of smoke after a kernel panic and who have an interest in NVMe over
-Fabrics can start testing it at their own risk.
+Experienced users who do not mind all their data vanishing in a puff of smoke after a kernel panic and who have an interest in NVMe over Fabrics can start testing it at their own risk.
 
-The next main task is to implement a Fabrics controller (target in
-SCSI language).
-Probably a simple one in userland first followed by a "real" one that
-offloads the data handling to the kernel but is somewhat integrated
-with man:ctld[8] so that individual disk devices can be exported
-either via iSCSI or NVMe or both using a single config file and daemon
-to manage all of that.
-This may require a fair bit of refactoring in ctld to make it less
-iSCSI-specific.
-Working on the controller side will also validate some of the
-currently under-tested API design decisions in the
-transport-independent layer.
-I think it probably does not make sense to merge any of the NVMe over
-Fabrics changes into the tree until after this step.
+The next main task is to implement a Fabrics controller (target in SCSI language).
+Probably a simple one in userland first, followed by a "real" one that offloads the data handling to the kernel but is somewhat integrated with man:ctld[8], so that individual disk devices can be exported either via iSCSI or NVMe or both using a single config file and daemon to manage all of that.
+This may require a fair bit of refactoring in ctld to make it less iSCSI-specific.
+Working on the controller side will also validate some of the currently under-tested API design decisions in the transport-independent layer.
+I think it probably does not make sense to merge any of the NVMe over Fabrics changes into the tree until after this step.
 
 Sponsored by: Chelsio Communications
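
The in-kernel host stack described in the report splits across three modules. As a purely illustrative sketch, assuming a kernel and modules built from the nvmf2 branch, they might be brought up as shown below; whether kldload pulls in dependent modules automatically is not stated in the report, so each layer is loaded explicitly here.

    # Illustrative only: assumes the nvmf2 branch modules are installed.
    # Dependency auto-loading is not documented in the report, so load
    # each layer explicitly.
    kldload nvmf_transport   # transport abstraction between host and transports
    kldload nvmf_tcp         # TCP transport implementation
    kldload nvmf             # NVMe over Fabrics host (initiator)

    # Once a connection to a remote controller is established, remote
    # namespaces appear as /dev/nvmeXnsY devices and as ndaX disks via CAM.
    ls /dev/nvme* /dev/nda*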
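The connection-management verbs the report mentions ('discover', 'connect', and 'disconnect', plus 'reconnect' for error recovery) might be strung together roughly as sketched below. Only the subcommand names come from the report; the placeholder arguments (<transport-address>, <subsystem-nqn>, <device>) are assumptions, and the actual syntax in the nvmf2 branch's man:nvmecontrol[8] may differ.

    # Hypothetical invocations: the argument syntax is assumed, not taken
    # from the report or the branch's manual pages.

    # Ask a remote discovery controller which subsystems it exports.
    nvmecontrol discover <transport-address>

    # Connect to a remote controller and export its namespaces locally.
    nvmecontrol connect <transport-address> <subsystem-nqn>

    # After a transport error drops the queues, I/O stays paused until a
    # new set of network connections is attached manually.
    nvmecontrol reconnect <device> <transport-address>

    # Tear the association down.
    nvmecontrol disconnect <device>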