git: 38ac636039 - main - status: 2023q2: nvmf: markup, minor changes
Date: Thu, 13 Jul 2023 04:02:12 UTC
The branch main has been updated by grahamperrin:

URL: https://cgit.FreeBSD.org/doc/commit/?id=38ac6360396e6b41a332272abcd69466da75e79b

commit 38ac6360396e6b41a332272abcd69466da75e79b
Author:     Graham Perrin <grahamperrin@FreeBSD.org>
AuthorDate: 2023-07-13 03:59:15 +0000
Commit:     Graham Perrin <grahamperrin@FreeBSD.org>
CommitDate: 2023-07-13 03:59:15 +0000

    status: 2023q2: nvmf: markup, minor changes

    Link to a manual page. Other changes, minor.

    Approved-by:    salvadore
    Fixes:          4222032e43 Status/2023Q2/nvmf.adoc: Improvements
    Pull-request:   https://github.com/freebsd/freebsd-doc/pull/208
---
 website/content/en/status/report-2023-04-2023-06/nvmf.adoc | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/website/content/en/status/report-2023-04-2023-06/nvmf.adoc b/website/content/en/status/report-2023-04-2023-06/nvmf.adoc
index a9ceefdaea..f9477918e4 100644
--- a/website/content/en/status/report-2023-04-2023-06/nvmf.adoc
+++ b/website/content/en/status/report-2023-04-2023-06/nvmf.adoc
@@ -5,7 +5,7 @@ link:https://github.com/bsdjhb/freebsd/tree/nvmf2[nvmf2 branch]
 URL: link:https:
 
 Contact: John Baldwin <jhb@FreeBSD.org>
 
-NVMe over Fabrics enables communication with a storage device usingthe NVMe protocol over a network fabric.
+NVMe over Fabrics enables communication with a storage device using the NVMe protocol over a network fabric.
 This is similar to using iSCSI to export a storage device over a network using SCSI commands.
 NVMe over Fabrics currently defines network transports for Fibre Channel, RDMA, and TCP.
@@ -20,19 +20,19 @@ The branch also contains an in-kernel Fabrics implementation.
 [.filename]#nvmf.ko# contains an NVMe over Fabrics host (initiator) which connects to a remote controller and exports remote namespaces as disk devices.
 Similar to the man:nvme[4] driver for NVMe over PCI-express, namespaces are exported via [.filename]#/dev/nvmeXnsY# devices which only support simple operations, but are also exported as ndaX disk devices via CAM.
 Unlike man:nvme[4], man:nvmf[4] does not support the man:nvd[4] disk driver.
-nvmecontrol can be used with remote namespaces and remote controllers, for example to fetch log pages, display identify info, etc.
+man:nvmecontrol[8] can be used with remote namespaces and remote controllers, for example to fetch log pages, display identify info, etc.
 
 Note that man:nvmf[4] is currently a bit simple and some error cases are still a TODO.
-If an error occurs, the queues (and backing network connections) are dropped, but the devices stay around, but with I/O requests paused.
-'nvmecontrol reconnect' can be used to connect a new set of network connections to resume operation.
-Unlike iSCSI which uses a persistent daemon (man:iscsid[8]) to reconnect after an error, reconnections must be done manually.
+If an error occurs, the queues (and backing network connections) are dropped, but the devices stay around, with I/O requests paused.
+`nvmecontrol reconnect` can be used to connect a new set of network connections to resume operation.
+Unlike iSCSI which uses a persistent daemon (man:iscsid[8]) to reconnect after an error, reconnections must be made manually.
 
 The current code is very new and likely not robust.
 It is certainly not ready for production use.
-Experienced users who do not mind all their data vanishing in a puff of smoke after a kernel panic and who have an interest in NVMe over Fabrics can start testing it at their own risk.
+Experienced users who do not mind all their data vanishing in a puff of smoke after a kernel panic, and who have an interest in NVMe over Fabrics, can start testing it at their own risk.
 
 The next main task is to implement a Fabrics controller (target in SCSI language).
-Probably a simple one in userland first followed by a "real" one that offloads the data handling to the kernel but is somewhat integrated with man:ctld[8] so that individual disk devices can be exported either via iSCSI or NVMe or both using a single config file and daemon to manage all of that.
+Probably a simple one in userland first followed by a "real" one that offloads the data handling to the kernel but is somewhat integrated with man:ctld[8] so that individual disk devices can be exported via iSCSI or NVMe, or via both using a single config file and daemon to manage all of that.
 This may require a fair bit of refactoring in ctld to make it less iSCSI-specific.
 Working on the controller side will also validate some of the currently under-tested API design decisions in the transport-independent layer.
 I think it probably does not make sense to merge any of the NVMe over Fabrics changes into the tree until after this step.
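For readers unfamiliar with the host (initiator) side described in the diff above, a minimal console sketch of what exercising it might look like. Only nvmf.ko and nvmecontrol are named in the report; the connect invocation, the address 192.0.2.10:4420, the subsystem NQN, and the resulting device names are illustrative assumptions, since at the time of this commit the code lived only in the nvmf2 branch.

    # Load the NVMe over Fabrics host driver named in the report.
    kldload nvmf

    # Assumed invocation: associate with a remote controller.
    # Address, port, and subsystem NQN are placeholders.
    nvmecontrol connect 192.0.2.10:4420 nqn.2023-07.org.example:storage0

    # Per the report, remote namespaces then appear both as
    # /dev/nvmeXnsY character devices and as ndaX disks via CAM.
    ls /dev/nvme1ns1 /dev/nda0

    # Fetch identify data from the remote controller, one of the
    # operations the report says nvmecontrol supports.
    nvmecontrol identify nvme1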
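Likewise, a sketch of the manual recovery path the report describes after a transport error. The reconnect subcommand is quoted from the report itself; the device name and address are the same assumed placeholders as above.

    # After an error the queues and network connections are dropped,
    # the devices stay around, and I/O pauses. Re-establish a new set
    # of connections manually; unlike iSCSI there is no persistent
    # daemon comparable to iscsid to do this automatically.
    nvmecontrol reconnect nvme1 192.0.2.10:4420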
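The unified ctld configuration mentioned as future work did not exist when this commit was made, so any concrete syntax is speculation. Purely as a hypothetical sketch of the stated goal (a single config file and daemon exporting one disk device over both iSCSI and NVMe), with the NVMe stanza entirely invented for illustration:

    # iSCSI target stanza in the style of the existing ctl.conf(5).
    target iqn.2023-07.org.example:target0 {
            lun 0 {
                    path /dev/zvol/tank/vol0
            }
    }

    # Hypothetical NVMe analogue sharing the same backing device;
    # not real ctld syntax at the time of this commit.
    controller nqn.2023-07.org.example:storage0 {
            namespace 1 {
                    path /dev/zvol/tank/vol0
            }
    }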