nvme-tcp: add NVMe over TCP host driver
This patch implements the NVMe over TCP host driver. It can be used to connect to remote NVMe over Fabrics subsystems over good old TCP/IP.

The driver implements TP 8000, which defines how NVMe over Fabrics capsules and data are encapsulated in NVMe/TCP PDUs and exchanged on top of a TCP byte stream. NVMe/TCP header and data digests are supported as well.

To connect to all NVMe over Fabrics controllers reachable on a given target port over TCP, use the following command:

    nvme connect-all -t tcp -a $IPADDR

This requires the latest version of nvme-cli with TCP support.

Signed-off-by: Sagi Grimberg <sagi@lightbitslabs.com>
Signed-off-by: Roy Shterman <roys@lightbitslabs.com>
Signed-off-by: Solganik Alexander <sashas@lightbitslabs.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
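For orientation, here is a minimal sketch of the 8-byte common header that every NVMe/TCP PDU begins with. It is not taken from this patch; the layout and field names follow TP 8000 as recalled, so treat them as illustrative:

/*
 * Illustration only: the common header shared by all NVMe/TCP PDUs
 * (ICReq/ICResp, command/response capsules, data PDUs, R2T, ...).
 * Written from the spec, not copied from the driver source.
 */
#include <stdint.h>

struct nvme_tcp_common_hdr {
	uint8_t  type;   /* PDU type, e.g. ICReq, CapsuleCmd, C2HData */
	uint8_t  flags;  /* e.g. header/data digest flags when digests are enabled */
	uint8_t  hlen;   /* length of the PDU header */
	uint8_t  pdo;    /* offset of the data from the start of the PDU */
	uint32_t plen;   /* total PDU length, little-endian on the wire */
};

The header and data digests mentioned in the commit message are CRCs appended to the PDU header and data sections when the corresponding flags are negotiated.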
commit 3f2304f8c6
parent ad4f530e95
--- a/drivers/nvme/host/Kconfig
+++ b/drivers/nvme/host/Kconfig
@@ -57,3 +57,18 @@ config NVME_FC
 	  from https://github.com/linux-nvme/nvme-cli.
 
 	  If unsure, say N.
+
+config NVME_TCP
+	tristate "NVM Express over Fabrics TCP host driver"
+	depends on INET
+	depends on BLK_DEV_NVME
+	select NVME_FABRICS
+	help
+	  This provides support for the NVMe over Fabrics protocol using
+	  the TCP transport. This allows you to use remote block devices
+	  exported using the NVMe protocol set.
+
+	  To configure a NVMe over Fabrics controller use the nvme-cli tool
+	  from https://github.com/linux-nvme/nvme-cli.
+
+	  If unsure, say N.
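As a hypothetical example of enabling the new option, a .config fragment for building the driver as a module could look like the following; NVME_FABRICS does not need to be set by hand because it is selected by NVME_TCP:

CONFIG_INET=y
CONFIG_BLK_DEV_NVME=m
CONFIG_NVME_TCP=m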
--- a/drivers/nvme/host/Makefile
+++ b/drivers/nvme/host/Makefile
@@ -7,6 +7,7 @@ obj-$(CONFIG_BLK_DEV_NVME)	+= nvme.o
 obj-$(CONFIG_NVME_FABRICS)	+= nvme-fabrics.o
 obj-$(CONFIG_NVME_RDMA)		+= nvme-rdma.o
 obj-$(CONFIG_NVME_FC)		+= nvme-fc.o
+obj-$(CONFIG_NVME_TCP)		+= nvme-tcp.o
 
 nvme-core-y			:= core.o
 nvme-core-$(CONFIG_TRACING)	+= trace.o
@@ -21,3 +22,5 @@ nvme-fabrics-y	+= fabrics.o
 nvme-rdma-y	+= rdma.o
 
 nvme-fc-y	+= fc.o
+
+nvme-tcp-y	+= tcp.o
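As a rough usage sketch, the module could be built, loaded, and exercised as shown below. The build command assumes a configured kernel source tree, and the port number is only an example; the connect command itself comes from the commit message:

# build the modules for this directory (in a configured kernel tree)
make M=drivers/nvme/host modules
# load the NVMe/TCP host driver
modprobe nvme-tcp
# discover and connect to all subsystems on the target port
nvme connect-all -t tcp -a $IPADDR -s 4420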
(The diff of the new tcp.c host driver source itself is suppressed because it is too large to display.)