NVMe-oF Host Configuration for ESXi 7.x with ONTAP
You can configure NVMe over Fabrics (NVMe-oF) on initiator hosts running ESXi 7.x and ONTAP as the target.
Supportability
-
Beginning with ONTAP 9.7, NVMe over Fibre Channel (NVMe/FC) is supported with VMware vSphere releases.
-
Beginning with ESXi 7.0U3c, the NVMe/TCP feature is supported on the ESXi hypervisor.
-
Beginning with ONTAP 9.10.1, the NVMe/TCP feature is supported in ONTAP. You can confirm the host and array versions as shown after this list.
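One way to confirm these minimum versions is to query the build information on each side. These are standard ESXi and ONTAP version queries rather than steps specific to this procedure; output varies by environment.

On the ESXi host:

# vmware -vl

On the ONTAP cluster shell:

# version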
Features
-
An ESXi initiator host can run both NVMe/FC and FCP traffic through the same adapter ports.
-
Beginning with ONTAP 9.9.1 P3, the NVMe/FC feature is supported with ESXi 7.0 Update 3.
-
For ESXi 7.0 and later releases, the High Performance Plugin (HPP) is the default plugin for NVMe devices. You can list the devices claimed by HPP as shown after this list.
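If you want to confirm that HPP has claimed your NVMe devices, the following command lists the HPP-managed devices and their configuration. It is a standard ESXi 7.x query shown here as an optional check, not a required step in this procedure.

# esxcli storage hpp device list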
Known limitations
The following configurations are not supported:
-
RDM mapping
-
VVols
Enable NVMe/FC
-
Check the ESXi host NQN string and verify that it matches the host NQN string for the corresponding subsystem on the ONTAP array (if the host NQN is missing from the subsystem, see the example after the output):
# esxcli nvme info get
Host NQN: nqn.2014-08.com.vmware:nvme:nvme-esx

# vserver nvme subsystem host show -vserver vserver_nvme
Vserver       Subsystem           Host NQN
-------       ------------------- ----------------------------------------
vserver_nvme  ss_vserver_nvme     nqn.2014-08.com.vmware:nvme:nvme-esx
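If the ESXi host NQN is not yet listed for the subsystem, you can add it on the ONTAP array. The vserver, subsystem, and NQN values below are taken from the example output above; substitute your own values.

# vserver nvme subsystem host add -vserver vserver_nvme -subsystem ss_vserver_nvme -host-nqn nqn.2014-08.com.vmware:nvme:nvme-esx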
Configure Broadcom/Emulex
-
Set the lpfc driver parameter lpfc_enable_fc4_type=3 to enable NVMe/FC support in the lpfc driver, and then reboot the host.
Starting with vSphere 7.0 update 3, the brcmnvmefc driver is no longer available. Therefore, the lpfc driver now includes the NVMe over Fibre Channel (NVMe/FC) functionality previously delivered with the brcmnvmefc driver.
The lpfc_enable_fc4_type=3 parameter is set by default for the LPe35000-series adapters. You must run the following command to set it manually for LPe32000-series and LPe31000-series adapters.
# esxcli system module parameters set -m lpfc -p lpfc_enable_fc4_type=3

# esxcli system module parameters list -m lpfc | grep lpfc_enable_fc4_type
lpfc_enable_fc4_type  int  3  Defines what FC4 types are supported

# esxcli storage core adapter list
HBA Name  Driver  Link State  UID                                   Capabilities         Description
--------  ------  ----------  ------------------------------------  -------------------  -----------
vmhba1    lpfc    link-up     fc.200000109b95456f:100000109b95456f  Second Level Lun ID  (0000:86:00.0) Emulex Corporation Emulex LPe36000 Fibre Channel Adapter FC HBA
vmhba2    lpfc    link-up     fc.200000109b954570:100000109b954570  Second Level Lun ID  (0000:86:00.1) Emulex Corporation Emulex LPe36000 Fibre Channel Adapter FC HBA
vmhba64   lpfc    link-up     fc.200000109b95456f:100000109b95456f                       (0000:86:00.0) Emulex Corporation Emulex LPe36000 Fibre Channel Adapter NVMe HBA
vmhba65   lpfc    link-up     fc.200000109b954570:100000109b954570                       (0000:86:00.1) Emulex Corporation Emulex LPe36000 Fibre Channel Adapter NVMe HBA
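As an optional check after the reboot, you can confirm that the lpfc module itself is loaded and enabled before proceeding. This is a generic ESXi module query, not a step from this procedure.

# esxcli system module get -m lpfc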
Configure Marvell/QLogic
-
Set the qlnativefc driver parameter ql2xnvmesupport=1 to enable NVMe/FC support in the qlnativefc driver, and then reboot the host.

# esxcfg-module -s 'ql2xnvmesupport=1' qlnativefc

The qlnativefc driver parameter is set by default for the Qle 277x-series adapters. You must run the following command to set it manually for Qle 277x-series adapters.

# esxcfg-module -l | grep qlnativefc
qlnativefc                   4    1912
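After the reboot, you can verify that the parameter was applied by using the same module-parameter query shown above for the lpfc driver; the grep filter is only for readability.

# esxcli system module parameters list -m qlnativefc | grep ql2xnvmesupport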
-
Check whether NVMe is enabled on the adapter:
# esxcli storage core adapter list
HBA Name  Driver      Link State  UID                                   Capabilities         Description
--------  ----------  ----------  ------------------------------------  -------------------  -----------
vmhba3    qlnativefc  link-up     fc.20000024ff1817ae:21000024ff1817ae  Second Level Lun ID  (0000:5e:00.0) QLogic Corp QLE2742 Dual Port 32Gb Fibre Channel to PCIe Adapter FC Adapter
vmhba4    qlnativefc  link-up     fc.20000024ff1817af:21000024ff1817af  Second Level Lun ID  (0000:5e:00.1) QLogic Corp QLE2742 Dual Port 32Gb Fibre Channel to PCIe Adapter FC Adapter
vmhba64   qlnativefc  link-up     fc.20000024ff1817ae:21000024ff1817ae                       (0000:5e:00.0) QLogic Corp QLE2742 Dual Port 32Gb Fibre Channel to PCIe Adapter NVMe FC Adapter
vmhba65   qlnativefc  link-up     fc.20000024ff1817af:21000024ff1817af                       (0000:5e:00.1) QLogic Corp QLE2742 Dual Port 32Gb Fibre Channel to PCIe Adapter NVMe FC Adapter
Validate NVMe/FC
-
Verify that the NVMe/FC adapters are listed on the ESXi host; a controller-level check follows the output:
# esxcli nvme adapter list
Adapter  Adapter Qualified Name           Transport Type  Driver      Associated Devices
-------  -------------------------------  --------------  ----------  ------------------
vmhba64  aqn:qlnativefc:21000024ff1817ae  FC              qlnativefc
vmhba65  aqn:qlnativefc:21000024ff1817af  FC              qlnativefc
vmhba66  aqn:lpfc:100000109b579d9c        FC              lpfc
vmhba67  aqn:lpfc:100000109b579d9d        FC              lpfc
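To see the NVMe controllers that have been discovered through these adapters, you can also list them with the following command; controller names and numbers differ by environment.

# esxcli nvme controller list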
-
Verify that the NVMe/FC namespaces are properly created:
The UUIDs in the following example represent the NVMe/FC namespace devices.
# esxcfg-mpath -b
uuid.5084e29a6bb24fbca5ba076eda8ecd7e : NVMe Fibre Channel Disk (uuid.5084e29a6bb24fbca5ba076eda8ecd7e)
   vmhba65:C0:T0:L1 LUN:1 state:active fc Adapter: WWNN: 20:00:34:80:0d:6d:72:69 WWPN: 21:00:34:80:0d:6d:72:69  Target: WWNN: 20:17:00:a0:98:df:e3:d1 WWPN: 20:2f:00:a0:98:df:e3:d1
   vmhba65:C0:T1:L1 LUN:1 state:active fc Adapter: WWNN: 20:00:34:80:0d:6d:72:69 WWPN: 21:00:34:80:0d:6d:72:69  Target: WWNN: 20:17:00:a0:98:df:e3:d1 WWPN: 20:1a:00:a0:98:df:e3:d1
   vmhba64:C0:T0:L1 LUN:1 state:active fc Adapter: WWNN: 20:00:34:80:0d:6d:72:68 WWPN: 21:00:34:80:0d:6d:72:68  Target: WWNN: 20:17:00:a0:98:df:e3:d1 WWPN: 20:18:00:a0:98:df:e3:d1
   vmhba64:C0:T1:L1 LUN:1 state:active fc Adapter: WWNN: 20:00:34:80:0d:6d:72:68 WWPN: 21:00:34:80:0d:6d:72:68  Target: WWNN: 20:17:00:a0:98:df:e3:d1 WWPN: 20:19:00:a0:98:df:e3:d1
In ONTAP 9.7, the default block size for an NVMe/FC namespace is 4K. This default size is not compatible with ESXi. Therefore, when creating namespaces for ESXi, you must set the namespace block size to 512B. You can do this using the vserver nvme namespace create command.

Example:

vserver nvme namespace create -vserver vs_1 -path /vol/nsvol/namespace1 -size 100g -ostype vmware -block-size 512B

Refer to the ONTAP 9 command man pages for additional details. An optional cross-check of the namespaces is shown after this note.
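As an optional cross-check, you can list the namespaces from both sides. The vserver name below is the one from the example above; substitute your own.

On the ESXi host:

# esxcli nvme namespace list

On the ONTAP array:

# vserver nvme namespace show -vserver vs_1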
-
Verify the status of the individual ANA paths of the respective NVMe/FC namespace devices; a device-level check follows the path output:
# esxcli storage hpp path list -d uuid.5084e29a6bb24fbca5ba076eda8ecd7e

fc.200034800d6d7268:210034800d6d7268-fc.201700a098dfe3d1:201800a098dfe3d1-uuid.5084e29a6bb24fbca5ba076eda8ecd7e
   Runtime Name: vmhba64:C0:T0:L1
   Device: uuid.5084e29a6bb24fbca5ba076eda8ecd7e
   Device Display Name: NVMe Fibre Channel Disk (uuid.5084e29a6bb24fbca5ba076eda8ecd7e)
   Path State: active
   Path Config: {TPG_id=0,TPG_state=AO,RTP_id=0,health=UP}

fc.200034800d6d7269:210034800d6d7269-fc.201700a098dfe3d1:201a00a098dfe3d1-uuid.5084e29a6bb24fbca5ba076eda8ecd7e
   Runtime Name: vmhba65:C0:T1:L1
   Device: uuid.5084e29a6bb24fbca5ba076eda8ecd7e
   Device Display Name: NVMe Fibre Channel Disk (uuid.5084e29a6bb24fbca5ba076eda8ecd7e)
   Path State: active
   Path Config: {TPG_id=0,TPG_state=AO,RTP_id=0,health=UP}

fc.200034800d6d7269:210034800d6d7269-fc.201700a098dfe3d1:202f00a098dfe3d1-uuid.5084e29a6bb24fbca5ba076eda8ecd7e
   Runtime Name: vmhba65:C0:T0:L1
   Device: uuid.5084e29a6bb24fbca5ba076eda8ecd7e
   Device Display Name: NVMe Fibre Channel Disk (uuid.5084e29a6bb24fbca5ba076eda8ecd7e)
   Path State: active unoptimized
   Path Config: {TPG_id=0,TPG_state=ANO,RTP_id=0,health=UP}

fc.200034800d6d7268:210034800d6d7268-fc.201700a098dfe3d1:201900a098dfe3d1-uuid.5084e29a6bb24fbca5ba076eda8ecd7e
   Runtime Name: vmhba64:C0:T1:L1
   Device: uuid.5084e29a6bb24fbca5ba076eda8ecd7e
   Device Display Name: NVMe Fibre Channel Disk (uuid.5084e29a6bb24fbca5ba076eda8ecd7e)
   Path State: active unoptimized
   Path Config: {TPG_id=0,TPG_state=ANO,RTP_id=0,health=UP}
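For a device-level summary of the same namespace, including the path selection scheme in use, you can also query the HPP device entry; the UUID below is the example device from the output above.

# esxcli storage hpp device list -d uuid.5084e29a6bb24fbca5ba076eda8ecd7e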
