SAN hosts and cloud clients manuals (CA08872-021)

NVMe-oF Host Configuration for ESXi 8.x with ONTAP

You can configure NVMe over Fabrics (NVMe-oF) on initiator hosts running ESXi 8.x, with ONTAP as the target.

Supportability

  • Beginning with ONTAP 9.16.1, space allocation is enabled by default for all newly created NVMe namespaces.

  • Beginning with ONTAP 9.9.1 P3, NVMe/FC protocol is supported for ESXi 8 and later.

  • Beginning with ONTAP 9.10.1, the NVMe/TCP protocol is supported.

Features

  • ESXi initiator hosts can run both NVMe/FC and FCP traffic through the same adapter ports.

  • For ESXi 8.0 and later releases, the high-performance plugin (HPP) is the default plugin for NVMe devices (see the verification example after this list).
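
You can confirm which NVMe devices are claimed by HPP from the ESXi shell. This is a minimal check; the devices listed depend on your configuration:

# esxcli storage hpp device list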

Known limitations

  • RDM mapping is not supported.

Enable NVMe/FC

NVMe/FC is enabled by default in vSphere releases.

Verify host NQN

You must check the ESXi host NQN string and verify that it matches the host NQN string for the corresponding subsystem on the ONTAP array.

# esxcli nvme info get

Example output:

Host NQN: nqn.2014-08.org.nvmexpress:uuid:62a19711-ba8c-475d-c954-0000c9f1a436

Display the host NQN string for the corresponding subsystem on the ONTAP array:

# vserver nvme subsystem host show -vserver nvme_fc

Example output:

Vserver Subsystem Host NQN
------- --------- ----------------------------------------------------------
nvme_fc nvme_ss  nqn.2014-08.org.nvmexpress:uuid:62a19711-ba8c-475d-c954-0000c9f1a436

If the host NQN strings do not match, use the vserver nvme subsystem host add command to add the correct host NQN string to the corresponding NVMe subsystem on the ONTAP array.
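
For example, a minimal sketch that reuses the vserver, subsystem, and host NQN values shown above (substitute your own values):

# vserver nvme subsystem host add -vserver nvme_fc -subsystem nvme_ss -host-nqn nqn.2014-08.org.nvmexpress:uuid:62a19711-ba8c-475d-c954-0000c9f1a436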

Configure Broadcom/Emulex and Marvell/QLogic

The lpfc driver and the qlnativefc driver in vSphere 8.x have the NVMe/FC capability enabled by default.
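
If you want to confirm this on your host, you can list the FC driver module parameters from the ESXi shell. This is a hedged check; parameter names can vary by driver version, but for the lpfc driver the lpfc_enable_fc4_type parameter indicates which FC4 types (FCP, NVMe, or both) are enabled:

# esxcli system module parameters list -m lpfc | grep lpfc_enable_fc4_type
# esxcli system module parameters list -m qlnativefc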

Validate NVMe/FC

You can use the following procedure to validate NVMe/FC.

Steps
  1. Verify that the NVMe/FC adapter is listed on the ESXi host:

    # esxcli nvme adapter list

    Example output:

    Adapter  Adapter Qualified Name           Transport Type  Driver      Associated Devices
    -------  -------------------------------  --------------  ----------  ------------------
    vmhba64  aqn:lpfc:100000109b579f11        FC              lpfc
    vmhba65  aqn:lpfc:100000109b579f12        FC              lpfc
    vmhba66  aqn:qlnativefc:2100f4e9d456e286  FC              qlnativefc
    vmhba67  aqn:qlnativefc:2100f4e9d456e287  FC              qlnativefc
  2. Verify that the NVMe/FC namespaces are correctly created:

    The UUIDs in the following example represent the NVMe/FC namespace devices.

    # esxcfg-mpath -b
    uuid.116cb7ed9e574a0faf35ac2ec115969d : NVMe Fibre Channel Disk (uuid.116cb7ed9e574a0faf35ac2ec115969d)
       vmhba64:C0:T0:L5 LUN:5 state:active fc Adapter: WWNN: 20:00:00:24:ff:7f:4a:50 WWPN: 21:00:00:24:ff:7f:4a:50  Target: WWNN: 20:04:d0:39:ea:3a:b2:1f WWPN: 20:05:d0:39:ea:3a:b2:1f
       vmhba64:C0:T1:L5 LUN:5 state:active fc Adapter: WWNN: 20:00:00:24:ff:7f:4a:50 WWPN: 21:00:00:24:ff:7f:4a:50  Target: WWNN: 20:04:d0:39:ea:3a:b2:1f WWPN: 20:07:d0:39:ea:3a:b2:1f
       vmhba65:C0:T1:L5 LUN:5 state:active fc Adapter: WWNN: 20:00:00:24:ff:7f:4a:51 WWPN: 21:00:00:24:ff:7f:4a:51  Target: WWNN: 20:04:d0:39:ea:3a:b2:1f WWPN: 20:08:d0:39:ea:3a:b2:1f
       vmhba65:C0:T0:L5 LUN:5 state:active fc Adapter: WWNN: 20:00:00:24:ff:7f:4a:51 WWPN: 21:00:00:24:ff:7f:4a:51  Target: WWNN: 20:04:d0:39:ea:3a:b2:1f WWPN: 20:06:d0:39:ea:3a:b2:1f

    In ONTAP 9.7, the default block size for an NVMe/FC namespace is 4K. This default size is not compatible with ESXi. Therefore, when you create namespaces for ESXi, you must set the namespace block size to 512B. You can do this using the vserver nvme namespace create command.

    Example:

    vserver nvme namespace create -vserver vs_1 -path /vol/nsvol/namespace1 -size 100g -ostype vmware -block-size 512B

    Refer to the ONTAP 9 Command man pages for additional details.
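
    After you create the namespace, it must also be mapped to the NVMe subsystem before the ESXi host can discover it. A minimal sketch, reusing the vserver and namespace path from the example above and assuming a subsystem named nvme_ss:

    vserver nvme subsystem map add -vserver vs_1 -subsystem nvme_ss -path /vol/nsvol/namespace1
    vserver nvme subsystem map show -vserver vs_1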

  3. Verify the status of the individual ANA paths of the respective NVMe/FC namespace devices:

    # esxcli storage hpp path list -d uuid.df960bebb5a74a3eaaa1ae55e6b3411d
    
    fc.20000024ff7f4a50:21000024ff7f4a50-fc.2004d039ea3ab21f:2005d039ea3ab21f-uuid.df960bebb5a74a3eaaa1ae55e6b3411d
       Runtime Name: vmhba64:C0:T0:L3
       Device: uuid.df960bebb5a74a3eaaa1ae55e6b3411d
       Device Display Name: NVMe Fibre Channel Disk (uuid.df960bebb5a74a3eaaa1ae55e6b3411d)
       Path State: active unoptimized
       Path Config: {ANA_GRP_id=4,ANA_GRP_state=ANO,health=UP}
    
    fc.20000024ff7f4a51:21000024ff7f4a51-fc.2004d039ea3ab21f:2008d039ea3ab21f-uuid.df960bebb5a74a3eaaa1ae55e6b3411d
       Runtime Name: vmhba65:C0:T1:L3
       Device: uuid.df960bebb5a74a3eaaa1ae55e6b3411d
       Device Display Name: NVMe Fibre Channel Disk (uuid.df960bebb5a74a3eaaa1ae55e6b3411d)
       Path State: active
       Path Config: {ANA_GRP_id=4,ANA_GRP_state=AO,health=UP}
    
    fc.20000024ff7f4a51:21000024ff7f4a51-fc.2004d039ea3ab21f:2006d039ea3ab21f-uuid.df960bebb5a74a3eaaa1ae55e6b3411d
       Runtime Name: vmhba65:C0:T0:L3
       Device: uuid.df960bebb5a74a3eaaa1ae55e6b3411d
       Device Display Name: NVMe Fibre Channel Disk (uuid.df960bebb5a74a3eaaa1ae55e6b3411d)
       Path State: active unoptimized
       Path Config: {ANA_GRP_id=4,ANA_GRP_state=ANO,health=UP}
    
    fc.20000024ff7f4a50:21000024ff7f4a50-fc.2004d039ea3ab21f:2007d039ea3ab21f-uuid.df960bebb5a74a3eaaa1ae55e6b3411d
       Runtime Name: vmhba64:C0:T1:L3
       Device: uuid.df960bebb5a74a3eaaa1ae55e6b3411d
       Device Display Name: NVMe Fibre Channel Disk (uuid.df960bebb5a74a3eaaa1ae55e6b3411d)
       Path State: active
       Path Config: {ANA_GRP_id=4,ANA_GRP_state=AO,health=UP}
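
    To see the device-level multipathing summary for the same namespace, including the path selection scheme that HPP is using, you can also list the HPP device. A hedged example using the device UUID from the output above:

    # esxcli storage hpp device list -d uuid.df960bebb5a74a3eaaa1ae55e6b3411d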

NVMe deallocate

The NVMe deallocate command is supported for ESXi 8.0u2 and later with ONTAP 9.16.1 and later.

Deallocate support is always enabled for NVMe namespaces. Deallocate operations allow a host to identify blocks of data that are no longer required because they no longer contain valid data; the storage system can then remove those blocks so that the space can be used elsewhere. Deallocate also allows the guest OS to perform UNMAP (sometimes called TRIM) operations on VMFS datastores.
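
If you need to trigger space reclamation on a VMFS datastore manually, vSphere also provides a manual unmap command. A hedged example, assuming a datastore labeled datastore1:

# esxcli storage vmfs unmap -l datastore1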

Steps
  1. On your ESXi host, verify the setting for DSM deallocate with TP4040 support:

    # esxcfg-advcfg -g /Scsi/NvmeUseDsmTp4040

    The expected value is 0.

  2. Enable the setting for DSM deallocate with TP4040 support:

    # esxcfg-advcfg -s 1 /Scsi/NvmeUseDsmTp4040

  3. Verify that the setting for DSM deallocate with TP4040 support is enabled:

    # esxcfg-advcfg -g /Scsi/NvmeUseDsmTp4040

    The expected value is 1.
