ONTAP 9.13

NFS over RDMA

NFS over RDMA utilizes RDMA adapters, allowing data to be copied directly between storage system memory and host system memory, avoiding CPU interruption and overhead.

NFS over RDMA configurations are designed for customers with latency-sensitive or high-bandwidth workloads such as machine learning and analytics. NVIDIA has extended NFS over RDMA to enable GPUDirect Storage (GDS). GDS further accelerates GPU-enabled workloads by bypassing the CPU and main memory altogether, using RDMA to transfer data directly between the storage system and GPU memory.

NFS over RDMA is supported beginning with ONTAP 9.10.1. NFS over RDMA configurations are supported only for the NFSv4.0 protocol when used with a Mellanox CX-5 or CX-6 adapter, which provides RDMA support using version 2 of the RoCE protocol. GDS is supported only with NVIDIA Tesla- and Ampere-family GPUs combined with Mellanox NICs and MOFED software.

NFS over RDMA support is limited to node-local traffic: standard FlexVol volumes, or FlexGroup volumes whose constituents are all on the same node, are supported, and they must be accessed from a LIF on the same node. NFS mount sizes larger than 64k result in unstable performance with NFS over RDMA configurations.
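For example, on a Linux client with MOFED installed, such an export might be mounted over RDMA as follows. This is a minimal sketch: the data LIF address (192.168.1.10), export path (/vol1), and mount point (/mnt/rdma) are placeholders, and rsize/wsize are held at the 64k limit noted above.

  # NFSv4.0 over RDMA; NFS over RDMA listens on port 20049.
  # rsize/wsize stay at 64k per the mount-size guidance above.
  mount -t nfs -o vers=4.0,proto=rdma,port=20049,rsize=65536,wsize=65536 \
      192.168.1.10:/vol1 /mnt/rdma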

Requirements
  • Storage systems must be running ONTAP 9.10.1 or later.

    • You can configure NFS over RDMA with ONTAP System Manager beginning with ONTAP 9.12.1. In ONTAP 9.10.1 and 9.11.1, you need to use the CLI to configure NFS over RDMA.

  • Both nodes in the HA pair must be running the same ONTAP version.

  • Storage system controllers must have RDMA support (currently AX4100).

  • Storage appliance configured with RDMA-capable hardware (e.g., Mellanox CX-5 or CX-6).

  • Data LIFs must be configured to support RDMA (see the ONTAP CLI sketch at the end of this section).

  • Clients must be using Mellanox RDMA-capable NIC cards and Mellanox OFED (MOFED) network software; a quick client-side check is sketched below.
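
As a rough client-side sanity check (assuming MOFED and the standard RDMA userspace tools it installs), the following commands report the MOFED release and confirm that an RDMA-capable device is visible to the client:

  # Print the installed Mellanox OFED release.
  ofed_info -s
  # List RDMA devices and port state; a CX-5/CX-6 port should appear with state PORT_ACTIVE.
  ibv_devinfo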

Interface groups are not supported with NFS over RDMA.
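
Because interface groups are not supported, the data LIF is created directly on an RDMA-capable physical port. The sketch below shows the general shape of the ONTAP CLI configuration from ONTAP 9.10.1 onward; the SVM (vs0), LIF (lif1), node (node1), port (e3a), and addresses are hypothetical:

  # Create a data LIF that allows RoCE (all names and addresses are placeholders).
  network interface create -vserver vs0 -lif lif1 -service-policy default-data-files \
      -home-node node1 -home-port e3a -address 192.168.1.10 -netmask 255.255.255.0 \
      -rdma-protocols roce

  # Confirm which LIFs allow RoCE.
  network interface show -fields rdma-protocols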