SAN hosts and cloud clients manuals (CA08872-021)
Points to Note with ONTAP
Points to Note when using the ETERNUS AX/AC/HX series and the current ASA series.
Notes Related to Linux iSCSI Connections
If commands are issued to a disk device connected to the server via iSCSI, command responses may be delayed and performance may degrade.
Problems tend to occur more often when the iSCSI network is mixed with other traffic, for example when it shares a segment with the management LAN.
This can be improved by disabling delayed ACK. However, depending on the type of Linux OS, disabling delayed ACK may not be possible. For information on how to disable it, refer to the Linux OS manual.
Notes Related to iSCSI Connections of VMware ESXi Servers
If operations such as file copies are performed on iSCSI-connected storage systems, read and write performance problems may occur on VMware ESXi servers.
For the workaround, refer to the following Broadcom website.
https://knowledge.broadcom.com/external/article?legacyId=1002598
Notes Related to the Maximum Queue Depth of NFS (VMware)
The maximum queue depth set in the MaxQueueDepth advanced configuration option is not reflected in the NFS datastore.
For the workaround, refer to the following Broadcom website.
https://knowledge.broadcom.com/external/article?legacyId=86331
Notes Related to VMware Clustered VMDK
The ETERNUS AX/AC/HX series supports VMware Clustered VMDK. No additional settings are required for the ETERNUS AX/AC/HX series to use VMware Clustered VMDK.
See the Broadcom Compatibility Guide for supported models.
Notes Related to FC Connections of VMware ESXi Server
If NVMe support is enabled when FC connections are used, duplicate WWPNs may be displayed.
Disabling NVMe support is recommended for FC connections.
For details on how to disable NVMe support, refer to the following Broadcom website.
https://knowledge.broadcom.com/external/article?legacyId=84325
- For the lpfc Driver
  - Execution example 1
    # esxcli system module parameters set -m lpfc -p "lpfc_enable_fc4_type=1 lpfc0_lun_queue_depth=8"
    When changing the value of the driver parameter "lpfcX_lun_queue_depth", disable NVMe support at the same time. Even if NVMe support is already disabled, specify the setting that disables NVMe support again every time the "lpfcX_lun_queue_depth" value is changed later.
  - Execution example 2
    # esxcli system module parameters set -m lpfc -p lpfc_enable_fc4_type=1
    Disable NVMe support.
    # esxcli system module parameters set -a -m lpfc -p lpfc0_lun_queue_depth=8
    Change the value of the driver parameter "lpfcX_lun_queue_depth". By specifying the "-a" option, the "lpfcX_lun_queue_depth" value is changed while NVMe support remains disabled.
- For the qlnativefc Driver
  - Execution example 1
    # esxcli system module parameters set -m qlnativefc -p "ql2xnvmesupport=0 ql2xmaxqdepth=8"
    When changing the value of the driver parameter "ql2xmaxqdepth", disable NVMe support at the same time. Even if NVMe support is already disabled, specify the setting that disables NVMe support again every time the "ql2xmaxqdepth" value is changed later.
  - Execution example 2
    # esxcli system module parameters set -m qlnativefc -p ql2xnvmesupport=0
    Disable NVMe support.
    # esxcli system module parameters set -a -m qlnativefc -p ql2xmaxqdepth=8
    Change the value of the driver parameter "ql2xmaxqdepth". By specifying the "-a" option, the "ql2xmaxqdepth" value is changed while NVMe support remains disabled.
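To check the parameters that are currently configured for the driver module, the standard esxcli command below can be used. This is a reference sketch; the output format depends on the VMware ESXi version, and a reboot of the VMware ESXi host is generally required before changed module parameters take effect.
# esxcli system module parameters list -m lpfc
# esxcli system module parameters list -m qlnativefc
Confirm that the NVMe-related parameter ("lpfc_enable_fc4_type" or "ql2xnvmesupport") and the queue depth parameter show the intended values.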
Notes Related to Applying VMware Multi-Pathing plug-in for ETERNUS AX/HX
- VMware
- FC
| Contact our sales representative to obtain support for this software. |
When the ETERNUS AX/AC/HX series is added to the configuration, applying VMware Multi-Pathing plug-in for ETERNUS AX/HX is recommended so that the VMware ESXi host does not become unresponsive due to a slowdown when intermittent faults occur in a path.
Use the latest supported version of the module for the VMware ESXi version being used.
The functional overview and functional enhancement details of VMware Multi-Pathing plug-in for ETERNUS AX/HX are described below.
- Functional Overview
  VMware Multi-Pathing plug-in for ETERNUS AX/HX is a sub plug-in of the VMware standard Native Multipathing Plug-in (NMP) that is used to configure multipath connections with ETERNUS storage systems.
  It is also used as a Storage Array Type Plug-in (SATP) to perform error handling controls that correspond to ETERNUS storage systems.
- Details about Functional Enhancements
  In addition to the normal multipath functions, the path switching functions triggered by conditions 1 to 3 below are supported.
  These functions can reduce phenomena such as the host becoming unresponsive because a path is not switched, so applying VMware Multi-Pathing plug-in for ETERNUS AX/HX is recommended.
  1. Switching the path when it is unresponsive
     Paths are switched when the I/O is unresponsive.
  2. Enhanced diagnosis of dead paths
     Paths are recovered only after confirming that normal responses continue for at least 20 minutes during the diagnosis.
     By preventing rapid recovery of paths with intermittent faults, the system slowdown time can be reduced.
     In an environment that does not use VMware Multi-Pathing plug-in for ETERNUS AX/HX, paths are recovered when the diagnosis succeeds just a single time.
  3. Blocking unstable paths
     If the path status changes from "online" to "dead" six times within three hours after the first status transition, the path is recognized as unstable and its status is changed to "fataldead". The "fataldead" state is not recovered by a diagnosis, but can be recovered by executing a command manually.
     The command is described in Software Information (readme.txt), which is included in the downloaded module. Check the support status of VMware Multi-Pathing plug-in for ETERNUS AX/HX.
     This can prevent a continued slowdown state even when the enhanced diagnosis in item 2 cannot handle a path with an intermittent fault.
     In an environment that does not use VMware Multi-Pathing plug-in for ETERNUS AX/HX, no function is available to detect unstable paths.
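As a reference, whether the plug-in is claiming the ETERNUS devices can be checked from the VMware ESXi shell with the standard esxcli commands below. This is a sketch only; the SATP name used by VMware Multi-Pathing plug-in for ETERNUS AX/HX is described in Software Information (readme.txt) of the downloaded module.
# esxcli storage nmp satp list
Lists the SATPs registered on the host.
# esxcli storage nmp device list
Shows the SATP and path selection policy that currently claim each device.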
For the procedure to download this software, contact our sales representative.
Notes Related to Windows Server iSCSI Connections (Including Hyper-V Environments)
If commands are issued to a disk device connected to the server via iSCSI, command responses may be delayed and performance may degrade.
This can be improved by disabling delayed ACK. For information on how to disable it, refer to the Windows Server manual.
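As a reference, delayed ACK on Windows Server is commonly disabled by setting the TcpAckFrequency registry value to 1 on the network interface used for iSCSI, followed by a reboot. The command below is a sketch only; <Interface GUID> is a placeholder for the GUID of the iSCSI interface, and the supported procedure should be confirmed in the Windows Server manual.
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\<Interface GUID>" /v TcpAckFrequency /t REG_DWORD /d 1 /f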
Notes Related to the Disk Driver for Oracle Solaris
In Solaris 11.4, the disk driver used for FC- or iSCSI-connected storage systems has changed from ssd to sd. Parameters that were set for the ssd driver in Solaris 11.3 and earlier must therefore be changed to sd driver parameters.
| If an OS that is Solaris 11.3 or earlier is updated to Solaris 11.4, the ssd driver continues to be used, so problems do not occur. |
In the following cases, the settings are not applied and problems may occur.
- The ssd driver parameters are left set as before
- The parameters are not reassigned from the ssd driver to the sd driver
In addition, the storage system may be affected by the sd driver parameters set for the internal disks. As a result, business operations may be interrupted.
- Target servers
  - SPARC Enterprise
  - SPARC Servers
- Target OS
  - Solaris 11.4
The following is comparison information between OS versions when a storage system is used.

| Item | Solaris 11.3 | Solaris 11.4 |
|---|---|---|
| Driver name | ssd | sd |
| Physical device name | /pci@ ~ | /pci@ ~ |
| Instance name | ssd1 | sd1 |
| Parameter | Definition files: /etc/driver/drv/ssd.conf, /etc/system | Definition files: /etc/driver/drv/sd.conf, /etc/system |
Depending on the environment and requirements, the settings may not be applied, and the storage device (our storage, non-our storage, or virtual storage) connected using FC or iSCSI may not work as expected.
Examples of the phenomena are shown below.
| For iSCSI, an MPxIO connection is a requirement. |
- Phenomenon 1
  During a path failure in a multipath configuration, path switching may take time and I/O may slow down.
  - Environment
    The storage system is connected using FC or iSCSI, and multipath is configured using the Oracle Solaris standard multipath driver (MPxIO).
  - Occurrence Conditions
    A Solaris 11.4 OS is newly installed and the ssd driver parameters are used to configure the storage system.
    Configuration example in the /etc/system file:
    set ssd:ssd_io_time = 20
    set ssd:ssd_max_throttle = 8
Phenomenon 2
If a load that exceeds the processing performance is applied when a our storage system is connected, the performance is significantly reduced and I/O slows down.
Note that for non-our storage systems, in addition to significant performance reduction and I/O slowdown, I/O hang ups also occur.
-
Environment
The storage system is connected using FC or iSCSI.
-
Occurrence Conditions
A Solaris 11.4 OS is newly installed and the parameter for the ssd driver is used to perform a storage system configuration.Configuration example to the /etc/system file
set ssd:ssd_io_time = 20 set ssd:ssd_max_throttle = 8
-
-
Phenomenon 3
I/O to the storage system quickly times out and takes time.
-
Environment
The storage system is connected using FC or iSCSI and the internal disks are configured with sd driver parameters. This parameter can be set when using PRIMECLUSTER GD.
Configuration example to the /etc/system file
set sd:sd_io_time = 30 (the default for Oracle Solaris is 60 seconds)
-
Occurrence Conditions
A Solaris 11.4 OS is newly installed and the parameter for the ssd driver is used to perform a storage system configuration.Configuration example to the /etc/system file
set ssd:ssd_io_time = 20 set ssd:ssd_max_throttle = 8
-
- How to Prevent Problems from Occurring
  - For Phenomena 1 to 3
    Change the parameters that were set for the ssd driver to the corresponding sd driver parameters as shown below.

| Item | Pre-change | Post-change |
|---|---|---|
| Configuration file | /etc/system (common file for the sd and ssd drivers) | No change (the same file is used) |
| Configuration parameter | ssd_io_time | sd_io_time |
| | ssd_max_throttle | sd_max_throttle |

| Item | Pre-change | Post-change |
|---|---|---|
| Configuration file | /etc/driver/drv/ssd.conf | /etc/driver/drv/sd.conf |
| Configuration parameter | ssd-config-list | sd-config-list |
Note that for Solaris 11.4 and later, the sd driver parameters are common to internal disks and storage systems. If different parameter settings are required for the internal disks and the storage system, each can be set in the following files.
- Internal disk: /etc/system file
- Storage system: /etc/driver/drv/sd.conf file
When a parameter set in /etc/system is set in /etc/driver/drv/sd.conf instead, the correspondence is as follows.

| Item | Pre-change | Post-change |
|---|---|---|
| Configuration file | /etc/system | /etc/driver/drv/sd.conf |
| Configuration parameter | sd_io_time | cmd-timeout in sd-config-list |
| | sd_max_throttle | throttle-max in sd-config-list |
For details of the sd-config-list parameter, refer to the following Oracle website.
https://docs.oracle.com/cd/E53394_01/html/E54792/
Reference: "Appendix C Tuning Disk Target Driver Properties"
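As a reference, a minimal sketch of an sd-config-list entry in the /etc/driver/drv/sd.conf file is shown below. The vendor ID and product ID ("FUJITSU ETERNUS") and the values are placeholders for illustration; confirm the actual inquiry strings of the storage system and the recommended values before applying the setting.
sd-config-list = "FUJITSU ETERNUS", "cmd-timeout:20, throttle-max:8";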
- Recovery Method After Problems Occur
  If the system hangs up, follow the model-specific procedure to force a panic and restart the system. After that, perform the steps in "How to Prevent Problems from Occurring". The forced panic instructions are included in the procedure for collecting a crash dump during a hang-up. Note that if an investigation is not performed, the collected crash dump can be deleted.
Notes Related to iSCSI Connections of Oracle Solaris
- Workaround
Use the following command to change the conn-login-max value in the iSCSI initiator to "60".
# iscsiadm modify initiator-node -T conn-login-max=60
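As a reference, the current initiator node settings can be reviewed with the following Solaris command; the fields that are displayed depend on the Solaris release.
# iscsiadm list initiator-node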
Notes Related to the Maximum Number of Commands That Can Be Processed Simultaneously (Queue Depth)
- Workaround
  An optimal value is set by installing Host Utilities. For vSphere environments that do not have Host Utilities, set the queue depth to "64" for FC/iSCSI.
- Setting the queue depth
  However, if the performance does not meet expectations, the setting can also be changed manually according to the system requirements. To change the setting, refer to the following sections of the manual to perform the calculation and setting. A command sketch for vSphere is shown after this list.
  - Calculating the queue depth
  - Setting the queue depth
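As a reference, a sketch of setting a queue depth of 64 on a vSphere host is shown below, reusing the command format from the FC driver examples earlier in this document. The module and parameter names depend on the HBA driver or iSCSI adapter in use (iscsi_vmk and iscsivmk_LunQDepth are assumptions for the software iSCSI initiator); verify them with "esxcli system module parameters list" and reboot the host to apply the change.
# esxcli system module parameters set -m lpfc -p "lpfc_enable_fc4_type=1 lpfc0_lun_queue_depth=64"
# esxcli system module parameters set -m iscsi_vmk -p iscsivmk_LunQDepth=64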
Notes for PRIMECLUSTER Configurations
The following PRIMECLUSTER functions and configurations are not supported.
- I/O fencing
- Mirroring between disk storage systems
- Optional products for PRIMECLUSTER GD (PRIMECLUSTER GD Snapshot and PRIMECLUSTER GD I/O Monitor Option)
Notes Related to the Ethernet Switch Setting
- Target OS
  - VMware
- Workaround
  Configure the Ethernet switch while keeping the following points in mind.
  - High availability is ensured by using two networks. Separate iSCSI traffic into different network segments.
  - Hardware flow control for sending and receiving is enabled end-to-end.
  - Priority flow control is disabled.
  - Jumbo frames are enabled when necessary (see the sketch after this list).
  - These settings must be performed on the servers, switches, and storage systems. Refer to each manual for details.
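As a reference, on the VMware ESXi side jumbo frames are typically enabled by raising the MTU of the vSwitch and the VMkernel port used for iSCSI. The commands below are a sketch with assumed object names (vSwitch1 and vmk1); the physical switch ports and the storage system interfaces must be set to a matching MTU as well.
# esxcli network vswitch standard set -v vSwitch1 -m 9000
# esxcli network ip interface set -i vmk1 -m 9000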