SANtricity express configuration manuals (CA08872-012)

Points to Note with SANtricity

Points to Note when using the ETERNUS AB/HB series

When commands are issued to a disk device that is connected to the server via iSCSI, command responses may be delayed and performance may degrade. Problems tend to occur more frequently in mixed network configurations, for example when the iSCSI network shares segments with the management LAN. This can be improved by disabling delayed ACK. However, depending on the type of Linux OS, disabling delayed ACK may not be possible. For information on how to disable it, refer to the Linux OS manual.
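As a reference sketch only, on many Linux distributions delayed ACK can be suppressed per route with the quickack option of the ip command; the subnet, interface, and source address below are placeholders, and the supported procedure should be confirmed in the manual of the Linux OS in use.

# Sketch: enable TCP quick ACK on the route that carries iSCSI traffic
# (replace the subnet, device, and source address with the values shown by "ip route show")
ip route change 192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.5 quickack 1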

Volumes referred to as access volumes are automatically created for the ETERNUS AB2100, ETERNUS AB5100, ETERNUS HB1100/HB1200/HB1300/HB1400, ETERNUS HB2100/HB2200/HB2300/HB2400/HB2500/HB2600, and ETERNUS HB5100/HB5200. Access volumes are required only when the automatic host creation and in-band management functions are used, and those functions are not supported by our storage. Delete the LUN mapping for the access volumes. To delete the LUN mapping, select Storage > Hosts > Unassign Volumes in SANtricity System Manager, or execute the "remove lunmapping" command from SMcli. For details, refer to "ETERNUS AB/HB series SANtricity commands: Commands A-Z".
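As a rough sketch only, an SMcli invocation takes the following form; the management IP address is a placeholder, and the exact syntax of the "remove lunmapping" command (for example, whether a host specification is required) must be taken from the command reference above.

# Sketch only: placeholder management address; confirm the exact syntax in the command reference
SMcli 192.168.0.10 -c "remove accessVolume lunMapping;"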

When commands are issued to a disk device that is connected to a server via iSCSI, command responses may be delayed and performance may degrade.
Problems tend to occur more frequently in mixed network configurations, for example when the iSCSI network shares segments with the management LAN.
This can be improved by disabling delayed ACK. However, depending on the type of Linux OS, disabling delayed ACK may not be possible. For information on how to disable it, refer to the Linux OS manual.

Target OS

VMware

Workaround
  • To enable Jumbo Frame, all the connected devices must support Jumbo Frame. Set an appropriate value for the various parameters (such as MTU size) of the connected devices.

  • For the Jumbo Frame setting method of devices such as LAN cards and Ethernet switches, refer to the manuals of VMware ESXi and the various devices. To reflect the setting, the server may require a restart. (A reference sketch for the ESXi side follows this list.)

  • The MTU sizes supported by the ETERNUS AB/HB series are between 1,500 and 9,000 bytes. (Default MTU size: 1,500 bytes/frame)
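As a reference sketch for the VMware ESXi side, the MTU of a standard vSwitch and of the iSCSI VMkernel port can be set with esxcli; the vSwitch and VMkernel interface names below are placeholders for your environment.

# Sketch: set MTU 9000 on a standard vSwitch and on the iSCSI VMkernel port
# (vSwitch1 and vmk1 are placeholders; distributed switches are configured from vCenter instead)
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli network ip interface set --interface-name=vmk1 --mtu=9000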

Target OS

VMware, Windows, and Linux

Workaround

Configure the Ethernet switch by keeping the following points in mind.

  • High availability is ensured by using two networks. Separate iSCSI traffic into different network segments.

  • Hardware flow control is enabled for the servers and switches. In addition, priority flow control is disabled.

  • Jumbo Frame is enabled when necessary.

    To enable Jumbo Frame, it must be set for the servers, switches, and storage systems. For the storage systems, refer to the following manual to set an appropriate MTU value.

Target OS

VMware

Phenomenon

If NVMe support is enabled during an FC connection, duplicate WWPNs may be displayed.

Workaround

During an FC connection, disabling NVMe support is recommended.
For details on how to disable NVMe support, refer to the following Broadcom website.

https://knowledge.broadcom.com/external/article?legacyId=84325

  • For the lpfc Driver

    • Execution example 1

      # esxcli system module parameters set -m lpfc -p "lpfc_enable_fc4_type=1 lpfc0_lun_queue_depth=8"

      When changing the value of the driver parameter "lpfcX_lun_queue_depth", disable NVMe support in the same command. Even if NVMe support is already disabled, specify the parameter that disables NVMe support again whenever the "lpfcX_lun_queue_depth" value is changed later.

    • Execution example 2

      # esxcli system module parameters set -m lpfc -p lpfc_enable_fc4_type=1

      Disable NVMe support.

      # esxcli system module parameters set -a -m lpfc -p lpfc0_lun_queue_depth=8

      Change the value of the driver parameter "lpfcX_lun_queue_depth". Because the "-a" option appends to the existing parameter string, the "lpfcX_lun_queue_depth" value is changed while NVMe support remains disabled.

  • For the qlnativefc Driver

    • Execution example 1

      # esxcli system module parameters set -m qlnativefc -p "ql2xnvmesupport=0 ql2xmaxqdepth=8"

      When changing the value of the driver parameter "ql2xmaxqdepth", disable NVMe support in the same command. Even if NVMe support is already disabled, specify the parameter that disables NVMe support again whenever the "ql2xmaxqdepth" value is changed later.

    • Execution example 2

      # esxcli system module parameters set -m qlnativefc -p ql2xnvmesupport=0

      Disable NVMe support.

      # esxcli system module parameters set -a -m qlnativefc -p ql2xmaxqdepth=8

      Change the value of the driver parameter "ql2xmaxqdepth". Because the "-a" option appends to the existing parameter string, the "ql2xmaxqdepth" value is changed while NVMe support remains disabled.
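  After changing the module parameters, the configured values can be checked as shown in the following sketch (ESXi module parameter changes generally take effect after the host is rebooted).

    # Sketch: list the configured module parameters and check lpfc_enable_fc4_type / ql2xnvmesupport
    esxcli system module parameters list -m lpfc
    esxcli system module parameters list -m qlnativefc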

The ETERNUS AB/HB series does not support VMware ClusteredVMDK.

Target OS
  • VMware vSphere 8.0

  • VMware vSphere 7.0

Phenomenon

In the VMware guest OS, frequent I/O errors occur for the storage system.
If I/O errors such as system errors and file system errors occur frequently, in rare cases a kernel panic may result and the VMware guest OS may stop.

Cause

The disk timeout value of the VMware guest OS may be inappropriate.
These phenomena may occur when I/O to the storage system is suspended for a short period of time (such as during a path failover) in normal operation and the suspension exceeds the disk timeout value of the VMware guest OS.

Workaround

If an inappropriate disk timeout value is specified for the VMware guest OS, change (manually increase) the disk timeout value to avoid or reduce errors. This may prevent the VMware guest OS from stopping due to a kernel panic.

If the Path Selection Policy is set to "Most Recently Used (VMware)", change the "Path Selection Policy" of all storage devices (LUNs) to "Round Robin (VMware)". The following commands can be used to check and change the setting.

Check Command
# esxcli storage nmp satp rule list -s VMW_SATP_ALUA
Setting Command
# esxcli storage nmp satp rule add -s VMW_SATP_ALUA -V FUJITSU -M ETERNUS_AHB -c tpgs_on -P VMW_PSP_RR -e "Fujitsu Eternus arrays with ALUA support"
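The claim rule above applies when devices are claimed; as a sketch, a device that has already been claimed can also be switched individually (the naa identifier below is a placeholder for the actual device identifier).

# Sketch: change the path selection policy of a single device (placeholder naa identifier)
esxcli storage nmp device set --device=naa.600a098000a1b2c30000000000000000 --psp=VMW_PSP_RR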
Target OS

VMware

Phenomenon

In a VMware standard Native Multipathing Plug-in (NMP) environment, when Auto Load Balancing (ALB) is enabled in the ETERNUS AB/HB series, the path is disconnected during an owner change performed by the ALB function.

Cause

This is the operation specification for the combination of NMP and ALB.

Workaround

By applying the VMware Multi-Pathing plug-in for ETERNUS AB/HB series, the path disconnection that occurs during an owner change when the ALB function is operating can be avoided.

Target OS

VMware

Contact our sales representative to obtain support for this software.
Intended Use

When the ETERNUS AB/HB series is added to the configuration, applying the VMware Multi-Pathing plug-in for ETERNUS AB/HB is recommended in order to handle the unresponsive state caused by a slowdown of the VMware ESXi host when intermittent path faults occur.
Use the latest supported version of the module for the VMware ESXi version being used.
The functional overview and functional enhancement details of VMware Multi-Pathing plug-in for ETERNUS AB/HB are described below.

  • Functional Overview

    VMware Multi-Pathing plug-in for ETERNUS AB/HB is a sub plug-in of the VMware standard Native Multipathing Plug-in (NMP) used to configure multipath connections with ETERNUS storage systems.
    It is also used as a Storage Array Type Plug-in (SATP) to perform error handling controls that correspond to ETERNUS storage systems.

  • Details about Functional Enhancements

    In addition to the normal multipath functions, path switching triggered by conditions 1 to 3 below is supported.
    This may reduce phenomena such as the host becoming unresponsive because a path is not switched, so applying the VMware Multi-Pathing plug-in for ETERNUS AB/HB is recommended.

    1. Switching the path when it is unresponsive

      Paths are switched when the I/O is unresponsive.

    2. Enhanced diagnosis of dead paths

      Paths are recovered after confirming that normal response continues for at least 20 minutes during the diagnosis.
      By preventing rapid recovery of the paths with the intermittent faults, the system slowdown time can be reduced.
      In an environment that does not use VMware Multi-Pathing plug-in for ETERNUS AB/HB, paths are recovered when the diagnosis succeeds just a single time.

    3. Blocking unstable paths

      If the path status changes from "online" to "dead" six times within three hours after the first status transition, the path is recognized as unstable and its status is changed to "fataldead". The "fataldead" state is not recovered by a diagnosis, but can be recovered by executing a command manually.

      The command is described in Software Information (readme.txt), which is included in the downloaded module.

      Check the VMware Multi-Pathing plug-in for ETERNUS AB/HB support status.

      This prevents a continued slowdown state even if the enhanced diagnosis in Step 2 cannot handle paths with intermittent faults.
      In an environment that does not use VMware Multi-Pathing plug-in for ETERNUS AB/HB, no functions are available to detect unstable paths.

For the procedure to download this software, contact our sales representative.

Phenomenon

If MPIO (the OS standard multipath driver) is not enabled, installation of Windows Host Utilities 7.1 on Windows Server 2019 may fail.

Workaround

Even for single path configurations, install SANtricity Windows DSM and enable the multipath function.
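As a sketch, assuming a Windows Server edition that provides the Server Manager PowerShell cmdlets, the OS standard MPIO feature can be enabled as follows before installing the DSM.

# Sketch: enable the Windows MPIO feature and reboot before installing SANtricity Windows DSM
Install-WindowsFeature -Name Multipath-IO -Restart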

To set the disk I/O timeout in a virtual machine that is running on Hyper-V, modify and set the following registry key. The disk timeout must be set to a minimum of 180 seconds.

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Disk]
"TimeOutValue"
Setting Example

When specifying 180 seconds, add the following line.

"TimeOutValue"=dword:000000b4
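As a reference sketch, the same value can also be set from an elevated command prompt inside the guest (0xB4 = 180 seconds); a reboot of the guest is typically required for the change to take effect.

reg add "HKLM\SYSTEM\CurrentControlSet\Services\Disk" /v TimeOutValue /t REG_DWORD /d 180 /f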
Target Interfaces

FC (vFC), iSCSI, and SAS

Phenomenon

If the SANtricity OS software upgrade is executed, virtual machines that have virtual disks mapped through Hyper-V may lose I/O access. Note that commands sent to the physical path are usually held by Windows Server (the OS) for 75 seconds.

Workaround

The workaround is to stop the virtual machine before the SANtricity OS software upgrade.
I/O access is restored when the virtual machine is restarted.

Target Environments
  • Windows Server (including Hyper-V and Windows Server used as a guest OS in a virtual environment)

  • ETERNUS AB/HB series

  • FC/iSCSI/SAS/NVMe

  • Direct connection or connection via a switch

  • SAN connections (Localboot/SANboot)

Phenomenon

In a configuration where Windows Server is connected to the ETERNUS AB/HB series, when Windows Server is rebooted after the SANtricity OS software is applied to the storage system, the disks on Windows Server may go offline.

  • Checking the Disk Status

    Use the following procedure to check the disk status on Windows Server.

    1. Click Start and select Management Tool > Computer Management.

    2. Open Disk Management in the left pane to check the disk state.

Cause

Windows Server uses the edition number of the SANtricity OS provided by the storage system as part of the disk ID (instance ID). According to the specifications of Windows Server, if the instance ID of a disk has changed when the OS starts, the OS recognizes the disk as a new disk.
In addition, the disk status is set at system startup. The status that is set varies depending on the path configuration.

  • For single path configurations

    The disk status is determined according to the SAN Policy settings.

  • For multipath configurations

    The disk status (online or offline) before the multipath was configured is taken over regardless of the SAN Policy settings.

The ETERNUS AB/HB series returns the edition number of the SANtricity OS, and this number changes when the SANtricity OS is updated. The disk ID information therefore changes, which causes this phenomenon.

Workaround

The occurrence condition and the workaround differ depending on the path configuration of Windows Server.
To prevent the disk in single-path configurations from going offline after the SANtricity OS software is applied, temporarily change the SAN Policy setting before applying the SANtricity OS software.
For multipath configurations, there is no workaround. The procedure in Recovery Method After Problems Occur must be performed to recover from the offline status.
The workaround for each configuration is described below.

  • For Single Path Configurations

    The offline status can be avoided by changing the SAN Policy setting to "Online All" only when upgrading the SANtricity OS software. If operations are possible with the SAN Policy setting set to "Online All", Step 4 is not required. The following describes the workaround.

    1. Check the current SAN Policy settings.

      1. Execute the "diskpart" command in the command prompt.

      2. The prompt changes as shown below. Enter "san" and press the [Enter] key.

        Execution example

        DISKPART> san
      3. One of the following SAN Policies appears.

        • Offline Shared

        • Offline All

        • Online All

      4. Record the current setting value.

        The recorded value is used in Step 4 to restore the SAN Policy setting.

    2. Change the SAN Policy setting.

      1. Confirm that the prompt is "DISKPART", enter "san policy=onlineall" and press the [Enter] key.

        Execution example

        DISKPART> san policy=onlineall
      2. To apply the changed SAN Policy settings, reboot Windows Server.

      3. Execute the "diskpart" command in the command prompt again and confirm that the SAN Policy is set to "Online All".

        Execution example

        DISKPART> san
        SAN Policy : Online All
    3. Upgrade the SANtricity OS software.

      1. Upgrade the SANtricity OS software in the ETERNUS AB/HB series.

      2. To get the OS to recognize the new instance ID, reboot Windows Server.

    4. Restore the SAN Policy setting.

      1. Restore the SAN Policy setting to the previous value.

        Execution example

        DISKPART> san policy=OfflineShared

        To apply the SAN Policy setting, reboot Windows Server.

      2. Execute the "diskpart" command again in the command prompt and confirm that the previous value is specified for the SAN Policy setting.

        Execution example

        DISKPART> san
        SAN Policy : Offline Shared
  • For Multipath Configurations

    When the multipathing of an online disk is set up during the configuration of the environment, the multipath disk starts up in online status after the SANtricity OS software is upgraded. This is because the disk status immediately prior to configuration of the multipath is inherited.
    When the multipathing of an offline disk is set up, the multipath disk starts up in offline status after the SANtricity OS software is upgraded. There are no measures that can be taken (such as a setting change) to prevent the disk from going offline after the environment is configured. Perform the procedure described in Recovery Method After Problems Occur if the disk status is offline after the SANtricity OS software upgrade.

Recovery Method After Problems Occur
  • When the OS Can Be Started Up

    Manually change the offline disks to online by following the procedure below.

    1. Click Start and select Management Tool > Computer Management.

    2. Select each offline disk in Disk Management, then right-click the selected disk to change the status to online.
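    As a sketch, in environments where the Storage PowerShell cmdlets are available, the offline disks can also be brought online from an elevated PowerShell prompt.

      # Sketch: bring every offline disk online (review the list of disks before applying in production)
      Get-Disk | Where-Object IsOffline | Set-Disk -IsOffline $false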

  • When the OS Cannot Be Started Up

    If the Active Directory database is located in a disk other than the OS area, the OS may not be able to start up because the disk is offline and the OS cannot access the Active Directory database.
    In this case, the disk can be recovered by starting the OS in the Directory Services Restore Mode and changing the disk status to online.
    The procedure for recovering is as follows:

    1. Start the server.

    2. Press the [F8] key on the server start-up screen.

    3. The Advanced Boot Options screen appears.

    4. Select Directory Services Restore Mode.

    5. Log in as Administrator after the OS starts.

    6. Select [Management Tool] > [Computer Management].

    7. Select each offline disk in [Disk Management], then right-click the selected disk to change the status to online.

    8. Restart the OS.

When commands are issued to a disk device that is connected to the server via iSCSI, command responses may be delayed and performance may degrade.
This can be improved by disabling delayed ACK. For information on how to disable it, refer to the Windows Server manual.
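As a reference sketch only, delayed ACK on Windows is commonly suppressed by setting TcpAckFrequency to 1 on the network interface that carries iSCSI traffic; the interface GUID below is a placeholder, and the supported procedure should be confirmed in the Windows Server manual.

# Sketch: set TcpAckFrequency=1 on the iSCSI NIC (replace the interface GUID), then reboot
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{INTERFACE-GUID}" /v TcpAckFrequency /t REG_DWORD /d 1 /f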

In Solaris 11.4, the disk driver used for storage systems connected via FC or iSCSI has changed from ssd to sd. Parameters that were set for the ssd driver in Solaris 11.3 and earlier must be changed to the corresponding sd driver parameters.

If an OS running Solaris 11.3 or earlier is updated to Solaris 11.4, the ssd driver continues to be used, so problems do not occur.

In the following cases, the settings are not effective and problems may occur.

  • The ssd driver parameter is set the same as before

  • The sd driver is not reassigned to the ssd driver

In addition, the storage system may be affected by the sd driver parameters set for the internal disks. As a result, business operations may be suspended.

  • Target servers

    • SPARC Enterprise

    • SPARC Servers

  • Target OS

    • Solaris 11.4

The following is comparison information between OS versions when a storage system is used.

Item: Driver name
  • Solaris 11.3: ssd
  • Solaris 11.4: sd

Item: Physical device name
  • Solaris 11.3: /pci@ ~ /ssd@wXXXXXXXXXXXXXXXX,X
  • Solaris 11.4: /pci@ ~ /disk@wXXXXXXXXXXXXXXXX,X

Item: Instance name
  • Solaris 11.3: ssd1
  • Solaris 11.4: sd1

Item: Parameters
  • Solaris 11.3
    Definition file: /etc/driver/drv/ssd.conf
    Definition parameters: ssd-config-list, throttle-max, retries-timeout
    Definition file: /etc/system
    Definition parameters: ssd_max_throttle, ssd_io_time
  • Solaris 11.4
    Definition file: /etc/driver/drv/sd.conf
    Definition parameters: sd-config-list, throttle-max, retries-timeout, cmd-timeout
    Definition file: /etc/system
    Definition parameters: sd_max_throttle, sd_io_time

Depending on the environment and requirements, the settings may not take effect, and the storage device connected using FC or iSCSI (our storage, non-our storage, or virtual storage) may not work as expected.
An example of the phenomenon is shown below.

For iSCSI, an MPxIO connection is a requirement.
  • Phenomenon 1

    During a path failure of a multipath configuration, the path may take time to switch and I/O may slow down.

    • Environment

      The storage system is connected using FC or iSCSI and the multipath is configured using the Oracle Solaris standard multipath driver (MPxIO).

    • Occurrence Conditions
      Solaris 11.4 is newly installed and the storage system is configured using ssd driver parameters.

      Configuration example to the /etc/system file

      set ssd:ssd_io_time = 20
      set ssd:ssd_max_throttle = 8
  • Phenomenon 2

    If a load that exceeds the processing performance is applied when our storage system is connected, the performance is significantly reduced and I/O slows down.

    Note that for non-our storage systems, in addition to significant performance reduction and I/O slowdown, I/O hang ups also occur.

    • Environment

      The storage system is connected using FC or iSCSI.

    • Occurrence Conditions
      Solaris 11.4 is newly installed and the storage system is configured using ssd driver parameters.

      Configuration example to the /etc/system file

      set ssd:ssd_io_time = 20
      set ssd:ssd_max_throttle = 8
  • Phenomenon 3

    I/O to the storage system times out earlier than expected, and processing takes time.

    • Environment

      The storage system is connected using FC or iSCSI and the internal disks are configured with sd driver parameters. This parameter can be set when using PRIMECLUSTER GD.

      Configuration example to the /etc/system file

      set sd:sd_io_time = 30  (the default for Oracle Solaris is 60 seconds)
    • Occurrence Conditions
      Solaris 11.4 is newly installed and the storage system is configured using ssd driver parameters.

      Configuration example to the /etc/system file

      set ssd:ssd_io_time = 20
      set ssd:ssd_max_throttle = 8
  • How to Prevent Problems from Occurring

    • For Phenomena 1 to 3

      Change the parameters that were set for the ssd driver to the corresponding sd driver parameters as shown below.

      Configuration file
        • Pre-change: /etc/system (common file for the sd and ssd drivers)
        • Post-change: /etc/system (no change, because the file is common)

      Configuration parameters
        • ssd_io_time (pre-change) → sd_io_time (post-change)
        • ssd_max_throttle (pre-change) → sd_max_throttle (post-change)

      Configuration file
        • Pre-change: /etc/driver/drv/ssd.conf
        • Post-change: /etc/driver/drv/sd.conf

      Configuration parameter
        • ssd-config-list (pre-change) → sd-config-list (post-change)

      Note that for Solaris 11.4 and later, the sd driver parameter is a common parameter for internal disks and storage systems. If different parameter settings are required for both the internal disk and the storage system, each can be set in the following files.

      • Internal disk: /etc/system file

      • Storage system: /etc/driver/drv/sd.conf file

        When the parameter set in /etc/system is set in /etc/driver/drv/sd.conf, the correspondence is as follows.

        Configuration file
          • Pre-change: /etc/system
          • Post-change: /etc/driver/drv/sd.conf

        Configuration parameters
          • sd_io_time (pre-change) → cmd-timeout in sd-config-list (post-change)
          • sd_max_throttle (pre-change) → throttle-max in sd-config-list (post-change)

        For details of the sd-config-list parameter, refer to the following Oracle website.
        https://docs.oracle.com/cd/E53394_01/html/E54792/
        Reference: "Appendix C Tuning Disk Target Driver Properties"
  • Recovery Method After Problems Occur

    If the system hangs up, follow the model-specific procedure to force a panic and restart the system. After that, perform the steps in How to Prevent Problems from Occurring. The forced panic procedure is included in the operation for collecting a crash dump during a hang-up. If an investigation is not required, the collected crash dump can be deleted.

  • Workaround

    Use the following command to change the conn-login-max value in the iSCSI initiator to "60".

    # iscsiadm modify initiator-node -T conn-login-max=60

Oracle Solaris does not support Automatic Load Balancing. Therefore, the Automatic Load Balancing function does not work even though it is enabled by default. However, when using a multipath diagnostic program, make sure to disable Automatic Load Balancing.

Phenomenon

Even if a volume redistribution is performed from SANtricity System Manager, the preferred controller owner may not return for some volumes.

  • Impact on the Business

    In Recovery Guru of SANtricity System Manager, "Volume Not On Preferred Path" is displayed.
    An equivalent I/O performance prior to the occurrence of the phenomenon may not be obtained.

  • Details of the Phenomenon

    If the Environment and Occurrence Conditions described later are satisfied, the Access State on the preferred path side does not return to "active optimized" using the Solaris standard Multipath (MPxIO).
    If this phenomenon occurs, "Volume Not On Preferred Path" is displayed in Recovery Guru of SANtricity System Manager.

    The Access State can be confirmed with the following command.
    Example:

# mpathadm show lu /dev/rdsk/c0t6D039EA00018A6CB00001B17621FF20Bd0s0
Logical Unit: /dev/rdsk/c0t6D039EA00018A6CB00001B17621FF20Bd0s2
    mpath-support: libmpscsi_vhci.so
    Vendor: FUJITSU
    Product: ETERNUS_AHB
    Revision: 0871
        (omitted)
    Target Port Groups:
        ID: 1
            Explicit Failover: yes
            Access State: active not optimized
            Target Ports:
                Name: 2015d039ea18a991
                Relative ID: 32769
        ID: 0
            Explicit Failover: yes
            Access State: active optimized
            Target Ports:
                Name: 2014d039ea18a991
                Relative ID: 1
  • Environment

    Occurs when all the following conditions are satisfied.

    1. An ETERNUS AB/HB series is connected.

      To check whether the server is connected to the ETERNUS AB/HB series, use the following command. If it is connected to the ETERNUS AB/HB series, "ETERNUS_AHB" is displayed in "Product:".

       # mpathadm show lu /dev/rdsk/c0t6D039EA00018A6CB00001B17621FF20Bd0s0
       Logical Unit: /dev/rdsk/c0t6D039EA00018A6CB00001B17621FF20Bd0s2
           mpath-support: libmpscsi_vhci.so
           Vendor: FUJITSU
           Product: ETERNUS_AHB
    2. The storage system in (1) is connected using the Solaris standard Multipath (MPxIO).

    3. The Multipath Diagnostic Program is being used.

      To check whether the Multipath Diagnostic Program is being used, use the following command. It is in use if "fjsvpdiagAHBX" processes are displayed by grep.

       # ps -ef|grep fjsvpdiagAHBX
       root 27408 27407 0 15:53:50 0:00 /opt/FJSVpdiag/bin/fjsvpdiagAHBX
       root 27407 1 0 15:53:50 0:00 /opt/FJSVpdiag/bin/fjsvpdiagAHBX
  • Occurrence Conditions

    When a volume redistribution is performed from SANtricity System Manager.

  • Cause

    If "Redistribute volumes" is performed from SANtricity System Manager of the ETERRNUS AB/HB series, a sense notification is sent to the server from the ETERNUS AB/HB series seires. In an environment in which the Multipath Diagnostic Program is not running, the MPxIO receives the sense for the I/O it issued and switches the path status. However, if the Multipath Diagnostic Program is running, the sense may be received for the diagnostic I/O that is issued by the program without notifying the sense to the MPxIO and the state cannot return to the Access State.

Workaround
  • How to Prevent Problems from Occurring

    Before the redistribution, stop the diagnostic program.

    Execution example:

    1. Confirm that a redistribution can be performed.

      In SANtricity System Manager, make sure Preferred Owner and Current Owner are different.

      Volume     Preferred Owner   Current Owner
      Volume01   Controller A      Controller B
      Volume02   Controller A      Controller B

      If Preferred Owner and Current Owner differ for a volume, as in the table above, a redistribution can be performed. For information on how to perform the operation in SANtricity System Manager, check the manuals such as the online help.

    2. Stop the Multipath Diagnostic Program.

      1. Execute the following command as the root user.

        # /opt/FJSVpdiag/etc/S99pdiagAHBX stop
      2. Make sure there are no resident processes (/opt/FJSVpdiag/bin/fjsvpdiagAHBX) running.

        Example:

        # ps -ef | grep fjsvpdiagAHBX
    3. Execute "Redistribution" from SANtricity System Manager.

      For information on how to perform the operation in SANtricity System Manager, check the manuals such as the online help.

    4. Confirm the Access State.

      Confirm that the Access State on the preferred path is "active optimized" for all the volumes.

      1. Confirm the Access State with the "mpathadm" command.

        Execution example for mpathadm:

        # mpathadm show lu /dev/rdsk/c0t6D039EA00018A6CB00001B17621FF20Bd0s0
        Logical Unit: /dev/rdsk/c0t6D039EA00018A6CB00001B17621FF20Bd0s2
            mpath-support: libmpscsi_vhci.so
            Vendor: FUJITSU
            Product: ETERNUS_AHB
            Revision: 0871
                (omitted)
            Target Port Groups:
                ID: 1
                Explicit Failover: yes
                Access State: active not optimized (*1)
                Target Ports:
                    Name: 2015d039ea18a991 (*2)
                    Relative ID: 32769
                ID: 0
                Explicit Failover: yes
                Access State: active optimized (*3)
                Target Ports:
                    Name: 2014d039ea18a991 (*4)
                Relative ID: 1

        In the above example, the "active optimized" (*3) port of the logical unit /dev/rdsk/c0t6D039EA00018A6CB00001B17621FF20Bd0s2 is the target port whose ID is 0 and whose Name is 2014d039ea18a991 (*4). Use SANtricity System Manager to confirm that this path is the Preferred Owner and Current Owner.

        *1: Path notation on the controller side that does not have preferred ownership

        *2: World Wide Port Identifier on the controller side that does not have preferred ownership

        *3: Path notation on the controller side that has preferred ownership

        *4: World Wide Port Identifier on the controller side that has preferred ownership

      2. Check which controller side is the active optimized (*5) path.

        Click the following in order in Hardware of SANtricity System Manager.
        Show front of shelf > Controller A or Controller B > View settings > Host interfaces > Show more settings. In the displayed Fibre Channel host ports, scroll right to World Wide Port Identifier and search for the Target Ports: Name (*6).

        *5: The controller side that has preferred ownership

        *6: World Wide Port Identifier on the controller side that has preferred ownership

      3. Open the [Volume Settings] screen of Logical Unit.

        In Storage > Volumes, select the corresponding volume, and click View/Edit Settings to open Volume Settings.

        If the volume name that corresponds to Logical Unit is not known, find it with the following procedure.

        In Storage > Volumes, select the volumes in order from the top, and click View/Edit Settings to open Volume Settings.

        Find the volume that matches both the World-Wide Identifier (WWID) of the Volume Settings screen (excluding ":") and the Logical Unit shown by the mpathadm command (excluding /dev/rdsk/c0t and d0s2).

        If Logical Unit and World-Wide identifier (WWID) are displayed as below, it is the corresponding volume.

        Example:

        Logical Unit: /dev/rdsk/c0t6D039EA00018A6CB00001B17621FF20Bd0s2
        World-Wide Identifier (WWID) : 6D:03:9E:A0:00:18:A6:CB:00:00:1B:17:62:1F:F2:0B
      4. In [Controller ownership] of [Advanced] on the [Volume Settings] screen, confirm that both preferred owner and current owner match the controller in Step 4-b. If they differ, "Redistribute volumes" may have failed.

        However, it may take 10 to 15 minutes before the latest Access State is applied if there is no I/O access. Wait 15 minutes and try again from Step 4 (Confirm the Access State).

    5. Start the Multipath Diagnostic Program.

      1. Execute the following command as the root user.

        # /opt/FJSVpdiag/etc/S99pdiagAHBX
      2. Confirm that the resident processes /opt/FJSVpdiag/bin/fjsvpdiagAHBX are running.

        # ps -ef | grep fjsvpdiagAHBX
        root 10303 10302 0 17:53:52 ? 0:00 /opt/FJSVpdiag/bin/fjsvpdiagAHBX
        root 10302 1 0 17:53:52 ? 0:00 /opt/FJSVpdiag/bin/fjsvpdiagAHBX
  • Recovery Method After Problems Occur

    Refer to How to Prevent Problems from Occurring, and perform the volume relocation again.

For configurations where a server is connected to a single controller only, errors are output (by Recovery Guru). However, only errors are output and there is no effect on the operation of the storage system.

Phenomenon

The following errors are output.

  • Host Redundancy Lost

  • Host Multipath Driver Incorrect

  • Volume Not On Preferred Path

Workaround

To prevent the errors from being output, disable Automatic Load Balancing and disable the host connectivity report.

Workaround

The maximum number of commands that can be processed simultaneously (Queue Depth) is 2,048 per controller. Set this value on the server side so that the maximum value is not exceeded. For the setting method, refer to the server manual.

Phenomenon

Environments where the iSCSI LAN is not configured with a dedicated network have the following effects.

  • Processing delays may occur on both networks due to traffic contention.

  • In terms of security, problems such as SAN data leakage may occur.

Workaround

Configure the iSCSI LAN so that it is on a dedicated network by separating the IP address segment from the business LAN and the administration LAN (such as separating the paths physically or in the VLAN).

If the load is consolidated on one of the controllers that own the volumes used by a server connected to the ETERNUS AB/HB series, ownership may not automatically return to the preferred controller owner.

The following example conditions show when a load consolidation occurs:

  • When servers and switches fail

  • When the connected cables fail or are disconnected

  • When a path is switched from the server

  • When SANtricity OS is upgraded

Target OS

Oracle Solaris

Phenomenon

The following phenomena occur.

The "Volume Not On Preferred Path" message appears in Recovery Guru of SANtricity System Manager.

If there are some volumes whose preferred controller owner has not been returned, an equivalent I/O performance prior to the occurrence of the phenomenon may not be obtained.

  • Impact on the Business

    The data access performance may be reduced.

  • Environment

    Server: SPARC Enterprise, SPARC Servers
    OS: Solaris

  • Occurrence Conditions

    This phenomenon occurs in an Oracle Solaris environment with a multipath configuration when the load is consolidated on the controller that controls the volumes.

  • Cause

    This occurs due to the specification of the ETERNUS AB/HB series when the Oracle Solaris standard multipath driver (MPxIO) is used.

Workaround
  • How to Prevent Problems from Occurring

    There is no workaround.

  • Recovery Method After Problems Occur

    Execute the "Redistribute volumes" function on SANtricity System Manager of the ETERNUS AB/HB series.

    Perform the following procedure.

    1. Access SANtricity System Manager and select Storage > Volumes.

    2. Select More > Redistribute volumes.

    3. In the Redistribute volumes screen, enter redistribute and execute the command.
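    As a sketch, the same redistribution can also be executed from SMcli; the management IP address is a placeholder, and the command should be confirmed in the referenced command manual.

    # Sketch only: placeholder management address
    SMcli 192.168.0.10 -c "reset storageArray volumeDistribution;"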

For the descriptions of each function, how to check the settings, and the details of the procedure, refer to "Redistribute volumes".

When using Oracle Solaris, also check Notes Related to the Volume Relocation of Oracle Solaris.

Notes for PRIMECLUSTER Configurations

The following PRIMECLUSTER functions and configurations are not supported.

  • I/O fencing

  • Mirroring between disk storage systems

  • Optional products for PRIMECLUSTER GD (PRIMECLUSTER GD Snapshot and PRIMECLUSTER GD I/O Monitor Option)

Notes for Redundant Path Configurations

To configure multiple host interfaces that exist in a server with redundant paths, both controllers must be connected and configured with the same number of paths in Host Settings to maintain controller redundancy.
If the controllers used for host access paths are not evenly distributed, the following errors are output.

  • Host Redundancy Lost

  • Host Multipath Driver Incorrect

  • Volume Not On Preferred Path

The ETERNUS AB/HB series does not support the following.

  • Automatic creation of hosts

  • In-band management

Target Storage System

The ETERNUS AB/HB series where 25Gbps/10Gbps iSCSI host interface cards (HICs) are installed

Onboard iSCSI ports are excluded.
Other Target Components
  • 25Gbps/10Gbps iSCSI switches

  • 25Gbps/10Gbps iSCSI converged network adapters (CNAs) or 25Gbps/10Gbps iSCSI network interface cards (NICs)

Phenomenon

The 25Gbps/10Gbps iSCSI host interface card (HIC) in the ETERNUS AB/HB series may not connect to the host via iSCSI connections.

Cause

The 25Gbps/10Gbps iSCSI host interface card (HIC) in the ETERNUS AB/HB series does not support auto negotiation or the Forward Error Correction (FEC) of the auto negotiation.

Workaround
  • Specify the communication speed of the 25Gbps/10Gbps iSCSI host interface card (HIC) as the required fixed value using SANtricity System Manager of the ETERNUS AB/HB series. To do so, perform the following procedure.

    1. Access SANtricity System Manager and select Settings > System.

    2. Select iSCSI settings > Configure iSCSI ports.

    3. Select Controller A or Controller B and click Next.

    4. Select an interface (such as HIC 1, Port 0c) on the host interface card (HIC) and click Next.

    5. From the Configured ethernet port speed drop-down list, select 25Gbps or 10Gbps.

      Confirm that the communication speed of each port is set to the same value. Note that setting different speeds for individual ports is not allowed.

    6. Click Next.

    7. Configure the IPv4 network settings as required and click Next.

    8. Configure the IPv6 network settings as required and click Finish.

  • In the iSCSI switch settings, fix the port speed (25Gbps or 10Gbps) and disable the Forward Error Correction (FEC) mode. The procedure to disable the FEC mode varies depending on the switch.

    Refer to the manuals for the switches to be used.

Target SANtricity OS

SANtricity 11.60.2 and later

Phenomenon

If an arbitrary IP address is assigned to the iSCSI network settings or to the management port of the storage system network, the following error message may be output in SANtricity System Manager and the IP address may not be set.

"The operation cannot complete because of an incorrect parameter in the command sent to the controller.
Please retry the operation. If this message persists, contact your Technical Support Representative. (API 4)"
Cause

SANtricity OS has reserved IP addresses for internal operations. These addresses cannot be assigned as arbitrary IP addresses for the storage system, such as for the iSCSI and management ports.
The following IP addresses cannot be assigned.

  • SANtricity 11.60.2 to 11.70.5

    • IPv4 address

      192.0.0.0/17 (192.0.0.0 to 192.0.127.255)

    • IPv6 address

      fd94:29f8:cf58::/48

  • SANtricity 11.80 and Later

    • IPv4 address

      192.0.0.0/22 (192.0.0.1 to 192.0.3.255)

    • IPv6 address

      fd94:29f8:cf58:1::2/48

Workaround

Assign an IP address other than the IP addresses that cannot be assigned.

Storage System Configuration Change for the Administrator Due to HBA Replacement

For an HBA replacement due to an HBA configuration change or HBA failure, the WWN (hereinafter referred to as the "host port") must be added or deleted in SANtricity System Manager.
In addition, when configuring Fabric, changing of the zone setting in the switch may be required.
Whether or not the configuration change is required depends on the zone setting method.

Advance Preparation

Before performing the work, check the following information.

  • The hostname of the change target HBA

  • The WWN assigned to the host port of the change target HBA

  • The label attached to the host port of the change target HBA

  • The WWN of the new HBA

HBA Replacement

Perform the HBA replacement according to the HBA replacement procedure of the server.
Make sure to refer to the server manual to perform the replacement work.

Host Settings

After the HBA replacement, perform the setting changes in the host settings.

  • Delete the host port to be replaced on the target host.

  • Add a new HBA host port to the target host.

If zoning is already set up between the server and the storage system, the new HBA host port is displayed and can be selected when adding a host port.

Connection Confirmation of the Server and Storage System

Make sure the disks are recognized by the OS in the same way as before the replacement.

  • For RHEL

    Check the disks with multipath -ll.

  • For VMware

    Check the device from the storage adapter of vCenter.

  • For Windows

    Check the disks from Disk Management.

Notes When a Marvell (QLogic) FC HBA Is Used

Phenomenon

In a SAN Boot configuration, if the access path to the LUN where the OS is installed is only on the controller side that is not displayed in [Current Owner], the OS may fail to boot.

  • Example of a system state where this phenomenon has occurred

    If the controller (Ctrl) that owns OS LUN#0 is A, the access path between HBA 0 and Ctrl A is disconnected for some reason.

    [Figure: the path between HBA 0 and Ctrl A is down]

Target

The SAN Boot configuration when the PY-FC432/431, PY-FC412/411, PY-FC342/341, or PY-FC322/321 is used.

Cause

Because the UEFI of PRIMERGY does not recognize the LUN on paths to the controller that is not displayed in Current Owner, PRIMERGY cannot access the LUN through those paths.
However, because the OS recognizes the LUN through both the Current Owner and non-Current Owner controller paths, a redundant configuration can be created without any problems once the OS has started.

Workaround

When booting the server, make sure the access path of the LUN where the OS is installed is connected to a controller displayed in Current Owner.
Current Owner can be changed from SANtricity System Manager of the ETERNUS AB/HB series.

  • System state where this phenomenon does not occur: both access paths are normal

    [Figure: both access paths are normal]

  • If the controller (Ctrl) that owns OS LUN#0 is A, the path between HBA 0 and Ctrl A is normal

    [Figure: the path between HBA 1 and Ctrl B is down]

Example of a Possible Occurrence

In a multipath configuration that includes both controllers, this phenomenon occurs if the path on the Current Owner controller side for the OS LUN is disconnected, because only the path on the non-Current Owner side can then access the LUN.
Therefore, this event will not occur during startup, or when the system, such as the path, is normal.

Impact on the Systems that Are Operating

No impact on the system after startup has been confirmed.

On this page


Points to Note with SANtricity

Notes Related to Linux iSCSI Connections

Notes Related to Access Volumes

Notes Related to iSCSI Connections of VMware ESXi Servers

Notes Related to the Jumbo Frame Setting

Notes Related to the Ethernet Switch Setting

Notes Related to FC Connections of VMware ESXi Server

Notes Related to VMware ClusteredVMDK

Notes Related to FC Connections of VMware ESXi Server

Notes Related to Path Selection Policy on VMware

Notes Related to Enabling Auto Load Balancing (ALB) on VMware

Notes Related to Applying VMware Multi-Pathing plug-in for ETERNUS AB/HB

Notes Related to the Installation of Windows Host Utilities 7.1 to Windows Server 2019

Notes Related to Disk Timeouts on Virtual Machines Running on Hyper-V

Notes Related to Upgrading the SANtricity OS Software in Hyper-V Environments for Windows Server 2016 and Later

Notes Related to Connections with a Windows Server

Notes Related to Windows Server iSCSI Connections (Including Hyper-V Environments)

Notes Related to the Disk Driver for Oracle Solaris

Notes Related to iSCSI Connections of Oracle Solaris

Notes Related to Automatic Load Balancing of Oracle Solaris

Notes Related to the Volume Relocation of Oracle Solaris

Notes Related to Configurations Where a Server Is Connected to a Single Controller Only

Notes Related to the Maximum Number of Commands That Can Be Processed Simultaneously (Queue Depth)

Notes Related to LAN Environments

Notes Related to Server Connections for Oracle Solaris

Notes for PRIMECLUSTER Configurations

Notes for Redundant Path Configurations

Notes Related to Hosts and Host Clusters

Notes Related to iSCSI 25Gbps/10Gbps Connections

Notes Related to the Configuration of IP Addresses

Storage System Configuration Change for the Administrator Due to HBA Replacement

Notes When a Marvell (QLogic) FC HBA Is Used
