Create RAID Group

Overview

This function creates a RAID group.

A RAID group is a group of drives that configure a RAID level.

Features and Required Number of Drives for Each RAID Level

The following table shows the features and the required number of drives for each RAID level.

Actual RAID group configurations (or the number of available drives) vary depending on the maximum number of drives that can be installed in the storage system.

RAID level (*1)

Feature

Required number of drives (*2)

High Performance (RAID1+0)

The high I/O performance of RAID0 (striping) is combined with the reliability of RAID1 (mirroring).

2D+2M - 16D+16M

High Capacity (RAID5)

Data that is divided into blocks, together with parity information created from that data, is distributed across multiple drives to provide data redundancy.

2D+1P - 15D+1P

High Reliability (RAID6)

The use of double parity allows the full recovery of lost data even in the event that two of the drives fail.

3D+2P - 14D+2P

High Reliability (RAID6-FR)

A single RAID group is configured with multiple RAID redundant sets and a reserved area that is equivalent to a hot spare. Distributing data across the RAID group allows high-speed rebuilding when the first drive fails. Recovery is possible even if two drives fail; however, after the second drive fails, the rebuild runs at normal speed. Several restrictions apply to the RAID groups and volumes of this RAID level. Refer to "Restrictions for RAID6-FR" for details.

(3D+2P)x2+1HS

(4D+2P)x2+1HS

(6D+2P)x2+1HS

(9D+2P)x2+1HS

(12D+2P)x2+1HS

(5D+2P)x4+1HS

(13D+2P)x2+1HS

(8D+2P)x3+1HS

(4D+2P)x5+1HS

(3D+2P)x6+1HS

Reliability (RAID5+0)

Multiple RAID5 volumes are striped (RAID0). For large capacity configurations, using RAID5+0 instead of RAID5 results in enhanced performance, improved reliability, and shorter rebuilding times. Several restrictions apply to the RAID groups of this RAID level. Refer to "Restrictions for RAID5+0" for details.

(2D+1P)x2 - (15D+1P)x2

Mirroring (RAID1)

Data is mirrored to two drives (mirroring).

If one drive fails, the other drive continues operation.

1D+1M

Striping (RAID0)

Data is split in units of blocks and stored across multiple drives (striping).

RAID0 has no data redundancy.

2D - 16D

*1  :  The RAID levels in the table above, in setting value fields, and in display content fields are written as they actually appear in the Web GUI. Other fields and descriptions in this manual use "RAIDxx" as an abbreviation for the RAID levels.
*2  :  D: Data drives, M: Mirror drives, P: Parity drives, HS: Hot Spares
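
As an aid to reading the table, the drive ranges translate into simple drive-count rules. The following is a minimal illustrative sketch (Python; not part of the Web GUI, and the function and table names are our own) that checks whether a number of drives can configure one of the basic RAID levels. RAID6-FR and RAID5+0 use the fixed configurations listed above and are omitted.

    # Illustrative only: valid drive-count ranges taken from the table above.
    # RAID6-FR and RAID5+0 use fixed configurations and are not covered here.
    RAID_DRIVE_RULES = {
        "RAID1+0": lambda n: n % 2 == 0 and 4 <= n <= 32,  # 2D+2M - 16D+16M
        "RAID5":   lambda n: 3 <= n <= 16,                 # 2D+1P - 15D+1P
        "RAID6":   lambda n: 5 <= n <= 16,                 # 3D+2P - 14D+2P
        "RAID1":   lambda n: n == 2,                       # 1D+1M
        "RAID0":   lambda n: 2 <= n <= 16,                 # 2D - 16D
    }

    def drive_count_is_valid(raid_level, num_drives):
        """Return True if num_drives can configure the given RAID level."""
        rule = RAID_DRIVE_RULES.get(raid_level)
        return bool(rule and rule(num_drives))

    print(drive_count_is_valid("RAID5", 4))   # True  (3D+1P)
    print(drive_count_is_valid("RAID6", 4))   # False (minimum is 3D+2P = 5 drives)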

Restrictions for RAID6-FR

The following restrictions apply to RAID groups (hereinafter referred to as "Fast Recovery RAID groups") that were created with RAID6-FR and volumes created in those RAID groups.

  • The following operations that use LDE cannot be performed for Fast Recovery RAID groups.
    • Changing the RAID level to "RAID6-FR"

    • Changing the RAID level from "RAID6-FR"

    • Expanding the RAID group capacity by adding drives

  • "Standard (including concatenation volumes by means of LUN Concatenation)" and ODX Buffer volumes can be created in the Fast Recovery RAID group.

  • Encrypted volumes can be created in the Fast Recovery RAID groups. However, encrypting volumes that exist in Fast Recovery RAID groups is not allowed.

  • The Stripe Depth for a Fast Recovery RAID group is fixed at "64 KB".

  • The Copybackless function is not performed when the first drive fails in a Fast Recovery RAID group. A high-speed rebuild is performed in the hot spare area within the RAID group, and when the failed drive is replaced with a normal drive, a copyback is performed.

Restrictions for RAID5+0

The following restrictions apply to RAID groups that are created with "RAID5+0".

  • The following operations that use LDE cannot be performed.
    • Changing the RAID level to "RAID5+0"

    • Changing the RAID level from "RAID5+0"

    • Expanding the RAID group capacity by adding drives

  • The Stripe Depth for a new RAID group is fixed at "64 KB".

The Maximum Number of RAID Groups for Each Model

The maximum number of RAID groups varies depending on each model. The following table shows the maximum number of RAID groups for each model.

Model               Maximum number of RAID groups
ETERNUS DX60 S5     48
ETERNUS DX100 S5    72
ETERNUS DX200 S5    132
ETERNUS DX500 S5    288
ETERNUS DX600 S5    528
ETERNUS DX900 S5    1152
ETERNUS DX8100 S4   24
ETERNUS DX8900 S4   3456
ETERNUS AF150 S3    12
ETERNUS AF250 S3    132
ETERNUS AF650 S3    528

Drive Combinations That Can Configure a RAID Group

The following table shows the drive combinations that can configure a RAID group.

Drive type     Online   Nearline   SSD   Online SED   Nearline SED   SSD SED
Online OK OK (but not recommended) NG NG NG NG
Nearline OK (but not recommended) OK NG NG NG NG
SSD NG NG OK NG NG NG
Online SED NG NG NG OK OK (but not recommended) NG
Nearline SED NG NG NG OK (but not recommended) OK NG
SSD SED NG NG NG NG NG OK

OK: RAID groups can be created
OK (but not recommended): RAID groups can be created, but this is not a recommended configuration
NG: RAID groups cannot be created
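
The matrix can also be read as a simple lookup. A minimal sketch (Python; illustrative only, the names are our own and not part of the Web GUI):

    # Illustrative only: the drive combination matrix above as a lookup.
    NOT_RECOMMENDED_PAIRS = [
        {"Online", "Nearline"},          # can be combined, but not recommended
        {"Online SED", "Nearline SED"},  # can be combined, but not recommended
    ]

    def can_combine(a, b):
        """Return "OK", "OK (but not recommended)", or "NG" per the matrix above."""
        if a == b:
            return "OK"
        if {a, b} in NOT_RECOMMENDED_PAIRS:
            return "OK (but not recommended)"
        return "NG"

    print(can_combine("Online", "Nearline"))   # OK (but not recommended)
    print(can_combine("SSD", "Online"))        # NG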

Caution
  • For the ETERNUS DX60 S5 and the ETERNUS AF150 S3, SEDs cannot be used for configuring a RAID group.

  • RAID0 has no data redundancy. The use of RAID1+0, RAID5, RAID6, RAID6-FR, RAID5+0, or RAID1 is recommended.

User Privileges

Availability of Executions in the Default Role

Default role    Availability of executions
Monitor         -
Admin           ✓
StorageAdmin    ✓
AccountAdmin    -
SecurityAdmin   -
Maintainer      ✓

Refer to "User Roles and Policies" for details on the policies and roles.

Settings

In this screen, create a RAID group.

There are two methods to create RAID groups: automatic drive selection and manual drive selection.

New RAID Group

Input the name of the RAID group that is to be newly created and select the create mode.

Item Description Setting values

Name

Input the name of the RAID group that is to be created.

When creating a single RAID group, an existing RAID group name cannot be used.

Up to 16 alphanumeric characters,

symbols (except "," (comma) and "?"),

and spaces

Create Mode

Select the create mode of the RAID group.

  • Automatic

    Drives are selected automatically to create a RAID group.

  • Manual

    Select the drives manually to create a RAID group.

Automatic

Manual

Automatic Setting

Item Description Setting values

Number of RAID Groups

Input the number of RAID groups that are to be created.

When creating multiple RAID groups at a time, the new RAID groups are named automatically. Refer to "Naming Conventions for Creating RAID Groups" for details.

For the ETERNUS DX60 S5:

1 - 48

For the ETERNUS DX100 S5:

1 - 72

For the ETERNUS DX200 S5:

1 - 132

For the ETERNUS DX500 S5:

1 - 288

For the ETERNUS DX600 S5:

1 - 528

For the ETERNUS DX900 S5:

1 - 1152

For the ETERNUS DX8100 S4:

1 - 24

For the ETERNUS DX8900 S4:

1 - 3456

For the ETERNUS AF150 S3:

1 - 12

For the ETERNUS AF250 S3:

1 - 132

For the ETERNUS AF650 S3:

1 - 528

0 (Default)

Drive Type

Select the type of drive that configures a RAID group from the list box.

Only the drives that are installed in the storage system are displayed.

Caution
  • If drives that satisfy all of the following conditions are installed in the storage system, select the drives manually.
    • The drive types are the same

    • The drive capacities are the same

    • The sector format (AF-compliant/non-AF-compliant) is different

  • When using SSDs, the SSD types (SSD-H/SSD-M/SSD-L) cannot be specified. SSDs that are the same type and have the necessary capacity are selected. If SSDs with the same type are not available, RAID groups cannot be created. Note that if multiple RAID groups are created at once, different SSD types may be used for each RAID group. SSD types have no order of priority.

    When using SSD SEDs, the SSD types (SSD-H SED/SSD-M SED/SSD-L SED) cannot be specified. If "SSD SED" is selected for the drive type, drives are operated in the same way as SSDs.

Online

Nearline

SSD

Online SED

Nearline SED

SSD SED

RAID Level

Select the level of RAID group that is to be created.

Caution
  • Several restrictions apply to RAID groups and volumes in the "RAID6-FR" type RAID group. Refer to "Restrictions for RAID6-FR" for details.

  • Several restrictions apply to RAID groups that are created with "RAID5+0". Refer to "Restrictions for RAID5+0" for details.

  • If "RAID1+0", "RAID5", or "RAID5+0" is selected for the RAID level, RAID groups cannot be created with drives that are 6 TB or larger (except SSDs and SSD SEDs).

High Performance (RAID1+0)

High Capacity (RAID5)

High Reliability (RAID6)

High Reliability (RAID6-FR)

Reliability (RAID5+0)

Mirroring (RAID1)

Striping (RAID0)

Select Drives

Select the requirements that are given priority when creating a Fast Recovery RAID group with automatic drive configuration.

This item is available only when the RAID level is "RAID6-FR".

Minimize number of using drives

Prioritize rebuild rate

Minimum Capacity per RAID Group

Input the RAID group capacity that is to be created and select the units of capacity.

A RAID group is automatically created with a capacity of the entered value or higher.

Numeric characters

Unit: TB/GB/MB

Naming Conventions for Creating RAID Groups

  • When creating multiple RAID groups at a time, each RAID group is automatically named with the specified "Name" followed by a suffix number "x" (serial numbers starting from "0").

    (Example) Specified RAID group name: RAIDGroup_aaaa (14 characters) → Names for created RAID groups: RAIDGroup_aaaa0, RAIDGroup_aaaa1, etc.

  • When a RAID group name including the suffix number "x" would exceed 16 characters, characters are deleted from the end of "Name" and the suffix "~x" is appended so that the resulting name contains no more than 16 characters.

    (Example) Specified RAID group name: RAIDGroup_aaaabb (16 characters) → Names for created RAID groups: RAIDGroup_aaaa~0, RAIDGroup_aaaa~1, etc.

  • When a RAID group name including the suffix number already exists, the suffix number is increased by one (+1) until no RAID group names overlap.
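
These conventions can be expressed as a short sketch (Python; illustrative only, the function name is our own):

    def auto_name(base, index, existing):
        """Illustrative sketch of the naming conventions described above."""
        def build(i):
            name = base + str(i)
            if len(name) > 16:
                # Trim the end of "Name" and insert "~" so the result fits in 16 characters.
                keep = 16 - len(str(i)) - 1
                name = base[:keep] + "~" + str(i)
            return name
        name = build(index)
        while name in existing:   # suffix number is increased by one (+1)
            index += 1
            name = build(index)
        return name

    # Examples from the text:
    print(auto_name("RAIDGroup_aaaa", 0, set()))     # RAIDGroup_aaaa0
    print(auto_name("RAIDGroup_aaaabb", 0, set()))   # RAIDGroup_aaaa~0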

Manual Setting

Item Description Setting values

RAID Level

Select the level of RAID group that is to be created.

Caution
  • Several restrictions apply to RAID groups and volumes in the "RAID6-FR" type RAID group. Refer to "Restrictions for RAID6-FR" for details.

  • Several restrictions apply to RAID groups that are created with "RAID5+0". Refer to "Restrictions for RAID5+0" for details.

High Performance (RAID1+0)

High Capacity (RAID5)

High Reliability (RAID6)

High Reliability (RAID6-FR)

Reliability (RAID5+0)

Mirroring (RAID1)

Striping (RAID0)

Controlling CM

Specify the Controlling CM of the RAID group to be created.

"Automatic" and the normal CM number ("CE#x CM#y" or "CM#y") that is installed are displayed as options.

Select "Automatic" for normal operations. When "Automatic" is selected, the Controlling CM that is to be allocated is determined by the RAID group number. Refer to "Automatic Controlling CM Setting" for details.

For the ETERNUS DX900 S5 or the ETERNUS DX8900 S4

Automatic

CE#x CM#y

For the other models

Automatic

CM#y

x: CE number

y: CM number

Fast Recovery Configuration

Select the drive configuration for a Fast Recovery RAID group.

Select the drive configuration according to your environment, considering the number of drives in the configuration, the capacity efficiency, and the rebuilding speed. Refer to "Drive Configuration for Fast Recovery RAID Groups" for details. The more redundant sets there are, the faster the rebuilding becomes.

This item is blank when the RAID level is not "RAID6-FR".

For the ETERNUS DX60 S5 or the ETERNUS AF150 S3

(3D+2P)x2+1HS

(4D+2P)x2+1HS

(6D+2P)x2+1HS

(9D+2P)x2+1HS

Blank

For the other models

(3D+2P)x2+1HS

(4D+2P)x2+1HS

(6D+2P)x2+1HS

(9D+2P)x2+1HS

(12D+2P)x2+1HS

(5D+2P)x4+1HS

(13D+2P)x2+1HS

(8D+2P)x3+1HS

(4D+2P)x5+1HS

(3D+2P)x6+1HS

Blank

D: Data drives

P: Parity drives

HS: Hot Spares

Minimum Capacity per RAID Group

The capacity of the RAID group that is to be created is displayed.

The "Minimum Capacity per RAID Group" is automatically calculated from the selected RAID level and drives.

 

Drive Configuration for Fast Recovery RAID Groups

The drive layout in the storage system is the same as "RAID6". When using automatic configuration, the drive configuration that satisfies the specified capacity is determined according to the following order.

No. of drives (per RAID group)  Redundant sets + HS (*1)  Capacity efficiency (*2) (%)  Rebuilding speed (*3) (rate)  Number of data drives  Selection order when "Minimize number of using drives" is selected  Selection order when "Prioritize rebuild rate" is selected
11 (3D+2P)x2+1HS 54.5 2.20 6 1 5
13 (4D+2P)x2+1HS 61.5 2.17 8 2 6
17 (6D+2P)x2+1HS 70.6 2.13 12 3 7
23 (9D+2P)x2+1HS 78.3 2.09 18 4 8
29 (12D+2P)x2+1HS 82.8 2.07 24 5 9
31 (13D+2P)x2+1HS 83.9 2.06 26 6 10
31 (3D+2P)x6+1HS 58.1 6.20 18 Not selected 1
31 (4D+2P)x5+1HS 64.5 5.17 20 Not selected 2
29 (5D+2P)x4+1HS 70.0 4.14 20 Not selected 3
31 (8D+2P)x3+1HS 77.4 3.10 24 Not selected 4
*1  :  Fast Recovery RAID group configurations are described as "Redundant sets + HS", where a redundant set is "Number of data drives (D) + Number of parity drives (P)":

     RAID6 ((Number of data drives (D) + Number of parity drives (P)) × Number of redundant sets + Number of hot spares (HS))

     (Example) "RAID6 ((3D+2P)x2+1HS)" is described as "(3D+2P)x2+1HS".

*2  :  The ratio of the user capacity to physical drive capacity.
*3  :  Rate when the rebuilding speed for the basic "RAID6 (D+P)" configuration is "1". The rate varies depending on the workload of the storage system and system environment.
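
For reference, the capacity efficiency column follows directly from the drive counts, assuming drives of equal capacity. A minimal worked sketch (Python; illustrative only, names are our own):

    def capacity_efficiency(d, p, sets, hs=1):
        """User capacity / physical drive capacity, as a percentage (*2)."""
        data_drives = d * sets                  # drives that hold user data
        total_drives = (d + p) * sets + hs      # including parity drives and hot spare area
        return 100.0 * data_drives / total_drives

    print(round(capacity_efficiency(3, 2, 2), 1))   # (3D+2P)x2+1HS -> 54.5
    print(round(capacity_efficiency(8, 2, 3), 1))   # (8D+2P)x3+1HS -> 77.4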

Advanced Setting

Perform the advanced settings for RAID groups.

Item Description Setting values

Stripe Depth

Stripe Depth should be selected only when advanced tuning needs to be performed for each RAID group. It is not necessary to change the default value for normal use.

The setting is not available when the RAID level is "RAID1". Available Stripe Depth value varies depending on the RAID level. Refer to "Available Stripe Depth Value" for details.

Note
  • Specifying a larger value for the Stripe Depth can reduce the number of drives that are accessed. For RAID1+0, reducing the number of commands that are issued to the drives improves the access performance of the specified RAID group. For RAID5, however, specifying a larger value for the Stripe Depth might decrease the sequential write performance. In addition, several restrictions apply to a RAID group whose Stripe Depth has been changed and to volumes created in such a RAID group. Refer to "Restrictions for Stripe Depth Modification" for details. A sketch after the value list below illustrates the effect.

64 KB (Default)

128 KB

256 KB

512 KB

1024 KB
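
To illustrate the note above, the following sketch (Python; illustrative only, assuming a stripe-aligned sequential access) shows how a larger Stripe Depth reduces the number of drives that one I/O touches:

    import math

    def drives_touched(io_size_kb, stripe_depth_kb, data_drives):
        """Number of data drives accessed by one stripe-aligned sequential I/O."""
        return min(math.ceil(io_size_kb / stripe_depth_kb), data_drives)

    # A 512 KB I/O against a RAID1+0 group with 4 data drives:
    print(drives_touched(512, 64, 4))    # 4 drives at the 64 KB default
    print(drives_touched(512, 512, 4))   # 1 drive with a 512 KB Stripe Depth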

Available Stripe Depth Value

The Stripe Depth values available for each RAID level are as follows:

RAID level Available Stripe Depth value
RAID1 -
RAID1+0, RAID0 64 KB, 128 KB, 256 KB, 512 KB, 1024 KB
RAID5 (2+1) - RAID5 (4+1) 64 KB, 128 KB, 256 KB, 512 KB
RAID5 (5+1) - RAID5 (8+1) 64 KB, 128 KB, 256 KB
RAID5 (9+1) - RAID5 (15+1) 64 KB, 128 KB
RAID5+0 64 KB
RAID6 64 KB
RAID6-FR 64 KB

Restrictions for Stripe Depth Modification

Note that the following restrictions apply to a RAID group whose Stripe Depth has been changed and volumes created for such RAID group.

  • The Stripe Depth of RAID groups that have already been created cannot be changed.

  • When selecting drives automatically to create a RAID group, the Stripe Depth cannot be changed.

  • The capacity cannot be expanded if the Stripe Depth of the RAID group has been changed (LDE is not available).

  • Encryption cannot be performed for volumes that are created in a RAID group whose Stripe Depth has been changed.

Drive Selection

Drives can be selected from the list or the installation image. To switch between the list and the installation image, click the tab.

Requirements for selecting drives
  • The drive requirements for creating RAID groups are listed below. (A summary sketch follows this list.)
    • The drive status is "Present"

    • The drives are not registered in any RAID group, TPP, FTRP, REC Disk Buffer, or Extreme Cache Pool

    • The drives are not registered as hot spares

    • The drive type (Online/Nearline/SSD/Online SED/Nearline SED/SSD SED) must be the same

      (Although "Online" type drives and "Nearline" type drives can be used in the same RAID group, using only "Online" type drives or using only "Nearline" type drives is recommended. Also, "Online SED" type drives and "Nearline SED" type drives can be used in the same RAID group, but using only "Online SED" type drives or using only "Nearline SED" type drives is recommended. This is because the available capacity and the access performance may be reduced when these drives are used in the same RAID group.)

    • If "RAID1+0", "RAID5", or "RAID5+0" is selected for the RAID level, drives that are 6 TB or larger (except SSDs and SSD SEDs) cannot be specified

  • Drive recommendations for creating RAID groups are listed below.
    • Select drives that are the same size and the same speed. If drives of different capacities exist in a RAID group, the smallest capacity becomes the standard, and all other drives are regarded as having the same capacity as the smallest drive. In this case, the remaining drive space is not used. In addition, if drives of different speeds exist in a RAID group, the access performance of the RAID group is reduced by the slower drives.

    • Select the same sector format of drives (AF-compliant/non-AF-compliant).

    • If the host connection environment does not support Advanced Format (AF), select non-AF-compliant drives (*1). If AF-compliant drives (*2) are selected, a data format conversion occurs and the drive access performance is reduced. When the host to be connected supports AF, both AF-compliant and non-AF-compliant drives can be selected.

      *1  :  Drives (such as 2.5" Online and 2.5" Nearline) where "AF" is not displayed for the type.
      *2  :  Drives (such as 2.5" Online AF and 2.5" Nearline AF) where "AF" is displayed for the type.
    • When "RAID1+0" or "RAID1" is selected for the RAID level, allocate the drives (mirroring pair drives) by dividing them into two or more connection lines (for the ETERNUS DX500 S5/DX600 S5/DX900 S5 and the ETERNUS AF650 S3).

    • When "RAID5", "RAID6", or "RAID6-FR" is selected for the RAID level, allocate the drives (multiple drives configuring a striping) by dividing them into two or more connection lines (for the ETERNUS DX500 S5/DX600 S5/DX900 S5 and the ETERNUS AF650 S3).

    • If "RAID1" is selected for the RAID level, using drives other than SSD is recommended.

  • There are conditions for the ETERNUS DX8100 S4/DX8900 S4 drive layout. Refer to "Conditions for the ETERNUS DX8100 S4/DX8900 S4 Drive Layout" for details. Note that these conditions are not applied to other models.
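
The requirements above can be summarized as a filter over the installed drives. A minimal sketch (Python; illustrative only, the record fields are hypothetical and not a product API):

    # Illustrative only: field names are hypothetical, not a product API.
    def eligible(drive, drive_type, raid_level):
        """Apply the drive requirements listed above to one drive record."""
        if drive["status"] != "Present":
            return False
        if drive["in_use"]:   # registered in a RAID group, pool, buffer, or as a hot spare
            return False
        # Simplified to the recommended same-type rule; "Online"/"Nearline"
        # (and their SED counterparts) may be mixed, but this is not recommended.
        if drive["type"] != drive_type:
            return False
        # Drives of 6 TB or larger (except SSDs and SSD SEDs) cannot be used
        # with RAID1+0, RAID5, or RAID5+0.
        if (raid_level in ("RAID1+0", "RAID5", "RAID5+0")
                and drive["capacity_tb"] >= 6
                and drive["type"] not in ("SSD", "SSD SED")):
            return False
        return True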

[Tabular] Tab

Click the [Tabular] tab to select drives from the list. Only unused drives are displayed on the list.

There are conditions for the ETERNUS DX8100 S4/DX8900 S4 drive layout. Refer to "Conditions for the ETERNUS DX8100 S4/DX8900 S4 Drive Layout" for details. Note that these conditions are not applied to other models.

Item Description

Checkbox to select drives

Select the checkbox for the drive that is to be used.

Enclosure

The enclosure where the drive is installed is displayed.

   CE: Controller Enclosure (2.5" and 3.5")

   DE: Drive Enclosure (2.5", 3.5", and 3.5" high density DEs)

CE

CE#x

DE#yy

x: CE number

yy: DE number

Slot No.

The slot number of the enclosure where the drive is installed is displayed.

2.5" CE/DE: 0 - 23

3.5" CE/DE: 0 - 11

3.5" high density DE: 0 - 59

Type

The drive type displayed for this item is a combination of the following.

  • Drive size
    • For 2.5-inch drives: 2.5"

    • For 3.5-inch drives: 3.5"

  • Drive type
    • For SAS disks: Online

    • For Nearline SAS disks: Nearline

    • For SSDs, the following items are displayed depending on the SSD type.
      • For SSD-Hs (12 Gbit/s): SSD-H (*1)

      • For SSD-Ms (12 Gbit/s): SSD-M (*1)

      • For SSD-Ls (12 Gbit/s): SSD-L (*1)

Note that "SED" is also displayed for self encrypting drives and "AF" is also displayed for Advanced Format compliant drives.

*1  :  The displayed item varies depending on the interface speed (bandwidth) or the capacity of the reserved space. Unless otherwise specified, "SSD-H", "SSD-M", and "SSD-L" are collectively referred to as "SSD". In addition, there may be cases when "SSD SED" is used as the collective term for self encrypting SSD-Hs, SSD-Ms, and SSD-Ls.

Capacity

The capacity of the drive is displayed.

Caution
  • The displayed drive capacity may differ from the product's actual capacity. For example, the drive capacity of a "1.92 TB SSD" is displayed as "2.00 TB" and the capacity of an "18 TB Nearline SAS disk" is displayed as "17.9 TB".

Speed

The drive speed is displayed.

For SSD or SSD SED, a "-" (hyphen) is displayed.

15000 rpm

10000 rpm

7200 rpm

[Graphic] Tab

Click the [Graphic] tab to select drives from the drive installation image. The installation images of all the drives installed in the storage system are displayed. Checkboxes are displayed for unused drives.

There are conditions for the ETERNUS DX8100 S4/DX8900 S4 drive layout. Refer to "Conditions for the ETERNUS DX8100 S4/DX8900 S4 Drive Layout" for details. Note that these conditions are not applied to other models.

Item Description Setting values

DE selection list box

Select the DE group.

Options are displayed in the list box when at least one CE or DE in the DE group is installed in the storage system.

Refer to "DE selection list box" for details on the options and DE groups for each model.

DE#0x

DE#1x

DE#2x

DE#3x

DE#4x

DE#5x

DE#6x

DE#7x

DE#8x

DE#9x

DE#Ax

DE#Bx

DE#Cx

DE#Dx

DE#Ex

DE#Fx

DE

Only the CEs or the DEs in the selected DE group that are installed in the storage system are displayed.

CE

CE#x

DE#yy

x: CE number

yy: DE number

 

Checkbox to select drives

Select the checkbox for the drive that is to be used.

Checkboxes are displayed for unused drives. For 2.5" CEs or 2.5" DEs, drives are displayed from left to right in ascending order of the slot number. For 3.5" CEs, 3.5" DEs, or 3.5" high density DEs, drives are displayed from bottom left to top right in ascending order of the slot number.

Placing the mouse pointer on the icon displays the detailed information of the drive.

 

Conditions for the ETERNUS DX8100 S4/DX8900 S4 Drive Layout

The drive layout to configure RAID groups in the ETERNUS DX8100 S4/DX8900 S4 must satisfy the conditions described below.

RAID groups cannot be created if the required conditions are not satisfied.

For the ETERNUS DX8100 S4

RAID level                        Drive layout conditions
RAID1                             Required: Allocate mirroring pair drives to different DEs.
RAID1+0                           Required: Allocate mirroring pair drives to different DEs.
                                  Recommended: Allocate striping drives to as many DEs as possible.
RAID5, RAID5+0, RAID6, RAID6-FR   Recommended: Distribute member drives to as many DEs as possible.

For the ETERNUS DX8900 S4

RAID level        Drive layout conditions
RAID1             Required: Allocate mirroring pair drives to different DEs.
                  Recommended: Allocate mirroring pair drives to DEs (*1) under different CEs when possible. Allocate mirroring pair drives to different SAS cascades (*2) when possible.
RAID1+0           Required: Allocate mirroring pair drives to different DEs.
                  Recommended: Allocate striping drives to DEs under as many CEs as possible. Allocate striping drives to as many SAS cascades (*2) as possible.
RAID5             Required: Allocate member drives to different DEs.
                  Recommended: Distribute member drives to DEs under as many CEs as possible. Distribute member drives to as many SAS cascades (*2) as possible.
RAID5+0           Required: Allocate two or fewer member drives to the same DE. Member drives in the same DE must belong to different redundant groups.
                  Recommended: Distribute member drives to DEs under as many CEs as possible. Distribute member drives to as many SAS cascades (*2) as possible.
RAID6, RAID6-FR   Required: Allocate two or fewer member drives to the same DE.
                  Recommended: Distribute member drives to DEs under as many CEs as possible. Distribute member drives to as many SAS cascades (*2) as possible.

*1  :  DEs under different CEs have different numbers as the first digit of the DE number.
*2  :  "SAS cascade" for the ETERNUS DX8900 S4 refers to DEs that are attached to one drive interface port. The DEs that are allocated to the same SAS cascade configuration are as follows:

     DE#x1, DE#x2, and DE#x3 that are connected to CE#x/DI Port#0 (x: 0 - B)

     DE#x4, DE#x5, DE#x6, and DE#x7 that are connected to CE#x/DI Port#1 (x: 0 - B)

     DE#x8, DE#x9, DE#xA, and DE#xB that are connected to CE#x/DI Port#2 (x: 0 - B)

     DE#xC, DE#xD, DE#xE, and DE#xF that are connected to CE#x/DI Port#3 (x: 0 - B)

     (Example) DE#01, DE#02, and DE#03 that are connected to CE#0/DI Port#0 are on the same SAS cascade.
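
The cascade mapping above can be expressed as a small helper. A minimal sketch (Python; illustrative only, the function name is our own):

    def sas_cascade(de_number):
        """Return the CE/DI port for a DX8900 S4 DE number "xy" (illustrative).

        x is the CE number (0 - B); y selects the DI port per the mapping above.
        """
        ce, slot = de_number[0], int(de_number[1], 16)
        if slot == 0:
            raise ValueError("#%s0 is a CE, not a DE" % ce)
        if slot <= 0x3:
            port = 0          # DE#x1 - DE#x3
        elif slot <= 0x7:
            port = 1          # DE#x4 - DE#x7
        elif slot <= 0xB:
            port = 2          # DE#x8 - DE#xB
        else:
            port = 3          # DE#xC - DE#xF
        return "CE#%s/DI Port#%d" % (ce, port)

    # Example from the text: DE#01, DE#02, and DE#03 share CE#0/DI Port#0.
    print(sas_cascade("01"), sas_cascade("02"), sas_cascade("03"))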

DE selection list box

Model               Option   DE group
ETERNUS DX60 S5     DE#0x    CE, DE#01 - DE#03 (for 3.5" DEs)
ETERNUS DX100 S5    DE#0x    CE, DE#01 - DE#0A
ETERNUS DX200 S5    DE#0x    CE, DE#01 - DE#0A
ETERNUS DX500 S5    DE#0x    CE, DE#01 - DE#05
                    DE#1x    DE#10 - DE#15
                    DE#2x    DE#20 - DE#25
                    DE#3x    DE#30 - DE#35
ETERNUS DX600 S5    DE#0x    CE, DE#01 - DE#0A
                    DE#1x    DE#10 - DE#1A
                    DE#2x    DE#20 - DE#2A
                    DE#3x    DE#30 - DE#3A
ETERNUS DX900 S5    DE#0x    CE#0, DE#01 - DE#0F
                    DE#1x    CE#1, DE#11 - DE#1F
                    DE#Cx    DE#C0 - DE#CF
                    DE#Dx    DE#D0 - DE#DF
                    DE#Ex    DE#E0 - DE#EF
                    DE#Fx    DE#F0 - DE#FF
ETERNUS DX8100 S4   DE#0x    CE
                    DE#1x    DE#10
ETERNUS DX8900 S4   DE#0x    CE#0 (*1), DE#01 - DE#0F
                    DE#1x    CE#1 (*1), DE#11 - DE#1F
                    DE#2x    CE#2 (*1), DE#21 - DE#2F
                    DE#3x    CE#3 (*1), DE#31 - DE#3F
                    DE#4x    CE#4 (*1), DE#41 - DE#4F
                    DE#5x    CE#5 (*1), DE#51 - DE#5F
                    DE#6x    CE#6 (*1), DE#61 - DE#6F
                    DE#7x    CE#7 (*1), DE#71 - DE#7F
                    DE#8x    CE#8 (*1), DE#81 - DE#8F
                    DE#9x    CE#9 (*1), DE#91 - DE#9F
                    DE#Ax    CE#A (*1), DE#A1 - DE#AF
                    DE#Bx    CE#B (*1), DE#B1 - DE#BF
ETERNUS AF150 S3    DE#0x    CE
ETERNUS AF250 S3    DE#0x    CE, DE#01 - DE#0A
ETERNUS AF650 S3    DE#0x    CE, DE#01 - DE#0A
                    DE#1x    DE#10 - DE#1A
                    DE#2x    DE#20 - DE#2A
                    DE#3x    DE#30 - DE#3A
*1  :  Only 2.5" drives can be installed.

Operating Procedures

Automatically Selecting Drives to Create RAID Groups

  1. Click [Create] in [Action].

  2. Select "Automatic" for "Create Mode".

  3. Specify the RAID group detailed information, and click the [Create] button.

    → A confirmation screen appears.

    Caution
    • An error screen appears in the following conditions:
      • The "Name" overlaps with an existing name (when one RAID group is created)

      • The "Name" does not satisfy the input conditions

      • RAID groups cannot be created by using the drives that are installed in the storage system

  4. Click the [OK] button.

    → RAID group creation starts.

  5. Click the [Done] button to return to the [RAID Group] screen.

Manually Selecting Drives to Create RAID Groups

  1. Click [Create] in [Action].

  2. Select "Manual" for "Create Mode".

  3. Specify the RAID group detailed information.

  4. Select drives using a list of the drives or the installation location image.

    Note
    • When the number of selected drives does not match the number of drives required for the selected RAID level, the [Create] button cannot be clicked.

  5. Click the [Create] button.

    → A confirmation screen appears.

    Caution
    • An error screen appears in the following conditions:
      • The "Name" overlaps with an existing RAID group name

      • The "Name" does not satisfy the input conditions

      • The drive layout does not satisfy the required conditions

        (Refer to "Conditions for the ETERNUS DX8100 S4/DX8900 S4 Drive Layout" for details.)

      • The specified Stripe Depth is not allowed for the RAID level

  6. Click the [OK] button.

    → RAID group creation starts.

  7. Click the [Done] button to return to the [RAID Group] screen.