MetroCluster Manuals (CA08871-401)
Configuring the clusters into a MetroCluster configuration
You must peer the clusters, mirror the root aggregates, create a mirrored data aggregate, and then issue the command to implement the MetroCluster operations.
Before you run metrocluster configure, HA mode and DR mirroring are not enabled, and you might see an error message related to this expected behavior. You enable HA mode and DR mirroring later, when you run the metrocluster configure command to implement the configuration.
Peering the clusters
The clusters in the MetroCluster configuration must be in a peer relationship so that they can communicate with each other and perform the data mirroring essential to MetroCluster disaster recovery.
Configuring intercluster LIFs for cluster peering
You must create intercluster LIFs on ports used for communication between the MetroCluster partner clusters. You can use dedicated ports or ports that also have data traffic.
Configuring intercluster LIFs on dedicated ports
You can configure intercluster LIFs on dedicated ports. Doing so typically increases the available bandwidth for replication traffic.
-
List the ports in the cluster:
network port show
For complete command syntax, see the man page.
The following example shows the network ports in "cluster01":
cluster01::> network port show
                                                            Speed (Mbps)
Node   Port      IPspace      Broadcast Domain Link   MTU   Admin/Oper
------ --------- ------------ ---------------- ----- ------ ------------
cluster01-01
       e0a       Cluster      Cluster          up     1500  auto/1000
       e0b       Cluster      Cluster          up     1500  auto/1000
       e0c       Default      Default          up     1500  auto/1000
       e0d       Default      Default          up     1500  auto/1000
       e0e       Default      Default          up     1500  auto/1000
       e0f       Default      Default          up     1500  auto/1000
cluster01-02
       e0a       Cluster      Cluster          up     1500  auto/1000
       e0b       Cluster      Cluster          up     1500  auto/1000
       e0c       Default      Default          up     1500  auto/1000
       e0d       Default      Default          up     1500  auto/1000
       e0e       Default      Default          up     1500  auto/1000
       e0f       Default      Default          up     1500  auto/1000
-
Determine which ports are available to dedicate to intercluster communication:
network interface show -fields home-port,curr-port
For complete command syntax, see the man page.
The following example shows that ports "e0e" and "e0f" have not been assigned LIFs:
cluster01::> network interface show -fields home-port,curr-port
vserver   lif                  home-port curr-port
--------- -------------------- --------- ---------
Cluster   cluster01-01_clus1   e0a       e0a
Cluster   cluster01-01_clus2   e0b       e0b
Cluster   cluster01-02_clus1   e0a       e0a
Cluster   cluster01-02_clus2   e0b       e0b
cluster01 cluster_mgmt         e0c       e0c
cluster01 cluster01-01_mgmt1   e0c       e0c
cluster01 cluster01-02_mgmt1   e0c       e0c
-
Create a failover group for the dedicated ports:
network interface failover-groups create -vserver system_SVM -failover-group failover_group -targets physical_or_logical_ports
The following example assigns ports "e0e" and "e0f" to failover group "intercluster01" on the system SVM "cluster01":
cluster01::> network interface failover-groups create -vserver cluster01 -failover-group intercluster01 -targets cluster01-01:e0e,cluster01-01:e0f,cluster01-02:e0e,cluster01-02:e0f
-
Verify that the failover group was created:
network interface failover-groups show
For complete command syntax, see the man page.
cluster01::> network interface failover-groups show
                                  Failover
Vserver          Group            Targets
---------------- ---------------- --------------------------------------------
Cluster          Cluster          cluster01-01:e0a, cluster01-01:e0b,
                                  cluster01-02:e0a, cluster01-02:e0b
cluster01        Default          cluster01-01:e0c, cluster01-01:e0d,
                                  cluster01-02:e0c, cluster01-02:e0d,
                                  cluster01-01:e0e, cluster01-01:e0f,
                                  cluster01-02:e0e, cluster01-02:e0f
                 intercluster01   cluster01-01:e0e, cluster01-01:e0f,
                                  cluster01-02:e0e, cluster01-02:e0f
-
Create intercluster LIFs on the system SVM and assign them to the failover group.
ONTAP version: 9.7 and later

Command:
network interface create -vserver system_SVM -lif LIF_name -service-policy default-intercluster -home-node node -home-port port -address port_IP -netmask netmask -failover-group failover_group
For complete command syntax, see the man page.
The following example creates intercluster LIFs "cluster01_icl01" and "cluster01_icl02" in failover group "intercluster01":
cluster01::> network interface create -vserver cluster01 -lif cluster01_icl01 -service-policy default-intercluster -home-node cluster01-01 -home-port e0e -address 192.168.1.201 -netmask 255.255.255.0 -failover-group intercluster01

cluster01::> network interface create -vserver cluster01 -lif cluster01_icl02 -service-policy default-intercluster -home-node cluster01-02 -home-port e0e -address 192.168.1.202 -netmask 255.255.255.0 -failover-group intercluster01
-
Verify that the intercluster LIFs were created:
In ONTAP 9.7 and later:
network interface show -service-policy default-intercluster
For complete command syntax, see the man page.
cluster01::> network interface show -service-policy default-intercluster
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
cluster01
            cluster01_icl01
                       up/up      192.168.1.201/24   cluster01-01  e0e     true
            cluster01_icl02
                       up/up      192.168.1.202/24   cluster01-02  e0f     true
-
Verify that the intercluster LIFs are redundant:
In ONTAP 9.7 and later:
network interface show -service-policy default-intercluster -failover
For complete command syntax, see the man page.
The following example shows that the intercluster LIFs "cluster01_icl01" and "cluster01_icl02" on the "e0e" port will fail over to the "e0f" port.
cluster01::> network interface show -service-policy default-intercluster -failover
         Logical         Home              Failover         Failover
Vserver  Interface       Node:Port         Policy           Group
-------- --------------- ----------------- ---------------- --------
cluster01
         cluster01_icl01 cluster01-01:e0e  local-only       intercluster01
                         Failover Targets: cluster01-01:e0e,
                                           cluster01-01:e0f
         cluster01_icl02 cluster01-02:e0e  local-only       intercluster01
                         Failover Targets: cluster01-02:e0e,
                                           cluster01-02:e0f
Configuring intercluster LIFs on shared data ports
You can configure intercluster LIFs on ports shared with the data network. Doing so reduces the number of ports you need for intercluster networking.
-
List the ports in the cluster:
network port show
For complete command syntax, see the man page.
The following example shows the network ports in "cluster01":
cluster01::> network port show
                                                            Speed (Mbps)
Node   Port      IPspace      Broadcast Domain Link   MTU   Admin/Oper
------ --------- ------------ ---------------- ----- ------ ------------
cluster01-01
       e0a       Cluster      Cluster          up     1500  auto/1000
       e0b       Cluster      Cluster          up     1500  auto/1000
       e0c       Default      Default          up     1500  auto/1000
       e0d       Default      Default          up     1500  auto/1000
cluster01-02
       e0a       Cluster      Cluster          up     1500  auto/1000
       e0b       Cluster      Cluster          up     1500  auto/1000
       e0c       Default      Default          up     1500  auto/1000
       e0d       Default      Default          up     1500  auto/1000
-
Create intercluster LIFs on the system SVM:
In ONTAP 9.7 and later:
network interface create -vserver system_SVM -lif LIF_name -service-policy default-intercluster -home-node node -home-port port -address port_IP -netmask netmask
For complete command syntax, see the man page.
The following example creates intercluster LIFs "cluster01_icl01" and "cluster01_icl02":
cluster01::> network interface create -vserver cluster01 -lif cluster01_icl01 -service-policy default-intercluster -home-node cluster01-01 -home-port e0c -address 192.168.1.201 -netmask 255.255.255.0

cluster01::> network interface create -vserver cluster01 -lif cluster01_icl02 -service-policy default-intercluster -home-node cluster01-02 -home-port e0c -address 192.168.1.202 -netmask 255.255.255.0
-
Verify that the intercluster LIFs were created:
In ONTAP 9.7 and later:
network interface show -service-policy default-intercluster
For complete command syntax, see the man page.
cluster01::> network interface show -service-policy default-intercluster
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
cluster01
            cluster01_icl01
                       up/up      192.168.1.201/24   cluster01-01  e0c     true
            cluster01_icl02
                       up/up      192.168.1.202/24   cluster01-02  e0c     true
-
Verify that the intercluster LIFs are redundant:
In ONTAP 9.7 and later:
network interface show -service-policy default-intercluster -failover
For complete command syntax, see the man page.
The following example shows that intercluster LIFs "cluster01_icl01" and "cluster01_icl02" on the "e0c" port will fail over to the "e0d" port.
cluster01::> network interface show -service-policy default-intercluster -failover
         Logical         Home              Failover         Failover
Vserver  Interface       Node:Port         Policy           Group
-------- --------------- ----------------- ---------------- --------
cluster01
         cluster01_icl01 cluster01-01:e0c  local-only       192.168.1.201/24
                         Failover Targets: cluster01-01:e0c,
                                           cluster01-01:e0d
         cluster01_icl02 cluster01-02:e0c  local-only       192.168.1.201/24
                         Failover Targets: cluster01-02:e0c,
                                           cluster01-02:e0d
Creating a cluster peer relationship
You can use the cluster peer create command to create a peer relationship between a local and remote cluster. After the peer relationship has been created, you can run cluster peer create on the remote cluster to authenticate it to the local cluster.
-
You must have created intercluster LIFs on every node in the clusters that are being peered.
-
The clusters must be running ONTAP 9.7 or later.
-
On the destination cluster, create a peer relationship with the source cluster:
cluster peer create -generate-passphrase -offer-expiration MM/DD/YYYY HH:MM:SS|1…7days|1…168hours -peer-addrs peer_LIF_IPs -ipspace ipspace
If you specify both -generate-passphrase and -peer-addrs, only the cluster whose intercluster LIFs are specified in -peer-addrs can use the generated password.

You can ignore the -ipspace option if you are not using a custom IPspace. For complete command syntax, see the man page.

The following example creates a cluster peer relationship on an unspecified remote cluster:
cluster02::> cluster peer create -generate-passphrase -offer-expiration 2days

                     Passphrase: UCa+6lRVICXeL/gq1WrK7ShR
                Expiration Time: 6/7/2017 08:16:10 EST
  Initial Allowed Vserver Peers: -
            Intercluster LIF IP: 192.140.112.101
              Peer Cluster Name: Clus_7ShR (temporary generated)

Warning: make a note of the passphrase - it cannot be displayed again.
-
On the source cluster, authenticate the source cluster to the destination cluster:
cluster peer create -peer-addrs peer_LIF_IPs -ipspace ipspace
For complete command syntax, see the man page.
The following example authenticates the local cluster to the remote cluster at intercluster LIF IP addresses "192.140.112.101" and "192.140.112.102":
cluster01::> cluster peer create -peer-addrs 192.140.112.101,192.140.112.102

Notice: Use a generated passphrase or choose a passphrase of 8 or more characters.
        To ensure the authenticity of the peering relationship, use a phrase or
        sequence of characters that would be hard to guess.

Enter the passphrase:
Confirm the passphrase:

Clusters cluster02 and cluster01 are peered.
Enter the passphrase for the peer relationship when prompted.
-
Verify that the cluster peer relationship was created:
cluster peer show -instance
cluster01::> cluster peer show -instance

                               Peer Cluster Name: cluster02
                   Remote Intercluster Addresses: 192.140.112.101, 192.140.112.102
              Availability of the Remote Cluster: Available
                             Remote Cluster Name: cluster2
                             Active IP Addresses: 192.140.112.101, 192.140.112.102
                           Cluster Serial Number: 1-80-123456
                  Address Family of Relationship: ipv4
            Authentication Status Administrative: no-authentication
               Authentication Status Operational: absent
                                Last Update Time: 02/05 21:05:41
                    IPspace for the Relationship: Default
-
Check the connectivity and status of the nodes in the peer relationship:
cluster peer health show
cluster01::> cluster peer health show
Node       cluster-Name                Node-Name
             Ping-Status               RDB-Health Cluster-Health  Avail…
---------- --------------------------- ---------- --------------- --------
cluster01-01
           cluster02                   cluster02-01
             Data: interface_reachable
             ICMP: interface_reachable true       true            true

                                       cluster02-02
             Data: interface_reachable
             ICMP: interface_reachable true       true            true

cluster01-02
           cluster02                   cluster02-01
             Data: interface_reachable
             ICMP: interface_reachable true       true            true

                                       cluster02-02
             Data: interface_reachable
             ICMP: interface_reachable true       true            true
Creating the DR group
You must create the disaster recovery (DR) group relationships between the clusters.
You perform this procedure on one of the clusters in the MetroCluster configuration to create the DR relationships between the nodes in both clusters.
Note: The DR relationships cannot be changed after the DR groups are created.
-
Verify that the nodes are ready for creation of the DR group by entering the following command on each node:
metrocluster configuration-settings show-status
The command output should show that the nodes are ready:
cluster_A::> metrocluster configuration-settings show-status
Cluster                    Node          Configuration Settings Status
-------------------------- ------------- --------------------------------
cluster_A                  node_A_1      ready for DR group create
                           node_A_2      ready for DR group create
2 entries were displayed.
cluster_B::> metrocluster configuration-settings show-status
Cluster                    Node          Configuration Settings Status
-------------------------- ------------- --------------------------------
cluster_B                  node_B_1      ready for DR group create
                           node_B_2      ready for DR group create
2 entries were displayed.
-
Create the DR group:
metrocluster configuration-settings dr-group create -partner-cluster partner-cluster-name -local-node local-node-name -remote-node remote-node-name
This command is issued only once; it does not need to be repeated on the partner cluster. In the command, you specify the name of the remote cluster, the name of one local node, and the name of one node on the partner cluster.
The two nodes you specify are configured as DR partners and the other two nodes (which are not specified in the command) are configured as the second DR pair in the DR group. These relationships cannot be changed after you enter this command.
The following command creates these DR pairs:
-
node_A_1 and node_B_1
-
node_A_2 and node_B_2
cluster_A::> metrocluster configuration-settings dr-group create -partner-cluster cluster_B -local-node node_A_1 -remote-node node_B_1
[Job 27] Job succeeded: DR Group Create is successful.
-
Configuring and connecting the MetroCluster IP interfaces
You must configure the MetroCluster IP interfaces that are used for replication of each node’s storage and nonvolatile cache. You then establish the connections using the MetroCluster IP interfaces. This creates iSCSI connections for storage replication.
Note: You must choose the MetroCluster IP addresses carefully because you cannot change them after initial configuration.
-
You must create two interfaces for each node. The interfaces must be associated with the VLANs defined in the MetroCluster RCF file.
-
You must create all MetroCluster IP interface "A" ports in the same VLAN and all MetroCluster IP interface "B" ports in the other VLAN. Refer to Considerations for MetroCluster IP configuration.
Note:
-
Certain platforms use a VLAN for the MetroCluster IP interface. By default, each of the two ports uses a different VLAN: 10 and 20. You can also specify a different (non-default) VLAN higher than 100 (between 101 and 4095) using the -vlan-id parameter in the metrocluster configuration-settings interface create command.
-
Beginning with ONTAP 9.9.1, if you are using a layer 3 configuration, you must also specify the -gateway parameter when creating MetroCluster IP interfaces. Refer to Considerations for layer 3 wide-area networks.
The following platform models can be added to an existing MetroCluster configuration if the VLANs used are 10/20 or greater than 100. If any other VLANs are used, these platforms cannot be added to the existing configuration, because the MetroCluster interface cannot be configured. If you are using any other platform, the VLAN configuration is not relevant, because it is not required in ONTAP.
ETERNUS AX series:
-
ETERNUS AX2100
-
ETERNUS AX2200
-
ETERNUS AX4100

ETERNUS HX series:
-
ETERNUS HX2200
-
ETERNUS HX6100
The following IP addresses and subnets are used in the examples:
Node      Interface                    IP address  Subnet
--------  ---------------------------  ----------  ---------
node_A_1  MetroCluster IP interface 1  10.1.1.1    10.1.1/24
          MetroCluster IP interface 2  10.1.2.1    10.1.2/24
node_A_2  MetroCluster IP interface 1  10.1.1.2    10.1.1/24
          MetroCluster IP interface 2  10.1.2.2    10.1.2/24
node_B_1  MetroCluster IP interface 1  10.1.1.3    10.1.1/24
          MetroCluster IP interface 2  10.1.2.3    10.1.2/24
node_B_2  MetroCluster IP interface 1  10.1.1.4    10.1.1/24
          MetroCluster IP interface 2  10.1.2.4    10.1.2/24
The physical ports used by the MetroCluster IP interfaces depend on the platform model, as shown in the following table.
Platform model                     MetroCluster IP port  Note
---------------------------------  --------------------  -----------------------------------------
ETERNUS AX4100 and ETERNUS HX6100  e1a
                                   e1b
ETERNUS AX2100 and ETERNUS HX2200  e0a                   On these models, these physical ports are
                                   e0b                   also used as cluster interfaces.
ETERNUS AX2200                     e0c
                                   e0d
-
Verifying or manually performing pool 1 drives assignment
Depending on the storage configuration, you must either verify pool 1 drive assignment or manually assign drives to pool 1 for each node in the MetroCluster IP configuration. The procedure you use depends on the version of ONTAP you are using.
Configuration type                                         Procedure
---------------------------------------------------------  ------------------------------------------
The systems meet the requirements for automatic drive       Verifying disk assignment for pool 1 disks
assignment.
The configuration includes three shelves, or, if it         Manually assigning drives for pool 1
contains more than four shelves, an uneven multiple of
four shelves (for example, seven shelves).
The configuration does not include four storage shelves     Manually assigning drives for pool 1
per site.
Verifying disk assignment for pool 1 disks
You must verify that the remote disks are visible to the nodes and have been assigned correctly.
You must wait at least ten minutes for disk auto-assignment to complete after the MetroCluster IP interfaces and connections have been created with the metrocluster configuration-settings connection connect command.

Command output shows disk names in the form node-name:0m.i1.0L1.
-
Verify pool 1 disks are auto-assigned:
disk show
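The following is an illustrative sketch of the kind of output you might see after auto-assignment completes; the disk names, shelf and bay numbers, and sizes are examples only and will differ on your system. The remote (pool 1) disks are visible through the iSCSI device 0m and should show the expected owner:

cluster_A::> disk show
                     Usable                      Disk     Container  Container
Disk                 Size       Shelf Bay Type   Type     Name       Owner
-------------------- ---------- ----- --- ------ -------- ---------- --------
node_A_1:0m.i1.0L1   894.0GB    23    0   SSD    spare    -          node_A_1
node_A_1:0m.i1.0L2   894.0GB    23    1   SSD    spare    -          node_A_1
...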
Manually assigning drives for pool 1
If the system was not preconfigured at the factory and does not meet the requirements for automatic drive assignment, you must manually assign the remote pool 1 drives.
Details for determining whether your system requires manual disk assignment are included in Considerations for automatic drive assignment and ADP systems.
When the configuration includes only two external shelves per site, pool 1 drives for each site should be shared from the same shelf as shown in the following examples:
-
node_A_1 is assigned drives in bays 0-11 on site_B-shelf_2 (remote)
-
node_A_2 is assigned drives in bays 12-23 on site_B-shelf_2 (remote)
-
From each node in the MetroCluster IP configuration, assign remote drives to pool 1.
-
Display the list of unassigned drives:
disk show -host-adapter 0m -container-type unassigned
cluster_A::> disk show -host-adapter 0m -container-type unassigned
                     Usable                      Disk        Container  Container
Disk                 Size       Shelf Bay Type   Type        Name       Owner
-------------------- ---------- ----- --- ------ ----------- ---------- --------
6.23.0               -          23    0   SSD    unassigned  -          -
6.23.1               -          23    1   SSD    unassigned  -          -
.
.
.
node_A_2:0m.i1.2L51  -          21    14  SSD    unassigned  -          -
node_A_2:0m.i1.2L64  -          21    10  SSD    unassigned  -          -
.
.
.
48 entries were displayed.

cluster_A::>
-
Assign ownership of remote drives (0m) to pool 1 of the first node (for example, node_A_1):
disk assign -disk disk-id -pool 1 -owner owner-node-name
disk-id must identify a drive on a remote shelf of owner-node-name.
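For example, the following command assigns one of the shelf 23 drives from the earlier listing (the drive ID is illustrative) to pool 1 of node_A_1:

cluster_A::> disk assign -disk 6.23.0 -pool 1 -owner node_A_1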
. -
Confirm that the drives were assigned to pool 1:
disk show -host-adapter 0m -container-type unassigned
Note: The iSCSI connection used to access the remote drives appears as device 0m. The following output shows that the drives on shelf 23 were assigned, because they no longer appear in the list of unassigned drives:
cluster_A::> disk show -host-adapter 0m -container-type unassigned
                     Usable                      Disk        Container  Container
Disk                 Size       Shelf Bay Type   Type        Name       Owner
-------------------- ---------- ----- --- ------ ----------- ---------- --------
node_A_2:0m.i1.2L51  -          21    14  SSD    unassigned  -          -
node_A_2:0m.i1.2L64  -          21    10  SSD    unassigned  -          -
.
.
.
node_A_2:0m.i2.1L90  -          21    19  SSD    unassigned  -          -
24 entries were displayed.

cluster_A::>
-
Repeat these steps to assign pool 1 drives to the second node on site A (for example, "node_A_2").
-
Repeat these steps on site B.
-
Mirroring the root aggregates
You must mirror the root aggregates to provide data protection.
By default, the root aggregate is created as a RAID-DP aggregate. You can change the root aggregate from RAID-DP to RAID4. The following command modifies the root aggregate to RAID4:

storage aggregate modify -aggregate aggr_name -raidtype raid4
Note: On non-ADP systems, the RAID type of the aggregate can be modified from the default RAID-DP to RAID4 before or after the aggregate is mirrored.
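For example, the following command (aggregate name illustrative) changes the root aggregate of controller_A_1 to RAID4:

controller_A_1::> storage aggregate modify -aggregate aggr0_controller_A_1 -raidtype raid4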
-
Mirror the root aggregate:
storage aggregate mirror aggr_name
The following command mirrors the root aggregate for "controller_A_1":
controller_A_1::> storage aggregate mirror aggr0_controller_A_1
This mirrors the aggregate, so it consists of a local plex and a remote plex located at the remote MetroCluster site.
-
Repeat the previous step for each node in the MetroCluster configuration.
Creating a mirrored data aggregate on each node
You must create a mirrored data aggregate on each node in the DR group.
-
You should know what drives will be used in the new aggregate.
-
If you have multiple drive types in your system (heterogeneous storage), you should understand how you can ensure that the correct drive type is selected.
-
Drives are owned by a specific node; when you create an aggregate, all drives in that aggregate must be owned by the same node, which becomes the home node for that aggregate.
In systems using ADP, aggregates are created using partitions, in which each drive is partitioned into P1, P2, and P3 partitions.
-
Aggregate names should conform to the naming scheme you determined when you planned your MetroCluster configuration.
-
Display a list of available spares:
storage disk show -spare -owner node_name
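For example, the following invocation (node name illustrative) lists the spares owned by node_A_1; the output layout varies by ONTAP version and drive type:

cluster_A::> storage disk show -spare -owner node_A_1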
-
Create the aggregate:
storage aggregate create -mirror true
If you are logged in to the cluster on the cluster management interface, you can create an aggregate on any node in the cluster. To ensure that the aggregate is created on a specific node, use the -node parameter or specify drives that are owned by that node.

You can specify the following options:
-
Aggregate’s home node (that is, the node that owns the aggregate in normal operation)
-
List of specific drives that are to be added to the aggregate
-
Number of drives to include
Note: In the minimum supported configuration, in which a limited number of drives are available, you must use the force-small-aggregate option to allow the creation of a three-disk RAID-DP aggregate.
-
Checksum style to use for the aggregate
-
Type of drives to use
-
Size of drives to use
-
Drive speed to use
-
RAID type for RAID groups on the aggregate
-
Maximum number of drives that can be included in a RAID group
-
Whether drives with different RPM are allowed

For more information about these options, see the storage aggregate create man page.
The following command creates a mirrored aggregate with 10 disks:
cluster_A::> storage aggregate create aggr1_node_A_1 -diskcount 10 -node node_A_1 -mirror true
[Job 15] Job is queued: Create aggr1_node_A_1.
[Job 15] The job is starting.
[Job 15] Job succeeded: DONE
-
Verify the RAID group and drives of your new aggregate:
storage aggregate show-status -aggregate aggregate-name
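For example, for the mirrored aggregate created above (output abridged and illustrative; plex names and RAID group layout depend on your system), you would expect to see two plexes, one in pool0 and one in pool1:

cluster_A::> storage aggregate show-status -aggregate aggr1_node_A_1

Owner Node: node_A_1
 Aggregate: aggr1_node_A_1 (online, raid_dp, mirrored) (block checksums)
  Plex: /aggr1_node_A_1/plex0 (online, normal, active, pool0)
   RAID Group /aggr1_node_A_1/plex0/rg0 (normal, block checksums)
   ...
  Plex: /aggr1_node_A_1/plex1 (online, normal, active, pool1)
   RAID Group /aggr1_node_A_1/plex1/rg0 (normal, block checksums)
   ...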
Implementing the MetroCluster configuration
You must run the metrocluster configure command to start data protection in a MetroCluster configuration.
-
There should be at least two non-root mirrored data aggregates on each cluster.
You can verify this with the storage aggregate show command.

Note: If you want to use a single mirrored data aggregate, see Step 1 for instructions.
-
The ha-config state of the controllers and chassis must be "mccip".
You issue the metrocluster configure command once, on any of the nodes, to enable the MetroCluster configuration. You do not need to issue the command on each of the sites or nodes, and it does not matter which node or site you choose to issue the command on.

The metrocluster configure command automatically pairs the two nodes with the lowest system IDs in each of the two clusters as disaster recovery (DR) partners. In a four-node MetroCluster configuration, there are two DR partner pairs. The second DR pair is created from the two nodes with higher system IDs.
Note: You must not configure Onboard Key Manager (OKM) or external key management before you run the metrocluster configure command.
-
Configure the MetroCluster according to your configuration:
If your MetroCluster configuration has…
Then do this…
Multiple data aggregates
From any node’s prompt, configure MetroCluster:
metrocluster configure node-name
A single mirrored data aggregate
-
From any node’s prompt, change to the advanced privilege level:
set -privilege advanced
You need to respond with y when you are prompted to continue into advanced mode and you see the advanced mode prompt (*>).
-
Configure the MetroCluster with the -allow-with-one-aggregate true parameter:
metrocluster configure -allow-with-one-aggregate true node-name
-
Return to the admin privilege level:
set -privilege admin
Note: The best practice is to have multiple data aggregates. If the first DR group has only one aggregate and you want to add a DR group with one aggregate, you must move the metadata volume off the single data aggregate.

The following command enables the MetroCluster configuration on all of the nodes in the DR group that contains "controller_A_1":
cluster_A::*> metrocluster configure -node-name controller_A_1

[Job 121] Job succeeded: Configure is successful.
-
Verify the networking status on site A:
network port show
The following example shows the network port usage on a four-node MetroCluster configuration:
cluster_A::> network port show
                                                         Speed (Mbps)
Node   Port      IPspace   Broadcast Domain Link   MTU   Admin/Oper
------ --------- --------- ---------------- ----- ------ ------------
controller_A_1
       e0a       Cluster   Cluster          up     9000  auto/1000
       e0b       Cluster   Cluster          up     9000  auto/1000
       e0c       Default   Default          up     1500  auto/1000
       e0d       Default   Default          up     1500  auto/1000
       e0e       Default   Default          up     1500  auto/1000
       e0f       Default   Default          up     1500  auto/1000
       e0g       Default   Default          up     1500  auto/1000
controller_A_2
       e0a       Cluster   Cluster          up     9000  auto/1000
       e0b       Cluster   Cluster          up     9000  auto/1000
       e0c       Default   Default          up     1500  auto/1000
       e0d       Default   Default          up     1500  auto/1000
       e0e       Default   Default          up     1500  auto/1000
       e0f       Default   Default          up     1500  auto/1000
       e0g       Default   Default          up     1500  auto/1000
14 entries were displayed.
-
Verify the MetroCluster configuration from both sites in the MetroCluster configuration.
-
Verify the configuration from site A:
metrocluster show
cluster_A::> metrocluster show
Configuration: IP fabric

Cluster                   Entry Name          State
------------------------- ------------------- -----------
 Local: cluster_A         Configuration state configured
                          Mode                normal
Remote: cluster_B         Configuration state configured
                          Mode                normal
-
Verify the configuration from site B:
metrocluster show
cluster_B::> metrocluster show
Configuration: IP fabric

Cluster                   Entry Name          State
------------------------- ------------------- -----------
 Local: cluster_B         Configuration state configured
                          Mode                normal
Remote: cluster_A         Configuration state configured
                          Mode                normal
-
To avoid possible issues with nonvolatile memory mirroring, reboot each of the four nodes:
node reboot -node node-name -inhibit-takeover true
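For example, the following command (node name illustrative) reboots the first node; typically you reboot the nodes one at a time, waiting for each node to fully rejoin the cluster before rebooting the next:

cluster_A::> node reboot -node controller_A_1 -inhibit-takeover true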
-
Issue the metrocluster show command on both clusters to again verify the configuration.
Configuring the second DR group in an eight-node configuration
Repeat the previous tasks to configure the nodes in the second DR group.
Creating unmirrored data aggregates
You can optionally create unmirrored data aggregates for data that does not require the redundant mirroring provided by MetroCluster configurations.
-
You should know what drives or array LUNs will be used in the new aggregate.
-
If you have multiple drive types in your system (heterogeneous storage), you should understand how you can verify that the correct drive type is selected.
Important: In MetroCluster IP configurations, remote unmirrored aggregates are not accessible after a switchover.

Note: The unmirrored aggregates must be local to the node owning them.
-
Drives and array LUNs are owned by a specific node; when you create an aggregate, all drives in that aggregate must be owned by the same node, which becomes the home node for that aggregate.
-
Aggregate names should conform to the naming scheme you determined when you planned your MetroCluster configuration.
-
Disks and aggregates management contains more information about mirroring aggregates.
-
Enable unmirrored aggregate deployment:
metrocluster modify -enable-unmirrored-aggr-deployment true
-
Verify that disk autoassignment is disabled:
disk option show
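The Auto Assign column should show off for each node. The following sketch is illustrative; the exact column layout varies by ONTAP version:

cluster_A::> disk option show
Node           BKg. FW. Upd.  Auto Copy   Auto Assign  Auto Assign Policy
-------------  -------------  ----------  -----------  ------------------
node_A_1       on             on          off          default
node_A_2       on             on          off          default
2 entries were displayed.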
-
Install and cable the disk shelves that will contain the unmirrored aggregates.
You can use the procedures in the Installation and Setup documentation for your platform and disk shelves.
-
Manually assign all disks on the new shelf to the appropriate node:
disk assign -disk disk-id -owner owner-node-name
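For example, the following command (disk ID illustrative) assigns one disk on the new shelf to node_A_1; repeat it for each disk on the shelf:

cluster_A::> disk assign -disk 5.10.0 -owner node_A_1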
-
Create the aggregate:
storage aggregate create
If you are logged in to the cluster on the cluster management interface, you can create an aggregate on any node in the cluster. To ensure that the aggregate is created on a specific node, use the -node parameter or specify drives that are owned by that node.

You must also ensure that you include only drives from the unmirrored shelf in the aggregate.
You can specify the following options:
-
Aggregate’s home node (that is, the node that owns the aggregate in normal operation)
-
List of specific drives or array LUNs that are to be added to the aggregate
-
Number of drives to include
-
Checksum style to use for the aggregate
-
Type of drives to use
-
Size of drives to use
-
Drive speed to use
-
RAID type for RAID groups on the aggregate
-
Maximum number of drives or array LUNs that can be included in a RAID group
-
Whether drives with different RPM are allowed
For more information about these options, see the storage aggregate create man page.
The following command creates an unmirrored aggregate with 10 disks:
controller_A_1::> storage aggregate create aggr1_controller_A_1 -diskcount 10 -node controller_A_1
[Job 15] Job is queued: Create aggr1_controller_A_1.
[Job 15] The job is starting.
[Job 15] Job succeeded: DONE
-
Verify the RAID group and drives of your new aggregate:
storage aggregate show-status -aggregate aggregate-name
-
Disable unmirrored aggregate deployment:
metrocluster modify -enable-unmirrored-aggr-deployment false
-
Verify that disk autoassignment is enabled:
disk option show
Checking the MetroCluster configuration
You can check that the components and relationships in the MetroCluster configuration are working correctly.
You should do a check after initial configuration and after making any changes to the MetroCluster configuration.
You should also do a check before a negotiated (planned) switchover or a switchback operation.
If the metrocluster check run command is issued twice within a short time on either or both clusters, a conflict can occur and the command might not collect all data. Subsequent metrocluster check show commands then do not show the expected output.
-
Check the configuration:
metrocluster check run
The command runs as a background job and might not be completed immediately.
cluster_A::> metrocluster check run
The operation has been started and is running in the background. Wait for
it to complete and run "metrocluster check show" to view the results. To
check the status of the running metrocluster check operation, use the command,
"metrocluster operation history show -job-id 2245"
cluster_A::> metrocluster check show

Component           Result
------------------- ---------
nodes               ok
lifs                ok
config-replication  ok
aggregates          ok
clusters            ok
connections         ok
volumes             ok
7 entries were displayed.
-
Display more detailed results from the most recent metrocluster check run command:
metrocluster check aggregate show
metrocluster check cluster show
metrocluster check config-replication show
metrocluster check lif show
metrocluster check node show
Note: The metrocluster check show commands show the results of the most recent metrocluster check run command. You should always run the metrocluster check run command prior to using the metrocluster check show commands so that the information displayed is current.

The following example shows the metrocluster check aggregate show command output for a healthy four-node MetroCluster configuration:

cluster_A::> metrocluster check aggregate show
Last Checked On: 8/5/2014 00:42:58

Node           Aggregate            Check                 Result
-------------- -------------------- --------------------- ---------
controller_A_1 controller_A_1_aggr0
                                    mirroring-status      ok
                                    disk-pool-allocation  ok
                                    ownership-state       ok
               controller_A_1_aggr1
                                    mirroring-status      ok
                                    disk-pool-allocation  ok
                                    ownership-state       ok
               controller_A_1_aggr2
                                    mirroring-status      ok
                                    disk-pool-allocation  ok
                                    ownership-state       ok
controller_A_2 controller_A_2_aggr0
                                    mirroring-status      ok
                                    disk-pool-allocation  ok
                                    ownership-state       ok
               controller_A_2_aggr1
                                    mirroring-status      ok
                                    disk-pool-allocation  ok
                                    ownership-state       ok
               controller_A_2_aggr2
                                    mirroring-status      ok
                                    disk-pool-allocation  ok
                                    ownership-state       ok
18 entries were displayed.
The following example shows the metrocluster check cluster show command output for a healthy four-node MetroCluster configuration. It indicates that the clusters are ready to perform a negotiated switchover if necessary.
Last Checked On: 9/13/2017 20:47:04

Cluster               Check                           Result
--------------------- ------------------------------- ---------
mccint-fas9000-0102
                      negotiated-switchover-ready     not-applicable
                      switchback-ready                not-applicable
                      job-schedules                   ok
                      licenses                        ok
                      periodic-check-enabled          ok
mccint-fas9000-0304
                      negotiated-switchover-ready     not-applicable
                      switchback-ready                not-applicable
                      job-schedules                   ok
                      licenses                        ok
                      periodic-check-enabled          ok
10 entries were displayed.
Completing ONTAP configuration
After configuring, enabling, and checking the MetroCluster configuration, you can proceed to complete the cluster configuration by adding additional SVMs, network interfaces, and other ONTAP functionality as needed.