ONTAP 9 Manuals (CA08871-402)

Add capacity to a local tier (add disks to an aggregate)

You can add disks to a local tier (aggregate) so that it can provide more storage to its associated volumes.

ONTAP System Manager (ONTAP 9.8 and later)

Use ONTAP System Manager to add capacity (ONTAP 9.8 and later)

You can add capacity to a local tier by adding capacity disks.

Beginning with ONTAP 9.12.1, you can use ONTAP System Manager to view the committed capacity of a local tier to determine if additional capacity is required for the local tier. See Monitor capacity in ONTAP System Manager.
About this task

You perform this task only if you have installed ONTAP 9.8 or later.

Steps
  1. Click Storage > Tiers.

  2. Click the kebab menu icon (⋮) next to the name of the local tier to which you want to add capacity.

  3. Click Add Capacity.

    If there are no spare disks that you can add, then the Add Capacity option is not shown, and you cannot increase the capacity of the local tier.
  4. Perform the following steps, based on the version of ONTAP that is installed:

    If ONTAP 9.8, 9.9, or 9.10.1 is installed:

    1. If the node contains multiple storage tiers, select the number of disks you want to add to the local tier. If the node contains only a single storage tier, the added capacity is estimated automatically.

    2. Click Add.

    If ONTAP 9.11.1 or later is installed:

    1. Select the disk type and number of disks.

    2. If you want to add the disks to a new RAID group, select the check box. The RAID allocation is displayed.

    3. Click Save.

  5. (Optional) The process takes some time to complete. If you want to run the process in the background, select Run in Background.

  6. After the process completes, you can view the increased capacity amount in the local tier information at Storage > Tiers.

CLI

Use the CLI to add capacity

The procedure for adding partitioned disks to an aggregate is similar to the procedure for adding unpartitioned disks.

What you’ll need

You must know what the RAID group size is for the aggregate you are adding the storage to.
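
If you need to look up the RAID group size, it is reported in the aggregate's raidsize field. A minimal check, assuming the data_1 aggregate used in the examples below:

    storage aggregate show -aggregate data_1 -fields raidsize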

About this task

When you expand an aggregate, you should be aware of whether you are adding partitioned or unpartitioned disks to the aggregate. When you add unpartitioned drives to an existing aggregate, the size of the existing RAID groups is inherited by the new RAID group, which can affect the number of parity disks required. If an unpartitioned disk is added to a RAID group composed of partitioned disks, the new disk is partitioned, leaving an unused spare partition.

When you provision partitions, you must ensure that you do not leave the node without a drive with both partitions as spare. If you do, and the node experiences a controller disruption, valuable information about the problem (the core file) might not be available to provide to technical support.

Do not use the disklist command to expand your aggregates. This could cause partition misalignment.

Steps
  1. Show the available spare storage on the system that owns the aggregate:

    storage aggregate show-spare-disks -original-owner node_name

    You can use the -is-disk-shared parameter to show only partitioned drives or only unpartitioned drives.

    cl1-s2::> storage aggregate show-spare-disks -original-owner cl1-s2 -is-disk-shared true
    
    Original Owner: cl1-s2
     Pool0
      Shared HDD Spares
                                                                Local    Local
                                                                 Data     Root Physical
     Disk                        Type     RPM Checksum         Usable   Usable     Size Status
     --------------------------- ----- ------ -------------- -------- -------- -------- --------
     1.0.1                       BSAS    7200 block           753.8GB  73.89GB  828.0GB zeroed
     1.0.2                       BSAS    7200 block           753.8GB       0B  828.0GB zeroed
     1.0.3                       BSAS    7200 block           753.8GB       0B  828.0GB zeroed
     1.0.4                       BSAS    7200 block           753.8GB       0B  828.0GB zeroed
     1.0.8                       BSAS    7200 block           753.8GB       0B  828.0GB zeroed
     1.0.9                       BSAS    7200 block           753.8GB       0B  828.0GB zeroed
     1.0.10                      BSAS    7200 block                0B  73.89GB  828.0GB zeroed
    7 entries were displayed.
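
    To list only the unpartitioned spare drives instead, you can set the same parameter to false, for example:

    storage aggregate show-spare-disks -original-owner cl1-s2 -is-disk-shared false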
  2. Show the current RAID groups for the aggregate:

    storage aggregate show-status aggr_name

    cl1-s2::> storage aggregate show-status -aggregate data_1
    
    Owner Node: cl1-s2
     Aggregate: data_1 (online, raid_dp) (block checksums)
      Plex: /data_1/plex0 (online, normal, active, pool0)
       RAID Group /data_1/plex0/rg0 (normal, block checksums)
                                                  Usable Physical
         Position Disk        Pool Type     RPM     Size     Size Status
         -------- ----------- ---- ----- ------ -------- -------- ----------
         shared   1.0.10        0   BSAS    7200  753.8GB  828.0GB (normal)
         shared   1.0.5         0   BSAS    7200  753.8GB  828.0GB (normal)
         shared   1.0.6         0   BSAS    7200  753.8GB  828.0GB (normal)
         shared   1.0.11        0   BSAS    7200  753.8GB  828.0GB (normal)
         shared   1.0.0         0   BSAS    7200  753.8GB  828.0GB (normal)
    5 entries were displayed.
  3. Simulate adding the storage to the aggregate:

    storage aggregate add-disks -aggregate aggr_name -diskcount number_of_disks_or_partitions -simulate true

    You can see the result of the storage addition without actually provisioning any storage. If the simulated command displays any warnings, you can adjust the command and repeat the simulation.

    cl1-s2::> storage aggregate add-disks -aggregate aggr_test -diskcount 5 -simulate true
    
    Disks would be added to aggregate "aggr_test" on node "cl1-s2" in the
    following manner:
    
    First Plex
    
      RAID Group rg0, 5 disks (block checksum, raid_dp)
                                                          Usable Physical
        Position   Disk                      Type           Size     Size
        ---------- ------------------------- ---------- -------- --------
        shared     1.11.4                    SSD         415.8GB  415.8GB
        shared     1.11.18                   SSD         415.8GB  415.8GB
        shared     1.11.19                   SSD         415.8GB  415.8GB
        shared     1.11.20                   SSD         415.8GB  415.8GB
        shared     1.11.21                   SSD         415.8GB  415.8GB
    
    Aggregate capacity available for volume use would be increased by 1.83TB.
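
    Because step 4 places the new disks in a new RAID group, you can include the same -raidgroup new parameter in the simulation so that the preview matches the command you plan to run. A minimal sketch, using the data_1 aggregate from the next step:

    storage aggregate add-disks -aggregate data_1 -raidgroup new -diskcount 5 -simulate true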
  4. Add the storage to the aggregate:

    storage aggregate add-disks -aggregate aggr_name -raidgroup new -diskcount number_of_disks_or_partitions

    When creating a Flash Pool aggregate, if you are adding disks with a different checksum than the aggregate, or if you are adding disks to a mixed checksum aggregate, you must use the -checksumstyle parameter.

    If you are adding disks to a Flash Pool aggregate, you must use the -disktype parameter to specify the disk type.

    You can use the -disksize parameter to specify the size of the disks to add. Only disks with approximately the specified size are selected for addition to the aggregate.

    cl1-s2::> storage aggregate add-disks -aggregate data_1 -raidgroup new -diskcount 5
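
    If you were adding SSDs to a Flash Pool aggregate as described above, the command might look like the following sketch (the aggregate name fp_aggr1 is hypothetical):

    storage aggregate add-disks -aggregate fp_aggr1 -disktype SSD -diskcount 2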
  5. Verify that the storage was added successfully:

    storage aggregate show-status -aggregate aggr_name

    cl1-s2::> storage aggregate show-status -aggregate data_1
    
    Owner Node: cl1-s2
     Aggregate: data_1 (online, raid_dp) (block checksums)
      Plex: /data_1/plex0 (online, normal, active, pool0)
       RAID Group /data_1/plex0/rg0 (normal, block checksums)
                                                                  Usable Physical
         Position Disk                        Pool Type     RPM     Size     Size Status
         -------- --------------------------- ---- ----- ------ -------- -------- ----------
         shared   1.0.10                       0   BSAS    7200  753.8GB  828.0GB (normal)
         shared   1.0.5                        0   BSAS    7200  753.8GB  828.0GB (normal)
         shared   1.0.6                        0   BSAS    7200  753.8GB  828.0GB (normal)
         shared   1.0.11                       0   BSAS    7200  753.8GB  828.0GB (normal)
         shared   1.0.0                        0   BSAS    7200  753.8GB  828.0GB (normal)
         shared   1.0.2                        0   BSAS    7200  753.8GB  828.0GB (normal)
         shared   1.0.3                        0   BSAS    7200  753.8GB  828.0GB (normal)
         shared   1.0.4                        0   BSAS    7200  753.8GB  828.0GB (normal)
         shared   1.0.8                        0   BSAS    7200  753.8GB  828.0GB (normal)
         shared   1.0.9                        0   BSAS    7200  753.8GB  828.0GB (normal)
    10 entries were displayed.
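
    You can also confirm the added capacity directly by displaying the aggregate's capacity fields. A minimal check, assuming the data_1 aggregate from this example:

    storage aggregate show -aggregate data_1 -fields size,usedsize,availsize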
  6. Verify that the node still has at least one drive with both the root partition and the data partition as spare:

    storage aggregate show-spare-disks -original-owner node_name

    cl1-s2::> storage aggregate show-spare-disks -original-owner cl1-s2 -is-disk-shared true
    
    Original Owner: cl1-s2
     Pool0
      Shared HDD Spares
                                                                Local    Local
                                                                 Data     Root Physical
     Disk                        Type     RPM Checksum         Usable   Usable     Size Status
     --------------------------- ----- ------ -------------- -------- -------- -------- --------
     1.0.1                       BSAS    7200 block           753.8GB  73.89GB  828.0GB zeroed
     1.0.10                      BSAS    7200 block                0B  73.89GB  828.0GB zeroed
    2 entries were displayed.