
Replacing an SSD (AVE)

This section describes how to replace an SSD on a host in an ACOS (AVE) cluster.

Applicable scenario

The procedure for replacing an SSD depends on whether the cluster is deployed with tiered storage and on the usage of the SSD. The steps described in this section apply only to the following two scenarios:

  • The cluster uses tiered storage, and the SSD to be replaced is a Cache disk, Cache disk with metadata partition, Data disk, or Data disk with metadata partition.

  • The cluster does not use tiered storage, and the SSD to be replaced is a Data disk with metadata partition.

For a cluster not using tiered storage, if the SSD serves as a Data disk without a metadata partition, follow the replacement procedure in Replacing an HDD.

For an SSD used as the Arcfra system disk, refer to the procedure in Replacing the Arcfra system disk.

Preparation

  • Confirm that the SSD to be installed is compatible with the server model. You can check the compatibility using the Arcfra Hardware Compatibility Checker. In addition, ensure that the capacity of the new SSD is not smaller than that of the original one.

  • Identify and record the serial number of the physical node where the SSD is located, the rack location, and the slot of the disk to be replaced. You can refer to Locating a physical disk to locate the physical disk on the host by flashing its locator light.
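If the server's backplane supports enclosure LED control, the locator light can also be driven from the host shell. A minimal sketch, assuming the `ledmon` package (which provides `ledctl`) is installed; `blink_disk` is a hypothetical helper name, not part of the product tooling:

```shell
# Hypothetical helper: flash a drive slot's locator LED via ledmon's ledctl.
# Assumes ledctl is installed and the backplane exposes LED control.
blink_disk() {
  dev="$1"
  [ -b "$dev" ] || { echo "not a block device: $dev" >&2; return 1; }
  ledctl locate="$dev"                     # start blinking the slot LED
  printf 'Locator LED on for %s; press Enter to turn it off...' "$dev"
  read -r _
  ledctl locate_off="$dev"                 # stop blinking
}
```

On servers without LED support, fall back to matching the serial number reported by `smartctl -i` against the drive's physical label.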

  • Log in to AOC and unmount this SSD.

Procedure

  1. Remove the SSD from the host as shown in the figure below.

    1. Press the release button to open the disk tray handle.
    2. Hold the tray handle and slide the tray out of the disk slot.

  2. Log in to AOC. An alert appears indicating that the system has detected a physical disk removed from the host. If the host is a multi-node high-density server or a blade server, the alert will also display the location of the physical disk slot within the chassis.

  3. Install a new SSD on the physical server as shown in the figure below.

    1. Slide the new disk tray into the disk slot.
    2. Close the tray handle to lock the disk in place.

  4. Log in to AOC. An alert appears indicating that the system has detected a new physical disk inserted into the host. If the host is a multi-node high-density server or a blade server, the alert will also display the location of the physical disk slot within the chassis.

  5. Use SSH to log in to the host where the SSD is installed, and run the lsblk command to view the newly installed SSD. At this point, the disk has not been partitioned, as shown in the following example, where sdb represents the installed SSD.

    ```
    NAME     MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
    sda        8:0    0  59.6G  0 disk
    └─sda1     8:1    0   200M  0 part  /boot
    sdb        8:16   0 372.6G  0 disk
    sdc        8:32   0 372.6G  0 disk
    ├─sdc1     8:33   0    45G  0 part
    │ └─md127  9:127  0    45G  0 raid1 /
    ├─sdc2     8:34   0    20G  0 part
    │ └─md0    9:0    0    20G  0 raid1 /var/lib/zbs/metad
    ├─sdc3     8:35   0    10G  0 part
    └─sdc4     8:36   0 287.6G  0 part
    sdd        8:48   0 931.5G  0 disk
    └─sdd1     8:49   0 931.5G  0 part
    sde        8:64   0 931.5G  0 disk
    └─sde1     8:65   0 931.5G  0 part
    sdf        8:80   0 931.5G  0 disk
    └─sdf1     8:81   0 931.5G  0 part
    sdg        8:96   0 931.5G  0 disk
    └─sdg1     8:97   0 931.5G  0 part
    sr0       11:0    1   1.4G  0 rom
    ```
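    To pick out the still-unpartitioned new disk programmatically rather than by eye, the tree output can be reduced with `lsblk -rno NAME,TYPE,PKNAME` (raw format, no headings, plus the parent kernel device name). A small sketch, demonstrated against canned output; `find_blank_disks` is an illustrative name, not part of the product tooling:

    ```shell
    # Print every disk that has no partitions, given
    # `lsblk -rno NAME,TYPE,PKNAME` output on stdin.
    find_blank_disks() {
      awk '$2 == "disk" { blank[$1] = 1 }      # assume each disk is blank at first
           $2 == "part" { delete blank[$3] }   # a partition clears its parent disk
           END { for (d in blank) print d }'
    }

    # Canned sample mirroring the example above: sdb is the only
    # disk without partitions.
    printf '%s\n' 'sda disk' 'sda1 part sda' 'sdb disk' 'sdc disk' 'sdc1 part sdc' |
      find_blank_disks   # prints: sdb
    ```

    On the host itself, run `lsblk -rno NAME,TYPE,PKNAME | find_blank_disks`.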
  6. Log in to AOC and mount the new SSD.

  7. After the SSD is successfully mounted, you can check its mounting status in the physical disk list of the host. If the SSD is marked as Mounted, the replacement is complete.

  8. If the SSD fails to be mounted via AOC, you can follow the steps below to mount the SSD through the command line, where sdb represents the SSD to be mounted.

    • The replaced SSD is either a cache disk containing a metadata partition (tiered storage mode) or a data disk containing a metadata partition (non-tiered storage mode):

      zbs-deploy-manage mount-disk /dev/sdb smtx_system

      Run the lsblk command again to verify that the newly installed sdb is successfully partitioned.

      ```
      NAME     MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
      sdb        8:16   0 372.6G  0 disk
      ├─sdb1     8:17   0    85G  0 part
      │ └─md127  9:127  0    85G  0 raid1 /
      ├─sdb2     8:18   0    20G  0 part
      │ └─md0    9:0    0    20G  0 raid1 /var/lib/zbs/metad
      ├─sdb3     8:19   0    10G  0 part
      └─sdb4     8:20   0 247.6G  0 part
      ```
    • The replaced SSD is a data disk containing a metadata partition (all-flash configuration with tiered storage mode using single-type SSDs):

      zbs-deploy-manage mount-disk /dev/sdb smtx_system

      Run the lsblk command again to verify that the newly installed sdb is successfully partitioned.

      ```
      NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
      sdb         8:16   0   500G  0 disk
      ├─sdb4      8:20   0    20G  0 part
      ├─sdb2      8:18   0   100G  0 part
       └─md0     9:0    0   100G  0 raid1 /var/lib/zbs/metad
      ├─sdb5      8:21   0   175G  0 part
      ├─sdb3      8:19   0    10G  0 part
      └─sdb1      8:17   0   185G  0 part
        └─md127   9:127  0 184.9G  0 raid1 /
      ```
    • The replaced SSD is a data disk containing no metadata partitions (all-flash configuration with tiered storage mode using single-type SSDs):

      zbs-deploy-manage mount-disk /dev/sdb data

      Run the lsblk command again to verify that the newly installed sdb is successfully partitioned.

      ```
      NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
      sdb         8:48   0   500G  0 disk
      ├─sdb2      8:50   0    48G  0 part
      ├─sdb3      8:51   0   432G  0 part
      └─sdb1      8:49   0    10G  0 part 
      ```
    • The replaced SSD is a cache disk containing no metadata partitions (tiered storage mode):

      zbs-deploy-manage mount-disk /dev/sdb cache

      Run the lsblk command again to verify that the newly installed sdb is successfully partitioned.

      ```
      NAME     MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
      sdb        8:16   0 372.6G  0 disk
      ├─sdb1     8:19   0    10G  0 part
      └─sdb2     8:20   0 362.6G  0 part
      ```
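    The four cases above differ only in the final argument passed to `zbs-deploy-manage mount-disk`. As a memory aid, the mapping can be sketched as a small shell function; the role names are illustrative labels, not product terminology, and `smtx_system` covers any replacement SSD that carries a metadata partition:

    ```shell
    # Map the replaced disk's role to the mount type expected by
    # `zbs-deploy-manage mount-disk <device> <type>`.
    # Role names here are illustrative, not product terminology.
    mount_type() {
      case "$1" in
        cache_with_metadata|data_with_metadata) echo smtx_system ;;  # metadata partition present
        data)                                   echo data ;;         # plain data disk
        cache)                                  echo cache ;;        # plain cache disk
        *) echo "unknown disk role: $1" >&2; return 1 ;;
      esac
    }
    ```

    For example, `zbs-deploy-manage mount-disk /dev/sdb "$(mount_type cache)"` is equivalent to the last case above.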