Add drives to a Linux RAID for more storage

ICY DOCK 6 Bay Enclosure

My 6 terabyte, 4 drive RAID 5 array was starting to get tight on space. Upgrading the motherboard to a GIGABYTE X570 AORUS ELITE gave me 6 SATA ports, so I could finally populate the two empty bays in my 6 bay ICY DOCK enclosure. I’m adding two Seagate BarraCuda 2TB 2.5 inch internal hard drives to an existing 4 drive RAID 5 array built from the same model of drive. Here’s how to add drives to a Linux RAID.

List your drives

sudo fdisk -l


Disk /dev/nvme0n1: 465.76 GiB, 500107862016 bytes, 976773168 sectors
Disk model: Samsung SSD 970 EVO 500GB               
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 2FCD7A4E-17E5-4AB9-98B8-35096B48D597

Device           Start       End   Sectors   Size Type
/dev/nvme0n1p1    2048   1050623   1048576   512M EFI System
/dev/nvme0n1p2 1050624 976771071 975720448 465.3G Linux LVM


Disk /dev/sda: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: ST2000LM015-2E81
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sdb: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: ST2000LM015-2E81
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sdc: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: ST2000LM015-2E81
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 443C6298-2A04-43C3-AF58-69AE4C7B2EF4

Device     Start        End    Sectors  Size Type
/dev/sdc1   2048 3907029134 3907027087  1.8T Linux RAID


Disk /dev/sdd: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: ST2000LM015-2E81
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: CE55305E-34EE-4C89-8004-425D3A104E2A

Device     Start        End    Sectors  Size Type
/dev/sdd1   2048 3907029134 3907027087  1.8T Linux RAID


Disk /dev/sde: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: ST2000LM015-2E81
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: D95B8806-7C60-4541-ABE0-2EF71C18C34B

Device     Start        End    Sectors  Size Type
/dev/sde1   2048 3907029134 3907027087  1.8T Linux RAID


Disk /dev/sdf: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: ST2000LM015-2E81
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: EFA00AD8-6DBB-4B16-9572-5736E88361DB

Device     Start        End    Sectors  Size Type
/dev/sdf1   2048 3907029134 3907027087  1.8T Linux RAID


Disk /dev/mapper/vgubuntu-root: 464.3 GiB, 498539167744 bytes, 973709312 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vgubuntu-swap_1: 980 MiB, 1027604480 bytes, 2007040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/md0: 5.46 TiB, 6000787587072 bytes, 11720288256 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 524288 bytes / 1572864 bytes

As you can see, /dev/sda and /dev/sdb have no partition table, so those are the drives we will be using. If you’re reusing drives that already have partitions, it might be a little trickier to figure out which is which.
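
If it’s not obvious which drives are unused, lsblk gives a quick cross-check (a minimal sketch; the device names follow my output above and will vary on your system):

# empty drives should show no child partitions and no FSTYPE
lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT /dev/sda /dev/sdb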

Label your new drives to match the drives in the array

sudo parted -s -a optimal /dev/sda mklabel gpt
sudo parted -s -a optimal /dev/sdb mklabel gpt
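
If you want to double-check that the new labels took before going further, parted can print them back (optional; assumes the same device names as above):

sudo parted /dev/sda print
sudo parted /dev/sdb print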

Dump partition table from an existing array member to a new drive

sudo sfdisk -d /dev/sdc | sudo sfdisk /dev/sda
Checking that no-one is using this disk right now ... OK

Disk /dev/sda: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: ST2000LM015-2E81
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 0BC1639C-FBFD-4EBB-AF2E-47235065F95C

Old situation:

>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> Created a new GPT disklabel (GUID: 443C6298-2A04-43C3-AF58-69AE4C7B2EF4).
/dev/sda1: Created a new partition 1 of type 'Linux RAID' and of size 1.8 TiB.
/dev/sda2: Done.

New situation:
Disklabel type: gpt
Disk identifier: 443C6298-2A04-43C3-AF58-69AE4C7B2EF4

Device     Start        End    Sectors  Size Type
/dev/sda1   2048 3907029134 3907027087  1.8T Linux RAID

The partition table has been altered.
Calling ioctl() to re-read partition table.
Re-reading the partition table failed.: Permission denied
The kernel still uses the old table. The new table will be used at the next reboot or after you run partprobe(8) or partx(8).
Syncing disks.

sudo sfdisk -d /dev/sdc | sudo sfdisk /dev/sdb
Checking that no-one is using this disk right now ... OK

Disk /dev/sdb: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: ST2000LM015-2E81
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: B5DDD383-90DB-4375-917C-F0F8C2F6B47B

Old situation:

>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> Created a new GPT disklabel (GUID: 443C6298-2A04-43C3-AF58-69AE4C7B2EF4).
/dev/sdb1: Created a new partition 1 of type 'Linux RAID' and of size 1.8 TiB.
/dev/sdb2: Done.

New situation:
Disklabel type: gpt
Disk identifier: 443C6298-2A04-43C3-AF58-69AE4C7B2EF4

Device     Start        End    Sectors  Size Type
/dev/sdb1   2048 3907029134 3907027087  1.8T Linux RAID

The partition table has been altered.
Calling ioctl() to re-read partition table.
Re-reading the partition table failed.: Permission denied
The kernel still uses the old table. The new table will be used at the next reboot or after you run partprobe(8) or partx(8).
Syncing disks.

Re-read the partition tables on the new drive(s)

sudo partprobe
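
To confirm the kernel now sees the copied partitions without a reboot (again assuming /dev/sda and /dev/sdb), you can check either of these:

lsblk /dev/sda /dev/sdb
# or look at the kernel's own partition list
grep 'sd[ab]' /proc/partitions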

Check the details of your existing array

sudo mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Sun Jul 26 19:55:30 2020
        Raid Level : raid5
        Array Size : 5860144128 (5.46 TiB 6.00 TB)
     Used Dev Size : 1953381376 (1862.89 GiB 2000.26 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Thu Oct  6 00:55:31 2022
             State : clean 
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

              Name : skynet:0
              UUID : 81dd659c:69c9247e:7f0a404a:5a2db2f1
            Events : 109863

    Number   Major   Minor   RaidDevice State
       0       8       49        0      active sync   /dev/sdd1
       1       8       33        1      active sync   /dev/sdc1
       2       8       65        2      active sync   /dev/sde1
       4       8       81        3      active sync   /dev/sdf1
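
If you just want a compact view of the same array state, /proc/mdstat shows it without needing root:

cat /proc/mdstat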

Add drives to a Linux RAID array

sudo mdadm --add /dev/md0 /dev/sda1
mdadm: added /dev/sda1

sudo mdadm --add /dev/md0 /dev/sdb1
mdadm: added /dev/sdb1

sudo mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Sun Jul 26 19:55:30 2020
        Raid Level : raid5
        Array Size : 5860144128 (5.46 TiB 6.00 TB)
     Used Dev Size : 1953381376 (1862.89 GiB 2000.26 GB)
      Raid Devices : 4
     Total Devices : 6
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Thu Oct  6 00:56:40 2022
             State : clean 
    Active Devices : 4
   Working Devices : 6
    Failed Devices : 0
     Spare Devices : 2

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

              Name : skynet:0
              UUID : 81dd659c:69c9247e:7f0a404a:5a2db2f1
            Events : 109865

    Number   Major   Minor   RaidDevice State
       0       8       49        0      active sync   /dev/sdd1
       1       8       33        1      active sync   /dev/sdc1
       2       8       65        2      active sync   /dev/sde1
       4       8       81        3      active sync   /dev/sdf1

       5       8        1        -      spare   /dev/sda1
       6       8       17        -      spare   /dev/sdb1

Notice that the new drives have been added as spares; they won’t become active members until we grow the array.

Grow the array

sudo mdadm --grow /dev/md0 --raid-devices=6

Watch the progress of the rebuild

sudo watch cat /proc/mdstat

This will take a while (about 60 hours for me).
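
If the reshape is crawling, the kernel’s md rebuild speed limits can be raised (values are in KiB/s; the numbers below are illustrative, and higher limits mean more contention with normal I/O):

sudo sysctl -w dev.raid.speed_limit_min=50000
sudo sysctl -w dev.raid.speed_limit_max=500000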

Resize the filesystem

Now that we have grown the “physical” device, we still need to expand the filesystem so the OS can use the added storage.

sudo resize2fs /dev/md0
resize2fs 1.46.5 (30-Dec-2021)
Filesystem at /dev/md0 is mounted on /mnt/md0; on-line resizing required
old_desc_blocks = 699, new_desc_blocks = 1165
The filesystem on /dev/md0 is now 2441726720 (4k) blocks long.
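
As a final sanity check, confirm the extra space is visible at the mount point reported by resize2fs above:

df -h /mnt/md0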
