Introduction

In a previous article I explained how to configure software RAID 5 in Linux. In this article we are going to learn how to increase the storage capacity of an existing software RAID 5 array. Now the question is: why would we need to increase the RAID 5 storage capacity, and what is it used for?


Let’s take a scenario: you have a mail server in your organisation running on Linux, with its hard disks configured using software RAID 5. Thousands of users use that mail server, and one day you find that its storage is getting full. For this kind of situation, software RAID provides a nice feature that lets us extend the RAID 5 array’s storage capacity.

So let’s have a look at the configuration steps to increase existing software RAID 5 storage capacity.

Note : I recommend you first read my previous article, How to configure Software RAID 5 in Linux, so that you can understand the concept more clearly.

Follow the Steps Below to Increase Software RAID 5 Storage Capacity

Here I am assuming that you already have a RAID 5 array configured. My RAID 5 array details are mentioned below; refer to the sample output.

As you can see below, my RAID 5 array size is 2.17 GB and it consists of 3 RAID devices.

[root@localhost ~]# mdadm --detail /dev/md0    # Checking the RAID 5 array details
/dev/md0:
        Version : 1.2
  Creation Time : Thu Apr 13 23:05:41 2017
     Raid Level : raid5
     Array Size : 2117632 (2.02 GiB 2.17 GB)
  Used Dev Size : 1058816 (1034.17 MiB 1084.23 MB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Thu Apr 13 23:07:02 2017
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : localhost.localdomain:0  (local to host localhost.localdomain)
           UUID : a8abc922:dc3713f0:31bac3ba:10538cea
         Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       3       8       49        2      active sync   /dev/sdd1
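The Array Size figure above follows directly from the RAID 5 formula: usable capacity is (number of member devices - 1) times the per-device size, because one device's worth of space is consumed by distributed parity. A quick shell sketch using the figures from the output above:

```shell
# RAID 5 usable capacity = (number of member devices - 1) x per-device size;
# one device's worth of space holds the distributed parity.
# Figures taken from the mdadm --detail output above (sizes in 1 KiB blocks).
DEV_SIZE_KIB=1058816   # "Used Dev Size"
NUM_DEVICES=3          # "Raid Devices"
ARRAY_SIZE_KIB=$(( (NUM_DEVICES - 1) * DEV_SIZE_KIB ))
echo "Array Size: ${ARRAY_SIZE_KIB} KiB"
```

This matches the 2117632 blocks that mdadm reports.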

You can also check the RAID device details in /proc/mdstat.

[root@localhost ~]# cat /proc/mdstat    # Checking RAID 5 device details
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd1[3] sdc1[1] sdb1[0]
      2117632 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>
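If you ever want to check the member state from a script, the bracketed status field (where each "U" is a healthy member and "_" a failed one) can be pulled out with grep. This is a hedged sketch run against the sample text above rather than the live file:

```shell
# Extract the member-state field (e.g. "[UUU]" = all members up) from
# /proc/mdstat-style output. The sample text is copied from the output
# above; on a live system you would read /proc/mdstat itself.
MDSTAT='md0 : active raid5 sdd1[3] sdc1[1] sdb1[0]
      2117632 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]'
STATE=$(echo "$MDSTAT" | grep -o '\[[U_]\+\]')
echo "$STATE"
```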

Below I have shown my RAID 5 device, which is mounted on the /mydata directory.

[root@localhost ~]# df -h    # Checking mounted devices
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2        18G  2.4G   15G  15% /
tmpfs           495M  224K  495M   1% /dev/shm
/dev/sda1       291M   34M  242M  13% /boot
/dev/md0        2.0G   68M  1.9G   4% /mydata

So now let’s go ahead and increase the software RAID 5 storage by adding a new hard disk. Here I have one new hard disk, /dev/sde.

[root@localhost ~]# fdisk -l | grep /dev/sde
Disk /dev/sde: 2147 MB, 2147483648 bytes


Now create a new partition on /dev/sde and change its partition ID to fd, the ID for Linux RAID autodetect. Refer to the sample output below.

[root@localhost ~]# fdisk /dev/sde    # Creating a new partition
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xf1ee24f7.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): n    # "n" for new partition
Command action
   e   extended
   p   primary partition (1-4)
p    # "p" for primary partition
Partition number (1-4): 1
First cylinder (1-261, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-261, default 261): +1G    # Assigning the size of the partition

Command (m for help): t    # Changing the partition ID
Selected partition 1
Hex code (type L to list codes): fd    # "fd" is the partition ID for Linux RAID autodetect
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): w    # Saving the partition table
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
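The interactive dialogue above can also be scripted by feeding fdisk the same keystrokes on standard input, which is handy when preparing several disks. The sketch below only prints the keystroke sequence; actually piping it into fdisk rewrites the partition table, so treat the commented line as destructive and double-check the device name first.

```shell
# Print the fdisk keystrokes used in the interactive session above:
# n (new), p (primary), 1 (partition number), <Enter> (default first
# cylinder), +1G (size), t (change type), fd (Linux raid autodetect),
# w (write table).
fdisk_keys() {
    printf '%s\n' n p 1 '' +1G t fd w
}
fdisk_keys
# To actually apply it (DESTRUCTIVE, rewrites /dev/sde's partition table):
#   fdisk_keys | fdisk /dev/sde
```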

On a Linux system you would normally reboot after creating a new partition so that the kernel re-reads the partition table, but you can skip the reboot by using the partprobe command, which refreshes the partition table in the kernel.

[root@localhost ~]# partprobe /dev/sde # Refreshing the Partition Table

As we can see in the output below, the new partition /dev/sde1 is ready to be attached to the existing software RAID 5 array.

[root@localhost ~]# fdisk -l | grep /dev/sde
Disk /dev/sde: 2147 MB, 2147483648 bytes
/dev/sde1               1         132     1060258+  fd  Linux raid autodetect

To add the new hard disk to the existing RAID 5 device, use the command below.

[root@localhost ~]# mdadm --manage /dev/md0 --add /dev/sde1    # Adding the new hard disk to the existing RAID 5 device
mdadm: added /dev/sde1

Where :

/dev/md0 – Existing RAID 5 device

/dev/sde1 – Newly added hard disk

After adding the new hard disk to the existing software RAID 5 array, check the details with the command below.

As we can see in the output below, the newly added hard disk is attached to the RAID 5 device but is still in the spare state, and the array size has not yet increased.

[root@localhost ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Thu Apr 13 23:05:41 2017
     Raid Level : raid5
     Array Size : 2117632 (2.02 GiB 2.17 GB)
  Used Dev Size : 1058816 (1034.17 MiB 1084.23 MB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Thu Apr 13 23:40:57 2017
          State : clean
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

           Name : localhost.localdomain:0  (local to host localhost.localdomain)
           UUID : a8abc922:dc3713f0:31bac3ba:10538cea
         Events : 19

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       3       8       49        2      active sync   /dev/sdd1
       4       8       65        -      spare   /dev/sde1

You can also check the RAID 5 device status in /proc/mdstat.

[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sde1[4](S) sdc1[1] sdb1[0] sdd1[3]
      2117632 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>

Where :

S – Spare device

So use the command below to grow the RAID 5 array to 4 devices.

[root@localhost ~]# mdadm --grow /dev/md0 -n4    # Growing the RAID 5 array to 4 devices
mdadm: Need to backup 3072K of critical section..

Now let’s check the status of the device.

As you can see below, the RAID 5 device is reshaping, which means it is redistributing data across the newly added hard disk. The reshape status here is 5% complete. Run the command again after some time and you will find the RAID 5 device has grown.

[root@localhost ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Thu Apr 13 23:05:41 2017
     Raid Level : raid5
     Array Size : 2117632 (2.02 GiB 2.17 GB)
  Used Dev Size : 1058816 (1034.17 MiB 1084.23 MB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Thu Apr 13 23:43:59 2017
          State : clean, reshaping
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

 Reshape Status : 5% complete
  Delta Devices : 1, (3->4)

           Name : localhost.localdomain:0  (local to host localhost.localdomain)
           UUID : a8abc922:dc3713f0:31bac3ba:10538cea
         Events : 37

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       3       8       49        2      active sync   /dev/sdd1
       4       8       65        3      active sync   /dev/sde1
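The reshape can take hours on large disks, and the filesystem should only be resized once it finishes. A small hedged sketch that polls /proc/mdstat until no reshape is in progress (on a system without an active reshape it returns immediately):

```shell
# Poll /proc/mdstat until no md reshape is in progress. If the file is
# missing or contains no "reshape" line, the loop body never runs and
# the function returns immediately.
wait_reshape() {
    while grep -q reshape /proc/mdstat 2>/dev/null; do
        sleep 10
    done
}
wait_reshape
echo "reshape finished (or none in progress)"
```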

After running the above command again, you can see that our software RAID 5 device has grown to 3.25 GB and the total number of RAID devices is now 4.

[root@localhost ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Thu Apr 13 23:05:41 2017
     Raid Level : raid5
     Array Size : 3176448 (3.03 GiB 3.25 GB)
  Used Dev Size : 1058816 (1034.17 MiB 1084.23 MB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Thu Apr 13 23:44:08 2017
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : localhost.localdomain:0  (local to host localhost.localdomain)
           UUID : a8abc922:dc3713f0:31bac3ba:10538cea
         Events : 46

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       3       8       49        2      active sync   /dev/sdd1
       4       8       65        3      active sync   /dev/sde1

My RAID 5 device has grown, but when I check the mounted devices with the df -h command it still shows the old size. That is because the ext filesystem on the array has not been expanded yet; we grow it to fill the device with the resize2fs command.

[root@localhost ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2        18G  2.4G   15G  15% /
tmpfs           495M  224K  495M   1% /dev/shm
/dev/sda1       291M   34M  242M  13% /boot
/dev/md0        2.0G   68M  1.9G   4% /mydata

So run the command below to grow the filesystem to fill the resized RAID 5 device.

[root@localhost ~]# resize2fs /dev/md0
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/md0 is mounted on /mydata; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 1
Performing an on-line resize of /dev/md0 to 794112 (4k) blocks.
The filesystem on /dev/md0 is now 794112 blocks long.
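The block count resize2fs reports can be sanity-checked against the mdadm figures: the grown array size in 1 KiB blocks, divided by 4, gives the filesystem's 4 KiB block count. (Note that resize2fs works on ext2/3/4 filesystems; an XFS filesystem would be grown with xfs_growfs instead.)

```shell
# Cross-check: the grown Array Size from mdadm (in 1 KiB blocks) divided
# by 4 should equal the 4 KiB block count printed by resize2fs above.
ARRAY_SIZE_KIB=3176448   # from mdadm --detail after the grow
FS_BLOCKS=$(( ARRAY_SIZE_KIB / 4 ))
echo "expected 4k blocks: ${FS_BLOCKS}"
```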

After the above process you will find the RAID 5 volume resized. Refer to the sample output below.

[root@localhost ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2        18G  2.4G   15G  15% /
tmpfs           495M  224K  495M   1% /dev/shm
/dev/sda1       291M   34M  242M  13% /boot
/dev/md0        3.0G   68M  2.8G   3% /mydata

After completing the configuration, don’t forget to save it to /etc/mdadm.conf by using the command below.

[root@localhost ~]# mdadm --detail --scan --verbose >> /etc/mdadm.conf

Note : It’s important to save the RAID configuration; if you restart the system without saving it, the array may not be assembled automatically and you won’t find the RAID device.

After saving the configuration you can check the RAID configuration file; it will look like the output below.

[root@localhost ~]# cat /etc/mdadm.conf
ARRAY /dev/md0 level=raid5 num-devices=4 metadata=1.2 name=localhost.localdomain:0 UUID=a8abc922:dc3713f0:31bac3ba:10538cea
   devices=/dev/sdb1,/dev/sdc1,/dev/sdd1,/dev/sde1
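One caveat with appending via ">>": running the save step more than once leaves duplicate ARRAY lines in /etc/mdadm.conf. A hedged helper sketch that keeps only the last entry per array name; dedupe_arrays is illustrative, not part of mdadm, and it is run here against a throwaway sample file rather than the real config:

```shell
# Keep only the last ARRAY line per device name, passing other lines through.
dedupe_arrays() {
    awk '/^ARRAY/ { last[$2] = $0; next } { print }
         END { for (dev in last) print last[dev] }' "$1"
}
# Sample file with a stale 3-device entry followed by the current 4-device one.
printf '%s\n' \
    'ARRAY /dev/md0 level=raid5 num-devices=3 UUID=a8abc922:dc3713f0:31bac3ba:10538cea' \
    'ARRAY /dev/md0 level=raid5 num-devices=4 UUID=a8abc922:dc3713f0:31bac3ba:10538cea' \
    > /tmp/mdadm.conf.sample
dedupe_arrays /tmp/mdadm.conf.sample
```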

If you found this article useful then like us or subscribe, or if you have something to say, feel free to comment in the comment box below.