Document Scope
The mdadm utility is used to create and manage storage arrays with Linux's software RAID capabilities. It gives administrators great flexibility in combining individual storage devices into logical storage devices with greater performance or redundancy characteristics.
In this guide, we will describe some different configurations you can use to create a software-based RAID array using mdadm under Linux. We will be focusing on RAID levels 0, 1, and 5, as those are the most common RAID levels we implement and see our customers utilize. Keep in mind that the steps here, if followed, will give you a working RAID array but may not match your original configuration, nor do they cover ALL the various ways you can use this tool to set up your RAID.
Prerequisites
To follow the steps in this guide, you will need:
- A non-root user with sudo privileges
- A Linux operating system (like Rocky Linux or Ubuntu) with the mdadm utility installed (an example installation command follows this list)
- A basic understanding of RAID terminology and concepts. (To learn more about RAID and what RAID level is right for you, read our introduction to RAID article.)
- Multiple raw storage devices available on your server. The examples in this tutorial demonstrate how to configure various types of arrays on the server, so you will need some drives to configure. While this guide provides example commands, make sure that any commands you run use YOUR devices; do NOT simply copy and paste the commands as seen here.
- Depending on the array type, you will need two to four storage devices. These drives do not need to be formatted prior to following this guide.
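If mdadm is not already installed, it is available from the default repositories of most major distributions; the package is named mdadm on both Ubuntu and Rocky Linux (adjust the command for your system):
On Ubuntu or Debian:
sudo apt update && sudo apt install mdadm
On Rocky Linux or RHEL:
sudo dnf install mdadm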
Resetting Existing RAID Devices (Optional)
You can skip this section for now if you have not yet set up any arrays. This guide introduces a number of different RAID levels; if you wish to follow along and complete each one, you will likely want to reuse your storage devices after each section. You can return to this section, Resetting Existing RAID Devices, to reset your component storage devices before testing a new RAID level.
Warning: This process will completely destroy the array and any data written to it. Make sure that you are operating on the correct array and that you have copied any data you need to retain prior to destroying the array.
Begin by finding the active arrays in the /proc/mdstat file:
cat /proc/mdstat
Output:
Personalities : [raid0] [linear] [multipath] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid0 sdb[1] sda[0]
209584128 blocks super 1.2 512k chunks
unused devices: <none>
Then unmount the array from the filesystem:
sudo umount /dev/md0
Now stop and remove the array:
sudo mdadm --stop /dev/md0
Warning: Keep in mind that the /dev/sd* names can potentially change any time you reboot. Check them every time to make sure you are operating on the correct devices.
Find the devices that were used to build the array with the following command:
lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
Output:
NAME SIZE FSTYPE TYPE MOUNTPOINT
sda 100G linux_raid_member disk
sdb 100G linux_raid_member disk
sdc 100G disk
sdd 100G disk
vda 25G disk
├─vda1 24.9G ext4 part /
├─vda14 4M part
└─vda15 106M vfat part /boot/efi
vdb 466K iso9660 disk
After discovering the devices used to create an array, zero their superblocks, which hold the metadata for the RAID setup. Zeroing the superblock removes the RAID metadata and resets the devices to normal:
sudo mdadm --zero-superblock /dev/sda
sudo mdadm --zero-superblock /dev/sdb
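You can verify that the metadata was removed by examining one of the devices. After zeroing, mdadm should report something like "No md superblock detected" for each device:
sudo mdadm --examine /dev/sda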
It’s recommended to also remove any persistent references to the array. Edit the /etc/fstab file and comment out or remove the reference to your array using nano or your preferred text editor:
sudo nano /etc/fstab
You can comment out the line by inserting a hash symbol (#) at its beginning:
# /dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0
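If you previously saved the array definition to the mdadm configuration file, remove or comment out that entry as well. The file location varies by distribution; it is typically /etc/mdadm/mdadm.conf on Ubuntu or Debian and /etc/mdadm.conf on Rocky Linux or RHEL. Open the appropriate file and delete or comment out any ARRAY line that references /dev/md0:
sudo nano /etc/mdadm/mdadm.conf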
From here, you should be ready to reuse the storage devices individually, or as components of a different array.
Creating a RAID 0 Array
The RAID 0 array works by breaking up data into chunks and striping it across the available disks. This means that each disk contains a portion of the data and that multiple disks will be referenced when retrieving information.
- Requirements: Minimum of 2 storage devices.
- Primary benefit: Performance for both reads and writes, plus the full combined capacity of the disks (see the capacity note after this list).
- Things to keep in mind: Make sure that you have functional backups. A single device failure will destroy all data in the array.
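Capacity note: striping two 100G disks produces an array of roughly 200G of raw space; the df output later in this section shows about 196G once RAID metadata and filesystem overhead are accounted for.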
Identifying the Component Devices
To start, find the identifiers for the raw disks that you will be using:
lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
Output:
NAME SIZE FSTYPE TYPE MOUNTPOINT
sda 100G disk
sdb 100G disk
vda 25G disk
├─vda1 24.9G ext4 part /
├─vda14 4M part
└─vda15 106M vfat part /boot/efi
vdb 466K iso9660 disk
In this example, you have two disks without a filesystem, each 100G in size. These devices have been given the /dev/sda and /dev/sdb identifiers for this guide and will be the raw components used to build the array.
Creating the Array
To create a RAID 0 array with these components, pass them into the mdadm --create command. You will have to specify the device name you wish to create, the RAID level, and the number of devices. In this command example, you will name the device /dev/md0 and include the two disks that will make up the array:
sudo mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb
Confirm that the RAID was successfully created by checking the /proc/mdstat file:
cat /proc/mdstat
Output:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid0 sdb[1] sda[0]
209584128 blocks super 1.2 512k chunks
unused devices: <none>
This output reveals that the /dev/md0 device was created in the RAID 0 configuration using the /dev/sda and /dev/sdb devices.
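For a more detailed view than /proc/mdstat provides, you can also query the array directly. This prints the RAID level, array size, state, and component devices:
sudo mdadm --detail /dev/md0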
Creating and Mounting the Filesystem
Next, create a filesystem on the array:
sudo mkfs.ext4 -F /dev/md0
Then, create a mount point to attach the new filesystem:
sudo mkdir -p /mnt/md0
You can mount the filesystem with the following command:
sudo mount /dev/md0 /mnt/md0
Afterwards, check whether the new space is available:
df -h -x devtmpfs -x tmpfs
Output:
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 25G 1.4G 23G 6% /
/dev/vda15 105M 3.4M 102M 4% /boot/efi
/dev/md0 196G 61M 186G 1% /mnt/md0
The new filesystem is now mounted and accessible. Next, we want to add the new filesystem mount options to the /etc/fstab file for automatic mounting at boot. To do this, we'll need the UUID for our RAID:
sudo blkid | grep md
This output provides the UUID we'll need for the next step:
$ sudo blkid | grep md
/dev/md0: UUID="a0b9226c-23e8-4a77-a7d8-3c6c7f57c412" TYPE="ext4"
Note this UUID. Now edit your /etc/fstab file and add the following:
UUID=YOUR-UUID /mnt/md0 ext4 defaults 0 0
In a Rocky Linux virtual machine created for this guide, it looked like this:
# /etc/fstab
# Created by anaconda on Wed Jan 10 00:34:36 2024
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
UUID=625c16e9-4b40-467c-b47d-0029556a3b34 / xfs defaults 0 0
UUID=e1166107-d6d2-408a-9198-bd87d724386c /boot xfs defaults 0 0
UUID=6186-1CD4 /boot/efi vfat umask=0077,shortname=winnt 0 2
UUID=0f8cd2bd-acb5-4ed3-a92f-5492b56c0288 none swap defaults 0 0
UUID=a0b9226c-23e8-4a77-a7d8-3c6c7f57c412 /mnt/md0 ext4 defaults 0 0
Your RAID 0 array will now automatically mount at each boot.
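Note: An array that is not recorded in the mdadm configuration file may be assembled under a different name (such as /dev/md127) after a reboot. Mounting by UUID in /etc/fstab, as above, works regardless, but if you want the /dev/md0 name to persist you can save the array definition and rebuild your initramfs. The configuration file path and initramfs tooling vary by distribution; the following assumes stock Ubuntu and Rocky Linux layouts:
On Ubuntu or Debian:
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u
On Rocky Linux or RHEL:
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf
sudo dracut --force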
You’re now finished with your RAID 0 setup. If you want to try a different RAID level, follow the resetting instructions at the beginning of this tutorial before creating a new array type from one of the other options below!
Creating a RAID 1 Array
The RAID 1 array type is implemented by mirroring data across all available disks. Each disk in a RAID 1 array gets a full copy of the data, providing redundancy in the event of a device failure.
- Requirements: Minimum of 2 storage devices.
- Primary benefit: Redundancy in the event of a device failure.
- Things to keep in mind: Since every disk maintains a full copy of the data, the usable size of the array will never be larger than a single disk.
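For example, mirroring two 100G disks produces an array with roughly 100G of usable space, as the df output later in this section confirms.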
Identifying the Component Devices
To start, find the identifiers for the raw disks that you will be using:
lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
Output:
NAME SIZE FSTYPE TYPE MOUNTPOINT
sda 100G disk
sdb 100G disk
vda 25G disk
├─vda1 24.9G ext4 part /
├─vda14 4M part
└─vda15 106M vfat part /boot/efi
vdb 466K iso9660 disk
In this example, you have two disks without a filesystem, each 100G in size. These devices have been given the /dev/sda and /dev/sdb identifiers for this guide and will be the raw components used to build the array.
Creating the Array
To create a RAID 1 array with these components, pass them into the mdadm --create command. You will have to specify the device name you wish to create, the RAID level, and the number of devices. In this command example, you will name the device /dev/md0 and include the disks that will make up the array:
sudo mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
If the component devices you are using are not partitions with the boot flag enabled, you will likely receive the following warning. It is safe to respond with y and continue:
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
mdadm: size set to 104792064K
Continue creating array? y
The mdadm tool will start to mirror the drives. This can take some time to complete, but the array can be used during this time. You can monitor the progress of the mirroring by checking the /proc/mdstat file:
cat /proc/mdstat
Output:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdb[1] sda[0]
104792064 blocks super 1.2 [2/2] [UU]
[====>................] resync = 20.2% (21233216/104792064) finish=6.9min speed=199507K/sec
unused devices: <none>
The first line of this output shows that the /dev/md0 device was created in the RAID 1 configuration using the /dev/sda and /dev/sdb devices. The resync line shows the progress of the mirroring. You can continue to the next step while this process completes.
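If you would rather see the progress refresh in place instead of re-running the command, you can wrap it in watch, which re-runs a command every two seconds by default and highlights changes with the -d flag (press Ctrl+C to exit):
watch -d cat /proc/mdstat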
Creating and Mounting the Filesystem
Next, create a filesystem on the array:
sudo mkfs.ext4 -F /dev/md0
Then, create a mount point to attach the new filesystem:
sudo mkdir -p /mnt/md0
You can mount the filesystem with the following command:
sudo mount /dev/md0 /mnt/md0
Afterwards, check whether the new space is available:
df -h -x devtmpfs -x tmpfs
Output:
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 25G 1.4G 23G 6% /
/dev/vda15 105M 3.4M 102M 4% /boot/efi
/dev/md0 99G 60M 94G 1% /mnt/md0
The new filesystem is now mounted and accessible. Next, we want to add the new filesystem mount options to the /etc/fstab file for automatic mounting at boot. To do this, we'll need the UUID for our RAID:
sudo blkid | grep md
This output provides the UUID we'll need for the next step:
$ sudo blkid | grep md
/dev/md0: UUID="a0b9226c-23e8-4a77-a7d8-3c6c7f57c412" TYPE="ext4"
Note this UUID. Now edit your /etc/fstab file and add the following:
UUID=YOUR-UUID /mnt/md0 ext4 defaults 0 0
In a Rocky Linux virtual machine created for this guide, it looked like this:
# /etc/fstab
# Created by anaconda on Wed Jan 10 00:34:36 2024
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
UUID=625c16e9-4b40-467c-b47d-0029556a3b34 / xfs defaults 0 0
UUID=e1166107-d6d2-408a-9198-bd87d724386c /boot xfs defaults 0 0
UUID=6186-1CD4 /boot/efi vfat umask=0077,shortname=winnt 0 2
UUID=0f8cd2bd-acb5-4ed3-a92f-5492b56c0288 none swap defaults 0 0
UUID=a0b9226c-23e8-4a77-a7d8-3c6c7f57c412 /mnt/md0 ext4 defaults 0 0
Your RAID 1 array will now automatically mount at each boot.
You’re now finished with your RAID 1 setup. If you want to try a different RAID level, follow the resetting instructions at the beginning of this tutorial before creating a new array type.
Creating a RAID 5 Array
The RAID 5 array type is implemented by striping data across the available devices. One component of each stripe is a calculated parity block. If a device fails, the parity block and the remaining blocks can be used to calculate the missing data. The device that receives the parity block is rotated so that each device has a balanced amount of parity information.
- Requirements: Minimum of 3 storage devices.
- Primary benefit: Redundancy with more usable capacity.
- Things to keep in mind: While the parity information is distributed, one disk’s worth of capacity will be used for parity. RAID 5 can suffer from very poor performance when in a degraded state.
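As a capacity example: with N disks of equal size, RAID 5 provides roughly (N - 1) disks' worth of usable space. The three 100G disks used below therefore yield an array of roughly 200G, as the df output later in this section shows.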
Identifying the Component Devices
To start, find the identifiers for the raw disks that you will be using:
lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
Output:
NAME SIZE FSTYPE TYPE MOUNTPOINT
sda 100G disk
sdb 100G disk
sdc 100G disk
vda 25G disk
├─vda1 24.9G ext4 part /
├─vda14 4M part
└─vda15 106M vfat part /boot/efi
vdb 466K iso9660 disk
You have three disks without a filesystem, each 100G in size. These devices have been given the /dev/sda, /dev/sdb, and /dev/sdc identifiers for this session and will be the raw components you use to build the array.
Creating the Array
To create a RAID 5 array with these components, pass them into the mdadm --create command. You will have to specify the device name you wish to create, the RAID level, and the number of devices. In this command example, you will name the device /dev/md0 and include the disks that will make up the array:
sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc
The mdadm tool will start to configure the array. It uses the recovery process to build the array for performance reasons. This can take some time to complete, but the array can be used during this time. You can monitor the progress of the build by checking the /proc/mdstat file:
cat /proc/mdstat
Output:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdc[3] sdb[1] sda[0]
209582080 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
[>....................] recovery = 0.9% (957244/104791040) finish=18.0min speed=95724K/sec
unused devices: <none>
The first line of this output shows that the /dev/md0 device was created in the RAID 5 configuration using the /dev/sda, /dev/sdb, and /dev/sdc devices. The recovery line shows the progress of the build. You can continue the guide while this process completes.
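You can also check the array state directly rather than re-reading /proc/mdstat. While the array is building, the State line typically reads something like clean, degraded, recovering, and changes to clean once the process completes (exact wording can vary between mdadm versions):
sudo mdadm --detail /dev/md0 | grep -i state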
Creating and Mounting the Filesystem
Next, create a filesystem on the array:
sudo mkfs.ext4 -F /dev/md0
Then, create a mount point to attach the new filesystem:
sudo mkdir -p /mnt/md0
You can mount the filesystem with the following command:
sudo mount /dev/md0 /mnt/md0
Afterwards, check whether the new space is available:
df -h -x devtmpfs -x tmpfs
Output:
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 25G 1.4G 23G 6% /
/dev/vda15 105M 3.4M 102M 4% /boot/efi
/dev/md0 197G 60M 187G 1% /mnt/md0
The new filesystem is now mounted and accessible. Next, we want to add the new filesystem mount options to the /etc/fstab file for automatic mounting at boot. To do this, we'll need the UUID for our RAID:
sudo blkid | grep md
This output provides the UUID we'll need for the next step:
$ sudo blkid | grep md
/dev/md0: UUID="a0b9226c-23e8-4a77-a7d8-3c6c7f57c412" TYPE="ext4"
Note this UUID. Now edit your /etc/fstab file and add the following:
UUID=YOUR-UUID /mnt/md0 ext4 defaults 0 0
In a Rocky Linux virtual machine created for this guide, it looked like this:
# /etc/fstab
# Created by anaconda on Wed Jan 10 00:34:36 2024
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
UUID=625c16e9-4b40-467c-b47d-0029556a3b34 / xfs defaults 0 0
UUID=e1166107-d6d2-408a-9198-bd87d724386c /boot xfs defaults 0 0
UUID=6186-1CD4 /boot/efi vfat umask=0077,shortname=winnt 0 2
UUID=0f8cd2bd-acb5-4ed3-a92f-5492b56c0288 none swap defaults 0 0
UUID=a0b9226c-23e8-4a77-a7d8-3c6c7f57c412 /mnt/md0 ext4 defaults 0 0
Your RAID 5 array will now automatically mount at each boot.
You’re now finished with your RAID 5 setup. If you want to try a different RAID level, follow the resetting instructions at the beginning of this tutorial before creating a new array type.