Migrating a Single Disk to a RAID Mirror on Ubuntu 20.04

Summary

This procedure is based on my own desktop installation of Ubuntu 20.04.2. I had a single-disk installation on an NVMe drive containing the following partitions: EFI, and root (including boot). My goal was to obtain a second NVMe drive of the same make and model and migrate to an mdadm RAID installation without a backup and restore. In searching, I found many tutorials, but they were quite old, and some things have changed since they were written. If you have a different partition layout, you’ll have to adjust accordingly. Despite my goal, I still took a backup in advance, and you should too! If you understand the steps, the risk should be close to nonexistent.

Some Definitions

My original drive was /dev/nvme0n1: partition 1 was EFI and partition 2 was root/boot, of type ext4. The new drive was /dev/nvme1n1. The installation used EFI booting with GPT partitioning, and I am using the bash shell. All commands are run as root, so before starting:

$ sudo -s

Software you may need to install

You may not have all of the packages needed to follow these instructions, so:

$ apt install mdadm

Partition the new drive

You should check your existing partitions to see if they match what I started with via:

$ sgdisk -p /dev/nvme0n1
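
For a layout like mine, the tail of the output should look roughly like this (your sector counts and sizes will differ; the Code column is the key, EF00 for EFI and 8300 for a plain Linux filesystem):

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048         1050623   512.0 MiB   EF00  EFI system partition
   2         1050624       500118158   238.0 GiB   8300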

Remember to replace my device names (and, if needed, partition numbers) with yours wherever they appear. We’re going to copy the partitions and layout of the old drive to the new drive:

$ sgdisk -R /dev/nvme1n1 /dev/nvme0n1
$ sgdisk -G /dev/nvme1n1
$ sgdisk -t 2:FD00 /dev/nvme1n1
$ partprobe

The first command copies the partition table from the old device to the new device. Note that the parameters look backwards, but they are not; that’s how sgdisk works (the target comes first).

The second command generates new UUIDs for the new drive and its partitions, since we don’t want or need duplicate UUIDs.

The third command changes the partition type of the root/boot partition, which for me was partition 2 as numbered by sgdisk. FD00 is the sgdisk type code for mdadm RAID on GPT.

The last command tells the kernel about the new partitions.
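
Before proceeding, you can double-check the copy by printing the new drive’s table:

$ sgdisk -p /dev/nvme1n1

Partition 2 should now show type code FD00 (Linux RAID).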

Create a RAID Array

We’ll be creating an mdadm mirror array. For now, only the new drive will be in the array; we’ll add the existing drive later, once we can boot from the RAID array. Execute:

$ mdadm --create /dev/md0 --level=mirror --raid-devices=2 missing /dev/nvme1n1p2
$ mdadm --examine --scan >/etc/mdadm/mdadm.conf

The first command creates a new mdadm mirrored array with a device name of /dev/md0 and 2 devices. One device is the root/boot partition on the new drive (partition 2 for me); the other is marked missing for now. Answer y to the warning prompt.

The second command saves our RAID configuration.
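
You can confirm the degraded array came up by checking /proc/mdstat; the output should look something like this (block count will differ):

$ cat /proc/mdstat
md0 : active raid1 nvme1n1p2[1]
      249549824 blocks super 1.2 [2/1] [_U]

The [_U] means a 2-device mirror with one slot empty, which is exactly what we expect at this point.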

Create ext4 Filesystem

Our RAID device /dev/md0 is just a container; it must be initialized with a filesystem before use. We’re using ext4:

$ mkfs.ext4 /dev/md0
$ mkdir /mnt/root
$ mount /dev/md0 /mnt/root

The first command formats the RAID container with an ext4 filesystem.

The second command creates a mount point to use with the new filesystem.

The third command mounts the new filesystem so we can use it.
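
A quick df will confirm the new filesystem is mounted and empty before we start copying:

$ df -h /mnt/root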

Copy Our Files Over

We’ll want to copy all files from our existing drive to the new RAID drive:

$ rsync -aAHXx / /mnt/root
$ rsync -aAHX /boot/efi/ /mnt/root/boot/efi

The first command copies all files from our root partition; the -x flag keeps rsync from descending into any other mounted filesystems, which is why the EFI partition needs its own copy.

The second command does the same for our EFI partition.
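
As a rough sanity check that the copy completed, compare used space on the source and destination:

$ df -h / /mnt/root

The Used columns should be in the same ballpark; they won’t match exactly, since the copy excludes other mounted filesystems.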

Chroot into our new filesystem

We’ll be making use of chroot to make things a little simpler and easier. This lets us work as if we were already booted on the new system.

$ mount --bind /dev /mnt/root/dev
$ mount --bind /sys /mnt/root/sys
$ mount -t proc /proc /mnt/root/proc
$ chroot /mnt/root

The first 3 commands set up our chroot environment by making sure key directories from the running system remain accessible inside it.

The last command does the actual chroot, so now it is as if we were booted on the RAID drive.

Modify fstab

We need to modify /etc/fstab on our new RAID drive, since it currently points to the old drive, meaning we won’t boot onto the RAID:

$ blkid /dev/md0
$ mkfs.vfat /dev/nvme1n1p1
$ blkid /dev/nvme1n1p1
$ vi /etc/fstab

We’ll need the UUID of our md0 device (first command) and of the EFI partition (third command) to modify /etc/fstab. The new EFI partition has no filesystem yet, and a partition with no filesystem has no filesystem UUID for blkid to report, so we format it with mkfs.vfat first. My EFI partition was number 1.

Using your favorite editor (vi shown here), edit /etc/fstab and replace the UUIDs for / and /boot/efi with the new UUIDs discovered above. For me, the modified fstab entries now look like:

UUID=ae87a6c4-cb56-432f-b1dd-8c4aec0ffb87 /               ext4    errors=remount-ro 0       1
UUID=4599-5926  /boot/efi       vfat    umask=0077,nofail      0       1
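
Before moving on, findmnt (from util-linux) can sanity-check the edited file, verifying the fstab syntax and that the referenced UUIDs actually exist:

$ findmnt --verify

Fix anything it reports as an error before you reboot.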

Setup Our New Boot Environment

Now it’s time to recreate grub and initramfs:

$ uname -r
$ update-initramfs -k 5.8.0-44-generic -c
$ update-grub
$ mount /boot/efi
$ grub-install /dev/nvme1n1

The first command shows our current kernel version. For me, this was 5.8.0-44-generic.

The second command generates our new initramfs image so we can boot with our new environment using RAID. Replace the kernel version with the one found by the first command.

The third command updates grub.

The fourth command mounts the EFI partition we formatted in the previous section, using the new /etc/fstab entry.

The last command installs grub on our new drive.
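
A successful grub-install is quiet; you should see something like:

Installing for x86_64-efi platform.
Installation finished. No error reported.

If you see errors instead, resolve them before rebooting.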

Test RAID Boot Environment

Now it’s time to test our new RAID setup and see if we can boot. Using the GUI or command line, reboot your machine. We’ll want to pick a new boot drive when booting; how you do this differs depending on your motherboard. For mine, I simply entered BIOS setup, where there was an option to temporarily change the boot device. This may differ for you, but you want to pick the EFI partition on the new drive, not the old one. When you get to the grub menu, you can check this: just hit ’e’ and make sure there is a line that says:

insmod mdraid1x

If that line is present, you have likely picked the correct drive and partition. If not, reboot and try a different boot entry.

Check Boot Results

Check if you have booted on the correct drive:

$ df

You should see /dev/md0 mounted at / and /dev/nvme1n1p1 mounted at /boot/efi. Here’s what mine looks like:

/dev/md0                  124369340   32305552    85703060  28% /
/dev/nvme1n1p1               497696       7988      489708   2% /boot/efi

Congratulations, you are now booted into (degraded) RAID!

Prepare Old Drive

Now that we are running on a degraded RAID, let’s prepare the old drive so we can add it to the mirror set:

$ sgdisk -R /dev/nvme0n1 /dev/nvme1n1
$ sgdisk -G /dev/nvme0n1

The first command copies the partition table from the RAID drive we booted from to the old drive.

The second command generates a new UUID for the drive and each partition.

Add Old Drive to RAID

We are ready to get our mirror running:

$ mdadm --manage /dev/md0 --add /dev/nvme0n1p2
$ mdadm --detail /dev/md0

The first command adds our old drive to the RAID set so we can have a full mirror.

The second command checks the status of the now-complete mirror array; it should say it is rebuilding.
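
You can also watch the rebuild in /proc/mdstat; while syncing it shows a recovery line along these lines (numbers will differ):

$ cat /proc/mdstat
md0 : active raid1 nvme0n1p2[2] nvme1n1p2[1]
      249549824 blocks super 1.2 [2/1] [_U]
      [===>.................]  recovery = 18.3% (45678912/249549824) finish=16.5min speed=204800K/sec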

Configure Second EFI Partition

We need to make sure both EFI partitions get mounted so that both can be updated whenever Ubuntu issues an update:

$ mkfs.vfat /dev/nvme0n1p1
$ blkid /dev/nvme0n1p1
$ vi /etc/fstab
$ mkdir /boot/eficopy
$ mount /boot/eficopy
$ rsync -aAHX /boot/efi/ /boot/eficopy
$ dpkg-reconfigure grub-efi-amd64-signed

The mkfs.vfat command formats the old EFI partition, giving it a fresh filesystem UUID, which blkid then reports. Use vi or your favorite editor to edit /etc/fstab: duplicate the line for /boot/efi, change /boot/efi to /boot/eficopy, and change the UUID to the one blkid reported. I also added ,nofail to each EFI line so the system can still boot if one drive is missing, making mine look like this:

UUID=4599-5926  /boot/efi       vfat    umask=0077,nofail      0       1
UUID=40D0-E7C3  /boot/eficopy       vfat    umask=0077,nofail      0       1

The mkdir and mount commands create the mount point and mount the copy, and rsync populates it from the live EFI partition. As long as both drives are in the boot sequence defined in your motherboard config, it should boot from either drive if one fails.

The last command tells Ubuntu to maintain both EFI partitions. When you run it, select both drives you used for RAID.

Finalize

Just one more step:

$ mdadm --examine --scan >/etc/mdadm/mdadm.conf

This updates our RAID config file now that both drives are in the array. If you got this far, it worked, and you should eventually see the drives synced when you check via cat /proc/mdstat.
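
Once the sync completes, both members show as active, with [UU] indicating a healthy two-device mirror:

$ cat /proc/mdstat
md0 : active raid1 nvme0n1p2[2] nvme1n1p2[1]
      249549824 blocks super 1.2 [2/2] [UU]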

If you got it to work or have any corrections, please leave a comment. I enjoy hearing from readers.