HowTo Single Disk to RAID
We found ourselves in the need of switching from a single-disk root to a root backed by RAID; here's how we did it.
Context
We set up a new/old server, BitArno: it has a nice /home of ~6TB backed by a RAID and a separate disk for the / partition. As the drive for / had been tested before going into production with no errors, we thought we didn't need a RAID for it. We installed Debian stable 8.1, set up every service, and went to bed, only to wake up to a hung server refusing to boot.
An on-site check was required; we got there and found out that the / drive was failing to reallocate sectors. We fsck'ed it and it booted. "Great!" That same evening it was failing again, so we decided to switch from a single-disk root to a RAID1-backed root, especially because we use commodity hardware taken from old computers.
The general idea
We wanted a way to easily switch from a single drive to the RAID without having to back everything up and copy it over again, with as little downtime as possible, so we decided to:
- Power off the server and attach a new disk of equal size
- Power on the server
- Create a RAID1 array composed of 2 drives but with one drive marked as missing
- Set LVM up on top of it, as the old drive had LVM (it's a 120GB drive just for the `/` partition; we wanted some space available should we ever need it)
- Boot a live distro and copy /
- Install the bootloader on the new drive, adding the necessary modules for RAID and LVM
- Reboot the server from the RAID-member disk
- Check if everything is OK
- Add the old disk to the RAID and let mdadm resync the contents.
Step by step guide
Here I'll try to cover, step by step, the commands we used. It has been a while since we did it, so I'm not 100% sure I'll remember everything. First, the partition layout before the RAID:
sda                   8:0    0 114.5G  0 disk
├─sda1                8:1    0   243M  0 part  /boot
├─sda2                8:2    0     1K  0 part
└─sda5                8:5    0 114.3G  0 part
  ├─ba--system-root 253:0    0    30G  0 lvm   /
  └─ba--system-swap 253:3    0     4G  0 lvm   [SWAP]
And after the switch
sda                     8:0    0 114.5G  0 disk
├─sda1                  8:1    0   243M  0 part
│ └─md1                 9:1    0 242.8M  0 raid1 /boot
├─sda2                  8:2    0     1K  0 part
└─sda5                  8:5    0 114.3G  0 part
  └─md2                 9:2    0 114.2G  0 raid1
    ├─ba--system-root 253:0    0    30G  0 lvm   /
    └─ba--system-swap 253:3    0     4G  0 lvm   [SWAP]
sde                     8:64   0 114.5G  0 disk
├─sde1                  8:65   0   243M  0 part
│ └─md1                 9:1    0 242.8M  0 raid1 /boot
├─sde2                  8:66   0     1K  0 part
└─sde5                  8:69   0 114.3G  0 part
  └─md2                 9:2    0 114.2G  0 raid1
    ├─ba--system-root 253:0    0    30G  0 lvm   /
    └─ba--system-swap 253:3    0     4G  0 lvm   [SWAP]
This way we have two arrays, one for /boot and one for the LVM, which contains the root and the swap.
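For reference, the layouts above are (as far as we recall) plain lsblk output; re-running it after each of the steps below is a quick way to confirm the device tree looks the way you expect:
lsblk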
First of all we need to wipe out every possible header from the new disk, as it will become a RAID member:
dd if=/dev/zero of=/dev/sde bs=100M count=1
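If you prefer something more surgical than zeroing the start of the disk, wipefs can remove the known filesystem/RAID/LVM signatures instead; a sketch of that alternative (not the command we actually ran):
wipefs -a /dev/sde    # erase all detected signatures from the new disk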
Then we created the partitions with sfdisk, roughly the same size as on the first disk. They don't have to match exactly, which was fortunate, since it wasn't possible to simply copy the partition table over with dd (more on that after the list below).
We need:
- one partition for /boot, marked as Linux raid autodetect
- another for LVM, also marked as Linux raid autodetect, taking the free space left
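Had the two disks been exactly the same size, the whole partition table could have been cloned in one go with sfdisk; a sketch of that approach (not what we ended up doing, since our disks differed slightly):
# Dump the old disk's partition table and replay it on the new one;
# only safe when the target disk is at least as large as the source
sfdisk -d /dev/sda | sfdisk /dev/sde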
We then proceeded to create the 2 RAIDs; /dev/sde is the new disk:
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sde1 missing
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sde5 missing
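At this point it's worth checking that both arrays came up degraded as expected and, on Debian, recording them in mdadm.conf so the initramfs can assemble them at boot; a sketch, assuming the standard Debian path:
cat /proc/mdstat                                 # each array should show one active member and one missing slot, e.g. [U_]
mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # persist the array definitions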
We made the filesystem on the newly created /dev/md1
mkfs.ext2 /dev/md1
And copied the files
mount /dev/md1 /mnt
rsync -avPh /boot/ /mnt
sync && umount /mnt
We were done with /boot, and now we needed to create the LVM volumes.
First we initialized /dev/md2 as a physical volume, telling LVM it was a device it could use:
pvcreate /dev/md2
We then created the volume group that was going to host the logical volumes
vgcreate ba-system /dev/md2
And created the logical volumes over it
lvcreate -L30G -n root ba-system
lvcreate -L4G -n swap ba-system
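A quick sanity check of the new LVM stack before putting filesystems on it (nothing here is destructive):
pvs   # /dev/md2 should be listed as a physical volume
vgs   # the ba-system volume group, roughly 114G in size
lvs   # the root (30G) and swap (4G) logical volumes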
We created the filesystem for root
mkfs.ext4 /dev/mapper/ba--system-root
And for swap
mkswap /dev/mapper/ba--system-swap
Then it was time to power off the server and boot from a live distro. We chose Arch Linux as it's lightweight and provides all the packages we need without forcing us through an installation procedure.
We mounted the root partitions
mkdir /mnt/old /mnt/new
mount /dev/mapper/ba--system-root /mnt/new
mount /dev/mapper/bitarno--system-root /mnt/old
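Depending on the live image, the arrays and the volume groups may not be assembled and activated automatically; if the mounts above fail, something along these lines should bring the devices up first (the Arch ISO handled most of this for us, so treat it as a sketch):
mdadm --assemble --scan   # assemble /dev/md1 and /dev/md2 from the on-disk metadata
vgchange -ay              # activate every detected volume group, old and new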
And copied the data over:
rsync -avPh /mnt/old/ /mnt/new
We then proceeded to mount the boot partition
mount /dev/md1 /mnt/new/boot
And arch-chroot'ed into it. arch-chroot is a wonderful piece of software that saves you from bind-mounting /dev, /proc and /sys into the chroot yourself:
arch-chroot /mnt/new
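For the record, without arch-chroot the manual equivalent is roughly the following (a sketch):
mount --bind /dev  /mnt/new/dev
mount --bind /proc /mnt/new/proc
mount --bind /sys  /mnt/new/sys
chroot /mnt/new /bin/bash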
It was time to install GRUB and regenerate its image so it contains the LVM and RAID modules; this was the hardest part, as we wrongly assumed all the modules were already present.
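We no longer have the exact commands in our notes, but on Debian this step looks roughly like the following sketch; the device name is how the new disk appeared from the live environment, and GRUB_PRELOAD_MODULES is our reconstruction rather than something we are sure we set:
apt-get install mdadm lvm2                                       # make sure the RAID/LVM tools are inside the chroot
echo 'GRUB_PRELOAD_MODULES="mdraid1x lvm"' >> /etc/default/grub  # preload the modules GRUB needs to read /boot
grub-install /dev/sde                                            # install the bootloader on the new disk
update-grub                                                      # regenerate grub.cfg
update-initramfs -u -k all                                       # rebuild the initramfs with md/LVM support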
Issues
We (of course) had some issues in the process:
- Though the drive was of the same model, the capacity was a little different, so we couldn't copy the partition scheme as we thought we could
- It's very important to generate the GRUB image with the modules and the boot parameters needed for LVM and RAID
- It's not possible to simply back up and restore the LVM metadata either, since for a while we had the two LVM disks attached together and the names were conflicting.