Debian 5.0 (Lenny) install on Software RAID
As mentioned in previous posts, I'm a big fan of Linux Software RAID. Most of the Ubuntu servers I install these days are configured with two disks in a RAID1 configuration. Contrary to recommendations you'll find elsewhere, I put all partitions on RAID1, not just some (that includes swap, /boot and / – in fact I don't normally create a separate /boot partition, leaving it on the same partition as /). My view is that if you're using RAID1, you should get the advantage of it for all of your data, not just the really, really important stuff on a single RAIDed partition.
When installing Ubuntu (certainly recent releases including 8.10 and 9.04) you can configure all of this through the standard installation process – creating your partitions first, flagged for use in RAID, and then configuring software RAID and creating a number of software RAID volumes on top of them.
I was recently installing a Debian 5.0 server and wanted to go with a config similar to the following,
| Physical device | Size | Software RAID device | Filesystem | Description |
|---|---|---|---|---|
| /dev/sda1 | 6GB | /dev/md0 | swap | Double the system physical memory |
| /dev/sdb1 | 6GB | | | |
| /dev/sda2 | 10GB | /dev/md1 | ext3, / | You can split this into multiple partitions for /var, /home and so on |
| /dev/sdb2 | 10GB | | | |
| /dev/sda3 | 40GB | /dev/md2 | ext3, /data | Used for critical application data on this server |
| /dev/sdb3 | 40GB | | | |
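For anyone building the same layout by hand rather than through the installer, the rough shell equivalent of the table above would look something like the following (device names as listed; this is purely a sketch, since the installer handles all of it for you):

```
# Create the three RAID1 arrays from the matching partitions on each disk
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3

# Put swap and the ext3 filesystems on the md devices, not on the raw partitions
mkswap /dev/md0
mkfs.ext3 /dev/md1
mkfs.ext3 /dev/md2
```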
When I followed a standard install of Debian using the above configuration, the GRUB installation step failed with an error that seemed to be related to the use of Software RAID. Searching the web for possible solutions mostly turned up suggestions to create a non-RAIDed /boot partition, but since a fully RAIDed layout works on Ubuntu I figured it should also work on Debian (from which Ubuntu is largely derived).
First, a little background on GRUB and Linux Software RAID. It seems that GRUB cannot read Linux software RAID devices (which it needs to do to start the boot process). What it can do is read standard Linux partitions. Linux software RAID1 places a standard copy of a Linux partition on each RAID member (the 0.90 metadata format, which was the default at the time, stores the RAID superblock at the end of the partition, so each mirror half looks like a plain partition to GRUB), which means you can simply configure GRUB against the underlying Linux partition and, at a GRUB level, ignore the software RAID volume. This seems to be how the Ubuntu GRUB installer works. A GRUB configuration stanza like the following should thus work without problems,
    title   Debian GNU/Linux, kernel 2.6.26-2-amd64
    root    (hd0,1)
    kernel  /boot/vmlinuz-2.6.26-2-amd64 root=/dev/md1 ro
    initrd  /boot/initrd.img-2.6.26-2-amd64
When I tried a configuration like this on my first install of Debian on the new server, it failed with the aforementioned error. Comparing a similarly configured Ubuntu server with the newly installed Debian server, the only obvious difference I could see was that the partition table on the Ubuntu server uses the old msdos format while the partition table on the Debian server seems to be in GPT format. I can't find any documentation on when this change was made in Debian (or indeed whether it was something in my configuration that specifically triggered the use of GPT), but it seems like this was the source of the problems for GRUB.
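A quick way to confirm which partition table type a disk carries is to ask parted (from a shell on the installer or on the running system; /dev/sda here is simply the first disk in my setup):

```
# The "Partition Table:" line in the output reports msdos or gpt
parted /dev/sda print
```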
To circumvent the creation of a GPT partition table on both disks, I restarted the Debian installer in Expert mode and installed the optional `parted` partitioning module when prompted. Before proceeding to the partitioning disks stage of the Debian installation, I moved to a second virtual console (Alt-F2), started `parted` against each disk and ran the `mklabel` command to create a new partition table. When prompted for the partition table type, I input `msdos`.
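The parted session on each disk looked roughly like this (note that mklabel destroys the existing partition table, which is exactly what we want here):

```
# parted /dev/sda
(parted) mklabel
New disk label type? msdos
(parted) quit

# ...then the same again for /dev/sdb
```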
I then returned to the Debian installer (Alt-F1) and continued the installation in the normal way – the partitioner picks up that the disks already have a partition table and uses that rather than recreating it.
This time, when it came to the GRUB bootloader installation step, it proceeded without any errors and I completed the installation of a fully RAIDed system.
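Once the new system boots, /proc/mdstat is the quickest way to confirm that all three arrays came up with both disks active:

```
cat /proc/mdstat
# each of md0, md1 and md2 should show as "active raid1" with two members
# and a [UU] status once any initial resync has completed

# more detail on an individual array if needed
mdadm --detail /dev/md1
```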
10 Comments to Debian 5.0 (Lenny) install on Software RAID
Hello,
With debian-505-i386-CD-1.iso it works even in normal installation mode; you don't need this workaround.
July 12, 2010
Hi Danila,
Thanks – that's good to know. The Ubuntu 10.04 grub installer also seems to take care of this automatically.
-stephen
December 28, 2010
Helpful, thanks.
debian-507-amd64-netinst.iso still has this problem. Your page ended hours of head scratching.
January 31, 2011
Would it somehow be possible to use debian-505-i386-CD-1.iso and then, once grub and RAID are going, reinstall with debian-507-amd64-netinst.iso?
January 31, 2011
I'm not sure why you would want to try that – it might work, although I'm not sure how update-grub would behave during the reinstall with debian-507-amd64-netinst.iso. I think the workaround mentioned in my article might be safer.
February 3, 2011
I couldn't get RAID set up using debian-508-amd64-netinst.iso until I used your instructions. A BIG thanks! On another note, I was hoping to unplug a drive and see if the system would still boot. I also wanted to see if it would email me an error. I added this to cron: `*/10 * * * * /sbin/mdadm --monitor --scan -1`. Have you tried this sort of testing? I see there is a cron.daily to be added for future versions: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=497949.
February 4, 2011
Hi,
Glad to hear the article was useful. I've certainly unplugged drives from Linux Software RAID1 configurations and the system boots OK – but yes, you should try it; better to identify any problems now than when the drive really does fail. That Debian bug you reference is already fixed, so you should get that notification. In addition, mdadm will mail you an alert any time a problem occurs anyway – just make sure /etc/mdadm/mdadm.conf contains the correct value for the MAILADDR parameter.
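For example, the relevant bit of /etc/mdadm/mdadm.conf is just the MAILADDR line (the address below is a placeholder):

```
# /etc/mdadm/mdadm.conf (excerpt)
MAILADDR you@example.com
```

and mdadm can generate a test alert for each array so you can confirm mail delivery actually works:

```
mdadm --monitor --scan --test --oneshot
```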
February 6, 2011
Thanks for the tips. So I unplugged sdb from the motherboard and rebooted with no problem. I got my warning email. 🙂 🙂 Then I plugged sdb back in and the drives re-synced. Then I turned off the system and unplugged sda, and the system would **not** boot, which I kind of suspected. Any tips on how to make sdb bootable in case sda fails? I have googled this problem but I have still not figured it out. Thanks again.
February 8, 2011
Before unplugging any drives, make sure you install grub on both drives – google for installing grub on multiple drives or similar – then the system will boot from either drive.
February 9, 2011
Got it, thanks (see below). Also, I see Debian 6 is out now. I wonder if the RAID 1 install will work better.
    grub
    Probing devices to guess BIOS drives. This may take a long time.

        GNU GRUB  version 0.97  (640K lower / 3072K upper memory)

     [ Minimal BASH-like line editing is supported. For the first word, TAB
       lists possible command completions. Anywhere else TAB lists the possible
       completions of a device/filename. ]

    grub> find /boot/grub/stage1
    find /boot/grub/stage1
     (hd0,0)
     (hd1,0)
    grub> device (hd0) /dev/sdb
    device (hd0) /dev/sdb
    grub> root (hd0,0)
    root (hd0,0)
     Filesystem type is ext2fs, partition type 0xfd
    grub> setup (hd0)
    setup (hd0)
     Checking if "/boot/grub/stage1" exists... yes
     Checking if "/boot/grub/stage2" exists... yes
     Checking if "/boot/grub/e2fs_stage1_5" exists... yes
     Running "embed /boot/grub/e2fs_stage1_5 (hd0)"... 17 sectors are embedded.
    succeeded
     Running "install /boot/grub/stage1 (hd0) (hd0)1+17 p (hd0,0)/boot/grub/stage2 /boot/grub/menu.lst"... succeeded
    Done.
    grub> quit
    quit