Debian 5.0 (Lenny) install on Software RAID
As mentioned in previous posts, I’m a big fan of Linux Software RAID. Most of the Ubuntu servers I install these days are configured with two disks in a RAID1 configuration. Contrary to recommendations you’ll find elsewhere, I put all partitions on RAID1, not just some of them: that includes swap, /boot and / (in fact I don’t normally create a separate /boot partition, leaving it on the same partition as /). If you’re using RAID1, I think you should get the advantage of it for all of your data, not just the really important stuff on a single RAIDed partition.
When installing Ubuntu (certainly recent releases including 8.10 and 9.04) you can configure all of this through the standard installation process – creating your partitions first, flagged for use in RAID and then configuring software RAID and creating a number of software RAID volumes.
I was recently installing a Debian 5.0 server and wanted to use a configuration similar to the following:
| Physical device | Size | Software RAID device | Filesystem | Description |
|---|---|---|---|---|
| /dev/sda1 | 6GB | /dev/md0 | swap | Double the system’s physical memory |
| /dev/sdb1 | 6GB | | | |
| /dev/sda2 | 10GB | /dev/md1 | ext3, / | You can split this into multiple partitions for /var, /home and so on |
| /dev/sdb2 | 10GB | | | |
| /dev/sda3 | 40GB | /dev/md2 | ext3, /data | Used for critical application data on this server |
| /dev/sdb3 | 40GB | | | |
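Once the system is installed, you can inspect the resulting arrays with mdadm. The creation commands below are a rough sketch of what the installer does behind the scenes, assuming the partition names from the table above; you don’t need to run them yourself.

cat /proc/mdstat
sudo mdadm --detail /dev/md0 /dev/md1 /dev/md2
# Roughly what the installer does when creating the arrays:
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
sudo mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
sudo mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3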
When I followed a standard install of Debian using the above configuration, the GRUB installation step failed with an error which seemed to be related to the use of Software RAID. Searching the web for possible solutions mostly turned up suggestions to create a non-RAIDed /boot partition, but since this setup works on Ubuntu I figured it should also work on Debian (from which Ubuntu is largely derived).
First, a little background on GRUB and Linux Software RAID. GRUB cannot read Linux software RAID devices (which it needs to do to start the boot process), but it can read standard Linux partitions. Since Linux software RAID1 places a standard copy of a Linux partition on each RAID member device, you can simply configure GRUB against the underlying Linux partition and, at the GRUB level, ignore the software RAID volume. This seems to be how the Ubuntu GRUB installer works. A GRUB configuration stanza like the following should thus work without problems,
title Debian GNU/Linux, kernel 2.6.26-2-amd64
root (hd0,1)
kernel /boot/vmlinuz-2.6.26-2-amd64 root=/dev/md1 ro
initrd /boot/initrd.img-2.6.26-2-amd64
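You can see why this works from a running system: with the 0.90 metadata format (the default at the time), the md superblock is stored at the end of each member partition, so the start of the partition looks like a plain ext3 filesystem that GRUB can read directly. A quick check, using the device names from the table above:

sudo mdadm --examine /dev/sda2   # shows the md superblock and metadata version
sudo file -s /dev/sda2           # reports an ext3 filesystem at the start of the member partition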
When I tried a configuration like this on my first install of Debian on the new server, it failed with the aforementioned error. Comparing a similarly configured Ubuntu server with the newly installed Debian server, the only obvious difference I could see was that the partition table on the Ubuntu server used the old msdos format while the partition table on the Debian server seemed to be in GPT format. I can’t find any documentation on when this change was made in Debian (or indeed whether something in my configuration specifically triggered the use of GPT) but it seems this was the source of the problems for GRUB.
To circumvent the creation of a GPT partition table on both disks, I restarted the Debian installer in Expert mode and installed the optional parted partitioning module when prompted. Before proceeding to the partition disks stage of the Debian installation, I switched to a second virtual console (Alt-F2), started parted against each disk and ran the mklabel command to create a new partition table. When prompted for the partition table type, I entered msdos.
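For reference, the non-interactive equivalent of what I ran in parted looks like this, assuming the two disks are /dev/sda and /dev/sdb:

# Run from the second virtual console (Alt-F2) before the partitioning step
parted /dev/sda mklabel msdos
parted /dev/sdb mklabel msdos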
I then returned to the Debian installer (Alt-F1) and continued the installation in the normal way; the partitioner picks up that the disks already have a partition table and uses it rather than recreating it.
This time, when it came to the GRUB bootloader installation step, it proceeded without any errors and I completed the installation of a fully RAIDed system.
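One follow-up worth mentioning: if you want the machine to boot even after one drive fails, GRUB needs to be installed in the MBR of both disks, not just the first one. A sketch using the GRUB legacy shell, with the device names from the table above:

sudo grub
grub> device (hd0) /dev/sda
grub> root (hd0,1)
grub> setup (hd0)
grub> device (hd0) /dev/sdb
grub> root (hd0,1)
grub> setup (hd0)
grub> quit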
BIOS flash upgrades on Linux
To upgrade your system BIOS you normally need to run a piece of software from the system manufacturer which loads an updated copy of the BIOS into the EPROM chip on your system motherboard, a process known as flashing your BIOS. Most system manufacturers supply BIOS upgrades in a form that will run under DOS or, occasionally, Windows. It is rare to find a BIOS upgrade program that runs under Linux (I’d love to hear about one). Recognising that not all of their customers are necessarily running a 28-year-old, 16-bit operating system, some system manufacturers supply their BIOS upgrades in the form of an image which you can burn to a CD-ROM and boot from (making the question of what OS you are running irrelevant).
I recently had to upgrade the BIOS on one of our Supermicro systems (an X7DVL-E system). Supermicro provide their BIOS upgrades as a ZIP file containing the actual BIOS image and a DOS flash program. They also seem to provide some software which you can run on Windows to create a BIOS flash floppy disk (for the younger readers in the audience, that’s another wonderful technology from the 80s, and I’m talking about the super-modern 3.5″ floppy there). I’m not singling out Supermicro for particular criticism here; a lot of system manufacturers seem to work on the assumption that we’re still running PCs with Windows and a floppy drive (to be fair, if you have the optional IPMI management card installed, you can normally upload your firmware through that, but we don’t). For those of us running Linux servers, upgrading the BIOS can be a painful process.
There is a work-around for this problem. Thanks to the Linux boot-loader GRUB, you can boot from a DOS disk image containing your BIOS upgrade program and run the program from within that booted image, without ever actually installing DOS or fitting a floppy drive to your system. The following procedure worked well for me on an Ubuntu 9.04 system (with thanks to this OpenSUSE page and this Ubuntu forums posting for some assistance along the way) and the same approach should work on other distributions.
WARNING: Upgrading your system BIOS is an inherently risky process – with the primary risk being that if things go wrong you can brick your system. Things that can go wrong include flashing your system with a BIOS upgrade for a different system or the power getting interrupted while you are in the middle of a BIOS upgrade. In some cases, you may be able to reflash the BIOS using some emergency procedure but with most systems, you may be looking at a motherboard replacement. So proceed with caution and only upgrade your BIOS if you have a specific problem which the upgrade fixes.
- Download a bootable DOS disk image from the FreeDOS distribution site (FreeDOS is an excellent open source version of DOS. It is widely used by hobbyists and companies including Dell, HP and Seagate).
wget http://www.fdos.org/bootdisks/autogen/FDOEM.144.gz
- Download your system manufacturer’s BIOS upgrade
wget http://www.example.com/bios/version2.zip
- Place the downloaded BIOS upgrade program and files into the bootable DOS image.
gunzip FDOEM.144.gz
sudo mount -o loop FDOEM.144 /mnt
sudo mkdir /mnt/bios
cd /mnt/bios
sudo unzip <path to downloaded BIOS upgrade file>/version2.zip
cd /
sudo umount /mnt
- Add the bootable DOS image (with the BIOS upgrade software) to your Linux bootloader (this requires a file from the syslinux package),
sudo aptitude install syslinux
sudo mkdir /boot/dos
sudo cp /usr/lib/syslinux/memdisk /boot/dos
sudo cp FDOEM.144 /boot/dos
sudo vi /boot/grub/menu.lst
and add the following section to the end of the file
title DOS BIOS upgrade
kernel /boot/dos/memdisk
initrd /boot/dos/FDOEM.144
- Reboot your system and choose the DOS BIOS upgrade boot option. If the boot is successful you should shortly be presented with the A:\ DOS boot prompt. At this point you can run the BIOS upgrade software, for example,
A:\> CD BIOS
A:\> FLASH V2BIOS.ROM
- Once the upgrade finishes, reboot and enjoy your upgraded system.
Ubuntu 9.04 Fake RAID problems
So we have RAID, a technology that, to quote the Wikipedia article, “allowed computer users to achieve high levels of storage reliability from low-cost and less reliable PC-class disk-drive components, via the technique of arranging the devices into arrays for redundancy”.
In the beginning, manufacturers created dedicated hardware controllers to which the disks were attached. These controllers include their own processor and memory and handle all the RAID functionality within the black box they present to the system (the good ones even include a battery that lets the controller run for long enough during a power failure that any data in the RAID controller’s cache memory isn’t lost and can be written to the drives when the power comes back). As far as the system the controller is attached to is concerned, the RAID controller is just one big disk. This is called hardware RAID.
As machines have become more powerful, and most machines (certainly most desktops) sit idle most of the time, it has become feasible to have the operating system itself provide RAID. All mainstream operating systems offer some form of this software RAID, which performs exactly the same function as the hardware RAID controller above but uses the system’s processor and memory. There are advantages and disadvantages to both approaches (I’m increasingly leaning towards software RAID on Linux: low-end hardware RAID controllers aren’t very reliable and tend to be slow from an I/O perspective, while most modern Linux servers have multiple processor cores sitting idle most of the time which are perfectly suited to driving a RAID array) but they both work reasonably well.
In between these two comes something described as firmware/driver-based RAID, HostRAID or Fake RAID. This is provided by cheap RAID controllers that do not implement all the RAID functionality themselves (normally they are standard disk controllers with some special firmware) and use the main system processor for most of the heavy lifting. They also rely on dedicated operating system drivers to provide the RAID functionality, hence the name Fake RAID. I’m not a fan of Fake RAID controllers: apart from the fact that the manufacturers rarely make it clear that these are not fully functional RAID controllers, their reliance on elaborate driver software makes them less reliable than hardware RAID but more complex to maintain than true software RAID. They are reasonably well supported under Linux these days using the Device-Mapper Software RAID Tool (aka dmraid), but personally I prefer to use a Fake RAID controller as a standard SATA controller and, if I require RAID on such a system, implement it using Linux’s excellent Software RAID support.
Until recently, people installing Ubuntu who did want to use their Fake RAID controller as a RAID controller ran into the problem that the installer did not include dmraid support. With Ubuntu 9.04 (Jaunty), the installer detects at least some Fake RAID controllers and asks whether you want to use the controller via dmraid or not. If you choose not to, you can then use it as a normal SATA controller.
I ran into an interesting problem on a recent reinstall of Ubuntu 9.04 onto a Supermicro X7DVL system which includes an Intel 631xESB/632xESB I/O controller that supports some sort of Fake RAID (Intel seems to call their Fake RAID Matrix Storage Technology). Given my stance on Fake RAID, I immediately disabled this in the BIOS by changing the controller to compatible mode (the datasheet above suggests this should disable RAID). When installing Ubuntu, the installer still detected the Fake RAID volumes and offered to configure dmraid for me. I declined the option, the native (unRAIDed) SATA disks were presented to me, and I partitioned and formatted them.
I thought nothing more of this until I rebooted after completing the installation. The system booted as far as GRUB before dumping the message
No block devices found
It took me a while to figure out what was going on. Google turned up lots of people who had problems with Ubuntu and dmraid, but generally they were having the opposite problem of wanting to use dmraid but the installer not supporting it (like DMRAID on Ubuntu with SATA fakeraid, dmraid missing from livecd and Need dmraid to support fakeraid). Presumably most of these problems have been fixed with the inclusion of dmraid in Jaunty.
This was the clue for me: I finally figured out (with some help from bug 392510, I must admit) that even though I had declined to use dmraid during the install, the newly installed operating system still contained dmraid and was loading the dmraid kernel modules at boot time. This resulted in the kernel seeing dmraid volumes rather than the partitions I had created during the OS install.
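If you want to confirm that this is what is happening on your own system, dmraid and dmsetup can show what has been detected and activated; a quick sketch, run from the installed system or a rescue shell:

sudo dmraid -r    # list block devices carrying Fake RAID metadata
sudo dmraid -s    # show any RAID sets dmraid has found
sudo dmsetup ls   # device-mapper devices currently active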
Once I figured that out, fixing the problem was relatively straightforward,
- Reboot with the Ubuntu 9.04 install cd and select Rescue broken system.
- When the rescue boot has been configured, select Execute shell and chroot into the installed environment.
- aptitude purge dmraid (this removes the dmraid software and drops the dmraid modules from the initramfs; see the sketch after this list).
- Reboot and enjoy your new OS.
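For reference, the commands run inside the chroot amount to roughly the following; the explicit update-initramfs call is belt and braces, since purging the package should trigger an initramfs rebuild anyway:

# Inside the chrooted install, from the rescue shell
aptitude purge dmraid    # remove the dmraid package and its initramfs hooks
update-initramfs -u      # rebuild the initramfs without the dmraid modules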
Two things that I found misleading here are,
- I had declined to use dmraid during the Ubuntu install, but the installer still included dmraid in the installed system
- I had disabled SATA RAID in the BIOS but it was still visible to Ubuntu. I notice there is a newer version of the BIOS from Supermicro which may fix this problem, but since Supermicro don’t include a change log with their BIOS releases it’s hard to tell without going to the trouble of actually installing the update.
I should probably log a bug against the dmraid package in Ubuntu (if I get around to it, it will appear against the dmraid package). Bug 392510 talks about supporting a nodmraid option to the kernel at boot time which would explicitly disable dmraid; I think this would be a good idea (Fedora apparently already does this).
Update 1: Bug 311637 already addresses this problem, so I’ve added a comment to it.
Update 2: Upgrading the Supermicro system to the latest BIOS and disabling the Fake RAID controller through the BIOS seems to fix this problem also.