[Bug 682019] Re: incorrect RAID assembly when multiple drives specified in initramfs mdadm.conf
Jean-Philippe Guérard
jean-philippe.guerard at tigreraye.org
Sat May 5 23:07:26 UTC 2012
*** This bug is a duplicate of bug 942106 ***
https://bugs.launchpad.net/bugs/942106
** This bug is no longer a duplicate of bug 683476
Software raid intermittently fails to start at boot time
** This bug has been marked a duplicate of bug 942106
mdadm-functions missing udevadm settle (?)
--
You received this bug notification because you are a member of Ubuntu
Foundations Bugs, which is subscribed to mdadm in Ubuntu.
https://bugs.launchpad.net/bugs/682019
Title:
incorrect RAID assembly when multiple drives specified in initramfs
mdadm.conf
Status in “mdadm” package in Ubuntu:
New
Bug description:
Binary package hint: mdadm
I have an Ubuntu 10.04.1 (i386 server) system with 4 mdadm arrays, a
mixture of raid0 and raid1. The primary boot partition is on md0, which is
raid1.
md0 -> /dev/sda1, /dev/sdb1 (raid1, mounted at "/" )
md1 -> /dev/sda2, /dev/sdb2 (raid0, swap)
md2 -> /dev/sda3, /dev/sdb3 (raid1, mounted at "/home")
md3 -> /dev/sda4, /dev/sdb4 (raid0, mounted at "/home/pub/unmirrored")
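For reference, an array layout like the one above would typically have been
created with commands along these lines (a sketch only; the actual chunk
sizes and metadata options the installer used on this system are not known):
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sda2 /dev/sdb2
  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
  mdadm --create /dev/md3 --level=0 --raid-devices=2 /dev/sda4 /dev/sdb4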
grub is installed on both /dev/sda and /dev/sdb.
(Yes, I know I could have used two RAID arrays with two partitions each,
but that layout also triggered the problem I am about to explain.)
The bug seems to occur when I specify ALL 4 arrays in the mdadm.conf
inside the initramfs image. If I specify only md0 in the initramfs, and all
4 arrays in /etc/mdadm/mdadm.conf (on md0), the machine boots perfectly.
However, the machine will not boot if I specify only one array in both
.conf files, or all 4 arrays in both .conf files.
This means that each time a new kernel is installed, update-initramfs
creates a new image that does not boot, because the image is built from
/etc/mdadm/mdadm.conf on md0, which lists all 4 arrays.
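For what it's worth, the copy of mdadm.conf embedded in the initramfs can be
checked and regenerated with something like the following (the initrd file
name and the etc/mdadm/mdadm.conf path inside the image are assumptions based
on how initramfs-tools packs it; adjust for your kernel version):
  # rebuild the initramfs from the current /etc/mdadm/mdadm.conf
  update-initramfs -u -k $(uname -r)
  # list the image contents and confirm mdadm.conf was included
  zcat /boot/initrd.img-$(uname -r) | cpio -it | grep mdadm
  # print the embedded copy to compare its ARRAY lines with the one on md0
  zcat /boot/initrd.img-$(uname -r) | cpio -i --to-stdout etc/mdadm/mdadm.conf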
Here's the scary part: when I specify all 4 arrays in the initramfs, the
machine drops into an initramfs shell at boot. Inspecting the assembled
arrays shows that md3 has been created as raid0 from the whole disks
/dev/sda and /dev/sdb, and that md0p1, md0p2 and md0p3 have also been
created. NONE of this is consistent with the superblocks (mdadm --examine
--scan looks fine) or with what is in any mdadm.conf file, and none of
these devices actually works.
To recover, I need to run "mdadm --stop /dev/md3", then "mdadm --assemble
--scan", and then hit CTRL+D to continue booting.
You'll also find that when multiple md arrays are defined during
installation (alternate installer), the machine will not boot immediately
afterwards due to this bug. I had to manually stop every array except md0
between partitioning and installing the bootloader, then re-add them to
mdadm.conf after installation. This effectively created an initramfs with
only one md array (md0) in it.
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays (generated with "mdadm --examine --scan")
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=6e262a6c:792ac6e6:7937b5e2:5bb6f333
#NOTE: The following lines have to be removed from the initramfs image to boot:
ARRAY /dev/md1 level=raid0 num-devices=2 UUID=d9cec93e:db673bd8:a0a0eea7:e933aaa5
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=9545533d:d5a2efc6:a0a0eea7:e933aaa5
ARRAY /dev/md3 level=raid0 num-devices=2 UUID=110a8882:ee7facf4:a0a0eea7:e933aaa5
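The ARRAY lines above can be regenerated from the on-disk superblocks and
pushed back into the initramfs like this (which is exactly where the bug
bites, since the rebuilt image then contains all 4 arrays):
  # print ARRAY definitions from the superblocks, for pasting into mdadm.conf
  mdadm --examine --scan
  # rebuild the initramfs so it picks up the edited /etc/mdadm/mdadm.conf
  update-initramfs -u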
To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/682019/+subscriptions