[Bug 1318351] Re: mdadm doesn't assemble imsm raids during normal boot
mjbrands
mathijs at brands.name
Thu Oct 23 18:37:58 UTC 2014
I ran into similar issues, with the added 'bonus' of the RAID5 giving
poor performance (it was running in degraded mode) and providing no
protection against disk failure (again, because it was degraded).
When installing 14.04 LTS Server on an HP Z620 workstation (Intel C602
chipset), the installer detects the RAID5 array (3 disks, freshly
created in the Intel Matrix firmware), assembles it with mdadm and
starts syncing the disks (the firmware does not do this when the array
is created and leaves it to the operating system).
When the installation finishes and the machine reboots, the sync has
not completed yet; mdadm would resume it after the reboot, if it were
still being used. However, because nomdmonddf and nomdmonisw are set in
the default GRUB options, dmraid is used instead of mdadm, and the sync
does not appear to resume. 'dmraid -s' shows status ok for the array
(even though it has not completely synced). If I then shut the system
down and unplug a disk, the Intel firmware shows Failed instead of
Degraded (which is what it should show if the disks were synced and the
parity complete), and the array is no longer bootable.
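For comparison, these are the commands I used to check the actual sync
state on the mdadm side versus what dmraid reports (a rough sketch;
/dev/md126 is just the name mdadm usually gives the IMSM volume here,
so adjust it to your setup):

  # Show all md arrays and any running resync/recovery progress
  cat /proc/mdstat

  # Detailed state of the RAID5 volume; look at "State" and, while it
  # is still syncing, the "Resync Status" line
  sudo mdadm --detail /dev/md126

  # What dmraid thinks of the same disks (reports ok even while unsynced)
  sudo dmraid -s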
My conclusion is that the sync was never completed. I have tested a
similar scenario using mdadm on CentOS 7 and the array did go into
degraded mode and was still bootable when one disk was removed.
I'll try the suggestion in post #3 and see if my array then properly
resyncs and can tolerate losing a single disk (in a 3-disk array).
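To see whether the resync actually runs to completion this time, I plan
to simply watch the kernel's md status while it rebuilds (again a
sketch, and the device name is a guess):

  # Follow the resync progress until it reaches 100%
  watch cat /proc/mdstat

  # Afterwards: "State : clean" with no resync line means it finished
  sudo mdadm --detail /dev/md126 | grep -E 'State|Resync'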
--
You received this bug notification because you are a member of Ubuntu
Foundations Bugs, which is subscribed to mdadm in Ubuntu.
https://bugs.launchpad.net/bugs/1318351
Title:
mdadm doesn't assemble imsm raids during normal boot
Status in “mdadm” package in Ubuntu:
Confirmed
Bug description:
I have a non-root Intel "fakeraid" volume which is not getting
assembled automatically at startup. I can assemble it just fine with
"sudo mdadm --assemble --scan".
While trying to debug this, I found that it does get assembled when I
boot in debug (aka recovery) mode. It turns out that nomdmonisw and
nomdmonddf are passed to the kernel during normal boot only, and this
is due to /etc/default/grub.d/dmraid2mdadm.cfg containing:
GRUB_CMDLINE_LINUX_DEFAULT="$GRUB_CMDLINE_LINUX_DEFAULT nomdmonddf nomdmonisw"
Commenting out that line fixes the problem.
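Roughly, that amounts to the following (a sketch of what I did, not an
official procedure; the sed pattern assumes the stock contents of
dmraid2mdadm.cfg shown above):

  # Comment out the line that adds nomdmonddf/nomdmonisw
  sudo sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT=/#&/' /etc/default/grub.d/dmraid2mdadm.cfg

  # Regenerate the GRUB configuration and reboot
  sudo update-grub

  # After the reboot, confirm the options are gone and the array came up
  cat /proc/cmdline
  cat /proc/mdstat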
I gather that this is part of an effort to migrate from dmraid to mdadm
for fakeraids. I don't understand how it is supposed to work, but in my
case dmraid is not installed, and this setting just gets in the way.
(The background is that I recently added a 3 TB raid1 and was
therefore forced to abandon dmraid in favor of mdadm since the former
doesn't handle volumes larger than ~2 TB. So I dropped dmraid and set
up mdadm from scratch for this new raid.)
Also, I believe it's a bug that these kernel arguments are different
between normal and recovery boot.
My mdadm is 3.2.5-5ubuntu4 in a fresh trusty install.
To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/1318351/+subscriptions