[Bug 1828558] Re: installing ubuntu on a former md raid volume makes system unusable
Launchpad Bug Tracker
1828558 at bugs.launchpad.net
Tue Aug 27 21:18:55 UTC 2019
This bug was fixed in the package partman-base - 206ubuntu1.2
---------------
partman-base (206ubuntu1.2) disco; urgency=medium
* Move superblock wiping code from command_new_label to command_commit, as
the disk is not supposed to be written to until the latter is called.
partman-base (206ubuntu1.1) disco; urgency=medium
* parted_server.c: Wipe all known superblocks from device in
command_new_label. (LP: #1828558)
 -- Michael Hudson-Doyle <michael.hudson at ubuntu.com>  Tue, 06 Aug 2019 12:04:16 +1200
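In effect, the fixed installer clears every known superblock signature from the target disk at the point where the new partition table is actually written. A command-line equivalent of that behaviour (illustration only; the real fix is C code in parted_server.c adapted from wipefs internals, not a call to the wipefs binary):
# Illustration: the equivalent effect of the fix, run by hand.
# WARNING: erases every known filesystem/RAID signature on the disk.
$ sudo wipefs --all /dev/sda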
** Changed in: partman-base (Ubuntu Disco)
Status: Fix Committed => Fix Released
--
You received this bug notification because you are a member of Ubuntu
Foundations Bugs, which is subscribed to partman-base in Ubuntu.
https://bugs.launchpad.net/bugs/1828558
Title:
installing ubuntu on a former md raid volume makes system unusable
Status in partman-base package in Ubuntu:
Fix Released
Status in partman-base source package in Bionic:
Fix Released
Status in partman-base source package in Disco:
Fix Released
Bug description:
[impact]
Installing Ubuntu on a disk that was previously an md RAID volume leads to a system that does not boot (or does not boot reliably).
[test case]
Create a disk image that has an md RAID 6, metadata 0.90 device on it using the attached "mkraid6" script.
$ sudo mkraid6
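The mkraid6 script is attached to the bug and not reproduced in this message. A rough equivalent, assuming mdadm and loop devices are available, might look like:
#!/bin/sh
# Assumption: rough equivalent of the attached mkraid6 script, which is
# not included here; the real attachment may differ. Builds four member
# images, creates an md RAID 6 with 0.90 metadata over them, then stops
# the array, leaving each image with a stale superblock near its end.
set -e
loops=""
for i in 0 1 2 3; do
    truncate -s 10G "raid$i.img"
    loops="$loops $(losetup --find --show "raid$i.img")"
done
mdadm --create /dev/md0 --run --level=6 --metadata=0.90 --raid-devices=4 $loops
mdadm --stop /dev/md0
for l in $loops; do losetup -d "$l"; done
# raid2.img now carries a 0.90 superblock; use it as the install target.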
Install to it in a VM:
$ kvm -m 2048 -cdrom ~/isos/ubuntu-18.04.2-desktop-amd64.iso -drive file=raid2.img,format=raw
Reboot into the installed system. Check that it boots and that there
are no occurrences of linux_raid_member in the output of "sudo wipefs
/dev/sda".
SRU team member request: test other, regular installation scenarios as a
sanity check for regressions (comment #10).
[regression potential]
The patch changes a core part of the partitioner. A bug here could crash the installer, making installation impossible. The code is adapted from battle-tested code in wipefs from util-linux and was tested before being uploaded to eoan. The nature of the code makes regressions beyond crashing the installer, or failing to do what it is supposed to, very unlikely -- it is hard to see how it could result in data loss on a drive not selected for formatting, for example.
[original description]
18.04 is installed using the GUI installer in 'Guided - use entire volume' mode on a disk which was previously used as an md RAID 6 volume. The installer repartitions the disk and installs the system, and the system reboots any number of times without issues. Then packages are upgraded to their current versions and some new packages are installed, including mdadm, which *might* be the culprit. After that the system won't boot any more, dropping to an initramfs prompt with a 'gave up waiting for root filesystem device' message. At this point blkid shows the boot disk as a single device with TYPE='linux_raid_member', not as two partitions for EFI and root (/dev/sda, not /dev/sda1 and /dev/sda2). I was able to fix this issue by zeroing the whole disk (dd if=/dev/zero of=/dev/sda bs=4096) and reinstalling. Probably the md superblock is not destroyed when the disk is partitioned by the installer, is not overwritten by installed files, and somehow takes precedence over the partition table (GPT) during boot.
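For anyone who hits this before the fix reaches them: the metadata 0.90 superblock is stored near the end of the device, which is why writing a new partition table at the start of the disk leaves it intact. Zeroing the whole disk works, as above, but a much faster workaround (assuming the stale superblock is on /dev/sda) is:
$ sudo wipefs /dev/sda                   # shows the stale linux_raid_member magic
$ sudo mdadm --zero-superblock /dev/sda  # clears only the md superblock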
To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/partman-base/+bug/1828558/+subscriptions