[Bug 1361842] Re: dmraid does not start on boot for single disk RAID0
Phillip Susi
psusi at ubuntu.com
Wed Oct 29 03:06:27 UTC 2014
On 10/28/2014 09:55 PM, Jason Gunthorpe wrote:
> Fundamentally, if /dev/sda has a valid RAID label then it *MUST* be set up
> and accessed through the dmraid device and *NEVER* via /dev/sda.
>
> Otherwise the installer will see a disc that is too big and it will destroy
> the RAID label at the end of the disc, then the system will not boot.
For MBR-partitioned disks this isn't really a problem, because they never
use the last bit of the disk anyway, but for GPT, yes... that would be a
problem.
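Roughly speaking (a sketch only; the device name is just an example), the
difference in where each scheme keeps its metadata is:

    # MBR keeps its only copy of the partition table in LBA 0, so it never
    # touches the DDF metadata at the end of the disk.  GPT additionally
    # writes a backup header at the very last LBA, which is exactly where
    # the firmware RAID metadata lives.
    DISK=/dev/sda                        # example device
    SECTORS=$(blockdev --getsz "$DISK")  # size in 512-byte sectors
    echo "GPT backup header belongs at LBA $((SECTORS - 1))"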
> FWIW, RHEL gets this right and sets up dmraid on this disc.
Interesting... they must have a patch that hasn't been upstreamed.
--
You received this bug notification because you are a member of Ubuntu
Foundations Bugs, which is subscribed to dmraid in Ubuntu.
https://bugs.launchpad.net/bugs/1361842
Title:
dmraid does not start on boot for single disk RAID0
Status in “dmraid” package in Ubuntu:
New
Bug description:
I have a Lenovo server with an LSI controller that insists on having a
RAID set to boot. So the BIOS is configured with a RAID0 stripe set
containing a single disk:
$ dmraid -i -si
*** Group superset .ddf1_disks
--> Subset
name : ddf1_4c5349202020202080861d60000000004711471100001450
size : 974608384
stride : 128
type : stripe
status : ok
subsets: 0
devs : 1
spares : 0
Notice that 'devs' is 1.
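Presumably the set can still be brought up by hand; a sketch of the
manual steps (not something the boot scripts do today):

    dmraid -ay          # activate all discovered RAID sets
    ls -l /dev/mapper/  # the ddf1_... node should appear here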
That 'devs' count of 1 causes this bit of code in dm-activate to bail:
case "$Raid_Type" in
stripe)
if [ "$Raid_Nodevs" -lt 2 ]; then
if [ -n "$Degraded" ]; then
log_error "Cannot bring up a RAID0 array in degraded mode, not all devices present."
fi
return 2
fi
;;
Of course, the above is totally bogus: a single-disk RAID0 is perfectly
valid. I wonder if this should be testing 'status' instead?
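Something like this is what I have in mind (just an untested sketch;
$Raid_Status is a hypothetical variable the script would first have to
parse out of the dmraid output):

    case "$Raid_Type" in
        stripe)
            # A healthy single-disk RAID0 is valid; only refuse to
            # activate when the set is actually degraded.
            if [ "$Raid_Status" != "ok" ]; then
                log_error "Cannot bring up a RAID0 array in degraded mode, not all devices present."
                return 2
            fi
            ;;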
This is a problem because of GPT partitioning. If you don't start the
RAID, downstream tools will attempt to partition sda. The RAID metadata
at the end of the disk collides with the backup GPT header, and that
ends up destroying the RAID set and making the server unbootable. The
kernel hints at this condition:
[ 4.202136] GPT:Primary header thinks Alt. header is not at the end of the disk.
[ 4.202137] GPT:974608383 != 976773167
[ 4.202138] GPT:Alternate GPT header not at the end of the disk.
[ 4.202138] GPT:974608383 != 976773167
Which is 100% true: the GPT was written to the RAID volume, not the raw
disk, and sector 974608383 is the end of the RAID volume.
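To put numbers on it (sizes taken from the output above; the raw-disk
figure follows from the kernel treating 976773167 as the disk's last LBA):

    # raw disk:        976773168 sectors (last LBA 976773167)
    # dmraid volume:   974608384 sectors (last LBA 974608383)
    # tail reserved for the DDF metadata on the raw disk:
    echo $((976773168 - 974608384))   # 2164784 sectors, about 1 GiB

A GPT written to the raw /dev/sda would put its backup header at LBA
976773167, right on top of that metadata, which is how the RAID set gets
destroyed.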
To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/dmraid/+bug/1361842/+subscriptions