[Bug 1361842] Re: dmraid does not start on boot for single disk RAID0

Phillip Susi psusi at ubuntu.com
Wed Oct 29 14:16:55 UTC 2014



On 10/28/2014 11:31 PM, Jason Gunthorpe wrote:
> Except "never really use" means "don't fill up the filesystem". If
> the FS uses the last portion of the partition it will corrupt the
> RAID label and restoring the RAID label will corrupt the FS. Which
> is pretty bad, but not immediate.

No; partitioning tools typically do not assign every last sector on
the disk to the partition, so that last bit of the disk will never be
used by the fs.
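
For what it's worth, a quick way to check how much slack a partitioner
actually left is to compare the end of the last partition with the size
of the device; /dev/sda here is just the disk from this report:

    # Show partition end sectors and any free space after the last one
    parted /dev/sda unit s print free
    # Total device size in 512-byte sectors, for comparison
    blockdev --getsz /dev/sda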

>> FWIW, RHEL gets this right and sets up dmraid on this disc.
>> 
>> Interesting.. they must have a patch that hasn't been
>> upstreamed.
>> 
> 
> Looks like they have a dracut specific version, it seems much
> saner, no crazy parsing of dmraid output.
> 

Oh, right... so the problem is in the script only, not in dmraid itself,
right?  If you manually run dmraid -ay, it correctly activates the
array?  Maybe I'll take a crack at finally fixing that script then, as
that shouldn't be too hard.
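
For reference, the manual test I have in mind is roughly this (nothing
exotic, just the standard dmraid invocations):

    # List the RAID sets dmraid finds in the on-disk metadata
    dmraid -s
    # Activate all discovered sets by hand (what the boot script should do)
    dmraid -ay
    # The activated array should then appear under /dev/mapper/
    ls /dev/mapper/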



https://bugs.launchpad.net/bugs/1361842

Title:
  dmraid does not start on boot for single disk RAID0

Status in “dmraid” package in Ubuntu:
  New

Bug description:
  I have a Lenovo server with an LSI controller that insists on having a
  RAID set in order to boot. So the BIOS is configured with a RAID0
  stripe set containing a single disk:

  $ dmraid -i -si
  *** Group superset .ddf1_disks
  --> Subset
  name   : ddf1_4c5349202020202080861d60000000004711471100001450
  size   : 974608384
  stride : 128
  type   : stripe
  status : ok
  subsets: 0
  devs   : 1
  spares : 0

  Notice that 'devs' is 1.

  This causes the following check in dm-activate to bail:

          case "$Raid_Type" in
                  stripe)
                          if [ "$Raid_Nodevs" -lt 2 ]; then
                                  if [ -n "$Degraded" ]; then
                                          log_error "Cannot bring up a RAID0 array in degraded mode, not all devices present."
                                  fi
                                  return 2
                          fi
                          ;;

  Of course, the above is totally bogus; a single-disk RAID0 is perfectly
  valid. I wonder if this should be testing 'status' instead?
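
  Purely as a sketch (not a tested patch), keying the check off the
  existing $Degraded flag instead of the device count would avoid
  rejecting a healthy single-disk stripe set:

          case "$Raid_Type" in
                  stripe)
                          # Only refuse to activate when devices are actually
                          # missing; a 1-disk RAID0 with status "ok" is fine.
                          if [ -n "$Degraded" ]; then
                                  log_error "Cannot bring up a RAID0 array in degraded mode, not all devices present."
                                  return 2
                          fi
                          ;;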

  This is a problem because of GPT partitioning. If you don't start the
  RAID, downstream tools will attempt to partition sda directly. The RAID
  metadata at the end of the disk collides with the backup GPT header,
  and that ends up destroying the RAID set and making the server
  unbootable. The kernel hints at this condition:

  [    4.202136] GPT:Primary header thinks Alt. header is not at the end of the disk.
  [    4.202137] GPT:974608383 != 976773167
  [    4.202138] GPT:Alternate GPT header not at the end of the disk.
  [    4.202138] GPT:974608383 != 976773167

  Which is 100% true: the GPT was written to the RAID volume, not the raw
  disk, so sector 974608383 is the last sector of the RAID volume rather
  than of the disk.
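
  For the record, the numbers line up, assuming the device names above:
  the raw disk is 976773168 sectors, the RAID volume is 974608384, so the
  controller reserves 2164784 sectors (roughly 1 GiB) of DDF metadata at
  the end of the disk, exactly where a backup GPT header written against
  /dev/sda would land:

    # Size of the raw disk in 512-byte sectors (expect 976773168)
    blockdev --getsz /dev/sda
    # Size of the RAID volume (expect 974608384, as reported by dmraid above)
    blockdev --getsz /dev/mapper/ddf1_4c5349202020202080861d60000000004711471100001450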



