[Bug 1641078] Re: System cannot be booted up when root filesystem is on an LVM on two disks
Dimitri John Ledkov
launchpad at surgut.co.uk
Mon Jun 26 09:17:31 UTC 2017
This issue was rejected by Canonical and closed as invalid on
2017-01-18.
This bug is incorrectly filed against src:linux; I will move it to
src:s390-tools shortly.
Further discussion is about upstream features. Canonical does not
participate in s390-tools upstream development.
It is absolutely normal to expect users to regenerate boot files after
significantly modifying their root filesystem. On Ubuntu,
`update-initramfs -u` regenerates the initramfs and reruns zipl as
needed on s390x. This is no different from other Ubuntu platforms with
different bootloaders.
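For example, after growing the root volume group onto an additional
disk, something like the following keeps the boot files in sync (a
sketch: the device name is illustrative; the volume group name is taken
from the report below):

  # Extend the root VG/LV onto a second disk (illustrative device):
  pvcreate /dev/sdb
  vgextend ub01-vg /dev/sdb
  lvextend -l +100%FREE /dev/ub01-vg/root
  resize2fs /dev/ub01-vg/root
  # Regenerate the initramfs so early boot knows about the new PV;
  # on s390x this also reruns zipl as needed:
  update-initramfs -u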
Further discussion is about providing new features and/or workarounds in
upstream chzdev. Given that no external patches are accepted into
s390-tools, there is no meaningful way for me to participate in that
discussion between upstream developers.

My personal opinion is that adding new flags will not fix the user
experience here (the user in the original user story already forgot to
run update-initramfs -u, so I am not expecting a new flag to be used
either), and that on Ubuntu we would not regenerate the initramfs on
shutdown, as that is somewhat risky. On Ubuntu we ship a
zdev-root-update hook, as per the upstream recommendation, that calls
the appropriate commands to update the initramfs and zipl, i.e.
update-initramfs -u.

My personal opinion is that whenever the persistent and active
configurations are modified together by chzdev, it must call
zdev-root-update. Example: a chzdev dasd 0.0.0200 -e call should exec
zdev-root-update if 0200 was not previously configured in the persistent
configuration. But this is something for upstream to implement/fix:
zdev-root-update is not called often enough, and should be automatic
under more conditions. Having chzdev call zdev-root-update more often
would improve the usability of any Linux on z.
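For illustration, that hook essentially boils down to something like the
following (a minimal sketch, not the script as shipped):

  #!/bin/sh
  # zdev-root-update (sketch): called after the root device
  # configuration changes; refreshes the initramfs, which on s390x
  # reruns zipl through its post-update hooks.
  set -e
  update-initramfs -u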
** Package changed: linux (Ubuntu) => s390-tools (Ubuntu)
--
You received this bug notification because you are a member of Ubuntu
Foundations Bugs, which is subscribed to s390-tools in Ubuntu.
https://bugs.launchpad.net/bugs/1641078
Title:
System cannot be booted up when root filesystem is on an LVM on two
disks
Status in Ubuntu on IBM z Systems:
Invalid
Status in s390-tools package in Ubuntu:
Invalid
Bug description:
---Problem Description---
LVMed root file system spanning multiple disks cannot be booted up
---uname output---
Linux ntc170 4.4.0-38-generic #57-Ubuntu SMP Tue Sep 6 15:47:15 UTC 2016 s390x s390x s390x GNU/Linux
---Patches Installed---
n/a
Machine Type = z13
---System Hang---
cannot boot up the system after shutdown or reboot
---Debugger---
A debugger is not configured
---Steps to Reproduce---
Created the root file system on an LVM that spans two disks. After shutting down or rebooting, the system cannot come up.
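(For illustration, a hedged sketch of such a setup; the device names
are taken from the PV metadata below:)

  # Sketch of the failing configuration:
  pvcreate /dev/sda /dev/sdb5
  vgcreate ub01-vg /dev/sda /dev/sdb5
  lvcreate -l 100%FREE -n root ub01-vg
  mkfs.ext4 /dev/ub01-vg/root
  # Install the root filesystem on /dev/ub01-vg/root, then reboot;
  # at IPL the initramfs only finds the first PV.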
Stack trace output:
no
Oops output:
no
System Dump Info:
The system is not configured to capture a system dump.
Device driver error code:
Begin: Mounting root file system ... Begin: Running /scripts/local-top ... lvmetad is not active yet, using direct activation during sysinit
Couldn't find device with uuid 7PC3sg-i5Dc-iSqq-AvU1-XYv2-M90B-M0kO8V.
-Attach sysctl -a output to the bug.
More detailed installation description:
The installation was on FCP SCSI SAN volumes, each with two active
paths; multipath was involved. The system IPLed fine up to the point
that we expanded the root filesystem to span volumes. At boot time,
the system was unable to locate the second segment of the root
filesystem. The error message indicated this was due to lvmetad not
being active.
Error message:
Begin: Running /scripts/local-block ... lvmetad is not active yet, using direct activation during sysinit
Couldn't find device with uuid 7PC3sg-i5Dc-iSqq-AvU1-XYv2-M90B-M0kO8V
Failed to find logical volume "ub01-vg/root"
PV Volume information:

physical_volumes {
    pv0 {
        id = "L2qixM-SKkF-rQsp-ddao-gagl-LwKV-7Bw1Dz"
        device = "/dev/sdb5"          # Hint only
        status = ["ALLOCATABLE"]
        flags = []
        dev_size = 208713728          # 99.5225 Gigabytes
        pe_start = 2048
        pe_count = 25477              # 99.5195 Gigabytes
    }
    pv1 {
        id = "7PC3sg-i5Dc-iSqq-AvU1-XYv2-M90B-M0kO8V"
        device = "/dev/sda"           # Hint only
        status = ["ALLOCATABLE"]
        flags = []
        dev_size = 209715200          # 100 Gigabytes
        pe_start = 2048
        pe_count = 25599              # 99.9961 Gigabytes
    }
}
LV Volume Information:

logical_volumes {
    root {
        id = "qWuZeJ-Libv-DrEs-9b1a-p0QF-2Fj0-qgGsL8"
        status = ["READ", "WRITE", "VISIBLE"]
        flags = []
        creation_host = "ub01"
        creation_time = 1477515033    # 2016-10-26 16:50:33 -0400
        segment_count = 2

        segment1 {
            start_extent = 0
            extent_count = 921        # 3.59766 Gigabytes
            type = "striped"
            stripe_count = 1          # linear
            stripes = [
                "pv0", 0
            ]
        }
        segment2 {
            start_extent = 921
            extent_count = 25344      # 99 Gigabytes
            type = "striped"
            stripe_count = 1          # linear
            stripes = [
                "pv1", 0
            ]
        }
    }
}
Additional testing has been done with CKD volumes, and we see the same behavior. Only the UUID of the first volume in the VG can be located at boot, and the same messages are displayed for CKD disks, just with a different UUID listed:

lvmetad is not active yet, using direct activation during sysinit
Couldn't find device with uuid xxxxxxxxxxxxxxxxx

If the root file system has only one segment on the first volume, CKD or SCSI, the system will IPL. Because of this behavior, I do not believe the problem is related to SAN disk or multipath. I think it is due to the system not being able to read the UUID on any PV in the VG other than the IPL disk.
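(One way to check that hypothesis, as a hedged sketch:)

  # From the initramfs emergency shell, list the PVs that early
  # userspace can actually see:
  lvm pvscan
  # Or, on the running system, confirm the LVM tooling made it into
  # the current initramfs image:
  lsinitramfs /boot/initrd.img-$(uname -r) | grep lvm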
To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu-z-systems/+bug/1641078/+subscriptions