[Bug 1608495] Re: IMSM fakeraid handled by mdadm: unclean mounted volumes on shutdown/reboot
Stefan Bader
stefan.bader at canonical.com
Mon Oct 23 13:18:01 UTC 2017
Followed the instructions from comment #7:
Base installation: Xenial/16.04
ii mdadm 3.4-4ubuntu0.1
ii dracut-core 044+3-3
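For reference, the two version lines above are in the format printed by dpkg, e.g.:

  dpkg -l mdadm dracut-core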
Using an IMSM-based mdadm raid set backing /home:
/dev/mapper/datavg01-home 197G 121G 67G 65% /home
  PV           VG       Fmt  Attr PSize   PFree
  /dev/md126p6 datavg01 lvm2 a--  831,50g 353,85g
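The mount and PV details above were presumably gathered with something like:

  df -h /home    # the LV mounted on /home
  sudo pvs       # PVs, showing the md device backing the VG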
/dev/md126p6:
      Container : /dev/md/imsm0, member 0
     Raid Level : raid5
     Array Size : 871895713 (831.50 GiB 892.82 GB)
  Used Dev Size : unknown
   Raid Devices : 3
  Total Devices : 3

          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-asymmetric
     Chunk Size : 64K

           UUID : cc707f7d:77869bd6:de8d52a2:ca21e329
    Number   Major   Minor   RaidDevice State
       2       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       0       8       48        2      active sync   /dev/sdd
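Output in this form comes from mdadm's --detail mode, presumably along the lines of:

  sudo mdadm --detail /dev/md126p6    # the raid volume shown above
  sudo mdadm --detail /dev/md/imsm0   # the enclosing IMSM container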
I have now done two reboots, one from a text console directly after
installing the new packages and one from the GUI. In both cases the
array came up in sync.
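A quick way to confirm that after each reboot (a sketch, not quoted from the test run):

  cat /proc/mdstat                                    # no resync/recovery in progress
  sudo mdadm --detail /dev/md126p6 | grep 'State :'   # expect "State : clean"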
--
You received this bug notification because you are a member of Ubuntu
Foundations Bugs, which is subscribed to mdadm in Ubuntu.
https://bugs.launchpad.net/bugs/1608495
Title:
IMSM fakeraid handled by mdadm: unclean mounted volumes on
shutdown/reboot
Status in mdadm package in Ubuntu:
Fix Committed
Status in mdadm source package in Xenial:
Fix Committed
Status in mdadm source package in Yakkety:
Won't Fix
Status in mdadm source package in Zesty:
Fix Committed
Bug description:
Opening this report for Xenial and later, as this problem surfaces
again due to the move to systemd.
Background:
mdadm is used to create md raid volumes based on Intel Matrix Storage
Manager (IMSM) fakeraid metadata. The setup usually consists of a
container set that holds one or more raid volumes, which is why those
fakeraid volumes are more affected by timing issues on
shutdown/reboot.
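For illustration, such a container-plus-volume layout is typically
created along these lines (a sketch reusing the device names from the
test output above; an array created by the Intel option ROM would
simply be assembled rather than created):

  # Create the IMSM container from the three member disks.
  mdadm --create /dev/md/imsm0 --metadata=imsm --raid-devices=3 \
      /dev/sdb /dev/sdc /dev/sdd
  # Create a raid5 volume (64K chunks, as in the output above)
  # inside that container.
  mdadm --create /dev/md/vol0 --level=5 --chunk=64 --raid-devices=3 \
      /dev/md/imsm0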
In my specific setup I am using one of the IMSM raid volumes as an LVM
PV, and one LV of that is mounted as /home. The problem is that
unmounting /home on shutdown/reboot updates the filesystem superblock,
which causes the raid state to become dirty for a small period of
time. For that reason, under sysvinit there is an mdadm-waitidle
script which *must* be run after the umountroot script (or, for /home,
at least after umountfs) has run.
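For context, the operation such a script needs to perform late in
shutdown is essentially mdadm's --wait-clean mode; a minimal sketch:

  # Block until every array in /proc/mdstat has been marked clean.
  mdadm --wait-clean --scan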
With Xenial both umountroot and umountfs are softlinks to /dev/null in /lib/systemd/system (i.e. masked), so I am not sure they can still be used to delay mdadm-waitidle until *after* all filesystems are unmounted.
In practice I see that if /home is mounted on shutdown/reboot, the raid set will go into a full resync the next time I boot (an additional pain, but a different problem: the resync appears to be much more aggressive than in the past, delaying boot a lot and rendering the system barely usable until it finishes). If I manually unmount /home before the reboot, the raid set stays clean.
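For reference, systemd does provide a hook point that runs only after
all filesystems have been unmounted (or remounted read-only):
executables dropped into /lib/systemd/system-shutdown/ are invoked by
systemd-shutdown(8) just before the final reboot/poweroff. A minimal
sketch of using that for this problem (illustrative only, not
necessarily what the committed fix does):

  #!/bin/sh
  # /lib/systemd/system-shutdown/mdadm.shutdown (illustrative path)
  # Runs after services are stopped and filesystems are unmounted,
  # late enough for the superblock update from the last umount to
  # have landed, so the arrays can be marked clean before power-off.
  mdadm --wait-clean --scan || true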
To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/1608495/+subscriptions