[Bug 1608495] Re: IMSM fakeraid handled by mdadm: unclean mounted volumes on shutdown/reboot

Joshua Diamant 1608495 at bugs.launchpad.net
Tue Feb 11 14:30:07 UTC 2020


I think this issue is occurring because I am running a bcache cache
device on a VROC IMSM 'fake raid' device. After following Dimitri's
finalrd steps in post #14 (this is required), I also had to add the
following script as '/lib/systemd/system-shutdown/bcache_stop'.

Please make sure you run 'chmod +x /lib/systemd/system-shutdown/bcache_stop'
after creating the file with the contents below:

#!/bin/bash

# Ask each bcache block device (/dev/bcacheN) to stop.
for stop in /sys/block/bcache[0-9]*/bcache/stop
do
        [ -f "$stop" ] || continue
        #echo "Stopping $stop"
        echo 1 > "$stop"
        echo 1 > "$stop"
done

# Then ask each registered cache set to stop/unregister.
for stop in /sys/fs/bcache/*/stop
do
        [ -f "$stop" ] || continue
        #echo "Stopping $stop"
        echo 1 > "$stop"
        echo 1 > "$stop"
done
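
If it helps, here is one way to install it (assuming the script above
was saved as 'bcache_stop' in the current directory; 'install' sets the
executable bit in one step, equivalent to cp plus the chmod mentioned
above):

install -m 0755 bcache_stop /lib/systemd/system-shutdown/bcache_stop
# double-check that an executable file is in place for systemd-shutdown
ls -l /lib/systemd/system-shutdown/bcache_stop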

-- 
You received this bug notification because you are a member of Ubuntu
Foundations Bugs, which is subscribed to mdadm in Ubuntu.
https://bugs.launchpad.net/bugs/1608495

Title:
  IMSM fakeraid handled by mdadm: unclean mounted volumes on
  shutdown/reboot

Status in mdadm package in Ubuntu:
  Fix Committed
Status in mdadm source package in Xenial:
  Fix Committed
Status in mdadm source package in Yakkety:
  Won't Fix
Status in mdadm source package in Zesty:
  Won't Fix

Bug description:
  Opening this report for Xenial and later, as this problem surfaces
  again due to the move to systemd.

  Background:

  mdadm is used to create md raid volumes based on Intel Matrix Storage
  Manager fakeraid metadata. The setup usually consists of a container
  set that holds one or more raid volumes, which is the reason those
  fakeraid volumes are more affected by timing issues on
  shutdown/reboot.
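
  To make the layout concrete, an IMSM setup of this kind is created
  roughly as sketched below (illustrative only; the device names and
  the RAID level are placeholders, not my actual configuration):

    # create the IMSM container from the member disks
    mdadm --create /dev/md/imsm0 --metadata=imsm --raid-devices=2 /dev/sda /dev/sdb
    # create a raid volume inside that container
    mdadm --create /dev/md/vol0 --level=1 --raid-devices=2 /dev/md/imsm0
    # both the container and the volume show up here
    mdadm --detail --scan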

  In my specific setup I am using one of the IMSM raid volumes as an
  LVM PV, and one LV of it is mounted as /home. The problem is that
  unmounting /home on shutdown/reboot updates the filesystem
  superblock, which causes the raid state to become dirty for a short
  period of time. For that reason, with sysvinit scripts there is an
  mdadm-waitidle script which *must* run after the umountroot script
  (or, for /home, at least after umountfs) has run.
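
  In effect, the required ordering on the way down is something like
  the sketch below (a sketch only, not the literal contents of
  mdadm-waitidle; --wait-clean is the mdadm operation that waits for
  arrays to be marked clean):

    # only after every filesystem on the raid set is unmounted ...
    umount /home
    # ... wait for all md arrays to reach a clean state before power-off
    mdadm --wait-clean --scan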

  With Xenial, both umountroot and umountfs are symlinks to /dev/null
  in /lib/systemd/system, so I am not sure they can be used to delay
  mdadm-waitidle until *after* all filesystems are unmounted.

  In practice I see that if /home is still mounted on shutdown/reboot,
  the raid set goes into a full resync on the next boot (an additional
  pain, though a different problem: the resync appears to be much more
  aggressive than in the past, delaying boot considerably and rendering
  the system barely usable until it finishes). If I manually unmount
  /home before the reboot, the raid set is fine.
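
  For anyone hitting the resync, it can at least be watched and
  throttled while it runs (a sketch; md126 stands in for whatever name
  the IMSM volume gets on your system):

    # watch resync progress
    cat /proc/mdstat
    # cap the resync rate for this array (value is in KiB/s)
    echo 10000 > /sys/block/md126/md/sync_speed_max
    # or system-wide
    sysctl -w dev.raid.speed_limit_max=10000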

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/1608495/+subscriptions


