[Bug 1831733] Re: ledmon incorrectly sets the status LED

Ubuntu Foundations Team Bug Bot 1831733 at bugs.launchpad.net
Thu Aug 29 12:26:03 UTC 2019


The attachment "debdiff" seems to be a debdiff.  The ubuntu-sponsors
team has been subscribed to the bug report so that they can review and
hopefully sponsor the debdiff.  If the attachment isn't a patch, please
remove the "patch" flag from the attachment, remove the "patch" tag, and,
if you are a member of ~ubuntu-sponsors, unsubscribe the team.

[This is an automated message sent by a Launchpad user owned by
~brian-murray; for any issue please contact him.]

** Tags added: patch

-- 
You received this bug notification because you are a member of Ubuntu
Sponsors Team, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1831733

Title:
  ledmon incorrectly sets the status LED

Status in OEM Priority Project:
  Confirmed
Status in ledmon package in Ubuntu:
  Confirmed

Bug description:
  Description:

  After creating a RAID volume, deleting it, and creating a second RAID
  volume (using a subset of the disks from the first volume), the status
  LEDs on the disks left in the container show ‘failure’.

  Steps to reproduce:
  1.	Turn on ledmon:
  # ledmon --all

  2.	Create RAID container:
  # mdadm --create /dev/md/imsm0 --metadata=imsm --raid-devices=3 /dev/nvme5n1 /dev/nvme4n1 /dev/nvme2n1 --run --force

  3.	Create first RAID volume:
  # mdadm --create /dev/md/Volume --level=5 --chunk 64 --raid-devices=3 /dev/nvme5n1 /dev/nvme4n1 /dev/nvme2n1 --run --force

  4.	Stop first RAID volume:
  # mdadm --stop /dev/md/Volume

  5.	Delete first RAID volume:
  # mdadm --kill-subarray=0 /dev/md127

  6.	Create a second RAID volume in the same container, using fewer disks than the first volume (a subset of the same disks):
  # mdadm --create /dev/md/Volume --level=1 --raid-devices=2 /dev/nvme5n1 /dev/nvme4n1 --run

  7.	Verify the status LED on container member disks which are not
  part of the second RAID volume.

  Expected results:
  Disks in the container which are not part of the second volume should have a ‘normal’ status LED.

  Actual results:
  Disks in the container which are not part of the second volume have a ‘failure’ status LED.
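  As a possible manual workaround (a sketch, not a fix for the underlying
  ledmon bug): the ledctl utility, shipped in the same ledmon package,
  can set a pattern on a specific drive. The device path below is taken
  from the reproduction steps above, and the service name "ledmon" is an
  assumption about how ledmon is started on the affected system.

  # Stop ledmon first so it does not immediately re-apply the stale
  # 'failure' state, then clear the LED on the disk that was left out
  # of the second volume.
  systemctl stop ledmon        # or: pkill ledmon
  ledctl normal=/dev/nvme2n1   # set the drive's status LED back to 'normal'
  systemctl start ledmon       # resume LED monitoring

  Note that because ledmon and ledctl both drive the same enclosure/VMD
  LEDs, a running ledmon may overwrite whatever ledctl sets.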

To manage notifications about this bug go to:
https://bugs.launchpad.net/oem-priority/+bug/1831733/+subscriptions
