[Bug 1874424] Re: USB external disks are not torn down correctly when they are unplugged

Dan Streetman 1874424 at bugs.launchpad.net
Wed Jun 30 22:29:50 UTC 2021


please reopen if this is still an issue

** Changed in: systemd (Ubuntu)
       Status: New => Invalid

-- 
You received this bug notification because you are a member of Ubuntu
Foundations Bugs, which is subscribed to systemd in Ubuntu.
https://bugs.launchpad.net/bugs/1874424

Title:
  USB external disks are not torn down correctly when they are unplugged

Status in systemd package in Ubuntu:
  Invalid

Bug description:
  System information:

  $ lsb_release -rd
  Description:	Ubuntu 19.10
  Release:	19.10
  $ apt-cache policy udev
  udev:
    Installed: 242-7ubuntu3.7
    Candidate: 242-7ubuntu3.7
    Version table:
   *** 242-7ubuntu3.7 500
          500 http://gb.archive.ubuntu.com/ubuntu eoan-updates/main amd64 Packages
          100 /var/lib/dpkg/status
       242-7ubuntu3.6 500
          500 http://security.ubuntu.com/ubuntu eoan-security/main amd64 Packages
       242-7ubuntu3 500
          500 http://gb.archive.ubuntu.com/ubuntu eoan/main amd64 Packages

  I have a USB3 external SSD (VID/PID is xxxx/xxxx).  The whole device
  is encrypted (i.e. /dev/sdc is a LUKS volume) and the
  encrypted volume contains an LVM2 physical volume, a volume group
  called "vms" and two logical volumes.
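
  For reference, a layout like this would be created roughly as
  follows; the device name /dev/sdX and the LV sizes are placeholders,
  not taken from my actual setup:

  $ sudo cryptsetup luksFormat /dev/sdX        # encrypt the whole device
  $ sudo cryptsetup open /dev/sdX vms_crypt    # unlock it
  $ sudo pvcreate /dev/mapper/vms_crypt        # LVM PV inside the LUKS mapping
  $ sudo vgcreate vms /dev/mapper/vms_crypt
  $ sudo lvcreate -L 100G -n veeabuild vms
  $ sudo lvcreate -L 100G -n veea-mirror vms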

  If I plug it into a booted system, everything works as expected.  I'm
  prompted for the volume password, the volume is unlocked and LVM then
  maps the logical volumes:

  $ ls -l /dev/mapper
  total 0
  crw------- 1 root root 10, 236 Apr 23 11:49 control
  lrwxrwxrwx 1 root root       7 Apr 23 11:51 luks-5e586d40-5f49-4c33-8f73-22da39d2728a -> ../dm-3
  lrwxrwxrwx 1 root root       7 Apr 23 11:49 nvme0n1p3_crypt -> ../dm-0
  lrwxrwxrwx 1 root root       7 Apr 23 11:49 vgubuntu-root -> ../dm-1
  lrwxrwxrwx 1 root root       7 Apr 23 11:49 vgubuntu-swap_1 -> ../dm-2
  lrwxrwxrwx 1 root root       7 Apr 23 11:51 vms-veeabuild -> ../dm-4
  lrwxrwxrwx 1 root root       7 Apr 23 11:51 vms-veea--mirror -> ../dm-5

  (The volumes "vms-*" are the ones on the external disk).  I can mount
  the volumes and use them.
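
  For example (the mount point is just a placeholder):

  $ sudo mount /dev/mapper/vms-veeabuild /mnt/veeabuild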

  If I'm careful to unmount the LVM volumes correctly and lock the
  disk, everything works as expected:

  $ vgchange -a n vms
    0 logical volume(s) in volume group "vms" now active
  $ sudo udisksctl lock -b /dev/sdc
  Locked /dev/sdc.
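
  The full manual teardown, roughly, is: unmount any mounted logical
  volumes, deactivate the volume group, then lock the LUKS device
  (mount points are placeholders):

  $ sudo umount /mnt/veeabuild                 # unmount each mounted LV
  $ sudo umount /mnt/veea-mirror
  $ sudo vgchange -a n vms                     # deactivate the VG
  $ sudo udisksctl lock -b /dev/sdc            # close the LUKS mapping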

  I then unplug the disk and repeat the process - everything works.  If,
  for whatever reason, the device gets unplugged without the proper
  cleanup, things get messy:

  $ ls -l /dev/mapper
  total 0
  crw------- 1 root root 10, 236 Apr 23 11:49 control
  lrwxrwxrwx 1 root root       7 Apr 23 12:04 luks-5e586d40-5f49-4c33-8f73-22da39d2728a -> ../dm-3
  lrwxrwxrwx 1 root root       7 Apr 23 11:49 nvme0n1p3_crypt -> ../dm-0
  lrwxrwxrwx 1 root root       7 Apr 23 11:49 vgubuntu-root -> ../dm-1
  lrwxrwxrwx 1 root root       7 Apr 23 11:49 vgubuntu-swap_1 -> ../dm-2
  lrwxrwxrwx 1 root root       7 Apr 23 12:04 vms-veeabuild -> ../dm-4
  lrwxrwxrwx 1 root root       7 Apr 23 12:04 vms-veea--mirror -> ../dm-5

  Note that the "vms-*" volumes are still there.

  $ ls -l /dev/dm-*
  brw-rw---- 1 root disk 253, 0 Apr 23 11:49 /dev/dm-0
  brw-rw---- 1 root disk 253, 1 Apr 23 11:49 /dev/dm-1
  brw-rw---- 1 root disk 253, 2 Apr 23 11:49 /dev/dm-2
  brw-rw---- 1 root disk 253, 3 Apr 23 12:04 /dev/dm-3
  brw-rw---- 1 root disk 253, 4 Apr 23 12:04 /dev/dm-4
  brw-rw---- 1 root disk 253, 5 Apr 23 12:04 /dev/dm-5

  The /dev/dm-* nodes still exist.
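
  The stale mappings are also visible to device-mapper directly; a
  quick way to inspect them (output elided here) is:

  $ sudo dmsetup info -c                       # list dm devices and their state
  $ sudo dmsetup ls --tree                     # show how the mappings stack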

  However, sdc has been removed:

  $ ls -l /dev/sdc
  ls: cannot access '/dev/sdc': No such file or directory

  If I then plug the disk in again, I'm again prompted for the
  passphrase and the disk is mapped to /dev/sdd (not /dev/sdc like
  previous times).  If I try to mount one of the volumes:

  $ sudo mount /dev/mapper/vms-veeabuild mnt2
  mount: /home/tkcook/mnt2: can't read superblock on /dev/mapper/vms-veeabuild.
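
  A useful check at this point - I would expect the stale mappings to
  still reference the major:minor of the removed disk rather than the
  newly-arrived /dev/sdd - is:

  $ sudo dmsetup table vms-veeabuild
  $ sudo dmsetup deps luks-5e586d40-5f49-4c33-8f73-22da39d2728a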

  Something is not getting cleaned up when the disk is forcibly removed.
  There doesn't seem to be any way to clean up from here; vgchange can't
  deactivate the volume group and udisksctl can't lock the LUKS volume.
  The only way I've found of using the disk again is to reboot (!)
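
  For reference, the low-level cleanup I would expect to work is to
  remove the stale mappings top-down with dmsetup (the LVs first, then
  the crypt mapping they sit on), though in this state they may simply
  fail with "device busy":

  $ sudo dmsetup remove vms-veeabuild
  $ sudo dmsetup remove vms-veea--mirror
  $ sudo dmsetup remove luks-5e586d40-5f49-4c33-8f73-22da39d2728a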

  I'm raising this on the udev package but that is a bit of a guess; I'm
  assuming this is udev not processing the disconnection event correctly
  (though since I can't find a way of cleaning this up, it's not clear
  what it could do).
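
  One way to check whether udev sees the disconnection at all would be
  to watch events while unplugging the disk (output elided):

  $ udevadm monitor --kernel --udev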

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1874424/+subscriptions


