[Bug 2116553] Re: Orphaned multipath devices not removed after volume detach on PureStorage iscsi
Erlon R. Cruz
2116553 at bugs.launchpad.net
Thu Mar 19 21:45:12 UTC 2026
** Also affects: cloud-archive
Importance: Undecided
Status: New
--
You received this bug notification because you are a member of Ubuntu
OpenStack, which is subscribed to Ubuntu Cloud Archive.
https://bugs.launchpad.net/bugs/2116553
Title:
Orphaned multipath devices not removed after volume detach on
PureStorage iscsi
Status in Ubuntu Cloud Archive:
New
Status in os-brick:
Fix Released
Bug description:
In an OpenStack Caracal (2024.1) environment using the PureStorage iSCSI
backend with multipath enabled, volumes detached from VMs are NOT being
correctly cleaned up by os-brick, leaving behind orphaned multipath
device entries. This behavior has been observed consistently with os-
brick version 6.7.
Environment:
OpenStack release: Caracal
OS: Ubuntu 22.04 (Jammy)
os-brick version: 6.7.0-0ubuntu1~cloud0
Storage backend: PureStorage (iSCSI) with multipath enabled
ii os-brick-common 6.7.0-0ubuntu1~cloud0 all Library for managing local volume attaches - common files
ii python3-os-brick 6.7.0-0ubuntu1~cloud0 all Library for managing local volume attaches - Python 3.x
When detaching a volume from a VM, the device is logically detached in
libvirt, and os-brick logs indicate it processes the detach request.
However, the corresponding multipath device (e.g., /dev/dm-8) is never
removed from the host. This leaves lingering multipath devices in a
faulty state.
Example of an orphaned device after volume detach:
<driver name="qemu" type="raw" cache="none" discard="unmap" io="native"/>
<alias name="ua-ef3ccaac-1c22-4a2c-a7c8-76be527f5b7c"/>
<source dev="/dev/dm-8"/>
<target dev="vdb" bus="virtio"/>
<serial>ef3ccaac-1c22-4a2c-a7c8-76be527f5b7c</serial>
<address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
</disk>
detach_device /usr/lib/python3/dist-packages/nova/virt/libvirt/guest.py:470
2025-07-10 15:48:39.424 49779 DEBUG nova.virt.libvirt.driver [None req-44ebffb7-a76a-4386-be03-42fe71ad57d9 - - - - - -] Received event <DeviceRemovedEvent: 1752162519.4242225, 2cf904d2-288c-4778-aafe-c5d3f8b48b38 => ua-ef3ccaac-1c22-4a2c-a7c8-76be527f5b7c> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py:2429
2025-07-10 15:48:39.425 49779 DEBUG nova.virt.libvirt.driver [None req-94e063b0-ebba-4443-b16a-9e75a50a0d39 4550e84840264aedad9422fdc0bfd4a3 a9193f1db3fd424c9690c78098e36305 - - fe93567926e448e99b0e4c8bbbe8ddd6 fe93567926e448e99b0e4c8bbbe8ddd6] Start waiting for the detach event from libvirt for device vdb with device alias ua-ef3ccaac-1c22-4a2c-a7c8-76be527f5b7c for instance 2cf904d2-288c-4778-aafe-c5d3f8b48b38 _detach_from_live_and_wait_for_event /usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py:2658
2025-07-10 15:48:39.427 49779 INFO nova.virt.libvirt.driver [None req-94e063b0-ebba-4443-b16a-9e75a50a0d39 4550e84840264aedad9422fdc0bfd4a3 a9193f1db3fd424c9690c78098e36305 - - fe93567926e448e99b0e4c8bbbe8ddd6 fe93567926e448e99b0e4c8bbbe8ddd6] Successfully detached device vdb from instance 2cf904d2-288c-4778-aafe-c5d3f8b48b38 from the live domain config.
2025-07-10 15:48:39.429 49779 DEBUG oslo_concurrency.lockutils [None req-94e063b0-ebba-4443-b16a-9e75a50a0d39 4550e84840264aedad9422fdc0bfd4a3 a9193f1db3fd424c9690c78098e36305 - - fe93567926e448e99b0e4c8bbbe8ddd6 fe93567926e448e99b0e4c8bbbe8ddd6] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3/dist-packages/oslo_concurrency/lockutils.py:402
2025-07-10 15:48:39.429 49779 DEBUG oslo_concurrency.lockutils [None req-94e063b0-ebba-4443-b16a-9e75a50a0d39 4550e84840264aedad9422fdc0bfd4a3 a9193f1db3fd424c9690c78098e36305 - - fe93567926e448e99b0e4c8bbbe8ddd6 fe93567926e448e99b0e4c8bbbe8ddd6] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.000s inner /usr/lib/python3/dist-packages/oslo_concurrency/lockutils.py:407
2025-07-10 15:48:39.430 49779 DEBUG os_brick.initiator.connector [None req-94e063b0-ebba-4443-b16a-9e75a50a0d39 4550e84840264aedad9422fdc0bfd4a3 a9193f1db3fd424c9690c78098e36305 - - fe93567926e448e99b0e4c8bbbe8ddd6 fe93567926e448e99b0e4c8bbbe8ddd6] Factory for ISCSI on None factory /usr/lib/python3/dist-packages/os_brick/initiator/connector.py:281
2025-07-10 15:48:39.430 49779 DEBUG oslo_concurrency.lockutils [None req-94e063b0-ebba-4443-b16a-9e75a50a0d39 4550e84840264aedad9422fdc0bfd4a3 a9193f1db3fd424c9690c78098e36305 - - fe93567926e448e99b0e4c8bbbe8ddd6 fe93567926e448e99b0e4c8bbbe8ddd6] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.001s inner /usr/lib/python3/dist-packages/oslo_concurrency/lockutils.py:421
2025-07-10 15:48:39.430 49779 DEBUG nova.virt.libvirt.volume.iscsi [None req-94e063b0-ebba-4443-b16a-9e75a50a0d39 4550e84840264aedad9422fdc0bfd4a3 a9193f1db3fd424c9690c78098e36305 - - fe93567926e448e99b0e4c8bbbe8ddd6 fe93567926e448e99b0e4c8bbbe8ddd6] [instance: 2cf904d2-288c-4778-aafe-c5d3f8b48b38] calling os-brick to detach iSCSI Volume disconnect_volume /usr/lib/python3/dist-packages/nova/virt/libvirt/volume/iscsi.py:72
2025-07-10 15:48:39.430 49779 DEBUG os_brick.initiator.connectors.iscsi [None req-94e063b0-ebba-4443-b16a-9e75a50a0d39 4550e84840264aedad9422fdc0bfd4a3 a9193f1db3fd424c9690c78098e36305 - - fe93567926e448e99b0e4c8bbbe8ddd6 fe93567926e448e99b0e4c8bbbe8ddd6] ==> disconnect_volume: call "{'args': (<os_brick.initiator.connectors.iscsi.ISCSIConnector object at 0x7f99502a34c0>, {'target_discovered': False, 'discard': True, 'addressing_mode': 'SAM2', 'target_luns': [4, 4, 4, 4], 'target_iqns': ['iqn.2010-06.com.purestorage:flasharray.91f9bc2accd53f7', 'iqn.2010-06.com.purestorage:flasharray.91f9bc2accd53f7', 'iqn.2010-06.com.purestorage:flasharray.91f9bc2accd53f7', 'iqn.2010-06.com.purestorage:flasharray.91f9bc2accd53f7'], 'target_portals': ['10.31.180.243:3260', '10.31.180.244:3260', '10.31.180.245:3260', '10.31.180.246:3260'], 'wwn': '3624a93704f4c8fd636cb4dd300012044', 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False, 'device_path': '/dev/dm-8'}, None), 'kwargs': {'force': False}}" trace_logging_wrapper /usr/lib/python3/dist-packages/os_brick/utils.py:176
2025-07-10 15:48:39.431 49779 WARNING os_brick.initiator.connectors.base [None req-94e063b0-ebba-4443-b16a-9e75a50a0d39 4550e84840264aedad9422fdc0bfd4a3 a9193f1db3fd424c9690c78098e36305 - - fe93567926e448e99b0e4c8bbbe8ddd6 fe93567926e448e99b0e4c8bbbe8ddd6] Service needs to call os_brick.setup() before connecting volumes, if it doesn't it will break on the next release
2025-07-10 15:48:39.431 49779 DEBUG os_brick.initiator.connectors.base [None req-94e063b0-ebba-4443-b16a-9e75a50a0d39 4550e84840264aedad9422fdc0bfd4a3 a9193f1db3fd424c9690c78098e36305 - - fe93567926e448e99b0e4c8bbbe8ddd6 fe93567926e448e99b0e4c8bbbe8ddd6] Acquiring lock "connect_volume" by "os_brick.initiator.connectors.iscsi.ISCSIConnector.disconnect_volume" inner /usr/lib/python3/dist-packages/os_brick/initiator/connectors/base.py:68
2025-07-10 15:48:39.431 49779 DEBUG os_brick.initiator.connectors.base [None req-94e063b0-ebba-4443-b16a-9e75a50a0d39 4550e84840264aedad9422fdc0bfd4a3 a9193f1db3fd424c9690c78098e36305 - - fe93567926e448e99b0e4c8bbbe8ddd6 fe93567926e448e99b0e4c8bbbe8ddd6] Lock "connect_volume" acquired by "os_brick.initiator.connectors.iscsi.ISCSIConnector.disconnect_volume" :: waited 0.001s inner /usr/lib/python3/dist-packages/os_brick/initiator/connectors/base.py:73
2025-07-10 15:48:39.431 49779 DEBUG os_brick.initiator.connectors.iscsi [None req-94e063b0-ebba-4443-b16a-9e75a50a0d39 4550e84840264aedad9422fdc0bfd4a3 a9193f1db3fd424c9690c78098e36305 - - fe93567926e448e99b0e4c8bbbe8ddd6 fe93567926e448e99b0e4c8bbbe8ddd6] Getting connected devices for (ips,iqns,luns)=[('10.31.180.243:3260', 'iqn.2010-06.com.purestorage:flasharray.91f9bc2accd53f7', 4), ('10.31.180.244:3260', 'iqn.2010-06.com.purestorage:flasharray.91f9bc2accd53f7', 4), ('10.31.180.245:3260', 'iqn.2010-06.com.purestorage:flasharray.91f9bc2accd53f7', 4), ('10.31.180.246:3260', 'iqn.2010-06.com.purestorage:flasharray.91f9bc2accd53f7', 4)] _get_connection_devices /usr/lib/python3/dist-packages/os_brick/initiator/connectors/iscsi.py:830
2025-07-10 15:48:39.432 49779 INFO oslo.privsep.daemon [None req-94e063b0-ebba-4443-b16a-9e75a50a0d39 4550e84840264aedad9422fdc0bfd4a3 a9193f1db3fd424c9690c78098e36305 - - fe93567926e448e99b0e4c8bbbe8ddd6 fe93567926e448e99b0e4c8bbbe8ddd6] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmpr28r0jgz/privsep.sock']
2025-07-10 15:48:39.671 49779 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib/python3/dist-packages/ovs/poller.py:263
2025-07-10 15:48:39.845 49779 WARNING oslo.privsep.daemon [-] privsep log: Deprecated: Option "logdir" from group "DEFAULT" is deprecated. Use option "log-dir" from group "DEFAULT".
And corresponding multipath output:
1539.257737 | sdbt: prio = const (setting: emergency fallback - alua failed)
1539.258270 | sdbs: prio = const (setting: emergency fallback - alua failed)
1539.259023 | sdbu: prio = const (setting: emergency fallback - alua failed)
1539.259546 | sdbr: prio = const (setting: emergency fallback - alua failed)
mpathaj (3624a93704f4c8fd636cb4dd300012043) dm-8 PURE,FlashArray
size=10G features='0' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=0 status=enabled
|- 25:0:0:4 sdbt 68:112 failed faulty running
|- 17:0:0:4 sdbs 68:96 failed faulty running
|- 9:0:0:4 sdbu 68:128 failed faulty running
`- 33:0:0:4 sdbr 68:80 failed faulty running
The expected cleanup action (writing to /sys/block/sdX/device/delete),
as implemented in the os-brick code[0], does not appear to be
performed. This can be resolved manually by echoing to the sysfs
delete path for each faulty device.
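A minimal sketch of that manual cleanup, written as a dry run (it only prints the commands) since deleting SCSI devices is destructive. The map name mpathaj and the path names are taken from the multipath output above; substitute the values from your own host:

```shell
# Dry-run sketch of the manual cleanup for the faulty paths shown above.
# Map and path names come from `multipath -ll`; run the printed commands
# by hand (as root) on an affected host after double-checking them.
MAP=mpathaj                      # stale multipath map (dm-8)
PATHS="sdbr sdbs sdbt sdbu"      # failed paths belonging to that map
echo "multipathd del map $MAP"   # would flush the stale map first
for dev in $PATHS; do
    # writing 1 to the sysfs delete file removes the SCSI device,
    # which is the cleanup os-brick is expected to perform per [0]
    echo "echo 1 > /sys/block/$dev/device/delete"
done
```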
Attaching a new volume to a VM (especially with a different size) may reuse the stale device, resulting in I/O errors both on the hypervisor and inside the VM.
New VM instances may also incorrectly attach to these orphaned devices, leading to boot failures or disk corruption risks.
This behavior creates long-term stability and consistency issues for storage and compute services.
Workaround: as an alternative to the manual sysfs cleanup above, live-
migrate the VM to another compute host that does not have the orphaned
devices, then stop and start the VM.
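Before applying either workaround, hosts with orphaned maps can be identified by filtering the `multipath -ll` output for failed paths. A small sketch that reads that output on stdin (so it can also be tested against a saved copy):

```shell
# Print the name of each multipath map that still has a failed/faulty
# path, reading `multipath -ll`-style output on stdin, e.g.:
#   multipath -ll | sh find_faulty_maps.sh
# Map header lines contain the dm-N name; path lines carry the state.
awk '/ dm-[0-9]/ { map = $1 }        # remember the current map header
     /failed faulty/ { print map }   # report it for each faulty path
' | sort -u
```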
[0] https://github.com/openstack/os-brick/blob/master/os_brick/initiator/linuxscsi.py#L142
To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/2116553/+subscriptions