[Bug 1896617] Re: [SRU] Creation of image (or live snapshot) from the existing VM fails if libvirt-image-backend is configured to qcow2 starting from Ussuri

Maysam Fazeli 1896617 at bugs.launchpad.net
Sat Oct 31 21:54:26 UTC 2020


Guys, I followed the procedures and updated the packages, and the
libvirt-qemu user has been removed from the nova group, but I am still
getting the same error. The image type is raw, not qcow2, though. Here
are the logs:

2020-10-31 17:41:15.579 1735 INFO nova.compute.manager [req-4a879436-1412-45ad-b461-12aaceec4a72 53151ac9de83404882af6c50c66c0278 adf11129db9f4dc494a848389f1d82e0 - 29dc2dc4b31344cfbbb7e896e44026d6 29dc2dc4b31344cfbbb7e896e44026d6] [instance: e05d55df-85e2-44e7-8ea7-f7f060fbc3ba] Successfully reverted task state from image_pending_upload on failure for instance.
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server [req-4a879436-1412-45ad-b461-12aaceec4a72 53151ac9de83404882af6c50c66c0278 adf11129db9f4dc494a848389f1d82e0 - 29dc2dc4b31344cfbbb7e896e44026d6 29dc2dc4b31344cfbbb7e896e44026d6] Exception during message handling: libvirt.libvirtError: unable to verify existence of block copy target: Permission denied
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 2432, in snapshot
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server     metadata['location'] = root_disk.direct_snapshot(
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/nova/virt/libvirt/imagebackend.py", line 452, in direct_snapshot
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server     raise NotImplementedError(_('direct_snapshot() is not implemented'))
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server NotImplementedError: direct_snapshot() is not implemented
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server During handling of the above exception, another exception occurred:
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/oslo_messaging/rpc/dispatcher.py", line 276, in dispatch
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/oslo_messaging/rpc/dispatcher.py", line 196, in _do_dispatch
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/nova/exception_wrapper.py", line 77, in wrapped
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server     _emit_exception_notification(
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server     self.force_reraise()
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server     six.reraise(self.type_, self.value, self.tb)
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/six.py", line 703, in reraise
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server     raise value
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/nova/exception_wrapper.py", line 69, in wrapped
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server     return f(self, context, *args, **kw)
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 188, in decorated_function
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server     LOG.warning("Failed to revert task state for instance. "
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server     self.force_reraise()
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server     six.reraise(self.type_, self.value, self.tb)
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/six.py", line 703, in reraise
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server     raise value
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 159, in decorated_function
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server     return function(self, context, *args, **kwargs)
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/nova/compute/utils.py", line 1447, in decorated_function
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server     return function(self, context, *args, **kwargs)
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 216, in decorated_function
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server     compute_utils.add_instance_fault_from_exc(context,
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server     self.force_reraise()
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server     six.reraise(self.type_, self.value, self.tb)
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/six.py", line 703, in reraise
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server     raise value
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 205, in decorated_function
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server     return function(self, context, *args, **kwargs)
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 236, in decorated_function
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server     compute_utils.delete_image(
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server     self.force_reraise()
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server     six.reraise(self.type_, self.value, self.tb)
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/six.py", line 703, in reraise
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server     raise value
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 232, in decorated_function
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server     return function(self, context, image_id, instance,
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 3879, in snapshot_instance
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server     self._snapshot_instance(context, image_id, instance,
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 3912, in _snapshot_instance
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server     self.driver.snapshot(context, instance, image_id,
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 2471, in snapshot
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server     self._live_snapshot(context, instance, guest,
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 2680, in _live_snapshot
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server     dev.rebase(disk_delta, copy=True, reuse_ext=True, shallow=True)
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/nova/virt/libvirt/guest.py", line 815, in rebase
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server     return self._guest._domain.blockRebase(
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 193, in doit
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server     result = proxy_call(self._autowrap, f, *args, **kwargs)
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 151, in proxy_call
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server     rv = execute(f, *args, **kwargs)
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 132, in execute
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server     six.reraise(c, e, tb)
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/six.py", line 703, in reraise
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server     raise value
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 86, in tworker
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server     rv = meth(*args, **kwargs)
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/libvirt.py", line 1079, in blockRebase
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server     if ret == -1: raise libvirtError ('virDomainBlockRebase() failed', dom=self)
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server libvirt.libvirtError: unable to verify existence of block copy target: Permission denied
2020-10-31 17:41:15.583 1735 ERROR oslo_messaging.rpc.server
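
For context, the failing call at the bottom of the traceback is
virDomainBlockRebase() with the COPY, REUSE_EXT, and SHALLOW flags:
libvirt first verifies that the pre-created block copy target exists
and is accessible, and "Permission denied" means that check failed.
A rough virsh equivalent (domain name, disk, and path are
illustrative):

virsh blockcopy instance-00000001 vda \
    --dest /var/lib/nova/instances/snapshots/tmpXXXXXX/disk.delta \
    --shallow --reuse-external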

-- 
You received this bug notification because you are a member of Ubuntu
OpenStack, which is subscribed to Ubuntu Cloud Archive.
https://bugs.launchpad.net/bugs/1896617

Title:
  [SRU] Creation of image (or live snapshot) from the existing VM fails
  if libvirt-image-backend is configured to qcow2 starting from Ussuri

Status in OpenStack nova-compute charm:
  Invalid
Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive ussuri series:
  Fix Released
Status in Ubuntu Cloud Archive victoria series:
  Fix Released
Status in OpenStack Compute (nova):
  Invalid
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Focal:
  Fix Released
Status in nova source package in Groovy:
  Fix Released

Bug description:
  [Impact]

  tl;dr

  1) Creating an image from an existing VM fails if the qcow2 image backend is used, but everything works fine with the rbd image backend in nova-compute.
  2) openstack server image create --name <name of the new image> <instance name or uuid> fails with a seemingly unrelated error:

  $ openstack server image create --wait 842fa12c-19ee-44cb-bb31-36d27ec9d8fc
  HTTP 404 Not Found: No image found with ID f4693860-cd8d-4088-91b9-56b2f173ffc7

  == Details ==

  Two Tempest tests ([1] and [2]) from the 2018.02 Refstack test lists
  [0] are failing with the following exception:

  49701867-bedc-4d7d-aa71-7383d877d90c
  Traceback (most recent call last):
    File "/home/ubuntu/snap/fcbtest/14/.rally/verification/verifier-2d9cbf4d-fcbb-491d-848d-5137a9bde99e/repo/tempest/api/compute/base.py", line 369, in create_image_from_server
      waiters.wait_for_image_status(client, image_id, wait_until)
    File "/home/ubuntu/snap/fcbtest/14/.rally/verification/verifier-2d9cbf4d-fcbb-491d-848d-5137a9bde99e/repo/tempest/common/waiters.py", line 161, in wait_for_image_status
      image = show_image(image_id)
    File "/home/ubuntu/snap/fcbtest/14/.rally/verification/verifier-2d9cbf4d-fcbb-491d-848d-5137a9bde99e/repo/tempest/lib/services/compute/images_client.py", line 74, in show_image
      resp, body = self.get("images/%s" % image_id)
    File "/home/ubuntu/snap/fcbtest/14/.rally/verification/verifier-2d9cbf4d-fcbb-491d-848d-5137a9bde99e/repo/tempest/lib/common/rest_client.py", line 298, in get
      return self.request('GET', url, extra_headers, headers)
    File "/home/ubuntu/snap/fcbtest/14/.rally/verification/verifier-2d9cbf4d-fcbb-491d-848d-5137a9bde99e/repo/tempest/lib/services/compute/base_compute_client.py", line 48, in request
      method, url, extra_headers, headers, body, chunked)
    File "/home/ubuntu/snap/fcbtest/14/.rally/verification/verifier-2d9cbf4d-fcbb-491d-848d-5137a9bde99e/repo/tempest/lib/common/rest_client.py", line 687, in request
      self._error_checker(resp, resp_body)
    File "/home/ubuntu/snap/fcbtest/14/.rally/verification/verifier-2d9cbf4d-fcbb-491d-848d-5137a9bde99e/repo/tempest/lib/common/rest_client.py", line 793, in _error_checker
      raise exceptions.NotFound(resp_body, resp=resp)
  tempest.lib.exceptions.NotFound: Object not found
  Details: {'code': 404, 'message': 'Image not found.'}

  During handling of the above exception, another exception occurred:

  Traceback (most recent call last):
    File "/home/ubuntu/snap/fcbtest/14/.rally/verification/verifier-2d9cbf4d-fcbb-491d-848d-5137a9bde99e/repo/tempest/api/compute/images/test_images_oneserver.py", line 69, in test_create_delete_image
      wait_until='ACTIVE')
    File "/home/ubuntu/snap/fcbtest/14/.rally/verification/verifier-2d9cbf4d-fcbb-491d-848d-5137a9bde99e/repo/tempest/api/compute/base.py", line 384, in create_image_from_server
      image_id=image_id)
  tempest.exceptions.SnapshotNotFoundException: Server snapshot image d82e95b0-9c62-492d-a08c-5bb118d3bf56 not found.

  So far I was able to identify the following:

  1) https://github.com/openstack/tempest/blob/master/tempest/api/compute/images/test_images_oneserver.py#L69 invokes a "create image from server"
  2) It fails with the following error message in the nova-compute logs: https://pastebin.canonical.com/p/h6ZXdqjRRm/

  The same occurs if "openstack server image create --wait" is executed
  manually; however, according to
  https://docs.openstack.org/nova/ussuri/admin/migrate-instance-with-snapshot.html
  the VM has to be shut down before the image is created:

  "Shut down the source VM before you take the snapshot to ensure that
  all data is flushed to disk. If necessary, list the instances to view
  the instance name. Use the openstack server stop command to shut down
  the instance:"

  This step is definitely being skipped by the test (i.e. it is trying
  to perform the snapshot on top of a live VM).
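
  For reference, the documented sequence looks like this (a sketch
  based on the admin guide quoted above, reusing the UUID from the
  earlier example):

  $ openstack server stop 842fa12c-19ee-44cb-bb31-36d27ec9d8fc
  $ openstack server image create --wait 842fa12c-19ee-44cb-bb31-36d27ec9d8fc
  $ openstack server start 842fa12c-19ee-44cb-bb31-36d27ec9d8fc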

  FWIW, I am using libvirt-image-backend: qcow2 in my nova-compute
  application params, and I was able to confirm that if this parameter
  is changed to "libvirt-image-backend: rbd", the tests pass
  successfully.

  Also, there is a similar issue I was able to find:
  https://bugs.launchpad.net/nova/+bug/1885418 but it does not contain
  any useful information other than confirmation that OpenStack Ussuri
  with the libvirt backend has a problem with live snapshotting.

  [0] https://refstack.openstack.org/api/v1/guidelines/2018.02/tests?target=platform&type=required&alias=true&flag=false
  [1] tempest.api.compute.images.test_images_oneserver.ImagesOneServerTestJSON.test_create_delete_image[id-3731d080-d4c5-4872-b41a-64d0d0021314]
  [2] tempest.api.compute.images.test_images_oneserver.ImagesOneServerTestJSON.test_create_image_specify_multibyte_character_image_name[id-3b7c6fe4-dfe7-477c-9243-b06359db51e6]

  [Test Case]
  1. Deploy and configure OpenStack (Juju is used here).
  2. If upgrading to the fixed package, restart libvirt-guests:
     sudo systemctl restart libvirt-guests
  3. Create an OpenStack instance.
  4. Run: openstack server image create --wait <instance-uuid>
  5. The command succeeds if the fix is in place; it fails with a
     permissions error if not (see the consolidated commands below).
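
  The same steps as a single shell session (a sketch; the image,
  flavor, and network names are placeholders for whatever exists in
  the cloud under test):

  # On each compute node, after upgrading to the fixed package:
  $ sudo systemctl restart libvirt-guests

  # From a client with credentials loaded:
  $ openstack server create --image <image> --flavor <flavor> \
      --network <network> --wait snapshot-test
  $ openstack server image create --wait snapshot-test
  # If fixed, the snapshot reaches active status; if not, nova-compute
  # logs "unable to verify existence of block copy target: Permission
  # denied" and the command fails.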

  [Regression Potential]
  This fix reverts the nova group membership to what it was prior to
  the Focal version of the packages. If this fix were to introduce a
  regression, it would most likely manifest as a permissions issue.
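
  One way to confirm which state a compute node is in (assuming the
  default Ubuntu user and group names):

  $ getent group nova   # affected packages list libvirt-qemu here
  $ id libvirt-qemu     # should not include the nova group once fixed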

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-nova-compute/+bug/1896617/+subscriptions


