[Bug 2103819] [NEW] Nova cleanup tasks

Kumar 2103819 at bugs.launchpad.net
Fri Mar 21 17:03:49 UTC 2025


Public bug reported:

Greetings to the community...

Currently the nova-compute manager has a cleanup task that removes the
UUID directories of deleted VMs on the compute nodes. It is configured
via the option 'running_deleted_instance_action', which accepts four
values: reap, log, shutdown, and noop. The reap option deletes those
UUID directories in /var/lib/nova/instances whose VMs have deletion
records in the nova db (records marking a vm as deleted).
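For reference, this behaviour is enabled in the [DEFAULT] section of nova.conf:

```ini
[DEFAULT]
# Remove the local files of instances that the database marks as
# deleted but that are still present on this host.
running_deleted_instance_action = reap
```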

In our scenario, we run nova-manage db purge at periodic intervals. If
a VM is deleted while its compute node is unreachable or shut down, the
deletion still goes through and creates a deletion record in the db. If
the nova-manage purge task then removes old deletion records while the
compute node is still down, that record is purged as well.

Later, when the node comes back up, the nova manager checks the nova db
for the instances it owns but does not find the VM's deletion record, as
it was purged earlier. The cleanup task therefore leaves the stale VM
directories in place, adding to the disk usage of the compute node.

I have written a new periodic task for the nova manager, with the
following config option registered in the oslo.config file
'/openstack/venvs/nova-29.2.1/lib/python3.10/site-
packages/nova/conf/compute.py':

----
cleanup_orphaned_instances_opts = [
    cfg.IntOpt("cleanup_orphaned_instances",
               default=0,
               choices=[0, 1],
               help='Whether to cleanup orphaned instances. '
                    '0 = disabled, 1 = enabled.'),
]

ALL_OPTS = (compute_opts +
            resource_tracker_opts +
            allocation_ratio_opts +
            compute_manager_opts +
            interval_opts +
            timeout_opts +
            running_deleted_opts +
            cleanup_orphaned_instances_opts +
            instance_cleaning_opts +
            db_opts)
----
It accepts 0 or 1 only. Enabling it means setting
cleanup_orphaned_instances = 1 in the [DEFAULT] section of the nova
configuration,

and added the periodic task itself to the file
'/openstack/venvs/nova-29.2.1/lib/python3.10/site-
packages/nova/compute/manager.py'.

If any operators are interested, this could benefit them. If so, let me
know and I will share the code so that someone from the community can
review and test it.

** Affects: nova (Ubuntu)
     Importance: Undecided
         Status: New

-- 
You received this bug notification because you are a member of Ubuntu
OpenStack, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/2103819

Title:
  Nova cleanup tasks

Status in nova package in Ubuntu:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/2103819/+subscriptions




More information about the Ubuntu-openstack-bugs mailing list