[Bug 2097258] [NEW] [ceph-ansible][radosgw]: s3 secret key regeneration issue

Danish 2097258 at bugs.launchpad.net
Mon Feb 3 10:24:41 UTC 2025


Public bug reported:

I am getting a 500 Internal Server Error after deleting a user's S3
access and secret keys and creating new ones.
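For reference, the keys were removed and regenerated roughly as follows
(the user ID and access key below are placeholders, not the actual
values used):

```shell
# Remove the old S3 key pair for the RGW user (placeholder values)
radosgw-admin key rm --uid=myuser --key-type=s3 --access-key=OLDACCESSKEY

# Generate a new access/secret key pair for the same user
radosgw-admin key create --uid=myuser --key-type=s3 --gen-access-key --gen-secret
```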

I can access my buckets and their data with the new keys using the
s3cmd utility, but the Ceph MGR appears to cache the old credentials
and throws a 500 Internal Server Error when I try to get metadata from
the Ceph dashboard.

However, when I run "ceph mgr fail" so the Ceph dashboard moves to
another MGR, it starts working fine again.
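The failover workaround looks roughly like this (it assumes at least one
standby MGR daemon is available to take over):

```shell
# Check which MGR is currently active and which are on standby
ceph mgr stat

# Fail the active MGR so a standby takes over; the dashboard module
# moves to the new active MGR along with it
ceph mgr fail
```

After the failover, the dashboard served by the new active MGR returns
the user's metadata correctly, which is what suggests stale state in the
previous MGR rather than a problem with the keys themselves.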

Is there a way to clear the MGR's cache, or is there a bug in Ceph's
handling of regenerated S3 access keys?

** Affects: ceph (Ubuntu)
     Importance: Undecided
         Status: New

-- 
You received this bug notification because you are a member of Ubuntu
OpenStack, which is subscribed to ceph in Ubuntu.
https://bugs.launchpad.net/bugs/2097258

Title:
  [ceph-ansible][radosgw]: s3 secret key regeneration issue

Status in ceph package in Ubuntu:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/2097258/+subscriptions




More information about the Ubuntu-openstack-bugs mailing list