[Bug 2075541] Re: ceph-volume lvm new-db fails requiring 'bluestore-block-db-size' parameter

macchese 2075541 at bugs.launchpad.net
Wed Aug 14 07:51:22 UTC 2024


** Summary changed:

- ceph-volume lvm new-db requires 'bluestore-block-db-size' parameter
+ ceph-volume lvm new-db fails requiring 'bluestore-block-db-size' parameter

-- 
You received this bug notification because you are a member of Ubuntu
OpenStack, which is subscribed to ceph in Ubuntu.
https://bugs.launchpad.net/bugs/2075541

Title:
  ceph-volume lvm new-db fails requiring 'bluestore-block-db-size'
  parameter

Status in ceph package in Ubuntu:
  New

Bug description:
  When trying to add a new-db to an existing LVM OSD,
  ceph-volume lvm new-db fails requiring the 'bluestore-block-db-size' parameter, even though this should have been resolved by https://tracker.ceph.com/issues/55260

  my env:
  root@op1:~# lsb_release -r
  Release:        22.04
  root@op1:~# lsb_release -rd
  Description:    Ubuntu 22.04.4 LTS
  Release:        22.04

  ceph-volume                            18.2.0-0ubuntu3~cloud0

  lv db volume:
  root@op1:~# lvdisplay vol_db/c1
    --- Logical volume ---
    LV Path                /dev/vol_db/c1
    LV Name                c1
    VG Name                vol_db
    LV UUID                uCv6n3-Wa0H-0DaO-GGsc-Wa4c-VLfb-7KqG7X
    LV Write Access        read/write
    LV Creation host, time op1.maas, 2024-08-01 16:27:22 +0000
    LV Status              available
    # open                 0
    LV Size                166.00 GiB
    Current LE             42496
    Segments               1
    Allocation             inherit
    Read ahead sectors     auto
    - currently set to     256
    Block device           253:10


  What happens when I try to add a new block.db to the OSD:
  root@op1:~# ceph-volume lvm new-db --osd-id 42 --osd-fsid f720deb5-70eb-4a94-8c14-ca1d07e4a21c --target vol_db/c1 --no-systemd
  --> Making new volume at /dev/vol_db/c1 for OSD: 42 (/var/lib/ceph/osd/ceph-42)
   stdout: inferring bluefs devices from bluestore path
   stderr: Might need DB size specification, please set Ceph bluestore-block-db-size config parameter
  --> failed to attach new volume, error code:1
  --> Undoing lv tag set
  Failed to attach new volume: vol_db/c1
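
  As the stderr above suggests, a possible workaround (only a sketch, not
  verified on this system) is to give the tool an explicit DB size and
  retry. The byte value below is just an example sized to the 166 GiB
  target LV, and it is an assumption that ceph-bluestore-tool picks the
  option up from the local /etc/ceph/ceph.conf here:

  # example only: add an explicit DB size to the [osd] section of /etc/ceph/ceph.conf
  # (178241142784 bytes = 166 GiB, matching the target LV above)
  [osd]
  bluestore_block_db_size = 178241142784

  # then retry attaching the DB volume
  ceph-volume lvm new-db --osd-id 42 --osd-fsid f720deb5-70eb-4a94-8c14-ca1d07e4a21c --target vol_db/c1 --no-systemd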

  
  After that, despite the error, osd.42 appears to have a block.db:

  root@op1:~# ceph-volume lvm list 42

  ====== osd.42 ======

    [block]       /dev/ceph-f720deb5-70eb-4a94-8c14-ca1d07e4a21c/osd-block-f720deb5-70eb-4a94-8c14-ca1d07e4a21c

        block device              /dev/ceph-f720deb5-70eb-4a94-8c14-ca1d07e4a21c/osd-block-f720deb5-70eb-4a94-8c14-ca1d07e4a21c
        block uuid                Li93WA-x5oR-rep1-21D1-sJ9m-4lII-msenUU
        cephx lockbox secret      
        cluster fsid              7dfd9e3a-a5b6-11ee-9798-619012c1bb3a
        cluster name              ceph
        crush device class        
        db device                 /dev/vol_db/c1
        db uuid                   l10sEJ-a3Gt-m8AK-eXA6-qTJW-82su-VngPmP
        encrypted                 0
        osd fsid                  f720deb5-70eb-4a94-8c14-ca1d07e4a21c
        osd id                    42
        osdspec affinity          
        type                      block
        vdo                       0
        devices                   /dev/sdc


  But the block.db does not actually exist, and from that point on
  restarting osd.42 always fails.
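
  A possible way to get osd.42 starting again without rebuilding it is to
  drop the stale db references that the aborted new-db run left behind as
  LV tags on the block LV. This is only a sketch based on the output
  above; the ceph.db_device / ceph.db_uuid tag names are assumptions and
  should be confirmed with lvs first:

  # inspect the tags currently set on the block LV
  lvs -o lv_tags ceph-f720deb5-70eb-4a94-8c14-ca1d07e4a21c/osd-block-f720deb5-70eb-4a94-8c14-ca1d07e4a21c

  # remove the leftover db tags (values taken from the lvm list output above)
  lvchange --deltag 'ceph.db_device=/dev/vol_db/c1' \
           --deltag 'ceph.db_uuid=l10sEJ-a3Gt-m8AK-eXA6-qTJW-82su-VngPmP' \
           ceph-f720deb5-70eb-4a94-8c14-ca1d07e4a21c/osd-block-f720deb5-70eb-4a94-8c14-ca1d07e4a21c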

  The only solution is to remove osd.42 and re-create it with a block.db,
  but Ceph takes a long time to recover from the disk delete/re-create
  commands.
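
  If osd.42 really does have to be rebuilt, re-using the same OSD id via
  'ceph osd destroy' keeps its CRUSH position, so only that one OSD has to
  backfill instead of triggering a wider rebalance. A rough sketch only
  (device names taken from the output above; assumes a package-based
  deployment where ceph-volume is run by hand):

  # retire the broken OSD but keep its id and CRUSH entry
  ceph osd out 42
  ceph osd destroy 42 --yes-i-really-mean-it

  # wipe the data disk and re-create the OSD with the same id, this time with a block.db
  ceph-volume lvm zap /dev/sdc --destroy
  ceph-volume lvm create --osd-id 42 --data /dev/sdc --block.db vol_db/c1

  If ceph-volume refuses vol_db/c1 because of leftover tags from the failed
  new-db attempt, they could be cleared with lvchange --deltag as in the
  sketch above.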

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/2075541/+subscriptions



