Tuning ceph
Pshem Kowalczyk
pshem.k at gmail.com
Mon Nov 30 18:52:45 UTC 2015
Hi,
Thank you, that's exactly what I was after.
Kind regards
Pshem
On Tue, 1 Dec 2015 at 01:19 James Page <james.page at ubuntu.com> wrote:
> Hi Pshem
>
> On Thu, Nov 26, 2015 at 12:18 AM, Pshem Kowalczyk <pshem.k at gmail.com>
> wrote:
>
>> Hi,
>>
>> In this particular case I just want to make sure that my CRUSH settings are
>> applied to the various pools, and to be able to define my own pools (see
>> below).
>>
>> I'm trying to create two different pools for both nova-compute and
>> cinder-ceph (one on SSDs, the other on spinning drives). I have managed to
>> create a separate 'local storage' (LVM-based) cinder instance (and that
>> works fine with volume-type), but I'm completely unsure how to keep one
>> Ceph instance with different types of pools for cinder. At this stage it
>> doesn't look like the cinder (or cinder-ceph) charm allows you to specify
>> the pool to use (it only allows a volume-group for local LVM). For
>> nova-compute, the setup I'm basing this on is here:
>> https://ceph.com/planet/openstack-nova-configure-multiple-ceph-backends-on-one-hypervisor/
>> (which requires a single nova-compute charm to be aware of two pools). The
>> second one is probably less important, since I can (probably, not tried
>> that yet) deploy two nova-compute charms with different rbd-pool settings
>> and point them back at the same ceph charm.
>>
>
> You can deploy the cinder-ceph charm multiple times to support your
> multiple cinder pool requirement (so long as they both use the same ceph
> cluster):
>
> juju deploy cinder-ceph ceph-ssd-backend
> juju deploy cinder-ceph ceph-spinning-disk-backend
> juju add-relation cinder ceph-ssd-backend
> juju add-relation cinder ceph-spinning-disk-backend
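>
> Once both relations are in place, you would typically also create a volume
> type per backend so users can target each pool when creating volumes. A
> rough sketch (the volume_backend_name values below are an assumption on my
> part - check the backend sections the charm writes into cinder.conf):
>
> cinder type-create ceph-ssd
> cinder type-key ceph-ssd set volume_backend_name=ceph-ssd-backend
> cinder type-create ceph-spinning
> cinder type-key ceph-spinning set volume_backend_name=ceph-spinning-disk-backend
>
> # then, for example:
> cinder create --volume-type ceph-ssd --display-name test-vol 10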
>
> Adding those two relations will create two pools in the Ceph cluster
> ('ceph-ssd-backend' and 'ceph-spinning-disk-backend'); right now you will
> have to tune the CRUSH map by hand post deployment to target the SSDs and
> spinning disks for each pool. However, I do know that the storage team at
> Canonical is working on features in the Ceph charms to make pool management
> and configuration a lot easier to do via Juju actions - so although this is
> a bit painful today, it should get a lot easier...
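>
> For the manual CRUSH tuning, the usual approach is to pull the map, edit it
> to add separate ssd/hdd roots and a rule for each, push it back, and then
> point each pool at the appropriate rule. A minimal sketch (the rule ids are
> placeholders and depend on how you edit the map):
>
> ceph osd getcrushmap -o crushmap.bin
> crushtool -d crushmap.bin -o crushmap.txt
> # edit crushmap.txt: add ssd and hdd roots plus a rule for each
> crushtool -c crushmap.txt -o crushmap.new
> ceph osd setcrushmap -i crushmap.new
> ceph osd pool set ceph-ssd-backend crush_ruleset 1
> ceph osd pool set ceph-spinning-disk-backend crush_ruleset 2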
>
> That should resolve your challenges with regard to Cinder-presented
> volumes; right now the libvirt rbd backend is usable in the nova-compute
> charm, but it won't support Sebastian's two-compute-daemon configuration as
> described in the article you linked to.
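>
> For reference, that rbd backend boils down to a handful of nova.conf options
> along these lines (the pool, user and secret values here are only
> illustrative - the charm manages them for you via the ceph relation):
>
> [libvirt]
> images_type = rbd
> images_rbd_pool = nova
> images_rbd_ceph_conf = /etc/ceph/ceph.conf
> rbd_user = nova-compute
> rbd_secret_uuid = <libvirt-secret-uuid>
>
> The article's setup needs two such configurations (one per compute daemon),
> which is what the charm can't express today.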
>
> HTH
>
>