Ceph deployment
James Page
james.page at ubuntu.com
Fri Nov 20 09:12:47 UTC 2015
Pshem,
With that many LXC containers running on a single host, it's quite possible
that it took a while for all of the hook executions to complete and for the
OSDs to start up.
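If you hit this again, a few commands make it easy to tell whether the hooks
are still working through their queue rather than stuck (a rough sketch - the
unit name ceph-osd/14 is just an example taken from your status output):

juju status ceph ceph-osd       # watch the units' workload state settle
juju debug-log                  # follow hook execution across the environment
juju ssh ceph-osd/14 'ps aux | grep [c]eph-osd'   # check the OSD daemon is actually up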
On Thu, Nov 19, 2015 at 11:36 PM, Pshem Kowalczyk <pshem.k at gmail.com> wrote:
> Hi,
>
> It looks like Ceph just had to take its time to come up. After about 35
> minutes the ceph-osd units are showing as:
>
> ceph-osd/14   active   idle   1.25.0   1   node1.maas   Unit is ready (1 OSD)
> ceph-osd/15   active   idle   1.25.0   2   node2.maas   Unit is ready (1 OSD)
>
> All I have to do now is tell ceph to use only 2 OSDs.
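If by "use only 2 OSDs" you mean getting the cluster to report healthy with
just two OSDs, the usual approach is to drop the replica count on the pools
to 2 (the default is 3). Roughly, from one of the ceph units - the unit and
pool names below are only examples, so list your own pools first:

juju ssh ceph/0 'sudo ceph osd lspools'
juju ssh ceph/0 'sudo ceph osd pool set rbd size 2'
juju ssh ceph/0 'sudo ceph osd pool set rbd min_size 1'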
>
> kind regards
> Pshem
>
>
> On Fri, 20 Nov 2015 at 11:45 Pshem Kowalczyk <pshem.k at gmail.com> wrote:
>
>> Hi,
>>
>> Please see the complete juju status for the whole setup.
>>
>> This is a test/POC setup. I'm building this on 3 machines - two compute
>> nodes that also run ceph-osd, and one generic 'controller' node that
>> carries all the other functions.
>>
>> kind regards
>> Pshem
>>
>>
>>
>> On Fri, 20 Nov 2015 at 11:14 James Page <james.page at ubuntu.com> wrote:
>>
>>> Hi Pshem
>>>
>>> On Thu, Nov 19, 2015 at 10:04 PM, Pshem Kowalczyk <pshem.k at gmail.com>
>>> wrote:
>>>
>>>> Hi,
>>>>
>>>> I'm trying to deploy ceph and ceph-osd, however with this config:
>>>>
>>>> ceph:
>>>>   source: cloud:trusty-liberty
>>>>   fsid: 015cc90c-8f06-11e5-be28-0050569axxxx
>>>>   monitor-secret: AQB3QU5WiW3GEhAAVLK19SNzR46kXXXXXXX==
>>>>   osd-devices: /dev/sdb
>>>>   osd-reformat: 'yes'
>>>>
>>>> ceph-osd:
>>>>   source: cloud:trusty-liberty
>>>>   osd-devices: /dev/sdb
>>>>   osd-reformat: 'yes'
>>>>
>>>> and a relation between ceph and ceph-osd, I end up with this status:
>>>> No block devices detected using current configuration
>>>>
>>>> The devices are there, and a closer inspection of the Ceph setup reveals
>>>> that the keys are not copied onto the ceph-osd nodes and ceph-osd is
>>>> failing with:
>>>>
>>>> ERROR: osd init failed: (1) Operation not permitted
>>>>
>>>
>>> No keys in /etc/ceph is actually as intended - the OSDs use a special
>>> bootstrap key in /var/lib/ceph/bootstrap-osd.
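As a quick sanity check, something along these lines shows whether the
bootstrap key landed on an OSD node and is registered with the mons (the unit
names are only examples):

juju ssh ceph-osd/14 'sudo ls -l /var/lib/ceph/bootstrap-osd/'
juju ssh ceph/0 'sudo ceph auth list | grep -A1 bootstrap-osd'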
>>>
>>>> the only relation I have is ceph-osd:mon ceph:osd
>>>>
>>>
>>> That should be fine.
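For reference, that relation is added with:

juju add-relation ceph-osd:mon ceph:osd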
>>>
>>>
>>>>
>>>> ceph -s on the mon nodes gives:
>>>>
>>>> # ceph -s
>>>> cluster 015cc90c-8f06-11e5-be28-0050569a302e
>>>> health HEALTH_ERR
>>>> 464 pgs stuck inactive
>>>> 464 pgs stuck unclean
>>>> no osds
>>>> monmap e1: 3 mons at {juju-machine-0-lxc-25=10.0.11.79:6789/0,juju-machine-0-lxc-26=10.0.11.106:6789/0,juju-machine-0-lxc-27=10.0.11.107:6789/0}
>>>> election epoch 4, quorum 0,1,2 juju-machine-0-lxc-25,juju-machine-0-lxc-26,juju-machine-0-lxc-27
>>>> osdmap e5: 0 osds: 0 up, 0 in
>>>> pgmap v6: 464 pgs, 3 pools, 0 bytes data, 0 objects
>>>> 0 kB used, 0 kB / 0 kB avail
>>>> 464 creating
>>>>
>>>
>>> Looking at this output:
>>>
>>> 1) the mon cluster bootstrapped OK, which is good.
>>> 2) you're running the ceph charm in LXC containers, which is unusual:
>>> the ceph charm is a superset of the ceph-osd charm's function, so it is
>>> normally run on hardware as well, but typically with just 3 units.
>>>
>>> Could you provide the output of 'juju status' so we can see how you have
>>> the charms laid out in the deployment? That might help.
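Filtering and using YAML output keeps the paste manageable, for example:

juju status --format=yaml ceph ceph-osd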
>>>
>>> Cheers
>>>
>>> James
>>>
>>