[Bug 1098314] Re: pg_num inappropriately low on new pools

Dr. Jens Rosenboom j.rosenboom at x-ion.de
Tue Feb 16 15:03:21 UTC 2016


You can set "osd pool default pg num" and "osd pool default pgp num" to
higher values in your ceph.conf before creating pools if you want larger
defaults and do not want to specify them on the command line every time.
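For example, a minimal ceph.conf fragment (the value 128 is purely
illustrative, not a recommendation for your cluster):

```ini
[global]
# Applied to pools created without an explicit pg_num/pgp_num
osd pool default pg num = 128
osd pool default pgp num = 128
```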

For more complex setups, however, you will want to match the PG count to
each pool's requirements anyway, so the usefulness of this approach is
limited. Instead, use a tool like

http://ceph.com/pgcalc/

to tune your cluster to your requirements.
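The rule of thumb behind such tools can be sketched roughly as follows:
aim for on the order of 100 PGs per OSD counting all replicas, then
round up to a power of two. The target of 100 and the rounding step are
assumptions based on the upstream guidance, not pgcalc's exact
algorithm, and per-pool weighting is omitted here:

```python
def suggest_pg_num(osd_count, replica_size, target_pgs_per_osd=100):
    """Suggest a pg_num for a single pool (simplified heuristic)."""
    # Total PGs wanted across the cluster, divided by the replication
    # factor, since each PG lands on replica_size OSDs.
    raw = osd_count * target_pgs_per_osd / replica_size
    # Round up to the next power of two, Ceph's preferred pg_num shape.
    pg_num = 1
    while pg_num < raw:
        pg_num *= 2
    return pg_num

# The 18-OSD cluster from the bug report, assuming 3x replication:
print(suggest_pg_num(18, 3))
```

With these assumptions the suggested value is 1024 rather than the
default of 8, which illustrates why the reporter considers the default
inappropriately low.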

Finally, you can increase the PG count of an existing pool without
having to remove it, but you cannot decrease it again afterwards, so
starting with a small default is certainly better than a default that
is too large for a single-OSD test setup.

-- 
You received this bug notification because you are a member of Ubuntu
OpenStack, which is subscribed to ceph in Ubuntu.
https://bugs.launchpad.net/bugs/1098314

Title:
  pg_num inappropriately low on new pools

Status in Ubuntu Cloud Archive:
  Triaged
Status in ceph package in Ubuntu:
  Triaged

Bug description:
  Version: 0.48.2-0ubuntu2~cloud0

  On a Ceph cluster with 18 OSDs, new object pools are being created
  with a pg_num of 8.  Upstream recommends that there be more like 100
  or so PGs per OSD:
  http://article.gmane.org/gmane.comp.file-systems.ceph.devel/10242

  I've worked around this by removing and recreating the pools with a
  higher pg_num before we started using the cluster, but since we aim
  for fully automated deployment (using Juju and MaaS) this is
  suboptimal.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1098314/+subscriptions
