[Bug 1538812] Re: Relies on DNS to resolve own hostname

Florian Haas florian at hastexo.com
Thu Jan 28 12:58:26 UTC 2016


If my analysis here is correct, then this doesn't just break Ceph; it
would break all of the affected charms whenever hosts are not DNS-
resolvable. That never happens when using MAAS, since the MAAS host
acts as one's DNS server, but it would probably be the case with most
(all?) non-MAAS Juju providers.

** Also affects: nova-compute (Juju Charms Collection)
   Importance: Undecided
       Status: New

** Also affects: cinder (Juju Charms Collection)
   Importance: Undecided
       Status: New

-- 
You received this bug notification because you are a member of Ubuntu
OpenStack, which is subscribed to cinder in Juju Charms Collection.
Matching subscriptions: charm-bugs
https://bugs.launchpad.net/bugs/1538812

Title:
  Relies on DNS to resolve own hostname

Status in ceph package in Juju Charms Collection:
  New
Status in cinder package in Juju Charms Collection:
  New
Status in nova-compute package in Juju Charms Collection:
  New

Bug description:
  In charm/hooks/utils.py, the get_host_ip method seems to rely on DNS
  to resolve host names:

  @cached
  def get_host_ip(hostname=None):
      if config('prefer-ipv6'):
          return get_ipv6_addr()[0]

      hostname = hostname or unit_get('private-address')
      try:
          # Test to see if already an IPv4 address
          socket.inet_aton(hostname)
          return hostname
      except socket.error:
          # This may throw an NXDOMAIN exception; in which case
          # things are badly broken so just let it kill the hook
          answers = dns.resolver.query(hostname, 'A')
          if answers:
              return answers[0].address

  Firstly, the dns.resolver.query call strikes me as incredibly silly.
  What if the other node is not resolvable via DNS, but its name is in
  /etc/hosts? What if the other node *is* in DNS, but is a CNAME? And,
  bottom line, why not simply use socket.gethostbyname() here?
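  To illustrate, here is a minimal sketch of that alternative. It keeps
  the IPv4-literal check from the original but replaces the
  dns.resolver.query call with socket.gethostbyname(), which goes
  through the system resolver (NSS) and therefore honours /etc/hosts
  and transparently follows CNAMEs. The charm-specific pieces
  (@cached, config(), unit_get(), the prefer-ipv6 branch) are omitted
  here, so this is not a drop-in replacement for the charm's method:

  ```python
  import socket

  def get_host_ip(hostname):
      try:
          # Test to see if already an IPv4 address
          socket.inet_aton(hostname)
          return hostname
      except socket.error:
          # Use the system resolver: consults /etc/hosts and DNS
          # according to nsswitch.conf, and resolves CNAMEs too.
          # Raises socket.gaierror if the name cannot be resolved.
          return socket.gethostbyname(hostname)
  ```

  An unresolvable name would still kill the hook (socket.gaierror
  propagates), matching the original's stated behaviour for NXDOMAIN.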

  Secondly, this currently (as of today) breaks a Ceph deployment. It
  definitely didn't do so a month ago, so whatever it was that changed
  in the interim, this is a regression. I don't know if this code path
  wasn't there earlier, or whether it was just never hit. But it
  definitely used to work, and no longer does.

To manage notifications about this bug go to:
https://bugs.launchpad.net/charms/+source/ceph/+bug/1538812/+subscriptions



More information about the Ubuntu-openstack-bugs mailing list