Juju devel 2.0-alpha1 is available for testing
Antonio Rosales
antonio.rosales at canonical.com
Fri Jan 22 04:00:48 UTC 2016
On Thursday, January 21, 2016, Marco Ceppi <marco.ceppi at canonical.com>
wrote:
> Wow! A lot to play with tomorrow. Thanks for the release core team!
>
+1, some solid bits to start testing with. Looking forward to exploring.
Thanks juju-core folks.
-Antonio
> On Thu, Jan 21, 2016, 5:06 PM Curtis Hovey-Canonical
> <curtis at canonical.com> wrote:
>
>> # juju-core 2.0-alpha1
>>
>> A new development release of Juju, juju-core 2.0-alpha1, is now available.
>> This release replaces version 1.26-alpha3.
>>
>>
>> ## Getting Juju
>>
>> juju-core 2.0-alpha1 is available for Xenial and backported to earlier
>> series in the following PPA:
>>
>> https://launchpad.net/~juju/+archive/devel
>>
>> Windows, CentOS, and OS X users will find installers at:
>>
>> https://launchpad.net/juju-core/+milestone/2.0-alpha1
>>
>> Development releases use the "devel" simple-streams. You must configure
>> the 'agent-stream' option in your environments.yaml to use the matching
>> juju agents.
>>
>> Upgrading from older releases to this development release is not
>> supported.
>>
>>
>> ## Notable Changes
>>
>> * Terminology
>> * Testing Advice
>> * Command Name Changes
>> * Multi-Model Support Active by Default
>> * Native Support for Charm Bundles
>> * Multi Series Charms
>> * Improved Local Charm Deployment
>> * LXD Provider
>> * Microsoft Azure Resource Manager Provider
>> * New Support for Rackspace
>> * Bootstrap Constraints, Series
>> * Juju Logging Improvements
>> * Unit Agent Improvements
>> * API Login with Macaroons
>> * MAAS 1.8 Compatibility
>>
>>
>> ### Terminology
>>
>> In Juju 2.0, environments will now be referred to as "models". Commands
>> which referenced "environments" will now reference "models". Example:
>>
>> juju get-environment
>>
>> will become
>>
>> juju get-model
>>
>>
>> The "state-server" from Juju 1.x becomes a "controller" in 2.0. The
>> change in terminology will be done across several alphas, so messages
>> and errors provided by juju may still reference "environments".
>>
>>
>> ### Testing Advice
>>
>> Juju 2.0's new features and behaviours will confuse older Juju clients.
>> It is best to create a new juju home to ensure you can revert to a 1.x
>> Juju client. You can move an existing .juju/ directory out of the way or
>> create a new directory and export it for Juju to find like so:
>>
>> export JUJU_HOME=~/new-juju-testing
>>
>> If you accidentally use Juju 2.0 with a Juju 1.x home, and Juju 1.x
>> reports problems with the environment, you can delete ~/.go-cookie and
>> the environments/cache.yaml in the Juju home dir to unconfuse Juju 1.x.
>> Juju 2.0 will store its data in a new location soon.
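>>
>> For example, assuming the default ~/.juju home, the cleanup amounts to:
>>
>> rm ~/.go-cookie ~/.juju/environments/cache.yaml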
>>
>> It is not possible to test an upgrade from Juju 1.x to 2.0 at this time.
>> Juju will support this in future releases.
>>
>>
>> ### Command Name Changes
>>
>> After some time experimenting with nested command structures, the
>> decision was made to go back to a flat command namespace: although the
>> nested commands seemed like a good idea, they always felt clumsy and
>> awkward in use.
>>
>> So, we have the following changes:
>>
>> 1.25 command                           2.0-alpha1 command
>>
>> juju environment destroy               juju destroy-environment *
>> juju environment get                   juju get-environment **
>> juju environment get-constraints       juju get-constraints **
>> juju environment retry-provisioning    juju retry-provisioning
>> juju environment set                   juju set-environment **
>> juju environment set-constraints       juju set-constraints **
>> juju environment share                 juju share-environment
>> juju environment unset                 juju unset-environment **
>> juju environment unshare               juju unshare-environment
>> juju environment users                 juju list-shares
>> juju user add                          juju add-user
>> juju user change-password              juju change-user-password
>> juju user credentials                  juju get-user-credentials
>> juju user disable                      juju disable-user
>> juju user enable                       juju enable-user
>> juju user info                         juju show-user
>> juju user list                         juju list-users
>>
>> * the behaviour of destroy-environment has changed, see the section on
>> controllers below
>> ** these commands existed at the top level before and are now the
>> recommended form again.
>>
>> And for the extra commands previously under the "jes" feature flag but
>> now available out of the box:
>>
>> juju system create-environment         juju create-environment
>> juju system destroy                    juju destroy-controller
>> juju system environments               juju list-environments
>> juju system kill                       juju kill-controller
>> juju system list                       juju list-controllers
>> juju system list-blocks                juju list-all-blocks
>> juju system login                      juju login
>> juju system remove-blocks              juju remove-all-blocks
>> juju system use-environment            juju use-environment
>>
>> Fundamentally, listing things should start with 'list-', and looking at
>> an individual thing should start with 'show-'. 'remove' is generally
>> used for things that can be easily added back, whereas 'destroy' is used
>> when it is not so easy to add back.
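>>
>> For example (the user name "bob" is purely illustrative):
>>
>> juju list-users
>> juju show-user bob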
>>
>>
>> ### Multi-Model Support Active by Default
>>
>> The multiple model support that was previously behind the "jes"
>> developer feature flag is now enabled by default. Along with this
>> change, a new concept has been introduced: the "controller".
>>
>> A Juju Controller, also sometimes called the "controller model",
>> describes the model that runs and manages the Juju API servers and the
>> underlying database.
>>
>> The controller model is what is created when the bootstrap command is
>> used. This controller model is a normal Juju model that just happens to
>> have machines that manage Juju. A single Juju controller can manage many
>> Juju models, meaning fewer resources are needed for Juju's management
>> infrastructure and new models can be created almost instantly.
>>
>> In order to keep a clean separation of concerns, it is now considered
>> best practice to create additional models for deploying workloads,
>> leaving the controller model for Juju's own infrastructure. Services can
>> still be deployed to the controller model, but it is generally expected
>> that these be only for management and monitoring purposes (e.g. Landscape
>> and Nagios).
>>
>> When creating a Juju controller that is going to be used by more than
>> one person, it is good practice to create users for each individual that
>> will be accessing the models.
>>
>> The main new commands of note are:
>> juju list-models
>> juju create-model
>> juju share-model
>> juju list-shares
>> juju use-model
>>
>> Also see:
>> juju help controllers
>> juju help users
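>>
>> A minimal sketch of the new workflow (the model name "staging" is just
>> an example):
>>
>> juju create-model staging
>> juju use-model staging
>> juju deploy mysql
>> juju list-models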
>>
>> Also, since controllers are now special in that they can host multiple
>> other models, destroying controllers now needs to be done with more
>> care.
>>
>> juju destroy-model
>>
>> does not work on controllers; it now works only on hosted models (those
>> models that the controller looks after).
>>
>> juju destroy-controller
>>
>> is the way to do an orderly takedown.
>>
>> juju kill-controller
>>
>> will work in those situations where the API server may be broken.
>> However, forcibly taking down a controller could leave other models
>> running with no way to talk to an API server.
>>
>>
>> ### Native Support for Charm Bundles
>>
>> The Juju 'deploy' command can now deploy a bundle. The Juju Quickstart
>> or Deployer plugins are not needed to deploy a bundle of charms. You can
>> deploy the mediawiki-single bundle like so:
>>
>> juju deploy cs:bundle/mediawiki-single
>>
>> Local bundles can be deployed by passing the path to the bundle. For
>> example:
>>
>> juju deploy ./openstack/bundle.yaml
>>
>> Local bundles can also be deployed from a local repository. Bundles
>> reside in the "bundle" subdirectory. For example, your local juju
>> repository might look like this:
>>
>> juju-repo/
>> |
>> - trusty/
>> - bundle/
>>   |
>>   - openstack/
>>     |
>>     - bundle.yaml
>>
>> and you can deploy the bundle like so:
>>
>> export JUJU_REPOSITORY="$HOME/juju-repo"
>> juju deploy local:bundle/openstack
>>
>> Bundles, when deployed from the command line like this, now support
>> storage constraints. To specify how to allocate storage for a service,
>> you can add a 'storage' key underneath a service, and under 'storage'
>> add a key for each store you want to allocate, along with the
>> constraints. e.g. say you're deploying ceph-osd, and you want each unit
>> to have a 50GiB disk:
>>
>> ceph-osd:
>>   ...
>>   storage:
>>     osd-devices: 50G
>>
>> Because a bundle should work across cloud providers, the constraints in
>> the bundle should not specify a pool/storage provider, and just use the
>> default for the cloud. To customize how storage is allocated, you can use
>> the '--storage' option with a new bundle-specific format: --storage
>> service:store=constraints. e.g. say you're deploying OpenStack, and
>> you want each unit of ceph-osd to have 3x50GiB disks:
>>
>> juju deploy ./openstack/bundle.yaml --storage
>> ceph-osd:osd-devices=3,50G
>>
>>
>> ### Multi Series Charms
>>
>> Charms now have the capability to declare that they support more than
>> one series. Previously a separate copy of the charm was required for
>> each series. An important constraint here is that for a given charm,
>> all of the listed series must be for the same distro/OS; it is not
>> allowed to offer a single charm for Ubuntu and CentOS for example.
>> Supported series are added to charm metadata as follows:
>>
>> name: mycharm
>> summary: "Great software"
>> description: It works
>> maintainer: Some One <some.one at example.com>
>> categories:
>>   - databases
>> series:
>>   - precise
>>   - trusty
>>   - wily
>> provides:
>>   db:
>>     interface: pgsql
>> requires:
>>   syslog:
>>     interface: syslog
>>
>> The default series is the first in the list:
>>
>> juju deploy mycharm
>>
>> will deploy a mycharm service running on precise.
>>
>> A different, non-default series may be specified:
>>
>> juju deploy mycharm --series trusty
>>
>> It is possible to force the charm to deploy using an unsupported series
>> (so long as the underlying OS is compatible):
>>
>> juju deploy mycharm --series xenial --force
>>
>> or
>>
>> juju add-machine --series xenial
>> Machine 1 added.
>> juju deploy mycharm --to 1 --force
>>
>> '--force' is required in the above deploy command because the target
>> machine is running xenial which is not supported by the charm.
>>
>> The 'force' option may also be required when upgrading charms. Consider
>> the case where a service is initially deployed with a charm supporting
>> precise and trusty. A new version of the charm is published which only
>> supports trusty and xenial. For services deployed on precise, upgrading
>> to the newer charm revision is allowed, but only using force (note the
>> use of '--force-series' since upgrade-charm also supports '--force-
>> units'):
>>
>> juju upgrade-charm mycharm --force-series
>>
>>
>> ### Improved Local Charm Deployment
>>
>> Local charms can be deployed directly from their source directory
>> without having to set up a pre-determined local repository file
>> structure. This feature makes it more convenient to hack on a charm and
>> just deploy it, and it is also necessary for developing local charms
>> that support multiple series.
>>
>> Assuming a local charm exists in directory /home/user/charms/mycharm:
>>
>> juju deploy ~/charms/mycharm
>>
>> will deploy the charm using the default series.
>>
>> juju deploy ~/charms/mycharm --series trusty
>>
>> will deploy the charm using trusty.
>>
>> Note that it is no longer necessary to define a JUJU_REPOSITORY or to
>> locate the charms in a directory named after a series. Any directory
>> structure can be used, including simply pulling the charm source from a
>> VCS, hacking on the code, and deploying directly from the local repo.
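>>
>> For instance, a typical hack-and-deploy loop might look like this (the
>> repository URL is purely illustrative):
>>
>> git clone https://example.com/charms/mycharm.git ~/charms/mycharm
>> # edit hooks, metadata.yaml, etc.
>> juju deploy ~/charms/mycharm --series trusty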
>>
>>
>> ### LXD Provider
>>
>> The new LXD provider is the best way to use Juju locally.
>>
>> The controller is no longer your host machine; it is now an LXC
>> container. This keeps your host machine clean and allows you to utilize
>> your local model more like a traditional Juju model. Because
>> of this, you can test things like Juju high-availability without needing
>> to utilize a cloud provider.
>>
>> The previous local provider remains functional for backwards
>> compatibility.
>>
>> #### Requirements
>>
>> - Running Wily (LXD is installed by default)
>>
>> - Import the LXD cloud-images that you intend to deploy and register
>> an alias:
>>
>> lxd-images import ubuntu trusty --alias ubuntu-trusty
>> lxd-images import ubuntu wily --alias ubuntu-wily
>> lxd-images import ubuntu xenial --alias ubuntu-xenial
>>
>> or register an alias for your existing cloud-images
>>
>> lxc image alias create ubuntu-trusty <fingerprint>
>> lxc image alias create ubuntu-wily <fingerprint>
>> lxc image alias create ubuntu-xenial <fingerprint>
>>
>> - For 2.0-alpha1, you must specify the "--upload-tools" flag when
>> bootstrapping the controller that will use trusty cloud-images.
>> This is because most of Juju's charms are for Trusty, and the
>> agent-tools for Trusty don't yet have LXD support compiled in.
>>
>> juju bootstrap --upload-tools
>>
>> "--upload-tools" is not required for deploying a wily or xenial
>> controller and services.
>>
>> Logs are located at '/var/log/lxd/juju-{uuid}-machine-#/'.
>>
>>
>> #### Specifying an LXD Controller
>>
>> In your ~/.juju/environments.yaml, you'll now find a block for LXD
>> providers:
>>
>> lxd:
>>     type: lxd
>>     # namespace identifies the namespace to associate with containers
>>     # created by the provider. It is prepended to the container names.
>>     # By default the controller's name is used as the namespace.
>>     #
>>     # namespace: lxd
>>     # remote-url is the URL to the LXD API server to use for managing
>>     # containers, if any. If not specified then the locally running LXD
>>     # server is used.
>>     #
>>     # Note: Juju does not set up remotes for you. Run the following
>>     # commands on an LXD remote's host to install LXD:
>>     #
>>     #   add-apt-repository ppa:ubuntu-lxc/lxd-stable
>>     #   apt-get update
>>     #   apt-get install lxd
>>     #
>>     # Before using a locally running LXD (the default for this provider)
>>     # after installing it, either through Juju or the LXD CLI ("lxc"),
>>     # you must either log out and back in or run this command:
>>     #
>>     #   newgrp lxd
>>     #
>>     # You will also need to prepare the cloud images that Juju uses:
>>     #
>>     #   lxc remote add images images.linuxcontainers.org
>>     #   lxd-images import ubuntu trusty --alias ubuntu-trusty
>>     #   lxd-images import ubuntu wily --alias ubuntu-wily
>>     #   lxd-images import ubuntu xenial --alias ubuntu-xenial
>>     #
>>     # See: https://linuxcontainers.org/lxd/getting-started-cli/
>>     #
>>     # remote-url:
>>     # The cert and key the client should use to connect to the remote
>>     # may also be provided. If not then they are auto-generated.
>>     #
>>     # client-cert:
>>     # client-key:
>>
>> ### Microsoft Azure Resource Manager Provider
>>
>> Juju now supports Microsoft Azure's new Resource Manager API. The Azure
>> provider has effectively been rewritten, but old models are still
>> supported. To use the new provider support, you must bootstrap a new
>> model with new configuration. There is no automated method for
>> migrating.
>>
>> The new provider supports everything the old provider did, and adds
>> several new features, including support for unit placement (i.e. you
>> can specify existing machines to which units are deployed). As before,
>> units of a service will be allocated to machines in a service-specific
>> Availability Set if no machine is specified.
>>
>> In the initial release of this provider, each machine will be allocated
>> a public IP address. In a future release, we will only allocate public
>> IP addresses to machines that have exposed services, to enable
>> allocating more machines than there are public IP addresses.
>>
>> Each model is represented as a "resource group" in Azure, with the VMs,
>> subnets, disks, etc. contained within that resource group. This makes it
>> possible to guarantee that resources are not leaked when a model is
>> destroyed, which in turn means we are now able to support persistent
>> volumes in the Azure storage provider.
>>
>> Finally, in addition to Ubuntu, the new Azure provider natively supports
>> Microsoft Windows Server 2012 (series "win2012"), Windows Server 2012 R2
>> (series "win2012r2"), and CentOS 7 (series "centos7").
>>
>> To use the new Azure support, you need the following configuration in
>> environments.yaml:
>>
>> type: azure
>> application-id: <Azure-AD-application-ID>
>> application-password: <Azure-AD-application-password>
>> subscription-id: <Azure-account-subscription-ID>
>> tenant-id: <Azure-AD-tenant-ID>
>> location: westus # or any other Azure location
>>
>> To obtain these values, it is recommended that you use the Azure CLI:
>> https://azure.microsoft.com/en-us/documentation/articles/xplat-cli/.
>>
>> You will need to create an "application" in Azure Active Directory for
>> Juju to use, per the following documentation:
>>
>> https://azure.microsoft.com/en-us/documentation/articles/resource-group-authenticate-service-principal/#authenticate-service-principal-with-password---azure-cli
>> (NOTE: you should assign the role "Owner", not "Reader", to the
>> application.)
>>
>> Take a note of the "Application Id" output when issuing "azure ad app
>> create". This is the value that you must use in the 'application-id'
>> configuration for Juju. The password you specify is the value to use in
>> 'application-password'.
>>
>> To obtain your subscription ID, you can use "azure account list" to list
>> your account subscriptions and their IDs. To obtain your tenant ID, you
>> should use "azure account show", passing in the ID of the account
>> subscription you will use.
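>>
>> For example (the subscription ID below is a placeholder):
>>
>> azure account list
>> azure account show <Azure-account-subscription-ID>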
>>
>> You may need to register some resources using the azure CLI when
>> updating an existing Azure account:
>>
>> azure provider register Microsoft.Compute
>> azure provider register Microsoft.Network
>> azure provider register Microsoft.Storage
>>
>>
>> ### New Support for Rackspace
>>
>> A new provider has been added that supports hosting a Juju model in the
>> Rackspace Public Cloud. As the Rackspace Cloud is based on OpenStack,
>> the Rackspace provider internally uses the OpenStack provider, and most
>> of the features and configuration options for the two providers are
>> identical.
>>
>> The basic config options in your environments.yaml will look like this:
>>
>> rackspace:
>>     type: rackspace
>>     tenant-name: "<your tenant name>"
>>     region: <IAD, DFW, ORD, LON, HKG, or SYD>
>>     auth-url: https://identity.api.rackspacecloud.com/v2.0
>>     auth-mode: <userpass or keypair>
>>     username: <your username>
>>     password: <secret>
>>     # access-key: <secret>
>>     # secret-key: <secret>
>>
>> The values in angle brackets need to be replaced with your Rackspace
>> information.
>>
>> 'tenant-name' must contain your Rackspace account number. 'region' must
>> contain a Rackspace region (IAD, DFW, ORD, LON, HKG, or SYD). The
>> 'auth-mode' parameter can be either 'userpass' or 'keypair' and
>> determines which authentication mode the provider will use. If you use
>> 'userpass' mode you must also provide the 'username' and 'password'
>> parameters; if you use 'keypair' mode, the 'access-key' and 'secret-key'
>> parameters must be provided.
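>>
>> A keypair-mode variant of the block above would look something like this
>> (all values are placeholders):
>>
>> rackspace:
>>     type: rackspace
>>     tenant-name: "<your tenant name>"
>>     region: IAD
>>     auth-url: https://identity.api.rackspacecloud.com/v2.0
>>     auth-mode: keypair
>>     access-key: <secret>
>>     secret-key: <secret>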
>>
>>
>> ### Bootstrap Constraints, Series
>>
>> While bootstrapping, you can now specify constraints for the bootstrap
>> machine independently of the service constraints:
>>
>> juju bootstrap --constraints <service-constraints>
>> --bootstrap-constraints <bootstrap-machine-constraints>
>>
>> You can also specify the series of the bootstrap machine:
>>
>> juju bootstrap --bootstrap-series trusty
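>>
>> For example, to bootstrap a trusty controller machine with more memory
>> than the default service machines (the constraint values are purely
>> illustrative):
>>
>> juju bootstrap --constraints "mem=4G" \
>>     --bootstrap-constraints "mem=8G" --bootstrap-series trusty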
>>
>>
>> ### Juju Logging Improvements
>>
>> Logs from Juju's machine and unit agents are now streamed to the Juju
>> controllers over the Juju API in preference to using rsyslogd. This is
>> more robust and is a requirement now that multi-model support is enabled
>> by default. Additionally, the centralised logs are now stored in Juju's
>> database instead of the all-machines.log file. This improves log query
>> flexibility and performance as well as opening up the possibility of
>> structured log output in future Juju releases.
>>
>> Logging to rsyslogd is currently still in place with logs being sent
>> both to rsyslogd and Juju's DB. Logging to rsyslogd will be removed
>> before the final Juju 2.0 release.
>>
>> The 'juju debug-log' command will continue to function as before and
>> should be used as the default way of accessing Juju's logs.
>>
>> This change does not affect the per machine (machine-N.log) and per unit
>> (unit-*-N.log) log files that exist on each Juju managed host. These
>> continue to function as they did before.
>>
>> A new 'juju-dumplogs' tool is also now available. This can be run on
>> Juju controllers to extract the logs from Juju's database even when the
>> Juju server isn't available. It is intended to be used as a last resort
>> in emergency situations. 'juju-dumplogs' will be available on the system
>> $PATH and requires no command line options in typical usage.
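>>
>> In practice, viewing logs is unchanged, and the new tool is a bare
>> invocation run on a controller machine:
>>
>> juju debug-log
>> juju-dumplogs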
>>
>>
>> ### API Login with Macaroons
>>
>> Juju 2.0 supports an alternate API login method based on macaroons. This
>> will support the new charm publishing workflow coming in future releases.
>>
>>
>> ### Unit Agent Improvements
>>
>> We've made improvements to worker lifecycle management in the unit agent
>> in this release. The resource dependencies (API connections, locks,
>> etc.) shared among concurrent workers that comprise the agent are now
>> well-defined, modeled and coordinated by an engine, in a design inspired
>> by Erlang supervisor trees.
>>
>> This improves the long-term testability of the unit agent, and should
>> improve the agent's resilience to failure. This work also allows hook
>> contexts to execute concurrently, which supports features in development
>> targeting 2.0.
>>
>>
>> ### MAAS 1.8 Compatibility
>>
>> Juju 2.0-alpha1 includes the fix for bug #1483879: MAAS provider:
>> terminate-machine --force or destroy-environment don't DHCP release
>> container IPs. The fix uses the "devices" feature of MAAS, which has a
>> known bug on MAAS 1.8 (bug #1527068: MAAS retains child devices' IP
>> addresses when a parent node is released). A workaround to clean up the
>> leaked IPs is available in bug #1527068:
>> https://bugs.launchpad.net/juju-core/+bug/1527068/comments/10
>>
>> Users on MAAS 1.8 should also set the default gateway for the interface
>> used by juju to avoid problems with container networking. You can
>> verify whether a default gateway has been set on an interface by looking
>> at the network details in the "Networks" tab.
>>
>>
>> ## Known issues
>>
>> * Some providers release wrong resources when destroying hosted models
>> Lp 1536792
>>
>> * Destroying a hosted model in the local provider leaves the controller
>> unusable
>> Lp 1534636
>>
>> * Unable to create hosted environments with MAAS provider
>> Lp 1535165
>>
>>
>> ## Resolved issues
>>
>> * Unit loses network connectivity during bootstrap: juju 1.25.2 +
>> maas 1.9
>> Lp 1534795
>>
>> * Juju debug-log and eof
>> Lp 1390585
>>
>> * I/o timeout errors can cause non-atomic service deploys
>> Lp 1486553
>>
>> * Azure provider does not appear to be opening ports
>> Lp 1527681
>>
>> * 1.25.2 doesn't set up dns information with maas
>> Lp 1528217
>>
>> * Lxd: cannot create multiple environments
>> Lp 1531064
>>
>> * Wrong command displayed when trying to destroy a controller with
>> destroy-environment
>> Lp 1534353
>>
>>
>> ## Finally
>>
>> We encourage everyone to subscribe to the mailing list at
>> juju-dev at lists.canonical.com, or join us on #juju-dev on freenode.
>>
>>
>> --
>> Curtis Hovey
>> Canonical Cloud Development and Operations
>> http://launchpad.net/~sinzui
>>
>> --
>> Juju-dev mailing list
>> Juju-dev at lists.ubuntu.com
>> Modify settings or unsubscribe at:
>> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>>
>
--
-Thanks
Antonio
(mobile)