Error deploying charm on juju-core with CI options

Gary Poster gary.poster at canonical.com
Mon Jun 17 21:28:35 UTC 2013


On 06/17/2013 04:49 AM, Francesco Banconi wrote:
> On 06/14/2013 09:19 PM, Gary Poster wrote:
>> On 06/14/2013 11:23 AM, Nicola 'teknico' Larosa wrote:
>>> Is anyone able to deploy with juju-core and staging enabled?
>>
>> I hope not.  You should not be able to, because staging == improv and
>> there is no improv for Go.
> 
> Indeed.
> 
>> The question is whether you can simply run bin/test-charm with Juju
>> Core.  Francesco has it working, AIUI, presumably by not running the
>> tests dependent on staging/improv.  
> 
> Currently it is possible to run the charm tests (juju-test) with
> juju-core: the staging tests are skipped in that case, and similarly we
> avoid running the force-machine ones if the Python implementation is
> used. On the GUI side, bin/test-charm works only in pyJuju, and AFAIK
> the main reason (excluding Canonistack setup) is that GUI CI tests are
> mostly based on staging=true (we only have one test case switching from
> staging to sandbox). 

Right.  Thanks for clarifying.  I had been a bit confused and thought
this was part of the existing work.  What you say makes sense.
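
Just to make sure we mean the same thing by "skipped": I imagine it is
something like the sketch below.  The detection helper and test names are
made up, not what the charm actually does; it is only meant to illustrate
the conditional skipping you describe.

    import subprocess
    import unittest


    def is_juju_core():
        # Hypothetical detection: pyJuju reports a 0.x version, while
        # juju-core reports its Go release.  The real charm may well do
        # something smarter.
        output = subprocess.check_output(['juju', '--version'])
        return not output.decode('utf-8').startswith('0.')


    class FunctionalTest(unittest.TestCase):

        @unittest.skipIf(is_juju_core(), 'staging/improv is pyJuju only')
        def test_deploy_with_staging(self):
            pass  # Exercise the GUI against the staging/improv backend.

        @unittest.skipIf(not is_juju_core(),
                         'force-machine tests are not run under pyJuju')
        def test_force_machine_colocation(self):
            pass  # Exercise the force-machine co-location path.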

> For the future, once the juju-core sandbox is enabled, I
> propose to create our initial setup in sandbox, letting specific test
> cases switch to other modes (staging if available, real backend).
> 
> Work needs to be done to merge the charm and GUI tests, and a possible
> strategy follows (please reply with your suggestions/corrections):

I started reading the below, but before I did, I had to remind myself of
our goals in this regard.  I started with a few, and then added some as
I read your points.

- All tests can run on both Juju Core and pyJuju, except a very few that
explicitly test parts that will never be shared (pyJuju's improv, for
instance).

- Charm tests should be able to run all GUI tests across all desired
browsers, locally or in saucelabs, so that ecosystems' charm test
infrastructure can bless a GUI branch completely, and we can leverage
their infrastructure.

- Our tarmac runs our tests against Juju Core, and ideally but not
necessarily PyJuju.

- Our tarmac is able to use ecosystems' charm test infrastructure to
test our charms.

- We can run as many tests as possible locally, without ec2 or saucelabs.

- Writing and debugging tests is easy for people with only JavaScript
experience. (Look! Selenium in node!  Maybe good for the team.
https://github.com/LearnBoost/soda)
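
(A half-formed thought on the "locally or in saucelabs" goal: the switch
itself can be tiny.  Here is a sketch with the Python selenium bindings;
the environment variable names are invented, and soda would be the node
equivalent for the JavaScript-only case.)

    import os

    from selenium import webdriver


    def make_driver():
        # SAUCELABS_URL and JUJU_GUI_TEST_BROWSER are invented names for
        # this sketch.  If a Saucelabs wd/hub endpoint is given, use a
        # Remote driver there; otherwise fall back to a local Firefox.
        remote = os.environ.get('SAUCELABS_URL')
        browser = os.environ.get('JUJU_GUI_TEST_BROWSER', 'firefox')
        if remote:
            return webdriver.Remote(
                command_executor=remote,
                desired_capabilities={'browserName': browser})
        return webdriver.Firefox()


    driver = make_driver()
    try:
        driver.get('http://localhost/')  # The deployed GUI address goes here.
        assert 'Juju' in driver.title
    finally:
        driver.quit()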

> 1) We add to the GUI the ability to run charm tests, e.g.: the charm
> trunk is checked out and then juju-test is used to run the charm tests.
> The charm test suite already knows how to bootstrap an environment,
> download test dependencies, run unit and functional tests, and collect
> the results. This should greatly simplify the creation of a testing
> environment (from the perspective of the GUI user/contributor), and
> reduce the number of Python requirements in the GUI.

(After reading this, I added the goal, "Writing and debugging tests is
easy for people with only JavaScript experience.")
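
For what it's worth, the GUI-side wrapper could stay very thin, roughly
along these lines.  The branch URL is a placeholder and the details are
guesses; the point is that juju-test does the heavy lifting you describe
(bootstrap, dependencies, unit and functional tests, results).

    import os
    import subprocess
    import tempfile

    # Placeholder location; the real charm trunk URL may differ.
    CHARM_TRUNK = 'lp:~juju-gui/charms/precise/juju-gui/trunk'


    def run_charm_tests():
        # Check out the charm trunk into a scratch directory and let
        # juju-test drive the whole run.
        parent = tempfile.mkdtemp(prefix='juju-gui-charm-')
        workdir = os.path.join(parent, 'charm')
        subprocess.check_call(['bzr', 'branch', CHARM_TRUNK, workdir])
        subprocess.check_call(['juju-test'], cwd=workdir)


    if __name__ == '__main__':
        run_charm_tests()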

> 2) Currently charm tests don't support Saucelabs (they just run the
> local Firefox driver) and don't have the notion of multiple-browser
> tests. Moreover, at this time it is not possible to pass a customized
> juju-gui-source to the suite (in order to test an arbitrary proposed
> branch). We need to find a way to propagate this info through juju-test,
> so that the suite can be properly configured with browsers and branches.

What does Marco say to this?  Hi, Marco! :-)  This probably isn't enough
context for you to know what the heck we are talking about, so feel free
to wait until one of us comes over to bother you in order to weigh in.

Have we asked Marco whether he envisions us being able to talk to
saucelabs from the machine running the charm tests?  If that's not OK,
that will make our goals difficult to achieve, I think.
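
Whatever the answer there, the propagation itself can be a small thing.
As a strawman (every name below is invented; the real channel, whether
environment variables, a config file, or juju-test options, is exactly
what needs deciding), the suite could turn a branch and a browser list
into charm config and a run matrix:

    import os
    import subprocess
    import tempfile


    def deploy_gui_under_test():
        # JUJU_GUI_SOURCE and JUJU_GUI_TEST_BROWSERS are invented names.
        source = os.environ.get('JUJU_GUI_SOURCE', 'lp:juju-gui')
        # Hand the branch to the charm through its juju-gui-source option.
        config = 'juju-gui:\n  juju-gui-source: %s\n' % source
        with tempfile.NamedTemporaryFile(
                'w', suffix='.yaml', delete=False) as config_file:
            config_file.write(config)
        subprocess.check_call(
            ['juju', 'deploy', '--config', config_file.name, 'juju-gui'])
        # The browsers to exercise, as a space separated list.
        return os.environ.get('JUJU_GUI_TEST_BROWSERS', 'firefox').split()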

> 3) In the charm functional tests, the GUI is deployed (juju deploy) and
> removed (juju destroy-service) for each test in the test case. Different
> initial configurations are tested (juju deploy --config ...). In the GUI
> integration tests, instead, the GUI is deployed once at the
> beginning of the test run and destroyed at the end. Different
> configurations are tested using juju set, and isolation is guaranteed by
> restarting the agent with ssh/juju ssh when required.
> Merging the two suites also means making a decision about the direction
> we want to follow (i.e. finding a tradeoff). 

Since there is no improv in Go, we need the Go sandbox.  Once we have
the Go sandbox, isolation is even faster than now, at the expense of
being entirely separated in our tests from any "real" Juju.  That seems
like the right thing to do for the vast majority of the integration tests.
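
To spell out the tradeoff in your point 3, the two isolation styles look
roughly like this (the helper and the test bodies are only illustrative):

    import subprocess
    import unittest


    def juju(*args):
        # Thin wrapper around the juju CLI, purely for readability here.
        subprocess.check_call(('juju',) + args)


    class PerTestIsolation(unittest.TestCase):
        """Charm style: a fresh deploy/destroy around every test."""

        def setUp(self):
            juju('deploy', '--config', 'config.yaml', 'juju-gui')

        def tearDown(self):
            juju('destroy-service', 'juju-gui')

        def test_default_config(self):
            pass  # Exercise the GUI exactly as deployed by setUp.


    class PerRunIsolation(unittest.TestCase):
        """GUI style: deploy once, reconfigure with juju set."""

        @classmethod
        def setUpClass(cls):
            juju('deploy', 'juju-gui')

        @classmethod
        def tearDownClass(cls):
            juju('destroy-service', 'juju-gui')

        def test_sandbox_mode(self):
            juju('set', 'juju-gui', 'sandbox=true')
            # Restart the agent over (juju) ssh here if a pristine GUI
            # is needed, as the current integration tests do.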

> For example, we want to
> test and support co-location (in the current force-machine incarnation)
> but IMHO we also want the suite to be fast and focused on the GUI
> behavior. We want to exercise the stop hook (juju destroy-service) but
> also the config-changed one (juju set).

I'm hoping containerized co-location gives us a nice answer for a lot of
this when it comes.  Relatedly, once real containerization comes, I am
fine with ripping out the tests for force-machine in favor of the new
approach.

> 4) Migrate the CI tests from the GUI to the charm. Also get rid of all
> the Python dependencies currently required to test the GUI.
> 
> Sorry, this was longer than planned. Thoughts?

Thank you for writing that out.  That's the right goal.  It sounds like
a lot of work.  I'm not sure we'll have time to tackle it till 13.10,
but we may discover we need to.  At the least, I'd like to wait until
containerization is out, and look at this again once we know how that works.

All that said, I feel like I didn't give this enough thought.  I would
very much welcome more responses.

Thanks

Gary



