ERROR cannot read info: lock timeout exceeded

Tim Penhey tim.penhey at canonical.com
Sun Sep 27 20:42:08 UTC 2015


The code only holds the lock for the duration of the read or write.

Since multiple environments can share a server, and the server data,
access to that data is synchronized.

There *shouldn't* be a case where the lock is held but not released.

The lock file itself should hold some information about who locked it
and why.
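A minimal sketch of what such a lock-holder record could look like. The file name, layout, and field names here are hypothetical illustrations, not Juju's actual on-disk format:

```python
import json
import os
import time


def write_lock_info(lock_dir, reason):
    """Record who holds the lock and why (hypothetical layout)."""
    info = {
        "pid": os.getpid(),       # which process took the lock
        "reason": reason,         # why it took the lock
        "acquired": time.time(),  # when, so staleness can be judged
    }
    # "held" is an assumed file name for illustration only.
    with open(os.path.join(lock_dir, "held"), "w") as f:
        json.dump(info, f)
    return info


def read_lock_info(lock_dir):
    """Read back the holder metadata for diagnostics."""
    with open(os.path.join(lock_dir, "held")) as f:
        return json.load(f)
```

With a record like this, a timeout error message could say which process was holding the lock and what it was doing.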

Tim

On 26/09/15 18:43, John Meinel wrote:
> I don't know the concrete details here, but I do believe there are a few
> files that are shared in one JUJU_HOME. There is only one
> environments.yaml and the new multi-environment code means there is
> another shared file that holds the list of known servers and environments.
> I would hope that the code would not hold the lock for the lifetime of
> the process but only do a "grab the lock, read, update, write, release
> the lock". But I don't know that code, so there might have been some
> other thoughts there.
> 
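The "grab the lock, read, update, write, release the lock" pattern John describes could be sketched like this. This is a minimal illustration using an advisory `fcntl` lock, not Juju's actual implementation:

```python
import fcntl
import os


def update_shared_file(path, update):
    """Read-modify-write a shared file, holding the lock only for
    the duration of the operation (illustrative sketch)."""
    lock_path = path + ".lock"  # assumed lock-file naming convention
    with open(lock_path, "w") as lock:
        fcntl.flock(lock, fcntl.LOCK_EX)      # grab the lock
        try:
            data = ""
            if os.path.exists(path):
                with open(path) as f:
                    data = f.read()           # read
            data = update(data)               # update
            with open(path, "w") as f:
                f.write(data)                 # write
        finally:
            fcntl.flock(lock, fcntl.LOCK_UN)  # release the lock
```

Because the lock is scoped to a single read-modify-write, concurrent processes only ever wait briefly, rather than for the lifetime of another process.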
> John
> =:->
> 
> On Sep 25, 2015 5:44 PM, "Tim Van Steenburgh"
> <tim.van.steenburgh at canonical.com
> <mailto:tim.van.steenburgh at canonical.com>> wrote:
> 
> 
>     On Fri, Sep 25, 2015 at 9:56 AM, Curtis Hovey-Canonical
>     <curtis at canonical.com <mailto:curtis at canonical.com>> wrote:
> 
>         On Fri, Sep 25, 2015 at 9:15 AM, Tim Van Steenburgh
>         <tim.van.steenburgh at canonical.com
>         <mailto:tim.van.steenburgh at canonical.com>> wrote:
>         > Hi everyone,
>         >
>         > I have a jenkins slave that's running charm and bundle tests on 5 different
>         > clouds pretty much all the time. My problem is that tests will randomly fail
>         > after hitting this lock timeout.
> 
>         Juju QA has pondered deleting any lock more than a minute old every
>         time we call the client to bootstrap or destroy-environment.
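The cleanup Juju QA pondered could look roughly like this hypothetical helper, run before bootstrap or destroy-environment (the one-minute threshold comes from the message above; everything else is an assumption):

```python
import os
import time

STALE_AGE = 60  # seconds; "more than a minute old"


def remove_stale_lock(lock_path, max_age=STALE_AGE):
    """Delete the lock file if it is older than max_age seconds.
    Returns True if a stale lock was removed."""
    try:
        age = time.time() - os.path.getmtime(lock_path)
    except FileNotFoundError:
        return False  # no lock to clean up
    if age > max_age:
        os.remove(lock_path)
        return True
    return False
```

The risk with any such heuristic is deleting a lock that is legitimately held by a slow operation, which is why recording who holds the lock and why (as Tim suggests) would make the cleanup safer.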
> 
> 
>     This is a good idea and probably worth doing, although it won't fix
>     our most common failure, where a test run bootstraps successfully,
>     but then fails later when *another* env bootstraps just before the
>     running test tries to execute a juju command.
> 
> 
>         > Is the best way around this to have a separate $JUJU_HOME for all my test
>         > clouds, so that I end up with one lock per cloud? I haven't tried this yet
>         > but it seems like the simplest way forward, if it works and is safe (is
>         > it?).
> 
>         Generating separate JUJU_HOMEs will insulate you from bug
>         https://bugs.launchpad.net/juju-core/+bug/1467331
> 
> 
>     Thanks, I'm going to try this approach and see what happens.
>     Although, it seems to me that if it's safe to have multiple
>     JUJU_HOMEs, all "doing stuff" concurrently, it would also be
>     possible to have one lock per env, instead of one global lock.
>     Can anyone on the core team explain why there is one global lock?
>     I'm just curious.
> 
> 
> 
> 
> 
>         --
>         Curtis Hovey
>         Canonical Cloud Development and Operations
>         http://launchpad.net/~sinzui
> 
> 
> 
>     --
>     Juju mailing list
>     Juju at lists.ubuntu.com <mailto:Juju at lists.ubuntu.com>
>     Modify settings or unsubscribe at:
>     https://lists.ubuntu.com/mailman/listinfo/juju
> 
> 
> 



