HUE... spark, hive, zeppelin

Andrew Mcleod andrew.mcleod at canonical.com
Fri Feb 26 15:45:26 UTC 2016


Hey Merlijn,

That sounds about right, but you don't have to change layers.yaml to use
local layers - if the layer exists in your JUJU_LAYERS directory, it will
be used by default (as long as it's named correctly: for example,
layer-hadoop-base would need to be in a directory called hadoop-base -
drop the 'layer-' prefix if it has one).
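
For example, assuming JUJU_LAYERS points at ~/layers (the actual path is
whatever you've set), the layout would look something like:

  ~/layers/hadoop-base/     <- contents of layer-hadoop-base
  ~/layers/apache-spark/    <- contents of layer-apache-spark
  ~/layers/apache-hive/     <- contents of layer-apache-hive

and `charm build` should then pick those up in preference to the published
layers, without touching layers.yaml.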

Andrew

On Fri, Feb 26, 2016 at 4:38 PM, Merlijn Sebrechts <
merlijn.sebrechts at gmail.com> wrote:

> Hi all
>
>
> I'm currently reviewing these changes and would like to deploy them.
> My current approach:
>
> - clone the repos
> - merge the PRs
> - change layers.yaml to use local layers
> - deploy the charm
>
> Is there a less cumbersome way to do this?
>
>
>
> Kind regards
> Merlijn
>
> 2016-02-18 15:31 GMT+01:00 Cory Johns <cory.johns at canonical.com>:
>
>> (Adding the bigdata list.)
>>
>> Thanks, Andrew.  I'd also like to add my own small PR:
>>
>> https://github.com/juju-solutions/interface-spark/pull/3
>>
>> Merlijn, I know you were interested in, and perhaps working on, Hue
>> yourself.  Hopefully there hasn't been too much parallel effort going on
>> and this work benefits you.  Perhaps you could give it a review as well?
>>
>> On Thu, Feb 18, 2016 at 7:25 AM, Andrew Mcleod <
>> andrew.mcleod at canonical.com> wrote:
>>
>>> Ok, so I'm done with HUE for the time being.
>>>
>>> !!!!!!! PLEASE REVIEW !!!!!!!
>>>
>>> https://github.com/juju-solutions/layer-apache-zeppelin/pull/3
>>> https://github.com/juju-solutions/layer-apache-spark/pull/7
>>> https://github.com/juju-solutions/layer-apache-hive/pull/8
>>> https://github.com/juju-solutions/interface-spark/pull/2
>>>
>>>
>>> I have changed some states (from related => joined, and available =>
>>> ready) in the spark interface and the spark reactive layer (as discussed).
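>>>
>>> To show what I mean, here's a rough sketch of how a consuming charm might
>>> react to the new state names (the 'spark.' prefix depends on the relation
>>> name in your metadata, so treat this as an example, not the actual code
>>> from the PRs):
>>>
>>> from charms.reactive import when
>>> from charmhelpers.core import hookenv
>>>
>>> @when('spark.joined')            # previously 'spark.related'
>>> def spark_joined(spark):
>>>     # the relation exists but spark may not be usable yet
>>>     hookenv.status_set('waiting', 'Waiting for Spark to become ready')
>>>
>>> @when('spark.ready')             # previously 'spark.available'
>>> def spark_ready(spark):
>>>     # the interface has everything it needs from the other side
>>>     hookenv.status_set('active', 'Spark is ready')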
>>>
>>>
>>>
>>> *** The benchmark layer / interface will need to be updated to relate to
>>> this new spark layer ***
>>>
>>> I've added the livy server to the spark layer, with some caveats:
>>>
>>> 1. The start and stop clauses are a bit dodgy (I don't know of a better
>>> way to do it right now, and the jps process name is just "Main"...)
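>>>
>>> Roughly the kind of check I mean (a sketch only - it assumes the livy
>>> main class name contains "livy" when jps is run with -l, which isn't
>>> guaranteed):
>>>
>>> import subprocess
>>>
>>> def livy_running():
>>>     # plain `jps` only reports the livy server as "Main", so list the
>>>     # full main class names (-l) and look for something livy-ish instead
>>>     try:
>>>         out = subprocess.check_output(['jps', '-l']).decode()
>>>     except (OSError, subprocess.CalledProcessError):
>>>         return False
>>>     return any('livy' in line.lower() for line in out.splitlines())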
>>>
>>> 2. The livy CLASSPATH is hard-coded - `hadoop classpath` should return
>>> the right value (see the sketch below the example), but I couldn't use
>>> utils.env because it was putting a lone '=' after the classpath
>>> insertion, i.e. /etc/environment ends up looking like this:
>>>
>>> STUFF="thing"
>>> OTHERSTUFF="otherthing"
>>> CLASSPATH="/etc/dir:/dir:/dir:/dir............/dir/dir/dir"
>>> =
>>>
>>> ^ Equals sign out of nowhere.
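>>>
>>> For reference, this is roughly the sort of thing I'd like the layer to do
>>> instead of the hard-coded value (just a sketch - appending blindly like
>>> this would duplicate the entry on re-runs, so it isn't what's in the PR):
>>>
>>> import subprocess
>>>
>>> def write_hadoop_classpath(env_file='/etc/environment'):
>>>     # ask hadoop itself for the classpath rather than hard-coding it
>>>     classpath = subprocess.check_output(
>>>         ['hadoop', 'classpath']).decode().strip()
>>>     # append the entry ourselves, sidestepping utils.env and its stray '='
>>>     with open(env_file, 'a') as env:
>>>         env.write('CLASSPATH="{}"\n'.format(classpath))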
>>>
>>>
>>>
>>>
>>
>