Zeppelin "sql interpreter not found"
Andrew Mcleod
andrew.mcleod at canonical.com
Fri Nov 6 17:22:15 UTC 2015
That's strange; I haven't seen anything other than spark as the top/default
interpreter in the list after binding (it is also at the top of the binding
list). Are you able to replicate this and take a screenshot? Spark should
definitely be the default...
Andrew
On Fri, Nov 6, 2015 at 4:56 PM, Merlijn Sebrechts <
merlijn.sebrechts at gmail.com> wrote:
> Yes, this is what I see. The default interpreter is the top one in the
> list (if you click on the gear icon). What the top one is after
> installation seems to be more or less random...
>
> 2015-11-06 16:53 GMT+01:00 Andrew Mcleod <andrew.mcleod at canonical.com>:
>
>> Hi Merlijn,
>>
>> Can you tell me if this is what you see:
>> http://pasteboard.co/1W9RYYPF.png
>>
>> If so, the default interpreter for the top paragraph is %md, but the other
>> paragraphs, which don't specify one, will fall back to %spark as the default
>> (once the save button is clicked to bind the interpreters).
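>>
>> For example (a minimal sketch, using the tutorial's "bank" table; the
>> paragraphs and query below are illustrative, not the charm's actual
>> notebook), a paragraph that starts with an explicit directive always uses
>> that interpreter:
>>
>>     %md
>>     This text is rendered as markdown.
>>
>>     %sql
>>     SELECT age, count(1) FROM bank WHERE age < 30 GROUP BY age
>>
>> while a paragraph with no directive falls back to whichever interpreter is
>> at the top of the binding list.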
>>
>>
>> Andrew
>>
>>
>>
>> On Fri, Nov 6, 2015 at 2:38 PM, Merlijn Sebrechts <
>> merlijn.sebrechts at gmail.com> wrote:
>>
>>> Hi Andrew
>>>
>>>
>>> Thanks again for your help. The problem was that the code to create the
>>> table didn't specify which interpreter had to be used. The default
>>> interpreter was markdown, so it just printed out the lines. I'll see if I
>>> can create a patch for the charm.
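>>>
>>> Concretely, the fix would just be to make the table-creation paragraph
>>> explicit about its interpreter, something along these lines (a sketch
>>> only, assuming the notebook paragraph merely needs a %spark directive
>>> prepended):
>>>
>>>     %spark
>>>     bank.toDF().registerTempTable("bank")
>>>
>>> so the paragraph no longer depends on whatever the default happens to be.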
>>>
>>>
>>>
>>> Kind regards
>>> Merlijn
>>>
>>> 2015-11-06 15:09 GMT+01:00 Andrew Mcleod <andrew.mcleod at canonical.com>:
>>>
>>>> Hi Merlijn,
>>>>
>>>> I have seen this problem, but I don't know the exact cause - I think it
>>>> has to do with the default Spark contexts which Zeppelin creates when it
>>>> starts the interpreter, specifically the SQLContext. See
>>>> https://zeppelin.incubator.apache.org/docs/interpreter/spark.html for
>>>> more details.
>>>>
>>>> Try the following in a new paragraph (no %sql interpreter); if it
>>>> works, it's probably an issue with the %sql interpreter context/binding:
>>>>
>>>> sqlContext.sql("SELECT count(1) FROM bank")
>>>>
>>>> Try restarting the interpreter (interpreters tab, then the restart
>>>> button next to the interpreter), then re-run the job that creates the
>>>> temp table, i.e. the paragraph containing this line:
>>>>
>>>> bank.toDF().registerTempTable("bank")
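>>>>
>>>> For context, in the tutorial that temp table is built by a %spark
>>>> paragraph roughly like the sketch below (the HDFS path, case class
>>>> fields, and column positions here are assumptions, not the charm's
>>>> actual notebook):
>>>>
>>>>     %spark
>>>>     // Load the CSV, skip the header row, and map each line to a case class
>>>>     val bankText = sc.textFile("/tmp/bank.csv")
>>>>     case class Bank(age: Int, job: String, marital: String,
>>>>                     education: String, balance: Int)
>>>>     val bank = bankText.map(_.split(";"))
>>>>       .filter(_(0) != "\"age\"")          // drop the header line
>>>>       .map(s => Bank(s(0).toInt,
>>>>                      s(1).replaceAll("\"", ""),
>>>>                      s(2).replaceAll("\"", ""),
>>>>                      s(3).replaceAll("\"", ""),
>>>>                      s(5).replaceAll("\"", "").toInt))
>>>>     // Register the DataFrame so it can be queried as "bank"
>>>>     bank.toDF().registerTempTable("bank")
>>>>
>>>> Once that paragraph has run without errors, the sqlContext query above
>>>> should find the table.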
>>>>
>>>> If that doesn't work, try restarting zeppelin completely
>>>> (/usr/lib/zeppelin/bin/zeppelin-daemon.sh restart)
>>>>
>>>>
>>>> Andrew
>>>>
>>>> On Fri, Nov 6, 2015 at 1:59 PM, Merlijn Sebrechts <
>>>> merlijn.sebrechts at gmail.com> wrote:
>>>>
>>>>> Hi Andrew
>>>>>
>>>>>
>>>>> Thanks for your help!
>>>>>
>>>>> I just figured out my problem: for some reason I thought "blue" meant
>>>>> unselected and "white" meant selected. After selecting the spark
>>>>> interpreter the queries execute, but now I get another error.
>>>>>
>>>>> When running the hdfs tutorial notebook, I get the error "no such
>>>>> table List(bank);". This is strange since the "load data into Table" note
>>>>> executed without any errors. I get the same error when I execute the
>>>>> tutorial notes one by one. Any idea what I'm doing wrong now?
>>>>>
>>>>>
>>>>> Kind regards
>>>>> Merlijn
>>>>>
>>>>> 2015-11-06 14:42 GMT+01:00 Andrew Mcleod <andrew.mcleod at canonical.com>:
>>>>>
>>>>>> Hi Merlijn,
>>>>>>
>>>>>> Have you bound the interpreters to the notebook? The first time you
>>>>>> use the notebook, the top paragraph will be a list of interpreters. You
>>>>>> have to save this to be able to run anything which requires an interpreter
>>>>>> definition.
>>>>>>
>>>>>>
>>>>>> Andrew
>>>>>>
>>>>>> On Fri, Nov 6, 2015 at 1:00 PM, Merlijn Sebrechts <
>>>>>> merlijn.sebrechts at gmail.com> wrote:
>>>>>>
>>>>>>> Hi all
>>>>>>>
>>>>>>>
>>>>>>> I'm trying to get Zeppelin working. Installing works fine, but when
>>>>>>> I run the hdfs notebook, the query parts fail with the following error:
>>>>>>> "sql interpreter not found".
>>>>>>>
>>>>>>> I basically deployed the realtime rsyslog bundle
>>>>>>> <https://jujucharms.com/realtime-syslog-analytics/11> without the
>>>>>>> rsyslog and flume parts. I thought this bundle was working since I saw it
>>>>>>> at a demo during the summit. Any ideas as to what might be wrong here?
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Kind regards
>>>>>>> Merlijn
>>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>