
almond's Introduction


almond

almond is a Scala kernel for Jupyter.

Join the chat at https://gitter.im/alexarchambault/jupyter-scala

Documentation

See the project website for more details

Developer documentation

See docs/pages/dev-from-sources.md in the sources for more details; it will also be published on the website once the developer section there (currently out of date) is updated.

Code of Conduct

The almond project welcomes contributions from anybody wishing to participate. All code or documentation that is provided must be licensed with the same license that almond is licensed with (3-Clause BSD, see LICENSE).

People are expected to follow the Scala Code of Conduct when discussing almond on GitHub, in the Gitter channel, or in other venues.

Feel free to open an issue if you notice a bug, have an idea for a feature, or have a question about the code. Pull requests are also gladly accepted.


almond's Issues

bash errors when running jupyter console

Doesn't appear to affect functionality, though the messages are distracting since they come up on every invocation of jupyter console:

phase:jupyter-scala me$ jupyter console --kernel scala211
Jupyter Console 4.1.1

  Dependencies:
com.github.alexarchambault.jupyter:scala-api_2.11.7:0.3.0-SNAPSHOT:default(compile)
com.github.alexarchambault.jupyter:scala-cli_2.11.7:0.3.0-SNAPSHOT:default(compile)
org.scala-lang:scala-compiler:2.11.7:default(compile)
bash: /dev/tty: Device not configured
  Result:
ch.qos.logback:logback-classic:1.0.13:default
ch.qos.logback:logback-core:1.0.13:compile
com.chuusai:shapeless_2.11:2.2.5:compile
com.github.alexarchambault:argonaut-shapeless_6.1_2.11:1.0.0-M1:compile
com.github.alexarchambault:case-app_2.11:0.2.2:default
com.github.alexarchambault:coursier-cache_2.11:1.0.0-M5:compile
com.github.alexarchambault:coursier_2.11:1.0.0-M5:compile
com.github.alexarchambault:shapeless-compat_2.11:1.0.0-M1:compile
com.github.alexarchambault.ammonium:interpreter-api_2.11.7:0.4.0-M3:default
com.github.alexarchambault.ammonium:interpreter_2.11.7:0.4.0-M3:default
com.github.alexarchambault.ammonium:setup_2.11.7:0.4.0-M3:compile
com.github.alexarchambault.ammonium:tprint_2.11.7:0.4.0-M3:default
com.github.alexarchambault.jupyter:kernel-api_2.11:0.3.0-M3:default
com.github.alexarchambault.jupyter:kernel_2.11:0.3.0-M3:default
com.github.alexarchambault.jupyter:scala-api_2.11.7:0.3.0-SNAPSHOT:compile
com.github.alexarchambault.jupyter:scala-cli_2.11.7:0.3.0-SNAPSHOT:compile
com.github.alexarchambault.jupyter:scala-kernel_2.11.7:0.3.0-SNAPSHOT:compile
com.github.julien-truffaut:monocle-core_2.11:1.1.0:compile
com.github.julien-truffaut:monocle-macro_2.11:1.1.0:compile
com.lihaoyi:ammonite-terminal_2.11:0.5.0:compile
com.lihaoyi:derive_2.11:0.3.8:compile
com.lihaoyi:fastparse-utils_2.11:0.3.4:compile
com.lihaoyi:fastparse_2.11:0.3.4:compile
com.lihaoyi:pprint_2.11:0.3.8:default
com.lihaoyi:scalaparse_2.11:0.3.4:compile
com.lihaoyi:sourcecode_2.11:0.1.0:compile
com.typesafe:config:1.2.1:compile
com.typesafe.scala-logging:scala-logging-api_2.11:2.1.2:compile
com.typesafe.scala-logging:scala-logging-slf4j_2.11:2.1.2:compile
io.argonaut:argonaut_2.11:6.1:compile
org.scala-lang:scala-compiler:2.11.7:default
org.scala-lang:scala-library:2.11.7:default
org.scala-lang:scala-reflect:2.11.7:default
org.scala-lang.modules:scala-parser-combinators_2.11:1.0.4:compile
org.scala-lang.modules:scala-xml_2.11:1.0.4:default
org.scalaz:scalaz-concurrent_2.11:7.1.2:compile
org.scalaz:scalaz-core_2.11:7.1.2:compile
org.scalaz:scalaz-effect_2.11:7.1.2:compile
org.scalaz.stream:scalaz-stream_2.11:0.6a:compile
org.slf4j:slf4j-api:1.7.7:compile
org.typelevel:scodec-bits_2.11:1.0.4:compile
org.zeromq:jeromq:0.3.4:compile
  Fetching artifacts
bash: /dev/tty: Device not configured
  Fetching artifacts
bash: /dev/tty: Device not configured
  Fetching artifacts
bash: /dev/tty: Device not configured
Launching

In [1]:

Connection with Spyder

Hi,

Just wondering if there is a way to plug Jupyter Scala into IPython 3.0 inside Spyder, especially using "Connect to an existing kernel" (connection file with JSON, or a remote kernel).

Availability in anaconda channels?

Hi,

I couldn't find jupyter-scala in any Anaconda channels, but I thought I would ask just in case there are plans to publish it there at some point. Thanks!

Running scala notebook on Windows failed

Hi, thank you for this great work!

But I'm having a problem running this kernel on Windows.
The kernel seems to look for the connection file in the wrong place.

It tries to find the file at 'C:\Users\cmoh\.ipython\profile_default\secure\C:\Users\cmoh\.ipython\profile_default\security\kernel-~~~~~~~~~.json'.
Note that the "C:" path string appears twice, and there is no "C:...\profile_default\secure" directory (though "C:...\profile_default\security" exists).

I'm using

C:\Users\cmoh\Documents\scala>ipython --version
3.1.0

C:\Users\cmoh\Documents\scala>scala -version
Scala code runner version 2.11.6 -- Copyright 2002-2013, LAMP/EPFL

Here is my running 'ipython notebook --debug' result.

[D 11:26:08.063 NotebookApp] Connecting to: tcp://127.0.0.1:54672
[I 11:26:08.066 NotebookApp] Kernel started: 7ce6acc0-ee32-4960-9bec-1dbf9b3aa12f
[D 11:26:08.066 NotebookApp] Kernel args: {'cwd': u'C:\\Users\\cmoh\\Documents\\scala'}
[D 11:26:08.068 NotebookApp] 201 POST /api/sessions (::1) 69.00ms
[D 11:26:08.069 NotebookApp] 200 GET /api/contents/Untitled12.ipynb/checkpoints?_=1428546367311 (::1) 1.00ms
"C:\Program Files\Java\jdk1.8.0_40\bin\java.exe"   -cp "C:\Users\cmoh\Documents\jupyter-scala-cli-0.2.0-SNAPSHOT\bin\..\lib\*;" -Dprog.home="C:\Users\cmoh\Documents\jupyter-scala-cli-0.2.0-SNAPSHOT\bin\.." -Dprog.version="0.2.0-SNAPSHOT" jupyter.scala.JupyterScala --quiet --connection-file C:\Users\cmoh\.ipython\profile_default\security\kernel-7ce6acc0-ee32-4960-9bec-1dbf9b3aa12f.json
[D 11:26:08.095 NotebookApp] Initializing websocket connection /api/kernels/7ce6acc0-ee32-4960-9bec-1dbf9b3aa12f/channels
[D 11:26:08.098 NotebookApp] Requesting kernel info from 7ce6acc0-ee32-4960-9bec-1dbf9b3aa12f
[D 11:26:08.098 NotebookApp] Connecting to: tcp://127.0.0.1:54669
11:26:10,414 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback.groovy]
11:26:10,414 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback-test.xml]
11:26:10,414 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Found resource [logback.xml] at [jar:file:/C:/Users/cmoh/Documents/jupyter-scala-cli-0.2.0-SNAPSHOT/lib/jupyter-scala-cli_2.11-0.2.0-SNAPSHOT.jar!/logback.xml]
11:26:10,415 |-WARN in ch.qos.logback.classic.LoggerContext[default] - Resource [logback.xml] occurs multiple times on the classpath.
11:26:10,415 |-WARN in ch.qos.logback.classic.LoggerContext[default] - Resource [logback.xml] occurs at [jar:file:/C:/Users/cmoh/Documents/jupyter-scala-cli-0.2.0-SNAPSHOT/lib/jupyter-scala-cli_2.11-0.2.0-SNAPSHOT.jar!/logback.xml]
11:26:10,415 |-WARN in ch.qos.logback.classic.LoggerContext[default] - Resource [logback.xml] occurs at [jar:file:/C:/Users/cmoh/Documents/jupyter-scala-cli-0.2.0-SNAPSHOT/lib/jupyter-scala-cli_2.11.6-0.2.0-SNAPSHOT.jar!/logback.xml]
11:26:10,425 |-INFO in ch.qos.logback.core.joran.spi.ConfigurationWatchList@51827393 - URL [jar:file:/C:/Users/cmoh/Documents/jupyter-scala-cli-0.2.0-SNAPSHOT/lib/jupyter-scala-cli_2.11-0.2.0-SNAPSHOT.jar!/logback.xml] is not of type file
11:26:10,480 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - debug attribute not set
11:26:10,482 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.rolling.RollingFileAppender]
11:26:10,491 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [FILE]
11:26:10,540 |-INFO in ch.qos.logback.core.rolling.FixedWindowRollingPolicy@3be4f71 - Will use zip compression
11:26:10,550 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [ch.qos.logback.classic.encoder.PatternLayoutEncoder] for [encoder] property
11:26:10,573 |-INFO in ch.qos.logback.core.rolling.RollingFileAppender[FILE] - Active log file name: C:\Users\cmoh\AppData\Local\Temp\/jupyter-scala.log
11:26:10,573 |-INFO in ch.qos.logback.core.rolling.RollingFileAppender[FILE] - File property is set to [C:\Users\cmoh\AppData\Local\Temp\/jupyter-scala.log]
11:26:10,575 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.ConsoleAppender]
11:26:10,577 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [STDOUT]
11:26:10,579 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [ch.qos.logback.classic.encoder.PatternLayoutEncoder] for [encoder] property
11:26:10,579 |-INFO in ch.qos.logback.classic.joran.action.RootLoggerAction - Setting level of ROOT logger to DEBUG
11:26:10,580 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [FILE] to Logger[ROOT]
11:26:10,580 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - End of configuration.
11:26:10,581 |-INFO in ch.qos.logback.classic.joran.JoranConfigurator@5c645b43 - Registering current configuration as safe fallback point

Exception in thread "main" java.io.FileNotFoundException: C:\Users\cmoh\.ipython\profile_default\secure\C:\Users\cmoh\.ipython\profile_default\security\kernel-7ce6acc0-ee32-4960-9bec-1dbf9b3aa12f.json (the filename, directory name, or volume label syntax is incorrect)
    at java.io.FileOutputStream.open0(Native Method)
    at java.io.FileOutputStream.open(FileOutputStream.java:270)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:162)
    at java.io.PrintWriter.<init>(PrintWriter.java:263)
    at jupyter.kernel.server.Server$.newConnectionFile(Server.scala:54)
    at jupyter.kernel.server.Server$$anonfun$apply$12$$anonfun$apply$13.apply(Server.scala:146)
    at jupyter.kernel.server.Server$$anonfun$apply$12$$anonfun$apply$13.apply(Server.scala:140)
    at scalaz.$bslash$div.flatMap(Either.scala:134)
    at jupyter.kernel.server.Server$$anonfun$apply$12.apply(Server.scala:140)
    at jupyter.kernel.server.Server$$anonfun$apply$12.apply(Server.scala:127)
    at scalaz.$bslash$div.flatMap(Either.scala:134)
    at jupyter.kernel.server.Server$.apply(Server.scala:127)
    at jupyter.kernel.server.ServerApp$.apply(ServerApp.scala:70)
    at jupyter.scala.JupyterScala.delayedEndpoint$jupyter$scala$JupyterScala$1(JupyterScala.scala:22)
    at jupyter.scala.JupyterScala$delayedInit$body.apply(JupyterScala.scala:8)
    at scala.Function0$class.apply$mcV$sp(Function0.scala:40)
    at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
    at caseapp.App$$anonfun$apply$1.apply(App.scala:25)
    at caseapp.App$$anonfun$apply$1.apply(App.scala:24)
    at scala.collection.immutable.List.foreach(List.scala:381)
    at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
    at scala.collection.mutable.ListBuffer.foreach(ListBuffer.scala:45)
    at caseapp.App$class.apply(App.scala:24)
    at jupyter.scala.JupyterScala.apply(JupyterScala.scala:8)
    at caseapp.AppOf.main(App.scala:62)
    at jupyter.scala.JupyterScala.main(JupyterScala.scala)
[I 11:26:11.063 NotebookApp] KernelRestarter: restarting kernel (1/5)
[D 11:26:11.068 NotebookApp] Connecting to: tcp://127.0.0.1:54672

And jupyter-scala.log file:

2015-04-09 11:19:58 INFO [main] j.k.s.Server$ [Server.scala:141] Connection file: C:\Users\cmoh\.ipython\profile_default\secure\C:\Users\cmoh\.ipython\profile_default\security\kernel-be4ac361-8a94-45ba-9072-fb58b39cef1c.json
2015-04-09 11:22:36 INFO [main] j.k.s.Server$ [Server.scala:141] Connection file: C:\Users\cmoh\.ipython\profile_default\secure\C:\Users\cmoh\.ipython\profile_default\security\kernel-6ae236b3-0ff8-4887-8482-58a69bb5eeeb.json
2015-04-09 11:26:10 INFO [main] j.k.s.Server$ [Server.scala:141] Connection file: C:\Users\cmoh\.ipython\profile_default\secure\C:\Users\cmoh\.ipython\profile_default\security\kernel-7ce6acc0-ee32-4960-9bec-1dbf9b3aa12f.json

I generated kernel.json using jupyter-scala.bat --kernel-spec (at C:\Users\cmoh\.ipython\kernels\scala-2.11\kernel.json), though its initial path delimiters were wrong; I corrected them manually.

{
  "argv": ["C:\\Users\\cmoh\\Documents\\jupyter-scala-cli-0.2.0-SNAPSHOT\\bin\\jupyter-scala.bat", "--quiet", "--connection-file", "{connection_file}"],
  "display_name": "Scala 2.11",
  "language": "scala",
  "extensions": ["snb"]
}

Please let me know how to fix this.
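Not an official diagnosis, but the doubled path in the logs is consistent with a plain java.io.File(parent, child) join, which resolves child against parent even when child is already an absolute path. A minimal sketch of that behavior, with made-up Unix-style paths standing in for the Windows ones:

```scala
import java.io.File

object ConnectionFilePathDemo {
  // File(parent, child) resolves child against parent even when child is
  // already an absolute path, which duplicates the directory prefix —
  // the same shape as the doubled connection-file path in the logs above.
  def joined: String =
    new File("/profile_default/secure", "/profile_default/security/kernel.json").getPath

  def main(args: Array[String]): Unit =
    println(joined)  // /profile_default/secure/profile_default/security/kernel.json
}
```

If the kernel treats the --connection-file argument as a file name to resolve against a default directory, an absolute Windows path would be mangled exactly this way.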

inline plot

Hi, I am new to IPython and Jupyter, so I may have missed something.

I am very interested in being able to plot graphs based on data generated in Scala.
How can I do that? I can see that it is doable in IPython, but I cannot find any documentation or examples for doing it with Scala.

Thank you !

ammonite-spark for Spark 1.4.1

I am following the Spark setup instructions listed in Spark.ipynb. I have Spark 1.4.1. I tried using ammonite-spark_1.3_2.10.5, but it fails when associating with the master.

Is there a version of the package for Spark 1.4.1? I looked in your Sonatype repository but I could not find it.

Here is what I tried:

load.ivy("com.github.alexarchambault" % "ammonite-spark_1.3_2.10.5" % "0.3.1-SNAPSHOT")
// ... and later
Spark.start()

Which concluded with:

15/08/10 11:41:06 WARN ReliableDeliverySupervisor: Association with remote system [akka.tcp://sparkMaster@master:7077] has failed, address is now gated for [5000] ms. Reason is: [Disassociated].
15/08/10 11:41:26 ERROR SparkDeploySchedulerBackend: Application has been killed. Reason: All masters are unresponsive! Giving up.

And in the Spark master log I got:

15/08/10 11:41:06 ERROR Remoting: org.apache.spark.deploy.ApplicationDescription; local class incompatible: stream classdesc serialVersionUID = 2596819202403185464, local class serialVersionUID = -7685200927816255400
java.io.InvalidClassException: org.apache.spark.deploy.ApplicationDescription; local class incompatible: stream classdesc serialVersionUID = 2596819202403185464, local class serialVersionUID = -7685200927816255400

Thanks!

jupyter-scala crashing repl

installed via curl -L -o jupyter-scala https://git.io/vzhRi && chmod +x jupyter-scala && ./jupyter-scala

phase:scala me$ jupyter console --kernel scala211
Jupyter Console 4.1.1


In [1]: val a = 5
WARNING: The kernel did not respond to an is_complete_request. Setting `use_kernel_is_complete` to False.
a: Int = 5
Traceback (most recent call last):
  File "/Users/me/Library/Python/2.7/bin/jupyter-console", line 11, in <module>
    sys.exit(main())
  File "/Users/me/Library/Python/2.7/lib/python/site-packages/jupyter_core/application.py", line 267, in launch_instance
    return super(JupyterApp, cls).launch_instance(argv=argv, **kwargs)
  File "/Users/me/Library/Python/2.7/lib/python/site-packages/traitlets/config/application.py", line 596, in launch_instance
    app.start()
  File "/Users/me/Library/Python/2.7/lib/python/site-packages/jupyter_console/app.py", line 152, in start
    self.shell.mainloop()
  File "/Users/me/Library/Python/2.7/lib/python/site-packages/jupyter_console/interactiveshell.py", line 483, in mainloop
    self.interact(display_banner=display_banner)
  File "/Users/me/Library/Python/2.7/lib/python/site-packages/jupyter_console/interactiveshell.py", line 659, in interact
    self.run_cell(source_raw)
  File "/Users/me/Library/Python/2.7/lib/python/site-packages/jupyter_console/interactiveshell.py", line 203, in run_cell
    self.handle_execute_reply(msg_id, timeout=0.05)
  File "/Users/me/Library/Python/2.7/lib/python/site-packages/jupyter_console/interactiveshell.py", line 221, in handle_execute_reply
    status = content['status']
KeyError: 'status'

Creating REPL Classes using Reflection

Classes that are created in the REPL cannot be instantiated using reflection (their constructors show up with one argument when they should have zero).

A simple example is shown below.
A normal class can be instantiated without any problem:

val testBasic = Try{classOf[java.io.ByteArrayOutputStream].newInstance()}
testBasic: scala.util.Try[java.io.ByteArrayOutputStream] = Success()

Defining classes in the REPL (even with a few tricks):

class TestClassA
object TestObjA {
    class TestClassB
}
object TestObjB {
    class TestClassC
}

Instantiating them the standard way works fine:

val testA1 = new TestClassA
val testB1 = new TestObjA.TestClassB
import TestObjB._
val testC1 = new TestClassC
testA1: cmd2.INSTANCE.$ref$cmd0.TestClassA = cmd0$$user$TestClassA@65100245
testB1: cmd2.INSTANCE.$ref$cmd0.TestObjA.TestClassB = cmd0$$user$TestObjA$TestClassB@18654dc1
import TestObjB._
testC1: cmd2.INSTANCE.$ref$cmd0.TestObjB.TestClassC = cmd0$$user$TestObjB$TestClassC@54bd18ab

but not with reflection:

import scala.util.Try
val testA2 = Try{classOf[TestClassA].newInstance()}
val testB2 = Try{classOf[TestObjA.TestClassB].newInstance()}
val testC2 = Try{classOf[TestClassC].newInstance()}
testA2: scala.util.Try[cmd4.INSTANCE.$ref$cmd0.TestClassA] = Failure(java.lang.InstantiationException: cmd0$$user$TestClassA)
testB2: scala.util.Try[cmd4.INSTANCE.$ref$cmd0.TestObjA.TestClassB] = Failure(java.lang.InstantiationException: cmd0$$user$TestObjA$TestClassB)
testC1: scala.util.Try[cmd4.INSTANCE.$ref$cmd0.TestObjB.TestClassC] = Failure(java.lang.InstantiationException: cmd0$$user$TestObjB$TestClassC)

See a more detailed example here. A similar problem is listed here
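A plausible explanation, sketched outside the kernel (this illustrates the likely mechanism, not the kernel's actual internals): REPL-defined classes are compiled as inner classes of the wrapping command instance, and the JVM gives an inner class a hidden constructor parameter for its enclosing instance — which is exactly what makes a zero-argument newInstance throw InstantiationException:

```scala
object InnerCtorDemo {
  class Outer {
    val tag = "outer"
    // Inner captures its enclosing Outer instance, so its JVM-level
    // constructor takes that instance as a hidden argument.
    class Inner { def t: String = tag }
  }

  // Number of JVM-level constructor parameters of the inner class.
  def innerCtorParamCount: Int = {
    val outer = new Outer
    val innerClass = (new outer.Inner).getClass
    innerClass.getDeclaredConstructors.head.getParameterCount
  }

  def main(args: Array[String]): Unit =
    println(innerCtorParamCount)  // 1: the enclosing Outer instance
}
```

A zero-argument newInstance on such a class fails for the same reason the REPL-defined classes above do: there simply is no zero-argument constructor at the bytecode level.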

Adding a Jar

Is there any way to load a jar in jupyter-scala? I used :cp /something.jar to load local jar files into the Scala REPL. How can I do this with jupyter-scala? Thanks

Running Apache Spark- java.lang.OutOfMemoryError

Hi, I am trying to run Apache Spark in the notebook as follows:

load.jar("/Users/babak/App/spark-1.3.1/assembly/target/scala-2.10/spark-assembly-1.3.1-hadoop2.4.0.jar")
import org.apache.spark.SparkContext
import org.apache.spark.SparkConf
val conf = new SparkConf().setMaster("spark://smaster:7077").setAppName("test_jupyter")
val sc = new SparkContext(conf)

After running the last line, I see the following error in the terminal:

2015-10-06 13:01:57.877 java[52661:3205521] Unable to load realm info from SCDynamicStore
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "sparkDriver-akka.actor.default-dispatcher-3"

I should say I am able to use Spark from spark-shell (spark-shell --master spark://smaster:7077), so there is no problem with the Spark cluster.

Missing Response on Shell Channel

Hi,

I am currently developing an alternative Jupyter kernel manager, but while testing against your kernel I didn't get a response to my messages on the shell channel.

I manually start your kernel like this:

$ java -jar ~/.ipython/kernels/scala211/launcher.jar --connection-file  /tmp/scala_kernel/connection.json

Here is the connection file:

{
  "stdin_port": 53888, 
  "ip": "127.0.0.1", 
  "control_port": 53889, 
  "hb_port": 53890, 
  "signature_scheme": "hmac-sha256", 
  "key": "", 
  "kernel_name": "", 
  "shell_port": 53886, 
  "transport": "tcp", 
  "iopub_port": 53887
}

And then run the following Scala code:

import org.zeromq.{ZMQ, ZMsg}
import scala.collection.JavaConversions._
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

/**
  * Created by artjom on 14.04.16.
  */
object ScalaKernelProblem extends App {


  val msgTxt =
    s"""<IDS|MSG>
        |6ea6b213262402cc1ad3c1d3e342a9f6
        |{"date":"2013-04-27T23:22:13.522049","username":"test","session":"examplesession","msg_id":"examplemessage","msg_type":"execute_request"}
        |{}
        |{}
        |{"code":"68 - 26","silent":false,"allow_stdin":true,"store_history":true,"user_expressions":{}, "stop_on_error": false}
      """.stripMargin

  val msg = new ZMsg()
  msgTxt.split("\n").filter(!_.isEmpty).foreach(msg.add)


  val context = ZMQ.context(1)

  val iopub = context.socket(ZMQ.SUB)
  iopub.subscribe(Array.empty[Byte])
  iopub.connect("tcp://127.0.0.1:53887")

  val socket = context.socket(ZMQ.REQ)
  socket.connect("tcp://127.0.0.1:53886")

  println("Listening on iopub channel")
  Future {
      while(true){
        Option(ZMsg.recvMsg(iopub, 0)).foreach(zMsg =>
        zMsg.toSeq.map(zFrame => new String(zFrame.getData, ZMQ.CHARSET)).foreach(s => println(s"iopub> $s"))
        )
        println("Further listening")
      }
    }



  println("Sending msg...")
  val b = msg.send(socket)
  println(s"Send message ($b). Waiting for response...")
  val reply = ZMsg.recvMsg(socket, 0).toList
  println(s"Got response: $reply. Finished")

  Thread.sleep(1000)

}

The output looks like this:

> !!
kernel_manager/test:runMain com.test.kernel_manager.ScalaKernelProblem
[info] Running com.test.kernel_manager.ScalaKernelProblem 
[info] Listening on iopub channel
[info] Sending msg...
[info] Send message (true). Waiting for response...
[info] iopub> execute_input
[info] iopub> <IDS|MSG>
[info] iopub> 
[info] iopub> {"msg_id":"b2aa05e0-4533-4ac9-b777-ef07ad63da06","username":"test","msg_type":"execute_input","version":null,"session":"examplesession"}
[info] iopub> {"msg_id":"examplemessage","username":"test","msg_type":"execute_request","version":null,"session":"examplesession"}
[info] iopub> {}
[info] iopub> {"execution_count":1,"code":"68 - 26"}
[info] Further listening
[info] iopub> status
[info] iopub> <IDS|MSG>
[info] iopub> 
[info] iopub> {"msg_id":"3b077df4-c16e-4692-8cf4-e6c2d97d16dd","username":"test","msg_type":"status","version":"5.0","session":"examplesession"}
[info] iopub> {"msg_id":"examplemessage","username":"test","msg_type":"execute_request","version":null,"session":"examplesession"}
[info] iopub> {}
[info] iopub> {"execution_state":"busy"}
[info] Further listening
[info] iopub> display_data
[info] iopub> <IDS|MSG>
[info] iopub> 
[info] iopub> {"msg_id":"3099b7f9-2bd0-4e19-b3b2-5fbbd0b66f96","username":"test","msg_type":"display_data","version":null,"session":"examplesession"}
[info] iopub> {"msg_id":"examplemessage","username":"test","msg_type":"execute_request","version":null,"session":"examplesession"}
[info] iopub> {}
[info] iopub> {"metadata":{},"data":{"text/plain":"\u001b[36mres0\u001b[0m: \u001b[32mInt\u001b[0m = \u001b[32m42\u001b[0m"},"source":"interpreter"}
[info] Further listening
[info] iopub> status
[info] iopub> <IDS|MSG>
[info] iopub> 
[info] iopub> {"msg_id":"cb3252c7-c25b-4192-8567-be56a1d0518f","username":"test","msg_type":"status","version":"5.0","session":"examplesession"}
[info] iopub> {"msg_id":"examplemessage","username":"test","msg_type":"execute_request","version":null,"session":"examplesession"}
[info] iopub> {}
[info] iopub> {"execution_state":"idle"}
[info] Further listening

So, as you can see, I get the expected messages on the iopub channel, but the program hangs forever because I never actually get the response on the shell channel confirming that the code was executed. The Jupyter documentation describes this step:

Upon completion of the execution request, the kernel always sends a reply, with a status code indicating what happened and additional data depending on the outcome. See below for the possible return codes and associated data.

So my question basically is: is there a way to force a response on the shell channel (e.g. by changing my request), or, if this is a bug, is there a more recent version where it is fixed?

Best regards,
Artjom


I am using Scala 2.11.7, version 0.3.5 of jeromq, and kernel_2.11-0.3.0-M2.jar of your kernel (at least, that is what I can find in ~/.jupyter-scala/bootstrap).
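One detail worth double-checking in a hand-built client like the one above: the Jupyter wire protocol expects the frame right after <IDS|MSG> to be an HMAC-SHA256 signature of the header, parent-header, metadata, and content frames, computed with the key from the connection file (with an empty key, as in the connection file above, signing is typically skipped and the signature left empty). A sketch of that computation:

```scala
import javax.crypto.Mac
import javax.crypto.spec.SecretKeySpec

object WireSignature {
  // Hex-encoded HMAC-SHA256 over the four JSON frames, in order:
  // header, parent header, metadata, content. Note: the JCA rejects an
  // empty key, which matches the protocol's "empty key => no signing" rule.
  def sign(key: String, frames: Seq[String]): String = {
    val mac = Mac.getInstance("HmacSHA256")
    mac.init(new SecretKeySpec(key.getBytes("UTF-8"), "HmacSHA256"))
    frames.foreach(f => mac.update(f.getBytes("UTF-8")))
    mac.doFinal().map("%02x".format(_)).mkString
  }
}
```

If a kernel validates signatures, a request carrying a stale or unrelated signature frame may be silently dropped, which would look exactly like a missing shell reply.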

Jupyter Scala kernel appears to hang

I'm having an issue where jupyter-scala only returns the result of the first statement (slowly, and without adding a line number upon completion), and does not return any of the following ones.

I put it in a Docker image, so it can be reproduced reliably. (The goal was a Docker container that allows working with Spark through the Scala API, not just the Python API.)

Steps to reproduce:

  1. docker pull docxs/spark-notebook && docker run -it --rm -p 8888:8888 -h sandbox docxs/spark-notebook -bash
  2. Install Jupyter Scala according to the documentation
  3. Run jupyter (jupyter notebook --ip=0.0.0.0 --no-browser --port=8888)
  4. Start a new Scala211 notebook
  5. Make 2 statements

Expected:

  • Both statements return

Actual:

  • Only the first statement returns, and the notebook kernel seems to hang

import of packages with renaming fails between cells

The following code, each line in a separate cell:

import java.{io => jio}
import jio.File;
new File("/tmp");

results in the errors:

    Main.scala:16: object jio is not a member of package java
    import java.jio.File
                ^
    Main.scala:45: object jio is not a member of package java
    import java.jio.File
                ^
    Main.scala;54: not found: type File
    new File("/tmp")
        ^

It looks like "jio" is interpreted as an alias for "java.jio", while it should be an alias for "java.io". Attaching a demo notebook...

package_import_with_renaming_failure.ipynb.zip

P.S.: This issue may be more appropriate in the ammonite project, but I don't know how to test it or construct an illustrative example for pure ammonite.
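For comparison, plain scalac compiling everything as a single unit handles the renamed import fine, which suggests the problem lies in how the kernel stitches cells together into synthetic imports rather than in the language semantics:

```scala
// In a single compilation unit, a renamed package import resolves
// against the original package: jio.File is java.io.File.
import java.{io => jio}

object RenameImportDemo {
  def tmpPath: String = new jio.File("/tmp").getPath

  def main(args: Array[String]): Unit =
    println(tmpPath)  // /tmp
}
```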

Completion logic?

Great library! Your bootstrapping logic to setup a new kernel is quite nice.

Some of the other Scala kernels I've used had completion logic, so the notebook would autocomplete method calls for you. I'm not seeing that with my version of jupyter-scala (using the 2.10.5 version and the latest SNAPSHOT on Maven). Has that not been implemented yet, or is it a problem on my side?

Configure java.library.path

For using libraries with JNI, it is important to be able to change java.library.path. This is possible via a rather hacky approach (taken from http://fahdshariff.blogspot.be/2011/08/changing-java-library-path-at-runtime.html), but it would be better to have a command such as load.library that handled this:

// jniPath is the directory containing the native libraries to add
val newPath = System.getProperty("java.library.path").split(":") ++ Array(jniPath)
System.setProperty("java.library.path", newPath.distinct.mkString(":"))
// Set the cached sys_paths field to null so that java.library.path
// is re-evaluated the next time it is needed
val sysPathsField = classOf[ClassLoader].getDeclaredField("sys_paths")
sysPathsField.setAccessible(true)
sysPathsField.set(null, null)

Wrong answers for flatMap

val xs = List(1,2,3)
xs flatMap {x => List(x, x*2)}

gives

List(1, 2, 3)

but

xs map {x => List(x, x*2)} flatten

gives

List(1, 2, 2, 4, 3, 6)
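For reference, standard Scala gives List(1, 2, 2, 4, 3, 6) for both forms, so this looks like a kernel-side evaluation or display problem rather than a question about flatMap semantics. A quick check outside the kernel:

```scala
object FlatMapCheck {
  val xs = List(1, 2, 3)

  // flatMap is equivalent to map followed by flatten.
  def viaFlatMap: List[Int]    = xs.flatMap(x => List(x, x * 2))
  def viaMapFlatten: List[Int] = xs.map(x => List(x, x * 2)).flatten

  def main(args: Array[String]): Unit = {
    println(viaFlatMap)     // List(1, 2, 2, 4, 3, 6)
    println(viaMapFlatten)  // List(1, 2, 2, 4, 3, 6)
  }
}
```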

Suppress output of statements

Hi,
Is there a way to suppress the output of statements?
For example, could I suppress "x: Int = 1" below?

In [24]: val x = 1
x: Int = 1

Thanks,
Praveen

Kernel won't list in ipython

After running the jupyter-scala file I get:

Error: Could not find or load main class files.jupyter-scala_2.11.6-0.2.0-SNAPSHOT.icon-mac.png
logout

[Process completed]

Afterwards, the kernel isn't listed among the Jupyter (IPython) kernels. How can I fix this?

Spark? More of a Q than an issue.

I know you were working on jove-spark before, and now this is your focus.

I believe I read somewhere that you have spark working with jupyter-scala. Are you creating the context in your notebooks, or do you have something else in the works for spark interop?

Having troubles adding Breeze

I'm a newbie to Scala/Java and am having trouble adding Breeze to jupyter-scala.

My assumption is that appending the lines

libraryDependencies ++= Seq(
"org.scalanlp" %% "breeze" % "0.11.2",
"org.scalanlp" %% "breeze-natives" % "0.11.2"
)

to build.sbt should let me later import Breeze in a Jupyter notebook. But I'm encountering a dozen errors even when I simply try running sbt and then console in the ./jupyter-scala directory.

Am I doing something wrong, and what would be the correct way of adding Breeze?

Thanks!
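A sketch of what may be the intended workflow, assuming the load.ivy helper that other reports in this thread use: jupyter-scala resolves dependencies at runtime from a notebook cell, so build.sbt is not consulted at all. The exact artifact names below (breeze_2.11 etc.) are assumptions:

```scala
// Hypothetical notebook cells — load.ivy is the kernel's runtime
// dependency loader (assumed API); editing build.sbt has no effect.
load.ivy("org.scalanlp" % "breeze_2.11" % "0.11.2")
load.ivy("org.scalanlp" % "breeze-natives_2.11" % "0.11.2")

// After the jars are resolved, imports work in subsequent cells.
import breeze.linalg.DenseVector
val v = DenseVector(1.0, 2.0, 3.0)
```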

Download as Scala code does not work

Downloading a notebook as Scala code gives a 500 error (internal server error)

nbconvert failed: FileExtension trait 'file_extension' does not begin with a dot: 'scala'

Scala kernel not showing up as 'available kernel'

I downloaded jupyter-scala from git, then built from sources.

$ sbt cli/target

$ ls -l /Users/davidlaxer/jupyter-scala/cli/target/scala-2.11/

total 224

drwxr-xr-x 4 davidlaxer staff 136 Sep 21 12:39 classes
-rw-r--r-- 1 davidlaxer staff 114465 Sep 21 12:39 jupyter-scala-cli_2.11.6-0.2.0-SNAPSHOT.jar

$ sudo jupyter kernelspec install scala-2.11/

Password:

[InstallKernelSpec] Installed kernelspec in /usr/local/share/jupyter/kernels/

David-Laxers-MacBook-Pro:target davidlaxer$ jupyter kernelspec list

Available kernels:

ir
julia-0.3
matlab_kernel
python2

Why don't I see the scala2.11 kernel?

screen shot 2015-09-21 at 2 06 12 pm

Add dir to classpath permanently

Hey,

I am trying to add new dependencies permanently so I do not need to add them every time, so I rewrote the jupyter-scala bash script. This does not work:

exec "$JAVACMD" \
  ${JVM_OPT} \
  -cp "${PROG_HOME}/lib/${PSEP}/path/to/my/dir/${CLASSPATH_SUFFIX}" \
  ...

Whatever comes after ${PSEP} does not end up on the classpath. I wonder how PSEP is meant to be used:

# Path separator used in EXTRA_CLASSPATH
PSEP=":"

Is there any other way you would do this?

jupyter-scala with Spark does not see imports

I am running Jupyter Notebook on a machine where a configured Spark distribution is available, with spark-submit, spark-defaults, etc. The Python and R kernels work, but I have issues with jupyter-scala. I try to run the following code inside the notebook:

import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import com.datastax.spark.connector._

val conf = new SparkConf(true)
val sc = new SparkContext("spark://127.0.0.1:7077", "test", conf)

but it seems the jars are not visible to it:

Main.scala:29: object datastax is not a member of package com
 ; import org.apache.spark.SparkConf ; import org.apache.spark.SparkContext ; import com.datastax.spark.connector._ ; val conf = { () =>
                                                                                         ^
Main.scala:29: object apache is not a member of package org
 ; import org.apache.spark.SparkConf ; import org.apache.spark.SparkContext ; import com.datastax.spark.connector._ ; val conf = { () =>
              ^
Main.scala:30: not found: type SparkConf
new SparkConf(true) 
    ^
Main.scala:29: object apache is not a member of package org
 ; import org.apache.spark.SparkConf ; import org.apache.spark.SparkContext ; import com.datastax.spark.connector._ ; val conf = { () =>
                                                  ^
Main.scala:33: not found: type SparkContext
new SparkContext("spark://127.0.0.1:7077", "test", conf) 
    ^

I have tried adding the following before it:

classpath.addPath("/var/lib/spark/lib/spark-assembly-1.6.0-hadoop2.6.0.jar")
classpath.add("datastax" % "spark-cassandra-connector" % "1.6.0-M1-s_2.11")

but nothing changes.

I am running jupyter-scala from https://git.io/vzhRi .

magic support

It would be nice to be able to have Scala in one cell and Python in another, as some libraries are available only in Python. I know that such a thing is done with %lsmagic, but I cannot run it with the Scala kernel.

ipython kernelspec list does not work (No module named zmq)

> ipython kernelspec list

Traceback (most recent call last):
  File "/usr/bin/ipython", line 6, in <module>
    start_ipython()
  File "/usr/local/lib/python2.7/dist-packages/IPython/__init__.py", line 118, in start_ipython
    return launch_new_instance(argv=argv, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/traitlets/config/application.py", line 591, in launch_instance
    app.initialize(argv)
  File "<decorator-gen-111>", line 2, in initialize
  File "/usr/local/lib/python2.7/dist-packages/traitlets/config/application.py", line 75, in catch_config_error
    return method(app, *args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/IPython/terminal/ipapp.py", line 305, in initialize
    super(TerminalIPythonApp, self).initialize(argv)
  File "<decorator-gen-7>", line 2, in initialize
  File "/usr/local/lib/python2.7/dist-packages/traitlets/config/application.py", line 75, in catch_config_error
    return method(app, *args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/IPython/core/application.py", line 386, in initialize
    self.parse_command_line(argv)
  File "/usr/local/lib/python2.7/dist-packages/IPython/terminal/ipapp.py", line 300, in parse_command_line
    return super(TerminalIPythonApp, self).parse_command_line(argv)
  File "<decorator-gen-4>", line 2, in parse_command_line
  File "/usr/local/lib/python2.7/dist-packages/traitlets/config/application.py", line 75, in catch_config_error
    return method(app, *args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/traitlets/config/application.py", line 487, in parse_command_line
    return self.initialize_subcommand(subc, subargv)
  File "<decorator-gen-3>", line 2, in initialize_subcommand
  File "/usr/local/lib/python2.7/dist-packages/traitlets/config/application.py", line 75, in catch_config_error
    return method(app, *args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/traitlets/config/application.py", line 418, in initialize_subcommand
    subapp = import_item(subapp)
  File "/usr/local/lib/python2.7/dist-packages/ipython_genutils/importstring.py", line 31, in import_item
    module = __import__(package, fromlist=[obj])
  File "/usr/local/lib/python2.7/dist-packages/jupyter_client/__init__.py", line 4, in <module>
    from .connect import *
  File "/usr/local/lib/python2.7/dist-packages/jupyter_client/connect.py", line 21, in <module>
    import zmq
ImportError: No module named zmq

ipython3 console --kernel scala210: type "load" and press tab

Hi!

I just installed jupyter-scala, and when I type "load" and press tab I always get: "kernel died, restart ([y]/n)?"

And the backtrace:
In [3]: loadException in thread "main" java.lang.UnsupportedOperationException: Position.end on class scala.reflect.internal.util.OffsetPosition
at scala.reflect.internal.util.Position.end(Position.scala:126)
at ammonite.interpreter.Pressy$Run.prefixed(Pressy.scala:80)
at ammonite.interpreter.Pressy$$anon$3.complete(Pressy.scala:175)
at ammonite.interpreter.Interpreter.complete(Interpreter.scala:155)
at jupyter.scala.ScalaInterpreter$$anon$1.complete(ScalaInterpreter.scala:170)
at jupyter.kernel.interpreter.InterpreterHandler$.complete(InterpreterHandler.scala:159)
at jupyter.kernel.interpreter.InterpreterHandler$.apply(InterpreterHandler.scala:233)
at jupyter.kernel.server.InterpreterServer$$anonfun$1.apply(InterpreterServer.scala:87)
at jupyter.kernel.server.InterpreterServer$$anonfun$1.apply(InterpreterServer.scala:82)
at scalaz.stream.Process$$anonfun$map$1.apply(Process.scala:47)
at scalaz.stream.Process$$anonfun$map$1.apply(Process.scala:47)
at scalaz.stream.Process$$anonfun$flatMap$1.apply(Process.scala:40)
at scalaz.stream.Process$$anonfun$flatMap$1.apply(Process.scala:40)
at scalaz.stream.Util$.Try(Util.scala:42)
at scalaz.stream.Process$class.flatMap(Process.scala:40)
at scalaz.stream.Process$Emit.flatMap(Process.scala:571)
at scalaz.stream.Process$$anonfun$flatMap$3.apply(Process.scala:41)
at scalaz.stream.Process$$anonfun$flatMap$3.apply(Process.scala:41)
at scalaz.Free$$anonfun$map$1.apply(Free.scala:52)
at scalaz.Free$$anonfun$map$1.apply(Free.scala:52)
at scalaz.Free$$anonfun$flatMap$1$$anonfun$apply$1.apply(Free.scala:60)
at scalaz.Free$$anonfun$flatMap$1$$anonfun$apply$1.apply(Free.scala:60)
at scalaz.Free.resume(Free.scala:72)
at scalaz.Free.go2$1(Free.scala:118)
at scalaz.Free.go(Free.scala:122)
at scalaz.Free.run(Free.scala:172)
at scalaz.stream.Process$$anonfun$go$3$3$$anonfun$6.apply(Process.scala:473)
at scalaz.stream.Process$$anonfun$go$3$3$$anonfun$6.apply(Process.scala:473)
at scalaz.stream.Util$.Try(Util.scala:42)
at scalaz.stream.Process$$anonfun$go$3$3.apply(Process.scala:473)
at scalaz.stream.Process$$anonfun$go$3$3.apply(Process.scala:472)
at scalaz.concurrent.Task$$anonfun$flatMap$1$$anonfun$1.apply(Task.scala:36)
at scalaz.concurrent.Task$$anonfun$flatMap$1$$anonfun$1.apply(Task.scala:36)
at scalaz.concurrent.Task$.Try(Task.scala:379)
at scalaz.concurrent.Task$$anonfun$flatMap$1.apply(Task.scala:36)
at scalaz.concurrent.Task$$anonfun$flatMap$1.apply(Task.scala:34)
at scalaz.concurrent.Future$$anonfun$flatMap$1.apply(Future.scala:59)
at scalaz.concurrent.Future$$anonfun$flatMap$1.apply(Future.scala:59)
at scalaz.concurrent.Future.step(Future.scala:111)
at scalaz.concurrent.Future.listen(Future.scala:76)
at scalaz.concurrent.Future$$anonfun$listen$1$$anonfun$apply$4.apply(Future.scala:80)
at scalaz.concurrent.Future$$anonfun$listen$1$$anonfun$apply$4.apply(Future.scala:80)
at scalaz.Free$$anonfun$map$1.apply(Free.scala:52)
at scalaz.Free$$anonfun$map$1.apply(Free.scala:52)
at scalaz.Free.resume(Free.scala:73)
at scalaz.Free.go2$1(Free.scala:118)
at scalaz.Free.go(Free.scala:122)
at scalaz.Free.run(Free.scala:172)
at scalaz.concurrent.Future$$anonfun$apply$15$$anon$3.call(Future.scala:367)
at scalaz.concurrent.Future$$anonfun$apply$15$$anon$3.call(Future.scala:367)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

If I type something else, it works.

How does one add libraries to the classpath?

How do I add libraries to the classpath so that I can use them with jupyter-scala?

For example, IScala has a magic sbt command like %libraryDependencies += "org.apache.spark" %% "spark-assembly" % "1.1.0". Is there something similar with jupyter-scala?
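Judging from other issues on this page, the jupyter-scala equivalent at the time was the kernel-provided load/classpath helpers rather than an sbt-style magic. A hedged sketch, to be run in a notebook cell (these helpers exist only inside the kernel, and the coordinates are just examples):

```scala
// Kernel-provided helpers, as used in other issues here — not plain Scala:
load.ivy("org.apache.spark" %% "spark-assembly" % "1.1.0")          // fetch from Ivy/Maven
classpath.add("datastax" % "spark-cassandra-connector" % "1.6.0-M1-s_2.11")
classpath.addPath("/path/to/local.jar")                             // add a local jar or directory
```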

p.s. Thanks for working on this project. I think it's great!

Case class issues (with rapture-json)

(This is with latest scala211 kernel).

Sometimes referencing case classes from other cells seems to behave strangely.

I've been testing the kernel with the rapture json library, which I believe uses macros underneath. It works to some degree, but also has some strange behavior. You can see the results of a session here...

https://github.com/tylerprete/random/blob/master/weird-behavior.ipynb

You'll notice that I can serialize and deserialize Dog just fine, but DogOwner has issues.
What's weird, though, is that I can serialize it just fine if I define the case class for DogOwner in the exact same cell.

I've run these same commands in a normal ammonite-repl without issue, so it seems like it might be related to jupyter-scala.

"psp std demo" notebook example does not work

Executing the first line in file psp-std.ipynb fails in Jupyter notebook:
interpreter.init("-Yno-imports", "-Yno-predef")

The error output is:
Main.scala:24: value init is not a member of ammonite.api.Interpreter

It seems that the interpreter object no longer exposes an init method.

The environment into which jupyter-scala was installed is WinPython 3.5.1.2 64-bit under Windows 10 Pro 64-bit.

Scalatest 3.0 - exception during macro expansion

Hi,

I'm getting an error while trying to use Scalatest 3.0 M14 with jupyter-scala@master:

load.ivy("org.scalatest" %% "scalatest" % "3.0.0-M14")
import org.scalatest.Assertions
import org.scalatest.Assertions.assertionsHelper
Assertions.assert(2 == 2)

produces:

Compilation Failed
Main.scala:53: exception during macro expansion: 
scala.reflect.macros.TypecheckException: not found: value assertionsHelper
    at scala.reflect.macros.contexts.Typers$$anonfun$typecheck$2$$anonfun$apply$1.apply(Typers.scala:34)
    at scala.reflect.macros.contexts.Typers$$anonfun$typecheck$2$$anonfun$apply$1.apply(Typers.scala:28)
    at scala.reflect.macros.contexts.Typers$$anonfun$3.apply(Typers.scala:24)
    at scala.reflect.macros.contexts.Typers$$anonfun$3.apply(Typers.scala:24)
    at scala.reflect.macros.contexts.Typers$$anonfun$withContext$1$1.apply(Typers.scala:25)
    at scala.reflect.macros.contexts.Typers$$anonfun$withContext$1$1.apply(Typers.scala:25)
    at scala.reflect.macros.contexts.Typers$$anonfun$1.apply(Typers.scala:23)
    at scala.reflect.macros.contexts.Typers$$anonfun$1.apply(Typers.scala:23)
    at scala.reflect.macros.contexts.Typers$class.withContext$1(Typers.scala:25)
    at scala.reflect.macros.contexts.Typers$$anonfun$typecheck$2.apply(Typers.scala:28)
    at scala.reflect.macros.contexts.Typers$$anonfun$typecheck$2.apply(Typers.scala:28)
    at scala.reflect.internal.Trees$class.wrappingIntoTerm(Trees.scala:1716)
    at scala.reflect.internal.SymbolTable.wrappingIntoTerm(SymbolTable.scala:16)
    at scala.reflect.macros.contexts.Typers$class.withWrapping$1(Typers.scala:26)
    at scala.reflect.macros.contexts.Typers$class.typecheck(Typers.scala:28)
    at scala.reflect.macros.contexts.Context.typecheck(Context.scala:6)
    at scala.reflect.macros.contexts.Context.typecheck(Context.scala:6)
    at scala.reflect.macros.Typers$class.typeCheck(Typers.scala:58)
    at scala.reflect.macros.contexts.Context.typeCheck(Context.scala:6)
    at org.scalactic.MacroOwnerRepair.repairOwners(MacroOwnerRepair.scala:35)
    at org.scalactic.BooleanMacro.genMacro(BooleanMacro.scala:814)
    at org.scalatest.AssertionsMacro$.assert(AssertionsMacro.scala:34)

Assertions.assert(2 == 2)

The same code works fine in the standard Scala console (2.11.6) or the original ammonite-repl (but it also fails on the last working revision of your Ammonite fork, jupyter-scala/ammonium@95a6226).

Any ideas?

Issue with scala211 and Hydrogen

Hi,

First, thanks for this package. It looks very cool and it works well using the ipython repl.

However, it's not working under Hydrogen (an Atom editor plugin). See nteract/hydrogen#121 (comment) and following comments for more information, and I'm happy to post data from my installation here as well. I'm just not sure what you need, or even where the issue is.

In short: the kernel starts, but then just hangs (no output; the execution spinner keeps spinning).

It'd be great to be able to use scala with Hydrogen!

Pass CLASSPATH variable through to scala interpreter

The scala REPL can pull in JAR files from the CLASSPATH like so:

$ CLASSPATH=/foo/bar/baz.jar scala

However, trying the same with jupyter-console doesn't seem to work:

$  CLASSPATH=/foo/bar/baz.jar jupyter-console --kernel=scala211

I know about load.jar("/foo/bar/baz.jar"), but it's much easier to use the environment variable than to have to load each jar individually.

Add jars via classpath

The new README does not state how to add local jars. In the current version, load.jar() doesn't work anymore. Is it possible to load jars directly, or do they need to be published to a local Maven repo?

OK to re-run jupyter-scala install?

Hi, thanks for all your good work on this!
I am working with a Vagrant-based Jupyter (from the EdX Spark courses) and want to add Scala so I can use it against my RDDs from that course.
I followed your install instructions for jupyter-scala 2.11 (I realize it should have been 2.10 for Spark), and indeed it shows up as an option in Jupyter. But when I try to execute any Scala code, it just hangs indefinitely with an asterisk inside the square brackets. I actually installed Scala 2.11 after installing jupyter-scala, so I am wondering whether I need to re-install so it can find Scala. What do you suggest? (Scala itself works fine, I confirmed.) Thanks!

Unexpected behaviour of some scala statements in notebook vs REPL

The following toy example runs successfully in the standard REPL on an Ubuntu machine (Scala 2.11.7, Java 1.8.0_72):

@annotation.tailrec
def fives(st:Int, end:Int):Unit = {
    if (st <= end) {
    println(st)
    fives(st+5, end)
    }
}

However, when I run the same code in a Jupyter notebook I get an error message:

Main.scala:25: could not optimize @tailrec annotated method fives: it is neither private nor final so can be overridden
def fives(st:Int, end:Int):Unit = {

Is this normal and expected?
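A workaround commonly suggested for wrapper-class REPLs (hedged: I haven't verified it against this exact kernel version) is to make the method non-overridable, e.g. by putting it inside an object or marking it final:

```scala
// @tailrec requires a method that cannot be overridden; wrapping it in an
// object (or marking it `final`) satisfies that even when the notebook
// compiles cells into an enclosing class.
object Fives {
  @annotation.tailrec
  def fives(st: Int, end: Int): Unit =
    if (st <= end) {
      println(st)
      fives(st + 5, end)
    }
}

Fives.fives(0, 10)  // prints 0, 5, 10
```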

EDIT

One more strange result: the following runs in the Scala REPL (and even in the Ammonite REPL):

scala> def double(x:Int) = x*2
double: (x: Int)Int

scala> val myDouble: (Int) => Int=double
myDouble: Int => Int = <function1>

scala> myDouble(10)
res1: Int = 20

But when I try to run this snippet in the notebook I get:

Main.scala:25: missing arguments for method double in class $user;
follow this method with `_' if you want to treat it as a partially applied function
double 
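In Scala 2, an explicit eta-expansion with a trailing underscore usually sidesteps this (a hedged guess is that the notebook's cell wrapping changes where automatic eta-expansion applies; the workaround itself is plain Scala):

```scala
def double(x: Int) = x * 2

// `double _` explicitly converts the method into a function value,
// so the compiler never needs to eta-expand it implicitly
val myDouble: Int => Int = double _

myDouble(10)  // 20
```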

Error downloading notebook as scala

When trying to download the notebook as Scala, a new tab opens with:
500 : Internal Server Error
The error was:
nbconvert failed: FileExtension trait 'file_extension' does not begin with a dot: u'scala'

Typeclass to define how the output is displayed

The iScala kernel had a nice mechanism that allows customizing the output based on its type. Basically, if you want to be able to display a type A, you define an implicit instance HTMLDisplay[A] (with a low-priority default that does _.toString).

This way, one can define how to visualize images, Bokeh plots and so on, and everything is rendered accordingly.

I see that in jupyter-scala there are a few methods like display.html and so on. That is a little less convenient, since one always has to figure out the correct rendering method instead of relying on the compiler to summon the right typeclass.

Does a mechanism like the above exist for jupyter-scala?
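The mechanism described above can be sketched in plain Scala (HTMLDisplay here is a hypothetical name mirroring the IScala description, not an existing jupyter-scala API):

```scala
// Typeclass for rendering a value as HTML
trait HTMLDisplay[A] { def html(a: A): String }

object HTMLDisplay {
  // Low-priority default in the companion object: falls back to toString
  implicit def default[A]: HTMLDisplay[A] =
    new HTMLDisplay[A] { def html(a: A) = s"<pre>${a.toString}</pre>" }
}

// What a kernel could call on each cell result
def display[A](a: A)(implicit d: HTMLDisplay[A]): String = d.html(a)

case class Point(x: Int, y: Int)

// A user-defined instance in scope takes precedence over the companion default
implicit val pointDisplay: HTMLDisplay[Point] =
  new HTMLDisplay[Point] { def html(p: Point) = s"<b>(${p.x}, ${p.y})</b>" }

display(Point(1, 2))  // "<b>(1, 2)</b>"
display(42)           // "<pre>42</pre>"
```

Implicits in lexical scope win over the companion-object default, which is what gives the "low-priority fallback" behavior described above.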
