

Jupyter Client


jupyter_client contains the reference implementation of the Jupyter protocol. It also provides client and kernel management APIs for working with kernels.

In addition, it provides the jupyter kernelspec entrypoint for installing kernelspecs for use with Jupyter frontends.

Development Setup

The Jupyter Contributor Guides provide extensive information on contributing code or documentation to Jupyter projects. The limited instructions below for setting up a development environment are for your convenience.

Coding

You'll need Python and pip on the search path. Clone the Jupyter Client git repository to your computer, for example into /my/projects/jupyter_client:

cd /my/projects/
git clone git@github.com:jupyter/jupyter_client.git

Now create an editable install and fetch the dependencies for the code and test suite by executing:

cd /my/projects/jupyter_client/
pip install -e ".[test]"
pytest

The last command runs the test suite to verify the setup. During development, you can pass filenames to pytest, and it will execute only those tests.

Documentation

The Jupyter Client documentation is generated from the files in docs/ using Sphinx. Instructions for setting up Sphinx with a selection of optional modules are in the Documentation Guide. You'll also need the make command. For a minimal Sphinx installation to process the Jupyter Client docs, execute:

pip install ".[doc]"

The following commands build the documentation in HTML format and check for broken links:

cd /my/projects/jupyter_client/docs/
make html linkcheck

Point your browser to the following URL to access the generated documentation:

file:///my/projects/jupyter_client/docs/_build/html/index.html

Contributing

jupyter-client has adopted automatic code formatting, so you shouldn't need to worry too much about code style. As long as your code is valid, the pre-commit hook should take care of how it looks. You can invoke the pre-commit hook by hand at any time with:

pre-commit run

which should run any autoformatting on your code and tell you about any errors it couldn't fix automatically. You may also install black integration into your text editor to format code automatically.

If you have already committed files before setting up the pre-commit hook with pre-commit install, you can fix everything up using pre-commit run --all-files. You need to make the fixing commit yourself after that.

Some of the hooks only run on CI by default, but you can invoke them by running with the --hook-stage manual argument.

About the Jupyter Development Team

The Jupyter Development Team is the set of all contributors to the Jupyter project. This includes all of the Jupyter subprojects.

The core team that coordinates development on GitHub can be found here: https://github.com/jupyter/.

Our Copyright Policy

Jupyter uses a shared copyright model. Each contributor maintains copyright over their contributions to Jupyter. But, it is important to note that these contributions are typically only changes to the repositories. Thus, the Jupyter source code, in its entirety is not the copyright of any single person or institution. Instead, it is the collective copyright of the entire Jupyter Development Team. If individual contributors want to maintain a record of what changes/contributions they have specific copyright on, they should indicate their copyright in the commit message of the change, when they commit the change to one of the Jupyter repositories.

With this in mind, the following banner should be used in any source code file to indicate the copyright and license terms:

# Copyright (c) Jupyter Development Team.
# Distributed under the terms of the Modified BSD License.


jupyter_client's Issues

Doubt about how to trigger `history_request` requests

Hello,

As I commented in another issue, I'm developing a new Jupyter kernel for the PHP language.
Now I'm working on the history_request code, but I don't know how to trigger a history_request from the Jupyter Notebook web UI (and I'm not sure I've understood how this request should be answered).

Correct me if I'm wrong, please. When a history_request arrives at the kernel, the kernel has to respond with an array of previous commands meeting the specified restrictions.
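That understanding matches the messaging spec: a history_reply carries a status and a list of history entries. The sketch below is illustrative only (the name stored_history and the filtering are hypothetical, not part of jupyter_client); per the spec, each entry is a (session, line_number, input) tuple, or (session, line_number, (input, output)) when output=True was requested.

```python
# Hypothetical sketch of answering a history_request in a kernel.
# `stored_history` is an assumed in-kernel list of (session, line, input) tuples.

def history_reply(stored_history, hist_access_type="tail", n=10):
    """Build the content of a history_reply message."""
    if hist_access_type == "tail":
        # the n most recent entries
        entries = stored_history[-n:]
    else:
        # 'range' and 'search' access types would filter differently;
        # returning everything here keeps the sketch short
        entries = list(stored_history)
    return {"status": "ok", "history": entries}

stored = [(0, i, "cmd %d" % i) for i in range(20)]
reply = history_reply(stored, "tail", 3)
```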

Thank you for your time!

conda installs 64bit jupyter.exe on a 32bit machine

Hi guys

I've recently installed Jupyter via Anaconda on a Windows 7 32bit machine using "conda install jupyter" for python 2.7.

When trying to run Jupyter, I get an error saying that the version of Anaconda\Scripts\jupyter.exe is not compatible with my version of Windows...

"conda info" shows that it is set up for 32-bit: platform : win-32

Am I reporting at the right place? If not please point me in the right direction.

thanx
gordon

'source' field of 'display_data' is unclear

I'm writing a kernel, and the docs for the display_data message:

Message type: display_data:
content = {
    # Who created the data
    'source' : str,
    ...
}

are not at all clear about what source should be.

Experimenting with the jupyter notebook, it seems as if it doesn't even matter what value is used?
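That observation seems right: source is a free-form string identifying the creator of the data, and frontends largely ignore it. The dict below is an illustrative display_data content (the identifier string is a made-up example, not anything the spec mandates), alongside the data/metadata fields that actually drive rendering.

```python
# Illustrative display_data content. The 'source' value is a hypothetical
# identifier; frontends appear to ignore it, so any string works.
content = {
    "source": "my_kernel.display",
    "data": {
        # a MIME bundle: the frontend picks the richest type it supports
        "text/plain": "Hello",
        "text/html": "<b>Hello</b>",
    },
    "metadata": {},
}

def mimebundle_types(content):
    """Return the MIME types offered by a display_data message."""
    return sorted(content["data"])
```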

add `help_request` message

IDK if this is the right place for this, but here’s a feature request for the protocol.

my idea is to introduce a keyboard shortcut in the notebook which behaves similarly to the complete_request (also filed as jupyter/notebook#56)

the message would include code and cursor position, and expect a page payload as response.

i.e. you press F1 and a help pager pops up for the symbol under your cursor.

Replace calls to `ipconfig.exe` and the likes with actual API calls

Currently, IP addresses of the host are checked using OS utilities (ipconfig.exe, ifconfig, and the like).
While this currently works, it risks failing should those utilities change their output format (I believe Windows localization might already cause failures on some machines).

I suggest we change the code to use proper APIs.

This is a good starting point for the Windows implementation.
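As a minimal sketch of the parsing-free approach, the standard library can already enumerate addresses without shelling out. This is not the proposed Windows implementation (that would likely use GetAdaptersAddresses or a library such as psutil); it only shows that no output parsing is needed.

```python
# Minimal stdlib sketch: resolve the host's own addresses via the
# resolver API instead of parsing `ipconfig`/`ifconfig` output.
# A real fix would use a richer interface-enumeration API.
import socket

def local_ips():
    ips = {"127.0.0.1"}  # loopback is always present
    try:
        for info in socket.getaddrinfo(socket.gethostname(), None,
                                       socket.AF_INET):
            ips.add(info[4][0])  # AF_INET sockaddr is (host, port)
    except socket.gaierror:
        pass  # the hostname may not resolve in some environments
    return sorted(ips)
```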

complete_reply message spec changes: types, values, and signatures

Motivation

With introspection in the kernels, we can create completions that are even more informative than the best static analysis tools in compiled languages. autocomplete-plus in Atom with Facebook's ObjectiveC tools provides a good bar:

autocomplete-plus ObjC completion

These tools are important because they allow our users to see and understand exactly what is going on without guesswork. Instead of requiring the user to write some text and ask for an action to be performed on it, the interface should surface information when it is relevant. The user should never have to act blindly or search for the information they need.

To aid in thinking about this, I knocked together a (highly experimental) implementation with Hydrogen and IJulia, as those are the codebases I know best. This is what the result looks like:

screenshot 2015-07-13 17 49 00

screenshot 2015-07-13 17 55 15

screenshot 2015-07-13 17 56 07

Note the live values in the right column of the variable completions.

Spec changes

To create the user experience we want, we need to change the spec for the complete_reply message to include information about variable types, function signatures, and even the live values of variables.

Presently the complete_reply message takes this form:

content = {
    # The list of all matches to the completion request, such as
    # ['a.isalnum', 'a.isalpha'] for the above example.
    'matches' : list,

    # The range of text that should be replaced by the above matches when a completion is accepted.
    # typically cursor_end is the same as cursor_pos in the request.
    'cursor_start' : int,
    'cursor_end' : int,

    # Information that frontend plugins might use for extra display information about completions.
    'metadata' : dict,

    # status should be 'ok' unless an exception was raised during the request,
    # in which case it should be 'error', along with the usual error message content
    # in other messages.
    'status' : 'ok'
}

Here is my proposal for what the object containing a suggestion should look like:

# Each suggestion is an object of this form
{
    # This field is an enum which describes "what kind of thing" the completed
    # object is. It is useful to have an enum for this to display icons (as 
    # Atom does) or have different behavior for types. Atom uses the 
    # following types:
    # variable, constant, property, value, method, function, class, type,
    # keyword, tag, snippet, import, require
    # I do not feel strongly about the inclusion of this field.
    "suggestion_type": enum,

    # The raw text to insert. Same as the elements of 'matches' now.
    "text": str,

    # The (language-specific) type of the object, as you would get from 
    # 'typeof()' or a comparable function.
    "type": type,

    # If the object being completed has a useful or meaningful value,
    # include it here as it would be given by 'string(x)' or comparable.
    # Things like functions or modules, whose 'string()' looks like 
    # 'function@0xab328', probably shouldn't include this.
    "value": str,

    # If this is a function-like object which may take arguments, this field 
    # should be populated.
    # This is a list to support languages (like Julia) that have multiple 
    # dispatch or via some implementation several signatures for a single 
    # function.
    "signatures": [
        {
            # If this language has known return types, this field should be 
            # populated with the (language-specific) return type of the 
            # function.
            "return_type": type,

            # This should be populated with an ordered list of the function's
            # argument's names and, if available, their (language-specific)
            # types.
            "arguments": [
                {
                    "name": str,
                    "type": type
                },
                {
                    "name": str,
                    "type": type
                }
            ]
        }
    ]
}

In action (Julia) this implementation looks like this for a function:

{
    "suggestion_type": "function",
    "text": "typed_completions",
    "type": "Function",
    "signatures": [
        {
            # a signature with two args
            "arguments": [
                {
                    "name": "text",
                    "type": "String"
                },
                {
                    "name": "pos",
                    "type": "Integer"
                }
            ]
        },
        {
            # and a signature with none
            "arguments": []
        }
    ]
}

and this for an object:

{
    "suggestion_type": "variable",
    "value": "70",
    "text": "seventy_var",
    "type": "Int64"
}

Placement

In my opinion the most "correct" version of the spec including this rich suggestion object would replace the existing matches list of strings with a list of suggestion objects. This would be a backwards-incompatible change, and as such seems more difficult.

We could put these objects in the metadata field (described as "Information that frontend plugins might use for extra display information about completions") and key them on the strings from the matches field. e.g.

content = {
    # these strings are the keys in the metadata dict
    'matches' : ["sweet_func", "seventy_var"],

    'cursor_start' : 2,
    'cursor_end' : 2,

    'metadata' : {
        # these are the strings from 'matches'
        "sweet_func": #<suggestion object of scheme above>,
        "seventy_var": #<suggestion object of scheme above>
    },

    'status' : 'ok'
}

This would be a backwards-compatible change, and as such seems like probably the more reasonable option.

Feedback

Please comment with thoughts, corrections, improvements, etc.

[Meta Issue] Message Spec changes proposal v6.0

This serves as a meta issue to accumulate message spec change proposals for the major version bump, and perhaps to document minor changes that appear as "new" when bumping support from 5.0 to 6.0.

Feel free to edit this top message.

  • Remove traceback redundancies
  • Structured tracebacks
    • Discussed in person with Thomas and Fernando
  • List Comms

Add documentation on how to implement display machinery in a new kernel

In the R kernel, there are currently quite a few discussions on how to implement the strategies of the current IPython display machinery (or rather, the discussions are about what needs to be implemented to have the same feature set as the Python one).

  • how to "format" returned objects
  • how to handle plots
  • what kind of options should be available (disable the formatter; convenience options for plots)
  • convenience function to send objects over the display system

Maybe this could be added as a "high-level" doc in this package, in case someone else wants to add such machinery to a new kernel.

My current understanding of this machinery is in this comment: IRkernel/IRdisplay#3 (comment)

no source directory specified - ipython kernelspec

What is the source directory, and why is it not listed in the help options?

$ ipython kernelspec install --help
Install a kernel specification directory.

Options
-------

Arguments that take values are actually convenience aliases to full
Configurables, whose aliases are listed on the help line. For more information
on full configurables, see '--help-all'.

--debug
    set log level to logging.DEBUG (maximize logging output)
--user
    Install to the per-user kernel registry
--replace
    Replace any existing kernel spec with this name.
--prefix=<Unicode> (InstallKernelSpec.prefix)
    Default: ''
    Specify a prefix to install to, e.g. an env. The kernelspec will be
    installed in PREFIX/share/jupyter/kernels/
--config=<Unicode> (JupyterApp.config_file)
    Default: u''
    Full path of a config file.
--name=<Unicode> (InstallKernelSpec.kernel_name)
    Default: ''
    Install the kernel spec with this name
--log-level=<Enum> (Application.log_level)
    Default: 30
    Choices: (0, 10, 20, 30, 40, 50, 'DEBUG', 'INFO', 'WARN', 'ERROR', 'CRITICAL')
    Set the log level by value or name.

To see all available configurables, use `--help-all`


dta@DESKTOP-K8M8TNI C:\Users\dta
$ ipython kernelspec install --config="C:\Users\dta\.ipython\profile_icsharp\ipython_notebook_config.py"
No source directory specified.

kernelspec kernel.json from deleted virtualenv will be used leading to kernel error

Steps to reproduce

  1. Install jupyter in a python virtualenv
  2. start jupyter notebook and create a new python notebook
  3. stop jupyter and delete the virtualenv
  4. create a new virtualenv with a different name to 1. and install jupyter
  5. start jupyter and create a python notebook or open the one from 2.
  6. "Kernel Error" appears in notebook

Notes

I think this is probably WIP but I can't find any documentation on how it's supposed to work, closest I can find is https://ipython.org/ipython-doc/dev/development/kernels.html#kernelspecs but it's unclear if in the future we should manually configure and manage kernelspecs or some part of jupyter will do this for us. I would be happy to help fix this if the expected behaviour was specified.

Example kernel.json

{
 "language": "python",
 "display_name": "Python 3",
 "argv": [
  "C:\\Users\\tsimpson\\Envs\\jupyter\\Scripts\\python.exe",
  "-m",
  "ipykernel",
  "-f",
  "{connection_file}"
 ]
}

Workaround

You can fix this by manually deleting the kernel config directory containing the offending kernel.json; jupyter will then regenerate one using the current virtualenv.

System info

{'commit_hash': 'b754d5e',
 'commit_source': 'repository',
 'default_encoding': 'cp850',
 'ipython_path': 'c:\\users\\tsimpson\\envs\\jupyter\\src\\ipython\\IPython',
 'ipython_version': '4.0.0-dev',
 'os_name': 'nt',
 'platform': 'Windows-8-6.2.9200',
 'sys_executable': 'C:\\Users\\tsimpson\\Envs\\jupyter\\Scripts\\python.exe',
 'sys_platform': 'win32',
 'sys_version': '3.4.1 (v3.4.1:c0e311e010fc, May 18 2014, 10:45:13) [MSC '
                'v.1600 64 bit (AMD64)]'}

List kernelspec in connection file

For any frontend that uses a connection file on the side (only sidecar, to my knowledge), it would be helpful if the connection file listed the name of the kernelspec associated with the running kernel (as the /api/kernels endpoint does).

Otherwise, a developer has to make a kernel_info_request (even if they didn't need the SHELL socket) or do some tricky file descriptor -> pid nonsense.

message.header.date should specify a timezone

This came up from ipython-contrib/jupyter_contrib_nbextensions#549, where the lack of timezone causes problems for any javascript trying to use the date value. I'm not super sure where the date field is added to the header, but assume it's either in jupyter_client/adapter.py#L387-L388 or in jupyter_client/jsonutil.py#L75-L80 - so apologies if this issue should actually be somewhere else.

Could the above calls be modified to use something like

from datetime import datetime
import pytz
# utcnow() yields a naive UTC timestamp; localize() then attaches the UTC tzinfo
pytz.utc.localize(datetime.utcnow()).isoformat()

Cannot remove "native" Python 2 kernel

When you look at the help for jupyter kernelspec remove (jupyter kernelspec remove -h), it gives you this example command:

jupyter kernelspec remove python2

So this leads me to believe you can delete the python2 kernel, but in fact, if I try to, it tells me the kernel doesn't exist:

[Anaconda2] C:\Users\ristew\PycharmProjects\ipywktest>jupyter kernelspec remove python2
Couldn't find kernel spec(s): python2

This is in spite of the fact that the python2 kernel is clearly installed:

[Anaconda2] C:\Users\ristew\PycharmProjects\ipywktest>jupyter kernelspec list
Available kernels:
  python2          C:\Users\ristew\AppData\Local\Continuum\Anaconda2\lib\site-packages\ipykernel\resources
  ipywktest        C:\ProgramData\jupyter\kernels\ipywktest
  pysparkkernel    C:\ProgramData\jupyter\kernels\pysparkkernel
  sparkkernel      C:\ProgramData\jupyter\kernels\sparkkernel

I had a look at the code and it seems like this is happening because of this line in jupyter_client.kernelspecapp.RemoveKernelSpec:

self.kernel_spec_manager.ensure_native_kernel = False

This line has the effect of "ignoring" the native kernel, as we see in jupyter_client.kernelspec.KernelSpecManager:

    if self.ensure_native_kernel and NATIVE_KERNEL_NAME not in d:
        try:
            from ipykernel.kernelspec import RESOURCES
            self.log.debug("Native kernel (%s) available from %s",
                           NATIVE_KERNEL_NAME, RESOURCES)
            d[NATIVE_KERNEL_NAME] = RESOURCES

So it's impossible to delete the python2 kernel. I've tried this on both my Windows machine and a Linux machine, to no avail.

A couple questions:

  1. Is it possible to work around this by e.g. deleting the folder where the kernel is located?
  2. Is this the intended behavior or is it a bug? If it's the intended behavior, can the documentation be updated to reflect this?

wrapper kernel banner missing

The docs on 'Making simple Python wrapper kernels' state, "The ‘banner’ is displayed to the user in console UIs before the first prompt", but it doesn't seem to be true. I've set banner and installed my kernel but the output I see is:

$> jupyter console --kernel=mathics
Jupyter Console 4.2.0.dev
()

In [1]: 

I can hide the first line by giving arguments to jupyter console:

$> jupyter console --kernel=mathics --ZMQTerminalInteractiveShell.banner1="my banner"
my banner()

In [1]: 

but MyKernel.banner still isn't being used anywhere.

The () output is also strange. I'd like to remove it too if possible.

I opened the issue here since it's related to the docs but maybe it should be somewhere else?

simplify get/find_kernel_specs.

Listing all kernelspecs ($ jupyter kernelspec list) makes N+1 calls to find_kernel_specs (one for each kernel found).

This could be greatly simplified with a "find_all_specs".

Not important, but simple for a beginner, so marking it as sprint-friendly.
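As a rough sketch of what such a helper might look like (the name find_all_specs and its return shape are this proposal's suggestion, not an existing jupyter_client API): walk each kernel directory once and load every kernel.json in a single pass.

```python
# Hypothetical single-pass `find_all_specs`: one directory walk instead
# of one lookup call per kernel.
import json
import os

def find_all_specs(kernel_dirs):
    """Map kernel name -> {'resource_dir': ..., 'spec': parsed kernel.json}."""
    specs = {}
    for base in kernel_dirs:
        if not os.path.isdir(base):
            continue
        for name in os.listdir(base):
            spec_file = os.path.join(base, name, "kernel.json")
            if os.path.isfile(spec_file):
                with open(spec_file) as f:
                    specs[name.lower()] = {
                        "resource_dir": os.path.join(base, name),
                        "spec": json.load(f),
                    }
    return specs
```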

Provide JSON output for kernelspec lists

Similar to jupyter --paths --json from jupyter_core, it would be great to get the full set of kernelspecs using:

jupyter kernelspecs list --json

or something similar. This is for the use case of alternative frontends, especially those written in other languages. The driving use case is those building Atom packages or Electron plugins which is JavaScript all the way down.

/cc @willwhitney @Karissa

How to get output from kernel?

I am trying to embed Jupyter in my code, but I don't know how to get output from it. Here is a snippet of the code I am using.

from jupyter_client import MultiKernelManager
from jupyter_client import find_connection_file, BlockingKernelClient

foo = MultiKernelManager()
kernel = foo.start_kernel()

cf = find_connection_file(kernel)
km = BlockingKernelClient(connection_file=cf)

km.load_connection_file()
km.start_channels()
km.execute('a=1024;print a')
# Here I want to get output from kernel
# According to docs I have assumed that
# I can get it from iopub_channel 
# But how? 
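With a live kernel, the usual approach is to poll the iopub channel (for example via kc.get_iopub_msg(timeout=...)) until the kernel reports execution_state 'idle', collecting stream and execute_result messages along the way. Since that needs a running kernel, the sketch below shows only the message-filtering logic on plain dicts shaped like iopub messages.

```python
# Sketch of extracting output from iopub messages. With a real client
# you would feed this the messages returned by kc.get_iopub_msg(...).

def collect_output(messages):
    """Gather stdout text and rich-result text from iopub messages."""
    out = []
    for msg in messages:
        mtype = msg["msg_type"]
        content = msg["content"]
        if mtype == "stream" and content.get("name") == "stdout":
            out.append(content["text"])
        elif mtype in ("execute_result", "display_data"):
            out.append(content["data"].get("text/plain", ""))
    return "".join(out)

# canned messages standing in for what the kernel would send
msgs = [
    {"msg_type": "status", "content": {"execution_state": "busy"}},
    {"msg_type": "stream", "content": {"name": "stdout", "text": "1024\n"}},
    {"msg_type": "status", "content": {"execution_state": "idle"}},
]
```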

subprocesses of kernels are not killed by restart

Restarting a kernel only kills the direct child, not any subprocesses opened by that child.

This is a problem if the kernel spawns multiple child processes, like the R kernel (on Windows: R - cmd - R - cmd - Rterm), or if the Python kernel opens a subprocess which sleeps.

Issue on the irkernel repo: IRkernel/IRkernel#226

IMO the kernel manager should kill the complete process group, not only the process it directly spawned.

I use restart kernel when I want back the resources I lost in the kernel (due to the stupid commands I executed), and currently I have four processes running (cmd - R - cmd - Rterm) where I killed the kernel 15 minutes ago (and it's still at 25% CPU and 1 GB RAM). So if I have to wait for the actual worker process to finish, the restart command is kind of useless :-(

Reproducible example (at least on Windows): put it into a cell, execute it, and then restart the kernel via the notebook UI.

import platform
import subprocess
if platform.system() == 'Windows':
    CREATE_NO_WINDOW = 0x08000000
    subprocess_creation_flags = CREATE_NO_WINDOW
else:
    # Apparently None won't work here
    subprocess_creation_flags = 0
exec_line = u"""python -c "import time;time.sleep( 50000 )" """
p = subprocess.Popen(exec_line, shell=False,
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                     creationflags=subprocess_creation_flags)
p.communicate()
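A POSIX-only sketch of the requested behavior, assuming the manager may start the child in its own session: signaling the process group kills grandchildren too. On Windows a job object (or taskkill /T) would be needed instead; this is an illustration of the idea, not the kernel manager's actual code.

```python
# POSIX-only sketch: launch a child as its own process-group leader,
# then signal the whole group so its subprocesses die with it.
import os
import signal
import subprocess
import sys

# stand-in for a kernel that might spawn long-lived subprocesses
child = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(100)"],
    start_new_session=True,  # child becomes its own process-group leader
)

# kill the entire group, not just the direct child
os.killpg(os.getpgid(child.pid), signal.SIGTERM)
rc = child.wait()  # negative return code indicates death by signal
```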

jupyter kernel launch

Similar to the ipython kernel launcher which would spawn a kernel and create a connection file, it would be great to have this in a way that returns some JSON. Example:

$ jupyter kernel --kernel-spec python2 --json
{"connection_file": "/Users/rgbkrk/Library/Jupyter/runtime/kernel-abcd.json"}

There's probably a better interface there, but that's the rough idea.

/cc @Karissa @willwhitney

Inscrutable error with bad kernelspec file

Having updated my Python installation, I had old and invalid kernelspec files lying around which prevented the notebook from starting a kernel. The error message thrown was OSError: [WinError 193] %1 is not a valid Win32 application, which is pretty uninformative. To debug, I had to manually patch launcher.py to show the cmd which failed to run.

[E 08:00:07.343 NotebookApp] Unhandled error in API request
    Traceback (most recent call last):
      File "C:\bin\Anaconda3\lib\site-packages\jupyter_client\launcher.py", line 109, in launch_kernel
        proc = Popen(cmd, **kwargs)
      File "C:\bin\Anaconda3\lib\subprocess.py", line 950, in __init__
        restore_signals, start_new_session)
      File "C:\bin\Anaconda3\lib\subprocess.py", line 1220, in _execute_child
        startupinfo)
    OSError: [WinError 193] %1 is not a valid Win32 application

gevent compatibility with jupyter client

I'm running multiple jupyter clients inside of a gevent server, and I would like clients to relinquish control when they perform blocking communications with the kernel. Is there a recommended way for doing this?

If I had control over the code, I could replace import zmq with import zmq.green as zmq in the manager and client. However, as a consumer of the library, I don't see any obvious ways of doing this aside from monkey-patching zmq.

Duplicate function `find_connection_file` with similar API to one in `ipykernel`

From Gitter October 21, 2015 5:11 PM:

So, I was using find_connection_file from jupyter_client.connect. It turns out there is another implementation under ipykernel.connect, which does work. Is there a reason for two different implementations of this function?

Seems like there are two copies of the same function, but they act differently.

If this is intentional, it may be worth finding a different name to highlight the differences and deprecate the old one. If they are meant to behave more similarly, then a bug fix is in order and some discussion should occur about which one to deprecate, if possible, and why.

Related: ipython/ipykernel#103

modified connection file not created with --existing --ssh

When I use ipython console --existing kernel-9072.json --ssh server, it works and informs me that

[ZMQTerminalIPythonApp] To connect another client via this tunnel, use:
[ZMQTerminalIPythonApp] --existing kernel-9072-ssh.json

However, that modified connection file is apparently never created. I think the reason is that with --existing a kernel manager is not created, so the code that is responsible for writing the new connection file is never run.
Also, the basename usage in the code responsible for creating the name of the new connection file seems strange, perhaps one of them was supposed to be dirname?

$ python -c "import IPython; print(IPython.sys_info())" 
{'commit_hash': u'f534027',
 'commit_source': 'installation',
 'default_encoding': 'UTF-8',
 'ipython_path': '/home/ondrej/anaconda/lib/python2.7/site-packages/IPython',
 'ipython_version': '4.0.0',
 'os_name': 'posix',
 'platform': 'Linux-3.16.0-4-amd64-x86_64-with-debian-8.2',
 'sys_executable': '/home/ondrej/anaconda/bin/python',
 'sys_platform': 'linux2',
 'sys_version': '2.7.10 |Anaconda 2.3.0 (64-bit)| (default, Oct 19 2015, 18:04:42) \n[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)]'}

Dual error messages cause confusion

When there's an error, the kernel is supposed to send both an error message on iopub, and an execute_reply with largely the same contents on the shell channel. This feels redundant, and it's easy to miss one out without causing enough problems to notice - issue takluyver/bash_kernel#44 is because bash_kernel was only sending the error data in an execute_reply message, and that had gone unnoticed until now.

When we get round to making the next major version of the message protocol, I'd like to simplify this to make it more obvious. I might remove the error data from execute_reply, so it just contains 'status': 'error', and the error message has to be used for the details of what went wrong.

Getting variables out from kernel.

How can I get variables from the kernel? In the following example, the kernel executes this Python code:

a=10
b=a*a

How can I ask the kernel to give me the variables defined in its scope, in my case here b? I tried to read messages from the iopub, shell, and stdin channels, but there is no sign of them.

In [1]: from jupyter_client import BlockingKernelClient, find_connection_file

In [2]: from jupyter_client import MultiKernelManager

In [3]: km = MultiKernelManager()

In [4]: kernel_id = km.start_kernel(kernel_name='python')

In [5]: cf = find_connection_file(kernel_id)

In [6]: kc = BlockingKernelClient(connection_file=cf)

In [7]: kc.load_connection_file()

In [8]: kc.start_channels()

In [9]: code = '\n'.join(['a=10','b=a*a'])

In [10]: code
Out[10]: 'a=10\nb=a*a'

In [11]: print code
a=10
b=a*a

In [12]: kc.execute(code)
Out[12]: 'c1dd90d1-88fb-4703-a85d-4a86dd4d355f'

In [13]: kc.iopub_channel.get_msgs()
Out[13]: 
[{'buffers': [],
  'content': {'execution_state': 'busy'},
  'header': {'date': datetime.datetime(2016, 4, 5, 19, 50, 12, 246710),
   'msg_id': '8304c140-bc89-40a4-8c53-7138a453836b',
   'msg_type': 'status',
   'session': 'e4c36888-f6d3-4146-9f5a-0479ef0c726e',
   'username': 'eddie7',
   'version': '5.0'},
  'metadata': {},
  'msg_id': '8304c140-bc89-40a4-8c53-7138a453836b',
  'msg_type': 'status',
  'parent_header': {'date': datetime.datetime(2016, 4, 5, 19, 50, 12, 244796),
   'msg_id': 'ecb9eac2-2b25-47e4-b58f-21fd790a379b',
   'msg_type': 'kernel_info_request',
   'session': '1af85535-0fac-4d98-8bc3-7dbf6eaa254c',
   'username': 'eddie7',
   'version': '5.0'}},
 {'buffers': [],
  'content': {'execution_state': 'idle'},
  'header': {'date': datetime.datetime(2016, 4, 5, 19, 50, 12, 248522),
   'msg_id': '735b536d-525c-4965-b06d-12396fe919e1',
   'msg_type': 'status',
   'session': 'e4c36888-f6d3-4146-9f5a-0479ef0c726e',
   'username': 'eddie7',
   'version': '5.0'},
  'metadata': {},
  'msg_id': '735b536d-525c-4965-b06d-12396fe919e1',
  'msg_type': 'status',
  'parent_header': {'date': datetime.datetime(2016, 4, 5, 19, 50, 12, 244796),
   'msg_id': 'ecb9eac2-2b25-47e4-b58f-21fd790a379b',
   'msg_type': 'kernel_info_request',
   'session': '1af85535-0fac-4d98-8bc3-7dbf6eaa254c',
   'username': 'eddie7',
   'version': '5.0'}},
 {'buffers': [],
  'content': {'execution_state': 'busy'},
  'header': {'date': datetime.datetime(2016, 4, 5, 19, 51, 14, 701661),
   'msg_id': '633bfded-cb2d-41fd-a514-a3e30abc7eda',
   'msg_type': 'status',
   'session': 'e4c36888-f6d3-4146-9f5a-0479ef0c726e',
   'username': 'eddie7',
   'version': '5.0'},
  'metadata': {},
  'msg_id': '633bfded-cb2d-41fd-a514-a3e30abc7eda',
  'msg_type': 'status',
  'parent_header': {'date': datetime.datetime(2016, 4, 5, 19, 51, 14, 700842),
   'msg_id': 'c1dd90d1-88fb-4703-a85d-4a86dd4d355f',
   'msg_type': 'execute_request',
   'session': '1af85535-0fac-4d98-8bc3-7dbf6eaa254c',
   'username': 'eddie7',
   'version': '5.0'}},
 {'buffers': [],
  'content': {'code': 'a=10\nb=a*a', 'execution_count': 1},
  'header': {'date': datetime.datetime(2016, 4, 5, 19, 51, 14, 702141),
   'msg_id': '8ce98198-16be-4e0e-a605-c6c4f05a70b5',
   'msg_type': 'execute_input',
   'session': 'e4c36888-f6d3-4146-9f5a-0479ef0c726e',
   'username': 'eddie7',
   'version': '5.0'},
  'metadata': {},
  'msg_id': '8ce98198-16be-4e0e-a605-c6c4f05a70b5',
  'msg_type': 'execute_input',
  'parent_header': {'date': datetime.datetime(2016, 4, 5, 19, 51, 14, 700842),
   'msg_id': 'c1dd90d1-88fb-4703-a85d-4a86dd4d355f',
   'msg_type': 'execute_request',
   'session': '1af85535-0fac-4d98-8bc3-7dbf6eaa254c',
   'username': 'eddie7',
   'version': '5.0'}},
 {'buffers': [],
  'content': {'execution_state': 'idle'},
  'header': {'date': datetime.datetime(2016, 4, 5, 19, 51, 14, 706094),
   'msg_id': '282c4a80-be37-452a-8dd7-1c718e52aaac',
   'msg_type': 'status',
   'session': 'e4c36888-f6d3-4146-9f5a-0479ef0c726e',
   'username': 'eddie7',
   'version': '5.0'},
  'metadata': {},
  'msg_id': '282c4a80-be37-452a-8dd7-1c718e52aaac',
  'msg_type': 'status',
  'parent_header': {'date': datetime.datetime(2016, 4, 5, 19, 51, 14, 700842),
   'msg_id': 'c1dd90d1-88fb-4703-a85d-4a86dd4d355f',
   'msg_type': 'execute_request',
   'session': '1af85535-0fac-4d98-8bc3-7dbf6eaa254c',
   'username': 'eddie7',
   'version': '5.0'}}]

KernelSpecManager seems not to be fully Configurable

I am working on a "conda kernels" project that overrides the KernelSpecManager. When I run the notebook like so:

ipython notebook --NotebookApp.kernel_spec_manager_class=conda_kernels.CondaKernelSpecManager

a flurry of print statements has convinced me that my custom subclass (CondaKernelSpecManager) is being called most of the time but the built-in KernelSpecManager is still being called in at least one case (specifically, here). I suspect that this line should have been changed as part of ipython/ipython#6969.

Does that seem plausible?

I'm developing this on IPython 3.2 while the dust settles from the big split. If you think things would go easier using jupyter_client, I'll switch over. Thanks.

Add a way to run a command before starting the kernel itself

[From the Gitter chat, Feb. 16, 22:46]
Use case: an environment needs to be activated to get the path set correctly. This can be tricky in some cases: e.g. conda lets you add batch files which are run on activate and can manipulate the path (that's at least my understanding, and I would like to try that for a MiKTeX package, so that MiKTeX is on the path when the env is activated).

So in this case, it would be nice if the actual kernel command which is run were activate env & python ..... This can be achieved in two ways, both of which need changes in jupyter_client: either a way to get the shell=True parameter into the Popen call, or handling of env export (plus a flag in the kernel spec to trigger it).

A third option is using a batch file as the kernel startup command in the kernel spec. The batch file would first activate the env and then call Python with the kernel startup line and the command-line parameters as given in the batch call. Unfortunately, this leads to #104, and it is not obvious how to handle that, because it needs a writable place where the batch file is created :-/

I would like to get advice on what would be the best way forward here :-)

Doubt about how to get rid of "Timeout waiting for kernel_info reply" error messages

Hello,

I'm developing a PHP kernel. I'm handling the "kernel_info_request" messages on the shell socket by responding with a "kernel_info_reply" on the same socket, but the Jupyter notebook reports on the console that my kernel didn't send anything. I've added a lot of logging to my code base, and I'm pretty sure that the reply is being sent...

Anyone have any idea about what might be happening? I've spent two weeks trying to solve this issue without finding any reliable solution.

The PHP kernel is hosted in github: https://github.com/litipk/jupyter-php .

The kernel.json file that I'm using is something like:

{
  "argv": [
    "php",
    "/home/castarco/.jupyter-php/pkgs/src/kernel.php",
    "{connection_file}"
  ],
  "display_name": "PHP",
  "language": "php",
  "env": {}
}

Thank you for your time.

KernelManager.autorestart = False does not seem to work

@ghazpar opened ipython/ipython#9038

Hi,

I don't want the notebook server to automatically restart my kernels when they die. How can I achieve this? The config variable KernelManager.autorestart should fill my need, but does not seem to have any effect. BTW, the documentation says that the default value for this variable is False, but when I kill a kernel, it is always restarted automatically by the notebook server. I've tried to set the variable explicitly to False, but the behavior remains the same: my kernels get restarted automatically as soon as I kill them. Is this normal?

I need to be able to kill older kernels in order to free up memory. Using JupyterHub, I run many notebook servers for many users on the same machine, but my users often don't cleanup their kernels themselves. Any suggestions?

Many thanks in advance.

BTW, I run ipython 4.0.0.

MultiKernelManager/BlockingKernelClient hangs if the kernel dies during startup

In IRkernel/IRkernel#93 I had the issue that the R kernel was closing due to a bug in ipython 3.0. Unfortunately, the MultiKernelManager didn't register that:

My code which starts the kernel is equivalent to:

import sys
def step(msg):
    print(msg)
    sys.stdout.flush()
from IPython.kernel.multikernelmanager import MultiKernelManager
step("start")
kernel_name = "broken"
step(2)
_km = MultiKernelManager()
step(3)
kernelid = _km.start_kernel(kernel_name=kernel_name)
step(4)
#km.list_kernel_ids()
kn = _km.get_kernel(kernelid)
step(5)
kc = kn.client()
step(6)
# now initialize the channels
kc.start_channels()
step(7)
kc.wait_for_ready()
step(8)

This prints up to "7" and then hangs without finishing (and is not interruptible)...

"broken" is a kernel like this:

{"argv": ["R","-e","stopifnot(F)","--args","{connection_file}"],
 "display_name":"broken"
}

The result of such a kernel is basically this:

[ipython] c:\data\external\knitpy>r -e "stopifnot(F)"

R version 3.2.0 (2015-04-16) -- "Full of Ingredients"
Copyright (C) 2015 The R Foundation for Statistical Computing
Platform: x86_64-w64-mingw32/x64 (64-bit)
[...]
> stopifnot(F)
Fehler: F ist nicht TRUE
Ausführung angehalten

[ipython] c:\data\external\knitpy>echo %ERRORLEVEL%
1

wait_for_ready() (code) takes no timeout argument; but in any case, IMO a kernel whose command line crashes should be detected anyway, since e.g. the qtconsole goes into a kernel-restart loop with the broken kernel :-(

make the documentation easier to find

As I mentioned in ipython/ipython#8571, going to the github page of jupyter_client is not a super obvious place to look for the core Jupyter protocol documentation.

Googling any of "jupyter protocol" or "messaging in jupyter" (gives "Making kernels for Jupyter" as 2nd link) or "jupyter documentation" don't find the docs, nor is it linked to from https://ipython.org/ipython-doc/dev/development/messaging.html ... nor are there any obvious links from http://jupyter.org/

Suggestions:

More specifications for raw data publication needed?

I'm currently thinking about how to get a string of code evaluated in a kernel and get the raw data back to the frontend. In my case that's knitpy, a clone of knitr: knitr supports evaluating code-chunk options (code chunks ~= cells in the notebook) in the context of the already evaluated code chunks, and in knitpy the code chunks run in a kernel.

So my current thinking is to send the equivalent of

ret = eval({code}) # code is formatted with the right code string...
publish_data({"ret":ret})

to the kernel and then look at the data_pub message. But I want to support R and other kernels too, and I just found out that the Python implementation of publish_data uses pickle, which AFAIK is not available in R.

So how would one implement the equivalent of publish_data in R (or other kernels), and how would one handle it reliably on the frontend side (e.g. do I have to keep track of what format the kernel sends? What would I do if the kernel switches its implementation?)?

The current message spec is rather vague on that part: https://ipython.org/ipython-doc/dev/development/messaging.html#raw-data-publication

CC: @flying-sheep as he is currently implementing display machinery in the R kernel

Better error message for NoSuchKernel

Trying to run nbconvert on a notebook, and I get

  File "<snip>/site-packages/jupyter_client/kernelspec.py", line 143, in get_kernel_spec
    raise NoSuchKernel(kernel_name)
jupyter_client.kernelspec.NoSuchKernel

It'd be helpful if the message included the kernel it's looking for and the set of known kernels.

Sidenote, does nbconvert have a 4.2 release? I'm trying to pass the kernel_name argument here, but I see that I'm on 4.1 (off PyPI) and that argument was added in 4.2.

An example

In [7]: from jupyter_client import KernelManager

In [8]: manager = KernelManager()

In [9]: manager.kernel_spec_manager.get_kernel_spec('foo')
---------------------------------------------------------------------------
NoSuchKernel                              Traceback (most recent call last)
<ipython-input-9-ff3677045dc1> in <module>()
----> 1 manager.kernel_spec_manager.get_kernel_spec('foo')

/Users/tom.augspurger/Envs/py27/lib/python2.7/site-packages/jupyter_client/kernelspec.pyc in get_kernel_spec(self, kernel_name)
    141             resource_dir = d[kernel_name.lower()]
    142         except KeyError:
--> 143             raise NoSuchKernel(kernel_name)
    144
    145         if kernel_name == NATIVE_KERNEL_NAME:

NoSuchKernel:

Doc discrepancy on iopub message topics

The convention used in the IPython kernel is to use the msg_type as the topic, and possibly extra information about the message, e.g. execute_result or stream.stdout

Looking at the code in ipykernel, it's getting set to e.g. kernel.{u-u-i-d}.execute_result, rather than just execute_result.

Kernel mysteriously dying with v4.2.1 but not with v4.1.1

I have been using the following code to export an IPython notebook to HTML for a while now:

report_config = Config({'ExecutePreprocessor': {'enabled': True,
                                                'timeout': 600},
                        'HTMLExporter': {'default_template': 'basic',
                                         'template_path': [template_path],
                                         'template_file': 'report.tpl'}})

exportHtml = HTMLExporter(config=report_config)
output, resources = exportHtml.from_filename(notebook_file)
open(html_file, mode='w',encoding='utf-8').write(output)

where report.tpl is a custom template. This code has worked fine until now but it breaks with jupyter_client v4.2.1 with the following error:

  File "/scratch/nmadnani/rsmtool/lib/python3.4/site-packages/nbconvert/exporters/exporter.py", line 165, in from_filename
    return self.from_notebook_node(nbformat.read(f, as_version=4), resources=resources, **kw)
  File "/scratch/nmadnani/rsmtool/lib/python3.4/site-packages/nbconvert/exporters/html.py", line 65, in from_notebook_node
    return super(HTMLExporter, self).from_notebook_node(nb, resources, **kw)
  File "/scratch/nmadnani/rsmtool/lib/python3.4/site-packages/nbconvert/exporters/templateexporter.py", line 196, in from_notebook_node
    nb_copy, resources = super(TemplateExporter, self).from_notebook_node(nb, resources, **kw)
  File "/scratch/nmadnani/rsmtool/lib/python3.4/site-packages/nbconvert/exporters/exporter.py", line 130, in from_notebook_node
    nb_copy, resources = self._preprocess(nb_copy, resources)
  File "/scratch/nmadnani/rsmtool/lib/python3.4/site-packages/nbconvert/exporters/exporter.py", line 302, in _preprocess
    nbc, resc = preprocessor(nbc, resc)
  File "/scratch/nmadnani/rsmtool/lib/python3.4/site-packages/nbconvert/preprocessors/base.py", line 47, in __call__
    return self.preprocess(nb,resources)
  File "/scratch/nmadnani/rsmtool/lib/python3.4/site-packages/nbconvert/preprocessors/execute.py", line 80, in preprocess
    cwd=path)
  File "/scratch/nmadnani/rsmtool/lib/python3.4/site-packages/jupyter_client/manager.py", line 433, in start_new_kernel
    kc.wait_for_ready(timeout=startup_timeout)
  File "/scratch/nmadnani/rsmtool/lib/python3.4/site-packages/jupyter_client/blocking/client.py", line 49, in wait_for_ready
    raise RuntimeError('Kernel died before replying to kernel_info')
RuntimeError: Kernel died before replying to kernel_info

However, as soon as I downgrade to v4.1.1 of jupyter_client in the same exact environment, the same code works just fine with no other modifications. It looks like this has something to do with the timeout addition in start_new_kernel that happened in 4.2?

Refresh docs to be more user and developer friendly

In preparation for 4.1, I am going to do a basic refresh of the docs.

  • Check Sphinx builds cleanly without warnings
  • Add captions to table of contents to improve readability
  • Add a more prominent statement about this containing protocol information

Kernel manager that detects virtual/conda env & installs ipykernel

This may be a separate package rather than something that belongs in jupyter_client, but this seems like the logical place to record the idea.

A custom kernel manager could detect when a frontend has been started with a (virtual|conda) env active, by looking at environment variables, invoke pip/conda to ensure ipykernel is installed in the env, and then launch a kernel using it. This would be especially useful for the console interfaces, similar to the virtualenv detection we do in IPython terminal.

This would make a good project that someone technically savvy but unfamiliar with our architecture could put together in ~a day, with some support from a mentor who knows our architecture.

Follows on from discussion on ipython/ipykernel#96

New kernel signalling tests failing on Jenkins

It looks like the bash subprocesses are failing to start for some reason. This is on the jack-of-none job (buildlog).

======================================================================
FAIL: test_signal_kernel_subprocesses (jupyter_client.tests.test_kernelmanager.TestKernelManager)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/var/lib/jenkins/shiningpanda/jobs/92f5a430/virtualenvs/d41d8cd9/lib/python3.4/site-packages/ipython_genutils/testing/decorators.py", line 186, in skipper_func
    return f(*args, **kwargs)
  File "/var/lib/jenkins/shiningpanda/jobs/92f5a430/virtualenvs/d41d8cd9/lib/python3.4/site-packages/jupyter_client/tests/test_kernelmanager.py", line 104, in test_signal_kernel_subprocesses
    self.assertEqual(reply['user_expressions']['poll'], [None] * N)
nose.proxy.AssertionError: Lists differ: [127, 127, 127, 127, 127] != [None, None, None, None, None]

First differing element 0:
127
None

- [127, 127, 127, 127, 127]
+ [None, None, None, None, None]
-------------------- >> begin captured logging << --------------------
root: DEBUG: Found kernel signaltest in /tmp/tmpve4aomr4/jupyter_data/kernels
root: DEBUG: Native kernel (python3) available from /var/lib/jenkins/shiningpanda/jobs/92f5a430/virtualenvs/d41d8cd9/lib/python3.4/site-packages/ipykernel/resources
root: DEBUG: Starting kernel: ['/var/lib/jenkins/shiningpanda/jobs/92f5a430/virtualenvs/d41d8cd9/bin/python3.4', '-m', 'jupyter_client.tests.signalkernel', '-f', '/tmp/tmpb3_8dhxp.json']
root: DEBUG: Connecting to: tcp://127.0.0.1:41234
root: DEBUG: connecting shell channel to tcp://127.0.0.1:43459
root: DEBUG: Connecting to: tcp://127.0.0.1:43459
root: DEBUG: connecting iopub channel to tcp://127.0.0.1:55729
root: DEBUG: Connecting to: tcp://127.0.0.1:55729
root: DEBUG: connecting stdin channel to tcp://127.0.0.1:48693
root: DEBUG: Connecting to: tcp://127.0.0.1:48693
root: DEBUG: connecting heartbeat channel to tcp://127.0.0.1:59474
--------------------- >> end captured logging << ---------------------

Doc discrepancy on iopub error message

There is a small mistake in messaging.rst:

Execution errors
----------------

When an error occurs during code execution

Message type: ``error``::

    content = {
       # Similar content to the execute_reply messages for the 'error' case,
       # except the 'status' field is omitted.
    }

As written, one would guess that 'execution_count' is part of the message, as it is with execute_reply.
Error messages lack this field, so the comment should say # except the 'status' and 'execution_count' fields are omitted.

improved completions with type hinting

The autocomplete-plus package for the Atom editor (recently bundled into Atom) shows how powerful types in autocomplete can be.

[Screenshot: Atom autocomplete suggestions annotated with type information]

Something similar would be a huge boon to Jupyter. Instead of just showing text matches for variables in the running kernel (already very powerful!), show what they are: a variable of type String, a function that takes two integers and returns a float, etc.

This would be a huge step forward for interactive programming of interpreted languages, and would truly showcase the power of having a live kernel to introspect into.
