Join the chat at https://gitter.im/ibm-et/spark-kernel

Spark Kernel

The main goal of the Spark Kernel is to provide the foundation for interactive applications to connect to and use Apache Spark.

Overview

The Spark Kernel provides an interface that allows clients to interact with a Spark cluster. Clients can send libraries and snippets of code that are interpreted and run against a preconfigured Spark context. These snippets can do a variety of things:

  1. Define and run Spark jobs of all kinds
  2. Collect results from Spark and push them to the client
  3. Load necessary dependencies for the running code
  4. Start and monitor a stream
  5. ...
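For instance, a snippet sent to the kernel can be plain Scala, which the interpreter evaluates directly. A minimal sketch of such a cell is below; the Spark-backed variant is shown only as a comment, since it assumes the preconfigured context (conventionally exposed to submitted code as `sc`):

```scala
// A snippet a client might send for interactive evaluation.
// Plain Scala like this is interpreted directly by the kernel.
val evenSum = (1 to 100).filter(_ % 2 == 0).sum
println(evenSum)

// Against the preconfigured Spark context (assumed here to be
// available as `sc` inside the kernel), the equivalent distributed
// computation would look like:
//   val evenSum = sc.parallelize(1 to 100).filter(_ % 2 == 0).sum()
```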

The kernel's main supported language is Scala, but it is also capable of processing both Python and R. It implements the latest Jupyter message protocol (5.0), so it can easily plug into the 3.x branch of Jupyter/IPython for quick, interactive data exploration.

Try It

A version of the Spark Kernel is deployed as part of the Try Jupyter! site. Select Scala 2.10.4 (Spark 1.4.1) under the New dropdown. Note that this version only supports Scala.

Develop

This project uses make as the entry point for build, test, and packaging. It supports two modes, local and vagrant. The default is local, in which all commands (e.g., sbt) are run directly on your machine; this means you need to install sbt, Jupyter/IPython, and the other development requirements locally. The second mode uses Vagrant to simplify the development experience: all commands are sent to a Vagrant box that has the necessary dependencies pre-installed. To run in vagrant mode, run export USE_VAGRANT=true.

To build and interact with the Spark Kernel using Jupyter, run

make dev

This will start a Jupyter notebook server. Depending on your mode, it will be accessible at http://localhost:8888 or http://192.168.44.44:8888. From here you can create notebooks that use the Spark Kernel configured for local mode.

Tests can be run with make test.

NOTE: Do not use sbt directly.

Build & Package

To build and package up the Spark Kernel, run

make dist

The resulting package of the kernel will be located at ./dist/spark-kernel-<VERSION>.tar.gz. The uncompressed package is what Jupyter runs when doing make dev.

Version

Our goal is to keep master up to date with the latest version of Spark. When new versions of Spark require code changes, we create a separate branch. The table below shows what is available now.

Branch         Spark Kernel Version   Apache Spark Version
master         0.1.5                  1.5.1
branch-0.1.4   0.1.4                  1.4.1
branch-0.1.3   0.1.3                  1.3.1

Please note that for the most part, new features to Spark Kernel will only be added to the master branch.

Resources

There is more detailed information available in our Wiki and our Getting Started guide.
