
repo2docker's Introduction


repo2docker fetches a git repository and builds a container image based on the configuration files found in the repository.

See the repo2docker documentation for more information on using repo2docker.

For support questions please search or post to https://discourse.jupyter.org/c/binder.

See the contributing guide for information on contributing to repo2docker.


Please note that this repository is participating in a study into the sustainability of open source projects. Data will be gathered about this repository for approximately 12 months, starting from 2021-06-11.

Data collected will include number of contributors, number of PRs, time taken to close/merge these PRs, and issues closed.

For more information, please visit our informational page or download our participant information sheet.


Using repo2docker

Prerequisites

  1. Docker to build & run the repositories. The community edition is recommended.
  2. Python 3.6+.

Supported on Linux and macOS. See documentation note about Windows support.

Installation

This is a quick guide to installing repo2docker; see our documentation for a full guide.

To install from PyPI:

pip install jupyter-repo2docker

To install from source:

git clone https://github.com/jupyterhub/repo2docker.git
cd repo2docker
pip install -e .

Usage

The core feature of repo2docker is to fetch a git repository (from GitHub or locally), build a container image based on the specifications found in the repository, and optionally launch a container that you can use to explore the repository.

Note that Docker needs to be running on your machine for this to work.

Example:

jupyter-repo2docker https://github.com/norvig/pytudes

After building (it might take a while!), it should output something like this in your terminal:

    Copy/paste this URL into your browser when you connect for the first time,
    to login with a token:
        http://0.0.0.0:36511/?token=f94f8fabb92e22f5bfab116c382b4707fc2cade56ad1ace0

If you copy and paste that URL into your browser, you will see a Jupyter Notebook with the contents of the repository you just built!

For more information on how to use repo2docker, see the usage guide.

Repository specifications

Repo2Docker looks for configuration files in the source repository to determine how the Docker image should be built. For a list of the configuration files that repo2docker can use, see the complete list of configuration files.

The philosophy of repo2docker is inspired by Heroku Build Packs.

Docker Image

Repo2Docker can be run inside a Docker container if access to the Docker daemon is provided; see BinderHub for an example. Docker images are published to quay.io. The old Docker Hub image is no longer supported.

repo2docker's People

Contributors

betatim, choldgraf, consideratio, davidanthoff, dependabot[bot], evertrol, gedankenstuecke, gladysnalvarte, jrbourbeau, jtpio, jzf2101, kardasbart, kirstiejane, madhur-tandon, manics, minrk, nhdaly, nuest, pablobernabeu, paugier, pre-commit-ci[bot], sblack-usu, sylvaincorlay, tomyun, trallard, vsoch, willingc, wolfv, xarthisius, yuvipanda


repo2docker's Issues

Stop installing JupyterHub by default with repo2docker

It's no longer required by binder, and we should remove it here too to simplify our codebase.

People using this for their own JupyterHub installs can (and should) specify the jupyterhub version explicitly in their requirements.txt file instead.
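
For example, a requirements.txt that pins it explicitly (the version shown is illustrative):

jupyterhub==1.0.0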

Document our expectations of people writing Dockerfiles (for now)

What I can remember...

  1. You need to explicitly do a 'COPY . $HOME' if you want your files in the final build.
  2. notebook and jupyterhub-singleuser (of the appropriate version matching the JupyterHub version, which you can get from the build arg $JUPYTERHUB_VERSION) must be in $PATH; a sketch covering both points follows this list.
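
A minimal sketch of a Dockerfile fragment that satisfies both expectations (the base image and home directory are illustrative, not prescribed):

FROM python:3.6
# expectation 2: notebook and a matching jupyterhub-singleuser must be in $PATH
ARG JUPYTERHUB_VERSION
RUN pip install --no-cache notebook jupyterhub==${JUPYTERHUB_VERSION}
# expectation 1: explicitly copy the repository contents into the final image
ENV HOME /home/jovyan
COPY . ${HOME}
WORKDIR ${HOME}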

Adding a logo to the docs

Now that we're back on alabaster, we've got an empty hole in the top left where the logo usually goes. Do we want to:

  1. Use the jupyter logo
  2. Use the binder logo
  3. Make a logo for repo2docker (maybe just something like below?)

[image: proposed logo mock-up]

Document creating a new builder

A rough sketch to start a doc for creating a new builder (cc @willingc):

So you want to create a new builder

Creating a new builder for jupyter-repo2docker

Repo2Docker uses s2i to turn git repositories into docker images that:

  1. can be used with jupyter notebooks, including via JupyterHub
  2. include the contents of the repo

This is handled via a collection of s2i builders. There are two pieces to a builder:

  1. the s2i builder image, which turns GitHub repos into docker images
  2. the BuildPack object in repo2docker/detectors.py, which detects which builder to use based on the contents, and invokes the appropriate builder image

Making the s2i builder

First, pick a name; the examples below use mine.

To get started creating a builder, start with s2i create. In s2i-builders, run:

s2i create jupyterhub/repo2docker-mine mine

To build the image, in the newly created s2i-builders/mine:

make

Now you can work on your Dockerfile as you would any other and fill out the docker build as needed. There are two pieces here:

  1. the Dockerfile for the builder itself
  2. the s2i/bin/assemble script

The Dockerfile is used to create the base builder image (e.g. installing the base runtime environment)

The s2i/bin/assemble script is run when creating each new image from a given repository (e.g. pulling in repo-specific dependencies and repo contents)

You can view the existing builders for examples to follow.

Adding the BuildPack, so that your builder is used on the appropriate repos

Once your builder image is finished, you need to add a BuildPack to run your builds.

  • define your BuildPack class in detectors.py. In most cases, you only need to implement two things if you subclass S2IBuildPack:
    • the detect method to determine whether a repo should choose your BuildPack
    • setting the build_image attribute, either via config or during the .detect() method

Finally, to get the builder application to use your BuildPack, add it to the buildpacks list in app.py.
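
A hypothetical BuildPack subclass, sketched against the interface described above (the class name, config file, and exact detect signature are assumptions for illustration):

import os

class MineBuildPack(S2IBuildPack):
    """Use the 'mine' builder image for repos containing a mine.yaml file."""
    build_image = 'jupyterhub/repo2docker-mine'

    def detect(self):
        # assuming detect() runs with the checked-out repo as the working directory
        return os.path.exists('mine.yaml')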

Once everything above is done, you should be able to build a repo with:

python -m repo2docker https://github.com/yourname/your-repo

Make port mapping work with non-jupyter workflows

If the image being built is a Dockerfile, it might expose arbitrary ports. Right now we don't do any port mappings for them.

Instead, we should inspect the image that has been built, figure out what ports it says it needs, and expose them!

This came up when working on getting RStudio to work here with @cboettig.
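
A rough sketch of that inspection step with the docker Python SDK (the image name is illustrative):

import docker

client = docker.from_env()
image = client.images.get('some-built-image:latest')
# e.g. {'8787/tcp': {}} for an image that EXPOSEs RStudio's port
exposed_ports = image.attrs['Config'].get('ExposedPorts', {})
print(list(exposed_ports))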

apt-installs from text file

We should support apt installs from a text file, maybe:

apt.txt

Each line would name a package to be installed with apt-get install <line>?
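
A hypothetical apt.txt, listing one package per line:

graphviz
libgl1-mesa-glx

The builder could then consume it with something like xargs -a apt.txt apt-get install --yes.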

Javascript builds

We also need to support dependencies for JavaScript (maybe including things that require a function call after installing, e.g. some of the Jupyter extensions?). Discuss implementations etc. here!

Note that our guide on repo2docker already starts out by showing how to create a builder with npm. Perhaps we can create a JavaScript build in parallel with finishing that document.

Provide clear error messages for Dockerfiles that aren't repo2docker compatible

Dockerfiles need to follow a specific set of guidelines to work with repo2docker (and binder). Specifically,

  1. They must be able to run as a non-privileged user with UID 1000.
  2. The jupyterhub-singleuser command must be on the PATH and work as expected (usually pip install jupyterhub==<version> makes this happen).
  3. The FROM image must use a pinned tag, rather than latest or none, since those make builds non-deterministic.

When these aren't met, we should provide clear error messages.

Note that this should come after #34, so that most people won't need Dockerfiles at all. We could also recommend that people inherit from https://github.com/jupyter/docker-stacks when they do need one, since those images work OK with binder.
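
A minimal sketch of one such check (guideline 3), raising a clear error for unpinned FROM images (the function and message are illustrative, not existing repo2docker code):

def check_from_pinned(dockerfile_path):
    """Raise a clear error if a FROM image is unpinned or pinned to :latest."""
    with open(dockerfile_path) as f:
        for line in f:
            tokens = line.split()
            if len(tokens) >= 2 and tokens[0].upper() == 'FROM':
                image = tokens[1]
                if ':' not in image or image.endswith(':latest'):
                    raise ValueError(
                        "FROM image %r is not pinned to a tag; "
                        "this makes builds non-deterministic" % image
                    )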

Document a `verify` script pattern

Update 2022 by Erik

My understanding is that we aren't looking to implement a feature in repo2docker for this pattern, but that we may wish to document ideas like it. An example to reference could be this one from pangeo-stacks: #93 (comment)

Original post

I think it'd be useful for many people to run some simple tests whenever their r2d image is built. This is similar to postBuild (and the same behavior could be achieved with a few assert statements at the end of one of these files), but I wonder if it'd be worth having a file whose specific job is to throw a "tests aren't passing" error, so that the user knows their code isn't working.

Alternatively, we could try to set guidelines for this in the documentation, basically including a section on how to ensure your container runs after it builds. Thoughts?
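
A hypothetical verify script of this sort (the filename, hook, and imports are only proposals):

#!/usr/bin/env python
# fail the build loudly if the environment isn't usable
import numpy
assert numpy.__version__.startswith('1.'), 'unexpected numpy version'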

Create a `hello world` repo

It'd be good to have a repo that we can point users to in order to make sure things are installed properly, similar to how docker has the docker run hello-world image...

Julia builds

We should support dependencies for Julia.

Major to-do items

  • Allow users to specify requirements with a REQUIRE file
  • Allow users to pre-compile packages before building the docker image

Julia handles dependencies in a text file similar to how Python does this.
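
For illustration, a REQUIRE file in Julia's (pre-Pkg3) format lists one package per line, optionally with a minimum version:

julia 0.6
DataFrames 0.10
Gadfly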

Thoughts on that? @yuvipanda

shim for current binder's Dockerfiles

People running the Dockerfile builder for binder inherit FROM andrewosh/binder-base. These Dockerfiles will lack jupyterhub but should otherwise work. If we add an oldbinder-dockerfile builder that detects a Dockerfile with this FROM, we should be able to append the bits needed to make them work.

Allow composing multiple methods of specifying environments

We want most people to not have to set up a Dockerfile and to be able to use other mechanisms instead. We also want most of these other mechanisms to be things that already exist in the community, rather than things we set up and build ourselves. And we want to be able to compose these together, rather than having to pick one or the other.

In this vein, we should unify on one s2i builder script that can install many things. For example:

  1. Install apt packages from an apt.yaml file (could include other apt repositories too)
  2. Install pip packages from requirements.txt (already an established convention)
  3. Install conda environments with environment.yaml (already an established convention)
  4. Install npm packages with package.json (already an established convention)
  5. Pick the version of Python used with runtime.txt (established by Heroku)
  6. Run a 'postInstall' script via an executable script of a particular name (TODO: find out what Heroku uses)

If users don't want to put multiple files in their repo, they can instead add a Binderfile, which can either provide paths to these individual files when they aren't at the default locations (so you can keep them in a directory) or contain the values of these individual files inline (so you can keep it all in one file).

We could use the same docker image for all of these things (not preferable!) or produce base images that support a subset of these combinations.

Set JupyterHub version as a Build Arg

The version of JupyterHub in use inside the container must match the version of JupyterHub outside the container. We should enforce this by setting a build arg that must be consumed by the Dockerfile.

Note that this is only an issue for people who make their own Dockerfiles.
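
A sketch of how a Dockerfile would consume such a build arg (assuming it is named JUPYTERHUB_VERSION, as referenced elsewhere in this document):

ARG JUPYTERHUB_VERSION
RUN pip install --no-cache jupyterhub==${JUPYTERHUB_VERSION}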

handle Python 2 in environment.yml

conda can specify the Python package version in environment.yml. Right now, if the user specifies a requirement for Python 2, this will just break.

We should detect if Python 2 is in environment.yml and install to a new env for the kernel, rather than updating the root env. In fact, it might be preferable to do this if the Python version is specified as anything other than the already-installed version.
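
For example, an environment.yml like the following (contents illustrative) currently breaks, and should instead result in a separate Python 2 env for the kernel:

name: example
dependencies:
  - python=2.7
  - numpy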

Support `install.R` files

We should support R functionality. R handles dependencies differently from both Python and Julia; here are some thoughts from Carl:

Current proposal

  • Use runtime.txt to specify the R version as a date.
    • We'll expect a line with the format r-YYYY-MM-DD, e.g. r-2017-01-21.
    • If that line is found, it will trigger an R build in r2d. This sets up a version of R that was current at that date, and sets the MRAN snapshot for that date as the default repo.
    • We'll also set up an R kernel for Jupyter to discover. This will also make sure we are installing into a user-owned library path, so packages can be installed there at any point.
  • Use install.R (must be executable) to run with the R installed above. This primarily lets people write R code that installs packages; a minimal sketch follows this list.
  • If install.R is given without an r-YYYY-MM-DD line in runtime.txt, then we raise an error.
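
A minimal install.R sketch under this proposal (packages are illustrative; versions would come from the pinned MRAN snapshot):

#!/usr/bin/env Rscript
# installed from the MRAN snapshot named in runtime.txt
install.packages(c("ggplot2", "dplyr"))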

Main to-do items

  • How do users specify which servers contain which packages (e.g. rstudio vs. bioconductor)?
  • What about non-CRAN packages? Outside of the standard R repositories, we can't do much other than tell people to clone a GitHub repo of preference and install it with install.R or postBuild or something.
  • How can we ensure versions are frozen, since R doesn't easily handle specific version installs?
  • Someone needs to actually implement this!

Notes

In an R package, the DESCRIPTION file plays the role of a requirements.txt in stating the dependencies, the minimal versions needed, and where to get them (e.g. CRAN or an additional CRAN-type repo like bioconductor).

This approach does not accommodate installing something that is not the most recent version of a package. (CRAN archives old sources, but because CRAN, unlike Python or Ruby gems distribution, is designed to provide binaries, and you can't guarantee that binaries build for an old/archived source, the default install does not immediately support installing archived packages.)

If you just have a list of packages you want, I recommend something along the lines of what we do with rocker, e.g.

install2.r $(cat deps.txt)

Where deps.txt is just a list of package names you want to install. If these come from multiple repos (cran & bioconductor), just list those as arguments to -r:

install2.r -r "https://cran.rstudio.com" -r "https://bioconductor.org/packages/release" $(cat deps.txt)

If you want to install the same version each time, just use an MRAN snapshot of the appropriate date.

Post-install commands

We should support users running post-install commands before the image gets built. E.g., so that they could use Julia to pre-compile their packages, or set Jupyter variables.
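
A hypothetical post-install script along those lines (the Julia package and Jupyter command are illustrative):

#!/bin/bash
# pre-compile a Julia package so first launch is fast
julia -e 'using Gadfly'
# generate a Jupyter config that later steps can edit
jupyter notebook --generate-config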

Allow r2d to work on pre-existing docker images

@arokem brought up an interesting idea: that we support building on top of pre-existing docker images. Basically take an image and then add an extra layer of jupyterhub on top so that people can interact with it. @yuvipanda what do you think? We're getting close to outside the scope of r2d, but I could see this being valuable

use `os.path.join` for source paths

I noticed that a few paths are written with 'nix-style separators. We should remove all instances of these and use os.path.join. This mostly pertains to Yuvi's dockerfiles branch.
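
For example (the path is illustrative):

import os

# instead of hard-coding 'nix-style separators:
path = 'repo2docker/templates/Dockerfile'
# build the path portably:
path = os.path.join('repo2docker', 'templates', 'Dockerfile')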

Add validation for --image-name

I try to do this:
jupyter-repo2docker --image-name picoquant.com/PQNotebooks:latest git@github.com:PicoQuant/Notebooks.git

which gives me the following output:

Cloning into '/tmp/8c11f9fe-045a-4aaf-947d-d4685d84d5c1'...
remote: Counting objects: 42, done.
remote: Compressing objects: 100% (36/36), done.
remote: Total 42 (delta 19), reused 18 (delta 3), pack-reused 0
Receiving objects: 100% (42/42), 654.85 KiB | 9.00 KiB/s, done.
Resolving deltas: 100% (19/19), done.
Using repo2docker-conda builder
Traceback (most recent call last):
  File "/usr/local/lib/python3.5/site-packages/requests/packages/urllib3/connectionpool.py", line 600, in urlopen
    chunked=chunked)
  File "/usr/local/lib/python3.5/site-packages/requests/packages/urllib3/connectionpool.py", line 356, in _make_request
    conn.request(method, url, **httplib_request_kw)
  File "/usr/local/Cellar/python3/3.5.2_1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/http/client.py", line 1106, in request
    self._send_request(method, url, body, headers)
  File "/usr/local/Cellar/python3/3.5.2_1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/http/client.py", line 1151, in _send_request
    self.endheaders(body)
  File "/usr/local/Cellar/python3/3.5.2_1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/http/client.py", line 1102, in endheaders
    self._send_output(message_body)
  File "/usr/local/Cellar/python3/3.5.2_1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/http/client.py", line 936, in _send_output
    self.send(message_body)
  File "/usr/local/Cellar/python3/3.5.2_1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/http/client.py", line 905, in send
    self.sock.sendall(datablock)
BrokenPipeError: [Errno 32] Broken pipe

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.5/site-packages/requests/adapters.py", line 438, in send
    timeout=timeout
  File "/usr/local/lib/python3.5/site-packages/requests/packages/urllib3/connectionpool.py", line 649, in urlopen
    _stacktrace=sys.exc_info()[2])
  File "/usr/local/lib/python3.5/site-packages/requests/packages/urllib3/util/retry.py", line 357, in increment
    raise six.reraise(type(error), error, _stacktrace)
  File "/usr/local/lib/python3.5/site-packages/requests/packages/urllib3/packages/six.py", line 685, in reraise
    raise value.with_traceback(tb)
  File "/usr/local/lib/python3.5/site-packages/requests/packages/urllib3/connectionpool.py", line 600, in urlopen
    chunked=chunked)
  File "/usr/local/lib/python3.5/site-packages/requests/packages/urllib3/connectionpool.py", line 356, in _make_request
    conn.request(method, url, **httplib_request_kw)
  File "/usr/local/Cellar/python3/3.5.2_1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/http/client.py", line 1106, in request
    self._send_request(method, url, body, headers)
  File "/usr/local/Cellar/python3/3.5.2_1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/http/client.py", line 1151, in _send_request
    self.endheaders(body)
  File "/usr/local/Cellar/python3/3.5.2_1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/http/client.py", line 1102, in endheaders
    self._send_output(message_body)
  File "/usr/local/Cellar/python3/3.5.2_1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/http/client.py", line 936, in _send_output
    self.send(message_body)
  File "/usr/local/Cellar/python3/3.5.2_1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/http/client.py", line 905, in send
    self.sock.sendall(datablock)
requests.packages.urllib3.exceptions.ProtocolError: ('Connection aborted.', BrokenPipeError(32, 'Broken pipe'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/bin/jupyter-repo2docker", line 11, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.5/site-packages/repo2docker/__main__.py", line 6, in main
    f.start()
  File "/usr/local/lib/python3.5/site-packages/repo2docker/app.py", line 325, in start
    for l in picked_buildpack.build(self.output_image_spec):
  File "/usr/local/lib/python3.5/site-packages/repo2docker/detectors.py", line 395, in build
    decode=True
  File "/usr/local/lib/python3.5/site-packages/docker/api/build.py", line 246, in build
    timeout=timeout,
  File "/usr/local/lib/python3.5/site-packages/docker/utils/decorators.py", line 46, in inner
    return f(self, *args, **kwargs)
  File "/usr/local/lib/python3.5/site-packages/docker/api/client.py", line 185, in _post
    return self.post(url, **self._set_request_timeout(kwargs))
  File "/usr/local/lib/python3.5/site-packages/requests/sessions.py", line 565, in post
    return self.request('POST', url, data=data, json=json, **kwargs)
  File "/usr/local/lib/python3.5/site-packages/requests/sessions.py", line 518, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/local/lib/python3.5/site-packages/requests/sessions.py", line 639, in send
    r = adapter.send(request, **kwargs)
  File "/usr/local/lib/python3.5/site-packages/requests/adapters.py", line 488, in send
    raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', BrokenPipeError(32, 'Broken pipe'))

while
jupyter-repo2docker git@github.com:PicoQuant/Notebooks.git
works as expected.
Is there an easy way of defining the container name and then maybe pushing it to Docker?
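
For context, Docker repository names must be lowercase, so PQNotebooks in the command above is invalid; a validation step could fail fast with a clear message instead of the traceback. A minimal sketch (the regex deliberately simplifies Docker's full reference grammar):

import re

# simplified: lowercase repository name, optional registry/path, optional tag
IMAGE_RE = re.compile(r'^[a-z0-9]+(?:[._\-/][a-z0-9]+)*(?::[\w][\w.\-]*)?$')

def validate_image_name(name):
    if not IMAGE_RE.match(name):
        raise ValueError(
            '%r is not a valid image name: repository names must be lowercase' % name
        )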

Single binder directory support

We should support specifying binder environment information in a single location that is not used by anything else, à la .travis.yml (it could be a file or a directory), rather than requiring users to fill their repos with potentially several binder-specific files at the top level that may conflict with files already there. Allowing these to live in a top-level binder.yml or binder/ directory would be a big improvement.

Packages may have requirements.txt and/or environment.yml, etc., that describe certain dependencies. To use the same repo on binder, the user may want different dependencies than the most basic ones required for the project. As it is now, we are forcing people to make binder-specific repos in order to make these choices. It would be nice if this could be done with a single file and/or directory in an existing repo, to avoid conflicting with existing files.

Next steps:

Chris

  • Make a logo
  • Build the Binder diagram
  • Confirm with Titus when the workshop is; get them to move it to sometime outside of September.

Yuvi

  • Rebuild the UI landing page
  • Figure out a permalink solution
  • Figure out badges (maybe use same badge as now?)
  • Get an IP address for beta.mybinder.org

move env-creation, and cache it ?

Especially when iterating locally on some files, it appeared to me that the conda step was one of the longest, even if env.yml did not change.

Would it be possible to change the process to cache the env-building part? Something along the lines of:

  • Copy env.yml
  • Create the env
  • Remove env.yml
    ...
  • Copy the repository at the end.

Potentially using the hash of env.yml content for caching purposes.

No pressure, just thinking.
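
In Dockerfile terms, the layer ordering that would enable this caching looks roughly like (filenames illustrative):

# copy only the environment spec first, so this layer is cached
# until env.yml itself changes
COPY env.yml /tmp/env.yml
RUN conda env update -n root -f /tmp/env.yml
# copy the frequently-changing repository contents last
COPY . ${HOME}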

Everything runs, but my browser can't find Jupyter

I am trying to run the PythonDataScienceHandbook example (jupyter-repo2docker https://github.com/jakevdp/PythonDataScienceHandbook).

Everything seems to build/run just fine, and I get the URL at the end of it all, but when I point my browser to the URL, I get a "This site can't be reached" error (on Chrome). I tried several browsers, so I don't think it is just a browser-specific issue.

This is particularly odd, because this worked just a few days ago, before I updated my operating system (I am on a Mac). OTOH, that might be a red herring, because I also had to restart my machine for the upgrade, and maybe that caused this?

Any ideas how to debug this?

Upgrade to jupyterlab 0.28.0

If I define jupyterlab>=0.28.0 in the requirements.txt of my own repo, will it install 0.28.0 or the version defined in repo2docker?

Add tests!

Right now we have absolutely no tests! Sad!

duplicate tests ?

From the doc page, there seem to be duplicate post-build script entries:

Site Contents
    Sample build files
        System - Post-build scripts
        System - Post-build scripts
        System - Specifying runtime environments
        System - Specifying runtime environments
        System - APT Packages
        Python - Requirements.txt
        Python - Conda Environment
        Python - Mixed Requirements
        Python - Mixed Requirements
        Julia - REQUIRE
        Julia - REQUIRE
        Docker - Specifying dependencies
        Docker - Legacy Dockerfiles
        Docker - Running scripts
        Docker - Running scripts

These seem to be duplicates, for example:

tests/venv/postBuild/postBuild
tests/venv/binder-dir/binder/postBuild

Could those be duplicate tests?

requirements.txt and runtime.txt are not followed if environment.yaml is present

In https://github.com/simonsfoundation/regulatory_network_examples the requirements.txt
and runtime.txt were not being observed. In https://github.com/simonsfoundation/jp_svg_canvas they are observed. The only difference I can see is that https://github.com/simonsfoundation/jp_svg_canvas has a setup.py script.

I could work around this by adding python=2 to environment.yml and adding pip install -r requirements.txt to the postBuild script.

In my view, these workarounds should not be needed.

Provide a mechanism for ignoring environment files

Since we are following mixed conventions of installation patterns (package.json, requirements.txt, etc.), we need to make sure that it's possible for repos to tell binder not to use some of these files, even if they are present.

Repos may have multiple of these files that should be considered mutually exclusive (e.g. requirements.txt and environment.yml to serve both pip and conda users, or package.json that's only part of a build step, and binder should ignore it, or a Dockerfile for standalone running/testing that shouldn't be used on binder). We should make sure that there is a way for users to tell binder what to do, regardless of the presence of top-level dependency files that binder should ignore.

This is related to #72, where the presence of a /binder directory or binder.yml file could prevent looking at any top-level files (for instance).

Support custom environment variables

We should let users define environment variables w/ their repositories.

Maybe env.txt? Each line would have <KEY>=<VAL>?

There are a few components to this:

  • Make it possible to invoke repo2docker with environment variables (#186 )
  • Document the above CLI functionality (#295)
  • Add support to perform this action with a config file (e.g. env.txt; a sketch follows this list)
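
A hypothetical env.txt under this proposal:

MY_API_URL=https://example.com/api
DATA_DIR=/home/jovyan/data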

Easy way to run a single repo on your own cloud instance

We want to provide really easy ways for users to run a GitHub repo on their own AWS, gcloud, OpenStack, or whatever instance. It should ideally allow them to do persistent, long-term work on it, including pushing stuff back to GitHub. We can use cloud-init + packer to make this happen across clouds without too much work on our part.

We also have a responsibility to do this as securely as possible, rather than leaving random botnet invitations across the internet.

Adding support for OSF

We want to be able to support the immutable frozen releases on OSF as an input to repo2docker.

So we should introduce the concept of content providers. Their job is to check out the given URL to the current local directory. They won't do any caching or whatever, and would use traitlets to be pluggable.

We decided against having autodetection, since there is no way to do that reliably for everything, and doing that only for some providers seems complex.

So the new command line would be something like:

jupyter-repo2docker --provider=osf https://<some-osf-url>

And the provider class should be like:

from tornado import gen  # assumed import; @gen.coroutine implies Tornado

class OSFProvider(Provider):
    name = 'osf'

    description = 'some description of this provider'

    @gen.coroutine
    def provide(self, identifier):
        """
        Fetch the contents of the given identifier to the current working directory.
        """
        pass

This means we only support a single identifier (such as a file path, a git URL including a ref, or an OSF URL) for each provider. This makes the implementation simpler, but we can modify it in the future if we need to.

/cc @betatim @mfraezz
