
configurable-http-proxy's Introduction

Technical Overview | Installation | Configuration | Docker | Contributing | License | Help and Resources



With JupyterHub you can create a multi-user Hub that spawns, manages, and proxies multiple instances of the single-user Jupyter notebook server.

Project Jupyter created JupyterHub to support many users. The Hub can offer notebook servers to a class of students, a corporate data science workgroup, a scientific research project, or a high-performance computing group.

Technical overview

Three main actors make up JupyterHub:

  • multi-user Hub (tornado process)
  • configurable http proxy (node-http-proxy)
  • multiple single-user Jupyter notebook servers (Python/Jupyter/tornado)

Basic principles for operation are:

  • The Hub launches a proxy.
  • The proxy forwards all requests to the Hub by default.
  • The Hub handles login and spawns single-user servers on demand.
  • The Hub configures the proxy to forward URL prefixes to the single-user notebook servers.

JupyterHub also provides a REST API for administration of the Hub and its users.
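For illustration, here is a minimal sketch of calling that REST API from Python. It assumes a Hub reachable at http://localhost:8000 and an API token with sufficient permissions (for example, one issued with the jupyterhub token command); adjust both for your deployment.

import requests

api_url = "http://localhost:8000/hub/api"
token = "REPLACE_WITH_YOUR_TOKEN"  # assumption: a token you generated yourself

# list the users known to the Hub (requires adequate permissions)
r = requests.get(f"{api_url}/users", headers={"Authorization": f"token {token}"})
r.raise_for_status()
for user in r.json():
    print(user["name"])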

Installation

Check prerequisites

  • A Linux/Unix based system

  • Python 3.8 or greater

  • nodejs/npm

    • If you are using conda, the nodejs and npm dependencies will be installed for you by conda.

    • If you are using pip, install a recent version of nodejs/npm (at least nodejs 12.0).

  • If using the default PAM Authenticator, a pluggable authentication module (PAM).

  • TLS certificate and key for HTTPS communication

  • Domain name

Install packages

Using conda

To install JupyterHub along with its dependencies including nodejs/npm:

conda install -c conda-forge jupyterhub

If you plan to run notebook servers locally, install JupyterLab or Jupyter notebook:

conda install jupyterlab
conda install notebook

Using pip

JupyterHub can be installed with pip, and the proxy with npm:

npm install -g configurable-http-proxy
python3 -m pip install jupyterhub

If you plan to run notebook servers locally, you will need to install JupyterLab or Jupyter notebook:

python3 -m pip install --upgrade jupyterlab
python3 -m pip install --upgrade notebook

Run the Hub server

To start the Hub server, run the command:

jupyterhub

Visit http://localhost:8000 in your browser, and sign in with your system username and password.

Note: To allow multiple users to sign in to the server, you will need to run the jupyterhub command as a privileged user, such as root. The wiki describes how to run the server as a less privileged user, which requires more configuration of the system.

Configuration

The Getting Started section of the documentation explains the common steps in setting up JupyterHub.

The JupyterHub tutorial provides an in-depth video and sample configurations of JupyterHub.

Create a configuration file

To generate a default config file with settings and descriptions:

jupyterhub --generate-config

Start the Hub

To start the Hub on a specific IP address and port, for example 10.0.1.2:443, with HTTPS:

jupyterhub --ip 10.0.1.2 --port 443 --ssl-key my_ssl.key --ssl-cert my_ssl.cert
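
The same settings can be kept in the configuration file instead of on the command line. A minimal sketch of the equivalent jupyterhub_config.py (option names are standard JupyterHub traitlets; the file paths are examples):

# jupyterhub_config.py
c.JupyterHub.ip = '10.0.1.2'
c.JupyterHub.port = 443
c.JupyterHub.ssl_key = 'my_ssl.key'
c.JupyterHub.ssl_cert = 'my_ssl.cert'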

Authenticators

  • PAMAuthenticator: Default, built-in authenticator
  • OAuthenticator: OAuth + JupyterHub Authenticator = OAuthenticator
  • ldapauthenticator: Simple LDAP Authenticator Plugin for JupyterHub
  • kerberosauthenticator: Kerberos Authenticator Plugin for JupyterHub
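
To use an authenticator other than the default, point authenticator_class at it in jupyterhub_config.py. A sketch for the LDAP plugin (the class path is the one documented by ldapauthenticator; the server address is a placeholder):

# jupyterhub_config.py
c.JupyterHub.authenticator_class = 'ldapauthenticator.LDAPAuthenticator'
c.LDAPAuthenticator.server_address = 'ldap.example.com'  # placeholder host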

Spawners

  • LocalProcessSpawner: Default, built-in spawner; starts single-user servers as local processes
  • dockerspawner: Spawn single-user servers in Docker containers
  • kubespawner: Kubernetes spawner for JupyterHub
  • sudospawner: Spawn single-user servers without being root
  • systemdspawner: Spawn single-user notebook servers using systemd
  • batchspawner: Designed for clusters using batch scheduling software
  • yarnspawner: Spawn single-user notebook servers distributed on a Hadoop cluster
  • wrapspawner: WrapSpawner and ProfilesSpawner, enabling runtime configuration of spawners
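
Spawners are selected the same way, via spawner_class. A sketch for DockerSpawner (the class path is the one documented by dockerspawner; the image name is just an example):

# jupyterhub_config.py
c.JupyterHub.spawner_class = 'dockerspawner.DockerSpawner'
c.DockerSpawner.image = 'jupyter/base-notebook'  # example single-user image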

Docker

A starter docker image for JupyterHub gives a baseline deployment of JupyterHub using Docker.

Important: This quay.io/jupyterhub/jupyterhub image contains only the Hub itself, with no configuration. In general, one needs to make a derivative image, with at least a jupyterhub_config.py setting up an Authenticator and/or a Spawner. To run the single-user servers, which may be on the same system as the Hub or not, Jupyter Notebook version 4 or greater must be installed.

The JupyterHub docker image can be started with the following command:

docker run -p 8000:8000 -d --name jupyterhub quay.io/jupyterhub/jupyterhub jupyterhub

This command will create a container named jupyterhub that you can stop and resume with docker stop/start.

The Hub service will be listening on all interfaces at port 8000, which makes this a good choice for testing JupyterHub on your desktop or laptop.

If you want to run Docker on a computer that has a public IP, then you should (as in MUST) secure it with SSL, either by adding SSL options to your Docker configuration or by using an SSL-enabled proxy.

Mounting volumes will allow you to store data outside the Docker container (on the host system) so it will be persistent, even when you start a new container.

The command docker exec -it jupyterhub bash will spawn a root shell in your docker container. You can use the root shell to create system users in the container. These accounts will be used for authentication in JupyterHub's default configuration.

Contributing

If you would like to contribute to the project, please read our contributor documentation and the CONTRIBUTING.md. The CONTRIBUTING.md file explains how to set up a development installation, how to run the test suite, and how to contribute to documentation.

For a high-level view of the vision and next directions of the project, see the JupyterHub community roadmap.

A note about platform support

JupyterHub is supported on Linux/Unix based systems.

JupyterHub officially does not support Windows. You may be able to use JupyterHub on Windows if you use a Spawner and Authenticator that work on Windows, but the JupyterHub defaults will not. Bugs reported on Windows will not be accepted, and the test suite will not run on Windows. Small patches that fix minor Windows compatibility issues (such as basic installation) may be accepted, however. For Windows-based systems, we would recommend running JupyterHub in a docker container or Linux VM.

Additional Reference: Tornado's documentation on Windows platform support

License

We use a shared copyright model that enables all contributors to maintain the copyright on their contributions.

All code is licensed under the terms of the revised BSD license.

Help and resources

We encourage you to ask questions and share ideas on the Jupyter community forum. You can also talk with us on our JupyterHub Gitter channel.

JupyterHub follows the Jupyter Community Guides.


Technical Overview | Installation | Configuration | Docker | Contributing | License | Help and Resources


configurable-http-proxy's Issues

Need help - Cannot make a simple reverse proxy work

Guys,

First of all, let me thank you for this, as it seems to have exactly what I was looking for in a project of mine that requires dynamically creating reverse proxy routes.

What I'm trying to accomplish here is to define a route that simply and transparently "points" to some other address. What I did so far:

  1. Started CHP using: configurable-http-proxy --default-target=http://localhost:3000 --insecure --log-level debug
  2. Defined a route using: curl -H "Content-Type: application/json" -X "POST" -d '{"target":"https://www.google.com/"}' http://localhost:8001/api/routes/goo/
  3. Tried to access the newly defined route: opened Chrome and navigated to http://localhost:8000/goo

I then received a 404 error from google with the message "The requested URL /goo was not found on this server."

That was not the behaviour I expected. I expected that the internal URL /goo would not be transmitted to the target (which I was able to fix by adding the --no-include-prefix option), and also that all responses from the target would be parsed to "fix" all www.google.com or / URIs with the /goo prefix I just created, thus making this a full-featured reverse proxy. But looking at the content returned by CHP, all original URIs are untouched (this I could not fix at all).

Am I doing anything wrong, or is it not possible to use CHP as a real reverse proxy?

Regards,
/Edson

Are POST requests proxied with the http body?

Hi, I was using your proxy in a non-JupyterHub use case and noticed that my POST body seemed to disappear. Is this a known limitation, or maybe a misconfiguration on my side?

The app is sending something like this:

Host: localhost:8787
Connection: keep-alive
Content-Length: 241
Cache-Control: max-age=0
Origin: http://localhost:8787
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (X11; Fedora; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.59 Safari/537.36
Content-Type: application/x-www-form-urlencoded
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Referer: http://localhost:8787/auth-sign-in
Accept-Encoding: gzip, deflate, br
Accept-Language: en-GB,en-US;q=0.8,en;q=0.6

persist=0&appUri=&clientPath=%2Fauth-sign-in&v=xy...

But if I go through the proxy I am just receiving the headers:

x-forwarded-host: localhost:8000
x-forwarded-proto: http
x-forwarded-port: 8000
x-forwarded-for: ::ffff:172.23.0.1
accept-language: en-GB,en-US;q=0.8,en;q=0.6
accept-encoding: gzip, deflate, br
referer: http://localhost:8000/auth-sign-in
accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
content-type: application/x-www-form-urlencoded
user-agent: Mozilla/5.0 (X11; Fedora; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.59 Safari/537.36
upgrade-insecure-requests: 1
origin: http://localhost:8000
cache-control: max-age=0
content-length: 229
connection: close
host: localhost:8000

Proxy holds many dead connections

The proxy holds on to many dead connections, which didn't happen when serving the notebook directly or behind nginx.
bug
Like this: the connection's destination notebook has already dropped it, but CHP holds those connections forever.

Reverse proxy with subpath

Hi,

I'm following this guide: https://github.com/jupyter/tmpnb

I've set up a local installation for a workshop, which works as expected.

Now I need to reverse proxy this installation with my Apache front web server (to share public URL to users)
I need to reverse proxy like this:
https://my.domain/jupyterworkshop/ ----> http://localserver:8000/

Which implies final urls like https://my.domain/jupyterworkshop/user/XnXqexlEzX8f for example.

But when I apply my ProxyPass rule, it always redirects to https://my.domain/spawn/jupyterworkshop/

Is there a way to specify a subpath to jupyter tmpnb?

I don't know if I'm clear, ask me questions if needed.

BTW thanks for all your work on Jupyter!

"prepend path" option doesn't work properly

For simple proxy:

curl -X POST -H "Content-Type: application/json" -d '{"target": "http://192.168.23.251:49153"}' http://localhost:8001/api/routes/ythub

default GET:

curl http://localhost:8000/ythub/a1

results in the target object that's passed to node-http-proxy:

{ ws: true,
  prependPath: false,
  xfwd: true,
  auth_token: undefined,
  default_target: undefined,
  target: 
   { protocol: 'http:',
     slashes: true,
     auth: null,
     host: '192.168.23.251:49153',
     port: '49153',
     hostname: '192.168.23.251',
     hash: null,
     search: null,
     query: null,
     pathname: '/',
     path: '/',
     href: 'http://192.168.23.251:49153/' } }

and req.url equal to /ythub/a1
I believe that for prependPath to work correctly, target.path should be modified to /ythub (which is defined as prefix in configproxy.js) and req.url should be stripped of it. While I was able to more or less pinpoint this, implementing an actual solution vastly exceeds my puny JS skills :/
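
To make the intended behaviour concrete, here is a small Python sketch of the rewrite being described (illustrative only; CHP's actual code is JavaScript):

def rewrite_path(req_url, prefix, include_prefix=True):
    # Path to forward upstream for a request matched by the routing prefix.
    if include_prefix:
        return req_url                       # /ythub/a1 -> /ythub/a1
    return req_url[len(prefix):] or "/"      # /ythub/a1 -> /a1

assert rewrite_path("/ythub/a1", "/ythub", include_prefix=False) == "/a1"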

Disable SSL V3

It would make my sysadmin very happy if I could disable SSLV3 on this proxy. It appears that it is possible using some obscure options to createServer. (https://gist.github.com/3rd-Eden/715522f6950044da45d8) I think I might even have figured out where to add this to the configproxy.js file.

I cannot find an option for this in the existing code, and my tests using openssl indicate that SSLV3 is still allowed.

Before I try to get this working, just wondering whether I'm alone, or whether I missed an already existing way to do this.

Persistent storage

Hey folks, I have a question about routing table storing. I saw that default store class is in memory and I can pass a storage,

--storage-backend <storage-class> is described as "Use for custom storage classes". What does this actually mean :) and what is the storage-class? My question: is there any way to use persistent storage such as Redis or anything? I don't want to use in-memory storage, because I'd lose my routing table on service restart :) right?

Thanks!

Issues in serving static files

I am trying to use configurable-http-proxy for proxying user-specific MLflow dashboards generated by running mlflow ui. But the proxy fails to serve static files. I see two problems here:

  • Path rewriting is not taken care of
  • Even if I try to access the correct path manually, it fails

Missing health check endpoint

The proxy doesn't have a health check endpoint, which is very handy in a Kubernetes environment for readiness and liveness probes.

(I'm happy to contribute with a PR, when the details are discussed.)
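
In the meantime, one workaround is to treat a successful GET against the routes API as a liveness signal. A sketch (assumes the API is reachable from the probe and that, if authentication is enabled, the token is in CONFIGPROXY_AUTH_TOKEN):

import os
import requests

token = os.environ.get("CONFIGPROXY_AUTH_TOKEN", "")
r = requests.get(
    "http://localhost:8001/api/routes",
    headers={"Authorization": f"token {token}"},
    timeout=5,
)
# exit non-zero so e.g. a Kubernetes exec probe marks the proxy unhealthy
raise SystemExit(0 if r.status_code == 200 else 1)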

Upgrading to HTTPS automatically.

For the sake of JupyterHub, it would be really convenient if, on simple deployments using certs, there was a way to forward port 80 to 443. Perhaps this could be an extra flag that runs a simple redirect server alongside the other services?

var http = require('http');
http.createServer(function (req, res) {
    res.writeHead(301, { "Location": "https://" + req.headers['host'] + req.url });
    res.end();
}).listen(80);

Not sure how best to expose this as an option or if you would want this in here.

YARN gateway

In the context of skein we would be interested in using this project as a dynamic gateway for dask clusters on a YARN deployment. skein would be accessible from the outside (on an "edge node") and handle talking to YARN to create scheduler/worker containers that are unreachable from the outside.

I am thinking that configurable-http-proxy could be called by skein to map a URL such that users can establish TCP connections with schedulers within the cluster, from outside the cluster.

Is this a reasonable use case for configurable-http-proxy? Given that we can handle the skein side to make appropriate REST calls and pass URLs back to the user, how much work would it be to set up such a configuration?

Query by last activity API

It would be nice for jupyter/tmpnb#1 if we could query for a range of routes that were last active during a period.

Normally I would propose:

GET /api/routes?since=<timestamp>

What I'd really like, though, are those that have not been active since some time. I can't think of any good verbiage for this right away, but here's a shot:

GET /api/routes?inactive_since=<timestamp>
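
If that were implemented, client code might look like the following sketch (the inactive_since parameter is the proposal above, not an API that necessarily exists):

import requests
from datetime import datetime, timedelta, timezone

cutoff = (datetime.now(timezone.utc) - timedelta(hours=1)).isoformat()
# hypothetical query parameter from this proposal
r = requests.get("http://localhost:8001/api/routes",
                 params={"inactive_since": cutoff})
stale_routes = r.json()  # routes with no activity in the last hour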

Additional TLS options

It would be useful to pass ca, requestCert and rejectUnauthorized options to the tls server.

TypeError: Cannot read property 'prototype' of undefined

I'm getting an exception when I try to run a freshly installed configurable-http-proxy:

$ sudo apt-get install npm nodejs-legacy
$ sudo npm install -g configurable-http-proxy
$ configurable-http-proxy

util.js:555
  ctor.prototype = Object.create(superCtor.prototype, {
                                          ^
TypeError: Cannot read property 'prototype' of undefined
    at Object.exports.inherits (util.js:555:43)
    at Object.<anonymous> (/usr/local/lib/node_modules/configurable-http-proxy/node_modules/http-proxy/lib/http-proxy/index.js:111:17)
    at Module._compile (module.js:456:26)
    at Object.Module._extensions..js (module.js:474:10)
    at Module.load (module.js:356:32)
    at Function.Module._load (module.js:312:12)
    at Module.require (module.js:364:17)
    at require (module.js:380:17)
    at Object.<anonymous> (/usr/local/lib/node_modules/configurable-http-proxy/node_modules/http-proxy/lib/http-proxy.js:4:17)
    at Module._compile (module.js:456:26)

It was working fine on a machine I installed it on a couple weeks ago, but I reinstalled it today (on this machine and another) and found that it was broken. The working and broken versions were the same (0.2.1). Maybe some dependency in the chain got upgraded and broke things?

Here are the installed dependencies of the broken version:

Unfortunately I didn't get a snapshot of the dependencies of the working version before I reinstalled and broke it.

My node version:

{ http_parser: '1.0',
  node: '0.10.25',
  v8: '3.14.5.9',
  ares: '1.10.0',
  uv: '0.10.23',
  zlib: '1.2.8',
  modules: '11',
  openssl: '1.0.1f',
  npm: '1.3.10' }

If somebody with a working install could compare this against their installed dependencies/versions, I can try to pinpoint what broke.

route deletion leads to default route unavailability

I'm managing configurable-http-proxy in php.

Adding a route works fine:

$s = curl_init();
curl_setopt($s, CURLOPT_POST, 1);
curl_setopt($s, CURLOPT_URL,"http://localhost:81/api/routes/test");
curl_setopt($s, CURLOPT_POSTFIELDS,'{"target":"http://10.10.10.10:1234"}');
$x = curl_exec($s);
curl_close($s);

And everything works fine after adding the route.

But deleting a route causes the default route to be deleted too:

$s = curl_init();
curl_setopt($s, CURLOPT_URL,"http://localhost:81/api/routes/test");
curl_setopt($s, CURLOPT_CUSTOMREQUEST, "DELETE");
$x = curl_exec($s);
curl_close($s);

I'm sure I'm wrong somewhere, but I can't find where.

Thanks

Release 3.1 version

This is a placeholder issue to cut the 3.1 release. FYI @minrk.

The changelog was merged in #131. Let's double check the README, but that can be done after the release too.

Write Out PID To File

It'd be useful to have a command-line option to specify a file for the proxy to put its PID in.

Rewrite project in Python3 to get rid of NodeJS dependency

In enterprise environments it's often a problem to maintain different ecosystems, such as NodeJS. JupyterHub is written in Python, so I don't really see a big reason to have NodeJS alongside, which makes life harder for maintainers and internal package repository maintainers.

I'd suggest scheduling a rewrite of configurable-http-proxy in Python, given that a viable, API-backwards-compatible replacement is available.

An in-range update of request is breaking the build 🚨

☝️ Greenkeeper’s updated Terms of Service will come into effect on April 6th, 2018.

Version 2.84.0 of request was just published.

Branch Build failing 🚨
Dependency request
Current Version 2.83.0
Type devDependency

This version is covered by your current version range and after updating it in your project the build failed.

request is a devDependency of this project. It might not break your production code or affect downstream projects, but probably breaks your build or test tools, which may prevent deploying or publishing.

Status Details
  • continuous-integration/travis-ci/push The Travis CI build failed Details

Commits

The new version differs by 6 commits.

  • d77c839 Update changelog
  • 4b46a13 2.84.0
  • 0b807c6 Merge pull request #2793 from dvishniakov/2792-oauth_body_hash
  • cfd2307 Update hawk to 7.0.7 (#2880)
  • efeaf00 Fixed calculation of oauth_body_hash, issue #2792
  • 253c5e5 2.83.1

See the full diff

FAQ and help

There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.


Your Greenkeeper Bot 🌴

reload ssl certs with SIGHUP

When using Let's Encrypt, rereading the SSL key/cert config is useful. With nginx, etc., sending SIGHUP is a common way to trigger a config reload without relaunching. It would be nice to reread the SSL cert/key files on SIGHUP if we can.
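
The general pattern being asked for, sketched in Python for illustration (CHP itself is Node; the file names are examples):

import signal
import time

def reload_certs(signum, frame):
    # re-read key.pem/cert.pem here and swap them into the running server
    print("SIGHUP received: reloading TLS key/cert")

signal.signal(signal.SIGHUP, reload_certs)

# server keeps running; kill -HUP <pid> triggers the handler
while True:
    time.sleep(60)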

Missing PFX or certificate + private key for SSL config

I'm trying to set up the proxy with an SSL cert.
I'm generating the cert and the key with the following command:
openssl req -newkey rsa:2048 -new -nodes -x509 -days 3650 -keyout key.pem -out cert.pem

Then I try to run the proxy with the following commands:

configurable-http-proxy --ssl-cert=cert.pem --log-level=debug --port=443 --default-target=http://1.1.1.1
configurable-http-proxy --ssl-key=key.pem --log-level=debug --port=443 --default-target=http://1.1.1.1

I keep getting this error:

tls.js:1125
    throw new Error('Missing PFX or certificate + private key.');
          ^
Error: Missing PFX or certificate + private key.
    at Server (tls.js:1125:11)
    at new Server (https.js:35:14)
    at Object.exports.createServer (https.js:54:10)
    at new ConfigurableProxy (/usr/lib/node_modules/configurable-http-proxy/lib/configproxy.js:176:35)
    at Object.<anonymous> (/usr/lib/node_modules/configurable-http-proxy/bin/configurable-http-proxy:183:13)
    at Module._compile (module.js:456:26)
    at Object.Module._extensions..js (module.js:474:10)
    at Module.load (module.js:356:32)
    at Function.Module._load (module.js:312:12)
    at Function.Module.runMain (module.js:497:10)

The proxy works well without the SSL bit.
Thank you for the help.

setTimeout values

Hi.

I have a route which sometimes handles long-running requests (more than 2 minutes).
I realized that the proxy closes the connection after 2 minutes, so I tried to play with timeouts.
When I set these lines in the configurable-http-proxy file, it worked:

const xxx = proxy.proxyServer.listen(listen.port, listen.ip);
const yyy = proxy.apiServer.listen(listen.apiPort, listen.apiIp);
xxx.setTimeout(10 * 60 * 1000);
yyy.setTimeout(10 * 60 * 1000);

Is it possible to modify the timeout values without this hack? (from command line arg or env var or whatever)

Missing `GET /api/routes/{route_spec}` API

The API has a way to create a route using the POST method and delete one using the DELETE method, but there is no way to get information about a route or test whether a route exists.

Expected behavior:

Request:

    GET /api/routes/path1

Response:

    200 OK
    {"target":"http://1.2.3.4:8301","host":"a.example.com","last_activity":"2017-09-14T14:46:49.162Z"}

---
Request:

    GET /api/routes/no-such-route

Response:

    404 Not Found

As of now, the GET /api/routes/{route_spec} API works, but it is equivalent to GET /api/routes: the {route_spec} is completely ignored. When a non-existing route is tried, DELETE returns a 404 status while GET returns 200, which is very confusing.

    DELETE /api/routes/no-such-route
    404 Not Found

    GET /api/routes/no-such-route
    200 OK
    {...}
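
Until a real per-route GET exists, a workaround is to fetch the whole table and check for the key yourself. A sketch (assumes the token, if any, is in CONFIGPROXY_AUTH_TOKEN):

import os
import requests

headers = {"Authorization": "token " + os.environ.get("CONFIGPROXY_AUTH_TOKEN", "")}
routes = requests.get("http://localhost:8001/api/routes", headers=headers).json()
# keys of the returned object are route prefixes
print(routes["/path1"] if "/path1" in routes else "404: no such route")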

Cannot delete child route after deleting parent route

Thanks for making this proxy!

We are finding that deleting /x before deleting /x/y causes an error.

import requests
requests.post('http://localhost:8080/api/routes/x', json={'target': 'http://localhost:5000'})
requests.post('http://localhost:8080/api/routes/x/y', json={'target': 'http://localhost:5001'})
requests.delete('http://localhost:8080/api/routes/x')
requests.delete('http://localhost:8080/api/routes/x/y')

20:01:59.255 - error: [ConfigProxy] Error in handler for DELETE /api/routes/x/y:  TypeError: Cannot read property 'remove' of undefined
    at URLTrie.remove (/usr/lib/node_modules/configurable-http-proxy/lib/trie.js:83:10)
    at ConfigurableProxy.remove_route (/usr/lib/node_modules/configurable-http-proxy/lib/configproxy.js:203:19)
    at ConfigurableProxy.delete_routes (/usr/lib/node_modules/configurable-http-proxy/lib/configproxy.js:265:14)
    at ConfigurableProxy.<anonymous> (/usr/lib/node_modules/configurable-http-proxy/lib/configproxy.js:76:27)
    at Function.<anonymous> (/usr/lib/node_modules/configurable-http-proxy/lib/configproxy.js:26:16)
    at ConfigurableProxy.handle_api_request (/usr/lib/node_modules/configurable-http-proxy/lib/configproxy.js:441:21)
    at Server.<anonymous> (/usr/lib/node_modules/configurable-http-proxy/lib/configproxy.js:154:32)
    at emitTwo (events.js:87:13)
    at Server.emit (events.js:172:7)
    at HTTPParser.parserOnIncoming [as onIncoming] (_http_server.js:537:12)

However, deleting /x/y before deleting /x works as expected.

import requests
requests.post('http://localhost:8080/api/routes/x', json={'target': 'http://localhost:5000'})
requests.post('http://localhost:8080/api/routes/x/y', json={'target': 'http://localhost:5001'})
requests.delete('http://localhost:8080/api/routes/x/y')
requests.delete('http://localhost:8080/api/routes/x')

Review conda installation and node version

@minrk I recently tried to run another project that required node 8. What I noticed was that installing CHP forced node to be downgraded to v6 (alternatively, upgrading to nodejs v8 forced CHP down to 1.3). I looked through the config files and I'm unclear as to why conda is doing that. Perhaps somewhere 3.1 and 1.3 are being transposed for node 8???

(basepy3) 
carol at cw-pro in ~
$ node -v
v6.12.0
(basepy3) 
carol at cw-pro in ~
$ configurable-http-proxy --version
3.1.0
(basepy3) 
carol at cw-pro in ~
$ jupyterhub --version
0.8.1
(basepy3) 
carol at cw-pro in ~
$ conda list | grep nodejs
nodejs                    6.12.0                        0    conda-forge
(basepy3) 
carol at cw-pro in ~
$ source deactivate

carol at cw-pro in ~
$ conda create -n chp python=3
Fetching package metadata .............
Solving package specifications: .

Package plan for installation in environment /Users/carol/miniconda3/envs/chp:

The following NEW packages will be INSTALLED:

    ca-certificates: 2017.11.5-0      conda-forge
    certifi:         2017.11.5-py36_0 conda-forge
    ncurses:         5.9-10           conda-forge
    openssl:         1.0.2n-0         conda-forge
    pip:             9.0.1-py36_0     conda-forge
    python:          3.6.3-4          conda-forge
    readline:        7.0-0            conda-forge
    setuptools:      38.2.4-py36_0    conda-forge
    sqlite:          3.20.1-0         conda-forge
    tk:              8.6.7-0          conda-forge
    wheel:           0.30.0-py_1      conda-forge
    xz:              5.2.3-0          conda-forge
    zlib:            1.2.11-0         conda-forge

Proceed ([y]/n)? y

#
# To activate this environment, use:
# > source activate chp
#
# To deactivate an active environment, use:
# > source deactivate
#


carol at cw-pro in ~
$ source activate chp
(chp) 
carol at cw-pro in ~
$ conda install configurable-http-proxy
Fetching package metadata .............
Solving package specifications: .

Package plan for installation in environment /Users/carol/miniconda3/envs/chp:

The following NEW packages will be INSTALLED:

    configurable-http-proxy: 3.1.0-0  conda-forge
    nodejs:                  6.12.0-0 conda-forge

Proceed ([y]/n)? y

(chp) 
carol at cw-pro in ~
$ conda update nodejs
Fetching package metadata .............
Solving package specifications: .

Package plan for installation in environment /Users/carol/miniconda3/envs/chp:

The following packages will be UPDATED:

    nodejs:                  6.12.0-0 conda-forge --> 8.8.1-0 conda-forge

The following packages will be DOWNGRADED:

    configurable-http-proxy: 3.1.0-0  conda-forge --> 1.3.0-0 conda-forge

Proceed ([y]/n)? y

nodejs-8.8.1-0 100% |################################| Time: 0:00:05   2.65 MB/s
configurable-h 100% |################################| Time: 0:00:00  24.56 MB/s
(chp) 
carol at cw-pro in ~
$ 

Sockets ballooning proportionally to the number of websockets opened by clients.

This is dually reported as jupyter/tmpnb#73, as it was noticed with a relatively large number of users.

Sockets are ballooning within the node proxy when handling the websockets. A port gets allocated for each one between node and Docker.

Every 2.0s: sudo lsof -i | grep node                                                                                                                      Thu Oct 23 22:58:28 2014

sudo: unable to resolve host dev
node    3924 nobody   10u  IPv4 117123      0t0  TCP *:8000 (LISTEN)
node    3924 nobody   11u  IPv4 117124      0t0  TCP ip6-localhost:8001 (LISTEN)
node    3924 nobody   12u  IPv4 117125      0t0  TCP ip6-localhost:8001->ip6-localhost:59783 (ESTABLISHED)
node    3924 nobody   13u  IPv4 185713      0t0  TCP 23.253.157.134:8000->83-244-151-247.cust-83.exponential-e.net:56590 (ESTABLISHED)
node    3924 nobody   14u  IPv4 184626      0t0  TCP 23.253.157.134:8000->83-244-151-247.cust-83.exponential-e.net:56577 (ESTABLISHED)
node    3924 nobody   15u  IPv4 184628      0t0  TCP ip6-localhost:40600->ip6-localhost:49155 (ESTABLISHED)
node    3924 nobody   16u  IPv4 182588      0t0  TCP 23.253.157.134:8000->83-244-151-247.cust-83.exponential-e.net:56578 (ESTABLISHED)
node    3924 nobody   17u  IPv4 182590      0t0  TCP ip6-localhost:40607->ip6-localhost:49155 (ESTABLISHED)
node    3924 nobody   18u  IPv4 186387      0t0  TCP 23.253.157.134:8000->83-244-151-247.cust-83.exponential-e.net:56579 (ESTABLISHED)
node    3924 nobody   19u  IPv4 186389      0t0  TCP ip6-localhost:40611->ip6-localhost:49155 (ESTABLISHED)
node    3924 nobody   20u  IPv4 185715      0t0  TCP ip6-localhost:40636->ip6-localhost:49155 (ESTABLISHED)
node    3924 nobody   21u  IPv4 175024      0t0  TCP 23.253.157.134:8000->83-244-151-247.cust-83.exponential-e.net:56591 (ESTABLISHED)
node    3924 nobody   22u  IPv4 175026      0t0  TCP ip6-localhost:40642->ip6-localhost:49155 (ESTABLISHED)
node    3924 nobody   23u  IPv4 151837      0t0  TCP 23.253.157.134:8000->83-244-151-247.cust-83.exponential-e.net:56592 (ESTABLISHED)
node    3924 nobody   24u  IPv4 151839      0t0  TCP ip6-localhost:40648->ip6-localhost:49155 (ESTABLISHED)
node    3924 nobody   25u  IPv4 184792      0t0  TCP 23.253.157.134:8000->83-244-151-247.cust-83.exponential-e.net:56602 (ESTABLISHED)
node    3924 nobody   26u  IPv4 184794      0t0  TCP ip6-localhost:40672->ip6-localhost:49155 (ESTABLISHED)
node    3924 nobody   27u  IPv4 173469      0t0  TCP 23.253.157.134:8000->83-244-151-247.cust-83.exponential-e.net:56603 (ESTABLISHED)
node    3924 nobody   28u  IPv4 173471      0t0  TCP ip6-localhost:40678->ip6-localhost:49155 (ESTABLISHED)
node    3924 nobody   29u  IPv4 184813      0t0  TCP 23.253.157.134:8000->83-244-151-247.cust-83.exponential-e.net:56604 (ESTABLISHED)
node    3924 nobody   30u  IPv4 184815      0t0  TCP ip6-localhost:40683->ip6-localhost:49155 (ESTABLISHED)

nodejs assertion leads to proxy death

In our jupyter.cloudet.xyz deploy, we occasionally get the following crash that ultimately leads to proxy death. This is using the latest jupyter/configurable-http-proxy docker image from Docker Hub:

17:06:26.765 - info: [ConfigProxy] Proxying http://*:8000 to http://127.0.0.1:9999
17:06:26.770 - info: [ConfigProxy] Proxy API at http://localhost:8001/api/routes
21:47:17.939 - error: [ConfigProxy] Proxy error:  code=ECONNRESET
21:47:17.942 - error: [ConfigProxy] 503 GET /spawn/user/ApCsqQZgteFU/notebooks/widgets/examples/urth-pyspark-streaming.ipynb
00:40:17.180 - error: [ConfigProxy] Proxy error:  code=ECONNRESET
00:40:17.181 - error: [ConfigProxy] 503 GET /spawn/user/ApCsqQZgteFU/notebooks/widgets/examples/urth-pyspark-streaming.ipynb
00:40:17.183 - error: [ConfigProxy] Proxy error:  code=ECONNRESET
00:40:17.183 - error: [ConfigProxy] 503 GET /user/ApCsqQZgteFU/notebooks/widgets/examples/urth-pyspark-streaming.ipynb
00:40:17.405 - error: [ConfigProxy] Proxy error:  code=ECONNRESET
00:40:17.405 - error: [ConfigProxy] 503 GET /spawn/user/ApCsqQZgteFU/notebooks/index.ipynb

assert.js:89
  throw new assert.AssertionError({
  ^
AssertionError: false == true
    at ServerResponse.resOnFinish (_http_server.js:474:7)
    at emitNone (events.js:72:20)
    at ServerResponse.emit (events.js:166:7)
    at finish (_http_outgoing.js:529:10)
    at doNTCallback0 (node.js:407:9)
    at process._tickCallback (node.js:336:13)

Different behavior from proxy targets

Hello,

Test with google.com

I tried to proxy to www.google.com with configurable-http-proxy --insecure --log-level debug --default-target https://www.google.com/; the log shows the proxy rule is created:

12:09:54.249 - warn: [ConfigProxy] REST API is not authenticated.
12:09:54.253 - info: [ConfigProxy] Adding route / -> https://www.google.com/
12:09:54.258 - info: [ConfigProxy] Proxying http://*:8000 to https://www.google.com/
12:09:54.259 - info: [ConfigProxy] Proxy API at http://localhost:8001/api/routes

But I cannot reach Google from http://127.0.0.1:8000; after waiting for a while, it looks like it was trying to reach https://www.google.com:8000/.

Test with simple Flask application

I built a simple web service with Python, e.g.:

from flask import Flask


app = Flask(__name__)

@app.route('/')
def index():
    return "Hello"

if __name__ == '__main__':
    app.run(port=5002)

I can proxy it with configurable-http-proxy --default-target http://127.0.0.1:5002

Test with more complex Flask application

I changed the default target to another URL, e.g. http://127.0.0.1:5050, which I built myself for another purpose. Of course I can reach it at http://127.0.0.1:5050, but I cannot reach it from http://127.0.0.1:8000; the target service's log shows not found:

127.0.0.1 - - [08/Nov/2017 12:21:59] "GET / HTTP/1.1" 404 -

So, now I'm confused. I assume the above three cases should behave the same, but they give different results.

Any follow up is appreciated!

-Tong

host routing

Would be nice to have the ability to use hostnames to route requests. This would allow the use of configurable-http-proxy to route to a demo application root.

Token Authorization to Route

I'm interested in using the proxy with a token auth scheme. Is it out of scope for configurable-http-proxy to check a token/JWT before routing?
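
As far as I know, CHP does not check tokens on proxied routes today; the usual pattern is a thin auth layer in front of the backend. A minimal sketch of such a gate in Flask (hypothetical; real deployments should verify a signed JWT rather than compare raw strings):

from flask import Flask, abort, request

app = Flask(__name__)
VALID_TOKENS = {"secret-token"}  # placeholder; use real verification in practice

@app.before_request
def require_token():
    auth = request.headers.get("Authorization", "")
    if not (auth.startswith("token ") and auth.split(" ", 1)[1] in VALID_TOKENS):
        abort(403)

@app.route("/")
def index():
    return "authorized"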

503 GET socket hang up with Websocket

Hi,
I manage to proxy websockets with nginx, but with CHP it doesn't work. I also tried with http-proxy and I always get the same result.
I have a qemu VM with websocket option on 10.22.9.172:6740. So on my proxy (10.22.9.119), I use the following command : configurable-http-proxy --default-target=ws://10.22.9.172:6740 --port 81
On Firefox, I use the vnc_auto.html file from the noVNC project with the following URL: http://194.X.X.X:Y/vnc_auto.html?host=10.22.9.119&port=81
Thanks for your help

Help running the docker container

Requesting just a little bit of documentation for use of the Docker container.

I've tried running the docker container like so:

docker run -it -p 80:8000 -p 8001:8001 jupyterhub/configurable-http-proxy --default-target http://localhost:8080

But the server running on port 8080 doesn't receive requests, and the proxy reports a refused connection, as follows:

06:26:23.522 - error: [ConfigProxy] Proxy error:  Error: connect ECONNREFUSED 127.0.0.1:8080
    at Object.exports._errnoException (util.js:893:11)
    at exports._exceptionWithHostPort (util.js:916:20)
    at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1075:14)

I've confirmed I can reach the server on port 8080 directly.

I can fix this by changing the default route to the machine's actual IP.

docker run -it -p 80:8000 -p 8001:8001 jupyterhub/configurable-http-proxy --default-target http://10.0.0.7:8080

If I try to reach the API, I can do a GET, but it doesn't report the default route.

› curl http://localhost:8001/api/routes
curl: (52) Empty reply from server

POSTing a new route does not work...

curl -i -X POST http://localhost:8001/api/routes -d '{"/foo":{"target":"http://localhost:8082"}}'

curl: (52) Empty reply from server

curl http://localhost:8001/api/routes
curl: (52) Empty reply from server

I've also tried deriving a container and exposing the 8001 port explicitly, but I don't see any difference in behavior.

503 error

CHP doesn't work for my docker image anymore. I have been testing my IPython notebook running in a docker image for about two weeks. I had no problems until a few days ago, when I got the 503 error telling me that my upstream service is unavailable. What did you change in this update? Could you send me the previous copy so I can go back to the working version?

Define a standard, reproducible way to load test CHP

Knowing the amount of load and number of routes that CHP can handle is important when doing large deployments of JupyterHub. Having a standard and easy way to load test this would be very useful. There are a bunch of factors that should be measured (a starting-point sketch follows the list):

  1. Number of route definitions in place
  2. Number of open websocket connections
  3. Rate of activity over the websocket connections
  4. Rate of regular http requests proxied
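
A starting-point sketch for factors 1 and 4 (route-table size versus plain-HTTP latency), assuming a CHP running on the default ports with an unauthenticated API and a backend listening on 127.0.0.1:9000:

import time
import requests

API = "http://localhost:8001/api/routes"
PROXY = "http://localhost:8000"

# 1. register many routes pointing at one backend (assumed to be running)
for i in range(1000):
    requests.post(f"{API}/r{i}", json={"target": "http://127.0.0.1:9000"})

# 4. measure proxied request latency with the table populated
start = time.time()
for _ in range(100):
    requests.get(f"{PROXY}/r42/", timeout=5)
print("mean latency (s):", (time.time() - start) / 100)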

Version 10 of node.js has been released

Version 10 of Node.js (code name Dubnium) has been released! 🎊

To see what happens to your code in Node.js 10, Greenkeeper has created a branch with the following changes:

  • Added the new Node.js version to your .travis.yml
  • The new Node.js version is in-range for the engines in 1 of your package.json files, so that was left alone

If you’re interested in upgrading this repo to Node.js 10, you can open a PR with these changes. Please note that this issue is just intended as a friendly reminder and the PR as a possible starting point for getting your code running on Node.js 10.

More information on this issue

Greenkeeper has checked the engines key in any package.json file, the .nvmrc file, and the .travis.yml file, if present.

  • engines was only updated if it defined a single version, not a range.
  • .nvmrc was updated to Node.js 10
  • .travis.yml was only changed if there was a root-level node_js that didn’t already include Node.js 10, such as node or lts/*. In this case, the new version was appended to the list. We didn’t touch job or matrix configurations because these tend to be quite specific and complex, and it’s difficult to infer what the intentions were.

For many simpler .travis.yml configurations, this PR should suffice as-is, but depending on what you’re doing it may require additional work or may not be applicable at all. We’re also aware that you may have good reasons to not update to Node.js 10, which is why this was sent as an issue and not a pull request. Feel free to delete it without comment, I’m a humble robot and won’t feel rejected 🤖


FAQ and help

There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.


Your Greenkeeper Bot 🌴

Using node:alpine as base image?

Setting the base image to node:alpine and building the Docker container, I got an image of 61 MB, whereas the node:5-slim based image is 209 MB. I am testing the Alpine Linux based image in Jupyter tmpnb notebooks on our university server; no problems so far.

My consideration is not only about size. node:alpine has 13 (apk-installed) packages, each of them really necessary, but the "slim" variant (Debian based) has 135 packages, some of them quite superfluous in this concrete case (in reality, I have been a Debian user for many years...).

Are there any reasons not to use node:alpine as the base image?

An in-range update of ws is breaking the build 🚨

Version 3.3.2 of ws was just published.

Branch Build failing 🚨
Dependency ws
Current Version 3.3.1
Type devDependency

This version is covered by your current version range and after updating it in your project the build failed.

ws is a devDependency of this project. It might not break your production code or affect downstream projects, but probably breaks your build or test tools, which may prevent deploying or publishing.

Status Details
  • continuous-integration/travis-ci/push The Travis CI build could not complete due to an error Details

Release Notes 3.3.2

Bug fixes

  • The parser of the Sec-WebSocket-Extensions header has been rewritten to make
    it spec-compliant (#1240).
Commits

The new version differs by 11 commits.

  • 46b2547 [dist] 3.3.2
  • 02d0011 [test] Use an OS-assigned arbitrary unused port
  • 9c73abe [minor] Remove some redundant code
  • 0a9621f [minor] Merge pushOffer() and pushParam() into push()
  • 16f727d [doc] Clarify PerMessageDeflate options
  • 0c8c0b8 [minor] Add JSDoc for PerMessageDeflate constructor
  • b4465f6 [test] Increase code coverage
  • 5d973fb [minor] Parse the Sec-WebSocket-Extensions header only when necessary
  • b7089ff [fix] Rewrite the parser of the Sec-WebSocket-Extensions header
  • d96c58c chore(package): update eslint to version 4.11.0 (#1234)
  • cfdecae [security] Add DoS vulnerablity to SECURITY.md

See the full diff

FAQ and help

There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.


Your Greenkeeper Bot 🌴

A / A+ TLS termination config by default for CHP

Hello JupyterHub Experts:

I have a JupyterHub deployment on Docker Swarm on Linux. The JupyterHub docker image is pushed to the local registry listening on port 5000. Besides the JupyterHub image there isn't any other image in this repo.

The following are the configuration entries in the JupyterHub config file. Despite explicitly setting it to TLSv1.2, TLSv1.0 and TLSv1.1 are still being offered and getting flagged in the scan. How do I overcome this? The intent here is to only have TLSv1.2, not the older versions.

c.ConfigurableHTTPProxy.command = ['configurable-http-proxy', '--ssl-protocol', 'TLSv1.2']
c.JupyterHub.proxy_cmd = ['configurable-http-proxy', '--ssl-protocol=TLSv1.2']

"The following is an extract from the TLS Scan Report"

TLSv1.0:
server selection: enforce server preferences
3f- (key: RSA) ECDHE_RSA_WITH_AES_128_CBC_SHA
3f- (key: RSA) ECDHE_RSA_WITH_AES_256_CBC_SHA
3-- (key: RSA) RSA_WITH_AES_128_CBC_SHA
3-- (key: RSA) RSA_WITH_AES_256_CBC_SHA
TLSv1.1: idem
TLSv1.2:
server selection: enforce server preferences
3f- (key: RSA) ECDHE_RSA_WITH_AES_128_GCM_SHA256
3f- (key: RSA) ECDHE_RSA_WITH_AES_128_CBC_SHA
3f- (key: RSA) ECDHE_RSA_WITH_AES_256_CBC_SHA
3-- (key: RSA) RSA_WITH_AES_128_CBC_SHA
3-- (key: RSA) RSA_WITH_AES_256_CBC_SHA
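
One way to double-check what the server actually negotiates, independent of the scanner, is to pin a client to a single TLS version and see whether the handshake succeeds. A sketch with Python's ssl module (host/port are placeholders; note that a modern local OpenSSL may itself refuse to offer TLSv1.0/1.1):

import socket
import ssl

def accepts(host, port, version):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    ctx.minimum_version = version  # pin the handshake to exactly one version
    ctx.maximum_version = version
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except (ssl.SSLError, OSError):
        return False

for v in (ssl.TLSVersion.TLSv1, ssl.TLSVersion.TLSv1_1, ssl.TLSVersion.TLSv1_2):
    print(v.name, accepts("hub.example.com", 443, v))  # placeholder host/port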
