jupyterhub / configurable-http-proxy
node-http-proxy plus a REST API
License: BSD 3-Clause "New" or "Revised" License
Related to jupyterhub/jupyterhub#1289.
Use case:
CHP is listening on port 8000 (since it's running as user Nobody), but Kubernetes maps that to port 80 on an external IP. We want to do HTTP -> HTTPS redirects here, but CHP assumes that bind IP == connect IP, and hence redirects users to its local IP address, which cannot be resolved from the external internet.
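A fix could derive the redirect target from the request's Host header instead of the bind address. A minimal sketch of the idea (illustrative only, not CHP's actual code; the function name is made up):

```python
# Sketch: build an HTTP -> HTTPS redirect Location from the incoming
# Host header rather than the proxy's bind IP, so clients behind a port
# mapping (e.g. Kubernetes 80 -> 8000) get back a resolvable URL.
def https_redirect_location(host_header: str, path: str) -> str:
    host = host_header.split(":")[0]  # drop any explicit port
    return "https://" + host + path
```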
@minrk I recently tried to run another project that required node 8. What I noticed was that installing CHP forced node to be downgraded to v6 (alternatively, upgrading to nodejs v8 forced CHP down to 1.3). I looked through the config files and I'm unclear as to why conda is doing that. Perhaps somewhere 3.1 and 1.3 are being transposed for node 8?
(basepy3)
carol at cw-pro in ~
$ node -v
v6.12.0
(basepy3)
carol at cw-pro in ~
$ configurable-http-proxy --version
3.1.0
(basepy3)
carol at cw-pro in ~
$ jupyterhub --version
0.8.1
(basepy3)
carol at cw-pro in ~
$ conda list | grep nodejs
nodejs 6.12.0 0 conda-forge
(basepy3)
carol at cw-pro in ~
$ source deactivate
carol at cw-pro in ~
$ conda create -n chp python=3
Fetching package metadata .............
Solving package specifications: .
Package plan for installation in environment /Users/carol/miniconda3/envs/chp:
The following NEW packages will be INSTALLED:
ca-certificates: 2017.11.5-0 conda-forge
certifi: 2017.11.5-py36_0 conda-forge
ncurses: 5.9-10 conda-forge
openssl: 1.0.2n-0 conda-forge
pip: 9.0.1-py36_0 conda-forge
python: 3.6.3-4 conda-forge
readline: 7.0-0 conda-forge
setuptools: 38.2.4-py36_0 conda-forge
sqlite: 3.20.1-0 conda-forge
tk: 8.6.7-0 conda-forge
wheel: 0.30.0-py_1 conda-forge
xz: 5.2.3-0 conda-forge
zlib: 1.2.11-0 conda-forge
Proceed ([y]/n)? y
#
# To activate this environment, use:
# > source activate chp
#
# To deactivate an active environment, use:
# > source deactivate
#
carol at cw-pro in ~
$ source activate chp
(chp)
carol at cw-pro in ~
$ conda install configurable-http-proxy
Fetching package metadata .............
Solving package specifications: .
Package plan for installation in environment /Users/carol/miniconda3/envs/chp:
The following NEW packages will be INSTALLED:
configurable-http-proxy: 3.1.0-0 conda-forge
nodejs: 6.12.0-0 conda-forge
Proceed ([y]/n)? y
(chp)
carol at cw-pro in ~
$ conda update nodejs
Fetching package metadata .............
Solving package specifications: .
Package plan for installation in environment /Users/carol/miniconda3/envs/chp:
The following packages will be UPDATED:
nodejs: 6.12.0-0 conda-forge --> 8.8.1-0 conda-forge
The following packages will be DOWNGRADED:
configurable-http-proxy: 3.1.0-0 conda-forge --> 1.3.0-0 conda-forge
Proceed ([y]/n)? y
nodejs-8.8.1-0 100% |################################| Time: 0:00:05 2.65 MB/s
configurable-h 100% |################################| Time: 0:00:00 24.56 MB/s
(chp)
carol at cw-pro in ~
$
I'm getting an exception when I try to run a freshly installed configurable-http-proxy:
$ sudo apt-get install npm nodejs-legacy
$ sudo npm install -g configurable-http-proxy
$ configurable-http-proxy
util.js:555
ctor.prototype = Object.create(superCtor.prototype, {
^
TypeError: Cannot read property 'prototype' of undefined
at Object.exports.inherits (util.js:555:43)
at Object.<anonymous> (/usr/local/lib/node_modules/configurable-http-proxy/node_modules/http-proxy/lib/http-proxy/index.js:111:17)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Function.Module._load (module.js:312:12)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at Object.<anonymous> (/usr/local/lib/node_modules/configurable-http-proxy/node_modules/http-proxy/lib/http-proxy.js:4:17)
at Module._compile (module.js:456:26)
It was working fine on a machine I installed it on a couple of weeks ago, but I reinstalled it today (on this machine and another) and found that it was broken. The working and broken versions were the same (0.2.1). Maybe some dependency in the chain got upgraded and broke things?
Here are the installed dependencies of the broken version:
├── [email protected]
├── [email protected] ([email protected])
├── [email protected] ([email protected])
└── [email protected] ([email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected])
Unfortunately I didn't get a snapshot of the dependencies of the working version before I reinstalled and broke it.
My node version:
{ http_parser: '1.0',
node: '0.10.25',
v8: '3.14.5.9',
ares: '1.10.0',
uv: '0.10.23',
zlib: '1.2.8',
modules: '11',
openssl: '1.0.1f',
npm: '1.3.10' }
If somebody with a working install could compare this against their installed dependencies/versions, I can try to pinpoint what broke.
Thanks for making this proxy!
We are finding that deleting /x before deleting /x/y causes an error.
import requests
requests.post('http://localhost:8080/api/routes/x', json={'target': 'http://localhost:5000'})
requests.post('http://localhost:8080/api/routes/x/y', json={'target': 'http://localhost:5001'})
requests.delete('http://localhost:8080/api/routes/x')
requests.delete('http://localhost:8080/api/routes/x/y')
20:01:59.255 - error: [ConfigProxy] Error in handler for DELETE /api/routes/x/y: TypeError: Cannot read property 'remove' of undefined
at URLTrie.remove (/usr/lib/node_modules/configurable-http-proxy/lib/trie.js:83:10)
at ConfigurableProxy.remove_route (/usr/lib/node_modules/configurable-http-proxy/lib/configproxy.js:203:19)
at ConfigurableProxy.delete_routes (/usr/lib/node_modules/configurable-http-proxy/lib/configproxy.js:265:14)
at ConfigurableProxy.<anonymous> (/usr/lib/node_modules/configurable-http-proxy/lib/configproxy.js:76:27)
at Function.<anonymous> (/usr/lib/node_modules/configurable-http-proxy/lib/configproxy.js:26:16)
at ConfigurableProxy.handle_api_request (/usr/lib/node_modules/configurable-http-proxy/lib/configproxy.js:441:21)
at Server.<anonymous> (/usr/lib/node_modules/configurable-http-proxy/lib/configproxy.js:154:32)
at emitTwo (events.js:87:13)
at Server.emit (events.js:172:7)
at HTTPParser.parserOnIncoming [as onIncoming] (_http_server.js:537:12)
However, deleting /x/y before deleting /x works as expected.
import requests
requests.post('http://localhost:8080/api/routes/x', json={'target': 'http://localhost:5000'})
requests.post('http://localhost:8080/api/routes/x/y', json={'target': 'http://localhost:5001'})
requests.delete('http://localhost:8080/api/routes/x/y')
requests.delete('http://localhost:8080/api/routes/x')
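A guard in the trie's recursive remove would make the first ordering behave like the second. A rough Python sketch of the idea (illustrative, not CHP's actual trie.js):

```python
# Minimal URL prefix trie: remove() bails out when a node on the path is
# already gone, instead of dereferencing a missing child (the source of
# the "Cannot read property 'remove' of undefined" TypeError).
class URLTrie:
    def __init__(self):
        self.children = {}
        self.data = None

    def add(self, path, data):
        node = self
        for part in path.strip("/").split("/"):
            node = node.children.setdefault(part, URLTrie())
        node.data = data

    def remove(self, path):
        node = self
        for part in path.strip("/").split("/"):
            node = node.children.get(part)
            if node is None:
                return  # path already pruned; nothing to do
        node.data = None
```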
Hello JupyterHub Experts:
I have a JupyterHub deployment on Docker Swarm on Linux. The JupyterHub docker image is pushed to the local registry listening on port 5000. Besides the JupyterHub image, there isn't any other image in this repo.
The following are the configuration entries in the JupyterHub config file. Despite explicitly setting it to TLSv1.2, TLSv1.0 and TLSv1.1 are still present and getting flagged in the scan. How do I overcome this? The intent here is to only have TLSv1.2, not the older versions.
c.ConfigurableHTTPProxy.command = ['configurable-http-proxy', '--ssl-protocol', 'TLSv1.2']
c.JupyterHub.proxy_cmd = ['configurable-http-proxy', '--ssl-protocol=TLSv1.2']
The following is an extract from the TLS scan report:
TLSv1.0:
server selection: enforce server preferences
3f- (key: RSA) ECDHE_RSA_WITH_AES_128_CBC_SHA
3f- (key: RSA) ECDHE_RSA_WITH_AES_256_CBC_SHA
3-- (key: RSA) RSA_WITH_AES_128_CBC_SHA
3-- (key: RSA) RSA_WITH_AES_256_CBC_SHA
TLSv1.1: idem
TLSv1.2:
server selection: enforce server preferences
3f- (key: RSA) ECDHE_RSA_WITH_AES_128_GCM_SHA256
3f- (key: RSA) ECDHE_RSA_WITH_AES_128_CBC_SHA
3f- (key: RSA) ECDHE_RSA_WITH_AES_256_CBC_SHA
3-- (key: RSA) RSA_WITH_AES_128_CBC_SHA
3-- (key: RSA) RSA_WITH_AES_256_CBC_SHA
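Independently of the scan report, a client-side probe can confirm whether the proxy still accepts TLS 1.0. A sketch using Python's ssl module (this is an assumption about how one might test it, not anything CHP ships):

```python
import socket
import ssl

def tls1_0_probe_context() -> ssl.SSLContext:
    # Cap the client at TLS 1.0; a handshake that succeeds with this
    # context means the server still accepts the old protocol.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    ctx.maximum_version = ssl.TLSVersion.TLSv1
    return ctx

def accepts_tls1_0(host: str, port: int) -> bool:
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with tls1_0_probe_context().wrap_socket(sock):
                return True
    except (ssl.SSLError, OSError):
        return False
```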
It would be helpful to add a small graphic to the section that mentions 2 HTTP(S) servers.
It would be useful to pass ca, requestCert and rejectUnauthorized options to the tls server.
In https://github.com/jupyterhub/configurable-http-proxy/blob/master/bin/configurable-http-proxy#L216, we caution against using '*' as an ip in the docs and in the warning a few lines above, yet it is displayed by L216. Perhaps change it to '0.0.0.0' or '', or emit a warning like the one a few lines above.
This is also reported as jupyter/tmpnb#73, as it was noticed with a relatively large number of users.
Sockets are ballooning within the node proxy when handling websockets. A port gets allocated for each one between node and Docker.
Every 2.0s: sudo lsof -i | grep node Thu Oct 23 22:58:28 2014
sudo: unable to resolve host dev
node 3924 nobody 10u IPv4 117123 0t0 TCP *:8000 (LISTEN)
node 3924 nobody 11u IPv4 117124 0t0 TCP ip6-localhost:8001 (LISTEN)
node 3924 nobody 12u IPv4 117125 0t0 TCP ip6-localhost:8001->ip6-localhost:59783 (ESTABLISHED)
node 3924 nobody 13u IPv4 185713 0t0 TCP 23.253.157.134:8000->83-244-151-247.cust-83.exponential-e.net:56590 (ESTABLISHED)
node 3924 nobody 14u IPv4 184626 0t0 TCP 23.253.157.134:8000->83-244-151-247.cust-83.exponential-e.net:56577 (ESTABLISHED)
node 3924 nobody 15u IPv4 184628 0t0 TCP ip6-localhost:40600->ip6-localhost:49155 (ESTABLISHED)
node 3924 nobody 16u IPv4 182588 0t0 TCP 23.253.157.134:8000->83-244-151-247.cust-83.exponential-e.net:56578 (ESTABLISHED)
node 3924 nobody 17u IPv4 182590 0t0 TCP ip6-localhost:40607->ip6-localhost:49155 (ESTABLISHED)
node 3924 nobody 18u IPv4 186387 0t0 TCP 23.253.157.134:8000->83-244-151-247.cust-83.exponential-e.net:56579 (ESTABLISHED)
node 3924 nobody 19u IPv4 186389 0t0 TCP ip6-localhost:40611->ip6-localhost:49155 (ESTABLISHED)
node 3924 nobody 20u IPv4 185715 0t0 TCP ip6-localhost:40636->ip6-localhost:49155 (ESTABLISHED)
node 3924 nobody 21u IPv4 175024 0t0 TCP 23.253.157.134:8000->83-244-151-247.cust-83.exponential-e.net:56591 (ESTABLISHED)
node 3924 nobody 22u IPv4 175026 0t0 TCP ip6-localhost:40642->ip6-localhost:49155 (ESTABLISHED)
node 3924 nobody 23u IPv4 151837 0t0 TCP 23.253.157.134:8000->83-244-151-247.cust-83.exponential-e.net:56592 (ESTABLISHED)
node 3924 nobody 24u IPv4 151839 0t0 TCP ip6-localhost:40648->ip6-localhost:49155 (ESTABLISHED)
node 3924 nobody 25u IPv4 184792 0t0 TCP 23.253.157.134:8000->83-244-151-247.cust-83.exponential-e.net:56602 (ESTABLISHED)
node 3924 nobody 26u IPv4 184794 0t0 TCP ip6-localhost:40672->ip6-localhost:49155 (ESTABLISHED)
node 3924 nobody 27u IPv4 173469 0t0 TCP 23.253.157.134:8000->83-244-151-247.cust-83.exponential-e.net:56603 (ESTABLISHED)
node 3924 nobody 28u IPv4 173471 0t0 TCP ip6-localhost:40678->ip6-localhost:49155 (ESTABLISHED)
node 3924 nobody 29u IPv4 184813 0t0 TCP 23.253.157.134:8000->83-244-151-247.cust-83.exponential-e.net:56604 (ESTABLISHED)
node 3924 nobody 30u IPv4 184815 0t0 TCP ip6-localhost:40683->ip6-localhost:49155 (ESTABLISHED)
It'd be useful to have a command-line option to specify a file for the proxy to put its pid in.
To see what happens to your code in Node.js 10, Greenkeeper has created a branch with changes to the following files:
.travis.yml
package.json
If you're interested in upgrading this repo to Node.js 10, you can open a PR with these changes. Please note that this issue is just intended as a friendly reminder and the PR as a possible starting point for getting your code running on Node.js 10.
Greenkeeper has checked the engines key in any package.json file, the .nvmrc file, and the .travis.yml file, if present.
The engines key was only updated if it defined a single version, not a range; otherwise it was left alone.
The .nvmrc file was updated to Node.js 10.
The .travis.yml file was only changed if there was a root-level node_js list that didn't already include Node.js 10, such as node or lts/*. In this case, the new version was appended to the list. We didn't touch job or matrix configurations because these tend to be quite specific and complex, and it's difficult to infer what the intentions were.
For many simpler .travis.yml configurations, this PR should suffice as-is, but depending on what you're doing it may require additional work or may not be applicable at all. We're also aware that you may have good reasons to not update to Node.js 10, which is why this was sent as an issue and not a pull request. Feel free to delete it without comment, I'm a humble robot and won't feel rejected 🤖
There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.
Your Greenkeeper Bot 🌴
See minrk/chpbench#1 (comment) for rationale.
More things to add:
It would be nice for jupyter/tmpnb#1 if we could query for a range of routes that were last active during a period.
Normally I would propose:
GET /api/routes?since=<timestamp>
What I'd really like, though, are those that have not been active since some time. I can't think of any good verbiage for this right away, but here's a shot:
GET /api/routes?inactive_since=<timestamp>
This would allow creation of alternative proxying setups (See jupyterhub/jupyterhub#109 for a possible use case) that 'conform to the spec'.
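Server-side, the proposed filter is a straightforward comparison against each route's existing last_activity field. A hypothetical sketch (the function and its assumption of ISO 8601 timestamps are illustrative):

```python
from datetime import datetime

def inactive_routes(routes: dict, inactive_since: str) -> dict:
    # Keep only routes whose last_activity predates the cutoff; both the
    # query value and last_activity are assumed to be ISO 8601 strings.
    cutoff = datetime.fromisoformat(inactive_since)
    return {
        prefix: info
        for prefix, info in routes.items()
        if datetime.fromisoformat(info["last_activity"]) < cutoff
    }
```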
It would make my sysadmin very happy if I could disable SSLV3 on this proxy. It appears that it is possible using some obscure options to createServer. (https://gist.github.com/3rd-Eden/715522f6950044da45d8) I think I might even have figured out where to add this to the configproxy.js file.
I cannot find an option for this in the existing code, and my tests using openssl indicate that SSLV3 is still allowed.
Before I try to get this working, just wondering whether I'm alone, or whether I missed an already existing way to do this.
The doc/rest-api.yml file should be reviewed and updated if needed.
Guys,
First of all, let me thank you for this, as it seems to have exactly what I was looking for, for a project of mine that requires me to dynamically create reverse proxy routes.
What I'm trying to accomplish here is to define a route that simply and transparently "points" to some other address. What I did so far:
configurable-http-proxy --default-target=http://localhost:3000 --insecure --log-level debug
curl -H "Content-Type: application/json" -X "POST" -d '{"target":"https://www.google.com/"}' http://localhost:8001/api/routes/goo/
I then received a 404 error from google with the message "The requested URL /goo was not found on this server."
That was not the behaviour I expected. I expected that the internal URL /goo would not be transmitted to the target (which I was able to fix by adding the --no-include-prefix option), and also that all responses from the target would be parsed to "fix" all www.google.com or / URIs with the prefix /goo I just created, thus making this a full-featured reverse proxy; but looking at the code returned by CHP, all original URIs are untouched (this I could not fix at all).
Am I doing anything wrong, or is it not possible to use CHP as a real reverse proxy?
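For reference, the prefix handling that --no-include-prefix toggles amounts to stripping the routing prefix from the forwarded path; a rough sketch of that mapping (illustrative, not CHP's code). Rewriting URLs inside response bodies, as a full rewriting reverse proxy would, is a separate concern:

```python
def strip_prefix(url_path: str, prefix: str) -> str:
    # Forward /goo/search as /search when the route prefix is /goo;
    # paths outside the prefix pass through untouched.
    prefix = "/" + prefix.strip("/")
    if url_path == prefix or url_path.startswith(prefix + "/"):
        remainder = url_path[len(prefix):]
        return remainder or "/"
    return url_path
```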
Regards,
/Edson
Hi,
I managed to proxy websockets with nginx, but with CHP it doesn't work. I also tried with http-proxy and I always obtain the same result.
I have a qemu VM with the websocket option on 10.22.9.172:6740. So on my proxy (10.22.9.119), I use the following command: configurable-http-proxy --default-target=ws://10.22.9.172:6740 --port 81
In Firefox, I use the vnc_auto.html file from the noVNC project with the following url: http://194.X.X.X:Y/vnc_auto.html?host=10.22.9.119&port=81
Thanks for your help
Any interest in writing routes to disk as a possible storage backend?
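A minimal sketch of what a file-backed store might look like (hypothetical, not an existing CHP backend), writing atomically so a crash mid-write can't corrupt the table:

```python
import json
import os
import tempfile

def save_routes(routes: dict, path: str) -> None:
    # Write to a temp file in the same directory, then rename over the
    # target: os.replace is atomic on POSIX, so readers never observe a
    # half-written table.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump(routes, f)
    os.replace(tmp, path)

def load_routes(path: str) -> dict:
    with open(path) as f:
        return json.load(f)
```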
For simple proxy:
curl -X POST -H "Content-Type: application/json" -d '{"target": "http://192.168.23.251:49153"}' http://localhost:8001/api/routes/ythub
A default GET:
curl http://localhost:8000/ythub/a1
results in the following target object that's passed to node-http-proxy:
{ ws: true,
prependPath: false,
xfwd: true,
auth_token: undefined,
default_target: undefined,
target:
{ protocol: 'http:',
slashes: true,
auth: null,
host: '192.168.23.251:49153',
port: '49153',
hostname: '192.168.23.251',
hash: null,
search: null,
query: null,
pathname: '/',
path: '/',
href: 'http://192.168.23.251:49153/' } }
and req.url equal to /ythub/a1.
I believe that for prependPath to work correctly, target.path should be modified to /ythub (which is defined as prefix in configproxy.js) and req.url should be stripped of it. While I was able to more or less pinpoint this, implementing an actual solution vastly exceeds my puny js skills :/
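The fix described would amount to joining the target's pathname with the request path minus the prefix. A sketch of the desired mapping (names mirror the report; the function itself is hypothetical):

```python
def forwarded_url(target_path: str, prefix: str, req_url: str) -> str:
    # /ythub/a1 routed with prefix /ythub and target pathname / should
    # reach the backend as /a1; a non-root target pathname is prepended.
    remainder = req_url[len(prefix):] if req_url.startswith(prefix) else req_url
    joined = target_path.rstrip("/") + "/" + remainder.lstrip("/")
    return joined if joined.startswith("/") else "/" + joined
```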
Hey folks, I have a question about routing table storage. I saw that the default store class is in-memory and that I can pass a storage class:
--storage-backend <storage-class> Use for custom storage classes
What does this actually mean :) and what's the storage-class? Is there any way to use persistent storage such as redis or anything? I don't want to use in-memory storage, since I'd lose my routing table on service restart :) right?
Thanks!
In our jupyter.cloudet.xyz deployment, we occasionally get the following crash that ultimately leads to proxy death. This is using the latest jupyter/configurable-http-proxy docker image from Docker Hub:
17:06:26.765 - info: [ConfigProxy] Proxying http://*:8000 to http://127.0.0.1:9999
17:06:26.770 - info: [ConfigProxy] Proxy API at http://localhost:8001/api/routes
21:47:17.939 - error: [ConfigProxy] Proxy error: code=ECONNRESET
21:47:17.942 - error: [ConfigProxy] 503 GET /spawn/user/ApCsqQZgteFU/notebooks/widgets/examples/urth-pyspark-streaming.ipynb
00:40:17.180 - error: [ConfigProxy] Proxy error: code=ECONNRESET
00:40:17.181 - error: [ConfigProxy] 503 GET /spawn/user/ApCsqQZgteFU/notebooks/widgets/examples/urth-pyspark-streaming.ipynb
00:40:17.183 - error: [ConfigProxy] Proxy error: code=ECONNRESET
00:40:17.183 - error: [ConfigProxy] 503 GET /user/ApCsqQZgteFU/notebooks/widgets/examples/urth-pyspark-streaming.ipynb
00:40:17.405 - error: [ConfigProxy] Proxy error: code=ECONNRESET
00:40:17.405 - error: [ConfigProxy] 503 GET /spawn/user/ApCsqQZgteFU/notebooks/index.ipynb
assert.js:89
throw new assert.AssertionError({
^
AssertionError: false == true
at ServerResponse.resOnFinish (_http_server.js:474:7)
at emitNone (events.js:72:20)
at ServerResponse.emit (events.js:166:7)
at finish (_http_outgoing.js:529:10)
at doNTCallback0 (node.js:407:9)
at process._tickCallback (node.js:336:13)
Setting the base image to node:alpine and building the docker container, I got an image of size 61 MB, whilst the node:5-slim image has size 209 MB. I am testing the Alpine Linux based image in Jupyter tmpnb notebooks on our university server, with no problems so far.
My consideration is not only about size. node:alpine has 13 (apk-installed) packages, each of them really necessary, but the "slim" variant (debian-based) has 135 packages, some of them quite superfluous in this concrete case (in reality, I have been a debian user for many years...).
Are there any reasons not to use node:alpine as the base image?
When using letsencrypt, rereading the SSL key/cert config is useful. With nginx, etc., sending SIGHUP is a common way to trigger a config reload without relaunching. It would be nice to reread the ssl cert/key files on SIGHUP if we can.
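The wiring for this pattern is small; a Python sketch of the convention (load_certs stands in for whatever re-reads the key/cert files and swaps them into the live TLS context):

```python
import signal

def install_reload_handler(load_certs):
    # Re-read TLS material whenever the process receives SIGHUP, the
    # same convention nginx uses for 'reload config without restart'.
    signal.signal(signal.SIGHUP, lambda signum, frame: load_certs())
```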
I am trying to use configurable-http-proxy for proxying user-specific MLflow dashboards generated by running mlflow ui. But the proxy fails to serve static files. I see two problems here:
Hi.
I have a route which sometimes handles long running requests (more than 2 minutes).
I realized that the proxy closes the connection after 2 minutes and I tried to play with timeouts.
When I set these lines in the configurable-http-proxy file, it worked:
const xxx = proxy.proxyServer.listen(listen.port, listen.ip);
const yyy = proxy.apiServer.listen(listen.apiPort, listen.apiIp);
xxx.setTimeout(10 * 60 * 1000);
yyy.setTimeout(10 * 60 * 1000);
Is it possible to modify the timeout values without this hack? (from command line arg or env var or whatever)
Should the argument or variable name change here?
https://github.com/jupyterhub/configurable-http-proxy/blob/master/bin/configurable-http-proxy#L181
In enterprise environments it's often a problem to maintain different ecosystems, such as NodeJS. JupyterHub is written in Python, so I don't really see a big reason to have NodeJS alongside it, which makes the lives of maintainers and internal package repository maintainers harder.
I'd suggest scheduling a rewrite of configurable-http-proxy in Python, given that a viable API-backwards-compatible replacement is available.
For the sake of JupyterHub, it would be really convenient on simple deployments using certs if there were a way to forward port 80 to 443. Perhaps this could be an extra flag that runs a simple redirect server alongside the other services?
var http = require('http');
http.createServer(function (req, res) {
res.writeHead(301, { "Location": "https://" + req.headers['host'] + req.url });
res.end();
}).listen(80);
Not sure how best to expose this as an option or if you would want this in here.
Branch: Build failing 🚨
Dependency: ws
Current Version: 3.3.1
Type: devDependency
This version is covered by your current version range and after updating it in your project the build failed.
ws is a devDependency of this project. It might not break your production code or affect downstream projects, but probably breaks your build or test tools, which may prevent deploying or publishing.
The parser of the Sec-WebSocket-Extensions header has been rewritten. The new version differs by 11 commits.
46b2547
[dist] 3.3.2
02d0011
[test] Use an OS-assigned arbitrary unused port
9c73abe
[minor] Remove some redundant code
0a9621f
[minor] Merge pushOffer() and pushParam() into push()
16f727d
[doc] Clarify PerMessageDeflate options
0c8c0b8
[minor] Add JSDoc for PerMessageDeflate constructor
b4465f6
[test] Increase code coverage
5d973fb
[minor] Parse the Sec-WebSocket-Extensions header only when necessary
b7089ff
[fix] Rewrite the parser of the Sec-WebSocket-Extensions header
d96c58c
chore(package): update eslint to version 4.11.0 (#1234)
cfdecae
[security] Add DoS vulnerablity to SECURITY.md
See the full diff
There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.
Your Greenkeeper Bot 🌴
The API has a way to create a route using the POST method and delete one using the DELETE method, but there is no way to get information about a route or test whether a route exists.
Expected behavior:
Request:
GET /api/routes/path1
Response:
200 OK
{"target":"http://1.2.3.4:8301","host":"a.example.com","last_activity":"2017-09-14T14:46:49.162Z"}
---
Request:
GET /api/routes/no-such-route
Response:
404 Not Found
As of now, the GET /api/routes/{route_spec} API works and is equivalent to GET /api/routes; the {route_spec} is completely ignored. When a nonexistent route is tried, DELETE returns a 404 status and GET returns a 200 status, which is very confusing.
DELETE /api/routes/no-such-route
404 Not Found
GET /api/routes/no-such-route
200 OK
{...}
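Until GET /api/routes/{route_spec} is implemented, a client can emulate the existence check by fetching the whole table and testing the prefix locally. A small sketch (routes stands in for the parsed JSON body of GET /api/routes):

```python
def route_exists(routes: dict, prefix: str) -> bool:
    # Normalize to the canonical '/prefix' form used as keys in the
    # routing table, then test membership.
    return "/" + prefix.strip("/") in routes
```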
I'm interested in using a proxy with a token auth scheme. Is it out of scope for configurable-http-proxy to check a token/jwt before routing?
Hi all,
I am trying to install jupyterproxy in the Cisco PNDA framework but can't access the following URL.
https://github.com/jupyterhub/configurable-http-proxy/archive/
Hello,
I tried to proxy to www.google.com with configurable-http-proxy --insecure --log-level debug --default-target https://www.google.com/ and the log shows the proxy rule is created:
12:09:54.249 - warn: [ConfigProxy] REST API is not authenticated.
12:09:54.253 - info: [ConfigProxy] Adding route / -> https://www.google.com/
12:09:54.258 - info: [ConfigProxy] Proxying http://*:8000 to https://www.google.com/
12:09:54.259 - info: [ConfigProxy] Proxy API at http://localhost:8001/api/routes
But I cannot reach google from http://127.0.0.1:8000; after waiting for a while, it looks like it was trying to reach https://www.google.com:8000/.
I built a simple web service with Python, e.g.:
from flask import Flask
app = Flask(__name__)
@app.route('/')
def index():
return "Hello"
if __name__ == '__main__':
app.run(port=5002)
I can proxy it with configurable-http-proxy --default-target http://127.0.0.1:5002
I changed the default target to another URL, e.g. http://127.0.0.1:5050, which I built myself for another purpose. Of course I can reach it at http://127.0.0.1:5050, but I cannot reach it from http://127.0.0.1:8000; the target service log shows not found:
127.0.0.1 - - [08/Nov/2017 12:21:59] "GET / HTTP/1.1" 404 -
So, now I'm confused, I assume the above three cases should be the same, but just with different results.
Any follow up is appreciated!
-Tong
The proxy doesn't have a health check endpoint, which would be very handy in a Kubernetes environment for readiness and liveness probes.
(I'm happy to contribute with a PR, when the details are discussed.)
I'm managing configurable-http-proxy from PHP.
Adding a route works fine:
$s = curl_init();
curl_setopt($s, CURLOPT_POST, 1);
curl_setopt($s, CURLOPT_URL,"http://localhost:81/api/routes/test");
curl_setopt($s, CURLOPT_POSTFIELDS,'{"target":"http://10.10.10.10:1234"}');
$x = curl_exec($s);
curl_close($s);
And everything works fine after adding the route.
But deleting a route causes the default route to be deleted too:
$s = curl_init();
curl_setopt($s, CURLOPT_URL,"http://localhost:81/api/routes/test");
curl_setopt($s, CURLOPT_CUSTOMREQUEST, "DELETE");
$x = curl_exec($s);
curl_close($s);
I'm sure I'm wrong somewhere, but I can't find where.
Thanks
☝️ Greenkeeper’s updated Terms of Service will come into effect on April 6th, 2018.
Branch: Build failing 🚨
Dependency: request
Current Version: 2.83.0
Type: devDependency
This version is covered by your current version range and after updating it in your project the build failed.
request is a devDependency of this project. It might not break your production code or affect downstream projects, but probably breaks your build or test tools, which may prevent deploying or publishing.
The new version differs by 6 commits.
d77c839
Update changelog
4b46a13
2.84.0
0b807c6
Merge pull request #2793 from dvishniakov/2792-oauth_body_hash
cfd2307
Update hawk to 7.0.7 (#2880)
efeaf00
Fixed calculation of oauth_body_hash, issue #2792
253c5e5
2.83.1
See the full diff
There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.
Your Greenkeeper Bot 🌴
Requesting just a little bit of documentation for use of the Docker container.
I've tried running the docker container like so:
docker run -it -p 80:8000 -p 8001:8001 jupyterhub/configurable-http-proxy --default-target http://localhost:8080
But the server running on port 8080 doesn't receive requests, and the proxy reports a refused connection as so:
06:26:23.522 - error: [ConfigProxy] Proxy error: Error: connect ECONNREFUSED 127.0.0.1:8080
at Object.exports._errnoException (util.js:893:11)
at exports._exceptionWithHostPort (util.js:916:20)
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1075:14)
I've confirmed I can reach the server on port 8080 directly.
I can fix this by changing the default route to the machine's actual IP.
docker run -it -p 80:8000 -p 8001:8001 jupyterhub/configurable-http-proxy --default-target http://10.0.0.7:8080
If I try to reach the API, I can do a GET, but it doesn't report the default route.
› curl http://localhost:8001/api/routes
curl: (52) Empty reply from server
POSTing a new route does not work...
curl -i -X POST http://localhost:8001/api/routes -d '{"/foo":{"target":"http://localhost:8082"}}'
curl: (52) Empty reply from server
curl http://localhost:8001/api/routes
curl: (52) Empty reply from server
I've also tried deriving a container and exposing the 8001 port explicitly, but I don't see any difference in behavior.
#72 pins jasmine to 2.4 since jasmine 2.5 is producing no output.
Pending: jasmine/jasmine-npm#90
After pending jasmine-npm issue is resolved, unpin and support 2.5.
CHP doesn't work for my docker image anymore. I have been testing my IPython notebook running in a docker image for about two weeks. I had no problems until a few days ago, when I got a 503 error telling me that my upstream service is unavailable. What did you change with this update? Could you send me the previous copy so I can go back to a working version?
In the context of skein, we would be interested in using this project as a dynamic gateway for dask clusters on a YARN deployment. skein would be accessible from the outside (on an "edge node") and handle talking to YARN to create scheduler/worker containers that are unreachable from the outside.
I am thinking that configurable-http-proxy could be called by skein to map a URL such that users can establish TCP connections with schedulers within the cluster, from outside the cluster.
Is this a reasonable use case for configurable-http-proxy? Given that we can handle the skein side to make appropriate REST calls and pass URLs back to the user, how much work would it be to set up such a configuration?
It would be nice to have the ability to use hostnames to route requests. This would allow using configurable-http-proxy to route to a demo application root.
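Conceptually, host-based routing would key the routing table on the Host header instead of (or in addition to) the URL prefix. A hypothetical sketch of the lookup:

```python
def route_for_host(host_routes: dict, host_header: str):
    # Strip any explicit port before the lookup, since Host may arrive
    # as 'demo.example.com:8000'; returns None when no route matches.
    return host_routes.get(host_header.split(":")[0])
```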
Knowing the amount of load / routes that CHP can handle is important when doing large deployments of JupyterHub. Having a standard and easy way to load test this would be very useful. There are a bunch of factors that should be measured:
Hi,
I'm following this guide: https://github.com/jupyter/tmpnb
I've set up a local installation for a workshop, which works as expected.
Now I need to reverse proxy this installation with my Apache front web server (to share public URL to users)
I need to reverse proxy like this:
https://my.domain/jupyterworkshop/ ----> http://localserver:8000/
Which implies final urls like https://my.domain/jupyterworkshop/user/XnXqexlEzX8f, for example.
But when I set up my ProxyPass rule, it always redirects to https://my.domain/spawn/jupyterworkshop/
Is there a way to specify a subpath to jupyter tmpnb?
I don't know if I'm clear, ask me questions if needed.
BTW thanks for all your work on Jupyter!
Hi, I was using your proxy in a non-jupyterhub use case and noticed that my POST body seemed to disappear. Is this a known limitation, or maybe a misconfiguration on my side?
The app is sending something like this:
Host: localhost:8787
Connection: keep-alive
Content-Length: 241
Cache-Control: max-age=0
Origin: http://localhost:8787
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (X11; Fedora; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.59 Safari/537.36
Content-Type: application/x-www-form-urlencoded
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Referer: http://localhost:8787/auth-sign-in
Accept-Encoding: gzip, deflate, br
Accept-Language: en-GB,en-US;q=0.8,en;q=0.6
persist=0&appUri=&clientPath=%2Fauth-sign-in&v=xy...
But if I go through the proxy I am just receiving the headers:
x-forwarded-host: localhost:8000
x-forwarded-proto: http
x-forwarded-port: 8000
x-forwarded-for: ::ffff:172.23.0.1
accept-language: en-GB,en-US;q=0.8,en;q=0.6
accept-encoding: gzip, deflate, br
referer: http://localhost:8000/auth-sign-in
accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
content-type: application/x-www-form-urlencoded
user-agent: Mozilla/5.0 (X11; Fedora; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.59 Safari/537.36
upgrade-insecure-requests: 1
origin: http://localhost:8000
cache-control: max-age=0
content-length: 229
connection: close
host: localhost:8000
I'm trying to set up the proxy with an SSL cert.
I'm generating the cert and the key with the following command:
openssl req -newkey rsa:2048 -new -nodes -x509 -days 3650 -keyout key.pem -out cert.pem
I then try to run the proxy with the following commands:
configurable-http-proxy --ssl-cert=cert.pem --log-level=debug --port=443 --default-target=http://1.1.1.1
configurable-http-proxy --ssl-key=key.pem --log-level=debug --port=443 --default-target=http://1.1.1.1
I keep getting this error:
tls.js:1125
throw new Error('Missing PFX or certificate + private key.');
^
Error: Missing PFX or certificate + private key.
at Server (tls.js:1125:11)
at new Server (https.js:35:14)
at Object.exports.createServer (https.js:54:10)
at new ConfigurableProxy (/usr/lib/node_modules/configurable-http-proxy/lib/configproxy.js:176:35)
at Object.<anonymous> (/usr/lib/node_modules/configurable-http-proxy/bin/configurable-http-proxy:183:13)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Function.Module._load (module.js:312:12)
at Function.Module.runMain (module.js:497:10)
The proxy works well without the SSL bit.
Thank you for the help.