
hipache's Introduction

Hipache: a Distributed HTTP and WebSocket Proxy


DEPRECATED

This project is officially deprecated due to upstream inactivity (last updated Feb 2015, 2d36766; last release Apr 2014, 0.3.1).

The following is a list of other HTTP proxies which might be suitable replacements depending on your needs:

  • traefik
  • vulcand
  • nginx
  • haproxy
  • httpd

WARNING

This is the documentation for master. If you are installing Hipache from NPM, you should look at the documentation on the 0.3.x branch.

What Is It?

Hipache (pronounced hɪ'pætʃɪ) is a fully-featured distributed proxy designed to route high volumes of HTTP and WebSocket traffic to unusually large numbers of virtual hosts, in a highly dynamic topology where backends are added and removed several times per second. It is particularly well-suited for PaaS (Platform-as-a-Service) and other environments that are both business-critical and multi-tenant.

Hipache was originally developed at dotCloud, a popular platform-as-a-service, to replace its first-generation routing layer based on a heavily instrumented nginx deployment. It currently serves production traffic for tens of thousands of applications hosted on dotCloud. Hipache is based on the node-http-proxy library.

Run It!

1. Installation

From the shell:

$ npm install hipache -g

The '-g' option will make the 'hipache' bin-script available system-wide (usually linked from '/usr/local/bin').

2. Configuration File

Basic Hipache configuration is described in a config.json file. For example, this is the configuration file for the master version of Hipache (i.e., the development version; if you installed a stable release, refer to that version's documentation instead):

{
    "server": {
        "debug": false,
        "workers": 10,
        "maxSockets": 100,
        "tcpTimeout": 30,
        "deadBackendTTL": 30,
        "retryOnError": 3,
        "accessLog": "/var/log/hipache/access.log",
        "httpKeepAlive": false,
        "deadBackendOn500": true,
        "staticDir": null
    },
    "http": {
        "port": 80,
        "bind": ["127.0.0.1"]
    },
    "https": {
        "bind": [],
        "port": 443,
        "ca": [],
        "secureProtocol": "SSLv23_method",
        "secureOptions": 50331648,
        "key": "/etc/ssl/ssl.key",
        "cert": "/etc/ssl/ssl.crt",
        "passphrase": undefined,
        "ciphers": "DH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS !RC4",
        "honorCipherOrder": true
    },
    "driver": "redis:",
    "user": "www-data",
    "group": "www-data"
}
  • server: generic server settings, like accesslog location, or number of workers
    • server.debug: debug mode.
    • server.workers: number of workers to be spawned. You must configure at least 1 worker, since the master process does not serve any requests. Defaults to 10 if not specified.
    • server.maxSockets: the maximum number of sockets which can be opened on each backend (per worker). Defaults to 100 if not specified.
    • server.tcpTimeout: the number of seconds of socket inactivity to wait before timing it out. If set to 0, the idle timeout is disabled. Defaults to 30 seconds.
    • server.deadBackendTTL: the number of seconds a backend is flagged as 'dead' before retrying to proxy another request to it (doesn't apply if you are using a third-party health checker). Defaults to 30 seconds.
    • server.retryOnError: retries limit. Defaults to 3.
    • server.accessLog: location of the access log; the format is the same as nginx's. Defaults to /var/log/hipache/access.log if not specified.
    • server.httpKeepAlive: enable/disable keep-alive functionality. Defaults to false (disabled).
    • server.deadBackendOn500: if set to true, an HTTP 500 status code from a backend is considered a critical error. Defaults to true.
    • server.staticDir: the absolute path of the directory containing your custom static error pages. Defaults to null, which means Hipache's own static/ directory is used.
  • http: specifies on which ips/ports Hipache will listen for http traffic. By default, Hipache listens only on 127.0.0.1:80
    • http.port: port to listen to for http. Defaults to 80.
    • http.bind: IPv4 (or IPv6) address, or addresses to listen to. You can specify a single ip, an array of ips, or an array of objects {address: IP, port: PORT} if you want to use a specific port on a specific ip. Defaults to 127.0.0.1.
  • https: specifies on which ips/ports Hipache will listen for https traffic. By default, Hipache doesn't listen for https traffic.
    • https.port: port to listen to for https. Defaults to 443.
    • https.key: path to key file to use. No default.
    • https.passphrase: optional passphrase for the key file. No default.
    • https.cert: path to certificate file to use. No default.
    • https.ca: optional path to additional CA file to serve. Might be a string, or an array.
    • https.bind: similarly to http.bind, you can specify a single IP, an array of IPs, or an array of objects to override the port and key/cert/ca files on a per-IP basis (see the example fragment after this list).
    • https.secureProtocol: SSL/TLS protocol to use. Defaults to SSLv23_method (auto-negotiation).
    • https.secureOptions: extra options to pass to the SSL/TLS layer. Raw values must be provided. For instance, the default is 50331648, which stands for SSL_OP_NO_SSLv3 | SSL_OP_NO_SSLv2 (constants).
    • https.ciphers: cipher suites. See the default value above.
    • https.honorCipherOrder: when choosing a cipher, use the server's preferences instead of the client preferences. Defaults to true.
  • driver: driver URL to connect to for dynamic VHOST configurations. See drivers section for more information. Defaults to redis:.
  • user: if starting as root (which you might do if you want to use a privileged port), Hipache will drop root privileges as soon as the ports are bound. Defaults to www-data. Note that you MUST specify a user if you start Hipache as root. You can specify user: 'root' if you don't mind (strongly discouraged!). You can use either user names or identifiers.
  • group: if starting as root, will downgrade group to this. If left empty, will try to downgrade to a group named after the specified user. Defaults to www-data.
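
To illustrate the bind forms described above, here is a hedged sketch of a configuration fragment; the addresses, extra port, and file paths are purely illustrative, not defaults:

{
    "http": {
        "port": 80,
        "bind": ["127.0.0.1", "::1", {"address": "10.0.0.1", "port": 8080}]
    },
    "https": {
        "port": 443,
        "key": "/etc/ssl/ssl.key",
        "cert": "/etc/ssl/ssl.crt",
        "bind": [
            "127.0.0.1",
            {"address": "10.0.0.2", "key": "/etc/ssl/other.key", "cert": "/etc/ssl/other.crt"}
        ]
    }
}

Under this reading, connections arriving on 10.0.0.2 would use the overridden key/cert, while the other addresses would fall back to the top-level ones.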

3. Spawning

From the shell (defaults to using the config/config.json file):

$ hipache

If you use a privileged port (e.g.: 80):

$ sudo hipache

If you want to use a specific configuration file:

$ hipache --config path/to/someConfig.json

If you want to just test a specific configuration file:

$ hipache --dry --config path/to/someConfig.json

Managing multiple configuration files:

The default configuration file is config/config.json. It's possible to have different configuration files named config_<suffix>.json, where the suffix is the value of an environment variable named SETTINGS_FLAVOR.

For instance, here is how to spawn the server with the config_test.json configuration file in order to run the tests.

$ SETTINGS_FLAVOR=test hipache

4. VHOST Configuration

All VHOST configuration is managed through a configuration backend (cf. drivers). This makes it possible to update the configuration dynamically and gracefully while the server is running, and have that state shared across workers and even across Hipache instances.

The recommended backend is Redis, which also makes it simple to write configuration adapters: it would be trivial to load a plain text configuration file into Redis (and update it at runtime).

Different configuration adapters will follow, but for the moment you have to provision Redis manually.
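
As a hedged sketch of such manual provisioning, a plain-text virtual-host list could be pushed into Redis with a small shell loop; the file name and line format below (hostname, identifier, then backend URLs) are hypothetical:

    # vhosts.txt contains lines such as:
    #   www.dotcloud.com mywebsite http://192.168.0.42:80 http://192.168.0.43:80
    while read host entries; do
        redis-cli del "frontend:$host"                 # drop any stale configuration for this host
        for entry in $entries; do
            redis-cli rpush "frontend:$host" "$entry"  # identifier first, then each backend
        done
    done < vhosts.txt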

Let's take an example: proxying requests to 2 backends for the hostname www.dotcloud.com. The 2 backends' IPs are 192.168.0.42 and 192.168.0.43, and they serve HTTP traffic on port 80.

redis-cli is the standard client tool to talk to Redis from the terminal.

Follow these steps:

  1. Create the frontend and associate an identifier:

     $ redis-cli rpush frontend:www.dotcloud.com mywebsite
     (integer) 1
    

The frontend identifier is mywebsite; it could be anything.

  2. Associate the 2 backends:

     $ redis-cli rpush frontend:www.dotcloud.com http://192.168.0.42:80
     (integer) 2
     $ redis-cli rpush frontend:www.dotcloud.com http://192.168.0.43:80
     (integer) 3
    
  3. Review the configuration:

     $ redis-cli lrange frontend:www.dotcloud.com 0 -1
     1) "mywebsite"
     2) "http://192.168.0.42:80"
     3) "http://192.168.0.43:80"
    

While the server is running, any of these steps can be re-run without disrupting traffic.
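
For instance, a backend can be swapped live (the replacement address 192.168.0.44 is just an example): LREM removes the old entry from the list, and RPUSH appends the new one.

$ redis-cli lrem frontend:www.dotcloud.com 0 http://192.168.0.42:80
(integer) 1
$ redis-cli rpush frontend:www.dotcloud.com http://192.168.0.44:80
(integer) 3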

5. OS Integration

Upstart

Copy upstart.conf to /etc/init/hipache.conf. Then you can use:

start hipache
stop hipache
restart hipache

The configuration file used is /etc/hipache.json.

Drivers

Hipache supports several drivers for dynamic VHOST configurations.

Redis

This is the default backend.

If you want a master/slave Redis, specify a second URL for the master, e.g.: driver: ["redis://slave:port", "redis://master:port"]. More generally, the driver syntax is: redis://:password@host:port/database#prefix - all parameters are optional, hence just redis: is a valid driver URI. You can omit this entirely to use the local Redis on the default port, which is the default.
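
For example, all of the following are plausible driver values under that syntax (host names, password, database number, and prefix are illustrative):

    "driver": "redis:"
    "driver": "redis://127.0.0.1:6379"
    "driver": "redis://:s3cret@redis.internal:6379/1#hipache"
    "driver": ["redis://slave.internal:6379", "redis://master.internal:6379"]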

Memcached

See the drivers documentation.

etcd

See the drivers documentation.

Zookeeper

See the drivers documentation.

Features

Load-Balancing Across Multiple Backends

As seen in the example above, multiple backends can be attached to a frontend.

All requests coming to the frontend are load-balanced across all healthy backends.

The backend to use for a specific request is determined randomly. Subsequent requests coming from the same client won't necessarily be routed to the same backend (since backend selection is purely random).

Dead Backend Detection

If a backend stops responding, it will be flagged as dead for a configurable amount of time. The dead backend will be temporarily removed from the load-balancing rotation.

Multi-Process Architecture

To optimize response times and make use of all your available cores, Hipache uses the cluster module (included in NodeJS), and spreads the load across multiple NodeJS processes. A master process is in charge of spawning workers and monitoring them. When a worker dies, the master spawns a new one.

Memory Monitoring

The memory footprint of Hipache tends to grow slowly over time, indicating a probable memory leak. A close examination did not turn up any memory leak in Hipache's own code, but that doesn't prove there is none. We also have not yet thoroughly investigated the code of Hipache's external dependencies, so leaks could be lurking there.

While we profile Hipache's memory to further reduce its footprint, we implemented a memory monitoring system to make sure that memory use doesn't go out of bounds. Each worker monitors its memory usage. If it crosses a given threshold, the worker stops accepting new connections, it lets the current requests complete cleanly, and it stops itself; it is then replaced by a new copy by the master process.

Dynamic Configuration

You can alter the configuration stored in Redis at any time. There is no need to restart Hipache, or to signal it that the configuration has changed: Hipache will re-query Redis at each request. Worried about performance? We were, too! And we found out that accessing a local Redis is helluva fast. So fast that it doesn't measurably increase HTTP request latency!

WebSocket

Hipache supports the WebSocket protocol. It doesn't do any fancy handling on its own and relies entirely on NodeJS and node-http-proxy.

SSL

Hipache supports SSL for "regular" requests as well as WebSocket upgrades. Hipache's default configuration matches the latest recommendations for a secure and well-configured SSL/TLS layer.

Custom HTML Error Pages

When something wrong happens (e.g., a backend times out), or when a request for an undefined virtual host comes in, Hipache will display an error page. Those error pages can be customized, and a configuration parameter (server.staticDir) is available to specify where these custom pages are located.

Wildcard Domains Support

When adding virtual hosts in Hipache configuration, you can specify wildcards. E.g., instead of (or in addition to) www.example.tld, you can insert *.example.tld. Hipache will look for an exact match first, and then for a wildcard one up to 5 subdomains deep, e.g. foo.bar.baz.qux.quux will attempt to match itself first, then *.bar.baz.qux.quux, then *.baz.qux.quux, etc.
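
Following the same redis-cli pattern as in the VHOST configuration section, a wildcard entry might look like this (the identifier and backend address are illustrative):

$ redis-cli rpush 'frontend:*.example.tld' mywildcardsite
(integer) 1
$ redis-cli rpush 'frontend:*.example.tld' http://192.168.0.50:80
(integer) 2

A request for foo.example.tld with no exact frontend:foo.example.tld entry would then fall back to this wildcard.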

Active Health-Check

Even though Hipache supports passive health checks, it's also possible to run active health checks. This mechanism requires running an external program (see third-party software below).

Contributing

See CONTRIBUTING.md

Third-Party Software of Interest

Health-checkers:

A web interface to manage VHOSTs:

PaaS


hipache's Issues

Multiple SSL Keys

If listening on multiple IP addresses, it would be nice to be able to use an SSL key pair per address; this way you can front different SSL sites in the one proxy. :)

Redis configuration

The redis configuration in config.json doesn't work.
Based on the content of cache.js, the configuration in config.json has to be

{
    ...
    "redisHost": "host",
    "redisPort": port,
    ...
}

instead of

"redis": {
    "host": "host",
    "port": port
}

configure hipache with Tomcat?

Since Hipache seems to be a replacement for Apache HTTPD, how do we configure Hipache in front of some Tomcats too?

We have a few nodejs instances that Hipache would proxy, but the existing infrastructure still needs to be integrated too, so Apache Tomcat in our case.

Thank you very much.

Long Polling

How well does Hipache handle long polling (if at all), say for browsers that don't support WebSockets?

SETTINGS_FLAVOR ignored

root@2f45f3e5078b:/# SETTINGS_FLAVOR=test hipache
5 Nov 19:43:09 - Loading config from /usr/local/lib/node_modules/hipache/config/config.json

I'm running this inside a docker container built using the dockerfile in this repo.

Doesn't Seem to Failover to Other Backends

I've been experimenting with hipache on EC2 with an autoscale group behind it. Our removal process hasn't been perfect and one thing I noticed is that if a backend is dead but still active in the backend list we get a few timed out requests even though there are some perfectly capable backends in the list.

hipache silently upgrades http/1.0 servers to http/1.1

I have a trivial http app running behind hipache. The problem is it's an http/1.0 server, and hipache rewrites the response header to be http/1.1. This is a problem for clients as http/1.1 requires a content-length header, which my app doesn't supply (because it's serving http/1.0).

Changing "HTTP11" to "True" will set the proper http/1.1 header and content-length, and then clients have no problem connecting through hipache.

#!/usr/bin/python
import os
from BaseHTTPServer import BaseHTTPRequestHandler,HTTPServer

PORT = int(8888)
HTTP11 = False

class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
                if HTTP11:
                    self.protocol_version = 'HTTP/1.1'
                body = "hello, world\n"
                self.send_response(200)
                self.send_header('Content-type','text/plain')
                if HTTP11:
                    self.send_header('Content-length', str(len(body)))
                self.end_headers()
                self.wfile.write(body)
                return

server = HTTPServer(('', PORT), Handler)
print 'Started httpserver on port ' , PORT
server.serve_forever()

Websocket connections lost with Node >= v0.10.0

Can't establish any websocket connections due to problems with ECONNRESET:

12 Apr 20:28:00 - (worker #28246) TCP error from     
{"remoteAddress":"127.0.0.1","remotePort":47976,"bytesWritten":10,
"bytesRead":384,"elapsed":0.003}; Error:     
{"code":"ECONNRESET","errno":"ECONNRESET","syscall":"read"}

(with latest version)
Handling of ECONNRESET seems to have changed in Node v0.10.
Downgrading Node to v0.9 solved the problem.

Bytes sent is always 0 when using https

I've made the same request using https, then http. In the access logs I got:

::ffff:127.0.0.1 - - [08/Feb/2013:16:52:37 +0000] "GET /img.png HTTP/1.1" 200 0 "" "Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:18.0) Gecko/20100101 Firefox/18.0" "test" 0.002 0.001
::ffff:127.0.0.1 - - [08/Feb/2013:16:52:42 +0000] "GET /img.png HTTP/1.1" 200 6033 "" "Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:18.0) Gecko/20100101 Firefox/18.0" "test" 0.001 0.001

When using https, bytes sent is 0.

Unable to build docker container

While running docker build, the container creation fails while downloading node:

Uploading context 235520 bytes
Step 1 : FROM ubuntu:12.04
 ---> 8dbd9e392a96
Step 2 : RUN echo "deb http://archive.ubuntu.com/ubuntu precise main universe" > /etc/apt/sources.list
 ---> Using cache
 ---> 267a0e952e26
Step 3 : RUN apt-get -y update
 ---> Using cache
 ---> 0dc6c6e826e2
Step 4 : RUN apt-get -y install wget git redis-server supervisor
 ---> Using cache
 ---> 78dd638e12f0
Step 5 : RUN wget -O - http://nodejs.org/dist/v0.8.23/node-v0.8.23-linux-x64.tar.gz | tar -C /usr/local/ --strip-components=1 -zxv
 ---> Running in 4cfae75afcb5
--2013-07-29 02:01:05--  http://nodejs.org/dist/v0.8.23/node-v0.8.23-linux-x64.tar.gz
Resolving nodejs.org (nodejs.org)... 165.225.133.150
Connecting to nodejs.org (nodejs.org)|165.225.133.150|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 5089447 (4.9M) [application/octet-stream]
Saving to: `STDOUT'

     0K .......... .......... .......... ....node-v0.8.23-linux-x64/bin/
node-v0.8.23-linux-x64/bin/npm
node-v0.8.23-linux-x64/bin/node-waf
node-v0.8.23-linux-x64/bin/node
...... ..........  1%  720K 7s
[wget progress and the tar file listing continue normally up to roughly 72%, then:]
node-v0.8.23-linux-x64/lib/node/wafadmin/Tools/intltool.py
.2013/07/28 22:01:07 unexpected EOF

any idea on how to fix this? I can't tell if it's an issue with docker or the nodejs distribution.

Crashes on unknown host

Hipache seems to work when the hostname is in Redis; for an unknown host, the error-page handling just throws this:

node.js:201
        throw e; // process.nextTick error, or 'error' event on first tick
              ^
TypeError: Object #<Object> has no method 'exists'
    at /usr/local/lib/node_modules/hipache/lib/worker.js:169:12
    at /usr/local/lib/node_modules/hipache/lib/worker.js:270:28
    at Cache.<anonymous> (/usr/local/lib/node_modules/hipache/lib/cache.js:124:20)
    at Cache.<anonymous> (/usr/local/lib/node_modules/hipache/lib/cache.js:112:13)
    at /usr/local/lib/node_modules/hipache/node_modules/redis/index.js:1049:13
    at try_callback (/usr/local/lib/node_modules/hipache/node_modules/redis/index.js:522:9)
    at RedisClient.return_reply (/usr/local/lib/node_modules/hipache/node_modules/redis/index.js:592:13)
    at RedisReplyParser.<anonymous> (/usr/local/lib/node_modules/hipache/node_modules/redis/index.js:265:14)
    at RedisReplyParser.emit (events.js:67:17)
    at RedisReplyParser.add_multi_bulk_reply (/usr/local/lib/node_modules/hipache/node_modules/redis/lib/parser/javascript.js:306:18)

Define default command and ports in Dockerfile

Now that docker supports the CMD and EXPOSE commands, it would be nice to use them to set good defaults for the Hipache container. This way, people could just run 'docker run samalba/hipache' and something cool would happen out of the box :)

Redis client always returns 'OK' with Node.js 0.8.26 / Ubuntu 13.04 (raring)

This is a bug I encountered while trying to find out why hipache wasn't responding. I started using hipache with Node.js 0.10.x and it worked fine, but then I found out that node-http-proxy doesn't support websockets when using 0.10.x, so I switched to 0.8.26 instead and hipache stopped working.

After digging through the source code I found out that the callback passed to multi.exec never executes.

https://github.com/dotcloud/hipache/blob/master/lib/cache.js#L154

Okay, that's weird. Since I knew the frontend 'frontend:nodejs.dockerfile-deploy.com' exists, I added the following line to find out what redis was returning.

this.redisClient.lrange('frontend:nodejs.dockerfile-deploy.com', 0, -1, function (err, list) {
    console.log(err, list);
});

When I curl'd nodejs.dockerfile-deploy.com hipache would log null 'OK'. This made me scratch my head since I was expecting lrange to return a list instead of 'OK'.

I also added a key to redis (redis-cli set host foobar) and then tried to get it with this.redisClient.get('host', function (err, value) { ... }); it would also log null 'OK'.

There was something wrong with this.redisClient so I re-instantiated it and used it to replace the var multi.

var rc = redis.createClient();
var multi = rc.multi();
//var multi = this.redisClient.multi();

I curl'd the url and this time got the right response back.

Since the problem was Redis-related, I updated the redis version in package.json from 0.8.x to 0.9, and that also seems to solve the problem.

Commenting out the monitorActiveChecker code would also result in successful requests.

Not really sure what causes this; it seems to be a combination of a few things. For now I'll use hipache with redis 0.9, since that seems to be the easiest way to fix it.

Better redis/driver failure handling

Right now (assuming I'm reading the code correctly):

  • if Hipache starts and there is no redis server (yet), we end up with a non-functional Hipache that sits there (the workers' redis clients got ECONNREFUSED and are unable to operate)
  • moreover, if a worker (while operating) loses its connection to redis, it will sit there and be unusable (even if Redis comes back later on), which (probably?) can lead to Hipache being entirely unusable if all running workers reach that condition

The reason for that is because driver emitted errors are just logged and no action is taken upon them: https://github.com/dotcloud/hipache/blob/ad8697347d3803bed61c78a51ba84635b9f42337/lib/cache.js#L58

IMHO:

  • if we are just starting and fail to reach Redis, the worker should suicide and "notify" the master - and hipache should exit with an error and tell the user (or: the master should test the redis before even trying to spawn)
  • if we were operating successfully and suddenly get ECONNREFUSED, then the worker should try to reconnect the driver to avoid becoming a "sloppy" (<- that's almost the same thing as a zombie, just sloppier :-))

This is quite touchy, as it affects both operating reliability and the startup heuristics.

What do you think?

Application not responding

For some reason none of the applications are responding. I've tried
a few different names and ip addresses, but still no result.

Hipache 0.2.4 with node.js 0.10.20 on mac.

Configuration steps:

redis-cli
rpush frontend:test.site http://127.0.0.1:3000
rpush frontend:test.com http://localhost:3000
rpush frontend:test.com http://

Hipache config:

{
  "server": {
    "debug": true,
    "accessLog": "/tmp/hipache_access.log",
    "port": 7000,
    "workers": 5,
    "maxSockets": 100,
    "deadBackendTTL": 30
  },
  "redisHost": "127.0.0.1",
  "redisPort": 6379,
  "redisDatabase": 0,
  "redisPassword": "password"
}

Am I missing something here?

Customisable prefix

I think it would be nice if the frontend: prefix could be configurable from the configuration file.

This way it would be easier to avoid conflicts in the database.

Domain redirects?

Is there a way to setup domain-based redirection?

For instance, if I have foobar.com and www.foobar.com, I want the www domain to redirect to the non-www one automatically, without doing it in the application.

Possible?

hipache errors in docker containers

my system:

  • kernel 3.12.4-031204-generic
  • docker 0.7.5
  • ubuntu 13.10

docker commands tested:
docker run -i -t -rm stackbrew/hipache
docker run -i -t -rm hipache
docker run -i -t -p 127.0.0.1:8080:80 -rm hipache
docker run -i -t -p 127.0.0.1:8080:80 -p 127.0.0.1:6379:6379 -rm hipache

I also get the same error when running hipache 0.2.5 in a custom container,
and the same errors when I tried changing the config file that hipache runs. Nothing I did would make the errors go away.

Tons and tons of errors spew into the hipache.log file, and CPU usage at idle is ~25% of a 3 GHz 8-core dev machine.

> head -n100 /var/log/supervisor/hipache.log

15 Jan 21:18:16 - Loading config from /usr/local/lib/node_modules/hipache/config/config_dev.json
15 Jan 21:18:16 - Spawning worker #0
15 Jan 21:18:16 - Spawning worker #1
15 Jan 21:18:16 - Server is running. {"debug":true,"accessLog":"/tmp/proxy2_access.log","port":80,"workers":2,"maxSockets":100,"deadBackendTTL":10,"tcpTimeout":10,"retryOnError":3,"deadBackendOn500":true,"httpKeepAlive":false,"lruCache":{"size":5,"ttl":5}}
15 Jan 21:18:16 - Loading config from /usr/local/lib/node_modules/hipache/config/config_dev.json
15 Jan 21:18:16 - Loading config from /usr/local/lib/node_modules/hipache/config/config_dev.json
15 Jan 21:18:16 - (worker #30) Cache: LRU cache is enabled
15 Jan 21:18:16 - (worker #29) Cache: LRU cache is enabled

net.js:943
    if (port && handle.getsockname && port != handle.getsockname().port) {
                      ^

net.js:943
    if (port && handle.getsockname && port != handle.getsockname().port) {
                      ^
TypeError: Cannot read property 'getsockname' of undefined
    at net.js:943:23
    at Object.cluster._getServer [as 2:2] (cluster.js:555:5)
    at handleResponse (cluster.js:149:41)
    at respond (cluster.js:170:5)
    at handleMessage (cluster.js:180:5)
    at process.EventEmitter.emit (events.js:126:20)
    at handleMessage (child_process.js:270:12)
    at Pipe.channel.onread (child_process.js:295:9)
TypeError: Cannot read property 'getsockname' of undefined
    at net.js:943:23
    at Object.cluster._getServer [as 1:2] (cluster.js:555:5)
    at handleResponse (cluster.js:149:41)
    at respond (cluster.js:170:5)
    at handleMessage (cluster.js:180:5)
    at process.EventEmitter.emit (events.js:126:20)
    at handleMessage (child_process.js:270:12)
    at Pipe.channel.onread (child_process.js:295:9)
15 Jan 21:18:16 - Worker died (pid: 30, suicide: false, exitcode: 1). Spawning a new one.
15 Jan 21:18:16 - Worker died (pid: 29, suicide: false, exitcode: 1). Spawning a new one.
15 Jan 21:18:16 - Loading config from /usr/local/lib/node_modules/hipache/config/config_dev.json
15 Jan 21:18:16 - Loading config from /usr/local/lib/node_modules/hipache/config/config_dev.json
15 Jan 21:18:16 - (worker #38) Cache: LRU cache is enabled
15 Jan 21:18:16 - (worker #39) Cache: LRU cache is enabled

net.js:943
    if (port && handle.getsockname && port != handle.getsockname().port) {
                      ^

net.js:943
    if (port && handle.getsockname && port != handle.getsockname().port) {
                      ^
TypeError: Cannot read property 'getsockname' of undefined
    at net.js:943:23
    at Object.cluster._getServer [as 3:2] (cluster.js:555:5)
    at handleResponse (cluster.js:149:41)
    at respond (cluster.js:170:5)
    at handleMessage (cluster.js:180:5)
    at process.EventEmitter.emit (events.js:126:20)
    at handleMessage (child_process.js:270:12)
    at Pipe.channel.onread (child_process.js:295:9)
TypeError: Cannot read property 'getsockname' of undefined
    at net.js:943:23
    at Object.cluster._getServer [as 4:2] (cluster.js:555:5)
    at handleResponse (cluster.js:149:41)
    at respond (cluster.js:170:5)
    at handleMessage (cluster.js:180:5)
    at process.EventEmitter.emit (events.js:126:20)
    at handleMessage (child_process.js:270:12)
    at Pipe.channel.onread (child_process.js:295:9)
15 Jan 21:18:16 - Worker died (pid: 38, suicide: false, exitcode: 1). Spawning a new one.
15 Jan 21:18:16 - Worker died (pid: 39, suicide: false, exitcode: 1). Spawning a new one.
15 Jan 21:18:16 - Loading config from /usr/local/lib/node_modules/hipache/config/config_dev.json
15 Jan 21:18:16 - Loading config from /usr/local/lib/node_modules/hipache/config/config_dev.json
15 Jan 21:18:16 - (worker #46) Cache: LRU cache is enabled
15 Jan 21:18:16 - (worker #45) Cache: LRU cache is enabled

net.js:943
    if (port && handle.getsockname && port != handle.getsockname().port) {
                      ^

net.js:943
    if (port && handle.getsockname && port != handle.getsockname().port) {
                      ^
TypeError: Cannot read property 'getsockname' of undefined
    at net.js:943:23
    at Object.cluster._getServer [as 6:2] (cluster.js:555:5)
    at handleResponse (cluster.js:149:41)
    at respond (cluster.js:170:5)
    at handleMessage (cluster.js:180:5)
    at process.EventEmitter.emit (events.js:126:20)
    at handleMessage (child_process.js:270:12)
    at Pipe.channel.onread (child_process.js:295:9)
TypeError: Cannot read property 'getsockname' of undefined
    at net.js:943:23
    at Object.cluster._getServer [as 5:2] (cluster.js:555:5)
    at handleResponse (cluster.js:149:41)
    at respond (cluster.js:170:5)
    at handleMessage (cluster.js:180:5)
    at process.EventEmitter.emit (events.js:126:20)
    at handleMessage (child_process.js:270:12)
    at Pipe.channel.onread (child_process.js:295:9)
15 Jan 21:18:16 - Worker died (pid: 46, suicide: false, exitcode: 1). Spawning a new one.
15 Jan 21:18:16 - Worker died (pid: 45, suicide: false, exitcode: 1). Spawning a new one.

How to configure it on Ubuntu 12.04

Hey, I am trying to install it on Ubuntu 12.04 and I have installed:
- redis-server
- node.js

But when I try to spawn the server, it shows the error
"Error: EISDIR, illegal operation on a directory"

Please guide me on how to configure it on my system.

Possible to Bind to single IP

Would it be possible to add a config.address parameter allowing you to specify a specific address to bind to?

I've made the change locally, in worker.js, lines 351 and 368:

ipv4HttpServer.listen(config.port, config.address);
...
ipv4HttpsServer.listen(config.https.port, config.https.address);

Might be useful for others.

Andrew

Pluggable configuration store

Hi there,

I've been looking at using hipache as the router in a CoreOS cluster. As you might know, CoreOS uses etcd as a shared configuration store for the cluster. I was wondering how feasible it would be to make the redis dependency (conf. store) a pluggable thing; e.g. making a configuration store API and having adapters for redis, etcd and others.

Is it a thing you have considered? Would you accept PRs on such a change?
Thoughts?
👍 👎 ?

Drop user privileges

Is it possible to start hipache on port 80 and then drop privileges to a non-admin user?

Investigate possible dos

If the connection is not drained, it seems possible to flood the workers quickly. Need to write tests to confirm.
Need to write tests for that to confirm.

error code HPE_INVALID_CONSTANT

Hello,

I am trying to use hipache with a docker container that runs a Rails app via Passenger and Apache.

The app loads fine when I open it at https://staging:49160, but when I configure hipache, I start getting:

(worker #96) staging: backend #0 reported an error ({"bytesParsed":0,"code":"HPE_INVALID_CONSTANT"}) while handling request for /

Any suggestions how to solve this?

My config.json looks like:

{
    "server": {
        "accessLog": "/var/log/shipyard/hipache.log",
        "port": 80,
        "workers": 5,
        "maxSockets": 100,
        "https": {
            "port": 443,
            "key": "/opt/apps/shipyard/cert/key.key",
            "cert": "/opt/apps/shipyard/cert/crt.crt"
        },
        "deadBackendTTL": 30
    },
    "redisHost": "127.0.0.1",
    "redisPort": 6379
}

I tried nodejs v0.10.25 and nodejs v0.8.26 , but the result is the same.

XHR requests slowing down

Hello

I have a very simple setup of hipache proxying a thin server running a Rails application in a docker container (I am trying out the dokku project). The issue I have is that when I go through the proxy, XHR requests end up being very, very slow. If I remove the proxy and hit the thin server directly I have no issues.

My hipache config is as follows http://pastebin.com/1rFRtaiS.
It is almost identical to the default configuration.

I run my thin server as follows /app/vendor/bundle/ruby/2.0.0/bin/thin start -R config.ru -e production -p 49155

I have this issue whether we are hitting the thin server via http or https.

Any help is greatly appreciated.

Dockerfile build fails on Docker from docker ppa

running docker build . in the repository root produces the following output:
https://gist.github.com/hansent/5815177

Pretty sure it's a docker bug, but I thought I'd post here in case anyone runs into it. I tried changing the ADD to INSERT and giving it the raw.github URL for the supervisord config, but that got me:
Error build: INSERT has been deprecated. Please use ADD instead

so yeah...wget it is for now:

Dockerfile with workaround is here:
https://gist.github.com/hansent/5815302

When I start hipache, there is an error

When I start hipache, there is an error.
Does anybody have the same error?

root@AY121114091050444f816:/usr/local/lib/node_modules/hipache/config# hipache
30 Jan 11:14:12 - Loading config from /usr/local/lib/node_modules/hipache/config/config.json
30 Jan 11:14:12 - Spawning worker #0
30 Jan 11:14:12 - Server is running. {"debug":true,"accessLog":"/tmp/proxy2_test_access.log","port":80,"workers":1,"maxSockets":100,"deadBackendTTL":30,"tcpTimeout":50,"retryOnError":30,"httpKeepAlive":false,"lruCache":{"size":5,"ttl":5}}
30 Jan 11:14:12 - Loading config from /usr/local/lib/node_modules/hipache/config/config.json
30 Jan 11:14:12 - (worker #19334) Cache: LRU cache is enabled

events.js:71
        throw arguments[1]; // Unhandled 'error' event
              ^
Error: listen EADDRNOTAVAIL
    at errnoException (net.js:769:11)
    at Server._listen2 (net.js:892:19)
    at net.js:933:12
    at Object.cluster._getServer (cluster.js:555:5)
    at handleResponse (cluster.js:149:41)
    at respond (cluster.js:170:5)
    at handleMessage (cluster.js:180:5)
    at process.EventEmitter.emit (events.js:126:20)
    at handleMessage (child_process.js:269:12)
    at Pipe.channel.onread (child_process.js:293:9)
30 Jan 11:14:12 - Worker died (pid: 19334, suicide: false, exitcode: 1). Spawning a new one.

the Dockerfile does not work with docker >= 0.4

FROM ubuntu:12.04 ()
===> 8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c
RUN echo "deb http://archive.ubuntu.com/ubuntu precise main universe" > /etc/apt/sources.list (8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c)
===> f011d5c79d692c4c144f994fec11f1c03d4ee19afcbd04b93c4fe204b54bfcad
RUN apt-get -y update (f011d5c79d692c4c144f994fec11f1c03d4ee19afcbd04b93c4fe204b54bfcad)
===> 15512e2dd508357a03cd14a08aa06ac478e69cf4a5aaf17a6a24984c772424cf
RUN apt-get -y install wget git redis-server supervisor (15512e2dd508357a03cd14a08aa06ac478e69cf4a5aaf17a6a24984c772424cf)
Error build: The command [/bin/sh -c apt-get -y install wget git redis-server supervisor] returned a non-zero code: 100

Sticky session support?

Hi,

I wanted to ask if sticky session support is available? The reason I ask is that WebSocket libraries like SockJS require requests for a session to be routed to the same application instance.

Great work and thanks for open sourcing it.

Question

Is it possible to run 2 instances of hipache?

e.g.

round robin dns

hipache server a
hipache server b

shared redis server

Hipache (light) version without Redis?

Any plans to make a "lightweight" version of Hipache that wouldn't require Redis?
E.g. for cases where the virtual hosting configuration doesn't change that much, so a simple config file without the overhead of Redis would be more than enough?

Thank you very much.

Hipache -> squid with IPs as backends

Hi,

I have an instance of squid configured as a forward proxy, on a box with multiple IP addresses assigned to it. An HTTP request to squid is forwarded with an outgoing IP matching the IP the request was received on.

Example setup:

  • Squid listening on 127.0.0.1:3128 and 127.0.0.2:3128
  • User sends HTTP request for "destination.com" to 127.0.0.1:3128
  • Squid forwards request to "destination.com", with outgoing IP 127.0.0.1:3128

Questions:

  • Can hipache use complete "*" wildcard matching? Example:
$ redis-cli rpush frontend:* all
(integer) 1
  • Can hipache use the set of IPs squid is listening on as backends (not vhosts)? Example:
$ redis-cli rpush frontend:* http://127.0.0.1:3128
(integer) 2
$ redis-cli rpush frontend:* http://127.0.0.2:3128
(integer) 3
  • Must you specify http:// or https:// in the backend declaration? Example:
# Assume you already executed commands from question #2
# (notice https://)

$ redis-cli rpush frontend:* https://127.0.0.1:3128
(integer) 2
$ redis-cli rpush frontend:* https://127.0.0.2:3128
(integer) 3

Alert when backend node is down or not accessible?

I just started exploring hipache.
We are using docker for our PaaS product and are planning to use hipache.
When a backend node is dead or not accessible any more, I want to send an alert (basically an HTTP REST call to my service) which brings the backend container back up. Any idea how I can do that? Thanks in advance :)

Problem with Apache mod_proxy or NGINX proxy_pass

Hello,

I notice very bad performance with this architecture:
User --> Apache2.4 with proxy_pass,rewriting_url, mod_speed --> Hipache --> Tomcat JEE Webapp
User --> NGINX with proxy_pass --> Hipache --> Tomcat JEE Webapp

Is this a supported configuration?
I tried every configuration for the two web servers and nothing changed.
I want to use Apache2 for HTTPS or a restricted zone.

I wonder whether I should swap the order of Hipache and Apache/NGINX.
User --> Hipache --> Apache with proxy_pass,rewriting_url, mod_speed --> Tomcat JEE Webapp

Thanks a lot for your help.
Best regards,
Nicolas

Generate a new version

There are some bug fixes and important features that have been added in the past 7 months (such as Redis authentication). Please release a new version.

hipache doesn't seem to follow redirects by referrer. Any workaround besides using relative URLs?

A little more googling helped me realize that some proxies send referer/referrer and some don't. In the case of HTTPS, I'll never get referrer/referer in the headers. And if the URL is user-provided, it won't be there either.

So, instead of pinpointing hipache's shortcomings, I resorted to adding a "redirectURL" key to my query string that helps my Express controllers resolve the redirect URL.

I like hipache and it's pretty fast. One suggestion: would it be possible to edit headers before forwarding requests to backends? I've seen Apache do it. See if you guys can add it.

CNAME support

Currently, hipache does not support CNAMEs. Any chance we can get it working without duplicating rules in Redis?

How would you feel about npmlog?

I'm not talking about access logs, but plain application logging.

Right now, master uses util.log.

Cache and Memory use either console.log or a manually provided log handler (which turns out to be forced in the worker and sent to the master, falling back on util.log in case that fails; see: https://github.com/dotcloud/hipache/blob/master/lib/worker.js#L25)

Either way, I'm wondering if we could get benefits from npmlog (https://www.npmjs.org/package/npmlog):

  • being able to more easily change the write stream
  • being able to listen for emitted messages from npmlog instead of passing around "logHandlers" parameters
  • nicer display, more sugar

Now, that would sure introduce an additional (production) dependency.

What do you guys think?
