Dinghy HTTP Proxy

This is the HTTP Proxy and DNS server that Dinghy uses.

The proxy is based on jwilder's excellent nginx-proxy project, with modifications to make it more suitable for local development work.

A DNS resolver is also added. By default it will resolve all *.docker domains to the Docker VM, but this can be changed.

Configuration

Exposed Ports

The proxy will by default use the first port exposed by your container as the HTTP port to proxy to. This can be overridden by setting the VIRTUAL_PORT environment variable on the container to the desired HTTP port.
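A minimal sketch of the override (the image name and ports here are hypothetical):

```shell
# my-api-image is a hypothetical image that exposes both 8080 (HTTP)
# and 9090 (metrics); VIRTUAL_PORT tells the proxy to route to 8080.
docker run -d -e VIRTUAL_PORT=8080 my-api-image
```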

Docker Compose Projects

The proxy will auto-generate a hostname based on the docker tags that docker-compose adds to each container. This hostname is of the form <service>.<project>.<tld>. For instance, assuming the default *.docker TLD, a "web" service in a "myapp" docker-compose project will be automatically made available at http://web.myapp.docker/.

Explicitly Setting a Hostname

As in the base nginx-proxy, you can configure a container's hostname by setting the VIRTUAL_HOST environment variable in the container.

You can set the VIRTUAL_HOST environment variable either with the -e option to docker or the environment hash in docker-compose. For instance setting VIRTUAL_HOST=myrailsapp.docker will make the container's exposed port available at http://myrailsapp.docker/.
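The docker-compose form of the same setting might look like this (the service and image names are hypothetical):

```yaml
services:
  web:
    image: my-rails-image   # hypothetical image
    environment:
      - VIRTUAL_HOST=myrailsapp.docker
```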

This will work even if dinghy auto-generates a hostname based on the docker-compose tags.

Multiple Hosts

If you need to support multiple virtual hosts for a container, you can separate the entries with commas. For example, with foo.bar.com,baz.bar.com,bar.com, each host will be set up the same way.

Additionally you can customize the port for each host by appending a port number: foo.bar.com,baz.bar.com:3000. Each name will point to its specified port and any name without a port will use the default.
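A sketch of the combined form, using a hypothetical image name:

```shell
# foo.bar.com and bar.com proxy to the container's default port;
# baz.bar.com proxies to port 3000.
docker run -d -e VIRTUAL_HOST=foo.bar.com,baz.bar.com:3000,bar.com my-app-image
```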

Wildcard Hosts

You can also use wildcards at the beginning or end of a host name, like *.bar.com or foo.bar.*. You can even use a regular expression, which can be very useful in conjunction with a wildcard DNS service like xip.io: ~^foo\.bar\..*\.xip\.io will match foo.bar.127.0.0.1.xip.io, foo.bar.10.0.2.2.xip.io, and all other given IPs. More information about this topic can be found in the nginx documentation about server_names.

Enabling CORS

You can set the CORS_ENABLED environment variable either with the -e option to docker or the environment hash in docker-compose. For instance setting CORS_ENABLED=true will allow the container's web proxy to accept cross domain requests.

If you want to be more specific, you can also set CORS_DOMAINS (along with CORS_ENABLED) to specify the domains you want to whitelist, separated by commas.
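For instance (the image and domain names here are hypothetical):

```shell
# Enable CORS and whitelist two specific origins.
docker run -d \
  -e CORS_ENABLED=true \
  -e CORS_DOMAINS=app.example.docker,admin.example.docker \
  my-api-image
```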

This is especially helpful when you have to deal with CORS with authenticated cross domain requests.

More information on this topic on MDN.

Subdomain Support

If you want your container to also be available at all subdomains of the given domain, prefix a dot . to the provided hostname. For instance setting VIRTUAL_HOST=.myrailsapp.docker will also make your app available at *.myrailsapp.docker.
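For instance (hypothetical image name):

```shell
# The leading dot makes api.myrailsapp.docker, www.myrailsapp.docker,
# and any other subdomain resolve to this container as well.
docker run -d -e VIRTUAL_HOST=.myrailsapp.docker my-rails-image
```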

This happens automatically for the auto-generated docker-compose hostnames.

SSL Support

SSL is supported using single-host certificates, which are located by naming convention.

To enable SSL, just put your certificates and private keys in the $HOME/.dinghy/certs directory for any virtual hosts in use. The certificate and key should be named after the virtual host, with .crt and .key extensions. For example, a container with VIRTUAL_HOST=foo.bar.com.docker should have a foo.bar.com.docker.crt and a foo.bar.com.docker.key file in the certs directory.
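The expected layout, sketched for VIRTUAL_HOST=foo.bar.com.docker:

```shell
ls ~/.dinghy/certs
# foo.bar.com.docker.crt   <- certificate
# foo.bar.com.docker.key   <- private key
```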

How SSL Support Works

The SSL cipher configuration is based on mozilla nginx intermediate profile which should provide compatibility with clients back to Firefox 1, Chrome 1, IE 7, Opera 5, Safari 1, Windows XP IE8, Android 2.3, Java 7. The configuration also enables HSTS, and SSL session caches.

The default behavior for the proxy when port 80 and 443 are exposed is as follows:

  • If a container has a usable cert, port 80 will redirect to 443 for that container so that HTTPS is always preferred when available.
  • If the container does not have a usable cert, port 80 will be used.

To serve traffic in both SSL and non-SSL modes without redirecting to SSL, you can include the environment variable HTTPS_METHOD=noredirect (the default is HTTPS_METHOD=redirect). You can also disable the non-SSL site entirely with HTTPS_METHOD=nohttp.
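For instance (hypothetical image name):

```shell
# Serve both HTTP and HTTPS without redirecting HTTP to HTTPS.
docker run -d \
  -e VIRTUAL_HOST=myrailsapp.docker \
  -e HTTPS_METHOD=noredirect \
  my-rails-image
```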

How to quickly generate self-signed certificates

You can generate self-signed certificates using openssl.

openssl req -x509 -newkey rsa:2048 -keyout foo.bar.com.docker.key \
  -out foo.bar.com.docker.crt -days 365 -nodes \
  -subj "/C=US/ST=Oregon/L=Portland/O=Company Name/OU=Org/CN=foo.bar.com.docker" \
  -config <(cat /etc/ssl/openssl.cnf <(printf "[SAN]\nsubjectAltName=DNS:foo.bar.com.docker")) \
  -reqexts SAN -extensions SAN

To prevent your browser from emitting warnings about self-signed certificates, you can install them on your system as trusted certificates.
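On macOS, for example, you can mark the certificate as trusted in the system keychain (a sketch; this prompts for an administrator password, and the certificate filename is from the example above):

```shell
sudo security add-trusted-cert -d -r trustRoot \
  -k /Library/Keychains/System.keychain \
  ~/.dinghy/certs/foo.bar.com.docker.crt
```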

Using Outside of Dinghy

Since this functionality is generally useful for local development work even outside of Dinghy, this proxy now supports running standalone.

Environment variables

We include a few environment variables to customize the proxy / dns server:

  • DOMAIN_TLD (default: docker) - The DNS server will only respond to *.docker by default. You can change this to dev if it suits your workflow.
  • DNS_IP (default: 127.0.0.1) - Setting this variable is explained below.

OS X

You'll need the IP of your VM:

  • For docker-machine, run docker-machine ip <machine_name> to get the IP.
  • For Docker for Mac, you can use 127.0.0.1 as the IP, since it forwards docker ports to the host machine.

Then start the proxy:

docker run -d --restart=always \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  -v ~/.dinghy/certs:/etc/nginx/certs \
  -p 80:80 -p 443:443 -p 19322:19322/udp \
  -e DNS_IP=<vm_ip> -e CONTAINER_NAME=http-proxy \
  --name http-proxy \
  codekitchen/dinghy-http-proxy

You will also need to configure OS X to use the DNS resolver. To do this, create a file /etc/resolver/docker (creating the /etc/resolver directory if it does not exist) with these contents:

nameserver <vm_ip>
port 19322

You only need to do this step once, or when the VM's IP changes.
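The resolver setup above can be sketched as (substitute your VM's IP for <vm_ip>):

```shell
sudo mkdir -p /etc/resolver
printf 'nameserver <vm_ip>\nport 19322\n' | sudo tee /etc/resolver/docker
```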

Linux

For running Docker directly on a Linux host machine, the proxy can still be useful for easy access to your development environments. Similar to OS X, start the proxy:

docker run -d --restart=always \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  -v ~/.dinghy/certs:/etc/nginx/certs \
  -p 80:80 -p 443:443 -p 19322:19322/udp \
  -e CONTAINER_NAME=http-proxy \
  --name http-proxy \
  codekitchen/dinghy-http-proxy

The DNS_IP environment variable is not necessary when Docker is running directly on the host, as it defaults to 127.0.0.1.

Different Linux distributions will require different steps for configuring DNS resolution. The Dory project may be useful here; it knows how to configure common distros for dinghy-http-proxy.

Windows

  • For Docker for Windows, you can use 127.0.0.1 as the DNS IP.

From Powershell:

docker run -d --restart=always `
  -v /var/run/docker.sock:/tmp/docker.sock:ro `
  -p 80:80 -p 443:443 -p 19322:19322/udp `
  -e CONTAINER_NAME=http-proxy `
  -e DNS_IP=127.0.0.1 `
  --name http-proxy `
  codekitchen/dinghy-http-proxy

From docker-compose:

version: '2'
services:

  http-proxy:
    container_name: http-proxy
    image: codekitchen/dinghy-http-proxy
    environment:
      - DNS_IP=127.0.0.1
      - CONTAINER_NAME=http-proxy
    ports:
      - "80:80"
      - "443:443"
      - "19322:19322/udp"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro

You will have to add the hosts to C:\Windows\System32\drivers\etc\hosts manually. There are various PowerShell scripts available to help manage this.
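One minimal approach, from an elevated PowerShell prompt (the hostname here is hypothetical):

```powershell
# Append an entry so web.myapp.docker resolves to the local proxy.
Add-Content -Path C:\Windows\System32\drivers\etc\hosts `
  -Value "127.0.0.1 web.myapp.docker"
```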

dinghy-http-proxy's People

Contributors

aaronjensen, beardcoder, codekitchen, codysnider, djbender, ericparton, fuse, hezhizhen, jiahaog, marclennox, mickaelperrin, rwstauner, scotthelm, sergeyklay, spittal, taybenlor, timweprovide, ttrakos, vkurdin, xarg, yenif


dinghy-http-proxy's Issues

Port 19322 is not exposed by default

Ran into this using Docker's remote REST API. I don't know if EXPOSE can be stacked; if it can, we could do EXPOSE 19322, and if not we'd also have to define 80, 443, and 19322 👍

docker-compose down fails

Hi there,

When creating a Docker Compose environment with a version 2 syntax compose file, the Dinghy HTTP Proxy seems to prevent the use of docker-compose down.

docker-compose down tries to delete the network that is created for the environment, but fails because the Dinghy HTTP Proxy is still in the network, running. Even if the rest of the environment is not.

It seems that the proxy should leave the network once there are no other containers running in it - or, if possible, that it should react to docker-compose down being run and leave then (otherwise, with the first method, it may not be fast enough and could still fail, I suppose?)

Are there any other workarounds to this? I know docker-compose down isn't exactly a common thing, but I'm using it because I'm testing a script for use in a CI environment where I will need it to clean up after itself!

proxy to multiple endpoints in one container

This isn't a super common need, since best practice is to have one process running per container, but sometimes even one process might expose more than one HTTP server: one port for public connections, another port for an internal admin UI, for example. There's no fundamental reason the proxy can't handle this, but it'll take changes to how the env vars are interpreted.

One idea is to look for vars of the form VIRTUAL_HOST_X and VIRTUAL_PORT_X, where X is any number > 1, but there might be a better option.

xdebug, port 9000

Hi,

is it possible to also support xdebug with this proxy? I mostly use just xdebug without huge background knowledge.

Does it make sense?

dinghy-http-proxy:2.5 won't start

I'm running the latest dinghy installed via homebrew. When I try either dinghy up or dinghy restart, the VM and app starts up fine, but DNS and proxy do not start. The docker logs for the proxy container are:

forego     | starting nginx.1 on port 5000
forego     | starting dockergen.1 on port 5100
forego     | starting dnsmasq.1 on port 5300
nginx.1    | 2017/01/28 23:16:39 [crit] 12#12: pread() "/etc/nginx/conf.d/custom.conf" failed (21: Is a directory)
forego     | starting nginx.1 on port 5300
forego     | sending SIGTERM to dockergen.1
forego     | sending SIGTERM to nginx.1
forego     | sending SIGTERM to dnsmasq.1

Dinghy has been working beautifully for me for months. I recently destroyed and recreated my VM, but had used the proxy since then for a few days.

mac OS 10.12.2
Dinghy HEAD-9328532

Port 443 / HTTPS doesn't work

I have this docker-compose

version: '3.8'

services:
  php:
    image: thecodingmachine/php:7.3-v4-apache
    environment:
        VIRTUAL_HOST: "x.test"
        APACHE_DOCUMENT_ROOT: public/
        APACHE_EXTENSIONS: "headers socache_shmcb ssl"
        HTTPS_METHOD: nohttp
        VIRTUAL_PROTO: https
        VIRTUAL_PORT: 443
    volumes:
      - /Users/dimitar/Projects/x:/var/www/html
      - /Users/dimitar/.dinghy/certs/x.test.crt:/etc/ssl/certs/ssl-cert-snakeoil.pem
      - /Users/dimitar/.dinghy/certs/x.test.key:/etc/ssl/private/ssl-cert-snakeoil.key

Apparently the http-proxy doesn't like the upstream cert, and I can't understand why.
Any ideas?


nginx.1    | 2021/02/13 00:02:45 [error] 721#721: *797 SSL_do_handshake() failed (SSL: error:1408F10B:SSL routines:ssl3_get_record:wrong version number) while SSL handshaking to upstream, client: 192.168.99.1, server: x.test, request: "GET /favicon.ico HTTP/2.0", upstream: "https://172.19.0.2:443/favicon.ico", host: "x.test", referrer: "https://x.test/?test"

Resolving address in container via dinghy

I have a http container running on port 80. dinghy is used to make container available via https://my.domain.tld.docker on my development machine. So far so good.

Now I need to run a job inside the container. The job should fetch some data from the current container, and it gets the URL via configuration. The configuration tells the job that the URL is https://my.domain.tld.docker. The problem is that the app itself runs on port 80 only. So what I would need is a hosts entry which points to the dinghy IP.

Is this possible? Any ideas?

Network auto joining doesn't work since Docker 17.04

I just upgraded docker-machine with engine 17.04-CE and my docker client with homebrew.
The dinghy_http_proxy container is no longer able to connect to my other containers; it seems that the network joining mechanism is broken.

Using

docker network connect myproj_default dinghy_http_proxy

Makes it work again.

Add CORS Support

I am using docker-compose for a web API. I am developing a javascript front-end in a separate docker-compose project. I found that CORS was not enabled for the API endpoint. In a fork, I have added support for CORS to the nginx.tmpl. I am now able to reach the API from the front-end via CORS enabled xhr request.

I have a PR ready if you would like to consider adding CORS support.

How to point to app?

Going to *.docker takes me to here:

(screenshot)

What am I supposed to do from this point? I can't seem to find any documentation on setting up a proxy.

Issue using SSL on container

Hey,

I've set my docker container up, exposing ports 443 and 80, and everything is working fine with normal HTTP traffic. But after following the SSL guide in the readme (creating the cert files based on the hostname), I'm still not able to connect via HTTPS.

I've gone in and had a look at the reverse proxy container and it seems that the default nginx config that is generated is not listening to port 443:

# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
# scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
  default $http_x_forwarded_proto;
  ''      $scheme;
}
# If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
# Connection header that may have been passed to this server
map $http_upgrade $proxy_connection {
  default upgrade;
  '' close;
}
gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
log_format vhost '$host $remote_addr - $remote_user [$time_local] '
                 '"$request" $status $body_bytes_sent '
                 '"$http_referer" "$http_user_agent"';
access_log off;
# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
server {
  listen 80 default_server;
  server_name _;
  root /var/www/default/htdocs;
  error_page 404 /index.html;
}
upstream justsunnies.docker {
    server 172.17.0.3:3306;
}
server {
    server_name justsunnies.docker;
    listen 80;
    access_log /var/log/nginx/access.log vhost;
    return 301 https://justsunnies.docker$request_uri;
}
server {
    server_name justsunnies.docker;
    listen 443 ssl;
    access_log /var/log/nginx/access.log vhost;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA;
    ssl_prefer_server_ciphers on;
    ssl_session_timeout 5m;
    ssl_session_cache shared:SSL:50m;
    ssl_certificate /etc/nginx/certs/justsunnies.docker.crt;
    ssl_certificate_key /etc/nginx/certs/justsunnies.docker.key;
    add_header Strict-Transport-Security "max-age=31536000";
    location / {
        proxy_pass http://justsunnies.docker;
    }
}

I've also had a look inside the certs folder on this server and it contains my self signed certificates so not exactly sure what's going on.

Thanks

new cert and docker tag change broke old dinghy behaviour

Hi, we encountered some issues with dinghy-http-proxy. Would be nice if you can take a look.
this commit d96dbee
added check for /etc/nginx/dhparam.pem in dinghy-http-proxy:2.5 docker image:
https://hub.docker.com/r/codekitchen/dinghy-http-proxy/tags/

however, the file is not in the dinghy machine, resulting in this error:

docker@dinghy:~$ docker exec 095 nginx -s reload
2017/10/24 06:51:48 [emerg] 77#77: BIO_new_file("/etc/nginx/dhparam.pem") failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/nginx/dhparam.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file)
nginx: [emerg] BIO_new_file("/etc/nginx/dhparam.pem") failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/nginx/dhparam.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file)
docker@dinghy:~$ exit status 1

we can only bypass the error manually by executing

docker exec <container_id> openssl dhparam -dsaparam -out /etc/nginx/dhparam.pem 4096

can you help to fix the issue so that dinghy creates the file automatically? or are we missing something very obvious?

Can't make it work with latest Docker Mac Client Beta

Hi,

I tried to set up http-proxy for Docker for Mac, but it seems that on my setup it does not work (and worked very well with dinghy);

My setup :

http-proxy docker container :

docker run -d --restart=always \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  -p 80:80 -p 443:443 -p 19322:19322/udp \
  -e DNS_IP=127.0.0.1 -e CONTAINER_NAME=http-proxy \
  --name http-proxy \
  codekitchen/dinghy-http-proxy

resolver `/etc/resolver/docker`:

nameserver 127.0.0.1
port 19322

Docker client version:

Version 1.11.1-beta11 (build: 6974)
37559e5f6acd56a4810963acc7001e88f2d8801

In theory, any virtual host under *.docker should be resolved, but I can't access any container.

Swarm mode DNS

Hello, I'm trying out dinghy because regular ol' docker for mac is slow as molasses with large bind mounts.

My stack consists of several docker stack deploy on a docker swarm running just the host machine in single node.

Dinghy comes up fine and all my containers seem much faster! 👍

Where I'm stuck is the dns resolving. With vanilla d4m Traefik was bound to localhost:80 so it could automatically proxy to my containers. That doesn't seem to work anymore since I can't figure out what the dns names are supposed to be.

I've tried <service_name>, <service_name>.docker, <container_name>, and <container_name>.docker, and I've tried manually connecting to the ip:port that dinghy ip reports.

Still nothing connects. Does dinghy not support swarm mode?

add a default vhost

I've talked to quite a few people who have been tripped up by the fact that the first container that happens to be started ends up as the default vhost, and any unknown vhost will be proxied to that container.

It'd also be nice to provide some inline instructions for using the proxy.

To solve both issues, I want to add a default vhost that just has a basic "welcome to dinghy HTTP proxy" message with a link to the docs.

Livereload support [more generally: ports other than 80 and 443]

It would be great to add support for additional ports (that speak HTTP-ish). Specifically, I have quite a few projects that all want a "livereload" port. I currently just open them up to the top-level docker host on project-unique ports, but it would be better to add a VIRTUAL_PORT_35736=36736 (or whatever) environment variable, and have dinghy-http-proxy open up an http-ish port at 35736 and proxy that to the appropriate host at 36736.

Resolving in containers without dinghy

Hi,

I switched from dinghy to Docker on Mac and now use your very good proxy / dns container.

While it's working well from host to container, I am unable to resolve *.docker inside containers.

Do you have some advice on how to achieve this?

Thanks,

driver failed programming external connectivity on endpoint proxy

I ran this on mac:

docker run -d --restart=always \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  -v ~/.dinghy/certs:/etc/nginx/certs \
  -p 80:80 -p 443:443 -p 19322:19322/udp \
  -e DNS_IP=127.0.0.1 -e CONTAINER_NAME=http-proxy \
  --name proxy \
  codekitchen/dinghy-http-proxy

But I end up with this error every time:

f925d92d06f417bf89da2541d13f2a3b31f3aa8b52126c7cb3b781fdc73a0e3a
docker: Error response from daemon: driver failed programming external connectivity on endpoint proxy (558d85d62cc87ec2cc076251afa12fecc815b44477df8b13bc435912d9f69b2c): Error starting userland proxy: Bind for 0.0.0.0:443: unexpected error (Failure EADDRINUSE).

Would appreciate help. 👍

Any way to use multiple DOMAIN_TLD values?

I don't mind creating different /etc/resolver/<tld> files myself.

Some projects I'm working on use domain.docker and others use domain.dev; another one uses a domain variant of our app. While that domain has DNS records, being able to use the resolver would mean development works offline without any problems.

Can't stop redirect to HTTPS

hey been a while!

I have a setup that has nginx > varnish > apache.

when you hit nginx.blah.docker, it forces HTTPS because I tell it to:

server {
    listen         80;
    return 301 https://$host$request_uri;
}

server {
    listen 443;
    server_name _;

    ssl                 on;
    ssl_certificate     /etc/nginx/ssl/self.pem;
    ssl_certificate_key /etc/nginx/ssl/self.key;

    location / {
      proxy_pass http://blah_varnish_1:80;
      proxy_set_header X-Real-IP  $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto https;
      proxy_set_header X-Forwarded-Port 443;
      proxy_set_header Host $host;
    }

    error_page   500 502 503 504  /50x.html;
      location = /50x.html {
    }
}

However, I also had it set up (not sure when it worked last) so that if you hit apache.blah.docker:8080 then it serves non-HTTPS and bypasses varnish:

AddHandler php5-script .php
AddType text/html .php

Listen 8080

<VirtualHost *:8080>
	DocumentRoot /var/www/html/docroot
</VirtualHost>

<Directory /var/www/html/docroot>
        Options Indexes FollowSymLinks
        AllowOverride All
        Require all granted
</Directory>

Now, it seems that when I go to http://apache.blah.docker:8080, it gets forced to HTTPS, which is not what I want (and Chrome throws an error).

I see that with the http proxy, I should be able to set HTTPS_METHOD=noredirect on the apache container and it shouldn't redirect to HTTPS, correct? I have tried, and it doesn't seem to work.

Is there some way to disable the redirect to HTTPS entirely? I need to be able to hit nginx and it goes all the way through varnish and apache as HTTPS, but when hitting apache, it stays HTTP.

I don't really need the proxy handling any redirects for me.

Any ideas?

Different tld per container

How feasible would it be to allow each individual container to specify an alternate tld (instead of the default '.docker' one)? I realize you can change the default, but not on a per-container basis.

Ideally you could specify ssl keys per tld also.

Just curious is this is even feasible.

502 Bad Gateway

I'm getting an error on codekitchen/dinghy-http-proxy:2.0.3
https://i.imgur.com/bG5rWWm.png

https://i.imgur.com/EzlIPtr.png
Dinghy 4.3.1

2016/03/19 19:15:40 [error] 103#0: *11 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.99.1, server: app.magento2.docker, request: "GET /favicon.ico HTTP/1.1", upstream: "http://172.17.0.5:443/favicon.ico", host: "app.magento2.docker", referrer: "http://app.magento2.docker/"

Thx,

Access MySQL from the host environment directly.

Hi,

Good day.

Is it possible to access a MySQL container from the host? I've tried:
0.0.0.0
192.168.99.100
localhost
app.mysql.docker

All fail; maybe there is a MySQL setting I am missing? Thanks.

Regards.
JJ

Enable CORS on specific domains

When dealing with authenticated cross domain requests, we can't use a wildcard for Access-Control-Allow-Headers.

In that case we need to be specific about domains we want to allow. This pull request permits it.

dinghy not picking SSL

Hi @codekitchen !

Thanks for the amazing work that makes our dev life a lot easier! I've used the dinghy proxy and its SSL support for a while. However, when I recently installed dinghy on a new machine and tried to run some containers via https, I found that dinghy was not picking up the SSL.

proxy works fine on port 80 without cert;

But once I generate a self-signed certificate under ~/.dinghy/certs/ and restart dinghy, it shows the default dinghy page on port 80 (like "http://example.docker"), and shows "example.docker refused to connect." on port 443. According to the documentation it should redirect http to https automatically, but now it works on neither http nor https, so I assume that it saw the certificate, but somehow the redirect and SSL are not working.

Following are something I dig out so far, but all seems normal

  • result of curl -vv https://example.docker
Trying 192.168.99.100...
TCP_NODELAY set
Connection failed
connect to 192.168.99.100 port 443 failed: Connection refused
Failed to connect to example.docker port 443: Connection refused
Closing connection 0
curl: (7) Failed to connect to example.docker port 443: Connection refused
  • you can see the cert in the proxy container /etc/nginx/certs

  • /etc/nginx/conf.d/default.conf

map $http_x_forwarded_proto $proxy_x_forwarded_proto {
  default $http_x_forwarded_proto;
  ''      $scheme;
}
# If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
# Connection header that may have been passed to this server
map $http_upgrade $proxy_connection {
  default upgrade;
  '' close;
}
# Apply fix for very long server names
server_names_hash_bucket_size 128;
# Default dhparam
ssl_dhparam /etc/nginx/dhparam.pem;
gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
log_format vhost '$host $remote_addr - $remote_user [$time_local] '
                 '"$request" $status $body_bytes_sent '
                 '"$http_referer" "$http_user_agent"';
access_log off;
# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
server {
  listen 80 default_server;
  server_name _;
  root /var/www/default/htdocs;
  error_page 404 /index.html;
}
upstream example.docker {
	server 172.18.0.5:80;
}
server {
	server_name example.docker;
	listen 80;
	access_log /var/log/nginx/access.log vhost;
	return 301 https://$host$request_uri;
}
server {
	server_name example.docker;
	listen 443 http2 ssl;
	access_log /var/log/nginx/access.log vhost;
	ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
	ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA;
	ssl_prefer_server_ciphers on;
	ssl_session_timeout 5m;
	ssl_session_cache shared:SSL:50m;
	ssl_certificate /etc/nginx/certs/example.docker.crt;
	ssl_certificate_key /etc/nginx/certs/example.docker.key;
	add_header Strict-Transport-Security "max-age=31536000";
	location / {
		proxy_pass http://example.docker;
	}
}

everything looks fine but the SSL is not picked. I even reinstall dinghy but no help, do you have any idea why? thanks!

increase client_header_buffer_size

We are using the webdevops/vagrant-docker-vm (https://github.com/webdevops/vagrant-docker-vm). In this VM the dinghy-http-proxy is in use.
My problem:
I get an "upstream sent too big header while reading response header from upstream" error.
I think it's related to an xdebug bug which sends too many headers; in my case the header
Set-Cookie: XDEBUG_SESSION=docker; expires=Wed, 12-Apr-2017 08:02:41 GMT; Max-Age=3600; path=/
is sent 140 times.
There is already a ticket at: https://bugs.xdebug.org/view.php?id=1397

Nobody knows when this will be resolved.

My question: is it possible to increase the "client_header_buffer_size" as a workaround for this issue?

Thank you

export port 443

I just noticed that the container doesn't actually EXPOSE port 443. You can still forward it, of course, but that's needlessly confusing.

SSL enablement: Unable to mount volumes

Hi All, I have been facing a strange issue with mounting volumes using the following command:
docker run -d --restart=always -v /var/run/docker.sock:/tmp/docker.sock:ro -v ~/.dinghy/certs:/etc/nginx/certs -p 80:80 -p 443:443 -p 19322:19322/udp -e CONTAINER_NAME=http-proxy --name http-proxy codekitchen/dinghy-http-proxy. I am unable to see the files on the container. I am using docker 17.09.1-ce.mac42 ce on mac osx. Any suggestions?
