byjg / docker-easy-haproxy
Discover services and dynamically create the haproxy.cfg based on labels defined on Docker containers, or from a simple static YAML.
License: MIT License
At the moment it seems like it always uses the production server. It would be nice to be able to set it to the staging server for testing.
When using the default yml for Docker Swarm, I am getting the following error during start:
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.41/services/9gffebl2qqovhlyf8nuit3kkb/update?version=9790
[...]
docker.errors.APIError: 400 Client Error for http+docker://localhost/v1.41/services/9gffebl2qqovhlyf8nuit3kkb/update?version=9790: Bad Request ("rpc error: code = InvalidArgument desc = Service cannot be explicitly attached to the ingress network "ingress"")
[...]
File "/scripts/main.py", line 7, in start
    processor_obj = ProcessorInterface.factory(os.getenv("EASYHAPROXY_DISCOVER"))
File "/scripts/processor/__init__.py", line 52, in factory
    return Swarm()
File "/scripts/processor/__init__.py", line 155, in __init__
    super().__init__()
File "/scripts/processor/__init__.py", line 43, in __init__
    self.refresh()
File "/scripts/processor/__init__.py", line 64, in refresh
    self.inspect_network()
File "/scripts/processor/__init__.py", line 176, in inspect_network
    service.update(networks = network_list)
My yml looks like this:
version: "3"
services:
  haproxy:
    image: byjg/easy-haproxy
    environment:
      EASYHAPROXY_DISCOVER: swarm
      EASYHAPROXY_SSL_MODE: "loose"
      HAPROXY_CUSTOMERRORS: "true"
      HAPROXY_USERNAME: admin
      HAPROXY_PASSWORD: password
      HAPROXY_STATS_PORT: 1936
    ports:
      - "80:80/tcp"
      - "443:443/tcp"
      - "1936:1936/tcp"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      replicas: 1
    networks:
      - proxy_public
networks:
  proxy_public:
    external: true
You can enable HTTP/2 to make better use of network resources and decrease perceived latency by enabling multiple concurrent requests/responses to be multiplexed over a single TCP connection. This can be done by replacing alpn http/1.1 with alpn h2,http/1.1.
With this ordering, HTTP/2 is used when the client supports it; otherwise the connection falls back to HTTP/1.1.
Source: https://www.haproxy.com/documentation/hapee/latest/load-balancing/protocols/http-2/
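For reference, a minimal sketch of the change described above (the certificate path and backend name are placeholders, not from this project):

```haproxy
frontend https_in
    # negotiate HTTP/2 first, fall back to HTTP/1.1
    bind *:443 ssl crt /etc/haproxy/certs/site.pem alpn h2,http/1.1
    mode http
    default_backend my_backend
```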
Is it possible to run easy-haproxy in docker to read container labels, but then actually configure haproxy on a different host? I have haproxy running on my opnsense router and I have multiple servers at home. At the moment I'm using traefik but if I could have a single instance of haproxy on the router and it would automatically update when I bring containers up on various hosts, that would be the ideal setup.
If it's not possible, what would be the steps needed to implement something like that?
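EasyHAProxy currently manages only a local HAProxy instance, so this would need a new deployment step after the config is rendered. A rough sketch of one possible approach, assuming SSH access to the router (hostname, paths, and the helper name below are all made up for illustration):

```python
import shlex

def build_deploy_commands(host, remote_path="/usr/local/etc/haproxy/haproxy.cfg"):
    """Build the commands that would copy a rendered haproxy.cfg to a
    remote host and reload HAProxy there (hypothetical flow)."""
    copy_cmd = ["scp", "haproxy.cfg", f"{host}:{remote_path}"]
    reload_cmd = ["ssh", host, "service haproxy reload"]
    return copy_cmd, reload_cmd

copy_cmd, reload_cmd = build_deploy_commands("opnsense.local")
print(" ".join(shlex.quote(part) for part in copy_cmd))
```

In practice the HAProxy Data Plane API would be a cleaner transport than scp + reload, but either way the discovery side stays unchanged; only the "write and reload" step needs to become pluggable.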
Whenever a container is running in network host mode, docker-easy-haproxy refuses to start. Log is below:
Traceback (most recent call last):
File "/usr/lib/python3.10/site-packages/docker/api/client.py", line 268, in _raise_for_status
response.raise_for_status()
File "/usr/lib/python3.10/site-packages/requests/models.py", line 960, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.43/networks/ebf21d2da2ef49b72f40147ca4ea6180b5dde83622981664ae107d9491ed22e2/connect
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/scripts/main.py", line 73, in <module>
main()
File "/scripts/main.py", line 70, in main
start()
File "/scripts/main.py", line 7, in start
processor_obj = ProcessorInterface.factory(os.getenv("EASYHAPROXY_DISCOVER"))
File "/scripts/processor/__init__.py", line 50, in factory
return Docker()
File "/scripts/processor/__init__.py", line 128, in __init__
super().__init__()
File "/scripts/processor/__init__.py", line 43, in __init__
self.refresh()
File "/scripts/processor/__init__.py", line 64, in refresh
self.inspect_network()
File "/scripts/processor/__init__.py", line 145, in inspect_network
ha_proxy_network.connect(container.name)
File "/usr/lib/python3.10/site-packages/docker/models/networks.py", line 58, in connect
return self.client.api.connect_container_to_network(
File "/usr/lib/python3.10/site-packages/docker/utils/decorators.py", line 19, in wrapped
return f(self, resource_id, *args, **kwargs)
File "/usr/lib/python3.10/site-packages/docker/api/network.py", line 254, in connect_container_to_network
self._raise_for_status(res)
File "/usr/lib/python3.10/site-packages/docker/api/client.py", line 270, in _raise_for_status
raise create_api_error_from_http_exception(e) from e
File "/usr/lib/python3.10/site-packages/docker/errors.py", line 39, in create_api_error_from_http_exception
raise cls(e, response=response, explanation=explanation) from e
docker.errors.APIError: 400 Client Error for http+docker://localhost/v1.43/networks/ebf21d2da2ef49b72f40147ca4ea6180b5dde83622981664ae107d9491ed22e2/connect: Bad Request ("container sharing network namespace with another container or host cannot be connected to any other network")
Maybe host-network containers could be excluded, or at least this error ignored?
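A possible guard, sketched against the attributes that docker-py exposes (the helper name is mine, not from the repo): containers in host mode or sharing another container's namespace report it in HostConfig.NetworkMode, so inspect_network could skip them before calling connect.

```python
def can_join_network(container_attrs: dict) -> bool:
    """Containers sharing another namespace (host mode or container:<id>)
    cannot be connected to additional networks, so skip them."""
    mode = container_attrs.get("HostConfig", {}).get("NetworkMode", "")
    return mode != "host" and not mode.startswith("container:")

# example attribute shapes as returned by docker-py's Container.attrs
print(can_join_network({"HostConfig": {"NetworkMode": "host"}}))    # False
print(can_join_network({"HostConfig": {"NetworkMode": "bridge"}}))  # True
```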
I have another proposal, so here it comes. If you have time, I'd like your input; I'll build everything, etc.
I want a basic service firewall so I can stop dealing with iptables/firewalld on the docker host system.
So, my "proposal" is: I want a way to set ACLs on frontends via service/container labels.
A simple ACL to ensure a service can be accessed only from 1.1.1.1:
labels:
  - com.byjg.easyhaproxy.definitions=service
  - com.byjg.easyhaproxy.mode.service=tcp
  - com.byjg.easyhaproxy.port.service=443
  - com.byjg.easyhaproxy.host.service=dns.service.name
  # ACL here
  - com.byjg.easyhaproxy.acl-name.0.service=service-fw
  - com.byjg.easyhaproxy.acl-value.0.service="src 1.1.1.1"
It would render the following config block:
frontend service_in_443_1
    bind *:443
    mode tcp
    acl service-fw src 1.1.1.1
    tcp-request connection reject if !service-fw
    default_backend my_backend_server
Do you think this is over-complicating it?
Here is another example: I have an internal API server, and I want to ensure that requests originate from 10.0.1.0/24 or 10.0.2.2, and that each request starts with /api/v2:
labels:
  - com.byjg.easyhaproxy.definitions=web
  - com.byjg.easyhaproxy.port.web=80
  - com.byjg.easyhaproxy.host.web=dns
  # ACL here
  - com.byjg.easyhaproxy.acl-name.0.web=service-fw
  - com.byjg.easyhaproxy.acl-value.0.web="src 10.0.1.0/24 10.0.2.2"
  - com.byjg.easyhaproxy.acl-name.1.web=service-fw
  - com.byjg.easyhaproxy.acl-value.1.web="path_beg -i /api/v2"
It would render to:
frontend http_in_80_1
    bind *:80
    mode http
    # same ACL name, I think it combines them (both must be true)
    acl service-fw src 10.0.1.0/24 10.0.2.2
    acl service-fw path_beg -i /api/v2
    http-request deny if !service-fw
    acl is_rule_dns_1_1 hdr(host) -i dns
    acl is_rule_dns_1_2 hdr(host) -i dns:80
    use_backend srv_dns_80_1 if is_rule_dns_1_1 OR is_rule_dns_1_2
Different acl-names would render another http-request deny if statement.
Thoughts?
I refactored the code and added some tests. Would you enable CI for this repo? I don't know if you have any preference. :)
Some of the other proxy plugins I have in Dokku use comma-separated lists. Is that the method here, or should we have 1 definition per domain?
Hi, this is not an issue but a question.
If a request comes in for a host (say foo.com) on port 80, I want to redirect it to port 443, i.e. redirect http://foo.com to https://foo.com.
How can I do this with easy-haproxy? I will be using a stack, so I'll be setting up my labels in my yml file.
Thanks for this nice tool!
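For reference, the plain HAProxy way to express this redirect looks like the sketch below (frontend name is a placeholder; whether EasyHAProxy exposes a label that renders this is a separate question):

```haproxy
frontend http_in
    bind *:80
    mode http
    # permanent redirect to HTTPS for every request arriving over plain HTTP
    http-request redirect scheme https code 301 unless { ssl_fc }
```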
The documentation doesn't say which labels get scanned for containers. Is it possible to specify a network? For instance, I'd want to run this in the bridge network, as that is where all Dokku apps would otherwise be.
After a bit of testing, it looks like if the containers aren't both in a custom network, they can't route to each other because the routing happens via the assumed network alias.
For the default bridge network:
If you run the same application stack on the default bridge network, you need to manually create links between the containers (using the legacy --link flag). These links need to be created in both directions, so you can see this gets complex with more than two containers which need to communicate. Alternatively, you can manipulate the /etc/hosts files within the containers, but this creates problems that are difficult to debug.
I think in this case we want the internal IP address if the container is on the bridge network. Maybe we can have a "shadow" label that gets injected into the labels here and then picked off later when we set the upstreams here?
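Reading that internal IP could be sketched as a pure function over the attributes docker-py already returns (function name and attribute shapes are my own illustration, not existing repo code):

```python
def bridge_ip(attrs: dict):
    """Return the container's IP on the default bridge network, or None
    if the container is not attached to it."""
    networks = attrs.get("NetworkSettings", {}).get("Networks", {})
    return networks.get("bridge", {}).get("IPAddress") or None

# example shape as returned by docker-py's Container.attrs
attrs = {"NetworkSettings": {"Networks": {"bridge": {"IPAddress": "172.17.0.5"}}}}
print(bridge_ip(attrs))  # 172.17.0.5
print(bridge_ip({}))     # None
```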
I use your setup for Docker containers. It looks solid, but since I rely on HAProxy for my other projects, I'd like to have my "own" common logging setup. Currently there's no way to do it.
My request is to add extra configuration for the sections shown below. Here's my proposal for adding extra configuration to those configuration sections.
easyhaproxy.wiki.host: wiki.example.org,wiki.example.org:1080
easyhaproxy.wiki.options.extra: |
  stats socket /sockets/hadmin.sock mode 660 level admin
  nbthread 16
easyhaproxy.wiki.defaults.extra: |
  timeout connect 5s
  timeout client 50s
  timeout server 5s
  maxconn 1000000
  timeout client-fin 5s
  unique-id-format %{+X}o\ %cp-%ci-%Ts-%H-%rt-%ms
  unique-id-header X-Unique-ID
  log-format "%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r %ID"
easyhaproxy.wiki.frontend.extra: |
  acl wiki_blacklist src 10.20.30.0/24
  tcp-request connection reject if !wiki_blacklist
easyhaproxy.wiki.backend.extra: |
  acl wiki_burstable src 192.168.0.0/16
  acl wiki_burstable src 10.0.0.0/8
  acl wiki_burstable src 172.16.0.0/12
  http-request track-sc0 src
  http-request deny deny_status 429 if { sc_http_req_rate(0) gt 33 } !wiki_burstable
  http-response add-header Connection close
Create the possibility to attach plugins to EasyHAProxy processing.
Every time a new site is discovered, the plugin will be invoked to process some feature.
There are two types of plugin, and the type is configured when the plugin is created.
Hi! I'm looking at ways to add new proxy implementations for Dokku, and this repo is in-line with what I'm doing around Caddy and Traefik.
One thing I'd like to do is change the label prefix to haproxy instead of easyhaproxy. I believe this will make it easier for developers to integrate with (since they'll know what haproxy is).
Would it be possible to expose an environment variable for customizing the label prefix?
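A minimal sketch of what that could look like; note that EASYHAPROXY_LABEL_PREFIX is a hypothetical variable name I made up, not an existing EasyHAProxy setting:

```python
import os

# hypothetical variable name; not an existing EasyHAProxy setting
LABEL_PREFIX = os.getenv("EASYHAPROXY_LABEL_PREFIX", "easyhaproxy")

def label_key(definition: str, key: str) -> str:
    """Build a fully qualified label name, e.g. 'easyhaproxy.web.host'
    (or 'haproxy.web.host' if the prefix is overridden)."""
    return f"{LABEL_PREFIX}.{definition}.{key}"

print(label_key("web", "host"))
```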
Hi, I'm trying to use the standalone docker discovery but can't make it work.
root@z-srv-1:~# docker inspect haproxy | jq '.[].Config.Env'
[
"EASYHAPROXY_DISCOVER=docker",
"EASYHAPROXY_LOG_LEVEL=DEBUG",
"HAPROXY_LOG_LEVEL=ERROR",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"RELEASE_VERSION=\"4.4.0\"",
"TZ=Etc/UTC"
]
root@z-srv-1:~# docker inspect haproxy | jq '.[].Mounts'
[
{
"Type": "bind",
"Source": "/var/run/docker.sock",
"Destination": "/var/run/docker.sock",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
}
]
root@z-srv-1:~# docker inspect grafana | jq '.[].Config.Labels'
{
"easyhaproxy.grafana.host": "grafana.zasdaym.my.id",
"easyhaproxy.grafana.localport": "3000",
"maintainer": "Grafana Labs <[email protected]>"
}
root@z-srv-1:~# docker exec haproxy cat /etc/haproxy/haproxy.cfg
global
    log stdout format raw local0 err
    maxconn 2000
    tune.ssl.default-dh-param 2048
    # intermediate configuration
    ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
    ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
    ssl-default-bind-options prefer-client-ciphers no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets
    ssl-default-server-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
    ssl-default-server-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
    ssl-default-server-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets
    ssl-dh-param-file /etc/haproxy/dhparam
defaults
    log global
    option httplog
    timeout connect 3s
    timeout client 10s
    timeout server 10m
frontend stats
    bind *:1936
    mode http
    stats enable
    stats hide-version
    stats realm Haproxy\ Statistics
    stats uri /
    default_backend srv_stats
backend srv_stats
    mode http
    server Local 127.0.0.1:1936
backend certbot_backend
    mode http
    server certbot 127.0.0.1:2080
After hitting Let's Encrypt's limit of 5 certificates for the exact same set of domains within 168 hours, Certbot cannot obtain new certificates until the window set by Let's Encrypt expires. Nevertheless, it keeps sending requests, generating unnecessary network traffic and straining system resources. Certbot should stop making redundant certificate requests once the rate limit is reached.
Possible strategies to address the rate-limit issue:
Rate Limit Check: Implement a check within Certbot to ascertain if the rate limit has been reached before attempting to request a new certificate.
Backoff and Retry Logic: Introduce an exponential backoff and retry logic to Certbot, reducing the frequency of requests to Let's Encrypt when the rate limit has been reached.
Configurable Rate Limit Alerts: Add a feature in Certbot that alerts the administrator when the rate limit is close to being reached for proactive manual intervention.
Rate Limit Documentation: Improve the project's documentation to clearly explain Let's Encrypt's rate limits, helping users understand and potentially adjust their certificate issuance strategies.
Adjust Certificate Request Strategy: Consider adjusting the certificate request strategy to prevent hitting the rate limit by grouping multiple domains under fewer certificates or adjusting the timing of certificate requests.
Next Eligible Time Retry or Notification: Implement a feature to notify the admin when the next eligible time for certificate issuance arrives, or program Certbot to automatically attempt a new request at this given time.
Switching Certificate Provider (Let's Encrypt vs ZeroSSL): Consider switching to ZeroSSL, which offers unlimited certificates without rate limits, and provides a user-friendly web interface for certificate management.
Automated Certificate Provider Switch: Implement an automated solution where Let's Encrypt is the primary choice, and the system automatically switches to ZeroSSL when the rate limit is reached on Let's Encrypt.
These solutions aim to mitigate the issue of hitting rate limits, enhance system performance and stability, and provide flexibility in handling SSL certificates. As always, any changes should be thoroughly tested to ensure they do not introduce new issues or conflicts with existing functionality.
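The backoff-and-retry idea above could be sketched like this (the rate-limit message pattern and the delay thresholds are assumptions; Let's Encrypt's actual error text should be checked against real responses):

```python
import re

# heuristic pattern; verify against Let's Encrypt's real error messages
RATE_LIMIT_RE = re.compile(r"too many certificates", re.IGNORECASE)

def hit_rate_limit(certbot_output: str) -> bool:
    """Detect a Let's Encrypt rate-limit response in certbot's output."""
    return bool(RATE_LIMIT_RE.search(certbot_output))

def next_retry_delay(attempt: int, base: int = 3600, cap: int = 86400) -> int:
    """Exponential backoff in seconds: 1h, 2h, 4h, ... capped at 24h."""
    return min(base * (2 ** attempt), cap)

print(next_retry_delay(0), next_retry_delay(3))  # 3600 28800
```

On each failed issuance the attempt counter would increase and the next request be scheduled after next_retry_delay(attempt); a successful issuance resets the counter.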
This would make it so definitions scope their rules. Traefik does similar with their "routers" idea: https://doc.traefik.io/traefik/routing/providers/docker/
It would be cool to have automatic Let's Encrypt support for this codebase. While HAProxy doesn't support it directly (yet? ever?), I found this repo from the official GitHub account, documented here on their blog.
Might be a cool addition, maybe something like easyhaproxy.letsencrypt=true? The config file could be templated based on env vars for the easyhaproxy container :)
It would be great to somehow be able to configure the log-level of haproxy, as well as where app request logs go (assuming we can write them to a file per backend or something).
Trying to enable certbot, but I get this error (on LetsEncrypt, but ZeroSSL has the same issue):
[CERTBOT] 08/15/23 14:09:07 [DEBUG]: [not_found] Request new certificate for -DOMAIN-
[CERTBOT] 08/15/23 14:09:08 [INFO]: Account registered.
[CERTBOT] 08/15/23 14:09:08 [WARN]: Saving debug log to /var/log/letsencrypt/letsencrypt.log
[CERTBOT] 08/15/23 14:09:08 [INFO]: Requesting a certificate for -DOMAIN-
[CERTBOT] 08/15/23 14:09:10 [WARN]: Some challenges have failed.
[CERTBOT] 08/15/23 14:09:10 [WARN]: Ask for help or search for solutions at https://community.letsencrypt.org. See the logfile /var/log/letsencrypt/letsencrypt.log or re-run Certbot with -v for more details.
[CERTBOT] 08/15/23 14:09:10 [INFO]: Certbot failed to authenticate some domains (authenticator: standalone). The Certificate Authority reported these problems:
[CERTBOT] 08/15/23 14:09:10 [INFO]: Domain: -DOMAIN-
[CERTBOT] 08/15/23 14:09:10 [INFO]: Type: connection
[CERTBOT] 08/15/23 14:09:10 [INFO]: Detail: 49.13.73.162: Fetching http://-DOMAIN-/.well-known/acme-challenge/fXpTY0iMRtl5GuLhg07-uBv75L9NTJrCSDUfJr82zL8: Connection refused
[CERTBOT] 08/15/23 14:09:10 [INFO]:
[CERTBOT] 08/15/23 14:09:10 [INFO]: Hint: The Certificate Authority failed to download the challenge files from the temporary standalone webserver started by Certbot on port 2080. Ensure that the listed domains point to this machine and that it can accept inbound connections from the internet.
[CERTBOT] 08/15/23 14:09:10 [INFO]:
[CERTBOT] 08/15/23 14:09:10 [DEBUG]: Freeze issuing ssl for -DOMAIN- due failure. The certificate is not_found
(domain redacted to -DOMAIN-)
This is when running :master; :latest does not seem to listen on port 443 at all.
Now that HAProxy supports HTTP/3 in its Docker image, perhaps support could be added to this project? It's the only thing holding me back from using it at the moment.
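For context, a rough sketch of what the generated frontend might need for HTTP/3; this assumes HAProxy 2.6+ built with QUIC support, and the certificate path and backend name are placeholders:

```haproxy
frontend https_in
    bind *:443 ssl crt /etc/haproxy/certs/site.pem alpn h2,http/1.1
    # UDP/QUIC listener for HTTP/3 (requires a QUIC-enabled HAProxy build)
    bind quic4@:443 ssl crt /etc/haproxy/certs/site.pem alpn h3
    mode http
    # tell clients that HTTP/3 is available on the same port
    http-response set-header alt-svc "h3=\":443\"; ma=3600"
    default_backend my_backend
```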