
marathon-lb's Introduction

marathon-lb

Marathon-lb is a tool for managing HAProxy by consuming Marathon's app state. HAProxy is a fast, efficient, battle-tested, highly available load balancer with many advanced features which power a number of high-profile websites.

Features

  • Stateless design: no direct dependency on any third-party state store like ZooKeeper or etcd (except through Marathon)
  • Idempotent and deterministic: scales horizontally
  • Highly scalable: can achieve line-rate per instance, with multiple instances providing fault-tolerance and greater throughput
  • Real-time LB updates, via Marathon's event bus
  • Support for Marathon's health checks
  • Multi-cert TLS/SSL support
  • Zero-downtime deployments
  • Per-service HAProxy templates
  • DC/OS integration
  • Automated Docker image builds (mesosphere/marathon-lb)
  • Global HAProxy templates which can be supplied at launch
  • Supports IP-per-task integration, such as Project Calico
  • Includes the tini zombie reaper

Getting Started

Take a look at the marathon-lb wiki for example usage, templates, and more.

Architecture

The marathon-lb script marathon_lb.py connects to the Marathon API to retrieve all running apps, generates an HAProxy config, and reloads HAProxy. By default, marathon-lb binds to the service port of every application and sends incoming requests to the application instances.

Services are exposed on their service port (see Service Discovery & Load Balancing for reference) as defined in their Marathon definition. Furthermore, apps are only exposed on LBs which have the same LB tag (or group) as defined in the Marathon app's labels (using HAPROXY_GROUP). HAProxy parameters can be tuned by specifying labels in your app.

To create a virtual host or hosts the HAPROXY_{n}_VHOST label needs to be set on the given application. Applications with a vhost set will be exposed on ports 80 and 443, in addition to their service port. Multiple virtual hosts may be specified in HAPROXY_{n}_VHOST using a comma as a delimiter between hostnames.
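For example, a minimal app definition (with hypothetical hostnames) exposing two virtual hosts on the first service port might look like:

{
  "id": "my-app",
  "labels": {
    "HAPROXY_GROUP": "external",
    "HAPROXY_0_VHOST": "myapp.example.com,www.myapp.example.com"
  }
}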

All applications are also exposed on port 9091, using the X-Marathon-App-Id HTTP header. See the documentation for HAPROXY_HTTP_FRONTEND_APPID_HEAD in the templates section.

You can access the HAProxy statistics via :9090/haproxy?stats, and you can retrieve the current HAProxy config from the :9090/_haproxy_getconfig endpoint.
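For example, assuming marathon-lb is reachable on localhost:

$ curl "http://localhost:9090/haproxy?stats"
$ curl "http://localhost:9090/_haproxy_getconfig"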

Deployment

The package is currently available from the Universe. To deploy marathon-lb on the public slaves in your DC/OS cluster, simply run:

dcos package install marathon-lb

To configure a custom ssl-certificate, set the dcos cli option ssl-cert to your concatenated cert and private key in .pem format. For more details see the HAProxy documentation.

For further customization, templates can be added by pointing the dcos cli option template-url to a tarball containing a directory templates/. See the comments in the script on how to name those.
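As a sketch, assuming both options nest under the marathon-lb key (as in the internal LB example later in this document), an options.json might look like the following; the certificate contents and template URL are placeholders:

{
  "marathon-lb": {
    "ssl-cert": "-----BEGIN CERTIFICATE-----\n...\n-----END RSA PRIVATE KEY-----\n",
    "template-url": "https://example.com/mlb-templates.tgz"
  }
}

$ dcos package install --options=options.json marathon-lb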

Docker

Synopsis: docker run -e PORTS=$portnumber --net=host mesosphere/marathon-lb sse|poll ...

You must set the PORTS environment variable to specify the port(s) to which HAProxy binds. Syntax: docker run -e PORTS=9090 mesosphere/marathon-lb sse [other args]

You can pass in your own certificates for the SSL frontend by setting the HAPROXY_SSL_CERT environment variable. If you need more than one certificate you can specify additional ones by setting HAPROXY_SSL_CERT0 - HAPROXY_SSL_CERT100.
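For instance, a sketch passing two certificates from local files (the paths are placeholders):

$ docker run -e PORTS=9090 \
    -e HAPROXY_SSL_CERT="$(cat /path/to/default.pem)" \
    -e HAPROXY_SSL_CERT0="$(cat /path/to/extra.pem)" \
    --net=host mesosphere/marathon-lb sse [other args]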

sse mode

In SSE mode, the script connects to the Marathon events endpoint to get notified about state changes. This only works with Marathon 0.11.0 or newer.

Syntax: docker run mesosphere/marathon-lb sse [other args]

poll mode

If you can't use the HTTP callbacks, the script can periodically poll the API to get the scheduler's state.

Syntax: docker run mesosphere/marathon-lb poll [other args]

To change the poll interval (defaults to 60s), you can set the POLL_INTERVAL environment variable.
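For example, to poll every 10 seconds:

$ docker run -e PORTS=9090 -e POLL_INTERVAL=10 --net=host mesosphere/marathon-lb poll [other args]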

Direct Invocation

You can also run the update script directly. To generate an HAProxy configuration from Marathon running at localhost:8080 with the marathon_lb.py script, run:

$ ./marathon_lb.py --marathon http://localhost:8080 --group external --strict-mode --health-check

It is possible to pass the --auth-credentials option if your Marathon instance requires authentication:

$ ./marathon_lb.py --marathon http://localhost:8080 --auth-credentials=admin:password

It is possible to get the auth credentials (user & password) from Vault if you define the following environment variables before running marathon-lb: VAULT_TOKEN, VAULT_HOST, VAULT_PORT, and VAULT_PATH, where VAULT_PATH is the root path under which your user and password are located.
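A sketch with placeholder values (the secret path is hypothetical; adjust to wherever your credentials live in Vault):

$ export VAULT_TOKEN=my-vault-token
$ export VAULT_HOST=vault.example.com
$ export VAULT_PORT=8200
$ export VAULT_PATH=secret/marathon-lb
$ ./marathon_lb.py --marathon http://localhost:8080 --group external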

This will refresh haproxy.cfg, and if there are any changes, HAProxy will automatically be reloaded. Only apps with the label HAPROXY_GROUP=external will be exposed on this LB.

marathon_lb.py has a lot of additional functionality like sticky sessions, HTTP to HTTPS redirection, SSL offloading, virtual host support and templating capabilities.

To get the full documentation run:

$ ./marathon_lb.py --help

Providing SSL Certificates

You can provide your SSL certificate paths to be placed in frontend marathon_https_in section with --ssl-certs.

$ ./marathon_lb.py --marathon http://localhost:8080 --group external --ssl-certs /etc/ssl/site1.co,/etc/ssl/site2.co --health-check --strict-mode

If you are using the script directly, you have two options:

  • Provide nothing, and the config will use /etc/ssl/cert.pem as the certificate path. Put the certificate at this path, or edit the config for the correct path.
  • Provide the --ssl-certs command-line argument, and the config will use these paths.

If you are using the provided run script or Docker image, you have three options:

  • Provide your certificate text in the HAPROXY_SSL_CERT environment variable. Its contents will be written to /etc/ssl/cert.pem, and the config will use this path unless you specify extra certificate paths as in the next option.
  • Provide SSL certificate paths with the --ssl-certs command-line argument. Your config will use these certificate paths.
  • Provide nothing, and a self-signed certificate will be created at /etc/ssl/cert.pem and used by the config.

Skipping Configuration Validation

You can skip the configuration file validation (performed by invoking HAProxy) if you don't have HAProxy installed. This is especially useful if you are running HAProxy in Docker containers.

$ ./marathon_lb.py --marathon http://localhost:8080 --group external --skip-validation

Using HAProxy Maps for Backend Lookup

You can use HAProxy maps to speed up the lookup from web application (vhost) to backend. This is very useful for large installations, where the traditional vhost-to-backend rule comparison takes considerable time because each rule is compared sequentially. An HAProxy map creates a hash-based lookup table, so it is fast compared to the other approach. This is supported in marathon-lb via the --haproxy-map flag.

$ ./marathon_lb.py --marathon http://localhost:8080 --group external --haproxy-map

Currently it creates a lookup dictionary only for the Host header (both HTTP and HTTPS) and the X-Marathon-App-Id header. For path-based routing and auth, it falls back to the usual backend rule comparison.

API Endpoints

Marathon-lb exposes a few endpoints on port 9090 (by default). They are:

Endpoint Description
:9090/haproxy?stats HAProxy stats endpoint. This produces an HTML page which can be viewed in your browser, providing various statistics about the current HAProxy instance.
:9090/haproxy?stats;csv This is a CSV version of the stats above, which can be consumed by other tools. For example, it's used in the zdd.py script.
:9090/_haproxy_health_check HAProxy health check endpoint. Returns 200 OK if HAProxy is healthy.
:9090/_haproxy_getconfig Returns the HAProxy config file as it was when HAProxy was started. Implemented in getconfig.lua.
:9090/_haproxy_getvhostmap Returns the HAProxy vhost to backend map. This endpoint returns HAProxy map file only when the --haproxy-map flag is enabled, it returns an empty string otherwise. Implemented in getmaps.lua.
:9090/_haproxy_getappmap Returns the HAProxy app ID to backend map. Like _haproxy_getvhostmap, this requires the --haproxy-map flag to be enabled and returns an empty string otherwise. Also implemented in getmaps.lua.
:9090/_haproxy_getpids Returns the PIDs for all HAProxy instances within the current process namespace. This literally returns $(pidof haproxy). Implemented in getpids.lua. This is also used by the zdd.py script to determine if connections have finished draining during a deploy.
:9090/_mlb_signal/hup* Sends a SIGHUP signal to the marathon-lb process, causing it to fetch the running apps from Marathon and reload the HAProxy config as though an event was received from Marathon.
:9090/_mlb_signal/usr1* Sends a SIGUSR1 signal to the marathon-lb process, causing it to restart HAProxy with the existing config, without checking Marathon for changes.
:9090/metrics Exposes HAProxy metrics in prometheus format.

* These endpoints won't function when marathon-lb is in poll mode as there is no marathon-lb process to be signaled in this mode (marathon-lb exits after each poll).

HAProxy Configuration

App Labels

App labels are specified in the Marathon app definition. These can be used to override HAProxy behaviour. For example, to specify the external group for an app with a virtual host named service.mesosphere.com:

{
  "id": "http-service",
  "labels": {
    "HAPROXY_GROUP":"external",
    "HAPROXY_0_VHOST":"service.mesosphere.com"
  }
}

Some labels are specified per service port. These are denoted with the {n} parameter in the label key, where {n} corresponds to the service port index, beginning at 0.
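For example, a two-port app (labels chosen for illustration) could expose a vhost on the first service port (index 0) and force TCP mode on the second (index 1):

{
  "id": "multi-port-service",
  "labels": {
    "HAPROXY_GROUP": "external",
    "HAPROXY_0_VHOST": "service.example.com",
    "HAPROXY_1_MODE": "tcp"
  }
}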

See the configuration doc for the full list of labels.

Templates

Marathon-lb global templates (as listed in the Longhelp) can be overwritten in two ways:

  • By creating an environment variable in the marathon-lb container
  • By placing configuration files in the templates/ directory (relative to where the script is run from)

For example, to replace HAPROXY_HTTPS_FRONTEND_HEAD with this content:

frontend new_frontend_label
  bind *:443 ssl crt /etc/ssl/cert.pem
  mode http

Then this environment variable could be added to the Marathon-LB configuration:

"HAPROXY_HTTPS_FRONTEND_HEAD": "\\nfrontend new_frontend_label\\n  bind *:443 ssl {sslCerts}\\n  mode http"

Alternately, a file called HAPROXY_HTTPS_FRONTEND_HEAD could be placed in the templates/ directory through the use of an artifact URI.

Additionally, some templates can also be overridden per app service port. You may add your own templates to the Docker image, or provide them at startup.

See the configuration doc for the full list of templates.

Overridable Templates

Some templates may be overridden using app labels, as per the labels section. Strings are interpreted as literal HAProxy configuration parameters, with substitutions respected (as per the templates section). The HAProxy configuration will be validated for correctness before reloading HAProxy after changes. Note: Since the HAProxy config is checked before reloading, if an app's HAProxy labels aren't syntactically correct, HAProxy will not be reloaded, which may result in a stale config.

Here is an example for a service called http-service which requires that http-keep-alive be disabled:

{
  "id": "http-service",
  "labels":{
    "HAPROXY_GROUP":"external",
    "HAPROXY_0_BACKEND_HTTP_OPTIONS":"  option forwardfor\n  no option http-keep-alive\n  http-request set-header X-Forwarded-Port %[dst_port]\n  http-request add-header X-Forwarded-Proto https if { ssl_fc }\n"
  }
}

The full list of per service port templates which can be specified are documented here.

HAProxy Global Default Options

As a shortcut to add haproxy global default options (without overriding the global template) a comma-separated list of options may be specified via the HAPROXY_GLOBAL_DEFAULT_OPTIONS environment variable. The default value when not specified is redispatch,http-server-close,dontlognull; as an example, to add the httplog option (and keep the existing defaults), one should specify HAPROXY_GLOBAL_DEFAULT_OPTIONS=redispatch,http-server-close,dontlognull,httplog.

  • Note that this setting has no effect when the HAPROXY_HEAD template has been overridden.
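As a sketch, when running the Docker image directly, the variable could be set like this (other arguments elided):

$ docker run -e PORTS=9090 \
    -e HAPROXY_GLOBAL_DEFAULT_OPTIONS=redispatch,http-server-close,dontlognull,httplog \
    --net=host mesosphere/marathon-lb sse [other args]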

Operational Best Practices

  • Use service ports within the reserved range (which is 10000 to 10100 by default). This will prevent port conflicts, and ensure reloads don't result in connection errors.
  • Avoid using the HAPROXY_{n}_PORT label; prefer defining service ports.
  • Consider running multiple marathon-lb instances. In practice, 3 or more should be used to provide high availability for production workloads. Running 1 instance is never recommended, and unless you have significant load running more than 5 instances may not add value. The number of MLB instances you run will vary depending on workload and the amount of failure tolerance required. Note: do not run marathon-lb on every node in your cluster. This is considered an anti-pattern due to the implications of hammering the Marathon API and excess health checking.
  • Consider using a dedicated load balancer in front of marathon-lb to permit upgrades/changes. Common choices include an ELB (on AWS) or a hardware load balancer for on-premise installations.
  • Use separate marathon-lb groups (specified with --group) for internal and external load balancing. On DC/OS, the default group is external. A simple options.json for an internal load balancer would be:
  {
    "marathon-lb": {
      "name": "marathon-lb-internal",
      "haproxy-group": "internal",
      "bind-http-https": false,
      "role": ""
    }
  }
  • For HTTP services, consider setting VHost (and optionally a path) to access the service on ports 80 and 443. Alternatively, the service can be accessed on port 9091 using the X-Marathon-App-Id header. For example, to access an app with the ID tweeter:
$ curl -vH "X-Marathon-App-Id: /tweeter" marathon-lb.marathon.mesos:9091/
*   Trying 10.0.4.74...
* Connected to marathon-lb.marathon.mesos (10.0.4.74) port 9091 (#0)
> GET / HTTP/1.1
> Host: marathon-lb.marathon.mesos:9091
> User-Agent: curl/7.48.0
> Accept: */*
> X-Marathon-App-Id: /tweeter
>
< HTTP/1.1 200 OK
  • Some of the features of marathon-lb assume that it is the only instance of itself running in a PID namespace. i.e. marathon-lb assumes that it is running in a container. Certain features like the /_mlb_signal endpoints and the /_haproxy_getpids endpoint (and by extension, zero-downtime deployments) may behave unexpectedly if more than one instance of marathon-lb is running in the same PID namespace or if there are other HAProxy processes in the same PID namespace.
  • Sometimes it is desirable to get detailed container and HAProxy logging for easier debugging as well as viewing connection logging to frontends and backends. This can be achieved by setting the HAPROXY_SYSLOGD environment variable or container-syslogd value in options.json like so:
  {
    "marathon-lb": {
      "container-syslogd": true
    }
  }

Zero-downtime Deployments

  • Please note that zdd.py is not to be used in a production environment and is purely developed for demonstration purposes.

Marathon-lb is able to perform canary style blue/green deployment with zero downtime. To execute such deployments, you must follow certain patterns when using Marathon.

The deployment method is described in this Marathon document. Marathon-lb provides an implementation of the aforementioned deployment method with the script zdd.py. To perform a zero downtime deploy using zdd.py, you must:

  • Specify the HAPROXY_DEPLOYMENT_GROUP and HAPROXY_DEPLOYMENT_ALT_PORT labels in your app template (see the example after this list)
    • HAPROXY_DEPLOYMENT_GROUP: This label uniquely identifies a pair of apps belonging to a blue/green deployment, and will be used as the app name in the HAProxy configuration
    • HAPROXY_DEPLOYMENT_ALT_PORT: An alternate service port is required because Marathon requires service ports to be unique across all apps
  • Only use 1 service port: multiple ports are not yet implemented
  • Use the provided zdd.py script to orchestrate the deploy: the script will make API calls to Marathon, and use the HAProxy stats endpoint to gracefully terminate instances
  • The marathon-lb container must be run in privileged mode (to execute iptables commands) due to the issues outlined in the excellent blog post by the Yelp engineering team found here
  • If you have long-lived TCP connections using the same HAProxy instances, it may cause the deploy to take longer than necessary. The script will wait up to 5 minutes (by default) for connections to drain from HAProxy between steps, but any long-lived TCP connections will cause old instances of HAProxy to stick around.
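To illustrate, here is a minimal sketch of the relevant labels in a Marathon app definition (the app ID, alternate port, and instance count are placeholders; HAPROXY_DEPLOYMENT_TARGET_INSTANCES is described in the traffic-splitting section below):

{
  "id": "nginx",
  "instances": 10,
  "labels": {
    "HAPROXY_GROUP": "external",
    "HAPROXY_DEPLOYMENT_GROUP": "nginx",
    "HAPROXY_DEPLOYMENT_ALT_PORT": "10001",
    "HAPROXY_DEPLOYMENT_TARGET_INSTANCES": "10"
  }
}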

An example minimal configuration for a test instance of nginx is included here. You might execute a deployment from a CI tool like Jenkins with:

./zdd.py -j 1-nginx.json -m http://master.mesos:8080 -f -l http://marathon-lb.marathon.mesos:9090 --syslog-socket /dev/null

Zero downtime deployments are accomplished through the use of a Lua module, which reports the number of HAProxy processes that are currently running via the stats endpoint at /_haproxy_getpids. After a restart, there will be multiple HAProxy PIDs until all remaining connections have gracefully terminated. By waiting for all connections to complete, you may safely and deterministically drain tasks. A caveat of this, however, is that if you have any long-lived connections on the same LB, HAProxy will continue to run and serve those connections until they complete, thereby breaking this technique.

The ZDD script includes the ability to specify a pre-kill hook, which is executed before draining tasks are terminated. This allows you to run your own automated checks against the old and new app before the deploy continues.
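For example, assuming the flag is named --pre-kill-hook (check ./zdd.py --help to confirm) and my-check-script.sh is your own script, a deploy might look like:

$ ./zdd.py -j 1-nginx.json -m http://master.mesos:8080 -f -l http://marathon-lb.marathon.mesos:9090 --pre-kill-hook ./my-check-script.sh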

Traffic Splitting Between Blue/Green Apps

Zdd has support for splitting traffic between two versions of the same app (a 'blue' version and a 'green' version) by having instances of both versions live at the same time. This is supported with the help of the HAPROXY_DEPLOYMENT_NEW_INSTANCES label.

When you run zdd with the --new-instances flag, it creates only the specified number of instances of the new app and deletes the same number of instances from the old app (instead of the normal approach of creating all instances of the new app and deleting all of the old), ensuring that the total number of instances across the new and old apps equals HAPROXY_DEPLOYMENT_TARGET_INSTANCES.

Example: Consider the same nginx app example, where there are 10 instances of nginx running image version v1. We can now use zdd to create 2 instances of version v2 and retain 8 instances of v1, so that traffic is split in the ratio 80:20 (old:new).

Creating 2 instances of the new version automatically deletes 2 instances of the existing version. You can do this with the following command:

$ ./zdd.py -j 1-nginx.json -m http://master.mesos:8080 -f -l http://marathon-lb.marathon.mesos:9090 --syslog-socket /dev/null --new-instances 2

This state, where instances of both the old and new versions of the same app are live at the same time, is called the hybrid state.

When a deployment group is in the hybrid state, it needs to be converted to completely the current version or completely the previous version before deploying any further versions. This can be done with the --complete-cur and --complete-prev flags in zdd.

When you run the command below, it converts all instances to the new version, so that the traffic split ratio becomes 0:100 (old:new), and deletes the old app. This is graceful, as it follows the usual zdd procedure of waiting for tasks/instances to drain before deleting them.

$ ./zdd.py -j 1-nginx.json -m http://master.mesos:8080 -f -l http://marathon-lb.marathon.mesos:9090 --syslog-socket /dev/null --complete-cur

Similarly, you can use the --complete-prev flag to convert all instances to the old version (essentially a rollback), so that the traffic split ratio becomes 100:0 (old:new), and delete the new app.

Currently only one hop of traffic splitting is supported, so you can specify the number of new instances (directly proportional to the traffic split ratio) only when the app has all instances of the same version (completely blue or completely green). This implies that the --new-instances flag cannot be specified in the hybrid state to change the traffic split ratio (instance ratio), as updating a Marathon label (HAPROXY_DEPLOYMENT_NEW_INSTANCES) currently triggers a new deployment in Marathon, which will not be graceful. For the example above, the traffic split ratio is 100:0 -> 80:20 -> 0:100, with only one hop in which both versions receive traffic simultaneously.

Mesos with IP-per-task Support

Marathon-lb supports load balancing for applications that use the Mesos IP-per-task feature, whereby each task is assigned a unique, accessible IP address. For these tasks, services are directly accessible via the configured discovery ports, and there is no host port mapping. Note that due to limitations with Marathon (see mesosphere/marathon#3636), configured service ports are not exposed to marathon-lb for IP-per-task apps.

For these apps, if the service ports are missing from the Marathon app data, marathon-lb will automatically assign port values from a configurable range, if you specify one. The range is configured using the --min-serv-port-ip-per-task and --max-serv-port-ip-per-task options. While port assignment is deterministic, it is not guaranteed to remain stable if you change the current set of deployed apps. In other words, when you deploy a new app, the port assignments may change.
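A sketch of running with such a range (the bounds are placeholders):

$ ./marathon_lb.py --marathon http://localhost:8080 --group external \
    --min-serv-port-ip-per-task 10050 --max-serv-port-ip-per-task 10100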

Zombie reaping

When running with isolated containers, you may need to take care of reaping orphaned child processes. HAProxy typically produces orphan processes because of its two-step reload mechanism. Marathon-LB uses tini for this purpose. When running in a container without PID namespace isolation, setting the TINI_SUBREAPER environment variable is recommended.
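For example (a sketch; tini treats any non-empty value as enabling the subreaper, but verify against the tini documentation):

$ docker run -e PORTS=9090 -e TINI_SUBREAPER=true --net=host mesosphere/marathon-lb sse [other args]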

Contributing

PRs are welcome, but here are a few general guidelines:

  • Avoid making changes which may break existing behaviour

  • Document new features

  • Update/include tests for new functionality. To install dependencies and run tests:

    pip install -r requirements-dev.txt
    nosetests
    
  • Use the pre-commit hook to automatically generate docs:

    bash /path/to/marathon-lb/scripts/install-git-hooks.sh
    

Using the Makefile and docker for development and testing

Running unit and integration tests is automated as make targets. Docker is required to use the targets as it will run all tests in containers.

Several environment variables can be set to control the image tags, DCOS version/variant, etc. Check the top of the Makefile for more info.

To run the unit tests:

make test-unit

To run the integration tests, a DCOS installation will be started via dcos-e2e. The installation of dcos-e2e and management of the cluster are all done in Docker containers. Since the installers are rather large downloads, it is beneficial to specify a value for DCOS_E2E_INSTALLERS_DIR. By default, DCOS_E2E_INSTALLERS_DIR is inside the .cache directory, which is removed upon make clean. You must provide a repository for the resultant Docker image to be pushed to via the CONTAINTER_REPO environment variable. It is assumed that the local Docker is already logged in, and the image will be pushed prior to launching the cluster.

To run the integration tests on the OSS variant of DCOS:

DCOS_E2E_INSTALLERS_DIR="${HOME}/dcos/installers" \
CONTAINTER_REPO="my_docker_user/my-marathon-lb-repo" make test-integration

To run the integration tests on the ENTERPRISE variant of DCOS:

DCOS_LICENSE_KEY_PATH=${HOME}/license.txt \
DCOS_E2E_VARIANT=enterprise \
DCOS_E2E_INSTALLERS_DIR="${HOME}/dcos/installers" \
CONTAINTER_REPO="my_docker_user/my-marathon-lb-repo" make test-integration

To run both unit and integration tests (add appropriate variables):

CONTAINTER_REPO="my_docker_user/my-marathon-lb-repo" make test

Troubleshooting your development environment setup

FileNotFoundError: [Errno 2] No such file or directory: 'curl-config'

You need to install the curl development package.

# Fedora
dnf install libcurl-devel

# Ubuntu
apt-get install libcurl-dev

ImportError: pycurl: libcurl link-time ssl backend (nss) is different from compile-time ssl backend (openssl)

The pycurl package linked against the wrong SSL backend when you installed it.

pip uninstall pycurl
export PYCURL_SSL_LIBRARY=nss
pip install -r requirements-dev.txt

Swap nss for whatever backend it mentions.

Release Process

  1. Create a GitHub release. Follow the convention of past releases. You can find something to copy/paste if you hit the "edit" button of a previous release.

  2. The GitHub release creates a tag, and Docker Hub will build off of that tag.

  3. Make a PR to Universe. The suggested way is to create one commit that only copies the previous dir to a new one, and then a second commit that makes the actual changes. If unsure, check out the previous commits to the marathon-lb directory in Universe.


marathon-lb's Issues

Stale HAProxy configurations remain listening

We are seeing an interesting issue with v1.0.1 of marathon-lb running in SSE mode.
On a reconfiguration, occasionally the older process remains listening, causing stale configuration and the newer process to fail.

I have not been able to reliably reproduce it, but it has caused several issues for us recently. It seems to occur when a service is flapping, or when many deployments are occurring at once. I have a theory that there is a race condition between when the new process has successfully started and when the pidfile is read by the next starting process. (https://github.com/mesosphere/marathon-lb/blob/master/service/haproxy/run#L30)

I'm interested in any ideas anyone might have on how to reliably reproduce this or how to guarantee it doesn't bite us anymore. We are not ready to upgrade to v1.1.0 because we don't currently have an up-front list of ports.

How can I add a stats page?

How can I enable the stats page?

I must add
stats uri /

after this block

listen stats
  bind 0.0.0.0:9090
  balance
  mode http
  stats enable
  monitor-uri /_haproxy_health_check
  acl getpid path /_haproxy_getpids
  http-request use-service lua.getpids if getpid
  acl getconfig path /_haproxy_getconfig
  http-request use-service lua.getconfig if getconfig

Or is there another way?
Thanks,
Luca

Marathon-lb and Marathon 0.14

Running Marathon 0.14 with Marathon-lb (current). Marathon-lb deploys fine, but when I try to connect to a port on the host (portmap), I get a 503 Service Unavailable.

JSON config file:

{
    "id": "marathon-lb",
    "cpus": 0.5,
    "mem": 64.0,
    "instances": 1,
    "container": {
        "type": "DOCKER",
        "docker": {
            "image": "mesosphere/marathon-lb",
            "network": "BRIDGE",
            "portMappings": [{
                "containerPort": 80,
                "hostPort": 0,
                "servicePort": 0,
                "protocol": "tcp"
            }, {
                "containerPort": 8080,
                "hostPort": 0,
                "servicePort": 0,
                "protocol": "tcp"
            }, {
                "containerPort": 9090,
                "hostPort": 0,
                "servicePort": 0,
                "protocol": "tcp"
            }, {
                "containerPort": 9091,
                "hostPort": 0,
                "servicePort": 0,
                "protocol": "tcp"
            }],
            "privileged": true
        }
    }
}

Log output:

Generating RSA private key, 2048 bit long modulus
..................................+++
............................................................................................................................................+++
e is 65537 (0x10001)
Signature ok
subject=/CN=*
Getting Private key
Reloading haproxy
[ALERT] 042/120529 (20) : Could not open configuration file /marathon-lb/haproxy.cfg : No such file or directory
Invalid config
marathon_lb: setting default value for HAPROXY_HEAD
marathon_lb: setting default value for HAPROXY_HTTP_FRONTEND_HEAD
marathon_lb: setting default value for HAPROXY_HTTP_FRONTEND_APPID_HEAD
marathon_lb: setting default value for HAPROXY_HTTPS_FRONTEND_HEAD
marathon_lb: setting default value for HAPROXY_FRONTEND_HEAD
marathon_lb: setting default value for HAPROXY_BACKEND_REDIRECT_HTTP_TO_HTTPS
marathon_lb: setting default value for HAPROXY_BACKEND_HEAD
marathon_lb: setting default value for HAPROXY_HTTP_FRONTEND_ACL
marathon_lb: setting default value for HAPROXY_HTTP_FRONTEND_APPID_ACL
marathon_lb: setting default value for HAPROXY_HTTPS_FRONTEND_ACL
marathon_lb: setting default value for HAPROXY_BACKEND_HTTP_OPTIONS
marathon_lb: setting default value for HAPROXY_BACKEND_HTTP_HEALTHCHECK_OPTIONS
marathon_lb: setting default value for HAPROXY_BACKEND_TCP_HEALTHCHECK_OPTIONS
marathon_lb: setting default value for HAPROXY_BACKEND_STICKY_OPTIONS
marathon_lb: setting default value for HAPROXY_BACKEND_SERVER_OPTIONS
marathon_lb: setting default value for HAPROXY_BACKEND_SERVER_HTTP_HEALTHCHECK_OPTIONS
marathon_lb: setting default value for HAPROXY_BACKEND_SERVER_TCP_HEALTHCHECK_OPTIONS
marathon_lb: setting default value for HAPROXY_FRONTEND_BACKEND_GLUE
marathon_lb: SSE Active, trying fetch events from from http://master.mesos:8080/v2/events
marathon_lb: fetching apps
marathon_lb: received event of type event_stream_attached
marathon_lb: GET http://master.mesos:8080/v2/apps?embed=apps.tasks
marathon_lb: got apps ['/marathon-lb', '/nginx', '/mesos-dns', '/product/database/mongo004', '/marathon-ha-demo/moby-counter']
marathon_lb: generating config
marathon_lb: reading running config from /marathon-lb/haproxy.cfg
marathon_lb: couldn't open config file for reading
marathon_lb: running config is different from generated config - reloading
marathon_lb: writing config to temp file /tmp/tmprg_5mjxp
marathon_lb: checking config with command: ['haproxy', '-f', '/tmp/tmprg_5mjxp', '-c']
Configuration file is valid
marathon_lb: moving temp file /tmp/tmprg_5mjxp to /marathon-lb/haproxy.cfg
marathon_lb: reloading using sv reload /marathon-lb/service/haproxy
ok: run: /marathon-lb/service/haproxy: (pid 17) 1s
Reloading haproxy
Configuration file is valid
cat: /tmp/haproxy.pid: No such file or directory
[WARNING] 042/120530 (59) : Can't read first line of the server state file '/var/state/haproxy/global'
[ALERT] 042/120530 (59) : sendmsg logger #1 failed: No such file or directory (errno=2)
[ALERT] 042/120530 (59) : sendmsg logger #2 failed: No such file or directory (errno=2)
[ALERT] 042/120530 (59) : sendmsg logger #1 failed: No such file or directory (errno=2)
[ALERT] 042/120530 (59) : sendmsg logger #2 failed: No such file or directory (errno=2)
[ALERT] 042/120530 (59) : sendmsg logger #1 failed: No such file or directory (errno=2)
[ALERT] 042/120530 (59) : sendmsg logger #2 failed: No such file or directory (errno=2)
[ALERT] 042/120530 (59) : sendmsg logger #1 failed: No such file or directory (errno=2)
[ALERT] 042/120530 (59) : sendmsg logger #2 failed: No such file or directory (errno=2)
marathon_lb: reload finished, took 1.1839869022369385 seconds
marathon_lb: updating tasks finished, took 1.3044192790985107 seconds
marathon_lb: skipping empty message

Any hints or ideas?

Alex

poll mode removed?

I believe POLL_INTERVAL is not read anywhere with current master.

Due to #35, I'm trying to switch to polling every second or so, but I don't see any flag.

Websocket disconnections and proper configuration

When using marathon-lb to route a websocket connection, we observe disconnections after a few seconds. I assume this is configurable via a setting - how do I go about persisting an established websocket connection through marathon-lb?

As an aside, how does the round-robin algorithm work when multiple instances of a service are available in the case of websockets?

need configurable syslog-socket parameter via environment variable

We have an unexpected value in the metric for the marathon_https_in frontend in Prometheus:

sum(increase(haproxy_frontend_http_responses_total{code="5xx"}[1m])) by (frontend)

Because the syslog-socket parameter value is hard-coded as /dev/null, we can't enable logging to see further information.

If it could be configured via an environment variable, we could apply a workaround like:

docker run ... -e SYSLOG_SOCKET=/dev/log -v /dev/log:/dev/log mesosphere/marathon-lb

Marathon-LB Keeps Dying

Environment:

OS: RHEL7.2
Docker: 1.10
Mesos: 0.26.0
Marathon: 0.14

I deployed mesosphere/marathon-lb:latest; it runs fine for about 2 hours and then dies. Looking through the logs, I can't spot anything abnormal:

marathon_lb: generating config
marathon_lb: reading running config from /marathon-lb/haproxy.cfg
marathon_lb: updating tasks finished, took 19.396591901779175 seconds
marathon_lb: received event of type event_stream_attached
marathon_lb: received event of type event_stream_detached
marathon_lb: setting default value for HAPROXY_HEAD
marathon_lb: setting default value for HAPROXY_HTTP_FRONTEND_HEAD
marathon_lb: setting default value for HAPROXY_HTTP_FRONTEND_APPID_HEAD
marathon_lb: setting default value for HAPROXY_HTTPS_FRONTEND_HEAD
marathon_lb: setting default value for HAPROXY_FRONTEND_HEAD
marathon_lb: setting default value for HAPROXY_BACKEND_REDIRECT_HTTP_TO_HTTPS
marathon_lb: setting default value for HAPROXY_BACKEND_HEAD
marathon_lb: setting default value for HAPROXY_HTTP_FRONTEND_ACL
marathon_lb: setting default value for HAPROXY_HTTP_FRONTEND_APPID_ACL
marathon_lb: setting default value for HAPROXY_HTTPS_FRONTEND_ACL
marathon_lb: setting default value for HAPROXY_BACKEND_HTTP_OPTIONS
marathon_lb: setting default value for HAPROXY_BACKEND_HTTP_HEALTHCHECK_OPTIONS
marathon_lb: setting default value for HAPROXY_BACKEND_TCP_HEALTHCHECK_OPTIONS
marathon_lb: setting default value for HAPROXY_BACKEND_STICKY_OPTIONS
marathon_lb: setting default value for HAPROXY_BACKEND_SERVER_OPTIONS
marathon_lb: setting default value for HAPROXY_BACKEND_SERVER_HTTP_HEALTHCHECK_OPTIONS
marathon_lb: setting default value for HAPROXY_BACKEND_SERVER_TCP_HEALTHCHECK_OPTIONS
marathon_lb: setting default value for HAPROXY_FRONTEND_BACKEND_GLUE
marathon_lb: SSE Active, trying fetch events from from http://master.mesos:8080/v2/events
marathon_lb: fetching apps
marathon_lb: GET http://master.mesos:8080/v2/apps?embed=apps.tasks
marathon_lb: got apps ['/nginx3', '/mesos-dns', '/marathon-lb-internal', '/product/database/mongo004', '/chronos-marathon', '/marathon-lb-internal-autoscale']
marathon_lb: generating config
marathon_lb: reading running config from /marathon-lb/haproxy.cfg
marathon_lb: updating tasks finished, took 23.323400735855103 seconds
marathon_lb: received event of type event_stream_attached
marathon_lb: received event of type event_stream_attached
marathon_lb: received event of type event_stream_detached
marathon_lb: setting default value for HAPROXY_HEAD
marathon_lb: setting default value for HAPROXY_HTTP_FRONTEND_HEAD
marathon_lb: setting default value for HAPROXY_HTTP_FRONTEND_APPID_HEAD
marathon_lb: setting default value for HAPROXY_HTTPS_FRONTEND_HEAD
marathon_lb: setting default value for HAPROXY_FRONTEND_HEAD
marathon_lb: setting default value for HAPROXY_BACKEND_REDIRECT_HTTP_TO_HTTPS
marathon_lb: setting default value for HAPROXY_BACKEND_HEAD
marathon_lb: setting default value for HAPROXY_HTTP_FRONTEND_ACL
marathon_lb: setting default value for HAPROXY_HTTP_FRONTEND_APPID_ACL
marathon_lb: setting default value for HAPROXY_HTTPS_FRONTEND_ACL
marathon_lb: setting default value for HAPROXY_BACKEND_HTTP_OPTIONS
marathon_lb: setting default value for HAPROXY_BACKEND_HTTP_HEALTHCHECK_OPTIONS
marathon_lb: setting default value for HAPROXY_BACKEND_TCP_HEALTHCHECK_OPTIONS
marathon_lb: setting default value for HAPROXY_BACKEND_STICKY_OPTIONS
[root@node261 mesos-slave]# docker ps -a
CONTAINER ID        IMAGE                                    COMMAND                  CREATED             STATUS                        PORTS               NAMES
d134889fc94e        mesosphere/marathon-lb                   "/marathon-lb/run sse"   2 hours ago         Exited (137) 12 minutes ago                       mesos-add95ef3-c87b-4c84-98c2-2497129cf7f8-S3.9f47ae5c-47d3-4e2c-94df-2ccbdad1b1f9
5e59cf9be0eb        mesosphere/marathon-lb                   "/marathon-lb/run sse"   3 hours ago         Exited (137) 2 hours ago                          mesos-add95ef3-c87b-4c84-98c2-2497129cf7f8-S3.99bc34cc-e92a-4209-996a-b80f13ba1ee9

e4466bce5be4        mesosphere/marathon-lb-autoscale         "/marathon-lb-autosca"   30 hours ago        Up 30 hours                                       mesos-add95ef3-c87b-4c84-98c2-2497129cf7f8-S3.a686d337-e286-4dac-b722-d71706bf0315
[root@node261 mesos-slave]# 

Anyone has encountered similar issues ?

Thanks!!

Alex

What is the PORTS variable used for?

Hi all,

This piece of code :

if [ -n "${PORTS-}" ]; then
  echo $PORTS > $HAPROXY_SERVICE/env/PORTS
else
  echo "Define $PORTS with a comma-separated list of ports to which HAProxy binds" >&2
  exit 1
fi

Is making my container crash with the following error :
/marathon-lb/run: line 10: PORTS: unbound variable

What is it used for? I can't find any reference to this env/PORTS file anywhere in the code: https://github.com/mesosphere/marathon-lb/search?utf8=%E2%9C%93&q=env%2FPORTS

Thanks!

Missing License

There is no license info or headers in the repository. I guess it's supposed to be Apache 2.0, same as Mesos, but it's not something that can be assumed :)

sse is not working anymore on Marathon v0.14.x

Using current master 4d521e1112af1aaaa73147fae886262107f1c6bf and Marathon 0.14.0-1.0.442.ubuntu1404 (RC), I'm running into issues.

I modified the code to print events and you can see that nothing's happening:

 * Reloading haproxy haproxy                                                                                                                                                                                                                                             [ OK ]
marathon-lb: updating tasks finished, took 0.039763689041137695 seconds
marathon-lb: SSE Active, trying fetch events from from http://prod-marathon.gearzero.us:8080/v2/events
marathon-lb: {"remoteAddress":"10.92.32.192","eventType":"event_stream_attached","timestamp":"2015-12-24T00:06:46.432Z"}
marathon-lb: received event of type event_stream_attached

It looks like no events are coming in.

Inaccurate resource recommendation

When I run the command "dcos package install marathon-lb" I get a prompt that says:

We recommend a minimum of 0.5 CPUs and 256 MB of RAM available for the Marathon Service.
Continue installing? [yes/no]

But the actual resources the app requests are 2 CPUs and 1GB of memory. Shouldn't the prompt reflect this?

Launching multiple instances from CLI

For the life of me, I can't get the package to launch with 2 instances. I've tried:

dcos package install --options=public-lb-descriptor.json marathon-lb

Where the content of public-lb-descriptor.json reads as:

{
  "id":"marathon-lb",
  "instances": 2
}

To no avail.

Scaling with the Marathon UI (using the Scale Application button) works fine ... but I need to script it.

Problem with CORS configured via HAPROXY_{0}_BACKEND_HEAD

We have a front-end application which fetches data from a GraphQL server. The GraphQL server needs to serve responses with proper CORS headers, which is configured via HAPROXY_0_BACKEND_HEAD in the GraphQL server's Marathon app descriptor. The GraphQL server has HTTP => HTTPS redirects enabled.

Marathon-LB specific labels defined in the GraphQL server's Marathon app descriptor (reformatted to make more readable):

"HAPROXY_GROUP": "external",
"HAPROXY_0_VHOST": "app.example.org",
"HAPROXY_0_REDIRECT_TO_HTTPS": "true",
"HAPROXY_0_MODE": "http",
"HAPROXY_0_BACKEND_HEAD": "
    backend {backend}\n
    balance roundrobin\n
    mode {mode}\n
    rspadd Access-Control-Allow-Origin:\\ *\n
    rspadd Access-Control-Allow-Methods:\\ GET,\\ POST,\\ PUT,\\ OPTIONS,\\ DELETE\n
    rspadd Access-Control-Allow-Headers:\\ Origin,\\ X-Requested-With,\\ Content-Type,\\ Accept,\\ Authorization\n"

Before allowing a request to the GraphQL server, the browser will first perform a preflight check by doing an OPTIONS request. This fails (most of the time) because the CORS headers are not included in the response. The reason seems to be that the request is not always served by the GraphQL server backend.

$ http OPTIONS https://app.example.org/graphql
HTTP/1.1 302 Found
Cache-Control: no-cache
Connection: close
Content-length: 0
Location: https://app.example.org/graphql

GET requests are usually fine, but I've seen issues with that for a previous (unreleased) version of Marathon we were using.

$ http https://app.example.org/graphql
HTTP/1.1 200 OK
Accept-Ranges: bytes
Access-Control-Allow-Headers: Origin, X-Requested-With, Content-Type, Accept, Authorization
Access-Control-Allow-Methods: GET, POST, PUT, OPTIONS, DELETE
Access-Control-Allow-Origin: *
Content-Length: 3604
Content-Type: text/html; charset=UTF-8
Date: Thu, 10 Mar 2016 14:25:06 GMT
ETag: "287001531335db50"
Last-Modified: Wed, 24 Feb 2016 12:16:50 GMT
Server: akka-http/2.4.2

Our current workaround is the following change to marathon_lb.py, but I am wondering if there is some configuration we are missing.

--- /opt/marathon-lb-1.1.1/marathon_lb.py.bak   2016-03-10 09:27:14.795155222 -0500
+++ /opt/marathon-lb-1.1.1/marathon_lb.py   2016-03-10 09:59:08.433670953 -0500
@@ -133,8 +133,9 @@
       mode {mode}
     ''')

+    # FIXME(fonseca): removed this line
+    #   bind {bindAddr}:80
     HAPROXY_BACKEND_REDIRECT_HTTP_TO_HTTPS = '''\
-  bind {bindAddr}:80
   redirect scheme https if !{{ ssl_fc }}
 '''

System info:

ConnectionResetError: [Errno 104] Connection reset by peer

Hi there,

I just noticed a crash in the Python script. Here is the stack trace:

marathon-lb: skipping empty message
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100    58    0    58    0     0   3635      0 --:--:-- --:--:-- --:--:-- 58000
<html><body><h1>200 OK</h1>
Service ready.
</body></html>
Traceback (most recent call last):
  File "/marathon-lb/marathon_lb.py", line 1158, in <module>
    not args.dont_bind_http_https, args.ssl_certs)
  File "/marathon-lb/marathon_lb.py", line 1085, in process_sse_events
    for event in events:
  File "/usr/local/lib/python3.4/dist-packages/sseclient.py", line 60, in __next__
    nextchar = next(self.resp.iter_content(decode_unicode=True))
  File "/usr/lib/python3/dist-packages/requests/utils.py", line 329, in stream_decode_response_unicode
    for chunk in iterator:
  File "/usr/lib/python3/dist-packages/requests/models.py", line 653, in generate
    for chunk in self.raw.stream(chunk_size, decode_content=True):
  File "/usr/lib/python3/dist-packages/urllib3/response.py", line 256, in stream
    data = self.read(amt=amt, decode_content=decode_content)
  File "/usr/lib/python3/dist-packages/urllib3/response.py", line 186, in read
    data = self._fp.read(amt)
  File "/usr/lib/python3.4/http/client.py", line 500, in read
    return super(HTTPResponse, self).read(amt)
  File "/usr/lib/python3.4/http/client.py", line 539, in readinto
    n = self.fp.readinto(b)
  File "/usr/lib/python3.4/socket.py", line 371, in readinto
    return self._sock.recv_into(b)
ConnectionResetError: [Errno 104] Connection reset by peer

Marathon-LB Server Entries don't consider ip-per-container tasks

Since v0.14.0, Marathon has supported launching tasks with their own uniquely addressable ipAddress [0] [1].

Mesos-dns has been updated to check for this information and generate records which take this into account [2]. I'm opening this issue to suggest that Marathon-lb should too.

To expand on what a "relevant record" refers to - the current marathon-lb implementation creates the following mapping for docker bridge tasks:
<marathon-lb-ip>:<servicePort> → <agent-ip>:<hostPort> → <container-ip>:<containerPort>

IP-per-container means that the container is routable directly from Marathon-lb. So we can replace the middle entry with the container directly:
<marathon-lb-ip>:<servicePort> → <container-ip>:<discoveryPort>

(I use docker bridge tasks as an example, but the Mesos task implementation should similarly map directly to the task's ip and port).

The implementation should mirror the one used in the linked mesos-dns PR:

  • The presence of the ipAddress field should trigger whether this type of entry is created or not.
  • We no longer forward requests to the agent-ip, opting instead for the task's IP which is stored in app['tasks']['ipAddresses'][0]['ipAddress'].
    • This field is in a list, but marathon currently only supports 1 ip address per container. So we can always grab the first entry
  • We no longer forward to the host port, opting instead to use the container's service port which is specified in app['discovery']['ports'] (possibly defaulting to 80)

I'm going to close #122 in favor of this issue, as failing to design it properly resulted in a fundamentally incorrect implementation. (Marathon restricts tasks which launch with ip-per-container from also specifying port mappings.)

subprocess call should close file descriptors in child

I've run into a strange problem with servicerouter.py. I'm using it in listening mode, like so:
python ./servicerouter.py --marathon=http://localhost:8080 --listening=http://localhost:8081
I noticed that after shutting down the servicerouter (Ctrl-C/SIGTERM), I couldn't start it back up listening on the same port. This only occurred if the process did a reload of the haproxy config. Netstat tells me that my listening port (8081) is owned by haproxy.

Apparently what is happening is that the haproxy reload subprocess spawned by the servicerouter script is inheriting the file descriptors (specifically, the one corresponding to listening socket). When it spawns a new haproxy, that inherits them as well. The reload process finishes and terminates, so that if I kill the servicerouter some time later, the new haproxy is left holding these file descriptors.

A simple workaround is to restart haproxy before trying to run the servicerouter again.

I'm not sure whether there's an underlying issue with the WSGI library, i.e., whether the web server isn't closing the sockets appropriately on shutdown. One solution I've found is to start the subprocesses from the Python servicerouter in such a way that they don't inherit file descriptors. This seems to be as simple as changing the call from
subprocess.check_call(reloadCommand)
to
subprocess.check_call(reloadCommand, close_fds=True)
I've tested this and it settles the problem as I've observed it.

Details:
I'm using servicerouter.py from Marathon 0.11.1:
https://github.com/mesosphere/marathon/blob/v0.11.1/bin/servicerouter.py
on Centos 6.7 (6-7.el6.centos.12.3), with python 2.6.6.

Support nested paths

Marathon-lb currently uses the path_beg syntax for evaluating paths; can we change to, or enable an option for, the path_reg syntax to allow nested paths?
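To illustrate the difference, a hand-written HAProxy snippet (ACL names, paths, and the backend are hypothetical):

acl path_svc path_beg /api
# path_beg matches any path starting with /api: /api, /api/v1, but also /apifoo

acl path_svc_nested path_reg ^/api(/[^/]+)*/?$
# path_reg takes a regex, which can anchor the match and describe nested segments precisely

use_backend api_backend if path_svc_nested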

Wiki with examples

I've seen so many questions here now about Marathon and marathon-lb.

Why not provide a wiki page with example Marathon configurations for applications, like overwriting templates or using HAPROXY_PORT or HAPROXY_0_VHOST?

I'd help too.

Haproxy configuration not properly updated in real-time

Hello,

I'm using Marathon 0.9.1 with marathon-lb:v1.0.0. It is run as follows:

{
  "id": "servicerouter",
  "instances": 3,
  "cpus": 0.8,
  "mem": 512,
  "constraints": [
    [
      "hostname",
      "UNIQUE"
    ]
  ],
  "acceptedResourceRoles": [
    "slave_public"
  ],
  "container": {
    "type": "DOCKER",
    "docker": {
      "image":          "mesosphere/marathon-lb:v1.0.0",
      "network":        "HOST",
      "forcePullImage": true,
      "privileged":     true
    }
  },
  "args": ["sse", "-m", "http://leader.mesos:8080", "--health-check", "--group", "external"],
  "uris": [],
  "env": {},
  "upgradeStrategy": {
    "minimumHealthCapacity": 0.5,
    "maximumOverCapacity": 0
  },
  "healthChecks": [
    {
      "protocol": "COMMAND",
      "command": { "value": "curl -f -X GET http://$HOST:9090/_haproxy_health_check" },
      "gracePeriodSeconds": 45,
      "intervalSeconds": 30,
      "timeoutSeconds": 20,
      "maxConsecutiveFailures": 6
    }
  ]
}

When I deploy services with Marathon, HAProxy is left outdated. It seems events are not triggered or received. I need to restart the HAProxy instances ... not very convenient.

Have you also noticed this? Is it a bug in Marathon? I'm waiting for DCOS 1.4 to upgrade. Will it solve the issue?

Do you guys need anything else to investigate?

Thanks

Ability to query the currently registered services

Not sure if this capability already exists and I'm just missing it - if so, please let me know how to use it.
It would be great if marathon-lb exposed an API endpoint to query the currently registered apps and obtain their assigned ports.

For example, assuming a marathon-lb-internal.marathon.mesos deployment, being able to:

curl http://marathon-lb-internal.marathon.mesos/v1/apps
That would return, at a minimum, the list of known apps and their current marathon-lb service ports:

[{
    "id" : "backend-service", 
    "servicePort":10004, 
    "instancesInRotation": 4, 
    "algorithm": "roundrobin"
}]

Problem using docker image: /marathon-lb/haproxy.cfg missing

I don't know if I did something wrong or the image is broken, but I can't launch marathon-lb using the official image. I always get the same error:

[root@xxx-1 ~]# docker run -e "PORTS=80" -e "URL=http://my-ip:8080" mesosphere/marathon-lb event http://my-ip:9090
Generating RSA private key, 2048 bit long modulus
.......................................+++
...........................+++
e is 65537 (0x10001)
Signature ok
subject=/CN=*
Getting Private key
Using http://my-ip:9090 as event callback-url
Reloading haproxy
[ALERT] 063/203255 (19) : Could not open configuration file /marathon-lb/haproxy.cfg : No such file or directory
Invalid config
usage: marathon_lb.py [-h] [--longhelp] [--marathon MARATHON [MARATHON ...]]
                      [--listening LISTENING] [--callback-url CALLBACK_URL]
                      [--haproxy-config HAPROXY_CONFIG] [--group GROUP]
                      [--command COMMAND] [--sse] [--health-check]
                      [--dont-bind-http-https] [--ssl-certs SSL_CERTS]
                      [--skip-validation] [--dry]
                      [--syslog-socket SYSLOG_SOCKET]
                      [--log-format LOG_FORMAT]
                      [--marathon-auth-credential-file MARATHON_AUTH_CREDENTIAL_FILE]
                      [--auth-credentials AUTH_CREDENTIALS]
marathon_lb.py: error: argument --marathon/-m is required

The hub.docker.io documentation does not cover the usage; can you help me?

does it support proxypass?

I have been looking on the internet for a proxy-pass feature in marathon-lb, but I don't see an example.

May I know if it's possible, and how? Thanks
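
For context, HAProxy has no ProxyPass directive; the closest equivalent combines a path ACL with a request-line rewrite. A rough hand-written sketch (the backend name, address, and prefix are invented, and this is not something marathon-lb generates today):

frontend marathon_http_in
  bind *:80
  acl path_app1 path_beg /app1
  use_backend app1_10001 if path_app1

backend app1_10001
  # strip the /app1 prefix before proxying, similar to Apache's ProxyPass
  reqrep ^([^\ :]*)\ /app1/(.*)  \1\ /\2
  server app1-1 10.0.0.10:31001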

On status_update_event of TASK_KILLED drop from backend immediately

I've been doing some heavy load testing to see what happens when marathon scales up and down apps that are behind HAProxy using marathon-lb.

I noticed that, when Marathon kills (scales down) instances, for a short period of time, users will get errors if HAProxy passes them to a dead backend.

I believe this is because: Marathon kills the tasks, then Marathon-lb sees it on the event bus, then Marathon-lb schedules a refresh of the entire config, pulls the latest app info, rebuilds config, and finally reloads HAProxy.

In the case of scaling down or killing an app, Marathon sends eventType == status_update_event with taskStatus == TASK_KILLED with information of which host was killed.

If a task is killed, I believe it would be better if Marathon-lb immediately marked the affected backend server as down, or removed it from the config entirely, and reloaded HAProxy. After that, it could continue the usual, slower process of reloading the full configuration.

This should result in significantly fewer errors during a scale-down of an app.
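
One way to implement that fast path would be HAProxy's admin stats socket, which accepts a disable server <backend>/<server> command. A minimal sketch (the helper name and socket path are assumptions; this is not how marathon-lb currently behaves):

import socket

def disable_backend_server(backend, server,
                           socket_path='/var/run/haproxy/admin.sock'):
    # Mark one server as down through HAProxy's admin stats socket,
    # without waiting for a full config regeneration and reload.
    cmd = 'disable server {}/{}\n'.format(backend, server)
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        sock.connect(socket_path)
        sock.sendall(cmd.encode('utf-8'))
        return sock.recv(1024).decode('utf-8')
    finally:
        sock.close()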

virtual hosts are not properly handled on HTTP

It looks like, over HTTP, the load balancer is not correctly distinguishing between virtual hosts.
For example, if we configure two apps with:


app 1:
HAPROXY_GROUP=external
HAPROXY_0_VHOST=example.com
HAPROXY_0_PORT=80

app 2:
HAPROXY_GROUP=external
HAPROXY_0_VHOST=api.example.com (note the addition of api.)
HAPROXY_0_PORT=80

As for the load balancer:
I am using the official docker container by mesosphere, managed via marathon, with the following settings:
{
  "type": "DOCKER",
  "docker": {
    "image": "mesosphere/marathon-lb:latest",
    "network": "HOST",
    "forcePullImage": true
  }
}
The required ports are 80 and 443, and the env variable HAPROXY_SSL_CERT has been set.


When calling http://api.example.com, the load balancer will round-robin over app 1 and app 2.
I would expect the load balancer to always call app 2.

NOTE: the same configuration using HTTPS works like a charm; the call is always routed to app 2.
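
For reference, per-vhost routing over plain HTTP would be expected to produce host ACLs like the following on the port 80 frontend (backend names invented); the bug is presumably that these are missing or not being applied:

frontend marathon_http_in
  bind *:80
  mode http
  acl host_example_com hdr(host) -i example.com
  use_backend app1_10000 if host_example_com
  acl host_api_example_com hdr(host) -i api.example.com
  use_backend app2_10001 if host_api_example_com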

HAPROXY_SYSCTL_PARAMS: unbound variable

Hello, I'm getting this error with any release >v1.0.0

Generating RSA private key, 2048 bit long modulus
...............................................................................................+++
.................+++
e is 65537 (0x10001)
Signature ok
subject=/CN=*
Getting Private key
/marathon-lb/run: line 70: HAPROXY_SYSCTL_PARAMS: unbound variable
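
A possible workaround until this is fixed, assuming the run script merely needs the variable to be defined, is to set it to an empty value explicitly (untested sketch):

docker run -e PORTS=9090 -e HAPROXY_SYSCTL_PARAMS="" \
  --net=host mesosphere/marathon-lb sse [other args]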

Multiple -m parameters saturate threads on mesos slave

If you start marathon-lb with all marathon masters (multiple "-m" arguments):

  • the mesos sandbox log reports multiple errors
  • after some hours (e.g. 8), marathon stops working because marathon-lb uses all the threads available on the mesos slave

Everything works as expected with only one -m master.

I suggest:

  • allow marathon-lb to parse multiple -m options, so it is possible to pass all marathon nodes
  • periodically get the elected marathon leader from the marathon API (see the sketch below)
  • get the apps configuration from the elected leader

Cluster: CentOS 7.2 - Mesos 0.27.1 - Marathon 0.15.3 - latest marathon-lb docker image from Docker Hub
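
Marathon exposes the current leader through its /v2/leader endpoint, so the second suggestion could look roughly like this (the helper name and error handling are invented):

import requests

def get_marathon_leader(marathon_urls):
    # Ask each configured Marathon node who the current leader is,
    # via Marathon's /v2/leader endpoint.
    for url in marathon_urls:
        try:
            resp = requests.get(url + '/v2/leader', timeout=5)
            resp.raise_for_status()
            return 'http://' + resp.json()['leader']
        except requests.RequestException:
            continue  # node unreachable; try the next one
    raise RuntimeError('no reachable Marathon node')

# e.g. get_marathon_leader(['http://master1:8080', 'http://master2:8080'])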

Support adding a custom header of the backend to the response

i.e.:

X-haproxy-backend: hostname.of.mesos.agent:port_of_app

This would ideally be a global flag for marathon-lb, and would ease troubleshooting of HTTP applications in certain instances. If you like the idea, let me know and I can likely come up with a patch to enable it (disabled by default).
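
In HAProxy 1.6+, where http-response set-header accepts log-format variables, the generated backend section could plausibly gain a line like the following (%si/%sp expand to the chosen server's IP and port; the backend name is invented):

backend myapp_10001
  mode http
  # advertise which server answered, to ease troubleshooting
  http-response set-header X-Haproxy-Backend %si:%sp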

overall confusion on requirements and usage

What are the requirements for running marathon-lb as a docker container via Marathon/Mesos? Are any special docker permissions required, or special paths that need to be mounted from the host? It seems to require access to run iptables. Is this correct?
Is the haproxy web ui exposed?

Thanks

marathon-lb causes an exception when using COMMAND as health check protocol

It looks like marathon-lb is breaking when you use COMMAND as a health check protocol...
See below for the error marathon-lb is throwing:

marathon-lb: Reconnecting...
marathon-lb: setting default value for HAPROXY_HEAD
marathon-lb: setting default value for HAPROXY_HTTP_FRONTEND_HEAD
marathon-lb: setting default value for HAPROXY_HTTP_FRONTEND_APPID_HEAD
marathon-lb: setting default value for HAPROXY_HTTPS_FRONTEND_HEAD
marathon-lb: setting default value for HAPROXY_FRONTEND_HEAD
marathon-lb: setting default value for HAPROXY_BACKEND_REDIRECT_HTTP_TO_HTTPS
marathon-lb: setting default value for HAPROXY_BACKEND_HEAD
marathon-lb: setting default value for HAPROXY_HTTP_FRONTEND_ACL
marathon-lb: setting default value for HAPROXY_HTTP_FRONTEND_APPID_ACL
marathon-lb: setting default value for HAPROXY_HTTPS_FRONTEND_ACL
marathon-lb: setting default value for HAPROXY_BACKEND_HTTP_OPTIONS
marathon-lb: setting default value for HAPROXY_BACKEND_HTTP_HEALTHCHECK_OPTIONS
marathon-lb: setting default value for HAPROXY_BACKEND_TCP_HEALTHCHECK_OPTIONS
marathon-lb: setting default value for HAPROXY_BACKEND_STICKY_OPTIONS
marathon-lb: setting default value for HAPROXY_BACKEND_SERVER_OPTIONS
marathon-lb: setting default value for HAPROXY_BACKEND_SERVER_HTTP_HEALTHCHECK_OPTIONS
marathon-lb: setting default value for HAPROXY_BACKEND_SERVER_TCP_HEALTHCHECK_OPTIONS
marathon-lb: setting default value for HAPROXY_FRONTEND_BACKEND_GLUE
marathon-lb: fetching apps
marathon-lb: GET http://tst-marathon-app.service.ams4.consul:8080/v2/apps?embed=apps.tasks
marathon-lb: got apps ['/stretch', '/tst-foo-app', '/tst-logredis', '/tst-innogate-app-mark', '/mesos-consul']
marathon-lb: Caught exception
Traceback (most recent call last):
  File "/marathon-lb/marathon_lb.py", line 1163, in <module>
    args.ssl_certs)
  File "/marathon-lb/marathon_lb.py", line 1083, in process_sse_events
    ssl_certs)
  File "/marathon-lb/marathon_lb.py", line 949, in __init__
    self.reset_from_tasks()
  File "/marathon-lb/marathon_lb.py", line 954, in reset_from_tasks
    self.__apps = get_apps(self.__marathon)
  File "/marathon-lb/marathon_lb.py", line 876, in get_apps
    appId, servicePort, get_health_check(app, i))
  File "/marathon-lb/marathon_lb.py", line 850, in get_health_check
    if check['portIndex'] == portIndex:
KeyError: 'portIndex'

Funny thing is, there are no HAPROXY labels specified when launching this task with Marathon.
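
The traceback points at an unconditional check['portIndex'] lookup, and COMMAND health checks carry no portIndex. A defensive sketch of the lookup (not the actual upstream fix):

def get_health_check(app, portIndex):
    # COMMAND health checks have no 'portIndex', so skip them and use
    # .get() instead of indexing to avoid the KeyError above.
    for check in app.get('healthChecks', []):
        if check.get('protocol') == 'COMMAND':
            continue
        if check.get('portIndex') == portIndex:
            return check
    return None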

HTTPS & marathon-lb

Hi All,

We have a service that we'd like to expose over https, we have an external marathon-lb set up (using DCOS), and we have an app which speaks https running inside the cluster (which needs to be exposed).

The service ports are mapped as follows:

  • 80 --> 10080
  • 443 --> 10443

We can see our service (called apiv01) on http:

$ curl http://marathon-lb.marathon.mesos:9091 -H "X-Marathon-App-Id: /apiv01"
<html>
<head><title>400 The plain HTTP request was sent to HTTPS port</title></head>
<body bgcolor="white">
<center><h1>400 Bad Request</h1></center>
<center>The plain HTTP request was sent to HTTPS port</center>
<hr><center>nginx/1.9.3</center>
</body>
</html>

Trying the same w/ https fails:

$ curl https://marathon-lb.marathon.mesos:9091 -H "X-Marathon-App-Id: /apiv01" -k
curl: (35) error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol

Trying to resolve the service port directly:

$ curl http://marathon-lb.marathon.mesos:10443 -v
* Rebuilt URL to: http://marathon-lb.marathon.mesos:10443/
*   Trying 10.0.0.5...
* Connected to marathon-lb.marathon.mesos (10.0.0.5) port 10443 (#0)
> GET / HTTP/1.1
> Host: marathon-lb.marathon.mesos:10443
> User-Agent: curl/7.46.0
> Accept: */*
> 
< HTTP/1.1 302 Found
< Cache-Control: no-cache
< Content-length: 0
< Location: https://marathon-lb.marathon.mesos:10443/
< Connection: close
< 
* Closing connection 0

and following the redirect over https, we get an 'unknown protocol' error:

$ curl https://marathon-lb.marathon.mesos:10443 -v
* Rebuilt URL to: https://marathon-lb.marathon.mesos:10443/
*   Trying 10.0.0.5...
* Connected to marathon-lb.marathon.mesos (10.0.0.5) port 10443 (#0)
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*   CAfile: /opt/mesosphere/active/python-requests/lib/python3.4/site-packages/requests/cacert.pem
  CApath: none
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol
* Closing connection 0
curl: (35) error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol

Additionally:

$ curl https://marathon-lb.marathon.mesos:443 -H "X-Marathon-App-Id: /apiv01" -k
<html><body><h1>503 Service Unavailable</h1>
No server is available to handle this request.
</body></html>

Finally, hitting the service directly works perfectly:

$ curl https://10.0.0.9:19920 -k
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Callbacks not working

If you try to set up a callback url, the Marathon responses are not read correctly:

Traceback (most recent call last):
  File "/usr/lib/python3.4/wsgiref/handlers.py", line 137, in run
    self.result = application(self.environ, self.start_response)
  File "marathon_lb.py", line 1061, in wsgi_app
    processor.handle_event(json.loads(data))
  File "/usr/lib/python3.4/json/__init__.py", line 312, in loads
    s.__class__.__name__))
TypeError: the JSON object must be str, not 'bytes'
10.92.42.25 - - [24/Dec/2015 00:52:41] "POST / HTTP/1.1" 500 59

Appears to be a Python 3 issue, I'm guessing.

Looks like json.loads expects a string but we're passing a bytes type.

but could be wrong as always 😄
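
That guess matches the traceback: under Python 3 the WSGI input stream yields bytes, while json.loads on 3.4 accepts only str. A minimal sketch of the kind of fix needed:

import json

def parse_event_body(data):
    # wsgi.input.read() returns bytes under Python 3; decode before
    # handing the payload to json.loads (which requires str on 3.4).
    if isinstance(data, bytes):
        data = data.decode('utf-8')
    return json.loads(data)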

[redux] 'Module_six_moves_urllib' object has no attribute 'urlparse'

Using Python 3.4.3 and commit a3a080e:

Traceback (most recent call last):
  File "marathon_lb.py", line 1181, in <module>
    not args.dont_bind_http_https, args.ssl_certs)
  File "marathon_lb.py", line 1095, in run_server
    listen_uri = urllib.urlparse(listen_addr)
AttributeError: 'Module_six_moves_urllib' object has no attribute 'urlparse'

The original issue is: #54

Possibly related: #56
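
For reference, six nests the parsing functions under a parse submodule, so the call would need to go through it; a small sketch:

from six.moves.urllib import parse

listen_uri = parse.urlparse('http://0.0.0.0:8080')
print(listen_uri.hostname, listen_uri.port)  # -> 0.0.0.0 8080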

Add host header HTTP 301 redirect

It would be nice to be able to define an HTTP frontend 301 redirect based on a host header, like this:

redirect prefix <vhost> code 301 if { hdr(host) -i <redirected-host> }

For example, if my backend requires the host header to be a FQDN, I can redirect "http://service/" to "http://service.my.purple.com/" before hitting the backend.

I can't figure out how to do this in marathon-lb. I'm no Python expert, but to me it looks like this isn't possible (yet).

Marathon Authentication

Can we add principal and secret_file_location flags so that, if our marathon requires authentication, we can provide it?

"--listening" documentation is highly confusing

Hello guys,

It took me some time to get the callback mode working because of this --help snippet:

  --listening LISTENING, -l LISTENING
                        The address this script listens on for marathon events
                        (default: None)

After polling the office, everybody agrees: what should be set here is something like "127.0.0.1:8080" or "0.0.0.0:82".
But this setting doesn't work. After dumping variables inside the code, I figured out that the following code did not work:

    listen_uri = parse.urlparse(listen_addr)
    httpd = make_server(listen_uri.hostname, listen_uri.port, wsgi_app)

listen_uri had no hostname attribute because the parsing failed. I had to set "--listening=http://0.0.0.0:8080" to get it working.

Btw, the README displayed here on GitHub doesn't mention this --listening parameter at all (give it a try with ctrl+f ;))

Thanks,

Adam.

Different path routing on same multiple host

Is it not possible to use 2 backends on 1 virtual host and do the routing based on path?

For example:
app1:
HAPROXY_VHOST = app.example.com
HAPROXY_PATH = /app1
app2:
HAPROXY_VHOST = app.example.com
HAPROXY_PATH = /app2

Both acl names in the haproxy.cfg will be called path_app_example.com, so the first one always gets selected.

Both requests, app.example.com/app1 and app.example.com/app2, go to app1.

Possible solution: include the marathon app id in the acl name.
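
With the app id folded in, the generated config might look something like this (acl and backend names invented):

frontend marathon_http_in
  bind *:80
  acl host_app_example_com hdr(host) -i app.example.com
  acl path_app1_app_example_com path_beg /app1
  acl path_app2_app_example_com path_beg /app2
  use_backend app1_10001 if host_app_example_com path_app1_app_example_com
  use_backend app2_10002 if host_app_example_com path_app2_app_example_com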

unbound variable PORTS v1.1.0

I'm getting an error when trying to upgrade to v1.1.0:

/marathon-lb/run: line 6: PORTS: unbound variable

Am I missing something?
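
Per the run script referenced in the error, PORTS must be supplied as a docker environment variable, e.g.:

docker run -e PORTS=9090 --net=host mesosphere/marathon-lb sse [other args]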

Support routing based on path prefix (suffix)

I'd like to implement rules that would route requests to different backends based on the request URI prefix, using acl path_beg (or suffix, path_end, if we want to make it more general). HAProxy config:

frontend parent_80
  bind *:80
  acl path_backendA path_beg /prefixA
  use_backend backendA_10002 if path_backendA

  acl path_backendB path_beg /prefixB
  use_backend backendB_10003 if path_backendB
  # when no rule matches, use the default backend
  default_backend parent_80

... (standard backend configuration)

Each of the backends would need to specify its parent, e.g. HAPROXY_PARENT, and then a prefix, HAPROXY_0_PATH_PREFIX.

Does this make sense to you? Would you accept a PR for such feature?

'Module_six_moves_urllib' object has no attribute 'urlparse'

Looks like it can't start anymore:

Traceback (most recent call last):
  File "marathon_lb.py", line 1181, in <module>
    not args.dont_bind_http_https, args.ssl_certs)
  File "marathon_lb.py", line 1095, in run_server
    listen_uri = urllib.urlparse(listen_addr)
AttributeError: 'Module_six_moves_urllib' object has no attribute 'urlparse'

how to use the newest docker image?

It reports the error below with docker image v1.1.0:
ubuntu@ip-10-0-4-155:~$ docker run --name marathon-lb mesosphere/marathon-lb sse --marathon http://10.0.3.153:8080 --group external-test --skip-validation
/marathon-lb/run: line 10: PORTS: unbound variable

I tried: docker run --name marathon-lb mesosphere/marathon-lb sse --marathon http://10.0.3.153:8080 --group external-test --skip-validation -e PORTS="80,443,9090"

It reports the same error.

When I was using the old version, v1.0.1, I was able to use it this way: docker run --name marathon-lb -p 80:80 -p 443:443 -p 9090:9090 mesosphere/marathon-lb:old $mode --marathon $m1 $m2 $m3 --group external
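
Note that docker options such as -e must come before the image name; anything after the image is passed to marathon-lb itself, which is why the -e PORTS variant above reports the same error. A corrected sketch of the v1.1.0 command:

docker run --name marathon-lb --net=host -e PORTS="80,443,9090" \
  mesosphere/marathon-lb sse --marathon http://10.0.3.153:8080 \
  --group external-test --skip-validation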

will not deploy on marathon user service

I was told that it is generally not a good idea to run my web app and services on the native DCOS marathon service. I would like to use this package to load balance HTTP traffic and do service discovery on my user marathon service.

However, when I try to run "dcos package install marathon-lb" with the config option marathon.url set to my user marathon service, it just hangs in the "waiting" state.

Is this package only meant to be run on the native marathon service?

marathon-lb stopped when upgrading marathon

When I upgrade marathon, e.g. from 0.14.0 to 0.14.1, marathon-lb stops and I get 503s for apps running in marathon. Here is the stderr from when marathon-lb stopped.

And the args used for marathon-lb are: poll -m http://marathon.mesos:8080 --health-check --group external

Generating RSA private key, 2048 bit long modulus
....................................................................................................................+++
.+++
e is 65537 (0x10001)
Signature ok
subject=/CN=*
Getting Private key
marathon_lb: fetching apps
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 516, in urlopen
    body=body, headers=headers)
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 308, in _make_request
    conn.request(method, url, **httplib_request_kw)
  File "/usr/lib/python3.4/http/client.py", line 1090, in request
    self._send_request(method, url, body, headers)
  File "/usr/lib/python3.4/http/client.py", line 1128, in _send_request
    self.endheaders(body)
  File "/usr/lib/python3.4/http/client.py", line 1086, in endheaders
    self._send_output(message_body)
  File "/usr/lib/python3.4/http/client.py", line 924, in _send_output
    self.send(msg)
  File "/usr/lib/python3.4/http/client.py", line 859, in send
    self.connect()
  File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 154, in connect
    conn = self._new_conn()
  File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 133, in _new_conn
    (self.host, self.port), self.timeout, **extra_kw)
  File "/usr/lib/python3/dist-packages/urllib3/util/connection.py", line 87, in create_connection
    raise err
  File "/usr/lib/python3/dist-packages/urllib3/util/connection.py", line 78, in create_connection
    sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/requests/adapters.py", line 362, in send
    timeout=timeout
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 559, in urlopen
    _pool=self, _stacktrace=stacktrace)
  File "/usr/lib/python3/dist-packages/urllib3/util/retry.py", line 245, in increment
    raise six.reraise(type(error), error, _stacktrace)
  File "/usr/lib/python3/dist-packages/six.py", line 624, in reraise
    raise value.with_traceback(tb)
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 516, in urlopen
    body=body, headers=headers)
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 308, in _make_request
    conn.request(method, url, **httplib_request_kw)
  File "/usr/lib/python3.4/http/client.py", line 1090, in request
    self._send_request(method, url, body, headers)
  File "/usr/lib/python3.4/http/client.py", line 1128, in _send_request
    self.endheaders(body)
  File "/usr/lib/python3.4/http/client.py", line 1086, in endheaders
    self._send_output(message_body)
  File "/usr/lib/python3.4/http/client.py", line 924, in _send_output
    self.send(msg)
  File "/usr/lib/python3.4/http/client.py", line 859, in send
    self.connect()
  File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 154, in connect
    conn = self._new_conn()
  File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 133, in _new_conn
    (self.host, self.port), self.timeout, **extra_kw)
  File "/usr/lib/python3/dist-packages/urllib3/util/connection.py", line 87, in create_connection
    raise err
  File "/usr/lib/python3/dist-packages/urllib3/util/connection.py", line 78, in create_connection
    sock.connect(sa)
urllib3.exceptions.ProtocolError: ('Connection aborted.', ConnectionRefusedError(111, 'Connection refused'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/marathon-lb/marathon_lb.py", line 1368, in <module>
    regenerate_config(get_apps(marathon), args.haproxy_config, args.group,
  File "/marathon-lb/marathon_lb.py", line 966, in get_apps
    apps = marathon.list()
  File "/marathon-lb/marathon_lb.py", line 529, in list
    params={'embed': 'apps.tasks'})["apps"]
  File "/marathon-lb/marathon_lb.py", line 516, in api_req
    return self.api_req_raw(method, path, self.__auth, **kwargs).json()
  File "/marathon-lb/marathon_lb.py", line 502, in api_req_raw
    **kwargs
  File "/usr/lib/python3/dist-packages/requests/api.py", line 49, in request
    return session.request(method=method, url=url, **kwargs)
  File "/usr/lib/python3/dist-packages/requests/sessions.py", line 457, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/lib/python3/dist-packages/requests/sessions.py", line 569, in send
    r = adapter.send(request, **kwargs)
  File "/usr/lib/python3/dist-packages/requests/adapters.py", line 407, in send
    raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionRefusedError(111, 'Connection refused'))
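
Since the crash is just a ConnectionError raised while the Marathon leader is briefly unreachable during the upgrade, one mitigation would be retrying the fetch instead of dying on the first failure. A sketch (the helper name and retry policy are invented):

import time
import requests

def fetch_apps_with_retry(marathon_url, attempts=5, delay=2):
    # Retry /v2/apps a few times so a brief leader outage during a
    # Marathon upgrade doesn't kill the whole process.
    for i in range(attempts):
        try:
            resp = requests.get(marathon_url + '/v2/apps',
                                params={'embed': 'apps.tasks'},
                                timeout=10)
            resp.raise_for_status()
            return resp.json()['apps']
        except requests.RequestException:
            if i == attempts - 1:
                raise
            time.sleep(delay)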
