
consul-esm's Introduction


Consul ESM (External Service Monitor)

This project provides a daemon that runs alongside Consul in order to run health checks for external nodes and update the status of those health checks in the catalog. It can also manage updating the coordinates of these external nodes, if enabled. See Consul's External Services guide for more information about external nodes.

Community Support

If you have questions about how consul-esm works, its capabilities, or anything other than a bug or feature request (use GitHub's issue tracker for those), please see our community support resources.

Community portal: https://discuss.hashicorp.com/tags/c/consul/29/consul-esm

Other resources: https://www.consul.io/community.html

Additionally, for issues and pull requests, we'll be using the 👍 reactions as a rough voting system to help gauge community priorities. So please add 👍 to any issue or pull request you'd like to see worked on. Thanks.

Prerequisites

Consul ESM requires at least version 1.4.1 of Consul.

ESM version         Consul version required
0.3.2 and higher    1.4.1+
0.3.1 and lower     1.0.1 - 1.4.0

Installation

  1. Download a pre-compiled, released version from the Consul ESM releases page.

  2. Extract the binary using unzip or tar.

  3. Move the binary into $PATH (see the example below).
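For example, on a 64-bit Linux host the three steps above might look like the following, where <VERSION> is a placeholder for the release you want (check the releases page for the exact file names):

$ curl -LO https://releases.hashicorp.com/consul-esm/<VERSION>/consul-esm_<VERSION>_linux_amd64.zip
$ unzip consul-esm_<VERSION>_linux_amd64.zip
$ sudo mv consul-esm /usr/local/bin/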

To compile from source, please see the instructions in the contributing section.

Usage

In order for the ESM to detect external nodes and health checks, the external nodes must be registered directly with the catalog with "external-node": "true" set in their node metadata. Health checks can also be registered with a 'Definition' field, which includes the details of running the check. For example:

$ curl --request PUT --data @node.json localhost:8500/v1/catalog/register

node.json:

{
  "Datacenter": "dc1",
  "ID": "40e4a748-2192-161a-0510-9bf59fe950b5",
  "Node": "foo",
  "Address": "192.168.0.1",
  "TaggedAddresses": {
    "lan": "192.168.0.1",
    "wan": "192.168.0.1"
  },
  "NodeMeta": {
    "external-node": "true",
    "external-probe": "true"
  },
  "Service": {
    "ID": "web1",
    "Service": "web",
    "Tags": [
      "v1"
    ],
    "Address": "127.0.0.1",
    "Port": 8000
  },
  "Checks": [{
    "Node": "foo",
    "CheckID": "service:web1",
    "Name": "Web HTTP check",
    "Notes": "",
    "Status": "passing",
    "ServiceID": "web1",
    "Definition": {
      "HTTP": "http://localhost:8000/health",
      "Interval": "10s",
      "Timeout": "5s"
    }
  },{
    "Node": "foo",
    "CheckID": "service:web2",
    "Name": "Web TCP check",
    "Notes": "",
    "Status": "passing",
    "ServiceID": "web1",
    "Definition": {
      "TCP": "localhost:8000",
      "Interval": "5s",
      "Timeout": "1s",
      "DeregisterCriticalServiceAfter": "30s"
     }
  }]
}

The external-probe field determines whether the ESM will do regular pings to the node and maintain an externalNodeHealth check for the node (similar to the serfHealth check used by Consul agents).
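For example, a node that should only be ping-monitored (with no service health checks) could be registered with just those two metadata keys; the node name and address below are illustrative:

$ curl --request PUT --data @ping-node.json localhost:8500/v1/catalog/register

ping-node.json:

{
  "Datacenter": "dc1",
  "Node": "bar",
  "Address": "192.168.0.2",
  "NodeMeta": {
    "external-node": "true",
    "external-probe": "true"
  }
}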

The ESM will perform a leader election by holding a lock in Consul, and the leader will then continually watch Consul for updates to the catalog and perform health checks defined on any external nodes it discovers. This allows externally registered services and checks to access the same features as if they were registered locally on Consul agents.
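The leader lock and other runtime state live under the configured KV prefix (consul-esm/ by default). If you want to inspect that state, one way is to list the keys under the prefix; the exact key names may vary between ESM versions:

$ curl "localhost:8500/v1/kv/consul-esm/?keys"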

Each ESM registers a health check for itself with the agent with "DeregisterCriticalServiceAfter": "30m", which is currently not configurable. This means that after failing its health check, the ESM will switch from passing status to critical status. If the ESM remains in critical status for 30 minutes, the agent will attempt to deregister it. While an ESM is in critical status, its assigned external health checks will be reassigned to another ESM with passing status to monitor. Note: this is separate from the example JSON above for registering an external health check, which has a DeregisterCriticalServiceAfter of 30 seconds.
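To see the health of the ESM instances themselves (for example, to confirm which ones are passing before one is reaped), you can query the health checks registered for the ESM service; the service name below assumes the default consul_service value:

$ curl localhost:8500/v1/health/checks/consul-esm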

Command Line

To run the daemon, pass the -config-file or -config-dir flag, giving the location of a config file or a directory containing .json or .hcl files.

$ consul-esm -config-file=/path/to/config.hcl -config-dir /etc/consul-esm.d
Consul ESM running!
            Datacenter: "dc1"
               Service: "consul-esm"
           Service Tag: ""
            Service ID: "consul-esm:5a6411b3-1c41-f272-b719-99b4f958fa97"
Node Reconnect Timeout: "72h"

Log data will now stream in as it occurs:

2017/10/31 21:59:41 [INFO] Waiting to obtain leadership...
2017/10/31 21:59:41 [INFO] Obtained leadership
2017/10/31 21:59:42 [DEBUG] agent: Check 'foo/service:web1' is passing
2017/10/31 21:59:42 [DEBUG] agent: Check 'foo/service:web2' is passing

Configuration

Configuration files can be provided in either JSON or HashiCorp Configuration Language (HCL) format. For more information, please see the HCL specification. The following is an example HCL config file, with the default values filled in:

// The log level to use.
log_level = "INFO"

// Controls whether to enable logging to syslog.
enable_syslog = false

// The syslog facility to use, if enabled.
syslog_facility = ""

// Whether to log in JSON format.
log_json = false

// The unique id for this agent to use when registering itself with Consul.
// If unconfigured, a UUID will be generated for the instance id.
// Note: do not reuse the same instance id value for other agents. This id
// must be unique to disambiguate different instances on the same host.
// Failure to maintain uniqueness will result in an already-exists error.
instance_id = ""

// The service name for this agent to use when registering itself with Consul.
consul_service = "consul-esm"

// The service tag for this agent to use when registering itself with Consul.
// ESM instances that share a service name/tag combination will have the work
// of running health checks and pings for any external nodes in the catalog
// divided evenly amongst themselves.
consul_service_tag = ""

// The directory in the Consul KV store to use for storing runtime data.
consul_kv_path = "consul-esm/"

// The node metadata values used for the ESM to qualify a node in the catalog
// as an "external node".
external_node_meta {
    "external-node" = "true"
}

// The length of time to wait before reaping an external node due to failed
// pings.
node_reconnect_timeout = "72h"

// The interval to ping and update coordinates for external nodes that have
// 'external-probe' set to true. By default, ESM will attempt to ping and
// update the coordinates for all nodes it is watching every 10 seconds.
node_probe_interval = "10s"

// Controls whether or not to disable calculating and updating node coordinates
// when doing the node probe. Defaults to false i.e. coordinate updates
// are enabled.
disable_coordinate_updates = false

// The address of the local Consul agent. Can also be provided through the
// CONSUL_HTTP_ADDR environment variable.
http_addr = "localhost:8500"

// The ACL token to use when communicating with the local Consul agent. Can
// also be provided through the CONSUL_HTTP_TOKEN environment variable.
token = ""

// The Consul datacenter to use.
datacenter = "dc1"

// The target Admin Partition to use.
partition = ""

// The CA file to use for talking to Consul over TLS. Can also be provided
// though the CONSUL_CACERT environment variable.
ca_file = ""

// The path to a directory of CA certs to use for talking to Consul over TLS.
// Can also be provided through the CONSUL_CAPATH environment variable.
ca_path = ""

// The client cert file to use for talking to Consul over TLS. Can also be
// provided through the CONSUL_CLIENT_CERT environment variable.
cert_file = ""

// The client key file to use for talking to Consul over TLS. Can also be
// provided through the CONSUL_CLIENT_KEY environment variable.
key_file = ""

// The server name to use as the SNI host when connecting to Consul via TLS.
// Can also be provided through the CONSUL_TLS_SERVER_NAME environment
// variable.
tls_server_name = ""

// The CA file to use for talking to HTTPS checks.
https_ca_file = ""

// The path to a directory of CA certs to use for talking to HTTPS checks.
https_ca_path = ""

// The client cert file to use for talking to HTTPS checks.
https_cert_file = ""

// The client key file to use for talking to HTTPS checks.
https_key_file = ""

// Client address to expose API endpoints. Required in order to expose the
// /metrics endpoint for Prometheus. Example: "127.0.0.1:8080"
client_address = ""

// The method to use for pinging external nodes. Defaults to "udp" but can
// also be set to "socket" to use ICMP (which requires root privileges).
ping_type = "udp"

// The telemetry configuration, which matches Consul's telemetry config options.
// See Consul's documentation https://www.consul.io/docs/agent/options#telemetry
// for more details on how to configure it.
telemetry {
  circonus_api_app = ""
  circonus_api_token = ""
  circonus_api_url = ""
  circonus_broker_id = ""
  circonus_broker_select_tag = ""
  circonus_check_display_name = ""
  circonus_check_force_metric_activation = ""
  circonus_check_id = ""
  circonus_check_instance_id = ""
  circonus_check_search_tag = ""
  circonus_check_tags = ""
  circonus_submission_interval = ""
  circonus_submission_url = ""
  disable_hostname = false
  dogstatsd_addr = ""
  dogstatsd_tags = []
  filter_default = false
  prefix_filter = []
  metrics_prefix = ""
  prometheus_retention_time = "0"
  statsd_address = ""
  statsite_address = ""
}

// The number of additional successful checks needed to trigger a status update to
// passing. Defaults to 0, meaning the status will update to passing on the
// first successful check.
passing_threshold = 0

// The number of additional failed checks needed to trigger a status update to
// critical. Defaults to 0, meaning the status will update to critical on the
// first failed check.
critical_threshold = 0
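Since configuration can also be supplied as JSON, a small JSON config covering a few of the settings above might look like this (the field names are assumed to match the HCL keys shown above):

{
  "log_level": "INFO",
  "http_addr": "localhost:8500",
  "consul_service": "consul-esm",
  "consul_kv_path": "consul-esm/",
  "node_probe_interval": "10s"
}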

Threshold for Updating Check Status

To prevent flapping, thresholds for updating a check status can be configured with passing_threshold and critical_threshold, so that a check only switches to passing or critical after an additional number of consecutive or non-consecutive check results.

By default, these configurations are set to 0, which retains the original ESM behavior. If the status of a check is 'passing', then the next failed check will cause the status to update to be 'critical'. Hence, the first failed check causes the update and 0 additional checks are needed.

If a check is currently 'passing' and the configuration is critical_threshold=3, then after the first failure, 3 additional consecutive failures (4 in total) are needed in order to update the status to 'critical'.

ESM also employs a counting system that allows non-consecutive checks to aggregate and update the check status. This counter increments when a check result is the opposite of the current status and decrements when it is the same as the current status.

As an example of how non-consecutive checks are counted, consider a check with status 'passing', critical_threshold=3, and a counter starting at 0 (c=0). The following pattern of pass/fail results will decrement/increment the counter as follows:

PASS (c=0), FAIL (c=1), FAIL (c=2), PASS (c=1), FAIL (c=2), FAIL (c=3), PASS (c=2), FAIL (c=3), FAIL (c=4)

When the counter reaches 4 (1 initial fail + 3 additional fails), the critical_threshold is met and the check status will update to 'critical' and the counter will reset.

Note: this implementation diverges from Consul's anti-flapping thresholds, which count total consecutive checks.
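The following is a minimal Go sketch of the counting behavior described above, shown for the critical direction only. It is an illustration of the rules in this section, not the actual ESM implementation, and it assumes the counter never drops below zero and resets after a status change:

package main

import "fmt"

func main() {
	// critical_threshold=3: one initial failure plus three additional
	// failures are needed before a passing check becomes critical.
	criticalThreshold := 3
	status := "passing"
	counter := 0

	// true = PASS, false = FAIL; same sequence as the example above.
	results := []bool{true, false, false, true, false, false, true, false, false}
	for _, pass := range results {
		sameAsStatus := (pass && status == "passing") || (!pass && status == "critical")
		if sameAsStatus {
			// A result matching the current status decrements the counter.
			if counter > 0 {
				counter--
			}
		} else {
			// A result opposing the current status increments the counter.
			counter++
		}
		if status == "passing" && counter > criticalThreshold {
			// 1 initial failure + criticalThreshold additional failures reached.
			status = "critical"
			counter = 0
		}
		fmt.Printf("pass=%v status=%s c=%d\n", pass, status, counter)
	}
}

Running this prints the same counter values as the PASS/FAIL trace above and flips the status to 'critical' on the final failure.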

Consul ACL Policies

With the ACL system enabled on Consul agents, a specific ACL policy may be required for ESM's token in order for ESM to perform its functions. To narrow down the privileges required by ESM, the following ACL policy rules can be used:

agent_prefix "" {
  policy = "read"
}

key_prefix "consul-esm/" {
  policy = "write"
}

node_prefix "" {
  policy = "write"
}

service_prefix "" {
  policy = "write"
}

session_prefix "" {
   policy = "write"
}

The key_prefix rule is set to allow the consul-esm/ KV prefix, which is defined in the config file using the consul_kv_path parameter.
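As an example of wiring this up, you could save the rules above to a file and create a policy and token with the Consul CLI (the file and policy names here are illustrative), then place the resulting token in the token config option or the CONSUL_HTTP_TOKEN environment variable:

$ consul acl policy create -name "consul-esm" -rules @consul-esm-policy.hcl
$ consul acl token create -description "consul-esm agent token" -policy-name "consul-esm"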

It is possible to have even finer-grained ACL policies if you know the name of the Consul agent that ESM is registered with and the set of nodes that ESM will monitor.

  • <consul-agent-node-name>: insert the node name of the Consul agent that consul-esm is registered with
  • <monitored-node-name>: insert the name of the nodes that ESM will monitor
  • <consul-esm-name>: insert the service name that ESM is registered with. The default value is 'consul-esm' if not defined in the config file using the consul_service parameter

agent "<consul-agent-node-name>" {
  policy = "read"
}

key_prefix "consul-esm/" {
  policy = "write"
}

node "<monitored-node-name: one acl block needed per node>" {
  policy = "write"
}

node_prefix "" {
  policy = "read"
}

service "<consul-esm-name>" {
  policy = "write"
}

session "<consul-agent-node-name>" {
   policy = "write"
}

For context on the usage of each ACL rule:

  • agent:read - for features such as checking version compatibility and calculating network coordinates
  • key:write - to store assigned checks
  • node:write - to update the status of each node that ESM monitors
  • node:read - to retrieve the nodes that need to be monitored
  • service:write - to register the ESM service
  • session:write - to acquire the ESM cluster leader lock

Consul Namespaces (Enterprise Feature)

ESM supports Consul Enterprise Namespaces. When run with Enterprise Consul servers, it will scan all accessible Namespaces for external nodes and health checks to monitor. "All accessible" means every Namespace whose ACL rules grant read-level access to the ESM token. For the simplest case of accessing all Namespaces, add the rule below to the ESM ACL policy from the previous section:

namespace_prefix "" {
  acl = "read"
}

If an ESM instance needs to monitor only a subset of the existing Namespaces, the policy will need to grant access to each Namespace explicitly. For example, let's say we have three Namespaces, "foo", "bar" and "zed", and you want this ESM to monitor only "foo" and "bar". Your policy would need to list these (or a common prefix would work):

namespace "foo" {
  acl = "read"
}
namespace "bar" {
  acl = "read"
}

Namespaces + consul_kv_path config setting:

  • If you run multiple ESMs for HA (secondary, backup ESMs), they should all have the same consul_kv_path value (in practice these configs are identical).

  • If you run multiple ESMs to cover separate Namespaces, each must use a different consul_kv_path value.

ESM uses consul_kv_path to determine where to keep its metadata. This metadata will be different for each ESM monitoring different Namespaces.

Note that you can combine both: instances within the same HA cluster use the same value, and each separate HA cluster uses a different one.
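For example, two separate ESM deployments that each monitor a different Namespace might use configs like the following (the path values are illustrative), while every instance inside one of those HA clusters would reuse its cluster's value:

// Config for the ESM cluster monitoring the "foo" Namespace
consul_kv_path = "consul-esm/foo/"

// Config for the ESM cluster monitoring the "bar" Namespace
consul_kv_path = "consul-esm/bar/"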

Contributing

Note: if you run Linux and see socket: permission denied errors with UDP pings, you probably need to modify system permissions to allow non-root access to the ports. Running sudo sysctl -w net.ipv4.ping_group_range="0 65535" should fix the problem until you reboot (see the sysctl man page, or the example below, for how to persist the setting).
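One way to make that setting survive a reboot is to drop it into a sysctl configuration file and reload (the file name below is arbitrary):

$ echo 'net.ipv4.ping_group_range = 0 65535' | sudo tee /etc/sysctl.d/99-consul-esm-ping.conf
$ sudo sysctl --system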

To build and install Consul ESM locally, you will need to install the Docker engine:

Clone the repository:

$ git clone https://github.com/hashicorp/consul-esm.git

To compile the consul-esm binary for your local machine:

$ make dev

This will compile the consul-esm binary into bin/consul-esm as well as into your $GOPATH, and run the test suite.

If you want to compile a specific binary, run make XC_OS/XC_ARCH. For example:

$ make darwin/amd64

Or run the following to generate all binaries:

$ make build

If you just want to run the tests:

$ make test

Or to run a specific test in the suite:

$ go test ./... -run SomeTestFunction_name

consul-esm's People

Contributors

cbroglie, danstough, dependabot[bot], edevil, eikenb, findkim, freddygv, hashicorp-tsccr[bot], hc-github-team-consul-ecosystem, kyhavlov, lornasong, magiconair, mawag, mdeggies, mikeyyuen, mitchellh, nbouabdalla1, ndhanushkodi, nicoletapopoviciu, radekdvorak, roncodingenthusiast, sarahethompson, skpratt, slackpad, srahul3, t-davies, tristanmorgan, varnson, wangxinyi7, yurkeen


consul-esm's Issues

multi k8s cluster esm deployment?

We are running the same services on multiple k8s clusters, advertising to a single Consul cluster via ESM. ESM is only running on a single cluster. Can ESM be run reliably across every cluster without any weird health check collisions, etc.? Each cluster's ESM deployment would be sidecar'd to a consul-agent talking to the same Consul cluster. We'd like to be able to have HA across the clusters in the event we lose the one that ESM is deployed to.

Make ESM Instance ID Configurable

Problem: when ESM is terminated ungracefully (e.g. SIGKILL), it takes 30 minutes before it is reaped. During those 30 minutes, the ESM instance remains in the Catalog and is displayed in the Consul UI with a critical health status. When another ESM instance starts, it has a new ID, which can lead to multiple ESM instances rather than 'replacing' the terminated one.

Proposed solution: allow the ESM instance id to be configurable so that when an ESM instance terminates and another starts, they can have the same id. This will allow the new ESM instance to register in the Catalog with the same id and 'replace' the terminated one, giving more of a 'restart' experience in the Catalog and Consul UI.

Issue capturing this problem: #39
Issue that would benefit from allowing ESM to re-register in the catalog with the same ID: #53
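With the instance_id option documented in the Configuration section above, a sketch of the proposed setup would pin the id in the config so a replacement process re-registers under the same identity (the value below is illustrative; each concurrently running instance still needs a unique id):

// config.hcl for this ESM deployment
instance_id = "consul-esm-0"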

Service status in Consul is never updated

I just tried following all the instructions in the README and, while I can see checks failing or succeeding in the ESM output, they never seem to be updated in Consul itself.

Here is what I did.

Start Consul

$ consul agent -dev -client 0.0.0.0 -bind 0.0.0.0 -serf-lan-port 18301 -http-port 18500 -dns-port 18600
==> Starting Consul agent...
==> Consul agent running!
          Version: 'v1.3.1+ent'
          Node ID: '73eb31bf-abab-77c5-3bcd-3472dd69356e'
        Node name: 'host01'
       Datacenter: 'dc1' (Segment: '<all>')
           Server: true (Bootstrap: false)
      Client Addr: [0.0.0.0] (HTTP: 18500, HTTPS: -1, gRPC: 8502, DNS: 18600)
     Cluster Addr: 172.17.0.1 (LAN: 18301, WAN: 8302)
          Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false
Entire log
==> Log data will now stream in as it occurs:

   2019/03/03 02:13:05 [DEBUG] agent: Using random ID "73eb31bf-abab-77c5-3bcd-3472dd69356e" as node ID
   2019/03/03 02:13:05 [INFO] raft: Initial configuration (index=1): [{Suffrage:Voter ID:73eb31bf-abab-77c5-3bcd-3472dd69356e Address:172.17.0.1:8300}]
   2019/03/03 02:13:05 [INFO] raft: Node at 172.17.0.1:8300 [Follower] entering Follower state (Leader: "")
   2019/03/03 02:13:05 [INFO] serf: EventMemberJoin: host01.dc1 172.17.0.1
   2019/03/03 02:13:06 [INFO] serf: EventMemberJoin: host01 172.17.0.1
   2019/03/03 02:13:06 [INFO] consul: Adding LAN server host01 (Addr: tcp/172.17.0.1:8300) (DC: dc1)
   2019/03/03 02:13:06 [INFO] consul: Handled member-join event for server "host01.dc1" in area "wan"
   2019/03/03 02:13:06 [INFO] agent: Started DNS server 0.0.0.0:18600 (udp)
   2019/03/03 02:13:06 [DEBUG] agent/proxy: managed Connect proxy manager started
   2019/03/03 02:13:06 [INFO] agent: Started DNS server 0.0.0.0:18600 (tcp)
   2019/03/03 02:13:06 [INFO] agent: Started HTTP server on [::]:18500 (tcp)
   2019/03/03 02:13:06 [INFO] agent: started state syncer
   2019/03/03 02:13:06 [INFO] agent: Started gRPC server on [::]:8502 (tcp)
   2019/03/03 02:13:06 [WARN] raft: Heartbeat timeout from "" reached, starting election
   2019/03/03 02:13:06 [INFO] raft: Node at 172.17.0.1:8300 [Candidate] entering Candidate state in term 2
   2019/03/03 02:13:06 [DEBUG] raft: Votes needed: 1
   2019/03/03 02:13:06 [DEBUG] raft: Vote granted from 73eb31bf-abab-77c5-3bcd-3472dd69356e in term 2. Tally: 1
   2019/03/03 02:13:06 [INFO] raft: Election won. Tally: 1
   2019/03/03 02:13:06 [INFO] raft: Node at 172.17.0.1:8300 [Leader] entering Leader state
   2019/03/03 02:13:06 [INFO] consul: cluster leadership acquired
   2019/03/03 02:13:06 [INFO] consul: New leader elected: host01
   2019/03/03 02:13:06 [INFO] connect: initialized CA with provider "consul"
   2019/03/03 02:13:06 [DEBUG] consul: Skipping self join check for "host01" since the cluster is too small
   2019/03/03 02:13:06 [INFO] consul: member 'host01' joined, marking health alive
   2019/03/03 02:13:06 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
   2019/03/03 02:13:06 [INFO] agent: Synced node info
   2019/03/03 02:13:08 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
   2019/03/03 02:13:08 [DEBUG] agent: Node info in sync
   2019/03/03 02:13:08 [DEBUG] agent: Node info in sync
   2019/03/03 02:13:11 [DEBUG] http: Request GET /v1/catalog/datacenters (697.31Β΅s) from=130.14.25.182:35322
   2019/03/03 02:13:11 [DEBUG] http: Request GET /v1/catalog/datacenters (493.883Β΅s) from=130.14.25.182:35322
   2019/03/03 02:13:11 [DEBUG] http: Request GET /v1/internal/ui/services?dc=dc1 (733.966Β΅s) from=130.14.25.182:35320
   2019/03/03 02:13:11 [DEBUG] http: Request GET /v1/catalog/datacenters (129.652Β΅s) from=130.14.25.182:35322
   2019/03/03 02:13:15 [DEBUG] http: Request GET /v1/catalog/datacenters (1.285973ms) from=130.14.25.182:35322
   2019/03/03 02:13:19 [DEBUG] http: Request GET /v1/catalog/datacenters (2.248289ms) from=130.14.25.182:35322
   2019/03/03 02:13:19 [DEBUG] http: Request GET /v1/acl/list?dc=dc1 (195.834Β΅s) from=130.14.25.182:35320
   2019/03/03 02:13:19 [DEBUG] http: Request GET /v1/catalog/datacenters (662.248Β΅s) from=130.14.25.182:35320
   2019/03/03 02:13:20 [DEBUG] http: Request GET /v1/catalog/datacenters (238.131Β΅s) from=130.14.25.182:35320
   2019/03/03 02:13:20 [DEBUG] http: Request GET /v1/internal/ui/services?dc=dc1 (449.556Β΅s) from=130.14.25.182:35320
   2019/03/03 02:14:06 [DEBUG] consul: Skipping self join check for "host01" since the cluster is too small
   2019/03/03 02:14:16 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
   2019/03/03 02:14:16 [DEBUG] agent: Node info in sync
   2019/03/03 02:14:33 [DEBUG] http: Request PUT /v1/catalog/register (14.594367ms) from=127.0.0.1:48938
   2019/03/03 02:14:38 [DEBUG] http: Request GET /v1/catalog/datacenters (484.556Β΅s) from=130.14.25.182:36012
   2019/03/03 02:14:38 [DEBUG] http: Request GET /v1/internal/ui/services?dc=dc1 (1.270023ms) from=130.14.25.182:36012
   2019/03/03 02:14:39 [DEBUG] http: Request GET /v1/internal/ui/nodes?dc=dc1 (1.802238ms) from=130.14.25.182:36012
   2019/03/03 02:14:41 [DEBUG] http: Request GET /v1/internal/ui/node/foo?dc=dc1 (453.426Β΅s) from=130.14.25.182:36012
   2019/03/03 02:14:41 [DEBUG] http: Request GET /v1/coordinate/nodes?dc=dc1 (1.269195ms) from=130.14.25.182:36012
   2019/03/03 02:14:41 [DEBUG] http: Request GET /v1/session/node/foo?dc=dc1 (386.751Β΅s) from=130.14.25.182:36012
   2019/03/03 02:14:55 [DEBUG] http: Request GET /v1/status/leader (2.229379ms) from=127.0.0.1:49104
   2019/03/03 02:14:55 [INFO] agent: Synced service "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d"
   2019/03/03 02:14:55 [DEBUG] agent: Node info in sync
   2019/03/03 02:14:55 [DEBUG] http: Request PUT /v1/agent/service/register (2.185451ms) from=127.0.0.1:49104
   2019/03/03 02:14:55 [DEBUG] agent: Service "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d" in sync
   2019/03/03 02:14:55 [DEBUG] agent: Node info in sync
   2019/03/03 02:14:55 [DEBUG] agent: Service "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d" in sync
   2019/03/03 02:14:55 [DEBUG] agent: Node info in sync
   2019/03/03 02:14:55 [DEBUG] http: Request PUT /v1/session/create (1.113677ms) from=127.0.0.1:49104
   2019/03/03 02:14:55 [DEBUG] http: Request GET /v1/kv/__esm__/leader?wait=15000ms (294.545Β΅s) from=127.0.0.1:49106
   2019/03/03 02:14:55 [DEBUG] http: Request GET /v1/kv/__esm__/agents/consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d (94.07Β΅s) from=127.0.0.1:49108
   2019/03/03 02:14:55 [DEBUG] http: Request PUT /v1/kv/__esm__/leader?acquire=3a3072d2-6232-a3cb-7176-57c1026a14da&flags=3304740253564472344 (676.685Β΅s) from=127.0.0.1:49106
   2019/03/03 02:14:55 [DEBUG] http: Request GET /v1/kv/__esm__/agents/consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d?index=1 (56.991Β΅s) from=127.0.0.1:49108
   2019/03/03 02:14:55 [DEBUG] agent: Service "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d" in sync
   2019/03/03 02:14:55 [INFO] agent: Synced check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl"
   2019/03/03 02:14:55 [DEBUG] agent: Node info in sync
   2019/03/03 02:14:55 [DEBUG] http: Request PUT /v1/agent/check/register (4.403039ms) from=127.0.0.1:49104
   2019/03/03 02:14:55 [DEBUG] agent: Service "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d" in sync
   2019/03/03 02:14:55 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" in sync
   2019/03/03 02:14:55 [DEBUG] agent: Node info in sync
   2019/03/03 02:14:55 [DEBUG] http: Request GET /v1/catalog/nodes?node-meta=external-node%3Atrue (597.496Β΅s) from=127.0.0.1:49110
   2019/03/03 02:14:55 [DEBUG] http: Request GET /v1/kv/__esm__/leader?consistent= (181.359Β΅s) from=127.0.0.1:49106
   2019/03/03 02:14:56 [DEBUG] http: Request GET /v1/health/service/consul-esm?passing=1 (856.566Β΅s) from=127.0.0.1:49110
   2019/03/03 02:14:56 [DEBUG] http: Request PUT /v1/txn (4.00394ms) from=127.0.0.1:49110
   2019/03/03 02:14:56 [DEBUG] http: Request GET /v1/kv/__esm__/agents/consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d?index=19 (1.00929522s) from=127.0.0.1:49108
   2019/03/03 02:14:56 [DEBUG] http: Request GET /v1/catalog/nodes?node-meta=external-node%3Atrue (266.743Β΅s) from=127.0.0.1:49108
   2019/03/03 02:14:56 [DEBUG] http: Request GET /v1/health/state/any?node-meta=external-node%3Atrue (300.917Β΅s) from=127.0.0.1:49108
   2019/03/03 02:15:00 [DEBUG] http: Request GET /v1/health/node/foo (409.855Β΅s) from=127.0.0.1:49110
   2019/03/03 02:15:00 [DEBUG] http: Request PUT /v1/txn (499.959Β΅s) from=127.0.0.1:49110
   2019/03/03 02:15:02 [DEBUG] http: Request GET /v1/catalog/datacenters (679.964Β΅s) from=130.14.25.182:36014
   2019/03/03 02:15:02 [DEBUG] http: Request GET /v1/internal/ui/node/foo?dc=dc1 (734.577Β΅s) from=130.14.25.182:36014
   2019/03/03 02:15:02 [DEBUG] http: Request GET /v1/coordinate/nodes?dc=dc1 (269.458Β΅s) from=130.14.25.182:36014
   2019/03/03 02:15:02 [DEBUG] http: Request GET /v1/session/node/foo?dc=dc1 (213.886Β΅s) from=130.14.25.182:36014
   2019/03/03 02:15:03 [DEBUG] http: Request GET /v1/health/node/foo (593.281Β΅s) from=127.0.0.1:49110
   2019/03/03 02:15:03 [DEBUG] http: Request PUT /v1/txn (447.261Β΅s) from=127.0.0.1:49110
   2019/03/03 02:15:03 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (301.312Β΅s) from=127.0.0.1:49110
   2019/03/03 02:15:03 [DEBUG] http: Request GET /v1/catalog/datacenters (276.35Β΅s) from=130.14.25.182:36014
   2019/03/03 02:15:03 [DEBUG] http: Request GET /v1/internal/ui/node/foo?dc=dc1 (570.932Β΅s) from=130.14.25.182:36014
   2019/03/03 02:15:03 [DEBUG] http: Request GET /v1/coordinate/nodes?dc=dc1 (229.15Β΅s) from=130.14.25.182:36014
   2019/03/03 02:15:03 [DEBUG] http: Request GET /v1/session/node/foo?dc=dc1 (217.558Β΅s) from=130.14.25.182:36014
   2019/03/03 02:15:03 [DEBUG] http: Request GET /v1/catalog/datacenters (359.411Β΅s) from=130.14.25.182:36014
   2019/03/03 02:15:03 [DEBUG] http: Request GET /v1/internal/ui/node/foo?dc=dc1 (417.446Β΅s) from=130.14.25.182:36014
   2019/03/03 02:15:04 [DEBUG] http: Request GET /v1/coordinate/nodes?dc=dc1 (357.994Β΅s) from=130.14.25.182:36014
   2019/03/03 02:15:04 [DEBUG] http: Request GET /v1/session/node/foo?dc=dc1 (219.236Β΅s) from=130.14.25.182:36014
   2019/03/03 02:15:04 [DEBUG] http: Request GET /v1/catalog/datacenters (292.715Β΅s) from=130.14.25.182:36014
   2019/03/03 02:15:04 [DEBUG] http: Request GET /v1/internal/ui/node/foo?dc=dc1 (565.89Β΅s) from=130.14.25.182:36014
   2019/03/03 02:15:04 [DEBUG] http: Request GET /v1/coordinate/nodes?dc=dc1 (257.323Β΅s) from=130.14.25.182:36014
   2019/03/03 02:15:04 [DEBUG] http: Request GET /v1/session/node/foo?dc=dc1 (351.793Β΅s) from=130.14.25.182:36014
   2019/03/03 02:15:05 [DEBUG] http: Request GET /v1/catalog/datacenters (396.828Β΅s) from=130.14.25.182:36014
   2019/03/03 02:15:05 [DEBUG] http: Request GET /v1/internal/ui/node/foo?dc=dc1 (428.792Β΅s) from=130.14.25.182:36014
   2019/03/03 02:15:05 [DEBUG] http: Request GET /v1/coordinate/nodes?dc=dc1 (262.725Β΅s) from=130.14.25.182:36014
   2019/03/03 02:15:05 [DEBUG] http: Request GET /v1/session/node/foo?dc=dc1 (282.999Β΅s) from=130.14.25.182:36014
   2019/03/03 02:15:06 [DEBUG] manager: Rebalanced 1 servers, next active server is host01.dc1 (Addr: tcp/172.17.0.1:8300) (DC: dc1)
   2019/03/03 02:15:06 [DEBUG] consul: Skipping self join check for "host01" since the cluster is too small
   2019/03/03 02:15:06 [DEBUG] http: Request GET /v1/kv/__esm__/probes/foo (179.677Β΅s) from=127.0.0.1:49168
   2019/03/03 02:15:06 [DEBUG] http: Request PUT /v1/txn (1.249728ms) from=127.0.0.1:49168
   2019/03/03 02:15:10 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (1.369879ms) from=127.0.0.1:49192
   2019/03/03 02:15:10 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" status is now passing
   2019/03/03 02:15:10 [DEBUG] agent: Service "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d" in sync
   2019/03/03 02:15:10 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" in sync
   2019/03/03 02:15:10 [DEBUG] agent: Node info in sync
   2019/03/03 02:15:10 [DEBUG] http: Request PUT /v1/agent/check/update/consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl (353.922Β΅s) from=127.0.0.1:49168
   2019/03/03 02:15:16 [DEBUG] http: Request GET /v1/kv/__esm__/probes/foo (6.972ms) from=127.0.0.1:49192
   2019/03/03 02:15:16 [DEBUG] http: Request PUT /v1/txn (1.860698ms) from=127.0.0.1:49192
   2019/03/03 02:15:18 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (725.823Β΅s) from=127.0.0.1:49192
   2019/03/03 02:15:19 [DEBUG] http: Request GET /v1/catalog/datacenters (884.077Β΅s) from=130.14.25.182:36010
   2019/03/03 02:15:19 [DEBUG] http: Request GET /v1/internal/ui/node/foo?dc=dc1 (533.123Β΅s) from=130.14.25.182:36010
   2019/03/03 02:15:19 [DEBUG] http: Request GET /v1/coordinate/nodes?dc=dc1 (873.615Β΅s) from=130.14.25.182:36010
   2019/03/03 02:15:19 [DEBUG] http: Request GET /v1/session/node/foo?dc=dc1 (232.043Β΅s) from=130.14.25.182:36010
   2019/03/03 02:15:20 [DEBUG] http: Request GET /v1/health/node/foo (628.884Β΅s) from=127.0.0.1:49192
   2019/03/03 02:15:20 [DEBUG] http: Request PUT /v1/txn (774.284Β΅s) from=127.0.0.1:49192
   2019/03/03 02:15:20 [DEBUG] http: Request GET /v1/catalog/datacenters (272.789Β΅s) from=130.14.25.182:36010
   2019/03/03 02:15:20 [DEBUG] http: Request GET /v1/internal/ui/node/foo?dc=dc1 (501.101Β΅s) from=130.14.25.182:36010
   2019/03/03 02:15:20 [DEBUG] http: Request GET /v1/coordinate/nodes?dc=dc1 (220.865Β΅s) from=130.14.25.182:36010
   2019/03/03 02:15:20 [DEBUG] http: Request GET /v1/session/node/foo?dc=dc1 (200.506Β΅s) from=130.14.25.182:36010
   2019/03/03 02:15:21 [DEBUG] http: Request GET /v1/catalog/datacenters (238.862Β΅s) from=130.14.25.182:36010
   2019/03/03 02:15:21 [DEBUG] http: Request GET /v1/internal/ui/node/foo?dc=dc1 (334.908Β΅s) from=130.14.25.182:36010
   2019/03/03 02:15:21 [DEBUG] http: Request GET /v1/coordinate/nodes?dc=dc1 (197.96Β΅s) from=130.14.25.182:36010
   2019/03/03 02:15:21 [DEBUG] http: Request GET /v1/session/node/foo?dc=dc1 (153.576Β΅s) from=130.14.25.182:36010
   2019/03/03 02:15:21 [DEBUG] http: Request GET /v1/catalog/datacenters (280.324Β΅s) from=130.14.25.182:36010
   2019/03/03 02:15:21 [DEBUG] http: Request GET /v1/internal/ui/node/foo?dc=dc1 (472.701Β΅s) from=130.14.25.182:36012
   2019/03/03 02:15:21 [DEBUG] http: Request GET /v1/coordinate/nodes?dc=dc1 (217.466Β΅s) from=130.14.25.182:36012
   2019/03/03 02:15:21 [DEBUG] http: Request GET /v1/session/node/foo?dc=dc1 (117.718Β΅s) from=130.14.25.182:36012
   2019/03/03 02:15:22 [DEBUG] http: Request GET /v1/catalog/datacenters (185.473Β΅s) from=130.14.25.182:36012
   2019/03/03 02:15:22 [DEBUG] http: Request GET /v1/internal/ui/node/foo?dc=dc1 (293.515Β΅s) from=130.14.25.182:36012
   2019/03/03 02:15:22 [DEBUG] http: Request GET /v1/coordinate/nodes?dc=dc1 (543.569Β΅s) from=130.14.25.182:36012
   2019/03/03 02:15:22 [DEBUG] http: Request GET /v1/session/node/foo?dc=dc1 (254.379Β΅s) from=130.14.25.182:36012
   2019/03/03 02:15:22 [DEBUG] http: Request GET /v1/catalog/datacenters (359.425Β΅s) from=130.14.25.182:36012
   2019/03/03 02:15:22 [DEBUG] http: Request GET /v1/internal/ui/node/foo?dc=dc1 (328.285Β΅s) from=130.14.25.182:36010
   2019/03/03 02:15:22 [DEBUG] http: Request GET /v1/coordinate/nodes?dc=dc1 (141.998Β΅s) from=130.14.25.182:36010
   2019/03/03 02:15:22 [DEBUG] http: Request GET /v1/session/node/foo?dc=dc1 (262.704Β΅s) from=130.14.25.182:36010
   2019/03/03 02:15:23 [DEBUG] http: Request GET /v1/catalog/datacenters (300.844Β΅s) from=130.14.25.182:36010
   2019/03/03 02:15:23 [DEBUG] http: Request GET /v1/internal/ui/node/foo?dc=dc1 (453.955Β΅s) from=130.14.25.182:36010
   2019/03/03 02:15:23 [DEBUG] http: Request GET /v1/coordinate/nodes?dc=dc1 (181.17Β΅s) from=130.14.25.182:36010
   2019/03/03 02:15:23 [DEBUG] http: Request GET /v1/session/node/foo?dc=dc1 (159.288Β΅s) from=130.14.25.182:36010
   2019/03/03 02:15:23 [DEBUG] http: Request GET /v1/catalog/datacenters (444.087Β΅s) from=130.14.25.182:36010
   2019/03/03 02:15:23 [DEBUG] http: Request GET /v1/internal/ui/node/foo?dc=dc1 (393.357Β΅s) from=130.14.25.182:36010
   2019/03/03 02:15:23 [DEBUG] http: Request GET /v1/coordinate/nodes?dc=dc1 (257.261Β΅s) from=130.14.25.182:36010
   2019/03/03 02:15:23 [DEBUG] http: Request GET /v1/session/node/foo?dc=dc1 (135.044Β΅s) from=130.14.25.182:36010
   2019/03/03 02:15:24 [DEBUG] http: Request GET /v1/health/node/foo (371.022Β΅s) from=127.0.0.1:49192
   2019/03/03 02:15:24 [DEBUG] http: Request PUT /v1/txn (409.672Β΅s) from=127.0.0.1:49192
   2019/03/03 02:15:25 [DEBUG] http: Request GET /v1/agent/services (1.228479ms) from=127.0.0.1:49192
   2019/03/03 02:15:25 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (378.518Β΅s) from=127.0.0.1:49168
   2019/03/03 02:15:25 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" status is now passing
   2019/03/03 02:15:25 [DEBUG] agent: Service "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d" in sync
   2019/03/03 02:15:25 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" in sync
   2019/03/03 02:15:25 [DEBUG] agent: Node info in sync
   2019/03/03 02:15:25 [DEBUG] http: Request PUT /v1/agent/check/update/consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl (185.736Β΅s) from=127.0.0.1:49192
   2019/03/03 02:15:26 [DEBUG] http: Request GET /v1/kv/__esm__/probes/foo (773.22Β΅s) from=127.0.0.1:49168
   2019/03/03 02:15:26 [DEBUG] http: Request PUT /v1/txn (302.024Β΅s) from=127.0.0.1:49168
   2019/03/03 02:15:33 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (1.82123ms) from=127.0.0.1:49168
   2019/03/03 02:15:36 [DEBUG] http: Request GET /v1/kv/__esm__/probes/foo (733.635Β΅s) from=127.0.0.1:49168
   2019/03/03 02:15:36 [DEBUG] http: Request PUT /v1/txn (466.621Β΅s) from=127.0.0.1:49168
   2019/03/03 02:15:40 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" status is now passing
   2019/03/03 02:15:40 [DEBUG] agent: Service "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d" in sync
   2019/03/03 02:15:40 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" in sync
   2019/03/03 02:15:40 [DEBUG] agent: Node info in sync
   2019/03/03 02:15:40 [DEBUG] http: Request PUT /v1/agent/check/update/consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl (393.607Β΅s) from=127.0.0.1:49168
   2019/03/03 02:15:40 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (1.495546ms) from=127.0.0.1:49168
   2019/03/03 02:15:46 [DEBUG] http: Request GET /v1/kv/__esm__/probes/foo (1.106464ms) from=127.0.0.1:49168
   2019/03/03 02:15:46 [DEBUG] http: Request PUT /v1/txn (1.390673ms) from=127.0.0.1:49168
   2019/03/03 02:15:48 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (583.284Β΅s) from=127.0.0.1:49168
   2019/03/03 02:15:54 [DEBUG] http: Request GET /v1/catalog/datacenters (1.260879ms) from=130.14.25.182:36008
   2019/03/03 02:15:54 [DEBUG] http: Request GET /v1/internal/ui/node/foo?dc=dc1 (706.525Β΅s) from=130.14.25.182:36008
   2019/03/03 02:15:54 [DEBUG] http: Request GET /v1/coordinate/nodes?dc=dc1 (520.896Β΅s) from=130.14.25.182:36008
   2019/03/03 02:15:54 [DEBUG] http: Request GET /v1/session/node/foo?dc=dc1 (122.202Β΅s) from=130.14.25.182:36008
   2019/03/03 02:15:55 [DEBUG] http: Request GET /v1/catalog/datacenters (250.063Β΅s) from=130.14.25.182:36008
   2019/03/03 02:15:55 [DEBUG] http: Request GET /v1/internal/ui/node/foo?dc=dc1 (418.21Β΅s) from=130.14.25.182:36008
   2019/03/03 02:15:55 [DEBUG] http: Request GET /v1/coordinate/nodes?dc=dc1 (229.159Β΅s) from=130.14.25.182:36008
   2019/03/03 02:15:55 [DEBUG] http: Request GET /v1/session/node/foo?dc=dc1 (141.163Β΅s) from=130.14.25.182:36008
   2019/03/03 02:15:55 [DEBUG] http: Request GET /v1/agent/services (295.093Β΅s) from=127.0.0.1:49168
   2019/03/03 02:15:55 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" status is now passing
   2019/03/03 02:15:55 [DEBUG] agent: Service "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d" in sync
   2019/03/03 02:15:55 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" in sync
   2019/03/03 02:15:55 [DEBUG] agent: Node info in sync
   2019/03/03 02:15:55 [DEBUG] http: Request PUT /v1/agent/check/update/consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl (772.567Β΅s) from=127.0.0.1:49168
   2019/03/03 02:15:55 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (371.877Β΅s) from=127.0.0.1:49168
   2019/03/03 02:15:56 [DEBUG] http: Request GET /v1/kv/__esm__/probes/foo (400.378Β΅s) from=127.0.0.1:49168
   2019/03/03 02:15:56 [DEBUG] http: Request PUT /v1/txn (415.284Β΅s) from=127.0.0.1:49168
   2019/03/03 02:16:03 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (3.400188ms) from=127.0.0.1:49168
   2019/03/03 02:16:06 [DEBUG] consul: Skipping self join check for "host01" since the cluster is too small
   2019/03/03 02:16:06 [DEBUG] http: Request GET /v1/kv/__esm__/probes/foo (743.2Β΅s) from=127.0.0.1:49168
   2019/03/03 02:16:06 [DEBUG] http: Request PUT /v1/txn (591.575Β΅s) from=127.0.0.1:49168
   2019/03/03 02:16:07 [DEBUG] http: Request GET /v1/catalog/datacenters (991.956Β΅s) from=130.14.25.206:44700
   2019/03/03 02:16:07 [DEBUG] http: Request GET /v1/internal/ui/node/foo?dc=dc1 (676.462Β΅s) from=130.14.25.206:44702
   2019/03/03 02:16:07 [DEBUG] http: Request GET /v1/coordinate/nodes?dc=dc1 (265.435Β΅s) from=130.14.25.206:44702
   2019/03/03 02:16:07 [DEBUG] http: Request GET /v1/session/node/foo?dc=dc1 (291.352Β΅s) from=130.14.25.206:44702
   2019/03/03 02:16:07 [DEBUG] http: Request GET /v1/catalog/datacenters (398.117Β΅s) from=130.14.25.206:44702
   2019/03/03 02:16:07 [DEBUG] http: Request GET /v1/internal/ui/node/foo?dc=dc1 (496.205Β΅s) from=130.14.25.206:44702
   2019/03/03 02:16:07 [DEBUG] http: Request GET /v1/coordinate/nodes?dc=dc1 (197.856Β΅s) from=130.14.25.206:44702
   2019/03/03 02:16:07 [DEBUG] http: Request GET /v1/session/node/foo?dc=dc1 (158.952Β΅s) from=130.14.25.206:44702
   2019/03/03 02:16:10 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" status is now passing
   2019/03/03 02:16:10 [DEBUG] agent: Service "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d" in sync
   2019/03/03 02:16:10 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" in sync
   2019/03/03 02:16:10 [DEBUG] agent: Node info in sync
   2019/03/03 02:16:10 [DEBUG] http: Request PUT /v1/agent/check/update/consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl (436.406Β΅s) from=127.0.0.1:49168
   2019/03/03 02:16:10 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (680.629Β΅s) from=127.0.0.1:49168
   2019/03/03 02:16:13 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
   2019/03/03 02:16:13 [INFO] agent: Synced service "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d"
   2019/03/03 02:16:13 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" in sync
   2019/03/03 02:16:13 [DEBUG] agent: Node info in sync
   2019/03/03 02:16:16 [DEBUG] http: Request GET /v1/kv/__esm__/probes/foo (565.243Β΅s) from=127.0.0.1:49168
   2019/03/03 02:16:16 [DEBUG] http: Request PUT /v1/txn (973.2Β΅s) from=127.0.0.1:49168
   2019/03/03 02:16:18 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (813.064Β΅s) from=127.0.0.1:49168
   2019/03/03 02:16:25 [DEBUG] http: Request GET /v1/agent/services (866.625Β΅s) from=127.0.0.1:49168
   2019/03/03 02:16:25 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" status is now passing
   2019/03/03 02:16:25 [DEBUG] agent: Service "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d" in sync
   2019/03/03 02:16:25 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" in sync
   2019/03/03 02:16:25 [DEBUG] agent: Node info in sync
   2019/03/03 02:16:25 [DEBUG] http: Request PUT /v1/agent/check/update/consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl (373.864Β΅s) from=127.0.0.1:49168
   2019/03/03 02:16:25 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (438.136Β΅s) from=127.0.0.1:49168
   2019/03/03 02:16:26 [DEBUG] http: Request GET /v1/kv/__esm__/probes/foo (1.396959ms) from=127.0.0.1:49168
   2019/03/03 02:16:26 [DEBUG] http: Request PUT /v1/txn (1.102458ms) from=127.0.0.1:49168
   2019/03/03 02:16:33 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (1.180329ms) from=127.0.0.1:49168
   2019/03/03 02:16:36 [DEBUG] http: Request GET /v1/kv/__esm__/probes/foo (814.703Β΅s) from=127.0.0.1:49168
   2019/03/03 02:16:36 [DEBUG] http: Request PUT /v1/txn (1.335298ms) from=127.0.0.1:49168
   2019/03/03 02:16:40 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" status is now passing
   2019/03/03 02:16:40 [DEBUG] agent: Service "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d" in sync
   2019/03/03 02:16:40 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" in sync
   2019/03/03 02:16:40 [DEBUG] agent: Node info in sync
   2019/03/03 02:16:40 [DEBUG] http: Request PUT /v1/agent/check/update/consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl (493.077Β΅s) from=127.0.0.1:49168
   2019/03/03 02:16:40 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (952.955Β΅s) from=127.0.0.1:49168
   2019/03/03 02:16:46 [DEBUG] http: Request GET /v1/kv/__esm__/probes/foo (1.362089ms) from=127.0.0.1:49168
   2019/03/03 02:16:46 [DEBUG] http: Request PUT /v1/txn (1.006586ms) from=127.0.0.1:49168
   2019/03/03 02:16:48 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (514.757Β΅s) from=127.0.0.1:49168
   2019/03/03 02:16:55 [DEBUG] http: Request GET /v1/agent/services (15.891096ms) from=127.0.0.1:49168
   2019/03/03 02:16:55 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" status is now passing
   2019/03/03 02:16:55 [DEBUG] agent: Service "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d" in sync
   2019/03/03 02:16:55 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" in sync
   2019/03/03 02:16:55 [DEBUG] agent: Node info in sync
   2019/03/03 02:16:55 [DEBUG] http: Request PUT /v1/agent/check/update/consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl (510.818Β΅s) from=127.0.0.1:49168
   2019/03/03 02:16:55 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (631.784Β΅s) from=127.0.0.1:49168
   2019/03/03 02:16:56 [DEBUG] http: Request GET /v1/kv/__esm__/probes/foo (4.559402ms) from=127.0.0.1:49168
   2019/03/03 02:16:56 [DEBUG] http: Request PUT /v1/txn (1.084601ms) from=127.0.0.1:49168
   2019/03/03 02:17:03 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (2.886761ms) from=127.0.0.1:49168
   2019/03/03 02:17:06 [DEBUG] consul: Skipping self join check for "host01" since the cluster is too small
   2019/03/03 02:17:06 [DEBUG] http: Request GET /v1/kv/__esm__/probes/foo (1.193589ms) from=127.0.0.1:49168
   2019/03/03 02:17:06 [DEBUG] http: Request PUT /v1/txn (542.028Β΅s) from=127.0.0.1:49168
   2019/03/03 02:17:10 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" status is now passing
   2019/03/03 02:17:10 [DEBUG] agent: Service "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d" in sync
   2019/03/03 02:17:10 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" in sync
   2019/03/03 02:17:10 [DEBUG] agent: Node info in sync
   2019/03/03 02:17:10 [DEBUG] http: Request PUT /v1/agent/check/update/consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl (378.005Β΅s) from=127.0.0.1:49168
   2019/03/03 02:17:10 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (677.766Β΅s) from=127.0.0.1:49168
   2019/03/03 02:17:16 [DEBUG] http: Request GET /v1/kv/__esm__/probes/foo (1.186807ms) from=127.0.0.1:49168
   2019/03/03 02:17:16 [DEBUG] http: Request PUT /v1/txn (965.603Β΅s) from=127.0.0.1:49168
   2019/03/03 02:17:18 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (409.095Β΅s) from=127.0.0.1:49168
   2019/03/03 02:17:25 [DEBUG] http: Request GET /v1/agent/services (959.552Β΅s) from=127.0.0.1:49168
   2019/03/03 02:17:25 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" status is now passing
   2019/03/03 02:17:25 [DEBUG] agent: Service "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d" in sync
   2019/03/03 02:17:25 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" in sync
   2019/03/03 02:17:25 [DEBUG] agent: Node info in sync
   2019/03/03 02:17:25 [DEBUG] http: Request PUT /v1/agent/check/update/consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl (475.719Β΅s) from=127.0.0.1:49168
   2019/03/03 02:17:25 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (450.526Β΅s) from=127.0.0.1:49168
   2019/03/03 02:17:26 [DEBUG] http: Request GET /v1/kv/__esm__/probes/foo (391.807Β΅s) from=127.0.0.1:49168
   2019/03/03 02:17:26 [DEBUG] http: Request PUT /v1/txn (1.628579ms) from=127.0.0.1:49168
   2019/03/03 02:17:33 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (1.874083ms) from=127.0.0.1:49168
   2019/03/03 02:17:36 [DEBUG] http: Request GET /v1/kv/__esm__/probes/foo (1.132251ms) from=127.0.0.1:49168
   2019/03/03 02:17:36 [DEBUG] http: Request PUT /v1/txn (2.613603ms) from=127.0.0.1:49168
   2019/03/03 02:17:40 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" status is now passing
   2019/03/03 02:17:40 [DEBUG] agent: Service "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d" in sync
   2019/03/03 02:17:40 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" in sync
   2019/03/03 02:17:40 [DEBUG] agent: Node info in sync
   2019/03/03 02:17:40 [DEBUG] http: Request PUT /v1/agent/check/update/consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl (532.476Β΅s) from=127.0.0.1:49168
   2019/03/03 02:17:40 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (905.54Β΅s) from=127.0.0.1:49168
   2019/03/03 02:17:46 [DEBUG] http: Request GET /v1/kv/__esm__/probes/foo (791.888Β΅s) from=127.0.0.1:49168
   2019/03/03 02:17:46 [DEBUG] http: Request PUT /v1/txn (345.886Β΅s) from=127.0.0.1:49168
   2019/03/03 02:17:48 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (1.145735ms) from=127.0.0.1:49168
   2019/03/03 02:17:55 [DEBUG] manager: Rebalanced 1 servers, next active server is host01.dc1 (Addr: tcp/172.17.0.1:8300) (DC: dc1)
   2019/03/03 02:17:55 [DEBUG] http: Request GET /v1/agent/services (1.031212ms) from=127.0.0.1:49168
   2019/03/03 02:17:55 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" status is now passing
   2019/03/03 02:17:55 [DEBUG] agent: Service "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d" in sync
   2019/03/03 02:17:55 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" in sync
   2019/03/03 02:17:55 [DEBUG] agent: Node info in sync
   2019/03/03 02:17:55 [DEBUG] http: Request PUT /v1/agent/check/update/consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl (401.047Β΅s) from=127.0.0.1:49168
   2019/03/03 02:17:55 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (831.523Β΅s) from=127.0.0.1:49168
   2019/03/03 02:17:56 [DEBUG] http: Request GET /v1/kv/__esm__/probes/foo (560.98Β΅s) from=127.0.0.1:49168
   2019/03/03 02:17:56 [DEBUG] http: Request PUT /v1/txn (355.27Β΅s) from=127.0.0.1:49168
   2019/03/03 02:18:03 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (533.71Β΅s) from=127.0.0.1:49168
   2019/03/03 02:18:06 [DEBUG] consul: Skipping self join check for "host01" since the cluster is too small
   2019/03/03 02:18:06 [DEBUG] http: Request GET /v1/kv/__esm__/probes/foo (1.334773ms) from=127.0.0.1:49168
   2019/03/03 02:18:06 [DEBUG] http: Request PUT /v1/txn (401.847Β΅s) from=127.0.0.1:49168
   2019/03/03 02:18:08 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
   2019/03/03 02:18:08 [INFO] agent: Synced service "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d"
   2019/03/03 02:18:08 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" in sync
   2019/03/03 02:18:08 [DEBUG] agent: Node info in sync
   2019/03/03 02:18:10 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" status is now passing
   2019/03/03 02:18:10 [DEBUG] agent: Service "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d" in sync
   2019/03/03 02:18:10 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" in sync
   2019/03/03 02:18:10 [DEBUG] agent: Node info in sync
   2019/03/03 02:18:10 [DEBUG] http: Request PUT /v1/agent/check/update/consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl (371.43Β΅s) from=127.0.0.1:49168
   2019/03/03 02:18:10 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (372.17Β΅s) from=127.0.0.1:49168
   2019/03/03 02:18:16 [DEBUG] http: Request GET /v1/kv/__esm__/probes/foo (799.567Β΅s) from=127.0.0.1:49168
   2019/03/03 02:18:16 [DEBUG] http: Request PUT /v1/txn (2.449953ms) from=127.0.0.1:49168
   2019/03/03 02:18:18 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (334.321Β΅s) from=127.0.0.1:49168
   2019/03/03 02:18:25 [DEBUG] http: Request GET /v1/agent/services (8.98066ms) from=127.0.0.1:49168
   2019/03/03 02:18:25 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" status is now passing
   2019/03/03 02:18:25 [DEBUG] agent: Service "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d" in sync
   2019/03/03 02:18:25 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" in sync
   2019/03/03 02:18:25 [DEBUG] agent: Node info in sync
   2019/03/03 02:18:25 [DEBUG] http: Request PUT /v1/agent/check/update/consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl (316.332Β΅s) from=127.0.0.1:49168
   2019/03/03 02:18:25 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (695.496Β΅s) from=127.0.0.1:49168
   2019/03/03 02:18:26 [DEBUG] http: Request GET /v1/kv/__esm__/probes/foo (288.191Β΅s) from=127.0.0.1:49168
   2019/03/03 02:18:26 [DEBUG] http: Request PUT /v1/txn (408.63Β΅s) from=127.0.0.1:49168
   2019/03/03 02:18:33 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (677.323Β΅s) from=127.0.0.1:49168
   2019/03/03 02:18:36 [DEBUG] http: Request GET /v1/kv/__esm__/probes/foo (601.865Β΅s) from=127.0.0.1:49168
   2019/03/03 02:18:36 [DEBUG] http: Request PUT /v1/txn (763.544Β΅s) from=127.0.0.1:49168
   2019/03/03 02:18:40 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" status is now passing
   2019/03/03 02:18:40 [DEBUG] agent: Service "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d" in sync
   2019/03/03 02:18:40 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" in sync
   2019/03/03 02:18:40 [DEBUG] agent: Node info in sync
   2019/03/03 02:18:40 [DEBUG] http: Request PUT /v1/agent/check/update/consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl (253.488Β΅s) from=127.0.0.1:49168
   2019/03/03 02:18:40 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (580.073Β΅s) from=127.0.0.1:49168
   2019/03/03 02:18:46 [DEBUG] http: Request GET /v1/kv/__esm__/probes/foo (401.563Β΅s) from=127.0.0.1:49168
   2019/03/03 02:18:46 [DEBUG] http: Request PUT /v1/txn (821.15Β΅s) from=127.0.0.1:49168
   2019/03/03 02:18:48 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (358.577Β΅s) from=127.0.0.1:49168
   2019/03/03 02:18:55 [DEBUG] http: Request GET /v1/agent/services (588.355Β΅s) from=127.0.0.1:49168
   2019/03/03 02:18:55 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" status is now passing
   2019/03/03 02:18:55 [DEBUG] agent: Service "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d" in sync
   2019/03/03 02:18:55 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" in sync
   2019/03/03 02:18:55 [DEBUG] agent: Node info in sync
   2019/03/03 02:18:55 [DEBUG] http: Request PUT /v1/agent/check/update/consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl (235.613Β΅s) from=127.0.0.1:49168
   2019/03/03 02:18:55 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (436.643Β΅s) from=127.0.0.1:49168
   2019/03/03 02:18:56 [DEBUG] http: Request GET /v1/kv/__esm__/probes/foo (458.727Β΅s) from=127.0.0.1:49168
   2019/03/03 02:18:56 [DEBUG] http: Request PUT /v1/txn (208.356Β΅s) from=127.0.0.1:49168
   2019/03/03 02:19:03 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (626.37Β΅s) from=127.0.0.1:49168
   2019/03/03 02:19:06 [DEBUG] consul: Skipping self join check for "host01" since the cluster is too small
   2019/03/03 02:19:06 [DEBUG] http: Request GET /v1/kv/__esm__/probes/foo (875.07Β΅s) from=127.0.0.1:49168
   2019/03/03 02:19:06 [DEBUG] http: Request PUT /v1/txn (1.657829ms) from=127.0.0.1:49168
   2019/03/03 02:19:10 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" status is now passing
   2019/03/03 02:19:10 [DEBUG] agent: Service "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d" in sync
   2019/03/03 02:19:10 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" in sync
   2019/03/03 02:19:10 [DEBUG] agent: Node info in sync
   2019/03/03 02:19:10 [DEBUG] http: Request PUT /v1/agent/check/update/consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl (294.232Β΅s) from=127.0.0.1:49168
   2019/03/03 02:19:10 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (467.503Β΅s) from=127.0.0.1:49168
   2019/03/03 02:19:16 [DEBUG] http: Request GET /v1/kv/__esm__/probes/foo (332.779Β΅s) from=127.0.0.1:49168
   2019/03/03 02:19:16 [DEBUG] http: Request PUT /v1/txn (596.174Β΅s) from=127.0.0.1:49168
   2019/03/03 02:19:18 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (298.571Β΅s) from=127.0.0.1:49168
   2019/03/03 02:19:25 [DEBUG] http: Request GET /v1/agent/services (276.414Β΅s) from=127.0.0.1:49168
   2019/03/03 02:19:25 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" status is now passing
   2019/03/03 02:19:25 [DEBUG] agent: Service "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d" in sync
   2019/03/03 02:19:25 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" in sync
   2019/03/03 02:19:25 [DEBUG] agent: Node info in sync
   2019/03/03 02:19:25 [DEBUG] http: Request PUT /v1/agent/check/update/consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl (337.793Β΅s) from=127.0.0.1:49168
   2019/03/03 02:19:25 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (448.882Β΅s) from=127.0.0.1:49168
   2019/03/03 02:19:26 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
   2019/03/03 02:19:26 [INFO] agent: Synced service "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d"
   2019/03/03 02:19:26 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" in sync
   2019/03/03 02:19:26 [DEBUG] agent: Node info in sync
   2019/03/03 02:19:26 [DEBUG] http: Request GET /v1/kv/__esm__/probes/foo (645.733Β΅s) from=127.0.0.1:49168
   2019/03/03 02:19:26 [DEBUG] http: Request PUT /v1/txn (895.819Β΅s) from=127.0.0.1:49168
   2019/03/03 02:19:33 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (716.412Β΅s) from=127.0.0.1:49168
   2019/03/03 02:19:36 [DEBUG] http: Request GET /v1/kv/__esm__/probes/foo (666.687Β΅s) from=127.0.0.1:49168
   2019/03/03 02:19:36 [DEBUG] http: Request PUT /v1/txn (1.18223ms) from=127.0.0.1:49168
   2019/03/03 02:19:40 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" status is now passing
   2019/03/03 02:19:40 [DEBUG] agent: Service "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d" in sync
   2019/03/03 02:19:40 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" in sync
   2019/03/03 02:19:40 [DEBUG] agent: Node info in sync
   2019/03/03 02:19:40 [DEBUG] http: Request PUT /v1/agent/check/update/consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl (262.599Β΅s) from=127.0.0.1:49168
   2019/03/03 02:19:40 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (448.076Β΅s) from=127.0.0.1:49168
   2019/03/03 02:19:46 [DEBUG] http: Request GET /v1/kv/__esm__/probes/foo (420.712Β΅s) from=127.0.0.1:49168
   2019/03/03 02:19:46 [DEBUG] http: Request PUT /v1/txn (312.653Β΅s) from=127.0.0.1:49168
   2019/03/03 02:19:47 [DEBUG] http: Request GET /v1/catalog/datacenters (559.021Β΅s) from=130.14.25.206:44702
   2019/03/03 02:19:47 [DEBUG] http: Request GET /v1/internal/ui/node/foo?dc=dc1 (754.78Β΅s) from=130.14.25.206:45640
   2019/03/03 02:19:47 [DEBUG] http: Request GET /v1/coordinate/nodes?dc=dc1 (540.282Β΅s) from=130.14.25.206:45640
   2019/03/03 02:19:47 [DEBUG] http: Request GET /v1/session/node/foo?dc=dc1 (156.614Β΅s) from=130.14.25.206:45640
   2019/03/03 02:19:48 [DEBUG] http: Request GET /v1/catalog/datacenters (245.681Β΅s) from=130.14.25.206:45640
   2019/03/03 02:19:48 [DEBUG] http: Request GET /v1/internal/ui/node/foo?dc=dc1 (316.49Β΅s) from=130.14.25.206:45640
   2019/03/03 02:19:48 [DEBUG] http: Request GET /v1/coordinate/nodes?dc=dc1 (195.082Β΅s) from=130.14.25.206:45640
   2019/03/03 02:19:48 [DEBUG] http: Request GET /v1/session/node/foo?dc=dc1 (101.515Β΅s) from=130.14.25.206:45640
   2019/03/03 02:19:48 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (240.162Β΅s) from=127.0.0.1:49168
   2019/03/03 02:19:48 [DEBUG] http: Request GET /v1/catalog/datacenters (310.297Β΅s) from=130.14.25.206:45640
   2019/03/03 02:19:48 [DEBUG] http: Request GET /v1/internal/ui/node/foo?dc=dc1 (363.328Β΅s) from=130.14.25.206:45640
   2019/03/03 02:19:48 [DEBUG] http: Request GET /v1/coordinate/nodes?dc=dc1 (170.237Β΅s) from=130.14.25.206:45640
   2019/03/03 02:19:48 [DEBUG] http: Request GET /v1/session/node/foo?dc=dc1 (160.963Β΅s) from=130.14.25.206:45640
   2019/03/03 02:19:55 [DEBUG] http: Request GET /v1/agent/services (221.146Β΅s) from=127.0.0.1:49168
   2019/03/03 02:19:55 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" status is now passing
   2019/03/03 02:19:55 [DEBUG] agent: Service "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d" in sync
   2019/03/03 02:19:55 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" in sync
   2019/03/03 02:19:55 [DEBUG] agent: Node info in sync
   2019/03/03 02:19:55 [DEBUG] http: Request PUT /v1/agent/check/update/consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl (310.864Β΅s) from=127.0.0.1:49168
   2019/03/03 02:19:55 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (377.877Β΅s) from=127.0.0.1:49168
   2019/03/03 02:19:56 [DEBUG] http: Request GET /v1/kv/__esm__/probes/foo (211.972Β΅s) from=127.0.0.1:49168
   2019/03/03 02:19:56 [DEBUG] http: Request PUT /v1/txn (488.982Β΅s) from=127.0.0.1:49168
   2019/03/03 02:20:03 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (522.079Β΅s) from=127.0.0.1:49168
   2019/03/03 02:20:06 [DEBUG] consul: Skipping self join check for "host01" since the cluster is too small
   2019/03/03 02:20:06 [DEBUG] http: Request GET /v1/kv/__esm__/probes/foo (764.452Β΅s) from=127.0.0.1:49168
   2019/03/03 02:20:06 [DEBUG] http: Request PUT /v1/txn (339.444Β΅s) from=127.0.0.1:49168
   2019/03/03 02:20:08 [DEBUG] manager: Rebalanced 1 servers, next active server is host01.dc1 (Addr: tcp/172.17.0.1:8300) (DC: dc1)
   2019/03/03 02:20:09 [DEBUG] http: Request GET /v1/kv/__esm__/leader?consistent=&index=19 (5m13.639807932s) from=127.0.0.1:49106
   2019/03/03 02:20:10 [DEBUG] http: Request GET /v1/kv/__esm__/agents/consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d?index=21 (5m3.681787282s) from=127.0.0.1:49104
   2019/03/03 02:20:10 [DEBUG] http: Request GET /v1/catalog/nodes?node-meta=external-node%3Atrue (545.197Β΅s) from=127.0.0.1:49104
   2019/03/03 02:20:10 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" status is now passing
   2019/03/03 02:20:10 [DEBUG] agent: Service "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d" in sync
   2019/03/03 02:20:10 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" in sync
   2019/03/03 02:20:10 [DEBUG] agent: Node info in sync
   2019/03/03 02:20:10 [DEBUG] http: Request PUT /v1/agent/check/update/consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl (328.778Β΅s) from=127.0.0.1:49104
   2019/03/03 02:20:10 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (329.818Β΅s) from=127.0.0.1:49104
   2019/03/03 02:20:13 [DEBUG] http: Request GET /v1/health/service/consul-esm?index=20&passing=1 (5m15.403718386s) from=127.0.0.1:49108
   2019/03/03 02:20:13 [DEBUG] http: Request PUT /v1/txn (1.768636ms) from=127.0.0.1:49108
   2019/03/03 02:20:16 [DEBUG] http: Request GET /v1/kv/__esm__/probes/foo (247.329Β΅s) from=127.0.0.1:49104
   2019/03/03 02:20:16 [DEBUG] http: Request PUT /v1/txn (492.829Β΅s) from=127.0.0.1:49104
   2019/03/03 02:20:17 [DEBUG] http: Request GET /v1/health/state/any?index=20&node-meta=external-node%3Atrue (5m10.491558243s) from=127.0.0.1:49170
   2019/03/03 02:20:17 [DEBUG] http: Request GET /v1/health/state/any?node-meta=external-node%3Atrue (337.355Β΅s) from=127.0.0.1:49170
   2019/03/03 02:20:18 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (306.008Β΅s) from=127.0.0.1:49170
   2019/03/03 02:20:20 [DEBUG] http: Request GET /v1/kv/__esm__/agents/consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d?index=21 (375.863Β΅s) from=127.0.0.1:49170
   2019/03/03 02:20:20 [DEBUG] http: Request GET /v1/catalog/nodes?node-meta=external-node%3Atrue (289.981Β΅s) from=127.0.0.1:49170
   2019/03/03 02:20:20 [DEBUG] http: Request GET /v1/health/state/any?node-meta=external-node%3Atrue (276.887Β΅s) from=127.0.0.1:49170
   2019/03/03 02:20:20 [DEBUG] http: Request GET /v1/catalog/nodes?index=15&node-meta=external-node%3Atrue (5m14.807037535s) from=127.0.0.1:49110
   2019/03/03 02:20:20 [DEBUG] http: Request PUT /v1/txn (10.660592ms) from=127.0.0.1:49170
   2019/03/03 02:20:25 [DEBUG] http: Request GET /v1/agent/services (886.299Β΅s) from=127.0.0.1:49170
   2019/03/03 02:20:25 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" status is now passing
   2019/03/03 02:20:25 [DEBUG] agent: Service "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d" in sync
   2019/03/03 02:20:25 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" in sync
   2019/03/03 02:20:25 [DEBUG] agent: Node info in sync
   2019/03/03 02:20:25 [DEBUG] http: Request PUT /v1/agent/check/update/consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl (417.702Β΅s) from=127.0.0.1:49170
   2019/03/03 02:20:25 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (307.269Β΅s) from=127.0.0.1:49170
   2019/03/03 02:20:26 [DEBUG] http: Request GET /v1/kv/__esm__/probes/foo (701.055Β΅s) from=127.0.0.1:49170
   2019/03/03 02:20:26 [DEBUG] http: Request PUT /v1/txn (285.085Β΅s) from=127.0.0.1:49170
   2019/03/03 02:20:30 [DEBUG] http: Request GET /v1/kv/__esm__/agents/consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d?index=46 (891.643Β΅s) from=127.0.0.1:49170
   2019/03/03 02:20:30 [DEBUG] http: Request GET /v1/catalog/nodes?node-meta=external-node%3Atrue (484.311Β΅s) from=127.0.0.1:49104
   2019/03/03 02:20:33 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (332.827Β΅s) from=127.0.0.1:49168
   2019/03/03 02:20:36 [DEBUG] http: Request GET /v1/kv/__esm__/probes/foo (306.686Β΅s) from=127.0.0.1:49168
   2019/03/03 02:20:36 [DEBUG] http: Request PUT /v1/txn (256.992Β΅s) from=127.0.0.1:49168
   2019/03/03 02:20:40 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" status is now passing
   2019/03/03 02:20:40 [DEBUG] agent: Service "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d" in sync
   2019/03/03 02:20:40 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" in sync
   2019/03/03 02:20:40 [DEBUG] agent: Node info in sync
   2019/03/03 02:20:40 [DEBUG] http: Request PUT /v1/agent/check/update/consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl (370.136Β΅s) from=127.0.0.1:50620
   2019/03/03 02:20:40 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (371.588Β΅s) from=127.0.0.1:50620
   2019/03/03 02:20:46 [DEBUG] http: Request GET /v1/kv/__esm__/probes/foo (753.798Β΅s) from=127.0.0.1:50620
   2019/03/03 02:20:46 [DEBUG] http: Request PUT /v1/txn (841.64Β΅s) from=127.0.0.1:50620
   2019/03/03 02:20:48 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
   2019/03/03 02:20:48 [INFO] agent: Synced service "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d"
   2019/03/03 02:20:48 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" in sync
   2019/03/03 02:20:48 [DEBUG] agent: Node info in sync
   2019/03/03 02:20:48 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (394.138Β΅s) from=127.0.0.1:50620
   2019/03/03 02:20:55 [DEBUG] http: Request GET /v1/agent/services (262.986Β΅s) from=127.0.0.1:50620
   2019/03/03 02:20:55 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" status is now passing
   2019/03/03 02:20:55 [DEBUG] agent: Service "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d" in sync
   2019/03/03 02:20:55 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" in sync
   2019/03/03 02:20:55 [DEBUG] agent: Node info in sync
   2019/03/03 02:20:55 [DEBUG] http: Request PUT /v1/agent/check/update/consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl (288.585Β΅s) from=127.0.0.1:50620
   2019/03/03 02:20:55 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (321.093Β΅s) from=127.0.0.1:50620
   2019/03/03 02:20:56 [DEBUG] http: Request GET /v1/kv/__esm__/probes/foo (253.612Β΅s) from=127.0.0.1:50620
   2019/03/03 02:20:56 [DEBUG] http: Request PUT /v1/txn (726.79Β΅s) from=127.0.0.1:50620
   2019/03/03 02:21:03 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (929.949Β΅s) from=127.0.0.1:50620
   2019/03/03 02:21:04 [DEBUG] http: Request GET /v1/catalog/datacenters (677.187Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:04 [DEBUG] http: Request GET /v1/internal/ui/node/foo?dc=dc1 (509.74Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:04 [DEBUG] http: Request GET /v1/coordinate/nodes?dc=dc1 (240.924Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:04 [DEBUG] http: Request GET /v1/session/node/foo?dc=dc1 (266.762Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:05 [DEBUG] http: Request GET /v1/catalog/datacenters (947.751Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:05 [DEBUG] http: Request GET /v1/internal/ui/node/foo?dc=dc1 (505.494Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:05 [DEBUG] http: Request GET /v1/coordinate/nodes?dc=dc1 (152.559Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:05 [DEBUG] http: Request GET /v1/session/node/foo?dc=dc1 (109.806Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:05 [DEBUG] http: Request GET /v1/catalog/datacenters (351.09Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:05 [DEBUG] http: Request GET /v1/internal/ui/node/foo?dc=dc1 (399.064Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:05 [DEBUG] http: Request GET /v1/coordinate/nodes?dc=dc1 (123.336Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:05 [DEBUG] http: Request GET /v1/session/node/foo?dc=dc1 (152.588Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:06 [DEBUG] consul: Skipping self join check for "host01" since the cluster is too small
   2019/03/03 02:21:06 [DEBUG] http: Request GET /v1/catalog/datacenters (220.735Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:06 [DEBUG] http: Request GET /v1/internal/ui/node/foo?dc=dc1 (368.324Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:06 [DEBUG] http: Request GET /v1/coordinate/nodes?dc=dc1 (258.193Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:06 [DEBUG] http: Request GET /v1/session/node/foo?dc=dc1 (98.107Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:06 [DEBUG] http: Request GET /v1/catalog/datacenters (235.175Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:06 [DEBUG] http: Request GET /v1/internal/ui/node/foo?dc=dc1 (340.704Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:06 [DEBUG] http: Request GET /v1/coordinate/nodes?dc=dc1 (241.961Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:06 [DEBUG] http: Request GET /v1/session/node/foo?dc=dc1 (180.385Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:06 [DEBUG] http: Request GET /v1/kv/__esm__/probes/foo (376.645Β΅s) from=127.0.0.1:50620
   2019/03/03 02:21:06 [DEBUG] http: Request PUT /v1/txn (577.293Β΅s) from=127.0.0.1:50620
   2019/03/03 02:21:10 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" status is now passing
   2019/03/03 02:21:10 [DEBUG] agent: Service "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d" in sync
   2019/03/03 02:21:10 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" in sync
   2019/03/03 02:21:10 [DEBUG] agent: Node info in sync
   2019/03/03 02:21:10 [DEBUG] http: Request PUT /v1/agent/check/update/consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl (413.752Β΅s) from=127.0.0.1:50620
   2019/03/03 02:21:10 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (499.876Β΅s) from=127.0.0.1:50620
   2019/03/03 02:21:16 [DEBUG] http: Request GET /v1/kv/__esm__/probes/foo (316.729Β΅s) from=127.0.0.1:50620
   2019/03/03 02:21:16 [DEBUG] http: Request PUT /v1/txn (571.357Β΅s) from=127.0.0.1:50620
   2019/03/03 02:21:18 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (333.566Β΅s) from=127.0.0.1:50620
   2019/03/03 02:21:25 [DEBUG] http: Request GET /v1/agent/services (675.155Β΅s) from=127.0.0.1:50620
   2019/03/03 02:21:25 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" status is now passing
   2019/03/03 02:21:25 [DEBUG] agent: Service "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d" in sync
   2019/03/03 02:21:25 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" in sync
   2019/03/03 02:21:25 [DEBUG] agent: Node info in sync
   2019/03/03 02:21:25 [DEBUG] http: Request PUT /v1/agent/check/update/consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl (159.851Β΅s) from=127.0.0.1:50620
   2019/03/03 02:21:25 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (289.072Β΅s) from=127.0.0.1:50620
   2019/03/03 02:21:26 [DEBUG] http: Request GET /v1/kv/__esm__/probes/foo (871.045Β΅s) from=127.0.0.1:50620
   2019/03/03 02:21:26 [DEBUG] http: Request PUT /v1/txn (318.58Β΅s) from=127.0.0.1:50620
   2019/03/03 02:21:33 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (603.683Β΅s) from=127.0.0.1:50620
   2019/03/03 02:21:36 [DEBUG] http: Request GET /v1/kv/__esm__/probes/foo (420.793Β΅s) from=127.0.0.1:50620
   2019/03/03 02:21:36 [DEBUG] http: Request PUT /v1/txn (287.204Β΅s) from=127.0.0.1:50620
   2019/03/03 02:21:40 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" status is now passing
   2019/03/03 02:21:40 [DEBUG] agent: Service "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d" in sync
   2019/03/03 02:21:40 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" in sync
   2019/03/03 02:21:40 [DEBUG] agent: Node info in sync
   2019/03/03 02:21:40 [DEBUG] http: Request PUT /v1/agent/check/update/consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl (264.699Β΅s) from=127.0.0.1:50620
   2019/03/03 02:21:40 [DEBUG] http: Request GET /v1/catalog/datacenters (435.48Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:40 [DEBUG] http: Request GET /v1/internal/ui/node/foo?dc=dc1 (755.655Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:40 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (783.111Β΅s) from=127.0.0.1:50620
   2019/03/03 02:21:40 [DEBUG] http: Request GET /v1/coordinate/nodes?dc=dc1 (287.572Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:40 [DEBUG] http: Request GET /v1/session/node/foo?dc=dc1 (161.222Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:41 [DEBUG] http: Request GET /v1/catalog/datacenters (223.477Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:41 [DEBUG] http: Request GET /v1/internal/ui/node/foo?dc=dc1 (508.53Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:41 [DEBUG] http: Request GET /v1/coordinate/nodes?dc=dc1 (208.368Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:41 [DEBUG] http: Request GET /v1/session/node/foo?dc=dc1 (162.316Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:41 [DEBUG] http: Request GET /v1/catalog/datacenters (216.408Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:41 [DEBUG] http: Request GET /v1/internal/ui/node/foo?dc=dc1 (263.92Β΅s) from=130.14.25.206:45640
   2019/03/03 02:21:41 [DEBUG] http: Request GET /v1/coordinate/nodes?dc=dc1 (230.937Β΅s) from=130.14.25.206:45640
   2019/03/03 02:21:41 [DEBUG] http: Request GET /v1/session/node/foo?dc=dc1 (155.285Β΅s) from=130.14.25.206:45640
   2019/03/03 02:21:41 [DEBUG] http: Request GET /v1/catalog/datacenters (207.464Β΅s) from=130.14.25.206:45640
   2019/03/03 02:21:41 [DEBUG] http: Request GET /v1/internal/ui/node/foo?dc=dc1 (390.474Β΅s) from=130.14.25.206:45640
   2019/03/03 02:21:41 [DEBUG] http: Request GET /v1/coordinate/nodes?dc=dc1 (271.959Β΅s) from=130.14.25.206:45640
   2019/03/03 02:21:41 [DEBUG] http: Request GET /v1/session/node/foo?dc=dc1 (259.444Β΅s) from=130.14.25.206:45640
   2019/03/03 02:21:42 [DEBUG] http: Request GET /v1/catalog/datacenters (590.696Β΅s) from=130.14.25.206:45640
   2019/03/03 02:21:42 [DEBUG] http: Request GET /v1/internal/ui/node/foo?dc=dc1 (510.355Β΅s) from=130.14.25.206:45640
   2019/03/03 02:21:42 [DEBUG] http: Request GET /v1/coordinate/nodes?dc=dc1 (242.305Β΅s) from=130.14.25.206:45640
   2019/03/03 02:21:42 [DEBUG] http: Request GET /v1/session/node/foo?dc=dc1 (220.142Β΅s) from=130.14.25.206:45640
   2019/03/03 02:21:42 [DEBUG] http: Request GET /v1/catalog/datacenters (267.73Β΅s) from=130.14.25.206:45640
   2019/03/03 02:21:42 [DEBUG] http: Request GET /v1/internal/ui/node/foo?dc=dc1 (276.709Β΅s) from=130.14.25.206:45640
   2019/03/03 02:21:42 [DEBUG] http: Request GET /v1/coordinate/nodes?dc=dc1 (221.113Β΅s) from=130.14.25.206:45640
   2019/03/03 02:21:42 [DEBUG] http: Request GET /v1/session/node/foo?dc=dc1 (151.009Β΅s) from=130.14.25.206:45640
   2019/03/03 02:21:42 [DEBUG] http: Request GET /v1/catalog/datacenters (239.93Β΅s) from=130.14.25.206:45640
   2019/03/03 02:21:42 [DEBUG] http: Request GET /v1/internal/ui/node/foo?dc=dc1 (352.574Β΅s) from=130.14.25.206:45640
   2019/03/03 02:21:42 [DEBUG] http: Request GET /v1/coordinate/nodes?dc=dc1 (256.262Β΅s) from=130.14.25.206:45640
   2019/03/03 02:21:43 [DEBUG] http: Request GET /v1/session/node/foo?dc=dc1 (158.755Β΅s) from=130.14.25.206:45640
   2019/03/03 02:21:43 [DEBUG] http: Request GET /v1/catalog/datacenters (214.13Β΅s) from=130.14.25.206:45640
   2019/03/03 02:21:43 [DEBUG] http: Request GET /v1/internal/ui/node/foo?dc=dc1 (672.132Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:43 [DEBUG] http: Request GET /v1/coordinate/nodes?dc=dc1 (248.407Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:43 [DEBUG] http: Request GET /v1/session/node/foo?dc=dc1 (181.54Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:43 [DEBUG] http: Request GET /v1/catalog/datacenters (221.09Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:43 [DEBUG] http: Request GET /v1/internal/ui/node/foo?dc=dc1 (262.13Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:43 [DEBUG] http: Request GET /v1/coordinate/nodes?dc=dc1 (186.704Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:43 [DEBUG] http: Request GET /v1/session/node/foo?dc=dc1 (182.547Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:44 [DEBUG] http: Request GET /v1/catalog/datacenters (293.383Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:44 [DEBUG] http: Request GET /v1/internal/ui/node/foo?dc=dc1 (310.434Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:44 [DEBUG] http: Request GET /v1/coordinate/nodes?dc=dc1 (249.885Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:44 [DEBUG] http: Request GET /v1/session/node/foo?dc=dc1 (241.226Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:44 [DEBUG] http: Request GET /v1/catalog/datacenters (310.99Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:44 [DEBUG] http: Request GET /v1/internal/ui/node/foo?dc=dc1 (335.843Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:44 [DEBUG] http: Request GET /v1/coordinate/nodes?dc=dc1 (267.475Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:44 [DEBUG] http: Request GET /v1/session/node/foo?dc=dc1 (202.18Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:45 [DEBUG] http: Request GET /v1/catalog/datacenters (234.44Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:45 [DEBUG] http: Request GET /v1/internal/ui/node/foo?dc=dc1 (488.039Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:45 [DEBUG] http: Request GET /v1/coordinate/nodes?dc=dc1 (158.526Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:45 [DEBUG] http: Request GET /v1/session/node/foo?dc=dc1 (121.612Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:45 [DEBUG] http: Request GET /v1/catalog/datacenters (256.369Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:45 [DEBUG] http: Request GET /v1/internal/ui/node/foo?dc=dc1 (305.187Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:45 [DEBUG] http: Request GET /v1/coordinate/nodes?dc=dc1 (217.604Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:45 [DEBUG] http: Request GET /v1/session/node/foo?dc=dc1 (199.142Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:46 [DEBUG] http: Request GET /v1/catalog/datacenters (188.839Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:46 [DEBUG] http: Request GET /v1/internal/ui/node/foo?dc=dc1 (408.934Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:46 [DEBUG] http: Request GET /v1/coordinate/nodes?dc=dc1 (197.526Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:46 [DEBUG] http: Request GET /v1/session/node/foo?dc=dc1 (104.891Β΅s) from=130.14.25.206:44700
   2019/03/03 02:21:46 [DEBUG] http: Request GET /v1/kv/__esm__/probes/foo (367.233Β΅s) from=127.0.0.1:50620
   2019/03/03 02:21:46 [DEBUG] http: Request PUT /v1/txn (658.468Β΅s) from=127.0.0.1:50620
   2019/03/03 02:21:48 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (372.165Β΅s) from=127.0.0.1:50620
   2019/03/03 02:21:54 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
   2019/03/03 02:21:54 [INFO] agent: Synced service "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d"
   2019/03/03 02:21:54 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" in sync
   2019/03/03 02:21:54 [DEBUG] agent: Node info in sync
   2019/03/03 02:21:55 [DEBUG] http: Request GET /v1/agent/services (599.44Β΅s) from=127.0.0.1:50620
   2019/03/03 02:21:55 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" status is now passing
   2019/03/03 02:21:55 [DEBUG] agent: Service "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d" in sync
   2019/03/03 02:21:55 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" in sync
   2019/03/03 02:21:55 [DEBUG] agent: Node info in sync
   2019/03/03 02:21:55 [DEBUG] http: Request PUT /v1/agent/check/update/consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl (256.803Β΅s) from=127.0.0.1:50620
   2019/03/03 02:21:55 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (860.089Β΅s) from=127.0.0.1:50620
   2019/03/03 02:21:56 [DEBUG] http: Request GET /v1/kv/__esm__/probes/foo (399.264Β΅s) from=127.0.0.1:50620
   2019/03/03 02:21:56 [DEBUG] http: Request PUT /v1/txn (327.668Β΅s) from=127.0.0.1:50620
   2019/03/03 02:22:03 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (793.96Β΅s) from=127.0.0.1:50620
   2019/03/03 02:22:06 [DEBUG] consul: Skipping self join check for "host01" since the cluster is too small
   2019/03/03 02:22:06 [DEBUG] http: Request GET /v1/kv/__esm__/probes/foo (1.39978ms) from=127.0.0.1:50620
   2019/03/03 02:22:06 [DEBUG] http: Request PUT /v1/txn (834.622Β΅s) from=127.0.0.1:50620
   2019/03/03 02:22:10 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" status is now passing
   2019/03/03 02:22:10 [DEBUG] agent: Service "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d" in sync
   2019/03/03 02:22:10 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" in sync
   2019/03/03 02:22:10 [DEBUG] agent: Node info in sync
   2019/03/03 02:22:10 [DEBUG] http: Request PUT /v1/agent/check/update/consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl (311.653Β΅s) from=127.0.0.1:50620
   2019/03/03 02:22:10 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (563.622Β΅s) from=127.0.0.1:50620
   2019/03/03 02:22:16 [DEBUG] http: Request GET /v1/kv/__esm__/probes/foo (381.021Β΅s) from=127.0.0.1:50620
   2019/03/03 02:22:16 [DEBUG] http: Request PUT /v1/txn (248.673Β΅s) from=127.0.0.1:50620
   2019/03/03 02:22:18 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (313.312Β΅s) from=127.0.0.1:50620
   2019/03/03 02:22:25 [DEBUG] http: Request GET /v1/agent/services (7.652215ms) from=127.0.0.1:50620
   2019/03/03 02:22:25 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" status is now passing
   2019/03/03 02:22:25 [DEBUG] agent: Service "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d" in sync
   2019/03/03 02:22:25 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" in sync
   2019/03/03 02:22:25 [DEBUG] agent: Node info in sync
   2019/03/03 02:22:25 [DEBUG] http: Request PUT /v1/agent/check/update/consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl (230.247Β΅s) from=127.0.0.1:50620
   2019/03/03 02:22:25 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (269.735Β΅s) from=127.0.0.1:50620
   2019/03/03 02:22:26 [DEBUG] http: Request GET /v1/kv/__esm__/probes/foo (903.858Β΅s) from=127.0.0.1:50620
   2019/03/03 02:22:26 [DEBUG] http: Request PUT /v1/txn (318.449Β΅s) from=127.0.0.1:50620
   2019/03/03 02:22:31 [DEBUG] manager: Rebalanced 1 servers, next active server is host01.dc1 (Addr: tcp/172.17.0.1:8300) (DC: dc1)
   2019/03/03 02:22:33 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (307.712Β΅s) from=127.0.0.1:50620
   2019/03/03 02:22:36 [DEBUG] http: Request GET /v1/kv/__esm__/probes/foo (595.656Β΅s) from=127.0.0.1:50620
   2019/03/03 02:22:36 [DEBUG] http: Request PUT /v1/txn (1.292063ms) from=127.0.0.1:50620
   2019/03/03 02:22:40 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" status is now passing
   2019/03/03 02:22:40 [DEBUG] agent: Service "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d" in sync
   2019/03/03 02:22:40 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" in sync
   2019/03/03 02:22:40 [DEBUG] agent: Node info in sync
   2019/03/03 02:22:40 [DEBUG] http: Request PUT /v1/agent/check/update/consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl (320.589Β΅s) from=127.0.0.1:50620
   2019/03/03 02:22:40 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (244.997Β΅s) from=127.0.0.1:50620
   2019/03/03 02:22:46 [DEBUG] http: Request GET /v1/kv/__esm__/probes/foo (335.825Β΅s) from=127.0.0.1:50620
   2019/03/03 02:22:46 [DEBUG] http: Request PUT /v1/txn (298.572Β΅s) from=127.0.0.1:50620
   2019/03/03 02:22:48 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (531.322Β΅s) from=127.0.0.1:50620
   2019/03/03 02:22:55 [DEBUG] http: Request GET /v1/agent/services (264.419Β΅s) from=127.0.0.1:50620
   2019/03/03 02:22:55 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" status is now passing
   2019/03/03 02:22:55 [DEBUG] agent: Service "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d" in sync
   2019/03/03 02:22:55 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" in sync
   2019/03/03 02:22:55 [DEBUG] agent: Node info in sync
   2019/03/03 02:22:55 [DEBUG] http: Request PUT /v1/agent/check/update/consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl (278.409Β΅s) from=127.0.0.1:50620
   2019/03/03 02:22:55 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (1.018086ms) from=127.0.0.1:50620
   2019/03/03 02:22:56 [DEBUG] http: Request GET /v1/kv/__esm__/probes/foo (386.399Β΅s) from=127.0.0.1:50620
   2019/03/03 02:22:56 [DEBUG] http: Request PUT /v1/txn (255.362Β΅s) from=127.0.0.1:50620
   2019/03/03 02:23:03 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (652.156Β΅s) from=127.0.0.1:50620
   2019/03/03 02:23:06 [DEBUG] consul: Skipping self join check for "host01" since the cluster is too small
   2019/03/03 02:23:06 [DEBUG] http: Request GET /v1/kv/__esm__/probes/foo (837.254Β΅s) from=127.0.0.1:50620
   2019/03/03 02:23:06 [DEBUG] http: Request PUT /v1/txn (561.553Β΅s) from=127.0.0.1:50620
   2019/03/03 02:23:10 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" status is now passing
   2019/03/03 02:23:10 [DEBUG] agent: Service "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d" in sync
   2019/03/03 02:23:10 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" in sync
   2019/03/03 02:23:10 [DEBUG] agent: Node info in sync
   2019/03/03 02:23:10 [DEBUG] http: Request PUT /v1/agent/check/update/consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl (471.828Β΅s) from=127.0.0.1:50620
   2019/03/03 02:23:10 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (686.774Β΅s) from=127.0.0.1:50620
   2019/03/03 02:23:15 [DEBUG] http: Request GET /v1/catalog/datacenters (537.47Β΅s) from=130.14.25.206:44702
   2019/03/03 02:23:15 [DEBUG] http: Request GET /v1/internal/ui/node/foo?dc=dc1 (1.775213ms) from=130.14.25.206:44702
   2019/03/03 02:23:15 [DEBUG] http: Request GET /v1/coordinate/nodes?dc=dc1 (233.727Β΅s) from=130.14.25.206:44702
   2019/03/03 02:23:15 [DEBUG] http: Request GET /v1/session/node/foo?dc=dc1 (187.698Β΅s) from=130.14.25.206:44702
   2019/03/03 02:23:16 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
   2019/03/03 02:23:16 [INFO] agent: Synced service "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d"
   2019/03/03 02:23:16 [DEBUG] agent: Check "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d:agent-ttl" in sync
   2019/03/03 02:23:16 [DEBUG] agent: Node info in sync
   2019/03/03 02:23:16 [DEBUG] http: Request GET /v1/catalog/datacenters (258.841Β΅s) from=130.14.25.206:44702
   2019/03/03 02:23:16 [DEBUG] http: Request GET /v1/internal/ui/node/foo?dc=dc1 (325.171Β΅s) from=130.14.25.206:44700
   2019/03/03 02:23:16 [DEBUG] http: Request GET /v1/coordinate/nodes?dc=dc1 (165.34Β΅s) from=130.14.25.206:44700
   2019/03/03 02:23:16 [DEBUG] http: Request GET /v1/session/node/foo?dc=dc1 (196.491Β΅s) from=130.14.25.206:44700
   2019/03/03 02:23:16 [DEBUG] http: Request GET /v1/kv/__esm__/probes/foo (321.605Β΅s) from=127.0.0.1:50620
   2019/03/03 02:23:16 [DEBUG] http: Request PUT /v1/txn (405.177Β΅s) from=127.0.0.1:50620
   2019/03/03 02:23:18 [DEBUG] http: Request PUT /v1/session/renew/3a3072d2-6232-a3cb-7176-57c1026a14da (257.554Β΅s) from=127.0.0.1:50620
^C    2019/03/03 02:23:18 [INFO] agent: Caught signal:  interrupt
   2019/03/03 02:23:18 [INFO] agent: Graceful shutdown disabled. Exiting
   2019/03/03 02:23:18 [INFO] agent: Requesting shutdown
   2019/03/03 02:23:18 [WARN] agent: dev mode disabled persistence, killing all proxies since we can't recover them
   2019/03/03 02:23:18 [DEBUG] agent/proxy: Stopping managed Connect proxy manager
   2019/03/03 02:23:18 [INFO] consul: shutting down server
   2019/03/03 02:23:18 [WARN] serf: Shutdown without a Leave
   2019/03/03 02:23:18 [WARN] serf: Shutdown without a Leave
   2019/03/03 02:23:18 [INFO] manager: shutting down
   2019/03/03 02:23:18 [INFO] agent: consul server down
   2019/03/03 02:23:18 [INFO] agent: shutdown complete
   2019/03/03 02:23:18 [INFO] agent: Stopping DNS server 0.0.0.0:18600 (tcp)
   2019/03/03 02:23:18 [INFO] agent: Stopping DNS server 0.0.0.0:18600 (udp)
   2019/03/03 02:23:18 [INFO] agent: Stopping HTTP server [::]:18500 (tcp)
   2019/03/03 02:23:19 [WARN] agent: Timeout stopping HTTP server [::]:18500 (tcp)
   2019/03/03 02:23:19 [INFO] agent: Waiting for endpoints to shut down
   2019/03/03 02:23:19 [INFO] agent: Endpoints down
   2019/03/03 02:23:19 [INFO] agent: Exit code: 1
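
The Consul agent log above is mostly ESM's own traffic: TTL updates for its "consul-esm:...:agent-ttl" check, session renewals for the leader lock, periodic reads of the probe data under the "__esm__/" KV prefix, and the PUT /v1/txn calls ESM issues as it writes results back. As a quick sketch (the port and KV prefix match the config shown in the next step), the data ESM keeps in the KV store can be listed directly:

$ curl "http://localhost:18500/v1/kv/__esm__/?keys&pretty"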

Start ESM

ESM v0.3.2, built from a tag.

Config:

{
  "consul_kv_path": "__esm__/",
  "log_level": "debug",
  "http_addr": "localhost:18500"
}
$ ./consul-esm -config-dir=.
2019/03/03 02:14:55 [INFO] Connecting to Consul on localhost:18500...
Consul ESM running!
            Datacenter: (default)
               Service: "consul-esm"
           Service Tag: ""
            Service ID: "consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d"
Node Reconnect Timeout: "72h0m0s"
Log data will now stream in as it occurs:

2019/03/03 02:14:55 [DEBUG] Registered ESM service with Consul
2019/03/03 02:14:55 [INFO] Trying to obtain leadership...
2019/03/03 02:14:55 [INFO] Obtained leadership
2019/03/03 02:14:56 [INFO] Rebalanced 1 external nodes across 1 ESM instances
2019/03/03 02:14:56 [DEBUG] Now waiting 10s between node pings
2019/03/03 02:14:56 [INFO] Now managing 2 health checks across 1 nodes
2019/03/03 02:14:56 [DEBUG] agent: pausing 3.744009718s before first socket connection of localhost:8000
2019/03/03 02:14:56 [DEBUG] agent: pausing 6.390416152s before first HTTP request of http://localhost:8000/health
2019/03/03 02:15:00 [WARN] agent: socket connection failed 'localhost:8000': dial tcp 127.0.0.1:8000: connect: connection refused
2019/03/03 02:15:03 [WARN] agent: http request failed 'http://localhost:8000/health': Get http://localhost:8000/health: dial tcp 127.0.0.1:8000: connect: connection refused
2019/03/03 02:15:05 [WARN] agent: socket connection failed 'localhost:8000': dial tcp 127.0.0.1:8000: connect: connection refused
2019/03/03 02:15:06 [WARN] could not ping node "foo": socket: permission denied
2019/03/03 02:15:10 [WARN] agent: socket connection failed 'localhost:8000': dial tcp 127.0.0.1:8000: connect: connection refused
2019/03/03 02:15:13 [WARN] agent: http request failed 'http://localhost:8000/health': Get http://localhost:8000/health: dial tcp 127.0.0.1:8000: connect: connection refused
2019/03/03 02:15:15 [WARN] agent: socket connection failed 'localhost:8000': dial tcp 127.0.0.1:8000: connect: connection refused
2019/03/03 02:15:16 [WARN] could not ping node "foo": socket: permission denied
2019/03/03 02:15:20 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:15:23 [WARN] agent: Check 'foo/web1/service:web1' is now critical
2019/03/03 02:15:25 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:15:26 [WARN] could not ping node "foo": socket: permission denied
2019/03/03 02:15:30 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:15:33 [WARN] agent: Check 'foo/web1/service:web1' is now critical
2019/03/03 02:15:35 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:15:36 [WARN] could not ping node "foo": socket: permission denied
2019/03/03 02:15:40 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:15:43 [WARN] agent: Check 'foo/web1/service:web1' is now critical
2019/03/03 02:15:45 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:15:46 [WARN] could not ping node "foo": socket: permission denied
2019/03/03 02:15:50 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:15:53 [WARN] agent: Check 'foo/web1/service:web1' is now critical
2019/03/03 02:15:55 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:15:56 [WARN] could not ping node "foo": socket: permission denied
2019/03/03 02:16:00 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:16:03 [WARN] agent: Check 'foo/web1/service:web1' is now critical
2019/03/03 02:16:05 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:16:06 [WARN] could not ping node "foo": socket: permission denied
2019/03/03 02:16:10 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:16:13 [WARN] agent: Check 'foo/web1/service:web1' is now critical
2019/03/03 02:16:15 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:16:16 [WARN] could not ping node "foo": socket: permission denied
2019/03/03 02:16:20 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:16:23 [WARN] agent: Check 'foo/web1/service:web1' is now critical
2019/03/03 02:16:25 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:16:26 [WARN] could not ping node "foo": socket: permission denied
2019/03/03 02:16:30 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:16:33 [WARN] agent: Check 'foo/web1/service:web1' is now critical
2019/03/03 02:16:35 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:16:36 [WARN] could not ping node "foo": socket: permission denied
2019/03/03 02:16:40 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:16:43 [WARN] agent: Check 'foo/web1/service:web1' is now critical
2019/03/03 02:16:45 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:16:46 [WARN] could not ping node "foo": socket: permission denied
2019/03/03 02:16:50 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:16:53 [WARN] agent: Check 'foo/web1/service:web1' is now critical
2019/03/03 02:16:55 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:16:56 [WARN] could not ping node "foo": socket: permission denied
2019/03/03 02:17:00 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:17:03 [WARN] agent: Check 'foo/web1/service:web1' is now critical
2019/03/03 02:17:05 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:17:06 [WARN] could not ping node "foo": socket: permission denied
2019/03/03 02:17:10 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:17:13 [WARN] agent: Check 'foo/web1/service:web1' is now critical
2019/03/03 02:17:15 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:17:16 [WARN] could not ping node "foo": socket: permission denied
2019/03/03 02:17:20 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:17:23 [WARN] agent: Check 'foo/web1/service:web1' is now critical
2019/03/03 02:17:25 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:17:26 [WARN] could not ping node "foo": socket: permission denied
2019/03/03 02:17:30 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:17:33 [WARN] agent: Check 'foo/web1/service:web1' is now critical
2019/03/03 02:17:35 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:17:36 [WARN] could not ping node "foo": socket: permission denied
2019/03/03 02:17:40 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:17:43 [WARN] agent: Check 'foo/web1/service:web1' is now critical
2019/03/03 02:17:45 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:17:46 [WARN] could not ping node "foo": socket: permission denied
2019/03/03 02:17:50 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:17:53 [WARN] agent: Check 'foo/web1/service:web1' is now critical
2019/03/03 02:17:55 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:17:56 [WARN] could not ping node "foo": socket: permission denied
2019/03/03 02:18:00 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:18:03 [WARN] agent: Check 'foo/web1/service:web1' is now critical
2019/03/03 02:18:05 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:18:06 [WARN] could not ping node "foo": socket: permission denied
2019/03/03 02:18:10 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:18:13 [WARN] agent: Check 'foo/web1/service:web1' is now critical
2019/03/03 02:18:15 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:18:16 [WARN] could not ping node "foo": socket: permission denied
2019/03/03 02:18:20 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:18:23 [WARN] agent: Check 'foo/web1/service:web1' is now critical
2019/03/03 02:18:25 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:18:26 [WARN] could not ping node "foo": socket: permission denied
2019/03/03 02:18:30 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:18:33 [WARN] agent: Check 'foo/web1/service:web1' is now critical
2019/03/03 02:18:35 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:18:36 [WARN] could not ping node "foo": socket: permission denied
2019/03/03 02:18:40 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:18:43 [WARN] agent: Check 'foo/web1/service:web1' is now critical
2019/03/03 02:18:45 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:18:46 [WARN] could not ping node "foo": socket: permission denied
2019/03/03 02:18:50 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:18:53 [WARN] agent: Check 'foo/web1/service:web1' is now critical
2019/03/03 02:18:55 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:18:56 [WARN] could not ping node "foo": socket: permission denied
2019/03/03 02:19:00 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:19:03 [WARN] agent: Check 'foo/web1/service:web1' is now critical
2019/03/03 02:19:05 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:19:06 [WARN] could not ping node "foo": socket: permission denied
2019/03/03 02:19:10 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:19:13 [WARN] agent: Check 'foo/web1/service:web1' is now critical
2019/03/03 02:19:15 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:19:16 [WARN] could not ping node "foo": socket: permission denied
2019/03/03 02:19:20 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:19:23 [WARN] agent: Check 'foo/web1/service:web1' is now critical
2019/03/03 02:19:25 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:19:26 [WARN] could not ping node "foo": socket: permission denied
2019/03/03 02:19:30 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:19:33 [WARN] agent: Check 'foo/web1/service:web1' is now critical
2019/03/03 02:19:35 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:19:36 [WARN] could not ping node "foo": socket: permission denied
2019/03/03 02:19:40 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:19:43 [WARN] agent: Check 'foo/web1/service:web1' is now critical
2019/03/03 02:19:45 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:19:46 [WARN] could not ping node "foo": socket: permission denied
2019/03/03 02:19:50 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:19:53 [WARN] agent: Check 'foo/web1/service:web1' is now critical
2019/03/03 02:19:55 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:19:56 [WARN] could not ping node "foo": socket: permission denied
2019/03/03 02:20:00 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:20:03 [WARN] agent: Check 'foo/web1/service:web1' is now critical
2019/03/03 02:20:05 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:20:06 [WARN] could not ping node "foo": socket: permission denied
2019/03/03 02:20:10 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:20:13 [WARN] agent: Check 'foo/web1/service:web1' is now critical
2019/03/03 02:20:15 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:20:16 [WARN] could not ping node "foo": socket: permission denied
2019/03/03 02:20:20 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:20:23 [WARN] agent: Check 'foo/web1/service:web1' is now critical
2019/03/03 02:20:25 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:20:26 [WARN] could not ping node "foo": socket: permission denied
2019/03/03 02:20:30 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:20:33 [WARN] agent: Check 'foo/web1/service:web1' is now critical
2019/03/03 02:20:35 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:20:36 [WARN] could not ping node "foo": socket: permission denied
2019/03/03 02:20:40 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:20:43 [WARN] agent: Check 'foo/web1/service:web1' is now critical
2019/03/03 02:20:45 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:20:46 [WARN] could not ping node "foo": socket: permission denied
2019/03/03 02:20:50 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:20:53 [WARN] agent: Check 'foo/web1/service:web1' is now critical
2019/03/03 02:20:55 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:20:56 [WARN] could not ping node "foo": socket: permission denied
2019/03/03 02:21:00 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:21:03 [WARN] agent: Check 'foo/web1/service:web1' is now critical
2019/03/03 02:21:05 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:21:06 [WARN] could not ping node "foo": socket: permission denied
2019/03/03 02:21:10 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:21:13 [WARN] agent: Check 'foo/web1/service:web1' is now critical
2019/03/03 02:21:15 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:21:16 [WARN] could not ping node "foo": socket: permission denied
2019/03/03 02:21:20 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:21:23 [WARN] agent: Check 'foo/web1/service:web1' is now critical
2019/03/03 02:21:25 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:21:26 [WARN] could not ping node "foo": socket: permission denied
2019/03/03 02:21:30 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:21:33 [WARN] agent: Check 'foo/web1/service:web1' is now critical
2019/03/03 02:21:35 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:21:36 [WARN] could not ping node "foo": socket: permission denied
2019/03/03 02:21:40 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:21:43 [WARN] agent: Check 'foo/web1/service:web1' is now critical
2019/03/03 02:21:45 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:21:46 [WARN] could not ping node "foo": socket: permission denied
2019/03/03 02:21:50 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:21:53 [WARN] agent: Check 'foo/web1/service:web1' is now critical
2019/03/03 02:21:55 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:21:56 [WARN] could not ping node "foo": socket: permission denied
2019/03/03 02:22:00 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:22:03 [WARN] agent: Check 'foo/web1/service:web1' is now critical
2019/03/03 02:22:05 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:22:06 [WARN] could not ping node "foo": socket: permission denied
2019/03/03 02:22:10 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:22:13 [WARN] agent: Check 'foo/web1/service:web1' is now critical
2019/03/03 02:22:15 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:22:16 [WARN] could not ping node "foo": socket: permission denied
2019/03/03 02:22:20 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:22:23 [WARN] agent: Check 'foo/web1/service:web1' is now critical
2019/03/03 02:22:25 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:22:26 [WARN] could not ping node "foo": socket: permission denied
2019/03/03 02:22:30 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:22:33 [WARN] agent: Check 'foo/web1/service:web1' is now critical
2019/03/03 02:22:35 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:22:36 [WARN] could not ping node "foo": socket: permission denied
2019/03/03 02:22:40 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:22:43 [WARN] agent: Check 'foo/web1/service:web1' is now critical
2019/03/03 02:22:45 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:22:46 [WARN] could not ping node "foo": socket: permission denied
2019/03/03 02:22:50 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:22:53 [WARN] agent: Check 'foo/web1/service:web1' is now critical
2019/03/03 02:22:55 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:22:56 [WARN] could not ping node "foo": socket: permission denied
2019/03/03 02:23:00 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:23:03 [WARN] agent: Check 'foo/web1/service:web1' is now critical
2019/03/03 02:23:05 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:23:06 [WARN] could not ping node "foo": socket: permission denied
2019/03/03 02:23:10 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:23:13 [WARN] agent: Check 'foo/web1/service:web1' is now critical
2019/03/03 02:23:15 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
2019/03/03 02:23:16 [WARN] could not ping node "foo": socket: permission denied
2019/03/03 02:23:19 [WARN] Error getting external node list: Get http://localhost:18500/v1/catalog/nodes?index=15&node-meta=external-node%3Atrue: dial tcp 127.0.0.1:18500: connect: connection refused
2019/03/03 02:23:19 [WARN] Lost leadership
2019/03/03 02:23:19 [INFO] Trying to obtain leadership...
2019/03/03 02:23:19 [ERR] Unable to use leader lock that was held previously and presumed lost, giving up the lock (will retry): Lock already held
2019/03/03 02:23:19 [WARN] Error querying for health check info: Get http://localhost:18500/v1/health/service/consul-esm?index=20&passing=1: dial tcp 127.0.0.1:18500: connect: connection refused
2019/03/03 02:23:19 [WARN] Error querying for node watch list: Get http://localhost:18500/v1/kv/__esm__/agents/consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d?index=47: dial tcp 127.0.0.1:18500: connect: connection refused
2019/03/03 02:23:19 [WARN] Error querying for health check info: Get http://localhost:18500/v1/health/state/any?index=20&node-meta=external-node%3Atrue: dial tcp 127.0.0.1:18500: connect: connection refused
2019/03/03 02:23:19 [WARN] Error querying for health check info: Get http://localhost:18500/v1/health/state/any?node-meta=external-node%3Atrue: dial tcp 127.0.0.1:18500: connect: connection refused
2019/03/03 02:23:20 [DEBUG] agent: Check 'foo/web1/service:web2' is passing
^C2019/03/03 02:23:21 [INFO] Caught signal: interrupt
2019/03/03 02:23:21 [INFO] Shutting down...
^C2019/03/03 02:23:25 [INFO] Caught signal: interrupt
2019/03/03 02:23:25 [INFO] Shutting down...
^C2019/03/03 02:23:25 [INFO] Caught signal: interrupt
2019/03/03 02:23:25 [INFO] Shutting down...
^C2019/03/03 02:23:25 [INFO] Caught signal: interrupt
2019/03/03 02:23:25 [INFO] Shutting down...
^C2019/03/03 02:23:26 [INFO] Caught signal: interrupt
2019/03/03 02:23:26 [INFO] Shutting down...
^C2019/03/03 02:23:26 [INFO] Caught signal: interrupt
2019/03/03 02:23:26 [INFO] Shutting down...
^C2019/03/03 02:23:26 [INFO] Caught signal: interrupt
2019/03/03 02:23:26 [INFO] Shutting down...
2019/03/03 02:23:29 [WARN] Failed to deregister service: Put http://localhost:18500/v1/agent/service/deregister/consul-esm:5d120f8a-1ecb-2cce-7ba7-4beb006bc11d: dial tcp 127.0.0.1:18500: connect: connection refused
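
A few things stand out in this run. The TCP check 'foo/web1/service:web2' starts passing once the test server below is listening on port 8000, while the HTTP check 'foo/web1/service:web1' stays critical: its requests to http://localhost:8000/health are refused before the test server starts and return 404 afterwards (see the test-server log at the end of this section), and a 404 maps to critical since only 2xx responses count as passing. The repeated 'could not ping node "foo": socket: permission denied' warnings mean the external-probe ping could never be sent at all; this is typically a privilege issue on the host running ESM (for example, it may need to run as root, be granted CAP_NET_RAW, or have unprivileged ping sockets enabled on Linux) rather than a problem with the node itself. The final burst of warnings simply reflects the Consul agent being stopped at 02:23:18, which costs ESM its connection and its leadership. The catalog's view of the node can be checked at any point with a plain query against the same local agent, for example:

$ curl "http://localhost:18500/v1/health/node/foo?pretty"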

Register External Service

node.json:
{
  "Datacenter": "dc1",
  "ID": "40e4a748-2192-161a-0510-9bf59fe950b5",
  "Node": "foo",
  "Address": "192.168.0.1",
  "TaggedAddresses": {
    "lan": "192.168.0.1",
    "wan": "192.168.0.1"
  },
  "NodeMeta": {
    "external-node": "true",
    "external-probe": "true"
  },
  "Service": {
    "ID": "web1",
    "Service": "web",
    "Tags": [
      "v1"
    ],
    "Address": "127.0.0.1",
    "Port": 8000
  },
  "Checks": [{
    "Node": "foo",
    "CheckID": "service:web1",
    "Name": "Web HTTP check",
    "Notes": "",
    "Status": "passing",
    "ServiceID": "web1",
    "Definition": {
      "HTTP": "http://localhost:8000/health",
      "Interval": "10s",
      "Timeout": "5s"
    }
  },{
    "Node": "foo",
    "CheckID": "service:web2",
    "Name": "Web TCP check",
    "Notes": "",
    "Status": "passing",
    "ServiceID": "web1",
    "Definition": {
      "TCP": "localhost:8000",
      "Interval": "5s",
      "Timeout": "1s",
      "DeregisterCriticalServiceAfter": "30s"
     }
  }]
}

Registration:

$ curl -v --request PUT --data @node.json localhost:18500/v1/catalog/register
* About to connect() to localhost port 18500 (#0)
*   Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 18500 (#0)
> PUT /v1/catalog/register HTTP/1.1
> User-Agent: curl/7.29.0
> Host: localhost:18500
> Accept: */*
> Content-Length: 955
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 955 out of 955 bytes
< HTTP/1.1 200 OK
< Content-Type: application/json
< Vary: Accept-Encoding
< Date: Sun, 03 Mar 2019 02:14:33 GMT
< Content-Length: 5
<
true
* Connection #0 to host localhost left intact

Test Server

Health-Check Target
$ python -m SimpleHTTPServer 8000
Serving HTTP on 0.0.0.0 port 8000 ...
127.0.0.1 - - [02/Mar/2019 21:15:23] code 404, message File not found
127.0.0.1 - - [02/Mar/2019 21:15:23] "GET /health HTTP/1.1" 404 -
127.0.0.1 - - [02/Mar/2019 21:15:33] code 404, message File not found
127.0.0.1 - - [02/Mar/2019 21:15:33] "GET /health HTTP/1.1" 404 -
127.0.0.1 - - [02/Mar/2019 21:15:43] code 404, message File not found
127.0.0.1 - - [02/Mar/2019 21:15:43] "GET /health HTTP/1.1" 404 -
127.0.0.1 - - [02/Mar/2019 21:15:53] code 404, message File not found
127.0.0.1 - - [02/Mar/2019 21:15:53] "GET /health HTTP/1.1" 404 -
127.0.0.1 - - [02/Mar/2019 21:16:03] code 404, message File not found
127.0.0.1 - - [02/Mar/2019 21:16:03] "GET /health HTTP/1.1" 404 -
127.0.0.1 - - [02/Mar/2019 21:16:13] code 404, message File not found
127.0.0.1 - - [02/Mar/2019 21:16:13] "GET /health HTTP/1.1" 404 -
127.0.0.1 - - [02/Mar/2019 21:16:23] code 404, message File not found
127.0.0.1 - - [02/Mar/2019 21:16:23] "GET /health HTTP/1.1" 404 -
127.0.0.1 - - [02/Mar/2019 21:16:33] code 404, message File not found
127.0.0.1 - - [02/Mar/2019 21:16:33] "GET /health HTTP/1.1" 404 -
127.0.0.1 - - [02/Mar/2019 21:16:43] code 404, message File not found
127.0.0.1 - - [02/Mar/2019 21:16:43] "GET /health HTTP/1.1" 404 -
127.0.0.1 - - [02/Mar/2019 21:16:53] code 404, message File not found
127.0.0.1 - - [02/Mar/2019 21:16:53] "GET /health HTTP/1.1" 404 -
127.0.0.1 - - [02/Mar/2019 21:17:03] code 404, message File not found
127.0.0.1 - - [02/Mar/2019 21:17:03] "GET /health HTTP/1.1" 404 -
127.0.0.1 - - [02/Mar/2019 21:17:13] code 404, message File not found
127.0.0.1 - - [02/Mar/2019 21:17:13] "GET /health HTTP/1.1" 404 -
127.0.0.1 - - [02/Mar/2019 21:17:23] code 404, message File not found
127.0.0.1 - - [02/Mar/2019 21:17:23] "GET /health HTTP/1.1" 404 -
127.0.0.1 - - [02/Mar/2019 21:17:33] code 404, message File not found
127.0.0.1 - - [02/Mar/2019 21:17:33] "GET /health HTTP/1.1" 404 -
127.0.0.1 - - [02/Mar/2019 21:17:43] code 404, message File not found
127.0.0.1 - - [02/Mar/2019 21:17:43] "GET /health HTTP/1.1" 404 -
127.0.0.1 - - [02/Mar/2019 21:17:53] code 404, message File not found
127.0.0.1 - - [02/Mar/2019 21:17:53] "GET /health HTTP/1.1" 404 -
127.0.0.1 - - [02/Mar/2019 21:18:03] code 404, message File not found
127.0.0.1 - - [02/Mar/2019 21:18:03] "GET /health HTTP/1.1" 404 -
127.0.0.1 - - [02/Mar/2019 21:18:13] code 404, message File not found
127.0.0.1 - - [02/Mar/2019 21:18:13] "GET /health HTTP/1.1" 404 -
127.0.0.1 - - [02/Mar/2019 21:18:23] code 404, message File not found
127.0.0.1 - - [02/Mar/2019 21:18:23] "GET /health HTTP/1.1" 404 -
127.0.0.1 - - [02/Mar/2019 21:18:33] code 404, message File not found
127.0.0.1 - - [02/Mar/2019 21:18:33] "GET /health HTTP/1.1" 404 -
127.0.0.1 - - [02/Mar/2019 21:18:43] code 404, message File not found
127.0.0.1 - - [02/Mar/2019 21:18:43] "GET /health HTTP/1.1" 404 -
127.0.0.1 - - [02/Mar/2019 21:18:53] code 404, message File not found
127.0.0.1 - - [02/Mar/2019 21:18:53] "GET /health HTTP/1.1" 404 -
127.0.0.1 - - [02/Mar/2019 21:19:03] code 404, message File not found
127.0.0.1 - - [02/Mar/2019 21:19:03] "GET /health HTTP/1.1" 404 -
127.0.0.1 - - [02/Mar/2019 21:19:13] code 404, message File not found
127.0.0.1 - - [02/Mar/2019 21:19:13] "GET /health HTTP/1.1" 404 -
127.0.0.1 - - [02/Mar/2019 21:19:23] code 404, message File not found
127.0.0.1 - - [02/Mar/2019 21:19:23] "GET /health HTTP/1.1" 404 -
127.0.0.1 - - [02/Mar/2019 21:19:33] code 404, message File not found
127.0.0.1 - - [02/Mar/2019 21:19:33] "GET /health HTTP/1.1" 404 -
127.0.0.1 - - [02/Mar/2019 21:19:43] code 404, message File not found
127.0.0.1 - - [02/Mar/2019 21:19:43] "GET /health HTTP/1.1" 404 -
127.0.0.1 - - [02/Mar/2019 21:19:53] code 404, message File not found
127.0.0.1 - - [02/Mar/2019 21:19:53] "GET /health HTTP/1.1" 404 -
127.0.0.1 - - [02/Mar/2019 21:20:03] code 404, message File not found
127.0.0.1 - - [02/Mar/2019 21:20:03] "GET /health HTTP/1.1" 404 -
127.0.0.1 - - [02/Mar/2019 21:20:13] code 404, message File not found
127.0.0.1 - - [02/Mar/2019 21:20:13] "GET /health HTTP/1.1" 404 -
127.0.0.1 - - [02/Mar/2019 21:20:23] code 404, message File not found
127.0.0.1 - - [02/Mar/2019 21:20:23] "GET /health HTTP/1.1" 404 -
127.0.0.1 - - [02/Mar/2019 21:20:33] code 404, message File not found
127.0.0.1 - - [02/Mar/2019 21:20:33] "GET /health HTTP/1.1" 404 -
127.0.0.1 - - [02/Mar/2019 21:20:43] code 404, message File not found
127.0.0.1 - - [02/Mar/2019 21:20:43] "GET /health HTTP/1.1" 404 -
127.0.0.1 - - [02/Mar/2019 21:20:53] code 404, message File not found
127.0.0.1 - - [02/Mar/2019 21:20:53] "GET /health HTTP/1.1" 404 -
127.0.0.1 - - [02/Mar/2019 21:21:03] code 404, message File not found
127.0.0.1 - - [02/Mar/2019 21:21:03] "GET /health HTTP/1.1" 404 -
127.0.0.1 - - [02/Mar/2019 21:21:13] code 404, message File not found
127.0.0.1 - - [02/Mar/2019 21:21:13] "GET /health HTTP/1.1" 404 -
127.0.0.1 - - [02/Mar/2019 21:21:23] code 404, message File not found
127.0.0.1 - - [02/Mar/2019 21:21:23] "GET /health HTTP/1.1" 404 -
127.0.0.1 - - [02/Mar/2019 21:21:33] code 404, message File not found
127.0.0.1 - - [02/Mar/2019 21:21:33] "GET /health HTTP/1.1" 404 -
127.0.0.1 - - [02/Mar/2019 21:21:43] code 404, message File not found
127.0.0.1 - - [02/Mar/2019 21:21:43] "GET /health HTTP/1.1" 404 -
127.0.0.1 - - [02/Mar/2019 21:21:53] code 404, message File not found
127.0.0.1 - - [02/Mar/2019 21:21:53] "GET /health HTTP/1.1" 404 -
127.0.0.1 - - [02/Mar/2019 21:22:03] code 404, message File not found
127.0.0.1 - - [02/Mar/2019 21:22:03] "GET /health HTTP/1.1" 404 -
127.0.0.1 - - [02/Mar/2019 21:22:13] code 404, message File not found
127.0.0.1 - - [02/Mar/2019 21:22:13] "GET /health HTTP/1.1" 404 -
127.0.0.1 - - [02/Mar/2019 21:22:23] code 404, message File not found
127.0.0.1 - - [02/Mar/2019 21:22:23] "GET /health HTTP/1.1" 404 -
127.0.0.1 - - [02/Mar/2019 21:22:33] code 404, message File not found
127.0.0.1 - - [02/Mar/2019 21:22:33] "GET /health HTTP/1.1" 404 -
127.0.0.1 - - [02/Mar/2019 21:22:43] code 404, message File not found
127.0.0.1 - - [02/Mar/2019 21:22:43] "GET /health HTTP/1.1" 404 -
127.0.0.1 - - [02/Mar/2019 21:22:53] code 404, message File not found
127.0.0.1 - - [02/Mar/2019 21:22:53] "GET /health HTTP/1.1" 404 -
127.0.0.1 - - [02/Mar/2019 21:23:03] code 404, message File not found
127.0.0.1 - - [02/Mar/2019 21:23:03] "GET /health HTTP/1.1" 404 -
127.0.0.1 - - [02/Mar/2019 21:23:13] code 404, message File not found
127.0.0.1 - - [02/Mar/2019 21:23:13] "GET /health HTTP/1.1" 404 -
^C

Result

Even though we can see that the ESM observes the checks failing, their status never gets updated in Consul.
The issue was observed on a real cluster and later reproduced with the configs shown above.

consul-esm could not ping node permission denied

Hello,

we are adding our consul-esm health checks dynamically with a Go program. The node names have an invalid format and consul-esm is not able to ping them; the health checks themselves are working fine.
The node names have a format like this: worker1.xxx.xxx.xxx_dev-app1.test.com or worker11.xxx.xxx.xxx:dev-app11.test.com

error in log:
[WARN] could not ping node "worker1.xxx.xxx.xxx_dev-app1.test.com": socket: permission denied

Here is the related code
consul-esm/coordinate.go

// pingNode runs an ICMP ping against an address and returns the round-trip time.
func pingNode(addr string, method string) (time.Duration, error) {

Is it possible to disable the ping on nodes?

Thanks,
Nitharsan

ESM Stops getting Health after PUT

In testing we noticed that if we do a PUT to the catalog register endpoint in Consul to update a tag and include the "Checks", ESM will stop getting the check response and the health check will go critical. If we remove the "Checks" from the PUT, we can edit the tags without any issues.

Run consul-esm without ping

I'm testing consul-esm as a means to register and monitor external services whose "Node" is not pingable (ICMP/UDP). How do I skip the "ping" checks and tell consul-esm to only run the HTTP/TCP health checks defined? Or am I missing something?
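If the node is registered with "external-node": "true" but without the "external-probe" node-meta flag, the ESM should only run the catalog-defined HTTP/TCP checks and skip node pings. Below is a minimal, hypothetical Go sketch of such a registration; the node name, address and check target are made up:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Hypothetical external node: "external-node" is set so the ESM picks it up,
	// but "external-probe" is omitted, so no ICMP/UDP node ping is expected.
	reg := map[string]interface{}{
		"Datacenter": "dc1",
		"Node":       "db1",
		"Address":    "10.0.0.5",
		"NodeMeta":   map[string]string{"external-node": "true"},
		"Service": map[string]interface{}{
			"ID":      "db1",
			"Service": "db",
			"Port":    5432,
		},
		"Checks": []map[string]interface{}{{
			"Node":      "db1",
			"CheckID":   "service:db1",
			"Name":      "DB TCP check",
			"Status":    "critical",
			"ServiceID": "db1",
			"Definition": map[string]interface{}{
				"TCP":      "10.0.0.5:5432",
				"Interval": "10s",
				"Timeout":  "2s",
			},
		}},
	}

	body, err := json.Marshal(reg)
	if err != nil {
		panic(err)
	}
	req, err := http.NewRequest(http.MethodPut, "http://localhost:8500/v1/catalog/register", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("register status:", resp.Status)
}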

esm-leadership

The ESM currently distributes the work of computing coordinates / RTT throughout its ESM cluster. However, health checks (TCP/HTTP checks) are only run on the leader node, which puts quite a lot of load on the leader and prevents much horizontal scaling.

Rather than having one ESM handle all TCP/HTTP health checks, can a feature be added so that a cluster of ESMs can distribute the health checks amongst all its nodes?

Node object out of scope when catalog is updated?

I noticed that the ESM can appear to just stop monitoring nodes sometimes. I think this is caused by a pass-by-pointer issue in runNodePing.

func (a *Agent) runNodePing(node *api.Node) {

Is the node object passed to the runNodePing function getting updated in other parts of the code base, e.g. shuffleNodes? I think when this happens it causes the inflightPings map to be set/unset with the wrong values.
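A minimal sketch of the defensive copy that would avoid this kind of issue: copy the node value before handing it to the ping goroutine, so later mutations of the shared slice entry (e.g. by a rebalance/shuffle) do not change what the goroutine sees. The types and loop below are illustrative stand-ins, not the project's actual code.

package main

import (
	"fmt"
	"sync"
	"time"
)

// Node is a stand-in for api.Node, just for illustration.
type Node struct {
	Name    string
	Address string
}

func runNodePing(node Node) {
	time.Sleep(10 * time.Millisecond) // pretend to ping
	fmt.Printf("pinged %q at %s\n", node.Name, node.Address)
}

func main() {
	nodes := []*Node{{Name: "router", Address: "10.0.0.1"}, {Name: "cloudkey", Address: "10.0.0.2"}}

	var wg sync.WaitGroup
	for _, n := range nodes {
		// Copy the value the pointer currently refers to. If the slice entry is
		// later rewritten (for example by a shuffle), this goroutine still pings
		// the node it was originally given, and map keys like inflightPings stay stable.
		node := *n
		wg.Add(1)
		go func() {
			defer wg.Done()
			runNodePing(node)
		}()
	}

	// Simulate another part of the code mutating a shared slice entry.
	*nodes[0] = Node{Name: "something-else", Address: "10.0.0.9"}

	wg.Wait()
}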

Example ESM output when this happens

Consul ESM running!
            Datacenter: "dc1"
               Service: "consul-esm"
           Service Tag: ""
            Service ID: "consul-esm:051e73b6-024d-b154-8bc8-95f588aa4838"
Node Reconnect Timeout: "72h0m0s"

Log data will now stream in as it occurs:

    2018/11/05 12:35:12 [INFO] agent: Synced service "consul-esm:051e73b6-024d-b154-8bc8-95f588aa4838"
2018/11/05 12:35:12 [INFO] Trying to obtain leadership...
    2018/11/05 12:35:12 [INFO] agent: Synced check "consul-esm:051e73b6-024d-b154-8bc8-95f588aa4838:agent-ttl"
2018/11/05 12:35:56 [INFO] Obtained leadership
2018/11/05 12:35:57 [INFO] Rebalanced 0 external nodes across 1 ESM instances
    2018/11/05 12:37:10 [INFO] agent: Synced service "consul-esm:051e73b6-024d-b154-8bc8-95f588aa4838"
    2018/11/05 12:38:38 [INFO] agent: Synced service "consul-esm:051e73b6-024d-b154-8bc8-95f588aa4838"
    2018/11/05 12:39:42 [INFO] agent: Synced service "consul-esm:051e73b6-024d-b154-8bc8-95f588aa4838"
2018/11/05 12:40:18 [INFO] Rebalanced 1 external nodes across 1 ESM instances
2018/11/05 12:40:18 [INFO] Now managing 1 health checks across 1 nodes
2018/11/05 12:40:27 [INFO] Now running probes for 1 external nodes
2018/11/05 12:40:28 [INFO] Rebalanced 2 external nodes across 1 ESM instances
2018/11/05 12:40:28 [INFO] Now managing 2 health checks across 2 nodes
2018/11/05 12:40:37 [INFO] Now running probes for 2 external nodes
    2018/11/05 12:41:33 [INFO] agent: Synced service "consul-esm:051e73b6-024d-b154-8bc8-95f588aa4838"
2018/11/05 12:42:55 [INFO] Updating HTTP check "cloudkey/cloudkey/service:cloudkey"
    2018/11/05 12:43:14 [INFO] agent: Synced service "consul-esm:051e73b6-024d-b154-8bc8-95f588aa4838"
    2018/11/05 12:45:04 [INFO] agent: Synced service "consul-esm:051e73b6-024d-b154-8bc8-95f588aa4838"
2018/11/05 12:45:52 [WARN] Error pinging node "router" (ID: ): last request still outstanding
2018/11/05 12:45:54 [INFO] Updating HTTP check "cloudkey/cloudkey/service:cloudkey"
2018/11/05 12:46:22 [WARN] Error pinging node "cloudkey" (ID: ): last request still outstanding
    2018/11/05 12:46:27 [INFO] agent: Synced service "consul-esm:051e73b6-024d-b154-8bc8-95f588aa4838"
    2018/11/05 12:47:31 [INFO] agent: Synced service "consul-esm:051e73b6-024d-b154-8bc8-95f588aa4838"
    2018/11/05 12:49:03 [INFO] agent: Synced service "consul-esm:051e73b6-024d-b154-8bc8-95f588aa4838"
    2018/11/05 12:50:43 [INFO] agent: Synced service "consul-esm:051e73b6-024d-b154-8bc8-95f588aa4838"
2018/11/05 12:51:27 [WARN] Error pinging node "cloudkey" (ID: ): last request still outstanding
    2018/11/05 12:52:30 [INFO] agent: Synced service "consul-esm:051e73b6-024d-b154-8bc8-95f588aa4838"
    2018/11/05 12:53:43 [INFO] agent: Synced service "consul-esm:051e73b6-024d-b154-8bc8-95f588aa4838"
    2018/11/05 12:55:25 [INFO] agent: Synced service "consul-esm:051e73b6-024d-b154-8bc8-95f588aa4838"
2018/11/05 12:56:22 [WARN] Error pinging node "cloudkey" (ID: ): last request still outstanding
2018/11/05 12:56:42 [WARN] Error pinging node "router" (ID: ): last request still outstanding
    2018/11/05 12:56:54 [INFO] agent: Synced service "consul-esm:051e73b6-024d-b154-8bc8-95f588aa4838"
    2018/11/05 12:58:43 [INFO] agent: Synced service "consul-esm:051e73b6-024d-b154-8bc8-95f588aa4838"
    2018/11/05 13:00:22 [INFO] agent: Synced service "consul-esm:051e73b6-024d-b154-8bc8-95f588aa4838"
2018/11/05 13:01:47 [WARN] Error pinging node "router" (ID: ): last request still outstanding
    2018/11/05 13:01:48 [INFO] agent: Synced service "consul-esm:051e73b6-024d-b154-8bc8-95f588aa4838"
    2018/11/05 13:02:52 [INFO] agent: Synced service "consul-esm:051e73b6-024d-b154-8bc8-95f588aa4838"
    2018/11/05 13:04:34 [INFO] agent: Synced service "consul-esm:051e73b6-024d-b154-8bc8-95f588aa4838"
    2018/11/05 13:06:32 [INFO] agent: Synced service "consul-esm:051e73b6-024d-b154-8bc8-95f588aa4838"
    2018/11/05 13:08:30 [INFO] agent: Synced service "consul-esm:051e73b6-024d-b154-8bc8-95f588aa4838"
    2018/11/05 13:10:06 [INFO] agent: Synced service "consul-esm:051e73b6-024d-b154-8bc8-95f588aa4838"
    2018/11/05 13:11:11 [INFO] agent: Synced service "consul-esm:051e73b6-024d-b154-8bc8-95f588aa4838"

UDP mode ping on Linux always has RTT of 0

The fastping upstream package we use has a bug such that it always returns 0 RTTs when using UDP on Linux. This is trivial to reproduce by grabbing the fastping package and using its example ping program.

This bug was reported more than a year back but hasn't been fixed, as the fastping package hasn't been updated in 4 years... so we need to replace it. Luckily others seem to have run into this as well, and there is a new, maintained library very similar to fastping that works.

https://github.com/sparrc/go-ping

Update consul-esm to use it instead of fastping.

I'm already working on this... just wanted a bug to help in communications.

Build instructions are broken

Tried with:

$ go version
go version go1.12.5 darwin/amd64

And here is what I get:

$ export GOPATH=$(mktemp -d)

$ export REPO=github.com/hashicorp/consul-esm

$ mkdir -p $GOPATH/src/$REPO

$ git clone git@github.com:hashicorp/consul-esm.git $GOPATH/src/$REPO
Cloning into '/var/folders/0j/xssdyhw14935pw9qk43k8jhc0000gn/T/tmp.U3uI4A2l/src/github.com/hashicorp/consul-esm'...
remote: Enumerating objects: 1466, done.
remote: Total 1466 (delta 0), reused 0 (delta 0), pack-reused 1466
Receiving objects: 100% (1466/1466), 1.66 MiB | 8.97 MiB/s, done.
Resolving deltas: 100% (520/520), done.

$ cd $GOPATH/src/$REPO

$ make darwin/amd64
-->       darwin/amd64: /private/var/folders/0j/xssdyhw14935pw9qk43k8jhc0000gn/T/tmp.U3uI4A2l/src/github.com/hashicorp/consul-esm
main.go:11:2: cannot find package "github.com/hashicorp/consul-esm/version" in any of:
	/go/src/private/var/folders/0j/xssdyhw14935pw9qk43k8jhc0000gn/T/tmp.U3uI4A2l/src/github.com/hashicorp/consul-esm/vendor/github.com/hashicorp/consul-esm/version (vendor tree)
	/usr/local/go/src/github.com/hashicorp/consul-esm/version (from $GOROOT)
	/go/src/github.com/hashicorp/consul-esm/version (from $GOPATH)
make: *** [darwin/amd64] Error 1

Am I doing something wrong?

ACLs in README are insufficient for Consul v0.4.0

When using the ACLs mentioned in the README, consul-esm errors out:

consul-esm -config-file=/config/consul_esm.hcl
2020/08/06 16:33:45 [INFO] Connecting to Consul on 10.8.1.2:8500...
unable to check version compatibility with Consul servers: Unexpected response code: 403 (Permission denied)

If you add operator = "read" to the policy, consul-esm is able to start as expected.

Support Rotating ACL Tokens

When a consul-esm instance's token is revoked, maybe from rotating acl tokens, there are some unexpected outcomes for consul-esm:

  • the instance's status remains passing/healthy and is never marked critical. This can be seen at /v1/health/node/:node
  • the instance's assigned external health checks are not successfully executed. As a result of staying "passing"/"healthy", these checks are also not reassigned to other, actually healthy instances with appropriate tokens
  • the instance is not able to successfully deregister

The revoked token is needed to update the health check and deregister. This is expected as a result of anti-entropy.

The larger issue around supporting rotating acl tokens is already captured in hashicorp/consul#4372. The recommendation is to reregister the application (consul-esm in this case) with the new token.

Currently, consul-esm doesn't have a way to reregister itself. On stopping and restarting consul-esm, the stopped instance will fail to deregister while the newly started instance will obtain a new id. This leads to having 'dead', floating consul-esm instances in the cluster. A serious consequence is that these dead consul-esm instances retain responsibility for their external health checks since they remain marked as healthy/passing in the catalog.

This issue arises from comment: #39 (comment)

Steps to reproduce

  1. Start consul (I used v1.6.2) with ACLs enabled
  2. Register two external health checks
  3. Start consul-esm (I used v0.3.3) with relevant token needed to operate and log_level=DEBUG
  4. Start another consul-esm with a different token needed to operate and log_level=DEBUG
  5. Observe that each consul-esm is executing one of the external health checks
  6. Delete token for one of the consul-esms
  7. Observe in consul-logs that revoked-token consul-esm has failed its TTL check
  8. Query /v1/health/node/<revoked-token-consul-esm-id> and see that the status is still passing
  9. Stop revoked-token consul-esm instance (Control+C)
  10. Observe in consul-logs that consul-esm was not able to successfully deregister
  11. Observe in remaining healthy consul-esm instance that it is executing only one external health check - the one it was originally assigned - and it did not inherit the other external health check

Narrow the scope of ESM ACLs in README

An ACL like service_prefix "" seems overly permissive, and there may be ways to customize these ACLs so that they are tighter. For example, practitioners might be able to create separate ACL blocks for the specific external services that consul-esm is monitoring.

Deadlock in go-ping

It looks like go-fastping was replaced with go-ping last year here.

However go-ping seems to have deadlock issues at scale.

go-ping/ping#85
go-ping/ping#77

We have been observing this since we upgraded to the latest version of the code and picked up the go-ping change. When we scale up, we see a memory leak that also leaks goroutines, raw sockets and file handles.

When we apply something like this patch into a test environment we are able to see that the issue goes away.

Charts below show the difference when we deployed the change at about 3pm on the 28th. (You can also see a previous deploy at 7am which did not have the fix and shows the resource leak happening quickly after a restart.)

(Charts omitted: memory and swap, goroutines, and file handles over 48 hours.)

The go-ping library does have a suggested fix, go-ping/ping#85, however that fix will still deadlock at scale. I think that to fix it properly the wg.Wait() calls in this block need to be replaced with something asynchronous (a channel) so that the loop never blocks the recv channel.
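A minimal sketch of that pattern (not go-ping's actual code): wait for the workers on a separate goroutine and signal completion over a channel, so the loop that drains recv never blocks on wg.Wait().

package main

import (
	"fmt"
	"sync"
)

func main() {
	recv := make(chan int)
	done := make(chan struct{})

	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			recv <- id // would block forever if nothing were draining recv
		}(i)
	}

	// Wait asynchronously: close done once all workers finish, instead of
	// calling wg.Wait() inside the loop that is supposed to drain recv.
	go func() {
		wg.Wait()
		close(done)
	}()

	for {
		select {
		case v := <-recv:
			fmt.Println("received", v)
		case <-done:
			return
		}
	}
}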

That partial improvement PR, go-ping/ping#85, has also been awaiting merge since March.

Question for HashiCorp is how to proceed? Do we want to

  1. Revert to go-fastping (probably not; it will likely still have the issues that caused us to pick up go-ping).
  2. Submit a patch to go-ping (although that seems like it could take a while).
  3. Do something else? Fork the lib again? Embed the pinger directly here?

Consul ESM anti-flapping protection proposal

ESM anti flapping protection

Problem

The ESM monitors services periodically (polling). In case of intermittent infrastructure outages, services might repeatedly come online and offline, which has several upstream implications that are worth allowing service providers and Consul admins to manage.

  • Excessive noise - Probably the most visible problem is simply a lot of noise: alarm spam and log churn
  • Client reconnection churn - The initial connection to a service is typically quite expensive; in many cases (HTTPS services) it can be significantly more expensive than the query cost. A flapping service can multiply that cost.
  • Infrastructure churn - Discovery, load balancing or a service mesh control plane must continuously carry the cost of registering and deregistering flapping endpoints.

Some typical causes of a flapping service:

  • Switches with cabling issues - e.g. loose or broken cabling
  • A switch experiencing frame broadcast storms - e.g. abusive software at the network level
  • Routing is temporarily broken - e.g. problems with BGP or stability when trying to update routes during high load
  • Data center bring-up causing intermittent failures

Solution

Establish thresholds (a kind of low-pass filter) that will prevent confusing status updates.
A few consecutive status confirmations would be needed to flip a service's health status.

A simple Schmitt trigger should provide enough protection against spurious failures and recoveries.

https://en.wikipedia.org/wiki/Schmitt_trigger

The number of failed and passing checks would set the thresholds. Two thresholds should be set via the health-check definition: one for failing, one for passing.

  • A service would be marked as failed only when a number of checks failed, crossing the failure threshold.
  • A service would be marked as healthy only when a number of subsequent checks pass, crossing a passing threshold.

The exact trigger strategy should be hidden behind an interface, allowing diverse strategies tuned to the needs of various services.
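A minimal sketch of what such a trigger behind an interface could look like; the threshold fields and status strings below are assumptions for illustration, not the proposal's final design.

package main

import "fmt"

// StatusFilter decides the externally visible status from raw check results.
// Hiding it behind an interface allows strategies other than a Schmitt trigger.
type StatusFilter interface {
	Observe(passing bool) (status string)
}

// schmittTrigger only flips state after N consecutive observations in the
// opposite direction, absorbing short flaps.
type schmittTrigger struct {
	failThreshold int    // consecutive failures needed to go critical
	passThreshold int    // consecutive passes needed to go passing
	status        string // current externally visible status
	streak        int    // length of the current run of opposite observations
}

func newSchmittTrigger(failThreshold, passThreshold int, initial string) *schmittTrigger {
	return &schmittTrigger{failThreshold: failThreshold, passThreshold: passThreshold, status: initial}
}

func (s *schmittTrigger) Observe(passing bool) string {
	switch {
	case passing && s.status == "passing", !passing && s.status == "critical":
		s.streak = 0 // observation agrees with the current status
	case passing:
		s.streak++
		if s.streak >= s.passThreshold {
			s.status, s.streak = "passing", 0
		}
	default:
		s.streak++
		if s.streak >= s.failThreshold {
			s.status, s.streak = "critical", 0
		}
	}
	return s.status
}

func main() {
	var f StatusFilter = newSchmittTrigger(3, 2, "passing")
	// A single failure (or a single recovery) does not flip the status.
	for _, ok := range []bool{false, true, false, false, false, true, true} {
		fmt.Println(ok, "->", f.Observe(ok))
	}
}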

Implementation 1 - Consul + ESM

Consul ESM modification

Check updates

The Consul ESM agent maintains checks via the Agent.watchHealthChecks() => CheckRunner.UpdateChecks(checks api.HealthChecks) method. It adds, removes and updates the currently running checks according to the check definitions received from Consul.

Check modification

  • CheckTCP
  • CheckHTTP
  • CheckDocker
  • CheckGRPC

We propose that checks be extended to make them stateful, so that checks can track recent service changes using the proposed Schmitt trigger (or any other health evaluation mechanism).

We propose that CheckNotifier remains unchanged, propagating status as before, contrary to the other solution proposed here: https://github.com/hashicorp/consul/pull/5739/files

Vendored consul library update

Consul ESM vendors in the Consul library. This vendored library is somewhat out of sync with the main Consul agent code.

This library should be either upgraded or forked at some point. Implementing this feature would effectively fork an outdated codebase.

Key questions:

  1. can we keep Consul and ESM versions in sync and distribute/deploy them in pairs?
  2. is it possible to move the check definition modifications to the consul library and maintain them there?

Consul Modification

In order for the health check definition schema to carry additional properties, the health check definition served by Consul must be extended to propagate the additional settings.

Health check definitions to modify:

  • consul/api/health.go
  • consul-esm/vendor/hashicorp/api/health.go (lib)

Agent.Checks() => map of AgentCheck => HealthCheckDefinition

Impact

  • consul-esm - need to update checks to carry thresholds and current state counters
  • consul - need to update health check definition schema to carry additional properties
  • health check registration API must expose method to add those extra properties

By default, thresholds should be disabled, preserving old behavior in existing tests.

Implementation 2 - only ESM, no Consul change

ESM modification

The ESM covers relatively new ground in that it expects users to register health checks for services not running locally. Because of this, unlike localhost / Consul agent based health checks, it is more likely to experience network performance issues and temporary failures. A purely ESM-side solution would address this need.

To avoid modifying the main Consul server, the functionality can first be implemented in the Consul ESM daemon only.

This is essentially identical to ESM Implementation 1, with two exceptions:

  1. instead of configuring the threshold for individual checks, we draw global thresholds from a local daemon config file
  2. no health check schema definition, as the config is not retrieved from Consul

Pros:

  • faster rollout
  • changes decoupled from main consul codebase (no impact there)
  • lower risk

Cons:

  • ESM configuration is less flexible
  • Check configuration must be deployed alongside ESM daemons
  • Configuration must be kept consistent across the whole network

Consul modification

None in this variant.

Impact

Only ESM daemon is impacted:

  1. adding check thresholds
  2. extending ESM configuration with threshold settings

A/C

  1. users can register ESM health checks with separate failure/recovery thresholds
  2. existing use cases not changed - thresholds are optional and older checks are not altered
  3. thresholds can be set for individual health checks

Consul ESM has dissociative identity disorder?

We run Consul ESM in 3-node "clusters" in several Consul clusters.

TL;DR this is what we see:

Sample 1

(screenshot omitted)

Here is how the KV looks like in YAML-like representation:

KV:
  __esm__:
      leader: "" # part of session '3034f8c2-ddd3-a733-c3a4-a07b173b0ab7' held by host ''***31'
      agents:
        consul-esm:5460c1fa-4368-62c3-311a-0934fa005cdc: {"Nodes":null,"Probes":["esm-test-host-192-10","esm-test-host-192-15","esm-test-host-192-20"]} # entry name corresponds to ESM running on ***33, part of session '3034f8c2-ddd3-a733-c3a4-a07b173b0ab7' owned by host ''***31'
        consul-esm:6b66d99a-f83b-d678-b710-42b5242347af: {"Nodes":null,"Probes":["esm-test-host-192-11","esm-test-host-192-16","esm-test-host-192-21"]} # entry name corresponds to ESM running on ***33, part of session '3034f8c2-ddd3-a733-c3a4-a07b173b0ab7' owned by host ''***31'
        consul-esm:ab78b0e1-5bb7-6114-d758-5a59dc814361: {"Nodes":null,"Probes":["esm-test-host-192-12","esm-test-host-192-17","esm-test-host-192-22"]} # entry name corresponds to ESM running on ***31, part of session '3034f8c2-ddd3-a733-c3a4-a07b173b0ab7' owned by host ''***31'
        consul-esm:ec9c23a5-b092-295b-76d6-dc7f0508da69: {"Nodes":null,"Probes":["esm-test-host-192-13","esm-test-host-192-18","esm-test-host-192-23"]} # entry name corresponds to ESM running on ***32, part of session '3034f8c2-ddd3-a733-c3a4-a07b173b0ab7' owned by host ''***31'
        consul-esm:f2001c4c-07ce-7a91-bcf3-67c168c9236d: {"Nodes":null,"Probes":["esm-test-host-192-14","esm-test-host-192-19"]}                        # entry name corresponds to ESM running on ***32, part of session '3034f8c2-ddd3-a733-c3a4-a07b173b0ab7' owned by host ''***31'
      probes:
        esm-test-host-192-10: AQAAAA7UZ58zIPqZFf//
        esm-test-host-192-12: AQAAAA7UZ580Dz4bBf//
        esm-test-host-192-14: AQAAAA7UZ58tMd74BP//
        esm-test-host-192-15: AQAAAA7UZ58vMuHD9///
        esm-test-host-192-17: AQAAAA7UZ58tJvKprv//
        esm-test-host-192-19: AQAAAA7UZ58yLAwK2f//
        esm-test-host-192-20: AQAAAA7UZ58pCS3VxP//
        esm-test-host-192-22: AQAAAA7UZ58xCQc+a///
        

Node status: (screenshot omitted)

Sample 2

(screenshot omitted)

Here is how the KV looks like in YAML-like representation:

KV:
  __esm__:
      leader: "" # part of session 'edc63fda-a348-f7a2-bdd5-749cf57751f1' held by host ''***11'
      agents:
        consul-esm:556c6406-b7fe-199f-716c-6593e2512d46: {"Nodes":null,"Probes":["esm-test-host-190-10","esm-test-host-190-14","esm-test-host-190-18"]} # entry name corresponds to ESM running on ***12
        consul-esm:a73bd45f-8ddd-26e1-2e0d-9ffa62306be5: {"Nodes":null,"Probes":["esm-test-host-190-11","esm-test-host-190-15","esm-test-host-190-19"]} # entry name corresponds to ESM running on ***21
        consul-esm:bedfcb00-5198-4e4c-b644-9c6df993a222: {"Nodes":null,"Probes":["esm-test-host-190-12","esm-test-host-190-16","esm-test-host-190-20"]} # entry name corresponds to ESM running on ***11
        consul-esm:d1da7756-897e-c35b-7bb1-a9bd8cd34705: {"Nodes":null,"Probes":["esm-test-host-190-13","esm-test-host-190-17","esm-test-host-190-21"]} # entry name corresponds to ESM running on ***11

When we first registered the external services, a week or a few ago, the 3 of them that correspond to:

consul-esm:d1da7756-897e-c35b-7bb1-a9bd8cd34705: {"Nodes":null,"Probes":["esm-test-host-190-13","esm-test-host-190-17","esm-test-host-190-21"]} # entry name corresponds to ESM running on ***11

were missing status info, but after a while they became "green":
(screenshot omitted)

Conclusion

Not really sure what it all means, but my gut feeling is that it's not normal. We would be looking forward to getting this addressed and would appreciate advice on how we can get it back to a "normal state". Thanks!

Support Service-Level UDP Check

Currently ESM supports a node-level UDP check but not a service-level UDP check.

Practitioners may want to have a UDP check to know whether a service on a specific port is healthy and not just overall node health. These external services might be dynamically registered and not just a static list of services. It would therefore be helpful to be able to dynamically register UDP checks, similar to how HTTP checks can be registered dynamically via /catalog/register.

It would be possible to use a script check feature (not yet built out) to implement service-level UDP checks for a static list of services. Since we want to limit dynamic script checks in order to prevent remote code execution security risks, they are not a good option for dynamically registering UDP checks.


Add support for health checks which require mTLS

We'd like to use ESM to monitor external services which require client authentication via mTLS. The Consul agent supports a version of this via the enable_agent_tls_for_checks option, but for ESM it would be nice to configure separate credentials for health checks, rather than reusing the ones for Consul communication. It's fine to use a single key pair for all health checks, however.
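For illustration, a sketch of what a dedicated client key pair for checks could look like in Go; the file paths, URL, and the idea of separate check credentials are hypothetical, not existing ESM configuration.

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"os"
	"time"
)

// newCheckHTTPClient builds an HTTP client that presents a client certificate,
// using credentials dedicated to health checks (hypothetical paths).
func newCheckHTTPClient(certFile, keyFile, caFile string) (*http.Client, error) {
	cert, err := tls.LoadX509KeyPair(certFile, keyFile)
	if err != nil {
		return nil, err
	}
	caPEM, err := os.ReadFile(caFile)
	if err != nil {
		return nil, err
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		return nil, fmt.Errorf("no CA certs parsed from %s", caFile)
	}
	return &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{
				Certificates: []tls.Certificate{cert}, // client auth for mTLS checks
				RootCAs:      pool,
			},
		},
	}, nil
}

func main() {
	client, err := newCheckHTTPClient("/etc/esm/check-client.pem", "/etc/esm/check-client-key.pem", "/etc/esm/check-ca.pem")
	if err != nil {
		fmt.Println("tls setup:", err)
		return
	}
	resp, err := client.Get("https://service.example.com:8443/health")
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("check status:", resp.Status)
}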

Feature request: Official docker image

It would be nice to have an "official" Docker image (just like consul-template already has one).
Having a pre-built image makes it even easier to get started with this little service.

Service Status in Consul can be poisoned for a prolonged period of time.

Running the latest Consul 1.4.3 and ESM 0.3.2.

Steps to reproduce

  1. Start Consul & ESM.
  2. Register some service with a health check, have the initial status set to "critical"
  3. Wait for the ESM to begin monitoring the check (e.g. watch for a status change from critical to passing).
  4. Re-register the exact same check again setting the status to "critical"

The service will never get corrected to be marked passing again.

The check logic will always follow this continue, so it will never update the status & output of the check here.

This causes the check update to continuously skip updating the check, as its internal record of the status will continue to be "passing".

Lines https://github.com/hashicorp/consul-esm/blob/master/check.go#L198 and https://github.com/hashicorp/consul-esm/blob/master/check.go#L132 could be updated to also do c.checks[checkHash].Status = check.Status. This would allow later health check probes to correct any incorrect values in the catalog.
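A minimal sketch of the suggested change: when the catalog's copy of a check is refreshed, also copy its Status into the runner's cache so later probes can correct it. The CheckRunner shape here is a simplified stand-in, not the project's actual type.

package main

import "fmt"

// HealthCheck is a stand-in for the catalog's check record (hypothetical shape).
type HealthCheck struct {
	CheckID string
	Status  string
}

// CheckRunner caches the checks it is managing, keyed by a hash of the check.
type CheckRunner struct {
	checks map[string]*HealthCheck
}

// UpdateCheck illustrates the spot the issue points at: besides the definition,
// copy the Status reported by the catalog, so a re-registration with "critical"
// does not leave a stale cached "passing" value masking future updates.
func (c *CheckRunner) UpdateCheck(checkHash string, check *HealthCheck) {
	cached, ok := c.checks[checkHash]
	if !ok {
		c.checks[checkHash] = check
		return
	}
	cached.Status = check.Status // the suggested addition
}

func main() {
	r := &CheckRunner{checks: map[string]*HealthCheck{
		"foo/service:web1": {CheckID: "service:web1", Status: "passing"},
	}}
	// The same check is re-registered with Status critical.
	r.UpdateCheck("foo/service:web1", &HealthCheck{CheckID: "service:web1", Status: "critical"})
	fmt.Println(r.checks["foo/service:web1"].Status) // critical
}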

Add a mode to not require root privileges

We noticed that consul-esm requires root access in order to ping the nodes that it monitors. Running the ESM with root privileges is not really reasonable in most cases.

A log sample:

2018/02/23 06:52:20 [WARN] could not ping node "": listen ip4:icmp : socket: operation not permitted

The ESM uses go-fastping to do this, which has the ability to ping over UDP without root privileges; can the default network mode be set to UDP, or can an option be exposed so we can change it to UDP?
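For background, a small Go sketch of the difference between the two socket types using golang.org/x/net/icmp; this shows the underlying mechanism, not ESM's actual code. On Linux the unprivileged "udp4" variant also requires net.ipv4.ping_group_range to include the process's group.

package main

import (
	"fmt"

	"golang.org/x/net/icmp"
)

func main() {
	// Unprivileged datagram ICMP socket: can work without root on Linux when
	// net.ipv4.ping_group_range includes this process's group.
	if c, err := icmp.ListenPacket("udp4", "0.0.0.0"); err != nil {
		fmt.Println("udp4 ICMP socket:", err)
	} else {
		fmt.Println("udp4 ICMP socket: ok")
		c.Close()
	}

	// Raw ICMP socket: needs root or CAP_NET_RAW; without it this is the mode
	// that fails with "socket: operation not permitted" / "permission denied".
	if c, err := icmp.ListenPacket("ip4:icmp", "0.0.0.0"); err != nil {
		fmt.Println("raw ICMP socket:", err)
	} else {
		fmt.Println("raw ICMP socket: ok")
		c.Close()
	}
}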


standardize -version output

We should have the first line of the -version output be the same across all our team's projects. It should look something like "consul-esm v0.3.3 (17c7ce4)".
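A tiny sketch of producing that first line from version and commit variables; the variable names and values are placeholders, typically injected at build time via -ldflags.

package main

import "fmt"

// Version and GitCommit are hypothetical placeholders, typically injected at
// build time via -ldflags "-X main.Version=... -X main.GitCommit=...".
var (
	Version   = "0.3.3"
	GitCommit = "17c7ce4"
)

func main() {
	fmt.Printf("consul-esm v%s (%s)\n", Version, GitCommit)
}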

Consul esm stay in "Trying to obtain leadership..." for more than 20 minutes after leader restart

Overview of the Issue

Consul ESM hangs and the ESM log stays at "Trying to obtain leadership..." for more than 20 minutes after a leader restart.
The health checks of external services do not work during this window.

Reproduction Steps

consul version: 1.3.0
consul esm version: 0.3.1
5 consul servers and esm instances

  1. Restart consul server and esm one by one
  2. The consul-esm log stays at "Trying to obtain leadership...". All the external service health checks stop working. They come back after 20 minutes.

The time is not always 20 minutes; in some cases it is 3 minutes.

Consul ESM Imports Entire Consul Core Project

Currently consul-esm imports all of Consul. You can observe in the go.mod file that consul v1.6.1 is imported:

(screenshot of go.mod omitted)

This issue is to look into and fix the dependency on the entire consul project so that only Consul sub-packages are imported.

Make it possible to run multiple ESM pools per cluster

By making the external-node key configurable, or by adding an additional constraint key, we can limit which nodes a given set of ESMs will service, allowing multiple pools of ESMs per cluster. This is useful if the machines running the ESMs need some kind of special setup, and a cluster has a mix of different requirements for those.


Consul ESM cannot validate Auto Encrypt Agent Certificates with expired cross signed certificates

I am running Consul on Kubernetes using the official Helm chart. I have Auto Encrypt turned on and I recently rotated Connect CA to use Vault.

This has resulted in the certificates issued to Consul agents containing a certificate that was cross-signed by the old Connect CA. The cross-signed certificate has since expired, and Consul ESM emits errors like:

2020/09/03 10:08:14 [ERR] error getting leader status: "Get https://10.1.1.40:8501/v1/status/leader: x509: certificate has expired or is not yet valid", retrying in 10s...

I had to set CONSUL_HTTP_SSL_VERIFY=false for Consul ESM to work.

This does not seem to be a problem for Consul Template 0.25.1. I noticed that Consul Template depends on Consul API v1.4.0 and SDK v0.4.0 whereas Consul ESM depends on Consul API v1.2.0 and SDK v0.4.0. I couldn't really identify the changes between 1.4 and 1.2 that might have fixed this. Could a bump to at least API 1.4 fix this?

udp ping mode erroring

Hello :) Thanks for this project - it was exactly what I was looking for!

In UDP ping mode I kept getting a socket error regarding permissions. Even using setcap didn't seem to help. Switching to socket mode (and using setcap) fixes things.

Is health-check interval honored?

Using consul-esm 0.3.3 with default settings.
Defining a node, external service and a corresponding health-check using the following definition:

[
  {
    "Node": "my-node",
    "Address": "my-node-address",
    "NodeMeta": {
      "external-node": "true",
      "external-probe": "true"
    },
    "Service": {
      "Service": "pangalactic-gargleblaster",
      "ID": "pangalactic-gargleblaster",
      "Port": 5002
    },
    "Checks": [
      {
        "Name": "tcp-check",
        "CheckID": "pangalactic-gargleblaster@1:tcp-check",
        "ServiceID": "pangalactic-gargleblaster",
        "Definition": {
          "tcp": "my-node-address:5002",
          "interval": "10s"
        }
      }
    ]
  }
]

At the time of running consul-esm, the external service is down.
consul-esm correctly registers the service health-check in critical state.

Then, after starting the external service, it takes approximately 60 seconds for consul-esm to switch the health-check from "critical" to "success".
The same is observed when taking the service down - it takes more than a minute for consul-esm to switch the service health-check to "critical" when the service becomes unavailable.

Is this by design?
I'd expect the health-check interval of 10 seconds in the check definition to be honored by consul-esm.

Process consuming 100% CPU

If no external service is defined with NodeMeta external-probe, then the loop in the updateCoords function consumes 100% of the CPU. I need to run health checks against an external service but I don't necessarily need to ping it.

PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
805 root      20   0   16320  10160   6916 S 100.0  0.3   3:11.06 consul-esm

This is an example of an external service created without the external-probe,

curl -X PUT -d '{"Datacenter": "dc1", "Node": "google", "Address": "google.com", "Service": {"Service": "search", "Port": 80}, "NodeMeta": {"external-node": "true"}}' http://127.0.0.1:8500/v1/catalog/register

I'm using Consul v1.0.7 and consul-esm 0.2.0.
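A minimal sketch of the kind of fix that avoids the busy loop: block on a ticker (or a notification channel) when there is nothing to probe instead of spinning. The function below is hypothetical, not the project's actual updateCoords.

package main

import (
	"fmt"
	"time"
)

// updateCoords is a hypothetical loop illustrating the fix: when there are no
// probe-able nodes, wait for the next tick instead of immediately looping,
// which is what pins a CPU core.
func updateCoords(nodeProbeInterval time.Duration, nodesToProbe func() []string, stop <-chan struct{}) {
	ticker := time.NewTicker(nodeProbeInterval)
	defer ticker.Stop()

	for {
		select {
		case <-stop:
			return
		case <-ticker.C:
			nodes := nodesToProbe()
			if len(nodes) == 0 {
				continue // nothing to do until the next tick; no busy loop
			}
			for _, n := range nodes {
				fmt.Println("probing", n)
			}
		}
	}
}

func main() {
	stop := make(chan struct{})
	go updateCoords(10*time.Second, func() []string { return nil }, stop)
	time.Sleep(100 * time.Millisecond)
	close(stop)
}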

Must provide node and address

Using the example configuration, I get some unexpected results:

  1. The consul-esm daemon periodically logs messages:
  • [ERR] http: Request PUT /v1/catalog/register, error: Must provide node and address from=127.0.0.1:60372
  • [WARN] error updating node: could not update external node check for node "foo": Unexpected response code: 500 (Must provide node and address)
  • [WARN] could not update coordinate for node "foo": error applying coordinate update for node "foo": Unexpected response code: 404 ()
  2. Service web is registered and both its Web HTTP check and Web TCP check health-checks are set as passing although there is nothing listening on localhost:8000. I'd expect to see failing checks.

And I have a general question: how is ESM supposed to check the health of an external system with a health-check definition such as "http://localhost:8000/health"?
Isn't it supposed to hit the service on its external address?

Consul-ESM rewrites check interval/timeout to default values

Hello!

Versions in use:

consul --version
Consul v1.4.4
Protocol 2 spoken by default, understands 2 to 3 (agent will automatically use protocol >2 when speaking to compatible agents)

consul-esm --version
v0.3.3

Consul members:

Node      Address               Status  Type    Build  Protocol  DC    Segment
consul-1  10.10.10.1:8301       alive   server  1.4.4  2         main  <all>
consul-2  10.10.10.2:8301       alive   server  1.4.4  2         main  <all>
consul-3  10.10.10.3:8301       alive   server  1.4.4  2         main  <all>

Consul-1 configuration is as follows:

{
  "datacenter": "main",
  "data_dir": "/var/consul",
  "log_level": "INFO",
  "log_file": "/var/log/consul/consul.log",
  "node_name": "consul-1",
  "server": true,
  "bind_addr": "10.10.10.1",
  "advertise_addr": "10.10.10.1",
  "client_addr": "0.0.0.0",
  "enable_script_checks": true,
  "recursors": ["127.0.0.1"],
  "telemetry": {
     "disable_hostname": true,
     "prometheus_retention_time": "120s"
  }
}

Consul-2 and consul-3 nodes are set with "start_join" and "retry_join" directives containing the first one's IP address, so that the Consul nodes can form a cluster. Note that the rest of the configuration is the same, meaning every node is acting as a server.

Besides Consul itself, each node runs consul-esm service. This is the configuration in use on all nodes:

log_level = "INFO"
enable_syslog = false
syslog_facility = ""
consul_service = "consul-esm"
consul_service_tag = ""
consul_kv_path = "consul-esm/"
external_node_meta {
    "external-node" = "true"
}
node_reconnect_timeout = "72h"
node_probe_interval = "10s"
http_addr = "localhost:8500"
token = ""
datacenter = "main"
ca_file = ""
ca_path = ""
cert_file = ""
key_file = ""
tls_server_name = ""
ping_type = "udp"

Flags for launching services are:

/usr/local/bin/consul agent -ui -config-dir=/etc/consul.d -config-file=/etc/consul.json
/usr/local/bin/consul-esm -config-dir=/etc/consul-esm.d -config-file=/etc/consul-esm.hcl

With this being said, here are instructions to reproduce a bug. First, register a new node with custom intervals.

curl -X PUT -d '{"Datacenter":"main", "Node":"my.hardware.device", "Address":"my.hardware.device", "Service":{"ID":"my.hardware.device", "Service":"my.hardware.device"}, "NodeMeta":{"external-node":"true", "external-probe":"false", "type":"hardware", "class":"network", "serial":"xxxxx"}, "Checks":[{"Node":"my.hardware.device", "CheckID":"firstcheck", "Name":"firstcheck", "Notes":"", "Status":"warning", "Definition":{"HTTP":"http://consul.check.node:8081", "Interval":"60s", "Timeout":"10s", "Method":"GET", "Header":{"hostname":["my.hardware.device"]}}}, {"Node":"my.hardware.device", "CheckID":"secondcheck", "Name":"secondcheck", "Notes":"", "Status":"warning", "Definition":{"HTTP":"http://consul.check.node:8082", "Interval":"60s", "Timeout":"10s", "Method":"GET", "Header":{"hostname":["my.hardware.device"]}}}]}' http://consul-1:8500/v1/catalog/register

Secondly, ensure the check configuration is correct. Note that the interval is still correct.

curl http://consul-1:8500/v1/health/node/my.hardware.device

[{"Node":"my.hardware.device","CheckID":"firstcheck","Name":"firstcheck","Status":"warning","Notes":"","Output":"","ServiceID":"","ServiceName":"","ServiceTags":[],"Definition":{"Interval":"1m0s","Timeout":"10s","HTTP":"http://consul.check.node:8081","Header":{"hostname":["my.hardware.device"]},"Method":"GET"},"CreateIndex":19510337,"ModifyIndex":19510337},{"Node":"my.hardware.device","CheckID":"secondcheck","Name":"secondcheck","Status":"warning","Notes":"","Output":"","ServiceID":"","ServiceName":"","ServiceTags":[],"Definition":{"Interval":"1m0s","Timeout":"10s","HTTP":"http://consul.check.node:8082","Header":{"hostname":["my.hardware.device"]},"Method":"GET"},"CreateIndex":19510337,"ModifyIndex":19510337}]

Finally, wait 1 minute and query the health checks once again. Note that the interval and timeout settings are now absent from the results.

curl http://consul-1:8500/v1/health/node/my.hardware.device

[{"Node":"my.hardware.device","CheckID":"firstcheck","Name":"firstcheck","Status":"passing","Notes":"","Output":"HTTP GET http://consul.check.node:8081: 200 OK Output: There is a host","ServiceID":"","ServiceName":"","ServiceTags":[],"Definition":{"HTTP":"http://consul.check.node:8081","Header":{"hostname":["my.hardware.device"]},"Method":"GET"},"CreateIndex":19510337,"ModifyIndex":19510342},{"Node":"my.hardware.device","CheckID":"secondcheck","Name":"secondcheck","Status":"critical","Notes":"","Output":"HTTP GET http://consul.check.node:8082: 404 Not Found Output: There is no host","ServiceID":"","ServiceName":"","ServiceTags":[],"Definition":{"HTTP":"http://consul.check.node:8082","Header":{"hostname":["my.hardware.device"]},"Method":"GET"},"CreateIndex":19510337,"ModifyIndex":19510348}]

In fact, the checks will now be executed with the default interval, as seen from the HTTP server log:

10.10.10.2 - - [26/Apr/2019:07:59:57 +0000] "GET / HTTP/1.1" 404 47 "-" "Consul Health Check"
10.10.10.2 - - [26/Apr/2019:08:00:28 +0000] "GET / HTTP/1.1" 404 47 "-" "Consul Health Check"
10.10.10.2 - - [26/Apr/2019:08:01:07 +0000] "GET / HTTP/1.1" 404 47 "-" "Consul Health Check"
10.10.10.2 - - [26/Apr/2019:08:01:38 +0000] "GET / HTTP/1.1" 404 47 "-" "Consul Health Check"
10.10.10.2 - - [26/Apr/2019:08:02:08 +0000] "GET / HTTP/1.1" 404 47 "-" "Consul Health Check"
10.10.10.2 - - [26/Apr/2019:08:02:39 +0000] "GET / HTTP/1.1" 404 47 "-" "Consul Health Check"

Let me know if you would require any more information.

ESM Check Never Passes

I'm working on an implementation of ESM. I was able to get it running and register an external service. However, the service never shows as passing. I was able to register the example external service from the Register External Services with Consul Service Discovery guide. I see it in Consul's UI as a service, and the ESM logs show it as being registered. The service never transitions from critical to passing; it just stays critical permanently. I'm trying to understand why, and whether something about the way I have ESM configured is the problem. I tried switching from ping_type udp to socket but it made no difference. Everything seems to be working except the actual status of a check (which is the whole point). If I deploy it with a status of passing it just stays passing (obviously).

Here is a sanitized copy of the config I'm using:

"ca_file" = "/local/consul_ca.pem"
"cert_file" = "/local/consul_cert.pem"
"consul_kv_path" = "consul-esm/"
"consul_service" = "consul-esm"
"datacenter" = "dev-aws"
"enable_syslog" = false
"external_node_meta" = {
  "external-node" = "true"
}
"http_addr" = "https://consul.service.consul:8501"
"key_file" = "/local/consul_key.pem"
"log_level" = "INFO"
"node_probe_interval" = "10s"
"node_reconnect_timeout" = "72h"
"ping_type" = "socket"
"token" = "my-token-123"
"tls_server_name" = "consul.service.consul"

I also noticed when setting the log level to debug that I get entries with '[DEBUG] No nodes to probe', even though there's a node registered. I'm not sure what that indicates. I'm running the latest 0.4.0 release.

Compatibility with consul replicate?

Hi,

My company is a consul enterprise subscriber and we use consul-esm internally for service discovery across our infrastructure. Do you know offhand if consul-esm is compatible with consul-replicate? https://github.com/hashicorp/consul-replicate

I imagine consul-replicate is tested with typical deployments of consul-server, but would you expect it to be compatible with consul-esm? And if so, are there any special considerations that come to mind?

Thanks

Prototype burndown list

Health checking

  • HTTP fields (method, header, TLSSkipVerify)
  • Deregister critical service after time period
  • Look into refreshing changed checks
  • Deregister node after failing virtual health check for ~72 hours

Coordinates

  • Coordinate update http endpoint (in Consul)
  • /v1/coordinate/node endpoint (in Consul)
  • Near: "_ip" support in prepared queries (in Consul)(hashicorp/consul#3798)
  • Periodic probing
  • Virtual health check based on probing

Config

  • Config file parsing
  • Basic CLI stuff (-config-file and help text)

Running Consul ESM in docker container results in orphaned service nodes on redeploy

This might be more of a question, but we rolled our own Docker container to run consul-esm. We noticed that when we redeploy these containers, the new instances get new unique service IDs and the previous containers' service IDs are "orphaned" in Consul.

Is there a way to avoid this, or what would be a good workaround strategy? Can we use something like the consul_service_tag to assign a consistent ID value, or set node_reconnect_timeout to a very low value to force prompt reaping?

missing builds for arm64

I see arm builds but not arm64. I'm looking to add AWS Graviton instances to my cluster, and this was discovered during testing.

Expose HTTP endpoint for metrics scraping (Prometheus)

Metrics were added in #67 and a Prometheus sink is supported to collect them. However, we have no way to scrape them from the ESM application.

The usual way this is done is by configuring Prometheus to scrape an HTTP endpoint of the application that exposes these metrics on a certain path, but ESM has no such endpoint. There is also the pushgateway approach, but a PrometheusPushSink is not configured, and pushing is not the recommended approach, as noted here: https://prometheus.io/docs/practices/pushing/.

The telemetry lib being used is the one from Consul, but in Consul's case the agent has an HTTP endpoint and exposes the metrics here: https://github.com/hashicorp/consul/blob/1b413b0444fe91a30bf18989fe8c668a767c9c8a/agent/agent_endpoint.go#L151. I propose that we make ESM expose an HTTP interface just for collecting metrics when Prometheus telemetry is enabled (prometheus_retention_time > 0).
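A minimal sketch of what such an endpoint could look like using promhttp from prometheus/client_golang; the listen address and the idea of a dedicated metrics listener are assumptions, not an existing ESM option.

package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	// Expose the default Prometheus registry on /metrics so a Prometheus
	// server can scrape the process (the address is a made-up example).
	http.Handle("/metrics", promhttp.Handler())
	log.Println("serving metrics on :8559/metrics")
	log.Fatal(http.ListenAndServe(":8559", nil))
}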

Config Struct Contains Unconfigurable Fields

Currently configuration is split between the Config struct and the HumanConfig struct, where HumanConfig is the subset of Config fields that are actually configurable by the practitioner. The HumanConfig values are then merged into the Config struct, which is then consumed.

This set-up can be confusing, as not all fields in Config are actually configurable. New configuration options also have to be added to both structs.

One possible solution would be to refactor the non-configurable options out of Config and into their own variables, removing the need for HumanConfig.
