conntrack_exporter's Introduction

conntrack_exporter

Prometheus exporter for tracking network connections

[Grafana screenshot]

Uses

  • Operations: monitor when a critical link between your microservices is broken (see the alerting-rule sketch after this list).
  • Security: monitor which remote hosts your server is talking to.
  • Debugging: correlate intermittently misbehaving code with strange connection patterns.
  • Audit: store JSON logs of connection events for future analysis.
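
For the operations use case, a Prometheus alerting rule along the lines of the sketch below might work. It is not shipped with the project; the group and alert names are hypothetical, and the endpoint is borrowed from the metrics example in the next section:

groups:
  - name: conntrack
    rules:
      - alert: CriticalServiceLinkDown
        # Fires when no connections to the endpoint are open. The absent()
        # clause covers the series dropping out of the exporter's output
        # entirely once conntrack forgets the host.
        expr: conntrack_open_connections{host="10.0.1.5:3306"} == 0 or absent(conntrack_open_connections{host="10.0.1.5:3306"})
        for: 2m
        labels:
          severity: critical
        annotations:
          summary: No open connections to 10.0.1.5:3306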

Features

conntrack_exporter exposes Prometheus metrics showing the state and remote endpoint for each connection on the server. For example:

# HELP conntrack_opening_connections How many connections to the remote host are currently opening?
# TYPE conntrack_opening_connections gauge
conntrack_opening_connections{host="10.0.1.5:3306"} 2
conntrack_opening_connections{host="10.0.1.12:8080"} 0

# HELP conntrack_open_connections How many open connections are there to the remote host?
# TYPE conntrack_open_connections gauge
conntrack_open_connections{host="10.0.1.5:3306"} 49
conntrack_open_connections{host="10.0.1.12:8080"} 19

# HELP conntrack_closing_connections How many connections to the remote host are currently closing?
# TYPE conntrack_closing_connections gauge
conntrack_closing_connections{host="10.0.1.5:3306"} 0
conntrack_closing_connections{host="10.0.1.12:8080"} 1

# HELP conntrack_closed_connections How many connections to the remote host have recently closed?
# TYPE conntrack_closed_connections gauge
conntrack_closed_connections{host="10.0.1.5:3306"} 3
conntrack_closed_connections{host="10.0.1.12:8080"} 0

Optionally, it can also emit logs of connection events. Ship these logs to your favourite log processors and alerting systems, or archive them for future audits.

Quick Start

docker run -d --cap-add=NET_ADMIN --net=host --name=conntrack_exporter hiveco/conntrack_exporter

Then open http://localhost:9318/metrics in your browser to view the Prometheus metrics.
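
Or check it from a shell:

curl -s http://localhost:9318/metrics | grep conntrack_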

To change the listen port:

docker run -d --cap-add=NET_ADMIN --net=host --name=conntrack_exporter hiveco/conntrack_exporter --listen-port=9101

Run with --help to see all available options.
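
To have Prometheus scrape the exporter, a static scrape config along these lines should work (a minimal sketch, assuming the default port and a Prometheus server running on the same host):

scrape_configs:
  - job_name: 'conntrack_exporter'
    static_configs:
      - targets: ['localhost:9318']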

Logging

conntrack_exporter can emit logs of all connection events it processes. Example:

$ docker run -it --rm --cap-add=NET_ADMIN --net=host hiveco/conntrack_exporter --log-events --log-events-format=json
...
{"event_type":"new","original_source_host":"10.0.1.65:40806","original_destination_host":"151.101.2.49:443","reply_source_host":"151.101.2.49:443","reply_destination_host":"10.0.1.65:40806","remote_host":"151.101.2.49:443","state":"Open"}
{"event_type":"new","original_source_host":"10.0.1.65:34900","original_destination_host":"162.247.242.20:443","reply_source_host":"162.247.242.20:443","reply_destination_host":"10.0.1.65:34900","remote_host":"162.247.242.20:443","state":"Opening"}

In the typical case, the remote_host and state keys are the most interesting; event_type and the keys prefixed original_ and reply_ expose slightly lower-level information obtained from libnetfilter_conntrack.

The --log-events-format argument currently supports two formats: json, and netfilter (the default), which matches the familiar, human-friendly output of the conntrack tools.
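
The JSON format lends itself to ad-hoc filtering. For example, assuming the exporter was started with --log-events --log-events-format=json under the container name from the Quick Start, a jq one-liner (a sketch, not a built-in feature) can watch newly opened connections:

docker logs -f conntrack_exporter 2>&1 | jq -c 'select(.state == "Open") | {remote_host, event_type}'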

Building

Prerequisites:

  • Bazel (tested with v6.4.0)
  • libnetfilter-conntrack-dev (Ubuntu/Debian: apt-get install libnetfilter-conntrack-dev)

conntrack_exporter builds as a mostly-static binary, only requiring that the libnetfilter_conntrack library is available on the system. To build the binary, run make. To build the hiveco/conntrack_exporter Docker image, run make build_docker.

NOTE: Building is only tested on Ubuntu 22.04.

Connection States

conntrack_exporter reports each TCP connection as being in one of four states: opening, open, closing, or closed. These map to the traditional TCP states as follows:

TCP State     Reported As
SYN_SENT      Opening
SYN_RECV      Opening
ESTABLISHED   Open
FIN_WAIT      Closing
CLOSE_WAIT    Closing
LAST_ACK      Closing
TIME_WAIT     Closing
CLOSE         Closed

The TCP states are generalized as above because they tend to be a useful abstraction for typical users, and because they help minimize overhead.
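
Because the exported metrics are plain gauges, standard PromQL applies. For example, to see the five remote hosts with the most open connections, or the total across all hosts (these queries assume only the metric names shown above):

topk(5, conntrack_open_connections)
sum(conntrack_open_connections)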

FAQs

Doesn't Prometheus already monitor the system's connections natively?

Not exactly. Prometheus's node_exporter has the disabled-by-default tcpstat module, which exposes the number of connections in each TCP state. It does not expose which remote hosts are being connected to, nor the states of the connections to each of them.

Why not? One difficulty is that tcpstat parses /proc/net/tcp to obtain its metrics, which can become slow on a busy server. In addition, there would be significant overhead for the Prometheus server to scrape and store the large number of constantly changing label:value pairs that such a busy server would expose. As the docs accompanying tcpstat put it, "the current version has potential performance issues in high load situations". So the Prometheus authors decided to expose only totals for each TCP state, which is quite a reasonable choice for the typical user.

conntrack_exporter exists to put that choice in the hands of the user. It is written in C++ (node_exporter is written in Go), and instead of parsing /proc/net/tcp it uses libnetfilter_conntrack for direct access to the Linux kernel's connection table. This should make it reasonably fast even on busy servers, and it allows more visibility into what's behind the summarized totals exposed by tcpstat.

Should I run this on a server that listens for external Internet connections?

Probably not, since a large number of unique connecting clients will create many metric labels and your Prometheus instance may be overwhelmed. conntrack_exporter is best used with internal servers (like application servers behind a load balancer, databases, caches, queues, etc), since the total number of remote endpoints these connect to tends to be small and fixed (i.e. usually just the other internal services behind your firewall).
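
If you must run it somewhere with many unique remote endpoints, one mitigation is to drop uninteresting series at scrape time using Prometheus metric relabelling. This is a sketch of a standard Prometheus feature, not something the exporter does itself, and the address range shown is hypothetical:

scrape_configs:
  - job_name: 'conntrack_exporter'
    static_configs:
      - targets: ['localhost:9318']
    metric_relabel_configs:
      # Keep only series whose remote host is in the internal range.
      - source_labels: [host]
        regex: '10\.0\.1\..*'
        action: keep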

I know some open connections were closed, why is conntrack_closed_connections not reporting them?

conntrack_exporter just exposes the system's connection table in a format Prometheus can scrape, and the closed connections are likely being dropped from that table very quickly. The gauge may go up for a short period and come back down again before your Prometheus server has a chance to scrape it.

Either increase the scrape frequency so Prometheus is more likely to notice the change, or increase how long the system "remembers" closed connections, which is controlled by nf_conntrack_tcp_timeout_close (this value is in seconds).

Check the current setting:

sysctl net.netfilter.nf_conntrack_tcp_timeout_close

Update it temporarily (lasts until next reboot):

sysctl -w net.netfilter.nf_conntrack_tcp_timeout_close=60

Update it permanently:

echo "net.netfilter.nf_conntrack_tcp_timeout_close=60" >> /etc/sysctl.conf
sysctl -p

WARNING: Raising this setting too high is not recommended, especially on high-traffic servers, because it may overflow your system's connection table due to all the extra closed connections it has to keep track of.
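
To gauge how much headroom remains before such an overflow, compare the current occupancy of the connection table against its limit (both sysctls are standard on conntrack-enabled kernels):

sysctl net.netfilter.nf_conntrack_count
sysctl net.netfilter.nf_conntrack_max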

Similar issues with other connection states (besides closed) might be resolved by updating the other net.netfilter.nf_conntrack_tcp_timeout_* settings as appropriate. Run sysctl -a | grep conntrack | grep timeout to see all available settings.

It's great, but I wish it...

Please open a new issue.

conntrack_exporter's People

Contributors

chaostheorie, dbendelman, sumkincpp, tolleiv

conntrack_exporter's Issues

Grafana Dashboard file

Hi,
Can somebody provide the Grafana dashboard json file? I could not find it in the source repository.

Thanks in advance.

Regards,

Joga Singh

Change metrics path

Hi there, would it be possible to implement an additional command line option to change the metrics path from /metrics to something else like /conntrack/metrics for example? I need to be able to do this due to some nuances working with Cloudflare Argo Tunnels.

Intermittent Missing metrics

I think this is related to issue #3, but since it's already closed, I will open a new issue.

Sometimes I only get these metrics:

# HELP exposer_transferred_bytes_total Transferred bytes to metrics services
# TYPE exposer_transferred_bytes_total counter
exposer_transferred_bytes_total 1160731
# HELP exposer_scrapes_total Number of times metrics were scraped
# TYPE exposer_scrapes_total counter
exposer_scrapes_total 519
# HELP exposer_request_latencies Latencies of serving scrape requests, in microseconds
# TYPE exposer_request_latencies summary
exposer_request_latencies_count 519
exposer_request_latencies_sum 776717
exposer_request_latencies{quantile="0.5"} 2713
exposer_request_latencies{quantile="0.9"} 3124
exposer_request_latencies{quantile="0.99"} 3492

The conntrack_exporter version I use is v0.3.1, running on a GCE VM.

I don't know if it's related, but in the debug log I only see this warning:

[DEBUG] WARNING: Tried to update an existing connection in our table but a match was not found (rebuilding=false):
	event=update     ipv4     2 tcp      6 120 TIME_WAIT src=10.0.0.1 dst=10.0.0.2 sport=64841 dport=9318 src=10.0.0.2 dst=10.0.0.1 sport=9318 dport=64841 [ASSURED]

Not working for CentOS 7 :-(

I have been looking for something like this for a long time!!!! It’s a great idea.
Unfortunately the provided binary does not work on CentOS 7, and although I can compile it on CentOS 7 without any issue, when run it provides only general counters, not a single metric related to any specific connection or its state.

Compilation is done using Bazel 4, as this is the version that works on CentOS 7.

Any idea about what can be wrong or how to fix it?

conntrack_exporter for windows

Hi,
you have exactly what I'm looking for.

Do you also have a version for Windows Server?

I can't run Docker on the server.

Thanks

Add support to run without `--net=host`

I'm not sure about the specifics, but I'd like to track only the connections inside a container and not those of the host network namespace. It seems that if you run conntrack_exporter inside a network namespace, even with NET_ADMIN, it won't capture any connections.

Ideas?

aarch64/arm64 support

conntrack_exporter         | exec /usr/bin/conntrack_exporter: exec format error

docker-compose.yaml

  conntrack_exporter:
    container_name: conntrack_exporter
    image: hiveco/conntrack_exporter:0.3.1
    command: --bind-address 0.0.0.0 --listen-port 9318
    restart: always
    cap_add:
      - NET_ADMIN
    network_mode: host
    ports:
      - 9318:9318
    logging: *loki-logging

Exporter not showing all open connections

Hello,

I noticed the metrics are not showing all of the currently open connections on my server. Here is what I get when I grep for listening TCP connections on my server:
[screenshot]

Vs. what I see on the metrics page:
[screenshot]

Half of the connections are not listed.

ERROR: regex_error

Hi, I have compiled the exporter for CentOS 7 but I get an error on startup.
[root@cent7 conntrack_exporter-0.3]# ./conntrack_exporter
conntrack_exporter v0.3
ERROR: regex_error
What could be the problem?

Thanks!

docker-compose example

I'm doing a weird example:

version: '3.8'

x-logging: &loki-logging
  driver: json-file
  options:
    tag: "{{.ImageName}}|{{.Name}}|{{.ImageFullID}}|{{.FullID}}"

services:
  postgres:
    container_name: postgres
    image: postgres:16.1
    restart: always
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: redacted
      POSTGRES_DB: db
      PGDATA: /var/lib/postgresql/data/pgdata
    ports:
      - 5432:5432
    volumes:
      - ./pgdata:/var/lib/postgresql/data/pgdata
    logging: *loki-logging

  postgres_exporter:
    container_name: postgres_exporter
    image: prometheuscommunity/postgres-exporter:v0.15.0
    restart: always
    environment:
      DATA_SOURCE_NAME: postgresql://postgres:redacted@postgres:5432/db?sslmode=disable
    ports:
      - 9187:9187
    depends_on:
      - postgres

  prometheus:
    container_name: prometheus
    image: prom/prometheus:v2.49.1
    restart: always
    ports:
      - 9090:9090
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - ./rules.yml:/etc/prometheus/rules.yml
      - ./prometheus:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/etc/prometheus/console_libraries'
      - '--web.console.templates=/etc/prometheus/consoles'
      - '--web.enable-lifecycle'
    logging: *loki-logging  

  node_exporter:
    container_name: node_exporter
    image: prom/node-exporter:v1.7.0
    restart: always
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
    command:
      - '--path.procfs=/host/proc'
      - '--path.rootfs=/rootfs'
      - '--path.sysfs=/host/sys'
      - '--collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($$|/)'
    ports:
      - 9100:9100
    depends_on:
      - prometheus
    logging: *loki-logging

  loki:
    container_name: loki
    image: grafana/loki:2.9.4
    restart: always
    ports:
      - 3100:3100
    volumes:
      - ./loki.yml:/etc/loki/loki.yml
    command: -config.file=/etc/loki/loki.yml
    logging: *loki-logging

  promtail:
    container_name: promtail
    image: grafana/promtail:2.9.4
    restart: always
    ports:
      - 9080:9080
    volumes:
      - ./promtail.yml:/etc/promtail/promtail.yml
      - /var/lib/docker/containers:/host/containers
    command: -config.file=/etc/promtail/promtail.yml
    logging: *loki-logging
    depends_on:
      - loki

  cadvisor:
    image: gcr.io/cadvisor/cadvisor:v0.47.2
    container_name: cadvisor
    restart: always
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
    ports:
      - 8080:8080

  alertmanager:
    container_name: alertmanager
    image: prom/alertmanager:v0.26.0
    restart: always
    ports:
      - 9093:9093
    volumes:
      - ./alertmanager.yml:/etc/alertmanager/config.yml
      - ./alertmanager:/alertmanager
    command: --config.file=/etc/alertmanager/config.yml --storage.path=/alertmanager
    logging: *loki-logging

  grafana:
    container_name: grafana
    image: grafana/grafana:10.3.1
    restart: always
    ports:
      - 3000:3000
    environment:
      GF_PATHS_CONFIG: /etc/grafana/grafana.ini
      GF_PATHS_DATA: /var/lib/grafana
      GF_PATHS_HOME: /usr/share/grafana
      GF_PATHS_LOGS: /var/log/grafana
      GF_PATHS_PLUGINS: /var/lib/grafana/plugins
      GF_PATHS_PROVISIONING: /provisioning
    volumes:
      - ./grafana:/var/lib/grafana
      - ./provisioning:/provisioning
    depends_on:
      - prometheus
      - node_exporter
      - loki
      - cadvisor
    logging: *loki-logging

  conntrack_exporter:
    container_name: conntrack_exporter
    image: hiveco/conntrack_exporter:0.3.1
    restart: always
    ports:
      - 9318:9318
    cap_add:
      - NET_ADMIN
    logging: *loki-logging

  thinkorswim_scraper:
    container_name: thinkorswim_scraper
    build:
      context: ../
      dockerfile: Dockerfile
    restart: always
    volumes:
      - ../.env:/home/runner/.env
    depends_on:
      - postgres
    logging: *loki-logging

I don't think I can do network: host here... do I need to? I feel like that will conflict with my prometheus scraper config:

global:
  scrape_interval: 15s
  external_labels:
    monitor: 'my-project'

rule_files:
  - rules.yml

alerting:
  alertmanagers:
    - scheme: http
      static_configs:
      - targets:
        - alertmanager:9093

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['prometheus:9090']

  - job_name: 'postgres_exporter'
    static_configs:
      - targets: ['postgres_exporter:9187']

  - job_name: 'node_exporter'
    static_configs:
      - targets: ['node_exporter:9100']

  - job_name: 'cadvisor'
    static_configs:
      - targets: ['cadvisor:8080']

  - job_name: 'alertmanager'
    static_configs:
      - targets: ['alertmanager:9093']

  - job_name: 'loki'
    static_configs:
      - targets: ['loki:3100']

  - job_name: 'promtail'
    static_configs:
      - targets: ['promtail:9080']

  - job_name: 'grafana'
    static_configs:
      - targets: ['grafana:3000']

  - job_name: 'conntrack_exporter'
    static_configs:
      - targets: ['conntrack_exporter:9318']

Sometimes failed to get metric

Below is the entire output of the exporter:

# HELP exposer_bytes_transferred bytesTransferred to metrics services
# TYPE exposer_bytes_transferred counter
exposer_bytes_transferred 165825104.000000
# HELP exposer_total_scrapes Number of times metrics were scraped
# TYPE exposer_total_scrapes counter
exposer_total_scrapes 961.000000
# HELP exposer_request_latencies Latencies of serving scrape requests, in milliseconds
# TYPE exposer_request_latencies histogram
exposer_request_latencies_count 961
exposer_request_latencies_sum 28441.000000
exposer_request_latencies_bucket{le="1.000000"} 633
exposer_request_latencies_bucket{le="5.000000"} 639
exposer_request_latencies_bucket{le="10.000000"} 651
exposer_request_latencies_bucket{le="20.000000"} 696
exposer_request_latencies_bucket{le="40.000000"} 772
exposer_request_latencies_bucket{le="80.000000"} 837
exposer_request_latencies_bucket{le="160.000000"} 933
exposer_request_latencies_bucket{le="320.000000"} 959
exposer_request_latencies_bucket{le="640.000000"} 959
exposer_request_latencies_bucket{le="1280.000000"} 959
exposer_request_latencies_bucket{le="2560.000000"} 959
exposer_request_latencies_bucket{le="+Inf"} 961

build: no native function or rule 'http_archive'

On master:

bazel build -c dbg //:conntrack_exporter
DEBUG: Rule 'prometheus_cpp' indicated that a canonical reproducible form can be obtained by modifying arguments shallow_since = "1509747915 -0700"
DEBUG: Repository prometheus_cpp instantiated at:
  /tmp/conntrack_exporter/WORKSPACE:3:15: in <toplevel>
Repository rule git_repository defined at:
  /root/.cache/bazel/_bazel_root/d91834a1ac54014cf5ba7f51135e5a3c/external/bazel_tools/tools/build_defs/repo/git.bzl:199:33: in <toplevel>
ERROR: Traceback (most recent call last):
	File "/tmp/conntrack_exporter/WORKSPACE", line 10, column 28, in <toplevel>
		prometheus_cpp_repositories()
	File "/root/.cache/bazel/_bazel_root/d91834a1ac54014cf5ba7f51135e5a3c/external/prometheus_cpp/repositories.bzl", line 129, column 29, in prometheus_cpp_repositories
		load_com_google_protobuf()
	File "/root/.cache/bazel/_bazel_root/d91834a1ac54014cf5ba7f51135e5a3c/external/prometheus_cpp/repositories.bzl", line 99, column 11, in load_com_google_protobuf
		native.http_archive(
Error: no native function or rule 'http_archive'
Available attributes: aar_import, action_listener, alias, android_binary, android_device, android_device_script_fixture, android_host_service_fixture, android_instrumentation_test, android_library, android_local_test, android_ndk_repository, android_sdk, android_sdk_repository, android_tools_defaults_jar, apple_cc_toolchain, available_xcodes, bazel_version, bind, cc_binary, cc_host_toolchain_alias, cc_import, cc_libc_top_alias, cc_library, cc_proto_library, cc_shared_library, cc_shared_library_permissions, cc_test, cc_toolchain, cc_toolchain_alias, cc_toolchain_suite, config_feature_flag, config_setting, constraint_setting, constraint_value, environment, existing_rule, existing_rules, exports_files, extra_action, fdo_prefetch_hints, fdo_profile, filegroup, genquery, genrule, glob, j2objc_library, java_binary, java_import, java_library, java_lite_proto_library, java_package_configuration, java_plugin, java_plugins_flag_alias, java_proto_library, java_runtime, java_test, java_toolchain, label_flag, label_setting, local_config_platform, local_repository, new_local_repository, objc_import, objc_library, package_group, package_name, platform, propeller_optimize, proto_lang_toolchain, proto_library, py_binary, py_library, py_runtime, py_test, register_execution_platforms, register_toolchains, repository_name, sh_binary, sh_library, sh_test, subpackages, test_suite, toolchain, toolchain_type, xcode_config, xcode_config_alias, xcode_version
ERROR: Traceback (most recent call last):
	File "/tmp/conntrack_exporter/WORKSPACE", line 13, column 32, in <toplevel>
		conntrack_exporter_dependencies()
	File "/tmp/conntrack_exporter/repositories.bzl", line 40, column 24, in conntrack_exporter_dependencies
		argagg_repositories()
	File "/tmp/conntrack_exporter/repositories.bzl", line 30, column 11, in argagg_repositories
		native.new_git_repository(
Error: no native function or rule 'new_git_repository'
Available attributes: aar_import, action_listener, alias, android_binary, android_device, android_device_script_fixture, android_host_service_fixture, android_instrumentation_test, android_library, android_local_test, android_ndk_repository, android_sdk, android_sdk_repository, android_tools_defaults_jar, apple_cc_toolchain, available_xcodes, bazel_version, bind, cc_binary, cc_host_toolchain_alias, cc_import, cc_libc_top_alias, cc_library, cc_proto_library, cc_shared_library, cc_shared_library_permissions, cc_test, cc_toolchain, cc_toolchain_alias, cc_toolchain_suite, config_feature_flag, config_setting, constraint_setting, constraint_value, environment, existing_rule, existing_rules, exports_files, extra_action, fdo_prefetch_hints, fdo_profile, filegroup, genquery, genrule, glob, j2objc_library, java_binary, java_import, java_library, java_lite_proto_library, java_package_configuration, java_plugin, java_plugins_flag_alias, java_proto_library, java_runtime, java_test, java_toolchain, label_flag, label_setting, local_config_platform, local_repository, new_local_repository, objc_import, objc_library, package_group, package_name, platform, propeller_optimize, proto_lang_toolchain, proto_library, py_binary, py_library, py_runtime, py_test, register_execution_platforms, register_toolchains, repository_name, sh_binary, sh_library, sh_test, subpackages, test_suite, toolchain, toolchain_type, xcode_config, xcode_config_alias, xcode_version
ERROR: error loading package 'external': Package 'external' contains errors
INFO: Elapsed time: 0.087s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (0 packages loaded)
make: *** [Makefile:4: build] Error 1

build: name 'git_repository' is not defined

On master

bash-5.1# make
bazel build -c dbg //:conntrack_exporter
ERROR: /tmp/conntrack_exporter/WORKSPACE:1:1: name 'git_repository' is not defined
ERROR: error loading package '': Encountered error while reading extension file 'repositories.bzl': no such package '@prometheus_cpp//': error loading package 'external': Could not load //external package
INFO: Elapsed time: 0.074s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (0 packages loaded)
make: *** [Makefile:4: build] Error 1
bash-5.1# 

Related description

git_repository is no longer a native rule. You need to include it at the top of your WORKSPACE with:

load("@bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")

Link

Segfault with docker

Hello,

I have issues with conntrack_exporter in Docker.

After a long run, conntrack_exporter crashed without logs.

In my kernel log, I found:

[586087.070984] traps: conntrack_expor[4084] general protection ip:7fc28be79196 sp:7ffea652b7e0 error:0 in libc-2.23.so[7fc28be42000+1c0000]

And the output of docker info:

docker info
Containers: 21
 Running: 2
 Paused: 0
 Stopped: 19
Images: 66
Server Version: 18.05.0-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 773c489c9c1b21a6d78b5c538cd395416ec50f88
runc version: 4fc53a81fb7c994640722ac585fa9ca548971871
init version: 949e6fa
Security Options:
 seccomp
  Profile: default
Kernel Version: 4.16.13-300.fc28.x86_64
Operating System: Fedora 28 (Workstation Edition)
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.672GiB
Name: localhost.localdomain
ID: 7SBO:6IM6:SKBT:LFCV:PUT6:7VI7:IXB7:QIUY:LINL:QCE4:LXNH:GZZ3
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

Feature: Optionally expose metrics by source

We were looking to use this exporter to monitor connections on a number of machines being used as NAT gateways. In this use case, whilst monitoring connections by destination is certainly useful, it is much more beneficial to be able to see the number of connections being opened by a given source, to help identify bad clients that are performing far too much connection churn.

It would be great to have some toggles to optionally include a source cardinality (rolled up by host, dropping the port due to cardinality), and also to drop the destination cardinality.

Impact of exposing raw connection states?

The docs say:

The TCP states are generalized as above because they tend to be a useful abstraction for typical users, and because they help minimize overhead.

I would prefer the raw connection states; for the kinds of problems we tend to have around this, the collapsed state is not useful.

What "minimized" overheads are we talking about? Just fewer metrics?

If there's no particular issue, then I'll look at adding a switch to produce the raw states instead.

Build failure on Bazel v0.13.1

$ mv ~/.cache/bazel ~/.cache/bazel.bak
$ make build_stripped
bazel build --strip=always -c opt --verbose_failures //:conntrack_exporter
Extracting Bazel installation...
Starting local Bazel server and connecting to it...
..........
ERROR: [redacted]/.cache/bazel/_bazel_[redacted]/352b60becb5b3a57268dc445b088f116/external/prometheus_cpp/BUILD:1:1: no such package '@prometheus_client_model//': Prefix client_model-e2da43a was given, but not found in the archive and referenced by '@prometheus_cpp//:prometheus_cpp'
ERROR: Analysis of target '//:conntrack_exporter' failed; build aborted: no such package '@prometheus_client_model//': Prefix client_model-e2da43a was given, but not found in the archive
INFO: Elapsed time: 5.586s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (9 packages loaded)
Makefile:8: recipe for target 'build_stripped' failed
make: *** [build_stripped] Error 1

Bazel version:

$ sudo apt-cache show bazel
Package: bazel
Version: 0.13.1
Architecture: amd64
Depends: google-jdk | java8-sdk-headless | java8-jdk | java8-sdk |
 oracle-java8-installer, g++, zlib1g-dev, bash-completion
Maintainer: The Bazel Authors <[email protected]>
Priority: optional
Section: contrib/devel
Filename: pool/jdk1.8/b/bazel/bazel_0.13.1_amd64.deb
Size: 103915534
SHA256: b91e6ce034260ebbde6a1b31f7eaf2c755acb5d30eb7f9186765ee694a2948f3
SHA1: f394b13650088c3e2c0706608016ba5f1865cc85
MD5sum: 9f33d2122bab8e6f6bebe8cd9bcbcd3c
Description: Bazel is a tool that automates software builds and tests.
 Supported build tasks include running compilers and linkers to produce
 executable programs and libraries, and assembling deployable packages
 for Android, iOS and other target environments. Bazel is similar to
 other tools like Make, Ant, Gradle, Buck, Pants and Maven.
Description-md5: 72cec041981bfe49fb7d7b6d540d9076
Homepage: http://bazel.build
Built-Using: bazel (HEAD)

Building works again after downgrading to v0.7.0.
