
ping_exporter's People

Contributors

46bit, aklinkert, casteloig, cinatic, corny, czerwonk, databus23, dependabot[bot], dmke, drtr0jan, ebarped, eklesel, foogod, hikhvar, j13tw, leahoswald, momorientes, nhinds, nomaster, pberndro, rtgnx, sknss, skyxx, soapiestwaffles, stevegrey2, tdabasinskas, xykong


ping_exporter's Issues

Alternate DNS server

Hi,

I am using this Docker image on an Unraid server. I can't find a way to start it with the --dns.nameserver option to specify a different DNS server. Is there a way to do that via config.yml, or if not, can I file this as a feature request? Thanks.
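For reference, other configs on this page set a custom DNS server through the dns section of config.yml, so something like the following may already work (a sketch based on those examples, not verified against the Unraid image; the nameserver IP is illustrative):

dns:
  refresh: 30m
  nameserver: 9.9.9.9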

Metrics in seconds actually give nanosecond-scale results

Hello,
I'm trying to migrate from the old version of ping_exporter to the new one (v0.4.7), using values expressed in seconds.

I may just have messed something up, but after starting ping_exporter with the following command
docker run --rm -it --net host czerwonk/ping_exporter:v0.4.7 ./ping_exporter --metrics.rttunit=s
to express the values in seconds, I find this when querying the exporter with curl:

# HELP ping_rtt_best_seconds_seconds Best round trip time in seconds
# TYPE ping_rtt_best_seconds_seconds gauge
ping_rtt_best_seconds_seconds{ip="XXXXX",ip_version="4",target="XXXXX"} 0.00013146699965000153
ping_rtt_best_seconds_seconds{ip="XXXXX",ip_version="4",target="XXXXX"} 0.00017787399888038636
ping_rtt_best_seconds_seconds{ip="XXXXX",ip_version="4",target="XXXXX"} 0.00017915000021457672
ping_rtt_best_seconds_seconds{ip="XXXXX",ip_version="4",target="XXXXX"} 0.00020117099583148956

These floats can't actually be seconds, since that would mean microsecond-range RTTs between my hosts. I'm 100% sure that is not the case, and it wasn't before the exporter update.

Am I doing anything wrong or is this a bug?
Thank you in advance!

Add Grafana dashboard

It would be nice to have an "official" ping_exporter Grafana dashboard in this GitHub repo.

In the meantime, this is my (partial) attempt:
ping.zip

[dashboard screenshot]

PS: this assumes VictoriaMetrics, since it uses the sort_by_label function; you can simply remove that call if you use plain old Prometheus.
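For reference, the kind of query this relies on (the metric name is taken from exporter output shown elsewhere on this page) looks like the following; sort_by_label is VictoriaMetrics MetricsQL, and the unwrapped form is valid plain PromQL:

sort_by_label(ping_rtt_mean_seconds, "target")
ping_rtt_mean_seconds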

Release of new version (prebuilt binaries)

Hey @czerwonk, first of all, thank you for that great exporter!

Would it be possible for you to release the latest changes from your master branch as prebuilt binaries, as you did with v0.4.4? Currently the changes are only available in the Docker image (tag: latest), and using Docker is not possible in my current environment.

Thank you!

Sort Order

Currently the metrics are sorted by IP address.

I would like a config option to sort by:
none (the order in the config targets)
target
ip address

Is it normal that ping_rtt_std_deviation_ms is less than ping_rtt_best_ms?

# HELP ping_loss_percent Packet loss in percent
# TYPE ping_loss_percent gauge
ping_loss_percent{ip="172.16.129.2",ip_version="4",target="172.16.129.2"} 0
ping_loss_percent{ip="172.16.130.2",ip_version="4",target="172.16.130.2"} 0
# HELP ping_rtt_best_ms Best round trip time in millis
# TYPE ping_rtt_best_ms gauge
ping_rtt_best_ms{ip="172.16.129.2",ip_version="4",target="172.16.129.2"} 0.0960950031876564
ping_rtt_best_ms{ip="172.16.130.2",ip_version="4",target="172.16.130.2"} 0.08521000295877457
# HELP ping_rtt_mean_ms Mean round trip time in millis
# TYPE ping_rtt_mean_ms gauge
ping_rtt_mean_ms{ip="172.16.129.2",ip_version="4",target="172.16.129.2"} 0.11187940090894699
ping_rtt_mean_ms{ip="172.16.130.2",ip_version="4",target="172.16.130.2"} 0.13122600317001343
# HELP ping_rtt_ms Round trip time in millis (deprecated)
# TYPE ping_rtt_ms gauge
ping_rtt_ms{ip="172.16.129.2",ip_version="4",target="172.16.129.2",type="best"} 0.0960950031876564
ping_rtt_ms{ip="172.16.129.2",ip_version="4",target="172.16.129.2",type="mean"} 0.11187940090894699
ping_rtt_ms{ip="172.16.129.2",ip_version="4",target="172.16.129.2",type="std_dev"} 0.01460417453199625
ping_rtt_ms{ip="172.16.129.2",ip_version="4",target="172.16.129.2",type="worst"} 0.14642100036144257
ping_rtt_ms{ip="172.16.130.2",ip_version="4",target="172.16.130.2",type="best"} 0.08521000295877457
ping_rtt_ms{ip="172.16.130.2",ip_version="4",target="172.16.130.2",type="mean"} 0.13122600317001343
ping_rtt_ms{ip="172.16.130.2",ip_version="4",target="172.16.130.2",type="std_dev"} 0.026729168370366096
ping_rtt_ms{ip="172.16.130.2",ip_version="4",target="172.16.130.2",type="worst"} 0.18367899954319
# HELP ping_rtt_std_deviation_ms Standard deviation in millis
# TYPE ping_rtt_std_deviation_ms gauge
ping_rtt_std_deviation_ms{ip="172.16.129.2",ip_version="4",target="172.16.129.2"} 0.01460417453199625
ping_rtt_std_deviation_ms{ip="172.16.130.2",ip_version="4",target="172.16.130.2"} 0.026729168370366096
# HELP ping_rtt_worst_ms Worst round trip time in millis
# TYPE ping_rtt_worst_ms gauge
ping_rtt_worst_ms{ip="172.16.129.2",ip_version="4",target="172.16.129.2"} 0.14642100036144257
ping_rtt_worst_ms{ip="172.16.130.2",ip_version="4",target="172.16.130.2"} 0.18367899954319

socket: permission denied

Hi,

$uname -prsv
OpenBSD 6.5 GENERIC.MP#5 amd64

When I launch ping_exporter as a regular user I get:

$/usr/local/bin/ping_exporter --web.listen-address=0.0.0.0:9101 --config.path=/etc/ping_exporter.conf
ERRO[0000] listen ip4:icmp 0.0.0.0: socket: permission denied  source="main.go:97"

the same user can:

  • use ping
$ping x.org  
PING x.org (131.252.210.176): 56 data bytes
64 bytes from 131.252.210.176: icmp_seq=0 ttl=46 time=157.838 ms
  • launch node_exporter.
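A likely explanation: raw ICMP sockets generally require root privileges, and the ping binary works for unprivileged users only because it is installed setuid (node_exporter opens no raw sockets, so it is unaffected). On Linux the usual fix is to grant the binary file capabilities; OpenBSD has no direct setcap equivalent, so running the exporter as root (e.g. via doas) may be necessary there. The Linux variant, assuming a typical install path:

setcap cap_net_raw+ep /usr/local/bin/ping_exporter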

ping_exporter not detecting downtimes when reusing ICMP identifiers

Thank you for this project, it already helped a lot for drilling down on my internet outages at home!

Unfortunately, in my case, it's often caused by some NAT table filling up upstream (at least that's my running theory), which leads to currently running pings continuing to work, while new connections can't be established anymore.

For my use case, having some option like alwaysUseNewIcmpIdentifier would be sufficient.

Another, maybe even better, solution would be what @foogod requested to merge - this way, I could just set ping.id-change.interval: 5s.

But I guess the go-ping PR for that, which isn't merged yet, would be required for either of those two solutions?

Edit: when pinging "manually", I can see the difference in behaviour between ping 8.8.8.8 and while true; do ping 8.8.8.8 -c 1; done.
Is there a way to use go-ping in a similar fashion, always restarting the ping to get a new identifier, without losing history/metrics? Then it should be possible to do this already, without digineo/go-ping/pull/19 merged.

Duplicate unit in metric name

Currently I get metrics with names like ping_rtt_seconds_ms or ping_rtt_best_seconds_seconds if I use seconds as the unit, so the unit is duplicated. This also doesn't match the metric names in your README docs, so I think the duplicate unit should be removed. Looks like a problem introduced with #45.

GitHub releases?

Could binaries for this exporter please be released on GitHub, similar to what others do for e.g. node_exporter and process_exporter?

Binary releases are handy to avoid having to compile the exporter on each machine with go get, for people who don't wish to run it inside a Docker container.
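For anyone who wants a binary without waiting for releases, standard Go tooling can also build and install it directly (a sketch; assumes the module path matches the repository URL):

go install github.com/czerwonk/ping_exporter@latest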

Service does not start on CentOS but works fine on Ubuntu

Hi, the same ping_exporter works fine on Ubuntu, but on CentOS it gives the following error.

$ sudo systemctl status ping_exporter_uat -l
● ping_exporter_uat.service - Ping Exporter UAT
Loaded: loaded (/usr/lib/systemd/system/ping_exporter_uat.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Mon 2022-06-27 11:22:14 UTC; 2s ago
Process: 23193 ExecStart=/usr/local/bin/ping_exporter_uat --config.path="/etc/ping_exporter_uat/ping_exporter_uat.yaml" --web.listen-address=":9427" --metrics.deprecated=disable --metrics.rttunit=ms (code=exited, status=1/FAILURE)
Main PID: 23193 (code=exited, status=1/FAILURE)

Jun 27 11:22:14 uat-cdc-1 ping_exporter_uat[23193]: (ping_rtt_ms{type=best|worst|mean|std_dev}).
Jun 27 11:22:14 uat-cdc-1 ping_exporter_uat[23193]: Valid choices: [enable, disable]
Jun 27 11:22:14 uat-cdc-1 ping_exporter_uat[23193]: --metrics.rttunit="ms" Export ping results as either millis (default),
Jun 27 11:22:14 uat-cdc-1 ping_exporter_uat[23193]: or seconds (best practice), or both (for
Jun 27 11:22:14 uat-cdc-1 ping_exporter_uat[23193]: migrations). Valid choices: [ms, s, both]
Jun 27 11:22:14 uat-cdc-1 ping_exporter_uat[23193]: Args:
Jun 27 11:22:14 uat-cdc-1 ping_exporter_uat[23193]: [] A list of targets to ping
Jun 27 11:22:14 uat-cdc-1 systemd[1]: ping_exporter_uat.service: main process exited, code=exited, status=1/FAILURE
Jun 27 11:22:14 uat-cdc-1 systemd[1]: Unit ping_exporter_uat.service entered failed state.
Jun 27 11:22:14 uat-cdc-1 systemd[1]: ping_exporter_uat.service failed.

dynamic list of targets

I have a suggestion for an enhancement. Currently a list of targets can only be specified statically in a config file. It would be useful to have some way to change targets dynamically.

It could use the same principle as the file_sd_config option in Prometheus: the config file points to a JSON/YAML file with a list of targets, and the program watches that file and picks up changes automatically.
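For reference, the Prometheus mechanism being alluded to looks like this (the file path is illustrative); the request is for ping_exporter to watch a targets file in the same way:

scrape_configs:
  - job_name: ping_exporter
    file_sd_configs:
      - files:
          - /etc/prometheus/targets/*.json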

I really appreciate it and thanks for the work you did so far!

ping_loss_ratio averaging concern/question.

Hi all,

I'm running a configuration which uses ping_exporter to query a range of services (primarily devices within our network and VoIP phone provider) for packet loss via the ping_loss_ratio metric.

We've had packet loss issues for a while now, although the data I am getting into Prometheus (and then Grafana) seems to be slightly incorrect.

I notice the packet loss numbers seem very similar; for example, I only seem to be getting values of 2.38%, 4.76%, 7.14%, 9.52%, etc. These numbers are NOT random: 2.38 * 2 = 4.76 and 4.76 * 2 = 9.52, so they seem to be multiples of a fixed step. This data shows up in both Grafana and Prometheus.
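One observation: those values are exact multiples of 1/42 ≈ 2.38%, which is exactly what you would see if loss were computed as k lost packets out of a rolling 42-packet history window (cf. the ping.history-size option in other configs on this page):

1/42 = 0.0238 (2.38%)
2/42 = 0.0476 (4.76%)
3/42 = 0.0714 (7.14%)
4/42 = 0.0952 (9.52%)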

[Grafana screenshots, 2023-11-06 9:59 AM and 10:00 AM]

I use the following settings within the prometheus.yml file.

- job_name: ping_exporter
  honor_timestamps: true
  scrape_interval: 1s
  scrape_timeout: 1s
  metrics_path: /metrics
  scheme: http
  follow_redirects: true
  enable_http2: true
  static_configs:
  - targets:
    - 10.20.60.11:9427

What could I be doing wrong to get somewhat malformed data?

Cheers,
Randommmm,

Updating to v0.4.8 Caused Loss of Connectivity

Hello,
We're using ping_exporter in a limited capacity to test uptime to certain equipment. The software runs in a container through Docker Swarm Mode and, after the update from v0.4.7 to v0.4.8, connectivity was lost. ping_exporter did not provide metrics for the devices it was pinging, but it did continue to provide its own status. From Grafana, through Prometheus, it looked like this:

[Grafana screenshot, 2021-12-10]

The yellow and green show the ping_up metric. Green is v0.4.7 and yellow is v0.4.8. As you can see, for the time it ran on v0.4.8, the other metrics were not provided to Prometheus. The instant we swapped back to v0.4.7, those metrics began logging again. No configuration change was made between updates; the only thing that changed was the version number in the compose file. No log lines were generated in ping_exporter's logs.

Here's our running config:

# ping_exporter/config.yml
targets:
  - "[REDACTED IP]"
  - "[REDACTED IP]"
  - "[REDACTED IP]"
  - "[REDACTED IP]"
dns:
  refresh: 30m
  nameserver: [REDACTED IP]
ping:
  interval: 5s
  timeout: 2s
  history-size: 100
  payload-size: 10

This is obviously non-urgent as we're fine with running v0.4.7 for the time being. Please let me know if we can provide any additional information to help you troubleshoot, and thank you for making such a handy tool.

Ship arm64 binary

Thank you for this project, very useful!

Could you please release an arm64 binary as well?

Thanks!

Docker startup fails if config path is a directory instead of a file

I'm getting the following output when attempting to run the Docker image with the command line suggested in the README. I've saved a 'config.yml' file in the mounted volume.

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x4 pc=0x3daf70]

goroutine 1 [running]:
main.addFlagToConfig(0x0)
        /go/ping_exporter/main.go:236 +0x14
main.loadConfig(0x0, 0x0, 0x0)
        /go/ping_exporter/main.go:213 +0x100
main.main()
        /go/ping_exporter/main.go:78 +0x168

I spent some time trying to figure this out, but it's simply that the '/config' path needs to point at the config.yml file, not at a directory containing it.
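For anyone else hitting this: the fix is to make --config.path reference the file rather than the directory. A sketch, assuming the README-style volume mount:

docker run --rm -v "$(pwd)/config:/config:ro" czerwonk/ping_exporter --config.path=/config/config.yml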

Pong not being processed

Hi Dan

I have a simple configuration:

targets:
  - c-68-49-xxx-yyy.hsd1.mi.comcast.net

ping:
  interval: 5s
  history-size: 3

Then I run ping_exporter like this:

# strace -etrace=network -f ./ping_exporter --config.path ./config.yml --web.listen-address 127.0.0.1:9427

I see the requests/responses on the network:

12:17:58.981103 IP 41.78.aaa.bbb > 68.49.xxx.yyy: ICMP echo request, id 29746, seq 1, length 8
12:17:59.256714 IP 68.49.xxx.yyy > 41.78.aaa.bbb: ICMP echo reply, id 29746, seq 1, length 8

But even after running for some time, the metrics for this target stay stuck at 100% loss:

$ curl localhost:9427/metrics
# HELP ping_loss_percent Packet loss in percent
# TYPE ping_loss_percent gauge
ping_loss_percent{ip="68.49.xxx.yyy",ip_version="4",target="c-68-49-xxx-yyy.hsd1.mi.comcast.net"} 1
# HELP ping_up ping_exporter version
# TYPE ping_up gauge
ping_up{version="0.4.4"} 1

Strace shows the recvfrom() call returning the response to the application:

[pid 29771] sendto(3, "\10\0\203\256tH\0\t", 8, 0, {sa_family=AF_INET, sin_port=htons(0), sin_addr=inet_addr("68.49.xxx.yyy")}, 16) = 8
[pid 29772] recvfrom(3, "E\0\0\349@\0\0(\1\2XD1i\310)N\200\2\0\0\213\256tH\0\t", 1500, 0, {sa_family=AF_INET, sin_port=htons(0), sin_addr=inet_addr("68.49.xxx.yyy")}, [112->16]) = 28

Any idea what is causing this, or how I can continue troubleshooting?

Regards,
Simeon.

ARM Docker builds

Hi,

Is there a chance to get ARM Docker builds? Would love to run this on my Raspberry Pi :)

deprecated prometheus log package breaks build

I am packaging this exporter, and currently the build fails with:

...
   dh_auto_build -O--builddirectory=_build -O--buildsystem=golang
        cd _build && go install -trimpath -v -p 8 github.com/czerwonk/ping_exporter github.com/czerwonk/ping_exporter/config
src/github.com/czerwonk/ping_exporter/target.go:11:2: cannot find package "github.com/prometheus/common/log" in any of:
        /usr/lib/go-1.17/src/github.com/prometheus/common/log (from $GOROOT)
...

87ca369 replaced prometheus/common/log with sirupsen/logrus, but it seems that target.go (line 11) was missed.
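The fix is presumably a one-line import swap in target.go to match what the rest of the codebase now uses (a sketch of the change, not the actual commit):

// target.go, line 11, before:
import "github.com/prometheus/common/log"

// after (mirroring what 87ca369 did in the other files; the alias keeps
// existing log.Infof/log.Errorf call sites compiling unchanged):
import log "github.com/sirupsen/logrus"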

Error trying to add ping_exporter as Grafana datasource

Hi,

thanks for your work.

I just installed version 1.0.0 as a Debian (version 10.11) package for the first time.

When I manually enter the exposed interface (http://192.168.2.50:9427) in my web browser, it works and I can see the metrics (http://192.168.2.50:9427/metrics) as configured in /etc/ping_exporter/config.yml.

When I try to add it as a Grafana datasource I get the following error:
Error reading Prometheus: bad_response: readObjectStart: expect { or n, but found <, error found in #1 byte of ...|<!doctype h|..., bigger context ...|<!doctype html> <html> <head> <meta charset="UTF-8|...

In the Grafana URL field I pasted http://192.168.2.50:9427.

It's working without issues with the node_exporter, which is running on the same host as ping_exporter.
I use Grafana version 8.3.3.
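A note on the likely cause: ping_exporter exposes metrics for scraping but is not itself a Prometheus server, and Grafana's Prometheus datasource must point at a Prometheus instance that scrapes the exporter, not at the exporter directly. A minimal scrape job for that Prometheus instance might look like this (job name assumed):

scrape_configs:
  - job_name: ping
    static_configs:
      - targets:
          - 192.168.2.50:9427

The Grafana URL field would then contain the Prometheus address (e.g. http://<prometheus-host>:9090) rather than the exporter's.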

Here is the output of curl from the Grafana host to the host running ping_exporter.

curl -I http://192.168.2.50:9427/metrics
HTTP/1.1 200 OK
Content-Type: text/plain; version=0.0.4; charset=utf-8
Date: Sat, 05 Feb 2022 10:16:49 GMT
Content-Length: 1453
curl http://192.168.2.50:9427/metrics
# HELP ping_loss_ratio Packet loss from 0.0 to 1.0
# TYPE ping_loss_ratio gauge
ping_loss_ratio{ip="127.0.0.1",ip_version="4",target="127.0.0.1"} 0
ping_loss_ratio{ip="8.8.8.8",ip_version="4",target="8.8.8.8"} 0
# HELP ping_rtt_best_seconds Best round trip time in seconds
# TYPE ping_rtt_best_seconds gauge
ping_rtt_best_seconds{ip="127.0.0.1",ip_version="4",target="127.0.0.1"} 0.00020129199326038362
ping_rtt_best_seconds{ip="8.8.8.8",ip_version="4",target="8.8.8.8"} 0.009192828178405762
# HELP ping_rtt_mean_seconds Mean round trip time in seconds
# TYPE ping_rtt_mean_seconds gauge
ping_rtt_mean_seconds{ip="127.0.0.1",ip_version="4",target="127.0.0.1"} 0.00037390387058258055
ping_rtt_mean_seconds{ip="8.8.8.8",ip_version="4",target="8.8.8.8"} 0.009936306953430176
# HELP ping_rtt_std_deviation_seconds Standard deviation in seconds
# TYPE ping_rtt_std_deviation_seconds gauge
ping_rtt_std_deviation_seconds{ip="127.0.0.1",ip_version="4",target="127.0.0.1"} 8.027490228414536e-05
ping_rtt_std_deviation_seconds{ip="8.8.8.8",ip_version="4",target="8.8.8.8"} 0.0005426779389381409
# HELP ping_rtt_worst_seconds Worst round trip time in seconds
# TYPE ping_rtt_worst_seconds gauge
ping_rtt_worst_seconds{ip="127.0.0.1",ip_version="4",target="127.0.0.1"} 0.0004887250065803528
ping_rtt_worst_seconds{ip="8.8.8.8",ip_version="4",target="8.8.8.8"} 0.011926032066345215
# HELP ping_up ping_exporter version
# TYPE ping_up gauge
ping_up{version="1.0.0"} 1

Thanks for your support.

Wolfgang

Run as non-root user on kubernetes

Hi!

I'm trying to run this exporter on Kubernetes without being the root user.

Currently, I can at least execute it as root but with dropped capabilities:

securityContext:
  capabilities:
    drop:
      - all
    add: ["NET_RAW"]

But when I change to a non-root user, with the following securityContext:

securityContext:
  runAsUser: 65534
  runAsNonRoot: true
  capabilities:
    drop:
      - all
    add: ["NET_RAW"]

it fails with:
ERRO[0000] cannot start monitoring: listen ip4:icmp 0.0.0.0: socket: operation not permitted

I have tried to add more capabilities (NET_ADMIN, SYS_ADMIN) without success.
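Background that may explain this: capabilities added via securityContext land in the container's bounding and permitted sets, but a process running under a non-root UID does not get them in its effective set unless the binary itself carries file capabilities. One possible workaround (a sketch; assumes a custom image, the setcap utility available at build time, and this binary path):

# Dockerfile fragment: grant the binary CAP_NET_RAW at build time
RUN setcap cap_net_raw+ep /app/ping_exporter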

Expose counters

It would be helpful to have raw counters (number of pings sent, pongs received, etc.) in addition to the aggregated values. That would allow one to use PromQL for aggregation.
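To illustrate the benefit, with hypothetical counters such as ping_packets_sent_total and ping_packets_received_total (invented names, not existing metrics), loss over any window becomes a plain PromQL expression:

1 - rate(ping_packets_received_total[5m]) / rate(ping_packets_sent_total[5m])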

config.yaml file parsing error

I just had to restart the host the ping_exporter runs on and now it won't start.

ping_exporter --config.path=/etc/ping_exporter/config.yml

INFO[0000]github.com/czerwonk/ping_exporter/main.go:78 main.main() rtt units: 2                                 
ping_exporter: error: could not load config.path: failed to decode YAML: yaml: unmarshal errors:
  line 6: cannot unmarshal !!map into string
usage: ping_exporter [<flags>] [<targets>...]

Here is the config.yaml - it's basically just the example given.

targets:
  - 8.8.8.8
  - 8.8.4.4
  - 2001:4860:4860::8888
  - 2001:4860:4860::8844
  - google.com:
      asn: 15169

dns:
  refresh: 2m15s
  nameserver: 1.1.1.1

ping:
  interval: 15s
  timeout: 3s
  history-size: 42 
  size: 120

options:
  disableIPv6: false
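A likely explanation: the error points at line 6 of the file, which is the google.com entry. Its nested map (asn: 15169) requires a build that supports the extended per-target syntax; older builds expect targets to be a plain list of strings and fail to unmarshal a map there. Until the binary is upgraded, a plain entry should parse (a sketch):

targets:
  - 8.8.8.8
  - google.com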

Can we get a new build?

It looks like the most recent release version was from back in March, but there have been a number of updates to the git repo since then. In particular, there is 943e338 which fixes the completely broken metric names, but is not present in the latest 0.4.7 release (making it a poor choice for anybody to actually use, IMHO).

Current master panics with "inconsistent label cardinality"

From one terminal, ping an unreachable target:

$ sudo ./ping_exporter 10.255.255.254

From another terminal, continuously request metrics:

$ while true; do curl -s localhost:9427/metrics; done

The panic occurs before any metrics are returned; it seems to happen during the request that would have returned metrics:

INFO[0000] adding target for host 10.255.255.254 (10.255.255.254)  source="target.go:72"
INFO[0000] Starting ping exporter (Version: 0.4.3)       source="main.go:147"
INFO[0000] Listening for /metrics on :9427               source="main.go:167"
panic: inconsistent label cardinality

goroutine 316 [running]:
github.com/czerwonk/ping_exporter/vendor/github.com/prometheus/client_golang/prometheus.MustNewConstMetric(0xc420193d50, 0x2, 0x3ff0000000000000, 0xc420133b60, 0x2, 0x2, 0x2, 0x2)
	/home/nicholash/Src/go/src/github.com/czerwonk/ping_exporter/vendor/github.com/prometheus/client_golang/prometheus/value.go:172 +0xb0
main.(*pingCollector).Collect(0xc4200a1640, 0xc420146f60)
	/home/nicholash/Src/go/src/github.com/czerwonk/ping_exporter/collector.go:70 +0x179
github.com/czerwonk/ping_exporter/vendor/github.com/prometheus/client_golang/prometheus.(*Registry).Gather.func2(0xc4200b5580, 0xc420146f60, 0x92ee60, 0xc4200a1640)
	/home/nicholash/Src/go/src/github.com/czerwonk/ping_exporter/vendor/github.com/prometheus/client_golang/prometheus/registry.go:433 +0x61
created by github.com/czerwonk/ping_exporter/vendor/github.com/prometheus/client_golang/prometheus.(*Registry).Gather
	/home/nicholash/Src/go/src/github.com/czerwonk/ping_exporter/vendor/github.com/prometheus/client_golang/prometheus/registry.go:431 +0x302

--log.path feature

Can we have a flag to configure where logs are written? Streaming directly to syslog is not always the best option.

Issue, or explanation needed

Hi,

I'm using ping_exporter (thanks for this tool!).
On the generated graph I can see this packet loss output:
[packet loss graph]

During the same time window (between 10:38 and 11:08) I had a ping command running against the same IP address, and only 2 pings got no answer:

Fri Oct 27 10:58:23 CEST 2023: no answer yet for icmp_seq=260
Fri Oct 27 11:06:50 CEST 2023: no answer yet for icmp_seq=767

My question is: how is ping_exporter able to generate a graph like the screenshot if only 2 pings got no answer?

Maybe the ping command does not show the same thing as ping_exporter?

Sorry if I'm not clear...

Thanks!

ping_loss_ratio not working (version 1.0.1)

After running for a period of time, ping_loss_ratio suddenly stopped working: other metrics kept updating in real time, but ping_loss_ratio remained at 1 until the exporter was restarted.

======before restart===============

[root@gz-yw-internet-ping-83-10 ping_exporter]# curl http://127.0.0.1:9102/metrics  
# HELP ping_loss_ratio Packet loss from 0.0 to 1.0
# TYPE ping_loss_ratio gauge
ping_loss_ratio{ip="119.29.29.29",ip_version="4",target="119.29.29.29"} 1
ping_loss_ratio{ip="180.76.76.76",ip_version="4",target="180.76.76.76"} 1
ping_loss_ratio{ip="223.5.5.5",ip_version="4",target="223.5.5.5"} 1
# HELP ping_up ping_exporter version
# TYPE ping_up gauge
ping_up{version="1.0.1"} 1
[root@gz-yw-internet-ping-83-10 ping_exporter]# ping 119.29.29.29
PING 119.29.29.29 (119.29.29.29) 56(84) bytes of data.
64 bytes from 119.29.29.29: icmp_seq=1 ttl=46 time=6.87 ms
64 bytes from 119.29.29.29: icmp_seq=2 ttl=46 time=6.96 ms
^C
--- 119.29.29.29 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 6.875/6.921/6.968/0.095 ms
[root@gz-yw-internet-ping-83-10 ping_exporter]# ping 180.76.76.76
PING 180.76.76.76 (180.76.76.76) 56(84) bytes of data.
64 bytes from 180.76.76.76: icmp_seq=1 ttl=49 time=30.6 ms
64 bytes from 180.76.76.76: icmp_seq=2 ttl=49 time=30.6 ms
^C
--- 180.76.76.76 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 30.626/30.633/30.640/0.007 ms
[root@gz-yw-internet-ping-83-10 ping_exporter]# curl http://127.0.0.1:9102/metrics
# HELP ping_loss_ratio Packet loss from 0.0 to 1.0
# TYPE ping_loss_ratio gauge
ping_loss_ratio{ip="119.29.29.29",ip_version="4",target="119.29.29.29"} 1
ping_loss_ratio{ip="180.76.76.76",ip_version="4",target="180.76.76.76"} 1
ping_loss_ratio{ip="223.5.5.5",ip_version="4",target="223.5.5.5"} 1
# HELP ping_up ping_exporter version
# TYPE ping_up gauge
ping_up{version="1.0.1"} 1

======after restart===============

[root@gz-yw-internet-ping-83-10 ~]# systemctl restart ping_exporter
[root@gz-yw-internet-ping-83-10 ~]# curl http://127.0.0.1:9102/metrics
# HELP ping_up ping_exporter version
# TYPE ping_up gauge
ping_up{version="1.0.1"} 1
[root@gz-yw-internet-ping-83-10 ~]# curl http://127.0.0.1:9102/metrics
# HELP ping_loss_ratio Packet loss from 0.0 to 1.0
# TYPE ping_loss_ratio gauge
ping_loss_ratio{ip="119.29.29.29",ip_version="4",target="119.29.29.29"} 0
ping_loss_ratio{ip="180.76.76.76",ip_version="4",target="180.76.76.76"} 0
ping_loss_ratio{ip="223.5.5.5",ip_version="4",target="223.5.5.5"} 0

==============
[graph screenshot]
There are no log entries:
[log screenshot]

IPv4 or v6

Thanks for this ping exporter, exactly what I needed! <3

An enhancement could be a parameter to choose whether we want IPv4 or IPv6 resolution.
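Partial support already exists via the options section shown in another config on this page; forcing IPv4-only resolution might therefore look like this (a sketch based on that example):

options:
  disableIPv6: true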

A semi-random subset of metrics can be returned for any given scrape

When configured to scrape a large-enough number of targets, the metrics provided by this exporter do not consistently include all configured targets.

I am running with 68 targets, half of which are unreachable (will time out) and half of which have varying ping times. The command line is approximately ping_exporter -ping.interval 1m -ping.timeout 10s <first ip> <second ip> ....

Running curl http://localhost:9427/metrics every few seconds returns different subsets of my monitored IPs. The first few curls might return all of the IPs which responded quickly, the next few curls might return the IPs which were slower to respond, and the next few curls might return all of the IPs which timed out.

# Scrape the server running the ping exporter every 2s, counting the number of ping_rtt_worst_ms lines:
$ for i in {1..30}; do curl -s server:9427/metrics|grep ping_rtt_worst_ms | wc -l; sleep 2; done
19
9
9
9
9
39
39
39
39
39
39
39
39
39
39
39
39
39
39
39
39
39
39
39
39
39
39
26
26
26

It seems like the same data is returned for a few requests in a row, then something changes and different data is returned (not including the same IPs as the previous requests). This is consistent with collector.go, which appears to delete all information about previously-known ping statistics whenever new results arrive.

In real-world terms, I've got a HA Prometheus setup with multiple Prometheus instances scraping the same ping_exporter instances. Usually exporters return consistent data for consecutive scrapes, but this behaviour of clearing old data means that the different Prometheus instances each see a different subset of the data, and do not agree on alerting.

Provide build system

Provide a build system, e.g. a Makefile; it would also be nice to have a target for building Debian packages.
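A minimal starting point could look like this (a sketch; the target names are assumptions, not an existing file):

build:
	go build -o ping_exporter .

test:
	go test ./...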

Systemd failed - wrong flags

Env
CentOS 7.6 cloud instance, systemd-219-67.el7

Failed Systemd Unit
[Unit]
Description=Prometheus ping_exporter
After=network.target

[Service]
Type=simple
User=prometheus
Group=prometheus

ExecStart=/usr/local/bin/ping_exporter --web.listen-address=":9116" --config.path="/etc/ping_exporter/config.yaml"
SyslogIdentifier=prometheus_ping_exporter
Restart=always
RestartSec=3

[Install]
WantedBy=multi-user.target

Description
When the exporter is started with this unit file, using the flags from the exporter's examples/help, it fails with the error "no such file / permission denied /etc/ping_exporter/config.yaml". However, the permissions are correct, and I can run the exporter with the same ExecStart command as the prometheus user. After some painful research I found that something is wrong with other flags too: log.level (error: "not valid level for logrus"), web.listen-address (error: cannot listen on socket), etc.

Solution
In my case, the ping_exporter systemd unit works fine with flags supplied as "--flag value", i.e. without '='.

Other info
This issue is not reproducible via the CLI, so it may not be an exporter issue in the first place. (A plausible cause: systemd only interprets quotes that wrap a whole word, so in --config.path="/etc/..." the quotes are passed literally as part of the value; with "--flag value" the quoted value is its own word and the quotes are stripped.)
The main purpose of this issue is to help other users who are struggling with systemd.

Working systemd unit:

[Unit]
Description=Prometheus ping_exporter
After=network.target

[Service]
Type=simple
User=prometheus
Group=prometheus

ExecStart=/usr/local/bin/ping_exporter --web.listen-address ":9116" --config.path "/etc/ping_exporter/config.yaml"
SyslogIdentifier=prometheus_ping_exporter
Restart=always
RestartSec=3

[Install]
WantedBy=multi-user.target

main.go undefined: target

G'day,

This project looks pretty cool, but seems like it doesn't build, although maybe I'm being silly.

das@das-dell5580:~/go/src/github.com/czerwonk$ rm -rf ping_exporter/
das@das-dell5580:~/go/src/github.com/czerwonk$ git clone https://github.com/czerwonk/ping_exporter.git
Cloning into 'ping_exporter'...
remote: Enumerating objects: 25, done.
remote: Counting objects: 100% (25/25), done.
remote: Compressing objects: 100% (22/22), done.
remote: Total 3162 (delta 8), reused 4 (delta 0), pack-reused 3137
Receiving objects: 100% (3162/3162), 5.35 MiB | 7.58 MiB/s, done.
Resolving deltas: 100% (1092/1092), done.
das@das-dell5580:~/go/src/github.com/czerwonk$ cd ping_exporter/
das@das-dell5580:~/go/src/github.com/czerwonk/ping_exporter$ go build main.go 
# command-line-arguments
./main.go:138:21: undefined: target
./main.go:140:9: undefined: target
./main.go:159:61: undefined: target
./main.go:169:28: undefined: target
./main.go:172:15: undefined: target
./main.go:188:20: undefined: pingCollector

Possibly:

  • a Makefile would make it clearer how to build this.
  • target.go seems to define "target"
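The likely cause: go build main.go compiles only that single file, so symbols defined in target.go and collector.go are missing. Building the whole package resolves them:

go build .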

New version stopped working

I have been running with this simple config for a few months

docker-compose.yml:

  ping-exporter:
    volumes:
      - ./config:/config:ro
    image: czerwonk/ping_exporter

config

targets:
  - 1.1.1.1
  - 192.168.1.1

Since an update on 6.2.2019 the container exits with:

ping_exporter: error: ping.history-size must be greater than 0
usage: ping_exporter [<flags>] [<targets>...]
...

Since I'm not specifying the history size anywhere, I assume it has a default value. Does that mean the default is 0? Could it be changed to a sensible default so that it works without needing to be specified? Also, the README examples don't mention this.
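As a workaround until the default is addressed, setting the value explicitly in the config's ping section (option name taken from other configs on this page) should get the container running again:

ping:
  history-size: 10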

ping_loss_percent

Hi! Thanks for the exporter!
Could you kindly explain how ping_loss_percent reports its value?
Why do I see 1 on an inactive host? Shouldn't it be 100? And why do I also see a value of 1 on working hosts?
I assumed it would be a percentage, e.g. rate(ping_loss_percent[2m]) = 100 meaning that in the last 2 minutes 100% of packets were lost.
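For what it's worth, later versions shown elsewhere on this page rename the metric to ping_loss_ratio with the HELP text "Packet loss from 0.0 to 1.0", which suggests the value is a ratio rather than a percentage despite the _percent name: 1 means 100% loss. Converting to a percentage in PromQL is then just a scaling:

ping_loss_percent * 100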

Add target name

Add a target name or alias for each target, if targets don't have a hostname.

multi IP protocol support

ping_exporter should be able to ping both IPv4 and IPv6 (if present) when given a DNS name.

  • add ip_protocol as a new label

add more control on ping interval

It would be interesting to have more control over ping intervals.

For instance, I would like to be able to send 5 pings at a 20ms interval every 10 seconds (a hypothetical config sketch follows below).

This is useful for precise VoIP network monitoring.
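Purely to illustrate the request, a hypothetical config shape (none of these keys exist today):

ping:
  interval: 10s        # hypothetical: time between bursts
  burst-count: 5       # hypothetical: pings per burst
  burst-interval: 20ms # hypothetical: spacing within a burst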

Labels

I would like to add labels to my addresses.
