
collectd_exporter's Introduction

Prometheus

Visit prometheus.io for the full documentation, examples and guides.


Prometheus, a Cloud Native Computing Foundation project, is a systems and service monitoring system. It collects metrics from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts when specified conditions are observed.

The features that distinguish Prometheus from other metrics and monitoring systems are:

  • A multi-dimensional data model (time series defined by metric name and set of key/value dimensions)
  • PromQL, a powerful and flexible query language to leverage this dimensionality
  • No dependency on distributed storage; single server nodes are autonomous
  • An HTTP pull model for time series collection (a minimal example target is sketched after this list)
  • Pushing time series is supported via an intermediary gateway for batch jobs
  • Targets are discovered via service discovery or static configuration
  • Multiple modes of graphing and dashboarding support
  • Support for hierarchical and horizontal federation
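
To make the pull model concrete, here is a minimal sketch of a scrape target written with the official Go client library (github.com/prometheus/client_golang); the metric name, port, and handler are arbitrary examples, not part of Prometheus itself:

package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// A counter incremented on every request to "/"; Prometheus pulls its current
// value whenever it scrapes /metrics.
var requestsTotal = promauto.NewCounter(prometheus.CounterOpts{
	Name: "myapp_http_requests_total",
	Help: "Total number of handled HTTP requests.",
})

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		requestsTotal.Inc()
		w.Write([]byte("hello\n"))
	})
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":8080", nil))
}

Pointing a scrape job at :8080/metrics then lets Prometheus collect the counter at its configured interval.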

Architecture overview

[Architecture overview diagram]

Install

There are various ways of installing Prometheus.

Precompiled binaries

Precompiled binaries for released versions are available in the download section on prometheus.io. Using the latest production release binary is the recommended way of installing Prometheus. See the Installing chapter in the documentation for all the details.

Docker images

Docker images are available on Quay.io or Docker Hub.

You can launch a Prometheus container for trying it out with

docker run --name prometheus -d -p 127.0.0.1:9090:9090 prom/prometheus

Prometheus will now be reachable at http://localhost:9090/.

Building from source

To build Prometheus from source code, you need a working Go toolchain (and Node.js with npm if you want to build the React UI assets).

Start by cloning the repository:

git clone https://github.com/prometheus/prometheus.git
cd prometheus

You can use the go tool to build and install the prometheus and promtool binaries into your GOPATH:

GO111MODULE=on go install github.com/prometheus/prometheus/cmd/...
prometheus --config.file=your_config.yml

However, when using go install to build Prometheus, Prometheus will expect to be able to read its web assets from local filesystem directories under web/ui/static and web/ui/templates. In order for these assets to be found, you will have to run Prometheus from the root of the cloned repository. Note also that these directories do not include the React UI unless it has been built explicitly using make assets or make build.

An example of the above configuration file can be found here.

You can also build using make build, which will compile in the web assets so that Prometheus can be run from anywhere:

make build
./prometheus --config.file=your_config.yml

The Makefile provides several targets:

  • build: build the prometheus and promtool binaries (includes building and compiling in web assets)
  • test: run the tests
  • test-short: run the short tests
  • format: format the source code
  • vet: check the source code for common errors
  • assets: build the React UI

Service discovery plugins

Prometheus is bundled with many service discovery plugins. When building Prometheus from source, you can edit the plugins.yml file to disable some service discovery mechanisms. The file is a YAML-formatted list of Go import paths that will be built into the Prometheus binary.

After you have changed the file, you need to run make build again.

If you are using another method to compile Prometheus, make plugins will generate the plugins file accordingly.

If you add out-of-tree plugins, which we do not endorse at the moment, additional steps might be needed to adjust the go.mod and go.sum files. As always, be extra careful when loading third party code.

Building the Docker image

The make docker target is designed for use in our CI system. You can build a docker image locally with the following commands:

make promu
promu crossbuild -p linux/amd64
make npm_licenses
make common-docker-amd64

Using Prometheus as a Go Library

Remote Write

We are publishing our Remote Write protobuf independently at buf.build.

You can use that as a library:

go get buf.build/gen/go/prometheus/prometheus/protocolbuffers/go@latest

This is experimental.

Prometheus code base

In order to comply with go mod rules, Prometheus release numbers do not exactly match Go module versions. For the Prometheus v2.y.z releases, we are publishing equivalent v0.y.z tags.

Therefore, a user who wants to use Prometheus v2.35.0 as a library can run:

go get github.com/prometheus/prometheus@v0.35.0

This solution makes it clear that we might break our internal Go APIs between minor user-facing releases, as breaking changes are allowed in major version zero.
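
For illustration, the resulting requirement in a consuming project's go.mod would look roughly like this (the module path example.com/myapp is a hypothetical placeholder):

module example.com/myapp

go 1.19

require github.com/prometheus/prometheus v0.35.0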

React UI Development

For more information on building, running, and developing on the React-based UI, see the React app's README.md.

More information

  • Godoc documentation is available via pkg.go.dev. Due to peculiarities of Go Modules, v2.x.y will be displayed as v0.x.y.
  • See the Community page for how to reach the Prometheus developers and users on various communication channels.

Contributing

Refer to CONTRIBUTING.md

License

Apache License 2.0, see LICENSE.

collectd_exporter's People

Contributors

beorn7, blkperl, brandonweeks, brian-brazil, carlpett, dependabot[bot], discordianfish, fabxc, gambol99, grobie, hasso, infra-red, inosato, iwinux, juliusv, kormat, lig, m42u, octo, prombot, roidelapluie, sdurrheimer, simonpasquier, stuartnelson3, superq, tclavier, tuannh99, vitt-bagal

collectd_exporter's Issues

error "*** was collected before with the same name and label values"

An error has occurred during metrics collection:

300 error(s) occurred:
* collected metric collectd_PCHMonitor__User_BSOS1_SweepFramework_SweepQueue_AbandonedContentDeletionSweep_gauge gauge:<value:0 >  was collected before with the same name and label values
* collected metric collectd_PCHMonitor__CPU_gauge gauge:<value:150 >  was collected before with the same name and label values

When using collectd exporter v0.4.0, we run into the above error. From searching around, this error appears to be caused by duplicate metrics. We are aware that the best option is to remove the duplicates, but with a similar configuration the same metrics work fine with the Graphite writer, which suggests this exporter is not flexible enough.

We found a possible workaround here: prometheus/node_exporter#377 (comment). Not sure whether collectd_exporter supports this.

Release

While Prometheus support in collectd will make collectd_exporter obsolete, we are not there yet and it would be nice to have a release.

imports context: unrecognized import path "context"

Hi,

I tried to fetch collectd_exporter with the go get command and this error is triggered:

$ go get github.com/prometheus/collectd_exporter
package github.com/prometheus/collectd_exporter
    imports collectd.org/api
    imports context: unrecognized import path "context"

Protection of web endpoint

Hi,

it seems like there's no way of protecting the web endpoint, although Prometheus supports both basic auth and TLS certificates (at least according to the docs :)).

Any chance of collectd_exporter growing them? I would prefer not to install an nginx on each server just to be able to use HTTPS for my metrics…

Metrics might not be the most sensitive data, and our metrics subnet is private, but on customer servers, for example, I'm uncomfortable exposing all system statistics to all users.

Collectd exporter with heavy load

Can I use one collectd-exporter instance to accept metrics from the collectd daemons of 5-6 VMs, each sending via the binary network protocol every 5-10 seconds?

Can collectd_exporter handle that load?

Not receiving any metrics from collectd

Hi folks,

I'm trying to pipe collectd metrics through to Prometheus, but I am not receiving any metrics other than the internal metrics of the collectd exporter. collectd seems to be talking to the exporter properly, as the collectd_last_push_timestamp_seconds metric is getting updated regularly according to the set interval. The RRD files on my device running collectd are being regularly updated as well, and using rrdtool I have confirmed that collectd is updating its metrics locally. I have also used tcpdump and confirmed that the machine running collectd_exporter is receiving traffic from the machine gathering metrics.

I've tried both the "network" and "write_http" plugins for collectd. Possibly related, I only see the collectd_last_push_timestamp_seconds metric updated with the "write_http" plugin, but otherwise still get no collectd metrics with either one.

Do you have any pointers to how I can further debug why the collectd metrics are not showing up on the prometheus end? Or any insights to what may be wrong with my configuration?

Thanks for the help!

Release 0.1.0 - 05.05.2015 does not work

The documentation for collectd_exporter does not match the version in the release tab - perhaps it's old?

./collectd_exporter -collectd.listen-address=":25826"
flag provided but not defined: -collectd.listen-address

Support descriptive names from JSON payload.

collectd knows what the various fields in a metric are, and passes them in the JSON. The exporter should use this information.

For example, collectd_processes_ps_cputime_0 is user and ..._1 is syst.
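
A minimal, self-contained sketch of the kind of name mapping this would imply; the valueList type below is a hypothetical stand-in, not the exporter's actual types, and the numeric fallback only loosely mirrors the current behaviour:

package main

import "fmt"

// Hypothetical, simplified value list; the fields mirror collectd's JSON
// payload ("plugin", "type", "dsnames") but are not the exporter's real types.
type valueList struct {
	Plugin  string
	Type    string
	DSNames []string
}

// metricName derives a Prometheus metric name, preferring the descriptive
// data-source name over a numeric index when one is present.
func metricName(vl valueList, index int) string {
	name := "collectd_" + vl.Plugin + "_" + vl.Type
	if index < len(vl.DSNames) && vl.DSNames[index] != "value" {
		name += "_" + vl.DSNames[index]
	} else if len(vl.DSNames) > 1 {
		name += fmt.Sprintf("_%d", index)
	}
	return name
}

func main() {
	vl := valueList{Plugin: "processes", Type: "ps_cputime", DSNames: []string{"user", "syst"}}
	fmt.Println(metricName(vl, 0)) // collectd_processes_ps_cputime_user
	fmt.Println(metricName(vl, 1)) // collectd_processes_ps_cputime_syst
}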

Blocked in collectd server

Experiencing a very weird issue… (Note: I'm using prom/collectd-exporter:latest.) I get:

/bin $ /bin/collectd_exporter -collectd.listen-address=":25826" 
panic: runtime error: index out of range

goroutine 1 [running]:
net.IP.IsMulticast(0x0, 0x0, 0x0, 0x6)
    /usr/local/go/src/net/ip.go:132 +0x9b
collectd.org/network.(*Server).ListenAndWrite(0xc20802e5a0, 0x0, 0x0)
    /app/.build/gopath/src/collectd.org/network/server.go:49 +0x125
main.startCollectdServer(0x7fe1919972c0, 0xc20800aa80)
    /app/main.go:247 +0x284
main.main()
    /app/main.go:256 +0xa2

goroutine 7 [select]:
main.(*collectdCollector).processSamples(0xc20800aa80)
    /app/main.go:156 +0x43f
created by main.newCollectdCollector
    /app/main.go:131 +0x118

I can avoid this by using -collectd.listen-address="0.0.0.0:25826"; however, whenever the collectd interface is enabled, the Prometheus endpoint is never started.

[jest@starfury services]$ netstat -antu | grep 9103
[jest@starfury services]$ netstat -antu | grep 25826
udp6       0      0 :::25826                :::* 

i.e. it never gets to the http.ListenAndServe(*webAddress, nil) line

[jest@starfury services]$ ps aux | grep collect
jest     14923  0.0  0.0   8648  4512 pts/0    Sl+  13:33   0:00 ./collectd_exporter -collectd.listen-address=0.0.0.0:25826
jest     14952  0.0  0.0 114332  2320 pts/1    S+   13:35   0:00 grep --color=auto collect

goroutine 9 [syscall]:
runtime.notetsleepg(0x96e0b8, 0xdf845c6cf, 0x0)
    /usr/lib/golang/src/runtime/lock_futex.go:201 +0x52 fp=0xc20801d768 sp=0xc20801d740
runtime.timerproc()
    /usr/lib/golang/src/runtime/time.go:207 +0xfa fp=0xc20801d7e0 sp=0xc20801d768
runtime.goexit()
    /usr/lib/golang/src/runtime/asm_amd64.s:2232 +0x1 fp=0xc20801d7e8 sp=0xc20801d7e0
created by runtime.addtimerLocked
    /usr/lib/golang/src/runtime/time.go:113 +0x1ba

goroutine 1 [IO wait]:
net.(*pollDesc).Wait(0xc208010ae0, 0x72, 0x0, 0x0)
    /usr/lib/golang/src/net/fd_poll_runtime.go:84 +0x47
net.(*pollDesc).WaitRead(0xc208010ae0, 0x0, 0x0)
    /usr/lib/golang/src/net/fd_poll_runtime.go:89 +0x43
net.(*netFD).Read(0xc208010a80, 0xc20809c000, 0x5ac, 0x5ac, 0x0, 0x7f055a63cc68, 0xc208032c18)
    /usr/lib/golang/src/net/fd_unix.go:242 +0x40f
net.(*conn).Read(0xc208036050, 0xc20809c000, 0x5ac, 0x5ac, 0x5ac, 0x0, 0x0)
    /usr/lib/golang/src/net/net.go:121 +0xdc
collectd.org/network.(*Server).ListenAndWrite(0xc2080385a0, 0x0, 0x0)
    /home/jest/go/src/github.com/gambol99/collectd_exporter/.build/gopath/src/collectd.org/network/server.go:76 +0x323
main.startCollectdServer(0x7f055a63d2c0, 0xc20800aa80)
    /home/jest/go/src/github.com/gambol99/collectd_exporter/main.go:247 +0x284
main.main()
    /home/jest/go/src/github.com/gambol99/collectd_exporter/main.go:256 +0xa2

goroutine 7 [select]:
main.(*collectdCollector).processSamples(0xc20800aa80)
    /home/jest/go/src/github.com/gambol99/collectd_exporter/main.go:156 +0x43f
created by main.newCollectdCollector
    /home/jest/go/src/github.com/gambol99/collectd_exporter/main.go:131 +0x118

rax     0xfffffffffffffffc
rbx     0x3b9938cf
rcx     0x440503

You can see the main goroutine inside the collectd code waiting for metrics to come in. The issue appears to be line 247:

go log.Fatal(srv.ListenAndWrite())

Because the arguments of a go statement are evaluated in the calling goroutine before the new goroutine starts, srv.ListenAndWrite() runs (and blocks) in main itself, so execution never reaches http.ListenAndServe. Changing this so that srv.ListenAndWrite() runs within the body of the goroutine fixed the issue for me:

go func() {
    log.Fatal(srv.ListenAndWrite())
}()

[jest@starfury services]$ netstat -antu | grep 25826
udp6 0 0 :::25826 :::*
[jest@starfury services]$ netstat -antu | grep 9103
tcp6 0 0 :::9103 :::* LISTEN

I'll raise a PR for it?
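
For context, here is a small self-contained demonstration of why the original pattern blocks: in a go statement, the arguments are evaluated in the calling goroutine before the new goroutine starts. The blockingCall function below is just a stand-in for srv.ListenAndWrite(), not the exporter's code:

package main

import (
	"fmt"
	"log"
	"time"
)

// blockingCall stands in for srv.ListenAndWrite(): it only returns on error.
func blockingCall() error {
	time.Sleep(2 * time.Second) // pretend to serve forever
	return fmt.Errorf("listener stopped")
}

func main() {
	// Buggy pattern (commented out): the argument log.Fatal receives is the
	// *result* of blockingCall(), so blockingCall() runs right here in main
	// and blocks before any goroutine is started.
	//
	//   go log.Fatal(blockingCall())

	// Fixed pattern: the call happens inside the goroutine's body, so main
	// continues immediately and can go on to start the HTTP server.
	go func() {
		log.Fatal(blockingCall())
	}()

	fmt.Println("main continues immediately, e.g. to http.ListenAndServe")
	time.Sleep(3 * time.Second)
}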

Metric timestamps should be passed through

Collectd supplies POSIX timestamps, and Prometheus understands them in the exposition formats. However, collectd_exporter exports all metrics with the implied "now" timestamp. This leads to calculation errors. For example, I recently had a normally once-per-minute counter that was scraped only 52s after the preceding scrape (due to a Prometheus restart). The underlying counter values were time-stamped 1 minute apart in the source collectd, but Prometheus thought they were only 52s apart, so it computed the rate incorrectly: an increase that really accumulated over 60s was divided by 52s, inflating the rate by roughly 60/52 ≈ 1.15. The result was an apparent spike to over 110% CPU.

metrics endpoint is broken with signalfx-metadata plugin

curl -g collectd-exporter.monitoring.svc:9103/metrics 
An error has occurred during metrics collection:

collected metric collectd_signalfx_metadata_gauge gauge:<value:910.000819206238 >  was collected before with the same name and label values

This issue occurs with the latest version (0.4).

When I downgrade to 0.3.1, I see invalid Prometheus metric names being generated:

curl -g collectd-exporter.monitoring.svc:9103/metrics | grep "collectd"
# HELP collectd_exporter_build_info A metric with a constant '1' value labeled by version, revision, branch, and goversion from which collectd_exporter was built.
# TYPE collectd_exporter_build_info gauge
collectd_exporter_build_info{branch="cut-0.3.0",goversion="go1.6.2",revision="3abb95c",version="0.3.1"} 1
# HELP collectd_last_push_timestamp_seconds Unix timestamp of the last received collectd metrics push in seconds.
# TYPE collectd_last_push_timestamp_seconds gauge
collectd_last_push_timestamp_seconds 1.5447246335487258e+09
# HELP collectd_signalfx-metadata_counter_total Collectd exporter: 'signalfx-metadata' Type: 'counter' Dstype: 'api.Counter' Dsname: 'value'
# TYPE collectd_signalfx-metadata_counter_total counter
collectd_signalfx-metadata_counter_total 0
# HELP collectd_signalfx-metadata_gauge Collectd exporter: 'signalfx-metadata' Type: 'gauge' Dstype: 'api.Gauge' Dsname: 'value'
# TYPE collectd_signalfx-metadata_gauge gauge
collectd_signalfx-metadata_gauge 13252.9735565186
collectd_signalfx-metadata_gauge 1430.0007982254
# HELP collectd_statsd_count Collectd exporter: 'statsd' Type: 'count' Dstype: 'api.Gauge' Dsname: 'value'
# TYPE collectd_statsd_count gauge
collectd_statsd_count{instance="collectd-statsd-7f7587bc46-qj2nh",statsd="prometheus_target_kbasync_lenh_secc"} 100
# HELP collectd_statsd_derive Collectd exporter: 'statsd' Type: 'derive' Dstype: 'api.Derive' Dsname: 'value'
# TYPE collectd_statsd_derive counter
collectd_statsd_derive{instance="collectd-statsd-7f7587bc46-qj2nh",statsd="prometheus_target_kbasync_lenh_secc"} 300

Here the invalid metric name is collectd_signalfx-metadata_counter_total; note the "-".

In both cases this causes a scraping error in Prometheus.

only limited set of types processed

When I query the metrics endpoint, it does not include much of the collectd data I am collecting.
On the nodes where I am running collectd, the following plugins are enabled:

[root@compute1 collectd.d]# ls
cpu.conf  disk.conf  interface.conf  libvirt.conf  load.conf  memory.conf  network.conf  swap.conf  syslog.conf

However, when I query the endpoint on my Prometheus server, I only see the following (e.g. no virt stats). I temporarily set up an InfluxDB with a collectd listener and it gets all of the stats, so they are being sent. Is there some additional configuration that I am missing?

[robm@sysmgr ~]$ curl http://localhost:9103/metrics | grep -v '^#'

collectd_exporter_build_info{branch="cut-0.3.0",goversion="go1.6.2",revision="3abb95c",version="0.3.1"} 1
collectd_last_push_timestamp_seconds 0
go_gc_duration_seconds{quantile="0"} 7.452400000000001e-05
go_gc_duration_seconds{quantile="0.25"} 0.000101897
go_gc_duration_seconds{quantile="0.5"} 0.000103992
go_gc_duration_seconds{quantile="0.75"} 0.000108567
go_gc_duration_seconds{quantile="1"} 0.000231744
go_gc_duration_seconds_sum 0.011762737
go_gc_duration_seconds_count 110
go_goroutines 13
go_memstats_alloc_bytes 2.322928e+06
go_memstats_alloc_bytes_total 3.4490968e+08
go_memstats_buck_hash_sys_bytes 1.449911e+06
go_memstats_frees_total 222582
go_memstats_gc_sys_bytes 210944
go_memstats_heap_alloc_bytes 2.322928e+06
go_memstats_heap_idle_bytes 3.334144e+06
go_memstats_heap_inuse_bytes 2.6624e+06
go_memstats_heap_objects 6883
go_memstats_heap_released_bytes_total 0
go_memstats_heap_sys_bytes 5.996544e+06
go_memstats_last_gc_time_seconds 1.4685134677145672e+09
go_memstats_lookups_total 2321
go_memstats_mallocs_total 229465
go_memstats_mcache_inuse_bytes 1200
go_memstats_mcache_sys_bytes 16384
go_memstats_mspan_inuse_bytes 14280
go_memstats_mspan_sys_bytes 32768
go_memstats_next_gc_bytes 4.194304e+06
go_memstats_other_sys_bytes 524609
go_memstats_stack_inuse_bytes 294912
go_memstats_stack_sys_bytes 294912
go_memstats_sys_bytes 8.526072e+06
http_request_duration_microseconds{handler="prometheus",quantile="0.5"} 2603.717
http_request_duration_microseconds{handler="prometheus",quantile="0.9"} 3856.373
http_request_duration_microseconds{handler="prometheus",quantile="0.99"} 3956.057
http_request_duration_microseconds_sum{handler="prometheus"} 711660.7530000001
http_request_duration_microseconds_count{handler="prometheus"} 230
http_request_size_bytes{handler="prometheus",quantile="0.5"} 305
http_request_size_bytes{handler="prometheus",quantile="0.9"} 305
http_request_size_bytes{handler="prometheus",quantile="0.99"} 305
http_request_size_bytes_sum{handler="prometheus"} 67972
http_request_size_bytes_count{handler="prometheus"} 230
http_requests_total{code="200",handler="prometheus",method="get"} 230
http_response_size_bytes{handler="prometheus",quantile="0.5"} 1594
http_response_size_bytes{handler="prometheus",quantile="0.9"} 1601
http_response_size_bytes{handler="prometheus",quantile="0.99"} 1601
http_response_size_bytes_sum{handler="prometheus"} 416246
http_response_size_bytes_count{handler="prometheus"} 230
process_cpu_seconds_total 0.94
process_max_fds 1.048576e+06
process_open_fds 8
process_resident_memory_bytes 9.621504e+06
process_start_time_seconds 1.46850995405e+09
process_virtual_memory_bytes 1.654784e+07

Incompatibility with collectd Plugin:ConnTrack

Hi,

When exporting data from a collectd instance having the ConnTrack plugin activated, I get this error:

# curl -s -D /dev/stderr localhost:9103/metrics
HTTP/1.1 500 Internal Server Error
Content-Type: text/plain; charset=utf-8
X-Content-Type-Options: nosniff
Date: Fri, 26 Jan 2018 15:03:48 GMT
Content-Length: 292

An error has occurred during metrics collection:

collected metric collectd_conntrack label:<name:"conntrack" value:"max" > label:<name:"instance" value:"gw1.example.net" > gauge:<value:256000 >  has label dimensions inconsistent with previously collected metrics in the same metric family

It seems that there is a mismatch between metrics received over time, causing the exporter to stop functioning properly.

Any idea of something obvious I missed here?

Kind regards,
Vincent

Derive type support

Hi there,

I'm trying to use the PostgreSQL collectd plugin to export metrics to Prometheus.
Most of the stats in PostgreSQL are of the derive type, but when I try to plot the graph, it looks as if the type were gauge.
I'm using the :latest Docker image. When I tried to upgrade to :master, all the derive metrics went away, leaving me with no data to work with.

My question is: is the derive type supported by the exporter?

TIA.

invalid metric name "collectd_docker_cpu.percent"

Hi Team,

I am getting an invalid metric name error in Prometheus targets when I configure collectd_exporter.
The Docker metric collectd_docker_cpu.percent contains ".", which is causing the issue.
Is there any way to fix it?
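
For reference, Prometheus metric names must match the pattern [a-zA-Z_:][a-zA-Z0-9_:]*, so "." (and "-") are invalid. A minimal sketch of the kind of sanitization needed, not the exporter's actual code:

package main

import (
	"fmt"
	"regexp"
)

// invalidChars matches every character that is not allowed in a Prometheus
// metric name (leading-digit handling is omitted for brevity).
var invalidChars = regexp.MustCompile(`[^a-zA-Z0-9_:]`)

func sanitizeName(name string) string {
	return invalidChars.ReplaceAllString(name, "_")
}

func main() {
	fmt.Println(sanitizeName("collectd_docker_cpu.percent")) // collectd_docker_cpu_percent
}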

collectd.listen-address doesn't change listen port

Hello!
I run collectd_exporter

collectd_exporter --collectd.listen-address=":25826"

INFO[0000] Starting collectd_exporter (version=0.4.0, branch=HEAD, revision=ad6568ba50d6dabfb03588f42b05adb35fe21019)  source="main.go:303"
INFO[0000] Build context (go=go1.7.6, user=root@ff2f923e5942, date=20180122-15:19:23)  source="main.go:304"
INFO[0000] Listening on :9103                            source="main.go:326"

netstat -tnlp | grep collectd_expor
tcp6 0 0 :::9103 :::* LISTEN 2485/collectd_expor

But collectd.listen-address doesn't change the listen port.
How do I change the listen port?

errors in metrics output

When I run 0.4.0, I get the following when looking at the Prometheus output:

An error has occurred during metrics collection:

62 error(s) occurred:
* collected metric collectd_ctl_disk_octets_0_total label:<name:"ctl" value:"iscsi" > label:<name:"instance" value:"mediastore.home" > label:<name:"type" value:"1-10" > counter:<value:7.4947129e+07 >  has label dimensions inconsistent with previously collected metrics in the same metric family
* collected metric collectd_ctl_disk_octets_1_total label:<name:"ctl" value:"iscsi" > label:<name:"instance" value:"mediastore.home" > label:<name:"type" value:"1-10" > counter:<value:6.2783488e+07 >  has label dimensions inconsistent with previously collected metrics in the same metric family
* collected metric collectd_ctl_disk_octets_0_total label:<name:"ctl" value:"iscsi" > label:<name:"instance" value:"mediastore.home" > label:<name:"type" value:"1-2" > counter:<value:2.122446e+06 >  has label dimensions inconsistent with previously collected metrics in the same metric family
* collected metric collectd_ctl_disk_octets_1_total label:<name:"ctl" value:"iscsi" > label:<name:"instance" value:"mediastore.home" > label:<name:"type" value:"1-2" > counter:<value:0 >  has label dimensions inconsistent with previously collected metrics in the same metric family
* collected metric collectd_ctl_disk_ops_0_total label:<name:"ctl" value:"tpc" > label:<name:"instance" value:"mediastore.home" > label:<name:"type" value:"0-0" > counter:<value:0 >  has label dimensions inconsistent with previously collected metrics in the same metric family
* collected metric collectd_ctl_disk_ops_1_total label:<name:"ctl" value:"tpc" > label:<name:"instance" value:"mediastore.home" > label:<name:"type" value:"0-0" > counter:<value:0 >  has label dimensions inconsistent with previously collected metrics in the same metric family
* collected metric collectd_ctl_disk_ops_0_total label:<name:"ctl" value:"iscsi" > label:<name:"instance" value:"mediastore.home" > label:<name:"type" value:"1-5" > counter:<value:0 >  has label dimensions inconsistent with previously collected metrics in the same metric family
* collected metric collectd_ctl_disk_ops_1_total label:<name:"ctl" value:"iscsi" > label:<name:"instance" value:"mediastore.home" > label:<name:"type" value:"1-5" > counter:<value:0 >  has label dimensions inconsistent with previously collected metrics in the same metric family
* collected metric collectd_ctl_disk_octets_0_total label:<name:"ctl" value:"iscsi" > label:<name:"instance" value:"mediastore.home" > label:<name:"type" value:"1-8" > counter:<value:2.2153884e+07 >  has label dimensions inconsistent with previously collected metrics in the same metric family
* collected metric collectd_ctl_disk_octets_1_total label:<name:"ctl" value:"iscsi" > label:<name:"instance" value:"mediastore.home" > label:<name:"type" value:"1-8" > counter:<value:6.1763584e+07 >  has label dimensions inconsistent with previously collected metrics in the same metric family
* collected metric collectd_ctl_disk_octets_0_total label:<name:"ctl" value:"iscsi" > label:<name:"instance" value:"mediastore.home" > label:<name:"type" value:"1-1" > counter:<value:2.0629049e+07 >  has label dimensions inconsistent with previously collected metrics in the same metric family
* collected metric collectd_ctl_disk_octets_1_total label:<name:"ctl" value:"iscsi" > label:<name:"instance" value:"mediastore.home" > label:<name:"type" value:"1-1" > counter:<value:3.83522816e+08 >  has label dimensions inconsistent with previously collected metrics in the same metric family
* collected metric collectd_ctl_disk_ops_0_total label:<name:"ctl" value:"iscsi" > label:<name:"instance" value:"mediastore.home" > label:<name:"type" value:"1-3" > counter:<value:78676 >  has label dimensions inconsistent with previously collected metrics in the same metric family
* collected metric collectd_ctl_disk_ops_1_total label:<name:"ctl" value:"iscsi" > label:<name:"instance" value:"mediastore.home" > la

Any ideas? These are coming from a FreeNAS collectd instance.

Prefix to differentiate metrics from different clusters

We are trying to migrate from our existing setup of collectd->graphite->grafana to collectd->collectd_exporter->prometheus->grafana. However, we are having difficulty differentiating metrics from different clusters. For example, when using the write_graphite plugin, we could use the Prefix keyword to add a custom prefix that distinguishes metrics from different sources in Graphite/Grafana.

<Plugin write_graphite>
  <Carbon>
    Host "abc.example.com"
    Port "2003"
    Prefix "test."
    Protocol "tcp"
    LogSendErrors true
    StoreRates true
    AlwaysAppendDS false
    EscapeCharacter "_"
  </Carbon>
</Plugin>

However, when using the network plugin to send metrics to collectd_exporter, there does not seem to be an equivalent "prefix" keyword or similar. So how would one construct dashboards/queries on a per-cluster basis (differentiating metrics from different OpenStack clusters, for example cluster-1, cluster-2, etc.)?

When using collectd_exporter/Prometheus, we see the exported_instance label in metrics:

collectd_cpu_total{cpu="0",exported_instance="sai",instance="localhost:9103",job="sai",type="idle"}

The exported_instance label comes from the Hostname field in collectd.conf, but there does not seem to be any way to send a custom keyword/label to differentiate metrics from different clusters.

This is not really an issue, but reaching out for help/guidance. Thanks in advance!

collectd_exporter - Prometheus target

Hello,

I installed Prometheus on Kubernetes, with Grafana and node-exporter. Then I added collectd-exporter; I am able to collect the metrics successfully by browsing the URL http://<ip_address>:port/metrics.

The problem I am facing is that I am unable to see collectd_exporter under the Targets section in Prometheus. Please advise what the problem might be and how I can proceed.

Regards,

expected gauge in metric collectd_nginx_connections

How can I fix this problem?

expected gauge in metric collectd_nginx_connections label:<name:"instance" value:"my.host.net" > label:<name:"nginx" value:"accepted" > counter:<value:22386 >

collectd-core 5.4.1, collectd_exporter 0.2.0

make fails with tcp timeout.

I am trying to build this project with make and DOCKER_ARCHS set to amd64 as an environment variable. This is the error I am seeing:

make
>> checking code style
>> checking license header
mkdir -p /Users/nsubrahm/go/bin
curl -sfL https://raw.githubusercontent.com/golangci/golangci-lint/v1.36.0/install.sh \
		| sed -e '/install -d/d' \
		| sh -s -- -b /Users/nsubrahm/go/bin v1.36.0
golangci/golangci-lint info checking GitHub for tag 'v1.36.0'
golangci/golangci-lint info found version: 1.36.0 for v1.36.0/darwin/amd64
golangci/golangci-lint info installed /Users/nsubrahm/go/bin/golangci-lint
>> running golangci-lint
GO111MODULE=on go list -e -compiled -test=true -export=false -deps=true -find=false -tags= -- ./... > /dev/null
go: collectd.org@v0.4.0: Get "https://proxy.golang.org/collectd.org/@v/v0.4.0.mod": dial tcp: i/o timeout
make: *** [common-lint] Error 1

high availability for collectd_exporter

Is there a recommended installation/deployment option for collectd_exporter considering high availability?

For example: run two exporters behind a load balancer, let Prometheus talk to the load balancer, but ask the collectd clients to talk to both exporter instances. Can this work?

<Plugin network>
  Server "prometheus1.example.com" "25826"
</Plugin>
<Plugin network>
  Server "prometheus2.example.com" "25826"
</Plugin>

Feature request - Make Timeout configurable via a flag

Hello,

In our setup of collectd-exporter, we would like to configure the timeout interval here:
https://github.com/prometheus/collectd_exporter/blob/master/main.go#L41-L44

The reason we want to do that is that we have some metrics that are pushed to collectd only every 5 iterations (because they are quite heavy to compute), while our Prometheus server scrapes every iteration --- marking the time series stale in every "gap" once the value has been removed by the exporter.

Before writing any code/sending any PR, I wanted to get confirmation from the maintainers that this change might be accepted.

Thank you
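
Purely for illustration, a sketch of what such a flag could look like using the standard library flag package; the flag name and default are hypothetical, and this is not the exporter's actual code:

package main

import (
	"flag"
	"fmt"
	"time"
)

// collectd.timeout would replace the hard-coded constant: samples older than
// this are considered stale and are no longer exported.
var timeout = flag.Duration("collectd.timeout", 2*time.Minute,
	"Time after which a pushed collectd value is considered stale and dropped.")

func main() {
	flag.Parse()
	fmt.Printf("staleness timeout set to %s\n", *timeout)
}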

panic: runtime error: index out of range

I run the exporter with Docker: docker run -p 9103:9103 prom/collectd-exporter:latest -collectd.listen-address=":25826" and this is what I got:

panic: runtime error: index out of range

goroutine 10 [running]:
net.IP.IsMulticast(0x0, 0x0, 0x0, 0x6)
/usr/local/go/src/net/ip.go:132 +0x9b
collectd.org/network.(*Server).ListenAndWrite(0xc20802e5a0, 0x0, 0x0)
/app/.build/gopath/src/collectd.org/network/server.go:49 +0x125
main.func·001()
/app/main.go:248 +0x27
created by main.startCollectdServer
/app/main.go:249 +0x2c2

goroutine 1 [chan receive]:
github.com/prometheus/client_golang/prometheus.(*registry).Register(0xc2080300c0, 0x7f5290535488, 0xc208032210, 0x0, 0x0, 0x0, 0x0)
/app/.build/gopath/src/github.com/prometheus/client_golang/prometheus/registry.go:233 +0x247
github.com/prometheus/client_golang/prometheus.(*registry).RegisterOrGet(0xc2080300c0, 0x7f5290535488, 0xc208032210, 0x0, 0x0, 0x0, 0x0)
/app/.build/gopath/src/github.com/prometheus/client_golang/prometheus/registry.go:296 +0x69
github.com/prometheus/client_golang/prometheus.RegisterOrGet(0x7f5290535488, 0xc208032210, 0x0, 0x0, 0x0, 0x0)
/app/.build/gopath/src/github.com/prometheus/client_golang/prometheus/registry.go:136 +0x62
github.com/prometheus/client_golang/prometheus.MustRegisterOrGet(0x7f5290535488, 0xc208032210, 0x0, 0x0)
/app/.build/gopath/src/github.com/prometheus/client_golang/prometheus/registry.go:142 +0x44
github.com/prometheus/client_golang/prometheus.InstrumentHandlerFuncWithOpts(0x0, 0x0, 0x7ba390, 0x4, 0x7f7410, 0x13, 0x8123d0, 0x21, 0xc20803d320, 0x0, ...)
/app/.build/gopath/src/github.com/prometheus/client_golang/prometheus/http.go:132 +0x286
github.com/prometheus/client_golang/prometheus.InstrumentHandlerFunc(0x7d49d0, 0xa, 0xc20800abc0, 0x7f52905353b0)
/app/.build/gopath/src/github.com/prometheus/client_golang/prometheus/http.go:73 +0x107
github.com/prometheus/client_golang/prometheus.InstrumentHandler(0x7d49d0, 0xa, 0x7f52905353b0, 0xc2080300c0, 0xc20803ca80)
/app/.build/gopath/src/github.com/prometheus/client_golang/prometheus/http.go:61 +0x98
github.com/prometheus/client_golang/prometheus.Handler(0x0, 0x0)
/app/.build/gopath/src/github.com/prometheus/client_golang/prometheus/registry.go:91 +0x73
main.main()
/app/main.go:264 +0x12a

goroutine 7 [select]:
main.(*collectdCollector).processSamples(0xc20800ab60)
/app/main.go:156 +0x43f
created by main.newCollectdCollector
/app/main.go:131 +0x118

goroutine 11 [runnable]:
github.com/prometheus/client_golang/prometheus.func·011()
/app/.build/gopath/src/github.com/prometheus/client_golang/prometheus/registry.go:220
created by github.com/prometheus/client_golang/prometheus.(*registry).Register
/app/.build/gopath/src/github.com/prometheus/client_golang/prometheus/registry.go:223 +0x154

Not receiving any metrics

Hi,

I'm trying to use collectd_exporter with collectd 5.4 and Prometheus 0.18.
From the Promdash level I am able to see collectd_exporter_build_info and collectd_last_push_timestamp_seconds, but the latter value is 0.
So the communication between Prometheus and collectd_exporter seems to be working fine.

But it looks like collectd_exporter is not receiving any data from collectd. As I'm on collectd 5.4, I'm trying to use the network plugin, since the write_http one does not support the JSON format in this version.

Are there any known compatibility problems regarding the binary protocol and/or the network plugin itself?

Official dockerhub image

Hi,

I want to check if there is an official Docker image for collectd_exporter. I could not find one under the Docker Hub official images: https://hub.docker.com/explore/

If not, is there any plan to add an official collectd_exporter image to Docker Hub? https://docs.docker.com/docker-hub/official_repos/#how-do-i-create-a-new-official-repository

I also want to check whether it makes sense to make collectd_exporter an official image, based on the subjective considerations from that link.

Arm builds

Arm builds seem to be supported after ce323be, but do not appear on Dockerhub or Quay. Can these be pushed?

s390x support in CI

s390x support was added to CI through #98, and I could see binaries being built for s390x in the logs here.
However, now I can't see s390x in the logs. Also, there has been no new release since May 2020.
Has s390x support been dropped, or is it included in CI only when there is a release?
Please let me know.

make breaks on CentOS 7.2

This change happened in the last few days.
I had a script which installs collectd_exporter as follows:

git clone https://github.com/prometheus/collectd_exporter
cd collectd_exporter
make

Now, I get the following:

$ make
>> formatting code
can't load package: package _/home/centos/collectd_exporter: cannot find package "_/home/centos/collectd_exporter" in any of:
        /usr/lib/golang/src/_/home/centos/collectd_exporter (from $GOROOT)
        ($GOPATH not set)
make: *** [format] Error 1

Running on CentOS 7.2

HTTPS endpoint

Hi,

Is there any work planned for adding TLS support (like in node_exporter) to the server accessed by Prometheus? I know that question was asked and answered a few years ago, but maybe there is an update.

Thanks,
Kamil

Metrics have different names than with write_prometheus

Hello,

I'm starting to use Prometheus and come from collectd/InfluxDB. I'm keeping collectd but getting rid of InfluxDB, storing the data in Prometheus instead. On systems where it is available, I use the collectd write_prometheus plugin. But on FreeBSD 14, the default collectd package does not provide this plugin; it does, however, provide the collectd_exporter package.

Using both tools, I noticed differences in metric names.
This leads to issues in Grafana dashboards / PromQL queries.

With collectd_exporter (on FreeBSD), I get:

  • collectd_interface_if_errors_0_total
  • collectd_interface_if_errors_1_total
  • collectd_interface_if_octets_0_total
  • collectd_interface_if_octets_1_total
  • (...)

With write_prometheus (on Illumos and OpenBSD), those have different names:

  • collectd_interface_if_errors_rx_total
  • collectd_interface_if_errors_tx_total
  • collectd_interface_if_octets_rx_total
  • collectd_interface_if_octets_tx_total
  • (...)

Is this expected / known?
Is there a way to have the same metric names with both tools?

Thank you.

Many errors when getting /metrics

Hello,

I'm using version v0.5.0 of the Docker image. I collect metrics coming from rhel6 and rhel7.

collectd version on rhel6 is 4.10.9-5
collectd version on rhel7 is 5.8.1-1

I had collectd_exporter working with only rhel6 for some time; it looks like rhel7 brings this regression.

Here is one of the 350 errors I get when I call /metrics:

An error has occurred while serving metrics:

350 error(s) occurred:
[...]
* collected metric collectd_interface_if_errors_1_total label:<name:"instance" value:"myserver" > label:<name:"interface" value:"eth1" > counter:<value:0 >  has help "Collectd exporter: 'interface' Type: 'if_errors' Dstype: 'api.Counter' Dsname: '1'" but should have "Collectd exporter: 'interface' Type: 'if_errors' Dstype: 'api.Derive' Dsname: '1'"

It appears on many different metrics.

Collectd is configured using the network plugin:

LoadPlugin network
<Plugin network>
  Server "collectd-exporter-server" "25826"
</Plugin>

I'll test removing the rhel7 servers and see if it works again. I've already tried destroying the container. I was first on the latest version and switched to the named version v0.5.0.

Is this issue known? Should I switch to the write_http plugin for rhel7?
