qubitproducts / exporter_exporter
A reverse proxy designed for Prometheus exporters
License: Apache License 2.0
I'm seeing something strange with the official exporter_exporter 0.3.0 binary: sometimes error reports show only "context canceled" with no other message.
root@ldex-mon2:~# exporter_exporter -config.file=/etc/prometheus/exporter_exporter.yml -config.dirs=/etc/prometheus/exporter_exporter.d -web.tls.listen-address '127.0.0.1:9998' -web.tls.verify
FATA[0000] context canceled source="main.go:270"
But then a little later when I ran the exact same command:
root@ldex-mon2:~# exporter_exporter -config.file=/etc/prometheus/exporter_exporter.yml -config.dirs=/etc/prometheus/exporter_exporter.d -web.tls.listen-address '127.0.0.1:9998' -web.tls.verify
FATA[0000] Could not parse key/cert, open cert.pem: no such file or directory source="main.go:234"
I'm not sure what's going on, but it seems like some sort of race condition.
It's a bit of a pain because I know something is going on here, but I can't see the error message:
root@ldex-mon2:~# exporter_exporter -config.file=/etc/prometheus/exporter_exporter.yml -config.dirs=/etc/prometheus/exporter_exporter.d -web.tls.listen-address ':9998' -web.tls.cert=/path/to/cert.pem -web.tls.key=/path/to/privkey.pem
FATA[0000] context canceled source="main.go:270"
At least on CentOS 7, curl fails with:
curl: (58) could not load PEM client certificate, OpenSSL error error:140AB18F:SSL routines:SSL_CTX_use_certificate:ee key too small, (no key found, wrong pass phrase, or wrong file format?)
If you create the certificates with -newkey rsa:2048 then everything is fine.
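For reference, an OpenSSL invocation of that kind, generating a 2048-bit key (a minimal sketch; the subject and validity are placeholders):
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout key.pem -out cert.pem -days 365 \
  -subj "/CN=localhost"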
Hi Team,
Is there any option in exporter_exporter to fix the below-mentioned vulnerabilities?
Expecting an immediate response. Thanks in advance.
Thanks and Regards,
Vinod M V
I have many nodes that need to push metrics to, and delete metrics from, a Pushgateway on a remote server. Since the labels are parsed from the URL, I am wondering whether this can be done through exporter_exporter or whether I should access the Pushgateway directly.
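For context, this is roughly how the Pushgateway API encodes job/instance labels in the URL (standard Pushgateway usage; the host name is a placeholder):
echo "some_metric 42" | curl --data-binary @- http://pushgateway.example:9091/metrics/job/backup/instance/node1
curl -X DELETE http://pushgateway.example:9091/metrics/job/backup/instance/node1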
Thanks
Hi,
First of all: I had been looking for a small, simple, easy-to-deploy (golang, yay!), TLS-enabled HTTP proxy to secure all my Prometheus exporters for some time, and your exporter_exporter does exactly that! Thanks a lot :-)
(and the consistency with other prometheus components is very welcome)
I was wondering if you would consider adding basic HTTP auth, to remove the burden of using TLS client-cert verification to secure access to the metrics?
Cheers
Currently, exporter_exporter has no means of running as a Windows service. Being able to run it as a service on Windows without a service wrapper à la nssm/winsw would be ideal.
I also investigated the possibility of building it as an MSI package with INSTALL_OPTIONS à la wmi_exporter.
Is this worth investigating?
I was running into an issue where exporter_exporter couldn't verify the metrics being exported by https://github.com/prometheus/jmx_exporter (v0.18.0, running the standalone HTTP server), and was able to resolve it by setting verify: false in the exporter config in expexp.yaml.
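A minimal sketch of that workaround (the module name is taken from the log below; the port is a placeholder for wherever jmx_exporter listens):
modules:
  core_api-jmx:
    method: http
    http:
      verify: false
      port: 8080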
Though disabling verify works, I'm hoping to understand the underlying issue; a sample log entry from the exporter_exporter service looks like this:
Apr 17 16:45:14 api0 exporter_exporter[677967]: time="2023-04-17T16:45:14-06:00" level=error msg="Verification for module 'core_api-jmx' failed: Failed to decode metrics from proxied server: text format parsing error in line 12: unknown metric type \"unknown\""
And the (truncated) output from the endpoint:
runofthemill@api0:~$ curl http://localhost:9999/proxy?module=core_api-jmx
# HELP jmx_config_reload_success_total Number of times configuration have successfully been reloaded.
# TYPE jmx_config_reload_success_total counter
jmx_config_reload_success_total 0.0
# HELP jmx_exporter_build_info A metric with a constant '1' value labeled with the version of the JMX exporter.
# TYPE jmx_exporter_build_info gauge
jmx_exporter_build_info{version="0.18.0",name="jmx_prometheus_httpserver",} 1.0
# HELP jmx_config_reload_failure_total Number of times configuration have failed to be reloaded.
# TYPE jmx_config_reload_failure_total counter
jmx_config_reload_failure_total 0.0
# HELP java_lang_MemoryPool_UsageThresholdSupported java.lang:name=Metaspace,type=MemoryPool,attribute=UsageThresholdSupported
# TYPE java_lang_MemoryPool_UsageThresholdSupported untyped
java_lang_MemoryPool_UsageThresholdSupported{name="Metaspace",} 1.0
java_lang_MemoryPool_UsageThresholdSupported{name="Code Cache",} 1.0
java_lang_MemoryPool_UsageThresholdSupported{name="Compressed Class Space",} 1.0
java_lang_MemoryPool_UsageThresholdSupported{name="G1 Eden Space",} 0.0
java_lang_MemoryPool_UsageThresholdSupported{name="G1 Old Gen",} 1.0
java_lang_MemoryPool_UsageThresholdSupported{name="G1 Survivor Space",} 0.0
Line 12 appears to be untyped per the preceding # TYPE annotation, which (I think) should be okay? Any suggestions on how I can better understand why the verification is failing?
Thank you!
Hello,
Is there a limit on the size of metrics scraped through exporter_exporter?
I have an exporter with 5102 metric lines, but when I request it:
curl: (52) Empty reply from server
Config:
modules:
  gitlab:
    method: http
    http:
      address: 10.0.0.152
      port: 8080
All the other modules work, but this one always fails.
Any ideas?
Regards
I have 29 microservices and want to expose their metrics from a single endpoint.
Right now I'm writing code by hand to fetch metrics from the instances and produce the combined output.
We're looking into using exporter_exporter to expose additional metadata about each module, to be consumed by third-party systems via the JSON response.
Currently, what we're using is labels: each module has a labels section that is also exposed via JSON and can then be used by another system which constructs scrape targets for file_sd. Note, these labels aren't actually added to the metrics themselves by exporter_exporter, as it still only acts as a proxy. Would that be something that could be added to exporter_exporter?
E.g.
modules:
  node:
    method: http
    http:
      port: 9100
    labels:
      labelname: labelval
      labelname2: labelval2
and
type moduleConfig struct {
    Method  string            `yaml:"method"`
    Timeout time.Duration     `yaml:"timeout"`
    Labels  map[string]string `yaml:"labels"`
}
That's pretty much it; we're not actually doing anything more than exposing it in the JSON response.
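Illustratively, the JSON listing for the node module above might then look something like this (a hypothetical output shape, not the current response format):
{
  "node": {
    "method": "http",
    "labels": {
      "labelname": "labelval",
      "labelname2": "labelval2"
    }
  }
}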
Hi, I believe the (latest) v0.4.0 binaries were built with Go 1.14.0, which contains old/imprecise checks for a specific kernel issue: golang/go#37436
When we used the prebuilt binaries on a fresh Ubuntu 20.04 (kernel 5.4.0-65-generic), the binary crashed after a few minutes with the mlock issue described in the golang/go thread, even though the kernel is already patched for this issue. When we rebuilt the binaries from source with the latest Go (go1.15.8), the crashes went away.
I think this can be mitigated by releasing a new version with a more recent go version, or compiling the source manually; I wanted to raise this in case someone else encounters this issue.
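For anyone rebuilding manually, a minimal sketch (assuming a plain go build suffices; the repository's Makefile may do more):
git clone https://github.com/QubitProducts/exporter_exporter.git
cd exporter_exporter
go build .   # with a recent Go toolchain on PATH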
Thanks for the exporter!
Hello,
I am trying to use blackbox_exporter, which needs among other things a "module" parameter, but I can't figure out how to send it.
README mentions that the /proxy endpoint supports a "params" parameter:
params (optional): named parameter to pass to the module (either as CLI args, or http parameters).
Is there an example of how to use this somewhere? I tried to find one in the code but couldn't. Trying it out, it seems to accept only a (list of) strings, not a dict/map, so how does one pass a named parameter to the proxied module?
Thanks! :)
The exe contained in https://github.com/QubitProducts/exporter_exporter/releases/download/v0.4.0/exporter_exporter-0.4.0.windows-amd64.zip is not a Windows executable:
$ unzip ../exporter_exporter-0.4.0.windows-amd64.zip
Archive: ../exporter_exporter-0.4.0.windows-amd64.zip
inflating: build/exporter_exporter-0.4.0.windows-amd64/exporter_exporter.exe
$ file build/exporter_exporter-0.4.0.windows-amd64/exporter_exporter.exe
build/exporter_exporter-0.4.0.windows-amd64/exporter_exporter.exe: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, Go BuildID=-Q9NPMunilMXaOGIuByt/_eiyrk3CQNfYU62rUW7n/VKYarLwOWrQMst-s20fd/BrW2dD2LRqFZcSBwgZze, not stripped
Hello,
How do I configure exporter_exporter to proxy to something protected by basic auth?
Thanks! :)
I can't find any documentation on the build process. While the Makefile is fairly self-explanatory, there's some Docker magic that needs to be figured out.
Alternatively, it would be helpful if you provided a recent build: the Windows binaries don't have support for --web.bearer.token or --web.bearer.token-file.
Can we add support for a list of metrics to be whitelisted/blacklisted, so that we don't have to wait for Prometheus to drop metrics at the relabel stage?
Also, support for adding extra labels.
Would it be possible to add server SAN verification to the tls config in this exporter? This would allow us to use our existing certificate setup without having to take extra steps to do verification.
I'm not a Go programmer, but after a brief look at the code I think the methods outlined in this blog post would do the job:
https://dev.to/living_syn/validating-client-certificate-sans-in-go-i5p
I'm happy to try to create a PR if that would be useful.
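For the record, a minimal sketch of the approach from that post (illustrative only; the expected SAN name is made up, and wiring it into exporter_exporter's listener is left out):
package main

import (
    "crypto/tls"
    "crypto/x509"
    "errors"
)

// sanVerifier returns a VerifyPeerCertificate callback that, on top of the
// normal CA chain verification, requires the client cert's SANs to match name.
func sanVerifier(name string) func([][]byte, [][]*x509.Certificate) error {
    return func(rawCerts [][]byte, chains [][]*x509.Certificate) error {
        for _, chain := range chains {
            // chain[0] is the verified leaf certificate.
            if chain[0].VerifyHostname(name) == nil {
                return nil
            }
        }
        return errors.New("client certificate SAN does not match " + name)
    }
}

func main() {
    cfg := &tls.Config{
        ClientAuth:            tls.RequireAndVerifyClientCert,
        VerifyPeerCertificate: sanVerifier("prometheus.example.com"),
    }
    _ = cfg // wire into the TLS listener
}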
It seems that 0.3.1, or more specifically #38, broke the ability to return JSON of all modules and their configuration on request.
Executing curl -XGET -H "Accept: application/json" localhost:9999 now logs:
ERRO[0013] json: unsupported type: func(*http.Request)
Hi,
I am running the exporter with this command line:
EXTERNAL_IP=$(curl -s ifconfig.me)
cajetan/bin/exporter_exporter \
--config.file /home/mon/cajetan/etc/exporter_exporter/expexp.yml \
--log.level debug \
--web.tls.cert cajetan/etc/exporter_exporter/prom_node_cert.pem \
--web.tls.key cajetan/etc/exporter_exporter/prom_node_key.pem \
--web.tls.ca cajetan/etc/exporter_exporter/prometheus_cert.pem \
--web.tls.listen-address ${EXTERNAL_IP}:9999 \
--web.listen-address ${EXTERNAL_IP}:1234 \
--web.tls.verify \
--web.tls.certmatch=^prometheus$
HTTPS works fine:
OMD[cajetan@admin]:~$ curl --cert ~/clients/Debian/20/x86_64/etc/exporter_exporter/prometheus_cert.pem \
--key ~/etc/prometheus/ssl/prometheus_key.pem \
--cacert ~/clients/Debian/20/x86_64/etc/exporter_exporter/prom_node_cert.pem \
--resolve prom_node:9999:11.203.192.54 -vvv https://prom_node:9999/metrics
...
# HELP build_info A metric with a constant '1' value labeled by version, revision, branch and goversion from which exporter_exporter was built.
# TYPE build_info gauge
build_info{branch="",goversion="go1.17",revision="",version="0.4.5"} 1
Using any other certificate/key fails, as expected:
OMD[cajetan@admin]:~$ curl --cert /tmp/prometheus_cert.pem \
--key /tmp/prometheus_key.pem \
--cacert ~/clients/Debian/20/x86_64/etc/exporter_exporter/prom_node_cert.pem \
--resolve prom_node:9999:11.203.192.54 \
-vvv https://prom_node:9999/metrics
...
* TLSv1.3 (IN), TLS alert, bad certificate (554):
* OpenSSL SSL_read: error:14094412:SSL routines:ssl3_read_bytes:sslv3 alert bad certificate, errno 0
* Closing connection 0
But the HTTP port is still wide open to everybody:
OMD[cajetan@admin]:~$curl http://11.203.192.54:1234/metrics
# HELP build_info A metric with a constant '1' value labeled by version, revision, branch and goversion from which exporter_exporter was built.
# TYPE build_info gauge
build_info{branch="",goversion="go1.17",revision="",version="0.4.5"} 1
...
I don't see a way to close the HTTP port. If I leave out --web.listen-address, it opens the default port 9999. The only way to block HTTP access is a firewall rule. Am I missing something here?
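As a stopgap (narrowing exposure rather than actually closing the port), the plain-HTTP listener can at least be bound to loopback using the existing flag:
--web.listen-address 127.0.0.1:1234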
When a backend gives an error status, it appears to confuse exporter_exporter.
Here is an example using snmp_exporter. If I talk to it directly with bad arguments, it returns a 400 status code and a useful message.
root@prometheus:~# curl 'localhost:9116/snmp?target=10.12.255.1&module=WRONG'
Unknown module 'WRONG'
root@prometheus:~# curl -v 'localhost:9116/snmp?target=10.12.255.1&module=WRONG'
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 9116 (#0)
> GET /snmp?target=10.12.255.1&module=WRONG HTTP/1.1
> Host: localhost:9116
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 400 Bad Request
< Content-Type: text/plain; charset=utf-8
< X-Content-Type-Options: nosniff
< Date: Sat, 19 Oct 2019 09:11:42 GMT
< Content-Length: 23
<
Unknown module 'WRONG'
* Connection #0 to host localhost left intact
But using exporter_exporter:
modules:
  snmp:
    method: http
    http:
      port: 9116
      path: snmp
Here is what I get:
root@prometheus:~# curl 'localhost:9999/proxy?module=snmp&target=10.12.255.1&module=WRONG'
An error has occurred while serving metrics:
text format parsing error in line 1: expected float as value, got "module"
root@prometheus:~# curl -v 'localhost:9999/proxy?module=snmp&target=10.12.255.1&module=WRONG'
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 9999 (#0)
> GET /proxy?module=snmp&target=10.12.255.1&module=WRONG HTTP/1.1
> Host: localhost:9999
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 500 Internal Server Error
< Content-Type: text/plain; charset=utf-8
< X-Content-Type-Options: nosniff
< Date: Sat, 19 Oct 2019 09:13:07 GMT
< Content-Length: 121
<
An error has occurred while serving metrics:
text format parsing error in line 1: expected float as value, got "module"
* Connection #0 to host localhost left intact
It looks like exporter_exporter has attempted to parse the error message as a metric, and this in turn has caused a 500 Internal Server Error.
I am not actually sure why exporter_exporter needs to parse metrics; why not pass the body through unchanged?
But in any case, when the backend returns a non-2xx code, I think the backend's response code and body should be passed through unchanged.
Hello,
I'm getting "Client sent an HTTP request to an HTTPS server." when browsing to http://myip:5757/.
Where can I disable HTTPS?
I only want to use HTTP.
Thank you!
Sometimes I need to time the scraping of an endpoint, or to run a script less frequently than the minimum scrape interval (~5 min in Prometheus before the data is declared stale). This can happen when scraping has a cost (contention, costly operations, or weak servers).
The idea would be to add the possibility of caching the result of a scrape and setting a timestamp on it; while the cache is fresh, subsequent scrapes would be served from it instead of re-running the command.
Currently, the only way to achieve that is to write the result to a prom file, execute the script at the desired interval, and use node_exporter to collect it. That adds a lot of setup for such a simple use case.
In terms of configuration, it could be something similar to HTTP configuration:
somescript:
  method: exec
  ...
  cache:
    enabled: True
    max_age: 30m
  exec:
    command: /tmp/don_t_launch_that_too_often.sh
Another nice addition for the HTTP method would be the possibility of making a HEAD request on the endpoint, to at least check whether it is up, though this would depend on the exporter supporting it.
We're trying to scrape the control-plane metrics of Kubernetes, e.g. the kube-apiserver, which exposes metrics on localhost:6443/metrics but requires authorization via the bearer-token header when RBAC is in use. (Metrics For The Kubernetes Control Plane)
This could be done by a dedicated job in Prometheus, however, when exporter_exporter is already used to proxy other exporters on the same machine, we'd prefer to do the same here.
That's only a single example derived from an actual use case we've encountered. Implementing this would require a simple modification to the reverse proxy so that headers from the module config (if any) are added when proxying. Would this be acceptable?
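A sketch of what such a module config might look like (the headers key is hypothetical; it is exactly what this issue proposes):
modules:
  kube-apiserver:
    method: http
    http:
      port: 6443
      headers:
        Authorization: "Bearer <token>"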
Thanks for this exporter, it's really useful.
For your information, I released an Ansible role to deploy it.
https://github.com/umanit/ansible-prometheus_exporter_exporter
Yoann
There is already an issue for this here: #36. Adding an additional use case for it here.
I have a setup where a host runs several exporters, with exporter_exporter as a proxy to all of them.
The same setup is deployed in the dev, staging, and production environments.
I see a lot of value in adding common properties like "environment" (as extra metadata) to all the exported metrics. One immediate benefit is that the same Prometheus job could be used to scrape all environments.
Hello,
I think the exporter_exporter binary should provide basic-authentication support.
Coupled with the TLS config, it would help better secure the unsecured exporters that still exist, without relying on Apache or nginx.
That said, serving an unsecured exporter via exporter_exporter is already more secure than exposing the exporter itself, but I believe authentication would add stronger security.
Best Regards,
Christophe
I like glog as well (I even have my own fork with some additional stuff), but "github.com/prometheus/common/log" should probably be used to behave more like the "stock" exporters.
Hi *,
For a project I need an ARM release. Can you adjust the build to also produce an ARM binary on every release?
Cheers
Richard
Is ed25519 supported?
I noticed this module is using an older version of crypto and I wasn't able to find details of when support was added.
I was unsure where to flag this, so I chose to create an issue :)
The latest version on Docker Hub is v0.4.5.
Several exporters already make use of a 'module' query argument: e.g.
This means you might have to do:
localhost:9999/proxy?module=snmp&target=10.12.255.1&module=if_mib_secret
                     ^^^^^^^^^^^                    ^^^^^^^^^^^^^^^^^^^^
                     exporter_exporter              snmp_exporter
This is confusing at first glance, and exporter_exporter has to strip off the first instance of module and leave the remaining instances in place.
I have tested this (with 0.2.9, prior to the parsing changes in #19), and it does work: tcpdump shows the above example proxies to
GET /snmp?module=if_mib_secret&target=10.12.255.1 HTTP/1.1
Host: localhost:9116
User-Agent: Go-http-client/1.1
Accept-Encoding: gzip
However, I wonder if this could be done in a cleaner way? Suggestions:
Make the exporter_exporter module selection be part of the path, REST-style
localhost:9999/proxy/snmp?target=10.12.255.1&module=if_mib
Use a query string parameter which is unlikely to conflict
localhost:9999/proxy?exporter_exporter=snmp&target=10.12.255.1&module=if_mib
(This might be better for dynamic rewrite rules: setting __param_exporter_exporter is easy, but I'm not sure if __metrics_path__ can be rewritten.) __metrics_path__ can be set in relabelling, but __param_module cannot be set to a list.
Both could be done in a backwards-compatible way by falling back to the existing 'module' query parameter.
I plan to run exporter_exporter as a TLS server.
Issue 1: the --help output says:
-web.tls.verify
Disable client verification
Looking at the code:
verify = flag.Bool("web.tls.verify", false, "Disable client verification")
...
if *verify {
    pool := x509.NewCertPool()
    cabs, err := ioutil.ReadFile(*caPath)
    if err != nil {
        log.Fatalf("Could not open ca file,, " + err.Error())
    }
    ok := pool.AppendCertsFromPEM(cabs)
    if !ok {
        log.Fatalf("Failed loading ca certs")
    }
    tlsConfig.ClientAuth = tls.RequireAndVerifyClientCert
    tlsConfig.ClientCAs = pool
}
So it seems that this option is to enable client verification (and defaults to false). Should the help text output be updated to reflect this?
(Aside: two commas in error message?)
Issue 2: it's unclear what sort of verification is done on client certs. My best guess is that the server will accept any client cert, as long as it's signed by any CA in the web.tls.ca file; that is, it does not check the certificate identity or fingerprint.
If that's true, you'd have to set up a separate dummy CA for client authentication, rather than using any existing PKI. I don't have a problem with this, I just want to be sure I understand it properly.
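For illustration, setting up such a dedicated client CA might look like this (standard OpenSSL usage; file names and subjects are made up):
# Create the dummy client CA
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -keyout client-ca.key -out client-ca.pem -subj "/CN=expexp-client-ca"
# Issue a client certificate signed by it
openssl req -newkey rsa:2048 -nodes \
  -keyout client.key -out client.csr -subj "/CN=prometheus"
openssl x509 -req -in client.csr -CA client-ca.pem -CAkey client-ca.key \
  -CAcreateserial -days 365 -out client.pem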
I'm using the Ceph exporter, and having just started investigating RBD mirroring, the following error occurred:
An error has occurred while serving metrics:
text format parsing error in line 2614: second HELP line for metric name "ceph_rbd_mirror_replay"
When looking at the ceph endpoint, it looks like the problematic HELP item is for an item without a metric to go with it:
# HELP ceph_rocksdb_get_latency_count Get latency Count
# TYPE ceph_rocksdb_get_latency_count counter
ceph_rocksdb_get_latency_count{ceph_daemon="mon.link"} 4236013.0
ceph_rocksdb_get_latency_count{ceph_daemon="mon.yoshi"} 4188788.0
ceph_rocksdb_get_latency_count{ceph_daemon="mon.bowser"} 4158142.0
# HELP ceph_rbd_mirror_replay Replays
# TYPE ceph_rbd_mirror_replay counter
# HELP ceph_prioritycache:meta_pri0_bytes bytes allocated to pri0
# TYPE ceph_prioritycache:meta_pri0_bytes gauge
ceph_prioritycache:meta_pri0_bytes{ceph_daemon="osd.6"} 0.0
ceph_prioritycache:meta_pri0_bytes{ceph_daemon="osd.14"} 0.0
I understand this may be a bug in the ceph exporter, but is there a way to avoid this causing the loss of all metrics in exporter_exporter?
exporter_exporter currently returns a 404 when accessing its HTTP root. If we look at other exporters, e.g. blackbox_exporter, it might be better to return some information about the endpoints the exporter can proxy.
Proposal: when accessing /, display a list of configured module names that link to /proxy?module=$name.
PR: #16
Hi,
After a lot of tests with certificates/TLS, I need some help.
Most of my nodes answer on the same domain, xxx.uman-it.fr, so I generated a node cert with *.uman-it.fr as the Common Name.
Now I need to monitor one node which answers on another domain, let's say uman-it.infra. If I try to use the same certificate, it fails (invalid certificate).
I tried to create a SAN certificate (Common Name = *.uman-it.fr; SAN DNS = *.uman-it.infra), but it doesn't work.
Then I found issue #48 about subjectAltName verification and tried to use web.tls.certmatch, but with no more success.
Is it possible to make this work, or must I generate multiple node certificates?
Thanks a lot
Hello,
Thank you for a great product, I use it on all my servers and VMs.
When trying to use the https://github.com/prometheus/client_python#info metric type in an exporter behind expexp, I get this error:
Mar 9 06:51:35 blackbox3 supervisord[64200]: exporter_exporter time="2023-03-09T06:51:35Z" level=error msg="Verification for module 'dnsexp' failed: Failed to decode metrics from proxied server: text format parsing error in line 20: unknown metric type \"info\""
I think it will be the same issue for https://github.com/prometheus/client_python#enum type metrics.
It would be nice to support both types.
Using the published release binary:
# /usr/local/bin/exporter_exporter --version
Version: 0.4.5- (from , built by on )
I don't think that's expected?
Goal:
I wish to use exporter_exporter to safely export metrics about the local Consul daemon. Consul already cooperates with Prometheus by providing a Prometheus-compatible metrics output if asked properly.
What I did:
modules:
  consul:
    http: { path: '/v1/agent/metrics?format=prometheus', port: 8500 }
    method: http
method: http
What I expected:
curl http://localhost:9999/proxy?module=consul would return the same data that I get from curl http://localhost:8500/v1/agent/metrics?format=prometheus
What I got:
An error has occurred during metrics gathering: text format parsing error in line 1: invalid metric name
Extra information:
Running strace on exporter_exporter (I don't have tcpdump on this machine), I can see that exporter_exporter is sending an HTTP request "GET /v1/agent/metrics%3Fformat..."
In a more dynamic environment it would be very nice to be able to just drop a configuration into a directory, restart exporter_exporter, and have it amend those configs into its scrape_configs. One could also imagine this being reloaded on HUP. Would such a feature be of interest?
For example:
/etc/exporter_exporter/exporter.yml
/etc/exporter_exporter/scrape_config.d/service1.yml
/etc/exporter_exporter/scrape_config.d/service2.yml
...
Where exporter.yml has some directive pointing at the scrape_config.d path.
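E.g., a hypothetical directive in exporter.yml (the key name is invented here; this is just the shape of the proposal):
scrape_config_dir: /etc/exporter_exporter/scrape_config.d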
When using TLS without client verification, the parameter -web.tls.ca is not required (as far as I understand). However, you are still required to pass a (valid?) CA path, otherwise startup fails with:
main.go:168] Could not open ca file,, open : no such file or directory
The current workaround is to pass some dummy CA.
It should be possible to load the CA only when -web.tls.verify is set.
Hello,
Is there any way to allow queries only from a specific IP or network range (maybe something like mrtg's allowed_hosts option)? Using TLS is a perfect option, but I would also like to be able to limit access to expexp's port by source IP address (without having to configure an iptables rule or use an external firewall), so it only responds to queries from my Prometheus server.
Thanks in advance!
exporter_exporter/service_windows.go, line 10 in f1fb610:
All the other files use "github.com/sirupsen/logrus", and it looks like it could be used as a drop-in replacement; replacing the import line with
log "github.com/sirupsen/logrus"
seems to do the trick.
Hi,
first, thank you for this awesome project, it's working very well.
I have the following issue: A gitlab-rails exporter exports a metric with the following line:
curl -sk -o - https://localhost/-/metrics | grep -E '^rack_requests_total.*sessions'
rack_requests_total{action="new",controller="sessions",route="g\u003c/w",state="completed"} 2
and exporter_exporter breaks with the following error:
level=error msg="Verification for module 'gitlab-rails' failed: Failed to decode metrics from proxied server: text format parsing error in line 37055: invalid escape sequence '\\u'"
AFAIK labels should support UTF-8. Is this reproducible, and is it a bug in exporter_exporter?
Thank you,
keachi
I am not sure whether this is the proper way or whether I am doing something wrong, but I configured my Prometheus (v2.33.5) as:
- job_name: 'myjob'
  scheme: 'https'
  metrics_path: '/proxy'
  params:
    module:
      - 'node'
      - 'nginx'
      - 'mysqld'
      - 'php-fpm'
      - 'postgres'
  static_configs:
    - targets:
        - 'targethost:9998'
      labels:
        environment: 'staging'
I see on the target:
May 11 07:09:05 targethost prometheus-exporter-exporter[1422569]: time="2023-05-11T07:09:05+02:00" level=info msg="10.28.1.1 - GET \"/proxy?module=node&module=process&module=nginx&module=mysqld&module=php-fpm&module=postgres\" 200 OK (took 108.894097ms)"
But only the node metrics show up in Prometheus, so I executed the following on the node that runs Prometheus:
wget -O - 'https://treebeard.login.hu:9998/proxy?module=node&module=nginx&module=mysqld&module=php-fpm&module=postgres'
and it looks like only the first module (in this case, node) is included in the response:
wget -q -O - 'https://treebeard.login.hu:9998/proxy?module=node&module=nginx&module=mysqld&module=php-fpm&module=postgres' | grep -E '^(node_load1|nginx_connections_writing) '
node_load1 0.71
If I switch the order, I get only the nginx data:
wget -q -O - 'https://treebeard.login.hu:9998/proxy?module=nginx&module=node&module=mysqld&module=php-fpm&module=postgres' | grep -E '^(node_load1|nginx_connections_writing) '
nginx_connections_writing 1
Is this the expected behaviour? Do I have to configure it like the following instead?
- job_name: 'myjob-node'
  scheme: 'https'
  metrics_path: '/proxy'
  params:
    module: [ 'node' ]
  static_configs:
    - targets:
        - 'targethost:9998'
      labels:
        module: 'node'
        environment: 'staging'
- job_name: 'myjob-nginx'
  scheme: 'https'
  metrics_path: '/proxy'
  params:
    module: [ 'nginx' ]
  static_configs:
    - targets:
        - 'targethost:9998'
      labels:
        module: 'nginx'
        environment: 'staging'
...
- job_name: 'myjob-{module}'
  scheme: 'https'
  metrics_path: '/proxy'
  params:
    module: [ '{module}' ]
  static_configs:
    - targets:
        - 'targethost:9998'
      labels:
        module: '{module}'
        environment: 'staging'
Example config:
modules:
  consul:
    method: http
    http:
      port: 8500
      path: "/v1/agent/metrics?format=prometheus"
Expected behaviour: a proxy request to /proxy?module=consul returns metrics from the Consul agent.
Actual behaviour: a proxy request to /proxy?module=consul returns 404.
The actual request to Consul is URL-encoded by exporter_exporter, leading to the following path:
/v1/agent/metrics%3Fformat=prometheus
Consul returns 404 for that request, but fetching the metrics directly from Consul works fine:
$ curl -v '0:8500/v1/agent/metrics%3Fformat=prometheus'
* Trying 0.0.0.0...
* TCP_NODELAY set
* Connected to 0 (127.0.0.1) port 8500 (#0)
> GET /v1/agent/metrics%3Fformat=prometheus HTTP/1.1
> Host: 0:8500
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 404 Not Found
< Date: Tue, 31 Mar 2020 14:34:19 GMT
< Content-Length: 0
<
* Connection #0 to host 0 left intact
$ curl -v '0:8500/v1/agent/metrics?format=prometheus'
* Trying 0.0.0.0...
* TCP_NODELAY set
* Connected to 0 (127.0.0.1) port 8500 (#0)
> GET /v1/agent/metrics?format=prometheus HTTP/1.1
> Host: 0:8500
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: text/plain; version=0.0.4; charset=utf-8
< Vary: Accept-Encoding
< Date: Tue, 31 Mar 2020 14:34:33 GMT
< Transfer-Encoding: chunked
<
# HELP consul_client_api_catalog_datacenters consul_client_api_catalog_datacenters
# TYPE consul_client_api_catalog_datacenters counter
...
It's often the case that there's already an exporter in place and one wants to add another. exporter_exporter has the nice properties of listing all the exporters it reverse-proxies, listening on a single port, etc.
But at scale, changing the address of all targets from /metrics to /proxy?module=previous-exporter at the same time is often not feasible. It might be desirable to keep the existing route working during a transition.
Would you be open to reviewing a pull request for this? The goal would be to add an option for overriding the /metrics (and maybe /) route with the route of a module that is being replaced.
exporter_exporter leaks sockets when serving metrics for a module with verify=false.
Run exporter_exporter with some module configured like this:
modules:
  nodeNoVerify:
    method: http
    http:
      verify: false
      port: 9100
Inspect current statistics on sockets in your system:
$ ss -s
Total: 1295
TCP: 71 (estab 20, closed 16, orphaned 0, timewait 2)
Transport Total IP IPv6
RAW 1 0 1
UDP 31 18 13
TCP 55 38 17
INET 87 56 31
FRAG 0 0 0
Generate some load and check statistics again e.g.:
$ go-wrk -c=4 -n=2000 http://localhost:9999/proxy?module=nodeNoVerify
...
$ ss -s
Total: 3329
TCP: 6283 (estab 2054, closed 4193, orphaned 0, timewait 4180)
Transport Total IP IPv6
RAW 1 0 1
UDP 31 18 13
TCP 2090 1054 1036
INET 2122 1072 1050
FRAG 0 0 0
Note the large number of open connections between exporter_exporter and node_exporter:
$ ss -etnp | grep 9100
Check the logs of exporter_exporter:
2020/01/15 12:15:01 http: Accept error: accept tcp [::]:9999: accept4: too many open files; retrying in 5ms
2020/01/15 12:15:01 http: proxy error: dial tcp: lookup localhost: device or resource busy
Expected: one open connection to node_exporter.
Actual: approx. 2000 open connections to node_exporter.
This happens because the http.Transport struct should be reused (as its documentation suggests), but a new instance is created every time the httpConfig.ServeHTTP method is invoked. Creating the http.Transport with DisableKeepAlives: true also solves the issue.
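A minimal sketch of the fix described above, reusing one shared transport (the httpConfig type and the target here are illustrative, not the actual exporter_exporter source):
package main

import (
    "log"
    "net/http"
    "net/http/httputil"
    "net/url"
)

// sharedTransport is created once and reused for every proxied request,
// so backend connections are pooled instead of piling up in TIME-WAIT.
var sharedTransport = &http.Transport{}

type httpConfig struct{ target *url.URL }

func (c httpConfig) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    proxy := httputil.NewSingleHostReverseProxy(c.target)
    proxy.Transport = sharedTransport // the fix: no per-request Transport
    proxy.ServeHTTP(w, r)
}

func main() {
    target, err := url.Parse("http://localhost:9100") // e.g. node_exporter
    if err != nil {
        log.Fatal(err)
    }
    log.Fatal(http.ListenAndServe(":9999", httpConfig{target}))
}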
On docker build, we got an error because Go needs to be at least version 1.13. I used this one and it works:
FROM golang:1.13.7-alpine AS build
I wanted to make a PR, but I did not find a parser for OpenMetrics, so I could not create one here.
The workaround for now is to skip verification:
modules:
  mimir:
    method: http
    http:
      verify: false
      port: 8080
The error from exporter_exporter is:
ERRO[0003] Verification for module 'mimir' failed: Failed to decode metrics from proxied server: text format parsing error in line 89: unknown metric type "unknown"
I think this is doable now that Prometheus is compiled with Go 1.9 and gets SOCKS5 proxy support for its proxy_url argument.
Something like this could work, where node:1 is resolved by the SOCKS proxy to match a module named node in exporter_exporter, as the first entry in the node exporter section:
- job_name: 'node'
  scrape_interval: 1s
  static_configs:
    - targets: ['node:1']
  scheme: https
  proxy_url: "socks5://localhost:3000"
  tls_config:
    ca_file: ca.crt
    key_file: client.key
    cert_file: client.crt
    server_name: localhost
It gets interesting when we suddenly have a process cluster running on a node which exporter_exporter needs to pick up.
One could imagine something like this:
modules:
  myapp:
    method: http
    http:
      ports: 7000-7032
which would map to scrapable myapp targets 1-16:
- job_name: 'myapp'
  scrape_interval: 1s
  static_configs:
    - targets: ['myapp:1', 'myapp:2', 'myapp:3', 'myapp:4', 'myapp:5', 'myapp:6', ....
  scheme: https
  proxy_url: "socks5://localhost:3000"
  tls_config:
    ca_file: ca.crt
    key_file: client.key
    cert_file: client.crt
    server_name: localhost
As exporter_exporter works right now, I need one job for each myapp process in that host-local application cluster, which is the main problem I want to solve.