Comments (14)
First of all: if a label value changes, that creates a new series (also called a metric). So, to get the terminology right, you are creating a new metric (and removing an old one).
Staleness handling in Prometheus should handle exactly this case, i.e. you should not see the overlap.
However, since you do see the overlap, my assumption is that your node_exporter instance actually exposes both metrics at the same time for a few minutes. You could check that by manually looking at the /metrics endpoint of your node_exporter around the time of a change. If you see the duplication there, you need to troubleshoot your node_exporter next.
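For illustration, a spot check along these lines (just a sketch; I'm assuming the default node_exporter port and the metric name from your screenshot) would print two nvme_device_info lines differing only in the device label if both series were really exposed at once:

# sketch: snapshot the exporter's exposition during the change window
curl -s http://localhost:9100/metrics | grep '^nvme_device_info'
# duplication would look roughly like:
#   nvme_device_info{device="nvme0n1",...} 1
#   nvme_device_info{device="nvme1n1",...} 1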
@beorn7 thank you for your attention!
I've checked node_exporter's /metrics endpoint by running the following command every minute (including the time of the change) and didn't find any duplication at all:
curl http://localhost:9100/metrics -o - 2>&1 | grep -e ^nvme_device_info -e ^node_textfile_scrape_error | ts "[%Y-%m-%d %H:%M:%S]" >> node-exporter-test.txt
But in this attempt I made the requests in real time and noticed that, from the moment of the change (a reboot) at 18:29:00 until 18:32:00, the graph above showed only one series, labeled {device="nvme0n1"}. The new series {device="nvme1n1"} appeared in the database's results at 18:33:00, with three retrospective values.
Are you using normal scrapes to collect the data, or are you doing something special? It looks like staleness isn't working in your case, and you seem to have delayed ingestion. Are you maybe using the OTel collector in between?
In this case, only standard scrapes are used. Prometheus and node_exporter are both installed on the same host from the Ubuntu Noble repo and interact directly.
I'm using remote_write in the config, but the issue occurs in the local Prometheus instance (yet another screenshot below). I've just noticed that, in the local database, an overlapping series appears immediately, without waiting for three intervals.
Full configuration for completeness:
global:
  scrape_interval: 1m
  scrape_timeout: 10s
  evaluation_interval: 1m
scrape_configs:
  - job_name: prometheus
    honor_timestamps: true
    scrape_interval: 1m
    scrape_timeout: 10s
    metrics_path: /metrics
    scheme: http
    follow_redirects: true
    enable_http2: true
    static_configs:
      - targets:
          - localhost:9090
  - job_name: node
    honor_timestamps: true
    scrape_interval: 1m
    scrape_timeout: 10s
    metrics_path: /metrics
    scheme: http
    follow_redirects: true
    enable_http2: true
    static_configs:
      - targets:
          - chronos:9100
          - <secret>:9100
          - <secret>:9100
          - <secret>:9100
  - job_name: unbound
    honor_timestamps: true
    scrape_interval: 1m
    scrape_timeout: 10s
    metrics_path: /metrics
    scheme: http
    follow_redirects: true
    enable_http2: true
    static_configs:
      - targets:
          - localhost:9167
remote_write:
  - url: <secret>
    remote_timeout: 30s
    basic_auth:
      username: <secret>
      password: <secret>
    follow_redirects: true
    enable_http2: true
    queue_config:
      capacity: 10000
      max_shards: 50
      min_shards: 1
      max_samples_per_send: 2000
      batch_send_deadline: 5s
      min_backoff: 30ms
      max_backoff: 5s
    metadata_config:
      send: true
      send_interval: 1m
      max_samples_per_send: 2000
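To rule out any mismatch between this file and what the server actually loaded, the running configuration can also be read back from the local instance (a small sketch; it assumes the localhost:9090 target above is the instance in question):

curl -s http://localhost:9090/api/v1/status/config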
Are you using timestamps in your nvme.prom files?
No, I don't use timestamps.
nvme.prom is generated by the prometheus-community/node-exporter-textfile-collector-scripts/nvme_metrics.py without timestamps.
Sample: nvme.prom.gz
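For what it's worth, here's a quick way to double-check that (just a sketch; an explicit timestamp in the exposition format would be a trailing integer after the sample value, so this should print nothing for the attached sample):

# no output means no line carries an explicit timestamp
zcat nvme.prom.gz | grep -E '^[A-Za-z_:][A-Za-z0-9_:]*(\{[^}]*\})? [^ ]+ [0-9]+$'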
I have run out of explanations. I cannot imagine that the staleness handling doesn't work as expected in such a vanilla situation (or in other words: I don't think this is a bug in Prometheus).
I strongly suspect something subtle is off in your setup. Feel free to further troubleshoot and provide more evidence here. Maybe somebody else is seeing the same.
Maybe we are back at the assumption that your node exporter is indeed exposing all those series at the same time for a short period, and your Prometheus happens to scrape in exactly that moment (while your curl wasn't so "lucky"). Just another wild guess.
exposing all those series at the same time for a short period, and your Prometheus happens to scrape in exactly that moment (while your curl wasn't so "lucky")
nvme.prom is written through sponge:
/usr/share/prometheus-node-exporter-collectors/nvme_metrics.py | sponge /var/lib/prometheus/node-exporter/nvme.prom
so the file is only written once the script has finished, and I think we can rely on its integrity over time.
The overlap lasts for three intervals/minutes every time, but curl shows the change immediately. The values did not flap over time, neither in the /metrics output from curl nor in the TSDB. The labels change only once, and the interval between changes is constant. I think we can also exclude stochastic processes.
Perhaps the node_exporter is being scraped twice during that time period. The target from before the reboot is still being scraped at the same time as the new one. Do node-exporters run in containers?
You might want to examine the prometheus_sd_discovered_targets, prometheus_target_scrape_pool_targets, and node_textfile_mtime_seconds metrics.
I notice that curl is scraping localhost:9100, but Prometheus is pointed to chronos:9100. Perhaps curl doesn't accurately reflect the scraping. Maybe the target scraped by Prometheus exposes the old+new series during that time period.
You can take a look at the scrape_series_added and scrape_samples_scraped metrics.
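For example, something along these lines (only a sketch; the job label, scrape_job value, host, and time window are assumptions to adapt) would show whether extra series or a second target show up around the reboot:

# range-query the scrape bookkeeping for the node job over the last hour
curl -sG 'http://localhost:9090/api/v1/query_range' \
  --data-urlencode 'query=scrape_series_added{job="node"}' \
  --data-urlencode "start=$(date -d '-1 hour' +%s)" \
  --data-urlencode "end=$(date +%s)" \
  --data-urlencode 'step=60'
# and the current target count per scrape pool
curl -sG 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=prometheus_target_scrape_pool_targets{scrape_job="node"}'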
I've created synthetic metrics that swap labels at 1m, 5m, and 15m intervals (bvasiliev/prometheus-overlap-test) to localize the issue.
I found that, with uninterrupted operation of Prometheus, label changes are handled as expected.
But the issue occurs when a label is changed while Prometheus is down and the downtime is shorter than a scrape_interval. So yes, under these conditions the node_exporter is being scraped twice.
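For reference, the test boils down to something like the following (a minimal sketch, not the actual contents of bvasiliev/prometheus-overlap-test; the output path assumes the textfile-collector directory mentioned earlier in this thread):

#!/usr/bin/env bash
# Write a textfile metric whose label value flips every minute, so a label
# change can be lined up against Prometheus restarts and shutdowns.
OUT=/var/lib/prometheus/node-exporter/overlap_test.prom
while true; do
  if (( ($(date +%s) / 60) % 2 == 0 )); then
    dev="test0"
  else
    dev="test1"
  fi
  printf 'overlap_test_info{device="%s"} 1\n' "$dev" | sponge "$OUT"
  sleep 60
done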
Below are the charts showing the Prometheus restart at 09:45 and the shutdown for one minute at 09:55:
I hadn’t realized that Prometheus itself was restarted. In such a case, because Prometheus cannot write the staleness markers, I believe it will fall back to the old behavior of "keeping the series for X intervals before dropping it." This behavior seems to be what you’re observing.
By the way, good debugging effort!
I hadn’t realized that Prometheus itself was restarted
Oh, that little detail I didn't highlight :-)
I temporarily masked this with a start delay in the prometheus-node-exporter.service unit:
[Service]
ExecStartPre=/usr/bin/sleep 60
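In case it's useful to anyone else, the usual way to carry such an override is a drop-in rather than editing the packaged unit (a sketch; the delay is the same 60 seconds as above):

# create a drop-in override for the packaged unit and add the lines above
sudo systemctl edit prometheus-node-exporter.service
# restart so the ExecStartPre delay takes effect
sudo systemctl restart prometheus-node-exporter.service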
I'm not sure if this staleness behavior is a bug, but I think it may not be uncommon in scenarios where Prometheus is in agent mode.
For staleness handling to work across restarts, we would need to reconstruct a lot of state from the scraping… not sure if it is worth the effort.
I'll close this issue as we now know what happens.
A "restart-resilient staleness handling" feature would be quite involved and require a design doc. If somebody wants to work on it, please let us know.