Comments (12)
Bryan and I discussed on Slack; over text we'd misunderstood each other. His config changes were definitely valid given the low scrape load he had. Remote write has some gaps when it comes to sending data in a timely manner in that kind of scenario; the hard-coding of the reshard check ticker is just one of those gaps.
I'll be opening a few issues soon for some things we can try out; a number of people are interested in taking on smaller tasks in remote write, and those could be good first issues for them.
from prometheus.
I notice in a similar previous issue @csmarchbanks said in #7124 (comment):

> At very low remote write volumes it is very easy to go through multiple batch send durations without new samples coming in

Which matches the situation in this case (volume was about 500 series, scraped every 5s).

Anyway, this judgement seems highly dependent on the value of BatchSendDeadline; is it unreasonable to set it to 100ms?

> low volumes are unlikely to ever reshard above the minimum anyway
More context: the machine was occasionally under heavy CPU load; I believe this generated a backlog on the send queue.
(Sadly I don't have metrics to confirm this)
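For reference, the knobs being discussed live under `queue_config` in the `remote_write` section of the Prometheus configuration. A minimal sketch; the URL is a placeholder, and the values shown are the ones from this thread, not recommendations:

```yaml
remote_write:
  - url: https://remote-storage.example.com/api/v1/write  # placeholder endpoint
    queue_config:
      max_samples_per_send: 2000   # the default batch size mentioned in this thread
      batch_send_deadline: 100ms   # default is 5s; the lowered value under discussion
```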
Is this good, @bboreham, or do we need to wait for others' review?
Best to comment on the PR within the PR itself.
Okay, @bboreham.
> Which matches the situation in this case (volume was about 500 series, scraped every 5s).
>
> Anyway, this judgement seems highly dependent on the value of BatchSendDeadline; is it unreasonable to set it to 100ms?

I would have suggested/assumed people would drop the batch size, max_samples_per_send, a lot lower before dropping the BatchSendDeadline that low.
I don't think that helps. In my example, Prometheus scraped 509 series every 5 seconds; I wanted it to send those 509 series without waiting 5 seconds.
If I reduce max_samples_per_send from the 2000 default to, say, 100, it will send 500 of them, but I still want it to send all 509 series.
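The arithmetic behind that objection can be sketched with plain integer division (this is not Prometheus code, just the example's numbers): with 509 samples per scrape and max_samples_per_send lowered to 100, five full batches go out immediately, while the remaining 9 samples sit in the buffer until batch_send_deadline expires.

```go
package main

import "fmt"

// Sketch of the batching arithmetic from the example above (not Prometheus
// code): full batches are sent as soon as they fill up; the remainder waits
// up to batch_send_deadline before being flushed.
func splitIntoBatches(samples, maxPerSend int) (fullBatches, remainder int) {
	return samples / maxPerSend, samples % maxPerSend
}

func main() {
	full, rem := splitIntoBatches(509, 100)
	fmt.Printf("%d full batches sent immediately; %d samples wait for the deadline\n", full, rem)
	// prints: 5 full batches sent immediately; 9 samples wait for the deadline
}
```

So lowering the batch size alone never eliminates the wait: whatever the divisor, there is almost always a remainder that can only be flushed by the deadline.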
Personally I would still lower the batch size before the send deadline timeout, but even so I think guarding against excessive resharding checks is a valid change. Reviewing the PR again today.
I don't think I am understanding your point. What would you lower max_samples_per_send to, given my example?
> I don't think I am understanding your point. What would you lower max_samples_per_send to, given my example?

Something below 509? Or even just 1000 or so, and lower the send deadline to ~1s. I don't know exactly what your use case is, but scraping a small number of samples and then always sending all of them via remote write ASAP isn't really a situation we've designed for. Setting the send deadline to 100ms is just a workaround that's worked in your case.
This is separate from the issue of the resharding check happening too often when the send deadline is < 5s, which I don't have any issue with merging a fix for.
> lower the send deadline to ~1s

OK, that case still shows the issue I am talking about, because two times 1 second is way less than the 10s interval it checks at.

> always sending all of them via remote write ASAP

That isn't what I asked for; I asked for sending "in a timely manner" and "without waiting 5 seconds".

> Setting the send deadline to 100ms is just a workaround that's worked in your case.

I disagree; it matches what I wanted.
Bryan
Yeah, after seeing and understanding the code, @bboreham's assumption here is valid: we don't need to wait for 5 seconds, and as @bboreham asked, data should be fed in a timely manner. We also need to remove the hard-coding of the resharding check ticker.