Comments (13)
One idea would be to do this via console templates: we could add a function that takes the output of a query and produces the text/protobuf exposition format. We'd also need some hook to set the content type.
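For context, Prometheus console templates are Go templates with a `query` function available, and each returned sample exposes `.Labels` and `.Value`. A minimal, illustrative sketch (not the linked example verbatim, and with no content-type handling) of emitting something like the text exposition format from a template might look like:

```
{{ range query "up" }}up{instance="{{ .Labels.instance }}",job="{{ .Labels.job }}"} {{ .Value }}
{{ end }}
```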
from prometheus.
@brian-brazil This would certainly be possible, but since this is an integral feature, it arguably deserves its own specialized and optimized implementation and endpoint, no?
A separate endpoint would be best.
A way to do this via console templates, until we've got a full-on solution: https://github.com/prometheus/prometheus/blob/master/consoles/federation_template_example.txt
The solution might include "streaming", i.e. transferring more than one timestamped sample per time series during a single scrape of a lower-level Prometheus server by a higher-level one.
I'm a little wary of doing more than one value. The main reason you'd need that would be if a previous scrape failed, and requesting more data from a server that failed last time may lead to a cascading failure.
There are two common use-cases for federation:
- Scaling, as folks have mentioned. Given Prometheus's scaling properties, this is probably the rarer use case
- Aggregating data across zones of some form
It's generally important to monitor a target from "nearby": you want to run Prometheus as close to the target, in the network sense, as possible. It's generally a good idea to run it in the same failure domain as well, so that your monitoring goes down exactly when your system goes down rather than alternating with it. This avoids the situation where your system is up while your monitoring is down, minimizes the impact of netsplits on monitoring, etc.
In the case of multiple zones though it's often useful to cross-correlate data across those zones. So you'd use the federation to pull the data in to a "global" level prometheus. In this case it'd be fairly common for a scrape to fail due to a network-level event (fiber cut, router failure, etc.)... and it kind of sucks to just lose that data from your global level prometheus instance when it still exists in the lower level monitoring.
I should note here that in the prometheus model there isn't a global store to pull from, so if the data isn't in that top-level right now, you'll never get it there. You'd end up having to do periodic dumps and imports from your lower-level promethei to fill in holes for network outages... ick :(.
I'd suggest pulling data in in a more "streaming" fashion with a bounded window. The default bound can be relatively small to avoid the cascading problem; this way it should at least be able to bridge small network "glitches" like those frequently seen on intercontinental links. If someone wants to expose themselves to cascading failures to handle a cruddy network, they can extend the window.
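The bounded-window idea above can be sketched as follows. This is a hypothetical illustration, not Prometheus code: at each federation scrape, request samples from the time of the last successful scrape forward, capped at a configurable maximum so a long outage can't translate into an arbitrarily large (cascade-inducing) query. All names here are made up for the sketch.

```python
# Hypothetical sketch of a bounded backfill window for federation scrapes.
# MAX_WINDOW caps how far back we ask the lower-level server for samples,
# so a long outage can't turn into an oversized query that overloads it.
MAX_WINDOW = 120.0  # seconds; keep small to limit cascading-failure risk


def scrape_window(last_success: float, now: float,
                  max_window: float = MAX_WINDOW) -> tuple:
    """Return the (start, end) time range to request from the lower level.

    The start is the time of the last successful scrape, but never more
    than max_window seconds in the past.
    """
    start = max(last_success, now - max_window)
    return start, now


# Last scrape succeeded 30s ago -> request just the last 30s.
assert scrape_window(last_success=1000.0, now=1030.0) == (1000.0, 1030.0)

# Last success was 10 minutes ago -> the window is capped at MAX_WINDOW,
# trading a gap in the data for protection of the lower-level server.
start, end = scrape_window(last_success=400.0, now=1000.0)
assert end - start == MAX_WINDOW
```

The design trade-off is explicit: a larger `max_window` bridges longer outages but increases the worst-case load on an already-recovering server.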
Oh, also, this way you can handle high-frequency data without having a high-frequency poll at the federation layer.
I don't think a bounded window is sufficient to prevent cascading failures. Even if it requests at most two data points, that means the load on the lower-level Prometheus server could double during an outage - which would be bad.
My experience is that gaps due to small network blips don't usually cause problems in practice. I'd try to avoid putting anything critical in a global Prometheus, due to the fundamental unreliability of the WAN (and data appearing a bit back in time may cause weirdness with rules) - it's more for general information, with the per-cluster/failure-domain Prometheus servers being the place you usually go first.
What about higher-frequency data? It seems the federation scrapes would have to happen at least as fast as the fastest scrape the lower-level Prometheus is doing. Which, assuming Prometheus is as well written as I think it is (I'm new to the community)... could be very, very fast.
At the global level, high frequency data is much less useful than at a local level.
High-frequency data (on the order of seconds) is primarily useful for debugging things like microbursts for which you usually want to look at a handful of variables in roughly one datacenter at a time to figure things out, and reduce the impact of the various race conditions inherent in monitoring.
At a global level you tend to want a wide range of metrics at no more than a minute granularity. A well-instrumented server will tend to have hundreds to thousands of metrics, and many thousands of time series. Doing scrapes more often will make you run into performance problems sooner without much benefit from the increased frequency; rather, it's the breadth of instrumentation that helps you pin down all bar the microburst-level issues. If anything you'd be looking at downsampling a bit at the global level.
This has been implemented: http://prometheus.io/docs/operating/federation/
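For reference, the implemented `/federate` endpoint is scraped like any ordinary target, with `match[]` URL parameters selecting which series to pull; a typical scrape config from the federation docs looks like the following (the target address is a placeholder):

```yaml
scrape_configs:
  - job_name: 'federate'
    scrape_interval: 15s
    honor_labels: true          # keep the original labels from the source server
    metrics_path: '/federate'
    params:
      'match[]':
        - '{job="prometheus"}'  # all series from the "prometheus" job
        - 'job:some_metric:sum' # a pre-aggregated recording rule
    static_configs:
      - targets:
        - 'source-prometheus:9090'
```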
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.