Comments (13)
Blast from the past! I'll close this now; there are a lot of tools nowadays to do this: InfluxDB, Prometheus, etc.
from community.
cc @wolfeidau
The main issue with the statsd + Graphite integration is that it's painfully hard to set up.
Node.js + LevelUp could provide a low-scale solution that Just Works with an `npm install`.
Also have a look at tsd and levelweb.
@mcollina exactly what I had in mind, a pure-Node statsd, though I think we'd need to make it more like an RRD to make it usable...
what does RRD stand for?
Round robin database http://en.wikipedia.org/wiki/RRDtool ?
If you want to expire stuff, you can try https://github.com/rvagg/node-level-ttl, expiring values older than X.
It works well, better than MongoDB's TTL support.
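As a toy illustration of the "expire values older than X" behaviour (this is not level-ttl's actual API; it's a plain-JS stand-in with made-up names):

```javascript
// Hypothetical stand-in for TTL-style expiry: a plain Map where each value
// carries an expiry timestamp, and reads skip anything past its deadline.
class TtlStore {
  constructor (defaultTtlMs) {
    this.defaultTtl = defaultTtlMs
    this.map = new Map()
  }
  put (key, value, ttlMs) {
    const ttl = ttlMs == null ? this.defaultTtl : ttlMs
    this.map.set(key, { value, expiresAt: Date.now() + ttl })
  }
  get (key, now = Date.now()) {
    const entry = this.map.get(key)
    if (!entry) return undefined
    if (now >= entry.expiresAt) {   // expired: behave as if deleted
      this.map.delete(key)
      return undefined
    }
    return entry.value
  }
}

const store = new TtlStore(1000)   // default TTL of 1 second
store.put('gaugor', 303)
```

The real module wraps a LevelUp instance and sweeps expired keys in the background, so no read-time check is needed by the caller.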
I had a shot at using LevelDB for storing data similar to how RRD and Graphite Whisper files work, and ran into a few challenges.
First, some background: RRD doesn't store the raw time series data; it stores a rolling series of values based on aggregates like average, mean, and percentile. RRD pre-allocates the buckets for the data being stored, say averages at 1-minute intervals for a week, 1-hour intervals for a month, and 1-day intervals for a year. When time series data is fed to RRD, it updates these buckets to reflect the changing average across those time periods.
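That bucketed-averages scheme can be sketched in plain JavaScript (all names here are illustrative; this is not RRDtool's actual implementation):

```javascript
// Rough sketch of the RRD idea: pre-allocated windows, each keeping a
// running average for its current interval rather than the raw samples.
function makeArchive (windows) {
  // windows: e.g. { '1min': 60, '1hour': 3600 } -> interval length in seconds
  const state = {}
  for (const name of Object.keys(windows)) {
    state[name] = { bucket: null, sum: 0, count: 0, avg: 0 }
  }
  return {
    update (tsSeconds, value) {
      for (const [name, interval] of Object.entries(windows)) {
        const w = state[name]
        const bucket = Math.floor(tsSeconds / interval)
        if (w.bucket !== bucket) {   // crossed into a new interval: reset
          w.bucket = bucket
          w.sum = 0
          w.count = 0
        }
        w.sum += value
        w.count += 1
        w.avg = w.sum / w.count      // rolling average for this window
      }
    },
    averages () {
      const out = {}
      for (const name of Object.keys(windows)) out[name] = state[name].avg
      return out
    }
  }
}

const archive = makeArchive({ '1min': 60, '1hour': 3600 })
archive.update(0, 10)
archive.update(30, 20)
archive.update(90, 40)   // new 1min bucket, still the same 1hour bucket
```

A real RRD also keeps a ring of past buckets per window; this only tracks the current one, to show the update path.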
So back to LevelDB: in my case I employed a rather simplistic sort of continuous map-reduce job, where data was fed in and rolled into the aggregates based on a trigger. This trigger had quite a bit of work to do; it would update each of the windows I had specified.
The resulting implementation's main flaw was that it stored way too much data, mainly because of how level map-reduce works.
I moved on to hacking on another implementation using the raw triggers and my own state table, but this again had issues with data volume and how much I churned through LevelDB.
That said, all is not lost: there are people using log-structured data stores for this kind of data. I just haven't had a chance to search for papers or ideas on how to adapt this type of data to LevelDB.
@mcollina just using the TTL isn't enough for a round robin database.
Ahhh @wolfeidau, thanks for the writeup. I did think of using map, though I had a feeling there'd be a better way that involves less recomputation. I did think it may involve some statistical optimisation, which would require some math-smarts. Nevertheless, here's my LevelDB RRD design:
So a Round Robin Database is essentially a circular buffer, and let's say our circular buffer can store 1 MB of data. We need to fit this data not in an array, but in a set of sorted key-value pairs. So, if we use the key naming convention `rrd-data-<epoch time>`, the data will be sorted from oldest to newest. Maybe an extra key `rrd-total-size` to store the current size (or we could use `approximateSize`).
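One wrinkle with that key convention (my note, not from the thread): LevelDB orders keys lexicographically, so a raw decimal epoch only sorts chronologically if it's fixed-width. A quick sketch of a zero-padded variant:

```javascript
// Keys in LevelDB sort lexicographically, so a raw decimal epoch breaks
// ordering at digit-length boundaries (e.g. '999' sorts after '1000').
// Zero-padding the timestamp makes lexicographic order match numeric order.
function rrdKey (epochMs) {
  return 'rrd-data-' + String(epochMs).padStart(13, '0')
}

// Sorting the strings now agrees with sorting the timestamps.
const keys = [rrdKey(999), rrdKey(1000), rrdKey(1397000000000)].sort()
```

A big-endian binary encoding of the timestamp would achieve the same thing with smaller keys.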
Note: each entry will look something like:

```json
{
  "counters": {
    "statsd.bad_lines_seen": 0,
    "statsd.packets_received": 98,
    "bucket": 26
  },
  "timers": {},
  "gauges": {
    "gaugor": 303
  },
  "timer_data": {},
  "counter_rates": {
    "statsd.bad_lines_seen": 0,
    "statsd.packets_received": 9.8,
    "bucket": 2.6
  },
  "sets": [
    [
      "5"
    ]
  ],
  "pctThreshold": [
    90
  ]
}
```
So the compression step would be: as the size reaches our arbitrary limit, we stream off as much of the oldest data (the top of the stream) as required to fit the new entries, and statistically combine the old values into a single value (the data would need to include the range somehow).
This would cause every batch of data to trigger this "compression" process, and I'm not sure how well this would perform.
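A minimal sketch of that compression step, assuming entries are kept oldest-first and "size" is just an entry count rather than bytes (the entry shape here is illustrative):

```javascript
// Each entry carries { from, to, sum, count } so the average can be
// recomputed later and the covered time range survives merging.
function compressOldest (entries, maxEntries) {
  while (entries.length > maxEntries && entries.length >= 2) {
    const a = entries.shift()   // two oldest entries...
    const b = entries.shift()
    entries.unshift({           // ...statistically combined into one
      from: a.from,
      to: b.to,
      sum: a.sum + b.sum,
      count: a.count + b.count
    })
  }
  return entries
}

const compacted = compressOldest([
  { from: 0, to: 10, sum: 100, count: 10 },
  { from: 10, to: 20, sum: 200, count: 10 },
  { from: 20, to: 30, sum: 40, count: 10 }
], 2)
```

Storing `sum` and `count` instead of a pre-computed average is what makes repeated merging lossless for the mean, which is the "include the range somehow" concern above.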
Thoughts?
Also note, this is just a rough outline; we would need to make the compression algorithm smarter, so we're not only combining the oldest data. Instead we need to combine the data by specific time periods. What we want, for example, is to use 33% of capacity for data from now to -1 month, another 33% for -1 month to -6 months, and the remainder for -6 months back to the beginning of time, with configuration to set these thresholds and time periods. It would also be handy to set the amount of granularity for each compression step, though maybe by default each compression step might follow `S` -> `M` -> `h` -> `d` -> `m` -> `y`. Also, we should only allow combining data of approximately the same granularity; for example, there'd be no point in combining `5s` with `1d`, the `5s` data entry would just be swallowed up. We want to go for evenly balanced data. Anyway, I'll probably run into more issues, though that's all I've got so far. Especially if we're scaling up to GBs of data...
Probably won't get time to start this for a few weeks, so if anyone else does, please post the link to the repo here 😄
Fair enough. I realized I was jumping the gun on closing a lot of issues. Changed my mind, re-opened, and moved to the `community` repo instead, because a lot of the discussions were really interesting to keep around.