Comments (9)
OK, after a Slack discussion with @fabxc, we learned that this enhancement is a nice-to-have, to detect lost data at e.g. query or read-repair time. With that in mind, we expect that a custom entropy io.Reader that e.g. starts at a true-random n and yields n+1 on each subsequent call can provide this continuity without comparative loss of uniqueness within a millisecond.
We should reserve a few high bits in the original n to allow some headroom before hitting the overflow limit, at which point we can transparently hand off to a new logical stream in the ingester, or hard-disconnect the client and force a new physical stream.
from oklog.
OK Log uses the time component of the ULID as the primary key for all queries, so each record must retain some form of timestamp in its per-record prefix. I think a better angle would be to create e.g. a ULID+ that swaps the random component of the ULID for a logical clock. For example, ULID is currently
+-------+-------+-------+-------+
|         32b time (hi)         |
+-------+-------+-------+-------+
| 16b time (lo) |  16b random   |
+-------+-------+-------+-------+
|          32b random           |
+-------+-------+-------+-------+
|          32b random           |
+-------+-------+-------+-------+
we could change to e.g.
+-------+-------+-------+-------+
|         32b time (hi)         |
+-------+-------+-------+-------+
| 16b time (lo) | 16b random ID |
+-------+-------+-------+-------+
|    32b logical clock (hi)     |
+-------+-------+-------+-------+
|    32b logical clock (lo)     |
+-------+-------+-------+-------+
This preserves important stateless properties of record identifiers, including lexicographic sort (which is also deeply ingrained in the data model), while also granting some (all?) of the desired causality properties. WDYT?
Are ULIDs generated in the ingesters or in the forwarders?
Ingesters for sure. We want to be able to support commodity log shippers.
That's certainly an interesting option.
We need to be able to map each record to its stream, though, to actually be able to detect potential holes in a stream. If the timestamp is record-specific again, the only thing left to identify the stream would be the 16b random ID. (If it were per-record, the per-producer ordering guarantee would be lost again.)
16 bits of entropy seems too low to have confidence that no two streams are created with the same identifier by accident.
So having the per-record timestamp back in front sounds good. But I think the random component needs enough entropy to be unique for each stream. Assuming 128 bits is the generally accepted amount for UUIDs, we may not get around the overhead in that case.
I might be getting something wrong though.
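A quick birthday-bound estimate backs up the concern about 16 bits: with only a few hundred concurrent streams, a duplicate ID becomes likely. A small illustration (the collision helper is hypothetical):

```go
package main

import (
	"fmt"
	"math"
)

// collision approximates the birthday-bound probability that at least
// two of k streams draw the same 16-bit random ID:
// p ≈ 1 - exp(-k(k-1) / (2 * 2^16)).
func collision(k float64) float64 {
	return 1 - math.Exp(-k*(k-1)/(2*65536))
}

func main() {
	// With ~300 concurrent streams, the odds of a duplicate 16-bit ID
	// already reach roughly 50%.
	fmt.Printf("%.2f\n", collision(300))
}
```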
Would there be a substantial advantage in reserving some space in a possible ULID+ for a topic identifier (hash value?)?
I think Peter feels relatively strongly about not growing the ULID(+) size if avoidable and we probably cannot fit a reliable hash in there – with #116 it already feels pretty maxed out.
The main benefit of topics is probably that you can use them to group data on disk and avoid reading irrelevant records to begin with. So I'd imagine that one would just add a directory per topic and store segments there as they are today.
@fabxc Do you envision a topic prefix after the ULID on a per-record basis, or is it not necessary?
If we have that information from elsewhere (e.g. directory, API args), I think it's unnecessary overhead. Though it may make sense to add it on the fly in cases where we are emitting records from multiple topics at once (queries, maybe whatever replication turns out to be feasible?).