Comments (3)
I think this discussion becomes even more important with #298 and #296. Right now we are exporting a flat JSON document, which is fine for now, but should we revisit this since we are adding more fields?
Thoughts?
@whitleykeith @jtaleric @dry923
from benchmark-wrapper.
Elasticsearch is built on top of Lucene, which stores data in an inverted index. Simply put, Lucene maps terms -> documents instead of documents -> terms. Lucene doesn't really support nesting as it only supports numeric, binary, and text fields. ES does a lot of optimizations on top of Lucene to support nesting but I'm not sure about the index/search performance of them as I've only ever used Solr. But in general I would default to not nesting given the base technology ES uses (to a point).
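To make the terms -> documents point concrete, here is a minimal sketch of an inverted index in Python. This is just the idea, not Lucene's actual implementation; the sample documents are made up:

```python
from collections import defaultdict

# Forward view: document id -> text (the made-up sample corpus).
docs = {
    1: "uperf network benchmark",
    2: "fio disk benchmark",
}

# Inverted view, the way Lucene stores data: term -> set of document ids.
inverted = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        inverted[term].add(doc_id)

# Looking up a term is now a single dictionary hit instead of a scan
# over every document.
print(sorted(inverted["benchmark"]))  # -> [1, 2]
```

Nested sub-objects don't map naturally onto this term -> documents structure, which is why ES has to do extra work on top of Lucene to support them.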
ES has some guidelines on doc structure that are generally pretty solid. I think we should start there and look at defining the common core fields for all benchmarks, and then specific core fields for each benchmark.
I think of the common core fields as things like uuid, run_id, start_time, end_time, etc.: things that every benchmark should have.
I think every common core field should be left at the document root with no nesting. Every doc should have these fields defined, and we should reject docs that don't. After that, we should look at how we search the data; fields that are heavily used in queries should be moved to the root of the doc. For instance, I think all the environment information of a run (i.e. cluster_name, platform, etc.) should be as close to the root of the doc as possible to make searches easier and more performant.
The counter to that is when fields may or may not exist from doc to doc, or are specific to the environment/benchmark. For instance, not every snafu run may be running in k8s, so cluster_name, etc. may not be collected. We don't necessarily want those to be flat, because we want a uniform root doc structure for better readability and indexing.
Given that, I think we can afford to nest somewhat. I think a good starting schema would look something like this:
{
  "uuid": "str",
  "run_id": "str",
  "start_time": "datetime",
  "end_time": "datetime",
  "duration": "Number",
  "type": "str",
  "iteration": "Number",
  "kubernetes": {
    "cluster_version": "str"
  },
  "openshift": {
    "cluster_version": "str",
    "cluster_name": "str",
    "platform": "str",
    "network_type": "str"
  },
  "config": {
    "foo": "bar"
  },
  "data": {
    "foo": "bar"
  }
}
Where config and data would have benchmark-specific schemas. That way we nest the fields that aren't always there, while still not going so deep that the docs become hard to query. Nesting config and data is fine because we don't really search on those fields; rather, we retrieve them from a query.
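For reference, here is a sketch of what an explicit index mapping for that schema might look like. The field types ("keyword", "date", "long") and the choice to leave config/data to dynamic mapping are assumptions, not something decided in this thread:

```python
import json

# Hypothetical explicit mapping for the schema sketched above.
# Sub-objects like "openshift" use ES's default object mapping;
# config/data are omitted here so dynamic mapping can pick up their
# benchmark-specific fields.
mapping = {
    "mappings": {
        "properties": {
            "uuid": {"type": "keyword"},
            "run_id": {"type": "keyword"},
            "start_time": {"type": "date"},
            "end_time": {"type": "date"},
            "duration": {"type": "long"},
            "type": {"type": "keyword"},
            "iteration": {"type": "long"},
            "kubernetes": {
                "properties": {
                    "cluster_version": {"type": "keyword"},
                }
            },
            "openshift": {
                "properties": {
                    "cluster_version": {"type": "keyword"},
                    "cluster_name": {"type": "keyword"},
                    "platform": {"type": "keyword"},
                    "network_type": {"type": "keyword"},
                }
            },
        }
    }
}

print(json.dumps(mapping, indent=2))
```

A body like this would be sent with a `PUT /<index>` create-index request, so the core fields get consistent types across all benchmarks instead of whatever dynamic mapping infers first.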
I like that structure, and I think you have a really good point about modeling after ECS. I think it might be worth doing something like this:
{
  "uuid": "str",
  "run_id": "str",
  "start_time": "datetime",
  "end_time": "datetime",
  "duration": "Number",
  "type": "str",
  "iteration": "Number",
  "environment": "flattened",
  "config": "flattened",
  "data": "flattened"
}
If we create an environment key at the root and take advantage of the flattened data type, then we don't have to worry as much about the structure of the document in order to have successful searches. At least, that's my understanding, but I could be wrong.
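The flattened type is declared in the mapping like any other field. A sketch of what that mapping could look like, using the field names from this thread (the non-flattened types are assumptions):

```python
import json

# Sketch of a mapping using ES's "flattened" field type: each of
# environment/config/data is indexed as one field whose leaf values
# all become keywords, so their inner structure doesn't have to be
# mapped up front and can vary per benchmark.
mapping = {
    "mappings": {
        "properties": {
            "uuid": {"type": "keyword"},
            "run_id": {"type": "keyword"},
            "start_time": {"type": "date"},
            "end_time": {"type": "date"},
            "environment": {"type": "flattened"},
            "config": {"type": "flattened"},
            "data": {"type": "flattened"},
        }
    }
}

print(json.dumps(mapping, indent=2))
```

Queries can still target dotted keys inside a flattened field (e.g. `environment.cluster_name`) without those keys being predefined, though everything inside is treated as a keyword, so numeric range queries and aggregations on those values are limited.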