Comments (6)
Ok, so it’s probably good to agree on terminology for a couple of types of blocks/definitions of things. These are mostly things you’re already doing, so I just want to make sure I follow them correctly.
safe_blocks: easy one here - all blocks that are safe in the global view. Starts at some “safe tip”, and then goes all the way back to the genesis block.
unsafe_blocks: (or orphaned) blocks that are definitely not safe. This means they are not a child of the safe_tip, and they are not a safe block.
questionable_blocks: (name pending) blocks that are children of the safe tip, and could be safe or could be not safe - we aren’t sure yet.
safe_to_tip_length: length from the last safe block (or genesis if there are no safe blocks) to the current fork choice. This should be pretty easy to measure - just count back from the tip until you reach the safe block.
safe_to_tip_width: number of “forks” in the questionable blocks. This is a bit harder to measure. I think you can just do a search through the children dict in the global view, starting from the last safe block. Whenever a block has more than one child, add to the width (if two children, add one; if three, add two; etc.).
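As a rough sketch of the two measurements above (the `parent` attribute, `children` dict, and block arguments are assumed names for illustration, not the actual cbc-casper API):

```python
def safe_to_tip_length(fork_choice, safe_tip):
    """Count blocks from the current fork choice back to the last safe block."""
    length = 0
    block = fork_choice
    while block is not None and block != safe_tip:
        length += 1
        block = block.parent  # assumes each block stores a pointer to its parent
    return length


def safe_to_tip_width(children, safe_tip):
    """Count the extra forks among the questionable blocks: a block with
    k > 1 children contributes k - 1 to the width."""
    width = 0
    stack = [safe_tip]
    while stack:
        block = stack.pop()
        kids = children.get(block, [])
        if len(kids) > 1:
            width += len(kids) - 1
        stack.extend(kids)
    return width
```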
I think it might make sense to run these experiments with some fixed weights — I have a feeling this accounts for most of the variance in the numbers we are seeing. At least, seems like that would be the case in random message prop, but this would be an interesting experiment to see if this is the case.
Possibly Interesting Experiments:
- How do validator weights affect the percent of orphaned blocks we get with random message propagation? What weight distributions lead to the lowest number of orphaned blocks (I bet there is some interesting result here).
- A variant on the experiment above is having validators still propagate messages randomly but do so at a frequency proportional to their weights (aka, if Jim has 2x more weight, I send blocks to him 2x more often). I bet we would see some interesting results from this.
- Find the ratio of safe_blocks/unsafe_blocks as the number of messages in a view goes to some large number, say 1000 (this gives us interesting information about different propagation schemes!)
- Interesting one: how the ratio of safe_blocks/unsafe_blocks affects avg(safe_to_tip_length), where the average is taken over the lengths measured after every message. The theory here is that if we have more orphans (i.e. more unsafe_blocks, or a lower ratio), we are likely to have a shorter avg(safe_to_tip_length). Obviously, there are some massive exceptions here (nofinal), but I bet there is some interesting data here.
- How the number of validators affects the safe_to_tip_width. I think it’s probably pretty obvious, but it would be interesting to see nonetheless.
@djrtwo let me know what you think. Will continue to post more here.
from cbc-casper.
A few thoughts/questions:
- For safe_to_tip_length it's not necessarily the case that the estimate is the deepest block in terms of links, right? Just the deepest in terms of weight. So you'd have to check depth against all the `latest_messages`.
- This begs the question: is depth in terms of weight a relevant way to consider this tree?
- Should we add a height to message in general (or at least to the Block version of message when the distinction occurs)? This can be justified in that it's 1) very blockchain-y and 2) would be useful as a cache to calculate depths easily. Can easily make a new message's height be +1 of the estimate height. We can then get the `questionable_blocks` far more cheaply and can do relative depth calculations cheaply.
- Average or effective branching_factor might be a good way to assess the complexity of the child tree. I think we can get this value by getting the average length of the children of the questionable_blocks in the `children` dict. We can also do a [rough approximation](http://ozark.hendrix.edu/~ferrer/courses/335/f11/lectures/effective-branching.html). This measure starts framing the depth in terms of some sort of "width".
- safe_to_tip_width would just be a sum of the lengths of the children of the `questionable_blocks + safe_tip`, right?
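A minimal sketch of that average branching factor (assuming the same `children` dict and an iterable of `questionable_blocks` - names are illustrative, not the actual cbc-casper API):

```python
def average_branching_factor(children, questionable_blocks):
    """Average number of children over the questionable blocks
    (leaves count as 0 children)."""
    counts = [len(children.get(block, [])) for block in questionable_blocks]
    return sum(counts) / len(counts) if counts else 0.0
```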
We've just been discussing "global orphan rate", but it might be interesting to also assess an individual validator's orphan rate. Maybe finding the orphan rate as a function of validator weight in some distribution of validator weights (under some message propagation scheme).
For safe_to_tip_length it's not necessarily the case that the estimate is the deepest block in terms of links, right? Just the deepest in terms of weight. So you'd have to check depth against all the latest_messages.
Yeah, good point. But I guess the point here isn't to find the max length of the tree - it's more so we can figure out the overhead the fork choice can have, so it seems like the estimate is the most relevant length. Also, another reason overall max height isn't so relevant is that a single validator could totally skew it (just by making tons of blocks :) ).
Should we add a height to message in general (or at least to the Block version of message when the distinction occurs)? This can be justified in that it's 1) very blockchain-y and 2) would be useful as a cache to calculate depths easily. Can easily make a new message's height be +1 of the estimate height. We can then get the questionable_blocks far more cheaply and can do relative depth calculations cheaply.
There is a height in the blocks currently -- but I think it's only used for displaying. It also isn't +1 of the estimate height (which I think makes more sense), but +1 from the. I think this probably has to do with the display looking prettier. I'll do an experiment now to see if we can change it, or if we have to add a new field.
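A minimal sketch of the proposed caching scheme (hypothetical `Block` class for illustration, not the actual cbc-casper message class):

```python
class Block:
    def __init__(self, estimate):
        # estimate is the parent block this block points at (None for genesis)
        self.estimate = estimate
        # cache height as +1 of the estimate's height, per the suggestion above,
        # so relative depth calculations become O(1) lookups
        self.height = 0 if estimate is None else estimate.height + 1
```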
Average or effective branching_factor might be a good way to assess the complexity of the child tree. I think we can get this value by getting the average length of the children of the questionable_blocks in the children dict. We can also do a [rough approximation](http://ozark.hendrix.edu/~ferrer/courses/335/f11/lectures/effective-branching.html). This measure starts framing the depth in terms of some sort of "width".
Sure - I think there's probably also some simple algorithm w/ a stack that can just compute the number of leaves in the "children" tree.
safe_to_tip_width would just be a sum of the lengths of the children of the questionable_blocks + safe_tip, right?
I think it would be the number of leaves of the questionable_blocks subtree.
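That stack-based leaf count might look something like this (same assumed `children` dict mapping each block to its list of child blocks - an illustrative sketch, not the cbc-casper API):

```python
def count_leaves(children, root):
    """Iteratively count the leaves of the subtree rooted at `root`
    using an explicit stack instead of recursion."""
    leaves = 0
    stack = [root]
    while stack:
        block = stack.pop()
        kids = children.get(block, [])
        if not kids:
            leaves += 1  # no children: this block is a leaf
        else:
            stack.extend(kids)
    return leaves
```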
We've just been discussing "global orphan rate", but it might be interesting to also assess an individual validator's orphan rate. Maybe finding the orphan rate as a function of validator weight in some distribution of validator weights (under some message propagation scheme).
Hmm, not sure I see the benefit. Anything that's an orphan in the global view will necessarily end up being an orphan in a user's view.
Closing this. The experiment is in master. Refer to #60 for actually getting the good stuff out of the code.