
drops-of-diamond / diamond_drops


Work in progress on sharding and Ethereum 2.0 with enshrined-in-consensus data availability, written in Rust: a fast, safe, concurrent and practical programming language

License: The Unlicense

Rust 98.19% Shell 1.81%
ethereum diamond-drops scalability sharding rust ethereum-2 evm-abstraction enshrined-in-consensus-data data-availability blockchain

diamond_drops's Introduction

Parity - fast, light, and robust Ethereum client


Join the chat!

Get in touch with us on Gitter: Gitter: Parity Gitter: Parity.js Gitter: Parity/Miners Gitter: Parity-PoA

Or join our community on Matrix: Riot: +Parity

Official website: https://parity.io

Be sure to check out our wiki and the internal documentation for more information.


About Parity

Parity's goal is to be the fastest, lightest, and most secure Ethereum client. We are developing Parity using the sophisticated and cutting-edge Rust programming language. Parity is licensed under the GPLv3, and can be used for all your Ethereum needs.

Parity comes with a built-in wallet. To access Parity Wallet simply go to http://web3.site/ (if you don't have access to the internet, but still want to use the service, you can also use http://127.0.0.1:8180/). It includes various functionality allowing you to:

  • create and manage your Ethereum accounts;
  • manage your Ether and any Ethereum tokens;
  • create and register your own tokens;
  • and much more.

By default, Parity will also run a JSONRPC server on 127.0.0.1:8545 and a websockets server on 127.0.0.1:8546. This is fully configurable and supports a number of APIs.
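
For example, assuming the default HTTP endpoint is enabled, a standard JSON-RPC request such as eth_blockNumber can be sent with curl:

$ curl -X POST -H "Content-Type: application/json" \
      --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
      http://127.0.0.1:8545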

If you run into an issue while using Parity, feel free to file one in this repository or hop on our Gitter or Riot chat room to ask a question. We are glad to help!

For security-critical issues, please refer to the security policy outlined in SECURITY.MD.

Parity's current release is 1.9. You can download it at https://github.com/paritytech/parity/releases or follow the instructions below to build from source.


Build dependencies

Parity requires Rust version 1.23.0 to build.

We recommend installing Rust through rustup. If you don't already have rustup, you can install it like this:

  • Linux:

     $ curl https://sh.rustup.rs -sSf | sh

    Parity also requires gcc, g++, libssl-dev/openssl, libudev-dev and pkg-config packages to be installed.

  • OSX:

     $ curl https://sh.rustup.rs -sSf | sh

    clang is required. It comes with Xcode command line tools or can be installed with homebrew.

  • Windows Make sure you have Visual Studio 2015 with C++ support installed. Next, download and run the rustup installer from https://static.rust-lang.org/rustup/dist/x86_64-pc-windows-msvc/rustup-init.exe, start "VS2015 x64 Native Tools Command Prompt", and use the following command to install and set up the msvc toolchain:

      $ rustup default stable-x86_64-pc-windows-msvc

Once you have rustup installed, install Parity from the Snap store or download and build it from source.


Install from the snap store

In any of the supported Linux distros:

sudo snap install parity

Or, if you want to contribute testing the upcoming release:

sudo snap install parity --beta

And to test the latest code landed into the master branch:

sudo snap install parity --edge

Build from source

# download Parity code
$ git clone https://github.com/paritytech/parity
$ cd parity

# build in release mode
$ cargo build --release

This will produce an executable in the ./target/release subdirectory.

Note: if cargo fails to parse the manifest, try:

$ ~/.cargo/bin/cargo build --release

Note: if you receive the following error when compiling a crate:

error: the crate is compiled with the panic strategy `abort` which is incompatible with this crate's strategy of `unwind`

Cleaning the repository will most likely solve the issue; try:

$ cargo clean

This will always compile the latest nightly builds. If you want to build stable or beta, do a

$ git checkout stable

or

$ git checkout beta

first.


Simple one-line installer for Mac and Ubuntu

bash <(curl https://get.parity.io -Lk)

The one-line installer always defaults to the latest beta release. To install a stable release, run:

bash <(curl https://get.parity.io -Lk) -r stable

Start Parity

Manually

To start Parity manually, just run

$ ./target/release/parity

and Parity will begin syncing the Ethereum blockchain.

Using systemd service file

To start Parity as a regular user using systemd init:

  1. Copy ./scripts/parity.service to your systemd user directory (usually ~/.config/systemd/user).
  2. To configure Parity, write a /etc/parity/config.toml config file, see Configuring Parity for details.
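
As a hedged sketch (assuming the unit file keeps the name parity.service), the user service can then be reloaded, enabled and started with:

$ systemctl --user daemon-reload
$ systemctl --user enable parity
$ systemctl --user start parity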

diamond_drops's People

Contributors

jamesray1, ltfschoen, timxor


diamond_drops's Issues

Write benchmarks for existing code, e.g. blob serialization

Is your feature request related to a problem? Please describe.

geth-sharding had results (see the link below) of 80,000 ns/op for a round trip of serializing a blob to a collation body and deserializing it back in their benchmark, while other results ranged from roughly a hundredth of this to less than half of it. Note that #78 will need to be done first, so this might be a task suited for @tcsiwula when done with #78, if he's up for it.

Describe the solution you'd like

Write bench tests and compare with other results, e.g. in the geth-sharding issue below, to decide whether further optimizations are needed. Writing bench tests is a nightly feature; see https://doc.rust-lang.org/stable/unstable-book/library-features/test.html for more information. If results indicate that a large improvement is needed, e.g. worse than XML, then fixing this would be a more immediate priority; otherwise it could be postponed until, e.g., after a production release.
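
As a rough sketch of what such a bench test could look like on nightly (the type and function names here are hypothetical placeholders, not the project's actual API):

#![feature(test)]
extern crate test;

#[cfg(test)]
mod benches {
    use test::Bencher;

    // Hypothetical round-trip benchmark; swap the body for the real
    // blob -> collation body -> blob serialization calls.
    #[bench]
    fn bench_blob_roundtrip(b: &mut Bencher) {
        let blob = vec![0u8; 1024]; // placeholder payload
        b.iter(|| {
            let encoded = blob.clone(); // stand-in for serialize/deserialize
            test::black_box(encoded)
        });
    }
}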

Describe alternatives you've considered

Additional context
prysmaticlabs/prysm#139

Cute Animal Picture

put a cute animal picture link inside the parentheses

Node main function is not designed or written well

Is your feature request related to a problem? Please describe.
The node crate has a main function which only contains functionality for building the UML diagram. This is problematic because it doesn't adequately link to the rest of the functionality in the crate, and there is no error handling, which makes it very difficult to fix errors when a panic occurs. That has happened a few times now, and I have had to spend hours trying to find the cause of the error by trial and error. Additionally, this points to the need to do test-driven development, which may seem harder, but may be worth it in the long run.

Describe the solution you'd like
Write a new main function and rename the current one to make_uml_diagram. Strive to do TDD from now on; it will probably only get harder the longer we put off doing that.
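
A minimal sketch of the shape this could take (run() and the error type are placeholders for whatever the node crate ends up exposing):

// Hedged sketch only: report errors instead of panicking.
fn main() {
    if let Err(e) = run() {
        eprintln!("error: {}", e);
        std::process::exit(1);
    }
}

fn run() -> Result<(), Box<dyn std::error::Error>> {
    // node start-up logic goes here; UML generation moves to make_uml_diagram()
    Ok(())
}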


Full sync

Be able to download and verify all the collation bodies and headers in one shard, a subset of shards, or all shards.

cargo run -- mode --help doesn't work

> Executing task: cargo 'run -- mode --help' <

error: no such subcommand: `run -- mode --help`
The terminal process terminated with exit code: 101
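
The error suggests that the task runner passed 'run -- mode --help' to cargo as a single quoted argument, so cargo looked for a subcommand of that name. Invoking cargo directly with the arguments unquoted should work (assuming the mode subcommand is available, as in the CLI help output in the related issue):

$ cargo run -- mode --help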

Create a subclient / CLI options for setting up a proposer node.

Refactor into smaller issues/tasks (probably have the shard_ID as an option for each command); a hedged CLI sketch follows this list:

  • options for register_proposer: --deposit=<ETH_value>, return result, or errors for asserts
  • options for deregister_proposer(): no options, return true or errors for asserts
  • release_proposer(): return result: no options, return true or errors for asserts
  • proposer_add_balance: --deposit=<ETH_value>, return result, or errors for asserts
  • proposer_withdraw_balance: --withdraw=<ETH_value>, return result, or errors for asserts
  • During period period - lookahead_length to period period - 1:
    • Collects blobs from the shard network
  • In period period:
    • Prepare a proposal with proposer's address, bid and signature
    • Reveal the proposal to the shard network
    • Broadcast the collation body (TBD) to the shard network
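
A hedged sketch of how the proposer subcommand and a --deposit option might be declared with clap 2.x (clap-2.31.2 appears in the project's build log); the names and structure here are illustrative, not the final CLI:

extern crate clap;
use clap::{App, Arg, SubCommand};

fn main() {
    let matches = App::new("cli")
        .subcommand(
            SubCommand::with_name("proposer")
                .about("Set up a proposer node")
                .arg(Arg::with_name("shard_id")
                    .long("shard-id")
                    .takes_value(true)
                    .help("Shard to operate on"))
                .arg(Arg::with_name("deposit")
                    .long("deposit")
                    .takes_value(true)
                    .help("ETH value to deposit when registering")),
        )
        .get_matches();

    if let Some(sub) = matches.subcommand_matches("proposer") {
        // e.g. call register_proposer with the parsed deposit
        println!("deposit: {:?}", sub.value_of("deposit"));
    }
}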

Handle unused imports and variables before a production release

It's OK to use #![allow(unused_imports)] and #![allow(unused_variables)] in crates for now in order to make tests and builds easier to read, but we should remove these and actually handle these cases before a production release.
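
For reference, a small sketch of how individual cases can be handled instead of crate-wide allows (generic Rust patterns, not existing project code):

// Prefix intentionally unused items with `_`, or scope the allow to one item:
fn example(_unused_arg: u32) {
    let _placeholder = 0; // silenced without a crate-wide allow
}

#[allow(unused_variables)] // narrow allow while this is still a stub
fn stub(pending: u32) {}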


P2P message layer

Is your feature request related to a problem? Please describe.
Related to #34, implementing the message layer of P2P for sharding.

Describe the solution you'd like
Possibly use floodsub as implemented by rust-libp2p, or gossipsub as implemented in Go, which isn't implemented by rust-libp2p.

Describe alternatives you've considered
Instead of using rust-libp2p's floodsub, for more future extensibility we could create bindings for or call into the Go floodsub, or write up gossipsub in rust-libp2p, which would probably be a formidable project.

Additional context

https://github.com/libp2p/specs/blob/master/4-architecture.md#45-messaging

More resources on enshrined-in-consensus

Hi! Wanted to say great work so far.

This probably isn't the best place to ask this, but...

I noticed you used the term enshrined-in-consensus. I was wondering if you could point me to some resources on this and how it relates to data availability, as data availability is a big concern of ours at Truebit.

Thanks!

Data spec for sharding visualization

Is anyone interested in collaborating on a data spec for sharding visualization? I’m looking into defining an API / schema for clients to implement for a graphical interface to display data. Like eth stats
We’re hoping to push this work onto Gitcoin to develop that application while we (and other teams) write the logic to expose this data to said application... We’re happy to kick things off and pass it around for review. —https://gitter.im/ethereum/sharding?at=5b226e8ae87f0c7bee8b1804

Suggesting @samparsky, @yaliu14 or anyone else interested in contributing.

Rename repo to "diamond_drops"

Currently it's difficult to separate code into crates because the repo name uses nonstandard (non-snake-case) naming. This causes compiler errors when attempting to use the line:

extern crate Diamond-drops;

I propose renaming the repo and associated files to the standard snake case "diamond_drops" to rectify the issue. If there is another way to use crates with the current name then posting that here would be sufficient.
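
As a sketch of the rename (the version number is illustrative), the package name in Cargo.toml would become snake case:

[package]
name = "diamond_drops"
version = "0.1.0"

after which the crate can be imported normally:

extern crate diamond_drops;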

Implement Gossipsub for sharding p2p

Go PR: libp2p/go-libp2p-pubsub#67

I figure I'll implement gossipsub for Rust rather than wait until a go-libp2p daemon is implemented, which wouldn't be optimal anyway.

Note that gossipsub would not only be useful for sharding p2p, but general p2p networks.

Ignore the rest of this post, due to tomaka's comment below.

Only the differences to Floodsub are included below.

To implement with rust-libp2p:

  • comm: rpcWithControl: uses ControlIHave, ControlIWant, ControlGraft, and ControlPrune messages,
  • which are added to pb,
  • which can be converted from rpc.proto (already implemented) to rpc.rs (therefore do these in reverse order)
  • floodsub: Join and Leave methods for FloodSubRouter
  • floodsub_test: refactor sparseConnect into connectSome with an input for the number of hosts, and call it with 3 hosts, as well as call connectSome from a new denseConnect with 10 hosts
  • gossipsub
  • pb/rpc.pb.go
    • Only this line is modified in the RPC struct: Control *ControlMessage `protobuf:"bytes,3,opt,name=control" json:"control,omitempty"`
      - [ ] GetControl (needs ControlMessage)
    • ControlMessage
      • ControlIHave
      • ControlIWant
      • ControlGraft
      • ControlPrune
      • Methods:
        • Reset
        • String
        • ProtoMessage
        • GetIhave
        • GetIwant
        • GetGraft
        • GetPrune
    • ControlIHave
      - [ ] Reset
      - [ ] String
      - [ ] ProtoMessage
      - [ ] GetTopicID
      - [ ] GetMessageIDs
    • ControlIWant struct + methods:
      - [ ] Reset
      - [ ] String
      - [ ] ProtoMessage
      - [ ] GetMessageIDs
    • ControlGraft
      - [ ] Reset
      - [ ] String
      - [ ] ProtoMessage
      - [ ] GetTopicID
    • ControlPrune
      - [ ] Reset
      - [ ] String
      - [ ] ProtoMessage
      - [ ] GetTopicID
    • Additions to TopicDescriptor struct:
      - [ ] proto.RegisterType((*ControlMessage)(nil), "floodsub.pb.ControlMessage")
      - [ ] proto.RegisterType((*ControlIHave)(nil), "floodsub.pb.ControlIHave")
      - [ ] proto.RegisterType((*ControlIWant)(nil), "floodsub.pb.ControlIWant")
      - [ ] proto.RegisterType((*ControlGraft)(nil), "floodsub.pb.ControlGraft")
      - [ ] proto.RegisterType((*ControlPrune)(nil), "floodsub.pb.ControlPrune")
    • rpc.proto:
      - [ ] at the end of message RPC: optional ControlMessage control = 3;
      - [ ] message ControlMessage {
      - [ ] message ControlIHave {
      - [ ] message ControlIWant
      - [ ] message ControlGraft
      - [ ] message ControlPrune
    • pubsub
      - [ ] rand import (use the rand crate)
      - [ ] eval channel
      - [ ] PubSubRouter
      - [ ] Adds Join and Leave plus comments
      - [ ] NewPubSub (renamed from NewFloodSub; probably corresponds to FloodSubController::new in Rust, which appears to use inner instead of context)
  • more to be added from the above PR.

Is your feature request related to a problem? Please describe.
Floodsub is already implemented for rust-libp2p; however, it isn't really suitable for a sharding p2p network, as it involves broadcasting a message to all available peers, which is very bandwidth intensive.

Describe the solution you'd like
Gossipsub is the next development by Protocol Labs and the libp2p team in developing PubSub. More details about it are in the above PR, as well as here.

Describe alternatives you've considered

Additional context

Cute Animal Picture

put a cute animal picture link inside the parentheses

remove lint unused_doc_comment

To Reproduce
Steps to reproduce the behavior:

1. cargo make all

Describe the bug

warning: lint unused_doc_comment has been renamed to unused_doc_comments
  --> env/src/lib.rs:11:5
   |
11 |     error_chain!{}
   |     ^^^^^^^^^^^^^^
   |
   = note: #[warn(renamed_and_removed_lints)] on by default
   = note: this error originates in a macro outside of the current crate (in Nightly builds, run with -Z external-macro-backtrace for more info)

[the same warning is repeated three more times for env/src/lib.rs:11:5]

   Compiling syntex_syntax v0.58.1
   Compiling env_logger v0.5.10
   Compiling ethereum-types-serialize v0.2.1
   Compiling chrono v0.4.2
   Compiling ethbloom v0.5.0
   Compiling diamond-drops-cli v0.1.0-a (file:///Users/tim.siwula/Desktop/diamond_drops/cli)
warning: lint unused_doc_comment has been renamed to unused_doc_comments
  --> cli/src/lib.rs:16:5
   |
16 |     error_chain! { }
   |     ^^^^^^^^^^^^^^^^
   |
   = note: #[warn(renamed_and_removed_lints)] on by default
   = note: this error originates in a macro outside of the current crate (in Nightly builds, run with -Z external-macro-backtrace for more info)

[the same warning is repeated three more times for cli/src/lib.rs:16:5]

warning: lint unused_doc_comment has been renamed to unused_doc_comments
 --> cli/src/modules/mod.rs:7:5
  |
7 |     error_chain! { }
  |     ^^^^^^^^^^^^^^^^
  |
  = note: this error originates in a macro outside of the current crate (in Nightly builds, run with -Z external-macro-backtrace for more info)

[the same warning is repeated three more times for cli/src/modules/mod.rs:7:5]

   Compiling mml v0.1.41
   Compiling diamond-drops-node v0.1.0-a (file:///Users/tim.siwula/Desktop/diamond_drops/node)
warning: lint unused_doc_comment has been renamed to unused_doc_comments
 --> node/src/modules/errors.rs:1:1
  |
1 | error_chain!{}
  | ^^^^^^^^^^^^^^
  |
  = note: #[warn(renamed_and_removed_lints)] on by default
  = note: this error originates in a macro outside of the current crate (in Nightly builds, run with -Z external-macro-backtrace for more info)

[the same warning is repeated three more times for node/src/modules/errors.rs:1:1]
........
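
Since the warning originates inside the error_chain! macro expansion, one hedged workaround until the error-chain dependency is updated is to allow the renamed lint at the crate roots that invoke the macro:

// e.g. at the top of env/src/lib.rs (workaround sketch, not a real fix)
#![allow(renamed_and_removed_lints)]

#[macro_use]
extern crate error_chain;

error_chain! {}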

Raise funds

We need to raise funds for R&D.

Summary:

I haven't received a grant yet for sharding development (to now deprecated specifications) and gossipsub work. I've tried to clarify with the grants team how much more progress is needed to get a grant, e.g. specific milestones to achieve, but I didn't receive a reply.

It seems to me that the Ethereum Foundation may have a preference for funding development of clients that it owns: Geth, Py-EVM, Java, JavaScript, etc. This is based on the lack of funding so far, as well as Vitalik seeming to favour promoting Prysmatic Labs without mentioning the names of other teams.

Other devs have contributed, such as @ChosunOne, Luke Schoen, and Tim Siwula, but they have understandably dropped out, while many others may have contributed if funds were available. I have changed careers from renewable energy and left a full-time job.

I can't fathom why a grant was given to Prysmatic Labs for Geth-sharding back in March, yet no grant has been received yet for any other sharding team. Nor do I know how long a lack of funding will drag out for. Is the Ethereum Foundation trying to manipulate developers? Given that the sharding aspect of the software of a client is a public resource and has no inherent revenue, funding needs to be obtained via alternative means, e.g. grants, donations, employment from the organisation that owns the client, etc. Having an ICO, DAICO, DAII, VC funding, etc. seems to be out of the question since it's a public resource without revenue directly related to sharding development, only indirectly e.g. via the below forms of revenue and driving ecosystem growth (and thus potentially portfolio growth). I have sought grants and donations and even VC funding, and approached Parity and the Web3 Foundation, but people have been non-responsive or at best apologetic. There are ways for clients to generate revenue, e.g. full nodes serving light clients to access data, VIP node, EIP 908, Pocket Network, etc.


We've already applied with the EF and ECF. I think we'll be much more likely to be considered for a grant from other programs if we get a grant from these two parties, whereas I think other grant programs will be reluctant to fund us when they look into us and find out that the EF and ECF have not funded us yet.

Therefore I think it's best to continue working away until we get a grant from them, then we can apply for more grants.

Key selling/pitching points:

  • Sharding is necessary for the long-term scalability and sustainability of Ethereum. Layer 2 solutions reduce the security of layer 1 solutions, see also here.
  • Why Rust?
  • Why open source? So that anyone can freely scrutinize the code for security, optimize it for performance, or adapt it for their own needs (for instance, another blockchain project that competes with Ethereum which may develop unique ideas that Ethereum can then use).
  • This project will not just work on sharding; in the long term it will work on everything in the sharding roadmap, which includes the EVM with EWasm, better incentivization of protocol resources such as storage and bandwidth, Casper FFG and CBC, state minimization, etc.

Grants have been applied for with the Ethereum Foundation, the Ethereum Community Fund, and others listed here:

To apply:

The project may be considered as a public good, so may not be considered to be monetizable (although there are potential revenue streams for clients as linked to below).

One option is to build a Decentralized Autonomous Iterative Investment (DAII, pronounced dai-yee or die-yee). It may be simpler to wait for Dogezer to launch a beta in July or a stable release in December (2018) rather than independently develop our own.

Information that influenced this idea includes:

However, this may not be practical since we plan on merging with Parity eventually, so at the least they would have to agree to such a scheme, and this project is developing a public good, so raising funds via selling tokens may not necessarily be the best option.

Potential revenue streams are included in this EIP, as well as being an enabler for quadratic and potentially exponential growth of Ethereum and the Ethereum ecosystem, powering virtually any activity that has an economic or governance aspect, thus enhancing portfolio opportunity.

`cli` in cargo run -- mode --help and cargo run -- --help is misleading

We will need to update the project to be able to use cli as a command, or update these help instructions. But this isn't a high priority as we are not production ready.

cargo run -- mode --help
// Snipped DEBUG outputs
cli-mode
Choose mode(s) to run the sharding node

USAGE:
    cli mode [FLAGS]

FLAGS:
    -b, --both        both proposer and notary modes
    -h, --help        Prints help information
    -n, --notary      notary mode
    -p, --proposer    proposer mode
    -V, --version     Prints version information

$ cargo run -- --help
    Finished dev [unoptimized + debuginfo] target(s) in 0.09 secs
     Running `target/debug/cli --help`
Processing arguments: ["--help"]
// Snipped DEBUG lines
diamond-drops-cli 0.1.0-a
James Ray (@jamesray1), Josiah (@ChosunOne), Luke Schoen (@ltfschoen)
Implementation of Ethereum Sharding in Rust

USAGE:
    cli [OPTIONS] [SUBCOMMAND]

FLAGS:
    -h, --help       Prints help information
    -V, --version    Prints version information

OPTIONS:
    -l, --log <LOG>    Set logging verbosity to error 0, warn 1, debug 2, info 3, or trace 4

SUBCOMMANDS:
    help    Prints this message or the help of the given subcommand(s)
    mode    Choose mode(s) to run the sharding node
$ cli mode -b
bash: cli: command not found
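
Until the binary is installed somewhere on the PATH, the equivalent invocation during development goes through cargo; installing the cli crate is one possible way to make a cli command available (assuming its binary target is named cli):

$ cargo run -- mode -b          # equivalent of `cli mode -b` for now
$ cargo install --path cli      # would place a `cli` binary on the PATH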

Create a subclient / CLI options for setting up a notary node.

Refactor into smaller issues/tasks (have the shard_ID as an option for each command):

  • options for register-collator: deposit, return result, or errors for asserts
  • options for deregister_collator(): no options, return true or errors for asserts
  • release_collator(): return result: no options, return true or errors for asserts
  • periodically checks SMC status (particularly for the outcome of get_eligible_collator())
  • During period period - lookahead_length to period period - 1:
    • applies windback: "The process of attempting to determine the head by winding back recent collation headers, checking availability of collation bodies, and applying the fork choice rule. Also known as “head fetching” or “head guessing”."
    • gets collation headers from the SMC
    • downloads collations from the shard network
  • In period period:
    • send proposals of the given period to the shard network
    • gets proposals from the shard network
    • selects one proposal
    • Calls add_header() with the proposal.

Sharding protocol research tracker

Here is a link for the source of the tasks: https://ethresear.ch/t/a-general-framework-of-overhead-and-finality-time-in-sharding-and-a-proposal/1638/11. Vitalik said:

I would recommend at this point continuing to build at least:

  • The capability of having 100 separate shard p2p networks, and building and sending collations across those networks [see #7 for P2P sub protocol messages]
  • The ability to read logs emitted by an SMC
  • The ability to send transactions that call an addHeader function of the SMC
  • The ability for a client to maintain a database of which collation roots it has downloaded the full body for
  • The ability of a validator to (i) log in, (ii) detect that it has been randomly sampled, switch to the right p2p network, and start doing stuff, (iii) log out

Integration tests are not being included with default test command

In the 'collator' branch, when I run the tests with cargo test it does not pick up and run the tests that are in the tests/ directory anymore. It only runs the new tests that are within the implementation files, so to run all tests at the moment we have to run cargo test --test cli_test && cargo test instead of just cargo test, which is not ideal.
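
Since the project is a Cargo workspace with several crates (cli, node and env appear in the build logs), one thing worth checking is whether the workspace-wide flag restores the missing tests:

$ cargo test --all    # run the tests of every workspace member, including their tests/ directories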

Fix run tasks to include output e.g. for make docs and make uml

If you run cargo make docs and cargo make uml-default-recomended in a terminal, the docs are displayed in the default browser and ml.svg is displayed in the default SVG app. However, when you run these as a task, no such app opens to display the output.

Separate protocol / SMC handler functionality from node functionality in different crates

Describe the solution you'd like
To keep crates modular and maintainable, we should separate the functionality in the node crate related to a collation into a separate crate, and only keep functionality in the node crate that is related to functions that a node performs. Protocol functionality such as opcodes and SMC handlers should go in separate crates.
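
A hedged sketch of the workspace layout this could lead to (the cli, node and env members already appear in the build logs; the other crate names are illustrative):

[workspace]
members = [
    "cli",
    "node",
    "env",
    "collation",     # collation/blob types split out of `node`
    "smc_handler",   # SMC handler / protocol functionality
]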

Fix OS compatibility issues with building Clap on Windows so the build succeeds

Describe the bug

error[E0433]: failed to resolve. Could not find `unix` in `os`
 --> C:\Users\ChosunOne\.cargo\registry\src\github.com-1ecc6299db9ec823\clap-2.31.2\src\app\parser.rs:7:14
  |
7 | use std::os::unix::ffi::OsStrExt;
  |          ^^^^ Could not find `unix` in `os`

From https://gitter.im/Drops-of-Diamond/Development?at=5aed4e101eddba3d04d3ddb2.

To Reproduce
Steps to reproduce the behavior:

  1. Run cargo make build on a Windows OS

Expected behavior
The project should build.

Desktop (please complete the following information):

  • OS: Windows 10 64-bit
