datdotorg / datdot-node-rust
datdot blockchain node in rust
Home Page: https://playproject.io/datdot-node-rust
License: GNU General Public License v3.0
@todo
The issue is that we receive SCALE-encoded bytes that carry a small amount of metadata, and all numbers are little-endian (LE).
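For illustration, a minimal sketch of decoding the little-endian fixed-width integers SCALE uses, assuming the input is a plain byte array (helper name is ours, not from the codebase):

```javascript
// SCALE encodes fixed-width integers little-endian: least significant byte first.
// Minimal decoder for a u32, assuming `bytes` is an Array or Uint8Array.
function decodeU32LE (bytes, offset = 0) {
  let value = 0
  for (let i = 0; i < 4; i++) {
    value += bytes[offset + i] * 2 ** (8 * i) // byte i contributes 256^i
  }
  return value
}

decodeU32LE([1, 0, 0, 0]) // -> 1
decodeU32LE([0, 0, 0, 1]) // -> 16777216
```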
The weight system in substrate/FRAME is key to preventing resource exhaustion attacks on the chain, so it is important to set sane values for weights.
Specifically, weights in https://github.com/playproject-io/datdot-substrate/blob/master/pallet-datdot/src/lib.rs are currently all placeholder values.
Some resources - https://www.shawntabrizi.com/substrate/substrate-weight-and-fees/
It seems it logs neither the correct user nor the correct hypercore address. All event logs look the same (same user + same key, not matching the actual hypercore key). The user account logged in the event always seems to be the one that gets finalized first (we are also creating users).
polkadot-types-from-chain:
https://polkadot.js.org/api/start/typescript.user.html#chain-modules
The merkle tree (and proofs) expected in the dat_verify.rs pallet should match the tree used in hypercore-crypto/hypercore with the exception that the merkle root passed to substrate is the checksum used to calculate the signature, not the roots used to calculate it.
How it's verified in datdot-substrate
Here is the function signature of the submit_proof function:
where the Proof type is a struct defined here:
https://github.com/playproject-io/datdot-substrate/blob/ac0e44e02c34c454c7bda58eee855de2054e34a4/bin/node/runtime/src/dat_verify.rs#L192-L196
this should match the [merkle proofs returned by hypercore](https://github.com/mafintosh/hypercore/blob/1082cc5f8803f5bce65686f799784920d1426088/index.js#L537)
First we verify that the proof is being submitted by the correct user:
(I am considering removing this check)
We verify that the signature provided matches the merkle root (checksum) provided and is signed by the public key associated with the challenge (currently PUBLISHER, should be ENCODER):
We verify the chunk hash matches the chunk hash provided in the Proof by recalculating it and getting the node with the index of the chunk from the proof:
Finally, based on the index being proved, we calculate the merkle roots (using a hacky linear-time calculation to get the expected indices the roots should contain), and use them to rebuild the merkle root checksum:
There is currently an oversight in the lack of verification of intermediary nodes of the merkle path - this would be the final step.
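The "expected indices the roots should contain" come from hypercore's flat-tree addressing. A sketch of the linear-time root calculation, mirroring the algorithm in the flat-tree js module (written here from memory, so treat it as illustrative rather than a copy of the pallet's code):

```javascript
// In flat-tree addressing, a hypercore with n chunks stores data at even
// indices 0, 2, 4, ...; fullRoots(2 * n) returns the tree indices of the
// merkle roots that together cover the first n chunks.
function fullRoots (index) {
  if (index & 1) throw new Error('only even (leaf-count) indices allowed')
  const result = []
  index /= 2
  let offset = 0
  let factor = 1
  while (index) {
    while (factor * 2 <= index) factor *= 2 // largest full subtree that fits
    result.push(offset + factor - 1)        // root of that subtree
    offset += 2 * factor                    // skip past the subtree
    index -= factor                         // chunks still uncovered
    factor = 1
  }
  return result
}

fullRoots(8)  // -> [3]    (4 chunks: one perfect subtree rooted at 3)
fullRoots(10) // -> [3, 8] (5 chunks: subtree at 3 plus the lone leaf at 8)
```

These root indices are what get hashed together into the checksum that the signature is checked against.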
@TODO
Remove hashtype entirely and just have it internal to substrate, because it can be calculated/derived from other params.
I'm considering implementing #6 by turning dat_verify into an instantiable module, and making each "parallel" track a separate instance of the module - they would all call into a separate "dat_store" module.
It may also be a good idea to have the different challenge types be in different modules (maybe sharing a challenge trait).
I need to think about the internal api here though so it isn't too complex.
@todo
we get merkle root like this https://pastebin.com/QH7egWUX and then submit it to the chain
await API.tx.datVerify.registerData(merkleRoot)
SomethingStored event
await API.tx.datVerify.registerEncoder()
await API.tx.datVerify.registerSeeder()
await API.tx.datVerify.registerAttestor()
NewPin event, where encoder and hoster are notified about what feed needs hosting/encoding
const [encoderID, hosterID, datID] = event.data
const args = [hosterID, datID, start, end]
// if more ranges, send same tx for each range separately
await API.tx.datVerify.confirmHosting(datID, index) // index = HostedMap.encoded[index] (get encoderID and then loop through to get position in the array)
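The comment above ("get encoderID and then loop through to get position in the array") can be sketched as a small helper. The shape of HostedMap.encoded is not shown in these notes, so the entry layout below is an assumption:

```javascript
// Hypothetical: assume hostedMapEncoded is an array of entries that each
// carry an `encoder` field. The position of our encoder in that array is
// the `index` passed to confirmHosting.
function encoderPosition (hostedMapEncoded, encoderID) {
  for (let i = 0; i < hostedMapEncoded.length; i++) {
    if (hostedMapEncoded[i].encoder === encoderID) return i
  }
  return -1 // not found
}
```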
HostingStarted
const data = event.data // [hosterID, datID]
await API.tx.datVerify.submitChallenge(hosterID, datID)
Challenge
event is emitted, where the hoster is notified about the challenges
const [hosterKey, datKey] = event.data
// hostedMap => see which chunks are hosted by this hoster for this key (Rust API line 317)
//challengeID no longer exists -> use [datID, chunk] directly
await API.tx.datVerify.submitProof(challengeID, []) //challenge ID is parsed
AttestPhase
const [challengeID, expectedAttestors] = event.data
const attestorIDs = JSON.parse(expectedAttestors).expected_attestors
// change proposal:
// could we just pass an array of attestors instead of an object
const [challengeID, attestorIDs] = event.data
function getAttestation () {
const location = 0
const latency = 0
return [location, latency]
}
const attestation = await getAttestation()
//challengeID no longer exists -> use [hosterId, datID, chunk] directly
const submit = await API.tx.datVerify.submitAttestation(challengeID, attestation)
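The change proposal above (pass an array of attestors instead of an object) could be bridged during migration with a small helper that accepts both shapes; the helper name is ours:

```javascript
// Accept either the current payload (a JSON string wrapping an
// `expected_attestors` array) or the proposed plain array, and always
// return an array of attestor IDs.
function normalizeAttestors (expectedAttestors) {
  if (Array.isArray(expectedAttestors)) return expectedAttestors
  return JSON.parse(expectedAttestors).expected_attestors
}

normalizeAttestors('{"expected_attestors": [1, 2, 3]}') // -> [1, 2, 3]
normalizeAttestors([4, 5])                              // -> [4, 5]
```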
Currently there is only one active challenge at a time.
Problem:
Solution?
On the datdot-substrate landing page is a link to the getting started guide: https://substrate.dev/docs/en/overview/getting-started/ but it seems to result in a 404
Date: July 8
Not fixed yet: I added a fix to lib.rs (created contract_id, inserted the contract into GetContractByID and emitted an event). The event now logs, but there's a new error (maybe related to me not fixing this in the correct way).
Date: July 8
Scenario:
alice([user, publisher])
bob([user, hoster, attestor])
charlie([user, hoster, attestor])
dave([user, hoster, encoder])
eve([user, encoder, attestor])
Error: NewContract event doesn't get emitted
Date: July 8
Fix: Had to change the data type for NoiseKey in lib.rs to H512 and it works. Updated types and also aggregated new types locally. Tried also with the Public type but it didn't work. I guess because the value of the NoiseKey is <Buffer 26 6f 1c df f6 c0 e6 98 c9 36 60 8f 50 b4 8d ad a4 53 82 1f 5c 46 9c 9d b5 6a bc 91 a1 47 54 3b, secure: true>
Logs after the fix and a rebuild
Date: July 8
Scenario:
alice([user, publisher])
bob([user, hoster, attestor])
charlie([user, hoster, attestor])
dave([user, hoster, encoder])
eve([user, encoder, attestor])
Error: Struct: failed on 'noise_key'
Date: July 8
Fix: Error was related to the nonce we were passing => we need to decide where the nonce is created (locally and passed with the tx, or on chain).
Date: July 7
Scenario:
log('start scenario')
alice([user, hoster])
bob([user, hoster])
Date: July 7
Scenario:
log('start scenario')
alice([user])
bob([user, publisher])
document how the dispatchcenter + dat_verify work together (maybe reconsider this design)
@todo
datdot-substrate#6 [runtime] simultaneous challenges
substrate-epoch#4 [runtime] Fix build
datdot-substrate#2 Update README.md
datdot-substrate#10 cleanup, pull upstream, fix build
datdot-substrate#14 make custom types in json format
datdot-substrate#13 Unexpected epoch change
datdot-substrate#31 Getting started link?
account_create() extrinsic for new users to submit a signed public key with a "micro proof of work" and get a unique user ID and a small amount of minted funds in return, so they can start using other chain features.
provider pool to track and manage available providers and match jobs to the best providers available based on plans / service level agreements given by sponsors
datdot-substrate#23 Updates/Bug fixes
datdot-substrate#20 make attestations & encoding-hosting-challenge flows work
datdot-substrate#19 Generate types automatically in js instead of maintaining a types.json file indefinitely.
datdot-substrate#17 Merkle Tree Proof deep dive.
datdot-substrate#18 Issues from running substrate & service
registerUser to registerSeeder
datdot-substrate#15 bit shuffleing
datdot-substrate#16 make lab environment to reliably test substrate chain scenarios
datdot-substrate#11 Basic Calls Flow
datdot-substrate#9 [runtime] figure out testing framework for runtime
datdot-substrate#8 [runtime/idea] take advantage of instantiable modules.
datdot-substrate#7 [runtime] random chunk-range seeding and seeder size preferences
datdot-substrate#5 [runtime] document/discuss runtime api
datdot-substrate#3 [runtime] refactor datdot runtime modules into frame pallets
datdot-substrate#22 changes to the chain to match mauve's logic (mvp)
@todo
Build node binaries to a wasm blob importable as a js lib
I saw this project was part of wave 4; it seems really interesting, similar to things we have thought of building at http://github.com/joystream, but the documentation is quite lacking.
What is the most complete explanation of the goal here?
@todo
TESTING
set up testing environment to spin up 1+ tests and run them
GOAL: put our "specification" into code
STRATEGY:
transaction-factory in substrate
tape to write node tests
CURRENT PRACTICE:
GOAL:
have integration tests where we can spin up
multiple substrate nodes on different platforms and see if they
connect to each other and work in expected ways.
People will install datdot on windows, linux, macosx;
some may rip out the internals to run it on servers
GOAL:
forceRegistering to check basic logic (usersStorage and datHosters)
JOSHUA'S REMARK:
Just grepping the codebase for node_testing to see how it's being used, but there are some really good benchmarks and tests in node/executor. I think I'll adapt them for our module and just use that. Just discovered they can give us the concrete block size info you wanted.
JOSHUA'S COMMENT: (NOVEMBER 13th)
There is some testing scaffolding, and you can test runtime functions individually, for example by writing integration tests that call RPC via rust, but presetting state and simulating interactions is not super easy.
What's missing: network testing tools to easily write some scenarios and execute a quick simulation, like:
spawn a bunch of substrate nodes
pre-configure state
execute a fixed set of transactions to simulate what happens
run some assertions against the state after the state transition
fix problems with PublishData (not working atm)
When I run PublishData, I don't get any error; the chain logs this and then freezes (Ctrl-C doesn't work, can't stop the process)
add HostingStarted event after confirmHosting is triggered
add ability to get archive index from the feed key
const archiveIndex = api.query.dat_verify.arhiveIndices(dat_pubkey)
Are there any prior discussions available to read about the incentive design of datdot?
(Current Runtime as of 24/12/19)
Source code: https://github.com/playproject-io/datdot-substrate/blob/master/bin/node/runtime/src/dat_verify.rs
No Seeders, No registered Dats
As a Dat publisher: call register_data to register the current state of your archive on-chain.
As a seeder: call register_seeder to register your intention to seed a dat. Watch for the NewPin event, which will tell you which dat archive you should be pinning. You can also query UsersStorage(AccountId) to see all dats you should be pinning simultaneously.
Every Block after there are registered seeders with Pinned dats, a Challenge event will be emitted by the chain.
Seeder and Dat registered successfully
As a Dat publisher: no action required.
As a seeder: watch for Challenge events addressed to your AccountId, where AccountId is the selected seeder and Blocknumber is the deadline. (All active challenges are enumerated in the ChallengeMap linked_map storage item as (x, y) pairs, where x refers to the index of a challenge in the SelectedChallenges map, and y refers to the challenged seeder in the SelectedUsers map. For reference, SelectedUserIndex is a mapping from AccountId -> (index, challenge count).) Call submitProof with the proof for the requested chunk (chunk index retrievable via the SelectedChallenges map). If the deadline passes without a valid proof, the chain will emit a ChallengeFailed event and punish the failed seeder.
Seeders need to be able to specify the amount of storage they want to use;
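The ChallengeMap bookkeeping can be illustrated with plain objects standing in for the storage maps (the entry shapes below are assumptions for illustration, not the pallet's actual types):

```javascript
// Assume challengeMap is a list of (x, y) pairs, selectedChallenges maps
// x -> challenge data, and selectedUsers maps y -> the challenged AccountId.
function activeChallenges (challengeMap, selectedChallenges, selectedUsers) {
  return challengeMap.map(([x, y]) => ({
    challenge: selectedChallenges[x], // e.g. { chunkIndex, deadline }
    seeder: selectedUsers[y]
  }))
}
```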
this will also need a way to select random chunk ranges for specific hypercores
leaving this for milestone 2:
After we have a working runtime, I need to do the minor refactoring required to move the modules into a pallet. Then we can also reorganize the repo so we aren't a substrate fork (currently this is the case because it's easier to keep up with upstream this way, but after substrate 2.0 stabilizes we probably won't need to keep it that way).