
anon's People

Contributors

ahmedkorim · dependabot[bot] · drewstone


anon's Issues

Add new pallet for variable tree gadget.

Currently we showcase a single pallet with mixing from a fixed deposit tree. We should also add showcases of how this will work with our other gadgets, while keeping the existing pallets the same.

Update hash_leaves to actually hash and not just concat child nodes

Our current implementation just adds the two EC points together. We'll want to do a proper hash here, perhaps a Pedersen hash eventually.

pub fn hash_leaves(left: MerkleLeaf, right: MerkleLeaf) -> MerkleLeaf {
	// Current behavior: point addition, not an actual hash.
	MerkleLeaf::from_ristretto(
		left.0.decompress().unwrap() + right.0.decompress().unwrap(),
	)
}
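As a shape for the fix, here is a minimal, hedged sketch in plain Rust: hash the two serialized children instead of adding points. `DefaultHasher` is a non-cryptographic stand-in purely for illustration (the real implementation would use a Pedersen or Poseidon hash over the curve's scalar field), and the `Leaf` type here is illustrative, not the pallet's `MerkleLeaf`:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Illustrative 32-byte leaf; the real type wraps a compressed Ristretto point.
#[derive(Clone, Copy, PartialEq, Debug)]
pub struct Leaf(pub [u8; 32]);

/// Sketch: hash the concatenation of both children instead of adding EC points.
/// NOTE: DefaultHasher is NOT cryptographic; this only illustrates the shape.
pub fn hash_leaves(left: Leaf, right: Leaf) -> Leaf {
    let mut out = [0u8; 32];
    for (i, chunk) in out.chunks_mut(8).enumerate() {
        let mut h = DefaultHasher::new();
        (i as u64).hash(&mut h); // domain-separate each 8-byte output word
        left.0.hash(&mut h);
        right.0.hash(&mut h);
        chunk.copy_from_slice(&h.finish().to_le_bytes());
    }
    Leaf(out)
}

fn main() {
    let (a, b) = (Leaf([1u8; 32]), Leaf([2u8; 32]));
    assert_eq!(hash_leaves(a, b), hash_leaves(a, b)); // deterministic
    assert_ne!(hash_leaves(a, b), hash_leaves(b, a)); // order-sensitive, unlike point addition
}
```

Unlike the current point addition, a hash is order-sensitive, which is what a Merkle parent computation requires.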

Spec out group migrations

If we want to increase the size of a merkle group, we currently don't have this functionality at the pallet level. This is useful for:

  • Mixers, when we fill up all leaves
  • Groups, when we want to extend the size of our group.

Add more to spec for generalising structure and features of trees and mixers.

The running Notion document contains information about what features we would like to generalise over. We should continue to add to this and help flesh out a more balanced implementation that feels production-ready.

Additionally, there's a variety of tree types, commitment types, and element types we want to support for future use cases. We should aim to build a better API over this. For example, we have Curve25519 working directly over this now, but with more research into Arkworks we may want to support the PrimeField type. How we should go about merging these needs to be answered in order to complete this issue.

Add server for consuming events, storing in DB, and recreating the tree.

We will need a server implementation for rebuilding the merkle tree that functions as the relayer system similar to https://github.com/tornadocash/tornado-relayer.

This can be done by consuming the historical events of tree insertions and rebuilding the tree locally, likely in conjunction with a database of sorts. As the tree grows large, checkpointing will also be worthwhile.

On the topic of testing/rebuilding the tree, it might make sense to also support, for the time being, a flag that stores all leaves on-chain. Having the flag would reduce complexity on the client front, as users could just query the chain directly.
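A minimal sketch of what the relayer's local state could look like, assuming a simple append-only leaf store with periodic checkpoints (all names here are illustrative, not an existing API):

```rust
use std::collections::BTreeMap;

/// Illustrative on-chain event; the real event would also carry the mixer/group id.
pub struct InsertionEvent {
    pub block: u64,
    pub leaf: [u8; 32],
}

/// Hypothetical local state for the relayer: an ordered leaf store plus
/// periodic checkpoints, so a restart can resume from the last checkpoint
/// instead of replaying every historical event.
pub struct LeafStore {
    pub leaves: Vec<[u8; 32]>,
    /// leaf count -> block number at which that checkpoint was taken
    pub checkpoints: BTreeMap<usize, u64>,
    pub checkpoint_every: usize,
}

impl LeafStore {
    pub fn new(checkpoint_every: usize) -> Self {
        Self { leaves: Vec::new(), checkpoints: BTreeMap::new(), checkpoint_every }
    }

    /// Consume one historical insertion event; checkpoint on the interval.
    pub fn apply(&mut self, ev: InsertionEvent) {
        self.leaves.push(ev.leaf);
        if self.leaves.len() % self.checkpoint_every == 0 {
            self.checkpoints.insert(self.leaves.len(), ev.block);
        }
    }
}

fn main() {
    let mut store = LeafStore::new(2);
    for b in 0..5u64 {
        store.apply(InsertionEvent { block: b, leaf: [b as u8; 32] });
    }
    assert_eq!(store.leaves.len(), 5);
    assert_eq!(store.checkpoints.get(&2), Some(&1)); // checkpoint after 2 leaves
    assert_eq!(store.checkpoints.get(&4), Some(&3)); // and after 4
}
```

In a real deployment the leaf store and checkpoints would live in the database, and the event loop would subscribe to finalized blocks rather than iterate over a vector.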

Improve error handling in wasm functions

Return a Result<JsValue, OperationCode>, which will throw in JS. The Rust side will return an operation code inside the error, which looks like:

#[wasm_bindgen]
#[repr(i8)]
enum OperationCode {
    Unknown = -1,
    Success = 0,
    MerkleTreeFull = 1,
    // ..etc
}

Fix mixer add leaves in client

When we generate a proof against a merkle tree for some root, we need to ensure that the respective merkle tree that we reconstruct stops at that root. Right now it is possible for the root we want to prove against and the current state of the merkle tree on chain to diverge, and so fail to generate proofs.

This first requires adding a target root to the function and incrementally building the tree until we reach that root; this way we have the historical state of the tree from which we can generate a proof path.
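The steps above can be sketched with a toy combiner standing in for the real two-to-one hash (the combiner, the u64 leaves, and all names here are illustrative only):

```rust
/// Toy two-to-one combiner standing in for the real Poseidon hash.
fn combine(a: u64, b: u64) -> u64 {
    a.wrapping_mul(31).wrapping_add(b).wrapping_mul(0x9e37_79b9_7f4a_7c15)
}

/// Toy root of a fixed-depth tree, padding missing leaves with the zero leaf.
fn root_of(leaves: &[u64], depth: u32) -> u64 {
    let mut level: Vec<u64> = leaves.to_vec();
    level.resize(1 << depth, 0);
    while level.len() > 1 {
        level = level.chunks(2).map(|p| combine(p[0], p[1])).collect();
    }
    level[0]
}

/// Sketch of the fix: insert leaves one at a time and stop once the tree's
/// root matches the target root we want to prove against. The returned state
/// is the historical tree from which a proof path can be generated.
pub fn build_until(leaves: &[u64], target_root: u64, depth: u32) -> Option<Vec<u64>> {
    let mut tree = Vec::new();
    for &leaf in leaves {
        tree.push(leaf);
        if root_of(&tree, depth) == target_root {
            return Some(tree);
        }
    }
    None // target root never reached: on-chain state has diverged
}

fn main() {
    let leaves = [10u64, 20, 30, 40];
    let historical = root_of(&leaves[..3], 3); // root after the first 3 deposits
    assert_eq!(build_until(&leaves, historical, 3), Some(vec![10, 20, 30]));
}
```

Recomputing the full root per insertion is quadratic; the real client would update the root incrementally via the edge nodes, but the stop-at-target-root logic is the same.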

Design linking mechanism to link fixed/reward/variable trees together.

When people move between the various fixed and variable trees, we still want some way of linking them up, so that the underlying asset remains the same inside the trees linked together.

I should have a design for how we link these trees together so that we ensure no mixing of asset types occurs.

Add documentation of MiMC & Poseidon implementations

We should add docs on where the implementations were derived from. I was looking more intensely through the implementation yesterday and had some questions. It seems our Poseidon implementation doesn't use much of what Lovesh did in his repo. In addition, we should have a no-std implementation of LinearCombination's simplify too, in case that helps optimise any loose ends.

On the topic of documentation, we should aim to be explicit about our zkSNARK construction. Our proof relies entirely on commitments to values, as we are not using a pairing-based curve, and I'd like to make sure it's understandable from a reader's perspective. This is something I can likely work on but would be good to sync on.

Test out batching withdrawals

The current proving system isn't the best by any standard, but it is sufficient for launch. There may be a way we can squeeze a bit more performance out of the system by batching withdrawals and doing a batch proof.

Basically,

  • Users submit withdrawals but delay verification; instead, proof data is added to a queue
  • Every so often, a keeper executes a batch of queued withdrawals, ideally processing more TXes/s. This shouldn't help in theory, since Bulletproofs have linear verification time, but could in practice up to some batch count.
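A rough sketch of the queue-and-keeper shape described above, with verification stubbed out (the real keeper would run a batched Bulletproofs verification; all types here are hypothetical):

```rust
use std::collections::VecDeque;

/// Illustrative queued withdrawal: proof bytes plus a recipient id.
pub struct PendingWithdrawal {
    pub recipient: u32,
    pub proof: Vec<u8>,
}

/// Sketch of the keeper flow: submissions only enqueue; a keeper later drains
/// up to `batch` entries and verifies them together. Verification is a stub
/// here — the real system would run the batched proof check at this point.
pub struct WithdrawalQueue {
    queue: VecDeque<PendingWithdrawal>,
}

impl WithdrawalQueue {
    pub fn new() -> Self {
        Self { queue: VecDeque::new() }
    }

    pub fn submit(&mut self, w: PendingWithdrawal) {
        self.queue.push_back(w); // no verification at submission time
    }

    /// Keeper entry point: process at most `batch` withdrawals in one go.
    pub fn process_batch(&mut self, batch: usize) -> Vec<u32> {
        let n = batch.min(self.queue.len());
        self.queue.drain(..n).map(|w| w.recipient).collect()
    }
}

fn main() {
    let mut q = WithdrawalQueue::new();
    for r in 0..5 {
        q.submit(PendingWithdrawal { recipient: r, proof: vec![] });
    }
    assert_eq!(q.process_batch(3), vec![0, 1, 2]);
    assert_eq!(q.process_batch(3), vec![3, 4]);
}
```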

Fix bug and broken tests from VSMT updates

It seems we had taken out the larger test runner from the CI, so we didn't realise we had broken some tests (the CI was only testing the mixer client).

These tests fail:

test smt::tests::test_vsmt_prove_verif ... FAILED
test time_based_rewarding::tests::test_time_based_reward_gadget_verification ... FAILED

My investigation shows that the update method on the SMT does not properly add leaves into leaf_indices, AFAIK. These modules then try to unwrap that value at some point and end up with an empty collection.

Implement delayed withdrawal processing.

We need infrastructure to delay withdrawals so that our bridge can, in theory, prevent double-spends. We will need to instead add withdraw records to a queue, which an offchain worker or on_finalize function will read from and process.

  • Define withdraw records with a creation time
  • Have offchain worker or on_finalize process records from the queue.
  • Create dummy challenge function.

Improve events in pallets

  • New member added to a group, including the new merkle root
  • Verification in a group, including the current merkle root / nullifier
  • Mixer deposit, including the new merkle root
  • Mixer withdraw, including the current root / nullifier

Track TVL in each mixer group

We should add items to the MixerInfo to track new deposits/withdrawals so that stats are easy to grab on the front-end and metrics can be viewed by other third parties.

Generalise the mixer sizes as a trait parameter.

Remove the hardcoded fixed-size deposits from the module implementation and add those hardcoded values to the Config trait definition.

impl mixer::Config for Runtime {
	type Currency = Balances;
	type MixerSizes = [10 * DOLLAR, 100 * DOLLAR];
	type DefaultAdmin = DefaultAdminKey;
	type DepositLength = MinimumDepositLength;
	type Event = Event;
	type Group = Merkle;
	type MaxTreeDepth = MaxTreeDepth;
	type ModuleId = MixerModuleId;
}

Then, in initialize, we will fetch the mixer sizes and iterate over each of them to create the mixer groups.
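A minimal sketch of that initialize flow, assuming the sizes arrive as a slice of balances (names, the DOLLAR constant, and the group-creation return value are illustrative):

```rust
/// Illustrative unit; the runtime would use its own Balance constant.
const DOLLAR: u128 = 1_000_000_000_000;

/// Minimal sketch of the intended initialize flow: read the configured sizes
/// and create one mixer group per size. Group creation is modeled here as
/// simply returning the (group_id, size) pair.
pub fn initialize(mixer_sizes: &[u128]) -> Vec<(u32, u128)> {
    mixer_sizes
        .iter()
        .enumerate()
        .map(|(group_id, &size)| (group_id as u32, size))
        .collect()
}

fn main() {
    let groups = initialize(&[10 * DOLLAR, 100 * DOLLAR]);
    assert_eq!(groups, vec![(0, 10 * DOLLAR), (1, 100 * DOLLAR)]);
}
```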

Add Pedersen commitments to leaf commitments instead of only Poseidon

We should test out Pedersen commitments to (r, nullifier) as well as our current Poseidon(r, nullifier). This would potentially present some speedups on the front-end and could be more optimised for the leaf commitments.

For the front-end, we wouldn't need to instantiate the Poseidon parameters to generate deposit notes. Instead, we could simply compute a Pedersen hash from the base field generator and the random elements (r, nullifier) sampled from the field.

max_leaves of group tree looks incorrect

impl GroupTree {
	pub fn new<T: Config>(depth: u8) -> Self {
		let init_edges: Vec<Data> = ZERO_TREE[0..depth as usize].iter().map(|x| Data::from(*x)).collect();
		let init_root = Data::from(ZERO_TREE[depth as usize]);
		Self {
			root_hash: init_root,
			leaf_count: 0,
			depth,
			max_leaves: u32::MAX >> (T::MaxTreeDepth::get() - depth),
			edge_nodes: init_edges,
		}
	}
}

If I have a tree of depth 3, then max_leaves should be 2^3. Why don't we just do 2u32.pow(depth)?
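A quick check confirms the off-by-one, assuming MaxTreeDepth = 32: the shift yields 2^depth - 1, while a depth-d tree holds 2^d leaves:

```rust
/// What GroupTree::new currently computes, assuming MaxTreeDepth = 32.
pub fn max_leaves_shift(depth: u32) -> u32 {
    u32::MAX >> (32 - depth)
}

fn main() {
    assert_eq!(max_leaves_shift(3), 7); // 2^3 - 1: one leaf short
    assert_eq!(2u32.pow(3), 8);         // actual capacity of a depth-3 tree
}
```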

Need a strategy for rebuilding merkle proof path with paginated results

When we get to larger trees, it will be important to have a strategy with a low storage/memory footprint. We may want to expose an RPC function that serves paginated leaves for a specific mixer group.

Then simultaneously, we need an algorithm that rebuilds the merkle tree incrementally with these paginated results and KNOWS which intermediate merkle tree nodes are necessary for the proof path. If we have this algorithm, it is likely that we won't need to fetch all leaves at once, but only some leaves in paginated batches, making the UI a lot more efficient.
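One building block for such an algorithm: given a leaf index, the indices of the sibling nodes on its proof path are fixed in advance, so the client knows exactly which intermediate nodes to retain while paging through leaves. A hedged, std-only sketch:

```rust
/// Sketch: for a given leaf index, list the (level, index) of every sibling
/// node on the Merkle proof path. Level 0 is the leaf level; indices are
/// positions within each level. Knowing these up front tells the client which
/// intermediate nodes to keep while streaming paginated leaves.
pub fn auth_path_indices(leaf_index: u32, depth: u32) -> Vec<(u32, u32)> {
    let mut idx = leaf_index;
    (0..depth)
        .map(|level| {
            let sibling = idx ^ 1; // flip the low bit to get the sibling
            idx /= 2;              // move up to the parent for the next level
            (level, sibling)
        })
        .collect()
}

fn main() {
    // Leaf 5 in a depth-3 tree needs: leaf 4, node 3 at level 1, node 0 at level 2.
    assert_eq!(auth_path_indices(5, 3), vec![(0, 4), (1, 3), (2, 0)]);
}
```

With this, the paginated rebuild only needs to fully hash the pages containing these indices; pages that contribute no proof-path node can be folded into their subtree root and discarded.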

Add nullifiers into a merkle tree managed by the pallet.

Currently we maintain a map; we should instead consider using a 256-depth merkle tree to store nullifiers using a properly indexed SMT.

Since each nullifier is currently a 32 byte number, we can still maintain our map structure, while also merkle-izing the nullifiers into a root for extensions to zero-knowledge membership proofs (though with proper indexing we won't need this).

I would like to consider generalising the API over trees more so that a tree creator selects the type of indexing they want for the leaves in the tree, for example, incrementally adding leaves or adding them at an index chosen by some function of the leaf (such as the identity). Having this will allow future applications the ability to submit non-membership zkps.

Add integration tests for fixed tree -> reward tree -> variable tree and reward tree -> variable tree.

There are 2 potential integration test flows:

  1. We start with a fixed deposit, move to another fixed deposit, claim a reward and add a commitment to the variable deposit tree, then move variable deposits around.

  2. We start with a fixed deposit into the reward tree, claim the reward and add a commitment to the variable deposit tree, then, move variable deposits around.

We should have full flow tests of this including failing cases.

Add new pallet for reward tree gadget.

Currently we showcase a single pallet with mixing from a fixed deposit tree. We should also add showcases of how this will work with our other gadgets, while keeping the existing pallets the same.

Update `VanillaSparseMerkleTree`

Basically, this VanillaSparseMerkleTree will replace the one we have in our client, so we need support for:

  • zk proof construction
  • Adding an array of leaves, in successive order
  • Keeping track of leaf indices

Nullifier tree

Instead of having a map of all nullifiers, we should use Merkle trees with leaves initialized as zero.

Step 1: Since the index of the leaf can be derived from the path, we can check whether get_index(path) == nullifier.
Step 2: Verify that the leaf at the provided path is 0: verify(path, 0).
Step 3: Update the nullifier tree at the provided path with a non-zero value: update(path, 1).
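The three steps can be sketched over a toy map-backed sparse tree (this elides the actual Merkle hashing and proof verification and only illustrates the control flow; the path is modeled as a plain integer equal to the leaf index):

```rust
use std::collections::HashMap;

/// Toy sparse "tree" backed by a map of path -> value, defaulting to zero,
/// just to illustrate the three steps; path == nullifier index by construction.
pub struct NullifierTree {
    nodes: HashMap<u64, u8>,
}

impl NullifierTree {
    pub fn new() -> Self {
        Self { nodes: HashMap::new() }
    }

    fn get(&self, path: u64) -> u8 {
        *self.nodes.get(&path).unwrap_or(&0) // untouched leaves are zero
    }

    /// Steps 1-3: check the path encodes the nullifier, verify the leaf is
    /// still zero (unspent), then flip it to a non-zero value.
    pub fn spend(&mut self, path: u64, nullifier: u64) -> Result<(), &'static str> {
        if path != nullifier {
            return Err("path does not match nullifier"); // step 1
        }
        if self.get(path) != 0 {
            return Err("nullifier already spent"); // step 2
        }
        self.nodes.insert(path, 1); // step 3
        Ok(())
    }
}

fn main() {
    let mut tree = NullifierTree::new();
    assert!(tree.spend(42, 42).is_ok());
    assert_eq!(tree.spend(42, 42), Err("nullifier already spent"));
    assert_eq!(tree.spend(7, 8), Err("path does not match nullifier"));
}
```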

If we keep track of every nullifier in a map, it will take (128 + 8) * 2^32 bits ≈ 73 GB per group tree (assuming map keys are 128-bit blake2_128_concat hashes and a bool is 8 bits), assuming the tree is 32 in height and full; with 256-bit entries the estimate would be 256 * (2^32 - 1) bits ≈ 137 GB.

The nullifier tree will take 8 kB. We are just saving the root hash, so 256 bits.

Extract `gadgets`, `crypto-constants` and `wasm-utils` into separate crates/repos

We need to look into how we want to manage this, as currently there will be a lot of shared code between the pallets, WASM, CLI, and, in the future, a mobile app.

The following crates need to be extracted to live in their own independent repos, so that it is easy for other projects to pick things up and for code to be shared between all of our projects.

Set boolean flag for using local storage for generated notes

We should default to not storing notes in local storage, but also give the user the ability to do so when they are prompted with our modal describing the note and its purpose. Ideally, this can eventually be toggled from the front-end.
