crypto's Issues

MerklePathSet is using irregular path

The MerklePath structure assumes the input node is passed as an argument to the verification:

/// Computes the merkle root for this opening.
pub fn compute_root(&self, mut index: u64, node: Word) -> Word {
    self.nodes.iter().copied().fold(node, |node, sibling| {
        // build the input node, considering the parity of the current index.
        let is_right_sibling = (index & 1) == 1;
        let input = if is_right_sibling {
            [sibling.into(), node.into()]
        } else {
            [node.into(), sibling.into()]
        };
        // compute the node and move to the next iteration.
        index >>= 1;
        Rpo256::merge(&input).into()
    })
}

But MerklePathSet injects the input node as the first element of the path

/// Adds the specified Merkle path to this [MerklePathSet]. The `index` and `value` parameters
/// specify the leaf node at which the path starts.
///
/// # Errors
/// Returns an error if:
/// - The specified index is not valid in the context of this Merkle path set (i.e., the index
/// implies a greater depth than is specified for this set).
/// - The specified path is not consistent with other paths in the set (i.e., resolves to a
/// different root).
pub fn add_path(
    &mut self,
    index: u64,
    value: Word,
    path: Vec<Word>,
) -> Result<(), MerkleError> {
    let depth = (path.len() + 1) as u32;
    if depth != self.total_depth {
        return Err(MerkleError::InvalidDepth(self.total_depth, depth));
    }
    // Actual position of the node in the tree
    let pos = 2u64.pow(self.total_depth) + index;
    // Index of the leaf path in the map. Paths of neighboring leaves are stored in one key-value pair
    let half_pos = pos / 2;
    let mut extended_path = path;
    if is_even(pos) {
        extended_path.insert(0, value);
    } else {
        extended_path.insert(1, value);
    }

And, when queried, it will remove it

/// Returns a Merkle path to the node at the specified index. The node itself is
/// not included in the path.

It should probably store only the siblings in the path, with a separate strategy for storing the leaf node if required.

After this refactor is done, MerklePathSet should consume the MerklePath functions to compute roots and verify openings.
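A minimal sketch of the refactor described above, assuming hypothetical names: `verify` delegates to `compute_root` so the leaf is never stored in the path. `Word` and `merge` below are toy stand-ins for the real `Word = [Felt; 4]` and `Rpo256::merge`.

```rust
type Word = u64;

// toy stand-in for Rpo256::merge
fn merge(input: &[Word; 2]) -> Word {
    input[0].wrapping_mul(31).wrapping_add(input[1])
}

pub struct MerklePath {
    nodes: Vec<Word>, // siblings only, leaf excluded
}

impl MerklePath {
    /// Computes the Merkle root for this opening (same fold as above).
    pub fn compute_root(&self, mut index: u64, node: Word) -> Word {
        self.nodes.iter().copied().fold(node, |node, sibling| {
            let is_right_sibling = (index & 1) == 1;
            let input = if is_right_sibling { [sibling, node] } else { [node, sibling] };
            index >>= 1;
            merge(&input)
        })
    }

    /// Verifies an opening by recomputing the root; no leaf stored in the path.
    pub fn verify(&self, index: u64, node: Word, root: Word) -> bool {
        self.compute_root(index, node) == root
    }
}

fn main() {
    // depth-2 tree over leaves 1..=4; open leaf index 2 (value 3)
    let (l0, l1, l2, l3) = (1u64, 2, 3, 4);
    let n01 = merge(&[l0, l1]);
    let n23 = merge(&[l2, l3]);
    let root = merge(&[n01, n23]);
    let path = MerklePath { nodes: vec![l3, n01] };
    assert!(path.verify(2, l2, root));
    println!("opening verified");
}
```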

#53 (comment)

Add `get_leaf_depth` method to `MerkleStore`

This method is needed to support the MASM implementation of the tiered SMT. The signature of the method could be:

pub fn get_leaf_depth(&self, root: Word, index: u64) -> Result<u8, MerkleError>

This method would traverse the tree specified by the root from the top and return as soon as it finds a leaf node (i.e., a node with no children) or a root of an empty subtree.

The error conditions would be:

  • There is no tree for the specified root in the store.
  • We got to depth 64 but the node at that depth is not a leaf.
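The traversal described above can be sketched as follows. This is a simplified, hypothetical model of the store (a map from an inner node to its children over a toy `u64` `Word`), not the actual `MerkleStore` internals; only the shape of the loop and the two error conditions follow the issue.

```rust
use std::collections::HashMap;

type Word = u64; // stand-in for the real Word = [Felt; 4]

#[derive(Debug, PartialEq)]
enum MerkleError {
    RootNotInStore(Word),
    DepthTooBig(u64),
}

/// Hypothetical, simplified store: maps each inner node to its (left, right) children.
struct MerkleStore {
    branches: HashMap<Word, (Word, Word)>,
}

impl MerkleStore {
    /// Walks down from `root` following the bits of `index` (MSB first) and returns
    /// the depth of the first node with no recorded children (a leaf or the root
    /// of an empty subtree).
    pub fn get_leaf_depth(&self, root: Word, index: u64) -> Result<u8, MerkleError> {
        if !self.branches.contains_key(&root) {
            return Err(MerkleError::RootNotInStore(root));
        }
        let mut node = root;
        for depth in 0u8..64 {
            let (left, right) = match self.branches.get(&node) {
                Some(children) => *children,
                None => return Ok(depth), // no children: this node is a leaf
            };
            node = if (index >> (63 - depth)) & 1 == 0 { left } else { right };
        }
        // we got to depth 64 but the node at that depth still has children
        if self.branches.contains_key(&node) {
            Err(MerkleError::DepthTooBig(64))
        } else {
            Ok(64)
        }
    }
}

fn main() {
    let mut branches = HashMap::new();
    branches.insert(100u64, (1, 2)); // root with two children
    branches.insert(1u64, (3, 4));   // left child is itself an inner node
    let store = MerkleStore { branches };
    assert_eq!(store.get_leaf_depth(100, 0), Ok(2));
    println!("leaf found at depth 2");
}
```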

Implement ECDSA over ECExt5

Right now we're trying to implement an ECDSA (over the ec_ext5 curve) signature verification routine in Miden assembly, for which all the necessary underlying constructions are almost ready.

But if you look carefully, you'll see that for this we had to write important cryptographic primitives inside the test directory of https://github.com/maticnetwork/miden, which is not good. I'd like all Miden-related cryptographic requirements to live here and be imported as development dependencies, so that test vectors can be generated on the fly and the functional correctness of the Miden assembly implementations can be verified that way. This also makes the dependency graph more modular, which makes it easier to comprehend and, finally, to audit.

So I'll move all these routines here (now living there) so that we have ECDSA (over the ec_ext5 curve) keygen/sign/verify implemented.

Implement Falcon signature

Ideally, we'd use an official implementation, but I couldn't find any native Rust implementations. There is one project (PQ Crypto) which provides Rust bindings for the C implementation, but I'm not sure that's the right way to go.

I think the verification part should be relatively simple to implement, but the signing part is quite complicated.

In terms of the API we should expose, I think we should use a "compressed" public key, where the key is basically a hash of the actual public key. Then, the signature would consist of the actual signature + the expanded public key. For the hash function used to "compress" the key, we should use RPO so that it is easy to "unhash" within the VM.
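The compressed-key scheme above can be sketched as follows. Everything here is illustrative: `toy_hash` stands in for RPO, and `falcon_verify` is a hypothetical placeholder for the actual Falcon verification routine; only the two-step check (expanded key hashes to the commitment, then the raw signature is verified against the expanded key) follows the proposal.

```rust
// toy stand-in for hashing the expanded key with RPO
fn toy_hash(bytes: &[u8]) -> u64 {
    bytes.iter().fold(0u64, |acc, b| acc.wrapping_mul(131).wrapping_add(*b as u64))
}

/// Signature layout as described: the raw Falcon signature plus the expanded key.
struct Signature {
    raw_sig: Vec<u8>,
    expanded_pk: Vec<u8>,
}

// hypothetical placeholder: real Falcon verification is out of scope for this sketch
fn falcon_verify(_msg: &[u8], _raw_sig: &[u8], _pk: &[u8]) -> bool {
    true
}

fn verify(msg: &[u8], sig: &Signature, pk_commitment: u64) -> bool {
    // 1. the expanded key must hash to the "compressed" public key
    if toy_hash(&sig.expanded_pk) != pk_commitment {
        return false;
    }
    // 2. then the actual signature is checked against the expanded key
    falcon_verify(msg, &sig.raw_sig, &sig.expanded_pk)
}

fn main() {
    let pk = vec![1u8, 2, 3];
    let sig = Signature { raw_sig: vec![9], expanded_pk: pk.clone() };
    assert!(verify(b"msg", &sig, toy_hash(&pk)));
    assert!(!verify(b"msg", &sig, 0));
    println!("commitment check ok");
}
```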

Tracking issue: benchmark for hash functions

Goal

We want to add benchmarks to the repo which test how various hash functions perform against each other.

Details

The hash functions to be included:

  • Rescue Prime Optimized
  • BLAKE3
  • SHA256
  • Poseidon (Polygon Zero implementation).

We should benchmark this on a few machines (e.g., AMD, Intel, Apple) and put a summary table in the main readme file.

Tasks

  1. Al-Kindi-0
  2. Al-Kindi-0
  3. Dominik1999

Working group:

@Dominik1999, @Al-Kindi-0

Workflow
  • Discussion should happen here or in the related sub-issues.
  • PRs should only be merged by the coordinator, to ensure everyone is able to review.
  • Aim to complete reviews within 24 hours.
  • When a related sub-issue is opened:
    • add it to the list of sub-issues in this tracking issue
  • When opening a related PR:
    • request review from everyone in this working group
  • When a sub-issue is completed:
    • close the related issue with a comment that links to the PR where the work was completed

Coordinator: @Dominik1999

Workflow

The working group coordinator ensures scope & progress tracking are transparent and accurate. They will:

  • Merge approved PRs after all working group members have completed their reviews.
    • add the PR # to the relevant section of the current tracking PR.
    • close any completed sub-issue(s) with a comment that links to the PR where the work was completed
  • Monitor workflow items and complete anything that slips through the cracks.
  • Monitor scope to see if anything is untracked or unclear. Create missing sub-issues or initiate discussion as required.
  • Monitor progress to see if there's anything which isn't moving forward. Initiate discussion as required.
  • Identify PRs with especially significant changes and add @grjte and @bobbinth for review.

Tiered sparse merkle tree requirements

Miden VM - TSMT

Definition

A tiered sparse Merkle tree will be a composition of sparse Merkle trees combined into tiers. It will map 256-bit keys to 256-bit values, and will extend the functionality of sparse Merkle trees.

Keys and values are arrays of 64-bit field elements F. We will use the last limb of the key to represent our traversing key.

  • T: Depth per tier, set to 16.
  • M: Maximum number of tiers, set to 4.
  • D: Maximum depth of the tree, set to T ā‹… M = 64
  • H({F}) -> F: Hash primitive of choice
  • H_{t}({F}) -> F: Hash in domain t, where the maximum t = D = 64
  • H_0(depth) -> F: Set of empty sub-tree constants for a target depth.

Lookup

Requirements:

  • H_0(depth) -> F
  • H_{t}({F}) -> F
  • Node(state, depth, index) -> Node Value: F
  • Traverse(state, bits) -> Target Node Depth Index: (F, F)
  • Traverse Key(Key, B) -> Bits: F

For a state of the tree (i.e., root), given a key, it will look up the tree's structure and find the appropriate node for that key. This node will be referred to as the target node within this document.

Lookup(Root, Key, Value) -> (Depth, Index)

For the first tier, we will:

  • Compute bits := Traverse Key(Key, B) for B = T = 16

Traverse Key will take the B most significant bits from the traversing key of Key.

  • Compute the node value for that tier: V := H_{B}(Key, Value)
  • Fetch the target node via a traverse function (depth, index) := Traverse(state, bits)

The tree traversal is straightforward, as it will be the same as in a regular sparse Merkle tree. For the sake of simplicity, we use below T = 4 and an 8-bit key 0b10010110.

image

We start from the orange root, traverse the green nodes in accordance to the most significant bits of the key, and finish at the purple target node. The index pair for that tree will be (depth = 4, index = 9).
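The index arithmetic for this example is just a shift: the 4 most significant bits of the 8-bit key give the index at depth 4. `traverse_key` below is a hypothetical helper named after the Traverse Key requirement.

```rust
// Extract the `b` most significant bits of an 8-bit traversing key.
fn traverse_key(key: u8, b: u32) -> u8 {
    key >> (8 - b)
}

fn main() {
    let key: u8 = 0b1001_0110;
    let index = traverse_key(key, 4); // most significant 4 bits: 0b1001
    assert_eq!(index, 9); // (depth, index) = (4, 9), matching the diagram
    println!("(depth, index) = (4, {index})");
}
```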

  • Fetch a node value N := Node(state, depth, index)
  • If the node value N equals V or H_0(depth), returns (depth, index). Otherwise, continue traversing until the next tier.

For the second tier t = 2, we will:

  • Compute bits := Traverse Key(Key, B) for B = t ā‹… T = 2 ā‹… 16 = 32
  • Compute the node value for that tier: N := H_{B}(Key, Value)
  • Fetch the target node via a traverse function (depth, index) := Traverse(state, bits)
  • Fetch a node value V := Node(state, depth, index)
  • If the node value N equals V, returns (depth, index). If V = H_0(depth), returns (depth, index) of the previous iteration.

Otherwise, continue traversing until B = 64. If we reach the maximum depth = D = 64, returns (64, index), where index is the traversed value for that maximum depth.

Node value for the last tier

Requirements:

  • Order Leaves By Key({(Key, Value)}) -> {(Key, Value)}

The last tier might contain multiple leaves per node. We introduce a Node Value Last Tier(leaves: {Key, Value}) -> F procedure to compute the node values; it behaves as follows:

  • Order leaves via ordered leaves := Order Leaves By Key(leaves)
  • Return the hash in domain for the max depth D = 64 as Node Value Last Tier(ordered leaves: {Key, Value}) -> H_64(ordered leaves)
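The two steps above can be sketched as follows. Only the ordering logic is real here; `h_in_domain` is a toy stand-in for the domain-separated hash H_{64}, and `Word` stands in for the real field-element arrays.

```rust
type Word = u64;

// toy stand-in for the domain-separated hash H_{domain}
fn h_in_domain(domain: Word, elements: &[Word]) -> Word {
    elements.iter().fold(domain, |acc, e| acc.wrapping_mul(31).wrapping_add(*e))
}

/// Node Value Last Tier: order the leaves by key, then hash the flattened,
/// ordered list in the max-depth domain D = 64.
fn node_value_last_tier(mut leaves: Vec<(Word, Word)>) -> Word {
    leaves.sort_by_key(|(key, _)| *key); // Order Leaves By Key
    let flat: Vec<Word> = leaves.iter().flat_map(|(k, v)| [*k, *v]).collect();
    h_in_domain(64, &flat)
}

fn main() {
    // the node value must not depend on insertion order
    let a = node_value_last_tier(vec![(2, 20), (1, 10)]);
    let b = node_value_last_tier(vec![(1, 10), (2, 20)]);
    assert_eq!(a, b);
    println!("last-tier node value: {a}");
}
```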

Membership proof

Requirements:

  • H_{t}({F}) -> F
  • Last Tier Leaves(index) -> {(Key, Value)}
  • Lookup(Root, Key, Value) -> (Depth, Index)
  • Merkle Opening(Root, V, index, Merkle Path) -> boolean
  • Merkle Path(Root, depth, index) -> Merkle Path
  • Node(state, depth, index) -> Node Value: F
  • Node Value Last Tier(leaves) -> F

To check if a candidate leaf (Key, Value) opens to a state Root, we proceed as follows:

  • Fetch the target node via (depth, index) := Lookup(Root, Key, Value)
  • Compute the node value for that tier via N := H_{depth}(Key, Value)
  • If the depth is 64, fetch all leaves of the target node via leaves := Last Tier Leaves(index), assert (Key, Value) belongs to leaves, and replace N := Node Value Last Tier(leaves)
  • Fetch the node value of the state via V := Node(Root, depth, index)
  • Fetch the merkle path for the target node via P := Merkle Path(Root, depth, index)
  • Assert the correctness of the opening via Merkle Opening(Root, V, index, P)

Note: The definitions of Merkle Path and Merkle Opening don't belong to the scope of this document and are assumed to be known by the reader.

Non-Membership proof

Requirements:

  • Fetch Pre Image(Root, depth, index) -> {F}
  • H_0(depth) -> F
  • H_{t}({F}) -> F
  • Last Tier Leaves(index) -> {(Key, Value)}
  • Lookup(Root, Key, Value) -> (Depth, Index)
  • Merkle Opening(Root, V, index, Merkle Path) -> boolean
  • Merkle Path(Root, depth, index) -> Merkle Path
  • Node(state, depth, index) -> Node Value: F
  • Node Value Last Tier(leaves) -> F

To check if a candidate leaf (Key, Value) doesn't exist for a state Root, we proceed as follows:

  • Fetch the target node via (depth, index) := Lookup(Root, Key, Value)
  • Fetch the node value via N := Node(Root, depth, index)
  • Fetch the merkle path for the target node via P := Merkle Path(Root, depth, index)
  • Assert the correctness of the opening via Merkle Opening(Root, N, index, P)

In order to prove non-membership, we need to open the pre-image of the target node and assert that it doesn't contain the candidate leaf. A proof that we know a pre-image that computes the opening correctly suffices to prove non-membership.

  • If N = H_0(depth), then the proof holds.
  • Otherwise, if depth < D, we fetch the pre-image (Key', Value') := Fetch Pre Image(Root, depth, index) and assert N = H_{depth}(Key', Value') && Key != Key' && Value != Value'. We can, alternatively, skip the value check as the key could independently verify the set membership - that would be up to the target use case.
  • Finally, if depth = D, then we fetch leaves := Last Tier Leaves(index), assert (Key, Value) doesn't exist in leaves, and assert N = Node Value Last Tier(leaves).

Insert new leaf

Requirements:

  • Fetch Pre Image(Root, depth, index) -> {F}
  • H_0(depth) -> F
  • H_{t}({F}) -> F
  • Insert Leaf(Root, N, depth, index, Key, Value) -> New Root: F
  • Last Tier Leaves(index) -> {(Key, Value)}
  • Lookup(Root, Key, Value) -> (Depth, Index)
  • Node(state, depth, index) -> Node Value: F
  • Node Value Last Tier(leaves) -> F
  • Traverse(state, bits) -> Target Node Depth Index: (F, F)
  • Traverse Key(Key, B) -> Bits: F

The insertion of a leaf (Key, Value) for a state Root should be implemented as follows:

  • Fetch the target node via (depth, index) := Lookup(Root, Key, Value)
  • Compute the node value for that tier via N := H_{depth}(Key, Value)
  • Fetch the node value of the state via V := Node(Root, depth, index)
  • If V = H_0(depth), then Insert Leaf(Root, N, depth, index, Key, Value)
  • Otherwise, if depth = D = 64, then we fetch leaves := Last Tier Leaves(index), append (Key, Value) to the set leaves (note that a set cannot hold duplicated values), compute the node value N := Node Value Last Tier(leaves), and Insert Leaf(Root, N, depth, index, Key, Value)

Finally, if the previous cases aren't met, we have a collision. We should promote both leaves until they don't collide.

  • Compute the depth of the next tier via next tier depth := depth + T
  • Fetch the node pre-image of the sibling via (Sibling Key, Sibling Value) := Fetch Pre Image(Root, depth, index)
  • Compute the sibling traversing bits via sibling bits := Traverse Key(Sibling Key, next tier depth)
  • Fetch the target sibling node via the traverse function (sibling new depth, sibling new index) := Traverse(state, sibling bits)
  • Compute the traversing bits via bits := Traverse Key(Key, next tier depth)
  • Fetch the target node via the traverse function (new depth, new index) := Traverse(state, bits)

If new depth = D = 64, then we insert the sibling and leaf into their indexes using the previous method for the maximum depth.

Otherwise, if new index != sibling new index, it means we can insert the nodes back into the tree:

  • Compute the sibling node value for that tier via Sibling N := H_{new depth}(Sibling Key, Sibling Value), then New Root := Insert Leaf(Root, Sibling N, new depth, sibling new index, Sibling Key, Sibling Value)

Note that this command should override the previous leaf from the set. It means it should behave the same as if we entirely removed the sibling from the set, and inserted it into the new depth and index.

  • Compute the value for that tier via N := H_{new depth}(Key, Value), then Insert Leaf(New Root, N, new depth, new index, Key, Value)

Finally, if new index = sibling new index, we repeat the procedure, promoting the leaves to the next tier.

MidenVM

The Lookup operation is used for all the routines. These are the requirements for its implementation.

  • H_0(depth) -> F
  • H_{t}({F}) -> F
  • Node(state, depth, index) -> Node Value: F
  • Traverse(state, bits) -> Target Node Depth Index: (F, F)
  • Traverse Key(Key, B) -> Bits: F

The remaining operations will require:

  • Fetch Pre Image(Root, depth, index) -> {F}
  • Insert Leaf(Root, N, depth, index, Key, Value) -> New Root: F
  • Last Tier Leaves(index) -> {(Key, Value)}
  • Merkle Opening(Root, V, index, Merkle Path) -> boolean
  • Merkle Path(Root, depth, index) -> Merkle Path
  • Node Value Last Tier(leaves) -> F
  • Order Leaves By Key({(Key, Value)}) -> {(Key, Value)}

Known blockers

These implementations will require new VM instructions and/or advice provider modification:

  • H_{t}({F}) -> F

Hash in domain is currently not supported as VM instruction

  • Insert Leaf(Root, N, depth, index, Key, Value) -> New Root: F

We can map (depth, index) -> (Key, Value) via the advice provider, but we cannot insert a leaf on an arbitrary index.

Nice to have

The advice provider will allow us to map values to sequences of values. This can theoretically solve any situation, but might be too resource-demanding to work with. Example:

If I were to implement Node(state, depth, index) -> Node Value: F, then I could do the following:

  • Compute the hash of the arguments x := H(state, depth, index)
  • Map x -> v

This means we will have to store every node, not only as a node, but also as a value mapping.

If we have a way to perform a custom request that will take n numbers as argument, we can easily implement more functions:

  • Define a constant N that will work as function selector
  • Invoke the advice provider, passing as arguments (N, state, depth, index)
  • Given N, the advice provider will know it should use the 3 arguments to fetch the node for the given state at depth+index, and return it

Efficient lazy loading of tree data for VM execution

The VM supports a mtree_get operation. This operation references a tree by its root. For the VM to work with the tree's data (for queries / updates / copies), the VM has to load the data into memory. This could be optimized to only load the necessary data in memory.

Open questions are: At which point will this be a good strategy? (code complexity vs. performance gains)

Improve error `ImportedProcModuleNotFound`

This is an example for current error:

Failed to compile test source.: ImportedProcModuleNotFound(ProcedureId([76, 6, 174, 102, 136, 197, 185, 191, 0, 2, 63, 207, 185, 182, 172, 255, 235, 226, 6, 56]))

The ProcedureId is not useful to a human, so it would be useful to have the original name in there too.

Additionally, it would be nice to use Levenshtein distance or something similar to propose a fix.

Support loading instantiated data structures into the `MerkleStore`

As it stands, there is no way to load an instantiated data structure (Mmr, Smt, MerkleTree) into the MerkleStore. The MerkleStore API expects iterators of a specific format depending on the structure being loaded. We should implement methods on Mmr, Smt, and MerkleTree that produce the required iterators, allowing the instantiated data structures to be loaded into the MerkleStore.

Speed up instruction search

At the moment, the first few characters typed into the search take a lot of time; subsequent runs are much faster. I assume this is because, as the data is filtered down, the search runs on fewer entries and becomes faster.

Possible solutions:

  • debounce user input to reduce the number of filtering operations
  • add a loader
  • try to use a library (maybe what is used by the rust or python docs) that can produce a reverse index to speed up the search

re-export `RandomCoin` and `RandomCoinError` from Winterfell

Miden VM depends on RandomCoin, RandomCoinError, which are currently being imported from Winterfell. It would be better to import them from this miden-crypto package instead.

We should add them to this package as re-exports and then update the dependencies in miden-vm when the next version of this package is released.

Add BLAKE3 hash function

We should add wrappers for the BLAKE3 hash function to the hash module. These could be similar to the ones implemented in Winterfell. In terms of variants, it would be great to have the following:

  • Blake3_256 - same as in Winterfell.
  • Blake3_192 - same as in Winterfell.
  • Blake3_160 - a version of BLAKE3 with output truncated to 20 bytes.
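For the Blake3_160 variant, the only new piece relative to the Winterfell-style wrappers is the truncation of BLAKE3's 32-byte digest to 20 bytes. A minimal, std-only sketch of that step is below; in the real wrapper the 32-byte input would come from the `blake3` crate's digest.

```rust
/// Truncate a 32-byte BLAKE3 digest to the 20-byte Blake3_160 output.
fn truncate_digest_160(digest: [u8; 32]) -> [u8; 20] {
    let mut out = [0u8; 20];
    out.copy_from_slice(&digest[..20]);
    out
}

fn main() {
    let digest: [u8; 32] = core::array::from_fn(|i| i as u8);
    let truncated = truncate_digest_160(digest);
    assert_eq!(truncated.len(), 20);
    assert_eq!(&truncated[..], &digest[..20]);
    println!("truncated to {} bytes", truncated.len());
}
```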

Implement `no-std` support

It would be very useful to have no-std support for the crypto crate. Most of the code should already support this; we just need to update the crate metadata to enable it. We can do this similarly to how it is done in the main Miden repo.

Setup `CI`

Setup basic CI checks to assert the correctness of the implementation.

The same model used in Miden VM will be replicated here, that is:

  • Run tests
  • Enforce rustfmt
  • Reject warnings
  • Merge first to next

Simple program causing panic

Using this sample program:

begin
  adv.keyval
end

Causes the VM to fail with:

thread 'main' panicked at 'internal error: entered unreachable code', assembly/src/assembler/span_builder.rs:154:13

On the latest next commit 1de0502d087ced0bb9d7e1d754669244c7c177a7.

Implement `MerkleStore` data structure

Update by @grjte on 2023/03/09:
we now intend to implement MerkleStore to hold all Merkle-based data in the advice provider as described by @bobbinth in this comment below: #89 (comment)

Original issue description

One of the Merkle tree-based data structures we have is a MerklePathSet. This data structure can be viewed as a set of Merkle paths resolving to the same root. The current limitation is that all paths must be of the same length, but I'm wondering if we can remove this limitation.

Partial Merkle tree

If we do, this updated data structure could be named PartialMerkleTree and could look something like this:

image

In the above, we know node value at (index = 0, depth = 2) but we don't know node value at (index = 0, depth = 3).

I think such a data structure shouldn't be too difficult to implement, and the nice thing is that it will work naturally with our mtree_get and mtree_set instructions for any node depth.

For mtree_set specifically, we can easily replace a node in one PartialMerkleTree with a root of another PartialMerkleTree (assuming the combined depth doesn't exceed 64). However, we would not be able to replace a node in a PartialMerkleTree with a root of a Sparse Merkle tree (or rather that would be a very expensive operation).

So, I'm wondering if we could take this a step further.

Merkle Store

A MerkleStore could be a data structure similar to PartialMerkleTree, but one that allows nodes to be roots of empty subtrees at no extra cost. For example:

image

In the above, E2 is a root of an empty subtree of depth 2. Instead of storing all the leaves of this subtree explicitly, we just store E2.

If we can implement this data structure efficiently (and I don't know if we'll run into some complications here), we could potentially replace other Merkle tree-based data structures with it. Specifically:

  • SimpleSmt would be a special case of this data structure such that we know values of all leaves.
  • MerkleTree would also be a special case such that we know all the leaves and most of them are non-zero.
  • TieredSmt would also be a special case such that we allow leaves only on specific tiers and impose additional structure on the leaf nodes.
  • And as already mentioned, this data structure would directly replace MerklePathSet.

The main benefit (the way I see it) is that we end up with a single data structure which is easily "composable" - i.e., we can relatively easily replace a node in MerkleStore with a root of another MerkleStore to combine two trees.

Compact representation for persistent structures

This is a proposal on how to represent multiple versions of the supported data structures efficiently in memory and disk.

Properties of this proposal:

  • It should be possible to move forwards and backwards in a tree's history efficiently (rationale)
  • Data is laid out sequentially
    • easy to index the data
    • fast to send the data over network
    • fast to load the structure in memory
  • The representation should be compact enough so as not to become prohibitively expensive for thousands of versions of the tree, both in terms of required disk and memory space
    • The disk use case follows the same rationale as the link above
    • The memory use case is to support copy-on-write in the advice provider in the Miden VM

In any Merkle structure, the contents of an internal node depend on the contents of the leaves. This proposal assumes there is no additional data in any of the internal nodes, i.e., the leaves alone are sufficient to reconstruct the structure.

The idea is to maintain a log of the changes to the leaves. This could be as simple as:

#[repr(transparent)]
struct Append {
    el: Word,
}

struct Log {
    ops: Vec<Append>,
}

The above representation is optimal w.r.t. storage, since only the data that can't be recomputed is stored. It is a great representation for archive nodes and for a warp-sync-style protocol.

First optimization

Caching: With the above, whenever a tree is required it would need to be reconstructed from the log, which would be very wasteful for the most important case: using the latest version of the tree. A solution is to keep snapshots of the tree managed by a caching strategy; a proposed strategy is the latest version + an LRU cache of size $n$ of requested versions + snapshots every $2^n$ steps.
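The snapshot-and-replay idea can be sketched as follows. The names and the flat `Vec`-based "state" (just the sequence of appended leaves, from which the tree would be rebuilt) are illustrative only; the point is rebuilding any version from the nearest snapshot at or below it.

```rust
type Word = u64;

struct VersionedLog {
    log: Vec<Word>,                      // append-only operations
    snapshots: Vec<(usize, Vec<Word>)>,  // (version, full state), taken periodically
    snapshot_interval: usize,            // e.g. 2^n
}

impl VersionedLog {
    fn append(&mut self, el: Word) {
        self.log.push(el);
        if self.log.len() % self.snapshot_interval == 0 {
            self.snapshots.push((self.log.len(), self.log.clone()));
        }
    }

    /// Rebuilds the state at `version` by replaying the log from the nearest
    /// snapshot at or below it (or from the empty state).
    fn state_at(&self, version: usize) -> Vec<Word> {
        let (snap_version, snap_state) = self
            .snapshots
            .iter()
            .rev()
            .find(|(v, _)| *v <= version)
            .cloned()
            .unwrap_or((0, Vec::new()));
        let mut state = snap_state;
        state.extend_from_slice(&self.log[snap_version..version]);
        state
    }
}

fn main() {
    let mut vl = VersionedLog { log: Vec::new(), snapshots: Vec::new(), snapshot_interval: 4 };
    for i in 1..=10u64 {
        vl.append(i);
    }
    assert_eq!(vl.snapshots.len(), 2);         // snapshots at versions 4 and 8
    assert_eq!(vl.state_at(3), vec![1, 2, 3]); // replayed from the empty state
    println!("replayed {} versions", vl.log.len());
}
```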

Extending the log

The log proposed above supports only append operations. This is useful for structures like the MMR, but not so useful for a tiered sparse Merkle tree: that structure supports deletion, which is internally implemented as setting a specific leaf back to the zero value. This opens the first design point:

  1. Should there be a generic log available for all structures? Or a log tailored to each?

I'm going to assume the latter. The idea is that a tailored log permits better optimizations; for example, the log above uses a transparent operation, whereas the following representation for a tiered sparse Merkle tree needs extra space for the tagged union.

Second log:

enum Op {
  Insert{leaf: Word}, // alternatively this could be (key, value)
  Delete{pos: usize},
}

struct Log {
  ops: Vec<Op>,
}

Second optimization

Moving backwards: The cache above would be limited in size; for example, we could implement an LRU cache with 2 entries in addition to the latest version. If those entries are for versions [1024, 2048] but the structure receives a request for version 1000, then without the ability to move backwards in time a new structure starting from version 0 would have to be constructed. If the version can move backwards, only 24 ops have to be applied.

To achieve this, the insert operation would need to declare what it overwrote, e.g.:

enum Op {
  Insert{new: Word, old: Word},  // alternatively old: usize, this would require a full log
  Delete{pos: usize},
}

struct Log {
  ops: Vec<Op>,
}

When going backwards in the tiered sparse Merkle tree history, Insert would mean replacing the value in new with old.
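The reversible replay can be sketched as follows. Note one assumption beyond the proposal above: for `Delete` to be reversible it must also record the removed value, so the `pos`/`old` fields below are illustrative extensions, and the map-based state is a toy stand-in for the real tree.

```rust
use std::collections::BTreeMap;

type Word = u64;

enum Op {
    Insert { pos: u64, new: Word, old: Word },
    Delete { pos: u64, old: Word }, // `old` added so Delete can be undone
}

fn apply(state: &mut BTreeMap<u64, Word>, op: &Op) {
    match op {
        Op::Insert { pos, new, .. } => { state.insert(*pos, *new); }
        Op::Delete { pos, .. } => { state.remove(pos); }
    }
}

fn revert(state: &mut BTreeMap<u64, Word>, op: &Op) {
    match op {
        // moving backwards, Insert means: put the overwritten value back
        Op::Insert { pos, old, .. } => { state.insert(*pos, *old); }
        Op::Delete { pos, old } => { state.insert(*pos, *old); }
    }
}

fn main() {
    let mut state = BTreeMap::new();
    let op = Op::Insert { pos: 1, new: 10, old: 0 };
    apply(&mut state, &op);
    assert_eq!(state.get(&1), Some(&10));
    revert(&mut state, &op);
    assert_eq!(state.get(&1), Some(&0)); // back to the overwritten (zero) value
    println!("log replayed in both directions");
}
```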

Lookups

The scheme described here uses version numbers to identify versions of the data structure. This may be fine in some use cases, but for the advice provider in the VM the goal is to support mtree_get, which operates on the structure's hash rather than its version number.

To support this we can add reverse maps:

BTreeMap<Word, usize>

Using this map, we can look up the corresponding tree. This proposal keeps the map outside of the operation log; the operation log stays more compact that way, and the map can have fewer entries if necessary.
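A minimal sketch of that reverse map, with `Word` as a toy stand-in for the real root digest and hypothetical method names:

```rust
use std::collections::BTreeMap;

type Word = u64; // stand-in for the real 4-element root digest

/// Maps a tree root to the version number at which that root was produced.
struct VersionIndex {
    by_root: BTreeMap<Word, usize>,
}

impl VersionIndex {
    fn record(&mut self, root: Word, version: usize) {
        self.by_root.insert(root, version);
    }

    /// Resolves an `mtree_get`-style query (by root hash) to a log version.
    fn version_of(&self, root: &Word) -> Option<usize> {
        self.by_root.get(root).copied()
    }
}

fn main() {
    let mut index = VersionIndex { by_root: BTreeMap::new() };
    index.record(42, 7);
    assert_eq!(index.version_of(&42), Some(7));
    assert_eq!(index.version_of(&1), None);
    println!("root 42 -> version 7");
}
```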

Implement ECDSA over Secp256k1

Right now it's a little hard for us to test our implementation of ECDSA over the Secp256k1 curve (in Miden assembly, see 0xPolygonMiden/miden-vm#489), so we generate test cases using https://github.com/itzmeanjan/secp256k1.git, which is somewhat constraining. I'd ideally like to generate random test vectors much more flexibly and pass them through both tunnels, i.e.,

tunnel0 -> Rust implementation of some logic
tunnel1 -> Miden Assembly implementation of same logic

and finally compare their output.

We kind of do that right now by putting important cryptographic constructions' logic inside the test directory. For more context, I suggest looking at 0xPolygonMiden/miden-vm#481, 0xPolygonMiden/miden-vm#482, 0xPolygonMiden/miden-vm#487 & 0xPolygonMiden/miden-vm#489. I want all those cryptographic constructions to live here, where they can be much more easily audited at a later date. That will also make writing test cases much more flexible: we'll just import the Secp256k1 implementation from this crate and use it for generating test vectors in our standard library tests.

Note
This is also applicable to the Falcon512 signature implementation, where we also have limited test coverage. See 0xPolygonMiden/miden-vm#368 (review) where @bobbinth suggested that. We've tried to address it in 0xPolygonMiden/miden-vm#398, but the result is still not very satisfying.

Implement `MerkleStore::{with_tiered_smt, add_tiered_smt}`

Following the same pattern to add leaves of other algorithms into the MerkleStore, we should introduce the following methods:

MerkleStore::with_tiered_smt<R, I>(
    mut self,
    entries: R
) -> Result<Self, MerkleError>
where
    R: IntoIterator<IntoIter = I>,
    I: Iterator<Item = (Word, Word)>
MerkleStore::add_tiered_smt<R, I>(
    &mut self,
    entries: R
) -> Result<Word, MerkleError>
where
    R: IntoIterator<IntoIter = I>,
    I: Iterator<Item = (Word, Word)>

with_tiered_smt will be a builder pattern, and add_tiered_smt will return the new root.

The inclusion should follow the implementation described in #22

Originally posted by @bobbinth in 0xPolygonMiden/miden-vm#808 (comment)

Encapsulate `NodeIndex` traversal

NodeIndex, introduced to help with the indexing of Merkle tree nodes, is responsible for performing efficient & correct traversal over trees.

/// A Merkle tree address to an arbitrary node.
#[derive(Debug, Default, Copy, Clone, Eq, PartialEq, PartialOrd, Ord, Hash)]
pub struct NodeIndex {
    depth: u8,
    value: u64,
}

As pointed out here, we have some common patterns that would benefit from being generalized:

// TODO should we create a helper in `NodeIndex` that will encapsulate traversal to root so
// we always use inlined `for` instead of `while`? the reason to use `for` is because its
// easier for the compiler to vectorize.
let mut path = Vec::with_capacity(index.depth() as usize);
for _ in 0..index.depth() {
    let sibling = index.sibling().to_scalar_index() as usize;
    path.push(self.nodes[sibling]);
    index.move_up();
}

In that case, it's easier for the compiler if we iterate n times, instead of while !index.is_root().

With that said, it might be beneficial if we have an API in NodeIndex that will allow traversal to the root & execution of simple functions, while always guaranteeing a friendly interface with the compiler.

One option would be to create a #[inline(always)] NodeIndex::traverse_to_root<F, A, T>(&mut self, f: F, args: A) where A: BorrowMut<T>, F: FnMut(&mut Self, &mut T) and always perform the loop there.
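A simplified sketch of such an API is below. The `traverse_to_root` signature here is one possible shape (a closure per level instead of the `BorrowMut` form suggested above), not the final API, and `to_scalar_index` is omitted for brevity.

```rust
#[derive(Debug, Copy, Clone, PartialEq)]
pub struct NodeIndex {
    depth: u8,
    value: u64,
}

impl NodeIndex {
    pub fn new(depth: u8, value: u64) -> Self { Self { depth, value } }
    pub fn depth(&self) -> u8 { self.depth }
    pub fn sibling(mut self) -> Self { self.value ^= 1; self }
    pub fn move_up(&mut self) { self.depth -= 1; self.value >>= 1; }

    /// Calls `f` once per level, from the current depth up to the root, using a
    /// counted loop that is easy for the compiler to unroll or vectorize.
    #[inline(always)]
    pub fn traverse_to_root<F: FnMut(&Self)>(mut self, mut f: F) {
        for _ in 0..self.depth {
            f(&self);
            self.move_up();
        }
    }
}

fn main() {
    // collect the sibling values along the path from (depth 3, value 5) to the root
    let mut siblings = Vec::new();
    NodeIndex::new(3, 5).traverse_to_root(|idx| siblings.push(idx.sibling().value));
    assert_eq!(siblings, vec![4, 3, 0]);
    println!("siblings visited: {siblings:?}");
}
```

With this helper, the path-collection loop quoted above becomes a single `traverse_to_root` call, and callers can no longer get the `while !is_root()` form wrong.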

Replace `MerklePathSet` with `PartialMerkleTree`

Our current MerklePathSet is rather inefficient because it stores a lot of extra data (which leads to a lot of extra work on updates). I think we can replace it with something more efficient which could work more like SimpleSmt. The struct could look something like this (though maybe there are other better ways):

pub struct SimpleSmt {
    root: Word,
    max_depth: u8,
    leaves: BTreeMap<u64, Word>,
    branches: BTreeMap<NodeIndex, BranchNode>,
}

struct BranchNode {
    left: RpoDigest,
    right: RpoDigest,
}

The basic idea is that the struct could contain Merkle paths of various lengths (max length of 64) but it would do so without duplicating most of the internal nodes.

This struct could have additional convenience functions:

  • inner_nodes() returning an iterator similar to what we do for MerkleTree, SimpleSmt, and Mmr.
  • paths() returning a set of paths contained in the partial Merkle tree.
  • Serialization/deserialization functions which would enable efficient serialization and deserialization (i.e., without storing redundant nodes).

To give a more concrete example, consider the tree below:

[figure: example tree with known (grey) and unknown (white) nodes]

In this tree, we know only the grey nodes (both light and dark grey) - white nodes are not known to us. Thus, the tree contains 4 full Merkle paths (to indexes 2, 3, 10, and 11).

The tree can be fully defined by the dark grey nodes (and only these ones will need to be serialized), but when we instantiate it in memory, we would also compute the light grey nodes to simplify updates and tree traversals.

Implement domain separation for compact sparse Merkle tree

As part of the design in #22 for a tiered compact SMT, we need to create domain separation between internal nodes and leaf nodes i.e. each type of node should be hashed differently.
The rule that was proposed by @bobbinth is the following:

  1. Internal nodes are hashed as hash(left,right) where right and left are the left and right child, respectively.
  2. Leaf nodes are hashed as hash(r,v) where r is the remaining key (i.e. with the prefix leading to the current position removed) and v is the value associated with the key. Further, we set one of the capacity registers to the depth, i.e. 16, 32, 48, or 64.

At the moment, accessing the capacity registers externally is not possible to realize point 2. The following issue aims at exposing some methods from RPO so that point 2 above can be implemented in the final implementation of our compact SMT #22 .
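To illustrate the idea (with std's DefaultHasher standing in for RPO, since the capacity registers are not exposed yet), mixing the tier depth into the hasher state plays the role of setting a capacity register. This is a sketch of the concept, not of the RPO implementation:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Sketch: domain-separated leaf hashing. DefaultHasher is a stand-in for RPO;
// feeding `depth` first is analogous to setting a capacity register, so the
// same (key, value) pair hashed at different tiers yields different digests.
fn hash_leaf(remaining_key: u64, value: u64, depth: u8) -> u64 {
    let mut h = DefaultHasher::new();
    depth.hash(&mut h); // domain tag, analogous to the capacity register
    remaining_key.hash(&mut h);
    value.hash(&mut h);
    h.finish()
}

fn main() {
    let a = hash_leaf(7, 42, 16);
    let b = hash_leaf(7, 42, 32);
    // same data, different tiers -> different digests
    println!("{a:x} != {b:x}: {}", a != b);
}
```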

Add benchmarks for RPO hash function

We should add benchmarks for RPO implementation in this crate. These should be similar to the benchmarks in the Winterfell repo. The things which would be good to benchmark are:

  1. 2-to-1 hashing (using merge() function).
  2. Sequential hashing of 100 field elements (using hash_elements() function).
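Until proper criterion benchmarks land, the shape of the two measurements can be sketched with a simple timing harness. Note that `merge` and `hash_elements` below are toy stand-ins for the Rpo256 API, not the real functions:

```rust
use std::time::Instant;

// Toy stand-ins for Rpo256::merge and Rpo256::hash_elements; only the
// benchmark shape matters here.
fn merge(pair: [u64; 2]) -> u64 {
    pair[0].wrapping_mul(6364136223846793005).wrapping_add(pair[1])
}

fn hash_elements(elements: &[u64]) -> u64 {
    elements.iter().fold(0, |acc, e| merge([acc, *e]))
}

fn main() {
    const N: u32 = 10_000;

    // 1. 2-to-1 hashing (merge)
    let start = Instant::now();
    let mut acc = 0u64;
    for i in 0..N as u64 {
        acc = merge([acc, i]);
    }
    println!("merge x{N}: {:?} (acc = {acc})", start.elapsed());

    // 2. sequential hashing of 100 field elements (hash_elements)
    let elements: Vec<u64> = (0..100).collect();
    let start = Instant::now();
    let mut acc = 0u64;
    for _ in 0..N {
        acc ^= hash_elements(&elements);
    }
    println!("hash_elements x{N}: {:?} (acc = {acc})", start.elapsed());
}
```

Real benchmarks should use criterion (as the Winterfell repo does) so that warm-up, outlier filtering, and statistical reporting are handled properly.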

Export winter-crypto base traits

We need to export the winter-crypto traits in order to access the available functionality downstream.

0.1.2 introduced an encapsulation of winter-crypto via this library, so the imports are now extracted from here to miden.

Alternatively, we could re-export the entire winter-crypto library, but the ideal approach would be to cherry-pick the types, as this gives us more control over what is used downstream.

Set correct collision resistance for Rpo256

We are currently defining the collision resistance of Rpo256 as a constant for the type.

However, this will not always be true, as we might drop it to 126 bits, depending on the capacity register setup.

Ideally, we should split the Rpo256 into two distinct types, and set the collision attribute/capacity setup strategies individually. This way, we would create a better distinction and unmistakable representation of what we should expect from the implementation.

We should evaluate the options we have available and solve the comment below.

Originally posted by @bobbinth in #68 (comment)

Support tree operation on mixed tree types

The virtual machine has the following instructions: mtree_get, mtree_set. These instructions use the tree's root to identify the target tree of the operation.

That means it is possible to write code that runs mtree_get on a tree backed by a Merkle Mountain Range, and mtree_set the data into a Simple Merkle Tree.

The open question is: which combinations of trees and operations should be supported?

  • column is write target
  • row is read target

                          | Sparse Merkle Tree | Merkle Mountain Range | Simple Merkle Tree
  Sparse Merkle Tree      | x                  |                       |
  Merkle Mountain Range   | ?                  | x                     |
  Simple Merkle Tree      | ?                  | ?                     | x

Note: This is not important for mtree_cwm since we can assume the resulting copy has the same type, i.e. the instruction does not have a way to change the resulting tree type. This could in theory be modified by the VM's advice provider though.

Implement tiered compact sparse Merkle tree

The following issue aims at implementing the idea of a tiered sparse Merkle tree as proposed by @bobbinth in #22. The aim of such an implementation is to get a gauge of the out-of-VM performance and complexity so that these could be compared with their counterparts for the VM.
Operations at depths greater than 64 can for the time being use the idea of a sorted list as explained here. In the future we might use a more robust solution if the latter proves frail in some realistic scenarios.

`Rpo256::hash` might panic depending on input size

Currently, Rpo256::hash expects well-delimited field elements. However, it panics when the input length is not a multiple of the 7-byte binary chunk size.

The following snippet will cause the panic:

use miden_crypto::hash::rpo::Rpo256;

let message = vec![0_u8; 111];
let message = Rpo256::hash(&message);

which fails with:

thread 'main' panicked at 'source slice length (6) does not match destination slice length (7)', src/hash/rpo/mod.rs:120:42

We should probably pad the bytes with zeroes here

crypto/src/hash/rpo/mod.rs

Lines 118 to 120 in 3c60484

for chunk in bytes.chunks(BINARY_CHUNK_SIZE) {
    if i < num_elements - 1 {
        buf[..BINARY_CHUNK_SIZE].copy_from_slice(chunk);

Because chunks, as opposed to chunks_exact, may yield a final slice shorter than the specified number of bytes.
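A sketch of the proposed zero-padding fix, written as a stand-alone helper rather than the actual hash internals (BINARY_CHUNK_SIZE = 7 matches the 7 bytes packed per field element):

```rust
// Sketch of the proposed fix (not the actual miden-crypto code): pad the
// final partial chunk with zeroes so that `copy_from_slice` lengths match.
const BINARY_CHUNK_SIZE: usize = 7;

fn bytes_to_padded_chunks(bytes: &[u8]) -> Vec<[u8; BINARY_CHUNK_SIZE]> {
    bytes
        .chunks(BINARY_CHUNK_SIZE)
        .map(|chunk| {
            let mut buf = [0u8; BINARY_CHUNK_SIZE];
            // `chunks` (unlike `chunks_exact`) may return a short final slice,
            // so copy only `chunk.len()` bytes and leave the rest zeroed.
            buf[..chunk.len()].copy_from_slice(chunk);
            buf
        })
        .collect()
}

fn main() {
    // 111 bytes is not a multiple of 7: 15 full chunks + a 6-byte remainder.
    let message = vec![1u8; 111];
    let chunks = bytes_to_padded_chunks(&message);
    println!("{} chunks, last = {:?}", chunks.len(), chunks.last().unwrap());
}
```

Note that for a real hash function the padding rule must also be injective (e.g. encode the original length), otherwise inputs that differ only in trailing zeroes would collide.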

`MerkleStore` API uses `Word` instead of `Digest`

The MerkleStore API uses Word instead of Digest for all of its public methods (with_merkle_tree / with_sparse_merkle_tree / get_node, etc.). All Merkle objects in the MerkleStore are natively of type Digest, i.e. they are the digest of a hashing permutation - this includes leaves, inner nodes, and the root. Should the API be updated to expect arguments of type Digest instead of Word, or is there some reason behind using Word?

Add Ord trait for RPO digest

In many situations it may be convenient to be able to put a digest as a key into a BTreeMap. But with the current implementation this is not possible to do directly. So, what we do now is convert a digest into [u8; 32] and then use this value as a key in a BTreeMap.

We can probably just implement Ord (and other required traits) directly on RpoDigest256 to enable using it in a BTreeMap.
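For illustration, deriving Ord on a digest newtype is enough to use it as a BTreeMap key directly. In this sketch, u64 stands in for Felt; the real implementation would compare the canonical integer representations of the field elements:

```rust
use std::collections::BTreeMap;

// Stand-in digest type for illustration; the real RpoDigest wraps [Felt; 4].
#[derive(Debug, Default, Copy, Clone, Eq, PartialEq, PartialOrd, Ord)]
pub struct RpoDigest([u64; 4]);

fn main() {
    let mut map: BTreeMap<RpoDigest, &str> = BTreeMap::new();
    map.insert(RpoDigest([0, 0, 0, 2]), "b");
    map.insert(RpoDigest([0, 0, 0, 1]), "a");

    // Keys iterate in lexicographic element order; no [u8; 32] conversion needed.
    let values: Vec<_> = map.values().copied().collect();
    println!("{:?}", values);
}
```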

`SimpleSmt.get_node(..)` infallible branch

When fetching a leaf from a SimpleSmt the lookup should be infallible. Hence we should replace ok_or(...) with expect(..) in the code below.

} else if index.depth() == self.depth() {
    self.get_leaf_node(index.value())
        .or_else(|| self.empty_hashes.get(index.depth() as usize).copied().map(Word::from))
        .ok_or(MerkleError::NodeNotInSet(index.value()))

Refactor `SimpleSmt` to simplify storage map

Currently, the SimpleSmt backend maps each node index to a tuple holding the node's two children.

// TODO consider using a map `index |-> word` instead of `index |-> (word, word)`
let mut index = NodeIndex::new(self.depth(), key);
let mut value = RpoDigest::from(value);
for _ in 0..index.depth() {
    let is_right = index.is_value_odd();
    index.move_up();
    let BranchNode { left, right } = self
        .store
        .get_branch_node(&index)
        .unwrap_or_else(|_| self.store.get_empty_node(index.depth() as usize + 1));
    let (left, right) = if is_right {
        (left, value)
    } else {
        (value, right)
    };
    self.store.insert_branch_node(index, left, right);
    value = Rpo256::merge(&[left, right]);
}
self.root = value.into();
Ok(())

It could be beneficial to reevaluate this approach and maybe use simple NodeIndex |-> Node mappings so traversal will be cheaper and node values will be readily available.
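A stand-alone sketch of the `index |-> node` layout, with a toy merge function in place of Rpo256 and 0 standing in for the per-depth empty subtree digests:

```rust
use std::collections::BTreeMap;

// Toy stand-in for Rpo256::merge; only the storage layout matters here.
fn merge(left: u64, right: u64) -> u64 {
    left.wrapping_mul(31).wrapping_add(right).wrapping_add(1)
}

// Sketch: each node digest is stored once under its own (depth, index) key,
// so sibling lookups during an update are single map accesses.
struct SimpleSmt {
    depth: u8,
    nodes: BTreeMap<(u8, u64), u64>, // (depth, index) -> digest
}

impl SimpleSmt {
    fn get(&self, depth: u8, index: u64) -> u64 {
        // 0 stands in for the per-depth empty digest of the real implementation
        *self.nodes.get(&(depth, index)).unwrap_or(&0)
    }

    /// Inserts a leaf and recomputes the path to the root; returns the new root.
    fn insert_leaf(&mut self, key: u64, value: u64) -> u64 {
        let (mut depth, mut index, mut digest) = (self.depth, key, value);
        while depth > 0 {
            self.nodes.insert((depth, index), digest);
            let sibling = self.get(depth, index ^ 1);
            digest = if index & 1 == 1 {
                merge(sibling, digest)
            } else {
                merge(digest, sibling)
            };
            depth -= 1;
            index >>= 1;
        }
        self.nodes.insert((0, 0), digest);
        digest
    }
}

fn main() {
    let mut smt = SimpleSmt { depth: 2, nodes: BTreeMap::new() };
    let root = smt.insert_leaf(3, 7);
    println!("root = {root}");
}
```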

Add benchmarks for BLAKE3 hash function

Once #6 is closed, we should add benchmarks for BLAKE3. These should be similar to the RPO benchmarks described in #13:

  • 2-to-1 hash (using merge() function).
  • Sequential hash of 100 field elements (using hash_elements() function).

Define general tree storage trait

It might be desirable to have a general storage trait so all the tree implementations (mountain range, sparse merkle tree, etc) can be purely functional and focus only on algorithm optimization, leaving storage optimization details outside of their scope.

A tree-like storage is usually composed of simple map-like operations for nodes and leaves. They are distinguished because leaves are often arbitrary data, while nodes are tuples of digests of a given hash implementation.

The issue started with PR #27, but it was better to decouple such a big topic from the implementation of the SMT itself.

Here is a suggested implementation with comments:

#![no_std]

extern crate alloc;

// cow is optimal to ease the storage requirements on whether or not he will pass references or
// owned values. sometimes the storage might have to fetch the data from async contexts - to not
// force the storage to hold the lock of the context while the data is being processed by the
// consumer, it will have the option to send it as owned. in other cases, the fetch operation is
// trivial (i.e. memory maps such as BTreeMap) so the storage can just return a reference.
use alloc::borrow::Cow;

// the hasher will be the responsible to define the concrete digest implementation for nodes and
// merge them, producing merkle paths.
use winter_crypto::Hasher;

// a tree node is either a single node (likely the root) or a pair with two digests, one
// representing a left value, the other a right value.
#[derive(Clone)]
pub enum TreeNode<D> {
    Pair { left: D, right: D },
    Single(D),
}

// a tree storage is a common definition of map-like implementations optimized for merkle trees.
pub trait TreeStorage: Sized {
    // defines the index of a leaf in the tree. it is often just a single scalar value whereas a
    // node is indexed by a pair defining both the depth and its index for that depth. the leaf
    // will often have as depth the maximum value allowed for the concrete instance.
    type LeafIndex;

    // usually a pair defining the depth + index of the node for that depth.
    type NodeIndex;

    // an arbitrary error definition defined by the provider.
    type Error;

    // a common hash definition used for the backend. the purpose is to define the digest
    // concretely, allowing the storage to implement its functionality efficiently since it will be
    // aware of the details of the digest - such as bits count, parity, etc.
    type Hasher: Hasher;

    // a leaf that is separated from the nodes of the tree. usually the leaf contains information
    // relevant for the consumers of the tree, while the nodes contain only the digests required to
    // traverse the tree and generate paths.
    type Leaf: Clone;

    // create a new tree for a given depth.
    fn new_with_depth(depth: u32) -> Result<Self, Self::Error>;

    // fetch the depth of the storage so tree implementers will be able to construct merkle paths.
    fn depth(&self) -> u32;

    // fetch the fixed empty digest for a given depth, provided the upstream branch for that depth
    // is composed only of empty leaves.
    fn empty_digest(
        &self,
        depth: u32,
    ) -> Result<Option<Cow<'_, <Self::Hasher as Hasher>::Digest>>, Self::Error>;

    // append a new leaf to the storage, replacing a previous value, if applicable. we opt to not
    // return the previous value so the storage can freely discard the replaced value. if the user
    // needs the previous value, he can call `Self::get_leaf` and check the underlying `Option`.
    fn insert_leaf<L>(&mut self, index: Self::LeafIndex, leaf: L) -> Result<(), Self::Error>
    where
        L: Into<Self::Leaf>;

    // fetch a previously inserted leaf. usually the index will be a common reference scalar (i.e.
    // u64).
    fn get_leaf(&self, index: &Self::LeafIndex)
        -> Result<Option<Cow<'_, Self::Leaf>>, Self::Error>;

    // append a new node to the storage, replacing a previous value, if applicable. it follows the
    // same pattern of `Self::insert_leaf`; thus, it will not return the previous value.
    fn insert_node<N>(&mut self, index: Self::NodeIndex, node: N) -> Result<(), Self::Error>
    where
        N: Into<TreeNode<<Self::Hasher as Hasher>::Digest>>;

    // fetch a previously inserted node. usually the index will be a pair defining both the depth,
    // and the index of the node for that depth.
    fn get_node(
        &self,
        index: &Self::NodeIndex,
    ) -> Result<Option<Cow<'_, TreeNode<<Self::Hasher as Hasher>::Digest>>>, Self::Error>;
}

// create a new list of pairs `(depth, digest)` for a provided hasher and a given depth. it is a
// generic implementation to initialize storages and provide the empty digests via
// [`TreeStorage::empty_digest`]. we don't make assumptions on how the user wants to store this
// list so it is returned as a generic iterator, allowing the user to collect it however best
// suits them.
pub fn compute_empty_digests<H>(depth: u32) -> impl Iterator<Item = (u32, H::Digest)>
where
    H: Hasher,
{
    (0..=depth).scan(H::Digest::default(), move |state, i| {
        Some((depth - i, core::mem::replace(state, H::merge(&[*state; 2]))))
    })
}
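To make the recurrence concrete, here is a self-contained version of compute_empty_digests with a toy u64 "hasher" in place of winter_crypto::Hasher; only the shape of the iterator matters:

```rust
// Toy stand-in for Hasher::merge; the real version merges two digests.
fn merge(pair: [u64; 2]) -> u64 {
    pair[0].wrapping_mul(31).wrapping_add(pair[1]).wrapping_add(1)
}

// Yields (depth, empty_digest(depth)) from the deepest level up to the root:
// each level's empty digest is the merge of two copies of the level below.
fn compute_empty_digests(depth: u32) -> impl Iterator<Item = (u32, u64)> {
    (0..=depth).scan(0u64, move |state, i| {
        Some((depth - i, std::mem::replace(state, merge([*state; 2]))))
    })
}

fn main() {
    let digests: Vec<_> = compute_empty_digests(3).collect();
    println!("{:?}", digests);
}
```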

Tracking issue: Sparse Merkle tree implementations

Goal

We should add implementations of Sparse Merkle tree.

Details

Specifically, I'm thinking of the following variants:

  • Simple SMT (could go into /merkle/simple_smt.rs module).
  • Compact SMT (could go into /merkle/compact_smt.rs module).

References:

NICE TO HAVE

No tasks being tracked yet.

Working group:

@Al-Kindi-0, @bobbinth, @grjte, @vlopes11, @hackaugusto

Workflow
  • Discussion should happen here or in the related sub-issues.
  • PRs should only be merged by the coordinator, to ensure everyone is able to review.
  • Aim to complete reviews within 24 hours.
  • When a related sub-issue is opened:
    • add it to the list of sub-issues in this tracking issue
  • When opening a related PR:
    • request review from everyone in this working group
  • When a sub-issue is completed:
    • close the related issue with a comment that links to the PR where the work was completed

Coordinator: @vlopes11

Workflow

The working group coordinator ensures scope & progress tracking are transparent and accurate. They will:

  • Merge approved PRs after all working group members have completed their reviews.
    • add the PR # to the relevant section of the current tracking PR.
    • close any completed sub-issue(s) with a comment that links to the PR where the work was completed
  • Monitor workflow items and complete anything that slips through the cracks.
  • Monitor scope to see if anything is untracked or unclear. Create missing sub-issues or initiate discussion as required.
  • Monitor progress to see if there's anything which isn't moving forward. Initiate discussion as required.
  • Identify PRs with especially significant changes and add @grjte and @bobbinth for review.

Tracking issue: Implement Merkle mountain range

Goal

Implement Merkle mountain range in Rust and Miden Assembly.

Details

The MMR is an append-only log used for the notes database, which stores all notes ever created. This data structure was chosen because notes can be easily added even if most nodes have been discarded. Additionally, inclusion witnesses never become stale (although they may need to be extended).

The MMR can also be used to track the history of the chain, as described here: 0xPolygonMiden/miden-base#17

Must have (Rust)

Must have (Miden assembly)

Nice to have

Non goals

Working group:

@hackaugusto , @bobbinth, @Al-Kindi-0
(Augusto to do the work, Bobbin & Al for discussion & review)

Workflow
  • Discussion should happen here or in the related sub-issues.
  • PRs should only be merged by the coordinator, to ensure everyone is able to review.
  • Aim to complete reviews within 24 hours.
  • When a related sub-issue is opened:
    • add it to the list of sub-issues in this tracking issue
  • When opening a related PR:
    • request review from everyone in this working group
  • When a sub-issue is completed:
    • close the related issue with a comment that links to the PR where the work was completed

Coordinator: @hackaugusto

Workflow

The working group coordinator ensures scope & progress tracking are transparent and accurate. They will:

  • Merge approved PRs after all working group members have completed their reviews.
    • add the PR # to the relevant section of the current tracking PR.
    • close any completed sub-issue(s) with a comment that links to the PR where the work was completed
  • Monitor workflow items and complete anything that slips through the cracks.
  • Monitor scope to see if anything is untracked or unclear. Create missing sub-issues or initiate discussion as required.
  • Monitor progress to see if there's anything which isn't moving forward. Initiate discussion as required.
  • Identify PRs with especially significant changes and add @grjte and @bobbinth for review.

Migrate simple Sparse Merkle tree

We have an implementation of a simple SMT in the main Miden repo here. We should migrate this implementation to this crate, probably under src/merkle/simple_smt.rs.

We can improve the implementation and documentation during the migration process, but I would not spend too much time on this.

`MerkleStore` extract structures

AccountStorage is structured as follow:

pub struct AccountStorage {
    slots: SimpleSmt,   // SMT of depth 8
    store: MerkleStore, // Merkle store for any tree data
}

The leaves of the SimpleSmt can represent roots of Merkle structures stored in the MerkleStore. For more information on this implementation see here. To support such an implementation we need the ability to extract Merkle structures from the MerkleStore based on their roots. The interface we would need to implement is:

impl MerkleStore {
    pub fn subtree(&self, root: Word) -> MerkleStore
}

Or maybe even:

impl MerkleStore {
    pub fn subset(&self, roots: &[Word]) -> MerkleStore
}

Efficient tree implementations for the mtree_cwm instruction

The instruction mtree_cwm is defined as:

Copies a Merkle tree with root R and updates a node at depth d and index i in the copied tree to value V'.

We currently have the following tree implementations:

  • Sparse Merkle Tree
  • Merkle Mountain Range
  • Simple Merkle Tree

For all the data structures above, depending on the size of the tree, a copy is potentially a very expensive operation. This issue is about how to design the above structures so as to support a more efficient mtree_cwm.
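One possible direction (a sketch, not a committed design) is structural sharing: wrap the node storage in Rc so that the copy itself is O(1) and the map is cloned lazily, on the first write after a copy. A fully persistent tree would go further and share per node. Types below are illustrative stand-ins:

```rust
use std::collections::BTreeMap;
use std::rc::Rc;

// Sketch only: structural sharing via Rc, with u64 stand-ins for indexes and
// digests. Cloning the tree just bumps a refcount; Rc::make_mut clones the
// node map only when it is shared (clone-on-write).
#[derive(Clone)]
struct Tree {
    nodes: Rc<BTreeMap<u64, u64>>,
}

impl Tree {
    /// mtree_cwm-style operation: copy the tree, then update one node.
    fn copy_with_update(&self, index: u64, value: u64) -> Tree {
        let mut copy = self.clone(); // O(1): bumps the Rc refcount
        Rc::make_mut(&mut copy.nodes).insert(index, value);
        copy
    }
}

fn main() {
    let mut base = BTreeMap::new();
    base.insert(0u64, 100u64);
    let original = Tree { nodes: Rc::new(base) };

    let copy = original.copy_with_update(0, 200);

    // the original is untouched; only the copy sees the new value
    println!("{:?} {:?}", original.nodes.get(&0), copy.nodes.get(&0));
}
```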

implement `.inner_nodes()` on `MerkleStore`

To combine two MerkleStores we should implement .inner_nodes() on MerkleStore such that we can achieve the following:

merkle_store.extend(other_merkle_store.inner_nodes());
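With a toy store (u64 keys standing in for RpoDigest), the proposed inner_nodes() / extend() pair could look like this:

```rust
use std::collections::BTreeMap;

// Toy stand-in for MerkleStore: maps a node digest to its two children.
#[derive(Debug, Default)]
struct MerkleStore {
    nodes: BTreeMap<u64, (u64, u64)>,
}

impl MerkleStore {
    /// Iterator over (digest, left, right) triples: the proposed
    /// inner_nodes() shape.
    fn inner_nodes(&self) -> impl Iterator<Item = (u64, u64, u64)> + '_ {
        self.nodes.iter().map(|(d, (l, r))| (*d, *l, *r))
    }

    /// Merging two stores is then a plain extend over that iterator.
    fn extend(&mut self, iter: impl Iterator<Item = (u64, u64, u64)>) {
        self.nodes.extend(iter.map(|(d, l, r)| (d, (l, r))));
    }
}

fn main() {
    let mut a = MerkleStore::default();
    a.nodes.insert(10, (1, 2));

    let mut b = MerkleStore::default();
    b.nodes.insert(20, (3, 4));

    a.extend(b.inner_nodes());
    println!("{} nodes after merge", a.nodes.len());
}
```

Because the store is keyed by digest, merging is idempotent: nodes present in both stores simply overwrite themselves.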

Implement compact sparse Merkle tree in Miden assembly

The following issue aims at implementing the idea of a tiered sparse Merkle tree as proposed by @bobbinth in #22 in Miden assembly. The aim of such an implementation is to get a gauge of the performance and complexity of such an approach so that bottlenecks are identified and new VM instructions and decorators are introduced if needed.

Efficient batch operations on trees

When a user inserts a new element into a Sparse Merkle Tree, if that tree has a conflict, this operation becomes two inserts (the old element has to be pushed down a level, and the new element has to be inserted).

This causes hashing of the complete merkle path twice, once for each update. It may make sense to batch operations and remove the extra unnecessary work (in the above example, that would be the hashing from the common parent up to the root)

Create a wrapper for `Word`

Currently Word is a type alias

crypto/src/lib.rs

Lines 24 to 34 in 0c242d2

// TYPE ALIASES
// ================================================================================================

/// A group of four field elements in the Miden base field.
pub type Word = [Felt; WORD_SIZE];

// CONSTANTS
// ================================================================================================

/// Number of field elements in a word.
pub const WORD_SIZE: usize = 4;

However, it is a critical component of our stack as it is used everywhere, in different forms. One example is RpoDigest:

// DIGEST TRAIT IMPLEMENTATIONS
// ================================================================================================
#[derive(Debug, Default, Copy, Clone, Eq, PartialEq)]
pub struct RpoDigest([Felt; DIGEST_SIZE]);

They both have the same structure, but different concrete types. This leads to many cases where unsafe optimizations are required, such as in here:

#58

One of the problems highlighted by this PR is the need to convert between Word and RpoDigest efficiently (i.e. without copying/allocating) & safely.

We could achieve major simplification if:

  • Word was a struct
  • The digest of Rpo256 was a Word
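A sketch of that direction, with u64 standing in for Felt: once both types wrap the same array, conversions are plain moves with no allocation and no unsafe:

```rust
// Hypothetical sketch of Word as a newtype rather than a type alias; u64
// stands in for Felt so the example is self-contained.
type Felt = u64;
pub const WORD_SIZE: usize = 4;

#[derive(Debug, Default, Copy, Clone, Eq, PartialEq)]
pub struct Word([Felt; WORD_SIZE]);

#[derive(Debug, Default, Copy, Clone, Eq, PartialEq)]
pub struct RpoDigest([Felt; WORD_SIZE]);

// Both sides wrap the same array, so conversions are simple moves of a Copy
// type: no allocation and no unsafe casts.
impl From<RpoDigest> for Word {
    fn from(digest: RpoDigest) -> Self {
        Word(digest.0)
    }
}

impl From<Word> for RpoDigest {
    fn from(word: Word) -> Self {
        RpoDigest(word.0)
    }
}

fn main() {
    let digest = RpoDigest([1, 2, 3, 4]);
    let word: Word = digest.into();
    let back: RpoDigest = word.into();
    println!("round-trip ok: {:?}", back);
}
```

In the real crate, #[repr(transparent)] on both wrappers would additionally guarantee the layouts match, which keeps zero-copy reinterpretation on the table if it is ever needed.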

Design compact Sparse Merkle tree

The original issue describing the need for compact SMTs is here and the PR which has a partially working implementation is here.

The high-level summary is that we need to support a Sparse Merkle tree which enables keys of size ~256 bits. This SMT would be used in many places in the VM. For example, account storage, account vaults, account database, nullifier database etc.

We need to balance efficiency both in Rust and in Miden assembly. There are many implementations of SMTs out there (e.g., Jellyfish Merkle Tree), however, my intuition is that implementing them will be pretty costly in Miden assembly because of the large number of conditional checks.

So, while this issue is not about Miden assembly, we should probably get a good idea of how the construction we come up with will be implemented in Miden assembly and what the associated costs would be (i.e., in terms of cycle counts for both get and set operations).

One approach is to build a "Tiered tree". In this tree, the first $n$ levels (say 21) would be a simple SMT (no compaction), and then starting at the 22nd level we could do a more sophisticated compaction scheme. Or we could have 2 tiers of 21 levels each and then do more sophisticated compaction (we should probably benchmark these and other approaches).

I think the performance targets could be something like (in an average case):

  • For a SMT with under $10^6$ elements:
    • Getting a node: ~200 VM cycles
    • Setting a node: ~500 VM cycles
  • For a SMT with between $10^6$ and $10^{12}$ elements
    • Getting a node: ~500 VM cycles
    • Setting a node: ~1000 VM cycles

Document key bits of `SimpleSmt`

Currently, the key bits are documented as 63-bit

/// A sparse Merkle tree with 63-bit keys and 4-element leaf values, without compaction.
/// Manipulation and retrieval of leaves and internal nodes is provided by its internal `Store`.
/// The root of the tree is recomputed on each new leaf update.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct SimpleSmt {

However, we take u64 as arguments. This difference might create confusion downstream.

/// Inserts a leaf located at the specified key, and recomputes hashes by walking up the tree
pub fn insert_leaf(&mut self, key: u64, value: Word) -> Result<(), MerkleError> {

One option is to take a Felt as the argument instead. But then we also need to rework the rpo module documentation, which also mentions 64-bit, and it is important that we keep these consistent, as RPO is the backend of SimpleSmt:

/// The parameters used to instantiate the function are:
/// * Field: 64-bit prime field with modulus 2^64 - 2^32 + 1.
/// * State width: 12 field elements.
/// * Capacity size: 4 field elements.
/// * Number of rounds: 7.
/// * S-Box degree: 7.

Originally posted by @grjte in #64 (comment)

Always show the search input on the instructions table

At the moment the search input is only visible when the page is scrolled all the way to the top. If users don't pay attention they may not realize that search is available, and users who know of the search feature have to scroll all the way back to the top to use it.

A solution is to add the search input along side the table's header, which is sticky and always visible.

`MerkleStore` performance optimisations

There may be some optimisations we can implement on the MerkleStore to improve its performance. We should benchmark to assess the performance impact of the changes. The two optimisations we should consider are:

  1. Replace the BTreeMap store with a HashMap, specifically hashbrown which is no_std compatible.
  2. As the keys in the MerkleStore are the digests of the Rpo hash function, which is cryptographically secure, we should see if we can avoid performing an additional hash when we do lookups and inserts by replacing the standard hasher. For inspiration on how this can be implemented we can take a look at the nohash-hasher library.
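A sketch of point 2, hand-rolling the same trick as the nohash-hasher crate: since the digest keys are already uniformly distributed, the map's hasher can simply pass key bytes through instead of hashing again. Types here are illustrative, using [u8; 32] keys in place of RpoDigest:

```rust
use std::collections::HashMap;
use std::hash::{BuildHasherDefault, Hasher};

// Sketch of a pass-through hasher: `finish` returns the first 8 bytes of the
// last `write` call, so an already-uniform digest key needs no re-hashing.
#[derive(Default)]
struct PassThroughHasher(u64);

impl Hasher for PassThroughHasher {
    fn finish(&self) -> u64 {
        self.0
    }

    fn write(&mut self, bytes: &[u8]) {
        // keep up to 8 bytes of whatever was written last
        let mut buf = [0u8; 8];
        let n = bytes.len().min(8);
        buf[..n].copy_from_slice(&bytes[..n]);
        self.0 = u64::from_le_bytes(buf);
    }
}

type NoHashMap<V> = HashMap<[u8; 32], V, BuildHasherDefault<PassThroughHasher>>;

fn main() {
    let mut map: NoHashMap<&str> = NoHashMap::default();
    map.insert([7u8; 32], "leaf");
    println!("{:?}", map.get(&[7u8; 32]));
}
```

Note this is only sound when keys are cryptographic digests; for adversarially chosen keys a pass-through hasher invites collision attacks, which is exactly why std's default hasher re-hashes.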
