axone-protocol / contracts
Smart contracts for the Axone protocol (powered by CosmWasm)
Home Page: https://axone.xyz
License: BSD 3-Clause "New" or "Revised" License
Note
Severity: Low
target: v5.0.0 - Commit: cde785fbd2dad71608d53f8524e0ef8c8f8178af
Ref: OKP4 CosmWasm Audit Report v1.0 - 02-05-2024 - BlockApex
The Cognitarium and Objectarium contracts exhibit limitations in their query functionalities. This forces users and other contracts to perform a raw query to read the stored values, tying their code to the current implementation of the Cognitarium contract, which is error-prone.
In the Cognitarium, no query exposes the internal state variables NAMESPACE_KEY_INCREMENT and BLANK_NODE_IDENTIFIER_COUNTER. These variables are fundamental for tracking the increments of namespace keys and the identifiers of blank nodes, which are pivotal for organizing and retrieving semantic data. The absence of query functions for these variables restricts the ability to monitor and manage internal state changes effectively.
In the Objectarium, no query exposes the bucket statistics (size, compressed_size, object_count) and other configuration parameters. This limitation hinders users or applications from retrieving essential information that could assist in assessing the usage, management, and ownership of buckets.
We recommend exposing a smart query that returns the above-mentioned elements.
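To illustrate the recommendation, here is a minimal sketch of the shape such a smart query response could take. The message name `BucketStat` and the handler signature are hypothetical, and the persisted bucket state is passed in directly to keep the sketch self-contained:

```rust
// Sketch: a read-only view over bucket statistics that a hypothetical
// `QueryMsg::BucketStat` could return. Field names mirror the statistics
// mentioned in the finding above.
#[derive(Debug, PartialEq)]
pub struct BucketStatResponse {
    pub size: u128,            // total size of the stored objects
    pub compressed_size: u128, // total size actually occupying storage
    pub object_count: u128,    // number of objects in the bucket
}

// In the contract this would read the persisted bucket state; here the
// values are passed in directly to keep the example dependency-free.
pub fn query_bucket_stat(size: u128, compressed_size: u128, object_count: u128) -> BucketStatResponse {
    BucketStatResponse { size, compressed_size, object_count }
}
```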
Remove the possibility to target blank nodes through the Cognitarium message interface when using the triple patterns VarOrNode and VarOrNodeOrLiteral.
Blank nodes shall have an internal identifier that should not be used externally; they should only represent a link between triples, without the possibility to target them directly from external interfaces. Moreover, I think their internal value should not be exposed; the IdentifierIssuer could be used for this purpose.
I propose to remove from VarOrNode and VarOrNodeOrLiteral the possibility of referencing a blank node by its name, and to rename the blank nodes in a deterministic manner when exposing query results.
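The deterministic renaming mentioned above could work roughly like this sketch: internal identifiers are mapped to canonical names in order of first appearance, in the spirit of an IdentifierIssuer. The struct and naming scheme (`b0`, `b1`, ...) are assumptions, not the contract's actual implementation:

```rust
use std::collections::HashMap;

// Sketch: deterministically rename internal blank-node identifiers when
// exposing query results, so external callers never see (nor can target)
// the real internal values.
pub struct BlankNodeRenamer {
    issued: HashMap<String, String>,
    counter: u64,
}

impl BlankNodeRenamer {
    pub fn new() -> Self {
        Self { issued: HashMap::new(), counter: 0 }
    }

    // Returns a canonical name ("b0", "b1", ...) for an internal id,
    // issuing a new one on first encounter.
    pub fn rename(&mut self, internal_id: &str) -> String {
        if let Some(name) = self.issued.get(internal_id) {
            return name.clone();
        }
        let name = format!("b{}", self.counter);
        self.counter += 1;
        self.issued.insert(internal_id.to_string(), name.clone());
        name
    }
}
```

Renaming in first-appearance order keeps the output stable for a given result set while decoupling it from internal counters.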
I think this matter should be addressed in conjunction with #434.
Currently, the objectarium smart contract only allows the use of the SHA-256 algorithm for computing object ids. It would be beneficial if the contract could support a range of hash algorithms, which can be selected during the contract's instantiation.
Add support for other hash algorithms, specifically the SHA-2 family algorithms (SHA-224, SHA-384, and SHA-512) as well as the MD5 algorithm.
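A minimal sketch of such an instantiation-time selector might look as follows. The enum is an assumption (the contract does not define it); digest sizes are the standard ones for each algorithm, and the actual hashing would be delegated to crates such as `sha2` and `md-5`:

```rust
// Sketch: an algorithm selector that could be taken as an instantiation
// parameter. Only the digest sizes are encoded here; real hashing would
// dispatch to the corresponding hash crate.
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum HashAlgorithm {
    Md5,
    Sha224,
    Sha256,
    Sha384,
    Sha512,
}

impl HashAlgorithm {
    // Digest length in bytes, i.e. the size of the resulting object id.
    pub fn digest_len(&self) -> usize {
        match self {
            HashAlgorithm::Md5 => 16,
            HashAlgorithm::Sha224 => 28,
            HashAlgorithm::Sha256 => 32,
            HashAlgorithm::Sha384 => 48,
            HashAlgorithm::Sha512 => 64,
        }
    }
}
```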
Implement the ExecuteMsg::RegisterService message according to the current specification of the okp4-dataverse smart contract.
Note
Severity: Critical
target: v5.0.0 - Commit: cde785fbd2dad71608d53f8524e0ef8c8f8178af
Ref: OKP4 CosmWasm Audit Report v1.0 - 02-05-2024 - BlockApex
The instantiation process of the Law Stone contract is susceptible to a front-running vulnerability when interacting with the Objectarium contract for storing .pl files. This vulnerability stems from the public visibility of transaction data in the mempool, which allows attackers to intercept and replicate the initialization parameters. The core issue arises during the instantiate function's call to store_object in the Objectarium. If an attacker captures and submits the same data/program to the Objectarium ahead of the legitimate transaction, the Law Stone's initialization will fail, leading to repeated Denial of Service (DoS).
This vulnerability exposes the Law Stone contract to a persistent threat of initialization failure, which can be systematically exploited to prevent its deployment.
Ensure that the Objectarium is aware of the Law Stone's dependencies and enforces checks that the calling contract matches expected parameters.
contracts/okp4-law-stone/src/contract.rs
contracts/okp4-objectarium/src/contract.rs
Note
Severity: Low
target: v5.0.0 - Commit: cde785fbd2dad71608d53f8524e0ef8c8f8178af
Ref: OKP4 CosmWasm Audit Report v1.0 - 02-05-2024 - BlockApex
In the Objectarium contract, there is a discrepancy in how compressed sizes are handled during the lifecycle of an object. While the store_object function correctly increments the compressed_size statistic upon storing an object, the corresponding decrement operation is missing in the forget_object function when an object is removed. This oversight leads to inaccurate tracking of the compressed data size within the system.
To resolve this issue, update the forget_object function to include a decrement operation for the compressed_size statistic, similar to how it handles other metrics.
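The symmetry the fix restores can be sketched as follows; the struct and function names are illustrative, not the contract's actual code:

```rust
// Sketch: bucket statistics must move symmetrically on store and forget.
// The decrement of `compressed_size` in `on_forget` is the piece the
// audit found missing from `forget_object`.
#[derive(Debug, Default, PartialEq)]
pub struct BucketStat {
    pub size: u128,
    pub compressed_size: u128,
    pub object_count: u128,
}

pub fn on_store(stat: &mut BucketStat, size: u128, compressed_size: u128) {
    stat.size += size;
    stat.compressed_size += compressed_size;
    stat.object_count += 1;
}

pub fn on_forget(stat: &mut BucketStat, size: u128, compressed_size: u128) {
    stat.size -= size;
    stat.compressed_size -= compressed_size; // the missing decrement
    stat.object_count -= 1;
}
```

Storing and then forgetting the same object should leave the statistics exactly where they started.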
For the OKP4 protocol, we require a smart contract that lets users hold tokens as a pre-authorization for a service provider. This is analogous to how pre-authorizations work in traditional finance with credit cards. While the working name for this smart contract is pay-secure, it is subject to revision.
Essentially, the smart contract allows a Client to temporarily lock tokens as a pre-authorization tool for a Provider. This reserves a specific sum without actually deducting it, thereby verifying that the Client has sufficient funds. The smart contract is designed to operate as a Smart Contract Account (SCA), with the consumer serving as the account Holder.
To date, there is no test covering the decompression of data in the objectarium smart contract. We should add tests to cover this feature.
Recently, we've faced a scenario where just changing the version of a component (Rust, in this case) had the potential to disrupt our smart contract execution - cf. #292 (comment). Such cases can lead to disastrous problems if not detected early.
To ascertain the proper functioning of the Smart Contracts within the OKP4 blockchain, I propose the addition of a new CI workflow designed to detect and resolve such disruption early in the development process. This workflow could incorporate a variety of tests, including on-chain deployment tests, as well as certain smart contract calls to validate the correct interaction of the contracts amongst themselves and with the entire blockchain.
The current cargo make targets managing the local okp4d chain start are running the v4.0.0 version; we should upgrade to the v5.0.0 version.
Unfortunately, with this version the transaction broadcast mode no longer supports block, so a workaround will be needed.
Implement the CONSTRUCT query over the triple store in the cognitarium smart contract, allowing the retrieval of a set of triples serialized in a specific format (Turtle, RDF/XML, ...).
This way of querying is particularly relevant for the purpose of Governance evaluation because it allows the retrieval of triples that constitute an RDF graph. Indeed, this graph seamlessly integrates with Prolog due to the inherent equivalence between an RDF triple and a Prolog predicate. This approach enables us to harness the semantics of ontological knowledge while maintaining its essence during interpretation within a Prolog program.
Notes:
Note
Severity: Low
target: v5.0.0 - Commit: cde785fbd2dad71608d53f8524e0ef8c8f8178af
Ref: OKP4 CosmWasm Audit Report v1.0 - 02-05-2024 - BlockApex
The Cognitarium smart contract is designed to support multiple RDF formats for data submission, including N-Quads, RDF/XML, Turtle, and N-Triples. This versatility is intended to enhance the contract's adaptability and user experience by allowing for flexible data representation. However, the Dataverse contract, which controls the data input to the Cognitarium, is currently hardcoded to use only the N-Quads format. This restriction arises from a limitation within the submit_claims function, which lacks the capability to accept a format parameter, contrary to what is documented. As a result, despite the Cognitarium's ability to handle various formats, this functionality is underutilized due to the Dataverse's fixed format implementation.
This issue not only limits the flexibility of data input but also leads to a discrepancy between the system's documented capabilities and its actual functionality. The mismatch can cause confusion among users and developers, who may expect broader format support based on the official contract documentation.
Implement the Store query message handling, intended to return information regarding current store usage.
We need to conduct a benchmark campaign for the OKP4 chain and its smart contracts to evaluate their performance. The outcome of this campaign should be metrics (e.g. gas consumption, storage size, ...) and graphs that will help us understand the behavior of the blockchain and the smart contracts under different loads and conditions. To accomplish this, we need to implement a benchmarking tool.
Implement the benchmarking tool using Python, and use any libraries or frameworks suitable for the task.
... objectarium, store logic programs and submit logic queries to the law-stone ...
Provide a beginning of specification for a smart contract intended to manage ontological elements.
It shall expose execution messages allowing the ontology to be fed by providing RDF graphs, and supporting updates and removal of triples.
It also needs to provide a means to query the content, with a way to filter the triples in output, the semantic triples having the form:
Subject Predicate Object
Find a suitable name!
Since the ontology is a semantic representation of both a system's state and the system itself, we should consider providing a means to restrict write permissions on the managed data, in order to let only the described domain maintain it.
Regarding querying capabilities, there are multiple elements to take into account:
Finally, in order to ease interoperability with this smart contract, and keeping in mind that we should be able to fetch elements from Prolog programs executed by the Logic module, we may need to consider supporting multiple output formats.
Implement the Select query message handling.
The query execution model of WhereConditions needs a proper design, which should leave room for eventual query optimizations that could be brought in further developments.
Implement the ExecuteMsg::RegisterDigitalResource message according to the current specification of the okp4-dataverse smart contract.
This ticket highlights the need to define the interface contract for the Dataverse smart contract responsible for managing all the resources within the dataverse.
It's also suggested here to rename the contract "Dataverse" instead of "Zone Hub". Given the contract's nature and the positioning of resources (datasets, services) relative to the Zones, it's clear that this smart contract oversees the entire dataverse. Additionally, it's entirely feasible to create as many isolated dataverses as desired.
Note
Severity: Medium
target: v5.0.0 - Commit: cde785fbd2dad71608d53f8524e0ef8c8f8178af
Ref: OKP4 CosmWasm Audit Report v1.0 - 02-05-2024 - BlockApex
The OKP4 ecosystem is currently using version 1.5.3 of the cosmwasm-std library, which has been found to contain arithmetic overflow issues as detailed in advisory CWA-2024-002. This vulnerability affects all contracts that perform arithmetic operations, including Objectarium, Cognitarium, Dataverse, and Law Stone. Arithmetic overflows can alter the expected behavior of smart contracts by causing computations to wrap incorrectly.
This overflow can lead to incorrect data processing, resulting in potential state corruption or mismanagement of contract logic. It directly threatens the reliability and effectiveness of the contract's intended functionalities.
Upgrade the cosmwasm-std library to the latest patched version as recommended in the advisory.
Note
Severity: High
target: v5.0.0 - Commit: cde785fbd2dad71608d53f8524e0ef8c8f8178af
Ref: OKP4 CosmWasm Audit Report v1.0 - 02-05-2024 - BlockApex
In the Objectarium contract, the store_object function allows users to store data objects with an option to pin them. Pinning an object prevents its deletion unless explicitly unpinned by the owner. However, an issue arises when objects are stored without being pinned (pin: false). In such cases, any user, regardless of ownership, can invoke the forget_object function to delete these objects.
This flaw arises from the lack of proper ownership verification and permission checks before object deletion. Consequently, this issue allows unauthorized users to delete objects they do not own, leading to potential Denial of Service (DoS) attacks. If a user consistently stores objects with pin set to false, an attacker can monitor these actions and systematically delete newly stored objects by repeatedly calling the forget_object function. This behavior can repeatedly disrupt legitimate users' attempts to store data, effectively denying them the service of the contract.
The primary impact of this vulnerability is unauthorized data manipulation leading to Denial of Service. By allowing any user to delete objects they do not own, the system's integrity and reliability are compromised. This not only violates access controls but also exposes the system to targeted attacks where critical data can be maliciously wiped out.
To mitigate this vulnerability, implement stringent ownership checks within the forget_object function to ensure that only the object's owner can delete it. This could involve verifying that the caller of forget_object matches the stored owner's information before allowing the object's deletion.
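The recommended guard reduces to a simple comparison before any state change. This is a minimal sketch with hypothetical names (the real check would compare info.sender against the object's stored owner):

```rust
// Sketch: an ownership guard to run at the start of `forget_object`.
// Only the stored owner of an unpinned object may delete it; any other
// sender is rejected before the object is touched.
#[derive(Debug, PartialEq)]
pub enum ForgetError {
    Unauthorized,
}

pub fn check_can_forget(sender: &str, owner: &str) -> Result<(), ForgetError> {
    if sender != owner {
        return Err(ForgetError::Unauthorized);
    }
    Ok(())
}
```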
Implement the InsertData execute message handling.
The implementation should avoid loading the entire RDF input, processing the data as a stream instead. The error cases, mainly regarding limitations, shall be implemented as well.
Regarding the state design, a proposal needs to be made considering both querying needs and storage efficiency.
Following specifications PR #143, it would be nice to add a query on the cw-law-stone smart contract that allows retrieving the list of all dependencies used by the Prolog program. See #143 (comment).
This is not mandatory now, but it could be a first improvement.
As of now, crates are not published to crates.io or any registry, making it difficult for users and developers to access the technical information of the latest versions of our smart contracts (and libraries).
Implement automated crate publication as part of our CI process upon release.
It would be convenient for developers to have a query in the law-stone contract that allows querying the program directly. Currently, the workflow for a frontend engineer would be to call QueryMsg::Program, and then use the object_id and/or the storage_address to query the Objectarium.
This could be reduced by adding a query to the law-stone contract that returns the program directly, instead of only the object_id and the storage_address.
Related code:
When using the docs-generate task in the makefile, we expect the dependency task schema to be executed, but it is not. This makes for bad DX and should be fixed so the task correctly calls the schema task in the contract makefiles.
To reproduce this bug, launch the cargo make docs-generate task. No new schemas are generated.
When removing an object, we can't store it again.
# Store an object
okp4d tx wasm execute --from $ADDR $CONTRACT_ADDR --gas 1000000 "{\"store_object\":{\"data\":\"a2p5dkVGCg==\",\"pin\":false}}"
# Remove the object
okp4d tx wasm execute --from $ADDR $CONTRACT_ADDR --gas 1000000 "{\"forget_object\":{\"id\":\"8b44318b26e8a799537a172009fc74ff79ec916b1d303bd4e18110b0105f1255\"}}"
# Store the object again
okp4d tx wasm execute --from $ADDR $CONTRACT_ADDR --gas 1000000 "{\"store_object\":{\"data\":\"a2p5dkVGCg==\",\"pin\":false}}"
By querying the transaction result we get:
raw_log: 'failed to execute message; message index: 0: Object is already stored: execute
wasm contract failed'
I expect to be able to store an object that has been previously removed.
Note
Severity: Critical
target: v5.0.0 - Commit: cde785fbd2dad71608d53f8524e0ef8c8f8178af
Ref: OKP4 CosmWasm Audit Report v1.0 - 02-05-2024 - BlockApex
The break_stone function in the Law Stone contract fails to emit detailed events for actions undertaken within the function. Notably, actions such as unpinning or forgetting objects lack corresponding information in event emissions, which are crucial for auditing and tracing the state changes within the contract.
The documentation we produce (thanks to fadroma/schema - #311) relies on a specific format of comments. For instance, if types aren't correctly described in the comments of the source code, they'll appear as undefined in the documentation.
To maintain documentation quality, we should integrate a check process that ensures no undefined definitions are present in the committed documentation.
There are several ways of doing this, but it could be an additional control step in the makefile when generating the documentation (which will also be used in the CI).
Implement the InstantiateMsg message according to the current specification of the okp4-dataverse smart contract.
We propose to create a foundational Prolog file that serves as a basis for expressing governance rules for data spaces. This file will provide a standardized vocabulary for expressing governance rules and ensure consistency across the governance of different data spaces.
The file should contain elemental predicates, terminology to express the concepts of authorization, permission, and forbidden, predicates that enable querying for authorization in a standardized way, some formalisms to describe the rules that allow the extraction of textual descriptions.
By using the file, the builders can easily implement governance rules in their data spaces without having to develop these rules from scratch. This can save time and ensure that governance rules are implemented consistently and correctly.
We'd propose to name this file gov-elements.pl, the term element reflecting the purpose of the file: to contain the foundational elements required to build governance rules (this can be discussed, of course).
Note
Severity: Low
target: v5.0.0 - Commit: cde785fbd2dad71608d53f8524e0ef8c8f8178af
Ref: OKP4 CosmWasm Audit Report v1.0 - 02-05-2024 - BlockApex
The instantiation process of the Law Stone does not include validation for the storage_address provided in the InstantiateMsg. This will result in a transaction failure.
Implement proper validation checks in the instantiate function to ensure the storage_address is well-formed and authorized for the intended operational context.
With the initial introduction of the okp4-dataverse smart contract, it's time to provide a short presentation for it in the README.md.
Object identifiers, which are actually hash values, are currently stored as string values in hex format. This is not the most efficient approach, as storing the same value as binary content can significantly reduce the amount of storage required. For instance, the SHA-256 hash algorithm generates a 256-bit hash value that requires 64 bytes in hex format (each character being a hexadecimal digit), while only 32 bytes (each byte being 8 bits) are needed to represent the same hash in binary.
Implement the storage of object ids as binary content in the state. Additionally, implement serialization, deserialization, and validation when necessary.
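The factor-of-two overhead is easy to verify: hex encoding spends one byte per nibble. A small sketch:

```rust
// Sketch: a 32-byte SHA-256 digest doubles to a 64-byte string once
// rendered as hex, which is the storage overhead this issue targets.
pub fn to_hex(bytes: &[u8]) -> String {
    bytes.iter().map(|b| format!("{:02x}", b)).collect()
}
```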
The purpose here is to enhance performance globally by removing unnecessary mapping layers.
The main point comes from the QueryEngine design, which processes the select query and directly returns the solutions as msg::SelectResponse. Converting the resolved variables to the message model induces a costly mapping, as it needs to resolve each namespace key to its actual value.
This is a needed behaviour to handle the msg::SelectQuery, but as the QueryEngine is also used in the processing of other messages, it has brought unnecessary mapping layers. For example, the triple deletion first needs to select the triples to delete, which resolves the namespaces from key to value, and then proceeds with the reversed mapping.
Implement the already defined DeleteData execution message.
When inserting triples, avoid the same blank nodes being reused across insertions.
Since blank nodes allow the creation of edges in graphs at insertion, their meaning is only linked to the set of triples being inserted, and they shall be ensured unique per insertion. Allowing the same blank nodes to be defined across multiple insertions could affect previous ones, and therefore unintentionally alter the data contained.
When inserting blank nodes, they shall be ensured unique, scoped per insert message execution.
In order to provide unique blank nodes at insertion, their identifier shall be formed with the context of this particular execution. I therefore propose to alter their ids before writing, by prefixing them with a hash representing the execution context, which shall carry the block height, the index of the transaction the message comes from, and the insert message itself.
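The proposed prefixing could be sketched as below. The hasher, the prefix format, and the function name are illustrative choices (the contract would likely use a cryptographic hash rather than std's SipHash); the point is only that identical blank-node names from two different executions map to distinct internal ids:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Sketch: scope a blank-node identifier to one insert execution by
// prefixing it with a hash of the execution context (block height,
// transaction index, and the insert message itself).
pub fn scoped_blank_node_id(block_height: u64, tx_index: u32, msg: &[u8], id: &str) -> String {
    let mut hasher = DefaultHasher::new();
    block_height.hash(&mut hasher);
    tx_index.hash(&mut hasher);
    msg.hash(&mut hasher);
    format!("{:016x}:{}", hasher.finish(), id)
}
```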
The objectarium smart contract enables object storage on-chain. However, as the size of objects stored on-chain increases, so does the gas cost required to store them.
One way to address this issue is by adding support for compression. This would allow users to compress their data before storing it on-chain, potentially reducing the amount of gas required to store the object. Users could make a trade-off between the gas cost required for storage (due to the size of the object) and the gas cost required for computation (due to compression).
To gain a deeper understanding of how the smart contract behaves with compression in terms of gas consumption, benchmarking could be a useful approach (cf. #191).
The objective is to incorporate the LZMA compression algorithm into the objectarium.
Rationale: Selecting a compression algorithm to include in the objectarium is a complex decision, considering factors such as compression ratio and compression/decompression performance, particularly within the realm of blockchain. The LZMA family of compression algorithms holds a prominent position, renowned for its effectiveness and widespread adoption. As such, it's a good candidate, IMHO.
Details: Consider utilizing the following project for the implementation: https://github.com/gendx/lzma-rs
Note
Severity: Low
target: v5.0.0 - Commit: cde785fbd2dad71608d53f8524e0ef8c8f8178af
Ref: OKP4 CosmWasm Audit Report v1.0 - 02-05-2024 - BlockApex
The Objectarium contract's instantiate function lacks necessary validations on the bucket name parameter during the bucket creation process. This oversight allows for the creation of buckets with arbitrary lengths and potentially malicious content in their names. The absence of strict checks and sanitization on the bucket names could facilitate phishing attacks or cross-site scripting (XSS) vulnerabilities when these names are displayed in a frontend application.
If exploited, this vulnerability could lead to phishing attacks and execution of unauthorized scripts in the context of a user's session (XSS), compromising the security of the frontend application and the integrity of user interactions.
Recommendation
Implement a validation check to ensure all input conforms to the URL-safe alphabet as specified in RFC 4648, using the defined character set.
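A minimal sketch of such a check follows; the length bound is an assumption, not an Objectarium constant, and the character set is the URL-safe base64 alphabet of RFC 4648 (A-Z, a-z, 0-9, '-', '_') without padding:

```rust
// Sketch: validate a bucket name against the RFC 4648 URL-safe alphabet,
// with a hypothetical upper length bound.
pub fn validate_bucket_name(name: &str) -> bool {
    const MAX_LEN: usize = 64; // assumed bound, not from the contract
    !name.is_empty()
        && name.len() <= MAX_LEN
        && name.chars().all(|c| c.is_ascii_alphanumeric() || c == '-' || c == '_')
}
```

Rejecting anything outside this alphabet removes both the arbitrary-length and the markup-injection concerns in one check.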
Implement the ExecuteMsg::FoundZone message according to the current specification of the okp4-dataverse smart contract.
Implement the ExecuteMsg::RevokeClaims message according to the current specification of the okp4-dataverse smart contract.
When using the PlanBuilder to establish a query plan from a query based on triple patterns, the triple patterns are considered as a Basic Graph Pattern. The current implementation builds it in the order the triple patterns are provided, which is not optimal.
Before building the Basic Graph Pattern relating to a set of triple patterns, an optimization step could be added to order the patterns based on a computed index expressing the complexity needed to resolve them. The built BGP would then be optimal, and users won't need to perform their own optimization before querying anymore; this will also help decrease resource usage on nodes.
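One simple complexity index is the number of bound (non-variable) terms in each pattern: patterns with more bound terms are more selective and cheaper to resolve first. This sketch uses hypothetical types (the real PlanBuilder works on VarOrNode/VarOrNodeOrLiteral terms) and a crude heuristic where a real planner would use statistics:

```rust
// Sketch: order the triple patterns of a Basic Graph Pattern by a crude
// selectivity index: the count of bound (non-variable) terms.
#[derive(Debug, Clone, PartialEq)]
pub struct TriplePattern {
    pub subject_bound: bool,
    pub predicate_bound: bool,
    pub object_bound: bool,
}

fn bound_terms(p: &TriplePattern) -> u8 {
    p.subject_bound as u8 + p.predicate_bound as u8 + p.object_bound as u8
}

pub fn order_patterns(mut patterns: Vec<TriplePattern>) -> Vec<TriplePattern> {
    // Most selective first; the stable sort preserves the user-provided
    // order among equally selective patterns.
    patterns.sort_by_key(|p| std::cmp::Reverse(bound_terms(p)));
    patterns
}
```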
Implement the ExecuteMsg::SubmitClaims message according to the current specification of the okp4-dataverse smart contract.
NOTE: I first started by creating a branch with its associated PR containing a smart contract intended for the distribution of the tax according to the planned token model defined in the whitepaper, but I finally decided to create an issue to initiate a discussion on its architecture.
Based on the token model specified in the whitepaper and the discussion on this issue, we will need a mechanism to distribute the tax collected from the workflow execution to a list of recipients, including the possibility to burn a part of this tax.
To dispatch this responsibility out of the workflow contract execution, I propose to create a smart contract that will take care of the tax distribution: a smart contract that can only be instantiated and updated by governance vote (for our needs), and that allows configuring and updating the list of recipients and the burn amount.
To resolve rounding and accuracy issues, I propose to set a default recipient and a list of recipients with their corresponding token distribution ratios; after distribution of the total tokens to all destinations, the remaining tokens will be transferred to the default recipient. This also allows adding or removing a recipient without sending the whole list of recipients and ratios, unless necessary.
For example, based on this figure, when a workflow is executed, a 3% tax on the total cost is transferred to the tax distribution contract. The contract is configured with one destination, burn, set to 66.6%, and the community pool as the default recipient. Once 66.6% of the tokens have been burnt, the remaining tokens are sent to the default recipient (set to the community pool). We can also configure a recipient with a wallet address.
use cosmwasm_schema::{cw_serde, QueryResponses};
use cosmwasm_std::Decimal;
/// InstantiateMsg is the default config used for the tax distribution contract.
///
/// This contract is intended to be stored with the `--instantiate-nobody` arg so that only the
/// governance is allowed to configure and instantiate this contract and be set as the contract
/// admin.
#[cw_serde]
pub struct InstantiateMsg {
/// Configure the destination of the remaining tokens after distribution to other recipients
/// (defined in `destinations`).
default_recipient: Recipient,
/// Define the distribution rate of tokens to the intended recipients.
/// The total rate sum should not exceed 1.
destinations: Vec<Destination>,
}
#[cw_serde]
pub enum Recipient {
/// Send token to the community pool.
CommunityPool,
/// Burn token.
Burn,
/// Send token to a specific wallet address.
Address(String),
}
#[cw_serde]
pub struct Destination {
/// Recipient of tokens
recipient: Recipient,
/// Set the token rate to receive for this recipient.
/// Value should be between zero exclusive and one exclusive.
ratio: Decimal,
}
/// Execute messages
#[cw_serde]
pub enum ExecuteMsg {
/// # Distribute
/// Distributes the tokens received from the transaction to the recipients following
/// the configured apportionment.
Distribute,
/// # UpdateDefaultRecipient
/// Change the default recipient used to distribute the remaining tokens.
///
/// Only contract admin can update the default recipient.
UpdateDefaultRecipient { recipient: Recipient },
/// # UpsertDestinations
/// Add new recipients for receiving tokens with their corresponding ratios.
/// If one of the recipients is already configured, its ratio is updated.
/// Don't forget that the total ratio already configured shouldn't exceed 1, but can be less
/// than 1, since the remaining tokens will be transferred to the default recipient.
///
/// Only contract admin can add or update destinations.
UpsertDestinations { destinations: Vec<Destination> },
/// # RemoveRecipients
/// Remove recipients from the tax distribution.
///
/// Only contract admin can remove recipients.
RemoveRecipients { recipients: Vec<Recipient> },
}
/// Query messages
#[cw_serde]
#[derive(QueryResponses)]
pub enum QueryMsg {
/// # Destinations
/// Returns the current configuration for all tax distribution destinations with their
/// corresponding ratio.
#[returns(DestinationsResponse)]
Destinations,
}
/// # DestinationsResponse
#[cw_serde]
pub struct DestinationsResponse {
/// The currently configured default recipient for the remaining tokens after distribution.
pub default_recipient: Recipient,
/// All recipients with their corresponding token ratio.
pub destinations: Vec<Destination>,
}
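The invariant stated in the comments above (the destination ratios must not sum to more than 1) could be enforced at instantiation and on each UpsertDestinations. This sketch models ratios in basis points (1/10_000) instead of cosmwasm's Decimal so it stays dependency-free; the function name is illustrative:

```rust
// Sketch: reject a destination configuration whose ratios sum past 100%.
// Ratios are expressed in basis points (10_000 bps = 1.0) for this sketch.
pub fn validate_ratios(ratios_bps: &[u32]) -> Result<(), String> {
    let total: u64 = ratios_bps.iter().map(|&r| u64::from(r)).sum();
    if total > 10_000 {
        return Err(format!("total ratio {} bps exceeds 100%", total));
    }
    Ok(())
}
```

A total strictly below 10_000 bps is fine: the remainder goes to the default recipient, which is exactly how the proposal absorbs rounding.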
This is only a preliminary contract definition; maybe it doesn't cover all our needs. I'm waiting for your feedback to see if some needed features are missing.
Other solutions are possible for the tax distribution:
Implement the Describe query message handling.
Note
Severity: Medium
target: v5.0.0 - Commit: cde785fbd2dad71608d53f8524e0ef8c8f8178af
Ref: OKP4 CosmWasm Audit Report v1.0 - 02-05-2024 - BlockApex
The Law Stone contract utilizes the store_object function of the Objectarium to store .pl files containing rule sets. A key function within this contract is break_stone, designed to modify or remove these rules by unpinning or forgetting them. The function implements an admin check to restrict access: if an admin address is configured, it ensures that only the admin can execute the function by comparing info.sender with the admin address.
Specifically, the function logic:
{
Some(admin_addr) if admin_addr != info.sender => Err(ContractError::Unauthorized),
_ => Ok(()),
};
indicates that if an admin is set and the caller is not the admin, the function will deny access. However, if no admin is defined, particularly in deployments initiated with the --no-admin flag of the CosmWasm CLI, the function proceeds without any access restrictions. This lack of checks in the absence of an admin means that anyone can invoke break_stone and potentially disrupt the rule enforcement by the Law Stone, impacting the governance or operational constraints enforced by these rules.
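A fail-closed variant of the quoted guard would deny the call when no admin is configured, rather than leaving it open to anyone. This is a sketch, not the contract's code; the function and error names are illustrative:

```rust
// Sketch: a fail-closed admin guard. With `Some(admin)` only the admin
// passes; with `None` (e.g. a `--no-admin` deployment) the call is
// denied instead of being open to any sender.
#[derive(Debug, PartialEq)]
pub enum ContractError {
    Unauthorized,
}

pub fn check_admin(admin: Option<&str>, sender: &str) -> Result<(), ContractError> {
    match admin {
        Some(admin_addr) if admin_addr == sender => Ok(()),
        _ => Err(ContractError::Unauthorized),
    }
}
```

Whether fail-closed is the right policy (versus, say, treating an admin-less deployment as immutable by design) is exactly the governance question this finding raises.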
There are some inconsistencies in the way we handle optional values and defaults in our smart contracts, especially with regard to messages and state.
As we are currently experiencing variations in the principles applied, I propose the following rule to address the matter. This is something that is open to discussion of course.
Messages
Regarding messages, in the definition of instantiate and execute messages, optional values can be considered, with explicit default values in the comments. In the implementation, methods should be included to retrieve the value or its default - for instance, max_page_size_or_default for the max_page_size property.
For message responses, types with default values should not be optional. So when a type has a default value, the response should always include the value, or the default value if none was provided. This avoids the need for clients to explicitly handle optional values, and helps them handle default values.
State
On the state side, I believe that default values should be used whenever possible, which eliminates any use of the optional type.
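The message-side convention above can be sketched as follows; the struct, field, and default value are hypothetical, chosen only to illustrate the `*_or_default` accessor pattern:

```rust
// Sketch of the proposed convention: the optional message field keeps
// its `Option`, and a `*_or_default` accessor hides the default from
// callers.
pub struct SelectQueryParams {
    pub max_page_size: Option<u32>,
}

impl SelectQueryParams {
    const DEFAULT_MAX_PAGE_SIZE: u32 = 30; // hypothetical default

    pub fn max_page_size_or_default(&self) -> u32 {
        self.max_page_size.unwrap_or(Self::DEFAULT_MAX_PAGE_SIZE)
    }
}
```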
Provide the first specification elements about a dataspace smart contract that'll be in charge of performing all the possible actions in a dataspace.
The smart contract specification shall meet the following responsibilities:
Finally, we shall find a name for it.
The README.md
file could be enhanced in the following ways:
Regarding namespace management, and especially keeping information in cache when performing triple IRI mappings between their string representation and their u128 key identifier in state, some optimization improvements can be made, and a canonical way can also be set up in order to be reused and ease upcoming evolution.
The current status of namespace management at this point is:
The TripleStorer contains a dedicated namespace cache implementation allowing it to resolve an existing namespace from its string representation to its u128 state key identifier, or to create a new one. On the other hand, the QueryEngine's SolutionsIterator provides the opposite implementation, resolving a namespace string representation from its u128 state identifier. And aside from those, the PlanBuilder could benefit from a namespace caching mechanism, as it systematically queries the state.
Those usages are different, but I think a single implementation could be made to satisfy each use case, taking advantage of composability and avoiding duplication - a single implementation that could be decoupled to enhance testing capabilities.