
contracts's People

Contributors

ad2ien, amimart, bdeneux, bot-anik, ccamel, dependabot[bot], egasimus, eltociear, erikssonjoakim, jeremylgn, mdechampg, omahs, stakeandrelax-validator, studentmtk, warstrolo


contracts's Issues

๐Ÿ›ก๏ธ Not All Storage Elements Are Exposed Through Queries

Note

Severity: Low
target: v5.0.0 - Commit: cde785fbd2dad71608d53f8524e0ef8c8f8178af
Ref: OKP4 CosmWasm Audit Report v1.0 - 02-05-2024 - BlockApex

Description

The Cognitarium and Objectarium contracts exhibit limitations in their query functionalities. This forces users and other contracts to perform a raw query to read the stored value, tying their code to the current implementation of the cognitarium contract, which is error-prone.

  • Cognitarium Contract Limitations: This contract lacks query functions for critical state variables such as NAMESPACE_KEY_INCREMENT and BLANK_NODE_IDENTIFIER_COUNTER. These variables are fundamental for tracking the increments of namespace keys and the identifiers for blank nodes, pivotal for organizing and retrieving semantic data. The absence of query functions for these variables restricts the ability to monitor and manage internal state changes effectively.
  • Objectarium Contract Limitations: Similarly, the Objectarium contract does not provide adequate query functions for accessing important metadata about buckets, such as the owner details, statistics (size, compressed_size, object_count), and other configuration parameters. This limitation hinders users or applications from retrieving essential information that could assist in assessing the usage, management, and ownership of buckets.

Recommendation

We recommend exposing a smart query that returns the above-mentioned elements.
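A minimal sketch of what such a smart query could return; all names here (the response type, its fields, the stand-in handler) are illustrative assumptions, not the actual contract API:

```rust
// Hypothetical response type exposing the cognitarium's internal
// counters through a smart query instead of a raw storage query.
// All names here are illustrative assumptions.
#[derive(Debug, PartialEq)]
struct InternalStateResponse {
    namespace_key_increment: u128,
    blank_node_identifier_counter: u128,
}

// Stand-in for the contract's query handler: it returns the stored
// counters so callers no longer depend on the storage layout.
fn query_internal_state(ns_key: u128, bn_counter: u128) -> InternalStateResponse {
    InternalStateResponse {
        namespace_key_increment: ns_key,
        blank_node_identifier_counter: bn_counter,
    }
}

fn main() {
    let resp = query_internal_state(42, 7);
    assert_eq!(resp.namespace_key_increment, 42);
    assert_eq!(resp.blank_node_identifier_counter, 7);
    println!("{:?}", resp);
}
```

The same pattern would apply to the Objectarium bucket metadata (owner, statistics, configuration).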

🤯 Cognitarium: Remove blank node filtering from messages

Remove the possibility to target blank nodes through the cognitarium messages interface when using triple pattern VarOrNode & VarOrNodeOrLiteral.

Description

Blank nodes shall have an internal identifier that should not be used externally: they should only represent a link between triples, without the possibility of targeting them directly from external interfaces. Moreover, I think their internal value should not be exposed; the IdentifierIssuer could be used for this purpose.

Proposal

I propose removing from VarOrNode and VarOrNodeOrLiteral the possibility of referencing a blank node by its name, and renaming blank nodes in a deterministic manner when exposing query results.

I think this matter should be addressed in conjunction with #434.

💿 Objectarium: add support for different hash algorithms

Issue

Currently, the objectarium smart contract only allows the SHA-256 algorithm for computing object ids. It would be beneficial if the contract could support a range of hash algorithms, selectable during the contract's instantiation.

Todo

Add support for other hash algorithms, specifically the SHA-2 family (SHA-224, SHA-384, and SHA-512) as well as the MD5 algorithm.
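As a sketch, the selectable algorithm could be a configuration enum fixed at instantiation; the enum and method names below are assumptions, not the actual contract API:

```rust
// Hypothetical configuration enum: the hash algorithm is chosen once,
// at bucket instantiation (names are assumptions for illustration).
#[derive(Clone, Copy, Debug, PartialEq)]
enum HashAlgorithm {
    Md5,
    Sha224,
    Sha256,
    Sha384,
    Sha512,
}

impl HashAlgorithm {
    // Digest size in bytes, which determines the length of object ids.
    fn digest_len(self) -> usize {
        match self {
            HashAlgorithm::Md5 => 16,
            HashAlgorithm::Sha224 => 28,
            HashAlgorithm::Sha256 => 32,
            HashAlgorithm::Sha384 => 48,
            HashAlgorithm::Sha512 => 64,
        }
    }
}

fn main() {
    // SHA-256 stays the current behaviour; the others become selectable.
    assert_eq!(HashAlgorithm::Sha256.digest_len(), 32);
    assert_eq!(HashAlgorithm::Sha512.digest_len(), 64);
}
```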

๐ŸŒ Implement RegisterService

Purpose

Implement the ExecuteMsg::RegisterService message according to the current specification of the okp4-dataverse smart contract.

  • Consider the business rules and invariants required for data integrity, security and adherence to specification
  • Implement
  • Test

๐Ÿ›ก๏ธ Denial of Service (DOS) via Front-Running Leads to Law Stone Initialization Failure

Note

Severity: Critical
target: v5.0.0 - Commit: cde785fbd2dad71608d53f8524e0ef8c8f8178af
Ref: OKP4 CosmWasm Audit Report v1.0 - 02-05-2024 - BlockApex

Description

The instantiation process of the Law Stone contract is susceptible to a front-running vulnerability when interacting with the Objectarium contract for storing .pl files. This vulnerability stems from the public visibility of transaction data in the mempool, which allows attackers to intercept and replicate the initialization parameters. The core issue arises during the instantiate function's call to store_object in the Objectarium. If an attacker captures and submits the same data/program to the Objectarium ahead of the legitimate transaction, the Law Stone's initialization will fail, leading to repeated Denial of Service (DoS).

Impact

This vulnerability exposes the Law Stone contract to a persistent threat of initialization failure, which can be systematically exploited to prevent its deployment.

Recommendation

Ensure that the Objectarium is aware of the Law Stone's dependencies and enforces checks that the calling contract matches expected parameters.

Ref

  • contracts/okp4-law-stone/src/contract.rs
  • contracts/okp4-objectarium/src/contract.rs

๐Ÿ›ก๏ธ Inaccurate Compressed Size Tracking on Object Deletion in Objectarium

Note

Severity: Low
target: v5.0.0 - Commit: cde785fbd2dad71608d53f8524e0ef8c8f8178af
Ref: OKP4 CosmWasm Audit Report v1.0 - 02-05-2024 - BlockApex

Description

In the Objectarium contract, there is a discrepancy in how compressed sizes are handled during the lifecycle of an object. While the store_object function correctly increments the compressed_size statistic upon storing an object, the corresponding decrement operation is missing in the forget_object function when an object is removed. This oversight leads to inaccurate tracking of the compressed data size within the system.

Recommendation

To resolve this issue, update the forget_object function to include a decrement operation for the compressed_size stat similar to how it handles other metrics.
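A minimal sketch of the fix, assuming a BucketStat-like struct (the field and function names here are assumptions):

```rust
// Simplified bucket statistics (field names are assumptions).
#[derive(Debug, PartialEq)]
struct BucketStat {
    size: u128,
    compressed_size: u128,
    object_count: u128,
}

// Sketch of forget_object's stat update: the compressed_size decrement
// is the missing piece, handled the same way as the other metrics.
fn on_forget_object(stat: &mut BucketStat, size: u128, compressed_size: u128) {
    stat.size = stat.size.saturating_sub(size);
    stat.compressed_size = stat.compressed_size.saturating_sub(compressed_size);
    stat.object_count = stat.object_count.saturating_sub(1);
}

fn main() {
    let mut stat = BucketStat { size: 100, compressed_size: 60, object_count: 2 };
    on_forget_object(&mut stat, 40, 25);
    assert_eq!(stat, BucketStat { size: 60, compressed_size: 35, object_count: 1 });
}
```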

๐Ÿฆ First specification of the "pay-secure" smart contract

Idea

For the OKP4 protocol, we require a smart contract that lets users hold tokens as a pre-authorization for a service provider. This is analogous to how pre-authorizations work in traditional finance with credit cards. While the working name for this smart contract is pay-secure, it is subject to revision.

Purpose

Essentially, the smart contract allows a Client to temporarily lock tokens as a pre-authorization tool for a Provider. This reserves a specific sum without actually deducting it, thereby verifying that the Client has sufficient funds. The smart contract is designed to operate as a Smart Contract Account (SCA), with the consumer serving as the account Holder.

Objectarium decompression tests

To date, there are no tests covering the decompression of data in the objectarium smart contract. We should add tests to cover this feature.

💚 Implement CI workflow for Smart Contracts on-chain tests

Background

Recently, we've faced a scenario where just changing the version of a component (Rust, in this case) had the potential to disrupt our smart contract execution - cf. #292 (comment). Such cases can lead to disastrous problems if not detected early.

Proposal

To ascertain the proper functioning of the Smart Contracts within the OKP4 blockchain, I propose the addition of a new CI workflow designed to detect and resolve such disruption early in the development process. This workflow could incorporate a variety of tests, including on-chain deployment tests, as well as certain smart contract calls to validate the correct interaction of the contracts amongst themselves and with the entire blockchain.

🔨 Bump `okp4d` version in local chain setup

Description

The current cargo make targets managing the local okp4d chain start run the v4.0.0 version; we should upgrade to v5.0.0.

Unfortunately, with this version the transaction broadcast mode no longer supports block, so a workaround will be needed.

🤯 Implement CONSTRUCT query over triple store

Purpose

Implement the CONSTRUCT query over the triple store in the cognitarium smart contract, allowing a set of triples to be retrieved serialized in a specific format (Turtle, RDF/XML, ...).

This way of querying is particularly relevant for the purpose of Governance evaluation because it allows the retrieval of triples that constitute an RDF graph. Indeed, this graph seamlessly integrates with Prolog due to the inherent equivalence between an RDF triple and a Prolog predicate. This approach enables us to harness the semantics of ontological knowledge while maintaining its essence during interpretation within a Prolog program.


Todo

  • Specify the contract messages
  • Implement the query

๐Ÿ›ก๏ธ Format Restriction in Data Submission Leads to Underutilization of RDF Options

Note

Severity: Low
target: v5.0.0 - Commit: cde785fbd2dad71608d53f8524e0ef8c8f8178af
Ref: OKP4 CosmWasm Audit Report v1.0 - 02-05-2024 - BlockApex

Description

The Cognitarium smart contract is designed to support multiple RDF formats for data submission, including NQuads, RDF/XML, Turtle, and NTriples. This versatility is intended to enhance the contract's adaptability and user experience by allowing for flexible data representation. However, the Dataverse contract, which controls the data input to the Cognitarium, is currently hardcoded to use only the NQuads format. This restriction arises from a limitation within the submit_claims function, which lacks the capability to accept a format parameter, contrary to what is documented. As a result, despite the Cognitarium's ability to handle various formats, this functionality is underutilized due to the Dataverse's fixed format implementation.

This issue not only limits the flexibility of data input but also leads to a discrepancy between the system's documented capabilities and its actual functionality. The mismatch can cause confusion among users and developers, who may expect broader format support based on the official contract documentation.

Recommendation

  • Dynamic Format Handling: Revise the submit_claims function in the Dataverse contract to include a format parameter, allowing users to specify the desired RDF format for their data submissions. This adjustment will enable the contract to dynamically select the appropriate data handling method based on user input.
  • Documentation Alignment: Update the contract documentation to accurately reflect the operational capabilities and limitations. If the implementation of dynamic format selection is deferred, the documentation should clearly state that currently, only the NQuads format is supported.

📈 Implement benchmarking tool

Idea

We need to conduct a benchmark campaign for the OKP4 chain and its smart contracts to evaluate their performance. The outcome of this campaign should be metrics (e.g. gas consumption, storage size, ...) and graphs that will help us understand the behavior of the blockchain and the smart contracts under different loads and conditions. To accomplish this, we need to implement a benchmarking tool.

Expectation

Implement the benchmarking tool using Python, and use any libraries or frameworks suitable for the task.

  • the tool should be able to generate different types of loads and scenarios to test the OKP4 blockchain and its smart contracts under varying conditions: store and query objects of different size in the objectarium, store logic programs and submit logic queries to the law-stone...
  • the tool should be able to collect various performance metrics, such as gas usage, storage size, etc...
  • the tool should be flexible enough to allow customization of the benchmarking parameters and scenarios.
  • the tool should have a user-friendly CLI with a clear documentation.
  • the tool should be able to report the performance metrics in different formats that present the benchmarking results in an easy-to-understand way, first of which the well-known CSV.
  • the tool could also generate visual graphs to present the benchmarking results.

🤯 Specify Ontology/RDF storage contract

Description

Provide an initial specification for a smart contract intended to manage ontological elements.

It shall expose execution messages allowing the ontology to be fed by providing RDF graphs, supporting updates and removal of triples.

It also needs to provide a means to query the content, with a way to filter the triples in the output; the semantic triples have the form:

Subject Predicate Object

Considerations

Find a suitable name!

The ontology being a semantic representation of a system's state and of the system itself, we should consider providing a means to restrict write permissions on the managed data, in order to let only the described domain maintain it.

Regarding querying capabilities, there are multiple elements to take into account:

  • Computation resources: Navigating in a Graph made of triples may cause heavy processing costs, so we should provide means to keep control over it;
  • Resolution depth: When querying triples containing edges we need to consider the question of edge resolution and depth resolution.

Finally, in order to ease interoperability with this smart contract, and keeping in mind that we should be able to fetch elements from Prolog programs executed by the Logic module, we may need to consider supporting multiple output formats.

🤯 Cognitarium: Select query implementation

Purpose

Implement the Select query message handling.

Considerations

The query execution model of WhereConditions needs a proper design, one that leaves room for query optimizations that could be brought in further developments.

๐ŸŒ Implement RegisterDigitalResource

Purpose

Implement the ExecuteMsg::RegisterDigitalResource message according to the current specification of the okp4-dataverse smart contract.

  • Consider the business rules and invariants required for data integrity, security and adherence to specification
  • Implement
  • Test

๐ŸŒ Specify Dataverse smart contract interface

Purpose

This ticket highlights the need to define the interface contract for the Dataverse smart contract responsible for managing all the resources within the dataverse.

It's also suggested here to rename the contract "Dataverse" instead of "Zone Hub". Given the contract's nature and the positioning of resources (datasets, services) relative to the Zones, it's clear that this smart contract oversees the entire dataverse. Additionally, it's entirely feasible to create as many isolated dataverses as desired.

Todo

  • Add a new blueprint smart contract
  • Specify the interface of the smart contract for the management of all the resources of the Dataverse, including: Dataset, Service, Zone and metadata.

๐Ÿ›ก๏ธ Arithmetic Overflow in CosmWasm-Std Affects OKP4 Contracts

Note

Severity: Medium
target: v5.0.0 - Commit: cde785fbd2dad71608d53f8524e0ef8c8f8178af
Ref: OKP4 CosmWasm Audit Report v1.0 - 02-05-2024 - BlockApex

Description

The OKP4 ecosystem is currently using version 1.5.3 of the cosmwasm-std library, which has been found to contain arithmetic overflow issues as detailed in advisory CWA-2024-002. This vulnerability affects all contracts that perform arithmetic operations, including Objectarium, Cognitarium, Dataverse, and Law Stone. Arithmetic overflows can alter the expected behavior of smart contracts by causing computations to wrap incorrectly.

Impact

This overflow can lead to incorrect data processing, resulting in potential state corruption or mismanagement of contract logic. It directly threatens the reliability and effectiveness of the contract's intended functionalities.

Recommendation

Upgrade the cosmwasm-std library to the latest patched version as recommended in the advisory.
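Beyond the library upgrade, contract code can also defend itself by preferring checked arithmetic, which turns a silent wrap into an explicit error. A minimal illustration (the error type is a simplified stand-in):

```rust
// Checked addition: overflow becomes a recoverable error instead of a
// wrapped value (the String error is a simplified stand-in for a real
// contract error type).
fn add_sizes(a: u128, b: u128) -> Result<u128, String> {
    a.checked_add(b).ok_or_else(|| "arithmetic overflow".to_string())
}

fn main() {
    assert_eq!(add_sizes(40, 2), Ok(42));
    // This would silently wrap with wrapping_add; here it is an error.
    assert!(add_sizes(u128::MAX, 1).is_err());
}
```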

๐Ÿ›ก๏ธ Unauthorized Object Deletion Triggers Denial of Service

Note

Severity: High
target: v5.0.0 - Commit: cde785fbd2dad71608d53f8524e0ef8c8f8178af
Ref: OKP4 CosmWasm Audit Report v1.0 - 02-05-2024 - BlockApex

Description

In the Objectarium contract, the store_object function allows users to store data objects with an option to pin them. Pinning an object prevents its deletion unless explicitly unpinned by the owner. However, an issue arises when objects are stored without being pinned (pin: false). In such cases, any user, regardless of ownership, can invoke the forget_object function to delete these objects.

This flaw arises from the lack of proper ownership verification and permission checks before object deletion. Consequently, this issue allows unauthorized users to delete objects they do not own, leading to potential Denial of Service (DoS) attacks. If a user consistently stores objects with pin set to false, an attacker can monitor these actions and systematically delete newly stored objects by repeatedly calling the forget_object function. This behavior can repeatedly disrupt legitimate users' attempts to store data, effectively denying them the service of the contract.

Impact

The primary impact of this vulnerability is unauthorized data manipulation leading to Denial of Service. By allowing any user to delete objects they do not own, the system's integrity and reliability are compromised. This not only violates access controls but also exposes the system to targeted attacks where critical data can be maliciously wiped out.

Recommendation

To mitigate this vulnerability, implement stringent ownership checks within the forget_object function to ensure that only the object's owner can delete it. This could involve verifying that the caller of forget_object matches the stored owner's information before allowing the object's deletion.
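A sketch of that check with deliberately simplified types (the real function would operate on contract state and MessageInfo):

```rust
// Simplified ownership gate for forget_object: only the stored owner
// may delete an unpinned object. Types are simplified assumptions.
fn can_forget(sender: &str, owner: &str, pin_count: u32) -> Result<(), String> {
    if sender != owner {
        return Err("unauthorized: sender is not the object owner".to_string());
    }
    if pin_count > 0 {
        return Err("object is pinned and cannot be forgotten".to_string());
    }
    Ok(())
}

fn main() {
    assert!(can_forget("alice", "alice", 0).is_ok());
    // The attack path this closes: a non-owner deleting an unpinned object.
    assert!(can_forget("mallory", "alice", 0).is_err());
    assert!(can_forget("alice", "alice", 1).is_err());
}
```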

🤯 Cognitarium: Insert implementation

Purpose

Implement the InsertData execute message handling.

The implementation should avoid loading the entire RDF input and instead process the data as a stream. The error cases, mainly regarding limitations, shall be implemented as well.

Regarding the state design, a proposal needs to be made considering both querying needs and storage efficiency.

💚 Implement Crate Publication in CI Upon Release

Issue

As of now, crates are not published to crates.io or any registry, making it difficult for users and developers to access the technical information of the latest versions of our smart contracts (and libraries).

Todo

Implement automated crate publication as part of our CI process upon release.

Feature request: Add convenience query to the law-stone contract to query the program definition directly

It would be convenient for developers to have a query in the law-stone contract that allows the program to be queried directly. Currently, a frontend engineer would have to call QueryMsg::Program, and then use the object_id and/or the storage_address to query the Objectarium.

This could be reduced to a single call by adding a query to the law-stone contract that returns the program directly, instead of only the object_id and the storage_address.

Related code:

https://github.com/okp4/contracts/blob/8a145c3f3a0f452afb0a3191e2dd51ccd0e622e2/contracts/okp4-law-stone/src/msg.rs#L31C1-L52

🪲 task docs-generate does not launch schema dependency task

When using the docs-generate task in the makefile, we expect the dependency task schema to be executed, but it is not. This makes for bad DX and should be fixed so that the task correctly calls the schema task in the contract makefiles.

To reproduce this bug, launch the cargo make docs-generate task. No new schemas are generated.

💿 Objectarium: unable to store a removed object

Summary

When removing an object, we can't store it again.

Steps to reproduce

# Store an object
okp4d tx wasm execute --from $ADDR $CONTRACT_ADDR --gas 1000000 "{\"store_object\":{\"data\":\"a2p5dkVGCg==\",\"pin\":false}}"

# Remove the object
okp4d tx wasm execute --from $ADDR $CONTRACT_ADDR --gas 1000000 "{\"forget_object\":{\"id\":\"8b44318b26e8a799537a172009fc74ff79ec916b1d303bd4e18110b0105f1255\"}}"

# Store the object again
okp4d tx wasm execute --from $ADDR $CONTRACT_ADDR --gas 1000000 "{\"store_object\":{\"data\":\"a2p5dkVGCg==\",\"pin\":false}}"

By querying the transaction result we get:

raw_log: 'failed to execute message; message index: 0: Object is already stored: execute
  wasm contract failed'

Expected behavior

I expect to be able to store an object that has been previously removed.

๐Ÿ›ก๏ธ Insufficient information in Events

Note

Severity: Critical
target: v5.0.0 - Commit: cde785fbd2dad71608d53f8524e0ef8c8f8178af
Ref: OKP4 CosmWasm Audit Report v1.0 - 02-05-2024 - BlockApex

Description

  • Law Stone Contract: The break_stone function in the Law Stone contract fails to emit detailed events for actions undertaken within the function. Notably, actions such as unpinning or forgetting objects lack corresponding information in event emissions, which are crucial for auditing and tracing the state changes within the contract.
  • Objectarium Contract: The functions within the Objectarium contract consistently emit events that only include the "action" and "id" parameters. This results in the omission of critical information such as pin counts, the pin status of objects, and the remaining pins after an object is forgotten. This lack of detail in event logs hampers effective monitoring and debugging of the
    system.

Recommendation

  • For the Law Stone Contract: Enhance the break_stone function to include detailed event emissions for all significant actions, particularly for unpinning and forgetting operations. Each event should detail the affected objects and the resultant state changes.
  • For the Objectarium Contract: Modify event emissions to include additional data such as pin counts, the current pin status of each object, and updates following any "forget" operations. This improvement will enable better insight into the contract's operation and facilitate data tracking and integrity checks.

๐Ÿ“ Add check to avoid 'undefined' in generated documentation

Purpose

The documentation we produce (thanks to fadroma/schema - #311) relies on a specific format of comments. For instance, if types aren't correctly described in the comments of the source code, they'll appear as undefined in the documentation.


Todo

To maintain documentation quality, we should integrate a check process that ensures no undefined definitions are present in the committed documentation.

There are several ways of doing this, but it could be an additional control step in the makefile when generating the documentation (which will also be used in the CI).

๐ŸŒ Implement InstantiateMsg

Purpose

Implement the InstantiateMsg message according to the current specification of the okp4-dataverse smart contract.

  • Consider the business rules and invariants required for data integrity, security and adherence to specification
  • Implement
  • Test

๐Ÿ“ Create the foundational Prolog file for expressing governance rules

Purpose

We propose to create a foundational Prolog file that serves as a basis for expressing governance rules for data spaces. This file will provide a standardized vocabulary for expressing governance rules and ensure consistency across different data space governances.

The file should contain elemental predicates, terminology to express the concepts of authorization, permission, and prohibition, predicates that enable querying for authorization in a standardized way, and some formalisms to describe the rules that allow the extraction of textual descriptions.

By using the file, the builders can easily implement governance rules in their data spaces without having to develop these rules from scratch. This can save time and ensure that governance rules are implemented consistently and correctly.

We'd propose to name this file gov-elements.pl, the term element reflecting the purpose of the file to contain the foundational elements required to build governance rules (this can be discussed, of course).

๐Ÿ›ก๏ธ Lack of Storage Address Validation in Law-Stone Instantiation

Note

Severity: Low
target: v5.0.0 - Commit: cde785fbd2dad71608d53f8524e0ef8c8f8178af
Ref: OKP4 CosmWasm Audit Report v1.0 - 02-05-2024 - BlockApex

Description

The instantiation process of the Law Stone does not include validation for the storage_address provided in the InstantiateMsg. An invalid address will result in a transaction failure.

Recommendation

Implement proper validation checks in the instantiate function to ensure the storage_address is well-formed and authorized for the intended operational context.

โ™ป๏ธ ๐Ÿ’ฟ Objectarium: consider binary storage of hash identifiers

Issue

Object identifiers, which are actually hash values, are currently stored as string values in hex format. This is not the most efficient approach, as storing the binary content of the same value can significantly reduce the amount of storage required. For instance, the SHA-256 hash algorithm, which generates a 256-bit hash value, requires 64 bytes in hex format (each character being a hexadecimal digit), while only 32 bytes (each byte being 8 bits) are needed to represent the same hash.
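The size difference is easy to check; a quick illustration with a SHA-256-sized digest:

```rust
// Hex-encode a byte slice: every byte becomes two ASCII characters,
// doubling the storage footprint compared to raw binary.
fn hex_encode(bytes: &[u8]) -> String {
    bytes.iter().map(|b| format!("{:02x}", b)).collect()
}

fn main() {
    let digest = [0xabu8; 32]; // a SHA-256 digest is 32 raw bytes
    let hex = hex_encode(&digest);
    assert_eq!(digest.len(), 32); // binary storage
    assert_eq!(hex.len(), 64); // hex-string storage: twice as large
}
```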

Todo

Implement the storage of object ids as binary content in the state. Additionally, implement serialization, deserialization, and validation when necessary.

🤯 Reduce mapping layers

The purpose here is to enhance the performance globally by removing unnecessary mapping layers.

Description

The main point comes from the QueryEngine design, which processes the select query and directly returns the solutions as msg::SelectResponse. Converting the resolved variables to the message model induces a costly mapping, as each namespace key needs to be resolved to its actual value.

This behaviour is needed to handle msg::SelectQuery, but as the QueryEngine is also used in the processing of other messages, it has brought unnecessary mapping layers. For example, triple deletion first needs to select the triples to delete, which resolves the namespaces from key to value, and then proceeds with the reversed mapping.

🤯 Cognitarium: Prevent blank node conflicts

When inserting triples, avoid the same blank nodes being reused across insertions.

Issue description

Blank nodes allow edges to be created in graphs at insertion time; their meaning is only tied to the set of triples being inserted and shall be guaranteed unique per insertion. Allowing the same blank nodes to be defined across multiple insertions could affect previous ones and therefore unintentionally alter the stored data.

When inserting blank nodes, they shall be guaranteed unique, scoped per insert message execution.

Proposal

In order to provide unique blank nodes at insertion, their identifier shall be formed with the context of that particular execution. I therefore propose to alter their ids before writing, prefixing them with a hash representing the execution context, which shall carry the block height, the index of the transaction the message comes from, and the insert message itself.
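A sketch of the idea, using std's DefaultHasher purely as a stand-in for whatever hash the contract would actually use (the function name and prefix format are assumptions):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Derive an execution-scoped prefix from the block height, the index of
// the transaction, and the raw insert message, then namespace the blank
// node id with it. DefaultHasher is only an illustrative stand-in.
fn scoped_blank_node(height: u64, tx_index: u32, msg: &[u8], blank_id: &str) -> String {
    let mut h = DefaultHasher::new();
    height.hash(&mut h);
    tx_index.hash(&mut h);
    msg.hash(&mut h);
    format!("{:016x}:{}", h.finish(), blank_id)
}

fn main() {
    let a = scoped_blank_node(100, 0, b"insert", "b0");
    let b = scoped_blank_node(100, 0, b"insert", "b0");
    assert_eq!(a, b); // deterministic within one execution context
    assert!(a.ends_with(":b0"));
    // A different context yields a different prefix, so "b0" from two
    // insertions can no longer collide.
    assert_ne!(a, scoped_blank_node(101, 0, b"insert", "b0"));
}
```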

💿 🔄 Objectarium: support for Object Compression

Idea

The objectarium smart contract enables object storage on-chain. However, as the size of objects stored on-chain increases, so does the gas cost required to store them.

One way to address this issue is by adding support for compression. This would allow users to compress their data before storing it on-chain, potentially reducing the amount of gas required to store the object. Users could make a trade-off between the gas cost required for storage (due to the size of the object) and the gas cost required for computation (due to compression).

To gain a deeper understanding of how the smart contract behaves with compression in terms of gas consumption, benchmarking could be a useful approach (cf. #191).

Todo

  • Evaluate the relevance of the proposal.
  • Research and select one or two CPU-efficient compression algorithms.
  • Design the contract interface to accommodate compression.
  • Implement.
  • Test.
  • Document the feature. Would be nice to consider the impact of compression on gas consumption and document any potential trade-offs.

Add support for LZMA compression into objectarium

Todo

The objective is to incorporate the LZMA compression algorithm into the objectarium.

Rationale: Selecting a compression algorithm to include in the objectarium is a complex decision, considering factors such as compression ratio and compression/decompression performance, particularly within the realm of blockchain. The LZMA family holds a prominent position among compression algorithms, renowned for its effectiveness and widespread adoption. As such, it's a good candidate, IMHO.

Details: Consider utilizing the following project for the implementation: https://github.com/gendx/lzma-rs

๐Ÿ›ก๏ธ Insufficient Input Validation in Objectarium Bucket Instantiation

Note

Severity: Low
target: v5.0.0 - Commit: cde785fbd2dad71608d53f8524e0ef8c8f8178af
Ref: OKP4 CosmWasm Audit Report v1.0 - 02-05-2024 - BlockApex

Description

The Objectarium contract's instantiate function lacks necessary validations on the bucket name parameter during the bucket creation process. This oversight allows for the creation of buckets with arbitrary lengths and potentially malicious content in their names. The absence of strict checks and sanitization on the bucket names could facilitate phishing attacks or cross-site scripting (XSS) vulnerabilities when these names are displayed in a frontend application.

Impact

If exploited, this vulnerability could lead to phishing attacks and execution of unauthorized scripts in the context of a user's session (XSS), compromising the security of the frontend application and the integrity of user interactions.

Recommendation

Implement a validation check to ensure all input conforms to the URL-safe alphabet as specified in RFC 4648, using the defined character set.
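A minimal validation sketch against the RFC 4648 URL-safe alphabet (A-Z, a-z, 0-9, '-' and '_'); the length bound below is an extra assumption worth considering, not part of the recommendation:

```rust
// Accept only characters from the RFC 4648 URL-safe base64 alphabet and
// reject empty names (the 128-byte length bound is an assumed limit).
fn is_valid_bucket_name(name: &str) -> bool {
    !name.is_empty()
        && name.len() <= 128
        && name
            .bytes()
            .all(|b| b.is_ascii_alphanumeric() || b == b'-' || b == b'_')
}

fn main() {
    assert!(is_valid_bucket_name("my-bucket_01"));
    assert!(!is_valid_bucket_name(""));
    // An XSS-style payload is rejected before it can reach a frontend.
    assert!(!is_valid_bucket_name("<script>alert(1)</script>"));
}
```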

๐ŸŒ Implement FoundZone

Purpose

Implement the ExecuteMsg::FoundZone message according to the current specification of the okp4-dataverse smart contract.

  • Consider the business rules and invariants required for data integrity, security and adherence to specification
  • Implement
  • Test

๐ŸŒ Implement RevokeClaims

Purpose

Implement the ExecuteMsg::RevokeClaims message according to the current specification of the okp4-dataverse smart contract.

  • Consider the business rules and invariants required for data integrity, security and adherence to specification
  • Implement
  • Test

🤯 Cognitarium: BGP optimization

Purpose

When using the PlanBuilder to establish a query plan from a query based on triple patterns, the triple patterns are considered as a Basic Graph Pattern. The current implementation builds it in the order the triple patterns are provided, which is not optimal.

Proposal

Before building the Basic Graph Pattern relating to a set of triple patterns, an optimization step could be added to order the patterns based on a computed index expressing the complexity needed to resolve them. The built BGP would then be optimal, and users won't need to perform their own optimization before querying anymore; this will also help decrease resource usage on nodes.
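As a sketch, the simplest such index is the number of unbound variables in each pattern: fewer variables means a more selective pattern that should be resolved first. This is a deliberate simplification of a real cost model, with illustrative names throughout:

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum Term {
    Bound,    // a concrete node or literal
    Variable, // an unbound variable
}

type TriplePattern = (Term, Term, Term);

// Naive cost index: the number of unbound variables. A real optimizer
// would also use cardinality statistics; this is only a sketch.
fn cost(p: &TriplePattern) -> usize {
    [p.0, p.1, p.2].iter().filter(|t| **t == Term::Variable).count()
}

// Order the BGP so the most selective patterns are resolved first.
fn optimize(mut patterns: Vec<TriplePattern>) -> Vec<TriplePattern> {
    patterns.sort_by_key(cost);
    patterns
}

fn main() {
    use Term::{Bound, Variable};
    let bgp = vec![
        (Variable, Variable, Variable), // matches everything: resolve last
        (Bound, Bound, Variable),       // most selective: resolve first
        (Bound, Variable, Variable),
    ];
    let ordered = optimize(bgp);
    assert_eq!(ordered[0], (Bound, Bound, Variable));
    assert_eq!(ordered[2], (Variable, Variable, Variable));
}
```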

๐ŸŒ Implement SubmitClaims

Purpose

Implement the ExecuteMsg::SubmitClaims message according to the current specification of the okp4-dataverse smart contract.

  • Consider the business rules and invariants required for data integrity, security and adherence to specification
  • Implement
  • Test

๐Ÿ’ต Specify the tax distribution mechanism

๐Ÿ“„ Purpose

NOTE: I first created a branch with an associated PR containing a smart contract intended to distribute the tax according to the token model defined in the whitepaper, but I finally decided to open an issue to initiate a discussion on its architecture.

Based on the token model specified in the whitepaper and the discussion on this issue, we need a mechanism to distribute the tax collected from workflow execution to a list of recipients, including the possibility of burning part of this tax.

๐Ÿงฎ Proposed solution

To move this responsibility out of the workflow contract, I propose creating a smart contract dedicated to tax distribution: a contract that can only be instantiated and updated by governance vote (for our needs), and that allows configuring and updating the list of recipients and the burn amount.

To resolve rounding and accuracy issues, I propose configuring a default recipient plus a list of recipients with their corresponding token distribution ratios; after distributing tokens to all destinations, the remaining tokens are transferred to the default recipient. This also makes it possible to add or remove a recipient without resending the whole list of recipients and ratios, unless necessary.

For example, based on this figure, when a workflow is executed, a 3% tax on the total cost is transferred to the tax distribution contract. The contract is configured with one destination, burn, set to 66.6%, and the community pool as the default recipient. Once 66.6% of the tokens have been burnt, the remaining tokens are sent to the default recipient (the community pool). A recipient can also be configured with a wallet address.

๐Ÿ”Ž Example

Contract schema definition example
use cosmwasm_schema::{cw_serde, QueryResponses};
use cosmwasm_std::Decimal;

/// InstantiateMsg is the default config used for the tax distribution contract.
///
/// This contract is intended to be stored with the `--instantiate-nobody` flag so that only
/// the governance is allowed to instantiate and configure this contract, and is set as the
/// contract admin.
#[cw_serde]
pub struct InstantiateMsg {
    /// Configure the destination of the remaining tokens after distribution to other recipients
    /// (defined in `destinations`).
    default_recipient: Recipient,
    /// Define the distribution rate of tokens to the intended recipients.
    /// The total rate sum should not exceed 1.
    destinations: Vec<Destination>,
}

#[cw_serde]
pub enum Recipient {
    /// Send token to the community pool.
    CommunityPool,
    /// Burn token.
    Burn,
    /// Send token to a specific wallet address.
    Address(String),
}

#[cw_serde]
pub struct Destination {
    /// Recipient of tokens
    recipient: Recipient,
    /// The share of the tokens this recipient receives.
    /// Value must be strictly between zero and one.
    ratio: Decimal,
}

/// Execute messages
#[cw_serde]
pub enum ExecuteMsg {
    /// # Distribute
    /// Distributes the tokens received from the transaction to the recipients following
    /// the configured apportionment.
    Distribute,

    /// # UpdateDefaultRecipient
    /// Change the default recipient used to distribute the remaining tokens.
    ///
    /// Only contract admin can update the default recipient.
    UpdateDefaultRecipient { recipient: Recipient },

    /// # UpsertDestinations
    /// Add new recipients for receiving tokens, each with its corresponding ratio.
    /// If a recipient is already configured, its ratio is updated.
    /// Note that the total configured ratio must not exceed 1, but may be less
    /// than 1, since the remaining tokens are transferred to the default recipient.
    ///
    /// Only contract admin can add or update destinations.
    UpsertDestinations { destinations: Vec<Destination> },

    /// # RemoveRecipients
    /// Remove recipients from the tax distribution.
    ///
    /// Only contract admin can remove recipients.
    RemoveRecipients { recipients: Vec<Recipient> },
}

/// Query messages
#[cw_serde]
#[derive(QueryResponses)]
pub enum QueryMsg {
    /// # Destinations
    /// Returns the current configuration for all tax distribution destinations with their
    /// corresponding ratio.
    #[returns(DestinationsResponse)]
    Destinations,
}

/// # DestinationsResponse
#[cw_serde]
pub struct DestinationsResponse {
    /// The currently configured default recipient for the tokens remaining after distribution.
    pub default_recipient: Recipient,
    /// All recipients with their corresponding token ratios.
    pub destinations: Vec<Destination>,
}
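To make the remainder-based accounting concrete, here is a simplified sketch of what the `Distribute` logic could compute. It uses integer permille ratios instead of `Decimal` and plain Rust types instead of CosmWasm messages, so it is an illustrative model under those assumptions, not the contract implementation:

```rust
#[derive(Clone)]
enum Recipient {
    Burn,
    CommunityPool,
    Address(String),
}

struct Destination {
    recipient: Recipient,
    ratio_permille: u128, // e.g. 666 for 66.6%
}

/// Hypothetical `Distribute` logic: apply each ratio to the collected tax
/// with floor division, then return the integer remainder, which goes to
/// the default recipient. Flooring guarantees we never overshoot `total`.
fn distribute(total: u128, destinations: &[Destination]) -> (Vec<(Recipient, u128)>, u128) {
    let mut payouts = Vec::new();
    let mut distributed = 0u128;
    for d in destinations {
        let amount = total * d.ratio_permille / 1000;
        distributed += amount;
        payouts.push((d.recipient.clone(), amount));
    }
    (payouts, total - distributed)
}
```

With the figure's example — 1000 tokens of tax and a single burn destination at 66.6% — 666 tokens are burnt and the 334-token remainder goes to the default recipient, so no dust is ever lost to rounding.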

โ“ Remarks & Questions

This is only a preliminary contract definition and may not cover all our needs; I'm awaiting your feedback to see whether any required features are missing.

Other solutions are possible for the tax distribution:

  • Include it in the workflow contract, avoiding the creation of another contract at the cost of increasing that contract's responsibilities.
  • Keep a contract like the one proposed but, to limit gas fees, implement a withdraw feature for address recipients. This avoids transferring tokens in a single transaction and mirrors what the distribution module does, letting recipients withdraw their tokens themselves. The same cannot be done for burning or for the community pool, since no one can withdraw from the community pool except governance (though that might be a solution).
  • Given that executing a workflow will remunerate all the actors allowed to provide data to it, it might be wise to integrate the tax distribution at the same level as the remuneration distribution.

๐Ÿ›ก๏ธ Unrestricted Access to Break_Stone if Deployed without admin

Note

Severity: Medium
target: v5.0.0 - Commit: cde785fbd2dad71608d53f8524e0ef8c8f8178af
Ref: OKP4 CosmWasm Audit Report v1.0 - 02-05-2024 - BlockApex

Description

The Law Stone contract utilizes the store_object function of the Objectarium to store .pl files containing rule sets. A key function within this contract is break_stone, designed to modify or remove these rules by unpinning or forgetting them. The function implements an admin check to restrict access: if an admin address is configured, it ensures that only the admin can execute the function by comparing the info.sender with the admin address.

Specifically, the function logic:

{
  Some(admin_addr) if admin_addr != info.sender => Err(ContractError::Unauthorized),
  _ => Ok(()),
};

indicates that if an admin is set and the caller is not the admin, the function will deny access. However, if no admin is defined, particularly in deployments initiated with the --no-admin flag in CosmWasm CLI, the function proceeds without any access restrictions. This lack of checks in the absence of an admin means that anyone can invoke break_stone and potentially disrupt the rule enforcement by the Law Stone, impacting the governance or operational constraints enforced by these rules.

Recommendation

  • Enforce Admin Configuration: Modify the contract deployment process to require an admin address explicitly. This change would prevent the contract from being deployed without admin oversight.
  • Default Admin Fallback: Implement a default admin setting that can be used if no specific admin is provided during deployment, ensuring there's always some level of controlled access.
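Both recommendations amount to "failing closed" when no admin exists. A sketch of that guard, extracted from any contract context — the helper name and error strings are illustrative, not the Law Stone's actual API:

```rust
/// Hypothetical access guard: deny the call when the sender is not the
/// admin, and also deny it when no admin was configured at all, instead
/// of falling through to unrestricted access.
fn ensure_admin(admin: Option<&str>, sender: &str) -> Result<(), &'static str> {
    match admin {
        Some(a) if a == sender => Ok(()),
        Some(_) => Err("unauthorized: sender is not the admin"),
        None => Err("unauthorized: contract deployed without an admin"),
    }
}
```

The only behavioural change from the audited snippet is the `None` arm: absence of an admin becomes a denial rather than an implicit grant.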

โ™ป๏ธ Smart contracts best practices for optional values with defaults

Issue

There are some inconsistencies in the way we handle optional values and defaults in our smart contracts, especially with regard to messages and state.

Idea

As we are currently experiencing variations in the principles applied, I propose the following rule to address the matter. This is something that is open to discussion of course.

Messages

Regarding messages: in the definition of instantiate and execute messages, optional values may be used, with explicit default values documented in the comments. In the implementation, methods should be provided to retrieve the value or its default — for instance max_page_size_or_default for the max_page_size property.
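A minimal sketch of the proposed `*_or_default` accessor pattern — the struct, field, and default value here are hypothetical, chosen only to illustrate the convention:

```rust
/// Illustrative default; each property documents its own default value.
const DEFAULT_MAX_PAGE_SIZE: u32 = 30;

/// Hypothetical message/config type with an optional property.
struct PaginationConfig {
    max_page_size: Option<u32>,
}

impl PaginationConfig {
    /// Resolve the optional value to a concrete one, as the rule proposes.
    fn max_page_size_or_default(&self) -> u32 {
        self.max_page_size.unwrap_or(DEFAULT_MAX_PAGE_SIZE)
    }
}
```

Response types would then expose the resolved `u32` directly, never the `Option`, in line with the rule for message responses.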

For message responses, types with default values should not be optional: when a type has a default value, the response should always include either the value or the default. This avoids forcing clients to handle optional values explicitly and helps them deal with defaults.

State

On the state side, I believe that default values should be used whenever possible, which eliminates any use of the optional type.

Todo

  • reach a consensus on this matter
  • implement the proposed change

๐Ÿง  Specify Dataspace smart contract

Description

Provide the first specification elements about a dataspace smart contract that'll be in charge of performing all the possible actions in a dataspace.

The smart contract specification shall meet the following responsibilities:

  • Condition any actions against governance rules
  • Provide changing governance rules mechanism
  • Sync the ontology by reflecting any mutations in it

Finally, we shall find a name for it.

Improve README.md (emojis + quality section)

Purpose

The README.md file could be enhanced in the following ways:

  • Emojis are used for some section headers, but not all. It would be beneficial to add emojis to the remaining section titles for consistency.


  • A section should be added to describe the quality assurance approach used in this project, which ensures that the Smart Contracts developed meet high-quality standards. The section should especially mention that this is achieved through:
    • The enforcement of stringent rules, monitored by a linter (Clippy) within the Github CI environment.
    • A high level of code coverage through systematic unit testing.
    • Future considerations for additional testing approaches, such as fuzzy testing or end-to-end testing, to further enhance quality.

๐Ÿคฏ Cognitarium: reuse namespace cache

Purpose

Regarding namespace management — especially keeping information cached when mapping triple IRIs between their string representation and their u128 key identifier in state — some optimizations can be made, and a canonical implementation can be set up to be reused and to ease upcoming evolution.

Details

The current status of namespace management at this point is:

The TripleStorer contains a dedicated namespace cache implementation allowing it to resolve an existing namespace from its string representation to its u128 state key identifier, or to create a new one. On the other hand, the QueryEngine SolutionsIterator provides the opposite implementation, resolving a namespace string representation from its u128 state identifier. Aside from those, the PlanBuilder could benefit from a namespace caching mechanism, as it systematically queries the state.

These usages differ, but I think a single implementation could satisfy each use case, taking advantage of composability and avoiding duplication — a single implementation that could also be decoupled to enhance testability.
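A possible shape for such a shared, bidirectional cache — the names and the key-allocation strategy are assumptions, and a real implementation would back cache misses with state reads instead of always allocating:

```rust
use std::collections::HashMap;

/// Sketch of a single bidirectional namespace cache usable by the
/// TripleStorer, the SolutionsIterator, and the PlanBuilder alike.
#[derive(Default)]
struct NamespaceCache {
    by_value: HashMap<String, u128>, // string representation -> state key
    by_key: HashMap<u128, String>,   // state key -> string representation
    next_key: u128,
}

impl NamespaceCache {
    /// Resolve a namespace string to its key, allocating one if unknown
    /// (the TripleStorer use case).
    fn resolve_or_allocate(&mut self, ns: &str) -> u128 {
        if let Some(k) = self.by_value.get(ns) {
            return *k;
        }
        let k = self.next_key;
        self.next_key += 1;
        self.by_value.insert(ns.to_string(), k);
        self.by_key.insert(k, ns.to_string());
        k
    }

    /// Reverse lookup: key to namespace string, if cached
    /// (the SolutionsIterator use case).
    fn resolve_value(&self, key: u128) -> Option<&String> {
        self.by_key.get(&key)
    }
}
```

Because both directions live in one structure, each component composes with the same cache, and the state-access layer can be injected behind it for testing.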
