
ink!

Parity's ink! for writing smart contracts


ink! is an eDSL to write smart contracts in Rust for blockchains built on the Substrate framework. ink! contracts are compiled to WebAssembly.


Guided Tutorial for Beginners  •   ink! Documentation Portal  •   Developer Documentation



Play with It

The best way to start is to check out the Getting Started page in our documentation.

If you want to have a local setup you can use our substrate-contracts-node for a quickstart. It's a simple Substrate blockchain which includes the Substrate module for smart contract functionality ‒ the contracts pallet (see How it Works for more).

We also have a live testnet named "Contracts" on Rococo. Rococo is a Substrate-based parachain which supports ink! smart contracts. For instructions on using this testnet, see our documentation.

The Contracts UI can be used to instantiate your contract to a chain and interact with it.

Usage

A prerequisite for compiling smart contracts is to have Rust and Cargo installed. Here's an installation guide.

We recommend installing cargo-contract as well. It's a CLI tool which helps set up and manage WebAssembly smart contracts written with ink!:

cargo install cargo-contract --force

Use --force to ensure you are updated to the most recent cargo-contract version.

In order to initialize a new ink! project you can use:

cargo contract new flipper

This will create a folder flipper in your work directory. The folder contains a scaffold Cargo.toml and a lib.rs, which both contain the necessary building blocks for using ink!.

The lib.rs contains our hello world contract ‒ the Flipper, which we explain in the next section.

In order to build the contract just execute this command in the flipper folder:

cargo contract build

As a result you'll get a flipper.wasm file, a flipper.json metadata file, and a flipper.contract file in the target folder of your contract. The .contract file bundles the Wasm and the metadata into one file and is the one to use when instantiating the contract.

Hello, World! ‒ The Flipper

The Flipper contract is a simple contract containing only a single bool value.

It provides methods to:

  • flip its value from true to false (and vice versa) and
  • return the current state.

Below you can see the code using ink!.

#[ink::contract]
mod flipper {
    /// The storage of the flipper contract.
    #[ink(storage)]
    pub struct Flipper {
        /// The single `bool` value.
        value: bool,
    }

    impl Flipper {
        /// Instantiates a new Flipper contract and initializes
        /// `value` to `init_value`.
        #[ink(constructor)]
        pub fn new(init_value: bool) -> Self {
            Self {
                value: init_value,
            }
        }

        /// Flips `value` from `true` to `false` or vice versa.
        #[ink(message)]
        pub fn flip(&mut self) {
            self.value = !self.value;
        }

        /// Returns the current state of `value`.
        #[ink(message)]
        pub fn get(&self) -> bool {
            self.value
        }
    }

    /// Simply execute `cargo test` in order to test your contract
    /// using the below unit tests.
    #[cfg(test)]
    mod tests {
        use super::*;

        #[ink::test]
        fn it_works() {
            let mut flipper = Flipper::new(false);
            assert_eq!(flipper.get(), false);
            flipper.flip();
            assert_eq!(flipper.get(), true);
        }
    }
}

The flipper/src/lib.rs file in our examples folder contains exactly this code. Run cargo contract build to build your first ink! smart contract.

Examples

In the examples repository you'll find a number of examples written in ink!.

Some of the most interesting ones:

  • basic_contract_ref ‒ Implements cross-contract calling.
  • trait-erc20 ‒ Defines a trait for Erc20 contracts and implements it.
  • erc721 ‒ An exemplary implementation of Erc721 NFT tokens.
  • dns ‒ A simple DomainNameService smart contract.
  • …and more, just rummage through the folder 🙃.

To build a single example navigate to the root of the example and run:

cargo contract build

You should now have an <name>.contract file in the target folder of the contract.

For information on how to upload this file to a chain, please have a look at the Play with It section or our smart contracts workshop.

How it Works

  • Substrate's Framework for Runtime Aggregation of Modularized Entities (FRAME) contains a module which implements an API for typical functions smart contracts need (storage, querying information about accounts, …). This module is called the contracts pallet.
  • The contracts pallet requires smart contracts to be uploaded to the blockchain as a Wasm blob.
  • ink! is a smart contract language which targets the API exposed by the contracts pallet. Hence ink! contracts are compiled to Wasm.
  • When executing cargo contract build an additional file <contract-name>.json is created. It contains information about, e.g., which methods the contract provides for others to call.

ink! Macros & Attributes Overview

Entry Point

In a module annotated with #[ink::contract] these attributes are available:

Attribute Where Applicable Description
#[ink(storage)] On struct definitions. Defines the ink! storage struct. There can only be one ink! storage definition per contract.
#[ink(message)] Applicable to methods. Flags a method for the ink! storage struct as a message, making it available to the API for calling the contract.
#[ink(constructor)] Applicable to methods. Flags a method for the ink! storage struct as a constructor, making it available to the API for instantiating the contract.
#[ink(event)] On struct definitions. Defines an ink! event. A contract can define multiple such ink! events.
#[ink(anonymous)] Applicable to ink! events. Tells the ink! codegen to treat the ink! event as anonymous, which omits the event signature as a topic upon emitting. Very similar to anonymous events in Solidity.
#[ink(signature_topic = _)] Applicable to ink! events. Specifies a custom signature topic for the event, which allows manually specifying a shared event definition.
#[ink(topic)] Applicable to ink! event fields. Tells the ink! codegen to provide a topic hash for the given field. Every ink! event can only have a limited number of such topic fields. Similar semantics to indexed event arguments in Solidity.
#[ink(payable)] Applicable to ink! messages. Allows receiving value as part of the call to the ink! message. ink! constructors are implicitly payable.
#[ink(selector = S:u32)] Applicable to ink! messages and ink! constructors. Specifies a concrete dispatch selector for the flagged entity. This allows a contract author to precisely control the selectors of their APIs, making it possible to rename an API without breakage.
#[ink(selector = _)] Applicable to ink! messages. Specifies a fallback message that is invoked if no other ink! message matches a selector.
#[ink(namespace = N:string)] Applicable to ink! trait implementation blocks. Changes the resulting selectors of all the ink! messages and ink! constructors within the trait implementation. Allows disambiguating between trait implementations with overlapping message or constructor names. Use only with great care and consideration!
#[ink(impl)] Applicable to ink! implementation blocks. Tells the ink! codegen that an implementation block shall be granted access to ink! internals even without containing any ink! messages or ink! constructors.

See here for a more detailed description of those and also for details on the #[ink::contract] macro.

Trait Definitions

Use #[ink::trait_definition] to define your very own trait definitions that are then implementable by ink! smart contracts. See e.g. the examples/trait-erc20 contract for how to utilize it, or the documentation for details.

Off-chain Testing

The #[ink::test] procedural macro enables off-chain testing. See e.g. the examples/erc20 contract for how to utilize it, or the documentation for details.

Developer Documentation

We have a very comprehensive documentation portal, but if you are looking for the crate level documentation itself, then these are the relevant links:

Crate Docs Description
ink Language features exposed by ink!. See here for a detailed description of attributes which you can use in an #[ink::contract].
ink_storage Data structures available in ink!.
ink_env Low-level interface for interacting with the smart contract Wasm executor. Contains the off-chain testing API as well.
ink_prelude Common API for no_std and std to access alloc crate types.

Community Badges

Normal Design

Built with ink!

[![Built with ink!](https://raw.githubusercontent.com/paritytech/ink/master/.images/badge.svg)](https://github.com/paritytech/ink)

Flat Design

Built with ink!

[![Built with ink!](https://raw.githubusercontent.com/paritytech/ink/master/.images/badge_flat.svg)](https://github.com/paritytech/ink)

Contributing

Visit our contribution guidelines for more information.

Use the scripts provided under the scripts/check-* directory to run checks on either the workspace or all examples. Please do this before pushing work in a PR.

License

The entire code within this repository is licensed under the Apache License 2.0.

Please contact us if you have questions about the licensing of our products.


ink's Issues

Fix DynAlloc

The "new" DynAlloc storage allocator is broken.
Whenever using it (e.g. allocate_using) the code panics (traps).
The reason for this has yet to be found.

We need to fix it in order to finally deprecate CellChunkAlloc.

Write new acceptance tests for the eDSL

With the new minimal testing framework in place we can now add success and failure path acceptance tests for pdsl_lang. We should add as many useful tests as possible in order to verify the integrity of the system and to allow for easy refactoring of the proc macro later on.

This issue is a collection for outstanding tests that should be written.

  • Failure: Using self by value in a message
  • Failure: No state in contract!
  • Failure: No deploy handler in contract!
  • Failure: Using &self or self in the deploy handler
  • Failure: The deploy implementation must not be generic
  • Failure: The deploy implementation must not have a return type
  • Failure: Using env as argument name in a message
  • Failure: Using env as argument name in the deploy handler
  • Failure: Using deploy as a message name (already used by the deploy handler)
  • Failure: Using deploy as a method name (already used by the deploy handler)
  • Failure: Using multiple states
  • Failure: Using multiple deploy impl blocks
  • Failure: Using a non-primitive type as message or deploy handler input
  • Failure: Using a non-primitive type as message or deploy handler output
  • Success: Add a simple incrementer contract test case
  • Success: Add a simplified ERC20 token contract test case

Fix bug that somehow std is compiled into eDSL smart contracts

When compiling a smart contract with the eDSL, parts of the standard library may currently end up compiled into the resulting binary. Things like the panicking infrastructure use their std components. This is bad since we do not support std environments in Wasm smart contracts.

A likely cause is that cargo cannot use different crate features for different paths to the same crate dependency in the dependency tree (see cargo tree). Instead, cargo uses the same crate features for a given crate throughout. Since pdsl_lang ‒ the proc macro, which itself validly makes use of std components ‒ depends on pdsl_core and pdsl_model with their respective std feature flag set, it might be that the resulting final binary, which also depends on pdsl_core and pdsl_model, has the std flag set as well. Further investigation is needed.

Design & implement a CLI tool for deploying and developing smart contracts

Currently using the pDSL requires setting up a Rust project in a specific way.
Deploying or compiling a smart contract also requires certain knowledge about WebAssembly internals and obviously Substrate.
This is a huge burden, especially for smart contract developers that are not coming from Rust or do not have proficient knowledge about Substrate or Web Assembly.

To counteract this we want to provide a small CLI tool that helps with the common tasks.

pDSL CLI

The pDSL CLI can be a standalone CLI tool or a cargo plugin.
For an initial MVP we basically want it to perform the following tasks:

  • It should help setting up a workspace for a new smart contract. Ideally the user should have to do nothing other than invoke our provided tool. An example usage could be the following: cargo pdsl --new <name>. This will create, initialize, and set up a workspace for a smart contract called <name> in the current working directory.

  • Also it should help with deploying smart contracts. Currently deploying a smart contract requires manual interaction with a running Substrate chain, either local (--dev) or an open chain. This can be very tedious. Besides that, deploying a WebAssembly smart contract also requires some instrumentation of the assembly itself and special compiler flags, for example in order to always use "safe math".

  • We might want to provide special functionality in order to just test a smart contract on-chain or off-chain since this process can also sometimes be tricky without proper knowledge.

Future Work

  • In the future we want to help users with dynamic linking of core components and remote smart contracts that their smart contracts are depending on.

Generate JSON API description

For a smart contract written using pdsl_lang we want to generate an API description file, probably encoded as JSON. The API file lists all the available messages, their hash selectors as well as their arguments in the required order, so that an external system can use this information to create and manage calls to the smart contract.

Uses

The pDSL itself as well as external tools can make use of the API description files.
For the pDSL it could help smart contract developers in calling messages of external smart contracts by introspecting their API description files.

Open Questions

For an MVP, a smart contract's API should be restricted to just some primitive types such as (), bool, u{8,16,32,64,128}, i{8,16,32,64,128}, Balance, Address. Later we want to open this up to allow for more and even custom types. However, for custom types, or even for Rust standard library types such as Option<T>, we need a clear and concise way to communicate our intention in a language-agnostic way.

This means that instead of describing that our message takes an Option<u32>, we need to specify that our message takes an object of type Option<u32> that is encoded as {u8, u32}. This way we can keep our encoding interfaces clean and language agnostic.

The JSON shard for this could look like the following:
{ "name": "my_optional_int", "type_name": "Option<u32>", "encoded_as": ["u8", "u32"] }
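A minimal sketch of this tag-byte encoding in plain Rust (encode_option_u32 is a hypothetical helper for illustration, not part of the pDSL):

```rust
/// Hypothetical helper: encodes an `Option<u32>` as the `{u8, u32}` layout
/// described above ‒ a tag byte, then (for `Some`) the little-endian value.
fn encode_option_u32(value: Option<u32>) -> Vec<u8> {
    match value {
        // `None` is just the tag byte 0.
        None => vec![0u8],
        // `Some(v)` is tag byte 1 followed by the 4 value bytes.
        Some(v) => {
            let mut out = vec![1u8];
            out.extend_from_slice(&v.to_le_bytes());
            out
        }
    }
}
```

An external caller only needs the `encoded_as` list from the JSON to reproduce this byte layout, without understanding Rust's `Option` type.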

Example

Looking again at the simple Incrementer smart contract:

/// A simple contract that has a value that can be
/// incremented, returned and compared.
struct Incrementer {
    /// The internal value.
    value: storage::Value<u32>,
}

impl Deploy for Incrementer {
    /// Automatically called when the contract is deployed.
    fn deploy(&mut self, init_value: u32) {
        self.value.set(init_value)
    }
}

impl Incrementer {
    /// Increments the internal counter.
    pub(external) fn inc(&mut self, by: u32) {
        self.value += by
    }

    /// Returns the internal counter.
    pub(external) fn get(&self) -> u32 {
        *self.value
    }

    /// Returns `true` if `x` is greater than the internal value.
    pub(external) fn compare(&self, x: u32) -> bool {
        x > *self.value
    }
}

The pdsl_lang should generate a JSON file that roughly looks like this:

{
    "name": "Incrementer",
    "deploy": {
        "params": [],
        "return_type": null
    },
    "messages": [
        {
            "name": "inc",
            "selector": 0,
            "mutates": true,
            "params": [
                {"name": "by", "type": "u32"}
            ],
            "return_type": null
        },
        {
            "name": "get",
            "selector": 1,
            "mutates": false,
            "params": [],
            "return_type": "u32"
        },
        {
            "name": "compare",
            "selector": 2,
            "mutates": false,
            "params": [
                {"name": "x", "type": "u32"}
            ],
            "return_type": "bool"
        }
    ]
}

Implement special trait-like syntax for deploy functionality

In the current state of pdsl_lang the on_deploy method looks just like any other message or state method. This is bad since it is something very special semantically. We want users to see that as a reflection in their smart contract code.

An idea to fix this is to introduce a special trait Deploy that needs to be implemented by every smart contract. For convenience it should provide a predefined empty implementation that does nothing. The special snowflake about the Deploy trait is that it allows its single fn deploy associated method to be variadic. So one contract might implement it as fn deploy() and another as fn deploy(init_value: u32).

An example:

use pdsl_lang::contract;
use pdsl_core::storage;

contract! {
    /// A simple contract that has a value that can be
    /// incremented, returned and compared.
    struct Incrementer {
        /// The internal value.
        value: storage::Value<u32>,
    }

    impl Deploy for Incrementer {
        fn deploy(&mut self, init_value: u32) {
            self.value.set(init_value);
        }
    }

    impl Incrementer {
        /// Increments the internal counter.
        pub(external) fn inc(&mut self, by: u32) {
            self.value += by
        }

        /// Returns the internal counter.
        pub(external) fn get(&self) -> u32 {
            *self.value
        }

        /// Returns `true` if `x` is greater than the internal value.
        pub(external) fn compare(&self, x: u32) -> bool {
            x > *self.value
        }
    }
}

Overhaul Key API

Key API

The Key API as of now mainly consists of

  • Key::with_offset(key: Key, offset: u32) -> Key
  • Key::with_chunk_offset(key: Key, offset: u32) -> Key
  • Key::store(&mut self, bytes: &[u8])
  • Key::load(&self) -> Vec<u8>
  • Key::clear(&mut self)

Especially the Key::with_offset and Key::with_chunk_offset APIs are not intuitive.
A better design would be to use the std::ops::Add and std::ops::AddAssign traits and implement them for u32 (replacing uses of Key::with_offset) and u64 (replacing uses of Key::with_chunk_offset), plus versions for i32 and i64 to go in both directions.
Due to technical implementation issues the unsigned variants could be implemented a bit more efficiently.

For completeness one could also think about u128 and i128 support, since Key is 256-bit anyway.

Also Key::store, Key::load and Key::clear could be marked unsafe since those operations are normally not safe using the raw Key abstraction and are only made safe by the many alternatives, like SyncCell, Value, Vec, etc.
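The operator-based design could be sketched like this (a simplified stand-in: Key is modeled as a u128 here instead of the real 256-bit key):

```rust
use core::ops::{Add, AddAssign};

/// Simplified stand-in for the 256-bit storage `Key` (illustration only).
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct Key(u128);

/// One chunk spans 2^32 cells.
const CHUNK_SIZE: u128 = 1 << 32;

impl Add<u32> for Key {
    type Output = Key;
    /// Replaces `Key::with_offset`: advance by single cells.
    fn add(self, offset: u32) -> Key {
        Key(self.0 + offset as u128)
    }
}

impl Add<u64> for Key {
    type Output = Key;
    /// Replaces `Key::with_chunk_offset`: advance by whole 2^32 chunks.
    fn add(self, chunk_offset: u64) -> Key {
        Key(self.0 + chunk_offset as u128 * CHUNK_SIZE)
    }
}

impl AddAssign<u32> for Key {
    fn add_assign(&mut self, offset: u32) {
        self.0 += offset as u128;
    }
}
```

With this, `key + 1u32` reads as "the next cell" and `key + 1u64` as "the next chunk", which is considerably more intuitive than the named constructors.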

Add Continuous Integration

Continuous Integration

For pDSL to evolve and scale once multiple contributors work on it, we want and need continuous integration systems that constantly check its integrity.

As a starter we want these CI systems to be supported:

  • TravisCI: for Linux and Mac builds
    • Register repository
    • Verify that master compiles successfully ..
      • .. on Linux
      • .. on Mac
    • Run pdsl_core unit tests on push or PR to master
    • Run coverage report
      • Upload coverage report to Coveralls (seems to malfunction atm)
      • Upload coverage report to CodeCov
    • Verify that the code is formatted as intended.
    • README: Link to travisCI build status
  • Appveyor: for Windows builds
    • Register repository
    • Verify that master compiles successfully
    • Run pdsl_core unit tests on push or PR to master
    • README: Link to AppVeyor build status
  • Coveralls: since it is the most popular solution and has a nice bot
    • Register repository
    • Make Coveralls bot check PRs
    • Display coverage reports correctly
    • README: Link to Coveralls coverage status
  • CodeCov: since it displays coverage better than coveralls (opinionated)
    • Register repository
    • Display coverage reports correctly
    • README: Link to CodeCov coverage status

This issue tracks the progress made on this work item.

Travis CI

Should handle most of the workload.

  • Compile-check the project with cargo check
  • Run unit tests of its components: cargo test --all-features --all
  • Do the same in release mode, since release mode can change the semantics of code, e.g. debug_assert! is compiled out.
  • Optionally perform benchmark tests, if any, using e.g. cargo bench.
  • Optionally generate self-hosted documentation using GitHub Pages.
  • Generate a coverage report and send it to Coveralls and CodeCov.

AppVeyor

Since AppVeyor can be very slow we should reduce its workload to the bare minimum.

  • Compile-check the project with cargo check.
  • Run unit tests of its components in debug mode only.

Coveralls

We should make use of its bot to never degrade in coverage upon new PRs.

Summary

We want protected branches (especially master) so that no one can merge a PR that breaks master's invariants:

  • The code should always compile.
  • Tests should always pass.
  • Code test coverage should never decrease.
  • Performance tests should never show degradations unless fixing a correctness issue.

Implement message selectors via unique hashing

Currently we generate the IDs of messages produced by pdsl_lang via a simple counter increment. While this is very compute friendly and easy to implement, it has several downsides. First, the ID of a message depends on the order in which it appears in the smart contract definition. This is super bad since it effectively disables smart contract refactoring, even on a syntactic level.

What we want from a message ID is the following:

  • It must be unique across all messages for all states.
  • It must be independent of the internal implementation and its order in the source file.
  • It must depend on the message's signature so that the external environment (and remote smart contracts) can safely rely on it for calling the message.

For this we want to implement a hashing scheme.
The hashing function is not required to be cryptographically safe.
For creating a unique message ID hash that fulfills the above stated properties we need to take its signature into consideration.

Further research required!
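As an illustration of the desired properties ‒ explicitly not the final hashing scheme ‒ a selector could be derived from the message's signature string. DefaultHasher stands in for whatever hash is eventually chosen; its output is not guaranteed stable across Rust versions, so a real scheme would use a fixed, portable hash:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Illustration: derive a message ID from the message's signature string,
/// so the ID no longer depends on declaration order in the source file.
fn selector_of(signature: &str) -> u32 {
    let mut hasher = DefaultHasher::new();
    signature.hash(&mut hasher);
    // Truncate the 64-bit hash to a 32-bit selector.
    hasher.finish() as u32
}
```

Because the selector is a pure function of the signature, reordering or refactoring the contract source leaves all message IDs unchanged, and external callers can compute them independently.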

eDSL: Wishes

Wishes for the upcoming eDSL

This issue tracks a list of all the wishes for the upcoming Rust eDSL for smart contracts.

Capital Importance

  • Deterministic execution that is free of endless loops, e.g. via gas metering.
  • A standard contract library as in EOS.
  • Linking of external contracts (libraries) statically and dynamically.
    • Important: No serialization overhead between external contract calls!
  • Reusability, modularity and encapsulation of code, e.g. as enabled via inheritance or better.
  • Efficient built-in cryptographic functions
    • Example: RSA signature verification in Wasm (~0.15 s) is a lot slower than the native routine.

Important

  • Pure contract routines that guarantee to not modify storage.
  • Ability to reason about gas cost and performance of smart contract routines.
  • Flexible account system featuring human-readable names.
  • Receive full trace of a contract execution.
    • During a "debug mode"
  • Off-chain environment for contract execution emulation.

Useful

  • Consensus mechanisms for common time of execution.
  • A counterpart for the create2 EVM instruction
    • Rationale: "... to fight front running and to have state channels (like counterfactual)"
  • Solidity-style decorators for functions and methods
  • Solidity-style decorators for storage variables (fields)
    • Example: @observable for automatically storing to and getting from storage

Design and implement Storage Allocator

Design: Storage Allocator

This issue tracks the design of an initial, simple and efficient storage allocator.

Problem

To dynamically create objects on the storage we have to have a way to allocate and deallocate them.

Since we are currently lacking an implementation of a storage allocator we cannot do these things in a sane and user friendly way.

Requirements

The requirements for a storage allocator are the following:

  • Operations alloc and dealloc shall perform in constant time
  • As few storage accesses as possible
  • Allocations must not overlap

Design Basis

The Allocator trait that should be used for that purpose should ideally be something like this:

trait Allocator {
	/// Allocates a storage area.
	fn alloc(size: u32) -> Key;
	/// Deallocates a storage area.
	fn dealloc(at: Key);
}

Design 1: Cell & Chunk Allocator

This allocator is specialized in allocating only for single cells or chunks of cells of size 2^32.
One may think that this constraint is too limiting; however, since the entire storage space allows for 2^256 different cells, we would still have 2^224 different chunks if we partitioned this space into chunks of 2^32 cells.

So for every alloc(1) this allocator tries to allocate for a single cell and for everything else it will default to allocating an entire 2^32 large chunk of cells.

This has some serious advantages.
By separating storage regions into single-cell and cell-chunk regions we no longer have to memorize the exact size of an allocation.
For the underlying implementation we could use something similar to a stash that works on the contract storage.

Other ideas are welcome.
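A minimal in-memory sketch of the single-cell half of this design, using a stash-like free list so that alloc and dealloc run in constant time (assumption: keys are modeled as plain u64 here instead of 256-bit storage keys):

```rust
/// Sketch of a cell allocator: freed keys are kept in a stash-like
/// free list and reused before fresh keys are handed out.
struct CellAllocator {
    /// Next never-allocated cell key.
    next: u64,
    /// Keys of previously freed cells, reused first.
    free: Vec<u64>,
}

impl CellAllocator {
    fn new() -> Self {
        CellAllocator { next: 0, free: Vec::new() }
    }

    /// Allocates a single cell in O(1), reusing a freed key if one exists.
    fn alloc(&mut self) -> u64 {
        if let Some(key) = self.free.pop() {
            key
        } else {
            let key = self.next;
            self.next += 1;
            key
        }
    }

    /// Returns a cell to the free list in O(1).
    fn dealloc(&mut self, key: u64) {
        self.free.push(key);
    }
}
```

The chunk allocator would work the same way over a disjoint key region, handing out whole 2^32-cell chunks, which is why allocation sizes never need to be stored.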

Come up with an idea for generic mutable references

Generic Mutable References

Problem

Currently wrappers around objects stored on the contract storage do not provide a way to receive mutable references (&mut T) to those objects.

Rationale

The reason for this is that users could make arbitrary changes to the objects without having an automated way to synchronize the storage.

Example

Storage wrappers like storage::Vec and storage::HashMap provide APIs like mutate_with with which it is possible for users to do an efficient in-place mutation of an object owned by the collection.

Goal

This is the tracking issue to come up with a user friendly, efficient and possibly general way to allow for mutable references that keep their referenced objects in sync between memory and storage.

Solutions

Solution 1: Do not provide mutable reference wrappers

This is the naive approach that we currently use.
However, it has the downside that it simply doesn't support what some users would like to have.

Solution 2: ?

Implement storage::BitVec

Storage Bitset

The pdsl_core is currently missing an efficient and elegant way to handle bit sets.
Using a storage::Vec<bool> is very inefficient, since it would require storing every single bit in a different storage cell and using storage::Vec<[u8; 32]> (or similar) would require implementation effort from the user.

The storage::Bitset should provide the most basic functionality for sets of bits and should be very efficient, reducing contract storage accesses to a minimum while maintaining overall performance.

Bits stored in the storage::Bitset should be bundled - for example as [u8; 32]. This allows efficient contract storage access and lots of cache efficiency on the memory side.

A storage::Bitset should allow iteration and the following APIs:

  • symmetric_difference(&mut self, other: &Bitset) *
  • difference(&mut self, other: &Bitset) *
  • union(&mut self, other: &Bitset) *
  • intersection(&mut self, other: &Bitset) *

(*) Means that this API is not final.

  • is_disjoint(&self, other: &Bitset) -> bool
  • is_subset(&self, other: &Bitset) -> bool
  • is_superset(&self, other: &Bitset) -> bool
  • len(&self) -> u32
  • is_empty(&self) -> bool
  • contains(&self, n: i32) -> bool

Depending on the concrete bit set implementation we could either have one of the following APIs:

  • storage::Bitset based on storage::Vec
    • push
    • pop
  • storage::Bitset based on storage::HashMap
    • insert
    • remove

We can also think about implementing both of the above versions.
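A sketch of the Vec-based variant with bits bundled into 64-bit words (an in-memory Vec stands in for the contract-storage backing; method names follow the API list above):

```rust
/// Sketch of a bit set that bundles bits into 64-bit words,
/// analogous to the proposed `[u8; 32]` bundling.
struct Bitset {
    /// Bit storage, 64 bits per word.
    words: Vec<u64>,
    /// Number of bits currently stored.
    len: u32,
}

impl Bitset {
    fn new() -> Self {
        Bitset { words: Vec::new(), len: 0 }
    }

    /// Appends a bit (the Vec-based `push` variant from the list above).
    fn push(&mut self, bit: bool) {
        let (word, offset) = (self.len / 64, self.len % 64);
        if word as usize >= self.words.len() {
            self.words.push(0);
        }
        if bit {
            self.words[word as usize] |= 1u64 << offset;
        }
        self.len += 1;
    }

    /// Returns `true` if bit `n` is set; only one word is touched,
    /// which is the point of the bundling.
    fn contains(&self, n: u32) -> bool {
        n < self.len && self.words[(n / 64) as usize] & (1u64 << (n % 64)) != 0
    }

    fn len(&self) -> u32 {
        self.len
    }

    fn is_empty(&self) -> bool {
        self.len == 0
    }
}
```

Reading or writing any bit touches exactly one word, so a storage-backed version would need only one cell access per operation instead of one per bit.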

Add helper functions to generate random numbers from `random_seed`

We also need helper methods to derive a reasonably secure random number from the seed, i.e. with more entropy incorporated into it, such as the sender account, the extrinsic index, etc.
And a bunch more helper methods to generate random numbers from seeds and to update seeds within the contract.
Otherwise each contract within the same block will see the same seed, which can be a security issue: e.g. a contract execution can read the seed and determine whether it should do something to change the state (top up an account balance) in order to affect the result of a following contract execution in the same block (a gambling tx will fail if the account does not have enough balance, but a previous tx in the same block can top it up if it decides the game is favorable in this block).

Originally posted by @xlc in #53 (comment)
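A sketch of such a helper, mixing extra per-call entropy into the block seed so two calls in the same block see different values (DefaultHasher is an illustration only; a real helper would use a cryptographic hash over the raw bytes):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Sketch of the proposed helper: mix per-call entropy (sender account,
/// an in-contract nonce, ...) into the shared block seed so that each
/// call derives a distinct value.
fn random_from_seed(seed: &[u8], sender: &[u8], nonce: u64) -> u64 {
    let mut hasher = DefaultHasher::new();
    seed.hash(&mut hasher);
    sender.hash(&mut hasher);
    nonce.hash(&mut hasher);
    hasher.finish()
}
```

Incrementing the nonce on every call (and persisting it in contract storage) is what prevents the "same seed per block" issue described above.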

Bikeshedding for better names

Some ideas for renamings and general bikeshedding of pdsl_lang.

  • The contract! macro should be renamed to decl_contract in order to align it with how Substrate macros are named: decl_storage, etc.
  • pub(external) should be renamed to pub(extrinsic) to make it clear that these are going to be extrinsic calls.

Add external (debug) printing functionality

Testing a smart contract on-chain is currently very hard.
There are no debuggers available nor can a developer currently even make use of simple println debugging.

By adding a println external function we could at least fairly easily implement some println debugging facilities - better than nothing!

However, having those println calls in real on-chain smart contracts would be catastrophic.
They must not be used on chains that are not running with --dev - this should be checked in an upcoming eDSL as well as on the Substrate side upon PUT_CODE.

To implement this we also need to adjust the Substrate SRML contracts interface and workings.
A simple approach would be to add another flag to SRML-contracts indicating whether ext_println should be enabled. This must only be true if the current environment is a --dev chain.

The eDSL must make sure that ext_println is only available if there was a debug opt-in.
To do this we need to add another crate feature that smart contract developers could enable to actually make use of ext_println. Otherwise using ext_println will be a compile error.
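One possible shape for that opt-in crate feature, sketched as a hypothetical Cargo.toml fragment - the crate and feature names here are illustrative only, not the actual design:

```toml
# Hypothetical manifest of a contract opting into debug printing.
[dependencies]
pdsl_core = { version = "0.1", default-features = false }

[features]
# Enables `ext_println`; without this feature, any use of
# `ext_println` in contract code would be a compile error.
ink-debug = ["pdsl_core/ext-println"]
```

Contracts built for deployment would simply omit the feature, so the debug symbols never make it into on-chain Wasm.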

Write high-level usage documentation for smart contract developers

We currently only have low-level documentation for the pDSL describing all the low level library parts that are more targeted towards pDSL developers. For smart contract developers that aim to use the pDSL to write Wasm based smart contracts we need high-level documentation, tutorials and many more examples.

Since pdsl_lang is the highest possible level of abstraction that can be targeted by a smart contract developer we should start writing high-level documentation for it. Smart contract developers that want to target the more low-level abstractions such as pdsl_model or pdsl_core will have to wait a bit more for proper usage documentation.

Add Contribution Guidelines

Contribution Guidelines

This repository and project need contribution guidelines so that other talented open-source developers become interested and work on it.

For this we have to write down contribution guidelines, information and rules with the focus on empowering the collaboration between contributors.

This issue shall track all progress on that field.

Tasks

  • Add CONTRIBUTING.md file to the repository root.
  • The repository's README links to the contribution guidelines.
  • Design and create an issue template.
  • Design and create a PR template.
  • Add a project Code of Conduct.

Rules

  • Contributed code (via PR) shall be ..
    • .. covered with tests.
    • .. extensively documented in the small (micro) and big-picture (macro) level.
  • Optionally for performance critical parts we need performance tests.
  • For user facing abstractions we might need documentation about its guarantees - e.g. maximum gas costs of a procedure.
  • All public API shall have a usage example in their documentation.

Stash: Implement automatic shrinking upon `Stash::take`

When storage::Stash::take(n) is called with n being the last occupied index, it is possible to shrink the stash over all consecutive vacant entries ending at n. So instead of replacing the nth entry with a removed-marker entry that forms another pointer to the next vacant entry, we can entirely drop the entry and clear the associated storage cell.

The advantage is that we can keep storage::Stash as small as possible at any time.
The downside is that we might need more computations than without this feature.

We should also decide whether this is desired behaviour, since it further complicates the data structure.

This issue tracks a proper design for this.
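An in-memory sketch of the shrink-on-take idea, with vacant slots modeled as `None`. This is a hypothetical simplification: the free-list bookkeeping of the real storage::Stash is omitted, and a real implementation would also clear the associated storage cells.

```rust
/// In-memory stand-in for `Stash<T>`; vacant slots are `None`.
struct Stash<T> {
    entries: Vec<Option<T>>,
}

impl<T> Stash<T> {
    fn new() -> Self {
        Stash { entries: Vec::new() }
    }

    fn put(&mut self, val: T) -> u32 {
        self.entries.push(Some(val));
        (self.entries.len() - 1) as u32
    }

    /// Takes the value at `n`. If `n` was the last occupied index,
    /// drop it and all trailing vacant entries instead of leaving
    /// a vacancy marker behind.
    fn take(&mut self, n: u32) -> Option<T> {
        let val = self.entries.get_mut(n as usize)?.take();
        // Shrink: pop trailing vacant entries. This also covers `n`
        // itself when it was the last occupied slot.
        while matches!(self.entries.last(), Some(None)) {
            self.entries.pop();
        }
        val
    }

    /// Number of slots currently held (occupied or vacant).
    fn capacity(&self) -> u32 {
        self.entries.len() as u32
    }
}
```

The trade-off from the issue shows up directly: taking the last occupied index shrinks the stash, while taking an interior index still has to leave a vacancy.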

Initial implementation and design of pdsl_lang

pDSL Derive

With the pdsl_core library set up and running we are now in a position where we can think about adding a derive functionality that handles all the remaining ugly parts that a user currently has to go through upon implementing a smart contract on the SRML.

Some definitions

We differentiate between smart contract compile-time and smart contract run-time.

  • The smart contract compile-time is the period of time when the pdsl_derive procedural macro is running on the smart contract code or subsets of it. This smart contract compile-time is a subset of the compile-time of the smart contract which itself is the period of time when the Rust compiler compiles the smart contract code.
  • The smart contract run-time is the period of time when the smart contract is executed either on a Substrate node or locally when testing the smart contract off-chain using e.g. the provided test environment.

The Ugly Parts

Below is a list of the parts that are currently considered to be "ugly" during smart contract implementation on the SRML. The list shall at the same time act as a requirements list for the pdsl_derive functionality.

  • Deterministic instantiation of storage allocators
    • Automatically find out if dynamic allocations may occur. A conservative approach is enough - we really do not want to solve the halting problem here. If needed, instantiate a dynamic allocator on the storage during smart contract compile-time.
    • Automatically find constant allocation schemes for all fields of structs that are meant to be stored on the contract storage. This can be done through the allocator API, too, using abstractions like the ForwardAlloc or similar.
  • Generate all code necessary for function selection when calling the smart contract.
    • This includes generation of the call function and using a deterministic function selection that is agnostic to entity ordering within the same file. So reordering fields or structs won't change its semantics.
  • Generate all code necessary for smart contract deployment.
    • This includes generating the deploy function and generating procedures that are running during deployment. This could instantiate important setup routines to make some abstractions work.

Going further

The pdsl_derive crate can already do a lot to remove nearly all ugly parts that have to be done during smart contract implementation. However, some last ugly parts remain. For example a smart contract based on the SRML has to define some dependencies in the Cargo.toml.

We could create a minor cargo plugin tool to simplify this creation step so that users only have to type in something like cargo srml --new to start implementing a new smart contract using Rust based on the SRML.

Add Iterator impl for Stash

Stash: Iter and Values

Problem

The new Stash facility currently has a minimal viable interface.
It is missing a proper and efficient iterator over values and over indexed values.

Interface

/// Iterates over `T` of `Stash<T>`
struct stash::Values<'a> { .. }

/// Iterates over `(u32, T)` of `Stash<T>`
struct stash::Iter<'a> { .. }

Requirements

Both stash::Values and stash::Iter should implement Iterator (with a proper size_hint implementation) as well as DoubleEndedIterator and ExactSizeIterator.

Rationale

Because we can.
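A minimal in-memory sketch of the `stash::Iter` semantics: vacant entries are skipped, and the known element count makes `size_hint` exact. This is a hypothetical stand-in over `Vec<Option<T>>` rather than real storage; the `DoubleEndedIterator` half is omitted for brevity but would mirror `next` from the back.

```rust
/// In-memory stand-in for iterating a `Stash<T>`; vacant slots are `None`.
pub struct Iter<'a, T> {
    entries: &'a [Option<T>],
    begin: usize,
    remaining: usize,
}

impl<'a, T> Iter<'a, T> {
    /// `len` is the number of occupied entries, tracked by the stash.
    pub fn new(entries: &'a [Option<T>], len: usize) -> Self {
        Iter { entries, begin: 0, remaining: len }
    }
}

impl<'a, T> Iterator for Iter<'a, T> {
    type Item = (u32, &'a T);

    fn next(&mut self) -> Option<Self::Item> {
        // Skip vacant slots until the next occupied entry.
        while self.begin < self.entries.len() {
            let n = self.begin;
            self.begin += 1;
            if let Some(val) = &self.entries[n] {
                self.remaining -= 1;
                return Some((n as u32, val));
            }
        }
        None
    }

    fn size_hint(&self) -> (usize, Option<usize>) {
        // Exact because the stash knows how many entries are occupied.
        (self.remaining, Some(self.remaining))
    }
}

impl<'a, T> ExactSizeIterator for Iter<'a, T> {}
```

`stash::Values` would be a thin wrapper yielding only the `&'a T` half of each item.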

Experiment with new syntax for messages

Currently to flag a function in a contract impl block to be a message that can (only) be called from outside we set its visibility to pub(external) like in the following example:

contract! {
    struct MyContract;
    impl Deploy for MyContract { ... }

    impl MyContract {
        pub(external) fn callable_from_outside() { ... }
        fn private_fn() { ... }
    }
}

This has some downsides.
First of all, it might be confusing to use the pub(restricted) syntax to restrict access for local entities while opening up remote access. Also, it would be nice not to have to apply those annotations everywhere.
The following syntax might solve these issues.

contract! {
    struct MyContract;
    impl Deploy for MyContract { ... }

    impl MyContract {
        #[extern] fn callable_from_outside() { ... }
        fn private_fn() { ... }
    }
}

With the #[extern] semantic attribute we can annotate either functions or entire impl blocks to mark them as callable from remote sources. An example of an annotated impl block can be seen below.

contract! {
    struct MyContract;
    impl Deploy for MyContract { ... }

    #[extern]
    impl MyContract {
        fn callable_from_outside() { ... }
    }

    impl MyContract {
        fn private_fn() { ... }
    }
}

Another advantage is that this syntax is more aligned to standard Rust code.

JSON api description: compound types

Our current JSON api description implementation is kept very minimal and can only support the Rust primitives bool, u{8,16,32,64,128} and i{8,16,32,64,128} as well as the SRML primitives Address and Balance.

To allow for custom user types and tuples we need to extend the set of available types to include compound types. These can optionally be named or unnamed. A compound type is transparent in that it describes how it is structured internally, which keeps the format language-agnostic.

{
    "name": "Vec3",
    "fields": [
        { "x": "i32" },
        { "y": "i32" },
        { "z": "i32" }
    ]
}

The above example would demonstrate something like a

struct Vec3 {
    x: i32,
    y: i32,
    z: i32,
}

We also want to support tuple structs like the following:

{
    "name": "Foo",
    "fields": [
        "i32", "bool", "Address"
    ]
}

This represents a tuple struct like the following:

struct Foo(i32, bool, Address);

It is important to note that this representation is recursive. So we could easily define something like this:

{
    "name": "Song",
    "fields": [
        { "duration": "u32" },
        { "title": "[u8; 32]" },
        { "date":
            {
                "name": "Date",
                "fields": [
                    { "year": "u16" },
                    { "month": "u8" },
                    { "day": "u8" }
                ]
            }
        }
    ]
}

Design and implement RawVec and RawHashMap

Design: RawVec and RawHashMap

This issue tracks design decisions around RawVec and RawHashMap implementations.

Problem

The current storage::Vec and storage::HashMap are very different from their counterparts (dynarray and mapping) in Solidity. They are targeted for a higher level of abstraction and provide more guarantees to the user.

The Solidity types are bare-metal, provide nearly no guarantees to the user and are targeted more towards efficiency and performance. Even though the Solidity types are arguably less secure and should be avoided for those reasons to write smart contracts in general, they might still be of some use for very experienced smart contract writers.

JSON api description: Support for fixed-size arrays and tuples

Our current JSON api description implementation is kept very minimal and can only support the Rust primitives bool, u{8,16,32,64,128} and i{8,16,32,64,128} as well as the SRML primitives Address and Balance.

To make the api more versatile we should also add tuples and arrays of fixed sizes.

JSON

This section covers the JSON representation of the api description.

Tuples

If we have an API like the following:

pub(external) fn get_pair() -> (bool, i32) { ... }

We want to generate JSON for the return type of this api description in the following way:

ret_ty: [ "bool", "i32" ]

The API description data structure is extended with the following variant:

enum TypeDescription {
    ...
    Tuple {
         elems: Vec<TypeDescription>,
    }
    ...
}

Arrays

If we have an API like the following:

pub(external) fn receive_array(vec3: [i32; 3]) { ... }

We want to generate JSON for the parameter of this api description in the following way:

params: [
    {
        "name": "vec3",
        "ty": {
            "[T;n]": {
                "T": "i32",
                "n": 3
            }
        }
    }
]

The API description data structure is extended with the following variant:

enum TypeDescription {
    ...
    Array {
        inner: Box<TypeDescription>,
        arity: u32
    }
    ...
}
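Putting the two proposed variants together, a sketch of the extended enum plus a hand-rolled JSON renderer (no serialization crate assumed; names beyond the issue's `Tuple`/`Array` variants are illustrative):

```rust
/// Sketch of the extended type-description enum from this issue.
enum TypeDescription {
    /// A primitive such as "bool" or "i32"; stand-in for existing variants.
    Primitive(&'static str),
    /// A tuple of inner type descriptions, rendered as a JSON array.
    Tuple { elems: Vec<TypeDescription> },
    /// A fixed-size array `[T; n]`.
    Array { inner: Box<TypeDescription>, arity: u32 },
}

impl TypeDescription {
    fn to_json(&self) -> String {
        match self {
            TypeDescription::Primitive(name) => format!("\"{}\"", name),
            TypeDescription::Tuple { elems } => {
                let inner: Vec<String> =
                    elems.iter().map(|e| e.to_json()).collect();
                format!("[{}]", inner.join(","))
            }
            TypeDescription::Array { inner, arity } => {
                // Matches the `"[T;n]": { "T": ..., "n": ... }` shape above.
                format!(
                    "{{\"[T;n]\":{{\"T\":{},\"n\":{}}}}}",
                    inner.to_json(),
                    arity
                )
            }
        }
    }
}
```

Because the enum is recursive, arrays of tuples (and vice versa) fall out of this for free.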

Implement storage::BinaryHeap

ink! is currently missing a heap data structure that enables storing ranked entities in the contract's storage efficiently.

Motivation

Since sorting is very compute-intensive (O(n log n)), it is not a good idea to sort even medium-sized data sets in a smart contract. Instead one should resort to a data structure such as the binary heap.

Design Goals

The design should mirror Rust's BinaryHeap and should in that sense also be a MAX-heap.
However, since smart contracts depend heavily on the best possible performance, one might think about having both a MIN- and a MAX-heap. This issue serves as a meeting point for discussions around this topic.

Proposed API

To mirror Rust's BinaryHeap as closely as possible we want the following API:

impl<T> BinaryHeap<T> {
    /// Returns the length of the heap.
    fn len(&self) -> u32;

    /// Returns `true` if the heap is empty.
    fn is_empty(&self) -> bool;

    /// Returns the greatest item if not empty.
    fn peek(&self) -> Option<&T>;

    /// Mutates the greatest item if not empty and returns a reference to the result.
    fn peek_mut(&mut self) -> Option<&mut T>;

    /// Removes the greatest item from the heap if not empty.
    fn pop(&mut self) -> Option<T>;

    /// Pushes an item onto the heap.
    fn push(&mut self, val: T);

    /// Returns an iterator over all items of the heap.
    fn iter(&self) -> impl Iterator<Item = &T>;
}
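Rust's own std::collections::BinaryHeap illustrates the max-heap semantics the storage variant would mirror, and shows how a min-heap falls out for free via core::cmp::Reverse - relevant to the MIN-/MAX-heap question above:

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

fn max_heap_demo() -> Option<u32> {
    let mut heap: BinaryHeap<u32> = BinaryHeap::new();
    heap.push(3);
    heap.push(7);
    heap.push(1);
    // `pop` yields the greatest item first (MAX-heap).
    heap.pop()
}

fn min_heap_demo() -> Option<u32> {
    // Wrapping items in `Reverse` flips the ordering, giving a MIN-heap
    // without a second data structure.
    let mut heap: BinaryHeap<Reverse<u32>> = BinaryHeap::new();
    heap.push(Reverse(3));
    heap.push(Reverse(7));
    heap.push(Reverse(1));
    heap.pop().map(|Reverse(x)| x)
}
```

If `Reverse` covers the min-heap use case with zero overhead, a single MAX-heap implementation may be enough for the storage variant as well.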

Is possible to make the structure/syntax of smart contract similar to substrate runtime module?

If possible, I want the syntax, structure, ABI, and overall architecture for smart contracts to be as similar to Substrate runtime modules as possible.
This will reduce the friction for developers learning to write contracts and runtime modules.

A common scenario: a developer chooses to build a runtime module / contract initially, then later figures out it would be better as a contract / runtime module instead. The amount of work to convert between runtime and contract should be minimal.

Another scenario: it would be trivial to securely merge multiple parachains into one, just by deploying them as smart contracts.

This also means contract developers are runtime developers and vice versa, meaning we have more contract developers and more runtime developers.

Also, the Substrate runtime already has a metadata system, which is basically the ABI (it is missing encoding information, but that can be fixed by paritytech/substrate#1328). It would be good if a similar system were used instead of inventing another one.

Generate special code for rustdoc compilations

Currently the auto-generated documentation for smart contracts written in the eDSL (pdsl_lang) is suboptimal. For example, messages have their env: &EnvHandler argument that should not be visible in the documentation.

Using rust-lang/rust#43781 we can get further with conditional compilation.
So upon using pdsl_lang we can detect a rustdoc or cargo doc run with #[cfg(rustdoc)] and generate special code just for documentation purposes.

Codegen should generate tuples instead of variadic closure calls

In pdsl_lang codegen we currently generate invalid closure calls for smart contract messages with more than 1 argument. This can be seen when trying to compile the Erc20 token example. For that we need to add a distinction in codegen depending on the length of the inputs.

If len == 0, generate no arguments via a _ wildcard.
If len == 1, generate a single free argument val.
If len >= 2, generate all arguments within a tuple (val1, val2, ..., valN).

This should fix the above mentioned problem.
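The three arities above can be sketched as the closure shapes codegen should emit. The handler names and argument types here are illustrative, not the actual codegen output:

```rust
fn dispatch() -> (u32, u32, u32) {
    // len == 0: no arguments, bound via a `_` wildcard.
    let handler0 = |_: ()| 42u32;

    // len == 1: a single free argument `val`.
    let handler1 = |val: u32| val + 1;

    // len >= 2: all arguments destructured from one tuple, instead of
    // the invalid variadic call `|val1, val2| ...` applied to one value.
    let handler2 = |(a, b): (u32, u32)| a + b;

    (handler0(()), handler1(1), handler2((2, 3)))
}
```

Since the decoded message input is always a single value, the tuple pattern lets one code path serve every arity.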

Got ExtrinsicFailed when using `Extrinsics -> contract -> call` with the right selector

Hi, after I got Flipper.json, I transformed the function selector into hex format. Then I opened the Substrate UI and used:

Extrinsics -> contract -> putCode
Extrinsics -> contract -> create
Extrinsics -> contract -> call

The first two steps went well, but at the 3rd step, even with the right selector, I still got ExtrinsicFailed.
I have asked in Riot and was told that there might be a problem with the extrinsic dispatch mechanism; I hope it will be fixed soon.

Implement method code gen

For a pdsl_lang specified contract that has non-message methods we currently do not generate code for them - we only generate code for the actual messages. This is a serious restriction and we really want to support this at a later point in time.

For example

contract! {
    struct Contract {
        value: storage::Value<u32>,
    }

    impl Contract {
        pub(external) fn add(&mut self, by: u32) {
            self.add_impl(by)
        }

        fn add_impl(&mut self, by: u32) {
            self.value += by
        }
    }
}

We currently generate code for pub(external) fn add but not for fn add_impl.

Unresolved Question

For pub(external) fn add we generate a method that has an env: &mut EnvHandler argument as second argument so that its body can make use of the env variable. It is currently an open question how we should handle internal methods. We could do the same, but this would make code gen seriously harder, since it would require semantic introspection of method implementations, mutating them as soon as they call another internal method. Currently we treat messages as only callable from external (remote) sources to counteract this problem. This is an argument in favor of not generating the second parameter.

Introduce language versioning

Currently the pdsl_lang contract! macro doesn't provide a way to specify which version of it a user wants to use. This doesn't matter at the moment (2019-03-13) since there is not a single published version so far. However, in the future we might want to have different versions that also might change syntax or semantics or other parts of the DSL without breaking everyone's smart contracts. To do this we can introduce a versioning scheme like in the example below:

contract! {
    #![version = "0.1"] // We specify here that we are using version 0.1 of the DSL

    struct MyContract;
    impl Deploy for MyContract { ... }
    impl MyContract { ... }
}

If we implement this versioning scheme before releasing the first public and stable version of the DSL we can mark it as mandatory so every smart contract is required to include it.

Further Work

Later we can add other inner attributes for the contract! macro in order to alter behaviour for other things.

The `contract!` macro recurses infinitely when trying to handle the Noop contract

Feeding the following code into the contract! macro from pdsl_lang will cause it to reach the recursion limit during compilation:

contract! {
    /// The contract that does nothing.
    ///
    /// # Note
    ///
    /// Can be deployed, cannot be called.
    struct Noop {}

    impl Deploy for Noop {
        /// Does nothing to initialize itself.
        fn deploy(&mut self) {}
    }

    /// Provides no way to call it as extrinsic.
    impl Noop {}
}

This is funny since it can handle much more complex code such as Erc20 but not this.

Edit: Apparently it happens as soon as the state struct Noop is empty.

Implement automatic message selector ABI

Problem

The messages! macro of pdsl_model (the Fleetwood abstractions) currently cannot automatically generate unique message selectors at compile-time, because Rust's current const_fn feature is very constrained and does not allow for many use cases. Note that we are not interested at all in any kind of operations done during runtime, since users of the smart contracts would have to pay lots of gas for that.

In spite of this the current messages! macro has a built-in message id parameter.

messages! {
	0 => Inc(by: u32); // This message gets selected for function selector '0'
	1 => Get() -> u32; // ... and this one for '1'
}

The advantage of this is that a user can simply choose their own message selectors and it should also be very easy to keep them unique - at least as long as the list of messages is short enough.
Another advantage is that this manual system is very transparent and allows for easy testing of smart contract execution.

As soon as Rust's const_fn feature matures and allows for more scenarios we can think about soft deprecating the manual message selector or just make it optional.

ABI Design

For automatic message selector generation, a message selector must be:

  • Unique over all messages that belong to the same state.
  • Unique over all messages with different signatures (name, params, ret-ty).
  • Independent of the ordering within the file.
  • Stable across compilations, chains and compilers.

When Rust is ready to implement this functionality at compile-time it should have the following structure - or a similar one.

For this we would take into consideration the following traits that have to be implemented for all states or messages respectively.

/// A message with an expected input type and output (result) type.
pub trait Message {
	/// The expected input type, also known as parameter types.
	type Input: parity_codec::Decode;
	/// The output of the message, also known as return type.
	type Output: parity_codec::Encode;
	/// The name of the message.
	///
	/// # Note
	///
	/// This must be a valid Rust identifier.
	const NAME: &'static [u8];
}

/// Types implementing this type can be used as contract state.
pub trait ContractState:
	AllocateUsing + Flush
{
	/// The name of the contract state.
	///
	/// # Note
	///
	/// - This must be a valid Rust identifier.
	/// - Normally this reflects the name of the contract.
	const NAME: &'static str;
}

For message selector generation we concatenate byte sequences together that we will hash with a stable (crypto) hashing algorithm.

For a message M with parameters P_n, 0 <= n < N and return type R that operates on state S the sequence will be the following:

S::NAME.as_bytes
~ 0xFF
~ M::NAME.as_bytes
forall P_n with n ∈ [0, N): ~ 0xFE ~ P_n.signature
~ 0xFD ~ R.signature

Where ~ is the concat operator for byte sequences and .signature yields a byte sequence representation of a type at compile-time, similar to Rust's type_name intrinsic.
We want to use these signatures instead of a raw transformation to the underlying types, since we want a distinction between (bool, u32) and types like Option<u32> that could be structured similarly internally. This allows us to abstract away from the internal data structure layout.

Note again that all of these computations must happen at compile-time.
None of this must ever run at smart contract execution time due to gas costs.

Examples

Given

state! {
    struct Adder {
        val: u32
    }
}

messages! {
    Inc(by: u32);
    Get() -> u32;
    /// Returns `true` if the stored value is less than `rhs`.
    LessThan(rhs: u32) -> bool;
}

We would receive the following byte sequences for messages Inc, Get and LessThan when being used on state Adder:

  • Inc(by: u32) canonicalized to Inc(u32)

    • Byte Sequence: 0x4164646572 ~ 0xFF ~ 0x496E63 ~ 0xFE ~ 0x753332
  • Get() -> u32

    • Byte Sequence: 0x4164646572 ~ 0xFF ~ 0x476574 ~ 0xFD ~ 0x753332
  • LessThan(rhs: u32) -> bool canonicalized to LessThan(u32) -> bool

    • Byte Sequence: 0x4164646572 ~ 0xFF ~ 0x4C6573735468616E ~ 0xFE ~ 0x753332 ~ 0xFD ~ 0x626F6F6C
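The concatenation scheme can be checked with a small sketch, approximating `.signature` by the plain canonical type name (the function name and signature are illustrative, not the proposed API). Hashing the resulting bytes with a stable crypto hash would then yield the final selector:

```rust
/// Builds the pre-hash byte sequence for a message selector:
/// state name, 0xFF, message name, then 0xFE + each parameter
/// signature and finally 0xFD + the return-type signature (if any).
fn selector_bytes(
    state: &str,
    message: &str,
    params: &[&str],
    ret: Option<&str>,
) -> Vec<u8> {
    let mut bytes = Vec::new();
    bytes.extend_from_slice(state.as_bytes());
    bytes.push(0xFF);
    bytes.extend_from_slice(message.as_bytes());
    for param in params {
        bytes.push(0xFE);
        bytes.extend_from_slice(param.as_bytes());
    }
    if let Some(ret) = ret {
        bytes.push(0xFD);
        bytes.extend_from_slice(ret.as_bytes());
    }
    bytes
}
```

The distinct sentinel bytes (0xFF/0xFE/0xFD) keep the encoding unambiguous, so e.g. a message with one parameter and no return type can never collide with a parameterless message that returns the same type.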

Design medium-level abstractions for smart contracts - Fleetwood™

Regarding https://github.com/paritytech/fleetwood.

Description

The current pDSL has a low-level abstraction called pdsl_core that offers core abstractions forming only a very thin layer on top of the foundations built for SRML smart contracts.

Then there are plans for a very high-level eDSL that is going to be based on the Rust programming language and offers all the niceties that users might expect from a language specific to the domain of smart contracts (a DSL).

Fleetwood

Fleetwood on the other hand offers what I would call a medium abstraction.
It is neither a real domain-specific language nor does it force its users to work on the bare-metal foundations.

In essence Fleetwood provides data structures to virtually represent the structure of smart contracts.
This can be seen as an abstract representation that could be further utilized by users or systems.

About

This issue tracks progress and ideas around having a medium abstraction layer similar to https://github.com/paritytech/fleetwood for pDSL. It also tracks ideas about how we could eventually build an upcoming eDSL that is planned on top of it instead of the bare metal low-level abstractions.

Future Goal

So with this medium level abstraction built into the pDSL framework we would have the following structure
of working with smart contracts:

  • Low-Level
    • pDSL's core (aka pdsl_core)
      • Provides bare metal access to the foundational abstractions.
      • Users can do unsafe things that will probably result in malicious contracts.
  • Medium-Level
    • fleetsteel (aka Fleetwood style abstractions)
      • Provides a standard model to operate with smart contracts.
      • Just enough abstraction over the bare metal to protect users from most stupid mistakes.
      • Users disliking the eDSL could invent their own version of it based on this medium level abstraction.
  • High-Level
    • pDSL's eDSL
      • Provides the highest level abstractions and a Rust based DSL
      • Users have the least freedom but should profit from the most secure environment to write Wasm smart contracts

Implement flush system

Storage Flushing

Currently higher level abstractions that operate on the contract storage like SyncCell, SyncChunk as well as storage::Vec and storage::HashMap have an image of the storage in memory and keep the storage in sync for every write operation. So the image that is kept in memory is only utilized for read operations.

Rational

The current system of only using the cached storage image is to allow for external contract executions. Executing an external contract means switching context, which requires the currently executing contract to write back any storage modifications. That's why it is important to always keep the storage in sync with the memory, but not vice versa.

Other Work

Fleetwood has similar problems and also keeps a storage image in memory.
However, in Fleetwood write operations do not directly write back to storage every time.
Instead there is a flush operation to do the job collectively.

The advantage is that this avoids even more storage accesses.
The obvious downside is that it interferes with external contract execution, since flushing of contract storage then has to be coordinated, which is currently missing from Fleetwood.

Combining Approaches

To restate our goals:

  • We want to avoid as many storage accesses as possible.
  • We want to be able to call external contracts during contract execution.

To achieve this we should implement a flushing system in pDSL as well.
However, we have to link it to the system used to call external contracts somehow.
This could be done by including the external-contract-call system into the environment.

Ideas are welcome!
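The caching-plus-flush idea in miniature, as a hypothetical sketch: a cell that serves reads from its in-memory image, defers write-back until an explicit flush, and would be flushed right before any external contract call. Names and the `HashMap` stand-in for contract storage are assumptions, not the real API.

```rust
use std::collections::HashMap;

/// Stand-in for the contract key-value storage.
type Storage = HashMap<u32, i32>;

/// A cached cell that defers write-back until `flush`.
struct SyncCell {
    key: u32,
    cached: i32,
    dirty: bool,
}

impl SyncCell {
    fn load(storage: &Storage, key: u32) -> Self {
        SyncCell {
            key,
            cached: *storage.get(&key).unwrap_or(&0),
            dirty: false,
        }
    }

    /// Reads hit only the in-memory image.
    fn get(&self) -> i32 {
        self.cached
    }

    /// Writes mutate the image and mark it dirty - no storage access yet.
    fn set(&mut self, val: i32) {
        self.cached = val;
        self.dirty = true;
    }

    /// Writes the image back; must run before any external contract call
    /// so the callee observes a consistent storage state.
    fn flush(&mut self, storage: &mut Storage) {
        if self.dirty {
            storage.insert(self.key, self.cached);
            self.dirty = false;
        }
    }
}
```

The coordination problem from the issue is then exactly the question of who calls `flush` on all dirty cells when the environment is about to dispatch an external call.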

Get contract balance and transfer token to another account / contract

Needs ways to:

  • get the contract's current balance
  • transfer balance to another account
  • transfer balance to another contract and call it
  • ? transfer balance to another contract without calling it?

Relates to #30, as I think currently the only way to transfer balance from a contract to another contract / account is to use ext_call

Depends on #4 because the contract has to know the type of Balance

Find a decent allocation and initialization scheme for storage types

The Problem

Storage types such as storage::Value can be used and created very differently.
Their wrapped inner value needs to be initialized somehow; the exact procedure depends on the usage scenario.

Usage Scenarios

There are 3 common scenarios.

  • Deployment: The contract is being deployed and has a storage::Value. Upon deploy the contract allocates the storage::Value on the correct location in the contract storage using allocate_using and the bump allocator. Afterwards it needs to initialize the allocated storage::Value to the value that the contract expects using storage::Value's OnDeploy implementation. Most often users simply want the default value for this, e.g. 0 for storage::Value<i32>. Deployment requires allocation and initialization.

  • Call: The contract is being called and has a storage::Value. As in deploy it needs to allocate it using the bump allocator. However, since it has already been initialized in the contract's deploy stage no further steps need to be taken and the storage::Value can be used safely.

  • Dynamic Creation: Upon contract call or even upon deploy the contract might have the need to dynamically instantiate a storage::Value on the storage using the more complex CellAndChunkAllocator. For this the storage::Value not only needs to be allocated but also directly initialized using OnDeploy.

Summary

Scenario   | Requires Allocation | Requires Initialization
-----------|---------------------|------------------------
Deployment | Yes: Static         | Yes
Call       | Yes: Static         | No
Dynamic    | Yes: Dynamic        | Yes

Solution

Ideally we have

struct Uninitialized<T>(T);
auto trait CanUseAfterAlloc {}
trait AllocateUsing {
    fn allocate_using(alloc: &mut Allocator) -> Uninitialized<Self>; // T: !CanUseAfterAlloc
    fn allocate_using(alloc: &mut Allocator) -> Self; // T: CanUseAfterAlloc
}
trait OnDeploy {
    fn on_deploy(&mut self, args: Self::Args);
}

Where CanUseAfterAlloc is not implemented if Self cannot be safely used after just allocation, such as storage::Value. Note that for example storage::Vec can be used after allocation depending on the exact implementation.
Also we need helper traits and routines to make life easier for the Deployment and Dynamic stages where we always need to allocate and initialize. Ideally this should be done in a single step.

Is possible to have helper/private method implemented outside `contract` macro?

Last time I tried, VSCode and CLion had trouble debugging code inside the macro. I think it is because the code in the macro can get modified in such a way that it confuses the IDE.

So I want to have most of the logic outside the macro and only the interface and delegation calls inside it.

I think it is already possible, just want to make sure this point is considered in the design of the contract syntax.

Provide environmental types (Balance, Gas, AccountId, ..)

Environmental Types

Currently pdsl_core does not provide environmental types, for example what type an address has.

Having this information is very important for contract writers.
However, there are certain limitations and problems connected with making those environmental type definitions generic and chain agnostic.

We do want this information as part of pdsl_core or similar.

This issue is about finding a proper, user friendly, efficient and possibly chain agnostic design for this feature.

Design 1: Bound to Environment

An implementation of pdsl_core::env::Env could be required to provide this information.
For this the Env trait should be extended for those type definitions or require an implementation of another trait that provides them.

We use this issue also to track which type definitions we initially want to provide for users.

The downside to this approach is that making the types bound to an environment implementation would restrict pDSL to be used only from certain predefined environments. This should be sufficient as a start, since we could for example provide an environment for testing purposes, for SRML contracts on substrate, for pwasm and for ewasm just to name a few.

Make generated contracts testable if #[cfg(test)] is set

The pdsl_lang contract! macro currently doesn't make a distinction between compiling for #[cfg(test)] or not. However, the currently generated code is not very testable since it returns impl pdsl_model::Contract in the instantiate function. To make it testable we need to return impl TestableContract<DeployArgs = #deploy_args> instead. This allows users to write simple test cases as in https://github.com/Robbepop/pdsl/blob/master/model/tests/incrementer.rs#L55.

Roadmap and timeline

Is there any roadmap and timeline for this project?
I want to know what is the end goal for this project, what features are going to be included in the 1.0 version, and when it is likely to happen (Q2 or Q3?)

Implement infrastructure for smart contract to call back into runtime

Currently smart contracts are pretty isolated.
Wearing thick gloves, one can call other remote smart contracts and even call back into the runtime; however, there is currently no support from the pDSL for doing that.

What we would like is a clean approach in calling into the runtime from the smart contract side.
This can be done by using ext_dispatch_call from the SRML contracts runtime module. Its documentation is currently unclear (at least to me) ‒ further investigation is needed.

Also we maybe need some infrastructure around it.

  • Find out how ext_dispatch_call is supposed to be called from a smart contract.
  • Implement a rudimentary ext_dispatch_call in pdsl_core
  • Implement ext_dispatch_call in pdsl_model
  • Optionally provide an additional helper framework to make the user experience around ext_dispatch_call as good as possible for the current state of pDSL.
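One way to keep contract logic testable while this infrastructure lands is to hide the host call behind a trait, so off-chain tests can use a mock. This is a sketch under assumptions: the trait, the mock, and the commented-out extern signature are hypothetical, not the real SRML interface:

```rust
// Hypothetical sketch: abstract dispatching behind a trait so the
// on-chain implementation can forward to the ext_dispatch_call host
// function while tests use a recording mock.

pub trait DispatchEnv {
    /// Dispatch an encoded runtime call.
    fn dispatch_call(&mut self, encoded_call: &[u8]);
}

// On-chain, the implementation would forward to something like:
//   extern "C" { fn ext_dispatch_call(ptr: u32, len: u32); }
// (signature assumed; check the SRML contracts module for the real one).

/// Off-chain mock that simply records every dispatched call.
pub struct MockRuntime {
    pub dispatched: Vec<Vec<u8>>,
}

impl DispatchEnv for MockRuntime {
    fn dispatch_call(&mut self, encoded_call: &[u8]) {
        self.dispatched.push(encoded_call.to_vec());
    }
}

fn main() {
    let mut env = MockRuntime { dispatched: Vec::new() };
    env.dispatch_call(&[0x00, 0x01]); // e.g. some encoded runtime call
    assert_eq!(env.dispatched.len(), 1);
    assert_eq!(env.dispatched[0], vec![0x00, 0x01]);
}
```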

Design & implement infrastructure for calling remote smart contracts

Currently there is only an extremely bare-metal and unsafe way of calling remote smart contracts.
People will basically not use this feature without out-of-the-box support from pdsl_lang or the layers below.

First we need to find the perfect layer to support calling remote contracts.

  • The core layer could provide some utilities to build up call-data but the user still has to provide all the necessary information about the API of the remote smart contract.

  • The model layer could provide further safety features, such as automatic flushing of the contract state before calling a remote smart contract, since such a call could possibly lead to storage mutation and thus re-entrancy attacks.

  • The lang layer could provide automatic API generation in the form of JSON files, and could also consume the API files of the smart contracts a contract depends on, to make calling remote smart contracts nearly as simple as calling just another internal function.
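The core-layer utility mentioned above could be a simple call-data builder. The following is a minimal sketch under assumptions: the 4-byte selector and the little-endian argument encoding are illustrative, not the actual pdsl encoding:

```rust
// Hypothetical core-layer utility: build raw call data for a remote
// contract as selector bytes followed by encoded arguments.

struct CallDataBuilder {
    data: Vec<u8>,
}

impl CallDataBuilder {
    /// Start the call data with a (assumed) 4-byte message selector.
    fn new(selector: [u8; 4]) -> Self {
        CallDataBuilder { data: selector.to_vec() }
    }

    /// Append a u32 argument, little-endian (as SCALE would encode it).
    fn push_u32(mut self, arg: u32) -> Self {
        self.data.extend_from_slice(&arg.to_le_bytes());
        self
    }

    fn build(self) -> Vec<u8> {
        self.data
    }
}

fn main() {
    let call = CallDataBuilder::new([0xDE, 0xAD, 0xBE, 0xEF])
        .push_u32(42)
        .build();
    assert_eq!(call.len(), 8);
    assert_eq!(call[4..], 42u32.to_le_bytes());
}
```

With only this layer, the user still has to know the remote contract's selectors and argument types; the model and lang layers would add the safety and ergonomics on top.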

Add PresetAllocator - The Prellocator

Problem

At the moment the simplest allocator is the BumpAllocator (formerly known as ForwardAllocator), which simply advances its current key by whatever sizes it needs to allocate. The BumpAllocator is mainly meant to allocate storage for fixed entities, like the ones that are static members of the smart contract storage.

Efficiency

Since a Key in SRML smart contracts is a 256-bit hash, an addition is not necessarily a simple operation.
Especially when there are many fields, the add subroutine for Key has to be performed several times in order to provide a deterministic, fixed allocation scheme for all fixed entities.
The problem is that this has to be redone for every invocation of the smart contract via CALL.
To circumvent this problem we can use the new PresetAllocator ‒ the Prellocator. Technically it looks similar or equal to this:

struct PresetAllocator {
    /// Ordered iterator over precomputed (allocation size, key) pairs.
    keys: core::slice::Iter<'static, (u32, Key)>,
}

It just consists of an ordered iterator over allocation sizes and their associated Keys.
To make it work as-is, the pdsl_derive crate, once done, needs to embed all keys into the executable. This might increase the size of the executable significantly if there are many entities, so there is a trade-off between computation and binary size.

The important thing to remember is that it will always yield the same allocation sequence for every invocation.

The allocation sizes are there just as an extra check, so that the allocator can verify that the allocation scheme is approximately valid. Of course this does not provide real protection against invalidations of the allocation sequence.

Trade-Offs

Once this is implemented we need to benchmark whether the extra binary size needed to store all the keys is actually worth the computation efficiency gains. If not, we might revert this, or let the smart contract writer decide. In any case both allocators should always yield the same allocations, so they are interchangeable.
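The scheme above can be made concrete with a small runnable sketch. Here `Key` is shortened to a `u32` for brevity (the real type is a 256-bit hash), and the size check from the issue is an `assert`; both choices are illustrative:

```rust
// Runnable sketch of the PresetAllocator: it replays a precomputed,
// ordered list of (size, key) pairs instead of recomputing 256-bit
// key additions on every CALL.

type Key = u32; // stand-in for the real 256-bit hash

struct PresetAllocator {
    keys: std::vec::IntoIter<(u32, Key)>,
}

impl PresetAllocator {
    fn new(preset: Vec<(u32, Key)>) -> Self {
        PresetAllocator { keys: preset.into_iter() }
    }

    /// Yield the next preset key, checking that the requested size
    /// matches the recorded allocation scheme.
    fn alloc(&mut self, size: u32) -> Key {
        let (expected_size, key) =
            self.keys.next().expect("allocation scheme exhausted");
        assert_eq!(size, expected_size, "allocation scheme mismatch");
        key
    }
}

fn main() {
    // The same preset always yields the same allocation sequence.
    let mut alloc = PresetAllocator::new(vec![(1, 42), (1, 1337)]);
    assert_eq!(alloc.alloc(1), 42);
    assert_eq!(alloc.alloc(1), 1337);
}
```

A BumpAllocator given the same entities in the same order must produce exactly this key sequence, which is what makes the two interchangeable.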

Storage layout description file generation

It is possible to access (read) contract storage from outside of contract execution.
This is especially useful since the current SRML contracts module does not allow returning data to non-contract callers.
However, when using the pDSL, the allocation of storage items is hidden behind abstractions and not very transparent to the outside.
To counteract this, pdsl_lang could generate a storage layout description file that users can consult to know where they can access which data.

For a contract with a storage state like the one shown below, a potential storage layout file could look like the following:

contract! {
    struct Vec2 {
        x: storage::Value<i32>,
        y: storage::Value<i32>,
    }
    ...
}

The storage layout description file:

{
    "name": "Vec2",
    "fields": [
        {
            "name": "x",
            "key": 42
        },
        {
            "name": "y",
            "key": 1337
        }
    ]
}

Note that the above storage layout description file is just an example and doesn't demonstrate any true layout.

Also note that the above storage layout description file might be too simplified, since we are interested in a very detailed look into the data structures used.
For example, describing the storage layout of a storage::Value requires just one key, while describing a storage::Vec might require more, in particular a key representing an entire array of elements.

With respect to the API description file, we do not want to merge both files, since they serve very different purposes and target very different clients.

Complex Types

For more complex types, such as storage::Vec, which has a len and a data field, we could use a recursive design for fields like the one below:

Example contract:

contract! {
    struct UserBase {
        users: storage::Vec<User>,
    }
    ...
}

Storage layout description file:

{
    "name": "UserBase",
    "fields": [
        {
            "name": "users",
            "fields": [
                {
                    "name": "len",
                    "key": 1
                },
                {
                    "name": "data",
                    "key": 1000
                }
            ]
        }
    ]
}

Note that implementing this can quickly become tricky. We could introduce a new trait that the pDSL automatically and recursively implements (via derive macro) for all storage data structures, and then pseudo-allocate these data structures during compilation of the contract while collecting the resulting keys. (Vague ‒ needs more explanation!)

Nested single-key structures

For singly nested key structures, such as a storage::Value that contains a single storage::SyncCell, which itself contains a single storage::TypedCell, etc., we want to flatten the layout structure. So instead of displaying a storage::Value by its exact internal representation, we directly use its common unique key, as can be seen in the first example.

Data Structure

The following simple data structure could work as support for a simple and raw underlying implementation:

struct Key([u8; 32]);

struct Layout {
    name: String,
    layout: StorageLayout,
}

enum StorageLayout {
    Contiguous(ContiguousLayout),
    Group(GroupLayout),
}

struct ContiguousLayout {
    key: Key,
    amount: u32,
}

struct GroupLayout {
    group: Vec<Layout>,
}
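These data structures can be exercised directly; the following makes them runnable and builds the layout for the earlier Vec2 example. The helper `single` and the key values are illustrative only:

```rust
// The layout data structures from above, made runnable, describing the
// Vec2 example: two single-key fields grouped under one name.

struct Key([u8; 32]);

struct Layout {
    name: String,
    layout: StorageLayout,
}

enum StorageLayout {
    Contiguous(ContiguousLayout),
    Group(GroupLayout),
}

struct ContiguousLayout {
    key: Key,
    amount: u32,
}

struct GroupLayout {
    group: Vec<Layout>,
}

/// Build a single-key layout entry (hypothetical helper; the key is
/// derived from one byte purely for illustration).
fn single(name: &str, first_byte: u8) -> Layout {
    let mut raw = [0u8; 32];
    raw[0] = first_byte;
    Layout {
        name: name.to_string(),
        layout: StorageLayout::Contiguous(ContiguousLayout { key: Key(raw), amount: 1 }),
    }
}

fn main() {
    let vec2 = Layout {
        name: "Vec2".to_string(),
        layout: StorageLayout::Group(GroupLayout {
            group: vec![single("x", 42), single("y", 137)],
        }),
    };
    if let StorageLayout::Group(g) = &vec2.layout {
        assert_eq!(g.group.len(), 2);
        assert_eq!(g.group[0].name, "x");
        assert_eq!(g.group[1].name, "y");
    } else {
        panic!("Vec2 should be a group layout");
    }
}
```

Serializing this tree to the JSON shape shown earlier would then be a straightforward recursive walk over `StorageLayout`.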

Errors when compiling with generate-api-description crate feature

After I added

generate-api-description = [
    "pdsl_lang/generate-api-description",
]

under [features] in Cargo.toml
and then ran:

cargo build --release --features=generate-api-description --verbose

I got the error below:

error: linking with `cc` failed: exit code: 1
  |
  = note: "cc" "-m64" "-L" "/Users/hammer/.rustup/toolchains/nightly-2019-03-10-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib" "/Users/hammer/Desktop/flipper/target/release/deps/flipper.flipper.8f0qzq8t-cgu.3.rcgu.o" "-o" "/Users/hammer/Desktop/flipper/target/release/deps/libflipper.dylib" "-Wl,-exported_symbols_list,/var/folders/fv/qyvr1f6s5rl_c3d6r60f5ltm0000gq/T/rustc4osvKZ/list" "-Wl,-dead_strip" "-nodefaultlibs" "-L" "/Users/hammer/Desktop/flipper/target/release/deps" "-L" "/Users/hammer/.rustup/toolchains/nightly-2019-03-10-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib" "/Users/hammer/.rustup/toolchains/nightly-2019-03-10-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/libcompiler_builtins-43b88c2e4123a11d.rlib" "-lc" "-lm" "-dynamiclib" "-Wl,-dylib"
  = note: Undefined symbols for architecture x86_64:
            "_ext_get_storage", referenced from:
                _$LT$pdsl_core..storage..cell..sync_cell..SyncCell$LT$T$GT$$GT$::get::h8bffa66abf59395b in flipper.flipper.8f0qzq8t-cgu.3.rcgu.o
            "_ext_scratch_size", referenced from:
                _$LT$pdsl_core..storage..cell..sync_cell..SyncCell$LT$T$GT$$GT$::get::h8bffa66abf59395b in flipper.flipper.8f0qzq8t-cgu.3.rcgu.o
            "_ext_scratch_copy", referenced from:
                _$LT$pdsl_core..storage..cell..sync_cell..SyncCell$LT$T$GT$$GT$::get::h8bffa66abf59395b in flipper.flipper.8f0qzq8t-cgu.3.rcgu.o
            "_ext_set_storage", referenced from:
                _$LT$pdsl_core..storage..cell..sync_cell..SyncCell$LT$T$GT$$u20$as$u20$pdsl_core..storage..flush..Flush$GT$::flush::h1ae42a8a1c3162d8 in flipper.flipper.8f0qzq8t-cgu.3.rcgu.o
            "_ext_input_size", referenced from:
                pdsl_core::env::api::input::h057e5735ba04f2b8 in flipper.flipper.8f0qzq8t-cgu.3.rcgu.o
            "_ext_return", referenced from:
                _$LT$pdsl_core..env..srml..srml_only..impls..SrmlEnv$LT$T$GT$$u20$as$u20$pdsl_core..env..traits..Env$GT$::return::h436b4ea078387302 in flipper.flipper.8f0qzq8t-cgu.3.rcgu.o
            "_ext_input_copy", referenced from:
                pdsl_core::env::api::input::h057e5735ba04f2b8 in flipper.flipper.8f0qzq8t-cgu.3.rcgu.o
          ld: symbol(s) not found for architecture x86_64
          clang: error: linker command failed with exit code 1 (use -v to see invocation)


error: aborting due to previous error

error: Could not compile `flipper`.

Caused by:
  process didn't exit successfully: `rustc --edition=2018 --crate-name flipper src/lib.rs --color always --crate-type cdylib --emit=dep-info,link -C opt-level=z -C panic=abort -C lto --cfg 'feature="default"' --cfg 'feature="generate-api-description"' --cfg 'feature="pdsl_lang"' -C metadata=0e77105c2e752b9a --out-dir /Users/hammer/Desktop/flipper/target/release/deps -L dependency=/Users/hammer/Desktop/flipper/target/release/deps --extern parity_codec=/Users/hammer/Desktop/flipper/target/release/deps/libparity_codec-c20356003e85c6d5.rlib --extern pdsl_core=/Users/hammer/Desktop/flipper/target/release/deps/libpdsl_core-3cdfb4d01cca5f5b.rlib --extern pdsl_lang=/Users/hammer/Desktop/flipper/target/release/deps/libpdsl_lang-4faf84eb9a4c8b6e.dylib --extern pdsl_model=/Users/hammer/Desktop/flipper/target/release/deps/libpdsl_model-10235a5d7017a8d8.rlib` (exit code: 1)
