
rmrk-substrate's Introduction

RMRK Substrate

No Maintenance Intended

Warning: No stability and security guarantees. Not production ready.

Additional documentation: https://rmrk-team.github.io/rmrk-substrate

Rust Setup

First, complete the basic Rust setup instructions.

Run

Use Rust's native cargo command to build and launch the template node:

cargo run --release -- --dev --tmp

Build

The cargo run command will perform an initial build. Use the following command to build the node without launching it:

cargo build --release

RPC

The RMRK RPC description can be found in the RPC docs.

The RPC is declared in the rmrk-rpc crate.

The runtime implements the RPC API in runtime/src/lib.rs inside the impl_runtime_apis macro. The node exposes the RPC interface described in rpc.md. The RPC interface implementation passes each RPC call to the RMRK runtime API. The RPC interface declaration and implementation can be found in node/src/rpc.rs.

Integration Tests

The Integration Tests are located in the tests/src directory. They use the RPC interface to fetch data from the node.

  • All transactions used in the tests are located in tests/src/util/tx.ts.
  • All "fetch" functions like getNft are located in tests/src/util/fetch.ts. Here you can see an example of the RPC interface usage.
  • All "helper" functions are located in tests/src/util/helpers.ts.
  • Type augmentation is located in tests/src/interfaces; it is autogenerated and quite large.

How to start the tests
# (In the rmrk-substrate directory)

# Run the node
cargo run --release -- --dev --tmp

# (In another terminal)
# Start the tests
cd tests && yarn test

Instead of running all the tests at once, you can run a separate test if you like. For instance, you can type yarn testSendNft to run the tests/src/sendNft.test.ts test.

All the tests have the following name pattern: <test-name>.test.ts. To run a separate test you can type the following: yarn test<test-name>

Embedded Docs

Once the project has been built, the following command can be used to explore all parameters and subcommands:

./target/release/rmrk-substrate -h

Run

The provided cargo run command will launch a temporary node whose state will be discarded after you terminate the process. After the project has been built, there are other ways to launch the node.

Single-Node Development Chain

This command will start the single-node development chain with persistent state:

./target/release/rmrk-substrate --dev

Purge the development chain's state:

./target/release/rmrk-substrate purge-chain --dev

Start the development chain with detailed logging:

RUST_BACKTRACE=1 ./target/release/rmrk-substrate -ldebug --dev

Connect with Polkadot-JS Apps Front-end

Once the node template is running locally, you can connect to it with the Polkadot-JS Apps front-end to interact with your chain.

Node

A blockchain node is an application that allows users to participate in a blockchain network. Substrate-based blockchain nodes expose a number of capabilities:

  • Networking: Substrate nodes use the libp2p networking stack to allow the nodes in the network to communicate with one another.
  • Consensus: Blockchains must have a way to come to consensus on the state of the network. Substrate makes it possible to supply custom consensus engines and also ships with several consensus mechanisms that have been built on top of Web3 Foundation research.
  • RPC Server: A remote procedure call (RPC) server is used to interact with Substrate nodes.

There are several files in the node directory - take special note of the following:

  • chain_spec.rs: A chain specification is a source code file that defines a Substrate chain's initial (genesis) state. Chain specifications are useful for development and testing, and critical when architecting the launch of a production chain. Take note of the development_config and testnet_genesis functions, which are used to define the genesis state for the local development chain configuration. These functions identify some well-known accounts and use them to configure the blockchain's initial state.
  • service.rs: This file defines the node implementation. Take note of the libraries that this file imports and the names of the functions it invokes. In particular, there are references to consensus-related topics, such as the longest chain rule, the Aura block authoring mechanism and the GRANDPA finality gadget.

After the node has been built, refer to the embedded documentation to learn more about the capabilities and configuration parameters that it exposes:

./target/release/rmrk-substrate --help

Runtime

In Substrate, the terms "runtime" and "state transition function" are analogous - they refer to the core logic of the blockchain that is responsible for validating blocks and executing the state changes they define. The Substrate project in this repository uses the FRAME framework to construct a blockchain runtime. FRAME allows runtime developers to declare domain-specific logic in modules called "pallets". At the heart of FRAME is a helpful macro language that makes it easy to create pallets and flexibly compose them to create blockchains that can address a variety of needs.

Review the FRAME runtime implementation included in this template and note the following:

  • This file configures several pallets to include in the runtime. Each pallet configuration is defined by a code block that begins with impl $PALLET_NAME::Config for Runtime.
  • The pallets are composed into a single runtime by way of the construct_runtime! macro, which is part of the core FRAME Support library.

Pallets

The runtime in this project is constructed using many FRAME pallets that ship with the core Substrate repository and a template pallet that is defined in the pallets directory.

A FRAME pallet is composed of a number of blockchain primitives:

  • Storage: FRAME defines a rich set of powerful storage abstractions that makes it easy to use Substrate's efficient key-value database to manage the evolving state of a blockchain.
  • Dispatchables: FRAME pallets define special types of functions that can be invoked (dispatched) from outside of the runtime in order to update its state.
  • Events: Substrate uses events and errors to notify users of important changes in the runtime.
  • Errors: When a dispatchable fails, it returns an error.
  • Config: The Config configuration interface is used to define the types and parameters upon which a FRAME pallet depends.
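The interplay of these primitives can be illustrated with a plain-Rust analogy. This is not actual FRAME code; all names here (store_value, ValueTooLarge, MAX_VALUE) are made up for illustration:

```rust
use std::collections::HashMap;

// A toy stand-in for a pallet: storage, a dispatchable, an event, an error.
#[derive(Debug, PartialEq)]
enum Event {
    ValueStored { who: u32, value: u64 },
}

#[derive(Debug, PartialEq)]
enum Error {
    ValueTooLarge,
}

struct Pallet {
    storage: HashMap<u32, u64>, // "Storage": key-value state
    events: Vec<Event>,         // "Events": notifications of state changes
}

impl Pallet {
    // "Dispatchable": a function invoked from outside to update state.
    fn store_value(&mut self, who: u32, value: u64) -> Result<(), Error> {
        // A "Config"-style parameter, hard-coded for this sketch.
        const MAX_VALUE: u64 = 1_000;
        if value > MAX_VALUE {
            return Err(Error::ValueTooLarge); // "Errors": dispatch failure
        }
        self.storage.insert(who, value);
        self.events.push(Event::ValueStored { who, value });
        Ok(())
    }
}

fn main() {
    let mut pallet = Pallet { storage: HashMap::new(), events: Vec::new() };
    assert!(pallet.store_value(1, 42).is_ok());
    assert_eq!(pallet.storage.get(&1), Some(&42));
    assert_eq!(pallet.store_value(2, 2_000), Err(Error::ValueTooLarge));
}
```

In a real pallet these pieces are declared with the `#[pallet::storage]`, `#[pallet::call]`, `#[pallet::event]`, and `#[pallet::error]` attribute macros rather than plain structs.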

rmrk-substrate's People

Contributors

baidang201, bmacer, fahrrader, h3lpkey, h4x3rotab, hashwarlock, ilionic, jasl, kaiserkarel, maar-io, pangolierchick, sudipghimire533, supremalex, szegoo, thelastpharoah, yornaath


rmrk-substrate's Issues

Consider pallet owned account_id

by h4x3rotab:

let (collection_id, nft_id) = Self::nft_mint(
    sender.clone().unwrap_or_default(),
    owner,
    collection_id,
    recipient,
    royalty,
    metadata,
)?;

What happens when sender == None? If it means the origin is some admin (ProtocolOrigin), I suggest using the pallet's account id instead of the default account id (0x0), which is going to be deprecated soon due to a security concern.

Example of a pallet owned account_id:
https://github.com/paritytech/substrate/blob/d18705ee7b5b654d3947558fea6dbd0701ac3ecc/frame/society/src/lib.rs#L1733-L1735

should we fail on ACCEPT of a non-pending resource?

For an otherwise OK operation, should we fail on an already-ACCEPTed resource?
case 1: Alice RESADDs on her own NFT, then ACCEPTs (unnecessary).
case 2: Bob sends to Alice's NFT, Alice ACCEPTs, then ACCEPTs again.
Assuming we want to add this check; otherwise we'll be sending redundant ResourceAccepted events.
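A minimal sketch of the proposed check, assuming resources carry a pending flag (the names Resource, accept_resource, and ResourceNotPending are hypothetical, not the pallet's actual API):

```rust
// Hypothetical model: accepting a resource that is not pending fails
// instead of emitting a redundant ResourceAccepted event.
#[derive(Debug, PartialEq)]
enum AcceptError {
    ResourceNotPending,
}

struct Resource {
    pending: bool,
}

fn accept_resource(res: &mut Resource) -> Result<(), AcceptError> {
    if !res.pending {
        // Covers both cases above: a self-added (auto-accepted) resource,
        // and a second ACCEPT on an already-accepted resource.
        return Err(AcceptError::ResourceNotPending);
    }
    res.pending = false; // the real pallet would emit ResourceAccepted here
    Ok(())
}

fn main() {
    let mut res = Resource { pending: true };
    assert_eq!(accept_resource(&mut res), Ok(()));
    assert_eq!(accept_resource(&mut res), Err(AcceptError::ResourceNotPending));
}
```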

Question: Adding rmrk-core as an external dependency

I'm starting on Rust & Substrate development and I'm interested in adding the rmrk-core pallet as an external dependency to a fresh substrate-node-template project, but I'm getting a bunch of errors.
This is what I've tried:

  • Adding the pallet on runtime/Cargo.toml:


[dependencies.pallet-rmrk-core]
default-features = false
git = 'https://github.com/rmrk-team/rmrk-substrate.git'
branch = 'main'
version = "0.0.1"

...

[features]
default = ["std"]
std = [
	#... Template dependencies
	"pallet-rmrk-core/std",
	
]
  • Configure the pallets on runtime/src/lib.rs

parameter_types! {
	pub const MaxRecursions: u32 = 10;
}

impl pallet_rmrk_core::Config for Runtime {
	type Event = Event;
	type ProtocolOrigin = EnsureRoot<AccountId>;
	type MaxRecursions = MaxRecursions;
}

And adding them on the construct_runtime! macro:

construct_runtime!(
	pub enum Runtime where
		Block = Block,
		NodeBlock = opaque::Block,
		UncheckedExtrinsic = UncheckedExtrinsic
	{
		/*--- snip ---*/
		RmrkCore: pallet_rmrk_core::{Pallet, Call, Event<T>, Storage},
	}
);

Executing cargo check in the terminal generates several (164 at the time) errors, but here are some that seem interesting:

error: duplicate lang item in crate `sp_io` (which `frame_support` depends on): `panic_impl`.
    |
    = note: the lang item is first defined in crate `sp_io` (which `frame_support` depends on)
    = note: first definition in `sp_io` loaded from ./substrate-node-template/target/debug/wbuild/node-template-runtime/target/wasm32-unknown-unknown/release/deps/libsp_io-d578f12f545fdf6e.rmeta
    = note: second definition in `sp_io` loaded from ./substrate-node-template/target/debug/wbuild/node-template-runtime/target/wasm32-unknown-unknown/release/deps/libsp_io-2c478c57ead6a958.rmeta

error[E0277]: the trait bound `Runtime: frame_system::pallet::Config` is not satisfied
     --> ./substrate-node-template/runtime/src/lib.rs:287:6
      |
  287 | impl pallet_uniques::Config for Runtime {
      |      ^^^^^^^^^^^^^^^^^^^^^^ the trait `frame_system::pallet::Config` is not implemented for `Runtime`
      |
  note: required by a bound in `pallet_uniques::Config`
     --> ./.cargo/git/checkouts/substrate-7e08433d4c370a21/1ca6b68/frame/uniques/src/lib.rs:69:37
      |
  69  |     pub trait Config<I: 'static = ()>: frame_system::Config {
      |                                        ^^^^^^^^^^^^^^^^^^^^ required by this bound in `pallet_uniques::Config`

  error[E0277]: the trait bound `Runtime: frame_system::pallet::Config` is not satisfied
     --> ./substrate-node-template/runtime/src/lib.rs:307:6
      |
  307 | impl pallet_rmrk_core::Config for Runtime {
      |      ^^^^^^^^^^^^^^^^^^^^^^^^ the trait `frame_system::pallet::Config` is not implemented for `Runtime`
      |
  note: required by a bound in `pallet_rmrk_core::Config`
     --> ./.cargo/git/checkouts/rmrk-substrate-4fe3068c99fc9e3a/b47d311/pallets/rmrk-core/src/lib.rs:53:20
      |
  53  |     pub trait Config: frame_system::Config + pallet_uniques::Config {
      |                       ^^^^^^^^^^^^^^^^^^^^ required by this bound in `pallet_rmrk_core::Config`

Am I doing something wrong when importing the pallet? I'll be happy to provide more information if required.
Thanks in advance.

Parachain(s) integration question

This is a question about the rmrk-core pallet: how heavily is the RMRK frontend code married to calls like mint_nft, send, add_resource, as well as to the architecture where nesting is a function of the same pallet as the NFT logic? For example, could we replace api.tx.rmrkCore.send with api.tx.uniques.transfer, and api.tx.rmrkCore.addResource with api.tx.nesting.add?

Parameter types considered harmful

Parameter types are the generic associated types that can be specified in the Config trait of each pallet. For example, CollectionId, NftId, and ResourceId in the rmrk-core pallet:

type CollectionId: Member
    + Parameter
    + Default
    + Copy
    + HasCompact
    + AtLeast32BitUnsigned
    + Into<Self::ClassId>;
type ProtocolOrigin: EnsureOrigin<Self::Origin>;
type NftId: Member
    + Parameter
    + Default
    + Copy
    + HasCompact
    + AtLeast32BitUnsigned
    + From<Self::InstanceId>
    + Into<Self::InstanceId>;
type ResourceId: Member + Parameter + Default + Copy + HasCompact + AtLeast32BitUnsigned;

This pattern is widely used in FRAME and ORML. It's very powerful; that's how Moonbeam makes the AccountId fully compatible with Ethereum. However, in other cases it's generally over-engineering. Substrate is often criticized for making the generic type system overly complicated.

The Balances pallet allows you to specify the balance type. Suppose we have two parachains, one with u128 and another with u32; how can they communicate over XCMP? It turns out that in XCM all assets have a fixed balance type, u128, so that the participants can understand each other.

If we follow this pattern, not only does it introduce code complexity, it can also result in ecosystem fragmentation. Imagine 10 parachains with 10 different instances of the RMRK pallets; it would be problematic when doing XCM. Even for Substrate, there's a light client called smoldot that largely reduced the generic type usage in the codebase.

Do we really need to allow the downstream to change the types? There are generally 4 kinds of types, and IMO they can be treated differently:

  1. Constants: nice to keep, because it's likely the downstream wants to customize it (e.g. DepositPerByte in pallet-uniques)
  2. Permission: nice to keep, for the same reason as constants (e.g. ProtocolOrigin in rmrk-core)
  3. Injected dependency: nice to keep (e.g. Currency used everywhere)
  4. Generic type: don't use until absolutely necessary

So my suggestion: define the concrete primitive types as much as possible.

What about parameter types introduced by the upstream?

Sometimes, we need to consume the outputs from an upstream pallet, or pass some arguments to it, but the type is parameterized. The type in the upstream is not determined at the pallet level; we don't know the concrete type before it's instantiated in a runtime.

E.g. the following code cannot build because we cannot assume BalanceOf<T> is u128:

Currency::transfer(to, 1000_u128)

There are two ways to solve the problem. If we are fine with only supporting Currency types with u128 as the balance type, we can further constrain the dependency by adding a where clause:

impl<T: Config> RMRKCore
where
  T: Config<Balance = u128>

Full example here: https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=3ca8ea5453ef407de15be579bdb7c45a
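The same idea can be shown self-contained, without FRAME; here Config, Pallet, and transfer_fixed are hypothetical stand-ins for the real traits:

```rust
// Sketch of the where-clause approach: the pallet logic only compiles for
// runtimes whose Balance associated type is exactly u128.
trait Config {
    type Balance;
}

struct Pallet<T: Config>(std::marker::PhantomData<T>);

impl<T> Pallet<T>
where
    T: Config<Balance = u128>, // constrain the upstream parameter type
{
    fn transfer_fixed(amount: u128) -> T::Balance {
        // T::Balance is known to be u128 here, so this typechecks.
        amount
    }
}

struct Runtime;
impl Config for Runtime {
    type Balance = u128;
}

fn main() {
    assert_eq!(Pallet::<Runtime>::transfer_fixed(1000), 1000u128);
}
```

A runtime that set `type Balance = u32` would simply fail to find `transfer_fixed`, which is the trade-off: the constraint is enforced at compile time, at the cost of excluding those runtimes.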

Another option is to inject a "converter": a struct, injected via the config, that promises to convert the generic type to a concrete type. This is widely used in XCM:

pub trait InstanceIdConverter<T: unique_pallet::Config> {
    fn to_u16(id: T::InstanceId) -> u16;
    fn from_u16(id: u16) -> T::InstanceId;
}
// ...
let id = T::InstanceIdConverter::from_u16(1u16);
let id_u16: u16 = T::InstanceIdConverter::to_u16(id);

Full example here: https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=b091cfeb5008feaba5b5f3a99b743359

Discussion: additional equip permissioning use cases

Need to think about adding the ability to equip your NFTs into other people’s NFTs. Lots of use cases, no easy solution without intricate permissioning.

  • Alice owns Parent
  • Bob owns Child 1 C1
  • Charlie owns Child 2 C2

Bob should be able to equip C1 into P1_C1 if approved, and Charlie should then be blocked from doing it. Alice should be able to kick out C1 to prevent slot-camping. Bob should be able to take C1 out at any time, unless special rules apply (how to define them?) like being unable to unequip once equipped, or being able to only equip after a while: think renting a slot, so someone can put up their billboard in your land.

This makes possible the following necessary concepts:

  • land (owned by RMRK or null-address to prevent administration and censorship) that equips an avatar (owned by anyone else). Based on size, land would have a number of avatar slots to prevent overcrowding in the engine. Once in a land, an avatar can freely move around in it.
  • people minting samples into the same music NFT owned by some platform which will then sell it

test TooManyRecursions behavior

fn nft_burn(
    collection_id: CollectionId,
    nft_id: NftId,
    max_recursions: u32,
) -> sp_std::result::Result<(CollectionId, NftId), DispatchError> {
    ensure!(max_recursions > 0, Error::<T>::TooManyRecursions);

Let's say max_recursions is 5 and we have a 10-generation NFT being burned. My belief is that the first 5 generations will successfully burn and the subsequent descendants will fail, but this needs to be tested.
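A toy model (hypothetical signature, not the pallet's) suggests a third possibility worth testing for: if the child burn's error propagates with `?`, the whole call fails rather than the first five generations burning, and in FRAME a failed extrinsic rolls back its state changes anyway. This still needs verifying against the real pallet:

```rust
// Hypothetical model of recursive burning with a recursion budget.
// `generations` is the number of descendants below the NFT being burned.
fn nft_burn(max_recursions: u32, generations: u32) -> Result<u32, &'static str> {
    if max_recursions == 0 {
        return Err("TooManyRecursions");
    }
    if generations == 0 {
        return Ok(1); // leaf NFT: burn just itself
    }
    // Burn the descendant chain first; an error here propagates up,
    // so the caller never observes a partial success.
    let burned_children = nft_burn(max_recursions - 1, generations - 1)?;
    Ok(burned_children + 1)
}

fn main() {
    // A 5-generation NFT within a budget of 10 recursions: fully burned.
    assert_eq!(nft_burn(10, 4), Ok(5));
    // A 10-generation NFT with a budget of 5: the whole burn errors out.
    assert_eq!(nft_burn(5, 9), Err("TooManyRecursions"));
}
```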

Core pallet design

Diagram


Full Screen Diagram

Calls

The external dispatchable calls, i.e. the methods a user can invoke by sending a transaction.

Extends the Uniques pallet.

  • mint_collection(origin, metadata: Vec, max: u16)
    • create a collection
  • mint_nft(origin, collection_id, author: T::AccountId, royalty: u8, metadata: Vec)
    • minting NFT inside a collection
    • author: Receiver of the royalty
    • royalty: Percentage reward from each trade for the author
    • create a base. catalogue of parts. It is not an NFT
  • burn_nft(origin, collection_id, nft_id)
    • burn an NFT
  • burn_collection(origin, collection_id)
    • burn a Collection
  • send(origin, collection_id, nft_id, dest)
    • transfer NFT from account A to account B
  • change_issuer(origin, collection_id, base_id, dest)
    • changing the issuer of a collection or base
  • set_property(collection_id, maybe_nft_id, key, value)
    • key and value of type BoundedVec<u8, T::KeyLimit>
    • set a custom value on an NFT. Similar to uniques set_attribute
  • lock(collection_id)
    • locking a collection

Multi resource calls.

  • add_resource(nft_id, resource_id)
    • add a new resource to an NFT as the collection issuer
  • accept(nft_id, resource_id)
    • accept the addition of a new resource to an existing NFT or addition of a child into a parent NFT.
  • set_priority(collection_id, nft_id)
    • set a different order of resource priority

Storages

Defines how to access on-chain storage

Collection

type Collection<T: Config> = StorageMap<
    _,
    Twox64Concat,
    T::CollectionId,
    CollectionDetails<T::AccountId>,
>;

NFT

StorageDoubleMap structure CollectionId -> NftId -> NFTDetails

type NFT<T: Config> = StorageDoubleMap<
    _,
    Twox64Concat,
    T::CollectionId,
    Twox64Concat,
    T::NftId,
    NFTDetails<T::AccountId>,
>;

Attribute

  1. Uniques based StorageNMap structure CollectionId -> Some(NftId) -> Key -> Value
type Attribute<T: Config<I>, I: 'static = ()> = StorageNMap<
    _,
    (
        NMapKey<Blake2_128Concat, T::ClassId>,
        NMapKey<Blake2_128Concat, Option<T::InstanceId>>,
        NMapKey<Blake2_128Concat, BoundedVec<u8, T::KeyLimit>>,
    ),
    (BoundedVec<u8, T::ValueLimit>, DepositBalanceOf<T, I>),
    OptionQuery,
>;

Resource

StorageDoubleMap structure ResourceId -> PartId -> ResourceDetail

type Resource<T: Config> = StorageDoubleMap<
    _,
    Twox64Concat,
    T::ResourceId,
    Twox64Concat,
    T::PartId,    
    ResourceDetail,
>;

Types

CollectionDetails

pub struct CollectionDetails<AccountId> {
    /// Can mint tokens.
    issuer: AccountId,
    max: u16,
    symbol: BoundedVec<u8, StringLimit>,
    metadata: BoundedVec<u8, StringLimit>
}

NFTDetails (WIP)

pub struct NFTDetails<AccountId> {
    owner: Option<NftId>,
    sn: u16,
    root_owner: AccountId,
    symbol: BoundedVec<u8, StringLimit>,
    metadata: BoundedVec<u8, StringLimit>,
    children: Option<Vec<NFTDetails>>, // nesting references. To discuss
    parent: Option<Vec<NFTDetails>>,
}

Lazy ownership tree based on pallet-unique's owner

pallet-unique ownership is used to trace the hierarchy of the NFTs #27

Add Locks Logic To Lock Listed NFTs

Currently there is a limitation that doesn't allow NFTs to be locked after they have been listed in the Market. Originally the idea was to implement this in the RMRK core pallet, but that leads to problems upstream in the Uniques pallet, which would have no knowledge of the locks mechanism defined in the RMRK core pallet. The current proposal is to add a dependency-injection trait to the Uniques pallet that would allow the RMRK core pallet to enforce the locks logic by implementing the new trait.

A pseudo-code example by @h4x3rotab can be found here: https://gist.github.com/h4x3rotab/152d05c9b5a18462e8dfe2b299b58db8

The plan would be to introduce the Uniques pallet code back into the RMRK repo temporarily until the code is ready to be submitted for review to be added to the Substrate Uniques pallet repository.

Test Coverage

Hi, I am new to open source development and would like to participate. Bruno said I should not mention Jira/Confluence, but he recommended creating an issue here :-)

With grcov I have created an HTML test-coverage tree: https://github.com/SiAlDev/rmrk-substrate/tree/main/target/debug/coverage

I could work on covering all sorts of test scenarios, different data sets, etc. I just need a bit of guidance to get going.

Best,
Silvio

Implement Royalties

I saw that the minting process can set up the royalty percentage for an NFT, but it seems that the actual royalties are not sent during a transfer or a market sale. Has it not been implemented yet, or does other code in a different repo implement this?

Prefer struct variant than tuple variant in events

pub enum Event<T: Config> {
    CollectionCreated(T::AccountId, T::CollectionId),
    NftMinted(T::AccountId, T::CollectionId, T::NftId),
    NFTBurned(T::AccountId, T::NftId),
    CollectionDestroyed(T::AccountId, T::CollectionId),
    NFTSent(
        T::AccountId,
        AccountIdOrCollectionNftTuple<T::AccountId, T::CollectionId, T::NftId>,
        T::CollectionId,
        T::NftId,
    ),
    IssuerChanged(T::AccountId, T::AccountId, T::CollectionId),
    PropertySet(
        T::CollectionId,
        Option<T::NftId>,
        BoundedVec<u8, T::KeyLimit>,
        BoundedVec<u8, T::ValueLimit>,
    ),
    CollectionLocked(T::AccountId, T::CollectionId),
    ResourceAdded(T::NftId, T::ResourceId),
    ResourceAccepted(T::NftId, T::ResourceId),
    PrioritySet(T::CollectionId, T::NftId),
}

The above code defines the event enum mostly with tuple variants. This was the only variant type Substrate supported, because of limitations in the old metadata generator. After metadata v14 this changed: struct variants are now supported.

Before struct variants were supported, the convention was to document each anonymous field by adding manual comments like:

  /// Some event [id, account]
  SomeEvent(u32, T::AccountId),

It's much harder to understand than the struct variant version:

   /// Some event
  SomeEvent { id: u32, account: T::AccountId },

Now that struct variants are available, the field names are self-describing. The upstream Substrate pallets are also gradually migrating to struct variants.
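A runnable side-by-side of the two styles (the event names and field types are invented for illustration):

```rust
// Tuple variant: field meaning lives only in a doc comment.
#[derive(Debug, PartialEq)]
enum TupleEvent {
    /// Some event [id, account]
    SomeEvent(u32, u64),
}

// Struct variant: field names are part of the type and of the metadata.
#[derive(Debug, PartialEq)]
enum StructEvent {
    /// Some event
    SomeEvent { id: u32, account: u64 },
}

fn main() {
    let old_style = TupleEvent::SomeEvent(7, 42);
    let new_style = StructEvent::SomeEvent { id: 7, account: 42 };

    // Destructuring the struct variant names each field at the use site too.
    let id_a = match old_style {
        TupleEvent::SomeEvent(id, _) => id,
    };
    let id_b = match new_style {
        StructEvent::SomeEvent { id, .. } => id,
    };
    assert_eq!(id_a, id_b);
}
```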

runtime api for fetching nft

In order to fetch a nested NFT with all of its children, we currently need to fetch the parent NFT, fetch the children (ids), and then fetch every single child, plus resources etc.

Create a single runtime API (i.e. fetch_nft) for convenient fetching of nested NFTs.

Adding traits to describe the total interface.

Right now the core pallet exposes most functionality through extrinsics. We should consider defining traits for the entire RMRK ecosystem, implemented by the pallet, with the extrinsics calling these trait methods. This would also make pallet-to-pallet calls easier.

Add checks to add_resource

Currently it is possible to create an empty resource.

  • Check the existence of NftId and CollectionId
  • Restrict creating an empty Resource (all props None)

Convert Collection ID and NFT ID to BoundedVec

I believe we should convert Collection ID and NFT ID values from u32 to BoundedVec. The current way is incrementally indexed (Collection 0, 1, 2, etc). Proposal is to switch to user-defined symbols of some max length. I think one drawback would be storage size, but the benefits would be usability and likely cross-chain functionality.

Consider creating a collection KANBIRD and an NFT of this collection, SUPERFOUNDER_1. The collection ID/NFT ID tuple would be (KANBIRD, SUPERFOUNDER_1) as opposed to (0, 0). Obviously this would make interaction with the pallet more user-friendly. Also, I suspect cross-chain activities will be easier: transferring across chains would only (conceptually) fail if the NFT (KANBIRD, SUPERFOUNDER_1) already exists on the destination chain (less likely than (0, 0), obviously).

There are a few challenges in developing this way, as we currently have some reliance on the numbering of Collection ID (specifically I'm thinking of collection max count), but I believe this can be overcome pretty easily.

Resource ID is already implemented this way, and seems to be surviving like this.

What's the intended ProtocolOrigin

Currently ProtocolOrigin is able to bloat/block the chain, as the metadata is not a BoundedVec. It is not a security risk if the protocol origin is Root and the input is checked in a different location, but it is still best to add a check.

Add replace-resource ID

When adding a new resource, an issuer should be able to specify the ID of an existing one which would get overwritten. This is necessary to be able to upgrade NFTs, change their parts, or even just do reveal mechanics of certain types.

This should be an extra parameter passed alongside the add resource call.

Refactor try_origin pattern

There are a lot of match try_origin() { } patterns.
Consider writing a helper function to simplify them (maybe called Self::ensure_protocol_or_signed()).

burnNft leaves resources

burnNft successfully destroys the NFT storage for the NFT, but resources are stored separately (in Resources storage). The burn operation doesn't destroy this storage, which it should.

repro:

  1. create collection
  2. create nft
  3. add resource to nft
  4. burn nft
  5. check Resources storage (the resource will still be there)
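A sketch of the expected fix, modeling the two storages as plain maps (the key shapes and names are illustrative, not the pallet's actual types):

```rust
use std::collections::HashMap;

// Hypothetical model: burning an NFT must also sweep its entries out of a
// separately keyed Resources store.
type NftKey = (u32, u32);           // (collection_id, nft_id)
type ResourceKey = (u32, u32, u32); // (collection_id, nft_id, resource_id)

struct State {
    nfts: HashMap<NftKey, String>,
    resources: HashMap<ResourceKey, String>,
}

impl State {
    fn burn_nft(&mut self, collection_id: u32, nft_id: u32) {
        self.nfts.remove(&(collection_id, nft_id));
        // The step the bug report says is missing: remove every resource
        // that belongs to the burned NFT.
        self.resources
            .retain(|&(c, n, _), _| !(c == collection_id && n == nft_id));
    }
}

fn main() {
    let mut state = State { nfts: HashMap::new(), resources: HashMap::new() };
    state.nfts.insert((0, 0), "nft".into());
    state.resources.insert((0, 0, 1), "res".into());
    state.burn_nft(0, 0);
    assert!(state.nfts.is_empty());
    assert!(state.resources.is_empty()); // no orphaned resource entries
}
```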

Bases and Parts storages both store parts (redundant)

Suggest removing the Parts component of the Bases struct, as that info is kept in Parts storage.
Usage of Parts in BaseInfo (my suggestion is to delete this):

/// Parts, full list of both Fixed and Slot parts
pub parts: Vec<PartType<BoundedString>>,

Parts getting added to Storage (via base_create):
for part in parts.clone() {
    match part.clone() {
        PartType::SlotPart(p) => {
            Parts::<T>::insert(base_id, p.id, part);
        },
        PartType::FixedPart(p) => {
            Parts::<T>::insert(base_id, p.id, part);
        },
    }
}

Base getting added to Bases storage (with redundant parts):
let base = BaseInfo { issuer, base_type, symbol, parts };
Bases::<T>::insert(base_id, base);

NftsByOwner storage isn't updated anywhere

NftsByOwner storage seems to be very useful since the frontend might be interested in fetching all the NFTs owned by a specific user.

#[pallet::storage]
#[pallet::getter(fn get_nfts_by_owner)]
/// Stores collections info
pub type NftsByOwner<T: Config> =
    StorageMap<_, Twox64Concat, T::AccountId, Vec<(CollectionId, NftId)>>;

But for now, this storage is updated only at NFT minting.

NftsByOwner::<T>::append(owner, (collection_id, nft_id));

It is never updated on send/burn or any of the other interactions.

Thus it cannot be used to obtain up-to-date ownership info.

Also, maybe this storage is supposed to be a StorageNMap instead of a StorageMap with Vec<_> (as suggested in another issue)?
And maybe this storage is just a duplicate of the uniques Account storage? See the comments of #48.
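A sketch of what a consistent owner index would look like, with send maintaining both sides of the map (all names and types here are illustrative):

```rust
use std::collections::HashMap;

// Hypothetical model: the owner index must be updated on transfer, not only
// on mint, or it drifts out of sync with real ownership.
type Nft = (u32, u32); // (collection_id, nft_id)

struct OwnerIndex {
    nfts_by_owner: HashMap<u64, Vec<Nft>>, // AccountId -> owned NFTs
}

impl OwnerIndex {
    fn mint(&mut self, owner: u64, nft: Nft) {
        self.nfts_by_owner.entry(owner).or_default().push(nft);
    }

    fn send(&mut self, from: u64, to: u64, nft: Nft) {
        // The step the issue says is currently missing: drop the entry
        // from the previous owner's list.
        if let Some(list) = self.nfts_by_owner.get_mut(&from) {
            list.retain(|&n| n != nft);
        }
        self.nfts_by_owner.entry(to).or_default().push(nft);
    }
}

fn main() {
    let mut index = OwnerIndex { nfts_by_owner: HashMap::new() };
    index.mint(1, (0, 0));
    index.send(1, 2, (0, 0));
    assert!(index.nfts_by_owner[&1].is_empty());
    assert_eq!(index.nfts_by_owner[&2], vec![(0, 0)]);
}
```

burn would need the symmetric removal; alternatively, as the issue suggests, the index could be dropped entirely in favour of the uniques Account storage.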

Auctions pallet design

Calls

The external dispatchable calls, i.e. the methods a user can invoke by sending a transaction.

Auctions pallet. Should extend NFT Core.
Should implement an offchain worker that monitors auction end times to conclude auctions.

  • create_auction(origin: OriginFor, auction_info: AuctionInfoOf)
    • AuctionInfoOf consists of AccountId, BalanceOf, BlockNumber, CollectionId, NftId
  • bid_value(origin: OriginFor, id: T::AuctionId, value: BalanceOf)
  • delete_auction(origin: OriginFor, id: T::AuctionId)

Storages

Defines how to access on-chain storage

Auctions

Stores ongoing and future auctions. Closed auctions are removed.

pub type Auctions<T: Config> = StorageMap<_,
    Twox64Concat, T::AuctionId, AuctionInfoOf<T>, OptionQuery>;

AuctionEndTime

Index auctions by end time.

pub type AuctionEndTime<T: Config> = StorageDoubleMap<_, 
    Twox64Concat, T::BlockNumber, Twox64Concat, T::AuctionId, (), OptionQuery>;
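The point of this index can be sketched with a plain BTreeMap standing in for the StorageDoubleMap (illustrative only; the real keys would be T::BlockNumber and T::AuctionId):

```rust
use std::collections::BTreeMap;

// Hypothetical model: auctions keyed by (end_block, auction_id), so
// concluding everything due at a given block is a cheap range scan rather
// than a walk over all open auctions.
struct AuctionIndex {
    by_end_time: BTreeMap<(u32, u32), ()>, // (block_number, auction_id)
}

impl AuctionIndex {
    // Conclude and remove all auctions ending at `block`; returns their ids.
    fn conclude_at(&mut self, block: u32) -> Vec<u32> {
        let due: Vec<(u32, u32)> = self
            .by_end_time
            .range((block, 0)..=(block, u32::MAX))
            .map(|(&key, _)| key)
            .collect();
        for key in &due {
            self.by_end_time.remove(key); // closed auctions are removed
        }
        due.into_iter().map(|(_, id)| id).collect()
    }
}

fn main() {
    let mut index = AuctionIndex { by_end_time: BTreeMap::new() };
    index.by_end_time.insert((100, 1), ());
    index.by_end_time.insert((100, 2), ());
    index.by_end_time.insert((200, 3), ());
    assert_eq!(index.conclude_at(100), vec![1, 2]);
    assert_eq!(index.by_end_time.len(), 1); // auction 3 still open
}
```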

AuctionOwnerById

Auction owner by ID

type AuctionOwnerById<T: Config> = StorageMap<_, 
    Twox64Concat, T::AuctionId, T::AccountId, ValueQuery>;

Types

Events

Defines the events that could be emitted by this pallet to indicate what happened

pub enum Event<T: Config> {
    /// Auction created
    AuctionCreated(T::AccountId, T::AuctionId),
    /// A bid is placed
    Bid(T::AuctionId, T::AccountId, BalanceOf<T>),
    /// Auction ended
    AuctionConcluded(T::AuctionId),
    /// Auction removed
    AuctionRemoved(T::AuctionId),
}

Market pallet design

Calls

The external dispatchable calls, i.e. the methods a user can invoke by sending a transaction.

Marketplace pallet. Should extend NFT Core

  • buy
    • buy NFT
  • list/unlist
    • list NFT for sale
  • make_offer(origin, collection_id, nft_id, amount: BalanceOf, expires: T::BlockNumber)
  • withdraw_offer(origin, collection_id, nft_id)

Storages

Defines how to access on-chain storage

Types

TBD

Events

Defines the events that could be emitted by this pallet to indicate what happened

Every call should emit a relevant Event (as in the Auctions pallet). E.g.:

pub enum Event<T: Config> {
    /// The price for a token was updated \[owner, collection_id, nft_id, price\]
    TokenPriceUpdated(T::AccountId, T::CollectionId, T::NftId, Option<BalanceOf<T>>),
    /// Token was sold to a new owner \[owner, buyer, collection_id, nft_id, price, author, royalty, royalty_amount\]
    TokenSold(
        T::AccountId,
        T::AccountId,
        T::CollectionId,
        T::NftId,
        BalanceOf<T>,
        Option<(T::AccountId, u8)>,
        BalanceOf<T>,
    ),
    /// Token listed on Marketplace \[owner, collection_id, nft_id, author, royalty\]
    TokenListed(T::AccountId, T::CollectionId, T::NftId, T::AccountId, u8),
    /// Token unlisted from Marketplace \[collection_id, nft_id\]
    TokenUnlisted(T::CollectionId, T::NftId),
    /// Offer was placed on a token \[offerer, collection_id, nft_id, price\]
    OfferPlaced(T::AccountId, T::CollectionId, T::NftId, BalanceOf<T>),
    /// Offer was withdrawn \[sender, collection_id, nft_id\]
    OfferWithdrawn(T::AccountId, T::CollectionId, T::NftId),
    /// Offer was accepted \[sender, collection_id, nft_id\]
    OfferAccepted(T::AccountId, T::CollectionId, T::NftId),
}

Symbol could be smaller StringLimit than metadata

Currently StringLimit for metadata is u128; the symbol could use something smaller, like u32.

		pub fn create_collection(
			origin: OriginFor<T>,
			metadata: BoundedVec<u8, T::StringLimit>,
			max: Option<u32>,
			symbol: BoundedVec<u8, T::StringLimit>,

Equippables should handle additions/deletions (currently just overrides full list)

Currently the do_equippable function in rmrk-equip/src/functions.rs just overrides the equippables list. The RMRK spec allows for + and - to add to (or remove from) the existing equippable list; this should be implemented.

PartType::SlotPart(mut slot_part) => {
    // Update equippable value
    slot_part.equippable = equippables;
    // Overwrite Parts entry for this base_id.part_id
    Parts::<T>::insert(base_id, part_id, PartType::SlotPart(slot_part));
    Ok((base_id, part_id))
}
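One possible shape for supporting + and -, sketched with a hypothetical `EquippableOperation` enum (none of these names exist in the current code):

```rust
// Hypothetical: distinguish overwrite, add, and remove, mirroring the spec's +/-.
pub enum EquippableOperation<CollectionId> {
    Override(Vec<CollectionId>),
    Add(CollectionId),
    Remove(CollectionId),
}

PartType::SlotPart(mut slot_part) => {
    match operation {
        EquippableOperation::Override(list) => slot_part.equippable = list,
        EquippableOperation::Add(id) => slot_part.equippable.push(id),
        EquippableOperation::Remove(id) => slot_part.equippable.retain(|c| *c != id),
    }
    // Write back the Parts entry for this base_id.part_id
    Parts::<T>::insert(base_id, part_id, PartType::SlotPart(slot_part));
    Ok((base_id, part_id))
}
```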

Trigger Unlist When Interacting with RMRK NFT

Currently the Market pallet does not automatically unlist an RMRK NFT from the ListedNfts storage when an RMRK Core pallet function like send, burn_nft, add_resource or set_priority interacts with it. A mechanism is needed to trigger the Market pallet's unlist function when these events occur. Today, the current owner of the RMRK NFT must manually call unlist to ensure the NFT is removed from the ListedNfts storage.
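A sketch of a helper the Core pallet could invoke before send/burn_nft/etc.; ListedNfts is the Market pallet storage named above, while the helper name is an assumption:

```rust
// Hypothetical helper: drop a stale listing before the NFT changes hands or state.
fn unlist_if_listed(collection_id: T::CollectionId, nft_id: T::NftId) {
    if ListedNfts::<T>::contains_key(collection_id, nft_id) {
        ListedNfts::<T>::remove(collection_id, nft_id);
        Self::deposit_event(Event::TokenUnlisted(collection_id, nft_id));
    }
}
```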

Implement accept resource

Since it is possible to add a resource to a non-owned NFT, we should add extra logic to the add_resource and accept extrinsics.

Audit Dependencies

cargo audit has detected 3 vulnerabilities in the dependencies. The first one has been a long-standing issue in the Rust community and cannot easily be fixed; I wouldn't consider it a vulnerability.

Crate:        chrono
Version:      0.4.19
Title:        Potential segfault in `localtime_r` invocations
Date:         2020-11-10
ID:           RUSTSEC-2020-0159
URL:          https://rustsec.org/advisories/RUSTSEC-2020-0159
Solution:     No safe upgrade is available!

The same goes for the time dependency, which you might be able to patch:

Crate:        time
Version:      0.1.44
Title:        Potential segfault in the time crate
Date:         2020-11-18
ID:           RUSTSEC-2020-0071
URL:          https://rustsec.org/advisories/RUSTSEC-2020-0071
Solution:     Upgrade to >=0.2.23

lru is patchable

Crate:        lru
Version:      0.6.6
Title:        Use after free in lru crate
Date:         2021-12-21
ID:           RUSTSEC-2021-0130
URL:          https://rustsec.org/advisories/RUSTSEC-2021-0130
Solution:     Upgrade to >=0.7.1

These vulnerabilities are all part of Substrate, so you could consider not patching them, as RMRK is meant to be a library.

Some other crates have been yanked; consider patching these too.

lock-collection logic needs fixing

Lock collection sets max to the currently-minted count, so someone could then burn and re-mint, which means the collection isn't really locked.
Perhaps set max to one less than the currently-minted count, then add a check when burning so that if currently_minted < max (and max is not zero), burning is not allowed.
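The burn-side guard from the proposal could look roughly like this; `collection.nfts_count`, `collection.max` and the error name are all assumed names for illustration:

```rust
// Hypothetical burn-side guard mirroring the proposal above.
if let Some(max) = collection.max {
    // Burning is refused when it would leave the supply below the frozen cap,
    // since that would free a slot that could be re-minted, breaking the lock.
    ensure!(max == 0 || collection.nfts_count >= max, Error::<T>::CollectionLocked);
}
```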

Equip pallet design

Diagram


Full Screen Diagram

Nested NFTs include anything non-stackably-visual too: bundles of NFTs, a mortgage NFT with photos of the house, music albums with songs, user-created curated collections for galleries, and more. None of these need equip. Equip, on the other hand, has many implications: per-slot limits, nested rendering across multiple depths (configurable cap on a configurable bird), unequipping on sale, burn, etc. Equip is an "addon lego" that is not useful by itself; it needs the core.

Should extend NFT Core. Original RMRK2 spec

Calls

  • equip
    • equip a child NFT into a parent's slot, or unequip
  • equippable
    • changes the list of equippable collections on a base's part
  • theme_add(base_id)
    • add a new theme to a base
  • create_base(origin, symbol, type, parts) TBD
    • create a base (a catalogue of parts); it is not an NFT

Storages

Defines how to access on-chain storage

Base

type Base<T: Config> = StorageMap<
    _,
    Twox64Concat,
    T::BaseId,
    BaseDetails<T::AccountId>,
>;

Part

StorageDoubleMap structure BaseId -> PartId -> PartDetail

type Parts<T: Config> = StorageDoubleMap<
    _,
    Twox64Concat,
    T::BaseId,
    Twox64Concat,
    T::PartId,
    PartDetail,
>;

Events

Defines the events that could be emitted by this pallet to indicate what happened

Every call should emit a relevant Event, e.g. the Equip pallet events:

pub enum Event<T: Config> {
    BaseCreated(T::AccountId, T::BaseId),
    ThemeAdded(T::BaseId),
    ResourceAdded(T::BaseId, T::ResourceId),
    ResourceAccepted(T::BaseId, T::ResourceId),
    PriorityUpdated(T::ResourceId)
}

Types

  • BaseDetails
pub struct BaseDetails<AccountId> {
    issuer: AccountId,
    symbol: BoundedVec<u8, StringLimit>,
    /// `type` in the spec; renamed because `type` is a reserved keyword in Rust
    base_type: BoundedVec<u8, StringLimit>,
}
  • PartDetail
pub struct PartDetail {
    /// `type` in the spec; renamed because `type` is a reserved keyword in Rust
    part_type: BoundedVec<u8, StringLimit>,
    src: BoundedVec<u8, StringLimit>,
}

One can accept a non-existent resource

While a collection/NFT id must exist, any value for the resource ID is currently accepted. I believe we should check for existence and fail when the resource doesn't exist.
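The fix could be a guard in the accept path; the `Resources` storage key shape and the error name here are assumptions:

```rust
// Hypothetical existence check before accepting a resource.
ensure!(
    Resources::<T>::contains_key((collection_id, nft_id, resource_id)),
    Error::<T>::ResourceDoesntExist
);
```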

Lazy ownership tree based on pallet-unique's owner

An important feature of RMRK spec is to allow any NFT to own other NFTs. This is called nested NFTs. It makes NFTs composable. Typical use cases are:

  • A set of NFTs can be combined into a single object that can be sent and sold at auction as a whole
  • By following some special rules (defined in BASE), some NFTs can be combined in a meaningful way that produces special effects, e.g. glasses can be equipped to a Kanaria bird and rendered as a complete portrait

Current approach

Now the basic NFT is implemented by pallet-uniques. On top of it, RMRKCore offers the advanced features, including nested NFTs. The basic logic can be described as follows:

  • RMRKCore stores an additional layer of the NFT ownership
    • Let's call it rmrk-owner, in contrast to unique-owner
  • Every RMRK NFT has a rmrk-owner
  • Rmrk-owner can be either AccountId or CollectionIdAndNftTuple (Nft for short)
  • When minted, both rmrk-owner and unique-owner are set to the creator
  • When burned, it recursively burns all the children NFTs
  • When transferred, the destination can be either an AccountId or Nft
    • When transferring to an AccountId, both rmrk-owner and unique-owner are set to the new owner. Then it recursively sets the unique-owner of all its children to the new owner.
    • When transferring to an Nft, the rmrk-owner is set to the destination NFT and the unique-owner is set to the unique-owner of the destination NFT. Then it recursively sets the unique-owner of all its children to the same account.

This approach is suboptimal in three aspects:

  • rmrk-owner and unique-owner are theoretically redundant -- the former is a superset of the latter. This makes it harder to keep the two relationships in sync.
  • The burn and transfer operations have to be recursive because we need to proactively update the ownership of the NFTs in uniques (unique-owner). That's an overhead.
  • If we allow users direct access to the underlying pallet-uniques (or through XCM), they could likely break the sync between uniques and RMRK

So here, we'd prefer an approach that minimizes the redundancy of the data and reduces the complexity and overhead of maintaining the data structure.

Proposal

The proposal is simple: use the ownership in pallet-uniques to trace the hierarchy of the NFTs.

The ownership in pallet-uniques is represented by an AccountId. At first glance, this doesn't fit our requirement, which is to allow an NFT to own other NFTs. However, the AccountId doesn't need to be a real wallet. It's just an address that can be backed by anything. In our case, we can create a "virtual account" for each NFT instance by mapping its (CollectionId, NftId) tuple to an AccountId via hashing.

Once each NFT has a virtual AccountId associated, we can safely establish the hierarchy with pallet-uniques' built-in ownership:

  • When minting: just mint a unique NFT
  • When transferring: simply transfer the NFT to the new owner
    • When transferring to an AccountId, just call uniques' transfer
    • When transferring to an NFT, generate its corresponding virtual AccountId, and then do the same transfer as above
  • When burning: check the NFT has no child, then just burn it

In this way, each operation is O(1) except the ownership check (discussed below). Even if you transfer an NFT with many children, or put it up for sale, only the ownership of the top one needs to be changed.

The virtual account trick

This is a common trick widely used in Substrate. Virtual accounts are allocated to on-chain multisig wallets, anonymous proxies, and even some pallets. The benefit is that the AccountId unifies the identity representation for all kinds of entities. It can fit in any place that accepts an account to manage ownership.

The virtual accounts are usually just hard-coded strings or hashes without any private key associated. For example, the AccountId of a multisig wallet can be generated by hash('multisig' ++ encode([owners])). Similarly, we can generate the virtual account id for each NFT like below:

fn nft_to_account_id<AccountId: Codec>(col: CollectionId, nft: NftId) -> AccountId {
    let preimage = (col, nft).encode();
    let hash = blake2_256(&preimage);
    (b"RmrkNft/", hash)
        .using_encoded(|b| AccountId::decode(&mut TrailingZeroInput::new(b)))
        .expect("Decoding with trailing zero never fails; qed.")
}

In the above example, "RmrkNft/" takes 8 bytes, and if AccountId is 32 bytes, we have 24 bytes of entropy left for the hash of the NFT ids.

Or, if we are sure the address space is large enough to directly include the ids (e.g. AccountId32 can include CollectionId + NftId, which together are just 8 bytes), we can encode the ids into the account id directly. This makes reverse lookup much easier:

fn nft_to_account_id<AccountId: Codec>(col: CollectionId, nft: NftId) -> AccountId {
    (b"RmrkNft/", col, nft)
        .using_encoded(|b| AccountId::decode(&mut TrailingZeroInput::new(b)))
        .expect("Decoding with trailing zero never fails; qed.")
}

fn decode_nft_account_id<AccountId: Codec>(account_id: AccountId) -> Option<(CollectionId, NftId)> {
    let (prefix, tuple, suffix) = account_id
        .using_encoded(|mut b| {
            let slice = &mut b;
            let r = <([u8; 8], (CollectionId, NftId))>::decode(slice);
            r.map(|(prefix, tuple)| (prefix, tuple, slice.to_vec()))
        })
        .ok()?;
    // Check prefix and suffix to avoid collision attack
    if &prefix == b"RmrkNft/" && suffix.iter().all(|&x| x == 0) {
        Some(tuple)
    } else {
        None
    }
}
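The two helpers above can be modeled in plain Rust (no Substrate types) to check that the round trip works. The fixed layout below — 8-byte prefix, little-endian u32 ids, zero padding — is an assumption for illustration, matching SCALE's little-endian integer encoding:

```rust
// Standalone model of the direct-encoding scheme with a fixed 32-byte AccountId.
// Layout (assumed): 8-byte prefix | 4-byte col (LE) | 4-byte nft (LE) | zero padding.
type AccountId32 = [u8; 32];
const PREFIX: &[u8; 8] = b"RmrkNft/";

fn nft_to_account_id(col: u32, nft: u32) -> AccountId32 {
    let mut out = [0u8; 32];
    out[..8].copy_from_slice(PREFIX);
    out[8..12].copy_from_slice(&col.to_le_bytes());
    out[12..16].copy_from_slice(&nft.to_le_bytes());
    out
}

fn decode_nft_account_id(account: AccountId32) -> Option<(u32, u32)> {
    // Check prefix and zero suffix to avoid collisions with real accounts
    if &account[..8] != &PREFIX[..] || !account[16..].iter().all(|&b| b == 0) {
        return None;
    }
    let col = u32::from_le_bytes(account[8..12].try_into().unwrap());
    let nft = u32::from_le_bytes(account[12..16].try_into().unwrap());
    Some((col, nft))
}

fn main() {
    let acc = nft_to_account_id(1, 42);
    assert_eq!(decode_nft_account_id(acc), Some((1, 42)));
    assert_eq!(decode_nft_account_id([0u8; 32]), None);
}
```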

Ownership check

A drawback of this approach is that, given an NFT in a tree, you cannot easily tell whether a user is the owner of the tree. This can be solved by a reverse lookup:

  • Save a reverse map: the map from the virtual accounts to the NFTs
    • (This map can actually be omitted if decode_nft_account_id is available)
  • When finding the tree owner, we start from an NFT and check if its owner is in the reverse map
    • If not, this NFT is the root, and its owner is the result
    • If yes, we recursively look up the parent NFT until we reach the root

In this way, a lookup is O(h) where h is the depth of the NFT in the tree. It can be described in pseudocode:

type AccountPreimage<T: Config> = StorageMap<_, Twox64Concat, T::AccountId, (CollectionId, NftId)>;

fn lookup_root_owner(col: CollectionId, nft: NftId) -> (T::AccountId, (CollectionId, NftId)) {
    let parent = pallet_uniques::Pallet::<T>::owner(col, nft);
    match AccountPreimage::<T>::get(&parent) {
        None => (parent, (col, nft)),
        Some((col, nft)) => lookup_root_owner(col, nft),
    }
}
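The O(h) walk can be modeled standalone with plain maps; `owner` stands in for pallet-uniques' owner map and `preimage` for AccountPreimage, with a u64 as an illustrative AccountId:

```rust
use std::collections::HashMap;

type Nft = (u32, u32); // (CollectionId, NftId)
type Account = u64;    // stand-in for AccountId

// Walk up the ownership chain until the owner is a real account (not in `preimage`).
fn lookup_root_owner(
    owner: &HashMap<Nft, Account>,
    preimage: &HashMap<Account, Nft>,
    col: u32,
    nft: u32,
) -> (Account, Nft) {
    let parent = owner[&(col, nft)];
    match preimage.get(&parent) {
        // Owner is a real account: this NFT is the root of its tree
        None => (parent, (col, nft)),
        // Owner is a virtual NFT account: recurse one level up
        Some(&(c, n)) => lookup_root_owner(owner, preimage, c, n),
    }
}

fn main() {
    // Account 1 owns NFT (0,1); NFT (0,1) owns NFT (0,2) via virtual account 100
    let owner = HashMap::from([((0, 1), 1), ((0, 2), 100)]);
    let preimage = HashMap::from([(100, (0, 1))]);
    assert_eq!(lookup_root_owner(&owner, &preimage, 0, 2), (1, (0, 1)));
}
```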

Children

When burning an NFT, we need to ensure it has no child. This can be done by either

  • require no child under it,
  • or automatically transfer its children to the caller before burning,
  • or automatically burn all the children before burning.

It's easy to recursively burn all the children. To track the children of an NFT, a straightforward solution is to keep a record of all its children and update the list dynamically when adding or removing a child:

type Children<T: Config> = StorageMap<_, Twox64Concat, (CollectionId, NftId), Vec<(CollectionId, NftId)>, ValueQuery>;

fn add_child(parent: (CollectionId, NftId), child: (CollectionId, NftId)) {
    Children::<T>::mutate(parent, |v| {
        v.push(child)
    });
}
fn remove_child(parent: (CollectionId, NftId), child: (CollectionId, NftId)) {
    Children::<T>::mutate(parent, |v| {
        v.retain(|nft| *nft != child)
    });
}
fn has_child(parent: (CollectionId, NftId)) -> bool {
    !Children::<T>::get(parent).is_empty()
}
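A standalone model of this index (plain Rust; a HashMap stands in for the storage map, all names illustrative) shows the intended add/retain/is_empty semantics:

```rust
use std::collections::HashMap;

type Nft = (u32, u32); // (CollectionId, NftId)

// Children index: parent NFT -> list of child NFTs.
struct Children(HashMap<Nft, Vec<Nft>>);

impl Children {
    fn add_child(&mut self, parent: Nft, child: Nft) {
        self.0.entry(parent).or_default().push(child);
    }
    fn remove_child(&mut self, parent: Nft, child: Nft) {
        if let Some(v) = self.0.get_mut(&parent) {
            v.retain(|nft| *nft != child);
        }
    }
    fn has_child(&self, parent: Nft) -> bool {
        self.0.get(&parent).map_or(false, |v| !v.is_empty())
    }
}

fn main() {
    let mut idx = Children(HashMap::new());
    idx.add_child((0, 1), (0, 2));
    assert!(idx.has_child((0, 1)));
    idx.remove_child((0, 1), (0, 2));
    assert!(!idx.has_child((0, 1)));
}
```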

Caveat: note that this essentially builds an index of the ownership of the NFTs. It only gets updated if the ownership change was initiated by the RMRK pallet. A bad case: if a user transfers an NFT to another NFT via pallet-uniques directly, the RMRK pallet will know nothing about the ownership change. The children index will then be out of sync, and when burning the parent NFT some children may be missed.

A solution is to somehow disallow pallet-uniques from transferring an NFT to another NFT. Or we can add a notification handler to pallet-uniques that calls back whenever a transfer is done.

Pseudocode

fn mint(origin: OriginFor<T>, owner: T::AccountId, col: CollectionId, nft: NftId) {
    ensure_signed(origin.clone())?;
    // No extra bookkeeping needed: a fresh NFT has no children
    pallet_uniques::Pallet::<T>::mint(origin, owner, col, nft)?;
    Ok(())
}

fn burn(origin: OriginFor<T>, col: CollectionId, nft: NftId) {
    let who = ensure_signed(origin)?;
    // Check ownership
    let (root_owner, _) = lookup_root_owner(col, nft);
    ensure!(who == root_owner, Error::BadOrigin);
    // Check no children
    ensure!(!has_child((col, nft)), Error::CannotBurnNftWithChild);
    // Burn
    let parent = pallet_uniques::Pallet::<T>::owner(col, nft);
    pallet_uniques::Pallet::<T>::burn(Origin::signed(parent.clone()), col, nft)?;
    // Maintain the children index when the parent is another NFT
    if let Some(parent_nft) = decode_nft_account_id(parent) {
        remove_child(parent_nft, (col, nft));
    }
    Ok(())
}

fn transfer(origin: OriginFor<T>, col: CollectionId, nft: NftId, dest: AccountIdOrNftTuple) {
    let who = ensure_signed(origin)?;
    // Check ownership
    let (root_owner, root_nft) = lookup_root_owner(col, nft);
    ensure!(who == root_owner, Error::BadOrigin);
    // Prepare transfer
    let dest_account = match dest {
        AccountId(id) => id,
        NftTuple(c, n) => {
            // Circle detection
            let (_, dest_root_nft) = lookup_root_owner(c, n);
            ensure!(root_nft != dest_root_nft, Error::CircleDetected);
            // Convert to virtual account
            nft_to_account_id(c, n)
        },
    };
    let owner = pallet_uniques::Pallet::<T>::owner(col, nft);
    // Maintain the children index on both ends
    if let Some(old_parent) = decode_nft_account_id(owner.clone()) {
        remove_child(old_parent, (col, nft));
    }
    if let Some(new_parent) = decode_nft_account_id(dest_account.clone()) {
        add_child(new_parent, (col, nft));
    }
    pallet_uniques::Pallet::<T>::transfer(
        Origin::signed(owner),
        col, nft,
        dest_account,
    )?;
    Ok(())
}

Update notes

  • 2021-12-31
    • Update the pseudocode in the "virtual account" section, also add an alternative encode scheme
    • Update burning and children indexing section according to Bruno's feedback
    • Correct the symbol names of pallet-uniques
    • Fix typos
