
███████╗██╗   ██╗██████╗ ████████╗███████╗███╗   ██╗███████╗ ██████╗ ██████╗
██╔════╝██║   ██║██╔══██╗╚══██╔══╝██╔════╝████╗  ██║██╔════╝██╔═══██╗██╔══██╗
███████╗██║   ██║██████╔╝   ██║   █████╗  ██╔██╗ ██║███████╗██║   ██║██████╔╝
╚════██║██║   ██║██╔══██╗   ██║   ██╔══╝  ██║╚██╗██║╚════██║██║   ██║██╔══██╗
███████║╚██████╔╝██████╔╝   ██║   ███████╗██║ ╚████║███████║╚██████╔╝██║  ██║
╚══════╝ ╚═════╝ ╚═════╝    ╚═╝   ╚══════╝╚═╝  ╚═══╝╚══════╝ ╚═════╝ ╚═╝  ╚═╝

Subtensor


This repository contains Bittensor's Substrate chain. Subtensor contains the trusted logic which:

  1. Runs Bittensor's consensus mechanism;
  2. Advertises neuron information, IPs, etc., and
  3. Facilitates value transfer via TAO.

System Requirements

  • The binaries in ./bin/release are x86_64 binaries to be used with the Linux kernel.
  • Subtensor needs ~286 MiB to run.
  • Architectures other than x86_64 are currently not supported.
  • OSs other than Linux and MacOS are currently not supported.

Architectures

Subtensor supports the following architectures:

Linux x86_64

Requirements:

  • Linux kernel 2.6.32+,
  • glibc 2.11+

MacOS x86_64

Requirements:

  • MacOS 10.7+ (Lion+)

Network requirements

  • Subtensor needs access to the public internet
  • Subtensor runs on IPv4
  • Subtensor listens on the following ports:
    1. 9944 - Websocket. This port is used by bittensor. It only accepts connections from localhost. Make sure this port is firewalled off from the public internet.
    2. 9933 - RPC. This port is opened, but not used.
    3. 30333 - p2p socket. This port accepts connections from other subtensor nodes. Make sure your firewall(s) allow incoming traffic to this port.
  • It is assumed your default outgoing traffic policy is ACCEPT. If not, make sure outbound traffic to port 30333 is allowed.
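As a quick sanity check, the port policy above can be expressed as a small validation helper. This is a sketch: the helper name and the listener-map shape are invented for illustration, and port 9933 is left out since it is unused:

```python
# Expected bind addresses per the port policy above (illustrative).
EXPECTED_LISTENERS = {
    9944: "127.0.0.1",  # websocket: localhost only, firewalled from the internet
    30333: "0.0.0.0",   # p2p: must accept inbound connections from other nodes
}


def check_listeners(actual: dict) -> list:
    """Compare actual {port: bind_address} listeners against the policy.

    Returns a list of human-readable policy violations (empty = OK).
    """
    problems = []
    for port, expected_bind in EXPECTED_LISTENERS.items():
        bind = actual.get(port)
        if bind is None:
            problems.append(f"port {port} is not listening")
        elif bind != expected_bind:
            problems.append(f"port {port} bound to {bind}, expected {expected_bind}")
    return problems
```

You would feed it the ports and bind addresses reported by a tool such as `ss -tlnp` on the node host.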

For Subnet Development

If you are developing and testing a subnet incentive mechanism, you will need to run a local subtensor node. Follow the detailed step-by-step instructions provided in the document Running subtensor locally to run either a lite node or an archive node. Also see the Subtensor Nodes section in the Bittensor Developer Documentation.

Lite node vs Archive node

For an explanation of lite node, archive node and how you can run your local subtensor node in these modes, see Lite node vs archive node section on Bittensor Developer Docs.


For Subtensor Development

Installation

First, complete the basic Rust setup instructions.

Build and Run

Use Rust's native cargo command to build and launch the template node:

cargo run --release -- --dev

Build only

The above cargo run command will perform an initial build and launch the node. Use the following command to build the node without launching it:

cargo build --release

Other ways to launch the node

The above cargo run command will launch a temporary node and its state will be discarded after you terminate the process. After the project has been built, there are other ways to launch the node.

Single-Node Development Chain

This command will start the single-node development chain with non-persistent state:

./target/release/subtensor --dev

Purge the development chain's state:

./target/release/subtensor purge-chain --dev

Start the development chain with detailed logging:

RUST_BACKTRACE=1 ./target/release/subtensor -ldebug --dev

Running tests in debug mode with logs:

SKIP_WASM_BUILD=1 RUST_LOG=runtime=debug cargo test -- --nocapture

Running individual tests

SKIP_WASM_BUILD=1 \
  RUST_LOG=runtime=debug \
  cargo test <your test name> \
  -- --nocapture --color always
Tips for testing `tests/`

<package-name> Available members are listed in the project root ./Cargo.toml file; each points to a sub-directory containing a Cargo.toml file that defines a name. For example, node/Cargo.toml defines the name node-subtensor.

<test-name> Available tests are often found within either a tests/ sub-directory or within the relevant src/ file. For example, ./node/tests/chain_spec.rs has a test named chain_spec.

Example: putting it all together, we can run all tests in the chain_spec file from the node-subtensor package via

SKIP_WASM_BUILD=1 \
  RUST_LOG=runtime=debug \
  cargo test \
  --package node-subtensor \
  --test chain_spec \
  -- --color always --nocapture

Running code coverage

bash scripts/code-coverage.sh

Note: the above requires that cargo-tarpaulin is installed on the host, e.g. cargo install cargo-tarpaulin.

Development chain means that the state of our chain will be in a tmp folder while the nodes are running. The Alice account will be the authority and sudo account, as declared in the genesis state. At the same time, the following accounts will be pre-funded:

  • Alice
  • Bob
  • Alice//stash
  • Bob//stash

If we want to maintain the chain state between runs, a base path must be added so the db can be stored in the provided folder instead of a temporary one. We could use this folder to store different chain databases, as a different folder will be created per different chain that is run. The following commands show how to use a newly created folder as our db base path:

# Create a folder to use as the db base path
mkdir my-chain-state

# Use of that folder to store the chain state
./target/release/node-subtensor --dev --base-path ./my-chain-state/

# Check the folder structure created inside the base path after running the chain
ls ./my-chain-state
#> chains
ls ./my-chain-state/chains/
#> dev
ls ./my-chain-state/chains/dev
#> db keystore network

Connect with Polkadot-JS Apps Front-end

Once the node template is running locally, you can connect it with the Polkadot-JS Apps front-end to interact with your chain. Click here to connect the Apps to your local node template.

Multi-Node Local Testnet

If you want to see the multi-node consensus algorithm in action, refer to our Simulate a network tutorial.

Template Structure

A Substrate project such as this consists of a number of components that are spread across a few directories.

Node Capabilities

A blockchain node is an application that allows users to participate in a blockchain network. Substrate-based blockchain nodes expose a number of capabilities:

  • Networking: Substrate nodes use the libp2p networking stack to allow the nodes in the network to communicate with one another.
  • Consensus: Blockchains must have a way to come to consensus on the state of the network. Substrate makes it possible to supply custom consensus engines and also ships with several consensus mechanisms that have been built on top of Web3 Foundation research.
  • RPC Server: A remote procedure call (RPC) server is used to interact with Substrate nodes.

Directory structure

There are several files in the node directory. Make a note of the following important files:

  • chain_spec.rs: A chain specification is a source code file that defines a Substrate chain's initial (genesis) state. Chain specifications are useful for development and testing, and critical when architecting the launch of a production chain. Take note of the development_config and testnet_genesis functions, which are used to define the genesis state for the local development chain configuration. These functions identify some well-known accounts and use them to configure the blockchain's initial state.
  • service.rs: This file defines the node implementation. Take note of the libraries that this file imports and the names of the functions it invokes. In particular, there are references to consensus-related topics, such as the block finalization and forks and other consensus mechanisms such as Aura for block authoring and GRANDPA for finality.

CLI help

After the node has been built, refer to the embedded documentation to learn more about the capabilities and configuration parameters that it exposes:

./target/release/node-subtensor --help

Runtime

In Substrate, the terms "runtime" and "state transition function" are analogous - they refer to the core logic of the blockchain that is responsible for validating blocks and executing the state changes they define. The Substrate project in this repository uses FRAME to construct a blockchain runtime. FRAME allows runtime developers to declare domain-specific logic in modules called "pallets". At the heart of FRAME is a helpful macro language that makes it easy to create pallets and flexibly compose them to create blockchains that can address a variety of needs.

Review the FRAME runtime implementation included in this template and note the following:

  • This file configures several pallets to include in the runtime. Each pallet configuration is defined by a code block that begins with impl $PALLET_NAME::Config for Runtime.
  • The pallets are composed into a single runtime by way of the construct_runtime! macro, which is part of the core FRAME Support system library.

Pallets

The runtime in this project is constructed using many FRAME pallets that ship with the core Substrate repository and a template pallet that is defined in the pallets directory.

A FRAME pallet is composed of a number of blockchain primitives:

  • Storage: FRAME defines a rich set of powerful storage abstractions that makes it easy to use Substrate's efficient key-value database to manage the evolving state of a blockchain.
  • Dispatchables: FRAME pallets define special types of functions that can be invoked (dispatched) from outside of the runtime in order to update its state.
  • Events: Substrate uses events and errors to notify users of important changes in the runtime.
  • Errors: When a dispatchable fails, it returns an error.
  • Config: The Config configuration interface is used to define the types and parameters upon which a FRAME pallet depends.

License

The MIT License (MIT) Copyright © 2021 Yuma Rao

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Acknowledgments

parralax

subtensor's People

Contributors

camfairchild, cuteolaf, dependabot[bot], distributedstatemachine, eduardogr, eugene-hu, gentiksolm, gregzaitsev, gztensor, ifrit98, johnreedv, keithtensor, maciejbaj, open-junius, opentaco, orriin, pawkanarek, rajkaramchedu, rubberbandits, s0ands0, sam0x17, samotlagh, shibshib, synthpolis, unconst, welikethestock


subtensor's Issues

Automatically fail a registration request if the attempt spans an adjustment interval

Is your feature request related to a problem? Please describe.

When registering, it is possible for the cost to register by recycle to change in the middle of registration. If the adjustment interval occurs after the registration reports "The cost to register by recycle is τx.xxxxxxxxx", and the registrant (or their script) then enters "y" in response to "Do you want to continue? [y/n] (n):", they will be charged the new amount, not the amount reported.

Describe the solution you'd like

I would like the script to store the cost it has reported; then, when attempting to execute the actual recycle, if the amount is greater than the reported amount, fail the transaction. (I imagine that most people would be fine with the transaction going through if the new cost is less than the reported cost.)
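A sketch of that guard (the function and exception names are hypothetical, and amounts are plain integers for illustration; this is not the bittensor client's actual code):

```python
class CostIncreasedError(Exception):
    """Raised when the recycle cost rose after it was quoted to the user."""


def guard_recycle_cost(reported_cost: int, current_cost: int) -> int:
    """Fail if the cost rose since it was reported to the user.

    Returns the amount that should actually be charged. A cost that is
    equal to or lower than the quote is allowed through, matching the
    behavior described above.
    """
    if current_cost > reported_cost:
        raise CostIncreasedError(
            f"cost to register rose from {reported_cost} to {current_cost}; aborting"
        )
    # Most registrants are happy to pay less than quoted.
    return current_cost
```

The caller would record the cost at the moment it is displayed, then pass the freshly queried cost to the guard just before submitting the extrinsic.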

Describe alternatives you've considered

No response

Additional context

No response

Add CI job for Rust checks

Description: We need CI jobs implemented in the subtensor repo to automatically check Rust code using cargo, specifically cargo check, cargo fmt, and cargo fix.

AC:

  • A CI job runs the cargo check command on every commit push
  • A CI job runs the cargo fmt command on every commit push
  • A CI job runs the cargo fix command on every commit push

Right now this is part of #275; it is done, but could be merged separately if needed.

new release process / CI actions / checks

We want to move to a more formal release process characterized by the following points:

  • all PRs merge into main
  • PRs don't get merged until they are finney-ready
  • labels will be used to indicate that a PR is on-devnet, devnet-pass, on-testnet, and testnet-pass, respectively, and having both the devnet-pass and testnet-pass labels will be required by the CI to merge into main.
  • any issues uncovered during testing can then be discussed and tracked on the PR, reducing issue/PR bloat and making it easy to see the full life-cycle of a feature from development to deployment on finney
  • we will cut releases off of main using the GitHub Releases feature: every once in a while we cut a new release, deploy everything new in main to finney, and that release gets a semver version number + tag
  • also, new GitHub Actions to run our (currently manual) deploy process. These will include a built-in check that the spec version has been bumped to a value higher than that of the network we are currently deploying to. If this criterion is not met, the deploy will abort, which will prevent issues like the recent testnet forking/bricking

all work in #346

AC:

  • CI check system for devnet / testnet stages (possibly using labels)
  • github actions for deploys
  • set up releases
  • PRs default to main
  • update docs / instructions / PR templates
  • test that non-team-members can't fake the checklist items

deny missing docs on public items

Right now we actually have a lot of // comments that are intended to be /// doc comments. This is rampant throughout our pallets. A really good way to fix this would be to enforce #[deny(missing_docs)] at the crate level for each of our crates and then work through all the compile errors. Probably a big effort, but it will dramatically improve @rajkaramchedu's effort to improve the subtensor docs, and will require us to document things going forward as we create them.

AC:

  • add #[deny(missing_docs)] to all subtensor crates
  • fix the resulting compile errors, upgrading any // item-level comments to /// and ensuring they do not introduce any broken doc links
  • cargo doc --workspace passing without any broken doc links

Subtensor Nodes Persistently Attempt to Connect to Unavailable Services

Describe the bug

Subtensor nodes continue to attempt connections to services that have become unavailable, persisting even after those services explicitly refuse connections or stop responding. This issue has been observed over a period of 7 days, during which a service initially returned TCP RST packets for 5 days before completely ceasing to respond for an additional 2 days.

To Reproduce

  1. Set up a node using the Bittensor Subtensor package.
  2. Allow the node to operate normally and connect to other nodes.
  3. Shut down one of the services or nodes it connects to, ensuring it either sends TCP RST packets back or stops responding altogether.
  4. Observe the behavior of the Subtensor node trying to connect to the now unavailable service over several days.

Expected behavior

The Subtensor node should cease its attempts to connect to a service that has consistently been unavailable or explicitly refused connections over a reasonable period.

Actual behavior

The Subtensor node persists in attempting to connect to the service, sending TCP SYN packets continuously without recognizing the service's unavailability. This behavior persists over an extended period, observed for at least 7 days in this instance.

Screenshots

Screenshot from 2024-04-08 11-26-20
Screenshot from 2024-04-08 11-29-38

Environment

Debian Linux 11, Subtensor v0.0.1, Docker v25.0.4

Additional context

  • Persistent Connection Attempts: The behavior was first noticed when a service was intentionally shut down, returning TCP RST packets for 5 days. Following this period, the port was blocked to stop responding altogether, yet Subtensor nodes continued their connection attempts for at least 2 more days.
  • Network Resource Wastage: The continuous attempts to connect to an unavailable service can lead to unnecessary network traffic, consuming bandwidth and potentially degrading the performance of nodes involved in these futile connection attempts.
  • Potential Latency Increase: The persistent attempts and subsequent network congestion could introduce latency in the network, affecting the overall performance and responsiveness of the Bittensor network.
  • Risk of Blacklisting: Continuous, unwarranted connection attempts to external services or nodes could result in Subtensor nodes being blacklisted by network administrators or automated security systems, isolating them from parts of the internet and hindering the network's functionality and reach.
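A conventional remedy for this class of problem is exponential backoff with jitter and a give-up threshold. The sketch below is illustrative only; it is not how subtensor's libp2p layer is actually implemented, and the function name and constants are invented:

```python
import random
from typing import Optional


def next_retry_delay(failures: int,
                     base: float = 1.0,
                     cap: float = 3600.0,
                     give_up_after: int = 20) -> Optional[float]:
    """Return seconds to wait before the next connection attempt,
    or None once the peer should be dropped entirely.

    The delay doubles per consecutive failure, is capped at `cap`,
    and gets +/-10% jitter so peers don't retry in lockstep.
    """
    if failures >= give_up_after:
        return None  # stop dialing this peer
    delay = min(cap, base * (2 ** failures))
    return delay * random.uniform(0.9, 1.1)
```

With these illustrative numbers, a peer that has refused connections 20 times in a row (roughly an hour apart near the cap) would be dropped, instead of being dialed indefinitely for 7 days.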

Governance

Description

The current governance mechanism in the Subtensor blockchain needs to be revised to introduce a new group called "SubnetOwners" alongside the existing "Triumvirate" and "Senate" groups. The goal is to establish a checks and balances system where a proposal must be accepted by the other two groups in order to pass.

For instance, if the Triumvirate proposes a change, both the SubnetOwners and Senate must accept it for the proposal to be enacted. Each acceptance group should have a configurable minimum threshold for proposal acceptance.

Acceptance Criteria

  • Introduce a new "SubnetOwners" group in the governance mechanism.
  • Modify the proposal process to require acceptance from the other two groups for a proposal to pass.
  • Implement configurable minimum thresholds for each acceptance group.
  • Update the existing code to accommodate the new governance structure.
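The acceptance rule described above can be sketched as follows. The group names come from this issue; the function, the vote-count shapes, and the threshold values used in the example are illustrative, not the proposed on-chain implementation:

```python
# Sketch of the checks-and-balances rule: a proposal from one group
# passes only if BOTH other groups reach their own acceptance threshold.

GROUPS = ("Triumvirate", "Senate", "SubnetOwners")


def proposal_passes(proposer: str,
                    ayes: dict,
                    members: dict,
                    thresholds: dict) -> bool:
    """`ayes` and `members` map group name -> counts; `thresholds`
    maps group name -> required aye fraction (0.0 to 1.0)."""
    if proposer not in GROUPS:
        raise ValueError(f"unknown group: {proposer}")
    for group in GROUPS:
        if group == proposer:
            continue  # the proposing group does not vote on its own proposal
        if members[group] == 0:
            return False
        if ayes[group] / members[group] < thresholds[group]:
            return False
    return True
```

For instance, with a Senate threshold of 50% and a SubnetOwners threshold of 40%, a Triumvirate proposal needs at least half of the Senate and at least 40% of the SubnetOwners to vote aye.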

Tasks

Substrate (rust)

  • Create a new SubnetOwners struct and associated storage items.
// runtime/src/lib.rs

// ...

pub struct SubnetOwners;

impl SubnetOwners {
    fn is_member(account: &AccountId) -> bool {
        // Implement logic to check if an account is a member of SubnetOwners
        // ...
    }

    fn members() -> Vec<AccountId> {
        // Implement logic to retrieve the list of SubnetOwners members
        // ...
    }

    fn max_members() -> u32 {
        // Implement logic to retrieve the maximum number of SubnetOwners members
        // ...
    }
}

// ...
  • Modify the propose function to include the new acceptance requirements.
// pallets/collective/src/lib.rs

// ...

#[pallet::call]
impl<T: Config<I>, I: 'static> Pallet<T, I> {
    // ...

    #[pallet::call_index(2)]
    #[pallet::weight(/* ... */)]
    pub fn propose(
        origin: OriginFor<T>,
        proposal: Box<<T as Config<I>>::Proposal>,
        #[pallet::compact] length_bound: u32,
        duration: BlockNumberFor<T>,
    ) -> DispatchResultWithPostInfo {
        // ...

        // Check if the proposer is a member of the Triumvirate
        ensure!(T::CanPropose::can_propose(&who), Error::<T, I>::NotMember);

        // ...

        // Initialize vote trackers for Senate and SubnetOwners
        let senate_votes = Votes {
            index,
            threshold: SenateThreshold::get(),
            ayes: sp_std::vec![],
            nays: sp_std::vec![],
            end,
        };
        let subnet_owners_votes = Votes {
            index,
            threshold: SubnetOwnersThreshold::get(),
            ayes: sp_std::vec![],
            nays: sp_std::vec![],
            end,
        };

        // Store the vote trackers
        <SenateVoting<T, I>>::insert(proposal_hash, senate_votes);
        <SubnetOwnersVoting<T, I>>::insert(proposal_hash, subnet_owners_votes);

        // ...
    }

    // ...
}

// ...
  • Implement configurable minimum thresholds for each acceptance group.
// runtime/src/lib.rs

// ...

parameter_types! {
    pub const TriumvirateThreshold: Permill = Permill::from_percent(60);
    pub const SenateThreshold: Permill = Permill::from_percent(50);
    pub const SubnetOwnersThreshold: Permill = Permill::from_percent(40);
}

// ...
  • Update the do_vote function to handle voting from the new SubnetOwners group.
// pallets/collective/src/lib.rs

impl<T: Config<I>, I: 'static> Pallet<T, I> {
    // ...

    pub fn do_vote(
        who: T::AccountId,
        proposal: T::Hash,
        index: ProposalIndex,
        approve: bool,
    ) -> DispatchResult {
        // ...

        // Check if the voter is a member of the Senate or SubnetOwners
        if Senate::is_member(&who) {
            // Update the Senate vote tracker
            <SenateVoting<T, I>>::mutate(proposal, |v| {
                if let Some(mut votes) = v.take() {
                    if approve {
                        votes.ayes.push(who.clone());
                    } else {
                        votes.nays.push(who.clone());
                    }
                    *v = Some(votes);
                }
            });
        } else if SubnetOwners::is_member(&who) {
            // Update the SubnetOwners vote tracker
            <SubnetOwnersVoting<T, I>>::mutate(proposal, |v| {
                if let Some(mut votes) = v.take() {
                    if approve {
                        votes.ayes.push(who.clone());
                    } else {
                        votes.nays.push(who.clone());
                    }
                    *v = Some(votes);
                }
            });
        } else {
            return Err(Error::<T, I>::NotMember.into());
        }

        // ...
    }

    // ...
}
  • Migrate the collective pallet name/storage
let old_pallet = "Triumvirate";
let new_pallet = <Governance as PalletInfoAccess>::name();
frame_support::storage::migration::move_pallet(
    new_pallet.as_bytes(),
    old_pallet.as_bytes(),
);

Python API

  • call to grab the list of subnet owners (governance members)
# bittensor/subtensor.py

class subtensor:
    
     # ...
    
     def get_subnet_owners_members(self, block: Optional[int] = None) -> Optional[List[str]]:
        subnet_owners_members = self.query_module("SubnetOwnersMembers", "Members", block=block)
        if subnet_owners_members is None or not hasattr(subnet_owners_members, "serialize"):
            return None
        return subnet_owners_members.serialize()
  • call to grab the list of governance members
# bittensor/subtensor.py

class subtensor:
    
     # ...
    
     def get_governance_members(self, block: Optional[int] = None) -> Optional[List[Tuple[str, Tuple[GovernanceEnum, ...]]]]:
        senate_members = self.get_senate_members(block=block)
        subnet_owners_members = self.get_subnet_owners_members(block=block)
        triumvirate_members = self.get_triumvirate_members(block=block)

        if senate_members is None and subnet_owners_members is None and triumvirate_members is None:
            return None

        governance_members = {}
        for member in senate_members or []:
            governance_members[member] = (GovernanceEnum.Senate,)

        for member in subnet_owners_members or []:
            if member not in governance_members:
                governance_members[member] = ()
            governance_members[member] += (GovernanceEnum.SubnetOwner,)

        for member in triumvirate_members or []:
            if member not in governance_members:
                governance_members[member] = ()
            governance_members[member] += (GovernanceEnum.Triumvirate,)

        return list(governance_members.items())
  • call to vote as a subnet owner member
# bittensor/subtensor.py

class subtensor:
    
     # ...
    
     def vote_subnet_owner(
         self,
         wallet: "bittensor.wallet",
         proposal_hash: str,
         proposal_idx: int,
         vote: bool,
     ) -> bool:
        return vote_subnet_owner_extrinsic(...)
    
    def vote_senate_extrinsic(
        subtensor: "bittensor.subtensor",
        wallet: "bittensor.wallet",
        proposal_hash: str,
        proposal_idx: int,
        vote: bool,
        wait_for_inclusion: bool = False,
        wait_for_finalization: bool = True,
        prompt: bool = False,
    ) -> bool:
        r"""Votes ayes or nays on proposals."""
    
        if prompt:
            # Prompt user for confirmation.
            if not Confirm.ask("Cast a vote of {}?".format(vote)):
                return False
        
        # Unlock coldkey
        wallet.coldkey
    
        with bittensor.__console__.status(":satellite: Casting vote.."):
            with subtensor.substrate as substrate:
                # create extrinsic call
                call = substrate.compose_call(
                    call_module="SubtensorModule",
                    call_function="subnet_owner_vote",
                    call_params={ 
                        "proposal": proposal_hash,
                        "index": proposal_idx,
                        "approve": vote,
                    },
                )

                # Sign using coldkey 
                
                # ...
    
                bittensor.__console__.print(
                    ":white_heavy_check_mark: [green]Vote cast.[/green]"
                )
                return True
  • call to vote as a governance member
# bittensor/subtensor.py

class subtensor:
    
     # ...
    
     def vote_governance(
         self,
         wallet: "bittensor.wallet",
         proposal_hash: str,
         proposal_idx: int,
         vote: bool,
         group_choice: Tuple[GovernanceEnum, ...],
     ) -> Tuple[bool, ...]:
        result = []
        for group in group_choice:
            if GovernanceEnum.Senate == group:
                result.append(self.vote_senate(...))
            if GovernanceEnum.Triumvirate == group:
                result.append(self.vote_triumvirate(...))
            if GovernanceEnum.SubnetOwner == group:
                result.append(self.vote_subnet_owner(...))

        return tuple(result)
  • move voting to a governance command
# bittensor/cli.py
# bittensor/commands/senate.py -> bittensor/commands/governance.py

COMMANDS = {
    "governance": {
        "name": "governance",
        "aliases": ["g", "gov"],
        "help": "Commands for managing and viewing governance.",
        "commands": {
            "list": GovernanceListCommand,
            "senate_vote": SenateVoteCommand,
            "senate": SenateCommand,
            "owner_vote": OwnerVoteCommand,
            "proposals": ProposalsCommand,
            "register": SenateRegisterCommand, # prev: RootRegisterCommand
        },
    },
...
}
  • UI to vote as a governance member (now including subnet owners)
# bittensor/commands/governance.py

class VoteCommand:
    @staticmethod
    def run(cli: "bittensor.cli"):
        # ...

    @staticmethod
    def _run(cli: "bittensor.cli", subtensor: "bittensor.subtensor"):
        r"""Vote in Bittensor's governance protocol proposals"""
        wallet = bittensor.wallet(config=cli.config)
        
        # ...
        member_groups = subtensor.get_governance_groups(hotkey, coldkey)
        if len(member_groups) == 0:
            # Abort; Not a governance member
            return

        elif len(member_groups) > 1:  # belongs to multiple groups
            # Ask which group(s) to vote as
            group_choice = ask_group_select(member_groups)

        else:  # belongs to only one group
            group_choice = member_groups

        # ...
        
        subtensor.governance_vote( 
            wallet=wallet,
            proposal_hash=proposal_hash,
            proposal_idx=vote_data["index"],
            vote=vote,
            group_choice=group_choice,
        )
    
     # ...

    @classmethod
    def add_args(cls, parser: argparse.ArgumentParser):
        vote_parser = parser.add_parser(
            "vote", help="""Vote on an active proposal by hash."""
        )
        vote_parser.add_argument(
            "--proposal",
            dest="proposal_hash",
            type=str,
            nargs="?",
            help="""Set the proposal to show votes for.""",
            default="",
        )
        bittensor.wallet.add_args(vote_parser)
        bittensor.subtensor.add_args(vote_parser)
  • UI to list all governance members
# bittensor/commands/governance.py

class GovernanceMembersCommand:
    # ...

    @staticmethod
    def _run(cli: "bittensor.cli", subtensor: "bittensor.subtensor"):
        r"""View Bittensor's governance protocol members"""
        
        # ...

        governance_members = subtensor.get_governance_members()

        table = Table(show_footer=False)
        table.title = "[white]Governance Members"
        table.add_column(
            "[overline white]NAME",
            footer_style="overline white",
            style="rgb(50,163,219)",
            no_wrap=True,
        )
        table.add_column(
            "[overline white]ADDRESS",
            footer_style="overline white",
            style="yellow",
            no_wrap=True,
        )
        table.add_column(
            "[overline white]GROUP(S)",
            footer_style="overline white",
            style="yellow",
            no_wrap=True,
        )
        table.show_footer = True

        for ss58_address, groups in senate_members:
            table.add_row(
                (
                    delegate_info[ss58_address].name
                    if ss58_address in delegate_info
                    else ""
                ),
                ss58_address,
                " ".join(groups), # list all groups
            )

        table.box = None
        table.pad_edge = False
        table.width = None
        console.print(table)

    # ...

    @classmethod
    def add_args(cls, parser: argparse.ArgumentParser):
        member_parser = parser.add_parser(
            "members", help="""View all the governance members"""
        )

        bittensor.wallet.add_args(member_parser)
        bittensor.subtensor.add_args(member_parser)

TODO:

  • Python side of things
  • Senate registrations are currently via the root network. How does this change in a post-dTAO world?

UI and python client get the error types and error docs from metadata

It's pretty annoying that the UI has no way to interpret errors that haven't been hard-coded. An "Unknown error" message isn't a great user experience, and right now introducing new error types, or raising existing errors in new situations, creates a lot of churn with releases.

Substrate provides the metadata, which includes all info about errors, calls, and events.
First, we must guarantee all of them are well documented. This issue tracks checking the docs for all errors.

From this unit test file for the metadata check, the structure is clear enough for other languages like Python and JS to parse:
https://github.com/opentensor/subtensor/blob/development/runtime/tests/metadata.rs

  • add these as doc comments for the errors
  • add a unit test in the runtime to guarantee docs exist for newly added errors
  • profit?

fixes #375

Cargo run command missing a flag

Describe the bug

By following the README instructions for cargo run I get the following error:

root@b8032326feec:~/subtensor# cargo run  --release -- --dev
warning: profiles for the non root package will be ignored, specify profiles at the workspace root:
package:   /root/subtensor/runtime/Cargo.toml
workspace: /root/subtensor/Cargo.toml
error: `cargo run` could not determine which binary to run. Use the `--bin` option to specify a binary, or the `default-run` manifest key.
available binaries: integration-tests, node-subtensor

So I added the flag --bin node-subtensor and things work:

root@b8032326feec:~/subtensor# cargo run --release --bin node-subtensor -- --dev
warning: profiles for the non root package will be ignored, specify profiles at the workspace root:
package:   /root/subtensor/runtime/Cargo.toml
workspace: /root/subtensor/Cargo.toml
    Updating git repository https://github.com/paritytech/substrate.git
    Updating crates.io index
       Fetch [==========>              ]  46.22%, (114640/474454) resolving deltas


To Reproduce

I followed the main README steps for Linux

Expected behavior

Everything should install as expected when following the instructions

Screenshots

No response

Environment

Linux ubuntu

Additional context

No response

./target/release/subtensor: No such file or directory

Describe the bug

After building with the command : cargo build --release
And running the build with : ./target/release/subtensor --dev, I get the error: -bash: ./target/release/subtensor: No such file or directory. When I looked in the /target/release folder, there is no subtensor file, but there is a node-subtensor file.

To Reproduce

build with the command cargo build --release
run the build with the command ./target/release/subtensor --dev

Expected behavior

The documented run command should start the built node

Screenshots

No response

Environment

Linux Ubuntu

Additional context

No response

Add proxy pallet to the subtensor layer

Is your feature request related to a problem? Please describe.

Currently, the Subtensor layer doesn't include the proxy pallet, which is a key feature for securely delegating operations, such as staking, to a third party.

Describe the solution you'd like

Include the proxy pallet in the Subtensor layer.

Describe alternatives you've considered

No response

Additional context

No response

Running new local subtensor keeps waiting for peers

Describe the bug

Running a new local subtensor keeps waiting for peers when started with the mainnet lite node command.

btcli w overview returns a 0 balance,

while the previously running subtensor returns the correct balance.

2024-04-03 15:38:42 ⏩ Warping, Waiting for peers, 0.00 Mib (0 peers), best: #0 (0x2f05…6c03), finalized #0 (0x2f05…6c03), ⬇ 2.2kiB/s ⬆ 3.1kiB/s    
2024-04-03 15:38:47 ⏩ Warping, Waiting for peers, 0.00 Mib (0 peers), best: #0 (0x2f05…6c03), finalized #0 (0x2f05…6c03), ⬇ 1.9kiB/s ⬆ 2.6kiB/s    
2024-04-03 15:38:52 ⏩ Warping, Waiting for peers, 0.00 Mib (0 peers), best: #0 (0x2f05…6c03), finalized #0 (0x2f05…6c03), ⬇ 2.1kiB/s ⬆ 2.9kiB/s    
[... the same "Warping, Waiting for peers, 0.00 Mib (0 peers)" line repeats every 5 seconds ...]
2024-04-03 15:40:07 ⏩ Warping, Waiting for peers, 0.00 Mib (0 peers), best: #0 (0x2f05…6c03), finalized #0 (0x2f05…6c03), ⬇ 1.9kiB/s ⬆ 2.6kiB/s

To Reproduce

git clone https://github.com/opentensor/subtensor.git
sudo ./scripts/run/subtensor.sh -e docker --network mainnet --node-type lite

Expected behavior

btcli w overview # should return correct balance

Screenshots


Environment

ubuntu 22.02

Additional context

No response

Enhancement Proposal: Ensure Principle of Least Privilege Across Subtensor Deployments

Dear OpenTensor team,

I'm reaching out as a Coretex developer of the OpenTensor project to discuss a critical improvement in our Substrate deployment practices that I've identified and successfully tested. As part of my ongoing efforts to enhance the security posture of the OpenTensor project, a significant opportunity has been identified to align both our Docker and binary deployment methods with the Principle of Least Privilege (PoLP). This principle is a cornerstone of security and systems administration best practices, advocating for minimal user privileges to perform required tasks, thereby reducing the attack surface and potential impact of a compromise.

Currently, the service within the Docker container is configured to start and run as the root user, and similar privilege concerns apply to our binary deployment process. Furthermore, I have not observed the executable performing a privilege drop after initialization, suggesting that the process continues to run as root throughout its life cycle. This setup diverges from best practices by not minimizing the operational privileges of the service, potentially exposing it to unnecessary risks.

Upon further exploration and testing, I discovered that initializing and running the service as a non-privileged user within a Docker container does not adversely affect its operation, granted that the necessary file permissions have been applied before execution. This finding suggests that our service does not require root privileges for its initialization or runtime.

Implementing the Principle of Least Privilege by default in our Dockerfile could significantly mitigate potential security risks. Such risks include the escalation of privileges in the event of a vulnerability being exploited, which could lead to unauthorized access or control over the host machine or other containers.

In light of this, I propose the following changes to our Docker deployment methodology:

  1. Update our Dockerfile to create and use a non-root user for initializing and running the Subtensor service.
  2. Amend our documentation to reflect this change and emphasize the importance of following the Principle of Least Privilege when deploying Subtensor in all environments.

This enhancement will not only improve our project's security profile but also demonstrate our commitment to following best practices in software development and deployment.

I am eager to discuss this further and collaborate on implementing these changes. Your feedback and insights will be invaluable as we strive to make Subtensor safer and more resilient against potential threats.

Best Regards

safe sparse matrix type

We need a safe type that allows sparse matrix operations that will not panic / does not provide panicking operations or methods. Everything should be infallible or return a Result or Option.
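
To make the requirement concrete, here is a minimal API-shape sketch. The actual type is needed in Rust; Python is used here only for illustration, and the class and method names are hypothetical. The key property is that every accessor is total: out-of-range access returns None or a success flag instead of panicking/raising.

```python
class SafeSparseMatrix:
    """Illustrative panic-free sparse matrix: all operations are infallible
    or return an Option-like value (None / a bool), never raise."""

    def __init__(self, rows: int, cols: int):
        self.shape = (rows, cols)
        self._data = {}  # (row, col) -> value; absent entries are 0.0

    def get(self, r: int, c: int):
        # Total function: None for out-of-range indices, 0.0 for unset cells.
        if not (0 <= r < self.shape[0] and 0 <= c < self.shape[1]):
            return None
        return self._data.get((r, c), 0.0)

    def set(self, r: int, c: int, v: float) -> bool:
        # Returns False (never raises) when the indices are out of range.
        if not (0 <= r < self.shape[0] and 0 <= c < self.shape[1]):
            return False
        self._data[(r, c)] = v
        return True
```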

needed as part of #300

Priority too low when unstaking all

When trying to unstake all hotkeys I get this error sometimes.

SubstrateRequestException: {'code': 1014, 'message': 'Priority is too low: (18446744073709551615 vs 18446744073709551615)', 'data': 'The transaction has too low priority to replace another transaction already in the pool.'}

I need to restart the entire unstake all operation from the start. It would be great if the error can be caught and it will retry again after a short while.
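
A client-side sketch of the requested behavior: wrap the unstake submission and retry with backoff when the pool rejects it with the code-1014 "Priority is too low" error. LowPriorityError stands in for substrate-interface's SubstrateRequestException, and the wrapper itself is an assumption, not existing bittensor API:

```python
import time

class LowPriorityError(Exception):
    """Stand-in for SubstrateRequestException carrying code 1014."""

def retry_low_priority(submit, tries=5, delay=1.0, backoff=2.0):
    """Call `submit()`; on a 'Priority is too low' rejection, wait and
    retry with exponential backoff instead of aborting the whole run."""
    for attempt in range(tries):
        try:
            return submit()
        except LowPriorityError as e:
            # Re-raise unrelated errors, and the last failed attempt.
            if "Priority is too low" not in str(e) or attempt == tries - 1:
                raise
            time.sleep(delay)
            delay *= backoff
```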

Commit Reveal Weights

Description

The goal is to implement a commit-reveal scheme for submitting weights in the Subtensor module. This scheme will require validators to submit a hashed version of their weights along with a signature during the commit phase. After a specified number of blocks (reveal tempo), validators will reveal the actual weights, which will be verified against the commit hash.

Requirements

  1. Implement a commit_weights function that allows validators to submit a hashed version of their weights along with a signature during the commit phase.
  2. Implement a reveal_weights function that allows validators to reveal the actual weights after the specified reveal tempo.
  3. Ensure that commits and reveals are unique to each validator and include a signature to prevent copying.
  4. Make the commit-reveal scheme optional and configurable per subnet.
  5. Enforce the reveal tempo and allow commits only once within a tempo period.
  6. Implement helper functions for the commit-reveal process and background tasks.
  7. Provide clear error messages for issues like missing commits, hash mismatches, or timing violations.

Rust Implementation

Storage

Add the following storage item to store commit hashes, signatures, and block numbers per validator and subnet:

#[pallet::storage]
pub type WeightCommits<T: Config> = StorageDoubleMap<
    _,
    Twox64Concat,
    u16, // netuid
    Twox64Concat,
    T::AccountId, // validator
    (T::Hash, T::Signature, T::BlockNumber),
    OptionQuery, // OptionQuery: reveal_weights consumes the commit via take()
>;

commit_weights Function

Implement the commit_weights function for the commit phase:

#[pallet::call]
impl<T: Config> Pallet<T> {
    pub fn commit_weights(
        origin: T::RuntimeOrigin,
        netuid: u16,
        commit_hash: T::Hash,
        signature: T::Signature,
    ) -> DispatchResult {
        let who = ensure_signed(origin)?;
        ensure!(Self::can_commit(netuid, &who), Error::<T>::CommitNotAllowed);
        WeightCommits::<T>::insert(netuid, &who, (commit_hash, signature, <frame_system::Pallet<T>>::block_number()));
        Ok(())
    }
}

reveal_weights Function

Implement the reveal_weights function for the reveal phase:

pub fn reveal_weights(
    origin: T::RuntimeOrigin,
    netuid: u16,
    uids: Vec<u16>,
    values: Vec<u16>,
    version_key: u64,
) -> DispatchResult {
    // Clone the origin: the signed origin is reused for the inner do_set_weights call.
    let who = ensure_signed(origin.clone())?;
    WeightCommits::<T>::try_mutate_exists(netuid, &who, |maybe_commit| -> DispatchResult {
        let (commit_hash, signature, commit_block) = maybe_commit.take().ok_or(Error::<T>::NoCommitFound)?;
        ensure!(Self::is_reveal_block(netuid, commit_block), Error::<T>::InvalidRevealTempo);
        let provided_hash = T::Hashing::hash_of(&(uids.clone(), values.clone(), version_key));
        ensure!(provided_hash == commit_hash, Error::<T>::InvalidReveal);
        ensure!(Self::verify_signature(&who, &commit_hash, &signature), Error::<T>::InvalidSignature);
        Self::do_set_weights(origin, netuid, uids, values, version_key)
    })
}

Helper Functions

Implement helper functions for the commit-reveal process:

impl<T: Config> Pallet<T> {
    fn can_commit(netuid: u16, who: &T::AccountId) -> bool {
        // Check if commit-reveal is enabled for the subnet
        // Check if the validator hasn't committed within the current tempo
        // ...
    }

    fn is_reveal_block(netuid: u16, commit_block: T::BlockNumber) -> bool {
        // Check if the current block is within the reveal tempo
        // ...
    }

    fn verify_signature(who: &T::AccountId, commit_hash: &T::Hash, signature: &T::Signature) -> bool {
        // Verify the provided signature against the commit hash and validator's public key
        // ...
    }
}

Implement background tasks for cleaning up expired commits and managing the commit-reveal process.

Python Implementation

commit_weights Function

Add to bittensor/subtensor.py:

def commit_weights(
    self,
    wallet: "bittensor.wallet",
    netuid: int,
    commit_hash: str,
    signature: str,
    wait_for_inclusion: bool = False,
    wait_for_finalization: bool = False,
) -> Tuple[bool, Optional[str]]:
    @retry(delay=2, tries=3, backoff=2, max_delay=4)
    def make_substrate_call_with_retry():
        with self.substrate as substrate:
            call = substrate.compose_call(
                call_module="SubtensorModule",
                call_function="commit_weights",
                call_params={
                    "netuid": netuid,
                    "commit_hash": commit_hash,
                    "signature": signature,
                },
            )
            extrinsic = substrate.create_signed_extrinsic(call=call, keypair=wallet.coldkey)
            response = substrate.submit_extrinsic(
                extrinsic,
                wait_for_inclusion=wait_for_inclusion,
                wait_for_finalization=wait_for_finalization,
            )
            if not wait_for_finalization and not wait_for_inclusion:
                return True, None
            response.process_events()
            if response.is_success:
                return True, None
            else:
                return False, response.error_message

    return make_substrate_call_with_retry()
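
Before calling commit_weights, the client must produce commit_hash exactly as the chain recomputes it in reveal_weights via T::Hashing::hash_of(&(uids, values, version_key)). A minimal sketch, assuming BlakeTwo256 (Substrate's default hasher) and standard SCALE encoding of (Vec<u16>, Vec<u16>, u64); the exact on-chain encoding must be verified against the runtime before relying on this:

```python
import hashlib

def scale_compact_u32(n: int) -> bytes:
    # SCALE compact encoding; single-byte mode covers n < 64, two-byte
    # mode n < 2**14 -- enough for weight-vector lengths in this sketch.
    if n < 1 << 6:
        return bytes([n << 2])
    if n < 1 << 14:
        return ((n << 2) | 0b01).to_bytes(2, "little")
    raise ValueError("length too large for this sketch")

def commit_hash(uids, values, version_key) -> bytes:
    # SCALE-encode the tuple (Vec<u16>, Vec<u16>, u64), then Blake2-256 it.
    enc = scale_compact_u32(len(uids))
    for u in uids:
        enc += u.to_bytes(2, "little")
    enc += scale_compact_u32(len(values))
    for v in values:
        enc += v.to_bytes(2, "little")
    enc += version_key.to_bytes(8, "little")
    return hashlib.blake2b(enc, digest_size=32).digest()
```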

reveal_weights Function

Add to bittensor/subtensor.py:

def reveal_weights(
    self,
    wallet: "bittensor.wallet",
    netuid: int,
    uids: List[int],
    values: List[int],
    version_key: int,
    wait_for_inclusion: bool = False,
    wait_for_finalization: bool = False,
) -> Tuple[bool, Optional[str]]:
    @retry(delay=2, tries=3, backoff=2, max_delay=4)
    def make_substrate_call_with_retry():
        with self.substrate as substrate:
            call = substrate.compose_call(
                call_module="SubtensorModule",
                call_function="reveal_weights",
                call_params={
                    "netuid": netuid,
                    "uids": uids,
                    "values": values,
                    "version_key": version_key,
                },
            )
            extrinsic = substrate.create_signed_extrinsic(call=call, keypair=wallet.coldkey)
            response = substrate.submit_extrinsic(
                extrinsic,
                wait_for_inclusion=wait_for_inclusion,
                wait_for_finalization=wait_for_finalization,
            )
            if not wait_for_finalization and not wait_for_inclusion:
                return True, None
            response.process_events()
            if response.is_success:
                return True, None
            else:
                return False, response.error_message

    return make_substrate_call_with_retry()

Helper Functions

Add to bittensor/subtensor.py:

def can_commit(self, netuid: int, who: str) -> bool:
    # Check if commit-reveal is enabled for the subnet
    # Check if the validator hasn't committed within the current tempo
    # ...
    pass

def is_reveal_block(self, netuid: int, commit_block: int) -> bool:
    # Check if the current block is within the reveal tempo
    # ...
    pass

def verify_signature(self, who: str, commit_hash: str, signature: str) -> bool:
    # Verify the provided signature against the commit hash and validator's public key
    # ...
    pass

Implement background tasks for cleaning up expired commits and managing the commit-reveal process.

Error Handling

Provide clear error messages in bittensor/errors.py for various scenarios, such as:

  • Missing commits when revealing weights
  • Mismatched commit hash when revealing weights
  • Invalid signature when committing or revealing weights
  • Attempting to commit weights more than once within a tempo period
  • Attempting to reveal weights outside the reveal tempo

Testing

  • Write unit tests to cover the functionality of the commit_weights and reveal_weights functions in both Rust and Python.
  • Write integration tests to ensure the commit-reveal scheme works as expected with the Subtensor module.
  • Test various error scenarios and ensure appropriate error messages are provided.

Documentation

  • Update the Subtensor module's documentation to include information about the commit-reveal scheme.
  • Provide examples and guidelines for subnet owners to configure and use the commit-reveal scheme effectively.
  • Update the Subtensor Python module's documentation with information and examples about the commit-reveal scheme.

Const

  • Python side of the code: write the subtensor.commit_weights extrinsic
  • Commits takes the reveal tempo, how long after in terms of blocks before we
  • We want the owner to be able to set the value: the minimum number of blocks that must pass before you reveal.
  • We want to make them subtensor primitives
  • subtensor helper commit reveal + set background process
  • We want to be careful with tempo and only allow it to be called once within a tempo
  • focus on the primitives

Vune

  • We should use signatures of weights
  • make it optional

Integration Testing

Description

To ensure continuous reliability between our Subtensor and the Bittensor package, we need to implement a comprehensive GitHub Actions workflow. This workflow will automate the entire testing process, from building the blockchain node using the localnet.sh script, to installing the Bittensor package from a configurable branch, and finally running the test_subtensor_integration.py integration test.

The primary objective of this setup is to verify that any changes introduced to the subtensor codebase do not break or introduce regressions in the Bittensor Python code. By parameterizing the Bittensor repository branch, we can test against various development stages and release candidates, ensuring compatibility and robustness across different versions.

Acceptance Criteria

  • The GitHub Actions workflow should accept the Bittensor repository branch as a configurable input parameter.
  • The workflow should trigger automatically on push or pull request events to specified branches, with an option for manual triggers.
  • It should build the latest version of our blockchain node using the localnet.sh script.
  • The Bittensor package should be fetched and installed from the specified branch of the GitHub repository.
  • The test_subtensor_integration.py integration test should be executed after successful installation of the Bittensor package.
  • Test results, including any failures or errors, should be prominently reported within the GitHub Actions workflow.
  • The workflow should gracefully handle and report any errors encountered during the build, installation, or testing process.

Tasks

  • Design and implement a GitHub Actions workflow file in the .github/workflows directory.
    • Define a workflow trigger based on push or pull request events to specified branches.
    • Include an option for manual workflow triggers.
    • Add an input parameter for specifying the Bittensor repository branch.
  • Integrate the localnet.sh script into the workflow for building and starting the blockchain nodes.
    • Ensure the script is executed with the appropriate parameters and environment variables.
    • Handle any errors or failures during the node build and startup process.
  • Implement steps to fetch and install the Bittensor package from the specified branch.
    • Use the input parameter to dynamically set the branch for Bittensor package installation.
    • Handle any dependencies or setup required for the Bittensor package.
  • Configure the environment and execute the test_subtensor_integration.py integration test.
    • Set up any necessary environment variables or configurations for the test.
    • Run the integration test and capture the test results.
  • Implement comprehensive error handling and reporting throughout the workflow.
    • Catch and handle any errors or exceptions that may occur during each step.
    • Provide clear and informative error messages in the workflow logs.
  • Optimize the workflow for performance and efficiency.
    • Implement caching mechanisms for dependencies and build artifacts.
    • Parallelize independent tasks wherever possible.
  • Optional: Implement notifications or integrate with monitoring tools for test failures or critical issues.
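
The tasks above can be sketched as a workflow file. This is an illustrative, untested outline: the file name, the test path, and the assumption that localnet.sh can be backgrounded are all placeholders to be replaced by the real setup.

```yaml
# Sketch of .github/workflows/integration-tests.yml (illustrative only)
name: Bittensor integration tests
on:
  push:
    branches: [main, development]
  pull_request:
  workflow_dispatch:
    inputs:
      bittensor-branch:
        description: "Bittensor branch to test against"
        required: false
        default: "master"
jobs:
  integration:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build and start local chain
        # Backgrounded; a real workflow should poll the RPC port until ready.
        run: ./scripts/localnet.sh &
      - name: Install Bittensor from the requested branch
        run: pip install "git+https://github.com/opentensor/bittensor.git@${{ github.event.inputs.bittensor-branch || 'master' }}"
      - name: Run the integration test
        run: pytest tests/test_subtensor_integration.py
```

The workflow_dispatch input gives the configurable branch parameter; push/pull_request triggers cover the automatic runs.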

Additional Considerations

  • Utilize Docker containers within the GitHub Actions workflow to provide a consistent and isolated environment for building, installing, and testing.
  • Implement a matrix strategy to test against multiple versions or configurations of the Bittensor package and subtensor node.
  • Consider integrating code coverage reporting to monitor and maintain high test coverage.
  • Explore opportunities for performance optimization, such as parallel test execution or selective test runs based on changed files.

Related Links

Incorrect miner deregistration priority

Describe the bug

look at steps to reproduce and expected behavior

To Reproduce

  1. Register miners 1-N that do nothing, where N is greater than the subnet limit (currently 256)
  2. See that the first miner that gets deregistered is not necessarily the oldest registered miner

Expected behavior

The miners for deregistration should be sorted by (emission, registration_time), so if two miners have the same emission, the one that registered earlier should be ejected first. It was observed in the wild on sn12 mainnet that this is not the case: in the presence of 120 miners that do nothing, a relatively fresh one was ejected a day after it registered.
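
The expected ordering can be pinned down with a tiny sketch (field names are illustrative): sort ascending by (emission, registration_block), so that among equal-emission miners the oldest registrant is ejected first.

```python
def deregistration_order(miners):
    """miners: list of dicts with 'uid', 'emission', 'registered_at'
    (registration block number). Returns miners in ejection order:
    lowest emission first; ties broken by earliest registration."""
    return sorted(miners, key=lambda m: (m["emission"], m["registered_at"]))
```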

Screenshots

No response

Environment

linux

Additional context

No response

Create RPCs for Fetching Dynamic Pool Info

Description

The python package currently makes 4 queries to subtensor in order to retrieve the dynamic pool info, which leads to higher latency.

This issue proposes reducing that latency by implementing subtensor APIs that return this information directly.

Acceptance Criteria

  • The python package should be able to retrieve dynamic pool info for a single pool given a netuid.
  • The python package should be able to retrieve the dynamic pool info for all the pools

Tasks

  • Implement dynamic_pool_info: Returns the pool info for a single pool given the netuid
  • Implement dynamic_pool_infos: Returns dynamic pool info for all the pools.
  • Update python package

Changing Delegate Take

Description

Currently, the delegation rewards distribution system in our blockchain project has a limitation where delegates' commission rates (takes) are hardcoded at 18%. This inflexibility prevents delegates from adjusting their commission rates based on market conditions and their individual strategies.

As a result, delegates often resort to off-chain agreements and rebate systems to attract delegators and remain competitive. This creates market inefficiencies and hinders the overall user experience within our ecosystem.

Implementing the ability for delegates to alter their commission rates within the defined range would provide significant value to both delegates and delegators, enabling them to make informed decisions and adapt to evolving market dynamics.

Acceptance Criteria

  • Delegates should be allowed to set their commission rates (takes) within the range of 0% to 18%.
  • To prevent frequent changes and ensure stability, delegates should only be allowed to increase their commission rates once every 30 days (monthly).
  • The new commission rate should be applied to future rewards distributions and not affect previously distributed rewards.

Tasks

  • Implement a function in the delegation contract to allow delegates to change their commission rates (takes) within the specified range.
  • Add necessary checks to ensure that delegates can only increase their commission rates once every 30 days.
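
The acceptance criteria above can be expressed as a pure check. This is a sketch, not the pallet logic: the 0-18% range comes from the criteria, while the 12-second block time used to translate "30 days" into blocks is an assumption.

```python
SECONDS_PER_BLOCK = 12  # assumption for illustration
BLOCKS_PER_30_DAYS = 30 * 24 * 60 * 60 // SECONDS_PER_BLOCK  # 216_000
MAX_TAKE = 0.18

def can_change_take(new_take, old_take, last_increase_block, current_block):
    """Decreases are always allowed; increases at most once per 30 days."""
    if not (0.0 <= new_take <= MAX_TAKE):
        return False
    if new_take <= old_take:
        return True
    return current_block - last_increase_block >= BLOCKS_PER_30_DAYS
```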

Additional Considerations

  • Do we need some notification system to broadcast delegate rate changes?

Related Links

enforce no unwrap/panic in critical paths

Right now it is possible to have code that panics in pallets, extrinsics, etc., which can brick the chain. Ideally we disallow this at the clippy linting level so the CI will not allow such code to be merged. This is a tall order, because there are a bunch of instances currently where we do panic, so these all need to be fixed before this CI change will pass.

AC:

  • fix any existing unwrap()s
  • fix any existing expect()s
  • fix any existing unwrap_err()s
  • fix any existing panic!s
  • fix any existing unreachable!()s
  • fix any existing unimplemented!()s
  • prevent unwrap()s in CI
  • prevent expect()s in CI
  • prevent unwrap_err()s in CI
  • prevent panic!s in CI
  • prevent unreachable!()s in CI
  • prevent unimplemented!()s in CI
  • #301
  • #303
  • fix any existing panicking array indexing operations (requires #301)
  • prevent panicking array indexing operations in CI (if possible)
  • Eventually once things are locked down enough, we might be able to better enforce some of these constraints by having a whitelist of types that are allowed to be returned by an extrinsic in our pallets and then a simple attribute / visitor pattern that enforces this at the pallet function signature level. The rule would be that none of these types are allowed to have a method or op that gets by CI that could panic.
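
One way to implement the "prevent in CI" items is a workspace-level clippy lint table (supported since Rust 1.74), assuming CI runs cargo clippy with warnings denied. The lint names below are real clippy lints (unwrap_used also covers unwrap_err); whether to apply them workspace-wide or only to pallet crates is a project decision.

```toml
# Sketch: workspace Cargo.toml lint table; member crates opt in with
# `[lints] workspace = true`.
[workspace.lints.clippy]
unwrap_used = "deny"
expect_used = "deny"
panic = "deny"
unreachable = "deny"
unimplemented = "deny"
indexing_slicing = "deny"  # panicking array indexing (needs #301 first)
```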

Bittensor Archive node stuck at block 2585474

Describe the bug

Hi, I'm currently operating a Bittensor node, and it seems to be stuck at block 2585474.

The logs indicate that the best block remains at 2585476, while the finalized block remains at 2585474. Does anyone have any suggestions on how to resolve this issue?

Additionally, I would greatly appreciate it if someone could provide the p2p address of a node that has successfully passed this block. I'd like to try using it as a peer for my node to see if it helps resolve the problem.

Thank you in advance for any assistance!

To Reproduce

  1. Run a bittensor archive node
    with parameters
--base-path=/chain-data
--rpc-cors=all
--port=20540
--rpc-port=9933
--ws-port=9944
--ws-external
--rpc-external
--node-key=bb86e433fe0f1f662a6fdf93211d21fa1e72537865f2f58ddec8e63d6eab3348
--pruning=archive
--rpc-methods=Unsafe
--in-peers=25
--out-peers=25
--prometheus-external
--chain=/raw_spec.json
--in-peers-light=0
--max-runtime-instances=128
--ws-max-connections=10000

Expected behavior

Expect the node to keep syncing the latest block

Screenshots

No response

Environment

Ubuntu VERSION="20.04.6 LTS (Focal Fossa)"

Additional context

No response

Upgrade to Polkadot 1.0

upgrades subtensor to polkadot 1.0.0

AC:

  • breaking changes for 0.9.39 => 1.0 (tons of stuff, but one bullet point now lol)
  • revamp CI, now broken into 6 separate jobs, takes no more than ~6-7 mins to complete, cargo fmt feedback in ~22 seconds, cargo check --workspace in about 3 mins 😎, also added CI checks for clippy and one that asserts that cargo fix has no trivial fixes available
  • cargo check passing
  • cargo check --workspace passing
  • cargo test compiling
  • cargo test --workspace compiling
  • cargo test --workspace passing
  • cargo test --workspace --features=runtime-benchmarks compiling
  • cargo test --workspace --features=runtime-benchmarks passing

Add subnet specific take

Description

We aim to fully integrate the functionality for delegates to set and adjust their take values per subnet, including the ability to increase or decrease these values. This involves updating the do_become_delegate function, modifying the existing DelegateInfo struct to include subnet-specific take values, and ensuring that delegates can dynamically adjust their take values for specific subnets.

By providing delegates with the flexibility to manage their take values on a per-subnet basis, we enable them to tailor their participation and rewards strategy according to their preferences and the unique characteristics of each subnet.

Acceptance Criteria

  • The DelegateInfo struct should be modified to include a new field subnet_takes of type Vec<(Compact<u16>, Compact<u16>)> to store subnet-specific take values.
  • The do_become_delegate function should be updated to accept a list of subnet IDs and their corresponding take values, allowing delegates to set initial take values for multiple subnets upon becoming a delegate.
  • A storage migration should be implemented to handle the addition of the subnet_takes field to the DelegateInfo struct, ensuring a smooth transition for existing delegates.
  • The do_increase_take and do_decrease_take functions should be adjusted to handle subnet-specific take adjustments, ensuring that increases or decreases are performed within the constraints of the system's rules.
  • Proper validation should be in place to ensure that the specified subnets exist and that the new take values adhere to the system's constraints.
  • The updated functionality should be thoroughly tested to verify that delegates can successfully set and adjust their take values per subnet, and that the changes are accurately reflected in the storage and overall system behavior.

Tasks

  • Modify the DelegateInfo struct in lib.rs to include the subnet_takes field.
#[derive(Decode, Encode, PartialEq, Eq, Clone, Debug)]
pub struct DelegateInfo<T: Config> {
    // ...
    subnet_takes: Vec<(Compact<u16>, Compact<u16>)>, // New field: Vec of (subnet ID, take value)
}
  • Implement a storage migration to handle the addition of the subnet_takes field to the DelegateInfo struct.
  • Update the do_become_delegate function in staking.rs to accept a list of subnet IDs and their corresponding take values.
// Update the do_become_delegate function
pub fn do_become_delegate(
    origin: T::RuntimeOrigin,
    hotkey: T::AccountId,
    subnet_takes: Vec<(u16, u16)>,
) -> dispatch::DispatchResult {
    // ...
    let mut delegate_info = DelegateInfo {
        // ...
        subnet_takes: subnet_takes.iter().map(|(id, take)| (Compact(*id), Compact(*take))).collect(),
    };
    // ...
}
  • Modify the do_increase_take function in staking.rs to handle subnet-specific take increases.
  • Modify the do_decrease_take function in staking.rs to handle subnet-specific take decreases.
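The adjustment rules above can be sketched outside the pallet as a simplified, standalone model. The `SubnetTakes` type, the `MAX_TAKE` ceiling, and the helper names below are illustrative assumptions, not the actual pallet code:

```rust
// Assumed ceiling on any take value (illustrative; the pallet defines its own).
const MAX_TAKE: u16 = 11_796;

// Mirrors the proposed subnet_takes field: a Vec of (subnet ID, take value).
#[derive(Debug, Default)]
struct SubnetTakes(Vec<(u16, u16)>);

impl SubnetTakes {
    fn get(&self, netuid: u16) -> Option<u16> {
        self.0.iter().find(|(id, _)| *id == netuid).map(|(_, t)| *t)
    }

    // Decreases are always allowed, but only for an existing subnet entry
    // and only if the new take is strictly lower.
    fn decrease(&mut self, netuid: u16, new_take: u16) -> bool {
        match self.0.iter_mut().find(|(id, _)| *id == netuid) {
            Some(entry) if new_take < entry.1 => { entry.1 = new_take; true }
            _ => false,
        }
    }

    // Increases must stay at or below the system ceiling.
    fn increase(&mut self, netuid: u16, new_take: u16) -> bool {
        if new_take > MAX_TAKE { return false; }
        match self.0.iter_mut().find(|(id, _)| *id == netuid) {
            Some(entry) if new_take > entry.1 => { entry.1 = new_take; true }
            _ => false,
        }
    }
}

fn main() {
    let mut takes = SubnetTakes(vec![(1, 5_000), (3, 9_000)]);
    assert!(takes.increase(1, 8_000));   // within the ceiling: accepted
    assert!(!takes.increase(3, 20_000)); // above MAX_TAKE: rejected
    assert!(takes.decrease(1, 1_000));   // decreases always allowed
    assert_eq!(takes.get(1), Some(1_000));
    println!("final takes: {:?}", takes.0);
}
```

In the real pallet, the existence check on the subnet ID would also consult network storage, and rate limits would apply as with the current global take.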

Related Links

Latest status: Subtensor node installation instructions

Describe the bug

I am opening this issue to report the latest status on subtensor node installation steps:

  1. In this repo this section of the README needs fixing: https://github.com/opentensor/subtensor?tab=readme-ov-file#run-in-docker (there is no script called ./scripts/docker_run.sh).
  2. Instructions in this Discord announcement thread are cleaner and they are working. I tested them okay. https://discord.com/channels/799672011265015819/830075335084474390/1219185730564390922
  3. These instructions in this repo need fixing (it says "docker" for compiling with source code): https://github.com/opentensor/subtensor/blob/development/docs/running-subtensor-locally.md#running-the-node
  4. For my docs at docs.bittensor.com, I cleaned up all this, removed the script-based approach from the source code option, and created a new Subtensor section: opentensor/developer-docs#192. I used the Discord announcement instructions in these docs.
  5. If you are okay with it, I can take a shot at fixing the README and other docs in this repo. I will wait for your signal.

To Reproduce

See above description.

Expected behavior

See above description.

Screenshots

No response

Environment

macOS

Additional context

No response

upgrade to latest polkadot-sdk version

We have upgraded to the final v1.0 release of Polkadot before the move to the monorepo; now it is time to upgrade to the latest polkadot-sdk version so we are fully up to date.

The latest is v1.9.0 as of writing. We are currently on v1.0.0.

AC:

  • upgraded to latest polkadot-sdk version
  • cargo check passing
  • cargo check --workspace passing
  • cargo test compiling
  • cargo test --workspace compiling
  • cargo test --workspace passing
  • cargo test --workspace --features=runtime-benchmarks compiling
  • cargo test --workspace --features=runtime-benchmarks passing

Improve Error Message for Rejected Extrinsics

Is your feature request related to a problem? Please describe.

The nodes currently reject some extrinsics (registration/stake/unstake) for rate-limiting and similar purposes.
However, the error message in every case is "Transaction would exhaust the block limits", which is neither descriptive nor informative.

Describe the solution you'd like

It would be good to be more informative to clients about why an extrinsic was rejected.
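One possible direction, sketched under assumptions (this is not the current implementation), is to return a distinct `InvalidTransaction::Custom(u8)` code per rejection reason from the transaction-validation logic, so clients can distinguish rate limiting from genuine block-limit exhaustion. The enum and code values below are hypothetical:

```rust
// Hypothetical rejection reasons for the extrinsics mentioned above.
#[derive(Debug, PartialEq)]
enum RejectReason { RegistrationRateLimit, StakeRateLimit, UnstakeRateLimit }

// Stand-in for mapping a reason to
// sp_runtime::transaction_validity::InvalidTransaction::Custom(u8),
// which carries an app-specific byte back to the client.
fn custom_code(reason: RejectReason) -> u8 {
    match reason {
        RejectReason::RegistrationRateLimit => 1,
        RejectReason::StakeRateLimit => 2,
        RejectReason::UnstakeRateLimit => 3,
    }
}

fn main() {
    let code = custom_code(RejectReason::StakeRateLimit);
    assert_eq!(code, 2);
    println!("stake rate limit -> InvalidTransaction::Custom({})", code);
}
```

Clients would still need a published table mapping each custom byte to a human-readable message, but at least the cause would be distinguishable on the wire.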

Describe alternatives you've considered

No response

Additional context

No response

Eliminate warnings from compilation

Is your feature request related to a problem? Please describe.

I generally don't like (production) code that compiles with warnings, and I get quite a few of them compiling subtensor with default settings. Would it be an idea to eliminate them? Most of the current warnings seem easy enough to fix, though some are hard to understand without in-depth knowledge of Rust and Substrate.

Describe the solution you'd like

I would like to work together on this, e.g. by submitting a PR and iteratively working toward a warning-free build. At some point the equivalent of -Werror (in C) could be enabled in the default build settings.

First I would like to know if such a PR has any chance of being accepted.

Describe alternatives you've considered

No response

Additional context

No response

scripts/localnet_setup.sh Not working

To recreate, clone subtensor and run scripts/localnet_setup.sh:

./scripts/localnet_setup.sh
*** Local testnet installation ***
Installing substrate support libraries
Substrate library script checksum not valid, exiting.

Convert to using balance locks in staking

Describe the bug

Currently staking is affected by the existential deposit because it uses Currency::withdraw (and Currency::deposit for unstaking). The balances pallet has a locking mechanism for this, which does not reduce the total balance of the account but effectively locks the currency.

The balance-handling logic for staking/unstaking is located in remove_balance_from_coldkey_account and add_balance_to_coldkey_account.
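The difference can be modeled in a simplified, standalone sketch (the numbers and field names are illustrative; the real change would presumably use the balances pallet's `LockableCurrency::set_lock`):

```rust
// Illustrative ED; the chain defines its own value.
const EXISTENTIAL_DEPOSIT: u64 = 500;

// Simplified account model: with Currency::withdraw, staking reduces `free`
// and can trip the existential deposit; with a balance lock, `free` is
// untouched and only the usable (unlocked) portion shrinks.
#[derive(Debug)]
struct Account { free: u64, locked: u64 }

impl Account {
    fn usable(&self) -> u64 { self.free.saturating_sub(self.locked) }

    // Lock-based staking: total balance stays intact, so the stake itself
    // can never dust-collect the account.
    fn stake_with_lock(&mut self, amount: u64) -> bool {
        if amount > self.usable() { return false; }
        self.locked += amount;
        true
    }
}

fn main() {
    let mut acct = Account { free: 1_000, locked: 0 };
    assert!(acct.stake_with_lock(1_000));      // full balance can be staked
    assert_eq!(acct.free, 1_000);              // total balance unchanged
    assert!(acct.free >= EXISTENTIAL_DEPOSIT); // account cannot be reaped
    println!("{:?}", acct);
}
```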

To Reproduce

Stake the full balance less transaction fees.
Result: the staked amount is reduced by the ED (or the account is wiped; unsure which).

Expected behavior

Staked amount can be full balance and account does not get dust-collected.

Screenshots

No response

Environment

Any

Additional context

No response

README.md cargo build command misses a flag

Describe the bug

Building subtensor leads to a non-functioning binary (with fatal errors such as "runtime requires function imports which are not present on the host: 'env:ext_benchmarking_current_time_version_1'"), which is solved by adding --features=runtime-benchmarks as mentioned in the Discord. It would be nice if the README reflected this requirement.

To Reproduce

  1. Go to https://github.com/opentensor/subtensor
  2. Follow the build instructions, especially cargo build --release
  3. Run the subtensor
  4. Observe it crashing with error "runtime requires function imports which are not present on the host: 'env:ext_benchmarking_current_time_version_1'"

Expected behavior

I expect a working binary after performing the build instructions.

Screenshots

No response

Environment

Linux Ubuntu

Additional context

It seems to be solved by adding --features=runtime-benchmarks to the cargo build command.

Outdated readme

Describe the bug

The Run in Docker section is outdated.

To Reproduce

Follow README section Run in Docker

subtensor/README.md

Lines 281 to 306 in 1d3cb71

### Run in Docker
First, install [Docker](https://docs.docker.com/get-docker/) and
[Docker Compose](https://docs.docker.com/compose/install/).
Then run the following command to start a single node development chain.
```bash
./scripts/docker_run.sh
```
This command will firstly compile your code, and then start a local development network. You can
also replace the default command
(`cargo build --release && ./target/release/node-template --dev --ws-external`)
by appending your own. A few useful ones are as follow.
```bash
# Run Substrate node without re-compiling
./scripts/docker_run.sh ./target/release/node-template --dev --ws-external
# Purge the local dev chain
./scripts/docker_run.sh ./target/release/node-template purge-chain --dev
# Check whether the code is compilable
./scripts/docker_run.sh cargo check
```

But there is no

./scripts/docker_run.sh

Expected behavior

The section correctly explains how to run subtensor in Docker; this is probably done with

docker compose up

Screenshots

No response

Environment

repo

Additional context

I think this is just an outdated README, but it's misleading.

v1.1.0 seems to break websocket support

Describe the bug

It appears that v1.1.0 does not support websockets, at least not through the --ws-port flag. This currently breaks the localnet setup script (./scripts/localnet.sh) and might have impacts upstream, as most of our services connect via websockets.

To Reproduce

  1. Clone the development branch
  2. Run ./scripts/localnet.sh

Expected behavior

Node runs

*** Binary compiled
*** Building chainspec...
2024-04-12 17:19:46 Building chain spec    
*** Chainspec built and output to file
*** Purging previous state...
*** Previous chainstate purged
*** Starting localnet nodes...
error: unexpected argument '--ws-port' found

  tip: a similar argument exists: '--rpc-port'

Usage: node-subtensor --bob --port <PORT> --rpc-port <PORT> <--chain <CHAIN_SPEC>|--dev|--base-path <PATH>|--log <LOG_PATTERN>...|--detailed-log-output|--disable-log-color|--enable-log-reloading|--tracing-targets <TARGETS>|--tracing-receiver <RECEIVER>>

For more information, try '--help'.
error: unexpected argument '--ws-port' found

  tip: a similar argument exists: '--rpc-port'

Screenshots

No response

Environment

M3 Max ,OSX

Additional context

No response

Trust minimized (light client) interconnect with Cosmos SDK

Is your feature request related to a problem? Please describe.

How to call subtensor from Cosmos SDK?

Describe the solution you'd like

Integrate the IBC pallet, which is a trust-minimized (light-client-based) way to call subtensor from Cosmos SDK and other IBC-enabled chains.

Here is an example: https://github.com/ggxchain/ibc/blob/main/ibc-ggx-cosmos%20ICF%20M3%20deliverabl.md

Describe alternatives you've considered

Offchain solutions.

Additional context

No response

Subnet Registration: lock -> some other mechanism

Is your feature request related to a problem? Please describe.

Currently, if a subnet owner registers on mainnet, the cost for the subnet is locked, but not recycled. So if the subnet fails, the only people significantly affected are the miner and validator operators, who have recycled TAO to register within the subnet. I feel that failing to launch a subnet and gain emissions from the root network should carry an actual price.

Describe the solution you'd like

I would like to see the subnet registration either:

  1. Recycled, instead of locked; or
  2. Locked, but then apportioned out to the participants in the subnet according to the last known emission schedule (18% for the owner, 82% for the node operators).

Describe alternatives you've considered

No response

Additional context

No response

deny unsafe math operations

As part of #300, we need to deny the ability to perform potentially panicking and/or overflowing operations on number types in subtensor.

AC:

  • fix existing offending overflow operations
  • fix existing offending float usages, if any
  • enforce #[deny(arithmetic_overflow)]
  • enforce clippy::integer_arithmetic
  • disallow the use of f32 / f64 entirely
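The replacement pattern for offending operations might look like the following standalone sketch (function names and values are illustrative); the lints themselves would be enabled crate-wide, e.g. via `#![deny(arithmetic_overflow)]` and the clippy arithmetic lint listed above:

```rust
// Checked arithmetic: returns None on overflow instead of panicking in
// debug builds or silently wrapping in release builds.
fn add_stake(total: u64, amount: u64) -> Option<u64> {
    total.checked_add(amount)
}

// Saturating arithmetic: clamps at u64::MAX rather than overflowing,
// for paths where an error return is not practical.
fn emission_share(emission: u64, take_bps: u64) -> u64 {
    emission.saturating_mul(take_bps) / 10_000
}

fn main() {
    assert_eq!(add_stake(u64::MAX, 1), None);          // overflow caught
    assert_eq!(add_stake(40, 2), Some(42));            // normal path
    assert_eq!(emission_share(1_000_000, 1_800), 180_000); // 18% in bps
    println!("checked and saturating arithmetic ok");
}
```

Integer basis points (as above) are also the usual substitute once f32/f64 are disallowed.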

On chain runtime specVersion (143) mismatches github runtime version (142)

Describe the bug

My subtensor node doesn't execute native code, apparently due to a version mismatch. The on-chain runtime version is specVersion=143 while the code on GitHub specifies 142:

spec_version: 142,

To Reproduce

Run node-subtensor with --execution native and observe that the native code is not run (e.g. when adding some extra debugging in local pallets).

Expected behavior

I expect the GitHub code to reflect the on-chain runtime in every respect, including the version number, and I expect node-subtensor to use the native runtime where possible when --execution native is specified.

Screenshots

No response

Environment

Linux Ubuntu

Additional context

No response

Add the origin AccountId to StakeAdded and StakeRemoved events

Is your feature request related to a problem? Please describe.

Currently the StakeAdded/StakeRemoved events do not include the origin that initiated the stake addition or removal. This makes it hard to track staking actions based on events and requires linking back to the extrinsic that triggered the event.

Describe the solution you'd like

Add the origin's AccountId to these events. The information is already available in the functions emitting the events, so it should be as simple as adding another field to the Events tuple.
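A minimal sketch of the shape of the change (field names and types are assumed; the real events would use the pallet's AccountId and balance types):

```rust
// Simplified stand-in for the pallet's Event enum: the origin coldkey is
// carried alongside the hotkey and amount, so indexers can attribute the
// action without resolving the originating extrinsic.
#[allow(dead_code)]
#[derive(Debug, PartialEq)]
enum Event {
    StakeAdded { coldkey: String, hotkey: String, amount: u64 },
    StakeRemoved { coldkey: String, hotkey: String, amount: u64 },
}

// Stand-in for the deposit_event call site, where the origin is already
// available and just needs to be included.
fn deposit_stake_added(coldkey: &str, hotkey: &str, amount: u64) -> Event {
    Event::StakeAdded { coldkey: coldkey.into(), hotkey: hotkey.into(), amount }
}

fn main() {
    let ev = deposit_stake_added("coldkey-addr", "hotkey-addr", 1_000);
    // The origin is now recoverable directly from the event.
    if let Event::StakeAdded { coldkey, .. } = &ev {
        assert_eq!(coldkey, "coldkey-addr");
    }
    println!("{:?}", ev);
}
```

Note that changing an event's field layout alters its SCALE encoding, so downstream consumers decoding the old tuple shape would need updating.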

Describe alternatives you've considered

No response

Additional context

No response

Issue setting up a node to run on testnet

Copied from @rajkaramchedu:

  1. I believe running subtensor node for mainnet is okay, no complaints there.
  2. But whenever a user wants to run the node on testnet, they report that for some reason it keeps connecting to mainnet.
  3. It may be as simple as having one of the nucleus team members look at the testchain command syntax in both the "using source" and "using docker" sections of the doc and help identify what's wrong with those command options. Maybe the bootnode public key is wrong for the testnet? https://docs.bittensor.com/subtensor-nodes/using-source#lite-node-on-testchain
    [3:51 PM]
    Latest issue the user is facing is from a few mins ago https://discord.com/channels/799672011265015819/1228380578949369947/1230596951381119079. This is all familiar, we've been getting these questions since almost 2+ weeks ago.
    [3:53 PM]
    This issue resolution alone will reduce a good number of subtensor node related questions in the community.
    Doc Issue on this opentensor/developer-docs#188

Unit test for runtime

Is your feature request related to a problem? Please describe.

Currently we don't have tests for the runtime to verify the pallets, their configuration, RPC, and so on.

Describe the solution you'd like

Follow the upstream approach; for reference, see https://github.com/polkadot-fellows/runtimes/blob/main/integration-tests/emulated/tests/collectives/collectives-polkadot/src/tests/fellowship_treasury.rs.

Describe alternatives you've considered

No response

Additional context

No response

Unable to connect local node to chain.

Describe the bug

I have recently spun up a new server following my usual process.
The issue is that I am unable to connect my local node to the chain. I cloned the latest repo and ran it in Docker, as I do for all my other servers.

I am unable to register a new miner or connect a miner to chain.

To Reproduce

git clone https://github.com/opentensor/subtensor.git
sudo ./scripts/run/subtensor.sh -e docker --network mainnet --node-type lite

Expected behavior

Note: my servers running an older version of subtensor have no issues at all. I am using the version released just after the chain went down.

Screenshots

No response

Environment

Ubuntu

Additional context

No response
