aleph-node's People

Contributors

artemis-cc, bartoszjedrzejewski, damianstraszak, dependabot[bot], deuszx, fbielejec, fixxxedpoint, ggawryal, h4nsu, ivan770, jamakase, joshuajbouw, kosiare, kostekiv, krzysztofziobro, lesniak43, maciejnems, maciejzelaszczyk, marcin-radecki, michalseweryn, mike1729, mikolajgs, obrok, piotrmocz, pmikolajczyk41, skrolikowski-10c, timorl, timorleph, woocash2, yithis

aleph-node's Issues

Documentation on how to use the Substrate-compatible finality-aleph consensus

I have read your original AlephBFT API documentation, but I noticed that you also have a Substrate-compatible version here. I wanted to know how to integrate that component into a Substrate-compatible node instead.

If this is the wrong place to ask such a question, then I will delete it and ask somewhere else.

Thanks in advance.

Thread '<unnamed>' panicked at 'Value must reside in 1 or 2 registers'

Describe the bug

The node panicked on restart with: Thread '<unnamed>' panicked at 'Value must reside in 1 or 2 registers'

To Reproduce

  • this is aleph-node's first crash since the node was started almost 2 weeks ago
  • around that time I was adjusting the vCPU count of a different VM on the same ESXi host; I'll be looking for relevant logs on the ESXi host as well - edit: done that, nothing out of the ordinary in those logs

Environment information:

  • OS: Ubuntu 22.04.1 LTS
  • RAM: 16 GB
  • Cores: 10 vCores
  • Type: VM on ESXi-7.0U3-18644231-standard
  • version: 34248bd (HEAD, tag: r-7.1, origin/release-7) Perform word splitting on custom arguments (#560)

Additional context

To summarize, there were 2 crashes:

  1. at 16:38:43 - the node was synced and operating normally - Thread 'tokio-runtime-worker' panicked at 'arithmetic operation overflow',
  2. at 18:58:29 - the node had just been restarted - Thread '<unnamed>' panicked at 'Value must reside in 1 or 2 registers'

After the 2nd crash, the VM was rebooted and the node was restarted without issue.

Attaching the node log (2nd crash):

aleph-node-2.log

docker build failed because .dockerignore excludes the default.nix file

[2022-04-26 17:50:00] Step 4/9 : COPY default.nix /node/
[2022-04-26 17:50:00] COPY failed: file not found in build context or excluded by .dockerignore: stat default.nix: file does not exist
script returned exit code 1

.dockerignore file:
**
!target/release/aleph-node
!docker
!local-tests/send-runtime/target/release/send_runtime
!docker-runtime-hook/entrypoint.sh
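
A minimal fix, assuming the Dockerfile at that step only needs default.nix itself from the build context, is to add one un-ignore line to .dockerignore:

!default.nix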

r-12.2 decode issue with subxt 0.30.1

Description of Issue

While testing r-12.2 against mainnet & testnet, I noticed the function staking().eras_validator_prefs(..) leads to a decoding error.

It seems that this error comes from subxt "0.30.1", which is used in r-12.2.

#[tokio::main]
async fn main() {
    use std::str::FromStr;

    use aleph_client::{api, AccountId, Connection};
    use subxt::utils::Static;

    // NODE_URL is assumed to be the websocket endpoint of the queried node.
    let conn = Connection::new(NODE_URL).await;

    let current_finalized = conn
        .get_finalized_block_hash()
        .await
        .unwrap();

    // Fetch the metadata at the finalized block and install it on the client.
    let metadata = conn
        .client
        .rpc()
        .metadata_legacy(Some(current_finalized))
        .await
        .unwrap();

    conn.client.set_metadata(metadata);

    // Storage address for Staking::ErasValidatorPrefs(era, validator).
    let storage_address = api::storage().staking().eras_validator_prefs(
        570,
        Static(AccountId::from_str("5Ggeo3mbNaiZZVE1RJaSKB4kD9qeqy78hRiFE6tWUbazCbPS").unwrap()),
    );

    let pref = conn
        .get_storage_entry_maybe(&storage_address, None)
        .await;

    pref.unwrap();
}

Error:

thread 'main' panicked at /Users/.cargo/git/checkouts/aleph-node-05aaac119b4e8869/337ac41/aleph-client/src/connections.rs:269:34:
Should access storage: Decode(Error { context: Context { path: [Location { inner: Field("commission") }] }, kind: VisitorDecodeError(Unexpected(U32)) })

Note

When testing the call with subxt 0.26.0, it works. This suggests there might be an issue with subxt 0.30.1.
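
For what it's worth, a quick way to confirm which subxt version the r-12.2 aleph-client actually pins (assuming a local checkout of this repository):

git checkout r-12.2
grep -n 'subxt' aleph-client/Cargo.toml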

docker build failed

$ docker build -t aleph-node/build -f nix/Dockerfile.build .
#10 69.77 [naersk] Crate was already unpacked at /nix/store/kwkd89q88xswd73k1kf44zjs9l73yhls-git-deps/sp-offchain-4.0.0-dev-98c2eeea74413044ae8ccfca1b6d56d01b57a76b
#10 69.77   0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
#10 69.77 100 16040  100 16040    0     0  65737      0 --:--:-- --:--:-- --:--:-- 65737
#10 69.84 building '/nix/store/riy97mg8fk7zy8mswb2l1spci4gf7ycn-download-dyn-clonable-0.9.0.drv'...
#10 69.85 [naersk] https://github.com/Cardinal-Cryptography/substrate.git Found crate 'sp-panic-handler-4.0.
#10 69.85 [output clipped, log limit 1MiB reached]
------
executor failed running [/bin/sh -c /node/nix-build.sh]: exit code: 100
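
Since the interesting part of the failure is hidden behind "[output clipped, log limit 1MiB reached]", a first step might be to re-run the build with plain BuildKit progress output and capture everything to a file; whether the real error then becomes visible is an assumption:

docker build --progress=plain -t aleph-node/build -f nix/Dockerfile.build . 2>&1 | tee build.log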

`scripts/run_nodes.sh` fails on 12.x

Hi,

I'm trying to set up a local development chain. With minor changes to scripts/run_nodes.sh it works for 11.4 but fails on the 12 branch. After producing some blocks (~20) it hangs. I also see that run_nodes.sh was completely rewritten on main. Could you suggest the correct configuration for 12.2?

System information:

$ uname -a
Linux ip-x-x-x-x 6.2.0-1017-aws #17~22.04.1-Ubuntu SMP Fri Nov 17 21:07:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
$ git status
HEAD detached at r-12.2
$ git diff
diff --git a/scripts/run_nodes.sh b/scripts/run_nodes.sh
index ca3e2047..bad03ae7 100755
--- a/scripts/run_nodes.sh
+++ b/scripts/run_nodes.sh
@@ -103,6 +103,7 @@ run_node() {
     --public-validator-addresses 127.0.0.1:${validator_port} \
     --validator-port ${validator_port} \
     --detailed-log-output \
+    --unsafe-rpc-external \
     -laleph-party=debug \
     -laleph-network=debug \
     -lnetwork-clique=debug \

Not able to make public RPC calls on aleph-nodes

As per the instructions I have aleph-node running (via scripts/run_nodes.sh). It is accessible through localhost, but on deployment to Docker/Kubernetes I am not able to connect to the network through the RPC URL (example: wss://node.mynetwork.in). Is there any flag missing while running the node so that I can access it publicly?
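
For reference, the flags that typically control external RPC access on Substrate-based nodes are --rpc-external (or --unsafe-rpc-external when the node also runs with --validator) and --rpc-cors; older releases that still expose a separate WebSocket server additionally have --ws-external / --unsafe-ws-external. Whether this particular aleph-node version needs any or all of them is an assumption:

  --unsafe-rpc-external \
  --rpc-cors all \

A wss:// URL additionally requires the RPC/WebSocket port to be published by the container and fronted by a TLS-terminating ingress or reverse proxy.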

Question: AlephBFT depends on Aura, so Aleph Zero is actually a blockchain. So what is the DAG consensus used for?

Hello guys,
I'm doing research on DAGs and blockchains. Your project looks promising in that regard, but I'm confused about the implementation in this repository.
In the FAQ on your official website, you mention that Aleph Zero uses a DAG consensus protocol so that multiple users can create blocks at the same time. But, as far as I can see, you are using blocks created by Substrate's Aura, which works as a blockchain protocol.
As I understand it, AlephBFT only finalizes the blocks created by Aura, so it cannot help us create multiple blocks at the same time. Could you clarify this in more detail?

Btw, also in the FAQ, you said that your project differs from Hedera Hashgraph in the number of validators. As far as I can see in this repository, you haven't implemented any validator election logic for either Aura or AlephBFT. Could you give me more details about this?

Shared IP address causes a collision between validator nodes

Dear team:

Please consider the following use case: using one machine to run validators on both the testnet and mainnet chains. Let's assume the bare-metal specifications are generous enough to leave plenty of resources for both nodes, and, since the two nodes belong to different chains, this setup should not affect the decentralisation of either individual chain.

Problem: at the moment, an error message is triggered when running a second --validator node using the same IPv4 address and different ports in the --validator-port flag:

 ====================
 Version: 0.8.0-a433672835f
    0: sp_panic_handler::set::{{closure}}
    1: std::panicking::rust_panic_with_hook
              at rustc/20ffea6938b5839c390252e07940b99e3b6a889a/library/std/src/panicking.rs:702:17
    2: std::panicking::begin_panic_handler::{{closure}}
              at rustc/20ffea6938b5839c390252e07940b99e3b6a889a/library/std/src/panicking.rs:588:13
    3: std::sys_common::backtrace::__rust_end_short_backtrace
              at rustc/20ffea6938b5839c390252e07940b99e3b6a889a/library/std/src/sys_common/backtrace.rs:138:18
    4: rust_begin_unwind
              at rustc/20ffea6938b5839c390252e07940b99e3b6a889a/library/std/src/panicking.rs:584:5
    5: core::panicking::panic_fmt
              at rustc/20ffea6938b5839c390252e07940b99e3b6a889a/library/core/src/panicking.rs:142:14
    6: core::result::unwrap_failed
              at rustc/20ffea6938b5839c390252e07940b99e3b6a889a/library/core/src/result.rs:1814:5
    7: finality_aleph::nodes::validator_node::run_validator_node::{{closure}}
    8: <futures_util::future::future::map::Map<Fut,F> as core::future::future::Future>::poll
    9: <sc_service::task_manager::prometheus_future::PrometheusFuture<T> as core::future::future::Future>::poll
   10: <futures_util::future::select::Select<A,B> as core::future::future::Future>::poll
   11: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
   12: std::thread::local::LocalKey<T>::with
   13: tokio::park::thread::CachedParkThread::block_on
   14: tokio::runtime::handle::Handle::block_on
   15: <tokio::runtime::blocking::task::BlockingTask<T> as core::future::future::Future>::poll
   16: tokio::runtime::task::harness::Harness<T,S>::poll
   17: tokio::runtime::blocking::pool::Inner::run
   18: std::sys_common::backtrace::__rust_begin_short_backtrace
   19: core::ops::function::FnOnce::call_once{{vtable.shim}}
   20: <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once
              at rustc/20ffea6938b5839c390252e07940b99e3b6a889a/library/alloc/src/boxed.rs:1935:9
       <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once
              at rustc/20ffea6938b5839c390252e07940b99e3b6a889a/library/alloc/src/boxed.rs:1935:9
       std::sys::unix::thread::Thread::new::thread_start
              at rustc/20ffea6938b5839c390252e07940b99e3b6a889a/library/std/src/sys/unix/thread.rs:108:17
   21: <unknown>
   22: <unknown>
 Thread 'tokio-runtime-worker' panicked at 'we should have working networking: Os { code: 98, kind: AddrInUse, message: "Address already in use" }', /home/user/aleph-node/finality-aleph/src/nodes/validator_node.rs:75
 This is a bug. Please report it at:
         docs.alephzero.org

The relevant flags of the first node are:

  --validator \
  --name <SOME_NAME> \
  --base-path '/var/lib/aleph-node/azero1' \
  --chain /var/lib/aleph-node/azero1/chainspec.json \
  --node-key-file /var/lib/aleph-node/azero1/p2p_secret \
  --backup-path /var/lib/aleph-node/azero1/backup-stash \
  --public-validator-addresses <SOME IP>:30345 \
  --port 30335 \
  --rpc-port 9935 \
  --ws-port 9946 \
  --prometheus-port 9617 \
  --database rocksdb \
  --state-pruning archive \
  --telemetry-url 'wss://telemetry.polkadot.io/submit/ 1' \
  --bootnodes <REDACTED> \
  --wasm-execution Compiled --execution native-else-wasm --pool-limit 100 --unit-creation-delay 300 --enable-log-reloading

and the second node has:

  --validator \
  --name SOME_OTHER_NODE \
  --base-path '/var/lib/aleph-node/tzero1' \
  --chain /var/lib/aleph-node/tzero1/chainspec.json \
  --node-key-file /var/lib/aleph-node/tzero1/p2p_secret \
  --backup-path /var/lib/aleph-node/tzero1/backup-stash \
  --public-validator-addresses <SAME IP>:30346 \
  --port 30336 \
  --rpc-port 9936 \
  --ws-port 9947 \
  --prometheus-port 9618 \
  --database rocksdb \
  --state-pruning archive \
  --telemetry-url 'wss://telemetry.polkadot.io/submit/ 1' \
  --bootnodes <REDACTED> \
  --wasm-execution Compiled --execution native-else-wasm --pool-limit 100 --unit-creation-delay 300 --enable-log-reloading

Please advise in case you require more details to reproduce this behaviour.

Many thanks!!

Milos
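
One detail that stands out from the two flag sets above: neither invocation passes --validator-port explicitly, even though each advertises a different port in --public-validator-addresses. A minimal sketch of the second node's networking flags with the validator port set explicitly (whether this is what resolves the AddrInUse panic is an assumption):

  --public-validator-addresses <SAME IP>:30346 \
  --validator-port 30346 \

The first node would correspondingly get --validator-port 30345, matching its own --public-validator-addresses entry.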

Substrate 3.0 release

Substrate 3.0 has been released, which may include some nice new features for us to use or update to.

From my understanding, it was mostly a large refactor of the library and its organization.
