nessus-cardano's Introduction

Nessus Cardano

With Nessus Cardano we explore, praise, comment on, and contribute to various technical aspects of Cardano. This is our contribution to "Making The World Work Better For All".

Initially, we focus on a "container first" approach for the Cardano node.

Running a Node

To get up and running with Cardano, you can spin up a node like this ...

docker run --detach \
    --name=relay \
    -p 3001:3001 \
    -v node-data:/opt/cardano/data \
    -v node-ipc:/opt/cardano/ipc \
    nessusio/cardano-node run

docker logs -f relay

This works on x86_64 and arm64.

The nessusio/cardano-node image is built from source in multiple stages and then assembled with Nix.

Running a Node on the Testnet

docker run --detach \
    --name=testrl \
    -p 3001:3001 \
    -e CARDANO_NETWORK=testnet \
    -v test-data:/opt/cardano/data \
    -v node-ipc:/opt/cardano/ipc \
    nessusio/cardano-node run

docker logs -f testrl

Accessing the built-in gLiveView

The image has gLiveView monitoring built in.

For a running container, you can do ...

docker exec -it relay gLiveView

Accessing the built-in topology updater

There is currently no P2P module activated in cardano-node 1.26.1. Your node may call out to well-known relay nodes, but you may never have incoming connections. To stay visible to other nodes, it is necessary to update your topology every hour. At the time of writing, cardano-node doesn't do this on its own.
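
Conceptually, each hourly update is just an HTTP request that announces your node and its current tip. A minimal sketch of how such a request URL could be assembled; the endpoint and parameter names follow the guild-operators topologyUpdater script and are assumptions here, not part of this image's contract:

```shell
# Sketch only: assemble the kind of URL the hourly topology updater calls.
# Endpoint and parameter names are assumptions based on the
# guild-operators topologyUpdater script.
build_update_url() {
  local port=$1 block_no=$2 valency=$3
  echo "https://api.clio.one/htopology/v1/?port=${port}&blockNo=${block_no}&valency=${valency}"
}
```

An actual update would then be a simple fetch of that URL, e.g. `curl -s "$(build_update_url 3001 5102089 1)"`.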

This functionality has been built into nessusio/cardano-node as well. Enable it like this, then check the hourly update results ...

docker run --detach \
    --name=relay \
    -p 3001:3001 \
    -e CARDANO_UPDATE_TOPOLOGY=true \
    -v node-data:/opt/cardano/data \
    nessusio/cardano-node run

$ docker exec -it relay tail /opt/cardano/logs/topologyUpdateResult
{ "resultcode": "201", "datetime":"2021-01-10 18:30:06", "clientIp": "209.250.233.200", "iptype": 4, "msg": "nice to meet you" }
{ "resultcode": "203", "datetime":"2021-01-10 19:30:03", "clientIp": "209.250.233.200", "iptype": 4, "msg": "welcome to the topology" }
{ "resultcode": "204", "datetime":"2021-01-10 20:30:04", "clientIp": "209.250.233.200", "iptype": 4, "msg": "glad you're staying with us" }

The topologyUpdater is triggered by CARDANO_UPDATE_TOPOLOGY. Without it, the cron job is not installed.
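
To check how recent updates went, you can count the successful (2xx) result codes in that log. A small sketch, assuming the JSON-lines format shown above:

```shell
# Count lines in topologyUpdateResult whose resultcode starts with 2 (success).
count_ok() {
  grep -c '"resultcode": "2' "$1"
}
```

Run it against a copy of the log, e.g. `docker cp relay:/opt/cardano/logs/topologyUpdateResult . && count_ok topologyUpdateResult`.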

External storage for block data

In this configuration, we map the node's --database-path to a mounted directory.

docker run --detach \
    --name=relay \
    -p 3001:3001 \
    -v /mnt/disks/data00:/opt/cardano/data \
    nessusio/cardano-node run

docker logs -f relay

Using custom configurations

Here we define a config volume called cardano-relay-config. It holds the mainnet-topology.json file that we set up externally. Note that our custom config lives in /var/cardano/config and not in the default location /opt/cardano/config.

docker run --detach \
    --name=relay \
    -p 3001:3001 \
    -e CARDANO_UPDATE_TOPOLOGY=true \
    -e CARDANO_CUSTOM_PEERS="$PRODUCER_IP:3001" \
    -e CARDANO_TOPOLOGY="/var/cardano/config/mainnet-topology.json" \
    -v cardano-relay-config:/var/cardano/config  \
    -v /mnt/disks/data00:/opt/cardano/data \
    nessusio/cardano-node run

docker logs -f relay
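
For reference, a minimal mainnet-topology.json for that volume might look like this; the producer address is a placeholder, and the exact entries are an example rather than a prescribed configuration:

```json
{
  "Producers": [
    {
      "addr": "<producer-ip>",
      "port": 3001,
      "valency": 1
    },
    {
      "addr": "relays-new.cardano-mainnet.iohk.io",
      "port": 3001,
      "valency": 2
    }
  ]
}
```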

Running a Block Producer Node

A producer node is configured in the same way as a relay node, except that it has some additional key/cert files configured and does not need to update its topology.

docker run --detach \
    --name=bprod \
    -p 3001:3001 \
    -e CARDANO_BLOCK_PRODUCER=true \
    -e CARDANO_TOPOLOGY="/var/cardano/config/mainnet-topology.json" \
    -e CARDANO_SHELLEY_KES_KEY="/var/cardano/config/keys/pool/kes.skey" \
    -e CARDANO_SHELLEY_VRF_KEY="/var/cardano/config/keys/pool/vrf.skey" \
    -e CARDANO_SHELLEY_OPERATIONAL_CERTIFICATE="/var/cardano/config/keys/pool/node.cert" \
    -v cardano-prod-config:/var/cardano/config  \
    -v /mnt/disks/data01:/opt/cardano/data \
    nessusio/cardano-node run

docker logs -f bprod

docker exec -it bprod gLiveView

Running the Cardano CLI

We can also use the image to run Cardano CLI commands.

For this to work, the node must share its IPC socket location, which can then be used in the alias definition.

alias cardano-cli="docker run -it --rm \
  -v ~/cardano:/var/cardano/local \
  -v node-ipc:/opt/cardano/ipc \
  nessusio/cardano-node cardano-cli"

cardano-cli query tip --mainnet
{
    "blockNo": 5102089,
    "headerHash": "e5984f27d1d3b5dcc296b33ccd919a28618ff2d77513971bd316cffd35afecda",
    "slotNo": 16910651
}
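
In scripts it is handy to pull a single field out of that JSON. A sketch using plain sed (jq is the more robust choice where available):

```shell
# Extract the slotNo value from `cardano-cli query tip` JSON on stdin.
tip_slot() {
  sed -n 's/.*"slotNo": *\([0-9]*\).*/\1/p'
}
```

For example, `cardano-cli query tip --mainnet | tip_slot` prints just the slot number.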

Docker Compose

We may sometimes prefer some middle ground between manually spinning up individual Docker containers and a full-blown enterprise Kubernetes deployment.

Perhaps we'd like to use Docker Compose.

$ docker-compose -f nix/docker/compose/cardano-node-relay.yaml up --detach

Creating relay ... done
Creating nginx ... done

$ docker-compose -f nix/docker/compose/cardano-node-bprod.yaml up --detach

Creating bprod ... done
Creating nginx ... done

For details you may want to have a look at nix/docker/compose/cardano-nodes.yaml.
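
For orientation, a relay service in such a compose file could look roughly like this. This is a sketch mirroring the docker run flags used above; service and volume names are assumptions, and the authoritative definition is in nix/docker/compose/cardano-node-relay.yaml:

```yaml
version: "3"

services:
  relay:
    image: nessusio/cardano-node
    container_name: relay
    command: run
    ports:
      - "3001:3001"
    environment:
      CARDANO_UPDATE_TOPOLOGY: "true"
    volumes:
      - node-data:/opt/cardano/data
      - node-ipc:/opt/cardano/ipc

volumes:
  node-data:
  node-ipc:
```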

Kubernetes

This project started as an incubator space for stuff that may eventually become available upstream. As part of this, we want to "provide high quality multiarch docker images and k8s support".

With Kubernetes as the de-facto standard for container deployment, orchestration, monitoring, scaling, etc., it should be as easy as this to run Cardano nodes ...

kubectl apply -f nix/docker/k8s/cardano-nodes.yaml

storageclass.storage.k8s.io/cardano-standard-rwo created
persistentvolumeclaim/relay-data created
pod/relay created
service/relay-np created
service/relay-clip created
persistentvolumeclaim/bprod-data created
pod/bprod created
service/bprod-clip created

For details you may want to have a look at nix/docker/k8s/cardano-nodes.yaml.

Leader logs

For a Stake Pool Operator it is important to know when the node is scheduled to produce the next block. We definitely want to be online at that important moment to fulfill our block-producing duties. There are better times to do node maintenance.

This important functionality has also been built into the nessusio/cardano-tools image.

First, let's define an alias and ping the node that we want to work with. Details about these commands are documented in the cncli project.

$ alias cncli="docker run -it --rm \
  -v ~/cardano:/var/cardano/local \
  -v cncli:/var/cardano/cncli \
  nessusio/cardano-tools cncli"

NODE_IP=192.168.0.30

cncli ping --host $NODE_IP
{
  "status": "ok",
  "host": "10.128.0.31",
  "port": 3001,
  "connectDurationMs": 0,
  "durationMs": 53
}

Syncing the database

This command connects to a remote node and synchronizes blocks to a local sqlite database.

$ cncli sync --host $NODE_IP \
  --db /var/cardano/cncli/cncli.db \
  --no-service

...
2021-03-04T10:23:19.719Z INFO  cardano_ouroboros_network::protocols::chainsync   > block 5417518 of 5417518, 100.00% synced
2021-03-04T10:23:23.459Z INFO  cncli::nodeclient::sync                           > Exiting...
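
If you drive the sync from a script, the progress percentage can be scraped from those log lines. A sketch, assuming the log format shown above:

```shell
# Print the sync percentage from cncli sync log lines on stdin.
sync_pct() {
  sed -n 's/.* \([0-9.]*\)% synced.*/\1/p'
}
```

For example, piping the sync output through `sync_pct | tail -1` yields the latest percentage.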

Slot Leader Schedule

We can now obtain the leader schedule for our pool.

STAKE_SNAPSHOT=$HOME/cardano/scratch/stake-snapshot.json
cardano-cli query stake-snapshot --stake-pool-id 9e8009b249142d80144dfb681984e08d96d51c2085e8bb6d9d1831d2 --mainnet | \
  tee $STAKE_SNAPSHOT

# Choose exactly one of the following ledger sets.

# Prev
LEDGER_SET=prev
POOL_STAKE=$(cat $STAKE_SNAPSHOT | jq .poolStakeGo)
ACTIVE_STAKE=$(cat $STAKE_SNAPSHOT | jq .activeStakeGo)

# Current
LEDGER_SET=current
POOL_STAKE=$(cat $STAKE_SNAPSHOT | jq .poolStakeSet)
ACTIVE_STAKE=$(cat $STAKE_SNAPSHOT | jq .activeStakeSet)

# Next
LEDGER_SET=next
POOL_STAKE=$(cat $STAKE_SNAPSHOT | jq .poolStakeMark)
ACTIVE_STAKE=$(cat $STAKE_SNAPSHOT | jq .activeStakeMark)

$ cncli leaderlog \
  --pool-id 9e8009b249142d80144dfb681984e08d96d51c2085e8bb6d9d1831d2 \
  --shelley-genesis /opt/cardano/config/mainnet-shelley-genesis.json \
  --byron-genesis /opt/cardano/config/mainnet-byron-genesis.json \
  --pool-vrf-skey /var/cardano/local/mainnet/keys/pool/vrf.skey \
  --db /var/cardano/cncli/cncli.db \
  --active-stake $ACTIVE_STAKE \
  --pool-stake $POOL_STAKE \
  --tz Europe/Berlin \
  --ledger-set $LEDGER_SET | tee leaderlog.json

cat leaderlog.json | jq -c ".assignedSlots[] | {no: .no, slot: .slotInEpoch, at: .at}"

{"no":1,"slot":165351,"at":"2021-02-26T20:40:42+01:00"}
{"no":2,"slot":312656,"at":"2021-02-28T13:35:47+01:00"}
{"no":3,"slot":330588,"at":"2021-02-28T18:34:39+01:00"}
{"no":4,"slot":401912,"at":"2021-03-01T14:23:23+01:00"}
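
A quick sanity check on that output is to count the assigned slots. A sketch, assuming the one-object-per-line format produced by the jq command above:

```shell
# Count assigned slots in a file of compact leaderlog entries (one per line).
assigned_count() {
  grep -c '"no":' "$1"
}
```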

Build the Images

In line with the upstream project, we also use a Nix-based build. The build requires Nix and a working Docker environment. This works on x86_64 and arm64.

Spinning up a build/test environment on GCE is documented separately.

To build all images and their respective dependencies, run ...

./build.sh all

Enjoy!

nessus-cardano's People

Contributors

asarco, dominik-gubrynowicz, flojon, jterrier84, tdiesler, wolf31o2

nessus-cardano's Issues

Block producer node prerequisites and stateful docker commands

Hi,
I have been going through your gitbook and managed to spin up 2 relays, very nice and easy, ty!

2 things I found:

1- before running the producer nodes, all the steps in here: https://docs.cardano.org/projects/cardano-node/en/latest/stake-pool-operations/getConfigFiles_AND_Connect.html need to be walked through, where your gitbook picks up at #9: "Start your nodes". I would suggest a reference to this guide on this page: https://tdiesler.gitbook.io/cardano/v/iohk/plain-docker/running-the-nodes

2- the running of stateful commands in the docker container as per https://tdiesler.gitbook.io/cardano/v/iohk/plain-docker/running-the-nodes under "Run stateful CLI commands" doesn't appear to be working (for me)

thanks

Simplify stop signal handling by using SIGINT directly

As @nyetwurk pointed out, we could use

    config = {
      Entrypoint = [ "entrypoint" ];
      StopSignal = "SIGINT";
    };

in the Nix build instead of using convoluted trap logic in the wrapper script that runs cardano-node as a sub-process in the background. However, if we do this we lose

Signalling for shutdown ...

I personally would prefer to make this change dependent on

  • #2648 Indicate graceful shutdown with a log message

Topology updater might not be working as expected

Hi,

First of all thanks for your work.

Let me state that I'm quite new to Cardano, so I might have some misconceptions.

I've managed to arrive at a working testnet setup (based on the dev tag), but although topologyUpdater is running and I get multiple IN connections... OUT connections remain at 2.

I've tried to see what's going on and my conclusions are:

  • The topology file always gets wiped on startup whenever the CARDANO_TOPOLOGY env var contains valid JSON, so when dealing with CARDANO_UPDATE_TOPOLOGY the topology file should rather be mounted (is that correct?). But if I mount the file I get mv: cannot move '/var/cardano/config/testnet-topology.json.tmp' to '/var/cardano/config/testnet-topology.json': Device or resource busy and I'm not sure why. I'll try to mount the containing folder.
  • Even though CARDANO_TOPOLOGY (the file) gets updated, there is no way to tell the cardano node to use the refreshed file. I've even tried to SIGHUP the node process inside the container, but that resulted in a container crash. I guess some glue is missing that restarts the node on topology change (the original setup uses wrappers + systemd to restart the node process), or some support on the node side (I've found https://cardanoupdates.com/commits/133b35227fd86624e9fd85bf0302dcbaa8785bb5 which seems to address it, but I can't find it on GitHub to see the status). Also I'm not sure whether this somewhat relates to #49

gLiveView monitor does not respond to SIGHUP

Closing the terminal that displays the live view without quitting may not terminate the process. Next time you run gLiveView, you may start another process

$ docker exec -it relay ps ax
  PID TTY      STAT   TIME COMMAND
    1 ?        Ss     0:00 /bin/bash /opt/cardano/entrypoint.sh run
   14 ?        Ss     0:00 /usr/sbin/cron
   15 ?        Sl    87:44 cardano-node run --config /opt/cardano/config/mainnet
  191 pts/0    Ss+    0:00 /bin/bash /usr/local/bin/gLiveView
  197 pts/0    S+     2:49 /bin/bash guild-operators/scripts/cnode-helper-script
 2814 pts/1    Ss+    0:00 /bin/bash /usr/local/bin/gLiveView
 2820 pts/1    S+     0:00 /bin/bash guild-operators/scripts/cnode-helper-script
 3601 pts/2    Rs+    0:00 ps ax
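
Until this is fixed, stale sessions can be cleaned up by hand. A sketch that picks the gLiveView PIDs out of ps output like the listing above (pipe the result to xargs kill inside the container):

```shell
# Print PIDs of gLiveView processes from `ps ax` output on stdin.
stale_gliveview_pids() {
  awk '/gLiveView/ {print $1}'
}
```

For example: `docker exec relay ps ax | stale_gliveview_pids`.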

pool keys dir, raspberry data mount and custom peers

Hi
some more small findings which might help people going through the manual after me:

  1. when following the gitbook here: https://tdiesler.gitbook.io/cardano/v/iohk/plain-docker/running-the-nodes, coming from the raspberry install section I was expecting to use the sda1 mount under "Run a relay node". What you do there initially is test if things work in the first place, but might be helpful to mention the -v node-data:/opt/cardano/data actually mounts a volume not on sda, but will on the final section of "Run a relay node". (In my case I left in the volume mapping in place and am only now copying the entire blockchain to /mnt/disks/data00, ie sudo docker cp tmp:/opt/cardano/data/. /mnt/disks/data00)

  2. under https://tdiesler.gitbook.io/cardano/v/iohk/plain-docker/running-the-nodes#setup-the-config-volume-1 there are instructions to copy the keys to /var/cardano/config/keys, however in the section below that to start the block producer the path referenced is "/var/cardano/config/keys/pool/", they should be the same

  3. The topology updater works fine for the relay, however the file mainnet-topology.json is overwritten with a new version after successful execution of the updater (as it should). This then overwrites the block producer entry in mainnet-topology.json and after a restart of the container the relay no longer connects back to the block producer. To remediate this we should add the custom peers parameter to the (relay only) docker run command:
    -e CARDANO_CUSTOM_PEERS="your-BP-IP:3001|relays-new.cardano-mainnet.iohk.io:3001:2" \

  4. I found keeping the config in volumes was a tad cumbersome and have chosen to bind a directory on the host for easy manipulation of the config files ie "--mount type=bind,src=$HOME/cardano/config,target=/opt/cardano/config ". Especially if you want prometheus monitoring to work properly several tracers need to be enabled, having easy access to the config makes manipulating the same a bit easier.

Happily running my pool 14ALL now due to your excellent writeup, ty!

Provide high quality multiarch docker image and k8s support

This is an umbrella issue that combines various feature requests and bug fixes in order to provide a high-quality Cardano Docker image. In the not-too-distant future, Cardano nodes may need to be integrated into corporate IT infrastructure. They may become destinations or sources for various corporate data streams and, as such, it'd be good to have an offering that allows these organizations to integrate Cardano with their existing cloud infrastructure.

With Kubernetes as the de-facto standard for container deployment, orchestration, monitoring, scaling, etc., it should be as easy as this to integrate a Cardano node ...

kubectl apply -f https://raw.githubusercontent.com/input-output-hk/cardano-node/master/cardano-node.yaml

CrossRef: IntersectMBO/cardano-node#2360

Add a liveness check endpoint

We would need some sort of HTTP endpoint that internally accesses the EKG. Responding with all or part of the EKG data is probably not such a good idea, especially not if we want to monitor liveness of the block producer.

Does Cardano already defend itself against DOS attacks on the metrics endpoint? If not, that would need to be done on the liveness endpoint as well. It should not be possible to put significant load on the node by (maliciously) checking liveness.

Configure log severity per scribe

We can configure scribes and global severity like this ...

  "defaultScribes": [
    [
      "StdoutSK",
      "stdout"
    ],
    [
      "FileSK",
      "/opt/cardano/logs/debug.log"
    ]
  ],
  "minSeverity": "Info",

Ideally however, I'd like to have Info on stdout and debug for a rotating log file

CrossRef: IntersectMBO/cardano-node#2252

Could not start Block Producer Node

when trying to run a block producer node with

docker run --detach \
    --name=prod \
    -p 3001:3001 \
    -e CARDANO_BLOCK_PRODUCER=true \
    -e CARDANO_TOPOLOGY="/var/cardano/config/mainnet-topology.json" \
    -e CARDANO_SHELLEY_KES_KEY="/var/cardano/config/keys/pool/kes.skey" \
    -e CARDANO_SHELLEY_VRF_KEY="/var/cardano/config/keys/pool/vrf.skey" \
    -e CARDANO_SHELLEY_OPERATIONAL_CERTIFICATE="/var/cardano/config/keys/pool/node.cert" \
    -v cardano-prod-config:/var/cardano/config \
    -v /mnt/disks/data01:/opt/cardano/data \
    nessusio/cardano run

the container stops with logfile:

cardano-node: /var/cardano/config/keys/pool/vrf.skey: getFileStatus: does not exist (No such file or directory)

how can vrf.skey be copied to the container when the container is not running?

Port 3001 is open although denied by ufw

I set the port in the producer node to 6000, and I denied the port 3001 in the producer node.
My UFW on the producer server doesn't show that port 3001 is allowed. It allows port 6000.
However, running netcat on my producer server shows that the port 3001 on my relay server is open.

$ nc -vz <relay-IP> 3001
aaa.com [<relay-IP>] 3001 (?) open

$ nc -vz <relay-IP> 6000
aaa.com [<relay-IP>] 6000 (x11) : Connection refused

The topology file of my relay node:

{
  "Producers": [
    {
      "addr": "relays-new.cardano-mainnet.iohk.io",
      "port": 3001,
      "valency": 2
    },
    {
      "addr": "<producer-ip>",
      "port": 6000,
      "valency": 1
    },
  ]
}

I started the docker with this command:

docker run --detach --rm --name=relay -p 3001:3001 -p 6000:6000 -e CARDANO_UPDATE_TOPOLOGY=true -v $HOME/cardano/data:/opt/cardano/data -v $HOME/cardano/ipc:/opt/cardano/ipc -v $HOME/cardano/config:/var/cardano/config nessusio/cardano-node run

The logs from my producer node shows a connection error to the relay ip:

[12432a2c:cardano.node.IpSubscription:Error:6932] [2021-05-09 20:57:52.69 UTC] IPs: 0.0.0.0:0 [xxx.xx.xx.xx:6000] Connection Attempt Exception, destination xxx.xx.xx.xx:6000 exception: Network.Socket.connect: <socket: 36>: does not exist (Connection refused)
[12432a2c:cardano.node.IpSubscription:Info:6932] [2021-05-09 20:57:52.69 UTC] IPs: 0.0.0.0:0 [xxx.xx.xx.xx:6000] Closed socket to xxx.xx.xx.xx:6000
[12432a2c:cardano.node.ErrorPolicy:Notice:256] [2021-05-09 20:57:52.69 UTC] IP xxx.xx.xx.xx:6000 ErrorPolicySuspendConsumer (Just (ConnectionExceptionTrace Network.Socket.connect: <socket: 36>: does not exist (Connection refused))) 20s
[12432a2c:cardano.node.IpSubscription:Error:260] [2021-05-09 20:57:52.71 UTC] IPs: 0.0.0.0:0 [xxx.xx.xx.xx:6000] Failed to start all required subscriptions

Shall I remove -p 3001:3001?

Initial sync may block with CPU maxed out

On arm64 (e.g. Raspberry Pi) you may see this ...

  • Initial syncing stops
  • CPU is at 200%
  • Memory < 10%

This looks like a tight endless loop or perhaps some I/O condition that cannot be recovered from.
With an even slower USB-A storage device, this condition showed up almost immediately.

On x86_64 this works fine.

CrossRef: IntersectMBO/cardano-node#2251

No graceful shutdown on docker stop

After IPSubscriptionTarget the node shows no log output for about 10min. It looks like a rescan of the existing block storage.

  • Happens on amd64 as well
  • Happens on docker restart as well (i.e. not just on docker run)

[cdrelay:cardano.node.dns-producers:Notice:5] [2020-12-30 04:14:41.44 UTC] [DnsSubscriptionTarget {dstDomain = "relays-new.cardano-mainnet.iohk.io", dstPort = 3001, dstValency = 1}]
[cdrelay:cardano.node.ip-producers:Notice:5] [2020-12-30 04:14:41.44 UTC] IPSubscriptionTarget {ispIps = [172.18.0.11:3001], ispValency = 1}

[cdrelay:cardano.node.ChainDB:Info:5] [2020-12-30 04:27:52.57 UTC] Opened imm db with immutable tip at 426ca1b0b2c590b0123b287bff5543cc5bcf34bd56bfbd1c1a924ec88df8f230 at slot 17691487 and chunk 819
[cdrelay:cardano.node.ChainDB:Info:5] [2020-12-30 04:27:54.51 UTC] Opened vol db

CrossRef: IntersectMBO/cardano-node#2267

Add CNCLI to docker image

I noticed that CNCLI doesn't seem to be present on the docker images. It can be useful for troubleshooting node connectivity, as well as allowing gLiveView to perform a deeper peer health-check other than an ICMP request.

Missing libselinux.so since 1.26.1

Starting with versions 1.26.1 and 1.26.2 of the cardano-tools image, libselinux.so appears to be missing. This causes a large number of commands to fail, resulting in errors like:

/bin/ls: error while loading shared libraries: libselinux.so.1: cannot open shared object file: No such file or directory
mkdir: error while loading shared libraries: libselinux.so.1: cannot open shared object file: No such file or directory

The 1.25.1 tag does not suffer from this issue.
