
README for cardano-node-tests

Integration tests for cardano-node.

For more details, check the doc folder.

Installation

Create a Python virtual environment and install this package together with its development requirements:

$ ./setup_venv.sh

Requires Python 3.8 or newer.
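If you are unsure which interpreter your shell picks up, a quick check (plain Python, nothing repo-specific) is:

```shell
# Exits non-zero, with the actual version in the message, if the interpreter is too old.
python3 -c 'import sys; assert sys.version_info >= (3, 8), sys.version'
```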

Usage

Preparing the environment:

# cd to cardano-node repo
$ cd <your path to cardano-node repo>
# update and checkout the desired commit/tag
$ git checkout master
$ git pull origin master
$ git fetch --all --tags
$ git checkout tags/<tag>
# launch devops shell
$ nix-shell -A devops
# cd to tests repo
$ cd <your path to cardano-node-tests repo>
# activate virtual env
$ . .env/bin/activate

Running tests on a local cluster (local cluster instances are started automatically during test run setup):

# set env variables
$ export CARDANO_NODE_SOCKET_PATH=<your path to cardano-node repo>/state-cluster0/bft1.socket
# run tests
$ make tests

Running tests on one of the testnets:

# set env variables
$ export CARDANO_NODE_SOCKET_PATH=<your path to cardano-node repo>/state-cluster0/relay1.socket
# run tests
$ BOOTSTRAP_DIR=<your path to bootstrap dir> make testnets

Running individual tests:

$ pytest -k "test_name1 or test_name2" cardano_node_tests

Running linter:

# activate virtual env
$ . .env/bin/activate
# run linter
$ make lint

Variables for make tests and make testnets

  • SCHEDULING_LOG - specifies the path to the file where log messages from tests and the cluster instance scheduler are stored
  • PYTEST_ARGS - specifies additional arguments for pytest
  • MARKEXPR - specifies a marker expression for pytest
  • TEST_THREADS - specifies the number of pytest workers
  • CLUSTERS_COUNT - number of cluster instances that will be started
  • CLUSTER_ERA - cluster era for cardano-node; used to select the correct cluster start script
  • TX_ERA - era for transactions; can be used for creating Shelley-era (Allegra-era, ...) transactions
  • NOPOOLS - when running tests on a testnet, a cluster without staking pools will be used
  • BOOTSTRAP_DIR - path to the bootstrap dir for the given testnet (genesis files, config files, faucet data)
  • SCRIPTS_DIRNAME - path to a dir with local cluster start/stop scripts and configuration files

E.g.

$ SCHEDULING_LOG=testrun_20211012_1.log \
  TEST_THREADS=3 \
  CLUSTER_ERA=alonzo \
  TX_ERA=mary \
  SCRIPTS_DIRNAME=cardano_node_tests/cluster_scripts/alonzo/ \
  PYTEST_ARGS="-k 'test_stake_pool_low_cost or test_reward_amount'" \
  MARKEXPR="not long" \
  make tests

Tests development

When running tests, the testing framework starts and stops cluster instances as needed. That is not ideal for test development, as starting a cluster instance takes several epochs (to get from Byron to Alonzo). To keep a cardano cluster running between test runs, start it in "development mode":

# activate virtual env
$ . .env/bin/activate
# prepare cluster scripts
$ prepare-cluster-scripts -d <destination dir>/alonzo -s cardano_node_tests/cluster_scripts/alonzo/
# set env variables
$ export CARDANO_NODE_SOCKET_PATH=<your path to cardano-node repo>/state-cluster0/bft1.socket DEV_CLUSTER_RUNNING=1
# start the cluster instance in development mode
$ <destination dir>/alonzo/start-cluster-hfc

After starting the cluster, keys and configuration files are available in the <your path to cardano-node repo>/state-cluster0 directory. Pool-related files and keys are in the nodes subdirectory, genesis keys in the shelley and byron subdirectories, the payment address with initial funds and its keys in the byron subdirectory, and the local faucet address and related key files in the addrs_data subdirectory.
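Based on the description above, the layout of the state directory looks roughly like this (a sketch, not an exhaustive listing; exact contents depend on the cluster scripts):

```
state-cluster0/
├── nodes/        # pool-related files and keys
├── shelley/      # Shelley genesis keys
├── byron/        # Byron genesis keys; payment address with initial funds and related keys
└── addrs_data/   # local faucet address and related key files
```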

Test coverage of cardano-cli commands

To get test coverage of cardano-cli commands, run tests as usual (make tests) and then generate the JSON coverage report with:

$ cardano-cli-coverage -i .cli_coverage/cli_coverage_*.json -o .cli_coverage/coverage_report.json
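To take a first look at the merged report, the standard-library JSON pretty-printer is enough (the report's schema is not documented here, so this only formats the file; the path is the one produced by the command above):

```shell
# Pretty-print the merged coverage report; json.tool ships with Python itself.
report=.cli_coverage/coverage_report.json
if [ -f "$report" ]; then
    python3 -m json.tool "$report" | head -n 20
else
    echo "report not found: $report"
fi
```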

Publishing testing results

Clone https://github.com/mkoura/cardano-node-tests-reports and see its README.

Building documentation

To build the documentation using Sphinx, run:

$ make install_doc
$ make doc

The documentation is generated to src_docs/build/html.

To publish documentation to https://input-output-hk.github.io/cardano-node-tests/, run:

# checkout the "github_pages" branch
$ git checkout github_pages
# copy content of src_docs/build/html to docs
$ rm -rf docs/*
$ cp -aT src_docs/build/html docs
# stage changes
$ git add docs
# commit changes
$ git commit
# push to origin/github_pages (upstream/github_pages)
$ git push origin github_pages

Contributing

Install this package and its dependencies as described above.

Run pre-commit install to set up the git hook scripts that will check your changes before every commit. Alternatively, run make lint manually before pushing your changes.

Follow the Google Python Style Guide, with the exception that formatting is handled automatically by Black (through the pre-commit hooks).
