
Rocket Pool - Smart Node Installation

This repository contains compiled binaries for the Rocket Pool smart node client, as well as the installation script & configuration assets for the smart node service.

The smart node client is supported on Linux, macOS and Windows. Note that a smart node cannot be run locally on Windows at this stage; the Windows client can only be used to manage a remote server.

The smart node service is supported on all Unix platforms, with automatic dependency installation for Ubuntu, Debian, CentOS and Fedora. A smart node can be run on other Unix platforms, but manual installation of dependencies (docker engine and docker-compose) is required.

Smart Node Client Installation

Linux (64 bit)

With cURL:

curl -L https://github.com/rocket-pool/smartnode-install/releases/latest/download/rocketpool-cli-linux-amd64 --create-dirs -o ~/bin/rocketpool && chmod +x ~/bin/rocketpool

With wget:

mkdir -p ~/bin && wget https://github.com/rocket-pool/smartnode-install/releases/latest/download/rocketpool-cli-linux-amd64 -O ~/bin/rocketpool && chmod +x ~/bin/rocketpool

Note: you may need to start a new shell session before you can run the rocketpool command.
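
Alternatively, instead of opening a new session, you can make ~/bin visible in the current shell (assuming the client was installed to ~/bin as in the commands above):

```shell
# Make the freshly installed binary visible in the current session;
# most distros add ~/bin to PATH automatically on the next login.
export PATH="$HOME/bin:$PATH"
```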

macOS Intel (64 bit)

With cURL:

curl -L https://github.com/rocket-pool/smartnode-install/releases/latest/download/rocketpool-cli-darwin-amd64 -o /usr/local/bin/rocketpool && chmod +x /usr/local/bin/rocketpool

With wget:

wget https://github.com/rocket-pool/smartnode-install/releases/latest/download/rocketpool-cli-darwin-amd64 -O /usr/local/bin/rocketpool && chmod +x /usr/local/bin/rocketpool

macOS M1 (64 bit)

With cURL:

curl -L https://github.com/rocket-pool/smartnode-install/releases/latest/download/rocketpool-cli-darwin-arm64 -o /opt/homebrew/bin/rocketpool && chmod +x /opt/homebrew/bin/rocketpool

With wget:

wget https://github.com/rocket-pool/smartnode-install/releases/latest/download/rocketpool-cli-darwin-arm64 -O /opt/homebrew/bin/rocketpool && chmod +x /opt/homebrew/bin/rocketpool

Windows (64 bit)

  1. Download the smart node client.
  2. Move it to the desired location on your system (e.g. C:\bin\rocketpool.exe).
  3. Open the command prompt and run it via its full path (e.g. C:\bin\rocketpool.exe).

Smart Node Service Installation

Automatic

Once you have installed the Rocket Pool smart node client, simply run the rocketpool service install command to install the smart node service locally.

To install to a remote server, use:

rocketpool --host example.com --user username --key /path/to/identity.pem service install

If automatic dependency installation is not supported on your platform, use the -d option to skip this step (e.g. rocketpool service install -d). Then, manually install docker engine and docker-compose.
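
After a -d install, you can confirm the two dependencies yourself before starting the service (a simple ad-hoc check, not part of the official tooling):

```shell
# Report whether the manually installed dependencies are on the PATH
for dep in docker docker-compose; do
    if command -v "$dep" >/dev/null 2>&1; then
        echo "$dep: found"
    else
        echo "$dep: missing - install it before running 'rocketpool service start'"
    fi
done
```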

Manual

If you would prefer to check the installation script before running it, you may download and run it manually.

With cURL:

curl -L https://github.com/rocket-pool/smartnode-install/releases/latest/download/install.sh -o install.sh
chmod +x install.sh

./install.sh
rm install.sh

With wget:

wget https://github.com/rocket-pool/smartnode-install/releases/latest/download/install.sh -O install.sh
chmod +x install.sh

./install.sh
rm install.sh

The installation script prints progress messages to stdout and full command output to stderr. Use 1>/dev/null to silence progress messages, or 2>/dev/null to silence command output.
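
The stream split can be seen with a small stand-in function that writes to both streams the way the installer does:

```shell
# Stand-in for install.sh: progress on stdout, command output on stderr
emit() {
    echo "progress message"
    echo "command output" >&2
}

emit 2>/dev/null   # keeps only the progress messages
emit 1>/dev/null   # keeps only the command output
```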

Available Options

The following options apply to both automatic and manual installation unless specified otherwise:

  • -r: Verbose mode (print all output from the installation process) - automatic installation only
  • -d: Skip automatic installation of OS dependencies
  • -n: Specify a network to run the smart node on (default: pyrmont)
  • -v: Specify a version of the smart node service package files to use (default: latest)
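
For illustration, flags of this shape are typically parsed with getopts. This is only a sketch of how such an interface commonly works, not the actual install.sh source:

```shell
# Defaults mirror the documented option defaults
NETWORK="pyrmont"   # -n
VERSION="latest"    # -v
NO_DEPS=""          # -d
VERBOSE=""          # -r

parse_args() {
    while getopts "rdn:v:" opt; do
        case "$opt" in
            r) VERBOSE=1 ;;
            d) NO_DEPS=1 ;;
            n) NETWORK="$OPTARG" ;;
            v) VERSION="$OPTARG" ;;
        esac
    done
}

# e.g. skip dependency installation and pick a different network
parse_args -d -n prater
```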

Post-Install

Once the smart node service has been installed, you may need to start a new shell session if working locally. This is required for updated user permissions to take effect (for interacting with docker engine).
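
The permission in question is membership in the docker group. A quick check (the usual remedy is left as a comment so nothing is changed blindly):

```shell
# Check whether the current user is already in the 'docker' group
if id -nG | grep -qw docker; then
    echo "docker group: ok"
else
    echo "docker group: missing"
    # Typical remedy (then log out and back in, or run 'newgrp docker'):
    # sudo usermod -aG docker "$USER"
fi
```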

Update Instructions

A complete guide to updating Rocket Pool can be found here: https://docs.rocketpool.net/guides/node/updates.html

Running and Using Rocket Pool

Documentation about running and using Rocket Pool can be found here: https://docs.rocketpool.net/guides/


smartnode-install's Issues

Document running RP daemon outside of docker

Provide a guide for configuring and running the daemon on the host OS, including:

  • Installing Go, cloning the smartnode repo, building the daemon from source, copying the daemon to e.g. /usr/local/bin/rocketpoold
  • Setting up scaffolding at ~/.rocketpool
  • Configuring the password / wallet / validator paths and the eth1 and eth2 endpoints
  • Pointing validators to the RP keystores
  • Setting up systemd units for the eth1 / eth2 / validator / node / watchtower processes
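
The last step might look something like the unit below. This is a hypothetical sketch: every path, user name, and flag is an illustrative assumption, not the project's documented layout.

```shell
# Write an example unit for the node process (illustrative only)
cat > rocketpool-node.service <<'EOF'
[Unit]
Description=Rocket Pool node daemon (example)
After=network-online.target

[Service]
User=rp
ExecStart=/usr/local/bin/rocketpoold node
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
# Then: sudo cp rocketpool-node.service /etc/systemd/system/
#       sudo systemctl enable --now rocketpool-node
```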

Infura and Nimbus don't work when selected together because of websockets

The combo of Infura and Nimbus is broken, because Nimbus needs a Websocket API for ETH1. When selecting Infura, the ETH1_WS_PROVIDER environment variable that Nimbus needs doesn't get set.

Apparently this is mitigated by just sticking that variable directly into settings.yml, so we have a workaround for now but it's not great UX.

Infura log limit workaround

Node operators using Infura hit a per-request log limit when their eth2 client is requesting deposit contract logs. Add the following flags to circumvent this:

  • Lighthouse: --eth1-blocks-per-log-query
  • Prysm: --eth1-header-req-limit

update metrics on successful install

If the update tracker is installed, smartnode-install should quietly fork off

/usr/share/apt-metrics.sh | sponge /var/lib/node_exporter/textfile_collector/apt.prom

and suppress any errors it might emit.

This will hopefully clear the metric (though I'm on mobile and can't check whether we look at the service version or the binary version; we might need a service start first, in which case this whole ticket is moot).

Undocumented forced non-root user & user groups

Hello team!

I just tried to upgrade my node and received the error: rocketpool should not be run as root. Please try again without 'sudo'.

  1. This is undocumented in the public beta guide.
  2. For those of us in the beta, this is a very sudden 'oh shit everything broke with the update' moment.
  3. On Discord someone mentioned that Rocket Pool assumes the running user is part of the docker user group; is this correct, and if so, can we document it in the setup guide please?

Handle various bash config files

Add logic to setup script to detect which bash config files are in use (.bash_profile, .profile, .bashrc etc) and only update the appropriate one.
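
One possible shape for that logic, assuming only the first existing file should be updated, demonstrated here against a temporary directory rather than a real home directory:

```shell
# Append the PATH line to the first bash config file that exists,
# roughly mirroring bash's own lookup order
update_first_rc() {
    dir="$1"
    for rc in "$dir/.bash_profile" "$dir/.profile" "$dir/.bashrc"; do
        if [ -f "$rc" ]; then
            echo 'export PATH="$HOME/bin:$PATH"' >> "$rc"
            return 0
        fi
    done
    return 1
}

demo=$(mktemp -d)
touch "$demo/.bashrc"     # simulate a machine that only has .bashrc
update_first_rc "$demo"
```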

sudo/doas check too global

64f1406 elevated the sudo/doas check to the top level of the script, but since it's only used when -d isn't passed, that's a bit too universal.

Perhaps that commit should be reverted and detect_sudo_command >&2 should be invoked just after if [ -z "$NO_DEPS" ]; then and before the case statements

Infura warning when using Geth

During rocketpool service config I selected Geth. When running rocketpool service start for the first time I get the following output:

The INFURA_PROJECT_ID variable is not set. Defaulting to a blank string.
Creating network "rocketpool_net" with the default driver
Creating volume "rocketpool_eth1clientdata" with default driver
Creating volume "rocketpool_eth2clientdata" with default driver
Pulling eth1 (ethereum/client-go:v1.9.22)...
.... # THIS RUNS FINE

The line The INFURA_PROJECT_ID variable is not set. Defaulting to a blank string. does not seem relevant when Geth is selected.

Where can I find an example of user-settings.yml

It appears that in the 1.3.0 release the example settings.yml and config.yml files were removed, specifically in this commit: fe382430

The release notes state that:

Removed config.yml and settings.yml; there will now be a single file with all user-modifiable settings called user-settings.yml.

But no such user-settings.yml file exists in this repo:
https://github.com/rocket-pool/smartnode-install/search?q=user-settings.yml

And the link to the migration guide in the release just gives 404:
https://docs.rocketpool.net/guides/node/v1.3-update.html

We've been running 1.0.0 so far with our own Ansible role to set things up: https://github.com/status-im/infra-role-rocketpool
And now that I'm trying to upgrade to 1.5.0, I'm a little confused as to where I'm supposed to find an example user-settings.yml.

Error "Could not register node: Method not found" in native mode set up

I am trying to set up my machine (64GB RAM, 2TB SSD, Ubuntu 20.04 LTS) as an independent node operator (first on the Prater network). The setup mostly followed the documentation's "Native Mode" path. The Eth1 (Besu) and Eth2 (Teku) clients are set up independently, run as system services, and are both fully synced. All Rocket Pool related files are set up in the Linux-standard "/var/lib/rocketpool" directory. Wallets are initialized and I put some Goerli ETH in them. The 'rp node status' command returns all good. But when I ran the 'rp node register' command, it says:

"Could not register node: Method not found".

The same for 'rp faucet withdraw-rpl' command: "Could not withdraw RPL from faucet: Method not found".

Can't proceed further after this. Besides, there is always an annoying "Command unavailable with '--daemon-path'" message appended to many rp commands. Any idea? Thanks!

"Clients not ready" in 1.11.5

I just upgraded to 1.11.5. I've got a hybrid setup with Nethermind/Teku as my EC/CC. Everything still seems to be working fine and I'm attesting, but I've been seeing this since the upgrade whenever I run minipool status:

Error: primary client pair isn't ready and fallback clients aren't enabled.
        Primary EC status: syncing (99.99%)
        Primary CC status: synced and ready
clients not ready

I have non-rocketpool validators running on the same Nethermind/Teku clients that rocketpool is pointed at that are working fine, and as far as I can tell Nethermind is synced and up to date with head.

ETH2_RPC_PORT not passed to validator container

I am seeing this issue after upgrading to 1.1.0.

> rocketpool --version 

rocketpool version 1.1.0

I am running in hybrid mode with an external beacon chain process (Prysm) with the RPC running on a non-default port. I added the following to .rocketpool/settings.yml under eth2.client.params.

- env: ETH2_RPC_PORT
  value: 1234

However, this value is not passed to the validator container.

Logs from Prysm container indicate the default 5053 is still used.

time="2021-12-02 00:20:46" level=warning msg="Could not determine if beacon chain started: could not setup beacon chain ChainStart streaming client: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial tcp 172.33.0.5:5053: connect: connection refused": could not connect" prefix=validator

It seems that .rocketpool/docker-compose.yml needs to be updated to include the env var. Under services.validator.environment there should be something like ETH2_RPC_PORT=${ETH2_RPC_PORT}.

WARNING: no logs are available with the 'syslog' log driver

I'm sorry for the sparse details on this report, but when I run rocketpool service logs eth2, I get the following output instead of logs:

Attaching to rocketpool_eth2
eth2_1        | WARNING: no logs are available with the 'syslog' log driver

Running docker logs <container-id> works fine. Let me know if I can help with some further experiments that will track the root cause of this.

Why not default grafana's container user to be `root` like all the other containers?

I recently had an issue where my default umask on my OS removes other permissions.

Then the default docker mode, meant that the grafana container which is configured to use the grafana user couldn't load the bind mounted ~/.rocketpool/grafana-prometheus-datasource.yml:/etc/grafana/provisioning/datasources/prometheus.yml.

I had to do chmod o+r ~/.rocketpool/grafana-prometheus-datasource.yml so that the grafana container could run.

Given that all the other containers are already using root, I doubt there's much security increase with using a grafana user. Might as well default to root so that it can read the files by default.

Pebble flag does not make Geth use Pebble

Issue

When setting Use Pebble DB: X and then rocketpool service resync-eth1, eth1 logs are still showing database LevelDB being used.

Workaround

Only after manually adding Additional Flags: --db.engine pebble and then rocketpool service resync-eth1, eth1 logs are showing Pebble DB being used.

Versions

Rocket Pool client version: 1.9.0
Rocket Pool service version: 1.9.0
Selected Eth 1.0 client: Geth (Locally managed)
	Image: ethereum/client-go:v1.11.5
Selected Eth 2.0 client: Lighthouse (Locally managed)
	Image: sigp/lighthouse:v4.0.2-rc.0-modern

Kill eth1 & eth2 processes spawned in shell scripts gracefully

Currently, the processes spawned inside the eth1, eth2 and validator containers by shell scripts appear to get left running (and terminated abruptly by Docker after the default timeout).

Add signal traps to the shell scripts to gracefully interrupt these processes when the main process (the shell script) is terminated. Possibly also increase the stop_grace_period on the containers to allow time for them to terminate.
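
A minimal sketch of the trap pattern described above, with a short sleep standing in for the real client process:

```shell
# Placeholder for the real eth1/eth2/validator command
sleep 2 &
CHILD=$!

# Forward termination signals to the child so Docker's 'stop' is graceful
trap 'kill -TERM "$CHILD" 2>/dev/null; wait "$CHILD"' TERM INT

# The script stays alive as long as the client does
wait "$CHILD"
echo "client exited cleanly"
```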

Downgrade docker-compose version

Downgrade docker-compose file version for compatibility with the docker-compose version bundled with Ubuntu LTS. 3.4 should work.

Install ntp or equivalent packages with OS deps

A number of smart node operators have expressed an interest in automatically installing ntp (for Ubuntu) as part of the initial smart node setup.

Investigate equivalent packages for all supported platforms (Ubuntu, Debian, CentOS, Fedora) and add to the install script (OS deps section).

[Feature request] Configure logrotation for nimbus with docker + documentation

I'm validating on Prater with geth+nimbus, and ran into a missed attestation that I wanted to investigate to rule out any hardware issues with my setup. Finding the logs for my specific epoch+slot was surprisingly difficult, and required bouncing around between rocketpool, docker, and nimbus quite a lot.

Luckily, I got some invaluable help on Discord.

Running this command from jcrtp saved the day

docker-compose --project-directory ~/.rocketpool -f ~/.rocketpool/docker-compose.yml logs eth2 > eth2.log

But my understanding is that this isn't sourcing the logs from the filesystem, but instead from stdin which is a bit more fragile.

I looked at the nimbus documentation, and saw that they suggest leveraging log rotations for more persistent + performant archiving of logs
https://nimbus.guide/log-rotate.html

In their docker documentation, they also mention that the --log-file flag is ignored for their docker image, which is what rocketpool is using. I suspect this logging scenario might be broken.
https://nimbus.guide/docker.html

Summary: Would be awesome if rocketpool launched the nimbus container with log rotation configured correctly. That way client logs would last longer. Also, it would be nice to have documentation that explains how to investigate client logs within docker containers.

Error with Windows Client

When I run the Windows client, I get the following error below. I saw a post suggesting it might be a Go compilation issue, and it might just have to be recompiled with the right flag.

Exception 0xc0000005 0x0 0x7ff9021b0fff 0x1dc7fb00000
PC=0x1dc7fb00000

runtime: unknown pc 0x1dc7fb00000
stack: frame={sp:0x1440bfe7f0, fp:0x0} stack=[0x0,0x1440bffed0)
0000001440bfe6f0: 0000001440bfe738 0000001440bfe760.......

Potential solution: golang/go#42593

The `service install` command doesn't take into account the `--config-path` option

To reproduce, run the following command on a fresh installation:

mkdir /tmp/rocketpool
rocketpool --config-path /tmp/rocketpool service install

The result I expected is that the "Rocket Pool package files" (i.e. chains, config.yml, data, docker-compose.yml, settings.yml) will be placed in the designated config directory. Instead, they were placed in ~/.rocketpool.

Teku's VC doesn't delete its `.lock` files after a hard shutdown

Teku makes 0xabcd.lock files for each loaded validator, to prevent re-running with the same keys. Rocket Pool doesn't have that problem with the Docker setup, so we should either add support for its --validators-keystore-locking-enabled flag or have start-validator.sh remove all *.lock files upon startup (or both).
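
The second option could be a one-liner at the top of start-validator.sh; here it is sketched against a temporary directory (the keystore path and file names are placeholders):

```shell
# Simulate a keystore directory with a stale lock left by a hard shutdown
keys=$(mktemp -d)
touch "$keys/0xabcd.lock" "$keys/0xabcd.json"

# On startup, clear any *.lock files before handing the keys to Teku
find "$keys" -name '*.lock' -type f -delete
```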

Besu needs '--p2p-host' set in order to get connected peers.

I was struggling with slow sync with Besu, and noticed that I only had like 5-10% of max connected peers most of the time. I was chatting on the Besu Discord and realized that if I added --p2p-host=<external_ip> to the additional fields of Besu in rocketpool config, I now have the max of 25 connected peers, and much better sync. This should at least be a config text box, if not automatically filled out. Image is before and after adding that flag.

[image: peer count before and after adding the flag]

[Feature Request] Option to open up metrics ports on docker

Hi,

I am running Prometheus and Grafana on an external device. Therefore, after upgrading the smartnode stack I need to open the metrics ports each time by hand for

  • eth2
  • node
  • validator
  • watchtower

It would be great if the installer not only asked for the port of each metrics endpoint, but also whether that port should be exposed in Docker, making it available on the network.

Thanks, BR

`node status` fails with custom Eth1 providers

I've tried to configure my RocketPool smart node with an external Eth1 provider, using the following config:

chains:
  eth1:
    client:
      selected: "custom"
      params:
      - env: INFURA_PROJECT_ID
        value: ""
      - env: ETHSTATS_LABEL
        value: ""
      - env: ETHSTATS_LOGIN
        value: ""
      - env: PROVIDER_URL
        value: "ws://10.4.0.165:8546"
  eth2:
    client:
      selected: nimbus
      params:
      - env: CUSTOM_GRAFFITI
        value: "Nimbus on RocketPool"

ws://10.4.0.165:8546 is a Geth instance that I've verified to be accessible within the container by executing docker exec -it <eth1-container-id> /bin/bash and then connecting with telnet.

When I run a command such as rocketpool node status, I get the following error:

Could not get node status: invalid character 'E' looking for beginning of value

The same problem appears if I try to use a http://... URL or if I omit the protocol prefix altogether.

Add PATH support for Zsh

Hello!

Many of us use zsh instead of bash. In your instructions the source .profile triggers a source of the .bashrc if using the bash shell. In order to manually support zsh I had to:

  1. Run echo "export PATH=~/bin/:$PATH" >> ~/.zshrc
  2. Run source ~/.zshrc

It is unclear to me how you generated the release binary so I'm not able to make a PR. If you have contribution guidelines/docs please let me know.

Add custom Eth 1.0 provider config option

  • Add a configuration option for a custom Eth 1 provider URL
  • Note in the prompts that this will not work for providers on localhost, as the proxy runs in a docker container (although the node operator could probably still point to a static local IP)

[Feature Request] Export RP version via metrics

Hi,

it would be great if not only the eth2 client version would be exposed as metrics from the respective client, but also the RP smartnode stack version. Then it could be added to the Grafana dashboard.

Thanks, BR

Are Rocket Pool's PGP keys documented somewhere?

Release v1.0.0 includes a PGP key (1):

# curl -L https://github.com/rocket-pool/smartnode-install/releases/download/v1.0.0/smartnode-signing-key.asc | gpg --import-options show-only --import
pub   ed25519/0xC87825790FEE494C 2021-10-01 [SC]
      Key fingerprint = 465E 63FA 396B D193 09D1  E5FE C878 2579 0FEE 494C
      Keygrip = A78FD0D2744F946FF11916F88D2FFA0EE29570FC
uid                              Rocket Pool (Smartnode Signing Key) <[email protected]>

Release v1.4.1 notes that the signing key has been changed:

# curl -L https://github.com/rocket-pool/smartnode-install/releases/download/v1.4.1/smartnode-signing-key-v2.asc | gpg --import-options show-only --import
pub   ed25519/0xA69D503BCDB98CB1 2022-06-01 [SC] [expired: 2023-06-01]
      Key fingerprint = 8F10 7D8C 1248 71D8 C98C  DC91 A69D 503B CDB9 8CB1
      Keygrip = C18AEC7EE7515DB951C4A7723DBAB6DAF374CD56
uid                              Rocket Pool (Smartnode Installation Signing Key v2) <[email protected]>

Shortly after, release v1.4.3 seems to have changed the key again, though I don't think it was announced. Note that the previous key was set to expire in 2023, but v1.4.3 was released in 2022, just over a month after v1.4.1.

# curl -L https://github.com/rocket-pool/smartnode-install/releases/download/v1.4.3/smartnode-signing-key-v3.asc | gpg --import-options show-only --import
pub   nistp256/0xE00CDCDC74B1E3F5 1970-01-01 [SC]
      Key fingerprint = D17F BE7E 12E2 C9DC 21CE  2BC3 E00C DCDC 74B1 E3F5
      Keygrip = E10252EC650D7F6E48E11E3FEBF0A88E6A39816A
uid                              Joe Clapis <[email protected]>
sub   nistp256/0x754769E8F0A9ECF4 1970-01-01 [E]
      Keygrip = 2561044DFDCF2022F468657DC12EE501859F2919

This "v3" key is the most recent key as far as I can tell; it has been used up until and including the most recent release (v1.10.0).


I think it would be helpful to document the current signing key somewhere on the website or git repository. For example, Geth lists their PGP keys on the download page of their website, and Lighthouse lists their PGP key in the README of their repository.

Additionally, I think it would be nice if the previous keys were documented for historical transparency. For example, why was "v2" added? Did the original key get compromised? Why was "v2" replaced so quickly, and without announcement? Why does the user ID of the current key refer to one developer (Joe Clapis <[email protected]>) as opposed to Rocket Pool (Smartnode Installation Signing Key v2) <[email protected]> and Rocket Pool (Smartnode Signing Key) <[email protected]> in the previous keys (2)?

(1): I believe the key was first published with v1.0.0 prerelease 4. Prerelease 3 published signatures but I don't think it included the signing key. I don't believe any prior releases were signed.

(2): Having lurked on the discord for a while, I recognize Joe Clapis (pretty sure he's personally answered my questions before 😅) and I trust the key. However, I still think it would be better practice to have an "official reference" for the active (and past) PGP keys.

Add RP graffiti

Add Rocket Pool graffiti text to the validator process, with optional custom text entered by the user at config.

Optionally install dependencies

Currently the setup script installs all the dependencies required to run a Rocket Pool node.

However, I think it would be great to be able to disable the "Install dependencies" step, so that a pre-configured computer doesn't have its configuration touched.

A computer could already have docker and the other dependencies installed, and its owner may not want its configuration changed by commands such as sudo apt-install upgrade or sudo groupadd docker.

Just list the dependencies required (apt-transport-https ca-certificates curl gnupg-agent software-properties-common, docker-ce, docker-compose).

Alertmanager fails to start when metrics is disabled in the setup wizard

Possible Reason

Maybe alertmanager depends on the monitoring module, but when monitoring is disabled it still tries to start.

Logs

[+] Running 7/8
 ✔ Container rocketpool_api           Running                                                                 0.0s
 ✔ Container rocketpool_eth1          Running                                                                 0.0s
 ✔ Container rocketpool_mev-boost     Running                                                                 0.0s
 ✔ Container rocketpool_watchtower    Running                                                                 0.0s
 ✔ Container rocketpool_node          Running                                                                 0.0s
 ✔ Container rocketpool_validator     Running                                                                 0.0s
 ✔ Container rocketpool_eth2          Running                                                                 0.0s
 ⠸ Container rocketpool_alertmanager  Starting                                                                0.3s
Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting "/home/leo_h/.rocketpool/alerting/alertmanager.yml" to rootfs at "/etc/alertmanager/alertmanager.yml": mount /home/leo_h/.rocketpool/alerting/alertmanager.yml:/etc/alertmanager/alertmanager.yml (via /proc/self/fd/6), flags: 0x5000: not a directory: unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
exit status 1

Smartnode Install for multiple networks

Currently the setup script has some hard coded vars which will only work with the testnet. We should make this configurable per network chain ID, so the install script can work on multiple networks if needed.

The file structure could be reorganised to allow this with a clean separation of networks, such as:

/files/rocketpool.sh
/files/setup.sh

/files/network/77/pow/config.yml (clientsSupported, providerPow, storageAddress, depositContractAddress, withdrawalContractAddress, bootnodes, ethstatsLabel, ethstatsLogin)

/files/network/77/beacon/config.yml (clientsSupported, providerBeacon, simulatorETHAddress - address ETH is sent from for the sim)

Make it so that an extra parameter passed to the install script selects the appropriate network directory to use, e.g. --network 77, which setup.sh can then consume.

clientsSupported in each client directory could contain a key:value pair list, with the key being the name of the client and value being its docker image ID:Ver eg: prysm:'prysm:0.1'

Can add this to the second beta.

Error pulling rocketpool/smartnode-cli

Hello there, I've finally got the rocketpool service start command working. Yet whenever I execute it, I get the following errors:

https://pastebin.com/Sd56gbVn

Would really appreciate any help. I've also tried without creating a new image, same issue. I'm also logged in to Docker, but I'm unsure whether that's even part of the problem.

README Update

When this script is ready, we'll need a lot more info on the readme about the smart node, install process etc.

Remove ambiguity around custom (external) eth1 clients

Currently, the process for using an external eth1 client with RP involves configuring RP and selecting "Geth", which is confusing when there is a "custom" option present. Process and/or docs should be updated to address this confusion.

Nethermind with enabled metrics start script has error

When I enable metrics and use nethermind as eth1 client the following error appears as first log line:

rocketpool-eth1  | /setup/start-ec.sh: 201: [: prater: unexpected operator

I think the problem is the double equals sign in this line of the start script. As far as I know, the double equals is not understood by /bin/sh but only by /bin/bash, and the start script is executed with /bin/sh by default.

I think a fix would be replacing the double equals in this line with a single equals.
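
The difference is easy to demonstrate: the POSIX [ builtin (as implemented by dash) only specifies a single =, so a portable version of the check looks like this (the variable name and value here are illustrative):

```shell
NETWORK="prater"

# '[ "$NETWORK" == "prater" ]' errors under /bin/sh (dash) with
# "unexpected operator"; the POSIX-portable comparison uses a single '=':
if [ "$NETWORK" = "prater" ]; then
    echo "network is prater"
fi
```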
