Lighthouse Book

Documentation for Lighthouse users and developers.

Lighthouse is an Ethereum 2.0 client that connects to other Ethereum 2.0 clients to form a resilient and decentralized proof-of-stake blockchain.

We implement the specification as defined in the ethereum/eth2.0-specs repository.

Topics

You may read this book from start to finish, or jump straight to the topics that interest you.

Prospective contributors can read the Contributing section to understand how we develop and test Lighthouse.

About this Book

This book is open source; contribute at github.com/sigp/lighthouse/book.

The Lighthouse CI/CD system maintains a hosted version of the unstable branch at lighthouse-book.sigmaprime.io.

Become an Eth2 Mainnet Validator

Becoming an Eth2 validator is rewarding, but it's not for the faint of heart. You'll need to be familiar with the rules of staking (e.g., rewards, penalties, etc.) and also configuring and managing servers. You'll also need at least 32 ETH!

For those with an understanding of Eth2 and server maintenance, you'll find that running Lighthouse is easy. Install it, start it, monitor it and keep it updated. You shouldn't need to interact with it on a day-to-day basis.

Being educated is critical to validator success. Before submitting your mainnet deposit, we recommend:

  • Thoroughly exploring the Eth2 Launchpad website
    • Try running through the deposit process without actually submitting a deposit.
  • Reading through this documentation, especially the Slashing Protection section.
  • Running a testnet validator.
  • Performing a web search and doing your own research.

By far, the best technical learning experience is to run a Testnet Validator. You can get hands-on experience with all the tools and it's a great way to test your staking hardware. We recommend that all mainnet validators first run a testnet validator; 32 ETH is a significant outlay and joining a testnet is a great way to "try before you buy".

Remember, if you get stuck you can always reach out on our Discord.

Please note: the Lighthouse team does not take any responsibility for losses or damages incurred through the use of Lighthouse. We have an experienced internal security team and have undergone multiple third-party security reviews; however, the possibility of bugs or malicious interference remains a real and constant threat. Validators should be prepared to lose some rewards due to the actions of other actors on the Eth2 network or software bugs. See the software license for more detail on liability.

Using Lighthouse for Mainnet

When using Lighthouse, the --network flag selects a network. E.g.,

  • lighthouse (no flag): Mainnet.
  • lighthouse --network mainnet: Mainnet.
  • lighthouse --network pyrmont: Pyrmont (testnet).

Using the correct --network flag is very important; using the wrong flag can result in penalties, slashings or lost deposits. As a rule of thumb, always provide a --network flag instead of relying on the default.

Joining a Testnet

There are six primary steps to become a testnet validator:

  1. Create validator keys and submit deposits.
  2. Start an Eth1 client.
  3. Install Lighthouse.
  4. Import the validator keys into Lighthouse.
  5. Start Lighthouse.
  6. Leave Lighthouse running.

Each of these primary steps has several intermediate steps, so we recommend setting aside one or two hours for this process.

Step 1. Create validator keys

The Ethereum Foundation provides an "Eth2 launch pad" website for creating validator keypairs and submitting deposits.

Please follow the steps on the launch pad site to generate validator keys and submit deposits. Make sure you select "Lighthouse" as your client.

Move to the next step once you have completed the steps on the launch pad, including generating keys via the Python CLI and submitting gETH/ETH deposits.

Step 2. Start an Eth1 client

Since Eth2 relies upon the Eth1 chain for validator on-boarding, all Eth2 validators must have a connection to an Eth1 node.

We provide instructions for using Geth, but you can use any client that serves the JSON-RPC API over HTTP. A fast-synced node is sufficient.

Installing Geth

Follow the official Geth installation instructions for your platform (separate instructions are available for macOS).

Starting Geth

Once you have geth installed, use this command to start your Eth1 node:

 geth --http
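
To keep geth running across restarts, many users run it under a process supervisor such as systemd. A minimal unit-file sketch; the unit name, User, binary path and restart policy are assumptions to adapt for your own system:

```ini
[Unit]
Description=Geth Eth1 node
After=network-online.target

[Service]
User=geth
ExecStart=/usr/bin/geth --http
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Save it as, e.g., /etc/systemd/system/geth.service and start it with systemctl enable --now geth.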

Step 3. Install Lighthouse

Note: Lighthouse only supports Windows via WSL.

Follow the Lighthouse Installation Instructions to install Lighthouse from one of the available options.

Proceed to the next step once you've successfully installed Lighthouse and viewed its --version info.

Note: some of the instructions vary when using Docker; ensure you follow the appropriate sections later in this guide.

Step 4. Import validator keys to Lighthouse

When Lighthouse is installed, follow the Importing from the Ethereum 2.0 Launch pad instructions so the validator client can perform your validator duties.

Proceed to the next step once you've successfully imported all validators.

Step 5. Start Lighthouse

For staking, one needs to run two Lighthouse processes:

  • lighthouse bn: the "beacon node" which connects to the P2P network and verifies blocks.
  • lighthouse vc: the "validator client" which manages validators, using data obtained from the beacon node via an HTTP API.

Starting these processes is different for binary and Docker users:

Binary users

Those using the pre- or custom-built binaries can start the two processes with:

lighthouse --network mainnet bn --staking
lighthouse --network mainnet vc

Note: ~/.lighthouse/mainnet is the default directory which contains the keys and databases. To specify a custom dir, see Custom Directories.

Docker users

Those using Docker images can start the processes with:

$ docker run \
	--network host \
	-v $HOME/.lighthouse:/root/.lighthouse sigp/lighthouse \
	lighthouse --network mainnet bn --staking --http-address 0.0.0.0
$ docker run \
	--network host \
	-v $HOME/.lighthouse:/root/.lighthouse \
	sigp/lighthouse \
	lighthouse --network mainnet vc
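
The two docker run commands above can also be captured in a docker-compose file so that both processes are managed, and restarted, together. A sketch; the service names and restart policy are assumptions, while the image, volume and flags mirror the commands above:

```yaml
version: "3"
services:
  beacon:
    image: sigp/lighthouse
    network_mode: host
    volumes:
      - $HOME/.lighthouse:/root/.lighthouse
    command: lighthouse --network mainnet bn --staking --http-address 0.0.0.0
    restart: unless-stopped
  validator:
    image: sigp/lighthouse
    network_mode: host
    volumes:
      - $HOME/.lighthouse:/root/.lighthouse
    command: lighthouse --network mainnet vc
    restart: unless-stopped
```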

Step 6. Leave Lighthouse running

Leave your beacon node and validator client running and you'll see logs as the beacon node stays synced with the network while the validator client produces blocks and attestations.

It will take 4-8+ hours for the beacon chain to process and activate your validator; you'll know you're active once the validator client starts successfully publishing attestations each epoch:

Dec 03 08:49:40.053 INFO Successfully published attestation      slot: 98, committee_index: 0, head_block: 0xa208…7fd5,

Although you'll produce an attestation each epoch, it's less common to produce a block. Watch for the block production logs too:

Dec 03 08:49:36.225 INFO Successfully published block            slot: 98, attestations: 2, deposits: 0, service: block

If you see any ERRO (error) logs, please reach out on Discord or create an issue.

Happy staking!

Become a Testnet Validator

Joining an Eth2 testnet is a great way to get familiar with staking in Phase 0. All users should experiment with a testnet prior to staking mainnet ETH.

To join a testnet, you can follow the Become an Eth2 Mainnet Validator instructions but with a few differences:

  1. Use the Eth2 launchpad website for the appropriate testnet.
  2. Instead of --network mainnet, use the appropriate network flag:
    • --network pyrmont: Pyrmont.
    • --network prater: Prater.
  3. Use a Goerli Eth1 node instead of a mainnet one:
    • For Geth, this means using geth --goerli --http.
  4. Notice that Lighthouse will store its files in a different directory by default:
    • ~/.lighthouse/pyrmont: Pyrmont.
    • ~/.lighthouse/prater: Prater.

Never use real ETH to join a testnet! All of the testnets listed here use Goerli ETH which is basically worthless. This allows experimentation without real-world costs.

πŸ“¦ Installation

Lighthouse runs on Linux, macOS, and Windows (still in beta testing).

There are three core methods to obtain the Lighthouse application:

  • Pre-built binaries
  • Docker images
  • Building from source

Additionally, there are two extra guides for specific uses:

  • Raspberry Pi 4 installation
  • Cross-compiling

Minimum System Requirements

  • Dual-core CPU, 2015 or newer
  • 8 GB RAM
  • 128 GB solid state storage
  • 10 Mb/s download, 5 Mb/s upload broadband connection

For more information see System Requirements.

System Requirements

Lighthouse is able to run on most low to mid-range consumer hardware, but will perform best when provided with ample system resources. The following system requirements are for running a beacon node and a validator client with a modest number of validator keys (less than 100).

Minimum

  • Dual-core CPU, 2015 or newer
  • 8 GB RAM
  • 128 GB solid state storage
  • 10 Mb/s download, 5 Mb/s upload broadband connection

During smooth network conditions, Lighthouse's database will fit within 15 GB, but in case of a long period of non-finality, it is strongly recommended that at least 128 GB is available.

Recommended

  • Quad-core AMD Ryzen, Intel Broadwell, ARMv8 or newer
  • 16 GB RAM
  • 256 GB solid state storage
  • 100 Mb/s download, 20 Mb/s upload broadband connection

Pre-built Binaries

Each Lighthouse release contains several downloadable binaries in the "Assets" section of the release. You can find the releases on GitHub.

Note: binaries are provided for Windows native, but Windows Lighthouse support is still in beta testing.

Platforms

Binaries are supplied for four platforms:

  • x86_64-unknown-linux-gnu: AMD/Intel 64-bit processors (most desktops, laptops, servers)
  • aarch64-unknown-linux-gnu: 64-bit ARM processors (Raspberry Pi 4)
  • x86_64-apple-darwin: macOS with Intel chips
  • x86_64-windows: Windows with 64-bit processors (Beta)

Additionally, a -portable suffix indicates whether the portable feature was used:

  • Without portable: uses modern CPU instructions to provide the fastest signature verification times (may cause Illegal instruction error on older CPUs)
  • With portable: approx. 20% slower, but should work on all 64-bit processors.

Usage

Each binary is contained in a .tar.gz archive. For this example, let's assume the user needs a portable x86_64 binary.

Whilst this example uses v0.2.13 we recommend always using the latest release.

Steps

  1. Go to the Releases page and select the latest release.
  2. Download the lighthouse-${VERSION}-x86_64-unknown-linux-gnu-portable.tar.gz binary.
  3. Extract the archive:
    1. cd Downloads
    2. tar -xvf lighthouse-${VERSION}-x86_64-unknown-linux-gnu-portable.tar.gz
  4. Test the binary with ./lighthouse --version (it should print the version).
  5. (Optional) Move the lighthouse binary to a location in your PATH, so the lighthouse command can be called from anywhere.
    • E.g., cp lighthouse /usr/bin

Windows users will need to execute the commands in Step 3 from PowerShell.
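
Steps 1-2 can also be scripted. The sketch below assembles the asset name and download URL for a chosen version and platform; it assumes GitHub's standard release-asset URL layout, so always confirm the exact asset name on the Releases page:

```shell
# Choose a release and platform (v0.2.13 matches the example above;
# prefer the latest release in practice).
VERSION="v0.2.13"
PLATFORM="x86_64-unknown-linux-gnu-portable"

# Assemble the asset name and its download URL.
ASSET="lighthouse-${VERSION}-${PLATFORM}.tar.gz"
URL="https://github.com/sigp/lighthouse/releases/download/${VERSION}/${ASSET}"
echo "$URL"
```

Changing PLATFORM to one of the other three triples listed above selects a different binary.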

Troubleshooting

If you get a SIGILL (exit code 132), then your CPU is incompatible with the optimized build of Lighthouse and you should switch to the -portable build. In this case, you will see a warning like this on start-up:

WARN CPU seems incompatible with optimized Lighthouse build, advice: If you get a SIGILL, please try Lighthouse portable build

On some VPS providers, the virtualization can make it appear as if CPU features are not available, even when they are. In this case you might see the warning above, but so long as the client continues to function it's nothing to worry about.
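
To see where the 132 comes from: a process killed by signal N exits with code 128 + N, and SIGILL is signal 4. A small sketch that simulates the exit code (the real trigger is an illegal instruction in the optimized build, not kill):

```shell
# Simulate a process dying from SIGILL (signal 4) and inspect the
# exit code the shell reports: 128 + 4 = 132.
code=0
sh -c 'kill -s ILL $$' || code=$?
echo "exit code: $code"   # prints: exit code: 132
```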

Docker Guide

This repository has a Dockerfile in the root which builds an image with the lighthouse binary installed. A pre-built image is available on Docker Hub.

Obtaining the Docker image

There are two ways to obtain the Docker image: pulling it from Docker Hub or building it from source. Once you have obtained the image via one of these methods, proceed to Using the Docker image.

Docker Hub

Lighthouse maintains the sigp/lighthouse Docker Hub repository which provides an easy way to run Lighthouse without building the image yourself.

Obtain the latest image with:

$ docker pull sigp/lighthouse

Download and test the image with:

$ docker run sigp/lighthouse lighthouse --version

If you can see the latest Lighthouse release version (see example below), then you've successfully installed Lighthouse via Docker.

Example Version Output

Lighthouse vx.x.xx-xxxxxxxxx
BLS Library: xxxx-xxxxxxx

Note: when you're running the Docker Hub image you're relying upon a pre-built binary instead of building from source.

Note: due to the Docker Hub image being compiled to work on arbitrary machines, it isn't as highly optimized as an image built from source. We're working to improve this, but for now if you want the absolute best performance, please build the image yourself.

Building the Docker Image

To build the image from source, navigate to the root of the repository and run:

$ docker build . -t lighthouse:local

The build will likely take several minutes. Once it's built, test it with:

$ docker run lighthouse:local lighthouse --help

Using the Docker image

You can run a Docker beacon node with the following command:

$ docker run -p 9000:9000 -p 127.0.0.1:5052:5052 -v $HOME/.lighthouse:/root/.lighthouse sigp/lighthouse lighthouse --network mainnet beacon --http --http-address 0.0.0.0

To join the Pyrmont testnet, use --network pyrmont instead.

The -p and -v flags are described below.

Volumes

Lighthouse uses the /root/.lighthouse directory inside the Docker image to store the configuration, database and validator keys. Users will generally want to create a bind-mount volume to ensure this directory persists between docker run commands.

The following example runs a beacon node with the data directory mapped to the user's home directory:

$ docker run -v $HOME/.lighthouse:/root/.lighthouse sigp/lighthouse lighthouse beacon

Ports

In order to be a good peer and serve other peers you should expose port 9000. Use the -p flag to do this:

$ docker run -p 9000:9000 sigp/lighthouse lighthouse beacon

If you use the --http flag you may also want to expose the HTTP port with -p 127.0.0.1:5052:5052.

$ docker run -p 9000:9000 -p 127.0.0.1:5052:5052 sigp/lighthouse lighthouse beacon --http --http-address 0.0.0.0

Installation: Build from Source

Lighthouse builds on Linux, macOS, and Windows (native Windows support is in beta; we also support Windows via WSL).

Compilation should be easy. In fact, if you already have Rust and the build dependencies installed, all you need is:

  • git clone https://github.com/sigp/lighthouse.git
  • cd lighthouse
  • git checkout stable
  • make

If this doesn't work or is not clear enough, see the Detailed Instructions below. If you have further issues, see Troubleshooting. If you'd prefer to use Docker, see the Docker Guide.

Updating lighthouse

You can update Lighthouse to a specific version by running the commands below. The lighthouse directory will be the location you cloned Lighthouse to during the installation process. ${VERSION} will be the version you wish to build in the format vX.X.X.

  • cd lighthouse
  • git fetch
  • git checkout ${VERSION}
  • make

Detailed Instructions

  1. Install the build dependencies for your platform
    • Check the Dependencies section for additional information.
  2. Clone the Lighthouse repository.
    • Run $ git clone https://github.com/sigp/lighthouse.git
    • Change into the newly created directory with $ cd lighthouse
  3. Build Lighthouse with $ make.
  4. Installation was successful if $ lighthouse --help displays the command-line documentation.

First-time compilation may take several minutes. If you experience any failures, please reach out on Discord or create an issue.

Dependencies

Installing Rust

The best way to install Rust (regardless of platform) is usually with rustup.

  • Use the stable toolchain (it's the default).

Windows Support

These instructions are for compiling or running Lighthouse natively on Windows, which is currently in BETA testing. Lighthouse can also run successfully under the Windows Subsystem for Linux (WSL). If using Ubuntu under WSL, you should follow the instructions for Ubuntu listed in the Dependencies (Ubuntu) section.

  1. Install Git
  2. Install Chocolatey Package Manager for Windows
    • Install make via choco install make
    • Install cmake via choco install cmake --installargs 'ADD_CMAKE_TO_PATH=System'

Ubuntu

Several dependencies may be required to compile Lighthouse. The following packages may be required in addition to a base Ubuntu Server installation:

sudo apt install -y git gcc g++ make cmake pkg-config

macOS

You will need cmake. You can install via homebrew:

brew install cmake

Troubleshooting

Command is not found

Lighthouse will be installed to CARGO_HOME or $HOME/.cargo. This directory needs to be on your PATH before you can run $ lighthouse.

See "Configuring the PATH environment variable" (rust-lang.org) for more information.
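
As a concrete sketch, the following adds Cargo's bin directory to PATH for the current shell; appending the same export line to ~/.bashrc makes it permanent:

```shell
# cargo places installed binaries (including lighthouse) in $HOME/.cargo/bin.
export PATH="$HOME/.cargo/bin:$PATH"

# Confirm the directory is now on PATH.
case ":$PATH:" in
  *":$HOME/.cargo/bin:"*) echo "cargo bin dir is on PATH" ;;
esac
```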

Compilation error

Make sure you are running the latest version of Rust. If you have installed Rust using rustup, simply type $ rustup update.

If compilation fails with (signal: 9, SIGKILL: kill), this could mean your machine ran out of memory during compilation. If you are on a resource-constrained device you can look into cross compilation.

If compilation fails with error: linking with cc failed: exit code: 1, try running cargo clean.

Raspberry Pi 4 Installation

Tested on:

  • Raspberry Pi 4 Model B (4GB)
  • Ubuntu 20.04 LTS (GNU/Linux 5.4.0-1011-raspi aarch64)

Note: Lighthouse supports cross-compiling to target a Raspberry Pi (aarch64). Compiling on a faster machine (i.e., x86_64 desktop) may be convenient.

1. Install Ubuntu

Follow the Ubuntu Raspberry Pi installation instructions.

A 64-bit version is required and the latest version is recommended (Ubuntu 20.04 LTS was the latest at the time of writing).

A graphical environment is not required in order to use Lighthouse. Only the terminal and an Internet connection are necessary.

2. Install Packages

Install the Ubuntu Dependencies. (I.e., run the sudo apt install ... command at that link).

Tips:

  • If there are difficulties, try updating the package manager with sudo apt update.

3. Install Rust

Install Rust as per rustup. (I.e., run the curl ... command).

Tips:

  • When prompted, enter 1 for the default installation.
  • Try running cargo version after Rust installation completes. If it cannot be found, run source $HOME/.cargo/env.
  • It's generally advised to append source $HOME/.cargo/env to ~/.bashrc.

4. Install Lighthouse

git clone https://github.com/sigp/lighthouse.git
cd lighthouse
git checkout stable
make

Compiling Lighthouse can take up to an hour. The safety guarantees provided by the Rust language unfortunately result in a lengthy compilation time on a low-spec CPU like a Raspberry Pi. For faster compilation on low-spec hardware, try cross-compiling on a more powerful computer (e.g., compile for RasPi from your desktop computer).

Once installation has finished, confirm Lighthouse is installed by viewing the usage instructions with lighthouse --help.

Cross-compiling

Lighthouse supports cross-compiling, allowing users to run a binary on one platform (e.g., aarch64) that was compiled on another platform (e.g., x86_64).

Instructions

Cross-compiling requires Docker, rustembedded/cross and for the current user to be in the docker group.

The binaries will be created in the target/ directory of the Lighthouse project.

Targets

The Makefile in the project contains four targets for cross-compiling:

  • build-x86_64: builds an optimized version for x86_64 processors (suitable for most users). Supports Intel Broadwell (2014) and newer, and AMD Ryzen (2017) and newer.
  • build-x86_64-portable: builds a version for x86_64 processors which avoids using some modern CPU instructions that are incompatible with older CPUs. Suitable for pre-Broadwell/Ryzen CPUs.
  • build-aarch64: builds an optimized version for 64-bit ARM processors (suitable for Raspberry Pi 4).
  • build-aarch64-portable: builds a version for 64-bit ARM processors which avoids using some modern CPU instructions. In practice, very few ARM processors lack the instructions necessary to run the faster non-portable build.

Example

cd lighthouse
make build-aarch64

The lighthouse binary will be compiled inside a Docker container and placed in lighthouse/target/aarch64-unknown-linux-gnu/release.

Key Management

Note: we recommend using the Eth2 launchpad to create validators.

Lighthouse uses a hierarchical key management system for producing validator keys. It is hierarchical because each validator key can be derived from a master key, making the validator keys children of the master key. This scheme means that a single 24-word mnemonic can be used to back up all of your validator keys without providing any observable link between them (i.e., it is privacy-retaining). Hierarchical key derivation schemes are commonplace in cryptocurrencies; they are already used by most hardware and software wallets to secure BTC, ETH and many other coins.

Key Concepts

We define some terms in the context of validator key management:

  • Mnemonic: a string of 24 words that is designed to be easy to write down and remember. E.g., "radar fly lottery mirror fat icon bachelor sadness type exhaust mule six beef arrest you spirit clog mango snap fox citizen already bird erase".
    • Defined in BIP-39
  • Wallet: a wallet is a JSON file which stores an encrypted version of a mnemonic.
    • Defined in EIP-2386
  • Keystore: typically created by a wallet, it contains a single encrypted BLS keypair.
    • Defined in EIP-2335.
  • Voting Keypair: a BLS public and private keypair which is used for signing blocks, attestations and other messages at regular intervals whilst staking in Phase 0.
  • Withdrawal Keypair: a BLS public and private keypair which will be required after Phase 0 to manage ETH once a validator has exited.

Overview

The key management system in Lighthouse involves moving down the above list of items, starting at one easy-to-backup mnemonic and ending with multiple keypairs. Creating a single validator looks like this:

  1. Create a wallet and record the mnemonic:
    • lighthouse --network pyrmont account wallet create --name wally --password-file wally.pass
  2. Create the voting and withdrawal keystores for one validator:
    • lighthouse --network pyrmont account validator create --wallet-name wally --wallet-password wally.pass --count 1

In step (1), we created a wallet in ~/.lighthouse/{network}/wallets with the name wally. We encrypted this using a pre-defined password in the wally.pass file. Then, in step (2), we created one new validator in the ~/.lighthouse/{network}/validators directory using wally (unlocking it with wally.pass) and storing the password to the validator's voting key in ~/.lighthouse/{network}/secrets.

Thanks to the hierarchical key derivation scheme, we can delete all of the aforementioned directories and then regenerate them as long as we remembered the 24-word mnemonic (we don't recommend doing this, though).

Creating another validator is easy: it's just a matter of repeating step (2). The wallet keeps track of how many validators it has generated and ensures that a new validator is generated each time.

Detail

Directory Structure

There are three important directories in Lighthouse validator key management:

  • wallets/: contains encrypted wallets which are used for hierarchical key derivation.
    • Defaults to ~/.lighthouse/{network}/wallets
  • validators/: contains a directory for each validator containing encrypted keystores and other validator-specific data.
    • Defaults to ~/.lighthouse/{network}/validators
  • secrets/: since the validator signing keys are "hot", the validator process needs access to the passwords to decrypt the keystores in the validators dir. These passwords are stored here.
    • Defaults to ~/.lighthouse/{network}/secrets where network is the name of the network passed in the --network parameter (default is mainnet).

When the validator client boots, it searches the validators/ directory for sub-directories containing voting keystores. When it discovers a keystore, it searches the secrets/ directory for a file with the same name as the 0x-prefixed hex representation of the keystore's public key. If it finds this file, it attempts to decrypt the keystore using the contents of the file as the password. If it fails, it logs an error and moves on to the next keystore.
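
As a sketch of that pairing, the password file for a keystore lives at a path derived from the keystore's public key. The key below is hypothetical and truncated for illustration; real BLS public keys are 96 hex characters:

```shell
# Hypothetical, truncated 0x-prefixed BLS public key.
PUBKEY="0xb5970e"
NETWORK="mainnet"

# The password file the validator client looks for when it finds
# a keystore with this public key:
PASSWORD_FILE="$HOME/.lighthouse/${NETWORK}/secrets/${PUBKEY}"
echo "$PASSWORD_FILE"
```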

The validators/ and secrets/ directories are kept separate to allow for ease-of-backup; you can safely backup validators/ without worrying about leaking private key data.

Withdrawal Keypairs

In Eth2 Phase 0, withdrawal keypairs do not serve any immediate purpose. However, they become very important after Phase 0: they will provide the ultimate control of the ETH of withdrawn validators.

This presents an interesting key management scenario: withdrawal keys are very important, but not right now. Considering this, Lighthouse has adopted a strategy where we do not save withdrawal keypairs to disk by default (it is opt-in). Instead, we assert that since the withdrawal keys can be regenerated from a mnemonic, having them lying around on the file-system only presents risk and complexity.

At the time of writing, we do not expose the commands to regenerate keys from mnemonics. However, key regeneration is tested on the public Lighthouse repository and will be exposed prior to mainnet launch.

So, in summary, withdrawal keypairs can be trivially regenerated from the mnemonic via EIP-2333 so they are not saved to disk like the voting keypairs.

Create a wallet

Note: we recommend using the Eth2 launchpad to create validators.

A wallet allows for generating practically unlimited validators from an easy-to-remember 24-word string (a mnemonic). As long as that mnemonic is backed up, all validator keys can be trivially re-generated.

The 24-word string is randomly generated during wallet creation and printed to the terminal. It's important to make one or more backups of the mnemonic to ensure your ETH is not lost in the case of data loss. It is also very important to keep your mnemonic private, as it represents the ultimate control of your ETH.

Whilst the wallet stores the mnemonic, it does not store it in plain text: the mnemonic is encrypted with a password. It is the responsibility of the user to define a strong password. The password is only required for interacting with the wallet; it is not required for recovering keys from a mnemonic.

Usage

To create a wallet, use the lighthouse account wallet command:

lighthouse account wallet create --help

Creates a new HD (hierarchical-deterministic) EIP-2386 wallet.

USAGE:
    lighthouse account_manager wallet create [OPTIONS] --name <WALLET_NAME> --password-file <WALLET_PASSWORD_PATH>

FLAGS:
    -h, --help       Prints help information
    -V, --version    Prints version information

OPTIONS:
    -d, --datadir <DIR>                             Data directory for lighthouse keys and databases.
        --mnemonic-output-path <MNEMONIC_PATH>
            If present, the mnemonic will be saved to this file. DO NOT SHARE THE MNEMONIC.

        --name <WALLET_NAME>
            The wallet will be created with this name. It is not allowed to create two wallets with the same name for
            the same --base-dir.
        --password-file <WALLET_PASSWORD_PATH>
            A path to a file containing the password which will unlock the wallet. If the file does not exist, a random
            password will be generated and saved at that path. To avoid confusion, if the file does not already exist it
            must include a '.pass' suffix.
    -t, --testnet-dir <DIR>
            Path to directory containing eth2_testnet specs. Defaults to a hard-coded Lighthouse testnet. Only effective
            if there is no existing database.
        --type <WALLET_TYPE>
            The type of wallet to create. Only HD (hierarchical-deterministic) wallets are supported presently..
            [default: hd]  [possible values: hd]

Example

Creates a new wallet named wally and saves it in ~/.lighthouse/pyrmont/wallets, with a randomly generated password saved to wally.pass:

lighthouse --network pyrmont account wallet create --name wally --password-file wally.pass

Notes:

  • The password is not wally.pass, it is the contents of the wally.pass file.
  • If wally.pass already exists, the wallet password will be set to the contents of that file.

Create a validator

Note: we recommend using the Eth2 launchpad to create validators.

Validators are fundamentally represented by a BLS keypair. In Lighthouse, we use a wallet to generate these keypairs. Once a wallet exists, the lighthouse account validator create command is used to generate the BLS keypair and all necessary information to submit a validator deposit and have that validator operate in the lighthouse validator_client.

Usage

To create a validator from a wallet, use the lighthouse account validator create command:

lighthouse account validator create --help

Creates new validators from an existing EIP-2386 wallet using the EIP-2333 HD key derivation scheme.

USAGE:
    lighthouse account_manager validator create [FLAGS] [OPTIONS]

FLAGS:
    -h, --help                         Prints help information
        --stdin-inputs                 If present, read all user inputs from stdin instead of tty.
        --store-withdrawal-keystore    If present, the withdrawal keystore will be stored alongside the voting keypair.
                                       It is generally recommended to *not* store the withdrawal key and instead
                                       generate them from the wallet seed when required.
    -V, --version                      Prints version information

OPTIONS:
        --at-most <AT_MOST_VALIDATORS>
            Observe the number of validators in --validator-dir, only creating enough to reach the given count. Never
            deletes an existing validator.
        --count <VALIDATOR_COUNT>
            The number of validators to create, regardless of how many already exist

    -d, --datadir <DIR>
            Used to specify a custom root data directory for lighthouse keys and databases. Defaults to
            $HOME/.lighthouse/{network} where network is the value of the `network` flag Note: Users should specify
            separate custom datadirs for different networks.
        --debug-level <LEVEL>
            The verbosity level for emitting logs. [default: info]  [possible values: info, debug, trace, warn, error,
            crit]
        --deposit-gwei <DEPOSIT_GWEI>
            The GWEI value of the deposit amount. Defaults to the minimum amount required for an active validator
            (MAX_EFFECTIVE_BALANCE)
        --network <network>
            Name of the Eth2 chain Lighthouse will sync and follow. [default: mainnet]  [possible values: medalla,
            altona, spadina, pyrmont, mainnet, toledo]
        --secrets-dir <SECRETS_DIR>
            The path where the validator keystore passwords will be stored. Defaults to ~/.lighthouse/{network}/secrets

    -s, --spec <DEPRECATED>
            This flag is deprecated, it will be disallowed in a future release. This value is now derived from the
            --network or --testnet-dir flags.
    -t, --testnet-dir <DIR>
            Path to directory containing eth2_testnet specs. Defaults to a hard-coded Lighthouse testnet. Only effective
            if there is no existing database.
        --wallet-name <WALLET_NAME>                 Use the wallet identified by this name
        --wallet-password <WALLET_PASSWORD_PATH>
            A path to a file containing the password which will unlock the wallet.

        --wallets-dir <wallets-dir>
            A path containing Eth2 EIP-2386 wallets. Defaults to ~/.lighthouse/{network}/wallets

Example

This example assumes that the wally wallet was created as in the wallet creation example.

lighthouse --network pyrmont account validator create --name wally --wallet-password wally.pass --count 1

This command will:

  • Derive a single new BLS keypair from wallet wally in ~/.lighthouse/{network}/wallets, updating it so that it generates a new key next time.
  • Create a new directory in ~/.lighthouse/{network}/validators containing:
    • An encrypted keystore containing the validator's voting keypair.
    • An eth1_deposit_data.rlp file assuming the default deposit amount (32 ETH for most testnets and mainnet) which can be submitted to the deposit contract for the Pyrmont testnet. Other testnets can be set via the --network CLI param.
  • Store a password to the validator's voting keypair in ~/.lighthouse/{network}/secrets.

Key recovery

Generally, validator keystore files are generated alongside a mnemonic. If the keystore and/or the keystore password are lost, this mnemonic can be used to regenerate a new, equivalent keystore with a new password.

There are two ways to recover keys using the lighthouse CLI:

  • lighthouse account validator recover: recover one or more EIP-2335 keystores from a mnemonic. These keys can be used directly in a validator client.
  • lighthouse account wallet recover: recover an EIP-2386 wallet from a mnemonic.

⚠️ Warning

Recovering validator keys from a mnemonic should only be used as a last resort. Key recovery entails significant risks:

  • Exposing your mnemonic to a computer at any time puts it at risk of being compromised. Your mnemonic is not encrypted and is a target for theft.
  • It's entirely possible to regenerate a validator keypair that is already active on some other validator client. Running the same keypair on two different validator clients is very likely to result in slashing.

Recover EIP-2335 validator keystores

A single mnemonic can generate a practically unlimited number of validator keystores using an index. Generally, the first time you generate a keystore you'll use index 0, the next time you'll use index 1, and so on. Using the same index on the same mnemonic always results in the same validator keypair being generated (see EIP-2334 for more detail).
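
The index-to-key mapping can be sketched directly from EIP-2334, which derives the voting (signing) key for validator index i at the path m/12381/3600/i/0/0. A minimal illustration (the helper name is ours, not a Lighthouse API):

```python
# EIP-2334 derivation path for an Eth2 validator's voting key.
# The same mnemonic plus the same index always yields the same path,
# and therefore the same keypair.
def voting_key_path(index: int) -> str:
    """Return the EIP-2334 derivation path for validator `index`."""
    return f"m/12381/3600/{index}/0/0"

print(voting_key_path(0))  # m/12381/3600/0/0/0 (the default recovery)
print(voting_key_path(1))  # m/12381/3600/1/0/0 (--first-index 1)
```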

Using the lighthouse account validator recover command you can generate the keystores that correspond to one or more indices in the mnemonic:

  • lighthouse account validator recover: recover only index 0.
  • lighthouse account validator recover --count 2: recover indices 0, 1.
  • lighthouse account validator recover --first-index 1: recover only index 1.
  • lighthouse account validator recover --first-index 1 --count 2: recover indices 1, 2.

For each of the indices recovered by the above commands, a directory will be created in the --validator-dir location (default ~/.lighthouse/{network}/validators) which contains all the information necessary to run a validator using the lighthouse vc command. The password to each new keystore will be placed in the --secrets-dir (default ~/.lighthouse/{network}/secrets). Here, network is the name of the Eth2 network passed via the --network parameter (default mainnet).

Recover an EIP-2386 wallet

Instead of creating EIP-2335 keystores directly, an EIP-2386 wallet can be generated from the mnemonic. This wallet can then be used to generate validator keystores, if desired. For example, the following command will create an encrypted wallet named wally-recovered from a mnemonic:

lighthouse account wallet recover --name wally-recovered

⚠️ Warning: the wallet will be created with a nextaccount value of 0. This means that if you have already generated n validators, then the next n validators generated by this wallet will be duplicates. As mentioned previously, running duplicate validators is likely to result in slashing.

Validator Management

The lighthouse vc command starts a validator client instance which connects to a beacon node and performs the duties of a staked validator.

This document provides information on how the validator client discovers the validators it will act for and how it should obtain their cryptographic signatures.

Users that create validators using the lighthouse account tool in the standard directories and do not start their lighthouse vc with the --disable-auto-discover flag should not need to understand the contents of this document. However, users with more complex needs may find this document useful.

Introducing the validator_definitions.yml file

The validator_definitions.yml file is located in the validator-dir, which defaults to ~/.lighthouse/{network}/validators. It is a YAML encoded file defining exactly which validators the validator client will (and won't) act for.

Example

Here's an example file with two validators:

---
- enabled: true
  voting_public_key: "0x87a580d31d7bc69069b55f5a01995a610dd391a26dc9e36e81057a17211983a79266800ab8531f21f1083d7d84085007"
  type: local_keystore
  voting_keystore_path: /home/paul/.lighthouse/validators/0x87a580d31d7bc69069b55f5a01995a610dd391a26dc9e36e81057a17211983a79266800ab8531f21f1083d7d84085007/voting-keystore.json
  voting_keystore_password_path: /home/paul/.lighthouse/secrets/0x87a580d31d7bc69069b55f5a01995a610dd391a26dc9e36e81057a17211983a79266800ab8531f21f1083d7d84085007
- enabled: false
  voting_public_key: "0xa5566f9ec3c6e1fdf362634ebec9ef7aceb0e460e5079714808388e5d48f4ae1e12897fed1bea951c17fa389d511e477"
  type: local_keystore
  voting_keystore_path: /home/paul/.lighthouse/validators/0xa5566f9ec3c6e1fdf362634ebec9ef7aceb0e460e5079714808388e5d48f4ae1e12897fed1bea951c17fa389d511e477/voting-keystore.json
  voting_keystore_password: myStrongpa55word123&$

In this example we can see two validators:

  • A validator identified by the 0x87a5... public key which is enabled.
  • Another validator identified by the 0xa556... public key which is not enabled.

Fields

Each permitted field of the file is listed below for reference:

  • enabled: A true/false value indicating whether the validator client should consider this validator "enabled".
  • voting_public_key: A validator public key.
  • type: How the validator signs messages (currently restricted to local_keystore).
  • voting_keystore_path: The path to an EIP-2335 keystore.
  • voting_keystore_password_path: The path to the password for the EIP-2335 keystore.
  • voting_keystore_password: The password to the EIP-2335 keystore.

Note: Either voting_keystore_password_path or voting_keystore_password must be supplied. If both are supplied, voting_keystore_password_path is ignored.
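
The precedence in the note above can be sketched as follows (a hypothetical helper, not part of Lighthouse):

```python
from pathlib import Path

def resolve_keystore_password(definition: dict) -> str:
    """Resolve the keystore password for one validator definition:
    an inline `voting_keystore_password` wins; otherwise the file at
    `voting_keystore_password_path` is read."""
    if "voting_keystore_password" in definition:
        return definition["voting_keystore_password"]
    return Path(definition["voting_keystore_password_path"]).read_text().strip()

# The inline password takes precedence even when both fields are present:
print(resolve_keystore_password({
    "voting_keystore_password": "myStrongpa55word123&$",
    "voting_keystore_password_path": "/ignored",
}))  # myStrongpa55word123&$
```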

Populating the validator_definitions.yml file

When the validator client starts and the validator_definitions.yml file doesn't exist, a new file will be created. If the --disable-auto-discover flag is provided, the new file will be empty and the validator client will not start any validators. If the flag is not provided, an automatic validator discovery routine will start (more on that later). To recap:

  • lighthouse vc: validators are automatically discovered.
  • lighthouse vc --disable-auto-discover: validators are not automatically discovered.

Automatic validator discovery

When the --disable-auto-discover flag is not provided, the validator client will search the validator-dir for validators and add any new validators to the validator_definitions.yml with enabled: true.

The routine for this search begins in the validator-dir, where it obtains a list of all files in that directory and all sub-directories (i.e., recursive directory-tree search). For each file named voting-keystore.json it creates a new validator definition by the following process:

  1. Set enabled to true.
  2. Set voting_public_key to the pubkey value from the voting-keystore.json.
  3. Set type to local_keystore.
  4. Set voting_keystore_path to the full path of the discovered keystore.
  5. Set voting_keystore_password_path to be a file in the secrets-dir with a name identical to the voting_public_key value.
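
The five steps above amount to a recursive directory walk; a sketch in Python (this function is illustrative, not Lighthouse's implementation):

```python
import json
from pathlib import Path

def discover_validators(validator_dir: str, secrets_dir: str) -> list:
    """Recursively search `validator_dir` for files named exactly
    `voting-keystore.json` and build a definition for each one,
    following the five steps above."""
    definitions = []
    for keystore in sorted(Path(validator_dir).rglob("voting-keystore.json")):
        # EIP-2335 keystores store the pubkey without a 0x prefix.
        pubkey = "0x" + json.loads(keystore.read_text())["pubkey"]
        definitions.append({
            "enabled": True,                             # step 1
            "voting_public_key": pubkey,                 # step 2
            "type": "local_keystore",                    # step 3
            "voting_keystore_path": str(keystore),       # step 4
            "voting_keystore_password_path":             # step 5
                str(Path(secrets_dir) / pubkey),
        })
    return definitions
```

Note that a file like my-voting-keystore.json never matches, since the search requires the exact file name.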

Discovery Example

Let's assume the following directory structure:

~/.lighthouse/{network}/validators
├── john
│   └── voting-keystore.json
├── sally
│   ├── one
│   │   └── voting-keystore.json
│   ├── three
│   │   └── my-voting-keystore.json
│   └── two
│       └── voting-keystore.json
└── slashing_protection.sqlite

There is no validator_definitions.yml file present, so we can run lighthouse vc (without --disable-auto-discover) and it will create the following validator_definitions.yml:

---
- enabled: true
  voting_public_key: "0xa5566f9ec3c6e1fdf362634ebec9ef7aceb0e460e5079714808388e5d48f4ae1e12897fed1bea951c17fa389d511e477"
  type: local_keystore
  voting_keystore_path: /home/paul/.lighthouse/validators/sally/one/voting-keystore.json
  voting_keystore_password_path: /home/paul/.lighthouse/secrets/0xa5566f9ec3c6e1fdf362634ebec9ef7aceb0e460e5079714808388e5d48f4ae1e12897fed1bea951c17fa389d511e477
- enabled: true
  voting_public_key: "0xaa440c566fcf34dedf233baf56cf5fb05bb420d9663b4208272545608c27c13d5b08174518c758ecd814f158f2b4a337"
  type: local_keystore
  voting_keystore_path: /home/paul/.lighthouse/validators/sally/two/voting-keystore.json
  voting_keystore_password_path: /home/paul/.lighthouse/secrets/0xaa440c566fcf34dedf233baf56cf5fb05bb420d9663b4208272545608c27c13d5b08174518c758ecd814f158f2b4a337
- enabled: true
  voting_public_key: "0x87a580d31d7bc69069b55f5a01995a610dd391a26dc9e36e81057a17211983a79266800ab8531f21f1083d7d84085007"
  type: local_keystore
  voting_keystore_path: /home/paul/.lighthouse/validators/john/voting-keystore.json
  voting_keystore_password_path: /home/paul/.lighthouse/secrets/0x87a580d31d7bc69069b55f5a01995a610dd391a26dc9e36e81057a17211983a79266800ab8531f21f1083d7d84085007

All voting-keystore.json files have been detected and added to the file. Notably, the sally/three/my-voting-keystore.json file was not added to the file, since the file name is not exactly voting-keystore.json.

In order for the validator client to decrypt these validators' keystores, the user will need to ensure their secrets-dir is organised as below:

~/.lighthouse/{network}/secrets
├── 0xa5566f9ec3c6e1fdf362634ebec9ef7aceb0e460e5079714808388e5d48f4ae1e12897fed1bea951c17fa389d511e477
├── 0xaa440c566fcf34dedf233baf56cf5fb05bb420d9663b4208272545608c27c13d5b08174518c758ecd814f158f2b4a337
└── 0x87a580d31d7bc69069b55f5a01995a610dd391a26dc9e36e81057a17211983a79266800ab8531f21f1083d7d84085007

Manual configuration

The automatic validator discovery process works out-of-the-box with validators that are created using the lighthouse account validator create command. The details of this process are only interesting to those who are using keystores generated with another tool or who have non-standard requirements.

If you are one of these users, manually edit the validator_definitions.yml file to suit your requirements. If the file is poorly formatted or any one of the validators is unable to be initialized, the validator client will refuse to start.

How the validator_definitions.yml file is processed

If a validator client were to start using the first example validator_definitions.yml file it would print the following log, acknowledging that there are two validators and one is disabled:

INFO Initialized validators                  enabled: 1, disabled: 1

The validator client will simply ignore the disabled validator. However, for the active validator, the validator client will:

  1. Load an EIP-2335 keystore from the voting_keystore_path.
  2. If the voting_keystore_password field is present, use it as the keystore password. Otherwise, attempt to read the file at voting_keystore_password_path and use the contents as the keystore password.
  3. Use the keystore password to decrypt the keystore and obtain a BLS keypair.
  4. Verify that the decrypted BLS keypair matches the voting_public_key.
  5. Create a voting-keystore.json.lock file adjacent to the voting_keystore_path, indicating that the voting keystore is in-use and should not be opened by another process.
  6. Proceed to act for that validator, creating blocks and attestations if/when required.
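
These steps can be sketched as follows, with `decrypt_keystore` standing in for real EIP-2335 decryption (an illustration of the flow, not Lighthouse's code):

```python
import json
from pathlib import Path

def initialize_validator(definition: dict, decrypt_keystore) -> Path:
    """Follow steps 1-5 above for a single enabled validator.
    `decrypt_keystore(keystore, password)` is assumed to return the
    BLS public key as hex without a 0x prefix."""
    keystore_path = Path(definition["voting_keystore_path"])
    keystore = json.loads(keystore_path.read_text())            # step 1
    password = definition.get("voting_keystore_password")       # step 2
    if password is None:
        password = Path(definition["voting_keystore_password_path"]).read_text()
    pubkey = decrypt_keystore(keystore, password)               # step 3
    if "0x" + pubkey != definition["voting_public_key"]:        # step 4
        raise ValueError("keypair does not match voting_public_key")
    lock = keystore_path.with_name(keystore_path.name + ".lock")
    lock.touch(exist_ok=False)                                  # step 5
    return lock  # removed again when the validator client exits
```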

If there is an error during any of these steps (e.g., a file is missing or corrupt) the validator client will log an error and continue to attempt to process other validators.

When the validator client exits (or the validator is deactivated) it will remove the voting-keystore.json.lock to indicate that the keystore is free for use again.

Importing from the Ethereum 2.0 Launchpad

The Eth2 Launchpad is a website from the Ethereum Foundation which guides users through using the eth2.0-deposit-cli command-line program to generate Eth2 validator keys.

The keys that are generated from eth2.0-deposit-cli can be easily loaded into a Lighthouse validator client (lighthouse vc). In fact, both of these programs are designed to work with each other.

This guide will show the user how to import their keys into Lighthouse so they can perform their duties as a validator. The guide assumes the user has already installed Lighthouse.

Instructions

Whilst following the steps on the website, users are instructed to download the eth2.0-deposit-cli repository. This script generates the validator BLS keys into a validator_keys directory. We assume that the user's present working directory is the eth2.0-deposit-cli repository (this is where you will be if you have just run the ./deposit.sh script from the Eth2 Launchpad website). If this is not the case, simply change --directory to point to the validator_keys directory.

Now, assuming that the user is in the eth2.0-deposit-cli directory and using the default validators directory (~/.lighthouse/{network}/validators; specify a different one using the --validators-dir flag), they can follow these steps:

1. Run the lighthouse account validator import command.

Docker users should use the command from the Docker section, all other users can use:

lighthouse --network mainnet account validator import --directory validator_keys

Note: The user must specify the Eth2 network that they are importing the keys for using the --network flag.

After this, the user will be prompted for a password for each keystore discovered:

Keystore found at "validator_keys/keystore-m_12381_3600_0_0_0-1595406747.json":

 - Public key: 0xa5e8702533f6d66422e042a0bf3471ab9b302ce115633fa6fdc5643f804b6b4f1c33baf95f125ec21969a3b1e0dd9e56
 - UUID: 8ea4cf99-8719-43c5-9eda-e97b8a4e074f

If you enter a password it will be stored in validator_definitions.yml so that it is not required each time the validator client starts.

Enter a password, or press enter to omit a password:

The user can choose whether or not they'd like to store the validator password in the validator_definitions.yml file. If the password is not stored here, the validator client (lighthouse vc) application will ask for the password each time it starts. This might be preferable for some users from a security perspective (e.g., if it is a shared computer), however it means that if the validator client restarts, the user will be liable to offline penalties until they can enter the password. If the user trusts the computer that is running the validator client and they are seeking maximum validator rewards, we recommend entering a password at this point.

Once the process is done the user will see:

Successfully imported keystore.
Successfully updated validator_definitions.yml.

Successfully imported 1 validators (0 skipped).

WARNING: DO NOT USE THE ORIGINAL KEYSTORES TO VALIDATE WITH ANOTHER CLIENT, OR YOU WILL GET SLASHED..

The import process is complete!

2. Run the lighthouse vc command.

Now that the keys are imported, the user can start performing their validator duties by running lighthouse vc and checking that their validator public key appears as a voting_pubkey in one of the following logs:

INFO Enabled validator       voting_pubkey: 0xa5e8702533f6d66422e042a0bf3471ab9b302ce115633fa6fdc5643f804b6b4f1c33baf95f125ec21969a3b1e0dd9e56

Once this log appears (and there are no errors) the lighthouse vc application will ensure that the validator starts performing its duties and being rewarded by the protocol. There is no more input required from the user.

Docker

The import command is a little more complex for Docker users, but the example in this document can be substituted with:

docker run -it \
	-v $HOME/.lighthouse:/root/.lighthouse \
	-v $(pwd)/validator_keys:/root/validator_keys \
	sigp/lighthouse \
	lighthouse --network MY_NETWORK account validator import --directory /root/validator_keys

Here we use two -v volumes to attach:

  • ~/.lighthouse on the host to /root/.lighthouse in the Docker container.
  • The validator_keys directory in the present working directory of the host to the /root/validator_keys directory of the Docker container.

Slashing Protection

The security of Ethereum 2.0's proof of stake protocol depends on penalties for misbehaviour, known as slashings. Validators that sign conflicting messages (blocks or attestations) can be slashed by other validators through the inclusion of a ProposerSlashing or AttesterSlashing on chain.

The Lighthouse validator client includes a mechanism to protect its validators against accidental slashing, known as the slashing protection database. This database records every block and attestation signed by validators, and the validator client uses this information to avoid signing any slashable messages.

Lighthouse's slashing protection database is an SQLite database located at $datadir/validators/slashing_protection.sqlite which is locked exclusively when the validator client is running. In normal operation, this database will be automatically created and utilized, meaning that your validators are kept safe by default.

If you are seeing errors related to slashing protection, it's important that you act slowly and carefully to keep your validators safe. See the Troubleshooting section.

Initialization

The database will be automatically created, and your validators registered with it, when you create validators (e.g. via lighthouse account validator create) or import them (lighthouse account validator import).

Avoiding Slashing

The slashing protection database is designed to protect against many common causes of slashing, but is unable to prevent some others.

Examples of circumstances where the slashing protection database is effective are:

  • Accidentally running two validator clients on the same machine with the same datadir. The exclusive and transactional access to the database prevents the 2nd validator client from signing anything slashable (it won't even start).
  • Deep re-orgs that cause the shuffling to change, prompting validators to re-attest in an epoch where they have already attested. The slashing protection checks all messages against the slashing conditions and will refuse to attest on the new chain until it is safe to do so (usually after one epoch).
  • Importing keys and signing history from another client, where that history is complete. If you run another client and decide to switch to Lighthouse, you can export data from your client to be imported into Lighthouse's slashing protection database. See Import and Export.
  • Misplacing slashing_protection.sqlite during a datadir change or migration between machines. By default Lighthouse will refuse to start if it finds validator keys that are not registered in the slashing protection database.

Examples where it is ineffective are:

  • Running two validator client instances simultaneously. This could be two different clients (e.g. Lighthouse and Prysm) running on the same machine, two Lighthouse instances using different datadirs, or two clients on completely different machines (e.g. one on a cloud server and one running locally). You are responsible for ensuring that your validator keys are never running simultaneously – the slashing protection DB cannot protect you in this case.
  • Importing keys from another client without also importing voting history.
  • If you use --init-slashing-protection to recreate a missing slashing protection database.
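
Under the hood, the database enforces the Casper FFG slashing conditions. Conceptually, deciding whether a new attestation is slashable against a previously signed one looks like this (a simplified sketch; the real checks also compare signing roots so that re-signing identical data is permitted):

```python
def is_slashable_attestation(prev: dict, new: dict) -> bool:
    """Return True if signing `new` after `prev` would violate the
    Casper FFG slashing conditions. Each dict carries `source` and
    `target` checkpoint epochs."""
    # Double vote: two distinct attestations with the same target epoch.
    if new["target"] == prev["target"]:
        return True
    # Surround vote: one attestation's source/target span strictly
    # surrounds the other's.
    if new["source"] < prev["source"] and new["target"] > prev["target"]:
        return True
    if prev["source"] < new["source"] and prev["target"] > new["target"]:
        return True
    return False

# A deep re-org asking a validator to re-attest to target epoch 30
# after it already attested to epoch 30 is blocked:
print(is_slashable_attestation({"source": 0, "target": 30},
                               {"source": 1, "target": 30}))  # True
```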

Import and Export

Lighthouse supports the slashing protection interchange format described in EIP-3076. An interchange file is a record of blocks and attestations signed by a set of validator keys – basically a portable slashing protection database!
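
For orientation, an EIP-3076 interchange file is a JSON document shaped roughly like this (abbreviated, illustrative values; see EIP-3076 for the authoritative schema):

```json
{
  "metadata": {
    "interchange_format_version": "5",
    "genesis_validators_root": "0x04700007..."
  },
  "data": [
    {
      "pubkey": "0xb845089a...",
      "signed_blocks": [
        { "slot": "81952", "signing_root": "0x4ff6f743..." }
      ],
      "signed_attestations": [
        { "source_epoch": "2290", "target_epoch": "3007", "signing_root": "0x587d6a4f..." }
      ]
    }
  ]
}
```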

With your validator client stopped, you can import a .json interchange file from another client using this command:

lighthouse account validator slashing-protection import <my_interchange.json>

Instructions for exporting your existing client's database are out of scope for this document, please check the other client's documentation for instructions.

When importing an interchange file, you still need to import the validator keystores themselves separately, using the instructions about importing keystores into Lighthouse.


You can export Lighthouse's database for use with another client with this command:

lighthouse account validator slashing-protection export <lighthouse_interchange.json>

The validator client needs to be stopped in order to export, to guarantee that the data exported is up to date.

Minification

Since version 1.5.0 Lighthouse automatically minifies slashing protection data upon import. Minification safely shrinks the input file, making it faster to import.

If an import file contains slashable data, then its minification is still safe to import even though the non-minified file would fail to be imported. This means that leaving minification enabled is recommended if the input could contain slashable data. Conversely, if you would like to double-check that the input file is not slashable with respect to itself, then you should disable minification.
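
Conceptually, minification collapses each validator's history into a single maximal entry, which is why a self-contradictory input can still produce a safely importable output. A sketch of the idea for attestations (not Lighthouse's actual implementation):

```python
def minify_attestations(attestations: list) -> list:
    """Collapse an attestation history to one entry spanning the highest
    source and highest target epoch seen. Signing roots are dropped,
    which is what makes the result both compact and free of internal
    conflicts."""
    if not attestations:
        return []
    return [{
        "source_epoch": max(a["source_epoch"] for a in attestations),
        "target_epoch": max(a["target_epoch"] for a in attestations),
    }]

# Two conflicting entries for the same target collapse into one span:
history = [
    {"source_epoch": 0, "target_epoch": 30},
    {"source_epoch": 1, "target_epoch": 30},  # slashable w.r.t. the first
]
print(minify_attestations(history))  # [{'source_epoch': 1, 'target_epoch': 30}]
```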

Minification can be disabled for imports by adding --minify=false to the command:

lighthouse account validator slashing-protection import --minify=false <my_interchange.json>

It can also be enabled for exports (disabled by default):

lighthouse account validator slashing-protection export --minify=true <lighthouse_interchange.json>

Minifying the export file should make it faster to import, and may allow it to be imported into an implementation that is rejecting the non-minified equivalent due to slashable data.

Troubleshooting

Misplaced Slashing Database

If the slashing protection database cannot be found, it will manifest in an error like this:

Oct 12 14:41:26.415 CRIT Failed to start validator client        reason: Failed to open slashing protection database: SQLError("Unable to open database: Error(Some(\"unable to open database file: /home/karlm/.lighthouse/mainnet/validators/slashing_protection.sqlite\"))").
Ensure that `slashing_protection.sqlite` is in "/home/karlm/.lighthouse/mainnet/validators" folder

Usually this indicates that during some manual intervention the slashing database has been misplaced. This error can also occur if you have upgraded from Lighthouse v0.2.x to v0.3.x without moving the slashing protection database. If you have imported your keys into a new node, you should never see this error (see Initialization).

The safest way to remedy this error is to find your old slashing protection database and move it to the correct location. In our example that would be ~/.lighthouse/mainnet/validators/slashing_protection.sqlite. You can search for your old database using a tool like find, fd, or your file manager's GUI. Ask on the Lighthouse Discord if you're not sure.

If you are absolutely 100% sure that you need to recreate the missing database, you can start the Lighthouse validator client with the --init-slashing-protection flag. This flag is incredibly dangerous and should not be used lightly, and we strongly recommend you try finding your old slashing protection database before using it. If you do decide to use it, you should wait at least 1 epoch (~7 minutes) from when your validator client was last actively signing messages. If you suspect your node experienced a clock drift issue you should wait longer. Remember that the inactivity penalty for being offline for even a day or so is approximately equal to the rewards earned in a day. You will get slashed if you use --init-slashing-protection incorrectly.

Slashable Attestations and Re-orgs

Sometimes a re-org can cause the validator client to attempt to sign something slashable, in which case it will be blocked by slashing protection, resulting in a log like this:

Sep 29 15:15:05.303 CRIT Not signing slashable attestation       error: InvalidAttestation(DoubleVote(SignedAttestation { source_epoch: Epoch(0), target_epoch: Epoch(30), signing_root: 0x0c17be1f233b20341837ff183d21908cce73f22f86d5298c09401c6f37225f8a })), attestation: AttestationData { slot: Slot(974), index: 0, beacon_block_root: 0xa86a93ed808f96eb81a0cd7f46e3b3612cafe4bd0367aaf74e0563d82729e2dc, source: Checkpoint { epoch: Epoch(0), root: 0x0000000000000000000000000000000000000000000000000000000000000000 }, target: Checkpoint { epoch: Epoch(30), root: 0xcbe6901c0701a89e4cf508cfe1da2bb02805acfdfe4c39047a66052e2f1bb614 } }

This log is still marked as CRIT because in general it should occur only very rarely, and could indicate a serious error or misconfiguration (see Avoiding Slashing).

Slashable Data in Import

If, during import of an interchange file, you receive an error about the file containing slashable data, you must carefully consider whether you want to continue.

There are several potential causes for this error, each of which requires a different reaction. If the error output lists multiple validator keys, the cause could be different for each of them.

  1. Your validator has actually signed slashable data. If this is the case, you should assess whether your validator has been slashed (or is likely to be slashed). It's up to you whether you'd like to continue.
  2. You have exported data from Lighthouse to another client, and then back to Lighthouse, in a way that didn't preserve the signing roots. A message with no signing roots is considered slashable with respect to any other message at the same slot/epoch, so even if it was signed by Lighthouse originally, Lighthouse has no way of knowing this. If you're sure you haven't run Lighthouse and the other client simultaneously, you can drop Lighthouse's DB in favour of the interchange file.
  3. You have imported the same interchange file (which lacks signing roots) twice, e.g. from Teku. It might be safe to continue as-is, or you could consider a Drop and Re-import.

If you are running the import command with --minify=false, you should consider enabling minification.

Drop and Re-import

If you'd like to prioritize an interchange file over any existing database stored by Lighthouse then you can move (not delete) Lighthouse's database and replace it like so:

mv $datadir/validators/slashing_protection.sqlite ~/slashing_protection_backup.sqlite
lighthouse account validator slashing-protection import <my_interchange.json>

If your interchange file doesn't cover all of your validators, you shouldn't do this. Please reach out on Discord if you need help.

Limitation of Liability

The Lighthouse developers do not guarantee the perfect functioning of this software, or accept liability for any losses suffered. For more information see the Lighthouse license.

Voluntary exits

A validator may choose to voluntarily stop performing duties (proposing blocks and attesting to blocks) by submitting a voluntary exit transaction to the beacon chain.

A validator can initiate a voluntary exit provided that the validator is currently active, has not been slashed, and has been active for at least 256 epochs (~27 hours) since it was activated.

Note: After initiating a voluntary exit, the validator will have to keep performing duties until it has successfully exited to avoid penalties.

It takes a minimum of 5 epochs (~32 minutes) for a validator to exit after initiating a voluntary exit. This number can be much higher depending on how many other validators are queued to exit.

Withdrawal of exited funds

Even though users can perform a voluntary exit in phase 0, they cannot withdraw their exited funds at this point in time. This implies that the staked funds are effectively frozen until withdrawals are enabled in future phases.

To understand the phased rollout strategy for Eth2, please visit https://ethereum.org/en/eth2/#roadmap.

Initiating a voluntary exit

In order to initiate an exit, users can use the lighthouse account validator exit command.

  • The --keystore flag is used to specify the path to the EIP-2335 voting keystore for the validator.

  • The --beacon-node flag is used to specify a beacon chain HTTP endpoint that conforms to the Eth2.0 Standard API specifications. That beacon node will be used to validate and propagate the voluntary exit. The default value for this flag is http://localhost:5052.

  • The --network flag is used to specify a particular Eth2 network (default is mainnet).

  • The --password-file flag is used to specify the path to the file containing the password for the voting keystore. If this flag is not provided, the user will be prompted to enter the password.

After validating the password, the user will be prompted to enter a special exit phrase as a final confirmation after which the voluntary exit will be published to the beacon chain.

The exit phrase is the following:

Exit my validator

Below is an example for initiating a voluntary exit on the Pyrmont testnet.

$ lighthouse --network pyrmont account validator exit --keystore /path/to/keystore --beacon-node http://localhost:5052

Running account manager for pyrmont network
validator-dir path: ~/.lighthouse/pyrmont/validators

Enter the keystore password for validator in 0xabcd

Password is correct

Publishing a voluntary exit for validator 0xabcd

WARNING: WARNING: THIS IS AN IRREVERSIBLE OPERATION

WARNING: WITHDRAWING STAKED ETH WILL NOT BE POSSIBLE UNTIL ETH1/ETH2 MERGE.

PLEASE VISIT https://lighthouse-book.sigmaprime.io/voluntary-exit.html
TO MAKE SURE YOU UNDERSTAND THE IMPLICATIONS OF A VOLUNTARY EXIT.

Enter the exit phrase from the above URL to confirm the voluntary exit:
Exit my validator

Successfully published voluntary exit for validator 0xabcd
Voluntary exit has been accepted into the beacon chain, but not yet finalized. Finalization may take several minutes or longer. Before finalization there is a low probability that the exit may be reverted.
Current epoch: 29946, Exit epoch: 29951, Withdrawable epoch: 30207
Please keep your validator running till exit epoch
Exit epoch in approximately 1920 secs
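
The epochs and timing in the output above follow from the spec constants; a quick check (assuming mainnet timing of 12-second slots and 32-slot epochs):

```python
SLOTS_PER_EPOCH = 32
SECONDS_PER_SLOT = 12
MAX_SEED_LOOKAHEAD = 4                      # exit epoch = current + 1 + 4
MIN_VALIDATOR_WITHDRAWABILITY_DELAY = 256   # epochs after the exit epoch

current_epoch = 29946
exit_epoch = current_epoch + 1 + MAX_SEED_LOOKAHEAD
withdrawable_epoch = exit_epoch + MIN_VALIDATOR_WITHDRAWABILITY_DELAY
seconds_to_exit = (exit_epoch - current_epoch) * SLOTS_PER_EPOCH * SECONDS_PER_SLOT

print(exit_epoch)          # 29951
print(withdrawable_epoch)  # 30207
print(seconds_to_exit)     # 1920
```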

Validator Monitoring

Lighthouse allows for fine-grained monitoring of specific validators using the "validator monitor". Generally, users will want to use this function to track their own validators; however, it can be used for any validator, regardless of who controls it.

Monitoring is in the Beacon Node

Lighthouse performs validator monitoring in the Beacon Node (BN) instead of the Validator Client (VC). This is contrary to what some users may expect, but it has several benefits:

  1. It keeps the VC simple. The VC handles cryptographic signing and the developers believe it should be doing as little additional work as possible.
  2. The BN has a better knowledge of the chain and network. Communicating all this information to the VC is impractical, so monitoring in the BN allows us to provide more information.
  3. It is more flexible:
    • Users can use a local BN to observe some validators running in a remote location.
    • Users can monitor validators that are not their own.

How to Enable Monitoring

The validator monitor is always enabled in Lighthouse, but it might not have any enrolled validators. There are two methods for enrolling a validator for additional monitoring: automatic and manual.

Automatic

When the --validator-monitor-auto flag is supplied, any validator which uses the beacon_committee_subscriptions API endpoint will be enrolled for additional monitoring. All active validators will use this endpoint each epoch, so you can expect it to detect all local and active validators within several minutes after start up.

Example

lighthouse bn --staking --validator-monitor-auto

Manual

The --validator-monitor-pubkeys flag can be used to specify validator public keys for monitoring. This is useful when monitoring validators that are not directly attached to this BN.

Note: when monitoring validators that aren't connected to this BN, supply the --subscribe-all-subnets --import-all-attestations flags to ensure the BN has a full view of the network. This is not strictly necessary, though.

Example

Monitor the mainnet validators at indices 0 and 1:

lighthouse bn --validator-monitor-pubkeys 0x933ad9491b62059dd065b560d256d8957a8c402cc6e8d8ee7290ae11e8f7329267a8811c397529dac52ae1342ba58c95,0xa1d1ad0714035353258038e964ae9675dc0252ee22cea896825c01458e1807bfad2f9969338798548d9858a571f7425c

Observing Monitoring

Enrolling a validator for additional monitoring results in:

  • Additional logs to be printed during BN operation.
  • Additional Prometheus metrics from the BN.

Logging

Lighthouse will create logs for the following events for each monitored validator:

  • A block from the validator is observed.
  • An unaggregated attestation from the validator is observed.
  • An unaggregated attestation from the validator is included in an aggregate.
  • An unaggregated attestation from the validator is included in a block.
  • An aggregated attestation from the validator is observed.
  • An exit for the validator is observed.
  • A slashing (proposer or attester) is observed which implicates that validator.

Example

Jan 18 11:50:03.896 INFO Unaggregated attestation                validator: 0, src: gossip, slot: 342248, epoch: 10695, delay_ms: 891, index: 12, head: 0x5f9d603c04b5489bf2de3708569226fd9428eb40a89c75945e344d06c7f4f86a, service: beacon
Jan 18 11:32:55.196 INFO Attestation included in aggregate       validator: 0, src: gossip, slot: 342162, epoch: 10692, delay_ms: 2193, index: 10, head: 0x9be04ecd04bf82952dad5d12c62e532fd13a8d42afb2e6ee98edaf05fc7f9f30, service: beacon
Jan 18 11:21:09.808 INFO Attestation included in block           validator: 1, slot: 342102, epoch: 10690, inclusion_lag: 0 slot(s), index: 7, head: 0x422bcd14839e389f797fd38b01e31995f91bcaea3d5d56457fc6aac76909ebac, service: beacon
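As a sketch, the monitored-validator events above can be pulled out of a BN log file with a simple grep. The file path and sample lines below are illustrative stand-ins for a real log:

```shell
# Write a small sample log, then filter for validator-monitor events.
# The patterns mirror the event messages shown above; adjust to taste.
cat <<'EOF' > /tmp/bn-sample.log
Jan 18 11:50:03.896 INFO Unaggregated attestation validator: 0, src: gossip, service: beacon
Jan 18 11:50:05.100 INFO New block received, service: beacon
Jan 18 11:32:55.196 INFO Attestation included in aggregate validator: 0, src: gossip, service: beacon
EOF
grep -E 'Unaggregated attestation|Attestation included in (aggregate|block)' /tmp/bn-sample.log
```

The same pattern works against a live log via `tail -f <logfile> | grep -E …`.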

Metrics

The ValidatorMonitor dashboard contains most of the metrics exposed by the validator monitor.

Doppelganger Protection

From Lighthouse v1.5.0, the Doppelganger Protection feature is available for the Validator Client. Taken from the German doppelgänger, which translates literally to "double-walker", a "doppelganger" in Eth2 refers to another instance of a validator running in a separate validator process. As detailed in Slashing Protection, running the same validator twice will inevitably result in slashing.

The Doppelganger Protection (DP) feature in Lighthouse imperfectly attempts to detect other instances of a validator operating on the network before any slashable offences can be committed. It achieves this by staying silent for 2-3 epochs after a validator is started so it can listen for other instances of that validator before starting to sign potentially slashable messages.

Note: Doppelganger Protection is not yet interoperable, so if it is configured on a Lighthouse validator client, the client must be connected to a Lighthouse beacon node. Because Infura uses Teku, Lighthouse's Doppelganger Protection cannot yet be used with Infura's Eth2 service.

Initial Considerations

There are two important initial considerations when using DP:

1. Doppelganger Protection is imperfect

The mechanism is best-effort and imperfect. Even if another validator exists on the network, there is no guarantee that your Beacon Node (BN) will see messages from it. It is feasible for doppelganger protection to fail to detect another validator due to network faults or other common circumstances.

DP should be considered a last-line-of-defence that might save a validator from being slashed due to operator error (i.e. running two instances of the same validator). Users should never rely upon DP and should practice the same caution with regards to duplicating validators as if it did not exist.

Remember: even with doppelganger protection enabled, it is not safe to run two instances of the same validator.

2. Using Doppelganger Protection will always result in penalties

DP works by staying silent on the network for 2-3 epochs before starting to sign slashable messages. Staying silent and refusing to sign messages will cause the following:

  • 2-3 missed attestations, incurring penalties and missed rewards.
  • 2-3 epochs of missed sync committee contributions (if the validator is in a sync committee, which is unlikely), incurring penalties and missed rewards (post-Altair upgrade only).
  • Potentially missed rewards by missing a block proposal (if the validator is an elected block proposer, which is unlikely).

The loss of rewards and penalties incurred due to the missed duties will be very small in dollar-values. Generally, they will equate to around one US dollar (at August 2021 figures) or about 2% of the reward for one validator for one day. Since DP costs so little but can protect a user from slashing, many users will consider this a worthwhile trade-off.

The 2-3 epochs of missed duties will be incurred whenever the VC is started (e.g., after an update or reboot) or whenever a new validator is added via the VC HTTP API.

Enabling Doppelganger Protection

If you understand that DP is imperfect and will cause some (generally, non-substantial) missed duties, it can be enabled by providing the --enable-doppelganger-protection flag:

lighthouse vc --enable-doppelganger-protection

When enabled, the validator client will emit the following log on start up:

INFO Doppelganger detection service started  service: doppelganger

Whilst DP is active, the following log will be emitted (this log indicates that one validator is staying silent and listening for validators):

INFO Listening for doppelgangers     doppelganger_detecting_validators: 1, service: notifier

When a validator has completed DP without detecting a doppelganger, the following log will be emitted:

INFO Doppelganger protection complete   validator_index: 42, msg: starting validator, service: notifier

What if a doppelganger is detected?

If a doppelganger is detected, logs similar to those below will be emitted (these logs indicate that the validator with the index 42 was found to have a doppelganger):

CRIT Doppelganger(s) detected                doppelganger_indices: [42], msg: A doppelganger occurs when two different validator clients run the same public key. This validator client detected another instance of a local validator on the network and is shutting down to prevent potential slashable offences. Ensure that you are not running a duplicate or overlapping validator client, service: doppelganger
INFO Internal shutdown received              reason: Doppelganger detected.
INFO Shutting down..                         reason: Failure("Doppelganger detected.")

Observing a doppelganger is a serious problem and users should be very alarmed. The Lighthouse DP system tries very hard to avoid false-positives so it is likely that a slashing risk is present.

If a doppelganger is observed, the VC will shut down. Do not restart the VC until you are certain there is no other instance of that validator running elsewhere!

The steps to solving a doppelganger vary depending on the case, but some places to check are:

  1. Is there another validator process running on this host?
    • Unix users can check ps aux | grep lighthouse
    • Windows users can check the Task Manager.
  2. Has this validator recently been moved from another host? Check to ensure it's not running.
  3. Has this validator been delegated to a staking service?

Doppelganger Protection FAQs

Should I use DP?

Yes, probably. If you don't have a clear and well-considered reason not to use DP, then it is a good idea to err on the safe side.

How long does it take for DP to complete?

DP takes 2-3 epochs, which is approximately 12-20 minutes.
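The 12-20 minute figure follows directly from the spec constants quoted later in this book (SLOTS_PER_EPOCH = 32, SECONDS_PER_SLOT = 12). A quick sanity check:

```shell
# One epoch = 32 slots * 12 seconds = 384 seconds (6.4 minutes).
SLOTS_PER_EPOCH=32
SECONDS_PER_SLOT=12
epoch_secs=$((SLOTS_PER_EPOCH * SECONDS_PER_SLOT))
echo "2 epochs: $((2 * epoch_secs / 60)) minutes"   # prints 12 (12.8, truncated)
echo "3 epochs: $((3 * epoch_secs / 60)) minutes"   # prints 19 (19.2, truncated)
```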

How long does it take for DP to detect a doppelganger?

To avoid false positives from restarting the same VC, Lighthouse will wait until the next epoch before it starts detecting doppelgangers. Additionally, a validator might not attest until the end of the next epoch. This creates a 2-epoch delay, which is just over 12 minutes. Network delays or other issues might lengthen this time further.

This means your validator client might take up to 20 minutes to detect a doppelganger and shut down.

Can I use DP to run redundant validator instances?

πŸ™… Absolutely not. πŸ™… DP is imperfect and cannot be relied upon. The Internet is messy and lossy, there's no guarantee that DP will detect a duplicate validator before slashing conditions arise.

APIs

Lighthouse allows users to query the state of the Eth2 network using web-standard, RESTful HTTP/JSON APIs.

There are two APIs served by Lighthouse:

Beacon Node API

Lighthouse implements the standard Eth2 Beacon Node API specification. Please follow that link for a full description of each API endpoint.

Starting the server

A Lighthouse beacon node can be configured to expose an HTTP server by supplying the --http flag. The default listen address is 127.0.0.1:5052.

The following CLI flags control the HTTP server:

  • --http: enable the HTTP server (required even if the following flags are provided).
  • --http-port: specify the listen port of the server.
  • --http-address: specify the listen address of the server. It is not recommended to listen on 0.0.0.0, please see Security below.
  • --http-allow-origin: specify the value of the Access-Control-Allow-Origin header. The default is to not supply a header.

The schema of the API aligns with the standard Eth2 Beacon Node API as defined at github.com/ethereum/eth2.0-APIs. An interactive specification is available here.

Security

Do not expose the beacon node API to the public internet or you will open your node to denial-of-service (DoS) attacks.

The API includes several endpoints which can be used to trigger heavy processing, and as such it is strongly recommended to restrict how it is accessed. Using --http-address to change the listening address from localhost should only be done with extreme care.

To safely provide access to the API from a different machine you should use one of the following standard techniques:

  • Use an SSH tunnel, i.e. access localhost remotely. This is recommended, and doesn't require setting --http-address.
  • Use a firewall to limit access to certain remote IPs, e.g. allow access only from one other machine on the local network.
  • Shield Lighthouse behind an HTTP server with rate-limiting such as NGINX. This is only recommended for advanced users, e.g. beacon node hosting providers.
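A sketch of the recommended SSH-tunnel approach (the username and hostname below are placeholders for your own setup):

```shell
# Forward local port 5052 to localhost:5052 on the beacon node host.
# -N: don't run a remote command; -L: local port forward.
# "user" and "beacon-host" are placeholders, not real credentials.
ssh -N -L 5052:localhost:5052 user@beacon-host
```

While the tunnel is up, requests to http://localhost:5052 on your local machine reach the beacon node as if they were made on the host itself, so --http-address never needs to change.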

Additional risks to be aware of include:

  • The node/identity and node/peers endpoints expose information about your node's peer-to-peer identity.
  • The --http-allow-origin flag changes the server's CORS policy, allowing cross-site requests from browsers. You should only supply it if you understand the risks, e.g. malicious websites accessing your beacon node if you use the same machine for staking and web browsing.

CLI Example

Start the beacon node with the HTTP server listening on http://localhost:5052:

lighthouse bn --http

HTTP Request/Response Examples

This section contains some simple examples of using the HTTP API via curl. All endpoints are documented in the Eth2 Beacon Node API specification.

View the head of the beacon chain

Returns the block header at the head of the canonical chain.

curl -X GET "http://localhost:5052/eth/v1/beacon/headers/head" -H  "accept:
application/json"
{
  "data": {
    "root": "0x4381454174fc28c7095077e959dcab407ae5717b5dca447e74c340c1b743d7b2",
    "canonical": true,
    "header": {
      "message": {
        "slot": "3199",
        "proposer_index": "19077",
        "parent_root": "0xf1934973041c5896d0d608e52847c3cd9a5f809c59c64e76f6020e3d7cd0c7cd",
        "state_root": "0xe8e468f9f5961655dde91968f66480868dab8d4147de9498111df2b7e4e6fe60",
        "body_root": "0x6f183abc6c4e97f832900b00d4e08d4373bfdc819055d76b0f4ff850f559b883"
      },
      "signature": "0x988064a2f9cf13fe3aae051a3d85f6a4bca5a8ff6196f2f504e32f1203b549d5f86a39c6509f7113678880701b1881b50925a0417c1c88a750c8da7cd302dda5aabae4b941e3104d0cf19f5043c4f22a7d75d0d50dad5dbdaf6991381dc159ab"
    }
  }
}

View the status of a validator

Shows the status of validator at index 1 at the head state.

curl -X GET "http://localhost:5052/eth/v1/beacon/states/head/validators/1" -H  "accept: application/json"
{
  "data": {
    "index": "1",
    "balance": "63985937939",
    "status": "Active",
    "validator": {
      "pubkey": "0x873e73ee8b3e4fcf1d2fb0f1036ba996ac9910b5b348f6438b5f8ef50857d4da9075d0218a9d1b99a9eae235a39703e1",
      "withdrawal_credentials": "0x00b8cdcf79ba7e74300a07e9d8f8121dd0d8dd11dcfd6d3f2807c45b426ac968",
      "effective_balance": "32000000000",
      "slashed": false,
      "activation_eligibility_epoch": "0",
      "activation_epoch": "0",
      "exit_epoch": "18446744073709551615",
      "withdrawable_epoch": "18446744073709551615"
    }
  }
}

Troubleshooting

HTTP API is unavailable or refusing connections

Ensure the --http flag has been supplied at the CLI.

You can quickly check that the HTTP endpoint is up using curl:

curl -X GET "http://localhost:5052/eth/v1/node/version" -H  "accept: application/json"

The beacon node should respond with its version:

{"data":{"version":"Lighthouse/v0.2.9-6f7b4768a/x86_64-linux"}}

If this doesn't work, the server might not be started or there might be a network connection error.

I cannot query my node from a web browser (e.g., Swagger)

By default, the API does not provide an Access-Control-Allow-Origin header, which causes browsers to reject responses with a CORS error.

The --http-allow-origin flag can be used to add a wild-card CORS header:

lighthouse bn --http --http-allow-origin "*"

Warning: Adding the wild-card allow-origin flag can pose a security risk. Only use it in production if you understand the risks of a loose CORS policy.

Lighthouse Non-Standard APIs

Lighthouse fully supports the standardization efforts at github.com/ethereum/eth2.0-APIs, however sometimes development requires additional endpoints that shouldn't necessarily be defined as a broad-reaching standard. Such endpoints are placed behind the /lighthouse path.

The endpoints behind the /lighthouse path are:

  • Not intended to be stable.
  • Not guaranteed to be safe.
  • For testing and debugging purposes only.

Although we don't recommend that users rely on these endpoints, we document them briefly so they can be utilized by developers and researchers.

/lighthouse/health

Presently only available on Linux.

curl -X GET "http://localhost:5052/lighthouse/health" -H  "accept: application/json" | jq
{
  "data": {
    "sys_virt_mem_total": 16671133696,
    "sys_virt_mem_available": 8273715200,
    "sys_virt_mem_used": 7304818688,
    "sys_virt_mem_free": 2998190080,
    "sys_virt_mem_percent": 50.37101,
    "sys_virt_mem_cached": 5013975040,
    "sys_virt_mem_buffers": 1354149888,
    "sys_loadavg_1": 2.29,
    "sys_loadavg_5": 3.48,
    "sys_loadavg_15": 3.72,
    "cpu_cores": 4,
    "cpu_threads": 8,
    "system_seconds_total": 5728,
    "user_seconds_total": 33680,
    "iowait_seconds_total": 873,
    "idle_seconds_total": 177530,
    "cpu_time_total": 217447,
    "disk_node_bytes_total": 358443397120,
    "disk_node_bytes_free": 70025089024,
    "disk_node_reads_total": 1141863,
    "disk_node_writes_total": 1377993,
    "network_node_bytes_total_received": 2405639308,
    "network_node_bytes_total_transmit": 328304685,
    "misc_node_boot_ts_seconds": 1620629638,
    "misc_os": "linux",
    "pid": 4698,
    "pid_num_threads": 25,
    "pid_mem_resident_set_size": 783757312,
    "pid_mem_virtual_memory_size": 2564665344,
    "pid_process_seconds_total": 22
  }
}

/lighthouse/syncing

curl -X GET "http://localhost:5052/lighthouse/syncing" -H  "accept: application/json" | jq
{
  "data": {
    "SyncingFinalized": {
      "start_slot": 3104,
      "head_slot": 343744,
      "head_root": "0x1b434b5ed702338df53eb5e3e24336a90373bb51f74b83af42840be7421dd2bf"
    }
  }
}

/lighthouse/peers

curl -X GET "http://localhost:5052/lighthouse/peers" -H  "accept: application/json" | jq
[
  {
    "peer_id": "16Uiu2HAmA9xa11dtNv2z5fFbgF9hER3yq35qYNTPvN7TdAmvjqqv",
    "peer_info": {
      "_status": "Healthy",
      "score": {
        "score": 0
      },
      "client": {
        "kind": "Lighthouse",
        "version": "v0.2.9-1c9a055c",
        "os_version": "aarch64-linux",
        "protocol_version": "lighthouse/libp2p",
        "agent_string": "Lighthouse/v0.2.9-1c9a055c/aarch64-linux"
      },
      "connection_status": {
        "status": "disconnected",
        "connections_in": 0,
        "connections_out": 0,
        "last_seen": 1082,
        "banned_ips": []
      },
      "listening_addresses": [
        "/ip4/80.109.35.174/tcp/9000",
        "/ip4/127.0.0.1/tcp/9000",
        "/ip4/192.168.0.73/tcp/9000",
        "/ip4/172.17.0.1/tcp/9000",
        "/ip6/::1/tcp/9000"
      ],
      "sync_status": {
        "Advanced": {
          "info": {
            "status_head_slot": 343829,
            "status_head_root": "0xe34e43efc2bb462d9f364bc90e1f7f0094e74310fd172af698b5a94193498871",
            "status_finalized_epoch": 10742,
            "status_finalized_root": "0x1b434b5ed702338df53eb5e3e24336a90373bb51f74b83af42840be7421dd2bf"
          }
        }
      },
      "meta_data": {
        "seq_number": 160,
        "attnets": "0x0000000800000080"
      }
    }
  }
]

/lighthouse/peers/connected

curl -X GET "http://localhost:5052/lighthouse/peers/connected" -H  "accept: application/json" | jq
[
  {
    "peer_id": "16Uiu2HAkzJC5TqDSKuLgVUsV4dWat9Hr8EjNZUb6nzFb61mrfqBv",
    "peer_info": {
      "_status": "Healthy",
      "score": {
        "score": 0
      },
      "client": {
        "kind": "Lighthouse",
        "version": "v0.2.8-87181204+",
        "os_version": "x86_64-linux",
        "protocol_version": "lighthouse/libp2p",
        "agent_string": "Lighthouse/v0.2.8-87181204+/x86_64-linux"
      },
      "connection_status": {
        "status": "connected",
        "connections_in": 1,
        "connections_out": 0,
        "last_seen": 0,
        "banned_ips": []
      },
      "listening_addresses": [
        "/ip4/34.204.178.218/tcp/9000",
        "/ip4/127.0.0.1/tcp/9000",
        "/ip4/172.31.67.58/tcp/9000",
        "/ip4/172.17.0.1/tcp/9000",
        "/ip6/::1/tcp/9000"
      ],
      "sync_status": "Unknown",
      "meta_data": {
        "seq_number": 1819,
        "attnets": "0xffffffffffffffff"
      }
    }
  }
]

/lighthouse/proto_array

curl -X GET "http://localhost:5052/lighthouse/proto_array" -H  "accept: application/json" | jq

Example omitted for brevity.

/lighthouse/validator_inclusion/{epoch}/{validator_id}

See Validator Inclusion APIs.

/lighthouse/validator_inclusion/{epoch}/global

See Validator Inclusion APIs.

/lighthouse/eth1/syncing

Returns information regarding the Eth1 network, as it is required for use in Eth2.

Fields

  • head_block_number, head_block_timestamp: the block number and timestamp from the very head of the Eth1 chain. Useful for understanding the immediate health of the Eth1 node that the beacon node is connected to.
  • latest_cached_block_number & latest_cached_block_timestamp: the block number and timestamp of the latest block we have in our block cache.
    • For correct Eth1 voting this timestamp should be later than the voting_period_start_timestamp.
  • voting_target_timestamp: The latest timestamp allowed for an eth1 block in this voting period.
  • eth1_node_sync_status_percentage (float): An estimate of how far the head of the Eth1 node is from the head of the Eth1 chain.
    • 100.0 indicates a fully synced Eth1 node.
    • 0.0 indicates an Eth1 node that has not verified any blocks past the genesis block.
  • lighthouse_is_cached_and_ready: Is set to true if the caches in the beacon node are ready for block production.
    • This value might be set to false whilst eth1_node_sync_status_percentage == 100.0 if the beacon node is still building its internal cache.
    • This value might be set to true whilst eth1_node_sync_status_percentage < 100.0 since the cache only cares about blocks a certain distance behind the head.

Example

curl -X GET "http://localhost:5052/lighthouse/eth1/syncing" -H  "accept: application/json" | jq
{
  "data": {
    "head_block_number": 3611806,
    "head_block_timestamp": 1603249317,
    "latest_cached_block_number": 3610758,
    "latest_cached_block_timestamp": 1603233597,
    "voting_target_timestamp": 1603228632,
    "eth1_node_sync_status_percentage": 100,
    "lighthouse_is_cached_and_ready": true
  }
}
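If jq is unavailable, the readiness flag can be pulled out of a saved response with standard tools. A minimal sketch using a response shaped like the example above (the temporary file path is illustrative; in practice the file would come from `curl -s http://localhost:5052/lighthouse/eth1/syncing`):

```shell
# Save a sample /lighthouse/eth1/syncing response, then extract the flag.
cat <<'EOF' > /tmp/eth1_syncing.json
{"data":{"eth1_node_sync_status_percentage":100,"lighthouse_is_cached_and_ready":true}}
EOF
grep -o '"lighthouse_is_cached_and_ready":[a-z]*' /tmp/eth1_syncing.json | cut -d: -f2
# prints: true
```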

/lighthouse/eth1/block_cache

Returns a list of all the Eth1 blocks in the Eth1 voting cache.

Example

curl -X GET "http://localhost:5052/lighthouse/eth1/block_cache" -H  "accept: application/json" | jq
{
  "data": [
    {
      "hash": "0x3a17f4b7ae4ee57ef793c49ebc9c06ff85207a5e15a1d0bd37b68c5ef5710d7f",
      "timestamp": 1603173338,
      "number": 3606741,
      "deposit_root": "0xd24920d936e8fb9b67e93fd126ce1d9e14058b6d82dcf7d35aea46879fae6dee",
      "deposit_count": 88911
    },
    {
      "hash": "0x78852954ea4904e5f81038f175b2adefbede74fbb2338212964405443431c1e7",
      "timestamp": 1603173353,
      "number": 3606742,
      "deposit_root": "0xd24920d936e8fb9b67e93fd126ce1d9e14058b6d82dcf7d35aea46879fae6dee",
      "deposit_count": 88911
    }
  ]
}

/lighthouse/eth1/deposit_cache

Returns a list of all cached logs from the deposit contract.

Example

curl -X GET "http://localhost:5052/lighthouse/eth1/deposit_cache" -H  "accept: application/json" | jq
{
  "data": [
    {
      "deposit_data": {
        "pubkey": "0xae9e6a550ac71490cdf134533b1688fcbdb16f113d7190eacf4f2e9ca6e013d5bd08c37cb2bde9bbdec8ffb8edbd495b",
        "withdrawal_credentials": "0x0062a90ebe71c4c01c4e057d7d13b944d9705f524ebfa24290c22477ab0517e4",
        "amount": "32000000000",
        "signature": "0xa87a4874d276982c471e981a113f8af74a31ffa7d18898a02df2419de2a7f02084065784aa2f743d9ddf80952986ea0b012190cd866f1f2d9c633a7a33c2725d0b181906d413c82e2c18323154a2f7c7ae6f72686782ed9e423070daa00db05b"
      },
      "block_number": 3086571,
      "index": 0,
      "signature_is_valid": false
    },
    {
      "deposit_data": {
        "pubkey": "0xb1d0ec8f907e023ea7b8cb1236be8a74d02ba3f13aba162da4a68e9ffa2e395134658d150ef884bcfaeecdf35c286496",
        "withdrawal_credentials": "0x00a6aa2a632a6c4847cf87ef96d789058eb65bfaa4cc4e0ebc39237421c22e54",
        "amount": "32000000000",
        "signature": "0x8d0f8ec11935010202d6dde9ab437f8d835b9cfd5052c001be5af9304f650ada90c5363022e1f9ef2392dd222cfe55b40dfd52578468d2b2092588d4ad3745775ea4d8199216f3f90e57c9435c501946c030f7bfc8dbd715a55effa6674fd5a4"
      },
      "block_number": 3086579,
      "index": 1,
      "signature_is_valid": false
    }
  ]
}

/lighthouse/beacon/states/{state_id}/ssz

Obtains a BeaconState in SSZ bytes. Useful for obtaining a genesis state.

The state_id parameter is identical to that used in the Standard Eth2.0 API beacon/state routes.

curl -X GET "http://localhost:5052/lighthouse/beacon/states/0/ssz" | jq

Example omitted for brevity, the body simply contains SSZ bytes.

/lighthouse/liveness

POST request that checks if any of the given validators have attested in the given epoch. Returns a list of objects, each including the validator index, epoch, and is_live status of a requested validator.

This endpoint is used in doppelganger detection, and will only provide accurate information for the current, previous, or next epoch.

curl -X POST "http://localhost:5052/lighthouse/liveness" -d '{"indices":["0","1"],"epoch":"1"}' -H  "content-type: application/json" | jq
{
    "data": [
        {
            "index": "0",
            "epoch": "1",
            "is_live": true
        }
    ]
}

Validator Inclusion APIs

The /lighthouse/validator_inclusion API endpoints provide information on results of the proof-of-stake voting process used for finality/justification under Casper FFG.

These endpoints are not stable or included in the Eth2 standard API. As such, they are subject to change or removal without a change in major release version.

Endpoints

HTTP Path                                                 Description
/lighthouse/validator_inclusion/{epoch}/global            A global vote count for a given epoch.
/lighthouse/validator_inclusion/{epoch}/{validator_id}    A per-validator breakdown of votes in a given epoch.

Global

Returns a global count of votes for some given epoch. The results are included both for the current and previous (epoch - 1) epochs since both are required by the beacon node whilst performing per-epoch-processing.

Generally, you should consider the "current" values to be incomplete and the "previous" values to be final. This is because validators can continue to include attestations from the current epoch in the next epoch, however this is not the case for attestations from the previous epoch.

                  `epoch` query parameter
                              |
                              |     --------- values are calculated here
                              |     |
                              v     v
Epoch:  |---previous---|---current---|---next---|

                              |-------------|
                                     ^
                                     |
                window for including "current" attestations
                             in a block

The votes are expressed in terms of staked effective Gwei (i.e., not the number of individual validators). For example, if a validator has 32 ETH staked they will increase the current_epoch_attesting_gwei figure by 32,000,000,000 if they have an attestation included in a block during the current epoch. If this validator has more than 32 ETH, that extra ETH will not count towards their vote (that is why it is effective Gwei).
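The effective-balance cap described above can be sketched in shell arithmetic, using the MAX_EFFECTIVE_BALANCE value of 32,000,000,000 Gwei quoted later in this book (ignoring the hysteresis increments applied by the spec):

```shell
# A validator's vote weight is its effective balance, capped at 32 ETH.
MAX_EFFECTIVE_BALANCE=32000000000   # Gwei (32 ETH)
balance_gwei=33500000000            # example: a validator holding 33.5 ETH
effective=$(( balance_gwei < MAX_EFFECTIVE_BALANCE ? balance_gwei : MAX_EFFECTIVE_BALANCE ))
echo "$effective"                   # prints 32000000000: the extra 1.5 ETH does not count
```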

The following fields are returned:

  • current_epoch_active_gwei: the total staked gwei that was active (i.e., able to vote) during the current epoch.
  • current_epoch_target_attesting_gwei: the total staked gwei that attested to the majority-elected Casper FFG target epoch during the current epoch.
  • previous_epoch_active_gwei: as per current_epoch_active_gwei, but during the previous epoch.
  • previous_epoch_target_attesting_gwei: see current_epoch_target_attesting_gwei.
  • previous_epoch_head_attesting_gwei: the total staked gwei that attested to a head beacon block that is in the canonical chain.

From this data you can calculate some interesting figures:

Participation Rate

previous_epoch_attesting_gwei / previous_epoch_active_gwei

Expresses the ratio of validators that managed to have an attestation voting upon the previous epoch included in a block.

Justification/Finalization Rate

previous_epoch_target_attesting_gwei / previous_epoch_active_gwei

When this value is greater than or equal to 2/3 it is possible that the beacon chain may justify and/or finalize the epoch.

HTTP Example

curl -X GET "http://localhost:5052/lighthouse/validator_inclusion/0/global" -H  "accept: application/json" | jq
{
  "data": {
    "current_epoch_active_gwei": 642688000000000,
    "previous_epoch_active_gwei": 642688000000000,
    "current_epoch_target_attesting_gwei": 366208000000000,
    "previous_epoch_target_attesting_gwei": 1000000000,
    "previous_epoch_head_attesting_gwei": 1000000000
  }
}
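Using the example figures above, the justification/finalization rate can be computed with a one-off awk expression (pure illustration; a real script would parse the JSON response properly):

```shell
# Justification/finalization rate from the example global-votes response.
awk 'BEGIN {
  previous_target = 1000000000        # previous_epoch_target_attesting_gwei
  previous_active = 642688000000000   # previous_epoch_active_gwei
  rate = previous_target / previous_active
  printf "rate = %.8f (needs >= 2/3 to justify)\n", rate   # ~1.56e-6, far below 2/3
}'
```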

Individual

Returns a per-validator summary of how that validator performed during the current epoch.

The Global Votes endpoint is the summation of all of these individual values; please see it for definitions of terms like "current_epoch", "previous_epoch" and "target_attester".

HTTP Example

curl -X GET "http://localhost:5052/lighthouse/validator_inclusion/0/42" -H  "accept: application/json" | jq
{
  "data": {
    "is_slashed": false,
    "is_withdrawable_in_current_epoch": false,
    "is_active_unslashed_in_current_epoch": true,
    "is_active_unslashed_in_previous_epoch": true,
    "current_epoch_effective_balance_gwei": 32000000000,
    "is_current_epoch_target_attester": false,
    "is_previous_epoch_target_attester": false,
    "is_previous_epoch_head_attester": false
  }
}

Validator Client API

Lighthouse implements an HTTP/JSON API for the validator client. Since there is no Eth2 standard validator client API, Lighthouse has defined its own.

A full list of endpoints can be found in Endpoints.

Note: All requests to the HTTP server must supply an Authorization header. All responses contain a Signature header for optional verification.

Starting the server

A Lighthouse validator client can be configured to expose an HTTP server by supplying the --http flag. The default listen address is 127.0.0.1:5062.

The following CLI flags control the HTTP server:

  • --http: enable the HTTP server (required even if the following flags are provided).
  • --http-address: specify the listen address of the server. It is almost always unsafe to use a non-default HTTP listen address. Use with caution. See the Security section below for more information.
  • --http-port: specify the listen port of the server.
  • --http-allow-origin: specify the value of the Access-Control-Allow-Origin header. The default is to not supply a header.

Security

The validator client HTTP server is not encrypted (i.e., it is not HTTPS). For this reason, it will listen by default on 127.0.0.1.

It is unsafe to expose the validator client to the public Internet without additional transport layer security (e.g., HTTPS via nginx, SSH tunnels, etc.).

For custom setups, such as certain Docker configurations, a custom HTTP listen address can be used by passing the --http-address and --unencrypted-http-transport flags. The --unencrypted-http-transport flag is a safety flag which is required to ensure the user is aware of the potential risks when using a non-default listen address.

CLI Example

Start the validator client with the HTTP server listening on http://localhost:5062:

lighthouse vc --http

Validator Client API: Endpoints

Endpoints

HTTP Path                                      Description
GET /lighthouse/version                        Get the Lighthouse software version
GET /lighthouse/health                         Get information about the host machine
GET /lighthouse/spec                           Get the Eth2 specification used by the validator
GET /lighthouse/validators                     List all validators
GET /lighthouse/validators/:voting_pubkey      Get a specific validator
PATCH /lighthouse/validators/:voting_pubkey    Update a specific validator
POST /lighthouse/validators                    Create a new validator and mnemonic
POST /lighthouse/validators/keystore           Import a keystore
POST /lighthouse/validators/mnemonic           Create a new validator from an existing mnemonic

GET /lighthouse/version

Returns the software version and git commit hash for the Lighthouse binary.

HTTP Specification

Property             Specification
Path                 /lighthouse/version
Method               GET
Required Headers     Authorization
Typical Responses    200

Example Response Body

{
    "data": {
        "version": "Lighthouse/v0.2.11-fc0654fbe+/x86_64-linux"
    }
}
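The request above can be made with curl, supplying the Authorization header described earlier. The token file path below is an assumption for a default mainnet setup; it varies with --datadir and --network:

```shell
# Read the API token generated by the VC and supply it as the
# Authorization header. Adjust TOKEN_FILE for your own data directory.
TOKEN_FILE="$HOME/.lighthouse/mainnet/validators/api-token.txt"
curl -X GET "http://localhost:5062/lighthouse/version" \
  -H "Authorization: Basic $(cat "$TOKEN_FILE")"
```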

GET /lighthouse/health

Returns information regarding the health of the host machine.

HTTP Specification

Property             Specification
Path                 /lighthouse/health
Method               GET
Required Headers     Authorization
Typical Responses    200

Note: this endpoint is presently only available on Linux.

Example Response Body

{
    "data": {
        "pid": 1476293,
        "pid_num_threads": 19,
        "pid_mem_resident_set_size": 4009984,
        "pid_mem_virtual_memory_size": 1306775552,
        "sys_virt_mem_total": 33596100608,
        "sys_virt_mem_available": 23073017856,
        "sys_virt_mem_used": 9346957312,
        "sys_virt_mem_free": 22410510336,
        "sys_virt_mem_percent": 31.322334,
        "sys_loadavg_1": 0.98,
        "sys_loadavg_5": 0.98,
        "sys_loadavg_15": 1.01
    }
}

GET /lighthouse/spec

Returns the Eth2 specification loaded for this validator.

HTTP Specification

Property           Specification
Path               /lighthouse/spec
Method             GET
Required Headers   Authorization
Typical Responses  200

Example Response Body

{
    "data": {
        "CONFIG_NAME": "mainnet",
        "MAX_COMMITTEES_PER_SLOT": "64",
        "TARGET_COMMITTEE_SIZE": "128",
        "MIN_PER_EPOCH_CHURN_LIMIT": "4",
        "CHURN_LIMIT_QUOTIENT": "65536",
        "SHUFFLE_ROUND_COUNT": "90",
        "MIN_GENESIS_ACTIVE_VALIDATOR_COUNT": "1024",
        "MIN_GENESIS_TIME": "1601380800",
        "GENESIS_DELAY": "172800",
        "MIN_DEPOSIT_AMOUNT": "1000000000",
        "MAX_EFFECTIVE_BALANCE": "32000000000",
        "EJECTION_BALANCE": "16000000000",
        "EFFECTIVE_BALANCE_INCREMENT": "1000000000",
        "HYSTERESIS_QUOTIENT": "4",
        "HYSTERESIS_DOWNWARD_MULTIPLIER": "1",
        "HYSTERESIS_UPWARD_MULTIPLIER": "5",
        "PROPORTIONAL_SLASHING_MULTIPLIER": "3",
        "GENESIS_FORK_VERSION": "0x00000002",
        "BLS_WITHDRAWAL_PREFIX": "0x00",
        "SECONDS_PER_SLOT": "12",
        "MIN_ATTESTATION_INCLUSION_DELAY": "1",
        "MIN_SEED_LOOKAHEAD": "1",
        "MAX_SEED_LOOKAHEAD": "4",
        "MIN_EPOCHS_TO_INACTIVITY_PENALTY": "4",
        "MIN_VALIDATOR_WITHDRAWABILITY_DELAY": "256",
        "SHARD_COMMITTEE_PERIOD": "256",
        "BASE_REWARD_FACTOR": "64",
        "WHISTLEBLOWER_REWARD_QUOTIENT": "512",
        "PROPOSER_REWARD_QUOTIENT": "8",
        "INACTIVITY_PENALTY_QUOTIENT": "16777216",
        "MIN_SLASHING_PENALTY_QUOTIENT": "32",
        "SAFE_SLOTS_TO_UPDATE_JUSTIFIED": "8",
        "DOMAIN_BEACON_PROPOSER": "0x00000000",
        "DOMAIN_BEACON_ATTESTER": "0x01000000",
        "DOMAIN_RANDAO": "0x02000000",
        "DOMAIN_DEPOSIT": "0x03000000",
        "DOMAIN_VOLUNTARY_EXIT": "0x04000000",
        "DOMAIN_SELECTION_PROOF": "0x05000000",
        "DOMAIN_AGGREGATE_AND_PROOF": "0x06000000",
        "MAX_VALIDATORS_PER_COMMITTEE": "2048",
        "SLOTS_PER_EPOCH": "32",
        "EPOCHS_PER_ETH1_VOTING_PERIOD": "32",
        "SLOTS_PER_HISTORICAL_ROOT": "8192",
        "EPOCHS_PER_HISTORICAL_VECTOR": "65536",
        "EPOCHS_PER_SLASHINGS_VECTOR": "8192",
        "HISTORICAL_ROOTS_LIMIT": "16777216",
        "VALIDATOR_REGISTRY_LIMIT": "1099511627776",
        "MAX_PROPOSER_SLASHINGS": "16",
        "MAX_ATTESTER_SLASHINGS": "2",
        "MAX_ATTESTATIONS": "128",
        "MAX_DEPOSITS": "16",
        "MAX_VOLUNTARY_EXITS": "16",
        "ETH1_FOLLOW_DISTANCE": "1024",
        "TARGET_AGGREGATORS_PER_COMMITTEE": "16",
        "RANDOM_SUBNETS_PER_VALIDATOR": "1",
        "EPOCHS_PER_RANDOM_SUBNET_SUBSCRIPTION": "256",
        "SECONDS_PER_ETH1_BLOCK": "14",
        "DEPOSIT_CONTRACT_ADDRESS": "0x48b597f4b53c21b48ad95c7256b49d1779bd5890"
    }
}

GET /lighthouse/validators

Lists all validators managed by this validator client.

HTTP Specification

Property           Specification
Path               /lighthouse/validators
Method             GET
Required Headers   Authorization
Typical Responses  200

Example Response Body

{
    "data": [
        {
            "enabled": true,
            "description": "validator one",
            "voting_pubkey": "0xb0148e6348264131bf47bcd1829590e870c836dc893050fd0dadc7a28949f9d0a72f2805d027521b45441101f0cc1cde"
        },
        {
            "enabled": true,
            "description": "validator two",
            "voting_pubkey": "0xb0441246ed813af54c0a11efd53019f63dd454a1fa2a9939ce3c228419fbe113fb02b443ceeb38736ef97877eb88d43a"
        },
        {
            "enabled": true,
            "description": "validator three",
            "voting_pubkey": "0xad77e388d745f24e13890353031dd8137432ee4225752642aad0a2ab003c86620357d91973b6675932ff51f817088f38"
        }
    ]
}

GET /lighthouse/validators/:voting_pubkey

Get a validator by their voting_pubkey.

HTTP Specification

Property           Specification
Path               /lighthouse/validators/:voting_pubkey
Method             GET
Required Headers   Authorization
Typical Responses  200, 400

Example Path

localhost:5062/lighthouse/validators/0xb0148e6348264131bf47bcd1829590e870c836dc893050fd0dadc7a28949f9d0a72f2805d027521b45441101f0cc1cde

Example Response Body

{
    "data": {
        "enabled": true,
        "voting_pubkey": "0xb0148e6348264131bf47bcd1829590e870c836dc893050fd0dadc7a28949f9d0a72f2805d027521b45441101f0cc1cde"
    }
}

PATCH /lighthouse/validators/:voting_pubkey

Update some values for the validator with voting_pubkey.

HTTP Specification

Property           Specification
Path               /lighthouse/validators/:voting_pubkey
Method             PATCH
Required Headers   Authorization
Typical Responses  200, 400

Example Path

localhost:5062/lighthouse/validators/0xb0148e6348264131bf47bcd1829590e870c836dc893050fd0dadc7a28949f9d0a72f2805d027521b45441101f0cc1cde

Example Request Body

{
    "enabled": false
}

Example Response Body

null

POST /lighthouse/validators

Create any number of new validators, all of which will share a common mnemonic generated by the server.

A BIP-39 mnemonic will be randomly generated and returned with the response. This mnemonic can be used to recover all keys returned in the response. Validators are generated from the mnemonic according to EIP-2334, starting at index 0.

HTTP Specification

Property           Specification
Path               /lighthouse/validators
Method             POST
Required Headers   Authorization
Typical Responses  200

Example Request Body

[
    {
        "enable": true,
        "description": "validator_one",
        "deposit_gwei": "32000000000",
        "graffiti": "Mr F was here"
    },
    {
        "enable": false,
        "description": "validator two",
        "deposit_gwei": "34000000000"
    }
]

Example Response Body

{
    "data": {
        "mnemonic": "marine orchard scout label trim only narrow taste art belt betray soda deal diagram glare hero scare shadow ramp blur junior behave resource tourist",
        "validators": [
            {
                "enabled": true,
                "description": "validator_one",
                "voting_pubkey": "0x8ffbc881fb60841a4546b4b385ec5e9b5090fd1c4395e568d98b74b94b41a912c6101113da39d43c101369eeb9b48e50",
                "eth1_deposit_tx_data": "0x22895118000000000000000000000000000000000000000000000000000000000000008000000000000000000000000000000000000000000000000000000000000000e000000000000000000000000000000000000000000000000000000000000001206c68675776d418bfd63468789e7c68a6788c4dd45a3a911fe3d642668220bbf200000000000000000000000000000000000000000000000000000000000000308ffbc881fb60841a4546b4b385ec5e9b5090fd1c4395e568d98b74b94b41a912c6101113da39d43c101369eeb9b48e5000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000002000cf8b3abbf0ecd91f3b0affcc3a11e9c5f8066efb8982d354ee9a812219b17000000000000000000000000000000000000000000000000000000000000000608fbe2cc0e17a98d4a58bd7a65f0475a58850d3c048da7b718f8809d8943fee1dbd5677c04b5fa08a9c44d271d009edcd15caa56387dc217159b300aad66c2cf8040696d383d0bff37b2892a7fe9ba78b2220158f3dc1b9cd6357bdcaee3eb9f2",
                "deposit_gwei": "32000000000"
            },
            {
                "enabled": false,
                "description": "validator two",
                "voting_pubkey": "0xa9fadd620dc68e9fe0d6e1a69f6c54a0271ad65ab5a509e645e45c6e60ff8f4fc538f301781193a08b55821444801502",
                "eth1_deposit_tx_data": "0x22895118000000000000000000000000000000000000000000000000000000000000008000000000000000000000000000000000000000000000000000000000000000e00000000000000000000000000000000000000000000000000000000000000120b1911954c1b8d23233e0e2bf8c4878c8f56d25a4f790ec09a94520ec88af30490000000000000000000000000000000000000000000000000000000000000030a9fadd620dc68e9fe0d6e1a69f6c54a0271ad65ab5a509e645e45c6e60ff8f4fc538f301781193a08b5582144480150200000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000002000a96df8b95c3ba749265e48a101f2ed974fffd7487487ed55f8dded99b617ad000000000000000000000000000000000000000000000000000000000000006090421299179824950e2f5a592ab1fdefe5349faea1e8126146a006b64777b74cce3cfc5b39d35b370e8f844e99c2dc1b19a1ebd38c7605f28e9c4540aea48f0bc48e853ae5f477fa81a9fc599d1732968c772730e1e47aaf5c5117bd045b788e",
                "deposit_gwei": "34000000000"
            }
        ]
    }
}

POST /lighthouse/validators/keystore

Import a keystore into the validator client.

HTTP Specification

Property           Specification
Path               /lighthouse/validators/keystore
Method             POST
Required Headers   Authorization
Typical Responses  200

Example Request Body

{
  "enable": true,
  "password": "mypassword",
  "keystore": {
    "crypto": {
      "kdf": {
        "function": "scrypt",
        "params": {
          "dklen": 32,
          "n": 262144,
          "r": 8,
          "p": 1,
          "salt": "445989ec2f332bb6099605b4f1562c0df017488d8d7fb3709f99ebe31da94b49"
        },
        "message": ""
      },
      "checksum": {
        "function": "sha256",
        "params": {},
        "message": "abadc1285fd38b24a98ac586bda5b17a8f93fc1ff0778803dc32049578981236"
      },
      "cipher": {
        "function": "aes-128-ctr",
        "params": {
          "iv": "65abb7e1d02eec9910d04299cc73efbe"
        },
        "message": "6b7931a4447be727a3bb5dc106d9f3c1ba50671648e522f213651d13450b6417"
      }
    },
    "uuid": "5cf2a1fb-dcd6-4095-9ebf-7e4ee0204cab",
    "path": "m/12381/3600/0/0/0",
    "pubkey": "b0d2f05014de27c6d7981e4a920799db1c512ee7922932be6bf55729039147cf35a090bd4ab378fe2d133c36cbbc9969",
    "version": 4,
    "description": ""
  }
}

Example Response Body

{
  "data": {
    "enabled": true,
    "description": "",
    "voting_pubkey": "0xb0d2f05014de27c6d7981e4a920799db1c512ee7922932be6bf55729039147cf35a090bd4ab378fe2d133c36cbbc9969"
  }
}

POST /lighthouse/validators/mnemonic

Create any number of new validators, all of which will share a common mnemonic.

The supplied BIP-39 mnemonic will be used to generate the validator keys according to EIP-2334, starting at the supplied key_derivation_path_offset. For example, if key_derivation_path_offset = 42, then the first validator voting key will be generated with the path m/12381/3600/i/42.

HTTP Specification

Property           Specification
Path               /lighthouse/validators/mnemonic
Method             POST
Required Headers   Authorization
Typical Responses  200

Example Request Body

{
    "mnemonic": "theme onion deal plastic claim silver fancy youth lock ordinary hotel elegant balance ridge web skill burger survey demand distance legal fish salad cloth",
    "key_derivation_path_offset": 0,
    "validators": [
        {
            "enable": true,
            "description": "validator_one",
            "deposit_gwei": "32000000000"
        }
    ]
}

Example Response Body

{
    "data": [
        {
            "enabled": true,
            "description": "validator_one",
            "voting_pubkey": "0xa062f95fee747144d5e511940624bc6546509eeaeae9383257a9c43e7ddc58c17c2bab4ae62053122184c381b90db380",
            "eth1_deposit_tx_data": "0x22895118000000000000000000000000000000000000000000000000000000000000008000000000000000000000000000000000000000000000000000000000000000e00000000000000000000000000000000000000000000000000000000000000120a57324d95ae9c7abfb5cc9bd4db253ed0605dc8a19f84810bcf3f3874d0e703a0000000000000000000000000000000000000000000000000000000000000030a062f95fee747144d5e511940624bc6546509eeaeae9383257a9c43e7ddc58c17c2bab4ae62053122184c381b90db3800000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000200046e4199f18102b5d4e8842d0eeafaa1268ee2c21340c63f9c2cd5b03ff19320000000000000000000000000000000000000000000000000000000000000060b2a897b4ba4f3910e9090abc4c22f81f13e8923ea61c0043506950b6ae174aa643540554037b465670d28fa7b7d716a301e9b172297122acc56be1131621c072f7c0a73ea7b8c5a90ecd5da06d79d90afaea17cdeeef8ed323912c70ad62c04b",
            "deposit_gwei": "32000000000"
        }
    ]
}

POST /lighthouse/validators/web3signer

Create any number of new validators, all of which will refer to a Web3Signer server for signing.

HTTP Specification

Property           Specification
Path               /lighthouse/validators/web3signer
Method             POST
Required Headers   Authorization
Typical Responses  200, 400

Example Request Body

[
    {
        "enable": true,
        "description": "validator_one",
        "graffiti": "Mr F was here",
        "voting_public_key": "0xa062f95fee747144d5e511940624bc6546509eeaeae9383257a9c43e7ddc58c17c2bab4ae62053122184c381b90db380",
        "url": "http://path-to-web3signer.com",
        "root_certificate_path": "/path/on/vc/filesystem/to/certificate.pem",
        "request_timeout_ms": 12000
    }
]

The following fields may be omitted or nullified to obtain default values:

  • graffiti
  • root_certificate_path
  • request_timeout_ms

Example Response Body

No data is included in the response body.

Validator Client API: Authorization Header

Overview

The validator client HTTP server requires that all requests have the following HTTP header:

  • Name: Authorization
  • Value: Basic <api-token>

Where <api-token> is a string that can be obtained from the validator client host. Here is an example Authorization header:

Authorization: Basic api-token-0x03eace4c98e8f77477bb99efb74f9af10d800bd3318f92c33b719a4644254d4123

Obtaining the API token

The API token is stored as a file in the validators directory. For most users this is ~/.lighthouse/{network}/validators/api-token.txt. Here's an example using the cat command to print the token to the terminal, but any text editor will suffice:

$ cat api-token.txt
api-token-0x03eace4c98e8f77477bb99efb74f9af10d800bd3318f92c33b719a4644254d4123

When the validator client starts, it outputs a log message containing the path to the file holding the API token:

Sep 28 19:17:52.615 INFO HTTP API started                        api_token_file: "$HOME/prater/validators/api-token.txt", listen_address: 127.0.0.1:5062

Example

Here is an example curl command using the API token in the Authorization header:

curl localhost:5062/lighthouse/version -H "Authorization: Basic api-token-0x03eace4c98e8f77477bb99efb74f9af10d800bd3318f92c33b719a4644254d4123"

The server should respond with its version:

{"data":{"version":"Lighthouse/v0.2.11-fc0654fbe+/x86_64-linux"}}
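
As a sketch, the header can also be constructed programmatically. The snippet below uses Node.js; the helper is illustrative (not an official client) and uses the example token from this page — substitute your own token from api-token.txt:

```javascript
// Build the Authorization header required by the validator client API.
// API_TOKEN is the example token from this page, not a real credential.
const API_TOKEN =
  "api-token-0x03eace4c98e8f77477bb99efb74f9af10d800bd3318f92c33b719a4644254d4123";

function authHeaders() {
  // Every request must carry the Authorization header in this exact form.
  return { Authorization: `Basic ${API_TOKEN}` };
}

console.log(authHeaders().Authorization);

// To actually send a request (requires a running validator client), e.g.:
//   require("http").get(
//     { host: "localhost", port: 5062, path: "/lighthouse/version",
//       headers: authHeaders() },
//     (res) => res.pipe(process.stdout));
```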

Validator Client API: Signature Header

Overview

The validator client HTTP server adds the following header to all responses:

  • Name: Signature
  • Value: a secp256k1 signature across the SHA256 of the response body.

Example Signature header:

Signature: 0x304402205b114366444112580bf455d919401e9c869f5af067cd496016ab70d428b5a99d0220067aede1eb5819eecfd5dd7a2b57c5ac2b98f25a7be214b05684b04523aef873

Verifying the Signature

Below is a browser-ready example of signature verification.

HTML

<script src="https://rawgit.com/emn178/js-sha256/master/src/sha256.js" type="text/javascript"></script>
<script src="https://rawgit.com/indutny/elliptic/master/dist/elliptic.min.js" type="text/javascript"></script>

Javascript

// Helper function to turn a hex-string into bytes.
function hexStringToByte(str) {
  if (!str) {
    return new Uint8Array();
  }

  var a = [];
  for (var i = 0, len = str.length; i < len; i+=2) {
    a.push(parseInt(str.substr(i,2),16));
  }

  return new Uint8Array(a);
}

// This example uses the secp256k1 curve from the "elliptic" library:
//
// https://github.com/indutny/elliptic
var ec = new elliptic.ec('secp256k1');

// The public key is contained in the API token:
//
// Authorization: Basic api-token-0x03eace4c98e8f77477bb99efb74f9af10d800bd3318f92c33b719a4644254d4123
var pk_bytes = hexStringToByte('03eace4c98e8f77477bb99efb74f9af10d800bd3318f92c33b719a4644254d4123');

// The signature is in the `Signature` header of the response:
//
// Signature: 0x304402205b114366444112580bf455d919401e9c869f5af067cd496016ab70d428b5a99d0220067aede1eb5819eecfd5dd7a2b57c5ac2b98f25a7be214b05684b04523aef873
var sig_bytes = hexStringToByte('304402205b114366444112580bf455d919401e9c869f5af067cd496016ab70d428b5a99d0220067aede1eb5819eecfd5dd7a2b57c5ac2b98f25a7be214b05684b04523aef873');

// The HTTP response body.
var response_body = "{\"data\":{\"version\":\"Lighthouse/v0.2.11-fc0654fbe+/x86_64-linux\"}}";

// The HTTP response body is hashed (SHA256) to determine the 32-byte message.
let hash = sha256.create();
hash.update(response_body);
let message = hash.array();

// The 32-byte message hash, the signature and the public key are verified.
if (ec.verify(message, sig_bytes, pk_bytes)) {
  console.log("The signature is valid")
} else {
  console.log("The signature is invalid")
}

This example is also available as a JSFiddle.

Example

The previous Javascript example was written using the output from the following curl command:

curl -v localhost:5062/lighthouse/version -H "Authorization: Basic api-token-0x03eace4c98e8f77477bb99efb74f9af10d800bd3318f92c33b719a4644254d4123"
*   Trying ::1:5062...
* connect to ::1 port 5062 failed: Connection refused
*   Trying 127.0.0.1:5062...
* Connected to localhost (127.0.0.1) port 5062 (#0)
> GET /lighthouse/version HTTP/1.1
> Host: localhost:5062
> User-Agent: curl/7.72.0
> Accept: */*
> Authorization: Basic api-token-0x03eace4c98e8f77477bb99efb74f9af10d800bd3318f92c33b719a4644254d4123
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< signature: 0x304402205b114366444112580bf455d919401e9c869f5af067cd496016ab70d428b5a99d0220067aede1eb5819eecfd5dd7a2b57c5ac2b98f25a7be214b05684b04523aef873
< server: Lighthouse/v0.2.11-fc0654fbe+/x86_64-linux
< access-control-allow-origin:
< content-length: 65
< date: Tue, 29 Sep 2020 04:23:46 GMT
<
* Connection #0 to host localhost left intact
{"data":{"version":"Lighthouse/v0.2.11-fc0654fbe+/x86_64-linux"}}

Prometheus Metrics

Lighthouse provides an extensive suite of metrics and monitoring in the Prometheus export format via an HTTP server built into Lighthouse.

These metrics are generally consumed by a Prometheus server and displayed via a Grafana dashboard. These components are available in a docker-compose format at sigp/lighthouse-metrics.

Beacon Node Metrics

By default, these metrics are disabled but can be enabled with the --metrics flag. Use the --metrics-address, --metrics-port and --metrics-allow-origin flags to customize the metrics server.

Example

Start a beacon node with the metrics server enabled:

lighthouse bn --metrics

Check to ensure that the metrics are available on the default port:

curl localhost:5054/metrics

Validator Client Metrics

By default, these metrics are disabled but can be enabled with the --metrics flag. Use the --metrics-address, --metrics-port and --metrics-allow-origin flags to customize the metrics server.

Example

Start a validator client with the metrics server enabled:

lighthouse vc --metrics

Check to ensure that the metrics are available on the default port:

curl localhost:5064/metrics

Advanced Usage

Want to get into the nitty-gritty of Lighthouse configuration? Looking for something not covered elsewhere?

This section provides detailed information about configuring Lighthouse for specific use cases, and tips about how things work under the hood.

Custom Data Directories

Users can override the default Lighthouse data directories (e.g., ~/.lighthouse/mainnet) using the --datadir flag. The custom data directory mirrors the structure of any network specific default directory (e.g. ~/.lighthouse/mainnet).

Note: Users should specify different custom directories for different networks.

Below is an example flow for importing validator keys, running a beacon node and validator client using a custom data directory /var/lib/my-custom-dir for the Mainnet network.

lighthouse --network mainnet --datadir /var/lib/my-custom-dir account validator import --directory <PATH-TO-LAUNCHPAD-KEYS-DIRECTORY>
lighthouse --network mainnet --datadir /var/lib/my-custom-dir bn --staking
lighthouse --network mainnet --datadir /var/lib/my-custom-dir vc

The first step creates a validators directory under /var/lib/my-custom-dir which contains the imported keys and validator_definitions.yml. After that, we simply run the beacon node and validator client with the custom directory path.

Validator Graffiti

Lighthouse provides four options for setting validator graffiti.

1. Using the "--graffiti-file" flag on the validator client

Users can specify a file with the --graffiti-file flag. This option is useful for dynamically changing graffitis for various use cases (e.g. drawing on the beaconcha.in graffiti wall). This file is loaded once on startup and reloaded every time a validator is chosen to propose a block.

Usage: lighthouse vc --graffiti-file graffiti_file.txt

The file should contain key-value pairs corresponding to validator public keys and their associated graffiti. The file can also contain a default key for the default case.

default: default_graffiti
public_key1: graffiti1
public_key2: graffiti2
...

Below is an example of a graffiti file:

default: Lighthouse
0x87a580d31d7bc69069b55f5a01995a610dd391a26dc9e36e81057a17211983a79266800ab8531f21f1083d7d84085007: mr f was here
0xa5566f9ec3c6e1fdf362634ebec9ef7aceb0e460e5079714808388e5d48f4ae1e12897fed1bea951c17fa389d511e477: mr v was here

Lighthouse will first search for the graffiti corresponding to the public key of the proposing validator; if there is no match for the public key, it uses the graffiti corresponding to the default key, if present.
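
The lookup rule can be sketched as follows (an illustrative parser with made-up keys, not Lighthouse's actual implementation):

```javascript
// Parse "key: value" lines from a graffiti file into a plain object.
function loadGraffitiFile(text) {
  const entries = {};
  for (const line of text.split("\n")) {
    const idx = line.indexOf(":");
    if (idx !== -1) {
      entries[line.slice(0, idx).trim()] = line.slice(idx + 1).trim();
    }
  }
  return entries;
}

// Prefer the proposer's own entry; fall back to the "default" entry.
function graffitiFor(entries, pubkey) {
  return pubkey in entries ? entries[pubkey] : entries["default"];
}

// "0xabc" is a shortened, made-up key for illustration.
const table = loadGraffitiFile("default: Lighthouse\n0xabc: mr f was here");
console.log(graffitiFor(table, "0xabc")); // "mr f was here"
console.log(graffitiFor(table, "0xdef")); // falls back to "Lighthouse"
```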

2. Setting the graffiti in the validator_definitions.yml

Users can set validator specific graffitis in validator_definitions.yml with the graffiti key. This option is recommended for static setups where the graffitis won't change on every new block proposal.

Below is an example of the validator_definitions.yml with validator specific graffitis:

---
- enabled: true
  voting_public_key: "0x87a580d31d7bc69069b55f5a01995a610dd391a26dc9e36e81057a17211983a79266800ab8531f21f1083d7d84085007"
  type: local_keystore
  voting_keystore_path: /home/paul/.lighthouse/validators/0x87a580d31d7bc69069b55f5a01995a610dd391a26dc9e36e81057a17211983a79266800ab8531f21f1083d7d84085007/voting-keystore.json
  voting_keystore_password_path: /home/paul/.lighthouse/secrets/0x87a580d31d7bc69069b55f5a01995a610dd391a26dc9e36e81057a17211983a79266800ab8531f21f1083d7d84085007
  graffiti: "mr f was here"
- enabled: false
  voting_public_key: "0xa5566f9ec3c6e1fdf362634ebec9ef7aceb0e460e5079714808388e5d48f4ae1e12897fed1bea951c17fa389d511e477"
  type: local_keystore
  voting_keystore_path: /home/paul/.lighthouse/validators/0xa5566f9ec3c6e1fdf362634ebec9ef7aceb0e460e5079714808388e5d48f4ae1e12897fed1bea951c17fa389d511e477/voting-keystore.json
  voting_keystore_password: myStrongpa55word123&$
  graffiti: "somethingprofound"

3. Using the "--graffiti" flag on the validator client

Users can specify a common graffiti for all their validators using the --graffiti flag on the validator client.

Usage: lighthouse vc --graffiti fortytwo

4. Using the "--graffiti" flag on the beacon node

Users can also specify a common graffiti for all validators using the --graffiti flag on the beacon node.

Usage: lighthouse bn --graffiti fortytwo

Note: The order of preference for loading the graffiti is as follows:

  1. Read from --graffiti-file if provided.
  2. If --graffiti-file is not provided or errors, read graffiti from validator_definitions.yml.
  3. If graffiti is not specified in validator_definitions.yml, load the graffiti passed in the --graffiti flag on the validator client.
  4. If the --graffiti flag on the validator client is not passed, load the graffiti passed in the --graffiti flag on the beacon node.
  5. If the --graffiti flag is not passed, load the default Lighthouse graffiti.
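
The preference order above can be sketched as a simple fallback chain (the helper name and the final default value are illustrative assumptions, not Lighthouse internals):

```javascript
// Select the graffiti from the first source that provides one, in the
// documented order of preference.
function selectGraffiti({ graffitiFile, definitionsEntry, vcFlag, bnFlag }) {
  for (const candidate of [graffitiFile, definitionsEntry, vcFlag, bnFlag]) {
    if (candidate != null) return candidate;
  }
  return "Lighthouse"; // stand-in for the built-in default graffiti
}

// validator_definitions.yml beats the vc --graffiti flag:
console.log(selectGraffiti({ definitionsEntry: "somethingprofound", vcFlag: "fortytwo" }));
// With no sources configured, the default applies:
console.log(selectGraffiti({}));
```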

Remote Signing with Web3Signer

Web3Signer is a tool by Consensys which allows remote signing. Remote signing is when a Validator Client (VC) outsources the signing of messages to a remote server (e.g., via HTTPS). This means that the VC does not hold the validator private keys.

Warnings

Using a remote signer comes with risks, please read the following two warnings before proceeding:

Remote signing is complex and risky

Remote signing is generally only desirable for enterprise users or users with unique security requirements. Most users will find the separation between the Beacon Node (BN) and VC to be sufficient without introducing a remote signer.

Using a remote signer introduces a new set of security and slashing risks and should only be undertaken by advanced users who fully understand the risks.

Web3Signer is not maintained by Lighthouse

The Web3Signer tool is maintained by Consensys, the same team that maintains Teku. The Lighthouse team (Sigma Prime) does not maintain Web3Signer or make any guarantees about its safety or effectiveness.

Usage

A remote signing validator is added to Lighthouse in much the same way as one that uses a local keystore, via the validator_definitions.yml file or via the POST /lighthouse/validators/web3signer API endpoint.

Here is an example of a validator_definitions.yml file containing one validator which uses a remote signer:

---
- enabled: true
  voting_public_key: "0xa5566f9ec3c6e1fdf362634ebec9ef7aceb0e460e5079714808388e5d48f4ae1e12897fed1bea951c17fa389d511e477"
  type: web3signer
  url: "https://my-remote-signer.com:1234"
  root_certificate_path: /home/paul/my-certificates/my-remote-signer.pem

When using this file, the Lighthouse VC will perform duties for the 0xa5566.. validator and defer to the https://my-remote-signer.com:1234 server to obtain any signatures. It will load a "self-signed" SSL certificate from /home/paul/my-certificates/my-remote-signer.pem (on the filesystem of the VC) to encrypt the communications between the VC and Web3Signer.

The request_timeout_ms key can also be specified. Use this key to override the default timeout with a new timeout in milliseconds. This is the timeout before requests to Web3Signer are considered to be failures. Setting the value too long may create contention and late duties in the VC; setting it too short will result in failed signatures and therefore missed duties.

Database Configuration

Lighthouse uses an efficient "split" database schema, whereby finalized states are stored separately from recent, unfinalized states. We refer to the portion of the database storing finalized states as the freezer or cold DB, and the portion storing recent states as the hot DB.

In both the hot and cold DBs, full BeaconState data structures are only stored periodically, and intermediate states are reconstructed by quickly replaying blocks on top of the nearest state. For example, to fetch a state at slot 7 the database might fetch a full state from slot 0, and replay blocks from slots 1-7 while omitting redundant signature checks and Merkle root calculations. The full states upon which blocks are replayed are referred to as restore points in the case of the freezer DB, and epoch boundary states in the case of the hot DB.
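
As a toy model of this lookup (illustrative only, not Lighthouse's storage code), the nearest stored full state and the number of blocks to replay can be computed as:

```javascript
// A state at `slot` is rebuilt from the nearest stored full state at or
// below it, replaying the blocks in between.
function restorePointFor(slot, sprp) {
  // Slot of the nearest preceding full state in the freezer DB.
  return Math.floor(slot / sprp) * sprp;
}

function blocksToReplay(slot, sprp) {
  return slot - restorePointFor(slot, sprp);
}

// The slot-7 example from the text: full state at slot 0, replay 7 blocks.
console.log(restorePointFor(7, 2048), blocksToReplay(7, 2048)); // 0 7
// With the default SPRP of 2048, slot 5000 starts from slot 4096.
console.log(restorePointFor(5000, 2048), blocksToReplay(5000, 2048)); // 4096 904
```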

The frequency at which the hot database stores full BeaconStates is fixed to one-state-per-epoch in order to keep loads of recent states performant. For the freezer DB, the frequency is configurable via the --slots-per-restore-point CLI flag, which is the topic of the next section.

Freezer DB Space-time Trade-offs

Frequent restore points use more disk space but accelerate the loading of historical states. Conversely, infrequent restore points use much less space, but cause the loading of historical states to slow down dramatically. A lower slots per restore point value (SPRP) corresponds to more frequent restore points, while a higher SPRP corresponds to less frequent. The table below shows some example values.

Use Case                 SPRP   Yearly Disk Usage   Load Historical State
Block explorer/analysis  32     411 GB              96 ms
Default                  2048   6.4 GB              6 s
Validator only           8192   1.6 GB              25 s

As you can see, it's a high-stakes trade-off! The relationships to disk usage and historical state load time are both linear – doubling SPRP halves disk usage and doubles load time. The minimum SPRP is 32, and the maximum is 8192.

The values shown in the table are approximate, calculated using a simple heuristic: each BeaconState consumes around 5MB of disk space, and each block replayed takes around 3ms. The Yearly Disk Usage column shows the approx size of the freezer DB alone (hot DB not included), and the Load Historical State time is the worst-case load time for a state in the last slot of an epoch.
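
The table's figures can be reproduced from this heuristic (a sketch using the constants stated above; real sizes vary):

```javascript
// Approximate freezer-DB cost model: ~5 MB per stored BeaconState,
// ~3 ms per replayed block, 12-second slots.
const SECONDS_PER_SLOT = 12;
const SLOTS_PER_YEAR = (365 * 24 * 3600) / SECONDS_PER_SLOT; // 2,628,000

function yearlyDiskGb(sprp) {
  // One full state stored every `sprp` slots.
  return ((SLOTS_PER_YEAR / sprp) * 5) / 1000;
}

function worstCaseLoadMs(sprp) {
  // Worst case: replay a full restore-point interval of blocks.
  return sprp * 3;
}

for (const sprp of [32, 2048, 8192]) {
  console.log(sprp, yearlyDiskGb(sprp).toFixed(1), "GB,", worstCaseLoadMs(sprp), "ms");
}
// 32 -> ~410.6 GB, 96 ms; 2048 -> ~6.4 GB, ~6 s; 8192 -> ~1.6 GB, ~25 s
```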

To configure your Lighthouse node's database with a non-default SPRP, run your Beacon Node with the --slots-per-restore-point flag:

lighthouse beacon_node --slots-per-restore-point 8192

Glossary

  • Freezer DB: part of the database storing finalized states. States are stored in a sparser format, and usually less frequently than in the hot DB.
  • Cold DB: see Freezer DB.
  • Hot DB: part of the database storing recent states, all blocks, and other runtime data. Full states are stored every epoch.
  • Restore Point: a full BeaconState stored periodically in the freezer DB.
  • Slots Per Restore Point (SPRP): the number of slots between restore points in the freezer DB.
  • Split Slot: the slot at which states are divided between the hot and the cold DBs. All states from slots less than the split slot are in the freezer, while all states with slots greater than or equal to the split slot are in the hot DB.

Advanced Networking

Lighthouse's networking stack has a number of configurable parameters that can be adjusted to handle a variety of network situations. This section outlines some of these configuration parameters and their consequences at the networking level and their general intended use.

Target Peers

The beacon node has a --target-peers CLI parameter. This allows you to instruct the beacon node how many peers it should try to find and maintain. Lighthouse allows an additional 10% of this value for other nodes to connect to it. Every 30 seconds, the excess peers are pruned: Lighthouse removes the worst-performing peers and keeps the best-performing ones.
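
The resulting connection limit can be sketched as follows (the exact rounding is an assumption, not Lighthouse's implementation):

```javascript
// Up to 10% more connections than --target-peers are tolerated before the
// worst-performing peers are pruned.
function maxPeers(targetPeers) {
  return Math.floor(targetPeers * 1.1);
}

// e.g. with a hypothetical target of 50 peers:
console.log(maxPeers(50)); // 55
```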

It may be counter-intuitive, but a very large peer count will likely degrade a beacon node's performance, both in normal operation and during sync.

Having a large peer count means that your node must act as an honest RPC server to all its connected peers. If many of them are syncing, they will often request large numbers of blocks from your node, which must then perform a lot of work reading and responding to these requests. If your node is overloaded with peers and cannot respond in time, other Lighthouse peers will consider it non-performant and disfavour it in their peer stores. Your node will also have to handle and manage the gossip and extra bandwidth that comes with these additional peers. A non-responsive node (due to an overload of connected peers) degrades the network as a whole.

It is a common belief that a higher peer count will improve sync times. Beyond a handful of peers, this is not true. On all networks tested so far, the bottleneck for syncing is not the network bandwidth of downloading blocks; rather, it is the CPU load of processing the blocks themselves. Most of the time the network is idle, waiting for blocks to be processed. Having a very large peer count will not speed up sync.

For these reasons, we recommend that users do not drastically modify the --target-peers count and instead use the recommended default.

NAT Traversal (Port Forwarding)

Lighthouse, by default, uses port 9000 for both TCP and UDP. Lighthouse will still function if it is behind a NAT without any port mappings. Although Lighthouse still functions, we recommend that some mechanism is used to ensure your Lighthouse node is publicly accessible. This will typically improve your peer count, allow the scoring system to find the best/most favourable peers for your node, and overall improve the Eth2 network.

Lighthouse currently supports UPnP. If UPnP is enabled on your router, Lighthouse will automatically establish the port mappings for you (the beacon node will inform you of established routes in this case). If UPnP is not enabled, we recommend you manually set up port mappings to both of Lighthouse's TCP and UDP ports (9000 by default).

ENR Configuration

Lighthouse has a number of CLI parameters for constructing and modifying the local Ethereum Node Record (ENR). Examples are --enr-address, --enr-udp-port, --enr-tcp-port and --disable-enr-auto-update. These settings allow you to construct your initial ENR. Their primary intention is for setting up boot-like nodes and having a contactable ENR on boot. In normal operation of a Lighthouse node, none of these flags need to be set. Setting these flags incorrectly can lead to your node being incorrectly added to the global DHT, which degrades the discovery process for all Eth2 peers.

The ENR of a Lighthouse node is initially set to be non-contactable. The in-built discovery mechanism can determine whether your node is publicly accessible, and if it is, it will update your ENR with the correct public IP address and port (meaning you do not need to set them manually). Lighthouse persists its ENR, so on reboot it will reload the settings it discovered previously.

Modifying the ENR settings can degrade the discovery of your node, making it harder for peers to find you or potentially making it harder for other peers to find each other. We recommend not touching these settings unless you have a more advanced use case.

Running a Slasher

Lighthouse includes a slasher for identifying slashable offences committed by other validators and including proof of those offences in blocks.

Running a slasher is a good way to contribute to the health of the network, and doing so can earn extra income for your validators. However, it is currently only recommended for expert users because of the immaturity of the slasher UX and the extra resources required.

Minimum System Requirements

  • Quad-core CPU
  • 16 GB RAM
  • 256 GB solid state storage (in addition to space for the beacon node DB)
  • ⚠️ If you are running natively on Windows: LMDB will pre-allocate the entire 256 GB for the slasher database

How to Run

The slasher runs inside the same process as the beacon node, when enabled via the --slasher flag:

lighthouse bn --slasher --debug-level debug

The slasher hooks into Lighthouse's block and attestation processing, and pushes messages into an in-memory queue for regular processing. It will increase the CPU usage of the beacon node because it verifies the signatures of otherwise invalid messages. When a slasher batch update runs, the messages are filtered for relevancy, and all relevant messages are checked for slashings and written to the slasher database.

You should run with debug logs, so that you can see the slasher's internal machinations, and provide logs to the devs should you encounter any bugs.

Configuration

The slasher has several configuration options that control its functioning.

Database Directory

  • Flag: --slasher-dir PATH
  • Argument: path to directory

By default the slasher stores data in the slasher_db directory inside the beacon node's datadir, e.g. ~/.lighthouse/{network}/beacon/slasher_db. You can use this flag to change that storage directory.

History Length

  • Flag: --slasher-history-length EPOCHS
  • Argument: number of epochs
  • Default: 4096 epochs

The slasher stores data for the history-length most recent epochs. By default the history length is set high in order to catch all validator misbehaviour since the last weak subjectivity checkpoint. If you would like to reduce the resource requirements (particularly disk space), set the history length to a lower value, although a lower history length may prevent your slasher from finding some slashings.

Note: See the --slasher-max-db-size section below to ensure that your disk space savings are applied. The history length must be a multiple of the chunk size (default 16), and cannot be changed after initialization.

Max Database Size

  • Flag: --slasher-max-db-size GIGABYTES
  • Argument: maximum size of the database in gigabytes
  • Default: 256 GB

The slasher uses LMDB as its backing store, and LMDB will consume up to the maximum amount of disk space allocated to it. By default the limit is set to accommodate the default history length and around 150K validators, but you can set it lower if running with a reduced history length. The space required scales approximately linearly with validator count and history length, i.e. if you halve either, you can halve the space required.

If you want a better estimate you can use this formula:

360 * V * N + (16 * V * N)/(C * K) + 15000 * N

where

  • V is the validator count
  • N is the history length
  • C is the chunk size
  • K is the validator chunk size

Update Period

  • Flag: --slasher-update-period SECONDS
  • Argument: number of seconds
  • Default: 12 seconds

Set the length of the time interval between each slasher batch update. You can check if your slasher is keeping up with its update period by looking for a log message like this:

DEBG Completed slasher update num_blocks: 1, num_attestations: 279, time_taken: 1821ms, epoch: 20889, service: slasher

If the time_taken is substantially longer than the update period then it indicates your machine is struggling under the load, and you should consider increasing the update period or lowering the resource requirements by tweaking the history length.

Chunk Size and Validator Chunk Size

  • Flags: --slasher-chunk-size EPOCHS, --slasher-validator-chunk-size NUM_VALIDATORS
  • Arguments: number of epochs, number of validators
  • Defaults: 16, 256

Adjusting these parameters should only be done in conjunction with reading in detail about how the slasher works, and/or reading the source code.

Short-Range Example

If you would like to run a lightweight slasher that just checks blocks and attestations within the last day or so, you can use this combination of arguments:

lighthouse bn --slasher --slasher-history-length 256 --slasher-max-db-size 16 --debug-level debug

Stability Warning

The slasher code is still quite new, so we may update the schema of the slasher database in a backwards-incompatible way which will require re-initialization.

Redundancy

There are three places in Lighthouse where redundancy is notable:

  1. βœ… GOOD: Using a redundant Beacon node in lighthouse vc --beacon-nodes
  2. βœ… GOOD: Using a redundant Eth1 node in lighthouse bn --eth1-endpoints
  3. ☠️ BAD: Running redundant lighthouse vc instances with overlapping keypairs.

I mention (3) since it is unsafe and should not be confused with the other two uses of redundancy. Running the same validator keypair in more than one validator client (Lighthouse, or otherwise) will eventually lead to slashing. See Slashing Protection for more information.

From this point on, this document will only refer to the first two items (1, 2). We never recommend that users implement redundancy for validator keypairs.

Redundant Beacon Nodes

The Lighthouse validator client can be configured to use multiple redundant beacon nodes.

The lighthouse vc --beacon-nodes flag allows one or more comma-separated values:

  1. lighthouse vc --beacon-nodes http://localhost:5052
  2. lighthouse vc --beacon-nodes http://localhost:5052,http://192.168.1.1:5052

In the first example, the validator client will attempt to contact http://localhost:5052 to perform duties. If that node is not contactable, not synced or unable to serve the request then the validator client may fail to perform some duty (e.g. produce a block or attest).

However, in the second example, any failure on http://localhost:5052 will be followed by a second attempt using http://192.168.1.1:5052. This achieves redundancy, allowing the validator client to continue to perform its duties as long as at least one of the beacon nodes is available.

There are a few interesting properties about the list of --beacon-nodes:

  • Ordering matters: the validator client prefers a beacon node that is earlier in the list.
  • Synced is preferred: the validator client prefers a synced beacon node over one that is still syncing.
  • Failure is sticky: if a beacon node fails, it will be flagged as offline and won't be retried again for the rest of the slot (12 seconds). This helps limit the impact of time-outs and other lengthy errors.

Note: When supplying multiple beacon nodes the http://localhost:5052 address must be explicitly provided (if it is desired). It will only be used as default if no --beacon-nodes flag is provided at all.
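The selection rules above can be sketched as follows. This is a simplified illustration of the documented behaviour, not Lighthouse's actual implementation (the types and function names are hypothetical):

```rust
// Hypothetical sketch of the fallback rules: prefer the earliest node in
// the list, prefer synced nodes over syncing ones, and skip nodes
// currently flagged as offline for this slot.
#[derive(Clone, Copy, PartialEq)]
enum Status {
    Synced,
    Syncing,
    Offline,
}

struct BeaconNode<'a> {
    url: &'a str,
    status: Status,
}

fn pick_node<'a>(nodes: &'a [BeaconNode<'a>]) -> Option<&'a str> {
    nodes
        .iter()
        .find(|n| n.status == Status::Synced) // earliest synced node wins
        .or_else(|| nodes.iter().find(|n| n.status == Status::Syncing)) // else earliest syncing one
        .map(|n| n.url)
}

fn main() {
    let nodes = [
        BeaconNode { url: "http://localhost:5052", status: Status::Offline },
        BeaconNode { url: "http://192.168.1.1:5052", status: Status::Synced },
    ];
    // The local node is flagged offline, so the backup is selected.
    println!("{:?}", pick_node(&nodes)); // Some("http://192.168.1.1:5052")
}
```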

Configuring a redundant Beacon Node

In our previous example we listed http://192.168.1.1:5052 as a redundant node. Apart from having sufficient resources, the backup node should have the following flags:

  • --staking: starts the HTTP API server and ensures the Eth1 chain is synced.
  • --http-address 0.0.0.0: this allows any external IP address to access the HTTP server (a firewall should be configured to deny unauthorized access to port 5052). This is only required if your backup node is on a different host.
  • --subscribe-all-subnets: ensures that the beacon node subscribes to all subnets, not just on-demand requests from validators.
  • --import-all-attestations: ensures that the beacon node performs aggregation on all seen attestations.

Subsequently, one could use the following command to provide a backup beacon node:

lighthouse bn \
  --staking \
  --http-address 0.0.0.0 \
  --subscribe-all-subnets \
  --import-all-attestations

Resource usage of redundant Beacon Nodes

The --subscribe-all-subnets and --import-all-attestations flags typically cause a significant increase in resource consumption. A doubling in CPU utilization and RAM consumption is expected.

The increase in resource consumption is due to the fact that the beacon node is now processing, validating, aggregating and forwarding all attestations, whereas previously it was likely only doing a fraction of this work. Without these flags, subscription to attestation subnets and aggregation of attestations is only performed for validators which explicitly request subscriptions.

There are 64 subnets and each validator will result in a subscription to at least one subnet. So, using the two aforementioned flags will result in resource consumption akin to running 64+ validators.

Redundant Eth1 nodes

Compared to redundancy in beacon nodes (see above), using redundant Eth1 nodes is very straightforward:

  1. lighthouse bn --eth1-endpoints http://localhost:8545
  2. lighthouse bn --eth1-endpoints http://localhost:8545,http://192.168.0.1:8545

In the case of (1), any failure on http://localhost:8545 will result in a failure to update the Eth1 cache in the beacon node. Consistent failure over a period of hours may result in a failure in block production.

However, in the case of (2), the http://192.168.0.1:8545 Eth1 endpoint will be tried each time the first fails. Eth1 endpoints will be tried from first to last in the list, until a successful response is obtained.

There is no need for special configuration on the Eth1 endpoint; all endpoints can (and probably should) be configured identically.

Note: When supplying multiple endpoints the http://localhost:8545 address must be explicitly provided (if it is desired). It will only be used as default if no --eth1-endpoints flag is provided at all.

Pre-Releases

From time to time, Lighthouse pre-releases will be published on the sigp/lighthouse repository. These releases have passed the usual automated testing, however the developers would like to see them running "in the wild" in a variety of configurations before declaring them an official, stable release. Pre-releases are also used by developers to get feedback from users regarding the ergonomics of new features or changes.

GitHub will clearly show such releases as a "Pre-release" and they will not show up on sigp/lighthouse/releases/latest. However, pre-releases will show up on the sigp/lighthouse/releases page, so please pay attention to avoid the pre-releases when you're looking for stable Lighthouse.

Examples

v1.4.0-rc.0 has rc (release candidate) in the version string and is therefore a pre-release. This release is not stable and is not intended for critical tasks on mainnet (e.g., staking).

However, v1.4.0 is considered stable since it is not marked as a pre-release and does not contain rc in the version string. This release is intended for use on mainnet.

When to use a pre-release

Users may wish to try a pre-release for the following reasons:

  • To preview new features before they are officially released.
  • To help detect bugs and regressions before they reach production.
  • To provide feedback on annoyances before they make it into a release and become harder to change or revert.

When not to use a pre-release

It is not recommended to use pre-releases for any critical tasks on mainnet (e.g., staking). To test critical features, try one of the testnets (e.g., Prater).

Contributing to Lighthouse

Chat Badge

Lighthouse welcomes contributions. If you are interested in contributing to the Ethereum ecosystem, and you want to learn Rust, Lighthouse is a great project to work on.

To start contributing,

  1. Read our how to contribute document.
  2. Set up a development environment.
  3. Browse through the open issues (tip: look for the good first issue tag).
  4. Comment on an issue before starting work.
  5. Share your work via a pull-request.

If you have questions, please reach out via Discord.

Branches

Lighthouse maintains two permanent branches:

  • stable: Always points to the latest stable release.
    • This is ideal for most users.
  • unstable: Used for development, contains the latest PRs.
    • Developers should base their PRs on this branch.

Ethereum 2.0

Lighthouse is an implementation of the Ethereum 2.0 specification, as defined in the ethereum/eth2.0-specs repository.

We recommend reading Danny Ryan's (incomplete) Phase 0 for Humans before diving into the canonical spec.

Rust

Lighthouse adheres to Rust code conventions as outlined in the Rust Styleguide.

Please use clippy and rustfmt to detect common mistakes and inconsistent code formatting:

$ cargo clippy --all
$ cargo fmt --all -- --check

Panics

Generally, panics should be avoided at all costs. Lighthouse operates in an adversarial environment (the Internet) and it's a severe vulnerability if people on the Internet can cause Lighthouse to crash via a panic.

Always prefer returning a Result or Option over causing a panic. For example, prefer array.get(1)? over array[1].

If you know there won't be a panic but can't express that to the compiler, use .expect("Helpful message") instead of .unwrap(). Always provide detailed reasoning in a nearby comment when making assumptions about panics.
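As a quick illustration of the guidance above (the vector contents and `expect` message are just examples):

```rust
fn main() {
    let scores = vec![10u8];

    // Fallible access: an out-of-bounds index yields None instead of panicking.
    assert!(scores.get(1).is_none()); // `scores[1]` would panic here

    // When a panic is truly impossible, say why with `expect` rather than `unwrap`.
    let first = scores
        .get(0)
        .expect("vec is non-empty: one element was pushed above");
    assert_eq!(*first, 10);
}
```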

TODOs

All TODO statements should be accompanied by a GitHub issue.


#![allow(unused)]
fn main() {
pub fn my_function(&mut self, _something: &[u8]) -> Result<String, Error> {
  // TODO: something_here
  // https://github.com/sigp/lighthouse/issues/XX
}
}

Comments

General Comments

  • Prefer line (//) comments to block comments (/* ... */)
  • Comments can appear on the line prior to the item, or after the item on the same line following a space.

#![allow(unused)]
fn main() {
// Comment for this struct
struct Lighthouse {}
fn make_blockchain() {} // A comment on the same line after a space
}

Doc Comments

  • /// is used to generate doc comments.
  • The comments should come before attributes.

#![allow(unused)]
fn main() {
/// Stores the core configuration for this Lighthouse instance.
/// This struct is general, other components may implement more
/// specialized config structs.
#[derive(Clone)]
pub struct LighthouseConfig {
    pub data_dir: PathBuf,
    pub p2p_listen_port: u16,
}
}

Rust Resources

Rust is a powerful, low-level programming language that provides the freedom and performance to create ambitious projects. The Rust Book provides insight into the Rust language and some of the coding style to follow (as well as acting as a great introduction and tutorial for the language).

Rust has a steep learning curve, but there are many resources to help.

Development Environment

Most Lighthouse developers work on Linux or MacOS, however Windows should still be suitable.

First, follow the Installation Guide to install Lighthouse. This will install Lighthouse to your PATH, which is not particularly useful for development but still a good way to ensure you have the base dependencies.

The only additional requirement for developers is ganache-cli. This is used to simulate the Eth1 chain during tests. You'll get failures during tests if you don't have ganache-cli available on your PATH.

Testing

As with most other Rust projects, Lighthouse uses cargo test for unit and integration tests. For example, to test the ssz crate run:

cd consensus/ssz
cargo test

We also wrap some of these commands and expose them via the Makefile in the project root for the benefit of CI/CD. We list some of these commands below so you can run them locally and avoid CI failures:

  • $ make cargo-fmt: (fast) runs a Rust code linter.
  • $ make test: (medium) runs unit tests across the whole project.
  • $ make test-ef: (medium) runs the Ethereum Foundation test vectors.
  • $ make test-full: (slow) runs the full test suite (including all previous commands). This is approximately everything that is required to pass CI.

The Lighthouse test suite is quite extensive; running the whole suite may take 30+ minutes.

Ethereum 2.0 Spec Tests

The ethereum/eth2.0-spec-tests repository contains a large set of tests that verify Lighthouse behaviour against the Ethereum Foundation specifications.

These tests are quite large (hundreds of MB) so they're only downloaded if you run $ make test-ef (or anything that runs it). You may want to avoid downloading these tests if you're on a slow or metered Internet connection. CI will require them to pass, though.

Local Testnets

During development and testing it can be useful to start a small, local testnet.

The scripts/local_testnet/ directory contains several scripts and a README that should make this process easy.

Frequently Asked Questions

Why does it take so long for a validator to be activated?

After validators create their Eth1 deposit transaction there are two waiting periods before they can start producing blocks and attestations:

  1. Waiting for the beacon chain to recognise the Eth1 block containing the deposit (generally 4 to 7.4 hours).
  2. Waiting in the queue for validator activation (generally 6.4 minutes for every 4 validators in the queue).

Detailed answers below:

1. Waiting for the beacon chain to detect the Eth1 deposit

Since the beacon chain uses Eth1 for validator on-boarding, beacon chain validators must listen to event logs from the deposit contract. Since the latest blocks of the Eth1 chain are vulnerable to re-orgs due to minor network partitions, beacon nodes follow the Eth1 chain at a distance of 1,024 blocks (~4 hours) (see ETH1_FOLLOW_DISTANCE). This follow distance protects the beacon chain from on-boarding validators that are likely to be removed due to an Eth1 re-org.

Now we know there's a ~4 hour delay before the beacon nodes even consider an Eth1 block. Once they are considering these blocks, there's a voting period where beacon validators vote on which Eth1 block to include in the beacon chain. This period is defined as 32 epochs (~3.4 hours, see ETH1_VOTING_PERIOD). During this voting period, each beacon block producer includes an Eth1Data in their block which counts as a vote towards what that validator considers to be the head of the Eth1 chain at the start of the voting period (with respect to ETH1_FOLLOW_DISTANCE, of course). You can see the exact voting logic here.

These two delays combined represent the time between an Eth1 deposit being included in an Eth1 data vote and that validator appearing in the beacon chain. The ETH1_FOLLOW_DISTANCE delay causes a minimum delay of ~4 hours, and ETH1_VOTING_PERIOD means that if a validator deposit happens just before the start of a new voting period then they might not notice this delay at all. However, if the validator deposit happens just after the start of the new voting period, the validator might have to wait ~3.4 hours for the next voting period. In times of very severe network issues, the network may even fail to vote in new Eth1 blocks, stopping all new validator deposits!
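For intuition, here is roughly where the 4 to 7.4 hour range quoted above comes from, assuming ~14 second Eth1 block times and 12 second beacon slots (these timing figures are approximations for illustration, not spec constants):

```rust
// Approximate delay bounds for a deposit appearing on the beacon chain.
fn follow_distance_hours() -> f64 {
    1024.0 * 14.0 / 3600.0 // ETH1_FOLLOW_DISTANCE blocks at ~14 s each
}

fn voting_period_hours() -> f64 {
    32.0 * 32.0 * 12.0 / 3600.0 // 32 epochs of 32 slots at 12 s each
}

fn main() {
    println!("minimum: ~{:.1} h", follow_distance_hours()); // ~4.0 h
    println!(
        "maximum: ~{:.1} h",
        follow_distance_hours() + voting_period_hours()
    ); // ~7.4 h
}
```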

Note: you can see the list of validators included in the beacon chain using our REST API: /beacon/validators/all

2. Waiting for a validator to be activated

If a validator has provided an invalid public key or signature, they will never be activated or even show up in /beacon/validators/all. They will simply be forgotten by the beacon chain! But, if those parameters were correct, once the Eth1 delays have elapsed and the validator appears in the beacon chain, there's another delay before the validator becomes "active" (canonical definition here) and can start producing blocks and attestations.

Firstly, the validator won't become active until their beacon chain balance is equal to or greater than MAX_EFFECTIVE_BALANCE (32 ETH on mainnet, usually 3.2 ETH on testnets). Once this balance is reached, the validator must wait until the start of the next epoch (up to 6.4 minutes) for the process_registry_updates routine to run. This routine activates validators with respect to a churn limit; it will only allow the number of validators to increase (churn) by a certain amount. Up until there are about 330,000 validators this churn limit is set to 4 and it starts to very slowly increase as the number of validators increases from there.

If a new validator isn't within the churn limit from the front of the queue, they will need to wait another epoch (6.4 minutes) for their next chance. This repeats until the queue is cleared.
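A rough estimate of the queue wait follows from the numbers above, assuming the fixed churn limit of 4 (the function is hypothetical, for illustration only):

```rust
// Back-of-the-envelope wait time for the activation queue, assuming a
// churn limit of 4 validators per epoch (valid below ~330K validators).
fn activation_wait_minutes(queue_position: u64, churn_limit: u64) -> f64 {
    (queue_position / churn_limit) as f64 * 6.4 // one epoch is 6.4 minutes
}

fn main() {
    // 1000 validators ahead of you in the queue: ~1600 minutes (~26.7 hours).
    println!("{:.0} min", activation_wait_minutes(1000, 4));
}
```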

Once a validator has been activated, there's no more waiting! It's time to produce blocks and attestations!

Do I need to set up any port mappings?

It is not strictly required to open any ports for Lighthouse to connect and participate in the network. Lighthouse should work out-of-the-box. However, if your node is not publicly accessible (you are behind a NAT or router that has not been configured to allow access to Lighthouse ports) you will only be able to reach peers who have a setup that is publicly accessible.

There are a number of undesired consequences of not making your Lighthouse node publicly accessible.

Firstly, it will be more difficult for your node to find peers, as your node will not be added to the global DHT and other peers will not be able to initiate connections with you. Secondly, the peers in your peer store are more likely to end connections with you and be less performant, because publicly accessible peers (those with correct port forwarding) are in higher demand: other nodes behind NATs will also be looking for them, so they are likely to be overloaded with subscribing peers. Finally, not making your node publicly accessible degrades the overall network, making it more difficult for other peers to join and degrading the overall connectivity of the global network.

For these reasons, we recommend that you make your node publicly accessible.

Lighthouse supports UPnP. If you are behind a NAT with a router that supports UPnP you can simply ensure UPnP is enabled (Lighthouse will inform you in its initial logs if a route has been established). You can also manually set up port mappings in your router to your local Lighthouse instance. By default, Lighthouse uses port 9000 for both TCP and UDP. Opening both these ports will make your Lighthouse node maximally contactable.

I have a low peer count and it is not increasing

If you cannot find ANY peers at all, it is likely that you have incorrect testnet configuration settings. Ensure that the network you wish to connect to is correct (the beacon node outputs the network it is connecting to in the initial boot-up log lines). On top of this, ensure that you are not using the same datadir as a previous network, e.g. if you have been running the pyrmont testnet and are now trying to join a new testnet while using the same datadir (the datadir is also printed out in the beacon node's logs on boot-up).

If you find yourself with a low peer count that is not reaching the target you expect, try setting up the correct port mappings as described in the "Do I need to set up any port mappings?" question above.

What should I do if I lose my slashing protection database?

See here.

How do I update lighthouse?

If you are updating to new release binaries, it will be the same process as described here.

If you are updating by rebuilding from source, see here.

If you are running the docker image provided by Sigma Prime on Dockerhub, you can update to specific versions, for example:

$ docker pull sigp/lighthouse:v1.0.0

If you are building a docker image, the process will be similar to the one described here. You will just also need to make sure the code you have checked out is up to date.

I can't compile lighthouse

See here.

What is "Syncing eth1 block cache"

Nov 30 21:04:28.268 WARN Syncing eth1 block cache   est_blocks_remaining: initializing deposits, service: slot_notifier

This log indicates that your beacon node is downloading blocks and deposits from your eth1 node. When the est_blocks_remaining is initializing deposits, your node is downloading deposit logs. It may stay in this stage for several minutes. Once the deposit logs are finished downloading, the est_blocks_remaining value will start decreasing.

It is perfectly normal to see this log when starting a node for the first time or after being off for more than several minutes.

If this log continues appearing sporadically during operation, there may be an issue with your eth1 endpoint.

Can I use redundancy in my staking setup?

You should never use duplicate/redundant validator keypairs or validator clients (i.e., don't duplicate your JSON keystores and don't run lighthouse vc twice). This will lead to slashing.

However, there are some components which can be configured with redundancy. See the Redundancy guide for more information.

How can I monitor my validators?

Apart from using block explorers, you may use the "Validator Monitor" built into Lighthouse which provides logging and Prometheus/Grafana metrics for individual validators. See Validator Monitoring for more information.