Lighthouse Book

Documentation for Lighthouse users and developers.

Lighthouse is an Ethereum 2.0 client that connects to other Ethereum 2.0 clients to form a resilient and decentralized proof-of-stake blockchain.

We implement the specification as defined in the ethereum/eth2.0-specs repository.

Topics

You may read this book from start to finish, or jump to some of these topics:

Prospective contributors can read the Contributing section to understand how we develop and test Lighthouse.

About this Book

This book is open source, contribute at github.com/sigp/lighthouse/book.

The Lighthouse CI/CD system maintains a hosted version of the master branch at lighthouse-book.sigmaprime.io.

Become a Testnet Validator

Joining an Eth2 testnet is a great way to get familiar with staking in Phase 0. All users should experiment with a testnet prior to staking mainnet ETH.

Supported Testnets

Lighthouse supports four testnets:

When using Lighthouse, the --testnet flag selects a testnet. E.g.,

  • lighthouse (no flag): Medalla.
  • lighthouse --testnet medalla: Medalla.
  • lighthouse --testnet zinken: Zinken.

Using the correct --testnet flag is very important; using the wrong flag can result in penalties, slashings or lost deposits. As a rule of thumb, always provide a --testnet flag instead of relying on the default.

Note: In these documents we use --testnet MY_TESTNET for demonstration. You must replace MY_TESTNET with a valid testnet name.

Joining a Testnet

There are six primary steps to become a testnet validator:

  1. Create validator keys and submit deposits.
  2. Start an Eth1 client.
  3. Install Lighthouse.
  4. Import the validator keys into Lighthouse.
  5. Start Lighthouse.
  6. Leave Lighthouse running.

Each of these primary steps has several intermediate steps, so we recommend setting aside one or two hours for this process.

Step 1. Create validator keys

The Ethereum Foundation provides an "Eth2 launch pad" for each active testnet:

Please follow the steps on the appropriate launch pad site to generate validator keys and submit deposits. Make sure you select "Lighthouse" as your client.

Move to the next step once you have completed the steps on the launch pad, including generating keys via the Python CLI and submitting gETH/ETH deposits.

Step 2. Start an Eth1 client

Since Eth2 relies upon the Eth1 chain for validator on-boarding, all Eth2 validators must have a connection to an Eth1 node.

We provide instructions for using Geth (the Eth1 client that, by chance, we ended up testing with), but you could use any client that implements the JSON-RPC API via HTTP. A fast-synced node should be sufficient.

Installing Geth

If you're using a Mac, follow the instructions listed here to install geth. Otherwise see here.

Starting Geth

Once you have geth installed, use this command to start your Eth1 node:

 geth --goerli --http

Step 3. Install Lighthouse

Note: Lighthouse only supports Windows via WSL.

Follow the Lighthouse Installation Instructions to install Lighthouse from one of the available options.

Proceed to the next step once you've successfully installed Lighthouse and viewed its --version info.

Note: Some of the instructions vary when using Docker; ensure you follow the appropriate sections later in this guide.

Step 4. Import validator keys to Lighthouse

When Lighthouse is installed, follow the Importing from the Ethereum 2.0 Launch pad instructions so the validator client can perform your validator duties.

Proceed to the next step once you've successfully imported all validators.

Step 5. Start Lighthouse

For staking, one needs to run two Lighthouse processes:

  • lighthouse bn: the "beacon node" which connects to the P2P network and verifies blocks.
  • lighthouse vc: the "validator client" which manages validators, using data obtained from the beacon node via an HTTP API.

Starting these processes is different for binary and docker users:

Binary users

Those using the pre- or custom-built binaries can start the two processes with:

lighthouse --testnet MY_TESTNET bn --staking
lighthouse --testnet MY_TESTNET vc

Note: ~/.lighthouse/{testnet} is the default directory which contains the keys and databases. To specify a custom directory, see the Custom directories section.

Docker users

Those using Docker images can start the processes with:

$ docker run \
	--network host \
	-v $HOME/.lighthouse:/root/.lighthouse sigp/lighthouse \
	lighthouse --testnet MY_TESTNET bn --staking --http-address 0.0.0.0
$ docker run \
	--network host \
	-v $HOME/.lighthouse:/root/.lighthouse \
	sigp/lighthouse \
	lighthouse --testnet MY_TESTNET vc

Step 6. Leave Lighthouse running

Leave your beacon node and validator client running and you'll see logs as the beacon node stays synced with the network while the validator client produces blocks and attestations.

It will take 4-8+ hours for the beacon chain to process and activate your validator. However, you'll know you're active when the validator client starts successfully publishing attestations each epoch:

Dec 03 08:49:40.053 INFO Successfully published attestation      slot: 98, committee_index: 0, head_block: 0xa208…7fd5,

Although you'll produce an attestation each epoch, it's less common to produce a block. Watch for the block production logs too:

Dec 03 08:49:36.225 INFO Successfully published block            slot: 98, attestations: 2, deposits: 0, service: block

If you see any ERRO (error) logs, please reach out on Discord or create an issue.

Happy staking!

Custom directories

Users can override the default Lighthouse data directory (~/.lighthouse/{testnet}) using the --datadir flag. The custom data directory mirrors the structure of any testnet-specific default directory (e.g. ~/.lighthouse/medalla).

Note: Users should specify different custom directories for different testnets.

Below is an example flow for importing validator keys, running a beacon node and validator client using a custom data directory /var/lib/my-custom-dir for the medalla testnet.

lighthouse --testnet medalla --datadir /var/lib/my-custom-dir account validator import --directory <PATH-TO-LAUNCHPAD-KEYS-DIRECTORY>
lighthouse --testnet medalla --datadir /var/lib/my-custom-dir bn --staking
lighthouse --testnet medalla --datadir /var/lib/my-custom-dir vc

The first step creates a validators directory under /var/lib/my-custom-dir which contains the imported keys and validator_definitions.yml. After that, we simply run the beacon node and validator client with the custom directory path.

📦 Installation

Lighthouse runs on Linux, macOS, and Windows (via WSL only).

There are three core methods to obtain the Lighthouse application:

Additionally, there are two extra guides for specific uses:

Pre-built Binaries

Each Lighthouse release contains several downloadable binaries in the "Assets" section of the release. You can find the releases on GitHub.

Note: binaries are not yet provided for macOS or native Windows.

Platforms

Binaries are supplied for two platforms:

  • x86_64-unknown-linux-gnu: AMD/Intel 64-bit processors (most desktops, laptops, servers)
  • aarch64-unknown-linux-gnu: 64-bit ARM processors (Raspberry Pi 4)

Additionally, a -portable suffix indicates that the portable feature was used:

  • Without portable: uses modern CPU instructions to provide the fastest signature verification times (may cause Illegal instruction error on older CPUs)
  • With portable: approx. 20% slower, but should work on all modern 64-bit processors.
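If you're unsure which variant to download, a rough check of your CPU's feature flags can help. The snippet below (Linux only) looks for the adx flag; the assumption that the non-portable build relies on extensions like ADX is ours, not a guarantee from the release notes:

```shell
# Rough check (Linux only) for the ADX instruction-set extension, one of
# the modern CPU features that non-portable builds are typically compiled
# to use. (The specific flag checked here is an assumption.)
if grep -qw adx /proc/cpuinfo 2>/dev/null; then
  echo "adx present: the non-portable build should run"
else
  echo "adx not reported: prefer the -portable build"
fi
```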

Usage

Each binary is contained in a .tar.gz archive. For this example, let's use the v0.2.13 release and assume the user needs a portable x86_64 binary.

Whilst this example uses v0.2.13, we recommend always using the latest release.

Steps

  1. Go to the Releases page and select the latest release.
  2. Download the lighthouse-${VERSION}-x86_64-unknown-linux-gnu-portable.tar.gz binary.
  3. Extract the archive:
    1. cd Downloads
    2. tar -xvf lighthouse-${VERSION}-x86_64-unknown-linux-gnu-portable.tar.gz
  4. Test the binary with ./lighthouse --version (it should print the version).
  5. (Optional) Move the lighthouse binary to a location in your PATH, so the lighthouse command can be called from anywhere.
    • E.g., cp lighthouse /usr/bin

Docker Guide

This repository has a Dockerfile in the root which builds an image with the lighthouse binary installed. A pre-built image is available on Docker Hub.

Obtaining the Docker image

There are two ways to obtain the Docker image: pull it from Docker Hub or build it from source. Once you have obtained the image via one of these methods, proceed to Using the Docker image.

Docker Hub

Lighthouse maintains the sigp/lighthouse Docker Hub repository which provides an easy way to run Lighthouse without building the image yourself.

Obtain the latest image with:

$ docker pull sigp/lighthouse

Download and test the image with:

$ docker run sigp/lighthouse lighthouse --version

If you can see the latest Lighthouse release version (see example below), then you've successfully installed Lighthouse via Docker.

Example Version Output

Lighthouse vx.x.xx-xxxxxxxxx
BLS Library: xxxx-xxxxxxx

Note: when you're running the Docker Hub image you're relying upon a pre-built binary instead of building from source.

Note: due to the Docker Hub image being compiled to work on arbitrary machines, it isn't as highly optimized as an image built from source. We're working to improve this, but for now if you want the absolute best performance, please build the image yourself.

Building the Docker Image

To build the image from source, navigate to the root of the repository and run:

$ docker build . -t lighthouse:local

The build will likely take several minutes. Once it's built, test it with:

$ docker run lighthouse:local lighthouse --help

Using the Docker image

You can run a Docker beacon node with the following command:

$ docker run -p 9000:9000 -p 127.0.0.1:5052:5052 -v $HOME/.lighthouse:/root/.lighthouse sigp/lighthouse lighthouse --testnet medalla beacon --http --http-address 0.0.0.0

To join the altona testnet, use --testnet altona instead.

The -p and -v values are described below.

Volumes

Lighthouse uses the /root/.lighthouse directory inside the Docker image to store the configuration, database and validator keys. Users will generally want to create a bind-mount volume to ensure this directory persists between docker run commands.

The following example runs a beacon node with the data directory mapped to the user's home directory:

$ docker run -v $HOME/.lighthouse:/root/.lighthouse sigp/lighthouse lighthouse beacon

Ports

In order to be a good peer and serve other peers, you should expose port 9000. Use the -p flag to do this:

$ docker run -p 9000:9000 sigp/lighthouse lighthouse beacon

If you use the --http flag you may also want to expose the HTTP port with -p 127.0.0.1:5052:5052.

$ docker run -p 9000:9000 -p 127.0.0.1:5052:5052 sigp/lighthouse lighthouse beacon --http --http-address 0.0.0.0

Installation: Build from Source

Lighthouse builds on Linux, macOS, and Windows (via WSL only).

Compilation should be easy. In fact, if you already have Rust installed all you need is:

  • git clone https://github.com/sigp/lighthouse.git
  • cd lighthouse
  • make

If this doesn't work or is not clear enough, see the Detailed Instructions below. If you have further issues, see Troubleshooting. If you'd prefer to use Docker, see the Docker Guide.

Detailed Instructions

  1. Install Rust and Cargo with rustup.
    • Use the stable toolchain (it's the default).
    • Check the Troubleshooting section for additional dependencies (e.g., cmake).
  2. Clone the Lighthouse repository.
    • Run $ git clone https://github.com/sigp/lighthouse.git
    • Change into the newly created directory with $ cd lighthouse
  3. Build Lighthouse with $ make.
  4. Installation was successful if $ lighthouse --help displays the command-line documentation.

First-time compilation may take several minutes. If you experience any failures, please reach out on Discord or create an issue.

Windows Support

Compiling or running Lighthouse natively on Windows is not currently supported. However, Lighthouse can run successfully under the Windows Subsystem for Linux (WSL). If using Ubuntu under WSL, you should install the Ubuntu dependencies listed in the Dependencies (Ubuntu) section.

Troubleshooting

Dependencies

Ubuntu

Several dependencies may be required to compile Lighthouse. The following packages may be required in addition to a base Ubuntu Server installation:

sudo apt install -y git gcc g++ make cmake pkg-config libssl-dev

macOS

You will need cmake and openssl. You can install both via Homebrew:

brew install openssl cmake

Command is not found

Lighthouse will be installed to CARGO_HOME or $HOME/.cargo. This directory needs to be on your PATH before you can run $ lighthouse.

See "Configuring the PATH environment variable" (rust-lang.org) for more information.
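As a sketch of the fix, you can prepend cargo's default install location to your PATH for the current session (append the same line to ~/.bashrc to make it permanent):

```shell
# Prepend cargo's default bin directory (used by rustup) to PATH for this
# shell session; add this same line to ~/.bashrc to persist it.
export PATH="$HOME/.cargo/bin:$PATH"
# Confirm the directory is now on PATH:
echo "$PATH" | tr ':' '\n' | grep '\.cargo/bin'
```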

Compilation error

Make sure you are running the latest version of Rust. If you have installed Rust using rustup, simply type $ rustup update.

OpenSSL

If you get a build failure relating to OpenSSL, try installing openssl-dev or libssl-dev using your OS package manager.

  • Ubuntu: $ apt-get install libssl-dev.
  • Amazon Linux: $ yum install openssl-devel.

Raspberry Pi 4 Installation

Tested on:

  • Raspberry Pi 4 Model B (4GB)
  • Ubuntu 20.04 LTS (GNU/Linux 5.4.0-1011-raspi aarch64)

Note: Lighthouse supports cross-compiling to target a Raspberry Pi (aarch64). Compiling on a faster machine (e.g., an x86_64 desktop) may be convenient.

1. Install Ubuntu

Follow the Ubuntu Raspberry Pi installation instructions.

A 64-bit version is required and the latest version is recommended (Ubuntu 20.04 LTS was the latest at the time of writing).

A graphical environment is not required in order to use Lighthouse. Only the terminal and an Internet connection are necessary.

2. Install Packages

Install the Ubuntu Dependencies. (I.e., run the sudo apt install ... command at that link).

Tips:

  • If there are difficulties, try updating the package manager with sudo apt update.

3. Install Rust

Install Rust as per rustup. (I.e., run the curl ... command).

Tips:

  • When prompted, enter 1 for the default installation.
  • Try running cargo version after Rust installation completes. If it cannot be found, run source $HOME/.cargo/env.
  • It's generally advised to append source $HOME/.cargo/env to ~/.bashrc.

4. Install Lighthouse

git clone https://github.com/sigp/lighthouse.git
cd lighthouse
make

Compiling Lighthouse can take up to an hour. The safety guarantees provided by the Rust language unfortunately result in a lengthy compilation time on a low-spec CPU like a Raspberry Pi.

Once installation has finished, confirm Lighthouse is installed by viewing the usage instructions with lighthouse --help.

Cross-compiling

Lighthouse supports cross-compiling, allowing users to run a binary on one platform (e.g., aarch64) that was compiled on another platform (e.g., x86_64).

Instructions

Cross-compiling requires Docker, rustembedded/cross and for the current user to be in the docker group.

The binaries will be created in the target/ directory of the Lighthouse project.

Targets

The Makefile in the project contains four targets for cross-compiling:

  • build-x86_64: builds an optimized version for x86_64 processors (suitable for most users).
  • build-x86_64-portable: builds a version for x86_64 processors which avoids using some modern CPU instructions that might cause an "illegal instruction" error on older CPUs.
  • build-aarch64: builds an optimized version for 64-bit ARM processors (suitable for Raspberry Pi 4).
  • build-aarch64-portable: builds a version for 64-bit ARM processors which avoids using some modern CPU instructions that might cause an "illegal instruction" error on older CPUs.

Example

cd lighthouse
make build-aarch64

The lighthouse binary will be compiled inside a Docker container and placed in lighthouse/target/aarch64-unknown-linux-gnu/release.

Key Management

Lighthouse uses a hierarchical key management system for producing validator keys. It is hierarchical because each validator key can be derived from a master key, making the validator keys children of the master key. This scheme means that a single 24-word mnemonic can be used to back up all of your validator keys without providing any observable link between them (i.e., it is privacy-retaining). Hierarchical key derivation schemes are commonplace in cryptocurrencies; they are already used by most hardware and software wallets to secure BTC, ETH and many other coins.

Key Concepts

We define some terms in the context of validator key management:

  • Mnemonic: a string of 24 words that is designed to be easy to write down and remember. E.g., "radar fly lottery mirror fat icon bachelor sadness type exhaust mule six beef arrest you spirit clog mango snap fox citizen already bird erase".
    • Defined in BIP-39
  • Wallet: a wallet is a JSON file which stores an encrypted version of a mnemonic.
    • Defined in EIP-2386
  • Keystore: typically created by a wallet, it contains a single encrypted BLS keypair.
    • Defined in EIP-2335.
  • Voting Keypair: a BLS public and private keypair which is used for signing blocks, attestations and other messages at regular intervals, whilst staking in Phase 0.
  • Withdrawal Keypair: a BLS public and private keypair which will be required after Phase 0 to manage ETH once a validator has exited.

Overview

The key management system in Lighthouse involves moving down the above list of items, starting at one easy-to-backup mnemonic and ending with multiple keypairs. Creating a single validator looks like this:

  1. Create a wallet and record the mnemonic:
    • lighthouse --testnet medalla account wallet create --name wally --password-file wally.pass
  2. Create the voting and withdrawal keystores for one validator:
    • lighthouse --testnet medalla account validator create --wallet-name wally --wallet-password wally.pass --count 1

In step (1), we created a wallet in ~/.lighthouse/{testnet}/wallets with the name wally. We encrypted this using a pre-defined password in the wally.pass file. Then, in step (2), we created one new validator in the ~/.lighthouse/{testnet}/validators directory using wally (unlocking it with wally.pass) and stored the password to the validator's voting key in ~/.lighthouse/{testnet}/secrets.

Thanks to the hierarchical key derivation scheme, we can delete all of the aforementioned directories and then regenerate them as long as we remember the 24-word mnemonic (we don't recommend doing this, though).

Creating another validator is easy: it's just a matter of repeating step (2). The wallet keeps track of how many validators it has generated and ensures that a new validator is generated each time.
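The bookkeeping can be pictured as a simple persistent counter: each create consumes the current index and then increments it. The file-based sketch below is purely illustrative and is not Lighthouse's actual storage format:

```shell
# Toy illustration of a wallet-style index counter: each call consumes the
# current index and increments it, so no two validators share an index.
# (Illustrative only; not Lighthouse's on-disk format.)
COUNTER=$(mktemp)
echo 0 > "$COUNTER"

create_validator() {
  i=$(cat "$COUNTER")
  echo "deriving validator at index $i"
  echo $((i + 1)) > "$COUNTER"
}

create_validator   # index 0
create_validator   # index 1
```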

Detail

Directory Structure

There are three important directories in Lighthouse validator key management:

  • wallets/: contains encrypted wallets which are used for hierarchical key derivation.
    • Defaults to ~/.lighthouse/{testnet}/wallets
  • validators/: contains a directory for each validator containing encrypted keystores and other validator-specific data.
    • Defaults to ~/.lighthouse/{testnet}/validators
  • secrets/: since the validator signing keys are "hot", the validator process needs access to the passwords to decrypt the keystores in the validators dir. These passwords are stored here.
    • Defaults to ~/.lighthouse/{testnet}/secrets

where testnet is the name of the testnet passed in the --testnet parameter (default is medalla).

When the validator client boots, it searches the validators/ directory for sub-directories containing voting keystores. When it discovers a keystore, it searches the secrets/ directory for a file with the same name as the 0x-prefixed hex representation of the keystore's public key. If it finds this file, it attempts to decrypt the keystore using the contents of this file as the password. If it fails, it logs an error and moves on to the next keystore.
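The matching scheme can be illustrated with a mock directory layout (the public key below is the example key from the validator_definitions.yml section later in this book; the files here are empty placeholders, not real keystores):

```shell
# Mock layout showing how the validator client pairs a keystore with its
# password file: secrets/<pubkey> holds the password for
# validators/<pubkey>/voting-keystore.json. (Placeholder files only.)
PUBKEY=0x87a580d31d7bc69069b55f5a01995a610dd391a26dc9e36e81057a17211983a79266800ab8531f21f1083d7d84085007
DATADIR=$(mktemp -d)
mkdir -p "$DATADIR/validators/$PUBKEY" "$DATADIR/secrets"
touch "$DATADIR/validators/$PUBKEY/voting-keystore.json"
echo 'example-password' > "$DATADIR/secrets/$PUBKEY"
# The client would decrypt the keystore with the contents of the
# matching secrets file:
cat "$DATADIR/secrets/$PUBKEY"
```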

The validators/ and secrets/ directories are kept separate to allow for ease-of-backup; you can safely backup validators/ without worrying about leaking private key data.

Withdrawal Keypairs

In Eth2 Phase 0, withdrawal keypairs do not serve any immediate purpose. However, they become very important after Phase 0: they will provide the ultimate control of the ETH of withdrawn validators.

This presents an interesting key management scenario: withdrawal keys are very important, but not right now. Considering this, Lighthouse has adopted a strategy where we do not save withdrawal keypairs to disk by default (it is opt-in). Instead, we assert that since the withdrawal keys can be regenerated from a mnemonic, having them lying around on the file-system only presents risk and complexity.

At the time of writing, we do not expose the commands to regenerate keys from mnemonics. However, key regeneration is tested in the public Lighthouse repository and will be exposed prior to mainnet launch.

In summary, withdrawal keypairs can be trivially regenerated from the mnemonic via EIP-2333, so they are not saved to disk like the voting keypairs.

Create a wallet

A wallet allows for generating practically unlimited validators from an easy-to-remember 24-word string (a mnemonic). As long as that mnemonic is backed up, all validator keys can be trivially re-generated.

The 24-word string is randomly generated during wallet creation and printed out to the terminal. It's important to make one or more backups of the mnemonic to ensure your ETH is not lost in the case of data loss. It is very important to keep your mnemonic private as it represents the ultimate control of your ETH.

Whilst the wallet stores the mnemonic, it does not store it in plain-text: the mnemonic is encrypted with a password. It is the responsibility of the user to define a strong password. The password is only required for interacting with the wallet; it is not required for recovering keys from a mnemonic.

Usage

To create a wallet, use the lighthouse account wallet command:

lighthouse account wallet create --help

Creates a new HD (hierarchical-deterministic) EIP-2386 wallet.

USAGE:
    lighthouse account_manager wallet create [OPTIONS] --name <WALLET_NAME> --password-file <WALLET_PASSWORD_PATH>

FLAGS:
    -h, --help       Prints help information
    -V, --version    Prints version information

OPTIONS:
    -d, --datadir <DIR>                             Data directory for lighthouse keys and databases.
        --mnemonic-output-path <MNEMONIC_PATH>
            If present, the mnemonic will be saved to this file. DO NOT SHARE THE MNEMONIC.

        --name <WALLET_NAME>
            The wallet will be created with this name. It is not allowed to create two wallets with the same name for
            the same --base-dir.
        --password-file <WALLET_PASSWORD_PATH>
            A path to a file containing the password which will unlock the wallet. If the file does not exist, a random
            password will be generated and saved at that path. To avoid confusion, if the file does not already exist it
            must include a '.pass' suffix.
    -s, --spec <TITLE>
            Specifies the default eth2 spec type. [default: mainnet]  [possible values: mainnet, minimal, interop]

    -t, --testnet-dir <DIR>
            Path to directory containing eth2_testnet specs. Defaults to a hard-coded Lighthouse testnet. Only effective
            if there is no existing database.
        --type <WALLET_TYPE>
            The type of wallet to create. Only HD (hierarchical-deterministic) wallets are supported presently..
            [default: hd]  [possible values: hd]

Example

Creates a new wallet named wally and saves it in ~/.lighthouse/medalla/wallets with a randomly generated password saved to ./wally.pass:

lighthouse --testnet medalla account wallet create --name wally --password-file wally.pass

Notes:

  • The password is not wally.pass, it is the contents of the wally.pass file.
  • If wally.pass already exists, the wallet password will be set to the contents of that file.

Create a validator

Validators are fundamentally represented by a BLS keypair. In Lighthouse, we use a wallet to generate these keypairs. Once a wallet exists, the lighthouse account validator create command is used to generate the BLS keypair and all necessary information to submit a validator deposit and have that validator operate in the lighthouse validator_client.

Usage

To create a validator from a wallet, use the lighthouse account validator create command:

lighthouse account validator create --help

Creates new validators from an existing EIP-2386 wallet using the EIP-2333 HD key derivation scheme.

USAGE:
    lighthouse account_manager validator create [FLAGS] [OPTIONS] --wallet-name <WALLET_NAME> --wallet-password <WALLET_PASSWORD_PATH>

FLAGS:
    -h, --help                         Prints help information
        --store-withdrawal-keystore    If present, the withdrawal keystore will be stored alongside the voting keypair.
                                       It is generally recommended to *not* store the withdrawal key and instead
                                       generate them from the wallet seed when required.
    -V, --version                      Prints version information

OPTIONS:
        --at-most <AT_MOST_VALIDATORS>
            Observe the number of validators in --validator-dir, only creating enough to reach the given count. Never
            deletes an existing validator.
        --count <VALIDATOR_COUNT>
            The number of validators to create, regardless of how many already exist

    -d, --datadir <DIR>                               Data directory for lighthouse keys and databases.
        --debug-level <LEVEL>
            The verbosity level for emitting logs. [default: info]  [possible values: info, debug, trace, warn, error,
            crit]
        --deposit-gwei <DEPOSIT_GWEI>
            The GWEI value of the deposit amount. Defaults to the minimum amount required for an active validator
            (MAX_EFFECTIVE_BALANCE)
        --secrets-dir <SECRETS_DIR>
            The path where the validator keystore passwords will be stored. Defaults to ~/.lighthouse/{testnet}/secrets

    -s, --spec <TITLE>
            Specifies the default eth2 spec type. [default: mainnet]  [possible values: mainnet, minimal, interop]

        --testnet <testnet>
            Name of network lighthouse will connect to [possible values: medalla, altona]

    -t, --testnet-dir <DIR>
            Path to directory containing eth2_testnet specs. Defaults to a hard-coded Lighthouse testnet. Only effective
            if there is no existing database.
        --validator-dir <VALIDATOR_DIRECTORY>
            The path where the validator directories will be created. Defaults to ~/.lighthouse/{testnet}/validators

        --wallet-name <WALLET_NAME>                   Use the wallet identified by this name
        --wallet-password <WALLET_PASSWORD_PATH>
            A path to a file containing the password which will unlock the wallet.

Example

The example assumes that the wally wallet was generated from the wallet example.

lighthouse --testnet medalla account validator create --wallet-name wally --wallet-password wally.pass --count 1

This command will:

  • Derive a single new BLS keypair from wallet wally in ~/.lighthouse/{testnet}/wallets, updating it so that it generates a new key next time.
  • Create a new directory in ~/.lighthouse/{testnet}/validators containing:
    • An encrypted keystore containing the validator's voting keypair.
    • An eth1_deposit_data.rlp file containing deposit data for the default deposit amount (32 ETH for most testnets and mainnet), which can be submitted to the deposit contract for the medalla testnet. Other testnets can be set via the --testnet CLI param.
  • Store the password to the validator's voting keypair in ~/.lighthouse/{testnet}/secrets.

where testnet is the name of the testnet passed in the --testnet parameter (default is medalla).

Key recovery

Generally, validator keystore files are generated alongside a mnemonic. If the keystore and/or the keystore password are lost, this mnemonic can regenerate a new, equivalent keystore with a new password.

There are two ways to recover keys using the lighthouse CLI:

  • lighthouse account validator recover: recover one or more EIP-2335 keystores from a mnemonic. These keys can be used directly in a validator client.
  • lighthouse account wallet recover: recover an EIP-2386 wallet from a mnemonic.

⚠️ Warning

Recovering validator keys from a mnemonic should only be used as a last resort. Key recovery entails significant risks:

  • Exposing your mnemonic to a computer at any time puts it at risk of being compromised. Your mnemonic is not encrypted and is a target for theft.
  • It's entirely possible to regenerate a validator keypair that is already active on some other validator client. Running the same keypair on two different validator clients is very likely to result in slashing.

Recover EIP-2335 validator keystores

A single mnemonic can generate a practically unlimited number of validator keystores using an index. Generally, the first time you generate a keystore you'll use index 0, the next time you'll use index 1, and so on. Using the same index on the same mnemonic always results in the same validator keypair being generated (see EIP-2334 for more detail).
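To make the index scheme concrete, EIP-2334 lays out signing-key derivation paths of the form m/12381/3600/&lt;index&gt;/0/0. The loop below just prints the paths for the first few indices; no keys are derived, and Lighthouse's internal derivation logic is not shown:

```shell
# Print the EIP-2334 derivation path for the first three voting keys.
# (Path layout per EIP-2334; this only prints paths, it derives nothing.)
for i in 0 1 2; do
  echo "validator index $i -> m/12381/3600/$i/0/0"
done
```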

Using the lighthouse account validator recover command you can generate the keystores that correspond to one or more indices in the mnemonic:

  • lighthouse account validator recover: recover only index 0.
  • lighthouse account validator recover --count 2: recover indices 0, 1.
  • lighthouse account validator recover --first-index 1: recover only index 1.
  • lighthouse account validator recover --first-index 1 --count 2: recover indices 1, 2.

For each of the indices recovered in the above commands, a directory will be created in the --validator-dir location (default ~/.lighthouse/{testnet}/validators) which contains all the information necessary to run a validator using the lighthouse vc command. The password to this new keystore will be placed in the --secrets-dir (default ~/.lighthouse/{testnet}/secrets).

where testnet is the name of the testnet passed in the --testnet parameter (default is medalla).

Recover an EIP-2386 wallet

Instead of creating EIP-2335 keystores directly, an EIP-2386 wallet can be generated from the mnemonic. This wallet can then be used to generate validator keystores, if desired. For example, the following command will create an encrypted wallet named wally-recovered from a mnemonic:

lighthouse account wallet recover --name wally-recovered

⚠️ Warning: the wallet will be created with a nextaccount value of 0. This means that if you have already generated n validators, then the next n validators generated by this wallet will be duplicates. As mentioned previously, running duplicate validators is likely to result in slashing.

Validator Management

The lighthouse vc command starts a validator client instance which connects to a beacon node and performs the duties of a staked validator.

This document provides information on how the validator client discovers the validators it will act for and how it should obtain their cryptographic signatures.

Users that create validators using the lighthouse account tool in the standard directories and do not start their lighthouse vc with the --disable-auto-discover flag should not need to understand the contents of this document. However, users with more complex needs may find this document useful.

Introducing the validator_definitions.yml file

The validator_definitions.yml file is located in the validator-dir, which defaults to ~/.lighthouse/{testnet}/validators. It is a YAML-encoded file defining exactly which validators the validator client will (and won't) act for.

Example

Here's an example file with two validators:

---
- enabled: true
  voting_public_key: "0x87a580d31d7bc69069b55f5a01995a610dd391a26dc9e36e81057a17211983a79266800ab8531f21f1083d7d84085007"
  type: local_keystore
  voting_keystore_path: /home/paul/.lighthouse/validators/0x87a580d31d7bc69069b55f5a01995a610dd391a26dc9e36e81057a17211983a79266800ab8531f21f1083d7d84085007/voting-keystore.json
  voting_keystore_password_path: /home/paul/.lighthouse/secrets/0x87a580d31d7bc69069b55f5a01995a610dd391a26dc9e36e81057a17211983a79266800ab8531f21f1083d7d84085007
- enabled: false
  voting_public_key: "0xa5566f9ec3c6e1fdf362634ebec9ef7aceb0e460e5079714808388e5d48f4ae1e12897fed1bea951c17fa389d511e477"
  type: local_keystore
  voting_keystore_path: /home/paul/.lighthouse/validators/0xa5566f9ec3c6e1fdf362634ebec9ef7aceb0e460e5079714808388e5d48f4ae1e12897fed1bea951c17fa389d511e477/voting-keystore.json
  voting_keystore_password: myStrongpa55word123&$

In this example we can see two validators:

  • A validator identified by the 0x87a5... public key which is enabled.
  • Another validator identified by the 0xa556... public key which is not enabled.

Fields

Each permitted field of the file is listed below for reference:

  • enabled: A true/false value indicating whether the validator client should consider this validator "enabled".
  • voting_public_key: A validator public key.
  • type: How the validator signs messages (currently restricted to local_keystore).
  • voting_keystore_path: The path to an EIP-2335 keystore.
  • voting_keystore_password_path: The path to the password for the EIP-2335 keystore.
  • voting_keystore_password: The password to the EIP-2335 keystore.

Note: Either voting_keystore_password_path or voting_keystore_password must be supplied. If both are supplied, voting_keystore_password_path is ignored.
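The precedence rule in the note above can be sketched in Python (an illustration only, not Lighthouse's actual implementation; the field names follow the example file above):

```python
def resolve_keystore_password(definition: dict) -> str:
    # An inline password takes priority; the password file path is
    # ignored if both are supplied.
    if "voting_keystore_password" in definition:
        return definition["voting_keystore_password"]
    # Otherwise the contents of the password file are used verbatim.
    with open(definition["voting_keystore_password_path"]) as f:
        return f.read()
```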

Populating the validator_definitions.yml file

When the validator client starts and the validator_definitions.yml file doesn't exist, a new file will be created. If the --disable-auto-discover flag is provided, the new file will be empty and the validator client will not start any validators. If the --disable-auto-discover flag is not provided, an automatic validator discovery routine will start (more on that later). To recap:

  • lighthouse vc: validators are automatically discovered.
  • lighthouse vc --disable-auto-discover: validators are not automatically discovered.

Automatic validator discovery

When the --disable-auto-discover flag is not provided, the validator client will search the validator-dir for validators and add any new validators to the validator_definitions.yml with enabled: true.

The routine for this search begins in the validator-dir, where it obtains a list of all files in that directory and all sub-directories (i.e., recursive directory-tree search). For each file named voting-keystore.json it creates a new validator definition by the following process:

  1. Set enabled to true.
  2. Set voting_public_key to the pubkey value from the voting-keystore.json.
  3. Set type to local_keystore.
  4. Set voting_keystore_path to the full path of the discovered keystore.
  5. Set voting_keystore_password_path to be a file in the secrets-dir with a name identical to the voting_public_key value.
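The five steps above can be sketched in Python (an illustration only, not Lighthouse's implementation; it assumes each keystore JSON exposes a top-level "pubkey" field, as EIP-2335 keystores do):

```python
import json
import os

def discover_validators(validator_dir: str, secrets_dir: str) -> list:
    """Sketch of the recursive discovery routine described above."""
    definitions = []
    for root, _dirs, files in os.walk(validator_dir):
        for name in files:
            if name != "voting-keystore.json":  # exact filename match only
                continue
            path = os.path.join(root, name)
            with open(path) as f:
                pubkey = "0x" + json.load(f)["pubkey"]
            definitions.append({
                "enabled": True,
                "voting_public_key": pubkey,
                "type": "local_keystore",
                "voting_keystore_path": path,
                # The password file is named after the public key.
                "voting_keystore_password_path": os.path.join(secrets_dir, pubkey),
            })
    return definitions
```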

Discovery Example

Let's assume the following directory structure:

~/.lighthouse/{testnet}/validators
├── john
│   └── voting-keystore.json
├── sally
│   ├── one
│   │   └── voting-keystore.json
│   ├── three
│   │   └── my-voting-keystore.json
│   └── two
│       └── voting-keystore.json
└── slashing_protection.sqlite

There is no validator_definitions.yml file present, so we can run lighthouse vc (without --disable-auto-discover) and it will create the following validator_definitions.yml:

---
- enabled: true
  voting_public_key: "0xa5566f9ec3c6e1fdf362634ebec9ef7aceb0e460e5079714808388e5d48f4ae1e12897fed1bea951c17fa389d511e477"
  type: local_keystore
  voting_keystore_path: /home/paul/.lighthouse/validators/sally/one/voting-keystore.json
  voting_keystore_password_path: /home/paul/.lighthouse/secrets/0xa5566f9ec3c6e1fdf362634ebec9ef7aceb0e460e5079714808388e5d48f4ae1e12897fed1bea951c17fa389d511e477
- enabled: true
  voting_public_key: "0xaa440c566fcf34dedf233baf56cf5fb05bb420d9663b4208272545608c27c13d5b08174518c758ecd814f158f2b4a337"
  type: local_keystore
  voting_keystore_path: /home/paul/.lighthouse/validators/sally/two/voting-keystore.json
  voting_keystore_password_path: /home/paul/.lighthouse/secrets/0xaa440c566fcf34dedf233baf56cf5fb05bb420d9663b4208272545608c27c13d5b08174518c758ecd814f158f2b4a337
- enabled: true
  voting_public_key: "0x87a580d31d7bc69069b55f5a01995a610dd391a26dc9e36e81057a17211983a79266800ab8531f21f1083d7d84085007"
  type: local_keystore
  voting_keystore_path: /home/paul/.lighthouse/validators/john/voting-keystore.json
  voting_keystore_password_path: /home/paul/.lighthouse/secrets/0x87a580d31d7bc69069b55f5a01995a610dd391a26dc9e36e81057a17211983a79266800ab8531f21f1083d7d84085007

All voting-keystore.json files have been detected and added to the file. Notably, the sally/three/my-voting-keystore.json file was not added to the file, since the file name is not exactly voting-keystore.json.

In order for the validator client to decrypt the validator keystores, the user will need to ensure their secrets-dir is organised as below:

~/.lighthouse/{testnet}/secrets
├── 0xa5566f9ec3c6e1fdf362634ebec9ef7aceb0e460e5079714808388e5d48f4ae1e12897fed1bea951c17fa389d511e477
├── 0xaa440c566fcf34dedf233baf56cf5fb05bb420d9663b4208272545608c27c13d5b08174518c758ecd814f158f2b4a337
└── 0x87a580d31d7bc69069b55f5a01995a610dd391a26dc9e36e81057a17211983a79266800ab8531f21f1083d7d84085007

Manual configuration

The automatic validator discovery process works out-of-the-box with validators that are created using the lighthouse account validator new command. The details of this process are only interesting to those who are using keystores generated with another tool or have non-standard requirements.

If you are one of these users, manually edit the validator_definitions.yml file to suit your requirements. If the file is poorly formatted or any one of the validators is unable to be initialized, the validator client will refuse to start.

How the validator_definitions.yml file is processed

If a validator client were to start using the first example validator_definitions.yml file it would print the following log, acknowledging that there are two validators and one is disabled:

INFO Initialized validators                  enabled: 1, disabled: 1

The validator client will simply ignore the disabled validator. However, for the enabled validator, the validator client will:

  1. Load an EIP-2335 keystore from the voting_keystore_path.
  2. If the voting_keystore_password field is present, use it as the keystore password. Otherwise, attempt to read the file at voting_keystore_password_path and use the contents as the keystore password.
  3. Use the keystore password to decrypt the keystore and obtain a BLS keypair.
  4. Verify that the decrypted BLS keypair matches the voting_public_key.
  5. Create a voting-keystore.json.lock file adjacent to the voting_keystore_path, indicating that the voting keystore is in-use and should not be opened by another process.
  6. Proceed to act for that validator, creating blocks and attestations if/when required.

If there is an error during any of these steps (e.g., a file is missing or corrupt) the validator client will log an error and continue to attempt to process other validators.

When the validator client exits (or the validator is deactivated) it will remove the voting-keystore.json.lock to indicate that the keystore is free for use again.
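The lock-file lifecycle described above can be sketched as follows (a simplified illustration, not Lighthouse's code; a real implementation must also handle unclean shutdowns that leave a stale lock behind):

```python
import os

class KeystoreLock:
    """Create `<keystore>.lock` on start, remove it on exit."""

    def __init__(self, voting_keystore_path: str):
        self.lock_path = voting_keystore_path + ".lock"

    def __enter__(self):
        if os.path.exists(self.lock_path):
            raise RuntimeError("voting keystore is in use by another process")
        open(self.lock_path, "w").close()  # mark the keystore as in-use
        return self

    def __exit__(self, *exc):
        os.remove(self.lock_path)  # keystore is free for use again
```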

Importing from the Ethereum 2.0 Launch pad

The Eth2 Launch pad is a website from the Ethereum Foundation which guides users through using the eth2.0-deposit-cli command-line program to generate Eth2 validator keys.

The keys that are generated from eth2.0-deposit-cli can be easily loaded into a Lighthouse validator client (lighthouse vc). In fact, both of these programs are designed to work with each other.

This guide will show the user how to import their keys into Lighthouse so they can perform their duties as a validator. The guide assumes the user has already installed Lighthouse.

Instructions

Whilst following the steps on the website, users are instructed to download the eth2.0-deposit-cli repository. The eth2-deposit-cli script will generate the validator BLS keys into a validator_keys directory. We assume that the user's present working directory is the eth2-deposit-cli repository (this is where you will be if you just ran the ./deposit.sh script from the Eth2 Launch pad website). If this is not the case, simply change the --directory to point to the validator_keys directory.

Now, assuming that the user is in the eth2-deposit-cli directory and they're using the default (~/.lighthouse/{testnet}/validators) validators directory (specify a different one using --validators-dir flag), they can follow these steps:

1. Run the lighthouse account validator import command.

Docker users should use the command from the Docker section, all other users can use:

lighthouse --testnet medalla account validator import --directory validator_keys

Note: The user must specify the testnet that they are importing the keys for using the --testnet flag.

The user will then be prompted for a password for each keystore discovered:

Keystore found at "validator_keys/keystore-m_12381_3600_0_0_0-1595406747.json":

 - Public key: 0xa5e8702533f6d66422e042a0bf3471ab9b302ce115633fa6fdc5643f804b6b4f1c33baf95f125ec21969a3b1e0dd9e56
 - UUID: 8ea4cf99-8719-43c5-9eda-e97b8a4e074f

If you enter a password it will be stored in validator_definitions.yml so that it is not required each time the validator client starts.

Enter a password, or press enter to omit a password:

The user can choose whether or not they'd like to store the validator password in the validator_definitions.yml file. If the password is not stored here, the validator client (lighthouse vc) application will ask for the password each time it starts. This might be nice for some users from a security perspective (i.e., if it is a shared computer), however it means that if the validator client restarts, the user will be liable to off-line penalties until they can enter the password. If the user trusts the computer that is running the validator client and they are seeking maximum validator rewards, we recommend entering a password at this point.

Once the process is done the user will see:

Successfully imported keystore.
Successfully updated validator_definitions.yml.

Successfully imported 1 validators (0 skipped).

WARNING: DO NOT USE THE ORIGINAL KEYSTORES TO VALIDATE WITH ANOTHER CLIENT, OR YOU WILL GET SLASHED..

The import process is complete!

2. Run the lighthouse vc command.

Now that the keys are imported, the user can start performing their validator duties by running lighthouse vc and checking that their validator public key appears as a voting_pubkey in one of the following logs:

INFO Enabled validator       voting_pubkey: 0xa5e8702533f6d66422e042a0bf3471ab9b302ce115633fa6fdc5643f804b6b4f1c33baf95f125ec21969a3b1e0dd9e56

Once this log appears (and there are no errors) the lighthouse vc application will ensure that the validator starts performing its duties and being rewarded by the protocol. There is no more input required from the user.

Docker

The import command is a little more complex for Docker users, but the example in this document can be substituted with:

docker run -it \
	-v $HOME/.lighthouse:/root/.lighthouse \
	-v $(pwd)/validator_keys:/root/validator_keys \
	sigp/lighthouse \
	lighthouse --testnet medalla account validator import --directory /root/validator_keys

Here we use two -v volumes to attach:

  • ~/.lighthouse on the host to /root/.lighthouse in the Docker container.
  • The validator_keys directory in the present working directory of the host to the /root/validator_keys directory of the Docker container.

Slashing Protection

The security of Ethereum 2.0's proof of stake protocol depends on penalties for misbehaviour, known as slashings. Validators that sign conflicting messages (blocks or attestations) can be slashed by other validators through the inclusion of a ProposerSlashing or AttesterSlashing on chain.

The Lighthouse validator client includes a mechanism to protect its validators against accidental slashing, known as the slashing protection database. This database records every block and attestation signed by validators, and the validator client uses this information to avoid signing any slashable messages.

Lighthouse's slashing protection database is an SQLite database located at $datadir/validators/slashing_protection.sqlite which is locked exclusively when the validator client is running. In normal operation, this database will be automatically created and utilized, meaning that your validators are kept safe by default.
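SQLite's exclusive locking, which underpins this protection, can be demonstrated with a small sketch (an illustration of SQLite behaviour only, not Lighthouse's code):

```python
import sqlite3

def open_exclusive(db_path: str) -> sqlite3.Connection:
    # Take an exclusive lock so that a second process (e.g. a duplicate
    # validator client) cannot read or write the database.
    conn = sqlite3.connect(db_path)
    conn.isolation_level = None  # manage transactions manually
    conn.execute("BEGIN EXCLUSIVE")  # acquires the lock immediately
    return conn
```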

If you are seeing errors related to slashing protection, it's important that you act slowly and carefully to keep your validators safe. See the Troubleshooting section.

Initialization

The database will be created automatically, and your validators registered with it, when you create validator keys with Lighthouse or import them using the lighthouse account validator import command.

Avoiding Slashing

The slashing protection database is designed to protect against many common causes of slashing, but cannot prevent some others.

Examples of circumstances where the slashing protection database is effective are:

  • Accidentally running two validator clients on the same machine with the same datadir. The exclusive and transactional access to the database prevents the 2nd validator client from signing anything slashable (it won't even start).
  • Deep re-orgs that cause the shuffling to change, prompting validators to re-attest in an epoch where they have already attested. The slashing protection checks all messages against the slashing conditions and will refuse to attest on the new chain until it is safe to do so (usually after one epoch).
  • Importing keys and signing history from another client, where that history is complete. If you run another client and decide to switch to Lighthouse, you can export data from your client to be imported into Lighthouse's slashing protection database. See Import and Export.
  • Misplacing slashing_protection.sqlite during a datadir change or migration between machines. By default Lighthouse will refuse to start if it finds validator keys that are not registered in the slashing protection database.

Examples where it is ineffective are:

  • Running two validator client instances simultaneously. This could be two different clients (e.g. Lighthouse and Prysm) running on the same machine, two Lighthouse instances using different datadirs, or two clients on completely different machines (e.g. one on a cloud server and one running locally). You are responsible for ensuring that your validator keys are never running simultaneously – the slashing protection DB cannot protect you in this case.
  • Importing keys from another client without also importing voting history.
  • If you use --init-slashing-protection to recreate a missing slashing protection database.

Import and Export

Lighthouse supports v4 of the slashing protection interchange format described here. An interchange file is a record of all blocks and attestations signed by a set of validator keys – basically a portable slashing protection database!

You can import a .json interchange file from another client using this command:

lighthouse account validator slashing-protection import <my_interchange.json>

Instructions for exporting your existing client's database are out of scope for this document, please check the other client's documentation for instructions.

When importing an interchange file, you still need to import the validator keystores themselves separately, using the instructions about importing keystores into Lighthouse.


You can export Lighthouse's database for use with another client with this command:

lighthouse account validator slashing-protection export <lighthouse_interchange.json>

Troubleshooting

Misplaced Slashing Database

If the slashing protection database cannot be found, it will manifest in an error like this:

Oct 12 14:41:26.415 CRIT Failed to start validator client        reason: Failed to open slashing protection database: SQLError("Unable to open database: Error(Some(\"unable to open database file: /home/karlm/.lighthouse/medalla/validators/slashing_protection.sqlite\"))").
Ensure that `slashing_protection.sqlite` is in "/home/karlm/.lighthouse/medalla/validators" folder

Usually this indicates that during some manual intervention the slashing database has been misplaced. This error can also occur if you have upgraded from Lighthouse v0.2.x to v0.3.x without moving the slashing protection database. If you have imported your keys into a new node, you should never see this error (see Initialization).

The safest way to remedy this error is to find your old slashing protection database and move it to the correct location. In our example that would be ~/.lighthouse/medalla/validators/slashing_protection.sqlite. You can search for your old database using a tool like find, fd, or your file manager's GUI. Ask on the Lighthouse Discord if you're not sure.
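As an alternative to command-line tools, a short Python walk can locate stray copies of the database (a hypothetical helper for illustration):

```python
import os

def find_slashing_dbs(search_root: str) -> list:
    # Walk the directory tree looking for misplaced copies of the
    # slashing protection database.
    return [
        os.path.join(root, "slashing_protection.sqlite")
        for root, _dirs, files in os.walk(search_root)
        if "slashing_protection.sqlite" in files
    ]
```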

If you are absolutely 100% sure that you need to recreate the missing database, you can start the Lighthouse validator client with the --init-slashing-protection flag. This flag is incredibly dangerous and should not be used lightly, and we strongly recommend you try finding your old slashing protection database before using it. If you do decide to use it, you should wait at least 1 epoch (~7 minutes) from when your validator client was last actively signing messages. If you suspect your node experienced a clock drift issue you should wait longer. Remember that the inactivity penalty for being offline for even a day or so is approximately equal to the rewards earned in a day. You will get slashed if you use --init-slashing-protection incorrectly.

Slashable Attestations and Re-orgs

Sometimes a re-org can cause the validator client to attempt to sign something slashable, in which case it will be blocked by slashing protection, resulting in a log like this:

Sep 29 15:15:05.303 CRIT Not signing slashable attestation       error: InvalidAttestation(DoubleVote(SignedAttestation { source_epoch: Epoch(0), target_epoch: Epoch(30), signing_root: 0x0c17be1f233b20341837ff183d21908cce73f22f86d5298c09401c6f37225f8a })), attestation: AttestationData { slot: Slot(974), index: 0, beacon_block_root: 0xa86a93ed808f96eb81a0cd7f46e3b3612cafe4bd0367aaf74e0563d82729e2dc, source: Checkpoint { epoch: Epoch(0), root: 0x0000000000000000000000000000000000000000000000000000000000000000 }, target: Checkpoint { epoch: Epoch(30), root: 0xcbe6901c0701a89e4cf508cfe1da2bb02805acfdfe4c39047a66052e2f1bb614 } }

This log is still marked as CRIT because in general it should occur only very rarely, and could indicate a serious error or misconfiguration (see Avoiding Slashing).

Limitation of Liability

The Lighthouse developers do not guarantee the perfect functioning of this software, or accept liability for any losses suffered. For more information see the Lighthouse license.

APIs

Lighthouse allows users to query the state of Eth2.0 using web-standard, RESTful HTTP/JSON APIs.

There are two APIs served by Lighthouse: the Beacon Node API and the Validator Client API.

Beacon Node API

Lighthouse implements the standard Eth2 Beacon Node API specification. Please follow that link for a full description of each API endpoint.

Warning: the standard API specification is still in flux and the Lighthouse implementation is partially incomplete. You can track the status of each endpoint at #1434.

Starting the server

A Lighthouse beacon node can be configured to expose an HTTP server by supplying the --http flag. The default listen address is 127.0.0.1:5052.

The following CLI flags control the HTTP server:

  • --http: enable the HTTP server (required even if the following flags are provided).
  • --http-port: specify the listen port of the server.
  • --http-address: specify the listen address of the server.
  • --http-allow-origin: specify the value of the Access-Control-Allow-Origin header. The default is to not supply a header.

The schema of the API aligns with the standard Eth2 Beacon Node API as defined at github.com/ethereum/eth2.0-APIs. An interactive specification is available here.

CLI Example

Start the beacon node with the HTTP server listening on http://localhost:5052:

lighthouse bn --http

HTTP Request/Response Examples

This section contains some simple examples of using the HTTP API via curl. All endpoints are documented in the Eth2 Beacon Node API specification.

View the head of the beacon chain

Returns the block header at the head of the canonical chain.

curl -X GET "http://localhost:5052/eth/v1/beacon/headers/head" -H "accept: application/json"
{
  "data": {
    "root": "0x4381454174fc28c7095077e959dcab407ae5717b5dca447e74c340c1b743d7b2",
    "canonical": true,
    "header": {
      "message": {
        "slot": "3199",
        "proposer_index": "19077",
        "parent_root": "0xf1934973041c5896d0d608e52847c3cd9a5f809c59c64e76f6020e3d7cd0c7cd",
        "state_root": "0xe8e468f9f5961655dde91968f66480868dab8d4147de9498111df2b7e4e6fe60",
        "body_root": "0x6f183abc6c4e97f832900b00d4e08d4373bfdc819055d76b0f4ff850f559b883"
      },
      "signature": "0x988064a2f9cf13fe3aae051a3d85f6a4bca5a8ff6196f2f504e32f1203b549d5f86a39c6509f7113678880701b1881b50925a0417c1c88a750c8da7cd302dda5aabae4b941e3104d0cf19f5043c4f22a7d75d0d50dad5dbdaf6991381dc159ab"
    }
  }
}

View the status of a validator

Shows the status of validator at index 1 at the head state.

curl -X GET "http://localhost:5052/eth/v1/beacon/states/head/validators/1" -H  "accept: application/json"
{
  "data": {
    "index": "1",
    "balance": "63985937939",
    "status": "Active",
    "validator": {
      "pubkey": "0x873e73ee8b3e4fcf1d2fb0f1036ba996ac9910b5b348f6438b5f8ef50857d4da9075d0218a9d1b99a9eae235a39703e1",
      "withdrawal_credentials": "0x00b8cdcf79ba7e74300a07e9d8f8121dd0d8dd11dcfd6d3f2807c45b426ac968",
      "effective_balance": "32000000000",
      "slashed": false,
      "activation_eligibility_epoch": "0",
      "activation_epoch": "0",
      "exit_epoch": "18446744073709551615",
      "withdrawable_epoch": "18446744073709551615"
    }
  }
}

Troubleshooting

HTTP API is unavailable or refusing connections

Ensure the --http flag has been supplied at the CLI.

You can quickly check that the HTTP endpoint is up using curl:

curl -X GET "http://localhost:5052/eth/v1/node/version" -H  "accept: application/json"

The beacon node should respond with its version:

{"data":{"version":"Lighthouse/v0.2.9-6f7b4768a/x86_64-linux"}}

If this doesn't work, the server might not be started or there might be a network connection error.

I cannot query my node from a web browser (e.g., Swagger)

By default, the API does not provide an Access-Control-Allow-Origin header, which causes browsers to reject responses with a CORS error.

The --http-allow-origin flag can be used to add a wild-card CORS header:

lighthouse bn --http --http-allow-origin "*"

Warning: Adding the wild-card allow-origin flag can pose a security risk. Only use it in production if you understand the risks of a loose CORS policy.

Lighthouse Non-Standard APIs

Lighthouse fully supports the standardization efforts at github.com/ethereum/eth2.0-APIs, however sometimes development requires additional endpoints that shouldn't necessarily be defined as a broad-reaching standard. Such endpoints are placed behind the /lighthouse path.

The endpoints behind the /lighthouse path are:

  • Not intended to be stable.
  • Not guaranteed to be safe.
  • For testing and debugging purposes only.

Although we don't recommend that users rely on these endpoints, we document them briefly so they can be utilized by developers and researchers.

/lighthouse/health

Presently only available on Linux.

curl -X GET "http://localhost:5052/lighthouse/health" -H  "accept: application/json" | jq
{
  "data": {
    "pid": 1728254,
    "pid_num_threads": 47,
    "pid_mem_resident_set_size": 510054400,
    "pid_mem_virtual_memory_size": 3963158528,
    "sys_virt_mem_total": 16715530240,
    "sys_virt_mem_available": 4065374208,
    "sys_virt_mem_used": 11383402496,
    "sys_virt_mem_free": 1368662016,
    "sys_virt_mem_percent": 75.67906,
    "sys_loadavg_1": 4.92,
    "sys_loadavg_5": 5.53,
    "sys_loadavg_15": 5.58
  }
}

/lighthouse/syncing

curl -X GET "http://localhost:5052/lighthouse/syncing" -H  "accept: application/json" | jq
{
  "data": {
    "SyncingFinalized": {
      "start_slot": 3104,
      "head_slot": 343744,
      "head_root": "0x1b434b5ed702338df53eb5e3e24336a90373bb51f74b83af42840be7421dd2bf"
    }
  }
}

/lighthouse/peers

curl -X GET "http://localhost:5052/lighthouse/peers" -H  "accept: application/json" | jq
[
  {
    "peer_id": "16Uiu2HAmA9xa11dtNv2z5fFbgF9hER3yq35qYNTPvN7TdAmvjqqv",
    "peer_info": {
      "_status": "Healthy",
      "score": {
        "score": 0
      },
      "client": {
        "kind": "Lighthouse",
        "version": "v0.2.9-1c9a055c",
        "os_version": "aarch64-linux",
        "protocol_version": "lighthouse/libp2p",
        "agent_string": "Lighthouse/v0.2.9-1c9a055c/aarch64-linux"
      },
      "connection_status": {
        "status": "disconnected",
        "connections_in": 0,
        "connections_out": 0,
        "last_seen": 1082,
        "banned_ips": []
      },
      "listening_addresses": [
        "/ip4/80.109.35.174/tcp/9000",
        "/ip4/127.0.0.1/tcp/9000",
        "/ip4/192.168.0.73/tcp/9000",
        "/ip4/172.17.0.1/tcp/9000",
        "/ip6/::1/tcp/9000"
      ],
      "sync_status": {
        "Advanced": {
          "info": {
            "status_head_slot": 343829,
            "status_head_root": "0xe34e43efc2bb462d9f364bc90e1f7f0094e74310fd172af698b5a94193498871",
            "status_finalized_epoch": 10742,
            "status_finalized_root": "0x1b434b5ed702338df53eb5e3e24336a90373bb51f74b83af42840be7421dd2bf"
          }
        }
      },
      "meta_data": {
        "seq_number": 160,
        "attnets": "0x0000000800000080"
      }
    }
  }
]

/lighthouse/peers/connected

curl -X GET "http://localhost:5052/lighthouse/peers/connected" -H  "accept: application/json" | jq
[
  {
    "peer_id": "16Uiu2HAkzJC5TqDSKuLgVUsV4dWat9Hr8EjNZUb6nzFb61mrfqBv",
    "peer_info": {
      "_status": "Healthy",
      "score": {
        "score": 0
      },
      "client": {
        "kind": "Lighthouse",
        "version": "v0.2.8-87181204+",
        "os_version": "x86_64-linux",
        "protocol_version": "lighthouse/libp2p",
        "agent_string": "Lighthouse/v0.2.8-87181204+/x86_64-linux"
      },
      "connection_status": {
        "status": "connected",
        "connections_in": 1,
        "connections_out": 0,
        "last_seen": 0,
        "banned_ips": []
      },
      "listening_addresses": [
        "/ip4/34.204.178.218/tcp/9000",
        "/ip4/127.0.0.1/tcp/9000",
        "/ip4/172.31.67.58/tcp/9000",
        "/ip4/172.17.0.1/tcp/9000",
        "/ip6/::1/tcp/9000"
      ],
      "sync_status": "Unknown",
      "meta_data": {
        "seq_number": 1819,
        "attnets": "0xffffffffffffffff"
      }
    }
  }
]

/lighthouse/proto_array

curl -X GET "http://localhost:5052/lighthouse/proto_array" -H  "accept: application/json" | jq

Example omitted for brevity.

/lighthouse/validator_inclusion/{epoch}/{validator_id}

See Validator Inclusion APIs.

/lighthouse/validator_inclusion/{epoch}/global

See Validator Inclusion APIs.

Validator Inclusion APIs

The /lighthouse/validator_inclusion API endpoints provide information on results of the proof-of-stake voting process used for finality/justification under Casper FFG.

These endpoints are not stable or included in the Eth2 standard API. As such, they are subject to change or removal without a change in major release version.

Endpoints

HTTP Path                                                 Description
/lighthouse/validator_inclusion/{epoch}/global            A global vote count for a given epoch.
/lighthouse/validator_inclusion/{epoch}/{validator_id}    A per-validator breakdown of votes in a given epoch.

Global

Returns a global count of votes for some given epoch. The results are included both for the current and previous (epoch - 1) epochs since both are required by the beacon node whilst performing per-epoch processing.

Generally, you should consider the "current" values to be incomplete and the "previous" values to be final. This is because validators can continue to include attestations from the current epoch in the next epoch, however this is not the case for attestations from the previous epoch.

                  `epoch` query parameter
                              |
                              |     values are calculated here
                              |     |
                              v     v
Epoch:  |---previous---|---current---|---next---|

                        |-------------|
                               ^
                               |
              window for including "current" attestations
                            in a block

The votes are expressed in terms of staked effective Gwei (i.e., not the number of individual validators). For example, if a validator has 32 ETH staked they will increase the current_epoch_attesting_gwei figure by 32,000,000,000 if they have an attestation included in a block during the current epoch. If this validator has more than 32 ETH, that extra ETH will not count towards their vote (that is why it is effective Gwei).
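The cap on vote weight can be expressed as a simple function (a simplified sketch: the spec's actual effective-balance calculation also applies hysteresis and 1 ETH increments):

```python
GWEI_PER_ETH = 10**9
MAX_EFFECTIVE_BALANCE_GWEI = 32 * GWEI_PER_ETH

def vote_weight_gwei(balance_gwei: int) -> int:
    # ETH above the 32 ETH cap does not count towards the vote.
    return min(balance_gwei, MAX_EFFECTIVE_BALANCE_GWEI)
```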

The following fields are returned:

  • current_epoch_active_gwei: the total staked gwei that was active (i.e., able to vote) during the current epoch.
  • current_epoch_attesting_gwei: the total staked gwei that had one or more attestations included in a block during the current epoch (multiple attestations by the same validator do not increase this figure).
  • current_epoch_target_attesting_gwei: the total staked gwei that attested to the majority-elected Casper FFG target epoch during the current epoch. This figure must be equal to or less than current_epoch_attesting_gwei.
  • previous_epoch_active_gwei: as above, but during the previous epoch.
  • previous_epoch_attesting_gwei: see current_epoch_attesting_gwei.
  • previous_epoch_target_attesting_gwei: see current_epoch_target_attesting_gwei.
  • previous_epoch_head_attesting_gwei: the total staked gwei that attested to a head beacon block that is in the canonical chain.

From this data you can calculate some interesting figures:

Participation Rate

previous_epoch_attesting_gwei / previous_epoch_active_gwei

Expresses the proportion of active stake that had an attestation for the previous epoch included in a block.

Justification/Finalization Rate

previous_epoch_target_attesting_gwei / previous_epoch_active_gwei

When this value is greater than or equal to 2/3 it is possible that the beacon chain may justify and/or finalize the epoch.

HTTP Example

curl -X GET "http://localhost:5052/lighthouse/validator_inclusion/0/global" -H  "accept: application/json" | jq
{
  "data": {
    "current_epoch_active_gwei": 642688000000000,
    "previous_epoch_active_gwei": 642688000000000,
    "current_epoch_attesting_gwei": 366208000000000,
    "current_epoch_target_attesting_gwei": 366208000000000,
    "previous_epoch_attesting_gwei": 1000000000,
    "previous_epoch_target_attesting_gwei": 1000000000,
    "previous_epoch_head_attesting_gwei": 1000000000
  }
}
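Using the previous-epoch figures from the example response above (the "previous" values are final), the two rates defined earlier can be computed like so:

```python
data = {
    "previous_epoch_active_gwei": 642688000000000,
    "previous_epoch_attesting_gwei": 1000000000,
    "previous_epoch_target_attesting_gwei": 1000000000,
}

participation_rate = (
    data["previous_epoch_attesting_gwei"] / data["previous_epoch_active_gwei"]
)
justification_rate = (
    data["previous_epoch_target_attesting_gwei"] / data["previous_epoch_active_gwei"]
)
# Well below 2/3, so this epoch cannot be justified from these votes.
may_justify = justification_rate >= 2 / 3
```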

Individual

Returns a per-validator summary of how that validator performed during the current epoch.

The Global Votes endpoint is the summation of all these individual values; please see it for definitions of terms like "current_epoch", "previous_epoch" and "target_attester".

HTTP Example

curl -X GET "http://localhost:5052/lighthouse/validator_inclusion/0/42" -H  "accept: application/json" | jq
{
  "data": {
    "is_slashed": false,
    "is_withdrawable_in_current_epoch": false,
    "is_active_in_current_epoch": true,
    "is_active_in_previous_epoch": true,
    "current_epoch_effective_balance_gwei": 32000000000,
    "is_current_epoch_attester": false,
    "is_current_epoch_target_attester": false,
    "is_previous_epoch_attester": false,
    "is_previous_epoch_target_attester": false,
    "is_previous_epoch_head_attester": false
  }
}

Validator Client API

Lighthouse implements an HTTP/JSON API for the validator client. Since there is no Eth2 standard validator client API, Lighthouse has defined its own.

A full list of endpoints can be found in Endpoints.

Note: All requests to the HTTP server must supply an Authorization header. All responses contain a Signature header for optional verification.

Starting the server

A Lighthouse validator client can be configured to expose an HTTP server by supplying the --http flag. The default listen address is 127.0.0.1:5062.

The following CLI flags control the HTTP server:

  • --http: enable the HTTP server (required even if the following flags are provided).
  • --http-port: specify the listen port of the server.
  • --http-allow-origin: specify the value of the Access-Control-Allow-Origin header. The default is to not supply a header.

Security

The validator client HTTP server is not encrypted (i.e., it is not HTTPS). For this reason, it will only listen on 127.0.0.1.

It is unsafe to expose the validator client to the public Internet without additional transport layer security (e.g., HTTPS via nginx, SSH tunnels, etc.).

CLI Example

Start the validator client with the HTTP server listening on http://localhost:5062:

lighthouse vc --http

Validator Client API: Endpoints

Endpoints

  • GET /lighthouse/version: Get the Lighthouse software version
  • GET /lighthouse/health: Get information about the host machine
  • GET /lighthouse/spec: Get the Eth2 specification used by the validator
  • GET /lighthouse/validators: List all validators
  • GET /lighthouse/validators/:voting_pubkey: Get a specific validator
  • PATCH /lighthouse/validators/:voting_pubkey: Update a specific validator
  • POST /lighthouse/validators: Create a new validator and mnemonic
  • POST /lighthouse/validators/mnemonic: Create a new validator from an existing mnemonic

GET /lighthouse/version

Returns the software version and git commit hash for the Lighthouse binary.

HTTP Specification

  • Path: /lighthouse/version
  • Method: GET
  • Required Headers: Authorization
  • Typical Responses: 200

Example Response Body

{
    "data": {
        "version": "Lighthouse/v0.2.11-fc0654fbe+/x86_64-linux"
    }
}

GET /lighthouse/health

Returns information regarding the health of the host machine.

HTTP Specification

  • Path: /lighthouse/health
  • Method: GET
  • Required Headers: Authorization
  • Typical Responses: 200

Note: this endpoint is presently only available on Linux.

Example Response Body

{
    "data": {
        "pid": 1476293,
        "pid_num_threads": 19,
        "pid_mem_resident_set_size": 4009984,
        "pid_mem_virtual_memory_size": 1306775552,
        "sys_virt_mem_total": 33596100608,
        "sys_virt_mem_available": 23073017856,
        "sys_virt_mem_used": 9346957312,
        "sys_virt_mem_free": 22410510336,
        "sys_virt_mem_percent": 31.322334,
        "sys_loadavg_1": 0.98,
        "sys_loadavg_5": 0.98,
        "sys_loadavg_15": 1.01
    }
}

GET /lighthouse/spec

Returns the Eth2 specification loaded for this validator.

HTTP Specification

  • Path: /lighthouse/spec
  • Method: GET
  • Required Headers: Authorization
  • Typical Responses: 200

Example Response Body

{
    "data": {
        "CONFIG_NAME": "mainnet",
        "MAX_COMMITTEES_PER_SLOT": "64",
        "TARGET_COMMITTEE_SIZE": "128",
        "MIN_PER_EPOCH_CHURN_LIMIT": "4",
        "CHURN_LIMIT_QUOTIENT": "65536",
        "SHUFFLE_ROUND_COUNT": "90",
        "MIN_GENESIS_ACTIVE_VALIDATOR_COUNT": "1024",
        "MIN_GENESIS_TIME": "1601380800",
        "GENESIS_DELAY": "172800",
        "MIN_DEPOSIT_AMOUNT": "1000000000",
        "MAX_EFFECTIVE_BALANCE": "32000000000",
        "EJECTION_BALANCE": "16000000000",
        "EFFECTIVE_BALANCE_INCREMENT": "1000000000",
        "HYSTERESIS_QUOTIENT": "4",
        "HYSTERESIS_DOWNWARD_MULTIPLIER": "1",
        "HYSTERESIS_UPWARD_MULTIPLIER": "5",
        "PROPORTIONAL_SLASHING_MULTIPLIER": "3",
        "GENESIS_FORK_VERSION": "0x00000002",
        "BLS_WITHDRAWAL_PREFIX": "0x00",
        "SECONDS_PER_SLOT": "12",
        "MIN_ATTESTATION_INCLUSION_DELAY": "1",
        "MIN_SEED_LOOKAHEAD": "1",
        "MAX_SEED_LOOKAHEAD": "4",
        "MIN_EPOCHS_TO_INACTIVITY_PENALTY": "4",
        "MIN_VALIDATOR_WITHDRAWABILITY_DELAY": "256",
        "SHARD_COMMITTEE_PERIOD": "256",
        "BASE_REWARD_FACTOR": "64",
        "WHISTLEBLOWER_REWARD_QUOTIENT": "512",
        "PROPOSER_REWARD_QUOTIENT": "8",
        "INACTIVITY_PENALTY_QUOTIENT": "16777216",
        "MIN_SLASHING_PENALTY_QUOTIENT": "32",
        "SAFE_SLOTS_TO_UPDATE_JUSTIFIED": "8",
        "DOMAIN_BEACON_PROPOSER": "0x00000000",
        "DOMAIN_BEACON_ATTESTER": "0x01000000",
        "DOMAIN_RANDAO": "0x02000000",
        "DOMAIN_DEPOSIT": "0x03000000",
        "DOMAIN_VOLUNTARY_EXIT": "0x04000000",
        "DOMAIN_SELECTION_PROOF": "0x05000000",
        "DOMAIN_AGGREGATE_AND_PROOF": "0x06000000",
        "MAX_VALIDATORS_PER_COMMITTEE": "2048",
        "SLOTS_PER_EPOCH": "32",
        "EPOCHS_PER_ETH1_VOTING_PERIOD": "32",
        "SLOTS_PER_HISTORICAL_ROOT": "8192",
        "EPOCHS_PER_HISTORICAL_VECTOR": "65536",
        "EPOCHS_PER_SLASHINGS_VECTOR": "8192",
        "HISTORICAL_ROOTS_LIMIT": "16777216",
        "VALIDATOR_REGISTRY_LIMIT": "1099511627776",
        "MAX_PROPOSER_SLASHINGS": "16",
        "MAX_ATTESTER_SLASHINGS": "2",
        "MAX_ATTESTATIONS": "128",
        "MAX_DEPOSITS": "16",
        "MAX_VOLUNTARY_EXITS": "16",
        "ETH1_FOLLOW_DISTANCE": "1024",
        "TARGET_AGGREGATORS_PER_COMMITTEE": "16",
        "RANDOM_SUBNETS_PER_VALIDATOR": "1",
        "EPOCHS_PER_RANDOM_SUBNET_SUBSCRIPTION": "256",
        "SECONDS_PER_ETH1_BLOCK": "14",
        "DEPOSIT_CONTRACT_ADDRESS": "0x48b597f4b53c21b48ad95c7256b49d1779bd5890"
    }
}

GET /lighthouse/validators

Lists all validators managed by this validator client.

HTTP Specification

  • Path: /lighthouse/validators
  • Method: GET
  • Required Headers: Authorization
  • Typical Responses: 200

Example Response Body

{
    "data": [
        {
            "enabled": true,
            "voting_pubkey": "0xb0148e6348264131bf47bcd1829590e870c836dc893050fd0dadc7a28949f9d0a72f2805d027521b45441101f0cc1cde"
        },
        {
            "enabled": true,
            "voting_pubkey": "0xb0441246ed813af54c0a11efd53019f63dd454a1fa2a9939ce3c228419fbe113fb02b443ceeb38736ef97877eb88d43a"
        },
        {
            "enabled": true,
            "voting_pubkey": "0xad77e388d745f24e13890353031dd8137432ee4225752642aad0a2ab003c86620357d91973b6675932ff51f817088f38"
        }
    ]
}

GET /lighthouse/validators/:voting_pubkey

Get a validator by their voting_pubkey.

HTTP Specification

  • Path: /lighthouse/validators/:voting_pubkey
  • Method: GET
  • Required Headers: Authorization
  • Typical Responses: 200, 400

Example Path

localhost:5062/lighthouse/validators/0xb0148e6348264131bf47bcd1829590e870c836dc893050fd0dadc7a28949f9d0a72f2805d027521b45441101f0cc1cde

Example Response Body

{
    "data": {
        "enabled": true,
        "voting_pubkey": "0xb0148e6348264131bf47bcd1829590e870c836dc893050fd0dadc7a28949f9d0a72f2805d027521b45441101f0cc1cde"
    }
}

PATCH /lighthouse/validators/:voting_pubkey

Update some values for the validator with voting_pubkey.

HTTP Specification

  • Path: /lighthouse/validators/:voting_pubkey
  • Method: PATCH
  • Required Headers: Authorization
  • Typical Responses: 200, 400

Example Path

localhost:5062/lighthouse/validators/0xb0148e6348264131bf47bcd1829590e870c836dc893050fd0dadc7a28949f9d0a72f2805d027521b45441101f0cc1cde

Example Request Body

{
    "enabled": false
}

Example Response Body

null
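For illustration, here is how the same PATCH request could be constructed with Python's standard library. This is a sketch, not an official client; the pubkey and API token are the example values used throughout these docs:

```python
import json
import urllib.request

API_TOKEN = "api-token-0x03eace4c98e8f77477bb99efb74f9af10d800bd3318f92c33b719a4644254d4123"
PUBKEY = "0xb0148e6348264131bf47bcd1829590e870c836dc893050fd0dadc7a28949f9d0a72f2805d027521b45441101f0cc1cde"

def disable_validator_request(voting_pubkey, api_token,
                              base="http://localhost:5062"):
    """Build (but do not send) a PATCH request that disables a validator."""
    url = f"{base}/lighthouse/validators/{voting_pubkey}"
    body = json.dumps({"enabled": False}).encode()
    req = urllib.request.Request(url, data=body, method="PATCH")
    # Every request must carry the Authorization header (see the
    # Authorization Header section).
    req.add_header("Authorization", f"Basic {api_token}")
    req.add_header("Content-Type", "application/json")
    return req

req = disable_validator_request(PUBKEY, API_TOKEN)
# Send with urllib.request.urlopen(req) while the validator client is running.
```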

POST /lighthouse/validators

Create any number of new validators, all of which will share a common mnemonic generated by the server.

A BIP-39 mnemonic will be randomly generated and returned with the response. This mnemonic can be used to recover all keys returned in the response. Validators are generated from the mnemonic according to EIP-2334, starting at index 0.

HTTP Specification

  • Path: /lighthouse/validators
  • Method: POST
  • Required Headers: Authorization
  • Typical Responses: 200

Example Request Body

[
    {
        "enable": true,
        "description": "validator_one",
        "deposit_gwei": "32000000000"
    },
    {
        "enable": false,
        "description": "validator two",
        "deposit_gwei": "34000000000"
    }
]

Example Response Body

{
    "data": {
        "mnemonic": "marine orchard scout label trim only narrow taste art belt betray soda deal diagram glare hero scare shadow ramp blur junior behave resource tourist",
        "validators": [
            {
                "enabled": true,
                "description": "validator_one",
                "voting_pubkey": "0x8ffbc881fb60841a4546b4b385ec5e9b5090fd1c4395e568d98b74b94b41a912c6101113da39d43c101369eeb9b48e50",
                "eth1_deposit_tx_data": "0x22895118000000000000000000000000000000000000000000000000000000000000008000000000000000000000000000000000000000000000000000000000000000e000000000000000000000000000000000000000000000000000000000000001206c68675776d418bfd63468789e7c68a6788c4dd45a3a911fe3d642668220bbf200000000000000000000000000000000000000000000000000000000000000308ffbc881fb60841a4546b4b385ec5e9b5090fd1c4395e568d98b74b94b41a912c6101113da39d43c101369eeb9b48e5000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000002000cf8b3abbf0ecd91f3b0affcc3a11e9c5f8066efb8982d354ee9a812219b17000000000000000000000000000000000000000000000000000000000000000608fbe2cc0e17a98d4a58bd7a65f0475a58850d3c048da7b718f8809d8943fee1dbd5677c04b5fa08a9c44d271d009edcd15caa56387dc217159b300aad66c2cf8040696d383d0bff37b2892a7fe9ba78b2220158f3dc1b9cd6357bdcaee3eb9f2",
                "deposit_gwei": "32000000000"
            },
            {
                "enabled": false,
                "description": "validator two",
                "voting_pubkey": "0xa9fadd620dc68e9fe0d6e1a69f6c54a0271ad65ab5a509e645e45c6e60ff8f4fc538f301781193a08b55821444801502",
                "eth1_deposit_tx_data": "0x22895118000000000000000000000000000000000000000000000000000000000000008000000000000000000000000000000000000000000000000000000000000000e00000000000000000000000000000000000000000000000000000000000000120b1911954c1b8d23233e0e2bf8c4878c8f56d25a4f790ec09a94520ec88af30490000000000000000000000000000000000000000000000000000000000000030a9fadd620dc68e9fe0d6e1a69f6c54a0271ad65ab5a509e645e45c6e60ff8f4fc538f301781193a08b5582144480150200000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000002000a96df8b95c3ba749265e48a101f2ed974fffd7487487ed55f8dded99b617ad000000000000000000000000000000000000000000000000000000000000006090421299179824950e2f5a592ab1fdefe5349faea1e8126146a006b64777b74cce3cfc5b39d35b370e8f844e99c2dc1b19a1ebd38c7605f28e9c4540aea48f0bc48e853ae5f477fa81a9fc599d1732968c772730e1e47aaf5c5117bd045b788e",
                "deposit_gwei": "34000000000"
            }
        ]
    }
}

POST /lighthouse/validators/mnemonic

Create any number of new validators, all of which will share a common mnemonic.

The supplied BIP-39 mnemonic will be used to generate the validator keys according to EIP-2334, starting at the supplied key_derivation_path_offset. For example, if key_derivation_path_offset = 42, then the first validator voting key will be generated at EIP-2334 index 42, i.e., with the path m/12381/3600/42/0/0.
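Assuming the standard EIP-2334 signing-key path m/12381/3600/i/0/0, where the offset simply shifts the validator index i (an assumption about the derivation layout, shown here for illustration only), a hypothetical helper looks like:

```python
def voting_key_path(index, key_derivation_path_offset):
    # EIP-2334 signing-key path, with the validator index shifted by the
    # supplied offset (hypothetical helper, not part of Lighthouse).
    i = index + key_derivation_path_offset
    return f"m/12381/3600/{i}/0/0"

# Path of the first validator when key_derivation_path_offset = 42.
print(voting_key_path(0, 42))
```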

HTTP Specification

  • Path: /lighthouse/validators/mnemonic
  • Method: POST
  • Required Headers: Authorization
  • Typical Responses: 200

Example Request Body

{
    "mnemonic": "theme onion deal plastic claim silver fancy youth lock ordinary hotel elegant balance ridge web skill burger survey demand distance legal fish salad cloth",
    "key_derivation_path_offset": 0,
    "validators": [
        {
            "enable": true,
            "description": "validator_one",
            "deposit_gwei": "32000000000"
        }
    ]
}

Example Response Body

{
    "data": [
        {
            "enabled": true,
            "description": "validator_one",
            "voting_pubkey": "0xa062f95fee747144d5e511940624bc6546509eeaeae9383257a9c43e7ddc58c17c2bab4ae62053122184c381b90db380",
            "eth1_deposit_tx_data": "0x22895118000000000000000000000000000000000000000000000000000000000000008000000000000000000000000000000000000000000000000000000000000000e00000000000000000000000000000000000000000000000000000000000000120a57324d95ae9c7abfb5cc9bd4db253ed0605dc8a19f84810bcf3f3874d0e703a0000000000000000000000000000000000000000000000000000000000000030a062f95fee747144d5e511940624bc6546509eeaeae9383257a9c43e7ddc58c17c2bab4ae62053122184c381b90db3800000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000200046e4199f18102b5d4e8842d0eeafaa1268ee2c21340c63f9c2cd5b03ff19320000000000000000000000000000000000000000000000000000000000000060b2a897b4ba4f3910e9090abc4c22f81f13e8923ea61c0043506950b6ae174aa643540554037b465670d28fa7b7d716a301e9b172297122acc56be1131621c072f7c0a73ea7b8c5a90ecd5da06d79d90afaea17cdeeef8ed323912c70ad62c04b",
            "deposit_gwei": "32000000000"
        }
    ]
}

Validator Client API: Authorization Header

Overview

The validator client HTTP server requires that all requests have the following HTTP header:

  • Name: Authorization
  • Value: Basic <api-token>

Where <api-token> is a string that can be obtained from the validator client host. Here is an example Authorization header:

Authorization: Basic api-token-0x03eace4c98e8f77477bb99efb74f9af10d800bd3318f92c33b719a4644254d4123

Obtaining the API token

The API token can be obtained via two methods:

Method 1: Reading from a file

The API token is stored as a file in the validators directory. For most users this is ~/.lighthouse/{testnet}/validators/api-token.txt. Here's an example using the cat command to print the token to the terminal, but any text editor will suffice:

$ cat api-token.txt
api-token-0x03eace4c98e8f77477bb99efb74f9af10d800bd3318f92c33b719a4644254d4123

Method 2: Reading from logs

When starting the validator client it will output a log message containing an api-token field:

Sep 28 19:17:52.615 INFO HTTP API started                        api_token: api-token-0x03eace4c98e8f77477bb99efb74f9af10d800bd3318f92c33b719a4644254d4123, listen_address: 127.0.0.1:5062

Example

Here is an example curl command using the API token in the Authorization header:

curl localhost:5062/lighthouse/version -H "Authorization: Basic api-token-0x03eace4c98e8f77477bb99efb74f9af10d800bd3318f92c33b719a4644254d4123"

The server should respond with its version:

{"data":{"version":"Lighthouse/v0.2.11-fc0654fbe+/x86_64-linux"}}

Validator Client API: Signature Header

Overview

The validator client HTTP server adds the following header to all responses:

  • Name: Signature
  • Value: a secp256k1 signature across the SHA256 of the response body.

Example Signature header:

Signature: 0x304402205b114366444112580bf455d919401e9c869f5af067cd496016ab70d428b5a99d0220067aede1eb5819eecfd5dd7a2b57c5ac2b98f25a7be214b05684b04523aef873
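The secp256k1 public key needed to verify this signature is embedded in the API token itself (the hex string after the api-token- prefix). A small Python sketch of extracting it:

```python
token = "api-token-0x03eace4c98e8f77477bb99efb74f9af10d800bd3318f92c33b719a4644254d4123"

# Strip the "api-token-" prefix and the "0x" to recover the raw key bytes.
pubkey_hex = token.removeprefix("api-token-").removeprefix("0x")
pubkey = bytes.fromhex(pubkey_hex)

print(len(pubkey))         # 33 bytes: a compressed secp256k1 point
print(f"{pubkey[0]:#04x}") # a 0x02/0x03 leading byte indicates compression
```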

Verifying the Signature

Below is a browser-ready example of signature verification.

HTML

<script src="https://rawgit.com/emn178/js-sha256/master/src/sha256.js" type="text/javascript"></script>
<script src="https://rawgit.com/indutny/elliptic/master/dist/elliptic.min.js" type="text/javascript"></script>

Javascript

// Helper function to turn a hex-string into bytes.
function hexStringToByte(str) {
  if (!str) {
    return new Uint8Array();
  }

  var a = [];
  for (var i = 0, len = str.length; i < len; i+=2) {
    a.push(parseInt(str.substr(i,2),16));
  }

  return new Uint8Array(a);
}

// This example uses the secp256k1 curve from the "elliptic" library:
//
// https://github.com/indutny/elliptic
var ec = new elliptic.ec('secp256k1');

// The public key is contained in the API token:
//
// Authorization: Basic api-token-0x03eace4c98e8f77477bb99efb74f9af10d800bd3318f92c33b719a4644254d4123
var pk_bytes = hexStringToByte('03eace4c98e8f77477bb99efb74f9af10d800bd3318f92c33b719a4644254d4123');

// The signature is in the `Signature` header of the response:
//
// Signature: 0x304402205b114366444112580bf455d919401e9c869f5af067cd496016ab70d428b5a99d0220067aede1eb5819eecfd5dd7a2b57c5ac2b98f25a7be214b05684b04523aef873
var sig_bytes = hexStringToByte('304402205b114366444112580bf455d919401e9c869f5af067cd496016ab70d428b5a99d0220067aede1eb5819eecfd5dd7a2b57c5ac2b98f25a7be214b05684b04523aef873');

// The HTTP response body.
var response_body = "{\"data\":{\"version\":\"Lighthouse/v0.2.11-fc0654fbe+/x86_64-linux\"}}";

// The HTTP response body is hashed (SHA256) to determine the 32-byte message.
let hash = sha256.create();
hash.update(response_body);
let message = hash.array();

// The 32-byte message hash, the signature and the public key are verified.
if (ec.verify(message, sig_bytes, pk_bytes)) {
  console.log("The signature is valid")
} else {
  console.log("The signature is invalid")
}

This example is also available as a JSFiddle.

Example

The previous Javascript example was written using the output from the following curl command:

curl -v localhost:5062/lighthouse/version -H "Authorization: Basic api-token-0x03eace4c98e8f77477bb99efb74f9af10d800bd3318f92c33b719a4644254d4123"
*   Trying ::1:5062...
* connect to ::1 port 5062 failed: Connection refused
*   Trying 127.0.0.1:5062...
* Connected to localhost (127.0.0.1) port 5062 (#0)
> GET /lighthouse/version HTTP/1.1
> Host: localhost:5062
> User-Agent: curl/7.72.0
> Accept: */*
> Authorization: Basic api-token-0x03eace4c98e8f77477bb99efb74f9af10d800bd3318f92c33b719a4644254d4123
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< signature: 0x304402205b114366444112580bf455d919401e9c869f5af067cd496016ab70d428b5a99d0220067aede1eb5819eecfd5dd7a2b57c5ac2b98f25a7be214b05684b04523aef873
< server: Lighthouse/v0.2.11-fc0654fbe+/x86_64-linux
< access-control-allow-origin:
< content-length: 65
< date: Tue, 29 Sep 2020 04:23:46 GMT
<
* Connection #0 to host localhost left intact
{"data":{"version":"Lighthouse/v0.2.11-fc0654fbe+/x86_64-linux"}}

Prometheus Metrics

Lighthouse provides an extensive suite of metrics and monitoring in the Prometheus export format via an HTTP server built into Lighthouse.

These metrics are generally consumed by a Prometheus server and displayed via a Grafana dashboard. These components are available in a docker-compose format at sigp/lighthouse-metrics.

Beacon Node Metrics

By default, these metrics are disabled but can be enabled with the --metrics flag. Use the --metrics-address, --metrics-port and --metrics-allow-origin flags to customize the metrics server.

Example

Start a beacon node with the metrics server enabled:

lighthouse bn --metrics

Check to ensure that the metrics are available on the default port:

curl localhost:5054/metrics

Validator Client Metrics

The validator client does not yet expose metrics, however this functionality is expected to be implemented in late-September 2020.

Advanced Usage

Want to get into the nitty-gritty of Lighthouse configuration? Looking for something not covered elsewhere?

This section provides detailed information about configuring Lighthouse for specific use cases, and tips about how things work under the hood.

Database Configuration

Lighthouse uses an efficient "split" database schema, whereby finalized states are stored separately from recent, unfinalized states. We refer to the portion of the database storing finalized states as the freezer or cold DB, and the portion storing recent states as the hot DB.

In both the hot and cold DBs, full BeaconState data structures are only stored periodically, and intermediate states are reconstructed by quickly replaying blocks on top of the nearest state. For example, to fetch a state at slot 7 the database might fetch a full state from slot 0, and replay blocks from slots 1-7 while omitting redundant signature checks and Merkle root calculations. The full states upon which blocks are replayed are referred to as restore points in the case of the freezer DB, and epoch boundary states in the case of the hot DB.

The frequency at which the hot database stores full BeaconStates is fixed to one-state-per-epoch in order to keep loads of recent states performant. For the freezer DB, the frequency is configurable via the --slots-per-restore-point CLI flag, which is the topic of the next section.

Freezer DB Space-time Trade-offs

Frequent restore points use more disk space but accelerate the loading of historical states. Conversely, infrequent restore points use much less space, but cause the loading of historical states to slow down dramatically. A lower slots per restore point value (SPRP) corresponds to more frequent restore points, while a higher SPRP corresponds to less frequent. The table below shows some example values.

  • Block explorer/analysis: SPRP 32, 411 GB yearly disk usage, 96 ms to load a historical state
  • Default: SPRP 2048, 6.4 GB yearly disk usage, 6 s to load a historical state
  • Validator only: SPRP 8192, 1.6 GB yearly disk usage, 25 s to load a historical state

As you can see, it's a high-stakes trade-off! The relationships to disk usage and historical state load time are both linear – doubling SPRP halves disk usage and doubles load time. The minimum SPRP is 32, and the maximum is 8192.

The values shown in the table are approximate, calculated using a simple heuristic: each BeaconState consumes around 5MB of disk space, and each block replayed takes around 3ms. The Yearly Disk Usage column shows the approx size of the freezer DB alone (hot DB not included), and the Load Historical State time is the worst-case load time for a state in the last slot of an epoch.
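The heuristic above can be reproduced in a few lines. These are approximate figures only; the 5 MB and 3 ms constants come from the paragraph above:

```python
SECONDS_PER_SLOT = 12
SLOTS_PER_YEAR = 365.25 * 24 * 3600 / SECONDS_PER_SLOT
STATE_SIZE_MB = 5     # approximate size of one BeaconState on disk
BLOCK_REPLAY_MS = 3   # approximate time to replay one block

def yearly_disk_usage_gb(sprp):
    # One full state is stored in the freezer DB every `sprp` slots.
    return SLOTS_PER_YEAR / sprp * STATE_SIZE_MB / 1000

def worst_case_load_s(sprp):
    # Worst case: replay almost a full restore-point interval of blocks.
    return (sprp - 1) * BLOCK_REPLAY_MS / 1000

for sprp in (32, 2048, 8192):
    print(sprp,
          round(yearly_disk_usage_gb(sprp), 1),
          round(worst_case_load_s(sprp), 1))
```

The output closely matches the table above, confirming the linear relationship between SPRP, disk usage and load time.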

To configure your Lighthouse node's database with a non-default SPRP, run your Beacon Node with the --slots-per-restore-point flag:

lighthouse beacon_node --slots-per-restore-point 8192

Glossary

  • Freezer DB: part of the database storing finalized states. States are stored in a sparser format, and usually less frequently than in the hot DB.
  • Cold DB: see Freezer DB.
  • Hot DB: part of the database storing recent states, all blocks, and other runtime data. Full states are stored every epoch.
  • Restore Point: a full BeaconState stored periodically in the freezer DB.
  • Slots Per Restore Point (SPRP): the number of slots between restore points in the freezer DB.
  • Split Slot: the slot at which states are divided between the hot and the cold DBs. All states from slots less than the split slot are in the freezer, while all states with slots greater than or equal to the split slot are in the hot DB.

Local Testnets

During development and testing it can be useful to start a small, local testnet.

The scripts/local_testnet/ directory contains several scripts and a README that should make this process easy.

Advanced Networking

Lighthouse's networking stack has a number of configurable parameters that can be adjusted to handle a variety of network situations. This section outlines some of these parameters, their consequences at the networking level, and their intended use.

Target Peers

The beacon node has a --target-peers CLI parameter, which instructs the beacon node how many peers it should try to find and maintain. Lighthouse allows an additional 10% of this value for other nodes to connect to it. Every 30 seconds, the excess peers are pruned: Lighthouse removes the worst-performing peers and keeps the best-performing ones.

It may be counter-intuitive, but a very large peer count will likely degrade the performance of a beacon node, both in normal operation and during sync.

Having a large peer count means that your node must act as an honest RPC server to all its connected peers. If many of them are syncing, they will often request large numbers of blocks from your node, forcing it to perform a lot of work reading and responding to these peers. If your node is overloaded with peers and cannot respond in time, other Lighthouse peers will consider it non-performant and disfavour it in their peer stores. Your node will also have to handle and manage the gossip and extra bandwidth that comes with these extra peers. A non-responsive node (due to an overload of connected peers) degrades the network as a whole.

It is often believed that a higher peer count will improve sync times. Beyond a handful of peers, this is not true. On all networks tested so far, the bottleneck for syncing is not the network bandwidth required to download blocks, but rather the CPU load of processing them. Most of the time the network is idle, waiting for blocks to be processed. A very large peer count will not speed up sync.

For these reasons, we recommend that users do not modify the --target-peers count drastically and instead use the recommended default.

NAT Traversal (Port Forwarding)

Lighthouse, by default, uses port 9000 for both TCP and UDP. Lighthouse will still function if it is behind a NAT without any port mappings, but we recommend using some mechanism to ensure that your Lighthouse node is publicly accessible. This will typically improve your peer count, allow the scoring system to find the best/most favourable peers for your node and improve the eth2 network overall.

Lighthouse currently supports UPnP. If UPnP is enabled on your router, Lighthouse will automatically establish the port mappings for you (the beacon node will inform you of established routes in this case). If UPnP is not enabled, we recommend you manually set up port mappings to both of Lighthouse's TCP and UDP ports (9000 by default).

ENR Configuration

Lighthouse has a number of CLI parameters for constructing and modifying the local Ethereum Node Record (ENR). Examples are --enr-address, --enr-udp-port, --enr-tcp-port and --disable-enr-auto-update. These settings allow you to construct your initial ENR. Their primary intention is for setting up boot-like nodes that have a contactable ENR on boot. During normal operation of a Lighthouse node, none of these flags need to be set. Setting these flags incorrectly can lead to your node being incorrectly added to the global DHT, which degrades the discovery process for all Eth2 peers.

The ENR of a Lighthouse node is initially set to be non-contactable. The in-built discovery mechanism can determine whether your node is publicly accessible, and if it is, it will update your ENR to the correct public IP address and port (meaning you do not need to set it manually). Lighthouse persists its ENR, so on reboot it will re-load the settings it discovered previously.

Modifying the ENR settings can degrade the discovery of your node, making it harder for peers to find you, or potentially making it harder for other peers to find each other. We recommend leaving these settings alone unless you have a more advanced use case.

Contributing to Lighthouse

Chat Badge

Lighthouse welcomes contributions. If you are interested in contributing to the Ethereum ecosystem, and you want to learn Rust, Lighthouse is a great project to work on.

To start contributing,

  1. Read our how to contribute document.
  2. Set up a development environment.
  3. Browse through the open issues (tip: look for the good first issue tag).
  4. Comment on an issue before starting work.
  5. Share your work via a pull-request.

If you have questions, please reach out via Discord.

Ethereum 2.0

Lighthouse is an implementation of the Ethereum 2.0 specification, as defined in the ethereum/eth2.0-specs repository.

We recommend reading Danny Ryan's (incomplete) Phase 0 for Humans before diving into the canonical spec.

Rust

Lighthouse adheres to Rust code conventions as outlined in the Rust Styleguide.

Please use clippy and rustfmt to detect common mistakes and inconsistent code formatting:

$ cargo clippy --all
$ cargo fmt --all --check

Panics

Generally, panics should be avoided at all costs. Lighthouse operates in an adversarial environment (the Internet) and it's a severe vulnerability if people on the Internet can cause Lighthouse to crash via a panic.

Always prefer returning a Result or Option over causing a panic. For example, prefer array.get(1)? over array[1].

If you know there won't be a panic but can't express that to the compiler, use .expect("Helpful message") instead of .unwrap(). Always provide detailed reasoning in a nearby comment when making assumptions about panics.

TODOs

All TODO statements should be accompanied by a GitHub issue.


pub fn my_function(&mut self, _something: &[u8]) -> Result<String, Error> {
    // TODO: something_here
    // https://github.com/sigp/lighthouse/issues/XX
}

Comments

General Comments

  • Prefer line (//) comments to block comments (/* ... */).
  • Comments can appear on the line prior to the item, or on the same line after the code, separated by a space.

// Comment for this struct
struct Lighthouse {}
fn make_blockchain() {} // A comment on the same line after a space

Doc Comments

  • Use /// to generate doc comments.
  • Doc comments should come before attributes.

/// Stores the core configuration for this Lighthouse instance.
/// This struct is general, other components may implement more
/// specialized config structs.
#[derive(Clone)]
pub struct LighthouseConfig {
    pub data_dir: PathBuf,
    pub p2p_listen_port: u16,
}

Rust Resources

Rust is an extremely powerful, low-level programming language that provides the freedom and performance to create ambitious projects. The Rust Book provides insight into the Rust language and some of the coding style to follow (as well as acting as a great introduction and tutorial for the language).

Rust has a steep learning curve, but there are many resources to help. We suggest:

Development Environment

Most Lighthouse developers work on Linux or macOS, however Windows should still be suitable.

First, follow the Installation Guide to install Lighthouse. This will install Lighthouse to your PATH, which is not particularly useful for development but still a good way to ensure you have the base dependencies.

The only additional requirement for developers is ganache-cli. This is used to simulate the Eth1 chain during tests. You'll get failures during tests if you don't have ganache-cli available on your PATH.

Testing

As with most other Rust projects, Lighthouse uses cargo test for unit and integration tests. For example, to test the ssz crate run:

cd consensus/ssz
cargo test

We also wrap some of these commands and expose them via the Makefile in the project root for the benefit of CI/CD. We list some of these commands below so you can run them locally and avoid CI failures:

  • $ make cargo-fmt: (fast) runs a Rust code linter.
  • $ make test: (medium) runs unit tests across the whole project.
  • $ make test-ef: (medium) runs the Ethereum Foundation test vectors.
  • $ make test-full: (slow) runs the full test suite (including all previous commands). This is approximately everything that is required to pass CI.

The Lighthouse test suite is quite extensive; running the whole suite may take 30+ minutes.

Ethereum 2.0 Spec Tests

The ethereum/eth2.0-spec-tests repository contains a large set of tests that verify Lighthouse behaviour against the Ethereum Foundation specifications.

These tests are quite large (hundreds of MB), so they're only downloaded if you run $ make test-ef (or anything that runs it). You may want to avoid downloading them on a slow or metered Internet connection; CI will require them to pass, though.

Frequently Asked Questions

Why does it take so long for a validator to be activated?

After validators create their Eth1 deposit transaction there are two waiting periods before they can start producing blocks and attestations:

  1. Waiting for the beacon chain to recognise the Eth1 block containing the deposit (generally 4 to 7.4 hours).
  2. Waiting in the queue for validator activation (generally 6.4 minutes for every 4 validators in the queue).

Detailed answers below:

1. Waiting for the beacon chain to detect the Eth1 deposit

Since the beacon chain uses Eth1 for validator on-boarding, beacon chain validators must listen to event logs from the deposit contract. Since the latest blocks of the Eth1 chain are vulnerable to re-orgs due to minor network partitions, beacon nodes follow the Eth1 chain at a distance of 1,024 blocks (~4 hours) (see ETH1_FOLLOW_DISTANCE). This follow distance protects the beacon chain from on-boarding validators that are likely to be removed due to an Eth1 re-org.

So there is a ~4 hour delay before beacon nodes even consider an Eth1 block. Once they are considering these blocks, there's a voting period where beacon validators vote on which Eth1 block to include in the beacon chain. This period is defined as 32 epochs (~3.4 hours, see ETH1_VOTING_PERIOD). During this voting period, each beacon block producer includes an Eth1Data in their block, which counts as a vote towards what that validator considers to be the head of the Eth1 chain at the start of the voting period (with respect to ETH1_FOLLOW_DISTANCE, of course). You can see the exact voting logic here.

These two delays combined represent the time between an Eth1 deposit being included in an Eth1 data vote and that validator appearing in the beacon chain. The ETH1_FOLLOW_DISTANCE delay causes a minimum delay of ~4 hours, and ETH1_VOTING_PERIOD means that if a validator deposit happens just before the start of a new voting period, they might not notice the voting delay at all. However, if the validator deposit happens just after the start of a new voting period, the validator might have to wait ~3.4 hours for the next voting period. In times of very, very severe network issues, the network may even fail to vote in new Eth1 blocks, stopping all new validator deposits!
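The "4 to 7.4 hours" figure above can be reproduced with some rough arithmetic. This sketch assumes an average Eth1 block time of ~14 seconds (a common assumption for that era; actual block times vary) and the phase 0 constants cited in the text:

```python
# Phase 0 constants from the text above
ETH1_FOLLOW_DISTANCE = 1024              # Eth1 blocks
SLOTS_PER_ETH1_VOTING_PERIOD = 32 * 32   # 32 epochs of 32 slots each
SECONDS_PER_SLOT = 12                    # beacon chain slot time
AVG_ETH1_BLOCK_SECS = 14                 # assumed average Eth1 block time

follow = ETH1_FOLLOW_DISTANCE * AVG_ETH1_BLOCK_SECS       # follow-distance delay
voting = SLOTS_PER_ETH1_VOTING_PERIOD * SECONDS_PER_SLOT  # one full voting period

print(f"follow distance: ~{follow / 3600:.1f} h")             # ~4.0 h (minimum)
print(f"voting period:   ~{voting / 3600:.1f} h")             # ~3.4 h (worst case extra)
print(f"worst case:      ~{(follow + voting) / 3600:.1f} h")  # ~7.4 h
```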

Note: you can see the list of validators included in the beacon chain using our REST API: /beacon/validators/all

2. Waiting for a validator to be activated

If a validator has provided an invalid public key or signature, they will never be activated or even show up in /beacon/validators/all. They will simply be forgotten by the beacon chain! But, if those parameters were correct, once the Eth1 delays have elapsed and the validator appears in the beacon chain, there's another delay before the validator becomes "active" (canonical definition here) and can start producing blocks and attestations.

Firstly, the validator won't become active until their beacon chain balance is equal to or greater than MAX_EFFECTIVE_BALANCE (32 ETH on mainnet, usually 3.2 ETH on testnets). Once this balance is reached, the validator must wait until the start of the next epoch (up to 6.4 minutes) for the process_registry_updates routine to run. This routine activates validators with respect to a churn limit; it only allows the number of validators to increase (churn) by a certain amount per epoch. Up until there are about 330,000 validators, this churn limit is set to 4; it then starts to increase very slowly as the number of validators grows from there.

If a new validator isn't within the churn limit from the front of the queue, they will need to wait another epoch (6.4 minutes) for their next chance. This repeats until the queue is cleared.
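The churn behaviour described above can be sketched from the phase 0 spec's `get_validator_churn_limit` (the constants and formula below are from the spec; the helper name `churn_limit` is just for illustration):

```python
# Phase 0 spec constants
MIN_PER_EPOCH_CHURN_LIMIT = 4
CHURN_LIMIT_QUOTIENT = 65536

def churn_limit(active_validators: int) -> int:
    # The limit stays at 4 until active_validators // 65536 exceeds it,
    # i.e. until there are 5 * 65536 = 327,680 active validators.
    return max(MIN_PER_EPOCH_CHURN_LIMIT, active_validators // CHURN_LIMIT_QUOTIENT)

print(churn_limit(100_000))   # 4
print(churn_limit(327_680))   # 5 -- the first point at which the limit rises
```

This is why the text says the limit is 4 until roughly 330,000 validators: 327,680 is the first count at which the quotient term overtakes the minimum.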

Once a validator has been activated, there's no more waiting! It's time to produce blocks and attestations!

3. Do I need to set up any port mappings?

It is not strictly required to open any ports for Lighthouse to connect and participate in the network. Lighthouse should work out-of-the-box. However, if your node is not publicly accessible (you are behind a NAT or router that has not been configured to allow access to Lighthouse's ports), you will only be able to reach peers whose setup is publicly accessible.

There are a number of undesired consequences of not making your Lighthouse node publicly accessible.

Firstly, it will make it more difficult for your node to find peers, as your node will not be added to the global DHT and other peers will not be able to initiate connections with you. Secondly, the peers in your peer store are more likely to end connections with you and be less performant, as these peers will likely be overloaded with subscribing peers. This is because peers with correct port forwarding (publicly accessible) are in higher demand than regular peers, since other nodes behind NATs are also looking for them. Finally, keeping your node inaccessible degrades the overall network: it makes it more difficult for other peers to join and reduces the connectivity of the global network.

For these reasons, we recommend that you make your node publicly accessible.

Lighthouse supports UPnP. If you are behind a NAT with a router that supports UPnP you can simply ensure UPnP is enabled (Lighthouse will inform you in its initial logs if a route has been established). You can also manually set up port mappings in your router to your local Lighthouse instance. By default, Lighthouse uses port 9000 for both TCP and UDP. Opening both these ports will make your Lighthouse node maximally contactable.
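After setting up a port mapping, you may want a quick local check that something is actually listening on the default port. This is a minimal sketch using Python's standard library; it only probes TCP from the local machine, so it does not prove the port is reachable from the public Internet (an external port-checking service is needed for that):

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Attempt a TCP connection; True if something accepted it."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Probe Lighthouse's default TCP port locally
print("TCP 9000 open locally:", port_open("127.0.0.1", 9000))
```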

4. I have a low peer count and it is not increasing

If you cannot find ANY peers at all, it is likely that you have incorrect testnet configuration settings. Ensure that the network you wish to connect to is correct (the beacon node outputs the network it is connecting to in its initial boot-up log lines). Also ensure that you are not using the same datadir as a previous network, e.g. if you have been running the Medalla testnet and are now trying to join a new testnet while using the same datadir. (The datadir is also printed in the beacon node's logs on boot-up.)

If you have a low peer count that is not reaching the target you expect, try setting up the correct port forwards as described in 3. above.