Lighthouse Book
Documentation for Lighthouse users and developers.
Lighthouse is an Ethereum 2.0 client that connects to other Ethereum 2.0 clients to form a resilient and decentralized proof-of-stake blockchain.
We implement the specification as defined in the ethereum/eth2.0-specs repository.
Topics
You may read this book from start to finish, or jump to some of these topics:
- Follow the Installation Guide to install Lighthouse.
- Learn about becoming a mainnet validator.
- Get hacking with the Development Environment Guide.
- Utilize the whole stack by starting a local testnet.
- Query the RESTful HTTP API using `curl`.
Prospective contributors can read the Contributing section to understand how we develop and test Lighthouse.
About this Book
This book is open source, contribute at github.com/sigp/lighthouse/book.
The Lighthouse CI/CD system maintains a hosted version of the `master` branch at lighthouse-book.sigmaprime.io.
Become an Eth2 Mainnet Validator
Becoming an Eth2 validator is rewarding, but it's not for the faint of heart. You'll need to be familiar with the rules of staking (e.g., rewards, penalties, etc.) and also configuring and managing servers. You'll also need at least 32 ETH!
For those with an understanding of Eth2 and server maintenance, you'll find that running Lighthouse is easy. Install it, start it, monitor it and keep it updated. You shouldn't need to interact with it on a day-to-day basis.
Being educated is critical to validator success. Before submitting your mainnet deposit, we recommend:
- Thoroughly exploring the Eth2 Launchpad website.
  - Try running through the deposit process without actually submitting a deposit.
- Reading through this documentation, especially the Slashing Protection section.
- Running a testnet validator.
- Performing a web search and doing your own research.
By far, the best technical learning experience is to run a Testnet Validator. You can get hands-on experience with all the tools and it's a great way to test your staking hardware. We recommend that all mainnet validators run a testnet validator first; 32 ETH is a significant outlay and joining a testnet is a great way to "try before you buy".
Remember, if you get stuck you can always reach out on our Discord.
Please note: the Lighthouse team does not take any responsibility for losses or damages incurred through the use of Lighthouse. We have an experienced internal security team and have undergone multiple third-party security reviews, however the possibility of bugs or malicious interference remains a real and constant threat. Validators should be prepared to lose some rewards due to the actions of other actors on the Eth2 network or software bugs. See the software license for more detail on liability.
Using Lighthouse for Mainnet
When using Lighthouse, the `--network` flag selects a network. E.g.,
- `lighthouse` (no flag): Mainnet.
- `lighthouse --network mainnet`: Mainnet.
- `lighthouse --network pyrmont`: Pyrmont (testnet).
Using the correct `--network` flag is very important; using the wrong flag can result in penalties, slashings or lost deposits. As a rule of thumb, always provide a `--network` flag instead of relying on the default.
Joining a Testnet
There are six primary steps to become a testnet validator:
- Create validator keys and submit deposits.
- Start an Eth1 client.
- Install Lighthouse.
- Import the validator keys into Lighthouse.
- Start Lighthouse.
- Leave Lighthouse running.
Each of these primary steps has several intermediate steps, so we recommend setting aside one or two hours for this process.
Step 1. Create validator keys
The Ethereum Foundation provides an "Eth2 launch pad" for creating validator keypairs and submitting deposits:
Please follow the steps on the launch pad site to generate validator keys and submit deposits. Make sure you select "Lighthouse" as your client.
Move to the next step once you have completed the steps on the launch pad, including generating keys via the Python CLI and submitting gETH/ETH deposits.
Step 2. Start an Eth1 client
Since Eth2 relies upon the Eth1 chain for validator on-boarding, all Eth2 validators must have a connection to an Eth1 node.
We provide instructions for using Geth, but you could use any client that implements the JSON RPC via HTTP. A fast-synced node is sufficient.
Installing Geth
If you're using a Mac, follow the instructions listed here to install geth. Otherwise see here.
Starting Geth
Once you have geth installed, use this command to start your Eth1 node:
geth --http
Step 3. Install Lighthouse
Note: Lighthouse only supports Windows via WSL.
Follow the Lighthouse Installation Instructions to install Lighthouse from one of the available options.
Proceed to the next step once you've successfully installed Lighthouse and viewed its `--version` info.
Note: Some of the instructions vary when using Docker, ensure you follow the appropriate sections later in this guide.
Step 4. Import validator keys to Lighthouse
When Lighthouse is installed, follow the Importing from the Ethereum 2.0 Launch pad instructions so the validator client can perform your validator duties.
Proceed to the next step once you've successfully imported all validators.
Step 5. Start Lighthouse
For staking, one needs to run two Lighthouse processes:
- `lighthouse bn`: the "beacon node" which connects to the P2P network and verifies blocks.
- `lighthouse vc`: the "validator client" which manages validators, using data obtained from the beacon node via an HTTP API.
Starting these processes is different for binary and docker users:
Binary users
Those using the pre- or custom-built binaries can start the two processes with:
lighthouse --network mainnet bn --staking
lighthouse --network mainnet vc
Note: `~/.lighthouse/mainnet` is the default directory which contains the keys and databases. To specify a custom directory, see Custom Directories.
Docker users
Those using Docker images can start the processes with:
$ docker run \
--network host \
-v $HOME/.lighthouse:/root/.lighthouse sigp/lighthouse \
lighthouse --network mainnet bn --staking --http-address 0.0.0.0
$ docker run \
--network host \
-v $HOME/.lighthouse:/root/.lighthouse \
sigp/lighthouse \
lighthouse --network mainnet vc
Step 6. Leave Lighthouse running
Leave your beacon node and validator client running and you'll see logs as the beacon node stays synced with the network while the validator client produces blocks and attestations.
It will take 4-8+ hours for the beacon chain to process and activate your validator, however you'll know you're active when the validator client starts successfully publishing attestations each epoch:
Dec 03 08:49:40.053 INFO Successfully published attestation slot: 98, committee_index: 0, head_block: 0xa208…7fd5,
Although you'll produce an attestation each epoch, it's less common to produce a block. Watch for the block production logs too:
Dec 03 08:49:36.225 INFO Successfully published block slot: 98, attestations: 2, deposits: 0, service: block
If you see any `ERRO` (error) logs, please reach out on Discord or create an issue.
Happy staking!
Become a Testnet Validator
Joining an Eth2 testnet is a great way to get familiar with staking in Phase 0. All users should experiment with a testnet prior to staking mainnet ETH.
To join a testnet, you can follow the Become an Eth2 Mainnet Validator instructions but with a few differences:
- Use the appropriate Eth2 launchpad website:
- Instead of `--network mainnet`, use the appropriate network flag, e.g., `--network pyrmont` for Pyrmont.
- Use a Goerli Eth1 node instead of a mainnet one. For Geth, this means using `geth --goerli --http`.
- Notice that Lighthouse will store its files in a different directory by default, e.g., `~/.lighthouse/pyrmont` for Pyrmont.
Never use real ETH to join a testnet! All of the testnets listed here use Goerli ETH which is basically worthless. This allows experimentation without real-world costs.
📦 Installation
Lighthouse runs on Linux, macOS, and Windows (via WSL only).
There are three core methods to obtain the Lighthouse application:
- Pre-built binaries
- Docker images
- Building from source
Additionally, there are two extra guides for specific uses:
- Raspberry Pi 4 installation
- Cross-compiling
Minimum System Requirements
- Dual-core CPU, 2015 or newer
- 8 GB RAM
- 128 GB solid state storage
- 10 Mb/s download, 5 Mb/s upload broadband connection
For more information see System Requirements.
System Requirements
Lighthouse is able to run on most low to mid-range consumer hardware, but will perform best when provided with ample system resources. The following system requirements are for running a beacon node and a validator client with a modest number of validator keys (less than 100).
Minimum
- Dual-core CPU, 2015 or newer
- 8 GB RAM
- 128 GB solid state storage
- 10 Mb/s download, 5 Mb/s upload broadband connection
During smooth network conditions, Lighthouse's database will fit within 15 GB, but in case of a long period of non-finality, it is strongly recommended that at least 128 GB is available.
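Given those figures, a quick sanity check of available disk space before syncing can save trouble later. This sketch assumes the default datadir lives on the same filesystem as `$HOME` and that GNU coreutils `df` is available:

```shell
# Check that the filesystem holding ~/.lighthouse has the recommended
# 128 GB free (GNU df: -BG reports in GiB, --output=avail prints one column).
avail_gb=$(df -BG --output=avail "$HOME" | tail -n 1 | tr -dc '0-9')
if [ "$avail_gb" -ge 128 ]; then
  echo "disk ok: ${avail_gb}G available"
else
  echo "warning: only ${avail_gb}G available, 128G+ recommended"
fi
```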
Recommended
- Quad-core AMD Ryzen, Intel Broadwell, ARMv8 or newer
- 16 GB RAM
- 256 GB solid state storage
- 100 Mb/s download, 20 Mb/s upload broadband connection
Pre-built Binaries
Each Lighthouse release contains several downloadable binaries in the "Assets" section of the release. You can find the releases on GitHub.
Note: binaries are not yet provided for macOS or native Windows.
Platforms
Binaries are supplied for two platforms:
- `x86_64-unknown-linux-gnu`: AMD/Intel 64-bit processors (most desktops, laptops, servers)
- `aarch64-unknown-linux-gnu`: 64-bit ARM processors (Raspberry Pi 4)
Additionally, a `-portable` suffix indicates that the `portable` feature is used:
- Without `portable`: uses modern CPU instructions to provide the fastest signature verification times (may cause an `Illegal instruction` error on older CPUs).
- With `portable`: approx. 20% slower, but should work on all modern 64-bit processors.
Usage
Each binary is contained in a `.tar.gz` archive. For this example, let's assume the user needs a portable `x86_64` binary.
Whilst this example uses `v0.2.13`, we recommend always using the latest release.
Steps
- Go to the Releases page and select the latest release.
- Download the `lighthouse-${VERSION}-x86_64-unknown-linux-gnu-portable.tar.gz` binary.
- Extract the archive:
cd Downloads
tar -xvf lighthouse-${VERSION}-x86_64-unknown-linux-gnu-portable.tar.gz
- Test the binary with `./lighthouse --version` (it should print the version).
- (Optional) Move the `lighthouse` binary to a location in your `PATH`, so the `lighthouse` command can be called from anywhere. E.g., `cp lighthouse /usr/bin`.
Troubleshooting
If you get a SIGILL (exit code 132), then your CPU is incompatible with the optimized build of Lighthouse and you should switch to the `-portable` build. In this case, you will see a warning like this on start-up:
WARN CPU seems incompatible with optimized Lighthouse build, advice: If you get a SIGILL, please try Lighthouse portable build
On some VPS providers, the virtualization can make it appear as if CPU features are not available, even when they are. In this case you might see the warning above, but so long as the client continues to function it's nothing to worry about.
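If you're unsure why a process exits with code 132, the arithmetic can be reproduced without Lighthouse at all: a process terminated by a signal exits with 128 plus the signal number, and SIGILL is signal 4. A minimal sketch:

```shell
# Kill a throwaway shell with SIGILL; the parent observes exit code
# 128 + 4 = 132, the same code a SIGILL-crashed lighthouse would report.
sh -c 'kill -s ILL $$'
code=$?
echo "exit code: ${code}"
```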
Docker Guide
This repository has a `Dockerfile` in the root which builds an image with the `lighthouse` binary installed. A pre-built image is available on Docker Hub.
Obtaining the Docker image
There are two ways to obtain the docker image, either via Docker Hub or building the image from source. Once you have obtained the docker image via one of these methods, proceed to Using the Docker image.
Docker Hub
Lighthouse maintains the sigp/lighthouse Docker Hub repository which provides an easy way to run Lighthouse without building the image yourself.
Obtain the latest image with:
$ docker pull sigp/lighthouse
Download and test the image with:
$ docker run sigp/lighthouse lighthouse --version
If you can see the latest Lighthouse release version (see example below), then you've successfully installed Lighthouse via Docker.
Example Version Output
Lighthouse vx.x.xx-xxxxxxxxx
BLS Library: xxxx-xxxxxxx
Note: when you're running the Docker Hub image you're relying upon a pre-built binary instead of building from source.
Note: due to the Docker Hub image being compiled to work on arbitrary machines, it isn't as highly optimized as an image built from source. We're working to improve this, but for now if you want the absolute best performance, please build the image yourself.
Building the Docker Image
To build the image from source, navigate to the root of the repository and run:
$ docker build . -t lighthouse:local
The build will likely take several minutes. Once it's built, test it with:
$ docker run lighthouse:local lighthouse --help
Using the Docker image
You can run a Docker beacon node with the following command:
$ docker run -p 9000:9000 -p 127.0.0.1:5052:5052 -v $HOME/.lighthouse:/root/.lighthouse sigp/lighthouse lighthouse --network mainnet beacon --http --http-address 0.0.0.0
To join the Pyrmont testnet, use `--network pyrmont` instead.
The `-p` and `-v` flags are described below.
Volumes
Lighthouse uses the `/root/.lighthouse` directory inside the Docker image to store the configuration, database and validator keys. Users will generally want to create a bind-mount volume to ensure this directory persists between `docker run` commands.
The following example runs a beacon node with the data directory mapped to the user's home directory:
$ docker run -v $HOME/.lighthouse:/root/.lighthouse sigp/lighthouse lighthouse beacon
Ports
In order to be a good peer and serve other peers you should expose port `9000`. Use the `-p` flag to do this:
$ docker run -p 9000:9000 sigp/lighthouse lighthouse beacon
If you use the `--http` flag you may also want to expose the HTTP port with `-p 127.0.0.1:5052:5052`:
$ docker run -p 9000:9000 -p 127.0.0.1:5052:5052 sigp/lighthouse lighthouse beacon --http --http-address 0.0.0.0
Installation: Build from Source
Lighthouse builds on Linux, macOS, and Windows (via WSL only).
Compilation should be easy. In fact, if you already have Rust installed all you need is:
git clone https://github.com/sigp/lighthouse.git
cd lighthouse
git checkout stable
make
If this doesn't work or is not clear enough, see the Detailed Instructions below. If you have further issues, see Troubleshooting. If you'd prefer to use Docker, see the Docker Guide.
Updating lighthouse
You can update Lighthouse to a specific version by running the commands below. The `lighthouse` directory will be the location you cloned Lighthouse to during the installation process. `${VERSION}` will be the version you wish to build, in the format `vX.X.X`.
cd lighthouse
git fetch
git checkout ${VERSION}
make
Detailed Instructions
- Install Rust and Cargo with rustup.
  - Use the `stable` toolchain (it's the default).
  - Check the Troubleshooting section for additional dependencies (e.g., `cmake`).
- Clone the Lighthouse repository.
  - Run `git clone https://github.com/sigp/lighthouse.git`.
  - Change into the newly created directory with `cd lighthouse`.
- Build Lighthouse with `make`.
- Installation was successful if `lighthouse --help` displays the command-line documentation.
First-time compilation may take several minutes. If you experience any failures, please reach out on Discord or create an issue.
Windows Support
Compiling or running Lighthouse natively on Windows is not currently supported. However, Lighthouse can run successfully under the Windows Subsystem for Linux (WSL). If using Ubuntu under WSL, you should install the Ubuntu dependencies listed in the Dependencies (Ubuntu) section.
Troubleshooting
Dependencies
Ubuntu
Several dependencies may be required to compile Lighthouse. The following packages may be required in addition to a base Ubuntu Server installation:
sudo apt install -y git gcc g++ make cmake pkg-config
macOS
You will need `cmake`. You can install it via Homebrew:
brew install cmake
Command is not found
Lighthouse will be installed to `CARGO_HOME` or `$HOME/.cargo`. This directory needs to be on your `PATH` before you can run `lighthouse`.
See "Configuring the `PATH` environment variable" (rust-lang.org) for more information.
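As a sketch of what that configuration looks like (rustup places binaries in `$HOME/.cargo/bin`), you can prepend that directory to `PATH` for the current session and verify it took effect; append the same `export` line to your shell's rc file to make it permanent:

```shell
# Put cargo-installed binaries (including lighthouse) on PATH for this session.
export PATH="$HOME/.cargo/bin:$PATH"

# Verify the directory is now searched.
case ":$PATH:" in
  *":$HOME/.cargo/bin:"*) echo "cargo bin dir is on PATH" ;;
  *) echo "cargo bin dir is NOT on PATH" ;;
esac
```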
Compilation error
Make sure you are running the latest version of Rust. If you have installed Rust using rustup, simply type `rustup update`.
If compilation fails with `(signal: 9, SIGKILL: kill)`, this could mean your machine ran out of memory during compilation. If you are on a resource-constrained device you can look into cross-compilation.
If compilation fails with `error: linking with cc failed: exit code: 1`, try running `cargo clean`.
Raspberry Pi 4 Installation
Tested on:
- Raspberry Pi 4 Model B (4GB), Ubuntu 20.04 LTS (GNU/Linux 5.4.0-1011-raspi aarch64)
Note: Lighthouse supports cross-compiling to target a Raspberry Pi (`aarch64`). Compiling on a faster machine (i.e., an `x86_64` desktop) may be convenient.
1. Install Ubuntu
Follow the Ubuntu Raspberry Pi installation instructions.
A 64-bit version is required and the latest version is recommended (Ubuntu 20.04 LTS was the latest at the time of writing).
A graphical environment is not required in order to use Lighthouse. Only the terminal and an Internet connection are necessary.
2. Install Packages
Install the Ubuntu Dependencies (i.e., run the `sudo apt install ...` command at that link).
Tips:
- If there are difficulties, try updating the package manager with `sudo apt update`.
3. Install Rust
Install Rust as per rustup (i.e., run the `curl ...` command).
Tips:
- When prompted, enter `1` for the default installation.
- Try running `cargo version` after Rust installation completes. If it cannot be found, run `source $HOME/.cargo/env`.
- It's generally advised to append `source $HOME/.cargo/env` to `~/.bashrc`.
4. Install Lighthouse
git clone https://github.com/sigp/lighthouse.git
cd lighthouse
git checkout stable
make
Compiling Lighthouse can take up to an hour. The safety guarantees provided by the Rust language unfortunately result in a lengthy compilation time on a low-spec CPU like a Raspberry Pi. For faster compilation on low-spec hardware, try cross-compiling on a more powerful computer (e.g., compile for RasPi from your desktop computer).
Once installation has finished, confirm Lighthouse is installed by viewing the usage instructions with `lighthouse --help`.
Cross-compiling
Lighthouse supports cross-compiling, allowing users to run a binary on one platform (e.g., `aarch64`) that was compiled on another platform (e.g., `x86_64`).
Instructions
Cross-compiling requires Docker, `rustembedded/cross` and for the current user to be in the `docker` group.
The binaries will be created in the `target/` directory of the Lighthouse project.
Targets
The `Makefile` in the project contains four targets for cross-compiling:
- `build-x86_64`: builds an optimized version for x86_64 processors (suitable for most users). Supports Intel Broadwell (2014) and newer, and AMD Ryzen (2017) and newer.
- `build-x86_64-portable`: builds a version for x86_64 processors which avoids using some modern CPU instructions that are incompatible with older CPUs. Suitable for pre-Broadwell/Ryzen CPUs.
- `build-aarch64`: builds an optimized version for 64-bit ARM processors (suitable for Raspberry Pi 4).
- `build-aarch64-portable`: builds a version for 64-bit ARM processors which avoids using some modern CPU instructions. In practice, very few ARM processors lack the instructions necessary to run the faster non-portable build.
Example
cd lighthouse
make build-aarch64
The `lighthouse` binary will be compiled inside a Docker container and placed in `lighthouse/target/aarch64-unknown-linux-gnu/release`.
Key Management
Note: we recommend using the Eth2 launchpad to create validators.
Lighthouse uses a hierarchical key management system for producing validator keys. It is hierarchical because each validator key can be derived from a master key, making the validator keys children of the master key. This scheme means that a single 24-word mnemonic can be used to back up all of your validator keys without providing any observable link between them (i.e., it is privacy-retaining). Hierarchical key derivation schemes are commonplace in cryptocurrencies; they are already used by most hardware and software wallets to secure BTC, ETH and many other coins.
Key Concepts
We define some terms in the context of validator key management:
- Mnemonic: a string of 24 words that is designed to be easy to write down
and remember. E.g., "radar fly lottery mirror fat icon bachelor sadness
type exhaust mule six beef arrest you spirit clog mango snap fox citizen
already bird erase".
- Defined in BIP-39
- Wallet: a wallet is a JSON file which stores an
encrypted version of a mnemonic.
- Defined in EIP-2386
- Keystore: typically created by a wallet, it contains a single encrypted BLS
keypair.
- Defined in EIP-2335.
- Voting Keypair: a BLS public and private keypair which is used for signing blocks, attestations and other messages on regular intervals, whilst staking in Phase 0.
- Withdrawal Keypair: a BLS public and private keypair which will be required after Phase 0 to manage ETH once a validator has exited.
Overview
The key management system in Lighthouse involves moving down the above list of items, starting at one easy-to-backup mnemonic and ending with multiple keypairs. Creating a single validator looks like this:
- Create a wallet and record the mnemonic:
lighthouse --network pyrmont account wallet create --name wally --password-file wally.pass
- Create the voting and withdrawal keystores for one validator:
lighthouse --network pyrmont account validator create --wallet-name wally --wallet-password wally.pass --count 1
In step (1), we created a wallet in `~/.lighthouse/{network}/wallets` with the name `wally`. We encrypted this using a pre-defined password in the `wally.pass` file. Then, in step (2), we created one new validator in the `~/.lighthouse/{network}/validators` directory using `wally` (unlocking it with `wally.pass`) and storing the passwords to the validator's voting key in `~/.lighthouse/{network}/secrets`.
Thanks to the hierarchical key derivation scheme, we can delete all of the aforementioned directories and then regenerate them as long as we remembered the 24-word mnemonic (we don't recommend doing this, though).
Creating another validator is easy; it's just a matter of repeating step (2). The wallet keeps track of how many validators it has generated and ensures that a new validator is generated each time.
Detail
Directory Structure
There are three important directories in Lighthouse validator key management:
- `wallets/`: contains encrypted wallets which are used for hierarchical key derivation.
  - Defaults to `~/.lighthouse/{network}/wallets`
- `validators/`: contains a directory for each validator containing encrypted keystores and other validator-specific data.
  - Defaults to `~/.lighthouse/{network}/validators`
- `secrets/`: since the validator signing keys are "hot", the validator process needs access to the passwords to decrypt the keystores in the validators dir. These passwords are stored here.
  - Defaults to `~/.lighthouse/{network}/secrets`, where `network` is the name of the network passed in the `--network` parameter (default is `mainnet`).
When the validator client boots, it searches the `validators/` directory for sub-directories containing voting keystores. When it discovers a keystore, it searches the `secrets/` dir for a file with the same name as the 0x-prefixed hex representation of the keystore public key. If it finds this file, it attempts to decrypt the keystore using the contents of this file as the password. If it fails, it logs an error and moves on to the next keystore.
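The naming convention can be illustrated with a toy simulation (the pubkey and password here are placeholders, not real values): the password file in `secrets/` shares its name with the validator's 0x-prefixed pubkey directory under `validators/`.

```shell
# Toy layout mirroring the lookup described above.
base=$(mktemp -d)
pubkey="0xabad1dea"   # hypothetical placeholder, not a real BLS pubkey
mkdir -p "$base/validators/$pubkey" "$base/secrets"
echo '{"version": 4}' > "$base/validators/$pubkey/voting-keystore.json"
echo "keystore-password" > "$base/secrets/$pubkey"

# The validator client would decrypt the keystore with this password:
cat "$base/secrets/$pubkey"
```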
The `validators/` and `secrets/` directories are kept separate to allow for ease of backup; you can safely back up `validators/` without worrying about leaking private key data.
Withdrawal Keypairs
In Eth2 Phase 0, withdrawal keypairs do not serve any immediate purpose. However, they become very important after Phase 0: they will provide the ultimate control of the ETH of withdrawn validators.
This presents an interesting key management scenario: withdrawal keys are very important, but not right now. Considering this, Lighthouse has adopted a strategy where we do not save withdrawal keypairs to disk by default (it is opt-in). Instead, we assert that since the withdrawal keys can be regenerated from a mnemonic, having them lying around on the file-system only presents risk and complexity.
At the time of writing, we do not expose the commands to regenerate keys from mnemonics. However, key regeneration is tested on the public Lighthouse repository and will be exposed prior to mainnet launch.
So, in summary, withdrawal keypairs can be trivially regenerated from the mnemonic via EIP-2333 so they are not saved to disk like the voting keypairs.
Create a wallet
Note: we recommend using the Eth2 launchpad to create validators.
A wallet allows for generating practically unlimited validators from an easy-to-remember 24-word string (a mnemonic). As long as that mnemonic is backed up, all validator keys can be trivially re-generated.
The 24-word string is randomly generated during wallet creation and printed out to the terminal. It's important to make one or more backups of the mnemonic to ensure your ETH is not lost in the case of data loss. It is very important to keep your mnemonic private as it represents the ultimate control of your ETH.
Whilst the wallet stores the mnemonic, it does not store it in plain-text: the mnemonic is encrypted with a password. It is the responsibility of the user to define a strong password. The password is only required for interacting with the wallet, it is not required for recovering keys from a mnemonic.
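If you'd rather supply your own password than let Lighthouse generate one, a minimal sketch for producing a strong random password file is below. The `.pass` suffix matches what the `--password-file` flag requires when the file doesn't already exist; the filename `wally.pass` follows the example used throughout this section.

```shell
# Generate a 32-byte random password, base64-encoded, stripped of
# shell-awkward characters, and keep the file private.
umask 077
head -c 32 /dev/urandom | base64 | tr -d '=+/' > wally.pass
```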
Usage
To create a wallet, use the `lighthouse account wallet` command:
lighthouse account wallet create --help
Creates a new HD (hierarchical-deterministic) EIP-2386 wallet.
USAGE:
lighthouse account_manager wallet create [OPTIONS] --name <WALLET_NAME> --password-file <WALLET_PASSWORD_PATH>
FLAGS:
-h, --help Prints help information
-V, --version Prints version information
OPTIONS:
-d, --datadir <DIR> Data directory for lighthouse keys and databases.
--mnemonic-output-path <MNEMONIC_PATH>
If present, the mnemonic will be saved to this file. DO NOT SHARE THE MNEMONIC.
--name <WALLET_NAME>
The wallet will be created with this name. It is not allowed to create two wallets with the same name for
the same --base-dir.
--password-file <WALLET_PASSWORD_PATH>
A path to a file containing the password which will unlock the wallet. If the file does not exist, a random
password will be generated and saved at that path. To avoid confusion, if the file does not already exist it
must include a '.pass' suffix.
-t, --testnet-dir <DIR>
Path to directory containing eth2_testnet specs. Defaults to a hard-coded Lighthouse testnet. Only effective
if there is no existing database.
--type <WALLET_TYPE>
The type of wallet to create. Only HD (hierarchical-deterministic) wallets are supported presently.
[default: hd] [possible values: hd]
Example
Creates a new wallet named `wally` and saves it in `~/.lighthouse/pyrmont/wallets` with a randomly generated password saved to `./wally.pass`:
lighthouse --network pyrmont account wallet create --name wally --password-file wally.pass
Notes:
- The password is not `wally.pass`, it is the contents of the `wally.pass` file.
- If `wally.pass` already exists, the wallet password will be set to the contents of that file.
Create a validator
Note: we recommend using the Eth2 launchpad to create validators.
Validators are fundamentally represented by a BLS keypair. In Lighthouse, we use a wallet to generate these keypairs. Once a wallet exists, the `lighthouse account validator create` command is used to generate the BLS keypair and all necessary information to submit a validator deposit and have that validator operate in the `lighthouse validator_client`.
Usage
To create a validator from a wallet, use the `lighthouse account validator create` command:
lighthouse account validator create --help
Creates new validators from an existing EIP-2386 wallet using the EIP-2333 HD key derivation scheme.
USAGE:
lighthouse account_manager validator create [FLAGS] [OPTIONS]
FLAGS:
-h, --help Prints help information
--stdin-inputs If present, read all user inputs from stdin instead of tty.
--store-withdrawal-keystore If present, the withdrawal keystore will be stored alongside the voting keypair.
It is generally recommended to *not* store the withdrawal key and instead
generate them from the wallet seed when required.
-V, --version Prints version information
OPTIONS:
--at-most <AT_MOST_VALIDATORS>
Observe the number of validators in --validator-dir, only creating enough to reach the given count. Never
deletes an existing validator.
--count <VALIDATOR_COUNT>
The number of validators to create, regardless of how many already exist
-d, --datadir <DIR>
Used to specify a custom root data directory for lighthouse keys and databases. Defaults to
$HOME/.lighthouse/{network} where network is the value of the `network` flag Note: Users should specify
separate custom datadirs for different networks.
--debug-level <LEVEL>
The verbosity level for emitting logs. [default: info] [possible values: info, debug, trace, warn, error,
crit]
--deposit-gwei <DEPOSIT_GWEI>
The GWEI value of the deposit amount. Defaults to the minimum amount required for an active validator
(MAX_EFFECTIVE_BALANCE)
--network <network>
Name of the Eth2 chain Lighthouse will sync and follow. [default: mainnet] [possible values: medalla,
altona, spadina, pyrmont, mainnet, toledo]
--secrets-dir <SECRETS_DIR>
The path where the validator keystore passwords will be stored. Defaults to ~/.lighthouse/{network}/secrets
-s, --spec <DEPRECATED>
This flag is deprecated, it will be disallowed in a future release. This value is now derived from the
--network or --testnet-dir flags.
-t, --testnet-dir <DIR>
Path to directory containing eth2_testnet specs. Defaults to a hard-coded Lighthouse testnet. Only effective
if there is no existing database.
--wallet-name <WALLET_NAME> Use the wallet identified by this name
--wallet-password <WALLET_PASSWORD_PATH>
A path to a file containing the password which will unlock the wallet.
--wallets-dir <wallets-dir>
A path containing Eth2 EIP-2386 wallets. Defaults to ~/.lighthouse/{network}/wallets
Example
The example assumes that the `wally` wallet was generated from the wallet example.
lighthouse --network pyrmont account validator create --wallet-name wally --wallet-password wally.pass --count 1
This command will:
- Derive a single new BLS keypair from wallet `wally` in `~/.lighthouse/{network}/wallets`, updating it so that it generates a new key next time.
- Create a new directory in `~/.lighthouse/{network}/validators` containing:
  - An encrypted keystore containing the validator's voting keypair.
  - An `eth1_deposit_data.rlp` assuming the default deposit amount (32 ETH for most testnets and mainnet) which can be submitted to the deposit contract for the Pyrmont testnet. Other testnets can be set via the `--network` CLI param.
- Store a password to the validator's voting keypair in `~/.lighthouse/{network}/secrets`.
Key recovery
Generally, validator keystore files are generated alongside a mnemonic. If the keystore and/or the keystore password are lost this mnemonic can regenerate a new, equivalent keystore with a new password.
There are two ways to recover keys using the `lighthouse` CLI:
- `lighthouse account validator recover`: recover one or more EIP-2335 keystores from a mnemonic. These keys can be used directly in a validator client.
- `lighthouse account wallet recover`: recover an EIP-2386 wallet from a mnemonic.
⚠️ Warning
Recovering validator keys from a mnemonic should only be used as a last resort. Key recovery entails significant risks:
- Exposing your mnemonic to a computer at any time puts it at risk of being compromised. Your mnemonic is not encrypted and is a target for theft.
- It's entirely possible to regenerate a validator keypair that is already active on some other validator client. Running the same keypair on two different validator clients is very likely to result in slashing.
Recover EIP-2335 validator keystores
A single mnemonic can generate a practically unlimited number of validator keystores using an index. Generally, the first time you generate a keystore you'll use index 0, the next time you'll use index 1, and so on. Using the same index on the same mnemonic always results in the same validator keypair being generated (see EIP-2334 for more detail).
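The index scheme described above comes from EIP-2334, under which the voting (signing) key for index `i` sits at derivation path `m/12381/3600/i/0/0`. A minimal sketch of the index-to-path mapping (illustrative only, not Lighthouse's key-derivation code):

```python
# Sketch of the EIP-2334 index-to-path mapping used for validator voting keys.
# 12381 is the BLS12-381 purpose, 3600 is the Eth2 coin type; the trailing
# /0/0 selects the signing (voting) key for the validator at `index`.

def voting_key_path(index: int) -> str:
    """Return the EIP-2334 derivation path for a validator's voting key."""
    if index < 0:
        raise ValueError("validator index must be non-negative")
    return f"m/12381/3600/{index}/0/0"

# The same mnemonic and index always yield the same path, hence the same keypair.
print(voting_key_path(0))  # m/12381/3600/0/0/0
print(voting_key_path(1))  # m/12381/3600/1/0/0
```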
Using the `lighthouse account validator recover` command you can generate the keystores that correspond to one or more indices in the mnemonic:

- `lighthouse account validator recover`: recover only index `0`.
- `lighthouse account validator recover --count 2`: recover indices `0, 1`.
- `lighthouse account validator recover --first-index 1`: recover only index `1`.
- `lighthouse account validator recover --first-index 1 --count 2`: recover indices `1, 2`.
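The four commands above differ only in which contiguous range of indices they select. A sketch of that mapping, using hypothetical Python parameters named after the CLI flags (defaults inferred from the examples above):

```python
# Sketch: which EIP-2334 indices a recover command selects. The defaults
# mirror the behaviour shown above: --first-index defaults to 0, --count to 1.

def recovered_indices(first_index: int = 0, count: int = 1) -> list:
    """Return the validator indices recovered for the given flag values."""
    return list(range(first_index, first_index + count))

print(recovered_indices())                        # [0]
print(recovered_indices(count=2))                 # [0, 1]
print(recovered_indices(first_index=1))           # [1]
print(recovered_indices(first_index=1, count=2))  # [1, 2]
```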
For each of the indices recovered by the above commands, a directory will be created in the `--validator-dir` location (default `~/.lighthouse/{network}/validators`) which contains all the information necessary to run a validator using the `lighthouse vc` command. The password to each new keystore will be placed in the `--secrets-dir` (default `~/.lighthouse/{network}/secrets`), where `network` is the name of the Eth2 network passed in the `--network` parameter (default is `mainnet`).
Recover an EIP-2386 wallet
Instead of creating EIP-2335 keystores directly, an EIP-2386 wallet can be
generated from the mnemonic. This wallet can then be used to generate validator
keystores, if desired. For example, the following command will create an
encrypted wallet named wally-recovered
from a mnemonic:
lighthouse account wallet recover --name wally-recovered
⚠️ Warning: the wallet will be created with a `nextaccount` value of `0`. This means that if you have already generated `n` validators, then the next `n` validators generated by this wallet will be duplicates. As mentioned previously, running duplicate validators is likely to result in slashing.
Validator Management
The `lighthouse vc` command starts a validator client instance which connects to a beacon node and performs the duties of a staked validator.
This document provides information on how the validator client discovers the validators it will act for and how it should obtain their cryptographic signatures.
Users that create validators using the lighthouse account
tool in the
standard directories and do not start their lighthouse vc
with the
--disable-auto-discover
flag should not need to understand the contents of
this document. However, users with more complex needs may find this document
useful.
Introducing the `validator_definitions.yml` file
The `validator_definitions.yml` file is located in the `validator-dir`, which defaults to `~/.lighthouse/{network}/validators`. It is a YAML encoded file defining exactly which validators the validator client will (and won't) act for.
Example
Here's an example file with two validators:
---
- enabled: true
voting_public_key: "0x87a580d31d7bc69069b55f5a01995a610dd391a26dc9e36e81057a17211983a79266800ab8531f21f1083d7d84085007"
type: local_keystore
voting_keystore_path: /home/paul/.lighthouse/validators/0x87a580d31d7bc69069b55f5a01995a610dd391a26dc9e36e81057a17211983a79266800ab8531f21f1083d7d84085007/voting-keystore.json
voting_keystore_password_path: /home/paul/.lighthouse/secrets/0x87a580d31d7bc69069b55f5a01995a610dd391a26dc9e36e81057a17211983a79266800ab8531f21f1083d7d84085007
- enabled: false
voting_public_key: "0xa5566f9ec3c6e1fdf362634ebec9ef7aceb0e460e5079714808388e5d48f4ae1e12897fed1bea951c17fa389d511e477"
type: local_keystore
voting_keystore_path: /home/paul/.lighthouse/validators/0xa5566f9ec3c6e1fdf362634ebec9ef7aceb0e460e5079714808388e5d48f4ae1e12897fed1bea951c17fa389d511e477/voting-keystore.json
voting_keystore_password: myStrongpa55word123&$
In this example we can see two validators:

- A validator identified by the `0x87a5...` public key which is enabled.
- Another validator identified by the `0xa556...` public key which is not enabled.
Fields
Each permitted field of the file is listed below for reference:

- `enabled`: A `true`/`false` indicating if the validator client should consider this validator "enabled".
- `voting_public_key`: A validator public key.
- `type`: How the validator signs messages (currently restricted to `local_keystore`).
- `voting_keystore_path`: The path to an EIP-2335 keystore.
- `voting_keystore_password_path`: The path to the password for the EIP-2335 keystore.
- `voting_keystore_password`: The password to the EIP-2335 keystore.
Note: Either `voting_keystore_password_path` or `voting_keystore_password` must be supplied. If both are supplied, `voting_keystore_password_path` is ignored.
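The precedence rule in this note can be sketched as follows (a hypothetical helper for illustration, not Lighthouse's actual implementation):

```python
# Sketch of the password-resolution rule: an inline password wins over a
# password file, and at least one of the two fields must be present.

def resolve_keystore_password(definition: dict) -> str:
    """Return the keystore password for one validator definition."""
    if "voting_keystore_password" in definition:
        # If both fields are supplied, voting_keystore_password_path is ignored.
        return definition["voting_keystore_password"]
    if "voting_keystore_password_path" in definition:
        with open(definition["voting_keystore_password_path"]) as f:
            return f.read()
    raise ValueError("either voting_keystore_password or "
                     "voting_keystore_password_path must be supplied")

print(resolve_keystore_password({"voting_keystore_password": "myStrongpa55word123&$"}))
```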
Populating the `validator_definitions.yml` file
When the validator client starts and the `validator_definitions.yml` file doesn't exist, a new file will be created. If the `--disable-auto-discover` flag is provided, the new file will be empty and the validator client will not start any validators. If the `--disable-auto-discover` flag is not provided, an automatic validator discovery routine will start (more on that later). To recap:
- `lighthouse vc`: validators are automatically discovered.
- `lighthouse vc --disable-auto-discover`: validators are not automatically discovered.
Automatic validator discovery
When the `--disable-auto-discover` flag is not provided, the validator client will search the `validator-dir` for validators and add any new validators to the `validator_definitions.yml` with `enabled: true`.
The routine for this search begins in the `validator-dir`, where it obtains a list of all files in that directory and all sub-directories (i.e., a recursive directory-tree search). For each file named `voting-keystore.json` it creates a new validator definition by the following process:
- Set `enabled` to `true`.
- Set `voting_public_key` to the `pubkey` value from the `voting-keystore.json`.
- Set `type` to `local_keystore`.
- Set `voting_keystore_path` to the full path of the discovered keystore.
- Set `voting_keystore_password_path` to be a file in the `secrets-dir` with a name identical to the `voting_public_key` value.
Discovery Example
Let's assume the following directory structure:
~/.lighthouse/{network}/validators
├── john
│ └── voting-keystore.json
├── sally
│ ├── one
│ │ └── voting-keystore.json
│ ├── three
│ │ └── my-voting-keystore.json
│ └── two
│ └── voting-keystore.json
└── slashing_protection.sqlite
There is no `validator_definitions.yml` file present, so we can run `lighthouse vc` (without `--disable-auto-discover`) and it will create the following `validator_definitions.yml`:
---
- enabled: true
voting_public_key: "0xa5566f9ec3c6e1fdf362634ebec9ef7aceb0e460e5079714808388e5d48f4ae1e12897fed1bea951c17fa389d511e477"
type: local_keystore
voting_keystore_path: /home/paul/.lighthouse/validators/sally/one/voting-keystore.json
voting_keystore_password_path: /home/paul/.lighthouse/secrets/0xa5566f9ec3c6e1fdf362634ebec9ef7aceb0e460e5079714808388e5d48f4ae1e12897fed1bea951c17fa389d511e477
- enabled: true
voting_public_key: "0xaa440c566fcf34dedf233baf56cf5fb05bb420d9663b4208272545608c27c13d5b08174518c758ecd814f158f2b4a337"
type: local_keystore
voting_keystore_path: /home/paul/.lighthouse/validators/sally/two/voting-keystore.json
voting_keystore_password_path: /home/paul/.lighthouse/secrets/0xaa440c566fcf34dedf233baf56cf5fb05bb420d9663b4208272545608c27c13d5b08174518c758ecd814f158f2b4a337
- enabled: true
voting_public_key: "0x87a580d31d7bc69069b55f5a01995a610dd391a26dc9e36e81057a17211983a79266800ab8531f21f1083d7d84085007"
type: local_keystore
voting_keystore_path: /home/paul/.lighthouse/validators/john/voting-keystore.json
voting_keystore_password_path: /home/paul/.lighthouse/secrets/0x87a580d31d7bc69069b55f5a01995a610dd391a26dc9e36e81057a17211983a79266800ab8531f21f1083d7d84085007
All `voting-keystore.json` files have been detected and added to the file. Notably, the `sally/three/my-voting-keystore.json` file was not added, since its file name is not exactly `voting-keystore.json`.
In order for the validator client to decrypt the keystores, the user will need to ensure their `secrets-dir` is organised as below:
~/.lighthouse/{network}/secrets
├── 0xa5566f9ec3c6e1fdf362634ebec9ef7aceb0e460e5079714808388e5d48f4ae1e12897fed1bea951c17fa389d511e477
├── 0xaa440c566fcf34dedf233baf56cf5fb05bb420d9663b4208272545608c27c13d5b08174518c758ecd814f158f2b4a337
└── 0x87a580d31d7bc69069b55f5a01995a610dd391a26dc9e36e81057a17211983a79266800ab8531f21f1083d7d84085007
Manual configuration
The automatic validator discovery process works out-of-the-box with validators that are created using the `lighthouse account validator create` command. The details of this process are only interesting to those who are using keystores generated with another tool or who have non-standard requirements.
If you are one of these users, manually edit the validator_definitions.yml
file to suit your requirements. If the file is poorly formatted or any one of
the validators is unable to be initialized, the validator client will refuse to
start.
How the `validator_definitions.yml` file is processed
If a validator client were to start using the first example `validator_definitions.yml` file, it would print the following log, acknowledging that there are two validators, one of which is disabled:
INFO Initialized validators enabled: 1, disabled: 1
The validator client will simply ignore the disabled validator. However, for the active validator, the validator client will:
- Load an EIP-2335 keystore from the `voting_keystore_path`.
- If the `voting_keystore_password` field is present, use it as the keystore password. Otherwise, attempt to read the file at `voting_keystore_password_path` and use the contents as the keystore password.
- Use the keystore password to decrypt the keystore and obtain a BLS keypair.
- Verify that the decrypted BLS keypair matches the `voting_public_key`.
- Create a `voting-keystore.json.lock` file adjacent to the `voting_keystore_path`, indicating that the voting keystore is in-use and should not be opened by another process.
- Proceed to act for that validator, creating blocks and attestations if/when required.
If there is an error during any of these steps (e.g., a file is missing or corrupt) the validator client will log an error and continue to attempt to process other validators.
When the validator client exits (or the validator is deactivated) it will
remove the voting-keystore.json.lock
to indicate that the keystore is free for use again.
Importing from the Ethereum 2.0 Launchpad
The Eth2 Launchpad is a website from the Ethereum Foundation which guides users through using the `eth2.0-deposit-cli` command-line program to generate Eth2 validator keys.
The keys that are generated from `eth2.0-deposit-cli` can be easily loaded into a Lighthouse validator client (`lighthouse vc`). In fact, these two programs are designed to work with each other.
This guide will show the user how to import their keys into Lighthouse so they can perform their duties as a validator. The guide assumes the user has already installed Lighthouse.
Instructions
Whilst following the steps on the website, users are instructed to download the `eth2.0-deposit-cli` repository. This script generates the validator BLS keys into a `validator_keys` directory. We assume that the user's present working directory is the `eth2.0-deposit-cli` repository (this is where you will be if you just ran the `./deposit.sh` script from the Eth2 Launchpad website). If this is not the case, simply change the `--directory` to point to the `validator_keys` directory.
Now, assuming that the user is in the `eth2.0-deposit-cli` directory and they're using the default (`~/.lighthouse/{network}/validators`) validators directory (specify a different one using the `--validators-dir` flag), they can follow these steps:
1. Run the `lighthouse account validator import` command.
Docker users should use the command from the Docker section, all other users can use:
lighthouse --network mainnet account validator import --directory validator_keys
Note: The user must specify the Eth2 network that they are importing the keys for, using the `--network` flag.
After which they will be prompted for a password for each keystore discovered:
Keystore found at "validator_keys/keystore-m_12381_3600_0_0_0-1595406747.json":
- Public key: 0xa5e8702533f6d66422e042a0bf3471ab9b302ce115633fa6fdc5643f804b6b4f1c33baf95f125ec21969a3b1e0dd9e56
- UUID: 8ea4cf99-8719-43c5-9eda-e97b8a4e074f
If you enter a password it will be stored in validator_definitions.yml so that it is not required each time the validator client starts.
Enter a password, or press enter to omit a password:
The user can choose whether or not they'd like to store the validator password in the `validator_definitions.yml` file. If the password is not stored there, the validator client (`lighthouse vc`) application will ask for the password each time it starts. Some users may prefer this from a security perspective (e.g., if it is a shared computer), however it means that if the validator client restarts, the user will be liable to offline penalties until they can enter the password. If the user trusts the computer that is running the validator client and they are seeking maximum validator rewards, we recommend entering a password at this point.
Once the process is done the user will see:
Successfully imported keystore.
Successfully updated validator_definitions.yml.
Successfully imported 1 validators (0 skipped).
WARNING: DO NOT USE THE ORIGINAL KEYSTORES TO VALIDATE WITH ANOTHER CLIENT, OR YOU WILL GET SLASHED..
The import process is complete!
2. Run the lighthouse vc
command.
Now that the keys are imported, the user can start performing their validator duties by running `lighthouse vc` and checking that their validator public key appears as a `voting_pubkey` in one of the following logs:
INFO Enabled validator voting_pubkey: 0xa5e8702533f6d66422e042a0bf3471ab9b302ce115633fa6fdc5643f804b6b4f1c33baf95f125ec21969a3b1e0dd9e56
Once this log appears (and there are no errors) the `lighthouse vc` application will ensure that the validator starts performing its duties and being rewarded by the protocol. No further input is required from the user.
Docker
The `import` command is a little more complex for Docker users, but the example in this document can be substituted with:
docker run -it \
-v $HOME/.lighthouse:/root/.lighthouse \
-v $(pwd)/validator_keys:/root/validator_keys \
sigp/lighthouse \
lighthouse --network MY_NETWORK account validator import --directory /root/validator_keys
Here we use two `-v` volumes to attach:

- `~/.lighthouse` on the host to `/root/.lighthouse` in the Docker container.
- The `validator_keys` directory in the present working directory of the host to the `/root/validator_keys` directory of the Docker container.
Slashing Protection
The security of Ethereum 2.0's proof of stake protocol depends on penalties for misbehaviour, known as slashings. Validators that sign conflicting messages (blocks or attestations) can be slashed by other validators through the inclusion of a `ProposerSlashing` or `AttesterSlashing` on chain.
The Lighthouse validator client includes a mechanism to protect its validators against accidental slashing, known as the slashing protection database. This database records every block and attestation signed by validators, and the validator client uses this information to avoid signing any slashable messages.
Lighthouse's slashing protection database is an SQLite database located at `$datadir/validators/slashing_protection.sqlite`, which is locked exclusively while the validator client is running. In normal operation, this database will be automatically created and utilized, meaning that your validators are kept safe by default.
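The effect of that exclusive lock can be illustrated with Python's standard `sqlite3` module (a general demonstration of SQLite locking, not Lighthouse's exact transaction settings):

```python
import os
import sqlite3
import tempfile

# Illustration of the SQLite exclusive locking the slashing protection
# database relies on: while one connection holds the write lock, a second
# connection's writes fail with "database is locked".
path = os.path.join(tempfile.mkdtemp(), "slashing_protection.sqlite")

holder = sqlite3.connect(path)
holder.execute("CREATE TABLE signed_blocks (slot INTEGER)")
holder.execute("BEGIN EXCLUSIVE")  # take and hold the exclusive lock

other = sqlite3.connect(path, timeout=0.1)
blocked = False
try:
    other.execute("INSERT INTO signed_blocks VALUES (1)")
except sqlite3.OperationalError:
    blocked = True  # the database is locked by the first connection

print("second connection blocked:", blocked)
```

This is why a second validator client sharing the same datadir cannot even start, as described under Avoiding Slashing.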
If you are seeing errors related to slashing protection, it's important that you act slowly and carefully to keep your validators safe. See the Troubleshooting section.
Initialization
The database will be automatically created, and your validators registered with it, when:

- Importing keys from another source (e.g. Launchpad, Teku, Prysm, `ethdo`). See the docs on importing keys.
- Creating keys using Lighthouse itself (`lighthouse account validator create`).
- Creating keys via the validator client API.
Avoiding Slashing
The slashing protection database is designed to protect against many common causes of slashing, but it cannot prevent all of them.
Examples of circumstances where the slashing protection database is effective are:
- Accidentally running two validator clients on the same machine with the same datadir. The exclusive and transactional access to the database prevents the 2nd validator client from signing anything slashable (it won't even start).
- Deep re-orgs that cause the shuffling to change, prompting validators to re-attest in an epoch where they have already attested. The slashing protection checks all messages against the slashing conditions and will refuse to attest on the new chain until it is safe to do so (usually after one epoch).
- Importing keys and signing history from another client, where that history is complete. If you run another client and decide to switch to Lighthouse, you can export data from your client to be imported into Lighthouse's slashing protection database. See Import and Export.
- Misplacing `slashing_protection.sqlite` during a datadir change or migration between machines. By default Lighthouse will refuse to start if it finds validator keys that are not registered in the slashing protection database.
Examples where it is ineffective are:
- Running two validator client instances simultaneously. This could be two different clients (e.g. Lighthouse and Prysm) running on the same machine, two Lighthouse instances using different datadirs, or two clients on completely different machines (e.g. one on a cloud server and one running locally). You are responsible for ensuring that your validator keys are never run simultaneously – the slashing protection DB cannot protect you in this case.
- Importing keys from another client without also importing voting history.
- Using `--init-slashing-protection` to recreate a missing slashing protection database.
Import and Export
Lighthouse supports the slashing protection interchange format described in EIP-3076. An interchange file is a record of blocks and attestations signed by a set of validator keys – basically a portable slashing protection database!
With your validator client stopped, you can import a `.json` interchange file from another client using this command:
lighthouse account validator slashing-protection import <my_interchange.json>
Instructions for exporting your existing client's database are out of scope for this document, please check the other client's documentation for instructions.
When importing an interchange file, you still need to import the validator keystores themselves separately, using the instructions about importing keystores into Lighthouse.
You can export Lighthouse's database for use with another client with this command:
lighthouse account validator slashing-protection export <lighthouse_interchange.json>
The validator client needs to be stopped in order to export, to guarantee that the data exported is up to date.
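An EIP-3076 interchange file is plain JSON, so it can be inspected before importing. A sketch using a made-up single-validator file (field names per EIP-3076; the pubkey is borrowed from the examples earlier in this book):

```python
import json

# Sketch: list the validator public keys covered by an EIP-3076 interchange
# file, plus how many signed blocks/attestations are recorded for each.
interchange = json.loads("""
{
  "metadata": {
    "interchange_format_version": "5",
    "genesis_validators_root": "0x0000000000000000000000000000000000000000000000000000000000000000"
  },
  "data": [
    {
      "pubkey": "0x87a580d31d7bc69069b55f5a01995a610dd391a26dc9e36e81057a17211983a79266800ab8531f21f1083d7d84085007",
      "signed_blocks": [],
      "signed_attestations": [{"source_epoch": "0", "target_epoch": "30"}]
    }
  ]
}
""")

for record in interchange["data"]:
    print(record["pubkey"][:10],
          len(record["signed_blocks"]), "blocks,",
          len(record["signed_attestations"]), "attestations")
```

Checking which validators a file covers before importing helps you confirm that it accounts for all of your keys.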
Troubleshooting
Misplaced Slashing Database
If the slashing protection database cannot be found, it will manifest in an error like this:
Oct 12 14:41:26.415 CRIT Failed to start validator client reason: Failed to open slashing protection database: SQLError("Unable to open database: Error(Some(\"unable to open database file: /home/karlm/.lighthouse/mainnet/validators/slashing_protection.sqlite\"))").
Ensure that `slashing_protection.sqlite` is in "/home/karlm/.lighthouse/mainnet/validators" folder
Usually this indicates that during some manual intervention the slashing database has been misplaced. This error can also occur if you have upgraded from Lighthouse v0.2.x to v0.3.x without moving the slashing protection database. If you have imported your keys into a new node, you should never see this error (see Initialization).
The safest way to remedy this error is to find your old slashing protection database and move
it to the correct location. In our example that would be
~/.lighthouse/mainnet/validators/slashing_protection.sqlite
. You can search for your old database
using a tool like find
, fd
, or your file manager's GUI. Ask on the Lighthouse Discord if you're
not sure.
If you are absolutely 100% sure that you need to recreate the missing database, you can start
the Lighthouse validator client with the --init-slashing-protection
flag. This flag is incredibly
dangerous and should not be used lightly, and we strongly recommend you try finding
your old slashing protection database before using it. If you do decide to use it, you should
wait at least 1 epoch (~7 minutes) from when your validator client was last actively signing
messages. If you suspect your node experienced a clock drift issue you should wait
longer. Remember that the inactivity penalty for being offline for even a day or so
is approximately equal to the rewards earned in a day. You will get slashed if you use
--init-slashing-protection
incorrectly.
Slashable Attestations and Re-orgs
Sometimes a re-org can cause the validator client to attempt to sign something slashable, in which case it will be blocked by slashing protection, resulting in a log like this:
Sep 29 15:15:05.303 CRIT Not signing slashable attestation error: InvalidAttestation(DoubleVote(SignedAttestation { source_epoch: Epoch(0), target_epoch: Epoch(30), signing_root: 0x0c17be1f233b20341837ff183d21908cce73f22f86d5298c09401c6f37225f8a })), attestation: AttestationData { slot: Slot(974), index: 0, beacon_block_root: 0xa86a93ed808f96eb81a0cd7f46e3b3612cafe4bd0367aaf74e0563d82729e2dc, source: Checkpoint { epoch: Epoch(0), root: 0x0000000000000000000000000000000000000000000000000000000000000000 }, target: Checkpoint { epoch: Epoch(30), root: 0xcbe6901c0701a89e4cf508cfe1da2bb02805acfdfe4c39047a66052e2f1bb614 } }
This log is still marked as CRIT
because in general it should occur only very rarely,
and could indicate a serious error or misconfiguration (see Avoiding Slashing).
Slashable Data in Import
If you receive a warning when trying to import an interchange file about the file containing slashable data, then you must carefully consider whether you want to continue.
There are several potential causes for this warning, each of which require a different reaction. If you have seen the warning for multiple validator keys, the cause could be different for each of them.
- Your validator has actually signed slashable data. If this is the case, you should assess whether your validator has been slashed (or is likely to be slashed). It's up to you whether you'd like to continue.
- You have exported data from Lighthouse to another client, and then back to Lighthouse, in a way that didn't preserve the signing roots. A message with no signing roots is considered slashable with respect to any other message at the same slot/epoch, so even if it was signed by Lighthouse originally, Lighthouse has no way of knowing this. If you're sure you haven't run Lighthouse and the other client simultaneously, you can drop Lighthouse's DB in favour of the interchange file.
- You have imported the same interchange file (which lacks signing roots) twice, e.g. from Teku. It might be safe to continue as-is, or you could consider a Drop and Re-import.
Drop and Re-import
If you'd like to prioritize an interchange file over any existing database stored by Lighthouse then you can move (not delete) Lighthouse's database and replace it like so:
mv $datadir/validators/slashing_protection.sqlite ~/slashing_protection_backup.sqlite
lighthouse account validator slashing-protection import <my_interchange.json>
If your interchange file doesn't cover all of your validators, you shouldn't do this. Please reach out on Discord if you need help.
Limitation of Liability
The Lighthouse developers do not guarantee the perfect functioning of this software, or accept liability for any losses suffered. For more information see the Lighthouse license.
Voluntary exits
A validator may choose to voluntarily stop performing duties (proposing blocks and attesting to blocks) by submitting a voluntary exit transaction to the beacon chain.
A validator can initiate a voluntary exit provided that the validator is currently active, has not been slashed, and has been active for at least 256 epochs (~27 hours) since activation.
Note: After initiating a voluntary exit, the validator will have to keep performing duties until it has successfully exited to avoid penalties.
It takes at a minimum 5 epochs (32 minutes) for a validator to exit after initiating a voluntary exit. This number can be much higher depending on how many other validators are queued to exit.
Withdrawal of exited funds
Even though users can perform a voluntary exit in phase 0, they cannot withdraw their exited funds at this point in time. This implies that the staked funds are effectively frozen until withdrawals are enabled in future phases.
To understand the phased rollout strategy for Eth2, please visit https://ethereum.org/en/eth2/#roadmap.
Initiating a voluntary exit
In order to initiate an exit, users can use the `lighthouse account validator exit` command.
- The `--keystore` flag is used to specify the path to the EIP-2335 voting keystore for the validator.
- The `--beacon-nodes` flag is used to specify a beacon chain HTTP endpoint that conforms to the Eth2.0 Standard API specifications. That beacon node will be used to validate and propagate the voluntary exit. The default value for this flag is `http://localhost:5052`.
- The `--network` flag is used to specify a particular Eth2 network (default is `mainnet`).
- The `--password-file` flag is used to specify the path to the file containing the password for the voting keystore. If this flag is not provided, the user will be prompted to enter the password.
After validating the password, the user will be prompted to enter a special exit phrase as a final confirmation after which the voluntary exit will be published to the beacon chain.
The exit phrase is the following:
Exit my validator
Below is an example for initiating a voluntary exit on the Pyrmont testnet.
$ lighthouse --network pyrmont account validator exit --keystore /path/to/keystore --beacon-nodes http://localhost:5052
Running account manager for pyrmont network
validator-dir path: ~/.lighthouse/pyrmont/validators
Enter the keystore password for validator in 0xabcd
Password is correct
Publishing a voluntary exit for validator 0xabcd
WARNING: WARNING: THIS IS AN IRREVERSIBLE OPERATION
WARNING: WITHDRAWING STAKED ETH WILL NOT BE POSSIBLE UNTIL ETH1/ETH2 MERGE.
PLEASE VISIT https://lighthouse-book.sigmaprime.io/voluntary-exit.html
TO MAKE SURE YOU UNDERSTAND THE IMPLICATIONS OF A VOLUNTARY EXIT.
Enter the exit phrase from the above URL to confirm the voluntary exit:
Exit my validator
Successfully published voluntary exit for validator 0xabcd
Validator Monitoring
Lighthouse allows for fine-grained monitoring of specific validators using the "validator monitor". Generally, users will want to use this function to track their own validators; however, it can be used for any validator, regardless of who controls it.
Monitoring is in the Beacon Node
Lighthouse performs validator monitoring in the Beacon Node (BN) instead of the Validator Client (VC). This is contrary to what some users may expect, but it has several benefits:
- It keeps the VC simple. The VC handles cryptographic signing and the developers believe it should be doing as little additional work as possible.
- The BN has better knowledge of the chain and network. Communicating all this information to the VC is impractical, so we can provide richer information by monitoring in the BN.
- It is more flexible:
- Users can use a local BN to observe some validators running in a remote location.
- Users can monitor validators that are not their own.
How to Enable Monitoring
The validator monitor is always enabled in Lighthouse, but it might not have any enrolled validators. There are two methods for a validator to be enrolled for additional monitoring: automatic and manual.
Automatic
When the `--validator-monitor-auto` flag is supplied, any validator which uses the `beacon_committee_subscriptions` API endpoint will be enrolled for additional monitoring. All active validators use this endpoint each epoch, so you can expect it to detect all local and active validators within several minutes of start up.
Example
lighthouse bn --staking --validator-monitor-auto
Manual
The `--validator-monitor-pubkeys` flag can be used to specify validator public keys for monitoring. This is useful when monitoring validators that are not directly attached to this BN.

Note: when monitoring validators that aren't connected to this BN, supply the `--subscribe-all-subnets --import-all-attestations` flags to ensure the BN has a full view of the network. This is not strictly necessary, though.
Example
Monitor the mainnet validators at indices `0` and `1`:
lighthouse bn --validator-monitor-pubkeys 0x933ad9491b62059dd065b560d256d8957a8c402cc6e8d8ee7290ae11e8f7329267a8811c397529dac52ae1342ba58c95,0xa1d1ad0714035353258038e964ae9675dc0252ee22cea896825c01458e1807bfad2f9969338798548d9858a571f7425c
Observing Monitoring
Enrolling a validator for additional monitoring results in:

- Additional logs printed during BN operation.
- Additional Prometheus metrics from the BN.
Logging
Lighthouse will create logs for the following events for each monitored validator:
- A block from the validator is observed.
- An unaggregated attestation from the validator is observed.
- An unaggregated attestation from the validator is included in an aggregate.
- An unaggregated attestation from the validator is included in a block.
- An aggregated attestation from the validator is observed.
- An exit for the validator is observed.
- A slashing (proposer or attester) is observed which implicates that validator.
Example
Jan 18 11:50:03.896 INFO Unaggregated attestation validator: 0, src: gossip, slot: 342248, epoch: 10695, delay_ms: 891, index: 12, head: 0x5f9d603c04b5489bf2de3708569226fd9428eb40a89c75945e344d06c7f4f86a, service: beacon
Jan 18 11:32:55.196 INFO Attestation included in aggregate validator: 0, src: gossip, slot: 342162, epoch: 10692, delay_ms: 2193, index: 10, head: 0x9be04ecd04bf82952dad5d12c62e532fd13a8d42afb2e6ee98edaf05fc7f9f30, service: beacon
Jan 18 11:21:09.808 INFO Attestation included in block validator: 1, slot: 342102, epoch: 10690, inclusion_lag: 0 slot(s), index: 7, head: 0x422bcd14839e389f797fd38b01e31995f91bcaea3d5d56457fc6aac76909ebac, service: beacon
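Lines like the above carry `key: value` pairs after the log message, so they are easy to scrape. A rough sketch (the exact log format may vary between Lighthouse versions):

```python
import re

# Rough sketch: pull the comma-separated "key: value" fields out of a
# validator-monitor log line like the examples above.
line = ("Jan 18 11:50:03.896 INFO Unaggregated attestation validator: 0, "
        "src: gossip, slot: 342248, epoch: 10695, delay_ms: 891, index: 12, "
        "head: 0x5f9d603c04b5489bf2de3708569226fd9428eb40a89c75945e344d06c7f4f86a, "
        "service: beacon")

# Match "word: value" pairs; values run until the next comma (or end of line).
fields = dict(re.findall(r"(\w+): ([^,]+)", line))
print(fields["validator"], fields["slot"], fields["delay_ms"])
```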
Metrics
The `ValidatorMonitor` dashboard contains all/most of the metrics exposed via the validator monitor.
APIs
Lighthouse allows users to query the state of Eth2.0 using web-standard, RESTful HTTP/JSON APIs.
There are two APIs served by Lighthouse:
- Beacon Node API
- Validator Client API (not yet released).
Beacon Node API
Lighthouse implements the standard Eth2 Beacon Node API specification. Please follow that link for a full description of each API endpoint.
Warning: the standard API specification is still in flux and the Lighthouse implementation is partially incomplete. You can track the status of each endpoint at #1434.
Starting the server
A Lighthouse beacon node can be configured to expose an HTTP server by supplying the `--http` flag. The default listen address is `127.0.0.1:5052`.
The following CLI flags control the HTTP server:
- `--http`: enable the HTTP server (required even if the following flags are provided).
- `--http-port`: specify the listen port of the server.
- `--http-address`: specify the listen address of the server.
- `--http-allow-origin`: specify the value of the `Access-Control-Allow-Origin` header. The default is to not supply a header.
The schema of the API aligns with the standard Eth2 Beacon Node API as defined at github.com/ethereum/eth2.0-APIs. An interactive specification is available here.
CLI Example
Start the beacon node with the HTTP server listening on http://localhost:5052:
lighthouse bn --http
HTTP Request/Response Examples
This section contains some simple examples of using the HTTP API via `curl`.
All endpoints are documented in the Eth2 Beacon Node API
specification.
View the head of the beacon chain
Returns the block header at the head of the canonical chain.
curl -X GET "http://localhost:5052/eth/v1/beacon/headers/head" -H "accept: application/json"
{
"data": {
"root": "0x4381454174fc28c7095077e959dcab407ae5717b5dca447e74c340c1b743d7b2",
"canonical": true,
"header": {
"message": {
"slot": "3199",
"proposer_index": "19077",
"parent_root": "0xf1934973041c5896d0d608e52847c3cd9a5f809c59c64e76f6020e3d7cd0c7cd",
"state_root": "0xe8e468f9f5961655dde91968f66480868dab8d4147de9498111df2b7e4e6fe60",
"body_root": "0x6f183abc6c4e97f832900b00d4e08d4373bfdc819055d76b0f4ff850f559b883"
},
"signature": "0x988064a2f9cf13fe3aae051a3d85f6a4bca5a8ff6196f2f504e32f1203b549d5f86a39c6509f7113678880701b1881b50925a0417c1c88a750c8da7cd302dda5aabae4b941e3104d0cf19f5043c4f22a7d75d0d50dad5dbdaf6991381dc159ab"
}
}
}
View the status of a validator
Shows the status of the validator at index `1` in the `head` state.
curl -X GET "http://localhost:5052/eth/v1/beacon/states/head/validators/1" -H "accept: application/json"
{
"data": {
"index": "1",
"balance": "63985937939",
"status": "Active",
"validator": {
"pubkey": "0x873e73ee8b3e4fcf1d2fb0f1036ba996ac9910b5b348f6438b5f8ef50857d4da9075d0218a9d1b99a9eae235a39703e1",
"withdrawal_credentials": "0x00b8cdcf79ba7e74300a07e9d8f8121dd0d8dd11dcfd6d3f2807c45b426ac968",
"effective_balance": "32000000000",
"slashed": false,
"activation_eligibility_epoch": "0",
"activation_epoch": "0",
"exit_epoch": "18446744073709551615",
"withdrawable_epoch": "18446744073709551615"
}
}
}
Troubleshooting
HTTP API is unavailable or refusing connections
Ensure the `--http` flag has been supplied at the CLI.
You can quickly check that the HTTP endpoint is up using `curl`:
curl -X GET "http://localhost:5052/eth/v1/node/version" -H "accept: application/json"
The beacon node should respond with its version:
{"data":{"version":"Lighthouse/v0.2.9-6f7b4768a/x86_64-linux"}}
If this doesn't work, the server might not be started or there might be a network connection error.
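The check above can also be scripted. Below is a minimal Python sketch (assuming the default `127.0.0.1:5052` listen address; the function names here are illustrative, not part of Lighthouse) that probes the version endpoint and distinguishes a reachable server from a connection failure:

```python
import json
import urllib.error
import urllib.request

NODE_URL = "http://localhost:5052"  # assumed default --http listen address

def parse_version(body: str) -> str:
    # Extract the version string from a /eth/v1/node/version response body.
    return json.loads(body)["data"]["version"]

def probe_node(url: str = NODE_URL):
    # Return the node's version string, or None if the server is unreachable.
    try:
        with urllib.request.urlopen(url + "/eth/v1/node/version", timeout=2) as resp:
            return parse_version(resp.read().decode())
    except (urllib.error.URLError, OSError):
        return None

if __name__ == "__main__":
    version = probe_node()
    print(version if version else "HTTP API is unavailable; was --http supplied?")
```

Running this against a stopped node prints the unavailable message rather than raising, which makes it convenient for health-check scripts.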
I cannot query my node from a web browser (e.g., Swagger)
By default, the API does not provide an `Access-Control-Allow-Origin` header,
which causes browsers to reject responses with a CORS error.
The `--http-allow-origin` flag can be used to add a wild-card CORS header:
lighthouse bn --http --http-allow-origin "*"
Warning: Adding the wild-card allow-origin flag can pose a security risk. Only use it in production if you understand the risks of a loose CORS policy.
Lighthouse Non-Standard APIs
Lighthouse fully supports the standardization efforts at
github.com/ethereum/eth2.0-APIs;
however, development sometimes requires additional endpoints that shouldn't
necessarily be defined as a broad-reaching standard. Such endpoints are placed
behind the `/lighthouse` path.
The endpoints behind the `/lighthouse` path are:
- Not intended to be stable.
- Not guaranteed to be safe.
- For testing and debugging purposes only.
Although we don't recommend that users rely on these endpoints, we document them briefly so they can be utilized by developers and researchers.
/lighthouse/health
Presently only available on Linux.
curl -X GET "http://localhost:5052/lighthouse/health" -H "accept: application/json" | jq
{
"data": {
"pid": 1728254,
"pid_num_threads": 47,
"pid_mem_resident_set_size": 510054400,
"pid_mem_virtual_memory_size": 3963158528,
"sys_virt_mem_total": 16715530240,
"sys_virt_mem_available": 4065374208,
"sys_virt_mem_used": 11383402496,
"sys_virt_mem_free": 1368662016,
"sys_virt_mem_percent": 75.67906,
"sys_loadavg_1": 4.92,
"sys_loadavg_5": 5.53,
"sys_loadavg_15": 5.58
}
}
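The `sys_virt_mem_percent` field appears to follow the common `(total - available) / total` convention for "memory in use". A short Python sketch (an illustration only; field names are taken from the response above) shows the relationship:

```python
def mem_percent(health: dict) -> float:
    # Percentage of system memory in use, derived as (total - available) / total.
    data = health["data"]
    total = data["sys_virt_mem_total"]
    available = data["sys_virt_mem_available"]
    return 100.0 * (total - available) / total

# With the sample response above, this reproduces sys_virt_mem_percent (~75.679).
```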
/lighthouse/syncing
curl -X GET "http://localhost:5052/lighthouse/syncing" -H "accept: application/json" | jq
{
"data": {
"SyncingFinalized": {
"start_slot": 3104,
"head_slot": 343744,
"head_root": "0x1b434b5ed702338df53eb5e3e24336a90373bb51f74b83af42840be7421dd2bf"
}
}
}
/lighthouse/peers
curl -X GET "http://localhost:5052/lighthouse/peers" -H "accept: application/json" | jq
[
{
"peer_id": "16Uiu2HAmA9xa11dtNv2z5fFbgF9hER3yq35qYNTPvN7TdAmvjqqv",
"peer_info": {
"_status": "Healthy",
"score": {
"score": 0
},
"client": {
"kind": "Lighthouse",
"version": "v0.2.9-1c9a055c",
"os_version": "aarch64-linux",
"protocol_version": "lighthouse/libp2p",
"agent_string": "Lighthouse/v0.2.9-1c9a055c/aarch64-linux"
},
"connection_status": {
"status": "disconnected",
"connections_in": 0,
"connections_out": 0,
"last_seen": 1082,
"banned_ips": []
},
"listening_addresses": [
"/ip4/80.109.35.174/tcp/9000",
"/ip4/127.0.0.1/tcp/9000",
"/ip4/192.168.0.73/tcp/9000",
"/ip4/172.17.0.1/tcp/9000",
"/ip6/::1/tcp/9000"
],
"sync_status": {
"Advanced": {
"info": {
"status_head_slot": 343829,
"status_head_root": "0xe34e43efc2bb462d9f364bc90e1f7f0094e74310fd172af698b5a94193498871",
"status_finalized_epoch": 10742,
"status_finalized_root": "0x1b434b5ed702338df53eb5e3e24336a90373bb51f74b83af42840be7421dd2bf"
}
}
},
"meta_data": {
"seq_number": 160,
"attnets": "0x0000000800000080"
}
}
}
]
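Because the peer list can be large, it is often handy to summarize it. Here is a hypothetical Python helper (not part of Lighthouse) that tallies peers by their connection status:

```python
def count_by_status(peers: list) -> dict:
    # Tally peers by connection status ("connected", "disconnected", etc.),
    # reading the same nested fields shown in the response above.
    counts = {}
    for peer in peers:
        status = peer["peer_info"]["connection_status"]["status"]
        counts[status] = counts.get(status, 0) + 1
    return counts
```

Feeding it the parsed JSON from `/lighthouse/peers` gives a quick overview of how many peers are currently connected, disconnected, or banned.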
/lighthouse/peers/connected
curl -X GET "http://localhost:5052/lighthouse/peers/connected" -H "accept: application/json" | jq
[
{
"peer_id": "16Uiu2HAkzJC5TqDSKuLgVUsV4dWat9Hr8EjNZUb6nzFb61mrfqBv",
"peer_info": {
"_status": "Healthy",
"score": {
"score": 0
},
"client": {
"kind": "Lighthouse",
"version": "v0.2.8-87181204+",
"os_version": "x86_64-linux",
"protocol_version": "lighthouse/libp2p",
"agent_string": "Lighthouse/v0.2.8-87181204+/x86_64-linux"
},
"connection_status": {
"status": "connected",
"connections_in": 1,
"connections_out": 0,
"last_seen": 0,
"banned_ips": []
},
"listening_addresses": [
"/ip4/34.204.178.218/tcp/9000",
"/ip4/127.0.0.1/tcp/9000",
"/ip4/172.31.67.58/tcp/9000",
"/ip4/172.17.0.1/tcp/9000",
"/ip6/::1/tcp/9000"
],
"sync_status": "Unknown",
"meta_data": {
"seq_number": 1819,
"attnets": "0xffffffffffffffff"
}
}
}
]
/lighthouse/proto_array
curl -X GET "http://localhost:5052/lighthouse/proto_array" -H "accept: application/json" | jq
Example omitted for brevity.
/lighthouse/validator_inclusion/{epoch}/{validator_id}
/lighthouse/validator_inclusion/{epoch}/global
See Validator Inclusion APIs.
/lighthouse/eth1/syncing
Returns information regarding the Eth1 network, as it is required for use in Eth2.
Fields
- `head_block_number`, `head_block_timestamp`: the block number and timestamp from the very head of the Eth1 chain. Useful for understanding the immediate health of the Eth1 node that the beacon node is connected to.
- `latest_cached_block_number` & `latest_cached_block_timestamp`: the block number and timestamp of the latest block we have in our block cache.
  - For correct Eth1 voting this timestamp should be later than the `voting_period_start_timestamp`.
- `voting_target_timestamp`: the latest timestamp allowed for an Eth1 block in this voting period.
- `eth1_node_sync_status_percentage` (float): an estimate of how far the head of the Eth1 node is from the head of the Eth1 chain.
  - `100.0` indicates a fully synced Eth1 node.
  - `0.0` indicates an Eth1 node that has not verified any blocks past the genesis block.
- `lighthouse_is_cached_and_ready`: set to `true` if the caches in the beacon node are ready for block production.
  - This value might be set to `false` whilst `eth1_node_sync_status_percentage == 100.0` if the beacon node is still building its internal cache.
  - This value might be set to `true` whilst `eth1_node_sync_status_percentage < 100.0` since the cache only cares about blocks a certain distance behind the head.
Example
curl -X GET "http://localhost:5052/lighthouse/eth1/syncing" -H "accept: application/json" | jq
{
"data": {
"head_block_number": 3611806,
"head_block_timestamp": 1603249317,
"latest_cached_block_number": 3610758,
"latest_cached_block_timestamp": 1603233597,
"voting_target_timestamp": 1603228632,
"eth1_node_sync_status_percentage": 100,
"lighthouse_is_cached_and_ready": true
}
}
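As an illustration of how these fields combine (the exact readiness rules are internal to Lighthouse; this sketch only mirrors the field descriptions above), a Python check for block-production readiness might look like:

```python
def eth1_ready(syncing: dict) -> bool:
    # Illustrative check only: the node reports its caches ready, and the
    # newest cached block is not older than the voting target timestamp.
    data = syncing["data"]
    return (
        bool(data["lighthouse_is_cached_and_ready"])
        and data["latest_cached_block_timestamp"] >= data["voting_target_timestamp"]
    )
```

Applied to the sample response above, both conditions hold, so the node would be considered ready.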
/lighthouse/eth1/block_cache
Returns a list of all the Eth1 blocks in the Eth1 voting cache.
Example
curl -X GET "http://localhost:5052/lighthouse/eth1/block_cache" -H "accept: application/json" | jq
{
"data": [
{
"hash": "0x3a17f4b7ae4ee57ef793c49ebc9c06ff85207a5e15a1d0bd37b68c5ef5710d7f",
"timestamp": 1603173338,
"number": 3606741,
"deposit_root": "0xd24920d936e8fb9b67e93fd126ce1d9e14058b6d82dcf7d35aea46879fae6dee",
"deposit_count": 88911
},
{
"hash": "0x78852954ea4904e5f81038f175b2adefbede74fbb2338212964405443431c1e7",
"timestamp": 1603173353,
"number": 3606742,
"deposit_root": "0xd24920d936e8fb9b67e93fd126ce1d9e14058b6d82dcf7d35aea46879fae6dee",
"deposit_count": 88911
}
]
}
/lighthouse/eth1/deposit_cache
Returns a list of all cached logs from the deposit contract.
Example
curl -X GET "http://localhost:5052/lighthouse/eth1/deposit_cache" -H "accept: application/json" | jq
{
"data": [
{
"deposit_data": {
"pubkey": "0xae9e6a550ac71490cdf134533b1688fcbdb16f113d7190eacf4f2e9ca6e013d5bd08c37cb2bde9bbdec8ffb8edbd495b",
"withdrawal_credentials": "0x0062a90ebe71c4c01c4e057d7d13b944d9705f524ebfa24290c22477ab0517e4",
"amount": "32000000000",
"signature": "0xa87a4874d276982c471e981a113f8af74a31ffa7d18898a02df2419de2a7f02084065784aa2f743d9ddf80952986ea0b012190cd866f1f2d9c633a7a33c2725d0b181906d413c82e2c18323154a2f7c7ae6f72686782ed9e423070daa00db05b"
},
"block_number": 3086571,
"index": 0,
"signature_is_valid": false
},
{
"deposit_data": {
"pubkey": "0xb1d0ec8f907e023ea7b8cb1236be8a74d02ba3f13aba162da4a68e9ffa2e395134658d150ef884bcfaeecdf35c286496",
"withdrawal_credentials": "0x00a6aa2a632a6c4847cf87ef96d789058eb65bfaa4cc4e0ebc39237421c22e54",
"amount": "32000000000",
"signature": "0x8d0f8ec11935010202d6dde9ab437f8d835b9cfd5052c001be5af9304f650ada90c5363022e1f9ef2392dd222cfe55b40dfd52578468d2b2092588d4ad3745775ea4d8199216f3f90e57c9435c501946c030f7bfc8dbd715a55effa6674fd5a4"
},
"block_number": 3086579,
"index": 1,
"signature_is_valid": false
}
]
}
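Deposit logs carry a sequential index assigned by the deposit contract, so a gap-free cache should contain indices 0, 1, 2, and so on. A quick Python sanity check (a sketch, not part of Lighthouse):

```python
def deposit_indices_contiguous(cache: dict) -> bool:
    # True when cached deposit logs have indices 0, 1, 2, ... with no gaps,
    # matching the sequential numbering of the deposit contract.
    indices = [entry["index"] for entry in cache["data"]]
    return indices == list(range(len(indices)))
```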
/lighthouse/beacon/states/{state_id}/ssz
Obtains a `BeaconState` in SSZ bytes. Useful for obtaining a genesis state.
The `state_id` parameter is identical to that used in the standard Eth2.0 API
`beacon/state` routes.
curl -X GET "http://localhost:5052/lighthouse/beacon/states/0/ssz" | jq
Example omitted for brevity, the body simply contains SSZ bytes.
Validator Inclusion APIs
The `/lighthouse/validator_inclusion` API endpoints provide information on the
results of the proof-of-stake voting process used for finality/justification
under Casper FFG.
These endpoints are not stable or included in the Eth2 standard API. As such, they are subject to change or removal without a change in major release version.
Endpoints
HTTP Path | Description |
---|---|
/lighthouse/validator_inclusion/{epoch}/global | A global vote count for a given epoch. |
/lighthouse/validator_inclusion/{epoch}/{validator_id} | A per-validator breakdown of votes in a given epoch. |
Global
Returns a global count of votes for some given `epoch`. The results are included
both for the current and previous (`epoch - 1`) epochs since both are required
by the beacon node whilst performing per-epoch processing.
Generally, you should consider the "current" values to be incomplete and the "previous" values to be final. This is because validators can continue to include attestations from the current epoch in the next epoch, however this is not the case for attestations from the previous epoch.
                `epoch` query parameter
                            |
                            |     --------- values are calculated here
                            |     |
                            v     v
Epoch:    |---previous---|---current---|---next---|

                         |-------------|
                                ^
                                |
             window for including "current" attestations
                          in a block
The votes are expressed in terms of staked effective Gwei (i.e., not the number of
individual validators). For example, if a validator has 32 ETH staked they will
increase the `current_epoch_attesting_gwei` figure by `32,000,000,000` if they
have an attestation included in a block during the current epoch. If this
validator has more than 32 ETH, that extra ETH will not count towards their
vote (that is why it is effective Gwei).
The following fields are returned:
- `current_epoch_active_gwei`: the total staked gwei that was active (i.e., able to vote) during the current epoch.
- `current_epoch_attesting_gwei`: the total staked gwei that had one or more attestations included in a block during the current epoch (multiple attestations by the same validator do not increase this figure).
- `current_epoch_target_attesting_gwei`: the total staked gwei that attested to the majority-elected Casper FFG target epoch during the current epoch. This figure must be equal to or less than `current_epoch_attesting_gwei`.
- `previous_epoch_active_gwei`: as above, but during the previous epoch.
- `previous_epoch_attesting_gwei`: see `current_epoch_attesting_gwei`.
- `previous_epoch_target_attesting_gwei`: see `current_epoch_target_attesting_gwei`.
- `previous_epoch_head_attesting_gwei`: the total staked gwei that attested to a head beacon block that is in the canonical chain.
From this data you can calculate some interesting figures:
Participation Rate
previous_epoch_attesting_gwei / previous_epoch_active_gwei
Expresses the proportion of active stake that managed to have an attestation voting upon the previous epoch included in a block.
Justification/Finalization Rate
previous_epoch_target_attesting_gwei / previous_epoch_active_gwei
When this value is greater than or equal to `2/3` it is possible that the
beacon chain may justify and/or finalize the epoch.
HTTP Example
curl -X GET "http://localhost:5052/lighthouse/validator_inclusion/0/global" -H "accept: application/json" | jq
{
"data": {
"current_epoch_active_gwei": 642688000000000,
"previous_epoch_active_gwei": 642688000000000,
"current_epoch_attesting_gwei": 366208000000000,
"current_epoch_target_attesting_gwei": 366208000000000,
"previous_epoch_attesting_gwei": 1000000000,
"previous_epoch_target_attesting_gwei": 1000000000,
"previous_epoch_head_attesting_gwei": 1000000000
}
}
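The rates described above can be computed directly from this response. A short Python sketch (the function names are illustrative):

```python
def participation_rate(votes: dict) -> float:
    # Fraction of previously-active stake with an attestation included in a block.
    data = votes["data"]
    return data["previous_epoch_attesting_gwei"] / data["previous_epoch_active_gwei"]

def justification_rate(votes: dict) -> float:
    # Fraction of previously-active stake that attested to the majority FFG target.
    data = votes["data"]
    return data["previous_epoch_target_attesting_gwei"] / data["previous_epoch_active_gwei"]

def may_justify(votes: dict) -> bool:
    # Justification becomes possible when the target-attesting rate reaches 2/3.
    return justification_rate(votes) >= 2 / 3
```

With the sample response above, both previous-epoch rates are far below `2/3`, so `may_justify` returns `False`.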
Individual
Returns a per-validator summary of how that validator performed during the current epoch.
The Global Votes endpoint is the summation of all of these individual values; please see it for definitions of terms like "current_epoch", "previous_epoch" and "target_attester".
HTTP Example
curl -X GET "http://localhost:5052/lighthouse/validator_inclusion/0/42" -H "accept: application/json" | jq
{
"data": {
"is_slashed": false,
"is_withdrawable_in_current_epoch": false,
"is_active_in_current_epoch": true,
"is_active_in_previous_epoch": true,
"current_epoch_effective_balance_gwei": 32000000000,
"is_current_epoch_attester": false,
"is_current_epoch_target_attester": false,
"is_previous_epoch_attester": false,
"is_previous_epoch_target_attester": false,
"is_previous_epoch_head_attester": false
}
}
Validator Client API
Lighthouse implements an HTTP/JSON API for the validator client. Since there is no Eth2 standard validator client API, Lighthouse has defined its own.
A full list of endpoints can be found in Endpoints.
Note: All requests to the HTTP server must supply an
`Authorization` header. All responses contain a `Signature`
header for optional verification.
Starting the server
A Lighthouse validator client can be configured to expose an HTTP server by supplying the `--http` flag. The default listen address is `127.0.0.1:5062`.
The following CLI flags control the HTTP server:
- `--http`: enable the HTTP server (required even if the following flags are provided).
- `--http-port`: specify the listen port of the server.
- `--http-allow-origin`: specify the value of the `Access-Control-Allow-Origin` header. The default is to not supply a header.
Security
The validator client HTTP server is not encrypted (i.e., it is not HTTPS). For
this reason, it will only listen on `127.0.0.1`.
It is unsafe to expose the validator client to the public Internet without additional transport layer security (e.g., HTTPS via nginx, SSH tunnels, etc.).
CLI Example
Start the validator client with the HTTP server listening on http://localhost:5062:
lighthouse vc --http
Validator Client API: Endpoints
Endpoints
HTTP Path | Description |
---|---|
GET /lighthouse/version | Get the Lighthouse software version |
GET /lighthouse/health | Get information about the host machine |
GET /lighthouse/spec | Get the Eth2 specification used by the validator |
GET /lighthouse/validators | List all validators |
GET /lighthouse/validators/:voting_pubkey | Get a specific validator |
PATCH /lighthouse/validators/:voting_pubkey | Update a specific validator |
POST /lighthouse/validators | Create a new validator and mnemonic. |
POST /lighthouse/validators/keystore | Import a keystore. |
POST /lighthouse/validators/mnemonic | Create a new validator from an existing mnemonic. |
GET /lighthouse/version
Returns the software version and `git` commit hash for the Lighthouse binary.
HTTP Specification
Property | Specification |
---|---|
Path | /lighthouse/version |
Method | GET |
Required Headers | Authorization |
Typical Responses | 200 |
Example Response Body
{
"data": {
"version": "Lighthouse/v0.2.11-fc0654fbe+/x86_64-linux"
}
}
GET /lighthouse/health
Returns information regarding the health of the host machine.
HTTP Specification
Property | Specification |
---|---|
Path | /lighthouse/health |
Method | GET |
Required Headers | Authorization |
Typical Responses | 200 |
Note: this endpoint is presently only available on Linux.
Example Response Body
{
"data": {
"pid": 1476293,
"pid_num_threads": 19,
"pid_mem_resident_set_size": 4009984,
"pid_mem_virtual_memory_size": 1306775552,
"sys_virt_mem_total": 33596100608,
"sys_virt_mem_available": 23073017856,
"sys_virt_mem_used": 9346957312,
"sys_virt_mem_free": 22410510336,
"sys_virt_mem_percent": 31.322334,
"sys_loadavg_1": 0.98,
"sys_loadavg_5": 0.98,
"sys_loadavg_15": 1.01
}
}
GET /lighthouse/spec
Returns the Eth2 specification loaded for this validator.
HTTP Specification
Property | Specification |
---|---|
Path | /lighthouse/spec |
Method | GET |
Required Headers | Authorization |
Typical Responses | 200 |
Example Response Body
{
"data": {
"CONFIG_NAME": "mainnet",
"MAX_COMMITTEES_PER_SLOT": "64",
"TARGET_COMMITTEE_SIZE": "128",
"MIN_PER_EPOCH_CHURN_LIMIT": "4",
"CHURN_LIMIT_QUOTIENT": "65536",
"SHUFFLE_ROUND_COUNT": "90",
"MIN_GENESIS_ACTIVE_VALIDATOR_COUNT": "1024",
"MIN_GENESIS_TIME": "1601380800",
"GENESIS_DELAY": "172800",
"MIN_DEPOSIT_AMOUNT": "1000000000",
"MAX_EFFECTIVE_BALANCE": "32000000000",
"EJECTION_BALANCE": "16000000000",
"EFFECTIVE_BALANCE_INCREMENT": "1000000000",
"HYSTERESIS_QUOTIENT": "4",
"HYSTERESIS_DOWNWARD_MULTIPLIER": "1",
"HYSTERESIS_UPWARD_MULTIPLIER": "5",
"PROPORTIONAL_SLASHING_MULTIPLIER": "3",
"GENESIS_FORK_VERSION": "0x00000002",
"BLS_WITHDRAWAL_PREFIX": "0x00",
"SECONDS_PER_SLOT": "12",
"MIN_ATTESTATION_INCLUSION_DELAY": "1",
"MIN_SEED_LOOKAHEAD": "1",
"MAX_SEED_LOOKAHEAD": "4",
"MIN_EPOCHS_TO_INACTIVITY_PENALTY": "4",
"MIN_VALIDATOR_WITHDRAWABILITY_DELAY": "256",
"SHARD_COMMITTEE_PERIOD": "256",
"BASE_REWARD_FACTOR": "64",
"WHISTLEBLOWER_REWARD_QUOTIENT": "512",
"PROPOSER_REWARD_QUOTIENT": "8",
"INACTIVITY_PENALTY_QUOTIENT": "16777216",
"MIN_SLASHING_PENALTY_QUOTIENT": "32",
"SAFE_SLOTS_TO_UPDATE_JUSTIFIED": "8",
"DOMAIN_BEACON_PROPOSER": "0x00000000",
"DOMAIN_BEACON_ATTESTER": "0x01000000",
"DOMAIN_RANDAO": "0x02000000",
"DOMAIN_DEPOSIT": "0x03000000",
"DOMAIN_VOLUNTARY_EXIT": "0x04000000",
"DOMAIN_SELECTION_PROOF": "0x05000000",
"DOMAIN_AGGREGATE_AND_PROOF": "0x06000000",
"MAX_VALIDATORS_PER_COMMITTEE": "2048",
"SLOTS_PER_EPOCH": "32",
"EPOCHS_PER_ETH1_VOTING_PERIOD": "32",
"SLOTS_PER_HISTORICAL_ROOT": "8192",
"EPOCHS_PER_HISTORICAL_VECTOR": "65536",
"EPOCHS_PER_SLASHINGS_VECTOR": "8192",
"HISTORICAL_ROOTS_LIMIT": "16777216",
"VALIDATOR_REGISTRY_LIMIT": "1099511627776",
"MAX_PROPOSER_SLASHINGS": "16",
"MAX_ATTESTER_SLASHINGS": "2",
"MAX_ATTESTATIONS": "128",
"MAX_DEPOSITS": "16",
"MAX_VOLUNTARY_EXITS": "16",
"ETH1_FOLLOW_DISTANCE": "1024",
"TARGET_AGGREGATORS_PER_COMMITTEE": "16",
"RANDOM_SUBNETS_PER_VALIDATOR": "1",
"EPOCHS_PER_RANDOM_SUBNET_SUBSCRIPTION": "256",
"SECONDS_PER_ETH1_BLOCK": "14",
"DEPOSIT_CONTRACT_ADDRESS": "0x48b597f4b53c21b48ad95c7256b49d1779bd5890"
}
}
GET /lighthouse/validators
Lists all validators managed by this validator client.
HTTP Specification
Property | Specification |
---|---|
Path | /lighthouse/validators |
Method | GET |
Required Headers | Authorization |
Typical Responses | 200 |
Example Response Body
{
"data": [
{
"enabled": true,
"description": "validator one",
"voting_pubkey": "0xb0148e6348264131bf47bcd1829590e870c836dc893050fd0dadc7a28949f9d0a72f2805d027521b45441101f0cc1cde"
},
{
"enabled": true,
"description": "validator two",
"voting_pubkey": "0xb0441246ed813af54c0a11efd53019f63dd454a1fa2a9939ce3c228419fbe113fb02b443ceeb38736ef97877eb88d43a"
},
{
"enabled": true,
"description": "validator three",
"voting_pubkey": "0xad77e388d745f24e13890353031dd8137432ee4225752642aad0a2ab003c86620357d91973b6675932ff51f817088f38"
}
]
}
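A client consuming this endpoint might, for example, collect the pubkeys of currently enabled validators. A hypothetical Python helper:

```python
def enabled_pubkeys(response: dict) -> list:
    # Voting pubkeys of all validators currently enabled in this client,
    # using the response shape shown above.
    return [v["voting_pubkey"] for v in response["data"] if v["enabled"]]
```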
GET /lighthouse/validators/:voting_pubkey
Get a validator by their `voting_pubkey`.
HTTP Specification
Property | Specification |
---|---|
Path | /lighthouse/validators/:voting_pubkey |
Method | GET |
Required Headers | Authorization |
Typical Responses | 200, 400 |
Example Path
localhost:5062/lighthouse/validators/0xb0148e6348264131bf47bcd1829590e870c836dc893050fd0dadc7a28949f9d0a72f2805d027521b45441101f0cc1cde
Example Response Body
{
"data": {
"enabled": true,
"voting_pubkey": "0xb0148e6348264131bf47bcd1829590e870c836dc893050fd0dadc7a28949f9d0a72f2805d027521b45441101f0cc1cde"
}
}
PATCH /lighthouse/validators/:voting_pubkey
Update some values for the validator with `voting_pubkey`.
HTTP Specification
Property | Specification |
---|---|
Path | /lighthouse/validators/:voting_pubkey |
Method | PATCH |
Required Headers | Authorization |
Typical Responses | 200, 400 |
Example Path
localhost:5062/lighthouse/validators/0xb0148e6348264131bf47bcd1829590e870c836dc893050fd0dadc7a28949f9d0a72f2805d027521b45441101f0cc1cde
Example Request Body
{
"enabled": false
}
Example Response Body
null
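This request can be issued from Python's standard library; below is a sketch (the token and pubkey values are placeholders, and the `Authorization` header follows the requirement noted earlier):

```python
import json
import urllib.request

def build_patch(url: str, pubkey: str, token: str, enabled: bool) -> urllib.request.Request:
    # Build a PATCH request that toggles a validator's `enabled` flag.
    body = json.dumps({"enabled": enabled}).encode()
    return urllib.request.Request(
        url + "/lighthouse/validators/" + pubkey,
        data=body,
        method="PATCH",
        headers={
            "Authorization": "Basic " + token,
            "Content-Type": "application/json",
        },
    )

# Sending it requires a running validator client, e.g.:
#   req = build_patch("http://localhost:5062", "<voting_pubkey>", "<api-token>", False)
#   urllib.request.urlopen(req)
```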
POST /lighthouse/validators/
Create any number of new validators, all of which will share a common mnemonic generated by the server.
A BIP-39 mnemonic will be randomly generated and returned with the response.
This mnemonic can be used to recover all keys returned in the response.
Validators are generated from the mnemonic according to
EIP-2334, starting at index 0
.
HTTP Specification
Property | Specification |
---|---|
Path | /lighthouse/validators |
Method | POST |
Required Headers | Authorization |
Typical Responses | 200 |
Example Request Body
[
{
"enable": true,
"description": "validator_one",
"deposit_gwei": "32000000000"
},
{
"enable": false,
"description": "validator two",
"deposit_gwei": "34000000000"
}
]
Example Response Body
{
"data": {
"mnemonic": "marine orchard scout label trim only narrow taste art belt betray soda deal diagram glare hero scare shadow ramp blur junior behave resource tourist",
"validators": [
{
"enabled": true,
"description": "validator_one",
"voting_pubkey": "0x8ffbc881fb60841a4546b4b385ec5e9b5090fd1c4395e568d98b74b94b41a912c6101113da39d43c101369eeb9b48e50",
"eth1_deposit_tx_data": "0x22895118000000000000000000000000000000000000000000000000000000000000008000000000000000000000000000000000000000000000000000000000000000e000000000000000000000000000000000000000000000000000000000000001206c68675776d418bfd63468789e7c68a6788c4dd45a3a911fe3d642668220bbf200000000000000000000000000000000000000000000000000000000000000308ffbc881fb60841a4546b4b385ec5e9b5090fd1c4395e568d98b74b94b41a912c6101113da39d43c101369eeb9b48e5000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000002000cf8b3abbf0ecd91f3b0affcc3a11e9c5f8066efb8982d354ee9a812219b17000000000000000000000000000000000000000000000000000000000000000608fbe2cc0e17a98d4a58bd7a65f0475a58850d3c048da7b718f8809d8943fee1dbd5677c04b5fa08a9c44d271d009edcd15caa56387dc217159b300aad66c2cf8040696d383d0bff37b2892a7fe9ba78b2220158f3dc1b9cd6357bdcaee3eb9f2",
"deposit_gwei": "32000000000"
},
{
"enabled": false,
"description": "validator two",
"voting_pubkey": "0xa9fadd620dc68e9fe0d6e1a69f6c54a0271ad65ab5a509e645e45c6e60ff8f4fc538f301781193a08b55821444801502",
"eth1_deposit_tx_data": "0x22895118000000000000000000000000000000000000000000000000000000000000008000000000000000000000000000000000000000000000000000000000000000e00000000000000000000000000000000000000000000000000000000000000120b1911954c1b8d23233e0e2bf8c4878c8f56d25a4f790ec09a94520ec88af30490000000000000000000000000000000000000000000000000000000000000030a9fadd620dc68e9fe0d6e1a69f6c54a0271ad65ab5a509e645e45c6e60ff8f4fc538f301781193a08b5582144480150200000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000002000a96df8b95c3ba749265e48a101f2ed974fffd7487487ed55f8dded99b617ad000000000000000000000000000000000000000000000000000000000000006090421299179824950e2f5a592ab1fdefe5349faea1e8126146a006b64777b74cce3cfc5b39d35b370e8f844e99c2dc1b19a1ebd38c7605f28e9c4540aea48f0bc48e853ae5f477fa81a9fc599d1732968c772730e1e47aaf5c5117bd045b788e",
"deposit_gwei": "34000000000"
}
]
}
}
POST /lighthouse/validators/keystore
Import a keystore into the validator client.
HTTP Specification
Property | Specification |
---|---|
Path | /lighthouse/validators/keystore |
Method | POST |
Required Headers | Authorization |
Typical Responses | 200 |
Example Request Body
{
"enable": true,
"password": "mypassword",
"keystore": {
"crypto": {
"kdf": {
"function": "scrypt",
"params": {
"dklen": 32,
"n": 262144,
"r": 8,
"p": 1,
"salt": "445989ec2f332bb6099605b4f1562c0df017488d8d7fb3709f99ebe31da94b49"
},
"message": ""
},
"checksum": {
"function": "sha256",
"params": {
},
"message": "abadc1285fd38b24a98ac586bda5b17a8f93fc1ff0778803dc32049578981236"
},
"cipher": {
"function": "aes-128-ctr",
"params": {
"iv": "65abb7e1d02eec9910d04299cc73efbe"
},
"message": "6b7931a4447be727a3bb5dc106d9f3c1ba50671648e522f213651d13450b6417"
}
},
"uuid": "5cf2a1fb-dcd6-4095-9ebf-7e4ee0204cab",
"path": "m/12381/3600/0/0/0",
"pubkey": "b0d2f05014de27c6d7981e4a920799db1c512ee7922932be6bf55729039147cf35a090bd4ab378fe2d133c36cbbc9969",
"version": 4,
"description": ""
}
}
Example Response Body
{
"data": {
"enabled": true,
"description": "",
"voting_pubkey": "0xb0d2f05014de27c6d7981e4a920799db1c512ee7922932be6bf55729039147cf35a090bd4ab378fe2d133c36cbbc9969"
}
}
POST /lighthouse/validators/mnemonic
Create any number of new validators, all of which will share a common mnemonic.
The supplied BIP-39 mnemonic will be used to generate the validator keys
according to EIP-2334, starting at
the supplied `key_derivation_path_offset`. For example, if
`key_derivation_path_offset = 42`, then the first validator voting key will be
generated with the path `m/12381/3600/i/42`.
HTTP Specification
Property | Specification |
---|---|
Path | /lighthouse/validators/mnemonic |
Method | POST |
Required Headers | Authorization |
Typical Responses | 200 |
Example Request Body
{
"mnemonic": "theme onion deal plastic claim silver fancy youth lock ordinary hotel elegant balance ridge web skill burger survey demand distance legal fish salad cloth",
"key_derivation_path_offset": 0,
"validators": [
{
"enable": true,
"description": "validator_one",
"deposit_gwei": "32000000000"
}
]
}
Example Response Body
{
"data": [
{
"enabled": true,
"description": "validator_one",
"voting_pubkey": "0xa062f95fee747144d5e511940624bc6546509eeaeae9383257a9c43e7ddc58c17c2bab4ae62053122184c381b90db380",
"eth1_deposit_tx_data": "0x22895118000000000000000000000000000000000000000000000000000000000000008000000000000000000000000000000000000000000000000000000000000000e00000000000000000000000000000000000000000000000000000000000000120a57324d95ae9c7abfb5cc9bd4db253ed0605dc8a19f84810bcf3f3874d0e703a0000000000000000000000000000000000000000000000000000000000000030a062f95fee747144d5e511940624bc6546509eeaeae9383257a9c43e7ddc58c17c2bab4ae62053122184c381b90db3800000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000200046e4199f18102b5d4e8842d0eeafaa1268ee2c21340c63f9c2cd5b03ff19320000000000000000000000000000000000000000000000000000000000000060b2a897b4ba4f3910e9090abc4c22f81f13e8923ea61c0043506950b6ae174aa643540554037b465670d28fa7b7d716a301e9b172297122acc56be1131621c072f7c0a73ea7b8c5a90ecd5da06d79d90afaea17cdeeef8ed323912c70ad62c04b",
"deposit_gwei": "32000000000"
}
]
}
Validator Client API: Authorization Header
Overview
The validator client HTTP server requires that all requests have the following HTTP header:
- Name: `Authorization`
- Value: `Basic <api-token>`
Where `<api-token>` is a string that can be obtained from the validator client
host. Here is an example `Authorization` header:
Authorization: Basic api-token-0x03eace4c98e8f77477bb99efb74f9af10d800bd3318f92c33b719a4644254d4123
Obtaining the API token
The API token can be obtained via two methods:
Method 1: Reading from a file
The API token is stored as a file in the `validators` directory. For most users
this is `~/.lighthouse/{network}/validators/api-token.txt`. Here's an
example using the `cat` command to print the token to the terminal, but any
text editor will suffice:
$ cat api-token.txt
api-token-0x03eace4c98e8f77477bb99efb74f9af10d800bd3318f92c33b719a4644254d4123
Method 2: Reading from logs
When starting the validator client it will output a log message containing an
`api-token` field:
Sep 28 19:17:52.615 INFO HTTP API started api_token: api-token-0x03eace4c98e8f77477bb99efb74f9af10d800bd3318f92c33b719a4644254d4123, listen_address: 127.0.0.1:5062
Example
Here is an example `curl` command using the API token in the `Authorization` header:
curl localhost:5062/lighthouse/version -H "Authorization: Basic api-token-0x03eace4c98e8f77477bb99efb74f9af10d800bd3318f92c33b719a4644254d4123"
The server should respond with its version:
{"data":{"version":"Lighthouse/v0.2.11-fc0654fbe+/x86_64-linux"}}
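The same request can be made from Python. A minimal sketch (paths and URLs assume the defaults described above; the helper names are illustrative) that reads the token from a file and attaches the header:

```python
import urllib.request
from pathlib import Path

def read_token(path: str) -> str:
    # The token file holds a single line such as "api-token-0x03ea...".
    return Path(path).read_text().strip()

def authed_request(url: str, token: str) -> urllib.request.Request:
    # Attach the Authorization header required by the validator client API.
    return urllib.request.Request(url, headers={"Authorization": "Basic " + token})

# Usage (requires a running validator client):
#   token = read_token("/home/user/.lighthouse/mainnet/validators/api-token.txt")
#   with urllib.request.urlopen(authed_request("http://localhost:5062/lighthouse/version", token)) as resp:
#       print(resp.read().decode())
```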
Validator Client API: Signature Header
Overview
The validator client HTTP server adds the following header to all responses:
- Name: `Signature`
- Value: a secp256k1 signature across the SHA256 of the response body.
Example `Signature` header:
Signature: 0x304402205b114366444112580bf455d919401e9c869f5af067cd496016ab70d428b5a99d0220067aede1eb5819eecfd5dd7a2b57c5ac2b98f25a7be214b05684b04523aef873
Verifying the Signature
Below is a browser-ready example of signature verification.
HTML
<script src="https://rawgit.com/emn178/js-sha256/master/src/sha256.js" type="text/javascript"></script>
<script src="https://rawgit.com/indutny/elliptic/master/dist/elliptic.min.js" type="text/javascript"></script>
JavaScript
// Helper function to turn a hex-string into bytes.
function hexStringToByte(str) {
if (!str) {
return new Uint8Array();
}
var a = [];
for (var i = 0, len = str.length; i < len; i+=2) {
a.push(parseInt(str.substr(i,2),16));
}
return new Uint8Array(a);
}
// This example uses the secp256k1 curve from the "elliptic" library:
//
// https://github.com/indutny/elliptic
var ec = new elliptic.ec('secp256k1');
// The public key is contained in the API token:
//
// Authorization: Basic api-token-0x03eace4c98e8f77477bb99efb74f9af10d800bd3318f92c33b719a4644254d4123
var pk_bytes = hexStringToByte('03eace4c98e8f77477bb99efb74f9af10d800bd3318f92c33b719a4644254d4123');
// The signature is in the `Signature` header of the response:
//
// Signature: 0x304402205b114366444112580bf455d919401e9c869f5af067cd496016ab70d428b5a99d0220067aede1eb5819eecfd5dd7a2b57c5ac2b98f25a7be214b05684b04523aef873
var sig_bytes = hexStringToByte('304402205b114366444112580bf455d919401e9c869f5af067cd496016ab70d428b5a99d0220067aede1eb5819eecfd5dd7a2b57c5ac2b98f25a7be214b05684b04523aef873');
// The HTTP response body.
var response_body = "{\"data\":{\"version\":\"Lighthouse/v0.2.11-fc0654fbe+/x86_64-linux\"}}";
// The HTTP response body is hashed (SHA256) to determine the 32-byte message.
let hash = sha256.create();
hash.update(response_body);
let message = hash.array();
// The 32-byte message hash, the signature and the public key are verified.
if (ec.verify(message, sig_bytes, pk_bytes)) {
console.log("The signature is valid")
} else {
console.log("The signature is invalid")
}
This example is also available as a JSFiddle.
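For readers who prefer Python, the message-hashing step can be reproduced with the standard library, along with a parse of the DER-encoded signature into its (r, s) integers. This is an illustrative sketch, not part of Lighthouse: the final secp256k1 curve check still requires a dedicated library (e.g. the third-party coincurve or ecdsa packages), which is not shown here.

```python
import hashlib

# The HTTP response body, exactly as received.
response_body = '{"data":{"version":"Lighthouse/v0.2.11-fc0654fbe+/x86_64-linux"}}'

# SHA256 of the body gives the 32-byte message that was signed.
message = hashlib.sha256(response_body.encode()).digest()

# The `Signature` header value, without the leading `0x`.
sig = bytes.fromhex(
    "304402205b114366444112580bf455d919401e9c869f5af067cd496016ab70d4"
    "28b5a99d0220067aede1eb5819eecfd5dd7a2b57c5ac2b98f25a7be214b05684"
    "b04523aef873"
)

def parse_der_ecdsa(sig: bytes):
    """Split a DER-encoded ECDSA signature into its (r, s) integers."""
    assert sig[0] == 0x30, "expected a DER SEQUENCE"
    assert sig[1] == len(sig) - 2, "bad outer length"
    assert sig[2] == 0x02, "expected an INTEGER for r"
    r_len = sig[3]
    r = int.from_bytes(sig[4:4 + r_len], "big")
    assert sig[4 + r_len] == 0x02, "expected an INTEGER for s"
    s_len = sig[5 + r_len]
    s = int.from_bytes(sig[6 + r_len:6 + r_len + s_len], "big")
    return r, s

r, s = parse_der_ecdsa(sig)
# `message`, (r, s) and the public key from the API token can now be
# passed to any secp256k1 ECDSA verifier.
```

Note that DER integers may carry a leading zero byte when their high bit is set; int.from_bytes handles that transparently.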
Example
The previous Javascript example was written using the output from the following
curl
command:
curl -v localhost:5062/lighthouse/version -H "Authorization: Basic api-token-0x03eace4c98e8f77477bb99efb74f9af10d800bd3318f92c33b719a4644254d4123"
* Trying ::1:5062...
* connect to ::1 port 5062 failed: Connection refused
* Trying 127.0.0.1:5062...
* Connected to localhost (127.0.0.1) port 5062 (#0)
> GET /lighthouse/version HTTP/1.1
> Host: localhost:5062
> User-Agent: curl/7.72.0
> Accept: */*
> Authorization: Basic api-token-0x03eace4c98e8f77477bb99efb74f9af10d800bd3318f92c33b719a4644254d4123
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< signature: 0x304402205b114366444112580bf455d919401e9c869f5af067cd496016ab70d428b5a99d0220067aede1eb5819eecfd5dd7a2b57c5ac2b98f25a7be214b05684b04523aef873
< server: Lighthouse/v0.2.11-fc0654fbe+/x86_64-linux
< access-control-allow-origin:
< content-length: 65
< date: Tue, 29 Sep 2020 04:23:46 GMT
<
* Connection #0 to host localhost left intact
{"data":{"version":"Lighthouse/v0.2.11-fc0654fbe+/x86_64-linux"}}
Prometheus Metrics
Lighthouse provides an extensive suite of metrics and monitoring in the Prometheus export format, via an HTTP server built into Lighthouse.
These metrics are generally consumed by a Prometheus server and displayed via a Grafana dashboard. These components are available in a docker-compose format at sigp/lighthouse-metrics.
Beacon Node Metrics
By default, these metrics are disabled but can be enabled with the --metrics flag. Use the --metrics-address, --metrics-port and --metrics-allow-origin flags to customize the metrics server.
Example
Start a beacon node with the metrics server enabled:
lighthouse bn --metrics
Check to ensure that the metrics are available on the default port:
curl localhost:5054/metrics
Validator Client Metrics
By default, these metrics are disabled but can be enabled with the --metrics flag. Use the --metrics-address, --metrics-port and --metrics-allow-origin flags to customize the metrics server.
Example
Start a validator client with the metrics server enabled:
lighthouse vc --metrics
Check to ensure that the metrics are available on the default port:
curl localhost:5064/metrics
Advanced Usage
Want to get into the nitty-gritty of Lighthouse configuration? Looking for something not covered elsewhere?
This section provides detailed information about configuring Lighthouse for specific use cases, and tips about how things work under the hood.
- Advanced Database Configuration: understanding space-time trade-offs in the database.
Custom Data Directories
Users can override the default Lighthouse data directories (e.g., ~/.lighthouse/mainnet) using the --datadir flag. The custom data directory mirrors the structure of any network-specific default directory (e.g. ~/.lighthouse/mainnet).
Note: Users should specify different custom directories for different networks.
Below is an example flow for importing validator keys, running a beacon node and validator client using a custom data directory /var/lib/my-custom-dir
for the Mainnet network.
lighthouse --network mainnet --datadir /var/lib/my-custom-dir account validator import --directory <PATH-TO-LAUNCHPAD-KEYS-DIRECTORY>
lighthouse --network mainnet --datadir /var/lib/my-custom-dir bn --staking
lighthouse --network mainnet --datadir /var/lib/my-custom-dir vc
The first step creates a validators directory under /var/lib/my-custom-dir which contains the imported keys and validator_definitions.yml. After that, we simply run the beacon node and validator client with the custom data directory path.
Database Configuration
Lighthouse uses an efficient "split" database schema, whereby finalized states are stored separately from recent, unfinalized states. We refer to the portion of the database storing finalized states as the freezer or cold DB, and the portion storing recent states as the hot DB.
In both the hot and cold DBs, full BeaconState data structures are only stored periodically, and intermediate states are reconstructed by quickly replaying blocks on top of the nearest stored state. For example, to fetch a state at slot 7, the database might fetch a full state from slot 0 and replay blocks from slots 1-7 while omitting redundant signature checks and Merkle root calculations. The full states upon which blocks are replayed are referred to as restore points in the case of the freezer DB, and epoch boundary states in the case of the hot DB.
The frequency at which the hot database stores full BeaconStates is fixed at one state per epoch in order to keep loads of recent states performant. For the freezer DB, the frequency is configurable via the --slots-per-restore-point CLI flag, which is the topic of the next section.
Freezer DB Space-time Trade-offs
Frequent restore points use more disk space but accelerate the loading of historical states. Conversely, infrequent restore points use much less space, but cause the loading of historical states to slow down dramatically. A lower slots per restore point value (SPRP) corresponds to more frequent restore points, while a higher SPRP corresponds to less frequent. The table below shows some example values.
| Use Case | SPRP | Yearly Disk Usage | Load Historical State |
|---|---|---|---|
| Block explorer/analysis | 32 | 411 GB | 96 ms |
| Default | 2048 | 6.4 GB | 6 s |
| Validator only | 8192 | 1.6 GB | 25 s |
As you can see, it's a high-stakes trade-off! The relationships to disk usage and historical state load time are both linear – doubling SPRP halves disk usage and doubles load time. The minimum SPRP is 32, and the maximum is 8192.
The values shown in the table are approximate, calculated using a simple heuristic: each BeaconState consumes around 5 MB of disk space, and each replayed block takes around 3 ms. The Yearly Disk Usage column shows the approximate size of the freezer DB alone (hot DB not included), and the Load Historical State time is the worst-case load time for a state in the last slot of an epoch.
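The heuristic above can be sketched in a few lines of Python. The constants (5 MB per state, 3 ms per replayed block, 12-second slots) come from the text; the functions themselves are illustrative, not part of Lighthouse:

```python
# Heuristic from the text: each stored BeaconState is ~5 MB and each
# replayed block costs ~3 ms. A slot occurs every 12 seconds.
SLOTS_PER_YEAR = 365.25 * 24 * 60 * 60 / 12
STATE_SIZE_MB = 5
BLOCK_REPLAY_MS = 3

def freezer_disk_gb(sprp: int) -> float:
    """Approximate yearly freezer DB growth (GB) for a given SPRP."""
    states_per_year = SLOTS_PER_YEAR / sprp
    return states_per_year * STATE_SIZE_MB / 1000

def worst_case_load_s(sprp: int) -> float:
    """Worst-case historical state load time (seconds): replay SPRP blocks."""
    return sprp * BLOCK_REPLAY_MS / 1000
```

Evaluating these at SPRP values of 32, 2048 and 8192 reproduces the table above (~411 GB/96 ms, ~6.4 GB/6 s and ~1.6 GB/25 s respectively), and also shows the linear trade-off: doubling SPRP halves disk usage and doubles load time.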
To configure your Lighthouse node's database with a non-default SPRP, run your Beacon Node with
the --slots-per-restore-point
flag:
lighthouse beacon_node --slots-per-restore-point 8192
Glossary
- Freezer DB: part of the database storing finalized states. States are stored in a sparser format, and usually less frequently than in the hot DB.
- Cold DB: see Freezer DB.
- Hot DB: part of the database storing recent states, all blocks, and other runtime data. Full states are stored every epoch.
- Restore Point: a full BeaconState stored periodically in the freezer DB.
- Slots Per Restore Point (SPRP): the number of slots between restore points in the freezer DB.
- Split Slot: the slot at which states are divided between the hot and the cold DBs. All states from slots less than the split slot are in the freezer, while all states with slots greater than or equal to the split slot are in the hot DB.
Local Testnets
During development and testing it can be useful to start a small, local testnet.
The scripts/local_testnet/ directory contains several scripts and a README that should make this process easy.
Advanced Networking
Lighthouse's networking stack has a number of configurable parameters that can be adjusted to handle a variety of network situations. This section outlines some of these configuration parameters, their consequences at the networking level, and their general intended use.
Target Peers
The beacon node has a --target-peers CLI parameter that instructs it how many peers it should try to find and maintain. Lighthouse allows an additional 10% of this value for other nodes to connect to us. Every 30 seconds, the excess peers are pruned: Lighthouse removes the worst-performing peers and keeps the best-performing ones.
It may be counter-intuitive, but a very large peer count will likely degrade a beacon node's performance, both in normal operation and during sync.
Having a large peer count means that your node must act as an honest RPC server to all of its connected peers. If many of them are syncing, they will often request large numbers of blocks, forcing your node to do a lot of work reading and responding. If your node is overloaded with peers and cannot respond in time, other Lighthouse peers will consider it non-performant and disfavour it in their peer stores. Your node will also have to handle the gossip and extra bandwidth that come with these extra peers. A non-responsive node (due to an overload of connected peers) degrades the network as a whole.
It is a common belief that a higher peer count will improve sync times. Beyond a handful of peers, this is not true. On all networks tested so far, the bottleneck for syncing is not the bandwidth required to download blocks, but rather the CPU load of processing them. Most of the time, the network is idle, waiting for blocks to be processed. A very large peer count will not speed up sync.
For these reasons, we recommend that users do not drastically modify the --target-peers count and instead use the recommended default.
NAT Traversal (Port Forwarding)
Lighthouse, by default, uses port 9000 for both TCP and UDP. Lighthouse will still function if it is behind a NAT without any port mappings. Nevertheless, we recommend using some mechanism to ensure that your Lighthouse node is publicly accessible. This will typically improve your peer count, allow the scoring system to find the best/most favourable peers for your node, and improve the eth2 network overall.
Lighthouse currently supports UPnP. If UPnP is enabled on your router, Lighthouse will automatically establish the port mappings for you (the beacon node will inform you of established routes in this case). If UPnP is not enabled, we recommend you manually set up port mappings to both of Lighthouse's TCP and UDP ports (9000 by default).
ENR Configuration
Lighthouse has a number of CLI parameters for constructing and modifying the local Ethereum Node Record (ENR), for example --enr-address, --enr-udp-port, --enr-tcp-port and --disable-enr-auto-update. These settings allow you to construct your initial ENR. Their primary intention is for setting up boot-like nodes that have a contactable ENR on boot. In normal operation of a Lighthouse node, none of these flags need to be set. Setting these flags incorrectly can lead to your node being incorrectly added to the global DHT, which degrades the discovery process for all Eth2 peers.
The ENR of a Lighthouse node is initially set to be non-contactable. The in-built discovery mechanism can determine whether your node is publicly accessible and, if it is, will update your ENR with the correct public IP address and port (meaning you do not need to set them manually). Lighthouse persists its ENR, so on reboot it will reload the settings it discovered previously.
Modifying the ENR settings can degrade the discovery of your node, making it harder for peers to find you and potentially harder for other peers to find each other. We recommend leaving these settings alone unless you have a more advanced use case.
Running a Slasher
Lighthouse includes a slasher for identifying slashable offences committed by other validators and including proof of those offences in blocks.
Running a slasher is a good way to contribute to the health of the network, and doing so can earn extra income for your validators. However, it is currently only recommended for expert users because of the immaturity of the slasher UX and the extra resources required.
Minimum System Requirements
- Quad-core CPU
- 16 GB RAM
- 256 GB solid state storage (in addition to space for the beacon node DB)
How to Run
The slasher runs inside the same process as the beacon node, when enabled via the --slasher
flag:
lighthouse bn --slasher --debug-level debug
The slasher hooks into Lighthouse's block and attestation processing, and pushes messages into an in-memory queue for regular processing. It will increase the CPU usage of the beacon node because it verifies the signatures of otherwise invalid messages. When a slasher batch update runs, the messages are filtered for relevancy, and all relevant messages are checked for slashings and written to the slasher database.
You should run with debug logs, so that you can see the slasher's internal machinations, and provide logs to the devs should you encounter any bugs.
Configuration
The slasher has several configuration options that control its functioning.
Database Directory
- Flag: --slasher-dir PATH
- Argument: path to directory
By default the slasher stores data in the slasher_db directory inside the beacon node's datadir, e.g. ~/.lighthouse/{network}/beacon/slasher_db. You can use this flag to change that storage directory.
History Length
- Flag: --slasher-history-length EPOCHS
- Argument: number of epochs
- Default: 4096 epochs
The slasher stores data for the history-length most recent epochs. By default the history length is set high in order to catch all validator misbehaviour since the last weak subjectivity checkpoint. If you would like to reduce the resource requirements (particularly disk space), set the history length to a lower value, although a lower history length may prevent your slasher from finding some slashings.
Note: See the --slasher-max-db-size
section below to ensure that your disk space savings are
applied. The history length must be a multiple of the chunk size (default 16), and cannot be
changed after initialization.
Max Database Size
- Flag: --slasher-max-db-size GIGABYTES
- Argument: maximum size of the database in gigabytes
- Default: 256 GB
The slasher uses LMDB as its backing store, and LMDB will consume up to the maximum amount of disk space allocated to it. By default the limit is set to accommodate the default history length and around 150K validators, but you can set it lower if running with a reduced history length. The space required scales approximately linearly with validator count and history length, i.e. if you halve either, you can halve the space required.
If you want a better estimate you can use this formula:
360 * V * N + (16 * V * N)/(C * K) + 15000 * N
where:
- V is the validator count
- N is the history length
- C is the chunk size
- K is the validator chunk size
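As a back-of-envelope check, the formula can be evaluated for the defaults (around 150K validators, a 4096-epoch history length, chunk size 16, validator chunk size 256). This sketch assumes the formula yields bytes, which is consistent with the 256 GB default limit:

```python
def slasher_db_bytes(v: int, n: int, c: int = 16, k: int = 256) -> int:
    """Evaluate 360*V*N + (16*V*N)/(C*K) + 15000*N.

    v: validator count, n: history length (epochs), c: chunk size,
    k: validator chunk size. Units are assumed to be bytes.
    """
    return 360 * v * n + (16 * v * n) // (c * k) + 15000 * n

# Defaults: ~150K validators with the 4096-epoch history length.
default_estimate_gb = slasher_db_bytes(150_000, 4096) / 10**9

# Halving the history length roughly halves the estimate, matching the
# linear scaling described above.
reduced_estimate_gb = slasher_db_bytes(150_000, 2048) / 10**9
```

This yields roughly 221 GB for the defaults, comfortably inside the 256 GB limit.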
Update Period
- Flag: --slasher-update-period SECONDS
- Argument: number of seconds
- Default: 12 seconds
Set the length of the time interval between each slasher batch update. You can check if your slasher is keeping up with its update period by looking for a log message like this:
DEBG Completed slasher update num_blocks: 1, num_attestations: 279, time_taken: 1821ms, epoch: 20889, service: slasher
If the time_taken
is substantially longer than the update period then it indicates your machine is
struggling under the load, and you should consider increasing the update period or lowering the
resource requirements by tweaking the history length.
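If you want to check this automatically, a small helper can pull the time_taken field out of such a log line and compare it to the update period. The regex and function names here are hypothetical, not part of Lighthouse:

```python
import re

# The example slasher update log line from the text.
LOG_LINE = (
    "DEBG Completed slasher update num_blocks: 1, num_attestations: 279, "
    "time_taken: 1821ms, epoch: 20889, service: slasher"
)

def time_taken_ms(line: str) -> int:
    """Extract the `time_taken` value (in ms) from a slasher update log line."""
    match = re.search(r"time_taken: (\d+)ms", line)
    if match is None:
        raise ValueError("no time_taken field found")
    return int(match.group(1))

def keeping_up(line: str, update_period_s: int = 12) -> bool:
    """True if the batch completed within the configured update period."""
    return time_taken_ms(line) < update_period_s * 1000
```

Here 1821 ms is comfortably under the default 12-second period, so this slasher is keeping up.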
Chunk Size and Validator Chunk Size
- Flags: --slasher-chunk-size EPOCHS, --slasher-validator-chunk-size NUM_VALIDATORS
- Arguments: number of epochs, number of validators
- Defaults: 16, 256
Adjusting these parameters should only be done in conjunction with reading in detail about how the slasher works, and/or reading the source code.
Short-Range Example
If you would like to run a lightweight slasher that just checks blocks and attestations within the last day or so, you can use this combination of arguments:
lighthouse bn --slasher --slasher-history-length 256 --slasher-max-db-size 16 --debug-level debug
Stability Warning
The slasher code is still quite new, so we may update the schema of the slasher database in a backwards-incompatible way which will require re-initialization.
Redundancy
There are three places in Lighthouse where redundancy is notable:
- ✅ GOOD: Using a redundant beacon node via lighthouse vc --beacon-nodes
- ✅ GOOD: Using a redundant Eth1 node via lighthouse bn --eth1-endpoints
- ☠️ BAD: Running redundant lighthouse vc instances with overlapping keypairs.
We mention (3) since it is unsafe and should not be confused with the other two uses of redundancy. Running the same validator keypair in more than one validator client (Lighthouse or otherwise) will eventually lead to slashing. See Slashing Protection for more information.
From here on, this document will only refer to the first two items (1, 2). We never recommend that users implement redundancy for validator keypairs.
Redundant Beacon Nodes
The lighthouse vc --beacon-nodes flag allows one or more comma-separated values:
lighthouse vc --beacon-nodes http://localhost:5052
lighthouse vc --beacon-nodes http://localhost:5052,http://192.168.1.1:5052
In the first example, the validator client will attempt to contact
http://localhost:5052
to perform duties. If that node is not contactable, not
synced or unable to serve the request then the validator client may fail to
perform some duty (e.g., produce a block or attest).
However, in the second example, any failure on http://localhost:5052
will be
followed by a second attempt using http://192.168.1.1:5052
. This
achieves redundancy, allowing the validator client to continue to perform its
duties as long as at least one of the beacon nodes is available.
There are a few interesting properties of the --beacon-nodes list:
- Ordering matters: the validator client prefers a beacon node that appears earlier in the list.
- Synced is preferred: the validator client prefers a synced beacon node over one that is still syncing.
- Failure is sticky: if a beacon node fails, it will be flagged as offline and won't be retried again for the rest of the slot (12 seconds). This helps limit the impact of time-outs and other lengthy errors.
Note: When supplying multiple beacon nodes, the http://localhost:5052 address must be explicitly provided (if it is desired). It is only used as a default when no --beacon-nodes flag is provided at all.
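The properties above can be modelled with a short Python sketch. This is purely illustrative of the rules as described; Lighthouse's actual fallback logic lives in the validator client and differs in detail:

```python
import time

SLOT_SECONDS = 12

class BeaconNode:
    """Minimal stand-in for an entry in the --beacon-nodes list."""
    def __init__(self, url: str, synced: bool = False):
        self.url = url
        self.synced = synced
        self.offline_until = 0.0  # "failure is sticky" marker

def pick_node(nodes, now=None):
    """Return the first usable node, preferring synced nodes over syncing ones."""
    now = time.monotonic() if now is None else now
    usable = [n for n in nodes if n.offline_until <= now]
    synced = [n for n in usable if n.synced]
    candidates = synced or usable  # "synced is preferred"
    return candidates[0] if candidates else None  # "ordering matters"

def mark_failed(node, now=None):
    """Flag a failed node as offline for the rest of the slot (12 seconds)."""
    now = time.monotonic() if now is None else now
    node.offline_until = now + SLOT_SECONDS
```

With a local node listed first and a backup second, the backup is only selected while the local node is unsynced or flagged offline.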
Configuring a redundant Beacon Node
In our previous example we listed http://192.168.1.1:5052
as a redundant
node. Apart from having sufficient resources, the backup node should have the
following flags:
- --staking: starts the HTTP API server and ensures the Eth1 chain is synced.
- --http-address 0.0.0.0: allows any external IP address to access the HTTP server (a firewall should be configured to deny unauthorized access to port 5052). This is only required if your backup node is on a different host.
- --subscribe-all-subnets: ensures that the beacon node subscribes to all subnets, not just on-demand requests from validators.
- --process-all-attestations: ensures that the beacon node performs aggregation on all seen attestations.
Subsequently, one could use the following command to provide a backup beacon node:
lighthouse bn \
--staking \
--http-address 0.0.0.0 \
--subscribe-all-subnets \
--process-all-attestations
Resource usage of redundant Beacon Nodes
The --subscribe-all-subnets
and --process-all-attestations
flags typically
cause a significant increase in resource consumption. A doubling in CPU
utilization and RAM consumption is expected.
The increase in resource consumption is due to the fact that the beacon node is now processing, validating, aggregating and forwarding all attestations, whereas previously it was likely only doing a fraction of this work. Without these flags, subscription to attestation subnets and aggregation of attestations is only performed for validators which explicitly request subscriptions.
There are 64 subnets and each validator will result in a subscription to at least one subnet. So, using the two aforementioned flags will result in resource consumption akin to running 64+ validators.
Redundant Eth1 nodes
Compared to redundancy in beacon nodes (see above), using redundant Eth1 nodes is very straightforward:
lighthouse bn --eth1-endpoints http://localhost:8545
lighthouse bn --eth1-endpoints http://localhost:8545,http://192.168.0.1:8545
In the case of (1), any failure on http://localhost:8545
will result in a
failure to update the Eth1 cache in the beacon node. Consistent failure over a
period of hours may result in a failure in block production.
However, in the case of (2), the http://192.168.0.1:8545
Eth1 endpoint will
be tried each time the first fails. Eth1 endpoints will be tried from first to
last in the list, until a successful response is obtained.
There is no need for special configuration on the Eth1 endpoint; all endpoints can (and probably should) be configured identically.
Note: When supplying multiple endpoints, the http://localhost:8545 address must be explicitly provided (if it is desired). It is only used as a default when no --eth1-endpoints flag is provided at all.
Contributing to Lighthouse
Lighthouse welcomes contributions. If you are interested in contributing to the Ethereum ecosystem, and you want to learn Rust, Lighthouse is a great project to work on.
To start contributing:
- Read our how to contribute document.
- Set up a development environment.
- Browse through the open issues (tip: look for the good first issue tag).
- Comment on an issue before starting work.
- Share your work via a pull request.
If you have questions, please reach out via Discord.
Branches
Lighthouse maintains two permanent branches:
- stable: always points to the latest stable release. This is ideal for most users.
- unstable: used for development; contains the latest PRs. Developers should base their PRs on this branch.
Ethereum 2.0
Lighthouse is an implementation of the Ethereum 2.0 specification, as defined in the ethereum/eth2.0-specs repository.
We recommend reading Danny Ryan's (incomplete) Phase 0 for Humans before diving into the canonical spec.
Rust
Lighthouse adheres to Rust code conventions as outlined in the Rust Styleguide.
Please use clippy and rustfmt to detect common mistakes and inconsistent code formatting:
$ cargo clippy --all
$ cargo fmt --all --check
Panics
Generally, panics should be avoided at all costs. Lighthouse operates in an adversarial environment (the Internet) and it's a severe vulnerability if people on the Internet can cause Lighthouse to crash via a panic.
Always prefer returning a Result
or Option
over causing a panic. For
example, prefer array.get(1)?
over array[1]
.
If you know there won't be a panic but can't express that to the compiler,
use .expect("Helpful message")
instead of .unwrap()
. Always provide
detailed reasoning in a nearby comment when making assumptions about panics.
TODOs
All TODO
statements should be accompanied by a GitHub issue.
pub fn my_function(&mut self, _something: &[u8]) -> Result<String, Error> {
    // TODO: something_here
    // https://github.com/sigp/lighthouse/issues/XX
}
Comments
General Comments
- Prefer line (//) comments to block comments (/* ... */).
- Comments can appear on the line prior to the item or after a trailing space.
// Comment for this struct
struct Lighthouse {}

fn make_blockchain() {} // A comment on the same line after a space
Doc Comments
- The /// comment syntax is used to generate documentation.
- Doc comments should come before attributes.
/// Stores the core configuration for this Lighthouse instance.
/// This struct is general, other components may implement more
/// specialized config structs.
#[derive(Clone)]
pub struct LighthouseConfig {
    pub data_dir: PathBuf,
    pub p2p_listen_port: u16,
}
Rust Resources
Rust is an extremely powerful, low-level programming language that provides freedom and performance to create powerful projects. The Rust Book provides insight into the Rust language and some of the coding style to follow (as well as acting as a great introduction and tutorial for the language).
Rust has a steep learning curve, but there are many resources to help. We suggest:
- Rust Book
- Rust by example
- Learning Rust With Entirely Too Many Linked Lists
- Rustlings
- Rust Exercism
- Learn X in Y minutes - Rust
Development Environment
Most Lighthouse developers work on Linux or MacOS, however Windows should still be suitable.
First, follow the Installation Guide
to install
Lighthouse. This will install Lighthouse to your PATH
, which is not
particularly useful for development but still a good way to ensure you have the
base dependencies.
The only additional requirement for developers is
ganache-cli
. This is used to
simulate the Eth1 chain during tests. You'll get failures during tests if you
don't have ganache-cli
available on your PATH
.
Testing
As with most other Rust projects, Lighthouse uses cargo test
for unit and
integration tests. For example, to test the ssz
crate run:
cd consensus/ssz
cargo test
We also wrap some of these commands and expose them via the Makefile
in the
project root for the benefit of CI/CD. We list some of these commands below so
you can run them locally and avoid CI failures:
- $ make cargo-fmt: (fast) runs a Rust code linter.
- $ make test: (medium) runs unit tests across the whole project.
- $ make test-ef: (medium) runs the Ethereum Foundation test vectors.
- $ make test-full: (slow) runs the full test suite (including all previous commands). This is approximately everything that is required to pass CI.
The Lighthouse test suite is quite extensive; running the whole suite may take 30+ minutes.
Ethereum 2.0 Spec Tests
The ethereum/eth2.0-spec-tests repository contains a large set of tests that verify Lighthouse behaviour against the Ethereum Foundation specifications.
These tests are quite large (hundreds of MB) so they're only downloaded if you run $ make test-ef (or anything that runs it). You may want to avoid downloading these tests if you're on a slow or metered Internet connection. CI will require them to pass, though.
Frequently Asked Questions
- Why does it take so long for a validator to be activated?
- Do I need to set up any port mappings?
- I have a low peer count and it is not increasing
- What should I do if I lose my slashing protection database?
- How do I update Lighthouse?
- I can't compile Lighthouse
- What is "Syncing eth1 block cache"?
- Can I use redundancy in my staking setup?
- How can I monitor my validators?
Why does it take so long for a validator to be activated?
After validators create their Eth1 deposit transaction there are two waiting periods before they can start producing blocks and attestations:
- Waiting for the beacon chain to recognise the Eth1 block containing the deposit (generally 4 to 7.4 hours).
- Waiting in the queue for validator activation (generally 6.4 minutes for every 4 validators in the queue).
Detailed answers below:
1. Waiting for the beacon chain to detect the Eth1 deposit
Since the beacon chain uses Eth1 for validator on-boarding, beacon chain
validators must listen to event logs from the deposit contract. Since the
latest blocks of the Eth1 chain are vulnerable to re-orgs due to minor network
partitions, beacon nodes follow the Eth1 chain at a distance of 1,024 blocks
(~4 hours) (see
ETH1_FOLLOW_DISTANCE
).
This follow distance protects the beacon chain from on-boarding validators that
are likely to be removed due to an Eth1 re-org.
Now we know there's a ~4-hour delay before beacon nodes even consider an Eth1 block. Once they are considering these blocks, there's a voting period where beacon validators vote on which Eth1 block to include in the beacon chain. This period is defined as 32 epochs (~3.4 hours, see ETH1_VOTING_PERIOD).
During this voting period, each beacon block producer includes an
Eth1Data
in their block which counts as a vote towards what that validator considers to
be the head of the Eth1 chain at the start of the voting period (with respect
to ETH1_FOLLOW_DISTANCE
, of course). You can see the exact voting logic
here.
These two delays combined represent the time between an Eth1 deposit being included in an Eth1 data vote and that validator appearing in the beacon chain. The ETH1_FOLLOW_DISTANCE delay causes a minimum delay of ~4 hours, and ETH1_VOTING_PERIOD means that if a validator deposit happens just before the start of a new voting period then they might not notice any additional delay at all. However, if the validator deposit happens just after the start of the new voting period, the validator might have to wait ~3.4 hours for the next voting period. In times of very, very severe network issues, the network may even fail to vote in new Eth1 blocks, stopping all new validator deposits!
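The arithmetic behind these delays can be sketched as follows, assuming ~14 seconds per Eth1 block (an approximation not stated in the text) and the constants quoted above:

```python
# Mainnet constants referenced above, plus an assumed ~14 s Eth1 block time.
ETH1_FOLLOW_DISTANCE = 1024       # blocks
ETH1_BLOCK_SECONDS = 14           # rough Eth1 average (assumption)
SLOT_SECONDS = 12
SLOTS_PER_EPOCH = 32
ETH1_VOTING_PERIOD_EPOCHS = 32

# Delay 1: following the Eth1 chain at a distance of 1,024 blocks.
follow_delay_h = ETH1_FOLLOW_DISTANCE * ETH1_BLOCK_SECONDS / 3600

# Delay 2: one full Eth1 voting period of 32 epochs.
voting_period_h = ETH1_VOTING_PERIOD_EPOCHS * SLOTS_PER_EPOCH * SLOT_SECONDS / 3600

# Best case: the deposit lands just before a voting period starts.
best_case_h = follow_delay_h
# Worst case: it just misses one, adding a full voting period.
worst_case_h = follow_delay_h + voting_period_h
```

These round to ~4.0 and ~7.4 hours, matching the "generally 4 to 7.4 hours" figure quoted at the start of this answer.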
Note: you can see the list of validators included in the beacon chain using our REST API: /beacon/validators/all
2. Waiting for a validator to be activated
If a validator has provided an invalid public key or signature, they will never be activated or even show up in /beacon/validators/all. They will simply be forgotten by the beacon chain! But, if those parameters were correct, once the Eth1 delays have elapsed and the validator appears in the beacon chain, there's another delay before the validator becomes "active" (canonical definition here) and can start producing blocks and attestations.
Firstly, the validator won't become active until their beacon chain balance is
equal to or greater than
MAX_EFFECTIVE_BALANCE
(32 ETH on mainnet, usually 3.2 ETH on testnets). Once this balance is reached,
the validator must wait until the start of the next epoch (up to 6.4 minutes)
for the
process_registry_updates
routine to run. This routine activates validators with respect to a churn
limit;
it will only allow the number of validators to increase (churn) by a certain
amount. Up until there are about 330,000 validators this churn limit is set to
4 and it starts to very slowly increase as the number of validators increases
from there.
If a new validator isn't within the churn limit from the front of the queue, they will need to wait another epoch (6.4 minutes) for their next chance. This repeats until the queue is cleared.
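The queue arithmetic can be sketched as follows, assuming the churn limit of 4 described above (the helper is illustrative, not a Lighthouse API):

```python
import math

# Each epoch lasts 6.4 minutes; `churn_limit` validators are activated
# from the front of the queue per epoch.
EPOCH_MINUTES = 6.4

def activation_wait_minutes(queue_position: int, churn_limit: int = 4) -> float:
    """Approximate activation wait for a validator at `queue_position` (1-indexed)."""
    epochs = math.ceil(queue_position / churn_limit)
    return epochs * EPOCH_MINUTES
```

For example, a validator at position 1000 in the queue would wait around 250 epochs, i.e. roughly 26-27 hours, with the churn limit at 4.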
Once a validator has been activated, there's no more waiting! It's time to produce blocks and attestations!
Do I need to set up any port mappings?
It is not strictly required to open any ports for Lighthouse to connect and participate in the network; Lighthouse should work out-of-the-box. However, if your node is not publicly accessible (you are behind a NAT or a router that has not been configured to allow access to Lighthouse's ports), you will only be able to reach peers whose setup is publicly accessible.
There are a number of undesired consequences of not making your Lighthouse node publicly accessible.
Firstly, it will make it more difficult for your node to find peers, as your node will not be added to the global DHT and other peers will not be able to initiate connections with you. Secondly, the peers in your peer store are more likely to end connections with you and be less performant, as these peers will likely be overloaded with subscribing peers. The reason is that peers with correct port forwarding (i.e., that are publicly accessible) are in higher demand than regular peers, since other nodes behind NATs will also be looking for them. Finally, not making your node publicly accessible degrades the overall network, making it more difficult for other peers to join and degrading the connectivity of the global network.
For these reasons, we recommend that you make your node publicly accessible.
Lighthouse supports UPnP. If you are behind a NAT with a router that supports UPnP you can simply ensure UPnP is enabled (Lighthouse will inform you in its initial logs if a route has been established). You can also manually set up port mappings in your router to your local Lighthouse instance. By default, Lighthouse uses port 9000 for both TCP and UDP. Opening both these ports will make your Lighthouse node maximally contactable.
I have a low peer count and it is not increasing
If you cannot find ANY peers at all, it is likely that you have incorrect testnet configuration settings. Ensure that the network you wish to connect to is correct (the beacon node outputs the network it is connecting to in the initial boot-up log lines). On top of this, ensure that you are not using the same `datadir` as a previous network, i.e. if you have been running the `pyrmont` testnet and are now trying to join a new testnet while using the same `datadir` (the `datadir` is also printed in the beacon node's logs on boot-up).
If your peer count is low and not reaching the target you expect, try setting up the correct port forwards as described in the port-mapping question above.
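If you started the beacon node with the HTTP API enabled (`--http`), you can also watch the peer count directly via the standard beacon node API. A minimal sketch, assuming the default API address of `http://localhost:5052`:

```shell
# Query the standard beacon node API for the current peer count.
# http://localhost:5052 is the default --http address; adjust if yours differs.
API="http://localhost:5052"
curl -s --max-time 5 "${API}/eth/v1/node/peer_count" \
  || echo "could not reach the beacon node API at ${API}"
```

The response reports `connected`, `connecting`, `disconnected` and `disconnecting` counts; a healthy node should show a steadily non-zero `connected` figure.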
What should I do if I lose my slashing protection database?
See here.
How do I update Lighthouse?
If you are updating to new release binaries, it will be the same process as described here.
If you are updating by rebuilding from source, see here.
If you are running the docker image provided by Sigma Prime on Dockerhub, you can update to specific versions, for example:
$ docker pull sigp/lighthouse:v1.0.0
If you are building a docker image, the process will be similar to the one described here. You will just also need to make sure the code you have checked out is up to date.
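As a sketch, updating a locally-built image could look like the following. This assumes you originally built from a checkout of the `sigp/lighthouse` repository, which ships a `Dockerfile` at its root; the image tag `lighthouse:local` is just an example.

```shell
# Update the source checkout, then rebuild the image from the bundled Dockerfile.
cd lighthouse                        # your existing checkout of github.com/sigp/lighthouse
git pull                             # fetch the latest code
docker build -t lighthouse:local .   # example tag; use whatever tag you prefer
```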
I can't compile Lighthouse
See here.
What is "Syncing eth1 block cache"?
Nov 30 21:04:28.268 WARN Syncing eth1 block cache est_blocks_remaining: initializing deposits, msg: sync can take longer when using remote eth1 nodes, service: slot_notifier
This log indicates that your beacon node is downloading blocks and deposits from your eth1 node. When the `est_blocks_remaining` is `initializing deposits`, your node is downloading deposit logs. It may stay in this stage for several minutes. Once the deposit logs are finished downloading, the `est_blocks_remaining` value will start decreasing.
It is perfectly normal to see this log when starting a node for the first time or after it has been offline for more than a few minutes.
If this log continues appearing sporadically during operation, there may be an issue with your eth1 endpoint.
Can I use redundancy in my staking setup?
You should never use duplicate/redundant validator keypairs or validator clients (i.e., don't duplicate your JSON keystores and don't run `lighthouse vc` twice). This will lead to slashing.
However, there are some components which can be configured with redundancy. See the Redundancy guide for more information.
How can I monitor my validators?
Apart from using block explorers, you may use the "Validator Monitor" built into Lighthouse which provides logging and Prometheus/Grafana metrics for individual validators. See Validator Monitoring for more information.