please me more with avax

This commit is contained in:
Sebastian
2023-01-15 19:16:12 +01:00
parent 28d7accdf5
commit 25e93d9a27


How to bootstrap an Avalanche archive node with Docker
====
Also EASY
------
[Celo](howto-celo-archive.md) | [Optimism](howto-optimism-archive.md) | [Avalanche](howto-avalanche-archive.md) | [Arbitrum](howto-arbitrum-archive.md) | [Gnosis](http://rpc.bash-st.art) | [Polygon](http://rpc.bash-st.art) | [Ethereum](http://rpc.bash-st.art)
[Very EASY](http://rpc.bash-st.art)
Prerequisites
====
* Storage: 4 TiB NVMe SSD
* OS: Ubuntu 22.04
The main requirement here is the storage.

* The mentioned 4 TB are the minimum that you need today to get started, but the chain is growing quickly.
* Be aware that the operating system needs disk space and that formatting the drive reduces the available space as well. A typical 4 TB drive actually comes with 3.84 TB of disk space, of which 3.65 TB is available to the operating system after formatting. Leave 200 GB of that free just in case, and you end up with 3.45 TB for the node's data dir.
* Thus you should probably invest in an array of two 4 TB disks, e.g. by configuring them to run in RAID0. Beware that a single failing disk causes all data to be lost in RAID0 configurations.
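The capacity arithmetic above can be sanity-checked with plain shell arithmetic (the TB figures are rough marketing-vs-usable numbers, not exact values):

```shell
# Rough usable-capacity estimate for one "4 TB" drive, in GB.
formatted=3650   # ~3.65 TB visible to the OS after formatting
reserve=200      # headroom to keep free, just in case
echo "$(( formatted - reserve )) GB left for the node's data dir"   # 3450 GB ~= 3.45 TB
```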
**Sync times are reported to be in the range of 3 weeks on dedicated hardware.**

There are currently no snapshots available for download, therefore the syncing process takes a considerable amount of time. It is almost impossible on slow disks, e.g. attached network storage from cloud providers is a no-go. The CPU should also feature a high single-core speed.
Install Required Software
Create a new folder and place a new text file named docker-compose.yml into it.
Copy and paste the following content into the file, then save it by closing with ctrl-x and answering "y" at the next prompt.
```
version: '3.1'

services:
volumes:
  avalanche:
  traefik_letsencrypt:
```
Find out the IP address of the machine that you are on. It needs to be whitelisted to connect to the RPC that we create. You can query it using curl by entering the following in the terminal.
curl ifconfig.me
For the SSL certificate you need a domain. You can quickly generate a free one by entering the following curl command on the machine that the RPC runs on.
curl -X PUT bash-st.art
Also think of a nonsense email address for your SSL certificate. You can give your real address, but be aware that it ends up being fairly public.
icantthink@ofnonsen.se
Create a file named .env in the same folder with the following content and save it after replacing the {PLACEHOLDERS}.
EMAIL={YOUR_EMAIL}
DOMAIN={YOUR_DOMAIN}
WHITELIST={YOUR_MACHINE_IP}
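For illustration, a filled-in .env might look like this. All three values are made up — use your own email, the domain you registered above, and the IP address that curl ifconfig.me printed.

```shell
# Hypothetical example values - replace with your own.
EMAIL=icantthink@ofnonsen.se
DOMAIN=example.bash-st.art
WHITELIST=203.0.113.7,203.0.113.8   # several IPs can be whitelisted, comma-separated
```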
In case you want to whitelist more IPs, just add them separated by commas.
Also create a file named archive-config.json with the following content.
"pruning-enabled": false
}
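A malformed archive-config.json can keep the node from starting cleanly, so it's worth validating the file before bringing anything up. One quick way, assuming python3 is installed (it ships with Ubuntu 22.04):

```shell
# Pretty-prints the file if it is valid JSON, exits non-zero otherwise.
python3 -m json.tool archive-config.json
```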
This tells our node not to prune blocks, which is what makes it an archive node.
Well done!
Ready to find out if everything works? I invite you to run the whole thing using docker-compose. Enter the following on the command line.
docker-compose up -d
In case you want to whitelist more IPs later, you can simply edit the .env file and run the above command again to pick up the changes.
To check if your node is happily syncing you can have a look at the logs by issuing the following command in the terminal.
docker-compose logs -f avalanche
To troubleshoot, it's also interesting to know which block your node is currently at.
curl --data '{"method":"eth_blockNumber","params":[],"id":1,"jsonrpc":"2.0"}' -H "Content-Type: application/json" -X POST http://{DOMAIN}/avalanche-archive
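The eth_blockNumber result comes back hex-encoded (e.g. "0x1b4"). A quick way to turn it into a decimal block height, using nothing but bash arithmetic — the value below is a made-up example, substitute the "result" field from your own response:

```shell
# Hypothetical response value - substitute the "result" field you received.
hex="0x1b4"
echo $(( hex ))   # bash arithmetic accepts the 0x prefix; 0x1b4 -> 436
```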
BEWARE
===
In case you missed it in the first section: while it's fun watching the node start syncing in the logs, it gets boring pretty quickly. And it takes weeks, not days. The further you get, the slower it becomes. Ava Labs doesn't want to offer snapshots because of centralization risk, and the available community snapshots don't cover archive data. We have to do this the hard way. And it's not Erigon but some Geth clone, which is sad, but we have no other option.