1 Commits

Author: cventastic
SHA1: 40ce43a82c
Message: added harmony.conf
Date: 2022-04-05 15:47:51 +02:00

894 changed files with 2709 additions and 572198 deletions

BIN
.DS_Store vendored

Binary file not shown.

1
.gitignore vendored

@@ -1 +0,0 @@
.env

355
README.md

@@ -1,353 +1,4 @@
# Blockchain Node Configurations
### Docs
This directory contains Docker Compose configurations for various blockchain networks and node implementations.
## Directory Structure
- Root level YAML files (e.g. `ethereum-mainnet.yml`, `arbitrum-one.yml`) - Main Docker Compose configurations for specific networks
- Network-specific subdirectories - Contain additional configurations, genesis files, and client-specific implementations
- Utility scripts (e.g. `show-networks.sh`, `logs.sh`) - Helper scripts for managing and monitoring nodes
## Node Types
This repository supports multiple node types for various blockchain networks:
- **Ethereum networks**: Mainnet, Sepolia, Holesky
- **Layer 2 networks**: Arbitrum, Optimism, Base, Scroll, ZKSync Era, etc.
- **Alternative L1 networks**: Avalanche, BSC, Fantom, Polygon, etc.
Most networks have both archive and pruned node configurations available, with support for different client implementations (Geth, Erigon, Reth, etc.).
## Quick Start
1. Create a `.env` file in this directory (see example below)
2. Select which node configurations you want to run by adding them to the `COMPOSE_FILE` variable
3. Run `docker compose up -d`
4. Access your RPC endpoints at `https://yourdomain.tld/path` or `http://localhost:port`
### Example .env File
```bash
# Domain settings
DOMAIN=203-0-113-42.traefik.me # Use your PUBLIC IP with dots replaced by hyphens
MAIL=your-email@example.com # Required for Let's Encrypt SSL
WHITELIST=0.0.0.0/0 # IP whitelist for access (0.0.0.0/0 allows all)
# Public IP (required for many chains)
IP=203.0.113.42 # Your PUBLIC IP (get it with: curl ipinfo.io/ip)
# Network settings
CHAINS_SUBNET=192.168.0.0/26
# RPC provider endpoints (fallback/bootstrap nodes)
ETHEREUM_MAINNET_EXECUTION_RPC=https://ethereum-rpc.publicnode.com
ETHEREUM_MAINNET_EXECUTION_WS=wss://ethereum-rpc.publicnode.com
ETHEREUM_MAINNET_BEACON_REST=https://ethereum-beacon-api.publicnode.com
ETHEREUM_SEPOLIA_EXECUTION_RPC=https://ethereum-sepolia-rpc.publicnode.com
ETHEREUM_SEPOLIA_EXECUTION_WS=wss://ethereum-sepolia-rpc.publicnode.com
ETHEREUM_SEPOLIA_BEACON_REST=https://ethereum-sepolia-beacon-api.publicnode.com
ARBITRUM_SEPOLIA_EXECUTION_RPC=https://arbitrum-sepolia-rpc.publicnode.com
ARBITRUM_SEPOLIA_EXECUTION_WS=wss://arbitrum-sepolia-rpc.publicnode.com
# SSL settings (set NO_SSL=true to disable SSL)
# NO_SSL=true
# Docker Compose configuration
# Always include base.yml and rpc.yml, then add the networks you want
COMPOSE_FILE=base.yml:rpc.yml:ethereum-mainnet.yml
```
## Usage
To start nodes defined in your `.env` file:
```bash
docker compose up -d
```
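To verify that the selected services actually came up, the standard Docker Compose commands work as usual (the service name below is only an example; real service names match those defined in the compose files):
```bash
# List the containers started from the configurations referenced in COMPOSE_FILE
docker compose ps

# Follow the logs of a single service (example name, adjust to your setup)
docker compose logs -f --tail=100 ethereum-mainnet
```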
### Ports
The default ports are defined in the templates and are randomised to avoid conflicts. Some configurations can require 7 ports to be opened for P2P discovery. Docker will override any UFW firewall rule that you define on the host, so you should prevent the containers from reaching out to other nodes on local IP ranges.
You can use the following service definition as a starting point. Replace `{{ chains_subnet }}` with the subnet of your network (default: 192.168.0.0/26).
```
[Unit]
Description= iptables firewall docker fix
After=docker.service
[Service]
ExecStart=/usr/local/bin/iptables-firewall.sh start
RemainAfterExit=true
StandardOutput=journal
[Install]
WantedBy=multi-user.target
```
```bash
#!/bin/bash
PATH="/sbin:/usr/sbin:/bin:/usr/bin"
# Flush existing rules in the DOCKER-USER chain
# this is potentially dangerous if other scripts write in that chain too but for now this should be the only one
iptables -F DOCKER-USER
# block heise.de to test it's working. ./ping.sh heise.de will ping from a container in the subnet.
iptables -I DOCKER-USER -s {{ chains_subnet }} -d 193.99.144.80/32 -j REJECT
# block local networks
iptables -I DOCKER-USER -s {{ chains_subnet }} -d 192.168.0.0/16 -j REJECT
iptables -I DOCKER-USER -s {{ chains_subnet }} -d 172.16.0.0/12 -j REJECT
iptables -I DOCKER-USER -s {{ chains_subnet }} -d 10.0.0.0/8 -j REJECT
# accept the subnet so containers can reach each other.
iptables -I DOCKER-USER -s {{ chains_subnet }} -d {{ chains_subnet }} -j ACCEPT
# I don't know why that is
iptables -I DOCKER-USER -s {{ chains_subnet }} -d 10.13.13.0/24 -j ACCEPT
```
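A minimal sketch of wiring this together, assuming the script above is saved as `/usr/local/bin/iptables-firewall.sh` (the path the unit's ExecStart points to) and the unit file as `/etc/systemd/system/iptables-firewall.service`:
```bash
# Make the script executable, load the new unit, and enable it so the
# DOCKER-USER rules are re-applied on every boot after docker has started.
chmod +x /usr/local/bin/iptables-firewall.sh
systemctl daemon-reload
systemctl enable --now iptables-firewall.service

# Verify that the rules ended up in the DOCKER-USER chain
iptables -L DOCKER-USER -n --line-numbers
```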
### Node Structure
In general, nodes can have one or more of the following components:
- a client (execution layer)
- a node (for consensus)
- a relay (for data availability access)
- a database (external to the client, mostly for zk rollups; there can be multiple databases)
- a proxy (to map http access and websockets to the same endpoint)
The simplest examples have only a client. The compose files define one entrypoint to query the node; usually it's the client, otherwise it's the proxy. Some clients have multiple entrypoints because they allow querying both the consensus layer and the execution layer.
In the root folder of this repository you can find convenience yml files which are symlinks to specific compose files. The naming of the symlinks follows the pattern {network_name}-{chain_name}.yml, which leaves the client and db type unspecified, so the defaults are used.
### Syncing
The configurations aim to work standalone, restoring state as much as possible from public sources. Using snapshots can help syncing faster. For some configurations it's not reasonably possible to maintain a version that can be bootstrapped from scratch using only the compose file.
### Naming conventions
- default client is the network's default client, usually geth or op-geth.
- default sync mode is pruned; where available, clients are snap-synced.
- default node is op-node or prysm or whatever is the default for the network (e.g. beacon-kit for berachain, goat for goat, etc.)
- default sync mode for nodes is pruned
- default client for archive nodes is (op-)erigon or (op-)reth
- default sync mode for (op-)reth and (op-)erigon is archive-trace.
- default sync mode for erigon3 is pruned-trace.
- default db is postgres
- default proxy is nginx
#### Node features
The idea is to assume a default node configuration that is able to drive the execution client. If the beacon node database has special features, the file name includes them after a double hyphen, e.g. `ethereum-mainnet-geth-pruned-pebble-hash--lighthouse-pruned-blobs.yml` would be a node with a pruned execution client and a pruned beacon node database with a complete blob history.
#### Container names
The docker containers are generally named using the base name plus a component suffix. The base name is generally the network name and the chain name, plus the sync mode `archive` in the case of archive nodes. The rationale is that it doesn't make sense to run two pruned nodes for the same chain on the same machine, nor two archive nodes for the same chain. The volumes created in /var/lib/docker/volumes use the full name of the node, including the sync mode and database features. This allows switching out the implementation of parts of the configuration without causing conflicts, e.g. exchanging prysm for nimbus as the node implementation while keeping the same execution client. Environment variables also use the full name of the component they are defined for. A hypothetical illustration follows.
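The names below are made-up examples of the pattern, not taken from a specific compose file:
```bash
# Container names use the short base name plus a component suffix
docker ps --format '{{.Names}}'
# ethereum-mainnet
# ethereum-mainnet-node

# Volumes use the full name including sync mode and database features
docker volume ls --format '{{.Name}}'
# ethereum-mainnet-geth-pruned-pebble-hash
# ethereum-mainnet-lighthouse-pruned
```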
## Utility Scripts
This directory includes several useful scripts to help you manage and monitor your nodes:
### Status and Monitoring
- `show-status.sh [config-name]` - Check sync status of all configured nodes (or specific config if provided)
- `show-db-size.sh` - Display disk usage of all Docker volumes, sorted by size
- `show-networks.sh` - List all available network configurations
- `show-running.sh` - List currently running containers
- `sync-status.sh <config-name>` - Check synchronization status of a specific configuration
- `logs.sh <config-name>` - View logs of all containers for a specific configuration
- `latest.sh <config-name>` - Get the latest block number and hash of a local node
- `ping.sh <container-name>` - Test connectivity to a container from inside the Docker network
### Node Management
- `stop.sh <config-name>` - Stop all containers for a specific configuration
- `force-recreate.sh <config-name>` - Force recreate all containers for a specific configuration
- `backup-node.sh <config-name> [webdav_url]` - Backup Docker volumes for a configuration (locally or to WebDAV)
- `restore-volumes.sh <config-name> [http_url]` - Restore Docker volumes from backup (local or HTTP source)
- `cleanup-backups.sh` - Clean up old backup files
- `list-backups.sh` - List available backup files
- `op-wheel.sh` - Tool for Optimism rollup maintenance, including rewinding to a specific block
Note: `<config-name>` refers to the compose file name without the .yml extension (e.g., `ethereum-mainnet` for ethereum-mainnet.yml)
#### Nuclear option to recreate a node
```bash
./stop.sh <config-name> && ./rm.sh <config-name> && ./delete-volumes.sh <config-name> && ./force-recreate.sh <config-name> && ./logs.sh <config-name>
```
#### Debugging tips
To get the configuration name for one of the commands, use `./show-status.sh`, which lists all configurations and their status, ready to copy-paste for further inspection with e.g. `./catchup.sh <config-name>` or repeated use of `./latest.sh <config-name>`, which will give you an idea whether the sync is actually progressing and whether it is on the canonical chain.
Note: some configurations use staged sync, which means there is no measurable progress on the RPC in between batches of processed blocks. In any case `./logs.sh <config-name>` will give you insights into problems, potentially filtered by an LLM to spot common errors. It could also be that clients are syncing slower than the chain progresses.
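For example, to see whether the block height keeps moving (the config name is a placeholder):
```bash
# Re-run latest.sh every 30 seconds; the reported block number should keep
# increasing if the sync is actually progressing.
watch -n 30 ./latest.sh ethereum-mainnet
```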
#### Further automation
You can chain `./success-if-almost-synced.sh <config-name> <age-of-last-block-in-seconds-to-be-considered-almost-synced>` with other scripts to create more complex automation, e.g. notifying you once a node has synced up to chain head, adding the node to the dshackle configuration, or taking a backup to clone the node to a different server, as sketched below.
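A sketch of such a chain, assuming a node counts as almost synced when its latest block is at most 60 seconds old; the final command is just a placeholder for your notification of choice:
```bash
# Only run the follow-up steps once ethereum-mainnet has a block younger than 60 seconds.
./success-if-almost-synced.sh ethereum-mainnet 60 \
  && ./backup-node.sh ethereum-mainnet \
  && echo "ethereum-mainnet synced and backed up"
```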
#### OP Wheel Usage Example
Be aware that this is dangerous because it skips every check that your rollup's op-geth execution client database is consistent.
```bash
# Rewind an Optimism rollup to a specific block
./op-wheel.sh engine set-forkchoice --unsafe=0x111AC7F --safe=0x111AC7F --finalized=0x111AC7F \
--engine=http://op-lisk-sepolia:8551/ --engine.open=http://op-lisk-sepolia:8545 \
--engine.jwt-secret-path=/jwtsecret
```
Nuclear option:
```bash
# Finalize the latest locally available block of an Optimism rollup
./op-wheel-finalize-latest-block.sh <client_service_name> (<node_service_name>)
```
Where `<client_service_name>` is the name of the client service in the compose file and `<node_service_name>` is the name of the node service in the compose file which defaults to `<client_service_name>-node`.
## SSL Certificates and IP Configuration
### Public IP Configuration
Many blockchain nodes require your public IP address to function properly:
1. Get your public IP address:
```bash
curl ipinfo.io/ip
```
2. Add this IP to your `.env` file:
```bash
IP=203.0.113.42 # Your actual public IP
```
3. This IP is used by several chains for P2P discovery and network communication
### SSL Certificates with Traefik
This system uses Traefik as a reverse proxy for SSL certificates:
1. By default, certificates are obtained from Let's Encrypt
2. Use your **public** IP address with traefik.me by replacing dots with hyphens
```
# If your public IP is 203.0.113.42
DOMAIN=203-0-113-42.traefik.me
```
3. Traefik.me automatically generates valid SSL certificates for this domain
4. For production, use your own domain and set MAIL for Let's Encrypt notifications (an example follows below)
5. To disable SSL, set `NO_SSL=true` in your .env file
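For a production setup the relevant `.env` entries might look like this (domain and mail address are placeholders):
```bash
# Real domain with a DNS A record pointing to this server
DOMAIN=rpc.example.com
MAIL=admin@example.com   # Let's Encrypt expiry notifications
# Leave NO_SSL unset so Traefik terminates TLS with a Let's Encrypt certificate
```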
## Configuration
Each network configuration includes:
- Node client software (Geth, Erigon, etc.)
- Synchronization type (archive or pruned)
- Database backend and configuration
- Network-specific parameters
## Accessing RPC Endpoints
Once your nodes are running, you can access the RPC endpoints at:
- HTTPS: `https://yourdomain.tld/ethereum` (or other network paths)
- HTTP: `http://yourdomain.tld/ethereum` (or other network paths)
- WebSocket: `wss://yourdomain.tld/ethereum` (same URL as HTTP/HTTPS)
All services use standard ports (80 for HTTP, 443 for HTTPS), so no port specification is required in the URL.
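For example, a plain JSON-RPC request against a running node (the path segment depends on the configuration you run; check the Traefik rule in the compose file):
```bash
# Query the latest block number over HTTPS
curl -X POST https://yourdomain.tld/ethereum \
  -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
```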
## Resource Requirements
Different node types have different hardware requirements:
- Pruned Ethereum node: ~500GB disk, 8GB RAM
- Archive Ethereum node: ~2TB disk, 16GB RAM
- L2 nodes typically require fewer resources than L1 nodes
- Consider using SSD or NVMe storage for better performance (a quick way to check actual usage is shown below)
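To check actual usage on a running host, the bundled `show-db-size.sh` script plus standard tools give a quick overview:
```bash
# Disk usage of all Docker volumes, sorted by size
./show-db-size.sh

# Free space on the filesystem that holds the Docker volumes
df -h /var/lib/docker
```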
## DRPC Integration
This system includes support for DRPC (Decentralized RPC) integration, allowing you to monetize your RPC nodes by selling excess capacity:
### Setting Up DRPC
1. Add `drpc.yml` to your `COMPOSE_FILE` variable in `.env`
2. Configure the following variables in your `.env` file:
```
GW_DOMAIN=your-gateway-domain.com
GW_REDIS_RAM=2gb # Memory allocation for Redis
DRPC_VERSION=0.64.16 # Or latest version
```
3. Generate the upstream configurations for dshackle:
```bash
# Using domain URLs (default)
./upstreams.sh
```
The `upstreams.sh` script automatically detects all running nodes on your machine and generates the appropriate configuration for the dshackle load balancer. This allows you to connect your nodes to drpc.org and sell RPC capacity.
For more information about DRPC, visit [drpc.org](https://drpc.org/).
## Supported Networks
This repository supports a comprehensive range of blockchain networks:
### Layer 1 Networks
- **Major Networks**: Ethereum (Mainnet, Sepolia, Holesky), BSC, Polygon, Avalanche, Gnosis
- **Alternative L1s**: Fantom, Core, Berachain, Ronin, Viction, Fuse, Tron, ThunderCore
- **Emerging L1s**: Goat, AlephZero, Haqq, Taiko, Rootstock
### Layer 2 Networks
- **OP Stack**: Optimism, Base, Zora, Mode, Blast, Fraxtal, Bob, Boba, Worldchain, Metal, Ink, Lisk, SNAX, Celo
- **Arbitrum Ecosystem**: Arbitrum One, Arbitrum Nova, Everclear, Playblock, Real, Connext, OpenCampusCodex
- **Other L2s**: Linea, Scroll, zkSync Era, Metis, Moonbeam
Most networks support multiple node implementations (Geth, Erigon, Reth) and environments (mainnet, testnet).
## Backup and Restore System
This repository includes a comprehensive backup and restore system for Docker volumes:
### Local Backups
- `backup-node.sh <config-name>` - Create a backup of all volumes for a configuration to the `/backup` directory
- `restore-volumes.sh <config-name>` - Restore volumes from the latest backup in the `/backup` directory
### Remote Backups
To serve backups via HTTP and WebDAV:
1. Add `backup-http.yml` to your `COMPOSE_FILE` variable in `.env`
2. This exposes:
- HTTP access to backups at `https://yourdomain.tld/backup`
- WebDAV access at `https://yourdomain.tld/dav`
### Cross-Server Backup and Restore
For multi-server setups:
1. On server A: Include `backup-http.yml` in `COMPOSE_FILE` to serve backups
2. On server B: Use restore from server A's backups:
```bash
# Restore directly from server A
./restore-volumes.sh ethereum-mainnet https://serverA.domain.tld/backup/
```
3. Create backups on server B and send to server A via WebDAV:
```bash
# Backup to server A's WebDAV
./backup-node.sh ethereum-mainnet https://serverA.domain.tld/dav
```
This allows for efficient volume transfers between servers without needing SSH access.
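A sketch of automating this with cron (schedule, repository path, and WebDAV target are assumptions):
```bash
# /etc/cron.d/rpc-backups
# Nightly backup of the ethereum-mainnet volumes to server A's WebDAV,
# weekly cleanup of old local backup files.
0 3 * * *  root  cd /root/rpc && ./backup-node.sh ethereum-mainnet https://serverA.domain.tld/dav
0 5 * * 0  root  cd /root/rpc && ./cleanup-backups.sh
```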
[Pocket Validator](README_POKT.md) <br />
[Harmony Validator](harmony/README_HARMONY.md)

76
README_POKT.md Normal file

@@ -0,0 +1,76 @@
Tested on Ubuntu 20.04.3 LTS
#### Prerequisites:
docker <br />
docker-compose <br />
DNS A-Record pointing to your server <br />
Wireguard server: paste the wg0.conf from your wireguard server into wireguard/config/wg0.conf <br />
.env File inside POKT-DOKT with secrets <br />
#### Usage
```
git clone https://github.com/cventastic/POKT_DOKT.git
cd POKT_DOKT
git reset --hard origin/main && git pull && ./util/prepare.sh
```
# EXAMPLES
Start POKT in simulate-relay mode:
```
command: pocket start --simulateRelay
```
If you want to actively relay, you also have to stake! <br />
Testnet-Faucet: (https://faucet.pokt.network/) <br />
How to stake: https://docs.pokt.network/home/paths/node-runner#stake-the-validator <br />
POKT QUERY for simulate-relay mode:
```
Pocket-Testnet:
curl -X POST --data '{"relay_network_id":"0002","payload":{"data":"{}","method":"POST","path":"v1/query/height","headers":{}}}' http://localhost:8082/v1/client/sim
Pocket-Mainnet:
curl -X POST --data '{"relay_network_id":"0001","payload":{"data":"{}","method":"POST","path":"v1/query/height","headers":{}}}' http://localhost:8081/v1/client/sim
```
GETH QUERY (from whitelisted servers, e.g. pokt-test) for simulate-relay mode:
```
Pocket-Testnet:
curl -X POST --data '{"relay_network_id":"0020","payload":{"data":"{\"jsonrpc\":\"2.0\",\"method\":\"eth_getBalance\",\"params\":[\"0x1a8c807a6E4F624fCab01FEBf76a541d31B8345A\", \"latest\"],\"id\":1}","method":"POST","path":"","headers":{}}}' http://127.0.0.1:8082/v1/client/sim
curl -v -X POST --data '{"relay_network_id":"0020","payload":{"data":"{\"jsonrpc\":\"2.0\",\"method\":\"eth_syncing\",\"params\":[],\"id\":1}","method":"POST","path":"","headers":{}}}' http://127.0.0.1:8082/v1/client/sim
Pocket-Mainnet:
curl -X POST --data '{"relay_network_id":"0021","payload":{"data":"{\"jsonrpc\":\"2.0\",\"method\":\"eth_getBalance\",\"params\":[\"0x1a8c807a6E4F624fCab01FEBf76a541d31B8345A\", \"latest\"],\"id\":1}","method":"POST","path":"","headers":{}}}' http://127.0.0.1:8081/v1/client/sim
curl -v -X POST --data '{"relay_network_id":"0021","payload":{"data":"{\"jsonrpc\":\"2.0\",\"method\":\"eth_syncing\",\"params\":[],\"id\":1}","method":"POST","path":"","headers":{}}}' http://127.0.0.1:8081/v1/client/sim
```
STANDARD GETH QUERY (from whitelisted server)
```
curl -X POST -H "Content-Type: application/json" --data '{"jsonrpc":"2.0","method":"eth_syncing","params":[],"id":1}' https://$RPCNODE/goerli
```
# SSL
If you want to test SSL, uncomment the following line (it points acme at the Let's Encrypt staging CA):
```
# - "--certificatesresolvers.myresolver.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory"
```
Check if the file /traefic/letsencrypt/acme.json exists; if it does, you have to delete it. <br />
Otherwise traefik will not issue the certificate for an existing domain. <br />
#### TODO !!!!
Bootstrapping from Snapshots <br />
Link-Timezone into containers.
AVALANCHE:
- Archive?
- Monitoring https://docs.avax.network/build/tools/dashboards/README
### Notes
#### Monitoring
Telegram: get the group IDs the bot is a member of:
```curl -X POST https://api.telegram.org/bot$TELEGRAM_API_TOKEN/getUpdates```
There has to be an event in the channel for the bot to get updates; a test notification can then be sent as sketched below.
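Once the group ID shows up in the `getUpdates` response, a notification can be sent the same way (chat ID and text are placeholders):
```bash
curl -X POST https://api.telegram.org/bot$TELEGRAM_API_TOKEN/sendMessage \
  -d chat_id=-1001234567890 \
  -d text="pokt node monitoring test"
```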
### new NODE
.env
rpc-timeout
whitelist


@@ -1 +0,0 @@
abstract/external-node/abstract-mainnet-external-node-pruned.yml


@@ -1 +0,0 @@
abstract/external-node/abstract-testnet-external-node-pruned.yml


@@ -1,161 +0,0 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:abstract/external-node/abstract-mainnet-external-node-archive.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/abstract-mainnet-archive \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
abstract-mainnet-archive:
image: ${ABSTRACT_EXTERNAL_NODE_IMAGE:-matterlabs/external-node}:${ABSTRACT_MAINNET_EXTERNAL_NODE_VERSION:-v29.1.2}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
ports:
- 12612:12612
- 12612:12612/udp
expose:
- 8545
- 8546
environment:
- DATABASE_POOL_SIZE=50
- DATABASE_URL=postgres://postgres:notsecurepassword@abstract-mainnet-archive-db:5430/zksync_local_ext_node
- EN_API_NAMESAPCES=eth,net,web3,debug,pubsub,debug,zks
- EN_ETH_CLIENT_URL=${ETHEREUM_MAINNET_EXECUTION_RPC}
- EN_HEALTHCHECK_PORT=3081
- EN_HTTP_PORT=8545
- EN_L1_CHAIN_ID=1
- EN_L2_CHAIN_ID=2741
- EN_MAIN_NODE_URL=https://api.mainnet.abs.xyz
- EN_MAX_RESPONSE_BODY_SIZE_MB=25
- EN_MAX_RESPONSE_BODY_SIZE_OVERRIDES_MB=eth_getLogs=100,eth_getBlockReceipts=None
- EN_MERKLE_TREE_PATH=./db/ext-node/lightweight
- EN_PROMETHEUS_PORT=3322
- EN_PRUNING_ENABLED=false
- EN_REQ_ENTITIES_LIMIT=100000
- EN_SNAPSHOTS_OBJECT_STORE_BUCKET_BASE_URL=raas-abstract-mainnet-external-node-snapshots
- EN_SNAPSHOTS_OBJECT_STORE_MODE=GCSAnonymousReadOnly
- EN_SNAPSHOTS_RECOVERY_ENABLED=true
- EN_STATE_CACHE_PATH=./db/ext-node/state_keeper
- EN_WS_PORT=8546
- RUST_LOG=warn,zksync=info,zksync_core::metadata_calculator=debug,zksync_state=info,zksync_utils=info,zksync_web3_decl::client=error
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${ABSTRACT_MAINNET_EXTERNAL_NODE_ARCHIVE_DATA:-abstract-mainnet-external-node-archive}:/db
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=false
- traefik.enable=true
- traefik.http.middlewares.abstract-mainnet-external-node-archive-stripprefix.stripprefix.prefixes=/abstract-mainnet-archive
- traefik.http.services.abstract-mainnet-external-node-archive.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.abstract-mainnet-external-node-archive.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.abstract-mainnet-external-node-archive.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.abstract-mainnet-external-node-archive.rule=Host(`$DOMAIN`) && (Path(`/abstract-mainnet-archive`) || Path(`/abstract-mainnet-archive/`))}
- ${NO_SSL:+traefik.http.routers.abstract-mainnet-external-node-archive.rule=Path(`/abstract-mainnet-archive`) || Path(`/abstract-mainnet-archive/`)}
- traefik.http.routers.abstract-mainnet-external-node-archive.middlewares=abstract-mainnet-external-node-archive-stripprefix, ipallowlist
- traefik.http.routers.abstract-mainnet-external-node-archive.priority=50 # gets any request that is not GET with UPGRADE header
- traefik.http.routers.abstract-mainnet-external-node-archive-ws.priority=100 # answers GET requests first
- traefik.http.services.abstract-mainnet-external-node-archive-ws.loadbalancer.server.port=8546
- traefik.http.routers.abstract-mainnet-external-node-archive-ws.service=abstract-mainnet-external-node-archive-ws
- traefik.http.routers.abstract-mainnet-external-node-archive.service=abstract-mainnet-external-node-archive
- ${NO_SSL:-traefik.http.routers.abstract-mainnet-external-node-archive-ws.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.abstract-mainnet-external-node-archive-ws.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.abstract-mainnet-external-node-archive-ws.rule=Host(`$DOMAIN`) && (Path(`/abstract-mainnet-archive`) || Path(`/abstract-mainnet-archive/`)) && Headers(`Upgrade`, `websocket`)}
- ${NO_SSL:+traefik.http.routers.abstract-mainnet-external-node-archive-ws.rule=(Path(`/abstract-mainnet-archive`) || Path(`/abstract-mainnet-archive/`)) && Headers(`Upgrade`, `websocket`)}
- traefik.http.routers.abstract-mainnet-external-node-archive-ws.middlewares=abstract-mainnet-external-node-archive-stripprefix, ipallowlist
abstract-mainnet-archive-db:
image: postgres:14
expose:
- 5430
environment:
- PGPORT=5430
- POSTGRES_PASSWORD=notsecurepassword
command: >
postgres
-c max_connections=200
-c log_error_verbosity=terse
-c shared_buffers=2GB
-c effective_cache_size=4GB
-c maintenance_work_mem=1GB
-c checkpoint_completion_target=0.9
-c random_page_cost=1.1
-c effective_io_concurrency=200
-c min_wal_size=4GB
-c max_wal_size=16GB
-c max_worker_processes=16
-c checkpoint_timeout=1800
networks:
- chains
volumes:
- ${ABSTRACT_MAINNET_EXTERNAL_NODE_ARCHIVE__DB_DATA:-abstract-mainnet-external-node-archive_db}:/var/lib/postgresql/data
healthcheck:
interval: 1s
timeout: 3s
test: [CMD-SHELL, psql -U postgres -c "select exists (select * from pg_stat_activity where datname = '' and application_name = 'pg_restore')" | grep -e ".f$$"]
logging: *logging-defaults
volumes:
abstract-mainnet-external-node-archive:
abstract-mainnet-external-node-archive_db:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: abstract
method-groups:
enabled:
- debug
- filter
methods:
disabled:
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
...


@@ -1,161 +0,0 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:abstract/external-node/abstract-mainnet-external-node-pruned.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/abstract-mainnet \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
abstract-mainnet:
image: ${ABSTRACT_EXTERNAL_NODE_IMAGE:-matterlabs/external-node}:${ABSTRACT_MAINNET_EXTERNAL_NODE_VERSION:-v29.1.2}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
ports:
- 14370:14370
- 14370:14370/udp
expose:
- 8545
- 8546
environment:
- DATABASE_POOL_SIZE=50
- DATABASE_URL=postgres://postgres:notsecurepassword@abstract-mainnet-db:5430/zksync_local_ext_node
- EN_API_NAMESAPCES=eth,net,web3,debug,pubsub,debug,zks
- EN_ETH_CLIENT_URL=${ETHEREUM_MAINNET_EXECUTION_RPC}
- EN_HEALTHCHECK_PORT=3081
- EN_HTTP_PORT=8545
- EN_L1_CHAIN_ID=1
- EN_L2_CHAIN_ID=2741
- EN_MAIN_NODE_URL=https://api.mainnet.abs.xyz
- EN_MAX_RESPONSE_BODY_SIZE_MB=25
- EN_MAX_RESPONSE_BODY_SIZE_OVERRIDES_MB=eth_getLogs=100,eth_getBlockReceipts=None
- EN_MERKLE_TREE_PATH=./db/ext-node/lightweight
- EN_PROMETHEUS_PORT=3322
- EN_PRUNING_ENABLED=true
- EN_REQ_ENTITIES_LIMIT=100000
- EN_SNAPSHOTS_OBJECT_STORE_BUCKET_BASE_URL=raas-abstract-mainnet-external-node-snapshots
- EN_SNAPSHOTS_OBJECT_STORE_MODE=GCSAnonymousReadOnly
- EN_SNAPSHOTS_RECOVERY_ENABLED=true
- EN_STATE_CACHE_PATH=./db/ext-node/state_keeper
- EN_WS_PORT=8546
- RUST_LOG=warn,zksync=info,zksync_core::metadata_calculator=debug,zksync_state=info,zksync_utils=info,zksync_web3_decl::client=error
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${ABSTRACT_MAINNET_EXTERNAL_NODE_PRUNED_DATA:-abstract-mainnet-external-node-pruned}:/db
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=false
- traefik.enable=true
- traefik.http.middlewares.abstract-mainnet-external-node-pruned-stripprefix.stripprefix.prefixes=/abstract-mainnet
- traefik.http.services.abstract-mainnet-external-node-pruned.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.abstract-mainnet-external-node-pruned.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.abstract-mainnet-external-node-pruned.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.abstract-mainnet-external-node-pruned.rule=Host(`$DOMAIN`) && (Path(`/abstract-mainnet`) || Path(`/abstract-mainnet/`))}
- ${NO_SSL:+traefik.http.routers.abstract-mainnet-external-node-pruned.rule=Path(`/abstract-mainnet`) || Path(`/abstract-mainnet/`)}
- traefik.http.routers.abstract-mainnet-external-node-pruned.middlewares=abstract-mainnet-external-node-pruned-stripprefix, ipallowlist
- traefik.http.routers.abstract-mainnet-external-node-pruned.priority=50 # gets any request that is not GET with UPGRADE header
- traefik.http.routers.abstract-mainnet-external-node-pruned-ws.priority=100 # answers GET requests first
- traefik.http.services.abstract-mainnet-external-node-pruned-ws.loadbalancer.server.port=8546
- traefik.http.routers.abstract-mainnet-external-node-pruned-ws.service=abstract-mainnet-external-node-pruned-ws
- traefik.http.routers.abstract-mainnet-external-node-pruned.service=abstract-mainnet-external-node-pruned
- ${NO_SSL:-traefik.http.routers.abstract-mainnet-external-node-pruned-ws.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.abstract-mainnet-external-node-pruned-ws.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.abstract-mainnet-external-node-pruned-ws.rule=Host(`$DOMAIN`) && (Path(`/abstract-mainnet`) || Path(`/abstract-mainnet/`)) && Headers(`Upgrade`, `websocket`)}
- ${NO_SSL:+traefik.http.routers.abstract-mainnet-external-node-pruned-ws.rule=(Path(`/abstract-mainnet`) || Path(`/abstract-mainnet/`)) && Headers(`Upgrade`, `websocket`)}
- traefik.http.routers.abstract-mainnet-external-node-pruned-ws.middlewares=abstract-mainnet-external-node-pruned-stripprefix, ipallowlist
abstract-mainnet-db:
image: postgres:14
expose:
- 5430
environment:
- PGPORT=5430
- POSTGRES_PASSWORD=notsecurepassword
command: >
postgres
-c max_connections=200
-c log_error_verbosity=terse
-c shared_buffers=2GB
-c effective_cache_size=4GB
-c maintenance_work_mem=1GB
-c checkpoint_completion_target=0.9
-c random_page_cost=1.1
-c effective_io_concurrency=200
-c min_wal_size=4GB
-c max_wal_size=16GB
-c max_worker_processes=16
-c checkpoint_timeout=1800
networks:
- chains
volumes:
- ${ABSTRACT_MAINNET_EXTERNAL_NODE_PRUNED__DB_DATA:-abstract-mainnet-external-node-pruned_db}:/var/lib/postgresql/data
healthcheck:
interval: 1s
timeout: 3s
test: [CMD-SHELL, psql -U postgres -c "select exists (select * from pg_stat_activity where datname = '' and application_name = 'pg_restore')" | grep -e ".f$$"]
logging: *logging-defaults
volumes:
abstract-mainnet-external-node-pruned:
abstract-mainnet-external-node-pruned_db:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: abstract
method-groups:
enabled:
- debug
- filter
methods:
disabled:
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
...


@@ -1,161 +0,0 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:abstract/external-node/abstract-testnet-external-node-archive.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/abstract-testnet-archive \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
abstract-testnet-archive:
image: ${ABSTRACT_EXTERNAL_NODE_IMAGE:-matterlabs/external-node}:${ABSTRACT_TESTNET_EXTERNAL_NODE_VERSION:-v29.1.2}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
ports:
- 14028:14028
- 14028:14028/udp
expose:
- 8545
- 8546
environment:
- DATABASE_POOL_SIZE=50
- DATABASE_URL=postgres://postgres:notsecurepassword@abstract-testnet-archive-db:5430/zksync_local_ext_node
- EN_API_NAMESAPCES=eth,net,web3,debug,pubsub,debug,zks
- EN_ETH_CLIENT_URL=${ETHEREUM_SEPOLIA_EXECUTION_RPC}
- EN_HEALTHCHECK_PORT=3081
- EN_HTTP_PORT=8545
- EN_L1_CHAIN_ID=11155111
- EN_L2_CHAIN_ID=11124
- EN_MAIN_NODE_URL=https://api.testnet.abs.xyz
- EN_MAX_RESPONSE_BODY_SIZE_MB=25
- EN_MAX_RESPONSE_BODY_SIZE_OVERRIDES_MB=eth_getLogs=100,eth_getBlockReceipts=None
- EN_MERKLE_TREE_PATH=./db/ext-node/lightweight
- EN_PROMETHEUS_PORT=3322
- EN_PRUNING_ENABLED=false
- EN_REQ_ENTITIES_LIMIT=100000
- EN_SNAPSHOTS_OBJECT_STORE_BUCKET_BASE_URL=abstract-testnet-external-node-snapshots
- EN_SNAPSHOTS_OBJECT_STORE_MODE=GCSAnonymousReadOnly
- EN_SNAPSHOTS_RECOVERY_ENABLED=true
- EN_STATE_CACHE_PATH=./db/ext-node/state_keeper
- EN_WS_PORT=8546
- RUST_LOG=warn,zksync=info,zksync_core::metadata_calculator=debug,zksync_state=info,zksync_utils=info,zksync_web3_decl::client=error
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${ABSTRACT_TESTNET_EXTERNAL_NODE_ARCHIVE_DATA:-abstract-testnet-external-node-archive}:/db
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=false
- traefik.enable=true
- traefik.http.middlewares.abstract-testnet-external-node-archive-stripprefix.stripprefix.prefixes=/abstract-testnet-archive
- traefik.http.services.abstract-testnet-external-node-archive.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.abstract-testnet-external-node-archive.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.abstract-testnet-external-node-archive.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.abstract-testnet-external-node-archive.rule=Host(`$DOMAIN`) && (Path(`/abstract-testnet-archive`) || Path(`/abstract-testnet-archive/`))}
- ${NO_SSL:+traefik.http.routers.abstract-testnet-external-node-archive.rule=Path(`/abstract-testnet-archive`) || Path(`/abstract-testnet-archive/`)}
- traefik.http.routers.abstract-testnet-external-node-archive.middlewares=abstract-testnet-external-node-archive-stripprefix, ipallowlist
- traefik.http.routers.abstract-testnet-external-node-archive.priority=50 # gets any request that is not GET with UPGRADE header
- traefik.http.routers.abstract-testnet-external-node-archive-ws.priority=100 # answers GET requests first
- traefik.http.services.abstract-testnet-external-node-archive-ws.loadbalancer.server.port=8546
- traefik.http.routers.abstract-testnet-external-node-archive-ws.service=abstract-testnet-external-node-archive-ws
- traefik.http.routers.abstract-testnet-external-node-archive.service=abstract-testnet-external-node-archive
- ${NO_SSL:-traefik.http.routers.abstract-testnet-external-node-archive-ws.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.abstract-testnet-external-node-archive-ws.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.abstract-testnet-external-node-archive-ws.rule=Host(`$DOMAIN`) && (Path(`/abstract-testnet-archive`) || Path(`/abstract-testnet-archive/`)) && Headers(`Upgrade`, `websocket`)}
- ${NO_SSL:+traefik.http.routers.abstract-testnet-external-node-archive-ws.rule=(Path(`/abstract-testnet-archive`) || Path(`/abstract-testnet-archive/`)) && Headers(`Upgrade`, `websocket`)}
- traefik.http.routers.abstract-testnet-external-node-archive-ws.middlewares=abstract-testnet-external-node-archive-stripprefix, ipallowlist
abstract-testnet-archive-db:
image: postgres:14
expose:
- 5430
environment:
- PGPORT=5430
- POSTGRES_PASSWORD=notsecurepassword
command: >
postgres
-c max_connections=200
-c log_error_verbosity=terse
-c shared_buffers=2GB
-c effective_cache_size=4GB
-c maintenance_work_mem=1GB
-c checkpoint_completion_target=0.9
-c random_page_cost=1.1
-c effective_io_concurrency=200
-c min_wal_size=4GB
-c max_wal_size=16GB
-c max_worker_processes=16
-c checkpoint_timeout=1800
networks:
- chains
volumes:
- ${ABSTRACT_TESTNET_EXTERNAL_NODE_ARCHIVE__DB_DATA:-abstract-testnet-external-node-archive_db}:/var/lib/postgresql/data
healthcheck:
interval: 1s
timeout: 3s
test: [CMD-SHELL, psql -U postgres -c "select exists (select * from pg_stat_activity where datname = '' and application_name = 'pg_restore')" | grep -e ".f$$"]
logging: *logging-defaults
volumes:
abstract-testnet-external-node-archive:
abstract-testnet-external-node-archive_db:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: abstract-sepolia
method-groups:
enabled:
- debug
- filter
methods:
disabled:
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
...


@@ -1,161 +0,0 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:abstract/external-node/abstract-testnet-external-node-pruned.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/abstract-testnet \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
abstract-testnet:
image: ${ABSTRACT_EXTERNAL_NODE_IMAGE:-matterlabs/external-node}:${ABSTRACT_TESTNET_EXTERNAL_NODE_VERSION:-v29.1.2}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
ports:
- 12157:12157
- 12157:12157/udp
expose:
- 8545
- 8546
environment:
- DATABASE_POOL_SIZE=50
- DATABASE_URL=postgres://postgres:notsecurepassword@abstract-testnet-db:5430/zksync_local_ext_node
- EN_API_NAMESAPCES=eth,net,web3,debug,pubsub,debug,zks
- EN_ETH_CLIENT_URL=${ETHEREUM_SEPOLIA_EXECUTION_RPC}
- EN_HEALTHCHECK_PORT=3081
- EN_HTTP_PORT=8545
- EN_L1_CHAIN_ID=11155111
- EN_L2_CHAIN_ID=11124
- EN_MAIN_NODE_URL=https://api.testnet.abs.xyz
- EN_MAX_RESPONSE_BODY_SIZE_MB=25
- EN_MAX_RESPONSE_BODY_SIZE_OVERRIDES_MB=eth_getLogs=100,eth_getBlockReceipts=None
- EN_MERKLE_TREE_PATH=./db/ext-node/lightweight
- EN_PROMETHEUS_PORT=3322
- EN_PRUNING_ENABLED=true
- EN_REQ_ENTITIES_LIMIT=100000
- EN_SNAPSHOTS_OBJECT_STORE_BUCKET_BASE_URL=abstract-testnet-external-node-snapshots
- EN_SNAPSHOTS_OBJECT_STORE_MODE=GCSAnonymousReadOnly
- EN_SNAPSHOTS_RECOVERY_ENABLED=true
- EN_STATE_CACHE_PATH=./db/ext-node/state_keeper
- EN_WS_PORT=8546
- RUST_LOG=warn,zksync=info,zksync_core::metadata_calculator=debug,zksync_state=info,zksync_utils=info,zksync_web3_decl::client=error
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${ABSTRACT_TESTNET_EXTERNAL_NODE_PRUNED_DATA:-abstract-testnet-external-node-pruned}:/db
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=false
- traefik.enable=true
- traefik.http.middlewares.abstract-testnet-external-node-pruned-stripprefix.stripprefix.prefixes=/abstract-testnet
- traefik.http.services.abstract-testnet-external-node-pruned.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.abstract-testnet-external-node-pruned.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.abstract-testnet-external-node-pruned.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.abstract-testnet-external-node-pruned.rule=Host(`$DOMAIN`) && (Path(`/abstract-testnet`) || Path(`/abstract-testnet/`))}
- ${NO_SSL:+traefik.http.routers.abstract-testnet-external-node-pruned.rule=Path(`/abstract-testnet`) || Path(`/abstract-testnet/`)}
- traefik.http.routers.abstract-testnet-external-node-pruned.middlewares=abstract-testnet-external-node-pruned-stripprefix, ipallowlist
- traefik.http.routers.abstract-testnet-external-node-pruned.priority=50 # gets any request that is not GET with UPGRADE header
- traefik.http.routers.abstract-testnet-external-node-pruned-ws.priority=100 # answers GET requests first
- traefik.http.services.abstract-testnet-external-node-pruned-ws.loadbalancer.server.port=8546
- traefik.http.routers.abstract-testnet-external-node-pruned-ws.service=abstract-testnet-external-node-pruned-ws
- traefik.http.routers.abstract-testnet-external-node-pruned.service=abstract-testnet-external-node-pruned
- ${NO_SSL:-traefik.http.routers.abstract-testnet-external-node-pruned-ws.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.abstract-testnet-external-node-pruned-ws.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.abstract-testnet-external-node-pruned-ws.rule=Host(`$DOMAIN`) && (Path(`/abstract-testnet`) || Path(`/abstract-testnet/`)) && Headers(`Upgrade`, `websocket`)}
- ${NO_SSL:+traefik.http.routers.abstract-testnet-external-node-pruned-ws.rule=(Path(`/abstract-testnet`) || Path(`/abstract-testnet/`)) && Headers(`Upgrade`, `websocket`)}
- traefik.http.routers.abstract-testnet-external-node-pruned-ws.middlewares=abstract-testnet-external-node-pruned-stripprefix, ipallowlist
abstract-testnet-db:
image: postgres:14
expose:
- 5430
environment:
- PGPORT=5430
- POSTGRES_PASSWORD=notsecurepassword
command: >
postgres
-c max_connections=200
-c log_error_verbosity=terse
-c shared_buffers=2GB
-c effective_cache_size=4GB
-c maintenance_work_mem=1GB
-c checkpoint_completion_target=0.9
-c random_page_cost=1.1
-c effective_io_concurrency=200
-c min_wal_size=4GB
-c max_wal_size=16GB
-c max_worker_processes=16
-c checkpoint_timeout=1800
networks:
- chains
volumes:
- ${ABSTRACT_TESTNET_EXTERNAL_NODE_PRUNED__DB_DATA:-abstract-testnet-external-node-pruned_db}:/var/lib/postgresql/data
healthcheck:
interval: 1s
timeout: 3s
test: [CMD-SHELL, psql -U postgres -c "select exists (select * from pg_stat_activity where datname = '' and application_name = 'pg_restore')" | grep -e ".f$$"]
logging: *logging-defaults
volumes:
abstract-testnet-external-node-pruned:
abstract-testnet-external-node-pruned_db:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: abstract-sepolia
method-groups:
enabled:
- debug
- filter
methods:
disabled:
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
...


@@ -1 +0,0 @@
arb/nitro/alephzero-mainnet-nitro-archive-pebble-hash.yml


@@ -1 +0,0 @@
arb/nitro/alephzero-mainnet-nitro-pruned-pebble-path.yml


@@ -1 +0,0 @@
arb/nitro/alephzero-sepolia-nitro-archive-leveldb-hash.yml


@@ -1 +0,0 @@
arb/nitro/alephzero-sepolia-nitro-pruned-pebble-path.yml


@@ -1,5 +0,0 @@
{
"chain": {
"info-json": "[{\"chain-id\":41455,\"parent-chain-id\":1,\"parent-chain-is-arbitrum\":false,\"chain-config\":{\"homesteadBlock\":0,\"daoForkBlock\":null,\"daoForkSupport\":true,\"eip150Block\":0,\"eip150Hash\":\"0x0000000000000000000000000000000000000000000000000000000000000000\",\"eip155Block\":0,\"eip158Block\":0,\"byzantiumBlock\":0,\"constantinopleBlock\":0,\"petersburgBlock\":0,\"istanbulBlock\":0,\"muirGlacierBlock\":0,\"berlinBlock\":0,\"londonBlock\":0,\"clique\":{\"period\":0,\"epoch\":0},\"arbitrum\":{\"EnableArbOS\":true,\"AllowDebugPrecompiles\":false,\"DataAvailabilityCommittee\":true,\"InitialArbOSVersion\":20,\"GenesisBlockNum\":0,\"MaxCodeSize\":98304,\"MaxInitCodeSize\":49152,\"InitialChainOwner\":\"0x257812604076712675ae9788F5Bd738173CA3CE0\"},\"chainId\":41455},\"rollup\":{\"bridge\":\"0x41Ec9456AB918f2aBA81F38c03Eb0B93b78E84d9\",\"inbox\":\"0x56D8EC76a421063e1907503aDd3794c395256AEb\",\"sequencer-inbox\":\"0xF75206c49c1694594E3e69252E519434f1579876\",\"rollup\":\"0x1CA12290D954CFe022323b6A6Df92113ed6b1C98\",\"validator-utils\":\"0x2b0E04Dc90e3fA58165CB41E2834B44A56E766aF\",\"validator-wallet-creator\":\"0x9CAd81628aB7D8e239F1A5B497313341578c5F71\",\"deployed-at\":20412468}}]"
}
}


@@ -1,6 +0,0 @@
{
"chain": {
"info-json": "[{\"chain-id\":2039,\"parent-chain-id\":11155111,\"parent-chain-is-arbitrum\":false,\"chain-name\":\"Aleph Zero EVM testnet\",\"chain-config\":{\"homesteadBlock\":0,\"daoForkBlock\":null,\"daoForkSupport\":true,\"eip150Block\":0,\"eip150Hash\":\"0x0000000000000000000000000000000000000000000000000000000000000000\",\"eip155Block\":0,\"eip158Block\":0,\"byzantiumBlock\":0,\"constantinopleBlock\":0,\"petersburgBlock\":0,\"istanbulBlock\":0,\"muirGlacierBlock\":0,\"berlinBlock\":0,\"londonBlock\":0,\"clique\":{\"period\":0,\"epoch\":0},\"arbitrum\":{\"EnableArbOS\":true,\"AllowDebugPrecompiles\":false,\"DataAvailabilityCommittee\":true,\"InitialArbOSVersion\":20,\"GenesisBlockNum\":0,\"MaxCodeSize\":98304,\"MaxInitCodeSize\":49152,\"InitialChainOwner\":\"0x4c6dfF3e40e82a1fB599e062051726a9f7808a18\"},\"chainId\":2039},\"rollup\":{\"bridge\":\"0xCB5c0B38C45Fad0C20591E26b0b3C3809123994A\",\"inbox\":\"0xb27fd27987a71a6B77Fb8705bFb6010C411083EB\",\"sequencer-inbox\":\"0x16Ef70c48EF4BaaCfdaa4AfdD37F69332832a0bD\",\"rollup\":\"0xC8C08A4DbbF3367c8441151591c3d935947CB42F\",\"validator-utils\":\"0xb33Dca7b17c72CFC311D68C543cd4178E0d7ce55\",\"validator-wallet-creator\":\"0x75500812ADC9E51b721BEa31Df322EEc66967DDF\",\"deployed-at\":5827184}}]",
"name": "Aleph Zero EVM testnet"
}
}


@@ -1,113 +0,0 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:arb/arbnode/arbitrum-one-arbnode-archive-leveldb-hash.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/arbitrum-one-arbnode-archive \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
arbitrum-one-arbnode-archive:
image: ${ARBITRUM_ARBNODE_IMAGE:-offchainlabs/arb-node}:${ARBITRUM_ONE_ARBNODE_VERSION:-v1.4.5-e97c1a4}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
expose:
- 8545
- 8546
entrypoint: [/home/user/go/bin/arb-node]
command:
- --core.checkpoint-gas-frequency=156250000
- --l1.url=${ETHEREUM_MAINNET_EXECUTION_RPC}
- --l2.disable-upstream
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=7070
- --node.cache.allow-slow-lookup
- --node.chain-id=42161
- --node.rpc.addr=0.0.0.0
- --node.rpc.enable-l1-calls
- --node.rpc.port=8545
- --node.rpc.tracing.enable
- --node.rpc.tracing.namespace=trace
- --node.ws.addr=0.0.0.0
- --node.ws.port=8546
- --persistent.chain=/data/datadir/
- --persistent.global-config=/data/
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${ARBITRUM_ONE_ARBNODE_ARCHIVE_LEVELDB_HASH_DATA:-arbitrum-one-arbnode-archive-leveldb-hash}:/data
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=false
- traefik.enable=true
- traefik.http.middlewares.arbitrum-one-arbnode-archive-leveldb-hash-stripprefix.stripprefix.prefixes=/arbitrum-one-arbnode-archive
- traefik.http.services.arbitrum-one-arbnode-archive-leveldb-hash.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.arbitrum-one-arbnode-archive-leveldb-hash.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.arbitrum-one-arbnode-archive-leveldb-hash.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.arbitrum-one-arbnode-archive-leveldb-hash.rule=Host(`$DOMAIN`) && (Path(`/arbitrum-one-arbnode-archive`) || Path(`/arbitrum-one-arbnode-archive/`))}
- ${NO_SSL:+traefik.http.routers.arbitrum-one-arbnode-archive-leveldb-hash.rule=Path(`/arbitrum-one-arbnode-archive`) || Path(`/arbitrum-one-arbnode-archive/`)}
- traefik.http.routers.arbitrum-one-arbnode-archive-leveldb-hash.middlewares=arbitrum-one-arbnode-archive-leveldb-hash-stripprefix, ipallowlist
volumes:
arbitrum-one-arbnode-archive-leveldb-hash:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: arbitrum
method-groups:
enabled:
- debug
- filter
methods:
disabled:
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
...


@@ -1,6 +0,0 @@
{
"chain": {
"info-json": "[{\"chain-id\":6398,\"parent-chain-id\":11155111,\"parent-chain-is-arbitrum\":false,\"chain-name\":\"Connext Sepolia\",\"chain-config\":{\"homesteadBlock\":0,\"daoForkBlock\":null,\"daoForkSupport\":true,\"eip150Block\":0,\"eip150Hash\":\"0x0000000000000000000000000000000000000000000000000000000000000000\",\"eip155Block\":0,\"eip158Block\":0,\"byzantiumBlock\":0,\"constantinopleBlock\":0,\"petersburgBlock\":0,\"istanbulBlock\":0,\"muirGlacierBlock\":0,\"berlinBlock\":0,\"londonBlock\":0,\"clique\":{\"period\":0,\"epoch\":0},\"arbitrum\":{\"EnableArbOS\":true,\"AllowDebugPrecompiles\":false,\"DataAvailabilityCommittee\":true,\"InitialArbOSVersion\":20,\"GenesisBlockNum\":0,\"MaxCodeSize\":24576,\"MaxInitCodeSize\":49152,\"InitialChainOwner\":\"0x8ECD393576Ca37a7e5095f31bdfE21F606FF5F75\"},\"chainId\":6398},\"rollup\":{\"bridge\":\"0xf0b58FA876005898798a66A04EE09159C199CB7A\",\"inbox\":\"0x7bc7DAF843bf57c54D41D912F8221A2eE830c320\",\"sequencer-inbox\":\"0x7f5C1a58014E9DE69663CAc441bfa4C5d94b7E64\",\"rollup\":\"0xE6D7bf11A6264BACa59e8fAD7f6985FaC8f62e60\",\"validator-utils\":\"0xb33Dca7b17c72CFC311D68C543cd4178E0d7ce55\",\"validator-wallet-creator\":\"0x75500812ADC9E51b721BEa31Df322EEc66967DDF\",\"deployed-at\":5780509}}]",
"name": "Connext Sepolia"
}
}


@@ -1,6 +0,0 @@
{
"chain": {
"name": "Everclear Mainnet",
"info-json": "[{\"chain-id\":25327,\"parent-chain-id\":1,\"parent-chain-is-arbitrum\":false,\"chain-name\":\"Everclear Mainnet\",\"chain-config\":{\"homesteadBlock\":0,\"daoForkBlock\":null,\"daoForkSupport\":true,\"eip150Block\":0,\"eip150Hash\":\"0x0000000000000000000000000000000000000000000000000000000000000000\",\"eip155Block\":0,\"eip158Block\":0,\"byzantiumBlock\":0,\"constantinopleBlock\":0,\"petersburgBlock\":0,\"istanbulBlock\":0,\"muirGlacierBlock\":0,\"berlinBlock\":0,\"londonBlock\":0,\"clique\":{\"period\":0,\"epoch\":0},\"arbitrum\":{\"EnableArbOS\":true,\"AllowDebugPrecompiles\":false,\"DataAvailabilityCommittee\":true,\"InitialArbOSVersion\":20,\"GenesisBlockNum\":0,\"MaxCodeSize\":24576,\"MaxInitCodeSize\":49152,\"InitialChainOwner\":\"0x98a426C8ED821cAaef1b4BF7D29b514dcef970C0\"},\"chainId\":25327},\"rollup\":{\"bridge\":\"0x4eb4fB614e1aa3634513319F4Ec7334bC4321356\",\"inbox\":\"0x97FdC935c5E25613AA13a054C7Aa71cf751DB495\",\"sequencer-inbox\":\"0x7B0517E0104dB60198f9d573C0aB8d480207827E\",\"rollup\":\"0xc6CAd31D83E33Fc8fBc855f36ef9Cb2fCE070f5C\",\"validator-utils\":\"0x2b0E04Dc90e3fA58165CB41E2834B44A56E766aF\",\"validator-wallet-creator\":\"0x9CAd81628aB7D8e239F1A5B497313341578c5F71\",\"deployed-at\":20684364}}]"
}
}


@@ -1,145 +0,0 @@
---
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:arb/fireeth/arbitrum-one-fireeth-pruned-pebble-hash.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/arbitrum-one \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: 10m
max-file: '3'
services:
arbitrum-one:
image: ${ARBITRUM_FIREETH_IMAGE:-ghcr.io/streamingfast/firehose-ethereum}:${ARBITRUM_ONE_FIREETH_VERSION:-v2.11.7-nitro-nitro-v3.5.5-fh3.0}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
expose:
- 8545
environment:
- ${ARBITRUM_ONE_FIREETH_PRUNED_PEBBLE_HASH_S3_BLOCKS_STORE:-/firehose-data/storage/merged-blocks}
entrypoint: [sh, -c, 'exec fireeth -c /config/firehose.yml start --substreams-rpc-endpoints "${ ARBITRUM_ONE_EXECUTION_RPC}" --reader-node-arguments "$*"', _]
command:
- --execution.caching.archive=false
- --execution.caching.state-scheme=hash
- --execution.rpc.gas-cap=600000000
- --execution.sequencer.enable=false
- --firehose-enabled
- --http.addr=0.0.0.0
- --http.api=eth,net,web3,debug,admin,txpool,engine
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.download-path=/tmp
- --init.latest=pruned
- --persistent.chain=/firehose-data/reader/data/arbitrum-one
- --persistent.db-engine=pebble
- --ws.addr=0.0.0.0
- --ws.origins=*
- --ws.port=8545
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${ARBITRUM_ONE_FIREETH_PRUNED_PEBBLE_HASH_DATA:-arbitrum-one-fireeth-pruned-pebble-hash}:/firehose-data
- ${ARBITRUM_ONE_FIREETH_PRUNED_PEBBLE_HASH_MERGED_BLOCKS_DATA:-arbitrum-one-fireeth-pruned-pebble-hash-blocks}:/firehose-data/storage/merged-blocks
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- traefik.enable=true
- traefik.http.middlewares.arbitrum-one-fireeth-pruned-pebble-hash-stripprefix.stripprefix.prefixes=/arbitrum-one
- traefik.http.services.arbitrum-one-fireeth-pruned-pebble-hash.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.arbitrum-one-fireeth-pruned-pebble-hash.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.arbitrum-one-fireeth-pruned-pebble-hash.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.arbitrum-one-fireeth-pruned-pebble-hash.rule=Host(`$DOMAIN`) && (Path(`/arbitrum-one`) || Path(`/arbitrum-one/`))}
- ${NO_SSL:+traefik.http.routers.arbitrum-one-fireeth-pruned-pebble-hash.rule=Path(`/arbitrum-one`) || Path(`/arbitrum-one/`)}
- traefik.http.routers.arbitrum-one-fireeth-pruned-pebble-hash.middlewares=arbitrum-one-fireeth-pruned-pebble-hash-stripprefix, ipallowlist
- traefik.http.services.arbitrum-one-fireeth-pruned-pebble-hash-firehose.loadbalancer.server.scheme=h2c
- traefik.http.routers.arbitrum-one-fireeth-pruned-pebble-hash-firehose.service=arbitrum-one-fireeth-pruned-pebble-hash-firehose
- traefik.http.services.arbitrum-one-fireeth-pruned-pebble-hash-firehose.loadbalancer.server.port=10015
- traefik.http.routers.arbitrum-one-fireeth-pruned-pebble-hash-firehose.entrypoints=grpc
- ${NO_SSL:-traefik.http.routers.arbitrum-one-fireeth-pruned-pebble-hash-firehose.tls.certresolver=myresolver}
- traefik.http.routers.arbitrum-one-fireeth-pruned-pebble-hash-firehose.rule=Host(`arbitrum-one.${DOMAIN}`)
- traefik.http.routers.arbitrum-one-fireeth-pruned-pebble-hash-firehose.middlewares=ipallowlist
- traefik.http.services.arbitrum-one-fireeth-pruned-pebble-hash-substreams.loadbalancer.server.scheme=h2c
- traefik.http.routers.arbitrum-one-fireeth-pruned-pebble-hash-substreams.service=arbitrum-one-fireeth-pruned-pebble-hash-substreams
- traefik.http.services.arbitrum-one-fireeth-pruned-pebble-hash-substreams.loadbalancer.server.port=10016
- traefik.http.routers.arbitrum-one-fireeth-pruned-pebble-hash-substreams.entrypoints=grpc
- ${NO_SSL:-traefik.http.routers.arbitrum-one-fireeth-pruned-pebble-hash-substreams.tls.certresolver=myresolver}
- traefik.http.routers.arbitrum-one-fireeth-pruned-pebble-hash-substreams.rule=Host(`arbitrum-one-substreams.${DOMAIN}`)
- traefik.http.routers.arbitrum-one-fireeth-pruned-pebble-hash-substreams.middlewares=ipallowlist
volumes:
arbitrum-one-fireeth-pruned-pebble-hash:
arbitrum-one-fireeth-pruned-pebble-hash-blocks:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: arbitrum
method-groups:
enabled:
- debug
- filter
methods:
disabled:
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
# standard geth only
- name: debug_getRawBlock
- name: debug_getRawTransaction
- name: debug_getRawReceipts
- name: debug_getRawHeader
- name: debug_getBadBlocks
      # non standard, geth only, slightly dangerous
- name: debug_intermediateRoots
- name: debug_dumpBlock
# standard geth and erigon
- name: debug_accountRange
- name: debug_getModifiedAccountsByNumber
- name: debug_getModifiedAccountsByHash
# non standard geth and erigon
- name: eth_getRawTransactionByHash
- name: eth_getRawTransactionByBlockHashAndIndex
...
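
Beyond the JSON-RPC route on `/arbitrum-one`, the labels above also publish the Firehose (port 10015) and Substreams (port 10016) gRPC services on dedicated hostnames. A minimal smoke test, assuming the stack is up, `DOMAIN` is set as in the usage comment, and the `grpc` entrypoint from `base.yml` listens on 443 (the port is an assumption):

```bash
# JSON-RPC: confirm the node answers and report sync status.
curl -s -X POST "https://${DOMAIN}/arbitrum-one" \
  -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_syncing","params":[],"id":1}'

# gRPC: list the Firehose and Substreams services (requires grpcurl and
# server reflection to be enabled on the fireeth side).
grpcurl "arbitrum-one.${DOMAIN}:443" list
grpcurl "arbitrum-one-substreams.${DOMAIN}:443" list
```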

View File

@@ -1,166 +0,0 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:arb/nitro-erigon/arbitrum-sepolia-nitro-erigon-archive-trace.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/arbitrum-sepolia-nitro-erigon-archive \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
arbitrum-sepolia-nitro-erigon-archive:
image: ${ARBITRUM_NITRO_ERIGON_IMAGE:-erigontech/nitro-erigon}:${ARBITRUM_SEPOLIA_NITRO_ERIGON_VERSION:-main-1a9771c}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
memlock: -1 # Disable memory locking limits (for in-memory DBs like MDBX)
user: root
ports:
- 11387:11387
- 11387:11387/udp
- 31387:31387
- 31387:31387/udp
- 36387:36387
- 36387:36387/udp
expose:
- 8545
entrypoint: [erigon]
command:
- --chain=arb-sepolia
- --datadir=/root/.local/share/erigon
- --http
- --http.addr=0.0.0.0
- --http.api=eth,erigon,web3,net,debug,trace,txpool,admin,ots
- --http.port=8545
- --http.vhosts=*
- --l2rpc=${ARBITRUM_SEPOLIA_EXECUTION_RPC}
- --maxpeers=50
- --metrics
- --metrics.addr=0.0.0.0
- --metrics.port=6060
- --nat=extip:${IP}
- --p2p.allowed-ports=31387
- --p2p.allowed-ports=36387
- --port=11387
- --prune.mode=archive
- --rpc.evmtimeout=${ARBITRUM_SEPOLIA_NITRO_ERIGON_ARCHIVE_TRACE_EVMTIMEOUT:-5m0s}
- --rpc.gascap=6000000000
- --rpc.overlay.getlogstimeout=${ARBITRUM_SEPOLIA_NITRO_ERIGON_ARCHIVE_TRACE_GETLOGSTIMEOUT:-5m0s}
- --rpc.overlay.replayblocktimeout=${ARBITRUM_SEPOLIA_NITRO_ERIGON_ARCHIVE_TRACE_REPLAYBLOCKTIMEOUT:-10s}
- --rpc.returndata.limit=10000000
- --sync.loop.block.limit=100000
- --torrent.port=26387
- --ws
- --ws.port=8545
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${ARBITRUM_SEPOLIA_NITRO_ERIGON_ARCHIVE_TRACE_DATA:-arbitrum-sepolia-nitro-erigon-archive-trace}:/root/.local/share/erigon
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=true
- prometheus-scrape.port=6060
- prometheus-scrape.path=/debug/metrics/prometheus
- traefik.enable=true
- traefik.http.middlewares.arbitrum-sepolia-nitro-erigon-archive-trace-stripprefix.stripprefix.prefixes=/arbitrum-sepolia-nitro-erigon-archive
- traefik.http.services.arbitrum-sepolia-nitro-erigon-archive-trace.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.arbitrum-sepolia-nitro-erigon-archive-trace.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.arbitrum-sepolia-nitro-erigon-archive-trace.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.arbitrum-sepolia-nitro-erigon-archive-trace.rule=Host(`$DOMAIN`) && (Path(`/arbitrum-sepolia-nitro-erigon-archive`) || Path(`/arbitrum-sepolia-nitro-erigon-archive/`))}
- ${NO_SSL:+traefik.http.routers.arbitrum-sepolia-nitro-erigon-archive-trace.rule=Path(`/arbitrum-sepolia-nitro-erigon-archive`) || Path(`/arbitrum-sepolia-nitro-erigon-archive/`)}
- traefik.http.routers.arbitrum-sepolia-nitro-erigon-archive-trace.middlewares=arbitrum-sepolia-nitro-erigon-archive-trace-stripprefix, ipallowlist
shm_size: 2gb
volumes:
arbitrum-sepolia-nitro-erigon-archive-trace:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: arbitrum-sepolia
method-groups:
enabled:
- debug
- filter
- trace
methods:
disabled:
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
# standard geth and erigon
- name: debug_accountRange
- name: debug_getModifiedAccountsByNumber
- name: debug_getModifiedAccountsByHash
# non standard geth and erigon
- name: eth_getRawTransactionByHash
- name: eth_getRawTransactionByBlockHashAndIndex
# non standard erigon only
- name: eth_getBlockReceipts
- name: eth_protocolVersion
- name: eth_callMany
- name: eth_callBundle
- name: debug_accountAt
- name: debug_traceCallMany
- name: erigon_getHeaderByHash
- name: erigon_getBlockReceiptsByBlockHash
- name: erigon_getHeaderByNumber
- name: erigon_getLogsByHash
- name: erigon_forks
- name: erigon_getBlockByTimestamp
- name: erigon_BlockNumber
- name: erigon_getLatestLogs
- name: ots_getInternalOperations
- name: ots_hasCode
- name: ots_getTransactionError
- name: ots_traceTransaction
- name: ots_getBlockDetails
- name: ots_getBlockDetailsByHash
- name: ots_getBlockTransactions
- name: ots_searchTransactionsBefore
- name: ots_searchTransactionsAfter
- name: ots_getTransactionBySenderAndNonce
- name: ots_getContractCreator
...
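
Since `--http.api` above includes `trace` and the upstream template enables the `trace` method group, the archive endpoint can serve full block traces. A quick check, assuming the stack is running and `DOMAIN` is set as in the usage comment:

```bash
# Trace every transaction in the latest block via the Erigon trace API.
curl -s -X POST "https://${DOMAIN}/arbitrum-sepolia-nitro-erigon-archive" \
  -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"trace_block","params":["latest"],"id":1}'
```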

View File

@@ -1,167 +0,0 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:arb/nitro-erigon/arbitrum-sepolia-nitro-erigon-minimal-trace.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/arbitrum-sepolia-nitro-erigon-minimal \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
arbitrum-sepolia-nitro-erigon-minimal:
image: ${ARBITRUM_NITRO_ERIGON_IMAGE:-erigontech/nitro-erigon}:${ARBITRUM_SEPOLIA_NITRO_ERIGON_VERSION:-main-1a9771c}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
memlock: -1 # Disable memory locking limits (for in-memory DBs like MDBX)
user: root
ports:
- 12072:12072
- 12072:12072/udp
- 32072:32072
- 32072:32072/udp
- 37072:37072
- 37072:37072/udp
expose:
- 8545
entrypoint: [erigon]
command:
- --chain=arb-sepolia
- --datadir=/root/.local/share/erigon
- --http
- --http.addr=0.0.0.0
- --http.api=eth,erigon,web3,net,debug,trace,txpool,admin,ots
- --http.port=8545
- --http.vhosts=*
- --l2rpc=${ARBITRUM_SEPOLIA_EXECUTION_RPC}
- --maxpeers=50
- --metrics
- --metrics.addr=0.0.0.0
- --metrics.port=6060
- --nat=extip:${IP}
- --p2p.allowed-ports=32072
- --p2p.allowed-ports=37072
- --persist.receipts=false
- --port=12072
- --prune.mode=minimal
- --rpc.evmtimeout=${ARBITRUM_SEPOLIA_NITRO_ERIGON_MINIMAL_TRACE_EVMTIMEOUT:-5m0s}
- --rpc.gascap=6000000000
- --rpc.overlay.getlogstimeout=${ARBITRUM_SEPOLIA_NITRO_ERIGON_MINIMAL_TRACE_GETLOGSTIMEOUT:-5m0s}
- --rpc.overlay.replayblocktimeout=${ARBITRUM_SEPOLIA_NITRO_ERIGON_MINIMAL_TRACE_REPLAYBLOCKTIMEOUT:-10s}
- --rpc.returndata.limit=10000000
- --sync.loop.block.limit=100000
- --torrent.port=27072
- --ws
- --ws.port=8545
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${ARBITRUM_SEPOLIA_NITRO_ERIGON_MINIMAL_TRACE_DATA:-arbitrum-sepolia-nitro-erigon-minimal-trace}:/root/.local/share/erigon
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=true
- prometheus-scrape.port=6060
- prometheus-scrape.path=/debug/metrics/prometheus
- traefik.enable=true
- traefik.http.middlewares.arbitrum-sepolia-nitro-erigon-minimal-trace-stripprefix.stripprefix.prefixes=/arbitrum-sepolia-nitro-erigon-minimal
- traefik.http.services.arbitrum-sepolia-nitro-erigon-minimal-trace.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.arbitrum-sepolia-nitro-erigon-minimal-trace.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.arbitrum-sepolia-nitro-erigon-minimal-trace.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.arbitrum-sepolia-nitro-erigon-minimal-trace.rule=Host(`$DOMAIN`) && (Path(`/arbitrum-sepolia-nitro-erigon-minimal`) || Path(`/arbitrum-sepolia-nitro-erigon-minimal/`))}
- ${NO_SSL:+traefik.http.routers.arbitrum-sepolia-nitro-erigon-minimal-trace.rule=Path(`/arbitrum-sepolia-nitro-erigon-minimal`) || Path(`/arbitrum-sepolia-nitro-erigon-minimal/`)}
- traefik.http.routers.arbitrum-sepolia-nitro-erigon-minimal-trace.middlewares=arbitrum-sepolia-nitro-erigon-minimal-trace-stripprefix, ipallowlist
shm_size: 2gb
volumes:
arbitrum-sepolia-nitro-erigon-minimal-trace:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: arbitrum-sepolia
method-groups:
enabled:
- debug
- filter
- trace
methods:
disabled:
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
# standard geth and erigon
- name: debug_accountRange
- name: debug_getModifiedAccountsByNumber
- name: debug_getModifiedAccountsByHash
# non standard geth and erigon
- name: eth_getRawTransactionByHash
- name: eth_getRawTransactionByBlockHashAndIndex
# non standard erigon only
- name: eth_getBlockReceipts
- name: eth_protocolVersion
- name: eth_callMany
- name: eth_callBundle
- name: debug_accountAt
- name: debug_traceCallMany
- name: erigon_getHeaderByHash
- name: erigon_getBlockReceiptsByBlockHash
- name: erigon_getHeaderByNumber
- name: erigon_getLogsByHash
- name: erigon_forks
- name: erigon_getBlockByTimestamp
- name: erigon_BlockNumber
- name: erigon_getLatestLogs
- name: ots_getInternalOperations
- name: ots_hasCode
- name: ots_getTransactionError
- name: ots_traceTransaction
- name: ots_getBlockDetails
- name: ots_getBlockDetailsByHash
- name: ots_getBlockTransactions
- name: ots_searchTransactionsBefore
- name: ots_searchTransactionsAfter
- name: ots_getTransactionBySenderAndNonce
- name: ots_getContractCreator
...
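
The functional difference from the archive variant above is `--prune.mode=minimal` (plus `--persist.receipts=false`): recent state is kept, historical state is not. A quick way to see the effect once the node is synced:

```bash
# Works: balance at the latest block.
curl -s -X POST "https://${DOMAIN}/arbitrum-sepolia-nitro-erigon-minimal" \
  -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_getBalance","params":["0x0000000000000000000000000000000000000000","latest"],"id":1}'

# Typically fails on a minimal node: the same query against an early block,
# whose state has already been pruned away.
curl -s -X POST "https://${DOMAIN}/arbitrum-sepolia-nitro-erigon-minimal" \
  -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_getBalance","params":["0x0000000000000000000000000000000000000000","0x1"],"id":1}'
```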

View File

@@ -1,167 +0,0 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:arb/nitro-erigon/arbitrum-sepolia-nitro-erigon-pruned-trace.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/arbitrum-sepolia-nitro-erigon \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
arbitrum-sepolia-nitro-erigon:
image: ${ARBITRUM_NITRO_ERIGON_IMAGE:-erigontech/nitro-erigon}:${ARBITRUM_SEPOLIA_NITRO_ERIGON_VERSION:-main-1a9771c}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
memlock: -1 # Disable memory locking limits (for in-memory DBs like MDBX)
user: root
ports:
- 13369:13369
- 13369:13369/udp
- 33369:33369
- 33369:33369/udp
- 38369:38369
- 38369:38369/udp
expose:
- 8545
entrypoint: [erigon]
command:
- --chain=arb-sepolia
- --datadir=/root/.local/share/erigon
- --http
- --http.addr=0.0.0.0
- --http.api=eth,erigon,web3,net,debug,trace,txpool,admin,ots
- --http.port=8545
- --http.vhosts=*
- --l2rpc=${ARBITRUM_SEPOLIA_EXECUTION_RPC}
- --maxpeers=50
- --metrics
- --metrics.addr=0.0.0.0
- --metrics.port=6060
- --nat=extip:${IP}
- --p2p.allowed-ports=33369
- --p2p.allowed-ports=38369
- --persist.receipts=false
- --port=13369
- --prune.mode=full
- --rpc.evmtimeout=${ARBITRUM_SEPOLIA_NITRO_ERIGON_PRUNED_TRACE_EVMTIMEOUT:-5m0s}
- --rpc.gascap=6000000000
- --rpc.overlay.getlogstimeout=${ARBITRUM_SEPOLIA_NITRO_ERIGON_PRUNED_TRACE_GETLOGSTIMEOUT:-5m0s}
- --rpc.overlay.replayblocktimeout=${ARBITRUM_SEPOLIA_NITRO_ERIGON_PRUNED_TRACE_REPLAYBLOCKTIMEOUT:-10s}
- --rpc.returndata.limit=10000000
- --sync.loop.block.limit=100000
- --torrent.port=28369
- --ws
- --ws.port=8545
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${ARBITRUM_SEPOLIA_NITRO_ERIGON_PRUNED_TRACE_DATA:-arbitrum-sepolia-nitro-erigon-pruned-trace}:/root/.local/share/erigon
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=true
- prometheus-scrape.port=6060
- prometheus-scrape.path=/debug/metrics/prometheus
- traefik.enable=true
- traefik.http.middlewares.arbitrum-sepolia-nitro-erigon-pruned-trace-stripprefix.stripprefix.prefixes=/arbitrum-sepolia-nitro-erigon
- traefik.http.services.arbitrum-sepolia-nitro-erigon-pruned-trace.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.arbitrum-sepolia-nitro-erigon-pruned-trace.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.arbitrum-sepolia-nitro-erigon-pruned-trace.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.arbitrum-sepolia-nitro-erigon-pruned-trace.rule=Host(`$DOMAIN`) && (Path(`/arbitrum-sepolia-nitro-erigon`) || Path(`/arbitrum-sepolia-nitro-erigon/`))}
- ${NO_SSL:+traefik.http.routers.arbitrum-sepolia-nitro-erigon-pruned-trace.rule=Path(`/arbitrum-sepolia-nitro-erigon`) || Path(`/arbitrum-sepolia-nitro-erigon/`)}
- traefik.http.routers.arbitrum-sepolia-nitro-erigon-pruned-trace.middlewares=arbitrum-sepolia-nitro-erigon-pruned-trace-stripprefix, ipallowlist
shm_size: 2gb
volumes:
arbitrum-sepolia-nitro-erigon-pruned-trace:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: arbitrum-sepolia
method-groups:
enabled:
- debug
- filter
- trace
methods:
disabled:
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
# standard geth and erigon
- name: debug_accountRange
- name: debug_getModifiedAccountsByNumber
- name: debug_getModifiedAccountsByHash
# non standard geth and erigon
- name: eth_getRawTransactionByHash
- name: eth_getRawTransactionByBlockHashAndIndex
# non standard erigon only
- name: eth_getBlockReceipts
- name: eth_protocolVersion
- name: eth_callMany
- name: eth_callBundle
- name: debug_accountAt
- name: debug_traceCallMany
- name: erigon_getHeaderByHash
- name: erigon_getBlockReceiptsByBlockHash
- name: erigon_getHeaderByNumber
- name: erigon_getLogsByHash
- name: erigon_forks
- name: erigon_getBlockByTimestamp
- name: erigon_BlockNumber
- name: erigon_getLatestLogs
- name: ots_getInternalOperations
- name: ots_hasCode
- name: ots_getTransactionError
- name: ots_traceTransaction
- name: ots_getBlockDetails
- name: ots_getBlockDetailsByHash
- name: ots_getBlockTransactions
- name: ots_searchTransactionsBefore
- name: ots_searchTransactionsAfter
- name: ots_getTransactionBySenderAndNonce
- name: ots_getContractCreator
...
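
`--ws` and `--ws.port=8545` above serve WebSocket on the same port as HTTP, so the same Traefik route carries both protocols. A subscription smoke test, assuming `wscat` is installed locally (any WebSocket client works):

```bash
# Subscribe to new heads over the pruned node's WebSocket endpoint.
wscat -c "wss://${DOMAIN}/arbitrum-sepolia-nitro-erigon" \
  -x '{"jsonrpc":"2.0","method":"eth_subscribe","params":["newHeads"],"id":1}'
```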

View File

@@ -1,100 +0,0 @@
---
services:
alephzero-mainnet-archive:
image: ${ALEPHZERO_NITRO_IMAGE:-offchainlabs/nitro-node}:${ALEPHZERO_MAINNET_NITRO_VERSION:-v3.5.3-0a9c975}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
expose:
- 8545
command:
- --conf.file=/config/baseConfig.json
- --execution.caching.archive=true
- --execution.caching.state-scheme=hash
- --execution.forwarding-target=https://rpc.alephzero.raas.gelato.cloud
- --execution.rpc.gas-cap=600000000
- --execution.sequencer.enable=false
- --http.addr=0.0.0.0
- --http.api=eth,net,web3,arb,txpool,debug
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --node.batch-poster.enable=false
- --node.data-availability.enable=true
- --node.data-availability.parent-chain-node-url=${ETHEREUM_MAINNET_EXECUTION_RPC}
- --node.data-availability.rest-aggregator.enable=true
- --node.data-availability.rest-aggregator.urls=https://das.alephzero.raas.gelato.cloud
- --node.data-availability.sequencer-inbox-address=0x1411949971076304187394088912578077660717096867958
- --node.feed.input.url=wss://feed.alephzero.raas.gelato.cloud
- --node.sequencer=false
- --node.staker.enable=false
- --parent-chain.blob-client.beacon-url=${ETHEREUM_MAINNET_BEACON_REST}
- --parent-chain.connection.url=${ETHEREUM_MAINNET_EXECUTION_RPC}
- --persistent.chain=/root/.arbitrum/alephzero-mainnet-archive
- --persistent.db-engine=leveldb
- --ws.addr=0.0.0.0
- --ws.origins=*
- --ws.port=8545
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${ALEPHZERO_MAINNET_NITRO_ARCHIVE_LEVELDB_HASH_DATA:-alephzero-mainnet-nitro-archive-leveldb-hash}:/root/.arbitrum
- ./arb/alephzero/mainnet:/config
- /slowdisk:/slowdisk
labels:
- traefik.enable=true
- traefik.http.middlewares.alephzero-mainnet-nitro-archive-leveldb-hash-stripprefix.stripprefix.prefixes=/alephzero-mainnet-archive
- traefik.http.services.alephzero-mainnet-nitro-archive-leveldb-hash.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.alephzero-mainnet-nitro-archive-leveldb-hash.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.alephzero-mainnet-nitro-archive-leveldb-hash.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.alephzero-mainnet-nitro-archive-leveldb-hash.rule=Host(`$DOMAIN`) && PathPrefix(`/alephzero-mainnet-archive`)}
- ${NO_SSL:+traefik.http.routers.alephzero-mainnet-nitro-archive-leveldb-hash.rule=PathPrefix(`/alephzero-mainnet-archive`)}
- traefik.http.routers.alephzero-mainnet-nitro-archive-leveldb-hash.middlewares=alephzero-mainnet-nitro-archive-leveldb-hash-stripprefix, ipwhitelist
volumes:
alephzero-mainnet-nitro-archive-leveldb-hash:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
chain: alephzero
method-groups:
enabled:
- debug
- filter
methods:
disabled:
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
# standard geth only
- name: debug_getRawBlock
- name: debug_getRawTransaction
- name: debug_getRawReceipts
- name: debug_getRawHeader
- name: debug_getBadBlocks
      # non standard, geth only, slightly dangerous
- name: debug_intermediateRoots
- name: debug_dumpBlock
# standard geth and erigon
- name: debug_accountRange
- name: debug_getModifiedAccountsByNumber
- name: debug_getModifiedAccountsByHash
# non standard geth and erigon
- name: eth_getRawTransactionByHash
- name: eth_getRawTransactionByBlockHashAndIndex
...
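
This configuration only works if the parent-chain endpoints it references are present in `.env`. A pre-flight check before `docker compose up`, run from the repository root:

```bash
# The flags above read these two variables; warn early if either is missing.
grep -E '^(ETHEREUM_MAINNET_EXECUTION_RPC|ETHEREUM_MAINNET_BEACON_REST)=' .env \
  || echo "missing parent-chain endpoints in .env"
```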

View File

@@ -1,149 +0,0 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:arb/nitro/alephzero-mainnet-nitro-archive-pebble-hash.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/alephzero-mainnet-archive \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
alephzero-mainnet-archive:
image: ${ALEPHZERO_NITRO_IMAGE:-offchainlabs/nitro-node}:${ALEPHZERO_MAINNET_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
expose:
- 8545
command:
- --conf.file=/config/baseConfig.json
- --execution.caching.archive=true
- --execution.caching.database-cache=${ALEPHZERO_MAINNET_NITRO_ARCHIVE_PEBBLE_HASH_DATABASE_CACHE:-2048}
- --execution.caching.snapshot-cache=${ALEPHZERO_MAINNET_NITRO_ARCHIVE_PEBBLE_HASH_SNAPSHOT_CACHE:-400}
- --execution.caching.trie-clean-cache=${ALEPHZERO_MAINNET_NITRO_ARCHIVE_PEBBLE_HASH_TRIE_CLEAN_CACHE:-600}
- --execution.caching.trie-dirty-cache=${ALEPHZERO_MAINNET_NITRO_ARCHIVE_PEBBLE_HASH_TRIE_DIRTY_CACHE:-1024}
- --execution.forwarding-target=https://rpc.alephzero.raas.gelato.cloud
- --execution.rpc.gas-cap=5500000000
- --execution.rpc.log-history=0
- --execution.sequencer.enable=false
- --http.addr=0.0.0.0
- --http.api=eth,net,web3,arb,txpool,debug
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.download-path=/tmp
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070
- --node.batch-poster.enable=false
- --node.da-provider.enable=false
- --node.data-availability.enable=true
- --node.data-availability.parent-chain-node-url=${ETHEREUM_MAINNET_EXECUTION_RPC}
- --node.data-availability.rest-aggregator.enable=true
- --node.data-availability.rest-aggregator.urls=https://das.alephzero.raas.gelato.cloud
- --node.data-availability.sequencer-inbox-address=0x1411949971076304187394088912578077660717096867958
- --node.feed.input.url=wss://feed.alephzero.raas.gelato.cloud
- --node.sequencer=false
- --node.staker.enable=false
- --parent-chain.blob-client.beacon-url=${ETHEREUM_MAINNET_BEACON_REST}
- --parent-chain.connection.url=${ETHEREUM_MAINNET_EXECUTION_RPC}
- --persistent.chain=/root/.arbitrum/alephzero-mainnet-archive
- --persistent.db-engine=pebble
- --ws.addr=0.0.0.0
- --ws.origins=*
- --ws.port=8545
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${ALEPHZERO_MAINNET_NITRO_ARCHIVE_PEBBLE_HASH_DATA:-alephzero-mainnet-nitro-archive-pebble-hash}:/root/.arbitrum
- ./arb/alephzero/mainnet:/config
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=true
- prometheus-scrape.port=6070
- prometheus-scrape.path=/debug/metrics/prometheus
- traefik.enable=true
- traefik.http.middlewares.alephzero-mainnet-nitro-archive-pebble-hash-stripprefix.stripprefix.prefixes=/alephzero-mainnet-archive
- traefik.http.services.alephzero-mainnet-nitro-archive-pebble-hash.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.alephzero-mainnet-nitro-archive-pebble-hash.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.alephzero-mainnet-nitro-archive-pebble-hash.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.alephzero-mainnet-nitro-archive-pebble-hash.rule=Host(`$DOMAIN`) && (Path(`/alephzero-mainnet-archive`) || Path(`/alephzero-mainnet-archive/`))}
- ${NO_SSL:+traefik.http.routers.alephzero-mainnet-nitro-archive-pebble-hash.rule=Path(`/alephzero-mainnet-archive`) || Path(`/alephzero-mainnet-archive/`)}
- traefik.http.routers.alephzero-mainnet-nitro-archive-pebble-hash.middlewares=alephzero-mainnet-nitro-archive-pebble-hash-stripprefix, ipallowlist
volumes:
alephzero-mainnet-nitro-archive-pebble-hash:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: alephzero
method-groups:
enabled:
- debug
- filter
methods:
disabled:
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
# standard geth only
- name: debug_getRawBlock
- name: debug_getRawTransaction
- name: debug_getRawReceipts
- name: debug_getRawHeader
- name: debug_getBadBlocks
      # non standard, geth only, slightly dangerous
- name: debug_intermediateRoots
- name: debug_dumpBlock
# standard geth and erigon
- name: debug_accountRange
- name: debug_getModifiedAccountsByNumber
- name: debug_getModifiedAccountsByHash
# non standard geth and erigon
- name: eth_getRawTransactionByHash
- name: eth_getRawTransactionByBlockHashAndIndex
...
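
The `--metrics-server.*` flags and `prometheus-scrape.*` labels above expose Geth-style metrics on port 6070 inside the `chains` network. They can be spot-checked from the host through the container; this sketch assumes `wget` is available in the nitro-node image (swap in `curl` if it is not):

```bash
# Print the first metrics lines from inside the running container.
docker compose exec alephzero-mainnet-archive \
  wget -qO- http://localhost:6070/debug/metrics/prometheus | head
```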

View File

@@ -1,152 +0,0 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:arb/nitro/alephzero-mainnet-nitro-pruned-pebble-path.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/alephzero-mainnet \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
alephzero-mainnet:
image: ${ALEPHZERO_NITRO_IMAGE:-offchainlabs/nitro-node}:${ALEPHZERO_MAINNET_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
expose:
- 8545
command:
- --conf.file=/config/baseConfig.json
- --execution.caching.archive=${ALEPHZERO_MAINNET_ARCHIVE_DB:-false}
- --execution.caching.database-cache=${ALEPHZERO_MAINNET_NITRO_PRUNED_PEBBLE_PATH_DATABASE_CACHE:-2048}
- --execution.caching.snapshot-cache=${ALEPHZERO_MAINNET_NITRO_PRUNED_PEBBLE_PATH_SNAPSHOT_CACHE:-400}
- --execution.caching.state-scheme=path
- --execution.caching.trie-clean-cache=${ALEPHZERO_MAINNET_NITRO_PRUNED_PEBBLE_PATH_TRIE_CLEAN_CACHE:-600}
- --execution.caching.trie-dirty-cache=${ALEPHZERO_MAINNET_NITRO_PRUNED_PEBBLE_PATH_TRIE_DIRTY_CACHE:-1024}
- --execution.forwarding-target=https://rpc.alephzero.raas.gelato.cloud
- --execution.rpc.gas-cap=5500000000
- --execution.rpc.state-scheme=path
- --execution.sequencer.enable=false
- --http.addr=0.0.0.0
- --http.api=eth,net,web3,arb,txpool,debug
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.download-path=/tmp
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070
- --node.batch-poster.enable=false
- --node.da-provider.enable=false
- --node.data-availability.enable=true
- --node.data-availability.parent-chain-node-url=${ETHEREUM_MAINNET_EXECUTION_RPC}
- --node.data-availability.rest-aggregator.enable=true
- --node.data-availability.rest-aggregator.urls=https://das.alephzero.raas.gelato.cloud
- --node.data-availability.sequencer-inbox-address=0x1411949971076304187394088912578077660717096867958
- --node.feed.input.url=wss://feed.alephzero.raas.gelato.cloud
- --node.sequencer=false
- --node.staker.enable=false
- --parent-chain.blob-client.beacon-url=${ETHEREUM_MAINNET_BEACON_REST}
- --parent-chain.connection.url=${ETHEREUM_MAINNET_EXECUTION_RPC}
- --persistent.chain=/root/.arbitrum/alephzero-mainnet
- --persistent.db-engine=pebble
- --ws.addr=0.0.0.0
- --ws.origins=*
- --ws.port=8545
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${ALEPHZERO_MAINNET_NITRO_PRUNED_PEBBLE_PATH_DATA:-alephzero-mainnet-nitro-pruned-pebble-path}:/root/.arbitrum
- ./arb/alephzero/mainnet:/config
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=true
- prometheus-scrape.port=6070
- prometheus-scrape.path=/debug/metrics/prometheus
- traefik.enable=true
- traefik.http.middlewares.alephzero-mainnet-nitro-pruned-pebble-path-stripprefix.stripprefix.prefixes=/alephzero-mainnet
- traefik.http.services.alephzero-mainnet-nitro-pruned-pebble-path.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.alephzero-mainnet-nitro-pruned-pebble-path.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.alephzero-mainnet-nitro-pruned-pebble-path.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.alephzero-mainnet-nitro-pruned-pebble-path.rule=Host(`$DOMAIN`) && (Path(`/alephzero-mainnet`) || Path(`/alephzero-mainnet/`))}
- ${NO_SSL:+traefik.http.routers.alephzero-mainnet-nitro-pruned-pebble-path.rule=Path(`/alephzero-mainnet`) || Path(`/alephzero-mainnet/`)}
- traefik.http.routers.alephzero-mainnet-nitro-pruned-pebble-path.middlewares=alephzero-mainnet-nitro-pruned-pebble-path-stripprefix, ipallowlist
volumes:
alephzero-mainnet-nitro-pruned-pebble-path:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: alephzero
method-groups:
enabled:
- debug
- filter
methods:
disabled:
# not compatible with path state scheme
- name: debug_traceBlockByHash
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
# standard geth only
- name: debug_getRawBlock
- name: debug_getRawTransaction
- name: debug_getRawReceipts
- name: debug_getRawHeader
- name: debug_getBadBlocks
      # non standard, geth only, slightly dangerous
- name: debug_intermediateRoots
- name: debug_dumpBlock
# standard geth and erigon
- name: debug_accountRange
- name: debug_getModifiedAccountsByNumber
- name: debug_getModifiedAccountsByHash
# non standard geth and erigon
- name: eth_getRawTransactionByHash
- name: eth_getRawTransactionByBlockHashAndIndex
...

View File

@@ -1,100 +0,0 @@
---
services:
alephzero-sepolia-archive:
image: ${ALEPHZERO_NITRO_IMAGE:-offchainlabs/nitro-node}:${ALEPHZERO_SEPOLIA_NITRO_VERSION:-v3.5.3-0a9c975}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
expose:
- 8545
command:
- --conf.file=/config/baseConfig.json
- --execution.caching.archive=true
- --execution.caching.state-scheme=hash
- --execution.forwarding-target=https://rpc.alephzero-testnet.gelato.digital
- --execution.rpc.gas-cap=600000000
- --execution.sequencer.enable=false
- --http.addr=0.0.0.0
- --http.api=eth,net,web3,arb,txpool,debug
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --node.batch-poster.enable=false
- --node.data-availability.enable=true
- --node.data-availability.parent-chain-node-url=${ETHEREUM_SEPOLIA_EXECUTION_RPC}
- --node.data-availability.rest-aggregator.enable=true
- --node.data-availability.rest-aggregator.urls=https://das.alephzero-testnet.gelato.digital
- --node.data-availability.sequencer-inbox-address=0x130937498521962644184395825246273622310592356541
- --node.feed.input.url=wss://feed.alephzero-testnet.gelato.digital
- --node.sequencer=false
- --node.staker.enable=false
- --parent-chain.blob-client.beacon-url=${ETHEREUM_SEPOLIA_BEACON_REST}
- --parent-chain.connection.url=${ETHEREUM_SEPOLIA_EXECUTION_RPC}
- --persistent.chain=/root/.arbitrum/alephzero-sepolia-archive
- --persistent.db-engine=leveldb
- --ws.addr=0.0.0.0
- --ws.origins=*
- --ws.port=8545
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${ALEPHZERO_SEPOLIA_NITRO_ARCHIVE_LEVELDB_HASH_DATA:-alephzero-sepolia-nitro-archive-leveldb-hash}:/root/.arbitrum
- ./arb/alephzero/sepolia:/config
- /slowdisk:/slowdisk
labels:
- traefik.enable=true
- traefik.http.middlewares.alephzero-sepolia-nitro-archive-leveldb-hash-stripprefix.stripprefix.prefixes=/alephzero-sepolia-archive
- traefik.http.services.alephzero-sepolia-nitro-archive-leveldb-hash.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.alephzero-sepolia-nitro-archive-leveldb-hash.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.alephzero-sepolia-nitro-archive-leveldb-hash.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.alephzero-sepolia-nitro-archive-leveldb-hash.rule=Host(`$DOMAIN`) && PathPrefix(`/alephzero-sepolia-archive`)}
- ${NO_SSL:+traefik.http.routers.alephzero-sepolia-nitro-archive-leveldb-hash.rule=PathPrefix(`/alephzero-sepolia-archive`)}
- traefik.http.routers.alephzero-sepolia-nitro-archive-leveldb-hash.middlewares=alephzero-sepolia-nitro-archive-leveldb-hash-stripprefix, ipwhitelist
volumes:
alephzero-sepolia-nitro-archive-leveldb-hash:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
chain: alephzero-sepolia
method-groups:
enabled:
- debug
- filter
methods:
disabled:
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
# standard geth only
- name: debug_getRawBlock
- name: debug_getRawTransaction
- name: debug_getRawReceipts
- name: debug_getRawHeader
- name: debug_getBadBlocks
      # non standard, geth only, slightly dangerous
- name: debug_intermediateRoots
- name: debug_dumpBlock
# standard geth and erigon
- name: debug_accountRange
- name: debug_getModifiedAccountsByNumber
- name: debug_getModifiedAccountsByHash
# non standard geth and erigon
- name: eth_getRawTransactionByHash
- name: eth_getRawTransactionByBlockHashAndIndex
...

View File

@@ -1,149 +0,0 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:arb/nitro/alephzero-sepolia-nitro-archive-pebble-hash.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/alephzero-sepolia-archive \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
alephzero-sepolia-archive:
image: ${ALEPHZERO_NITRO_IMAGE:-offchainlabs/nitro-node}:${ALEPHZERO_SEPOLIA_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
expose:
- 8545
command:
- --conf.file=/config/baseConfig.json
- --execution.caching.archive=true
- --execution.caching.database-cache=${ALEPHZERO_SEPOLIA_NITRO_ARCHIVE_PEBBLE_HASH_DATABASE_CACHE:-2048}
- --execution.caching.snapshot-cache=${ALEPHZERO_SEPOLIA_NITRO_ARCHIVE_PEBBLE_HASH_SNAPSHOT_CACHE:-400}
- --execution.caching.trie-clean-cache=${ALEPHZERO_SEPOLIA_NITRO_ARCHIVE_PEBBLE_HASH_TRIE_CLEAN_CACHE:-600}
- --execution.caching.trie-dirty-cache=${ALEPHZERO_SEPOLIA_NITRO_ARCHIVE_PEBBLE_HASH_TRIE_DIRTY_CACHE:-1024}
- --execution.forwarding-target=https://rpc.alephzero-testnet.gelato.digital
- --execution.rpc.gas-cap=5500000000
- --execution.rpc.log-history=0
- --execution.sequencer.enable=false
- --http.addr=0.0.0.0
- --http.api=eth,net,web3,arb,txpool,debug
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.download-path=/tmp
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070
- --node.batch-poster.enable=false
- --node.da-provider.enable=false
- --node.data-availability.enable=true
- --node.data-availability.parent-chain-node-url=${ETHEREUM_SEPOLIA_EXECUTION_RPC}
- --node.data-availability.rest-aggregator.enable=true
- --node.data-availability.rest-aggregator.urls=https://das.alephzero-testnet.gelato.digital
- --node.data-availability.sequencer-inbox-address=0x130937498521962644184395825246273622310592356541
- --node.feed.input.url=wss://feed.alephzero-testnet.gelato.digital
- --node.sequencer=false
- --node.staker.enable=false
- --parent-chain.blob-client.beacon-url=${ETHEREUM_SEPOLIA_BEACON_REST}
- --parent-chain.connection.url=${ETHEREUM_SEPOLIA_EXECUTION_RPC}
- --persistent.chain=/root/.arbitrum/alephzero-sepolia-archive
- --persistent.db-engine=pebble
- --ws.addr=0.0.0.0
- --ws.origins=*
- --ws.port=8545
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${ALEPHZERO_SEPOLIA_NITRO_ARCHIVE_PEBBLE_HASH_DATA:-alephzero-sepolia-nitro-archive-pebble-hash}:/root/.arbitrum
- ./arb/alephzero/sepolia:/config
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=true
- prometheus-scrape.port=6070
- prometheus-scrape.path=/debug/metrics/prometheus
- traefik.enable=true
- traefik.http.middlewares.alephzero-sepolia-nitro-archive-pebble-hash-stripprefix.stripprefix.prefixes=/alephzero-sepolia-archive
- traefik.http.services.alephzero-sepolia-nitro-archive-pebble-hash.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.alephzero-sepolia-nitro-archive-pebble-hash.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.alephzero-sepolia-nitro-archive-pebble-hash.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.alephzero-sepolia-nitro-archive-pebble-hash.rule=Host(`$DOMAIN`) && (Path(`/alephzero-sepolia-archive`) || Path(`/alephzero-sepolia-archive/`))}
- ${NO_SSL:+traefik.http.routers.alephzero-sepolia-nitro-archive-pebble-hash.rule=Path(`/alephzero-sepolia-archive`) || Path(`/alephzero-sepolia-archive/`)}
- traefik.http.routers.alephzero-sepolia-nitro-archive-pebble-hash.middlewares=alephzero-sepolia-nitro-archive-pebble-hash-stripprefix, ipallowlist
volumes:
alephzero-sepolia-nitro-archive-pebble-hash:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: alephzero-sepolia
method-groups:
enabled:
- debug
- filter
methods:
disabled:
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
# standard geth only
- name: debug_getRawBlock
- name: debug_getRawTransaction
- name: debug_getRawReceipts
- name: debug_getRawHeader
- name: debug_getBadBlocks
      # non standard, geth only, slightly dangerous
- name: debug_intermediateRoots
- name: debug_dumpBlock
# standard geth and erigon
- name: debug_accountRange
- name: debug_getModifiedAccountsByNumber
- name: debug_getModifiedAccountsByHash
# non standard geth and erigon
- name: eth_getRawTransactionByHash
- name: eth_getRawTransactionByBlockHashAndIndex
...

View File

@@ -1,152 +0,0 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:arb/nitro/alephzero-sepolia-nitro-pruned-pebble-path.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/alephzero-sepolia \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
alephzero-sepolia:
image: ${ALEPHZERO_NITRO_IMAGE:-offchainlabs/nitro-node}:${ALEPHZERO_SEPOLIA_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
expose:
- 8545
command:
- --conf.file=/config/baseConfig.json
- --execution.caching.archive=${ALEPHZERO_SEPOLIA_ARCHIVE_DB:-false}
- --execution.caching.database-cache=${ALEPHZERO_SEPOLIA_NITRO_PRUNED_PEBBLE_PATH_DATABASE_CACHE:-2048}
- --execution.caching.snapshot-cache=${ALEPHZERO_SEPOLIA_NITRO_PRUNED_PEBBLE_PATH_SNAPSHOT_CACHE:-400}
- --execution.caching.state-scheme=path
- --execution.caching.trie-clean-cache=${ALEPHZERO_SEPOLIA_NITRO_PRUNED_PEBBLE_PATH_TRIE_CLEAN_CACHE:-600}
- --execution.caching.trie-dirty-cache=${ALEPHZERO_SEPOLIA_NITRO_PRUNED_PEBBLE_PATH_TRIE_DIRTY_CACHE:-1024}
- --execution.forwarding-target=https://rpc.alephzero-testnet.gelato.digital
- --execution.rpc.gas-cap=5500000000
- --execution.rpc.state-scheme=path
- --execution.sequencer.enable=false
- --http.addr=0.0.0.0
- --http.api=eth,net,web3,arb,txpool,debug
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.download-path=/tmp
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070
- --node.batch-poster.enable=false
- --node.da-provider.enable=false
- --node.data-availability.enable=true
- --node.data-availability.parent-chain-node-url=${ETHEREUM_SEPOLIA_EXECUTION_RPC}
- --node.data-availability.rest-aggregator.enable=true
- --node.data-availability.rest-aggregator.urls=https://das.alephzero-testnet.gelato.digital
- --node.data-availability.sequencer-inbox-address=0x130937498521962644184395825246273622310592356541
- --node.feed.input.url=wss://feed.alephzero-testnet.gelato.digital
- --node.sequencer=false
- --node.staker.enable=false
- --parent-chain.blob-client.beacon-url=${ETHEREUM_SEPOLIA_BEACON_REST}
- --parent-chain.connection.url=${ETHEREUM_SEPOLIA_EXECUTION_RPC}
- --persistent.chain=/root/.arbitrum/alephzero-sepolia
- --persistent.db-engine=pebble
- --ws.addr=0.0.0.0
- --ws.origins=*
- --ws.port=8545
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${ALEPHZERO_SEPOLIA_NITRO_PRUNED_PEBBLE_PATH_DATA:-alephzero-sepolia-nitro-pruned-pebble-path}:/root/.arbitrum
- ./arb/alephzero/sepolia:/config
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=true
- prometheus-scrape.port=6070
- prometheus-scrape.path=/debug/metrics/prometheus
- traefik.enable=true
- traefik.http.middlewares.alephzero-sepolia-nitro-pruned-pebble-path-stripprefix.stripprefix.prefixes=/alephzero-sepolia
- traefik.http.services.alephzero-sepolia-nitro-pruned-pebble-path.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.alephzero-sepolia-nitro-pruned-pebble-path.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.alephzero-sepolia-nitro-pruned-pebble-path.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.alephzero-sepolia-nitro-pruned-pebble-path.rule=Host(`$DOMAIN`) && (Path(`/alephzero-sepolia`) || Path(`/alephzero-sepolia/`))}
- ${NO_SSL:+traefik.http.routers.alephzero-sepolia-nitro-pruned-pebble-path.rule=Path(`/alephzero-sepolia`) || Path(`/alephzero-sepolia/`)}
- traefik.http.routers.alephzero-sepolia-nitro-pruned-pebble-path.middlewares=alephzero-sepolia-nitro-pruned-pebble-path-stripprefix, ipallowlist
volumes:
alephzero-sepolia-nitro-pruned-pebble-path:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: alephzero-sepolia
method-groups:
enabled:
- debug
- filter
methods:
disabled:
# not compatible with path state scheme
- name: debug_traceBlockByHash
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
# standard geth only
- name: debug_getRawBlock
- name: debug_getRawTransaction
- name: debug_getRawReceipts
- name: debug_getRawHeader
- name: debug_getBadBlocks
      # non standard, geth only, slightly dangerous
- name: debug_intermediateRoots
- name: debug_dumpBlock
# standard geth and erigon
- name: debug_accountRange
- name: debug_getModifiedAccountsByNumber
- name: debug_getModifiedAccountsByHash
# non standard geth and erigon
- name: eth_getRawTransactionByHash
- name: eth_getRawTransactionByBlockHashAndIndex
...
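
The paired `${NO_SSL:-...}` / `${NO_SSL:+...}` labels above lean on Compose's parameter expansion to toggle TLS: with `NO_SSL` unset, the `:-` labels expand to their full Traefik values and the `:+` label expands to nothing; with `NO_SSL` set, the `:-` labels collapse to the variable's literal value (a harmless dummy label) and the plain-HTTP router rule from the `:+` label applies. The same expansion rules can be checked in a shell:

```bash
# Mirror of the Compose interpolation used in the labels above.
unset NO_SSL
echo "${NO_SSL:-tls-label}"    # -> tls-label   (TLS labels active)
echo "${NO_SSL:+plain-rule}"   # -> (empty)
NO_SSL=true
echo "${NO_SSL:-tls-label}"    # -> true        (dummy label)
echo "${NO_SSL:+plain-rule}"   # -> plain-rule  (plain-HTTP rule active)
```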

View File

@@ -1,141 +0,0 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:arb/nitro/arbitrum-nova-nitro-archive-leveldb-hash.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/arbitrum-nova-archive \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
arbitrum-nova-archive:
image: ${ARBITRUM_NITRO_IMAGE:-offchainlabs/nitro-node}:${ARBITRUM_NOVA_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
expose:
- 8545
command:
- --chain.id=42170
- --execution.caching.archive=true
- --execution.caching.database-cache=${ARBITRUM_NOVA_NITRO_ARCHIVE_LEVELDB_HASH_DATABASE_CACHE:-2048}
- --execution.caching.snapshot-cache=${ARBITRUM_NOVA_NITRO_ARCHIVE_LEVELDB_HASH_SNAPSHOT_CACHE:-400}
- --execution.caching.trie-clean-cache=${ARBITRUM_NOVA_NITRO_ARCHIVE_LEVELDB_HASH_TRIE_CLEAN_CACHE:-600}
- --execution.caching.trie-dirty-cache=${ARBITRUM_NOVA_NITRO_ARCHIVE_LEVELDB_HASH_TRIE_DIRTY_CACHE:-1024}
- --execution.rpc.gas-cap=5500000000
- --execution.rpc.log-history=0
- --execution.sequencer.enable=false
- --http.addr=0.0.0.0
- --http.api=eth,net,web3,arb,txpool,debug
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.download-path=/tmp
- --init.latest=archive
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070
- --node.batch-poster.enable=false
- --node.sequencer=false
- --node.staker.enable=false
- --parent-chain.blob-client.beacon-url=${ETHEREUM_MAINNET_BEACON_REST}
- --parent-chain.connection.url=${ETHEREUM_MAINNET_EXECUTION_RPC}
- --persistent.chain=/root/.arbitrum/arbitrum-nova-archive
- --ws.addr=0.0.0.0
- --ws.origins=*
- --ws.port=8545
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${ARBITRUM_NOVA_NITRO_ARCHIVE_LEVELDB_HASH_DATA:-arbitrum-nova-nitro-archive-leveldb-hash}:/root/.arbitrum
- ./tmp/arbitrum-nova-archive:/tmp
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=true
- prometheus-scrape.port=6070
- prometheus-scrape.path=/debug/metrics/prometheus
- traefik.enable=true
- traefik.http.middlewares.arbitrum-nova-nitro-archive-leveldb-hash-stripprefix.stripprefix.prefixes=/arbitrum-nova-archive
- traefik.http.services.arbitrum-nova-nitro-archive-leveldb-hash.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.arbitrum-nova-nitro-archive-leveldb-hash.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.arbitrum-nova-nitro-archive-leveldb-hash.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.arbitrum-nova-nitro-archive-leveldb-hash.rule=Host(`$DOMAIN`) && (Path(`/arbitrum-nova-archive`) || Path(`/arbitrum-nova-archive/`))}
- ${NO_SSL:+traefik.http.routers.arbitrum-nova-nitro-archive-leveldb-hash.rule=Path(`/arbitrum-nova-archive`) || Path(`/arbitrum-nova-archive/`)}
- traefik.http.routers.arbitrum-nova-nitro-archive-leveldb-hash.middlewares=arbitrum-nova-nitro-archive-leveldb-hash-stripprefix, ipallowlist
volumes:
arbitrum-nova-nitro-archive-leveldb-hash:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: arbitrum-nova
method-groups:
enabled:
- debug
- filter
methods:
disabled:
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
# standard geth only
- name: debug_getRawBlock
- name: debug_getRawTransaction
- name: debug_getRawReceipts
- name: debug_getRawHeader
- name: debug_getBadBlocks
      # non standard, geth only, slightly dangerous
- name: debug_intermediateRoots
- name: debug_dumpBlock
# standard geth and erigon
- name: debug_accountRange
- name: debug_getModifiedAccountsByNumber
- name: debug_getModifiedAccountsByHash
# non standard geth and erigon
- name: eth_getRawTransactionByHash
- name: eth_getRawTransactionByBlockHashAndIndex
...
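
`--init.latest=archive` above fetches a snapshot into `--init.download-path=/tmp`, which is bind-mounted from `./tmp/arbitrum-nova-archive` on the host. Before the first start, make sure that directory exists and sits on a disk with enough free space for the archive snapshot:

```bash
# Create the download directory and check available space on its filesystem.
mkdir -p ./tmp/arbitrum-nova-archive
df -h ./tmp/arbitrum-nova-archive
```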

View File

@@ -1,142 +0,0 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:arb/nitro/arbitrum-nova-nitro-pruned-pebble-hash.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/arbitrum-nova \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
arbitrum-nova:
image: ${ARBITRUM_NITRO_IMAGE:-offchainlabs/nitro-node}:${ARBITRUM_NOVA_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
expose:
- 8545
command:
- --chain.id=42170
- --execution.caching.archive=${ARBITRUM_NOVA_ARCHIVE_DB:-false}
- --execution.caching.database-cache=${ARBITRUM_NOVA_NITRO_PRUNED_PEBBLE_HASH_DATABASE_CACHE:-2048}
- --execution.caching.snapshot-cache=${ARBITRUM_NOVA_NITRO_PRUNED_PEBBLE_HASH_SNAPSHOT_CACHE:-400}
- --execution.caching.trie-clean-cache=${ARBITRUM_NOVA_NITRO_PRUNED_PEBBLE_HASH_TRIE_CLEAN_CACHE:-600}
- --execution.caching.trie-dirty-cache=${ARBITRUM_NOVA_NITRO_PRUNED_PEBBLE_HASH_TRIE_DIRTY_CACHE:-1024}
- --execution.rpc.gas-cap=5500000000
- --execution.rpc.log-history=0
- --execution.sequencer.enable=false
- --http.addr=0.0.0.0
- --http.api=eth,net,web3,arb,txpool,debug
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.download-path=/tmp
- --init.latest=pruned
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070
- --node.batch-poster.enable=false
- --node.sequencer=false
- --node.staker.enable=false
- --parent-chain.blob-client.beacon-url=${ETHEREUM_MAINNET_BEACON_REST}
- --parent-chain.connection.url=${ETHEREUM_MAINNET_EXECUTION_RPC}
- --persistent.chain=/root/.arbitrum/arbitrum-nova
- --persistent.db-engine=pebble
- --ws.addr=0.0.0.0
- --ws.origins=*
- --ws.port=8545
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${ARBITRUM_NOVA_NITRO_PRUNED_PEBBLE_HASH_DATA:-arbitrum-nova-nitro-pruned-pebble-hash}:/root/.arbitrum
- ./tmp/arbitrum-nova:/tmp
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=true
- prometheus-scrape.port=6070
- prometheus-scrape.path=/debug/metrics/prometheus
- traefik.enable=true
- traefik.http.middlewares.arbitrum-nova-nitro-pruned-pebble-hash-stripprefix.stripprefix.prefixes=/arbitrum-nova
- traefik.http.services.arbitrum-nova-nitro-pruned-pebble-hash.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.arbitrum-nova-nitro-pruned-pebble-hash.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.arbitrum-nova-nitro-pruned-pebble-hash.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.arbitrum-nova-nitro-pruned-pebble-hash.rule=Host(`$DOMAIN`) && (Path(`/arbitrum-nova`) || Path(`/arbitrum-nova/`))}
- ${NO_SSL:+traefik.http.routers.arbitrum-nova-nitro-pruned-pebble-hash.rule=Path(`/arbitrum-nova`) || Path(`/arbitrum-nova/`)}
- traefik.http.routers.arbitrum-nova-nitro-pruned-pebble-hash.middlewares=arbitrum-nova-nitro-pruned-pebble-hash-stripprefix, ipallowlist
volumes:
arbitrum-nova-nitro-pruned-pebble-hash:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: arbitrum-nova
method-groups:
enabled:
- debug
- filter
methods:
disabled:
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
# standard geth only
- name: debug_getRawBlock
- name: debug_getRawTransaction
- name: debug_getRawReceipts
- name: debug_getRawHeader
- name: debug_getBadBlocks
# non standard geth only slightly dangerous
- name: debug_intermediateRoots
- name: debug_dumpBlock
# standard geth and erigon
- name: debug_accountRange
- name: debug_getModifiedAccountsByNumber
- name: debug_getModifiedAccountsByHash
# non standard geth and erigon
- name: eth_getRawTransactionByHash
- name: eth_getRawTransactionByBlockHashAndIndex
...
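The `${NO_SSL:-...}` and `${NO_SSL:+...}` pairs on the Traefik labels are ordinary Compose parameter expansion. With `NO_SSL` unset or empty, each `:-` label expands to the TLS router configuration (websecure entrypoint, certresolver, Host-based rule) and the `:+` label collapses to nothing; with `NO_SSL=true` in `.env`, the `:-` labels degenerate to the harmless literal `true` and only the plain Path-based router rule remains. A quick way to inspect the outcome (sketch):

```bash
# default: the websecure/certresolver router labels are emitted
docker compose config | grep 'routers.arbitrum-nova'

# NO_SSL set: the TLS-related labels collapse and only the Path() rule survives
NO_SSL=true docker compose config | grep 'routers.arbitrum-nova'
```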


@@ -1,181 +0,0 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:arb/nitro/arbitrum-one-nitro-archive-erigon.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/arbitrum-one-nitro-archive-erigon \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
arbitrum-one-archive-erigon:
image: ${ARBITRUM_NITRO_IMAGE:-erigontech/nitro-erigon}:${ARBITRUM_ONE_NITRO_VERSION:-main-de68b93}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
memlock: -1 # Disable memory locking limits (for in-memory DBs like MDBX)
user: root
ports:
- 21789:21789
- 21789:21789/udp
- 27891:27891
- 27891:27891/udp
- 38917:38917
- 38917:38917/udp
- 43123:43123
- 43123:43123/udp
- 49231:49231
- 49231:49231/udp
expose:
- 8545
- 5555
entrypoint: [erigon]
command:
- --datadir=/root/.local/share/erigon
- --http
- --http.addr=0.0.0.0
- --http.api=eth,erigon,web3,net,debug,trace,txpool,admin,ots
- --http.port=8545
- --http.vhosts=*
- --maxpeers=50
- --metrics
- --metrics.addr=0.0.0.0
- --metrics.port=6060
- --nat=extip:${IP}
- --p2p.allowed-ports=43123
- --p2p.allowed-ports=49231
- --port=21789
- --prune.mode=archive
- --l2rpc="http://arbitrum-one-archive:8545"
- --torrent.download.rate=${ARBITRUM_ONE_NITRO_ARCHIVE_ERIGON_MAX_DOWNLOAD_RATE:-1000mb}
- --torrent.port=38917
- --ws
- --ws.port=8545
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${ARBITRUM_ONE_NITRO_ARCHIVE_ERIGON_DATA:-arbitrum-one-nitro-archive-erigon}:/root/.local/share/erigon
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=true
- prometheus-scrape.port=6060
- prometheus-scrape.path=/debug/metrics/prometheus
- traefik.enable=true
- traefik.http.middlewares.arbitrum-one-nitro-archive-erigon-stripprefix.stripprefix.prefixes=/arbitrum-one-nitro-archive-erigon
- traefik.http.services.arbitrum-one-nitro-archive-erigon.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.arbitrum-one-nitro-archive-erigon.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.arbitrum-one-nitro-archive-erigon.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.arbitrum-one-nitro-archive-erigon.rule=Host(`$DOMAIN`) && (Path(`/arbitrum-one-nitro-archive-erigon`) || Path(`/arbitrum-one-nitro-archive-erigon/`))}
- ${NO_SSL:+traefik.http.routers.arbitrum-one-nitro-archive-erigon.rule=Path(`/arbitrum-one-nitro-archive-erigon`) || Path(`/arbitrum-one-nitro-archive-erigon/`)}
- traefik.http.routers.arbitrum-one-nitro-archive-erigon.middlewares=arbitrum-one-nitro-archive-erigon-stripprefix, ipallowlist
- traefik.http.routers.arbitrum-one-nitro-archive-erigon.service=arbitrum-one-nitro-archive-erigon
- traefik.http.routers.arbitrum-one-nitro-archive-erigon-node.service=arbitrum-one-nitro-archive-erigon-node
- traefik.http.services.arbitrum-one-nitro-archive-erigon-node.loadbalancer.server.port=5555
- ${NO_SSL:-traefik.http.routers.arbitrum-one-nitro-archive-erigon-node.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.arbitrum-one-nitro-archive-erigon-node.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.arbitrum-one-nitro-archive-erigon-node.rule=Host(`$DOMAIN`) && PathPrefix(`/arbitrum-one-nitro-archive-erigon/eth`)}
- ${NO_SSL:+traefik.http.routers.arbitrum-one-nitro-archive-erigon-node.rule=PathPrefix(`/arbitrum-one-nitro-archive-erigon/eth`)}
- traefik.http.routers.arbitrum-one-nitro-archive-erigon-node.middlewares=arbitrum-one-nitro-archive-erigon-stripprefix, ipallowlist
shm_size: 2gb
volumes:
arbitrum-one-nitro-archive-erigon:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: arbitrum-one
method-groups:
enabled:
- debug
- filter
- trace
methods:
disabled:
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
# standard geth and erigon
- name: debug_accountRange
- name: debug_getModifiedAccountsByNumber
- name: debug_getModifiedAccountsByHash
# non standard geth and erigon
- name: eth_getRawTransactionByHash
- name: eth_getRawTransactionByBlockHashAndIndex
# non standard erigon only
- name: eth_getBlockReceipts
- name: eth_protocolVersion
- name: eth_callMany
- name: eth_callBundle
- name: debug_accountAt
- name: debug_traceCallMany
- name: erigon_getHeaderByHash
- name: erigon_getBlockReceiptsByBlockHash
- name: erigon_getHeaderByNumber
- name: erigon_getLogsByHash
- name: erigon_forks
- name: erigon_getBlockByTimestamp
- name: erigon_BlockNumber
- name: erigon_getLatestLogs
- name: ots_getInternalOperations
- name: ots_hasCode
- name: ots_getTransactionError
- name: ots_traceTransaction
- name: ots_getBlockDetails
- name: ots_getBlockDetailsByHash
- name: ots_getBlockTransactions
- name: ots_searchTransactionsBefore
- name: ots_searchTransactionsAfter
- name: ots_getTransactionBySenderAndNonce
- name: ots_getContractCreator
- id: $${ID}-beacon-chain
chain: eth-beacon-chain
labels:
provider: $${PROVIDER}-beacon-chain
connection:
generic:
rpc:
url: $${RPC_URL}
...
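Unlike the plain Nitro variants, this erigon flavour publishes five P2P and torrent ports on the host and advertises the public address via `--nat=extip:${IP}`, so those ports have to be reachable from outside for peering and snapshot downloads to work. A quick TCP reachability probe from another machine (sketch; assumes netcat is installed and `IP` holds the node's public address):

```bash
# ports taken from the compose file above; UDP reachability is not covered by this check
for p in 21789 27891 38917 43123 49231; do
  nc -zv -w 3 "$IP" "$p"
done
```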


@@ -1,212 +0,0 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:arb/nitro/arbitrum-one-nitro-archive-leveldb-hash.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/arbitrum-one-archive \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
arbitrum-one-archive:
image: ${ARBITRUM_NITRO_IMAGE:-offchainlabs/nitro-node}:${ARBITRUM_ONE_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
expose:
- 8545
command:
- --chain.id=42161
- --execution.caching.archive=true
- --execution.caching.database-cache=${ARBITRUM_ONE_NITRO_ARCHIVE_LEVELDB_HASH_DATABASE_CACHE:-2048}
- --execution.caching.snapshot-cache=${ARBITRUM_ONE_NITRO_ARCHIVE_LEVELDB_HASH_SNAPSHOT_CACHE:-400}
- --execution.caching.trie-clean-cache=${ARBITRUM_ONE_NITRO_ARCHIVE_LEVELDB_HASH_TRIE_CLEAN_CACHE:-600}
- --execution.caching.trie-dirty-cache=${ARBITRUM_ONE_NITRO_ARCHIVE_LEVELDB_HASH_TRIE_DIRTY_CACHE:-1024}
- --execution.rpc.classic-redirect=http://arbitrum-one-arbnode-archive:8545
- --execution.rpc.gas-cap=5500000000
- --execution.rpc.log-history=0
- --execution.sequencer.enable=false
- --http.addr=0.0.0.0
- --http.api=eth,net,web3,arb,txpool,debug
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.download-path=/tmp
- --init.latest=archive
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070
- --node.batch-poster.enable=false
- --node.sequencer=false
- --node.staker.enable=false
- --parent-chain.blob-client.beacon-url=${ETHEREUM_MAINNET_BEACON_REST}
- --parent-chain.connection.url=${ETHEREUM_MAINNET_EXECUTION_RPC}
- --persistent.chain=/root/.arbitrum/arbitrum-one-archive
- --ws.addr=0.0.0.0
- --ws.origins=*
- --ws.port=8545
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${ARBITRUM_ONE_NITRO_ARCHIVE_LEVELDB_HASH_DATA:-arbitrum-one-nitro-archive-leveldb-hash}:/root/.arbitrum
- ./tmp/arbitrum-one-archive:/tmp
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=true
- prometheus-scrape.port=6070
- prometheus-scrape.path=/debug/metrics/prometheus
- traefik.enable=true
- traefik.http.middlewares.arbitrum-one-nitro-archive-leveldb-hash-stripprefix.stripprefix.prefixes=/arbitrum-one-archive
- traefik.http.services.arbitrum-one-nitro-archive-leveldb-hash.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.arbitrum-one-nitro-archive-leveldb-hash.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.arbitrum-one-nitro-archive-leveldb-hash.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.arbitrum-one-nitro-archive-leveldb-hash.rule=Host(`$DOMAIN`) && (Path(`/arbitrum-one-archive`) || Path(`/arbitrum-one-archive/`))}
- ${NO_SSL:+traefik.http.routers.arbitrum-one-nitro-archive-leveldb-hash.rule=Path(`/arbitrum-one-archive`) || Path(`/arbitrum-one-archive/`)}
- traefik.http.routers.arbitrum-one-nitro-archive-leveldb-hash.middlewares=arbitrum-one-nitro-archive-leveldb-hash-stripprefix, ipallowlist
arbitrum-one-arbnode-archive:
image: ${ARBITRUM_ARBNODE_IMAGE:-offchainlabs/arb-node}:${ARBITRUM_ONE_ARBNODE_VERSION:-v1.4.5-e97c1a4}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
expose:
- 8545
- 8546
entrypoint: [/home/user/go/bin/arb-node]
command:
- --core.checkpoint-gas-frequency=156250000
- --l1.url=${ETHEREUM_MAINNET_EXECUTION_RPC}
- --l2.disable-upstream
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=7070
- --node.cache.allow-slow-lookup
- --node.chain-id=42161
- --node.rpc.addr=0.0.0.0
- --node.rpc.enable-l1-calls
- --node.rpc.port=8545
- --node.rpc.tracing.enable
- --node.rpc.tracing.namespace=trace
- --node.ws.addr=0.0.0.0
- --node.ws.port=8546
- --persistent.chain=/data/datadir/
- --persistent.global-config=/data/
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${ARBITRUM_ONE_ARBNODE_ARCHIVE_LEVELDB_HASH_DATA:-arbitrum-one-arbnode-archive-leveldb-hash}:/data
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=false
volumes:
arbitrum-one-arbnode-archive-leveldb-hash:
arbitrum-one-nitro-archive-leveldb-hash:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: arbitrum
method-groups:
enabled:
- debug
- filter
methods:
disabled:
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
# standard geth only
- name: debug_getRawBlock
- name: debug_getRawTransaction
- name: debug_getRawReceipts
- name: debug_getRawHeader
- name: debug_getBadBlocks
# non standard geth only slightly dangerous
- name: debug_intermediateRoots
- name: debug_dumpBlock
# standard geth and erigon
- name: debug_accountRange
- name: debug_getModifiedAccountsByNumber
- name: debug_getModifiedAccountsByHash
# non standard geth and erigon
- name: eth_getRawTransactionByHash
- name: eth_getRawTransactionByBlockHashAndIndex
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: arbitrum
method-groups:
enabled:
- debug
- filter
methods:
disabled:
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
...
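This archive setup is a pair: the Nitro service answers everything from the Nitro era, while `--execution.rpc.classic-redirect` points it at the companion `arb-node` container for archive queries against pre-Nitro ("classic") blocks. As a sanity check that the redirect path works end to end, a state query pinned to a block well before the migration (block 10,000,000 is safely in the classic range) should still succeed (sketch, assuming the default `/arbitrum-one-archive` path):

```bash
# 0x989680 = block 10,000,000, a pre-Nitro block; state for classic blocks is
# expected to be served by arbitrum-one-arbnode-archive via classic-redirect
curl -s -X POST "https://${DOMAIN}/arbitrum-one-archive" \
  -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_getBalance","params":["0x0000000000000000000000000000000000000000","0x989680"],"id":1}'
```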


@@ -1,213 +0,0 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:arb/nitro/arbitrum-one-nitro-archive-pebble-hash.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/arbitrum-one-archive \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
arbitrum-one-archive:
image: ${ARBITRUM_NITRO_IMAGE:-offchainlabs/nitro-node}:${ARBITRUM_ONE_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
expose:
- 8545
command:
- --chain.id=42161
- --execution.caching.archive=true
- --execution.caching.database-cache=${ARBITRUM_ONE_NITRO_ARCHIVE_PEBBLE_HASH_DATABASE_CACHE:-2048}
- --execution.caching.snapshot-cache=${ARBITRUM_ONE_NITRO_ARCHIVE_PEBBLE_HASH_SNAPSHOT_CACHE:-400}
- --execution.caching.trie-clean-cache=${ARBITRUM_ONE_NITRO_ARCHIVE_PEBBLE_HASH_TRIE_CLEAN_CACHE:-600}
- --execution.caching.trie-dirty-cache=${ARBITRUM_ONE_NITRO_ARCHIVE_PEBBLE_HASH_TRIE_DIRTY_CACHE:-1024}
- --execution.rpc.classic-redirect=http://arbitrum-one-arbnode-archive:8545
- --execution.rpc.gas-cap=5500000000
- --execution.rpc.log-history=0
- --execution.sequencer.enable=false
- --http.addr=0.0.0.0
- --http.api=eth,net,web3,arb,txpool,debug
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.download-path=/tmp
- --init.latest=archive
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070
- --node.batch-poster.enable=false
- --node.sequencer=false
- --node.staker.enable=false
- --parent-chain.blob-client.beacon-url=${ETHEREUM_MAINNET_BEACON_REST}
- --parent-chain.connection.url=${ETHEREUM_MAINNET_EXECUTION_RPC}
- --persistent.chain=/root/.arbitrum/arbitrum-one-archive
- --persistent.db-engine=pebble
- --ws.addr=0.0.0.0
- --ws.origins=*
- --ws.port=8545
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${ARBITRUM_ONE_NITRO_ARCHIVE_PEBBLE_HASH_DATA:-arbitrum-one-nitro-archive-pebble-hash}:/root/.arbitrum
- ./tmp/arbitrum-one-archive:/tmp
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=true
- prometheus-scrape.port=6070
- prometheus-scrape.path=/debug/metrics/prometheus
- traefik.enable=true
- traefik.http.middlewares.arbitrum-one-nitro-archive-pebble-hash-stripprefix.stripprefix.prefixes=/arbitrum-one-archive
- traefik.http.services.arbitrum-one-nitro-archive-pebble-hash.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.arbitrum-one-nitro-archive-pebble-hash.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.arbitrum-one-nitro-archive-pebble-hash.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.arbitrum-one-nitro-archive-pebble-hash.rule=Host(`$DOMAIN`) && (Path(`/arbitrum-one-archive`) || Path(`/arbitrum-one-archive/`))}
- ${NO_SSL:+traefik.http.routers.arbitrum-one-nitro-archive-pebble-hash.rule=Path(`/arbitrum-one-archive`) || Path(`/arbitrum-one-archive/`)}
- traefik.http.routers.arbitrum-one-nitro-archive-pebble-hash.middlewares=arbitrum-one-nitro-archive-pebble-hash-stripprefix, ipallowlist
arbitrum-one-arbnode-archive:
image: ${ARBITRUM_ARBNODE_IMAGE:-offchainlabs/arb-node}:${ARBITRUM_ONE_ARBNODE_VERSION:-v1.4.5-e97c1a4}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
expose:
- 8545
- 8546
entrypoint: [/home/user/go/bin/arb-node]
command:
- --core.checkpoint-gas-frequency=156250000
- --l1.url=${ETHEREUM_MAINNET_EXECUTION_RPC}
- --l2.disable-upstream
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=7070
- --node.cache.allow-slow-lookup
- --node.chain-id=42161
- --node.rpc.addr=0.0.0.0
- --node.rpc.enable-l1-calls
- --node.rpc.port=8545
- --node.rpc.tracing.enable
- --node.rpc.tracing.namespace=trace
- --node.ws.addr=0.0.0.0
- --node.ws.port=8546
- --persistent.chain=/data/datadir/
- --persistent.global-config=/data/
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${ARBITRUM_ONE_ARBNODE_ARCHIVE_LEVELDB_HASH_DATA:-arbitrum-one-arbnode-archive-leveldb-hash}:/data
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=false
volumes:
arbitrum-one-arbnode-archive-leveldb-hash:
arbitrum-one-nitro-archive-pebble-hash:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: arbitrum
method-groups:
enabled:
- debug
- filter
methods:
disabled:
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
# standard geth only
- name: debug_getRawBlock
- name: debug_getRawTransaction
- name: debug_getRawReceipts
- name: debug_getRawHeader
- name: debug_getBadBlocks
# non standard geth only slightly dangerous
- name: debug_intermediateRoots
- name: debug_dumpBlock
# standard geth and erigon
- name: debug_accountRange
- name: debug_getModifiedAccountsByNumber
- name: debug_getModifiedAccountsByHash
# non standard geth and erigon
- name: eth_getRawTransactionByHash
- name: eth_getRawTransactionByBlockHashAndIndex
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: arbitrum
method-groups:
enabled:
- debug
- filter
methods:
disabled:
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
...
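All of these variants resolve their data directory through an environment override with a named-volume fallback, for example `${ARBITRUM_ONE_NITRO_ARCHIVE_PEBBLE_HASH_DATA:-arbitrum-one-nitro-archive-pebble-hash}`, so a large archive dataset can be placed on a dedicated disk without editing the compose file: any absolute path set in `.env` turns the named volume into a bind mount (sketch, paths are placeholders):

```bash
# .env: keep the archive databases on a dedicated mount instead of Docker volumes
ARBITRUM_ONE_NITRO_ARCHIVE_PEBBLE_HASH_DATA=/mnt/nvme0/arbitrum-one-archive
ARBITRUM_ONE_ARBNODE_ARCHIVE_LEVELDB_HASH_DATA=/mnt/nvme0/arbitrum-one-classic
```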


@@ -1,206 +0,0 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:arb/nitro/arbitrum-one-nitro-pruned-pebble-hash--fireeth.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/arbitrum-one \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
arbitrum-one:
image: ${ARBITRUM_FIREETH_IMAGE:-ghcr.io/streamingfast/go-ethereum}:${ARBITRUM_ONE_FIREETH_VERSION:-v2.12.4-nitro-nitro-v3.6.7-fh3.0}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
expose:
- 8545
entrypoint: [sh, -c, exec fireeth start reader-node --log-to-file=false --reader-node-arguments "$*", _]
command:
- --chain.id=42161
- --execution.caching.archive=${ARBITRUM_ONE_ARCHIVE_DB:-false}
- --execution.caching.database-cache=${ARBITRUM_ONE_NITRO_PRUNED_PEBBLE_HASH_DATABASE_CACHE:-2048}
- --execution.caching.snapshot-cache=${ARBITRUM_ONE_NITRO_PRUNED_PEBBLE_HASH_SNAPSHOT_CACHE:-400}
- --execution.caching.trie-clean-cache=${ARBITRUM_ONE_NITRO_PRUNED_PEBBLE_HASH_TRIE_CLEAN_CACHE:-600}
- --execution.caching.trie-dirty-cache=${ARBITRUM_ONE_NITRO_PRUNED_PEBBLE_HASH_TRIE_DIRTY_CACHE:-1024}
- --execution.rpc.gas-cap=5500000000
- --execution.rpc.log-history=0
- --execution.sequencer.enable=false
- --http.addr=0.0.0.0
- --http.api=eth,net,web3,arb,txpool,debug
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.download-path=/tmp
- --init.latest=pruned
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070
- --node.batch-poster.enable=false
- --node.sequencer=false
- --node.staker.enable=false
- --parent-chain.blob-client.beacon-url=${ETHEREUM_MAINNET_BEACON_REST}
- --parent-chain.connection.url=${ETHEREUM_MAINNET_EXECUTION_RPC}
- --persistent.chain=/root/.arbitrum/arbitrum-one
- --persistent.db-engine=pebble
- --ws.addr=0.0.0.0
- --ws.origins=*
- --ws.port=8545
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${ARBITRUM_ONE_FIREETH_DATA:-arbitrum-one-fireeth}:/app/firehose-data
- ${ARBITRUM_ONE_NITRO_PRUNED_PEBBLE_HASH_DATA:-arbitrum-one-nitro-pruned-pebble-hash}:/root/.arbitrum
- ./tmp/arbitrum-one:/tmp
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=true
- prometheus-scrape.port=6070
- prometheus-scrape.path=/debug/metrics/prometheus
- traefik.enable=true
- traefik.http.middlewares.arbitrum-one-nitro-pruned-pebble-hash-stripprefix.stripprefix.prefixes=/arbitrum-one
- traefik.http.services.arbitrum-one-nitro-pruned-pebble-hash.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.arbitrum-one-nitro-pruned-pebble-hash.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.arbitrum-one-nitro-pruned-pebble-hash.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.arbitrum-one-nitro-pruned-pebble-hash.rule=Host(`$DOMAIN`) && (Path(`/arbitrum-one`) || Path(`/arbitrum-one/`))}
- ${NO_SSL:+traefik.http.routers.arbitrum-one-nitro-pruned-pebble-hash.rule=Path(`/arbitrum-one`) || Path(`/arbitrum-one/`)}
- traefik.http.routers.arbitrum-one-nitro-pruned-pebble-hash.middlewares=arbitrum-one-nitro-pruned-pebble-hash-stripprefix, ipallowlist
arbitrum-one-firehose:
image: ${ARBITRUM_FIREETH_IMAGE:-ghcr.io/streamingfast/go-ethereum}:${ARBITRUM_ONE_FIREETH_VERSION:-v2.12.4-nitro-nitro-v3.6.7-fh3.0}
expose:
- 10015
- 10014
environment:
- ${ARBITRUM_ONE_FIREETH_BLOCKS_STORE:-/app/firehose-data/storage/merged-blocks}
entrypoint: [sh, -c, exec fireeth --config-file="" --log-to-file=false start firehose index-builder relayer merger $@, _]
command:
- --firehose-rate-limit-bucket-fill-rate=${ARBITRUM_ONE_FIREHOSE_RATE_LIMIT_BUCKET_FILL_RATE:-1s}
- --firehose-rate-limit-bucket-size=${ARBITRUM_ONE_FIREHOSE_RATE_LIMIT_BUCKET_SIZE:-200}
- --log-to-file=false
- --relayer-source=arbitrum-one:10010
restart: unless-stopped
depends_on:
- arbitrum-one
networks:
- chains
volumes:
- ${ARBITRUM_ONE_FIREETH_DATA:-arbitrum-one-fireeth}:/app/firehose-data
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=false
- traefik.enable=true
- traefik.http.services.arbitrum-one-nitro-pruned-pebble-hash-firehose.loadbalancer.server.scheme=h2c
- traefik.http.routers.arbitrum-one-nitro-pruned-pebble-hash-firehose.service=arbitrum-one-nitro-pruned-pebble-hash-firehose
- traefik.http.services.arbitrum-one-nitro-pruned-pebble-hash-firehose.loadbalancer.server.port=10015
- traefik.http.routers.arbitrum-one-nitro-pruned-pebble-hash-firehose.entrypoints=grpc
- ${NO_SSL:-traefik.http.routers.arbitrum-one-nitro-pruned-pebble-hash-firehose.tls.certresolver=myresolver}
- traefik.http.routers.arbitrum-one-nitro-pruned-pebble-hash-firehose.rule=Host(`arbitrum-one-firehose.${DOMAIN}`)
- traefik.http.routers.arbitrum-one-nitro-pruned-pebble-hash-firehose.middlewares=ipallowlist
arbitrum-one-events:
image: ${ARBITRUM_FIREETH_IMAGE:-ghcr.io/streamingfast/go-ethereum}:${ARBITRUM_ONE_FIREETH_VERSION:-v2.12.4-nitro-nitro-v3.6.7-fh3.0}
expose:
- 10016
entrypoint: [sh, -c, exec fireeth --config-file="" --log-to-file=false start substreams-tier1 substreams-tier2 $@, _]
command:
- --common-live-blocks-addr=arbitrum-one-firehose:10014
- --log-to-file=false
- --substreams-block-execution-timeout=${ARBITRUM_ONE_SUBSTREAMS_BLOCK_EXECUTION_TIMEOUT:-3m0s}
- --substreams-rpc-endpoints=${ARBITRUM_ONE_EXECUTION_ARCHIVE_RPC}
- --substreams-tier1-max-subrequests=${ARBITRUM_ONE_SUBSTREAMS_TIER1_MAX_SUBREQUESTS:-4}
restart: unless-stopped
depends_on:
- arbitrum-one
networks:
- chains
volumes:
- ${ARBITRUM_ONE_FIREETH_DATA:-arbitrum-one-fireeth}:/app/firehose-data
logging: *logging-defaults
labels:
- traefik.enable=true
- traefik.http.services.arbitrum-one-nitro-pruned-pebble-hash-events.loadbalancer.server.scheme=h2c
- traefik.http.routers.arbitrum-one-nitro-pruned-pebble-hash-events.service=arbitrum-one-nitro-pruned-pebble-hash-events
- traefik.http.services.arbitrum-one-nitro-pruned-pebble-hash-events.loadbalancer.server.port=10016
- traefik.http.routers.arbitrum-one-nitro-pruned-pebble-hash-events.entrypoints=grpc
- ${NO_SSL:-traefik.http.routers.arbitrum-one-nitro-pruned-pebble-hash-events.tls.certresolver=myresolver}
- traefik.http.routers.arbitrum-one-nitro-pruned-pebble-hash-events.rule=Host(`arbitrum-one-events.${DOMAIN}`)
- traefik.http.routers.arbitrum-one-nitro-pruned-pebble-hash-events.middlewares=ipallowlist
volumes:
arbitrum-one-nitro-pruned-pebble-hash:
arbitrum-one-fireeth:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: arbitrum
method-groups:
enabled:
- debug
- filter
methods:
disabled:
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
# standard geth only
- name: debug_getRawBlock
- name: debug_getRawTransaction
- name: debug_getRawReceipts
- name: debug_getRawHeader
- name: debug_getBadBlocks
# non standard geth only slightly dangerous
- name: debug_intermediateRoots
- name: debug_dumpBlock
# standard geth and erigon
- name: debug_accountRange
- name: debug_getModifiedAccountsByNumber
- name: debug_getModifiedAccountsByHash
# non standard geth and erigon
- name: eth_getRawTransactionByHash
- name: eth_getRawTransactionByBlockHashAndIndex
...
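The firehose and substreams services add two requirements on top of the reader node: `ARBITRUM_ONE_EXECUTION_ARCHIVE_RPC` has no default and must be set for `--substreams-rpc-endpoints` (modules that issue eth_calls need an archive endpoint), and the gRPC routers are matched by hostname (`arbitrum-one-firehose.${DOMAIN}`, `arbitrum-one-events.${DOMAIN}`) on the `grpc` entrypoint, so those subdomains have to resolve to this host. A sketch of the extras (the URL is a placeholder, and the reflection-based probe only works if the fireeth build exposes gRPC reflection):

```bash
# .env addition: archive JSON-RPC endpoint for substreams eth_call modules
ARBITRUM_ONE_EXECUTION_ARCHIVE_RPC=https://your-arbitrum-archive.example.com

# optional probe of the firehose gRPC endpoint (requires grpcurl and server reflection)
grpcurl "arbitrum-one-firehose.${DOMAIN}:443" list
```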


@@ -1,142 +0,0 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:arb/nitro/arbitrum-one-nitro-pruned-pebble-hash.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/arbitrum-one \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
arbitrum-one:
image: ${ARBITRUM_NITRO_IMAGE:-offchainlabs/nitro-node}:${ARBITRUM_ONE_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
expose:
- 8545
command:
- --chain.id=42161
- --execution.caching.archive=${ARBITRUM_ONE_ARCHIVE_DB:-false}
- --execution.caching.database-cache=${ARBITRUM_ONE_NITRO_PRUNED_PEBBLE_HASH_DATABASE_CACHE:-2048}
- --execution.caching.snapshot-cache=${ARBITRUM_ONE_NITRO_PRUNED_PEBBLE_HASH_SNAPSHOT_CACHE:-400}
- --execution.caching.trie-clean-cache=${ARBITRUM_ONE_NITRO_PRUNED_PEBBLE_HASH_TRIE_CLEAN_CACHE:-600}
- --execution.caching.trie-dirty-cache=${ARBITRUM_ONE_NITRO_PRUNED_PEBBLE_HASH_TRIE_DIRTY_CACHE:-1024}
- --execution.rpc.gas-cap=5500000000
- --execution.rpc.log-history=0
- --execution.sequencer.enable=false
- --http.addr=0.0.0.0
- --http.api=eth,net,web3,arb,txpool,debug
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.download-path=/tmp
- --init.latest=pruned
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070
- --node.batch-poster.enable=false
- --node.sequencer=false
- --node.staker.enable=false
- --parent-chain.blob-client.beacon-url=${ETHEREUM_MAINNET_BEACON_REST}
- --parent-chain.connection.url=${ETHEREUM_MAINNET_EXECUTION_RPC}
- --persistent.chain=/root/.arbitrum/arbitrum-one
- --persistent.db-engine=pebble
- --ws.addr=0.0.0.0
- --ws.origins=*
- --ws.port=8545
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${ARBITRUM_ONE_NITRO_PRUNED_PEBBLE_HASH_DATA:-arbitrum-one-nitro-pruned-pebble-hash}:/root/.arbitrum
- ./tmp/arbitrum-one:/tmp
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=true
- prometheus-scrape.port=6070
- prometheus-scrape.path=/debug/metrics/prometheus
- traefik.enable=true
- traefik.http.middlewares.arbitrum-one-nitro-pruned-pebble-hash-stripprefix.stripprefix.prefixes=/arbitrum-one
- traefik.http.services.arbitrum-one-nitro-pruned-pebble-hash.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.arbitrum-one-nitro-pruned-pebble-hash.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.arbitrum-one-nitro-pruned-pebble-hash.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.arbitrum-one-nitro-pruned-pebble-hash.rule=Host(`$DOMAIN`) && (Path(`/arbitrum-one`) || Path(`/arbitrum-one/`))}
- ${NO_SSL:+traefik.http.routers.arbitrum-one-nitro-pruned-pebble-hash.rule=Path(`/arbitrum-one`) || Path(`/arbitrum-one/`)}
- traefik.http.routers.arbitrum-one-nitro-pruned-pebble-hash.middlewares=arbitrum-one-nitro-pruned-pebble-hash-stripprefix, ipallowlist
volumes:
arbitrum-one-nitro-pruned-pebble-hash:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: arbitrum
method-groups:
enabled:
- debug
- filter
methods:
disabled:
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
# standard geth only
- name: debug_getRawBlock
- name: debug_getRawTransaction
- name: debug_getRawReceipts
- name: debug_getRawHeader
- name: debug_getBadBlocks
# non standard geth only slightly dangerous
- name: debug_intermediateRoots
- name: debug_dumpBlock
# standard geth and erigon
- name: debug_accountRange
- name: debug_getModifiedAccountsByNumber
- name: debug_getModifiedAccountsByHash
# non standard geth and erigon
- name: eth_getRawTransactionByHash
- name: eth_getRawTransactionByBlockHashAndIndex
...
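On first start this variant downloads a pruned snapshot into the `./tmp/arbitrum-one` bind mount (`--init.download-path=/tmp`, `--init.latest=pruned`) and then still has to catch up to the chain head. Besides the `eth_blockNumber` probe shown in the header comment, `eth_syncing` tells you whether it is still behind (sketch, assuming the default `/arbitrum-one` path):

```bash
# returns false once fully synced, otherwise a sync-status object
curl -s -X POST "https://${DOMAIN}/arbitrum-one" \
  -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_syncing","params":[],"id":1}'
```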


@@ -1,142 +0,0 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:arb/nitro/arbitrum-sepolia-nitro-archive-pebble-hash.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/arbitrum-sepolia-archive \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
arbitrum-sepolia-archive:
image: ${ARBITRUM_NITRO_IMAGE:-offchainlabs/nitro-node}:${ARBITRUM_SEPOLIA_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
expose:
- 8545
command:
- --chain.id=421614
- --execution.caching.archive=true
- --execution.caching.database-cache=${ARBITRUM_SEPOLIA_NITRO_ARCHIVE_PEBBLE_HASH_DATABASE_CACHE:-2048}
- --execution.caching.snapshot-cache=${ARBITRUM_SEPOLIA_NITRO_ARCHIVE_PEBBLE_HASH_SNAPSHOT_CACHE:-400}
- --execution.caching.trie-clean-cache=${ARBITRUM_SEPOLIA_NITRO_ARCHIVE_PEBBLE_HASH_TRIE_CLEAN_CACHE:-600}
- --execution.caching.trie-dirty-cache=${ARBITRUM_SEPOLIA_NITRO_ARCHIVE_PEBBLE_HASH_TRIE_DIRTY_CACHE:-1024}
- --execution.rpc.gas-cap=5500000000
- --execution.rpc.log-history=0
- --execution.sequencer.enable=false
- --http.addr=0.0.0.0
- --http.api=eth,net,web3,arb,txpool,debug
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.download-path=/tmp
- --init.latest=archive
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070
- --node.batch-poster.enable=false
- --node.sequencer=false
- --node.staker.enable=false
- --parent-chain.blob-client.beacon-url=${ETHEREUM_SEPOLIA_BEACON_REST}
- --parent-chain.connection.url=${ETHEREUM_SEPOLIA_EXECUTION_RPC}
- --persistent.chain=/root/.arbitrum/arbitrum-sepolia-archive
- --persistent.db-engine=pebble
- --ws.addr=0.0.0.0
- --ws.origins=*
- --ws.port=8545
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${ARBITRUM_SEPOLIA_NITRO_ARCHIVE_PEBBLE_HASH_DATA:-arbitrum-sepolia-nitro-archive-pebble-hash}:/root/.arbitrum
- ./tmp/arbitrum-sepolia-archive:/tmp
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=true
- prometheus-scrape.port=6070
- prometheus-scrape.path=/debug/metrics/prometheus
- traefik.enable=true
- traefik.http.middlewares.arbitrum-sepolia-nitro-archive-pebble-hash-stripprefix.stripprefix.prefixes=/arbitrum-sepolia-archive
- traefik.http.services.arbitrum-sepolia-nitro-archive-pebble-hash.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.arbitrum-sepolia-nitro-archive-pebble-hash.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.arbitrum-sepolia-nitro-archive-pebble-hash.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.arbitrum-sepolia-nitro-archive-pebble-hash.rule=Host(`$DOMAIN`) && (Path(`/arbitrum-sepolia-archive`) || Path(`/arbitrum-sepolia-archive/`))}
- ${NO_SSL:+traefik.http.routers.arbitrum-sepolia-nitro-archive-pebble-hash.rule=Path(`/arbitrum-sepolia-archive`) || Path(`/arbitrum-sepolia-archive/`)}
- traefik.http.routers.arbitrum-sepolia-nitro-archive-pebble-hash.middlewares=arbitrum-sepolia-nitro-archive-pebble-hash-stripprefix, ipallowlist
volumes:
arbitrum-sepolia-nitro-archive-pebble-hash:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: arbitrum-sepolia
method-groups:
enabled:
- debug
- filter
methods:
disabled:
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
# standard geth only
- name: debug_getRawBlock
- name: debug_getRawTransaction
- name: debug_getRawReceipts
- name: debug_getRawHeader
- name: debug_getBadBlocks
# non standard geth only slightly dangerous
- name: debug_intermediateRoots
- name: debug_dumpBlock
# standard geth and erigon
- name: debug_accountRange
- name: debug_getModifiedAccountsByNumber
- name: debug_getModifiedAccountsByHash
# non standard geth and erigon
- name: eth_getRawTransactionByHash
- name: eth_getRawTransactionByBlockHashAndIndex
...


@@ -1,142 +0,0 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:arb/nitro/arbitrum-sepolia-nitro-pruned-pebble-hash.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/arbitrum-sepolia \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
arbitrum-sepolia:
image: ${ARBITRUM_NITRO_IMAGE:-offchainlabs/nitro-node}:${ARBITRUM_SEPOLIA_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
expose:
- 8545
command:
- --chain.id=421614
- --execution.caching.archive=${ARBITRUM_SEPOLIA_ARCHIVE_DB:-false}
- --execution.caching.database-cache=${ARBITRUM_SEPOLIA_NITRO_PRUNED_PEBBLE_HASH_DATABASE_CACHE:-2048}
- --execution.caching.snapshot-cache=${ARBITRUM_SEPOLIA_NITRO_PRUNED_PEBBLE_HASH_SNAPSHOT_CACHE:-400}
- --execution.caching.trie-clean-cache=${ARBITRUM_SEPOLIA_NITRO_PRUNED_PEBBLE_HASH_TRIE_CLEAN_CACHE:-600}
- --execution.caching.trie-dirty-cache=${ARBITRUM_SEPOLIA_NITRO_PRUNED_PEBBLE_HASH_TRIE_DIRTY_CACHE:-1024}
- --execution.rpc.gas-cap=5500000000
- --execution.rpc.log-history=0
- --execution.sequencer.enable=false
- --http.addr=0.0.0.0
- --http.api=eth,net,web3,arb,txpool,debug
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.download-path=/tmp
- --init.latest=pruned
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070
- --node.batch-poster.enable=false
- --node.sequencer=false
- --node.staker.enable=false
- --parent-chain.blob-client.beacon-url=${ETHEREUM_SEPOLIA_BEACON_REST}
- --parent-chain.connection.url=${ETHEREUM_SEPOLIA_EXECUTION_RPC}
- --persistent.chain=/root/.arbitrum/arbitrum-sepolia
- --persistent.db-engine=pebble
- --ws.addr=0.0.0.0
- --ws.origins=*
- --ws.port=8545
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${ARBITRUM_SEPOLIA_NITRO_PRUNED_PEBBLE_HASH_DATA:-arbitrum-sepolia-nitro-pruned-pebble-hash}:/root/.arbitrum
- ./tmp/arbitrum-sepolia:/tmp
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=true
- prometheus-scrape.port=6070
- prometheus-scrape.path=/debug/metrics/prometheus
- traefik.enable=true
- traefik.http.middlewares.arbitrum-sepolia-nitro-pruned-pebble-hash-stripprefix.stripprefix.prefixes=/arbitrum-sepolia
- traefik.http.services.arbitrum-sepolia-nitro-pruned-pebble-hash.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.arbitrum-sepolia-nitro-pruned-pebble-hash.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.arbitrum-sepolia-nitro-pruned-pebble-hash.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.arbitrum-sepolia-nitro-pruned-pebble-hash.rule=Host(`$DOMAIN`) && (Path(`/arbitrum-sepolia`) || Path(`/arbitrum-sepolia/`))}
- ${NO_SSL:+traefik.http.routers.arbitrum-sepolia-nitro-pruned-pebble-hash.rule=Path(`/arbitrum-sepolia`) || Path(`/arbitrum-sepolia/`)}
- traefik.http.routers.arbitrum-sepolia-nitro-pruned-pebble-hash.middlewares=arbitrum-sepolia-nitro-pruned-pebble-hash-stripprefix, ipallowlist
volumes:
arbitrum-sepolia-nitro-pruned-pebble-hash:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: arbitrum-sepolia
method-groups:
enabled:
- debug
- filter
methods:
disabled:
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
# standard geth only
- name: debug_getRawBlock
- name: debug_getRawTransaction
- name: debug_getRawReceipts
- name: debug_getRawHeader
- name: debug_getBadBlocks
# non standard geth only slightly dangerous
- name: debug_intermediateRoots
- name: debug_dumpBlock
# standard geth and erigon
- name: debug_accountRange
- name: debug_getModifiedAccountsByNumber
- name: debug_getModifiedAccountsByHash
# non standard geth and erigon
- name: eth_getRawTransactionByHash
- name: eth_getRawTransactionByBlockHashAndIndex
...


@@ -1,57 +0,0 @@
services:
arbitrum-sepolia:
image: 'offchainlabs/nitro-node:${NITRO_VERSION:-v3.5.3-0a9c975}'
stop_grace_period: 3m
user: root
volumes:
- 'arbitrum-sepolia-nitro-pruned-pebble-path:/root/.arbitrum'
- './tmp/arbitrum-sepolia:/tmp'
expose:
- 8545
command:
- --chain.id=421614
- --execution.caching.state-scheme=path
- --execution.rpc.gas-cap=600000000
- --execution.caching.archive=false
- --execution.sequencer.enable=false
- --persistent.db-engine=pebble
- --persistent.chain=/root/.arbitrum/arbitrum-sepolia
- --parent-chain.connection.url=${ETHEREUM_SEPOLIA_EXECUTION_RPC}
- --parent-chain.blob-client.beacon-url=${ETHEREUM_SEPOLIA_BEACON_REST}
- --http.addr=0.0.0.0
- --http.port=8545
- --http.vhosts=*
- --http.corsdomain=*
- --http.api=eth,net,web3,arb,txpool,debug
- --ws.port=8545
- --ws.addr=0.0.0.0
- --ws.origins=*
- --ws.api=eth,net,web3,arb,txpool,debug
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070
- --log-type=json
- --node.sequencer=false
- --node.staker.enable=false
- --node.batch-poster.enable=false
restart: unless-stopped
labels:
- "traefik.enable=true"
- "traefik.http.middlewares.arbitrum-sepolia-nitro-pruned-pebble-path-stripprefix.stripprefix.prefixes=/arbitrum-sepolia"
- "traefik.http.services.arbitrum-sepolia-nitro-pruned-pebble-path.loadbalancer.server.port=8545"
- "traefik.http.routers.arbitrum-sepolia-nitro-pruned-pebble-path.entrypoints=websecure"
- "traefik.http.routers.arbitrum-sepolia-nitro-pruned-pebble-path.tls.certresolver=myresolver"
- "traefik.http.routers.arbitrum-sepolia-nitro-pruned-pebble-path.rule=Host(`$DOMAIN`) && PathPrefix(`/arbitrum-sepolia`)"
- "traefik.http.routers.arbitrum-sepolia-nitro-pruned-pebble-path.middlewares=arbitrum-sepolia-nitro-pruned-pebble-path-stripprefix, ipwhitelist"
networks:
- chains
volumes:
arbitrum-sepolia-nitro-pruned-pebble-path:
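This legacy variant runs pruned (`--execution.caching.archive=false`) with the path state scheme, so only a recent window of state is retained; block headers and bodies stay queryable, but state lookups pinned to old blocks are expected to fail once they fall outside that window (the exact error text varies by Nitro version). For example (sketch):

```bash
# likely to error on this pruned node because the state for block 0x1 has been discarded
curl -s -X POST "https://${DOMAIN}/arbitrum-sepolia" \
  -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_getBalance","params":["0x0000000000000000000000000000000000000000","0x1"],"id":1}'
```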


@@ -1,148 +0,0 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:arb/nitro/connext-sepolia-nitro-archive-leveldb-hash.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/connext-sepolia-archive \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
connext-sepolia-archive:
image: ${CONNEXT_NITRO_IMAGE:-offchainlabs/nitro-node}:${CONNEXT_SEPOLIA_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
expose:
- 8545
command:
- --conf.file=/config/baseConfig.json
- --execution.caching.archive=true
- --execution.caching.database-cache=${CONNEXT_SEPOLIA_NITRO_ARCHIVE_LEVELDB_HASH_DATABASE_CACHE:-2048}
- --execution.caching.snapshot-cache=${CONNEXT_SEPOLIA_NITRO_ARCHIVE_LEVELDB_HASH_SNAPSHOT_CACHE:-400}
- --execution.caching.trie-clean-cache=${CONNEXT_SEPOLIA_NITRO_ARCHIVE_LEVELDB_HASH_TRIE_CLEAN_CACHE:-600}
- --execution.caching.trie-dirty-cache=${CONNEXT_SEPOLIA_NITRO_ARCHIVE_LEVELDB_HASH_TRIE_DIRTY_CACHE:-1024}
- --execution.forwarding-target=https://rpc.connext-sepolia.gelato.digital
- --execution.rpc.gas-cap=5500000000
- --execution.rpc.log-history=0
- --execution.sequencer.enable=false
- --http.addr=0.0.0.0
- --http.api=eth,net,web3,arb,txpool,debug
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.download-path=/tmp
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070
- --node.batch-poster.enable=false
- --node.da-provider.enable=false
- --node.data-availability.enable=true
- --node.data-availability.parent-chain-node-url=${ETHEREUM_SEPOLIA_EXECUTION_RPC}
- --node.data-availability.rest-aggregator.enable=true
- --node.data-availability.rest-aggregator.urls=https://das.connext-sepolia.gelato.digital
- --node.data-availability.sequencer-inbox-address=0x727095791318912381473707332248435763608420056676
- --node.feed.input.url=wss://feed.connext-sepolia.gelato.digital
- --node.sequencer=false
- --node.staker.enable=false
- --parent-chain.blob-client.beacon-url=${ETHEREUM_SEPOLIA_BEACON_REST}
- --parent-chain.connection.url=${ETHEREUM_SEPOLIA_EXECUTION_RPC}
- --persistent.chain=/root/.arbitrum/connext-sepolia-archive
- --ws.addr=0.0.0.0
- --ws.origins=*
- --ws.port=8545
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${CONNEXT_SEPOLIA_NITRO_ARCHIVE_LEVELDB_HASH_DATA:-connext-sepolia-nitro-archive-leveldb-hash}:/root/.arbitrum
- ./arb/connext/sepolia:/config
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=true
- prometheus-scrape.port=6070
- prometheus-scrape.path=/debug/metrics/prometheus
- traefik.enable=true
- traefik.http.middlewares.connext-sepolia-nitro-archive-leveldb-hash-stripprefix.stripprefix.prefixes=/connext-sepolia-archive
- traefik.http.services.connext-sepolia-nitro-archive-leveldb-hash.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.connext-sepolia-nitro-archive-leveldb-hash.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.connext-sepolia-nitro-archive-leveldb-hash.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.connext-sepolia-nitro-archive-leveldb-hash.rule=Host(`$DOMAIN`) && (Path(`/connext-sepolia-archive`) || Path(`/connext-sepolia-archive/`))}
- ${NO_SSL:+traefik.http.routers.connext-sepolia-nitro-archive-leveldb-hash.rule=Path(`/connext-sepolia-archive`) || Path(`/connext-sepolia-archive/`)}
- traefik.http.routers.connext-sepolia-nitro-archive-leveldb-hash.middlewares=connext-sepolia-nitro-archive-leveldb-hash-stripprefix, ipallowlist
volumes:
connext-sepolia-nitro-archive-leveldb-hash:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: everclear-sepolia
method-groups:
enabled:
- debug
- filter
methods:
disabled:
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
# standard geth only
- name: debug_getRawBlock
- name: debug_getRawTransaction
- name: debug_getRawReceipts
- name: debug_getRawHeader
- name: debug_getBadBlocks
# non standard geth only slightly dangerous
- name: debug_intermediateRoots
- name: debug_dumpBlock
# standard geth and erigon
- name: debug_accountRange
- name: debug_getModifiedAccountsByNumber
- name: debug_getModifiedAccountsByHash
# non standard geth and erigon
- name: eth_getRawTransactionByHash
- name: eth_getRawTransactionByBlockHashAndIndex
...
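Connext (now Everclear) Sepolia is an AnyTrust chain: batch data is fetched from the DAS REST aggregator at das.connext-sepolia.gelato.digital and live blocks arrive over the sequencer feed at feed.connext-sepolia.gelato.digital, so the node can only follow the chain while those Gelato endpoints are reachable. A minimal reachability check (sketch; the aggregator's exact health path is deployment-specific, any HTTP response is enough here):

```bash
# confirm the DAS REST aggregator answers at all; a 4xx on "/" still proves reachability
curl -sI https://das.connext-sepolia.gelato.digital/ | head -n 1
```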


@@ -1,152 +0,0 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:arb/nitro/connext-sepolia-nitro-pruned-pebble-path.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/connext-sepolia \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
connext-sepolia:
image: ${CONNEXT_NITRO_IMAGE:-offchainlabs/nitro-node}:${CONNEXT_SEPOLIA_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
expose:
- 8545
command:
- --conf.file=/config/baseConfig.json
- --execution.caching.archive=${CONNEXT_SEPOLIA_ARCHIVE_DB:-false}
- --execution.caching.database-cache=${CONNEXT_SEPOLIA_NITRO_PRUNED_PEBBLE_PATH_DATABASE_CACHE:-2048}
- --execution.caching.snapshot-cache=${CONNEXT_SEPOLIA_NITRO_PRUNED_PEBBLE_PATH_SNAPSHOT_CACHE:-400}
- --execution.caching.state-scheme=path
- --execution.caching.trie-clean-cache=${CONNEXT_SEPOLIA_NITRO_PRUNED_PEBBLE_PATH_TRIE_CLEAN_CACHE:-600}
- --execution.caching.trie-dirty-cache=${CONNEXT_SEPOLIA_NITRO_PRUNED_PEBBLE_PATH_TRIE_DIRTY_CACHE:-1024}
- --execution.forwarding-target=https://rpc.connext-sepolia.gelato.digital
- --execution.rpc.gas-cap=5500000000
- --execution.rpc.state-scheme=path
- --execution.sequencer.enable=false
- --http.addr=0.0.0.0
- --http.api=eth,net,web3,arb,txpool,debug
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.download-path=/tmp
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070
- --node.batch-poster.enable=false
- --node.da-provider.enable=false
- --node.data-availability.enable=true
- --node.data-availability.parent-chain-node-url=${ETHEREUM_SEPOLIA_EXECUTION_RPC}
- --node.data-availability.rest-aggregator.enable=true
- --node.data-availability.rest-aggregator.urls=https://das.connext-sepolia.gelato.digital
- --node.data-availability.sequencer-inbox-address=0x727095791318912381473707332248435763608420056676
- --node.feed.input.url=wss://feed.connext-sepolia.gelato.digital
- --node.sequencer=false
- --node.staker.enable=false
- --parent-chain.blob-client.beacon-url=${ETHEREUM_SEPOLIA_BEACON_REST}
- --parent-chain.connection.url=${ETHEREUM_SEPOLIA_EXECUTION_RPC}
- --persistent.chain=/root/.arbitrum/connext-sepolia
- --persistent.db-engine=pebble
- --ws.addr=0.0.0.0
- --ws.origins=*
- --ws.port=8545
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${CONNEXT_SEPOLIA_NITRO_PRUNED_PEBBLE_PATH_DATA:-connext-sepolia-nitro-pruned-pebble-path}:/root/.arbitrum
- ./arb/connext/sepolia:/config
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=true
- prometheus-scrape.port=6070
- prometheus-scrape.path=/debug/metrics/prometheus
- traefik.enable=true
- traefik.http.middlewares.connext-sepolia-nitro-pruned-pebble-path-stripprefix.stripprefix.prefixes=/connext-sepolia
- traefik.http.services.connext-sepolia-nitro-pruned-pebble-path.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.connext-sepolia-nitro-pruned-pebble-path.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.connext-sepolia-nitro-pruned-pebble-path.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.connext-sepolia-nitro-pruned-pebble-path.rule=Host(`$DOMAIN`) && (Path(`/connext-sepolia`) || Path(`/connext-sepolia/`))}
- ${NO_SSL:+traefik.http.routers.connext-sepolia-nitro-pruned-pebble-path.rule=Path(`/connext-sepolia`) || Path(`/connext-sepolia/`)}
- traefik.http.routers.connext-sepolia-nitro-pruned-pebble-path.middlewares=connext-sepolia-nitro-pruned-pebble-path-stripprefix, ipallowlist
volumes:
connext-sepolia-nitro-pruned-pebble-path:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: everclear-sepolia
method-groups:
enabled:
- debug
- filter
methods:
disabled:
# not compatible with path state scheme
- name: debug_traceBlockByHash
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
# standard geth only
- name: debug_getRawBlock
- name: debug_getRawTransaction
- name: debug_getRawReceipts
- name: debug_getRawHeader
- name: debug_getBadBlocks
# non standard geth only slightly dangerous
- name: debug_intermediateRoots
- name: debug_dumpBlock
# standard geth and erigon
- name: debug_accountRange
- name: debug_getModifiedAccountsByNumber
- name: debug_getModifiedAccountsByHash
# non standard geth and erigon
- name: eth_getRawTransactionByHash
- name: eth_getRawTransactionByBlockHashAndIndex
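# The x-upstreams block is ignored by Docker Compose (top-level x-* keys are extension fields)
# and is presumably consumed by a separate upstream-provisioning step. The doubled dollar signs
# ($${ID}, $${RPC_URL}, ...) escape Compose interpolation so the literal ${ID}, ${RPC_URL}, ...
# placeholders survive for that later templating pass.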
...

View File

@@ -1,148 +0,0 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:arb/nitro/everclear-mainnet-nitro-archive-leveldb-hash.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/everclear-mainnet-archive \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
everclear-mainnet-archive:
image: ${EVERCLEAR_NITRO_IMAGE:-offchainlabs/nitro-node}:${EVERCLEAR_MAINNET_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
expose:
- 8545
command:
- --conf.file=/config/baseConfig.json
- --execution.caching.archive=true
- --execution.caching.database-cache=${EVERCLEAR_MAINNET_NITRO_ARCHIVE_LEVELDB_HASH_DATABASE_CACHE:-2048}
- --execution.caching.snapshot-cache=${EVERCLEAR_MAINNET_NITRO_ARCHIVE_LEVELDB_HASH_SNAPSHOT_CACHE:-400}
- --execution.caching.trie-clean-cache=${EVERCLEAR_MAINNET_NITRO_ARCHIVE_LEVELDB_HASH_TRIE_CLEAN_CACHE:-600}
- --execution.caching.trie-dirty-cache=${EVERCLEAR_MAINNET_NITRO_ARCHIVE_LEVELDB_HASH_TRIE_DIRTY_CACHE:-1024}
- --execution.forwarding-target=https://rpc.everclear.raas.gelato.cloud
- --execution.rpc.gas-cap=5500000000
- --execution.rpc.log-history=0
- --execution.sequencer.enable=false
- --http.addr=0.0.0.0
- --http.api=eth,net,web3,arb,txpool,debug
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.download-path=/tmp
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070
- --node.batch-poster.enable=false
- --node.da-provider.enable=false
- --node.data-availability.enable=true
- --node.data-availability.parent-chain-node-url=${ETHEREUM_MAINNET_EXECUTION_RPC}
- --node.data-availability.rest-aggregator.enable=true
- --node.data-availability.rest-aggregator.urls=https://das.everclear.raas.gelato.cloud
- --node.data-availability.sequencer-inbox-address=0x727095791318912381473707332248435763608420056676
- --node.feed.input.url=wss://feed.everclear.raas.gelato.cloud
- --node.sequencer=false
- --node.staker.enable=false
- --parent-chain.blob-client.beacon-url=${ETHEREUM_MAINNET_BEACON_REST}
- --parent-chain.connection.url=${ETHEREUM_MAINNET_EXECUTION_RPC}
- --persistent.chain=/root/.arbitrum/everclear-mainnet-archive
- --ws.addr=0.0.0.0
- --ws.origins=*
- --ws.port=8545
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${EVERCLEAR_MAINNET_NITRO_ARCHIVE_LEVELDB_HASH_DATA:-everclear-mainnet-nitro-archive-leveldb-hash}:/root/.arbitrum
- ./arb/everclear/mainnet:/config
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=true
- prometheus-scrape.port=6070
- prometheus-scrape.path=/debug/metrics/prometheus
- traefik.enable=true
- traefik.http.middlewares.everclear-mainnet-nitro-archive-leveldb-hash-stripprefix.stripprefix.prefixes=/everclear-mainnet-archive
- traefik.http.services.everclear-mainnet-nitro-archive-leveldb-hash.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.everclear-mainnet-nitro-archive-leveldb-hash.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.everclear-mainnet-nitro-archive-leveldb-hash.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.everclear-mainnet-nitro-archive-leveldb-hash.rule=Host(`$DOMAIN`) && (Path(`/everclear-mainnet-archive`) || Path(`/everclear-mainnet-archive/`))}
- ${NO_SSL:+traefik.http.routers.everclear-mainnet-nitro-archive-leveldb-hash.rule=Path(`/everclear-mainnet-archive`) || Path(`/everclear-mainnet-archive/`)}
- traefik.http.routers.everclear-mainnet-nitro-archive-leveldb-hash.middlewares=everclear-mainnet-nitro-archive-leveldb-hash-stripprefix, ipallowlist
volumes:
everclear-mainnet-nitro-archive-leveldb-hash:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: everclear
method-groups:
enabled:
- debug
- filter
methods:
disabled:
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
# standard geth only
- name: debug_getRawBlock
- name: debug_getRawTransaction
- name: debug_getRawReceipts
- name: debug_getRawHeader
- name: debug_getBadBlocks
# non standard geth only slightly dangerous
- name: debug_intermediateRoots
- name: debug_dumpBlock
# standard geth and erigon
- name: debug_accountRange
- name: debug_getModifiedAccountsByNumber
- name: debug_getModifiedAccountsByHash
# non standard geth and erigon
- name: eth_getRawTransactionByHash
- name: eth_getRawTransactionByBlockHashAndIndex
...

View File

@@ -1,152 +0,0 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:arb/nitro/everclear-mainnet-nitro-pruned-pebble-path.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/everclear-mainnet \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
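#
# Once the node is serving, sync progress can be checked the same way; eth_syncing returns
# false when the node is caught up:
#
# curl -X POST https://${IP}.traefik.me/everclear-mainnet \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_syncing","params":[],"id":1}'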
services:
everclear-mainnet:
image: ${EVERCLEAR_NITRO_IMAGE:-offchainlabs/nitro-node}:${EVERCLEAR_MAINNET_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
expose:
- 8545
command:
- --conf.file=/config/baseConfig.json
- --execution.caching.archive=${EVERCLEAR_MAINNET_ARCHIVE_DB:-false}
- --execution.caching.database-cache=${EVERCLEAR_MAINNET_NITRO_PRUNED_PEBBLE_PATH_DATABASE_CACHE:-2048}
- --execution.caching.snapshot-cache=${EVERCLEAR_MAINNET_NITRO_PRUNED_PEBBLE_PATH_SNAPSHOT_CACHE:-400}
- --execution.caching.state-scheme=path
- --execution.caching.trie-clean-cache=${EVERCLEAR_MAINNET_NITRO_PRUNED_PEBBLE_PATH_TRIE_CLEAN_CACHE:-600}
- --execution.caching.trie-dirty-cache=${EVERCLEAR_MAINNET_NITRO_PRUNED_PEBBLE_PATH_TRIE_DIRTY_CACHE:-1024}
- --execution.forwarding-target=https://rpc.everclear.raas.gelato.cloud
- --execution.rpc.gas-cap=5500000000
- --execution.rpc.state-scheme=path
- --execution.sequencer.enable=false
- --http.addr=0.0.0.0
- --http.api=eth,net,web3,arb,txpool,debug
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.download-path=/tmp
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070
- --node.batch-poster.enable=false
- --node.da-provider.enable=false
- --node.data-availability.enable=true
- --node.data-availability.parent-chain-node-url=${ETHEREUM_MAINNET_EXECUTION_RPC}
- --node.data-availability.rest-aggregator.enable=true
- --node.data-availability.rest-aggregator.urls=https://das.everclear.raas.gelato.cloud
- --node.data-availability.sequencer-inbox-address=0x727095791318912381473707332248435763608420056676
- --node.feed.input.url=wss://feed.everclear.raas.gelato.cloud
- --node.sequencer=false
- --node.staker.enable=false
- --parent-chain.blob-client.beacon-url=${ETHEREUM_MAINNET_BEACON_REST}
- --parent-chain.connection.url=${ETHEREUM_MAINNET_EXECUTION_RPC}
- --persistent.chain=/root/.arbitrum/everclear-mainnet
- --persistent.db-engine=pebble
- --ws.addr=0.0.0.0
- --ws.origins=*
- --ws.port=8545
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${EVERCLEAR_MAINNET_NITRO_PRUNED_PEBBLE_PATH_DATA:-everclear-mainnet-nitro-pruned-pebble-path}:/root/.arbitrum
- ./arb/everclear/mainnet:/config
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=true
- prometheus-scrape.port=6070
- prometheus-scrape.path=/debug/metrics/prometheus
- traefik.enable=true
- traefik.http.middlewares.everclear-mainnet-nitro-pruned-pebble-path-stripprefix.stripprefix.prefixes=/everclear-mainnet
- traefik.http.services.everclear-mainnet-nitro-pruned-pebble-path.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.everclear-mainnet-nitro-pruned-pebble-path.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.everclear-mainnet-nitro-pruned-pebble-path.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.everclear-mainnet-nitro-pruned-pebble-path.rule=Host(`$DOMAIN`) && (Path(`/everclear-mainnet`) || Path(`/everclear-mainnet/`))}
- ${NO_SSL:+traefik.http.routers.everclear-mainnet-nitro-pruned-pebble-path.rule=Path(`/everclear-mainnet`) || Path(`/everclear-mainnet/`)}
- traefik.http.routers.everclear-mainnet-nitro-pruned-pebble-path.middlewares=everclear-mainnet-nitro-pruned-pebble-path-stripprefix, ipallowlist
volumes:
everclear-mainnet-nitro-pruned-pebble-path:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: everclear
method-groups:
enabled:
- debug
- filter
methods:
disabled:
# not compatible with path state scheme
- name: debug_traceBlockByHash
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
# standard geth only
- name: debug_getRawBlock
- name: debug_getRawTransaction
- name: debug_getRawReceipts
- name: debug_getRawHeader
- name: debug_getBadBlocks
# non standard geth only slightly dangerous
- name: debug_intermediateRoots
- name: debug_dumpBlock
# standard geth and erigon
- name: debug_accountRange
- name: debug_getModifiedAccountsByNumber
- name: debug_getModifiedAccountsByHash
# non standard geth and erigon
- name: eth_getRawTransactionByHash
- name: eth_getRawTransactionByBlockHashAndIndex
...

View File

@@ -1,147 +0,0 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:arb/nitro/opencampuscodex-sepolia-nitro-archive-leveldb-hash.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/opencampuscodex-sepolia-archive \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
opencampuscodex-sepolia-archive:
image: ${OPENCAMPUSCODEX_NITRO_IMAGE:-offchainlabs/nitro-node}:${OPENCAMPUSCODEX_SEPOLIA_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
expose:
- 8545
command:
- --conf.file=/config/baseConfig.json
- --execution.caching.archive=true
- --execution.caching.database-cache=${OPENCAMPUSCODEX_SEPOLIA_NITRO_ARCHIVE_LEVELDB_HASH_DATABASE_CACHE:-2048}
- --execution.caching.snapshot-cache=${OPENCAMPUSCODEX_SEPOLIA_NITRO_ARCHIVE_LEVELDB_HASH_SNAPSHOT_CACHE:-400}
- --execution.caching.trie-clean-cache=${OPENCAMPUSCODEX_SEPOLIA_NITRO_ARCHIVE_LEVELDB_HASH_TRIE_CLEAN_CACHE:-600}
- --execution.caching.trie-dirty-cache=${OPENCAMPUSCODEX_SEPOLIA_NITRO_ARCHIVE_LEVELDB_HASH_TRIE_DIRTY_CACHE:-1024}
- --execution.forwarding-target=https://rpc.open-campus-codex.gelato.digital
- --execution.rpc.gas-cap=5500000000
- --execution.rpc.log-history=0
- --execution.sequencer.enable=false
- --http.addr=0.0.0.0
- --http.api=eth,net,web3,arb,txpool,debug
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.download-path=/tmp
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070
- --node.batch-poster.enable=false
- --node.da-provider.enable=false
- --node.data-availability.enable=true
- --node.data-availability.parent-chain-node-url=${ARBITRUM_SEPOLIA_EXECUTION_RPC}
- --node.data-availability.rest-aggregator.enable=true
- --node.data-availability.rest-aggregator.urls=https://das.open-campus-codex.gelato.digital
- --node.data-availability.sequencer-inbox-address=0xe347C1223381b9Dcd6c0F61cf81c90175A7Bae77
- --node.feed.input.url=wss://feed.open-campus-codex.gelato.digital
- --node.sequencer=false
- --node.staker.enable=false
- --parent-chain.connection.url=${ARBITRUM_SEPOLIA_EXECUTION_RPC}
- --persistent.chain=/root/.arbitrum/opencampuscodex-sepolia-archive
- --ws.addr=0.0.0.0
- --ws.origins=*
- --ws.port=8545
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${OPENCAMPUSCODEX_SEPOLIA_NITRO_ARCHIVE_LEVELDB_HASH_DATA:-opencampuscodex-sepolia-nitro-archive-leveldb-hash}:/root/.arbitrum
- ./arb/opencampuscodex/arbitrum-sepolia:/config
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=true
- prometheus-scrape.port=6070
- prometheus-scrape.path=/debug/metrics/prometheus
- traefik.enable=true
- traefik.http.middlewares.opencampuscodex-sepolia-nitro-archive-leveldb-hash-stripprefix.stripprefix.prefixes=/opencampuscodex-sepolia-archive
- traefik.http.services.opencampuscodex-sepolia-nitro-archive-leveldb-hash.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.opencampuscodex-sepolia-nitro-archive-leveldb-hash.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.opencampuscodex-sepolia-nitro-archive-leveldb-hash.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.opencampuscodex-sepolia-nitro-archive-leveldb-hash.rule=Host(`$DOMAIN`) && (Path(`/opencampuscodex-sepolia-archive`) || Path(`/opencampuscodex-sepolia-archive/`))}
- ${NO_SSL:+traefik.http.routers.opencampuscodex-sepolia-nitro-archive-leveldb-hash.rule=Path(`/opencampuscodex-sepolia-archive`) || Path(`/opencampuscodex-sepolia-archive/`)}
- traefik.http.routers.opencampuscodex-sepolia-nitro-archive-leveldb-hash.middlewares=opencampuscodex-sepolia-nitro-archive-leveldb-hash-stripprefix, ipallowlist
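      # The prometheus-scrape.* labels advertise the metrics endpoint enabled by --metrics above;
      # the actual scraping is assumed to be wired up in the shared base/rpc compose files. From
      # another container on the chains network it can be checked directly, e.g.:
      #   curl http://opencampuscodex-sepolia-archive:6070/debug/metrics/prometheus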
volumes:
opencampuscodex-sepolia-nitro-archive-leveldb-hash:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: open-campus-codex-sepolia
method-groups:
enabled:
- debug
- filter
methods:
disabled:
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
# standard geth only
- name: debug_getRawBlock
- name: debug_getRawTransaction
- name: debug_getRawReceipts
- name: debug_getRawHeader
- name: debug_getBadBlocks
# non standard geth only slightly dangerous
- name: debug_intermediateRoots
- name: debug_dumpBlock
# standard geth and erigon
- name: debug_accountRange
- name: debug_getModifiedAccountsByNumber
- name: debug_getModifiedAccountsByHash
# non standard geth and erigon
- name: eth_getRawTransactionByHash
- name: eth_getRawTransactionByBlockHashAndIndex
...

View File

@@ -1,120 +0,0 @@
# use at your own risk
services:
opencampuscodex-sepolia:
image: ${OPENCAMPUSCODEX_NITRO_IMAGE:-offchainlabs/nitro-node}:${OPENCAMPUSCODEX_SEPOLIA_NITRO_VERSION:-v3.5.3-0a9c975}
user: root
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
expose:
- 8545
- 8551
ports:
- 10938:10938
- 10938:10938/udp
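    # 10938/tcp+udp is published on the host for P2P traffic and advertised via --nat=extip:${IP}
    # below, so IP must be the machine's public address; the exposed ports (8545, 8551) are not
    # published and are reachable only from the chains network or through Traefik.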
volumes:
- ${OPENCAMPUSCODEX_SEPOLIA_NITRO_PRUNED_PEBBLE_HASH_DATA:-opencampuscodex-sepolia-nitro-pruned-pebble-hash}:/root/.arbitrum
- /slowdisk:/slowdisk
- .jwtsecret:/jwtsecret:ro
      - ./tmp/opencampuscodex-sepolia:/tmp
      - ./arb/opencampuscodex/arbitrum-sepolia:/config # required by --conf.file=/config/baseConfig.json
command:
- --datadir=/root/.arbitrum
- --port=10938
- --bind=0.0.0.0
- --nat=extip:${IP}
- --http
- --http.port=8545
- --http.vhosts=*
- --ws
- --ws.port=8545
- --ws.origins=*
- --ws.addr=0.0.0.0
- --http.addr=0.0.0.0
- --maxpeers=50
- --http.api=eth,net,web3,arb,txpool,debug
- --ws.api=eth,net,web3,arb,txpool,debug
- --rpc.gascap=600000000
- --rpc.returndatalimit=10000000
- --rpc.txfeecap=0
- --execution.caching.state-scheme=hash
- --execution.rpc.gas-cap=600000000
- --execution.caching.archive=false
- --execution.sequencer.enable=false
- --persistent.db-engine=pebble
- --persistent.chain=/root/.arbitrum/opencampuscodex-sepolia
- --conf.file=/config/baseConfig.json
- --node.sequencer=false
- --node.staker.enable=false
- --node.batch-poster.enable=false
- --node.data-availability.enable=true
- --node.data-availability.sequencer-inbox-address=0xe347C1223381b9Dcd6c0F61cf81c90175A7Bae77
- --node.data-availability.parent-chain-node-url=${ARBITRUM_SEPOLIA_EXECUTION_RPC}
- --node.data-availability.rest-aggregator.enable=true
- --node.data-availability.rest-aggregator.urls=https://das.open-campus-codex.gelato.digital
- --node.feed.input.url=wss://feed.open-campus-codex.gelato.digital
- --execution.forwarding-target=https://rpc.open-campus-codex.gelato.digital
- --parent-chain.connection.url=${ARBITRUM_SEPOLIA_EXECUTION_RPC}
networks:
- chains
restart: unless-stopped
stop_grace_period: 5m
labels:
- traefik.enable=true
- traefik.http.middlewares.opencampuscodex-sepolia-nitro-pruned-pebble-hash-stripprefix.stripprefix.prefixes=/opencampuscodex-sepolia
- traefik.http.services.opencampuscodex-sepolia-nitro-pruned-pebble-hash.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.opencampuscodex-sepolia-nitro-pruned-pebble-hash.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.opencampuscodex-sepolia-nitro-pruned-pebble-hash.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.opencampuscodex-sepolia-nitro-pruned-pebble-hash.rule=Host(`$DOMAIN`) && PathPrefix(`/opencampuscodex-sepolia`)}
- ${NO_SSL:+traefik.http.routers.opencampuscodex-sepolia-nitro-pruned-pebble-hash.rule=PathPrefix(`/opencampuscodex-sepolia`)}
      - traefik.http.routers.opencampuscodex-sepolia-nitro-pruned-pebble-hash.middlewares=opencampuscodex-sepolia-nitro-pruned-pebble-hash-stripprefix, ipallowlist
volumes:
opencampuscodex-sepolia-nitro-pruned-pebble-hash:
x-upstreams:
- chain: open-campus-codex-sepolia
method-groups:
enabled:
- debug
- filter
methods:
disabled:
enabled:
# standard geth only
- name: debug_getRawBlock
- name: debug_getRawTransaction
- name: debug_getRawReceipts
- name: debug_getRawHeader
- name: debug_getBadBlocks
# non standard geth only slightly dangerous
- name: debug_intermediateRoots
- name: debug_dumpBlock
# standard geth and erigon
- name: debug_accountRange
- name: debug_getModifiedAccountsByNumber
- name: debug_getModifiedAccountsByHash
# non standard geth and erigon
- name: eth_getRawTransactionByHash
- name: eth_getRawTransactionByBlockHashAndIndex

View File

@@ -1,151 +0,0 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:arb/nitro/opencampuscodex-sepolia-nitro-pruned-pebble-path.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/opencampuscodex-sepolia \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
opencampuscodex-sepolia:
image: ${OPENCAMPUSCODEX_NITRO_IMAGE:-offchainlabs/nitro-node}:${OPENCAMPUSCODEX_SEPOLIA_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
expose:
- 8545
command:
- --conf.file=/config/baseConfig.json
- --execution.caching.archive=${OPENCAMPUSCODEX_SEPOLIA_ARCHIVE_DB:-false}
- --execution.caching.database-cache=${OPENCAMPUSCODEX_SEPOLIA_NITRO_PRUNED_PEBBLE_PATH_DATABASE_CACHE:-2048}
- --execution.caching.snapshot-cache=${OPENCAMPUSCODEX_SEPOLIA_NITRO_PRUNED_PEBBLE_PATH_SNAPSHOT_CACHE:-400}
- --execution.caching.state-scheme=path
- --execution.caching.trie-clean-cache=${OPENCAMPUSCODEX_SEPOLIA_NITRO_PRUNED_PEBBLE_PATH_TRIE_CLEAN_CACHE:-600}
- --execution.caching.trie-dirty-cache=${OPENCAMPUSCODEX_SEPOLIA_NITRO_PRUNED_PEBBLE_PATH_TRIE_DIRTY_CACHE:-1024}
- --execution.forwarding-target=https://rpc.open-campus-codex.gelato.digital
- --execution.rpc.gas-cap=5500000000
- --execution.rpc.state-scheme=path
- --execution.sequencer.enable=false
- --http.addr=0.0.0.0
- --http.api=eth,net,web3,arb,txpool,debug
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.download-path=/tmp
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070
- --node.batch-poster.enable=false
- --node.da-provider.enable=false
- --node.data-availability.enable=true
- --node.data-availability.parent-chain-node-url=${ARBITRUM_SEPOLIA_EXECUTION_RPC}
- --node.data-availability.rest-aggregator.enable=true
- --node.data-availability.rest-aggregator.urls=https://das.open-campus-codex.gelato.digital
- --node.data-availability.sequencer-inbox-address=0xe347C1223381b9Dcd6c0F61cf81c90175A7Bae77
- --node.feed.input.url=wss://feed.open-campus-codex.gelato.digital
- --node.sequencer=false
- --node.staker.enable=false
- --parent-chain.connection.url=${ARBITRUM_SEPOLIA_EXECUTION_RPC}
- --persistent.chain=/root/.arbitrum/opencampuscodex-sepolia
- --persistent.db-engine=pebble
- --ws.addr=0.0.0.0
- --ws.origins=*
- --ws.port=8545
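      # This variant pairs --persistent.db-engine=pebble with the path state scheme
      # (--execution.caching.state-scheme / --execution.rpc.state-scheme above) for a pruned node.
      # Setting OPENCAMPUSCODEX_SEPOLIA_ARCHIVE_DB=true only flips --execution.caching.archive;
      # the separate *-archive-leveldb-hash file is the dedicated archive variant.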
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${OPENCAMPUSCODEX_SEPOLIA_NITRO_PRUNED_PEBBLE_PATH_DATA:-opencampuscodex-sepolia-nitro-pruned-pebble-path}:/root/.arbitrum
- ./arb/opencampuscodex/arbitrum-sepolia:/config
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=true
- prometheus-scrape.port=6070
- prometheus-scrape.path=/debug/metrics/prometheus
- traefik.enable=true
- traefik.http.middlewares.opencampuscodex-sepolia-nitro-pruned-pebble-path-stripprefix.stripprefix.prefixes=/opencampuscodex-sepolia
- traefik.http.services.opencampuscodex-sepolia-nitro-pruned-pebble-path.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.opencampuscodex-sepolia-nitro-pruned-pebble-path.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.opencampuscodex-sepolia-nitro-pruned-pebble-path.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.opencampuscodex-sepolia-nitro-pruned-pebble-path.rule=Host(`$DOMAIN`) && (Path(`/opencampuscodex-sepolia`) || Path(`/opencampuscodex-sepolia/`))}
- ${NO_SSL:+traefik.http.routers.opencampuscodex-sepolia-nitro-pruned-pebble-path.rule=Path(`/opencampuscodex-sepolia`) || Path(`/opencampuscodex-sepolia/`)}
- traefik.http.routers.opencampuscodex-sepolia-nitro-pruned-pebble-path.middlewares=opencampuscodex-sepolia-nitro-pruned-pebble-path-stripprefix, ipallowlist
volumes:
opencampuscodex-sepolia-nitro-pruned-pebble-path:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: open-campus-codex-sepolia
method-groups:
enabled:
- debug
- filter
methods:
disabled:
# not compatible with path state scheme
- name: debug_traceBlockByHash
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
# standard geth only
- name: debug_getRawBlock
- name: debug_getRawTransaction
- name: debug_getRawReceipts
- name: debug_getRawHeader
- name: debug_getBadBlocks
# non standard geth only slightly dangerous
- name: debug_intermediateRoots
- name: debug_dumpBlock
# standard geth and erigon
- name: debug_accountRange
- name: debug_getModifiedAccountsByNumber
- name: debug_getModifiedAccountsByHash
# non standard geth and erigon
- name: eth_getRawTransactionByHash
- name: eth_getRawTransactionByBlockHashAndIndex
...

View File

@@ -1,147 +0,0 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:arb/nitro/playblock-mainnet-nitro-archive-leveldb-hash.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/playblock-mainnet-archive \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
playblock-mainnet-archive:
image: ${PLAYBLOCK_NITRO_IMAGE:-offchainlabs/nitro-node}:${PLAYBLOCK_MAINNET_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
expose:
- 8545
command:
- --conf.file=/config/baseConfig.json
- --execution.caching.archive=true
- --execution.caching.database-cache=${PLAYBLOCK_MAINNET_NITRO_ARCHIVE_LEVELDB_HASH_DATABASE_CACHE:-2048}
- --execution.caching.snapshot-cache=${PLAYBLOCK_MAINNET_NITRO_ARCHIVE_LEVELDB_HASH_SNAPSHOT_CACHE:-400}
- --execution.caching.trie-clean-cache=${PLAYBLOCK_MAINNET_NITRO_ARCHIVE_LEVELDB_HASH_TRIE_CLEAN_CACHE:-600}
- --execution.caching.trie-dirty-cache=${PLAYBLOCK_MAINNET_NITRO_ARCHIVE_LEVELDB_HASH_TRIE_DIRTY_CACHE:-1024}
- --execution.forwarding-target=https://rpc.playblock.io
- --execution.rpc.gas-cap=5500000000
- --execution.rpc.log-history=0
- --execution.sequencer.enable=false
- --http.addr=0.0.0.0
- --http.api=eth,net,web3,arb,txpool,debug
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.download-path=/tmp
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070
- --node.batch-poster.enable=false
- --node.da-provider.enable=false
- --node.data-availability.enable=true
- --node.data-availability.parent-chain-node-url=${ARBITRUM_NOVA_EXECUTION_RPC}
- --node.data-availability.rest-aggregator.enable=true
- --node.data-availability.rest-aggregator.urls=https://das.playblock.io
- --node.data-availability.sequencer-inbox-address=0x1297541082195356755105700451499873350464260779639
- --node.feed.input.url=wss://feed.playblock.io
- --node.sequencer=false
- --node.staker.enable=false
- --parent-chain.connection.url=${ARBITRUM_NOVA_EXECUTION_RPC}
- --persistent.chain=/root/.arbitrum/playblock-mainnet-archive
- --ws.addr=0.0.0.0
- --ws.origins=*
- --ws.port=8545
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${PLAYBLOCK_MAINNET_NITRO_ARCHIVE_LEVELDB_HASH_DATA:-playblock-mainnet-nitro-archive-leveldb-hash}:/root/.arbitrum
- ./arb/playblock/arbitrum-nova:/config
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=true
- prometheus-scrape.port=6070
- prometheus-scrape.path=/debug/metrics/prometheus
- traefik.enable=true
- traefik.http.middlewares.playblock-mainnet-nitro-archive-leveldb-hash-stripprefix.stripprefix.prefixes=/playblock-mainnet-archive
- traefik.http.services.playblock-mainnet-nitro-archive-leveldb-hash.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.playblock-mainnet-nitro-archive-leveldb-hash.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.playblock-mainnet-nitro-archive-leveldb-hash.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.playblock-mainnet-nitro-archive-leveldb-hash.rule=Host(`$DOMAIN`) && (Path(`/playblock-mainnet-archive`) || Path(`/playblock-mainnet-archive/`))}
- ${NO_SSL:+traefik.http.routers.playblock-mainnet-nitro-archive-leveldb-hash.rule=Path(`/playblock-mainnet-archive`) || Path(`/playblock-mainnet-archive/`)}
- traefik.http.routers.playblock-mainnet-nitro-archive-leveldb-hash.middlewares=playblock-mainnet-nitro-archive-leveldb-hash-stripprefix, ipallowlist
volumes:
playblock-mainnet-nitro-archive-leveldb-hash:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: playnance
method-groups:
enabled:
- debug
- filter
methods:
disabled:
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
# standard geth only
- name: debug_getRawBlock
- name: debug_getRawTransaction
- name: debug_getRawReceipts
- name: debug_getRawHeader
- name: debug_getBadBlocks
# non standard geth only slightly dangerous
- name: debug_intermediateRoots
- name: debug_dumpBlock
# standard geth and erigon
- name: debug_accountRange
- name: debug_getModifiedAccountsByNumber
- name: debug_getModifiedAccountsByHash
# non standard geth and erigon
- name: eth_getRawTransactionByHash
- name: eth_getRawTransactionByBlockHashAndIndex
...

View File

@@ -1,151 +0,0 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:arb/nitro/playblock-mainnet-nitro-pruned-pebble-path.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/playblock-mainnet \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
playblock-mainnet:
image: ${PLAYBLOCK_NITRO_IMAGE:-offchainlabs/nitro-node}:${PLAYBLOCK_MAINNET_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
expose:
- 8545
command:
- --conf.file=/config/baseConfig.json
- --execution.caching.archive=${PLAYBLOCK_MAINNET_ARCHIVE_DB:-false}
- --execution.caching.database-cache=${PLAYBLOCK_MAINNET_NITRO_PRUNED_PEBBLE_PATH_DATABASE_CACHE:-2048}
- --execution.caching.snapshot-cache=${PLAYBLOCK_MAINNET_NITRO_PRUNED_PEBBLE_PATH_SNAPSHOT_CACHE:-400}
- --execution.caching.state-scheme=path
- --execution.caching.trie-clean-cache=${PLAYBLOCK_MAINNET_NITRO_PRUNED_PEBBLE_PATH_TRIE_CLEAN_CACHE:-600}
- --execution.caching.trie-dirty-cache=${PLAYBLOCK_MAINNET_NITRO_PRUNED_PEBBLE_PATH_TRIE_DIRTY_CACHE:-1024}
- --execution.forwarding-target=https://rpc.playblock.io
- --execution.rpc.gas-cap=5500000000
- --execution.rpc.state-scheme=path
- --execution.sequencer.enable=false
- --http.addr=0.0.0.0
- --http.api=eth,net,web3,arb,txpool,debug
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.download-path=/tmp
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070
- --node.batch-poster.enable=false
- --node.da-provider.enable=false
- --node.data-availability.enable=true
- --node.data-availability.parent-chain-node-url=${ARBITRUM_NOVA_EXECUTION_RPC}
- --node.data-availability.rest-aggregator.enable=true
- --node.data-availability.rest-aggregator.urls=https://das.playblock.io
- --node.data-availability.sequencer-inbox-address=0x1297541082195356755105700451499873350464260779639
- --node.feed.input.url=wss://feed.playblock.io
- --node.sequencer=false
- --node.staker.enable=false
- --parent-chain.connection.url=${ARBITRUM_NOVA_EXECUTION_RPC}
- --persistent.chain=/root/.arbitrum/playblock-mainnet
- --persistent.db-engine=pebble
- --ws.addr=0.0.0.0
- --ws.origins=*
- --ws.port=8545
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${PLAYBLOCK_MAINNET_NITRO_PRUNED_PEBBLE_PATH_DATA:-playblock-mainnet-nitro-pruned-pebble-path}:/root/.arbitrum
- ./arb/playblock/arbitrum-nova:/config
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=true
- prometheus-scrape.port=6070
- prometheus-scrape.path=/debug/metrics/prometheus
- traefik.enable=true
- traefik.http.middlewares.playblock-mainnet-nitro-pruned-pebble-path-stripprefix.stripprefix.prefixes=/playblock-mainnet
- traefik.http.services.playblock-mainnet-nitro-pruned-pebble-path.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.playblock-mainnet-nitro-pruned-pebble-path.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.playblock-mainnet-nitro-pruned-pebble-path.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.playblock-mainnet-nitro-pruned-pebble-path.rule=Host(`$DOMAIN`) && (Path(`/playblock-mainnet`) || Path(`/playblock-mainnet/`))}
- ${NO_SSL:+traefik.http.routers.playblock-mainnet-nitro-pruned-pebble-path.rule=Path(`/playblock-mainnet`) || Path(`/playblock-mainnet/`)}
- traefik.http.routers.playblock-mainnet-nitro-pruned-pebble-path.middlewares=playblock-mainnet-nitro-pruned-pebble-path-stripprefix, ipallowlist
volumes:
playblock-mainnet-nitro-pruned-pebble-path:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: playnance
method-groups:
enabled:
- debug
- filter
methods:
disabled:
# not compatible with path state scheme
- name: debug_traceBlockByHash
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
# standard geth only
- name: debug_getRawBlock
- name: debug_getRawTransaction
- name: debug_getRawReceipts
- name: debug_getRawHeader
- name: debug_getBadBlocks
# non standard geth only slightly dangerous
- name: debug_intermediateRoots
- name: debug_dumpBlock
# standard geth and erigon
- name: debug_accountRange
- name: debug_getModifiedAccountsByNumber
- name: debug_getModifiedAccountsByHash
# non standard geth and erigon
- name: eth_getRawTransactionByHash
- name: eth_getRawTransactionByBlockHashAndIndex
...

View File

@@ -1,148 +0,0 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:arb/nitro/plume-mainnet-nitro-archive-leveldb-hash.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/plume-mainnet-archive \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
plume-mainnet-archive:
image: ${PLUME_NITRO_IMAGE:-offchainlabs/nitro-node}:${PLUME_MAINNET_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
expose:
- 8545
command:
- --conf.file=/config/baseConfig.json
- --execution.caching.archive=true
- --execution.caching.database-cache=${PLUME_MAINNET_NITRO_ARCHIVE_LEVELDB_HASH_DATABASE_CACHE:-2048}
- --execution.caching.snapshot-cache=${PLUME_MAINNET_NITRO_ARCHIVE_LEVELDB_HASH_SNAPSHOT_CACHE:-400}
- --execution.caching.trie-clean-cache=${PLUME_MAINNET_NITRO_ARCHIVE_LEVELDB_HASH_TRIE_CLEAN_CACHE:-600}
- --execution.caching.trie-dirty-cache=${PLUME_MAINNET_NITRO_ARCHIVE_LEVELDB_HASH_TRIE_DIRTY_CACHE:-1024}
- --execution.forwarding-target=https://rpc.plume.org
- --execution.rpc.gas-cap=5500000000
- --execution.rpc.log-history=0
- --execution.sequencer.enable=false
- --http.addr=0.0.0.0
- --http.api=eth,net,web3,arb,txpool,debug
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.download-path=/tmp
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070
- --node.batch-poster.enable=false
- --node.da-provider.enable=false
- --node.data-availability.enable=true
- --node.data-availability.parent-chain-node-url=${ETHEREUM_MAINNET_EXECUTION_RPC}
- --node.data-availability.rest-aggregator.enable=true
- --node.data-availability.rest-aggregator.urls=https://das-plume-mainnet-1.t.conduit.xyz
- --node.data-availability.sequencer-inbox-address=0x85eC1b9138a8b9659A51e2b51bb0861901040b59
- --node.feed.input.url=wss://relay-plume-mainnet-1.t.conduit.xyz
- --node.sequencer=false
- --node.staker.enable=false
- --parent-chain.blob-client.beacon-url=${ETHEREUM_MAINNET_BEACON_REST}
- --parent-chain.connection.url=${ETHEREUM_MAINNET_EXECUTION_RPC}
- --persistent.chain=/root/.arbitrum/plume-mainnet-archive
- --ws.addr=0.0.0.0
- --ws.origins=*
- --ws.port=8545
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${PLUME_MAINNET_NITRO_ARCHIVE_LEVELDB_HASH_DATA:-plume-mainnet-nitro-archive-leveldb-hash}:/root/.arbitrum
- ./arb/plume/mainnet:/config
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=true
- prometheus-scrape.port=6070
- prometheus-scrape.path=/debug/metrics/prometheus
- traefik.enable=true
- traefik.http.middlewares.plume-mainnet-nitro-archive-leveldb-hash-stripprefix.stripprefix.prefixes=/plume-mainnet-archive
- traefik.http.services.plume-mainnet-nitro-archive-leveldb-hash.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.plume-mainnet-nitro-archive-leveldb-hash.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.plume-mainnet-nitro-archive-leveldb-hash.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.plume-mainnet-nitro-archive-leveldb-hash.rule=Host(`$DOMAIN`) && (Path(`/plume-mainnet-archive`) || Path(`/plume-mainnet-archive/`))}
- ${NO_SSL:+traefik.http.routers.plume-mainnet-nitro-archive-leveldb-hash.rule=Path(`/plume-mainnet-archive`) || Path(`/plume-mainnet-archive/`)}
- traefik.http.routers.plume-mainnet-nitro-archive-leveldb-hash.middlewares=plume-mainnet-nitro-archive-leveldb-hash-stripprefix, ipallowlist
volumes:
plume-mainnet-nitro-archive-leveldb-hash:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: plume
method-groups:
enabled:
- debug
- filter
methods:
disabled:
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
# standard geth only
- name: debug_getRawBlock
- name: debug_getRawTransaction
- name: debug_getRawReceipts
- name: debug_getRawHeader
- name: debug_getBadBlocks
# non standard geth only slightly dangerous
- name: debug_intermediateRoots
- name: debug_dumpBlock
# standard geth and erigon
- name: debug_accountRange
- name: debug_getModifiedAccountsByNumber
- name: debug_getModifiedAccountsByHash
# non standard geth and erigon
- name: eth_getRawTransactionByHash
- name: eth_getRawTransactionByBlockHashAndIndex
...

View File

@@ -1,152 +0,0 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:arb/nitro/plume-mainnet-nitro-pruned-pebble-path.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/plume-mainnet \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
plume-mainnet:
image: ${PLUME_NITRO_IMAGE:-offchainlabs/nitro-node}:${PLUME_MAINNET_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
expose:
- 8545
command:
- --conf.file=/config/baseConfig.json
- --execution.caching.archive=${PLUME_MAINNET_ARCHIVE_DB:-false}
- --execution.caching.database-cache=${PLUME_MAINNET_NITRO_PRUNED_PEBBLE_PATH_DATABASE_CACHE:-2048}
- --execution.caching.snapshot-cache=${PLUME_MAINNET_NITRO_PRUNED_PEBBLE_PATH_SNAPSHOT_CACHE:-400}
- --execution.caching.state-scheme=path
- --execution.caching.trie-clean-cache=${PLUME_MAINNET_NITRO_PRUNED_PEBBLE_PATH_TRIE_CLEAN_CACHE:-600}
- --execution.caching.trie-dirty-cache=${PLUME_MAINNET_NITRO_PRUNED_PEBBLE_PATH_TRIE_DIRTY_CACHE:-1024}
- --execution.forwarding-target=https://rpc.plume.org
- --execution.rpc.gas-cap=5500000000
- --execution.rpc.state-scheme=path
- --execution.sequencer.enable=false
- --http.addr=0.0.0.0
- --http.api=eth,net,web3,arb,txpool,debug
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.download-path=/tmp
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070
- --node.batch-poster.enable=false
- --node.da-provider.enable=false
- --node.data-availability.enable=true
- --node.data-availability.parent-chain-node-url=${ETHEREUM_MAINNET_EXECUTION_RPC}
- --node.data-availability.rest-aggregator.enable=true
- --node.data-availability.rest-aggregator.urls=https://das-plume-mainnet-1.t.conduit.xyz
- --node.data-availability.sequencer-inbox-address=0x85eC1b9138a8b9659A51e2b51bb0861901040b59
- --node.feed.input.url=wss://relay-plume-mainnet-1.t.conduit.xyz
- --node.sequencer=false
- --node.staker.enable=false
- --parent-chain.blob-client.beacon-url=${ETHEREUM_MAINNET_BEACON_REST}
- --parent-chain.connection.url=${ETHEREUM_MAINNET_EXECUTION_RPC}
- --persistent.chain=/root/.arbitrum/plume-mainnet
- --persistent.db-engine=pebble
- --ws.addr=0.0.0.0
- --ws.origins=*
- --ws.port=8545
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${PLUME_MAINNET_NITRO_PRUNED_PEBBLE_PATH_DATA:-plume-mainnet-nitro-pruned-pebble-path}:/root/.arbitrum
- ./arb/plume/mainnet:/config
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=true
- prometheus-scrape.port=6070
- prometheus-scrape.path=/debug/metrics/prometheus
- traefik.enable=true
- traefik.http.middlewares.plume-mainnet-nitro-pruned-pebble-path-stripprefix.stripprefix.prefixes=/plume-mainnet
- traefik.http.services.plume-mainnet-nitro-pruned-pebble-path.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.plume-mainnet-nitro-pruned-pebble-path.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.plume-mainnet-nitro-pruned-pebble-path.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.plume-mainnet-nitro-pruned-pebble-path.rule=Host(`$DOMAIN`) && (Path(`/plume-mainnet`) || Path(`/plume-mainnet/`))}
- ${NO_SSL:+traefik.http.routers.plume-mainnet-nitro-pruned-pebble-path.rule=Path(`/plume-mainnet`) || Path(`/plume-mainnet/`)}
- traefik.http.routers.plume-mainnet-nitro-pruned-pebble-path.middlewares=plume-mainnet-nitro-pruned-pebble-path-stripprefix, ipallowlist
volumes:
plume-mainnet-nitro-pruned-pebble-path:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: plume
method-groups:
enabled:
- debug
- filter
methods:
disabled:
# not compatible with path state scheme
- name: debug_traceBlockByHash
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
# standard geth only
- name: debug_getRawBlock
- name: debug_getRawTransaction
- name: debug_getRawReceipts
- name: debug_getRawHeader
- name: debug_getBadBlocks
# non standard geth only slightly dangerous
- name: debug_intermediateRoots
- name: debug_dumpBlock
# standard geth and erigon
- name: debug_accountRange
- name: debug_getModifiedAccountsByNumber
- name: debug_getModifiedAccountsByHash
# non standard geth and erigon
- name: eth_getRawTransactionByHash
- name: eth_getRawTransactionByBlockHashAndIndex
...

View File

@@ -1,148 +0,0 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:arb/nitro/plume-testnet-nitro-archive-leveldb-hash.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/plume-testnet-archive \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
plume-testnet-archive:
image: ${PLUME_NITRO_IMAGE:-offchainlabs/nitro-node}:${PLUME_TESTNET_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
expose:
- 8545
command:
- --conf.file=/config/baseConfig.json
- --execution.caching.archive=true
- --execution.caching.database-cache=${PLUME_TESTNET_NITRO_ARCHIVE_LEVELDB_HASH_DATABASE_CACHE:-2048}
- --execution.caching.snapshot-cache=${PLUME_TESTNET_NITRO_ARCHIVE_LEVELDB_HASH_SNAPSHOT_CACHE:-400}
- --execution.caching.trie-clean-cache=${PLUME_TESTNET_NITRO_ARCHIVE_LEVELDB_HASH_TRIE_CLEAN_CACHE:-600}
- --execution.caching.trie-dirty-cache=${PLUME_TESTNET_NITRO_ARCHIVE_LEVELDB_HASH_TRIE_DIRTY_CACHE:-1024}
- --execution.forwarding-target=https://testnet-rpc.plume.org
- --execution.rpc.gas-cap=5500000000
- --execution.rpc.log-history=0
- --execution.sequencer.enable=false
- --http.addr=0.0.0.0
- --http.api=eth,net,web3,arb,txpool,debug
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.download-path=/tmp
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070
- --node.batch-poster.enable=false
- --node.da-provider.enable=false
- --node.data-availability.enable=true
- --node.data-availability.parent-chain-node-url=${ETHEREUM_SEPOLIA_EXECUTION_RPC}
- --node.data-availability.rest-aggregator.enable=true
- --node.data-availability.rest-aggregator.urls=https://das-plume-testnet-1.t.conduit.xyz
- --node.data-availability.sequencer-inbox-address=0xbCa991f1831bE1F1E7e5576d5F84A645e70F3E4d
- --node.feed.input.url=wss://relay-plume-testnet-1.t.conduit.xyz
- --node.sequencer=false
- --node.staker.enable=false
- --parent-chain.blob-client.beacon-url=${ETHEREUM_SEPOLIA_BEACON_REST}
- --parent-chain.connection.url=${ETHEREUM_SEPOLIA_EXECUTION_RPC}
- --persistent.chain=/root/.arbitrum/plume-testnet-archive
- --ws.addr=0.0.0.0
- --ws.origins=*
- --ws.port=8545
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${PLUME_TESTNET_NITRO_ARCHIVE_LEVELDB_HASH_DATA:-plume-testnet-nitro-archive-leveldb-hash}:/root/.arbitrum
- ./arb/plume/testnet:/config
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=true
- prometheus-scrape.port=6070
- prometheus-scrape.path=/debug/metrics/prometheus
- traefik.enable=true
- traefik.http.middlewares.plume-testnet-nitro-archive-leveldb-hash-stripprefix.stripprefix.prefixes=/plume-testnet-archive
- traefik.http.services.plume-testnet-nitro-archive-leveldb-hash.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.plume-testnet-nitro-archive-leveldb-hash.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.plume-testnet-nitro-archive-leveldb-hash.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.plume-testnet-nitro-archive-leveldb-hash.rule=Host(`$DOMAIN`) && (Path(`/plume-testnet-archive`) || Path(`/plume-testnet-archive/`))}
- ${NO_SSL:+traefik.http.routers.plume-testnet-nitro-archive-leveldb-hash.rule=Path(`/plume-testnet-archive`) || Path(`/plume-testnet-archive/`)}
- traefik.http.routers.plume-testnet-nitro-archive-leveldb-hash.middlewares=plume-testnet-nitro-archive-leveldb-hash-stripprefix, ipallowlist
volumes:
plume-testnet-nitro-archive-leveldb-hash:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: plume-testnet
method-groups:
enabled:
- debug
- filter
methods:
disabled:
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
# standard geth only
- name: debug_getRawBlock
- name: debug_getRawTransaction
- name: debug_getRawReceipts
- name: debug_getRawHeader
- name: debug_getBadBlocks
# non standard geth only slightly dangerous
- name: debug_intermediateRoots
- name: debug_dumpBlock
# standard geth and erigon
- name: debug_accountRange
- name: debug_getModifiedAccountsByNumber
- name: debug_getModifiedAccountsByHash
# non standard geth and erigon
- name: eth_getRawTransactionByHash
- name: eth_getRawTransactionByBlockHashAndIndex
...

View File

@@ -1,152 +0,0 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:arb/nitro/plume-testnet-nitro-pruned-pebble-path.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/plume-testnet \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
plume-testnet:
image: ${PLUME_NITRO_IMAGE:-offchainlabs/nitro-node}:${PLUME_TESTNET_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
expose:
- 8545
command:
- --conf.file=/config/baseConfig.json
- --execution.caching.archive=${PLUME_TESTNET_ARCHIVE_DB:-false}
- --execution.caching.database-cache=${PLUME_TESTNET_NITRO_PRUNED_PEBBLE_PATH_DATABASE_CACHE:-2048}
- --execution.caching.snapshot-cache=${PLUME_TESTNET_NITRO_PRUNED_PEBBLE_PATH_SNAPSHOT_CACHE:-400}
- --execution.caching.state-scheme=path
- --execution.caching.trie-clean-cache=${PLUME_TESTNET_NITRO_PRUNED_PEBBLE_PATH_TRIE_CLEAN_CACHE:-600}
- --execution.caching.trie-dirty-cache=${PLUME_TESTNET_NITRO_PRUNED_PEBBLE_PATH_TRIE_DIRTY_CACHE:-1024}
- --execution.forwarding-target=https://testnet-rpc.plume.org
- --execution.rpc.gas-cap=5500000000
- --execution.rpc.state-scheme=path
- --execution.sequencer.enable=false
- --http.addr=0.0.0.0
- --http.api=eth,net,web3,arb,txpool,debug
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.download-path=/tmp
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070
- --node.batch-poster.enable=false
- --node.da-provider.enable=false
- --node.data-availability.enable=true
- --node.data-availability.parent-chain-node-url=${ETHEREUM_SEPOLIA_EXECUTION_RPC}
- --node.data-availability.rest-aggregator.enable=true
- --node.data-availability.rest-aggregator.urls=https://das-plume-testnet-1.t.conduit.xyz
- --node.data-availability.sequencer-inbox-address=0xbCa991f1831bE1F1E7e5576d5F84A645e70F3E4d
- --node.feed.input.url=wss://relay-plume-testnet-1.t.conduit.xyz
- --node.sequencer=false
- --node.staker.enable=false
- --parent-chain.blob-client.beacon-url=${ETHEREUM_SEPOLIA_BEACON_REST}
- --parent-chain.connection.url=${ETHEREUM_SEPOLIA_EXECUTION_RPC}
- --persistent.chain=/root/.arbitrum/plume-testnet
- --persistent.db-engine=pebble
- --ws.addr=0.0.0.0
- --ws.origins=*
- --ws.port=8545
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${PLUME_TESTNET_NITRO_PRUNED_PEBBLE_PATH_DATA:-plume-testnet-nitro-pruned-pebble-path}:/root/.arbitrum
- ./arb/plume/testnet:/config
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=true
- prometheus-scrape.port=6070
- prometheus-scrape.path=/debug/metrics/prometheus
- traefik.enable=true
- traefik.http.middlewares.plume-testnet-nitro-pruned-pebble-path-stripprefix.stripprefix.prefixes=/plume-testnet
- traefik.http.services.plume-testnet-nitro-pruned-pebble-path.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.plume-testnet-nitro-pruned-pebble-path.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.plume-testnet-nitro-pruned-pebble-path.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.plume-testnet-nitro-pruned-pebble-path.rule=Host(`$DOMAIN`) && (Path(`/plume-testnet`) || Path(`/plume-testnet/`))}
- ${NO_SSL:+traefik.http.routers.plume-testnet-nitro-pruned-pebble-path.rule=Path(`/plume-testnet`) || Path(`/plume-testnet/`)}
- traefik.http.routers.plume-testnet-nitro-pruned-pebble-path.middlewares=plume-testnet-nitro-pruned-pebble-path-stripprefix, ipallowlist
volumes:
plume-testnet-nitro-pruned-pebble-path:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: plume-testnet
method-groups:
enabled:
- debug
- filter
methods:
disabled:
# not compatible with path state scheme
- name: debug_traceBlockByHash
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
# standard geth only
- name: debug_getRawBlock
- name: debug_getRawTransaction
- name: debug_getRawReceipts
- name: debug_getRawHeader
- name: debug_getBadBlocks
        # non-standard, geth only, slightly dangerous
- name: debug_intermediateRoots
- name: debug_dumpBlock
# standard geth and erigon
- name: debug_accountRange
- name: debug_getModifiedAccountsByNumber
- name: debug_getModifiedAccountsByHash
# non standard geth and erigon
- name: eth_getRawTransactionByHash
- name: eth_getRawTransactionByBlockHashAndIndex
...

View File

@@ -1,148 +0,0 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:arb/nitro/real-mainnet-nitro-archive-leveldb-hash.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/real-mainnet-archive \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
real-mainnet-archive:
image: ${REAL_NITRO_IMAGE:-offchainlabs/nitro-node}:${REAL_MAINNET_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
expose:
- 8545
command:
- --conf.file=/config/baseConfig.json
- --execution.caching.archive=true
- --execution.caching.database-cache=${REAL_MAINNET_NITRO_ARCHIVE_LEVELDB_HASH_DATABASE_CACHE:-2048}
- --execution.caching.snapshot-cache=${REAL_MAINNET_NITRO_ARCHIVE_LEVELDB_HASH_SNAPSHOT_CACHE:-400}
- --execution.caching.trie-clean-cache=${REAL_MAINNET_NITRO_ARCHIVE_LEVELDB_HASH_TRIE_CLEAN_CACHE:-600}
- --execution.caching.trie-dirty-cache=${REAL_MAINNET_NITRO_ARCHIVE_LEVELDB_HASH_TRIE_DIRTY_CACHE:-1024}
- --execution.forwarding-target=https://rpc.realforreal.gelato.digital
- --execution.rpc.gas-cap=5500000000
- --execution.rpc.log-history=0
- --execution.sequencer.enable=false
- --http.addr=0.0.0.0
- --http.api=eth,net,web3,arb,txpool,debug
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.download-path=/tmp
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070
- --node.batch-poster.enable=false
- --node.da-provider.enable=false
- --node.data-availability.enable=true
- --node.data-availability.parent-chain-node-url=${ETHEREUM_MAINNET_EXECUTION_RPC}
- --node.data-availability.rest-aggregator.enable=true
- --node.data-availability.rest-aggregator.urls=https://das.realforreal.gelato.digital
- --node.data-availability.sequencer-inbox-address=0x466813324240923703236721233648302990016039913376
- --node.feed.input.url=wss://feed.realforreal.gelato.digital
- --node.sequencer=false
- --node.staker.enable=false
- --parent-chain.blob-client.beacon-url=${ETHEREUM_MAINNET_BEACON_REST}
- --parent-chain.connection.url=${ETHEREUM_MAINNET_EXECUTION_RPC}
- --persistent.chain=/root/.arbitrum/real-mainnet-archive
- --ws.addr=0.0.0.0
- --ws.origins=*
- --ws.port=8545
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${REAL_MAINNET_NITRO_ARCHIVE_LEVELDB_HASH_DATA:-real-mainnet-nitro-archive-leveldb-hash}:/root/.arbitrum
- ./arb/real/mainnet:/config
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=true
- prometheus-scrape.port=6070
- prometheus-scrape.path=/debug/metrics/prometheus
- traefik.enable=true
- traefik.http.middlewares.real-mainnet-nitro-archive-leveldb-hash-stripprefix.stripprefix.prefixes=/real-mainnet-archive
- traefik.http.services.real-mainnet-nitro-archive-leveldb-hash.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.real-mainnet-nitro-archive-leveldb-hash.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.real-mainnet-nitro-archive-leveldb-hash.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.real-mainnet-nitro-archive-leveldb-hash.rule=Host(`$DOMAIN`) && (Path(`/real-mainnet-archive`) || Path(`/real-mainnet-archive/`))}
- ${NO_SSL:+traefik.http.routers.real-mainnet-nitro-archive-leveldb-hash.rule=Path(`/real-mainnet-archive`) || Path(`/real-mainnet-archive/`)}
- traefik.http.routers.real-mainnet-nitro-archive-leveldb-hash.middlewares=real-mainnet-nitro-archive-leveldb-hash-stripprefix, ipallowlist
volumes:
real-mainnet-nitro-archive-leveldb-hash:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: real
method-groups:
enabled:
- debug
- filter
methods:
disabled:
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
# standard geth only
- name: debug_getRawBlock
- name: debug_getRawTransaction
- name: debug_getRawReceipts
- name: debug_getRawHeader
- name: debug_getBadBlocks
        # non-standard, geth only, slightly dangerous
- name: debug_intermediateRoots
- name: debug_dumpBlock
# standard geth and erigon
- name: debug_accountRange
- name: debug_getModifiedAccountsByNumber
- name: debug_getModifiedAccountsByHash
# non standard geth and erigon
- name: eth_getRawTransactionByHash
- name: eth_getRawTransactionByBlockHashAndIndex
...

View File

@@ -1,149 +0,0 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:arb/nitro/real-mainnet-nitro-archive-pebble-hash.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/real-mainnet-archive \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
real-mainnet-archive:
image: ${REAL_NITRO_IMAGE:-offchainlabs/nitro-node}:${REAL_MAINNET_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
expose:
- 8545
command:
- --conf.file=/config/baseConfig.json
- --execution.caching.archive=true
- --execution.caching.database-cache=${REAL_MAINNET_NITRO_ARCHIVE_PEBBLE_HASH_DATABASE_CACHE:-2048}
- --execution.caching.snapshot-cache=${REAL_MAINNET_NITRO_ARCHIVE_PEBBLE_HASH_SNAPSHOT_CACHE:-400}
- --execution.caching.trie-clean-cache=${REAL_MAINNET_NITRO_ARCHIVE_PEBBLE_HASH_TRIE_CLEAN_CACHE:-600}
- --execution.caching.trie-dirty-cache=${REAL_MAINNET_NITRO_ARCHIVE_PEBBLE_HASH_TRIE_DIRTY_CACHE:-1024}
- --execution.forwarding-target=https://rpc.realforreal.gelato.digital
- --execution.rpc.gas-cap=5500000000
- --execution.rpc.log-history=0
- --execution.sequencer.enable=false
- --http.addr=0.0.0.0
- --http.api=eth,net,web3,arb,txpool,debug
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.download-path=/tmp
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070
- --node.batch-poster.enable=false
- --node.da-provider.enable=false
- --node.data-availability.enable=true
- --node.data-availability.parent-chain-node-url=${ETHEREUM_MAINNET_EXECUTION_RPC}
- --node.data-availability.rest-aggregator.enable=true
- --node.data-availability.rest-aggregator.urls=https://das.realforreal.gelato.digital
- --node.data-availability.sequencer-inbox-address=0x466813324240923703236721233648302990016039913376
- --node.feed.input.url=wss://feed.realforreal.gelato.digital
- --node.sequencer=false
- --node.staker.enable=false
- --parent-chain.blob-client.beacon-url=${ETHEREUM_MAINNET_BEACON_REST}
- --parent-chain.connection.url=${ETHEREUM_MAINNET_EXECUTION_RPC}
- --persistent.chain=/root/.arbitrum/real-mainnet-archive
- --persistent.db-engine=pebble
- --ws.addr=0.0.0.0
- --ws.origins=*
- --ws.port=8545
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${REAL_MAINNET_NITRO_ARCHIVE_PEBBLE_HASH_DATA:-real-mainnet-nitro-archive-pebble-hash}:/root/.arbitrum
- ./arb/real/mainnet:/config
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=true
- prometheus-scrape.port=6070
- prometheus-scrape.path=/debug/metrics/prometheus
- traefik.enable=true
- traefik.http.middlewares.real-mainnet-nitro-archive-pebble-hash-stripprefix.stripprefix.prefixes=/real-mainnet-archive
- traefik.http.services.real-mainnet-nitro-archive-pebble-hash.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.real-mainnet-nitro-archive-pebble-hash.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.real-mainnet-nitro-archive-pebble-hash.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.real-mainnet-nitro-archive-pebble-hash.rule=Host(`$DOMAIN`) && (Path(`/real-mainnet-archive`) || Path(`/real-mainnet-archive/`))}
- ${NO_SSL:+traefik.http.routers.real-mainnet-nitro-archive-pebble-hash.rule=Path(`/real-mainnet-archive`) || Path(`/real-mainnet-archive/`)}
- traefik.http.routers.real-mainnet-nitro-archive-pebble-hash.middlewares=real-mainnet-nitro-archive-pebble-hash-stripprefix, ipallowlist
volumes:
real-mainnet-nitro-archive-pebble-hash:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: real
method-groups:
enabled:
- debug
- filter
methods:
disabled:
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
# standard geth only
- name: debug_getRawBlock
- name: debug_getRawTransaction
- name: debug_getRawReceipts
- name: debug_getRawHeader
- name: debug_getBadBlocks
        # non-standard, geth only, slightly dangerous
- name: debug_intermediateRoots
- name: debug_dumpBlock
# standard geth and erigon
- name: debug_accountRange
- name: debug_getModifiedAccountsByNumber
- name: debug_getModifiedAccountsByHash
# non standard geth and erigon
- name: eth_getRawTransactionByHash
- name: eth_getRawTransactionByBlockHashAndIndex
...

View File

@@ -1,152 +0,0 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:arb/nitro/real-mainnet-nitro-pruned-pebble-path.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/real-mainnet \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
real-mainnet:
image: ${REAL_NITRO_IMAGE:-offchainlabs/nitro-node}:${REAL_MAINNET_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
expose:
- 8545
command:
- --conf.file=/config/baseConfig.json
- --execution.caching.archive=${REAL_MAINNET_ARCHIVE_DB:-false}
- --execution.caching.database-cache=${REAL_MAINNET_NITRO_PRUNED_PEBBLE_PATH_DATABASE_CACHE:-2048}
- --execution.caching.snapshot-cache=${REAL_MAINNET_NITRO_PRUNED_PEBBLE_PATH_SNAPSHOT_CACHE:-400}
- --execution.caching.state-scheme=path
- --execution.caching.trie-clean-cache=${REAL_MAINNET_NITRO_PRUNED_PEBBLE_PATH_TRIE_CLEAN_CACHE:-600}
- --execution.caching.trie-dirty-cache=${REAL_MAINNET_NITRO_PRUNED_PEBBLE_PATH_TRIE_DIRTY_CACHE:-1024}
- --execution.forwarding-target=https://rpc.realforreal.gelato.digital
- --execution.rpc.gas-cap=5500000000
- --execution.rpc.state-scheme=path
- --execution.sequencer.enable=false
- --http.addr=0.0.0.0
- --http.api=eth,net,web3,arb,txpool,debug
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.download-path=/tmp
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070
- --node.batch-poster.enable=false
- --node.da-provider.enable=false
- --node.data-availability.enable=true
- --node.data-availability.parent-chain-node-url=${ETHEREUM_MAINNET_EXECUTION_RPC}
- --node.data-availability.rest-aggregator.enable=true
- --node.data-availability.rest-aggregator.urls=https://das.realforreal.gelato.digital
- --node.data-availability.sequencer-inbox-address=0x466813324240923703236721233648302990016039913376
- --node.feed.input.url=wss://feed.realforreal.gelato.digital
- --node.sequencer=false
- --node.staker.enable=false
- --parent-chain.blob-client.beacon-url=${ETHEREUM_MAINNET_BEACON_REST}
- --parent-chain.connection.url=${ETHEREUM_MAINNET_EXECUTION_RPC}
- --persistent.chain=/root/.arbitrum/real-mainnet
- --persistent.db-engine=pebble
- --ws.addr=0.0.0.0
- --ws.origins=*
- --ws.port=8545
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${REAL_MAINNET_NITRO_PRUNED_PEBBLE_PATH_DATA:-real-mainnet-nitro-pruned-pebble-path}:/root/.arbitrum
- ./arb/real/mainnet:/config
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=true
- prometheus-scrape.port=6070
- prometheus-scrape.path=/debug/metrics/prometheus
- traefik.enable=true
- traefik.http.middlewares.real-mainnet-nitro-pruned-pebble-path-stripprefix.stripprefix.prefixes=/real-mainnet
- traefik.http.services.real-mainnet-nitro-pruned-pebble-path.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.real-mainnet-nitro-pruned-pebble-path.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.real-mainnet-nitro-pruned-pebble-path.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.real-mainnet-nitro-pruned-pebble-path.rule=Host(`$DOMAIN`) && (Path(`/real-mainnet`) || Path(`/real-mainnet/`))}
- ${NO_SSL:+traefik.http.routers.real-mainnet-nitro-pruned-pebble-path.rule=Path(`/real-mainnet`) || Path(`/real-mainnet/`)}
- traefik.http.routers.real-mainnet-nitro-pruned-pebble-path.middlewares=real-mainnet-nitro-pruned-pebble-path-stripprefix, ipallowlist
volumes:
real-mainnet-nitro-pruned-pebble-path:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: real
method-groups:
enabled:
- debug
- filter
methods:
disabled:
# not compatible with path state scheme
- name: debug_traceBlockByHash
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
# standard geth only
- name: debug_getRawBlock
- name: debug_getRawTransaction
- name: debug_getRawReceipts
- name: debug_getRawHeader
- name: debug_getBadBlocks
        # non-standard, geth only, slightly dangerous
- name: debug_intermediateRoots
- name: debug_dumpBlock
# standard geth and erigon
- name: debug_accountRange
- name: debug_getModifiedAccountsByNumber
- name: debug_getModifiedAccountsByHash
# non standard geth and erigon
- name: eth_getRawTransactionByHash
- name: eth_getRawTransactionByBlockHashAndIndex
...

View File

@@ -1,6 +0,0 @@
{
"chain": {
"info-json": "[{\"chain-id\":656476,\"parent-chain-id\":421614,\"parent-chain-is-arbitrum\":true,\"chain-name\":\"Codex\",\"chain-config\":{\"homesteadBlock\":0,\"daoForkBlock\":null,\"daoForkSupport\":true,\"eip150Block\":0,\"eip150Hash\":\"0x0000000000000000000000000000000000000000000000000000000000000000\",\"eip155Block\":0,\"eip158Block\":0,\"byzantiumBlock\":0,\"constantinopleBlock\":0,\"petersburgBlock\":0,\"istanbulBlock\":0,\"muirGlacierBlock\":0,\"berlinBlock\":0,\"londonBlock\":0,\"clique\":{\"period\":0,\"epoch\":0},\"arbitrum\":{\"EnableArbOS\":true,\"AllowDebugPrecompiles\":false,\"DataAvailabilityCommittee\":true,\"InitialArbOSVersion\":20,\"GenesisBlockNum\":0,\"MaxCodeSize\":24576,\"MaxInitCodeSize\":49152,\"InitialChainOwner\":\"0xF46B08D9E85df74b6f24Ad85A6a655c02857D5b8\"},\"chainId\":656476},\"rollup\":{\"bridge\":\"0xbf3D64671154D1FB0b27Cb1decbE1094d7016448\",\"inbox\":\"0x67F231eDC83a66556148673863e73D705422A678\",\"sequencer-inbox\":\"0xd5131c1924f080D45CA3Ae97262c0015F675004b\",\"rollup\":\"0x0A94003d3482128c89395aBd94a41DA8eeBB59f7\",\"validator-utils\":\"0xB11EB62DD2B352886A4530A9106fE427844D515f\",\"validator-wallet-creator\":\"0xEb9885B6c0e117D339F47585cC06a2765AaE2E0b\",\"deployed-at\":41549214}}]",
"name": "Codex"
}
}
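
The `info-json` value above packs the whole chain definition into a single escaped JSON string, which is awkward to read in place. A minimal sketch for inspecting it, assuming the file is saved as `baseConfig.json` (the name referenced by `--conf.file` in the Nitro compose files) and that `jq` is available; the same trick applies to the Playblock, Plume, and real configs below:

```bash
# Pretty-print the embedded chain definition; fromjson parses the escaped string
jq '.chain["info-json"] | fromjson' baseConfig.json
```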

View File

@@ -1,6 +0,0 @@
{
"chain": {
"info-json": "[{\"chain-id\":1829,\"parent-chain-id\":42170,\"parent-chain-is-arbitrum\":true,\"chain-name\":\"Playblock\",\"chain-config\":{\"homesteadBlock\":0,\"daoForkBlock\":null,\"daoForkSupport\":true,\"eip150Block\":0,\"eip150Hash\":\"0x0000000000000000000000000000000000000000000000000000000000000000\",\"eip155Block\":0,\"eip158Block\":0,\"byzantiumBlock\":0,\"constantinopleBlock\":0,\"petersburgBlock\":0,\"istanbulBlock\":0,\"muirGlacierBlock\":0,\"berlinBlock\":0,\"londonBlock\":0,\"clique\":{\"period\":0,\"epoch\":0},\"arbitrum\":{\"EnableArbOS\":true,\"AllowDebugPrecompiles\":false,\"DataAvailabilityCommittee\":true,\"InitialArbOSVersion\":11,\"GenesisBlockNum\":0,\"MaxCodeSize\":24576,\"MaxInitCodeSize\":49152,\"InitialChainOwner\":\"0x10Fe3cb853F7ef551E1598d91436e95d41Aea45a\"},\"chainId\":1829},\"rollup\":{\"bridge\":\"0xD4FE46D2533E7d03382ac6cACF0547F336e59DC0\",\"inbox\":\"0xFF55fB76F5671dD9eB6c62EffF8D693Bb161a3ad\",\"sequencer-inbox\":\"0xe347C1223381b9Dcd6c0F61cf81c90175A7Bae77\",\"rollup\":\"0x04ea347cC6A258A7F65D67aFb60B1d487062A1d0\",\"validator-utils\":\"0x6c21303F5986180B1394d2C89f3e883890E2867b\",\"validator-wallet-creator\":\"0x2b0E04Dc90e3fA58165CB41E2834B44A56E766aF\",\"deployed-at\":55663578}}]",
"name": "Playblock"
}
}

View File

@@ -1,5 +0,0 @@
{
"chain": {
"info-json": "[{\"chain-id\":98866,\"parent-chain-id\":1,\"chain-name\":\"conduit-orbit-deployer\",\"chain-config\":{\"chainId\":98866,\"homesteadBlock\":0,\"daoForkBlock\":null,\"daoForkSupport\":true,\"eip150Block\":0,\"eip150Hash\":\"0x0000000000000000000000000000000000000000000000000000000000000000\",\"eip155Block\":0,\"eip158Block\":0,\"byzantiumBlock\":0,\"constantinopleBlock\":0,\"petersburgBlock\":0,\"istanbulBlock\":0,\"muirGlacierBlock\":0,\"berlinBlock\":0,\"londonBlock\":0,\"clique\":{\"period\":0,\"epoch\":0},\"arbitrum\":{\"EnableArbOS\":true,\"AllowDebugPrecompiles\":false,\"DataAvailabilityCommittee\":true,\"InitialArbOSVersion\":32,\"InitialChainOwner\":\"0x5Ec32984332eaB190cA431545664320259D755d8\",\"GenesisBlockNum\":0}},\"rollup\":{\"bridge\":\"0x35381f63091926750F43b2A7401B083263aDEF83\",\"inbox\":\"0x943fc691242291B74B105e8D19bd9E5DC2fcBa1D\",\"sequencer-inbox\":\"0x85eC1b9138a8b9659A51e2b51bb0861901040b59\",\"rollup\":\"0x35c60Cc77b0A8bf6F938B11bd3E9D319a876c2aC\",\"validator-utils\":\"0x84eA2523b271029FFAeB58fc6E6F1435a280db44\",\"validator-wallet-creator\":\"0x0A5eC2286bB15893d5b8f320aAbc823B2186BA09\",\"deployed-at\":21887008}}]"
}
}

View File

@@ -1,6 +0,0 @@
{
"chain": {
"info-json": "[{\"chain-id\":111188,\"parent-chain-id\":1,\"parent-chain-is-arbitrum\":false,\"chain-name\":\"real\",\"chain-config\":{\"homesteadBlock\":0,\"daoForkBlock\":null,\"daoForkSupport\":true,\"eip150Block\":0,\"eip150Hash\":\"0x0000000000000000000000000000000000000000000000000000000000000000\",\"eip155Block\":0,\"eip158Block\":0,\"byzantiumBlock\":0,\"constantinopleBlock\":0,\"petersburgBlock\":0,\"istanbulBlock\":0,\"muirGlacierBlock\":0,\"berlinBlock\":0,\"londonBlock\":0,\"clique\":{\"period\":0,\"epoch\":0},\"arbitrum\":{\"EnableArbOS\":true,\"AllowDebugPrecompiles\":false,\"DataAvailabilityCommittee\":true,\"InitialArbOSVersion\":11,\"GenesisBlockNum\":0,\"MaxCodeSize\":24576,\"MaxInitCodeSize\":49152,\"InitialChainOwner\":\"0xbB0385FebfD25E01527617938129A34bD497331e\"},\"chainId\":111188},\"rollup\":{\"bridge\":\"0x39D2EEcC8B55f46aE64789E2494dE777cDDeED03\",\"inbox\":\"0xf538671ddd60eE54BdD6FBb0E309c491A7A2df11\",\"sequencer-inbox\":\"0x51C4a227D59E49E26Ea07D8e4E9Af163da4c87A0\",\"rollup\":\"0xc4F7B37bE2bBbcF07373F28c61b1A259dfe49d2a\",\"validator-utils\":\"0x2b0E04Dc90e3fA58165CB41E2834B44A56E766aF\",\"validator-wallet-creator\":\"0x9CAd81628aB7D8e239F1A5B497313341578c5F71\",\"deployed-at\":19446518}}]",
"name": "real"
}
}

View File

@@ -1 +0,0 @@
arb/nitro/arbitrum-nova-nitro-pruned-pebble-hash.yml

View File

@@ -1,45 +0,0 @@
services:
arbitrum-classic:
image: 'offchainlabs/arb-node:v1.4.5-e97c1a4'
stop_grace_period: 30s
user: root
volumes:
- ${ARBITRUM_ONE_MAINNET_ARBNODE_ARCHIVE_TRACE_DATA:-arbitrum-one-mainnet-arbnode-archive-trace}:/data
- ./arbitrum/classic-entrypoint.sh:/entrypoint.sh
expose:
- 8547
- 8548
entrypoint: ["/home/user/go/bin/arb-node"]
command:
- --l1.url=http://eth.drpc.org
- --core.checkpoint-gas-frequency=156250000
- --node.rpc.enable-l1-calls
- --node.cache.allow-slow-lookup
- --node.rpc.tracing.enable
- --node.rpc.addr=0.0.0.0
- --node.rpc.port=8547
- --node.rpc.tracing.namespace=trace
- --node.chain-id=42161
- --node.ws.addr=0.0.0.0
- --node.ws.port=8548
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=7070
- --l2.disable-upstream
- --persistent.chain=/data/datadir/
- --persistent.global-config=/data/
restart: unless-stopped
labels:
- "traefik.enable=true"
- "traefik.http.middlewares.arbitrum-one-mainnet-arbnode-archive-trace-stripprefix.stripprefix.prefixes=/arbitrum-classic"
- "traefik.http.services.arbitrum-one-mainnet-arbnode-archive-trace.loadbalancer.server.port=8547"
- "${NO_SSL:-traefik.http.routers.arbitrum-one-mainnet-arbnode-archive-trace.entrypoints=websecure}"
- "${NO_SSL:-traefik.http.routers.arbitrum-one-mainnet-arbnode-archive-trace.tls.certresolver=myresolver}"
- "${NO_SSL:-traefik.http.routers.arbitrum-one-mainnet-arbnode-archive-trace.rule=Host(`$DOMAIN`) && PathPrefix(`/arbitrum-classic`)}"
- "${NO_SSL:+traefik.http.routers.arbitrum-one-mainnet-arbnode-archive-trace.rule=PathPrefix(`/arbitrum-classic`)}"
- "traefik.http.routers.arbitrum-one-mainnet-arbnode-archive-trace.middlewares=arbitrum-one-mainnet-arbnode-archive-trace-stripprefix, ipwhitelist"
networks:
- chains
volumes:
arbitrum-one-mainnet-arbnode-archive-trace:
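
Because the classic node enables tracing with `--node.rpc.tracing.enable` and `--node.rpc.tracing.namespace=trace`, it should answer OpenEthereum-style trace queries once synced. A hedged example against the `/arbitrum-classic` route defined above; the block number is an arbitrary classic-era block chosen only for illustration:

```bash
# Fetch all call traces for one pre-Nitro Arbitrum One block
curl -s -X POST "https://${DOMAIN}/arbitrum-classic" \
  -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"trace_block","params":["0xF4240"],"id":1}'
```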

View File

@@ -1 +0,0 @@
arb/nitro/alephzero-mainnet-nitro-archive-leveldb-hash.yml

View File

@@ -1 +0,0 @@
arb/nitro/arbitrum-one-nitro-pruned-pebble-hash.yml

View File

@@ -1 +0,0 @@
arb/nitro/arbitrum-sepolia-nitro-archive-pebble-hash.yml

View File

@@ -1 +0,0 @@
arb/nitro/arbitrum-sepolia-nitro-pruned-pebble-hash.yml

View File

@@ -1 +0,0 @@
arb/nitro/arbitrum-sepolia-nitro-pruned-pebble-hash.yml

View File

@@ -1 +0,0 @@
avalanche/go/avalanche-fuji-go-pruned-pebbledb.yml

View File

@@ -1 +0,0 @@
avalanche/go/avalanche-mainnet-go-archive-leveldb.yml

View File

@@ -1,51 +0,0 @@
services:
avalanche-archive-client:
image: avaplatform/avalanchego:${AVALANCHEGO_VERSION:-v1.12.2}
ulimits:
nofile: 1048576
expose:
- "9650"
- "30720"
ports:
- "30720:30720/tcp"
- "30720:30720/udp"
volumes:
- ${AVALANCHE_MAINNET_GO_ARCHIVE_DATA:-avalanche-mainnet-go-archive}:/root/.avalanchego
- ./avalanche/configs/chains/C/archive-config.json:/root/.avalanchego/configs/chains/C/config.json
environment:
- "IP=${IP}"
networks:
- chains
command: "/avalanchego/build/avalanchego --http-host= --http-allowed-hosts=* --staking-port=30720 --public-ip=$IP"
restart: unless-stopped
avalanche-archive:
restart: unless-stopped
image: nginx
depends_on:
- avalanche-archive-client
expose:
- 80
environment:
PROXY_HOST: avalanche-archive-client
RPC_PORT: 9650
RPC_PATH: /ext/bc/C/rpc
WS_PORT: 9650
WS_PATH: /ext/bc/C/ws
networks:
- chains
volumes:
- ./nginx-proxy:/etc/nginx/templates
labels:
- "traefik.enable=true"
- "traefik.http.middlewares.avalanche-mainnet-go-archive-stripprefix.stripprefix.prefixes=/avalanche-archive"
- "traefik.http.services.avalanche-mainnet-go-archive.loadbalancer.server.port=80"
- "${NO_SSL:-traefik.http.routers.avalanche-mainnet-go-archive.entrypoints=websecure}"
- "${NO_SSL:-traefik.http.routers.avalanche-mainnet-go-archive.tls.certresolver=myresolver}"
- "${NO_SSL:-traefik.http.routers.avalanche-mainnet-go-archive.rule=Host(`$DOMAIN`) && PathPrefix(`/avalanche-archive`)}"
- "${NO_SSL:+traefik.http.routers.avalanche-mainnet-go-archive.rule=PathPrefix(`/avalanche-archive`)}"
- "traefik.http.routers.avalanche-mainnet-go-archive.middlewares=avalanche-mainnet-go-archive-stripprefix, ipwhitelist"
volumes:
avalanche-mainnet-go-archive:

View File

@@ -1 +0,0 @@
avalanche/go/avalanche-mainnet-go-pruned-pebbledb.yml

View File

@@ -1,6 +0,0 @@
{
"state-sync-enabled": false,
"pruning-enabled": false,
"rpc-gas-cap": 2500000000,
"eth-rpc-gas-limit": 2500000000
}
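
With `pruning-enabled` and `state-sync-enabled` both false, the C-Chain keeps full historical state, so state queries at old blocks should succeed instead of failing with a missing-trie-node error. A quick sanity check against the `/avalanche-archive` route defined earlier; the address and block number are placeholders, not values from this repository:

```bash
# A historical balance lookup only succeeds on an archive C-Chain node
curl -s -X POST "https://${DOMAIN}/avalanche-archive" \
  -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_getBalance","params":["0x0100000000000000000000000000000000000000","0x100000"],"id":1}'
```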

View File

@@ -1,4 +0,0 @@
{
"rpc-gas-cap": 2500000000,
"eth-rpc-gas-limit": 2500000000
}

View File

@@ -1,118 +0,0 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:avalanche/go/avalanche-fuji-go-archive-leveldb.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/avalanche-fuji-archive \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
avalanche-fuji-archive:
image: ${AVALANCHE_GO_IMAGE:-avaplatform/avalanchego}:${AVALANCHE_FUJI_GO_VERSION:-v1.14.0}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
ports:
- 10046:10046
- 10046:10046/udp
expose:
- 9650
entrypoint: [/avalanchego/build/avalanchego]
command:
- --chain-config-dir=/config/archive
- --db-type=leveldb
- --http-allowed-hosts=*
- --http-host=
- --network-id=fuji
- --public-ip=${IP}
- --staking-port=10046
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${AVALANCHE_FUJI_GO_ARCHIVE_LEVELDB_DATA:-avalanche-fuji-go-archive-leveldb}:/root/.avalanchego
- ./avalanche/fuji:/config
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=false
- traefik.enable=true
- traefik.http.middlewares.avalanche-fuji-go-archive-leveldb-set-path.replacepath.path=/ext/bc/C/rpc
- traefik.http.middlewares.avalanche-fuji-go-archive-leveldb-stripprefix.stripprefix.prefixes=/avalanche-fuji-archive
- traefik.http.services.avalanche-fuji-go-archive-leveldb.loadbalancer.server.port=9650
- ${NO_SSL:-traefik.http.routers.avalanche-fuji-go-archive-leveldb.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.avalanche-fuji-go-archive-leveldb.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.avalanche-fuji-go-archive-leveldb.rule=Host(`$DOMAIN`) && (Path(`/avalanche-fuji-archive`) || Path(`/avalanche-fuji-archive/`))}
- ${NO_SSL:+traefik.http.routers.avalanche-fuji-go-archive-leveldb.rule=Path(`/avalanche-fuji-archive`) || Path(`/avalanche-fuji-archive/`)}
- traefik.http.routers.avalanche-fuji-go-archive-leveldb.middlewares=avalanche-fuji-go-archive-leveldb-stripprefix, avalanche-fuji-go-archive-leveldb-set-path, ipallowlist
- traefik.http.routers.avalanche-fuji-go-archive-leveldb.priority=50 # gets any request that is not GET with UPGRADE header
- traefik.http.routers.avalanche-fuji-go-archive-leveldb-ws.priority=100 # answers GET requests first
- traefik.http.middlewares.avalanche-fuji-go-archive-leveldb-set-ws-path.replacepath.path=/ext/bc/C/ws
- traefik.http.services.avalanche-fuji-go-archive-leveldb-ws.loadbalancer.server.port=9650
- traefik.http.routers.avalanche-fuji-go-archive-leveldb-ws.service=avalanche-fuji-go-archive-leveldb-ws
- traefik.http.routers.avalanche-fuji-go-archive-leveldb.service=avalanche-fuji-go-archive-leveldb
- ${NO_SSL:-traefik.http.routers.avalanche-fuji-go-archive-leveldb-ws.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.avalanche-fuji-go-archive-leveldb-ws.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.avalanche-fuji-go-archive-leveldb-ws.rule=Host(`$DOMAIN`) && (Path(`/avalanche-fuji-archive`) || Path(`/avalanche-fuji-archive/`)) && Headers(`Upgrade`, `websocket`)}
- ${NO_SSL:+traefik.http.routers.avalanche-fuji-go-archive-leveldb-ws.rule=(Path(`/avalanche-fuji-archive`) || Path(`/avalanche-fuji-archive/`)) && Headers(`Upgrade`, `websocket`)}
- traefik.http.routers.avalanche-fuji-go-archive-leveldb-ws.middlewares=avalanche-fuji-go-archive-leveldb-stripprefix, avalanche-fuji-go-archive-leveldb-set-ws-path, ipallowlist
volumes:
avalanche-fuji-go-archive-leveldb:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: avalanche
method-groups:
enabled:
- debug
- filter
methods:
disabled:
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
...

View File

@@ -1,118 +0,0 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:avalanche/go/avalanche-fuji-go-pruned-leveldb.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/avalanche-fuji \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
avalanche-fuji:
image: ${AVALANCHE_GO_IMAGE:-avaplatform/avalanchego}:${AVALANCHE_FUJI_GO_VERSION:-v1.14.0}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
ports:
- 12059:12059
- 12059:12059/udp
expose:
- 9650
entrypoint: [/avalanchego/build/avalanchego]
command:
- --chain-config-dir=/config/pruned
- --db-type=leveldb
- --http-allowed-hosts=*
- --http-host=
- --network-id=fuji
- --public-ip=${IP}
- --staking-port=12059
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${AVALANCHE_FUJI_GO_PRUNED_LEVELDB_DATA:-avalanche-fuji-go-pruned-leveldb}:/root/.avalanchego
- ./avalanche/fuji:/config
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=false
- traefik.enable=true
- traefik.http.middlewares.avalanche-fuji-go-pruned-leveldb-set-path.replacepath.path=/ext/bc/C/rpc
- traefik.http.middlewares.avalanche-fuji-go-pruned-leveldb-stripprefix.stripprefix.prefixes=/avalanche-fuji
- traefik.http.services.avalanche-fuji-go-pruned-leveldb.loadbalancer.server.port=9650
- ${NO_SSL:-traefik.http.routers.avalanche-fuji-go-pruned-leveldb.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.avalanche-fuji-go-pruned-leveldb.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.avalanche-fuji-go-pruned-leveldb.rule=Host(`$DOMAIN`) && (Path(`/avalanche-fuji`) || Path(`/avalanche-fuji/`))}
- ${NO_SSL:+traefik.http.routers.avalanche-fuji-go-pruned-leveldb.rule=Path(`/avalanche-fuji`) || Path(`/avalanche-fuji/`)}
- traefik.http.routers.avalanche-fuji-go-pruned-leveldb.middlewares=avalanche-fuji-go-pruned-leveldb-stripprefix, avalanche-fuji-go-pruned-leveldb-set-path, ipallowlist
- traefik.http.routers.avalanche-fuji-go-pruned-leveldb.priority=50 # gets any request that is not GET with UPGRADE header
- traefik.http.routers.avalanche-fuji-go-pruned-leveldb-ws.priority=100 # answers GET requests first
- traefik.http.middlewares.avalanche-fuji-go-pruned-leveldb-set-ws-path.replacepath.path=/ext/bc/C/ws
- traefik.http.services.avalanche-fuji-go-pruned-leveldb-ws.loadbalancer.server.port=9650
- traefik.http.routers.avalanche-fuji-go-pruned-leveldb-ws.service=avalanche-fuji-go-pruned-leveldb-ws
- traefik.http.routers.avalanche-fuji-go-pruned-leveldb.service=avalanche-fuji-go-pruned-leveldb
- ${NO_SSL:-traefik.http.routers.avalanche-fuji-go-pruned-leveldb-ws.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.avalanche-fuji-go-pruned-leveldb-ws.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.avalanche-fuji-go-pruned-leveldb-ws.rule=Host(`$DOMAIN`) && (Path(`/avalanche-fuji`) || Path(`/avalanche-fuji/`)) && Headers(`Upgrade`, `websocket`)}
- ${NO_SSL:+traefik.http.routers.avalanche-fuji-go-pruned-leveldb-ws.rule=(Path(`/avalanche-fuji`) || Path(`/avalanche-fuji/`)) && Headers(`Upgrade`, `websocket`)}
- traefik.http.routers.avalanche-fuji-go-pruned-leveldb-ws.middlewares=avalanche-fuji-go-pruned-leveldb-stripprefix, avalanche-fuji-go-pruned-leveldb-set-ws-path, ipallowlist
volumes:
avalanche-fuji-go-pruned-leveldb:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: avalanche
method-groups:
enabled:
- debug
- filter
methods:
disabled:
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
...

View File

@@ -1,118 +0,0 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:avalanche/go/avalanche-fuji-go-pruned-pebbledb.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/avalanche-fuji \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
avalanche-fuji:
image: ${AVALANCHE_GO_IMAGE:-avaplatform/avalanchego}:${AVALANCHE_FUJI_GO_VERSION:-v1.14.0}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
ports:
- 10350:10350
- 10350:10350/udp
expose:
- 9650
entrypoint: [/avalanchego/build/avalanchego]
command:
- --chain-config-dir=/config/pruned
- --db-type=pebbledb
- --http-allowed-hosts=*
- --http-host=
- --network-id=fuji
- --public-ip=${IP}
- --staking-port=10350
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${AVALANCHE_FUJI_GO_PRUNED_PEBBLEDB_DATA:-avalanche-fuji-go-pruned-pebbledb}:/root/.avalanchego
- ./avalanche/fuji:/config
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=false
- traefik.enable=true
- traefik.http.middlewares.avalanche-fuji-go-pruned-pebbledb-set-path.replacepath.path=/ext/bc/C/rpc
- traefik.http.middlewares.avalanche-fuji-go-pruned-pebbledb-stripprefix.stripprefix.prefixes=/avalanche-fuji
- traefik.http.services.avalanche-fuji-go-pruned-pebbledb.loadbalancer.server.port=9650
- ${NO_SSL:-traefik.http.routers.avalanche-fuji-go-pruned-pebbledb.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.avalanche-fuji-go-pruned-pebbledb.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.avalanche-fuji-go-pruned-pebbledb.rule=Host(`$DOMAIN`) && (Path(`/avalanche-fuji`) || Path(`/avalanche-fuji/`))}
- ${NO_SSL:+traefik.http.routers.avalanche-fuji-go-pruned-pebbledb.rule=Path(`/avalanche-fuji`) || Path(`/avalanche-fuji/`)}
- traefik.http.routers.avalanche-fuji-go-pruned-pebbledb.middlewares=avalanche-fuji-go-pruned-pebbledb-stripprefix, avalanche-fuji-go-pruned-pebbledb-set-path, ipallowlist
- traefik.http.routers.avalanche-fuji-go-pruned-pebbledb.priority=50 # gets any request that is not GET with UPGRADE header
- traefik.http.routers.avalanche-fuji-go-pruned-pebbledb-ws.priority=100 # answers GET requests first
- traefik.http.middlewares.avalanche-fuji-go-pruned-pebbledb-set-ws-path.replacepath.path=/ext/bc/C/ws
- traefik.http.services.avalanche-fuji-go-pruned-pebbledb-ws.loadbalancer.server.port=9650
- traefik.http.routers.avalanche-fuji-go-pruned-pebbledb-ws.service=avalanche-fuji-go-pruned-pebbledb-ws
- traefik.http.routers.avalanche-fuji-go-pruned-pebbledb.service=avalanche-fuji-go-pruned-pebbledb
- ${NO_SSL:-traefik.http.routers.avalanche-fuji-go-pruned-pebbledb-ws.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.avalanche-fuji-go-pruned-pebbledb-ws.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.avalanche-fuji-go-pruned-pebbledb-ws.rule=Host(`$DOMAIN`) && (Path(`/avalanche-fuji`) || Path(`/avalanche-fuji/`)) && Headers(`Upgrade`, `websocket`)}
- ${NO_SSL:+traefik.http.routers.avalanche-fuji-go-pruned-pebbledb-ws.rule=(Path(`/avalanche-fuji`) || Path(`/avalanche-fuji/`)) && Headers(`Upgrade`, `websocket`)}
- traefik.http.routers.avalanche-fuji-go-pruned-pebbledb-ws.middlewares=avalanche-fuji-go-pruned-pebbledb-stripprefix, avalanche-fuji-go-pruned-pebbledb-set-ws-path, ipallowlist
volumes:
avalanche-fuji-go-pruned-pebbledb:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: avalanche
method-groups:
enabled:
- debug
- filter
methods:
disabled:
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
...

View File

@@ -1,118 +0,0 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:avalanche/go/avalanche-mainnet-go-archive-leveldb.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/avalanche-mainnet-archive \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
avalanche-mainnet-archive:
image: ${AVALANCHE_GO_IMAGE:-avaplatform/avalanchego}:${AVALANCHE_MAINNET_GO_VERSION:-v1.14.0}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
ports:
- 12934:12934
- 12934:12934/udp
expose:
- 9650
entrypoint: [/avalanchego/build/avalanchego]
command:
- --chain-config-dir=/config/archive
- --db-type=leveldb
- --http-allowed-hosts=*
- --http-host=
- --network-id=mainnet
- --public-ip=${IP}
- --staking-port=12934
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${AVALANCHE_MAINNET_GO_ARCHIVE_LEVELDB_DATA:-avalanche-mainnet-go-archive-leveldb}:/root/.avalanchego
- ./avalanche/mainnet:/config
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=false
- traefik.enable=true
- traefik.http.middlewares.avalanche-mainnet-go-archive-leveldb-set-path.replacepath.path=/ext/bc/C/rpc
- traefik.http.middlewares.avalanche-mainnet-go-archive-leveldb-stripprefix.stripprefix.prefixes=/avalanche-mainnet-archive
- traefik.http.services.avalanche-mainnet-go-archive-leveldb.loadbalancer.server.port=9650
- ${NO_SSL:-traefik.http.routers.avalanche-mainnet-go-archive-leveldb.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.avalanche-mainnet-go-archive-leveldb.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.avalanche-mainnet-go-archive-leveldb.rule=Host(`$DOMAIN`) && (Path(`/avalanche-mainnet-archive`) || Path(`/avalanche-mainnet-archive/`))}
- ${NO_SSL:+traefik.http.routers.avalanche-mainnet-go-archive-leveldb.rule=Path(`/avalanche-mainnet-archive`) || Path(`/avalanche-mainnet-archive/`)}
- traefik.http.routers.avalanche-mainnet-go-archive-leveldb.middlewares=avalanche-mainnet-go-archive-leveldb-stripprefix, avalanche-mainnet-go-archive-leveldb-set-path, ipallowlist
- traefik.http.routers.avalanche-mainnet-go-archive-leveldb.priority=50 # gets any request that is not GET with UPGRADE header
- traefik.http.routers.avalanche-mainnet-go-archive-leveldb-ws.priority=100 # answers GET requests first
- traefik.http.middlewares.avalanche-mainnet-go-archive-leveldb-set-ws-path.replacepath.path=/ext/bc/C/ws
- traefik.http.services.avalanche-mainnet-go-archive-leveldb-ws.loadbalancer.server.port=9650
- traefik.http.routers.avalanche-mainnet-go-archive-leveldb-ws.service=avalanche-mainnet-go-archive-leveldb-ws
- traefik.http.routers.avalanche-mainnet-go-archive-leveldb.service=avalanche-mainnet-go-archive-leveldb
- ${NO_SSL:-traefik.http.routers.avalanche-mainnet-go-archive-leveldb-ws.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.avalanche-mainnet-go-archive-leveldb-ws.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.avalanche-mainnet-go-archive-leveldb-ws.rule=Host(`$DOMAIN`) && (Path(`/avalanche-mainnet-archive`) || Path(`/avalanche-mainnet-archive/`)) && Headers(`Upgrade`, `websocket`)}
- ${NO_SSL:+traefik.http.routers.avalanche-mainnet-go-archive-leveldb-ws.rule=(Path(`/avalanche-mainnet-archive`) || Path(`/avalanche-mainnet-archive/`)) && Headers(`Upgrade`, `websocket`)}
- traefik.http.routers.avalanche-mainnet-go-archive-leveldb-ws.middlewares=avalanche-mainnet-go-archive-leveldb-stripprefix, avalanche-mainnet-go-archive-leveldb-set-ws-path, ipallowlist
volumes:
avalanche-mainnet-go-archive-leveldb:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: avalanche
method-groups:
enabled:
- debug
- filter
methods:
disabled:
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
...

View File

@@ -1,118 +0,0 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:avalanche/go/avalanche-mainnet-go-pruned-leveldb.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/avalanche-mainnet \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
avalanche-mainnet:
image: ${AVALANCHE_GO_IMAGE:-avaplatform/avalanchego}:${AVALANCHE_MAINNET_GO_VERSION:-v1.14.0}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
ports:
- 12757:12757
- 12757:12757/udp
expose:
- 9650
entrypoint: [/avalanchego/build/avalanchego]
command:
- --chain-config-dir=/config/pruned
- --db-type=leveldb
- --http-allowed-hosts=*
- --http-host=
- --network-id=mainnet
- --public-ip=${IP}
- --staking-port=12757
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${AVALANCHE_MAINNET_GO_PRUNED_LEVELDB_DATA:-avalanche-mainnet-go-pruned-leveldb}:/root/.avalanchego
- ./avalanche/mainnet:/config
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=false
- traefik.enable=true
- traefik.http.middlewares.avalanche-mainnet-go-pruned-leveldb-set-path.replacepath.path=/ext/bc/C/rpc
- traefik.http.middlewares.avalanche-mainnet-go-pruned-leveldb-stripprefix.stripprefix.prefixes=/avalanche-mainnet
- traefik.http.services.avalanche-mainnet-go-pruned-leveldb.loadbalancer.server.port=9650
- ${NO_SSL:-traefik.http.routers.avalanche-mainnet-go-pruned-leveldb.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.avalanche-mainnet-go-pruned-leveldb.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.avalanche-mainnet-go-pruned-leveldb.rule=Host(`$DOMAIN`) && (Path(`/avalanche-mainnet`) || Path(`/avalanche-mainnet/`))}
- ${NO_SSL:+traefik.http.routers.avalanche-mainnet-go-pruned-leveldb.rule=Path(`/avalanche-mainnet`) || Path(`/avalanche-mainnet/`)}
- traefik.http.routers.avalanche-mainnet-go-pruned-leveldb.middlewares=avalanche-mainnet-go-pruned-leveldb-stripprefix, avalanche-mainnet-go-pruned-leveldb-set-path, ipallowlist
- traefik.http.routers.avalanche-mainnet-go-pruned-leveldb.priority=50 # gets any request that is not GET with UPGRADE header
- traefik.http.routers.avalanche-mainnet-go-pruned-leveldb-ws.priority=100 # answers GET requests first
- traefik.http.middlewares.avalanche-mainnet-go-pruned-leveldb-set-ws-path.replacepath.path=/ext/bc/C/ws
- traefik.http.services.avalanche-mainnet-go-pruned-leveldb-ws.loadbalancer.server.port=9650
- traefik.http.routers.avalanche-mainnet-go-pruned-leveldb-ws.service=avalanche-mainnet-go-pruned-leveldb-ws
- traefik.http.routers.avalanche-mainnet-go-pruned-leveldb.service=avalanche-mainnet-go-pruned-leveldb
- ${NO_SSL:-traefik.http.routers.avalanche-mainnet-go-pruned-leveldb-ws.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.avalanche-mainnet-go-pruned-leveldb-ws.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.avalanche-mainnet-go-pruned-leveldb-ws.rule=Host(`$DOMAIN`) && (Path(`/avalanche-mainnet`) || Path(`/avalanche-mainnet/`)) && Headers(`Upgrade`, `websocket`)}
- ${NO_SSL:+traefik.http.routers.avalanche-mainnet-go-pruned-leveldb-ws.rule=(Path(`/avalanche-mainnet`) || Path(`/avalanche-mainnet/`)) && Headers(`Upgrade`, `websocket`)}
- traefik.http.routers.avalanche-mainnet-go-pruned-leveldb-ws.middlewares=avalanche-mainnet-go-pruned-leveldb-stripprefix, avalanche-mainnet-go-pruned-leveldb-set-ws-path, ipallowlist
volumes:
avalanche-mainnet-go-pruned-leveldb:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: avalanche
method-groups:
enabled:
- debug
- filter
methods:
disabled:
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
...

View File

@@ -1,118 +0,0 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:avalanche/go/avalanche-mainnet-go-pruned-pebbledb.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/avalanche-mainnet \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
avalanche-mainnet:
image: ${AVALANCHE_GO_IMAGE:-avaplatform/avalanchego}:${AVALANCHE_MAINNET_GO_VERSION:-v1.14.0}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
ports:
- 11929:11929
- 11929:11929/udp
expose:
- 9650
entrypoint: [/avalanchego/build/avalanchego]
command:
- --chain-config-dir=/config/pruned
- --db-type=pebbledb
- --http-allowed-hosts=*
- --http-host=
- --network-id=mainnet
- --public-ip=${IP}
- --staking-port=11929
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${AVALANCHE_MAINNET_GO_PRUNED_PEBBLEDB_DATA:-avalanche-mainnet-go-pruned-pebbledb}:/root/.avalanchego
- ./avalanche/mainnet:/config
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=false
- traefik.enable=true
- traefik.http.middlewares.avalanche-mainnet-go-pruned-pebbledb-set-path.replacepath.path=/ext/bc/C/rpc
- traefik.http.middlewares.avalanche-mainnet-go-pruned-pebbledb-stripprefix.stripprefix.prefixes=/avalanche-mainnet
- traefik.http.services.avalanche-mainnet-go-pruned-pebbledb.loadbalancer.server.port=9650
- ${NO_SSL:-traefik.http.routers.avalanche-mainnet-go-pruned-pebbledb.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.avalanche-mainnet-go-pruned-pebbledb.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.avalanche-mainnet-go-pruned-pebbledb.rule=Host(`$DOMAIN`) && (Path(`/avalanche-mainnet`) || Path(`/avalanche-mainnet/`))}
- ${NO_SSL:+traefik.http.routers.avalanche-mainnet-go-pruned-pebbledb.rule=Path(`/avalanche-mainnet`) || Path(`/avalanche-mainnet/`)}
- traefik.http.routers.avalanche-mainnet-go-pruned-pebbledb.middlewares=avalanche-mainnet-go-pruned-pebbledb-stripprefix, avalanche-mainnet-go-pruned-pebbledb-set-path, ipallowlist
- traefik.http.routers.avalanche-mainnet-go-pruned-pebbledb.priority=50 # gets any request that is not GET with UPGRADE header
- traefik.http.routers.avalanche-mainnet-go-pruned-pebbledb-ws.priority=100 # answers GET requests first
- traefik.http.middlewares.avalanche-mainnet-go-pruned-pebbledb-set-ws-path.replacepath.path=/ext/bc/C/ws
- traefik.http.services.avalanche-mainnet-go-pruned-pebbledb-ws.loadbalancer.server.port=9650
- traefik.http.routers.avalanche-mainnet-go-pruned-pebbledb-ws.service=avalanche-mainnet-go-pruned-pebbledb-ws
- traefik.http.routers.avalanche-mainnet-go-pruned-pebbledb.service=avalanche-mainnet-go-pruned-pebbledb
- ${NO_SSL:-traefik.http.routers.avalanche-mainnet-go-pruned-pebbledb-ws.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.avalanche-mainnet-go-pruned-pebbledb-ws.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.avalanche-mainnet-go-pruned-pebbledb-ws.rule=Host(`$DOMAIN`) && (Path(`/avalanche-mainnet`) || Path(`/avalanche-mainnet/`)) && Headers(`Upgrade`, `websocket`)}
- ${NO_SSL:+traefik.http.routers.avalanche-mainnet-go-pruned-pebbledb-ws.rule=(Path(`/avalanche-mainnet`) || Path(`/avalanche-mainnet/`)) && Headers(`Upgrade`, `websocket`)}
- traefik.http.routers.avalanche-mainnet-go-pruned-pebbledb-ws.middlewares=avalanche-mainnet-go-pruned-pebbledb-stripprefix, avalanche-mainnet-go-pruned-pebbledb-set-ws-path, ipallowlist
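# Note on the label plumbing above (explanatory comment, not part of the upstream compose file):
# - The ${NO_SSL:-...} labels only take effect while NO_SSL is unset (SSL on); setting NO_SSL=true
#   turns them into inert "true" labels and activates the plain-HTTP rules from the ${NO_SSL:+...} labels.
# - The -ws router (priority 100) only matches requests carrying an "Upgrade: websocket" header and
#   rewrites them to /ext/bc/C/ws; everything else falls through to the priority-50 router, which
#   rewrites to /ext/bc/C/rpc.
# Quick check of both endpoints (websocat is assumed to be installed; use http:// when NO_SSL=true):
#   curl -X POST https://$DOMAIN/avalanche-mainnet -H 'Content-Type: application/json' \
#     --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
#   websocat wss://$DOMAIN/avalanche-mainnet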
volumes:
avalanche-mainnet-go-pruned-pebbledb:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: avalanche
method-groups:
enabled:
- debug
- filter
methods:
disabled:
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
...


@@ -1,4 +0,0 @@
{
"state-sync-enabled": false,
"pruning-enabled": false
}


@@ -1,21 +0,0 @@
{
"snowman-api-enabled": false,
"coreth-admin-api-enabled": false,
"net-api-enabled": true,
"rpc-gas-cap": 2500000000,
"rpc-tx-fee-cap": 100,
"eth-rpc-gas-limit": 2500000000,
"eth-api-enabled": true,
"personal-api-enabled": false,
"tx-pool-api-enabled": false,
"debug-api-enabled": false,
"web3-api-enabled": true,
"local-txs-enabled": false,
"pruning-enabled": true,
"api-max-duration": 0,
"api-max-blocks-per-request": 0,
"allow-unfinalized-queries": false,
"log-level": "info",
"state-sync-enabled": false,
"state-sync-skip-resume": true
}


@@ -1,35 +0,0 @@
services:
backup-http:
image: abassi/node-http-server:latest
restart: unless-stopped
volumes:
- /backup:/dir_to_serve
labels:
- "traefik.enable=true"
- "traefik.http.middlewares.backup-server-stripprefix.stripprefix.prefixes=/backup"
- "traefik.http.services.backup-server.loadbalancer.server.port=8080"
- "traefik.http.routers.backup-server.entrypoints=websecure"
- "traefik.http.routers.backup-server.tls.certresolver=myresolver"
- "traefik.http.routers.backup-server.rule=Host(`$DOMAIN`) && PathPrefix(`/backup`)"
- "traefik.http.routers.backup-server.middlewares=backup-server-stripprefix"
networks:
- chains
backup-dav:
image: 117503445/go_webdav:latest
restart: unless-stopped
environment:
- "dav=/null,/webdav,null,null,false"
volumes:
- /backup:/webdav
labels:
- "traefik.enable=true"
- "traefik.http.middlewares.backup-storage-stripprefix.stripprefix.prefixes=/dav"
- "traefik.http.services.backup-storage.loadbalancer.server.port=80"
- "traefik.http.routers.backup-storage.entrypoints=websecure"
- "traefik.http.routers.backup-storage.tls.certresolver=myresolver"
- "traefik.http.routers.backup-storage.rule=Host(`$DOMAIN`) && PathPrefix(`/dav`)"
- "traefik.http.routers.backup-storage.middlewares=backup-storage-stripprefix"
networks:
- chains


@@ -1,51 +0,0 @@
#!/bin/bash
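# Usage sketch (the script name below is a placeholder for however this file is saved):
#   ./backup.sh ethereum-mainnet                        # write .tar.zst archives of the network's volumes to /backup
#   ./backup.sh ethereum-mainnet https://host/webdav    # stream each archive to a WebDAV endpoint instead
# $1 is the compose file name (without .yml) under /root/rpc; its named volumes are archived from
# /var/lib/docker/volumes/rpc_<volume>/_data. Requires yaml2json, jq, pv, zstd and curl on the host.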
backup_dir="/backup"
if [[ -n $2 ]]; then
echo "upload backup via webdav to $2"
else
if [ ! -d "$backup_dir" ]; then
echo "Error: /backup directory does not exist"
exit 1
fi
fi
# Read the compose file for the requested network, convert the YAML to JSON, and extract its volume names
keys=$(cat /root/rpc/$1.yml | yaml2json - | jq '.volumes' | jq -r 'keys[]')
# Iterate over the list of keys
for key in $keys; do
echo "Executing command with key: /var/lib/docker/volumes/rpc_$key/_data"
source_folder="/var/lib/docker/volumes/rpc_$key/_data"
folder_size=$(du -shL "$source_folder" | awk '{
size = $1
sub(/[Kk]$/, "", size) # Remove 'K' suffix if present
sub(/[Mm]$/, "", size) # Remove 'M' suffix if present
sub(/[Gg]$/, "", size) # Remove 'G' suffix if present
sub(/[Tt]$/, "", size) # Remove 'T' suffix if present
if ($1 ~ /[Kk]$/) {
size *= 0.000001 # Convert kilobytes to gigabytes
} else if ($1 ~ /[Mm]$/) {
size *= 0.001 # Convert megabytes to gigabytes
} else if ($1 ~ /[Tt]$/) {
size *= 1000 # convert terabytes to gigabytes
}
print size
}')
folder_size_gb=$(printf "%.0f" "$folder_size")
target_file="rpc_$key-$(date +'%Y-%m-%d-%H-%M-%S')-${folder_size_gb}G.tar.zst"
#echo "$target_file"
if [[ -n $2 ]]; then
tar -cf - --dereference "$source_folder" | pv -pterb -s $(du -sb "$source_folder" | awk '{print $1}') | zstd | curl -X PUT --upload-file - "$2/null/uploading-$target_file"
curl -X MOVE -H "Destination: /null/$target_file" "$2/null/uploading-$target_file"
else
tar -cf - --dereference "$source_folder" | pv -pterb -s $(du -sb "$source_folder" | awk '{print $1}') | zstd -o "/backup/uploading-$target_file"
mv "/backup/uploading-$target_file" "/backup/$target_file"
fi
done


@@ -1 +0,0 @@
op/erigon/base-mainnet-op-erigon-archive-trace.yml


@@ -1 +0,0 @@
op/reth/base-mainnet-op-reth-archive-trace.yml


@@ -1 +0,0 @@
op/reth/base-mainnet-op-reth-pruned-trace.yml


@@ -1 +0,0 @@
op/geth/base-mainnet-op-geth-pruned-pebble-path.yml


@@ -1 +0,0 @@
op/reth/base-sepolia-op-reth-pruned-trace.yml


@@ -1 +0,0 @@
op/geth/base-sepolia-op-geth-pruned-pebble-path.yml


@@ -1,6 +0,0 @@
networks:
chains:
driver: bridge
ipam:
config:
- subnet: ${CHAINS_SUBNET:-192.168.0.0/26}


@@ -1,24 +0,0 @@
services:
benchmark-proxy:
build:
context: ./benchmark-proxy
dockerfile: Dockerfile
expose:
- "8080"
environment:
- LISTEN_ADDR=:8080
- SUMMARY_INTERVAL=60
- PRIMARY_BACKEND=${BENCHMARK_PROXY_PRIMARY_BACKEND}
- SECONDARY_BACKENDS=${BENCHMARK_PROXY_SECONDARY_BACKENDS}
restart: unless-stopped
networks:
- chains
labels:
- "traefik.enable=true"
- "traefik.http.middlewares.benchmark-proxy-stripprefix.stripprefix.prefixes=/benchmark"
- "traefik.http.services.benchmark-proxy.loadbalancer.server.port=8080"
- "${NO_SSL:-traefik.http.routers.benchmark-proxy.entrypoints=websecure}"
- "${NO_SSL:-traefik.http.routers.benchmark-proxy.tls.certresolver=myresolver}"
- "${NO_SSL:-traefik.http.routers.benchmark-proxy.rule=Host(`$DOMAIN`) && PathPrefix(`/benchmark`)}"
- "${NO_SSL:+traefik.http.routers.benchmark-proxy.rule=PathPrefix(`/benchmark`)}"
- "traefik.http.routers.benchmark-proxy.middlewares=benchmark-proxy-stripprefix, ipwhitelist"


@@ -1 +0,0 @@
berachain/reth/berachain-bartio-reth-archive-trace.yml


@@ -1 +0,0 @@
berachain/reth/berachain-bepolia-reth-archive-trace.yml


@@ -1 +0,0 @@
berachain/reth/berachain-mainnet-reth-archive-trace.yml


@@ -1 +0,0 @@
berachain/reth/berachain-mainnet-reth-pruned-trace.yml


@@ -1,9 +0,0 @@
ARG BEACONKIT_IMAGE
ARG BEACONKIT_VERSION
FROM ${BEACONKIT_IMAGE}:${BEACONKIT_VERSION}
COPY ./scripts/init.sh /usr/local/bin/init.sh
RUN chmod +x /usr/local/bin/init.sh
ENTRYPOINT [ "init.sh" ]
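# The compose files build this image by passing the beacon-kit args; roughly equivalent to running
# (illustrative, from the repository root):
#   docker build -f berachain/beacon-kit.Dockerfile \
#     --build-arg BEACONKIT_IMAGE=ghcr.io/berachain/beacon-kit \
#     --build-arg BEACONKIT_VERSION=v1.3.4 berachain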


@@ -1,185 +0,0 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:berachain/geth/berachain-bepolia-geth-pruned-pebble-path.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/berachain-bepolia-geth \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
berachain-bepolia-geth:
image: ${BERACHAIN_GETH_IMAGE:-ghcr.io/berachain/bera-geth}:${BERACHAIN_BEPOLIA_GETH_VERSION:-v1.011607.0}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
ports:
- 14888:14888
- 14888:14888/udp
expose:
- 8545
- 6060
- 8551
environment:
- CHAINID=80069
- CHAINNAME=bepolia
- CHAIN_SPEC=testnet
- GETH_BOOTNODES=enode://879752877fe08e483eba228b7544ff2de4a90e8ddaa59f122dfbc5c3689ea3030a0bfe4532d6a411a936dadce84e7a89086dca39dc39b9119603d87469010d9e@34.152.29.103:30303,enode://11f28864a15162c66b4b23d20b91fe08fda1d541df778d8db33be72558e80489a785d3fcc870926d10b7a6b848c7c6319bc215481319e527ce54ac139c2f44dc@34.118.159.247:30303,enode://47f41b9ab5a45e880a78330d2ae3f95a61f5cb41f203bbc9c9ff0e37778fc6c7fd46a6ee103e65ac36df8c024075a33f535a03cd8d800800c27fa2699fa0182b@34.47.28.251:30303,enode://fe6d2429b582de7daf387c6e5436f05d9185965267b72b8b6b4924125b50afdad765a820213d73d2afbfe64a721160a1a94750b4874ed97ccbe97c51443d1c42@34.95.21.165:30303,enode://1273d68cb5a884630aff8c35a30f643ad739461a5f7c8d2d691dc65547d5ad023829e1b6bf128edddd032047cb5c13e9dcd0f93a5de4aa5b958cef5970541863@34.159.207.112:30303,enode://fac9c2a0f719ec0ebc9021e88ad32eb7c05f4f351993dab6e4c86ae3ef5a47cade21749281263221c2b5583437893f957ac282858fd8a00f75f9a495eb53ae4e@34.141.54.42:30303,enode://5c0d582c19ea9f19928cfd6b7e156372d051b5720f67a444c38b671b8119bb097abfd1e4b868a389ab4a65bb9b405ef837df0ae195c0f32a33c61c39ca54e8ab@34.107.71.151:30303,enode://59aec227e87f4cd7c0a24c6cb0f870ef77abcc0c6d640f5a536c597a206b3505f7963f6459c265952b7bed3c3f260edf8a1412bbdb05f0e59d6bd612dc4bf077@34.141.48.88:30303,enode://281e3927839de398cb571279b324b6251d92c8de79263648f24dd4e451499a7fb336354bd4b7dc616e439f804cdd518fe49789f20b060c6426020de2c1cb285a@35.240.139.2:30303,enode://ae3f2e3f2caffc82aa0a5f5c6036def3d0be046b9245651b61567b4743c3c4937c08cb075d4ac543b163045942028384f3ef93bb25f7d500b644a2aab78f6b46@34.87.140.47:30303,enode://59e79f22bbf1645c60666ba6c17c54845080473df80711807fb3a4fbefc26278f78c43fa44ea6e52db38c57f1fccf6caa2de7589ca6cce8e161c3120e7a8d0d8@35.198.214.183:30303,enode://91d6e1878f5ab434a36625ac9017b792179324d42f4650a8dcc38774578d7ffb68882ec56861155e738eb62a7365b8463bd26886523f60edcfbf6137b43b9b65@35.240.234.137:30303,enode://f55dc8bac635b9c3b9fad688e3200c6b0744e2930d8d8a8e91d1effc0cd91b0547da3eac6d8d1431f0dd130ccc20f3c856fc90e32002bfc1e27d3286eef8c570@34.47.125.153:30303,enode://13da5e4ff1ab3481966f8d83309d8bd898ef5fa56355da57e683bdcc26b4c205be40341c87236934610d0c0bdddf11f6cef495966bba0092465839b8942bc54c@34.47.93.104:30303,enode://c2d43cba3380df975e78befc7294fe07a042eab1c2b743bdf436274b884bcdd0ef218821ff24732ff2106bf796964ada13dbeb42e27756f2491710be402fce59@34.64.123.223:30303,enode://f5b93e79932028a30bacb1b1cd7e7d42b30fe69640c0081bb51e29483bfe715c698dfcf50a309f208743c0ef07884afa39d0c96c6e56a8e58ce7ff1181fdffa0@34.22.89.170:30303
command:
- --bootnodes=enode://879752877fe08e483eba228b7544ff2de4a90e8ddaa59f122dfbc5c3689ea3030a0bfe4532d6a411a936dadce84e7a89086dca39dc39b9119603d87469010d9e@34.152.29.103:30303,enode://11f28864a15162c66b4b23d20b91fe08fda1d541df778d8db33be72558e80489a785d3fcc870926d10b7a6b848c7c6319bc215481319e527ce54ac139c2f44dc@34.118.159.247:30303,enode://47f41b9ab5a45e880a78330d2ae3f95a61f5cb41f203bbc9c9ff0e37778fc6c7fd46a6ee103e65ac36df8c024075a33f535a03cd8d800800c27fa2699fa0182b@34.47.28.251:30303,enode://fe6d2429b582de7daf387c6e5436f05d9185965267b72b8b6b4924125b50afdad765a820213d73d2afbfe64a721160a1a94750b4874ed97ccbe97c51443d1c42@34.95.21.165:30303,enode://1273d68cb5a884630aff8c35a30f643ad739461a5f7c8d2d691dc65547d5ad023829e1b6bf128edddd032047cb5c13e9dcd0f93a5de4aa5b958cef5970541863@34.159.207.112:30303,enode://fac9c2a0f719ec0ebc9021e88ad32eb7c05f4f351993dab6e4c86ae3ef5a47cade21749281263221c2b5583437893f957ac282858fd8a00f75f9a495eb53ae4e@34.141.54.42:30303,enode://5c0d582c19ea9f19928cfd6b7e156372d051b5720f67a444c38b671b8119bb097abfd1e4b868a389ab4a65bb9b405ef837df0ae195c0f32a33c61c39ca54e8ab@34.107.71.151:30303,enode://59aec227e87f4cd7c0a24c6cb0f870ef77abcc0c6d640f5a536c597a206b3505f7963f6459c265952b7bed3c3f260edf8a1412bbdb05f0e59d6bd612dc4bf077@34.141.48.88:30303,enode://281e3927839de398cb571279b324b6251d92c8de79263648f24dd4e451499a7fb336354bd4b7dc616e439f804cdd518fe49789f20b060c6426020de2c1cb285a@35.240.139.2:30303,enode://ae3f2e3f2caffc82aa0a5f5c6036def3d0be046b9245651b61567b4743c3c4937c08cb075d4ac543b163045942028384f3ef93bb25f7d500b644a2aab78f6b46@34.87.140.47:30303,enode://59e79f22bbf1645c60666ba6c17c54845080473df80711807fb3a4fbefc26278f78c43fa44ea6e52db38c57f1fccf6caa2de7589ca6cce8e161c3120e7a8d0d8@35.198.214.183:30303,enode://91d6e1878f5ab434a36625ac9017b792179324d42f4650a8dcc38774578d7ffb68882ec56861155e738eb62a7365b8463bd26886523f60edcfbf6137b43b9b65@35.240.234.137:30303,enode://f55dc8bac635b9c3b9fad688e3200c6b0744e2930d8d8a8e91d1effc0cd91b0547da3eac6d8d1431f0dd130ccc20f3c856fc90e32002bfc1e27d3286eef8c570@34.47.125.153:30303,enode://13da5e4ff1ab3481966f8d83309d8bd898ef5fa56355da57e683bdcc26b4c205be40341c87236934610d0c0bdddf11f6cef495966bba0092465839b8942bc54c@34.47.93.104:30303,enode://c2d43cba3380df975e78befc7294fe07a042eab1c2b743bdf436274b884bcdd0ef218821ff24732ff2106bf796964ada13dbeb42e27756f2491710be402fce59@34.64.123.223:30303,enode://f5b93e79932028a30bacb1b1cd7e7d42b30fe69640c0081bb51e29483bfe715c698dfcf50a309f208743c0ef07884afa39d0c96c6e56a8e58ce7ff1181fdffa0@34.22.89.170:30303
- --datadir=/root/.ethereum
- --db.engine=pebble
- --gcmode=full
- --maxpeers=50
- --metrics
- --metrics.addr=0.0.0.0
- --metrics.port=6060
- --nat=extip:${IP}
- --port=14888
- --rpc.gascap=600000000
- --rpc.txfeecap=0
- --state.scheme=path
- --syncmode=snap
- --http
- --http.addr=0.0.0.0
- --http.api=eth,net,web3,debug,admin,txpool,engine
- --http.port=8545
- --http.vhosts=*
- --ws
- --ws.addr=0.0.0.0
- --ws.api=eth,net,web3,debug,admin,txpool,engine
- --ws.origins=*
- --ws.port=8545
- --authrpc.addr=0.0.0.0
- --authrpc.jwtsecret=/jwtsecret
- --authrpc.vhosts=*
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${BERACHAIN_BEPOLIA_GETH_PRUNED_PEBBLE_PATH_DATA:-berachain-bepolia-geth-pruned-pebble-path}:/root/.ethereum
- .jwtsecret:/jwtsecret:ro
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=true
- prometheus-scrape.port=6060
- prometheus-scrape.path=/debug/metrics/prometheus
- traefik.enable=true
- traefik.http.middlewares.berachain-bepolia-geth-pruned-pebble-path-stripprefix.stripprefix.prefixes=/berachain-bepolia-geth
- traefik.http.services.berachain-bepolia-geth-pruned-pebble-path.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.berachain-bepolia-geth-pruned-pebble-path.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.berachain-bepolia-geth-pruned-pebble-path.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.berachain-bepolia-geth-pruned-pebble-path.rule=Host(`$DOMAIN`) && (Path(`/berachain-bepolia-geth`) || Path(`/berachain-bepolia-geth/`))}
- ${NO_SSL:+traefik.http.routers.berachain-bepolia-geth-pruned-pebble-path.rule=Path(`/berachain-bepolia-geth`) || Path(`/berachain-bepolia-geth/`)}
- traefik.http.routers.berachain-bepolia-geth-pruned-pebble-path.middlewares=berachain-bepolia-geth-pruned-pebble-path-stripprefix, ipallowlist
berachain-bepolia-geth-node:
build:
context: ./berachain
dockerfile: beacon-kit.Dockerfile
args:
BEACONKIT_VERSION: ${BERACHAIN_BEPOLIA_BEACON_KIT_VERSION:-v1.3.4}
BEACONKIT_IMAGE: ${BERACHAIN_BEPOLIA_BEACON_KIT_IMAGE:-ghcr.io/berachain/beacon-kit}
ports:
- 19888:19888
- 19888:19888/udp
environment:
- AUTH_RPC=http://berachain-bepolia-geth:8551
- CHAINID=80069
- CHAINNAME=bepolia
- CHAIN_SPEC=testnet
- IP=${IP}
- MONIKER=d${DOMAIN:-local}
- P2P_PORT=19888
- PERSISTENT_PEERS=${BERACHAIN_BEPOLIA_BEACON_KIT_PEERS:-6d3a988ad84e02c37249b78628dbaf344784ea1a@108.129.86.179:30175,73ab55f52534c394f769b3a89d22eb349f053f3b@54.155.2.82:26656,7c4df69bc100d0eff84eefcf294f91c7aadebfd2@150.136.10.166:26656,b74208a6bcd76b6d06d4f308f1dd6c7b6d0a9f29@129.213.43.186:26656,233133cf27dd844af9b78e89625f001864bcfc92@57.129.73.89:32001,3d254fc9ed08346aedb71ccdf90bea67f7cbd00b@144.217.189.120:26656,e726816f42831689eab9378d5d577f1d06d25716@134.65.195.117:36656,f14545716ceeafba11f6ecd64d8d77aaf16d40c0@150.136.41.96:26656,737a435f2cb00c1e7b945cc52d7864eb7b70b992@185.242.112.107:16347,7724febbb335af74669fd8d8a88d37401c657386@129.213.27.223:26656,8bac4ade15a1d2f8665b62c71bf9d27112dbbbd6@40.160.22.31:26656,a3f5727fa125cdcd247b893f267f406c3f7d44a5@15.204.104.159:26656,79dee163a6ddedf2cb1edd0cbcbb26722c0e7f86@148.113.216.106:26656,d1251c410183c5357161a4a1812f35144e9db560@150.136.211.15:26656,a5ccf1b754651ec080dabc7e065d078ab2bc83bb@34.152.4.236:26656,0f3c31b2cef708275ad846fce4f30a8e0fbceecc@150.136.220.194:26656,a5f001d5e032caf46b81bf5d067b190ebf519f28@150.136.149.242:26656,55747a55410a3dbda9479d4cbb0a57016e1a4587@34.126.82.251:26656,5bb0cf7aab12a40ca920f84eddb9485b9d9284e1@23.227.222.185:37557,33b7a034787d7dbe95044ec140a34bba691b7310@3.252.67.180:30175,986f47732f3297c746d995696bb632a8735e2f30@82.197.183.13:26656,83983b76ba834110c4fe03b63d59f317b9c837c2@34.159.191.94:26656,76c1693781b9070d96f8cd640bc46d29e694fef5@74.15.24.112:21001,9805ae8ca90374c4cc3a0b1ba1d32a541e5da0c6@87.246.108.86:30715,068d83eb15836cd1777b06442e3c52d5e9bd3a99@164.152.161.131:37557,8a5908219e7a566acb7fbad157b37c32f2ff7e92@217.22.153.180:26656,e2f056e84cad629dfb7e7b0ca82060f0f5d45363@38.18.230.23:16347,e82aa9e1c41397cfc92f9af24a0cf5b3f1839dcd@91.198.59.11:27756,b82d334d16385c16148c6852ed956abcdae03b5e@38.114.121.59:16347,6da7d1e06161238fe5855eea0710dcd71ef00535@40.160.22.37:26656,e5f97876c7a3d39013b2eef291ca5670ef595b30@66.70.164.133:26656,c8ac56bc9045c720a4dbc541598e5b1a76ec7e6e@134.65.194.144:37557,b8c2dc40585e32c28c69d8c14818c9828d76709a@65.108.98.89:26656,e1b058e5cfa2b836ddaa496b10911da62dcf182e@169.155.168.156:36656,e93fbb087acb7c0f8ca850a796310bb745b510b6@23.227.222.131:36656,9989f4bb3e3b6975f7d056aaf150e7f10f046344@34.47.86.5:26656,6ea643a1bdf026460769f3e89e2bd213d610281f@44.246.151.147:26656,b442414dda8fa23e893b8ea703ebf01ce0655e14@162.19.240.10:26656}
- SEEDS=${BERACHAIN_BEPOLIA_BEACON_KIT_SEEDS:-9aa463497679e18a8a6ddc5e8503071255ce3844@34.47.83.64:26656,52b8bfb58c1d60774dd55310b24740569587ef5e@34.64.49.76:26656,1c8aa4af80684c29904a09044ffafba2e1814a30@34.64.211.6:26656,9989f4bb3e3b6975f7d056aaf150e7f10f046344@34.64.76.252:26656,30ebf8be0523a060b6b2baca1095abc92cca5caa@34.124.255.167:26656,55747a55410a3dbda9479d4cbb0a57016e1a4587@34.143.242.236:26656,f833e6317680a4dec3e2c6a11c8f4361bcff0c89@34.142.137.107:26656,65d3fb4f953532a9cc1f3a9417831419a7847ad8@34.143.221.131:26656,27d1edaed99a711a3f2862cc95c6361bc8c08cb9@34.159.202.238:26656,e56e965bd53832d47cf2f7cc2b7118c20c0c5b19@34.89.234.189:26656,f0f1a94a45f894be013ba16ebac09426aeadfe3b@34.159.175.241:26656,58b349f2d3f2206798aae4f8d87141f095095503@34.159.220.160:26656,d095b955e4ee6e48d6ed821179d8de87725f2ed7@35.203.95.212:26656,9499000274a522d85f91dbdac26945bbe2a144ba@35.234.244.49:26656,33ad9c7b2fa9cf0c5425d26af9a2af9d75600a9c@35.203.75.59:26656,a5ccf1b754651ec080dabc7e065d078ab2bc83bb@34.95.45.160:26656}
restart: unless-stopped
networks:
- chains
volumes:
- ${BERACHAIN_BEPOLIA_GETH_PRUNED_PEBBLE_PATH__BEACON_KIT_DATA:-berachain-bepolia-geth-pruned-pebble-path_beacon-kit}:/root/.beacond/data
- .jwtsecret:/jwtsecret:ro
- berachain-bepolia-geth-pruned-pebble-path_config:/root/.beacond/config
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=false
volumes:
berachain-bepolia-geth-pruned-pebble-path:
berachain-bepolia-geth-pruned-pebble-path_beacon-kit:
berachain-bepolia-geth-pruned-pebble-path_config:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: berachain-bepolia
method-groups:
enabled:
- debug
- filter
methods:
disabled:
# not compatible with path state scheme
- name: debug_traceBlockByHash
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
# standard geth only
- name: debug_getRawBlock
- name: debug_getRawTransaction
- name: debug_getRawReceipts
- name: debug_getRawHeader
- name: debug_getBadBlocks
# non standard geth only slightly dangerous
- name: debug_intermediateRoots
- name: debug_dumpBlock
# standard geth and erigon
- name: debug_accountRange
- name: debug_getModifiedAccountsByNumber
- name: debug_getModifiedAccountsByHash
# non standard geth and erigon
- name: eth_getRawTransactionByHash
- name: eth_getRawTransactionByBlockHashAndIndex
...


@@ -1,183 +0,0 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:berachain/geth/berachain-mainnet-geth-pruned-pebble-path.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/berachain-mainnet-geth \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
berachain-mainnet-geth:
image: ${BERACHAIN_GETH_IMAGE:-ghcr.io/berachain/bera-geth}:${BERACHAIN_MAINNET_GETH_VERSION:-v1.011607.0}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
ports:
- 11562:11562
- 11562:11562/udp
expose:
- 8545
- 6060
- 8551
environment:
- CHAINID=80094
- CHAINNAME=mainnet
- CHAIN_SPEC=mainnet
- GETH_BOOTNODES=enode://0c5a4a3c0e81fce2974e4d317d88df783731183d534325e32e0fdf8f4b119d7889fa254d3a38890606ec300d744e2aa9c87099a4a032f5c94efe53f3fcdfecfe@34.64.176.79:30303,enode://b6a3137d3a36ef37c4d31843775a9dc293f41bcbde33b6309c80b1771b6634827cd188285136a57474427bd8845adc2f6fe2e0b106bd58d14795b08910b9c326@34.64.181.70:30303,enode://0b6633300614bc2b9749aee0cace7a091ec5348762aee7b1d195f7616d03a9409019d9bef336624bab72e0d069cd4cf0b0de6fbbf53f04f6b6e4c5b39c6bdca6@34.64.39.31:30303,enode://552b001abebb5805fcd734ad367cd05d9078d18f23ec598d7165460fadcfc51116ad95c418f7ea9a141aa8cbc496c8bea3322b67a5de0d3380f11aab1a797513@34.64.183.158:30303,enode://5b037f66099d5ded86eb7e1619f6d06ceb15609e8cc345ced22a4772b06178004e1490a3cd32fd1222789de4c6e4021c2d648a3d750f6d5323e64b771bbd8de7@34.87.142.180:30303,enode://846db253c53753d3ea1197aec296306dc84c25f3afdf142b65cb0fe0f984de55072daa3bbf05a9aea046a38a2292403137b6eafefd5646fcf62120b74e3b898d@34.142.170.110:30303,enode://64b7f6ee9bcd942ad4949c70f2077627f078a057dfd930e6e904e12643d8952f5ae87c91e24559765393f244a72c9d5c011d7d5176e59191d38f315db85a20f5@34.126.161.16:30303,enode://cf4d19bfb8ec507427ec882bac0bac85a0c8c9ddaa0ec91b773bb614e5e09d107cd9fbe323b96f62f31c493f8f42cc5495c18b87c08560c5dea1dfd25256dcf6@35.247.162.2:30303,enode://bb7e44178543431feac8f0ee3827056b7b84d8235b802a8bdbbcd4939dab7f7dd2579ff577a38b002bb0139792af67abd2dd5c9f4f85b8da6e914fa76dca82bc@35.198.150.35:30303,enode://8fef1f5df45e7b31be00a21e1da5665d5a5f5bf4c379086b843f03eade941bdd157f08c95b31880c492577edb9a9b185df7191eaebf54ab06d5bd683b289f3af@34.107.7.241:30303,enode://ce9c87cfe089f6811d26c96913fa3ec10b938d9017fc6246684c74a33679ee34ceca9447180fb509e37bf2b706c2877a82085d34bfd83b5b520ee1288b0fc32f@35.198.109.49:30303,enode://713657eb6a53feadcbc47e634ad557326a51eb6818a3e19a00a8111492f50a666ccbf2f5d334d247ecf941e68d242ef5c3b812b63c44d381ef11f79c2cdb45c7@34.141.15.100:30303,enode://d071fa740e063ce1bb9cdc2b7937baeff6dc4000f91588d730a731c38a6ff0d4015814812c160fab8695e46f74b9b618735368ea2f16db4d785f16d29b3fb7b0@35.203.2.210:30303,enode://ffc452fe451a2e5f89fe634744aea334d92dcd30d881b76209d2db7dbf4b7ee047e7c69a5bb1633764d987a7441d9c4bc57ccdbfd6442a2f860bf953bc89a9b9@34.152.50.224:30303,enode://da94328302a1d1422209d1916744e90b6095a48b2340dcec39b22002c098bb4d58a880dab98eb26edf03fa4705d1b62f99a8c5c14e6666e4726b6d3066d8a4d7@34.95.61.106:30303,enode://19c7671a4844699b481e81a5bcfe7bafc7fefa953c16ebbe1951b1046371e73839e9058de6b7d3c934318fe7e7233dde3621c1c1018eb8b294ea3d4516147150@35.203.82.137:30303
command:
- --bootnodes=enode://0c5a4a3c0e81fce2974e4d317d88df783731183d534325e32e0fdf8f4b119d7889fa254d3a38890606ec300d744e2aa9c87099a4a032f5c94efe53f3fcdfecfe@34.64.176.79:30303,enode://b6a3137d3a36ef37c4d31843775a9dc293f41bcbde33b6309c80b1771b6634827cd188285136a57474427bd8845adc2f6fe2e0b106bd58d14795b08910b9c326@34.64.181.70:30303,enode://0b6633300614bc2b9749aee0cace7a091ec5348762aee7b1d195f7616d03a9409019d9bef336624bab72e0d069cd4cf0b0de6fbbf53f04f6b6e4c5b39c6bdca6@34.64.39.31:30303,enode://552b001abebb5805fcd734ad367cd05d9078d18f23ec598d7165460fadcfc51116ad95c418f7ea9a141aa8cbc496c8bea3322b67a5de0d3380f11aab1a797513@34.64.183.158:30303,enode://5b037f66099d5ded86eb7e1619f6d06ceb15609e8cc345ced22a4772b06178004e1490a3cd32fd1222789de4c6e4021c2d648a3d750f6d5323e64b771bbd8de7@34.87.142.180:30303,enode://846db253c53753d3ea1197aec296306dc84c25f3afdf142b65cb0fe0f984de55072daa3bbf05a9aea046a38a2292403137b6eafefd5646fcf62120b74e3b898d@34.142.170.110:30303,enode://64b7f6ee9bcd942ad4949c70f2077627f078a057dfd930e6e904e12643d8952f5ae87c91e24559765393f244a72c9d5c011d7d5176e59191d38f315db85a20f5@34.126.161.16:30303,enode://cf4d19bfb8ec507427ec882bac0bac85a0c8c9ddaa0ec91b773bb614e5e09d107cd9fbe323b96f62f31c493f8f42cc5495c18b87c08560c5dea1dfd25256dcf6@35.247.162.2:30303,enode://bb7e44178543431feac8f0ee3827056b7b84d8235b802a8bdbbcd4939dab7f7dd2579ff577a38b002bb0139792af67abd2dd5c9f4f85b8da6e914fa76dca82bc@35.198.150.35:30303,enode://8fef1f5df45e7b31be00a21e1da5665d5a5f5bf4c379086b843f03eade941bdd157f08c95b31880c492577edb9a9b185df7191eaebf54ab06d5bd683b289f3af@34.107.7.241:30303,enode://ce9c87cfe089f6811d26c96913fa3ec10b938d9017fc6246684c74a33679ee34ceca9447180fb509e37bf2b706c2877a82085d34bfd83b5b520ee1288b0fc32f@35.198.109.49:30303,enode://713657eb6a53feadcbc47e634ad557326a51eb6818a3e19a00a8111492f50a666ccbf2f5d334d247ecf941e68d242ef5c3b812b63c44d381ef11f79c2cdb45c7@34.141.15.100:30303,enode://d071fa740e063ce1bb9cdc2b7937baeff6dc4000f91588d730a731c38a6ff0d4015814812c160fab8695e46f74b9b618735368ea2f16db4d785f16d29b3fb7b0@35.203.2.210:30303,enode://ffc452fe451a2e5f89fe634744aea334d92dcd30d881b76209d2db7dbf4b7ee047e7c69a5bb1633764d987a7441d9c4bc57ccdbfd6442a2f860bf953bc89a9b9@34.152.50.224:30303,enode://da94328302a1d1422209d1916744e90b6095a48b2340dcec39b22002c098bb4d58a880dab98eb26edf03fa4705d1b62f99a8c5c14e6666e4726b6d3066d8a4d7@34.95.61.106:30303,enode://19c7671a4844699b481e81a5bcfe7bafc7fefa953c16ebbe1951b1046371e73839e9058de6b7d3c934318fe7e7233dde3621c1c1018eb8b294ea3d4516147150@35.203.82.137:30303
- --datadir=/root/.ethereum
- --db.engine=pebble
- --gcmode=full
- --maxpeers=50
- --metrics
- --metrics.addr=0.0.0.0
- --metrics.port=6060
- --nat=extip:${IP}
- --port=11562
- --rpc.gascap=600000000
- --rpc.txfeecap=0
- --state.scheme=path
- --syncmode=snap
- --http
- --http.addr=0.0.0.0
- --http.api=eth,net,web3,debug,admin,txpool,engine
- --http.port=8545
- --http.vhosts=*
- --ws
- --ws.addr=0.0.0.0
- --ws.api=eth,net,web3,debug,admin,txpool,engine
- --ws.origins=*
- --ws.port=8545
- --authrpc.addr=0.0.0.0
- --authrpc.jwtsecret=/jwtsecret
- --authrpc.vhosts=*
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${BERACHAIN_MAINNET_GETH_PRUNED_PEBBLE_PATH_DATA:-berachain-mainnet-geth-pruned-pebble-path}:/root/.ethereum
- .jwtsecret:/jwtsecret:ro
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=true
- prometheus-scrape.port=6060
- prometheus-scrape.path=/debug/metrics/prometheus
- traefik.enable=true
- traefik.http.middlewares.berachain-mainnet-geth-pruned-pebble-path-stripprefix.stripprefix.prefixes=/berachain-mainnet-geth
- traefik.http.services.berachain-mainnet-geth-pruned-pebble-path.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.berachain-mainnet-geth-pruned-pebble-path.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.berachain-mainnet-geth-pruned-pebble-path.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.berachain-mainnet-geth-pruned-pebble-path.rule=Host(`$DOMAIN`) && (Path(`/berachain-mainnet-geth`) || Path(`/berachain-mainnet-geth/`))}
- ${NO_SSL:+traefik.http.routers.berachain-mainnet-geth-pruned-pebble-path.rule=Path(`/berachain-mainnet-geth`) || Path(`/berachain-mainnet-geth/`)}
- traefik.http.routers.berachain-mainnet-geth-pruned-pebble-path.middlewares=berachain-mainnet-geth-pruned-pebble-path-stripprefix, ipallowlist
berachain-mainnet-geth-node:
build:
context: ./berachain
dockerfile: beacon-kit.Dockerfile
args:
BEACONKIT_VERSION: ${BERACHAIN_MAINNET_BEACON_KIT_VERSION:-v1.3.4}
BEACONKIT_IMAGE: ${BERACHAIN_MAINNET_BEACON_KIT_IMAGE:-ghcr.io/berachain/beacon-kit}
ports:
- 16562:16562
- 16562:16562/udp
environment:
- AUTH_RPC=http://berachain-mainnet-geth:8551
- CHAINID=80094
- CHAINNAME=mainnet
- CHAIN_SPEC=mainnet
- IP=${IP}
- MONIKER=d${DOMAIN:-local}
- P2P_PORT=16562
restart: unless-stopped
networks:
- chains
volumes:
- ${BERACHAIN_MAINNET_GETH_PRUNED_PEBBLE_PATH__BEACON_KIT_DATA:-berachain-mainnet-geth-pruned-pebble-path_beacon-kit}:/root/.beacond/data
- .jwtsecret:/jwtsecret:ro
- berachain-mainnet-geth-pruned-pebble-path_config:/root/.beacond/config
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=false
volumes:
berachain-mainnet-geth-pruned-pebble-path:
berachain-mainnet-geth-pruned-pebble-path_beacon-kit:
berachain-mainnet-geth-pruned-pebble-path_config:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: berachain
method-groups:
enabled:
- debug
- filter
methods:
disabled:
# not compatible with path state scheme
- name: debug_traceBlockByHash
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
# standard geth only
- name: debug_getRawBlock
- name: debug_getRawTransaction
- name: debug_getRawReceipts
- name: debug_getRawHeader
- name: debug_getBadBlocks
# non standard geth only slightly dangerous
- name: debug_intermediateRoots
- name: debug_dumpBlock
# standard geth and erigon
- name: debug_accountRange
- name: debug_getModifiedAccountsByNumber
- name: debug_getModifiedAccountsByHash
# non standard geth and erigon
- name: eth_getRawTransactionByHash
- name: eth_getRawTransactionByBlockHashAndIndex
...


@@ -1,205 +0,0 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# You should back up the peers of your beacond node frequently.
# docker exec -it rpc-berachain-bartio-reth-node-1 curl http://localhost:26657/net_info | jq -r '
# .result.peers[]
# | "\(.node_info.id)@\(.remote_ip):\(.node_info.listen_addr | split(":") | last)"
#' | paste -sd,
# and then set the PERSISTENT_PEERS environment variable for the beacon node which will be inserted into the config toml by the init script...
# SEEDS are published in the official repository https://github.com/berachain/beacon-kit/tree/main/testing/networks/80084/cl-seeds.txt
# if that file is found then the init script will merge them into any configured seeds from the environment variable.
# If the environment variable is empty and the file is found then the init script will use the seeds from the config.toml file that was
# downloaded during initialization from the repo above.
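# The command above yields a comma-separated peer list; compose files that support it read it from an
# env var (for example, the bepolia definition uses ${BERACHAIN_BEPOLIA_BEACON_KIT_PEERS}), so an .env
# entry like the following works (node ids and IPs below are placeholders):
#   BERACHAIN_BEPOLIA_BEACON_KIT_PEERS=<node_id_1>@<ip_1>:26656,<node_id_2>@<ip_2>:26656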
#
# One odd thing I ran into: after a fork of the chain I had to delete /var/lib/docker/volumes/rpc_berachain-bepolia-reth-archive-trace_config/_data/genesis.json
# to trigger a re-download of the genesis.json file and a re-init of the node datadir.
#
# It may also be necessary to update the static peers and bootnodes from the repository to get it going again.
#
# Frequently reth complains that it never sees a beacon client pushing new blocks to sync to, while the beacon client
# complains that reth is still syncing so it can't push new block hashes. A force-recreate helps.
#
# This is honestly the flakiest construct of all the chains in this repository: upstream expects you to run their reference setup,
# git pull to update, and grab a new chaindata snapshot if something doesn't work.
#
# Sometimes the genesis has to be updated; it is passed as a parameter to the reth node.
# A force-recreate after a chain fork should do the trick.
#
# don't forget to docker compose build from time to time...
#
# ETH_GENESIS_URL="https://raw.githubusercontent.com/berachain/beacon-kit/main/testing/networks/$CHAINID/eth-genesis.json"
# curl -sL "$ETH_GENESIS_URL" -o "$CONFIG_DIR/eth-genesis.json"
#
# Something like this sometimes needs to be done when there is a fork of the chain; it may or may not be automated.
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:berachain/reth/berachain-bartio-reth-archive-trace.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/berachain-bartio-reth \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
berachain-bartio-reth:
image: ${BERACHAIN_RETH_IMAGE:-ghcr.io/berachain/bera-reth}:${BERACHAIN_BARTIO_RETH_VERSION:-v1.3.1}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
memlock: -1 # Disable memory locking limits (for memory-mapped DBs like MDBX)
user: root
ports:
- 10527:10527
- 10527:10527/udp
expose:
- 8545
- 9001
- 8551
environment:
- BOOTNODES=enode://0401e494dbd0c84c5c0f72adac5985d2f2525e08b68d448958aae218f5ac8198a80d1498e0ebec2ce38b1b18d6750f6e61a56b4614c5a6c6cf0981c39aed47dc@34.159.32.127:30303,enode://9b6c1eb143c9e3af0c7283262a9a38fe8bf844114b1f304673c2ac1c23e6bccfdaa8f4e9cb8c460bded495933fd92eeff30e6ab2e0538b56e249beea2c512906@35.234.88.149:30303,enode://e9675164b5e17b9d9edf0cc2bd79e6b6f487200c74d1331c220abb5b8ee80c2eefbf18213989585e9d0960683e819542e11d4eefb5f2b4019e1e49f9fd8fff18@berav2-bootnode.staketab.org:30303,enode://16e21c20f670d9e88570b8d3c580c7ef54f3515bffab864f1f3047c4125c3e7d98e782b990165808363a1b54ddca51c9dafaca9d6cd7ecca93e2e809ba522cae@berachain-testnet-v2.enode.l0vd.com:30304,enode://e31aa249638083d34817eed2b499ccd4b0718a332f0ea530e3062e13f624cb03a7d6b6e0460193ee87b5fc12e73a726070a3126ef53492ffbdc5e6c102f6dfb3@34.64.198.56:30303,enode://3f2f85e2e711f198fb7324b74fab6a0599b2534774f3aa26241dbbabe870b650574324da01aa98ee24ce97c8d76362a2db03034a6ddff43119ccfdc269663cbf@34.47.79.13:30303,enode://7a2f67d22b12e10c6ba9cd951866dda6471604be5fbd5102217dbad1cc56e590befd2009ecc99958a468a5b8e0dc28e14d9b6822491719c93199be6aa0319077@34.124.220.31:30303,enode://a96aac0b81c7e75fecc2ae613eaf13b27b2aaf3d46a90db904f94797d1746aa31e6593ae4cd476f81d5c6d1d2228ca60c885727978c369586c38871c63a330ee@35.240.182.27:30303,enode://dc44744074ac2dd76db0e0f9d95eb86cd558f6ba75e4a4af1303f2259624c8ce041198f976862a284165253b6dc6b2fa91b995cbca3ef2683879b6247e05e553@34.95.61.239:30303,enode://bf5364e1cf7ecd11646ccaea5c06b56622c04d52200d9cd141e01db9c9661237ceebecde1616e66e390a968ffd1c07e027531cad23044517b7bf36caa8b97f5f@34.152.41.26:30303,enode://f61e51c18fdb6ddf5e520209c53a0e60b2864d168eb0d3c02541050de9fee003b61818c7f70b32b61adee082280e7de4811fd3da47d87c87b3d17bf44e3bb76c@beacond-testnet.blacknodes.net:30303,enode://f24b54da77cf604e92aeb5ee5e79401fd3e66111563ca630e72330ccab6f385ccbbde5eba4577ee7bfb5e83347263d0e4cad042fd4c10468d0e38906fc82ba31@bera-testnet-seeds.nodeinfra.com:30303,enode://2e44e8e12b4666632dd2d4d555cfca5ceac4ca6cf6f45c46fc0ba27d1f9f7578dd598c74ae8b4189430a85b15d103c215a63cdbeafd41895fee1405a094fa77a@135.125.188.10:30303
- CHAINID=80084
- CHAINNAME=bartio
- CHAIN_SPEC=testnet
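# The custom entrypoint below waits until /config/eth-genesis.json exists in the shared config volume
# (typically written there by the beacon-kit init script) before exec'ing bera-reth; the SIGTERM trap
# lets "docker compose stop" interrupt the wait cleanly.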
entrypoint: [/bin/bash, -c, "trap 'exit 0' SIGTERM; while [ ! -f /config/eth-genesis.json ] && [ ! -f /tmp/stop ]; do sleep 1 & wait $!; done; exec bera-reth node \"$@\"", --]
command:
- --bootnodes=enode://0401e494dbd0c84c5c0f72adac5985d2f2525e08b68d448958aae218f5ac8198a80d1498e0ebec2ce38b1b18d6750f6e61a56b4614c5a6c6cf0981c39aed47dc@34.159.32.127:30303,enode://9b6c1eb143c9e3af0c7283262a9a38fe8bf844114b1f304673c2ac1c23e6bccfdaa8f4e9cb8c460bded495933fd92eeff30e6ab2e0538b56e249beea2c512906@35.234.88.149:30303,enode://e9675164b5e17b9d9edf0cc2bd79e6b6f487200c74d1331c220abb5b8ee80c2eefbf18213989585e9d0960683e819542e11d4eefb5f2b4019e1e49f9fd8fff18@berav2-bootnode.staketab.org:30303,enode://16e21c20f670d9e88570b8d3c580c7ef54f3515bffab864f1f3047c4125c3e7d98e782b990165808363a1b54ddca51c9dafaca9d6cd7ecca93e2e809ba522cae@berachain-testnet-v2.enode.l0vd.com:30304,enode://e31aa249638083d34817eed2b499ccd4b0718a332f0ea530e3062e13f624cb03a7d6b6e0460193ee87b5fc12e73a726070a3126ef53492ffbdc5e6c102f6dfb3@34.64.198.56:30303,enode://3f2f85e2e711f198fb7324b74fab6a0599b2534774f3aa26241dbbabe870b650574324da01aa98ee24ce97c8d76362a2db03034a6ddff43119ccfdc269663cbf@34.47.79.13:30303,enode://7a2f67d22b12e10c6ba9cd951866dda6471604be5fbd5102217dbad1cc56e590befd2009ecc99958a468a5b8e0dc28e14d9b6822491719c93199be6aa0319077@34.124.220.31:30303,enode://a96aac0b81c7e75fecc2ae613eaf13b27b2aaf3d46a90db904f94797d1746aa31e6593ae4cd476f81d5c6d1d2228ca60c885727978c369586c38871c63a330ee@35.240.182.27:30303,enode://dc44744074ac2dd76db0e0f9d95eb86cd558f6ba75e4a4af1303f2259624c8ce041198f976862a284165253b6dc6b2fa91b995cbca3ef2683879b6247e05e553@34.95.61.239:30303,enode://bf5364e1cf7ecd11646ccaea5c06b56622c04d52200d9cd141e01db9c9661237ceebecde1616e66e390a968ffd1c07e027531cad23044517b7bf36caa8b97f5f@34.152.41.26:30303,enode://f61e51c18fdb6ddf5e520209c53a0e60b2864d168eb0d3c02541050de9fee003b61818c7f70b32b61adee082280e7de4811fd3da47d87c87b3d17bf44e3bb76c@beacond-testnet.blacknodes.net:30303,enode://f24b54da77cf604e92aeb5ee5e79401fd3e66111563ca630e72330ccab6f385ccbbde5eba4577ee7bfb5e83347263d0e4cad042fd4c10468d0e38906fc82ba31@bera-testnet-seeds.nodeinfra.com:30303,enode://2e44e8e12b4666632dd2d4d555cfca5ceac4ca6cf6f45c46fc0ba27d1f9f7578dd598c74ae8b4189430a85b15d103c215a63cdbeafd41895fee1405a094fa77a@135.125.188.10:30303
- --chain=/config/eth-genesis.json
- --datadir=/root/.local/share/reth
- --discovery.port=10527
- --engine.always-process-payload-attributes-on-canonical-head
- --engine.cross-block-cache-size=${BERACHAIN_BARTIO_RETH_STATE_CACHE:-4096}
- --engine.memory-block-buffer-target=0
- --engine.persistence-threshold=0
- --max-inbound-peers=50
- --max-outbound-peers=50
- --metrics=0.0.0.0:9001
- --nat=extip:${IP}
- --port=10527
- --rpc-cache.max-blocks=10000
- --rpc-cache.max-concurrent-db-requests=2048
- --rpc.gascap=600000000
- --rpc.max-blocks-per-filter=0
- --rpc.max-connections=50000
- --rpc.max-logs-per-response=0
- --http
- --http.addr=0.0.0.0
- --http.api=admin,debug,eth,net,trace,txpool,web3,rpc,reth,ots,flashbots,mev
- --http.corsdomain=*
- --http.port=8545
- --ws
- --ws.addr=0.0.0.0
- --ws.api=admin,debug,eth,net,trace,txpool,web3,rpc,reth,ots,flashbots,mev
- --ws.origins=*
- --ws.port=8545
- --authrpc.addr=0.0.0.0
- --authrpc.jwtsecret=/jwtsecret
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${BERACHAIN_BARTIO_RETH_ARCHIVE_TRACE_DATA:-berachain-bartio-reth-archive-trace}:/root/.local/share/reth
- .jwtsecret:/jwtsecret:ro
- /slowdisk:/slowdisk
- berachain-bartio-reth-archive-trace_config:/config
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=true
- prometheus-scrape.port=9001
- prometheus-scrape.path=/metrics
- traefik.enable=true
- traefik.http.middlewares.berachain-bartio-reth-archive-trace-stripprefix.stripprefix.prefixes=/berachain-bartio-reth
- traefik.http.services.berachain-bartio-reth-archive-trace.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.berachain-bartio-reth-archive-trace.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.berachain-bartio-reth-archive-trace.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.berachain-bartio-reth-archive-trace.rule=Host(`$DOMAIN`) && (Path(`/berachain-bartio-reth`) || Path(`/berachain-bartio-reth/`))}
- ${NO_SSL:+traefik.http.routers.berachain-bartio-reth-archive-trace.rule=Path(`/berachain-bartio-reth`) || Path(`/berachain-bartio-reth/`)}
- traefik.http.routers.berachain-bartio-reth-archive-trace.middlewares=berachain-bartio-reth-archive-trace-stripprefix, ipallowlist
shm_size: 2gb
berachain-bartio-reth-node:
build:
context: ./berachain
dockerfile: beacon-kit.Dockerfile
args:
BEACONKIT_VERSION: ${BERACHAIN_BARTIO_BEACON_KIT_VERSION:-v1.3.4}
BEACONKIT_IMAGE: ${BERACHAIN_BARTIO_BEACON_KIT_IMAGE:-ghcr.io/berachain/beacon-kit}
ports:
- 15527:15527
- 15527:15527/udp
environment:
- AUTH_RPC=http://berachain-bartio-reth:8551
- CHAINID=80084
- CHAINNAME=bartio
- CHAIN_SPEC=testnet
- IP=${IP}
- MONIKER=d${DOMAIN:-local}
- P2P_PORT=15527
restart: unless-stopped
networks:
- chains
volumes:
- ${BERACHAIN_BARTIO_RETH_ARCHIVE_TRACE__BEACON_KIT_DATA:-berachain-bartio-reth-archive-trace_beacon-kit}:/root/.beacond/data
- .jwtsecret:/jwtsecret:ro
- berachain-bartio-reth-archive-trace_config:/root/.beacond/config
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=false
volumes:
berachain-bartio-reth-archive-trace:
berachain-bartio-reth-archive-trace_beacon-kit:
berachain-bartio-reth-archive-trace_config:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: bartio
method-groups:
enabled:
- debug
- filter
- trace
methods:
disabled:
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
...


@@ -1,206 +0,0 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# You should back up the peers of your beacond node frequently.
# docker exec -it rpc-berachain-bartio-reth-pruned-node-1 curl http://localhost:26657/net_info | jq -r '
# .result.peers[]
# | "\(.node_info.id)@\(.remote_ip):\(.node_info.listen_addr | split(":") | last)"
#' | paste -sd,
# and then set the PERSISTENT_PEERS environment variable for the beacon node which will be inserted into the config toml by the init script...
# SEEDS are published in the official repository https://github.com/berachain/beacon-kit/tree/main/testing/networks/80084/cl-seeds.txt
# if that file is found then the init script will merge them into any configured seeds from the environment variable.
# If the environment variable is empty and the file is found then the init script will use the seeds from the config.toml file that was
# downloaded during initialization from the repo above.
#
# One odd thing I ran into: after a fork of the chain I had to delete /var/lib/docker/volumes/rpc_berachain-bepolia-reth-archive-trace_config/_data/genesis.json
# to trigger a re-download of the genesis.json file and a re-init of the node datadir.
#
# It may also be necessary to update the static peers and bootnodes from the repository to get it going again.
#
# Frequently reth complains that it never sees a beacon client pushing new blocks to sync to, while the beacon client
# complains that reth is still syncing so it can't push new block hashes. A force-recreate helps.
#
# This is honestly the flakiest construct of all the chains in this repository: upstream expects you to run their reference setup,
# git pull to update, and grab a new chaindata snapshot if something doesn't work.
#
# Sometimes the genesis has to be updated; it is passed as a parameter to the reth node.
# A force-recreate after a chain fork should do the trick.
#
# don't forget to docker compose build from time to time...
#
# ETH_GENESIS_URL="https://raw.githubusercontent.com/berachain/beacon-kit/main/testing/networks/$CHAINID/eth-genesis.json"
# curl -sL "$ETH_GENESIS_URL" -o "$CONFIG_DIR/eth-genesis.json"
#
# Something like this sometimes needs to be done when there is a fork of the chain; it may or may not be automated.
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:berachain/reth/berachain-bartio-reth-pruned-trace.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/berachain-bartio-reth-pruned \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
berachain-bartio-reth-pruned:
image: ${BERACHAIN_RETH_IMAGE:-ghcr.io/berachain/bera-reth}:${BERACHAIN_BARTIO_RETH_VERSION:-v1.3.1}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
memlock: -1 # Disable memory locking limits (for memory-mapped DBs like MDBX)
user: root
ports:
- 14467:14467
- 14467:14467/udp
expose:
- 8545
- 9001
- 8551
environment:
- BOOTNODES=enode://0401e494dbd0c84c5c0f72adac5985d2f2525e08b68d448958aae218f5ac8198a80d1498e0ebec2ce38b1b18d6750f6e61a56b4614c5a6c6cf0981c39aed47dc@34.159.32.127:30303,enode://9b6c1eb143c9e3af0c7283262a9a38fe8bf844114b1f304673c2ac1c23e6bccfdaa8f4e9cb8c460bded495933fd92eeff30e6ab2e0538b56e249beea2c512906@35.234.88.149:30303,enode://e9675164b5e17b9d9edf0cc2bd79e6b6f487200c74d1331c220abb5b8ee80c2eefbf18213989585e9d0960683e819542e11d4eefb5f2b4019e1e49f9fd8fff18@berav2-bootnode.staketab.org:30303,enode://16e21c20f670d9e88570b8d3c580c7ef54f3515bffab864f1f3047c4125c3e7d98e782b990165808363a1b54ddca51c9dafaca9d6cd7ecca93e2e809ba522cae@berachain-testnet-v2.enode.l0vd.com:30304,enode://e31aa249638083d34817eed2b499ccd4b0718a332f0ea530e3062e13f624cb03a7d6b6e0460193ee87b5fc12e73a726070a3126ef53492ffbdc5e6c102f6dfb3@34.64.198.56:30303,enode://3f2f85e2e711f198fb7324b74fab6a0599b2534774f3aa26241dbbabe870b650574324da01aa98ee24ce97c8d76362a2db03034a6ddff43119ccfdc269663cbf@34.47.79.13:30303,enode://7a2f67d22b12e10c6ba9cd951866dda6471604be5fbd5102217dbad1cc56e590befd2009ecc99958a468a5b8e0dc28e14d9b6822491719c93199be6aa0319077@34.124.220.31:30303,enode://a96aac0b81c7e75fecc2ae613eaf13b27b2aaf3d46a90db904f94797d1746aa31e6593ae4cd476f81d5c6d1d2228ca60c885727978c369586c38871c63a330ee@35.240.182.27:30303,enode://dc44744074ac2dd76db0e0f9d95eb86cd558f6ba75e4a4af1303f2259624c8ce041198f976862a284165253b6dc6b2fa91b995cbca3ef2683879b6247e05e553@34.95.61.239:30303,enode://bf5364e1cf7ecd11646ccaea5c06b56622c04d52200d9cd141e01db9c9661237ceebecde1616e66e390a968ffd1c07e027531cad23044517b7bf36caa8b97f5f@34.152.41.26:30303,enode://f61e51c18fdb6ddf5e520209c53a0e60b2864d168eb0d3c02541050de9fee003b61818c7f70b32b61adee082280e7de4811fd3da47d87c87b3d17bf44e3bb76c@beacond-testnet.blacknodes.net:30303,enode://f24b54da77cf604e92aeb5ee5e79401fd3e66111563ca630e72330ccab6f385ccbbde5eba4577ee7bfb5e83347263d0e4cad042fd4c10468d0e38906fc82ba31@bera-testnet-seeds.nodeinfra.com:30303,enode://2e44e8e12b4666632dd2d4d555cfca5ceac4ca6cf6f45c46fc0ba27d1f9f7578dd598c74ae8b4189430a85b15d103c215a63cdbeafd41895fee1405a094fa77a@135.125.188.10:30303
- CHAINID=80084
- CHAINNAME=bartio
- CHAIN_SPEC=testnet
entrypoint: [/bin/bash, -c, "trap 'exit 0' SIGTERM; while [ ! -f /config/eth-genesis.json ] && [ ! -f /tmp/stop ]; do sleep 1 & wait $!; done; exec bera-reth node \"$@\"", --]
command:
- --bootnodes=enode://0401e494dbd0c84c5c0f72adac5985d2f2525e08b68d448958aae218f5ac8198a80d1498e0ebec2ce38b1b18d6750f6e61a56b4614c5a6c6cf0981c39aed47dc@34.159.32.127:30303,enode://9b6c1eb143c9e3af0c7283262a9a38fe8bf844114b1f304673c2ac1c23e6bccfdaa8f4e9cb8c460bded495933fd92eeff30e6ab2e0538b56e249beea2c512906@35.234.88.149:30303,enode://e9675164b5e17b9d9edf0cc2bd79e6b6f487200c74d1331c220abb5b8ee80c2eefbf18213989585e9d0960683e819542e11d4eefb5f2b4019e1e49f9fd8fff18@berav2-bootnode.staketab.org:30303,enode://16e21c20f670d9e88570b8d3c580c7ef54f3515bffab864f1f3047c4125c3e7d98e782b990165808363a1b54ddca51c9dafaca9d6cd7ecca93e2e809ba522cae@berachain-testnet-v2.enode.l0vd.com:30304,enode://e31aa249638083d34817eed2b499ccd4b0718a332f0ea530e3062e13f624cb03a7d6b6e0460193ee87b5fc12e73a726070a3126ef53492ffbdc5e6c102f6dfb3@34.64.198.56:30303,enode://3f2f85e2e711f198fb7324b74fab6a0599b2534774f3aa26241dbbabe870b650574324da01aa98ee24ce97c8d76362a2db03034a6ddff43119ccfdc269663cbf@34.47.79.13:30303,enode://7a2f67d22b12e10c6ba9cd951866dda6471604be5fbd5102217dbad1cc56e590befd2009ecc99958a468a5b8e0dc28e14d9b6822491719c93199be6aa0319077@34.124.220.31:30303,enode://a96aac0b81c7e75fecc2ae613eaf13b27b2aaf3d46a90db904f94797d1746aa31e6593ae4cd476f81d5c6d1d2228ca60c885727978c369586c38871c63a330ee@35.240.182.27:30303,enode://dc44744074ac2dd76db0e0f9d95eb86cd558f6ba75e4a4af1303f2259624c8ce041198f976862a284165253b6dc6b2fa91b995cbca3ef2683879b6247e05e553@34.95.61.239:30303,enode://bf5364e1cf7ecd11646ccaea5c06b56622c04d52200d9cd141e01db9c9661237ceebecde1616e66e390a968ffd1c07e027531cad23044517b7bf36caa8b97f5f@34.152.41.26:30303,enode://f61e51c18fdb6ddf5e520209c53a0e60b2864d168eb0d3c02541050de9fee003b61818c7f70b32b61adee082280e7de4811fd3da47d87c87b3d17bf44e3bb76c@beacond-testnet.blacknodes.net:30303,enode://f24b54da77cf604e92aeb5ee5e79401fd3e66111563ca630e72330ccab6f385ccbbde5eba4577ee7bfb5e83347263d0e4cad042fd4c10468d0e38906fc82ba31@bera-testnet-seeds.nodeinfra.com:30303,enode://2e44e8e12b4666632dd2d4d555cfca5ceac4ca6cf6f45c46fc0ba27d1f9f7578dd598c74ae8b4189430a85b15d103c215a63cdbeafd41895fee1405a094fa77a@135.125.188.10:30303
- --chain=/config/eth-genesis.json
- --datadir=/root/.local/share/reth
- --discovery.port=14467
- --engine.always-process-payload-attributes-on-canonical-head
- --engine.cross-block-cache-size=${BERACHAIN_BARTIO_RETH_STATE_CACHE:-4096}
- --engine.memory-block-buffer-target=0
- --engine.persistence-threshold=0
- --full
- --max-inbound-peers=50
- --max-outbound-peers=50
- --metrics=0.0.0.0:9001
- --nat=extip:${IP}
- --port=14467
- --rpc-cache.max-blocks=10000
- --rpc-cache.max-concurrent-db-requests=2048
- --rpc.gascap=600000000
- --rpc.max-blocks-per-filter=0
- --rpc.max-connections=50000
- --rpc.max-logs-per-response=0
- --http
- --http.addr=0.0.0.0
- --http.api=admin,debug,eth,net,trace,txpool,web3,rpc,reth,ots,flashbots,mev
- --http.corsdomain=*
- --http.port=8545
- --ws
- --ws.addr=0.0.0.0
- --ws.api=admin,debug,eth,net,trace,txpool,web3,rpc,reth,ots,flashbots,mev
- --ws.origins=*
- --ws.port=8545
- --authrpc.addr=0.0.0.0
- --authrpc.jwtsecret=/jwtsecret
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${BERACHAIN_BARTIO_RETH_PRUNED_TRACE_DATA:-berachain-bartio-reth-pruned-trace}:/root/.local/share/reth
- .jwtsecret:/jwtsecret:ro
- /slowdisk:/slowdisk
- berachain-bartio-reth-pruned-trace_config:/config
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=true
- prometheus-scrape.port=9001
- prometheus-scrape.path=/metrics
- traefik.enable=true
- traefik.http.middlewares.berachain-bartio-reth-pruned-trace-stripprefix.stripprefix.prefixes=/berachain-bartio-reth-pruned
- traefik.http.services.berachain-bartio-reth-pruned-trace.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.berachain-bartio-reth-pruned-trace.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.berachain-bartio-reth-pruned-trace.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.berachain-bartio-reth-pruned-trace.rule=Host(`$DOMAIN`) && (Path(`/berachain-bartio-reth-pruned`) || Path(`/berachain-bartio-reth-pruned/`))}
- ${NO_SSL:+traefik.http.routers.berachain-bartio-reth-pruned-trace.rule=Path(`/berachain-bartio-reth-pruned`) || Path(`/berachain-bartio-reth-pruned/`)}
- traefik.http.routers.berachain-bartio-reth-pruned-trace.middlewares=berachain-bartio-reth-pruned-trace-stripprefix, ipallowlist
shm_size: 2gb
berachain-bartio-reth-pruned-node:
build:
context: ./berachain
dockerfile: beacon-kit.Dockerfile
args:
BEACONKIT_VERSION: ${BERACHAIN_BARTIO_BEACON_KIT_VERSION:-v1.3.4}
BEACONKIT_IMAGE: ${BERACHAIN_BARTIO_BEACON_KIT_IMAGE:-ghcr.io/berachain/beacon-kit}
ports:
- 19467:19467
- 19467:19467/udp
environment:
- AUTH_RPC=http://berachain-bartio-reth-pruned:8551
- CHAINID=80084
- CHAINNAME=bartio
- CHAIN_SPEC=testnet
- IP=${IP}
- MONIKER=d${DOMAIN:-local}
- P2P_PORT=19467
restart: unless-stopped
networks:
- chains
volumes:
- ${BERACHAIN_BARTIO_RETH_PRUNED_TRACE__BEACON_KIT_DATA:-berachain-bartio-reth-pruned-trace_beacon-kit}:/root/.beacond/data
- .jwtsecret:/jwtsecret:ro
- berachain-bartio-reth-pruned-trace_config:/root/.beacond/config
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=false
volumes:
berachain-bartio-reth-pruned-trace:
berachain-bartio-reth-pruned-trace_beacon-kit:
berachain-bartio-reth-pruned-trace_config:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: bartio
method-groups:
enabled:
- debug
- filter
- trace
methods:
disabled:
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
...

Some files were not shown because too many files have changed in this diff.