179 Commits

Author SHA1 Message Date
f6f9d92171 Update versions: besu 26.1.0, haqq v1.9.2, go-wemix w0.10.13, goat testnet3 node v0.4.3
Co-Authored-By: Claude Agent <claude@stakesquid.eu>
2026-02-03 21:10:59 +00:00
goldsquid
e70f42196d fix 2026-02-02 17:59:55 +07:00
aa3ac10893 Revert haqq to v1.9.1 - Docker image v1.9.2 not published
The v1.9.2 release exists on GitHub but the Docker image hasn't been
published to Docker Hub yet.

Co-Authored-By: Claude Agent <claude@stakesquid.eu>
2026-02-01 21:15:17 +00:00
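The revert above is the kind of thing a pre-bump check catches. A minimal sketch, assuming the Docker Hub v2 tags API; the repo/tag names in the example are illustrative:

```shell
# Sketch: check that an image tag is actually published on Docker Hub
# before bumping a compose file. Repo and tag values are illustrative.
tag_exists() {
  repo="$1"; tag="$2"
  # The v2 API answers 200 for a published tag, 404 otherwise;
  # -f makes curl exit non-zero on HTTP errors.
  curl -fsSL "https://hub.docker.com/v2/repositories/${repo}/tags/${tag}" \
    > /dev/null 2>&1
}

# Example: tag_exists haqqnetwork/haqq v1.9.2 || echo "not on Docker Hub yet"
```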
8094518094 Update versions: haqq v1.9.2, dshackle 0.75.5
- haqq client: v1.9.1 -> v1.9.2
- dshackle: 0.75.4 -> 0.75.5
- hashkeychain testnet: sequencer URL update

Co-Authored-By: Claude Agent <claude@stakesquid.eu>
2026-02-01 21:09:37 +00:00
124d19dbb6 Add tempo consensus parameters for RPC nodes
- Add --consensus.signing-key and --consensus.fee-recipient
- Add secrets volume for validator key storage
- Add comment with key generation instructions

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 06:13:06 +00:00
6cb5b12ab0 Update tempo image tag to sha-a1ac033
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 06:08:29 +00:00
a09b33b7a0 Add Tempo network configuration (moderato, testnet)
- Add tempo-moderato-reth-archive-trace and tempo-moderato-reth-pruned-trace
- Add tempo-testnet-reth-archive-trace and tempo-testnet-reth-pruned-trace
- Update compose_registry.json with new endpoints

Chain IDs:
- Tempo Moderato: 42431
- Tempo Testnet (Andantino): 42429

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 06:04:52 +00:00
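eth_chainId reports these IDs in hex, so a quick sanity check against a running node needs the converted form; a one-liner for the two values above:

```shell
# Print the hex form of the chain IDs listed above, as eth_chainId returns them.
for id in 42431 42429; do
  printf '%d -> 0x%x\n' "$id" "$id"
done
```

This prints `42431 -> 0xa5bf` and `42429 -> 0xa5bd`.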
c35fdd0f15 fix(hashkey-testnet): update reference RPC endpoint
The previous endpoint https://hashkeychain-testnet.alt.technology had a
DNS resolution failure, causing sync-status to return "unverified ()".

Updated to the official endpoint https://testnet.hsk.xyz

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 03:52:20 +00:00
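An endpoint swap like this is easy to validate up front. A hedged sketch of the probe sync-status effectively performs; the helper name is illustrative, the endpoint is the one from the commit:

```shell
# Probe an RPC endpoint with eth_chainId; a DNS failure here would reproduce
# the "unverified ()" symptom described above.
probe() {
  curl -fsS -m 10 -X POST "$1" \
    -H 'Content-Type: application/json' \
    --data '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}'
}

# Example: probe https://testnet.hsk.xyz
```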
d814146f13 Update juno (starknet) v0.15.17 -> v0.15.18
Stable release bump for Starknet juno client.

Co-Authored-By: Claude Agent <claude@stakesquid.eu>
2026-01-31 21:10:25 +00:00
goldsquid
39648446a5 add a full sync version 2026-01-31 12:10:23 +07:00
goldsquid
42d9d64dfa fix 2026-01-31 12:03:32 +07:00
goldsquid
2ac216bdfe hashkeychain testnet 2026-01-31 11:57:38 +07:00
goldsquid
bf2f75d1cd fix 2026-01-31 11:28:28 +07:00
goldsquid
376b1a750f fix 2026-01-31 11:26:01 +07:00
goldsquid
eafbb2e2c3 fix 2026-01-31 11:24:14 +07:00
goldsquid
10f429e743 fix 2026-01-31 11:13:56 +07:00
goldsquid
a6e7348b40 some fix 2026-01-31 11:09:28 +07:00
goldsquid
3c20aac136 aztec maybe 2026-01-31 11:00:36 +07:00
goldsquid
3c68c92ecc haqq downgrade 2026-01-31 08:59:11 +07:00
56d7772909 Update erigon3 v3.3.5 -> v3.3.7, dshackle 0.75.3 -> 0.75.4
- Erigon3 for ethereum, gnosis, linea: v3.3.5 -> v3.3.7
- Dshackle: 0.75.3 -> 0.75.4

Co-Authored-By: Claude Agent <claude@stakesquid.eu>
2026-01-30 21:11:18 +00:00
3517b98ef5 Update versions: erigon3 v3.3.5, nimbus v26.1.0, bor 2.5.8, dshackle 0.75.3
- erigon3: v3.3.4 -> v3.3.5 (ethereum, gnosis)
- nimbus: multiarch-v25.12.0 -> multiarch-v26.1.0 (ethereum consensus)
- bor: 2.5.7 -> 2.5.8 (polygon)
- dshackle: 0.75.1 -> 0.75.3

Co-Authored-By: Claude Agent <claude@stakesquid.eu>
2026-01-29 21:11:17 +00:00
6bd7b35ae0 Update versions: dshackle 0.75.1, haqq v1.9.2, rippled 3.1.0
Co-Authored-By: Claude Agent <claude@stakesquid.eu>
2026-01-28 21:11:20 +00:00
5687d74a62 Pin Loki to version 3.4.3 to trigger container restart
Co-Authored-By: Claude Agent <claude@stakesquid.eu>
2026-01-28 13:49:05 +00:00
beacca7986 Enable 7-day log retention for Loki
Co-Authored-By: Claude Agent <claude@stakesquid.eu>
2026-01-28 13:27:19 +00:00
737a8e24a7 Fix superseed mainnet rollup.json and haqq version
- Add granite_time, holocene_time, isthmus_time to superseed mainnet rollup.json
  (fetched from live superseed sequencer)
- Update haqq image from v1.9.2 to v1.9.1 (v1.9.2 doesn't exist on Docker Hub)

Co-Authored-By: Claude Agent <claude@stakesquid.eu>
2026-01-28 03:30:37 +00:00
336adb68e6 Update zircuit geth/node: v1.127.13-beta -> v1.132.6
Co-Authored-By: Claude Agent <claude@stakesquid.eu>
2026-01-27 21:10:37 +00:00
1022f44959 Fix bob-sepolia: remove deprecated da_challenge_contract_address
op-node v1.16.5 no longer accepts the da_challenge_contract_address field
in rollup.json (it was replaced by alt_da in newer OP Stack versions).
Since BOB sepolia doesn't use alt DA (zero address), simply removing the
field fixes the crash loop.

Co-Authored-By: Claude Agent <claude@stakesquid.eu>
2026-01-27 09:58:10 +00:00
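The fix boils down to deleting one key from rollup.json. A minimal sketch assuming jq is available; the field name comes from the commit message, the helper name and file path are illustrative:

```shell
# Remove the deprecated field from a rollup.json without touching anything else.
strip_da_field() {
  f="$1"
  jq 'del(.da_challenge_contract_address)' "$f" > "$f.tmp" && mv "$f.tmp" "$f"
}

# Example: strip_da_field bob/sepolia/rollup.json
```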
f174b0cc61 Update dshackle: 0.74.0 → 0.74.1
Co-Authored-By: Claude Agent <claude@stakesquid.eu>
2026-01-26 21:11:22 +00:00
rob
65919f6c01 Update dshackle: 0.73.0 → 0.74.0
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 05:11:36 +00:00
rob
73d376f589 Update client versions
- agave (solana): v3.1.7 → v3.1.8
- bitcoind (bitcoin-cash): 0.32.6 → 0.32.7
- erigon3 (ethereum, gnosis): v3.3.3 → v3.3.4
- geth/node (mantle.sepolia): v1.4.1 → v1.4.2
- go-wemix: w0.10.11 → w0.10.12
- haqq: v1.9.1 → v1.9.2
- reth (op-stack, ethereum): v1.10.1 → v1.10.2

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 05:08:20 +00:00
rob
c6d33fde72 Update Celo versions: geth celo-v2.1.3, op-node celo-v2.1.1
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-21 03:16:55 +00:00
rob
08e537ee71 Update client versions (cursor-verified)
- juno: v0.15.16 → v0.15.17 (starknet)
- scroll l2geth: v5.10.1 → v5.10.2 (SECURITY FIX)
- reth: v1.9.3 → v1.10.1 (ethereum)
- op-reth: v1.9.3 → v1.10.1 (base/lisk/op/soneium)
- metis dtl: v0.2.5 → v0.2.6
- xlayer geth/node: v0.1.2 → v0.1.3
- solana agave: v3.0.13/v3.1.6 → v3.1.7
- linea geth: v1.16.7 → v1.16.8

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-21 01:45:48 +00:00
rob
6915a759d1 Update taiko-hekla nethermind to 1.36.0, fix linea-sepolia sync mode
- taiko-hekla nethermind: 1.35.8 -> 1.36.0 (security + Taiko fixes)
- linea-sepolia-besu: SNAP -> FULL sync mode (SNAP broken for Linea)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-17 10:27:52 +00:00
rob
a7661930be Fix Linea Sepolia maru config for snapshot sync
- Disable payload-validation-enabled (same as mainnet fix)
- Increase desync-tolerance to 100000 (allow CL/EL sync gap)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-17 05:26:30 +00:00
rob
749ff64f8f Switch Linea Besu from SNAP to FULL sync mode
SNAP sync is broken for Linea - it picks an old pivot block (~24.7M) that
no peers can serve world state for. This causes:
- World state download stuck with 0 pending requests
- Maru unable to push blocks to EL without complete world state
- Node stuck returning block 0 for "latest"

FULL sync executes every block from genesis. It's slower but reliable
and allows maru to drive the sync via engine API.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-17 05:02:06 +00:00
rob
b7fe145fa5 Fix Linea maru: increase desync-tolerance to allow catchup sync
When Besu is behind the CL head (e.g., during initial sync or after restart),
desync-tolerance=0 prevents maru from sending any fork choice updates to Besu.
This causes Besu to remain stuck at its current block.

Increasing desync-tolerance to 100000 allows maru to continue sending blocks
even when Besu is significantly behind, enabling it to catch up.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-17 04:49:52 +00:00
rob
bd6083231f Fix Linea maru: disable payload-validation-enabled to match official config
When payload-validation-enabled is true, maru validates every block against
Besu before sending fork choice updates. If Besu is in an inconsistent state
(e.g., stuck in SNAP sync), this causes maru to stop sending fork choice
updates entirely, preventing Besu from ever syncing.

The official Linea configuration uses payload-validation-enabled = false,
which allows maru to continue sending fork choice updates regardless of
Besu's current state.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-17 04:37:19 +00:00
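Together with the desync-tolerance commit above, the change amounts to two key edits in the maru TOML. A hedged sed sketch; the key names are taken from the commit messages, the helper name and config path are illustrative:

```shell
# Apply the two Linea maru fixes described above to a maru TOML config.
apply_maru_fixes() {
  sed -i \
    -e 's/^payload-validation-enabled *=.*/payload-validation-enabled = false/' \
    -e 's/^desync-tolerance *=.*/desync-tolerance = 100000/' \
    "$1"
}

# Example: apply_maru_fixes linea/maru/config.toml
```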
rob
0a880c3f3f Revert Linea mainnet Besu to SNAP sync mode 2026-01-17 03:41:44 +00:00
rob
1987f07cf8 Change Linea mainnet Besu to CHECKPOINT sync mode 2026-01-17 03:40:54 +00:00
rob
a0f098de79 Change Linea mainnet Besu from SNAP to FULL sync mode 2026-01-17 03:40:05 +00:00
rob
6fb1d76b13 fix(zircuit): update Garfield configs for Sepolia L1 and testnet op_network 2026-01-17 03:02:50 +00:00
rob
e136b0fc52 Update Linea Besu to beta-v4.4-rc7-20260108212219-738a446 2026-01-15 13:40:08 +00:00
rob
607dbe7020 Add bootnodes for Linea Besu - remove network exclusion from template 2026-01-15 13:34:19 +00:00
rob
c2582b0b76 Enable payload-validation for Linea Maru
Set payload-validation-enabled=true in Maru config to ensure
payloads are sent to the execution client. Without this, Maru
doesn't send forkchoice/newPayload calls when EL reports synced
status (even at block 0).

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 13:23:17 +00:00
rob
ec2fb6c883 Add Linea Geth configs with Maru consensus
- Add linea/geth/ compose files (mainnet/sepolia, pruned/archive)
- Update Maru version and --network flag in besu/erigon3 configs
- Update compose_registry.json

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 10:08:04 +00:00
rob
d28342683c Update Maru to v1.0.0 and use --network flag
- Upgrade from 9737a45 to v1.0.0-20260108114606-36f5e2f
- Use --network=linea-mainnet for built-in config
- May fix advertise-ip issue for peer discovery

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 09:40:36 +00:00
rob
52d7ec6d40 Fix Starknet chain ID matching - handle hex-encoded ASCII
Juno returns chain ID as hex-encoded ASCII (0x534e5f5345504f4c4941)
rather than plain string (SN_SEPOLIA). Match both formats.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 09:17:22 +00:00
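The hex form is just ASCII bytes, so decoding it shows both representations are the same identifier. A small sketch (helper name illustrative; xxd is already used elsewhere in this repo's examples):

```shell
# Decode Juno's hex-encoded ASCII chain ID into its plain string form.
decode_chain_id() {
  printf '%s' "${1#0x}" | xxd -r -p   # strip the 0x prefix, then hex -> ASCII
}

# decode_chain_id 0x534e5f5345504f4c4941 prints SN_SEPOLIA
```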
rob
6917005776 Add Starknet support to blocknumber.sh
- Detect Starknet paths and use starknet_getBlockWithTxHashes
- Return decimal block_number directly instead of hex conversion

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 09:11:39 +00:00
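Unlike eth_blockNumber, which returns a hex string, starknet_getBlockWithTxHashes carries block_number as a plain decimal JSON number, so no hex conversion step is needed. A sketch assuming jq; the response below is abbreviated and illustrative:

```shell
# Pull the decimal block number straight out of a Starknet block response.
resp='{"jsonrpc":"2.0","id":1,"result":{"block_number":123456,"block_hash":"0xabc"}}'
printf '%s' "$resp" | jq -r '.result.block_number'
```

This prints `123456` directly.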
rob
63b720f1e9 Add Starknet support to sync-status and check-health scripts
- sync-status.sh now detects Starknet paths and uses starknet_chainId
- Maps SN_MAIN/SN_SEPOLIA chain IDs to reference endpoints
- check-health.sh accepts --starknet flag for Starknet mode
- Uses starknet_getBlockWithTxHashes instead of eth_getBlockByNumber
- Handles decimal timestamps and block_hash field differences

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 09:10:52 +00:00
rob
db681b5a74 Add Starknet RPC support to sync check scripts 2026-01-15 09:07:36 +00:00
rob
42a91a5bac Add explicit http-port and ws-port to juno config 2026-01-15 07:23:23 +00:00
rob
8dfdaf3548 Add --p2p-feeder-node to juno for feeder gateway sync 2026-01-15 06:48:22 +00:00
rob
98e1c88293 Update client versions
- erigon3: v3.3.2 → v3.3.3
- taiko geth: v1.17.3 → v1.17.4
- taiko driver: v1.11.0 → v1.11.2

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 04:56:51 +00:00
rob
5c0fb760cc Remove nginx sidecar from rootstock, use traefik headers
- Service name simplified to rootstock-mainnet (no -client suffix)
- Traefik middlewares handle Host:localhost header rewriting
- Proper WS routing on port 8546

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 10:27:19 +00:00
rob
004476216e Remove nginx sidecar from rootstock, use traefik headers
Replace nginx proxy with traefik headers middleware for Host rewriting.
Fixes container IP mismatch issues on container restart.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 10:22:46 +00:00
rob
eee0a4092d Update dshackle to v0.73.0
Bump drpcorg/dshackle from 0.72.6 to 0.73.0

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 09:13:59 +00:00
rob
662ecbfe5c Update client versions for multiple chains
- geth: v1.16.7 → v1.16.8 (security fix for p2p vulnerabilities)
- blsync: alltools-v1.16.7 → alltools-v1.16.8
- nethermind: 1.35.8 → 1.36.0 (major release with 416 improvements)
- avalanche-go: v1.14.0 → v1.14.1 (Granite.1 release)
- bsc: 1.6.5 → 1.6.6 (security fixes from geth v1.16.8)
- pathfinder: v0.21.3 → v0.21.5 (sync hotfix)
- reth_gnosis: v1.0.0 → v1.0.1 (RPC bugfixes)
- bitcoind: 0.32.5 → 0.32.6
- zircuit: v1.125.6-hotfix → v1.127.13-beta
- fraxtal geth: v1.101603.5 → v1.101605.0
- fraxtal op-node: v1.16.3 → v1.16.5

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 09:11:35 +00:00
goldsquid
32b09c75f4 security update 2026-01-14 10:33:28 +07:00
rob
e4dfd2f4ab update reference-rpc file with hyperliquid forked-status fix 2026-01-14 01:25:53 +00:00
goldsquid
541f985686 zircuit needs that toml even if it is empty 2026-01-11 17:52:14 +07:00
goldsquid
79d135c8bc node is root 2026-01-11 17:47:17 +07:00
goldsquid
db9b33fe89 update 2026-01-11 16:59:38 +07:00
goldsquid
eedcc5642e try without genesis 2026-01-11 16:46:43 +07:00
goldsquid
fc02491f69 download the genesis 2026-01-11 16:39:47 +07:00
goldsquid
e71237963a lfg 2026-01-10 17:53:28 +07:00
goldsquid
d870dccd8d now with leveldb 2026-01-10 16:55:48 +07:00
goldsquid
f72af4022d downgrade 2026-01-10 16:27:19 +07:00
goldsquid
5e65e6d147 fix 2026-01-09 16:10:31 +07:00
goldsquid
52ae79c775 caplin archive 2026-01-09 16:03:34 +07:00
goldsquid
1dae8221d6 updates plus caplin archive for blobs. 2026-01-09 12:34:01 +07:00
goldsquid
0912ec794e update 2026-01-08 23:05:02 +07:00
ee98772660 fixes 2026-01-08 12:47:07 +01:00
goldsquid
9c285b6004 better limits 2026-01-08 09:47:03 +07:00
goldsquid
521b5709b1 better limits 2026-01-08 09:44:19 +07:00
goldsquid
e3dcadfb7c better limits 2026-01-08 09:41:56 +07:00
goldsquid
624b1da9a6 better limits 2026-01-08 09:38:38 +07:00
goldsquid
e9bd0dd120 allow limiting bandwidth usage on public ports, inspired by gnosis pulling 500 MBit/s 2026-01-08 09:26:38 +07:00
goldsquid
c02425462b update 2026-01-07 10:51:54 +07:00
goldsquid
234664593a try alternative sequencer url for xlayer testnet 2026-01-07 09:48:12 +07:00
goldsquid
f70add2811 better jitter 2026-01-06 12:08:14 +07:00
goldsquid
8c4c4da978 better jitter 2026-01-06 11:34:32 +07:00
goldsquid
4916f4d62d new script 2026-01-06 11:24:40 +07:00
goldsquid
66a8748a8d update 2026-01-04 10:33:41 +07:00
goldsquid
76268600c8 fix 2025-12-31 16:44:00 +07:00
goldsquid
47cb694887 add bob sepolia 2025-12-31 16:40:08 +07:00
rob
a24f8ed258 add testnet support for cdk-erigon 2025-12-31 02:25:45 +00:00
goldsquid
87d2ab96eb update 2025-12-30 20:22:37 +07:00
goldsquid
164afcc244 removed a 0 2025-12-30 20:16:51 +07:00
goldsquid
c8f11d8490 update 2025-12-30 20:02:25 +07:00
goldsquid
3c36671481 now with reth triedb 2025-12-30 19:27:57 +07:00
goldsquid
3aa1ea9718 fix 2025-12-28 17:24:34 +07:00
goldsquid
2cdaa2ca11 fix 2025-12-28 17:10:06 +07:00
goldsquid
6c249bf321 fix 2025-12-28 17:06:43 +07:00
goldsquid
da88241a33 fix 2025-12-28 16:42:03 +07:00
goldsquid
a1cb2455a1 fix 2025-12-28 16:38:59 +07:00
goldsquid
adef3f5a99 fix 2025-12-28 16:22:13 +07:00
goldsquid
a6c2542cbf new gravity 2025-12-28 16:13:55 +07:00
goldsquid
5cff27d93b now with Aristotle support 2025-12-28 16:07:59 +07:00
rob
85f24b1d2f rpc: add xlayer testnet erigon config 2025-12-24 22:03:17 -04:00
goldsquid
760798416f add gas 2025-12-25 08:21:20 +07:00
goldsquid
44882c072a bump bsc gaslimit and dshackle update 2025-12-24 22:27:06 +07:00
goldsquid
3d4ac86453 updates by cursor implemented by vibe 2025-12-24 21:58:59 +07:00
goldsquid
ae22c0080c new script 2025-12-23 10:46:34 +07:00
goldsquid
80224eb627 remove download for chaindata 2025-12-22 19:47:09 +07:00
goldsquid
774408aae1 fix 2025-12-22 19:17:24 +07:00
3115055079 fix 2025-12-21 13:16:26 +01:00
goldsquid
611e97751b fix 2025-12-21 18:59:18 +07:00
goldsquid
06bfc888c9 backup update 2025-12-21 18:56:24 +07:00
goldsquid
575b7133bb allow peers backups 2025-12-21 18:37:14 +07:00
goldsquid
099958b848 recommend upgrade 2025-12-21 14:30:08 +07:00
goldsquid
39ff5152da have peer statistics 2025-12-21 14:25:24 +07:00
goldsquid
af32f59d48 fix 2025-12-21 14:12:59 +07:00
goldsquid
42ccb7aa2d fix 2025-12-21 14:09:48 +07:00
goldsquid
34e7fb6abf check for admin api 2025-12-21 14:07:44 +07:00
goldsquid
c690c74d30 remove the symlinks 2025-12-21 12:59:14 +07:00
goldsquid
6c44963b74 allow pruning nitro nodes automatically 2025-12-21 03:36:50 +07:00
goldsquid
a71b92079a update 2025-12-21 02:33:12 +07:00
goldsquid
42daff3853 updates 2025-12-21 02:31:10 +07:00
goldsquid
354bfe0530 update 2025-12-21 02:23:38 +07:00
goldsquid
b35b3cd568 new helper script 2025-12-20 13:35:37 +07:00
goldsquid
c5ab00f29c no more error 2025-12-20 09:53:37 +07:00
goldsquid
5808945ef1 overrides from example repo 2025-12-20 09:19:46 +07:00
goldsquid
73830df4a8 Merge remote-tracking branch 'production/main' 2025-12-20 09:12:26 +07:00
goldsquid
7a35a9fcd5 updates 2025-12-20 09:11:47 +07:00
rob
f8112122e0 remove deprecated Nitro execution flags 2025-12-19 14:55:26 +00:00
goldsquid
4ef3d5c55f fix 2025-12-19 15:15:28 +07:00
goldsquid
8e7c8057bd also delete symlinked directories 2025-12-19 13:42:28 +07:00
goldsquid
8d29690279 added support for extracting static files to the slowdisk folder of a supporting host so that there is more space for dynamic chaindata 2025-12-19 13:38:58 +07:00
goldsquid
e3350eae72 add metadata to the backup volumes in order to be able to restore static_files via symlink in a different location 2025-12-19 13:20:47 +07:00
goldsquid
4b1de78b84 more clear 2025-12-19 13:10:27 +07:00
goldsquid
25c2f8c101 more clear 2025-12-19 13:07:14 +07:00
goldsquid
f282b2173f more useful 2025-12-19 13:01:20 +07:00
goldsquid
74716312d0 follow symlinks when calculating size 2025-12-19 12:15:47 +07:00
goldsquid
578d558d3f show free disk in the volumes folder 2025-12-19 12:10:08 +07:00
goldsquid
a2db47caf5 new script to stream backups directly to target servers without extraction on the source 2025-12-19 12:00:55 +07:00
goldsquid
0bd645a05d update 2025-12-19 09:03:19 +07:00
goldsquid
782087b89a some overrides and bootnodes 2025-12-18 14:14:15 +07:00
goldsquid
845a1eb088 boba only with historical data 2025-12-18 13:39:10 +07:00
goldsquid
3c5db8da34 boba only with historical data 2025-12-18 13:38:38 +07:00
goldsquid
f1ed7cc835 added cancun time to genesis because ENV overrides are not picked up by geth 2025-12-18 12:57:21 +07:00
goldsquid
3e5c44d728 fix 2025-12-18 12:30:18 +07:00
goldsquid
f9b5c59452 fix 2025-12-18 12:26:46 +07:00
goldsquid
eb46b13d5c extra 2025-12-18 09:12:37 +07:00
goldsquid
51af4d44fe fix 2025-12-18 09:02:54 +07:00
goldsquid
e92805056a fix 2025-12-18 08:56:18 +07:00
goldsquid
5bd83eed3d resilience 2025-12-18 08:53:41 +07:00
goldsquid
dd18b750fe ipv4 2025-12-18 08:49:54 +07:00
goldsquid
359b064938 try 2025-12-18 08:47:00 +07:00
goldsquid
90a2cf6114 fix 2025-12-17 21:27:42 +07:00
goldsquid
8a317d7578 fix 2025-12-17 21:14:53 +07:00
goldsquid
1f5c59c887 fix 2025-12-17 16:50:29 +07:00
goldsquid
cc66cde43c no max eth version for linea 2025-12-17 16:22:19 +07:00
goldsquid
06340d5b16 fix 2025-12-17 16:16:27 +07:00
goldsquid
90ddaee53c fix 2025-12-17 16:02:02 +07:00
goldsquid
0b7cfc32a8 fixes 2025-12-17 11:11:05 +07:00
goldsquid
9bde9731fc updates 2025-12-17 11:06:16 +07:00
goldsquid
3a8180482b only heal the sonic db after a crash. 2025-12-16 13:20:05 +07:00
goldsquid
0d38493c9e smol fixes 2025-12-16 13:11:14 +07:00
goldsquid
773bed911a let them gossip 2025-12-16 12:58:40 +07:00
goldsquid
7765dd160f verify that the peer is known before adding it. 2025-12-16 12:47:33 +07:00
goldsquid
00c9c53cc8 check peer before copying it 2025-12-16 12:35:58 +07:00
rob
21c826baf5 Update Sonic and Taiko RPC configurations 2025-12-16 01:11:35 +00:00
goldsquid
8c04689484 update 2025-12-15 22:04:13 +07:00
goldsquid
e3a5eb5c54 update 2025-12-14 14:53:42 +07:00
goldsquid
2ec4bcf1a6 update 2025-12-14 08:55:22 +07:00
goldsquid
9ab679ace3 adding snappy support 2025-12-13 17:17:01 +07:00
goldsquid
12a9d63adb update 2025-12-13 17:04:40 +07:00
goldsquid
36ed7d2c3d update 2025-12-13 17:00:27 +07:00
goldsquid
bb4469f32e fix 2025-12-13 16:14:50 +07:00
goldsquid
eed62c45f2 update llvm 2025-12-13 16:08:15 +07:00
goldsquid
6469e2cd92 logs not streaming for agents 2025-12-13 15:45:50 +07:00
goldsquid
cfe0b50ae1 ipv4 2025-12-13 15:22:51 +07:00
goldsquid
5ac31610f7 fix 2025-12-13 15:19:24 +07:00
goldsquid
5784bee792 fix 2025-12-13 15:14:28 +07:00
goldsquid
870b658616 fix 2025-12-13 15:04:42 +07:00
goldsquid
f918cd2701 executable 2025-12-13 14:59:41 +07:00
goldsquid
84362b482c clone peers script 2025-12-13 14:47:39 +07:00
goldsquid
557b81666b update 2025-12-13 14:23:35 +07:00
goldsquid
a6884fde5e update 2025-12-13 14:13:30 +07:00
580 changed files with 27563 additions and 137027 deletions

.gitignore

@@ -1 +1,2 @@
 .env
+peer-backups/


@@ -1 +0,0 @@
-abstract/external-node/abstract-mainnet-external-node-pruned.yml


@@ -1 +0,0 @@
-abstract/external-node/abstract-testnet-external-node-pruned.yml


@@ -30,7 +30,7 @@ x-logging-defaults: &logging-defaults
 services:
   abstract-mainnet-archive:
-    image: ${ABSTRACT_EXTERNAL_NODE_IMAGE:-matterlabs/external-node}:${ABSTRACT_MAINNET_EXTERNAL_NODE_VERSION:-v29.1.2}
+    image: ${ABSTRACT_EXTERNAL_NODE_IMAGE:-matterlabs/external-node}:${ABSTRACT_MAINNET_EXTERNAL_NODE_VERSION:-v29.7.0}
     sysctls:
       # TCP Performance
       net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle


@@ -30,7 +30,7 @@ x-logging-defaults: &logging-defaults
 services:
   abstract-mainnet:
-    image: ${ABSTRACT_EXTERNAL_NODE_IMAGE:-matterlabs/external-node}:${ABSTRACT_MAINNET_EXTERNAL_NODE_VERSION:-v29.1.2}
+    image: ${ABSTRACT_EXTERNAL_NODE_IMAGE:-matterlabs/external-node}:${ABSTRACT_MAINNET_EXTERNAL_NODE_VERSION:-v29.7.0}
     sysctls:
       # TCP Performance
       net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle


@@ -30,7 +30,7 @@ x-logging-defaults: &logging-defaults
 services:
   abstract-testnet-archive:
-    image: ${ABSTRACT_EXTERNAL_NODE_IMAGE:-matterlabs/external-node}:${ABSTRACT_TESTNET_EXTERNAL_NODE_VERSION:-v29.1.2}
+    image: ${ABSTRACT_EXTERNAL_NODE_IMAGE:-matterlabs/external-node}:${ABSTRACT_TESTNET_EXTERNAL_NODE_VERSION:-v29.7.0}
     sysctls:
       # TCP Performance
       net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle


@@ -30,7 +30,7 @@ x-logging-defaults: &logging-defaults
 services:
   abstract-testnet:
-    image: ${ABSTRACT_EXTERNAL_NODE_IMAGE:-matterlabs/external-node}:${ABSTRACT_TESTNET_EXTERNAL_NODE_VERSION:-v29.1.2}
+    image: ${ABSTRACT_EXTERNAL_NODE_IMAGE:-matterlabs/external-node}:${ABSTRACT_TESTNET_EXTERNAL_NODE_VERSION:-v29.7.0}
     sysctls:
       # TCP Performance
       net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle


@@ -1 +0,0 @@
-arb/nitro/alephzero-mainnet-nitro-archive-pebble-hash.yml


@@ -1 +0,0 @@
-arb/nitro/alephzero-mainnet-nitro-pruned-pebble-path.yml


@@ -1 +0,0 @@
-arb/nitro/alephzero-sepolia-nitro-archive-leveldb-hash.yml


@@ -1 +0,0 @@
-arb/nitro/alephzero-sepolia-nitro-pruned-pebble-path.yml


@@ -30,7 +30,7 @@ x-logging-defaults: &logging-defaults
 services:
   arbitrum-one-arbnode-archive:
-    image: ${ARBITRUM_ARBNODE_IMAGE:-offchainlabs/arb-node}:${ARBITRUM_ONE_ARBNODE_VERSION:-v1.4.5-e97c1a4}
+    image: ${ARBITRUM_ARBNODE_IMAGE:-offchainlabs/arb-node}:${ARBITRUM_ONE_ARBNODE_VERSION:-v1.4.6-551a39b3}
     sysctls:
       # TCP Performance
       net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle


@@ -30,7 +30,7 @@ x-logging-defaults: &logging-defaults
 services:
   alephzero-mainnet-archive:
-    image: ${ALEPHZERO_NITRO_IMAGE:-offchainlabs/nitro-node}:${ALEPHZERO_MAINNET_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
+    image: ${ALEPHZERO_NITRO_IMAGE:-offchainlabs/nitro-node}:${ALEPHZERO_MAINNET_NITRO_VERSION:-v3.9.5-66e42c4}
     sysctls:
       # TCP Performance
       net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
@@ -63,7 +63,6 @@ services:
       - --http.corsdomain=*
       - --http.port=8545
       - --http.vhosts=*
-      - --init.download-path=/tmp
       - --metrics
       - --metrics-server.addr=0.0.0.0
       - --metrics-server.port=6070


@@ -30,7 +30,7 @@ x-logging-defaults: &logging-defaults
 services:
   alephzero-mainnet:
-    image: ${ALEPHZERO_NITRO_IMAGE:-offchainlabs/nitro-node}:${ALEPHZERO_MAINNET_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
+    image: ${ALEPHZERO_NITRO_IMAGE:-offchainlabs/nitro-node}:${ALEPHZERO_MAINNET_NITRO_VERSION:-v3.9.5-66e42c4}
     sysctls:
       # TCP Performance
       net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
@@ -57,14 +57,12 @@ services:
       - --execution.caching.trie-dirty-cache=${ALEPHZERO_MAINNET_NITRO_PRUNED_PEBBLE_PATH_TRIE_DIRTY_CACHE:-1024}
       - --execution.forwarding-target=https://rpc.alephzero.raas.gelato.cloud
       - --execution.rpc.gas-cap=5500000000
-      - --execution.rpc.state-scheme=path
       - --execution.sequencer.enable=false
       - --http.addr=0.0.0.0
       - --http.api=eth,net,web3,arb,txpool,debug
       - --http.corsdomain=*
       - --http.port=8545
       - --http.vhosts=*
-      - --init.download-path=/tmp
       - --metrics
       - --metrics-server.addr=0.0.0.0
       - --metrics-server.port=6070


@@ -30,7 +30,7 @@ x-logging-defaults: &logging-defaults
 services:
   alephzero-sepolia-archive:
-    image: ${ALEPHZERO_NITRO_IMAGE:-offchainlabs/nitro-node}:${ALEPHZERO_SEPOLIA_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
+    image: ${ALEPHZERO_NITRO_IMAGE:-offchainlabs/nitro-node}:${ALEPHZERO_SEPOLIA_NITRO_VERSION:-v3.9.5-66e42c4}
     sysctls:
       # TCP Performance
       net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
@@ -63,7 +63,6 @@ services:
       - --http.corsdomain=*
       - --http.port=8545
       - --http.vhosts=*
-      - --init.download-path=/tmp
       - --metrics
       - --metrics-server.addr=0.0.0.0
       - --metrics-server.port=6070


@@ -30,7 +30,7 @@ x-logging-defaults: &logging-defaults
 services:
   alephzero-sepolia:
-    image: ${ALEPHZERO_NITRO_IMAGE:-offchainlabs/nitro-node}:${ALEPHZERO_SEPOLIA_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
+    image: ${ALEPHZERO_NITRO_IMAGE:-offchainlabs/nitro-node}:${ALEPHZERO_SEPOLIA_NITRO_VERSION:-v3.9.5-66e42c4}
     sysctls:
       # TCP Performance
       net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
@@ -57,14 +57,12 @@ services:
      - --execution.caching.trie-dirty-cache=${ALEPHZERO_SEPOLIA_NITRO_PRUNED_PEBBLE_PATH_TRIE_DIRTY_CACHE:-1024}
       - --execution.forwarding-target=https://rpc.alephzero-testnet.gelato.digital
       - --execution.rpc.gas-cap=5500000000
-      - --execution.rpc.state-scheme=path
       - --execution.sequencer.enable=false
       - --http.addr=0.0.0.0
       - --http.api=eth,net,web3,arb,txpool,debug
       - --http.corsdomain=*
       - --http.port=8545
       - --http.vhosts=*
-      - --init.download-path=/tmp
       - --metrics
       - --metrics-server.addr=0.0.0.0
       - --metrics-server.port=6070


@@ -30,7 +30,7 @@ x-logging-defaults: &logging-defaults
 services:
   arbitrum-nova-archive:
-    image: ${ARBITRUM_NITRO_IMAGE:-offchainlabs/nitro-node}:${ARBITRUM_NOVA_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
+    image: ${ARBITRUM_NITRO_IMAGE:-offchainlabs/nitro-node}:${ARBITRUM_NOVA_NITRO_VERSION:-v3.9.5-66e42c4}
     sysctls:
       # TCP Performance
       net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
@@ -62,8 +62,6 @@ services:
       - --http.corsdomain=*
       - --http.port=8545
       - --http.vhosts=*
-      - --init.download-path=/tmp
-      - --init.latest=archive
       - --metrics
       - --metrics-server.addr=0.0.0.0
       - --metrics-server.port=6070


@@ -0,0 +1,141 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:arb/nitro/arbitrum-nova-nitro-pruned-pebble-hash.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/arbitrum-nova \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
arbitrum-nova:
image: ${ARBITRUM_NITRO_IMAGE:-offchainlabs/nitro-node}:${ARBITRUM_NOVA_NITRO_VERSION:-v3.9.5-66e42c4}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
expose:
- 8545
command:
- --chain.id=42170
- --execution.caching.archive=${ARBITRUM_NOVA_ARCHIVE_DB:-false}
- --execution.caching.database-cache=${ARBITRUM_NOVA_NITRO_PRUNED_PEBBLE_HASH_DATABASE_CACHE:-2048}
- --execution.caching.snapshot-cache=${ARBITRUM_NOVA_NITRO_PRUNED_PEBBLE_HASH_SNAPSHOT_CACHE:-400}
- --execution.caching.trie-clean-cache=${ARBITRUM_NOVA_NITRO_PRUNED_PEBBLE_HASH_TRIE_CLEAN_CACHE:-600}
- --execution.caching.trie-dirty-cache=${ARBITRUM_NOVA_NITRO_PRUNED_PEBBLE_HASH_TRIE_DIRTY_CACHE:-1024}
- --execution.rpc.gas-cap=5500000000
- --execution.rpc.log-history=0
- --execution.sequencer.enable=false
- --http.addr=0.0.0.0
- --http.api=eth,net,web3,arb,txpool,debug
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.prune=full
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070
- --node.batch-poster.enable=false
- --node.sequencer=false
- --node.staker.enable=false
- --parent-chain.blob-client.beacon-url=${ETHEREUM_MAINNET_BEACON_REST}
- --parent-chain.connection.url=${ETHEREUM_MAINNET_EXECUTION_RPC}
- --persistent.chain=/root/.arbitrum/arbitrum-nova
- --persistent.db-engine=pebble
- --ws.addr=0.0.0.0
- --ws.origins=*
- --ws.port=8545
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${ARBITRUM_NOVA_NITRO_PRUNED_PEBBLE_HASH_DATA:-arbitrum-nova-nitro-pruned-pebble-hash}:/root/.arbitrum
- ./tmp/arbitrum-nova:/tmp
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=true
- prometheus-scrape.port=6070
- prometheus-scrape.path=/debug/metrics/prometheus
- traefik.enable=true
- traefik.http.middlewares.arbitrum-nova-nitro-pruned-pebble-hash-stripprefix.stripprefix.prefixes=/arbitrum-nova
- traefik.http.services.arbitrum-nova-nitro-pruned-pebble-hash.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.arbitrum-nova-nitro-pruned-pebble-hash.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.arbitrum-nova-nitro-pruned-pebble-hash.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.arbitrum-nova-nitro-pruned-pebble-hash.rule=Host(`$DOMAIN`) && (Path(`/arbitrum-nova`) || Path(`/arbitrum-nova/`))}
- ${NO_SSL:+traefik.http.routers.arbitrum-nova-nitro-pruned-pebble-hash.rule=Path(`/arbitrum-nova`) || Path(`/arbitrum-nova/`)}
- traefik.http.routers.arbitrum-nova-nitro-pruned-pebble-hash.middlewares=arbitrum-nova-nitro-pruned-pebble-hash-stripprefix, ipallowlist
volumes:
arbitrum-nova-nitro-pruned-pebble-hash:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: arbitrum-nova
method-groups:
enabled:
- debug
- filter
methods:
disabled:
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
# standard geth only
- name: debug_getRawBlock
- name: debug_getRawTransaction
- name: debug_getRawReceipts
- name: debug_getRawHeader
- name: debug_getBadBlocks
# non standard geth only slightly dangerous
- name: debug_intermediateRoots
- name: debug_dumpBlock
# standard geth and erigon
- name: debug_accountRange
- name: debug_getModifiedAccountsByNumber
- name: debug_getModifiedAccountsByHash
# non-standard, geth and erigon
- name: eth_getRawTransactionByHash
- name: eth_getRawTransactionByBlockHashAndIndex
...

View File
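The paired `${NO_SSL:-…}` / `${NO_SSL:+…}` labels above rely on shell-style parameter expansion, which Compose variable substitution follows: `:-` emits its fallback text when the variable is unset or empty, while `:+` emits its text only when the variable is set and non-empty, so exactly one of the two router rules survives. A minimal sketch of the mechanism (label text shortened for illustration):

```shell
#!/bin/sh
# With NO_SSL unset, the ":-" form yields its fallback (the TLS label)
# and the ":+" form yields nothing.
unset NO_SSL
echo "tls-router: ${NO_SSL:-entrypoints=websecure}"
echo "plain-router: ${NO_SSL:+rule=Path(/arbitrum-nova)}"

# With NO_SSL set, the roles flip: ":-" yields the variable's own value
# and ":+" yields its alternate text.
NO_SSL=true
echo "tls-router: ${NO_SSL:-entrypoints=websecure}"
echo "plain-router: ${NO_SSL:+rule=Path(/arbitrum-nova)}"
```

Setting NO_SSL to any non-empty value therefore replaces the three TLS-related labels with that value (an inert label Traefik ignores) and activates the plain-HTTP rule instead.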

@@ -30,7 +30,7 @@ x-logging-defaults: &logging-defaults
services:
arbitrum-nova:
image: ${ARBITRUM_NITRO_IMAGE:-offchainlabs/nitro-node}:${ARBITRUM_NOVA_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
image: ${ARBITRUM_NITRO_IMAGE:-offchainlabs/nitro-node}:${ARBITRUM_NOVA_NITRO_VERSION:-v3.9.5-66e42c4}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
@@ -62,8 +62,6 @@ services:
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.download-path=/tmp
- --init.latest=pruned
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070

View File

@@ -30,7 +30,7 @@ x-logging-defaults: &logging-defaults
services:
arbitrum-one-archive:
image: ${ARBITRUM_NITRO_IMAGE:-offchainlabs/nitro-node}:${ARBITRUM_ONE_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
image: ${ARBITRUM_NITRO_IMAGE:-offchainlabs/nitro-node}:${ARBITRUM_ONE_NITRO_VERSION:-v3.9.5-66e42c4}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
@@ -63,8 +63,6 @@ services:
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.download-path=/tmp
- --init.latest=archive
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070
@@ -100,7 +98,7 @@ services:
- traefik.http.routers.arbitrum-one-nitro-archive-leveldb-hash.middlewares=arbitrum-one-nitro-archive-leveldb-hash-stripprefix, ipallowlist
arbitrum-one-arbnode-archive:
image: ${ARBITRUM_ARBNODE_IMAGE:-offchainlabs/arb-node}:${ARBITRUM_ONE_ARBNODE_VERSION:-v1.4.5-e97c1a4}
image: ${ARBITRUM_ARBNODE_IMAGE:-offchainlabs/arb-node}:${ARBITRUM_ONE_ARBNODE_VERSION:-v1.4.6-551a39b3}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle

View File

@@ -30,7 +30,7 @@ x-logging-defaults: &logging-defaults
services:
arbitrum-one-archive:
image: ${ARBITRUM_NITRO_IMAGE:-offchainlabs/nitro-node}:${ARBITRUM_ONE_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
image: ${ARBITRUM_NITRO_IMAGE:-offchainlabs/nitro-node}:${ARBITRUM_ONE_NITRO_VERSION:-v3.9.5-66e42c4}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
@@ -63,8 +63,6 @@ services:
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.download-path=/tmp
- --init.latest=archive
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070
@@ -101,7 +99,7 @@ services:
- traefik.http.routers.arbitrum-one-nitro-archive-pebble-hash.middlewares=arbitrum-one-nitro-archive-pebble-hash-stripprefix, ipallowlist
arbitrum-one-arbnode-archive:
image: ${ARBITRUM_ARBNODE_IMAGE:-offchainlabs/arb-node}:${ARBITRUM_ONE_ARBNODE_VERSION:-v1.4.5-e97c1a4}
image: ${ARBITRUM_ARBNODE_IMAGE:-offchainlabs/arb-node}:${ARBITRUM_ONE_ARBNODE_VERSION:-v1.4.6-551a39b3}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle

View File

@@ -0,0 +1,205 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:arb/nitro/arbitrum-one-nitro-pruned-pebble-hash--fireeth.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/arbitrum-one \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
arbitrum-one:
image: ${ARBITRUM_FIREETH_IMAGE:-ghcr.io/streamingfast/go-ethereum}:${ARBITRUM_ONE_FIREETH_VERSION:-v2.12.4-nitro-nitro-v3.6.7-fh3.0}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
expose:
- 8545
entrypoint: [sh, -c, exec fireeth start reader-node --log-to-file=false --reader-node-arguments "$*", _]
command:
- --chain.id=42161
- --execution.caching.archive=${ARBITRUM_ONE_ARCHIVE_DB:-false}
- --execution.caching.database-cache=${ARBITRUM_ONE_NITRO_PRUNED_PEBBLE_HASH_DATABASE_CACHE:-2048}
- --execution.caching.snapshot-cache=${ARBITRUM_ONE_NITRO_PRUNED_PEBBLE_HASH_SNAPSHOT_CACHE:-400}
- --execution.caching.trie-clean-cache=${ARBITRUM_ONE_NITRO_PRUNED_PEBBLE_HASH_TRIE_CLEAN_CACHE:-600}
- --execution.caching.trie-dirty-cache=${ARBITRUM_ONE_NITRO_PRUNED_PEBBLE_HASH_TRIE_DIRTY_CACHE:-1024}
- --execution.rpc.gas-cap=5500000000
- --execution.rpc.log-history=0
- --execution.sequencer.enable=false
- --http.addr=0.0.0.0
- --http.api=eth,net,web3,arb,txpool,debug
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.prune=full
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070
- --node.batch-poster.enable=false
- --node.sequencer=false
- --node.staker.enable=false
- --parent-chain.blob-client.beacon-url=${ETHEREUM_MAINNET_BEACON_REST}
- --parent-chain.connection.url=${ETHEREUM_MAINNET_EXECUTION_RPC}
- --persistent.chain=/root/.arbitrum/arbitrum-one
- --persistent.db-engine=pebble
- --ws.addr=0.0.0.0
- --ws.origins=*
- --ws.port=8545
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${ARBITRUM_ONE_FIREETH_DATA:-arbitrum-one-fireeth}:/app/firehose-data
- ${ARBITRUM_ONE_NITRO_PRUNED_PEBBLE_HASH_DATA:-arbitrum-one-nitro-pruned-pebble-hash}:/root/.arbitrum
- ./tmp/arbitrum-one:/tmp
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=true
- prometheus-scrape.port=6070
- prometheus-scrape.path=/debug/metrics/prometheus
- traefik.enable=true
- traefik.http.middlewares.arbitrum-one-nitro-pruned-pebble-hash-stripprefix.stripprefix.prefixes=/arbitrum-one
- traefik.http.services.arbitrum-one-nitro-pruned-pebble-hash.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.arbitrum-one-nitro-pruned-pebble-hash.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.arbitrum-one-nitro-pruned-pebble-hash.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.arbitrum-one-nitro-pruned-pebble-hash.rule=Host(`$DOMAIN`) && (Path(`/arbitrum-one`) || Path(`/arbitrum-one/`))}
- ${NO_SSL:+traefik.http.routers.arbitrum-one-nitro-pruned-pebble-hash.rule=Path(`/arbitrum-one`) || Path(`/arbitrum-one/`)}
- traefik.http.routers.arbitrum-one-nitro-pruned-pebble-hash.middlewares=arbitrum-one-nitro-pruned-pebble-hash-stripprefix, ipallowlist
arbitrum-one-firehose:
image: ${ARBITRUM_FIREETH_IMAGE:-ghcr.io/streamingfast/go-ethereum}:${ARBITRUM_ONE_FIREETH_VERSION:-v2.12.4-nitro-nitro-v3.6.7-fh3.0}
expose:
- 10015
- 10014
environment:
- ${ARBITRUM_ONE_FIREETH_BLOCKS_STORE:-/app/firehose-data/storage/merged-blocks}
entrypoint: [sh, -c, exec fireeth --config-file="" --log-to-file=false start firehose index-builder relayer merger $@, _]
command:
- --firehose-rate-limit-bucket-fill-rate=${ARBITRUM_ONE_FIREHOSE_RATE_LIMIT_BUCKET_FILL_RATE:-1s}
- --firehose-rate-limit-bucket-size=${ARBITRUM_ONE_FIREHOSE_RATE_LIMIT_BUCKET_SIZE:-200}
- --log-to-file=false
- --relayer-source=arbitrum-one:10010
restart: unless-stopped
depends_on:
- arbitrum-one
networks:
- chains
volumes:
- ${ARBITRUM_ONE_FIREETH_DATA:-arbitrum-one-fireeth}:/app/firehose-data
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=false
- traefik.enable=true
- traefik.http.services.arbitrum-one-nitro-pruned-pebble-hash-firehose.loadbalancer.server.scheme=h2c
- traefik.http.routers.arbitrum-one-nitro-pruned-pebble-hash-firehose.service=arbitrum-one-nitro-pruned-pebble-hash-firehose
- traefik.http.services.arbitrum-one-nitro-pruned-pebble-hash-firehose.loadbalancer.server.port=10015
- traefik.http.routers.arbitrum-one-nitro-pruned-pebble-hash-firehose.entrypoints=grpc
- ${NO_SSL:-traefik.http.routers.arbitrum-one-nitro-pruned-pebble-hash-firehose.tls.certresolver=myresolver}
- traefik.http.routers.arbitrum-one-nitro-pruned-pebble-hash-firehose.rule=Host(`arbitrum-one-firehose.${DOMAIN}`)
- traefik.http.routers.arbitrum-one-nitro-pruned-pebble-hash-firehose.middlewares=ipallowlist
arbitrum-one-events:
image: ${ARBITRUM_FIREETH_IMAGE:-ghcr.io/streamingfast/go-ethereum}:${ARBITRUM_ONE_FIREETH_VERSION:-v2.12.4-nitro-nitro-v3.6.7-fh3.0}
expose:
- 10016
entrypoint: [sh, -c, exec fireeth --config-file="" --log-to-file=false start substreams-tier1 substreams-tier2 $@, _]
command:
- --common-live-blocks-addr=arbitrum-one-firehose:10014
- --log-to-file=false
- --substreams-block-execution-timeout=${ARBITRUM_ONE_SUBSTREAMS_BLOCK_EXECUTION_TIMEOUT:-3m0s}
- --substreams-rpc-endpoints=${ARBITRUM_ONE_EXECUTION_ARCHIVE_RPC}
- --substreams-tier1-max-subrequests=${ARBITRUM_ONE_SUBSTREAMS_TIER1_MAX_SUBREQUESTS:-4}
restart: unless-stopped
depends_on:
- arbitrum-one
networks:
- chains
volumes:
- ${ARBITRUM_ONE_FIREETH_DATA:-arbitrum-one-fireeth}:/app/firehose-data
logging: *logging-defaults
labels:
- traefik.enable=true
- traefik.http.services.arbitrum-one-nitro-pruned-pebble-hash-events.loadbalancer.server.scheme=h2c
- traefik.http.routers.arbitrum-one-nitro-pruned-pebble-hash-events.service=arbitrum-one-nitro-pruned-pebble-hash-events
- traefik.http.services.arbitrum-one-nitro-pruned-pebble-hash-events.loadbalancer.server.port=10016
- traefik.http.routers.arbitrum-one-nitro-pruned-pebble-hash-events.entrypoints=grpc
- ${NO_SSL:-traefik.http.routers.arbitrum-one-nitro-pruned-pebble-hash-events.tls.certresolver=myresolver}
- traefik.http.routers.arbitrum-one-nitro-pruned-pebble-hash-events.rule=Host(`arbitrum-one-events.${DOMAIN}`)
- traefik.http.routers.arbitrum-one-nitro-pruned-pebble-hash-events.middlewares=ipallowlist
volumes:
arbitrum-one-nitro-pruned-pebble-hash:
arbitrum-one-nitro-pruned-pebble-hash_fireeth:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: arbitrum
method-groups:
enabled:
- debug
- filter
methods:
disabled:
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
# standard geth only
- name: debug_getRawBlock
- name: debug_getRawTransaction
- name: debug_getRawReceipts
- name: debug_getRawHeader
- name: debug_getBadBlocks
# non-standard, geth only, slightly dangerous
- name: debug_intermediateRoots
- name: debug_dumpBlock
# standard geth and erigon
- name: debug_accountRange
- name: debug_getModifiedAccountsByNumber
- name: debug_getModifiedAccountsByHash
# non-standard, geth and erigon
- name: eth_getRawTransactionByHash
- name: eth_getRawTransactionByBlockHashAndIndex
...

View File
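The `x-upstreams` blocks write `$${ID}`, `$${RPC_URL}`, and so on because Compose substitutes single-`$` variables at parse time; doubling the dollar sign escapes it, so the literal `${ID}` survives into the extension block for whatever consumes it downstream (here, presumably the dshackle upstream templating). The rewrite can be simulated in plain shell:

```shell
#!/bin/sh
# Compose replaces every "$$" with a literal "$" before doing any
# variable substitution; sed mimics that rewrite on a template line.
template='id: $${ID}'
rendered=$(printf '%s\n' "$template" | sed 's/\$\$/$/g')
echo "$rendered"   # id: ${ID}  -- left intact for the downstream consumer
```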

@@ -63,8 +63,6 @@ services:
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.download-path=/tmp
- --init.latest=pruned
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070

View File
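The usage comments in the compose files above build `COMPOSE_FILE` as a colon-separated stack (`base.yml:rpc.yml:<chain>.yml`); Compose merges the files left to right, with later files overriding earlier keys. The default separator is `:` (configurable via `COMPOSE_PATH_SEPARATOR`), so the stack splits the same way in shell:

```shell
#!/bin/sh
# Split a COMPOSE_FILE stack on ":" the way docker compose does by default.
COMPOSE_FILE="base.yml:rpc.yml:arb/nitro/arbitrum-one-nitro-pruned-pebble-hash.yml"
old_ifs=$IFS; IFS=:
for f in $COMPOSE_FILE; do
  echo "merging: $f"
done
IFS=$old_ifs
```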

@@ -0,0 +1,141 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:arb/nitro/arbitrum-one-nitro-pruned-pebble-hash.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/arbitrum-one \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
arbitrum-one:
image: ${ARBITRUM_NITRO_IMAGE:-offchainlabs/nitro-node}:${ARBITRUM_ONE_NITRO_VERSION:-v3.9.5-66e42c4}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
expose:
- 8545
command:
- --chain.id=42161
- --execution.caching.archive=${ARBITRUM_ONE_ARCHIVE_DB:-false}
- --execution.caching.database-cache=${ARBITRUM_ONE_NITRO_PRUNED_PEBBLE_HASH_DATABASE_CACHE:-2048}
- --execution.caching.snapshot-cache=${ARBITRUM_ONE_NITRO_PRUNED_PEBBLE_HASH_SNAPSHOT_CACHE:-400}
- --execution.caching.trie-clean-cache=${ARBITRUM_ONE_NITRO_PRUNED_PEBBLE_HASH_TRIE_CLEAN_CACHE:-600}
- --execution.caching.trie-dirty-cache=${ARBITRUM_ONE_NITRO_PRUNED_PEBBLE_HASH_TRIE_DIRTY_CACHE:-1024}
- --execution.rpc.gas-cap=5500000000
- --execution.rpc.log-history=0
- --execution.sequencer.enable=false
- --http.addr=0.0.0.0
- --http.api=eth,net,web3,arb,txpool,debug
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.prune=full
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070
- --node.batch-poster.enable=false
- --node.sequencer=false
- --node.staker.enable=false
- --parent-chain.blob-client.beacon-url=${ETHEREUM_MAINNET_BEACON_REST}
- --parent-chain.connection.url=${ETHEREUM_MAINNET_EXECUTION_RPC}
- --persistent.chain=/root/.arbitrum/arbitrum-one
- --persistent.db-engine=pebble
- --ws.addr=0.0.0.0
- --ws.origins=*
- --ws.port=8545
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${ARBITRUM_ONE_NITRO_PRUNED_PEBBLE_HASH_DATA:-arbitrum-one-nitro-pruned-pebble-hash}:/root/.arbitrum
- ./tmp/arbitrum-one:/tmp
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=true
- prometheus-scrape.port=6070
- prometheus-scrape.path=/debug/metrics/prometheus
- traefik.enable=true
- traefik.http.middlewares.arbitrum-one-nitro-pruned-pebble-hash-stripprefix.stripprefix.prefixes=/arbitrum-one
- traefik.http.services.arbitrum-one-nitro-pruned-pebble-hash.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.arbitrum-one-nitro-pruned-pebble-hash.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.arbitrum-one-nitro-pruned-pebble-hash.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.arbitrum-one-nitro-pruned-pebble-hash.rule=Host(`$DOMAIN`) && (Path(`/arbitrum-one`) || Path(`/arbitrum-one/`))}
- ${NO_SSL:+traefik.http.routers.arbitrum-one-nitro-pruned-pebble-hash.rule=Path(`/arbitrum-one`) || Path(`/arbitrum-one/`)}
- traefik.http.routers.arbitrum-one-nitro-pruned-pebble-hash.middlewares=arbitrum-one-nitro-pruned-pebble-hash-stripprefix, ipallowlist
volumes:
arbitrum-one-nitro-pruned-pebble-hash:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: arbitrum
method-groups:
enabled:
- debug
- filter
methods:
disabled:
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
# standard geth only
- name: debug_getRawBlock
- name: debug_getRawTransaction
- name: debug_getRawReceipts
- name: debug_getRawHeader
- name: debug_getBadBlocks
# non-standard, geth only, slightly dangerous
- name: debug_intermediateRoots
- name: debug_dumpBlock
# standard geth and erigon
- name: debug_accountRange
- name: debug_getModifiedAccountsByNumber
- name: debug_getModifiedAccountsByHash
# non-standard, geth and erigon
- name: eth_getRawTransactionByHash
- name: eth_getRawTransactionByBlockHashAndIndex
...

View File
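The usage comments above generate `.jwtsecret` via a throwaway Alpine container. The same 32-byte hex secret can be produced with POSIX tools alone — a sketch, with `od` standing in for `xxd`, which is not installed everywhere:

```shell
#!/bin/sh
# 32 random bytes, hex-encoded and 0x-prefixed: 66 characters total.
secret="0x$(head -c32 /dev/urandom | od -An -tx1 | tr -d ' \n')"
echo "$secret" | grep -Eq '^0x[0-9a-f]{64}$' && echo "format ok"
# printf '%s' "$secret" > .jwtsecret
```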

@@ -30,7 +30,7 @@ x-logging-defaults: &logging-defaults
services:
arbitrum-one:
image: ${ARBITRUM_NITRO_IMAGE:-offchainlabs/nitro-node}:${ARBITRUM_ONE_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
image: ${ARBITRUM_NITRO_IMAGE:-offchainlabs/nitro-node}:${ARBITRUM_ONE_NITRO_VERSION:-v3.9.5-66e42c4}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
@@ -62,8 +62,6 @@ services:
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.download-path=/tmp
- --init.latest=pruned
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070

View File

@@ -30,7 +30,7 @@ x-logging-defaults: &logging-defaults
services:
arbitrum-sepolia-archive:
image: ${ARBITRUM_NITRO_IMAGE:-offchainlabs/nitro-node}:${ARBITRUM_SEPOLIA_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
image: ${ARBITRUM_NITRO_IMAGE:-offchainlabs/nitro-node}:${ARBITRUM_SEPOLIA_NITRO_VERSION:-v3.9.5-66e42c4}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
@@ -62,8 +62,6 @@ services:
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.download-path=/tmp
- --init.latest=archive
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070

View File

@@ -0,0 +1,141 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:arb/nitro/arbitrum-sepolia-nitro-pruned-pebble-hash.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/arbitrum-sepolia \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
arbitrum-sepolia:
image: ${ARBITRUM_NITRO_IMAGE:-offchainlabs/nitro-node}:${ARBITRUM_SEPOLIA_NITRO_VERSION:-v3.9.5-66e42c4}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
expose:
- 8545
command:
- --chain.id=421614
- --execution.caching.archive=${ARBITRUM_SEPOLIA_ARCHIVE_DB:-false}
- --execution.caching.database-cache=${ARBITRUM_SEPOLIA_NITRO_PRUNED_PEBBLE_HASH_DATABASE_CACHE:-2048}
- --execution.caching.snapshot-cache=${ARBITRUM_SEPOLIA_NITRO_PRUNED_PEBBLE_HASH_SNAPSHOT_CACHE:-400}
- --execution.caching.trie-clean-cache=${ARBITRUM_SEPOLIA_NITRO_PRUNED_PEBBLE_HASH_TRIE_CLEAN_CACHE:-600}
- --execution.caching.trie-dirty-cache=${ARBITRUM_SEPOLIA_NITRO_PRUNED_PEBBLE_HASH_TRIE_DIRTY_CACHE:-1024}
- --execution.rpc.gas-cap=5500000000
- --execution.rpc.log-history=0
- --execution.sequencer.enable=false
- --http.addr=0.0.0.0
- --http.api=eth,net,web3,arb,txpool,debug
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.prune=full
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070
- --node.batch-poster.enable=false
- --node.sequencer=false
- --node.staker.enable=false
- --parent-chain.blob-client.beacon-url=${ETHEREUM_SEPOLIA_BEACON_REST}
- --parent-chain.connection.url=${ETHEREUM_SEPOLIA_EXECUTION_RPC}
- --persistent.chain=/root/.arbitrum/arbitrum-sepolia
- --persistent.db-engine=pebble
- --ws.addr=0.0.0.0
- --ws.origins=*
- --ws.port=8545
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${ARBITRUM_SEPOLIA_NITRO_PRUNED_PEBBLE_HASH_DATA:-arbitrum-sepolia-nitro-pruned-pebble-hash}:/root/.arbitrum
- ./tmp/arbitrum-sepolia:/tmp
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=true
- prometheus-scrape.port=6070
- prometheus-scrape.path=/debug/metrics/prometheus
- traefik.enable=true
- traefik.http.middlewares.arbitrum-sepolia-nitro-pruned-pebble-hash-stripprefix.stripprefix.prefixes=/arbitrum-sepolia
- traefik.http.services.arbitrum-sepolia-nitro-pruned-pebble-hash.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.arbitrum-sepolia-nitro-pruned-pebble-hash.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.arbitrum-sepolia-nitro-pruned-pebble-hash.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.arbitrum-sepolia-nitro-pruned-pebble-hash.rule=Host(`$DOMAIN`) && (Path(`/arbitrum-sepolia`) || Path(`/arbitrum-sepolia/`))}
- ${NO_SSL:+traefik.http.routers.arbitrum-sepolia-nitro-pruned-pebble-hash.rule=Path(`/arbitrum-sepolia`) || Path(`/arbitrum-sepolia/`)}
- traefik.http.routers.arbitrum-sepolia-nitro-pruned-pebble-hash.middlewares=arbitrum-sepolia-nitro-pruned-pebble-hash-stripprefix, ipallowlist
volumes:
arbitrum-sepolia-nitro-pruned-pebble-hash:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: arbitrum-sepolia
method-groups:
enabled:
- debug
- filter
methods:
disabled:
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
# standard geth only
- name: debug_getRawBlock
- name: debug_getRawTransaction
- name: debug_getRawReceipts
- name: debug_getRawHeader
- name: debug_getBadBlocks
# non-standard, geth only, slightly dangerous
- name: debug_intermediateRoots
- name: debug_dumpBlock
# standard geth and erigon
- name: debug_accountRange
- name: debug_getModifiedAccountsByNumber
- name: debug_getModifiedAccountsByHash
# non-standard, geth and erigon
- name: eth_getRawTransactionByHash
- name: eth_getRawTransactionByBlockHashAndIndex
...

View File
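The curl smoke test in the usage comments returns the block height as a 0x-prefixed hex quantity; shell arithmetic expansion accepts hex constants, so the value decodes without extra tooling. A sketch with a hard-coded sample value (a real check would first extract `.result` from the JSON response, e.g. with `jq -r`):

```shell
#!/bin/sh
# Sample eth_blockNumber result, as it appears in the JSON "result" field.
hex_result="0x1b4"
echo "block height: $((hex_result))"   # 0x-prefixed hex decodes to decimal
```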

@@ -30,7 +30,7 @@ x-logging-defaults: &logging-defaults
services:
arbitrum-sepolia:
image: ${ARBITRUM_NITRO_IMAGE:-offchainlabs/nitro-node}:${ARBITRUM_SEPOLIA_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
image: ${ARBITRUM_NITRO_IMAGE:-offchainlabs/nitro-node}:${ARBITRUM_SEPOLIA_NITRO_VERSION:-v3.9.5-66e42c4}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
@@ -62,8 +62,6 @@ services:
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.download-path=/tmp
- --init.latest=pruned
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070

View File

@@ -30,7 +30,7 @@ x-logging-defaults: &logging-defaults
services:
connext-sepolia-archive:
image: ${CONNEXT_NITRO_IMAGE:-offchainlabs/nitro-node}:${CONNEXT_SEPOLIA_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
image: ${CONNEXT_NITRO_IMAGE:-offchainlabs/nitro-node}:${CONNEXT_SEPOLIA_NITRO_VERSION:-v3.9.5-66e42c4}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
@@ -63,7 +63,6 @@ services:
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.download-path=/tmp
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070

View File

@@ -30,7 +30,7 @@ x-logging-defaults: &logging-defaults
services:
connext-sepolia:
image: ${CONNEXT_NITRO_IMAGE:-offchainlabs/nitro-node}:${CONNEXT_SEPOLIA_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
image: ${CONNEXT_NITRO_IMAGE:-offchainlabs/nitro-node}:${CONNEXT_SEPOLIA_NITRO_VERSION:-v3.9.5-66e42c4}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
@@ -57,14 +57,12 @@ services:
- --execution.caching.trie-dirty-cache=${CONNEXT_SEPOLIA_NITRO_PRUNED_PEBBLE_PATH_TRIE_DIRTY_CACHE:-1024}
- --execution.forwarding-target=https://rpc.connext-sepolia.gelato.digital
- --execution.rpc.gas-cap=5500000000
- --execution.rpc.state-scheme=path
- --execution.sequencer.enable=false
- --http.addr=0.0.0.0
- --http.api=eth,net,web3,arb,txpool,debug
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.download-path=/tmp
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070

View File

@@ -30,7 +30,7 @@ x-logging-defaults: &logging-defaults
services:
everclear-mainnet-archive:
image: ${EVERCLEAR_NITRO_IMAGE:-offchainlabs/nitro-node}:${EVERCLEAR_MAINNET_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
image: ${EVERCLEAR_NITRO_IMAGE:-offchainlabs/nitro-node}:${EVERCLEAR_MAINNET_NITRO_VERSION:-v3.9.5-66e42c4}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
@@ -63,7 +63,6 @@ services:
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.download-path=/tmp
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070

View File

@@ -30,7 +30,7 @@ x-logging-defaults: &logging-defaults
services:
everclear-mainnet:
image: ${EVERCLEAR_NITRO_IMAGE:-offchainlabs/nitro-node}:${EVERCLEAR_MAINNET_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
image: ${EVERCLEAR_NITRO_IMAGE:-offchainlabs/nitro-node}:${EVERCLEAR_MAINNET_NITRO_VERSION:-v3.9.5-66e42c4}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
@@ -57,14 +57,12 @@ services:
- --execution.caching.trie-dirty-cache=${EVERCLEAR_MAINNET_NITRO_PRUNED_PEBBLE_PATH_TRIE_DIRTY_CACHE:-1024}
- --execution.forwarding-target=https://rpc.everclear.raas.gelato.cloud
- --execution.rpc.gas-cap=5500000000
- --execution.rpc.state-scheme=path
- --execution.sequencer.enable=false
- --http.addr=0.0.0.0
- --http.api=eth,net,web3,arb,txpool,debug
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.download-path=/tmp
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070

View File

@@ -30,7 +30,7 @@ x-logging-defaults: &logging-defaults
services:
opencampuscodex-sepolia-archive:
image: ${OPENCAMPUSCODEX_NITRO_IMAGE:-offchainlabs/nitro-node}:${OPENCAMPUSCODEX_SEPOLIA_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
image: ${OPENCAMPUSCODEX_NITRO_IMAGE:-offchainlabs/nitro-node}:${OPENCAMPUSCODEX_SEPOLIA_NITRO_VERSION:-v3.9.5-66e42c4}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
@@ -63,7 +63,6 @@ services:
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.download-path=/tmp
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070

View File

@@ -30,7 +30,7 @@ x-logging-defaults: &logging-defaults
services:
opencampuscodex-sepolia:
image: ${OPENCAMPUSCODEX_NITRO_IMAGE:-offchainlabs/nitro-node}:${OPENCAMPUSCODEX_SEPOLIA_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
image: ${OPENCAMPUSCODEX_NITRO_IMAGE:-offchainlabs/nitro-node}:${OPENCAMPUSCODEX_SEPOLIA_NITRO_VERSION:-v3.9.5-66e42c4}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
@@ -57,14 +57,12 @@ services:
- --execution.caching.trie-dirty-cache=${OPENCAMPUSCODEX_SEPOLIA_NITRO_PRUNED_PEBBLE_PATH_TRIE_DIRTY_CACHE:-1024}
- --execution.forwarding-target=https://rpc.open-campus-codex.gelato.digital
- --execution.rpc.gas-cap=5500000000
- --execution.rpc.state-scheme=path
- --execution.sequencer.enable=false
- --http.addr=0.0.0.0
- --http.api=eth,net,web3,arb,txpool,debug
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.download-path=/tmp
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070

View File

@@ -30,7 +30,7 @@ x-logging-defaults: &logging-defaults
services:
playblock-mainnet-archive:
image: ${PLAYBLOCK_NITRO_IMAGE:-offchainlabs/nitro-node}:${PLAYBLOCK_MAINNET_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
image: ${PLAYBLOCK_NITRO_IMAGE:-offchainlabs/nitro-node}:${PLAYBLOCK_MAINNET_NITRO_VERSION:-v3.9.5-66e42c4}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
@@ -63,7 +63,6 @@ services:
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.download-path=/tmp
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070

View File

@@ -30,7 +30,7 @@ x-logging-defaults: &logging-defaults
services:
playblock-mainnet:
image: ${PLAYBLOCK_NITRO_IMAGE:-offchainlabs/nitro-node}:${PLAYBLOCK_MAINNET_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
image: ${PLAYBLOCK_NITRO_IMAGE:-offchainlabs/nitro-node}:${PLAYBLOCK_MAINNET_NITRO_VERSION:-v3.9.5-66e42c4}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
@@ -57,14 +57,12 @@ services:
- --execution.caching.trie-dirty-cache=${PLAYBLOCK_MAINNET_NITRO_PRUNED_PEBBLE_PATH_TRIE_DIRTY_CACHE:-1024}
- --execution.forwarding-target=https://rpc.playblock.io
- --execution.rpc.gas-cap=5500000000
- --execution.rpc.state-scheme=path
- --execution.sequencer.enable=false
- --http.addr=0.0.0.0
- --http.api=eth,net,web3,arb,txpool,debug
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.download-path=/tmp
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070

View File

@@ -30,7 +30,7 @@ x-logging-defaults: &logging-defaults
services:
plume-mainnet-archive:
image: ${PLUME_NITRO_IMAGE:-offchainlabs/nitro-node}:${PLUME_MAINNET_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
image: ${PLUME_NITRO_IMAGE:-offchainlabs/nitro-node}:${PLUME_MAINNET_NITRO_VERSION:-v3.9.5-66e42c4}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
@@ -63,7 +63,6 @@ services:
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.download-path=/tmp
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070

View File

@@ -30,7 +30,7 @@ x-logging-defaults: &logging-defaults
services:
plume-mainnet:
image: ${PLUME_NITRO_IMAGE:-offchainlabs/nitro-node}:${PLUME_MAINNET_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
image: ${PLUME_NITRO_IMAGE:-offchainlabs/nitro-node}:${PLUME_MAINNET_NITRO_VERSION:-v3.9.5-66e42c4}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
@@ -57,14 +57,12 @@ services:
- --execution.caching.trie-dirty-cache=${PLUME_MAINNET_NITRO_PRUNED_PEBBLE_PATH_TRIE_DIRTY_CACHE:-1024}
- --execution.forwarding-target=https://rpc.plume.org
- --execution.rpc.gas-cap=5500000000
- --execution.rpc.state-scheme=path
- --execution.sequencer.enable=false
- --http.addr=0.0.0.0
- --http.api=eth,net,web3,arb,txpool,debug
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.download-path=/tmp
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070

View File

@@ -30,7 +30,7 @@ x-logging-defaults: &logging-defaults
services:
plume-testnet-archive:
image: ${PLUME_NITRO_IMAGE:-offchainlabs/nitro-node}:${PLUME_TESTNET_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
image: ${PLUME_NITRO_IMAGE:-offchainlabs/nitro-node}:${PLUME_TESTNET_NITRO_VERSION:-v3.9.5-66e42c4}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
@@ -63,7 +63,6 @@ services:
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.download-path=/tmp
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070

View File

@@ -30,7 +30,7 @@ x-logging-defaults: &logging-defaults
services:
plume-testnet:
image: ${PLUME_NITRO_IMAGE:-offchainlabs/nitro-node}:${PLUME_TESTNET_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
image: ${PLUME_NITRO_IMAGE:-offchainlabs/nitro-node}:${PLUME_TESTNET_NITRO_VERSION:-v3.9.5-66e42c4}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
@@ -57,14 +57,12 @@ services:
- --execution.caching.trie-dirty-cache=${PLUME_TESTNET_NITRO_PRUNED_PEBBLE_PATH_TRIE_DIRTY_CACHE:-1024}
- --execution.forwarding-target=https://testnet-rpc.plume.org
- --execution.rpc.gas-cap=5500000000
- --execution.rpc.state-scheme=path
- --execution.sequencer.enable=false
- --http.addr=0.0.0.0
- --http.api=eth,net,web3,arb,txpool,debug
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.download-path=/tmp
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070


@@ -30,7 +30,7 @@ x-logging-defaults: &logging-defaults
services:
real-mainnet-archive:
image: ${REAL_NITRO_IMAGE:-offchainlabs/nitro-node}:${REAL_MAINNET_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
image: ${REAL_NITRO_IMAGE:-offchainlabs/nitro-node}:${REAL_MAINNET_NITRO_VERSION:-v3.9.5-66e42c4}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
@@ -63,7 +63,6 @@ services:
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.download-path=/tmp
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070


@@ -30,7 +30,7 @@ x-logging-defaults: &logging-defaults
services:
real-mainnet-archive:
image: ${REAL_NITRO_IMAGE:-offchainlabs/nitro-node}:${REAL_MAINNET_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
image: ${REAL_NITRO_IMAGE:-offchainlabs/nitro-node}:${REAL_MAINNET_NITRO_VERSION:-v3.9.5-66e42c4}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
@@ -63,7 +63,6 @@ services:
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.download-path=/tmp
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070


@@ -30,7 +30,7 @@ x-logging-defaults: &logging-defaults
services:
real-mainnet:
image: ${REAL_NITRO_IMAGE:-offchainlabs/nitro-node}:${REAL_MAINNET_NITRO_VERSION:-v3.9.4-rc.2-7f582c3}
image: ${REAL_NITRO_IMAGE:-offchainlabs/nitro-node}:${REAL_MAINNET_NITRO_VERSION:-v3.9.5-66e42c4}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
@@ -57,14 +57,12 @@ services:
- --execution.caching.trie-dirty-cache=${REAL_MAINNET_NITRO_PRUNED_PEBBLE_PATH_TRIE_DIRTY_CACHE:-1024}
- --execution.forwarding-target=https://rpc.realforreal.gelato.digital
- --execution.rpc.gas-cap=5500000000
- --execution.rpc.state-scheme=path
- --execution.sequencer.enable=false
- --http.addr=0.0.0.0
- --http.api=eth,net,web3,arb,txpool,debug
- --http.corsdomain=*
- --http.port=8545
- --http.vhosts=*
- --init.download-path=/tmp
- --metrics
- --metrics-server.addr=0.0.0.0
- --metrics-server.port=6070


@@ -1 +0,0 @@
arb/nitro/arbitrum-nova-nitro-pruned-pebble-hash.yml


@@ -1 +0,0 @@
arb/nitro/alephzero-mainnet-nitro-archive-leveldb-hash.yml


@@ -1 +0,0 @@
arb/nitro/arbitrum-one-nitro-pruned-pebble-hash.yml


@@ -1 +0,0 @@
arb/nitro/arbitrum-sepolia-nitro-archive-pebble-hash.yml


@@ -1 +0,0 @@
arb/nitro/arbitrum-sepolia-nitro-pruned-pebble-hash.yml


@@ -1 +0,0 @@
arb/nitro/arbitrum-sepolia-nitro-pruned-pebble-hash.yml


@@ -1 +0,0 @@
avalanche/go/avalanche-fuji-go-pruned-pebbledb.yml


@@ -1 +0,0 @@
avalanche/go/avalanche-mainnet-go-archive-leveldb.yml


@@ -1 +0,0 @@
avalanche/go/avalanche-mainnet-go-pruned-pebbledb.yml


@@ -30,7 +30,7 @@ x-logging-defaults: &logging-defaults
services:
avalanche-fuji-archive:
image: ${AVALANCHE_GO_IMAGE:-avaplatform/avalanchego}:${AVALANCHE_FUJI_GO_VERSION:-v1.14.0}
image: ${AVALANCHE_GO_IMAGE:-avaplatform/avalanchego}:${AVALANCHE_FUJI_GO_VERSION:-v1.14.1}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle


@@ -30,7 +30,7 @@ x-logging-defaults: &logging-defaults
services:
avalanche-fuji:
image: ${AVALANCHE_GO_IMAGE:-avaplatform/avalanchego}:${AVALANCHE_FUJI_GO_VERSION:-v1.14.0}
image: ${AVALANCHE_GO_IMAGE:-avaplatform/avalanchego}:${AVALANCHE_FUJI_GO_VERSION:-v1.14.1}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle


@@ -30,7 +30,7 @@ x-logging-defaults: &logging-defaults
services:
avalanche-fuji:
image: ${AVALANCHE_GO_IMAGE:-avaplatform/avalanchego}:${AVALANCHE_FUJI_GO_VERSION:-v1.14.0}
image: ${AVALANCHE_GO_IMAGE:-avaplatform/avalanchego}:${AVALANCHE_FUJI_GO_VERSION:-v1.14.1}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle


@@ -30,7 +30,7 @@ x-logging-defaults: &logging-defaults
services:
avalanche-mainnet-archive:
image: ${AVALANCHE_GO_IMAGE:-avaplatform/avalanchego}:${AVALANCHE_MAINNET_GO_VERSION:-v1.14.0}
image: ${AVALANCHE_GO_IMAGE:-avaplatform/avalanchego}:${AVALANCHE_MAINNET_GO_VERSION:-v1.14.1}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle


@@ -30,7 +30,7 @@ x-logging-defaults: &logging-defaults
services:
avalanche-mainnet:
image: ${AVALANCHE_GO_IMAGE:-avaplatform/avalanchego}:${AVALANCHE_MAINNET_GO_VERSION:-v1.14.0}
image: ${AVALANCHE_GO_IMAGE:-avaplatform/avalanchego}:${AVALANCHE_MAINNET_GO_VERSION:-v1.14.1}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle


@@ -30,7 +30,7 @@ x-logging-defaults: &logging-defaults
services:
avalanche-mainnet:
image: ${AVALANCHE_GO_IMAGE:-avaplatform/avalanchego}:${AVALANCHE_MAINNET_GO_VERSION:-v1.14.0}
image: ${AVALANCHE_GO_IMAGE:-avaplatform/avalanchego}:${AVALANCHE_MAINNET_GO_VERSION:-v1.14.1}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle


@@ -0,0 +1,112 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# Aztec full node. See https://docs.aztec.network/network/setup/running_a_node
# Admin port (8880) is not exposed; use docker exec for admin API.
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:aztec/aztec/aztec-devnet-aztec-pruned.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/aztec-devnet \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
aztec-devnet:
image: ${AZTEC_AZTEC_IMAGE:-aztecprotocol/aztec}:${AZTEC_DEVNET_AZTEC_VERSION:-3.0.0-devnet.6-patch.1}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
ports:
- 12024:12024
- 12024:12024/udp
expose:
- 8080
environment:
AZTEC_ADMIN_PORT: '8880'
AZTEC_PORT: '8080'
DATA_DIRECTORY: /var/lib/data
ETHEREUM_HOSTS: ${ETHEREUM_SEPOLIA_EXECUTION_RPC}
L1_CONSENSUS_HOST_URLS: ${ETHEREUM_SEPOLIA_BEACON_REST}
LOG_LEVEL: ${AZTEC_LOG_LEVEL:-info}
P2P_IP: ${IP}
P2P_PORT: '12024'
entrypoint: [node, --no-warnings, /usr/src/yarn-project/aztec/dest/bin/index.js, start]
command:
- --archiver
- --network=devnet
- --node
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${AZTEC_DEVNET_AZTEC_PRUNED_DATA:-aztec-devnet-aztec-pruned}:/var/lib/data
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=false
- traefik.enable=true
- traefik.http.middlewares.aztec-devnet-aztec-pruned-stripprefix.stripprefix.prefixes=/aztec-devnet
- traefik.http.services.aztec-devnet-aztec-pruned.loadbalancer.server.port=8080
- ${NO_SSL:-traefik.http.routers.aztec-devnet-aztec-pruned.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.aztec-devnet-aztec-pruned.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.aztec-devnet-aztec-pruned.rule=Host(`$DOMAIN`) && (Path(`/aztec-devnet`) || Path(`/aztec-devnet/`))}
- ${NO_SSL:+traefik.http.routers.aztec-devnet-aztec-pruned.rule=Path(`/aztec-devnet`) || Path(`/aztec-devnet/`)}
- traefik.http.routers.aztec-devnet-aztec-pruned.middlewares=aztec-devnet-aztec-pruned-stripprefix, ipallowlist
volumes:
aztec-devnet-aztec-pruned:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: aztec-devnet
method-groups:
enabled:
- debug
- filter
methods:
disabled:
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
...
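The `.jwtsecret` generated in the usage comment above should always be `0x` followed by 64 hex characters (32 random bytes). A quick sanity check, sketched with `od` instead of `xxd` since `xxd` may be absent on minimal images:

```shell
# Sketch: generate a secret the same way as the usage comment above,
# then confirm its length. od -An -tx1 prints the 32 bytes as hex;
# tr strips the spaces and newlines, leaving 64 hex chars after "0x".
secret="0x$(head -c32 /dev/urandom | od -An -tx1 | tr -d ' \n')"
echo "${#secret}"   # → 66
```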


@@ -0,0 +1,112 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# Aztec full node. See https://docs.aztec.network/network/setup/running_a_node
# Admin port (8880) is not exposed; use docker exec for admin API.
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:aztec/aztec/aztec-testnet-aztec-pruned.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/aztec-testnet \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
aztec-testnet:
image: ${AZTEC_AZTEC_IMAGE:-aztecprotocol/aztec}:${AZTEC_TESTNET_AZTEC_VERSION:-3.0.2}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
user: root
ports:
- 13009:13009
- 13009:13009/udp
expose:
- 8080
environment:
AZTEC_ADMIN_PORT: '8880'
AZTEC_PORT: '8080'
DATA_DIRECTORY: /var/lib/data
ETHEREUM_HOSTS: ${ETHEREUM_SEPOLIA_EXECUTION_RPC}
L1_CONSENSUS_HOST_URLS: ${ETHEREUM_SEPOLIA_BEACON_REST}
LOG_LEVEL: ${AZTEC_LOG_LEVEL:-info}
P2P_IP: ${IP}
P2P_PORT: '13009'
entrypoint: [node, --no-warnings, /usr/src/yarn-project/aztec/dest/bin/index.js, start]
command:
- --archiver
- --network=testnet
- --node
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${AZTEC_TESTNET_AZTEC_PRUNED_DATA:-aztec-testnet-aztec-pruned}:/var/lib/data
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=false
- traefik.enable=true
- traefik.http.middlewares.aztec-testnet-aztec-pruned-stripprefix.stripprefix.prefixes=/aztec-testnet
- traefik.http.services.aztec-testnet-aztec-pruned.loadbalancer.server.port=8080
- ${NO_SSL:-traefik.http.routers.aztec-testnet-aztec-pruned.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.aztec-testnet-aztec-pruned.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.aztec-testnet-aztec-pruned.rule=Host(`$DOMAIN`) && (Path(`/aztec-testnet`) || Path(`/aztec-testnet/`))}
- ${NO_SSL:+traefik.http.routers.aztec-testnet-aztec-pruned.rule=Path(`/aztec-testnet`) || Path(`/aztec-testnet/`)}
- traefik.http.routers.aztec-testnet-aztec-pruned.middlewares=aztec-testnet-aztec-pruned-stripprefix, ipallowlist
volumes:
aztec-testnet-aztec-pruned:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: aztec-testnet
method-groups:
enabled:
- debug
- filter
methods:
disabled:
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
...


@@ -1,5 +1,6 @@
#!/bin/bash
BASEPATH="$(dirname "$0")"
backup_dir="/backup"
if [[ -n $2 ]]; then
@@ -11,6 +12,36 @@ else
fi
fi
# Function to generate metadata for a single volume
generate_volume_metadata() {
local volume_key=$1
local source_folder=$2
local metadata_file=$3
prefix="/var/lib/docker/volumes/rpc_$volume_key"
static_file_list="$BASEPATH/static-file-path-list.txt"
# Initialize metadata file
echo "Static file paths and sizes for volume: rpc_$volume_key" > "$metadata_file"
echo "Generated: $(date)" >> "$metadata_file"
echo "" >> "$metadata_file"
# Check each static file path
if [[ -f "$static_file_list" ]]; then
while IFS= read -r path; do
# Check if the path exists
if [[ -e "$prefix/_data/$path" ]]; then
# Get the size
size=$(du -sL "$prefix/_data/$path" 2>/dev/null | awk '{print $1}')
# Format size in human-readable format
size_formatted=$(echo "$(( size * 1024 ))" | numfmt --to=iec --suffix=B --format="%.2f")
# Write to metadata file
echo "$size_formatted $path" >> "$metadata_file"
fi
done < "$static_file_list"
fi
}
# Read the JSON input and extract the list of keys
keys=$(cat /root/rpc/$1.yml | yaml2json - | jq '.volumes' | jq -r 'keys[]')
@@ -37,15 +68,37 @@ for key in $keys; do
folder_size_gb=$(printf "%.0f" "$folder_size")
target_file="rpc_$key-$(date +'%Y-%m-%d-%H-%M-%S')-${folder_size_gb}G.tar.zst"
timestamp=$(date +'%Y-%m-%d-%H-%M-%S')
target_file="rpc_$key-${timestamp}-${folder_size_gb}G.tar.zst"
metadata_file_name="rpc_$key-${timestamp}-${folder_size_gb}G.txt"
#echo "$target_file"
if [[ -n $2 ]]; then
# Upload volume archive
tar -cf - --dereference "$source_folder" | pv -pterb -s $(du -sb "$source_folder" | awk '{print $1}') | zstd | curl -X PUT --upload-file - "$2/null/uploading-$target_file"
curl -X MOVE -H "Destination: /null/$target_file" "$2/null/uploading-$target_file"
# Generate and upload metadata file
echo "Generating metadata for volume: rpc_$key"
temp_metadata="/tmp/$metadata_file_name"
generate_volume_metadata "$key" "$source_folder" "$temp_metadata"
curl -X PUT --upload-file "$temp_metadata" "$2/null/$metadata_file_name"
rm -f "$temp_metadata"
else
# Create volume archive
tar -cf - --dereference "$source_folder" | pv -pterb -s $(du -sb "$source_folder" | awk '{print $1}') | zstd -o "/backup/uploading-$target_file"
mv "/backup/uploading-$target_file" "/backup/$target_file"
# Generate metadata file
echo "Generating metadata for volume: rpc_$key"
generate_volume_metadata "$key" "$source_folder" "/backup/$metadata_file_name"
fi
done
# Run show-size.sh to display overall summary
echo ""
echo "=== Overall Size Summary ==="
if [[ -f "$BASEPATH/show-size.sh" ]]; then
"$BASEPATH/show-size.sh" "$1" 2>&1
fi
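The size lines written by `generate_volume_metadata` come from `du -s`, which reports KiB, so the script multiplies by 1024 before `numfmt` renders an IEC byte count. A minimal sketch of that conversion, using a hypothetical 2048 KiB path:

```shell
# Hypothetical example of the size formatting used in generate_volume_metadata:
# du -sL reports KiB, so multiply by 1024 to get bytes before numfmt.
size_kib=2048
size_formatted=$(numfmt --to=iec --suffix=B --format="%.2f" "$(( size_kib * 1024 ))")
echo "$size_formatted"
```

Skipping the multiplication would understate every size by a factor of 1024, since `numfmt` assumes its input is already in bytes.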

backup-peers.sh (executable file, 377 lines)

@@ -0,0 +1,377 @@
#!/bin/bash
# Script to back up peers from all running nodes
# Can be run as a cron job to periodically back up peer lists
# Usage: ./backup-peers.sh [backup-directory] [--verbose]
BASEPATH="$(dirname "$0")"
source "$BASEPATH/.env"
# Parse arguments
VERBOSE=false
BACKUP_DIR=""
for arg in "$@"; do
case "$arg" in
--verbose|-v)
VERBOSE=true
;;
--help|-h)
echo "Usage: $0 [backup-directory] [--verbose|-v]"
echo ""
echo " backup-directory: Optional. Directory to store backups (default: ./peer-backups)"
echo " --verbose, -v: Enable verbose output"
exit 0
;;
*)
if [ -z "$BACKUP_DIR" ] && [[ ! "$arg" =~ ^- ]]; then
BACKUP_DIR="$arg"
fi
;;
esac
done
# Default backup directory if not provided
if [ -z "$BACKUP_DIR" ]; then
BACKUP_DIR="$BASEPATH/peer-backups"
fi
# Create backup directory if it doesn't exist
mkdir -p "$BACKUP_DIR"
# Timestamp for this backup run
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
# Blacklist for compose files (same as show-status.sh)
blacklist=(
"drpc.yml" "drpc-free.yml" "drpc-home.yml" # dshackles
"arbitrum-one-mainnet-arbnode-archive-trace.yml" # always behind and no reference rpc
"ethereum-beacon-mainnet-lighthouse-pruned-blobs" # can't handle beacon rest api yet
"rpc.yml" "monitoring.yml" "ftp.yml" "backup-http.yml" "base.yml" # no rpcs
)
# Path blacklist (read from file if it exists)
path_blacklist=()
if [ -f "$BASEPATH/path-blacklist.txt" ]; then
while IFS= read -r line; do
if [ -n "$line" ]; then
path_blacklist+=("$line")
fi
done < "$BASEPATH/path-blacklist.txt"
fi
# Protocol and domain settings
if [ -n "$NO_SSL" ]; then
PROTO="http"
DOMAIN="${DOMAIN:-0.0.0.0}"
else
PROTO="https"
# For HTTPS, DOMAIN should be set
if [ -z "$DOMAIN" ]; then
echo "Error: DOMAIN variable not found in $BASEPATH/.env" >&2
echo "Please set DOMAIN in your .env file" >&2
exit 1
fi
fi
# Function to extract RPC paths from a compose file
extract_rpc_paths() {
local compose_file="$1"
local full_path="$BASEPATH/${compose_file}"
if [ ! -f "$full_path" ]; then
return 1
fi
# Extract paths using grep (same method as peer-count.sh)
# Try Perl regex first, fallback to extended regex if -P is not supported
pathlist=$(cat "$full_path" | grep -oP "stripprefix\.prefixes.*?/\K[^\"]+" 2>/dev/null)
if [ $? -ne 0 ] || [ -z "$pathlist" ]; then
# Fallback for systems without Perl regex support
pathlist=$(cat "$full_path" | grep -oE "stripprefix\.prefixes[^:]*:.*?/([^\"]+)" 2>/dev/null | sed -E 's/.*\/([^"]+)/\1/' | grep -v '^$')
fi
if [ -z "$pathlist" ]; then
return 1
fi
echo "$pathlist"
}
# Function to check if a path should be included
should_include_path() {
local path="$1"
# Always exclude paths ending with /node (consensus client endpoints)
if [[ "$path" =~ /node$ ]]; then
if [ "$VERBOSE" = true ]; then
echo " Path $path excluded: ends with /node"
fi
return 1
fi
for word in "${path_blacklist[@]}"; do
# Unescape the pattern (handle \-node -> -node)
pattern=$(echo "$word" | sed 's/\\-/-/g')
# Use -- to prevent grep from interpreting pattern as options
if echo "$path" | grep -qE -- "$pattern"; then
if [ "$VERBOSE" = true ]; then
echo " Path $path matches blacklist pattern: $word"
fi
return 1
fi
done
return 0
}
# Function to backup peers from a single RPC endpoint
backup_peers_from_path() {
local compose_file="$1"
local path="$2"
local compose_name="${compose_file%.yml}"
# Sanitize compose name and path for filename
local safe_compose_name=$(echo "$compose_name" | sed 's/[^a-zA-Z0-9_-]/_/g')
local safe_path=$(echo "$path" | sed 's|[^a-zA-Z0-9_-]|_|g')
# Ensure path starts with /
if [[ ! "$path" =~ ^/ ]]; then
path="/$path"
fi
local RPC_URL="${PROTO}://${DOMAIN}${path}"
# Try admin_peers first (returns detailed peer info)
response=$(curl --ipv4 -L -s -X POST "$RPC_URL" \
-H "Content-Type: application/json" \
--data '{"jsonrpc":"2.0","method":"admin_peers","params":[],"id":1}' \
--max-time 10 2>/dev/null)
# Check for curl errors
if [ $? -ne 0 ]; then
echo "✗ Failed to connect to $compose_file ($path): curl error"
return 1
fi
# Check if we got a valid response
if echo "$response" | jq -e '.result' > /dev/null 2>&1; then
peer_count=$(echo "$response" | jq -r '.result | length')
if [ "$peer_count" -gt 0 ]; then
# Extract enodes
enodes=$(echo "$response" | jq -r '.result[].enode' 2>/dev/null | grep -v '^$' | grep -v '^null$')
if [ -n "$enodes" ]; then
# Create backup file
local backup_file="$BACKUP_DIR/${safe_compose_name}__${safe_path}__${TIMESTAMP}.json"
# Create JSON structure with metadata
{
echo "{"
echo " \"compose_file\": \"$compose_file\","
echo " \"rpc_path\": \"$path\","
echo " \"rpc_url\": \"$RPC_URL\","
echo " \"timestamp\": \"$TIMESTAMP\","
echo " \"peer_count\": $peer_count,"
echo " \"peers\": ["
# Write enodes as JSON array
first=true
while IFS= read -r enode; do
if [ -z "$enode" ] || [ "$enode" = "null" ]; then
continue
fi
if [ "$first" = true ]; then
first=false
else
echo ","
fi
# Escape the enode string for JSON
escaped_enode=$(echo "$enode" | sed 's/\\/\\\\/g' | sed 's/"/\\"/g')
echo -n " \"$escaped_enode\""
done <<< "$enodes"
echo ""
echo " ]"
echo "}"
} > "$backup_file"
# Also create a simple text file with just enodes (one per line) for easy playback
local backup_txt_file="$BACKUP_DIR/${safe_compose_name}__${safe_path}__${TIMESTAMP}.txt"
echo "$enodes" > "$backup_txt_file"
# Extract just the filename for display
backup_filename=$(basename "$backup_file" 2>/dev/null || echo "${backup_file##*/}")
echo "✓ Backed up $peer_count peer(s) from $compose_file ($path) to $backup_filename"
return 0
fi
else
if [ "$VERBOSE" = true ]; then
echo "⚠ No peers found for $compose_file ($path)"
fi
return 2 # Return 2 for "no peers" (not a failure, just nothing to backup)
fi
else
# Check if this is a method not found error (consensus client or admin API disabled)
error_code=$(echo "$response" | jq -r '.error.code // empty' 2>/dev/null)
error_message=$(echo "$response" | jq -r '.error.message // empty' 2>/dev/null)
if [ -n "$error_code" ] && [ "$error_code" != "null" ]; then
# Check if it's a method not found error (likely consensus client)
if [ "$error_code" = "-32601" ] || [ "$error_code" = "32601" ]; then
# Method not found - likely consensus client, skip silently
return 1
else
# Other error
echo "$compose_file ($path): RPC error $error_code - ${error_message:-unknown error}"
return 1
fi
fi
# Try net_peerCount as fallback (but we can't get enodes from this)
response=$(curl --ipv4 -L -s -X POST "$RPC_URL" \
-H "Content-Type: application/json" \
--data '{"jsonrpc":"2.0","method":"net_peerCount","params":[],"id":1}' \
--max-time 10 2>/dev/null)
if echo "$response" | jq -e '.result' > /dev/null 2>&1; then
hex_value=$(echo "$response" | jq -r '.result')
# Convert hex to decimal (net_peerCount returns hex like "0x10")
peer_count=$((hex_value))
if [ "$peer_count" -gt 0 ]; then
echo "$compose_file ($path) has $peer_count peer(s) but admin_peers not available (cannot backup enodes)"
else
echo "$compose_file ($path): no peers connected"
fi
else
# Couldn't get peer count either
if [ -z "$response" ]; then
echo "$compose_file ($path): no response from RPC endpoint"
else
echo "$compose_file ($path): RPC endpoint not accessible or invalid"
fi
fi
return 1
fi
}
# Main execution
if [ -z "$COMPOSE_FILE" ]; then
echo "Error: COMPOSE_FILE not found in $BASEPATH/.env" >&2
exit 1
fi
# Split COMPOSE_FILE by colon
IFS=':' read -ra parts <<< "$COMPOSE_FILE"
total_backed_up=0
total_failed=0
total_skipped=0
total_no_peers=0
echo "Starting peer backup at $(date)"
echo "Backup directory: $BACKUP_DIR"
echo "COMPOSE_FILE contains: ${#parts[@]} compose file(s)"
echo ""
# Process each compose file
for part in "${parts[@]}"; do
# Handle compose file name - part might already have .yml or might not
if [[ "$part" == *.yml ]]; then
compose_file="$part"
else
compose_file="${part}.yml"
fi
# Check if file exists
if [ ! -f "$BASEPATH/$compose_file" ]; then
echo "⚠ Skipping $compose_file: file not found"
total_skipped=$((total_skipped + 1))
continue
fi
# Check blacklist
include=true
for word in "${blacklist[@]}"; do
# Use -- to prevent grep from interpreting pattern as options
if echo "$compose_file" | grep -qE -- "$word"; then
include=false
break
fi
done
if [ "$include" = false ]; then
total_skipped=$((total_skipped + 1))
continue
fi
# Extract RPC paths from compose file
paths=$(extract_rpc_paths "$compose_file")
if [ -z "$paths" ]; then
echo "⚠ Skipping $compose_file: no RPC paths found"
total_skipped=$((total_skipped + 1))
continue
fi
# Process each path
path_found=false
# Use while loop with read to safely handle paths with spaces or special characters
while IFS= read -r path || [ -n "$path" ]; do
# Skip empty paths
if [ -z "$path" ]; then
continue
fi
# Check path blacklist
if should_include_path "$path"; then
path_found=true
backup_peers_from_path "$compose_file" "$path"
exit_code=$?
if [ $exit_code -eq 0 ]; then
total_backed_up=$((total_backed_up + 1))
elif [ $exit_code -eq 2 ]; then
# No peers (not a failure)
total_no_peers=$((total_no_peers + 1))
else
total_failed=$((total_failed + 1))
fi
else
if [ "$VERBOSE" = true ]; then
echo "⚠ Skipping path $path from $compose_file: blacklisted"
fi
fi
done <<< "$paths"
if [ "$path_found" = false ]; then
total_skipped=$((total_skipped + 1))
fi
done
echo ""
echo "=========================================="
echo "Backup Summary"
echo "=========================================="
echo "Total nodes backed up: $total_backed_up"
if [ $total_no_peers -gt 0 ]; then
echo "Total nodes with no peers: $total_no_peers"
fi
echo "Total nodes failed: $total_failed"
echo "Total nodes skipped: $total_skipped"
echo "Backup directory: $BACKUP_DIR"
echo "Completed at $(date)"
echo ""
# Optional: Clean up old backups (keep last 30 days)
if [ -n "$CLEANUP_OLD_BACKUPS" ] && [ "$CLEANUP_OLD_BACKUPS" = "true" ]; then
echo "Cleaning up backups older than 30 days..."
find "$BACKUP_DIR" -name "*.json" -type f -mtime +30 -delete
find "$BACKUP_DIR" -name "*.txt" -type f -mtime +30 -delete
echo "Cleanup complete"
fi
exit 0
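One detail worth noting in the blacklist checks above: the `--` passed to `grep` stops a pattern that begins with a dash (such as the `\-node` entries mentioned for `path-blacklist.txt`) from being parsed as a grep option. A small illustration with hypothetical values:

```shell
# Why the script passes -- to grep: a pattern starting with "-" would
# otherwise be read as an option and make grep fail.
pattern="-node"
path="aztec-testnet-node"
if echo "$path" | grep -qE -- "$pattern"; then
  echo "blacklisted"   # → blacklisted
fi
```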


@@ -1 +0,0 @@
op/erigon/base-mainnet-op-erigon-archive-trace.yml


@@ -1 +0,0 @@
op/reth/base-mainnet-op-reth-archive-trace.yml


@@ -1 +0,0 @@
op/reth/base-mainnet-op-reth-pruned-trace.yml


@@ -1 +0,0 @@
op/geth/base-mainnet-op-geth-pruned-pebble-path.yml


@@ -1 +0,0 @@
op/reth/base-sepolia-op-reth-pruned-trace.yml


@@ -1 +0,0 @@
op/geth/base-sepolia-op-geth-pruned-pebble-path.yml


@@ -1 +0,0 @@
berachain/reth/berachain-bartio-reth-archive-trace.yml


@@ -1 +0,0 @@
berachain/reth/berachain-bepolia-reth-archive-trace.yml


@@ -1 +0,0 @@
berachain/reth/berachain-mainnet-reth-archive-trace.yml


@@ -1 +0,0 @@
berachain/reth/berachain-mainnet-reth-pruned-trace.yml


@@ -30,7 +30,7 @@ x-logging-defaults: &logging-defaults
services:
bitcoin-cash-mainnet:
image: ${BITCOIN_CASH_BITCOIND_IMAGE:-bitcoinabc/bitcoin-abc}:${BITCOIN_CASH_MAINNET_BITCOIND_VERSION:-0.32.4}
image: ${BITCOIN_CASH_BITCOIND_IMAGE:-bitcoinabc/bitcoin-abc}:${BITCOIN_CASH_MAINNET_BITCOIND_VERSION:-0.32.7}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle


@@ -30,7 +30,7 @@ x-logging-defaults: &logging-defaults
services:
bitcoin-cash-testnet:
image: ${BITCOIN_CASH_BITCOIND_IMAGE:-bitcoinabc/bitcoin-abc}:${BITCOIN_CASH_TESTNET_BITCOIND_VERSION:-0.32.4}
image: ${BITCOIN_CASH_BITCOIND_IMAGE:-bitcoinabc/bitcoin-abc}:${BITCOIN_CASH_TESTNET_BITCOIND_VERSION:-0.32.7}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle


@@ -30,7 +30,7 @@ x-logging-defaults: &logging-defaults
services:
bitcoin-mainnet:
image: ${BITCOIN_BITCOIND_IMAGE:-lncm/bitcoind}:${BITCOIN_MAINNET_BITCOIND_VERSION:-v27.2}
image: ${BITCOIN_BITCOIND_IMAGE:-lncm/bitcoind}:${BITCOIN_MAINNET_BITCOIND_VERSION:-v28.0}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle


@@ -30,7 +30,7 @@ x-logging-defaults: &logging-defaults
services:
bitcoin-testnet:
image: ${BITCOIN_BITCOIND_IMAGE:-lncm/bitcoind}:${BITCOIN_TESTNET_BITCOIND_VERSION:-v27.2}
image: ${BITCOIN_BITCOIND_IMAGE:-lncm/bitcoind}:${BITCOIN_TESTNET_BITCOIND_VERSION:-v28.0}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle


@@ -1 +0,0 @@
op/geth/blast-mainnet-op-geth-archive-leveldb-hash.yml


@@ -1 +0,0 @@
op/geth/blast-mainnet-op-geth-pruned-pebble-path.yml


@@ -1 +0,0 @@
op/geth/blast-sepolia-op-geth-pruned-pebble-hash.yml


@@ -23,18 +23,33 @@ for path in $pathlist; do
RPC_URL="https://$DOMAIN/$path"
response_file=$(mktemp)
http_status_code=$(curl --ipv4 -m 1 -s -X POST -w "%{http_code}" -o "$response_file" -H "Content-Type: application/json" --data '{"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["latest", false],"id":1}' $RPC_URL)
# Detect Starknet vs Ethereum based on path
if echo "$path" | grep -qi "starknet"; then
rpc_method='{"jsonrpc":"2.0","method":"starknet_getBlockWithTxHashes","params":["latest"],"id":1}'
is_starknet=true
else
rpc_method='{"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["latest", false],"id":1}'
is_starknet=false
fi
http_status_code=$(curl --ipv4 -m 1 -s -X POST -w "%{http_code}" -o "$response_file" -H "Content-Type: application/json" --data "$rpc_method" $RPC_URL)
if [ $? -eq 0 ]; then
if [[ $http_status_code -eq 200 ]]; then
response=$(cat "$response_file")
latest_block_number=$(echo "$response" | jq -r '.result.number')
latest_block_number_decimal=$((16#${latest_block_number#0x}))
if $is_starknet; then
# Starknet returns decimal block_number
latest_block_number_decimal=$(echo "$response" | jq -r '.result.block_number')
else
# Ethereum returns hex number
latest_block_number=$(echo "$response" | jq -r '.result.number')
latest_block_number_decimal=$((16#${latest_block_number#0x}))
fi
echo "$latest_block_number_decimal"
exit 0;
fi
fi
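The conversion step above relies on bash base-16 arithmetic for the Ethereum branch, while the Starknet branch needs no conversion at all. A standalone sketch of both, with hypothetical responses:

```shell
# Hypothetical responses illustrating the two branches above: Ethereum
# clients return the block number as a hex string, Starknet as decimal.
eth_hex="0x10d4f"
eth_decimal=$((16#${eth_hex#0x}))   # strip "0x", interpret as base 16
starknet_decimal=68943              # block_number field is already decimal
echo "$eth_decimal"                 # → 68943
```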


@@ -1 +0,0 @@
op/geth/bob-mainnet-op-geth-archive-pebble-hash.yml


@@ -1 +0,0 @@
op/geth/bob-mainnet-op-geth-pruned-pebble-hash.yml


@@ -1 +0,0 @@
op/geth/bob-mainnet-op-geth-pruned-pebble-path.yml


@@ -1 +0,0 @@
op/erigon/boba-mainnet-op-erigon-archive-trace.yml


@@ -1 +0,0 @@
op/erigon/boba-mainnet-op-erigon-archive-trace.yml


@@ -1 +0,0 @@
op/geth/boba-mainnet-op-geth-pruned-pebble-path.yml


@@ -1 +0,0 @@
bsc/erigon3/bsc-chapel-erigon3-minimal-trace.yml


@@ -1 +0,0 @@
bsc/bsc/bsc-chapel-bsc-pruned-pebble-path.yml


@@ -1 +0,0 @@
bsc/erigon3/bsc-mainnet-erigon3-pruned-trace.yml


@@ -1 +0,0 @@
bsc/erigon3/bsc-mainnet-erigon3-minimal-trace.yml


@@ -1 +0,0 @@
bsc/bsc/bsc-mainnet-bsc-pruned-pebble-path.yml


@@ -30,7 +30,7 @@ x-logging-defaults: &logging-defaults
services:
bsc-chapel:
image: ${BSC_BSC_IMAGE:-ghcr.io/bnb-chain/bsc}:${BSC_CHAPEL_BSC_VERSION:-1.6.4}
image: ${BSC_BSC_IMAGE:-ghcr.io/bnb-chain/bsc}:${BSC_CHAPEL_BSC_VERSION:-1.6.6}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
@@ -78,6 +78,7 @@ services:
- --rpc.txfeecap=0
- --state.scheme=path
- --syncmode=snap
- --txpool.pricelimit=50000000
- --ws
- --ws.addr=0.0.0.0
- --ws.api=eth,net,web3,txpool,debug,admin,parlia
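The new `--txpool.pricelimit=50000000` is denominated in wei; as a sanity check (assuming 1 gwei = 10^9 wei) that floor works out to 0.05 gwei:

```shell
# Sanity check on the units of --txpool.pricelimit: the flag is in wei,
# so 50000000 wei is a 0.05 gwei minimum gas price for pool admission.
price_wei=50000000
awk -v w="$price_wei" 'BEGIN { printf "%.2f gwei\n", w / 1e9 }'   # → 0.05 gwei
```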


@@ -32,7 +32,7 @@ x-logging-defaults: &logging-defaults
services:
bsc-mainnet-minimal:
image: ${BSC_BSC_IMAGE:-ghcr.io/bnb-chain/bsc}:${BSC_MAINNET_BSC_VERSION:-1.6.4}
image: ${BSC_BSC_IMAGE:-ghcr.io/bnb-chain/bsc}:${BSC_MAINNET_BSC_VERSION:-1.6.6}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
@@ -78,7 +78,6 @@ services:
- --metrics
- --metrics.addr=0.0.0.0
- --metrics.port=6060
- --multidatabase=true
- --nat=extip:${IP}
- --port=14596
- --rpc.gascap=600000000
@@ -86,6 +85,7 @@ services:
- --state.scheme=path
- --syncmode=snap
- --tries-verify-mode=none
- --txpool.pricelimit=50000000
- --ws
- --ws.addr=0.0.0.0
- --ws.api=eth,net,web3,txpool,debug,admin,parlia


@@ -79,6 +79,7 @@ services:
- --rpc.txfeecap=0
- --state.scheme=path
- --syncmode=full
- --txpool.pricelimit=50000000
- --vmtrace=firehose
- --ws
- --ws.addr=0.0.0.0


@@ -30,7 +30,7 @@ x-logging-defaults: &logging-defaults
services:
bsc-mainnet:
image: ${BSC_BSC_IMAGE:-ghcr.io/bnb-chain/bsc}:${BSC_MAINNET_BSC_VERSION:-1.6.4}
image: ${BSC_BSC_IMAGE:-ghcr.io/bnb-chain/bsc}:${BSC_MAINNET_BSC_VERSION:-1.6.6}
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
@@ -78,6 +78,7 @@ services:
- --rpc.txfeecap=0
- --state.scheme=path
- --syncmode=snap
- --txpool.pricelimit=50000000
- --ws
- --ws.addr=0.0.0.0
- --ws.api=eth,net,web3,txpool,debug,admin,parlia


@@ -0,0 +1,160 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# This node is built from source with architecture-specific optimizations
# Build command: docker compose build --build-arg ARCH_TARGET=${ARCH_TARGET:-native} bsc-chapel-reth
#
# IMPORTANT: Cache optimization considerations
# If running multiple nodes on the same machine, be aware that:
# - L3 cache is shared across all cores, causing cache contention
# - Multiple nodes compete for cache space, reducing optimization effectiveness
# - Consider CPU pinning to minimize cache conflicts:
# docker run --cpuset-cpus="0-7" bsc-chapel-reth # Pin to specific cores
# - For AMD X3D CPUs, CCD0 (cores 0-7) has the 3D V-Cache
# - For multi-node setups, generic builds may perform better than cache-optimized ones
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:bsc/reth/bsc-chapel-reth-archive-trace-triedb.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/bsc-chapel-reth \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
bsc-chapel-reth:
build:
context: ./
dockerfile: reth.Dockerfile
args:
LLVM_IMAGE: ${LLVM_IMAGE:-snowstep/llvm}
LLVM_VERSION: ${LLVM_VERSION:-20250912105042}
RETH_VERSION: ${BSC_CHAPEL_RETH_VERSION:-v0.0.7-beta}
RETH_REPO: ${BSC_CHAPEL_RETH_REPO:-https://github.com/bnb-chain/reth-bsc.git}
ARCH_TARGET: ${ARCH_TARGET:-native}
PROFILE: ${RETH_BUILD_PROFILE:-maxperf}
BUILD_BSC_RETH: true
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
memlock: -1 # Disable memory locking limits (for in-memory DBs like MDBX)
user: root
ports:
- 12431:12431
- 12431:12431/udp
expose:
- 8545
- 9001
entrypoint: [reth, node]
command:
- --chain=bsc-testnet
- --datadir=/root/.local/share/reth
- --discovery.port=12431
- --engine.cross-block-cache-size=${BSC_CHAPEL_RETH_STATE_CACHE:-4096}
- --engine.memory-block-buffer-target=128
- --engine.parallel-sparse-trie
- --gpo.maxprice=500000000
- --http
- --http.addr=0.0.0.0
- --http.api=admin,debug,eth,net,trace,txpool,web3,rpc,reth,ots,flashbots,mev
- --http.corsdomain=*
- --http.port=8545
- --max-inbound-peers=50
- --max-outbound-peers=50
- --metrics=0.0.0.0:9001
- --nat=extip:${IP}
- --pooled-tx-response-soft-limit=20971520
- --port=12431
- --rpc-cache.max-blocks=10000
- --rpc-cache.max-concurrent-db-requests=2048
- --rpc.gascap=600000000
- --rpc.max-blocks-per-filter=0
- --rpc.max-connections=50000
- --rpc.max-logs-per-response=0
- --statedb.triedb
- --ws
- --ws.addr=0.0.0.0
- --ws.api=admin,debug,eth,net,trace,txpool,web3,rpc,reth,ots,flashbots,mev
- --ws.origins=*
- --ws.port=8545
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${BSC_CHAPEL_RETH_ARCHIVE_TRACE_TRIEDB_DATA:-bsc-chapel-reth-archive-trace-triedb}:/root/.local/share/reth
- ./bsc/chapel:/config
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=true
- prometheus-scrape.port=9001
- prometheus-scrape.path=/metrics
- traefik.enable=true
- traefik.http.middlewares.bsc-chapel-reth-archive-trace-triedb-stripprefix.stripprefix.prefixes=/bsc-chapel-reth
- traefik.http.services.bsc-chapel-reth-archive-trace-triedb.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.bsc-chapel-reth-archive-trace-triedb.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.bsc-chapel-reth-archive-trace-triedb.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.bsc-chapel-reth-archive-trace-triedb.rule=Host(`$DOMAIN`) && (Path(`/bsc-chapel-reth`) || Path(`/bsc-chapel-reth/`))}
- ${NO_SSL:+traefik.http.routers.bsc-chapel-reth-archive-trace-triedb.rule=Path(`/bsc-chapel-reth`) || Path(`/bsc-chapel-reth/`)}
- traefik.http.routers.bsc-chapel-reth-archive-trace-triedb.middlewares=bsc-chapel-reth-archive-trace-triedb-stripprefix, ipallowlist
shm_size: 2gb
volumes:
bsc-chapel-reth-archive-trace-triedb:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: bsc-testnet
method-groups:
enabled:
- debug
- filter
- trace
methods:
disabled:
- name: eth_getProof
- name: eth_getAccount
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
...
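The paired `${NO_SSL:-...}` / `${NO_SSL:+...}` Traefik labels in the file above rely on POSIX parameter expansion, which Compose interpolation mirrors: `:-` substitutes the fallback only when the variable is unset or empty, while `:+` substitutes only when it is set and non-empty, so exactly one branch of each label pair survives. A minimal shell sketch of the toggle (label strings are placeholders, not the real Traefik labels):

```shell
# NO_SSL unset or empty: ":-" emits its fallback, ":+" emits nothing
unset NO_SSL
echo "${NO_SSL:-tls-router-label}"    # prints: tls-router-label
echo "${NO_SSL:+plain-router-label}"  # prints nothing (empty line)

# NO_SSL set and non-empty: ":-" passes the value through, ":+" emits its text
NO_SSL=true
echo "${NO_SSL:-tls-router-label}"    # prints: true
echo "${NO_SSL:+plain-router-label}"  # prints: plain-router-label
```

This is why setting `NO_SSL` disables the `websecure` entrypoint and certresolver labels while enabling the plain-HTTP router rule.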


@@ -49,7 +49,7 @@ services:
args:
LLVM_IMAGE: ${LLVM_IMAGE:-snowstep/llvm}
LLVM_VERSION: ${LLVM_VERSION:-20250912105042}
-RETH_VERSION: ${BSC_CHAPEL_RETH_VERSION:-v0.0.4-archivenode-alpha}
+RETH_VERSION: ${BSC_CHAPEL_RETH_VERSION:-v0.0.7-beta}
RETH_REPO: ${BSC_CHAPEL_RETH_REPO:-https://github.com/bnb-chain/reth-bsc.git}
ARCH_TARGET: ${ARCH_TARGET:-native}
PROFILE: ${RETH_BUILD_PROFILE:-maxperf}
@@ -83,7 +83,7 @@ services:
- --engine.cross-block-cache-size=${BSC_CHAPEL_RETH_STATE_CACHE:-4096}
- --engine.memory-block-buffer-target=128
- --engine.parallel-sparse-trie
-- --gpo.maxprice=1000000
+- --gpo.maxprice=500000000
- --http
- --http.addr=0.0.0.0
- --http.api=admin,debug,eth,net,trace,txpool,web3,rpc,reth,ots,flashbots,mev


@@ -0,0 +1,161 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# This node is built from source with architecture-specific optimizations
# Build command: docker compose build --build-arg ARCH_TARGET=${ARCH_TARGET:-native} bsc-chapel-reth-pruned
#
# IMPORTANT: Cache optimization considerations
# If running multiple nodes on the same machine, be aware that:
# - L3 cache is shared across all cores, causing cache contention
# - Multiple nodes compete for cache space, reducing optimization effectiveness
# - Consider CPU pinning to minimize cache conflicts:
# docker run --cpuset-cpus="0-7" bsc-chapel-reth-pruned # Pin to specific cores
# - For AMD X3D CPUs, CCD0 (cores 0-7) has the 3D V-Cache
# - For multi-node setups, generic builds may perform better than cache-optimized ones
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:bsc/reth/bsc-chapel-reth-pruned-trace-triedb.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/bsc-chapel-reth-pruned \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
bsc-chapel-reth-pruned:
build:
context: ./
dockerfile: reth.Dockerfile
args:
LLVM_IMAGE: ${LLVM_IMAGE:-snowstep/llvm}
LLVM_VERSION: ${LLVM_VERSION:-20250912105042}
RETH_VERSION: ${BSC_CHAPEL_RETH_VERSION:-v0.0.7-beta}
RETH_REPO: ${BSC_CHAPEL_RETH_REPO:-https://github.com/bnb-chain/reth-bsc.git}
ARCH_TARGET: ${ARCH_TARGET:-native}
PROFILE: ${RETH_BUILD_PROFILE:-maxperf}
BUILD_BSC_RETH: true
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
memlock: -1 # Disable memory locking limits (for in-memory DBs like MDBX)
user: root
ports:
- 10347:10347
- 10347:10347/udp
expose:
- 8545
- 9001
entrypoint: [reth, node]
command:
- --chain=bsc-testnet
- --datadir=/root/.local/share/reth
- --discovery.port=10347
- --engine.cross-block-cache-size=${BSC_CHAPEL_RETH_STATE_CACHE:-4096}
- --engine.memory-block-buffer-target=128
- --engine.parallel-sparse-trie
- --full
- --gpo.maxprice=500000000
- --http
- --http.addr=0.0.0.0
- --http.api=admin,debug,eth,net,trace,txpool,web3,rpc,reth,ots,flashbots,mev
- --http.corsdomain=*
- --http.port=8545
- --max-inbound-peers=50
- --max-outbound-peers=50
- --metrics=0.0.0.0:9001
- --nat=extip:${IP}
- --pooled-tx-response-soft-limit=20971520
- --port=10347
- --rpc-cache.max-blocks=10000
- --rpc-cache.max-concurrent-db-requests=2048
- --rpc.gascap=600000000
- --rpc.max-blocks-per-filter=0
- --rpc.max-connections=50000
- --rpc.max-logs-per-response=0
- --statedb.triedb
- --ws
- --ws.addr=0.0.0.0
- --ws.api=admin,debug,eth,net,trace,txpool,web3,rpc,reth,ots,flashbots,mev
- --ws.origins=*
- --ws.port=8545
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${BSC_CHAPEL_RETH_PRUNED_TRACE_TRIEDB_DATA:-bsc-chapel-reth-pruned-trace-triedb}:/root/.local/share/reth
- ./bsc/chapel:/config
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=true
- prometheus-scrape.port=9001
- prometheus-scrape.path=/metrics
- traefik.enable=true
- traefik.http.middlewares.bsc-chapel-reth-pruned-trace-triedb-stripprefix.stripprefix.prefixes=/bsc-chapel-reth-pruned
- traefik.http.services.bsc-chapel-reth-pruned-trace-triedb.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.bsc-chapel-reth-pruned-trace-triedb.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.bsc-chapel-reth-pruned-trace-triedb.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.bsc-chapel-reth-pruned-trace-triedb.rule=Host(`$DOMAIN`) && (Path(`/bsc-chapel-reth-pruned`) || Path(`/bsc-chapel-reth-pruned/`))}
- ${NO_SSL:+traefik.http.routers.bsc-chapel-reth-pruned-trace-triedb.rule=Path(`/bsc-chapel-reth-pruned`) || Path(`/bsc-chapel-reth-pruned/`)}
- traefik.http.routers.bsc-chapel-reth-pruned-trace-triedb.middlewares=bsc-chapel-reth-pruned-trace-triedb-stripprefix, ipallowlist
shm_size: 2gb
volumes:
bsc-chapel-reth-pruned-trace-triedb:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: bsc-testnet
method-groups:
enabled:
- debug
- filter
- trace
methods:
disabled:
- name: eth_getProof
- name: eth_getAccount
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
...
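The `.jwtsecret` one-liner in the usage comments shells out to an alpine container for `xxd`. A docker-free equivalent that produces the same format (`0x` followed by 64 lowercase hex characters, i.e. 32 random bytes), assuming only POSIX `od` is available:

```shell
# Docker-free equivalent of the alpine one-liner: 32 random bytes as 0x-prefixed hex
secret="0x$(head -c32 /dev/urandom | od -An -tx1 | tr -d ' \n')"
printf '%s' "$secret" > .jwtsecret
echo "${#secret}"  # -> 66: '0x' plus 64 hex characters
```

Either variant works; the clients only require that the file contain a 32-byte hex-encoded secret.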


@@ -49,7 +49,7 @@ services:
args:
LLVM_IMAGE: ${LLVM_IMAGE:-snowstep/llvm}
LLVM_VERSION: ${LLVM_VERSION:-20250912105042}
-RETH_VERSION: ${BSC_CHAPEL_RETH_VERSION:-v0.0.4-archivenode-alpha}
+RETH_VERSION: ${BSC_CHAPEL_RETH_VERSION:-v0.0.7-beta}
RETH_REPO: ${BSC_CHAPEL_RETH_REPO:-https://github.com/bnb-chain/reth-bsc.git}
ARCH_TARGET: ${ARCH_TARGET:-native}
PROFILE: ${RETH_BUILD_PROFILE:-maxperf}
@@ -84,7 +84,7 @@ services:
- --engine.memory-block-buffer-target=128
- --engine.parallel-sparse-trie
- --full
-- --gpo.maxprice=1000000
+- --gpo.maxprice=500000000
- --http
- --http.addr=0.0.0.0
- --http.api=admin,debug,eth,net,trace,txpool,web3,rpc,reth,ots,flashbots,mev


@@ -0,0 +1,161 @@
---
x-logging-defaults: &logging-defaults
driver: json-file
options:
max-size: "10m"
max-file: "3"
# This node is built from source with architecture-specific optimizations
# Build command: docker compose build --build-arg ARCH_TARGET=${ARCH_TARGET:-native} bsc-mainnet-reth
#
# IMPORTANT: Cache optimization considerations
# If running multiple nodes on the same machine, be aware that:
# - L3 cache is shared across all cores, causing cache contention
# - Multiple nodes compete for cache space, reducing optimization effectiveness
# - Consider CPU pinning to minimize cache conflicts:
# docker run --cpuset-cpus="0-7" bsc-mainnet-reth # Pin to specific cores
# - For AMD X3D CPUs, CCD0 (cores 0-7) has the 3D V-Cache
# - For multi-node setups, generic builds may perform better than cache-optimized ones
# Usage:
#
# mkdir rpc && cd rpc
#
# git init
# git remote add origin https://github.com/StakeSquid/ethereum-rpc-docker.git
# git fetch origin vibe
# git checkout origin/vibe
#
# docker run --rm alpine sh -c "printf '0x'; head -c32 /dev/urandom | xxd -p -c 64" > .jwtsecret
#
# env
# ...
# IP=$(curl ipinfo.io/ip)
# DOMAIN=${IP}.traefik.me
# COMPOSE_FILE=base.yml:rpc.yml:bsc/reth/bsc-mainnet-reth-archive-trace-triedb.yml
#
# docker compose up -d
#
# curl -X POST https://${IP}.traefik.me/bsc-mainnet-reth \
# -H "Content-Type: application/json" \
# --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
services:
bsc-mainnet-reth:
build:
context: ./
dockerfile: reth.Dockerfile
args:
LLVM_IMAGE: ${LLVM_IMAGE:-snowstep/llvm}
LLVM_VERSION: ${LLVM_VERSION:-20250912105042}
RETH_VERSION: ${BSC_MAINNET_RETH_VERSION:-v0.0.7-beta}
RETH_REPO: ${BSC_MAINNET_RETH_REPO:-https://github.com/bnb-chain/reth-bsc.git}
ARCH_TARGET: ${ARCH_TARGET:-native}
PROFILE: ${RETH_BUILD_PROFILE:-maxperf}
BUILD_BSC_RETH: true
sysctls:
# TCP Performance
net.ipv4.tcp_slow_start_after_idle: 0 # Disable slow start after idle
net.ipv4.tcp_no_metrics_save: 1 # Disable metrics cache
net.ipv4.tcp_rmem: 4096 87380 16777216 # Increase TCP read buffers
net.ipv4.tcp_wmem: 4096 87380 16777216 # Increase TCP write buffers
net.core.somaxconn: 32768 # Higher connection queue
# Memory/Connection Management
# net.core.netdev_max_backlog: 50000 # Increase network buffer
net.ipv4.tcp_max_syn_backlog: 30000 # More SYN requests
net.ipv4.tcp_max_tw_buckets: 2000000 # Allow more TIME_WAIT sockets
ulimits:
nofile: 1048576 # Max open files (for RPC/WS connections)
memlock: -1 # Disable memory locking limits (for in-memory DBs like MDBX)
user: root
ports:
- 12492:12492
- 12492:12492/udp
expose:
- 8545
- 9001
entrypoint: [reth, node]
command:
- --chain=bsc
- --datadir=/root/.local/share/reth
- --db.max-size=8TB
- --discovery.port=12492
- --engine.cross-block-cache-size=${BSC_MAINNET_RETH_STATE_CACHE:-4096}
- --engine.memory-block-buffer-target=128
- --engine.parallel-sparse-trie
- --gpo.maxprice=500000000
- --http
- --http.addr=0.0.0.0
- --http.api=admin,debug,eth,net,trace,txpool,web3,rpc,reth,ots,flashbots,mev
- --http.corsdomain=*
- --http.port=8545
- --max-inbound-peers=50
- --max-outbound-peers=50
- --metrics=0.0.0.0:9001
- --nat=extip:${IP}
- --pooled-tx-response-soft-limit=20971520
- --port=12492
- --rpc-cache.max-blocks=10000
- --rpc-cache.max-concurrent-db-requests=2048
- --rpc.gascap=600000000
- --rpc.max-blocks-per-filter=0
- --rpc.max-connections=50000
- --rpc.max-logs-per-response=0
- --statedb.triedb
- --ws
- --ws.addr=0.0.0.0
- --ws.api=admin,debug,eth,net,trace,txpool,web3,rpc,reth,ots,flashbots,mev
- --ws.origins=*
- --ws.port=8545
restart: unless-stopped
stop_grace_period: 5m
networks:
- chains
volumes:
- ${BSC_MAINNET_RETH_ARCHIVE_TRACE_TRIEDB_DATA:-bsc-mainnet-reth-archive-trace-triedb}:/root/.local/share/reth
- ./bsc/mainnet:/config
- /slowdisk:/slowdisk
logging: *logging-defaults
labels:
- prometheus-scrape.enabled=true
- prometheus-scrape.port=9001
- prometheus-scrape.path=/metrics
- traefik.enable=true
- traefik.http.middlewares.bsc-mainnet-reth-archive-trace-triedb-stripprefix.stripprefix.prefixes=/bsc-mainnet-reth
- traefik.http.services.bsc-mainnet-reth-archive-trace-triedb.loadbalancer.server.port=8545
- ${NO_SSL:-traefik.http.routers.bsc-mainnet-reth-archive-trace-triedb.entrypoints=websecure}
- ${NO_SSL:-traefik.http.routers.bsc-mainnet-reth-archive-trace-triedb.tls.certresolver=myresolver}
- ${NO_SSL:-traefik.http.routers.bsc-mainnet-reth-archive-trace-triedb.rule=Host(`$DOMAIN`) && (Path(`/bsc-mainnet-reth`) || Path(`/bsc-mainnet-reth/`))}
- ${NO_SSL:+traefik.http.routers.bsc-mainnet-reth-archive-trace-triedb.rule=Path(`/bsc-mainnet-reth`) || Path(`/bsc-mainnet-reth/`)}
- traefik.http.routers.bsc-mainnet-reth-archive-trace-triedb.middlewares=bsc-mainnet-reth-archive-trace-triedb-stripprefix, ipallowlist
shm_size: 2gb
volumes:
bsc-mainnet-reth-archive-trace-triedb:
x-upstreams:
- id: $${ID}
labels:
provider: $${PROVIDER}
connection:
generic:
rpc:
url: $${RPC_URL}
ws:
frameSize: 20Mb
msgSize: 50Mb
url: $${WS_URL}
chain: bsc
method-groups:
enabled:
- debug
- filter
- trace
methods:
disabled:
- name: eth_getProof
- name: eth_getAccount
enabled:
- name: txpool_content # TODO: should be disabled for rollup nodes
...
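The `eth_blockNumber` curl check in the usage comments returns the head block as a 0x-prefixed hex quantity in the `result` field. A small sketch of decoding it to a decimal height without `jq`, using a hypothetical response value:

```shell
# Hypothetical response body from the eth_blockNumber check above
resp='{"jsonrpc":"2.0","id":1,"result":"0x4b7"}'
# Extract the hex quantity via parameter expansion, then let shell
# arithmetic (which understands 0x prefixes) convert it to decimal
hex=${resp#*\"result\":\"}
hex=${hex%\"*}
echo $((hex))  # -> 1207
```

A steadily increasing value across repeated calls is a quick sanity check that the node is syncing.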

Some files were not shown because too many files have changed in this diff.