fantom-foundation / sonic
go-opera fork for Carmen and Tosca integration
License: GNU Lesser General Public License v3.0
Describe the bug
Version
$ sonicd version
Sonic
Version: 1.2.1-d
Git Commit: 350f8b77bfa3060340ea650d25a16a8517f21007
Git Commit Date: 1718888037
Architecture: amd64
Protocol Versions: [63]
Go Version: go1.21.6
Operating System: linux
GOPATH=
GOROOT=/home/pchung/.gvm/gos/go1.21.6
Request Payload
{
"method":"debug_traceBlockByNumber",
"params":[
"0x511b860",
{
"tracer":"callTracer"
}
],
"id":1,
"jsonrpc":"2.0"
}
Stacktrace
ERROR[07-10|03:45:37.458] RPC method debug_traceBlockByNumber crashed: runtime error: invalid memory address or nil pointer dereference
goroutine 31261877 [running]:
github.com/ethereum/go-ethereum/rpc.(*callback).call.func1()
/home/pchung/.gvm/pkgsets/go1.21.6/global/pkg/mod/github.com/!fantom-foundation/[email protected]/rpc/service.go:201 +0x85
panic({0x14b3660?, 0x23a5400?})
/home/pchung/.gvm/gos/go1.21.6/src/runtime/panic.go:914 +0x21f
github.com/Fantom-foundation/go-opera/ethapi.(*PublicDebugAPI).traceTx(0xc00095ace0, {0x193f138, 0xc394654380}, {0x1948ff0?, 0xc416fecc60?}, 0xc394679268, 0xc53d483400?, {0x195a2f0, 0xc6266334c0}, 0xc60150cbe0)
/home/pchung/source/Sonic/ethapi/api.go:2200 +0xdf1
github.com/Fantom-foundation/go-opera/ethapi.(*PublicDebugAPI).traceBlock(0xc00095ace0, {0x193f138, 0xc394654380}, 0xc2723ef340, 0x1520700?)
/home/pchung/source/Sonic/ethapi/api.go:2272 +0x5a5
github.com/Fantom-foundation/go-opera/ethapi.(*PublicDebugAPI).TraceBlockByNumber(0xc00095ace0, {0x193f138, 0xc394654380}, 0x5?, 0x8?)
/home/pchung/source/Sonic/ethapi/api.go:2230 +0x66
reflect.Value.call({0xc00046a540?, 0xc07968f3b0?, 0x7fa24aa15948?}, {0x16d6e83, 0x4}, {0xc555363aa0, 0x4, 0x428fb2?})
/home/pchung/.gvm/gos/go1.21.6/src/reflect/value.go:596 +0xce7
reflect.Value.Call({0xc00046a540?, 0xc07968f3b0?, 0x7fa243160aa8?}, {0xc555363aa0?, 0x10?, 0xc29d154400?})
/home/pchung/.gvm/gos/go1.21.6/src/reflect/value.go:380 +0xb9
github.com/ethereum/go-ethereum/rpc.(*callback).call(0xc23cf1ca20, {0x193f138?, 0xc394654380}, {0xc646a46a80, 0x18}, {0xc30184f290, 0x2, 0x45d964b800?})
/home/pchung/.gvm/pkgsets/go1.21.6/global/pkg/mod/github.com/!fantom-foundation/[email protected]/rpc/service.go:207 +0x379
github.com/ethereum/go-ethereum/rpc.(*handler).runMethod(0x193f0c8?, {0x193f138?, 0xc394654380?}, 0xc394654310, 0x2?, {0xc30184f290?, 0xc3df0d8120?, 0x90?})
/home/pchung/.gvm/pkgsets/go1.21.6/global/pkg/mod/github.com/!fantom-foundation/[email protected]/rpc/handler.go:405 +0x3c
github.com/ethereum/go-ethereum/rpc.(*handler).handleCall(0xc3df0d8000, 0xc30184f230, 0xc394654310)
/home/pchung/.gvm/pkgsets/go1.21.6/global/pkg/mod/github.com/!fantom-foundation/[email protected]/rpc/handler.go:353 +0x270
github.com/ethereum/go-ethereum/rpc.(*handler).handleCallMsg(0xc3df0d8000, 0x30?, 0xc394654310)
/home/pchung/.gvm/pkgsets/go1.21.6/global/pkg/mod/github.com/!fantom-foundation/[email protected]/rpc/handler.go:310 +0xbd
github.com/ethereum/go-ethereum/rpc.(*handler).handleMsg.func1(0xc30184f230)
/home/pchung/.gvm/pkgsets/go1.21.6/global/pkg/mod/github.com/!fantom-foundation/[email protected]/rpc/handler.go:148 +0x2f
github.com/ethereum/go-ethereum/rpc.(*handler).startCallProc.func1()
/home/pchung/.gvm/pkgsets/go1.21.6/global/pkg/mod/github.com/!fantom-foundation/[email protected]/rpc/handler.go:238 +0xeb
created by github.com/ethereum/go-ethereum/rpc.(*handler).startCallProc in goroutine 31135388
/home/pchung/.gvm/pkgsets/go1.21.6/global/pkg/mod/github.com/!fantom-foundation/[email protected]/rpc/handler.go:232 +0x95
vecmt.Index wraps the epoch store database into a VecFlushable wrapper, which postpones all writes until the Flush() call. This behavior is correct: it allows reverting invalid event insertions, because only the correct ones are flushed in the saveAndProcessEvent method.
However, VecFlushable in lachesis-base does not flush directly into the backing database; it flushes into its in-memory backedMap, which is flushed to the backing on-disk database only when the map size exceeds IndexConfig.Caches.DBCache (around 10 MiB). As a result, the "HighestBefore" data can live only in memory even when Opera shuts down, which sometimes leads to the following error on the next Opera start:
Vector clock error err: Event A=2:2:875e5a not found
It seems this occurs mostly (or maybe only) when "pebble-flg" is used instead of "pebble-fsh"; presumably the fsh transaction also avoids writing the event's other db records, so this error does not occur.
In go-opera-norma I have fixed this by creating a new VecFlushable, which flushes directly to the backing database.
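The fix described above can be sketched as follows. This is a hypothetical illustration (the interface and type names here are not the actual lachesis-base API): writes are still buffered so invalid event insertions can be dropped, but Flush() writes straight into the backing database instead of an intermediate in-memory backedMap that only spills to disk past the DBCache limit.

```go
package main

import (
	"fmt"
	"sync"
)

// store is a minimal stand-in for the kvdb backing-database interface.
type store interface {
	Put(key, value []byte) error
}

// mapStore is an in-memory store used only for the demonstration below.
type mapStore struct{ m map[string][]byte }

func (s *mapStore) Put(k, v []byte) error { s.m[string(k)] = v; return nil }

// directFlushable is an illustrative sketch of the go-opera-norma fix.
type directFlushable struct {
	mu      sync.Mutex
	pending map[string][]byte
	backing store
}

func newDirectFlushable(b store) *directFlushable {
	return &directFlushable{pending: map[string][]byte{}, backing: b}
}

// Put buffers the write; it becomes durable only after Flush.
func (f *directFlushable) Put(key, value []byte) error {
	f.mu.Lock()
	defer f.mu.Unlock()
	f.pending[string(key)] = append([]byte(nil), value...)
	return nil
}

// Drop reverts buffered writes, e.g. after an invalid event insertion.
func (f *directFlushable) Drop() {
	f.mu.Lock()
	defer f.mu.Unlock()
	f.pending = map[string][]byte{}
}

// Flush writes directly to the backing database, so vector-clock data
// such as "HighestBefore" survives a shutdown.
func (f *directFlushable) Flush() error {
	f.mu.Lock()
	defer f.mu.Unlock()
	for k, v := range f.pending {
		if err := f.backing.Put([]byte(k), v); err != nil {
			return err
		}
	}
	f.pending = map[string][]byte{}
	return nil
}

func main() {
	db := &mapStore{m: map[string][]byte{}}
	fl := newDirectFlushable(db)
	fl.Put([]byte("HighestBefore:875e5a"), []byte{1})
	fl.Flush()
	fmt.Println(len(db.m)) // the entry is on "disk" right after Flush
}
```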
Describe the bug
Hello, I am trying to sync a node from a snapshot, but I get this error:
Fantom World State Live data not available in the genesis module=gossip-store err="hashes root not found"
To Reproduce
Steps to reproduce the behavior:
Some time after synchronization starts, the node crashes without reporting any error.
Last logs:
INFO [06-25|14:51:56.145] New LLR summary last_epoch=280028 last_block=80365033 new_evs=0 new_ers=0 new_bvs=320 new_brs=0 age=none
INFO [06-25|14:51:56.163] New DAG summary new=5211 last_id=280028:4304:56f251 age=1mo22d21h t=8.031s
INFO [06-25|14:52:04.241] New DAG summary new=20 last_id=280028:4314:592af7 age=1mo22d21h t=7.660s
INFO [06-25|14:52:04.722] New block index=80365093 id=280028:4267:f5c1d6 gas_used=729,853 txs=1/0 age=1mo22d21h t=9.500s
System requirements:
32 GB / 16 CPU
SSD 2 TB
Launch commands:
GOMEMLIMIT=28GiB
sonicd --datadir="/home/fantom/data" --http --http.vhosts="*" --cache 12000
After the shutdown I get this error:
failed to initialize the node: failed to make consensus engine: failed to open existing databases: dirty state: gossip: DE
Opera correctly joins the fakenet if it does not contain any txs yet, but after the first tx is sent, additional Opera nodes are unable to join and stay stuck at block 1.
When connected using opera attach, admin.peers confirms that the node sees the other nodes.
The following Norma scenario reproduces the bug:
# This scenario reproduces a bug where go-opera-norma does not sync
# when joining a network that already contains some txs.
name: Reproduce go-opera-norma bug
duration: 60
num_validators: 3
nodes:
- name: B # node starting after the app start - never sync
instances: 1
start: 1
applications:
- name: counter
start: 0
end: 60
accounts: 2
rate:
constant: 2 # txs/s
2023/06/08 08:02:54 Nodes: 4, block heights: [89 *1* 89 89], tx/s: [1.8315018 1.8450185 1.8181819], txs: 1, gas: 28693, block processing: [2.599ms 2.724ms 2.432ms]
To Reproduce
Request:
{
"method": "debug_traceBlockByNumber",
"params": [
"0x1",
{
"tracer": "callTracer"
}
],
"id": 1,
"jsonrpc": "2.0"
}
Sonic returns:
{"jsonrpc":"2.0","id":1,"error":{"code":-32000,"message":"unable to get Carmen archive StateDB - unexpected state root (4207b216e67033ef3f53427bce268a7bea009b9268ccd000c682c7be701b4de1 != 0000000000000000000000000000000000000000000000000000000000000000)"}}
Expected behavior
Opera returns
{"jsonrpc":"2.0","id":1,"result":{"difficulty":"0x0","epoch":"0x1","extraData":"0x","gasLimit":"0xffffffffffff","gasUsed":"0x0","hash":"0x00000001000000027bcad26d4ef227709ac6ad024fb2505d78da77b1d7c13d37","logsBloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","miner":"0x0000000000000000000000000000000000000000","mixHash":"0x0000000000000000000000000000000000000000000000000000000000000000","nonce":"0x0000000000000000","number":"0x1","parentHash":"0x0000000000000000c20dbfb2ec18ae20037c716f3ba2d9e1da768a9deca17cb4","receiptsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","sha3Uncles":"0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347","size":"0x21f","stateRoot":"0x4207b216e67033ef3f53427bce268a7bea009b9268ccd000c682c7be701b4de1","timestamp":"0x5e0580f8","timestampNano":"0x15e41e3914f3b002","totalDifficulty":"0x0","transactions":[],"transactionsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","uncles":[]}}
Additional context
$ sonicd version
Sonic
Version: 1.2.1-e
Git Commit: a6f79ac99065b347b678f66446697737d2fe87f6
Git Commit Date: 1721218173
Architecture: amd64
Protocol Versions: [63]
Go Version: go1.21.6
Operating System: linux
Genesis file: mainnet-288000-archive.g
I used the full MPT genesis file https://download.fantom.network/mainnet-109331-full-mpt.g; it took several days to read and import into Carmen, and now the node has to sync a two-year period. Is this how it is supposed to work? Could you please confirm that, or update the documentation on how to set up a Sonic node correctly? An updated snapshot would also be nice to have.
When trying to attach the opera console, it always fails on revision 60cb9f7:
$ build/opera attach http://localhost:18545/
Fatal: Failed to start the JavaScript console: bignumber.js: SyntaxError: bignumber.js: Line 5:1 Could not load source map: open doc/bignumber.js.map: no such file or directory
From my investigation, this is caused by the upgrade of
github.com/dop251/goja v0.0.0-20200721192441-a695b0cdd498 // indirect
to
github.com/dop251/goja v0.0.0-20220405120441-9037c2b61cbf // indirect
which was done in go-ethereum-substate. (Applying the same upgrade to a clean go-opera reproduces the issue.)
However, attempts to downgrade it are immediately reverted by go mod tidy.
Describe the bug
Fantom Sonic Mainnet Archive node gets corrupted DB.
To Reproduce
Steps to reproduce the behavior:
sonicd[288420]: failed to initialize the node: failed to make consensus engine: failed to open existing databases: dirty state: gossip: DE
Expected behavior
Node is able to sync properly, without getting its DB corrupted.
Additional context
Not quite sure how to mitigate this. It's the second time we're running into such issues, on two different nodes; we changed the machines as well, thinking it was a local storage problem.
We are using systemd, here is the service file:
Any feedback is highly appreciated!
When tracing the following block/transaction, the output field of the block trace is incorrect.
{"jsonrpc":"2.0","id":1,"method":"debug_traceBlockByNumber","params":["0x52abce3",{"tracer":"callTracer"}]}
This call has the following output:
jq '.result[0].txHash'
"0xb06bd115501cfe1bb2d8a472132005489d93bf21a25000a2e11f3e2d28ea67b7"
jq '.result[0].result.calls[0].calls[0].calls[0].output'
"0x00000000000000000000000000000000000000000a236335b2743d0560119632"
However, if you make the same call against an Opera node, the output is:
"0x00000000000000000000000000000000000000000a2363398e64287530582c29"
{"jsonrpc":"2.0","id":1,"method":"debug_traceTransaction","params":["0xb06bd115501cfe1bb2d8a472132005489d93bf21a25000a2e11f3e2d28ea67b7",{"tracer":"callTracer"}]}
The output from both Sonic and Opera is:
jq '.result.calls[0].calls[0].calls[0].output'
"0x00000000000000000000000000000000000000000a2363398e64287530582c29"
As you can see, when Sonic traces the transaction by hash it has the same output as Opera nodes, but when Sonic traces the entire block the output for this transaction is different.
$ sonicd version
Sonic
Version: 1.2.1-e
Git Commit: a6f79ac99065b347b678f66446697737d2fe87f6
Git Commit Date: 1721218173
Architecture: amd64
Protocol Versions: [63]
Go Version: go1.21.6
Operating System: linux
Genesis file: mainnet-288000-archive.g
Is your feature request related to a problem? Please describe.
"github.com/Fantom-foundation/Carmen/go/state" is deprecated: "external users should switch to the carmen package as the new top-level API"
Describe the solution you'd like
Please add a guide on how to integrate Carmen into the Sonic package correctly.
When running under high load (1200-1700 txs/sec) for about 24 hours, the memory usage of go-opera-norma exceeds 123 GB and the process is OOM-killed.
From the memory profiles, the allocations seem to come from P2P message decoding, and this happens on both validator and non-validator nodes:
Full memory profiles attached:
memprofiles.zip
Version:
Go-Opera-norma
Version: 1.1.2-rc.6
Git Commit: 0e0f61816f87102ec4d06baeba2bfa4a64244fcc
Git Commit Date: 1690548434
Architecture: amd64
Protocol Versions: [63]
Go Version: go1.19.11
Operating System: linux
Hi Team,
The initialization of the Sonic database is not working. We followed this guide https://github.com/Fantom-foundation/Sonic?tab=readme-ov-file#initialization-of-the-sonic-database and used the latest genesis file https://files.fantom.network/mainnet-171200-pruned-mpt.g
We got the following issue:
INFO [05-02|09:09:52.640] - Reading EVM unit 0 progress=98.77% elapsed=14h12m10.451s eta=10m42.998s
INFO [05-02|09:11:23.280] - Reading EVM unit 0 progress=98.94% elapsed=14h13m41.091s eta=9m10.972s
INFO [05-02|09:12:54.851] - Reading EVM unit 0 progress=99.12% elapsed=14h15m12.663s eta=7m39.115s
INFO [05-02|09:14:25.842] - Reading EVM unit 0 progress=99.29% elapsed=14h16m43.654s eta=6m7.412s
INFO [05-02|09:15:56.763] - Reading EVM unit 0 progress=99.47% elapsed=14h18m14.575s eta=4m35.869s
INFO [05-02|09:17:27.973] - Reading EVM unit 0 progress=99.64% elapsed=14h19m45.785s eta=3m4.486s
INFO [05-02|09:18:59.676] - Reading EVM unit 0 progress=99.82% elapsed=14h21m17.488s eta=1m33.264s
INFO [05-02|09:20:31.604] - Reading EVM unit 0 progress=100.00% elapsed=14h22m49.415s eta=2.198s
INFO [05-02|09:20:33.825] Importing legacy EVM data into Carmen module=evm-store index=50,870,730 root=0f968a..dbb6d5
failed to write Gossip genesis state: import of legacy genesis data into StateDB failed; missing preimage for account address hash [0 0 2 106 173 44 72 182 210 169 85 38 106 60 227 227 178 230 218 1 1 31 22 196 101 52 14 114 219 251 253 49]; <nil>
This issue is to be referenced by TODOs related to Blobs / BlobTx / BlobGasPrice.
To be done as part of preparing for the future Cancun network upgrade.
flushable.SyncedPool wraps the db connection to postpone all db writes (including db dropping) until the Flush() method is called, similar to db transactions.
asyncflushproducer wraps the db producer to postpone any all-producer-dbs Flush until the not-flushed-size estimate exceeds some threshold. It also runs the Flush in a new goroutine/thread.
The issue occurs when Opera uses the "lachesis-%d" database in the following way:
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x138 pc=0x119a971]
goroutine 241 [running]:
github.com/cockroachdb/pebble.(*DB).AsyncFlush(0xc0059ea000?)
/home/jkalina/go/pkg/mod/github.com/cockroachdb/[email protected]/db.go:1521 +0x31
github.com/Fantom-foundation/lachesis-base/kvdb/pebble.(*Database).AsyncFlush(...)
/home/jkalina/go/pkg/mod/github.com/!fantom-foundation/[email protected]/kvdb/pebble/pebble.go:201
github.com/Fantom-foundation/lachesis-base/kvdb/pebble.(*Database).Stat(0x1?, {0x19d62e0?, 0x2a5d5e0?})
/home/jkalina/go/pkg/mod/github.com/!fantom-foundation/[email protected]/kvdb/pebble/pebble.go:326 +0x51
github.com/Fantom-foundation/lachesis-base/kvdb/flushable.(*Flushable).Stat(0xc0000cfbd8?, {0x19d62e0, 0xb})
/home/jkalina/go/pkg/mod/github.com/!fantom-foundation/[email protected]/kvdb/flushable/flushable.go:241 +0xa2
github.com/Fantom-foundation/go-opera/utils/dbutil/asyncflushproducer.(*Producer).Flush.func1()
/home/jkalina/Fantom/go-opera-norma/utils/dbutil/asyncflushproducer/producer.go:72 +0xe8
created by github.com/Fantom-foundation/go-opera/utils/dbutil/asyncflushproducer.(*Producer).Flush in goroutine 712
/home/jkalina/Fantom/go-opera-norma/utils/dbutil/asyncflushproducer/producer.go:68 +0xf9
Solution in the official Opera: "leveldb-fsh" is used for epoch databases, which does not use asyncflushproducer.
Solution used in go-opera-norma: use "pebble-flg", which writes directly into the database without waiting for a Flush; the database is still detected as dirty if terminated with unflushed changes (Put called without a following Flush call). This is equivalent to the current Carmen behavior.
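The crash itself happens because the async Flush goroutine calls Stat (and through it AsyncFlush) on an epoch database that has already been dropped. One way to make that race fail gracefully is a guard on the wrapper; this is an illustrative sketch, not the actual lachesis-base code:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

var errDropped = errors.New("database already dropped")

// guardedDB sketches how an epoch-database wrapper could let a
// Stat/AsyncFlush that races with dropping the "lachesis-%d" database
// return an error instead of dereferencing a nil inner handle.
type guardedDB struct {
	mu         sync.RWMutex
	asyncFlush func() error // stands in for the wrapped pebble AsyncFlush
}

func (g *guardedDB) AsyncFlush() error {
	g.mu.RLock()
	defer g.mu.RUnlock()
	if g.asyncFlush == nil {
		return errDropped // previously: nil pointer dereference
	}
	return g.asyncFlush()
}

// Drop releases the underlying database.
func (g *guardedDB) Drop() {
	g.mu.Lock()
	g.asyncFlush = nil
	g.mu.Unlock()
}

func main() {
	db := &guardedDB{asyncFlush: func() error { return nil }}
	db.Drop()
	fmt.Println(db.AsyncFlush()) // an error instead of a panic
}
```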
Info
OS: Linux Docker
Sonic Version: 1.2.1-e
Describe the bug
Loading the snapshot (https://download.fantom.network/mainnet-294520-validator.g), the service starts normally; but after shutting it down normally and starting it again, it reports dirty data and the service cannot be started.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
The service can be started and stopped, and data is synchronised properly after startup.
Screenshots
INFO [07-25|19:30:59.654] New block index=86112128 id=294520:232:62af99 gas_used=2,459,933 txs=5/0 age=5d23h5m t=382.612ms
INFO [07-25|19:30:59.661] New block index=86112129 id=294520:247:940308 gas_used=984,975 txs=1/0 age=5d23h5m t=6.019ms
INFO [07-25|19:30:59.754] New block index=86112130 id=294520:262:ed6ff8 gas_used=737,979 txs=5/0 age=5d23h5m t=78.691ms
INFO [07-25|19:30:59.755] New block index=86112131 id=294520:271:2a8f5a gas_used=21000 txs=1/0 age=5d23h5m t="962.896µs"
INFO [07-25|19:30:59.964] New block index=86112132 id=294520:311:7bf01d gas_used=1,968,667 txs=7/0 age=5d23h5m t=193.471ms
INFO [07-25|19:31:00.245] New block index=86112133 id=294520:340:7a92f5 gas_used=3,408,249 txs=5/0 age=5d23h5m t=271.863ms
INFO [07-25|19:31:00.248] New block index=86112134 id=294520:349:0eabc1 gas_used=716,249 txs=1/0 age=5d23h5m t=3.528ms
INFO [07-25|19:31:00.672] Got interrupt, shutting down...
INFO [07-25|19:31:00.673] HTTP server stopped endpoint=[::]:8545
INFO [07-25|19:31:00.673] HTTP server stopped endpoint=[::]:8546
INFO [07-25|19:31:00.673] IPC endpoint closed url=/mnt/ftmmain/node/opera.ipc
INFO [07-25|19:31:00.673] Stopping Fantom protocol
[2024-07-25 19:31:00] [node_command.sh] receive the service exit signal
[2024-07-25 19:31:00] [node_command.sh] get service pid [52958] by command [/opt/ftmmain/core/sonicd]
[2024-07-25 19:31:00] [node_command.sh] exec command [kill -15 52958], try count [1]
WARN [07-25|19:31:00.689] Already shutting down, interrupt more to panic. times=9
INFO [07-25|19:31:01.878] New block index=86112135 id=294520:366:a6c24d gas_used=4,049,643 txs=4/0 age=5d23h5m t=1.616s
INFO [07-25|19:31:01.879] Fantom protocol stopped
INFO [07-25|19:31:01.970] New block index=86112136 id=294520:384:e278cd gas_used=1,613,396 txs=2/0 age=5d23h5m t=92.498ms
INFO [07-25|19:31:01.990] Fantom service stopped
INFO [07-25|19:31:01.991] Closing State DB... module=evm-store
[2024-07-25 19:31:10] [node_command.sh] get service pid [52958] by command [/opt/ftmmain/core/sonicd]
[2024-07-25 19:31:10] [node_command.sh] exec command [kill -15 52958], try count [2]
WARN [07-25|19:31:10.705] Already shutting down, interrupt more to panic. times=8
[2024-07-25 19:31:20] [node_command.sh] get service pid [52958] by command [/opt/ftmmain/core/sonicd]
[2024-07-25 19:31:20] [node_command.sh] exec command [kill -15 52958], try count [3]
WARN [07-25|19:31:20.718] Already shutting down, interrupt more to panic. times=7
[2024-07-25 19:31:30] [node_command.sh] get service pid [52958] by command [/opt/ftmmain/core/sonicd]
[2024-07-25 19:31:30] [node_command.sh] exec command [kill -15 52958], try count [4]
WARN [07-25|19:31:30.736] Already shutting down, interrupt more to panic. times=6
[2024-07-25 19:31:40] [node_command.sh] get service pid [52958] by command [/opt/ftmmain/core/sonicd]
[2024-07-25 19:31:40] [node_command.sh] exec command [kill -15 52958], try count [5]
WARN [07-25|19:31:40.750] Already shutting down, interrupt more to panic. times=5
[2024-07-25 19:31:50] [node_command.sh] get service pid [52958] by command [/opt/ftmmain/core/sonicd]
[2024-07-25 19:31:50] [node_command.sh] exec command [kill -15 52958], try count [6]
WARN [07-25|19:31:50.762] Already shutting down, interrupt more to panic. times=4
[2024-07-25 19:35:37] [node_command.sh] exec command [/opt/ftmmain/core/sonicd --datadir=/mnt/ftmmain/node --cache=8192 --mode=rpc --http --http.addr=0.0.0.0 --http.port=8545 --http.vhosts=* --http.corsdomain=* --http.api=admin,eth,web3,net,ftm,txpool,abft,dag --ws --ws.addr=0.0.0.0 --ws.port=8546 --ws.origins=* --ws.api=admin,eth,web3,net,ftm,txpool,abft,dag --rpc.gascap=0 --rpc.txfeecap=0 --port=30303]
INFO [07-25|19:35:37.304] Maximum peer count total=50
INFO [07-25|19:35:37.304] Smartcard socket not found, disabling err="stat /run/pcscd/pcscd.comm: no such file or directory"
failed to initialize the node: failed to make consensus engine: failed to open existing databases: dirty state: gossip-294520: DE
Current Behaviour
The EVM data from a legacy genesis is unpacked into a temporary directory before the Carmen state is reconstructed. This temporary database is hardcoded to be created in the system temp path, which may fail due to insufficient storage space.
Expected Behaviour
The temporary storage should be created under the target datadir by default. This storage path is supposed to be used for the created state DB, so it should be used for all the data handling.
INFO Unpacking legacy EVM data into a temporary directory module=evm-store dir=/tmp/opera-tmp-import-legacy-genesis1496505001
Interrupting the "opera check evm" tool in the middle of the verifyLastState function (where the state db is opened) breaks the archive state, as described in Fantom-foundation/Carmen#679:
$ opera check evm --datadir /var/opera/mainnet-carmen-archive-37M-experiment/ --statedb.impl carmen-s5 --archive.impl s5
WARN [12-10|21:19:40.690] Please add '--cache 64358' flag to allocate more cache for Opera. Total memory is 128717 MB.
INFO [12-10|21:19:40.690] Maximum peer count total=50
INFO [12-10|21:19:40.690] Smartcard socket not found, disabling err="stat /run/pcscd/pcscd.comm: no such file or directory"
archive writing thread started
archive flush done
^C
$ opera check evm --datadir /var/opera/mainnet-carmen-archive-37M-experiment/ --statedb.impl carmen-s5 --archive.impl s5
WARN [12-10|21:19:55.569] Please add '--cache 64358' flag to allocate more cache for Opera. Total memory is 128717 MB.
INFO [12-10|21:19:55.569] Maximum peer count total=50
INFO [12-10|21:19:55.569] Smartcard socket not found, disabling err="stat /run/pcscd/pcscd.comm: no such file or directory"
CRIT [12-10|21:19:57.600] Verification of the Fantom World State failed err="verification of the last block failed: failed to open carmen live state in /var/opera/mainnet-carmen-archive-37M-experiment/carmen: unexpected EOF"
We should make sure the Carmen database is closed correctly even on Ctrl+C during any phase of the verification.