Comments (51)
Looking into this today @mvanhalen !
Just to confirm: Are you running with --hypersync=false and --sync-type=blocksync? These two flags are needed for Postgres.
from core.
Hey guys, wanted to give an update here. It took us a while, but we were able to find and patch the bug that was causing the sync to fail on block 46631. It was due to a slight difference in how Postgres was handling block rewards vs Badger.
My node, with the patch included, is currently at block 82732. We should wait until we hit a full sync before we declare victory, but at this point I would recommend you guys try and continue your sync with version 2.2.3. It should pick up right where it left off at block 46631 (but make it past this block this time).
I'll report back once I get a full sync. Again sorry this took so long guys, but Hypersync was a big change, with a lot of moving parts, and the bright side is that it paves the way for a much faster Postgres sync that doesn't take many hours to complete.
from core.
@tijno - we are still investigating how the UTXO in that transaction is not being found.
from core.
Hi @mvanhalen - the issue isn't actually in block 130972, but rather 130970. I was syncing a postgres node again yesterday and managed to find this in the logs:
I0604 05:27:02.540805 649742 server.go:1731] Server._handleBlock: Received block ( 130970 / 135333 ) from Peer [ Remote Address: 34.123.41.111:17000 PeerID=4 ]
I0604 05:27:02.553050 649742 base.go:292] Refreshing USD to ETH exchange rate
E0604 05:27:02.670763 649742 server.go:1725] Server._handleBlock: Encountered an error processing block <Header: < 130970, 00000000000016027d8104a31980eff4e973c61d82ca8e7f13c843cac5603eca, 1 >, Signer Key: BC1YLh768bVj2R3QpSiduxcvn7ipxF3L3XHsabZYtCGtsinUnNrZvNN>. Disconnecting from peer [ Remote Address: 34.123.41.111:17000 PeerID=4 ]: Error while processing block: : ProcessBlock: Problem flushing view to db: ERROR #42601 syntax error at or near ":"
E0604 05:27:02.670897 649742 peer.go:1229] ReadDeSoMessage: : ReadMessage: Problem decoding NetworkType: read tcp 192.168.86.63:55678->34.123.41.111:17000: use of closed network connection
E0604 05:27:02.670917 649742 peer.go:1057] Peer.inHandler: Can't read message from peer [ Remote Address: 34.123.41.111:17000, DISCONNECTED PeerID=4 ]: ReadDeSoMessage: : ReadMessage: Problem decoding NetworkType: read tcp 192.168.86.63:55678->34.123.41.111:17000: use of closed network connection
It looks like it's hitting some sort of syntax error when trying to dump the view into postgres. Unfortunately, it's unclear which insert/delete caused this issue as the logs aren't that helpful here. So I'll be adding some better error messaging here and syncing again.
from core.
@tijno @mvanhalen - quick update: I was able to reproduce the syntax error running a regtest node locally and could trigger the same syntax issue when inserting a derived key with a transaction spending limit. I've put up this PR which will insert a byte slice representing the TransactionSpendingLimitTracker instead of the struct itself. I am currently syncing a prod node from scratch based off that PR and will report when it finishes syncing.
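For illustration, that serialize-before-insert fix could be sketched like this in Go (TxnSpendingLimit and its fields are made-up stand-ins for core's TransactionSpendingLimitTracker, and gob is just one possible encoding): the tracker is encoded to an opaque byte slice before it reaches the Postgres layer, so the driver never tries to render a Go struct, and the ":" characters inside it, as SQL.

```go
package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
)

// TxnSpendingLimit is an illustrative stand-in for core's
// TransactionSpendingLimitTracker; the field names here are invented.
type TxnSpendingLimit struct {
	GlobalDESOLimit uint64
	OpCountLimit    map[string]uint64
}

// toBytes gob-encodes the struct, producing an opaque byte slice that
// the DB layer can store without interpreting its contents.
func toBytes(l *TxnSpendingLimit) ([]byte, error) {
	var buf bytes.Buffer
	if err := gob.NewEncoder(&buf).Encode(l); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

// fromBytes reverses toBytes when the row is read back.
func fromBytes(b []byte) (*TxnSpendingLimit, error) {
	var l TxnSpendingLimit
	if err := gob.NewDecoder(bytes.NewReader(b)).Decode(&l); err != nil {
		return nil, err
	}
	return &l, nil
}

func main() {
	in := &TxnSpendingLimit{GlobalDESOLimit: 100, OpCountLimit: map[string]uint64{"SUBMIT_POST": 5}}
	b, err := toBytes(in)
	if err != nil {
		panic(err)
	}
	out, err := fromBytes(b)
	if err != nil {
		panic(err)
	}
	fmt.Println(out.GlobalDESOLimit, out.OpCountLimit["SUBMIT_POST"]) // 100 5
}
```

The byte slice then binds cleanly as a single bytea parameter, sidestepping whatever the struct's textual rendering was doing to the SQL.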
from core.
@tijno @mvanhalen - another update: so we got PAST block 130970 without the syntax issue - YAY! So the PR mentioned above does resolve SOME issues.
However, at block 132804, this transaction was causing postgres to throw a RuleErrorCoinTransferBalanceEntryDoesNotExist. This transaction is transferring zero DAO coins, which isn't against the consensus rules (at the moment). After hypersync, when querying for a BalanceEntry from badger, if the balance entry does not exist, we return a BalanceEntry initialized with a balance of zero coins. However, Postgres was simply returning nil, thus causing the rule error. I've updated the PR above with this commit to make Postgres behave the same as badger.
I'm syncing again now and will update you once this sync completes or hits another issue.
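A minimal sketch of the behavior change described above, with simplified stand-in types rather than core's real BalanceEntry or its Postgres query:

```go
package main

import "fmt"

// BalanceEntry is a simplified stand-in for core's DAO-coin balance record.
type BalanceEntry struct {
	HODLerPKID   string
	CreatorPKID  string
	BalanceNanos uint64
}

// getBalanceEntry simulates the lookup. Before the fix, a Postgres miss
// returned nil; returning a zero-balance entry instead matches what badger
// does, so a zero-coin transfer no longer trips
// RuleErrorCoinTransferBalanceEntryDoesNotExist.
func getBalanceEntry(store map[string]*BalanceEntry, hodler, creator string) *BalanceEntry {
	if entry, ok := store[hodler+"|"+creator]; ok {
		return entry
	}
	// Miss: behave like badger and hand back a zeroed entry, never nil.
	return &BalanceEntry{HODLerPKID: hodler, CreatorPKID: creator, BalanceNanos: 0}
}

func main() {
	store := map[string]*BalanceEntry{} // empty: every lookup is a miss
	entry := getBalanceEntry(store, "hodler-pk", "creator-pk")
	fmt.Println(entry.BalanceNanos) // 0
}
```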
from core.
Mine crashed at 39139
8GB server for the backend with a 16GB DB server.
Previous postgres nodes did not have issues with this.
No OOM errors in logs
No logs either
I0512 00:35:56.208076 1 server.go:1731] Server._handleBlock: Received block ( 39138 / 128763 ) from Peer [ Remote Address: 34.123.41.111:17000 PeerID=2 ]
I0512 00:35:56.318618 1 server.go:1731] Server._handleBlock: Received block ( 39139 / 128763 ) from Peer [ Remote Address: 34.123.41.111:17000 PeerID=2 ]
I0512 00:35:57.279873 1 base.go:228] Refreshing exchange rate...
<< CRASHED >>>
I0512 00:35:59.832240 1 config.go:149] Logging to directory /deso/db/v-00000
There was a huge CPU spike at the crash though on the backend server.
Memory of that server peaked at 85%
DB Server no CPU spike.
Memory usage at 10%
Restarting the backend container just now (11 hours after the crash), it seemed to start up fine, but the server is now unavailable with memory maxing at 100%.
Looks like it was deleting "empty files" and crashing repeatedly.
Like this:
badger 2022/05/12 10:37:43 INFO: All 0 tables opened in 1ms
badger 2022/05/12 10:37:43 INFO: Discard stats nextEmptySlot: 0
badger 2022/05/12 10:37:43 INFO: Set nextTxnTs to 78236
badger 2022/05/12 10:37:43 INFO: Deleting empty file: /deso/db/v-00000/badgerdb/000069.vlog
Migrations already up to date
I0512 10:37:43.765678 1 mempool.go:2262] Calling StartReadOnlyUtxoViewRegenerator...
I0512 10:37:43.766803 1 server.go:2344] Server.Start: Starting Server
I0512 10:37:43.769170 1 node.go:40] GlobalState BadgerDB Dir: /deso/db/v-00000/badgerdb/global_state
I0512 10:37:43.769196 1 node.go:41] GlobalState BadgerDB ValueDir: /deso/db/v-00000/badgerdb/global_state/badgerdb
I0512 10:37:43.819180 1 connection_manager.go:833] Full node socket initialized
<< CRASHED >>>
I0512 10:37:46.057936 1 config.go:149] Logging to directory /deso/db/v-00000
I0512 10:37:46.058947 1 config.go:150] Running node in MAINNET mode
from core.
Mine got stuck again with 16GB RAM around the same block.
I see some IO reads spiking. Mem and CPU are well within limits. The odd thing is that the blocks that make it hang do not have a huge number of transactions.
from core.
I've not retried as I've not seen a relevant fix. There was a postgres fix for a DAO-trade-related bug.
It would be a shame if the hard fork goes live again without a postgres fix in place.
from core.
lol :) that was a fun block
from core.
Hey guys, @lazynina and I have been debugging here. We're hoping to have everything sorted out this weekend, hopefully with an update tonight!
from core.
@diamondhands0 any updates on this?
cc @mvanhalen
from core.
@tijno - I believe it is 3JuEUd2owMuXAkMnvh2JzgigVKGor8W2Ws4aDrRGSMH8hsuVVJfq5z
The logs @diamondhands0 posted above state that the block was marked invalid because of a Creator Coin transaction that was spending non-existent UTXOs, and this is the only CREATOR_COIN transaction in block 130972.
from core.
@mvanhalen - so there were still some issues with how matching orders were pulled for DAO_COIN_LIMIT_ORDER and with storing MESSAGING_GROUP transactions in postgres. To get around this, I've written these to badger - we already write some other data to badger even when running a postgres node. The latest commit to main synced a postgres node for me, so you should be good to go now!
I'll continue to look at getting both MESSAGING_GROUP and DAO_COIN_LIMIT_ORDER fully supported in postgres and will keep you apprised of updates on that front as I have them.
from core.
We are fully in sync again. Thanks @lazynina
from core.
@diamondhands0 is this resolved in the 2.2.1 release?
from core.
Tried the new latest release yesterday. Now it stopped earlier.
] Server._handleHeaderBundle: Duplicate header 00000000000071a580a6bbc58fd98c6b891fedad216f926cb473fd3ad67b70d3 received from peer [ Remote Address: 35.232.92.5:17000 PeerID=1 ] in state SYNCING_HEADERS. Local header tip height 34685 hash 000000000000941945fe6cf2ab3fd846728104c6540077f22a8f05e42138f837 with duplicate 00000000000071a580a6bbc58fd98c6b891fedad216f926cb473fd3ad67b70d3
Feels a bit like it got corrupted. Will try the stable release now.
from core.
Got stuck with stable again; 39120 was the last processed block. Saw some spikes in memory but no logged exceptions. Giving it some more juice. Normally the previous config could handle it without issues, but the new changes might cause a bit more consumption. Retrying again.
from core.
Thanks @diamondhands0 !
I used the hypersync flag only. But I just started a new sync with
```
HYPERSYNC=false
SYNC_TYPE=blocksync
```
in the dev.env file. That should enable those options, correct? The sync is working; I should have the answer tomorrow.
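As a small sanity check, writing and verifying those two dev.env entries might look like this (assuming, as in the thread, that the run script forwards them to core as --hypersync=false and --sync-type=blocksync):

```shell
# Write the two Postgres-sync settings into dev.env. The run script is
# assumed to turn these into the corresponding core flags.
cat > dev.env <<'EOF'
HYPERSYNC=false
SYNC_TYPE=blocksync
EOF

# Confirm both entries landed exactly as expected.
grep -E '^(HYPERSYNC|SYNC_TYPE)=' dev.env
```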
from core.
Awesome. Those two flags the way you have them are correct. Let's see how your sync does in the morning.
Hypersync isn't yet supported with Postgres which is why we have to specify "false". It will be supported soon, though, and a big motivation for hypersync is that it can solve the scaling issues with Postgres that were preventing us from fully productionizing it.
from core.
So I tried it with the new flags (the hypersync flag I had enabled before), but it gets stuck on the same block, 39121. No exceptions logged. @diamondhands0
The performance profile remains as posted before. I see lots of disk reads just after the sync fails; not sure if that's because of the restart of the backend every x seconds.
from core.
Any ideas on how to continue @diamondhands0 @tijno ?
from core.
I set up a new sync yesterday with a more powerful server: 32 GB, more CPU and IO (8GB was enough in the past for a postgres sync). It's now past 42K blocks and still syncing. I think it's quite some overkill, but if that works we are at least in sync. Will let you know if it completes @diamondhands0
from core.
What is your DB server running with, @mvanhalen?
I ran the node with 16GB and the DB server with 16GB and that wasn't enough for the v2.2 sync.
Like you, previously 8GB on the node was more than enough.
from core.
@tijno Db server has:
Compute size:
Standard_D4s_v3 (4 vCores, 16 GiB memory, 6400 max iops)
Storage 1024 GiB
node sync server has:
4 vCPUs, 32 GiB memory. Not sure if it was the IOPS or the memory that's making it work now. Still syncing.
from core.
Did not get a complete sync. Got this error from the logs:
E0519 14:41:00.545169 1 blockchain.go:1380] MarkBlockInvalid: Block height: 46631, Block hash: 00000000000009f34619f1108617feb354fe8e8d0abef12254f3d252da7cfa87, Error: ConnectBlock: : ConnectTransaction: : _connectSubmitPost: : RuleErrorInputSpendsPreviouslySpentOutput
E0519 14:41:00.545204 1 blockchain.go:1381] MarkBlockInvalid: Printing stack trace so error is easy to find:
E0519 14:41:00.545232 1 blockchain.go:1382] goroutine 147 [running]:
runtime/debug.Stack()
	/usr/lib/go/src/runtime/debug/stack.go:24 +0x65
	/deso/src/core/lib/blockchain.go:1382 +0x14a
	/deso/src/core/lib/blockchain.go:2105 +0x1b39
github.com/deso-protocol/core/lib.(*Server)._handleBlock(0xc0000ed320, 0xc1e38698c0, 0xc190170f00)
	/deso/src/core/lib/server.go:1796 +0x625
github.com/deso-protocol/core/lib.(*Server)._handlePeerMessages(0xf4241, 0x0)
	/deso/src/core/lib/server.go:2117 +0x1cd
github.com/deso-protocol/core/lib.(*Server).messageHandler(0xc0000ed320)
	/deso/src/core/lib/server.go:2160 +0x1ad
created by github.com/deso-protocol/core/lib.(*Server).Start
	/deso/src/core/lib/server.go:2346 +0xb1
E0519 14:41:00.545236 1 blockchain.go:1389] MarkBlockInvalid: Not marking blocks invalid for now because it makes debugging easier
I0520 15:46:07.746432 1 server.go:1731] Server._handleBlock: Received block ( 46631 / 131238 ) from Peer [ Remote Address: 35.232.92.5:17000 PeerID=2 ]
E0520 15:46:10.825402 1 server.go:1725] Server._handleBlock: Encountered an error processing block <Header: < 46631, 00000000000009f34619f1108617feb354fe8e8d0abef12254f3d252da7cfa87, 1 >, Signer Key: BC1YLh768bVj2R3QpSiduxcvn7ipxF3L3XHsabZYtCGtsinUnNrZvNN>. Disconnecting from peer [ Remote Address: 35.232.92.5:17000 PeerID=2 ]: Error while processing block: : ConnectBlock: : ConnectTransaction: : _connectNFTBid: Error getting bidder balance: : GetSpendableUtxosForPublicKey: balance underflow (236254451029,34049025845311)
Synced again. It stops at the same block. Think this block might have the first NFT bids?
'_connectNFTBid: Error getting bidder balance: : GetSpendableUtxosForPublicKey: balance underflow (236254451029,34049025845311)'
Previous postgres sync crashed. So we are out of sync now. Will check code to see if I can find the issue
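For context, a guard of the shape that error message suggests might look like this Go sketch (spendNanos and its callers are illustrative, not core's actual GetSpendableUtxosForPublicKey): balances are unsigned, so spending more than the balance has to return an error rather than silently wrap around.

```go
package main

import "fmt"

// spendNanos returns the remaining balance after a spend, or an error
// formatted like the log line above when the spend exceeds the balance.
// Illustrative only; not core's implementation.
func spendNanos(balance, spend uint64) (uint64, error) {
	if spend > balance {
		// Unsigned subtraction would wrap; surface an underflow error instead.
		return 0, fmt.Errorf("balance underflow (%d,%d)", balance, spend)
	}
	return balance - spend, nil
}

func main() {
	// The pair from the log: balance 236254451029, attempted spend 34049025845311.
	if _, err := spendNanos(236254451029, 34049025845311); err != nil {
		fmt.Println(err) // balance underflow (236254451029,34049025845311)
	}
	left, _ := spendNanos(100, 40)
	fmt.Println(left) // 60
}
```

The interesting question in the thread is why the Postgres path computed a spendable balance so much smaller than the bid amount in the first place; the guard itself just reports the mismatch.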
from core.
Seems BC1YLh768bVj2R3QpSiduxcvn7ipxF3L3XHsabZYtCGtsinUnNrZvNN does not exist:
https://www.openprosper.com/pk/BC1YLh768bVj2R3QpSiduxcvn7ipxF3L3XHsabZYtCGtsinUnNrZvNN
from core.
That pub key triggers a panic in the backend - not sure why
test:
curl 'http://localhost:17001/api/v1/transaction-info' -X 'POST' -H 'Content-Type: application/json' --data '{"PublicKeyBase58Check":"BC1YLh768bVj2R3QpSiduxcvn7ipxF3L3XHsabZYtCGtsinUnNrZvNN","LastTransactionIDBase58Check":"","LastPublicKeyTransactionIndex":-1,"Limit":200}'
Backend panic log:
I0520 16:12:55.352463 1 server.go:3160] http: panic serving 172.18.0.1:39838: runtime error: index out of range [-1]
goroutine 2364449 [running]:
net/http.(*conn).serve.func1()
/usr/lib/go/src/net/http/server.go:1802 +0xb9
panic({0x1225020, 0xc3672c83d8})
/usr/lib/go/src/runtime/panic.go:1047 +0x266
github.com/deso-protocol/backend/routes.(*APIServer).APITransactionInfo(0xc203cb8000, {0x17fd518, 0xc00134da40}, 0x478893)
/deso/src/backend/routes/exchange.go:1043 +0x2158
net/http.HandlerFunc.ServeHTTP(0x0, {0x17fd518, 0xc00134da40}, 0xc397034ef0)
/usr/lib/go/src/net/http/server.go:2047 +0x2f
github.com/deso-protocol/backend/routes.Logger.func1({0x17fd518, 0xc00134da40}, 0xc200899d00)
/deso/src/backend/routes/server.go:1795 +0xa2
net/http.HandlerFunc.ServeHTTP(0x11fa280, {0x17fd518, 0xc00134da40}, 0xc)
/usr/lib/go/src/net/http/server.go:2047 +0x2f
github.com/deso-protocol/backend/routes.AddHeaders.func1({0x17fd518, 0xc00134da40}, 0xc200899d00)
/deso/src/backend/routes/server.go:1922 +0x9c6
net/http.HandlerFunc.ServeHTTP(0xc3514e4a00, {0x17fd518, 0xc00134da40}, 0xc37507a570)
/usr/lib/go/src/net/http/server.go:2047 +0x2f
github.com/unrolled/secure.(*Secure).Handler.func1({0x17fd518, 0xc00134da40}, 0xc37507a5a0)
/root/go/pkg/mod/github.com/unrolled/[email protected]/secure.go:177 +0x98
net/http.HandlerFunc.ServeHTTP(0xc200899c00, {0x17fd518, 0xc00134da40}, 0x7f987c65f2a0)
/usr/lib/go/src/net/http/server.go:2047 +0x2f
github.com/gorilla/mux.(*Router).ServeHTTP(0xc0fbe75d40, {0x17fd518, 0xc00134da40}, 0xc200899b00)
/root/go/pkg/mod/github.com/gorilla/[email protected]/mux.go:210 +0x1cf
gopkg.in/DataDog/dd-trace-go.v1/contrib/internal/httputil.TraceAndServe({0x17e9160, 0xc0fbe75d40}, {0x17fce58, 0xc2012d8620}, 0xc200899a00, {0x12bcdf3, 0xa}, {0xc0fb23b500, 0x1d}, {0x0, ...}, ...)
/root/go/pkg/mod/gopkg.in/!data!dog/[email protected]/contrib/internal/httputil/trace.go:43 +0x9da
gopkg.in/DataDog/dd-trace-go.v1/contrib/gorilla/mux.(*Router).ServeHTTP(0xc351598e30, {0x17fce58, 0xc2012d8620}, 0x10)
/root/go/pkg/mod/gopkg.in/!data!dog/[email protected]/contrib/gorilla/mux/mux.go:115 +0x30c
net/http.serverHandler.ServeHTTP({0x17fa390}, {0x17fce58, 0xc2012d8620}, 0xc200899a00)
/usr/lib/go/src/net/http/server.go:2879 +0x43b
net/http.(*conn).serve(0xc36994ef00, {0x180a398, 0xc351a515c0})
/usr/lib/go/src/net/http/server.go:1930 +0xb08
created by net/http.(*Server).Serve
/usr/lib/go/src/net/http/server.go:3034 +0x4e8
lastKey := keysFound[len(keysFound)-1]
So maybe it just panics because no transactions were found for that key: the slice is empty, making the index -1.
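A sketch of the guard that line appears to be missing (safeLastKey is a hypothetical helper, not backend's actual code in exchange.go): check for an empty slice before taking the last element, so an unknown public key yields an error response instead of an "index out of range [-1]" panic.

```go
package main

import "fmt"

// safeLastKey returns the final element of keysFound, or ok=false when
// the slice is empty -- the unguarded keysFound[len(keysFound)-1] is
// what produces "index out of range [-1]" for an unknown public key.
func safeLastKey(keysFound []string) (lastKey string, ok bool) {
	if len(keysFound) == 0 {
		return "", false
	}
	return keysFound[len(keysFound)-1], true
}

func main() {
	if key, ok := safeLastKey([]string{"txn-a", "txn-b"}); ok {
		fmt.Println(key) // txn-b
	}
	if _, ok := safeLastKey(nil); !ok {
		// Handler would return a 4xx/empty result here instead of panicking.
		fmt.Println("no transactions for that public key")
	}
}
```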
from core.
Seems this block had the first NFT transactions also. Create NFT and bid.
https://www.openprosper.com/deso-scan/block/00000000000009f34619f1108617feb354fe8e8d0abef12254f3d252da7cfa87
from core.
So how come this works for non-Postgres? Or is the Postgres block handling the issue here now? It used to work.
I guess there was some change in GetSpendableDeSoBalanceNanosForPublicKey, or in how that function is called in the hard-fork code?
Any ideas @lazynina @diamondhands0 ?
from core.
Great @diamondhands0 @lazynina, I'm syncing this version now. Will close this issue when the sync completes.
from core.
Hey all, just an update here. Sadly, my node appears to have gotten stuck right around the fork height with the following error. Recall the fork height was 130901, so I expect that we'll find the issue if we trace this txn in the block explorer. We're on it!
I0523 13:13:55.453209 3125824 block_view.go:2377] ConnectBlock: Connecting block <Header: < 130972, 000000000000a465485d847fbcf4b5c74b88137377771a3aff155a6e1dfacac5, 1 >, Signer Key: BC1YLh768bVj2R3QpSiduxcvn7ipxF3L3XHsabZYtCGtsinUnNrZvNN>
E0523 13:13:55.459782 3125824 blockchain.go:1380] MarkBlockInvalid: Block height: 130972, Block hash: 000000000000a465485d847fbcf4b5c74b88137377771a3aff155a6e1dfacac5, Error: ConnectBlock: : ConnectTransaction: : _connectCreatorCoin: : RuleErrorInputSpendsNonexistentUtxo
E0523 13:13:55.460074 3125824 blockchain.go:1381] MarkBlockInvalid: Printing stack trace so error is easy to find:
E0523 13:13:55.460133 3125824 blockchain.go:1382] goroutine 60 [running]:
runtime/debug.Stack()
/usr/local/go/src/runtime/debug/stack.go:24 +0x7a
github.com/deso-protocol/core/lib.(*Blockchain).MarkBlockInvalid(0xc00077c000, 0xc10586ad80, {0xc15f5aaf00, 0x60})
/media/veracrypt1/project/code/goland_directory_2/core/lib/blockchain.go:1382 +0x1f4
github.com/deso-protocol/core/lib.(*Blockchain).ProcessBlock(0xc00077c000, 0xc000630900, 0x0)
/media/veracrypt1/project/code/goland_directory_2/core/lib/blockchain.go:2105 +0x317b
github.com/deso-protocol/core/lib.(*Server)._handleBlock(0xc000968a20, 0xc0000e8900, 0xc000630900)
/media/veracrypt1/project/code/goland_directory_2/core/lib/server.go:1796 +0x8fc
github.com/deso-protocol/core/lib.(*Server)._handlePeerMessages(0xc000968a20, 0xc15ddbe5e0)
/media/veracrypt1/project/code/goland_directory_2/core/lib/server.go:2117 +0x53c
github.com/deso-protocol/core/lib.(*Server).messageHandler(0xc000968a20)
/media/veracrypt1/project/code/goland_directory_2/core/lib/server.go:2160 +0x2b9
created by github.com/deso-protocol/core/lib.(*Server).Start
/media/veracrypt1/project/code/goland_directory_2/core/lib/server.go:2346 +0x108
E0523 13:13:55.460169 3125824 blockchain.go:1389] MarkBlockInvalid: Not marking blocks invalid for now because it makes debugging easier
E0523 13:13:55.460194 3125824 server.go:1725] Server._handleBlock: Encountered an error processing block <Header: < 130972, 000000000000a465485d847fbcf4b5c74b88137377771a3aff155a6e1dfacac5, 1 >, Signer Key: BC1YLh768bVj2R3QpSiduxcvn7ipxF3L3XHsabZYtCGtsinUnNrZvNN>. Disconnecting from peer [ Remote Address: 35.232.92.5:17000 PeerID=1 ]: Error while processing block: : ConnectBlock: : ConnectTransaction: : _connectCreatorCoin: : RuleErrorInputSpendsNonexistentUtxo
from core.
Thanks for the update @diamondhands0. Seems like the same public key? The errors are probably related.
from core.
ok thx 🙏🏼
hopefully you figure it out
from core.
@lazynina out of interest, what's the tx it's failing on?
from core.
What is odd is that the same public key as in the previous errors is involved. The one that gave some other errors as well. @lazynina @tijno
from core.
What can we do to move this forward? @lazynina @diamondhands0
from core.
Great @lazynina, that is progress. Thanks for doing this on your weekend. Let's hope this brings us back in sync.
from core.
Super! fingers crossed
from core.
Did we get lucky with the sync? @lazynina
from core.
Any news on this @lazynina @diamondhands0 thanks.
from core.
@mvanhalen - I ran into another issue in block 132767. When this AUTHORIZE_DERIVED_KEY txn is being connected, I'm hitting a RuleErrorInputWithPublicKeyDifferentFromTxnPublicKey error. I pulled up the transaction in which this output was generated and it was this DAO_COIN_LIMIT_ORDER txn, so I believe there is a discrepancy between how matching orders are being pulled by Postgres vs. Badger that I'm trying to debug.
from core.
Thanks for the update @lazynina. Seems we're almost there. Anxious to have it working again and have NFTz.me in full swing.
from core.
Saw that @lazynina already started a fresh sync. Think this will help us out for now.
from core.
Fresh build and images, hitting the same issue (at block 130970/130972).
Have the docker images been updated? (cc @diamondhands0 @lazynina )
Backend output:
E0820 07:30:55.570707 1 server.go:1725] Server._handleBlock: Encountered an error processing block <Header: < 130972, 000000000000a465485d847fbcf4b5c74b88137377771a3aff155a6e1dfacac5, 1 >, Signer Key: BC1YLh768bVj2R3QpSiduxcvn7ipxF3L3XHsabZYtCGtsinUnNrZvNN>. Disconnecting from peer [ Remote Address: 35.232.92.5:17000 PeerID=118 ]: Error while processing block: : ConnectBlock: : ConnectTransaction: : _connectFollow: : RuleErrorInputSpendsPreviouslySpentOutput
E0820 07:30:55.570928 1 peer.go:1229] ReadDeSoMessage: : ReadMessage: Could not read payload for message type (BLOCK): read tcp 172.19.0.3:59282->35.232.92.5:17000: use of closed network connection
E0820 07:30:55.571001 1 peer.go:1057] Peer.inHandler: Can't read message from peer [ Remote Address: 35.232.92.5:17000, DISCONNECTED PeerID=118 ]: ReadDeSoMessage: : ReadMessage: Could not read payload for message type (BLOCK): read tcp 172.19.0.3:59282->35.232.92.5:17000: use of closed network connection
I0820 07:30:55.583608 1 peer.go:519] StartDeSoMessageProcessor: Cleaning up message queue for peer: [ Remote Address: 35.232.92.5:17000, DISCONNECTED PeerID=118 ]
I0820 07:30:55.583709 1 peer.go:535] StartDeSoMessageProcessor: Stopping because peer disconnected: [ Remote Address: 35.232.92.5:17000, DISCONNECTED PeerID=118 ]
I0820 07:30:56.482973 1 base.go:281] Refreshing USD to BTC exchange rate
I0820 07:30:56.915470 1 base.go:288] New USD to BTC exchange rate: 21311.900000
I0820 07:30:57.317120 1 base.go:292] Refreshing USD to ETH exchange rate
I0820 07:30:57.805701 1 peer.go:1148] Peer.Start: Starting peer [ Remote Address: 35.232.92.5:17000 PeerID=119 ]
I0820 07:30:57.805835 1 peer.go:531] StartDeSoMessageProcessor: Starting for peer [ Remote Address: 35.232.92.5:17000 PeerID=119 ]
I0820 07:30:57.848815 1 base.go:299] New USD to ETH exchange rate: 1647.420000
I0820 07:30:57.972404 1 server.go:765] Received header bundle with 0 headers in state SYNCING_BLOCKS from peer [ Remote Address: 35.232.92.5:17000 PeerID=119 ]. Downloaded ( 157279 / 157279 ) total headers
I0820 07:30:58.484140 1 server.go:1731] Server._handleBlock: Received block ( 130972 / 157279 ) from Peer [ Remote Address: 35.232.92.5:17000 PeerID=119 ]
E0820 07:30:58.551193 1 blockchain.go:1380] MarkBlockInvalid: Block height: 130972, Block hash: 000000000000a465485d847fbcf4b5c74b88137377771a3aff155a6e1dfacac5, Error: ConnectBlock: : ConnectTransaction: : _connectFollow: : RuleErrorInputSpendsPreviouslySpentOutput
E0820 07:30:58.551241 1 blockchain.go:1381] MarkBlockInvalid: Printing stack trace so error is easy to find:
E0820 07:30:58.551273 1 blockchain.go:1382] goroutine 2986 [running]:
runtime/debug.Stack()
/usr/lib/go/src/runtime/debug/stack.go:24 +0x65
github.com/deso-protocol/core/lib.(*Blockchain).MarkBlockInvalid(0x1850a40?, 0xc302a27400, {0xc00a620cb0, 0x61})
/deso/src/core/lib/blockchain.go:1382 +0x12c
github.com/deso-protocol/core/lib.(*Blockchain).ProcessBlock(0xc2a597e000, 0xc00a6400f0, 0xd0?)
/deso/src/core/lib/blockchain.go:2105 +0x1ccd
github.com/deso-protocol/core/lib.(*Server)._handleBlock(0xc2a5978000, 0xc00a281200, 0xc00a6400f0)
/deso/src/core/lib/server.go:1796 +0x60d
github.com/deso-protocol/core/lib.(*Server)._handlePeerMessages(0xc2a5978000?, 0xc00a281200?)
/deso/src/core/lib/server.go:2117 +0x1cd
github.com/deso-protocol/core/lib.(*Server).messageHandler(0xc2a5978000)
/deso/src/core/lib/server.go:2160 +0x1ad
created by github.com/deso-protocol/core/lib.(*Server).Start
/deso/src/core/lib/server.go:2346 +0xac
EDIT:
Appears docker images have been updated lots of times since this issue.
Error: ConnectBlock: : ConnectTransaction: : _connectFollow: : RuleErrorInputSpendsPreviouslySpentOutput
Any ideas?
from core.
A full week later and 6 sync attempts in; I've double-checked the images are the correct deso-backend:stable.
image: docker.io/desoprotocol/backend:stable
backend | E0826 18:18:07.308577 1 server.go:1725] Server._handleBlock: Encountered an error processing block <Header: < 130970, 00000000000016027d8104a31980eff4e973c61d82ca8e7f13c843cac5603eca, 1 >, Signer Key: BC1YLh768bVj2R3QpSiduxcvn7ipxF3L3XHsabZYtCGtsinUnNrZvNN>. Disconnecting from peer [ Remote Address: 35.232.92.5:17000 PeerID=1 ]: Error while processing block: : ProcessBlock: Problem flushing view to db: ERROR #42601 syntax error at or near ":"
backend | I0826 18:18:07.308925 1 server.go:1731] Server._handleBlock: Received block ( 130971 / 159099 ) from Peer [ Remote Address: 35.232.92.5:17000, DISCONNECTED PeerID=1 ]
Pretty please 🍒 @diamondhands0 @lazynina 🍒 can anyone tell me where I'm going wrong here.
I'm not sure what else I can check here. Everything has been cleared: images, volumes and containers deleted, down to re-installing docker. Images all pulled fresh during the run script.
@tijno @mvanhalen - anything else you guys can suggest?
from core.
Can you try the image with the latest tag please? I think the stable image is out of date and I'll cut a new one as stable hasn't been updated in 3 months
from core.
I had tried :latest, but with OOM issues.
If you feel it'll resolve the issues I'll give it another go, but it will be ~3 days until I reach the block height.
from core.
@StarGeezerPhil how much memory does your machine have?
from core.
I'm running with 48gb available to docker. (@lazynina)
Backend seems to run ~16-20GB and PG ~16GB, but at a number of problematic heights (I jotted down memory peaks around blocks ~9700, ~109147, and ~123387, but there were others) it seems to really make the machine hang, and (if I'm unlucky) the TX enters badger but not postgres.
I'm no expert, but would it be possible for the TX to not enter badger until there's verification that the PG write succeeded? That might prevent the need for a full re-sync when issues like this arise.
from core.
I couldn't be happier to confirm that the new stable images seem to have resolved the issue. Literally just passed block 130970.
from core.