
silkworm's People

Contributors

andrealanfranchi, battlmonstr, canepat, chfast, claudioutt, dario1995, enriavil1, gcolvin, giulio2002, gooddaisy, greg7mdp, gumb0, jacekglen, lupin012, mriccobene, omahs, peroket, sandakersmann, shuoer86, sixtysixter, tbcd, tjayrush, vorot93, wezrule, yperbasis


silkworm's Issues

Evaluate package manager alternatives to Hunter

Evaluate:

  • Conan
  • vcpkg
  • and more

Should work fine on Windows, Linux, and macOS. Also, we should still be able to compile core into WebAssembly.

TODO: what are other criteria, what do we want to optimize?

Might be related to Issue #358.

If we do move away from Hunter, one prerequisite is to include intx and ethash as submodules rather than Hunter packages.

Two points in favour of vcpkg:

  • We already use it for MPIR on Windows (by the way, GMP is now available as well)
  • It integrates nicely with Visual Studio

CRoaring is available in vcpkg (what about Conan?), so we probably should use its packaged version rather than have it as a git submodule.

Build issues when building silkrpc with silkworm as a submodule

I cloned a fresh copy of silkworm and followed the build instructions. The build completed successfully (although there were a few warnings from the cmake step).

I later cloned a fresh copy of silkrpc (git clone --recurse-submodules git@github.com:torquem-ch/silkrpc.git), which uses silkworm as a submodule.

I followed silkrpc's build instructions, but the silkworm build fails because of line 216 in ./silkrpc/silkworm/CMakeLists.txt:

g++-10: error: unrecognized command-line option '-Wthread-safety'
make[2]: *** [silkworm/core/CMakeFiles/silkworm_core.dir/silkworm/chain/blockchain.cpp.o] Error 1
make[1]: *** [silkworm/core/CMakeFiles/silkworm_core.dir/all] Error 2
make: *** [all] Error 2

If I comment out that line (in the silkrpc repo), the silkworm submodule builds and so does silkrpc.

I'm not sure whether this issue belongs in the silkworm or the silkrpc repo.
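For what it's worth, -Wthread-safety is a Clang-only warning, so the usual fix is to guard the flag by compiler rather than comment it out. A hypothetical sketch of such a guard (the actual content of CMakeLists.txt line 216 may differ):

```cmake
# -Wthread-safety is only understood by Clang; guard it so GCC builds
# (e.g. g++-10) don't fail with "unrecognized command-line option".
if(CMAKE_CXX_COMPILER_ID MATCHES "Clang")
  add_compile_options(-Wthread-safety)
endif()
```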

Remove tg_api

silkworm/tg_api was used to allow Turbo-Geth to execute blocks with Silkworm. Since the feature has been removed from Erigon, tg_api is now obsolete and we should remove it, moving the necessary functionality into silkworm/core/execution or db/silkworm/stagesync or something.

Warning: not a good idea if we need tg_api for Akula (see PR #265).

C++20 mode in CI

Add an option to build Silkworm with C++20 (C++17 stays the default) and run it in CI.

Rationale: SilkRPC is C++20 and uses Silkworm. See also Issue #292.

GCC 11 is probably a good option for the C++20 compiler.

Move genesis data to separate repo and import as submodule

Due to size limitations on JSON data, we need to transform the genesis JSON data into byte arrays.
This process is carried out by a CMake macro, which is slow. Besides, since genesis data is constant, it makes no sense to recreate the arrays on every build.

It would be worth moving those to a separate repo where we keep and maintain the pre-built byte arrays.

Current logging implementation works badly in multi-threaded process

Outputs from different worker threads overlap badly:

DEBUG[12-23|17:15:26.737] Worker #3 completed batch #94
INFO DE[BU12G[-1223-2|13|7:1715:1:25:6.2682.82]22 ] BlWoocrkk e r  1#0320 s00ta1 rtTredan bsaatctchio #n10s 1
    1020000 Workers 7/7
DEBUG[12-23|17:15:27.040] Worker #4 completed batch #95
DEBUG[12-23|17:15:27.126] Worker #5 completed batch #96
INFO DEBUG[[112-23|17:15:27.21-23|67] 17B:lo1ck5:27.167 ]  1030001  Worker #T4ransactions       1030000started batch # 102Workers
6/7
INFO D[EB12U-23|17:15:27.545G] [Blo1ck 2  1040001- Transactions 2  3   1040000| W1orkers 77:1/5:72
7.545] Worker #5 started batch #103
DEBUG[12-23|17:15:28.197] Worker #6 completed batch #97
IDEBUGNFO[ [12-1223-23|17|1:17:5:1258:2.48.34311]]  WBloorkcerk  #160 5s0t0a01r Ttraends abatccth i#o1n0s4
     1050000 Workers 7/7
DEBUG[12-23|17:15:29.038] Worker #0 completed batch #98
INFO D[EBUG12-23|17:15:29.318[]1 2Block -23|17:15:29.3 1 91]0 6W0o0r0k1e rT r#a0n ssatcatritoends  b a t c h  1#0160050
00 Workers 7/7
DEBUG[12-23|17:15:30.814] Worker #1 completed batch #99
INFO [DEBUG[1122-2-3|2173:1|5:1317.2:111] 5Wo:rk3er1 #.2111 s]ta rBteld obactckh  # 1 10076000
1 Transactions      1070000 Workers 7/7
DEBUG[12-23|17:15:32.266] Worker #2 completed batch #100
DEBUG[12-23|17:15:32.539] Worker #3 completed batch #101
INFO DEBUG[[112-23|2-23|17:15:32.61717:15:32.617] ] Block W  1080001orker # 2Transactions       1080000started batch # 107Workers
6/7
IDEBUNGF[O 1[2-1223-|2137|:1175::1352:.3923.19]3 1W]o rBkleorc k#      1 0 9 0 0 031  sTtraarntseadc tbiaotncsh   # 1 0 8
1090000 Workers 7/7
DEBUG[12-23|17:15:33.116] Worker #4 completed batch #102
INFO D[EBUG12-23|17:15:33.337[]1 2Block -23  1100001| Transactions 17     1:1000001 Workers 5:33.3387]/ 7
Worker #4 started batch #109
DEBUG[12-23|17:15:33.457] Worker #5 completed batch #103
IDEBUGNF[O 12[-1223-|127:315|:313.7:1571:37] 3Wo.rk7er1 #7]5  Blstaocrtk e d  ba1tc1h 1#010100
1 Transactions      1110000 Workers 7/7
DEBUG[12-23|17:15:34.468] Worker #6 completed batch #104
INFO DEB[UG1[212-23|17:15:3-243|.59517] :W15orker #:634. st59a5r]t eBld ocbak t  c11h2 0#011011
 Transactions      1120000 Workers 7/7
DEBUG[12-23|17:15:35.289] Worker #0 completed batch #105
INFO D[EBUG12-23|17:15:35.489[]1 2Block -23| 1 71:11350:03051. 4T9r0a]ns aWcotrikoenrs  # 0   s t a1r1t3e0d0 0b0a tWcohr k#e1r1s2
7/7
DEBUG[12-23|17:15:37.154] Worker #1 completed batch #106
INFO D[EB12U-23G|17:15:37.404[]1 2-Bl23|1oc7k :  1511:3407.0041 T05] ranWsaorctkeior ns# 1     st 1ar14te00d 00bat Wchor #k1er13s
7/7
DEBUG[12-23|17:15:38.459] Worker #2 completed batch #107
DEBUG[12-23|17:15:38.642] Worker #3 completed batch #108
INFO DEBUG[[112-2-23|1723|:1517:15:38.830:38.830] ]Worker # 2B started batch #l114ock
  1150001 Transactions      1150000 Workers 6/7
CRIT [12-23|17:15:39.034] Unexpected error : Got invalid signature in tx for block number 1155442
INFO [12-23|17:15:39.037] Stopping worker thread #0
DEBUG[12-23|17:15:39.369] Worker #4 completed batch #109
DEBUG[12-23|17:15:39.506] Worker #5 completed batch #110
DEBUG[12-23|17:15:40.065] Worker #6 completed batch #111
DEBUG[12-23|17:15:40.565] Worker #0 completed batch #112
INFO [12-23|17:15:40.567] Stopping worker thread #1
DEBUG[12-23|17:15:41.585] Worker #1 completed batch #113
INFO [12-23|17:15:41.587] Stopping worker thread #2
DEBUG[12-23|17:15:42.395] Worker #2 completed batch #114
INFO [12-23|17:15:42.397] Stopping worker thread #3
INFO [12-23|17:15:42.399] Stopping worker thread #4
INFO [12-23|17:15:42.400] Stopping worker thread #5
INFO [12-23|17:15:42.401] Stopping worker thread #6
INFO [12-23|17:15:42.878] All done !
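The interleaving suggests each log line is emitted as several separate stream writes with no synchronization between threads. One possible shape of a fix, sketched with hypothetical names (not Silkworm's actual logging API): format the whole line into a local buffer first, then write it under a mutex in a single operation.

```cpp
#include <iostream>
#include <mutex>
#include <sstream>
#include <string>

// Hypothetical helper: build the complete line up front so that the
// level, timestamp, and message cannot be split across thread switches.
inline std::string log_line(const std::string& level, const std::string& msg) {
    std::ostringstream oss;
    oss << level << " " << msg << '\n';
    return oss.str();
}

// Emit the pre-formatted line atomically: one lock, one buffered write.
inline void emit(const std::string& line) {
    static std::mutex log_mutex;
    std::lock_guard<std::mutex> guard(log_mutex);
    std::cout << line;
}
```

The key point is that the mutex protects a single write of an already-complete line; locking around multiple `<<` operators per line would also work but holds the lock longer.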

Build failure on Mac

When I try to build on my mac desktop, I get these errors/warnings:

[ 65%] Built target silkworm
Scanning dependencies of target consensus
[ 66%] Building CXX object CMakeFiles/consensus.dir/cmd/consensus.cpp.o
[ 67%] Linking CXX executable consensus
ld: warning: direct access in function 'unsigned long std::__1::__str_find_first_of<char, unsigned long, std::__1::char_traits<char>, 18446744073709551615ul>(char const*, unsigned long, char const*, unsigned long, unsigned long)' from file '/Users/jrush/.hunter/_Base/9c5c7fa/c914b13/2c5e733/Install/lib/libboost_filesystem-mt-d-x64.a(path.o)' to global weak symbol 'std::__1::char_traits<char>::eq(char, char)' from file 'CMakeFiles/consensus.dir/cmd/consensus.cpp.o' means the weak symbol cannot be overridden at runtime. This was likely caused by different translation units being compiled with different visibility settings.
[ 67%] Built target consensus
Scanning dependencies of target unit_test
[ 68%] Building CXX object CMakeFiles/unit_test.dir/silkworm/db/change_test.cpp.o
[ 69%] Building CXX object CMakeFiles/unit_test.dir/silkworm/db/lmdb_test.cpp.o
[ 70%] Building CXX object CMakeFiles/unit_test.dir/silkworm/db/temp_lmdb_test.cpp.o
[ 71%] Building CXX object CMakeFiles/unit_test.dir/silkworm/db/util_test.cpp.o
[ 72%] Building CXX object CMakeFiles/unit_test.dir/silkworm/execution/evm_test.cpp.o
[ 73%] Building CXX object CMakeFiles/unit_test.dir/silkworm/execution/processor_test.cpp.o
[ 74%] Building CXX object CMakeFiles/unit_test.dir/silkworm/rlp/decode_test.cpp.o
[ 75%] Building CXX object CMakeFiles/unit_test.dir/silkworm/rlp/encode_test.cpp.o
[ 76%] Building CXX object CMakeFiles/unit_test.dir/silkworm/trie/vector_root_test.cpp.o
[ 77%] Building CXX object CMakeFiles/unit_test.dir/silkworm/types/account_test.cpp.o
[ 78%] Building CXX object CMakeFiles/unit_test.dir/silkworm/types/block_test.cpp.o
[ 79%] Building CXX object CMakeFiles/unit_test.dir/silkworm/types/transaction_test.cpp.o
[ 80%] Linking CXX executable unit_test
[ 88%] Built target unit_test
Scanning dependencies of target check_db
[ 89%] Building CXX object CMakeFiles/check_db.dir/cmd/check_db.cpp.o
/Users/jrush/Development/silkworm/cmd/check_db.cpp:149:31: error: cannot initialize a parameter of type 'size_t *' (aka 'unsigned long *') with an rvalue of type 'uint64_t *' (aka 'unsigned long long *')
                b->get_rcount(&rcount);
                              ^~~~~~~
/Users/jrush/Development/silkworm/silkworm/db/chaindb.hpp:237:36: note: passing argument to parameter 'count' here
            int get_rcount(size_t* count);       // Returns the number of records held in bucket
                                   ^
1 error generated.
make[2]: *** [CMakeFiles/check_db.dir/cmd/check_db.cpp.o] Error 1
make[1]: *** [CMakeFiles/check_db.dir/all] Error 2
make: *** [all] Error 2

I'm happy to help fix it. Please advise.
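The root cause of the check_db error is that on macOS size_t is unsigned long while uint64_t is unsigned long long: identical width, but distinct types, so a uint64_t* does not convert to a size_t*. A minimal reproduction of the fix, with a hypothetical stand-in for get_rcount:

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical stand-in for ChainDb's get_rcount(size_t*).
int get_rcount(std::size_t* count) {
    *count = 42;
    return 0;
}

// Declaring the local counter as size_t (instead of uint64_t) makes the
// call compile on macOS as well as Linux/Windows; widen afterwards if a
// fixed-width result is needed.
std::uint64_t read_count() {
    std::size_t rcount{0};  // was: uint64_t rcount
    get_rcount(&rcount);
    return static_cast<std::uint64_t>(rcount);
}
```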

Etl load inefficiency when all keys are already sorted

For tracking purposes.

The ETL load() method is quite inefficient when all items are guaranteed to be sorted across all flushed buffers.
This is exactly the case sender recovery implements: items are collected in the very same order they will be inserted into the db.
In this scenario there is no need to go through a priority queue, traversing all flushed buffers to check which one has the lowest key for every item being processed.

See https://github.com/torquem-ch/silkworm/blob/1d454b11c9efba66ac81eb3b39c7c9ed67aefc36/db/silkworm/etl/collector.cpp#L102-L157
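A sketch of the suggested fast path, under the assumption (which sender recovery satisfies) that the flushed buffers are globally ordered; the names and buffer representation are hypothetical, not the collector's actual types:

```cpp
#include <string>
#include <utility>
#include <vector>

using Entry = std::pair<std::string, std::string>;  // (key, value)

// Fast path: when every entry of buffer i sorts before every entry of
// buffer i+1, we can stream the buffers back to back instead of popping
// a k-way priority queue once per item.
template <typename LoadFn>
void load_presorted(const std::vector<std::vector<Entry>>& flushed_buffers,
                    LoadFn&& load) {
    for (const auto& buffer : flushed_buffers) {
        for (const auto& entry : buffer) {
            load(entry.first, entry.second);  // O(1) per item, no heap
        }
    }
}
```

The collector would pick this path behind a caller-supplied "already sorted" flag and fall back to the existing priority-queue merge otherwise.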

Windows build failed tests

Total Test time (real) = 25.73 sec
The following tests FAILED:
1 - Storage change (Exit code 0xc0000409)
6 - Smart contract with storage (Exit code 0xc0000409)
7 - Maximum call depth (Exit code 0xc0000409)
8 - DELEGATECALL (Exit code 0xc0000409)
14 - BN_MUL (Failed)
15 - SNARKV (Failed)
17 - No refund on error (Exit code 0xc0000409)
18 - Self-destruct (Exit code 0xc0000409)
20 - Empty suicide beneficiary (Exit code 0xc0000409)

Exit code 0xc0000409 is STATUS_STACK_BUFFER_OVERRUN, which the MSVC debug runtime raises via fast-fail for checks such as vector index out of range.

Transaction analytics by transaction replay

This is an introductory project designed to surface some very useful information, and perhaps optimisation opportunities, for Silkworm execution, while at the same time requiring an understanding of the code and some aspects of the Turbo-Geth/Silkworm data model.
Carrying out this task requires the database obtained by fully syncing a Turbo-Geth node, which should be compatible with the silkworm code. This is because the task requires replaying all transactions from the mainnet history. Of course, for most debugging and testing, only a small subset of transactions should be replayed.

Goals

Here are the high-level goals:

  1. Find the number of transactions that fail and consume the entire allotted gas. These are transactions hitting "Out of Gas" errors, an unrecognised instruction, an invalid jump, or an explicit INVALID instruction. Such transactions make no changes to the Ethereum state, except for deducting the ETH (gas price * tx gas limit) from the sender's balance and adding the same amount to the miner's balance.
  2. Think about a data structure (for example, a compressed bitmap) that would allow us to mark the transactions found in 1., so that we can use this data structure as one of the inputs to replay.
  3. Using the data structure from 2., see if there is a benefit in skipping the execution of these transactions entirely, and how much benefit that could bring.
  4. Also using the data structure from 2., see if we can simplify the configuration of the EVM by retrofitting some opcodes that did not exist from the beginning, "pretending" that they existed from the very beginning of Ethereum in 2015. For example, the bit-shifting opcodes, if used prior to the Constantinople hard fork where they were introduced, would produce a failure of the type described in 1.; if we skip those transactions, we can simply pretend that the bit-shifting opcodes were always around. Find all such opcodes and see how much this can simplify the EVM configuration.
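For goal 2, here is a minimal stand-in for the marking structure, packing one flag per global transaction index into 64-bit words; a real implementation would likely use a compressed bitmap such as Roaring (CRoaring is discussed elsewhere in this tracker), and the class name is hypothetical:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Word-packed bitset over global transaction indices: marks
// "this transaction failed consuming all gas, safe to skip on replay".
class TxMarkSet {
  public:
    void mark(std::uint64_t tx_index) {
        std::size_t word = tx_index / 64;
        if (word >= words_.size()) {
            words_.resize(word + 1, 0);
        }
        words_[word] |= std::uint64_t{1} << (tx_index % 64);
    }

    bool is_marked(std::uint64_t tx_index) const {
        std::size_t word = tx_index / 64;
        return word < words_.size() &&
               ((words_[word] >> (tx_index % 64)) & 1) != 0;
    }

  private:
    std::vector<std::uint64_t> words_;
};
```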

Tools

Apart from having the synced database, you will need code that is capable of re-executing any historical transaction at will. Such code exists in Silkworm, in the silkworm/cmd/check_changes.cpp file. This command line utility can be used as a starting point; it accepts parameters that specify the database, as well as the range of historical block numbers for which to replay transactions.

The way this program works is similar to how transaction replay is done in "live" mode. The difference is that instead of reading state items (accounts with their balances and nonces, contract storage items) from the PLAIN-CST2 table, this historical replay uses the combination of the change set tables PLAIN-ACS and PLAIN-SCS, and the history index tables hAT and hST, to read historical state. Unlike "live" execution, it does not modify the database and is totally read-only, so it is very good for repetitive experiments and learning. If you want to dig deeper into how it works, you can refer to the file silkworm/silkworm/db/buffer.cpp, which defines the type Buffer. An instance of this type is passed into the execute function, and this buffer is used to access historical state. Going even deeper, look at the file silkworm/silkworm/db/access_layer.cpp, specifically at the functions read_account and read_storage.

Preprocessing of live transaction to optimistically calculate read and write sets

This is a continuation of #192.

Modify the EVM engine (this might need a custom version of evmone) to change the value stack and semantics of some opcodes.

On the stack, apart from the usual 256-bit values, we can also store an "unknown" value, which can be just a bit flag.

Change the semantics of all opcodes that read or write anything to the state. For example:
for SLOAD, which reads from contract storage, if the location parameter is "unknown" rather than a concrete value, the whole execution aborts.
However, if the location is a concrete value, the opcode doesn't actually read anything, but pushes "unknown" onto the stack instead of the read value.
Similarly, SSTORE, which writes to contract storage, aborts the execution if the location is "unknown"; but if the location is a concrete value, it does nothing (no-op).

And so forth for BALANCE, EXTCODEHASH, and all other opcodes that need access to the state.

In essence, this version of the EVM does not access the state, but it is able to look at a transaction and either compute its "read set" and "write set", or fail. The idea is to first look at historical transactions and see how many of them could have been pre-processed this way.
If many, then the next step is to try to take advantage of such pre-processing: if you know the "read sets" and "write sets", you can try to run some transactions in parallel.

If we find something interesting there, we might create block composition strategies (for mining) that produce better parallelizable blocks.
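The "unknown" stack value and the modified SLOAD semantics might be sketched like this (hypothetical names; 64-bit words stand in for the EVM's 256-bit words for brevity):

```cpp
#include <cstdint>
#include <optional>
#include <stdexcept>
#include <vector>

// Tagged stack word: nullopt means "unknown".
using Word = std::optional<std::uint64_t>;

// Raised when the pre-processing EVM hits an opcode it cannot
// evaluate symbolically (e.g. SLOAD from an unknown location).
struct AbortPreprocessing : std::runtime_error {
    using std::runtime_error::runtime_error;
};

// SLOAD under preprocessing: a concrete location is recorded in the
// read set and "unknown" is pushed instead of the stored value; an
// unknown location aborts the whole pre-execution.
inline void preprocess_sload(std::vector<Word>& stack,
                             std::vector<std::uint64_t>& read_set) {
    Word location = stack.back();
    stack.pop_back();
    if (!location) {
        throw AbortPreprocessing{"SLOAD from unknown location"};
    }
    read_set.push_back(*location);
    stack.push_back(std::nullopt);  // the value read from state is unknown
}
```

SSTORE would be symmetric (record the location in the write set, consume the value, store nothing), and BALANCE/EXTCODEHASH would follow the SLOAD pattern with an address instead of a storage location.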

Port bitmap history DB migration

Port ledgerwatch/erigon#1374

Since it's about historical access, it doesn't affect execution, but does affect cmd/check_changes.

In terms of implementation, find_account_in_history & find_storage_in_history in db/silkworm/db/access_layer probably have to be updated.

Header downloader

This figure shows the overall architecture:
(image: overall architecture)

One component/thread (BlockProvider) is devoted to replying to external requests.
One component (HeaderDownloader) is devoted to chain downloading.

This figure details the HeaderDownloader structure:

(image: HeaderDownloader structure)

State Transition Tests: General and FuzzyVM

State Transition Tests were proposed back in 2017 and finalized in 2021. They are a powerful tool for testing various scenarios of contract execution. Most importantly, the solution is meant to be compatible with all implementations of the EVM.

In 2021, Marius van der Wijden proposed using the same mechanism for fuzz testing. This has been successfully utilized to find bugs in the Geth, Nethermind, and Besu clients.

Suggestion for dbtool

I think it would be a good idea to add user confirmation to silkworm's dbtool for the clear and drop options, and possibly the compact option.

I'm concerned that someone (like me -- but luckily not me) will use them without thinking and destroy existing data.

When I first looked at dbtool, I just assumed it would protect me, but it doesn't.

dbtool --datadir <path> clear

will wipe away data without asking the user to confirm. It's easy enough to just ask for user confirmation with a default of no.

I know this tool isn't intended to be part of any official release, but if it's possible, someone will do it.

Analysis of benefits from turning off gas accounting for historical transactions

This is continuation of #78

This time, we run a slightly modified version of EVM through the historical transactions. These are the modifications (unless I forgot something):

  1. The "Out of Gas" exception is fatal, meaning that it does not just abort the current frame of execution, but fails the entire execution. Transactions that fail in this way are excluded from consideration for this analysis.
  2. The EVM stack now has a new type of value, "unknown". This could be implemented by a separate stack of boolean values, for example.
  3. The GAS opcode pushes the "unknown" value onto the stack instead of the current leftover gas (which is supposed to be inaccessible when gas accounting is turned off).
  4. Any arithmetic operation with an "unknown" value results in an "unknown" value (it is "sticky" in that way).
  5. Any JUMP or JUMPI opcode that attempts to jump to an "unknown" destination fails the execution (and the transaction is excluded from consideration).
  6. Any state reading or writing operation (for example, SLOAD, SSTORE, BALANCE) with an "unknown" address fails the execution.

I might have forgotten some other cases where an "unknown" value must fail the execution; they should be added to the list as we proceed with the analysis.

The goal of the analysis is to understand how many transactions can execute in exactly the same way as they did, but without the EVM performing any gas accounting. Then, construct a data structure that stores the "used gas" of each past transaction, and use that information to augment the gas-free execution so it has exactly the same effect as execution with gas accounting. Finally, measure the potential runtime benefit of turning off gas accounting.

check_changes & execute are broken after migration to MDBX

cmd/check_changes:

INFO [07-16|13:36:44.764]   Checking change sets in /Volumes/NVMe2/mainnet_mdbx_12_8/chaindata
ERROR[07-16|13:36:48.617]  Value mismatch for 05a56e2d52c817161883f50c441c3228cfe54d9f:
030104074faf89ab3cd40e
vs DB

ERROR[07-16|13:36:48.617]  Account change mismatch for block 1 😲

cmd/execute:

INFO [07-14|13:54:22.430]  Blocks <= 1043437 committed
ERROR[07-14|13:54:22.453]  Validation error 26 at block 1043439
ERROR[07-14|13:54:22.453]  Error in silkworm_execute_blocks: kSilkwormInvalidBlock, DB: 0

(for mainnet data)

build issue on windows - std::filesystem not found

When building silkworm on Windows (using VS 2019), I get an error in cmd/genesistool.cpp:

    genesistool.cpp
    The contents of <filesystem> are available only with C++17 or later.
C:\greg\github\silkworm\cmd\genesistool.cpp(27,21): error C2039: 'filesystem': is not a member of 'std' 

which is likely because the /std:c++11 or /std:c++14 option is added somewhere in the CMakeLists, but I can't find where. Any idea?

C++20 and MDBX

Due to this aliasing in MDBX

namespace mdbx {

// Functions whose signature depends on the `mdbx::byte` type
// must be strictly defined as inline!
#if defined(DOXYGEN) || (defined(__cpp_char8_t) && __cpp_char8_t >= 201811)
// Wanna using a non-aliasing type to release more power of an optimizer.
using byte = char8_t;
#else
// Wanna not using std::byte since it doesn't add features,
// but add inconvenient restrictions.
using byte = unsigned char;
#endif /* __cpp_char8_t >= 201811*/

with C++20, mdbx::byte is now a different type: char8_t != unsigned char.

Consequently, the byte_ptr() accessor of slice returns a pointer of a different type, incompatible with our ByteView and with boost::endian.

Suggestion: always use iov_base (which is void*) and cast it to the required type.
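A sketch of the suggested workaround, using a stand-in struct for the iovec-compatible layout underlying mdbx::slice:

```cpp
#include <cstddef>
#include <cstdint>

// Stand-in for the MDBX_val / iovec layout that mdbx::slice wraps.
struct Slice {
    void* iov_base;
    std::size_t iov_len;
};

// Instead of byte_ptr(), whose return type flips between
// unsigned char* and char8_t* depending on __cpp_char8_t, always go
// through the void* iov_base and cast to the type we actually need,
// e.g. uint8_t* for ByteView or boost::endian loads.
inline const std::uint8_t* bytes_of(const Slice& s) {
    return static_cast<const std::uint8_t*>(s.iov_base);
}
```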

Windows build with MinGW fails

Trying to build with MinGW (GNU GCC) on Windows to get uint128 support, I get this error:

[ 13%] Built target leak_check_disable
[ 13%] Building CXX object absl/time/CMakeFiles/time_zone.dir/internal/cctz/src/time_zone_posix.cc.obj
C:\.hunter\_Base\6c9b2bc\156a9f8\fec2e58\Build\abseil\Source\absl\time\internal\cctz\src\time_zone_impl.cc:40:6: error: 'mutex' in namespace 'std' does not name a type
   40 | std::mutex& TimeZoneMutex() {
      |      ^~~~~
C:\.hunter\_Base\6c9b2bc\156a9f8\fec2e58\Build\abseil\Source\absl\time\internal\cctz\src\time_zone_impl.cc:26:1: note: 'std::mutex' is defined in header '<mutex>'; did you forget to '#include <mutex>'?
   25 | #include "time_zone_fixed.h"
  +++ |+#include <mutex>
   26 |
C:\.hunter\_Base\6c9b2bc\156a9f8\fec2e58\Build\abseil\Source\absl\time\internal\cctz\src\time_zone_impl.cc: In static member function 'static bool absl::lts_2020_09_23::time_internal::cctz::time_zone::Impl::LoadTimeZone(const string&, absl::lts_2020_09_23::time_internal::cctz::time_zone*)':
C:\.hunter\_Base\6c9b2bc\156a9f8\fec2e58\Build\abseil\Source\absl\time\internal\cctz\src\time_zone_impl.cc:63:26: error: 'mutex' is not a member of 'std'
   63 |     std::lock_guard<std::mutex> lock(TimeZoneMutex());
      |                          ^~~~~
C:\.hunter\_Base\6c9b2bc\156a9f8\fec2e58\Build\abseil\Source\absl\time\internal\cctz\src\time_zone_impl.cc:63:26: note: 'std::mutex' is defined in header '<mutex>'; did you forget to '#include <mutex>'?
C:\.hunter\_Base\6c9b2bc\156a9f8\fec2e58\Build\abseil\Source\absl\time\internal\cctz\src\time_zone_impl.cc:63:31: error: template argument 1 is invalid
   63 |     std::lock_guard<std::mutex> lock(TimeZoneMutex());
      |                               ^
C:\.hunter\_Base\6c9b2bc\156a9f8\fec2e58\Build\abseil\Source\absl\time\internal\cctz\src\time_zone_impl.cc:63:38: error: 'TimeZoneMutex' was not declared in this scope
   63 |     std::lock_guard<std::mutex> lock(TimeZoneMutex());
      |                                      ^~~~~~~~~~~~~
C:\.hunter\_Base\6c9b2bc\156a9f8\fec2e58\Build\abseil\Source\absl\time\internal\cctz\src\time_zone_impl.cc:63:33: warning: unused variable 'lock' [-Wunused-variable]
   63 |     std::lock_guard<std::mutex> lock(TimeZoneMutex());
      |                                 ^~~~
C:\.hunter\_Base\6c9b2bc\156a9f8\fec2e58\Build\abseil\Source\absl\time\internal\cctz\src\time_zone_impl.cc:77:24: error: 'mutex' is not a member of 'std'
   77 |   std::lock_guard<std::mutex> lock(TimeZoneMutex());
      |                        ^~~~~
C:\.hunter\_Base\6c9b2bc\156a9f8\fec2e58\Build\abseil\Source\absl\time\internal\cctz\src\time_zone_impl.cc:77:24: note: 'std::mutex' is defined in header '<mutex>'; did you forget to '#include <mutex>'?
C:\.hunter\_Base\6c9b2bc\156a9f8\fec2e58\Build\abseil\Source\absl\time\internal\cctz\src\time_zone_impl.cc:77:29: error: template argument 1 is invalid
   77 |   std::lock_guard<std::mutex> lock(TimeZoneMutex());
      |                             ^
C:\.hunter\_Base\6c9b2bc\156a9f8\fec2e58\Build\abseil\Source\absl\time\internal\cctz\src\time_zone_impl.cc:77:36: error: 'TimeZoneMutex' was not declared in this scope
   77 |   std::lock_guard<std::mutex> lock(TimeZoneMutex());
      |                                    ^~~~~~~~~~~~~
C:\.hunter\_Base\6c9b2bc\156a9f8\fec2e58\Build\abseil\Source\absl\time\internal\cctz\src\time_zone_impl.cc:77:31: warning: unused variable 'lock' [-Wunused-variable]
   77 |   std::lock_guard<std::mutex> lock(TimeZoneMutex());
      |                               ^~~~
C:\.hunter\_Base\6c9b2bc\156a9f8\fec2e58\Build\abseil\Source\absl\time\internal\cctz\src\time_zone_impl.cc: In static member function 'static void absl::lts_2020_09_23::time_internal::cctz::time_zone::Impl::ClearTimeZoneMapTestOnly()':
C:\.hunter\_Base\6c9b2bc\156a9f8\fec2e58\Build\abseil\Source\absl\time\internal\cctz\src\time_zone_impl.cc:88:24: error: 'mutex' is not a member of 'std'
   88 |   std::lock_guard<std::mutex> lock(TimeZoneMutex());
      |                        ^~~~~
C:\.hunter\_Base\6c9b2bc\156a9f8\fec2e58\Build\abseil\Source\absl\time\internal\cctz\src\time_zone_impl.cc:88:24: note: 'std::mutex' is defined in header '<mutex>'; did you forget to '#include <mutex>'?
C:\.hunter\_Base\6c9b2bc\156a9f8\fec2e58\Build\abseil\Source\absl\time\internal\cctz\src\time_zone_impl.cc:88:29: error: template argument 1 is invalid
   88 |   std::lock_guard<std::mutex> lock(TimeZoneMutex());
      |                             ^
C:\.hunter\_Base\6c9b2bc\156a9f8\fec2e58\Build\abseil\Source\absl\time\internal\cctz\src\time_zone_impl.cc:88:36: error: 'TimeZoneMutex' was not declared in this scope
   88 |   std::lock_guard<std::mutex> lock(TimeZoneMutex());
      |                                    ^~~~~~~~~~~~~
C:\.hunter\_Base\6c9b2bc\156a9f8\fec2e58\Build\abseil\Source\absl\time\internal\cctz\src\time_zone_impl.cc:88:31: warning: unused variable 'lock' [-Wunused-variable]
   88 |   std::lock_guard<std::mutex> lock(TimeZoneMutex());
      |                               ^~~~
make[5]: *** [absl\time\CMakeFiles\time_zone.dir\build.make:121: absl/time/CMakeFiles/time_zone.dir/internal/cctz/src/time_zone_impl.cc.obj] Error 1
make[5]: *** Waiting for unfinished jobs....

See also abseil/abseil-cpp#12

AppVeyor builds are slow, especially when changing Hunter dependencies

Either fix our AppVeyor or move to CircleCI or GitHub Actions.

Rationale for GitHub Actions: potentially faster and cheaper builds.

Potentially we also want to test on macOS and on amd64.

Might be related to Issue #359.

Also, we should fix the following problem with AppVeyor. It caches Hunter builds, but we invalidate the cache if our Hunter config changes. This is the right thing to do, of course. However, as a consequence, if one branch changes the Hunter dependencies (e.g. ethash) and others don't, AppVeyor starts the Hunter build from scratch when switching branches, and builds take 2 hours. CircleCI doesn't have this problem because it supports multiple caches with different keys (and Hunter dependencies are hashed into the key).

Collector file naming may clash on two instances

If the collector is instantiated with a specific working directory, all flushed buffer files are named sequentially 1, 2, 3, etc.
Should two collector instances point to the same directory, their file names overlap.

This does not happen when the collector is instantiated without a working directory: in that case set_working_path ensures uniqueness by using the temp path plus a unique name.
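One possible fix, sketched with hypothetical names: include a per-instance id in every flushed file name, so two collectors sharing a directory can never collide even though each still numbers its files sequentially.

```cpp
#include <atomic>
#include <string>

// Each collector instance gets a process-unique id at construction;
// file names combine that id with the per-instance sequence number.
class CollectorNames {
  public:
    CollectorNames() : instance_id_(next_instance_++) {}

    std::string next_file_name() {
        return "etl-" + std::to_string(instance_id_) + "-" +
               std::to_string(file_counter_++) + ".bin";
    }

  private:
    inline static std::atomic<unsigned> next_instance_{0};
    unsigned instance_id_;
    unsigned file_counter_{1};
};
```

This covers collectors within one process; clashes between separate processes would still need a pid or random token in the name, as the temp-path fallback already does.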

Don't throw error codes, throw exceptions

cc @yperbasis for thoughts.
Agreed, and that's why we shouldn't do that IMHO.
A thrown exception can be caught by a try-catch block, by inheritance, as it propagates up the stack.
This example will leave the exception unhandled:

enum class [[nodiscard]] DecodingResult{
    kOk = 0,
    kOverflow,
    kLeadingZero,
    kInputTooShort,
    kNonCanonicalSingleByte,
    kNonCanonicalSize,
    kUnexpectedLength,
    kUnexpectedString,
    kUnexpectedList,
    kListLengthMismatch,
    kUnsupportedEip2718Type,
};

try {
   throw DecodingResult::kUnexpectedLength;
} catch (const std::exception& ex) {
  // Whatever to recover or log the error
}

because DecodingResult::kUnexpectedLength does not inherit from std::exception.

It could only work if the catch is modified as follows

try {
   throw DecodingResult::kUnexpectedLength;
} catch (...) {
  // Whatever to recover or log the error
  // but we actually don't know what exactly happened
}

or

try {
   throw DecodingResult::kUnexpectedLength;
} catch (const DecodingResult& ex) {
  // Whatever to recover or log the error
  // but we actually need to catch a specific type
}

My suggestion is that thrown exceptions always inherit from std::exception, because at higher-level call sites we don't know which kind of exception might be thrown at lower levels (unless it is perfectly documented).

Originally posted by @AndreaLanfranchi in #197 (comment)
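A minimal sketch of the suggested shape: a decoding exception that reaches std::exception through std::runtime_error, so a generic handler higher up catches it without knowing anything about RLP (the class name is hypothetical):

```cpp
#include <stdexcept>
#include <string>

// Inherits std::exception via std::runtime_error, so it is caught by
// any catch (const std::exception&) block up the call stack.
class DecodingError : public std::runtime_error {
  public:
    explicit DecodingError(const std::string& what)
        : std::runtime_error("rlp decoding: " + what) {}
};
```

With this, the first try-catch from the example above works as intended, since the catch clause no longer needs to name the low-level error type.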

Ethereum Consensus Tests

Run and pass Ethereum Consensus Tests.

Current status:

  • BasicTests (difficulty tests) - pass
  • BlockchainTests / GeneralStateTests - pass
  • BlockchainTests / InvalidBlocks - fail
  • BlockchainTests / TransitionTests - pass
  • BlockchainTests / ValidBlocks - pass
  • TransactionTests - pass
  • RLPTests - not integrated

Ethash PoW block validation

As a prerequisite, we should probably restore the ethash Hunter package. I removed it in PR #115 because it was not compatible with Wasm. Ideally we should find a way to make the ethash package Wasm-compatible, perhaps using some macros/options, and then revert PR #115.

There are a couple of consensus tests that check Ethash PoW: wrongMixHash & wrongNonce. Unfortunately, they were removed in the latest release.
