
blockchain-core's People

Contributors

abhay, allenan, amirhaleem, andymck, benoitduffez, chadbrading, ci-work, dpezely, evanmcc, fvasquez, jadeallenx, jaykickliter, jeffgrunewald, joecaswell, ke6jjj, lthiery, macpie, madninja, mfalkvidd, michaeldjeffrey, mikev, paulvmo, syuan100, tylerferrara, vagabond, vihu, xandkar


blockchain-core's Issues

Problem with state channel DC consumption

Hello, I have an issue on my router ID: 11awcuSbVURPkXX3FbKC7KF6bgEPRZqqPzv1FTEYABMLttUr13E

On March 24th, after the two state channels were created, my DC balance was 2956372.
On March 28th one of the state channels expired and a new one was created; 202 DC had been consumed on that state channel.

The state channel configuration has 50K DC blocked per SC.

My new balance is 2823340 DC.

The delta is 133032 DC, which I can't explain.

I was expecting it to be 2956372 + (50000 - 202) - 35000 - 50000 = 2921170, but it is 2823340; the difference is 97830.

No new devices were registered during this period (no xor transactions).
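
For reference, the arithmetic behind my expectation, as an Erlang shell session (my assumption about the accounting: the expired SC refunds its 50K escrow minus the 202 DC used, then the new SC costs the 35000 DC open fee plus a fresh 50K escrow):

%% Plain restatement of the expectation above.
1> Expected = 2956372 + (50000 - 202) - 35000 - 50000.
2921170
2> Expected - 2823340.
97830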

Fix build status

Hard to tell if the current code is ready for deployment when the build fails.

Support an "auxiliary ledger" to be able to quantify the result of consensus changes

Optionally run a second ledger that has special rules for absorbing txns, so that we can test different chain var values or behavior over some period of time. We could add a magic check like case blockchain_ledger_v1:is_auxiliary and run conditional code (or set a different chain var on the auxiliary ledger).

We'd absorb blocks/txns into both ledgers, but we'd only consult the main ledger. At the end of the run or the test we could compare the ledgers against each other to see what differs. This would allow for easier validation of new or proposed changes before actually making the change live.
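
A minimal sketch of the dual-absorb flow (absorb_txn/2 and the surrounding shape are hypothetical; is_auxiliary is the proposed check from above):

%% Sketch: absorb each txn into both ledgers; reads only ever hit the main
%% ledger. absorb_txn/2 is a hypothetical helper, not the core API.
absorb_block_dual(Block, MainLedger, AuxLedger) ->
    Txns = blockchain_block:transactions(Block),
    lists:foreach(
      fun(Txn) ->
              ok = absorb_txn(Txn, MainLedger),
              %% the aux ledger can apply different chain var values,
              %% gated on the proposed blockchain_ledger_v1:is_auxiliary/1
              ok = absorb_txn(Txn, AuxLedger)
      end,
      Txns).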

Expose why a witness is invalid

Expose why a witness is invalid when doing witness validity checks for a poc receipt txn; this would help the ETL use that information and surface it to various UIs.
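
One possible shape (names hypothetical, not the current core API): have the witness check return a tagged reason instead of a bare boolean, so the ETL can persist and surface it:

%% Hypothetical sketch: tag each invalid witness with the reason it failed.
-spec check_witness(term()) -> valid | {invalid, Reason :: atom()}.
check_witness(Witness) ->
    case valid_witness_rssi(Witness) of
        false -> {invalid, witness_rssi_too_high};
        true ->
            case valid_witness_distance(Witness) of
                false -> {invalid, witness_too_close};
                true  -> valid
            end
    end.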

Confusing usage of "share"

Hi there,

I'm a bit new here and by no means an erlang expert.
As I'm reading the implementation of the reward units, I think it is a bit confusing that you mix "shares" (%) and "amounts" (#) in the two snippets below:

ShareOfDCRemainder = share_of_dc_rewards(poc_witnesses_percent, Vars),
WitnessesReward = (EpochReward * PocWitnessesPercent) + ShareOfDCRemainder,

share_of_dc_rewards(_Key, #{dc_remainder := 0}) ->
    0;
share_of_dc_rewards(Key, Vars = #{dc_remainder := DCRemainder}) ->
    %% proportional split of the DC remainder across the three PoC groups
    erlang:round(DCRemainder
                 * (maps:get(Key, Vars) /
                    (maps:get(poc_challengers_percent, Vars)
                     + maps:get(poc_challengees_percent, Vars)
                     + maps:get(poc_witnesses_percent, Vars)))).

I believe the best way to clarify it is to rename ShareOfDCRemainder to DCRemainder.
If it actually is a share, then it should be multiplied by EpochReward; it doesn't make sense to add HNT and percentages.

/MZ

Corrupt checkpoint crash loop

2021-06-18 19:06:15 =SUPERVISOR REPORT====
Supervisor: {local,blockchain_sup}
Context: start_error
Reason: {{badmatch,{error,{error,"Corruption: block checksum mismatch: stored = 1520935999, computed = 1200219786 in /var/data/checkpoints/885474/ledger.db-1624043170714480180/ledger.db/1521297.sst offset 0 size 3961"}}},[{blockchain_ledger_v1,commit_context,1,[{file,"blockchain_ledger_v1.erl"},{line,619}]},{blockchain_ledger_v1,context_snapshot,1,[{file,"blockchain_ledger_v1.erl"},{line,752}]},{blockchain,'-fold_blocks/5-fun-1-',4,[{file,"blockchain.erl"},{line,534}]},{lists,foldl,3,[{file,"lists.erl"},{line,1263}]},{blockchain,ledger_at,3,[{file,"blockchain.erl"},{line,484}]},{blockchain,load,2,[{file,"blockchain.erl"},{line,1828}]},{blockchain,new,4,[{file,"blockchain.erl"},{line,160}]},{blockchain_worker,load_chain,3,[{file,"blockchain_worker.erl"},{line,1056}]}]}
Offender: [{pid,undefined},{id,blockchain_worker},{mfargs,{blockchain_worker,start_link,[[{port,44158},{base_dir,"/var/data"},{update_dir,"/opt/miner/update"}]]}},{restart_type,permanent},{shutdown,5000},{child_type,worker}]

2021-06-18 19:06:15 =CRASH REPORT====
crasher:
initial call: blockchain_worker:init/1
pid: <0.1263.0>
registered_name: []
exception error: {{badmatch,{error,{error,"Corruption: block checksum mismatch: stored = 1520935999, computed = 1200219786 in /var/data/checkpoints/885474/ledger.db-1624043170714480180/ledger.db/1521297.sst offset 0 size 3961"}}},[{blockchain_ledger_v1,commit_context,1,[{file,"blockchain_ledger_v1.erl"},{line,619}]},{blockchain_ledger_v1,context_snapshot,1,[{file,"blockchain_ledger_v1.erl"},{line,752}]},{blockchain,'-fold_blocks/5-fun-1-',4,[{file,"blockchain.erl"},{line,534}]},{lists,foldl,3,[{file,"lists.erl"},{line,1263}]},{blockchain,ledger_at,3,[{file,"blockchain.erl"},{line,484}]},{blockchain,load,2,[{file,"blockchain.erl"},{line,1828}]},{blockchain,new,4,[{file,"blockchain.erl"},{line,160}]},{blockchain_worker,load_chain,3,[{file,"blockchain_worker.erl"},{line,1056}]}]}
ancestors: [blockchain_sup,miner_critical_sup,miner_sup,<0.1219.0>]
message_queue_len: 1
messages: [{'$gen_call',{<0.1262.0>,#Ref<0.363591680.1890058241.13192>},blockchain}]
links: [<0.1225.0>]
dictionary: []
trap_exit: false
status: running
heap_size: 196650
stack_size: 27
reductions: 8243250
neighbours:
2021-06-18 19:06:15 =SUPERVISOR REPORT====
Supervisor: {local,miner_critical_sup}
Context: start_error
Reason: {shutdown,{failed_to_start_child,blockchain_worker,{{badmatch,{error,{error,"Corruption: block checksum mismatch: stored = 1520935999, computed = 1200219786 in /var/data/checkpoints/885474/ledger.db-1624043170714480180/ledger.db/1521297.sst offset 0 size 3961"}}},[{blockchain_ledger_v1,commit_context,1,[{file,"blockchain_ledger_v1.erl"},{line,619}]},{blockchain_ledger_v1,context_snapshot,1,[{file,"blockchain_ledger_v1.erl"},{line,752}]},{blockchain,'-fold_blocks/5-fun-1-',4,[{file,"blockchain.erl"},{line,534}]},{lists,foldl,3,[{file,"lists.erl"},{line,1263}]},{blockchain,ledger_at,3,[{file,"blockchain.erl"},{line,484}]},{blockchain,load,2,[{file,"blockchain.erl"},{line,1828}]},{blockchain,new,4,[{file,"blockchain.erl"},{line,160}]},{blockchain_worker,load_chain,3,[{file,"blockchain_worker.erl"},{line,1056}]}]}}}
Offender: [{pid,undefined},{id,blockchain_sup},{mfargs,{blockchain_sup,start_link,[[{key,{{ecc_compact,{{'ECPoint',<<4,39,238,209,234,154,123,23,83,246,202,148,2,93,196,13,5,116,162,201,254,218,44,244,248,62,143,175,24,68,121,198,206,16,178,101,44,246,177,231,42,69,18,94,87,55,125,184,224,224,9,166,238,92,175,114,26,90,109,144,232,224,140,40,253>>},{namedCurve,{1,2,840,10045,3,1,7}}}},#Fun<miner_keys.1.35972986>,#Fun<miner_keys.0.35972986>}},{seed_nodes,["/ip4/35.166.211.46/tcp/2154","/ip4/44.236.95.167/tcp/2154","/ip4/3.248.105.103/tcp/2154","/ip4/54.195.15.233/tcp/443","/ip4/44.237.171.231/tcp/443","/ip4/35.155.234.98/tcp/443","/ip4/44.240.226.82/tcp/2154"]},{max_inbound_connections,6},{port,44158},{num_consensus_members,16},{base_dir,"/var/data"},{update_dir,"/opt/miner/update"},{group_delete_predicate,fun miner_consensus_mgr:group_predicate/1}]]}},{restart_type,permanent},{shutdown,infinity},{child_type,supervisor}]

2021-06-18 19:06:15 =SUPERVISOR REPORT====
Supervisor: {local,miner_sup}
Context: start_error
Reason: {shutdown,{failed_to_start_child,blockchain_sup,{shutdown,{failed_to_start_child,blockchain_worker,{{badmatch,{error,{error,"Corruption: block checksum mismatch: stored = 1520935999, computed = 1200219786 in /var/data/checkpoints/885474/ledger.db-1624043170714480180/ledger.db/1521297.sst offset 0 size 3961"}}},[{blockchain_ledger_v1,commit_context,1,[{file,"blockchain_ledger_v1.erl"},{line,619}]},{blockchain_ledger_v1,context_snapshot,1,[{file,"blockchain_ledger_v1.erl"},{line,752}]},{blockchain,'-fold_blocks/5-fun-1-',4,[{file,"blockchain.erl"},{line,534}]},{lists,foldl,3,[{file,"lists.erl"},{line,1263}]},{blockchain,ledger_at,3,[{file,"blockchain.erl"},{line,484}]},{blockchain,load,2,[{file,"blockchain.erl"},{line,1828}]},{blockchain,new,4,[{file,"blockchain.erl"},{line,160}]},{blockchain_worker,load_chain,3,[{file,"blockchain_worker.erl"},{line,1056}]}]}}}}}

Unable to sync block 173564

Miner gets stuck at block 173563 when attempting to complete a full sync (honor_quick_sync = false). The following error is logged while attempting to validate a transaction in block 173563. This occurs on miner-amd64_2021.05.29.0_GA running in Docker and was reproduced on a second instance.

2021-06-01 13:47:40.819 7 [warning] <0.5745.47>@blockchain_txn:separate_res:352 invalid txn blockchain_txn_poc_receipts_v1 : {error,receipt_not_in_order} / type=poc_receipts_v1 hash="1ndJnUZ47Mb9zpqQoF2xtfLFRcKkRgPnLU3NkFz9Z7Kz1qo2uc" challenger="curly-porcelain-antelope" onion="12Z7E1Ci26RoYy1HVQg4axzEYEknDNrWziDUCMAz4G8vBY9pUQk" path:
2021-06-01 13:47:26.813 7 [error] <0.5283.47>@blockchain_txn_poc_receipts_v1:validate:1021 receipt not in order

v2 packet routing between light gateways and router over GRPC

Provide GRPC APIs via which light gateways can submit packets to the router through the existing state channel flows. The APIs will need to let light gateways use all of the functionality provided by the existing blockchain_state_channel_client.

Out of Memory error or blockchain sync stuck

If your miner was added recently and is stuck at 99*,***, it may be caused by a snapshot issue which already has a fix. You can try restarting your Miner to trigger an update. But at the moment the issue is that this is not working anymore!

If it’s not 99****, it may be caused by another issue: #910. An Out of Memory error caused a failure when loading the newest snapshot (which helps your miner sync faster). The Miner uses about 200MB of memory when executing normal tasks; there was a sudden increase in memory usage when loading a big snapshot file, which caused the crash. A new OTA has been released; keep your miner online to receive the newest OTA update. If it’s not working as expected, try restarting your Miner. If your Miner keeps crashing (green, yellow, and red lights switching in a circle) and a restart doesn't fix it, contact your hotspot maker.

But even when the sync issue is fixed, the hotspot still does not do any work. Many Bobcat miners are affected.

RAK hotspots keep syncing for over 4 days

I own 5 RAK hotspots that had been online for about 4 months until the 18th of August, when all of a sudden 4 of the 5 hotspots jumped into syncing and won't fully sync to the blockchain.

I have already done the following steps:

-Port forwarding is open
-The Ethernet cable is plugged in
-The device and router have been power-cycled about 10 times

No earnings and no beacons; however, inbound and outbound are green. Could you please help me?

Explore using a series of rocksdb checkpoints instead of having a lagging ledger

It's possible that moving to a series of checkpoint ledgers, instead of having one double ledger with a number of intermediate states, would be more efficient and safer (since we wouldn't have to recompute the intermediates on startup). It would definitely lead to less complicated ledger code.

It's also possible that this would use a ton more memory and disk, since each one would be a separate rocks instance, or would hurt performance in some unanticipated way.
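
For reference, a minimal sketch of the checkpoint approach, assuming erlang-rocksdb's rocksdb:checkpoint/2 and hypothetical naming for everything else:

%% Create a point-in-time checkpoint of the ledger DB every Interval
%% blocks. rocksdb checkpoints hard-link SST files where possible, so each
%% one is cheaper than a full copy (though still a separate rocks instance
%% once opened, per the concern above).
maybe_checkpoint(DB, Height, Interval, BaseDir) when Height rem Interval =:= 0 ->
    Path = filename:join(BaseDir, "checkpoint-" ++ integer_to_list(Height)),
    rocksdb:checkpoint(DB, Path);
maybe_checkpoint(_DB, _Height, _Interval, _BaseDir) ->
    ok.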

Supply latest snapshot block along with all election blocks back to the requested snapshot

When a peer asks us for a snapshot, if we have a newer snapshot (and we still have the requested previous snapshot), we could provide the requested snapshot, all the intermediate election blocks, and the latest snapshot block. This would allow us to stop bumping the 'blessed' snapshot as regularly.

The receiving node would load the blessed snapshot, then it could validate all the election blocks (which describe the transition to new consensus members) and the snapshot block, which contains the hash of the latest snapshot (which is signed by the latest consensus group). The node can then request that latest snapshot hash from a peer and load that.
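
A sketch of the receiving node's flow under this proposal (all helper names hypothetical):

%% Hypothetical client-side flow for the proposal above.
sync_via_blessed(BlessedSnap, ElectionBlocks, LatestSnapBlock) ->
    Ledger0 = load_snapshot(BlessedSnap),
    %% each election block describes the transition to the next consensus
    %% group, so the signature chain can be verified link by link
    Ledger1 = lists:foldl(fun validate_election_block/2, Ledger0, ElectionBlocks),
    %% the snapshot block is signed by the latest consensus group and
    %% carries the hash of the latest snapshot
    ok = validate_snapshot_block(LatestSnapBlock, Ledger1),
    fetch_and_load_snapshot(snapshot_hash(LatestSnapBlock)).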

hip17_res_11, hip17_res_12 incorrect values

hip17_res_11 is currently [2, 100000, 100000] (N, density_target, density_max)

according to the algo:

[image: HIP17 density limit algorithm]

Given any valid value of n, the limit = min(>100000, 100000), i.e. always 100000.

consequently at res_11, scale = 100000/unclipped hex density

the same is true for res_12, but since res 13 is never used it can't be encountered.

propose hip17_res_11 (and _12) be changed to [2, 1, 1],

or alternatively [1, 1, 7], so that the limit is in the range 1..7, which I think would make clipped and unclipped equal, so the reward scale would always be calculated as 1 at this level.

edit: non-critical issue unless the res consideration changes

fail to download snapshot from s3

It has been observed on some hotspots that the miner fails to download the blessed snapshot from S3; the temporary scratch file snap-*.scratch keeps growing, up to a size far larger than the actual snapshot (~130MB). This issue leaves the hotspot stuck on chain syncing.

2021-08-07 07:44:26.391 160 [error] <0.9550.0>@blockchain_worker:start_snapshot_sync:949 snapshot download or loading failed because {error,timeout}: [{blockchain_worker,do_s3_download,2,[{file,"blockchain_worker.erl"},{line,1034}]},{blockchain_worker,'-start_snapshot_sync/4-fun-0-',6,[{file,"blockchain_worker.erl"},{line,936}]}]
2021-08-07 07:44:26.392 160 [info] <0.9550.0>@blockchain_worker:attempt_fetch_p2p_snapshot:955 attempting snapshot sync with "/p2p/11uvSUGnmwtQ6uB1grfrdgrDZnFugrX64oEqJUvWxjAQC9WDJtb"
2021-08-07 07:44:31.468 160 [info] <0.1481.0>@blockchain_worker:handle_info:601 snapshot sync down reason normal
2021-08-07 07:44:31.473 160 [info] <0.1481.0>@blockchain_worker:snapshot_sync:741 snapshot_sync starting <0.11069.0> #Ref<0.169456592.1972371457.214293>
2021-08-07 07:44:31.476 160 [info] <0.11069.0>@blockchain_worker:do_s3_download:1024 Attempting snapshot download from "https://snapshots.helium.wtf/mainnet/snap-953281", writing to scratch file "/var/data/snap/snap-953281.scratch"
...


/opt/miner # ls -lh /var/data/snap
total 601M   
-rw-r--r--    1 root     root      600.6M Aug  7 07:59 snap-953281.scratch

For this example, the scratch file has already reached 600MB+.

Attached is the result of grepping start_snapshot_sync from the recent log of one hotspot:
miner-fail-to-download-snapshot.log
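
A minimal mitigation sketch (hypothetical helper; the real fix would belong in the download loop in blockchain_worker): give up once the scratch file outgrows a sane bound instead of growing without limit:

%% Hypothetical guard: abort when the scratch file exceeds MaxBytes
%% (e.g. a small multiple of the expected ~130MB snapshot size).
scratch_guard(ScratchPath, MaxBytes) ->
    case filelib:file_size(ScratchPath) of
        Size when Size > MaxBytes -> {error, scratch_too_large};
        _ -> ok
    end.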

Consider switching from xor to fuse filter

Hello,

I noticed that this repo is using the exor_filter. Recently, the successor library was released: the efuse_filter. Fuse filters are measurably smaller and faster than xor filters. Only fuse8 is supported; there is no xor16 equivalent. However, I believe fuse8 has an even lower false positive rate than xor16, though I am not 100% sure of that, as the research paper has yet to be released. It also does not support a custom hashing function yet, but that can be added.

Support for adding light gateways

Now that most of the groundwork has been laid for light gateways, it's time to add a way to add them to the chain.

For simplicity the plan is to re-use the existing add_gateway txn, but if the 'payer' is not one of the accounts blessed in the staking_keys chain variable, we will add the gateway as a light gateway. Light gateways will only cost $20 to onboard, not $40, and will initially only be able to forward device packets (a sketch of the payer check follows the checklist below).

A chain var will be added to allow the capabilities of light gateways to be ratcheted forwards as we build out more functionality (witnessing/being challenged, etc).

  • Add field to ledger gateway to track if a gateway is full or light
  • Check payer against staking_keys for add_gateway txn and flag gateway as light and lower fee if the payer is not a staking key
  • Check the gateway type is full for challenging/being challenged/witnessing/etc
  • Add chain var we can check for enabling further light client capabilities (could be a bitmask)
  • Update ETL to track gateway type
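
A minimal sketch of the payer check described above (module and function names hypothetical):

%% Hypothetical sketch of the proposed payer check: a payer that is not in
%% the staking_keys chain var onboards the gateway as a light gateway.
-module(gateway_mode).
-export([mode_for_payer/2]).

-spec mode_for_payer(binary(), [binary()]) -> full | light.
mode_for_payer(Payer, StakingKeys) ->
    case lists:member(Payer, StakingKeys) of
        true  -> full;  %% blessed staking key: full gateway
        false -> light  %% anyone else: light gateway, lower fee
    end.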

Investigate why unclipped density had missing location key

Error excerpt from blockchain-etl

3}],[]},{blockchain_hex,'-calculate_scale/4-fun-0-',5,[{file,"/opt/blockchain-etl/_build/default/lib/blockchain/src/blockchain_hex.erl"},{line,166}]},{lists,foldl,3,[{file,"lists.erl"},{line,1263}]},{blockchain_hex,scale,4,[{file,"/opt/blockchain-etl/_build/default/lib/blockchain/src/blockchain_hex.erl"},{line,65}]},{blockchain_hex,scale,2,[{file,"/opt/blockchain-etl/_build/default/lib/blockchain/src/blockchain_hex.erl"},{line,47}]},{be_db_gateway,maybe_reward_scale,2,[{file,"/opt/blockchain-etl/src/be_db_gateway.erl"},{line,285}]},{be_db_gateway,mk_gateway,2,[{file,"/opt/blockchain-etl/src/be_db_gateway.erl"},{line,277}]},{be_db_gateway,mk_gateway_hash,2,[{file,"/opt/blockchain-etl/src/be_db_gateway.erl"},{line,158}]}]},{gen_server,call,[<0.6058.3>,{with_transaction,#Fun<be_db_follower.1.29441482>},infinity]}}
Offender: [{pid,<0.1342.0>},{id,db_follower},{mfargs,{blockchain_follower,start_link,[[{follower_module,{be_db_follower,[{base_dir,"data"}]}}]]}},{restart_type,permanent},{shutdown,5000},{child_type,worker}]

Exclude dataonly hotspots from reward/tx scale

Currently, dataonly hotspots (which also receive some data transfer rewards) are being included in reward scale calculations. However, since dataonly hotspots do not participate in proof of coverage, they should not be considered for reward scaling and thus should not be assigned a tx scale.

Example: https://api.helium.io/v1/hotspots/112oTtA4B9bULBWM2TnH84C8Y5csoQNcmmPg2DigGvJ24j5Nd6zE. This dataonly hotspot appears to be using a miner (not gateway-rs) and has received rewards for data transfer. It has a reward scale.
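
A minimal sketch of the proposed exclusion (the mode accessor name here is an assumption, not verified against the current ledger record API):

%% Skip dataonly gateways when assigning reward/tx scale
%% (blockchain_ledger_gateway_v2:mode/1 is assumed here).
eligible_for_scale(GwInfo) ->
    blockchain_ledger_gateway_v2:mode(GwInfo) =/= dataonly.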

Bit

%% -- erlang --
{cover_enabled, true}.
{cover_opts, [verbose]}.
{cover_excl_mods, [
    blockchain_txn_handler,
    blockchain_poc_target_v2, % obsolete
    blockchain_poc_path_v2,   % obsolete
    blockchain_poc_path_v3,   % obsolete

    %% cli stuff
    blockchain_console,
    blockchain_cli_ledger,
    blockchain_cli_peer,
    blockchain_cli_txn,
    blockchain_cli_sc,

    %% test stuff
    blockchain_worker_meck_original,
    blockchain_event_meck_original
]}.
{cover_export_enabled, true}.
{covertool, [{coverdata_files, [
    "ct.coverdata",
    "eunit.coverdata"
]}]}.

{deps, [
    {lager, "3.9.1"},
    {erl_base58, "0.0.1"},
    {base64url, "1.0.1"},
    {libp2p, ".*", {git, "https://github.com/helium/erlang-libp2p.git", {branch, "master"}}},
    {clique, ".*", {git, "https://github.com/helium/clique.git", {branch, "develop"}}},
    {h3, ".*", {git, "https://github.com/helium/erlang-h3.git", {branch, "master"}}},
    {erl_angry_purple_tiger, ".*", {git, "https://github.com/helium/erl_angry_purple_tiger.git", {branch, "master"}}},
    {erlang_stats, ".*", {git, "https://github.com/helium/erlang-stats.git", {branch, "master"}}},
    {e2qc, ".*", {git, "https://github.com/helium/e2qc", {branch, "master"}}},
    {vincenty, ".*", {git, "https://github.com/helium/vincenty", {branch, "master"}}},
    {helium_proto, {git, "https://github.com/helium/proto.git", {branch, "master"}}},
    {xxhash, {git, "https://github.com/pierreis/erlang-xxhash", {branch, "master"}}},
    {exor_filter, ".*", {git, "https://github.com/mpope9/exor_filter", {branch, "master"}}},
    {erbloom, {git, "https://github.com/Vagabond/erbloom", {branch, "master"}}}
]}.

{erl_opts, [
debug_info,
{parse_transform, lager_transform},
{i, "./_build/default/plugins/gpb/include"},
warnings_as_errors
]}.

{plugins,
[
covertool,
{rebar3_eqc, {git, "https://github.com/Vagabond/rebar3-eqc-plugin", {branch, "master"}}}
]}.

{xref_checks, [
undefined_function_calls,
undefined_functions
%% deprecated_function_calls,
%% deprecated_functions
]}.

{profiles, [
{test, [
{deps, [{meck, "0.8.12"}]}
]},
{eqc, [
{erl_opts, [{d, 'TEST'}]},
{src_dirs, ["test", "src"]},
%% {cover_enabled, false},
{deps, [{meck, "0.8.12"}]}
]}
]}.

Break apart HIP-10 PoC rewards redistribution as a separate rewards type

After the implementation of HIP-10, the remainder of the DC rewards pool is given back to PoC participants during epochs with unspent rewards. Because these bonus amounts are not broken out separately, PoC rewards vary dramatically, which prompts frequent questions in the community.

Proposal: Implement a new set of reward types and treat them as separate "bonus" categories. We may not be able to backfill this data into etl but at the very least, we can account for this going forward. This will need appropriate planning for mobile app changes.

Provide ability to stake multiple validators in single transaction

A core part of our validator go-to-market strategy is working with staking providers (such as Bison Trails).

Currently, staking multiple validators requires multiple transactions. This is inefficient.

Similar to how multi-payments are batched and sent in a single transaction, provide the ability to stake multiple validators in a single transaction.
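
Roughly what the batched txn could look like (a sketch only; the record and field names are hypothetical, by analogy with payment_v2):

%% Hypothetical batched-stake txn shape: one owner, many validators,
%% one signature, analogous to payment_v2's batched payments.
-record(txn_stake_validators_v1, {
    owner           :: binary(),
    validators      :: [{Address :: binary(), Stake :: pos_integer()}],
    fee             :: non_neg_integer(),
    owner_signature :: binary()
}).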

Denormalize (some?) gateway record fields into new CF

The current state of things is that for any bit of gateway data, we need to pull the entire record off the disk, even if we just need something like the location, which is just a few bytes. This is pretty grim and cache-unfriendly on the Pis. The idea here would be to pull a few fields (probably location, witness addresses, and score-relevant fields at first) into a new column family where they could be directly addressed, making it a lot cheaper to consult them.

This is a cache, and could be read-through. It does not need to go into the snapshots.
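
A sketch of the write path for such a cache (column family and field choice hypothetical, using erlang-rocksdb's rocksdb:put/5 with a column family handle):

%% Write a slim, directly-addressable record for the hot fields so reads
%% don't have to deserialize the whole gateway record (names hypothetical).
put_hot_fields(DB, HotCF, GwAddr, Location, WitnessAddrs) ->
    Slim = term_to_binary({Location, WitnessAddrs}),
    rocksdb:put(DB, HotCF, GwAddr, Slim, []).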

Oracle Update

The DeWi has solicited and verified a new group of price oracle submitters that we would like to add on chain:

  • 13ZGgWX4Ajz9g9t3tM9ohDyso12o2E2CpYbMF2RBaT93rj7souE
  • 1489qpKWAoLrURcaQEM1wJEViD4mk9WcqZMGhiTFfNGmaz8NFdX
  • 14hntpRicek9pxzHBDPVWPwYHHmExrksaxzAsTjjstgLfnfG5Ve
  • 14aQaRARuwLTLHLygiDNNapKZ7bcLSyXhHq9DeZ5kB2dnCxiiKv

We'd like to remove the following oracles for inactivity:

  • 14t33QjopqCUVr8FXG4sr58FTu5HnPwGBLPrVK1BFXLR3UsnQSn
  • 13Btezbvbwr9LhKmDQLgBnJUgjhZighEjNPLeu79dqBbmXRwoWm

DeWi contacted prospective oracles privately to preserve anonymity. This group was individually instructed to set up a dedicated server and mainnet wallet exclusively for the purpose of submitting price data. DeWi witnessed failed transactions from all four of the proposed new oracles above. Once the oracles are added on-chain, they'll be required to submit prices a minimum of twice per day, although most are expected to report more frequently.

PoC reward is not correct with Beacon and invalid witness

Invalid reward issue: per the HIP15 reward distribution for a beacon with a varying number of witnesses, if there are invalid witnesses, the beacon (TX) reward should be reduced (TX scale) by the invalid ones. But I found that the beacon reward stays the same whenever there are more than 4 valid witnesses, and the witness (RX) reward is not as the documentation describes. The whole reward distribution needs to be corrected when invalid witnesses are present.

Beacon reward issue: per the HIP15 reward distribution for a beacon with a varying number of witnesses, the TX reward should increase when the number of valid witnesses increases and decrease when it decreases, but I found that all TX rewards are the same within the same block.

Make current fixed-res hex index code use the h3dex

Currently there's some code in core that uses a hacky fixed resolution hex index for various things. We should be able to replace this with the better and more flexible h3dex code and drop the older code.

validators penalty calculations fail when variables are unset

Issue

  • The ledger_validators command on the JSONRPC endpoint returns Internal error.

Expected behavior

  • The ledger_validators command on the JSONRPC endpoint should return the list of validators with associated data in JSON format.

Further info

  • Command:

curl -X POST -H 'Content-Type: application/json' -d '{"jsonrpc":"2.0","id":1,"method":"ledger_validators"}' http://localhost:4467

  • Return value:

{"jsonrpc":"2.0","error":{"code":-32603,"message":"Internal error."},"id":null}

  • Output in error.log:

[error] <0.10064.9> Failed encoding reply as JSON: {[{<<"jsonrpc">>,<<"2.0">>},{<<"result">>,{error,{could_not_fold,error,{badmatch,{error,not_found}},[{blockchain_ledger_validator_v1,calculate_penalties,2,[{file,"blockchain_ledger_validator_v1.erl"},{line,170}]},{blockchain_ledger_validator_v1,print,4,[{file,"blockchain_ledger_validator_v1.erl"},{line,212}]},{miner_jsonrpc_ledger,format_ledger_validator,5,[{file,"miner_jsonrpc_ledger.erl"},{line,162}]},{miner_jsonrpc_ledger,'-handle_rpc/2-fun-5-',5,[{file,"miner_jsonrpc_ledger.erl"},{line,85}]},{blockchain_ledger_v1,'-rocks_fold/6-L/2-0-',4,[{file,"blockchain_ledger_v1.erl"},{line,3843}]},{blockchain_ledger_v1,rocks_fold,6,[{file,"blockchain_ledger_v1.erl"},{line,3846}]},{blockchain_ledger_v1,cf_fold,4,[{file,"blockchain_ledger_v1.erl"},{line,962}]},{jsonrpc2,dispatch,2,[{file,"jsonrpc2.erl"},{line,194}]}]}}},{<<"id">>,1}]}
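
The fold dies on the {error, not_found} coming out of calculate_penalties/2, i.e. an unset chain var. A hedged sketch of the kind of guard that would avoid the crash (blockchain:config/2 is the existing chain var lookup; the idea of defaulting, and any default values, are assumptions):

%% Fall back to a default instead of crashing when a penalty-related
%% chain var is unset.
var_or_default(VarName, Default, Ledger) ->
    case blockchain:config(VarName, Ledger) of
        {ok, Value} -> Value;
        {error, not_found} -> Default
    end.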

Normalized Beaconing Reward Units Capped at 1 instead of 2 (HIP15)

Hi team,

After doing some weekend research and random sample checks of reward unit distributions, @thenorpan and I have noticed that the reward formula for w>N might have a bug that caps the total reward units at 1 instead of 2.

According to HIP15 (Reward formula), the transmitter's reward units should grow asymptotically towards 2, as shown in the yellow bars in the graph. Furthermore, in the text for w>N it says:

we give a small portion of the witness rewards to the transmitter up to 1 additional unit of rewards

Key here is to understand that it says up to 1 additional unit, on top of the 1 reward unit received for witnessing w=N.

Looking into the code, it seems like the normalize_reward_unit function runs after Unit has been calculated. Consequently, all reward units to beaconers are capped at 1 instead of 2.

Maybe I'm reading it the wrong way, let me know if I've missed anything :)

poc_challengee_reward_unit(WitnessRedundancy, DecayRate, Witnesses) ->
    case {WitnessRedundancy, DecayRate} of
        {undefined, _} -> {error, witness_redundancy_undefined};
        {_, undefined} -> {error, poc_reward_decay_rate_undefined};
        {N, R} ->
            W = length(Witnesses),
            Unit = poc_reward_tx_unit(R, W, N),
            {ok, normalize_reward_unit(Unit)}
    end.

normalize_reward_unit(Unit) when Unit > 1.0 -> 1.0;
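
If that reading is right, the challengee path would need its own cap at 2.0 rather than sharing the generic 1.0 clamp; a hypothetical sketch:

%% Hypothetical fix: cap the transmitter (challengee) unit at 2.0,
%% matching the HIP15 text, instead of the generic 1.0 cap.
normalize_challengee_unit(Unit) when Unit > 2.0 -> 2.0;
normalize_challengee_unit(Unit) -> Unit.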

With xor filters only charge delta vs entire xor filter

An XOR filter transaction happens every 10 minutes to start the join process for devices to send packets on the network.

Each time, in addition to the base txn cost (35,000 DC), there is also a charge for space on the blockchain. XOR filters are charged the same each time regardless of the actual change to the filter. This increases the cost of adding devices to the network significantly.

Instead, the charge for space on the blockchain should only take into account the delta, i.e. the difference that gets added.
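
In other words, something shaped like this (function and parameter names hypothetical; the 35,000 DC base fee is from above):

%% Charge the byte fee only on growth relative to the previous filter,
%% plus the base txn fee.
xor_update_fee(OldFilter, NewFilter, BaseFeeDC, DCPerByte) ->
    DeltaBytes = max(byte_size(NewFilter) - byte_size(OldFilter), 0),
    BaseFeeDC + DeltaBytes * DCPerByte.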

Simplify transfer_hotspot transaction

The current implementation of transfer_hotspot has deep dependencies on the wallet-api and makes it hard for us to compartmentalize these new features for future work.

We'd like to remove the wallet-api from the equation and further simplify the user experience by removing the liveness check and the amount as required fields for the transaction to be processed.

Proposed:
replace transfer_hotspot_v1 with transfer_hotspot_v2, which takes the following arguments:

  • owner_signature
  • owner_address
  • gateway_address
  • new_owner_address

Once submitted to the chain, the owner changes. Hotspot location, gain, elevation do not change.
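
As an Erlang record, the proposed txn would look roughly like this (a sketch derived from the field list above, not an actual proto definition):

%% Sketch of transfer_hotspot_v2 built from the fields listed above.
-record(txn_transfer_hotspot_v2, {
    gateway_address   :: binary(),
    owner_address     :: binary(),
    new_owner_address :: binary(),
    owner_signature   :: binary()
}).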
