helium / blockchain-core
License: Apache License 2.0
Hello, I have an issue on my router ID: 11awcuSbVURPkXX3FbKC7KF6bgEPRZqqPzv1FTEYABMLttUr13E
On March 24th, after the two state_channel creations, my DC balance was 2956372.
On March 28th, one of the state_channels expired and a new one was created; 202 DC had been consumed on this state_channel.
The state channel configuration has 50K DC blocked per SC.
My new balance is 2823340 DC.
The delta is 133032 DC, which I can't explain.
I was expecting it to be 2956372 + ( 50000 - 202 ) - 35000 - 50000 = 2921170, but it is 2823340; the difference is 97830.
No new devices were registered during this period of time ( no xor transactions ).
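The accounting in the report can be checked directly; this is pure arithmetic on the numbers quoted above, with no assumptions about the chain itself:

```python
# Reproduce the poster's DC accounting (numbers taken from the report above).
start = 2_956_372   # balance on March 24th
end = 2_823_340     # balance observed afterwards

observed_delta = start - end
assert observed_delta == 133_032  # the unexplained delta the poster quotes

# Expected: refund of the expired SC (50,000 DC minus the 202 DC used),
# minus a 35,000 DC txn cost, minus 50,000 DC blocked for the new SC.
expected_end = start + (50_000 - 202) - 35_000 - 50_000
assert expected_end == 2_921_170

unexplained = expected_end - end
assert unexplained == 97_830  # the 97830 DC gap the poster cannot explain
```

So the poster's arithmetic is internally consistent; the open question is what consumed the remaining 97830 DC.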
dewi: 13LVwCqZEKLTVnf3sjGPY1NMkTE7fWtUVjmDfeuscMFgeK3f9pn
SenseCap: 14NBXJE5kAAZTMigY4dcjXSMG4CSqjYwvteQWwQsYhsu2TKN6AF
Hard to tell if the current code is ready for deployment when the build fails.
"dac8d0126697f1d64220315fd747e9c21a9b0af9822463df0c8f9edf80c57de4"
Originally posted by @woahdy in blockparty-sh/slp-explorer#413 (comment)
Originally posted by @woahdy in blockparty-sh/slp-explorer#438
address: 13cbbZXzqwp6YMM5JvAu5T1TRhenENEJVU5Q8vpLhunQYE1Acpp
mode: full
I bumped into the error message duplicate_group
when creating an election transaction at the wrong height. I wonder if this error message name is intentional or happened to get there by accident.
Optionally run a second ledger that has special rules for absorbing txns such that we can test different chain var values or behavior over some period of time. We could add some magic check like case blockchain_ledger_v1:is_auxiliary
and run conditional code (or set a different chain var on the auxiliary ledger).
We'd absorb blocks/txns into both ledgers, but we'd only consult the main ledger. At the end of the run or the test we could compare the ledgers against each other to see what differs. This would allow for easier validation of new or proposed changes before actually making the change live.
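The dual-ledger idea above can be sketched as a toy model (Python pseudomodel, not the Erlang implementation; the `Ledger` class, the `mult` chain var, and the diff step are all illustrative stand-ins):

```python
# Toy model: absorb every txn into both a main and an auxiliary ledger,
# where the auxiliary ledger may carry different chain var values, then
# diff the two states at the end of the run.
class Ledger:
    def __init__(self, chain_vars, auxiliary=False):
        self.chain_vars = dict(chain_vars)
        self.auxiliary = auxiliary  # stand-in for is_auxiliary-style checks
        self.state = {}

    def absorb(self, txn):
        # The auxiliary ledger could take a conditional code path here.
        txn.apply(self.state, self.chain_vars)

def absorb_block(main, aux, block):
    # Both ledgers absorb every txn, but only `main` is consulted live.
    for txn in block:
        main.absorb(txn)
        aux.absorb(txn)

def diff(main, aux):
    # At the end of the run, show where the proposed change diverged.
    keys = set(main.state) | set(aux.state)
    return {k: (main.state.get(k), aux.state.get(k))
            for k in keys if main.state.get(k) != aux.state.get(k)}
```

The point of the diff is exactly the validation step described above: a proposed chain var change can be absorbed in parallel and its effects inspected without ever consulting the auxiliary ledger during normal operation.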
Expose why a witness is invalid when doing witness validity checks for a PoC receipt txn; it would help ETL use that information and expose it to various UIs.
https://github.com/helium/HIP/blob/master/0016-random-consensus-group-election.md
This HIP has been approved by the community. The Helium core team will be implementing this change.
To start, there will be a Helium manufacturer key and a CalChip manufacturer key.
Hi there,
I'm a bit new here and by no means an erlang expert.
As I'm reading the implementation of the reward units, I think it is a bit confusing that you mix "shares" (%) and "amounts" (#) in the two snippets below:
blockchain-core/src/transactions/v2/blockchain_txn_rewards_v2.erl
Lines 1084 to 1085 in d7abd28
blockchain-core/src/transactions/v2/blockchain_txn_rewards_v2.erl
Lines 1305 to 1313 in d7abd28
I believe the best way to clarify it is to rename ShareOfDCRemainder
to DCRemainder
.
If it actually is a share, then it should be multiplied with EpochReward
. It doesn't make sense to add HNT and percentages.
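The units mismatch can be shown with a toy calculation (variable names mirror the snippet; the numbers are made up):

```python
epoch_reward = 100.0           # HNT for the epoch (made-up number)
share_of_dc_remainder = 0.25   # if this really is a share, i.e. a fraction

# Adding a share directly to an HNT amount mixes units:
wrong = epoch_reward + share_of_dc_remainder          # 100.25 "HNT": nonsense

# Either the variable is an amount (then rename it DCRemainder and add it),
# or it is a share (then multiply by EpochReward before adding):
right = epoch_reward + share_of_dc_remainder * epoch_reward   # 125.0 HNT
```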
/MZ
2021-06-18 19:06:15 =SUPERVISOR REPORT====
Supervisor: {local,blockchain_sup}
Context: start_error
Reason: {{badmatch,{error,{error,"Corruption: block checksum mismatch: stored = 1520935999, computed = 1200219786 in /var/data/checkpoints/885474/ledger.db-1624043170714480180/ledger.db/1521297.sst offset 0 size 3961"}}},[{blockchain_ledger_v1,commit_context,1,[{file,"blockchain_ledger_v1.erl"},{line,619}]},{blockchain_ledger_v1,context_snapshot,1,[{file,"blockchain_ledger_v1.erl"},{line,752}]},{blockchain,'-fold_blocks/5-fun-1-',4,[{file,"blockchain.erl"},{line,534}]},{lists,foldl,3,[{file,"lists.erl"},{line,1263}]},{blockchain,ledger_at,3,[{file,"blockchain.erl"},{line,484}]},{blockchain,load,2,[{file,"blockchain.erl"},{line,1828}]},{blockchain,new,4,[{file,"blockchain.erl"},{line,160}]},{blockchain_worker,load_chain,3,[{file,"blockchain_worker.erl"},{line,1056}]}]}
Offender: [{pid,undefined},{id,blockchain_worker},{mfargs,{blockchain_worker,start_link,[[{port,44158},{base_dir,"/var/data"},{update_dir,"/opt/miner/update"}]]}},{restart_type,permanent},{shutdown,5000},{child_type,worker}]
2021-06-18 19:06:15 =CRASH REPORT====
crasher:
initial call: blockchain_worker:init/1
pid: <0.1263.0>
registered_name: []
exception error: {{badmatch,{error,{error,"Corruption: block checksum mismatch: stored = 1520935999, computed = 1200219786 in /var/data/checkpoints/885474/ledger.db-1624043170714480180/ledger.db/1521297.sst offset 0 size 3961"}}},[{blockchain_ledger_v1,commit_context,1,[{file,"blockchain_ledger_v1.erl"},{line,619}]},{blockchain_ledger_v1,context_snapshot,1,[{file,"blockchain_ledger_v1.erl"},{line,752}]},{blockchain,'-fold_blocks/5-fun-1-',4,[{file,"blockchain.erl"},{line,534}]},{lists,foldl,3,[{file,"lists.erl"},{line,1263}]},{blockchain,ledger_at,3,[{file,"blockchain.erl"},{line,484}]},{blockchain,load,2,[{file,"blockchain.erl"},{line,1828}]},{blockchain,new,4,[{file,"blockchain.erl"},{line,160}]},{blockchain_worker,load_chain,3,[{file,"blockchain_worker.erl"},{line,1056}]}]}
ancestors: [blockchain_sup,miner_critical_sup,miner_sup,<0.1219.0>]
message_queue_len: 1
messages: [{'$gen_call',{<0.1262.0>,#Ref<0.363591680.1890058241.13192>},blockchain}]
links: [<0.1225.0>]
dictionary: []
trap_exit: false
status: running
heap_size: 196650
stack_size: 27
reductions: 8243250
neighbours:
2021-06-18 19:06:15 =SUPERVISOR REPORT====
Supervisor: {local,miner_critical_sup}
Context: start_error
Reason: {shutdown,{failed_to_start_child,blockchain_worker,{{badmatch,{error,{error,"Corruption: block checksum mismatch: stored = 1520935999, computed = 1200219786 in /var/data/checkpoints/885474/ledger.db-1624043170714480180/ledger.db/1521297.sst offset 0 size 3961"}}},[{blockchain_ledger_v1,commit_context,1,[{file,"blockchain_ledger_v1.erl"},{line,619}]},{blockchain_ledger_v1,context_snapshot,1,[{file,"blockchain_ledger_v1.erl"},{line,752}]},{blockchain,'-fold_blocks/5-fun-1-',4,[{file,"blockchain.erl"},{line,534}]},{lists,foldl,3,[{file,"lists.erl"},{line,1263}]},{blockchain,ledger_at,3,[{file,"blockchain.erl"},{line,484}]},{blockchain,load,2,[{file,"blockchain.erl"},{line,1828}]},{blockchain,new,4,[{file,"blockchain.erl"},{line,160}]},{blockchain_worker,load_chain,3,[{file,"blockchain_worker.erl"},{line,1056}]}]}}}
Offender: [{pid,undefined},{id,blockchain_sup},{mfargs,{blockchain_sup,start_link,[[{key,{{ecc_compact,{{'ECPoint',<<4,39,238,209,234,154,123,23,83,246,202,148,2,93,196,13,5,116,162,201,254,218,44,244,248,62,143,175,24,68,121,198,206,16,178,101,44,246,177,231,42,69,18,94,87,55,125,184,224,224,9,166,238,92,175,114,26,90,109,144,232,224,140,40,253>>},{namedCurve,{1,2,840,10045,3,1,7}}}},#Fun<miner_keys.1.35972986>,#Fun<miner_keys.0.35972986>}},{seed_nodes,["/ip4/35.166.211.46/tcp/2154","/ip4/44.236.95.167/tcp/2154","/ip4/3.248.105.103/tcp/2154","/ip4/54.195.15.233/tcp/443","/ip4/44.237.171.231/tcp/443","/ip4/35.155.234.98/tcp/443","/ip4/44.240.226.82/tcp/2154"]},{max_inbound_connections,6},{port,44158},{num_consensus_members,16},{base_dir,"/var/data"},{update_dir,"/opt/miner/update"},{group_delete_predicate,fun miner_consensus_mgr:group_predicate/1}]]}},{restart_type,permanent},{shutdown,infinity},{child_type,supervisor}]
2021-06-18 19:06:15 =SUPERVISOR REPORT====
Supervisor: {local,miner_sup}
Context: start_error
Reason: {shutdown,{failed_to_start_child,blockchain_sup,{shutdown,{failed_to_start_child,blockchain_worker,{{badmatch,{error,{error,"Corruption: block checksum mismatch: stored = 1520935999, computed = 1200219786 in /var/data/checkpoints/885474/ledger.db-1624043170714480180/ledger.db/1521297.sst offset 0 size 3961"}}},[{blockchain_ledger_v1,commit_context,1,[{file,"blockchain_ledger_v1.erl"},{line,619}]},{blockchain_ledger_v1,context_snapshot,1,[{file,"blockchain_ledger_v1.erl"},{line,752}]},{blockchain,'-fold_blocks/5-fun-1-',4,[{file,"blockchain.erl"},{line,534}]},{lists,foldl,3,[{file,"lists.erl"},{line,1263}]},{blockchain,ledger_at,3,[{file,"blockchain.erl"},{line,484}]},{blockchain,load,2,[{file,"blockchain.erl"},{line,1828}]},{blockchain,new,4,[{file,"blockchain.erl"},{line,160}]},{blockchain_worker,load_chain,3,[{file,"blockchain_worker.erl"},{line,1056}]}]}}}}}
Miner gets stuck at block 173563 when attempting to complete a full sync (honor_quick_sync = false). The following error is logged while attempting to validate a transaction in block 173563. This occurs on miner-amd64_2021.05.29.0_GA running docker and was recreated on a second instance.
2021-06-01 13:47:40.819 7 [warning] <0.5745.47>@blockchain_txn:separate_res:352 invalid txn blockchain_txn_poc_receipts_v1 : {error,receipt_not_in_order} / type=poc_receipts_v1 hash="1ndJnUZ47Mb9zpqQoF2xtfLFRcKkRgPnLU3NkFz9Z7Kz1qo2uc" challenger="curly-porcelain-antelope" onion="12Z7E1Ci26RoYy1HVQg4axzEYEknDNrWziDUCMAz4G8vBY9pUQk" path:
2021-06-01 13:47:26.813 7 [error] <0.5283.47>@blockchain_txn_poc_receipts_v1:validate:1021 receipt not in order
Although we're currently blocked on the RocksDB side (see facebook/rocksdb#2343), if that issue ever gets fixed, it would be nice to see if this works, because it would allow us to drop our current transaction mechanism, which is perhaps not the fastest and quite complex.
https://github.com/helium/HIP/blob/master/0012-remote-location-assert.md
This HIP has been approved by the community but has not been prioritized by the Helium core team. We believe that this would be a good first project for an external developer to tackle.
Provide GRPC APIs via which light gateways can submit packets to the router via the existing state channel flows. The APIs will need to give light gateways access to all functionality provided by the existing blockchain_state_channel_client.
If your miner was added recently and it is stuck at 99*,***, it may be caused by a snapshot issue which already has a fix. You can try restarting your Miner to trigger an update. But at the moment the issue is that this no longer works!
If it's not 99****, it may be caused by another issue: #910. An out-of-memory error caused a failure when loading the newest snapshot (which helps your miner sync faster). The Miner uses about 200M of memory when executing normal tasks. There was a sudden increase in memory usage when loading a big snapshot file, which caused the crash. A new OTA has been released; keeping your miner online will let it receive the newest OTA update. If it's not working as expected, try restarting your Miner. If your Miner keeps crashing (green, yellow, and red lights keep cycling) and a restart doesn't fix it, contact your hotspot maker.
But even when the sync issue is fixed, the hotspot still does not do any work. Many Bobcat miners are affected.
I own 5 Rak hotspots that had been online for about 4 months until the 18th of August when, all of a sudden, 4 of the 5 hotspots jumped into syncing and won't fully sync to the blockchain.
I have already done the following steps:
- Port forwarding is open
- Ethernet cable is plugged in
- Device and router have been turned off and on about 10 times
No earnings and no beacons; however, inbound and outbound are green, so could you please help me?
It's possible that moving to a series of checkpoint ledgers, instead of having one double ledger with a number of intermediate states, would be more efficient and safer (since we wouldn't have to recompute the intermediates on startup). It would definitely lead to less complicated ledger code.
It's also possible that this would use a lot more memory and disk, since each one would be a separate rocks instance, or would hurt performance in some unanticipated way.
When a peer asks us for a snapshot, if we have a newer snapshot (and we have that previous snapshot) we could provide the requested snapshot, all the intermediate election blocks, and the latest snapshot block. This would allow us to stop bumping the 'blessed' snapshot as regularly.
The receiving node would load the blessed snapshot, then it could validate all the election blocks (which describe the transition to new consensus members) and the snapshot block, which contains the hash of the latest snapshot (which is signed by the latest consensus group). The node can then request that latest snapshot hash from a peer and load that.
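The chain-of-trust walk on the receiving side can be sketched as follows (illustrative Python only; the dict-based block format and the signer comparison are stand-ins for the real block structures and consensus-group signature checks):

```python
def validate_chain(blessed_group, election_blocks, snapshot_block):
    """Toy chain-of-trust walk from a blessed snapshot to the latest one.

    Each election block here is a dict {"signer": current_group,
    "next_group": new_group}; the snapshot block is {"signer": latest_group,
    "snapshot_hash": h}. A simple signer comparison stands in for
    verifying signatures from the consensus group.
    """
    group = blessed_group  # the group baked into the blessed snapshot
    for blk in election_blocks:
        if blk["signer"] != group:
            raise ValueError("election block not signed by expected group")
        group = blk["next_group"]  # transition to the next consensus group
    if snapshot_block["signer"] != group:
        raise ValueError("snapshot block not signed by latest group")
    # The node can now request this hash from a peer and load that snapshot.
    return snapshot_block["snapshot_hash"]
```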
hip17_res_11 is currently [2, 100000, 100000] (N, density_target, density_max).
According to the algo, given n equal to any valid value, the limit = min(>100000, 100000), or always 100000.
Consequently, at res_11, scale = 100000 / unclipped hex density.
The same is true for res_12, but since res 13 is never used it can't be encountered.
I propose hip17_res_11 (and _12) be changed to [2, 1, 1], or alternatively [1, 1, 7], so that the limit is in the range 1..7, which I think would make clipped and unclipped equal, so the reward scale would always be calculated as 1 at this level.
Edit: non-critical issue unless the res consideration changes.
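A toy model of the clipping described above (not the real blockchain_hex implementation; the limit shape `density_target * max(N, occupied_neighbors)` is a simplified stand-in, but it reproduces the behavior the issue describes):

```python
def limit(N, density_target, density_max, occupied_neighbors):
    # Simplified HIP17-style clipping limit: the density target scaled by
    # how many neighbors are occupied (at least N), clipped at density_max.
    return min(density_max, density_target * max(N, occupied_neighbors))

# Current hip17_res_11 = [2, 100000, 100000]: the first min() argument is
# always >= 100000, so the limit is always 100000 and the clip never binds.
assert all(limit(2, 100_000, 100_000, k) == 100_000 for k in range(8))

# Proposed [1, 1, 7]: the limit ranges over 1..7 with the neighbor count,
# so clipped and unclipped densities would agree at this resolution.
assert [limit(1, 1, 7, k) for k in range(8)] == [1, 1, 2, 3, 4, 5, 6, 7]
```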
It is observed on some hotspots that the miner fails to download the blessed snapshot from S3; the temporary scratch file snap-*.scratch
will keep growing, up to a size far larger than the actual snapshot (~130MB). This issue leaves the hotspot stuck on chain syncing.
2021-08-07 07:44:26.391 160 [error] <0.9550.0>@blockchain_worker:start_snapshot_sync:949 snapshot download or loading failed because {error,timeout}: [{blockchain_worker,do_s3_download,2,[{file,"blockchain_worker.erl"},{line,1034}]},{blockchain_worker,'-start_snapshot_sync/4-fun-0-',6,[{file,"blockchain_worker.erl"},{line,936}]}]
2021-08-07 07:44:26.392 160 [info] <0.9550.0>@blockchain_worker:attempt_fetch_p2p_snapshot:955 attempting snapshot sync with "/p2p/11uvSUGnmwtQ6uB1grfrdgrDZnFugrX64oEqJUvWxjAQC9WDJtb"
2021-08-07 07:44:31.468 160 [info] <0.1481.0>@blockchain_worker:handle_info:601 snapshot sync down reason normal
2021-08-07 07:44:31.473 160 [info] <0.1481.0>@blockchain_worker:snapshot_sync:741 snapshot_sync starting <0.11069.0> #Ref<0.169456592.1972371457.214293>
2021-08-07 07:44:31.476 160 [info] <0.11069.0>@blockchain_worker:do_s3_download:1024 Attempting snapshot download from "https://snapshots.helium.wtf/mainnet/snap-953281", writing to scratch file "/var/data/snap/snap-953281.scratch"
...
/opt/miner # ls -lh /var/data/snap
total 601M
-rw-r--r-- 1 root root 600.6M Aug 7 07:59 snap-953281.scratch
For this example, the scratch file has already reached 600MB+.
Attached is the result of grep start_snapshot_sync from the recent log of one hotspot:
miner-fail-to-download-snapshot.log
Enable support for memo fields for payment-v2 transactions.
Hello,
I noticed that this repo is using exor_filter. Recently, the successor library was released: efuse_filter. Fuse filters are measurably smaller and faster than xor filters. Only fuse8 is supported; there is no xor16 equivalent. However, I believe that fuse8 has an even lower false positive rate than xor16, but I am not 100% sure of that as the research paper has yet to be released. It also does not support a custom hashing function yet, but that can be added.
Provide a GRPC API via which light gateways can submit packets to the router. Packets will be delivered without cost and outside of state channels.
Controllino: 14go8hvEDnotWTyhYv6Hu5PTnRUAQzJqbB6dsDm1oThkCcZe9zd, mode: Full
Heltec Automation: 14iC6N1HkqUjH7WEChHVQhPqJ1hbWBKpZXZVeHHykCA7tNDYF2C, mode: Full
FreedomFi: 13y2EqUUzyQhQGtDSoXktz8m5jHNSiwAKLTYnHNxZq2uH5GGGym, mode: Full
Now that most of the groundwork has been laid for light gateways, it's time to add a way to add them to the chain.
For simplicity, the plan is to re-use the existing add_gateway txn, but if the 'payer' is not one of the accounts blessed in the staking_keys
chain variable, we will add the gateway as a light gateway. Light gateways will only cost $20 to onboard, not $40, and will initially only be able to forward device packets.
A chain var will be added to allow the capabilities of light gateways to be ratcheted forward as we build out more functionality (witnessing/being challenged, etc.).
Error excerpt from blockchain-etl
3}],[]},{blockchain_hex,'-calculate_scale/4-fun-0-',5,[{file,"/opt/blockchain-etl/_build/default/lib/blockchain/src/blockchain_hex.erl"},{line,166}]},{lists,foldl,3,[{file,"lists.erl"},{line,1263}]},{blockchain_hex,scale,4,[{file,"/opt/blockchain-etl/_build/default/lib/blockchain/src/blockchain_hex.erl"},{line,65}]},{blockchain_hex,scale,2,[{file,"/opt/blockchain-etl/_build/default/lib/blockchain/src/blockchain_hex.erl"},{line,47}]},{be_db_gateway,maybe_reward_scale,2,[{file,"/opt/blockchain-etl/src/be_db_gateway.erl"},{line,285}]},{be_db_gateway,mk_gateway,2,[{file,"/opt/blockchain-etl/src/be_db_gateway.erl"},{line,277}]},{be_db_gateway,mk_gateway_hash,2,[{file,"/opt/blockchain-etl/src/be_db_gateway.erl"},{line,158}]}]},{gen_server,call,[<0.6058.3>,{with_transaction,#Fun<be_db_follower.1.29441482>},infinity]}}
Offender: [{pid,<0.1342.0>},{id,db_follower},{mfargs,{blockchain_follower,start_link,[[{follower_module,{be_db_follower,[{base_dir,"data"}]}}]]}},{restart_type,permanent},{shutdown,5000},{child_type,worker}]
Currently, dataonly hotspots (which also have some data transfer rewards) are being included in reward scale calculations. However, since dataonly hotspots do not participate in proof of coverage, they should not be considered for reward scaling and thus should not be assigned a tx scale.
Example: https://api.helium.io/v1/hotspots/112oTtA4B9bULBWM2TnH84C8Y5csoQNcmmPg2DigGvJ24j5Nd6zE. This dataonly hotspot appears to be using a miner (not gateway-rs) and has received rewards for data transfer. It has a reward scale.
%% -- erlang --
{cover_enabled, true}.
{cover_opts, [verbose]}.
{cover_excl_mods,
[
blockchain_txn_handler,
blockchain_poc_target_v2, % obsolete
blockchain_poc_path_v2, % obsolete
blockchain_poc_path_v3, % obsolete
%% cli stuff
blockchain_console,
blockchain_cli_ledger,
blockchain_cli_peer,
blockchain_cli_txn,
blockchain_cli_sc,
%% test stuff
blockchain_worker_meck_original,
blockchain_event_meck_original
]}.
{cover_export_enabled, true}.
{covertool, [{coverdata_files,
[
"ct.coverdata",
"eunit.coverdata"
]}]
}.
{deps, [
{lager, "3.9.1"},
{erl_base58, "0.0.1"},
{base64url, "1.0.1"},
{libp2p, ".", {git, "https://github.com/helium/erlang-libp2p.git", {branch, "master"}}},
{clique, ".", {git, "https://github.com/helium/clique.git", {branch, "develop"}}},
{h3, ".", {git, "https://github.com/helium/erlang-h3.git", {branch, "master"}}},
{erl_angry_purple_tiger, ".", {git, "https://github.com/helium/erl_angry_purple_tiger.git", {branch, "master"}}},
{erlang_stats, ".", {git, "https://github.com/helium/erlang-stats.git", {branch, "master"}}},
{e2qc, ".", {git, "https://github.com/helium/e2qc", {branch, "master"}}},
{vincenty, ".", {git, "https://github.com/helium/vincenty", {branch, "master"}}},
{helium_proto, {git, "https://github.com/helium/proto.git", {branch, "master"}}},
{merkerl, ".", {git, "https://github.com/helium/merkerl.git", {branch, "master"}}},
{xxhash, {git, "https://github.com/pierreis/erlang-xxhash", {branch, "master"}}},
{exor_filter, ".*", {git, "https://github.com/mpope9/exor_filter", {branch, "master"}}},
{erbloom, {git, "https://github.com/Vagabond/erbloom", {branch, "master"}}}
]}.
{erl_opts, [
debug_info,
{parse_transform, lager_transform},
{i, "./_build/default/plugins/gpb/include"},
warnings_as_errors
]}.
{plugins,
[
covertool,
{rebar3_eqc, {git, "https://github.com/Vagabond/rebar3-eqc-plugin", {branch, "master"}}}
]}.
{xref_checks, [
undefined_function_calls,
undefined_functions
%% deprecated_function_calls,
%% deprecated_functions
]}.
{profiles, [
{test, [
{deps, [{meck, "0.8.12"}]}
]},
{eqc, [
{erl_opts, [{d, 'TEST'}]},
{src_dirs, ["test", "src"]},
%% {cover_enabled, false},
{deps, [{meck, "0.8.12"}]}
]}
]}.
After the implementation of HIP-10, the remainder of the DC rewards pool is given back to PoC participants during epochs with unspent rewards. This causes dramatic variation in PoC rewards, since the bonus amounts are not broken out separately, and this leads to frequent questions in the community.
Proposal: Implement a new set of reward types and treat them as separate "bonus" categories. We may not be able to backfill this data into etl, but at the very least we can account for this going forward. This will need appropriate planning for mobile app changes.
Pisces: 134C7Hn3vhfBLQZex4PVwtxQ2uPJH97h9YD2bhzy1W2XhMJyY6d
Mode: Full
ClodPi: 13XuP2DjHEHVkKguDDZD2ev5AeqMLuJ8UQ44efEcDmVTnBcvc6F
Mode: Full
Linxdot: 14eUfY1GsjK4WH6uZYoeagnFtigBKdvPruAXLmc5UsUMEDj3yib
Mode: Full
Kerlink: 13Mpg5hCNjSxHJvWjaanwJPBuTXu1d4g5pGvGBkqQe3F8mAwXhK
Rak Wireless: 14h2zf1gEr9NmvDb2U53qucLN2jLrKU1ECBoxGnSnQ6tiT6V2kM
A core part of our validator go-to-market strategy is working with staking providers (such as Bison Trails).
Currently, to stake multiple validators requires multiple transactions. This is inefficient.
Similar to how multi-payments are batched and sent in a single transaction, provide the ability to stake multiple validators in a single transaction.
The current state of things is that for any bit of gateway data, we need to pull the entire thing off the disk, even if we just need something like the location, which is just a few bytes. This is pretty grim and cache-unfriendly on the Pis. The idea here would be to pull a few fields (probably location, witness addresses, and score-relevant fields at first) into a new column family where they could be directly addressed, making it a lot cheaper to consult them.
This is a cache, and could be read-through. It does not need to go into the snapshots.
LongAP: 12zX4jgDGMbJgRwmCfRNGXBuphkQRqkUTcLzYHTQvd4Qgu8kiL4
Smart Mimic: 13MS2kZHU4h6wp3tExgoHdDFjBsb9HB9JBvcbK9XmfNyJ7jqzVv
The DeWi has solicited and verified a new group of price oracle submitters that we would like to add on chain:
We'd like to remove the following oracles for inactivity:
DeWi contacted prospective oracles privately to preserve anonymity. This group was individually instructed to set up a dedicated server and mainnet wallet exclusively for the purpose of submitting price data. DeWi witnessed failed transactions from all four of the proposed new oracles above. Once the oracles are added on-chain, they'll be required to submit prices at a minimum of twice per day although most are expected to report more frequently.
Invalid reward issue: per the HIP15 rewards distributed for a beacon with a varying number of witnesses, if there are invalid witnesses, the reward of the beacon (TX) should be reduced (TX scale) by the invalid ones. But I found that the beacon reward stays the same whenever there are more than 4 valid witnesses. The reward of the witnesses (RX) is also not as the document describes. The whole reward distribution needs to be corrected when invalid witnesses are present.
Beacon reward issue: per the HIP15 rewards distributed for a beacon with a varying number of witnesses, the TX reward should increase when the number of valid witnesses increases and decrease when it decreases. But I found that all TX rewards are the same at the same block.
Currently there's some code in core that uses a hacky fixed resolution hex index for various things. We should be able to replace this with the better and more flexible h3dex code and drop the older code.
The ledger_validators command on the JSONRPC endpoint returns an Internal error.
The ledger_validators command on the JSONRPC endpoint should return the list of validators with associated data in JSON format.
curl -X POST -H 'Content-Type: application/json' -d '{"jsonrpc":"2.0","id":1,"method":"ledger_validators"}' http://localhost:4467
{"jsonrpc":"2.0","error":{"code":-32603,"message":"Internal error."},"id":null}
error.log:
[error] <0.10064.9> Failed encoding reply as JSON: {[{<<"jsonrpc">>,<<"2.0">>},{<<"result">>,{error,{could_not_fold,error,{badmatch,{error,not_found}},[{blockchain_ledger_validator_v1,calculate_penalties,2,[{file,"blockchain_ledger_validator_v1.erl"},{line,170}]},{blockchain_ledger_validator_v1,print,4,[{file,"blockchain_ledger_validator_v1.erl"},{line,212}]},{miner_jsonrpc_ledger,format_ledger_validator,5,[{file,"miner_jsonrpc_ledger.erl"},{line,162}]},{miner_jsonrpc_ledger,'-handle_rpc/2-fun-5-',5,[{file,"miner_jsonrpc_ledger.erl"},{line,85}]},{blockchain_ledger_v1,'-rocks_fold/6-L/2-0-',4,[{file,"blockchain_ledger_v1.erl"},{line,3843}]},{blockchain_ledger_v1,rocks_fold,6,[{file,"blockchain_ledger_v1.erl"},{line,3846}]},{blockchain_ledger_v1,cf_fold,4,[{file,"blockchain_ledger_v1.erl"},{line,962}]},{jsonrpc2,dispatch,2,[{file,"jsonrpc2.erl"},{line,194}]}]}}},{<<"id">>,1}]}
Hi team,
After doing some weekend research and random sample checks of reward unit distributions, @thenorpan and I have noticed that the reward formula for w>N
might have a bug that caps the total reward units at 1 instead of 2.
According to HIP15 (Reward formula), the transmitter's reward units should grow asymptotically towards 2, as shown in the yellow bars in the graph. Furthermore, in the text for w>N
it says:
we give a small portion of the witness rewards to the transmitter up to 1 additional unit of rewards
Key here is to understand that it says up to 1 additional unit, on top of the 1 reward unit received for witnessing at w=N.
Looking into the code, it seems like the normalize_reward_unit
function runs after Unit
has been calculated. Consequently, all reward units to beaconers are capped at 1 instead of 2.
Maybe I'm reading it the wrong way, let me know if I've missed anything :)
blockchain-core/src/transactions/v2/blockchain_txn_rewards_v2.erl
Lines 934 to 943 in dda3843
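The suspected capping behavior can be modeled in isolation (the ramp below is a made-up stand-in, not the formula from blockchain_txn_rewards_v2.erl; only the effect of the cap matters here):

```python
def tx_reward_units(w, N):
    # Per HIP15, for w > N the transmitter earns up to 1 extra unit on top
    # of the 1 unit earned at w = N, asymptotically approaching 2.
    # This ramp is a toy stand-in for the real decay-based formula.
    if w <= N:
        return w / N
    return 1.0 + (w - N) / w

def normalize_reward_unit(unit, cap):
    # Toy normalization: clamp the computed unit at `cap`.
    return min(unit, cap)

units = tx_reward_units(8, 4)   # 1.5 units for w=8, N=4 in this toy model
assert units == 1.5
# Normalizing with a cap of 1 (the suspected bug) erases the w > N bonus:
assert normalize_reward_unit(units, cap=1.0) == 1.0
# A cap of 2 preserves it, matching the HIP15 asymptote:
assert normalize_reward_unit(units, cap=2.0) == 1.5
```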
"5794effa9731c70e36a84eab9a5de7566e22e7efe30eb4b1b91c0717291dce82"
Originally posted by @woahdy in blockparty-sh/slp-explorer#413 (comment)
PantherX: 13v9iGhjvQUtVaZXcFFRCEbL1nPR4R8QJowBgMUcaGM2v1aV6mn, mode: full
hummingbird: 14DdSjvEkBQ46xQ24LAtHwQkAeoUUZHfGCosgJe33nRQ6rZwPG3, mode: full
Add logic to set assert location for data-only hotspots to $5, and lower add_gateway to $10.
New naming convention:
Old → New
Light → Data-Only Hotspots
Non Consensus → Light Hotspots
Full Hotspots → no change
An XOR filter transaction happens every 10 minutes to start the join process for devices to send packets on the network.
Each time, in addition to the base txn cost (35,000 DC), there is also a charge for space on the blockchain. XOR filters are charged the same each time regardless of the actual change to the filter. This increases the cost of adding devices to the network significantly.
Instead, the charge for space on the blockchain should only take into account the delta, or difference, that gets added.
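A hypothetical illustration of the proposed change (BASE_TXN_DC matches the 35,000 DC base cost mentioned above; DC_PER_BYTE and the fee shape are made up for illustration, not the actual chain var fee logic):

```python
BASE_TXN_DC = 35_000   # base txn cost quoted in the issue
DC_PER_BYTE = 1        # assumed per-byte rate, illustration only

def current_fee(new_filter: bytes) -> int:
    # Today: charged for the full serialized filter size every time.
    return BASE_TXN_DC + DC_PER_BYTE * len(new_filter)

def proposed_fee(old_filter: bytes, new_filter: bytes) -> int:
    # Proposal: charge only for the growth (delta) of the filter.
    delta = max(0, len(new_filter) - len(old_filter))
    return BASE_TXN_DC + DC_PER_BYTE * delta
```

Under this model, adding a handful of devices to an already-large filter would cost close to the base fee rather than re-paying for the entire filter.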
I've seen an uptick in this error while I spin up my ETL. It seems like it's not a fatal error, and I'm also not sure how the API is affected.
The current implementation of transfer_hotspot has deep dependencies on wallet-api and makes it hard for us to compartmentalize these new features for future work.
We'd like to remove wallet-api from the equation and further simplify the user experience by removing the liveness check and the amount as required fields for the transaction to be processed.
Proposed: replace transfer_hotspot_v1 with transfer_hotspot_v2, which takes the following arguments:
owner_signature
owner_address
gateway_address
new_owner_address
Once submitted to the chain, the owner changes. Hotspot location, gain, elevation do not change.
https://github.com/helium/HIP/blob/master/0015-beaconing-rewards.md
https://github.com/helium/HIP/blob/master/0017-hex-density-based-transmit-reward-scaling.md
These two HIPs have been approved by the community. The Helium core team will be scoping and beginning work on this shortly. This ticket is to track progress towards implementation.