
graft-ng's Introduction

Graft Supernode (graft-ng)

Supernodes represent the Proof-of-Stake Network layer for GRAFT Network.
Their main function is instant authorization, accomplished via supernode sample-based consensus. Supernodes are rewarded with part of the sale transaction fee for their work. Supernodes also provide DAPI into the GRAFT Network, supporting various transaction types, most notably a "Sale" transaction compatible with Point-of-Sale environments.

GRAFT Network itself is an attempt at building a payment network that functions similarly to other credit card networks: instant authorizations, merchant-paid (greatly reduced) proportionate transaction fees, multiple transaction types compatible with point-of-sale workflows, adaptability to regulatory environments, and unlimited TPS via decentralization.

Compiling Graft Supernode from Source

Dependencies

Due to gcc 7.3.0 being a hard requirement, we strongly recommend using Ubuntu 18.04 as the build platform.

| Dependency | Min. Version | Debian/Ubuntu Pkg | Arch Pkg | Optional | Purpose |
| ---------- | ------------ | ----------------- | -------- | -------- | ------- |
| GCC | 7.3.0 | `build-essential` | `base-devel` | NO | |
| CMake | 3.10.2 | `cmake` [^] | `cmake` | NO | |
| pkg-config | any | `pkg-config` | `base-devel` | NO | |
| Boost | 1.65 | `libboost-all-dev` | `boost` | NO | C++ libraries |
| OpenSSL | basically any | `libssl-dev` | `openssl` | NO | |
| autoconf | any | `autoconf` | | NO | libr3 dependency |
| automake | any | `automake` | | NO | libr3 dependency |
| check | any | `check` | | NO | libr3 dependency |
| PCRE3 | any | `libpcre3-dev` | | NO | libr3 dependency |
| RapidJSON | 1.1.0 | `rapidjson-dev` | | NO | |
| Readline | 7.0 | `libreadline-dev` | | NO | command line interface |

[^] Some Debian/Ubuntu versions (for example, Ubuntu 16.04) don't provide CMake 3.10.2 as a package. To install it manually, see Install non-standard dependencies below.

Install non-standard dependencies

CMake 3.10.2

Go to the download page on the CMake official site https://cmake.org/download/ and download sources (.tar.gz) or installation script (.sh) for CMake 3.10.2 or later.

If you downloaded the sources, unpack them and follow CMake's installation instructions. If you downloaded the installation script, run the following command to install CMake:

sudo /bin/sh cmake-3.10.2-Linux-x86_64.sh --prefix=/opt/cmake --skip-license

If you don't want to download the installation script manually, you can run the following command directly (it requires curl; note that it skips the license prompt, so you must review and accept the license yourself):

curl -sL https://cmake.org/files/v3.10/cmake-3.10.2-Linux-x86_64.sh | sudo bash -s -- --prefix=/opt/cmake --skip-license

Prepare sources

Clone repository:

git clone --recurse-submodules https://github.com/graft-project/graft-ng.git

Build instructions

Linux (Ubuntu)

To build graft_server, please run the following commands:

mkdir -p <build directory>
cd <build directory>
cmake  <project root>
make

If you want to build the unit test suite as well, run cmake with an additional parameter:

mkdir -p <build directory>
cd <build directory>
cmake  <project root> -DOPT_BUILD_TESTS=ON
make all

Then execute graft_server_test to run the tests.

<build directory>/graft_server_test

Detailed Installation Instructions

See Instructions for more details.


graft-ng's Issues

RTA Flow: Add validation on Consensus of Approval and Rejection for Auth Sample Supernodes

Add validation on Consensus of Approval and Rejection for Auth Sample Supernodes (Part Of Authorize RTA Tx)

The Consensus of Approval (CoA): a transaction is considered valid as soon as

  • the supernode receives a valid PoS signature,
  • the supernode receives at least 6 out of 8 approvals before the validation timeout,
  • the supernode receives valid PoS and Wallet Proxy Supernode signatures.

The Consensus of Rejection (CoR): a transaction is considered invalid as soon as

  • the supernode receives an invalid PoS signature (a fail status message from the PoS),
  • the supernode receives a rejection from any auth sample member,
  • the supernode receives a rejection from the PoS or Wallet Proxy Supernodes.
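Taken together, the CoA and CoR rules can be sketched as a small decision function. This is an illustrative model only; the function and parameter names are mine, not from the graft-ng codebase:

```python
AUTH_SAMPLE_SIZE = 8
APPROVALS_REQUIRED = 6  # 6 of 8 auth sample approvals

def rta_status(pos_sig_valid, proxy_sigs_valid, approvals, rejections):
    """Classify an RTA transaction per the CoA/CoR rules sketched above."""
    # CoR: an invalid PoS signature or any rejection invalidates the tx.
    if not pos_sig_valid or rejections > 0:
        return "rejected"
    # CoA: valid PoS and Wallet Proxy signatures plus >= 6/8 approvals
    # received before the validation timeout.
    if proxy_sigs_valid and approvals >= APPROVALS_REQUIRED:
        return "approved"
    return "pending"
```

Note that the model stays "pending" between the two consensus outcomes; the RFC's timeout handling decides what happens if neither threshold is reached.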

For more details: https://github.com/graft-project/DesignDocuments/blob/master/RFCs/%5BRFC-003-RTVF%5D-RTA-Transaction-Validation-Flow.md#validation-flow-description

[RFC-002-SLS]-Supernode-List-Selection

This algorithm has the following advantages:

It actually doesn't appear to have any of the listed advantages:

  1. Consistency, since it is based on the consistent Blockchain-based List

False. Consistency in a decentralized network means that all properly performing network nodes agree on an answer. The blockchain-based list is indeed consistent, but the sample selection doesn't only depend on that; it also depends on the announce-based list, and the announce system can easily differ across individual nodes. Network latency, local system clock differences, node restarts, and momentary connection losses can all contribute to such inconsistencies. Thus the algorithm is not consistent across the network. You even stated as much earlier:

On this level, the [announce-based] list isn't completely consistent over the network but our chance that selected supernodes are online at that moment of time is high.

Whether the chance is "high" is irrelevant: if it isn't 100%, you cannot reject RTA transactions that used the wrong supernodes; and if you can't do that, then proxy SN operators can cheat the system by altering their proxy SN to always use their own 8 RTA SNs (and thus capture all of the fees of every transaction through that proxy SN).

  2. There is a good chance two sequential sets of Auth Sample participants overlap, and hence, RTA validation becomes even more consistent.

Something either is or is not consistent. If random chance makes something "even more consistent" then it is not consistent. See point 1.

  3. The Auth Sample is unique for each payment since it depends on the payment ID.

This has the same cheating potential as having an inconsistent list: even if the list itself wasn't inconsistent, this opens up another exploit: I could simply craft a payment ID (rather than using a fully random ID) designed to choose as many of my own SNs as possible.

I'm also concerned here by the use of payment IDs: if this is a payment ID included in the transaction then it is relying on a feature that is already deprecated by Monero and on the way out (even in its encrypted form) in favour of using vastly superior one-time subaddresses. But perhaps you just mean an internal payment ID rather than a transaction payment ID?
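To make the grinding concern concrete, here is a toy model (my own; the real sample selection algorithm is different) in which the auth sample is derived deterministically from the payment ID. An attacker who controls the payment ID can simply retry IDs until the sample contains several of their own SNs:

```python
import hashlib

def auth_sample(payment_id, supernodes, size=8):
    # Toy selection: rank supernodes by H(payment_id || sn_id), take the top 8.
    key = lambda sn: hashlib.sha256((payment_id + sn).encode()).hexdigest()
    return sorted(supernodes, key=key)[:size]

pool = [f"honest-{i}" for i in range(100)] + [f"mine-{i}" for i in range(8)]

# A random payment ID lands ~0.6 attacker nodes in the sample on average;
# grinding crafted IDs does much better.
best_hits = 0
for nonce in range(2000):
    sample = auth_sample(f"crafted-{nonce}", pool)
    best_hits = max(best_hits, sum(s.startswith("mine") for s in sample))
```

With only a couple of thousand tries the attacker reliably stacks several of its 8 nodes into the sample, and each extra node captures another share of the RTA fees.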

  4. Can potentially be restored on any graft node or supernode with the probability of supernode activity.

It is unclear to me what this means. If you mean that any supernode can obtain the same list given the same payment ID, then this is just point 1 again (and is not true because the list is not consistent). If it means that the SN sample can be verified by some other node then it is similarly wrong: there is neither the temporal data (which SNs were valid at block X?) nor the sample consistency that would be required to perform such verification.

Stake amount will be locked... remaining amount also may be locked. Is this okay?

When staking you are given this warning:

Stake amount will be locked. If you don't have exact amount for stake transaction, remaining amount also may be locked. Is this okay? (Y/Yes/N/No):

This is most definitely not ever acceptable. You need to apply the lock only to the stake amount; the change should have the standard 10-block lock.

A hosted API to pull Supernode stats

Stats to include: # of active SNs, # of SNs per tier, monthly ROI per tier. Returns a JSON object.
The API is needed for services like Masternodes.online to list GRAFT on their platform.
Thank you!
25k GRFT reward.

x86 virtualization (segmentation faults)

As of now graft-ng has two issues I've seen.

Core dumps from segmentation faults, and illegal-instruction crashes from the prebuilt binaries.

I believe both come from the code requiring specific instruction sets that VPSes don't have, or instruction sets that are being ignored in the compilation process when they might be needed.

This results in L2 or L3 cache not owned by the process being written to by a virtualized instruction, which results in a segmentation fault; the same issue results in an illegal instruction if the CPU doesn't support the instruction set.

When compiling you can compile non-natively, meaning the code will run on most or any CPUs. You also compile with Debug or Release, which controls optimization.

There needs to be a large number of core dumps uploaded from the people who are having them across x86 virtualized CPUs, and people need to specify what host they have when running it.

As of now I think the segmentation faults are coming from the CPU cache due to instruction sets being expected.

Disqualification Flow: part 1. Blockchain-based List Qualification Flow

Disqualification Flow: part 1. Blockchain-based List Qualification Flow (GNRTA-316):
Block n+1:
  • get the Blockchain-based List (BBL)
  • get hash(es) of previous Blockchain-based List(s)
  • create the BBQS and QCL from the BBL

Block n+2:
  • if the supernode is in the QCL, it forwards a "multicast" MRME (self-signed (SN ID | block_height)) to the BBQS

Block n+3:
  • if the SN is in the BBQS:
    • make a list of non-answered IDs: DL(n+1) = QCL(n+1) - (answered IDs)
    • forward a "multicast" MRME (DL(n+1)) to the BBQS

Block n+4:
  • if the SN is in the BBQS:
    • make a disqualification list with the signature set of each item (id, signs > 66% of BBQS)
    • create a disqualification transaction (type II)
    • send it to the blockchain

Block n+5 = (n+4) + 1

Handlers:

    a handler that validates and collects "ping" responses (if the SN is in the BBQS), active over [n+1..n+3).

    a handler that validates and collects DLs (if the SN is in the BBQS), active over [n+3..n+4).
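The list arithmetic in steps n+3 and n+4 can be sketched as follows. This is illustrative only; BBQS/QCL construction and MRME transport are out of scope, and the names follow the flow description rather than the actual code:

```python
def build_dl(qcl, answered_ids):
    """n+3: DL(n+1) = QCL(n+1) - (answered IDs)."""
    return set(qcl) - set(answered_ids)

def disqualified(dl_votes, bbqs_size):
    """n+4: keep only IDs whose signature set exceeds 66% of the BBQS."""
    threshold = 0.66 * bbqs_size
    return {sn for sn, signers in dl_votes.items() if len(signers) > threshold}

dl = build_dl({"sn1", "sn2", "sn3", "sn4"}, {"sn1", "sn3"})  # {"sn2", "sn4"}
votes = {"sn2": {"q1", "q2", "q3", "q4", "q5", "q6", "q7"},  # 7 of 9 signed
         "sn4": {"q1", "q2"}}                                # only 2 of 9
result = disqualified(votes, bbqs_size=9)                    # only "sn2"
```
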

git submodule update command

I just built the graft-ng project on a Ubuntu 18.04 system. Using the suggested command:

git submodule update --recursive

from the root directory of the project does not work for me to recursively download all the modules.

What did work was:

git submodule update --init --recursive

Is this a system-specific issue? If it is a broader issue, you might consider changing the installation guide.

Where is the source code for the stimulus package?

Please release the source code for the stimulus transaction mechanism so that we can tell that it is fair.

This doesn't have so much to do with trust as with finding bugs that may or may not happen: depending on certain unforeseen situations, certain nodes might get more of the stimulus.
#217
#225

cmake step failed because of googletest

[mbg033@ubuntu ~/.../build (development/v.0.0.1)$] cmake ..
-- The C compiler identification is GNU 5.4.0
-- The CXX compiler identification is GNU 5.4.0
-- Check for working C compiler: /usr/lib/ccache/cc
-- Check for working C compiler: /usr/lib/ccache/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/lib/ccache/c++
-- Check for working CXX compiler: /usr/lib/ccache/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - not found
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE  
==> Build tests section included
CMake Error at CMakeLists.txt:107 (add_subdirectory):
  The source directory

    /home/mbg033/dev/graft-project/graft-ng/build/googletest-src

  does not contain a CMakeLists.txt file.
-- Configuring incomplete, errors occurred!
See also "/home/mbg033/dev/graft-project/graft-ng/build/CMakeFiles/CMakeOutput.log".
See also "/home/mbg033/dev/graft-project/graft-ng/build/CMakeFiles/CMakeError.log".

Nicehash Attacks

As many of you know, GRAFT and numerous other projects are being regularly attacked using Nicehash (a service providing hash power on demand). The attackers' goal is to perform a sustained 51% attack to produce an alternative chain that includes their own double-spending transaction, and to withdraw the funds they've transferred to themselves on the alternative chain from an exchange before the chain returns to its valid state.

These attacks are unpleasant for several reasons: 1) exchanges have to raise the number of confirmations, leading to very long withdrawal and deposit times; 2) some legitimate transactions get stuck while being written to the alt-chain, and it takes days until they get reversed.

What we can do about it

It has been suggested that we implement a custom hash algorithm that would preclude attackers from using Nicehash to carry out these attacks. While this approach may seem easy and attractive, in reality it provides very little defense as both the blockchain and the mining software have to be updated in order to facilitate this defense, and there’s nothing other than demand that stops Nicehash from updating their mining software to include our patches. So the efficacy of this approach depends solely on “lowered attractiveness” to the Nicehash community, which is temporary at best but does carry a significant burden of extensive validation and testing, distracting the team from the other core tasks.

The other approach is what's referred to as ChainLocks: using the 2nd-layer network to help validate the blockchain. This approach is much more robust and is ultimately the answer to the issue, with the one condition that we need a robust 2nd-tier network to execute it, so we're looking at a couple of months out.

Conclusion

We're still studying this topic and analyzing what others have done in this regard to see whether there's a good interim solution before we can implement chain locking. We will keep the community posted on our progress, and we appreciate any constructive, researched input from the community.

Wallet address isn't updated from announces

If the supernode wallet is changed and the supernode restarted, other running supernodes ignore the changed wallet in announces, but new or restarted supernodes start using it.

This is a problem currently because people are starting up supernodes with a blank wallet, the other SNs see this announce, then the supernode gets restarted with the correct wallet but the other SNs never update the wallet and so are left with "" until they get restarted.
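The fix implied here is for supernodes to always honor the most recent announce. A minimal sketch of that update rule (illustrative only; the names are mine, not graft-ng's actual announce handler):

```python
# Keep the wallet address from the most recent announce instead of only
# recording it the first time a supernode is seen (illustrative sketch).
announces = {}  # sn_id -> (height, wallet_address)

def on_announce(sn_id, wallet_address, height):
    prev = announces.get(sn_id)
    if prev is None or height >= prev[0]:
        announces[sn_id] = (height, wallet_address)  # update, don't ignore

on_announce("sn1", "", 100)         # first announce with a blank wallet
on_announce("sn1", "F9Bc...", 101)  # restart with the correct wallet
wallet = announces["sn1"][1]        # "F9Bc...", no longer ""
```
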

The wallet address in the curl request response is truncated

When performing the curl request to receive the public identifier key, supernode public wallet address and signature from the supernode, the address is truncated by one digit.

The response is:
{"testnet":true,"wallet_public_address":"F9BcBsUbKNhSmJuKe3xnY1WMQWaViRFtvX4GYtNdUgWkAGPAVL7qbMb2NxX1xMmUeRgVtmf76dxESbpahf4utXEkSgqTCT","id_key":"ddac6273228c43939ee9203203f4d8ee6efa616410fb1b69e55924c9ac0d1e41","signature":"be8dc395a5f8cda78d8d1701ae5c3a4367028399ecb1d0f982699204f5ba2d004163a1ff8d6d7eac434c84741a7ef8506e300b993e25ea03fdf5396784702408"

While the actual address is:
F9BcBsUbKNhSmJuKe3xnY1WMQWaViRFtvX4GYtNdUgWkAGPAVL7qbMb2NxX1xMmUeRgVtmf76dxESbpahf4utXEkSgqTCTv

When performing the stake transaction the wallet gives an "invalid supernode signature" error, as expected.

The get request has been tested on local host and remote using curl, and in a web browser. The error is consistent in all cases.
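For reference, the two strings above differ by exactly one trailing character, which points to an off-by-one in whatever serializes the address rather than data corruption:

```python
# The address quoted from the API response, vs. the real address, which
# (per the report above) carries one extra trailing character.
returned = ("F9BcBsUbKNhSmJuKe3xnY1WMQWaViRFtvX4GYtNdUgWkAGPAVL7qbMb2"
            "NxX1xMmUeRgVtmf76dxESbpahf4utXEkSgqTCT")
actual = returned + "v"

missing = len(actual) - len(returned)    # 1: truncated by a single digit
prefix_ok = actual.startswith(returned)  # clean truncation, not corruption
```
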

[RFC 003 RTVF] RTA Transaction Validation Flow

Comments. Two major, a few smaller issues.

Privacy leakage.

This design leaks privacy to the PoS proxy, the auth sample, and the wallet proxy. To quote from https://www.graft.network/2018/11/21/how-graft-is-similar-to-and-at-the-same-time-different-from-visa-and-other-payment-card-networks-part-2/

This property is absolute privacy provided by GRAFT Network to both buyer and merchant. Unlike plastic cards and most cryptocurrencies, GRAFT’s sender address, recipient address, transaction amount, and transaction fee amount are invisible to everyone except for the sender and recipient themselves.

This design, however, does not accomplish that: the PoS proxy is able to identify all payments received by the PoS, and all SNs involved in the transaction see the amount sent (even if they can't see the recipient address).

A cryptocurrency that is only private as long as you have to trust a single party (the PoS proxy) is no longer a privacy coin.

But it gets worse: from the description in the RFC it is possible for various network participants other than the receiving and paying wallets to get "serialized payment data" which consists of "serialized payment data – list of purchased items, price and amount of each item, etc.".

So, to summarize the privacy leaks that seem to be here:

  • the PoS proxy SN sees the recipient wallet address, the total amount, and individual items purchased including the amount of each item.
  • auth sample SNs see the total amount including the amount received by the proxy PoS
  • wallet proxy SN plus, apparently, any SN can get an itemized list of the transaction

Other comments

  • this design has no protection against a selfish mining double-spending attack. Unlike a double-spending attack against an exchange, double-spending here does not have to reach any minimum number of confirmations; and can be timed (with a little effort) to not even require 51% of the network. (I pointed this out just over two months ago in the public JIRA with details of how to carry out an attack and a demo but the issue has had no response).

(Point 4, "Regular key image checking (double-spend checking)", does nothing against the above attack: the key image isn't spent on the network visible to the SNs until the private block is released.)

  • The PoS <-> PoS proxy SN communication layer should be encrypted so that the PoS can verify it is talking to the expected party (since the PoS in this design has to be trusted with all RTA payment data). This should require HTTPS (with certificate validation enabled), or something similar, both to encrypt the data against MITM snooping and, importantly, to prevent someone from spoofing the PoS proxy connection to send false authorization updates back to the PoS.
  1. Each supernode from auth sample and PoS Proxy Supernode ...

There is a huge amount of complexity added here for little apparent reason. You set the success/failure conditions at 6/3 replies so that you can have a consistent consensus among the SNs, which I understand, but you don't need this success/failure consensus when you have a single party that is in charge: the PoS proxy.

If you simply changed the rules so that the PoS proxy is always the one to distribute the block, you would simplify the traffic (SN auth sample results can be unicast to the PoS proxy, and the payment success can simply be a state variable that never needs to be broadcast over the network), but more importantly you would allow a 6/1 success/failure trigger without incurring any consistency problem.

ii. Transaction considered to be rejected in the case at least 3 out of 8 auth sample members or PoS Proxy rejected it.

Allowing 2 failures is a recipe for fee cheating: hack your wallet to reduce two of the eight SN fees to zero (or just leave them out) in every transaction to give yourself a small rebate.
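The size of that rebate is easy to quantify, assuming the auth fee is split evenly across the 8 sample members (an assumption on my part; the actual fee split is defined elsewhere):

```python
AUTH_SAMPLE_SIZE = 8
TOLERATED_FAILURES = 2  # 6-of-8 still approves the transaction

def rebate_fraction(skipped=TOLERATED_FAILURES):
    """Fraction of the auth fee saved by zeroing out `skipped` SN fees."""
    return skipped / AUTH_SAMPLE_SIZE

saving = rebate_fraction()  # 0.25: a 25% discount on every auth fee paid
```
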

iii. When any auth sample supernode or PoS Proxy Supernode gets in:

What happens if there are 5 successes, 2 failures, and one timeout?

Graftnode that handles RTA transaction validates:
i. Correctness of the selected auth sample;

Which is done how, exactly? In particular, how much deviation from what it thinks is correct will it allow? This needs to be specified.

  1. Once the graftnode accepts the transaction, supernode, which submitted it to the cryptonode, broadcasts successful pay status over the network

Why is this needed at all? Success can already be seen (and is already transmitted across the network) by the fact that the transaction enters the mempool. Can't the wallet just check for that instead?

This design is non-trustless!

This design puts far too much centralized control in the hands of the proxy SN. The design here puts this single node as RTA transaction gatekeeper, with the possibility to lie to the PoS about transaction validity—a lie here could be deliberate, or could be because the proxy SN in use was hacked. This is not how a decentralized cryptocurrency should work: it needs to be possible to trust no one on the network and yet have the network still work.

A non-trustless design like this should be a non-starter.

Supernode error net.http

I am currently staking with a T1 SuperNode.
The log file where I send the supernode's logs is constantly filled with the following error and warning:

2019-03-27 11:57:37.249	    7ff856881700	WARN 	net	modules/cryptonode/contrib/epee/include/net/net_helper.h:367	Some problems at connect, message: Connection refused
2019-03-27 11:57:37.249	    7ff856881700	ERROR	net.http	src/rta/DaemonRpcClient.cpp:205	/json_rpc/rta/send_supernode_blockchain_based_list error

I doubt this is okay.

RTA Flow: StorePaymentData, GetPaymentData

Problems with compiling on Ubuntu 18.04

I'm getting this error while compiling:

[ 1%] Building CXX object CMakeFiles/graftlet_lib.dir/src/lib/graft/GraftletLoader.cpp.o
In file included from /home/graft/graft-ng/include/lib/graft/GraftletLoader.h:23:0,
from /home/graft/graft-ng/src/lib/graft/GraftletLoader.cpp:26:
/home/graft/graft-ng/include/lib/graft/IGraftlet.h:10:15: fatal error: any: No such file or directory
compilation terminated.
CMakeFiles/graftlet_lib.dir/build.make:62: recipe for target 'CMakeFiles/graftlet_lib.dir/src/lib/graft/GraftletLoader.cpp.o' failed
make[2]: *** [CMakeFiles/graftlet_lib.dir/src/lib/graft/GraftletLoader.cpp.o] Error 1
CMakeFiles/Makefile2:225: recipe for target 'CMakeFiles/graftlet_lib.dir/all' failed
make[1]: *** [CMakeFiles/graftlet_lib.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2

The needed file is right there; I checked it.
I use CMake version 3.13.3. Could this be the cause of the problem, or is it a bug?
Thanks for helping...

CMake 3.14.0 fails to build.

System is Ubuntu 18.04 LTS.
I downloaded CMake 3.14.0 from their Git, already compiled, and created a symlink from /usr/bin/cmake to /opt/cmake/bin/cmake.

When running cmake .. -DOPT_BUILD_TESTS=ON, it fails with the following:

root@ku4eto-18:/opt/graft-ng/build-test# cmake .. -DOPT_BUILD_TESTS=ON
==> The configuration is RelWithDebInfo. Debug info will be extracted into separate files.
-- The C compiler identification is GNU 7.3.0
-- The CXX compiler identification is GNU 7.3.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - not found
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE  
CMake Error at CMakeLists.txt:14 (add_custom_command):
  TARGET 'Git::Git' is IMPORTED and does not build here.
Call Stack (most recent call first):
  /opt/cmake/share/cmake-3.14/Modules/FindGit.cmake:90 (add_executable)
  version.cmake:8 (find_package)
  CMakeLists.txt:51 (include)


-- Found Git: /usr/bin/git
-- Found Boost Version: 106501
-- Found Boost Version: 106501
==> Test graftlets included
==> Build tests section included
-- Configuring incomplete, errors occurred!
See also "/opt/graft-ng/build-test/CMakeFiles/CMakeOutput.log".
See also "/opt/graft-ng/build-test/CMakeFiles/CMakeError.log".

Compiling with the package provided CMake 3.10.2 or the manually downloaded 3.11.0 is successful.

Wrong instructions for Test Suite

The instructions for including the test suite:

cmake <project root> -DOPT_BUILD_TEST

are incorrect.

The proper syntax is actually as follows:

cmake <project root> -DOPT_BUILD_TESTS=ON

Pre-compiled Ubuntu 18.04 binaries do not work

As the title says.

I have a VM on VirtualBox running Ubuntu 18.04.

The moment I try to start ./supernode, it terminates with Illegal instruction (core dumped).

Any ideas on that?

Stake submission allows a multi-tx stake to be sent, which breaks.

[wallet FCJasi]: stake_transfer FCJasiDtSZCFyPDUXeCuN3EhVmagA1h9KFMLLMHfCxCSLeZeUmPQG1QJ8YDa7mucXYVzkFZZtNNzXcVKJytfu5GfDmD3LYz 50000 100 63
26d5e5ed117e03fddfc2e9a69c6643438c33914e045802edae91bd0a729ac3 35117a05019d17e37d28d4a2fb3fa80d863917434ae2a30675a626b7f303310a1b71a4aa67d1f
2e2a7554c8793e178d94f7340c8dbacb5691b08e7bc0dc0e701
Wallet password:
Stake amount will be locked. If you don't have exact amount for stake transaction, remaining amount also may be locked. Is this okay?  (Y/Yes/N/No): y
There is currently a 2 block backlog at that fee level. Is this okay?  (Y/Yes/N/No)y
tx type: 0
tx type: 0
Sending 50000.0000000000.  Your transaction needs to be split into 2 transactions.  This will result in a transaction fee being applied to each transaction, for a total fee of 2.4224525280
Is this okay?  (Y/Yes/N/No): y
Transaction successfully submitted, transaction <c636abb929d24d0d614538b616387632ec2d4004eb2c4f4391c98a9ec62f84ed>
You can check its status by using the `show_transfers` command.
Transaction successfully submitted, transaction <782a7f6589ad2b8b903b80c67554abe8dc705b0915b6f0781ea5999290aac879>
You can check its status by using the `show_transfers` command.

then in the node:

Mar 07 11:32:22 keynes graftnoded.alpha[4103195]: 2019-03-07 15:32:22.577	[P2P5]	WARN 	global	src/cryptonote_core/stake_transaction_processor.cpp:195	Ignore stake transaction at block #274657, tx_hash=<c636abb929d24d0d614538b616387632ec2d4004eb2c4f4391c98a9ec62f84ed>, supernode_public_id '6326d5e5ed117e03fddfc2e9a69c6643438c33914e045802edae91bd0a729ac3' because amount 8716.37 is less than minimum required 50000
...
Mar 07 11:41:01 keynes graftnoded.alpha[4103195]: 2019-03-07 15:41:01.270	[P2P9]	WARN 	global	src/cryptonote_core/stake_transaction_processor.cpp:195	Ignore stake transaction at block #274659, tx_hash=<782a7f6589ad2b8b903b80c67554abe8dc705b0915b6f0781ea5999290aac879>, supernode_public_id '6326d5e5ed117e03fddfc2e9a69c6643438c33914e045802edae91bd0a729ac3' because amount 41392.3 is less than minimum required 50000

You either need to accumulate these and accept the stake once enough has been accumulated, or make the stake command fail (with a message about needing to do a sweep_all first) if the stake ends up requiring two txes.
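The accumulation option can be sketched like this (illustrative only; the real stake_transaction_processor logic differs), using the two partial amounts from the log above:

```python
from collections import defaultdict

MIN_STAKE = 50000  # minimum required stake, per the log messages above

class StakeAccumulator:
    """Accumulate partial stake txes per supernode instead of rejecting them."""
    def __init__(self):
        self.pending = defaultdict(float)  # supernode_public_id -> total

    def add(self, supernode_id, amount):
        # Returns True once the accumulated total reaches the minimum stake.
        self.pending[supernode_id] += amount
        return self.pending[supernode_id] >= MIN_STAKE

acc = StakeAccumulator()
sn = "6326d5e5ed117e03fddfc2e9a69c6643438c33914e045802edae91bd0a729ac3"
first = acc.add(sn, 8716.37)   # below the minimum: keep pending
second = acc.add(sn, 41392.3)  # total 50108.67: stake can be accepted
```
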

RTA Flow: RTA Transaction Validation by Proxy Supernodes

RTA Transaction Validation by Proxy Supernodes (part of Authorize RTA Tx)

When PoS or Wallet Proxy Supernode receives RTA transaction data for RTA validation, it performs the following operations:

  • Decrypts received RTA transaction data:
    • Decrypts the message key using the PoS or Wallet Proxy Supernode private identification key.
    • Decrypts the RTA transaction and the transaction private key using the message key.
  • Checks the service fee, using the transaction private key and the proxy supernode public wallet address, based on the Monero Prove Payment mechanism.

For more details see: https://github.com/graft-project/DesignDocuments/blob/master/RFCs/%5BRFC-003-RTVF%5D-RTA-Transaction-Validation-Flow.md#rta-transaction-validation-by-proxy-supernodes

RTA Flow: Modify Pay according to new design

RTA Flow: Modify Pay according to new design (GNRTA-228)

Once the Wallet Proxy Supernode receives the wallet Pay request with the encrypted transaction blob and transaction private key, it multicasts the encrypted transaction, the encrypted transaction private key, and the encrypted message keys to the auth sample supernodes and proxy supernodes.

Pay Request structure description: https://github.com/graft-project/GraftDocuments/blob/master/API/%5BAPI-001-SCA%5D%20Supernode%20RTA%20and%20Cryptonode%20RTA%20APIs.md#pay---process-payment

For more details see: https://github.com/graft-project/DesignDocuments/blob/master/RFCs/%5BRFC-003-RTVF%5D-RTA-Transaction-Validation-Flow.md

RTA Flow: RTA Transaction Validation by Auth Sample Supernode

RTA Transaction Validation by Auth Sample Supernode (part of Authorize RTA Tx)

When a supernode in auth sample receives RTA transaction data for RTA validation, it performs the following operations:

  • Decrypts received RTA transaction data:
    • Decrypts message key using its supernode private identification key.
    • Decrypts RTA transaction and transaction private key using message key.
  • Checks the correctness of the selected auth sample using the payment block hash and the RTA payment ID.
  • Validates transaction key images (double-spend check) against the blockchain, the transaction pool, and the list of RTA transactions currently being processed on the supernode.
  • Checks the validation fee using the transaction private key and its supernode public wallet address, based on the Monero Prove Payment mechanism.

For more details see: https://github.com/graft-project/DesignDocuments/blob/master/RFCs/%5BRFC-003-RTVF%5D-RTA-Transaction-Validation-Flow.md#rta-transaction-validation-byauth-sample-supernode

graft_server attempts to validate wrong SN wallet address.

[No modifications made to config.ini]

On the first run, graft_server generates the stake-wallet files in ~/.graft/supernode/data/stake-wallet. After opening the wallet, obtaining the address and seed, and then loading the wallet with a stake, re-running the server shows it validating SN holdings for an entirely different wallet address than the one it generated, so the node is never staked and an error is thrown.

graft-ng license

graft-ng doesn't appear to have a license anywhere, which needs to be addressed.

Moreover, it relies on mongoose, which is GPL v2 licensed (not v2 or later)—the terms of which require graft-ng to be distributed under the GPL v2 (or else the mongoose dependency needs to be dropped).

SIGHUP exit code does not seem correct

OS: Ubuntu 18.04

Using the supernode as a systemd service. Upon sending a SIGHUP, the process actually fails to reload.

root@graftbox:/opt/graft-node-builded# systemctl reload GraftSuperNode
root@graftbox:/opt/graft-node-builded# systemctl status GraftSuperNode
● GraftSuperNode.service - Graft SuperNode
   Loaded: loaded (/etc/systemd/system/GraftSuperNode.service; enabled; vendor preset: enabled)
   Active: activating (auto-restart) (Result: core-dump) since Thu 2019-03-28 11:54:30 EET; 976ms ago
  Process: 13580 ExecReload=/bin/kill -HUP $MAINPID (code=exited, status=0/SUCCESS)
  Process: 13564 ExecStart=/usr/bin/authbind --deep /opt/graft-node-builded/supernode --log-file /var/log/graft/supernode.log (code=dumped, signal=ABRT)
 Main PID: 13564 (code=dumped, signal=ABRT)

root@graftbox:/opt/graft-node-builded# systemctl status GraftSuperNode
● GraftSuperNode.service - Graft SuperNode
   Loaded: loaded (/etc/systemd/system/GraftSuperNode.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2019-03-28 11:54:45 EET; 5min ago
  Process: 13580 ExecReload=/bin/kill -HUP $MAINPID (code=exited, status=0/SUCCESS)
 Main PID: 13683 (supernode)
    Tasks: 9 (limit: 2339)
   CGroup: /system.slice/GraftSuperNode.service
           └─13683 /opt/graft-node-builded/supernode --log-file /var/log/graft/supernode.log

It changes the main process PID.

The systemd file is as follows:

[Unit]
Description=Graft SuperNode

[Service]
Type=simple
User=graftbox
WorkingDirectory=/opt/graft-node-builded/
ExecStart=/usr/bin/authbind --deep /opt/graft-node-builded/supernode --log-file /var/log/graft/supernode.log
ExecReload=/bin/kill -HUP $MAINPID
TimeoutStopSec=15
KillSignal=SIGINT
Restart=on-failure
RestartSec=15

[Install]
WantedBy=multi-user.target

As you can see, I am using ExecReload=/bin/kill -HUP $MAINPID

The thing that worries me is what the main process returns.

  Process: 13564 ExecStart=/usr/bin/authbind --deep /opt/graft-node-builded/supernode --log-file /var/log/graft/supernode.log (code=dumped, signal=ABRT)
 Main PID: 13564 (code=dumped, signal=ABRT)

Seems like it performs an ExecStart again, and thus leaves a code=dumped. Meaning, it's probably not handling the signal gracefully.
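For reference, the behaviour ExecReload=/bin/kill -HUP assumes is that the main process catches SIGHUP and reloads in place rather than dying. A minimal illustrative sketch of that pattern (the names here are mine, not taken from the supernode source):

```python
# Sketch of a HUP-safe reload, the behaviour `systemctl reload` expects.
# Illustrative only; this is not the supernode's actual code.
import os
import signal

reload_requested = False

def on_sighup(signum, frame):
    # Only flag the reload here; re-read the config in the main loop,
    # since signal handlers should do as little work as possible.
    global reload_requested
    reload_requested = True

signal.signal(signal.SIGHUP, on_sighup)

# Simulate `systemctl reload` sending SIGHUP to the main PID:
os.kill(os.getpid(), signal.SIGHUP)
assert reload_requested          # the process survived and flagged a reload
print("reload flagged, process still running")
```

A process that instead crashes on SIGHUP shows up in systemd exactly as above: the ExecStart process dies (code=dumped) and Restart=on-failure brings up a new main PID.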

[RFC 001 GSD] General Supernode Design

Some comments:

The supernode charges the clients an optional fee for this activity.

Optional?

Upon start, each supernode should be given a public wallet address that is used to collect service fees and may be a receiver of a stake transaction.

What is the point of this? That receiving wallet is already included in the registration transaction on the blockchain; I don't see why the supernode needs to have a wallet (even just the wallet address) manually configured at all rather than just picking it up from the registration transaction.

The supernode must regenerate the key pair per each stake renewal.

This is, as I have mentioned before, a very odd requirement. It adds some (small) extra work on the part of the operator, and it would seem to make it impossible to verify when a SN is being renewed rather than newly registered (and thus not double-counted if it is both renewed and in the "overhang" period). It also means that as soon as a SN stake is renewed (thus changing the key) any RTA requests that still use the old key simply won't be received by the SN in question. In theory, you could make the SN keep both keys, but this raises the obvious question of: Why bother? In #176 you wrote:

You asked why we did not declare permanent supernode identification keypair. The main reason was that we didn't see any reason to make it permanent. The temporal keypair is enough for our goals and regeneration of this key won't create large overwork during stake renewal. And yes, the lifespan of this key pair will be equal to the stake period and during stake renewal supernode owner also need to update it. If someone wants to build a tracking system, they can do it anyway.

I carefully counted the number of benefits of mandatory regeneration provided in this description: 0. So it has zero benefits and more than zero drawbacks. So why is it here?

Not storing any wallet related private information on supernode is a more secure approach, but it doesn't allow automatic re-staking.

Why not? Other coins are able to implement automatic renewal without requiring a password-unprotected wallet or having the wallet on a service node; what part of the Graft design prevents Graft from doing what other coins have done?

Stake transaction must include the following data:

  • the receiver of this transaction must be supernode's public wallet address;
    ...
  • tx_extra must contain supernode public wallet address;

This is a minor point, but it isn't entirely clear why this is required: you could simply include both a recipient wallet address and a reward recipient wallet to allow the possibility of wallet A to submit a stake with rewards going to wallet B, which seems like it could be useful.

TRP determines the number of blocks during which supernode is allowed to participate in RTA validation even if it has no locked stake. If during TRP supernode owner doesn't renew its stake transaction, the supernode will be removed from active supernode list and will not be able to participate in RTA validation.

And how, exactly, will you determine that the SN has been renewed since it won't have the old stake's pubkey anymore?

The mechanism of periodic announcements has, therefore, a two-fold purpose:

  1. make the best effort to deliver current status to all supernodes in the network without releasing the sender's IP to the whole network;

Verifying uptime is fine. The design, however, of including incrementing hop counts makes it almost trivial to find the IP of any SN (or, at least, the graftnoded that the SN is connected to).

  2. build reliable communication channels between any two active supernodes in the network without releasing IPs of the participants, while producing minimal traffic overhead.

It may reduce traffic somewhat, but at the cost of a massive stream of frequent periodic announce traffic that is almost certain to vastly eclipse any savings. A simple back-of-the-envelope calculation:

A = 2000 active service nodes (each of which a node will receive an announce for)
B = 1000 bytes per announce
R = 1440 announces per day (= 1 announce per minute)
N = 50 p2p connections typical for a mainnet node

A * B * R * N = 144 GB of traffic per day both uploaded *and* downloaded just to transmit announces across the network.

And this isn't just incurred by supernodes, this is incurred by all network nodes. Even if you decrease the announcement rate to 1 announce every 10 minutes you are still looking at 14GB/day of announcement traffic both uploaded and downloaded which applies to ordinary network nodes.
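The arithmetic above is easy to check, using the figures assumed in the text:

```python
# Announce traffic back-of-the-envelope, using the figures from the text.
active_sns = 2000        # A: active service nodes announced per round
announce_bytes = 1000    # B: size of one announce
rounds_per_day = 1440    # R: one announce per minute
p2p_peers = 50           # N: typical mainnet p2p connection count

daily_bytes = active_sns * announce_bytes * rounds_per_day * p2p_peers
print(daily_bytes / 1e9, "GB/day")        # 144.0 GB/day at 1 announce/minute
print(daily_bytes / 10 / 1e9, "GB/day")   # 14.4 GB/day at 1 announce/10 minutes
```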

This is not a design that can be said to incur only "minimal traffic overhead".

RTA validation participants may use encrypted messages.

"may"?

Multiple Recipients Message Encryption

This whole feature seems rather pointless. Multicast messages are going to have to be transmitted much more broadly than unicast messages: you can't just send them along the best three paths, as proposed for unicast messages, because each recipient is highly likely to have a completely different best three paths. This multicast approach doesn't seem to save anything compared to simply sending 8 unicast messages (and then simplifying the code by dropping multicast support if there are no remaining cases for it). There is potential for optimization here: you could use protocol pipelining to send all the unicast messages at once. The complexity proposed for encrypted multicast messages, however, seems to have little benefit.

Authorization Sample Selection Algorithm

https://github.com/graft-project/graft-ng/wiki/%5BDesign%5D-Authorization-Sample-Selection-Algorithm comments on the design of the supernode sample selection. I have some comments/questions about the algorithm.

Most importantly, I have to ask: why this approach instead of some other approach?

I see some downsides that I'll get into, but this RFC (and the others) feel like they are simply describing what is being done rather than why it was chosen or is needed. I can guess some of that, of course, but it would be quite valuable to have it written down why this aspect of the design was chosen to be the way it is.

What the algorithm describes is effectively uniform random sampling done in a deterministic way via a recent block hash and supernode public keys (whether the wallet public keys via the wallet address, or using a separate SN-specific public key as I suggest in #176 (comment) doesn't really matter).
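For concreteness, one common construction for this kind of deterministic sampling (a sketch of the general technique, not necessarily Graft's exact formula) ranks candidates by hashing the block hash together with each public key:

```python
# Deterministic sampling sketch: every node that knows the block hash
# and the SN list derives the same sample with no coordination.
# Illustrative only; Graft's actual formula may differ.
import hashlib

def select_sample(block_hash, pubkeys, k):
    # Rank candidates by H(block_hash || pubkey) and take the lowest k.
    return sorted(pubkeys,
                  key=lambda pk: hashlib.sha256(block_hash + pk).digest())[:k]

pubkeys = [bytes([i]) * 32 for i in range(20)]     # toy stand-ins for SN keys
sample = select_sample(b"\x11" * 32, pubkeys, 8)

assert len(sample) == 8
# Deterministic: the same inputs give the same sample everywhere.
assert sample == select_sample(b"\x11" * 32, pubkeys, 8)
```

Because each block hash reshuffles the ranking, the selection is effectively uniform random over the candidate set, which is exactly what produces the reward variance discussed below.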

The big problem I see with this approach is this:

Uniform random sampling leads to an enormously variable distribution of SN rewards.

Assuming a (long run) 50% supernode lock-in, with about 50% of that going into T1 supernodes, we get somewhere around 9000 T1 supernodes expected on the network (once near maximum supply).

Thus, with this pure random selection formula, each T1 supernode would have a probability of 1 - (8999/9000)^2 (approximately 0.000222) of being selected in any block.

This in turn implies that there is only about a 14.7% chance of getting selected into the auth sample for at least one block in a day, and only a 67.4% chance of getting at least one auth sample entry in a week.

If your SN is online for 2 weeks, you still have a slightly more than 10% chance of never having been in the auth sample, and a 3.5% chance of never having been in it after having your SN up for 3 weeks.

When considering getting into the auth sample at least twice, the numbers are worse:

  • 1.1% chance of getting 2+ auth samples in a day
  • 30% chance of getting 2+ auth samples in a week
  • 65.5% chance of getting 2+ auth samples in 2 weeks
  • 95% chance of getting 2+ auth samples in a month
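The selection odds quoted above can be reproduced with a short binomial calculation, assuming (my assumption, since it isn't pinned down here) a 2-minute block time, i.e. 720 blocks per day:

```python
# Reproduce the auth-sample selection odds quoted above.
# Assumption: 2-minute blocks, i.e. 720 blocks/day.
import math

p = 1 - (8999 / 9000) ** 2        # per-block selection probability, ~0.000222
DAY = 720                          # blocks per day (assumed)

def p_at_least(k_min, n_blocks):
    """P(selected >= k_min times) for Binomial(n_blocks, p)."""
    return 1 - sum(math.comb(n_blocks, k) * p**k * (1 - p)**(n_blocks - k)
                   for k in range(k_min))

print(round(p_at_least(1, DAY), 3))        # 0.148 -> the ~14.7%/day figure
print(round(p_at_least(1, 7 * DAY), 3))    # 0.674 -> the 67.4%/week figure
print(round(p_at_least(2, 7 * DAY), 3))    # 0.308 -> the ~30% twice-a-week figure
print(round(p_at_least(2, 30 * DAY), 3))   # 0.952 -> the ~95% twice-a-month figure
```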

When you also factor in the exponential distribution of block times, things look worse still:

  • 1.4% get less than 15 seconds of auth sample time per month
  • 2.0% get between 15 and 60 seconds of auth sample time per month
  • 3.9% get [1,2) minutes/month
  • 5.1% get [2,3) minutes/month
  • 6.0% get [3,4) minutes/month
  • 6.6% get [4,5) minutes/month
  • 7.0%, 7.0%, 6.9%, 6.6%, 6.2% get [5,6), [6,7), [7,8), [8,9), [9,10) minutes/month
  • 5.7, 5.2, 4.7, 4.0, 3.6, 3.1, 2.6, 2.2, 1.9, 1.6% for [10,11) through [19,20)
  • 5.9% get 20-30 minutes of auth time per month
  • 0.6% get more than 30 minutes of auth time per month
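The minutes-per-month figures can be sanity-checked with a quick Monte Carlo, under my assumptions of a 2-minute mean block time with exponentially distributed intervals and a 30-day month (a Poisson approximation to the binomial selection count is used for speed):

```python
# Monte Carlo sketch of the monthly auth-time distribution.
# Assumptions (mine): exponential block times with a 2-minute mean,
# 21600 blocks per 30-day month, Poisson-approximated selection counts.
import math
import random

random.seed(7)
p = 1 - (8999 / 9000) ** 2        # per-block selection probability
lam = 21600 * p                    # expected selections per month, ~4.8

def poisson(lam):
    # Knuth's multiplicative inversion; fine for small lambda.
    threshold, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= random.random()
        if prod <= threshold:
            return k
        k += 1

def monthly_auth_minutes():
    # Sum an exponential (mean 2 min) duration per selected block.
    return sum(random.expovariate(0.5) for _ in range(poisson(lam)))

samples = [monthly_auth_minutes() for _ in range(50_000)]
mean = sum(samples) / len(samples)
under_a_minute = sum(s < 1 for s in samples) / len(samples)
print(round(mean, 1))             # close to the 9.6-minute expectation
print(round(under_a_minute, 3))   # roughly the ~3.4% under-a-minute total above
```

The spread, not the mean, is the point: with a ~9.6-minute expectation, a large fraction of SNs land well below or well above it in any given month.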

If we then consider RTA earnings, the distribution becomes considerably more unequal still because of variation in the timing and amounts being spent. The above represents a "best case" distribution where RTA payment amounts are constant, very frequent, and perfectly spread out over time.

I've deliberately chosen a 30-day timescale above because I believe that it is about as far as one can reasonably go while thinking that rewards will "average out." As you can see above, though, they aren't averaging out in a reasonable time frame: even if RTA traffic were perfectly spread over time and for a constant amount, the top 10% of tier-1 SNs (ranked by auth sample time) would earn seven times what the bottom 10% earns.

This sort of risk in reward distribution seems undesirable for potential SN operators and is likely to create a strong motivation for SN pooling, thus inducing centralization on the SN side of the network in the same way we currently have centralization among mining pool operators.

In Dash there is some randomness to MN selection, but it is strongly biased towards being a much fairer distribution: there is a random selection only from MNs that have not been one of the last 90% of MNs to earn a reward. Unlike Graft, the reward is simply a portion of the block reward, so there is no extra time-dependent or transaction volume-dependent components to further spread out the distribution. Loki is similar, but perfectly fair: SNs enter a queue and receive a payment when they reach the top.

One key distinction of Graft compared to both Dash and Loki, however, is that MN/SN sample selection in Dash/Loki is completely independent of MN/SN rewards. In Loki, for example, there are performance metrics that a SN must satisfy or risk being deregistered (and thus losing rewards until the stake expires). Dash, similarly, requires that MNs participate in network operations to stay active, foregoing any reward potential if they fail a network test and become inactive.

Neither of these are directly applicable to Graft, given the percentage nature of fees, but I feel that given the highly erratic nature of SN rewards that I laid out above this needs to be addressed. Either a change to improve the fairness of SN rewards, or at least a solid explanation of why a fairer distribution of earnings isn't feasible.

Just to throw out a couple of ideas for discussion:

  • have 5 queues (one queue for each tier plus a proxy SN queue). Require that 0.5% of all RTA payments be burned, then remint some fraction (say 0.1%) of all outstanding burnt, non-reminted fees in each block and send an equal portion to the SN at top of each queue, returning that SN to the bottom of its queue. Use network-assessed performance requirements to deregister (via a quorum) any SN with poor performance.

  • Use 5 queues, as above, but just drop the RTA fee entirely and instead award SNs a constant fraction of the block reward (say 50%), combined with a meaningful tail emission (this could be one that declines over time until it hits a fixed level, or just a switch to an outright fixed emission level).
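To make the first idea concrete, here is a toy sketch of the tiered-queue rotation (the tier names, stand-in SN IDs, and equal split are all illustrative, and the burn/remint mechanics are omitted):

```python
# Toy sketch of the tiered payment-queue idea: one FIFO per tier plus a
# proxy queue; each block, the SN at the head of each queue is paid an
# equal share of the reminted fee pool and cycles to the back.
from collections import deque

queues = {tier: deque(f"{tier}-sn{i}" for i in range(3))
          for tier in ("t1", "t2", "t3", "t4", "proxy")}

def pay_block(reminted_fee_pool):
    share = reminted_fee_pool / len(queues)   # equal portion per queue
    payouts = {}
    for q in queues.values():
        winner = q.popleft()                  # SN at the top of the queue
        payouts[winner] = share
        q.append(winner)                      # back to the bottom of its queue
    return payouts

first = pay_block(10.0)
print(first)                        # each queue's head SN receives 2.0
second = pay_block(10.0)
assert set(first) != set(second)    # rotation: new heads the next block
```

A deregistration quorum for poor performers would sit alongside this, simply removing an SN from its queue rather than changing the payout logic.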

Supernodes' lists of online supernodes are highly inconsistent

The bot in the RTA alpha channel monitors several supernodes to get a list of what the network considers to be "online" supernodes: that is, remote supernodes for which the local supernode has received an announce within the past hour.

Here's a recent snapshot from about an hour ago:
Jasi: 47 💓, 6 💔, 1 🛑 (11/18/18) [9-9-10-19] 🔗
JasL: 47 💓, 11 💔, 65 🛑 (11/18/18) [9-9-10-19] 🔗
ionse: 44 💓, 9 💔, 23 🛑 (40/3/1) [9-9-8-18] 🔗
dev1: 26 💓, 12 💔, 76 🛑 (22/3/1) [4-5-6-11] 🔗
dev2: 40 💓, 0 💔, 91 🛑 (37/3/0) [8-8-7-17] 🔗
dev3: 25 💓, 12 💔, 65 🛑 (23/1/1) [8-4-4-9] 🔗
dev4: 23 💓, 12 💔, 118 🛑 (21/1/1) [6-4-5-8] 🔗
Legend: 💓=online (announce within last hour); 💔=expired; 🛑=offline; (a/b/c)=2m/10m/1h uptime counts; [w-x-y-z]=[t₁]-…-[t₄] counts

Note the large inconsistencies across different nodes on the network (the only two that are the same are Jasi and JasL, because they are attached to the exact same node).

Here's another snapshot from just now:
Jasi: 46 💓, 7 💔, 1 🛑 (10/28/8) [8-9-9-19] 🔗
JasL: 47 💓, 12 💔, 65 🛑 (11/28/8) [8-10-9-19] 🔗
ionse: 46 💓, 9 💔, 23 🛑 (43/2/1) [8-10-9-18] 🔗
dev1: 42 💓, 12 💔, 76 🛑 (41/1/0) [6-9-8-19] 🔗
dev2: 47 💓, 0 💔, 90 🛑 (44/2/1) [8-9-10-19] 🔗
dev3: 29 💓, 12 💔, 65 🛑 (28/1/0) [4-6-7-12] 🔗
dev4: 40 💓, 12 💔, 118 🛑 (40/0/0) [6-8-8-18] 🔗
Legend: 💓=online; 💔=expired; 🛑=offline; (a/b/c)=2m/10m/1h uptime counts; [w-x-y-z]=[t₁]-…-[t₄] counts

It's extremely rare to actually see all 7 sampled nodes with an identical view of online supernodes.

This seems likely caused by the change in alpha 4.2 to randomly eliminate (N-1)/N of announces. There wasn't 100% consistency before, but it was far more likely to be close across supernodes than it is now.

deb package

It would be good to create a Debian package. Better to use graftnoded's package as a dependency (once we have it).

Unstaked supernodes shouldn't announce, and their messages shouldn't be forwarded

Currently, every time a supernode starts up it begins announcing itself to the network, and the announcement is propagated through the network. This is undesirable: it has no benefit to the network, but opens up a potential DDoS vector by letting someone spam announces without having any collateral at all.

Supernodes should not send announces, and nodes should not forward messages for supernodes without an active stake on the blockchain.

BUG: supernode writes to syslog, even when using --log-file (when run as service)

As the title says.

Using Ubuntu 18.04.
Both graftnoded and supernode are running as systemd services.
graftnoded is using Type=forking, which lets it run without also dumping all of its output to syslog.

The same cannot be said for supernode. It runs with Type=simple, and it outputs EVERYTHING that the log level allows to syslog. Using --log-file (or logfile in config.ini) sends the output to both the --log-file location AND syslog.

I tried poking around src/supernode/server.cpp, but I can't seem to find why it still sends to syslog.
