
go-nebulas's Introduction

go-nebulas

Official Go implementation of the Nebulas protocol. The current version is 2.0, also called Nebulas Nova.


For the roadmap of Nebulas, please visit the roadmap page.

For more information about the Nebulas protocol and its design documents, please refer to our wiki.

TestNet is released, please check here for more details.

Mainnet is released, please check here for more details.

Building from source

Prerequisites

Components | Version | Description
Golang | >= 1.12 | The Go Programming Language

Build

Check out the repo.

git clone https://github.com/nebulasio/go-nebulas.git

The project is under active development. New users may want to check out and use the stable mainnet release on the master branch.

cd github.com/nebulasio/go-nebulas
git checkout master

Or use the stable testnet release on the testnet branch.

git checkout testnet

Install native libs.

Nebulas needs two native libraries, NVM and NBRE. We provide stable versions of both. Run the setup script to install them:

cd github.com/nebulasio/go-nebulas

OS X:
./setup.sh

Linux:
source setup.sh
Note:

The dependency libraries are not installed in a system directory, and Darwin and Linux use different mechanisms to load them.

  • OS X:

    • setup.sh creates a lib folder in the user's home directory so that the system library loader can find the libraries there; make sure no conflicting lib folder already exists. All of this is handled by setup.sh. (DYLD_LIBRARY_PATH cannot be used unless System Integrity Protection (SIP) is disabled.)
    ./setup.sh
    
  • Linux (Ubuntu):

    • setup.sh exports LD_LIBRARY_PATH for the native libs, which is why it must be run with source.

Build the neb binary.

  • run command
make build

Building from Docker

You can specify the config file by modifying the docker-compose environment configuration.

  • default docker-compose config (version 3):
version: '3'

services:
  
  node:
    image: nebulasio/go-nebulas
    build:
      context: ./docker
    ports:
      - '8680:8680'
      - '8684:8684'
      - '8685:8685'
      - '8888:8888'
      - '8086:8086'
    volumes:
      - .:/go/src/github.com/nebulasio/go-nebulas
    environment:
      - REGION=China
      - config=mainnet/conf/config.conf
    command: bash docker/scripts/neb.bash

sudo docker-compose build
sudo docker-compose up -d

Run

Run node

Starting a Nebulas node is simple. After the build step above, run a command:

./neb [-c /path/to/config.conf]

For a quick start, use the start script, which performs additional checks (recommended):

./start.sh mainnet|testnet|[-c /path/to/config.conf]

Tip: for more details about the configuration, please refer to template.conf.

You will see log message output like:

INFO[2018-03-30T01:39:16+08:00] Setuped Neblet.                               file=neblet.go func="neblet.(*Neblet).Setup" line=161
INFO[2018-03-30T01:39:16+08:00] Starting Neblet...                            file=neblet.go func="neblet.(*Neblet).Start" line=183
INFO[2018-03-30T01:39:16+08:00] Starting NebService...                        file=net_service.go func="net.(*NebService).Start" line=58
INFO[2018-03-30T01:39:16+08:00] Starting NebService Dispatcher...             file=dispatcher.go func="net.(*Dispatcher).Start" line=85
INFO[2018-03-30T01:39:16+08:00] Starting NebService Node...                   file=node.go func="net.(*Node).Start" line=96
INFO[2018-03-30T01:39:16+08:00] Starting NebService StreamManager...          file=stream_manager.go func="net.(*StreamManager).Start" line=74
INFO[2018-03-30T01:39:16+08:00] Started NewService Dispatcher.                file=dispatcher.go func="net.(*Dispatcher).loop" line=93
INFO[2018-03-30T01:39:16+08:00] Starting NebService RouteTable Sync...        file=route_table.go func="net.(*RouteTable).Start" line=91
INFO[2018-03-30T01:39:16+08:00] Started NebService StreamManager.             file=stream_manager.go func="net.(*StreamManager).loop" line=146
INFO[2018-03-30T01:39:16+08:00] Started NebService Node.                      file=net_service.go func="net.(*NebService).Start" id=QmP7HDFcYmJL12Ez4ZNVCKjKedfE7f48f1LAkUc3Whz4jP line=65 listening address="[/ip4/127.0.0.1/tcp/8680 /ip4/127.94.0.1/tcp/8680 /ip4/127.94.0.2/tcp/8680 /ip4/192.168.1.13/tcp/8680]"
INFO[2018-03-30T01:39:16+08:00] Started NebService.                           file=net_service.go func="net.(*NebService).Start" line=74
INFO[2018-03-30T01:39:16+08:00] Starting RPC GRPCServer...                    file=server.go func="rpc.(*Server).Start" line=87
INFO[2018-03-30T01:39:16+08:00] Started RPC GRPCServer.                       address="0.0.0.0:8684" file=server.go func="rpc.(*Server).Start" line=95
INFO[2018-03-30T01:39:16+08:00] Started NebService RouteTable Sync.           file=route_table.go func="net.(*RouteTable).syncLoop" line=123
INFO[2018-03-30T01:39:16+08:00] Starting RPC Gateway GRPCServer...            file=neblet.go func="neblet.(*Neblet).Start" http-cors="[]" http-server="[0.0.0.0:8685]" line=212 rpc-server="0.0.0.0:8684"
INFO[2018-03-30T01:39:16+08:00] Starting BlockChain...                        file=blockchain.go func="core.(*BlockChain).Start" line=194
INFO[2018-03-30T01:39:16+08:00] Starting BlockPool...                         file=neblet.go func="neblet.(*Neblet).Start" line=219 size=128
INFO[2018-03-30T01:39:16+08:00] Starting TransactionPool...                   file=neblet.go func="neblet.(*Neblet).Start" line=220 size=327680
INFO[2018-03-30T01:39:16+08:00] Started BlockChain.                           file=blockchain.go func="core.(*BlockChain).loop" line=208
INFO[2018-03-30T01:39:16+08:00] Starting EventEmitter...                      file=neblet.go func="neblet.(*Neblet).Start" line=221 size=40960
INFO[2018-03-30T01:39:16+08:00] Started BlockPool.                            file=block_pool.go func="core.(*BlockPool).loop" line=232
INFO[2018-03-30T01:39:16+08:00] Started TransactionPool.                      file=asm_amd64.s func=runtime.goexit line=2362 size=327680
INFO[2018-03-30T01:39:16+08:00] Started EventEmitter.                         file=event.go func="core.(*EventEmitter).loop" line=156
INFO[2018-03-30T01:39:16+08:00] Starting Dpos Mining...                       file=dpos.go func="dpos.(*Dpos).Start" line=136
INFO[2018-03-30T01:39:16+08:00] Started Sync Service.                         file=sync_service.go func="sync.(*Service).startLoop" line=150
INFO[2018-03-30T01:39:16+08:00] Started Dpos Mining.                          file=dpos.go func="dpos.(*Dpos).blockLoop" line=619
INFO[2018-03-30T01:39:16+08:00] Enabled Dpos Mining...                        file=dpos.go func="dpos.(*Dpos).EnableMining" line=155
INFO[2018-03-30T01:39:16+08:00] This is a seed node.                          file=neblet.go func="neblet.(*Neblet).Start" line=247
INFO[2018-03-30T01:39:16+08:00] Resumed Dpos Mining.                          file=dpos.go func="dpos.(*Dpos).ResumeMining" line=296
INFO[2018-03-30T01:39:16+08:00] Started Neblet.                               file=neblet.go func="neblet.(*Neblet).Start" line=259

From the log, we can see that the binary starts the neblet, the network service, the RPC API server, and the consensus state machine.

TestNet

We are glad to release Nebulas Testnet here. You can use and join our TestNet right now.

MainNet

We are glad to release Nebulas Mainnet here. You can use and join our MainNet right now.

Explorer

Nebulas provides a block explorer to view block/transaction information. Please check Explorer.

Wallet

Nebulas provides a web wallet to send transactions and deploy/call contracts. Please check Web-Wallet.

Wiki

Please check our Wiki to learn more about Nebulas.

Contribution

We are very glad that you are considering helping the Nebulas Team or the go-nebulas project, whether with source code, documentation, or anything else.

If you'd like to contribute, please fork, fix, commit and send a pull request for the maintainers to review and merge into the main code base. If you wish to submit more complex changes though, please check up with the core devs first on our slack channel to ensure those changes are in line with the general philosophy of the project and/or get some early feedback which can make both your efforts much lighter as well as our review and merge procedures quick and simple.

Please refer to our contribution guideline for more information.

Thanks.

License

The go-nebulas project is licensed under the GNU Lesser General Public License Version 3.0 (“LGPL v3”).

For more information about licensing, please refer to the Licensing page.


go-nebulas's Issues

neb RPC service receives the same transaction again, leading to "Failed to push a tx into tx pool: duplicated transaction"

Description

When a transaction is sent to a neb RPC service, that service will receive the same transaction message again.

Error Log

msg="Failed to push a tx into tx pool." err="duplicated transaction" file=asm_amd64.s func=runtime.goexit goroutine=16 line=2338 messageType=newtx transaction="{"chainID":1001, "hash":"cc29e71ce53682be97663f2fbcc57490b9b2ac355828b46604576c06c30a63fe", "from":"e81f56a0bccf6500c8458abfa79b4287715d74edc59c2ad9", "to":"bbb213b48cf7554885177ad889c92be409e4ab08595273ab", "nonce":171, "value":"1", "timestamp":1517024148, "gasprice": "1000000", "gaslimit":"20000", "type":"binary"}"

Cause analysis

In the p2p stream handling, when a neb node receives a message from another node, it adds the message key (node.pid + message.hash) to a bloom filter so that it does not send the message back to the parent node.
But when the first node sends a message to a second node and the second node relays it to a third node, the third node may send the same message back to the first node.

The message routing path:

A --> B --> C --> A

Solution

The first solution: when sending a message to another node, also add it to the bloom filter. The filter then keeps the duplicate from being pushed into the pool, but it does not prevent receiving the message again.

The second solution: avoid sending back to A at all (A --> B --> C -XX-> A); every node should record all the nodes from which it has received the same message.

We will fix the bug later.

neb RPC api.sendTransaction does not return contract_address

I'm unable to access the contract via RPC as no contract_address is returned using sendTransaction.

contract

var contract = {
    "args": "",
    "sourceType": "js",
    "src": "var DepositeContent=function(t){if(t){let n=JSON.parse(t);this.balance=new BigNumber(n.balance),this.expiryHeight=new BigNumber(n.expiryHeight)}else this.balance=new BigNumber(0),this.expiryHeight=new BigNumber(0)};DepositeContent.prototype={toString:function(){return JSON.stringify(this)}};var BankVaultContract=function(){LocalContractStorage.defineMapProperty(this,\"bankVault\",{parse:function(t){return new DepositeContent(t)},stringify:function(t){return t.toString()}})};BankVaultContract.prototype={init:function(){},save:function(t){var n=Blockchain.transaction.from,e=Blockchain.transaction.value,a=new BigNumber(Blockchain.block.height),r=this.bankVault.get(n);r\u0026\u0026(e=e.plus(r.balance));vari=new DepositeContent;i.balance=e,i.expiryHeight=a.plus(t),this.bankVault.put(n,i)},takeout:function(t){var n=Blockchain.transaction.from,e=new BigNumber(Blockchain.block.height),a=new BigNumber(t),r=this.bankVault.get(n);if(!r)throw new Error(\"No deposit before.\");if(e.lt(r.expiryHeight))throw new Error(\"Cant takeout before expiryHeight.\");if(a.gt(r.balance))throw new Error(\"Insufficient balance.\");if(0!=Blockchain.transfer(n,a))throw new Error(\"transfer failed.\");Event.Trigger(\"BankVault\",{Transfer:{from:Blockchain.transaction.to,to:n,value:a.toString()}}),r.balance=r.balance.sub(a),this.bankVault.put(n,r)},balanceOf:function(){var t=Blockchain.transaction.from;return this.bankVault.get(t)},monkas:function(){return\"MONKAS OOH OOH AHH AHH\"}},module.exports=BankVaultContract;"
}

./neb console

api.sendTransaction("1a263547d167c74cf4b8f9166cfa244de0481c514a45aa2c", "1a263547d167c74cf4b8f9166cfa244de0481c514a45aa2c", "0", 7, 1000000, 2000000, contract)


/* RETURNED VALUE
{
    "txhash": "0c791527888a8022f388d9dd072c79e0c7356860ce59fe182b5373942ef83504"
}
*/

The transaction receipt looks like:

api.getTransactionReceipt

{
    "chainId": 100,
    "from": "1a263547d167c74cf4b8f9166cfa244de0481c514a45aa2c",
    "gas_limit": "2000000",
    "gas_price": "1000000",
    "hash": "f3a89c1e88a4fdaf8ce90c272dfa17b1a9371b1253bf6c02bc50e382c19c9283",
    "nonce": "5",
    "timestamp": "1518045154",
    "to": "1a263547d167c74cf4b8f9166cfa244de0481c514a45aa2c",
    "type": "binary",
    "value": "0"
}

Expected behavior

api.sendTransaction("1a263547d167c74cf4b8f9166cfa244de0481c514a45aa2c", "1a263547d167c74cf4b8f9166cfa244de0481c514a45aa2c", "0", 7, 1000000, 2000000, contract)


/* RETURNED VALUE
{
    "contract_address:" <contract_address>,
    "txhash": "0c791527888a8022f388d9dd072c79e0c7356860ce59fe182b5373942ef83504"
}

Multiple nodes run at the same time with occasional crashes

When I deploy on four servers, the seed nodes will crash inexplicably.
The error stack trace:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0xa567c8]

goroutine 733 [running]:
github.com/nebulasio/go-nebulas/core.(*Block).Nonce(...)
    /root/go/src/github.com/nebulasio/go-nebulas/core/block.go:191
github.com/nebulasio/go-nebulas/consensus/pow.(*MiningState).searchingNonce(0xc4201b39e0)
    /root/go/src/github.com/nebulasio/go-nebulas/consensus/pow/mining.go:84 +0x68
created by github.com/nebulasio/go-nebulas/consensus/pow.(*MiningState).Enter
    /root/go/src/github.com/nebulasio/go-nebulas/consensus/pow/mining.go:66 +0x8d

panic: unsupported go version go1.9.3

Following instructions on: https://github.com/nebulasio/wiki/blob/master/tutorials/%5BEnglish%5D%20Nebulas%20101%20-%2001%20Installation.md

Environment: MacOS, homebrew go version: 1.9.3

After setting up all dependencies and git clone from master, executing ./neb command would fail with unsupported go version error (see title).

Possible reason: the goroutine package used in vendor/github.com/huandu/goroutine/info.go (from https://github.com/huandu/goroutine/) does not support the go 1.9.3 runtime.

is stream_manager in net/p2p necessary?

Source code
In libp2p, streams are multiplexed over single connections, so unlike connections themselves they are cheap to create and dispose of. We can open a new stream whenever we want to send a message and close it after receiving the response.
Why does Nebulas need to reuse a stream and hold only one stream to a given peer at a time?
What's the stream_manager actually used for?

Nebulas console log specification

The Nebulas console logs need to be normalized. We have compiled the current service startup and shutdown specifications.

Nebulas console log statements

// log level can be `Info`,`Warning`,`Error`
logging.CLog().Info("")

Nebulas services are managed in neblet.go; we should add console logs for the following services:

  • Metrics
  • NetService
    • Dispatcher
    • Node
      • StreamManager
  • ApiServer
    • Server
    • Gateway
  • BlockPool
  • TransactionPool
  • EventEmitter
  • SyncManager
  • Consensus

Startup specifications

When Nebulas starts a service, it should emit a console log before the service actually starts. The log format looks like this:

logging.CLog().Info("Starting xxx...")

Stopping specifications

When Nebulas stops a service, it should emit a console log before the service actually stops. The log format looks like this:

logging.CLog().Info("Stopping xxx...")

instruction_counter.js can't handle async statement

Script Content

test/mozilla_js_tests/ecma_2017/AsyncFunctions/semantics.js

// |reftest| skip-if(!xulRuntime.shell) -- needs drainJobQueue
var BUGNUMBER = 1185106;
var summary = "async functions semantics";

print(BUGNUMBER + ": " + summary);

async function empty() {
}
assertEventuallyEq(empty(), undefined);

async function simpleReturn() {
  return 1;
}
assertEventuallyEq(simpleReturn(), 1);

async function simpleAwait() {
  var result = await 2;
  return result;
}
assertEventuallyEq(simpleAwait(), 2);

async function simpleAwaitAsync() {
  var result = await simpleReturn();
  return 2 + result;
}
assertEventuallyEq(simpleAwaitAsync(), 3);

async function returnOtherAsync() {
  return 1 + await simpleAwaitAsync();
}
assertEventuallyEq(returnOtherAsync(), 4);

async function simpleThrower() {
  throw new Error();
}
assertEventuallyThrows(simpleThrower(), Error);

async function delegatedThrower() {
  var val = await simpleThrower();
  return val;
}

async function tryCatch() {
  try {
    await delegatedThrower();
    return 'FAILED';
  } catch (_) {
    return 5;
  }
}
assertEventuallyEq(tryCatch(), 5);

async function tryCatchThrow() {
  try {
    await delegatedThrower();
    return 'FAILED';
  } catch (_) {
    return delegatedThrower();
  }
}
assertEventuallyThrows(tryCatchThrow(), Error);

async function wellFinally() {
  try {
    await delegatedThrower();
  } catch (_) {
    return 'FAILED';
  } finally {
    return 6;
  }
}
assertEventuallyEq(wellFinally(), 6);

async function finallyMayFail() {
  try {
    await delegatedThrower();
  } catch (_) {
    return 5;
  } finally {
    return delegatedThrower();
  }
}
assertEventuallyThrows(finallyMayFail(), Error);

async function embedded() {
  async function inner() {
    return 7;
  }
  return await inner();
}
assertEventuallyEq(embedded(), 7);

// recursion, it works!
async function fib(n) {
    return (n == 0 || n == 1) ? n : await fib(n - 1) + await fib(n - 2);
}
assertEventuallyEq(fib(6), 8);

// mutual recursion
async function isOdd(n) {
  async function isEven(n) {
      return n === 0 || await isOdd(n - 1);
  }
  return n !== 0 && await isEven(n - 1);
}
assertEventuallyEq(isOdd(12).then(v => v ? "oops" : 12), 12);

// recursion, take three!
var hardcoreFib = async function fib2(n) {
  return (n == 0 || n == 1) ? n : await fib2(n - 1) + await fib2(n - 2);
}
assertEventuallyEq(hardcoreFib(7), 13);

var asyncExpr = async function() {
  return 10;
}
assertEventuallyEq(asyncExpr(), 10);

var namedAsyncExpr = async function simple() {
  return 11;
}
assertEventuallyEq(namedAsyncExpr(), 11);

async function executionOrder() {
  var value = 0;
  async function first() {
    return (value = value === 0 ? 1 : value);
  }
  async function second() {
    return (value = value === 0 ? 2 : value);
  }
  async function third() {
    return (value = value === 0 ? 3 : value);
  }
  return await first() + await second() + await third() + 6;
}
assertEventuallyEq(executionOrder(), 9);

async function miscellaneous() {
  if (arguments.length === 3 &&
      arguments.callee.name === "miscellaneous")
      return 14;
}
assertEventuallyEq(miscellaneous(1, 2, 3), 14);

function thrower() {
  throw 15;
}

async function defaultArgs(arg = thrower()) {
}
assertEventuallyEq(defaultArgs().catch(e => e), 15);

let arrowAwaitExpr = async () => await 2;
assertEventuallyEq(arrowAwaitExpr(), 2);

let arrowAwaitBlock = async () => { return await 2; };
assertEventuallyEq(arrowAwaitBlock(), 2);

// Async functions are not constructible
assertThrows(() => {
  async function Person() {

  }
  new Person();
}, TypeError);

if (typeof reportCompare === "function")
    reportCompare(true, true);

Error

INFO[0183] Testing test/mozilla_js_tests/ecma_2017/AsyncFunctions/semantics.js  file=buffer.go func=nvm.TestRunMozillaJSTestSuite.func1 line=462
ERRO[0183] V8 Exception:
instruction_counter.js:235
                        if (ancestor.node.type in InjectableExpressions) {
                                          ^

TypeError: Cannot read property 'type' of null
    at instruction_counter.js:235:43
    at traverse (instruction_counter.js:31:37)
    at traverse (instruction_counter.js:52:17)
    at traverse (instruction_counter.js:52:17)
    at traverse (instruction_counter.js:52:17)
    at traverse (instruction_counter.js:52:17)
    at traverse (instruction_counter.js:52:17)
    at Object.processScript (instruction_counter.js:152:5)
    at _inject_tracer.js:4:35
    at _inject_tracer.js:6:3  file=logger.go func=nvm.V8Log line=32

Root Cause Analysis

The instruction_counter can't handle an AssignmentPattern inside function params, so it keeps walking up to the parent until the parent is NULL, and then crashes.

The following script and its AST cause this crash:

async function defaultArgs(arg = thrower()) {
}
{
      "type": "FunctionDeclaration",
      "id": {
        "type": "Identifier",
        "name": "defaultArgs",
        "range": [
          115,
          126
        ]
      },
      "params": [
        {
          "type": "AssignmentPattern",
          "left": {
            "type": "Identifier",
            "name": "arg",
            "range": [
              127,
              130
            ]
          },
          "right": {
            "type": "CallExpression",
            "callee": {
              "type": "Identifier",
              "name": "thrower",
              "range": [
                133,
                140
              ]
            },
            "arguments": [],
            "range": [
              133,
              142
            ]
          },
          "range": [
            127,
            142
          ]
        }
      ],
      "body": {
        "type": "BlockStatement",
        "body": [],
        "range": [
          144,
          147
        ]
      },
      "generator": false,
      "expression": false,
      "async": true,
      "range": [
        100,
        147
      ]
    },

Add more metrics in Nebulas

The Nebulas metrics system is based on the go-metrics library. We use InfluxDB to store the metrics data and Grafana to fetch data from InfluxDB and draw diagrams.

Creating and updating metrics

Metrics can be created and updated equally simply:

meter := metrics.NewMeter(metername)
timer := metrics.NewTimer(timername)

meter.Mark(n) // Record the occurrence of `n` events
timer.Update(duration)  // Record an event that took `duration`
timer.UpdateSince(time) // Record an event that started at `time`

Metrics data is displayed in Grafana (screenshot omitted).

We have already added some necessary metrics in Nebulas, but we need more.
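As a hedged sketch of what adding one more metric could look like, reusing only the constructor calls shown above (the metric names and the wrapper import path are assumptions for illustration):

package example

import (
	"time"

	"github.com/nebulasio/go-nebulas/metrics" // assumed metrics wrapper used above
)

var (
	txSubmitMeter    = metrics.NewMeter("neb.tx.submit")    // hypothetical name
	blockVerifyTimer = metrics.NewTimer("neb.block.verify") // hypothetical name
)

// recordBlockVerify wraps a verification step and records its duration.
func recordBlockVerify(verify func() error) error {
	start := time.Now()
	err := verify()
	blockVerifyTimer.UpdateSince(start) // record an event that started at `start`
	return err
}

// recordTxSubmit marks one submitted transaction.
func recordTxSubmit() {
	txSubmitMeter.Mark(1) // record the occurrence of 1 event
}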

"v8 received signal SIGSEGV, Segmentation fault" after execution timeout and call V8::TerminateExecution()

Version

c8b8b49

Scenario

I am implementing a V8 execution timeout mechanism: after running for too long, call V8::TerminateExecution() to terminate execution and prevent malicious smart contracts from breaking out of the jail.

In engine_v8.go, C.RunScriptSource is run in a goroutine and a timer is set in the main thread. On timeout, C.TerminateExecution() is called.

func (e *V8Engine) RunScriptSource(content string) (err error) {
        // ...

	done := make(chan bool, 1)

	go func() {
		ret = C.RunScriptSource(e.v8engine, cSource, C.uintptr_t(e.lcsHandler),
			C.uintptr_t(e.gcsHandler))
		done <- true
	}()

	select {
	case <-done:
		if ret != 0 {
			err = ErrExecutionFailed
		}
	case <-time.After(15 * time.Second):
		log.Info("timeout...")
		C.TerminateExecution(e.v8engine)
		err = ErrExecutionTimeout
	}

	// ...

	return
}

After calling C.TerminateExecution(), it crashes. The call stack is the following:

INFO[0010] timeout...                                    file=engine_v8.go func="nvm.(*V8Engine).RunScriptSource" line=195
ERRO[0010] Err is execution timeout                      file=main.go func=main.main line=47
ERRO[0010] [V8 Exception] null                           file=logger.go func=nvm.V8Log line=32

Thread 1 "v8" received signal SIGSEGV, Segmentation fault.
0x00007ffff6b3033c in v8::Isolate::Dispose() () from /usr/local/lib/libv8.so
(gdb) bt
#0  0x00007ffff6b3033c in v8::Isolate::Dispose() () from /usr/local/lib/libv8.so
#1  0x00007ffff7a4078c in DeleteEngine (e=0x8f91d0) at engine.cc:104
#2  0x0000000000456bb0 in runtime.asmcgocall () at /usr/local/go/src/runtime/asm_amd64.s:624
#3  0x0000000000000000 in ?? ()
(gdb) q
A debugging session is active.

        Inferior 1 [process 19112] will be killed.

Block transaction process not put in Transaction_pool

In Nebulas, all transactions submitted via RPC, received from other nodes, or packaged in a block are put into the transaction pool, where we record pending transactions and verify them. For transactions already packaged in a block, we do not need to put them into the transaction pool.

installation error

Hi, we are trying to get it running on our Mac.
When we run

make dep

we got this error:

dep ensure -v
The following issues were found in Gopkg.toml:

✗ unable to deduce repository and source type for "leb.io/hashland": unable to read metadata: unable to fetch raw metadata: failed HTTP request to URL "http://leb.io/hashland?go-get=1": Get http://leb.io/hashland?go-get=1: dial tcp 104.131.190.18:80: getsockopt: connection refused

ProjectRoot name validation failed
make: *** [dep] Error 1

any idea?
Thanks,

Support TypeScript source for Transaction

Hi,

The Nebulas V8 engine now supports TypeScript, via the TranspileTypeScript function:

func (e *V8Engine) TranspileTypeScript(source string) (string, int, error) {

In order to let developers write smart contracts in TS and deploy them, we should make the following modifications (a sketch of step 2 follows below):

  1. Add source type field in DeployPayload struct in transaction_deploy_payload.go file.
  2. Call V8Engine.TranspileTypeScript before
    err = engine.DeployAndInit(payload.Source, payload.Args)
    to convert TS to JS.
  3. Update the RPC related API.

For an example, you may take a look at the following test case:

func TestTypeScriptExecution(t *testing.T) {
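A hedged sketch of step 2 above (transpile before deploy), reusing the TranspileTypeScript and DeployAndInit calls shown in this issue; the SourceType field and constant are assumptions for illustration, not the actual field names:

// Sketch only: transpile TS to JS before deployment. V8Engine and DeployPayload
// are the project types referenced above; SourceTypeTypeScript is a made-up constant.
const SourceTypeTypeScript = "ts"

func deployPayload(engine *V8Engine, payload *DeployPayload) error {
	source := payload.Source
	if payload.SourceType == SourceTypeTypeScript {
		jsSource, _, err := engine.TranspileTypeScript(source)
		if err != nil {
			return err
		}
		source = jsSource // deploy the transpiled JS instead of the raw TS
	}
	return engine.DeployAndInit(source, payload.Args)
}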

Failed transaction processing: gas consumption must still change state.

In Nebulas, besides transferring NAS, a transaction also supports executing contract code, and it consumes gas.

What we need to do:

  • gasPrice: get the current gasPrice on chain. The method can be added to blockchain.go and returns the lowest price among the latest blocks. We can find the minimum value among the tail block's transactions; if the tail block has no transactions, recursively query the parent block.
  • estimateGas: get the gas consumed by a transaction/call. The method's parameters are like those of sendTransaction and call, and it estimates the gas consumption of a transaction. This function simulates the execution of the smart contract and gets the actual count of executed instructions. If the transaction only transfers value (the source, function, and args params are empty), it returns the default gas for a normal transaction:
    // run nvm to execute smart contracts
    ctx := nvm.NewContext(block, ctxTx, owner, contract, context)
	engine := nvm.NewV8Engine(ctx)
	//add gas limit and memory use limit
	engine.SetExecutionLimits(tx.GasLimit().Uint64(), nvm.DefaultLimitsOfTotalMemorySize)
	defer engine.Dispose()
	// deploy source
	engine.DeployAndInit(payload.Source, payload.Args)
	// call method
	engine.Call(deploy.Source, payload.Function, payload.Args)
  • Since executing a smart contract consumes computing resources, when contract execution fails the transaction is still submitted and the gas is still consumed.

For executing a transaction, in transaction.go:

// Execute transaction and return result.
func (tx *Transaction) Execute(block *Block) error {
...
// execute the smart contract and subtract the calculated gas. If the payload execution fails, the transaction is still submitted successfully but the gas has been deducted.
	return payload.Execute(tx, block)
}

Event functionality

Since smart contracts and transactions are executed asynchronously (submit and wait), developers and users want to know when a submitted tx is on chain, or to be notified when it's done.

So in Nebulas, we introduce the Event functionality, as follows:

  1. Smart contract developers can define their own events in a smart contract. After successful execution, the events will be triggered in order.
  2. Developers can subscribe to the successful events.

Nebulas status page

We are glad to release the Nebulas Testnet. We need a status page to show the network's running state. The contents of the status page:

  • running state by area:

    • testnet-cal
    • testnet-can
    • testnet-lon
    • testnet-par
    • testnet-jan
    • testnet-sin
    • testnet-vir
    • testnet-hkg
    • testnet-ger
  • api running status

  • block status

    • block height
    • block hash
    • expected miner(GetDynasty & BlockDump)
  • transaction status

    • submits count
    • execution success count
    • execution fail count
    • contracts count(optional)
  • history status records

RPC: using curl against http://localhost:8090/v1/user/accountstate returns a 404 error

Source code:

Downloaded the v0.4.0 release, together with the vender.tar.gz file.

ubuntu 16.04

One physical machine runs the seed-node and another physical machine runs the normal-node.

Checked the ports with ss -ant; port 8090 is already open.
Running
curl -i -H Accept:application/json -X POST http://localhost:8090/v1/user/accountstate -d '{"address":"1a263547d167c74cf4b8f9166cfa244de0481c514a45aa2c"}'
returns 404 Not Found.

Attachments

normal.log contains the neb runtime log of the normal node.
seed.log contains the neb runtime log of the seed node.

Also

The ./neb console api.** interfaces work normally.

normal.log
seed.log

Testnet sync stops when the network is interrupted mid-way and does not resume after the network recovers

nebulas v0.5.0

  1. After starting the node, syncing begins. The network is interrupted mid-way; the nebulas process appears to still be running. Querying
    curl -i -H Accept:application/json -X GET http://localhost:8685/v1/user/nebstate
    shows peer_count=1.

  2. After the network recovers, checking nebstate again still shows peer_count=1.

Checking with
curl -i -H Accept:application/json -X POST http://localhost:8685/v1/user/accountstate -d '{"address":"0b9cd051a6d7129ab44b17833c63fe4abead40c3714cde6d"}'
after the network interruption and after the network recovery shows the same balance.
Based on this, I infer that syncing stopped after the network interruption and the sync process did not resume after the network recovered.

[Design] Add optimize strategy for message dispatching in network

In the current implementation of go-nebulas, a network message, for example a newtx message, is broadcast to the whole network, and each node also relays it to its peer nodes. That creates a broadcast-storm effect, which still has some impact even with the received_message strategy implemented.

When duplicated messages spread through the network, they put a lot of computational pressure on the blockchain core, especially newtx messages during stress tests.

To optimize that, I propose introducing a duplicate-message check in the dispatcher.go module.

In the PutMessage() function, check whether the message has already been dispatched. If so, ignore it; otherwise escalate it to the upper level.
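A minimal sketch of such a check (the Message interface, field names, and map-based cache are assumptions; in practice a bounded LRU or bloom filter would be used):

package example

import "sync"

// Message is a minimal stand-in for the network message type handled by the dispatcher.
type Message interface {
	Hash() string
}

// Dispatcher keeps a record of message hashes it has already dispatched.
type Dispatcher struct {
	mu         sync.Mutex
	dispatched map[string]struct{}
}

func NewDispatcher() *Dispatcher {
	return &Dispatcher{dispatched: make(map[string]struct{})}
}

// seen reports whether the hash was dispatched before and records it if not.
func (dp *Dispatcher) seen(hash string) bool {
	dp.mu.Lock()
	defer dp.mu.Unlock()
	if _, ok := dp.dispatched[hash]; ok {
		return true
	}
	dp.dispatched[hash] = struct{}{}
	return false
}

// PutMessage ignores duplicated messages and escalates new ones to the upper level.
func (dp *Dispatcher) PutMessage(msg Message) {
	if dp.seen(msg.Hash()) {
		return // duplicated message: ignore
	}
	// ... escalate to the upper level as in the current implementation ...
}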

DPOS: performance concern in tallying delegate votes.

DPOS is implemented in the branch feature/dpos.

Situation

In DPOS consensus algorithm, you can delegate your voting right to others. In Nebulas, we use a merkle trie to store all delegate votes. When a dynasty is over, we tally current delegate votes and elect some delegatees whose voters have more tokens to join the BFT-like consensus.

The code for tallying votes and electing a new dynasty is in dpos_context.go.

Performance Concern

If we have 1 million delegatees or more in our merkle tree, it'll be time consuming to tally all votes and sort all delegatees by their voters' total tokens.

feel free to share your ideas or questions.

Nebulas smart contract conditional expression not work

We currently can't write a conditional (ternary) expression in a Nebulas smart contract.
If a contract contains a conditional expression like this:

balanceOf: function (owner) {
       var balance = this.balances.get(owner);
       return (typeof balance != "undefined") ? balance.toString() : "0";
   },

an error will be thrown when running the contract:

The _instruction_counter makes it throw an error:

time=“2018-01-31T17:01:02+08:00” level=error msg=“V8 Exception:\nlib/contract.js:114\n        return (!_instruction_counter.incr(6) || typeof balance != \“undefined\“) ? !_instruction_counter.incr(12) || balance.toString() : \“0\“;\n                                                                                                                            ^\nTypeError: Cannot read property ‘toString’ of null\n    at StandardToken.balanceOf (lib/contract.js:114:125)\n    at _contract_runner.js:5:56” file=logger.go func=nvm.V8Log line=32

From the error log, NVM added _instruction_counter when executing the contract. Instruction_counter is used to count gas consumption.

New config schema

Hi,

I'd suggest we change the schema of config.proto in the neblet package, which is used to configure application startup parameters.

The old config schema is less descriptive, fields are not grouped well, and some important fields are missing.

So I propose to adopt a new config schema, to address the following issues:

  • new network config section, describe all basic network configs;
    • neb should support listen on multiple addresses;
  • new chain config section, describe chain related configs;
    • chainID, data/key dir, coinbase and gasprice, etc.
  • refined rpc config section;
    • use Enum to describe all available RPC Modules;
    • two types of listen address: one is gRPC, the other is HTTP RESTful;
  • new app config section, describe all application's own configs;
    • LogLevel and log dir;
  • new stats config section, describe stats/metrics related configs;
  • new misc config section, for everything that has no better place to go :)

New config.proto is the following:

syntax = "proto3";
package nebletpb;

// Neblet global configurations.
message Config {
	// Network config.
    NetworkConfig network  = 1;

	// Chain config.
	ChainConfig chain = 2;

	// RPC config.
	RPCConfig rpc = 3;

	// App Config.
	AppConfig app = 10;

	// Stats config.
	StatsConfig stats = 100;

	// Misc config.
	MiscConfig misc = 101;
}

message NetworkConfig {
	// Neb seed node address.
	repeated string seed = 1;

	// Listen addresses.
	repeated string listen = 2;
}

message ChainConfig {
	// ChainID.
	uint32 chain_id = 1;

	// Data dir.
	string data_dir = 11;

	// Key dir.
	string key_dir = 12;

	// Coinbase.
	string coinbase = 21;

	// GasPrice.
	string gas_price = 22;

	// Supported signature cipher list.
	enum SignatureCiphers {
		ECC_SECP256K1 = 0;
	}
	repeated SignatureCiphers signature_ciphers = 23;
}

message RPCConfig {
	// PRC modules.
	enum RPCModule {
		App = 0;
		Admin = 1;
	}

	// RPC listen addresses.
	repeated string rpc_listen = 1;

	// Enabled RPC modules.
	repeated RPCModule rpc_module = 2;

	// HTTP listen addresses.
	repeated string http_listen = 3;

	// Enabled HTTP modules.
	repeated RPCModule http_module = 4;
}

message AppConfig {
	// LogLevel.
	enum LogLevel {
		Info = 0;
		Warn = 1;
		Error = 2;
		Debug = 3;
	}
	LogLevel log_level = 1;

	// Log dir.
	string log_dir = 2;
}

message MiscConfig {
	// Default encryption ciper when create new keystore file.
    string default_keystore_file_ciper = 1;
}

message StatsConfig {
	// Enable metrics or not.
	bool metrics_enabled = 1;

	// Reporting modules.
	enum ReportingModule {
		Influxdb = 0;
	}
	repeated ReportingModule reporting_module = 2;

	// Influxdb config.
	InfluxdbConfig influxdb = 3;
}

message InfluxdbConfig {
	// Host.
	string host = 1;

	// Port.
	uint32 port = 2;

	// Database name.
	string db = 3;

	// Auth user.
	string user = 4;

	// Auth password.
	string password = 5;
}

How to prevent DDoS attacks?

How can we prevent DDoS attacks in Nebulas?
If someone maliciously forges a large number of invalid transactions to attack our network, block broadcast and relay will stop working, because the network will be occupied by those invalid transactions.
So, how do we prevent DDoS attacks?

Routing table synchronization may not work.

Scenario

I have deployed nebulas on four servers and used grafana to collect metrics information.
Several days later, I found that the blockchain had forked (expected vs. actual Grafana charts omitted).

Root Cause Analysis

I found that routing table synchronization discovery was interrupted after 2017-11-23 16:40; the four servers stopped communicating with each other (chart omitted).
I checked our source code and found some problems.
Line 102: I only handle the case where the stream cache is OK. If the stream does not exist, I should reconnect to the remote peer (code screenshot omitted).

Multi V8Engine crash

Hi all,

In the last few days, we encountered a crash when executing scripts on multiple V8 engines concurrently.

The code:

func TestMultiEngine(t *testing.T) {
	mem, _ := storage.NewMemoryStorage()
	context, _ := state.NewAccountState(nil, mem)
	owner := context.GetOrCreateUserAccount([]byte("account1"))
	contract, _ := context.CreateContractAccount([]byte("account2"), nil)

	var wg sync.WaitGroup
	for i := 0; i < 1000; i++ {
		wg.Add(1)
		idx := i
		go func() {
			defer wg.Done()
			engine := NewV8Engine(owner, contract, context)
			defer engine.Dispose()

			err := engine.RunScriptSource("console.log('running.');")
			log.Infof("run script %d; err %v", idx, err)
			assert.Nil(t, err)
		}()
	}
	wg.Wait()
}

The call stack:

Thread 59 "v8" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fff5a7ec700 (LWP 7181)]
0x00007ffff6f6b138 in v8::internal::Isolate::StackOverflow() () from /usr/local/lib/libv8.so
(gdb) r
The program being debugged has been started already.
Start it from the beginning? (y or n) n
Program not restarted.
(gdb) bt
#0  0x00007ffff6f6b138 in v8::internal::Isolate::StackOverflow() () from /usr/local/lib/libv8.so
#1  0x00007ffff6bc6650 in v8::internal::Genesis::Genesis(v8::internal::Isolate*, v8::internal::MaybeHandle<v8::internal::JSGlobalProxy>, v8::Local<v8::ObjectTemplate>, unsigned long, v8::DeserializeInternalFieldsCallback, v8::internal::GlobalContextType) () from /usr/local/lib/libv8.so
#2  0x00007ffff6ba5764 in v8::internal::Bootstrapper::CreateEnvironment(v8::internal::MaybeHandle<v8::internal::JSGlobalProxy>, v8::Local<v8::ObjectTemplate>, v8::ExtensionConfiguration*, unsigned long, v8::DeserializeInternalFieldsCallback, v8::internal::GlobalContextType) () from /usr/local/lib/libv8.so
#3  0x00007ffff6b51d32 in v8::NewContext(v8::Isolate*, v8::ExtensionConfiguration*, v8::MaybeLocal<v8::ObjectTemplate>, v8::MaybeLocal<v8::Value>, unsigned long, v8::DeserializeInternalFieldsCallback) () from /usr/local/lib/libv8.so
#4  0x00007ffff6b3445f in v8::Context::New(v8::Isolate*, v8::ExtensionConfiguration*, v8::MaybeLocal<v8::ObjectTemplate>, v8::MaybeLocal<v8::Value>, v8::DeserializeInternalFieldsCallback) () from /usr/local/lib/libv8.so
#5  0x00007ffff7a42346 in RunScriptSource (e=<optimized out>, data=0x7fffcc056fe0 "console.log('running.');", lcsHandler=0x9, gcsHandler=0xa) at engine.cc:120
#6  0x00007ffff7a42515 in RunScriptSource2 (e=<optimized out>, data=<optimized out>, lcsHandler=<optimized out>, gcsHandler=<optimized out>) at engine.cc:101
#7  0x00000000005273f3 in _cgo_0ff50ced1bb6_Cfunc_RunScriptSource2 (v=0xc4200d3ea0) at cgo-gcc-prolog:139
#8  0x0000000000453d80 in runtime.asmcgocall () at /usr/local/go/src/runtime/asm_amd64.s:624
#9  0x0000000000451245 in runtime.newdefer.func2 () at /usr/local/go/src/runtime/panic.go:223
#10 0x00000000004525a9 in runtime.systemstack () at /usr/local/go/src/runtime/asm_amd64.s:344
#11 0x000000000042f3a0 in ?? () at /usr/local/go/src/runtime/proc.go:1060
#12 0x000000c420024600 in ?? ()
#13 0x00007ffff5732a7f in ?? ()
#14 0x000000c42018f680 in ?? ()
#15 0x00007fff5a7ebeb8 in ?? ()
#16 0x000000000042f404 in runtime.mstart () at /usr/local/go/src/runtime/proc.go:1142
#17 0x0000000000527b23 in crosscall_amd64 () at gcc_amd64.S:35
#18 0x00007ffff5732b00 in ?? ()
#19 0x00007fff5a7ec9c0 in ?? ()
#20 0x00007ffff5732a7f in ?? ()
#21 0x0000000000000000 in ?? ()

It seems that the stack usage of V8 exceeds the stack limit of the goroutine.

Disable GlobalContractStorage before finalizing the tech solution for Upgradability

GlobalContractStorage is used to provide the Upgradability of Smart Contracts described in technical whitepaper section 3.3, Upgrade Design of Smart Contract.

In the current codebase, GlobalContractStorage demonstrates a shared storage across smart contracts from the same developer. It shows the possibility of Smart Contract Upgradability well, but needs more restrictions to prevent developers from changing code behavior.

So we disable this functionality now. After we figure out a better model of Upgradability, we will re-enable this.

Nebulas RPC subscribe interface issue

The Nebulas subscribe interface uses a long-lived connection. Over gRPC, the long connection works well via a stream. However, over HTTP with keep-alive, the request succeeds but no data is returned.

gRPC usage

	addr := fmt.Sprintf("127.0.0.1:%d", uint32(8684))
	conn, err := rpc.Dial(addr)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ac := rpcpb.NewApiServiceClient(conn)

	stream, err := ac.Subscribe(context.Background(), &rpcpb.SubscribeRequest{})

	if err != nil {
		log.Fatalf("could not subscribe: %v", err)
	}
	for {
		reply, err := stream.Recv()
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Printf("failed to recv: %v", err)
		}
		log.Println("recv notification: ", reply.Topic, reply.Data)
	}

http usage

curl and the JS implementation of the RPC don't seem to return the data properly because the data is encoded in chunks.

const http = require("http")

callback = function(response) {
  response.on("data", function(chunk) {
    console.log(chunk.toString("utf8"))
  })
}

var req = http.request(
  {
    host: "localhost",
    port: 8685,
    path: "/v1/user/subscribe",
    method: "POST",
    headers: { "Content-Type": "application/json" },
  },
  callback
)
req.on("error", function(e) {
  console.log("problem with request: " + e.message)
})
req.write('{"topics":["chain.executeTxSuccess"]}')
req.end()

Does anybody have a better way to handle Subscribe via HTTP?

Event emitter: non-blocking design may drop events

The event emitter is a non-blocking design: event data will be lost when the channel is full, which happens when there are many subscribed events. Since the RPC stream is sent serially, messages are not delivered in time. We need a non-blocking pipeline for event collection. Can anyone else take this job, using kafka, redis, etc. as the adapter?

Events stored in the event trie may not be returned by Subscribe; this may be a bug in the event emitter.
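The non-blocking behaviour described above boils down to a select with a default branch; a minimal sketch (the field names and drop counter are illustrative, not the project's actual code):

package example

import "sync/atomic"

// Event is a minimal stand-in for an emitted blockchain event.
type Event struct {
	Topic string
	Data  string
}

// EventEmitter delivers events to a subscriber channel without ever blocking.
type EventEmitter struct {
	eventCh chan *Event
	dropped int64
}

// trigger never blocks the block-execution path: when the channel is full the event is lost,
// which is exactly the problem a kafka/redis-backed pipeline would solve.
func (em *EventEmitter) trigger(ev *Event) {
	select {
	case em.eventCh <- ev:
		// delivered to the subscriber loop
	default:
		atomic.AddInt64(&em.dropped, 1) // event dropped
	}
}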

seed node can't sync with others if it's offline for a long time.

Requirement:
I start a seed node and six normal nodes, the normal nodes will connect to the seed node at first. They can sync with each other very well when all are online.

Situation:
After a few minutes, I stop the seed node for a long time and then restart it. The normal nodes will discover the seed node again, and send the newest block to it. Now the seed node will receive many blocks whose parent can't be found locally.

for example.
seed node: genesis -> 0 -> 1 -> ... -> 10.
normal nodes: genesis -> 0 -> 1 -> ... -> 10 -> ... -> 100.
seed node will receive the block at height 100 from others.

In general, if the gap between the newly received block and the local tail block is small, we use the downloader to fetch the blocks in the gap. But if the gap is big enough, we restart the sync manager to sync blocks from others.

Problem:
In the situation described above, the seed node should restart sync but it didn't.

Nebulas smart contract support anonymous function call

Should Nebulas support anonymous function calls for smart contracts?
For scenarios that call smart contracts quickly, when the target is a contract address there would be no need to specify a contract method; the contract's anonymous function would be invoked directly.

example

  • When using the contract wallet, transfer value to the contract address and execute the contract method automatically.

Nebulas currently does not support calls that do not specify a contract method. Anyone is welcome to weigh in on this.

If we do change for this, we should:

  • update RPC for the different type of transaction submit;
  • update transaction execution logic;
  • update nvm to support anonymous function call.

Limit the number of socket requests

During the stress test last night, there was a problem that leveldb could not write.
I got this error:

err="open data.db/035303.ldb: too many open files" 

Our system is Ubuntu. I looked at the maximum number of files that the system allows neb to open; the soft limit is 1024 (screenshot omitted).

This error means that the number of open fds has exceeded the system allocation.

However, I found that the number of fds occupied by leveldb was far lower than the number allocated by the system. The maximum number of files that leveldb opens by default is 500, and it only takes up about 300 fds.
Then I found that sockets take up a lot of fds: about 700.

Therefore, we may need to limit the number of socket requests in our process, otherwise it will directly affect the stability of the neb system.

Why do we have so many sockets in our process?
In Nebulas, neb streams, RPC invocations, and HTTP requests can all create sockets.
The stress test created a lot of HTTP sockets, which took up a lot of fds.

So, How to limit the number of socket requests in our process?

First, we can limit the number of streams in Nebulas. For example, we can cap the number of streams at 100 and not accept streams beyond that limit (see the sketch at the end of this issue).

Then the maximum number of files that leveldb can open by default is 500.

And then we need to limit the number of sockets created through HTTP and RPC. This may be a little more difficult. (TODO)

We could even expose the fd limits for sockets and leveldb in the configuration files as advanced options.
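A hedged sketch of the first step, capping the number of live streams (the names and the cap are illustrative; the real StreamManager will differ):

package example

import (
	"errors"
	"sync"
)

const maxStreams = 100 // illustrative cap

// Stream is a stand-in for a p2p stream to a peer.
type Stream struct{}

// StreamManager tracks live streams and refuses new ones beyond the cap.
type StreamManager struct {
	mu      sync.Mutex
	streams map[string]*Stream // peer id -> stream
}

func NewStreamManager() *StreamManager {
	return &StreamManager{streams: make(map[string]*Stream)}
}

// Add registers a new stream unless the cap is already reached.
func (sm *StreamManager) Add(peerID string, s *Stream) error {
	sm.mu.Lock()
	defer sm.mu.Unlock()
	if len(sm.streams) >= maxStreams {
		return errors.New("too many streams: refusing new stream")
	}
	sm.streams[peerID] = s
	return nil
}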

Sync module doesn't seem to work with high-speed block production.

It seems that the sync module will occasionally hang when blocks are produced at high speed.
I set up the system to generate four blocks per second. When I start a common node to sync from the seed node, the sync module hangs occasionally.
I suppose that if blocks are generated faster than the network can transfer them, the system may run into many unforeseen problems.

A convenient way to decide whether to receive NAS in a smart contract

What mechanism should be provided so that smart contract developers can conveniently decide, in contract code, whether to receive transferred NAS? Nebulas smart contracts do not currently support permission control or keyword markup.

In Ethereum, Solidity has a payable keyword. http://solidity.readthedocs.io/en/develop/common-patterns.html

Design points

  • Each Public Function itself decides whether to receive the transferred NAS
  • Provide a unified processing function that is logically judged by parameters

The flexibility and logic independence of plan 1 is higher than that of plan 2, but plan 1 requires every function to decide whether to receive and leads to redundant code. We tend to prefer the first one.

Suggestions for the first plan

  • NVM does not make any adjustments; everything is left to JS/TS to handle.
  • The JS/TS handling is implemented through an external lib; an official set of solutions is provided, and developers can also use their own implementation.

In JavaScript, you can leverage closures and first-class functions, implemented in a way similar to a mixin:

StandardToken.prototype = {
    init: function (name, symbol, decimals, totalSupply) {
        this._name = name;
        this._symbol = symbol;
        this._decimals = decimals | 0;
        this._totalSupply = new BigNumber(totalSupply);

        var from = Blockchain.transaction.from;
        this.balances.set(from, new BigNumber(totalSupply));
    },

    _: Mixin.decorate(Mixin.PAYABLE, function(){
        var from = Blockchain.transaction.from;
        var value = Blockchain.transaction.value;
        this.balances.set(from, value);
    }),
    // ... other methods ...
};

Can't find seed seed.conf file

Thank you for a great tutorial

I am stuck on the genesis block configuration part of the tutorial.

I can't seem to find the seed.conf file under go-nebulas/conf/default.

I have the genesis.conf and config.conf files

Please advise what steps I may have missed.

Thank you

Transactions with non-positive gas limit should not be pushed into transaction pool

In Nebulas, transactions from RPC, the console, or HTTP are carefully verified before they are pushed into the transaction pool. The verification contains several steps, such as checking the minimum gas price, the correctness of the signature, etc.

There is one more check we need to add to this process: the gas limit of the transaction should be greater than 0, otherwise it will be blocked from the transaction pool.

It should be noted that such a transaction will also not be executed in the old process.

func (pool *TransactionPool) push(tx *Transaction) error {
        ...
        if tx.gasLimit.Cmp(util.NewUint128().Int) <= 0 {
		metricsTxPoolGasLimitLessOrEqualToZero.Inc(1)
		return ErrGasLimitLessOrEqualToZero
	}
        ...
}

Crypto security improvement issues

There is a crypto security discussion inside Nebulas, and some questions have been raised:

  • The passphrase is transmitted through the RPC interface to the node (even locally). There is a risk of interception and leakage, which is a big risk. Is there any other way?

Some RPC interfaces:

	// NewAccount create a new account with passphrase
	NewAccount(ctx context.Context, in *NewAccountRequest, opts ...grpc.CallOption) (*NewAccountResponse, error)
	// UnlockAccount unlock account with passphrase
	UnlockAccount(ctx context.Context, in *UnlockAccountRequest, opts ...grpc.CallOption) (*UnlockAccountResponse, error)
	// SendTransactionWithPassphrase send transaction with passphrase
	SendTransactionWithPassphrase(ctx context.Context, in *SendTransactionPassphraseRequest, opts ...grpc.CallOption) (*SendTransactionPassphraseResponse, error)

  • The random number generation simply calls the Go standard library; are there better ways to improve randomness?

The Go random func:

// RandomCSPRNG a cryptographically secure pseudo-random number generator
func RandomCSPRNG(n int) []byte {
	buff := make([]byte, n)
	_, err := io.ReadFull(rand.Reader, buff)
	if err != nil {
		panic("reading from crypto/rand failed: " + err.Error())
	}
	return buff
}
  • Currently, the keystore file is stored directly in the node's memory or on its disk, which carries some risk. Future nodes could run on a machine with an HSM or on a mobile phone with a TEE, so a hardware encryption module for the keystore could be considered.

Can't start node (develop branch)

Hi,

Trying to connect a Linux node to the testnet, but I get the following error:

./neb -c conf/testnet-config.conf
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0xc204cf]

goroutine 1 [running]:
github.com/go-nebulas/vendor/github.com/nebulasio/go-nebulas/neblet.New(0xc420232c60, 0xc420058840, 0xa, 0x0)
/go/src/github.com/go-nebulas/vendor/github.com/nebulasio/go-nebulas/neblet/neblet.go:81 +0x3f
main.makeNeb(0xc42009af20, 0xc420230c70, 0xc420165c38, 0xafa275)
/go/src/github.com/go-nebulas/cmd/neb/main.go:170 +0x15a
main.neb(0xc42009af20, 0xc42009af20, 0xc420165c1f)
/go/src/github.com/go-nebulas/cmd/neb/main.go:87 +0x2f
github.com/go-nebulas/vendor/github.com/urfave/cli.HandleAction(0xcef420, 0xf96340, 0xc42009af20, 0xc420067140, 0x0)
/go/src/github.com/go-nebulas/vendor/github.com/urfave/cli/app.go:490 +0xd2
github.com/go-nebulas/vendor/github.com/urfave/cli.(*App).Run(0xc420070ea0, 0xc42006e060, 0x3, 0x3, 0x0, 0x0)
/go/src/github.com/go-nebulas/vendor/github.com/urfave/cli/app.go:264 +0x635
main.main()
/go/src/github.com/go-nebulas/cmd/neb/main.go:83 +0xbc1

PS: I've built neb from develop branch

send p2p message in nebulas should add retry strategy

We implemented a method for sending p2p messages in Nebulas:

SendMsg(name string, msg []byte, target string)

But we should provide a retry strategy for when sending a message fails. When we download a block from a node, the sooner the better; in that case we want to try again when we fail.
So we should add a retry strategy to the Nebulas p2p network (a sketch follows below).
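A minimal retry sketch with exponential backoff, assuming the send function reports an error (the SendMsg signature above omits its return value):

package example

import (
	"fmt"
	"time"
)

// SendFunc matches the shape of SendMsg above, plus an assumed error return.
type SendFunc func(name string, msg []byte, target string) error

// SendMsgWithRetry retries a failed send a few times with exponential backoff.
func SendMsgWithRetry(send SendFunc, name string, msg []byte, target string, maxRetries int) error {
	backoff := 500 * time.Millisecond
	var err error
	for attempt := 0; attempt <= maxRetries; attempt++ {
		if err = send(name, msg, target); err == nil {
			return nil
		}
		time.Sleep(backoff)
		backoff *= 2
	}
	return fmt.Errorf("send %q to %s failed after %d retries: %v", name, target, maxRetries, err)
}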

Add more command line options.(commands & flags)

Nebulas uses urfave/cli for building command line apps in Go. Currently, Nebulas provides only a few commands and flags; we need more command-line options.

Commands

Nebulas has already added some commands:

  • accountCommand: Account new, import, list and update etc.
  • consoleCommand: Neb console implementation.
  • networkCommand: Manage nebulas network

We need more commands like:

  • configCommand: Generate or update config file.
  • dumpCommand: Dump a specific block from storage.
  • versionCommand: Show version.
  • More commands...

Flags

Nebulas has already added these flags:

  • config: Load config file

We need more flags like:

  • config fields: every config field should get a corresponding neb flag.
  • Other more flags...
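As a hedged illustration of one of the proposed commands above (versionCommand), using urfave/cli in the v1 style the project vendors; the version variables are placeholders set at build time in a real binary:

package main

import (
	"fmt"

	"github.com/urfave/cli"
)

// Placeholder build information; a real binary would set these via -ldflags.
var (
	version = "0.0.0"
	commit  = "unknown"
)

// versionCommand shows the neb version, as proposed above.
var versionCommand = cli.Command{
	Name:  "version",
	Usage: "print version numbers",
	Action: func(ctx *cli.Context) error {
		fmt.Printf("neb version %s, commit %s\n", version, commit)
		return nil
	},
}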

Crypto algorithm compatibility check: use the test suites from Bitcoin/Ethereum to verify the correctness of our crypto algorithms.

Nebulas crypto covers transaction signatures and key encryption. Currently, Nebulas uses secp256k1 for signatures and scrypt for key encryption, like Bitcoin and Ethereum.

Because Bitcoin and Ethereum are used extensively and their crypto algorithms have been well validated, our crypto algorithms need a compatibility check: use the test suites from Bitcoin/Ethereum to make sure our implementations are correct.

signature

Nebulas chooses secp256k1 as the signature algorithm. We use Bitcoin's libsecp256k1 to sign, verify, and recover the public key.

Our testing files:

crypto/keystore/secp256k1/ecdsa_test.go

We need to verify reliability with the Bitcoin test suites.

encryption

Nebulas chooses scrypt as the encryption algorithm. Like Ethereum, Nebulas encrypts the private key in a key file. The difference is that Ethereum uses Keccak256 to calculate the hash while Nebulas uses SHA3-256. Because of the wide use of Ethereum, Nebulas is also compatible with Ethereum keystores.

Our testing files:

crypto/cipher/scrypt_test.go

We need to verify reliability with the Ethereum test suites.

When the seed-node stops running, the normal-node stops unexpectedly

Test environment

ubuntu 16.04,
nebulas v0.4.0,
go 1.9.2,
dep 0.3.2

Test procedure

  1. First run the seed-node on 192.168.31.106

  2. Then run the normal-node on 192.168.31.100

  3. In steps 1 and 2, both nodes run normally

  4. Terminate the neb process on the seed-node with CTRL+C; at this point the normal-node's process also terminates

My guess
This means that when there is no seed node, a normal node cannot keep running. In another scenario, if a seed node in the network goes down, all normal nodes connected to that seed node will go down too; wouldn't that make the network unstable?
My understanding is that, for a normal node, when a seed node goes down it should automatically look for other seed nodes in the network.

Attachment

The attachment is the normal-node's log at the moment the process terminated.

error201712292007.log

Nebulas segwit support discuss

Nebulas is discussing whether to adopt Segregated Witness. Do we need to separate the signatures from the transaction body?

segwit

Anyone can comment on this and discuss it.

Genesis Block Configuration

The genesis block carries all the initial information of a blockchain. This information includes:

  1. Initial token distribution.
  2. Initial consensus, such as the first dynasty in DPoS consensus.
  3. Meta, such as ChainId.

We need a configuration schema that can carry all of this initial information or more. And we also need to initialize the genesis block from the configuration.

An example configuration schema:

{
      token_distribution: {
              "address1": 10,
              "address2": 15
      },
      consensus: {
              dpos: {
                      dynasty: ["addr1", "addr2", ...]
              }
      },
      meta: {
              chain_id: 0
      }
}

The schema can be added under neblet/pb and parsed in core/genesis.go.

Feel free to discuss it and submit pull requests.

Support multiple IP

Currently, a Nebulas node binds one IP address and listens on one port when it starts. This is implemented in the following way.
Create a multiaddr:

address, err := multiaddr.NewMultiaddr(
		fmt.Sprintf(
			"/ip4/%s/tcp/%d",
			node.config.IP,
			node.config.Port,
		),
	)

new network:

network, err := swarm.NewNetwork(
		ctx,
		[]multiaddr.Multiaddr{address},
		node.id,
		node.peerstore,
		nil,
	)

set the node host:

options := &basichost.HostOpts{}
node.host, err = basichost.NewHost(node.context, network, options)

Now we want our node to support multiple IPs, so that other nodes can connect to it via any one of them (a sketch follows below).
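Since swarm.NewNetwork above already takes a slice of multiaddrs, a hedged sketch of multi-IP support is simply to build one address per configured IP and pass them all in. This fragment reuses the variables from the snippets above; the IPs config field is an assumption:

// Build one multiaddr per configured IP instead of a single one.
addresses := []multiaddr.Multiaddr{}
for _, ip := range node.config.IPs { // hypothetical multi-IP config field
	address, err := multiaddr.NewMultiaddr(
		fmt.Sprintf("/ip4/%s/tcp/%d", ip, node.config.Port),
	)
	if err != nil {
		return err
	}
	addresses = append(addresses, address)
}

// Hand the whole slice to the swarm so the node listens on every address.
network, err := swarm.NewNetwork(
	ctx,
	addresses,
	node.id,
	node.peerstore,
	nil,
)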

Reorganization the folders structure for testnet

Hello!
What do you think about reorganizing the folder structure and adding testnet configuration files?
It seems to me it would be convenient:

~/Go/src/github.com/nebulasio/go-nebulas $ tree testnet/
testnet/
├── conf
│   ├── config.conf
│   ├── ed25519key
│   └── genesis.conf
├── data.db
└── keydir

If you like it, I can do this.
