Introduction

Introducing ScaleChain, the groundbreaking altcoin project designed to revolutionize payment networks between robots and humans. In an era marked by the proliferation of automation and robotics, ScaleChain emerges as the solution to facilitate effortless transactions between intelligent machines and their human counterparts.

Operating on a decentralized blockchain platform, ScaleChain offers unparalleled security and transparency in transactions. Leveraging smart contract technology, ScaleChain enables automated payments for services rendered by robots, spanning from industrial tasks to household chores, while also serving as a convenient payment gateway for humans engaging with automated systems.

With a focus on interoperability and user-friendly features, ScaleChain aims to streamline the exchange of value between robots and humans, fostering a dynamic ecosystem where automation enhances productivity and efficiency. Join us in shaping the future of commerce and collaboration with ScaleChain – where machines and humans unite through seamless transactions.

For the avoidance of doubt, this particular copy of the software is released under version 3 of the GNU General Public License. It is brought to you by ScaleChain.

Copyright (c) 2015, ScaleChain and/or its affiliates. All rights reserved.

How to build

Create a jar file with all dependencies included.

gradle clean test shadowJar

Start up the node with the run.sh script.

# copy the environment template and edit as you want
cp scripts/.env-template scripts/.env

# run it
scripts/run.sh

Alternatively, you can run the ScaleChainPeer class with the created jar on the classpath.

java -cp ./scalechain-cli/build/libs/scalechain-cli-all.jar io.scalechain.blockchain.cli.ScaleChainPeer

How to test

Run unit tests

gradle clean test

Run the automated end-to-end tests written in Python

gradle clean test shadowJar
# kill all ScaleChainPeer java processes, and then run all end to end tests
scripts/kill-all.sh ; scripts/run-tests.sh

Getting Started

A guide to starting a ScaleChain peer-to-peer network.

Supported Features

  • Compatible with Bitcoin remote procedure calls and peer-to-peer protocols.

Under construction

The ScaleChain source code is under construction. Big changes are coming to stabilize the code.

Current project status

Unit tests pass. Automated end-to-end tests are under construction.

License

ScaleChain Commercial License for OEMs, ISVs, and VARs ScaleChain provides its ScaleChain Server and Client Libraries under a dual license model designed to meet the development and distribution needs of both commercial distributors (such as OEMs, ISVs, and VARs) and open source projects.

For OEMs, ISVs, VARs and Other Distributors of Commercial Applications: OEMs (Original Equipment Manufacturers), ISVs (Independent Software Vendors), VARs (Value Added Resellers) and other distributors that combine and distribute commercially licensed software with ScaleChain software and do not wish to distribute the source code for the commercially licensed software under version 3 of the GNU General Public License (the "GPL") must enter into a commercial license agreement with ScaleChain.

For Open Source Projects and Other Developers of Open Source Applications: For developers of Free Open Source Software ("FOSS") applications under the GPL that want to combine and distribute those FOSS applications with ScaleChain software, ScaleChain open source software licensed under the GPL is the best option.

For developers and distributors of open source software under a FOSS license other than the GPL, ScaleChain makes its GPL-licensed ScaleChain Client Libraries available under a FOSS Exception that enables use of the ScaleChain Client Libraries under certain conditions without causing the entire derivative work to be subject to the GPL.


Issues

Wallet RPCs Planning

Analysis

  1. analyze wallet.dat
    • output: wallet-format-analysis.md
  2. analyze requirements for wallet RPCs
    • output: sequence diagram & summary of requirements documents
  3. analyze BlockDatabase
  4. prototype WalletDatabase

Design

  1. design wallet architecture
    • output: architecture documents & rocksdb schema
  2. design wallet RPCs
    • output: sequence diagram & module design documents

Implement

  1. implement wallet database (test cases & production code)
  2. implement wallet RPCs (test cases & production code)

CheckMultiSig : Signature format validation fails for some transactions

problem

CheckMultiSig fails to verify the signature format for a specific transaction.
[[ErrorCode(invalid_signature_format)]message=ScriptOp:CheckSig]

Transaction and locking/unlocking script :

        MergedScript(
          transaction=
            Transaction(
              version=1,
              inputs=
                List(
                  NormalTransactionInput(
                    outputTransactionHash=TransactionHash(bytes("60a20bd93aa49ab4b28d514ec10b06e1829ce6818ec06cd3aabd013ebcdc4bb1")),
                    outputIndex=0L,
                    unlockingScript=UnlockingScript(bytes("0047304402203f16c6f40162ab686621ef3000b04e75418a0c0cb2d8aebeac894ae360ac1e780220ddc15ecdfc3507ac48e1681a33eb60996631bf6bf5bc0a0682c4db743ce7ca2b01")),
                    /* ops:ScriptOpList(operations=Array(
                          Op0(),
                          OpPush(71,ScriptBytes(bytes("304402203f16c6f40162ab686621ef3000b04e75418a0c0cb2d8aebeac894ae360ac1e780220ddc15ecdfc3507ac48e1681a33eb60996631bf6bf5bc0a0682c4db743ce7ca2b01"))))),
                          hashType:None */
                    sequenceNumber=4294967295L
                  )
                ),
                outputs=
                  List(
                    TransactionOutput(
                      value=1000000L,
                      lockingScript=LockingScript(bytes("76a914660d4ef3a743e3e696ad990364e555c271ad504b88ac"))
                      /* ops:ScriptOpList(operations=Array(
                            OpDup(),
                            OpHash160(),
                            OpPush(20,ScriptBytes(bytes("660d4ef3a743e3e696ad990364e555c271ad504b"))),
                            OpEqualVerify(),
                            OpCheckSig(Script(bytes("76a914660d4ef3a743e3e696ad990364e555c271ad504b88ac"))))) */
                    )
                  ),
                lockTime=0L
              /* hash:bytes("23b397edccd3740a74adb603c9756370fafcde9bcc4483eb271ecad09a94dd63") */
            ),
          inputIndex=0,
          unlockingScript=UnlockingScript(bytes("0047304402203f16c6f40162ab686621ef3000b04e75418a0c0cb2d8aebeac894ae360ac1e780220ddc15ecdfc3507ac48e1681a33eb60996631bf6bf5bc0a0682c4db743ce7ca2b01"))
          /* ops:ScriptOpList(operations=Array(
                  Op0(),
                  OpPush(71,ScriptBytes(bytes("304402203f16c6f40162ab686621ef3000b04e75418a0c0cb2d8aebeac894ae360ac1e780220ddc15ecdfc3507ac48e1681a33eb60996631bf6bf5bc0a0682c4db743ce7ca2b01"))))),
                  hashType:None */,
          lockingScript=LockingScript(bytes("514104cc71eb30d653c0c3163990c47b976f3fb3f37cccdcbedb169a1dfef58bbfbfaff7d8a473e7e2e6d317b87bafe8bde97e3cf8f065dec022b51d11fcdd0d348ac4410461cbdcc5409fb4b4d42b51d33381354d80e550078cb532a34bfa2fcfdeb7d76519aecc62770f5b0e4ef8551946d8a540911abe3e7854a26f39f58b25c15342af52ae"))
          /* ops:ScriptOpList(operations=Array(
                  Op1(),
                  OpPush(65,ScriptBytes(bytes("04cc71eb30d653c0c3163990c47b976f3fb3f37cccdcbedb169a1dfef58bbfbfaff7d8a473e7e2e6d317b87bafe8bde97e3cf8f065dec022b51d11fcdd0d348ac4"))),
                  OpPush(65,ScriptBytes(bytes("0461cbdcc5409fb4b4d42b51d33381354d80e550078cb532a34bfa2fcfdeb7d76519aecc62770f5b0e4ef8551946d8a540911abe3e7854a26f39f58b25c15342af"))),
                  OpNum(2),
                  OpCheckMultiSig(Script(bytes("514104cc71eb30d653c0c3163990c47b976f3fb3f37cccdcbedb169a1dfef58bbfbfaff7d8a473e7e2e6d317b87bafe8bde97e3cf8f065dec022b51d11fcdd0d348ac4410461cbdcc5409fb4b4d42b51d33381354d80e550078cb532a34bfa2fcfdeb7d76519aecc62770f5b0e4ef8551946d8a540911abe3e7854a26f39f58b25c15342af52ae"))))) */
          )

root cause

The first byte of the S value of the signature in this transaction is negative (its high bit is set).

solution

It looks like IsValidCanonicalEncoding, which checks whether the first byte of the S value of the signature is positive, was added after the above transaction had already been put into the blockchain.

We will skip checking whether the first byte of the S value of the signature is positive.

We have more evidence that negative values are allowed for the S value, but we are not sure how the Bitcoin Core implementation passes validation of the above transaction.

from : https://github.com/bitcoin/bips/blob/master/bip-0062.mediawiki
Inherent ECDSA signature malleability ECDSA signatures themselves are already malleable: taking the negative of the number S inside (modulo the curve order) does not invalidate it.
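
Below is a minimal sketch of the intended relaxed check, assuming a hypothetical SignatureFormat helper (not the actual ScaleChain validator): it verifies only the DER structure of the signature and deliberately does not reject an S value whose first byte has its high bit set.

object SignatureFormat {
  // Parse a DER-encoded ECDSA signature laid out as:
  //   0x30 <totalLen> 0x02 <rLen> <R bytes> 0x02 <sLen> <S bytes> [hashType]
  def hasValidStructure(sig: Array[Byte]): Boolean = {
    if (sig.length < 9 || sig(0) != 0x30 || sig(2) != 0x02) return false
    val rLen = sig(3) & 0xff
    val sOffset = 4 + rLen
    if (sOffset + 2 > sig.length || sig(sOffset) != 0x02) return false
    val sLen = sig(sOffset + 1) & 0xff
    // We intentionally do NOT reject the case where the first byte of S is
    // "negative" (>= 0x80); old transactions in the blockchain contain such signatures.
    sOffset + 2 + sLen <= sig.length
  }
}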

Add debug-ability for the transaction verification process

What

Add debug-ability for the transaction verification process

Why

Currently, almost half of the transactions are failing verification.
We need to find out the root cause of each failure and resolve the issues.

How

  1. Add descriptive information on exceptions.
  2. Add logging code to check the root cause of the issue.
  3. Add statistics tracking code for failure categories.
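
As a rough illustration of item 3, here is a hedged sketch (a hypothetical helper, not existing ScaleChain code) that counts verification failures by category so the dominant root causes become visible:

import scala.collection.mutable

object VerificationFailureStats {
  private val counters = mutable.Map.empty[String, Long]

  // Record one failure for a category such as "invalid_signature_format".
  def recordFailure(category: String): Unit = synchronized {
    counters(category) = counters.getOrElse(category, 0L) + 1
  }

  // Dump "category : count" lines, e.g. for a periodic log statement.
  def report(): String = synchronized {
    counters.map { case (category, count) => s"$category : $count" }.mkString("\n")
  }
}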

Implement full duplex communication over Akka Stream

What

Need to implement full duplex communication over Akka Stream

Requirements

  1. Both server and client communicate via the PeerNode actor. All business logic is in the actor, and the actor does not need to know about the communication channel: a clean separation of communication channel management and business logic.
  2. Use the following configuration to connect peers: scalechain-cli/src/main/resources/scalechain.conf
  3. The server should be able to send requests even if the client has not sent any requests.

Why

The Camel server consumer turned out not to support full-duplex communication.
(It cannot send requests to clients; it only accepts requests from clients and sends responses.)

How

Read the core concepts of Akka Streams.
http://doc.akka.io/docs/akka-stream-and-http-experimental/2.0.3/scala/stream-flows-and-basics.html

Use the following tutorial to create a working TCP client and server.
http://doc.akka.io/docs/akka-stream-and-http-experimental/2.0.3/scala/stream-io.html

Use the following tutorial to connect an actor to the TCP connection.
http://doc.akka.io/docs/akka-stream-and-http-experimental/current/scala/stream-integrations.html#integrating-with-actors
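
A rough sketch of the full-duplex idea with Akka Streams TCP follows; the port, names, and framing are illustrative assumptions, and the actual message codecs are omitted. The per-connection flow is built from an independent sink and source, so the server side can push messages without waiting for a client request.

import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Flow, Sink, Source, Tcp}
import akka.util.ByteString

object FullDuplexSketch extends App {
  implicit val system = ActorSystem("scalechain-sketch")
  implicit val materializer = ActorMaterializer()

  Tcp().bind("127.0.0.1", 8333).runForeach { connection =>
    // Incoming bytes are consumed by a sink (e.g. forwarded to a PeerNode actor) ...
    val incoming = Sink.foreach[ByteString](bytes => println(s"received ${bytes.length} bytes"))
    // ... while outgoing bytes come from a source the server controls, so the server
    // can send requests without being asked (here just a single greeting).
    val outgoing = Source.single(ByteString("version"))
    connection.handleWith(Flow.fromSinkAndSource(incoming, outgoing))
  }
}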

Design P2P networking layer with Akka Stream

What

Redesign P2P networking layer with Akka Streams.

Why

After using both Akka Streams and Akka actors for implementing P2P layer, we have found the following facts.

  1. Akka Actors are not type-safe for receiving messages.
  2. Akka Streams are type-safe for input/output messages.
  3. Akka Streams checks configuration/programming errors during startup, whereas Akka Actors surface errors only upon receipt of messages.

We will use Akka Streams where possible, and fall back to Akka Actors where it is not.

How

  1. Read Akka Streams manual.

I have read up to page 25 so far. Today, I will read up to page 93.

http://doc.akka.io/docs/akka-stream-and-http-experimental/2.0.3/AkkaStreamAndHTTPScala.pdf

(Table of contents of the Akka Streams documentation, sections 1.1 Introduction through 1.16 Configuration, pages 1-93.)
  2. Redesign using UML
    I have to meet customers from Wednesday to Thursday, so I can start working on this issue this Friday.
    On Friday, I will start designing the P2P layer using StarUML (macOS edition).
    To illustrate the requirements, I will use use case diagrams.
    To illustrate message exchanges, I will use sequence diagrams.

I will come up with the actual list of diagrams on Friday.

RPC calls such as getrawtransaction hangs during IBD

problem

RPC calls such as getrawtransaction hang during IBD (the initial block download process).

root cause

During the IBD process, many blocks arrive from peers. These blocks are sent to the block processor, but it takes the block processor time to verify the signatures of transactions, as signature verification is a costly operation.

Evidence: whenever we hit this issue, jstack shows that we are verifying transaction signatures.

"ScaleChainPeer-akka.actor.default-dispatcher-20" #68 prio=5 os_prio=31 tid=0x00007fcb01594800 nid=0x9403 runnable [0x0000700003253000]
   java.lang.Thread.State: RUNNABLE
        at org.spongycastle.math.ec.custom.sec.SecP256K1Field.multiply(SecP256K1Field.java:78)
        at org.spongycastle.math.ec.custom.sec.SecP256K1Point.twice(SecP256K1Point.java:226)
        at org.spongycastle.math.ec.custom.sec.SecP256K1Point.twicePlus(SecP256K1Point.java:275)
        at org.spongycastle.math.ec.ECAlgorithms.implSumOfMultiplies(ECAlgorithms.java:480)
        at org.spongycastle.math.ec.ECAlgorithms.implSumOfMultiplies(ECAlgorithms.java:434)
        at org.spongycastle.math.ec.ECAlgorithms.implSumOfMultipliesGLV(ECAlgorithms.java:395)
        at org.spongycastle.math.ec.ECAlgorithms.sumOfTwoMultiplies(ECAlgorithms.java:90)
        at org.spongycastle.crypto.signers.ECDSASigner.verifySignature(ECDSASigner.java:162)
        at io.scalechain.crypto.ECKey.verify(ECKey.java:37)
        at io.scalechain.blockchain.script.ops.CheckSig$class.checkSig(Crypto.scala:186)
        at io.scalechain.blockchain.script.ops.OpCheckSig.execute(Crypto.scala:333)
...

solution

  1. Move RPC operations out of actors.
  2. In the future, we may need a worker pool whose workers perform costly operations such as transaction validation (see the sketch below).
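
A hedged sketch of the worker-pool direction (hypothetical names, not the actual fix): run the costly verification on a dedicated execution context so the RPC path is not starved.

import java.util.concurrent.Executors
import scala.concurrent.{ExecutionContext, Future}

object VerificationPool {
  // Dedicated thread pool for CPU-heavy signature verification.
  private val pool = Executors.newFixedThreadPool(Runtime.getRuntime.availableProcessors())
  private implicit val verificationContext: ExecutionContext = ExecutionContext.fromExecutor(pool)

  // The RPC/actor dispatcher only schedules the work and gets a Future back.
  def verifyAsync(verify: () => Boolean): Future[Boolean] = Future(verify())

  def shutdown(): Unit = pool.shutdown()
}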

Analyze Bitcoin core for each requirement in issue #43

What

Analyze Bitcoin core source code for each requirement for p2p networking in the following issue.
#43

Why

Need to understand how Bitcoin core works, before implementing ScaleChain peer-to-peer networking layer, which is compatible with Bitcoin core.

How

  1. Create a .md file for each requirement.
  2. Analyze Bitcoin core source code for each requirement.
  3. Summarize the result of Bitcoin core source code analysis in each .md file.

Design details for Wallet, Account and CoinAddress

What

  • Design details for Wallet, Account and CoinAddress

Why

  • Need to implement the wallet-related RPCs
  • The current version does not model the relationships between these classes

How

  • Analyze the relationships among Wallet, Account and CoinAddress
  • Design the details and write design documents
  • Implement the relationships among Wallet, Account and CoinAddress
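
A minimal sketch of the relationship under analysis, using hypothetical case classes rather than the actual ScaleChain model:

case class CoinAddress(base58: String)
case class Account(name: String, addresses: List[CoinAddress])
case class Wallet(accounts: List[Account]) {
  // A wallet has many accounts, and an account has many coin addresses;
  // getaccount-style lookups walk this relationship in reverse.
  def accountOf(address: CoinAddress): Option[Account] =
    accounts.find(_.addresses.contains(address))
}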

What is a Block ?

14:25 don1ruill: is a block equal to a bitcoin ?
14:26 don1ruill: if there are two parties doing a transaction, and the transaction is about a certain amount of bitcoin
14:27 don1ruill: means a block is a function of bitcoins
14:27 don1ruill: then a hash is affected to a transaction
14:28 don1ruill: and then define what is a hash for a bitcoin

Investigate Mini-blockchain.

what

Investigate Mini-blockchain.
http://cryptonite.info/wiki/index.php?title=Main_Page

why

The mini-blockchain scheme suggests a way to get rid of historical data. Learn about the idea, attack vectors, solutions, etc.

how

  1. Read the white paper.
    http://cryptonite.info/files/mbc-scheme-rev2.pdf
  2. Read the discussion history.
    https://bitcointalk.org/index.php?topic=195275.0;all
  3. Take a look at the source code of cryptonite which implements the mini-blockchain.
    http://cryptonite.info/wiki/index.php?title=Main_Page

sbt test fails

sbt test fails with:

[info] BlockDirectoryReaderSpec:
[info] readFrom
[info] - should read all blocks in a file *** FAILED ***
[info]   java.lang.NullPointerException:

Looking at the test, it refers to the hardcoded path /Users/kangmo. It seems this should be replaced with a relative path.
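
A small sketch of the suggested fix, assuming a hypothetical test resource location (the actual block files may live elsewhere):

import java.io.File

object BlockDirectory {
  // Resolve the block directory relative to the project instead of /Users/kangmo.
  val fromProject: File = new File("scalechain-storage/src/test/resources/blocks")

  // Or, if the test data is packaged on the classpath:
  def fromClasspath(resource: String): File = new File(getClass.getResource(resource).toURI)
}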

Test 6 RPCs, fix major issues.

What

Test the following RPCs and make sure they work well.

  • getbestblockhash
  • getblock
  • help
  • sendrawtransaction
  • getrawtransaction
  • decoderawtransaction

Why?

Need to get these RPCs working.

How?

Test the RPCs using real Bitcoin main-net blocks and transactions. We have already connected to the bitcoind main net.
Use the bash scripts in data/scripts/jsonrpc that send requests to ScaleChain.

Analyze how bitcoin regtest mode works.

What

Analyze how bitcoin regtest mode works.

Why

ScaleChain should also have a regtest mode. We will implement automated test cases that run in regtest mode.

How

Analyze the regtest mode source code of Bitcoin core.

Transaction verification failure results in node failure

what

If a transaction verification fails, the scalechain node shuts down.

why

We did not catch the exception; the Akka actor caught it and stopped.

how

Catch TransactionVerificationException and write a log, instead of leaving the actor to act on it.
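
A hedged sketch of the intended handling (the helper and its signature are hypothetical; the exception type is the one named in this issue):

import org.slf4j.LoggerFactory

object SafeVerification {
  private val logger = LoggerFactory.getLogger(getClass)

  // Run a verification step; on failure, log and report false instead of throwing.
  def verifySafely(txHash: String)(verify: => Unit): Boolean =
    try { verify; true }
    catch {
      case e: RuntimeException => // e.g. TransactionVerificationException
        logger.warn(s"Transaction verification failed for $txHash", e)
        false
    }
}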

Connect RPCs to DiskBlockDatabase.

what

Connect following RPCs to DiskBlockDatabase.

why

We have disk block database ready. Let's start connecting RPCs to it.

how

  1. getbestblockhash : call DiskBlockDatabase.getBestBlockHash
  2. getblock : call DiskBlockDatabase.getBlock
  3. getblockhash : need to design how to get a block by block height. We need to return a block from the best blockchain, in case any fork happened.
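
A minimal sketch of the wiring, with method names assumed rather than taken from the actual DiskBlockDatabase API:

// Assumed interface; the real DiskBlockDatabase methods may differ.
trait DiskBlockDatabaseLike {
  def getBestBlockHash(): Option[String]
  def getBlock(blockHash: String): Option[Array[Byte]]
  def getBlockHashByHeight(height: Long): Option[String] // must follow the best chain on forks
}

class BlockRpcService(db: DiskBlockDatabaseLike) {
  def getbestblockhash(): Option[String] = db.getBestBlockHash()
  def getblock(hash: String): Option[Array[Byte]] = db.getBlock(hash)
  def getblockhash(height: Long): Option[String] = db.getBlockHashByHeight(height)
}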

Analyze IBM openblockchain source code

What

Analyze IBM openblockchain source code.

Why

Understand the problem of blockchain that IBM wants to solve.

How

  1. Read code ;-)
  2. Summarize important code snippets to doc/refs/obc-analysis.md

Initial block download from Bitcoin core

what

Download blocks from Bitcoin core using headers-first method :
https://bitcoin.org/en/developer-guide#headers-first

why

(1) Need to download blocks to become a full node.
(2) Blocks-first method has drawbacks, such as downloading blocks on a shorter chain than the longest one.

how

(1) Create an actor that has two states, the Version Exchange State and the Status Exchange State.
(2) Version Exchange State : exchange version and verack messages with peers.
(3) Status Exchange State : exchange transactions and blocks with peers.
(4) The actor will change its state using a 'become' invocation, as sketched below.
(5) One actor will be created per peer.
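
A hedged sketch of the two-state actor; the messages and names are illustrative, not the actual ScaleChain protocol classes.

import akka.actor.Actor

case class Version(protocolVersion: Int)
case object Verack
case class Headers(blockHashes: List[String])

class PeerSketchActor extends Actor {
  // Version Exchange State: only version/verack are handled.
  def versionExchange: Receive = {
    case Version(_) => sender() ! Verack
    case Verack     => context.become(statusExchange) // move to the next state
  }

  // Status Exchange State: exchange headers, blocks, and transactions with the peer.
  def statusExchange: Receive = {
    case Headers(hashes) => // request the corresponding blocks (omitted)
  }

  def receive: Receive = versionExchange
}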

Rewrite Codec | Block layer to use the new codec interface

What :
Rewrite codec_block layer to use the new codec interface

Why :

  1. The case classes in the codec | block layers are used by the Bitcoin protocol messages. Ex> the "tx" message has a description of a transaction, and the same structure is embedded in the "block" message as well as the "tx" message.
  2. To get rid of duplicate code in the parse and serialize methods of the codec_block layer.

How :

  1. Rewrite BlockSerializer and BlockParser by merging them into one codec, like we did in the codec_proto layer.
  2. The codec_block layer will be removed, so the new code will be written in the codec_proto layer.
  3. The codec_proto layer will be renamed to the proto_codec layer.
  4. The block layer will be merged into the proto layer.

After the code change, we will have the following layers.

+-------------------------+----------------------+
|          Cli            |          Main        |
+-------------------------+----------------------+
|                   API                          |
+------------------------------------------------+
|                   APIDomain                    |
+------------------------------------------------+
|                   Net                          |
+------------------------------------------------+
|                   Transaction                  |
+------------------------------------------------+
|                   Storage                      |
+------------------------------------------------+
|                   Script                       |
+----------------+----------------+--------------+
|           Proto|Codec           |              |
+---------------------------------+    Crypto    |
|              Proto              |              |
+----------------+----------------+--------------+
|                   Util Layer                   |
+------------------------------------------------+
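
For illustration, a rough sketch of the single-codec shape this refactoring aims for; the types and wire format here are simplified placeholders, not the real ScaleChain codecs.

import java.nio.ByteBuffer

// One object per message exposes both directions, instead of separate Serializer/Parser classes.
trait Codec[T] {
  def serialize(value: T): Array[Byte]
  def parse(bytes: Array[Byte]): T
}

case class BlockHeaderSketch(version: Int, nonce: Long)

object BlockHeaderCodec extends Codec[BlockHeaderSketch] {
  def serialize(h: BlockHeaderSketch): Array[Byte] =
    ByteBuffer.allocate(12).putInt(h.version).putLong(h.nonce).array()

  def parse(bytes: Array[Byte]): BlockHeaderSketch = {
    val buf = ByteBuffer.wrap(bytes)
    BlockHeaderSketch(buf.getInt(), buf.getLong())
  }
}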

Send/Receive Version/Verack to/from Bitcoin core.

what

Create an actor that sends Version to Bitcoin core, and receives Verack in response to the Version received from Bitcoin core.

why

  1. Need to make sure our protocol implementation works well in end-to-end test scenarios.
  2. To communicate with a peer in Bitcoin network, we need to exchange Version/Verack messages.

how

  1. We have two peers, (a) ScaleChain P2P node (b) bitcoin core.
  2. ScaleChain P2P node will act as if it were a Bitcoin core.
  3. We will test it on main net, as we do not need to send money right now.
  4. We will create an Akka actor that knows how to send/receive bitcoin protocols to/from Bitcoin core.

Prototype using embedded Cassandra for block storage.

what

Prototype using embedded Cassandra for block storage.

why

To see if Cassandra is a good option for the block storage.

how

  1. Implement KeyValueDatabase trait using Cassandra.
  2. Do not create a separate Cassandra process; embed it within ScaleChain.
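
A minimal sketch of the interface such a prototype would target; the trait name is taken from the issue, but the method signatures are assumptions.

trait KeyValueDatabase {
  def get(key: Array[Byte]): Option[Array[Byte]]
  def put(key: Array[Byte], value: Array[Byte]): Unit
  def del(key: Array[Byte]): Unit
  def close(): Unit
}

// An embedded-Cassandra implementation would keep the Cassandra service in-process,
// e.g. class CassandraDatabase(dataDir: String) extends KeyValueDatabase { ... }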

Rewrite the net layer using Akka Streams

What

Rewrite the net layer using Akka Streams.

Why

  1. Currently the net layer is implemented using Akka Actors, which is not type-safe.
  2. Also, the code in the net layer is not tested. Rewrite it with test code.

How

  1. Make sure that the ScaleChain node connects to Bitcoin network. We tested this before.
  2. Read Akka Streams manual again.
  3. Rewrite the PeerBroker, without using any actor.
  4. Make sure that the new implementation successfully connects to the Bitcoin network, as we already verified that the ScaleChain node can connect to it.

Prototype using column families of rocksdb for wallet index

what

Prototype using column families of rocksdb for wallet index

why

Need to manage wallet data efficiently.
A wallet has many accounts --> an account has many addresses.

how

  1. Implement column families in RocksDB
  2. Test account (column family) - address (key) - address info (value)
  3. Test with many accounts (column families), > 1,000,000
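
A hedged sketch of the column-family layout (assumes the rocksdbjni dependency and a recent API; paths, names, and values are illustrative):

import org.rocksdb.{ColumnFamilyDescriptor, ColumnFamilyHandle, Options, RocksDB}

object WalletIndexSketch extends App {
  RocksDB.loadLibrary()
  val db = RocksDB.open(new Options().setCreateIfMissing(true), "./wallet-index")

  // One column family per account.
  val account: ColumnFamilyHandle =
    db.createColumnFamily(new ColumnFamilyDescriptor("account-savings".getBytes))

  // address (key) -> address info (value)
  db.put(account, "1ExampleAddress".getBytes, """{"purpose":"receiving"}""".getBytes)
  println(new String(db.get(account, "1ExampleAddress".getBytes)))

  account.close()
  db.close()
}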

Move classes required to implement RPCs to lower layers

What

Move classes required to implement RPCs to lower layers.

Why

Each RPC should use classes under the lower layers. Having all necessary classes on the RPC layer makes the code hard to maintain.

How

Move classes to the storage and net layers.
Create a wallet layer, which has features related to a virtual currency wallet.

Connect ScaleChain client to Bitcoin core using Akka Streams

What

Connect ScaleChain client to Bitcoin core using Akka Streams

Why

Need to connect to Bitcoin core to make sure that the ScaleChain protocol is compatible with the Bitcoin protocol.

How

  1. Add encoder and decoder on the "echo" flow of serverLogic.
  2. Connect to Bitcoin core and see if ScaleChain and Bitcoin core communicate without any error.
  3. Make sure sending a message from a server to a client works well.
  4. Make sure sending a message from a client to a server works well.

Add block storage, transaction storage

What

Add block storage

  1. Feature : Store blocks in fixed size block files, and each block file has the on-the-wire format of blocks.
  2. Feature : Search blocks by block hash
  3. Feature : Store block headers separately, to implement the initial block download steps with the headers-first approach.
  4. Feature : The block storage should be persistent.
  5. Feature : Search blocks by transaction hash
  6. Feature : Search UTXO by address.

Add transaction storage.

  1. Feature : Store transactions in memory, and each transaction should be searchable by the transaction hash.
  2. Feature : Search blocks by transaction hash
  3. Feature : Search UTXO by address.

Transaction Reader.

By looking up the block storage and the transaction storage, the transaction reader should be able to:

  1. Feature : Search blocks by transaction hash,
  2. Feature : Search UTXO by address,

Why

To manage blocks and transactions, we need these modules.

How

Add block storage

Implement block storage that stores blocks in multiple fixed-size block files.
Each block file is implemented using a random access file and a file channel.
The block storage uses the Bitcoin protocol encoder to encode/decode block header/block/transaction data.
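
A rough sketch of the fixed-size block file idea using a random access file and a file channel; this is a hypothetical helper, with record framing and indexing omitted.

import java.io.RandomAccessFile
import java.nio.ByteBuffer

class BlockFileSketch(path: String, maxFileSize: Long = 128L * 1024 * 1024) {
  private val file = new RandomAccessFile(path, "rw")
  private val channel = file.getChannel

  // Whether this fixed-size file can still hold the record; otherwise a new file is started.
  def hasRoomFor(record: Array[Byte]): Boolean = channel.size() + record.length <= maxFileSize

  // Append a serialized block and return the offset where it was written.
  def append(record: Array[Byte]): Long = {
    val offset = channel.size()
    channel.position(offset) // always write at the end of the file
    channel.write(ByteBuffer.wrap(record))
    offset
  }

  // Read a record back by the (offset, length) recorded in the block index.
  def read(offset: Long, length: Int): Array[Byte] = {
    val buf = ByteBuffer.allocate(length)
    channel.read(buf, offset)
    buf.array()
  }

  def close(): Unit = { channel.close(); file.close() }
}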

Add transaction storage.

Use a HashMap to store transactions by hash.

Transaction Reader.

Transaction Reader uses block storage and transaction storage to implement features.

GetAccount RPC

What

Implement getaccount RPC (developer-reference)

Feature: returns the name of the account associated with the given address

Why

To manage Wallet

How

  1. Add test cases
  2. Implement getaccount RPC

Add Bitcoin Protocol Encoder/Decoder

What :
Add Bitcoin Protocol Encoder/Decoder as a channel adapter.

Why :
ScaleChain provides 100% compatibility with the Bitcoin protocol, so it can run as a Bitcoin node based on that protocol compatibility.

How :

  1. Use Akka/Camel/Netty 4 to implement the transport and protocol of a Bitcoin P2P node.
  2. Use the StringEncoder and StringDecoder of Netty 4 as a model to implement BitcoinMessageEncoder and BitcoinMessageDecoder.
  3. Write integration tests to make sure the ScaleChain Bitcoin protocol implementation works well with the Bitcoin testnet.
  4. Write unit tests.

Best block is not found by getblock RPC

Problem

After getting the hash of the best block using getbestblockhash RPC, the getblock RPC returns null for the best block hash.

Root cause

A block with only a block header and no block data was stored as the best block.
Because getblock returns a block only if it has block data, the best block hash (whose block had no data) was not found by getblock.

Solution

Update the best block hash only when we have the block data as well as the block header.
This also fixes an IBD issue: IBD downloads blocks starting from the best block hash, so if we set the hash of a header-only block as the best block hash, we end up with a best block without any data after restarting scalechaind.

ListTransactions RPC

What

Implement listtransactions RPC (developer-reference)

Feature : returns the most recent transactions that affect the wallet

Why

To manage Wallet

How

  1. Add test cases
    • if the given account exists, return the transactions associated with that account
    • if no account is given, return all transactions associated with the wallet
  2. Implement listtransactions RPC

Implement block store with record-based files.

Implement block store with record-based files.

  • Record based files store blocks. (1 record = 1 block )
  • Use BlockDatabase for indexing blocks and transactions on the record based files.
  • Add test cases for record based files.

Analyze blockchain database of Bitcoin core

What

Analyze blockchain database of Bitcoin core. Analyze what Bitcoin core stores on leveldb.

Why

Need to analyze what kind of data we need to store.

How

Find out keys and values stored on leveldb.

Write API layer for MVP.

what

Write the API layer for the MVP, which has the minimal set of blockchain APIs needed to run a blockchain cloud service.

why

To design the Net layer and the Storage layer, we need to understand the requirements of the API layer, which implements REST API based RPC.

how

  1. Create dummy classes that do nothing.
  2. Create case classes with dummy values for each REST API, and return the dummy values.
  3. Write functions and classes without any implementation for the API, Net, Transaction, and Storage layers.
  4. Write unit tests.
  5. Implement features.

Investigate Ethereum 2.0 approach for the scalability

what

Investigate Ethereum 2.0 approach for the scalability

Ethereum 2.0… now processing 100,000 towers of ugly javascript callback code per second!

They have a plan to implement the idea.
https://www.reddit.com/r/ethereum/comments/40u54x/eip_105_serenity_binary_sharding_plus_contract/

why

Learn about the Ethereum developers' approach to resolving the scalability issue.

how

Read the doc :
https://docs.google.com/presentation/d/1CjD0W4l4-CwHKUvfF5Vlps76fKLEC6pIwu1a_kC_YRQ/edit#slide=id.gd284b9333_0_6

Unable to decode a specific block.

Problem

Unable to decode the following block after reading it from a record storage.

Block(header=BlockHeader(version=1, hashPrevBlock=BlockHash(bytes("00000000d1145790a8694403d4063f323d499e655c83426834d4ce2f8dd4a2ee")), hashMerkleRoot=MerkleRootHash(bytes("d5f2d21453a6f0e67b5c42959c9700853e4c4d46fa7519d1cc58e77369c893f2")), timestamp=1231731401L, target=486604799L, nonce=653436935L), transactions=List(Transaction(version=1, inputs=List(GenerationTransactionInput(transactionHash=TransactionHash(bytes("0000000000000000000000000000000000000000000000000000000000000000")), outputIndex=4294967295L, coinbaseData=CoinbaseData(bytes("04ffff001d010e")), sequenceNumber= 4294967295L)), outputs=List(TransactionOutput(value=5000000000L, lockingScript=LockingScript(bytes("4104566824c312073315df60e5aa6490b6cdd80cd90f6a8f02e022ca3c2d52968c253006c9c602e03aed7be52d6ac55f5b557c72529bcc3899ace7eb4227153eb44bac")) /* ops:ScriptOpList(operations=Array(OpPush(65,ScriptBytes(bytes("04566824c312073315df60e5aa6490b6cdd80cd90f6a8f02e022ca3c2d52968c253006c9c602e03aed7be52d6ac55f5b557c72529bcc3899ace7eb4227153eb44b"))),OpCheckSig(Script(bytes("4104566824c312073315df60e5aa6490b6cdd80cd90f6a8f02e022ca3c2d52968c253006c9c602e03aed7be52d6ac55f5b557c72529bcc3899ace7eb4227153eb44bac"))))) */ )), lockTime=0L /* hash:bytes("d5f2d21453a6f0e67b5c42959c9700853e4c4d46fa7519d1cc58e77369c893f2") */)))

Root Cause

RecordFile.appendRecord was overwriting data if readRecord had been called beforehand.

Solution

Check whether we are at the end of the file when appendRecord is called.
If not, move to the end of the file before appending.

Add Akka Streams Source which materializes a concurrent queue.

what

Add Akka Streams Source which materializes a concurrent queue.

why

Need to send messages from multiple running TCP streams to another running TCP stream.
We will merge the incoming messages of a TCP stream with the source that materializes the concurrent queue. Other (running) TCP connection streams will send messages via the concurrent queue.

how

  1. Create a GraphStage which materializes the concurrent queue.
  2. Create a Source from the graph stage.
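
Note that newer Akka Streams releases ship a built-in Source.queue that materializes such a queue; a hedged sketch of using it (names and sizes are illustrative):

import akka.actor.ActorSystem
import akka.stream.{ActorMaterializer, OverflowStrategy}
import akka.stream.scaladsl.{Sink, Source}

object QueueSourceSketch extends App {
  implicit val system = ActorSystem("queue-sketch")
  implicit val materializer = ActorMaterializer()

  // The running stream materializes a queue that other streams/threads can offer messages to.
  val queue = Source
    .queue[String](bufferSize = 128, OverflowStrategy.backpressure)
    .to(Sink.foreach(message => println(s"sending to peer: $message")))
    .run()

  // Another TCP connection stream would call this to route a message to this peer.
  queue.offer("inv")
}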

GetAccountAddress RPC

What

Implement getaccountaddress RPC (developer-reference)

Feature : returns the current Bitcoin address for receiving payments to the given account

Why

To manage Wallet

How

  1. Add test cases
  2. Implement getaccountaddress RPC

Add test case for the RPCs we will release today.

What

Add test cases for the RPCs that will be released today.

Why

Need to make sure the RPCs are working well.

How

Write test cases for the following RPCs.

  1. getbestblockhash
  2. getblock
  3. getblockhash
  4. help
  5. submitblock
  6. getpeerinfo
  7. decoderawtransaction
  8. getrawtransaction
  9. sendrawtransaction

Redesign CoinAddress isvalid module

What

  • Redesign CoinAddress isvalid module

Why

  • The 'isvalid' module needs Base58 and SHA-256
  • Currently, it uses Base58Util and Sha256Util in the util layer
  • However, HashFunction already exists in the crypto layer

How

  • Modify the wallet layer dependency (add the crypto layer) so that Sha256Util can be removed from the util layer
  • Or, keep the current version
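
For context, an illustrative Base58Check validation follows; it is a sketch of what 'isvalid' needs from Base58 and SHA-256, not the ScaleChain implementation.

import java.security.MessageDigest

object AddressValidationSketch {
  private val Alphabet = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

  private def sha256(bytes: Array[Byte]): Array[Byte] =
    MessageDigest.getInstance("SHA-256").digest(bytes)

  // Decode a Base58 string into bytes, preserving leading zero bytes encoded as '1'.
  private def base58Decode(input: String): Option[Array[Byte]] = {
    if (input.isEmpty || input.exists(c => Alphabet.indexOf(c) < 0)) return None
    val decoded = input.foldLeft(BigInt(0))((acc, c) => acc * 58 + Alphabet.indexOf(c))
    val leadingZeros = input.takeWhile(_ == '1').length
    val body = decoded.toByteArray.dropWhile(_ == 0) // drop BigInt's sign byte
    Some(Array.fill[Byte](leadingZeros)(0.toByte) ++ body)
  }

  // An address is valid if its last 4 bytes equal the double-SHA256 checksum of the payload.
  def isValid(address: String): Boolean =
    base58Decode(address) match {
      case Some(bytes) if bytes.length >= 5 =>
        val (payload, checksum) = bytes.splitAt(bytes.length - 4)
        sha256(sha256(payload)).take(4).sameElements(checksum)
      case _ => false
    }
}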

Transaction validation fails if we have more than one transaction input

What

Block 707, which we got from Bitcoin core by connecting to it, does not have an unlocking script on one of its transaction inputs. (Attached: badblock707.txt)

The correct block data is attached as block707.txt; it was dumped by the DumpChain utility, which read a block file written by the same Bitcoin core instance running on my local machine.

Why

While calculating the hash for the signature validation of a specific transaction input, we change the unlocking script of all other inputs to empty. But we did not revert the unlocking scripts back to the originals after the signature validation.

How

  1. Do not use var for the unlockingScript; use val.
  2. Instead of shallow-copying a transaction, copy the transaction case class and create a new transaction by replacing the locking scripts with the ones for signature validation.
  3. Do not change the original transaction.
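
A hedged sketch of the copy-based approach, using hypothetical simplified case classes rather than the real Transaction model:

object SignatureHashHelper {
  case class TxInputSketch(outputTxHash: String, outputIndex: Int, unlockingScript: Seq[Byte])
  case class TxSketch(version: Int, inputs: List[TxInputSketch], lockTime: Long)

  // Build the transaction used for signature hashing with case-class copy,
  // so the original transaction is never mutated.
  def forSignatureHash(tx: TxSketch, signedInputIndex: Int, scriptCode: Seq[Byte]): TxSketch =
    tx.copy(inputs = tx.inputs.zipWithIndex.map {
      case (input, i) if i == signedInputIndex => input.copy(unlockingScript = scriptCode)
      case (input, _)                          => input.copy(unlockingScript = Seq.empty) // blank the other inputs
    })
}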

Analyze Bitcoin core v0.5.0 source code.

What

Fully analyze Bitcoin core v0.5.0 source code.

Why

Need to understand basic working mechanism of Bitcoin core.

How

  1. Create a branch from the Bitcoin core v0.5.0 tag.
  2. After analysis, write comments to the source code.
  3. For requirements ScaleChain has to implement, prefix a comment with "kangmo : req - ".

CheckSig : Signature format validation fails for some transactions.

problem

CheckSig fails to verify the signature format for a specific transaction.
[[ErrorCode(invalid_signature_format)]message=ScriptOp:CheckSig]

Transaction and locking/unlocking script :

        "[[ErrorCode(invalid_signature_format)]message=ScriptOp:CheckSig]",
        MergedScript(
          transaction=
            Transaction(
              version=1,
              inputs=
                List(
                  NormalTransactionInput(
                    outputTransactionHash=TransactionHash(bytes("406b2b06bcd34d3c8733e6b79f7a394c8a431fbf4ff5ac705c93f4076bb77602")),
                    outputIndex=0L,
                    unlockingScript=UnlockingScript(bytes("493046022100d23459d03ed7e9511a47d13292d3430a04627de6235b6e51a40f9cd386f2abe3022100e7d25b080f0bb8d8d5f878bba7d54ad2fda650ea8d158a33ee3cbd11768191fd004104b0e2c879e4daf7b9ab68350228c159766676a14f5815084ba166432aab46198d4cca98fa3e9981d0a90b2effc514b76279476550ba3663fdcaff94c38420e9d5")),
                    /* ops:ScriptOpList(operations=Array(
                        OpPush(73,ScriptBytes(bytes("3046022100d23459d03ed7e9511a47d13292d3430a04627de6235b6e51a40f9cd386f2abe3022100e7d25b080f0bb8d8d5f878bba7d54ad2fda650ea8d158a33ee3cbd11768191fd00"))),
                        OpPush(65,ScriptBytes(bytes("04b0e2c879e4daf7b9ab68350228c159766676a14f5815084ba166432aab46198d4cca98fa3e9981d0a90b2effc514b76279476550ba3663fdcaff94c38420e9d5"))))),
                        hashType:Some(0) */
                    sequenceNumber=0L)),
              outputs=
                List(
                  TransactionOutput(
                    value=4000000L,
                    lockingScript=LockingScript(bytes("76a9149a7b0f3b80c6baaeedce0a0842553800f832ba1f88ac"))
                    /* ops:ScriptOpList(operations=Array(
                        OpDup(),
                        OpHash160(),
                        OpPush(20,ScriptBytes(bytes("9a7b0f3b80c6baaeedce0a0842553800f832ba1f"))),
                        OpEqualVerify(),
                        OpCheckSig(Script(bytes("76a9149a7b0f3b80c6baaeedce0a0842553800f832ba1f88ac"))))) */ )),
                    lockTime=0L /* hash:bytes("c99c49da4c38af669dea436d3e73780dfdb6c1ecf9958baa52960e8baee30e73") */),
          inputIndex=0,
          unlockingScript=UnlockingScript(bytes("493046022100d23459d03ed7e9511a47d13292d3430a04627de6235b6e51a40f9cd386f2abe3022100e7d25b080f0bb8d8d5f878bba7d54ad2fda650ea8d158a33ee3cbd11768191fd004104b0e2c879e4daf7b9ab68350228c159766676a14f5815084ba166432aab46198d4cca98fa3e9981d0a90b2effc514b76279476550ba3663fdcaff94c38420e9d5"))
          /* ops:ScriptOpList(operations=Array(
            OpPush(73,ScriptBytes(bytes("3046022100d23459d03ed7e9511a47d13292d3430a04627de6235b6e51a40f9cd386f2abe3022100e7d25b080f0bb8d8d5f878bba7d54ad2fda650ea8d158a33ee3cbd11768191fd00"))),
            OpPush(65,ScriptBytes(bytes("04b0e2c879e4daf7b9ab68350228c159766676a14f5815084ba166432aab46198d4cca98fa3e9981d0a90b2effc514b76279476550ba3663fdcaff94c38420e9d5"))))),
            hashType:Some(0) */,
          lockingScript=LockingScript(bytes("76a914dc44b1164188067c3a32d4780f5996fa14a4f2d988ac"))
          /* ops:ScriptOpList(operations=Array(
            OpDup(),
            OpHash160(),
            OpPush(20,ScriptBytes(bytes("dc44b1164188067c3a32d4780f5996fa14a4f2d9"))),
            OpEqualVerify(),
            OpCheckSig(Script(bytes("76a914dc44b1164188067c3a32d4780f5996fa14a4f2d988ac"))))) */
        )
      ),

root cause

  1. The reference implementation does not check the hash type (the last byte of the signature), but we are checking it.
  2. The reference implementation uses SIGHASH_ALL if the hash type is neither SIGHASH_ALL nor SIGHASH_SINGLE, but we are using SIGHASH_ALL only if the hash type is 1.

solution

  1. Do not check the value of the hash type (the last byte of the signature).
  2. Do not check whether the value of the hash type is 1. Just use SIGHASH_ALL if neither SIGHASH_ALL nor SIGHASH_SINGLE is set for the hash type.

reference

See the following code in the reference implementation:
CTransactionSignatureSerializer.

/**
 * Wrapper that serializes like CTransaction, but with the modifications
 *  required for the signature hash done in-place
 */
class CTransactionSignatureSerializer {
private:
    const CTransaction &txTo;  //! reference to the spending transaction (the one being serialized)
    const CScript &scriptCode; //! output script being consumed
    const unsigned int nIn;    //! input index of txTo being signed
    const bool fAnyoneCanPay;  //! whether the hashtype has the SIGHASH_ANYONECANPAY flag set
    const bool fHashSingle;    //! whether the hashtype is SIGHASH_SINGLE
    const bool fHashNone;      //! whether the hashtype is SIGHASH_NONE

public:
    CTransactionSignatureSerializer(const CTransaction &txToIn, const CScript &scriptCodeIn, unsigned int nInIn, int nHashTypeIn) :
        txTo(txToIn), scriptCode(scriptCodeIn), nIn(nInIn),
        fAnyoneCanPay(!!(nHashTypeIn & SIGHASH_ANYONECANPAY)),
        fHashSingle((nHashTypeIn & 0x1f) == SIGHASH_SINGLE),
        fHashNone((nHashTypeIn & 0x1f) == SIGHASH_NONE) {}

    /** Serialize the passed scriptCode, skipping OP_CODESEPARATORs */
    template<typename S>
    void SerializeScriptCode(S &s, int nType, int nVersion) const {
        CScript::const_iterator it = scriptCode.begin();
        CScript::const_iterator itBegin = it;
        opcodetype opcode;
        unsigned int nCodeSeparators = 0;
        while (scriptCode.GetOp(it, opcode)) {
            if (opcode == OP_CODESEPARATOR)
                nCodeSeparators++;
        }
        ::WriteCompactSize(s, scriptCode.size() - nCodeSeparators);
        it = itBegin;
        while (scriptCode.GetOp(it, opcode)) {
            if (opcode == OP_CODESEPARATOR) {
                s.write((char*)&itBegin[0], it-itBegin-1);
                itBegin = it;
            }
        }
        if (itBegin != scriptCode.end())
            s.write((char*)&itBegin[0], it-itBegin);
    }

    /** Serialize an input of txTo */
    template<typename S>
    void SerializeInput(S &s, unsigned int nInput, int nType, int nVersion) const {
        // In case of SIGHASH_ANYONECANPAY, only the input being signed is serialized
        if (fAnyoneCanPay)
            nInput = nIn;
        // Serialize the prevout
        ::Serialize(s, txTo.vin[nInput].prevout, nType, nVersion);
        // Serialize the script
        if (nInput != nIn)
            // Blank out other inputs' signatures
            ::Serialize(s, CScriptBase(), nType, nVersion);
        else
            SerializeScriptCode(s, nType, nVersion);
        // Serialize the nSequence
        if (nInput != nIn && (fHashSingle || fHashNone))
            // let the others update at will
            ::Serialize(s, (int)0, nType, nVersion);
        else
            ::Serialize(s, txTo.vin[nInput].nSequence, nType, nVersion);
    }

    /** Serialize an output of txTo */
    template<typename S>
    void SerializeOutput(S &s, unsigned int nOutput, int nType, int nVersion) const {
        if (fHashSingle && nOutput != nIn)
            // Do not lock-in the txout payee at other indices as txin
            ::Serialize(s, CTxOut(), nType, nVersion);
        else
            ::Serialize(s, txTo.vout[nOutput], nType, nVersion);
    }

    /** Serialize txTo */
    template<typename S>
    void Serialize(S &s, int nType, int nVersion) const {
        // Serialize nVersion
        ::Serialize(s, txTo.nVersion, nType, nVersion);
        // Serialize vin
        unsigned int nInputs = fAnyoneCanPay ? 1 : txTo.vin.size();
        ::WriteCompactSize(s, nInputs);
        for (unsigned int nInput = 0; nInput < nInputs; nInput++)
             SerializeInput(s, nInput, nType, nVersion);
        // Serialize vout
        unsigned int nOutputs = fHashNone ? 0 : (fHashSingle ? nIn+1 : txTo.vout.size());
        ::WriteCompactSize(s, nOutputs);
        for (unsigned int nOutput = 0; nOutput < nOutputs; nOutput++)
             SerializeOutput(s, nOutput, nType, nVersion);
        // Serialize nLockTime
        ::Serialize(s, txTo.nLockTime, nType, nVersion);
    }
};
