Vector's Introduction

Vector

Vector is an ultra-simple, flexible state channel protocol and implementation.

At Connext, our goal is to build the cross-chain routing and micropayment layer of the decentralized web. Vector sits on top of Ethereum, EVM-compatible L2 blockchains, and other Turing-complete chains, and enables instant, near-free transfers that can be routed across chains and over liquidity in any asset.

Out of the box, it supports the following features:

  • 💸 Conditional transfers with arbitrary generality, routed over one (and eventually many) intermediary nodes.
  • 🔀 Instant cross-chain and cross-asset transfers/communication. Works with any EVM-compatible chain.
  • 🔌 Plug-in support for non-EVM Turing-complete chains.
  • 💳 Simplified deposits/withdrawals: just send funds directly to the channel address from anywhere and use your channel as a wallet!
  • ⛽ Native e2e gas abstraction for end-users.
  • 💤 Transfers to offline recipients.

This monorepo contains a number of packages hoisted using lerna. Documentation for each package can be found in its respective readme, with some helpful links in Architecture below.

Quick Start - Local Development

Prerequisites:

  • make: Probably already installed; otherwise install with brew install make, apt install make, or similar.
  • jq: Probably not installed yet; install with brew install jq, apt install jq, or similar.
  • docker: See the Docker website for installation instructions.

To start, clone & enter the Vector repo:

git clone https://github.com/connext/vector.git
cd vector

To build everything and deploy a Vector node in dev-mode, run the following:

make start-router

# view the node's logs
bash ops/logs.sh node

# view the router's logs
bash ops/logs.sh router

That's all! But beware: the first time make start-router is run, it will take a long time (maybe 10 minutes, depending on your internet speed). Have no fear: downloads are cached and most build steps won't ever need to be repeated, so subsequent make start runs will go much more quickly. Kick off the first make start now and browse the rest of the README while it runs.

By default, Vector will launch using two local chains (ganache, with chain ids 1337 and 1338), but you can also run a local Vector stack against a public chain (or multiple chains!) such as Rinkeby. To do so, edit the chainProviders and chainAddresses fields of config.json according to the chain you want to support.
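
For example, to run against Rinkeby you might end up with something along these lines in config.json (a rough sketch only: it assumes both fields are keyed by chain id, the provider URL is a placeholder, and the nested contract-address keys are illustrative, so check the deployed addresses for the chain you're targeting):

{
  "chainProviders": {
    "4": "https://rinkeby.infura.io/v3/<your-project-id>"
  },
  "chainAddresses": {
    "4": {
      "channelFactoryAddress": "0x<deployed-factory-address>",
      "transferRegistryAddress": "0x<deployed-registry-address>"
    }
  }
}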

Note: this will start a local Connext node pointed at a remote chain, so make sure the mnemonic used to start your node is funded in the appropriate native currencies and supported chain assets. By default, the node starts with the account:

mnemonic: "candy maple cake sugar pudding cream honey rich smooth crumble sweet treat";
privateKey: "0xc87509a1c067bbde78beb793e6fa76530b6382a4c0241e5e4a9ec0a0f44dc0d3";
address: "0x627306090abaB3A6e1400e9345bC60c78a8BEf57";

To apply updates to config.json, you'll need to restart your vector node with make restart-node.

(make start/make restart are aliases for make start-router/make restart-router)

The following Vector stacks are supported:

  • messaging: standalone messaging + auth service
  • chains: EVMs in dev-mode
  • node: vector node + database
  • router: vector node + router + database
  • duet: 2x node/db pairs, used to test one-on-one node interactions
  • trio: 2x node/db pairs + 1x node/router/db, used to test node interactions via a routing node.

For any of these stacks, you can manage them with:

  • make ${stack}, e.g. make duet, builds everything required by the given stack.
  • make start-${stack}, e.g. make start-router, starts up the given stack.
  • make stop-${stack} stops the stack.
  • make restart-${stack} stops the stack if it's running and starts it again.
  • make test-${stack} runs unit tests against the given stack. It will build and start the stack if that hasn't been done already.

You can find WIP documentation on integrating and using Vector here.

Architecture and Module Breakdown

Vector uses a layered approach to compartmentalize risk and delegate tasks throughout protocol usage. In general, lower layers are not context-aware of higher-level actions. Information flows downwards through call params and upwards through events. The only exception to this is services, which are set up at the services layer and passed down to the protocol directly.

[Architecture diagram]
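
As a rough illustration of this layering (the interfaces and class shapes below are hypothetical, not the actual package APIs): services are constructed at the top, injected downward, and updates surface back up as events.

// Hypothetical sketch of the layering described above; these interfaces are
// illustrative, not the actual package APIs.
interface MessagingService {
  send(to: string, msg: unknown): Promise<void>;
}
interface StoreService {
  saveChannelState(state: unknown): Promise<void>;
}

// The protocol is not context-aware: it receives information via call params
// and surfaces results upwards via events. Services are created at the
// services layer and passed straight down to it.
export class Protocol {
  private listeners: Array<(update: unknown) => void> = [];
  constructor(private messaging: MessagingService, private store: StoreService) {}

  onUpdate(cb: (update: unknown) => void): void {
    this.listeners.push(cb);
  }

  async setup(params: { counterpartyIdentifier: string; chainId: number }): Promise<void> {
    const commitment = { type: "setup", params }; // ...generate & sign the real commitment here...
    await this.messaging.send(params.counterpartyIdentifier, commitment);
    await this.store.saveChannelState(commitment);
    this.listeners.forEach((cb) => cb(commitment));
  }
}

// The engine wraps the protocol in an RPC-style surface: requests flow down
// into protocol calls, protocol events flow back up.
export class Engine {
  constructor(private protocol: Protocol) {
    this.protocol.onUpdate((update) => console.log("engine event:", update));
  }

  request(method: string, params: any): Promise<void> {
    if (method === "chan_setup") return this.protocol.setup(params);
    return Promise.reject(new Error(`Unknown method: ${method}`));
  }
}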

You can find documentation on each layer in its respective readme:

  • Contracts - holds user funds and disburses them during a dispute based on commitments provided by channel parties.
  • Protocol - creates channels, generates channel updates/commitments, validates them, and then synchronizes channel state with a peer.
  • Engine - implements default business logic for channel updates and wraps the protocol in a JSON RPC interface.
  • Server-Node - sets up services to be consumed by the engine, spins up the engine, and wraps everything in REST and gRPC interfaces.
  • Router - consumes the server-node interface to route transfers across multiple channels (including across chains/assets).

Note that the engine and protocol are isomorphic. Immediately after the core implementation is done, we plan to build a browser-node implementation which sets up services in a browser-compatible way and exposes a direct JS interface to be consumed by a dApp developer.

Development and Running Tests

You can build the whole stack by running make.

Running tests:

  • Unit tests are run using make test-{{$moduleName}}.
  • Two-party integration tests are run using make start-duet and then make test-duet.
  • Three-party (incl. routing node) integration tests are run using make start-trio and then make test-trio.

Vector's People

Contributors

alfredolopez80, arjunbhuptani, eavilesmejia, georgeroman, heikofisch, hkalodner, jakekidd, laynehaber, rhlsthrm, sanchaymittal, tomafrench, uadevops, ztcrypto

Vector's Issues

xud wishlist

  • prod deployment (ability to run node and router-node on separate machines)
  • ability to set the node mnemonic at runtime
  • hashlock-transfer timelock
  • hashlock-status endpoint
    • needs expired: true | false field
  • cancel transfer endpoint
  • onchain balances
  • expose ethprovider proxy
  • request collateral

Readme/description suggestions

Listing all the issues encountered after a general walk-through of the README:

  1. make start-trio gets stuck at

    Preparing to launch trio stack
    
    Preparing to launch trio stack
    global.auth configured to be exposed on *:5040
    WARNING: generating new nats jwt signing keys & saving them in .env
    

    This was fixed by manually populating the .env.

  2. Nomenclature and port numbers mismatch between the make start-trio command, the README, and the scripts inside /modules/server-node/examples

    • the fix is required in the scripts and the README to match the output from the make command
    •     trio.carol will be exposed on *:8005
          trio.dave will be exposed on *:8006
          trio.roger will be exposed on *:8007
          trio.router will be exposed on *:8009
      
  3. The 3-transfer.http script refers to the transfer API as /linked-transfer/; it should instead be /hashlock-transfer/

  4. For the README:

    • the purpose of linkedHash and routingId
    • mention of the withdraw API (POST {{bobUrl}}/withdraw) in the README
    • viewing logs (not sure if it should be mentioned in the README, but it would be helpful): bash ops/logs.sh carol
    • maybe a mention of what a hashlock transfer is, and why the recipient must resolve the transfer in order to claim it

[engine] Injected `create` validation

Expected Behavior

The engine should not allow unresolvable (or invalid) transfers to be created in its channel. There should be some injected validation to ensure this. Blocked by #25

Current Behavior

Always creates a transfer application (regardless of transfer state).
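
One possible shape for such a hook, sketched with illustrative names only (not the engine's actual types): the engine would pass a validator down, and the protocol would call it before signing a create update into the channel.

// Hypothetical validation hook; the names here are illustrative.
type ValidationError = string;

export interface CreateValidator {
  validateCreate(params: {
    transferDefinition: string; // onchain address of the transfer definition
    transferInitialState: any; // app-specific initial state
    balance: { amount: string[]; to: string[] };
  }): Promise<ValidationError | undefined>;
}

// Example: refuse to create a transfer whose expiry has already passed
// (assuming, as in the RFC below, that an expiry of 0 means "never expires").
export const expiryValidator: CreateValidator = {
  validateCreate: async ({ transferInitialState }) => {
    const expiry = Number(transferInitialState.expiry ?? 0);
    const now = Math.floor(Date.now() / 1000);
    if (expiry !== 0 && expiry <= now) {
      return "Cannot create a transfer that has already expired";
    }
    return undefined;
  },
};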

[ops] Fixes for XUD integration

Notes:

- Need to be able to run start-router and start-node on separate machines. Right now they each spin up their own global stack.
- For testing, we tried to run them on the same machine.
- Both start-node and start-router use the same mnemonic.
- We tried to change the mnemonic in the `start-node` script, but the node still used the old one. This is likely because the secrets and/or db volumes are not namespaced properly.

[engine] Allow transfers to go to recovery address

Transfers should be allowed to go outside the channel, represented via a different address in the balance.to entry. This address would receive funds in the event of a dispute.

Currently, the engine hardcodes the transfer recipient (for linked transfers) to be the channel counterparty instead of a recovery address. The protocol also assumes that any balance headed out of the channel within a transfer does not get reconciled back into the channel balance on resolution.

[protocol, contracts] Balance out of transfer state

Context:

While we're redesigning the transfers anyway, this might be a good time to actually take the initial balances out of the encoded transfer state, in order to avoid having that information stored redundantly in the core transfer state, which is a potential source of errors.
[...] What I think is not ideal with the current solution is that the initial balance is stored redundantly in the core transfer: in the initialBalance and (hashed) in the initialStateHash. The former is used in the adjudicator, the latter is used by the transfer definition -- and we hope we don't make a mistake that gets the two out of sync... Admittedly, that's not very likely, because it's the "initial" state and therefore not supposed to change. Still, having the supposedly same information doubly-stored in the transfer state is not really clean.
That's why I wanted to suggest changing the signatures of the create and resolve functions to take the initial balance as an (unencoded) first argument and take it completely out of the (encoded) app-specific state. For example, the LinkedTransfer would then only have the linkedHash as its app-specific state.
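
Sketched as TypeScript types purely for illustration (the real create and resolve functions live in the Solidity transfer definitions), the proposed shape might look like:

// Illustrative only: the proposed signatures, with the initial balance passed
// separately instead of being duplicated inside the encoded state.
export type Balance = { amount: [string, string]; to: [string, string] };

export interface ProposedTransferDefinition {
  create(initialBalance: Balance, encodedState: string): Promise<boolean>;
  resolve(initialBalance: Balance, encodedState: string, encodedResolver: string): Promise<Balance>;
}

// Under this proposal the LinkedTransfer's app-specific state would shrink to
// just the linkedHash.
export type LinkedTransferState = { linkedHash: string };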

[engine] Refactor conditional transfer params to accept arbitrary transferDefs

Ideally, implementers should be able to pass in their own custom conditional transfer definitions without needing to make changes to our types. This can be done (unsafely) by allowing users to pass in a transferDef instead of a transferType and restructuring the conditionalTransfer params to look more like the transfer's initial state.

To do this safely, we need to implement a registry pattern where we have a global onchain registry of "approved" transferDefs which users can call/reference. That way, routers can keep blindly forwarding transfers without needing to worry about whether the transfer definition might be malicious.

Recommend splitting the above into two separate PRs, with the first paragraph happening immediately.
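
A rough before/after of the params, using hypothetical type names rather than the current engine types:

// Today: the transfer type is an enum baked into our types.
export type ConditionalTransferParamsToday = {
  channelAddress: string;
  transferType: "LinkedTransfer" | "HashlockTransfer" | "SignedTransfer";
  // ...type-specific detail fields...
};

// Proposed: callers pass an arbitrary (ideally registry-approved) definition
// address plus an initial state shaped like the transfer itself.
export type ConditionalTransferParamsProposed = {
  channelAddress: string;
  transferDefinition: string; // onchain address of the transfer definition
  transferInitialState: any;
  // ...
};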

[protocol] Blocking `utils` unit tests

Expected Behavior

The following tests should pass:

describe("generateSignedChannelCommitment", () => {
    it("should not sign anything if there are two signatures", () => {});
    it("should work for participants[0] if there is not a counterparty signature included", () => {});
    it("should work for participants[1] if there is not a counterparty signature included", () => {});
    it("should work for participants[0] if there is a counterparty signature included", () => {});
    it("should work for participants[1] if there is a counterparty signature included", () => {});
});

describe("validateChannelUpdateSignatures", () => {
    it("should work for a valid single signed update", () => {});
    it("should work for a valid double signed update", () => {});
    it("should fail if there are not at the number of required sigs included", () => {});
    it("should fail if any of the signatures are invalid", () => {});
});

[ops] Chain snapshots

Problem

Can't run the happy-case duet or trio tests multiple times in a row (they will fail if a channel is already set up between the same participants)

Solution

Create snapshots of the chain to restore to in the integration test setups
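
A minimal sketch of one way to do this against the ganache dev chains, assuming an ethers JsonRpcProvider (the chain URL below is a placeholder):

import { JsonRpcProvider } from "@ethersproject/providers";

// Dev-chain URL is a placeholder.
const provider = new JsonRpcProvider("http://localhost:8545");

let snapshotId: string;

// Take a snapshot before the happy-case tests run...
export const takeSnapshot = async (): Promise<void> => {
  snapshotId = await provider.send("evm_snapshot", []);
};

// ...and revert to it afterwards, so a fresh channel can be set up between the
// same participants on the next run.
export const revertToSnapshot = async (): Promise<void> => {
  await provider.send("evm_revert", [snapshotId]);
};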

RFC re Transfers

Currently:

  • LinkedTransfer resolves based on a hash preImage & optionally accepts a recipient who's allowed to resolve
  • HashlockTransfer resolves based on a hash preImage & enforces a timeout
  • SignedTransfer resolves based on a signature w/out a timeout

I propose:

  • HashlockTransfer resolves based on a hash preImage
  • SignedTransfer resolves based on a signature
  • both accept an expiry after which the transfer gets reverted; 0 means it never expires

Core transfer params would be: (type: "SignedTransfer" | "HashlockTransfer", payload: NonceToSign | HashedPreImage, timeout: number)

SignedTransfer would become createTransfer("SignedTransfer", nonce, 0)
HashlockTransfer would become createTransfer("HashlockTransfer", hash, timeout)
LinkedTransfer w recipient specified would become createTransfer("SignedTransfer", nonce, 0)
LinkedTransfer w/out a recipient would become createTransfer("HashlockTransfer", hash, 0)

We could consider pulling the expiry enforcement logic into the adjudicator too to prevent it from being duplicated across every app definition.
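
As a TypeScript type, the proposed core params might look like the following (a sketch only; the field names are illustrative):

export type ProposedTransferParams = {
  type: "SignedTransfer" | "HashlockTransfer";
  payload: string; // NonceToSign for SignedTransfer, HashedPreImage for HashlockTransfer
  timeout: number; // 0 means the transfer never expires
};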

[protocol] Create integration tests

Should be passing:

  it.skip("should work for alice creating transfer to bob", async () => {});
  it.skip("should work for alice creating transfer out of channel", async () => {});
  it.skip("should work for bob creating transfer to alice", async () => {});
  it.skip("should work for bob creating transfer out of channel", async () => {});
  it.skip("should work if channel is out of sync", async () => {});

[protocol] Unable to validate some details

The Problem

At the protocol, there are some details you are unable to validate based on the information you have available, including:

  • validity of the networkContext in setup updates
  • validity of the transferEncodings[1] (resolver encoding) in create updates before signing a transfer into your channel

Potential Solution

Validate this via the validation service the protocol is instantiated with by the engine. It is a bit strange, since these are core to the protocol, but the engine will have to validate that the transfer encodings match the transfer definitions anyway, so I don't think it is too bad.

[protocol] Disputes

The protocol should be aware of disputes, to the point that it no longer continues updating the channel while it is in dispute (and there is the ability to manually resolve it).

[protocol] Blocking `update` unit tests

Expected Behavior

The following generateUpdate tests should be implemented:

it("should fail if it fails parameter validation", () => {});
it("should fail if it is unable to reconcile the deposit", () => {});
it("should fail if trying to resolve an inactive transfer", () => {});
it("should fail if fails to call resolve using chain service", () => {});
it("should work if creating a transfer to someone outside of channel", () => {});
it("should work if resolving a transfer to someone outside of channel", () => {});

Multiple setup calls give weird errors.

Describe the bug
Multiple setup calls throw a strange error.

To Reproduce

  1. Call the setup endpoint in the example .http file from start-duet mode.

Expected behavior
Should return a clearer error.

Alice side:

[1600845344685] INFO  (493 on 7656c9ff4880): Sending protocol message
    module: "VectorProtocol"
    method: "outbound"
    to: "indra5ArRsL26avPNyfvJd2qMAppsEVeJv11n31ex542T9gCd5B1cP3"
    type: "setup"
[1600845344703] INFO  (493 on 7656c9ff4880): Received protocol response
    module: "VectorProtocol"
    method: "outbound"
    to: "indra5ArRsL26avPNyfvJd2qMAppsEVeJv11n31ex542T9gCd5B1cP3"
    type: "setup"
[1600845344706] ERROR (493 on 7656c9ff4880):
    message: "Cannot read property 'signatures' of undefined"
    stack: "TypeError: Cannot read property 'signatures' of undefined\n    at Object.<anonymous> (/root/modules/protocol/src/sync.ts:142:42)\n    at Generator.next (<anonymous>)\n    at fulfilled (/root/modules/protocol/dist/sync.js:5:58)\n    at processTicksAndRejections (internal/process/task_queues.js:97:5)"

Bob side:

[1600845344690] INFO  (497 on bb62da2c5b35): Received message
    module: "VectorProtocol"
    method: "onReceiveProtocolMessage"
[1600845344699] ERROR (497 on bb62da2c5b35): Error validating incoming channel update
    module: "VectorProtocol"
    method: "inbound"
    channel: "0xc27e351f43a8cCad42B609343a0A43905336c97b"
    error: "Update does not progress channel nonce"
[1600845344700] ERROR (497 on bb62da2c5b35): Error validating update
    module: "VectorProtocol"
    method: "inbound"
    error: "Update does not progress channel nonce"

Server node:

HTTP/1.1 500 Internal Server Error
content-type: application/json; charset=utf-8
content-length: 60
Date: Wed, 23 Sep 2020 08:20:54 GMT
Connection: close

{
  "message": "Cannot read property 'signatures' of undefined"
}

[protocol] Utils return `Result` type

The following functions should return Result types in the utils file within the protocol module:

  1. validateChannelUpdateSignatures
  2. generateSignedChannelCommitment
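
A minimal sketch of such a Result wrapper, simplified for illustration rather than mirroring the project's actual implementation:

export class Result<T, E = Error> {
  private constructor(
    public readonly isError: boolean,
    private readonly value?: T,
    private readonly error?: E,
  ) {}

  static ok<T, E = Error>(value: T): Result<T, E> {
    return new Result<T, E>(false, value);
  }

  static fail<T, E = Error>(error: E): Result<T, E> {
    return new Result<T, E>(true, undefined, error);
  }

  getValue(): T {
    if (this.isError) throw new Error("Cannot getValue() on an error result");
    return this.value as T;
  }

  getError(): E | undefined {
    return this.error;
  }
}

// e.g. validateChannelUpdateSignatures would then return Promise<Result<void, Error>>,
// and generateSignedChannelCommitment a Result wrapping the signed commitment,
// instead of throwing.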

[engine] Injected `withdraw` validation #26

Expected Behavior

Should not sign withdrawal resolution updates if there is a fee and the transaction has not been submitted to chain (or the submitted transaction is invalid)

[engine] isAlive implemented

When a protocol user comes online, it should send an isAlive message to all channel counterparties so they know when to send queued updates.
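
A hypothetical sketch of what that could look like; the messaging interface and subject naming below are assumptions, not the actual messaging service API:

interface Messaging {
  publish(subject: string, data: unknown): Promise<void>;
}

// On startup, broadcast an isAlive message for every known channel so
// counterparties can flush their queued updates.
export async function announceIsAlive(
  messaging: Messaging,
  publicIdentifier: string,
  channels: { channelAddress: string; counterpartyIdentifier: string }[],
): Promise<void> {
  await Promise.all(
    channels.map((channel) =>
      messaging.publish(`${channel.counterpartyIdentifier}.isalive`, {
        from: publicIdentifier,
        channelAddress: channel.channelAddress,
      }),
    ),
  );
}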

[protocol] Deposit integration tests

These tests should pass:

  it.skip("should deposit tokens for alice", async () => {});
  it.skip("should deposit tokens for bob", async () => {});
  it.skip("should work after multiple deposits", async () => {});
  it.skip("should work if there have been no deposits onchain", async () => {});
  it.skip("should work if the channel is out of sync", async () => {});
  it.skip("should work if channel is out of sync", async () => {});

[protocol] Blocking `sync` unit tests

Expected Behavior

The following inbound tests should be implemented:

  it.skip("should fail if you are 3+ states behind the update", async () => {});
  it.skip("should fail if validating the update fails", async () => {});
  it.skip("should fail if saving the data fails", async () => {});
  it.skip("IFF update is invalid and channel is out of sync, should fail on retry, but sync properly", async () => {});
  describe.skip("should sync channel and retry update IFF state nonce is behind by 2 updates", async () => {
    describe.skip("initiator trying deposit", () => {
      it("missed setup, should work", async () => {});
      it("missed deposit, should work", async () => {});
      it("missed create, should work", async () => {});
      it("missed resolve, should work", async () => {});
    });

    describe.skip("initiator trying create", () => {
      it("missed deposit, should work", async () => {});
      it("missed create, should work", async () => {});
      it("missed resolve, should work", async () => {});
    });

    describe.skip("initiator trying resolve", () => {
      it("missed deposit, should work", async () => {});
      it("missed create, should work", async () => {});
      it("missed resolve, should work", async () => {});
    });
  });

The following outbound tests should be implemented:

  it.skip("should fail if update to sync is single signed", async () => {});
  it.skip("should fail if the channel is not saved to store", async () => {});
  it.skip("IFF update is invalid and channel is out of sync, should fail on retry, but sync properly", async () => {});
  describe("should sync channel and retry update IFF update nonce === state nonce", async () => {
    describe.skip("initiator trying setup", () => {
      it("missed setup, should sync without retrying", async () => {});
    });

    describe("initiator trying deposit", () => {
      it("missed deposit, should work", async () => {})
      it.skip("missed create, should work", async () => {});
      it.skip("missed resolve, should work", async () => {});
    });

    describe.skip("initiator trying create", () => {
      it("missed deposit, should work", async () => {});
      it("missed create, should work", async () => {});
      it("missed resolve, should work", async () => {});
    });

    describe.skip("initiator trying resolve", () => {
      it("missed deposit, should work", async () => {});
      it("missed create, should work", async () => {});
      it("missed resolve, should work", async () => {});
    });
  });

[protocol] Resolve integration tests

These tests should pass:

  it.skip("should work for withdraw", async () => {});
  it.skip("should work for alice resolving an eth transfer", async () => {});
  it.skip("should work for alice resolving an eth transfer out of channel", async () => {});
  it.skip("should work for alice resolving a token transfer", async () => {});
  it.skip("should work for alice resolving a token transfer out of channel", async () => {});
  it.skip("should work for bob resolving an eth transfer", async () => {});
  it.skip("should work for bob resolving an eth transfer out of channel", async () => {});
  it.skip("should work for bob resolving a token transfer", async () => {});
  it.skip("should work for bob resolving a token transfer out of channel", async () => {});
  it.skip("should work if channel is out of sync", async () => {});

[utils] Transaction storage + retrying

The onchain service does not retry transactions automatically, nor does it store sent transactions permanently. This means that in the event of a server crash, any pending transactions will be lost (though the commitments will be saved).
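
One possible approach, sketched with ethers; the store interface, startup hook, and gas-price bump factor are all assumptions rather than the current chain service:

import { BigNumber, Wallet } from "ethers";

// Hypothetical persistent store for sent transactions.
interface TxStore {
  getPendingTxs(): Promise<{ hash: string; to: string; data: string; value: string; nonce: number }[]>;
  saveTx(tx: { hash: string; to: string; data: string; value: string; nonce: number }): Promise<void>;
  markMined(hash: string): Promise<void>;
}

// On startup (e.g. after a crash), re-check every stored pending transaction
// and replace anything still unmined with a higher-gas-price resubmission.
export async function resubmitPending(wallet: Wallet, store: TxStore): Promise<void> {
  for (const tx of await store.getPendingTxs()) {
    const receipt = await wallet.provider!.getTransactionReceipt(tx.hash);
    if (receipt) {
      await store.markMined(tx.hash);
      continue;
    }
    // Same nonce + ~25% higher gas price so it replaces the stuck transaction.
    const gasPrice = (await wallet.provider!.getGasPrice()).mul(125).div(100);
    const replacement = await wallet.sendTransaction({
      to: tx.to,
      data: tx.data,
      value: BigNumber.from(tx.value),
      nonce: tx.nonce,
      gasPrice,
    });
    await store.saveTx({ ...tx, hash: replacement.hash });
  }
}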

[test-runner] Set up load tests

Bot-farm-like load tests that use the new server-node key management infra to create many transfers and stress-test the router.

[protocol] Support injected validation

Expected Behavior

Protocol does not have context into transfer-specific validation, but the engine should be able to inject some.

Current Behavior

Only runs the provided validation (i.e., without transfer context).
