
gravity-bridge's Introduction

Gravity Bridge

Gravity Bridge is a Cosmos <-> Ethereum bridge designed to run on the Cosmos Hub, focused on maximum design simplicity and efficiency.

Gravity can transfer ERC20 assets originating on Ethereum to a Cosmos based chain and back to Ethereum.

The ability to transfer assets originating on Cosmos to an ERC20 representation on Ethereum is coming within a few months.

Status

Gravity bridge is under development and will be undergoing audits soon. Instructions for deployment and use are provided in the hope that they will be useful.

It is your responsibility to understand the financial, legal, and other risks of using this software. There is no guarantee of functionality or safety. You use Gravity entirely at your own risk.

You can keep up with the latest development by watching our public standups; feel free to join and ask questions.

  • Solidity Contract
    • Multiple ERC20 support
    • Tested with 100+ validators
    • Unit tests for every throw condition
    • Audit for assets originating on Ethereum
    • Support for issuing Cosmos assets on Ethereum
  • Cosmos Module
    • Basic validator set syncing
    • Basic transaction batch generation
    • Ethereum -> Cosmos Token issuing
    • Cosmos -> Ethereum Token issuing
    • Bootstrapping
    • Genesis file save/load
    • Validator set syncing edge cases
    • Slashing
    • Relaying edge cases
    • Transaction batch edge cases
    • Support for issuing Cosmos assets on Ethereum
    • Audit
  • Orchestrator / Relayer
    • Validator set update relaying
    • Ethereum -> Cosmos Oracle
    • Transaction batch relaying
    • Tendermint KMS support
    • Audit

The design of Gravity Bridge

  • Trust in the integrity of the Gravity bridge is anchored on the Cosmos side. The signing of fraudulent validator set updates and transaction batches meant for the Ethereum contract is punished by slashing on the Cosmos chain. If you trust the Cosmos chain, you can trust the Gravity bridge operated by it, as long as it is operated within certain parameters.
  • It is mandatory for peg zone validators to maintain a trusted Ethereum node. This removes all trust and game theory implications that usually arise from independent relayers, once again dramatically simplifying the design.

Key design Components

  • A highly efficient way of mirroring Cosmos validator voting onto Ethereum. The Gravity solidity contract has validator set updates costing ~500,000 gas ($2 @ 20gwei), tested on a snapshot of the Cosmos Hub validator set with 125 validators. Verifying the votes of the validator set is the most expensive on chain operation Gravity has to perform. Our highly optimized Solidity code provides enormous cost savings. Existing bridges incur more than double the gas costs for signature sets as small as 8 signers.
  • Transactions from Cosmos to Ethereum are batched; batches have a base cost of ~500,000 gas ($2 @ 20gwei). Batches may contain an arbitrary number of transactions within the limits of ERC20 sends per block, allowing costs to be heavily amortized on high-volume bridges (a worked example follows this list).
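
A rough illustration of that amortization (a sketch only: the ~500,000 gas base cost and the $2 @ 20gwei figure come from the points above, the batch sizes are hypothetical, and per-transfer ERC20 gas is ignored):

package main

import "fmt"

func main() {
    const batchBaseGas = 500_000 // approximate base cost of a batch submission, from above
    const usdPerBatch = 2.0      // ~$2 at 20 gwei, from above

    // Hypothetical batch sizes showing how the fixed base cost spreads across transactions.
    for _, txCount := range []int{10, 100, 500} {
        fmt.Printf("batch of %3d txs: ~%.0f gas (~$%.3f) of base cost per tx\n",
            txCount, float64(batchBaseGas)/float64(txCount), usdPerBatch/float64(txCount))
    }
}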

Operational parameters ensuring security

  • There must be a validator set update made on the Ethereum contract by calling the updateValset method at least once every Cosmos unbonding period (usually 2 weeks). This is because if there has not been an update for longer than the unbonding period, the validator set stored by the Ethereum contract could contain validators who cannot be slashed for misbehavior. (A minimal monitoring sketch for this requirement follows this list.)
  • Cosmos full nodes do not verify events coming from Ethereum. These events are accepted into the Cosmos state based purely on the signatures of the current validator set. It is possible for the validators with >2/3 of the stake to put events into the Cosmos state which never happened on Ethereum. In this case observers of both chains will need to "raise the alarm". We have built this functionality into the relayer.
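
A minimal sketch of an external watchdog for the first point above; the thresholds and the way the last update is obtained are illustrative, not part of the actual tooling:

package main

import (
    "fmt"
    "time"
)

// lastValsetUpdate would in practice be read from the Gravity contract's logs
// (the most recent validator set update event); hard-coded here for illustration.
func lastValsetUpdate() time.Time {
    return time.Now().Add(-10 * 24 * time.Hour)
}

func main() {
    const unbondingPeriod = 14 * 24 * time.Hour // typical Cosmos unbonding period (~2 weeks)
    const alertMargin = 2 * 24 * time.Hour      // warn well before the deadline

    age := time.Since(lastValsetUpdate())
    switch {
    case age > unbondingPeriod:
        fmt.Println("CRITICAL: stored validator set is older than the unbonding period")
    case age > unbondingPeriod-alertMargin:
        fmt.Println("WARNING: a validator set update is due soon; call updateValset")
    default:
        fmt.Printf("OK: last validator set update was %v ago\n", age.Round(time.Hour))
    }
}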

Run Gravity bridge right now using docker

We provide a one-button integration test that deploys a full Cosmos chain with an arbitrary number of validators and a testnet Geth chain, for both development and validation. We believe having an in-depth test environment reflecting the full deployment and production-like use of the code is essential to productive development.

Currently on every commit we send hundreds of transactions, dozens of validator set updates, and several transaction batches in our test environment. This provides a high level of quality assurance for the Gravity bridge.

Because the tests build absolutely everything in this repository they do take a significant amount of time to run. You may wish to simply push to a branch and have Github CI take care of the actual running of the tests.

To run the tests, simply have Docker installed and run:

bash tests/all-up-test.sh

There are optional tests for specific features

Valset stress changes the validating power randomly 25 times, in an attempt to break validator set syncing

bash tests/all-up-test.sh VALSET_STRESS

Batch stress sends 300 transactions over the bridge and then 3 batches back to Ethereum. This code can do up to 10k transactions but Github Actions does not have the horsepower.

bash tests/all-up-test.sh BATCH_STRESS

Validator out tests a validator that is not running the mandatory Ethereum node. This validator will be slashed and the bridge will remain functioning.

bash tests/all-up-test.sh VALIDATOR_OUT

Developer guide

Solidity Contract

In the solidity folder:

Run HUSKY_SKIP_INSTALL=1 npm install, then npm run typechain.

Run npm run evm in a separate terminal and then

Run npm run test to run tests.

After modifying solidity files, run npm run typechain to recompile contract typedefs.

The Solidity contract is also covered in the Cosmos module tests, where it will be automatically deployed to the Geth test chain inside the development container for a micro testnet every integration test run.

Cosmos Module

We provide a standard container-based development environment that automatically bootstraps a Cosmos chain and an Ethereum chain for testing. We believe standardization of the development environment and ease of development are essential, so please file an issue if you run into problems with the development flow.

Go unit tests

These do not run the entire chain but instead test parts of the Go module code in isolation. To run them, go into /module and run make test

To hand test your changes quickly

This method is distinct from the all-up test described above. Although it runs the same components, it is much faster when editing individual components.

  1. run ./tests/build-container.sh
  2. run ./tests/start-chains.sh
  3. switch to a new terminal and run ./tests/run-tests.sh
  4. Or, docker exec -it gravity_test_instance /bin/bash should allow you to access a shell inside the test container

Change the code, and when you want to test it again, restart ./tests/start-chains.sh and run ./tests/run-tests.sh.

Explanation:

./tests/build-container.sh builds the base container and builds the Gravity test zone for the first time. This results in a Docker container which contains cached Go dependencies (the base container).

./tests/start-chains.sh starts a test container based on the base container and copies the current source code (including any changes you have made) into it. It then builds the Gravity test zone, benefiting from the cached Go dependencies. It then starts the Cosmos chain running on your new code. It also starts an Ethereum node. These nodes stay running in the terminal you started it in, and it can be useful to look at the logs. Be aware that this also mounts the Gravity repo folder into the container, meaning changes you make will be reflected there.

./tests/run-tests.sh connects to the running test container and runs the integration test found in ./tests/integration-tests.sh

Tips for IDEs:

  • Launch VS Code in /solidity with the solidity extension enabled to get inline typechecking of the solidity contract
  • Launch VS Code in /module/app with the go extension enabled to get inline typechecking of the dummy cosmos chain

Working inside the container

It can be useful to modify, recompile, and restart the testnet without restarting the container, for example if you are running a text editor in the container and would not like it to exit, or if you are editing dependencies stored in the container's /go/ folder.

In this workflow, you can use ./tests/reload-code.sh to recompile and restart the testnet without restarting the container.

For example, you can use VS Code's "Remote-Container" extension to attach to the running container started with ./tests/start-chains.sh, then edit the code inside the container, restart the testnet with ./tests/reload-code.sh, and run the tests with ./tests/integration-tests.sh.

Debugger

To use a stepping debugger in VS Code, follow the "Working inside the container" instructions above, but set up a one node testnet using ./tests/reload-code.sh 1. Now kill the node with pkill gravityd. Start the debugger from within VS Code, and you will have a 1 node debuggable testnet.


gravity-bridge's Issues

Refactor javascript truffle exec scripts to golang contract bindings

Currently the testnet-contracts/scripts directory contains individual truffle exec scripts which enable smart contract interaction. Some limitations of using truffle exec include limited command line argument parsing, limited error messages, and a long execution time. Instead of being dependent on truffle-specific JavaScript scripts for contract interaction, we should build out a set of Golang scripts which use generated contract bindings to facilitate contract interaction. The Golang scripts will be accessible via either the existing ebcli or ebrelayer. This will concentrate all the smart contract interaction and allow us to easily expose the scripts through the planned JavaScript wrapper.

Update to use new module and SDK conventions

The project was started before various new SDK APIs and conventions emerged. The next phase should update the code to use these new APIs and conventions. Various small things can be changed to fit newer cosmos module conventions:

  • the way keepers are stored in the ethereumBridgeApp struct
  • use the new supply module
  • Use the new module generalization, which allows you to register the modules easily, e.g. as in the SDK simapp/app.go
  • remove need for Genesis.go
  • use param functionality for the consensusNeeded field
  • can use new validator.IsBonded() in keeper.go
  • further suggestions from https://github.com/cosmos/cosmos-sdk/blob/master/docs/modules/SPEC.md
  • put querier into Keeper module
  • return Events from handler appropriately based on new Events paradigm rather than Tags
  • switch to go.mod
  • use new account receiver in txs/relay.go
  • update to latest version of sdk and remove dependency on https://github.com/swishlabsco/crypto
  • change staking.Keeper to an interface that exposes only the methods used by this module (see the sketch after this list)
  • can use new sdk.NewCoins() in parser.go
  • add checks from #51 (comment)
  • use module level cdc structure, ie: #51 (comment)
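
For the staking.Keeper item, a minimal sketch of the "expected keeper" interface pattern; the method names here are illustrative placeholders, and the real interface should copy the exact signatures of the staking methods this module actually calls:

package keeper

// StakingKeeper is the narrow interface this module expects from x/staking.
// Depending on an interface instead of the concrete staking.Keeper makes the
// dependency explicit and the module easy to test with a mock.
// NOTE: the methods below are illustrative placeholders only.
type StakingKeeper interface {
    GetBondedValidators() []Validator // e.g. look up the currently bonded validator set
    TotalBondedPower() int64          // e.g. total power, used for >2/3 threshold checks
}

// Validator is a stand-in for the staking validator type used above.
type Validator struct {
    Operator string
    Power    int64
}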

Automatic claim processing when a lock event is detected

With automatic claim processing, the relayer is not able to detect the events and perform a successful transaction.
I tried transferring the tokens from Remix: I did not receive any output from the relayer, and although the transaction was successful, when I checked the receiver account there was no update.

Event logs should contain only static types

When dynamic types like bytes are included in a log, it makes the length of the log data not a multiple of 32. For example, this is the log data for Lock(bytes to, uint64 amount, address token) (with a line break every 32 bytes == 1 word).

0000000000000000000000000000000000000000000000000000000000000060
0000000000000000000000000000000000000000000000000000000000002710
0000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000003
686868

The first word contains the relative position of the first element (hex 60 == dec 96). The word in that position contains the length of the bytes. The 3-byte data 686868 follows it.

This makes parse_log in the ethabi package fail (ethabi/src/util.rs, lines 7-9):

if data.len() % 32 != 0 {
    return Err(ErrorKind::InvalidData.into());
}

To solve this problem, we have to change every bytes in the Solidity files into bytes32. On the frontend side, we can either 1. hash the input data to make it bytes32, or 2. accept only bytes whose length is less than 32. A sketch of option 1 follows.
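
A sketch of option 1 on the Go side, hashing an arbitrary-length recipient down to 32 bytes with the Keccak-256 variant that Solidity's keccak256 uses (assumes golang.org/x/crypto/sha3 is available; the address value is illustrative):

package main

import (
    "fmt"

    "golang.org/x/crypto/sha3"
)

// toBytes32 hashes arbitrary-length input down to a fixed 32-byte value so it
// can be passed and logged as the static type bytes32.
func toBytes32(input []byte) [32]byte {
    h := sha3.NewLegacyKeccak256() // same hash as Solidity's keccak256
    h.Write(input)
    var out [32]byte
    copy(out[:], h.Sum(nil))
    return out
}

func main() {
    recipient := []byte("cosmos1exampleaddress") // illustrative only
    fmt.Printf("bytes32 recipient: %x\n", toBytes32(recipient))
}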

So the api should be modified as:

lock(bytes32 to, uint64 value, address token) external payable returns (bool);
event Lock(bytes32 to, uint64 value, address token);

unlock(address[2] addressArg, uint64 value, bytes32 chain, uint16[] idxs, uint8[] v, bytes32[] r, bytes32[] s) external returns (bool)

unlock also has to take bytes32 in favor of #32

Implement signer component

The signer component holds a secp256k1 key that is registered with the pegzone contract on Ethereum.
It listens for RedeemTx events from the pegzone, signs them, and posts a SignTx back to the pegzone.
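
A minimal sketch of that loop; RedeemTx, SignTx and the surrounding interfaces are placeholders for whatever the pegzone actually exposes, not existing APIs:

package main

import (
    "fmt"
    "log"
)

// RedeemTx and SignTx are illustrative placeholders for the pegzone types.
type RedeemTx struct {
    ID     uint64
    Amount int64
    Denom  string
}

type SignTx struct {
    RedeemID  uint64
    Signature []byte
}

// signer abstracts the secp256k1 key registered with the pegzone contract.
type signer interface {
    Sign(msg []byte) ([]byte, error)
}

// runSigner listens for RedeemTx events and posts a SignTx back to the pegzone.
func runSigner(events <-chan RedeemTx, s signer, post func(SignTx) error) {
    for ev := range events {
        msg := []byte(fmt.Sprintf("%d|%d|%s", ev.ID, ev.Amount, ev.Denom)) // illustrative encoding
        sig, err := s.Sign(msg)
        if err != nil {
            log.Printf("failed to sign redeem %d: %v", ev.ID, err)
            continue
        }
        if err := post(SignTx{RedeemID: ev.ID, Signature: sig}); err != nil {
            log.Printf("failed to post SignTx for redeem %d: %v", ev.ID, err)
        }
    }
}

func main() {} // wiring of the event source, key and pegzone client is omitted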

@ethanfrey @mossid @spacejam

create validator ebcli command not working

I have a test user that I want to make a validator.
I used the command shown in Image-1. The command reports success, but when I check the staking validators it only shows me the default one provided at the time the genesis was created.
Image-2
Image-1

Relayer finality threshold of n blocks

The Relayer should enforce a finality threshold of n blocks so that witnessed events are probabilistically deterministic. The proposed implementation is a finality queue which enforces a 6 block finality threshold on witnessed events before allowing validators to sign them. This event persister can be expanded to a Relayer service which scans the blockchain for missed transactions upon initialization.

Thanks for creating this! I like the design you guys chose. Is there a finality check in the Relayer code? The Relayer should have some finalityBound parameter so that it waits finalityBound number of blocks after seeing an event before creating an EthBridgeClaim. This mitigates the risk of Ethereum re-orgs causing peggy state to become invalid.

Originally posted by @AdityaSripal in #51 (comment)
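
A minimal sketch of the proposed finality queue; the Event type is illustrative and the 6-block threshold follows the description above:

package main

import "fmt"

const finalityBound = 6 // blocks to wait before an event may be signed

// Event is a placeholder for a witnessed Ethereum event.
type Event struct {
    TxHash      string
    BlockNumber uint64
}

// FinalityQueue holds witnessed events until they are finalityBound blocks deep.
type FinalityQueue struct {
    pending []Event
}

func (q *FinalityQueue) Push(e Event) { q.pending = append(q.pending, e) }

// Pop returns the events that have reached the finality threshold at the given
// chain head and keeps the rest queued.
func (q *FinalityQueue) Pop(latestBlock uint64) []Event {
    var ready, waiting []Event
    for _, e := range q.pending {
        if latestBlock >= e.BlockNumber+finalityBound {
            ready = append(ready, e)
        } else {
            waiting = append(waiting, e)
        }
    }
    q.pending = waiting
    return ready
}

func main() {
    q := &FinalityQueue{}
    q.Push(Event{TxHash: "0xabc", BlockNumber: 100})
    fmt.Println(len(q.Pop(103))) // 0: only 3 confirmations so far
    fmt.Println(len(q.Pop(106))) // 1: 6 confirmations reached
}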

Why was the validator definition removed in the new commits? It was present in earlier versions of peggy

Earlier we had to define the validator separately with the commission rates and all other characteristics. In the present version there is no mention of validator credentials.
I think this might be the reason there is an issue with the relayer and smart contract connection.
Can anybody please shed some light on this?

Earlier peggy

ebcli tx staking create-validator \
--amount=100000000stake \
--pubkey=$(ebd tendermint show-validator) \
--moniker="test_moniker" \
--chain-id=testing \
--commission-rate="0.10" \
--commission-max-rate="0.20" \
--commission-max-change-rate="0.01" \
--min-self-delegation="1" \
--gas=200000 \
--gas-prices="0.001stake" \
--from=validator

Then wait 10 seconds, then confirm your validator was created correctly and has reached Bonded status:

ebcli query staking validators --trust-node

In the present peggy this has been removed. Why?

Replace Item struct's unique item id hash directly with nonce

On contract, each Item's id is currently a hash containing: sender's contract address on the Ethereum network, specified recipient address on the Bridge network, the token type's deployed address on the Ethereum network, the amount, and the current nonce. Changing Item id to just the nonce maintains security and simplifies the smart contracts. The Relayer's LockEvent struct must also be updated in order to integrate these changes.

Proposed change from:

If the nonce is unique, why can't this mapping just use the nonce as the key?

Originally posted by @AdityaSripal in https://github.com/_render_node/MDExOlB1bGxSZXF1ZXN0Mjg1NTA0Nzkz/timeline/more_items
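
A small sketch of what the Relayer-side change might look like; the LockEvent fields are illustrative, not the actual struct:

package main

import "fmt"

// LockEvent is an illustrative version of the Relayer's lock event record,
// keyed by nonce alone instead of a hash of (sender, recipient, token, amount, nonce).
type LockEvent struct {
    Nonce     uint64
    Sender    string
    Recipient string
    Token     string
    Amount    int64
}

func main() {
    // The nonce is unique per lock, so it can serve directly as the item id / map key.
    items := map[uint64]LockEvent{}
    ev := LockEvent{Nonce: 42, Sender: "0xSender", Recipient: "cosmos1recipient", Token: "0xToken", Amount: 1000}
    items[ev.Nonce] = ev
    fmt.Println(items[42].Amount)
}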

Upgradeable contracts

We need to natively build in a mechanism to upgrade the peggy contracts on Ethereum in order to add features in the future (and potentially, though hopefully not, to fix bugs). Normally building upgradeable Ethereum contracts is a weird concept because it's often unclear who should have the right to upgrade a Dapp, but in the case of Peggy, it seems that the current validator set should be able to upgrade it.

Some resources on upgradable contracts: https://sunnya97.gitbooks.io/a-beginner-s-guide-to-ethereum-and-dapp-developme/content/upgradable-contracts.html

Building upgradeable contracts is not easy and I don't think there are that many Solidity devs with experience building them. You'd have to do complicated delegatecalls and things like that, and we'd like to avoid a Parity-bug-like situation.

Discussion - Add id to prophecies?

The REST client interface requires a lot of parameters to query a prophecy, which makes constructing the request annoying:

	r.HandleFunc(fmt.Sprintf("/%s/prophecies/{%s}/{%s}/{%s}/{%s}/{%s}/{%s}", storeName, restEthereumChainID, restBridgeContract, restNonce, restSymbol, restTokenContract, restEthereumSender), getProphecyHandler(cliCtx, storeName)).Methods("GET")

This could be improved by adding an id to the prophecy object and allow for querying just by id. This probably makes sense, though should be thought through more - there are a bunch of implications of adding ids, like:

  • more space used
  • doesn't really improve UX for end users at all, only devEX for developers building on top of peggy, which is a minor efficiency difference anyway. End users will likely eventually interact with these systems through some frontend/UI/dapp, where the chainid/contract/nonce/etc will likely be either hardcoded, selectable in a dropdown, or remembered from previous usage.
  • having an id and an endpoint based on id means that prophecies can no longer be retrieved easily by deterministically working out the URL from the original request that triggered them, and there are likely some consumers that need this characteristic. We can of course still leave this compound-key endpoint in, in addition to adding a single-key id endpoint, so that those needs are still met.
  • clients/fat web apps built in the future that consume this may also need to keep track of ids, and the id will be generated server (blockchain) side, so they need to get it from the server.
  • most transactions/requests will likely come in asynchronously, where it is not possible to return the id in that request, so consumers will need to do a second query later on, or wait for transaction confirmation until they get the id back and know what it is.

Anyway, as mentioned, we could also keep both query endpoints, so it may make sense to go ahead with adding ids. A sketch of the two endpoints side by side follows.
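
A sketch of how the single-key endpoint could sit next to the existing compound-key endpoint, in the gorilla/mux style of the route above (handler names and the store name are placeholders):

package main

import (
    "fmt"
    "log"
    "net/http"

    "github.com/gorilla/mux"
)

// Placeholder handlers; the real ones would query the keeper through the CLI context.
func getProphecyByParamsHandler(w http.ResponseWriter, r *http.Request) { w.WriteHeader(http.StatusNotImplemented) }
func getProphecyByIDHandler(w http.ResponseWriter, r *http.Request)     { w.WriteHeader(http.StatusNotImplemented) }

func main() {
    r := mux.NewRouter()
    storeName := "ethbridge" // illustrative store name

    // Keep the compound-key endpoint for consumers that derive the URL
    // deterministically from the parameters of the original request.
    r.HandleFunc(fmt.Sprintf("/%s/prophecies/{chainID}/{bridgeContract}/{nonce}/{symbol}/{tokenContract}/{ethereumSender}", storeName),
        getProphecyByParamsHandler).Methods("GET")

    // New single-key endpoint: query a prophecy directly by its id.
    r.HandleFunc(fmt.Sprintf("/%s/prophecies/{id}", storeName), getProphecyByIDHandler).Methods("GET")

    log.Fatal(http.ListenAndServe(":8080", r))
}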

Implement Ethereum pegzone contract

The Ethereum pegzone contract has a function that takes amount int64, denom string, signatures []signatures. Signatures is a solidity struct that encapsulates the components of a signature. It is used by the smart contract to authenticate the rest of the data.

The smart contract hashes amount and denom and checks whether all signatures come from a validator and whether the sum of signatures accounts for >2/3 of the voting power.

type Signatures struct {
    ...
}

@mossid @fedekunze @sunnya97

Proposal for architecture simplification: All validators are relayers

Make the Relayer a validator component rather than a trustless, separate component. Advantages:

  • No need for relayer signing or trust infrastructure in EthBridge.
  • No need for a relayer cryptoincentive model.
  • No need to consider relayer/validator trust in any way. Only Eth finality must be considered.

Disadvantages of all validators also being relayers

  • validators must now have and configure a trusted Eth full node
  • validators must now maintain the uptime of that full node in addition to the validator. Although it won't really matter that much if the full node goes down for short periods.
  • If less than 66% of the validators have a working Eth full node configured the bridge will cease to function.

I'm curious for thoughts/feedback on this. It seems to me that you could realize a fully working system with a much more anaemic relayer and then separate it out later. Rather than tackling the hard work up front. Sure the validator operators have to do more work, but since most peg zones have the bridge as a very core component I don't think that's a huge problem.

Implement relayer component

The relayer component listens to events from the pegzone and forwards data from the pegzone to Ethereum via contract calls.

The relayer can be run by anyone and is trustless.

@spacejam @mossid

mint in CosmosERC20.sol doesn't work?

The mint function in CosmosERC20.sol doesn't work?

function mint(address to, uint amount) public onlyByController() returns (bool success) {
    balances[to] = balances[to].add(amount);
    _totalSupply = _totalSupply.add(amount);
    Mint(to, amount);
    return true;
}

Relayer test suite

Description: The Relayer test suite should be more robust and expansive. Any broken/pending code related to the Relayer test suite has been removed and will be reintegrated in another PR once complete.

Originally posted by @musnit in https://github.com/_render_node/MDE3OlB1bGxSZXF1ZXN0UmV2aWV3MjU0MjU1ODA1/pull_request_reviews/more_threads

"I think it's fine if we create a github issue for doing a proper relayer test suite, and then just clean this up, remove broken/pending code, and put a reference to that Github issue."

Add support for ERC-20 and for tagging the source contract of the bridge

The current module only mints ETH representative tokens on the cosmos chain. It should be enhanced to consider the ERC-20 token sent and possibly mint tokens on the cosmos chain to represent arbitrary ERC-20 tokens.

The module also does not track the source chain and contract the message came from; tracking this may be useful for ensuring non-fungibility of tokens minted from different source bridge contracts, for security reasons.
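
A minimal sketch of one way to keep vouchers from different source contracts non-fungible by baking the source into the Cosmos denom; the naming scheme is purely illustrative:

package main

import (
    "fmt"
    "strings"
)

// voucherDenom derives a Cosmos denom from the source chain, the bridge contract
// and the ERC-20 contract the tokens came from. Tokens locked in a different
// bridge contract or on a different chain get a different denom and therefore
// never become fungible with each other.
func voucherDenom(chainID, bridgeContract, erc20Contract string) string {
    return strings.ToLower(fmt.Sprintf("peggy/%s/%s/%s", chainID, bridgeContract, erc20Contract))
}

func main() {
    fmt.Println(voucherDenom("1", "0xBridgeContract", "0xDaiContract"))
    // -> peggy/1/0xbridgecontract/0xdaicontract
}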

Simulation support

Steps:

  1. Extend module.go to implement the AppModuleSimulation interface and create the simulation/ folder with the decoder.go (+ _test.go), genesis.go, operations.go (see x/staking/simulation for reference)

  2. Create the app simulation manager (you can use the app/app.go on Gaia for reference).

  3. Add test-sim-... cmds to the Makefile (use SDK as reference)

  4. Create sim_test.go(simulation tests for the app). I'd recommend you to use this Gaia PR as a reference since it simplifies the creation process of those tests.

Implement witness component

The witness component is responsible for following the Ethereum consensus. It witnesses events that happen on the Ethereum blockchain after a certain number of blocks; witnessing an event makes it final. The witness component submits a WitnessTx to the pegzone with the event data.
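
A rough sketch of such a witness loop using go-ethereum's ethclient; the RPC endpoint, contract address and confirmation depth are placeholders, and a real component would persist its progress and submit WitnessTxs instead of printing:

package main

import (
    "context"
    "log"
    "math/big"

    ethereum "github.com/ethereum/go-ethereum"
    "github.com/ethereum/go-ethereum/common"
    "github.com/ethereum/go-ethereum/ethclient"
)

func main() {
    const confirmations = 6 // only treat events this many blocks deep as final

    client, err := ethclient.Dial("ws://localhost:8545") // placeholder RPC endpoint
    if err != nil {
        log.Fatal(err)
    }
    contract := common.HexToAddress("0x0000000000000000000000000000000000000000") // placeholder contract

    ctx := context.Background()
    head, err := client.BlockNumber(ctx)
    if err != nil {
        log.Fatal(err)
    }
    if head < confirmations {
        return // chain too short to have any final events yet
    }

    // Fetch logs emitted by the pegzone contract up to the finalized height only.
    logs, err := client.FilterLogs(ctx, ethereum.FilterQuery{
        FromBlock: big.NewInt(0),
        ToBlock:   new(big.Int).SetUint64(head - confirmations),
        Addresses: []common.Address{contract},
    })
    if err != nil {
        log.Fatal(err)
    }
    for _, l := range logs {
        // A real witness would decode the event and submit a WitnessTx here.
        log.Printf("final event in block %d, tx %s", l.BlockNumber, l.TxHash.Hex())
    }
}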

@spacejam @mossid

Create iterator for store prophecies

Consumers may want to iterate over the prophecies in the database. Currently, only APIs for querying a single prophecy exist. It could be useful to add an iterator and API to get multiple/all.
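
A minimal sketch of such an iterator in the keeper, assuming prophecies are stored under a common key prefix; the prefix, the trimmed-down Keeper and the exact SDK types vary by SDK version, so treat this as a shape only:

package keeper

import (
    sdk "github.com/cosmos/cosmos-sdk/types"
)

// prophecyPrefix is an illustrative key prefix under which prophecies are stored.
var prophecyPrefix = []byte{0x01}

// Keeper is trimmed down for illustration; the real keeper already holds a store key and codec.
type Keeper struct {
    storeKey sdk.StoreKey
}

// IterateProphecies walks every stored prophecy and calls cb with its raw key and value.
// The callback returns true to stop early.
func (k Keeper) IterateProphecies(ctx sdk.Context, cb func(key, value []byte) (stop bool)) {
    store := ctx.KVStore(k.storeKey)
    it := sdk.KVStorePrefixIterator(store, prophecyPrefix)
    defer it.Close()
    for ; it.Valid(); it.Next() {
        if cb(it.Key(), it.Value()) {
            break
        }
    }
}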

Method to return locked ERC20 tokens

The proper method of sending ERC20 tokens to a contract is to use the approve/transferFrom paradigm because, due to the design of the current ERC20 standard, if an external account transfers to a contract directly, the contract does not get notified that it was sent tokens and thus has no way of knowing which address sent it tokens. (Note: ERC223 is a proposed solution to this problem that also allows transfers to trigger a function on the receiving contract as well as include some data; however, 223 is not officially approved nor widely adopted, and I also have some issues with its design.)

Inevitably, people are going to accidentally send ERC20 tokens to our peg zone contract (which will be silly because there's no way of attaching destination data to this, but still, it's going to happen). I've made the Cosmos ERC20 contracts revert if someone tries to do this, but I can't stop people from sending native Ethereum ERC20s to the contract. I think we can actually solve this with the following mechanism:

While it does not inform a receiving contract of a transfer, the ERC20 standard does however include an event called Transfer that adds to the Logs whenever a transfer happens. We can have witnesses listen for all Transfer events in which the _to address is the peggy contract. They can then log this to the Peg Zone and the signers can sign a new Ethereum transaction that sends the funds back to the _from address from the Event. (You can actually combine these into one step because the witnesses are the signers). Once the returnTx has enough signatures, it can be relayed back to Ethereum to return the locked funds.
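
A rough sketch of the watching half of that mechanism with go-ethereum, filtering ERC20 Transfer events whose indexed _to is the peg zone contract (endpoint and addresses are placeholders; a real implementation would track a block cursor and queue the signed return transaction):

package main

import (
    "context"
    "log"
    "math/big"

    ethereum "github.com/ethereum/go-ethereum"
    "github.com/ethereum/go-ethereum/common"
    "github.com/ethereum/go-ethereum/crypto"
    "github.com/ethereum/go-ethereum/ethclient"
)

func main() {
    client, err := ethclient.Dial("ws://localhost:8545") // placeholder RPC endpoint
    if err != nil {
        log.Fatal(err)
    }
    peggy := common.HexToAddress("0x0000000000000000000000000000000000000000") // placeholder peg zone contract

    // Topic 0 is the event signature; topic 2 is the indexed _to of ERC20 Transfer.
    transferSig := crypto.Keccak256Hash([]byte("Transfer(address,address,uint256)"))
    toPeggy := common.BytesToHash(peggy.Bytes()) // addresses are left-padded to 32 bytes in topics

    logs, err := client.FilterLogs(context.Background(), ethereum.FilterQuery{
        FromBlock: big.NewInt(0), // in practice, start from a stored cursor
        Topics: [][]common.Hash{
            {transferSig}, // any ERC20 Transfer ...
            nil,           // ... from any address ...
            {toPeggy},     // ... sent directly to the peg zone contract
        },
    })
    if err != nil {
        log.Fatal(err)
    }
    for _, l := range logs {
        from := common.BytesToAddress(l.Topics[1].Bytes())
        // A real implementation would have the signers produce a return transaction to `from` here.
        log.Printf("stray transfer of token %s from %s", l.Address.Hex(), from.Hex())
    }
}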

ebcli is failing out with "Error initializing DB: resource temporarily unavailable"

Hello.

I am trying to use ebcli to burn tokens that have been or will be unlocked onto Ethereum, with the following command:

ebcli tx ethbridge burn cosmos1py2hyelc8lwqfm44wtjdfe3gq6vhlrykhaw4gz 0xBe4680d29AC7129D0Db45E4017DD146579a6C931 1000000000000000000blz --token-contract-address=0x75E6f42Daba0e300287f14e99086104820582D34 --ethereum-chain-id=3 --from=validator --chain-id=mychain --yes

Unfortunately, and rather non-deterministically, this command will sometimes (it sometimes succeeds too!) fail with the following error:

failed to get from fields: couldn't create db: Error initializing DB: resource temporarily unavailable

I see this error a lot across the board, and seem to recall seeing it output from the ebrelayer as well. It seems to be the cause of a large number of problems, as we are sometimes able to run tests and sometimes not.

Any thoughts here? Which "DB" is this? I suspect it is the blockchain's database, but am unsure.

I get this error quite a bit if I want to run a relayer but am already running the REST-server, as follows:

ebcli rest-server --laddr=tcp:// --chain-id=<my_chain_id>

If I am running the REST-server and get that error when running the relayer, I am forced to kill the REST-server, to get further.

Please share your thoughts.

Implement ERC721 support - Cosmos NFTs

Support NFTs (ERC721) by deploying an NFT contract which handles all NFTs from external chains, and another which can hold any NFT and send it to Cosmos chains.

Cosmos NFT uuid
When sending from the NFT contract to an external Cosmos chain, the NFT uuid should perhaps be
[chainId][nftAddress][originalUUID] -> 8B + 32B + 32B = 72-byte UUID

When receiving external NFTs, for compatibility take the sha3 (as uint256) of the "cosmos NFT uuid"; this is applicable when deploying a peggy-erc721-receiver in an Ethermint zone.

Otherwise Cosmos NFTs should be identified by their full 72-byte UUID.
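
A small sketch of packing that 72-byte identifier; the field widths follow the layout above, everything else is illustrative:

package main

import (
    "encoding/binary"
    "fmt"
)

// cosmosNFTUUID packs [chainId][nftAddress][originalUUID] into 8 + 32 + 32 = 72 bytes.
func cosmosNFTUUID(chainID uint64, nftAddress, originalUUID [32]byte) [72]byte {
    var uuid [72]byte
    binary.BigEndian.PutUint64(uuid[0:8], chainID)
    copy(uuid[8:40], nftAddress[:])
    copy(uuid[40:72], originalUUID[:])
    return uuid
}

func main() {
    var addr, orig [32]byte
    copy(addr[:], "example-nft-contract-address") // illustrative; a real address would be padded to 32 bytes
    copy(orig[:], "example-original-token-id")
    uuid := cosmosNFTUUID(1, addr, orig)
    fmt.Printf("%x\n", uuid[:])
}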

Metadata on Ethereum mainnet could be extended with fields:
function getSourceChainId(uint index) returns uint8
function getSourceAddress(uint index) returns bytes32
function getSourceTokenId(uint index) returns uint256

See ERC721 EIP:
https://github.com/ethereum/EIPs/blob/master/EIPS/eip-721.md

Another solution would be to make an ERC721 contract for every received NFT source.

How can I build the project?

I know this project came from etgate and I haven't had any luck building a working binary for etgate yet. Can this project be built and run? Thanks.

Update styling with Uber's go style guide

Code should adhere to Uber's Go style guide. Some notable sections from the style guide include (a small illustrative sketch follows this list):

  • Start Enums at One
  • Error Types
  • Group Similar Declarations
  • Import Group Ordering
  • Function Grouping and Ordering
  • Prefix Unexported Globals with _
  • Initializing Struct References
  • Format Strings outside Printf
  • Patterns: Test Tables
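
A tiny illustration of a few of those points (start enums at one, prefix unexported globals with _), following the conventions the Uber guide describes:

package main

import (
    "fmt"
    "time"
)

// Status starts at one so the zero value is distinguishable from a real state.
type Status int

const (
    StatusPending Status = iota + 1 // 1
    StatusRelayed                   // 2
    StatusFailed                    // 3
)

// Unexported globals are prefixed with an underscore.
var _defaultTimeout = 10 * time.Second

func main() {
    var s Status // zero value 0 means "unset", not StatusPending
    fmt.Println(s == StatusPending, _defaultTimeout)
}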

what's the status with this repo?

It's unclear what the status of this repo is. I was following along as the peg concept started with other ideas and culminated in this repository. It seems pretty central to the Cosmos project, but it hasn't been updated in a while.

I was hoping to try to create a proof of concept that I can transfer tokens back and forth between a vanilla Ethereum chain and a tendermint backed application using this, but I can't really figure out how to run this or what's left to do before it's in a testable state. Can someone elaborate?

Error: Cannot find module 'lodash'

truffle test --network ganache

Using network 'ganache'.

Error: Cannot find module 'lodash'
at Function.Module._resolveFilename (module.js:555:15)
at Function.Module._load (module.js:482:25)
at Module.require (module.js:604:17)
at require (internal/module.js:11:18)
at Object. (/home/wade.lee/goWorkProject/src/github.com/cosmos/peggy/ethereum-contracts/test/utils.js:3:9)
at Module._compile (module.js:660:30)
at Object.Module._extensions..js (module.js:671:10)
at Module.load (module.js:573:32)
at tryModuleLoad (module.js:513:12)
at Function.Module._load (module.js:505:3)
at Module.require (module.js:604:17)
at require (internal/module.js:11:18)
at Object. (/home/wade.lee/goWorkProject/src/github.com/cosmos/peggy/ethereum-contracts/test/cosmosERC20.js:4:15)
at Module._compile (module.js:660:30)
at Object.Module._extensions..js (module.js:671:10)
at Module.load (module.js:573:32)
at tryModuleLoad (module.js:513:12)
at Function.Module._load (module.js:505:3)
at Module.require (module.js:604:17)
at require (internal/module.js:11:18)
at /usr/local/lib/node_modules/truffle/node_modules/mocha/lib/mocha.js:231:27
at Array.forEach ()
at Mocha.loadFiles (/usr/local/lib/node_modules/truffle/node_modules/mocha/lib/mocha.js:228:14)
at Mocha.run (/usr/local/lib/node_modules/truffle/node_modules/mocha/lib/mocha.js:514:10)
at /usr/local/lib/node_modules/truffle/build/webpack:/~/truffle-core/lib/test.js:125:1
at
at process._tickCallback (internal/process/next_tick.js:160:7)

Missing app modules

I noticed some modules are missing in app.go. I wanted to start a discussion about the modules that the peg zone should or must have (a minimal registration sketch follows the lists below).

Must have:

  • x/evidence: evidence handling was migrated from x/slashing to the new evidence module
  • x/slashing: for downtime and double sign slashing
  • x/upgrade: reduces the upgrading time from software updates (i.e new version releases) to only a few minutes. Integrates upgrade proposals to ease the coordination process on versioning, exact time/height of the upgrade.
  • x/gov: for param change, text, community spend and upgrade proposals, etc.
  • x/distribution: in protocol staking reward and fee distribution mechanism for validators and delegators.
  • x/crisis: submit a tx to halt the chain in case a broken invariant has been found.
  • x/mint: Inflation rates for the coins (eg: staking coin).

Other modules (TBD):

  • x/ibc: IBC protocol implementation
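
A rough sketch of wiring the simpler of these into app.go via the SDK's module BasicManager; exact import paths and constructors (gov and distribution, for example, take extra arguments) vary by SDK version, so treat this as a shape rather than exact code:

package app

import (
    "github.com/cosmos/cosmos-sdk/types/module"
    "github.com/cosmos/cosmos-sdk/x/crisis"
    "github.com/cosmos/cosmos-sdk/x/evidence"
    "github.com/cosmos/cosmos-sdk/x/mint"
    "github.com/cosmos/cosmos-sdk/x/slashing"
    "github.com/cosmos/cosmos-sdk/x/upgrade"
)

// ModuleBasics collects the basic managers of the modules the peg zone app wires in.
// gov, distribution and ibc are omitted here only because their constructors need
// version-specific arguments (proposal handlers, etc.).
var ModuleBasics = module.NewBasicManager(
    evidence.AppModuleBasic{},
    slashing.AppModuleBasic{},
    upgrade.AppModuleBasic{},
    crisis.AppModuleBasic{},
    mint.AppModuleBasic{},
)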

production Level

Which modules should we modify or pay attention to in order to bring Peggy to production level?
Why do you mention that Peggy is not production level? Is it due to a lack of security or efficiency?
Basically, what functionality does Peggy currently lack that makes it unsuitable for production?

go build error

Hi, there are some problems when I cd to cmd/etgate and run go build. Can I get some instructions or help? Thanks a lot for any response!

when my input is like:
cd $GOPATH/src/github.com/cosmos/peggy/cmd/etgate/
go build
the result is like:

main.go:14:5: local import "./commands" in non-local package
gate.go:36:5: local import "../../commands" in non-local package
gate.go:37:5: local import "../../contracts" in non-local package
gate.go:35:5: local import "../../plugins/etgate" in non-local package
gate.go:38:5: local import "../../plugins/etgate/abi" in non-local package

or when my input is like
go build main.go

the result is like:

# github.com/tendermint/tendermint/rpc/client
../../../../tendermint/tendermint/rpc/client/httpclient.go:244:12: query.String undefined (type pubsub.Query has no field or method String)
../../../../tendermint/tendermint/rpc/client/httpclient.go:261:12: query.String undefined (type pubsub.Query has no field or method String)
# _/root/workspace/src/github.com/cosmos/peggy/plugins/etgate
../../plugins/etgate/etgate.go:73:16: undefined: "github.com/tendermint/abci/types".Result
../../plugins/etgate/etgate.go:88:42: undefined: "github.com/tendermint/abci/types".Result
../../plugins/etgate/etgate.go:109:38: undefined: "github.com/tendermint/abci/types".Result
../../plugins/etgate/etgate.go:122:39: undefined: "github.com/tendermint/abci/types".Result
../../plugins/etgate/etgate.go:134:34: undefined: "github.com/tendermint/abci/types".CodeType
../../plugins/etgate/etgate.go:135:33: undefined: "github.com/tendermint/abci/types".CodeType
../../plugins/etgate/etgate.go:136:32: undefined: "github.com/tendermint/abci/types".CodeType
../../plugins/etgate/etgate.go:137:31: undefined: "github.com/tendermint/abci/types".CodeType
../../plugins/etgate/etgate.go:138:33: undefined: "github.com/tendermint/abci/types".CodeType
../../plugins/etgate/etgate.go:139:35: undefined: "github.com/tendermint/abci/types".CodeType

Signature validation by OpenZeppelin's ECDSA library

This issue is for the reintegration of OpenZeppelin's ECDSA library, including any minor updates to the ebrelayer/contract test suite if required.

Background:
Valset's signature verification has been updated in #77 due to issues with OpenZeppelin's ECDSA library. While the library correctly prefixes and validates messages signed off-chain with web3.eth.sign() in the contract test suite, it does not correctly validate messages signed by validators running the ebrelayer. We can confirm that validator signatures are correct using cmd/ebrelayer/txs/utils.go's verifySignature(), and are also working with ecrecover(...) on-chain inside Valset.sol outside of OpenZeppelin's ECDSA library.


Originally posted by @fedekunze in #77

Discussion: combine ethereum/cosmos relayer into unified relayer

As further described in #77, ethereum and cosmos relayer could be refactored into a unified relayer which only has 1 ebrelayer init for ease of use. The unified relayer could share a logger, event data, init config information, and validator account information through the use of goroutines and environment variables.
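
A minimal sketch of the shape this could take; the Config fields and run functions are placeholders:

package main

import (
    "log"
    "os"
    "sync"
)

// Config holds the settings both relaying directions would share (placeholder fields).
type Config struct {
    EthereumRPC string
    CosmosRPC   string
    Logger      *log.Logger
}

func runEthereumRelayer(cfg Config) { cfg.Logger.Println("ethereum relayer started") }
func runCosmosRelayer(cfg Config)   { cfg.Logger.Println("cosmos relayer started") }

func main() {
    cfg := Config{
        EthereumRPC: os.Getenv("ETHEREUM_RPC"), // shared via environment variables
        CosmosRPC:   os.Getenv("COSMOS_RPC"),
        Logger:      log.New(os.Stdout, "relayer ", log.LstdFlags), // one shared logger
    }

    // A single `ebrelayer init` would launch both directions as goroutines.
    var wg sync.WaitGroup
    wg.Add(2)
    go func() { defer wg.Done(); runEthereumRelayer(cfg) }()
    go func() { defer wg.Done(); runCosmosRelayer(cfg) }()
    wg.Wait()
}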

WithdrawTx Ethereum + Pegzone fees

For the WithdrawTx that sends funds from the pegzone --> Ethereum, two fees need to be paid: the fee for the transaction in the zone itself (in Photons) and the fee for the Ethereum transaction. Technically the Relayer is the one that pays the fee in ETH, since it's the one who sends the parsed transaction to native Ethereum, while the address that submits the WithdrawTx is the one that pays the first fee in Photons.

To avoid having the Relayer pay the Ethereum fees, I suggest that the user from the pegzone pay both fees. For this, one approach is to split the total fee in two (x% and 100 - x%) according to a default percentage (which can be updated by the user), or otherwise to set the amount of each fee manually from the user's input.
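
A tiny worked example of the x% / (100 - x)% split (the percentage and amounts are purely illustrative):

package main

import "fmt"

// splitFee divides the total fee the user pays into the pegzone portion and the
// Ethereum portion according to a configurable percentage.
func splitFee(totalFee, ethereumPct int64) (pegzoneFee, ethereumFee int64) {
    ethereumFee = totalFee * ethereumPct / 100
    pegzoneFee = totalFee - ethereumFee // the remaining (100 - x)%
    return pegzoneFee, ethereumFee
}

func main() {
    pegzone, ethereum := splitFee(1000, 60) // hypothetical: 60% earmarked for the Ethereum tx
    fmt.Println(pegzone, ethereum)          // 400 600
}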

relayer ropsten network integration not working

I tried running the Ropsten network integration for the automatic relaying process, but it does not seem to work properly. The relayer is not able to connect to the deployed address and shows a web socket issue. I have attached a screenshot for reference.
Screenshot from 2019-10-25 11-38-03

Implement pegzone component

The pegzone ties all other components (Witness, Relayer and Signer) together. It defines the format for WitnessTx, WithdrawTx, SignTx and SendTx.

The WitnessTx is sent by the Witness component. It results in the pegzone writing the event (usually a deposit into the Ethereum smart contract) into its state. The pegzone aggregates all WitnessTxs for a given event and credits a user account once >2/3 of validators have witnessed the event (a minimal aggregation sketch follows the type definitions below).

type WitnessTx struct {
    ethEvent EthEvent // encapsulates amount, token and destination address
    signature Signature // has to be a signature from a key in the validator set
}

type EthEvent struct {
    destination []byte // pegzone address
    coin Coin // contains denom and amount of the token to be sent
}
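
A minimal sketch of that aggregation and the >2/3 check; the types and numbers are illustrative:

package main

import "fmt"

// witnessTally tracks which validators have witnessed a given Ethereum event
// and how much voting power that represents.
type witnessTally struct {
    totalPower     int64
    witnessedPower int64
    seen           map[string]bool
}

func newWitnessTally(totalPower int64) *witnessTally {
    return &witnessTally{totalPower: totalPower, seen: map[string]bool{}}
}

// addWitness records a WitnessTx from a validator and reports whether the event
// has crossed the >2/3 threshold, at which point the user account can be credited.
func (t *witnessTally) addWitness(validator string, power int64) bool {
    if !t.seen[validator] {
        t.seen[validator] = true
        t.witnessedPower += power
    }
    // strict >2/3 of total power without floating point: 3 * witnessed > 2 * total
    return 3*t.witnessedPower > 2*t.totalPower
}

func main() {
    tally := newWitnessTally(100)
    fmt.Println(tally.addWitness("val-a", 40)) // false (40%)
    fmt.Println(tally.addWitness("val-b", 30)) // true  (70% > 2/3)
}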

The WithdrawTx is sent by a user to send money from the pegzone to Ethereum. The pegzone checks whether the user has the correct token balances and then deducts them from the user account.

type WithdrawTx struct {
    destination []byte // Ethereum address of the recipient
    coin Coin // contains denom and amount of the token to be sent
    signature Signature // a signature that authorises this transaction
}

type WithdrawData struct {
     SignedWithdraw []SignTx	 // Accumulates `SignTx`s until it reaches +2/3 of total power
}

The SignTx is sent by the Signer component.

type SignTx struct {
    signatureBytes []byte // signature bytes over the concatenation of the top three fields of RedeemTx
    signature Signature // signature authorising this transaction, which must come from a validator
}

@ethanfrey @spacejam @mossid @sunnya97 @fedekunze
The SendTx is used to send tokens between user accounts on the pegzone.
