confio / ts-relayer
IBC Relayer in TypeScript
License: MIT License
Ensure we have 3 nodes and can send over multiple hops. Nodes can maintain different connections.
After stephenh/ts-proto#211 is merged, I can remove my local hacks and we will have an easy flow to regenerate the protobuf bindings from source on anyone's machine.
Depends on #43
Perform a similar query set and return equivalent metadata.
Note we also need the original packet when submitting an acknowledgement. I am open to suggestions on whether this function should do that, or whether we should have a second function that takes the (filtered) list of acks and gets the packet data for them. If the data is easily available here, add it. If it involves queries to a second chain, then don't include it in this query.
We should accept a winstonjs-compatible logger interface in Link, Endpoint, and IbcClient and produce debug/info/warn/error logging. Verbose should be every rpc call we make (debug with the entire payload), info the general actions taken ("create connection musselnet {07-tendermint-5:connection-3} <=> bifrost {07-tendermint-2:connection-1}"), and warn/error only when there are problems.
Question whether the logger should be passed in every method call, or we should set it in the constructors.
The interface should be a subset of winstonjs.Logger. We can expand it over time, but it is harder to reduce. My proposal is something like this (taking only one aspect of LeveledLogMethod):
interface Logger {
error: (message: string, ...meta: any[]) => Logger,
warn: (message: string, ...meta: any[]) => Logger,
info: (message: string, ...meta: any[]) => Logger,
verbose: (message: string, ...meta: any[]) => Logger,
debug: (message: string, ...meta: any[]) => Logger,
}
Maybe we also want to expose the isDebugEnabled(): boolean ... methods to avoid doing unneeded calculations for the logger?
I'm submitting a ...
[ ] bug report
[x] feature request
[ ] question about the decisions made in the repository
[ ] question about how to use this project
Summary
Builds on #6
Implement findConnection, createConnection, and findOrCreateConnection on Link and test them. Tune the API so it works well with pre-existing data as well as a fresh chain. Also implement updateClient and test handling error messages on client timeout.
In ibcclient.ts I created a prebuilt fee table with hardcoded values:
// TODO: replace this with buildFees in IbcClient constructor to take custom gasPrice, override gas needed
const fees = {
initClient: {
amount: coins(2500, 'ucosm'),
gas: '100000',
},
// ...
};
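A buildFees helper as hinted at in the TODO could derive each entry from a gas price instead of hardcoding amounts. This is a sketch: the GasPrice/StdFee shapes and the buildFees signature are assumptions modeled on the hardcoded table above, not final API.

```typescript
// Sketch of the buildFees idea from the TODO. Types are simplified
// stand-ins, not the real cosmjs types.
interface GasPrice {
  amount: number; // e.g. 0.025
  denom: string; // e.g. 'ucosm'
}

interface StdFee {
  amount: { amount: string; denom: string }[];
  gas: string;
}

function buildFee(gasPrice: GasPrice, gasLimit: number): StdFee {
  // fee = gasLimit * gasPrice, rounded up so the fee is never too low
  const amount = Math.ceil(gasPrice.amount * gasLimit);
  return {
    amount: [{ amount: amount.toString(), denom: gasPrice.denom }],
    gas: gasLimit.toString(),
  };
}

function buildFees(gasPrice: GasPrice): Record<string, StdFee> {
  return {
    // same gas limit as the hardcoded table; further entries follow the pattern
    initClient: buildFee(gasPrice, 100_000),
  };
}
```

With a gas price of 0.025 ucosm, this reproduces the hardcoded initClient entry (2500 ucosm for 100000 gas).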
We should:
IbcClient:
Test cases:
updateClient and matching proofs. Ensure success and check receiver balance on remote chain.
This is a series of slower tests in a more realistic environment that can be run occasionally (eg. only on master merges, not by default in yarn test).
Somehow set up 4-node testnets with docker (compose?) and connect them. Use staking to create wildly changing validator sets, and add/remove/stop a node. Ensure we can update light clients even when the validator set changes quickly, and handle "real world" issues.
Part of #24
There are some queries which hit their limit quickly, and if running many tests on a chain will fail to list all items, leading to test failures. Worse, they will cause the relayer to misbehave on highly loaded chains.
To solve this, let's add some query variants that get all the data using pagination, and ensure any grpc query that does accept PageRequest / paginationKey exposes that in the query interface. In particular, at least the following variants should be added:
ibc.channel.channels => allChannels
ibc.channel.connectionChannels => allConnectionChannels
ibc.client.states => allStates, also allStatesTm which maps the above result with the decoding as in stateTm
ibc.connection.connections => allConnections
ibc.connection.clientConnections => allClientConnections
ibc.channel.packetCommitments => allPacketCommitments
ibc.channel.packetAcknowledgements => allPacketAcknowledgements
ibc.channel.unreceivedPackets => allUnreceivedPackets
ibc.channel.unreceivedAcks => allUnreceivedAcks
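All of these allX variants could share one pagination loop. A minimal sketch, assuming simplified stand-ins for the grpc PageRequest/PageResponse shapes (the real types carry a next_key on the response):

```typescript
// Sketch: generic pagination loop for the allX query variants.
// Paginated<T> is a simplified stand-in for the grpc response types.
interface Paginated<T> {
  items: T[];
  // mirrors PageResponse.next_key: empty or undefined means no more pages
  nextKey?: Uint8Array;
}

async function getAllPages<T>(
  query: (paginationKey?: Uint8Array) => Promise<Paginated<T>>
): Promise<T[]> {
  const results: T[] = [];
  let key: Uint8Array | undefined;
  do {
    const page = await query(key);
    results.push(...page.items);
    key = page.nextKey;
  } while (key !== undefined && key.length > 0);
  return results;
}
```

An allChannels variant could then be roughly `getAllPages((key) => ibc.channel.channels(key))`, and likewise for the others.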
Why? https://classic.yarnpkg.com/blog/2016/11/24/lockfiles-for-all/
yarn build is failing. It's related to a different dependency tree than in the previous build. It seems like some dependency (types) released a new version with a breaking change, and since we don't lock the versions, it exploded: https://app.circleci.com/pipelines/github/confio/ts-relayer/145/workflows/234297c6-fe84-4112-a03f-b52a28461013/jobs/300
To reproduce the error locally: remove yarn.lock and node_modules on your machine, then run yarn followed by yarn build.
Simply committing yarn.lock would help.
Builds on #53
The following just need the registry (should work if mnemonic is not provided):
ibc-setup ports --chain=ABC
ibc-setup connections --chain=ABC
ibc-setup channels --chain=ABC [--port=transfer]
The following needs the mnemonic as well, and will reuse an existing connection via Link.createWithExistingConnections:
ibc-setup channel --src=ABC --dest=XYZ --src_connection=connection-3 --dest_connection=connection-4 ...
(All commands defined in spec)
Please add a test: create a connection as in #53, query channels when there are none, make a new channel on the existing connection, then query channels to verify it is there.
This depends on a wasmd v0.16.0-alpha1 release based on cosmwasm v0.14.
Use the cw20-ics20 contract and make a unit test to send tokens with it to/from a normal ibctransfer module on the other chain.
Note: this has been manually tested because it became urgent and I had no time to set this up. It should still be done soon, but it is not the most urgent.
This provides an event attribute connection_id on the packet and receipt events. We can use that to filter and query across all channels on a connection much more easily.
Depends on #40
Use the new connection_id attribute for the searches.
The packets should be returned in a list with metadata:
{
packet: Packet,
sender: string, // who signed the message to send it
height: number, // which block height was it on
}
Height is important to check which headers we need. Signer may be interesting later for filtering in the application (only relay packets from people that paid their relayer service fees).
Performs the same work as #53 and #54 but automated. You can do those first to get a feel for the steps, or do this first with just one workflow. Ensure the flow is tested, both with creating a new connection, and reusing an existing connection. (Queries from #54 are helpful for testing, but can be done in TS as well)
Implement the workflow defined here
It dumps out way too much info, and all the binary data is encoded like [34, 72, 143, ...]
Let us:
(1) print this out in base64 in TestLogger (and see if this affects winstonjs as well)
(2) allow enabling info, verbose, or debug levels in the test logger
Update: I made a workaround so that debug is never logged in TestLogger; only verbose and above are.
We still need to make debug usable. I think pre-processing the msgs before writing to logs would be better than just dumping the message and hoping base64 makes it readable.
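One possible pre-processing step (a sketch of the idea, not current TestLogger behavior): recursively replace Uint8Array fields with base64 strings before handing the message to the logger.

```typescript
// Sketch: walk a decoded protobuf message and render binary fields as
// base64 instead of the noisy [34, 72, 143, ...] number arrays.
function toLoggable(value: unknown): unknown {
  if (value instanceof Uint8Array) {
    return Buffer.from(value).toString('base64');
  }
  if (Array.isArray(value)) {
    return value.map(toLoggable);
  }
  if (value !== null && typeof value === 'object') {
    const out: Record<string, unknown> = {};
    for (const [k, v] of Object.entries(value as object)) {
      out[k] = toLoggable(v);
    }
    return out;
  }
  return value;
}
```

The logger would then call `logger.debug(JSON.stringify(toLoggable(msg)))` rather than dumping the raw object.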
I assume this is 0 everywhere in the code and it worked in the CI. I wondered where/when it would be set. This came up recently on the cosmos discord, and here was the resolution:
Doing some testing here with relayer debug turned on, I think the above statement has it in the wrong direction. The last statement should read: IBC packets destined to the chain (as opposed to originating from) must have their "revision_number" field set to the chain id suffix.
In testing, I have two chains: "gaiad-microtick-testnet" which is revision number = 0, and "microtick-testnet-rc3-1" which would be revision number = 1. In order for the packet commitments to verify correctly, IBC packets from gaiad -> microtick need the revision number set to "1". In the other direction, packets from microtick -> gaiad need the revision number set to "0".
Using these values, everything seems to be working correctly again. I wanted to post the solution here in case anyone else comes across this.
Thanks for your help guys!
So, it is based on the chain-id, and we will need that to connect to most networks. It is easy to add a suffix in the CI chains to enforce this. Then we need to ensure support throughout the stack until all tests pass again.
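Parsing the revision number from the chain-id suffix could look roughly like this. A minimal sketch following the convention described above (numeric suffix after the last dash; 0 when there is none) — not the exact SDK implementation.

```typescript
// Sketch: derive revision_number from a chain-id suffix, per the
// resolution quoted above. Only a purely numeric suffix counts.
function parseRevisionNumber(chainId: string): number {
  const idx = chainId.lastIndexOf('-');
  if (idx === -1) {
    return 0;
  }
  const suffix = chainId.slice(idx + 1);
  return /^[0-9]+$/.test(suffix) ? parseInt(suffix, 10) : 0;
}
```

With the chains from the quoted report, "microtick-testnet-rc3-1" yields 1 and "gaiad-microtick-testnet" yields 0, matching the described behavior.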
This is needed to make the hashes match.
The default implementation of ts-proto returns Date for the protobuf.Timestamp fields. This is nice to work with, but only has millisecond precision, not nanosecond. When we serialize this, the header differs from the signed header (we set 6 digits to 0). Thus, we cannot submit a valid SignedHeader to ibc to update the client.
I found this on the ts-proto README:
With --ts_proto_opt=useDate=false, fields of type google.protobuf.Timestamp will not be mapped to type Date in the generated types.
Will try that.
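To make the precision loss concrete, here is a sketch of the round-trip through Date. The Timestamp shape is the {seconds, nanos} pair that useDate=false would keep (simplified; the generated type uses Long for seconds):

```typescript
// Sketch: why Date is lossy for protobuf.Timestamp. Date stores only
// milliseconds, so the sub-millisecond part of nanos is zeroed out.
interface Timestamp {
  seconds: number; // simplified; the generated type uses Long
  nanos: number;
}

function toDate(ts: Timestamp): Date {
  // truncates nanos to milliseconds: up to 999999 ns are lost
  return new Date(ts.seconds * 1000 + Math.floor(ts.nanos / 1_000_000));
}

function fromDate(date: Date): Timestamp {
  const ms = date.getTime();
  return {
    seconds: Math.floor(ms / 1000),
    nanos: (ms % 1000) * 1_000_000,
  };
}
```

A header timestamp with nanos = 123456789 comes back as 123000000 after the round trip, which is exactly the "6 digits set to 0" mismatch described above.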
Builds on #50
Start work on ibc-setup
Follow-up to #39
Requires #52
Implement ibc-setup init as defined in the spec. Ensure all commands also read app.yaml for initialization.
We should have simple setup scripts to get two chains up and running to run CI (and local) tests against. I would like to use different chains with different prefixes and tokens to ensure everything handles this (the normal case).
We can base this on the wasmd and simapp setup scripts from https://github.com/cosmos/cosmjs.
We should start with wasmd v0.15.0 and simapp v0.41.0. Maybe make patch updates, but these should be compatible with the Cosmos Hub on upgrade, so let's not support any newer features coming in the sdk.
Setup a common code style guide (review eslint and prettier configs).
This only needs the supposed connection id on both sides.
It should check to see if they really do point to the other side. Most of the logic is here: https://github.com/confio/ts-relayer/blob/main/spec/ibc-relayer.md#connect
Once that works, you could also check the headers of the chains and make sure they match what is stored in the client on the remote chain.
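The cross-check that two connection ends really point at each other could look roughly like this. A sketch only: the shapes are simplified from the IBC ConnectionEnd proto, and the function name is hypothetical.

```typescript
// Sketch: verify two supposed connection ends reference each other.
// ConnectionEnd is a simplified stand-in for the IBC proto type.
interface ConnectionEnd {
  clientId: string;
  counterparty: { clientId: string; connectionId: string };
}

function mutuallyLinked(
  connAId: string,
  connA: ConnectionEnd,
  connBId: string,
  connB: ConnectionEnd
): boolean {
  return (
    connA.counterparty.connectionId === connBId &&
    connB.counterparty.connectionId === connAId &&
    connA.counterparty.clientId === connB.clientId &&
    connB.counterparty.clientId === connA.clientId
  );
}
```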
How do we share with people who will run the relayer?
We could push to npm, we could just share a link to the repo and say "build from source", we could provide a docker image with compiled code. Or maybe there are other ideas.
This issue is to define and document such a process. We can implement it in another issue.
I'm submitting a ...
[ ] bug report
[ ] feature request
[ ] question about the decisions made in the repository
[ ] question about how to use this project
Summary
Builds on #7
Implement listPorts and listChannels(port?: string) on Endpoint, and createChannel on Link. We should be able to create multiple ics20 channels between two chains, as well as find and reuse existing channels. Do tests that cover error handling (eg. put an invalid version so one call will fail... how to handle cleanup?)
I'm submitting a ...
[ ] bug report
[x] feature request
[ ] question about the decisions made in the repository
[ ] question about how to use this project
Summary
See Loop from spec, and implement with polling:
Follow-up to #39
This one is tricky. We need to allow the option when creating a connection, and read the option when using an existing connection.
If delay=0, then the general behavior we have everywhere works.
If delay > 0, then we must do the following to relay packets. This means there is a delay between updating a client and being able to use that client header to prove anything, as a potential protection against byzantine networks.
Both queries and messages.
Follow up from #65
Let's do this for acks also. Need test cases
We can create 2 ics20 channels (different IDs) on the same connection. Send some packets on each, then try the getPendingPackets and relayPackets commands to ensure this works.
If you try to update the client to header H, but a different header was previously submitted, we have a clear case of a byzantine network (> 2/3 voting for different blocks at the same height). In this case, the relayer should be able to repackage its knowledge into evidence of cheating on the source chain (the one whose header we were submitting, not the chain we were submitting to).
This will be hard to test without a real Byzantine testnet, so this is mainly a placeholder until there is a good test environment for this.
Builds on #52
ibc-setup
Link.createWithNewConnections() with proper IbcClients
Builds on #51
-i is hard; that can be done as a later issue
ibc-setup commands for key management
I'm submitting a ...
[ ] bug report
[x] feature request
[ ] question about the decisions made in the repository
[ ] question about how to use this project
Summary
Like #9 but for acknowledgements
For the txSearch construction, support filter by src/dest port/channel. Any number of them AND'ed.
Depends on #46 (blocked)
Not so sure what needs to be done here. In the normal case, I think we will relay all packets in order. Let's ensure they are sorted by (channel, sequence).
However, we will need to be careful: (1) if we filter out packets, we cannot skip on ordered channels, and (2) if a packet is timed out, we need to submit that before the packet to relay. So, most low-level primitives should work the same, but the higher-level ones may need some adjustment.
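The (channel, sequence) ordering is a plain comparator. A sketch with a simplified packet shape (the field names here are stand-ins, not the real Packet type):

```typescript
// Sketch: sort pending packets by (channel, sequence) so ordered
// channels relay in order. Fields are simplified stand-ins.
interface PendingPacket {
  channelId: string;
  sequence: number;
}

function sortPackets(packets: PendingPacket[]): PendingPacket[] {
  // copy first so callers keep their original ordering
  return [...packets].sort(
    (a, b) => a.channelId.localeCompare(b.channelId) || a.sequence - b.sequence
  );
}
```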
Requires #57
It should implement the logger spec. The only needed transports are to console/stdout and to file, assuming a human-readable format on console and JSON when written to file (so we can send it to Kibana, etc.).
This should be enough for the first release; we can revisit more transports and format options in v2 based on user feedback.
One is not ucosm; ensure it works. (No src/dest mixups)
Maybe switch to gaia v4.0.1 rather than simapp as well?
Based on https://github.com/confio/ts-relayer/blob/main/spec/config.md#registry-format. This should live in a demo folder with all configs we can use for testing.
We should include entries for:
Bonus points for other ibc-enabled testnets. We should ensure they are on cosmos SDK 0.41.1 or higher, or at least that the nodes we point to are using the patch release.
Given two rpc endpoints, chain ids, and proposed existing connection_ids...
Builds on #5
Ensure we have an API to manually override the sequence, or provide N tx and each gets an auto-incremented sequence. If we submit 10 packets, we don't want to do this over 10 blocks, but rather in 1.
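Assigning explicit sequences for a batch could be sketched as below. Signing and broadcasting are out of scope here; SignDoc and assignSequences are hypothetical names for illustration only.

```typescript
// Sketch: give each message in a batch an explicit, auto-incremented
// account sequence so all N txs can land in one block instead of 10.
interface SignDoc {
  msgIndex: number;
  sequence: number;
}

function assignSequences(baseSequence: number, msgCount: number): SignDoc[] {
  return Array.from({ length: msgCount }, (_, i) => ({
    msgIndex: i,
    sequence: baseSequence + i,
  }));
}
```

The caller would query the account sequence once, then sign each message with its assigned sequence rather than re-querying between broadcasts.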
I'm submitting a ...
[ ] bug report
[x] feature request
[ ] question about the decisions made in the repository
[ ] question about how to use this project
Summary
Builds on #8
To ensure #8 is closed properly and there is no accidental src/dest mixup, we need to try a case where the source and dest connections use different channel ids and different ports. For ports, we can reconfigure what the default ics20 port is on one app. For channels, we need to init a channel on one side, then forget it. This will auto-increment one number on one side and not on the other.
The file was copied over from @cosmjs as a starting point, but the organization doesn't match what we need. It only has about half the exposed queries as well.
I propose:
queryRawProof
in the verified queries and make a separate subsection for them (eg. ibc.proofs
). clientStateWithProof
should also go in there (which is more of what we will want)unverified
namespace (that is default), and reorganize them. Some ideas: ibc.clientState()
, client.clientState()
, ibc.client.clientState()
. Or if you really want to rename them to remove stutter: ibc.client.state()
- these all would call clientQueryService.ClientState()
codec/ibc/core/*/queries.ts
files as direct grpc queries following whatever format you choose in 2A declarative, efficient, and flexible JavaScript library for building user interfaces.