Monorepo containing all the Demeris backend code and infrastructure definitions.
License: GNU Affero General Public License v3.0
Tracelistener must subscribe to the WebSocket endpoint pointed to by CHAIN_NAME, and write the last block time received on it to the database.
This is needed to unblock #28.
We need to add stakingDenoms to the CNS so we can implement staking features.
We need to implement a way for the frontend to easily query the status of a given transaction after it has been broadcast.
The tx endpoint will return a ticket t (a UUID) along with the response message. This ticket can then be used by the frontend to query the status of the transaction initiated by the very same call that generated t:

GET /tx/{:ticket}
A Redis cache will be used to store tickets and the associated data, while a component rpcwatcher
watches the Tendermint RPCs in search of patterns generated from the data contained in Redis.
On the backend we must differentiate IBC transfers from plain bank sends:

- For plain bank sends, we associate t with the transaction hash that a full node returns after broadcasting a transaction; rpcwatcher will observe the events and mark a transaction as completed when it observes a successful event with the given transaction hash t.
- For IBC transfers, when we observe a successful IBC transfer event (TBD), we mark it as done on the backend.

By using Redis we don't need to communicate directly with rpcwatcher, as it queries data from Redis at every execution tick; we also don't need to handle concurrency edge cases, since Redis is guaranteed to be coherent in multi-{writer,reader} contexts.
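As a rough sketch of how tickets might be keyed so that rpcwatcher can find them on each tick (the key layout and struct below are assumptions, not the final design):

```go
package main

import "fmt"

// Ticket tracks the lifecycle of a broadcast transaction.
type Ticket struct {
	Status string // e.g. "pending", "complete"
	TxHash string // hash returned by the full node at broadcast time
}

// ticketKey builds the Redis key under which a ticket is stored.
// The "tx/{chain}/{ticket}" layout is an assumption for illustration;
// any scheme works as long as rpcwatcher can scan it at every tick.
func ticketKey(chainName, ticket string) string {
	return fmt.Sprintf("tx/%s/%s", chainName, ticket)
}

func main() {
	fmt.Println(ticketKey("cosmos-hub", "2b37e0f2-8b1c-4a19-9cfd-ce63fca112b3"))
}
```

Because both the API server and rpcwatcher only touch Redis, neither needs to know about the other's lifecycle.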
These constants should be configurable so we can use tracelistener from a different registry, and change the namespace of demeris in Kubernetes clusters.
We need a secure interface to manually add things to the CNS. Examples of things we'll need to do manually:
The CNS being a critical component, access rights to the interface must be tightly restricted and properly secured.
Tracelistener doesn't produce reliable auth numbers; we need to launch with a more stable version that queries nodes directly. In order not to break the frontend, we need to write an adapter in the API server that serves the data in the same format as before.
The chain_status endpoint does the following:

If all the checks pass, return the "live" status.
On the front-end, we should call chain_status on all the chains when users log in @clockworkgr . If a chain is down, it has a lot of consequences:
When modifying a chain, enforce that at least one fee denom must be specified for the process to complete successfully.
Currently the processor passes the data to every individual module processor that owns the current key prefix.
However, key prefixes are unique PER MODULE and not overall.
So e.g. BankProcessor and DenomTraceProcessor both own key prefix 0x02
thus both attempt to process data that might be meant for the other.
At best, this is a waste of resources and produces unneeded/weird log entries.
At worst (although very unlikely, depending on the actual data contained) it might lead to corrupted data being written to the DB.
Individual processors should be grouped by module name, and TraceListener should check the store key (module name) as well as the prefix.
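A minimal sketch of the proposed grouping, assuming a hypothetical Processor interface; the real tracelistener types differ:

```go
package main

import (
	"bytes"
	"fmt"
)

// Processor is a minimal stand-in for a tracelistener module processor.
type Processor interface {
	OwnsKey(key []byte) bool
	Process(key, value []byte) error
}

// dispatch routes a trace operation to the processors registered for the
// store key (module name) that the write belongs to, then checks the key
// prefix only within that group. All names here are hypothetical.
func dispatch(byModule map[string][]Processor, storeKey string, key, value []byte) error {
	for _, p := range byModule[storeKey] {
		if p.OwnsKey(key) {
			return p.Process(key, value)
		}
	}
	return fmt.Errorf("no processor for module %q, key %x", storeKey, key)
}

type prefixProcessor struct {
	prefix []byte
	seen   int
}

func (p *prefixProcessor) OwnsKey(key []byte) bool        { return bytes.HasPrefix(key, p.prefix) }
func (p *prefixProcessor) Process(key, value []byte) error { p.seen++; return nil }

func main() {
	bank := &prefixProcessor{prefix: []byte{0x02}}
	ibc := &prefixProcessor{prefix: []byte{0x02}} // same prefix, different module
	byModule := map[string][]Processor{"bank": {bank}, "ibc": {ibc}}
	_ = dispatch(byModule, "bank", []byte{0x02, 0x01}, nil)
	fmt.Println(bank.seen, ibc.seen) // only the bank processor ran
}
```

With this, BankProcessor and DenomTraceProcessor can keep their overlapping 0x02 prefix without ever seeing each other's writes.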
On the very short term, one gaia full node can serve all Cosmos Hub queries, but given that we'll make a lot of /liquidity queries, it might be good to have a second gaia full-node specifically for handling these in order to not stress the RPC too much.
Integrate operator in add-new-chain process, in order to start a new full node and register the proper info (nodeInfo) in the CNS
Cf. #12
To dos:
When a ticket is updated based on a tx event (e.g. transit -> complete), we want to add the err to the status info, so that we know whether the tx passed DeliverTx or not (err will be nil if everything is ok).
Right now, the tracelistener unfortunately doesn't contain context about the store key of the module. This is an issue, as store writes between different modules can use the same prefix (cf #60).
To properly fix this issue, we would need to either add another prefix that pertains to the module itself to the key in Cosmos SDK (no time), or we would need to transition away from using tracelistener entirely (no time).
As a stop-gap, we need to "harden" writes for each module we track so that there can be no conflicts. For example, the tracelistener shouldn't write denom_traces from the IBC module to the DB if the path does not follow a certain format (e.g. begin with /transfer).
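A minimal sketch of such a hardening check (the exact path format is an assumption):

```go
package main

import (
	"fmt"
	"strings"
)

// validDenomTracePath sketches the "hardening" check: only accept IBC
// denom trace paths whose port is transfer (e.g. "transfer/channel-0").
// The precise format to enforce is an assumption; adjust it to the real
// trace layout before writing anything to the DB.
func validDenomTracePath(path string) bool {
	return strings.HasPrefix(path, "transfer/")
}

func main() {
	fmt.Println(validDenomTracePath("transfer/channel-0"))
	fmt.Println(validDenomTracePath("\x02garbage"))
}
```

Writes that fail the check would simply be skipped, so a bank write sharing the prefix can never end up in denom_traces.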
Right now, the frontend uses a global hardcoded gas parameter. This is good enough for MVP, but it would be good to have another gas endpoint so that the frontend can more accurately estimate gas, and therefore tx fee, for users.
One idea would be to have a /simulate endpoint that would accept a tx type and return a gas estimation.
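As a sketch of the kind of arithmetic the frontend would do with the result (the gas price value and units are purely illustrative):

```go
package main

import "fmt"

// feeFor estimates a tx fee from a simulated gas amount and a per-unit
// gas price level (low/average/high). Names and units are illustrative;
// the real /simulate endpoint would run the tx against a full node to
// obtain gasEstimate.
func feeFor(gasEstimate uint64, gasPrice float64) uint64 {
	return uint64(float64(gasEstimate) * gasPrice)
}

func main() {
	// e.g. 200000 gas at an assumed "average" price of 0.025uatom per gas unit
	fmt.Println(feeFor(200000, 0.025))
}
```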
Right now, tickets are stored in cache and expire after a certain period of time. We don't want stuck tickets to expire.
Cosmos SDK 0.43 is closer to being released and with it, a new version of Gaia.
We should define which versions of the SDK we support, to further focus tracelistener development efforts in that regard.
Right now we fully support 0.42 core modules, but since Gaia (and some partner chains like Akash) move as fast as it, I'm not sure we should keep this version around.
Is there a complete list of the chains we support for MVP?
For each chain we need to define an amount of time in seconds that we tolerate regarding last block received.
This amount will then be used in the /status
endpoint:
tNow := time.Now()
// elapsed time since the last block we recorded for this chain
passed := tNow.Sub(database.GetBlockTime(chainName))
threshold := database.GetThresh(chainName)
thresh, err := time.ParseDuration(fmt.Sprintf("%ds", threshold))
if err != nil {
    return err
}
if passed > thresh {
    // return status = false
}
We need to have at least two Demeris accounts per chain we support:
As part of this, we will need to define how to properly store and secure seed phrases
When we add a new chain to Demeris backend by adding an entry to the CNS, we need to make sure that it has a connection/port/channel with all the chains we currently support.
In other words, we need a process that does the following:
// Process that needs to be followed when trying to enable a new chain NewChain with id newchain-id

// get list of clientIDs on NewChain
clientIDsNewChain := db.getClientIDs(newchain-id)

for each chain-id in CNS {
    // matchingClientIDs contains a list of matching client IDs between NewChain and chain
    matchingClientIDs := []string

    // this loop attempts to find the list of matching clients between chain and NewChain
    for each clientID in clientIDsNewChain
        clientState := db.getClientState(newchain-id, clientID)
        if clientState.chainID == chain-id
            // there is a client on NewChain that matches the chain-id of a chain in the CNS
            // we now need to verify that there is a matching client on the other side
            counterpartyClientID = clientState.counterparty.ClientID
            (found, counterPartyClientState) := db.getClientState(chain-id, counterpartyClientID) // find clientState where ChainID == chain-id AND ClientID == counterpartyClientID
            if (found) && (counterPartyClientState.chainID == newchain-id) && (counterPartyClientState.counterparty.clientID == clientID)
                matchingClientIDs.add(clientID)

    if matchingClientIDs.isEmpty
        // no matching client IDs, we need to create client, connection and channel
        (success, channelToChain, channelToNewChain) = hermes.createChannel(chain-id, newchain-id)
        if success
            cns.PrimaryChannel(chain-id, newchain-id) = channelToNewChain
            cns.PrimaryChannel(newchain-id, chain-id) = channelToChain
        return
    else
        // there is at least one matching client
        // matchingConnectionIDs contains a list of matching connection IDs between NewChain and chain
        matchingConnectionIDs := []string
        for each clientID in matchingClientIDs
            // get list of connectionIDs on NewChain associated with clientID
            connectionIDsNewChain := db.getConnectionIDsFromClientID(newchain-id, clientID)
            // this loop attempts to find the list of matching connections between chain and NewChain
            for each connectionID in connectionIDsNewChain
                counterPartyConnectionID = db.getCounterpartyConnectionID(newchain-id, connectionID)
                if db.getCounterpartyConnectionID(chain-id, counterPartyConnectionID) == connectionID
                    // there is a matching connection on both sides
                    matchingConnectionIDs.add(connectionID)

        if matchingConnectionIDs.isEmpty
            // there is at least a matching client on each side but no connection
            // we open a new client, connection and channel to be safe
            (success, channelToChain, channelToNewChain) = hermes.createChannel(chain-id, newchain-id)
            if success
                cns.PrimaryChannel(chain-id, newchain-id) = channelToNewChain
                cns.PrimaryChannel(newchain-id, chain-id) = channelToChain
            return
        else
            // at least one of the matching clients has a matching connection
            // matchingChannelIDs contains a list of matching channel ID pairs (of unordered channels) between NewChain and chain
            matchingChannelIDs := []{string, string}
            for each connectionID in matchingConnectionIDs
                // get list of channelIDs on NewChain associated with connectionID
                channelIDsNewChain := db.getChannelIDsFromConnectionID(newchain-id, connectionID)
                // this loop attempts to find the list of matching channels between chain and NewChain
                for each channelID in channelIDsNewChain
                    if channelID.isUNORDERED
                        counterPartyChannelID = db.getCounterpartyChannelID(newchain-id, channelID)
                        if db.getCounterpartyChannelID(chain-id, counterPartyChannelID) == channelID
                            // there is a matching unordered channel on both sides
                            matchingChannelIDs.add(channelID, counterPartyChannelID)

            if matchingChannelIDs.isEmpty
                // there is at least a matching connection on each side but no matching unordered channel
                // we open a new client, connection and channel to be safe
                (success, channelToChain, channelToNewChain) = hermes.createChannel(chain-id, newchain-id)
                if success
                    cns.PrimaryChannel(chain-id, newchain-id) = channelToNewChain
                    cns.PrimaryChannel(newchain-id, chain-id) = channelToChain
                return
            else
                // at least one of the matching connections has a matching unordered channel
                // pick the first matching unordered channel as the primary channel
                cns.PrimaryChannel(newchain-id, chain-id) = matchingChannelIDs[0]{0}
                cns.PrimaryChannel(chain-id, newchain-id) = matchingChannelIDs[0]{1}
                return
}
Some tables in tracelistener don't employ chainName as a uniqueness constraint. Add it so that chains with overlapping fields can insert their data.
We need to implement and properly document the process to add a new chain to Demeris. From a high-level perspective, the steps would include:
denoms should be added to the CNS, and at least one baseTxFee must be registered (#18). All the previous steps will need to be wired together using a parent process.
Change base tx fee struct to
type baseTxFee struct {
feeDenom Denom
low uint
average uint
high uint
}
In the CNS, we now store an array of baseTxFee ([]baseTxFee) instead of a single one.
There are several issues we need to fix in the RPC Watcher.
In the logic below, we query the database. We shouldn't. This logic would prevent users redeeming through secondary channels.
Let's not do queries like this in the rpcwatcher. Instead, to filter IBC transfers we care about, let's just check whether the txHash already exists.
The event we currently use to track rcv packets is the following:
This is not the one we should use, because this event is triggered whether the rcvPacket was successfully processed or not, cf:
The event we should use instead is this one:
And the status of the ticket should be set to:

- Complete if ack == true
- Receive_failed if ack == false
An IBC transaction lifecycle doesn't end at the receive packet, because the rcvPacket may fail (see above), or never be relayed at all. That is why we need to handle timeouts and acknowledgements.
We only care about acknowledgements if the rcvPacket failed, i.e. if the ack of the rcvPacket was false. Our filtering condition for acknowledgements is the existence of a ticket with said sequence number where status == Receive_failed. In this case, we apply the same logic as below, but with acknowledgements instead of rcvPacket:
An acknowledgement for a given rcvPacket still has the same sequence number as the original transfer packet, so we can use the same logic of (newKey, key). If the acknowledgement is successfully processed, we set the ticket status to Tokens_unlocked_ack.
Again, we need the same logic as below, but for timeout packets:
The filtering condition for timeout packets is the same as for receive packets (the existence of a key with the correct sequence number). Note that we need to make sure that ticket expiry is greater than the packet timeout parameter set in the frontend.
If the timeout is successfully processed, we set the ticket status to Tokens_unlocked_timeout.
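The transitions above could be summarized in one place. A sketch, with placeholder event names rather than the literal Tendermint event types:

```go
package main

import "fmt"

// nextStatus maps an observed IBC packet event to the ticket status
// described in this issue. The event name strings are placeholders for
// illustration, not the actual Tendermint event types.
func nextStatus(event string, success bool) string {
	switch event {
	case "recv_packet":
		if success {
			return "Complete"
		}
		return "Receive_failed"
	case "acknowledge_packet":
		// only observed when the rcvPacket previously failed
		return "Tokens_unlocked_ack"
	case "timeout_packet":
		return "Tokens_unlocked_timeout"
	}
	return "Transit"
}

func main() {
	fmt.Println(nextStatus("recv_packet", false))
}
```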
Things to monitor:
One of the functionalities of the management UI is to add info about CNS denoms. At the moment, denoms are pulled from the CNS itself, which is populated at chain launch. Instead of pulling raw denoms from the CNS, we want to pull them from the supply module of each chain (most importantly the Cosmos Hub).
The /chains endpoint should return an array of tuples of:

- chain_name: unique, and used for further querying in /chain/:chain/*
- display_name: what we actually should display to the user in the frontend

Furthermore, display_name should be included in the full chain object as returned by /chain/:chain for consistency.
When the relayer account on a given chain goes below a value defined in the CNS, we need to be alerted.
To dos:
When a liquidity pool is created, an LP token (poolCoin) is created at the same time. We need to verify this token automatically, otherwise the LP token wouldn't show up in the LP asset list, which would be super confusing.
One way to solve this is to modify the rpcwatcher to listen for pool creation. When we catch the event for a pool creation, we do the following:
Display name convention for pool coins:
For example, for the gravity dex pool uatom / transfer/channelToTerra/uluna, the display name of the LP token would be:
Ticker Name convention:
For example, for gravity dex lp token of pool above, we'd have the following ticker G-ATOM-LUNA
For LP assets whose associated pool contains at least one verified LP asset, we do verify it but we use the poolID as display name and ticker.
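A sketch of how the ticker convention could be derived from the pool's base denoms (stripping the micro-denom prefix and uppercasing is an assumption for illustration):

```go
package main

import (
	"fmt"
	"strings"
)

// poolTicker derives a Gravity DEX LP ticker such as G-ATOM-LUNA from the
// pool's base denoms. Stripping the "u" micro-denom prefix and
// uppercasing is an assumed derivation, not a confirmed rule; the real
// implementation would look tickers up in the CNS denom list.
func poolTicker(baseDenoms []string) string {
	parts := []string{"G"}
	for _, d := range baseDenoms {
		parts = append(parts, strings.ToUpper(strings.TrimPrefix(d, "u")))
	}
	return strings.Join(parts, "-")
}

func main() {
	fmt.Println(poolTicker([]string{"uatom", "uluna"})) // G-ATOM-LUNA
}
```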
Right now, after configuring a Go client with rest.InClusterConfig(), using it to e.g. call List() yields an error, probably due to an RBAC misconfiguration:
nodesets.apps.starport.cloud "akash" is forbidden: User "system:serviceaccount:default:default" cannot list resource "nodesets" in API group "apps.starport.cloud" at the cluster scope
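A minimal RBAC sketch that would grant the missing list permission; all names here are illustrative, and the binding should be narrowed to whatever namespace and service account we settle on:

```yaml
# Hypothetical RBAC granting the default service account read access to
# nodesets cluster-wide; role and binding names are illustrative.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nodeset-reader
rules:
  - apiGroups: ["apps.starport.cloud"]
    resources: ["nodesets"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nodeset-reader-binding
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
roleRef:
  kind: ClusterRole
  name: nodeset-reader
  apiGroup: rbac.authorization.k8s.io
```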
Currently we don't really handle cases where things go wrong, namely when tickets won't update. We need to handle these "stuck" cases, as defined here: https://whimsical.com/ticketinglifecycle-WUXezTMhMCeHj4G7YNFywP
One idea would be to transition to a "stuck" state after a certain period (greater than timeout period)
Just like we did with #9.
We will have another endpoint that will list sequence/account numbers for accounts, this brings the account-related endpoints to a total of 3.
I propose we change our API slightly, but still retaining the across-chain querying logic we built from the beginning:
/account/{address} will be the base path for account-related endpoints:

- /balance returns the balance of address
- /staking returns the staking balance of address
- /numbers returns the sequence/account number for address
Already spoke with @gamarin2 and @clockworkgr about this.
@lukitsbrian opinions?
When adding a new chain to the backend, we must go through a certain number of steps: #12
An entry in the CNS will be created for a chain before it is ready to be used in production and queried from the front-end. Therefore, I propose we add an enabled boolean to the chain struct. enabled will be manually set to true only after all the steps in the process have been properly completed.
Endpoints should only return values for enabled chains. For example, the /balances response should only contain balances of enabled chains.
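A sketch of the filtering endpoints would apply (a trimmed-down Chain stand-in, not the real CNS type):

```go
package main

import "fmt"

// Chain is a trimmed-down stand-in for the CNS chain record; the real
// struct carries many more fields.
type Chain struct {
	ChainName string
	Enabled   bool
}

// enabledOnly sketches the filter every chain-scoped endpoint would run
// before building its response.
func enabledOnly(chains []Chain) []Chain {
	var out []Chain
	for _, c := range chains {
		if c.Enabled {
			out = append(out, c)
		}
	}
	return out
}

func main() {
	all := []Chain{{"cosmos-hub", true}, {"newchain", false}}
	fmt.Println(len(enabledOnly(all))) // only the enabled chain remains
}
```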
I propose we change the Chain
type from:
type Chain struct {
ID uint64 `db:"id" json:"-"`
ChainName string `db:"chain_name" binding:"required" json:"chain_name"` // the unique name of the chain
Logo string `db:"logo" binding:"required" json:"logo"` // logo of the chain
DisplayName string `db:"display_name" binding:"required" json:"display_name"` // user-friendly chain name
CounterpartyNames DbStringMap `db:"counterparty_names" binding:"required" json:"counterparty_names"` // a mapping of client_id to chain names used to identify which chain a given client_id corresponds to
PrimaryChannel DbStringMap `db:"primary_channel" binding:"required" json:"primary_channel"` // a mapping of chain name to primary channel
NativeDenoms DenomList `db:"native_denoms" binding:"required,dive" json:"native_denoms"` // a list of denoms native to the chain
FeeTokens DenomList `db:"fee_tokens" binding:"required,dive" json:"fee_tokens"` // a list of denoms accepted as fee on the chain, fee tokens must be verified
FeeAddress string `db:"fee_address" binding:"required" json:"fee_address"` // the address on which we accept fee payments
PriceModifier float64 `db:"price_modifier" binding:"required" json:"price_modifier"` // modifier (between 0 and 1) applied when estimating the price of a token hopping through the chain
BaseIBCFee float64 `db:"base_ibc_fee" binding:"required" json:"base_ibc_fee"` // average cost (in dollar) to submit an IBC transaction to the chain
BaseFee float64 `db:"base_fee" binding:"required" json:"base_fee"` // average cost (in dollar) to submit a transaction to the chain
GenesisHash string `db:"genesis_hash" binding:"required" json:"genesis_hash"` // hash of the chain's genesis file
NodeInfo NodeInfo `db:"node_info" binding:"required,dive" json:"node_info"` // info required to query full-node (e.g. to submit tx)
}
to
type Chain struct {
ID uint64 `db:"id" json:"-"`
ChainName string `db:"chain_name" binding:"required" json:"chain_name"` // the unique name of the chain
Logo string `db:"logo" binding:"required" json:"logo"` // logo of the chain
DisplayName string `db:"display_name" binding:"required" json:"display_name"` // user-friendly chain name
CounterpartyNames DbStringMap `db:"counterparty_names" binding:"required" json:"counterparty_names"` // a mapping of client_id to chain names used to identify which chain a given client_id corresponds to
PrimaryChannel DbStringMap `db:"primary_channel" binding:"required" json:"primary_channel"` // a mapping of chain name to primary channel
Denoms DenomList `db:"native_denoms" binding:"required,dive" json:"native_denoms"` // a list of denoms native to the chain
DemerisAddresses []string `db:"fee_address" binding:"required" json:"fee_address"` // our addresses
BaseTxFee txFee `db:"base_fee" binding:"required" json:"base_fee"` // average cost (in denom of fee token) to submit a transaction to the chain
GenesisHash string `db:"genesis_hash" binding:"required" json:"genesis_hash"` // hash of the chain's genesis file
NodeInfo NodeInfo `db:"node_info" binding:"required,dive" json:"node_info"` // info required to query full-node (e.g. to submit tx)
}
With this, we should also change Denoms
type to:
type Denom struct {
DisplayName string `db:"display_name" binding:"required" json:"display_name"`
Logo string `db:"logo" json:"logo,omitempty"`
Precision int64 `db:"precision" binding:"required" json:"precision,omitempty"`
Name string `db:"name" binding:"required" json:"name,omitempty"`
Verified bool `db:"verified" binding:"required" json:"verified,omitempty"`
FeeToken bool `db:"fee_token" binding:"required" json:"fee_token,omitempty"`
}
And add the following type:
type txFee struct {
low uint
average uint
high uint
}
Summary of the changes:

- Merged FeeTokens and NativeDenoms into Denoms. Fee status is now part of the Denom struct. Denoms will be manually added to the CNS from the CNS administration UI (#11).
- Removed PriceModifier.
- Renamed FeeAddress to DemerisAddresses and changed its type from string to []string, allowing us to have more than one address per chain.
- Merged BaseIBCFee and BaseFee into BaseTxFee.
We need to define and implement a way to define baseTxFee
for a given chain. The algorithm could be triggered periodically on the backend to update the value in the CNS.
Note: Instead of a single baseTxFee
, we'll probably have three values: low, average and high.
type baseTxFee struct {
feeDenom Denom
low uint
average uint
high uint
}
We need the bech32 config for each chain in order to make sure that the address is correctly formatted when doing transfers. The config should be exposed via an API endpoint.
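A sketch of what the exposed config could look like; field names are assumptions, not the final API shape:

```go
package main

import "fmt"

// Bech32Config mirrors the set of prefixes a wallet like Keplr expects
// for a chain. Field and JSON names here are a sketch for illustration.
type Bech32Config struct {
	MainPrefix     string `json:"main_prefix"`      // e.g. "cosmos"
	PrefixAccAddr  string `json:"prefix_account"`   // account addresses
	PrefixValAddr  string `json:"prefix_validator"` // validator operator addresses
	PrefixConsAddr string `json:"prefix_consensus"` // consensus node addresses
}

func main() {
	// sample values for the Cosmos Hub
	cfg := Bech32Config{"cosmos", "cosmos", "cosmosvaloper", "cosmosvalcons"}
	fmt.Println(cfg.PrefixValAddr)
}
```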
In current denom struct, we have
FeeLevels TxFee `db:"fee_levels" json:"fee_levels"`
It should become:
GasPriceLevels GasPrice `db:"gas_price_levels" json:"gas_price_levels"`
The TxFee
struct must be renamed too. The rest can remain unchanged (just the naming needs to change).
In addition to that, we need to specify which denom is to be used for fees in the relayer.
type Chain struct {
...
RelayerDenom string // name of the denom used by relayer to pay fees on this chain
}
Currently CNS (and maybe API server) queries for NodeSet
resources in all namespaces, making it dangerous to run demeris
alongside other projects. CNS should only handle operator resources in its own namespace.
When adding a new chain, we need to figure out how to define the baseTxFee that will be added to the CNS and exposed via the /fee endpoint.
Given that the value might change over time, it might be good to also have a script that we could run periodically (maybe once a week or something) that would run the process to figure out baseTxFee on all the chains we support and update the value.
List of API endpoints to implement for MVP:

- GET /balances: Given a list of addresses, returns balances @lukitsbrian
- GET /verified_path: Given a token denom and a chain, returns the verified path. Also returns whether the token is verified/native. @lukitsbrian
- GET /fee_address: Given a chain, returns our address (the one to send coverFee to) @gsora
- GET /fee: Given a chain, returns the base transaction fee in dollars @gsora
- GET /fee_token: Given a chain, returns the denom of the token accepted as fee @gsora
- GET /staking_balances: Given a list of addresses, returns staking balances @gsora
- GET /prices: Returns the fiat price for all verified tokens @gsora
- GET /chains: Returns the list of chains supported by Demeris, as well as their logo @gsora
- GET /verified_denoms: Returns a list of all the verified base denoms supported by Demeris, as well as their logo @gsora
- GET /liquidity/...: Forwards any query prefixed by /liquidity to the Cosmos Hub's full-node @gsora
- GET /primary_channel: Given a source chain and a destination chain, returns the primary channel @gsora
- GET /chain_status: Given a chain, returns status (live, down, ...) @gsora
- GET /relayer_status: Tells if the relayer instance is up or down
- GET /relayer_account_status: Given a chain, returns whether we have funds to submit rcv packets on this chain
- GET /bech32_config: Given a chain, returns the bech32 config @gsora
- POST /tx: Given a signed tx and a chain, submits the tx to the chain's full-node @lukitsbrian

We need to update the relayer config with new values before mainnets. Example of params:

- max_msg_num from 30 to 12
- clock_drift from 5s to ? (7200s used by Osmosis)
While trying to relay a transaction, the following error happens:
{
"L": "ERROR",
"T": "2021-06-11T14:37:29.700Z",
"C": "deps/deps.go:41",
"M": "relaying tx failed",
"id": "358852318482923535",
"error": "Post \"endpoint\": unsupported protocol scheme \"\""
}
This is due to the fact that in relayTx
we don't specify "http" in the string URL:
// RelayTx relays the tx to the specifc endpoint
// RelayTx will also perform the ticketing mechanism
// Always expect broadcast mode to be `async`
func relayTx(d *deps.Deps, tx TxData, meta TxMeta) (string, error) {
// ...
resp, err := http.Post(meta.Chain.NodeInfo.Endpoint, "application/json", b)
// ...
}
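A possible fix, sketched: default the scheme when the configured endpoint lacks one (the helper name is hypothetical):

```go
package main

import (
	"fmt"
	"strings"
)

// ensureScheme sketches the fix: default to "http://" when the configured
// endpoint has no scheme, so http.Post receives a usable URL instead of
// failing with `unsupported protocol scheme ""`.
func ensureScheme(endpoint string) string {
	if strings.HasPrefix(endpoint, "http://") || strings.HasPrefix(endpoint, "https://") {
		return endpoint
	}
	return "http://" + endpoint
}

func main() {
	fmt.Println(ensureScheme("fullnode:26657")) // http://fullnode:26657
}
```

relayTx would then call http.Post(ensureScheme(meta.Chain.NodeInfo.Endpoint), ...); alternatively, the scheme could be validated once when the NodeInfo entry is written to the CNS.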
Needed for Keplr. For Cosmos chains, the coin type should be 118. For other chains, we can refer to: https://github.com/satoshilabs/slips/blob/master/slip-0044.md
Right now we have chain query-related API calls under dedicated endpoints - e.g. fee_token or fee_address - meaning that to query them one needs to call /endpoint/chainName.

It might be better, API design-wise, to move everything under the /chain/chainName/... path.
This way, if the frontend needs the fee address for gaia
, it would query /chain/gaia/fee_address
.
Opinions?
We need to be able to retrieve all tickets whose lifecycle is not finished for a given address, so that users that log out and come back in can still know the status of their unprocessed transactions.
(cf ticket lifecycle flowchart https://whimsical.com/ticketinglifecycle-WUXezTMhMCeHj4G7YNFywP)
This issue must be addressed after #23.
Implement the /account/{address}/numbers endpoint that returns the sequence/account number for a given address.
A call to this endpoint will return the following response:
{
"numbers": {
"sequence": 4,
"account": 0
}
}