
demeris-backend's Issues

Tracelistener: trace block time

Tracelistener must subscribe to the WebSocket endpoint pointed to by CHAIN_NAME, and write to the database the last block time received on it.

This is needed to unblock #28.
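
For reference, a minimal sketch of such a subscription loop, assuming the Tendermint RPC websocket client; the write callback and address wiring are placeholders, not the actual tracelistener code:

package tracelistener

import (
	"context"
	"log"
	"time"

	rpchttp "github.com/tendermint/tendermint/rpc/client/http"
	tmtypes "github.com/tendermint/tendermint/types"
)

// watchBlockTime subscribes to NewBlock events and hands the block time of
// every new block to write (e.g. a database upsert keyed by chain name).
func watchBlockTime(ctx context.Context, rpcAddr string, write func(time.Time) error) error {
	client, err := rpchttp.New(rpcAddr, "/websocket")
	if err != nil {
		return err
	}
	if err := client.Start(); err != nil {
		return err
	}
	defer client.Stop()

	blocks, err := client.Subscribe(ctx, "tracelistener", "tm.event='NewBlock'")
	if err != nil {
		return err
	}
	for ev := range blocks {
		nb, ok := ev.Data.(tmtypes.EventDataNewBlock)
		if !ok {
			continue
		}
		// persist the last observed block time for this chain
		if err := write(nb.Block.Time); err != nil {
			log.Printf("writing block time: %v", err)
		}
	}
	return nil
}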

API: transaction observability process

We need to implement a way for the frontend to easily query the status of a given transaction after it has been broadcast.

The tx endpoint will return a ticket t (a UUID) along with the response message.

This ticket can then be used by the frontend to query the status of the transaction initiated by the very same call that generated t:

GET /tx/{:ticket}

A Redis cache will be used to store tickets and the associated data, while a component rpcwatcher watches the Tendermint RPCs in search of patterns generated from the data contained in Redis.

On the backend we must differentiate IBC transfers from plain bank sends:

  • To monitor simple bank sends, we associate t with the transaction hash that a full node returns after broadcasting a transaction; rpcwatcher observes the events and marks a transaction as completed when it observes a successful event with the given transaction hash.
  • IBC transfers are a little more complex: the API server marks a transaction hash as "containing IBC transfers" to differentiate it from a simple bank transfer. When we observe a transaction with the given hash, we put the IBC sequence number in the Redis cache, associating it with the ticket t; when we observe a successful IBC transfer event (TBD), we mark it as done on the backend.

By using Redis we don't need to communicate directly with rpcwatcher: it queries data from Redis at every execution tick. We also don't need to handle concurrency edge cases, since Redis is guaranteed to be coherent in multi-{writer,reader} contexts.
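
As a rough illustration of this ticket flow (the Ticket shape, key format, and TTL below are hypothetical, not the actual demeris-backend types):

package tickets

import (
	"context"
	"encoding/json"
	"time"

	"github.com/go-redis/redis/v8"
)

// Ticket is what POST /tx stores and GET /tx/{:ticket} reads back.
type Ticket struct {
	ID     string `json:"id"`      // the UUID t returned with the response
	TxHash string `json:"tx_hash"` // hash returned by the full node
	Status string `json:"status"`  // e.g. "pending", "complete"
	IsIBC  bool   `json:"is_ibc"`  // marks the tx as "containing IBC transfers"
}

// storeTicket writes the ticket where rpcwatcher will pick it up on its
// next execution tick; the TTL bounds how long we track a transaction.
func storeTicket(ctx context.Context, rdb *redis.Client, t Ticket) error {
	b, err := json.Marshal(t)
	if err != nil {
		return err
	}
	return rdb.Set(ctx, "ticket/"+t.ID, b, 24*time.Hour).Err()
}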

CNS management UI

We need a secure interface to manually add things to the CNS. Examples of things we'll need to do manually:

  • Mark tokens as verified
  • Change primary channel if need be

The CNS being a critical component, access rights to the interface must be tightly restricted and properly secured.

Directly query chains instead of the DB for auth numbers

Tracelistener doesn't produce reliable auth numbers, so we need to launch with a more stable version that queries nodes directly. In order not to break the frontend, we need to write an adapter in the API server that serves the data in the same format as before.

API: chain_status endpoint

The chain_status endpoint does the following:

  1. checks if the full-node is up; if not, return an error
  2. checks if the full-node is syncing; if so, return an error
  3. checks last_block_time; if the time elapsed since last_block_time exceeds chain.MaxBlockTime, return "down" status

If all the checks pass, return "live" status.
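
A rough sketch of those three checks, assuming a Tendermint RPC client (the Chain type and its tolerance field here are illustrative, not the CNS model):

package api

import (
	"context"
	"errors"
	"fmt"
	"time"

	rpchttp "github.com/tendermint/tendermint/rpc/client/http"
)

// Chain carries the per-chain tolerance; illustrative only.
type Chain struct {
	RPCAddr      string
	MaxBlockTime time.Duration
}

func chainStatus(ctx context.Context, chain Chain) (string, error) {
	node, err := rpchttp.New(chain.RPCAddr, "/websocket")
	if err != nil {
		return "", err
	}

	// check 1: is the full-node up?
	st, err := node.Status(ctx)
	if err != nil {
		return "", fmt.Errorf("full node unreachable: %w", err)
	}

	// check 2: is the full-node syncing?
	if st.SyncInfo.CatchingUp {
		return "", errors.New("full node is syncing")
	}

	// check 3: has the chain produced a block recently enough?
	if time.Since(st.SyncInfo.LatestBlockTime) > chain.MaxBlockTime {
		return "down", nil
	}
	return "live", nil
}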

On the front-end, we should call chain_status on all the chains when users log in @clockworkgr. If a chain is down, it has a lot of consequences:

  • Assets are marked "unavailable"
  • It won't be possible to transfer to, transfer from, or redeem via a chain that is down
  • If the Hub is down, it will obviously be impossible to swap/add liquidity.

Tracelistener: TraceProcessor bug

Currently the processor passes the data to every individual module processor that owns the current key prefix.

However, key prefixes are unique PER MODULE and not overall.

So e.g. BankProcessor and DenomTraceProcessor both own key prefix 0x02, and thus both attempt to process data that might be meant for the other.

At best, this is a waste of resources and a source of unneeded/weird log entries.

At worst, although very unlikely (depending on the actual data contained), it might lead to corrupted data being written to the DB.

Individual processors should be grouped by module name, and TraceListener should check the store key (module name) as well as the prefix.
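
One possible shape of that grouping (a hedged sketch; the real tracelistener types differ):

package tracelistener

// TraceOperation mirrors a store trace line, which carries the store
// (module) name alongside the key; illustrative, not the actual type.
type TraceOperation struct {
	StoreName string
	Key       []byte
	Value     []byte
}

type moduleProcessor interface {
	OwnsKey(key []byte) bool
	Process(op TraceOperation) error
}

// processors are grouped by store key (module name), so that e.g.
// BankProcessor and DenomTraceProcessor no longer see each other's
// 0x02-prefixed writes.
var processors = map[string][]moduleProcessor{}

func route(op TraceOperation) error {
	for _, p := range processors[op.StoreName] { // match module first...
		if p.OwnsKey(op.Key) { // ...then the key prefix
			if err := p.Process(op); err != nil {
				return err
			}
		}
	}
	return nil
}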

Run a specific GAIA full-node to serve liquidity GET queries

In the very short term, one gaia full-node can serve all Cosmos Hub queries, but given that we'll make a lot of /liquidity queries, it might be good to have a second gaia full-node specifically for handling these, in order not to stress the RPC too much.

Remove counterPartyName from CNS

To dos:

  • Remove counterpartyName from CNS @gsora
  • Update verify-path endpoint to query the counterparty chain-id in order to figure out the counterparty chain, then make sure clientID information matches on both sides @lukitsbrian

Add err to tickets

When a ticket is updated based on a tx event (e.g. transit -> complete), we want to add the err to the status info, so that we know whether the tx passed DeliverTx or not (err will be nil if everything is ok).
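
For illustration, the ticket status payload could look something like this (hypothetical shape, not the actual type):

type TicketStatus struct {
	Status string `json:"status"`          // e.g. "transit", "complete"
	Error  string `json:"error,omitempty"` // empty if DeliverTx succeeded
}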

Harden tracelistener module writes

Right now, the tracelistener unfortunately doesn't contain context about the store key of the module. This is an issue, as store writes between different modules can use the same prefix (cf #60).

To properly fix this issue, we would need to either add another prefix that pertains to the module itself to the key in Cosmos SDK (no time), or we would need to transition away from using tracelistener entirely (no time).

As a stop-gap, we need to "harden" writes for each module we track so that there can be no conflicts. For example, the tracelistener shouldn't write denom_traces from the IBC module to the DB if the path does not follow a certain format (e.g. begin with /transfer).
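
A minimal sketch of that kind of guard (an illustrative helper, not the actual processor; ICS-20 denom trace paths look like "transfer/channel-0", so adjust the prefix to whatever format tracelistener actually records):

package tracelistener

import "strings"

// shouldWriteDenomTrace rejects denom trace writes whose path does not
// follow the expected ICS-20 transfer format.
func shouldWriteDenomTrace(path string) bool {
	return strings.HasPrefix(path, "transfer/")
}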

Make gas parameter more accurate

Right now, the frontend uses a global hardcoded gas parameter. This is good enough for MVP, but it would be good to have another gas endpoint so that the frontend can more accurately estimate gas, and therefore tx fee, for users.

One idea would be to have a /simulate endpoint that would accept a tx type and return a gas estimation.
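
As a sketch of what that could look like, assuming the endpoint forwards to an SDK v0.43-style node's standard Simulate route (the function name, lcdAddr wiring, and 1.3x padding below are all assumptions):

package api

import (
	"bytes"
	"encoding/json"
	"net/http"
	"strconv"
)

// simulateGas forwards base64-encoded tx bytes to the node's Simulate
// route and returns a padded gas estimate.
func simulateGas(lcdAddr, txBytesB64 string) (uint64, error) {
	body, err := json.Marshal(map[string]string{"tx_bytes": txBytesB64})
	if err != nil {
		return 0, err
	}
	resp, err := http.Post(lcdAddr+"/cosmos/tx/v1beta1/simulate", "application/json", bytes.NewReader(body))
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()

	var out struct {
		GasInfo struct {
			GasUsed string `json:"gas_used"`
		} `json:"gas_info"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return 0, err
	}
	gas, err := strconv.ParseUint(out.GasInfo.GasUsed, 10, 64)
	if err != nil {
		return 0, err
	}
	// pad the estimate so the tx doesn't run out of gas at execution time
	return gas * 13 / 10, nil
}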

Tracelistener: add SDK-version service

Cosmos SDK 0.43 is close to being released and, with it, a new version of Gaia.

We should define which versions of the SDK we support, to further focus tracelistener development efforts in that regard.

Right now we fully support 0.42 core modules, but since Gaia (and some partner chains like Akash) move as fast as it, I'm not sure we should keep this version around.

Is there a complete list of the chains we support for MVP?

CNS: implement block time tolerance threshold

For each chain, we need to define an amount of time, in seconds, that we tolerate since the last block received.

This amount will then be used in the /status endpoint:

tNow := time.Now()

// time elapsed since the last block we received
passed := tNow.Sub(database.GetBlockTime(chainName))

// per-chain tolerance, stored in seconds
threshold := database.GetThresh(chainName)
thresh := time.Duration(threshold) * time.Second

if passed > thresh {
    // return status = false
}

Define account management solution for relayer accounts

We need to have at least two Demeris accounts per chain we support:

  • One for the relayer, where the private key will be stored in the relayer instance
  • One for the temporary relayer

As part of this, we will need to define how to properly store and secure seed phrases.

Open connections and channels when adding a new chain

When we add a new chain to Demeris backend by adding an entry to the CNS, we need to make sure that it has a connection/port/channel with all the chains we currently support.

In other words, we need a process that does the following:

// Process that needs to be followed when trying to enable a new chain NewChain with id newchain-id

// get list of clientIDs on NewChain
clientIDsNewChain := db.getClientIDs(newchain-id)

for each chain-id in CNS {

    // matchingClientIDs contains a list of matching client IDs between NewChain and chain
    matchingClientIDs := []string

    // this loop attempts to find the list of matching clients between chain and NewChain
    for each clientID in clientIDsNewChain
        clientState := db.getClientState(newchain-id, clientID)
        if clientState.chainID == chain-id
            // there is a client on NewChain that matches the chain-id of a chain in the CNS
            // we now need to verify that there is a matching client on the other side

            counterpartyClientID = clientState.counterparty.ClientID
            (found, counterpartyClientState) := db.getClientState(chain-id, counterpartyClientID) // find clientState where ChainID == chain-id AND ClientID == counterpartyClientID
            if (found) && (counterpartyClientState.chainID == newchain-id) && (counterpartyClientState.counterparty.clientID == clientID)
                matchingClientIDs.add(clientID)

    if matchingClientIDs.isEmpty
        // no matching client IDs, we need to create client, connection and channel

        (success, channelToChain, channelToNewChain) = hermes.createChannel(chain-id, newchain-id)
        if success
            cns.PrimaryChannel(chain-id, newchain-id) = channelToNewChain
            cns.PrimaryChannel(newchain-id, chain-id) = channelToChain

        return

    else
        // there is at least one matching client

        // matchingConnectionIDs contains a list of matching connection IDs between NewChain and chain
        matchingConnectionIDs := []string

        for each clientID in matchingClientIDs

            // get list of connectionIDs on NewChain associated with clientID
            connectionIDsNewChain := db.getConnectionIDsFromClientID(newchain-id, clientID)

            // this loop attempts to find the list of matching connections between chain and NewChain
            for each connectionID in connectionIDsNewChain
                counterpartyConnectionID = db.getCounterpartyConnectionID(newchain-id, connectionID)
                if db.getCounterpartyConnectionID(chain-id, counterpartyConnectionID) == connectionID
                    // there is a matching connection on both sides
                    matchingConnectionIDs.add(connectionID)

        if matchingConnectionIDs.isEmpty
            // there is at least a matching client on each side but no connection

            // we open a new client, connection and channel to be safe
            (success, channelToChain, channelToNewChain) = hermes.createChannel(chain-id, newchain-id)
            if success
                cns.PrimaryChannel(chain-id, newchain-id) = channelToNewChain
                cns.PrimaryChannel(newchain-id, chain-id) = channelToChain

            return

        else
            // at least one of the matching clients has a matching connection

            // matchingChannelIDs contains a list of matching channel ID pairs (of unordered channels) between NewChain and chain
            matchingChannelIDs := []{string, string}

            for each connectionID in matchingConnectionIDs

                // get list of channelIDs on NewChain associated with connectionID
                channelIDsNewChain := db.getChannelIDsFromConnectionID(newchain-id, connectionID)

                // this loop attempts to find the list of matching channels between chain and NewChain
                for each channelID in channelIDsNewChain
                    if channelID.isUNORDERED
                        counterpartyChannelID = db.getCounterpartyChannelID(newchain-id, channelID)
                        if db.getCounterpartyChannelID(chain-id, counterpartyChannelID) == channelID
                            // there is a matching unordered channel on both sides
                            matchingChannelIDs.add(channelID, counterpartyChannelID)

            if matchingChannelIDs.isEmpty
                // there is at least a matching connection on each side but no matching unordered channel

                // we open a new client, connection and channel to be safe
                (success, channelToChain, channelToNewChain) = hermes.createChannel(chain-id, newchain-id)
                if success
                    cns.PrimaryChannel(chain-id, newchain-id) = channelToNewChain
                    cns.PrimaryChannel(newchain-id, chain-id) = channelToChain

                return

            else
                // at least one of the matching connections has a matching unordered channel

                // pick the first matching unordered channel pair as the primary channel
                cns.PrimaryChannel(newchain-id, chain-id) = matchingChannelIDs[0].first
                cns.PrimaryChannel(chain-id, newchain-id) = matchingChannelIDs[0].second

                return
}

[Tracking] Process for adding a new chain

We need to implement and properly document the process to add a new chain to Demeris. From a high-level perspective, the steps would include:

  • 1. Add new entry to the CNS (#11)
  • 2. Start full-node using the operator (#17).
  • 3. Wait for full-node to be synced (check with chain_status endpoint).
  • 4. If one does not already exist, open an IBC connection between the new chain and all the supported chains. (#2)
  • 5. Manually finalize CNS entry by adding what's missing (#11). Importantly, denoms should be added to the CNS, and at least one baseTxFee must be registered (#18).
  • 6. Add the new chain to the relayer instance and fund the relayer account with fee tokens (#19)
  • 7. Fund the account to cover users' tx fees (#19)
  • 8. Make sure everything is running properly, including monitoring of the various components (full-node, relayer, accounts...) (#20)

All the previous steps will need to be wired together using a parent process.

CNS: Change baseTxFee struct

Change base tx fee struct to

type baseTxFee struct {
	feeDenom Denom
	low      uint
	average  uint
	high     uint
}

In the CNS, we now store an array of baseTxFee ([]baseTxFee) instead of a single one.

rpcwatcher update (events, timeouts, ack, ...)

There are several issues we need to fix in the RPC Watcher.

Issue 1: Unnecessary logic in processing IBC transfers

In the logic below, we query the database. We shouldn't. This logic would prevent users from redeeming through secondary channels.

https://github.com/allinbits/demeris-backend/blob/f601bc22ee0e2e08fd33812528cc04d8ceae0c73/rpcwatcher/watcher.go#L176-L195

Let's not do queries like this in the rpcwatcher. Instead, to filter IBC transfers we care about, let's just check whether the txHash already exists.

Issue 2: IBC receive event is not the correct one, and status is not correct

The event we currently use to track rcv packets is the following:

https://github.com/allinbits/demeris-backend/blob/f601bc22ee0e2e08fd33812528cc04d8ceae0c73/rpcwatcher/watcher.go#L126

This is not the one we should use, because this event is triggered whether the rcvPacket was successfully processed or not, cf:

https://github.com/cosmos/ibc-go/blob/007c6804bd30174bd24938466352ac3812f37f05/modules/core/04-channel/keeper/packet.go#L310

The event we should use instead is this one:

https://github.com/cosmos/ibc-go/blob/2548ab5f52d3dff51bb2e7075b4bb0d3b79949eb/modules/apps/transfer/module.go#L351

And the status of the ticket should be set to:

  • Complete if ack == true
  • Receive_failed if ack == false

Issue 3: Acknowledgments and Timeouts are not handled

An IBC transaction lifecycle doesn't end at the receive packet, because the rcvPacket may fail (see above), or never be relayed at all. That is why we need to handle timeouts and acknowledgements.

Acknowledgments

We only care about acknowledgements if the rcvPacket failed, i.e. if the ack of the rcvPacket was false. Our filtering condition for acknowledgements is therefore the existence of a ticket with said sequence number where status == Receive_failed. In this case, we do the same logic as below, but with acknowledgements instead of rcvPacket:

https://github.com/allinbits/demeris-backend/blob/f601bc22ee0e2e08fd33812528cc04d8ceae0c73/rpcwatcher/watcher.go#L201-L236

An acknowledgement for a given rcvPacket still has the same sequence number as the original transfer packet, so we can use the same logic of (newKey, key). If the acknowledgement is successfully processed, we set the ticket status to Tokens_unlocked_ack.

Timeouts

Again, we need the same logic as below, but for timeout packets:

https://github.com/allinbits/demeris-backend/blob/f601bc22ee0e2e08fd33812528cc04d8ceae0c73/rpcwatcher/watcher.go#L201-L236

The filtering condition for timeout packets is the same as for receive packets (existence of a key with the correct sequence number). Note that we need to make sure that ticket expiry is greater than the packet timeout parameter set in the frontend.
If the timeout is successfully processed, we set the ticket status to Tokens_unlocked_timeout.
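
A hedged sketch of the acknowledgement branch (ticket storage is elided and all names are illustrative; the timeout branch differs only in its filter, which is the mere existence of a ticket for that sequence, and in its final status Tokens_unlocked_timeout):

package rpcwatcher

// Ticket is a stand-in for the data we keep per (chain, sequence) key.
type Ticket struct {
	Status string
}

// handleAck runs when an acknowledgement event for (chain, sequence) is
// observed on the source chain.
func handleAck(tickets map[string]*Ticket, chain, sequence string) {
	t, ok := tickets[chain+"/"+sequence]
	if !ok || t.Status != "Receive_failed" {
		// filtering condition: we only track acks whose rcvPacket failed
		return
	}
	// the ack was processed on the source chain, so tokens are unlocked
	t.Status = "Tokens_unlocked_ack"
}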

Pull denoms from supply module in management UI

One of the functionalities of the management UI is to add info about CNS denoms. At the moment, denoms are pulled from the CNS itself, which is populated at chain launch. Instead of pulling raw denoms from the CNS, we want to pull them from the supply module of each chain (most importantly the Cosmos Hub).
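
A rough sketch of pulling those denoms over a chain's LCD, using the standard Cosmos SDK bank supply route (the lcdAddr wiring and function name are assumptions):

package ui

import (
	"encoding/json"
	"net/http"
)

// fetchSupplyDenoms lists every denom present in the chain's supply.
func fetchSupplyDenoms(lcdAddr string) ([]string, error) {
	resp, err := http.Get(lcdAddr + "/cosmos/bank/v1beta1/supply")
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var out struct {
		Supply []struct {
			Denom string `json:"denom"`
		} `json:"supply"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return nil, err
	}
	denoms := make([]string, 0, len(out.Supply))
	for _, c := range out.Supply {
		denoms = append(denoms, c.Denom)
	}
	return denoms, nil
}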

Display name in /chains endpoint

The /chains endpoint should return an array of tuples of:

chain_name : (unique & used for further querying in /chain/:chain/* )

and

display_name: (what we actually should display to the user in the frontend)

Furthermore, display_name should be included in the full chain object as returned by /chain/:chain, for consistency.
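
For illustration, the response could be shaped like this (hypothetical Go types):

type ChainsResponse struct {
	Chains []SupportedChain `json:"chains"`
}

type SupportedChain struct {
	ChainName   string `json:"chain_name"`   // unique, used for further querying in /chain/:chain/*
	DisplayName string `json:"display_name"` // what we actually display to the user
}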

Watch pool creation and automatically verify poolCoin denoms

When a liquidity pool is created, an LP token (poolCoin) is created at the same time. We need to verify this token automatically, otherwise the LP token wouldn't show up in the LP asset list, which would be super confusing.

One way to solve this is to modify the rpcwatcher to listen for pool creation. When we catch the event for a pool creation, we do the following:

  • Check if the reserve coins of the newly created pool are verified (verify-path, then check if base denom is verified in CNS)
  • If ALL reserve coins in the pool are verified, then automatically add the poolDenom to the CNS and mark it as verified.

Display name convention for pool coins:

  • [AMM] [Displayname(ReserveCoin1)] / [Displayname(ReserveCoin2)] LP

For example, for the gravity dex pool uatom / transfer/channelToTerra/uluna, the display name of the LP token would be:

  • GDEX ATOM / LUNA LP

Ticker Name convention:

  • [Firstletter(AMM)]-[Ticker(ReserveCoin1)]-[Ticker(ReserveCoin2)]

For example, for the gravity dex LP token of the pool above, we'd have the following ticker: G-ATOM-LUNA

For LP assets whose associated pool contains at least one verified LP asset as a reserve coin, we still verify it, but we use the poolID as display name and ticker.

K8S: let api-server interact with starport-operator

Right now, after configuring a Go client with rest.InClusterConfig(), using it to e.g. call List() yields an error, probably due to an RBAC misconfiguration:

nodesets.apps.starport.cloud "akash" is forbidden: User "system:serviceaccount:default:default" cannot list resource "nodesets" in API group "apps.starport.cloud" at the cluster scope

API: move account-related endpoints under `/account`

Just like we did with #9.

We will have another endpoint that will list sequence/account numbers for accounts, this brings the account-related endpoints to a total of 3.

I propose we change our API slightly, while still retaining the across-chain querying logic we built from the beginning:

  • /account/{address} will be the base path for account-related endpoints
  • /balance returns the balance of address
  • /staking returns the staking balance of address
  • /numbers returns sequence/account number for address

Already spoke with @gamarin2 and @clockworkgr about this.

@lukitsbrian opinions?

CNS: add "enabled" boolean to chain struct

When adding a new chain to the backend, we must go through a certain number of steps: #12

An entry in the CNS will be created for a chain before it is ready to be used in production and queried from the front-end. Therefore, I propose we add an enabled boolean to the chain struct. enabled will be manually set to true only after all the steps in the process have been properly completed.

Endpoints should only return values for enabled chains. For example, the /balances response should only contain balances of enabled chains.
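
For example, the new field could look like this (sketch; tag names are assumptions):

Enabled bool `db:"enabled" json:"enabled"` // set to true manually once onboarding (#12) is complete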

cc @gsora @clockworkgr @lukitsbrian

CNS: Changing denoms model and other small things

I propose we change the Chain type from:

type Chain struct {
	ID                uint64      `db:"id" json:"-"`
	ChainName         string      `db:"chain_name" binding:"required" json:"chain_name"`                 // the unique name of the chain
	Logo              string      `db:"logo" binding:"required" json:"logo"`                             // logo of the chain
	DisplayName       string      `db:"display_name" binding:"required" json:"display_name"`             // user-friendly chain name
	CounterpartyNames DbStringMap `db:"counterparty_names" binding:"required" json:"counterparty_names"` // a mapping of client_id to chain names used to identify which chain a given client_id corresponds to
	PrimaryChannel    DbStringMap `db:"primary_channel" binding:"required" json:"primary_channel"`       // a mapping of chain name to primary channel
	NativeDenoms      DenomList   `db:"native_denoms" binding:"required,dive" json:"native_denoms"`      // a list of denoms native to the chain
	FeeTokens         DenomList   `db:"fee_tokens" binding:"required,dive" json:"fee_tokens"`            // a list of denoms accepted as fee on the chain, fee tokens must be verified
	FeeAddress        string      `db:"fee_address" binding:"required" json:"fee_address"`               // the address on which we accept fee payments
	PriceModifier     float64     `db:"price_modifier" binding:"required" json:"price_modifier"`         // modifier (between 0 and 1) applied when estimating the price of a token hopping through the chain
	BaseIBCFee        float64     `db:"base_ibc_fee" binding:"required" json:"base_ibc_fee"`             // average cost (in dollar) to submit an IBC transaction to the chain
	BaseFee           float64     `db:"base_fee" binding:"required" json:"base_fee"`                     // average cost (in dollar) to submit a transaction to the chain
	GenesisHash       string      `db:"genesis_hash" binding:"required" json:"genesis_hash"`             // hash of the chain's genesis file
	NodeInfo          NodeInfo    `db:"node_info" binding:"required,dive" json:"node_info"`              // info required to query full-node (e.g. to submit tx)
}

to

type Chain struct {
	ID                uint64      `db:"id" json:"-"`
	ChainName         string      `db:"chain_name" binding:"required" json:"chain_name"`                 // the unique name of the chain
	Logo              string      `db:"logo" binding:"required" json:"logo"`                             // logo of the chain
	DisplayName       string      `db:"display_name" binding:"required" json:"display_name"`             // user-friendly chain name
	CounterpartyNames DbStringMap `db:"counterparty_names" binding:"required" json:"counterparty_names"` // a mapping of client_id to chain names used to identify which chain a given client_id corresponds to
	PrimaryChannel    DbStringMap `db:"primary_channel" binding:"required" json:"primary_channel"`       // a mapping of chain name to primary channel
	Denoms            DenomList   `db:"native_denoms" binding:"required,dive" json:"native_denoms"`      // a list of denoms native to the chain
	DemerisAddresses  []string    `db:"fee_address" binding:"required" json:"fee_address"`               // our addresses
	BaseTxFee         txFee       `db:"base_fee" binding:"required" json:"base_fee"`                     // average cost (in denom of fee token) to submit a transaction to the chain
	GenesisHash       string      `db:"genesis_hash" binding:"required" json:"genesis_hash"`             // hash of the chain's genesis file
	NodeInfo          NodeInfo    `db:"node_info" binding:"required,dive" json:"node_info"`              // info required to query full-node (e.g. to submit tx)
}

With this, we should also change the Denom type to:

type Denom struct {
	DisplayName string `db:"display_name" binding:"required" json:"display_name"`
	Logo        string `db:"logo" json:"logo,omitempty"`
	Precision   int64  `db:"precision" binding:"required" json:"precision,omitempty"`
	Name        string `db:"name" binding:"required" json:"name,omitempty"`
	Verified    bool   `db:"verified" binding:"required" json:"verified,omitempty"`
	FeeToken    bool   `db:"fee_token" binding:"required" json:"fee_token,omitempty"`
}

And add the following type:

type txFee struct {
	low     uint
	average uint
	high    uint
}

Summary of the changes:

  • We merge FeeTokens and NativeDenoms into Denom. Fee status is now part of the Denom struct. Denoms will be manually added to the CNS from the CNS administration UI (#11).
  • We remove PriceModifier
  • We change the naming of FeeAddress to DemerisAddresses and its type from string to []string, allowing us to have more than one address per chain.
  • We merge BaseIBCFee and BaseFee into BaseTxFee

cc @gsora @clockworkgr

Process to define `baseTxFee`

We need to define and implement a process to determine baseTxFee for a given chain. The algorithm could be triggered periodically on the backend to update the value in the CNS.

Note: Instead of a single baseTxFee, we'll probably have three values: low, average and high.

type baseTxFee struct {
	feeDenom Denom
	low      uint
	average  uint
	high     uint
}

CNS: add bech32 config to CNS

We need the bech32 config for each chain in order to make sure that the address is correctly formatted when doing transfers. The config should be exposed via an API endpoint.
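
For illustration, the stored config could mirror the six standard Cosmos SDK bech32 prefixes (field and tag names below are assumptions):

type Bech32Config struct {
	PrefixAccAddr  string `db:"prefix_acc_addr" json:"prefix_acc_addr"`   // e.g. "cosmos"
	PrefixAccPub   string `db:"prefix_acc_pub" json:"prefix_acc_pub"`     // e.g. "cosmospub"
	PrefixValAddr  string `db:"prefix_val_addr" json:"prefix_val_addr"`   // e.g. "cosmosvaloper"
	PrefixValPub   string `db:"prefix_val_pub" json:"prefix_val_pub"`     // e.g. "cosmosvaloperpub"
	PrefixConsAddr string `db:"prefix_cons_addr" json:"prefix_cons_addr"` // e.g. "cosmosvalcons"
	PrefixConsPub  string `db:"prefix_cons_pub" json:"prefix_cons_pub"`   // e.g. "cosmosvalconspub"
}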

Rename "TxFee" to "GasPrice" and add Relayer Denom to Chains struct

In the current Denom struct, we have

FeeLevels   TxFee  `db:"fee_levels" json:"fee_levels"`

It should become:

GasPriceLevels   GasPrice `db:"gas_price_levels" json:"gas_price_levels"`

The TxFee struct must be renamed too. The rest can remain unchanged (just the naming needs to change).

In addition to that, we need to specify which denom is to be used for fees in the relayer.

type Chains struct {
    ...
    RelayerDenom string // name of the denom used by relayer to pay fees on this chain
}

Only handle operator resources in a single namespace

Currently CNS (and maybe API server) queries for NodeSet resources in all namespaces, making it dangerous to run demeris alongside other projects. CNS should only handle operator resources in its own namespace.

Process to figure out baseTxFee

When adding a new chain, we need to figure out how to define the baseTxFee that will be added to the CNS and exposed via the /fee endpoint.

Given that the value might change over time, it might be good to also have a script that we could run periodically (maybe once a week or something) that would run the process to figure out baseTxFee on all the chains we support and update the value.

API Endpoints for MVP

List of API endpoints to implement for MVP:

  • GET /balances : Given a list of addresses, returns balances @lukitsbrian
  • GET /verified_path : Given a token denom and a chain, returns verified path. Also returns whether token is verified/native. @lukitsbrian
  • GET /fee_address : Given a chain, returns our address (the one to send coverFee to) @gsora
  • GET /fee : Given a chain, returns the base transaction fee in dollars @gsora
  • GET /fee_token : Given a chain, returns the denom of the token accepted as fee @gsora
  • GET /staking_balances : Given a list of addresses, returns staking balances @gsora
  • GET /prices : Returns fiat price for all verified tokens @gsora
  • GET /chains : Returns the list of chains supported by Demeris, as well as their logo @gsora
  • GET /verified_denoms : Returns a list of all the verified base denoms supported by Demeris, as well as their logo @gsora
  • GET /liquidity/... : Forward any query prefixed by /liquidity to the Cosmos Hub's full-node @gsora
  • GET /primary_channel : Given a source chain and a destination chain, returns the primary channel @gsora
  • GET /chain_status : Given a chain, returns status (live, down, ...) @gsora
  • GET /relayer_status: Tells if relayer instance is up or down
  • GET /relayer_account_status: Given a chain, returns whether we have funds to submit rcv packets on this chain
  • GET /bech32_config: Given a chain, returns the bech32 config @gsora
  • POST /tx : Given a signed tx and a chain, submit the tx to the chain's full-node @lukitsbrian

API: tx relaying doesn't prepend "http" to endpoint name

While trying to relay a transaction, the following error happens:

{
  "L": "ERROR",
  "T": "2021-06-11T14:37:29.700Z",
  "C": "deps/deps.go:41",
  "M": "relaying tx failed",
  "id": "358852318482923535",
  "error": "Post \"endpoint\": unsupported protocol scheme \"\""
}

This is due to the fact that in relayTx we don't specify "http" in the string URL:

// RelayTx relays the tx to the specific endpoint
// RelayTx will also perform the ticketing mechanism
// Always expect broadcast mode to be `async`
func relayTx(d *deps.Deps, tx TxData, meta TxMeta) (string, error) {

        // ...
	resp, err := http.Post(meta.Chain.NodeInfo.Endpoint, "application/json", b)
        // ...
}
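
A minimal sketch of a possible fix, assuming NodeInfo.Endpoint holds a bare host:port with no scheme (and assuming the strings package is imported; the default-to-http choice is an assumption for in-cluster nodes):

endpoint := meta.Chain.NodeInfo.Endpoint
if !strings.HasPrefix(endpoint, "http://") && !strings.HasPrefix(endpoint, "https://") {
	endpoint = "http://" + endpoint // default to plain http
}
resp, err := http.Post(endpoint, "application/json", b)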

API: move chain-related queries under the /chain endpoint

Right now we have chain query-related API calls under dedicated endpoints - e.g. fee_token or fee_address - in a way that, to query them, one needs to call /endpoint/chainName.

It might be better, API design-wise, to move everything under the /chain/chainName/... path.

This way, if the frontend needs the fee address for gaia, it would query /chain/gaia/fee_address.

Opinions?

@gamarin2 @lukitsbrian @clockworkgr
