
ledger-subquery's Introduction

Ledger SubQuery

This is the Fetch ledger SubQuery project, an indexer for the Fetch network.

Developing

Getting Started

1. Ensure submodules are updated

git submodule update --init --recursive

2. Install dependencies

yarn

# install submodule dependencies
(cd ./subql && yarn)

3. Generate types

yarn codegen

4. Build

yarn build

# build submodule
(cd ./subql && yarn build)

5. Run locally

yarn start:docker

End-to-end Testing

1. Install dependencies

pipenv install

2. Run e2e tests

Note: end-to-end tests will truncate tables in the DB and interact with the configured fetchd node.

All tests

pipenv run python -m unittest discover -s ./tests/e2e

Specific test module

pipenv run python -m unittest tests.e2e.entities.<module>

(see: pipenv run python -m unittest --help for more)

Tracing

The SubQuery node and GraphQL API services have been instrumented using OpenTelemetry. This enables end-to-end tracing support for both services, covering everything from querying the Fetchd RPC and inserting into the DB, to reading from the DB in order to respond to an end-user GQL query.

To run the tracing stack locally, start the docker composition using the tracing-compose.yml file:

docker compose -f ./tracing-compose.yml up -d

The tracing composition is notably more substantial than the conventional stack and will take a bit longer to start up. Once running, you can point your browser to localhost:3301 to access the SigNoz dashboard.

(see: SigNoz documentation for more)

DB Migrations

This repository uses the graphile-migrate CLI to manage database migrations.

Install dependencies

Install global npm dependencies:

npm install -g graphile-migrate plv8ify

Running Migrations

Given that you already have a database with some, potentially outdated, schema, you can catch up to the latest state by running all committed but unapplied migrations:

graphile-migrate migrate

(see: graphile-migrate README for more)

Creating Migrations

New table schemas

When introducing a schema change which adds an entity / table, it is currently most convenient to allow the subquery node to initially generate any new tables (including indexes and constraints). This schema can then be dumped for use in the accompanying migration.

The current schema SQL file can be generated from an existing DB schema (optionally, including data) via pg_dump. Package scripts are available for dumping the schema only, or for dumping the schema plus any data present:

yarn db:dump:schema
# OR
yarn db:dump:all

Additional arguments may be forwarded to the underlying pg_dump command by appending -- <additional args / flags> when running package scripts (see: npm-run-script docs). For example:

# Dumps schema only from blocks and messages tables
yarn db:dump:schema -- -t blocks -t messages

When pg_dump is insufficient

In some cases, pg_dump may not export a relevant DB object; for example, enums. In these cases it is necessary to manually extract the relevant data from the DB and incorporate it into the initial_schema.sql file.

In the case of enums, the following queries expose the relevant records; the results can then be re-written as COPY or INSERT statements into the respective tables from which they came:

-- List enum types
SELECT oid, typname FROM pg_type WHERE typcategory = 'E';

-- List values for a particular enum
SELECT * FROM pg_enum WHERE enumtypid = <pg_type.oid>;

TypeScript migration

It is not necessary to add any TypeScript source as part of a migration (e.g. 000002). In the event that a migration is too complex to be easily reasoned about in SQL, it may be more straightforward to use the plv8ify workflow:

  1. Write a migration in ./migrations/current.ts which exports a single function that kicks off the migration (see: plv8 docs > built-ins; a sketch of such a function follows this list).

  2. Transform the current typescript migration function to a .sql function:

    yarn plv8ify
    

    This writes the generated SQL to ./migrations/current.sql.

  3. Since we're using a non-default schema name, it's prudent to prepend the following preamble to current.sql:

    CREATE SCHEMA IF NOT EXISTS app;
    SET SCHEMA 'app';
    
  4. Lastly, in order for the migration function to execute when the migration is applied, append the following to current.sql (substituting labels identified by surrounding brackets with their respective values):

    SELECT * from <migration function>([arg, ...]);
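
For reference, a minimal, hypothetical sketch of what ./migrations/current.ts could contain (the table, column, and function names are placeholders, and the plv8 global is assumed to be provided by the plv8 runtime once plv8ify has generated the SQL function):

// Placeholder sketch only; not a real migration from this repository.
declare const plv8: {
  execute: (query: string, args?: unknown[]) => unknown;
  elog: (...args: unknown[]) => void;
};

export function migrateExample(): void {
  // Placeholder DDL: add a column if it doesn't already exist.
  plv8.execute("ALTER TABLE app.example_table ADD COLUMN IF NOT EXISTS note text");

  // Placeholder backfill for existing rows.
  plv8.execute("UPDATE app.example_table SET note = '' WHERE note IS NULL");
}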
    

Committing Migrations

Once you're ready to test / apply the migration:

graphile-migrate commit

If it was successful, graphile-migrate will have moved the contents of current.sql (and reset it) into a file named by the next number in the migration count sequence.

(see: graphile-migrate README for more)

Cleanup

If the plv8ify workflow was used (until it is automated), it is conventional to manually move the contents of current.ts into a file under migrations/src named after its generated SQL file. By convention, import paths are left unchanged after this move.

Debugging

When things aren't going perfectly, plv8.elog is extremely useful for getting more detail out of the running migration. Be sure to look at the postgres service's logs (as opposed to the error message returned by graphile-migrate):

docker compose logs -f postgres


ledger-subquery's Issues

DB schema migration support

Background

Currently, we have to re-index the entire network any time we want to deploy a schema change. This is undesirable because it increases the complexity of our infrastructure and significantly increases the lower bound on how quickly the indexer can sync up with a given network after a schema change.

Proposal

Consider using graphile-migrate, which is part of the Graphile suite and is intended to work alongside Postgraphile (which we're also using).

Acceptance criteria

  1. Add graphile-migrate and its dependencies
  2. Add a section to the README explaining, or linking to, how to use it

Test coverage over example use cases:

  • adding an entity
  • changing a field's data type
  • changing a @jsonField type (add/remove/etc. its properties)
  • removing an entity (too many questions, needs more discussion)

For each case, this means:

  1. Alter the graphql schema
  2. Generate the DB schema (a. let SubQuery do its thing; b. pg_dump or similar to dump the schema)
  3. Assert against the DB schema diff

It's sufficient to determine that the tool works as expected.

Release v0.1

Features

  • Initial SubQuery attributes fetchai/ecosystem-tasks#43
  • Native Transfers - #11
  • Account Balance Tracking - #17 - moved to #9
  • Contract Executions - record user interactions with contract #12
  • Initial infrastructure code changes - fetchai/ecosystem-tasks#110 (pre-requisites and sandbox sections)

Process genesis for initial state

Acceptance Criteria

  1. The repo contains a script which downloads the network Genesis from a given URL.
  2. The schema includes a NativeBalanceGenesis entity.
  3. The repo contains a script which processes the downloaded Genesis according to the needs of various features (i.e. entities which depend on it).
  4. These scripts are executed automatically at some point in the startup of the subquery node.

Proposal

  1. An environment variable seems like the most straightforward place to store the genesis URL.
  2. Example:
type NativeBalanceGenesis {
  id: ID!
  account: String!
  amount: BigInt!
  denom: String!
}
  3. It probably makes sense for the initial version of the script to interact directly with the database. We can use the same tooling as our end-to-end tests (python + psycopg) to avoid introducing additional dependencies and to keep tooling consistent. Perhaps we'll want/need to upgrade to something more robust and/or javascript-native in the future. I would also propose that it would be good for maintainability and onboarding if the script employed inversion of control to support distinct genesis state handler functions, defined separately. I would suggest considering ReactiveX (PyPI), which implements the observer pattern flavor of IoC. This design also allows concern-scoped handlers to be reasoned about and maintained independently.
  4. I imagine we can use a kubernetes job to run through once at the beginning of indexing, but we should ensure that the job doesn't start until after the SubQuery node is up (i.e. has created all tables).

Fork subquery/subql :roll_eyes:

Background

Subquery's graphql API service (@subql/query on npm) uses Postgraphile to generate query resolvers. @subql/query also includes the postgraphile-plugin-connection-filter plugin, which can be configured to enable a variety of features.

Acceptance Criteria

  • Fork subquery/subql and add the fork as a git submodule 🙄 (#42)
  • Update "graphql-engine" docker compose service to build from local @subql/query in the submodule (#42)
  • Update "api" deployment template to build and run the @subql/query in the submodule (#42)

IBC transfer tracking

Acceptance Criteria

API consumers can query for IBC transfers between two accounts. This means that a user should be able to perform the following searches:

  • Transfers made to a specified account
  • Transfers made from a specified account
  • Transfers made to or from a specified account

In addition, the following filters should be applicable to the queries above:

  • Filter by denom
  • Filter by block height
    • In a specified range
    • Greater than a value
    • Less than a value
  • Filter by timestamp
    • In a specified range
    • Greater than a value
    • Less than a value

Relevant Links

IBC transfer module:

Update balance tracking logic (fees in v0.45.9)

Background

As noted in #89, transaction fee amounts should be available in the respective event as of cosmos-sdk v0.45.8 (currently v0.45.9 due to dragonberry). #123 implemented an alternative approach to accounting for fee amounts in the context of balance change events which is sub-optimal by comparison (i.e. requires an additional round-trip to the DB).

Acceptance Criteria

The balance change handlers don't query the DB in order to construct new entities.

Identify and add missing `chainTypes`

Summary

At least one message type is unaccounted for in our project.yaml and needs to be added. The following is an example of the type of error produced when the indexer encounters a message type that's not registered with SubQuery:

failed to index block at height 5585256 handleMessage(
[{"idx":0,"tx":{"idx":0,"block":{"block":{"id":"E105D0370D8C594E73CC23BDE0E43422512643B07709F9C40275B05923F8767F","header":{"version":{"block":"11","app":"0"},"height":5585256,"chainId":"fetchhub-4","time":"2022-04-25T15:15:38.385747527Z"},"txs":[{"0":10,"1":93,"2":10,"3":91,"4":10,"5":34,"6":47,"7":99,"8":111,"9":115,"10":109,"11":111,"12":115,"13":46,"14":115,"15":108,"16":97,"17":115,"18":104,"19":105,"20":110,"21":103,"22":46,"23":118,"24":49,"25":98,"26":101,"27":116,"28":97,"29":49,"30":46,"31":77,"32":115,"33":103,"34":85,"35":110,"36":106,"37":97,"38... 
Error: Unregistered type url: /cosmos.slashing.v1beta1.MsgUnjail at Registry.lookupTypeWithError (/usr/local/lib/node_modules/@subql/node-cosmos/node_modules/@cosmjs/proto-signing/build/registry.js:74:19) at Registry.decode (/usr/local/lib/node_modules/@subql/node-cosmos/node_modules/@cosmjs/proto-signing/build/registry.js:117:27) at CosmosClient.decodeMsg (/usr/local/lib/node_modules/@subql/node-cosmos/dist/indexer/api.service.js:148:46) at Object.get decodedMsg (/usr/local/lib/node_modules/@subql/node-cosmos/dist/utils/cosmos.js:122:47) at get (<anonymous>) at BaseHandler.get (/usr/local/lib/node_modules/@subql/node-cosmos/node_modules/vm2/lib/bridge.js:441:11) at JSON.stringify (<anonymous>) at Object.e.handleMessage (/tmp/NClDJM/QmcUD4ysek4FiU2Tm5T1rUFvbt7WubbxFNQeNM9ffbLX1u.js:2:1232) at /tmp/NClDJM/sandbox:2:50 at BaseHandler.apply (/usr/local/lib/node_modules/@subql/node-cosmos/node_modules/vm2/lib/bridge.js:479:11)

The SubQuery team has linked to an example of registering additional message types:

 chainTypes: # This is a beta feature that allows support for any Cosmos chain by importing the correct protobuf messages

Acceptance Criteria

  • The unjail message type is added to our project's chainTypes list, preventing the current error from occurring in the future
  • Any additional message types identified as necessary should either be added, or the reason for omitting them explained; in the latter case, next steps should be outlined in a comment here or in a new issue.

DistDelegatorClaim missing amount field

Acceptance criteria

The DistDelegatorClaim entity includes an amount field.

Proposal

In the current cosmos-sdk version, the value to populate this field is not included in the message for which the current handler (handleDistDelegatorClaims) is filtering. Instead, its value is available in the related "withdraw_rewards" event type (see: cosmos-sdk docs). If we add another handler which filters for these events, we should be able to look up the existing claim entity and add the amount information.
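
A hedged sketch of what such an event handler might look like, assuming SubQuery's @subql/types-cosmos event shape; the handler name, entity import path, claim id convention, and attribute parsing are assumptions rather than the final design:

import { CosmosEvent } from "@subql/types-cosmos";
import { DistDelegatorClaim } from "../types"; // hypothetical codegen output path

export async function handleWithdrawRewardsEvent(event: CosmosEvent): Promise<void> {
  // "withdraw_rewards" events are assumed to carry an "amount" attribute
  // (e.g. "1234atestfet"); keys/values are coerced to strings here.
  const amountAttr = event.event.attributes.find((a) => String(a.key) === "amount");
  if (!amountAttr) {
    return;
  }
  const numeric = /^\d+/.exec(String(amountAttr.value));
  if (!numeric) {
    return;
  }

  // Assumed claim id convention: "<txHash>-<msgIndex>".
  const claim = await DistDelegatorClaim.get(`${event.tx.hash}-${event.msg.idx}`);
  if (claim) {
    claim.amount = BigInt(numeric[0]);
    await claim.save();
  }
}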

Node service crashing due to connection reset

Background

  • The subquery node queries the configured fetch rpc endpoint to fetch data as it runs.
  • The node container exits when it encounters errors and then relies on its orchestration environment to restart it.
  • Cross-cluster (from sandbox to testnet) requests may be triggering the istio circuit breaker

This error occurs frequently and causes the deployment to go into crashloop backoff:

2022-08-16T09:40:52.005Z <fetch> ERROR failed to index block at height 977210  Error: read ECONNRESET

End-to-end testing - entity creation script (part one)

Acceptance Criteria

The repo contains a cosmpy script which executes these scenarios:

  • legacy bridge contract swap call(s)
    1. Send native FET to a new test bridge user account
    2. Generate a valid and usable Ethereum address
    3. Swap native FET for ERC20 FET (the eth chain starts with 0 ERC20 FET, iirc)
    4. Swap ERC20 FET back to native FET
  • delegator reward claim
    1. Send native FET to a new test delegator account
    2. As the test delegator, delegate half of balance to a/the validator
    3. Wait a few blocks (I guess)
    4. As the delegator, claim staking rewards
  • governance proposal vote
    1. Send native FET to a new test proposer account
    2. As the test proposer, submit a new governance proposal
    3. (Maybe?) As the proposer, submit a deposit for the created proposal
    4. Repeat steps 1 & 2 of the "delegator reward claim" scenario
    5. As the test delegator, vote on the proposal

Unregistered type url error

Background

The project.yaml chainTypes value includes the unjail message and the protobuf file it references exists as well. Despite this, we encounter this error while indexing testnet (dorado-1):

2022-08-29T05:03:27.841Z <sandbox> INFO [handleMessage] (tx EBBC9C4649A1F18E1B7E9081EE5950233FE41A2CD3FF11A053E32D70C3141F61): indexing message 1 / 1
2022-08-29T05:03:27.842Z <api> ERROR Failed to decode message Error: Unregistered type url: /cosmos.slashing.v1beta1.MsgUnjail
2022-08-29T05:03:27.875Z <fetch> ERROR failed to index block at height 2590346 handleMessage() Error: Unregistered type url: /cosmos.slashing.v1beta1.MsgUnjail

Steps to reproduce

Create a docker-compose.override.yml which includes:

services:
  subquery-node:
    environment:
      START_BLOCK: "2590346"
      CHAIN_ID: dorado-1
      NETWORK_ENDPOINT: https://rpc-dorado.fetch.ai:443

CW20 interface compliant contracts

Acceptance Criteria

  1. An entity exists which tracks contracts and relates to primitive entities (e.g. msgs / events).
  2. An entity exists which represents contracts that support the CW20 interface.

Proposal

Example schema

As far as I'm aware, the only sources of information the indexer will have to deduce a contract's API are the store, initialize, or execution messages for that contract.

Options:

  1. We could consider looking at the structure of the initialization message of a given contract to classify it.
    For example, the base CW20 instantiate message looks like this:
    For example, the base CW20 instantiate message looks like this:
pub struct InstantiateMsg {
    pub name: String,
    pub symbol: String,
    pub decimals: u8,
    pub initial_balances: Vec<Cw20Coin>,
    pub mint: Option<MinterResponse>,
    pub marketing: Option<InstantiateMarketingInfo>,
}

(NB: cw20::Cw20Coin defined here)

  2. We could also watch for execution messages (success and error) over time to build a picture of known supported methods. Technically, this is already happening as we process everything in primitives. What would still be needed for this approach is to check the methods seen and classify a contract as supporting interface(s) once some confidence threshold has been reached. Perhaps it makes sense to do this off of the ExecuteContractMessage handler, as that's where we're getting our reference to what methods have been called.
  3. Combine 1 & 2: We could use approach 1 to take an educated guess as to what interface(s) we think a contract might support as soon as it's initialized. As we see actual messages over time, we can update our guess based on both success and error responses to messages. (A sketch of approach 1 follows this list.)

Account for fees in NativeBalanceChange events

Background

@daeMOn63 pointed out in #88:

Also worth noting that fee spendings are actually not tracked by coin_spent events for some reason. I've asked on cosmos Discord to see if this is intended

and

On tracking fees, there's an event tracking the fees we could grab, but it's been added in cosmos-sdk v0.45.8, so we'll have to wait for our chain to use this version to have it. (see: https://github.com/cosmos/cosmos-sdk/blob/v0.45.8/x/auth/ante/fee.go#L121-L123)

Acceptance Criteria

  1. NativeBalanceChange events include the fee for the transaction to which they're related in their amount.
  2. An end-to-end test covers this behavior.

Proposal

Perhaps while we wait for cosmos-sdk v0.45.8, we can reference fees from the related transaction in the event handler.
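
For illustration, a hedged sketch of reading the declared fee off the decoded transaction inside an event handler (the helper name is hypothetical; the decodedTx shape comes from @subql/types-cosmos / @cosmjs/proto-signing):

import { CosmosEvent } from "@subql/types-cosmos";

export function feeCoinsOf(event: CosmosEvent): { denom: string; amount: string }[] {
  // The decoded tx carries the fee the signer declared; a balance-change event
  // handler could read it from here while we wait for the dedicated fee event.
  return event.tx.decodedTx.authInfo.fee?.amount ?? [];
}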

Run tests in CI

Acceptance criteria

On pull request to main, the end-to-end tests are run.

LegacyBridgeSwap filter: contract address

Background

  1. The LegacyBridgeSwap handler element in the project.yaml dataSources currently filters by a contract address. My understanding is that this address has historically been useful as it is deterministic with respect to some parameters which I can't recall at the moment.
  2. Recent experimentation with the local subquery environment has proven this not to hold for this environment.
  3. We assume it is faster to allow subquery to filter at this level rather than to call our handler for all contract executions and return early when there is no match.

Acceptance criteria

The legacy bridge contract address can be overridden with an environment variable (e.g. LEGACY_BRIDGE_CONTRACT_ADDRESS).

Proposal

I think this could be integrated into the existing node-entrypoint.sh which already replaces other values in the project.yaml (via yq) based on environment variables.

Account Balance Tracking

Acceptance Criteria

To add the ability for a user to query a specific address' account balance. For the first version of this task, only native balances should be captured.

Below is an illustrative schema for the problem

type NativeBalance {
  amount: BigInt!
  denom: String!
}

type Account @entity {
  address: ID! # might be also String!
  nativeBalances: [NativeBalance!]!
}

Potential strategy

Create an updateNativeBalance function (a sketch follows this list):

  1. The function queries the database for an existing 'Account' entity with the relevant native transfer address
  2. If no existing entity is returned, create a new one
  3. Update the balance of the 'Account'
  4. Call this function wherever balances may change, i.e. in the handlers for coin_spent and coin_received events
  5. Iterate to increase coverage of balance changes
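
A minimal sketch of updateNativeBalance under the illustrative schema above, assuming SubQuery's generated entity classes; the import path, id convention, and balance representation are assumptions:

import { Account } from "../types"; // hypothetical codegen output path

export async function updateNativeBalance(
  address: string,
  amount: bigint, // positive for coin_received, negative for coin_spent
  denom: string
): Promise<void> {
  // 1-2. Fetch the account if it exists, otherwise create it.
  let account = await Account.get(address);
  if (!account) {
    // The id doubles as the address per the illustrative schema.
    account = Account.create({ id: address, nativeBalances: [] });
  }

  // 3. Apply the delta to the matching denom, or add a new balance entry.
  const balance = account.nativeBalances.find((b) => b.denom === denom);
  if (balance) {
    balance.amount += amount;
  } else {
    account.nativeBalances.push({ amount, denom });
  }

  await account.save();
}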

Blocked on

  • #79 - Process genesis for initial state

Investigate cockroach DB integration

Background

As the graphql API service scales to meet demand, the DB will quickly become a performance bottleneck.

(see: CockroachDB quick start)
(see: CockroachDB comparison)

Acceptance Criteria

We've determined whether CockroachDB can be easily integrated by swapping out Postgres for it:
a. If not, then we've enumerated those pain points / issues
b. If so, then we have a working proof-of-concept

Contract Execution Tracking

Acceptance Criteria

In addition to the work specified in fetchai/ecosystem-tasks#43, the following functionality is required when making queries against contracts.

To add the ability for a user to query contract execute events that have happened on the chain.

  • Contract calls that have been made to a specified contract
  • Contract calls that have been made from a specified account

In addition, the following filters should be applicable to the queries above:

  • Filter by payload name
  • Filter by block height
    • In a specified range
    • Greater than a value
    • Less than a value
  • Filter by timestamp
    • In a specified range
    • Greater than a value
    • Less than a value

Below is an illustrative schema for the problem

type ContractCall @entity {
  id: ID!
  timestamp: Timestamp! # indexable
  from: String!  # indexable
  to: String! #indexable
  blockHeight: BigInt! # indexable
  txIndex: Int!
  msgIndex: Int!
  method: String! # indexable
  payload: String! # json payload for the contract
  funds: [Funds!]
}

type Funds {
  amount: BigInt!
  denom: String!
}

Define process for adding entities

Acceptance criteria

Consolidate the "Adding Entities" section of the description of fetchai/ecosystem-tasks#43 with the state of e2e testing. Update the instructions such that it would have contributors producing code and tests which are consistent with our current testing tools and methodology.

Disk size too small (again)

Background

We bumped the size of the persistent volume claim of the Postgres StatefulSet from 1GiB to 7GiB in #30. Dorado has now run out of space once again. See the logs:

2022-09-07 13:57:07 | 2022-09-07T11:57:07.643781213Z stdout F 2022-09-07T11:57:07.643Z <fetch> ERROR failed to index block at height 2834960 handleBlock() SequelizeDatabaseError: could not extend file "base/16384/16417": No space left on device

CW20 Transfer Tracking

Acceptance Criteria

To add the ability for a user to query CW20 transfers between two accounts. This means that a user should be able to perform the following searches:

  • Transfers made to a specified account
  • Transfers made from a specified account
  • Transfers made to or from a specified account

In addition, the following filters should be applicable to the queries above:

  • Filter by denom
  • Filter by block height
    • In a specified range
    • Greater than a value
    • Less than a value
  • Filter by timestamp
    • In a specified range
    • Greater than a value
    • Less than a value

Below is an illustrative schema for the problem

type Cw20Transfer @entity {
  id: ID!
  timestamp: Timestamp! # indexable
  from: String!  # indexable
  contract: String! #indexable
  to: String! #indexable
  blockHeight: BigInt! # indexable
  txIndex: Int!
  msgIndex: Int!
  amount: BigInt!
}

Expose ledger-subquery version to GQL API

Proposal

For now let's add a ReleaseVersion entity and write the version record at boot or something:
- git tag
- git commit hash
- timestamp when deployed

It looks like this is already exposed via the subql framework and we just needed to understand how to use it. TL;DR: apparently we can increment the version property in the package.json of the node and query packages of the subql (git) submodule. (See: #125 (comment) for more.)

  • Document subql package.json version update process
  • Update package.json versions to present
  • Ensure metrics use this version

Add issue templates

Acceptance criteria

Following the example set in the cosmpy repo, this repo should have distinct issue templates for bug reports and feature requests.

End-to-end testing - initial tests (part three)

Acceptance Criteria

Initial test coverage includes ensuring that all implemented entities are correctly written to the underlying PostgreSQL database. The tests exercise the graphql API and then make expectations against the results of SQL queries.

If corresponding test gql queries are already available (eventually we want gql test queries for everything), tests ensure that they return the expected results as well.

Proposal

Currently, the postgres service is exposing its default port on the host OS, which would allow us to run our tests from the host. However, as this repository is an npm/yarn package and already depends on docker for development and testing, I think it would be more appropriate to run the tests in an additional service in the docker-compose.yml. Such a service could access the repo via a volume, would already have all necessary dependencies (i.e. python and friends) installed, and would have network access to (and DNS resolution for) the other docker services in the composition. (Use docker-compose.override.yml to expose necessary services / networks.)

For making assertions about postgres data, I think the path of least resistance might be to use something like psycopg3 with SQL queries for the time being. This has the potential to make our e2e tests brittle with respect to Subquery's DB schema generation; however, I would expect changes in the schema generation methodology to be infrequent and to come with a major version increment and due notice.

For graphql queries, after a brief search, I've discovered gql (available on PyPI).

Add community connection-filter schema plugin

Genesis processing must exit gracefully

To check with the SRE team: the updated scope of this ticket is to make sure that, when genesis processing is integrated into the init container, a backup of the database has been taken so that a manual recovery can be made.

MVP Deployment strategy

Acceptance Criteria

  1. Manifest templates support multiple instances at distinct versions in the same k8s namespace.
  2. We have a way to switch the public-facing API DNS name between these instances.

Monitor DB size

Add metric for monitoring DB size for testnet and mainnet

End-to-end test tidying

Acceptance Criteria

  • Function signatures and non-assignment variable declarations are type-complete in all end-to-end tests.
  • All tests which exercise multiple scenarios in a single test method call the self.subTest method with a unique, descriptive name.

Consolidate DB setup

Background

DB setup (i.e. schema generation and table creation) is currently handled by the SubQuery node at startup and is derived entirely from the schema.graphql file.

Authz support (delegator claim bug)

Background

It seems that authz actions are handled differently than their vanilla counterparts.

It looks like the nested messages may still be encoded with protobuf, in which case we will need to decode them in the handler.

Example authz message body (see investigation-query.gql):

{
  "grantee": "fetch1m5hgzum68rjf4c7zhezgkj8hlnmr0kgh9uj0g8",
  "msgs": [
    {
      "typeUrl": "/cosmos.distribution.v1beta1.MsgWithdrawDelegatorReward",
      "value": {
        "0": 10,
        "1": 44,
        "2": 102,
        ...
      }
    },
    {
      "typeUrl": "/cosmos.staking.v1beta1.MsgDelegate",
      "value": {
        "0": 10,
        "1": 44,
        "2": 102,
        ...
      }
    }
  ]
}

Acceptance Criteria

Handler exists for authz's MsgExec message which:

  • checks that its transaction was successful and bails if not
  • decodes the message's nested Msgs and calls the primitive message handler, plus any respective type-specific message handler function (a sketch follows this list)
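
A hypothetical sketch of such a handler; the handler name, registry wiring, and logging are assumptions, while the CosmosMessage shape follows @subql/types-cosmos:

import { CosmosMessage } from "@subql/types-cosmos";

declare const logger: { info: (msg: string) => void }; // provided by the SubQuery sandbox

export async function handleAuthzExec(msg: CosmosMessage): Promise<void> {
  // Bail out if the enclosing transaction failed.
  if (msg.tx.tx.code !== 0) {
    return;
  }

  const { msgs } = msg.msg.decodedMsg as {
    msgs: { typeUrl: string; value: Uint8Array }[];
  };

  for (const inner of msgs) {
    // Each nested message is still protobuf-encoded; it must be decoded against
    // a registry that knows its typeUrl before being dispatched to the primitive
    // message handler and any type-specific handler.
    // e.g. const decoded = registry.decode(inner); // registry wiring omitted here
    logger.info(`MsgExec contains nested message ${inner.typeUrl}`);
  }
}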

SubQuery node service crashing due to 503s

Background

503 status responses in mainnet from the fetchd node are causing the subquery node to bounce. I think this is likely causing the pod to go into crashloop backoff similar to #31.

Logs:

2022-09-07 09:21:22 | 2022-09-07T07:21:22.23486883Z stdout F 2022-09-07T07:21:22.192Z <indexer> ERROR failed to fetch block Error: Request failed with status code 503
2022-09-07 09:21:22 | 2022-09-07T07:21:22.192607881Z stdout F 2022-09-07T07:21:22.132Z <fetch> ERROR failed to fetch block info 5548720 Error: Request failed with status code 503

Proposal

Perhaps we can re-purpose #43 so that the container persists through (i.e. survives) the 503 error.

CW20 balance tracking

Acceptance Criteria

  1. An entity exists which is analogous to NativeBalanceChange.
  2. Handlers exist which cover contract execution for all CW20 methods and create instance(s) of the new entity tracking the respective change (e.g. the burn method would create an instance with a negative amount; transfer would create an instance with a positive amount for the receiver and a negative one for the sender). A sketch follows this list.
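
A minimal sketch of how CW20 execute methods could map onto balance-change records, analogous to NativeBalanceChange; the entity shape, the methods handled, and the argument layout are assumptions:

interface Cw20BalanceChangeLike {
  account: string;
  contract: string;
  amount: bigint; // negative for outgoing, positive for incoming
}

export function cw20BalanceChanges(
  sender: string,
  contract: string,
  method: string,
  args: { recipient?: string; amount: string }
): Cw20BalanceChangeLike[] {
  const amount = BigInt(args.amount);
  switch (method) {
    case "burn":
      return [{ account: sender, contract, amount: -amount }];
    case "transfer":
      return [
        { account: sender, contract, amount: -amount },
        { account: args.recipient ?? "", contract, amount },
      ];
    default:
      // mint, send, transfer_from, etc. would be handled similarly.
      return [];
  }
}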

Load testing

Background

In order to measure the impact of making architectural changes, we need a way to apply load against that infrastructure.

Acceptance Criteria

  • no. of "complex" simultaneous gql clients / queries
  • average response time under load and not
  • matrix for different scales
    • lone Postgres
    • Postgres with 2 read replicas
    • Postgres with ...

End-to-end testing - local environment (part two)

Acceptance Criteria

The fetchd and ethereum (plus more) services are set up and run locally from within this repository. In order to facilitate future automation (e.g. CI/CD) of the end-to-end tests, this repository provides everything required to support the e2e test environment.

Proposal

As we already have a docker-compose.yml, I think the shortest path for this might be to represent these additional services there; currently, that means we can copy everything (except maybe the bridge-tests service) from the legacy bridge relayer's docker-compose.yml into this repo's docker-compose.yml.

For consistency, I think it might make sense to implement the script(s) which set up and/or reset the test environment in TypeScript and to include them as package scripts (i.e. as entries in the package.json "scripts" section), perhaps named something like "build:e2e", "start:e2e", and/or "test:e2e".

Ensure high-level entities from failed txs are not persisted

Background

The SubQuery node will encounter and process messages contained within failed transactions. This is desirable behavior, as we do want to index failed transactions (and related primitives). However, I think this fact is missing from the logic of handlers for higher-level entities (e.g. legacy bridge swaps). The result is that the message-based handlers we currently have are creating these higher-level entities based on txs that did not result in any network state change.

For each message received, an event with type "message" is emitted during handling. The baseapp will always set the "action" attribute and cosmos modules (e.g. vanilla modules, wasm) often add "module" and "sender" attributes as well.

Acceptance Criteria

  1. Existing non-primitive message-based handlers MUST check the status field of their respective transaction ("Success" | "Error") and return early when it is "Error", but only if the entity that would be created doesn't accurately represent the new network state. (A minimal guard sketch follows this list.)
  2. Any message-based handlers which can be re-implemented as event-based handlers SHOULD be. This will improve performance, as the handler will then only be called for successful messages.
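
A minimal guard sketch for criterion 1, assuming the @subql/types-cosmos message shape (the helper name is hypothetical):

import { CosmosMessage } from "@subql/types-cosmos";

export function txSucceeded(msg: CosmosMessage): boolean {
  // A zero result code means the enclosing transaction was committed successfully.
  return msg.tx.tx.code === 0;
}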

Monitor sync rate

Acceptance Criteria

  1. We can measure the duration of time that it takes for an indexer instance to sync from the start block to the current block (at time of completion).
  2. This measurement is automatically started when a preview deployment happens (i.e. a fresh indexer instance begins syncing) in any given cluster. Measurement should stop when syncing is complete.
  3. These measurements are persisted somewhere and should be sufficient to show the trend of sync time across indexer versions within the same network.

Nice to have

  • We can correlate this with downtime on the subquery node.

Native Transfer Tracking

Acceptance Criteria

To add the ability for a user to query native transfers between two accounts. This means that a user should be able to perform the following searches:

  • Transfers made to a specified account
  • Transfers made from a specified account
  • Transfers made to or from a specified account

In addition, the following filters should be applicable to the queries above:

  • Filter by denom
  • Filter by block height
    • In a specified range
    • Greater than a value
    • Less than a value
  • Filter by timestamp
    • In a specified range
    • Greater than a value
    • Less than a value

Below is an illustrative schema for the problem

type NativeTransfer @entity {
  id: ID!
  timestamp: Timestamp! # indexable
  from: String!  # indexable
  to: String! #indexable
  blockHeight: BigInt! # indexable
  txIndex: Int!
  msgIndex: Int!
  amount: BigInt!
  denom: String! # indexable
}
