sygmaprotocol / sygma-explorer-indexer
Backend service that collects bridging events for the bridging explorer UI.
License: GNU Lesser General Public License v3.0
Update the indexing logic to check for the latest processed block for each domain in the database. Based on this information, continue indexing chain events. This change ensures that the system can recover robustly in case of interruptions or restarts, as it will always resume from the last processed block for each domain.
On indexer startup, the logic should be:
Indexing logic should process blocks in small batches of 5, similar to the relayer. Check for all events in the batch of blocks, process them, and save the information to the database. At the end, also record the last processed block to the database. This process repeats until the indexer catches up with the blockchain's latest block. At that point, the domain indexer waits a few seconds and checks whether there are enough new blocks to process (the batch size defines this). This is an ongoing process.
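The batching arithmetic described above can be sketched as follows; the function and constant names are illustrative, not taken from the codebase:

```typescript
// Small batches of 5 blocks, similar to the relayer (illustrative constant).
const BATCH_SIZE = 5;

type BlockRange = { from: number; to: number };

// Given the last processed block stored for a domain and the chain's latest
// block, return the next batch to index, or null if there are not yet enough
// new blocks (in which case the indexer waits a few seconds and re-checks).
function nextBatch(
  lastProcessedBlock: number,
  latestBlock: number,
  batchSize: number = BATCH_SIZE
): BlockRange | null {
  const from = lastProcessedBlock + 1;
  const to = from + batchSize - 1;
  if (to > latestBlock) return null;
  return { from, to };
}
```

After each batch is processed and its events are saved, the last processed block is written back to the database, so a restart simply resumes from the next `nextBatch` of the stored value.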
Investigate how to use Prisma migrations on our indexer so we can make model upgrades without executing the entire indexing process from the beginning.
Add failed transactions to the evm1 or evm2 and substrate-pallet images.
Update the substrate-pallet image with the newest substrate-pallet so we can test the FeeCollected event handling.
We are going to revert the current schema changes back to the original implementation. This means not using id as object.id from Mongo, and passing the resource and domains values.
Refactor EVM listening logic to support the latest Solidity contracts and align with the new data model
Rewrite functions for persisting different chain events (e.g., saveDeposits) to align with the latest smart contracts. The new implementation should fetch and save events from a range of blocks provided as an argument. Update the EVM listening logic accordingly.
Additionally, apply the new data model when implementing the refactored EVM listening logic. A new data model and more information can be found in the research document.
We don't decode multilocation data right now, and the destination for substrate is invalid (a hex string of the multilocation).
We should parse the substrate multilocation data and extract the recipient GeneralKey to get the destination recipient.
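Assuming the multilocation has already been decoded to JSON (e.g. via the polkadot.js API), extracting the recipient could look roughly like this. The `X1`/`GeneralKey` interior shape is an assumption about how deposits encode the recipient; the real shape depends on the pallet version:

```typescript
// Hypothetical decoded multilocation shape (assumption, not the real type).
interface DecodedMultilocation {
  parents: number;
  interior: { x1?: { generalKey?: string } };
}

// Extract the destination recipient from the interior GeneralKey junction,
// if present; otherwise signal that the multilocation could not be parsed.
function extractRecipient(multilocation: DecodedMultilocation): string | null {
  return multilocation.interior.x1?.generalKey ?? null;
}
```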
The transfer service and transfer controller tests must pass. Since we have implemented the substrate indexing logic, we should have tests that cover corner cases and check that data is being stored properly.

Migrate away from express to fastify on the API side of the indexer, and update the build command accordingly.

The API returns 500 if the page or limit query params are missing, as we don't have sane defaults.
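A minimal sketch of sane defaults for the pagination params, so a missing page or limit no longer produces a 500 (the default and maximum values are assumptions):

```typescript
interface Pagination {
  page: number;
  limit: number;
}

// Fall back to page 1 and a limit of 10 when the params are missing or not
// numeric, and cap the limit at 100 (all three values are illustrative).
function normalizePagination(query: { page?: string; limit?: string }): Pagination {
  const page = Math.max(1, parseInt(query.page ?? "", 10) || 1);
  const limit = Math.min(100, Math.max(1, parseInt(query.limit ?? "", 10) || 10));
  return { page, limit };
}
```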
ChainBridge commit (or docker tag):
chainbridge-solidity version:
chainbridge-substrate version:
Go version:
For OFAC compliance, we need to check addresses against the OFAC list by adding a boolean property to the model.
Datadog currently doesn't parse log levels and newlines correctly.
Fix the log format so it is compatible with Datadog.
Currently, the indexer logs mostly use the info level, so it is hard to find important information.
Review the current logs, and move per-block logs to the debug level.
We need to create the endpoint to get all the transfers ordered by time:

GET /api/transfers?status={status}&page={page}&limit={limit}

- status (optional): Transaction status (pending/executed/failed)
- page (required): Page number for pagination
- limit (required): Number of items per page

Since there is a chance, a small one, that txHash might not be unique, we should fetch a transfer by passing txHash + chainId.
We would like to implement the USD value for the Explorer UI via a single API (the CoinGecko API) to avoid any calculation.
Considerations:
Currently we have a bunch of optional fields on the schema, and for some of them this optionality doesn't make much sense. Revisit this in conjunction with the indexing logic.
There are new endpoints that we need to create or modify based on the current ones:

GET /api/resources/{resourceID}/transfers?status={status}&page={page}&limit={limit}

- status (optional): Transaction status (pending/executed/failed)
- page (required): Page number for pagination
- limit (required): Number of items per page

GET /api/domains/{domainID}/transfers?status={status}&page={page}&limit={limit}&domain={source/destination}

- status (optional): Transaction status (pending/executed/failed)
- page (required): Page number for pagination
- limit (required): Number of items per page
- domain (optional): Filter transfers by source or destination domain (source/destination)

GET /api/domains/source/{sourceDomainID}/destination/{destinationDomainID}/transfers?page={page}&limit={limit}

- page (required): Page number for pagination
- limit (required): Number of items per page

GET /api/resources/{resourceID}/domains/source/{sourceDomainID}/destination/{destinationDomainID}/transfers?page={page}&limit={limit}

- page (required): Page number for pagination
- limit (required): Number of items per page
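The endpoints above share a small set of optional filters. A hypothetical helper that turns the path and query parameters into a database filter object could look like this; all field names (`status`, `resourceId`, `fromDomainId`, `toDomainId`) are assumptions, not the real schema:

```typescript
type TransferStatus = "pending" | "executed" | "failed";

interface TransferFilter {
  status?: TransferStatus;
  resourceId?: string;
  fromDomainId?: number;
  toDomainId?: number;
}

// Build a filter object, including only the params that were actually provided
// and silently dropping an unrecognized status value.
function buildTransferFilter(params: {
  status?: string;
  resourceID?: string;
  sourceDomainID?: number;
  destinationDomainID?: number;
}): TransferFilter {
  const where: TransferFilter = {};
  if (params.status === "pending" || params.status === "executed" || params.status === "failed") {
    where.status = params.status;
  }
  if (params.resourceID) where.resourceId = params.resourceID;
  if (params.sourceDomainID !== undefined) where.fromDomainId = params.sourceDomainID;
  if (params.destinationDomainID !== undefined) where.toDomainId = params.destinationDomainID;
  return where;
}
```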
Provide testing for all the endpoints.
Load the domain RPC URLs from an environment variable so they are not exposed inside the public repository.
Add substrate listening logic that will be aligned with our pallet and new data model.
@tcar121293, as you mentioned, most of the processing logic from the relayers and the sdk can be translated to TypeScript.
Extrinsic IDs for Khala and Phala don't match the extrinsic ID of the transaction with the deposit (probably also execution).
When indexing transfers from Khala and Phala, the destination is not indexed.
As the indexing process is very slow (especially on substrate chains), we need to figure out ways to make changes and upgrades to the database without reindexing entire chains. For changing the actual schema of the database in some environment, we will use Prisma's migrations. What we need in parallel to this is the ability to run some action on the entire database, or on some subset of entries in the database.
The first such use case is to re-run the $ price calculations for amounts bridged on our mainnet environment. During re-indexing this calculation was failing, so all transfers have 0 as the $ value.
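The re-run itself amounts to iterating stored transfers and recomputing the value. The core calculation could look like this; the token decimals and the price source (e.g. a CoinGecko price snapshot) are assumptions:

```typescript
// Convert a raw on-chain amount (smallest units) into its USD value,
// given the token's decimals and a USD price per whole token.
function usdValue(rawAmount: bigint, decimals: number, priceUsd: number): number {
  const wholeTokens = Number(rawAmount) / 10 ** decimals;
  return wholeTokens * priceUsd;
}
```

A backfill action would then select all transfers with a 0 $ value and update each one with `usdValue(...)` computed from the stored amount.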
The destination address has the wrong length, so the link to the explorer will display an invalid address once the user arrives at the block explorer.
Implement the following endpoint:

GET /api/sender/{senderAddress}/transfers?status={status}&page={page}&limit={limit}

- status (optional): Transaction status (pending/executed/failed)
- page (required): Page number for pagination
- limit (required): Number of items per page

The transfer.service tests must pass.

Refactor the logic for loading configuration, so it loads configuration from a provided URL and an array of RPC endpoints for each domain as an ENV parameter. See the getSygmaConfig function.

From the repository sygma-explorer-indexer we need to deploy two different services:
We need to support testnet (on each push to main) and mainnet (on release) environments.
Both services use the same root Dockerfile, just with different start commands:
node ./build/index.js
node ./build/indexer/index.js
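A hedged sketch of how the two services could share one Dockerfile while using the start commands above; the service names and compose layout are illustrative:

```yaml
# Hypothetical compose fragment: one build context, two services.
services:
  explorer-api:
    build: .
    command: node ./build/index.js
  explorer-indexer:
    build: .
    command: node ./build/indexer/index.js
```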
As a bridge user or sdk developer
I want to see the reason for my failed deposit (execution) if it failed because of insufficient liquidity
So that I panic less.

As a protocol provider, we can't really prevent someone from sending a transaction with an amount bigger than the available liquidity, so at the very least we want to show users that their transaction failed for that reason.
Please show something like: "Transaction failed because of insufficient liquidity. Please be patient, your transaction will be retried soon."
Scenario:
Given I am a bridge user
When I send a deposit with an amount higher than the liquidity on the destination network
And it fails
Then I see that it failed for that specific reason and will be fixed soon

You need to check the transaction logs if the transaction failed, and search for a specific log that happens only when the handler fails due to insufficient liquidity. This should be pretty straightforward for EVM, but could be tricky for Substrate.
[ ] Unit tests
[ ] Manual tests
[ ] If execution failed because of liquidity problems, the corresponding error is displayed
Restructure the repository for maintainability and scalability reasons.

Restructure the repo into a monorepo with two packages:

- explorer-api: a Fastify application that exposes the defined explorer API
- explorer-indexer: a Node application that indexes, processes, and saves Sygma on-chain events to the database

With this approach, we can scale explorer-api separately from explorer-indexer, which is a highly likely scenario.
Please follow the monorepo setup defined by the Engineering Handbook.
Refactoring the indexing codebase is not in the scope of this issue, so just remove the indexer-worker code and migrate the indexer code to the new package explorer-indexer.
Fix the feeAmount format (remove commas) in the EvmIndexer class.

Update the ABI based on the newly published version of the solidity contracts.
Our analytics team needs to fetch transfers by date.
Add a new endpoint that allows fetching transfers filtered by date-time. It should allow two-way bounded filtering, where both bounds are optional. Here is a dummy example:
../?fromDate={url_encoded_ISO_8601}&toDate={url_encoded_ISO_8601}
Use ISO 8601 for the time format.
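Parsing the two optional bounds could look like this; returning a `{ gte, lte }` object is an assumption about the database filter shape:

```typescript
// Parse the optional fromDate/toDate query params (ISO 8601 strings) into a
// two-way bounded range; either bound may be omitted. Throws on invalid input.
function parseDateRange(fromDate?: string, toDate?: string): { gte?: Date; lte?: Date } {
  const range: { gte?: Date; lte?: Date } = {};
  if (fromDate) {
    const d = new Date(fromDate);
    if (isNaN(d.getTime())) throw new Error("invalid fromDate");
    range.gte = d;
  }
  if (toDate) {
    const d = new Date(toDate);
    if (isNaN(d.getTime())) throw new Error("invalid toDate");
    range.lte = d;
  }
  return range;
}
```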
As a user
I want to understand the reason why my transaction failed on execution
So that I can feel like my assets are safe
As an on-duty developer
I want to quickly understand the reason why transaction failed on execution
So that I can take appropriate actions
We had a few incidents on PHA routes, where users had their funds locked for short periods because liquidity was missing on the destination chain. On the Explorer this is displayed as a generic failed transaction, whereas it would be a much better user experience if we showed the actual reason why this happened.
This piece of art is a rough sketch of how something like this could look.
We are providing additional context on failed transactions. In the future, we could possibly even use this to automate retries based on the failure context.
We should start with something simple, such as only recognizing the situation where liquidity is missing on the destination chain. Still, we should design a solution that is extendable to hold different failure contexts.
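One extendable shape for this is a discriminated union plus a log classifier, so new failure reasons can be added later without touching existing ones. The log marker string below is a placeholder, not the real handler log:

```typescript
// Extendable failure context: add new variants to this union as more
// failure reasons become recognizable.
type FailContext =
  | { reason: "insufficient-liquidity" }
  | { reason: "unknown" };

// Placeholder marker (assumption): the substring emitted only when a handler
// fails because of missing liquidity on the destination chain.
const LIQUIDITY_LOG_MARKER = "transfer amount exceeds balance";

// Scan the failed transaction's logs and classify the failure.
function classifyFailure(logs: string[]): FailContext {
  if (logs.some((l) => l.includes(LIQUIDITY_LOG_MARKER))) {
    return { reason: "insufficient-liquidity" };
  }
  return { reason: "unknown" };
}
```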
Currently you need to modify the hosts file to run the replicas and make them accessible outside the container. This setup is only useful for the docker-compose.only-mongo.yml file. We should have everything inside the containers and not maintain different compose files: merge the only-mongo docker compose file into docker-compose.
To make the Explorer code more extendable and testable, we should follow a TDD approach: first create Swagger documentation that describes the full indexer API, and then use it in ExplorerUI.
First, we need to agree on the API design: define the URL schema, methods, and what data each endpoint returns (mostly done here).
Create Swagger documentation from this information, plus a server that can mock these requests so it can be used in local ExplorerUI development.
[ ] Swagger documentation and mock server added to the indexer repo
Add a yarn action for adding seed data to the database so it is easier to test locally.
Check out this guide.
Manually test that the database is populated once the seeding action is run.
A functioning data seeding command.
Currently we are using the pm2 package to start the indexer and api processes separately. This is not really compatible with our setup, as this is going to be deployed to AWS, so there is no reason to use a package like this. The best approach here is to prepare separate docker images for the indexer and the api, so we replicate a strategy similar to our other services.
We want to save the time of deposit and the time of execution as two separate entries in the database. This way we can calculate the actual delta from the information in the database.
Add a /health endpoint to the swagger.yml definition for an easy AWS deployment setup.
The endpoint just needs to return a 200 OK response.
When an error occurs, the indexer will stop, but since the healthcheck endpoint only checks whether the DB is up, the service will not be restarted.
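A healthcheck that also fails when indexing has stalled would let the orchestrator restart the stopped indexer. A sketch of the core liveness test (the lag threshold is illustrative):

```typescript
// Consider the indexer healthy only if it has processed a block recently;
// a stalled indexer then fails the healthcheck and gets restarted.
function isHealthy(
  lastProcessedAtMs: number,
  nowMs: number,
  maxLagMs: number = 5 * 60_000 // 5 minutes, illustrative threshold
): boolean {
  return nowMs - lastProcessedAtMs <= maxLagMs;
}
```

The /health handler would return 200 when `isHealthy(...)` is true and a 5xx status otherwise, in addition to the DB check.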