
txq's People

Contributors

alamowebcompany, attilaaf, bitcoinfiles, deanmlittle


txq's Issues

Decouple database layer to support multiple database schemas at launch and runtime.

To support multi-tenant usage of TXQ, we must abstract the database layer so it can support multiple schemas (database credentials), one per tenant.

  • Configuration should be loaded at startup and also be able to reload (refetch) configuration at run-time (ie: connect newly added pools, keep existing ones if a pool is already instantiated for that tenant). Any unused connection pools should be destroyed.

  • Use host header subdomain+domain matching to route to the correct database schema. Return 403 Forbidden if not allowed.

  • The default 'single user mode' of TXQ should still run with a single schema, so Open Source (self-hosted) users can keep running just as easily with their chosen single default schema.

  • API routes correctly route to the proper schema

  • Server-Sent Events (SSE) socket endpoints correctly bind requests to the correct schema
    -> For SSE we cannot specify custom headers via EventSource, therefore we cannot rely on the host header (or a custom header) and must use a URL query param such as ?userdomain=acmeinc
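The routing rules above could be sketched roughly like this (a hypothetical helper; `knownTenants`, `resolveTenant` and the tenant names are assumptions for illustration, not TXQ's real identifiers):

```typescript
// Hedged sketch: resolve the tenant schema key from either the Host header
// (REST) or a ?userdomain= query param (SSE, since EventSource cannot set
// custom headers). Tenant names are illustrative; real ones come from config.
const knownTenants: Set<string> = new Set(['acmeinc', 'examplecorp']);

function resolveTenant(hostHeader?: string, userdomainParam?: string): string | null {
  // SSE clients pass ?userdomain=acmeinc because EventSource has no header support
  if (userdomainParam) {
    return knownTenants.has(userdomainParam) ? userdomainParam : null;
  }
  // REST clients are matched by subdomain: acmeinc.txq.example.com -> acmeinc
  const sub = hostHeader ? hostHeader.split('.')[0] : '';
  return knownTenants.has(sub) ? sub : null; // null -> respond 403 Forbidden
}
```

A `null` result would map to the 403 Forbidden response mentioned above; single-user mode could simply skip this lookup and use the default schema.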

Acceptance Criteria

1. As a Client Dev, I want to run TXQ against a single database schema, without extra complication in configuration (keep it simple for self-hosted users)

1a. As a Client Dev, I want to configure multiple (up to 10's or 100's) database schemas by userdomain, so that we can route all API requests (REST/SSE) to the correct schema for publishing and reading tx data.

1b. As a Client Dev, I want to easily connect to my TXQ instance, and if I made a mistake, get an appropriate 403 message telling me what to do (ie: set the correct project_id and api_key header for my given userdomain)

2. As the system, I want a recurring timer that checks for 'orphaned' or dropped transactions every 1 minute across all schemas, so that the system stays in a consistent and eventually reliable state.

Note: in the future we will replace this timer with a separate 'Executor' queue driver process so we do not have to poll across schemas (which could become inefficient for large tenants)
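Criterion 2 could be sketched as a single sweep over every tenant schema, run on the 1-minute timer (the names `sweepOrphans` and `findOrphans` are hypothetical, not TXQ's real identifiers):

```typescript
// Hedged sketch of the per-schema orphan check: one pass over all tenant
// schemas, collecting txids that look 'orphaned' or dropped so they can be
// re-enqueued. findOrphans stands in for the real per-schema query.
function sweepOrphans(
  schemas: string[],
  findOrphans: (schema: string) => string[]
): Map<string, string[]> {
  const orphans = new Map<string, string[]>();
  for (const schema of schemas) {
    const txids = findOrphans(schema); // e.g. txs with no recent status update
    if (txids.length > 0) {
      orphans.set(schema, txids);
    }
  }
  return orphans;
}
// TXQ would run this on a timer, e.g. setInterval(() => sweepOrphans(...), 60_000)
```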

3. As Dev Ops, I want to route HTTP requests to the correct TXQ cluster that is capable of serving the userdomain's request.

4. As Dev Ops, I want to ensure security and privacy as much as possible, therefore the TXQ instance should also check the project_id and api_key header (or URL query param) and validate that they correspond to the bound database schema connection.

Note: We already have a 'user' and 'account' and api_key concept elsewhere. We can build a simple (optional!) configuration setting which reads from a postgres table txqconfig_prod that correlates the api_key and user domain. Open to suggestions and ideas in this area and how best to approach this goal
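The optional check could look roughly like this, assuming rows from the suggested txqconfig_prod table have been loaded into memory (`TenantCreds`, `credsByDomain` and `validateCreds` are hypothetical names, and the sample row is made up):

```typescript
// Hedged sketch of the optional api_key check. Assume rows correlating
// api_key and userdomain (per the txqconfig_prod suggestion) are cached as
// a map of userdomain -> credentials.
interface TenantCreds {
  projectId: string;
  apiKey: string;
}

const credsByDomain: Map<string, TenantCreds> = new Map([
  ['acmeinc', { projectId: 'proj-1', apiKey: 'secret-key' }], // illustrative row
]);

// Returns true when the supplied credentials match the schema bound to the
// userdomain; callers should respond 403 Forbidden otherwise.
function validateCreds(userdomain: string, projectId?: string, apiKey?: string): boolean {
  const creds = credsByDomain.get(userdomain);
  return !!creds && creds.projectId === projectId && creds.apiKey === apiKey;
}
```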

5. As a Client Dev, I want to use the /mapi endpoints and communicate with my preferred miners, and then the mapi proxy will save the response exactly as it works today (just adapted to multiple schemas)

6. As a Client Dev, I want to be able to change/configure my preferred miners as well as the broadcast and retry policy settings. Ensure that miner queue broadcast/status settings are respected for each userdomain.

Note: TXQ empowers clients to control which miners they deal with and the priority in which they are used for broadcasting (push tx) and also for fetching status. Long term, the goal is to let them configure this dynamically so that TXQ instances become aware of the change for subsequent requests (ie: I might add or remove a miner in my endpoints setup via console.matterpool.io someday)

Future considerations (out of scope)

  • We want to use express-slowdown (I have a fork) that allows sliding-window request rate limiting. We want to put that together so that, say, certain lower-tier tenants are restricted to 3 requests every 5 seconds. So we should build it in a way that we can plug that express-slowdown middleware in easily and route requests through it too.

  • Automate creation of isolated postgres schemas and binding them to the configuration (ie: user clicks 'Launch new TXQ instance')
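The sliding-window idea can be sketched without the middleware itself; this is just the core counting logic under the "3 requests every 5 seconds" example, not the express-slowdown fork's actual API (`SlidingWindowLimiter` is a hypothetical name):

```typescript
// Hedged sketch of per-tenant sliding-window rate limiting: keep recent hit
// timestamps per tenant and allow a request only while fewer than `max`
// hits fall inside the trailing window.
class SlidingWindowLimiter {
  private hits: Map<string, number[]> = new Map();

  constructor(private max: number, private windowMs: number) {}

  allow(tenant: string, now: number = Date.now()): boolean {
    // Drop timestamps that have slid out of the window
    const recent = (this.hits.get(tenant) ?? []).filter(t => now - t < this.windowMs);
    if (recent.length >= this.max) {
      this.hits.set(tenant, recent);
      return false; // over the limit -> throttle this request
    }
    recent.push(now);
    this.hits.set(tenant, recent);
    return true;
  }
}

// A lower-tier tenant limited to 3 requests every 5 seconds:
const lowTier = new SlidingWindowLimiter(3, 5000);
```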

Starting Links:

Config: https://github.com/MatterPool/TXQ/blob/master/src/cfg/index.ts

Bootstrap: https://github.com/MatterPool/TXQ/blob/master/src/bootstrap/index.ts

SSE endpoints: https://github.com/MatterPool/TXQ/blob/master/src/api/v1/sse/index.ts

API endpoints: https://github.com/MatterPool/TXQ/tree/master/src/api/v1

Use Cases: https://github.com/MatterPool/TXQ/tree/master/src/services/use_cases

Queue Logic: https://github.com/MatterPool/TXQ/tree/master/src/services/queue

SSE event service: https://github.com/MatterPool/TXQ/tree/master/src/services/event

main 'SaveTxs' Use case: https://github.com/MatterPool/TXQ/blob/master/src/services/use_cases/tx/SaveTxs.ts

main 'status tx' use case: https://github.com/MatterPool/TXQ/blob/master/src/services/use_cases/tx/SyncTxStatus.ts

This 'multi schema' aware TXQ is a dependency for: #8

Search tx's by TAG

Search transactions by providing one or more tags, in this sample "mytag1", "mytag2"

Where the call might be /api/v1/tags/["mytag1","mytag2"]?pretty=1&rawtx=1&limit=3

And the function would be similar to:

  // Find transactions whose txmeta.tags contain all of the supplied tags
  public async getTxsByTag(accountContext: IAccountContext, tags: string | null | undefined, afterId: number, limit: number, rawtx?: boolean): Promise<string[]> {
    const tagsStr = tags ? tags : '[]'; // default to an empty JSON array
    const result = await this.db.getClient(accountContext).query(`
    SELECT txmeta.id, ${rawtx ? 'tx.rawtx,' : ''} tx.txid, i, h, tx.send, status, completed, tx.updated_at, tx.created_at,
    channel, metadata, tags, extracted FROM tx, txmeta
    WHERE id >= $1 AND tx.txid = txmeta.txid
    AND $2::jsonb <@ txmeta.tags
    ORDER BY txmeta.created_at DESC
    LIMIT $3`, [
      afterId, tagsStr, limit
    ]);
    return result.rows;
  }

Thanks

Build "Sister" Queue Executor Process to work in tandem with TXQ instance(s) to handle exponential back-off and retries to mapi (merchantapi)

The goal of the 'Executor' is to handle the single responsibility of broadcasting, retrying and checking status of transactions.

  • Executor is schema-aware for each userdomain
  • Respects exponential back-off rules for each userdomain
  • Works exactly as TXQ now, but the 'Queue' service is basically externalized
  • TXQ must do a POST "enqueue" request to this TXQ-Executor process to ensure that it is received and will become processed. Otherwise the response code to the TXQ client must be '500' (so they retry)
  • When "self-hosted", the TXQ Executor should be launchable easily via npm run, just as it is now.
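The enqueue handshake above can be sketched as a small status-mapping rule (`enqueueAck` and `postEnqueue` are hypothetical names standing in for the real HTTP call, not TXQ's actual code):

```typescript
// Hedged sketch: TXQ only acknowledges the client's request once the
// Executor has accepted the POST "enqueue"; any rejection or failure maps
// to a 500 so the client knows to retry.
function enqueueAck(postEnqueue: () => number): number {
  try {
    // postEnqueue stands in for the HTTP POST "enqueue" to the Executor
    const status = postEnqueue();
    return status === 200 ? 200 : 500; // Executor rejected -> client retries
  } catch {
    return 500; // Executor unreachable -> client retries
  }
}
```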

Todo:

  • Write about security, multiple Executors, handling a subset of userdomains, routing, etc.

[Static] Display transaction activity in a table. User can view all, unconfirmed, and confirmed transactions

blank

txq console

Build into 'TXQ' section at https://console.matterpool.io

Objective:

Allow developers to view their recently broadcasted transactions and to browse by 'all', 'unconfirmed' and 'confirmed'. 'Double Spent' is out of scope.

Benefit to the developer:

Can quickly troubleshoot broadcast problems and see their transaction data history

Acceptance Criteria:

  1. Hardcode static view (fake data) at https://console.matterpool.io/project/YOUR-PROJECT-HERE/txq
  2. When a new filter 'all', 'unconfirmed', 'confirmed' is selected, then the pagination resets to the first page.
  3. Hardcode to showing 100 transactions at a time
  4. Ensure the txid does not overflow (especially on mobile). Consider truncating with CSS if it's too long.

Modal is opened when TXID is clicked: #17

Database constraint idx_uk_txin_prevtxid_previndex error.

Hello, I'm getting this error while trying to restore from a preexisting transaction list (historic). I use nosync:true.
While restoring previous transactions, some of them (6 out of 481) hit the same error, leading to incoherent UTXO balance calculations. Thank you in advance.

{
    "method": "POST",
    "url": "/api/v1/tx",
    "query": {},
    "ip": "::ffff:192.168.1.1",
    "error": {
        "length": 302,
        "name": "error",
        "severity": "ERROR",
        "code": "23505",
        "detail": "Key (prevtxid, previndex)=(0a9e5e2aa21304829a8a618adeab0a67aab14f2985fdda1e6d25e25189efba65, 3) already exists.",
        "schema": "public",
        "table": "txin",
        "constraint": "idx_uk_txin_prevtxid_previndex",
        "file": "nbtinsert.c",
        "line": "563",
        "routine": "_bt_check_unique"
    },
    "stack": "error: duplicate key value violates unique constraint \"idx_uk_txin_prevtxid_previndex\"\n    at Parser.parseErrorMessage (/home/mkpuig/projects/TXQ/node_modules/pg-protocol/dist/parser.js:278:15)\n    at Parser.handlePacket (/home/mkpuig/projects/TXQ/node_modules/pg-protocol/dist/parser.js:126:29)\n    at Parser.parse (/home/mkpuig/projects/TXQ/node_modules/pg-protocol/dist/parser.js:39:38)\n    at Socket.<anonymous> (/home/mkpuig/projects/TXQ/node_modules/pg-protocol/dist/index.js:8:42)\n    at Socket.emit (events.js:314:20)\n    at addChunk (_stream_readable.js:298:12)\n    at readableAddChunk (_stream_readable.js:273:9)\n    at Socket.Readable.push (_stream_readable.js:214:10)\n    at TCP.onStreamRead (internal/stream_base_commons.js:188:23)",
    "level": "error",
    "message": "500",
    "timestamp": "2020-11-20 19:32:56"
}

Issue using `nosync` on transaction submission

Server eventually crashes with error:

{"method":"","url":"","query":"","ip":"","error":{"length":105,"name":"error","severity":"ERROR","code":"42P01","position":"18","file":"parse_relation.c","line":"1191","routine":"parserOpenTable"},"stack":"error: relation \"txsync\" does not exist\n    at Parser.parseErrorMessage (/git/txq/node_modules/pg-protocol/src/parser.ts:357:11)\n    at Parser.handlePacket (/git/txq/node_modules/pg-protocol/src/parser.ts:186:21)\n    at Parser.parse (/git/txq/node_modules/pg-protocol/src/parser.ts:101:30)\n    at Socket.<anonymous> (/git/txq/node_modules/pg-protocol/src/index.ts:7:48)\n    at Socket.emit (events.js:315:20)\n    at addChunk (_stream_readable.js:309:12)\n    at readableAddChunk (_stream_readable.js:284:9)\n    at Socket.Readable.push (_stream_readable.js:223:10)\n    at TCP.onStreamRead (internal/stream_base_commons.js:188:23)","level":"info","message":"Exception","timestamp":"2020-12-30 12:24:42"}

Still trying to figure out where the error is coming from in TXQ.
