
chainhook's People

Contributors

aravindgee, charliec3, csgui, deantchi, diwakergupta, elenananana, hugocaillard, lakshmilavanyakasturi, lgalabru, max-crawford, mefrem, micaiahreid, nnsw3, omahs, qustavo, rafaelcr, ryanwaits, saralab, semantic-release-bot, smcclellan, timstackblock, tippenein


chainhook's Issues

Chainhook always assuming script is from mainnet

When chainhook processes an inscription, it always assumes the owner of the inscription is on mainnet. This means that if you're on testnet or regtest, it displays the incorrect owner of the inscription. Probably low priority, but it would be nice to have a runtime flag or something similar to tell chainhook how the address should be interpreted.

Add sub-commands to "list" and "create" hooks for the observable events in a given contract

As a developer, I would like to list all the possible observable events in my contract so that I can review them and, as a next step, create hook(s) automatically instead of manually interpreting the contract, analyzing what is supported, and creating hooks by hand.

Currently, the clarity-events binary provides the list of events observable for a given contract.

Current behavior:

╰─$ clarity-events --version
clarity-events 1.0.0

╭─sabbyanandan ~/hiro/clarinet/components/clarity-events ‹ruby-2.7.6› ‹develop●›
╰─$ clarity-events scan ./../clarinet-cli/examples/billboard/contracts/billboard.clar
set-message
{"event_type":"transfer_stx_event"}

We could add the additional subcommands in Chainhooks CLI:

╰─$ chainhook list-contract-events --contract-path=./test.clar

Furthermore, if I want to create hooks from the available events, I could review the list and choose one of the events through CLI navigation. Submitting the selection to chainhook would then generate the new hook definition automatically:

╰─$ chainhook predicates new token.json --contract-path=./test.clar
[0] {"event_type":"print","print":{"data_type":{"value":"hello","type":"string"}}}
[1] ...

While we are exploring these options, I would suggest improving the response payload and description. For instance,

╭─sabbyanandan ~/hiro/clarinet/components/clarity-events ‹ruby-2.7.6› ‹develop●›
╰─$ clarity-events scan ./../clarinet-cli/examples/simple-nft/contracts/simple-nft.clar
Error path: [Diagnostic { level: Error, message: "use of undeclared trait ", spans: [], suggestion: Some("traits should be either defined, with define-trait, or imported, with use-trait.") }]

It'd be helpful for me as a developer to understand what I need to do to get around this error.

Chainhook template generation silently fails

The following commands don't generate the templates as documented. No errors, but no files either.

╭─sabbyanandan ~/hiro/chainhook ‹ruby-2.7.6› ‹develop●›
╰─$ chainhook predicates new --stacks sabbysooz

╭─sabbyanandan ~/hiro/chainhook ‹ruby-2.7.6› ‹develop●›
╰─$ chainhook predicates new --bitcoin fooz

Additionally, the new config generation option is missing:

╭─sabbyanandan ~/hiro/chainhook ‹ruby-2.7.6› ‹develop●›
╰─$ chainhook config new --testnet
error: Found argument 'config' which wasn't expected, or isn't valid in this context

USAGE:
    chainhook <SUBCOMMAND>

For more information try --help

Updating Documentation

Will be updating this as I go:

  1. chainhook config new --testnet generates the following mainnet config, named Chainhook.toml rather than Testnet.toml:
[storage]
driver = "redis"
redis_uri = "redis://localhost:6379/"
cache_path = "cache"

[chainhooks]
max_stacks_registrations = 500
max_bitcoin_registrations = 500

[network]
mode = "mainnet"
bitcoind_rpc_url = "http://localhost:8332"
bitcoind_rpc_username = "devnet"
bitcoind_rpc_password = "devnet"
stacks_node_rpc_url = "http://localhost:20443"

[[event_source]]
tsv_file_url = "https://archive.hiro.so/mainnet/stacks-blockchain-api/mainnet-stacks-blockchain-api-latest.gz"
  2. If you enter the command chainhook predicates new hello-ordinals.json --bitcoin, it outputs the following predicate:
{
  "chain": "bitcoin",
  "uuid": "3020128f-7fe5-43e8-8160-0722beb143da",
  "name": "Hello world",
  "version": 1,
  "networks": {
    "mainnet": {
      "start_block": 0,
      "end_block": 100,
      "if_this": {
        "scope": "ordinals_protocol",
        "operation": "inscription_feed"
      },
      "then_that": {
        "file_append": {
          "path": "ordinals.txt"
        }
      }
    }
  }
}

I would argue this doesn't really help a user learn anything. The documentation that follows is generalized predicate documentation that does not reference the generated hello-ordinals.json predicate.

My suggestion would be to generate two different predicates: the first, for --bitcoin, would be a basic mainnet predicate that includes something like looking for a specific tx or for txs from a specific address. Then, in the testnet section of the documentation, have a different command, chainhook predicates new hello-ordinals.json --testnet, which would look like this:

{
  "chain": "bitcoin",
  "uuid": "602975ef-b1ed-49c7-8ab8-a0633a57a133",
  "name": "Hello world",
  "version": 1,
  "networks": {
    "testnet": {
      "start_block": 2413343,
      "end_block": 2453343
      "if_this": {
        "scope": "ordinals_protocol",
        "operation": "inscription_feed"
      },
      "then_that": {
        "file_append": {
          "path": "ordinals.txt"
        }
      }
    }
  }
}

This will at least surface some ordinal transactions rather than none, because it starts at the first ordinal block and continues for 40,000 blocks. The values can be adjusted accordingly.

Running chainhook predicates scan ./path/to/predicate.json --testnet: I don't think it's actually generating the following config:

[storage]
driver = "memory"

[chainhooks]
max_stacks_registrations = 500
max_bitcoin_registrations = 500

[network]
mode = "testnet"
bitcoind_rpc_url = "http://0.0.0.0:18332"
bitcoind_rpc_username = "testnet"
bitcoind_rpc_password = "testnet"
stacks_node_rpc_url = "http://0.0.0.0:20443"

Because when I try to connect to the bitcoind RPC node, I'm getting an incorrect password error.
Nor does this appear to work with chainhook service start --testnet.

  3. There's no mention of Redis anywhere, even though running chainhook config new --testnet gives you a config that includes Redis.
    When it comes to running the server, I'd suggest adding a note about Docker and the following docker command:
    docker run --name chainhook-redis -p 6379:6379 -d redis
    It's very easy and sets Redis up.

To get the chainhook service to stream blocks from bitcoind (from @lgalabru):
You'll need to enable zmq in bitcoind, which can be done by adding the line:
zmqpubhashblock=tcp://0.0.0.0:18543

to your bitcoind conf file.
Then you'll also have to add the following line in the [network] node of your Chainhook.toml:
bitcoind_zmq_url = "tcp://localhost:18543"
With that in place, start chainhook as a service with the command:
$ chainhook service start --start-http-api --config-path=./path/to/config.toml
or
$ chainhook service start --predicate-path=./path/to/predicate-1.json --config-path=./path/to/config.toml

if you already know that you will be working with a fixed set of predicates.
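Putting those pieces together, here is a minimal sketch of the relevant Chainhook.toml sections for streaming from a local bitcoind with zmq enabled. The values are assumptions pulled from the snippets above; adjust ports and credentials to your own node:

[network]
mode = "testnet"
bitcoind_rpc_url = "http://localhost:18332"        # assumed; match your bitcoind rpcport
bitcoind_rpc_username = "testnet"                  # assumed credentials
bitcoind_rpc_password = "testnet"
bitcoind_zmq_url = "tcp://localhost:18543"         # must match zmqpubhashblock in bitcoind.conf
stacks_node_rpc_url = "http://localhost:20443"

Then start the service:

$ chainhook service start --start-http-api --config-path=./Chainhook.toml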

SPIKE: Consolidate storage strategy [1 week]

Relying on Redis appears to be a non-viable approach for chainhook, given the ordinals experience (pulling witness data increases block sizes a lot).
We need to revisit the approach; at this point we are basically evaluating Postgres and TiKV.

Add unit tests for predicates documented in README

When trying to get all contract deployments using the * value, no hits are found. Testing other predicates, e.g. the arkadiko predicate, works as expected.

Here is the predicate:

{
  "chain": "stacks",
  "uuid": "1",
  "name": "Contract deploys",
  "version": 1,
  "networks": {
    "mainnet": {
      "if_this": {
        "scope": "contract_deployment",
        "deployer": "*"
      },
      "then_that": {
        "http_post": {
          "url": "http://127.0.0.1:8787/api/v1/chainhooks",
          "authorization_header": "Bearer cn389ncoiwuencr"
        }
      },
      "start_block": 0
    }
  }
}
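For comparison, a sketch of the kind of arkadiko print_event predicate that does produce hits. The scope, contract identifier, and contains filter are borrowed from the generated example further down this page; the uuid, name, and start_block are placeholders:

{
  "chain": "stacks",
  "uuid": "2",
  "name": "Arkadiko vault events",
  "version": 1,
  "networks": {
    "mainnet": {
      "if_this": {
        "scope": "print_event",
        "contract_identifier": "SP2C2YFP12AJZB4MABJBAJ55XECVS7E4PMMZ89YZR.arkadiko-freddie-v1-1",
        "contains": "vault"
      },
      "then_that": {
        "http_post": {
          "url": "http://127.0.0.1:8787/api/v1/chainhooks",
          "authorization_header": "Bearer cn389ncoiwuencr"
        }
      },
      "start_block": 0
    }
  }
}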

Capacity Overflow

Currently seeing this in the staging testnet deployment when starting up the chainhook node:

{"msg":"Starting service...","level":"INFO","ts":"2023-05-06T18:55:05.666357767Z"}
{"msg":"Ordinal indexing is enabled by default hord, checking index... (use --no-hord to disable ordinals)","level":"INFO","ts":"2023-05-06T18:55:05.666394322Z"}
{"msg":"Resuming hord indexing from block #2431666","level":"INFO","ts":"2023-05-06T18:55:05.708885751Z"}
{"msg":"Syncing hord_db: 305 blocks to download (2431666: 2431970), using 8 network threads","level":"INFO","ts":"2023-05-06T18:55:05.708916647Z"}
{"msg":"Queueing compacted block #2431672","level":"INFO","ts":"2023-05-06T18:55:05.850754687Z"}
{"msg":"Queueing compacted block #2431670","level":"INFO","ts":"2023-05-06T18:55:05.94351381Z"}
{"msg":"Queueing compacted block #2431668","level":"INFO","ts":"2023-05-06T18:55:06.043775254Z"}
{"msg":"Queueing compacted block #2431667","level":"INFO","ts":"2023-05-06T18:55:06.136369865Z"}
{"msg":"Queueing compacted block #2431666","level":"INFO","ts":"2023-05-06T18:55:06.218311855Z"}
{"msg":"Dequeuing block #2431666 for processing (# blocks inboxed: 4)","level":"INFO","ts":"2023-05-06T18:55:06.218346489Z"}
{"msg":"Computing ordinal number for Satoshi point 0x21e145032bcde0f118c486f6194cc1af41160fb5da6e1a9dcda079e40f1c624a:0:0 (block #2431666)","level":"INFO","ts":"2023-05-06T18:55:06.287842278Z"}
thread '<unnamed>' panicked at 'capacity overflow', library/alloc/src/raw_vec.rs:518:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

Does chainhook have support for dogecoin?

I don't think this is the right channel to talk about this, but I haven't found any other.

The Dogecoin network (originally based on Litecoin, which in turn was based on Bitcoin) is growing a lot, even more so in recent weeks with ordinals (called doginals/drc-20). At the infrastructure/tooling level it is very immature (for Dogecoin in general, not only for ordinals).

I was wondering if this indexer is or will be compatible with Dogecoin.

If it's compatible, or could be made compatible with a few small changes, I think you have a great market to grow into. And what about ordinals-explorer and ordinals-api? Are they supported?

Maybe I can help but I don't know where to start.

Thanks 👍

mainnet doesn't work for predicate scanning

When running chainhook predicates scan ./path/to/predicate.json --mainnet, it errors with Network unknown.

We need this functionality to continue work on the new stacking-focused API being built.

Can I run Chainhook without a Stacks node?

Hi,
Is there any way to run chainhook without a Stacks node? I tried running chainhook with the bitcoind URL set for both stacks_node_rpc_url and bitcoind_rpc_url, but I get some errors.
This is my config:

[storage]
driver = "redis"
redis_uri = "redis://localhost:6379/"
cache_path = "cache"

[http_api]
http_port = 20456
database_uri = "redis://localhost:6379/"

[chainhooks]
max_stacks_registrations = 500
max_bitcoin_registrations = 500

[network]
mode = "mainnet"
bitcoind_rpc_url = "http://127.0.0.1:19333"
bitcoind_rpc_username = "bitcoin"
bitcoind_rpc_password = "bitcoin12345!"
stacks_node_rpc_url = "http://0.0.0.0:19333"

[limits]
max_number_of_bitcoin_predicates = 100
max_number_of_concurrent_bitcoin_scans = 100
max_number_of_stacks_predicates = 10
max_number_of_concurrent_stacks_scans = 10
max_number_of_processing_threads = 16
max_number_of_networking_threads = 16
max_caching_memory_size_mb = 32000

and my predicate:
{
  "chain": "bitcoin",
  "uuid": "1",
  "name": "Wrap BTC",
  "version": 1,
  "networks": {
    "mainnet": {
      "if_this": {
        "scope": "ordinals_protocol",
        "operation": "inscription_feed"
      },
      "then_that": {
        "http_post": {
          "url": "http://0.0.0.0:3000",
          "authorization_header": "Bearer cn389ncoiwuencr"
        }
      },
      "start_block": 790000
    }
  }
}

Here are some of the errors:

Jun 13 02:04:03.016 INFO Unable to retrieve cached inscription data for inscription 0x027d20a9ef2f1bf046544d06f202d28cb60258a4b2082039418ae80f337d3c29
Jun 13 02:04:03.016 INFO Unable to retrieve cached inscription data for inscription 0x7c730184a3b9a0964e198cf774877471545680959f58b9c52ae54d220211a529
Jun 13 02:04:03.016 INFO Unable to retrieve cached inscription data for inscription 0x3d1a1da56b3ac8b701e38298d9737ea9f8982a694d6b195c05869321b821b929
Jun 13 02:04:03.016 ERRO slog-async: logger dropped messages due to channel overflow, count: 27
Jun 13 02:04:03.016 INFO Unable to retrieve cached inscription data for inscription 0xedf1bc3d77297bb7689480ce1b0b217e6d85276665497a72550f0bcfac012b44
Jun 13 02:04:03.016 ERRO slog-async: logger dropped messages due to channel overflow, count: 42
Jun 13 02:04:03.016 INFO Unable to retrieve cached inscription data for inscription 0xa1564334a9107560a4ce236a623ecc06ef3a10b67042762d2e74546fe66a6456
Jun 13 02:04:03.016 INFO Unable to retrieve cached inscription data for inscription 0x490cd4cfc36cca789bcb1529516d8506133e21d42dc7cb94bf316f20b68cfe62
Jun 13 02:04:03.016 ERRO slog-async: logger dropped messages due to channel overflow, count: 69
Jun 13 02:04:03.016 ERRO slog-async: logger dropped messages due to channel overflow, count: 29
Jun 13 02:04:03.016 ERRO slog-async: logger dropped messages due to channel overflow, count: 29

Thank you so much for your help. I really appreciate it

data dump for mainnet seems incorrect

The data dump that chainhook pulls from is ~2 months old, and it uses a different URL structure than what is found at https://archive.hiro.so/:

Feb 19 23:27:39.091 INFO Downloading https://storage.googleapis.com/hirosystems-archive/mainnet/api/mainnet-blockchain-api-latest.tar.gz

chainhook start error

control_port = 20446
ingestion_port = 20445

Chainhook did not listen on these ports after it was started. What is the problem?

Recommended number of network threads

Hey @lgalabru, thanks for the help on the other issue. Upgrading to Bitcoin Core v24 seems to work.
I had a quick question: what's the recommended number of network threads for the chainhook hord db sync command? By default it looks to be 8, but I was wondering if setting it higher can cause any issues.

http_post doesn't seem to fire

running this command:

 chainhook predicates scan ./chainhooks/stx-lock.json --mainnet

with this predicate:

{
  "chain": "stacks",
  "uuid": "1",
  "name": "STX lock events",
  "version": 1,
  "networks": {
    "mainnet": {
      "if_this": {
        "scope": "stx_event",
        "actions": ["lock"]
      },
      "then_that": {
        "http_post": {
          "url": "http://localhost:8787/api/v1/chainhooks",
          "authorization_header": "Bearer cn389ncoiwuencr"
        }
      },
      "start_block": 0,
      "end_block": 32282
    }
  }
}

gives this output:

...
Hit at block 32117
Hit at block 32143
Hit at block 32144
Hit at block 32146
Hit at block 32173
Hit at block 32178
Hit at block 32180
Hit at block 32182
Hit at block 32192
Hit at block 32201
Hit at block 32215
Hit at block 32244
Hit at block 32246
Hit at block 32249
Hit at block 32254
Hit at block 32282

but nothing is POSTed to the URL provided (the URL has been tested and works correctly).

Getting-started steps can be improved

While using 0.13.0, I encountered the following, so I'm documenting my observations here.

1: Creating a "testnet" config still defaults to "mainnet" as the mode in the generated file.

╰─$ chainhook config new --testnet
Created file Chainhook.toml

Result:

[storage]
driver = "redis"
redis_uri = "redis://localhost:6379/"
cache_path = "cache"

[chainhooks]
max_stacks_registrations = 500
max_bitcoin_registrations = 500

[network]
mode = "mainnet"
bitcoind_rpc_url = "http://localhost:8332"
bitcoind_rpc_username = "devnet"
bitcoind_rpc_password = "devnet"
stacks_node_rpc_url = "http://localhost:20443"

[[event_source]]
tsv_file_url = "https://archive.hiro.so/mainnet/stacks-blockchain-api/mainnet-stacks-blockchain-api-latest.gz"

2: Remove $ from copyable commands in the markdown. Devs currently have to explicitly remove the $ for the pasted command to work.

3: config -> config-path:

The command has evolved, so the docs and guides need an update. The statement below is also missing an extra - before config-path.

Developers can customize their Bitcoin node's credentials and network address by adding the flag -config=/path/to/config.toml.

╰─$ chainhook predicates scan ./hello-ordinals.json --testnet --config=./btcconfig.toml
error: Found argument '--config' which wasn't expected, or isn't valid in this context

	Did you mean '--config-path'?

4: Given --devnet is not an option, can we error on chainhook config new --devnet?

5: Given there is no "devnet", why are we defaulting the generated config to devnet/devnet as the user/pass for node/GUI access? That could be confusing to end users.

[network]
mode = "mainnet"
bitcoind_rpc_url = "http://localhost:8332"
bitcoind_rpc_username = "devnet"
bitcoind_rpc_password = "devnet"
stacks_node_rpc_url = "http://localhost:20443"

6: The command chainhook predicates scan ./path/to/predicate.json --testnet will only work if the bitcoind node is accessible at the RPC location at http://localhost:8332, which wasn't clear to me from the error message. Perhaps move the customizing command above and explicitly mention that the default URL points to http://localhost:8332 upfront, so users can customize it before running the command.

╰─$ chainhook predicates scan ./hello-ordinals.json --testnet
unable to retrieve Bitcoin chain tip (JSON-RPC error: transport error: Couldn't connect to host: Connection refused (os error 61))

Alternatively, we could improve the above error message to include the actual RPC URL chainhook tried to access. The boilerplate message plus those additional details would be helpful to end users.
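For reference, a minimal sketch of the workflow implied by points 3 and 6 above: point the scan at a config whose [network] block matches your own bitcoind. The credentials and ports here are assumptions; copy them from your bitcoin.conf.

[storage]
driver = "memory"

[network]
mode = "testnet"
bitcoind_rpc_url = "http://localhost:18332"   # assumed; use your node's rpcport
bitcoind_rpc_username = "your-rpc-user"       # assumed credentials
bitcoind_rpc_password = "your-rpc-pass"
stacks_node_rpc_url = "http://localhost:20443"

╰─$ chainhook predicates scan ./hello-ordinals.json --testnet --config-path=./btcconfig.toml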

The node does not have a listening port and has a panic issue

Initially it ran normally, but after running for a period of time it always failed with this error.

config.toml

[storage]
driver = "memory"
redis_uri = ""

[chainhooks]
max_stacks_registrations = 500
max_bitcoin_registrations = 500

[network]
mode = "mainnet"
bitcoind_rpc_url = "http://127.0.0.1:8332"
bitcoind_rpc_username = "uu1"
bitcoind_rpc_password = "iou1dq"
stacks_node_rpc_url = "http://127.0.0.1:20443"

run-cmd
chainhook service start --config-path=./config.toml

err-log

thread '<unnamed>' panicked at 'called `Option::unwrap()` on a `None` value', components/chainhook-event-observer/src/hord/db/mod.rs:1230:36
(several threads panic at the same location; their output is interleaved in the original log)

ERRO Unable to augment bitcoin block

$chainhook service start --predicate-path=./predicate.json --start-http-api --config-path=./Chainhook.toml
Apr 28 05:35:47.629 INFO Starting service...
Apr 28 05:35:47.629 INFO Ordinal indexing is enabled by default hord, checking index... (use --no-hord to disable ordinals)
Apr 28 05:35:47.639 INFO Resuming hord indexing from block #774938
Apr 28 05:35:47.639 INFO Syncing hord_db: 12377 blocks to download (774938: 787314), using 8 network threads
Apr 28 05:35:48.183 INFO Queueing compacted block #774945
Apr 28 05:35:48.561 INFO Queueing compacted block #774940
Apr 28 05:35:48.872 INFO Queueing compacted block #774939
Apr 28 05:35:49.205 INFO Queueing compacted block #774943
Apr 28 05:35:49.623 INFO Queueing compacted block #774941
Apr 28 05:35:50.020 INFO Queueing compacted block #774942
Apr 28 05:35:50.344 INFO Queueing compacted block #774944
Apr 28 05:35:50.639 INFO Queueing compacted block #774938
Apr 28 05:35:50.639 INFO Dequeuing block #774938 for processing (# blocks inboxed: 7)
Apr 28 05:35:50.725 INFO Computing ordinal number for Satoshi point 0x93a622fb1b8579291f7e3569d95dbb618de20a6ce2494b949ff9408642342943:0:0 (block #774938)
Apr 28 05:35:50.725 INFO Computing ordinal number for Satoshi point 0x68f2a04ac37b994d59aaa0b590153d492933913b3d60b5901bc23248545d2c3a:0:0 (block #774938)
Apr 28 05:35:50.725 INFO Computing ordinal number for Satoshi point 0xc08724d8d864f27551771c2885e85cdfbf6c7ac922514ed5495f7cff02d0f2e9:0:0 (block #774938)
Apr 28 05:35:50.726 INFO Computing ordinal number for Satoshi point 0xecdeb0a72c2bb77f36908f72b63d27e032811440ea4bcea8d936727bd9bf743c:0:0 (block #774938)
Apr 28 05:35:50.726 INFO Computing ordinal number for Satoshi point 0xc5afde7345f89cf541b1ee51bf8c93d3ea28c0841680f3fdf8c254d95b64ab04:0:0 (block #774938)
Apr 28 05:35:50.727 INFO Computing ordinal number for Satoshi point 0x078be4a73a490d0adb8a173142c32eba552defc6bedd344cfe14aa51481a8013:0:0 (block #774938)
Apr 28 05:35:50.727 INFO Computing ordinal number for Satoshi point 0x12e749dbd1af412f508ada404656a4bb84422ac8d3afa10ae1716f40f4eae79b:0:0 (block #774938)
Apr 28 05:35:50.727 INFO Computing ordinal number for Satoshi point 0x6c0000402818c2308fdad8fec3d229775b3adb3f77b1427397cdf4f6505bab96:0:0 (block #774938)
Apr 28 05:35:50.727 INFO Computing ordinal number for Satoshi point 0x1e9b44e2495c96045beb881d1b75d3459f52b2371efdd3d11fc46513f807098c:0:0 (block #774938)
Apr 28 05:35:50.727 INFO Computing ordinal number for Satoshi point 0xa67c14aa8e8437262aee06d55bc63d8e2001e3027ad1bccd7ca1a5d19c01b935:0:0 (block #774938)
Apr 28 05:35:50.934 INFO Computing ordinal number for Satoshi point 0x0dd65cb9dfa10d672c16e3741d73eead9085a710ff5f8796ef626799c85f944b:0:0 (block #774938)
Apr 28 05:35:50.934 INFO Computing ordinal number for Satoshi point 0xd258fc0295a6a98bcff37f8eef6d771b65f3a69e2be2cc1c34902c0da973b5c0:0:0 (block #774938)
Apr 28 05:35:50.945 INFO Computing ordinal number for Satoshi point 0xd7a14db2c85c9cece840d44cb11b4f57d5597a7dc09b8d9b357f48762e29c84f:0:0 (block #774938)
Apr 28 05:35:50.948 INFO Computing ordinal number for Satoshi point 0x686390089336addcec25b597a5573f0ec6be39579bede35be9f92c25ad06f6f7:0:0 (block #774938)
Apr 28 05:35:50.951 INFO Computing ordinal number for Satoshi point 0x970cc5dc3c09f323d6a4e0cab52bb254898b036e4a72197278bb30391fb01270:0:0 (block #774938)
Apr 28 05:35:50.952 INFO Computing ordinal number for Satoshi point 0x010c250cadaff06f86505a4324cb1ef3c027470ad3a9fdedec25a427457f946b:0:0 (block #774938)
Apr 28 05:35:51.044 INFO Computing ordinal number for Satoshi point 0x1a94768b175a05eb09201c3b8435c4e6a8691b769848600d408a72712aff318d:0:0 (block #774938)
Apr 28 05:35:51.080 INFO Computing ordinal number for Satoshi point 0x9c8c2e3b184b69b0ec4162cb7d438d63337a39792bba7d59d335f988471306fa:0:0 (block #774938)
Apr 28 05:35:51.082 INFO Computing ordinal number for Satoshi point 0xee21b5240619eab16b498afbcefb673285d5e39f3a597b4a6a54a34dec274a3a:0:0 (block #774938)
Apr 28 05:35:51.146 INFO Computing ordinal number for Satoshi point 0x7bf325e512965080d2fa94c9c1206b2d2925fefb14c5ffc70e04e2d0d12cc04d:0:0 (block #774938)
Apr 28 05:35:51.214 INFO Computing ordinal number for Satoshi point 0x4fc272fa0496c3e084c2ef65adfa5870507bf5fb1172dec2e4e2063d2a6bcf03:0:0 (block #774938)
Apr 28 05:35:51.437 INFO Computing ordinal number for Satoshi point 0x969734270483f95aab4ed2988a96afa11cadeaa372404b42cc32ebffb68c67d7:0:0 (block #774938)
Apr 28 05:35:51.524 INFO Computing ordinal number for Satoshi point 0x53f06b0bf1aa83d51b9d08e85fa952efd5792d79e81ee2d08003c550c3773121:0:0 (block #774938)
Apr 28 05:35:51.921 INFO Computing ordinal number for Satoshi point 0xa19af1b3646f6f5fe164b63db1fd34cf2c14267216620ddbe14572f0dbb513ca:0:0 (block #774938)
Apr 28 05:35:52.084 INFO Computing ordinal number for Satoshi point 0x12c49bbfdbd555f5232bb7aa344f998ef8840019fe4c72e4017a38bf5dcaac4e:0:0 (block #774938)
Apr 28 05:35:54.604 INFO Computing ordinal number for Satoshi point 0x29eee78e1de8a6c10aa85aa79e2ab47ab0481964f7856ab39425811656d4a757:0:0 (block #774938)
Apr 28 05:35:54.938 INFO Computing ordinal number for Satoshi point 0xad274a98ac993c65c20ed23f25b9854362d98659b063e6e086de82c27f5576de:0:0 (block #774938)
Apr 28 05:35:55.139 INFO Computing ordinal number for Satoshi point 0xcbd911c0d1fbf264b1a380ad6efdf9161a6eb130064e59714f462f368698f4cd:0:0 (block #774938)
Apr 28 05:35:56.240 INFO Computing ordinal number for Satoshi point 0x6812087e29167fe96141e73d4cb24e75246b379e3dfa546d191fd786695a42db:0:0 (block #774938)
Apr 28 05:36:12.646 ERRO Unable to augment bitcoin block 774938 with hord_db: block #728057 not in database
block #728057 not in database

When I restart, it fails at the same block, but with a different Satoshi point and a message that some other block is not in the database.

Outdated docker image

Hey
I would like to run the chainhook node as part of my kubernetes cluster.
Firstly, is this recommended?
And secondly, it looks like the Docker image here is outdated; the last update was 3 months ago.

Revisit ImplementTrait Predicate spec

    "if_this": {
        "scope": "contract_deployment",
        "implement_trait": "ST1PQHQKV0RJXZFY1DGX8MNSNYVE3VGZJSRTPGZGM.sip09-protocol"
    },

should probably be

    "if_this": {
        "scope": "contract_deployment",
        "implement_traits": ["sip09", "ST1PQHQKV0RJXZFY1DGX8MNSNYVE3VGZJSRTPGZGM.my-custom-trait"]
    },

Cannot install from source, module `storage` not found

When following the instructions in the readme, cargo chainhook-install fails with:

error[E0583]: file not found for module `storage`
  --> components/chainhook-cli/src/main.rs:18:1
   |
18 | pub mod storage;
   | ^^^^^^^^^^^^^^^^
   |
   = help: to create the module `storage`, create file "components/chainhook-cli/src/storage.rs" or "components/chainhook-cli/src/storage/mod.rs"

Additional actions: `http_head`, `kafka_publish`

Today, a chainhook node can notify a remote observer using an http_post action.
Based on feedback, I think we need to add one of 2 new actions:

  • http_head: signal that a new payload is available for fetch (implement this first).
  • kafka_publish: publish the message to a Kafka instance, that will make sure that the payload is not getting lost.

This decreases the payload size of chainhook messages, improving how chainhook scales from a networking perspective.
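To make the proposal concrete, here is a rough sketch of how such actions could appear in a predicate's then_that section, following the shape of the existing http_post and file_append actions. The field names are hypothetical, since neither action exists yet:

"then_that": {
  "http_head": {
    "url": "http://localhost:8787/api/v1/chainhooks"
  }
}

"then_that": {
  "kafka_publish": {
    "brokers": ["localhost:9092"],
    "topic": "chainhook-payloads"
  }
}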

Refresh `arkadiko-data-indexing` example with latest commands/subcommands

As a user, I wanted to get a sense of the DX through examples/arkadiko-data-indexing; after the local environment setup (Ruby, Rails, missing gems, Redis, builds, etc.), I am stuck at posting predicates against the locally running chainhook node.

curl -X "POST" "http://0.0.0.0:20456/v1/chainhooks/" \
     -H 'Content-Type: application/json' \
     -d $'{
  "stacks": {
    "predicate": {
      "type": "print_event",
      "rule": {
        "contains": "vault",
        "contract_identifier": "SP2C2YFP12AJZB4MABJBAJ55XECVS7E4PMMZ89YZR.arkadiko-freddie-v1-1"
      }
    },
    "action": {
      "http": {
        "url": "http://localhost:3000/chainhooks/v1/vaults",
        "method": "POST",
        "authorization_header": "Bearer cn389ncoiwuencr"
      }
    },
    "uuid": "1",
    "decode_clarity_values": true,
    "version": 1,
    "name": "Vault events observer",
    "network": "mainnet"                                                                                                                                    <....
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="utf-8">
    <title>422 Unprocessable Entity</title>
</head>
<body align="center">
    <div role="main" align="center">
        <h1>422: Unprocessable Entity</h1>
        <p>The request was well-formed but was unable to be followed due to semantic errors.</p>
        <hr />
    </div>
    <div role="contentinfo" align="center">
        <small>Rocket</small>
    </div>
</body>
</html>%

I do have the Rails app, Redis, and the chainhook node running at http://0.0.0.0:20456/v1/chainhooks.


Ability to use signaling coming from bitcoind

As of today, when running as a node, the chainhook node knows that a bitcoin block was mined thanks to the signaling coming from a stacks-node.
We want to support bitcoind signaling as well (leveraging bitcoind's baked-in ZeroMQ integration), to simplify the stack when a team is only working with Bitcoin.

additional stx predicate ideas

Thinking through some additional predicates I could see being useful:

  • new block (basically so folks can create general purpose indexers)
  • new microblock
  • new mempool transaction
  • new transaction
  • contract call (for a given contract, with optional params for function); a rough shape is sketched after this list

Will add more as I think of them.
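For the contract call idea above, a purely illustrative sketch of what the if_this block might look like, modeled on the existing print_event and contract_deployment scopes. The scope name and fields are hypothetical, since this predicate does not exist yet:

"if_this": {
  "scope": "contract_call",
  "contract_identifier": "SP2C2YFP12AJZB4MABJBAJ55XECVS7E4PMMZ89YZR.arkadiko-freddie-v1-1",
  "method": "some-public-function"
}

The method name is only an example; an optional params filter could be added along the same lines.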

422: Unprocessable entity

Hello,
I'm running two Docker containers, one in read-only mode and one in write-only mode,
but I keep getting this error when starting the writer:
{"level":"error","time":"2023-06-03T19:32:58.870Z","pid":1,"hostname":"ordinals-api-writer-69f9cfd48f-2vd4p","name":"ordinals-api","err":{"type":"ResponseStatusCodeError","message":"Response status code 422: Unprocessable Entity","stack":"ResponseStatusCodeError: Response status code 422: Unprocessable Entity\n at getResolveErrorBodyCallback (/app/node_modules/undici/lib/api/api-request.js:172:34)\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)","name":"ResponseStatusCodeError","code":"UND_ERR_RESPONSE_STATUS_CODE","body":"<!DOCTYPE html>\n<html lang=\"en\">\n<head>\n <meta charset=\"utf-8\">\n <title>422 Unprocessable Entity</title>\n</head>\n<body align=\"center\">\n <div role=\"main\" align=\"center\">\n <h1>422: Unprocessable Entity</h1>\n <p>The request was well-formed but was unable to be followed due to semantic errors.</p>\n <hr />\n </div>\n <div role=\"contentinfo\" align=\"center\">\n <small>Rocket</small>\n </div>\n</body>\n</html>","status":422,"statusCode":422,"headers":{"content-type":"text/html; charset=utf-8","server":"Rocket","x-content-type-options":"nosniff","permissions-policy":"interest-cohort=()","x-frame-options":"SAMEORIGIN","content-length":"444","date":"Sat, 03 Jun 2023 19:32:58 GMT"}},"msg":"EventServer unable to register predicate"}

How to do it?

Sorry, I am a BTC novice. I have already installed the project following the normal steps. Now I need to get BTC ordinals information in real time in order to deploy this repository: https://github.com/hirosystems/ordinals-api

What should I do? If you have time, I hope you can give me a detailed answer. Thank you!

docker image build error

build list:
chainhook-event-observer.dockerfile
chainhook-node.dockerfile

I'm trying to build them, but I get the following error:
#18 0.307 cp: cannot stat 'target/release/chainhook-event-observer': No such file or directory

Secondly, I don't understand the relationship between the two. I want to build ordinals-api on top of chainhook, and I don't understand which one is the service I should use.

Chainhook breaks when streaming zmq from bitcoind if bitcoind streams txs in addition to blocks

Steps to reproduce:

Start chainhook with the following configuration:

...
[network]
bitcoind_zmq_url = "tcp://0.0.0.0:30001"
...
// rest of config

Start a bitcoin node with bitcoind -zmqpubhashblock=tcp://0.0.0.0:30001 -zmqpubhashtx=tcp://0.0.0.0:30001

If you then send a transaction on the node, chainhook will break with unable to find block XYZ, where XYZ is the transaction ID.

If you don't have -zmqpubhashtx=tcp://0.0.0.0:30001, it works just fine as long as you've enabled zeromq in the chainhook binary.

The fix for this is subscribing only to the hashblock event:

components/chainhook-event-observer/src/observer/mod.rs
line 436
socket.subscribe("hashblock").await?;
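In the meantime, a workaround consistent with the behavior described above is to have bitcoind publish only block hashes over zmq, dropping the tx topic (the endpoint address is the one used in this report):

bitcoind -zmqpubhashblock=tcp://0.0.0.0:30001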

Predicate generation for stacks generates simnet config instead of testnet

The command

chainhook predicates new --stacks stacks1.json

generates the following:

{
  "chain": "stacks",
  "uuid": "1adac3cc-760f-4622-9625-b5246a44ec77",
  "name": "Hello world",
  "version": 1,
  "networks": {
    "simnet": {
      "start_block": 0,
      "end_block": 100,
      "if_this": {
        "scope": "print_event",
        "contract_identifier": "ST1SVA0SST0EDT4MFYGWGP6GNSXMMQJDVP1G8QTTC.arkadiko-freddie-v1-1",
        "contains": "vault"
      },
      "then_that": {
        "file_append": {
          "path": "arkadiko.txt"
        }
      }
    },
    "mainnet": {
      "start_block": 0,
      "end_block": 100,
      "if_this": {
        "scope": "print_event",
        "contract_identifier": "SP2C2YFP12AJZB4MABJBAJ55XECVS7E4PMMZ89YZR.arkadiko-freddie-v1-1",
        "contains": "vault"
      },
      "then_that": {
        "file_append": {
          "path": "arkadiko.txt"
        }
      }
    }
  }
}

Log should be warning instead of error

The following log is printed as an error, but should be reclassified as a warning:

Unable to process Stacks Block #107415 (0x0b60...60cb) - inboxed for later

Perhaps if the block fails processing a certain number of times after being inboxed, an error could then be printed.

Create HTTP status page

In order to generate availability metrics, it would be useful to have a status page we can periodically hit. The status page should return an HTTP 200 if the service is live and serving traffic.
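As a sketch of the intended behavior, a monitoring job could probe an endpoint along these lines; the /ping path is hypothetical, and the port is assumed to be the http_api port from the config shown elsewhere on this page:

$ curl -i http://localhost:20456/ping
HTTP/1.1 200 OK          # expected when the service is live and serving traffic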
