
dna's Introduction


Apibara Direct Node Access and SDK

This repository contains the canonical implementation of the Apibara Direct Node Access (DNA) protocol and the integrations built on top of it.
The protocol enables developers to easily and efficiently stream any onchain data directly into their application.

Stream

Stream data directly from the node into your application. DNA enables you to get exactly the data you need using filters. Filters are a collection of rules that are applied to each block to select what data to send in the stream.
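For illustration, here is a sketch of a filter that selects block headers plus events emitted by a single contract (field names follow the StarkNet filter examples used later in this page; the address and key values are placeholders):

// Sketch: select block headers plus events emitted by one contract.
// The address and event key below are placeholders, not real values.
const filter = {
  header: { weak: true },
  events: [
    {
      fromAddress: "0x0...", // contract to watch (placeholder)
      keys: ["0x0..."],      // event selector (placeholder)
    },
  ],
};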

Transform

Data is transformed by evaluating a JavaScript or TypeScript function on each batch of data. The resulting data is automatically sent to and synced with the target integration.
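As a sketch, a transform that flattens each event in a block into a record could look like this (the exact block shape depends on the configured filter):

// Sketch: map every event in the block to a flat record.
export default function transform({ header, events }) {
  return events.map(({ event }) => ({
    blockNumber: header.blockNumber,
    address: event.fromAddress,
    data: event.data,
  }));
}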

Integrate

We provide a collection of integrations that send data directly where it's needed. Our integrations support all data from the genesis block to the current pending block, and they ensure data is kept up-to-date.

  • Webhook: call an HTTP endpoint for each batch of data.
  • PostgreSQL: stream data into a specific table, keeping it up-to-date on new blocks and chain reorganizations.
  • MongoDB: store data into a specific collection, keeping it up-to-date on new blocks and chain reorganizations.
  • Parquet: generate Parquet files to be used for data analysis.
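A minimal indexer script selects one of these integrations through sinkType and sinkOptions, as in this sketch (the sink option names here are illustrative and may not match the actual options):

export const config = {
  streamUrl: "https://mainnet.starknet.a5a.ch",
  startingBlock: 1_000,
  network: "starknet",
  filter: { header: {} },
  sinkType: "webhook",
  sinkOptions: {
    // illustrative option name
    targetUrl: "https://example.com/my-endpoint",
  },
};

export default function transform(block) {
  return block;
}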

Getting started

You can get started using Apibara by installing the official CLI tool. We provide detailed instructions in the official documentation.

Docker images

We publish docker images on quay.io. Images are available for both the x86_64 and aarch64 architectures.


Contributing

We are open to contributions.

  • Read the CONTRIBUTING.md guide to learn more about the process.
  • Some contributions are rewarded on OnlyDust. If you're interested in paid contributions, get in touch before submitting a PR.

Development

Apibara DNA is developed against stable Rust. We provide a nix environment to simplify installing all dependencies required by the project.

  • if you have nix installed, simply run nix develop.
  • if you don't have nix installed, you should install Rust using your favorite tool.

Platform Support

Tier 1

These platforms are tested against every pull request.

  • linux-x86_64
  • macos-aarch64

Tier 2

These platforms are tested on new releases.

  • linux-aarch64 - used for multi-arch docker images.

Unsupported

These platforms are not supported.

  • windows - if you're a developer using Windows, we recommend the Windows Subsystem for Linux (WSL).
  • macos-x86_64 - given the slowness of CI runners for this platform, we cannot provide builds for it.

Project Structure

The project is composed of several crates; refer to their READMEs to learn more about each of them.

  • core: types shared by all other crates.
  • starknet: StarkNet source node.
  • sdk: connect to streams using Rust.
  • sinks: contains the code for all sinks.

License

Copyright 2024 GNC Labs Limited

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

dna's People

Contributors

ametel01, anthonybuisset, bigherc18, fracek, ivpavici, larkooo, martiangreed, ponderingdemocritus, ptisserand, tekkac


dna's Issues

Transform data using Typescript

Is your feature request related to a problem? Please describe.
At the moment, integrations transform data using jsonnet. It would be better if they could use Typescript instead:

  • the language is more widely known
  • language servers improve the developer experience

Describe the solution you'd like
We should embed a Deno runtime to transform data using Typescript.

Write to multiple Mongo collections from a single indexer

Is your feature request related to a problem? Please describe.

At the moment, one indexer can insert data only into one collection. This is mostly fine but:

  • increases the deployment complexity
  • makes data eventually consistent.

Describe the solution you'd like

We should introduce a "multi collection" mode for mongo indexers.

Configuration-wise, we change the mongo options to allow a single collection (like now) or multiple collections. Like all options, we want this to be configurable from inside the script, but also via the CLI and environment variables. The easiest way is to have two keys: collectionName and collectionNames.

pub struct SinkMongoOptions {
  /// ...
  pub collection: CollectionOptions,
}

#[derive(Args)]
pub struct CollectionOptions {
    #[arg(long, env = "MONGO_COLLECTION_NAME", conflicts_with = "collection_names")]
    pub collection_name: Option<String>,
    #[arg(long, env = "MONGO_COLLECTION_NAMES")]
    pub collection_names: Option<Vec<String>>,
}

When users opt into multi-collection mode, the value returned by the transform function is expected to be different.

  • Normal mode: { data: any, collection: string } (data goes in data, collection specifies where to insert the data)
  • Entity mode: { entity: any, collection: string, update: any } (new collection property)

On chain reorganizations, data from all collections should be invalidated.
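A transform for multi-collection normal mode could then look like this sketch (the collection names and the event key are placeholders):

// Placeholder event key, used only for illustration.
const TRANSFER_KEY = "0x0...";

// Sketch: route each event to a different collection based on its key.
export default function transform({ events }) {
  return events.map(({ event }) => ({
    // which collection the record goes into (placeholder names)
    collection: event.keys[0] === TRANSFER_KEY ? "transfers" : "other_events",
    // the document to insert
    data: { address: event.fromAddress, data: event.data },
  }));
}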

Additional context

This feature has been requested by a user and should be prioritized.

Introduce runners

Is your feature request related to a problem? Please describe.

As seen in #226 and #227, we need a way to dynamically spin up indexers. This system needs to be robust enough for production usage, and support different deployment models (Kubernetes, Docker, bare metal).

Describe the solution you'd like

The idea is to introduce schedulers. A scheduler is a server (gRPC for now, + REST later) that exposes a CRUD interface for indexers, and schedules them to run in the background. The scheduler is also used to show indexing status (see #229) and logs.

Notice that the CreateIndexer operation is idempotent: if an indexer with the same indexer definition.id already exists, it will return the existing indexer without creating a new one.

See message below for a first draft of the implementation

The idea is to include at least two implementations of the scheduler:

  • one with no external dependencies, used for development. It will be less robust, but good enough for development.
  • one based on Kubernetes that, combined with the operator, enables a production-ready setup.

After we have this, implementing #226 and #227 becomes easy:

  • #226: for each indexer in the config, call CreateIndexer. No need to check if the indexer exists since operations are idempotent. apibara down will call DeleteIndexer.
  • #227: the factory simply calls CreateIndexer with the returned value. This becomes a sink like any other.

Additional context

The idea to use an API to schedule factory indexers comes from a telegram chat with @bigherc18

Speed up streaming sparse data

Is your feature request related to a problem? Please describe.
At the moment, streaming "sparse" data (that is, when most blocks don't contain any data) is slower than it should be.

Describe the solution you'd like
We should create/update sets containing the block numbers at which events happened/account storage was updated. We should use croaring::Bitmap since that's what we are also using for Erigon DNA.

Suggested steps:

Update ingestion engine to create/update the bitmap sets.

  • Create the EventAddressIndex (map address of contract emitting events to bitmap set) and EventKeyIndex (event key -> bitmap set) tables.
  • Update write_receipts to also update the bitmap set for the events' contracts and keys. Remember to read the old bitmap set before updating.
  • Maybe do something similar with state update?

Update data stream
This is the complex part. At the moment, the data stream expects to advance to the next block. We need to introduce a mechanism to control to which block the stream is advanced. This means it needs to periodically reload the bitmap set (we want to avoid having too large a bitmap set).

This code is pretty similar to the refactor implemented for Erigon DNA.
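Conceptually, the index maps an event address (or key) to the set of block numbers containing matching data, so the stream can skip directly to the next relevant block. A minimal TypeScript sketch of the idea (a real implementation would use croaring::Bitmap on the Rust side, as noted above):

// Conceptual sketch of the block-number index (a real implementation
// would use a compressed roaring bitmap instead of a Set).
const eventAddressIndex = new Map<string, Set<number>>();

function indexEvent(address: string, blockNumber: number) {
  const blocks = eventAddressIndex.get(address) ?? new Set<number>();
  blocks.add(blockNumber);
  eventAddressIndex.set(address, blocks);
}

// Returns the next block >= `from` that contains events for `address`,
// or undefined if there is none in the index.
function nextBlockWithData(address: string, from: number): number | undefined {
  const blocks = eventAddressIndex.get(address);
  if (!blocks) return undefined;
  let next: number | undefined;
  for (const block of blocks) {
    if (block >= from && (next === undefined || block < next)) {
      next = block;
    }
  }
  return next;
}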

Describe alternatives you've considered
At the moment, we have a bloom filter for each block. This is clearly not working well enough, so it should be replaced with the method described in this issue.

Additional context
Maybe this will fix #101?

Chain-aware document storage

The server should provide a document-storage interface for indexers to use. At minimum, it should include the following methods:

  • find_one and find: return one or many matching documents,
  • delete_one and delete_many: delete one or many matching documents,
  • find_one_and_update, find_one_and_replace: update/replace one document.

Storage should also associate each document with a block number/hash/timestamp and a chain name.

Control cursor persistence from integration `Sink`

Is your feature request related to a problem? Please describe.
At the moment, the connector code always updates the persisted cursor every time it calls the handle_data and handle_invalidate methods. This creates issues with integrations like Parquet that perform their own batching on top of the existing batching mechanism provided by DNA. Consider the following example:

  • server sends data with cursors [0, 10)
  • the Parquet sink stores that in memory / the connector stores 10 as the cursor
  • sink crashes or is stopped. no data is written to disk
  • sink resumes, reads cursor 10 and connects to server with this
  • server sends data [10, 20)
  • !!! data for blocks [0, 10) is not written!

Describe the solution you'd like
The Sink trait should be updated to return a value from handle_data and handle_invalidate. I believe it's enough to simply signal whether to persist the cursor or not, without the need to specify which cursor to persist. I like an explicit name like CursorAction.

#[derive(Debug, Clone)]
pub enum CursorAction {
  Persist,
  None,
}

#[async_trait]
pub trait Sink {
    type Error: std::error::Error;

    async fn handle_data(
        &mut self,
        cursor: &Option<Cursor>,
        end_cursor: &Cursor,
        finality: &DataFinality,
        batch: &Value,
    ) -> Result<CursorAction, Self::Error>;

    async fn handle_invalidate(&mut self, cursor: &Option<Cursor>) -> Result<CursorAction, Self::Error>;
}

The connector persists the cursor to etcd only if the handler returns CursorAction::Persist.

On top of this, the connector should only persist the cursor if the data is finalized or accepted; pending data is always ephemeral, so its cursor should not be persisted.

Ethereum Provider Potential Issue with SubscriptionStream

Hi,

I was auditing this code for a project I'm doing and saw that you use the ethers SubscriptionStream which uses eth_subscribe. I was looking at the geth docs and saw that if geth receives multiple blocks simultaneously, e.g. catching up after being out of sync, only the last block is emitted, so it's possible to miss blocks. Does apibara handle cases like this?

Start publishing tagged docker images

At the moment, we publish docker images over at quay.io using the commit sha. I propose the following changes to our docker publishing strategy:

  • use nix derivation hash to tag images. This makes it clear if a new image is actually different from a previous image. This may require changing the docker image back to being deterministic (aka created at ts = 0).
  • use git tags in the format svc/vX.Y.Z to publish new releases.

We also should investigate how to keep a changelog and do semantic releases. We should also link to quay images from the github release page.

Speedup fetching block events

It should be possible to fetch block events for the next batch of blocks while the client is processing the current batch. This means that from the point of view of the client, it only needs to wait for events on the first block.

version `GLIBC_2.38' not found

Describe the bug

I am trying to run the apibara DNA CLI with a sink plugin installed and I am getting an error that prevents me from using it:

/home/user/.local/share/apibara/plugins/apibara-sink-console: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.38' not found (required by /home/user/.local/share/apibara/plugins/apibara-sink-console)

To Reproduce

  • Install apibara curl -sL https://install.apibara.com | bash
  • add plugin apibara plugins install sink-console
  • create script.js file with content:

export const config = {
  streamUrl: "https://goerli.starknet.a5a.ch",
  startingBlock: 800_000,
  network: "starknet",
  finality: "DATA_STATUS_ACCEPTED",
  filter: {
    header: {},
  },
  sinkType: "console",
  sinkOptions: {},
};

// This transform does nothing.
export default function transform(block) {
  return block;
}

  • try to run it $ apibara run script.js -A dna_xxx

Expected behavior

Execution of the apibara command.

Software (please complete the following information):

  • OS: Ubuntu Linux
  • Version Ubuntu 23.04
  • Apibara version apibara-cli 0.4.0
  • Programming language and version (Rust, Cargo, Typescript): Typescript

Additional context

It also happened on other versions. I already tried reinstalling my OS and installing a bunch of packages, but after 2 days spent on just this it seems impossible to solve on my own.

Leverage transactional features to invalidate + update pending data

Is your feature request related to a problem? Please describe.

At the moment, data generated by pending blocks is first invalidated (deleted) and then reinserted. This results in a fraction of a second when this data is missing.

Describe the solution you'd like

Invalidation + update should be done in the same db transaction (if the sink is a db).

  • Change the db trait to have a flag in handle_data that signals if the sink implementation should invalidate data before insertion
  • Change connector code to not invalidate data in between pending blocks (or between pending -> accepted), but signal this to the sink.

Describe alternatives you've considered

We could introduce the concept of transaction to the sink (e.g. a begin_txn, commit_txn method) and then have handle_data, handle_invalidate be methods on the transaction object. This is conceptually more correct, but it will introduce more complex types.

EVM chains support

Add back support for indexing EVM-compatible chains.

Chains are configured at startup through the configuration file; indexers then specify which chain they want to index.

Show clear error message if the stream requires authentication

Is your feature request related to a problem? Please describe.

When I forget to set an --auth-token I receive a long grpc error message. It's not clear what the message is about.

Describe the solution you'd like

The message should inform me that I need to authenticate with the stream.

Integrations should invalidate data if they received pending data.

Describe the bug
Integrations that support pending data will insert data produced by pending blocks. When the block is accepted, the same data is inserted again.

Expected behavior
Pending data is deleted just before handling the next accepted block.

Additional context
On start, the integration should invalidate pending data from the previous run.

Handle chain reorganizations

At the moment the indexer stops indexing in case of chain reorganization. It should instead update the chain state and send an event to connected indexers.

Send empty blocks to client

Should send empty BlockEvents to the client as a heartbeat. This solves the issue of the client always restarting from the beginning when it starts at block 0 and disconnects before receiving any event.

`GLIBC_2.32' not found error during installation of apibara indexer

Describe the bug
Getting the following error when trying to install apibara indexer:

root@vmi1205672:~# curl -sL https://install.apibara.com | bash
apibara-installer: installing Apibara CLI to /root/.local/share/apibara
apibara-installer: installing CLI version 0.1.0 for x86_64-linux
apibara-installer: checking installation
/root/.local/share/apibara/bin/apibara: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found (required by /root/.local/share/apibara/bin/apibara)
/root/.local/share/apibara/bin/apibara: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.33' not found (required by /root/.local/share/apibara/bin/apibara)
/root/.local/share/apibara/bin/apibara: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by /root/.local/share/apibara/bin/apibara)
apibara-installer: command failed: /root/.local/share/apibara/bin/apibara --version

To Reproduce
Execute the command curl -sL https://install.apibara.com | bash on an Ubuntu system with `GLIBC_2.31'.

Expected behavior
The indexer should be installed.

Software (please complete the following information):
Description: Ubuntu 20.04.6 LTS
Release: 20.04

ldd --version
ldd (Ubuntu GLIBC 2.31-0ubuntu9.9) 2.31

Improve sink status server to include indexing status

Is your feature request related to a problem? Please describe.

Would be nice if the sink status server included information about the indexing status. This will enable us to build a monitoring TUI that shows the indexing progress of an indexer.

Describe the solution you'd like

The status server should return the following data:

{
  "status": "running",
  "start_block": 1000,
  "head_block": 2000,
  "current_block": 1600
}

Where the head block is fetched from the DNA server (after #119 is implemented).

Additional context

A TUI for apibara up would be amazing.

Better integration with Starknet Devnet

Is your feature request related to a problem? Please describe.
When using Starknet DNA with Devnet, I need to delete the old data volumes at every restart.

Describe the solution you'd like
Starknet DNA starts with a fresh database on every launch, so I don't have to manage volumes myself.

Additional context
Starknet Devnet is amazing for implementing end-to-end tests. The same pattern can also be applied to other networks, like Ethereum's Anvil.

DNA over websockets

Is your feature request related to a problem? Please describe.
It should be possible to stream data directly into frontend applications. This can be used by projects to:

  • deliver real-time notifications to their users.
  • run the indexer locally, inside the browser.
  • use DNA from wasm applications.

gRPC doesn't play well with web browsers, so dropping it in favour of a browser-native alternative is ideal.

Describe the solution you'd like
We should implement the StreamData method as a websocket server. Like with the gRPC server, data is encoded using protobuf.

Describe alternatives you've considered
We could implement gRPC over webrtc, but this is overkill for what's needed since we only have a few methods.

Additional context
This issue is related to #94. Websockets are bidirectional so #94 is not necessary for this use case.

Adding this functionality means we can also start working on a react library to stream data directly into the app.

Enable multiple sinks per table/collection by adding additional conditions on invalidation

Is your feature request related to a problem? Please describe.

At the moment, it's not possible to write to the same table using multiple sinks. Insertion works fine, but during invalidation one indexer deletes the other indexer's data too.

Describe the solution you'd like

I believe the easiest solution is to add a new option with a list of additional conditions for data invalidation.

For example:

export const config = {
  sinkOptions: {
    invalidate: [
      { column: 'my_column', value: 'abcd' },
      { column: 'other_column', value: 'aaa' }
    ]
  }
}

Obviously the user script must return objects that have my_column and other_column as attributes, even though this behavior won't be enforced, for simplicity's sake.

All fields in invalidate will be added as AND expressions to the invalidate query, the condition being <column>==<value>.
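For the Postgres sink, the invalidation query could be assembled like this sketch (table and column names are illustrative; a real implementation should also validate and quote identifiers more carefully):

// Sketch: build the invalidate query with the extra AND conditions.
// Column names come from the user-provided `invalidate` option.
function buildInvalidateQuery(
  table: string,
  cursor: number,
  conditions: { column: string; value: string }[],
): { text: string; values: unknown[] } {
  const extra = conditions
    .map((c, i) => ` AND "${c.column}" = $${i + 2}`)
    .join("");
  return {
    text: `DELETE FROM "${table}" WHERE _cursor > $1${extra}`,
    values: [cursor, ...conditions.map((c) => c.value)],
  };
}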

Describe alternatives you've considered

We can add a callback in the script file to return the conditions at runtime. I believe it adds additional complexity for no reason.

Additional context

Affects the postgres and mongo sinks.

Generate Parquet files from streams

Is your feature request related to a problem? Please describe.
The idea is to generate parquet files from streams. Parquet files are useful for data scientists who want to analyze on-chain data using Python or DuckDB.

Describe the solution you'd like
We need to implement a sink (similar to the current webhook sink) that appends data to a parquet file.

  • Specify filter and transform like the webhook sink.
  • Schema is user-specified and should be consistent with the data returned by the transform operation.
  • Field elements are hex-encoded.

Additional context
Should the writer stop at a user-specified block? In that case, we need to extend the Sink trait to have a callback to control if the client should stop.

Indexers factories

Is your feature request related to a problem? Please describe.

In some cases (notably: DEXes) the set of smart contracts to index is not known at deployment, but changes dynamically over time.

Describe the solution you'd like

We should have a special "factory" sink type that is used to dynamically deploy new child indexers.

The following example listens for contract deployments, then for each contract it starts 2 indexers (one to index swaps, one to track user balances).

export const config = {
  // ... normal config
  sinkType: "factory",
  sinkOptions: {
    // no options?
  }
}

export default function transform({ header, events }: Block) {
  const startingBlock = header.blockNumber;
  return events.flatMap(({ event }) => {
    const contractAddress = event.data[0];
    // return the sinks config
    return [{
      id: `swaps-${contractAddress}`,
      src: "src/swaps.ts",
      env: {
        "STARTING_BLOCK": startingBlock,
        "CONTRACT_ADDRESS": contractAddress,
      }
    }, {
      id: `balances-${contractAddress}`,
      src: "src/balances.ts",
      env: {
       "STARTING_BLOCK": startingBlock,
        "CONTRACT_ADDRESS": contractAddress,
      }
    }]
  })
}

The sub indexers should run independently (so data is eventually consistent).

The challenge is how to store which indexers have been deployed so that they can be restarted after a restart. In theory we only need to store the indexer source path and the environment variables used to start it. Maybe it makes sense to reuse the persistence layer (none, fs, etcd) to also store this information?

The idea is that, using this type of indexer, users can build "indexer trees" like the following:

pool-factory/
├── dai-usdc-balances
├── dai-usdc-swaps
├── eth-dai-balances
├── eth-dai-swaps
├── eth-usdc-balances
└── eth-usdc-swaps

Additional Context

This one can share a lot of code with #226

`apibara up` command to run multiple indexers at the same time

Is your feature request related to a problem? Please describe.

At the moment, if I want to run multiple indexers concurrently, I need to create many terminal windows and run each one of them with apibara run my-indexer.ts. If I change my code, I then need to restart them individually.

Describe the solution you'd like

We should have a configuration file (apibara.config.ts) where I can define all indexers I want to run. Then there should be a command apibara up that reads this configuration file and starts the indexers defined there.

The configuration file should be in typescript/javascript so that developers can read environment variables and generate the indexers configuration at runtime.

export default {
  indexers: {
    "indexer-1": {
      script: "src/indexer1.ts",
      env: {
        "KEY_1": "value_1"
      }
    },
    "indexer-2": {
      script: "src/indexer2.ts",
    }
  },
  persistence: {
    fs: ".apibara",
    // etcd: "http://localhost:2379"
  }
}

Notice that the key of the indexers property will be used as the sink-id if persistence is enabled.

This feature requires some planning:

  • how to forward environment variables to the sub-processes? At the moment --allow-env only accepts files.
    • different indexers may have different values for the same env variable. How?
    • One idea is to enable a syntax like --allow-env=KEY=VALUE1,KEY2=VALUE2
  • run multiple indexers concurrently as sub-processes. Forward signals to the sub-processes (is this automatic?)
  • assign a different status server address to each sub-process, and create a new status server that is healthy only if all sub-processes are healthy

Describe alternatives you've considered

We could use yaml/json for configuration. I believe using typescript is better because we get the following for free:

  • autocomplete
  • type safety
  • powerful interpolation/templating

Additional context

This is related to the work on factory indexers.

Persist integration state between restarts

Is your feature request related to a problem? Please describe.
At the moment, when an integration (like the webhook sink) is restarted it starts over from the beginning. For real-world usage, we want it to resume from where it previously left off.

Describe the solution you'd like
We should add a new option to enable persisting state. When this option is enabled:

  • After each batch is handled, write the cursor to storage.
  • On startup, check if any cursor exists. If it does, use it when connecting to the stream.
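A minimal sketch of that flow, assuming a generic key-value persistence interface (all names here are hypothetical, not the actual API):

// Hypothetical persistence flow; `CursorStorage` can be backed by a file,
// etcd, or any other key-value store.
interface CursorStorage {
  getCursor(sinkId: string): Promise<number | undefined>;
  putCursor(sinkId: string, cursor: number): Promise<void>;
}

async function runSink(
  sinkId: string,
  storage: CursorStorage,
  // hypothetical helpers: connect to the stream and handle one batch
  connect: (startingCursor?: number) => AsyncIterable<{ endCursor: { orderKey: number }; batch: unknown }>,
  handleBatch: (batch: unknown) => Promise<void>,
) {
  // On startup, resume from the persisted cursor if one exists.
  const startingCursor = await storage.getCursor(sinkId);
  for await (const message of connect(startingCursor)) {
    await handleBatch(message.batch);
    // After each batch is handled, write the cursor to storage.
    await storage.putCursor(sinkId, message.endCursor.orderKey);
  }
}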

Additional context

  • Sinks need to have a sink-id (specified by the user). This is used to store multiple sinks in the same storage.
  • What data store to use? Postgres is fine, but on the other hand we only really need a basic kv store. etcd is a good choice; we should explore its lock mechanism to avoid running two copies of the same indexer in parallel.

Add non-bidirectional stream methods

Is your feature request related to a problem? Please describe.
Using bidirectional streams is great to reconfigure streams on the fly, but it also means the streams are impossible to use with grpc-web.

Describe the solution you'd like
Add an additional method to stream data with a unidirectional stream. The client sends the configuration that will be used for the entirety of the stream.

Additional context
This is needed by Dojo.

Add dynamic filters

There should be an API to dynamically add EventFilter to an indexer.

When a new filter is added:

  • The indexer updates its list of filters
  • The indexer stores the updated filters to persistent storage
  • The indexer gets the matching events for the current block and sends them to the handler.
    • Notice that these events will be handled after the current block's events are handled

Write stream data to MongoDB

Is your feature request related to a problem? Please describe.
Many use cases involve transforming data and then writing it into MongoDB. This database is then used to implement the API used by the frontend.

Describe the solution you'd like
We should add a sink that writes stream data directly to MongoDB, without the need to implement a typescript/python program that does it.

Additional context
These are a couple of things to keep in mind:

  • Write all records to the same collection; the collection name will be a command line argument to the sink.
  • The "transform" step should return an array of records; the sink should probably check this is the case and, if not, print a helpful error message.
  • Add a _cursor field to each record containing the end_cursor.order_key of the batch.
  • On "invalidate", delete all records where _cursor > cursor.order_key, where cursor is the cursor in the invalidate message.

Add rpc method to retrieve indexing/chain status

Is your feature request related to a problem? Please describe.
Sometimes we need to know the latest available block number/hash. We can use the JSON-RPC provider to do that, but there is always the possibility that the Apibara DNA node and the provider node are not exactly at the same block.

Describe the solution you'd like
We should add a Status method to the rpc that returns the block syncing status, together with the cursors of the current finalized and accepted blocks.

service Stream {
  rpc Status(StatusRequest) returns (StatusResponse);
}

message StatusRequest {
}

message StatusResponse {
  // Optional because the chain may be so new that no block has been finalized. For example, devnets.
  optional Cursor finalized = 1;
  Cursor latest = 2;
  IngestionStatus status = 3;
}

enum IngestionStatus {
  INGESTION_STATUS_UNKNOWN = 0;
  INGESTION_STATUS_SYNCING = 1;
  INGESTION_STATUS_SYNCED = 2;
}

Build arm binaries and docker images

Is your feature request related to a problem? Please describe.
Arm cloud machines have very good pricing and it would be amazing to run Apibara on them. Developers are also using Mac M1 and M2 and at the moment they have to build Apibara from source.

Describe the solution you'd like
The CI/CD should be improved to build arm binaries and docker images.

Describe alternatives you've considered
We should consider the following approaches:

  • using an alternative ci that has native support for arm runners.
  • use qemu to build arm binaries on github actions.
  • set up nix cross-compilation to compile arm binaries from an amd64 machine.

Add "entity" mode to Postgres and MongoDB sinks

Is your feature request related to a problem? Please describe.

The current sinks store all values as a logical list (e.g. a list of transfers, a list of player moves, etc). The issue arises if I need to know the latest account balance, player status, etc., since for each entity I need to fetch only the latest value.

Describe the solution you'd like

We should add an "entity mode" where the sink updates the rows in the table as if they represent an entity.
The user opts in to this feature by setting the entity property on the sink:

export const config = {
  sinkOptions: {
    // enable entity mode
    entity: {
      keys: ['id', 'realm']
    }
  }
}

Then the behavior of the indexer changes as follows:

  • _cursor becomes a range (int8range in postgres, {from: number, to: number} in mongo).
  • handle_data:
    • collect keys for returned rows
    • update: set upper bound on _cursor to endCursor for all rows where id in keys and _cursor.to = $\infty$.
    • insert: rows returned by handle_data
  • invalidate:
    • delete all rows where _cursor.from > invalidated cursor (same as now)
    • update all rows where _cursor.to > invalidated: set _cursor.to to $\infty$.

Users can then query the most recent data by selecting rows where _cursor.to = $\infty$.
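For the Mongo sink, the handle_data step in entity mode could look like this sketch (here null stands in for the open upper bound written as ∞ above; the types and names are placeholders):

import { Collection } from "mongodb";

// Sketch of entity-mode handle_data for the Mongo sink.
// `_cursor` is stored as { from, to }, with to = null meaning "still current".
async function handleData(
  collection: Collection,
  endCursor: number,
  rows: { keys: Record<string, unknown>; values: Record<string, unknown> }[],
) {
  for (const row of rows) {
    // Close the _cursor range of the previous version of this entity.
    await collection.updateMany(
      { ...row.keys, "_cursor.to": null },
      { $set: { "_cursor.to": endCursor } },
    );
    // Insert the new version, valid from endCursor onwards.
    await collection.insertOne({
      ...row.keys,
      ...row.values,
      _cursor: { from: endCursor, to: null },
    });
  }
}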

Additional context

We need this for postgres and mongo sinks.

Server side event decoder

Describe how to improve Apibara

At the moment, developers need to parse events on the client side. I propose to improve the current event filter to, optionally, decode events.

Describe why this feature is important to you

I want to reduce the time it takes to integrate data into my application. In every single project I've used Apibara for, I've found myself decoding events and this is tedious.
With this change, the data served by Apibara is ready to use.

Provide more details about the change

The changes are as follows:

  • Update the filter definition to include a decode key, which accepts the event ABI
  • Update the event message to include the decoded message in it. The decoded event is serialized as a protobuf Struct.
  • The server will decode events; if decoding fails, it will stream the message as usual and decoded will be null.

The open question is how should the ABI be passed to the server:

  • Pass the event ABI and its dependencies in every event filter message. This approach requires sending more data from client to server, but it's both easier and more flexible.
  • Have a global ABI map and use a reference in the event filter message. This approach reduces message size, but requires (for example) handling cases where contract names clash.
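With the first approach, a filter carrying the ABI inline might look like the sketch below (the decode key and its shape are part of this proposal, not an existing API; addresses and keys are placeholders):

// Hypothetical filter shape for the proposed `decode` key.
const filter = {
  events: [
    {
      fromAddress: "0x0...", // placeholder contract address
      keys: ["0x0..."],      // placeholder event selector
      // Proposed: the event ABI the server uses to decode the event.
      decode: {
        name: "Transfer",
        inputs: [
          { name: "from", type: "felt" },
          { name: "to", type: "felt" },
          { name: "value", type: "Uint256" },
        ],
      },
    },
  ],
};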

StarkNet events bloom filter

Is your feature request related to a problem? Please describe.
Filtering events for a contract that doesn't emit many events is not as fast as it could be.

Describe the solution you'd like
The StarkNet service should compute bloom filters for smart contract events (similar to how full nodes do). This filter should be used when searching for events to quickly skip blocks that won't generate any event.

Describe alternatives you've considered
NA

Additional context
The implementation will create a bloom filter for each block and store its byte content in the block body table. When the filtered stream is asked to retrieve an event, the filter will first do a fast check on the block's bloom filter to see whether the block contains any event for the requested contract. If it doesn't, it moves on to the next block. Only if the bloom filter contains the event will the filter scan all transaction receipts for it.
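A compact sketch of the check described above, using a toy bloom filter (the hash functions are intentionally simplistic; a real implementation would use a proper bloom filter library):

// Toy bloom filter used only to illustrate the fast per-block check.
class BloomFilter {
  private bits: Uint8Array;

  constructor(private size: number = 2048) {
    this.bits = new Uint8Array(size);
  }

  private hashes(item: string): number[] {
    // Two simple (non-cryptographic) hashes; real filters use better ones.
    let h1 = 0, h2 = 0;
    for (let i = 0; i < item.length; i++) {
      h1 = (h1 * 31 + item.charCodeAt(i)) >>> 0;
      h2 = (h2 * 131 + item.charCodeAt(i)) >>> 0;
    }
    return [h1 % this.size, h2 % this.size];
  }

  add(item: string) {
    for (const h of this.hashes(item)) this.bits[h] = 1;
  }

  // May return false positives, never false negatives.
  mightContain(item: string): boolean {
    return this.hashes(item).every((h) => this.bits[h] === 1);
  }
}

// Usage sketch: skip a block whose filter definitely doesn't contain the
// requested contract address, otherwise scan its transaction receipts.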

Add a Rust SDK

Is your feature request related to a problem? Please describe.

We should add an official SDK for Rust.

Describe the solution you'd like

The repository should have an apibara-sdk crate that developers can use directly. This crate provides a slim abstraction over the tonic client:

  • Split client and stream, handling the low-level communication pattern.
  • Handle bumping the stream id and filtering out messages that belong to an old stream.

Describe alternatives you've considered

At the moment, developers are using Apibara by generating the tonic client from the proto definitions. This works fine, but it means they have to spend hours on it.

Additional context

The client will also be used to implement new integrations using Rust directly.

Write stream data into PostgreSQL database

Is your feature request related to a problem? Please describe.
We should have an integration (sink) that writes data directly into a PostgreSQL database. The program connects to a stream (using the SinkConnector and Sink abstraction in sink-common) and, for each batch, writes data to the database.

Additional context

  • Tables should have one column with the block number/cursor. When a chain reorganization happens, all invalidated rows should be deleted from the database.
  • How to handle the schema? Should the user manually create the tables and then craft a jsonnet transformation that is compatible with them?
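A sketch of the reorg handling using node-postgres (the table and cursor column names are placeholders, consistent with the description above):

import { Client } from "pg";

// Sketch: on chain reorganization, delete all rows inserted after the
// invalidated cursor.
async function invalidate(client: Client, cursorOrderKey: number) {
  await client.query("DELETE FROM my_table WHERE _cursor > $1", [cursorOrderKey]);
}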

Block data sent to script is missing some fields when they are zero

Describe the bug

Some fields (e.g. blockNumber) are not set if their value is 0.

To Reproduce

Stream block headers starting from the genesis block.

Expected behavior

The field should be there when it's passed to the script.

Additional context

This is because protobuf doesn't encode a field if it's set to the default value (0 for integers). We should find a way to force serde_json to encode these values anyway.

Event filtering stopping after a bunch of events not matching

Describe the bug
After a short amount of time, the stream cuts out.

To Reproduce
use the following configuration

Expected behavior
The stream should keep going.

Software (please complete the following information):

  • Mac OS M2
  • 2022
  • main branch
  • Rust

Additional context
Here is the filter configuration:

Configuration {
    batch_size: 10,
    starting_cursor: Some(
        Cursor {
            order_key: 1,
            unique_key: [],
        },
    ),
    finality: None,
    filter: Filter {
        header: Some(
            HeaderFilter {
                weak: true,
            },
        ),
        transactions: [],
        state_update: None,
        events: [
            EventFilter {
                from_address: Some(
                    FieldElement {
                        lo_lo: 141924167996489854,
                        lo_hi: 4461830264727346503,
                        hi_lo: 9027821603959284699,
                        hi_hi: 3224236691746430776,
                    },
                ),
                keys: [],
                data: [],
            },
            EventFilter {
                from_address: Some(
                    FieldElement {
                        lo_lo: 220494548881075745,
                        lo_hi: 8206847380437750340,
                        hi_lo: 1234864968652499189,
                        hi_hi: 1715681208735219819,
                    },
                ),
                keys: [],
                data: [],
            },
            EventFilter {
                from_address: Some(
                    FieldElement {
                        lo_lo: 469280195236832677,
                        lo_hi: 11835663046247465944,
                        hi_lo: 654084999609290947,
                        hi_hi: 12673034911508238373,
                    },
                ),
                keys: [],
                data: [],
            },
            EventFilter {
                from_address: Some(
                    FieldElement {
                        lo_lo: 204262832846582119,
                        lo_hi: 3886882853341860350,
                        hi_lo: 18223698515018162509,
                        hi_hi: 1299621932791004246,
                    },
                ),
                keys: [],
                data: [],
            },
            EventFilter {
                from_address: Some(
                    FieldElement {
                        lo_lo: 118913603459130459,
                        lo_hi: 1683330482416922725,
                        hi_lo: 4806693892206280204,
                        hi_hi: 3623713590300244873,
                    },
                ),
                keys: [],
                data: [],
            },
            EventFilter {
                from_address: Some(
                    FieldElement {
                        lo_lo: 407677964068951386,
                        lo_hi: 6740043524711749856,
                        hi_lo: 5490408125206206500,
                        hi_hi: 17272198229826014819,
                    },
                ),
                keys: [],
                data: [],
            },
            EventFilter {
                from_address: Some(
                    FieldElement {
                        lo_lo: 574664994837257644,
                        lo_hi: 16405020605811209554,
                        hi_lo: 8582300231383593767,
                        hi_hi: 10103514850971951750,
                    },
                ),
                keys: [],
                data: [],
            },
            EventFilter {
                from_address: Some(
                    FieldElement {
                        lo_lo: 398464499519201301,
                        lo_hi: 6953724885549164874,
                        hi_lo: 17395838387495380706,
                        hi_hi: 9404547491400261196,
                    },
                ),
                keys: [],
                data: [],
            },
            EventFilter {
                from_address: Some(
                    FieldElement {
                        lo_lo: 199262686316067488,
                        lo_hi: 5794677346871056422,
                        hi_lo: 5283490589079989995,
                        hi_hi: 16563184914782753769,
                    },
                ),
                keys: [],
                data: [],
            },
            EventFilter {
                from_address: Some(
                    FieldElement {
                        lo_lo: 411413150135579624,
                        lo_hi: 13878874739128723669,
                        hi_lo: 8849579891189881489,
                        hi_hi: 3066445507059866187,
                    },
                ),
                keys: [],
                data: [],
            },
            EventFilter {
                from_address: Some(
                    FieldElement {
                        lo_lo: 300108165784364177,
                        lo_hi: 16268838412093813293,
                        hi_lo: 7375830621221972447,
                        hi_hi: 9306067903893696843,
                    },
                ),
                keys: [],
                data: [],
            },
            EventFilter {
                from_address: Some(
                    FieldElement {
                        lo_lo: 157023138631907758,
                        lo_hi: 9628108160968964607,
                        hi_lo: 5059267689504923197,
                        hi_hi: 595098071118637873,
                    },
                ),
                keys: [],
                data: [],
            },
            EventFilter {
                from_address: Some(
                    FieldElement {
                        lo_lo: 307290986116249126,
                        lo_hi: 10365571732757603966,
                        hi_lo: 9266429263366839608,
                        hi_hi: 215688856091842599,
                    },
                ),
                keys: [],
                data: [],
            },
            EventFilter {
                from_address: Some(
                    FieldElement {
                        lo_lo: 418825563805480600,
                        lo_hi: 17879231772626171647,
                        hi_lo: 18111611552454258502,
                        hi_hi: 9869151969047150807,
                    },
                ),
                keys: [],
                data: [],
            },
            EventFilter {
                from_address: Some(
                    FieldElement {
                        lo_lo: 147403233143589836,
                        lo_hi: 8559597615794637215,
                        hi_lo: 255489816305612017,
                        hi_hi: 6102446759750312533,
                    },
                ),
                keys: [],
                data: [],
            },
            EventFilter {
                from_address: Some(
                    FieldElement {
                        lo_lo: 17176757370923388,
                        lo_hi: 2522789697434642549,
                        hi_lo: 5221676217163137407,
                        hi_hi: 12307862144374640426,
                    },
                ),
                keys: [],
                data: [],
            },
            EventFilter {
                from_address: Some(
                    FieldElement {
                        lo_lo: 292752273956002465,
                        lo_hi: 15786128675409123726,
                        hi_lo: 523166112527275946,
                        hi_hi: 14468106379219261563,
                    },
                ),
                keys: [],
                data: [],
            },
            EventFilter {
                from_address: Some(
                    FieldElement {
                        lo_lo: 411325541998572560,
                        lo_hi: 2542741661917590427,
                        hi_lo: 7195181328172706523,
                        hi_hi: 10887089632520108646,
                    },
                ),
                keys: [],
                data: [],
            },
            EventFilter {
                from_address: Some(
                    FieldElement {
                        lo_lo: 482496618823639239,
                        lo_hi: 17765301190491373427,
                        hi_lo: 15832563411425350096,
                        hi_hi: 4503765383732295529,
                    },
                ),
                keys: [],
                data: [],
            },
            EventFilter {
                from_address: Some(
                    FieldElement {
                        lo_lo: 441075685397749408,
                        lo_hi: 378561184919933086,
                        hi_lo: 8928383419802262321,
                        hi_hi: 137744417336161173,
                    },
                ),
                keys: [],
                data: [],
            },
            EventFilter {
                from_address: Some(
                    FieldElement {
                        lo_lo: 553319563445561714,
                        lo_hi: 12343562686138288661,
                        hi_lo: 494462708413400099,
                        hi_hi: 16788324342216629790,
                    },
                ),
                keys: [],
                data: [],
            },
            EventFilter {
                from_address: Some(
                    FieldElement {
                        lo_lo: 152914788216769582,
                        lo_hi: 8819863148202983241,
                        hi_lo: 12398898821178922528,
                        hi_hi: 14034589561769177986,
                    },
                ),
                keys: [],
                data: [],
            },
            EventFilter {
                from_address: Some(
                    FieldElement {
                        lo_lo: 400599871849101910,
                        lo_hi: 14320031564304653836,
                        hi_lo: 13650565629694531068,
                        hi_hi: 12916449027956587675,
                    },
                ),
                keys: [],
                data: [],
            },
            EventFilter {
                from_address: Some(
                    FieldElement {
                        lo_lo: 489864767934997834,
                        lo_hi: 12138524705128661165,
                        hi_lo: 7682144972930897537,
                        hi_hi: 5705329291673710126,
                    },
                ),
                keys: [],
                data: [],
            },
            EventFilter {
                from_address: Some(
                    FieldElement {
                        lo_lo: 429112261359923354,
                        lo_hi: 13270606280099832739,
                        hi_lo: 17280047680573175693,
                        hi_hi: 14863668625012285325,
                    },
                ),
                keys: [],
                data: [],
            },
            EventFilter {
                from_address: Some(
                    FieldElement {
                        lo_lo: 350884414981412172,
                        lo_hi: 5685395935939504669,
                        hi_lo: 13004546513975842611,
                        hi_hi: 819179450984021517,
                    },
                ),
                keys: [],
                data: [],
            },
            EventFilter {
                from_address: Some(
                    FieldElement {
                        lo_lo: 146079500267246875,
                        lo_hi: 11605882306218608580,
                        hi_lo: 8166419744231255463,
                        hi_hi: 9610049869093286425,
                    },
                ),
                keys: [],
                data: [],
            },
            EventFilter {
                from_address: Some(
                    FieldElement {
                        lo_lo: 255599017163590515,
                        lo_hi: 1182108537026542070,
                        hi_lo: 5997179186594174549,
                        hi_hi: 9343984826074796472,
                    },
                ),
                keys: [],
                data: [],
            },
            EventFilter {
                from_address: Some(
                    FieldElement {
                        lo_lo: 281980109922192838,
                        lo_hi: 2017946184851657190,
                        hi_lo: 9249960045638417717,
                        hi_hi: 7023927707814590618,
                    },
                ),
                keys: [],
                data: [],
            },
            EventFilter {
                from_address: Some(
                    FieldElement {
                        lo_lo: 183670846460719382,
                        lo_hi: 7757444357569743284,
                        hi_lo: 9084793767733387132,
                        hi_hi: 5800994164308672277,
                    },
                ),
                keys: [],
                data: [],
            },
            EventFilter {
                from_address: Some(
                    FieldElement {
                        lo_lo: 271346180633817348,
                        lo_hi: 6837097028383264371,
                        hi_lo: 197910243397615547,
                        hi_hi: 9700383418574679652,
                    },
                ),
                keys: [],
                data: [],
            },
            EventFilter {
                from_address: Some(
                    FieldElement {
                        lo_lo: 43086627882011708,
                        lo_hi: 2155426509572953422,
                        hi_lo: 354140801297937469,
                        hi_hi: 12327867867773176309,
                    },
                ),
                keys: [],
                data: [],
            },
            EventFilter {
                from_address: Some(
                    FieldElement {
                        lo_lo: 118423099747114076,
                        lo_hi: 3123493452226395608,
                        hi_lo: 3294981039196440125,
                        hi_hi: 3486763700001803426,
                    },
                ),
                keys: [],
                data: [],
            },
            EventFilter {
                from_address: Some(
                    FieldElement {
                        lo_lo: 415192496765954572,
                        lo_hi: 5640090242410537043,
                        hi_lo: 16588805799033720351,
                        hi_hi: 5747108692217563855,
                    },
                ),
                keys: [],
                data: [],
            },
            EventFilter {
                from_address: Some(
                    FieldElement {
                        lo_lo: 387271298570354965,
                        lo_hi: 16611412664022490323,
                        hi_lo: 15444432604305839094,
                        hi_hi: 11876572857415789492,
                    },
                ),
                keys: [],
                data: [],
            },
            EventFilter {
                from_address: Some(
                    FieldElement {
                        lo_lo: 514067444444454612,
                        lo_hi: 13270646759112001041,
                        hi_lo: 1992508807343859459,
                        hi_hi: 14390857929865026148,
                    },
                ),
                keys: [],
                data: [],
            },
            EventFilter {
                from_address: Some(
                    FieldElement {
                        lo_lo: 213705003065127847,
                        lo_hi: 18446722276412236596,
                        hi_lo: 5593680071317264676,
                        hi_hi: 7128362404056877621,
                    },
                ),
                keys: [],
                data: [],
            },
            EventFilter {
                from_address: Some(
                    FieldElement {
                        lo_lo: 415673329918882058,
                        lo_hi: 3368345783457655742,
                        hi_lo: 13559963048907331850,
                        hi_hi: 11849155649558252841,
                    },
                ),
                keys: [],
                data: [],
            },
            EventFilter {
                from_address: Some(
                    FieldElement {
                        lo_lo: 251162268646436793,
                        lo_hi: 9964049464893492329,
                        hi_lo: 4151483832824713744,
                        hi_hi: 2840499627555285691,
                    },
                ),
                keys: [],
                data: [],
            },
            EventFilter {
                from_address: Some(
                    FieldElement {
                        lo_lo: 410665782951632876,
                        lo_hi: 13315461071554895667,
                        hi_lo: 7048369908878677401,
                        hi_hi: 12714201051600785950,
                    },
                ),
                keys: [],
                data: [],
            },
        ],
        messages: [],
    },
}

Here is my terminal output:

[2023-04-14T13:55:07Z INFO  carbonable_indexer] Handling data within 1 and 5001
DataStatusFinalized
[]
[2023-04-14T13:55:07Z INFO  carbonable_indexer] Handling data within 5001 and 10001
DataStatusFinalized
[]
[2023-04-14T13:55:07Z INFO  carbonable_indexer] Handling data within 10001 and 15001
DataStatusFinalized
[]
[2023-04-14T13:55:07Z INFO  carbonable_indexer] Handling data within 15001 and 20001
DataStatusFinalized
[]
[2023-04-14T13:55:07Z INFO  carbonable_indexer] Handling data within 20001 and 25001
DataStatusFinalized
[]
[2023-04-14T13:55:07Z INFO  carbonable_indexer] Handling data within 25001 and 30001
DataStatusFinalized
[]
[2023-04-14T13:55:07Z INFO  carbonable_indexer] Handling data within 30001 and 35001
DataStatusFinalized
[]
[2023-04-14T13:55:07Z INFO  carbonable_indexer] Handling data within 35001 and 40001
DataStatusFinalized
[]
[2023-04-14T13:55:08Z INFO  carbonable_indexer] Handling data within 40001 and 45001
DataStatusFinalized
[]

AND SO ON UNTIL

[2023-04-14T13:55:14Z INFO  carbonable_indexer] Handling data within 190001 and 195001
DataStatusFinalized
[]

Error: Status { code: Unknown, message: "error reading a body from connection: stream error received: unexpected internal error encountered", source: Some(hyper::Error(Body, Error { kind: Reset(StreamId(1), INTERNAL_ERROR, Remote) })) }

Add `--webhook` mode to webhook integration

Is your feature request related to a problem? Please describe.
At the moment, the sink-webhook integration is not exactly great for actually talking to webhooks since it adds additional data to the payload generated by the transform step.

For example:

  • transform returns { message: "say hi" }
  • integration sends { "data": { "cursor": { }, "end_cursor": { }, "finality": "accepted", "batch": { "message": "say hi" } }}
  • so if the user is trying to talk to the Discord/Telegram/any other API, it will fail

Describe the solution you'd like
There should be a new --webhook flag that changes the sink behaviour:

  • invalidate calls become a no-op. No data is posted to the target url since it won't know how to handle it anyway.
  • data calls send only the payload returned by the transform step, without any extra data.

Describe alternatives you've considered
We could split the sink into two (webhook and "serverless function"), but they would share 99% of the code anyway.

Make sink grpc client options configurable

Is your feature request related to a problem? Please describe.
Streaming using a filter that produces a large amount of data results in an error. This is because grpc is very conservative with the maximum message size.

Describe the solution you'd like
Maximum message size should be configurable. There should be an option to set other grpc options too.

Additional context
This is similar to what's done in the Typescript and Python SDK.

Allow sink developers to easily test their transform functions

Is your feature request related to a problem? Please describe.
It's not a problem per se, but sink developers don't have any way to test if their transform function is doing exactly what they want it to do.

Describe the solution you'd like
First, we should generate the test data using the filter and transform configuration in their script.js file. To do so, they run a command like:
apibara test new script.js --num-blocks 100

The generated test data will include:

  • the filter and transform configuration, to be used to check we're testing against the same script
  • the raw data received from DNA
  • the data returned from their transform function

The data will be stored in a JSON file, maybe in the target/ directory

The idea is that the dev will look at the file and check if it's exactly what they expect to see; if not, they edit the transform function and run again until they get what they want. That file is then used to test that the transform function keeps doing what it's supposed to do. For that, the dev runs a command every time they want to make sure their transform function works as expected:

apibara test check script.js or apibara test check target/test/script-test-case-xxx.json

This will feed the input data to the transform function and assert that it returns the expected output.


Support starting node at particular block

Is your feature request related to a problem? Please describe.
Currently, the node always starts at block 0. It would be great to be able to start it at a particular block.

Describe the solution you'd like

  1. Update start to take BlockIngestionConfig as input
  2. Update BlockIngestionConfig to allow specifying the start_block
  3. Use start block if set rather than 0

Send data to webhook

Is your feature request related to a problem? Please describe.

Developers are very productive writing web applications that handle HTTP requests. HTTP servers can be deployed on serverless platforms (Supabase, AWS Lambda, etc), without managing any server.

Describe the solution you'd like

We should provide a small service that turns a stream of data into HTTP requests to the server specified by the user. The service should:

  • keep track of the stream state
  • handle retries if the HTTP server returns an error (see the sketch below)
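A minimal sketch of the retry behaviour (endpoint, backoff, and attempt count are illustrative):

// Sketch: POST a batch to the user's endpoint, retrying with exponential
// backoff on failure. The number of attempts is arbitrary.
async function postWithRetry(url: string, body: unknown, attempts = 5) {
  for (let i = 0; i < attempts; i++) {
    try {
      const res = await fetch(url, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(body),
      });
      if (res.ok) return;
    } catch {
      // network error: fall through and retry
    }
    await new Promise((resolve) => setTimeout(resolve, 2 ** i * 1000));
  }
  throw new Error(`failed to deliver batch to ${url}`);
}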

Additional context

The webhook handler should use jsonnet to transform data before it's sent to the server. It should also handle authentication (through headers or URL params).

Improve sink/integration developer experience

Is your feature request related to a problem? Please describe.
At the moment, the developer experience of using sinks/integrations is not ideal:

  • data filter is a json file, so we need external code generation to generate filters
  • the javascript script is only used for transformation
  • options are set using cli arguments or environment variables

Describe the solution you'd like
We should unify everything to improve the developer experience. We do that by using the javascript file for both configuration and transformation.

import { Configuration, StarknetFilter, StarknetBlock, PostgresSink } from '@apibara/integration'

export const config: Configuration<StarknetFilter, PostgresSink> = {
  type: 'starknet',
  stream: {
    url: 'https://mainnet.starknet.a5a.ch',
    bearerToken: Deno.env.get('DNA_TOKEN'),
    // other options
  },
  startingCursor: 123_456,
  filter: Filter().withHeader({ weak: false }).toObject(),
  sink: {
    type: 'postgres',
    options: {
      connectionUrl: 'postgres://....',
    }
  }
}

export default function transform(batch: StarknetBlock[]) {
  // do something with data
}

Notice that the configuration needs to be generic over the filter and sink types.

One of the challenges is that, for the hosted service, we want users to connect to the streams using the internal network (to avoid paying for egress charges), so we cannot let them freely select the stream url and token; instead, we want to override that config.

We achieve this by having the following priority for the configuration (higher is better).

  1. defaults
  2. config from script
  3. environment variables
  4. command line arguments

This way, users can use any value in the script for testing, and when they deploy to the hosted service we override the problematic values.
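A sketch of how that priority could be applied when resolving the final configuration (later sources override earlier ones, matching the list above):

// Sketch: merge configuration sources in increasing priority order
// (defaults < script config < environment variables < CLI arguments).
function resolveConfig<T extends object>(
  defaults: T,
  fromScript: Partial<T>,
  fromEnv: Partial<T>,
  fromCli: Partial<T>,
): T {
  return { ...defaults, ...fromScript, ...fromEnv, ...fromCli };
}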

We will provide a new apibara cli tool that is the entrypoint for running Apibara scripts. For example:

  • apibara run script.ts: runs the script
  • apibara run script.ts --stream.bearer-token=xxx: overrides the stream bearer token

We want to keep the sink abstraction extensible to encourage developers to build their own to integrate with their favourite tools. We do that by delegating the execution of the script to another tool based on the value of sink.type.

The execution trace of apibara run is as follows:

  • reads and validates script.
  • gets value of sink.type.
  • forwards script and cli flags to apibara-sink-<sink.type> (e.g. apibara-sink-postgres) (the executable is expected to be in $PATH).

In the future, we can replace the third step with a more sophisticated approach where the sink and the runner communicate through a grpc service, but for now it adds complexity for no clear benefit.

By convention, sink options can be overriden as follows:

  • cli --<sink.type>.<option-name> (e.g. --postgres.connection-url)
  • env var <SINK_TYPE>_<OPTION_NAME> (e.g. POSTGRES_CONNECTION_URL)

Configuration through env variables is important for production since we can't hard-code secrets in the script.

Additional context
The configuration approach is similar to Grafana K6, the multi-binary approach is similar to Pulumi.
