
astra's Introduction

Astra


Astra is a cloud-native search and analytics engine for log, trace, and audit data. It is designed to be easy to operate, cost-effective, and able to scale to petabytes of data.

https://slackhq.github.io/astra/

Goals

  • Native support for log, trace, audit use cases.
  • Aggressively prioritize ingest of recent data over older data.
  • Full-text search capability.
  • First-class Kubernetes support for all components.
  • Autoscaling of ingest and query capacity.
  • Coordination-free ingestion, so the failure of a single node does not impact ingestion.
  • Works out of the box with sensible defaults.
  • Designed for zero data loss.
  • First-class Grafana support with accompanying plugin.
  • Built-in multi-tenancy, supporting several small use-cases on a single cluster.
  • Supports the majority of Apache Lucene features.
  • Drop-in replacement for most OpenSearch log use cases.
  • Operate with multiple cloud providers.

Non-Goals

  • General-purpose search cases, such as for an e-commerce site.
  • Document mutability - records are expected to be append only.
  • Additional storage engines other than Lucene.
  • Support for JVM versions other than the current LTS.
  • Supporting multiple Lucene versions.

Licensing

Licensed under MIT. Copyright (c) 2024 Slack.

Contributors

  • Varun Thacker: 💻 📖 👀 🐛
  • Bryan Burkholder: 💻 📖 👀 🐛
  • Kyle Sammons: 🔌 💻
  • Suman Karumuri: 💻 👀 🤔 📢
  • Emma Montross: 🔌
  • Dan Hermann: 💻
  • Kai Chen: 💻
  • Aubrey: 💻
  • Shelly Wu: 💻
  • Ryan Katkov: 💼
  • Slack: 💵
  • Salesforce: 💵
  • Henry Haiying Cai: 💻
  • Geoffrey Jacoby: 🐛


astra's Issues

Pin netty version

Pin the Netty version to fix version inconsistencies. As the warning below indicates, this could result in inconsistent builds.

[WARN ] 2022-03-11 10:01:11.130 [main] TransportTypeProvider - Inconsistent Netty versions detected: {netty-buffer=netty-buffer-4.1.73.Final.b5219aeb4e, netty-codec=netty-codec-4.1.73.Final.b5219aeb4e, netty-codec-dns=netty-codec-dns-4.1.73.Final.b5219aeb4e, netty-codec-haproxy=netty-codec-haproxy-4.1.73.Final.b5219ae (repository: dirty), netty-codec-http=netty-codec-http-4.1.73.Final.b5219aeb4e, netty-codec-http2=netty-codec-http2-4.1.73.Final.b5219ae (repository: dirty), netty-codec-socks=netty-codec-socks-4.1.73.Final.b5219ae (repository: dirty), netty-common=netty-common-4.1.73.Final.b5219aeb4e, netty-handler=netty-handler-4.1.73.Final.b5219aeb4e, netty-handler-proxy=netty-handler-proxy-4.1.73.Final.b5219ae (repository: dirty), netty-resolver=netty-resolver-4.1.73.Final.b5219aeb4e, netty-resolver-dns=netty-resolver-dns-4.1.73.Final.b5219aeb4e, netty-resolver-dns-classes-macos=netty-resolver-dns-classes-macos-4.1.73.Final.b5219aeb4e, netty-resolver-dns-native-macos=netty-resolver-dns-native-macos-4.1.73.Final.b5219aeb4e, netty-transport=netty-transport-4.1.73.Final.b5219aeb4e, netty-transport-classes-epoll=netty-transport-classes-epoll-4.1.73.Final.b5219ae (repository: dirty), netty-transport-native-epoll=netty-transport-native-epoll-4.1.45.Final.136db86, netty-transport-native-unix-common=netty-transport-native-unix-common-4.1.73.Final.b5219aeb4e} This means 1) you specified Netty versions inconsistently in your build or 2) the Netty JARs in the classpath were repackaged or shaded incorrectly. Specify the '-Dcom.linecorp.armeria.warnNettyVersions=false' JVM option to disable this warning at the risk of unexpected Netty behavior, if you think it is a false positive.

`IllegalStateException` when running phrase query

java.lang.IllegalStateException: field "service_name" was indexed without position data; cannot run PhraseQuery (phrase=service_name:"flannel be")
    at org.apache.lucene.search.PhraseQuery$1.getPhraseMatcher(PhraseQuery.java:497)
    at org.apache.lucene.search.PhraseWeight.scorer(PhraseWeight.java:64)
    at org.apache.lucene.search.Weight.scorerSupplier(Weight.java:136)
    at org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.scorerSupplier(LRUQueryCache.java:797)
    at org.apache.lucene.search.BooleanWeight.scorerSupplier(BooleanWeight.java:533)
    at org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.scorerSupplier(LRUQueryCache.java:797)
    at org.apache.lucene.search.BooleanWeight.scorerSupplier(BooleanWeight.java:533)
    at org.apache.lucene.search.BooleanWeight.scorer(BooleanWeight.java:499)
    at org.apache.lucene.search.Weight.bulkScorer(Weight.java:166)
    at org.apache.lucene.search.BooleanWeight.bulkScorer(BooleanWeight.java:395)
    at org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.bulkScorer(LRUQueryCache.java:931)
    at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:731)
    at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:655)
    at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:649)
    at com.slack.kaldb.logstore.search.LogIndexSearcherImpl.search(LogIndexSearcherImpl.java:131)
    at com.slack.kaldb.chunk.ReadWriteChunk.query(ReadWriteChunk.java:253)
    at com.slack.kaldb.chunkManager.ChunkManagerBase.lambda$query$2(ChunkManagerBase.java:88)
    at java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1700)
    at brave.propagation.CurrentTraceContext$1CurrentTraceContextRunnable.run(CurrentTraceContext.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)
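
This error means the field was indexed without positions. A minimal Lucene sketch of indexing a field with position data so PhraseQuery can run (the field name and FieldType settings are illustrative, not Astra's actual schema code):

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.FieldType;
import org.apache.lucene.index.IndexOptions;

public class PositionedFieldExample {
  public static Document buildDoc(String serviceName) {
    // Index docs, term frequencies, and positions so PhraseQuery can run against this field.
    FieldType withPositions = new FieldType();
    withPositions.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS);
    withPositions.setTokenized(true);
    withPositions.setStored(false);
    withPositions.freeze();

    Document doc = new Document();
    doc.add(new Field("service_name", serviceName, withPositions));
    return doc;
  }
}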

Clean error prone

Update the code to fix all Error Prone warnings.

Run mvn package on the repo and fix the Error Prone warnings it reports.

Create recovery task when no existing offset is persisted

Description

Today we start from the beginning of the topic retention, which can cause significant amounts of lag/delay before we are ingesting current data. We should instead always start from the current offset in this case, and create one or more recovery tasks to catch up on the missing data.
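
A rough sketch of the intended behavior, with illustrative names and types (not the actual RecoveryTaskCreator implementation):

import java.util.List;

/** Sketch of offset selection when no checkpoint exists; all names are illustrative. */
public final class StartOffsetSketch {
  /** Hypothetical description of a catch-up task handed off to a recovery node. */
  public record RecoveryTask(String partitionId, long startOffset, long endOffset) {}

  public static long chooseStartOffset(
      Long persistedOffset, long earliestOffset, long latestOffset,
      String partitionId, List<RecoveryTask> recoveryTasksOut) {
    if (persistedOffset != null) {
      return persistedOffset + 1; // resume from the checkpoint as before
    }
    // No checkpoint: start indexing current data immediately...
    // ...and create a recovery task covering the backlog we skipped.
    if (latestOffset > earliestOffset) {
      recoveryTasksOut.add(new RecoveryTask(partitionId, earliestOffset, latestOffset - 1));
    }
    return latestOffset;
  }
}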

Pre-processor should write data to different kafka cluster

Update the pre-processor to write data to a different Kafka cluster than the one it is consuming data from.

Currently we use Kafka Streams for this transformation, which is limited to reading and writing messages within the same Kafka cluster.
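
A minimal sketch of that alternative: a plain consumer/producer bridge between two clusters instead of Kafka Streams (bootstrap addresses, topic names, and the transform step are placeholders):

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class CrossClusterBridge {
  public static void main(String[] args) {
    Properties consumerProps = new Properties();
    consumerProps.put("bootstrap.servers", "source-kafka:9092"); // placeholder: source cluster
    consumerProps.put("group.id", "preprocessor");
    consumerProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    consumerProps.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");

    Properties producerProps = new Properties();
    producerProps.put("bootstrap.servers", "target-kafka:9092"); // placeholder: a different cluster
    producerProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    producerProps.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");

    try (KafkaConsumer<String, byte[]> consumer = new KafkaConsumer<>(consumerProps);
        KafkaProducer<String, byte[]> producer = new KafkaProducer<>(producerProps)) {
      consumer.subscribe(List.of("ingest-topic")); // placeholder topic
      while (true) {
        for (ConsumerRecord<String, byte[]> record : consumer.poll(Duration.ofMillis(250))) {
          // A transform step would apply the pre-processor logic here before forwarding.
          producer.send(new ProducerRecord<>("indexer-topic", record.key(), record.value()));
        }
      }
    }
  }
}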

Inconsistent Query Results Between Indexer and Query Service

I was running KalDB locally on a developer machine, and ingested a few thousand documents from October 13th, 2022 using the JSON log format.

When running the following query from the indexer service's Armeria page (port 8080), I get the expected results (the end time is on October 14th, 2022). However, when running the same query from the query service (port 8081), I get an empty response. I do get the expected results from the query service if I change the end time of the query to reflect the ingestion time rather than the data time.

`{
  "dataset": "test",
  "chunkIds": [],
  "queryString": "*:*",
  "startTimeEpochMs": "0",
  "endTimeEpochMs": "1665705600000",
  "howMany": 100
}`

From the 8080 results I can see that the @timestamp field on my data was ingested correctly, so I would expect the end time filter to be acting correctly. I spent some time looking through the Chunk-related and message parsing code, and the only issue I saw I filed as #505, which doesn't seem related.

Use consistent naming for metadata name/id

We currently use name and id interchangeably when referring to metadata - the primary identifier we call a "name," but when referencing it from other contexts we generally use "id".

https://github.com/slackhq/kaldb/blob/101fa32e08ec3b0662c12ab2892d3b3c53c66874/kaldb/src/main/proto/metadata.proto#L30-L35

https://github.com/slackhq/kaldb/blob/101fa32e08ec3b0662c12ab2892d3b3c53c66874/kaldb/src/main/proto/metadata.proto#L44-L46

We should decide on one version, and consistently use that (either name or id).

Add multitenancy support to Astra

We need to introduce the ability to support multitenancy through index names. The way this is roughly expected to happen is that certain indexers in a cluster will be responsible for dedicated indexes, as determined by the preprocessor. When an indexer creates a snapshot, it will mark which indexes are contained within that snapshot. As a query is received, snapshots will be filtered to only those containing the requested index.

If multiple indexes, wildcards, or glob-based index names are received, the query layer will resolve these to exact index matches when forwarding to the index/cache layer.

pre-processor > indexer(s)

         index-foo (indexers 1-10)
         index-bar (indexer 11)
         index-baz (indexer 11)

grafana > query > cache/indexer

         Query will look at inbound requested index(s), then filter to 
           snapshots that are known to have that data. Cache/indexer
           will also then filter to the exact requested indexes.

ChunkInfo start time may be too early

In ChunkInfo for a ReadWriteChunk, we initialize dataEndTimeEpochMs to a high sentinel value so that the first message's end time will always become the first end time, and then it can expand upward from there if any messages come in later with later timestamps.

However, we initialize dataStartTimeEpochMs to the chunk creation time, and it can then shrink earlier if any messages with earlier timestamps come in. This will lead to incorrect values if no message comes in with a timestamp earlier than or equal to the chunk creation time.

This doesn't seem to affect correctness, but it means that we may examine a chunk unnecessarily during search. Instead, the start time should be initialized to a low sentinel value so that the start time of the first message is always used.

Delete stale data from blobfs

Currently, blobfs can have additional objects that are not tracked by our metadata store. This can happen when an indexer service uploads files to S3 but fails to create a metadata entry for them.

Create a new service in the cluster manager that compares the data in the blob store with the metadata, and deletes any data from blobfs that shouldn't exist.

For cloud stores like AWS S3 and Azure blob store, we can use lifecycle policies to automatically prune this data after some time. So this functionality is more critical for stores like HDFS.
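
A hedged sketch of the comparison step using the AWS SDK v2 S3 client (the bucket, prefix, and metadata lookup are placeholders for the real cluster-manager service):

import java.util.Set;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.DeleteObjectRequest;
import software.amazon.awssdk.services.s3.model.ListObjectsV2Request;
import software.amazon.awssdk.services.s3.model.S3Object;

public class OrphanedObjectCleaner {
  /** Deletes objects under the given prefix that are not referenced by any snapshot metadata. */
  public static void deleteOrphans(S3Client s3, String bucket, String prefix, Set<String> knownChunkPrefixes) {
    ListObjectsV2Request listRequest =
        ListObjectsV2Request.builder().bucket(bucket).prefix(prefix).build();
    for (S3Object object : s3.listObjectsV2Paginator(listRequest).contents()) {
      // An object such as "log_XXXX/_3f.cfe" is tracked if some known chunk prefix matches its key.
      boolean tracked = knownChunkPrefixes.stream().anyMatch(object.key()::startsWith);
      if (!tracked) {
        s3.deleteObject(DeleteObjectRequest.builder().bucket(bucket).key(object.key()).build());
      }
    }
  }
}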

NPE when creating a recovery task on Indexer

We often see NPEs when creating a recovery task which leads to indexer restarts.

java.lang.NullPointerException
    at com.slack.kaldb.server.RecoveryTaskCreator.lambda$determineStartingOffset$5(RecoveryTaskCreator.java:154)
    at java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:176)
    at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655)
    at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
    at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
    at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913)
    at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
    at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578)
    at com.slack.kaldb.server.RecoveryTaskCreator.determineStartingOffset(RecoveryTaskCreator.java:155)
    at com.slack.kaldb.server.KaldbIndexer.indexerPreStart(KaldbIndexer.java:114)
    at com.slack.kaldb.server.KaldbIndexer.startUp(KaldbIndexer.java:81)
    at com.google.common.util.concurrent.AbstractExecutionThreadService$1$2.run(AbstractExecutionThreadService.java:61)
    at com.google.common.util.concurrent.Callables.lambda$threadRenaming$3(Callables.java:103)
    at java.base/java.lang.Thread.run(Thread.java:829)

Search metadata should register node type

Description

We should register what node type is providing a specific search metadata (indexer, cache) so that we can better pick executors, and provide better reporting on node failures.

Fail a recovery task after a hard timeout

Description

Sometimes recovery tasks fail for a variety of reasons: data in Kafka has expired, partition metadata is corrupt or missing, etc. Currently, the recovery task spins when this happens, so add a hard timeout after which the assigned task fails.

It may be better to mark the recovery task as failed so it is not assigned to another node again, and an admin can then handle it manually.
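
A minimal sketch of the hard-timeout check, with hypothetical task states standing in for whatever the recovery task metadata actually records:

import java.time.Duration;
import java.time.Instant;

public class RecoveryTaskTimeoutSketch {
  enum State { ASSIGNED, RECOVERING, FAILED } // hypothetical states

  /** Returns FAILED if the task has been running longer than the hard timeout. */
  static State checkHardTimeout(Instant startedAt, Duration hardTimeout, State current) {
    if (current == State.RECOVERING && Instant.now().isAfter(startedAt.plus(hardTimeout))) {
      // Mark the task failed instead of releasing it, so it is not endlessly
      // reassigned and an operator can handle it manually.
      return State.FAILED;
    }
    return current;
  }
}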

Incorrect ID field format for logs

Description

OpenSearch uses a custom time-based ID, which you can see here: https://github.com/opensearch-project/OpenSearch/blob/main/libs/common/src/main/java/org/opensearch/common/TimeBasedUUIDGenerator.java#L80-L118

We should consider moving to a similar format.
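
For illustration only, a time-prefixed ID along these lines (this is not the exact OpenSearch byte layout, just the general shape: a timestamp prefix plus a sequence counter and node component, URL-safe Base64 encoded):

import java.nio.ByteBuffer;
import java.security.SecureRandom;
import java.util.Base64;
import java.util.concurrent.atomic.AtomicInteger;

public class TimeBasedIdSketch {
  private static final AtomicInteger SEQUENCE = new AtomicInteger(new SecureRandom().nextInt());
  private static final byte[] NODE_ID = new byte[6];
  static {
    new SecureRandom().nextBytes(NODE_ID); // stand-in for a stable per-node identifier
  }

  /** Generates an ID whose most significant bytes are the timestamp, so IDs sort roughly by time. */
  public static String nextId() {
    ByteBuffer buffer = ByteBuffer.allocate(15);
    long nowMs = System.currentTimeMillis();
    buffer.put((byte) (nowMs >>> 40)).put((byte) (nowMs >>> 32)).put((byte) (nowMs >>> 24))
          .put((byte) (nowMs >>> 16)).put((byte) (nowMs >>> 8)).put((byte) nowMs); // 6 bytes of time
    int seq = SEQUENCE.incrementAndGet();
    buffer.put((byte) (seq >>> 16)).put((byte) (seq >>> 8)).put((byte) seq);       // 3-byte counter
    buffer.put(NODE_ID);                                                           // 6 node bytes
    return Base64.getUrlEncoder().withoutPadding().encodeToString(buffer.array());
  }
}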

LogDocumentBuilderImpl can overwrite fields

Description

In LogDocumentBuilderImpl#fromMessage, we first do

addProperty(doc, LogMessage.SystemField.TYPE.fieldName, message.getType()); 

and then later in the code do this

for (String key : message.source.keySet()) {
  addPropertyHandleExceptions(doc, key, message.source.get(key));
} 

I ran into a case where the source also had a field called "type" (we can't control incoming data). The field then ended up with two competing values from different places.
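
A hedged sketch of one way to avoid the collision: route conflicting source keys to a distinct field name instead of writing a second value into a reserved system field (the reserved-field set and field types are illustrative, not the real LogDocumentBuilderImpl logic):

import java.util.Map;
import java.util.Set;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;

public class FieldCollisionSketch {
  // Reserved system fields the builder writes before iterating the source map (illustrative set).
  private static final Set<String> SYSTEM_FIELDS = Set.of("type", "_id", "_index");

  static void addSourceFields(Document doc, Map<String, Object> source) {
    for (Map.Entry<String, Object> entry : source.entrySet()) {
      String key = entry.getKey();
      // If the source reuses a system field name, store it under a prefixed key
      // so the two values never compete.
      String fieldName = SYSTEM_FIELDS.contains(key) ? "source." + key : key;
      doc.add(new StringField(fieldName, String.valueOf(entry.getValue()), Field.Store.YES));
    }
  }
}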

Consider passing time params to mapping API

Description

When querying the mapping API (i.e., for the field autocomplete), we should potentially pass the configured date params. This would allow us to filter the autocomplete to only fields that were present during the query window.

This may get wonky with the Grafana / browser caching, so that would need to be considered in designing an approach here.

Consider exposing failure information to shard metadata

Description

ES returns failed-shard information, with details about which shards failed and why, when it can in the request/response cycle. This makes troubleshooting failures much easier, and isn't subject to the latency or dropped logs/traces that can accompany external monitoring systems.

We should consider supporting this as it makes operating the system much easier.

https://stackoverflow.com/questions/69585853/elasticsearch-3-of-280-shards-failed-error-has-anyone-seen-anything-like-this

Add tests for cache service

Add an integration test for testing cache service end to end in KaldbTest.

Also, add a unit test for searching CachingChunkManager.

Object storage missing cluster prefix

We currently prefix our object storage with the chunk id, as seen in the indexing chunk manager:
https://github.com/slackhq/kaldb/blob/101fa32e08ec3b0662c12ab2892d3b3c53c66874/kaldb/src/main/java/com/slack/kaldb/chunkManager/IndexingChunkManager.java#L201-L203

This results in the objects ending up in a final destination like s3://{BUCKET_NAME}/log_XXXX/_3f.cfe. Since the same bucket may be used by multiple instances, we should prefix these so that the final path looks like s3://{BUCKET_NAME}/{EXTRA_PREFIX}/log_XXXX/_3f.cfe.

This extra config would ideally be a sort of "global prefix" that would apply to all uses of the s3 client for that deployment.

Terminate Lucene query on a timeout

Currently, on a timeout or an interrupt, we can't terminate a Lucene query. Implement a mechanism to terminate a Lucene query when a timeout is hit. We may need something like ExitableDirectoryReader for this. Context in this diff: #419
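
A minimal sketch of what ExitableDirectoryReader-based termination could look like (the reader wiring, hit count, and error handling are illustrative):

import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.ExitableDirectoryReader;
import org.apache.lucene.index.QueryTimeoutImpl;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TopDocs;

public class TimeLimitedSearchSketch {
  static TopDocs searchWithTimeout(DirectoryReader reader, Query query, long timeoutMs) throws Exception {
    // Wrap the reader so long-running segment traversal throws once the time budget is exhausted.
    DirectoryReader timeLimited =
        ExitableDirectoryReader.wrap(reader, new QueryTimeoutImpl(timeoutMs));
    try {
      return new IndexSearcher(timeLimited).search(query, 100);
    } catch (ExitableDirectoryReader.ExitingReaderException e) {
      // Timeout hit: surface a clear error (or partial result) instead of letting the query spin.
      throw new RuntimeException("Query exceeded " + timeoutMs + "ms", e);
    }
  }
}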

Parameterized query timeouts

Description

Query timeouts should not primarily be configured at a consul/Armeria level for all queries, but should be dynamic depending on the amount of data being requested. This could be a query-level arg, as seen in OpenSearch - https://opensearch.org/docs/2.9/api-reference/search/

timeout (Time): How long the operation should wait for a response from active shards. Default is 1m.

Better metrics

The metrics emitted by KalDB can be improved in a few ways (see the sketch after this list):

  • Prefix all metric names with kaldb.
  • Emit a tag for every component name.
  • Add the cluster name as a tag.
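
A sketch of the naming and tagging pattern, assuming a Micrometer-style registry (the registry type, tag values, and meter name are illustrative rather than KalDB's actual metrics wiring):

import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

public class MetricNamingSketch {
  public static void main(String[] args) {
    MeterRegistry registry = new SimpleMeterRegistry();
    // Cluster and component tags applied to every meter registered afterwards.
    registry.config().commonTags("cluster", "astra-prod", "component", "indexer");

    // All metric names carry the kaldb. prefix.
    Counter ingested = Counter.builder("kaldb.messages.ingested").register(registry);
    ingested.increment();
  }
}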

When zookeeper reaches max bytebuffer exceptions are swallowed

Describe the bug

When ZooKeeper reaches the max bytebuffer size we seem to swallow exceptions, which then causes the apps to fail prior to service initialization. It's unclear what's happening, as the instance just enters a reboot loop with no obvious logs.

This should instead throw a clearer warning about what's going on - likely better error handling within the ZooKeeper metadata store is required.

On Curator LOST event restart the pod

Description

On a Curator LOST event, restart the pod. This might be the safer option.

Alternatively, check the Curator codebase to see whether this is already dealt with correctly in all cases.
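
A minimal sketch of such a listener using the standard Curator connection-state API (the exit call is a stand-in for whatever shutdown hook the pod actually uses):

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.state.ConnectionState;

public class CuratorLostHandlerSketch {
  static void registerLostHandler(CuratorFramework client) {
    client.getConnectionStateListenable().addListener(
        (curator, newState) -> {
          if (newState == ConnectionState.LOST) {
            // Session lost: ephemeral nodes (live snapshots, search metadata) are gone,
            // so restarting the pod is the safest way back to a known state.
            System.exit(1); // stand-in for a graceful shutdown hook
          }
        });
  }
}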

Preprocessor should test topic to ensure enough partitions exist

Description

Today, if you connect a preprocessor to a Kafka topic and configure it with more output partitions than exist on the topic, the misconfiguration is difficult to detect. This ends up throwing a significant number of confusing messages, as the preprocessor still attempts to register a streams application with an invalid config.

The preprocessor should test the validity of a dataset config (possibly using the admin client) and alert the user if it detects something that cannot be satisfied.
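
A hedged sketch of that validation using the Kafka admin client (topic name, bootstrap address, and required partition count are placeholders; allTopicNames() assumes kafka-clients 3.1+):

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.TopicDescription;

public class PartitionCountCheck {
  static void ensureEnoughPartitions(String bootstrapServers, String topic, int requiredPartitions)
      throws Exception {
    Properties props = new Properties();
    props.put("bootstrap.servers", bootstrapServers);
    try (Admin admin = Admin.create(props)) {
      // Describe the topic and compare its partition count against the dataset config.
      TopicDescription description =
          admin.describeTopics(List.of(topic)).allTopicNames().get().get(topic);
      int actual = description.partitions().size();
      if (actual < requiredPartitions) {
        throw new IllegalStateException(
            "Dataset config requires " + requiredPartitions + " partitions but topic '"
                + topic + "' only has " + actual);
      }
    }
  }
}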

[ERROR] LocalBlobFsTest.testFS:118 » IllegalArgument Illegal character in authority at...

Sorry for the silly question, but is this just for Hadoop, or what does it check the logs for?

I was looking at this code and running the initial build stage (what the README says to do), and I am getting this error about illegal characters. The error is in a URI, and the URI seems to be looking for a Hadoop service; my other thought is that this may just be an example use case.

But what is the intended use of this project? Sure, logging, but logging what, and how does it connect/plug in to other apps to do the logging?

Timestamp parsing in ElasticSearch API Issues

I recently tried to query the KalDB version of Elasticsearch's msearch API and got an HTTP 500 back. Upon looking at the server logs, I found a NullPointerException coming from OpenSearchRequest.getStartTimeEpochMs.

return body.get("query").findValue("gte").asLong();

The root cause was that my query didn't have a required timestamp range. This is a reasonable restriction, but the existing logic has some issues (a defensive parsing sketch follows this list):

  1. The query parsing shouldn't produce an NPE, which leads to an HTTP 500 response; that's incorrect. (It should be a 400 Bad Request or similar, because the actual problem was my query.)
  2. It would be preferable for KalDB to return a helpful error message to the client explaining that the timestamp range filter is required.
  3. The existing logic assumes that any use of "gte" is the start timestamp filter. What if a user is filtering the start timestamp by "gt" or "eq" instead?
  4. The existing logic assumes that any use of "lte" is the end timestamp filter. What if a user is filtering the end timestamp by "lt" or "eq"?
  5. What if a user is submitting a more complex Elasticsearch query with range filters on multiple fields? There's currently no logic ensuring that the "gte" or "lte" found corresponds to the timestamp field that KalDB is using as a partition key. The "findValue" Jackson method will return the first instance found.
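
A hedged sketch of more defensive parsing with Jackson, addressing points 1, 2, and (partially) 5: it looks only at the timestamp field's range clause and raises a client-facing error when the bound is missing (the field name, exception type, and method shape are illustrative):

import com.fasterxml.jackson.databind.JsonNode;

public class TimeRangeParsingSketch {
  /** Thrown to signal an HTTP 400 instead of letting an NPE bubble up as a 500. */
  static class InvalidQueryException extends RuntimeException {
    InvalidQueryException(String message) { super(message); }
  }

  static long getStartTimeEpochMs(JsonNode body, String timestampField) {
    // Look only at the range clause for the timestamp field, not the first "gte" anywhere in the body.
    JsonNode range = body.path("query").findPath("range").path(timestampField);
    JsonNode gte = range.path("gte");
    if (gte.isMissingNode()) {
      throw new InvalidQueryException(
          "A range filter on '" + timestampField + "' with a 'gte' bound is required");
    }
    return gte.asLong();
  }
}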

Bytes per chunk config should be GB (or MB) per chunk

Description

Today we accept a bytes-per-chunk config in the config file. This can make identifying small typos very difficult given the number of digits involved (e.g., 15000000000 bytes).

These should ideally be switched to GB (or MB) per chunk to make small typos easier to spot, since these values will primarily be configured in GB units.

Look into options for encoding SI suffixes in string values.
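
A small sketch of parsing such a human-readable size string into bytes (the accepted units and decimal multipliers are illustrative; an existing library utility could serve the same purpose):

import java.util.LinkedHashMap;
import java.util.Map;

public class SizeConfigSketch {
  // Ordered longest-suffix-first so "GB" is matched before "B".
  private static final Map<String, Long> UNITS = new LinkedHashMap<>();
  static {
    UNITS.put("GB", 1_000_000_000L);
    UNITS.put("MB", 1_000_000L);
    UNITS.put("KB", 1_000L);
    UNITS.put("B", 1L);
  }

  /** Parses values like "15GB" or "500MB" into a byte count. */
  static long parseBytes(String value) {
    String trimmed = value.trim().toUpperCase();
    for (Map.Entry<String, Long> unit : UNITS.entrySet()) {
      if (trimmed.endsWith(unit.getKey())) {
        String number = trimmed.substring(0, trimmed.length() - unit.getKey().length()).trim();
        return Long.parseLong(number) * unit.getValue();
      }
    }
    return Long.parseLong(trimmed); // plain byte count, e.g. "15000000000"
  }
}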

Allow KalDB to run in docker-compose

Currently docker-compose only runs KalDB's dependencies, such as ZooKeeper, Kafka, and Grafana. KalDB itself has a Dockerfile, but it's out of date and no longer works.

We should fix the Dockerfile and add the image to docker-compose under a new docker profile, so that "docker-compose up" still just runs the dependencies.

Indexers never expire LIVE snapshots

Describe the bug

Indexers currently never expire live snapshots. For long-running indexers, this can cause disk saturation issues, and it also causes delete failures once these snapshots reach the snapshot deletion service's deletion time.

Starting delete of snapshot SnapshotMetadata{name='LIVE_log_1689833167_dc25ce42-abbf-41ca-955e-e0888a66fe75', snapshotPath='LIVE', snapshotId='LIVE_log_1689833167_dc25ce42-abbf-41ca-955e-e0888a66fe75', startTimeEpochMs=1689228850484, endTimeEpochMs=1689837644041, maxOffset=45729812334, partitionId='83', indexType=LOGS_LUCENE9}
Exception deleting snapshot
error_root_cause_stack_trace:
    java.lang.IllegalArgumentException: Parameter 'Bucket' must not be null at software.amazon.awssdk.protocols.xml.internal.marshall.SimpleTypePathMarshaller.lambda$static$0(SimpleTypePathMarshaller.java:46) at software.amazon.awssdk.protocols.xml.internal.marshall.XmlProtocolMarshaller.doMarshall(XmlProtocolMarshaller.java:104) at software.amazon.awssdk.protocols.xml.internal.marshall.XmlProtocolMarshaller.marshall(XmlProtocolMarshaller.java:80) at software.amazon.awssdk.protocols.xml.internal.marshall.XmlProtocolMarshaller.marshall(XmlProtocolMarshaller.java:49) at software.amazon.awssdk.services.s3.transform.ListObjectsV2RequestMarshaller.marshall(ListObjectsV2RequestMarshaller.java:51) at software.amazon.awssdk.services.s3.transform.Li

To Reproduce

Long-running indexers will show high stale snapshot counts on reboots, along with an increase in exceptions when attempting to delete LIVE snapshots.

Deleting 38 stale snapshots: [SnapshotMetadata{name='LIVE_log_1690035728_cf55...

Expected behavior

Indexers should expire LIVE snapshots after a configurable time, or once cache node(s) start serving the content.

Mapping API is very slow to load

Describe the bug

On the Explore page, when I select a terms or date-histogram group by, the fields take forever to populate. This invokes the _mapping API, which occasionally times out.

Enable Error prone during compile

http://errorprone.info/bugpattern/JavaDurationWithNanos

The snippet below should trigger the JavaDurationWithNanos check, but the build succeeds:

Duration duration = Duration.ofSeconds(2);
Duration foo = duration.withNanos(10);

mvn compile
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  15.703 s
[INFO] Finished at: 2022-02-10T10:43:10-07:00
[INFO] ------------------------------------------------------------------------

No warnings are thrown during compilation either.
