
go-dcp-kafka's Introduction

Go Dcp Kafka

A Go implementation of the Couchbase Kafka connector (Kafka Connect Couchbase).

Go Dcp Kafka streams documents from Couchbase through the Database Change Protocol (DCP) and publishes Kafka events in near real-time.

Features

  • Lower resource usage and higher throughput (see Benchmarks).
  • Custom Kafka key and header implementation (see Example).
  • Sending multiple Kafka events for a single DCP event (see Example and the sketch after this list).
  • Handling different DCP events such as expiration, deletion, and mutation (see Example and the sketch after this list).
  • Kafka compression support (Gzip, Snappy, Lz4, Zstd).
  • Kafka producer acknowledgement support (fire-and-forget, wait for the leader, wait for the full ISR).
  • Metadata can be saved to Couchbase or Kafka.
  • Managing batch configurations such as maximum batch size, batch bytes, and batch ticker duration.
  • Scaling up and down via custom membership algorithms (Couchbase, KubernetesHa, Kubernetes StatefulSet, or Static; see examples).
  • Easily manageable configuration.
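
The following is a minimal sketch of a mapper that handles the different DCP event types and emits multiple Kafka messages for a single event. The IsDeleted and IsExpired flags, and the Headers field typed as kafka.Header from segmentio/kafka-go, are assumptions about the couchbase.Event and message.KafkaMessage types; verify them against your version of the package. It is a drop-in alternative to the mapper in the Example section below.

func multiMessageMapper(event couchbase.Event) []message.KafkaMessage {
	// Deletions and expirations: emit a tombstone-style message with a nil value.
	// IsDeleted and IsExpired are assumed flags on couchbase.Event.
	if event.IsDeleted || event.IsExpired {
		return []message.KafkaMessage{{Key: event.Key, Value: nil}}
	}

	// Mutations: emit two messages for the same event, the second one
	// tagged with a custom header (kafka.Header from segmentio/kafka-go).
	return []message.KafkaMessage{
		{Key: event.Key, Value: event.Value},
		{
			Headers: []kafka.Header{{Key: "eventType", Value: []byte("mutation")}},
			Key:     event.Key,
			Value:   event.Value,
		},
	}
}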

Benchmarks

The benchmark was run with 1,001,006 Couchbase documents, a volume at which the difference in batch behavior between the two packages can be observed clearly. The default configurations of the Java Kafka Connect Couchbase were used for both connectors.

| Package | Time to Process Events | Average CPU Usage (Core) | Average Memory Usage |
|---|---|---|---|
| Go Dcp Kafka (Go 1.20) | 12s | 0.383 | 428MB |
| Java Kafka Connect Couchbase (JDK 11) | 19s | 1.5 | 932MB |

Example

Struct Config

package main

// Import paths assume the Trendyol go-dcp-kafka and go-dcp module layouts;
// adjust them to the versions you use.
import (
	"time"

	dcpkafka "github.com/Trendyol/go-dcp-kafka"
	"github.com/Trendyol/go-dcp-kafka/config"
	"github.com/Trendyol/go-dcp-kafka/couchbase"
	"github.com/Trendyol/go-dcp-kafka/kafka/message"
	dcpConfig "github.com/Trendyol/go-dcp/config"
)

// mapper converts each DCP event into zero or more Kafka messages.
func mapper(event couchbase.Event) []message.KafkaMessage {
	// Return nil if you wish to discard the event.
	return []message.KafkaMessage{
		{
			Headers: nil,
			Key:     event.Key,
			Value:   event.Value,
		},
	}
}

func main() {
	c, err := dcpkafka.NewConnector(&config.Connector{
		Dcp: dcpConfig.Dcp{
			Hosts:      []string{"localhost:8091"},
			Username:   "user",
			Password:   "password",
			BucketName: "dcp-test",
			Dcp: dcpConfig.ExternalDcp{
				Group: dcpConfig.DCPGroup{
					Name: "groupName",
					Membership: dcpConfig.DCPGroupMembership{
						RebalanceDelay: 3 * time.Second,
					},
				},
			},
			// Checkpoint metadata is stored in Couchbase here; see the
			// Kafka Metadata Configuration section for the Kafka option.
			Metadata: dcpConfig.Metadata{
				Config: map[string]string{
					"bucket":     "checkpoint-bucket-name",
					"scope":      "_default",
					"collection": "_default",
				},
				Type: "couchbase",
			},
			Debug: true,
		},
		Kafka: config.Kafka{
			CollectionTopicMapping: map[string]string{"_default": "topic"},
			Brokers:                []string{"localhost:9092"},
		},
	}, mapper)
	if err != nil {
		panic(err)
	}

	defer c.Close()
	c.Start() // blocks until the stream is closed
}

File Config
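
A minimal file-config sketch mirroring the struct config above. The key names are assumed from the yaml tags of the go-dcp and go-dcp-kafka config structs; check the example config files in the repository for the authoritative layout.

hosts:
  - localhost:8091
username: user
password: password
bucketName: dcp-test
dcp:
  group:
    name: groupName
    membership:
      rebalanceDelay: 3s
metadata:
  type: couchbase
  config:
    bucket: checkpoint-bucket-name
    scope: _default
    collection: _default
kafka:
  collectionTopicMapping:
    _default: topic
  brokers:
    - localhost:9092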

Configuration

Dcp Configuration

Check out the DCP configuration options on go-dcp.

Kafka Specific Configuration

| Variable | Type | Required | Default | Description |
|---|---|---|---|---|
| kafka.collectionTopicMapping | map[string]string | yes | | Defines which topic each Couchbase collection's events are sent to. ⚠️ If topic information is set in the mapper, it OVERWRITES this config (see the sketch after this table). |
| kafka.brokers | []string | yes | | Broker IP and port information. |
| kafka.producerBatchSize | integer | no | 2000 | Maximum message count per batch; a flush is triggered when it is exceeded. |
| kafka.producerBatchBytes | 64 bit integer | no | 10mb | Maximum batch size in bytes; a flush is triggered when it is exceeded. |
| kafka.producerBatchTimeout | time.Duration | no | 1ns | Time limit on how often incomplete message batches are flushed. |
| kafka.producerMaxAttempts | int | no | math.MaxInt | Limit on how many attempts are made to deliver a message. |
| kafka.producerBatchTickerDuration | time.Duration | no | 10s | Interval at which the batch is flushed automatically, for messages waiting in the batch for a long time. |
| kafka.readTimeout | time.Duration | no | 30s | segmentio/kafka-go timeout for read operations. |
| kafka.writeTimeout | time.Duration | no | 30s | segmentio/kafka-go timeout for write operations. |
| kafka.compression | integer | no | 0 | Compression can be used if the message size is large; CPU usage may be affected. 0=None, 1=Gzip, 2=Snappy, 3=Lz4, 4=Zstd. |
| kafka.balancer | string | no | Hash | Balancer strategy. Available values: Hash, LeastBytes, RoundRobin, ReferenceHash, CRC32Balancer, Murmur2Balancer. |
| kafka.requiredAcks | integer | no | 1 | segmentio/kafka-go: number of acknowledgements from partition replicas required before receiving a response to a produce request. 0=fire-and-forget (do not wait for acknowledgements), 1=wait for the leader to acknowledge the writes, -1=wait for the full ISR to acknowledge the writes. |
| kafka.secureConnection | bool | no | false | Enable secure Kafka connection. |
| kafka.rootCAPath | string | no | *not set | Root CA path. |
| kafka.interCAPath | string | no | *not set | Intermediate CA path. |
| kafka.scramUsername | string | no | *not set | SCRAM username. |
| kafka.scramPassword | string | no | *not set | SCRAM password. |
| kafka.metadataTTL | time.Duration | no | 60s | TTL for the metadata cached by segmentio/kafka-go; increase it to reduce network requests. For more detail, please check the docs. |
| kafka.metadataTopics | []string | no | | Topic names for the metadata cached by segmentio/kafka-go; list here the topics the connector may produce to. In large Kafka clusters this reduces memory usage. For more detail, please check the docs. |
| kafka.clientID | string | no | | Unique identifier that the transport communicates to the brokers when it sends requests. For more detail, please check the docs. |
| kafka.allowAutoTopicCreation | bool | no | false | Create the topic if it is missing. For more detail, please check the docs. |
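
As the warning on kafka.collectionTopicMapping says, topic information set in the mapper overrides that config. A hedged sketch of doing so per message; the Topic field on message.KafkaMessage is an assumption, so verify that it exists under that name in your version:

func topicOverrideMapper(event couchbase.Event) []message.KafkaMessage {
	topic := "override-topic" // takes precedence over kafka.collectionTopicMapping
	return []message.KafkaMessage{
		{
			Key:   event.Key,
			Value: event.Value,
			Topic: &topic, // assumed *string field on message.KafkaMessage
		},
	}
}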

Kafka Metadata Configuration (use this if you want to store checkpoint data in Kafka)

| Variable | Type | Description |
|---|---|---|
| metadata.type | string | Metadata storage type: kafka, file, or couchbase. |
| metadata.readOnly | bool | Set this for debugging state purposes. |
| metadata.config | map[string]string | Key-value config entries. For the kafka type: topic, partition, replicationFactor (see the sketch below). |
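
A sketch of the corresponding struct config for Kafka-backed metadata, slotting into the Dcp config of the struct example above and using the keys from the table (the topic name and numbers are illustrative):

Metadata: dcpConfig.Metadata{
	Type: "kafka",
	Config: map[string]string{
		"topic":             "checkpoint-topic", // illustrative name
		"partition":         "1",
		"replicationFactor": "1",
	},
},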

Exposed metrics

| Metric Name | Description | Labels | Value Type |
|---|---|---|---|
| cbgo_kafka_connector_latency_ms_current | Time to add an event to the batch. | N/A | Gauge |
| cbgo_kafka_connector_batch_produce_latency_ms_current | Time to produce the messages in the batch. | N/A | Gauge |

You can also use all of the DCP-related metrics explained here; they are injected automatically, so you don't need to do anything.

Breaking Changes

| Date taking effect | Date announced | Change | How to check |
|---|---|---|---|
| November 11, 2023 | November 11, 2023 | Creating the connector via a builder | Compile the project |

Contributing

Go Dcp Kafka is always open for direct contributions. For more information please check our Contribution Guideline document.

License

Released under the MIT License.

go-dcp-kafka's People

Contributors

abdulsametileri, ademekici, canerpatir, emrekosen, emreodabas, emretanriverdi, enesyalinkaya, erayarslan, erdincozdemir, gulumseraslann, halilkocaoz, henesgokdag, mhmtszr, oguzhantasimaz, oguzyildirim, ramazan

go-dcp-kafka's Issues

Ephemeral bucket DCP issue

Describe the bug

  • When using an ephemeral bucket type (the metadata bucket is also ephemeral), the connector starts successfully, but mutation events are not picked up by the connector. There are no error logs; everything seems OK. After triggering a mutation from Couchbase and running the project locally, no events are received from DCP. We thought it might be on the DCP side, but we tested the same bucket and collection via a Couchbase Eventing function that receives mutation/delete logs, and it works.

Version

  • OS: macOS & Linux
  • Golang version 1.20
  • Couchbase 7.0+
  • connector 0.0.58

Add mapstructure tags to config structures for viper config read package support

Is your feature request related to a problem? Please describe.
When using the viper package to read config files, we can't unmarshal the config directly into the struct provided by the package, config.Connector, because viper uses mapstructure tags while the package uses yaml tags. Since the dcp field in config.Connector carries the yaml:",inline" tag, viper is unable to read the inlined config fields.

Describe the solution you'd like
Adding mapstructure tags to the config structs would resolve this.

custom logger not passed to dcp client

Describe the bug
When passing a custom logger (e.g., the one in the standard library of golang), it is passed to the go-kafka-connect-couchbase part but not the go-dcp-client part.

To Reproduce
Steps to reproduce the behavior:

  1. Create a custom logger like logger = log.New(os.Stdout, "cb2kafka: ", log.Ldate | log.Ltime | log.Llongfile)
  2. Pass it to the connector like connector, err := cb2kafka.NewConnectorWithLoggers(configFile, mapper, logger, logger)
  3. Run the program and see logs like the following:

cb2kafka: 2023/03/21 15:34:40 /root/go-cb-connector/main.go:40: loading config file from: /etc/configs/config.yml
{"level":"debug","time":"2023-03-21T15:34:40Z","message":"loaded checkpoint"}
{"level":"debug","time":"2023-03-21T15:34:40Z","message":"stream started"}
{"level":"info","time":"2023-03-21T15:34:40Z","message":"dcp stream started"}
{"level":"info","time":"2023-03-21T15:34:40Z","message":"metric middleware registered on path /metrics"} {"level":"info","time":"2023-03-21T15:34:40Z","message":"api starting on port 8080"} cb2kafka: 2023/03/21 15:34:50 /go/pkg/mod/github.com/segmentio/[email protected]/writer.go:1123: writing 1 messages to dev-cb6-bucket-20230310 (partition: 28) │
{"level":"debug","time":"2023-03-21T15:34:50Z","message":"saved checkpoint"}

Expected behavior
All the logs should look like the first line.

Screenshots
N/A

Version (please complete the following information):

  • go-kafka-connect-couchbase v0.0.21
  • go-dcp-client v0.0.23

Additional context
Add any other context about the problem here.

Mapper should return more than one message

func mapper(event couchbase.Event) *message.KafkaMessage {
	// return nil if you wish to filter out the event
	return message.GetKafkaMessage(event.Key, event.Value, nil)
}

Instead of *message.KafkaMessage, we should return a slice of messages.

Add callback function when ack is received from Kafka

We want to use go-dcp-kafka to send the Kafka events in our outbox bucket. When we receive an ack from Kafka, we want to remove the event document from the outbox bucket.

Perhaps users could register a callback function that is invoked when an ack is received from Kafka.

Program auto exits after a few seconds

Describe the bug
About 5 seconds after starting the go-kafka-connect-couchbase connector, the health check fails and the connector exits on its own.

To Reproduce
Steps to reproduce the behavior:

  1. extend the example main.go to print the received cb-dcp events
  2. copy the example main.go and config.yml to an ubuntu box
  3. run the code like go run main.go
  4. See error

{"level":"debug","time":"2023-03-21T04:39:33Z","message":"vbucket discovery opened with membership type: static"}
{"level":"info","time":"2023-03-21T04:39:33Z","message":"member: 1/1, vbucket range: 0-1023"}
{"level":"debug","time":"2023-03-21T04:39:33Z","message":"loaded checkpoint"}
{"level":"debug","time":"2023-03-21T04:39:33Z","message":"stream started"}
{"level":"debug","time":"2023-03-21T04:39:33Z","message":"started checkpoint schedule"}
{"level":"info","time":"2023-03-21T04:39:33Z","message":"dcp stream started"}
{"level":"info","time":"2023-03-21T04:39:33Z","message":"metric middleware registered on path /metrics"}
{"level":"info","time":"2023-03-21T04:39:33Z","message":"api starting on port 8080"}
{"level":"debug","time":"2023-03-21T04:39:43Z","message":"no need to save checkpoint"}
{"level":"error","error":"context deadline exceeded","time":"2023-03-21T04:39:48Z","message":"health check failed"}
{"level":"debug","time":"2023-03-21T04:39:48Z","message":"vbucket discovery closed"}
{"level":"debug","time":"2023-03-21T04:39:48Z","message":"no need to save checkpoint"}
{"level":"debug","time":"2023-03-21T04:39:48Z","message":"stopped checkpoint schedule"}
{"level":"debug","time":"2023-03-21T04:39:49Z","message":"stream stopped"}
{"level":"debug","time":"2023-03-21T04:39:49Z","message":"api stopped"}
{"level":"debug","time":"2023-03-21T04:39:49Z","message":"dcp connection closed
{"level":"debug","time":"2023-03-21T04:39:49Z","message":"connections closed
{"level":"info","time":"2023-03-21T04:39:49Z","message":"dcp stream closed"}

Expected behavior
It's expected to continuously print the cb dcp events as long as a workload is fed into cb.

Screenshots
N/A

Version (please complete the following information):

  • OS: Linux ubuntu-box 5.4.231-137.341.amzn2.x86_64
  • Golang version: go1.18.1 linux/amd64
  • Couchbase: Community Edition 6.5.0 build 4966
  • Kafka: 3.3.1

Additional context
Add any other context about the problem here.

chore: return error when creating new connector and producer

In the current implementation, we don't return an error while creating a new connector [1]. If we fail to create the DCP client [2], we can return an error.

In the current implementation, we don't return an error while creating a new producer [3]. If we fail to create the producer, we can return an error.

Footnotes

  1. https://github.com/Trendyol/go-kafka-connect-couchbase/blob/master/connector.go#L80

  2. https://github.com/Trendyol/go-kafka-connect-couchbase/blob/master/connector.go#L90

  3. https://github.com/Trendyol/go-kafka-connect-couchbase/blob/master/kafka/producer/producer.go#L29

Add source timestamp to couchbase.Event

Is your feature request related to a problem? Please describe.
When using the cb-dcp-kafka connector to build a CDC data pipeline, we need to study metrics such as the latencies in each component to perform the right optimizations. Currently, the couchbase.Event passed to the mapper only has the key, value, and type. As a result, we cannot calculate the time spent between when an event occurs in CB and when the event is captured/processed by the connector.

Describe the solution you'd like
The DCP logs should have the information such as when an event is logged or persisted in the CB database. It would be very convenient if you could expose a timestamp like EventTime in couchbase.Event. Then we could easily calculate the latencies between CB and connector.

Describe alternatives you've considered
In the absence of such a timestamp, we have to add one to the CB documents when a doc is created. However, such a timestamp may not be a natural part of the application data model. Moreover, it reflects when the doc is created in the application, not when it is persisted in the database. This is inaccurate and intrusive.

Additional context
The Java-based CB connector does not expose this timestamp either. The Debezium database source connectors generally expose two timestamps, source.ts_ms and ts_ms. Kafka consumer SDKs also expose the timestamp at which an event is persisted in Kafka. Those timestamps are very useful for investigating latencies and optimizations.
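
A sketch of how the requested timestamp could be used in a mapper to measure CB-to-connector latency, assuming the standard library time and log packages are imported. EventTime is the hypothetical field this issue asks for; it does not necessarily exist under that name.

func latencyAwareMapper(event couchbase.Event) []message.KafkaMessage {
	// EventTime is the hypothetical source timestamp requested in this issue.
	latency := time.Since(event.EventTime)
	log.Printf("cb-to-connector latency: %s", latency)
	return []message.KafkaMessage{{Key: event.Key, Value: event.Value}}
}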

keep state info on kafka

Is your feature request related to a problem? Please describe.
Currently the dcp-kafka connector writes the state info (checkpoints) back to Couchbase, which magnifies the workload on CB. In my tests, for example, I generated 4k RPS to CB but observed 15k RPS there, i.e., almost 3x extra workload on CB. This may not be acceptable in production. Moreover, conceptually, CDC is meant to be non-intrusive to the source databases. Keeping the state info on CB causes problems not only in terms of capacity but also by breaking that read-only promise or expectation.

Describe the solution you'd like
At least make it an option for the dcp-kafka connector to keep the state info on Kafka instead of CB.

Describe alternatives you've considered
Define an interface so the developer can choose to use Kafka or CB. The two paths will implement the same interface.

Additional context
In my use case, we are definitely not allowed to write to the CB cluster. CDC is meant to be read-only.

Health check endpoint

We need to check if the connector works or not via an endpoint.

It would be great if the connector provided an endpoint like "/_healthcheck" that returns the status of the connector, such as the Kafka status and DCP status.

fatal concurrency error at startup time

Describe the bug
When starting the go-kafka-connect-couchbase connector, there is sometimes an error right after the "stream started" and "dcp stream started" messages: "fatal error: concurrent map iteration and map write". It looks like a race condition between registering the metric middleware on path /metrics and starting the api.

To Reproduce
Steps to reproduce the behavior:

  1. extend the example main.go to print the received cb-dcp events
  2. copy the example main.go and config.yml to an ubuntu box
  3. run the code like go run main.go
  4. See error

{"level":"debug","time":"2023-03-21T04:39:33Z","message":"vbucket discovery opened with membership type: static"}
{"level":"debug","time":"2023-03-18T01:00:41Z","message":"ready to stream member number: 1, vBuckets range: 0-1023"}
{"level":"debug","time":"2023-03-18T01:00:42Z","message":"loaded checkpoint"}
{"level":"debug","time":"2023-03-18T01:00:42Z","message":"stream started"}
{"level":"debug","time":"2023-03-18T01:00:42Z","message":"started checkpoint schedule"}
{"level":"info","time":"2023-03-18T01:00:42Z","message":"dcp stream started"}
fatal error: concurrent map iteration and map write

goroutine 405 [running]:
github.com/Trendyol/go-dcp-client.(*metricCollector).Collect(0xc0206e27e0, 0x1759d4a?)
/go/pkg/mod/github.com/!trendyol/[email protected]/metric.go:45 +0x11e
github.com/prometheus/client_golang/prometheus.DescribeByCollect.func1()
/go/pkg/mod/github.com/prometheus/[email protected]/prometheus/collector.go:90 +0x2b
created by github.com/prometheus/client_golang/prometheus.DescribeByCollect
/go/pkg/mod/github.com/prometheus/[email protected]/prometheus/collector.go:89 +0x9c

Expected behavior
It's expected to continuously print the cb dcp events as long as a workload is fed into cb.

Screenshots
N/A

Version (please complete the following information):

  • OS: Linux ubuntu-box 5.4.231-137.341.amzn2.x86_64
  • Golang version: go1.18.1 linux/amd64
  • Couchbase: Community Edition 6.5.0 build 4966
  • Kafka: 3.3.1

Additional context
Add any other context about the problem here.

Update brokers config as list

Instead of getting the list of brokers as a string, we need to be able to give it as a list.

Example:

  brokers:
    - broker1
    - broker2

Secure kafka support

Is your feature request related to a problem? Please describe.
We should support secure Kafka connections.

Describe the solution you'd like
Add support for establishing secure connections to Kafka.

Connector cannot restart after closed

Describe the bug
We want to implement a dynamic configuration system in which, after updating the config, the connector needs to be closed and started again. But we're getting a "duplicate metrics collector registration attempted" error when the second connector registers its Prometheus metrics collector.

To Reproduce
Create a connector, start it, close it.
Then create a new connector and start it.

Expected behavior
When closing the connector, it should unregister all Prometheus collectors; not doing so causes the duplicate registration error.

Stack Trace

panic: duplicate metrics collector registration attempted
goroutine 4671 [running]:
github.com/prometheus/client_golang/prometheus.(*Registry).MustRegister(0x140003f4a50, {0x14026b26f00, 0x1, 0x1})
        /Users/firat.feroglu/go/pkg/mod/github.com/prometheus/[email protected]/prometheus/registry.go:405 +0xf4
github.com/Trendyol/go-dcp/api.NewMetricMiddleware(0x1401982f200, 0x1400032c908, {0x1024f5d40, 0x1401ff720f0}, {0x1024f9ea0, 0x14019c49da0}, {0x1024e4908, 0x1401ff7e5a0}, {0x1401ffc4a10, 0x1, ...})
        /Users/firat.feroglu/go/pkg/mod/github.com/!trendyol/[email protected]/api/metric.go:345 +0xfc
github.com/Trendyol/go-dcp/api.NewAPI(0x1400032c908, {0x1024f9ea0, 0x14019c49da0}, {0x1024f5d40, 0x1401ff720f0}, {0x0, 0x0}, {0x1024e4908, 0x1401ff7e5a0}, {0x1401ffc4a10, ...})
        /Users/firat.feroglu/go/pkg/mod/github.com/!trendyol/[email protected]/api/api.go:97 +0x21c
github.com/Trendyol/go-dcp.(*dcp).Start.func1()
        /Users/firat.feroglu/go/pkg/mod/github.com/!trendyol/[email protected]/dcp.go:150 +0x138
created by github.com/Trendyol/go-dcp.(*dcp).Start in goroutine 4862
        /Users/firat.feroglu/go/pkg/mod/github.com/!trendyol/[email protected]/dcp.go:144 +0x774
Exiting.

Check whether given topic exists on given broker

The problem is that validation is needed to check whether a given topic exists on a specific broker. Currently, there is no way to determine whether a topic exists on a broker, and this validation will be necessary for certain functionality in the application.

We can add this validation when creating the producer in producer.go.

There are several Go libraries that can list all available topics; since we use kafka-go, there is a way to do it: https://github.com/segmentio/kafka-go#to-list-topics
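
A sketch of that validation based on the linked kafka-go topic-listing example; the broker address and topic name are placeholders.

package main

import (
	"fmt"
	"log"

	"github.com/segmentio/kafka-go"
)

// topicExists dials the broker and scans the partitions of all topics.
func topicExists(broker, topic string) (bool, error) {
	conn, err := kafka.Dial("tcp", broker)
	if err != nil {
		return false, err
	}
	defer conn.Close()

	// With no arguments, ReadPartitions returns the partitions of all topics.
	partitions, err := conn.ReadPartitions()
	if err != nil {
		return false, err
	}
	for _, p := range partitions {
		if p.Topic == topic {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	exists, err := topicExists("localhost:9092", "topic")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("topic exists:", exists)
}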

Add resume/pause connector endpoint

We need to be able to pause/resume the consumption of the connector.

We would introduce two endpoints /pause and /resume.

We can call /pause endpoint to pause consumption.

We can call the /resume endpoint to start consumption again, so we need to store the connector's state so it can continue working after resuming.

Collection to topic map config support

We need to send Kafka messages to a different topic for each Couchbase collection. The Java connector has a "couchbase.collection.to.topic" config; it would be great to have the same config here.

Example usage:

"couchbase.collection.to.topic": "scope.collection1=topic1,scope.collection2=topic2"

Get document from destination bucket

Is your feature request related to a problem? Please describe.
We feed one bucket with Eventing. When an event is received, we want to check a record in that bucket to see whether a specific field has been updated. Therefore, we need to fetch the relevant document from the destination bucket and make a comparison.

Describe the solution you'd like
We want to be able to get a document from the destination bucket.

cb-kafka connector metrics

Is your feature request related to a problem? Please describe.

  1. When using the go-dcp-client and go-kafka-connect-couchbase libraries to build my own cb-kafka connector, I wish to be able to control at the connector level whether to start Prometheus metrics collection, and which port the metrics endpoint runs on.
  2. When using Prometheus/Grafana to visualize the metrics in k8s, it's hard to do so because the metrics in general do not have labels.
  3. It's hard to tell the data rate and size of the critical (Kafka or CB) metadata topic because there are no metrics about its operation; these would be very useful for estimating the additional resource requirements (on Kafka or CB).

Describe the solution you'd like

  1. Provide methods for application developers to start metrics collection on a specified port.
  2. Add some default k8s labels to the metrics (such as app, instance, pod, job, namespace).
  3. Add some metrics covering metadata-topic reads and writes.

Describe alternatives you've considered
N/A

Additional context
You may need to provide different sets of labels for different types of membership. At least we must be able to tell which metrics are from which member of the dcp group.

very high dcp latencies in v0.0.40

Describe the bug
After upgrading from go-kafka-connect-couchbase v0.0.37 (go-dcp-client v0.0.39) to v0.0.40 (go-dcp-client v0.0.45), huge DCP latencies were observed (with the API enabled or disabled), p99 hitting 100s. After reverting to v0.0.37, the latencies went back to normal, with p99 around 6-7ms.

To Reproduce
Steps to reproduce the behavior:

  1. set up couchbase, dcp-kafka connector, and kafka on k8s
  2. set up prometheus to visualize the dcp latencies and total number of mutation events
  3. feed workload to CB to generate 4k/s mutation updates
  4. dcp latencies keep increasing all the way to 100s, while the number of events keeps dropping
  5. repeat the above with the two different versions

Expected behavior
After upgrading the versions, the same level of dcp latencies should be maintained.

Version (please complete the following information):

  • OS: linux/amd64
  • Golang version 1.18
  • Couchbase 6.5
  • Kafka 3.4.0
