
monix-kafka's Introduction

Monix

Asynchronous, Reactive Programming for Scala and Scala.js.


Overview

Monix is a high-performance Scala / Scala.js library for composing asynchronous, event-based programs.

It started as a proper implementation of ReactiveX, with stronger functional programming influences, designed from the ground up for back-pressure, made to interact cleanly with Scala's standard library, and compatible out-of-the-box with the Reactive Streams protocol. It then expanded to include abstractions for suspending side effects and for resource handling, and is one of the parents and implementors of Cats Effect.

A Typelevel project, Monix proudly exemplifies pure, typeful, functional programming in Scala, while being pragmatic, and making no compromise on performance.

Highlights:

  • exposes the kick-ass Observable, Iterant, Task, IO[E, A], and Coeval data types, along with all the support they need
  • modular, split into multiple sub-projects, only use what you need
  • designed for true asynchronicity, running on both the JVM and Scala.js
  • excellent test coverage, code quality, and API documentation as a primary project policy

Usage

Library dependency (sbt)

For the stable release (compatible with Cats, and Cats-Effect 2.x):

libraryDependencies += "io.monix" %% "monix" % "3.4.1"

Sub-projects

Monix 3.x is modular by design. See the sub-modules graph:


You can pick and choose:

  • monix-execution exposes the low-level execution environment, or more precisely Scheduler, Cancelable, Atomic, Local, CancelableFuture and other Future-based abstractions
  • monix-catnap exposes pure abstractions built on top of the Cats-Effect type classes; depends on monix-execution, Cats 1.x and Cats-Effect
  • monix-eval exposes Task, Coeval; depends on monix-execution
  • monix-reactive exposes Observable for modeling reactive, push-based streams with back-pressure; depends on monix-eval
  • monix-tail exposes Iterant streams for purely functional pull based streaming; depends on monix-eval and makes heavy use of Cats-Effect
  • monix provides all of the above

Documentation

See the usage documentation and the API documentation on the project's website (contributions are welcome).

Contributing

The Monix project welcomes contributions from anybody wishing to participate. All code or documentation you provide must be licensed under the Apache License 2.0; see LICENSE.txt.

You must follow the Scala Code of Conduct when discussing Monix on GitHub, in the Gitter channel, or in other venues.

Feel free to open an issue if you notice a bug, have an idea for a feature, or have a question about the code. Pull requests are also gladly accepted. For more information, check out the contributor guide.

Donations to help with ongoing maintenance are also welcome.

Adopters

Here's a (non-exhaustive) list of companies that use Monix in production. Don't see yours? Submit a PR ❤️

License

All code in this repository is licensed under the Apache License, Version 2.0. See LICENSE.

monix-kafka's People

Contributors

adrielvelazquez, alexandru, allantl, amitrai48, arun0009, avasil, cakper, clayrat, fdilg, joesan, leandrob13, lewapek, livelxw, mihaisoloi, paualarco, poslegm, scala-steward, sherwinschiu, vilinski, voidconductor, xelik


monix-kafka's Issues

How to detect KafkaProducerSink failures?

Hi!
I use the same approach for producing Kafka records as described in the documentation:

val producer = KafkaProducerSink[String,String](producerCfg, scheduler)

// Let's pretend we have this observable of records
val observable: Observable[ProducerRecord[???,???]] = ???

observable
  // on overflow, start dropping incoming events
  .whileBusyDrop
  // buffers into batches if the consumer is busy, up to a max size
  .bufferIntrospective(1024)
  // consume everything by pushing into Apache Kafka
  .consumeWith(producer)
  // ready, set, go!
  .runToFuture

But if a serialization error occurs, the Future that I get from the runToFuture method completes without any failure. Such behavior looks strange to me.
I am not a very experienced user of Monix, so maybe I am doing something wrong.

Thank you for your help!

Consumer can be rebalanced if poll is not called before `max.poll.interval.ms`

Starting from Kafka 0.10.1.0, there is the max.poll.interval.ms setting:

The maximum delay between invocations of poll() when using consumer group management. This places an upper bound on the amount of time that the consumer can be idle before fetching more records. If poll() is not called before expiration of this timeout, then the consumer is considered failed and the group will rebalance in order to reassign the partitions to another member.

Since monix-kafka backpressures until all the records have been processed, this could be a problem if processing takes time.

One solution could be to always poll every x seconds, and pause the partitions while the consumer is still busy processing, so that no new records are fetched (the poll still resets the timer).

void pause(Collection<TopicPartition> partitions) 
Suspend fetching from the requested partitions.

Alpakka-kafka and fs2-kafka already do this:

https://github.com/ovotech/fs2-kafka/blob/master/modules/core/src/main/scala/fs2/kafka/internal/KafkaConsumerActor.scala#L332

https://github.com/akka/alpakka-kafka/blob/master/core/src/main/scala/akka/kafka/internal/KafkaConsumerActor.scala#L435

This will probably require some changes; alternatively, we could just mention in the README to reduce max.poll.records if it hits the limit.
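For illustration, here is a minimal sketch of the pause/poll idea on top of the raw Java consumer (assuming a kafka-clients 2.x poll(Duration) API; this is not monix-kafka's current implementation):

import java.time.Duration
import org.apache.kafka.clients.consumer.KafkaConsumer
import scala.concurrent.Future

// Keep the consumer "alive" while downstream processing is still running:
// pause all assigned partitions, keep calling poll() so the group coordinator
// sees activity, then resume fetching once processing is done.
def keepAliveWhileProcessing[K, V](consumer: KafkaConsumer[K, V], processing: Future[Unit]): Unit = {
  consumer.pause(consumer.assignment())
  while (!processing.isCompleted) {
    consumer.poll(Duration.ofMillis(100)) // returns no records while paused, but resets the timer
  }
  consumer.resume(consumer.paused())
}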

monix-kafka-11 not released

First off, thank you for monix-kafka. We have been successfully using monix-kafka-9 for almost a year now, pulling 4M+ measurements/sec off a Kafka cluster with no hiccups.

We are upgrading our internal clusters to Kafka 0.11 in the next few weeks and I noticed monix-kafka-11 was never released to mvnrepository:
https://mvnrepository.com/artifact/io.monix

Would this be possible? Was there a specific reason for not doing so? Can we help in any way?

Many thanks for the excellent work!

MAINTAINER WANTED!

My time is now consumed entirely by work, monix, cats-effect, funfix and family, and I no longer have the capacity to maintain other projects.

monix-kafka has actual users and I could use some help in maintaining it.

@Avasil has kindly offered to help and he has been doing a good job, but between this and his other contributions, I don't want him to burn out.

So if you're interested in maintaining monix-kafka, please add a mention below.

Confusion about the internals of ProducerSink

I'd like to understand why we are suggesting an I/O scheduler and using the blocking version of send for the producer. I guess there must be a reason to prefer this over creating tasks that correspond to the completion of each send. Is it a performance thing? I think even if you didn't want to create a task per send, you could happily register a callback that increments a batch counter and completes a single task with all the metadata / exceptions rolled up into it.

What am I missing?
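For reference, here is a sketch of the non-blocking alternative described above (assuming Monix 3.x; this is not the current KafkaProducerSink implementation): wrap the producer's callback-based send in a Task that completes when the broker acknowledges the record.

import monix.eval.Task
import org.apache.kafka.clients.producer.{Callback, KafkaProducer, ProducerRecord, RecordMetadata}

def sendAsync[K, V](producer: KafkaProducer[K, V], record: ProducerRecord[K, V]): Task[RecordMetadata] =
  Task.async { cb =>
    producer.send(record, new Callback {
      def onCompletion(meta: RecordMetadata, err: Exception): Unit =
        if (err != null) cb.onError(err) else cb.onSuccess(meta)
    })
  }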

Use Apache Kafka Producer and Consumer Java Map Constructor instead of Properties

Currently, when instantiating a new KafkaProducer and KafkaConsumerObservable, I've been running into inconsistent "class not found" exceptions, in particular for the serializer classes. This is due to inconsistencies in the classloader paths: sometimes the thread's classloader finds the class and sometimes it doesn't. The KafkaProducer and KafkaConsumerObservable pass the serializer along to Apache Kafka's producer and consumer, however Apache Kafka's producer and consumer retrieve the class name of the serializers and add it to the Properties instance instead. Eventually, they attempt to instantiate the serializers again via the thread's classloader. I created an issue for Apache Kafka here.

So given the current state of Apache Kafka, and the need for a more consistent way to instantiate new Monix Kafka producers and consumer observables, I'm proposing we simply use the Java Map version of the constructor instead of Properties, which preserves the passed-in instances and doesn't perform the unnecessary step of reloading the class. The proposed solution simply replaces the Properties instances with their Java Map equivalent.
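For illustration, a sketch of what the proposal amounts to at the kafka-clients level (hypothetical wiring, not the monix-kafka code): pass already-instantiated serializers to the constructor so Kafka never has to reload them by class name through the thread's classloader.

import scala.collection.JavaConverters._
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig}
import org.apache.kafka.common.serialization.StringSerializer

val configs: java.util.Map[String, AnyRef] = Map[String, AnyRef](
  ProducerConfig.BOOTSTRAP_SERVERS_CONFIG -> "localhost:9092"
).asJava

// The Map-based constructor keeps the serializer instances as-is instead of
// round-tripping them through a class name in a Properties object.
val producer = new KafkaProducer[String, String](configs, new StringSerializer, new StringSerializer)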

New release?

Are there any plans for a new release? I think the manual commit might be useful

KafkaConsumerConfig and KafkaProducerConfig apply helpers do not properly set properties

When you call KafkaConsumerConfig(source = someTypesafeConfig), you would expect that any settings not explicitly modeled by monix's KafkaConsumerConfig would get passed on via properties, but that is not the case, as it is being set to properties = Map.empty:
https://github.com/monix/monix-kafka/blob/master/kafka-0.11.x/src/main/scala/monix/kafka/KafkaConsumerConfig.scala#L439
Similarly with KafkaProducerConfig.

Explicit signature for `KafkaObservableConsumer.autoCommit`

Hello!

Currently, the way to create a Kafka auto-commit consumer observable is by using the apply method of KafkaConsumerObservable. To me it would be more explicit to have a separate method for that purpose, named autoCommit.

Also I think that the current public method createConsumer(): Task[Consumer[K, V]] could be private, since it should only be used internally.

What are your thoughts?
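For illustration, a hypothetical sketch of what the explicit constructor could look like, simply delegating to the existing apply (names and placement are just a suggestion):

import monix.kafka.{Deserializer, KafkaConsumerConfig, KafkaConsumerObservable}
import monix.reactive.Observable
import org.apache.kafka.clients.consumer.ConsumerRecord

// Hypothetical alias for the auto-commit observable:
def autoCommit[K: Deserializer, V: Deserializer](
    cfg: KafkaConsumerConfig,
    topics: List[String]): Observable[ConsumerRecord[K, V]] =
  KafkaConsumerObservable[K, V](cfg, topics)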

Change map + errorHandle to Task.redeem

I saw map + onErrorHandle in KafkaProducerSink, and possibly in more places.
Monix's Task now has redeem and redeemWith, which can do it in one go, so it is slightly more efficient.
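For illustration, the difference in a tiny standalone example (not the actual KafkaProducerSink code):

import monix.eval.Task

val task = Task("not-a-number".toInt)

// two combinators, two transformations:
val before: Task[Int] = task.map(_ + 1).onErrorHandle(_ => -1)

// fused into a single operation:
val after: Task[Int] = task.redeem(_ => -1, _ + 1)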

KafkaProducer send locks threads

Monix KafkaProducer's send is locking the scheduler's threads when tasks are executed in parallel, e.g.

val topic = "example"
val sc = Scheduler.forkJoin(12, 12)
val producer = KafkaProducer[String, Array[Byte]](config, sc)

val tasks = List.fill(12)("test")
Task.wanderUnordered(tasks)(producer.send(topic, _)).runAsync(...)

here's the output of async-profiler with the -e lock flag:

--- 321416121577 ns (44.57%), 19443 samples
  [ 0] monix.kafka.KafkaProducer$Implementation
  [ 1] monix.kafka.KafkaProducer$Implementation.$anonfun$send$2
  [ 2] monix.kafka.KafkaProducer$Implementation$$Lambda$7060.1333366705.run
  [ 3] akka.dispatch.TaskInvocation.run
  [ 4] akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec
  [ 5] akka.dispatch.forkjoin.ForkJoinTask.doExec
  [ 6] akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask
  [ 7] akka.dispatch.forkjoin.ForkJoinPool.runWorker
  [ 8] akka.dispatch.forkjoin.ForkJoinWorkerThread.run

This happens because of the synchronization on the producer, and it leads to performance degradation due to thread locking and the impossibility of sending in parallel.

Given that Apache's KafkaProducer is thread-safe and asynchronous, one can benefit from parallel sending if the batch size and linger time are high enough: when send is called in parallel, it puts messages and callbacks on the producer's internal buffer, sends them in batches, and completes the callbacks on its own I/O thread.

Create benchmarking module

It would be useful to have a convenient way to run benchmarks, to watch for regressions and potentially improve performance even further. I think there are a few low-hanging fruits, but I don't know for sure without measuring.

I'm not sure what the best way is, perhaps Docker images. I will have to look around to see what others are using in different libraries, blog posts, etc.

kafka 1.0 has support for time based offsets.

Kafka 1.0 and onward have time-based offsets. Currently there is only a boolean which lets you start from the latest offset (or not) when an observable starts.

I propose an extension to also support starting from "now minus a finite duration", and also starting from the offset at a certain point in time.
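For reference, a minimal sketch of what the underlying consumer API already offers for this (assuming a Kafka >= 0.10.1 client and Scala 2.12's JavaConverters), which the proposed extension could build on:

import java.time.Instant
import scala.collection.JavaConverters._
import org.apache.kafka.clients.consumer.KafkaConsumer

// Look up the offsets corresponding to a timestamp and seek every assigned partition there.
def seekToTimestamp[K, V](consumer: KafkaConsumer[K, V], since: Instant): Unit = {
  val query = consumer.assignment().asScala
    .map(tp => tp -> java.lang.Long.valueOf(since.toEpochMilli))
    .toMap.asJava
  consumer.offsetsForTimes(query).asScala.foreach {
    case (tp, offsetAndTimestamp) if offsetAndTimestamp != null =>
      consumer.seek(tp, offsetAndTimestamp.offset())
    case _ => () // no record at or after the timestamp for this partition
  }
}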

Scalafmt config

Is it possible to add a scalafmt config to this repo, like in the main monix project?

There is a rule in contributing guidelines:

Follow the structure of the code in this repository and the indentation rules used.

But it is difficult to follow it without a formal description of the code style.

I can take on the job of adding a config and formatting the code base.

Failing to commit offset at the end of the Observable

Hello,
I'm using monix-kafka's KafkaConsumerObservable.manualCommit consumer to consume data from kafka in a batch application.
We consume data until there is no more (using .timeoutOnSlowUpstreamTo(5.seconds, Observable.empty) to detect the end).
I'm processing the data, and at the end of the Observable (but still part of the Observable), I commit the offsets with a CommittableOffsetBatch.

The commit fails with:

java.lang.IllegalStateException: This consumer has already been closed.

Here is my main logic, FYI:

 private def logic(bootstrapServer: String, topic: String) = {
    val kafkaConfig: KafkaConsumerConfig = KafkaConsumerConfig.default.copy(
      bootstrapServers = List(bootstrapServer),
      groupId = "failing-logic",
      autoOffsetReset = AutoOffsetReset.Earliest
    )
    KafkaConsumerObservable
      .manualCommit[String, String](kafkaConfig, List(topic))
      .timeoutOnSlowUpstreamTo(5.seconds, Observable.empty)
      .foldLeft(CommittableOffsetBatch.empty) { case (batch, message) => batch.updated(message.committableOffset) }
      .mapEval(completeBatch => completeBatch.commitAsync())
      .headOrElseL(List.empty)
  }

And here is a repo with a ready sbt project to reproduce it : https://github.com/fchaillou/monix-kafka-issue

Let me know if there is anything else I can do to help.
Thanks,
Fabien

kafka 0.9.0 client offset error

Let me start with the output:

[2017-11-24 17:24:57,018] [DEBUG] [UpdatesService] [] Finished inserting 100 new record(s) into MetadataRepository.
[2017-11-24 17:24:58,149] [DEBUG] [UpdatesService] [] Finished inserting 100 new record(s) into MetadataRepository.
[2017-11-24 17:24:58,253] [ERROR] [o.a.k.c.c.i.ConsumerCoordinator] [] Error ILLEGAL_GENERATION occurred while committing offsets for group vies-metadata-listener
[2017-11-24 17:24:58,253] [WARN ] [o.a.k.c.c.i.ConsumerCoordinator] [] Auto offset commit failed: Commit cannot be completed due to group rebalance
[2017-11-24 17:24:58,254] [ERROR] [o.a.k.c.c.i.ConsumerCoordinator] [] Error ILLEGAL_GENERATION occurred while committing offsets for group vies-metadata-listener
[2017-11-24 17:24:58,254] [WARN ] [o.a.k.c.c.i.ConsumerCoordinator] [] Auto offset commit failed: 
[2017-11-24 17:24:59,537] [DEBUG] [UpdatesService] [] Finished inserting 100 new record(s) into MetadataRepository.
[2017-11-24 17:25:00,655] [DEBUG] [UpdatesService] [] Finished inserting 100 new record(s) into MetadataRepository.
[2017-11-24 17:25:01,780] [DEBUG] [UpdatesService] [] Finished inserting 100 new record(s) into MetadataRepository.

Even though I am not spending seconds between consecutive polls, I am seeing the ILLEGAL_GENERATION error, which according to Internet gurus is thrown when the Kafka consumer group rebalances because too much time was spent between polls.

[Question] How do you manually commit the offset after processing each record

Hi,
I'm trying to achieve a manual commit for each record; these are my settings:

  val consumerCfg = KafkaConsumerConfig.default.copy(
    bootstrapServers = List("localhost:9092"),
    groupId = "monixGroup",
    enableAutoCommit = false,
    observableCommitType = ObservableCommitType.Sync,
    observableCommitOrder = ObservableCommitOrder.AfterAck
  )

The following is my consumer Observable:

val monixConsumer = KafkaConsumerObservable[String, String](consumerCfg, List("monix"))
val consumer = monixConsumer
  .bufferTimedWithPressure(1.second, 5)
  .map(deserialize(_))
  .map(callApi(_))
  .runAsyncGetLast

It only commits the offsets once it's done processing all the elements, not after each batch of 5.
I want to commit every batch manually; is there a way to do this?
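For illustration, a sketch of committing per batch with the manual-commit observable (assuming a monix-kafka version that has KafkaConsumerObservable.manualCommit, mentioned in the "New release?" issue above; deserialize and callApi are the hypothetical processing steps from the question):

import monix.kafka.{CommittableOffsetBatch, KafkaConsumerConfig, KafkaConsumerObservable}
import scala.concurrent.duration._

val manualCfg = KafkaConsumerConfig.default.copy(
  bootstrapServers = List("localhost:9092"),
  groupId = "monixGroup",
  enableAutoCommit = false
)

val committed = KafkaConsumerObservable
  .manualCommit[String, String](manualCfg, List("monix"))
  .bufferTimedAndCounted(1.second, 5)
  .mapEval { batch =>
    // process the batch first, then commit the offsets it contains
    val processed = batch.map(msg => callApi(deserialize(msg.record)))
    val offsets = batch.foldLeft(CommittableOffsetBatch.empty) {
      case (acc, msg) => acc.updated(msg.committableOffset)
    }
    offsets.commitAsync().map(_ => processed)
  }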

Web documentation 📖

Looking at how nice and organised the documentation of other Monix projects (bio and connect) is, I thought it could be a good idea to migrate the kafka documentation to the web too.

And while migrating it, some other points could be added as well, such as producer and consumer configurations, how the different ser/des are handled in monix-kafka, the Avro schema registry, and finally some tips and instructions for local testing (embedded vs Docker).

[Question] Manual offset commit

Is it possible to have fine control about when an offset is committed?

I've been looking at ObservableCommitOrder and ObservableCommitType. In ObservableCommitType it says "...specifies to do a commit before/after acknowledgement is received from downstream". Is there a way to manually signal an acknowledgement?

Or can I disable offset committing completely and call consumer.commitSync() myself?

Thanks! :)

Kafka Streams API support

Adding support for the Streams API [1] offers some benefits and is well matched to Monix's strengths, since it would allow interoperability with the KStream DSL [2]. This could be implemented as a Monix KafkaStreamSubject in monix-kafka, as the Streams API is used for both consuming and publishing to topics.

[1] https://docs.confluent.io/current/streams/introduction.html#the-kafka-streams-api-in-a-nutshell
[2] https://docs.confluent.io/current/streams/javadocs/org/apache/kafka/streams/kstream/KStream.html

Allow monix.kafka.Serializer and monix.kafka.Deserializer to provide instances manually

In many cases, the reflective creation of standard (de)serializers is acceptable. However, in some circumstances, it would be desirable to still use monix.kafka.Serializer and monix.kafka.Deserializer implicitly, but to define explicitly how the underlying classes should be instantiated. A driving use case is the Confluent.io (de)serializers, namely those that take the Schema Registry client as a constructor parameter.

If the authors also find this desirable, then the follow-up PR could be a possible solution.

Callback-accepting Sink/Observable?

@alexandru
I feel it's a somewhat common scenario to want to do something after writing to Kafka - like sending some sort of metrics for the uploaded data, or acknowledging the write in some kind of database. However, it seems this is not really possible with the existing facilities. Do you think we could write some kind of callback-accepting version of KafkaProducerSink, or even make an Observable version of it?

ConsumerObservable doesn't propagate the cancellation

During the upgrade to 1.0.0-RC5 we've noticed that the consumer keeps polling messages after it was terminated. A minimal reproduction case was added in this PR:
https://github.com/monix/monix-kafka/compare/master...cakper:cancelable-repro?expand=1

I've noticed that if doAfterSubscribe is set before bufferTimedWithPressure then the consumer is not terminated properly, but if I change it to be after, then it is.

Upon investigation @Avasil has found in the Observer implementation:

https://github.com/monix/monix-kafka/blob/v1.0.0-RC5/kafka-1.0.x/src/main/scala/monix/kafka/KafkaConsumerObservableAutoCommit.scala#L77

Observer.feed always returns Continue if the iterator is empty, so downstream doesn't get the chance to propagate the cancellation, and it keeps polling until there is anything available in the topic.

I'm not sure about the fix, though. On the user side you could probably use takeUntil, or takeUntilEval + Deferred, which will cancel the subscription (along with the Task), or just cancel the observableTask itself.
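For illustration, a sketch of the user-side workaround mentioned above (assuming Monix 3.x with Cats-Effect 2.x; cfg and the implicit Scheduler needed to run the Task are assumed to be defined elsewhere):

import cats.effect.concurrent.Deferred
import monix.eval.Task
import monix.kafka.KafkaConsumerObservable
import scala.concurrent.duration._

val program: Task[Unit] =
  for {
    stop  <- Deferred[Task, Unit]
    fiber <- KafkaConsumerObservable[String, String](cfg, List("topic"))
               .takeUntilEval(stop.get)      // completes the stream when `stop` is completed
               .foreachL(record => println(record.value()))
               .start
    _     <- Task.sleep(10.seconds) >> stop.complete(()) // later: signal shutdown
    _     <- fiber.join
  } yield ()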

Offsets issue

Hi,

I want the offset to stay the same each time the inner function in foreachL crashes with an exception. It appears it doesn't work like this: after changing the consumer group to something else, running this code (with just one message in the topic "topic") throws an error the first time, as expected, but the second time it works, because foreachL is not called anymore since the record has already been consumed.

What am I doing wrong?

Thank You,

object Main extends App {

  private val consumerCfg = KafkaConsumerConfig.default.copy(
    bootstrapServers = List("localhost:9092"),
    groupId = "foo-7",
    enableAutoCommit = false,
    autoOffsetReset = AutoOffsetReset.Earliest,
    observableCommitOrder = ObservableCommitOrder.AfterAck,
    observableCommitType = ObservableCommitType.Sync
  )

  val f = KafkaConsumerObservable[String, String](consumerCfg, List("topic"))
    .timeoutOnSlowUpstream(5.seconds)
    .foreachL { _ ⇒ throw new Exception("crash") }
    .runAsync(Scheduler.io())

  Await.result(f, Duration.Inf)
}

Commit after all connected multicasted observables are finished processing a record

I am creating multiple observables (one per Kafka topic):

val kafkaConsumerConfig = KafkaConsumerConfig.default.copy(
      bootstrapServers = kConfig.bootstrapServers,
      groupId = kConfig.groupId,
      enableAutoCommit = false,
      observableCommitOrder = ObservableCommitOrder.AfterAck,
      observableCommitType = ObservableCommitType.Sync,
      autoOffsetReset = AutoOffsetReset.Latest
    )

val observable = Observable.merge(topics.map {
    provider => KafkaStream
      .creatConsumer(provider)
  } : _*)

val multiCast = observable.multicast(Pipe.publish[ConsumerRecord[A, B]])

... // multiple subscribers subscribe separately to the multiCast

multicast.connect()

It doesn't seem like the commit back to Kafka is issued after all subscribers have finished processing (I am using a combination of .subscribe() and .foreach to subscribe). It seems to commit after any of them completes. Is this possible, or am I doing something incorrectly?

Unable to access default params with a custom config

If you have a customized config file ala

kafka {
 consumer {
...
 }
 producer {
...
 }
}
app {
...
}

you cannot use the default values anymore, because the default fallback config in the *Config objects for the consumer/producer assumes that everything sits directly under the kafka key.

One easy solution here is to just drop the apply variant that takes a rootPath, and pass in the sub-configs directly.

Version for 0.11?

Any idea how to deal with 0.11? I think none of the code needs changing, only kafka-clients should be bumped. Are we okay with duplicating the code, or is there a smarter solution?

Manual commitAsync completes before actual commit

In KafkaConsumerObservableManualCommit.scala:56 there's an incorrect implementation of the async commit:

override def commitBatchAsync(batch: Map[TopicPartition, Long], callback: OffsetCommitCallback): Task[Unit] =
  Task {
    blocking(consumer.synchronized(consumer.commitAsync(batch.map {
      case (k, v) => k -> new OffsetAndMetadata(v)
     }.asJava, callback)))
  }

Apache Kafka's commitAsync(offsets, callback) returns immediately and invokes the callback when the commit completes.

This Task completes when consumer.commitAsync returns, not when the callback is invoked, and it also ignores possible commit errors that are passed through the callback. It should probably be rewritten using Task.async.
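For illustration, a sketch of the Task.async-based fix (assuming Monix 3.x; consumer and the Java/Scala converters are in scope as in the snippet above):

def commitBatchAsync(batch: Map[TopicPartition, Long]): Task[Unit] =
  Task.async { cb =>
    val offsets = batch.map { case (k, v) => k -> new OffsetAndMetadata(v) }.asJava
    consumer.synchronized {
      consumer.commitAsync(offsets, new OffsetCommitCallback {
        def onComplete(
            committed: java.util.Map[TopicPartition, OffsetAndMetadata],
            error: Exception): Unit =
          // complete the Task only when Kafka invokes the callback,
          // and propagate commit errors instead of swallowing them
          if (error != null) cb.onError(error) else cb.onSuccess(())
      })
    }
  }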

kafka 1.0

Kafka 1.0 has been released and it would be cool to support it. I could give it a try, but I'm not sure how to include it: another subproject?

Detecting Consumer Failures

I have a case with an unpredictable delay in the processing time of messages. Following the Kafka consumer documentation, I modified KafkaConsumerObservable.runLoop as follows:

def runLoop(consumer: KafkaConsumer[K, V]): Task[Unit] = {
      val ackTask: Task[Ack] = Task.unsafeCreate { (context, cb) =>
        implicit val s = context.scheduler
        s.executeAsync { () =>
          context.frameRef.reset()
          val ackFuture =
            try consumer.synchronized {
              if (context.connection.isCanceled) Stop
              else {
                val next = blocking(consumer.poll(pollTimeoutMillis))
                // Pasue partition
                blocking(consumer.pause(consumer.assignment()))
                Observer.feed(out, next.asScala)(out.scheduler)
              }
            } catch {
              case NonFatal(ex) =>
                Future.failed(ex)
            }

          ackFuture.syncOnComplete {
            case Success(ack) =>
              var streamErrors = true
              try consumer.synchronized {
                if (context.connection.isCanceled) {
                  streamErrors = false
                  cb.asyncOnSuccess(Stop)
                } else {
                  // Resume partition and commit offset
                  consumer.resume(consumer.assignment())
                  consumerCommit(consumer)
                  streamErrors = false
                  cb.asyncOnSuccess(ack)
                }
              } catch {
                case NonFatal(ex) =>
                  if (streamErrors) cb.asyncOnError(ex)
                  else s.reportFailure(ex)
              }

            case Failure(ex) =>
              cb.asyncOnError(ex)
          }
        }
      }

      ackTask.flatMap {
        case Stop     => Task.unit
        case Continue => runLoop(consumer)
      }
    }

Is it possible to handle this use case without modifying the exposed observable?
Is it worth creating a PR for this case?

Thanks!

Subscribing to topics regex support

It seems like the Java consumer API has the option to pass a regex for the topics to subscribe to.
It would be nice to support this in monix-kafka. The simplest solution is probably adding new methods to KafkaConsumerObservable.
I will come up with a PR soon.
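For reference, the underlying Java API this would build on (sketch only; consumer is assumed to be in scope):

import java.util.regex.Pattern
import java.util.{Collection => JCollection}
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener
import org.apache.kafka.common.TopicPartition

// Subscribe to every topic matching the regex instead of a fixed topic list.
consumer.subscribe(Pattern.compile("events-.*"), new ConsumerRebalanceListener {
  def onPartitionsAssigned(partitions: JCollection[TopicPartition]): Unit = ()
  def onPartitionsRevoked(partitions: JCollection[TopicPartition]): Unit = ()
})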
