
flink-clickhouse-sink's Introduction

Flink-ClickHouse-Sink


Description

Flink sink for ClickHouse database. Powered by Async Http Client.

High-performance library for loading data to ClickHouse.

It has two triggers for loading data: by timeout and by buffer size.
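Conceptually, a buffered batch is flushed as soon as either trigger fires. The sketch below only illustrates that idea in terms of the properties described later in this README; it is not the library's actual code:

// Conceptual sketch only (not the library's implementation): a buffered batch
// is flushed when either the size trigger or the timeout trigger fires.
final class FlushTrigger {
    static boolean shouldFlush(int bufferedRows, int maxBufferSize,
                               long lastFlushMillis, long timeoutSec) {
        boolean bySize = bufferedRows >= maxBufferSize;                                         // clickhouse.sink.max-buffer-size
        boolean byTimeout = System.currentTimeMillis() - lastFlushMillis >= timeoutSec * 1000;  // clickhouse.sink.timeout-sec
        return bySize || byTimeout;
    }
}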

Version map

flink      flink-clickhouse-sink
1.3.*      1.0.0
1.9.*      1.3.4
1.9.*      1.4.*

Install

Maven Central
<dependency>
  <groupId>ru.ivi.opensource</groupId>
  <artifactId>flink-clickhouse-sink</artifactId>
  <version>1.4.0</version>
</dependency>

Usage

Properties

The flink-clickhouse-sink uses two groups of configuration properties: a common (global) group and a per-sink group for each sink in your operator chain.

The common part (used globally):

clickhouse.sink.num-writers - number of writers, which build and send requests,

clickhouse.sink.queue-max-capacity - max capacity (in batches) of the blanks queue,

clickhouse.sink.timeout-sec - timeout for loading data,

clickhouse.sink.retries - max number of retries,

clickhouse.sink.failed-records-path - path for failed records,

clickhouse.sink.ignoring-clickhouse-sending-exception-enabled - required boolean parameter that controls whether a ClickHouse sending exception is raised in the main thread (false) or ignored (true). If it is true, an exception while sending to ClickHouse is ignored and the failed data automatically goes to disk. If it is false, the sending exception is thrown in the "main" thread (the thread that called ClickHouseSink::invoke) and the data also goes to disk.

The sink part (used per sink in the chain):

clickhouse.sink.target-table - target table in ClickHouse,

clickhouse.sink.max-buffer-size - buffer size.

In code

Configuration: global parameters

First, add the global parameters to the Flink environment:

StreamExecutionEnvironment environment = StreamExecutionEnvironment.createLocalEnvironment();
Map<String, String> globalParameters = new HashMap<>();

// ClickHouse cluster properties
globalParameters.put(ClickHouseClusterSettings.CLICKHOUSE_HOSTS, ...);
globalParameters.put(ClickHouseClusterSettings.CLICKHOUSE_USER, ...);
globalParameters.put(ClickHouseClusterSettings.CLICKHOUSE_PASSWORD, ...);

// sink common
globalParameters.put(ClickHouseSinkConst.TIMEOUT_SEC, ...);
globalParameters.put(ClickHouseSinkConst.FAILED_RECORDS_PATH, ...);
globalParameters.put(ClickHouseSinkConst.NUM_WRITERS, ...);
globalParameters.put(ClickHouseSinkConst.NUM_RETRIES, ...);
globalParameters.put(ClickHouseSinkConst.QUEUE_MAX_CAPACITY, ...);
globalParameters.put(ClickHouseSinkConst.IGNORING_CLICKHOUSE_SENDING_EXCEPTION_ENABLED, ...);

// set global parameters
ParameterTool parameters = ParameterTool.fromMap(globalParameters);
environment.getConfig().setGlobalJobParameters(parameters);
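For illustration, a filled-in version could look like the snippet below; the host, credentials and tuning values are placeholders chosen for this example, not defaults of the library:

// example values only - adjust to your cluster and workload
globalParameters.put(ClickHouseClusterSettings.CLICKHOUSE_HOSTS, "http://localhost:8123/");
globalParameters.put(ClickHouseClusterSettings.CLICKHOUSE_USER, "default");
globalParameters.put(ClickHouseClusterSettings.CLICKHOUSE_PASSWORD, "");

globalParameters.put(ClickHouseSinkConst.TIMEOUT_SEC, "5");
globalParameters.put(ClickHouseSinkConst.FAILED_RECORDS_PATH, "/tmp/failed-records");
globalParameters.put(ClickHouseSinkConst.NUM_WRITERS, "2");
globalParameters.put(ClickHouseSinkConst.NUM_RETRIES, "2");
globalParameters.put(ClickHouseSinkConst.QUEUE_MAX_CAPACITY, "10");
globalParameters.put(ClickHouseSinkConst.IGNORING_CLICKHOUSE_SENDING_EXCEPTION_ENABLED, "false");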

Converter

The main thing: the clickhouse-sink works with events serialized as String in ClickHouse insert format (similar to CSV). You have to convert your event to this format, as for a usual insert into a database.

For example, you have an event-pojo:

class A {
   public final String str;
   public final int integer;
   
   public A(String str, int i){
       this.str = str;
       this.integer = i;
   }
}

You have to implement a converter to this format using the interface:

public interface ClickHouseSinkConverter<T> {
    String convert(T record);
}

You convert the pojo like this:

import ru.ivi.opensource.flinkclickhousesink.ClickHouseSinkConverter;

public class YourEventConverter implements ClickHouseSinkConverter<A> {

    @Override
    public String convert(A record) {
        StringBuilder builder = new StringBuilder();
        builder.append("(");

        // add record.str
        builder.append("'");
        builder.append(record.str);
        builder.append("', ");

        // add record.integer
        builder.append(String.valueOf(record.integer));
        builder.append(")");
        return builder.toString();
    }
}
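Note that string values are wrapped in single quotes in the generated row, so data that may contain quotes or backslashes has to be escaped. A minimal helper for that (an illustration, not part of flink-clickhouse-sink) could be:

// Hypothetical helper: escape a string for the ClickHouse insert format
// (backslash-escape backslashes and single quotes) before appending it.
static String escape(String value) {
    return value.replace("\\", "\\\\").replace("'", "\\'");
}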

And then add your sink to the chain:

// create table props for sink
Properties props = new Properties();
props.put(ClickHouseSinkConst.TARGET_TABLE_NAME, "your_table");
props.put(ClickHouseSinkConst.MAX_BUFFER_SIZE, "10000");

// converter
YourEventConverter converter = new YourEventConverter();       

// build chain
DataStream<YourEvent> dataStream = ...;
dataStream.addSink(new ClickHouseSink(props, converter))
          .name("your_table ClickHouse sink);

Roadmap

  • reading files from "failed-records-path"
  • migrate to gradle

flink-clickhouse-sink's People

Contributors

aleksanchezz, ashulenko, dependabot[bot], eksd, innerpeacez, mchernyakov, moweonlee


flink-clickhouse-sink's Issues

Insert query format customization

I want to sink data via an insert query like the one below.

INSERT INTO jordy(user_id, message) SELECT 10, saleDate AS aa FROM sale_date LIMIT 1;

But the insert query format is like this:

INSERT INTO %s VALUES %s

How can I customize the insert query?

The distribution of resources, urgent and serious problems

Urgent and serious problems!!!
In the production environment, I used flink-clickhouse-sink-1.1.0 and my Flink version is 1.9.3. My configuration is as follows:
ClickHouseSinkConst.TIMEOUT_SEC=60
ClickHouseSinkConst.FAILED_RECORDS_PATH=
ClickHouseSinkConst.NUM_WRITERS=64
ClickHouseSinkConst.NUM_RETRIES=3
ClickHouseSinkConst.QUEUE_MAX_CAPACITY=10000
My program runs normally for a period of time and then reports an error:
ru.ivi.opensource.flinkclickhousesink.applied.ClickhouseWriter$WriterTask Error While inserting data
java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1220)....

I would like to know what caused this exception.
Can I fix this if I upgrade to the latest version, 1.3.1?
I'm very anxious. Thank you for your help!!

The interface for converter to ClickHouse format

@aleksanchezz @ashulenko
Probably, to make using this sink more obvious, we should add a specific interface, something like this:

public interface ClickHouseSinkConverter<T> {
  String convert(T record);
}

And we can use it directly in the sink:

ClickHouseSinkConverter<SomeClass> chConverter = ....;
DataStream<YourEvent> dataStream = ...;
dataStream.addSink(new ClickhouseSink(props, chConverter))
                   .name("your_table clickhouse sink);

And it can be used directly inside https://github.com/ivi-ru/flink-clickhouse-sink/blob/master/src/main/java/ru/ivi/opensource/flinkclickhousesink/ClickhouseSink.java#L53.

Added a JAR file on Flink SQL but can't create a Clickhouse sink table properly

Hi,

I am trying to create a ClickHouse (CH) sink table using Flink SQL. I am new to using Flink SQL, and I was trying to create a sink table to store data into CH. To do so,

  1. I downloaded flink-clickhouse-sink-1.3.1.jar from https://jar-download.com/artifacts/ru.ivi.opensource/flink-clickhouse-sink/1.3.1

  2. Included it in the CLI;

  3. I created a target table in CH

CREATE TABLE default.flink_table (    
  CUSTOMERID String,
  CAMPAIGNID String,
  TOTALRECIPIENTS Int32, 
  DELIVERYRATE Int32,
  STATUS String
) 
Engine = MergeTree()
ORDER BY tuple;
 
  4. I created a sink table from Flink like so:
CREATE TABLE flink_table_sink (
    CUSTOMERID STRING, 
    CAMPAIGNID STRING,
    TOTALRECIPIENTS INT,
    DELIVERYRATE Int32,
    STATUS String,
    PRIMARY KEY (CAMPAIGNID) NOT ENFORCED
) 
WITH (
    'connector' = 'clickhouse',
    'url' = 'clickhouse://ip:port',
    'database-name' = 'default',
    'table-name' = 'flink_table'
);
  5. I tried to insert values into the sink table (flink_table_sink), and I got an error message. To me it looks like Flink didn't recognize the ClickHouse Flink jar file, but I can't figure out why, because it still shows it among the jar files.

Can anyone help?

Upgrade to Flink version 1.9

We have to update the sink to be able to use newer versions of Flink, especially given that the current stable version is 1.10 and we are stuck on 1.3.2.

log loss

In the incremental synchronization phase, MySQL data is synchronized through Flink CDC and then inserted into ClickHouse. During synchronization, after restarting MySQL we found that some records were lost in ClickHouse, and there were no records in the failed-records directory. By printing logs we confirmed that the Flink program had already emitted the records that were later lost.

This situation may require multiple restarts of MySQL to reproduce.
I have tested it more than ten times; the loss happens especially when there is a high volume of inserts.

How to insert array type data into ClickHouse

How can I insert array type data into ClickHouse?
CREATE TABLE tutorial.user3
(
id UInt64 comment 'message ID',
name String comment 'name',
addr Array(UInt64) comment 'address'
)
ENGINE = MergeTree()
primary key (id)
ORDER BY (id);
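The sink sends rows as plain insert-format strings, so one possible approach (a sketch under that assumption, not a confirmed answer from the maintainers; the event class and field names are made up for the example) is to render the array as a ClickHouse array literal inside the converter:

import java.util.List;
import java.util.stream.Collectors;
import ru.ivi.opensource.flinkclickhousesink.ClickHouseSinkConverter;

// Hypothetical event matching tutorial.user3; names are illustrative only.
class User3Event {
    long id;
    String name;
    List<Long> addr;
}

public class User3EventConverter implements ClickHouseSinkConverter<User3Event> {
    @Override
    public String convert(User3Event e) {
        // ClickHouse accepts array literals such as [1,2,3] in the insert format.
        String addr = e.addr.stream()
                .map(String::valueOf)
                .collect(Collectors.joining(",", "[", "]"));
        return "(" + e.id + ", '" + e.name + "', " + addr + ")";
    }
}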

Bug report when restarting flink.

I'd like to request a fix in ClickHouseSink.java.
I have tried to push a PR, but it is forbidden.

Here is the environment:

  • Flink 1.9.3 on yarn.
  • Using commit "23609ad"

It can be reproduced simply by killing one of the task managers with kill -9 "java process of flink task manager" or similar commands.

Then the dead task manager is replaced by a new task manager, and the other task managers try to restart the job.

In this restart cycle, Flink calls "ClickHouseSink.close" and then "ClickHouseSink.open".

But the line here prevents SinkManager from doing initialization again.

So I added simple code to nullify sinkManager after teardown, and then I checked that the next open() call reinitializes the sink manager normally.

Could you please check if my fix is right? Thank you in advance.

        if (sinkManager != null) {
            if (!sinkManager.isClosed()) {
                synchronized (DUMMY_LOCK) {
                    if (!sinkManager.isClosed()) {
                        sinkManager.close();
                        // added line .
                        sinkManager = null;
                    }
                }
            }
        }

The TIMEOUT_SEC parameter does not seem to take effect

I built up the ClickHouseSink according to the documentation as follows:

    Map<String, String> ckSetting = new HashMap<>();
    ckSetting.put(CLICKHOUSE_HOSTS, ckGlobalProperties.getProperty("ck.hosts"));
    ckSetting.put(CLICKHOUSE_USER, ckGlobalProperties.getProperty("ck.user"));
    ckSetting.put(CLICKHOUSE_PASSWORD, ckGlobalProperties.getProperty("ck.pass"));
    ckSetting.put(TIMEOUT_SEC, "60");
    ckSetting.put(NUM_WRITERS, "1");
    ckSetting.put(NUM_RETRIES, "3");
    ckSetting.put(QUEUE_MAX_CAPACITY, "10");
    ckSetting.put(FAILED_RECORDS_PATH, "/tmp");
    ckSetting.put(IGNORING_CLICKHOUSE_SENDING_EXCEPTION_ENABLED, "false");

    ParameterTool ckParams = ParameterTool.fromMap(ckSetting);
    env.getConfig().setGlobalJobParameters(ckParams);

    Properties ckSinkProperties = new Properties();
    ckSinkProperties.put(TARGET_TABLE_NAME, CLICKHOUSE_DWD_ORDER_DONE_LOG);
    ckSinkProperties.put(MAX_BUFFER_SIZE, "1000");

    orderCsvStream
      .addSink(new ClickHouseSink(ckSinkProperties))
      .setParallelism(5);

According to my understanding (and the source code of this project), the batch should be flushed after at most TIMEOUT_SEC seconds, even if there are not enough records to reach the MAX_BUFFER_SIZE limit.

However, what I observed is that the batches are always flushed by MAX_BUFFER_SIZE, resulting in very infrequent sinking when the data flow is low. Logs are shown below:

21-01-24 02:46:04 INFO  ru.ivi.opensource.flinkclickhousesink.applied.ClickHouseSinkScheduledCheckerAndCleaner  - Build Sink scheduled checker, timeout (sec) = 60
21-01-24 02:46:04 INFO  ru.ivi.opensource.flinkclickhousesink.applied.ClickHouseSinkManager  - Build sink writer's manager. params = ClickHouseSinkCommonParams{clickHouseClusterSettings=ClickHouseClusterSettings{hostsWithPorts=[..............], credentials='ZGVmYXVsdDpzaHRjazIwMjA=', authorizationRequired=true, currentHostId=0}, failedRecordsPath='/tmp', numWriters=1, queueMaxCapacity=10, ignoringClickHouseSendingExceptionEnabled=false, timeout=60, maxRetries=3}
21-01-24 02:46:04 INFO  ru.ivi.opensource.flinkclickhousesink.applied.ClickHouseSinkBuffer  - Instance ClickHouse Sink, target table = rtdw_dwd.order_done_log, buffer size = 1000
21-01-24 02:46:04 INFO  ru.ivi.opensource.flinkclickhousesink.applied.ClickHouseWriter$WriterTask  - Ready to load data to rtdw_dwd.order_done_log, size = 1000
21-01-24 02:46:04 INFO  ru.ivi.opensource.flinkclickhousesink.applied.ClickHouseWriter$WriterTask  - Ready to load data to rtdw_dwd.order_done_log, size = 1000
21-01-24 02:46:05 INFO  ru.ivi.opensource.flinkclickhousesink.applied.ClickHouseWriter$WriterTask  - Successful send data to ClickHouse, batch size = 1000, target table = rtdw_dwd.order_done_log, current attempt = 0
21-01-24 02:46:05 INFO  ru.ivi.opensource.flinkclickhousesink.applied.ClickHouseWriter$WriterTask  - Successful send data to ClickHouse, batch size = 1000, target table = rtdw_dwd.order_done_log, current attempt = 0
21-01-24 03:00:43 INFO  ru.ivi.opensource.flinkclickhousesink.applied.ClickHouseWriter$WriterTask  - Ready to load data to rtdw_dwd.order_done_log, size = 1000
21-01-24 03:00:43 INFO  ru.ivi.opensource.flinkclickhousesink.applied.ClickHouseWriter$WriterTask  - Successful send data to ClickHouse, batch size = 1000, target table = rtdw_dwd.order_done_log, current attempt = 0
21-01-24 03:31:30 INFO  ru.ivi.opensource.flinkclickhousesink.applied.ClickHouseWriter$WriterTask  - Ready to load data to rtdw_dwd.order_done_log, size = 1000
21-01-24 03:31:30 INFO  ru.ivi.opensource.flinkclickhousesink.applied.ClickHouseWriter$WriterTask  - Successful send data to ClickHouse, batch size = 1000, target table = rtdw_dwd.order_done_log, current attempt = 0
21-01-24 04:03:33 INFO  ru.ivi.opensource.flinkclickhousesink.applied.ClickHouseWriter$WriterTask  - Ready to load data to rtdw_dwd.order_done_log, size = 1000
21-01-24 04:03:33 INFO  ru.ivi.opensource.flinkclickhousesink.applied.ClickHouseWriter$WriterTask  - Successful send data to ClickHouse, batch size = 1000, target table = rtdw_dwd.order_done_log, current attempt = 0
21-01-24 04:32:34 INFO  ru.ivi.opensource.flinkclickhousesink.applied.ClickHouseWriter$WriterTask  - Ready to load data to rtdw_dwd.order_done_log, size = 1000
21-01-24 04:32:34 INFO  ru.ivi.opensource.flinkclickhousesink.applied.ClickHouseWriter$WriterTask  - Successful send data to ClickHouse, batch size = 1000, target table = rtdw_dwd.order_done_log, current attempt = 0
21-01-24 04:58:05 INFO  ru.ivi.opensource.flinkclickhousesink.applied.ClickHouseWriter$WriterTask  - Ready to load data to rtdw_dwd.order_done_log, size = 1000
21-01-24 04:58:05 INFO  ru.ivi.opensource.flinkclickhousesink.applied.ClickHouseWriter$WriterTask  - Successful send data to ClickHouse, batch size = 1000, target table = rtdw_dwd.order_done_log, current attempt = 0

What do you think is the problem? Many thanks.

Problem with running jobs on a Flink on YARN cluster

In a Flink cluster job, what address should be set for the ClickHouseSinkConst.FAILED_RECORDS_PATH parameter: an HDFS path or a local path? If it is a local path, once the job is submitted to the cluster, there is no way to know on which server the path will be created.

WriterTask thread will be closed one by one.

Thanks for your work!
I have encountered a problem: after the program runs for a while, ru.ivi.opensource.flinkclickhousesink.applied.ClickHouseWriter$WriterTask threads get closed one by one.

I checked the log; there are a large number of messages like "Task id = 10 is finished":

2022-11-04 10:42:28,128 INFO  ru.ivi.opensource.flinkclickhousesink.applied.ClickHouseWriter$WriterTask [] - Task id = 9 is finished
2022-11-04 10:42:32,783 INFO  ru.ivi.opensource.flinkclickhousesink.applied.ClickHouseWriter$WriterTask [] - Ready to load data to monitor.qunhe_log, size = 100000
2022-11-04 10:42:32,783 INFO  ru.ivi.opensource.flinkclickhousesink.applied.ClickHouseWriter$WriterTask [] - Ready to load data to monitor.qunhe_log, size = 100000
2022-11-04 10:42:34,532 INFO  ru.ivi.opensource.flinkclickhousesink.applied.ClickHouseWriter$WriterTask [] - Task id = 1 is finished
2022-11-04 10:42:35,861 INFO  ru.ivi.opensource.flinkclickhousesink.applied.ClickHouseWriter$WriterTask [] - Ready to load data to monitor.qunhe_log, size = 100000
2022-11-04 10:42:39,153 INFO  ru.ivi.opensource.flinkclickhousesink.applied.ClickHouseWriter$WriterTask [] - Task id = 12 is finished
2022-11-04 10:42:40,263 INFO  ru.ivi.opensource.flinkclickhousesink.applied.ClickHouseWriter$WriterTask [] - Task id = 18 is finished
2022-11-04 10:42:46,699 INFO  ru.ivi.opensource.flinkclickhousesink.applied.ClickHouseWriter$WriterTask [] - Task id = 3 is finished
2022-11-04 10:42:47,717 INFO  ru.ivi.opensource.flinkclickhousesink.applied.ClickHouseWriter$WriterTask [] - Task id = 10 is finished
2022-11-04 10:42:53,492 INFO  ru.ivi.opensource.flinkclickhousesink.applied.ClickHouseWriter$WriterTask [] - Task id = 6 is finished
2022-11-04 10:42:55,086 INFO  ru.ivi.opensource.flinkclickhousesink.applied.ClickHouseWriter$WriterTask [] - Task id = 8 is finished
2022-11-04 10:42:59,064 INFO  ru.ivi.opensource.flinkclickhousesink.applied.ClickHouseWriter$WriterTask [] - Task id = 2 is finished
2022-11-04 10:43:00,173 INFO  ru.ivi.opensource.flinkclickhousesink.applied.ClickHouseWriter$WriterTask [] - Task id = 19 is finished

Then I looked at the clickhouse-sink source, where the OOM happened:

    @Override
    public void run() {
        try {
            isWorking = true;

            logger.info("Start writer task, id = {}", id);
            while (isWorking || queue.size() > 0) {
                ClickHouseRequestBlank blank = queue.poll(300, TimeUnit.MILLISECONDS);
                if (blank != null) {
                    CompletableFuture<Boolean> future = new CompletableFuture<>();
                    futures.add(future);
                    send(blank, future);
                }
            }
            // Can't catch Throwable for OOM
        } catch (Exception e) {
            logger.error("Error while inserting data", e);
            throw new RuntimeException(e);
        } finally {
            logger.info("Task id = {} is finished", id);
        }
    }

I added a catch for Throwable to this piece of code, and it surfaced java.lang.OutOfMemoryError: Direct buffer memory:

java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Direct buffer memory
	at java.util.concurrent.CompletableFuture.reportGet(Unknown Source) ~[?:?]
	at java.util.concurrent.CompletableFuture.get(Unknown Source) ~[?:?]
	at org.asynchttpclient.netty.NettyResponseFuture.get(NettyResponseFuture.java:201) ~[job.jar:?]
	at cksink.applied.ClickHouseWriter$WriterTask.lambda$responseCallback$0(ClickHouseWriter.java:219) ~[job.jar:?]
	at org.asynchttpclient.netty.NettyResponseFuture.lambda$addListener$0(NettyResponseFuture.java:294) ~[job.jar:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(Unknown Source) [?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(Unknown Source) [?:?]
	at java.util.concurrent.CompletableFuture$Completion.run(Unknown Source) [?:?]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [?:?]
	at java.lang.Thread.run(Unknown Source) [?:?]
Caused by: java.lang.OutOfMemoryError: Direct buffer memory
	at java.nio.Bits.reserveMemory(Unknown Source) ~[?:?]
	at java.nio.DirectByteBuffer.<init>(Unknown Source) ~[?:?]
	at java.nio.ByteBuffer.allocateDirect(Unknown Source) ~[?:?]
	at io.netty.buffer.PoolArena$DirectArena.allocateDirect(PoolArena.java:758) ~[job.jar:?]
	at io.netty.buffer.PoolArena$DirectArena.newUnpooledChunk(PoolArena.java:748) ~[job.jar:?]
	at io.netty.buffer.PoolArena.allocateHuge(PoolArena.java:260) ~[job.jar:?]
	at io.netty.buffer.PoolArena.allocate(PoolArena.java:232) ~[job.jar:?]
	at io.netty.buffer.PoolArena.allocate(PoolArena.java:147) ~[job.jar:?]

Then I increased Flink's off-heap memory (taskmanager.memory.task.off-heap.size) from 512 MB to 1 GB, with no effect.

I am confused now. Can you give me some advice? Thanks.

Does this support 1.4.* ?

Thanks for this project! I am considering giving this a try for flink-1.4.2.

Could you give me an idea of how well tested this is in terms of

  1. has this been deployed to production ?
  2. for how long ?
  3. amount of traffic handled?

Add exception handling mode

While working on a project not related to Apache Flink, we needed to put data into ClickHouse and we expanded the current sink. Here is the list of the most important improvements and changes:

  • Changed the naming of ClickHouse (converted the letter "h" to uppercase);
  • Added exception handling with CompletableFuture;
  • Added a config for handling ClickHouse sending exceptions:
  1. if ignoring-clickhouse-sending-exception-enabled is true, an exception while sending to ClickHouse is ignored and the failed data automatically goes to disk.
  2. if ignoring-clickhouse-sending-exception-enabled is false, the sending exception is thrown in the "main" thread (the thread that called ClickHouseSink::invoke) and the data also goes to disk.
