apache / paimon

Apache Paimon is a lake format that enables building a Realtime Lakehouse Architecture with Flink and Spark for both streaming and batch operations.

Home Page: https://paimon.apache.org/

License: Apache License 2.0

Languages: Shell 0.14%, Java 94.39%, Scala 5.11%, ANTLR 0.19%, JavaScript 0.17%
Topics: big-data, data-ingestion, flink, paimon, real-time-analytics, spark, table-store, streaming-datalake

paimon's Introduction

Paimon


Apache Paimon is a lake format that enables building a Realtime Lakehouse Architecture with Flink and Spark for both streaming and batch operations. Paimon innovatively combines lake format and LSM structure, bringing realtime streaming updates into the lake architecture.

Background and documentation are available at https://paimon.apache.org

Paimon was formerly known as Flink Table Store and was developed within the Flink community. Its architecture borrows some design concepts from Apache Iceberg. Thanks to Apache Flink and Apache Iceberg.

Collaboration

Paimon tracks issues in GitHub and prefers to receive contributions as pull requests.

Mailing Lists

  • user@paimon.apache.org: user support and questions
  • dev@paimon.apache.org: development related discussions

Please make sure you are subscribed to the mailing list you are posting to! If you are not subscribed to the mailing list, your message will either be rejected (dev@ list) or you won't receive the response (user@ list).

Slack

You can join the Paimon community on Slack. The Paimon channel is in the ASF Slack workspace.

Building

JDK 8 or 11 is required to build the project. Maven 3.3.1 or higher is required.

  • Run the mvn clean install -DskipTests command to build the project.
  • Run mvn spotless:apply to format the project (both Java and Scala).
  • IDE: Mark paimon-common/target/generated-sources/antlr4 as Sources Root.

How to Contribute

Contribution Guide.

License

The code in this repository is licensed under the Apache Software License 2.

paimon's People

Contributors

aitozi, alibaba-hzy, cxzl25, fangyongs, houhang1005, jingsongli, ladyforest, leaves12138, legendtkl, liming30, liugddx, lsomeyeah, monsterchenzhuo, qidian99, s7monk, schnappi17, shidayang, stenicholas, sunxiaojian, taozex, tsreaper, tyrantlucifer, wangfengpro, wg1026688210, wxplovecc, yannbyron, yuzelin, zhangjun0x01, zhuangchong, zouxxyy


paimon's Issues

[Document] Add document for public api

Search before asking

  • I searched in the issues and found nothing similar.

Motivation

Add document for public api

Solution

No response

Anything else?

No response

Are you willing to submit a PR?

  • I'm willing to submit a PR!

[Bug] When scan.mode is set to TimeTravel semantics, getTable from Catalog will fail

Search before asking

  • I searched in the issues and found nothing similar.

Paimon version

0.4

Compute Engine

flink-1.16

Minimal reproduce step

Execute the following sql in sql client:

CREATE CATALOG ts_catalog WITH (
    'type' = 'paimon',
    'warehouse' = '/test-data/e1d13ddf-7423-4075-91b2-ca5f7a18b9fd.store'
);
USE CATALOG ts_catalog;
CREATE TABLE IF NOT EXISTS ts_table (
    k VARCHAR,
    v INT,
    PRIMARY KEY (k) NOT ENFORCED
) WITH (
    'bucket' = '2',
    'log.consistency' = 'eventual',
    'log.system' = 'kafka',
    'kafka.bootstrap.servers' = 'kafka:9092',
    'scan.mode' = 'from-snapshot',
    'scan.snapshot-id' = '1',
    'kafka.topic' = 'ts-topic-b6a8e1c8-003e-4ef9-a5cc-16979a0ce56b'
);

INSERT INTO result1 SELECT * FROM ts_table;

What doesn't meet your expectations?

Creating a table with time-travel semantics should not report an error, but I get the following exception:

Could not execute SQL statement.
org.apache.flink.table.client.gateway.SqlExecutionException: Failed to parse statement: SELECT * FROM ts_table
;
	at org.apache.flink.table.client.gateway.local.LocalExecutor.parseStatement(LocalExecutor.java:174) ~[flink-sql-client-1.16.1.jar:1.16.1]
	at org.apache.flink.table.client.cli.SqlCommandParserImpl.parseCommand(SqlCommandParserImpl.java:45) ~[flink-sql-client-1.16.1.jar:1.16.1]
	at org.apache.flink.table.client.cli.SqlMultiLineParser.parse(SqlMultiLineParser.java:71) ~[flink-sql-client-1.16.1.jar:1.16.1]
	at org.jline.reader.impl.LineReaderImpl.acceptLine(LineReaderImpl.java:2964) ~[flink-sql-client-1.16.1.jar:1.16.1]
	at org.jline.reader.impl.LineReaderImpl$1.apply(LineReaderImpl.java:3778) ~[flink-sql-client-1.16.1.jar:1.16.1]
	at org.jline.reader.impl.LineReaderImpl.readLine(LineReaderImpl.java:679) ~[flink-sql-client-1.16.1.jar:1.16.1]
	at org.apache.flink.table.client.cli.CliClient.getAndExecuteStatements(CliClient.java:295) [flink-sql-client-1.16.1.jar:1.16.1]
	at org.apache.flink.table.client.cli.CliClient.executeInteractive(CliClient.java:280) [flink-sql-client-1.16.1.jar:1.16.1]
	at org.apache.flink.table.client.cli.CliClient.executeInInteractiveMode(CliClient.java:228) [flink-sql-client-1.16.1.jar:1.16.1]
	at org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:151) [flink-sql-client-1.16.1.jar:1.16.1]
	at org.apache.flink.table.client.SqlClient.start(SqlClient.java:95) [flink-sql-client-1.16.1.jar:1.16.1]
	at org.apache.flink.table.client.SqlClient.startClient(SqlClient.java:187) [flink-sql-client-1.16.1.jar:1.16.1]
	at org.apache.flink.table.client.SqlClient.main(SqlClient.java:161) [flink-sql-client-1.16.1.jar:1.16.1]
Caused by: org.apache.flink.table.api.ValidationException: SQL validation failed. Can not create a Path from a null string
	at org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:186) ~[?:?]
	at org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:113) ~[?:?]
	at org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:261) ~[?:?]
	at org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:106) ~[?:?]
	at org.apache.flink.table.client.gateway.local.LocalExecutor.parseStatement(LocalExecutor.java:172) ~[flink-sql-client-1.16.1.jar:1.16.1]
	... 12 more
Caused by: java.lang.IllegalArgumentException: Can not create a Path from a null string
	at org.apache.paimon.fs.Path.checkPathArg(Path.java:128) ~[paimon-flink.jar:0.4-SNAPSHOT]
	at org.apache.paimon.fs.Path.<init>(Path.java:142) ~[paimon-flink.jar:0.4-SNAPSHOT]
	at org.apache.paimon.CoreOptions.path(CoreOptions.java:564) ~[paimon-flink.jar:0.4-SNAPSHOT]
	at org.apache.paimon.CoreOptions.path(CoreOptions.java:560) ~[paimon-flink.jar:0.4-SNAPSHOT]
	at org.apache.paimon.AbstractFileStore.snapshotManager(AbstractFileStore.java:73) ~[paimon-flink.jar:0.4-SNAPSHOT]
	at org.apache.paimon.table.AbstractFileStoreTable.snapshotManager(AbstractFileStoreTable.java:178) ~[paimon-flink.jar:0.4-SNAPSHOT]
	at org.apache.paimon.table.AbstractFileStoreTable.tryTimeTravel(AbstractFileStoreTable.java:196) ~[paimon-flink.jar:0.4-SNAPSHOT]
	at org.apache.paimon.table.AbstractFileStoreTable.copy(AbstractFileStoreTable.java:132) ~[paimon-flink.jar:0.4-SNAPSHOT]
	at org.apache.paimon.table.FileStoreTableFactory.create(FileStoreTableFactory.java:86) ~[paimon-flink.jar:0.4-SNAPSHOT]
	at org.apache.paimon.table.FileStoreTableFactory.create(FileStoreTableFactory.java:69) ~[paimon-flink.jar:0.4-SNAPSHOT]
	at org.apache.paimon.catalog.AbstractCatalog.getDataTable(AbstractCatalog.java:62) ~[paimon-flink.jar:0.4-SNAPSHOT]
	at org.apache.paimon.catalog.AbstractCatalog.getTable(AbstractCatalog.java:56) ~[paimon-flink.jar:0.4-SNAPSHOT]
	at org.apache.paimon.flink.FlinkCatalog.getTable(FlinkCatalog.java:163) ~[paimon-flink.jar:0.4-SNAPSHOT]
	at org.apache.paimon.flink.FlinkCatalog.getTable(FlinkCatalog.java:71) ~[paimon-flink.jar:0.4-SNAPSHOT]
	at org.apache.flink.table.catalog.CatalogManager.getPermanentTable(CatalogManager.java:408) ~[flink-table-api-java-uber-1.16.1.jar:1.16.1]
	at org.apache.flink.table.catalog.CatalogManager.getTable(CatalogManager.java:364) ~[flink-table-api-java-uber-1.16.1.jar:1.16.1]
	at org.apache.flink.table.planner.catalog.DatabaseCalciteSchema.getTable(DatabaseCalciteSchema.java:73) ~[?:?]
	at org.apache.calcite.jdbc.SimpleCalciteSchema.getImplicitTable(SimpleCalciteSchema.java:83) ~[?:?]
	at org.apache.calcite.jdbc.CalciteSchema.getTable(CalciteSchema.java:289) ~[?:?]
	at org.apache.calcite.sql.validate.EmptyScope.resolve_(EmptyScope.java:143) ~[?:?]
	at org.apache.calcite.sql.validate.EmptyScope.resolveTable(EmptyScope.java:99) ~[?:?]
	at org.apache.calcite.sql.validate.DelegatingScope.resolveTable(DelegatingScope.java:203) ~[?:?]
	at org.apache.calcite.sql.validate.IdentifierNamespace.resolveImpl(IdentifierNamespace.java:112) ~[?:?]
	at org.apache.calcite.sql.validate.IdentifierNamespace.validateImpl(IdentifierNamespace.java:184) ~[?:?]
	at org.apache.calcite.sql.validate.AbstractNamespace.validate(AbstractNamespace.java:84) ~[?:?]
	at org.apache.calcite.sql.validate.SqlValidatorImpl.validateNamespace(SqlValidatorImpl.java:997) ~[?:?]
	at org.apache.calcite.sql.validate.SqlValidatorImpl.validateQuery(SqlValidatorImpl.java:975) ~[?:?]
	at org.apache.calcite.sql.validate.SqlValidatorImpl.validateFrom(SqlValidatorImpl.java:3085) ~[?:?]
	at org.apache.calcite.sql.validate.SqlValidatorImpl.validateFrom(SqlValidatorImpl.java:3070) ~[?:?]
	at org.apache.calcite.sql.validate.SqlValidatorImpl.validateSelect(SqlValidatorImpl.java:3335) ~[?:?]
	at org.apache.calcite.sql.validate.SelectNamespace.validateImpl(SelectNamespace.java:60) ~[?:?]
	at org.apache.calcite.sql.validate.AbstractNamespace.validate(AbstractNamespace.java:84) ~[?:?]
	at org.apache.calcite.sql.validate.SqlValidatorImpl.validateNamespace(SqlValidatorImpl.java:997) ~[?:?]
	at org.apache.calcite.sql.validate.SqlValidatorImpl.validateQuery(SqlValidatorImpl.java:975) ~[?:?]
	at org.apache.calcite.sql.SqlSelect.validate(SqlSelect.java:232) ~[?:?]
	at org.apache.calcite.sql.validate.SqlValidatorImpl.validateScopedExpression(SqlValidatorImpl.java:952) ~[?:?]
	at org.apache.calcite.sql.validate.SqlValidatorImpl.validate(SqlValidatorImpl.java:704) ~[?:?]
	at org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:182) ~[?:?]
	at org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:113) ~[?:?]
	at org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:261) ~[?:?]
	at org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:106) ~[?:?]
	at org.apache.flink.table.client.gateway.local.LocalExecutor.parseStatement(LocalExecutor.java:172) ~[flink-sql-client-1.16.1.jar:1.16.1]
	... 12 more

Anything else?

The reason for the exception is that the table with timeTravel semantics needs to check the snapshot. https://github.com/apache/incubator-paimon/blob/master/paimon-core/src/main/java/org/apache/paimon/table/AbstractFileStoreTable.java#L132

AbstractFileStoreTable#store generates its options from the table schema, which does not contain the PATH configuration, so this exception occurs.

I also found that in many unit tests the same exception occurs whenever the table's scan.mode is configured to a time-travel type, for example in FlinkCatalogTest#testCreateTable_Streaming.

Should we always put the PATH configuration in the table schema?

Are you willing to submit a PR?

  • I'm willing to submit a PR!

[Feature] Replace keyword table_store with paimon

Search before asking

  • I searched in the issues and found nothing similar.

Motivation

Replace keyword table_store with paimon in docs and classes

Solution

No response

Anything else?

No response

Are you willing to submit a PR?

  • I'm willing to submit a PR!

[Feature] Introduce Presto Reader for Paimon

Search before asking

  • I searched in the issues and found nothing similar.

Motivation

Introduce Presto Reader for Paimon

Solution

No response

Anything else?

No response

Are you willing to submit a PR?

  • I'm willing to submit a PR!

[Bug] Timestamp LTZ is unsupported in table store

Search before asking

  • I searched in the issues and found nothing similar.

Paimon version

0.4

Compute Engine

flink

Minimal reproduce step

Run a job against a table that has a TIMESTAMP WITH LOCAL TIME ZONE field.

What doesn't meet your expectations?

Due to an ORC format limitation, TIMESTAMP WITH LOCAL TIME ZONE is currently unsupported. We should fix this and validate this type across multiple engines (Hive, Spark, Trino).
We need to be careful about time zones.

Anything else?

No response

Are you willing to submit a PR?

  • I'm willing to submit a PR!

[Feature] Support Datax sink to paimon

Search before asking

  • I searched in the issues and found nothing similar.

Motivation

  1. Support reading data with DataX

Solution

No response

Anything else?

No response

Are you willing to submit a PR?

  • I'm willing to submit a PR!

[Feature] Support Apache Flink 1.17.0

Search before asking

  • I searched in the issues and found nothing similar.

Motivation

Support Apache Flink 1.17.0

Solution

Add a module named paimon-flink-1.17

Anything else?

No response

Are you willing to submit a PR?

  • I'm willing to submit a PR!

[Bug] Recreating the same Paimon Hive catalog table after dropping it fails because the schema file already exists

Search before asking

  • I searched in the issues and found nothing similar.

Paimon version

0.4-snapshot

Compute Engine

flink-1.16

Minimal reproduce step

CREATE CATALOG paimon WITH (
'type' = 'paimon',
'metastore' = 'hive',
'uri' = 'thrift://xxx:9083',
'warehouse' ='hdfs://hadoop3/table_store/warehouse',
'table.type'='EXTERNAL'
);
use catalog paimon;
CREATE TABLE if not exists demo_log_01 (
user_id BIGINT,
item_id BIGINT,
behavior STRING,
dt STRING,
hh STRING,
PRIMARY KEY (dt, hh, user_id) NOT ENFORCED
) ;
drop table demo_log_01;
then create table again:
CREATE TABLE if not exists demo_log_01 (
user_id BIGINT,
item_id BIGINT,
behavior STRING,
dt STRING,
hh STRING,
PRIMARY KEY (dt, hh, user_id) NOT ENFORCED
) ;

Error message:
org.apache.flink.table.api.TableException: Could not execute CreateTable in path fts.default.demo_log_01
at org.apache.flink.table.catalog.CatalogManager.execute(CatalogManager.java:891)
at org.apache.flink.table.catalog.CatalogManager.createTable(CatalogManager.java:652)
at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:929)
at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:730)
at com.dlink.executor.Executor.executeSql(Executor.java:249)
at com.dlink.job.JobManager.executeSql(JobManager.java:425)
at com.dlink.service.impl.StudioServiceImpl.executeFlinkSql(StudioServiceImpl.java:202)
at com.dlink.service.impl.StudioServiceImpl.executeSql(StudioServiceImpl.java:189)
at com.dlink.service.impl.StudioServiceImpl$$FastClassBySpringCGLIB$$e3eb787.invoke()
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:793)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:763)
at org.springframework.aop.aspectj.MethodInvocationProceedingJoinPoint.proceed(MethodInvocationProceedingJoinPoint.java:89)
at com.dlink.aop.UdfClassLoaderAspect.round(UdfClassLoaderAspect.java:65)
at sun.reflect.GeneratedMethodAccessor125.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethodWithGivenArgs(AbstractAspectJAdvice.java:634)
at org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethod(AbstractAspectJAdvice.java:624)
at org.springframework.aop.aspectj.AspectJAroundAdvice.invoke(AspectJAroundAdvice.java:72)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:763)
at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:763)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:708)
at com.dlink.service.impl.StudioServiceImpl$$EnhancerBySpringCGLIB$$c912ffb1.executeSql()
at com.dlink.controller.StudioController.executeSql(StudioController.java:78)
at com.dlink.controller.StudioController$$FastClassBySpringCGLIB$$e6483d87.invoke()
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:793)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:763)
at org.springframework.aop.framework.adapter.AfterReturningAdviceInterceptor.invoke(AfterReturningAdviceInterceptor.java:57)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:763)
at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:763)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:708)
at com.dlink.controller.StudioController$$EnhancerBySpringCGLIB$$d39000f.executeSql()
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:205)
at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:150)
at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:117)
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:895)
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:808)
at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87)
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1071)
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:964)
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1006)
at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:909)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:696)
at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:883)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:779)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:227)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at com.alibaba.druid.support.http.WebStatFilter.doFilter(WebStatFilter.java:124)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:197)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:97)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:541)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:135)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:78)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:360)
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:399)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:893)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1789)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191)
at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:750)
Caused by: java.lang.RuntimeException: Failed to commit changes of table default.demo_log_01 to underlying files
at org.apache.paimon.hive.HiveCatalog.createTable(HiveCatalog.java:237)
at org.apache.paimon.flink.FlinkCatalog.createTable(FlinkCatalog.java:219)
at org.apache.flink.table.catalog.CatalogManager.lambda$createTable$11(CatalogManager.java:663)
at org.apache.flink.table.catalog.CatalogManager.execute(CatalogManager.java:885)
... 94 more
Caused by: java.lang.IllegalStateException: Schema in filesystem exists, please use updating, latest schema is: Optional[{
"id" : 0,
"fields" : [ {
"id" : 0,
"name" : "user_id",
"type" : "BIGINT NOT NULL"
}, {
"id" : 1,
"name" : "item_id",
"type" : "BIGINT"
}, {
"id" : 2,
"name" : "behavior",
"type" : "STRING"
}, {
"id" : 3,
"name" : "dt",
"type" : "STRING NOT NULL"
}, {
"id" : 4,
"name" : "hh",
"type" : "STRING NOT NULL"
} ],
"highestFieldId" : 4,
"partitionKeys" : [ "dt", "hh" ],
"primaryKeys" : [ "dt", "hh", "user_id" ],
"options" : {
"bucket" : "4"
}
}]
at org.apache.paimon.schema.SchemaManager.lambda$createTable$0(SchemaManager.java:118)
at java.util.Optional.ifPresent(Optional.java:159)
at org.apache.paimon.schema.SchemaManager.createTable(SchemaManager.java:113)
at org.apache.paimon.hive.HiveCatalog.createTable(HiveCatalog.java:233)
... 97 more

What doesn't meet your expectations?

Recreating a table after dropping it should be supported.

Anything else?

No response

Are you willing to submit a PR?

  • I'm willing to submit a PR!

[Feature] Upgrade orc version to 1.8

Search before asking

  • I searched in the issues and found nothing similar.

Motivation

Paimon currently depends on ORC 1.5; we should upgrade to ORC 1.8.

Solution

No response

Anything else?

No response

Are you willing to submit a PR?

  • I'm willing to submit a PR!

[Feature] Flink connector should use public API

Search before asking

  • I searched in the issues and found nothing similar.

Motivation

Currently the Flink connector uses FileStoreTable, which is not a public API; it should use the public Table API instead.
This relies on #638.

Solution

No response

Anything else?

No response

Are you willing to submit a PR?

  • I'm willing to submit a PR!

[Bug] Create table in FTS catalog with s3 warehouse throws DatabaseNotExistException

Search before asking

  • I searched in the issues and found nothing similar.

Paimon version

0.4

Compute Engine

flink

Minimal reproduce step

Flink SQL> CREATE CATALOG my_catalog WITH (
>   'type'='table-store',
>   'warehouse'='s3://bucket/my-tablestore'
> );
[INFO] Execute statement succeed.

Flink SQL> USE CATALOG my_catalog;
[INFO] Execute statement succeed.

Flink SQL> CREATE TABLE word_count (
>     word STRING PRIMARY KEY NOT ENFORCED,
>     cnt BIGINT
> );
[ERROR] Could not execute SQL statement. Reason:
org.apache.flink.table.catalog.exceptions.DatabaseNotExistException: Database default does not exist in Catalog my_catalog. 

Creating the table in the default catalog works though:

Flink SQL> use catalog default_catalog;
[INFO] Execute statement succeed.

Flink SQL> CREATE TABLE word_count (
>       word STRING PRIMARY KEY NOT ENFORCED,
>       cnt BIGINT
>  ) WITH (
>    'connector'='table-store',
>    'path'='s3://bucket/my-tablestore',
>    'auto-create'='true'
> );
[INFO] Execute statement succeed.

What doesn't meet your expectations?

[ERROR] Could not execute SQL statement. Reason:
org.apache.flink.table.catalog.exceptions.DatabaseNotExistException: Database default does not exist in Catalog my_catalog.

Anything else?

No response

Are you willing to submit a PR?

  • I'm willing to submit a PR!

[Improvement] Validate column types in DDL for the different file formats

Search before asking

  • I searched in the issues and found nothing similar.

Motivation

Paimon currently uses Avro/ORC/Parquet to store data, and these formats support only a subset of the SQL data types. Paimon should validate in the DDL that the column types are supported by the chosen format; otherwise the DDL succeeds but the writing jobs fail.
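
To make the idea concrete, here is a minimal, hypothetical sketch of such a DDL-time check in Java. The SqlType enum and the set of ORC-supported types are illustrative only (the TIMESTAMP_LTZ gap echoes the separate Timestamp LTZ issue above); this is not Paimon's actual validation code.

// Hypothetical sketch of a DDL-time check that column types are supported by the format.
import java.util.Arrays;
import java.util.EnumSet;
import java.util.List;
import java.util.Set;

public class FormatTypeValidator {

    enum SqlType { INT, BIGINT, DOUBLE, STRING, MAP, TIMESTAMP_LTZ }

    // Illustrative set only, not the real ORC capability matrix.
    private static final Set<SqlType> ORC_SUPPORTED =
            EnumSet.of(SqlType.INT, SqlType.BIGINT, SqlType.DOUBLE, SqlType.STRING, SqlType.MAP);

    static void validateForOrc(List<SqlType> columnTypes) {
        for (SqlType type : columnTypes) {
            if (!ORC_SUPPORTED.contains(type)) {
                // Fail the DDL instead of letting the write job fail later.
                throw new IllegalArgumentException(
                        "Column type " + type + " is not supported by the ORC format");
            }
        }
    }

    public static void main(String[] args) {
        validateForOrc(Arrays.asList(SqlType.BIGINT, SqlType.STRING)); // passes
        // validateForOrc(Arrays.asList(SqlType.TIMESTAMP_LTZ));       // would throw
    }
}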

Solution

No response

Anything else?

No response

Are you willing to submit a PR?

  • I'm willing to submit a PR!

[Feature] Support jdk11 for paimon

Search before asking

  • I searched in the issues and found nothing similar.

Motivation

Currently the Paimon project can be compiled with JDK 8. Engines such as Flink already support JDK 11, so Paimon should support JDK 11 too.

Solution

No response

Anything else?

No response

Are you willing to submit a PR?

  • I'm willing to submit a PR!

[Feature] Use enum for file.format

Search before asking

  • I searched in the issues and found nothing similar.

Motivation

Currently the file.format option is a plain string; we should use an enum and validate the value in the DDL.
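
A minimal sketch of what an enum-backed file.format option could look like (the names below are illustrative, not Paimon API):

// Hypothetical enum-backed file.format option.
public enum FileFormatType {
    ORC, PARQUET, AVRO;

    public static FileFormatType fromString(String value) {
        for (FileFormatType type : values()) {
            if (type.name().equalsIgnoreCase(value)) {
                return type;
            }
        }
        // Reject unknown formats at DDL time instead of failing later at write time.
        throw new IllegalArgumentException(
                "Unsupported file.format '" + value + "', expected one of: ORC, PARQUET, AVRO");
    }
}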

Solution

No response

Anything else?

No response

Are you willing to submit a PR?

  • I'm willing to submit a PR!

[Feature] Introduce metadata database for catalog

Search before asking

  • I searched in the issues and found nothing similar.

Motivation

Introduce a system database for each catalog in Table Store to manage catalog information such as table dependencies and the relations between snapshots and checkpoints for each table.

Subtask of #1105

Solution

No response

Anything else?

No response

Are you willing to submit a PR?

  • I'm willing to submit a PR!

[Feature] Some problems with stream reading

Search before asking

  • I searched in the issues and found nothing similar.

Motivation

Q1:
When reading an incremental stream with the scan.timestamp-millis parameter, an error is reported once the snapshot corresponding to that timestamp has expired.
I think there should be a fallback policy:
if scan.timestamp-millis is earlier than the earliest snapshot timestamp, read from EARLIEST;
if scan.timestamp-millis is later than the latest snapshot timestamp, behave as scan.mode = 'latest'.
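
A minimal sketch of this fallback policy (hypothetical names, not Paimon API) could look like:

// Hypothetical sketch of the fallback policy described in Q1.
public class ScanTimestampFallback {

    enum StartupDecision { FROM_TIMESTAMP, FROM_EARLIEST, FROM_LATEST }

    static StartupDecision decide(long scanTimestampMillis,
                                  long earliestSnapshotMillis,
                                  long latestSnapshotMillis) {
        if (scanTimestampMillis < earliestSnapshotMillis) {
            // Requested timestamp is older than any retained snapshot: read from EARLIEST.
            return StartupDecision.FROM_EARLIEST;
        }
        if (scanTimestampMillis > latestSnapshotMillis) {
            // Requested timestamp is newer than the latest snapshot: behave like scan.mode = 'latest'.
            return StartupDecision.FROM_LATEST;
        }
        return StartupDecision.FROM_TIMESTAMP;
    }

    public static void main(String[] args) {
        System.out.println(decide(1_000L, 5_000L, 9_000L));  // FROM_EARLIEST
        System.out.println(decide(12_000L, 5_000L, 9_000L)); // FROM_LATEST
        System.out.println(decide(7_000L, 5_000L, 9_000L));  // FROM_TIMESTAMP
    }
}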

Q2:
When the read parallelism is greater than the number of buckets during a streaming read, the excess slots are left unused.

Solution

No response

Anything else?

No response

Are you willing to submit a PR?

  • I'm willing to submit a PR!

[Bug] LogStoreTableFactory should not implement DynamicTableFactory

Search before asking

  • I searched in the issues and found nothing similar.

Paimon version

0.4

Compute Engine

flink

Minimal reproduce step

kafka connector sink to paimon table

What doesn't meet your expectations?

Currently, KafkaLogStoreFactory easily conflicts with the Flink Kafka connector.
We should not use Flink's DynamicTableFactory.

Anything else?

No response

Are you willing to submit a PR?

  • I'm willing to submit a PR!

[Improve] Use new ReadBuilder and WriteBuilder API in modules

Search before asking

  • I searched in the issues and found nothing similar.

Motivation

Paimon has refactored its read/write API, so the old API usage should be retired from the codebase.

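As an illustration, a rough sketch of reading through the public ReadBuilder API is shown below; the method and package names follow the Paimon documentation, and the exact signatures in this version should be treated as an assumption.

// Rough sketch of reading via the public ReadBuilder API (names per Paimon docs; treat as an assumption).
import java.util.List;
import org.apache.paimon.data.InternalRow;
import org.apache.paimon.reader.RecordReader;
import org.apache.paimon.table.Table;
import org.apache.paimon.table.source.ReadBuilder;
import org.apache.paimon.table.source.Split;
import org.apache.paimon.table.source.TableRead;

public class ReadBuilderSketch {
    public static void readAll(Table table) throws Exception {
        ReadBuilder readBuilder = table.newReadBuilder();
        // Plan the splits once, then create a reader over them.
        List<Split> splits = readBuilder.newScan().plan().splits();
        TableRead read = readBuilder.newRead();
        try (RecordReader<InternalRow> reader = read.createReader(splits)) {
            reader.forEachRemaining(row -> System.out.println(row));
        }
    }
}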

Solution

No response

Anything else?

No response

Are you willing to submit a PR?

  • I'm willing to submit a PR!

[Feature] Support materialized column to improve query performance for complex types

Search before asking

  • I searched in the issues and found nothing similar.

Motivation

In data warehouses it is very common to read one or a few fields from a complex type such as a map, or to pack many subfields into one. These operations can greatly hurt query performance because:

  1. They waste a lot of IO. For example, if a field is a Map containing dozens of subfields, the entire column must be read to access any of them, and Spark then traverses the whole map to fetch the value of the target key.
  2. Vectorized reads cannot be used when reading nested-type columns.
  3. Filter pushdown cannot be used when reading nested columns.

It is necessary to introduce a materialized column feature in Flink Table Store, which transparently solves the above problems for arbitrary columnar storage formats (not just Parquet).

Solution

No response

Anything else?

No response

Are you willing to submit a PR?

  • I'm willing to submit a PR!

[Feature] Assign batch splits for ContinuousFileSplitEnumerator

Search before asking

  • I searched in the issues and found nothing similar.

Motivation

#687 introduces an option, scan.split-enumerator.batch-size, to limit the number of splits assigned at one time so the akka.framesize limit is not exceeded in StaticFileStoreSplitEnumerator. ContinuousFileSplitEnumerator should support this mechanism too.
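
A minimal, hypothetical sketch of batched split assignment (illustrative names, not the actual enumerator code):

// Hypothetical sketch: hand out at most batchSize splits per request so a single RPC stays small.
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class BatchedSplitAssignment {
    private final Queue<String> pendingSplits = new ArrayDeque<>();
    private final int batchSize; // mirrors scan.split-enumerator.batch-size

    BatchedSplitAssignment(int batchSize, List<String> discoveredSplits) {
        this.batchSize = batchSize;
        this.pendingSplits.addAll(discoveredSplits);
    }

    List<String> nextBatch() {
        List<String> batch = new ArrayList<>();
        while (batch.size() < batchSize && !pendingSplits.isEmpty()) {
            batch.add(pendingSplits.poll());
        }
        return batch;
    }
}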

Solution

No response

Anything else?

No response

Are you willing to submit a PR?

  • I'm willing to submit a PR!

[Bug] Select from a new table with Kafka LogStore crashes with UnknownTopicOrPartitionException

Search before asking

  • I searched in the issues and found nothing similar.

Paimon version

0.4

Compute Engine

flink 1.16

Minimal reproduce step

Selecting from a newly created table that uses Kafka as a log store creates a job that crash-loops with "UnknownTopicOrPartitionException: This server does not host this topic-partition". This happens because neither CREATE TABLE nor SELECT FROM creates the underlying topic.
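
As a possible workaround until the topic is created automatically (an assumption, not part of the original report), the log topic can be created up front with the Kafka AdminClient:

// Workaround sketch: create the log topic before querying the table.
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateLogTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // 1 partition, replication factor 1; adjust to match the table's 'bucket' count and cluster size.
            admin.createTopics(Collections.singleton(new NewTopic("word_count_log", 1, (short) 1)))
                 .all()
                 .get();
        }
    }
}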

Steps to reproduce:

CREATE TABLE word_count (
    word STRING PRIMARY KEY NOT ENFORCED,
    cnt BIGINT
) WITH (
    'connector' = 'table-store',
    'path' = 's3://my-bucket/table-store',
    'log.system' = 'kafka',
    'kafka.bootstrap.servers' = 'broker:9092',
    'kafka.topic' = 'word_count_log',
    'auto-create' = 'true',
    'log.changelog-mode' = 'all',
    'log.consistency' = 'transactional'
);

SELECT * FROM word_count; 

What doesn't meet your expectations?

flink          | 2023-01-04 23:27:24,292 ERROR org.apache.flink.runtime.source.coordinator.SourceCoordinatorContext [] - Exception while handling result from async call in SourceCoordinator-Source: word_count[1]. Triggering job failover.
flink          | org.apache.flink.util.FlinkRuntimeException: Failed to list subscribed topic partitions due to
flink          |     at org.apache.flink.table.store.shaded.connector.kafka.source.enumerator.KafkaSourceEnumerator.checkPartitionChanges(KafkaSourceEnumerator.java:234) ~[flink-table-store-dist-0.4-SNAPSHOT.jar:0.4-SNAPSHOT]
flink          |     at org.apache.flink.runtime.source.coordinator.ExecutorNotifier.lambda$null$1(ExecutorNotifier.java:83) ~[flink-dist-1.16.0.jar:1.16.0]
flink          |     at org.apache.flink.util.ThrowableCatchingRunnable.run(ThrowableCatchingRunnable.java:40) [flink-dist-1.16.0.jar:1.16.0]
flink          |     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_352]
flink          |     at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_352]
flink          |     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_352]
flink          |     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [?:1.8.0_352]
flink          |     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_352]
flink          |     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_352]
flink          |     at java.lang.Thread.run(Thread.java:750) [?:1.8.0_352]
flink          | Caused by: java.lang.RuntimeException: Failed to get metadata for topics [word_count_log].
flink          |     at org.apache.flink.table.store.shaded.connector.kafka.source.enumerator.subscriber.KafkaSubscriberUtils.getTopicMetadata(KafkaSubscriberUtils.java:47) ~[flink-table-store-dist-0.4-SNAPSHOT.jar:0.4-SNAPSHOT]
flink          |     at org.apache.flink.table.store.shaded.connector.kafka.source.enumerator.subscriber.TopicListSubscriber.getSubscribedTopicPartitions(TopicListSubscriber.java:52) ~[flink-table-store-dist-0.4-SNAPSHOT.jar:0.4-SNAPSHOT]
flink          |     at org.apache.flink.table.store.shaded.connector.kafka.source.enumerator.KafkaSourceEnumerator.getSubscribedTopicPartitions(KafkaSourceEnumerator.java:219) ~[flink-table-store-dist-0.4-SNAPSHOT.jar:0.4-SNAPSHOT]
flink          |     at org.apache.flink.runtime.source.coordinator.ExecutorNotifier.lambda$notifyReadyAsync$2(ExecutorNotifier.java:80) ~[flink-dist-1.16.0.jar:1.16.0]
flink          |     ... 7 more
flink          | Caused by: java.util.concurrent.ExecutionException: org.apache.flink.table.store.shaded.org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition. 

Anything else?

No response

Are you willing to submit a PR?

  • I'm willing to submit a PR!

[Bug] FlinkActionsE2eTest.testDelete is unstable

Search before asking

  • I searched in the issues and found nothing similar.

Paimon version

https://github.com/apache/incubator-paimon/actions/runs/4465175758/jobs/7841980086?pr=656

Error: Tests run: 4, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 265.233 s <<< FAILURE! - in org.apache.paimon.tests.FlinkActionsE2eTest
Error: testDelete Time elapsed: 110.359 s <<< FAILURE!
org.opentest4j.AssertionFailedError:
Result is still unexpected after 60 retries.
Expected: {2023-01-21, 1, 31=1, 2023-01-20, 1, 28=1, 2023-01-19, 1, 23=1, 2023-01-18, 1, 75=1, 2023-01-17, 1, 50=1}
Actual: {2023-01-21, 1, 31=1, 2023-01-14, 0, 19=1, 2023-01-13, 0, 39=1, 2023-01-16, 1, 25=1, 2023-01-15, 0, 37=1, 2023-01-20, 1, 28=1, 2023-01-19, 1, 23=1, 2023-01-18, 1, 75=1, 2023-01-17, 1, 50=1}
at org.junit.jupiter.api.AssertionUtils.fail(AssertionUtils.java:39)
at org.junit.jupiter.api.Assertions.fail(Assertions.java:134)
at org.apache.paimon.tests.E2eTestBase.checkResult(E2eTestBase.java:261)
at org.apache.paimon.tests.FlinkActionsE2eTest.testDelete(FlinkActionsE2eTest.java:256)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725)

Compute Engine

flink

Minimal reproduce step

no

What doesn't meet your expectations?

no

Anything else?

No response

Are you willing to submit a PR?

  • I'm willing to submit a PR!

[Feature] Update copyright NOTICE year to 2023

Search before asking

  • I searched in the issues and found nothing similar.

Motivation

As an open source project, keeping the copyright notice up to date is important.

Solution

update NOTICE

Anything else?

No response

Are you willing to submit a PR?

  • I'm willing to submit a PR!

[Feature] Optimize serialization of TableSchema

Search before asking

  • I searched in the issues and found nothing similar.

Motivation

Currently the serialization of TableSchema is rather crude: once a watermark or computed column is detected, the entire schema is turned into options and stuffed in.
We should serialize only the watermark and computed columns.

Solution

No response

Anything else?

No response

Are you willing to submit a PR?

  • I'm willing to submit a PR!

[Bug] Hive catalog should not shade hive dependencies

Search before asking

  • I searched in the issues and found nothing similar.

Paimon version

0.4

Compute Engine

flink,spark

Minimal reproduce step

Run Flink with a Hive catalog through the SQL gateway with Hive JDBC.

What doesn't meet your expectations?

There should be no dependency conflicts.

Anything else?

No response

Are you willing to submit a PR?

  • I'm willing to submit a PR!

[Bug] Close blocking iterators in tests

Search before asking

  • I searched in the issues and found nothing similar.

Paimon version

851e8d4

Compute Engine

Flink

Minimal reproduce step

Several blocking iterators are not closed in ContinuousFileStoreITCase, for example in testContinuousLatest().

What doesn't meet your expectations?

This was found while running the unit tests.

Anything else?

No response

Are you willing to submit a PR?

  • I'm willing to submit a PR!

[Feature] Introduce metrics about the busyness of compaction thread

Search before asking

  • I searched in the issues and found nothing similar.

Motivation

This metric would show how busy the compaction thread of the writer is; if it is too busy, the user can be alerted to add resources in advance.
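
A rough, hypothetical sketch of such a busyness gauge (illustrative names, not Paimon's metrics API):

// Hypothetical "compaction busyness" gauge: fraction of wall-clock time spent compacting
// in the last measurement window.
import java.util.concurrent.atomic.AtomicLong;

public class CompactionBusyness {
    private final AtomicLong busyNanosInWindow = new AtomicLong();
    private volatile long windowStartNanos = System.nanoTime();

    // Called around each compaction task.
    void recordCompaction(Runnable compaction) {
        long start = System.nanoTime();
        try {
            compaction.run();
        } finally {
            busyNanosInWindow.addAndGet(System.nanoTime() - start);
        }
    }

    // Gauge value in [0, 1]; a value near 1 means the compaction thread is saturated.
    double busyRatioAndReset() {
        long now = System.nanoTime();
        long elapsed = Math.max(1, now - windowStartNanos);
        double ratio = Math.min(1.0, (double) busyNanosInWindow.getAndSet(0) / elapsed);
        windowStartNanos = now;
        return ratio;
    }
}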

Solution

No response

Anything else?

No response

Are you willing to submit a PR?

  • I'm willing to submit a PR!

[Bug] Table Store records and fetches incorrect results with NaN

Search before asking

  • I searched in the issues and found nothing similar.

Paimon version

0.4

Compute Engine

flink

Minimal reproduce step

Use the following test data and SQL to reproduce this issue.

gao.csv:

1.0,2.0,aaaaaaaaaaaaaaa
0.0,0.0,aaaaaaaaaaaaaaa
1.0,1.0,aaaaaaaaaaaaaaa
0.0,0.0,aaaaaaaaaaaaaaa
1.0,0.0,aaaaaaaaaaaaaaa
0.0,0.0,aaaaaaaaaaaaaaa
-1.0,0.0,aaaaaaaaaaaaaaa
1.0,-1.0,aaaaaaaaaaaaaaa
1.0,-2.0,aaaaaaaaaaaaaaa
Flink SQL:

Flink SQL> create table T ( a double, b double, c string ) WITH ( 'connector' = 'filesystem', 'path' = '/tmp/gao.csv', 'format' = 'csv' );
[INFO] Execute statement succeed.

Flink SQL> create table S ( a string, b double ) WITH ( 'path' = '/tmp/store' );
[INFO] Execute statement succeed.

Flink SQL> insert into S select c, a / b from T;
[INFO] Submitting SQL update statement to the cluster...
[INFO] SQL update statement has been successfully submitted to the cluster:
Job ID: 851d7b3c233061733bdabbf30f20d16f

Flink SQL> select c, a / b from T;
+-----------------+-----------+
| c | EXPR$1 |
+-----------------+-----------+
| aaaaaaaaaaaaaaa | 0.5 |
| aaaaaaaaaaaaaaa | NaN |
| aaaaaaaaaaaaaaa | 1.0 |
| aaaaaaaaaaaaaaa | NaN |
| aaaaaaaaaaaaaaa | Infinity |
| aaaaaaaaaaaaaaa | NaN |
| aaaaaaaaaaaaaaa | -Infinity |
| aaaaaaaaaaaaaaa | -1.0 |
| aaaaaaaaaaaaaaa | -0.5 |
+-----------------+-----------+
9 rows in set

Flink SQL> select * from S;
+-----------------+-----------+
| a | b |
+-----------------+-----------+
| aaaaaaaaaaaaaaa | -Infinity |
| aaaaaaaaaaaaaaa | -Infinity |
| aaaaaaaaaaaaaaa | -Infinity |
| aaaaaaaaaaaaaaa | -Infinity |
| aaaaaaaaaaaaaaa | -Infinity |
| aaaaaaaaaaaaaaa | -Infinity |
| aaaaaaaaaaaaaaa | -Infinity |
| aaaaaaaaaaaaaaa | -1.0 |
| aaaaaaaaaaaaaaa | -0.5 |
+-----------------+-----------+
9 rows in set
Note that this issue may also affect FieldStatsCollector.
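
For background (this describes Java semantics, not Paimon's implementation), NaN is easy to mishandle in min/max statistics because Java's total ordering places NaN above +Infinity while the == operator never matches it:

// Java semantics only, not Paimon code: NaN under comparison and min/max.
public class NanOrdering {
    public static void main(String[] args) {
        System.out.println(Double.compare(Double.NaN, Double.POSITIVE_INFINITY)); // > 0, NaN sorts above +Infinity
        System.out.println(Double.NaN == Double.NaN);                             // false
        System.out.println(Math.max(Double.NEGATIVE_INFINITY, Double.NaN));       // NaN
        System.out.println(Math.min(Double.NaN, 1.0));                            // NaN
    }
}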

What doesn't meet your expectations?

Wrong result

Anything else?

No response

Are you willing to submit a PR?

  • I'm willing to submit a PR!

[Bug] Remove dependency on sun.misc.*

Search before asking

  • I searched in the issues and found nothing similar.

Paimon version

all version

Compute Engine

all

Minimal reproduce step

build with a JDK that is not provided by Oracle

What doesn't meet your expectations?

The project cannot be built with a JDK other than the one provided by Oracle.
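
For reference, one portable way to read primitives from a byte array without sun.misc.Unsafe is java.nio.ByteBuffer; the snippet below is illustrative and not Paimon's actual code:

// Illustrative only: portable primitive access via ByteBuffer instead of sun.misc.Unsafe.
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class PortableByteAccess {
    static long readLong(byte[] bytes, int offset) {
        return ByteBuffer.wrap(bytes, offset, Long.BYTES)
                .order(ByteOrder.LITTLE_ENDIAN)
                .getLong();
    }

    public static void main(String[] args) {
        byte[] buf = new byte[16];
        ByteBuffer.wrap(buf).order(ByteOrder.LITTLE_ENDIAN).putLong(8, 42L);
        System.out.println(readLong(buf, 8)); // 42
    }
}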

Anything else?

No response

Are you willing to submit a PR?

  • I'm willing to submit a PR!
