Eventuate Local

Spring/Micronaut: eventuate local bom

Quarkus: eventuate local quarkus bom

Simplifying the development of transactional microservices

Eventuate™ Local is the on-premise, open source version of Eventuate™, which is a platform for developing transactional business applications that use the microservice architecture. Eventuate provides an event-driven programming model for microservices that is based on event sourcing and CQRS. Eventuate™ Local has the same API as the SaaS version but uses a SQL database to persist events and Kafka as the publish/subscribe mechanism.

Eventuate Local consists of:

  • A framework for developing (microservice-based) applications.

  • An event store consisting of a SQL database (currently MySQL) and Kafka.

Big Picture

The framework persists events in an EVENTS table in the SQL database and subscribes to events in Kafka. A change data capture component tails the database transaction log and publishes each event to Kafka. There is a Kafka topic for each aggregate type.
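The key point of this design is that the application appends events in the same ACID transaction as its other writes, avoiding the dual-write problem; the CDC component publishes them afterwards. A minimal sketch of that write pattern, using sqlite3 and invented table and column names (the real Eventuate MySQL schema differs):

```shell
sqlite3 demo.db <<'SQL'
CREATE TABLE entities (entity_id TEXT PRIMARY KEY, entity_version TEXT);
CREATE TABLE events   (event_id TEXT PRIMARY KEY, event_type TEXT,
                       event_data TEXT, entity_id TEXT);
-- Entity state and event commit atomically in one local transaction.
BEGIN;
INSERT INTO entities VALUES ('account-1', 'v1');
INSERT INTO events   VALUES ('evt-1', 'AccountOpened', '{"balance":100}', 'account-1');
COMMIT;
-- The CDC component would later pick evt-1 out of the transaction log and
-- publish it to the Kafka topic for the Account aggregate type.
SELECT event_type FROM events;
SQL
```

Because the event row is part of the business transaction, it is either durably recorded together with the state change or not at all; there is no window in which the state is updated but the event is lost.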

About change data capture

Eventuate Local has a change data capture (CDC) component that

  1. tails the MySQL transaction log

  2. publishes each event inserted into the EVENTS table to the Kafka topic for the event’s aggregate type.

The CDC component runs either embedded within each application or as a standalone service, cdcservice.

Got questions?

Don’t hesitate to create an issue.

Need support?

Take a look at the available paid support options.

Setting up Eventuate Local

To use Eventuate Local you need to

  1. Create the EVENTS and ENTITIES tables in a MySQL database.

  2. Run Apache Zookeeper, Apache Kafka and, optionally, the cdcservice.

  3. Use the Eventuate Local artifacts in your application.

The easiest way to get started is to run a set of Docker containers using Docker Compose as described below.

The quick setup

This is the fastest way to get started with Eventuate Local.

Set the DOCKER_HOST_IP environment variable

You must first set the environment variable DOCKER_HOST_IP to the IP address of the machine running Docker. For example, if you are running Docker Machine on Mac/Windows this would be the IP address of the VirtualBox VM. Please note that you cannot set DOCKER_HOST_IP to localhost since that will not resolve to the correct IP address within a Docker container.
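How you obtain this IP address depends on your Docker setup. A sketch, assuming either a Docker Machine VM named default or a native Linux Docker host:

```shell
# On Docker Machine (Mac/Windows), use the VirtualBox VM's IP address:
#   export DOCKER_HOST_IP=$(docker-machine ip default)
# On a native Linux Docker host, the machine's primary IP address works:
export DOCKER_HOST_IP=$(hostname -I | awk '{print $1}')
echo "DOCKER_HOST_IP=$DOCKER_HOST_IP"
```

Either way, the result must be an address that is reachable from inside a container, which is why localhost does not work.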

Run the Eventuate Local Docker containers

Next, you can run the Docker containers. First, copy docker-compose-eventuate-local.yml to your project. Then, launch the Docker containers by running the following command:

docker-compose -f docker-compose-eventuate-local.yml up -d

This command creates the following containers:

  • Apache Zookeeper - used by both the change data capture component and Kafka

  • Apache Kafka - message broker

  • MySQL - MySQL database that has the eventuate schema already defined

  • cdcservice - the change data capture component

For convenience, you might want to add the contents of this file to your project’s docker-compose.yml file.

Set some environment variables

In order for your Spring Boot application to use Eventuate Local you need to set the following application properties:

spring.datasource.url=jdbc:mysql://${DOCKER_HOST_IP}/eventuate
spring.datasource.username=mysqluser
spring.datasource.password=mysqlpw
eventuateLocal.kafka.bootstrapServers=$DOCKER_HOST_IP:9092
eventuateLocal.zookeeper.connectionString=$DOCKER_HOST_IP:2181
eventuateLocal.cdc.dbUserName=root
eventuateLocal.cdc.dbPassword=rootpassword

A convenient way to do that is to set the corresponding OS environment variables:

export SPRING_DATASOURCE_URL=jdbc:mysql://${DOCKER_HOST_IP}/eventuate
export SPRING_DATASOURCE_USERNAME=mysqluser
export SPRING_DATASOURCE_PASSWORD=mysqlpw
export SPRING_DATASOURCE_DRIVER_CLASS_NAME=com.mysql.cj.jdbc.Driver
export EVENTUATELOCAL_KAFKA_BOOTSTRAP_SERVERS=$DOCKER_HOST_IP:9092
export EVENTUATELOCAL_CDC_DB_USER_NAME=root
export EVENTUATELOCAL_CDC_DB_PASSWORD=rootpassword
export EVENTUATELOCAL_ZOOKEEPER_CONNECTION_STRING=$DOCKER_HOST_IP:2181

These environment variables can also be set by running the set-env.sh bash script.

Use the Eventuate Local libraries

If you are using Gradle then please specify the following in gradle.properties:

eventuateLocalVersion=0.11.0.RELEASE

and instead of the Eventuate HTTP/STOMP artifacts, specify the following:

compile "io.eventuate.local.java:eventuate-local-java-jdbc:${eventuateLocalVersion}"
compile "io.eventuate.local.java:eventuate-local-java-embedded-cdc-autoconfigure:${eventuateLocalVersion}"

For more information about developing applications with Eventuate Local see the Getting Started guide.

Configuring your application containers

You need to configure your application’s containers to connect to the Eventuate MySQL, Kafka and Zookeeper containers. You can do that in your project’s docker-compose.yml file using links and environment:

mycontainer:
  ...
  links:
    - mysql
    - kafka
    - zookeeper
  environment:
    SPRING_DATASOURCE_URL: jdbc:mysql://mysql/eventuate
    SPRING_DATASOURCE_USERNAME: mysqluser
    SPRING_DATASOURCE_PASSWORD: mysqlpw
    SPRING_DATASOURCE_DRIVER_CLASS_NAME: com.mysql.cj.jdbc.Driver
    EVENTUATELOCAL_KAFKA_BOOTSTRAP_SERVERS: kafka:9092
    EVENTUATELOCAL_ZOOKEEPER_CONNECTION_STRING: zookeeper:2181
    EVENTUATELOCAL_CDC_DB_USER_NAME: root
    EVENTUATELOCAL_CDC_DB_PASSWORD: rootpassword

Note: for this to work, you must either have copied the container definitions from docker-compose-eventuate-local.yml to your docker-compose.yml file, or be running docker-compose with multiple -f arguments:

docker-compose -f docker-compose-eventuate-local.yml -f docker-compose.yml up -d

The not so quick version

TBD

Running an example application

The Eventuate example applications support both Eventuate and Eventuate Local.

To build an example with Eventuate Local, use this command:

./gradlew -P eventuateDriver=local assemble

To start the Docker Containers with Eventuate Local run this command:

docker-compose -f docker-compose-eventuate-local.yml up -d

The docker-compose-eventuate-local.yml file defines the application containers and the Eventuate Local containers and links them appropriately.

Contributors

cer, dartartem, eventuateio, karolisl, kwonglau


eventuate-local's Issues

CDC service not publishing events to accountviewService

Hi,
I have done the local setup of the Eventuate money transfer application with embedded CDC running. All services (Kafka, MySQL, MongoDB, etc.) are up and the application starts successfully; a sample of the CDC startup log is below. The customer service and account service work fine, creating a customerId and accountId, but the events are not published to accountviewservice and are not saved in MongoDB. I don't see an exception anywhere. Could you please tell me what could be going wrong here? Thanks!!
2018-04-09 19:44:53.128 INFO 13112 --- [0:0:0:0:1:2181)] org.apache.zookeeper.ClientCnxn : Opening socket connection to server 0:0:0:0:0:0:0:1/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error)
2018-04-09 19:44:53.130 INFO 13112 --- [0:0:0:0:1:2181)] org.apache.zookeeper.ClientCnxn : Socket connection established to 0:0:0:0:0:0:0:1/0:0:0:0:0:0:0:1:2181, initiating session
2018-04-09 19:44:53.324 INFO 13112 --- [0:0:0:0:1:2181)] org.apache.zookeeper.ClientCnxn : Session establishment complete on server 0:0:0:0:0:0:0:1/0:0:0:0:0:0:0:1:2181, sessionid = 0x162aab436240013, negotiated timeout = 40000
2018-04-09 19:44:53.337 INFO 13112 --- [ain-EventThread] o.a.c.f.state.ConnectionStateManager : State change: CONNECTED
2018-04-09 19:44:53.385 INFO 13112 --- [ main] d.EventTableChangesToAggregateTopicRelay : CDC initialized. Ready to become leader
2018-04-09 19:44:57.171 INFO 13112 --- [ main] o.s.b.a.e.mvc.EndpointHandlerMapping : Mapped "{[/env],methods=[POST]}" onto public

[question] Async CQRS

Hi,

In my current design, the UI is just another service that processes a chain of events. Not only for this purpose I ask: would it be hard to change the currently implemented concept in Eventuate from HTTP to an async Kafka design?

Thanks for info on this.

NOTE: In general something is still wrong in my head with the whole CQRS pattern :-). It feels like not a really reactive platform component, but that is a different question in itself. :-)

Issues with debezium-connector-mysql on RDS

Hi,
I'm trying to use the cdc-service on AWS with a MySQL RDS database. When running the cdc-service, the application fails at step 2, when it tries to flush the database, I guess.

Below the error :

017-07-03 16:48:50.533  INFO 8 --- [y-app-connector] i.d.connector.mysql.SnapshotReader       : Step 0: disabling autocommit and enabling repeatable read transactions
2017-07-03 16:48:50.554  INFO 8 --- [y-app-connector] i.d.connector.mysql.SnapshotReader       : Step 1: start transaction with consistent snapshot
2017-07-03 16:48:50.597  INFO 8 --- [y-app-connector] i.d.connector.mysql.SnapshotReader       : Step 2: flush and obtain global read lock (preventing writes to database)
2017-07-03 16:48:50.619 ERROR 8 --- [y-app-connector] i.d.connector.mysql.SnapshotReader       : Failed due to error: Aborting snapshot after running 'FLUSH TABLES WITH READ LOCK': Access denied for user 'eventuate_user'@'%' (using password: YES)

org.apache.kafka.connect.errors.ConnectException: Access denied for user 'eventuate_user'@'%' (using password: YES) Error code: 1045; SQLSTATE: 28000.
	at io.debezium.connector.mysql.AbstractReader.wrap(AbstractReader.java:141) ~[debezium-connector-mysql-0.3.1.jar!/:0.3.1]
	at io.debezium.connector.mysql.AbstractReader.failed(AbstractReader.java:120) ~[debezium-connector-mysql-0.3.1.jar!/:0.3.1]
	at io.debezium.connector.mysql.SnapshotReader.execute(SnapshotReader.java:449) [debezium-connector-mysql-0.3.1.jar!/:0.3.1]
	at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_121]
Caused by: java.sql.SQLException: Access denied for user 'eventuate_user'@'%' (using password: YES)
	at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:957) ~[mysql-connector-java-5.1.38.jar!/:5.1.38]
	at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3878) ~[mysql-connector-java-5.1.38.jar!/:5.1.38]
	at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3814) ~[mysql-connector-java-5.1.38.jar!/:5.1.38]
	at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2478) ~[mysql-connector-java-5.1.38.jar!/:5.1.38]
	at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2625) ~[mysql-connector-java-5.1.38.jar!/:5.1.38]
	at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2547) ~[mysql-connector-java-5.1.38.jar!/:5.1.38]
	at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2505) ~[mysql-connector-java-5.1.38.jar!/:5.1.38]
	at com.mysql.jdbc.StatementImpl.executeInternal(StatementImpl.java:840) ~[mysql-connector-java-5.1.38.jar!/:5.1.38]
	at com.mysql.jdbc.StatementImpl.execute(StatementImpl.java:740) ~[mysql-connector-java-5.1.38.jar!/:5.1.38]
	at io.debezium.jdbc.JdbcConnection.lambda$execute$1(JdbcConnection.java:242) ~[debezium-core-0.3.1.jar!/:0.3.1]
	at io.debezium.jdbc.JdbcConnection.execute(JdbcConnection.java:259) ~[debezium-core-0.3.1.jar!/:0.3.1]
	at io.debezium.jdbc.JdbcConnection.execute(JdbcConnection.java:236) ~[debezium-core-0.3.1.jar!/:0.3.1]
	at io.debezium.connector.mysql.SnapshotReader.execute(SnapshotReader.java:194) [debezium-connector-mysql-0.3.1.jar!/:0.3.1]
	... 1 common frames omitted

I found some discussion about the subject and it appears that there is a bug on RDS with the Debezium connector (versions <0.4.1). Fortunately, the issue seems to be resolved in version 0.4.1.

Could you please upgrade the version of this connector?
Any suggestions for running the CDC on AWS?
Should I create a MySQL instance on EC2 to make this work, as some have suggested?

Thanks in advance
Regards

DebeziumCdcStartupValidator isn't able to connect to my local mysql server.

It fails as shown below. All the configuration needed to enable binlogs is present in the my.ini conf file.
cdcservice_1 | 2018-07-11 10:32:52.429 INFO 5 --- [eaderSelector-0] i.e.l.c.d.DebeziumCdcStartupValidator : Failed
testing connection for 10.175.173.140:3306 with user 'root'
cdcservice_1 | 2018-07-11 10:32:53.441 INFO 5 --- [eaderSelector-0] i.e.l.c.d.DebeziumCdcStartupValidator : Failed
testing connection for 10.175.173.140:3306 with user 'root'
[the same message repeats every second]

Eventuate Local builds two different Postgres images

Problem:

  • Project builds two different Postgres images
  • One has WAL enabled. The other does not.
  • The containers have the same name: postgres
  • The build publishes the Docker image called '.... postgres ... ' - fortunately, since the WAL version is built last, it's the one that is published.

Proposal:

  • Eliminate the non-WAL one. Polling will presumably work just fine even if WAL is enabled.

Test with MariaDB

diff --git a/mysql/Dockerfile b/mysql/Dockerfile
index 778f433..842d241 100644
--- a/mysql/Dockerfile
+++ b/mysql/Dockerfile
@@ -1,3 +1,3 @@
-FROM mysql:5.7.13
+FROM mariadb:10.3.8
 COPY replication.cnf /etc/mysql/conf.d
 COPY initialize-database.sql /docker-entrypoint-initdb.d
diff --git a/mysql/replication.cnf b/mysql/replication.cnf
index d5f41e2..8ba8fa3 100644
--- a/mysql/replication.cnf
+++ b/mysql/replication.cnf
@@ -1,3 +1,4 @@
 [mysqld]
 log-bin=mysql-bin
 server-id=1
+binlog_format=ROW

[question] Using MongoDB as event store and ActiveMQ as broker

Hi,

For a small project I am required to use an event store, but since the main messaging is already based on ActiveMQ (AMQP 1.0) and storage on MongoDB, the client doesn't want to introduce new storage and messaging technologies/protocols and wants to stay with the existing ones. Is it supported, or easy, to integrate MongoDB and ActiveMQ as alternative components for the event store?

Thank you very much.

Ladislav

[discussion] Fully distributed locally bounded Event Store

This is a more fundamental question, since the whole idea of a centralized event store is somehow against the distributed design :-)))

Have you ever considered a design where events are stored only, in local storage, by the services that either produced or consumed them? I understand it could open a Pandora's box of other issues, but wouldn't it be a more beautiful design? This of course would make CQRS more difficult, but maybe CQRS itself is corrupted by design.

This is really free discussion only on this topic. The idea behind is more fundamental:
I dislike so-called event notifications; I see them more as IO. Event execution happens between I and O. So everything produced by event execution (possibly in multiple phases - start, progress, finish, fail, etc.) has the form of an abstraction I started calling EventOutput. This event output can be a small status update, but it can also be a complex structure terabytes in size; to me there is no difference. This also led me to get rid of any kind of HTTP CRUD-like stuff (CQRS is nice, but advanced CRUD to me, as it separates IO and provides entity grouping). Please be nice to me when I say these things :-)

And as soon as UI is another service I already implemented small example with using following scenario using Vaadin and SpringBoot, tried to imply event driven concepts only without any kind of HTTP.

  1. Loading the UI in the browser generates an event - UIStarted (if no previous message exists, generate a global transaction ID, which serves to match the chain of events and map it to business requirements), so no more GiveMeData naming conventions
  2. This event is consumed by multiple services providing any kind of data for UI in form of:
    ProcessListObtained (full table)
    etc.
    but data are not provided in concept like queries. The so called request-response pattern is only represented in form of 2 events in chain.
  3. The original UIStarted could be consumed by another validation chain of services, to do some security check and return ....

The idea is to get rid of thing so called Command Bus by removing restriction of EventNotification and give it full potential as pure EventOutput so it can carry data as well. The event in real world produce all data available.

I am really not sure if this whole idea makes sense to you, but I would love to hear your opinions. This has an impact not only on CQRS; I think even something like Aggregates may survive, as it is a superior idea from my perspective.

So in general I have no centralized Event Store and output of the event is not limited to notification used to change some state, but can carry anything.

What do you think?

java.lang.IllegalArgumentException: No ConfigurationProperties annotation found on 'io.eventuate.local.java.kafka.EventuateKafkaConfigurationProperties

While starting the Eventuate client application I get the exception below. Can you please help resolve this issue?

java.lang.IllegalArgumentException: No ConfigurationProperties annotation found on 'io.eventuate.local.java.kafka.EventuateKafkaConfigurationProperties'.
at org.springframework.util.Assert.notNull(Assert.java:115) ~[spring-core-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.boot.context.properties.EnableConfigurationPropertiesImportSelector$ConfigurationPropertiesBeanRegistrar.registerBeanDefinition(EnableConfigurationPropertiesImportSelector.java:118) ~[spring-boot-1.3.5.RELEASE.jar:1.3.5.RELEASE]
at org.springframework.boot.context.properties.EnableConfigurationPropertiesImportSelector$ConfigurationPropertiesBeanRegistrar.registerBeanDefinitions(EnableConfigurationPropertiesImportSelector.java:82) ~[spring-boot-1.3.5.RELEASE.jar:1.3.5.RELEASE]
at org.springframework.context.annotation.ConfigurationClassBeanDefinitionReader.loadBeanDefinitionsFromRegistrars(ConfigurationClassBeanDefinitionReader.java:352) ~[spring-context-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.context.annotation.ConfigurationClassBeanDefinitionReader.loadBeanDefinitionsForConfigurationClass(ConfigurationClassBeanDefinitionReader.java:143) ~[spring-context-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.context.annotation.ConfigurationClassBeanDefinitionReader.loadBeanDefinitions(ConfigurationClassBeanDefinitionReader.java:116) ~[spring-context-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.context.annotation.ConfigurationClassPostProcessor.processConfigBeanDefinitions(ConfigurationClassPostProcessor.java:333) ~[spring-context-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.context.annotation.ConfigurationClassPostProcessor.postProcessBeanDefinitionRegistry(ConfigurationClassPostProcessor.java:243) ~[spring-context-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.context.support.PostProcessorRegistrationDelegate.invokeBeanDefinitionRegistryPostProcessors(PostProcessorRegistrationDelegate.java:273) ~[spring-context-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.context.support.PostProcessorRegistrationDelegate.invokeBeanFactoryPostProcessors(PostProcessorRegistrationDelegate.java:98) ~[spring-context-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.invokeBeanFactoryPostProcessors(AbstractApplicationContext.java:678) ~[spring-context-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:520) ~[spring-context-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.refresh(EmbeddedWebApplicationContext.java:118) ~[spring-boot-1.3.5.RELEASE.jar:1.3.5.RELEASE]
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:766) [spring-boot-1.3.5.RELEASE.jar:1.3.5.RELEASE]
at org.springframework.boot.SpringApplication.createAndRefreshContext(SpringApplication.java:361) [spring-boot-1.3.5.RELEASE.jar:1.3.5.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:307) [spring-boot-1.3.5.RELEASE.jar:1.3.5.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1191) [spring-boot-1.3.5.RELEASE.jar:1.3.5.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1180) [spring-boot-1.3.5.RELEASE.jar:1.3.5.RELEASE]
at poc.infosys.eventsource.customerservice.CustomersServiceMain.main(CustomersServiceMain.java:29) [classes/:na]

Question: Open source Scala client

I have been trying Eventuate for a couple of days now and would like to build a Scala client (for eventuate-local) that does not depend on Spring.

I'm wondering if there are plans to release an open-source Scala client similar to the Java client.

Reverse mapping from one or more keys to entityId

@cer Given the current event store design, we rely on a known entityId in order to access the event store. That means all command-side actions must be started from the query side, and the query side has to have a table that can map an aggregate key to an entityId. In some use cases, though, the initial action might be started from the command side. I am thinking of creating a table in the event store like this.

create table keys (
  key_id VARCHAR(256),
  key_type VARCHAR(256),
  entity_id VARCHAR(256),
  PRIMARY KEY(key_id, key_type)
)

with sample data

key_id                          key_type           entity_id
Steve                           screen_name    1111111
[email protected]        email                 1111111

This requires adding additional methods to the event store interface (save, update and find). I am just wondering if this is a good idea and doable. Thanks for the fantastic framework.

How to maintain atomicity

Hi Chris

I have a question in mind: why is a separate process required to mine the data from the transaction log and publish it to Kafka? If the service itself, after saving the event in the event store and issuing a commit, then put the event in Kafka, would that not be a feasible solution? What is the drawback of that approach? Mining the transaction log is a little unfamiliar to me, so I thought of designing the solution this way. Please let me know your suggestions.

CDC fails to start due to SQL error when mysql contains table with dash in name

We have a table in MySQL whose name contains a dash (-), and this prevents the CDC from reading our events table.

Tested on eventuateio/eventuateio-local-cdc-service:0.12.0 docker image.

Am I missing some configuration?
As I understand it, the table name should be escaped in backticks (`), but is there a way to disable processing all available tables?

My configuration:

          value: zookeeper:2181

        - name: SPRING_DATASOURCE_DRIVER_CLASS_NAME
          value: com.mysql.jdbc.Driver

        - name: SPRING_DATASOURCE_URL
          value: jdbc:mysql://hostname/my_table?useUnicode=true&characterEncoding=utf8
        - name: SPRING_DATASOURCE_USERNAME
          value: fssui
        - name: SPRING_DATASOURCE_PASSWORD
          value: <snip>

        - name: EVENTUATELOCAL_CDC_DB_USER_NAME
          value: <snip>
        - name: EVENTUATELOCAL_CDC_DB_PASSWORD
          value: <snip>
        - name: EVENTUATELOCAL_KAFKA_BOOTSTRAP_SERVERS
          value: <snip>:9092

Exception:


org.apache.kafka.connect.errors.ConnectException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '-thing' at line 1 Error code: 1064; SQLSTATE: 42000.
        at io.debezium.connector.mysql.AbstractReader.wrap(AbstractReader.java:141) ~[debezium-connector-mysql-0.3.1.jar!/:0.3.1]
        at io.debezium.connector.mysql.AbstractReader.failed(AbstractReader.java:120) ~[debezium-connector-mysql-0.3.1.jar!/:0.3.1]
        at io.debezium.connector.mysql.SnapshotReader.execute(SnapshotReader.java:449) [debezium-connector-mysql-0.3.1.jar!/:0.3.1]
        at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_91]
Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '-thing' at line 1
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[na:1.8.0_91]
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[na:1.8.0_91]
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[na:1.8.0_91]
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[na:1.8.0_91]
        at com.mysql.jdbc.Util.handleNewInstance(Util.java:404) ~[mysql-connector-java-5.1.38.jar!/:5.1.38]
        at com.mysql.jdbc.Util.getInstance(Util.java:387) ~[mysql-connector-java-5.1.38.jar!/:5.1.38]
        at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:939) ~[mysql-connector-java-5.1.38.jar!/:5.1.38]
        at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3878) ~[mysql-connector-java-5.1.38.jar!/:5.1.38]
        at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3814) ~[mysql-connector-java-5.1.38.jar!/:5.1.38]
        at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2478) ~[mysql-connector-java-5.1.38.jar!/:5.1.38]
        at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2625) ~[mysql-connector-java-5.1.38.jar!/:5.1.38]
        at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2547) ~[mysql-connector-java-5.1.38.jar!/:5.1.38]
        at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2505) ~[mysql-connector-java-5.1.38.jar!/:5.1.38]
        at com.mysql.jdbc.StatementImpl.executeQuery(StatementImpl.java:1370) ~[mysql-connector-java-5.1.38.jar!/:5.1.38]
        at io.debezium.jdbc.JdbcConnection.query(JdbcConnection.java:306) ~[debezium-core-0.3.1.jar!/:0.3.1]
        at io.debezium.jdbc.JdbcConnection.query(JdbcConnection.java:287) ~[debezium-core-0.3.1.jar!/:0.3.1]
        at io.debezium.connector.mysql.SnapshotReader.execute(SnapshotReader.java:253) [debezium-connector-mysql-0.3.1.jar!/:0.3.1]
        ... 1 common frames omitted

Log:
to-gh.txt

Noop Leadership

An empty (no-op) leadership implementation could be useful at some point (e.g. for tests).

Event handler is not consuming messages

Hello,

I've set up two Spring Boot applications which are involved in a distributed transaction. When the first application is called, a new Event is inserted into the MySql Database. I expected that the cdc-service would read the MySql logs and publish a message to the Kafka queue, but that doesn't seem to happen.

My current environment consists of my two apps, cdc-service, mysql, kafka and zookeeper.

Is there a way to set the logging level of the cdc-service by inserting a new property in the docker-compose?

Do you have any advice for finding out what's happening in this situation?

Thank you,

Fabrizio

i.e.l.j.j.EventuateLocalJdbcAccess: Failed to update entity: 0 -- Events delivered multiple times

Hello,

I've created a simple application that is simulating saga execution based on the pure CQRS approach [1]. Everything is running fine on separated invocations:

Valid run:

2018-04-16 06:33:15.570  INFO 6 --- [ool-1-thread-10] OrderSagaAggregate                       : STARTING SAGA for order aggregate 382c8d6c-faba-45a7-89ea-ca82be66722e
2018-04-16 06:33:15.613  INFO 6 --- [ool-1-thread-11] o.l.e.o.domain.service.OrderSagaService  : posting shipment request for saga 00000162cd285633-0242ac1300070001 to http://shipment-service:8080/api/request
2018-04-16 06:33:15.656  INFO 6 --- [ool-1-thread-11] o.l.e.o.domain.service.OrderSagaService  : Shipment request is being processed
2018-04-16 06:33:15.656  INFO 6 --- [ool-1-thread-11] o.l.e.o.domain.service.OrderSagaService  : posting invoice request for saga 00000162cd285633-0242ac1300070001 to http://invoice-service:8080/api/request
2018-04-16 06:33:15.687  INFO 6 --- [onPool-worker-0] OrderSagaAggregate                       : received ProcessShipmentCommand for order 382c8d6c-faba-45a7-89ea-ca82be66722e
2018-04-16 06:33:15.688  INFO 6 --- [ool-1-thread-11] o.l.e.o.domain.service.OrderSagaService  : Invoice request is being processed
2018-04-16 06:33:15.732  INFO 6 --- [onPool-worker-1] OrderSagaAggregate                       : received ProcessInvoiceCommand for order 382c8d6c-faba-45a7-89ea-ca82be66722e
2018-04-16 06:33:15.775  INFO 6 --- [ool-1-thread-11] OrderSagaAggregate                       : saga executed successfully -- order 382c8d6c-faba-45a7-89ea-ca82be66722e, testProduct
2018-04-16 06:33:15.775  INFO 6 --- [ool-1-thread-11] OrderSagaAggregate                       : ENDING SAGA

However, when I run multiple requests subsequently -- even manually curling twice in fast succession :) -- most of the time I get the error i.e.l.j.j.EventuateLocalJdbcAccess: Failed to update entity: 0 and events are delivered multiple times:

Invalid run:

2018-04-16 06:29:56.468  INFO 6 --- [nio-8080-exec-3] o.l.e.o.domain.service.OrderService      : sending FileOrderCommand
2018-04-16 06:29:56.473  INFO 6 --- [pool-1-thread-4] OrderSagaAggregate                       : STARTING SAGA for order aggregate ece93522-2390-40dc-b916-f34abde09aa3
2018-04-16 06:29:56.519  INFO 6 --- [pool-1-thread-6] OrderSagaAggregate                       : STARTING SAGA for order aggregate 9e2bb329-7914-4299-a5a5-31edb6de3996
2018-04-16 06:29:56.536  INFO 6 --- [pool-1-thread-4] o.l.e.o.domain.service.OrderSagaService  : posting shipment request for saga 00000162cd254c7b-0242ac1300070001 to http://shipment-service:8080/api/request
2018-04-16 06:29:56.570  INFO 6 --- [pool-1-thread-6] o.l.e.o.domain.service.OrderSagaService  : posting shipment request for saga 00000162cd254cab-0242ac1300070001 to http://shipment-service:8080/api/request
2018-04-16 06:29:56.589  INFO 6 --- [pool-1-thread-4] o.l.e.o.domain.service.OrderSagaService  : Shipment request is being processed
2018-04-16 06:29:56.589  INFO 6 --- [pool-1-thread-4] o.l.e.o.domain.service.OrderSagaService  : posting invoice request for saga 00000162cd254c7b-0242ac1300070001 to http://invoice-service:8080/api/request
2018-04-16 06:29:56.594  INFO 6 --- [pool-1-thread-6] o.l.e.o.domain.service.OrderSagaService  : Shipment request is being processed
2018-04-16 06:29:56.594  INFO 6 --- [pool-1-thread-6] o.l.e.o.domain.service.OrderSagaService  : posting invoice request for saga 00000162cd254cab-0242ac1300070001 to http://invoice-service:8080/api/request
2018-04-16 06:29:56.615  INFO 6 --- [pool-1-thread-4] o.l.e.o.domain.service.OrderSagaService  : Invoice request is being processed
2018-04-16 06:29:56.618  INFO 6 --- [pool-1-thread-6] o.l.e.o.domain.service.OrderSagaService  : Invoice request is being processed
2018-04-16 06:29:56.633  INFO 6 --- [onPool-worker-1] OrderSagaAggregate                       : received ProcessShipmentCommand for order 9e2bb329-7914-4299-a5a5-31edb6de3996
2018-04-16 06:29:56.640  INFO 6 --- [onPool-worker-3] OrderSagaAggregate                       : received ProcessShipmentCommand for order ece93522-2390-40dc-b916-f34abde09aa3
2018-04-16 06:29:56.655  INFO 6 --- [onPool-worker-0] OrderSagaAggregate                       : received ProcessInvoiceCommand for order ece93522-2390-40dc-b916-f34abde09aa3
2018-04-16 06:29:56.658 ERROR 6 --- [onPool-worker-0] i.e.l.j.j.EventuateLocalJdbcAccess       : Failed to update entity: 0
2018-04-16 06:29:56.662  INFO 6 --- [onPool-worker-0] OrderSagaAggregate                       : received ProcessInvoiceCommand for order ece93522-2390-40dc-b916-f34abde09aa3
2018-04-16 06:29:56.683  INFO 6 --- [onPool-worker-1] OrderSagaAggregate                       : saga executed successfully -- order ece93522-2390-40dc-b916-f34abde09aa3, testProduct-1
2018-04-16 06:29:56.683  INFO 6 --- [onPool-worker-1] OrderSagaAggregate                       : ENDING SAGA
2018-04-16 06:29:56.684  INFO 6 --- [onPool-worker-3] OrderSagaAggregate                       : received ProcessInvoiceCommand for order 9e2bb329-7914-4299-a5a5-31edb6de3996
2018-04-16 06:29:56.686 ERROR 6 --- [onPool-worker-3] i.e.l.j.j.EventuateLocalJdbcAccess       : Failed to update entity: 0
2018-04-16 06:29:56.694  INFO 6 --- [onPool-worker-3] OrderSagaAggregate                       : received ProcessInvoiceCommand for order 9e2bb329-7914-4299-a5a5-31edb6de3996
2018-04-16 06:29:56.695  INFO 6 --- [onPool-worker-0] OrderSagaAggregate                       : saga executed successfully -- order ece93522-2390-40dc-b916-f34abde09aa3, testProduct-1
2018-04-16 06:29:56.698  INFO 6 --- [onPool-worker-0] OrderSagaAggregate                       : ENDING SAGA
2018-04-16 06:29:56.699 ERROR 6 --- [onPool-worker-0] i.e.l.j.j.EventuateLocalJdbcAccess       : Failed to update entity: 0
2018-04-16 06:29:56.712  INFO 6 --- [onPool-worker-1] OrderSagaAggregate                       : saga executed successfully -- order ece93522-2390-40dc-b916-f34abde09aa3, testProduct-1
2018-04-16 06:29:56.712  INFO 6 --- [onPool-worker-1] OrderSagaAggregate                       : ENDING SAGA
2018-04-16 06:29:56.777  INFO 6 --- [onPool-worker-0] OrderSagaAggregate                       : saga executed successfully -- order 9e2bb329-7914-4299-a5a5-31edb6de3996, testProduct-2
2018-04-16 06:29:56.777  INFO 6 --- [onPool-worker-0] OrderSagaAggregate                       : ENDING SAGA

It looks like the events are collected somewhere on error and redelivered on the next invocation. If I pause between invocations, everything is delivered only once, as expected.

Any help would be appreciated

[1] https://github.com/xstefank/eventuate-service
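For what it's worth, since event delivery here is at-least-once, a common mitigation for the redelivery described above is to make event handlers idempotent by tracking processed event IDs. A minimal sketch only -- in a real service the processed-ID set would be persisted in the database, in the same transaction as the handler's own writes, not held in memory:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class IdempotentHandler {
    // Sketch only: an in-memory set of processed event IDs. In production this
    // would live in the database so that deduplication survives restarts.
    private final Set<String> processedEventIds = ConcurrentHashMap.newKeySet();

    // Returns true when the event was handled now, false when it is a redelivered duplicate.
    boolean handle(String eventId, Runnable businessLogic) {
        if (!processedEventIds.add(eventId)) {
            return false; // already processed: skip side effects
        }
        businessLogic.run();
        return true;
    }
}
```

With this in place, a redelivered event is detected by its ID and its side effects are skipped, so duplicate delivery becomes harmless.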

Spring Boot 2.0 doesn't allow camelcase for ConfigurationPropertyName

https://github.com/spring-projects/spring-boot/blob/master/spring-boot/src/main/java/org/springframework/boot/context/properties/source/ConfigurationPropertyName.java#L673-L679

Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'eventuateLocal.cdc-io.eventuate.local.cdc.debezium.EventTableChangesToAggregateTopicRelayConfigurationProperties': Could not bind properties to EventTableChangesToAggregateTopicRelayConfigurationProperties (prefix=eventuateLocal.cdc, ignoreInvalidFields=false, ignoreUnknownFields=true); nested exception is org.springframework.boot.context.properties.source.InvalidConfigurationPropertyNameException: Configuration property name 'eventuateLocal.cdc' is not valid
at org.springframework.boot.context.properties.ConfigurationPropertiesBindingPostProcessor.postProcessBeforeInitialization(ConfigurationPropertiesBindingPostProcessor.java:341)
at org.springframework.boot.context.properties.ConfigurationPropertiesBindingPostProcessor.postProcessBeforeInitialization(ConfigurationPropertiesBindingPostProcessor.java:306)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyBeanPostProcessorsBeforeInitialization(AbstractAutowireCapableBeanFactory.java:422)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1707)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:582)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:499)
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:312)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:310)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:205)
at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:255)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1131)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1058)
at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:812)
at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:718)
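For reference, Spring Boot 2's canonical property-name rule can be approximated with a small check (a simplified sketch of the logic in the linked ConfigurationPropertyName code, not the actual Spring implementation): each dot-separated element may contain only lowercase letters, digits, and '-', so the prefix eventuateLocal.cdc fails because of the uppercase 'L'.

```java
public class PropertyNameCheck {
    // Simplified sketch of Spring Boot 2's canonical property-name rule:
    // every dot-separated element must consist of lowercase letters,
    // digits, and '-' only.
    static boolean isValid(String name) {
        for (String element : name.split("\\.")) {
            if (element.isEmpty()) return false;
            for (char ch : element.toCharArray()) {
                boolean ok = (ch >= 'a' && ch <= 'z')
                        || (ch >= '0' && ch <= '9')
                        || ch == '-';
                if (!ok) return false;
            }
        }
        return true;
    }
}
```

So a prefix like eventuatelocal.cdc would bind, while eventuateLocal.cdc is rejected with the InvalidConfigurationPropertyNameException above.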

Replace string concatenation with String.format()

Replace

   String query = "UPDATE " + pollingDataParser.table() +
        " SET " + pollingDataParser.publishedField() + " = 1 " +
        "WHERE " + pollingDataParser.idField() + " in (:ids)";

with String.format()
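The requested replacement might look like the following sketch, where PollingDataParser is a stand-in interface for the three accessors used in the snippet above:

```java
// Hypothetical stand-in for the parser used in the original snippet.
interface PollingDataParser {
    String table();
    String publishedField();
    String idField();
}

public class UpdateQueryBuilder {
    // Builds the same UPDATE statement with String.format() instead of concatenation.
    static String buildQuery(PollingDataParser p) {
        return String.format("UPDATE %s SET %s = 1 WHERE %s in (:ids)",
                p.table(), p.publishedField(), p.idField());
    }
}
```

The format string makes the shape of the SQL visible at a glance instead of spreading it across concatenated fragments.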

Issues with EventuateSqlDialect.getCurrentTimeInMillisecondsExpression()

The goal is for the creation_time column to be a millisecond-precision field, in order to facilitate 'tracing' and 'auditing' of messages.

The current implementations all seem to calculate the time in seconds and multiply by 1000, which misses the point.

Also, it could be

  • a database-specific type (time type with millisecond precision) rather than "application-defined" time milliseconds.
  • set using a default column value in the table definition rather than being set by the insert statement (if that works for all databases)
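For MySQL specifically, a true millisecond-precision expression is possible because NOW(3) carries fractional seconds (MySQL 5.6.4+). A sketch of what the dialect method could return -- this mirrors the method name under discussion but is not the actual Eventuate implementation:

```java
public class MySqlMillisecondExpression {
    // Sketch only: UNIX_TIMESTAMP(NOW(3)) returns seconds with a fractional
    // millisecond part, so multiplying by 1000 and rounding yields real epoch
    // milliseconds rather than whole seconds * 1000.
    static String getCurrentTimeInMillisecondsExpression() {
        return "ROUND(UNIX_TIMESTAMP(NOW(3)) * 1000)";
    }
}
```

Other databases would need their own fractional-second expressions, which is one argument for the database-specific-type option in the list above.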

NullPointerException when deleting events from events table

eventuateLocalVersion=0.15.0.RELEASE
CDC docker image version 0.18.0.RELEASE

After deleting some events:

DELETE FROM `eventuate`.`events` WHERE  `event_id`='

https://github.com/eventuate-local/eventuate-local/blob/0.18.0.RELEASE/eventuate-local-java-embedded-cdc/src/main/java/io/eventuate/local/cdc/debezium/MySqlBinLogBasedEventTableChangesToAggregateTopicRelay.java#L146

Caused by: java.lang.RuntimeException: Engine through exceptionStopping connector after error in the application's handler method: null
at io.eventuate.local.cdc.debezium.MySqlBinLogBasedEventTableChangesToAggregateTopicRelay.lambda$startCapturingChanges$0(MySqlBinLogBasedEventTableChangesToAggregateTopicRelay.java:96) ~[eventuate-local-java-embedded-cdc-0.15.0-SNAPSHOT.jar!/:na]
at io.debezium.embedded.EmbeddedEngine.run(EmbeddedEngine.java:742) ~[debezium-embedded-0.3.6.jar!/:0.3.6]
at io.eventuate.local.cdc.debezium.MySqlBinLogBasedEventTableChangesToAggregateTopicRelay.lambda$startCapturingChanges$1(MySqlBinLogBasedEventTableChangesToAggregateTopicRelay.java:105) ~[eventuate-local-java-embedded-cdc-0.15.0-SNAPSHOT.jar!/:na]
... 3 common frames omitted
Caused by: java.lang.NullPointerException: null
at io.eventuate.local.cdc.debezium.MySqlBinLogBasedEventTableChangesToAggregateTopicRelay.receiveEvent(MySqlBinLogBasedEventTableChangesToAggregateTopicRelay.java:146) ~[eventuate-local-java-embedded-cdc-0.15.0-SNAPSHOT.jar!/:na]
at io.debezium.embedded.EmbeddedEngine.run(EmbeddedEngine.java:662) ~[debezium-embedded-0.3.6.jar!/:0.3.6]
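The crash is consistent with the handler assuming every change record is an insert: for a DELETE, the "after" row is typically absent (and Debezium also emits tombstone records with a null value), so dereferencing it throws NPE. A null guard along these lines would avoid it -- the record type and method here are hypothetical stand-ins, not the actual relay code:

```java
public class DeleteSafeHandler {
    // Hypothetical minimal change-record type: deletes arrive with a null "after" row.
    static final class ChangeRecord {
        final java.util.Map<String, Object> afterRow; // null for DELETE / tombstone
        ChangeRecord(java.util.Map<String, Object> afterRow) { this.afterRow = afterRow; }
    }

    // Returns true if the record was published; skips deletes instead of throwing NPE.
    static boolean receiveEvent(ChangeRecord record) {
        if (record == null || record.afterRow == null) {
            return false; // DELETE or tombstone: nothing to publish
        }
        // ... publish record.afterRow to the aggregate's Kafka topic ...
        return true;
    }
}
```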

[unified-cdc] Default Table Naming

Default table names should be bound to the pipeline type, not to the project:

  • 'message' for 'eventuate-tram'
  • 'events' for 'eventuate-local'
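The requested behavior could be sketched as a lookup keyed by pipeline type (the names are taken from the issue; this is an illustration of the proposal, not existing cdc-service code, and explicit configuration would still override the default):

```java
import java.util.HashMap;
import java.util.Map;

public class DefaultTableNames {
    private static final Map<String, String> DEFAULTS = new HashMap<>();
    static {
        DEFAULTS.put("eventuate-tram", "message");
        DEFAULTS.put("eventuate-local", "events");
    }

    // Resolves the default source table for a pipeline type.
    static String defaultTableFor(String pipelineType) {
        String table = DEFAULTS.get(pipelineType);
        if (table == null) {
            throw new IllegalArgumentException("Unknown pipeline type: " + pipelineType);
        }
        return table;
    }
}
```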

Official Client for Node.js?

I would like to check whether there is an official Eventuate client for using the framework from a Node.js application.
Thank you!

While running with docker-compose, the CDC service should wait for the mysql container

When running with docker-compose, if the CDC service starts before MySQL is ready to accept connections, it fails to read the transaction log.
As a workaround, the cdcservice image can be built with https://github.com/vishnubob/wait-for-it/blob/master/wait-for-it.sh
so that the CDC service waits until MySQL starts accepting connections.
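The same wait could also be sketched directly in Java at startup, polling the MySQL port before the binlog reader is started. Host, port, and timeout values below are illustrative, not taken from the cdc-service:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class WaitForPort {
    // Polls until a TCP connect to host:port succeeds or the deadline passes.
    // Equivalent in spirit to wait-for-it.sh.
    static boolean waitForPort(String host, int port, long timeoutMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress(host, port), 500);
                return true; // server is accepting connections
            } catch (IOException e) {
                try {
                    Thread.sleep(250); // not up yet; retry
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    return false;
                }
            }
        }
        return false;
    }
}
```

Note that a successful TCP connect only shows the port is open; MySQL may still be initializing, so the CDC code should also retry its first binlog connection.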

The error message seen is below:

2016-12-23 07:40:09.514 ERROR 7 --- [eaderSelector-0] d.EventTableChangesToAggregateTopicRelay : In takeLeadership

java.util.concurrent.ExecutionException: java.lang.RuntimeException: Engine failed to startError while trying to run connector class 'io.debezium.connector.mysql.MySqlConnector'
at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) ~[na:1.8.0_91]
at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895) ~[na:1.8.0_91]
at io.eventuate.local.cdc.debezium.EventTableChangesToAggregateTopicRelay$1.takeLeadership(EventTableChangesToAggregateTopicRelay.java:61) [eventuate-local-java-embedded-cdc-0.6.0.RELEASE.jar!/:na]
at io.eventuate.local.cdc.debezium.EventTableChangesToAggregateTopicRelay$1.takeLeadership(EventTableChangesToAggregateTopicRelay.java:53) [eventuate-local-java-embedded-cdc-0.6.0.RELEASE.jar!/:na]
at org.apache.curator.framework.recipes.leader.LeaderSelector$WrappedListener.takeLeadership(LeaderSelector.java:534) [curator-recipes-2.11.0.jar!/:na]
at org.apache.curator.framework.recipes.leader.LeaderSelector.doWork(LeaderSelector.java:399) [curator-recipes-2.11.0.jar!/:na]
at org.apache.curator.framework.recipes.leader.LeaderSelector.doWorkLoop(LeaderSelector.java:441) [curator-recipes-2.11.0.jar!/:na]
at org.apache.curator.framework.recipes.leader.LeaderSelector.access$100(LeaderSelector.java:64) [curator-recipes-2.11.0.jar!/:na]
at org.apache.curator.framework.recipes.leader.LeaderSelector$2.call(LeaderSelector.java:245) [curator-recipes-2.11.0.jar!/:na]
at org.apache.curator.framework.recipes.leader.LeaderSelector$2.call(LeaderSelector.java:239) [curator-recipes-2.11.0.jar!/:na]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_91]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_91]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_91]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_91]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_91]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
Caused by: java.lang.RuntimeException: Engine failed to startError while trying to run connector class 'io.debezium.connector.mysql.MySqlConnector'
at io.eventuate.local.cdc.debezium.EventTableChangesToAggregateTopicRelay.lambda$startCapturingChanges$0(EventTableChangesToAggregateTopicRelay.java:155) ~[eventuate-local-java-embedded-cdc-0.6.0.RELEASE.jar!/:na]
at io.debezium.embedded.EmbeddedEngine.fail(EmbeddedEngine.java:342) ~[debezium-embedded-0.3.1.jar!/:0.3.1]
at io.debezium.embedded.EmbeddedEngine.run(EmbeddedEngine.java:507) ~[debezium-embedded-0.3.1.jar!/:0.3.1]
at io.eventuate.local.cdc.debezium.EventTableChangesToAggregateTopicRelay.lambda$startCapturingChanges$1(EventTableChangesToAggregateTopicRelay.java:164) ~[eventuate-local-java-embedded-cdc-0.6.0.RELEASE.jar!/:na]
... 3 common frames omitted
Caused by: org.apache.kafka.connect.errors.ConnectException: Communications link failure

The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server. Error code: 0; SQLSTATE: 08S01.
at io.debezium.connector.mysql.AbstractReader.wrap(AbstractReader.java:141) ~[debezium-connector-mysql-0.3.1.jar!/:0.3.1]
at io.debezium.connector.mysql.AbstractReader.failed(AbstractReader.java:120) ~[debezium-connector-mysql-0.3.1.jar!/:0.3.1]
at io.debezium.connector.mysql.SnapshotReader.execute(SnapshotReader.java:449) ~[debezium-connector-mysql-0.3.1.jar!/:0.3.1]
... 1 common frames omitted
Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure

The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[na:1.8.0_91]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[na:1.8.0_91]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[na:1.8.0_91]
at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[na:1.8.0_91]
at com.mysql.jdbc.Util.handleNewInstance(Util.java:404) ~[mysql-connector-java-5.1.38.jar!/:5.1.38]
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:981) ~[mysql-connector-java-5.1.38.jar!/:5.1.38]
at com.mysql.jdbc.MysqlIO.readPacket(MysqlIO.java:628) ~[mysql-connector-java-5.1.38.jar!/:5.1.38]
at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1014) ~[mysql-connector-java-5.1.38.jar!/:5.1.38]
at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2255) ~[mysql-connector-java-5.1.38.jar!/:5.1.38]
at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2286) ~[mysql-connector-java-5.1.38.jar!/:5.1.38]
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2085) ~[mysql-connector-java-5.1.38.jar!/:5.1.38]
at com.mysql.jdbc.ConnectionImpl.(ConnectionImpl.java:795) ~[mysql-connector-java-5.1.38.jar!/:5.1.38]
at com.mysql.jdbc.JDBC4Connection.(JDBC4Connection.java:44) ~[mysql-connector-java-5.1.38.jar!/:5.1.38]
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[na:1.8.0_91]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[na:1.8.0_91]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[na:1.8.0_91]
at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[na:1.8.0_91]
at com.mysql.jdbc.Util.handleNewInstance(Util.java:404) ~[mysql-connector-java-5.1.38.jar!/:5.1.38]
at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:400) ~[mysql-connector-java-5.1.38.jar!/:5.1.38]
at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:327) ~[mysql-connector-java-5.1.38.jar!/:5.1.38]
at java.sql.DriverManager.getConnection(DriverManager.java:664) ~[na:1.8.0_91]
at java.sql.DriverManager.getConnection(DriverManager.java:208) ~[na:1.8.0_91]
at io.debezium.jdbc.JdbcConnection.lambda$patternBasedFactory$0(JdbcConnection.java:113) ~[debezium-core-0.3.1.jar!/:0.3.1]
at io.debezium.jdbc.JdbcConnection.connection(JdbcConnection.java:466) ~[debezium-core-0.3.1.jar!/:0.3.1]
at io.debezium.jdbc.JdbcConnection.setAutoCommit(JdbcConnection.java:212) ~[debezium-core-0.3.1.jar!/:0.3.1]
at io.debezium.connector.mysql.SnapshotReader.execute(SnapshotReader.java:168) ~[debezium-connector-mysql-0.3.1.jar!/:0.3.1]
... 1 common frames omitted
Caused by: java.io.EOFException: Can not read response from server. Expected to read 4 bytes, read 0 bytes before connection was unexpectedly lost.
at com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:2957) ~[mysql-connector-java-5.1.38.jar!/:5.1.38]
at com.mysql.jdbc.MysqlIO.readPacket(MysqlIO.java:560) ~[mysql-connector-java-5.1.38.jar!/:5.1.38]
... 20 common frames omitted

2016-12-23 07:40:09.516 INFO 7 --- [pool-5-thread-1] o.a.kafka.connect.util.KafkaBasedLog : Stopped KafkaBasedLog for topic eventuate.local.cdc.my-sql-connector.offset.storage
2016-12-23 07:40:09.528 INFO 7 --- [pool-5-thread-1] o.a.k.c.storage.KafkaOffsetBackingStore : Stopped KafkaOffsetBackingStore

Test Hangs from Time to Time

JdbcAutoConfigurationIntegrationSyncTest hangs when the pool tries to get a connection.
Change the test to use a Hikari pool.

[question] Support of MongoDB as Event Store

Hi, I am putting this here as I am interested in the on-premise version only.

Is there any blocker to supporting MongoDB as the Event Store? You extended support to PostgreSQL (thank you). PostgreSQL was my preferred solution, but the more I consider MongoDB as a local instance for services to support aggregate materialization, the more I think of using MongoDB for the Event Store as well.

I could look at this myself; I am just asking for hints about what would need to change, not asking you to do it.

Thanks for any info.

Cdcservice lost connection to mysql

Would you have any idea why cdcservice displays the errors below?
The effect is that events are no longer published.

The error message seen is below:
java.util.concurrent.ExecutionException: java.lang.RuntimeException: Engine failed to startError while trying to run connector class 'io.debezium.connector.mysql.MySqlConnector'
at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) ~[na:1.8.0_121]
at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895) ~[na:1.8.0_121]
at io.eventuate.local.cdc.debezium.EventTableChangesToAggregateTopicRelay$1.takeLeadership(EventTableChangesToAggregateTopicRelay.java:69) [eventuate-local-java-embedded-cdc-0.12.0.RELEASE.jar!/:na]
at io.eventuate.local.cdc.debezium.EventTableChangesToAggregateTopicRelay$1.takeLeadership(EventTableChangesToAggregateTopicRelay.java:61) [eventuate-local-java-embedded-cdc-0.12.0.RELEASE.jar!/:na]
at org.apache.curator.framework.recipes.leader.LeaderSelector$WrappedListener.takeLeadership(LeaderSelector.java:534) [curator-recipes-2.11.0.jar!/:na]
at org.apache.curator.framework.recipes.leader.LeaderSelector.doWork(LeaderSelector.java:399) [curator-recipes-2.11.0.jar!/:na]
at org.apache.curator.framework.recipes.leader.LeaderSelector.doWorkLoop(LeaderSelector.java:441) [curator-recipes-2.11.0.jar!/:na]
at org.apache.curator.framework.recipes.leader.LeaderSelector.access$100(LeaderSelector.java:64) [curator-recipes-2.11.0.jar!/:na]
at org.apache.curator.framework.recipes.leader.LeaderSelector$2.call(LeaderSelector.java:245) [curator-recipes-2.11.0.jar!/:na]
at org.apache.curator.framework.recipes.leader.LeaderSelector$2.call(LeaderSelector.java:239) [curator-recipes-2.11.0.jar!/:na]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_121]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_121]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_121]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_121]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_121]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121]
Caused by: java.lang.RuntimeException: Engine failed to startError while trying to run connector class 'io.debezium.connector.mysql.MySqlConnector'
at io.eventuate.local.cdc.debezium.EventTableChangesToAggregateTopicRelay.lambda$startCapturingChanges$0(EventTableChangesToAggregateTopicRelay.java:165) ~[eventuate-local-java-embedded-cdc-0.12.0.RELEASE.jar!/:na]
at io.debezium.embedded.EmbeddedEngine.fail(EmbeddedEngine.java:342) ~[debezium-embedded-0.3.1.jar!/:0.3.1]
at io.debezium.embedded.EmbeddedEngine.run(EmbeddedEngine.java:507) ~[debezium-embedded-0.3.1.jar!/:0.3.1]
at io.eventuate.local.cdc.debezium.EventTableChangesToAggregateTopicRelay.lambda$startCapturingChanges$1(EventTableChangesToAggregateTopicRelay.java:174) ~[eventuate-local-java-embedded-cdc-0.12.0.RELEASE.jar!/:na]
... 3 common frames omitted
Caused by: org.apache.kafka.connect.errors.ConnectException: A slave with the same server_uuid/server_id as this slave has connected to the master; the first event 'mysql-bin.000005' at 76930033, the last event read from './mysql-bin.000008' at 2589, the last byte read from './mysql-bin.000008' at 2589. Error code: 1236; SQLSTATE: HY000.
at io.debezium.connector.mysql.AbstractReader.wrap(AbstractReader.java:141) ~[debezium-connector-mysql-0.3.1.jar!/:0.3.1]
at io.debezium.connector.mysql.AbstractReader.failed(AbstractReader.java:111) ~[debezium-connector-mysql-0.3.1.jar!/:0.3.1]
at io.debezium.connector.mysql.BinlogReader$ReaderThreadLifecycleListener.onCommunicationFailure(BinlogReader.java:449) ~[debezium-connector-mysql-0.3.1.jar!/:0.3.1]
at com.github.shyiko.mysql.binlog.BinaryLogClient.listenForEventPackets(BinaryLogClient.java:734) ~[mysql-binlog-connector-java-0.4.0.jar!/:na]
at com.github.shyiko.mysql.binlog.BinaryLogClient.connect(BinaryLogClient.java:462) ~[mysql-binlog-connector-java-0.4.0.jar!/:na]
at com.github.shyiko.mysql.binlog.BinaryLogClient$5.run(BinaryLogClient.java:623) ~[mysql-binlog-connector-java-0.4.0.jar!/:na]
... 1 common frames omitted
Caused by: com.github.shyiko.mysql.binlog.network.ServerException: A slave with the same server_uuid/server_id as this slave has connected to the master; the first event 'mysql-bin.000005' at 76930033, the last event read from './mysql-bin.000008' at 2589, the last byte read from './mysql-bin.000008' at 2589.
at com.github.shyiko.mysql.binlog.BinaryLogClient.listenForEventPackets(BinaryLogClient.java:698) ~[mysql-binlog-connector-java-0.4.0.jar!/:na]
... 3 common frames omitted

Best regards

[Feature request] Make Kafka settings customizable

In my project utilizing Eventuate Tram [1], I ran into problems with the default Kafka settings under a high-load performance test.

Exception in thread "Eventuate-subscriber-org.eventuate.saga.orderservice.saga.OrderSaga-consumer" java.lang.RuntimeException: org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured session.timeout.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records.
	at io.eventuate.local.java.kafka.consumer.EventuateKafkaConsumer.lambda$start$0(EventuateKafkaConsumer.java:108)
	at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured session.timeout.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records.
	at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$OffsetCommitResponseHandler.handle(ConsumerCoordinator.java:600)
	at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$OffsetCommitResponseHandler.handle(ConsumerCoordinator.java:541)
	at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:679)
	at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:658)
	at org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:167)
	at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
	at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:426)
	at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:278)
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:360)
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:224)
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:192)
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163)
	at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:426)
	at org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1059)
	at io.eventuate.local.java.kafka.consumer.EventuateKafkaConsumer.maybeCommitOffsets(EventuateKafkaConsumer.java:56)
	at io.eventuate.local.java.kafka.consumer.EventuateKafkaConsumer.lambda$start$0(EventuateKafkaConsumer.java:97)
	... 1 more
2018-04-25 06:09:29.506 ERROR 8 --- [erSaga-consumer] i.e.l.j.k.c.EventuateKafkaConsumer       : Got exception: 

org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured session.timeout.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records.
	at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$OffsetCommitResponseHandler.handle(ConsumerCoordinator.java:600) ~[kafka-clients-0.10.0.1.jar!/:na]
	at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$OffsetCommitResponseHandler.handle(ConsumerCoordinator.java:541) ~[kafka-clients-0.10.0.1.jar!/:na]
	at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:679) ~[kafka-clients-0.10.0.1.jar!/:na]
	at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:658) ~[kafka-clients-0.10.0.1.jar!/:na]
	at org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:167) ~[kafka-clients-0.10.0.1.jar!/:na]
	at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133) ~[kafka-clients-0.10.0.1.jar!/:na]
	at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107) ~[kafka-clients-0.10.0.1.jar!/:na]
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:426) ~[kafka-clients-0.10.0.1.jar!/:na]
	at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:278) ~[kafka-clients-0.10.0.1.jar!/:na]
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:360) ~[kafka-clients-0.10.0.1.jar!/:na]
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:224) ~[kafka-clients-0.10.0.1.jar!/:na]
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:192) ~[kafka-clients-0.10.0.1.jar!/:na]
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163) ~[kafka-clients-0.10.0.1.jar!/:na]
	at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:426) ~[kafka-clients-0.10.0.1.jar!/:na]
	at org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1059) ~[kafka-clients-0.10.0.1.jar!/:na]
	at io.eventuate.local.java.kafka.consumer.EventuateKafkaConsumer.maybeCommitOffsets(EventuateKafkaConsumer.java:56) ~[eventuate-local-java-kafka-0.16.0.RELEASE.jar!/:na]
	at io.eventuate.local.java.kafka.consumer.EventuateKafkaConsumer.lambda$start$0(EventuateKafkaConsumer.java:97) ~[eventuate-local-java-kafka-0.16.0.RELEASE.jar!/:na]
	at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_111-internal]

A quick search - https://github.com/eventuate-local/eventuate-local/search?utf8=%E2%9C%93&q=session.timeout.ms&type=

If it makes sense (I don't have much experience with Kafka), it could be beneficial to allow users to customize these settings.

[1] https://github.com/xstefank/eventuate-tram-service
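The property names involved are standard Kafka consumer configs; what is being requested is a way to feed overrides like these into EventuateKafkaConsumer, which does not exist yet. A sketch of the kind of overrides that would address the CommitFailedException above (the values are illustrative, not recommendations):

```java
import java.util.Properties;

public class ConsumerOverrides {
    // Sketch only: merges user overrides for the two settings the Kafka error
    // message itself suggests tuning. The mechanism for passing these into
    // EventuateKafkaConsumer is the requested feature, not existing API.
    static Properties withOverrides(Properties base) {
        Properties props = new Properties();
        props.putAll(base);
        props.setProperty("session.timeout.ms", "30000"); // tolerate slower poll loops
        props.setProperty("max.poll.records", "100");     // smaller batches per poll
        return props;
    }
}
```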

cdc-service stops sending events to Kafka after temporary loss of connection to MySQL

I ran into an issue when running cdc-service on a Kubernetes cluster: I restarted MySQL, so the connection dropped and then became available again after a while.

After restarting the CDC service, the events were published to Kafka and received by event handlers.

Steps to reproduce:

  • Stop mysql
  • Wait about 10 sec
  • Start mysql

Expected behaviour:
New events get published

Actual behaviour:
Events don't get published

Workaround:
Restart cdc-service after mysql becomes available

If this is not easily solvable, a simple HTTP endpoint for querying connection health would help -- Kubernetes would then restart the CDC service whenever the connection is dropped.

Log:
cdc-log-gh.txt
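A minimal sketch of such a health endpoint, using only the JDK's built-in HTTP server. The path, ports, and wiring are illustrative, not actual cdc-service code; a real implementation would check the binlog client's own state rather than just TCP reachability:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.Socket;

public class CdcHealthEndpoint {
    // Reports whether a TCP connect to the database succeeds. A Kubernetes
    // liveness probe pointed at /health would restart the CDC service when
    // this starts returning 503.
    static boolean canConnect(String host, int port) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), 500);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    static HttpServer start(int httpPort, String dbHost, int dbPort) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(httpPort), 0);
        server.createContext("/health", exchange -> {
            boolean up = canConnect(dbHost, dbPort);
            byte[] body = (up ? "UP" : "DOWN").getBytes();
            exchange.sendResponseHeaders(up ? 200 : 503, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }
}
```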

Eventuate Tram MySQL CDC Service: connection to mysql binlog failed

Hi,
I am trying to configure the Eventuate CDC service with MySQL using Docker containers and I am getting the following error. Below is the docker-compose file:

zookeeper:
  image: eventuateio/eventuateio-local-zookeeper:0.17.0.RELEASE
  ports:
    - 2181:2181
    - 2888:2888
    - 3888:3888

kafka:
  image: eventuateio/eventuateio-local-kafka:0.17.0.RELEASE
  ports:
    - 9092:9092
  links:
    - zookeeper
  environment:
    - KAFKA_HEAP_OPTS=-Xmx320m -Xms320m
    - ZOOKEEPER_SERVERS=zookeeper:2181

mysql:
  image: eventuateio/eventuate-tram-mysql:0.6.0.RELEASE
  ports:
    - 3307:3307
  environment:
    - MYSQL_ROOT_PASSWORD=root
    - MYSQL_USER=root
    - MYSQL_PASSWORD=root

adminer:
  image: adminer:4.3.1-standalone
  ports:
    - 9080:8080

tramcdcservice:
  image: eventuateio/eventuate-tram-cdc-mysql-service:0.6.0.RELEASE
  ports:
    - "8099:8080"
  links:
    - kafka
    - zookeeper
  environment:
    SPRING_DATASOURCE_URL: jdbc:mysql://mysql-standalone:3306/salesorder?autoReconnect=true&useSSL=false
    SPRING_DATASOURCE_USERNAME: root
    SPRING_DATASOURCE_PASSWORD: root
    SPRING_DATASOURCE_DRIVER_CLASS_NAME: com.mysql.jdbc.Driver
    EVENTUATELOCAL_KAFKA_BOOTSTRAP_SERVERS: kafka:9092
    EVENTUATELOCAL_ZOOKEEPER_CONNECTION_STRING: zookeeper:2181
    EVENTUATELOCAL_CDC_DB_USER_NAME: root
    EVENTUATELOCAL_CDC_DB_PASSWORD: root
    EVENTUATELOCAL_CDC_BINLOG_CLIENT_ID: 67890
    EVENTUATELOCAL_CDC_SOURCE_TABLE_NAME: message

The error logs follow:

kafka_1_f059fedf82ea | [2019-07-01 13:05:34,458] INFO [GroupCoordinator 0]: Preparing to restabilize group 221926ad-e08a-47ad-b81a-c72f71c93dfd with old generation 1 (kafka.coordinator.GroupCoordinator)
kafka_1_f059fedf82ea | [2019-07-01 13:05:34,459] INFO [GroupCoordinator 0]: Group 221926ad-e08a-47ad-b81a-c72f71c93dfd generation 1 is dead and removed (kafka.coordinator.GroupCoordinator)
mysql_1_9bc278c31087 | Initializing database
mysql_1_9bc278c31087 | 2019-07-01T13:05:59.494926Z 0 [Warning] InnoDB: New log files created, LSN=45790
mysql_1_9bc278c31087 | 2019-07-01T13:05:59.588266Z 0 [Warning] InnoDB: Creating foreign key constraint system tables.
tramcdcservice_1_f0e1cfb6bfd7 | 13:05:59.683 [Curator-LeaderSelector-0] INFO i.e.l.m.binlog.MySqlBinaryLogClient - trying to connect to mysql binlog
tramcdcservice_1_f0e1cfb6bfd7 | 13:05:59.684 [Curator-LeaderSelector-0] ERROR i.e.l.m.binlog.MySqlBinaryLogClient - connection to mysql binlog failed
mysql_1_9bc278c31087 | 2019-07-01T13:05:59.696144Z 0 [Warning] No existing UUID has been found, so we assume that this is the first time that this server has been started. Generating a new UUID: f8a44dd8-9c00-11e9-a9f1-0242ac110008.
mysql_1_9bc278c31087 | 2019-07-01T13:05:59.701915Z 0 [Warning] Gtid table is not ready to be used. Table 'mysql.gtid_executed' cannot be opened.
mysql_1_9bc278c31087 | 2019-07-01T13:05:59.709014Z 1 [Warning] root@localhost is created with an empty password ! Please consider switching off the --initialize-insecure option.
mysql_1_9bc278c31087 | 2019-07-01T13:06:02.094359Z 1 [Warning] 'user' entry 'root@localhost' ignored in --skip-name-resolve mode.
mysql_1_9bc278c31087 | 2019-07-01T13:06:02.094418Z 1 [Warning] 'user' entry 'mysql.sys@localhost' ignored in --skip-name-resolve mode.
mysql_1_9bc278c31087 | 2019-07-01T13:06:02.094437Z 1 [Warning] 'db' entry 'sys mysql.sys@localhost' ignored in --skip-name-resolve mode.
mysql_1_9bc278c31087 | 2019-07-01T13:06:02.094448Z 1 [Warning] 'proxies_priv' entry '@ root@localhost' ignored in --skip-name-resolve mode.
mysql_1_9bc278c31087 | 2019-07-01T13:06:02.094478Z 1 [Warning] 'tables_priv' entry 'sys_config mysql.sys@localhost' ignored in --skip-name-resolve mode.
mysql_1_9bc278c31087 | Database initialized
mysql_1_9bc278c31087 | MySQL init process in progress...
tramcdcservice_1_f0e1cfb6bfd7 | 13:06:04.685 [Curator-LeaderSelector-0] INFO i.e.l.m.binlog.MySqlBinaryLogClient - trying to connect to mysql binlog
tramcdcservice_1_f0e1cfb6bfd7 | 13:06:04.774 [Curator-LeaderSelector-0] ERROR i.e.l.m.binlog.MySqlBinaryLogClient - connection to mysql binlog failed
mysql_1_9bc278c31087 | 2019-07-01T13:06:04.801847Z 0 [Note] mysqld (mysqld 5.7.13-log) starting as process 51 ...
mysql_1_9bc278c31087 | 2019-07-01T13:06:04.829264Z 0 [Note] InnoDB: PUNCH HOLE support available
mysql_1_9bc278c31087 | 2019-07-01T13:06:04.829317Z 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
mysql_1_9bc278c31087 | 2019-07-01T13:06:04.829323Z 0 [Note] InnoDB: Uses event mutexes
mysql_1_9bc278c31087 | 2019-07-01T13:06:04.829326Z 0 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
mysql_1_9bc278c31087 | 2019-07-01T13:06:04.829329Z 0 [Note] InnoDB: Compressed tables use zlib 1.2.8
mysql_1_9bc278c31087 | 2019-07-01T13:06:04.829331Z 0 [Note] InnoDB: Using Linux native AIO
mysql_1_9bc278c31087 | 2019-07-01T13:06:04.829631Z 0 [Note] InnoDB: Number of pools: 1
mysql_1_9bc278c31087 | 2019-07-01T13:06:04.829753Z 0 [Note] InnoDB: Using CPU crc32 instructions
mysql_1_9bc278c31087 | 2019-07-01T13:06:04.831046Z 0 [Note] InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M
mysql_1_9bc278c31087 | 2019-07-01T13:06:04.839858Z 0 [Note] InnoDB: Completed initialization of buffer pool
mysql_1_9bc278c31087 | 2019-07-01T13:06:04.841856Z 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
mysql_1_9bc278c31087 | 2019-07-01T13:06:04.863525Z 0 [Note] InnoDB: Highest supported file format is Barracuda.
mysql_1_9bc278c31087 | 2019-07-01T13:06:04.887022Z 0 [Note] InnoDB: Creating shared tablespace for temporary tables
mysql_1_9bc278c31087 | 2019-07-01T13:06:04.887082Z 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
mysql_1_9bc278c31087 | 2019-07-01T13:06:04.960993Z 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB.
mysql_1_9bc278c31087 | 2019-07-01T13:06:04.962326Z 0 [Note] InnoDB: 96 redo rollback segment(s) found. 96 redo rollback segment(s) are active.
mysql_1_9bc278c31087 | 2019-07-01T13:06:04.962379Z 0 [Note] InnoDB: 32 non-redo rollback segment(s) are active.
mysql_1_9bc278c31087 | 2019-07-01T13:06:04.962692Z 0 [Note] InnoDB: Waiting for purge to start
mysql_1_9bc278c31087 | 2019-07-01T13:06:05.012867Z 0 [Note] InnoDB: 5.7.13 started; log sequence number 2525487
mysql_1_9bc278c31087 | 2019-07-01T13:06:05.014053Z 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
mysql_1_9bc278c31087 | 2019-07-01T13:06:05.014120Z 0 [Note] Plugin 'FEDERATED' is disabled.
mysql_1_9bc278c31087 | 2019-07-01T13:06:05.042557Z 0 [Note] InnoDB: Buffer pool(s) load completed at 190701 13:06:05
mysql_1_9bc278c31087 | 2019-07-01T13:06:05.061624Z 0 [Warning] Failed to set up SSL because of the following SSL library error: SSL context is not usable without certificate and private key
mysql_1_9bc278c31087 | 2019-07-01T13:06:05.072740Z 0 [Warning] 'user' entry 'root@localhost' ignored in --skip-name-resolve mode.
mysql_1_9bc278c31087 | 2019-07-01T13:06:05.072854Z 0 [Warning] 'user' entry 'mysql.sys@localhost' ignored in --skip-name-resolve mode.
mysql_1_9bc278c31087 | 2019-07-01T13:06:05.073929Z 0 [Warning] 'db' entry 'sys mysql.sys@localhost' ignored in --skip-name-resolve mode.
mysql_1_9bc278c31087 | 2019-07-01T13:06:05.073971Z 0 [Warning] 'proxies_priv' entry '@ root@localhost' ignored in --skip-name-resolve mode.
mysql_1_9bc278c31087 | 2019-07-01T13:06:05.100502Z 0 [Warning] 'tables_priv' entry 'sys_config mysql.sys@localhost' ignored in --skip-name-resolve mode.
mysql_1_9bc278c31087 | 2019-07-01T13:06:05.152176Z 0 [Note] Event Scheduler: Loaded 0 events
mysql_1_9bc278c31087 | 2019-07-01T13:06:05.152533Z 0 [Note] mysqld: ready for connections.
mysql_1_9bc278c31087 | Version: '5.7.13-log' socket: '/var/run/mysqld/mysqld.sock' port: 0 MySQL Community Server (GPL)
mysql_1_9bc278c31087 | Warning: Unable to load '/usr/share/zoneinfo/Factory' as time zone. Skipping it.
mysql_1_9bc278c31087 | Warning: Unable to load '/usr/share/zoneinfo/iso3166.tab' as time zone. Skipping it.
mysql_1_9bc278c31087 | Warning: Unable to load '/usr/share/zoneinfo/leap-seconds.list' as time zone. Skipping it.
mysql_1_9bc278c31087 | Warning: Unable to load '/usr/share/zoneinfo/posix/Factory' as time zone. Skipping it.
mysql_1_9bc278c31087 | Warning: Unable to load '/usr/share/zoneinfo/right/Factory' as time zone. Skipping it.
mysql_1_9bc278c31087 | Warning: Unable to load '/usr/share/zoneinfo/zone.tab' as time zone. Skipping it.
mysql_1_9bc278c31087 | 2019-07-01T13:06:08.436256Z 4 [Warning] 'db' entry 'sys mysql.sys@localhost' ignored in --skip-name-resolve mode.
mysql_1_9bc278c31087 | 2019-07-01T13:06:08.436309Z 4 [Warning] 'proxies_priv' entry '@ root@localhost' ignored in --skip-name-resolve mode.
mysql_1_9bc278c31087 | 2019-07-01T13:06:08.436347Z 4 [Warning] 'tables_priv' entry 'sys_config mysql.sys@localhost' ignored in --skip-name-resolve mode.
mysql_1_9bc278c31087 | mysql: [Warning] Using a password on the command line interface can be insecure.
mysql_1_9bc278c31087 | ERROR 1396 (HY000) at line 1: Operation CREATE USER failed for 'root'@'%'
tramcdcservice_1_f0e1cfb6bfd7 | 13:06:09.774 [Curator-LeaderSelector-0] INFO i.e.l.m.binlog.MySqlBinaryLogClient - trying to connect to mysql binlog
tramcdcservice_1_f0e1cfb6bfd7 | 13:06:09.777 [Curator-LeaderSelector-0] ERROR i.e.l.m.binlog.MySqlBinaryLogClient - connection to mysql binlog failed
kafka_mysql_1_9bc278c31087 exited with code 1
tramcdcservice_1_f0e1cfb6bfd7 | 13:06:14.777 [Curator-LeaderSelector-0] INFO i.e.l.m.binlog.MySqlBinaryLogClient - trying to connect to mysql binlog
tramcdcservice_1_f0e1cfb6bfd7 | 13:06:14.817 [Curator-LeaderSelector-0] ERROR i.e.l.m.binlog.MySqlBinaryLogClient - connection to mysql binlog failed
tramcdcservice_1_f0e1cfb6bfd7 | 13:06:19.817 [Curator-LeaderSelector-0] INFO i.e.l.m.binlog.MySqlBinaryLogClient - trying to connect to mysql binlog
tramcdcservice_1_f0e1cfb6bfd7 | 13:06:19.819 [Curator-LeaderSelector-0] ERROR i.e.l.m.binlog.MySqlBinaryLogClient - connection to mysql binlog failed
tramcdcservice_1_f0e1cfb6bfd7 | 13:06:24.820 [Curator-LeaderSelector-0] INFO i.e.l.m.binlog.MySqlBinaryLogClient - trying to connect to mysql binlog
tramcdcservice_1_f0e1cfb6bfd7 | 13:06:24.851 [Curator-LeaderSelector-0] ERROR i.e.l.m.binlog.MySqlBinaryLogClient - connection to mysql binlog failed
tramcdcservice_1_f0e1cfb6bfd7 | 13:06:29.852 [Curator-LeaderSelector-0] INFO i.e.l.m.binlog.MySqlBinaryLogClient - trying to connect to mysql binlog
tramcdcservice_1_f0e1cfb6bfd7 | 13:06:29.854 [Curator-LeaderSelector-0] ERROR i.e.l.m.binlog.MySqlBinaryLogClient - connection to mysql binlog failed
tramcdcservice_1_f0e1cfb6bfd7 | 13:06:34.855 [Curator-LeaderSelector-0] INFO i.e.l.m.binlog.MySqlBinaryLogClient - trying to connect to mysql binlog
tramcdcservice_1_f0e1cfb6bfd7 | 13:06:34.906 [Curator-LeaderSelector-0] ERROR i.e.l.m.binlog.MySqlBinaryLogClient - connection to mysql binlog failed
tramcdcservice_1_f0e1cfb6bfd7 | 13:06:39.907 [Curator-LeaderSelector-0] INFO i.e.l.m.binlog.MySqlBinaryLogClient - trying to connect to mysql binlog
tramcdcservice_1_f0e1cfb6bfd7 | 13:06:39.908 [Curator-LeaderSelector-0] ERROR i.e.l.m.binlog.MySqlBinaryLogClient - connection to mysql binlog failed
