```shell
# build and run
$ docker-compose -f .dev/docker-compose-fast-start.yml -p kafka-dlq-retry-fast-start up --build -d
# show logs
$ docker-compose -f .dev/docker-compose-fast-start.yml -p kafka-dlq-retry-fast-start logs -f kafka-dlq-retry
```
## Run with Java

Requirements:
- Java 11

Create `application.properties` in the project directory:
```properties
# Spring properties to connect to Kafka
spring.kafka.producer.bootstrap-servers=kafka-host:port
spring.kafka.consumer.bootstrap-servers=kafka-host:port
# properties defining the topics
dev.shermende.kafka-dlq-retry.consumers[0].topic=application.topic
dev.shermende.kafka-dlq-retry.consumers[0].dlq-topic=application.topic.dlq
dev.shermende.kafka-dlq-retry.consumers[0].error-topic=application.topic.error
# retry delays in milliseconds
dev.shermende.kafka-dlq-retry.consumers[0].delays=200,300,400
# concurrency (number of consumer threads)
dev.shermende.kafka-dlq-retry.consumers[0].concurrency=5
```
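Since `consumers` is an indexed list, additional topics can presumably be wired up by adding further entries. The topic names below are illustrative, not part of the project:

```properties
# hypothetical second consumer entry, following the indexed-list pattern above
dev.shermende.kafka-dlq-retry.consumers[1].topic=another.topic
dev.shermende.kafka-dlq-retry.consumers[1].dlq-topic=another.topic.dlq
dev.shermende.kafka-dlq-retry.consumers[1].error-topic=another.topic.error
dev.shermende.kafka-dlq-retry.consumers[1].delays=200,300,400
dev.shermende.kafka-dlq-retry.consumers[1].concurrency=5
```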
If the consumer logs the following exception:

```
org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing max.poll.interval.ms or by reducing the maximum size of batches returned in poll() with max.poll.records.
```
Cure

Increase `max.poll.interval.ms` in `application.properties`:

```properties
# value in milliseconds; raise it above the 300000 ms (5 minute) default
spring.kafka.consumer.properties.max.poll.interval.ms=600000
```
OR reduce `max.poll.records` in `application.properties`:

```properties
# maximum records returned per poll; the default is 500, so pick a lower value
spring.kafka.consumer.properties.max.poll.records=250
```
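The trade-off between the two cures can be sketched with a back-of-envelope check: a full poll batch must be processed within `max.poll.interval.ms`, or the broker considers the consumer dead and rebalances. The per-record processing time below is an assumed figure, not measured from this project:

```java
// Rough budget check: worst-case batch processing time vs. the poll interval.
public class PollBudget {

    // true if a full batch of maxPollRecords, at perRecordMs each,
    // finishes before the max.poll.interval.ms deadline
    static boolean fitsPollInterval(long perRecordMs, int maxPollRecords, long maxPollIntervalMs) {
        return perRecordMs * maxPollRecords < maxPollIntervalMs;
    }

    public static void main(String[] args) {
        // assumed 700 ms per record: 500 records need 350000 ms > 300000 ms budget
        System.out.println(fitsPollInterval(700, 500, 300_000)); // false -> rebalance risk
        // halving max.poll.records brings the batch under budget
        System.out.println(fitsPollInterval(700, 250, 300_000)); // true
    }
}
```

Either raising the interval or shrinking the batch moves the left-hand side of this inequality under the deadline; which knob to turn depends on whether slow processing per record is expected or pathological.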