koperator's Introduction

Koperator

Koperator is an open-source operator that automates the provisioning, management, and autoscaling of Apache Kafka clusters on Kubernetes. Unlike other solutions that rely on StatefulSets, Koperator has been built with a unique architecture that provides greater flexibility and functionality for managing Apache Kafka. This architecture allows for fine-grained configuration and management of individual brokers.

Some of the main features of Koperator are:

  • the provisioning of secure and production-ready Kafka clusters
  • fine-grained broker-by-broker configuration support
  • advanced and highly configurable external access
  • graceful Kafka cluster scaling and rebalancing
  • detailed Prometheus metrics
  • encrypted communication using SSL
  • automatic reaction and self-healing based on alerts using Cruise Control
  • graceful rolling upgrades
  • advanced topic and user management via Kubernetes Custom Resources
  • Cruise Control task management via Kubernetes Custom Resources

Architecture

Kafka is a stateful application, and the Kafka Broker is a server that can create and form a cluster with other Brokers. Each Broker has its own unique configuration, the most important of which is the unique broker ID.

Most Kubernetes operators that manage Kafka rely on StatefulSets to create a Kafka Cluster. While StatefulSets provide unique Broker IDs generated during Pod startup, networking between brokers with headless services, and unique Persistent Volumes for Brokers, they have a few restrictions. For example, Broker configurations cannot be modified independently, and a specific Broker cannot be removed from the cluster - a StatefulSet always removes the most recently created Broker. Furthermore, multiple, different Persistent Volumes cannot be used for each Broker.

Koperator takes a different approach by using simple Pods, ConfigMaps, and PersistentVolumeClaims instead of StatefulSets. These resources allow us to build an Operator that is better suited to manage Apache Kafka. With Koperator, you can modify the configuration of unique Brokers, remove specific Brokers from clusters, and use multiple Persistent Volumes for each Broker.
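
As a concrete illustration (a sketch that assumes the kafka_cr label Koperator applies to the resources it creates and a cluster named kafka in the kafka namespace; label names may differ between versions), you can list the per-broker resources directly with kubectl:

kubectl get pods,configmaps,persistentvolumeclaims -n kafka -l kafka_cr=kafka

Each broker shows up as its own Pod with a matching ConfigMap and one or more PersistentVolumeClaims, rather than as an ordinal member of a StatefulSet.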

If you want to learn more about our design motivations and the scenarios that drove us to create Koperator, please continue reading on our documentation page.

Koperator architecture

Quick start

This quick start guide will walk you through the process of deploying Koperator on an existing Kubernetes cluster and provisioning a Kafka cluster using its custom resources.

Prerequisites

To complete this guide, you will need a Kubernetes cluster (with a suggested minimum of 6 vCPUs and 8 GB RAM). You can run the cluster locally using Kind or Minikube.
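
For example, a suitable local cluster can be created with either of the following commands (the cluster name is arbitrary; adjust the resource sizes to your machine, and note that Kind draws its resources from the Docker host):

kind create cluster --name koperator-quickstart
minikube start --cpus=6 --memory=8192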

The quick start will help you set up a functioning Kafka cluster on Kubernetes. However, it does not include guidance on the installation of Prometheus and cert-manager, which are necessary for some of the more advanced functionality.
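
For reference, cert-manager can be installed later from its upstream static manifests if you need the SSL-related features (a sketch; the version pinned here is illustrative, and this step is not required for the quick start):

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.11.0/cert-manager.yaml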

Install ZooKeeper

The version of Kafka that is installed by the operator requires Apache ZooKeeper. You'll need to deploy a ZooKeeper cluster if you don’t already have one.

  1. Install ZooKeeper using Pravega’s Zookeeper Operator.
helm install zookeeper-operator --repo https://charts.pravega.io zookeeper-operator --namespace=zookeeper --create-namespace
  2. Create a ZooKeeper cluster.
kubectl create -f - <<EOF
apiVersion: zookeeper.pravega.io/v1beta1
kind: ZookeeperCluster
metadata:
    name: zookeeper-server
    namespace: zookeeper
spec:
    replicas: 1
    persistence:
        reclaimPolicy: Delete
EOF
  3. Verify that ZooKeeper has been deployed.
> kubectl get pods -n zookeeper

NAME                                         READY   STATUS    RESTARTS   AGE
zookeeper-server-0                           1/1     Running   0          27m
zookeeper-operator-54444dbd9d-2tccj          1/1     Running   0          28m
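
You can also check the ZookeeperCluster resource itself (assuming the zookeeperclusters resource name registered by Pravega's operator):

kubectl get zookeeperclusters -n zookeeper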

Install Koperator

You can deploy Koperator using a Helm chart. Complete the following steps.

  1. Install the Koperator CustomResourceDefinition resources (adjust the version number to the Koperator release you want to install). This is performed in a separate step to allow you to uninstall and reinstall Koperator without deleting your already installed custom resources.
kubectl create --validate=false -f https://github.com/banzaicloud/koperator/releases/download/v0.25.1/kafka-operator.crds.yaml
  2. Install Koperator into the kafka namespace:
helm install kafka-operator --repo https://kubernetes-charts.banzaicloud.com kafka-operator --namespace=kafka --create-namespace
  3. Create the Kafka cluster using the KafkaCluster custom resource. The quick start uses a minimal custom resource, but there are other examples in the same directory.
kubectl create -n kafka -f https://raw.githubusercontent.com/banzaicloud/koperator/master/config/samples/simplekafkacluster.yaml
  4. Verify that the Kafka cluster has been created.
> kubectl get pods -n kafka

kafka-0-nvx8c                             1/1     Running   0          16m
kafka-1-swps9                             1/1     Running   0          15m
kafka-2-lppzr                             1/1     Running   0          15m
kafka-cruisecontrol-fb659b84b-7cwpn       1/1     Running   0          15m
kafka-operator-operator-8bb75c7fb-7w4lh   2/2     Running   0          17m
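
You can also inspect the state reported in the KafkaCluster custom resource (assuming the kafkaclusters resource registered by the Koperator CRDs and the cluster name kafka used by the sample):

kubectl get kafkaclusters -n kafka
kubectl describe kafkacluster kafka -n kafka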

Test Kafka cluster

To test the Kafka cluster, let's create a topic and send some messages.

  1. You can use the KafkaTopic CR to create a topic called my-topic:
kubectl create -n kafka -f - <<EOF
apiVersion: kafka.banzaicloud.io/v1alpha1
kind: KafkaTopic
metadata:
    name: my-topic
spec:
    clusterRef:
        name: kafka
    name: my-topic
    partitions: 1
    replicationFactor: 1
    config:
        "retention.ms": "604800000"
        "cleanup.policy": "delete"
EOF
  2. If SSL encryption is disabled for Kafka, you can use the following commands to send and receive messages within a Kubernetes cluster.

To send messages, run this command and type your test messages:

kubectl -n kafka run kafka-producer -it --image=ghcr.io/banzaicloud/kafka:2.13-3.1.0 --rm=true --restart=Never -- /opt/kafka/bin/kafka-console-producer.sh --bootstrap-server kafka-headless:29092 --topic my-topic

To receive messages, run the following command:

kubectl -n kafka run kafka-consumer -it --image=ghcr.io/banzaicloud/kafka:2.13-3.1.0 --rm=true --restart=Never -- /opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server kafka-headless:29092 --topic my-topic --from-beginning
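
You can also inspect the topic through its custom resource (assuming the kafkatopics resource registered by the Koperator CRDs):

kubectl -n kafka get kafkatopic my-topic
kubectl -n kafka describe kafkatopic my-topic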

Documentation

For detailed documentation on the Koperator project, see the Koperator documentation website.

Issues and contributions

We use GitHub to track issues and accept contributions. If you would like to raise an issue or open a pull request, please refer to our contribution guide.

If you use Koperator in a production environment, we encourage you to add yourself to the list of production adopters.

Community

Find us on Slack to chat about Kafka on Kubernetes!

License

Copyright (c) 2023 Cisco Systems, Inc. and/or its affiliates

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Trademarks

Apache Kafka, Kafka, and the Kafka logo are either registered trademarks or trademarks of The Apache Software Foundation in the United States and other countries.

koperator's Issues

Code duplication around CRD status update/delete functions

In an upcoming feature in #74 there are a few functions performing updates/deletes on the CR.
Each function implements the same strategy to handle an update conflict with a simple in-place refresh/retry method, which results in code duplication that should be avoided if possible.
The specific functions are: updateRackAwarenessStatus, updateGracefulScaleStatus, DeleteStatus.
In the simplest case, returning an error that triggers a new reconciliation round could be enough, but that alone may not cover every case.

CruiseControl breaks if cluster name is not kafka

CruiseControl is hardcoded to connect to kafka-headless.

Steps to reproduce the issue:
Create a cluster as described in the README, but change the KafkaCluster metadata.name to dev.

Expected behavior
CruiseControl connects to dev-headless

78     [           main] WARN  ache.kafka.clients.ClientUtils  - Removing server kafka-headless:29092 from bootstrap.servers as DNS resolution failed for kafka-headless
Exception in thread "main" org.apache.kafka.common.config.ConfigException: No resolvable bootstrap urls given in bootstrap.servers
	at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:66)
	at com.linkedin.kafka.cruisecontrol.common.MetadataClient.<init>(MetadataClient.java:46)
	at com.linkedin.kafka.cruisecontrol.monitor.LoadMonitor.<init>(LoadMonitor.java:116)
	at com.linkedin.kafka.cruisecontrol.KafkaCruiseControl.<init>(KafkaCruiseControl.java:108)
	at com.linkedin.kafka.cruisecontrol.async.AsyncKafkaCruiseControl.<init>(AsyncKafkaCruiseControl.java:70)
	at com.linkedin.kafka.cruisecontrol.KafkaCruiseControlMain.main(KafkaCruiseControlMain.java:71)

cert-manager changed their API group

Describe the bug
Starting from cert-manager v0.11.0, the project changed the API group and bumped the API version. This causes validation errors when installing kafka-operator's Helm chart. After searching this repo for certmanager.k8s.io, it seems the Helm chart, the kustomize config files, and some code will need to be updated to accommodate cert-manager's changes.

Here's the exact Helm error:

$ helm install --name kafka-operator banzaicloud-stable/kafka-operator
Error: validation failed: [unable to recognize "": no matches for kind "Certificate" in version "certmanager.k8s.io/v1alpha1", unable to recognize "": no matches for kind "Issuer" in version "certmanager.k8s.io/v1alpha1"]

Steps to reproduce the issue:
Install cert-manager v0.11.0 via the Helm chart (in theory, you only need the cert-manager crds to reproduce this issue)

$ kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.11/deploy/manifests/00-crds.yaml
$ helm repo add jetstack https://charts.jetstack.io
$ helm install --name cert-manager jetstack/cert-manager

Install kafka-operator via the Helm chart

$ helm repo add banzaicloud-stable https://kubernetes-charts.banzaicloud.com
$ helm install --name kafka-operator banzaicloud-stable/kafka-operator
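
For reference, the change boils down to the API group and version used in the manifests. A minimal sketch of the difference for an Issuer (cert-manager v0.11.0 introduced cert-manager.io/v1alpha2; later releases moved on to cert-manager.io/v1):

# before cert-manager v0.11.0
apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer

# cert-manager v0.11.0 and later
apiVersion: cert-manager.io/v1alpha2
kind: Issuer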

Unable to delete the cluster when using only plaintext listeners

Describe the bug
Sometimes, with operator version 0.7.0, the cluster resource cannot be deleted because the user finalizer is placed on the resource even when it is not required.

When the listener type is plaintext, there is no need to place the user finalizer, since only SSL-based authorization is supported at the moment.

Steps to reproduce the issue:

Install operator version 0.7.0 via the Helm chart.
Install a cluster using the simplekafkacluster.yaml CR.

Expected behavior
The operator should delete the cluster.

a kafka pod is constantly created and destroyed

Describe the bug

Install steps

Step 1: start a k8s cluster with minikube:

minikube start --memory 4196 --cpus 2

Step 2: install ZooKeeper:

helm repo add banzaicloud-stable https://kubernetes-charts.banzaicloud.com/
helm install --name zookeeper-operator --namespace=zookeeper banzaicloud-stable/zookeeper-operator
kubectl create --namespace zookeeper -f - <<EOF
apiVersion: zookeeper.pravega.io/v1beta1
kind: ZookeeperCluster
metadata:
  name: example-zookeepercluster
  namespace: zookeeper
spec:
  replicas: 3
EOF

Step 3: minikube LoadBalancer:

kubectl run minikube-lb-patch --replicas=1 --image=elsonrodriguez/minikube-lb-patch:0.1 --namespace=kube-system

Step 4: install Kafka:

helm repo add banzaicloud-stable https://kubernetes-charts.banzaicloud.com/
helm install --name=kafka-operator --namespace=kafka banzaicloud-stable/kafka-operator -f config/samples/example-prometheus-alerts.yaml
# Add your zookeeper svc name to the configuration
kubectl create -n kafka -f config/samples/example-secret.yaml
kubectl create -n kafka -f config/samples/banzaicloud_v1alpha1_kafkacluster.yaml

The bug

A Kafka pod's status is initially Init:0/3 and becomes Running after a while; then another Kafka pod goes to Init:0/3 about 35s later, and so on, repeatedly.

The first pod kafka8w4h9 is Init:0/3:

$ kubectl get po --all-namespaces
NAMESPACE     NAME                                               READY   STATUS     RESTARTS   AGE
kafka         envoy-849fdc9687-tnr7z                             1/1     Running    0          40m
kafka         kafka-cruisecontrol-7564f794d8-knrx2               1/1     Running    0          38m
kafka         kafka-operator-operator-0                          2/2     Running    0          42m
kafka         kafka-operator-prometheus-server-6bddb4cbb-br7x4   2/2     Running    0          42m
kafka         kafka8w4h9                                         0/1     Init:0/3   0          4s
kafka         kafkafs6wq                                         1/1     Running    0          39m
kafka         kafkatlm4k                                         1/1     Running    0          39m
kafka         kafkazpvqz                                         1/1     Running    0          39m
kube-system   coredns-5c98db65d4-6z9j6                           1/1     Running    0          130m
kube-system   coredns-5c98db65d4-8lwz9                           1/1     Running    0          130m
kube-system   etcd-minikube                                      1/1     Running    0          129m
kube-system   kube-addon-manager-minikube                        1/1     Running    0          129m
kube-system   kube-apiserver-minikube                            1/1     Running    0          129m
kube-system   kube-controller-manager-minikube                   1/1     Running    0          129m
kube-system   kube-proxy-jkqxz                                   1/1     Running    0          130m
kube-system   kube-scheduler-minikube                            1/1     Running    0          129m
kube-system   minikube-lb-patch-6f6db8bccc-jr6nz                 1/1     Running    0          113m
kube-system   storage-provisioner                                1/1     Running    0          130m
kube-system   tiller-deploy-75f6c87b87-44w5s                     1/1     Running    0          127m
zookeeper     example-zookeepercluster-0                         1/1     Running    0          125m
zookeeper     example-zookeepercluster-1                         1/1     Running    0          125m
zookeeper     example-zookeepercluster-2                         1/1     Running    0          124m
zookeeper     zookeeper-operator-65d86d6674-wjjgj                1/1     Running    0          126m

The first pod kafka8w4h9 is Running after a while:

$ kubectl get po --all-namespaces
NAMESPACE     NAME                                               READY   STATUS    RESTARTS   AGE
kafka         envoy-849fdc9687-tnr7z                             1/1     Running   0          40m
kafka         kafka-cruisecontrol-7564f794d8-knrx2               1/1     Running   0          39m
kafka         kafka-operator-operator-0                          2/2     Running   0          42m
kafka         kafka-operator-prometheus-server-6bddb4cbb-br7x4   2/2     Running   0          42m
kafka         kafka8w4h9                                         1/1     Running   0          24s
kafka         kafkafs6wq                                         1/1     Running   0          40m
kafka         kafkatlm4k                                         1/1     Running   0          40m
kafka         kafkazpvqz                                         1/1     Running   0          40m
kube-system   coredns-5c98db65d4-6z9j6                           1/1     Running   0          131m
kube-system   coredns-5c98db65d4-8lwz9                           1/1     Running   0          131m
kube-system   etcd-minikube                                      1/1     Running   0          130m
kube-system   kube-addon-manager-minikube                        1/1     Running   0          129m
kube-system   kube-apiserver-minikube                            1/1     Running   0          130m
kube-system   kube-controller-manager-minikube                   1/1     Running   0          129m
kube-system   kube-proxy-jkqxz                                   1/1     Running   0          131m
kube-system   kube-scheduler-minikube                            1/1     Running   0          129m
kube-system   minikube-lb-patch-6f6db8bccc-jr6nz                 1/1     Running   0          113m
kube-system   storage-provisioner                                1/1     Running   0          131m
kube-system   tiller-deploy-75f6c87b87-44w5s                     1/1     Running   0          127m
zookeeper     example-zookeepercluster-0                         1/1     Running   0          126m
zookeeper     example-zookeepercluster-1                         1/1     Running   0          125m
zookeeper     example-zookeepercluster-2                         1/1     Running   0          125m
zookeeper     zookeeper-operator-65d86d6674-wjjgj                1/1     Running   0          126m

The first pod kafka8w4h9 disappears and the pod kafkadbct9 is Init:0/3:

$ kubectl get po --all-namespaces
NAMESPACE     NAME                                               READY   STATUS     RESTARTS   AGE
kafka         envoy-849fdc9687-tnr7z                             1/1     Running    0          40m
kafka         kafka-cruisecontrol-7564f794d8-knrx2               1/1     Running    0          39m
kafka         kafka-operator-operator-0                          2/2     Running    0          42m
kafka         kafka-operator-prometheus-server-6bddb4cbb-br7x4   2/2     Running    0          42m
kafka         kafkadbct9                                         0/1     Init:0/3   0          2s
kafka         kafkafs6wq                                         1/1     Running    0          40m
kafka         kafkatlm4k                                         1/1     Running    0          40m
kafka         kafkazpvqz                                         1/1     Running    0          40m
kube-system   coredns-5c98db65d4-6z9j6                           1/1     Running    0          131m
kube-system   coredns-5c98db65d4-8lwz9                           1/1     Running    0          131m
kube-system   etcd-minikube                                      1/1     Running    0          130m
kube-system   kube-addon-manager-minikube                        1/1     Running    0          130m
kube-system   kube-apiserver-minikube                            1/1     Running    0          130m
kube-system   kube-controller-manager-minikube                   1/1     Running    0          130m
kube-system   kube-proxy-jkqxz                                   1/1     Running    0          131m
kube-system   kube-scheduler-minikube                            1/1     Running    0          130m
kube-system   minikube-lb-patch-6f6db8bccc-jr6nz                 1/1     Running    0          113m
kube-system   storage-provisioner                                1/1     Running    0          131m
kube-system   tiller-deploy-75f6c87b87-44w5s                     1/1     Running    0          127m
zookeeper     example-zookeepercluster-0                         1/1     Running    0          126m
zookeeper     example-zookeepercluster-1                         1/1     Running    0          126m
zookeeper     example-zookeepercluster-2                         1/1     Running    0          125m
zookeeper     zookeeper-operator-65d86d6674-wjjgj                1/1     Running    0          126m

This process repeats constantly.

The first pod kafka8w4h9 log:

$ kubectl -n kafka logs  kafka8w4h9 -f
[2019-08-28 10:09:21,360] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2019-08-28 10:09:27,751] INFO starting (kafka.server.KafkaServer)
[2019-08-28 10:09:27,753] INFO Connecting to zookeeper on example-zookeepercluster-client.zookeeper:2181 (kafka.server.KafkaServer)
[2019-08-28 10:09:27,961] INFO [ZooKeeperClient] Initializing a new session to example-zookeepercluster-client.zookeeper:2181. (kafka.zookeeper.ZooKeeperClient)
[2019-08-28 10:09:27,969] INFO Client environment:zookeeper.version=3.4.13-2d71af4dbe22557fda74f9a9b4309b15a7487f03, built on 06/29/2018 00:39 GMT (org.apache.zookeeper.ZooKeeper)
[2019-08-28 10:09:27,970] INFO Client environment:host.name=kafka-0.kafka-headless.kafka.svc.cluster.local (org.apache.zookeeper.ZooKeeper)
[2019-08-28 10:09:27,970] INFO Client environment:java.version=1.8.0_191 (org.apache.zookeeper.ZooKeeper)
[2019-08-28 10:09:27,970] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper)
[2019-08-28 10:09:27,970] INFO Client environment:java.home=/usr/lib/jvm/java-1.8-openjdk/jre (org.apache.zookeeper.ZooKeeper)
[2019-08-28 10:09:27,970] INFO Client environment:java.class.path=/opt/kafka/libs/extensions/cruise-control-metrics-reporter.jar:/opt/kafka/bin/../libs/activation-1.1.1.jar:/opt/kafka/bin/../libs/aopalliance-repackaged-2.5.0-b42.jar:/opt/kafka/bin/../libs/argparse4j-0.7.0.jar:/opt/kafka/bin/../libs/audience-annotations-0.5.0.jar:/opt/kafka/bin/../libs/commons-lang3-3.5.jar:/opt/kafka/bin/../libs/compileScala.mapping:/opt/kafka/bin/../libs/compileScala.mapping.asc:/opt/kafka/bin/../libs/connect-api-2.1.0.jar:/opt/kafka/bin/../libs/connect-basic-auth-extension-2.1.0.jar:/opt/kafka/bin/../libs/connect-file-2.1.0.jar:/opt/kafka/bin/../libs/connect-json-2.1.0.jar:/opt/kafka/bin/../libs/connect-runtime-2.1.0.jar:/opt/kafka/bin/../libs/connect-transforms-2.1.0.jar:/opt/kafka/bin/../libs/extensions:/opt/kafka/bin/../libs/guava-20.0.jar:/opt/kafka/bin/../libs/hk2-api-2.5.0-b42.jar:/opt/kafka/bin/../libs/hk2-locator-2.5.0-b42.jar:/opt/kafka/bin/../libs/hk2-utils-2.5.0-b42.jar:/opt/kafka/bin/../libs/jackson-annotations-2.9.7.jar:/opt/kafka/bin/../libs/jackson-core-2.9.7.jar:/opt/kafka/bin/../libs/jackson-databind-2.9.7.jar:/opt/kafka/bin/../libs/jackson-jaxrs-base-2.9.7.jar:/opt/kafka/bin/../libs/jackson-jaxrs-json-provider-2.9.7.jar:/opt/kafka/bin/../libs/jackson-module-jaxb-annotations-2.9.7.jar:/opt/kafka/bin/../libs/javassist-3.22.0-CR2.jar:/opt/kafka/bin/../libs/javax.annotation-api-1.2.jar:/opt/kafka/bin/../libs/javax.inject-1.jar:/opt/kafka/bin/../libs/javax.inject-2.5.0-b42.jar:/opt/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/opt/kafka/bin/../libs/javax.ws.rs-api-2.1.1.jar:/opt/kafka/bin/../libs/javax.ws.rs-api-2.1.jar:/opt/kafka/bin/../libs/jaxb-api-2.3.0.jar:/opt/kafka/bin/../libs/jersey-client-2.27.jar:/opt/kafka/bin/../libs/jersey-common-2.27.jar:/opt/kafka/bin/../libs/jersey-container-servlet-2.27.jar:/opt/kafka/bin/../libs/jersey-container-servlet-core-2.27.jar:/opt/kafka/bin/../libs/jersey-hk2-2.27.jar:/opt/kafka/bin/../libs/jersey-media-jaxb-2.27.jar:/opt/kafka/bin/../libs/jersey-server-2.27.jar:/opt/kafka/bin/../libs/jetty-client-9.4.12.v20180830.jar:/opt/kafka/bin/../libs/jetty-continuation-9.4.12.v20180830.jar:/opt/kafka/bin/../libs/jetty-http-9.4.12.v20180830.jar:/opt/kafka/bin/../libs/jetty-io-9.4.12.v20180830.jar:/opt/kafka/bin/../libs/jetty-security-9.4.12.v20180830.jar:/opt/kafka/bin/../libs/jetty-server-9.4.12.v20180830.jar:/opt/kafka/bin/../libs/jetty-servlet-9.4.12.v20180830.jar:/opt/kafka/bin/../libs/jetty-servlets-9.4.12.v20180830.jar:/opt/kafka/bin/../libs/jetty-util-9.4.12.v20180830.jar:/opt/kafka/bin/../libs/jopt-simple-5.0.4.jar:/opt/kafka/bin/../libs/kafka-clients-2.1.0.jar:/opt/kafka/bin/../libs/kafka-log4j-appender-2.1.0.jar:/opt/kafka/bin/../libs/kafka-streams-2.1.0.jar:/opt/kafka/bin/../libs/kafka-streams-examples-2.1.0.jar:/opt/kafka/bin/../libs/kafka-streams-scala_2.12-2.1.0.jar:/opt/kafka/bin/../libs/kafka-streams-test-utils-2.1.0.jar:/opt/kafka/bin/../libs/kafka-tools-2.1.0.jar:/opt/kafka/bin/../libs/kafka_2.12-2.1.0-sources.jar:/opt/kafka/bin/../libs/kafka_2.12-2.1.0.jar:/opt/kafka/bin/../libs/log4j-1.2.17.jar:/opt/kafka/bin/../libs/lz4-java-1.5.0.jar:/opt/kafka/bin/../libs/maven-artifact-3.5.4.jar:/opt/kafka/bin/../libs/metrics-core-2.2.0.jar:/opt/kafka/bin/../libs/osgi-resource-locator-1.0.1.jar:/opt/kafka/bin/../libs/plexus-utils-3.1.0.jar:/opt/kafka/bin/../libs/reflections-0.9.11.jar:/opt/kafka/bin/../libs/rocksdbjni-5.14.2.jar:/opt/kafka/bin/../libs/scala-library-2.12.7.jar:/opt/kafka/bin/../libs/scala-logging_2.12-3.9.0.jar:/opt/kafka/bin/../libs/sca
la-reflect-2.12.7.jar:/opt/kafka/bin/../libs/slf4j-api-1.7.25.jar:/opt/kafka/bin/../libs/slf4j-log4j12-1.7.25.jar:/opt/kafka/bin/../libs/snappy-java-1.1.7.2.jar:/opt/kafka/bin/../libs/validation-api-1.1.0.Final.jar:/opt/kafka/bin/../libs/zkclient-0.10.jar:/opt/kafka/bin/../libs/zookeeper-3.4.13.jar:/opt/kafka/bin/../libs/zstd-jni-1.3.5-4.jar:/opt/jmx-exporter/jmx_prometheus.jar (org.apache.zookeeper.ZooKeeper)
[2019-08-28 10:09:27,971] INFO Client environment:java.library.path=/usr/lib/jvm/java-1.8-openjdk/jre/lib/amd64/server:/usr/lib/jvm/java-1.8-openjdk/jre/lib/amd64:/usr/lib/jvm/java-1.8-openjdk/jre/../lib/amd64:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
[2019-08-28 10:09:27,971] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
[2019-08-28 10:09:27,971] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
[2019-08-28 10:09:27,971] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
[2019-08-28 10:09:27,972] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
[2019-08-28 10:09:27,972] INFO Client environment:os.version=4.15.0 (org.apache.zookeeper.ZooKeeper)
[2019-08-28 10:09:27,972] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper)
[2019-08-28 10:09:27,973] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper)
[2019-08-28 10:09:27,973] INFO Client environment:user.dir=/ (org.apache.zookeeper.ZooKeeper)
[2019-08-28 10:09:28,050] INFO Initiating client connection, connectString=example-zookeepercluster-client.zookeeper:2181 sessionTimeout=6000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@37271612 (org.apache.zookeeper.ZooKeeper)
[2019-08-28 10:09:28,066] INFO [ZooKeeperClient] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2019-08-28 10:09:28,160] INFO Opening socket connection to server example-zookeepercluster-client.zookeeper/10.108.171.112:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2019-08-28 10:09:28,251] INFO Socket connection established to example-zookeepercluster-client.zookeeper/10.108.171.112:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2019-08-28 10:09:28,261] INFO Session establishment complete on server example-zookeepercluster-client.zookeeper/10.108.171.112:2181, sessionid = 0x300000744c401d4, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2019-08-28 10:09:28,267] INFO [ZooKeeperClient] Connected. (kafka.zookeeper.ZooKeeperClient)
[2019-08-28 10:09:30,358] INFO Cluster ID = s5kRXw_MQFGtds5EU2xWbg (kafka.server.KafkaServer)
[2019-08-28 10:09:30,451] WARN No meta.properties file under dir /kafka-logs/kafka/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2019-08-28 10:09:31,158] INFO KafkaConfig values:
        advertised.host.name = null
        advertised.listeners = EXTERNAL://10.108.140.234:19090,SSL://kafka-0.kafka-headless.kafka.svc.cluster.local:29092
        advertised.port = null
        alter.config.policy.class.name = null
        alter.log.dirs.replication.quota.window.num = 11
        alter.log.dirs.replication.quota.window.size.seconds = 1
        authorizer.class.name =
        auto.create.topics.enable = false
        auto.leader.rebalance.enable = true
        background.threads = 10
        broker.id = 0
        broker.id.generation.enable = true
        broker.rack =
        client.quota.callback.class = null
        compression.type = producer
        connection.failed.authentication.delay.ms = 100
        connections.max.idle.ms = 600000
        controlled.shutdown.enable = true
        controlled.shutdown.max.retries = 3
        controlled.shutdown.retry.backoff.ms = 5000
        controller.socket.timeout.ms = 30000
        create.topic.policy.class.name = null
        default.replication.factor = 1
        delegation.token.expiry.check.interval.ms = 3600000
        delegation.token.expiry.time.ms = 86400000
        delegation.token.master.key = null
        delegation.token.max.lifetime.ms = 604800000
        delete.records.purgatory.purge.interval.requests = 1
        delete.topic.enable = true
        fetch.purgatory.purge.interval.requests = 1000
        group.initial.rebalance.delay.ms = 3000
        group.max.session.timeout.ms = 300000
        group.min.session.timeout.ms = 6000
        host.name =
        inter.broker.listener.name = null
        inter.broker.protocol.version = 2.1-IV2
        kafka.metrics.polling.interval.secs = 10
        kafka.metrics.reporters = []
        leader.imbalance.check.interval.seconds = 300
        leader.imbalance.per.broker.percentage = 10
        listener.security.protocol.map = SSL:SSL,EXTERNAL:SSL
        listeners = SSL://:29092,EXTERNAL://:9094
        log.cleaner.backoff.ms = 15000
        log.cleaner.dedupe.buffer.size = 134217728
        log.cleaner.delete.retention.ms = 86400000
        log.cleaner.enable = true
        log.cleaner.io.buffer.load.factor = 0.9
        log.cleaner.io.buffer.size = 524288
        log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
        log.cleaner.min.cleanable.ratio = 0.5
        log.cleaner.min.compaction.lag.ms = 0
        log.cleaner.threads = 1
        log.cleanup.policy = [delete]
        log.dir = /tmp/kafka-logs
        log.dirs = /kafka-logs/kafka
        log.flush.interval.messages = 9223372036854775807
        log.flush.interval.ms = null
        log.flush.offset.checkpoint.interval.ms = 60000
        log.flush.scheduler.interval.ms = 9223372036854775807
        log.flush.start.offset.checkpoint.interval.ms = 60000
        log.index.interval.bytes = 4096
        log.index.size.max.bytes = 10485760
        log.message.downconversion.enable = true
        log.message.format.version = 2.1-IV2
        log.message.timestamp.difference.max.ms = 9223372036854775807
        log.message.timestamp.type = CreateTime
        log.preallocate = false
        log.retention.bytes = -1
        log.retention.check.interval.ms = 300000
        log.retention.hours = 168
        log.retention.minutes = null
        log.retention.ms = null
        log.roll.hours = 168
        log.roll.jitter.hours = 0
        log.roll.jitter.ms = null
        log.roll.ms = null
        log.segment.bytes = 1073741824
        log.segment.delete.delay.ms = 60000
        max.connections.per.ip = 2147483647
        max.connections.per.ip.overrides =
        max.incremental.fetch.session.cache.slots = 1000
        message.max.bytes = 1000012
        metric.reporters = [com.linkedin.kafka.cruisecontrol.metricsreporter.CruiseControlMetricsReporter]
        metrics.num.samples = 2
        metrics.recording.level = INFO
        metrics.sample.window.ms = 30000
        min.insync.replicas = 1
        num.io.threads = 8
        num.network.threads = 3
        num.partitions = 1
        num.recovery.threads.per.data.dir = 1
        num.replica.alter.log.dirs.threads = null
        num.replica.fetchers = 1
        offset.metadata.max.bytes = 4096
        offsets.commit.required.acks = -1
        offsets.commit.timeout.ms = 5000
        offsets.load.buffer.size = 5242880
        offsets.retention.check.interval.ms = 600000
        offsets.retention.minutes = 10080
        offsets.topic.compression.codec = 0
        offsets.topic.num.partitions = 50
        offsets.topic.replication.factor = 3
        offsets.topic.segment.bytes = 104857600
        password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
        password.encoder.iterations = 4096
        password.encoder.key.length = 128
        password.encoder.keyfactory.algorithm = null
        password.encoder.old.secret = null
        password.encoder.secret = null
        port = 9092
        principal.builder.class = null
        producer.purgatory.purge.interval.requests = 1000
        queued.max.request.bytes = -1
        queued.max.requests = 500
        quota.consumer.default = 9223372036854775807
        quota.producer.default = 9223372036854775807
        quota.window.num = 11
        quota.window.size.seconds = 1
        replica.fetch.backoff.ms = 1000
        replica.fetch.max.bytes = 1048576
        replica.fetch.min.bytes = 1
        replica.fetch.response.max.bytes = 10485760
        replica.fetch.wait.max.ms = 500
        replica.high.watermark.checkpoint.interval.ms = 5000
        replica.lag.time.max.ms = 10000
        replica.socket.receive.buffer.bytes = 65536
        replica.socket.timeout.ms = 30000
        replication.quota.window.num = 11
        replication.quota.window.size.seconds = 1
        request.timeout.ms = 30000
        reserved.broker.max.id = 1000
        sasl.client.callback.handler.class = null
        sasl.enabled.mechanisms = [GSSAPI]
        sasl.jaas.config = null
        sasl.kerberos.kinit.cmd = /usr/bin/kinit
        sasl.kerberos.min.time.before.relogin = 60000
        sasl.kerberos.principal.to.local.rules = [DEFAULT]
        sasl.kerberos.service.name = null
        sasl.kerberos.ticket.renew.jitter = 0.05
        sasl.kerberos.ticket.renew.window.factor = 0.8
        sasl.login.callback.handler.class = null
        sasl.login.class = null
        sasl.login.refresh.buffer.seconds = 300
        sasl.login.refresh.min.period.seconds = 60
        sasl.login.refresh.window.factor = 0.8
        sasl.login.refresh.window.jitter = 0.05
        sasl.mechanism.inter.broker.protocol = GSSAPI
        sasl.server.callback.handler.class = null
        security.inter.broker.protocol = SSL
        socket.receive.buffer.bytes = 102400
        socket.request.max.bytes = 104857600
        socket.send.buffer.bytes = 102400
        ssl.cipher.suites = []
        ssl.client.auth = required
        ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
        ssl.endpoint.identification.algorithm = https
        ssl.key.password = null
        ssl.keymanager.algorithm = SunX509
        ssl.keystore.location = /var/run/secrets/java.io/keystores/kafka.server.keystore.jks
        ssl.keystore.password = [hidden]
        ssl.keystore.type = JKS
        ssl.protocol = TLS
        ssl.provider = null
        ssl.secure.random.implementation = null
        ssl.trustmanager.algorithm = PKIX
        ssl.truststore.location = /var/run/secrets/java.io/keystores/kafka.server.truststore.jks
        ssl.truststore.password = [hidden]
        ssl.truststore.type = JKS
        transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
        transaction.max.timeout.ms = 900000
        transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
        transaction.state.log.load.buffer.size = 5242880
        transaction.state.log.min.isr = 2
        transaction.state.log.num.partitions = 50
        transaction.state.log.replication.factor = 3
        transaction.state.log.segment.bytes = 104857600
        transactional.id.expiration.ms = 604800000
        unclean.leader.election.enable = false
        zookeeper.connect = example-zookeepercluster-client.zookeeper:2181
        zookeeper.connection.timeout.ms = null
        zookeeper.max.in.flight.requests = 10
        zookeeper.session.timeout.ms = 6000
        zookeeper.set.acl = false
        zookeeper.sync.time.ms = 2000
 (kafka.server.KafkaConfig)
[2019-08-28 10:09:31,352] INFO KafkaConfig values:
        advertised.host.name = null
        advertised.listeners = EXTERNAL://10.108.140.234:19090,SSL://kafka-0.kafka-headless.kafka.svc.cluster.local:29092
        advertised.port = null
        alter.config.policy.class.name = null
        alter.log.dirs.replication.quota.window.num = 11
        alter.log.dirs.replication.quota.window.size.seconds = 1
        authorizer.class.name =
        auto.create.topics.enable = false
        auto.leader.rebalance.enable = true
        background.threads = 10
        broker.id = 0
        broker.id.generation.enable = true
        broker.rack =
        client.quota.callback.class = null
        compression.type = producer
        connection.failed.authentication.delay.ms = 100
        connections.max.idle.ms = 600000
        controlled.shutdown.enable = true
        controlled.shutdown.max.retries = 3
        controlled.shutdown.retry.backoff.ms = 5000
        controller.socket.timeout.ms = 30000
        create.topic.policy.class.name = null
        default.replication.factor = 1
        delegation.token.expiry.check.interval.ms = 3600000
        delegation.token.expiry.time.ms = 86400000
        delegation.token.master.key = null
        delegation.token.max.lifetime.ms = 604800000
        delete.records.purgatory.purge.interval.requests = 1
        delete.topic.enable = true
        fetch.purgatory.purge.interval.requests = 1000
        group.initial.rebalance.delay.ms = 3000
        group.max.session.timeout.ms = 300000
        group.min.session.timeout.ms = 6000
        host.name =
        inter.broker.listener.name = null
        inter.broker.protocol.version = 2.1-IV2
        kafka.metrics.polling.interval.secs = 10
        kafka.metrics.reporters = []
        leader.imbalance.check.interval.seconds = 300
        leader.imbalance.per.broker.percentage = 10
        listener.security.protocol.map = SSL:SSL,EXTERNAL:SSL
        listeners = SSL://:29092,EXTERNAL://:9094
        log.cleaner.backoff.ms = 15000
        log.cleaner.dedupe.buffer.size = 134217728
        log.cleaner.delete.retention.ms = 86400000
        log.cleaner.enable = true
        log.cleaner.io.buffer.load.factor = 0.9
        log.cleaner.io.buffer.size = 524288
        log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
        log.cleaner.min.cleanable.ratio = 0.5
        log.cleaner.min.compaction.lag.ms = 0
        log.cleaner.threads = 1
        log.cleanup.policy = [delete]
        log.dir = /tmp/kafka-logs
        log.dirs = /kafka-logs/kafka
        log.flush.interval.messages = 9223372036854775807
        log.flush.interval.ms = null
        log.flush.offset.checkpoint.interval.ms = 60000
        log.flush.scheduler.interval.ms = 9223372036854775807
        log.flush.start.offset.checkpoint.interval.ms = 60000
        log.index.interval.bytes = 4096
        log.index.size.max.bytes = 10485760
        log.message.downconversion.enable = true
        log.message.format.version = 2.1-IV2
        log.message.timestamp.difference.max.ms = 9223372036854775807
        log.message.timestamp.type = CreateTime
        log.preallocate = false
        log.retention.bytes = -1
        log.retention.check.interval.ms = 300000
        log.retention.hours = 168
        log.retention.minutes = null
        log.retention.ms = null
        log.roll.hours = 168
        log.roll.jitter.hours = 0
        log.roll.jitter.ms = null
        log.roll.ms = null
        log.segment.bytes = 1073741824
        log.segment.delete.delay.ms = 60000
        max.connections.per.ip = 2147483647
        max.connections.per.ip.overrides =
        max.incremental.fetch.session.cache.slots = 1000
        message.max.bytes = 1000012
        metric.reporters = [com.linkedin.kafka.cruisecontrol.metricsreporter.CruiseControlMetricsReporter]
        metrics.num.samples = 2
        metrics.recording.level = INFO
        metrics.sample.window.ms = 30000
        min.insync.replicas = 1
        num.io.threads = 8
        num.network.threads = 3
        num.partitions = 1
        num.recovery.threads.per.data.dir = 1
        num.replica.alter.log.dirs.threads = null
        num.replica.fetchers = 1
        offset.metadata.max.bytes = 4096
        offsets.commit.required.acks = -1
        offsets.commit.timeout.ms = 5000
        offsets.load.buffer.size = 5242880
        offsets.retention.check.interval.ms = 600000
        offsets.retention.minutes = 10080
        offsets.topic.compression.codec = 0
        offsets.topic.num.partitions = 50
        offsets.topic.replication.factor = 3
        offsets.topic.segment.bytes = 104857600
        password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
        password.encoder.iterations = 4096
        password.encoder.key.length = 128
        password.encoder.keyfactory.algorithm = null
        password.encoder.old.secret = null
        password.encoder.secret = null
        port = 9092
        principal.builder.class = null
        producer.purgatory.purge.interval.requests = 1000
        queued.max.request.bytes = -1
        queued.max.requests = 500
        quota.consumer.default = 9223372036854775807
        quota.producer.default = 9223372036854775807
        quota.window.num = 11
        quota.window.size.seconds = 1
        replica.fetch.backoff.ms = 1000
        replica.fetch.max.bytes = 1048576
        replica.fetch.min.bytes = 1
        replica.fetch.response.max.bytes = 10485760
        replica.fetch.wait.max.ms = 500
        replica.high.watermark.checkpoint.interval.ms = 5000
        replica.lag.time.max.ms = 10000
        replica.socket.receive.buffer.bytes = 65536
        replica.socket.timeout.ms = 30000
        replication.quota.window.num = 11
        replication.quota.window.size.seconds = 1
        request.timeout.ms = 30000
        reserved.broker.max.id = 1000
        sasl.client.callback.handler.class = null
        sasl.enabled.mechanisms = [GSSAPI]
        sasl.jaas.config = null
        sasl.kerberos.kinit.cmd = /usr/bin/kinit
        sasl.kerberos.min.time.before.relogin = 60000
        sasl.kerberos.principal.to.local.rules = [DEFAULT]
        sasl.kerberos.service.name = null
        sasl.kerberos.ticket.renew.jitter = 0.05
        sasl.kerberos.ticket.renew.window.factor = 0.8
        sasl.login.callback.handler.class = null
        sasl.login.class = null
        sasl.login.refresh.buffer.seconds = 300
        sasl.login.refresh.min.period.seconds = 60
        sasl.login.refresh.window.factor = 0.8
        sasl.login.refresh.window.jitter = 0.05
        sasl.mechanism.inter.broker.protocol = GSSAPI
        sasl.server.callback.handler.class = null
        security.inter.broker.protocol = SSL
        socket.receive.buffer.bytes = 102400
        socket.request.max.bytes = 104857600
        socket.send.buffer.bytes = 102400
        ssl.cipher.suites = []
        ssl.client.auth = required
        ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
        ssl.endpoint.identification.algorithm = https
        ssl.key.password = null
        ssl.keymanager.algorithm = SunX509
        ssl.keystore.location = /var/run/secrets/java.io/keystores/kafka.server.keystore.jks
        ssl.keystore.password = [hidden]
        ssl.keystore.type = JKS
        ssl.protocol = TLS
        ssl.provider = null
        ssl.secure.random.implementation = null
        ssl.trustmanager.algorithm = PKIX
        ssl.truststore.location = /var/run/secrets/java.io/keystores/kafka.server.truststore.jks
        ssl.truststore.password = [hidden]
        ssl.truststore.type = JKS
        transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
        transaction.max.timeout.ms = 900000
        transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
        transaction.state.log.load.buffer.size = 5242880
        transaction.state.log.min.isr = 2
        transaction.state.log.num.partitions = 50
        transaction.state.log.replication.factor = 3
        transaction.state.log.segment.bytes = 104857600
        transactional.id.expiration.ms = 604800000
        unclean.leader.election.enable = false
        zookeeper.connect = example-zookeepercluster-client.zookeeper:2181
        zookeeper.connection.timeout.ms = null
        zookeeper.max.in.flight.requests = 10
        zookeeper.session.timeout.ms = 6000
        zookeeper.set.acl = false
        zookeeper.sync.time.ms = 2000
 (kafka.server.KafkaConfig)
[2019-08-28 10:09:31,562] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-08-28 10:09:31,568] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-08-28 10:09:31,568] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-08-28 10:09:32,054] INFO Loading logs. (kafka.log.LogManager)
[2019-08-28 10:09:32,253] INFO Logs loading complete in 199 ms. (kafka.log.LogManager)
[2019-08-28 10:09:32,562] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
[2019-08-28 10:09:32,749] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)

Bump Kafka-Operator helm chart to use Prometheus Operator

Describe the bug
In PR #96 we changed the label name used to look up the CR when an alert fires, from kubernetes_namespace to namespace.

This new labelling requires a Prometheus instance installed by the Prometheus Operator.
The operator Helm chart's Prometheus requirement should therefore be replaced with the Prometheus Operator chart.
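
For example, a Prometheus managed by the Prometheus Operator can be installed with the community kube-prometheus-stack Helm chart (a sketch; chart and repository names have changed over time, and this issue predates the current ones):

helm install prometheus --repo https://prometheus-community.github.io/helm-charts kube-prometheus-stack --namespace prometheus --create-namespace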

Could not create CC topic

Describe the bug
I created a cluster as described in config/samples/simplekafkacluster.yaml. The cluster comes up just fine, but the operator complains that it "Could not create CC topic,..."

2019-09-30T09:18:25.646Z        DEBUG   controllers.KafkaCluster        resource is in sync     {"kafkacluster": "docugate/kafka", "Request.Name": "kafka", "component": "kafka", "kind": "*v1.PersistentVolumeClaim"}
2019-09-30T09:18:25.646Z        DEBUG   controllers.KafkaCluster        resource is in sync     {"kafkacluster": "docugate/kafka", "Request.Name": "kafka", "component": "kafka", "kind": "*v1.ConfigMap", "name": "kafka-config-2"}
2019-09-30T09:18:25.646Z        DEBUG   controllers.KafkaCluster        searching with label because name is empty      {"kafkacluster": "docugate/kafka", "Request.Name": "kafka", "component": "kafka", "kind": "*v1.Pod"}
2019-09-30T09:18:25.651Z        DEBUG   controllers.KafkaCluster        resource is in sync     {"kafkacluster": "docugate/kafka", "Request.Name": "kafka", "component": "kafka", "kind": "*v1.Pod"}
2019-09-30T09:18:25.856Z        DEBUG   controllers.KafkaCluster        Reconciled      {"kafkacluster": "docugate/kafka", "Request.Name": "kafka", "component": "kafka"}
2019-09-30T09:18:25.856Z        DEBUG   controllers.KafkaCluster        Reconciling     {"kafkacluster": "docugate/kafka", "Request.Name": "kafka", "component": "kafka-cruisecontrol"}
2019-09-30T09:18:25.959Z        INFO    controllers.KafkaCluster        CR status updated       {"kafkacluster": "docugate/kafka", "Request.Name": "kafka", "component": "kafka-cruisecontrol", "status": "CruiseControlTopicNotReady"}
2019-09-30T09:18:25.959Z        INFO    controllers.KafkaCluster        Could not create CC topic, either less than 3 brokers or not all are ready      {"kafkacluster": "docugate/kafka", "Request.Name": "kafka"}

yet all nodes seem to be ready and healthy:

Status:
  Alert Count:  0
  Brokers State:
    0:
      Configuration State:  ConfigInSync
      Graceful Action State:
        Cruise Control State:  GracefulUpdateNotRequired
        Error Message:
      Rack Awareness State:
    1:
      Configuration State:  ConfigInSync
      Graceful Action State:
        Cruise Control State:  GracefulUpdateNotRequired
        Error Message:
      Rack Awareness State:
    2:
      Configuration State:  ConfigInSync
      Graceful Action State:
        Cruise Control State:   GracefulUpdateNotRequired
        Error Message:
      Rack Awareness State:
  Cruise Control Topic Status:  CruiseControlTopicNotReady
  Rolling Upgrade Status:
    Error Count:   0
    Last Success:
  State:           ClusterReconciling
Events:            <none>

So I exec'd into one of the broker pods and described the CC metrics topic:

kafka-topics.sh --zookeeper kafka-zk-headless.zookeeper:2181 --describe --topic __CruiseControlMetrics
Topic:__CruiseControlMetrics    PartitionCount:12       ReplicationFactor:3     Configs:
        Topic: __CruiseControlMetrics   Partition: 0    Leader: 1       Replicas: 1,0,2 Isr: 2,0,1
        Topic: __CruiseControlMetrics   Partition: 1    Leader: 0       Replicas: 0,2,1 Isr: 2,0,1
        Topic: __CruiseControlMetrics   Partition: 2    Leader: 2       Replicas: 2,1,0 Isr: 2,0,1
        Topic: __CruiseControlMetrics   Partition: 3    Leader: 1       Replicas: 1,2,0 Isr: 2,0,1
        Topic: __CruiseControlMetrics   Partition: 4    Leader: 0       Replicas: 0,1,2 Isr: 2,0,1
        Topic: __CruiseControlMetrics   Partition: 5    Leader: 2       Replicas: 2,0,1 Isr: 2,0,1
        Topic: __CruiseControlMetrics   Partition: 6    Leader: 1       Replicas: 1,0,2 Isr: 2,0,1
        Topic: __CruiseControlMetrics   Partition: 7    Leader: 0       Replicas: 0,2,1 Isr: 2,0,1
        Topic: __CruiseControlMetrics   Partition: 8    Leader: 2       Replicas: 2,1,0 Isr: 2,0,1
        Topic: __CruiseControlMetrics   Partition: 9    Leader: 1       Replicas: 1,2,0 Isr: 2,0,1
        Topic: __CruiseControlMetrics   Partition: 10   Leader: 0       Replicas: 0,1,2 Isr: 2,0,1
        Topic: __CruiseControlMetrics   Partition: 11   Leader: 2       Replicas: 2,0,1 Isr: 2,0,1
bash-4.4# kafka-topics.sh --zookeeper kafka-zk-headless.zookeeper:2181 --describe --topic __CruiseControlMetrics --under-replicated-partitions

Steps to reproduce the issue:
(assuming zookeeper is already installed)

  1. Install the operator from chart version 0.2.0
  2. Apply the definition from simplekafkacluster.yaml

Add missing AVG value of brokerOverloaded alert

Describe the bug
Missing AVG in the alert definition causes false alerts.

Expected behavior
Use AVG in the alert definition.

many "failed to reconcile resource"

Describe the bug
When creating a cluster (deploying the CR), I get many errors like:

"error":"failed to reconcile resource: updating cr with rack awareness info failed: fetching Node rack awareness labels failed: Node \"\" not found"

and

"error":"failed to reconcile resource: updating resource failed: Service \"kf-kafka2-112\" is invalid: [metadata.ownerReferences.apiVersion: Invalid value: \"\": version must not be empty, metadata.ownerReferences.kind: Invalid value: \"\": kind must not be empty]"

Steps to reproduce the issue:
Upload a CR like:

apiVersion: banzaicloud.banzaicloud.io/v1alpha1
kind: KafkaCluster
metadata:
  labels:
    controller-tools.k8s.io: "1.0"
  name: kf-kafka2
spec:
  headlessServiceEnabled: false
  zkAddresses:
    - "test-zookeeper-client.alerting:2181"
  rackAwareness:
    labels:
      - "failure-domain.beta.kubernetes.io/region"
      - "failure-domain.beta.kubernetes.io/zone"
  oneBrokerPerNode: false
  brokerConfigs:
    - image: "wurstmeister/kafka:2.12-2.1.0"
      id: 111
      resourceReqs:
        limits:
          memory: "2Gi"
        requests:
          memory: "2Gi"
          cpu: "1"
      config: |
        auto.create.topics.enable=false
        delete.topic.enable=true
        num.partitions=32
        default.replication.factor=2
        offsets.topic.replication.factor=2
        transaction.state.log.replication.factor=2
        transaction.state.log.min.isr=1
        ProducerConfig.RETRIES_CONFIG=10
        ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG=1000
        session.timeout.ms=200000
        num.recovery.threads.per.data.dir=8
      storageConfigs:
        - mountPath: "/kafka-logs"
          pvcSpec:
            accessModes:
              - ReadWriteOnce
            storageClassName: ssd
            resources:
              requests:
                storage: 256Gi
...

Expected behavior
There should not be any errors.

Additional context
K8s cluster on AKS 1.14.5

kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:54Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.5", GitCommit:"0e9fcb426b100a2aea5ed5c25b3d8cfbb01a8acf", GitTreeState:"clean", BuildDate:"2019-08-05T09:13:08Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

RFC: Operator behaviour when a Topic is manually deleted from Kafka?

With Kafka Operator 0.6.x there is support for Topic Management.
When we create a kafkatopic CR, the corresponding topic is created in Kafka.

When you delete the topic from Kafka by hand, the operator loops on an error:

kafka-operator-7bc9c8cd8f-l4qxk manager 2019-10-01T15:26:51.139Z	INFO	controllers.KafkaTopic.alerting/example-topic_sync	Syncing topic status
kafka-operator-7bc9c8cd8f-l4qxk manager 2019-10-01T15:26:51.233Z	ERROR	controllers.KafkaTopic.alerting/example-topic_sync	Failed to describe topic to update its status	{"error": "kafka server: Request was for a topic or partition that does not exist on this broker."}
kafka-operator-7bc9c8cd8f-l4qxk manager github.com/go-logr/zapr.(*zapLogger).Error
kafka-operator-7bc9c8cd8f-l4qxk manager 	/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128
kafka-operator-7bc9c8cd8f-l4qxk manager github.com/banzaicloud/kafka-operator/controllers.(*KafkaTopicReconciler).doTopicStatusSync
kafka-operator-7bc9c8cd8f-l4qxk manager 	/workspace/controllers/kafkatopic_controller.go:269
kafka-operator-7bc9c8cd8f-l4qxk manager github.com/banzaicloud/kafka-operator/controllers.(*KafkaTopicReconciler).syncTopicStatus
kafka-operator-7bc9c8cd8f-l4qxk manager 	/workspace/controllers/kafkatopic_controller.go:234

The kafkatopic CR is not enforced and the topic is not re-created.

Should the Kafka Operator enforce the presence of topics declared in kafkatopic CRs and re-create them?

Note that even when the topic no longer exists in Kafka, deleting the kafkatopic CR is handled well by the Operator:

Topic has been deleted, stopping sync routine

User Management via CRD

Describe the solution you'd like to see
It would be useful to manage users via a custom CRD.
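
A hypothetical sketch of what such a user CR could look like; no user CRD exists yet, so every field name below is illustrative only:

apiVersion: kafka.banzaicloud.io/v1alpha1   # illustrative group/version
kind: KafkaUser
metadata:
  name: example-user
spec:
  clusterRef:
    name: kafka
  secretName: example-user-credentials   # where generated client credentials/certificates could be stored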

Alert pre-filtering

Describe the feature
All of the defined alerts are received by the operator.

Expected behavior
The alerts should be pre-filtered before they are processed.

new storageclass missing parameter

In the readme.md, under the Installation section:
We recommend to use a custom StorageClass...

The metadata: field is missing. Please update:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:                                               # add this line
  name: exampleStorageclass
parameters:
  type: pd-standard
provisioner: kubernetes.io/gce-pd
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer

unable to bring up kafka v1.0.2 cluster

Describe the bug
Operator v0.7.0 using clusterImage: "wurstmeister/kafka:2.11-1.0.2" results in the following error:

2019-10-24T03:46:14.297Z        ERROR   controller-runtime.controller   Reconciler error        {"controller": "kafkacluster", "request": "osp/kafka", "error": "could not describe cluster wide broker config: Broker id must be an integer, but it is: ", "errorVerbose": "Broker id must be an integer, but it is: \ncould not describe cluster wide broker config\ngithub.com/banzaicloud/kafka-operator/pkg/resources/kafka.(*Reconciler).reconcileClusterWideDynamicConfig\n\t/workspace/pkg/resources/kafka/kafka.go:392\ngithub.com/banzaicloud/kafka-operator/pkg/resources/kafka.(*Reconciler).Reconcile\n\t/workspace/pkg/resources/kafka/kafka.go:284\ngithub.com/banzaicloud/kafka-operator/controllers.(*KafkaClusterReconciler).Reconcile\n\t/workspace/controllers/kafkacluster_controller.go:114\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:216\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:192\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:171\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:152\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:153\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1357"}

The cause seems to come from Kafka itself; I suspect an API change, so github.com/Shopify/sarama no longer works with pre-2.x.x Kafka...

Implement securityContext

Is your feature request related to a problem? Please describe.
In some cases, it would be useful if the securityContext of the deployed pods (kafka, cruise-control) could be defined.

Describe the solution you'd like to see
The securityContext should be definable in the KafkaCluster CR.
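
A hedged sketch of one possible placement in the KafkaCluster CR; the field names and their location are assumptions for discussion, not the current API:

spec:
  brokerConfigGroups:
    default:
      podSecurityContext:   # hypothetical field, applied to every broker pod in the group
        runAsNonRoot: true
        runAsUser: 1000
        fsGroup: 1000
  cruiseControlConfig:
    podSecurityContext:     # hypothetical field, applied to the cruise-control pod
      runAsNonRoot: true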

Operator Panic

Describe the bug
When the external certificate is invalid, the operator will be in a crash loop.

2019-09-19T07:54:18.172Z	DEBUG	controllers.KafkaCluster	resource is in sync	{"kafkacluster": "kafka/kafka", "Request.Name": "kafka", "component": "kafka", "kind": "*v1.Service", "name": "kafka-headless"}
E0919 07:54:18.172914       1 runtime.go:69] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:76
/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:65
/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:51
/usr/local/go/src/runtime/panic.go:679
/usr/local/go/src/runtime/panic.go:199
/usr/local/go/src/runtime/signal_unix.go:394
/workspace/pkg/certutil/main.go:72
/workspace/pkg/resources/kafka/kafka.go:233
/workspace/controllers/kafkacluster_controller.go:109
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:216
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:192
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:171
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:152
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:153
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88
/usr/local/go/src/runtime/asm_amd64.s:1357
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x1177be4]

Steps to reproduce the issue:
Use invalid cert as external cert.

Expected behavior
The operator shouldn't restart continuously.

brokerConfig.config is not copied to configmap

Considering a KafkaCluster defined like this:

  • kafka operator 0.6.0
  • cluster name kf-kafka
  • brokers section in the KafkaCluster yaml:
  brokers:
    - id:  0
      brokerConfigGroup: "default_group"
      brokerConfig:
        resourceRequirements:
          limits:
            memory: "3Gi"
          requests:
            cpu: "0.3"
            memory: "512Mi"
        config: |
          session.timeout.ms=20000
          offsets.topic.replication.factor=2
          ProducerConfig.RETRIES_CONFIG=10
          transaction.state.log.replication.factor=2
          transaction.state.log.min.isr=1
          log.dirs=/kafka-logs/data
          ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG=1000
          delete.topic.enable=true
          num.partitions=32
          auto.create.topics.enable=false
          default.replication.factor=2
          num.recovery.threads.per.data.dir=8

The ConfigMap kf-kafka-config-0 that gets created looks like this:

apiVersion: v1
data:
  broker-config: |-
    advertised.listeners=PLAINTEXT://kf-kafka-0.alerting.svc.cluster.local:9092
    broker.id=0
    broker.rack=francecentral,3
    cruise.control.metrics.reporter.bootstrap.servers=PLAINTEXT://kf-kafka-0.alerting.svc.cluster.local:9092
    listener.security.protocol.map=PLAINTEXT:PLAINTEXT
    listeners=PLAINTEXT://:9092
    log.dirs=/kafka-logs/kafka
    metric.reporters=com.linkedin.kafka.cruisecontrol.metricsreporter.CruiseControlMetricsReporter
    security.inter.broker.protocol=PLAINTEXT
    super.users=
    zookeeper.connect=zk-zookeeper.alerting:2181
kind: ConfigMap
metadata:
  annotations:
  labels:
    app: kafka
    brokerId: "0"
    kafka_cr: kf-kafka
  name: kf-kafka-config-0
  namespace: alerting
  ownerReferences:
  - apiVersion: kafka.banzaicloud.io/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: KafkaCluster
    name: kf-kafka

Is it normal that these custom config parameters are not set there?

Use proper Cruise Control Update status field for initial brokers

Describe the bug
Cruise Control uses some machine learning to predict your Kafka cluster usage, and it requires some time after startup to build its model from the Kafka metrics. Since CC stores its data in Kafka, a running Kafka cluster is a prerequisite for CC, so the first brokers will not be added to the cluster gracefully. This means the initially created Kafka brokers end up with the state GracefulUpdateFailed.
We should change that to something like initialCluster.

Add support for Schema Registry

Create a CRD with its own controller.
It will handle Schema Registry installation, configuration, and management.
It should contain at least the following information (see the sketch after this list):

  • image
  • config
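
A hypothetical sketch of such a resource, just to make the request concrete; no SchemaRegistry CRD exists in the operator, so all names below are illustrative:

apiVersion: kafka.banzaicloud.io/v1alpha1   # illustrative group/version
kind: SchemaRegistry
metadata:
  name: example-schema-registry
spec:
  clusterRef:
    name: kafka
  image: "confluentinc/cp-schema-registry:5.3.1"
  config: |
    kafkastore.topic=_schemas
    debug=false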

Topic management via CRD

Describe the solution you'd like to see
It would be useful if topic management could be configured via a CRD.

Improve Helm chart

Describe the solution you'd like to see
The Helm chart is pretty basic.
While I don't like Helm (I'd rather use Jsonnet), I would like to know whether it will be the primary install choice for the coming releases.
If yes, I could give it a shot and tweak a few things, like templating the image/tag used and maybe other details, so it can be used to automate infrastructure deployments.

Describe alternatives you've considered
Using Jsonnet, as Prometheus-operator does?

Can we support nodeport for external listener access instead of lb?

Is your feature request related to a problem? Please describe.

I want to deploy this operator in my private k8s cluster with the default yaml kafkacluster_without_ssl.yml, but it failed because the operator wanted to get the IP of the load balancer. Can I deploy this without a load balancer? I just want to use NodePort.
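
A hypothetical sketch of how an external listener could opt into NodePort instead of a load balancer; the accessMethod field is an assumption for discussion, not the current API (today the operator provisions a LoadBalancer through Envoy):

  listenersConfig:
    externalListeners:
      - type: "plaintext"
        name: "external"
        externalStartingPort: 30900
        containerPort: 9094
        accessMethod: "NodePort"   # hypothetical field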


Fail to start pods if zone or region is not set on the node

Describe the bug
In minikube (and possibly in other environments as well) the failure-domain.beta.kubernetes.io/zone and failure-domain.beta.kubernetes.io/region labels are not set. The operator does not check for this and sets empty strings for these values in the Pod's NodeAffinity config. After that the Pod can't start, as no node with the above labels (and empty values) will be found.
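
For illustration, the affinity block that ends up on the Pod in this case looks roughly like the following (reconstructed; with empty label values no node can ever match):

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: failure-domain.beta.kubernetes.io/zone
              operator: In
              values:
                - ""
            - key: failure-domain.beta.kubernetes.io/region
              operator: In
              values:
                - ""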

Steps to reproduce the issue:
Start an example cluster in minikube.

Expected behavior
An error should be logged about the node not having the specific labels. Also the labels should be configurable to support other failure domains.

Additional context
The issue came up around testing and reviewing #74

Upscale action re-initiates unexpectedly

Describe the bug
Sometimes the upscale action is re-initiated based on an already processed alert.


Create CC topic using the provided Topic CRD

An error can happen if Kafka reuses the same ZooKeeper path for a different Kafka cluster.
Since the CC topic is created from code during cluster creation, ZooKeeper will hold the CC topic info even after the cluster is deleted. Creating a new cluster under the same zk znode will then fail, because the client that tries to create the topic fails with an "already exists" error.

We are planning to use the newly introduced Topic CRD to create this topic, so the operator can remove it from ZooKeeper during cluster deletion.
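
A hedged sketch of how that could look with the Topic CRD; the topic name follows the Cruise Control metrics reporter default, and the partition/replication values are placeholders:

apiVersion: kafka.banzaicloud.io/v1alpha1
kind: KafkaTopic
metadata:
  name: cruise-control-metrics
spec:
  clusterRef:
    name: kafka
  name: __CruiseControlMetrics
  partitions: 12
  replicationFactor: 3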

When Setting up two kafka clusters in the same namespace cruisecontrol restarts constantly

Describe the bug

I was testing out this Kafka Operator, and one of my use cases is that I need to spawn more than one cluster per namespace. If I enable CruiseControl for two clusters in the same namespace, the cruisecontrol config constantly gets overwritten by the operator, flipping between the two clusters' configs.

The only way I can stop it is to delete the second cluster. I have tried editing a number of configs and it doesn't seem to matter. Based on the documentation for CruiseControl, it would seem we need to deploy a cruisecontrol instance per Kafka cluster.

Steps to reproduce the issue:

I can share my manifests if you want (they are long), but basically: apply the one in https://github.com/banzaicloud/kafka-operator/blob/master/config/samples/banzaicloud_v1alpha1_kafkacluster.yaml, then apply it a second time with the name field changed to anything other than kafka.

Expected behavior
That it deploys the second cluster and cruisecontrol can balance/maintain it.

Screenshots

Additional context

    app.kubernetes.io/component: operator
    app.kubernetes.io/instance: kafka-operator
    app.kubernetes.io/managed-by: Tiller
    app.kubernetes.io/name: kafka-operator
    app.kubernetes.io/version: 0.5.2
    control-plane: controller-manager
    controller-tools.k8s.io: "1.0"
    helm.sh/chart: kafka-operator-0.0.11

External SSL not working with internal plaintext

Describe the bug
I have a cluster with a listener config like this:

  listenersConfig:
    externalListeners:
      - type: "ssl"
        name: "ssl"
        externalStartingPort: 19090
        containerPort: 9094
    internalListeners:
      - type: "plaintext"
        name: "plaintext"
        containerPort: 29092
        usedForInnerBrokerCommunication: true
    sslSecrets:
      tlsSecretName: "test-kafka-operator"
      jksPasswordName: "test-kafka-operator-pass"
      create: true

When the controller starts to create the cluster, it only creates one broker and then can't connect to this broker, because it tries to connect with the TLS cert while the broker only opens a plaintext port.

I added a check to the Kafka client creation:

if cluster.Spec.ListenersConfig.SSLSecrets != nil && cluster.Spec.ListenersConfig.InternalListeners[0].Type != "plaintext" {

Would it be OK to change the code for the SSL check?
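
A minimal, self-contained sketch of the kind of check proposed above, using simplified stand-in types rather than the operator's real API structs. Iterating over the listeners also avoids depending on the ordering of InternalListeners[0]:

package main

import "fmt"

// Simplified stand-in types; the operator's real ListenersConfig differs in detail.
type SSLSecrets struct{ TLSSecretName string }

type InternalListener struct {
    Type                            string
    UsedForInnerBrokerCommunication bool
}

type ListenersConfig struct {
    SSLSecrets        *SSLSecrets
    InternalListeners []InternalListener
}

// useTLSForInternalClient returns true only when SSL secrets are configured
// and the listener used for inter-broker communication is not plaintext.
func useTLSForInternalClient(cfg ListenersConfig) bool {
    if cfg.SSLSecrets == nil {
        return false
    }
    for _, l := range cfg.InternalListeners {
        if l.UsedForInnerBrokerCommunication {
            return l.Type != "plaintext"
        }
    }
    return false
}

func main() {
    cfg := ListenersConfig{
        SSLSecrets: &SSLSecrets{TLSSecretName: "test-kafka-operator"},
        InternalListeners: []InternalListener{
            {Type: "plaintext", UsedForInnerBrokerCommunication: true},
        },
    }
    // Prints "false": the operator should connect to the brokers over plaintext, not TLS.
    fmt.Println(useTLSForInternalClient(cfg))
}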

Docker images are hardcoded

Is your feature request related to a problem? Please describe.
When the Operator creates the Kafka cluster, it uses some external images which are hardcoded. You can see this in:
https://github.com/banzaicloud/kafka-operator/blob/master/pkg/resources/kafka/pod.go#L78

Describe the solution you'd like to see
I would like a way to change these image URLs to use a private registry, for obvious security concerns and reliability issues.
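
A hypothetical sketch of what configurable image fields could look like in the KafkaCluster CR. Apart from clusterImage, which already exists, the field names below are illustrative only:

spec:
  clusterImage: "my-registry.example.com/kafka:2.12-2.1.0"
  cruiseControlConfig:
    image: "my-registry.example.com/cruise-control:2.0.60"    # illustrative field
  monitoringConfig:
    jmxImage: "my-registry.example.com/jmx-javaagent:0.12.0"  # illustrative field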

Describe alternatives you've considered
No idea for now...

Nil pointer on UpdateCrWithNodeAffinity

Describe the bug
The operator started to crash after I added the following config inside brokerConfigGroups:

authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
allow.everyone.if.no.acl.found=false

It keeps crashing even if I remove the configuration.

Steps to reproduce the issue:
Create a cluster where all brokers use the same brokerConfigGroup, then update brokerConfigGroups.default.config.

Expected behavior

Screenshots

{"level":"info","time":"2019-09-21T14:20:47Z","message":"Current state of brokerConfig: {0 default <nil>}"}
{"level":"info","time":"2019-09-21T14:20:47Z","message":"Current state of nodeAffinity: &NodeAffinity{RequiredDuringSchedulingIgnoredDuringExecution:&NodeSelector{NodeSelectorTerms:[{[{failure-domain.beta.kubernetes.io/zone In [us-east-1a]} {failure-domain.beta.kubernetes.io/region In [us-east-1]}] []}],},PreferredDuringSchedulingIgnoredDuringExecution:[],}"}
E0921 14:20:47.800370       1 runtime.go:69] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:76
/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:65
/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:51
/usr/local/go/src/runtime/panic.go:679
/usr/local/go/src/runtime/panic.go:199
/usr/local/go/src/runtime/signal_unix.go:394
/workspace/pkg/k8sutil/cr.go:53
/workspace/pkg/resources/kafka/kafka.go:543
/workspace/pkg/resources/kafka/kafka.go:303
/workspace/controllers/kafkacluster_controller.go:109
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:216
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:192
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:171
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:152
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:153
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88
/usr/local/go/src/runtime/asm_amd64.s:1357
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x102122a]

goroutine 302 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:58 +0x105
panic(0x1769580, 0x2872310)
	/usr/local/go/src/runtime/panic.go:679 +0x1b2
github.com/banzaicloud/kafka-operator/pkg/k8sutil.UpdateCrWithNodeAffinity(0xc00014ce00, 0xc000034000, 0x1c29080, 0xc000300330, 0xc0000d9da0, 0x0)
	/workspace/pkg/k8sutil/cr.go:53 +0x54a
github.com/banzaicloud/kafka-operator/pkg/resources/kafka.(*Reconciler).reconcileKafkaPod(0xc0007ca200, 0x1c1d800, 0xc000756840, 0xc000185500, 0x0, 0x0)
	/workspace/pkg/resources/kafka/kafka.go:543 +0x9c9
github.com/banzaicloud/kafka-operator/pkg/resources/kafka.(*Reconciler).Reconcile(0xc0007ca200, 0x1c1d800, 0xc000a63aa0, 0x0, 0x0)
	/workspace/pkg/resources/kafka/kafka.go:303 +0x16af
github.com/banzaicloud/kafka-operator/controllers.(*KafkaClusterReconciler).Reconcile(0xc0003f53b0, 0xc0000446e9, 0x5, 0xc0000446d8, 0x5, 0xc0000b2cd8, 0xc000a5a000, 0xc0004823f8, 0x1bdf440)
	/workspace/controllers/kafkacluster_controller.go:109 +0x6bc
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc0004e2460, 0x17d7800, 0xc0005d8da0, 0x0)
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:216 +0x162
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc0004e2460, 0x0)
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:192 +0xcb
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker(0xc0004e2460)
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:171 +0x2b
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc00024a120)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:152 +0x5e
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00024a120, 0x3b9aca00, 0x0, 0x1, 0xc000442ae0)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:153 +0xf8
k8s.io/apimachinery/pkg/util/wait.Until(0xc00024a120, 0x3b9aca00, 0xc000442ae0)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88 +0x4d
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:157 +0x32e

Additional context
I'm running the code of branch roll_up.

The nil pointer comes from this line:

brokerConfig.BrokerConfig.NodeAffinity = nodeAffinity

I added two lines of logs and as you can see brokerConfig.BrokerConfig is nil
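
A minimal sketch of a defensive fix, again with stand-in types (the real field holds a Kubernetes NodeAffinity struct, not a string): guard against the nil BrokerConfig before writing the affinity back.

package main

import "fmt"

// Stand-in types; the operator's real Broker/BrokerConfig structs differ.
type BrokerConfig struct{ NodeAffinity string }

type Broker struct {
    Id           int32
    BrokerConfig *BrokerConfig // nil when the broker only references a brokerConfigGroup
}

func main() {
    b := Broker{Id: 0} // BrokerConfig is nil, matching the log above
    // Initialize before writing back instead of dereferencing a possibly-nil pointer:
    if b.BrokerConfig == nil {
        b.BrokerConfig = &BrokerConfig{}
    }
    b.BrokerConfig.NodeAffinity = "zone=us-east-1a"
    fmt.Println(b.BrokerConfig.NodeAffinity)
}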

can't set imagePullSecrets in Kafka pods

Is your feature request related to a problem? Please describe.
Pods support an imagePullSecrets option to set the secret(s) used to pull images from a private registry.
This feature is not exposed by the KafkaCluster definition.

Describe the solution you'd like to see
Add an imagePullSecrets field to the KafkaCluster CRD, which should be an array like:

 imagePullSecrets:
  - name: private-docker-registry-key

Describe alternatives you've considered
The only other solution is to create a ServiceAccount and add the imagePullSecrets to it, then use it in the KafkaCluster, like this:

- apiVersion: v1
  kind: ServiceAccount
  metadata:
    labels:
      app: kafkacluster
    name: kafkacluster
    namespace: kafka
  imagePullSecrets:
  - name: private-docker-registry-key

panic: runtime error: index out of range [3] with length 3

Describe the bug
During reconciliation of a cluster, the Operator panics with:

kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:12.744Z	INFO	controllers.KafkaCluster	Reconciling KafkaCluster	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:12.744Z	DEBUG	controllers.KafkaCluster	Skipping PKI reconciling due to no SSL config	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "pki"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:12.744Z	DEBUG	controllers.KafkaCluster	Reconciling	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "envoy"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:12.744Z	DEBUG	controllers.KafkaCluster	Reconciled	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "envoy"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:12.744Z	DEBUG	controllers.KafkaCluster	Reconciling	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka_monitoring"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:12.745Z	DEBUG	controllers.KafkaCluster	resource is in sync	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka_monitoring", "kind": "*v1.ConfigMap", "name": "kf-kafka-kafka-jmx-exporter"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:12.745Z	DEBUG	controllers.KafkaCluster	Reconciled	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka_monitoring"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:12.745Z	DEBUG	controllers.KafkaCluster	Reconciling	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "cruisecontrol_monitoring"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:12.745Z	DEBUG	controllers.KafkaCluster	resource is in sync	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "cruisecontrol_monitoring", "kind": "*v1.ConfigMap", "name": "kf-kafka-cc-jmx-exporter"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:12.745Z	DEBUG	controllers.KafkaCluster	Reconciled	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "cruisecontrol_monitoring"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:12.745Z	DEBUG	controllers.KafkaCluster	Reconciling	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:12.746Z	DEBUG	controllers.KafkaCluster	resource is in sync	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka", "kind": "*v1.Service", "name": "kf-kafka-all-broker"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:12.747Z	DEBUG	controllers.KafkaCluster	searching with label because name is empty	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka", "kind": "*v1.PersistentVolumeClaim"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:12.748Z	DEBUG	controllers.KafkaCluster	resource is in sync	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka", "kind": "*v1.PersistentVolumeClaim"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:12.759Z	DEBUG	controllers.KafkaCluster	resource is in sync	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka", "kind": "*v1.ConfigMap", "name": "kf-kafka-config-1"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:12.760Z	DEBUG	controllers.KafkaCluster	resource is in sync	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka", "kind": "*v1.Service", "name": "kf-kafka-1"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:12.760Z	DEBUG	controllers.KafkaCluster	searching with label because name is empty	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka", "kind": "*v1.Pod"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:12.763Z	DEBUG	controllers.KafkaCluster	resource is in sync	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka", "kind": "*v1.Pod"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:13.026Z	DEBUG	controllers.KafkaCluster	searching with label because name is empty	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka", "kind": "*v1.PersistentVolumeClaim"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:13.026Z	DEBUG	controllers.KafkaCluster	resource is in sync	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka", "kind": "*v1.PersistentVolumeClaim"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:13.028Z	DEBUG	controllers.KafkaCluster	resource diffs	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka", "kind": "*v1.ConfigMap", "name": "kf-kafka-config-2", "patch": "{\"data\":{\"broker-config\":\"advertised.listeners=PLAINTEXT://kf-kafka-2.alerting.svc.cluster.local:9092\\nbroker.id=2\\nbroker.rack=francecentral,3\\ncruise.control.metrics.reporter.bootstrap.servers=PLAINTEXT://kf-kafka-2.alerting.svc.cluster.local:9092\\nlistener.security.protocol.map=PLAINTEXT:PLAINTEXT\\nlisteners=PLAINTEXT://:9092\\nlog.dirs=/kafka-logs/kafka\\nmetric.reporters=com.linkedin.kafka.cruisecontrol.metricsreporter.CruiseControlMetricsReporter\\nsecurity.inter.broker.protocol=PLAINTEXT\\nsuper.users=\\nzookeeper.connect=zk-zookeeper.alerting:2181\"}}", "current": "{\"kind\":\"ConfigMap\",\"apiVersion\":\"v1\",\"metadata\":{\"name\":\"kf-kafka-config-2\",\"namespace\":\"alerting\",\"selfLink\":\"/api/v1/namespaces/alerting/configmaps/kf-kafka-config-2\",\"uid\":\"0f9e6683-e399-11e9-94f3-4a7bbeb43129\",\"resourceVersion\":\"16769225\",\"creationTimestamp\":\"2019-09-30T15:43:34Z\",\"labels\":{\"app\":\"kafka\",\"brokerId\":\"2\",\"kafka_cr\":\"kf-kafka\"},\"annotations\":{\"banzaicloud.com/last-applied\":\"{\\\"data\\\":{\\\"broker-config\\\":\\\"advertised.listeners=PLAINTEXT://kf-kafka-2.alerting.svc.cluster.local:9092\\\\nbroker.id=2\\\\ncruise.control.metrics.reporter.bootstrap.servers=PLAINTEXT://kf-kafka-2.alerting.svc.cluster.local:9092\\\\nlistener.security.protocol.map=PLAINTEXT:PLAINTEXT\\\\nlisteners=PLAINTEXT://:9092\\\\nlog.dirs=/kafka-logs/kafka\\\\nmetric.reporters=com.linkedin.kafka.cruisecontrol.metricsreporter.CruiseControlMetricsReporter\\\\nsecurity.inter.broker.protocol=PLAINTEXT\\\\nsuper.users=\\\\nzookeeper.connect=zk-zookeeper.alerting:2181\\\"},\\\"metadata\\\":{\\\"labels\\\":{\\\"app\\\":\\\"kafka\\\",\\\"brokerId\\\":\\\"2\\\",\\\"kafka_cr\\\":\\\"kf-kafka\\\"},\\\"name\\\":\\\"kf-kafka-config-2\\\",\\\"namespace\\\":\\\"alerting\\\",\\\"ownerReferences\\\":[{\\\"apiVersion\\\":\\\"kafka.banzaicloud.io/v1beta1\\\",\\\"blockOwnerDeletion\\\":true,\\\"controller\\\":true,\\\"kind\\\":\\\"KafkaCluster\\\",\\\"name\\\":\\\"kf-kafka\\\",\\\"uid\\\":\\\"d6ff7d7f-e398-11e9-94f3-4a7bbeb43129\\\"}]}}\"},\"ownerReferences\":[{\"apiVersion\":\"kafka.banzaicloud.io/v1beta1\",\"kind\":\"KafkaCluster\",\"name\":\"kf-kafka\",\"uid\":\"d6ff7d7f-e398-11e9-94f3-4a7bbeb43129\",\"controller\":true,\"blockOwnerDeletion\":true}]},\"data\":{\"broker-config\":\"advertised.listeners=PLAINTEXT://kf-kafka-2.alerting.svc.cluster.local:9092\\nbroker.id=2\\ncruise.control.metrics.reporter.bootstrap.servers=PLAINTEXT://kf-kafka-2.alerting.svc.cluster.local:9092\\nlistener.security.protocol.map=PLAINTEXT:PLAINTEXT\\nlisteners=PLAINTEXT://:9092\\nlog.dirs=/kafka-logs/kafka\\nmetric.reporters=com.linkedin.kafka.cruisecontrol.metricsreporter.CruiseControlMetricsReporter\\nsecurity.inter.broker.protocol=PLAINTEXT\\nsuper.users=\\nzookeeper.connect=zk-zookeeper.alerting:2181\"}}", "modified": 
"{\"data\":{\"broker-config\":\"advertised.listeners=PLAINTEXT://kf-kafka-2.alerting.svc.cluster.local:9092\\nbroker.id=2\\nbroker.rack=francecentral,3\\ncruise.control.metrics.reporter.bootstrap.servers=PLAINTEXT://kf-kafka-2.alerting.svc.cluster.local:9092\\nlistener.security.protocol.map=PLAINTEXT:PLAINTEXT\\nlisteners=PLAINTEXT://:9092\\nlog.dirs=/kafka-logs/kafka\\nmetric.reporters=com.linkedin.kafka.cruisecontrol.metricsreporter.CruiseControlMetricsReporter\\nsecurity.inter.broker.protocol=PLAINTEXT\\nsuper.users=\\nzookeeper.connect=zk-zookeeper.alerting:2181\"},\"metadata\":{\"labels\":{\"app\":\"kafka\",\"brokerId\":\"2\",\"kafka_cr\":\"kf-kafka\"},\"name\":\"kf-kafka-config-2\",\"namespace\":\"alerting\",\"ownerReferences\":[{\"apiVersion\":\"kafka.banzaicloud.io/v1beta1\",\"blockOwnerDeletion\":true,\"controller\":true,\"kind\":\"KafkaCluster\",\"name\":\"kf-kafka\",\"uid\":\"d6ff7d7f-e398-11e9-94f3-4a7bbeb43129\"}]}}", "original": "{\"data\":{\"broker-config\":\"advertised.listeners=PLAINTEXT://kf-kafka-2.alerting.svc.cluster.local:9092\\nbroker.id=2\\ncruise.control.metrics.reporter.bootstrap.servers=PLAINTEXT://kf-kafka-2.alerting.svc.cluster.local:9092\\nlistener.security.protocol.map=PLAINTEXT:PLAINTEXT\\nlisteners=PLAINTEXT://:9092\\nlog.dirs=/kafka-logs/kafka\\nmetric.reporters=com.linkedin.kafka.cruisecontrol.metricsreporter.CruiseControlMetricsReporter\\nsecurity.inter.broker.protocol=PLAINTEXT\\nsuper.users=\\nzookeeper.connect=zk-zookeeper.alerting:2181\"},\"metadata\":{\"labels\":{\"app\":\"kafka\",\"brokerId\":\"2\",\"kafka_cr\":\"kf-kafka\"},\"name\":\"kf-kafka-config-2\",\"namespace\":\"alerting\",\"ownerReferences\":[{\"apiVersion\":\"kafka.banzaicloud.io/v1beta1\",\"blockOwnerDeletion\":true,\"controller\":true,\"kind\":\"KafkaCluster\",\"name\":\"kf-kafka\",\"uid\":\"d6ff7d7f-e398-11e9-94f3-4a7bbeb43129\"}]}}"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:13.160Z	INFO	controllers.KafkaCluster	Kafka cluster state updated	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka", "kind": "*v1.ConfigMap", "name": "kf-kafka-config-2"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:13.160Z	INFO	controllers.KafkaCluster	resource updated	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka", "kind": "*v1.ConfigMap", "name": "kf-kafka-config-2"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:13.161Z	DEBUG	controllers.KafkaCluster	resource is in sync	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka", "kind": "*v1.Service", "name": "kf-kafka-2"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:13.161Z	DEBUG	controllers.KafkaCluster	searching with label because name is empty	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka", "kind": "*v1.Pod"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:13.316Z	INFO	kafka_util	offline Replica Count is 0
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:13.316Z	INFO	kafka_util	all replicas are in sync
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:14.324Z	DEBUG	controllers.KafkaCluster	searching with label because name is empty	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka", "kind": "*v1.PersistentVolumeClaim"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:14.324Z	DEBUG	controllers.KafkaCluster	resource is in sync	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka", "kind": "*v1.PersistentVolumeClaim"}
kafka-operator-7bc9c8cd8f-nlwfz manager E0930 15:46:14.324936       1 runtime.go:69] Observed a panic: runtime.boundsError{x:3, y:3, signed:true, code:0x0} (runtime error: index out of range [3] with length 3)
kafka-operator-7bc9c8cd8f-nlwfz manager /go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:76
kafka-operator-7bc9c8cd8f-nlwfz manager /go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:65
kafka-operator-7bc9c8cd8f-nlwfz manager /go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:51
kafka-operator-7bc9c8cd8f-nlwfz manager /usr/local/go/src/runtime/panic.go:679
kafka-operator-7bc9c8cd8f-nlwfz manager /usr/local/go/src/runtime/panic.go:75
kafka-operator-7bc9c8cd8f-nlwfz manager /workspace/pkg/resources/kafka/configmap.go:179
kafka-operator-7bc9c8cd8f-nlwfz manager /workspace/pkg/resources/kafka/configmap.go:97
kafka-operator-7bc9c8cd8f-nlwfz manager /workspace/pkg/resources/kafka/kafka.go:252
kafka-operator-7bc9c8cd8f-nlwfz manager /workspace/controllers/kafkacluster_controller.go:111
kafka-operator-7bc9c8cd8f-nlwfz manager /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:216
kafka-operator-7bc9c8cd8f-nlwfz manager /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:192
kafka-operator-7bc9c8cd8f-nlwfz manager /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:171
kafka-operator-7bc9c8cd8f-nlwfz manager /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:152
kafka-operator-7bc9c8cd8f-nlwfz manager /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:153
kafka-operator-7bc9c8cd8f-nlwfz manager /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88
kafka-operator-7bc9c8cd8f-nlwfz manager /usr/local/go/src/runtime/asm_amd64.s:1357
kafka-operator-7bc9c8cd8f-nlwfz manager panic: runtime error: index out of range [3] with length 3 [recovered]
kafka-operator-7bc9c8cd8f-nlwfz manager 	panic: runtime error: index out of range [3] with length 3
kafka-operator-7bc9c8cd8f-nlwfz manager
kafka-operator-7bc9c8cd8f-nlwfz manager goroutine 436 [running]:
kafka-operator-7bc9c8cd8f-nlwfz manager k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
kafka-operator-7bc9c8cd8f-nlwfz manager 	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:58 +0x105
kafka-operator-7bc9c8cd8f-nlwfz manager panic(0x185fd00, 0xc0016630c0)
kafka-operator-7bc9c8cd8f-nlwfz manager 	/usr/local/go/src/runtime/panic.go:679 +0x1b2
kafka-operator-7bc9c8cd8f-nlwfz manager github.com/banzaicloud/kafka-operator/pkg/resources/kafka.Reconciler.generateBrokerConfig(0x1c02900, 0xc0005b1d70, 0xc001632000, 0x3, 0xc00188b6b0, 0xc0008c81c0, 0x0, 0x0, 0x0, 0x0, ...)
kafka-operator-7bc9c8cd8f-nlwfz manager 	/workspace/pkg/resources/kafka/configmap.go:179 +0x6c9
kafka-operator-7bc9c8cd8f-nlwfz manager github.com/banzaicloud/kafka-operator/pkg/resources/kafka.(*Reconciler).configMap(0xc0017ceec0, 0xc000000003, 0xc00188b6b0, 0x0, 0x0, 0xc0008c81c0, 0x0, 0x0, 0x1bf7080, 0xc0017cfdc0, ...)
kafka-operator-7bc9c8cd8f-nlwfz manager 	/workspace/pkg/resources/kafka/configmap.go:97 +0x5bd
kafka-operator-7bc9c8cd8f-nlwfz manager github.com/banzaicloud/kafka-operator/pkg/resources/kafka.(*Reconciler).Reconcile(0xc0017ceec0, 0x1bf7080, 0xc0017cfdc0, 0x0, 0x0)
kafka-operator-7bc9c8cd8f-nlwfz manager 	/workspace/pkg/resources/kafka/kafka.go:252 +0x1693
kafka-operator-7bc9c8cd8f-nlwfz manager github.com/banzaicloud/kafka-operator/controllers.(*KafkaClusterReconciler).Reconcile(0xc000381140, 0xc000784998, 0x8, 0xc000784b08, 0x8, 0xc0008c9cd8, 0xc0008bf9e0, 0xc0008ce248, 0x1bb8ba0)
kafka-operator-7bc9c8cd8f-nlwfz manager 	/workspace/controllers/kafkacluster_controller.go:111 +0x6bc
kafka-operator-7bc9c8cd8f-nlwfz manager sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc00031a320, 0x17b3940, 0xc001483f40, 0xc000581500)
kafka-operator-7bc9c8cd8f-nlwfz manager 	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:216 +0x162
kafka-operator-7bc9c8cd8f-nlwfz manager sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc00031a320, 0xc0000a4000)
kafka-operator-7bc9c8cd8f-nlwfz manager 	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:192 +0xcb
kafka-operator-7bc9c8cd8f-nlwfz manager sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker(0xc00031a320)
kafka-operator-7bc9c8cd8f-nlwfz manager 	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:171 +0x2b
kafka-operator-7bc9c8cd8f-nlwfz manager k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc000016640)
kafka-operator-7bc9c8cd8f-nlwfz manager 	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:152 +0x5e
kafka-operator-7bc9c8cd8f-nlwfz manager k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000016640, 0x3b9aca00, 0x0, 0x1, 0xc0000ba180)
kafka-operator-7bc9c8cd8f-nlwfz manager 	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:153 +0xf8
kafka-operator-7bc9c8cd8f-nlwfz manager k8s.io/apimachinery/pkg/util/wait.Until(0xc000016640, 0x3b9aca00, 0xc0000ba180)
kafka-operator-7bc9c8cd8f-nlwfz manager 	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88 +0x4d
kafka-operator-7bc9c8cd8f-nlwfz manager created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start
kafka-operator-7bc9c8cd8f-nlwfz manager 	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:157 +0x32e

Steps to reproduce the issue:
Haven't found a way to reproduce it yet.

Expected behavior
Operator should not panic :)

Additional context
Operator version 0.6.0, deployed from the Helm chart.

kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T12:36:28Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.6", GitCommit:"96fac5cd13a5dc064f7d9f4f23030a6aeface6cc", GitTreeState:"clean", BuildDate:"2019-08-19T11:05:16Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}

Cleaned ZooKeeper and re-created the Kafka cluster from scratch.
I hit this error twice: once with broker IDs 121, 122, 123, and then with a new cluster with IDs 1, 2, 3.

full CRD documentation

I may have missed it, but I could not find documentation for all the KafkaCluster CRD fields other than the one example YAML.

Please add documentation for all CRD fields.

For me, good docs would include:

  • a full yaml file listing all possible fields to show the full structure.
  • a field-by-field description listing possible values and their effects.

Reconciliation blocks until Envoy external IP is available

