
burrow's People

Contributors

bai, cm-cnnxty, ctrochalakis, cvtjnii, d1egoaz, dependabot[bot], dpippenger, hoesler, jantebeest, jbvmio, jsvisa, jyates, lawrencemq, lins05, lukkie, markrileybot, matsu-chara, mlongob, ms7s, poslegm, rconn01, rjh-yext, sahilthapar, timbertson, toddpalino, toff63, usiel, vas78, vixns, vvuibert


burrow's Issues

Listing all consumer groups for a cluster always returns an empty list

I set up a Burrow install and pointed it to my dev Kafka cluster (1 broker and ZK) and my production cluster (3 brokers and ZK). In both cases, when I call /v2/kafka/kafka1/consumer, I get the same response with no consumers, i.e. {"error":false,"message":"consumer list returned","consumers":[]}

I know that both clusters have connected consumers and should be processing messages.
All other endpoints, i.e. those not at the consumer level, appear to work fine.

On my dev box, I get the following when asking for cluster detail (/v2/kafka/kafka1):
{"error":false,"message":"cluster detail returned","cluster":{"zookeepers":["[kafka.local]:2181"],"zookeeper_port":2181,"zookeeper_path":"/","brokers":["[kafka.local]:9092"],"broker_port":9092,"offsets_topic":"__consumer_offsets"}}

I am running (all in Docker containers):
Kafka 0.9.0.1
ZK 3.4.6
go version go1.6 linux/amd64

Not sure what version of Burrow this is (assuming the latest), but it was installed using:
RUN go get github.com/linkedin/burrow
RUN cd $GOPATH/src/github.com/linkedin/burrow && gpm install && go install

Any ideas?

Add some request parameters to json response

Hi

I think that adding 3 parameters to the JSON response for the "Consumer Topic Detail" query would be great.
The parameters are the following:

  1. Cluster name
  2. Consumer group
  3. topic name

These parameters already exist in the request URL, and it would be great if they were echoed in the response.
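
A sketch of what the extended response might look like (the field values here are illustrative, not taken from the actual API):

{
    "error": false,
    "message": "consumer group topic offsets returned",
    "cluster": "mycluster",
    "group": "mygroup",
    "topic": "mytopic",
    "offsets": [645853, 840126]
}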

Thanks
D.

Consumers of slow topics are marked as error

Hello,

I have some Kafka topics with very little traffic (a few messages per day, if any), and the consumer groups consuming these are marked as failing by the HTTP /lag endpoint.
Maybe, before marking a consumer group as ERR/STOP, Burrow should check the current topic offset to see whether there is actually any data left to consume.

$ curl http://localhost/v2/kafka/dev/consumer/mygroup/lag
{
    "error": false, 
    "message": "consumer group status returned", 
    "status": {
        "cluster": "dev", 
        "complete": true, 
        "group": "mygroup", 
        "maxlag": null, 
        "partitions": [
            {
                "end": {
                    "lag": 0, 
                    "offset": 645853, 
                    "timestamp": 1450153877135
                }, 
                "partition": 0, 
                "start": {
                    "lag": 0, 
                    "offset": 645750, 
                    "timestamp": 1450142015472
                }, 
                "status": "STOP", 
                "topic": "slowtopic"
            }, 
            {
                "end": {
                    "lag": 0, 
                    "offset": 840126, 
                    "timestamp": 1450161779653
                }, 
                "partition": 1, 
                "start": {
                    "lag": 0, 
                    "offset": 840011, 
                    "timestamp": 1450161698730
                }, 
                "status": "STOP", 
                "topic": "slowtopic"
            }
        ], 
        "status": "ERR"
    }
}

$ curl http://localhost/v2/kafka/dev/topic/slowtopic
{
    "error": false, 
    "message": "broker topic offsets returned", 
    "offsets": [
        645854, 
        840127
    ]
}
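
A minimal sketch of the guard suggested above, with illustrative names (this is not Burrow's actual code): if the consumer's last committed offset has caught up to the broker's head offset, a stalled commit stream is not an error.

// hypothetical status check for slow topics; all names are illustrative
func partitionStatus(consumerOffset, brokerHeadOffset int64, commitsStalled bool) string {
    if commitsStalled && consumerOffset >= brokerHeadOffset {
        return "OK" // nothing left to consume, so stalled commits are expected
    }
    if commitsStalled {
        return "STOP"
    }
    return "OK"
}

Note that in the output above the broker endpoint reports 645854 while the consumer sits at 645853 with lag 0, so any such comparison would have to account for the broker offset being the log-end (next-to-be-produced) offset.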

Lag does not update when status is STOP

If I stop a consumer group, the status of that group reported by Burrow changes to STOP, which is what I'd expect. However, the lag reported by Burrow for that group remains at the value it had before I stopped the group, even though new messages are being sent to the topic. When I start the consumer group again, the lag values are updated.

Is this intended behaviour? Ideally I'd like to know what the lag is even when the group is stopped.

Network interface binding

Hello,

I tried using Burrow on a Linux machine that has both an IPv4 and an IPv6 address.
The HTTP server launched by Burrow only listens on the specified port on IPv6, and I cannot connect using IPv4.
Is there any undocumented way to specify which address/hostname/interface the server should listen on?
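
For reference, a minimal sketch of explicit address binding with Go's net/http (illustrative only; Burrow's actual server setup may differ): passing an explicit address instead of just a port forces the listener onto that interface and address family.

package main

import (
    "log"
    "net/http"
)

func main() {
    // ":8000" lets the runtime pick a listener (often IPv6/dual-stack);
    // "0.0.0.0:8000", or a concrete interface address, forces IPv4
    log.Fatal(http.ListenAndServe("0.0.0.0:8000", nil))
}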

a state of consumer

if offset.Timestamp-previousTimestamp < (storage.app.Config.Lagcheck.MinDistance * 1000) {
    clusterOffsets.consumerLock.Unlock()
    return
}
Here, is there a need to exclude the case where offset.Timestamp-previousTimestamp == 0?
If a consumer does not commit an offset for a while, and the ring is not yet full, then the evaluateGroup method will not work out the true state of the consumer.

HTTP_PROXY env variable not respected

By default, http.Client reads any available HTTP_PROXY or HTTPS_PROXY environment variable, and automatically configures itself to use that proxy for outbound requests. This occurs on the Transport object. Background available in the Golang docs. See the following for some discussion: http://stackoverflow.com/questions/14661511/setting-up-proxy-for-http-client

However, in Burrow's HTTP notifier, the default Transport object is overridden to allow for a custom keep-alive setting: https://github.com/linkedin/Burrow/blob/master/http_notifier.go#L94

In practice, this has the effect of ignoring any present HTTP_PROXY or HTTPS_PROXY env vars, which in turn breaks the HTTP notifier for environments that require proxy access.

POST requests, and presumably also DELETE requests, instead hang with messages like:
1458928341523479731 [Error] Failed to send POST for group foo in cluster bar at severity ERR (Id 1234-some-uuid): Post https://example.com: net/http: request canceled while waiting for connection.
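
A minimal sketch of one likely fix, using only standard-library calls (how this would slot into Burrow's notifier is an assumption): keep the custom keep-alive, but restore proxy handling by setting Proxy on the replacement Transport.

package notifier

import (
    "net"
    "net/http"
    "time"
)

// newNotifierClient builds an http.Client with a custom keep-alive while
// preserving HTTP_PROXY/HTTPS_PROXY support via ProxyFromEnvironment.
func newNotifierClient(keepAlive time.Duration) *http.Client {
    transport := &http.Transport{
        Proxy: http.ProxyFromEnvironment,
        Dial: (&net.Dialer{
            KeepAlive: keepAlive,
        }).Dial,
    }
    return &http.Client{Transport: transport}
}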

Rule 4 fails with high commit frequency

I have a few consumers I am monitoring with Burrow that commit frequently under normal load. At some points the commit frequency is faster than the rate at which the lag check runs. I end up with a very small window where the difference between the first and last offset timestamps is ~200 ms.

I added some logging, and here is an example; the format is {offset timestamp lag}.

{74218 1442857809487 0} {74219 1442857809498 0} {74220 1442857809510 0} {74221 1442857809521 0} {74222 1442857809657 0} {74223 1442857809670 0} {74224 1442857809681 0} {74225 1442857809692 0} {74226 1442857809705 0} {74227 1442857809716 0}

1442857809716 - 1442857809487 = 229 ms

In this case, if the lag check runs more than 229 ms after the last offset is committed, then the check fails.

It seems like broker-offsets should be the minimum time between offsets within the window.
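
For context, a sketch of the rule as inferred from the behaviour described above (names are illustrative, not Burrow's actual identifiers): the group is flagged when the time since the last commit exceeds the span of the commit window, so a ~229 ms window trips almost immediately.

// isStopped reports whether a group looks stopped under the inferred rule:
// the time since the last commit exceeds the span of the commit window.
func isStopped(timestamps []int64, nowMs int64) bool {
    // timestamps holds the commit times (ms) in the window, oldest first
    first, last := timestamps[0], timestamps[len(timestamps)-1]
    return nowMs-last > last-first
}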

Changes in consumer topics are not reflected properly

Burrow handles the case where a consumer goes away entirely, expiring the group after it has not committed for a certain amount of time. What it does not handle is the case where consumers change the list of topics they are consuming, specifically removing topics. It will continue to report them as errors (stopped partitions) until restarted.

One solution is to implement a "/v2/kafka/(cluster)/consumer/(group)/drop" endpoint that will drop the entire group and rebuild it. You could also implement a similar endpoint to remove just specific topics from a group.
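
A hypothetical invocation of the proposed endpoint (nothing here exists yet; the HTTP method is a guess):

curl -X DELETE http://localhost:8000/v2/kafka/mycluster/consumer/mygroup/drop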

ZK error: invalid ACL specified

I'm getting this error trying to run burrow.

Started Burrow at June 12, 2015 at 11:14am (PDT)
1434132884352394945 [Info] Starting Zookeeper client
1434132884352448792 [Info] Starting Offsets Storage module
1434132884352593141 [Info] Starting HTTP server
1434132884352628654 [Info] Starting Zookeeper client for cluster sit
1434132884352649043 [Info] Starting Kafka client for cluster sit
1434132884837543547 [Info] Starting consumers for 50 partitions of __consumer_offsets in cluster sit
1434132906887302495 [Critical] Cannot get ZK notifier lock: zk: invalid ACL specified

This is my config:

[general]
logdir=log
logconfig=logging.cfg
pidfile=burrow.pid
client-id=burrow-lagchecker
group-blacklist=^(console-consumer-|python-kafka-consumer-).*$

[zookeeper]
hostname=host1
hostname=host2
hostname=host3
port=2181
timeout=6
lock-path=/burrow/notifier

[kafka "sit"]
broker=host1
broker=host2
broker=host3
broker=host4
broker-port=9092
zookeeper=host1
zookeeper=host2
zookeeper=host3
zookeeper-port=2181
zookeeper-path=/
offsets-topic=__consumer_offsets

[tickers]
broker-offsets=60

[lagcheck]
intervals=10
expire-group=604800

[httpserver]
server=on
port=7000

I'm using ZK 3.4.6 with no ACLs.

ConsumerGroup completeness and offset timing

If I want to commit my consumer group offset after processing each message (to minimize re-processed messages on crash/restart, etc.), how do I configure my lagcheck interval so that I get complete: true?
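
A sketch of the knobs that appear relevant, based on the config examples and the MinDistance check quoted elsewhere on this page (values are illustrative, and the exact key names are an assumption):

[lagcheck]
intervals=10
min-distance=1

If I'm reading the snippets on this page correctly, offsets committed closer together than min-distance seconds are dropped before they reach the ring, so with very frequent commits the window may never fill and complete stays false until intervals distinct commits have been kept.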

No consumers found

Howdy,

Not sure if this is a bug or if I'm missing something. Basically, we have a very simple Kafka setup: 10 brokers and 3 ZooKeeper hosts.

Today Burrow is able to find the available topics, but it can't find any consumers. We have a consumer and a producer running; I can confirm this because we also have kafka-offsetmonitor running, and in that tool we can see the consumer.

$ curl -s -D /dev/null -o - http://localhost:8081/v2/kafka/main/topic | json_xs
{
    "topics": [
        "dark",
        "__consumer_offsets"
    ],
    "error": false,
    "message": "broker topic list returned"
}

$ curl -s -D /dev/null -o - http://localhost:8081/v2/kafka/main/consumer | json_xs
{
    "consumers": [],
    "error": false,
    "message": "consumer list returned"
}

Am I missing something? Not sure if Burrow is supposed to work out of the box or if we need to do something additional with the consumers. The relevant config is:

client-id=burrow-lagchecker
[tickers]
broker-offsets=60

[lagcheck]
intervals=10
expire-group=604800

We don't have any blacklists.

From the logs I can see:
1434495090171073998 [Info] Starting Kafka client for cluster main
1434495090186517134 [Info] Starting consumers for 50 partitions of __consumer_offsets in cluster main
1434495106251738523 [Info] Acquired Zookeeper notifier lock

Thanks!

Logging under runit

It would be nice if all stdout/stderr could be routed to the console logger in seelog. It looks like Burrow only logs to files, which breaks the runit logger and potentially the systemd logger.

import cycle not allowed

import cycle not allowed
package .
imports bytes
imports errors
imports runtime
imports runtime/internal/atomic
imports unsafe
imports runtime
import cycle not allowed
package .
imports github.com/Shopify/sarama
imports crypto/tls
imports crypto/x509
imports net
imports runtime/cgo
imports runtime/cgo

How can I solve this?

Does Burrow support Consumer Group based Lag calculation configuration

Hi,
I have a use case where we have set up Kafka with 2 topics. Each topic has consumers that consume messages at a different rate. For example, Consumer_Topic_1 might do a lot of work after consuming a message and before committing the offset, while Consumer_Topic_2 might do half the work done by Consumer_Topic_1.

In this case, their sense of slowness (or lag) is very different. Hence, ideally I would need 2 sets of [tickers] and [lagcheck] sections, each serving one consumer group. But I believe that's not possible today in a single instance of Burrow. Is this understanding correct?

I probably need to deploy 2 instances of Burrow to achieve what I need. How viable is this as a new feature request? Are there any plans along these lines?

Thanks in advance!

Detect offset rewind/reset by consumers

It would be useful to add a rule, when evaluating lag, that detects whether or not a consumer has reset its offsets. This is a fairly simple check (if the consumer offset goes backward for any interval, set a specific status code). I believe this should be categorized as an error, not a warning.
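
A minimal sketch of the check described above, with illustrative names (not Burrow's actual code):

// isRewind reports whether the committed offsets in a window ever go backward,
// which would indicate the consumer reset or rewound its position.
func isRewind(offsets []int64) bool {
    for i := 1; i < len(offsets); i++ {
        if offsets[i] < offsets[i-1] {
            return true
        }
    }
    return false
}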

Ansible playbook for Burrow

We are starting to deploy Burrow to our environments, so I created an Ansible playbook for it. You can find it here.

You can also just install it via galaxy:

ansible-galaxy install slb350.Ansible-Burrow

Thanks! Let me know if you have any suggestions. We use ubuntu/trusty, and so far no issues.

Unable to see consumer info

I started Burrow, but I cannot see consumers for any clusters, and I'm not sure what the issue is:
{"error":false,"message":"consumer list returned","consumers":[]}
I am using Kafka 0.9.
Below is my burrow configuration:

[general]
logdir=/Users/rrsingh/git/go/src/github.com/linkedin/burrow/log
logconfig=/Users/rrsingh/git/go/src/github.com/linkedin/burrow/config/logging.cfg
pidfile=burrow.pid
client-id=burrow-lagchecker

[zookeeper]
hostname=localhost
port=2181
timeout=60
lock-path=/burrow/notifier

[kafka "integrationCluster"]
broker=localhost
broker-port=9092
zookeeper=localhost
zookeeper-port=2181
zookeeper-path=/


[tickers]
broker-offsets=60

[lagcheck]
intervals=10
expire-group=604800

[httpserver]
server=on
port=8000

Am I missing anything?

cannot find package "github.com/golang/snappy/snappy"

I am trying to install Burrow, and gpm install gives me the following error:

Getting package github.com/samuel/go-zookeeper/zk
Getting package github.com/Shopify/sarama
Getting package github.com/cihub/seelog
Getting package code.google.com/p/gcfg
Getting package github.com/pborman/uuid
Setting github.com/samuel/go-zookeeper/zk to version ad552be7b78b762b4a8040ffc5518bdaf5b7225d
Setting github.com/Shopify/sarama to version 2ca3f4f9705b8391a9a1fe3e6ead0d57313108d9
Setting github.com/cihub/seelog to version 92dc4b8b540607b8187cc2f95cac200211dcd745
Setting code.google.com/p/gcfg to version c2d3050044d05357eaf6c3547249ba57c5e235cb
Setting github.com/pborman/uuid to version ca53cad383cad2479bbba7f7a1a05797ec1386e4
Building package github.com/samuel/go-zookeeper/zk
Building package github.com/Shopify/sarama
../../Shopify/sarama/snappy.go:7:2: cannot find package "github.com/golang/snappy/snappy" in any of:
/usr/lib/go/src/github.com/golang/snappy/snappy (from $GOROOT)
/home/ceph/go/src/github.com/golang/snappy/snappy (from $GOPATH)

Issue on listing Consumers

Hi Tod,
I have some kind of issue with listing consumers.
Asking for the list of topics, topic status, and offsets works fine:
http://8.8.8.8:8000/v2/kafka/druida/topic/buck_bidding

{
    "error": false,
    "message": "broker topic offsets returned",
    "offsets": [
        331251579,
        354084379,
        349230371
    ]
}

But when I ask about consumers, the result is empty:
http://8.8.8.8:8000/v2/kafka/druida/consumer

{
    "error": false,
    "message": "consumer list returned",
    "consumers": []
}

Looking inside ZooKeeper, I can see the consumers section:

[zk: localhost:2181(CONNECTED) 0] ls /
[controller_epoch, controller, brokers, zookeeper, admin, consumers, burrow, druid-aws, config]
[zk: localhost:2181(CONNECTED) 1] ls /consumers
[druidaws2, druidaws]

Can you please guide me on doing the proper checks?

Thanks
Maurizio

Extended Response Structure

Is it possible to add the topic name and the consumer group name to the responses of the queries?
(for example, GET /v2/kafka/(cluster)/consumer/(group)/topic/(topic))
It would be very helpful 👍

Thanks in advance!

No support for deleting topics

The support for deleted topics is marginal at best. The Kafka client should refresh metadata periodically, and if it finds that a topic has been deleted, all traces of it should be removed (both from broker offsets and from any consumers that have been consuming it).

Slack http notifications

Any suggestions on what I need to add or change in the basic HTTP notifier .tmpl file in order to send notifications to Slack?

In addition, how can I send different HTTP notifications to different HTTP endpoints, as the email support allows?
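
One hedged starting point: Slack incoming webhooks accept a JSON body with a text field, so the POST template could be as small as the line below. The template fields here are guesses based on the group/cluster/severity values Burrow logs, not documented template variables.

{"text": "Burrow: group {{.Group}} in cluster {{.Cluster}} is {{.Severity}}"}

Pointing the [httpnotifier] url at the webhook URL Slack gives you, with template-post set to this file, would presumably cover the single-endpoint case.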

Thanks

Cannot use different ports per broker

Hi,

I'm playing with a local Kafka cluster on Docker.
The brokers run in dedicated containers, and I reach them via published ports on the Docker host,
like 192.168.59.101:9091, 192.168.59.101:9092, 192.168.59.101:9093, by setting advertised.host.name and advertised.port.

On the other hand, Burrow does not seem to allow a different port per broker.
I hope Burrow will support this!

Thanks.

Consumer group offsets and lag per partition in ok state

Hi, please add rendering of per-partition consumer group offsets and lags when the overall group status is OK.
We need this to monitor whether any message in any partition is left unconsumed.

We used bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker for this purpose before, but it can't handle offsets that aren't stored in ZooKeeper.

Unable to get /v2/kafka/(cluster)/consumer/(group)/status

The endpoint always shows "complete":false,"partitions":[],"maxlag":[] when I GET /v2/kafka/(cluster)/consumer/(group)/status.

I solved this problem by updating the addConsumerOffset function in offsets_store.go:

if offset.Timestamp-previousTimestamp < (storage.app.Config.Lagcheck.MinDistance * 1000) {
    storage.offsets[offset.Cluster].consumerLock.Unlock()
    return
}

=>

if (offset.Timestamp-previousTimestamp) > 0 && (offset.Timestamp-previousTimestamp < (storage.app.Config.Lagcheck.MinDistance * 1000)) {
    storage.offsets[offset.Cluster].consumerLock.Unlock()
    return
}

Status endpoint returns complete:false and no partitions, but consumer+topic offsets are available.

(Sorry, this feels more like a support request than an issue, but I'm stumped.)

I've had Burrow running against my cluster for a while, and all the HTTP endpoints seem to be returning good data, except for the consumer status endpoint, which is never complete.

e.g.,

GET /v2/kafka/CLUSTER/consumer/GROUP/topic/TOPIC

{
    "error": false,
    "message": "consumer group topic offsets returned",
    "offsets": [
        22,
        ...snip...

(so I can see that there are offsets available)

GET /v2/kafka/CLUSTER/consumer/GROUP/status

{
    "error": false,
    "message": "consumer group status returned",
    "status": {
        "cluster": "CLUSTER",
        "complete": false,
        "group": "GROUP",
        "partitions": [],
        "status": "OK"
    }
}

But there are no partitions for the consumer group in the status? I'm using a very simple burrow.cfg:

[general]
logdir=/opt/burrow/log
logconfig=/opt/burrow/conf/logging.cfg
pidfile=burrow.pid
client-id=burrow-lagchecker
group-blacklist=^(console-consumer-|python-kafka-consumer-).*$

[zookeeper]
hostname=zookeeper
port=2181

[kafka "CLUSTER"]
broker=broker
broker-port=9092
zookeeper=zooker
zookeeper-port=2181
zookeeper-path=/kafka

[httpserver]
server=on
port=8001

What could I be missing that would keep status from being available?

Group whitelist

Hi

The group blacklist parameter is very good, but I think that adding a group whitelist could be useful for some cases too; see the sketch below.
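
A sketch of what this might look like, mirroring the existing blacklist syntax (the group-whitelist parameter is hypothetical and does not exist yet):

[general]
group-whitelist=^(my-important-consumer-).*$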

Thanks
D.

Zookeeper version mismatch - Connection request from old client

Hi, thanks for the Burrow project :)
I followed the setup instructions in the README. When I start Burrow, it tries to connect to ZooKeeper and then stops.
My ZooKeeper version is Server environment:zookeeper.version=3.4.6-1569965, and I checked
https://github.com/samuel/go-zookeeper/blob/master/.travis.yml, which has the same version; I also checked the commit history, and it does not seem to change.

But in zookeeper log ;

[2015-06-18 16:38:44,222] WARN Connection request from old client /192.168.20.119:59725; will be dropped if server is in r-o mode (org.apache.zookeeper.server.ZooKeeperServer)
Then, I think, ZooKeeper closes the connection for the Burrow host:
[2015-06-18 16:38:44,974] INFO Processed session termination for sessionid: 0x14e015f32db0011 (org.apache.zookeeper.server.PrepRequestProcessor)
[2015-06-18 16:38:44,977] INFO Closed socket connection for client /192.168.20.119:59726 which had sessionid 0x14e015f32db0011 (org.apache.zookeeper.server.NIOServerCnxn)
[2015-06-18 16:38:44,977] INFO Processed session termination for sessionid: 0x14e015f32db0010 (org.apache.zookeeper.server.PrepRequestProcessor)

Can you recommend a solution?

Where does the Kafka cluster name come from?

Hello

My Burrow works fine, but I have a question that may be out of scope: where does the cluster name returned by the v2/kafka request come from? I can't find a parameter in the Kafka docs to set it.

Here's what I get from burrow/v2/kafka:

{
  "error":false,
  "message":"cluster list returned",
  "clusters":[
    "local"
  ]
}

Can someone tell me how to change "local"?

Best regards

Geoff
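
For reference, judging from the configuration examples elsewhere on this page, the cluster name appears to come from the [kafka "..."] section header in burrow.cfg rather than from Kafka itself, so renaming that section should rename the cluster in the API (values below are illustrative):

[kafka "local"]
broker=localhost
broker-port=9092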

How to make Burrow package (for distribution)?

I want to install Burrow in our production environment. The environment is under strict network control, so it's not possible to download source code or dependencies from GitHub or via "go get" or gpm. I therefore have to pack up all the files (including dependencies) and upload them to the machines.

I'm new to Go, so I'm wondering: is there any way to make a self-contained Burrow package that includes everything Burrow needs to run?

Thanks in advance!
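
One hedged approach, given that Go produces a single statically linked binary: build on a machine that does have network access, then ship the binary plus the config directory (the commands mirror the install steps quoted earlier on this page; paths are illustrative):

# on a machine with access to github.com
go get github.com/linkedin/burrow
cd $GOPATH/src/github.com/linkedin/burrow && gpm install && go build -o burrow .

# ship only the build artifact and configs; no Go toolchain is needed at runtime
tar czf burrow-dist.tar.gz burrow config/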

How do I view Storm cluster data?

I see that we now have support for Storm offsets [https://github.com/linkedin/Burrow/pull/34], but there doesn't seem to be an API for something like v2/storm [https://github.com/linkedin/Burrow/blob/master/http_server.go].

App crash on consumer request

Hi,
I'm testing your code. All requests work fine, but when I go to the consumer URL the app crashes.
Can you help me understand whether I've got something wrong in my setup?

This is my burrow.cfg file:

[general]
logdir=log
logconfig=config/logging.cfg
pidfile=burrow.pid
client-id=burrow-lagchecker
group-blacklist=^(console-consumer-|python-kafka-consumer-).*$

[zookeeper]
hostname=10.200.6.60
hostname=10.200.6.70
hostname=10.200.6.80
port=2181
timeout=6
lock-path=/burrow/notifier

[kafka "druid"]
broker=10.200.6.61
broker=10.200.6.63
broker-port=9092
zookeeper=10.200.6.60
zookeeper=10.200.6.70
zookeeper=10.200.6.80
zookeeper-port=2181
zookeeper-path=/
offsets-topic=__consumer_offsets

[tickers]
broker-offsets=60

[lagcheck]
intervals=10
expire-group=604800

[httpserver]
server=on
port=8000

[smtp]
server=10.20.20.93
port=25
[email protected]
template=config/default-email.tmpl

[email "[email protected]"]
group=local,critical-consumer-group
group=local,other-consumer-group
interval=60

[httpnotifier]
url=http://notification.server.example.com:9000/v1/alert
interval=60
extra=app=burrow
extra=tier=STG
template-post=config/default-http-post.tmpl
template-delete=config/default-http-delete.tmpl

Here is the dump of the error log:

Started Burrow at August 11, 2015 at 6:08am (EDT)
panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xb code=0x1 addr=0x18 pc=0x417394]

goroutine 419 [running]:
main.(*OffsetStorage).evaluateGroup(0xc20807c030, 0xc20800bbc0, 0x5, 0xc20800bbc6, 0x17, 0xc2087b39e0)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/linkedin/burrow/offsets_store.go:337 +0x1c4
created by main.func·009
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/linkedin/burrow/offsets_store.go:188 +0x4c5

goroutine 1 [chan receive]:
main.burrowMain(0x0)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/linkedin/burrow/main.go:198 +0x1e04
main.main()
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/linkedin/burrow/main.go:204 +0x27

goroutine 5 [syscall, 1 minutes]:
os/signal.loop()
        /Users/serafino.solreti/go/src/os/signal/signal_unix.go:21 +0x1f
created by os/signal.init·1
        /Users/serafino.solreti/go/src/os/signal/signal_unix.go:27 +0x35

goroutine 6 [semacquire, 1 minutes]:
sync.(*Cond).Wait(0xc20803e2c0)
        /Users/serafino.solreti/go/src/sync/cond.go:62 +0x9e
github.com/cihub/seelog.(*asyncLoopLogger).processItem(0xc2080581e0, 0x0)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/cihub/seelog/behavior_asynclooplogger.go:50 +0xc2
github.com/cihub/seelog.(*asyncLoopLogger).processQueue(0xc2080581e0)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/cihub/seelog/behavior_asynclooplogger.go:63 +0x31
created by github.com/cihub/seelog.newAsyncLoopLogger
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/cihub/seelog/behavior_asynclooplogger.go:40 +0x8e

goroutine 7 [semacquire, 1 minutes]:
sync.(*Cond).Wait(0xc20803e440)
        /Users/serafino.solreti/go/src/sync/cond.go:62 +0x9e
github.com/cihub/seelog.(*asyncLoopLogger).processItem(0xc208058300, 0x0)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/cihub/seelog/behavior_asynclooplogger.go:50 +0xc2
github.com/cihub/seelog.(*asyncLoopLogger).processQueue(0xc208058300)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/cihub/seelog/behavior_asynclooplogger.go:63 +0x31
created by github.com/cihub/seelog.newAsyncLoopLogger
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/cihub/seelog/behavior_asynclooplogger.go:40 +0x8e

goroutine 8 [semacquire]:
sync.(*Cond).Wait(0xc20803f640)
        /Users/serafino.solreti/go/src/sync/cond.go:62 +0x9e
github.com/cihub/seelog.(*asyncLoopLogger).processItem(0xc208058900, 0x0)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/cihub/seelog/behavior_asynclooplogger.go:50 +0xc2
github.com/cihub/seelog.(*asyncLoopLogger).processQueue(0xc208058900)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/cihub/seelog/behavior_asynclooplogger.go:63 +0x31
created by github.com/cihub/seelog.newAsyncLoopLogger
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/cihub/seelog/behavior_asynclooplogger.go:40 +0x8e

goroutine 9 [semacquire, 1 minutes]:
sync.(*WaitGroup).Wait(0xc208080f20)
        /Users/serafino.solreti/go/src/sync/waitgroup.go:132 +0x169
github.com/samuel/go-zookeeper/zk.(*Conn).loop(0xc20801ec30)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/samuel/go-zookeeper/zk/conn.go:227 +0x76d
github.com/samuel/go-zookeeper/zk.func·001()
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/samuel/go-zookeeper/zk/conn.go:145 +0x2c
created by github.com/samuel/go-zookeeper/zk.ConnectWithDialer
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/samuel/go-zookeeper/zk/conn.go:149 +0x44f

goroutine 10 [runnable]:
main.func·009()
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/linkedin/burrow/offsets_store.go:168 +0x517
created by main.NewOffsetStorage
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/linkedin/burrow/offsets_store.go:199 +0x519

goroutine 11 [IO wait]:
net.(*pollDesc).Wait(0xc2080ac610, 0x72, 0x0, 0x0)
        /Users/serafino.solreti/go/src/net/fd_poll_runtime.go:84 +0x47
net.(*pollDesc).WaitRead(0xc2080ac610, 0x0, 0x0)
        /Users/serafino.solreti/go/src/net/fd_poll_runtime.go:89 +0x43
net.(*netFD).accept(0xc2080ac5b0, 0x0, 0x2b7231b60ca0, 0xc208870b28)
        /Users/serafino.solreti/go/src/net/fd_unix.go:419 +0x40b
net.(*TCPListener).AcceptTCP(0xc208042080, 0x56c87e, 0x0, 0x0)
        /Users/serafino.solreti/go/src/net/tcpsock_posix.go:234 +0x4e
net/http.tcpKeepAliveListener.Accept(0xc208042080, 0x0, 0x0, 0x0, 0x0)
        /Users/serafino.solreti/go/src/net/http/server.go:1976 +0x4c
net/http.(*Server).Serve(0xc208058840, 0x2b7231b63a30, 0xc208042080, 0x0, 0x0)
        /Users/serafino.solreti/go/src/net/http/server.go:1728 +0x92
net/http.(*Server).ListenAndServe(0xc208058840, 0x0, 0x0)
        /Users/serafino.solreti/go/src/net/http/server.go:1718 +0x154
net/http.ListenAndServe(0xc20807e670, 0x5, 0x2b7231b62850, 0xc20807c240, 0x0, 0x0)
        /Users/serafino.solreti/go/src/net/http/server.go:1808 +0xba
created by main.NewHttpServer
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/linkedin/burrow/http_server.go:49 +0x4cf

goroutine 12 [semacquire, 1 minutes]:
sync.(*WaitGroup).Wait(0xc20800a0e0)
        /Users/serafino.solreti/go/src/sync/waitgroup.go:132 +0x169
github.com/samuel/go-zookeeper/zk.(*Conn).loop(0xc20801ed00)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/samuel/go-zookeeper/zk/conn.go:227 +0x76d
github.com/samuel/go-zookeeper/zk.func·001()
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/samuel/go-zookeeper/zk/conn.go:145 +0x2c
created by github.com/samuel/go-zookeeper/zk.ConnectWithDialer
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/samuel/go-zookeeper/zk/conn.go:149 +0x44f

goroutine 16 [select, 1 minutes]:
github.com/Shopify/sarama.(*client).backgroundMetadataUpdater(0xc2080b6200)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/client.go:553 +0x2f3
github.com/Shopify/sarama.*client.(github.com/Shopify/sarama.backgroundMetadataUpdater)·fm()
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/client.go:142 +0x27
github.com/Shopify/sarama.withRecover(0xc20807f520)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/utils.go:42 +0x3a
created by github.com/Shopify/sarama.NewClient
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/client.go:142 +0x8ce

goroutine 15 [chan receive, 1 minutes]:
github.com/Shopify/sarama.(*Broker).responseReceiver(0xc2080ac460)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/broker.go:340 +0xe3
github.com/Shopify/sarama.*Broker.(github.com/Shopify/sarama.responseReceiver)·fm()
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/broker.go:93 +0x27
github.com/Shopify/sarama.withRecover(0xc20807ebf0)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/utils.go:42 +0x3a
created by github.com/Shopify/sarama.func·006
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/broker.go:93 +0x610

goroutine 17 [chan receive, 1 minutes]:
main.func·002()
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/linkedin/burrow/kafka_client.go:84 +0x91
created by main.NewKafkaClient
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/linkedin/burrow/kafka_client.go:87 +0x892

goroutine 18 [chan receive, 1 minutes]:
main.func·003()
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/linkedin/burrow/kafka_client.go:90 +0x98
created by main.NewKafkaClient
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/linkedin/burrow/kafka_client.go:93 +0x8f1

goroutine 19 [chan receive, 1 minutes]:
main.func·004()
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/linkedin/burrow/kafka_client.go:98 +0x5c
created by main.NewKafkaClient
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/linkedin/burrow/kafka_client.go:101 +0x964

goroutine 25 [runnable]:
net.(*pollDesc).Wait(0xc2080adaa0, 0x72, 0x0, 0x0)
        /Users/serafino.solreti/go/src/net/fd_poll_runtime.go:84 +0x47
net.(*pollDesc).WaitRead(0xc2080adaa0, 0x0, 0x0)
        /Users/serafino.solreti/go/src/net/fd_poll_runtime.go:89 +0x43
net.(*netFD).Read(0xc2080ada40, 0xc20807fbd0, 0x8, 0x8, 0x0, 0x2b7231b60ca0, 0xc2087b15c8)
        /Users/serafino.solreti/go/src/net/fd_unix.go:242 +0x40f
net.(*conn).Read(0xc208042648, 0xc20807fbd0, 0x8, 0x8, 0x0, 0x0, 0x0)
        /Users/serafino.solreti/go/src/net/net.go:121 +0xdc
io.ReadAtLeast(0x2b7231b63bc8, 0xc208042648, 0xc20807fbd0, 0x8, 0x8, 0x8, 0x0, 0x0, 0x0)
        /Users/serafino.solreti/go/src/io/io.go:298 +0xf1
io.ReadFull(0x2b7231b63bc8, 0xc208042648, 0xc20807fbd0, 0x8, 0x8, 0x0, 0x0, 0x0)
        /Users/serafino.solreti/go/src/io/io.go:316 +0x6d
github.com/Shopify/sarama.(*Broker).responseReceiver(0xc2080ac700)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/broker.go:347 +0x29f
github.com/Shopify/sarama.*Broker.(github.com/Shopify/sarama.responseReceiver)·fm()
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/broker.go:93 +0x27
github.com/Shopify/sarama.withRecover(0xc20807fbc0)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/utils.go:42 +0x3a
created by github.com/Shopify/sarama.func·006
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/broker.go:93 +0x610

goroutine 113 [chan receive]:
github.com/Shopify/sarama.(*partitionConsumer).responseFeeder(0xc2080105b0)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/consumer.go:403 +0x60
github.com/Shopify/sarama.*partitionConsumer.(github.com/Shopify/sarama.responseFeeder)·fm()
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/consumer.go:152 +0x27
github.com/Shopify/sarama.withRecover(0xc20807a730)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/utils.go:42 +0x3a
created by github.com/Shopify/sarama.(*consumer).ConsumePartition
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/consumer.go:152 +0x533

goroutine 112 [chan receive, 1 minutes]:
github.com/Shopify/sarama.(*partitionConsumer).dispatcher(0xc2080105b0)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/consumer.go:295 +0x5a
github.com/Shopify/sarama.*partitionConsumer.(github.com/Shopify/sarama.dispatcher)·fm()
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/consumer.go:151 +0x27
github.com/Shopify/sarama.withRecover(0xc20807a720)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/utils.go:42 +0x3a
created by github.com/Shopify/sarama.(*consumer).ConsumePartition
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/consumer.go:151 +0x4d2

goroutine 154 [chan receive]:
main.func·006()
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/linkedin/burrow/kafka_client.go:130 +0xb4
created by main.NewKafkaClient
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/linkedin/burrow/kafka_client.go:133 +0x10b1

goroutine 24 [runnable]:
net.(*pollDesc).Wait(0xc2080ada30, 0x72, 0x0, 0x0)
        /Users/serafino.solreti/go/src/net/fd_poll_runtime.go:84 +0x47
net.(*pollDesc).WaitRead(0xc2080ada30, 0x0, 0x0)
        /Users/serafino.solreti/go/src/net/fd_poll_runtime.go:89 +0x43
net.(*netFD).Read(0xc2080ad9d0, 0xc20807fb58, 0x8, 0x8, 0x0, 0x2b7231b60ca0, 0xc2087b13a0)
        /Users/serafino.solreti/go/src/net/fd_unix.go:242 +0x40f
net.(*conn).Read(0xc208042640, 0xc20807fb58, 0x8, 0x8, 0x0, 0x0, 0x0)
        /Users/serafino.solreti/go/src/net/net.go:121 +0xdc
io.ReadAtLeast(0x2b7231b63bc8, 0xc208042640, 0xc20807fb58, 0x8, 0x8, 0x8, 0x0, 0x0, 0x0)
        /Users/serafino.solreti/go/src/io/io.go:298 +0xf1
io.ReadFull(0x2b7231b63bc8, 0xc208042640, 0xc20807fb58, 0x8, 0x8, 0x0, 0x0, 0x0)
        /Users/serafino.solreti/go/src/io/io.go:316 +0x6d
github.com/Shopify/sarama.(*Broker).responseReceiver(0xc2080ac770)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/broker.go:347 +0x29f
github.com/Shopify/sarama.*Broker.(github.com/Shopify/sarama.responseReceiver)·fm()
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/broker.go:93 +0x27
github.com/Shopify/sarama.withRecover(0xc20807fb60)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/utils.go:42 +0x3a
created by github.com/Shopify/sarama.func·006
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/broker.go:93 +0x610

goroutine 114 [select]:
github.com/Shopify/sarama.(*brokerConsumer).subscriptionManager(0xc2080a4280)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/consumer.go:547 +0x49e
github.com/Shopify/sarama.*brokerConsumer.(github.com/Shopify/sarama.subscriptionManager)·fm()
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/consumer.go:520 +0x27
github.com/Shopify/sarama.withRecover(0xc20807a740)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/utils.go:42 +0x3a
created by github.com/Shopify/sarama.(*consumer).newBrokerConsumer
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/consumer.go:520 +0x22b

goroutine 115 [select]:
github.com/Shopify/sarama.(*Broker).sendAndReceive(0xc2080ac700, 0x2b7231b63e70, 0xc2087b1410, 0x2b7231b63eb0, 0xc2089ea950, 0x0, 0x0)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/broker.go:286 +0x25d
github.com/Shopify/sarama.(*Broker).Fetch(0xc2080ac700, 0xc2087b1410, 0x12, 0x0, 0x0)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/broker.go:204 +0xca
github.com/Shopify/sarama.(*brokerConsumer).fetchNewMessages(0xc2080a4280, 0x0, 0x0, 0x0)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/consumer.go:646 +0x176
github.com/Shopify/sarama.(*brokerConsumer).subscriptionConsumer(0xc2080a4280)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/consumer.go:580 +0x150
github.com/Shopify/sarama.*brokerConsumer.(github.com/Shopify/sarama.subscriptionConsumer)·fm()
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/consumer.go:521 +0x27
github.com/Shopify/sarama.withRecover(0xc20807a750)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/utils.go:42 +0x3a
created by github.com/Shopify/sarama.(*consumer).newBrokerConsumer
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/consumer.go:521 +0x288

goroutine 116 [chan receive, 1 minutes]:
main.func·006()
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/linkedin/burrow/kafka_client.go:130 +0xb4
created by main.NewKafkaClient
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/linkedin/burrow/kafka_client.go:133 +0x10b1

goroutine 117 [chan receive, 1 minutes]:
main.func·007()
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/linkedin/burrow/kafka_client.go:136 +0xb4
created by main.NewKafkaClient
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/linkedin/burrow/kafka_client.go:139 +0x113a

goroutine 118 [chan receive, 1 minutes]:
github.com/Shopify/sarama.(*partitionConsumer).dispatcher(0xc208010930)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/consumer.go:295 +0x5a
github.com/Shopify/sarama.*partitionConsumer.(github.com/Shopify/sarama.dispatcher)·fm()
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/consumer.go:151 +0x27
github.com/Shopify/sarama.withRecover(0xc20807a8a0)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/utils.go:42 +0x3a
created by github.com/Shopify/sarama.(*consumer).ConsumePartition
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/consumer.go:151 +0x4d2


goroutine 119 [chan receive]:
github.com/Shopify/sarama.(*partitionConsumer).responseFeeder(0xc208010930)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/consumer.go:403 +0x60
github.com/Shopify/sarama.*partitionConsumer.(github.com/Shopify/sarama.responseFeeder)·fm()
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/consumer.go:152 +0x27
github.com/Shopify/sarama.withRecover(0xc20807a8b0)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/utils.go:42 +0x3a
created by github.com/Shopify/sarama.(*consumer).ConsumePartition
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/consumer.go:152 +0x533

goroutine 120 [select]:
github.com/Shopify/sarama.(*brokerConsumer).subscriptionManager(0xc2080a43c0)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/consumer.go:547 +0x49e
github.com/Shopify/sarama.*brokerConsumer.(github.com/Shopify/sarama.subscriptionManager)·fm()
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/consumer.go:520 +0x27
github.com/Shopify/sarama.withRecover(0xc20807a8c0)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/utils.go:42 +0x3a
created by github.com/Shopify/sarama.(*consumer).newBrokerConsumer
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/consumer.go:520 +0x22b

goroutine 121 [select]:
github.com/Shopify/sarama.(*Broker).sendAndReceive(0xc2080ac770, 0x2b7231b63e70, 0xc2087b11e0, 0x2b7231b63eb0, 0xc2089ea948, 0x0, 0x0)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/broker.go:286 +0x25d
github.com/Shopify/sarama.(*Broker).Fetch(0xc2080ac770, 0xc2087b11e0, 0x12, 0x0, 0x0)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/broker.go:204 +0xca
github.com/Shopify/sarama.(*brokerConsumer).fetchNewMessages(0xc2080a43c0, 0x0, 0x0, 0x0)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/consumer.go:646 +0x176
github.com/Shopify/sarama.(*brokerConsumer).subscriptionConsumer(0xc2080a43c0)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/consumer.go:580 +0x150
github.com/Shopify/sarama.*brokerConsumer.(github.com/Shopify/sarama.subscriptionConsumer)·fm()
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/consumer.go:521 +0x27
github.com/Shopify/sarama.withRecover(0xc20807a8d0)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/utils.go:42 +0x3a
created by github.com/Shopify/sarama.(*consumer).newBrokerConsumer
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/consumer.go:521 +0x288

goroutine 122 [chan receive, 1 minutes]:
main.func·006()
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/linkedin/burrow/kafka_client.go:130 +0xb4
created by main.NewKafkaClient
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/linkedin/burrow/kafka_client.go:133 +0x10b1

goroutine 123 [chan receive, 1 minutes]:
main.func·007()
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/linkedin/burrow/kafka_client.go:136 +0xb4
created by main.NewKafkaClient
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/linkedin/burrow/kafka_client.go:139 +0x113a

goroutine 124 [select]:
github.com/samuel/go-zookeeper/zk.(*Conn).sendLoop(0xc20801ec30, 0x2b7231b62960, 0xc208042650, 0xc208102420, 0x0, 0x0)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/samuel/go-zookeeper/zk/conn.go:412 +0xce9
github.com/samuel/go-zookeeper/zk.func·002()
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/samuel/go-zookeeper/zk/conn.go:212 +0x5a
created by github.com/samuel/go-zookeeper/zk.(*Conn).loop
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/samuel/go-zookeeper/zk/conn.go:215 +0x680

goroutine 125 [IO wait]:
net.(*pollDesc).Wait(0xc2080ac5a0, 0x72, 0x0, 0x0)
        /Users/serafino.solreti/go/src/net/fd_poll_runtime.go:84 +0x47
net.(*pollDesc).WaitRead(0xc2080ac5a0, 0x0, 0x0)
        /Users/serafino.solreti/go/src/net/fd_poll_runtime.go:89 +0x43
net.(*netFD).Read(0xc2080ac540, 0xc208284000, 0x4, 0x180000, 0x0, 0x2b7231b60ca0, 0xc2087b0168)
        /Users/serafino.solreti/go/src/net/fd_unix.go:242 +0x40f
net.(*conn).Read(0xc208042650, 0xc208284000, 0x4, 0x180000, 0x0, 0x0, 0x0)
        /Users/serafino.solreti/go/src/net/net.go:121 +0xdc
io.ReadAtLeast(0x2b7231b63bc8, 0xc208042650, 0xc208284000, 0x4, 0x180000, 0x4, 0x0, 0x0, 0x0)
        /Users/serafino.solreti/go/src/io/io.go:298 +0xf1
io.ReadFull(0x2b7231b63bc8, 0xc208042650, 0xc208284000, 0x4, 0x180000, 0x0, 0x0, 0x0)
        /Users/serafino.solreti/go/src/io/io.go:316 +0x6d
github.com/samuel/go-zookeeper/zk.(*Conn).recvLoop(0xc20801ec30, 0x2b7231b62960, 0xc208042650, 0x0, 0x0)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/samuel/go-zookeeper/zk/conn.go:476 +0x1b6
github.com/samuel/go-zookeeper/zk.func·003()
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/samuel/go-zookeeper/zk/conn.go:219 +0x5f
created by github.com/samuel/go-zookeeper/zk.(*Conn).loop
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/samuel/go-zookeeper/zk/conn.go:225 +0x75d

goroutine 126 [select]:
github.com/samuel/go-zookeeper/zk.(*Conn).sendLoop(0xc20801ed00, 0x2b7231b62960, 0xc208042660, 0xc2080b55c0, 0x0, 0x0)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/samuel/go-zookeeper/zk/conn.go:412 +0xce9
github.com/samuel/go-zookeeper/zk.func·002()
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/samuel/go-zookeeper/zk/conn.go:212 +0x5a
created by github.com/samuel/go-zookeeper/zk.(*Conn).loop
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/samuel/go-zookeeper/zk/conn.go:215 +0x680

goroutine 127 [IO wait]:
net.(*pollDesc).Wait(0xc2080ac680, 0x72, 0x0, 0x0)
        /Users/serafino.solreti/go/src/net/fd_poll_runtime.go:84 +0x47
net.(*pollDesc).WaitRead(0xc2080ac680, 0x0, 0x0)
        /Users/serafino.solreti/go/src/net/fd_poll_runtime.go:89 +0x43
net.(*netFD).Read(0xc2080ac620, 0xc208584000, 0x4, 0x180000, 0x0, 0x2b7231b60ca0, 0xc2087b0188)
        /Users/serafino.solreti/go/src/net/fd_unix.go:242 +0x40f
net.(*conn).Read(0xc208042660, 0xc208584000, 0x4, 0x180000, 0x0, 0x0, 0x0)
        /Users/serafino.solreti/go/src/net/net.go:121 +0xdc
io.ReadAtLeast(0x2b7231b63bc8, 0xc208042660, 0xc208584000, 0x4, 0x180000, 0x4, 0x0, 0x0, 0x0)
        /Users/serafino.solreti/go/src/io/io.go:298 +0xf1
io.ReadFull(0x2b7231b63bc8, 0xc208042660, 0xc208584000, 0x4, 0x180000, 0x0, 0x0, 0x0)
        /Users/serafino.solreti/go/src/io/io.go:316 +0x6d
github.com/samuel/go-zookeeper/zk.(*Conn).recvLoop(0xc20801ed00, 0x2b7231b62960, 0xc208042660, 0x0, 0x0)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/samuel/go-zookeeper/zk/conn.go:476 +0x1b6
github.com/samuel/go-zookeeper/zk.func·003()
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/samuel/go-zookeeper/zk/conn.go:219 +0x5f
created by github.com/samuel/go-zookeeper/zk.(*Conn).loop
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/samuel/go-zookeeper/zk/conn.go:225 +0x75d

goroutine 128 [chan receive]:
github.com/Shopify/sarama.(*partitionConsumer).dispatcher(0xc208010bd0)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/consumer.go:295 +0x5a
github.com/Shopify/sarama.*partitionConsumer.(github.com/Shopify/sarama.dispatcher)·fm()
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/consumer.go:151 +0x27
github.com/Shopify/sarama.withRecover(0xc20807e180)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/utils.go:42 +0x3a
created by github.com/Shopify/sarama.(*consumer).ConsumePartition
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/consumer.go:151 +0x4d2

goroutine 129 [chan receive]:
github.com/Shopify/sarama.(*partitionConsumer).responseFeeder(0xc208010bd0)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/consumer.go:403 +0x60
github.com/Shopify/sarama.*partitionConsumer.(github.com/Shopify/sarama.responseFeeder)·fm()
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/consumer.go:152 +0x27
github.com/Shopify/sarama.withRecover(0xc20807e190)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/utils.go:42 +0x3a
created by github.com/Shopify/sarama.(*consumer).ConsumePartition
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/consumer.go:152 +0x533

goroutine 130 [chan receive]:
main.func·006()
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/linkedin/burrow/kafka_client.go:130 +0xb4
created by main.NewKafkaClient
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/linkedin/burrow/kafka_client.go:133 +0x10b1

goroutine 131 [chan receive]:
main.func·007()
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/linkedin/burrow/kafka_client.go:136 +0xb4
created by main.NewKafkaClient
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/linkedin/burrow/kafka_client.go:139 +0x113a

goroutine 132 [chan receive]:
github.com/Shopify/sarama.(*partitionConsumer).dispatcher(0xc2080ac310)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/consumer.go:295 +0x5a
github.com/Shopify/sarama.*partitionConsumer.(github.com/Shopify/sarama.dispatcher)·fm()
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/consumer.go:151 +0x27
github.com/Shopify/sarama.withRecover(0xc20807e490)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/utils.go:42 +0x3a
created by github.com/Shopify/sarama.(*consumer).ConsumePartition
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/consumer.go:151 +0x4d2

goroutine 133 [chan receive]:
github.com/Shopify/sarama.(*partitionConsumer).responseFeeder(0xc2080ac310)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/consumer.go:403 +0x60
github.com/Shopify/sarama.*partitionConsumer.(github.com/Shopify/sarama.responseFeeder)·fm()
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/consumer.go:152 +0x27
github.com/Shopify/sarama.withRecover(0xc20807e4a0)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/utils.go:42 +0x3a
created by github.com/Shopify/sarama.(*consumer).ConsumePartition
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/consumer.go:152 +0x533

goroutine 134 [chan receive]:
main.func·006()
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/linkedin/burrow/kafka_client.go:130 +0xb4
created by main.NewKafkaClient
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/linkedin/burrow/kafka_client.go:133 +0x10b1

goroutine 135 [chan receive]:
main.func·007()
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/linkedin/burrow/kafka_client.go:136 +0xb4
created by main.NewKafkaClient
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/linkedin/burrow/kafka_client.go:139 +0x113a

goroutine 136 [chan receive]:
github.com/Shopify/sarama.(*partitionConsumer).dispatcher(0xc2080ad2d0)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/consumer.go:295 +0x5a
github.com/Shopify/sarama.*partitionConsumer.(github.com/Shopify/sarama.dispatcher)·fm()
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/consumer.go:151 +0x27
github.com/Shopify/sarama.withRecover(0xc20807e6b0)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/Shopify/sarama/utils.go:42 +0x3a
......
......
......
goroutine 325 [select]:
main.func·001()
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/linkedin/burrow/http_notifier.go:197 +0x18e
created by main.(*HttpNotifier).Start
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/linkedin/burrow/http_notifier.go:207 +0xd5

goroutine 417 [IO wait]:
net.(*pollDesc).Wait(0xc20881fd40, 0x72, 0x0, 0x0)
        /Users/serafino.solreti/go/src/net/fd_poll_runtime.go:84 +0x47
net.(*pollDesc).WaitRead(0xc20881fd40, 0x0, 0x0)
        /Users/serafino.solreti/go/src/net/fd_poll_runtime.go:89 +0x43
net.(*netFD).Read(0xc20881fce0, 0xc208a7f000, 0x1000, 0x1000, 0x0, 0x2b7231b60ca0, 0xc208870b38)
        /Users/serafino.solreti/go/src/net/fd_unix.go:242 +0x40f
net.(*conn).Read(0xc2089ea7c0, 0xc208a7f000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
        /Users/serafino.solreti/go/src/net/net.go:121 +0xdc
net/http.(*liveSwitchReader).Read(0xc20805ab88, 0xc208a7f000, 0x1000, 0x1000, 0xc208a7cd90, 0x0, 0x0)
        /Users/serafino.solreti/go/src/net/http/server.go:214 +0xab
io.(*LimitedReader).Read(0xc20895c980, 0xc208a7f000, 0x1000, 0x1000, 0x5ab515, 0x0, 0x0)
        /Users/serafino.solreti/go/src/io/io.go:408 +0xce
bufio.(*Reader).fill(0xc208819ec0)
        /Users/serafino.solreti/go/src/bufio/bufio.go:97 +0x1ce
bufio.(*Reader).ReadSlice(0xc208819ec0, 0xc20000000a, 0x0, 0x0, 0x0, 0x0, 0x0)
        /Users/serafino.solreti/go/src/bufio/bufio.go:295 +0x257
bufio.(*Reader).ReadLine(0xc208819ec0, 0x0, 0x0, 0x0, 0xc207f86100, 0x0, 0x0)
        /Users/serafino.solreti/go/src/bufio/bufio.go:324 +0x62
net/textproto.(*Reader).readLineSlice(0xc20879e870, 0x0, 0x0, 0x0, 0x0, 0x0)
        /Users/serafino.solreti/go/src/net/textproto/reader.go:55 +0x9e
net/textproto.(*Reader).ReadLine(0xc20879e870, 0x0, 0x0, 0x0, 0x0)
        /Users/serafino.solreti/go/src/net/textproto/reader.go:36 +0x4f
net/http.ReadRequest(0xc208819ec0, 0xc2088291e0, 0x0, 0x0)
        /Users/serafino.solreti/go/src/net/http/request.go:598 +0xcb
net/http.(*conn).readRequest(0xc20805ab40, 0x0, 0x0, 0x0)
        /Users/serafino.solreti/go/src/net/http/server.go:586 +0x26f
net/http.(*conn).serve(0xc20805ab40)
        /Users/serafino.solreti/go/src/net/http/server.go:1162 +0x69e
created by net/http.(*Server).Serve
        /Users/serafino.solreti/go/src/net/http/server.go:1751 +0x35e

goroutine 418 [chan receive]:
main.(*HttpNotifier).sendEvaluationRequests(0xc208832a00)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/linkedin/burrow/http_notifier.go:83 +0x1d2
created by main.func·001
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/linkedin/burrow/http_notifier.go:202 +0x117

goroutine 420 [runnable]:
main.(*OffsetStorage).evaluateGroup(0xc20807c030, 0xc20800bd00, 0x5, 0xc20800bd06, 0x14, 0xc2087b39e0)
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/linkedin/burrow/offsets_store.go:327
created by main.func·009
        /Users/serafino.solreti/Desktop/SELF_STUDY/OTHERS/GO/workspace1/src/github.com/linkedin/burrow/offsets_store.go:188 +0x4c5

Thanks
Maurizio

Compilation errors on Windows 7 Professional with go1.5.2.windows-386

D:\Platform\Go\test-01-Burrow\Burrow>set GOPATH=D:\Platform\Go\test-01-Burrow\Burrow

D:\Platform\Go\test-01-Burrow\Burrow>set GOBIN=D:\Platform\Go\test-01-Burrow\Burrow\bin

D:\Platform\Go\test-01-Burrow\Burrow>go install

/D/Platform/Go/test-01-Burrow/Burrow

.\logger.go:54: undefined: syscall.Dup2
.\logger.go:55: undefined: syscall.Dup2
.\main.go:211: undefined: syscall.SIGSTOP

D:\Platform\Go\test-01-Burrow\Burrow>
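A hedged workaround sketch (file and function names are illustrative, not Burrow's actual layout): syscall.Dup2 and syscall.SIGSTOP are POSIX-only, so the usual fix is to split the platform-specific pieces behind build tags.

    // logger_unix.go (hedged sketch)
    // +build !windows

    package main

    import (
        "os"
        "syscall"
    )

    // redirectStderr dups the log file over stderr, which is what the
    // failing logger.go lines do with syscall.Dup2 on POSIX systems.
    func redirectStderr(f *os.File) error {
        return syscall.Dup2(int(f.Fd()), int(os.Stderr.Fd()))
    }

    // logger_windows.go (hedged sketch)
    // +build windows

    package main

    import "os"

    // Windows has no Dup2; reassigning os.Stderr is a best-effort
    // stand-in. (syscall.SIGSTOP also doesn't exist on Windows, so the
    // signal list in main.go needs a similar build-tagged split.)
    func redirectStderr(f *os.File) error {
        os.Stderr = f
        return nil
    }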

Burrow crashes if `__consumer_offsets` does not yet exist

Hi,

I'm setting up a Kafka cluster and want to monitor it with Burrow.

Just after installing Kafka, Burrow crashes with the output below:

Started Burrow at July 31, 2015 at 11:16pm (JST)
1438352170387870777 [Info] Starting Zookeeper client
1438352170387958149 [Info] Starting Offsets Storage module
1438352170388042830 [Info] Starting HTTP server
1438352170388103038 [Info] Starting Zookeeper client for cluster local
1438352170388130970 [Info] Starting Kafka client for cluster local
1438352170416477260 [Critical] Cannot start Kafka client for cluster local: kafka server: Unexpected (unknown?) server error.
Burrow failed at July 31, 2015 at 11:16pm (JST)

After creating some topics and consuming from them (which created __consumer_offsets), Burrow worked well.
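A hedged startup guard sketch (assuming the current sarama API; the broker address is a placeholder): list topics and check for the offsets topic explicitly, instead of letting a metadata request for a missing topic surface as an opaque server error.

    package main

    import (
        "fmt"

        "github.com/Shopify/sarama"
    )

    func main() {
        client, err := sarama.NewClient([]string{"localhost:9092"}, sarama.NewConfig())
        if err != nil {
            panic(err)
        }
        defer client.Close()

        // Look for the offsets topic in the full topic list rather than
        // requesting its metadata directly, which can trigger auto-creation
        // or an "Unexpected (unknown?) server error" on some broker versions.
        topics, err := client.Topics()
        if err != nil {
            panic(err)
        }
        found := false
        for _, t := range topics {
            if t == "__consumer_offsets" {
                found = true
                break
            }
        }
        fmt.Println("__consumer_offsets exists:", found)
    }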

The email notifier error message might be misleading

Hello, I updated Burrow to the latest version. It solves the slow-topics problem, but it still uses

            thispart := &PartitionStatus{
                Topic:     topic,
                Partition: int32(partition),
                Status:    StatusOK,
                Start:     firstOffset,
                End:       lastOffset,
                Rule:      0,
            }

to output an error message like this:

----------------------------------------------------------------------
Cluster:  test
Group:    b7347d1e3d1958818cf95bced0b0855f
Status:   ERROR
Complete: true
Errors:   1 partitions have problems
          STOP risk_async:8 (1451379927170, 275814, 0) -> (1451381233774, 275829, 0) 
----------------------------------------------------------------------

In this case, the email marks the error as STOP because it violates rule 4a. Maybe the error message should point out the rule it violated and show clusterMap.broker[topic][partition].Offset instead of firstOffset. By the way, the config/default-email.tmpl file seems out of date, or I am using it under the wrong conditions.
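A hedged fragment of what that could look like (violatesRule4a is hypothetical, and the constant names are illustrative, not the actual evaluator code): record the rule that fired on the PartitionStatus so the template can print it.

    // Hedged sketch: when a rule fires, record its number so the email
    // template can report which rule was violated, not just "STOP".
    if violatesRule4a(thispart) { // violatesRule4a is hypothetical
        thispart.Status = StatusStop
        thispart.Rule = 4
    }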

Cannot get consumer group list for cluster <name>: zk: node does not exist

Hello,

I have a mix of consumers that track offsets in Kafka and in Zookeeper. I successfully set up Burrow to track Kafka-stored offsets; however, I have trouble tracking offsets that are stored in Zookeeper. When I set

zookeeper-offsets=true

Burrow starts up, but the log contains the following message:

[Error] Cannot get consumer group list for cluster <my_cluster_name>: zk: node does not exist

I can retrieve Kafka-stored offsets fine, but it can't find consumer groups that are tracked in Zookeeper.

I added a log statement before this line to show the path where offsets are looked up, and it shows the correct path. I can verify in Exhibitor that the consumer group nodes exist under that path.

I think my Zookeeper is set up correctly (otherwise I wouldn't even be able to discover the Kafka cluster at all).

Any ideas on resolution or further investigation are appreciated.
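In case it helps others debug the same thing, a hedged standalone check using the same go-zookeeper client Burrow depends on (the connect string is a placeholder, and /consumers assumes the default zookeeper-path=/):

    package main

    import (
        "fmt"
        "time"

        "github.com/samuel/go-zookeeper/zk"
    )

    func main() {
        conn, _, err := zk.Connect([]string{"localhost:2181"}, 10*time.Second)
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        // Burrow reads group names from the children of the consumers
        // node; listing them directly shows whether the node is really
        // where the configured zookeeper-path points.
        groups, _, err := conn.Children("/consumers")
        if err != nil {
            fmt.Println("lookup failed:", err) // e.g. "zk: node does not exist"
            return
        }
        fmt.Println("consumer groups:", groups)
    }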

Graphing of lag over time

Hi,

Sorry for posting this as an issue - if there is a better way to get in contact with the team I can't find one on the wiki so far :)

As an engineer I love to see graphs over time for our systems, and we've been using raw offset requests for this in our apps. I was hoping that Burrow would be able to ingest consumer metrics (specifically lag, so we know whether we need to scale up our consumers) into our InfluxDB instance for this purpose, but it doesn't seem to be possible.

At the moment, from what I can tell, you can only have the HTTP notifier or the HTTP endpoints return lag when the consumers are not in an "OK" state.

Is there a reason for that, before I start working on a pull request? It'd be great to have some kind of "metrics bridge" where the notifier triggers and returns lag metrics regularly, regardless of the state of the consumer group, so lag metrics can be graphed at all times.

It's possible that I've missed the point of the project, but it does seem like it would be helpful to have continuous lag metrics in addition to alerts when consumer groups etc fall behind or go offline.
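In the meantime, a hedged sketch of the kind of "metrics bridge" described above, polling the v2 HTTP endpoint on a fixed interval (the /lag path, JSON field names, host, and port are assumptions to be checked against a real response; the InfluxDB write is left as a stub):

    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
        "time"
    )

    // lagResponse models an assumed shape of the v2 lag endpoint's JSON;
    // field names here are guesses to be verified against a real response.
    type lagResponse struct {
        Status struct {
            Cluster  string `json:"cluster"`
            Group    string `json:"group"`
            TotalLag int64  `json:"totallag"`
        } `json:"status"`
    }

    func main() {
        // Poll on a fixed interval regardless of consumer-group state, so
        // lag can be graphed continuously.
        for range time.Tick(30 * time.Second) {
            resp, err := http.Get("http://localhost:8000/v2/kafka/local/consumer/mygroup/lag")
            if err != nil {
                fmt.Println("poll failed:", err)
                continue
            }
            var lr lagResponse
            if err := json.NewDecoder(resp.Body).Decode(&lr); err == nil {
                // A real bridge would write a point to InfluxDB here.
                fmt.Printf("lag %s/%s = %d\n", lr.Status.Cluster, lr.Status.Group, lr.Status.TotalLag)
            }
            resp.Body.Close()
        }
    }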

Topic Offsets Get Stuck

Sorry, I have no idea why this happens, but we are running Burrow and after some amount of time, the offsets in the Cluster Topic Detail stop updating. The offsets in the Consumer Topic Detail appear to be up to date, but they are now greater than the offsets in the topic detail, so if we use the two numbers to calculate lag, we get a negative number.

Restarting Burrow seems to fix the problem, but it occurs multiple times per day. Is there anything we can do/try?
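For reference, a hedged sketch (not Burrow's code) of the lag arithmetic described above, with a clamp that at least avoids reporting negative lag while the broker-side offsets are stale:

    // lag clamps at zero: when the stored broker offset falls behind the
    // consumer offset (the stale-offset symptom above), report caught-up
    // rather than a negative number.
    func lag(brokerEnd, consumerOffset int64) int64 {
        if consumerOffset > brokerEnd {
            return 0
        }
        return brokerEnd - consumerOffset
    }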

Offset on newly created topic

What should the offset be for a new topic with no messages in it when hitting the cluster topic detail endpoint v2/kafka/local/topic/topic-name? The kafka client returns -1, but the endpoint shows 0.

Or is it the case that the endpoint returns the number of messages in the topic?
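One hedged explanation, assuming the current sarama API: an OffsetNewest request returns the log-end offset, i.e. the offset the next produced message will receive, which is 0 for an empty partition, so the endpoint's 0 is consistent with "no messages yet". That value equals the number of messages only if nothing has been deleted or compacted. A minimal check (broker address and topic name are placeholders):

    package main

    import (
        "fmt"

        "github.com/Shopify/sarama"
    )

    func main() {
        client, err := sarama.NewClient([]string{"localhost:9092"}, sarama.NewConfig())
        if err != nil {
            panic(err)
        }
        defer client.Close()

        // OffsetNewest asks for the log-end offset. Note the constant
        // sarama.OffsetNewest itself is -1, so a -1 seen in client code is
        // usually a leaked sentinel or error value, not a real offset.
        next, err := client.GetOffset("topic-name", 0, sarama.OffsetNewest)
        if err != nil {
            panic(err)
        }
        fmt.Println("log-end offset:", next)
    }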

Issue calling `go get` on repo.

Hey all,

I'm having an issue trying to go get this repo. Here's what I'm seeing.

adam@Planet-X -- burrow: (master) gpm install
>> Getting package github.com/samuel/go-zookeeper/zk
>> Getting package github.com/Shopify/sarama
>> Getting package github.com/cihub/seelog
>> Getting package code.google.com/p/gcfg
>> Getting package code.google.com/p/go-uuid
package code.google.com/p/go-uuid
    imports code.google.com/p/go-uuid
    imports code.google.com/p/go-uuid: no buildable Go source files in /Users/adam/src/go/src/code.google.com/p/go-uuid
>> Setting code.google.com/p/go-uuid to version 35bc42037350
>> Setting github.com/cihub/seelog to version 92dc4b8b540607b8187cc2f95cac200211dcd745
>> Setting code.google.com/p/gcfg to version c2d3050044d05357eaf6c3547249ba57c5e235cb
>> Setting github.com/Shopify/sarama to version e5e8ace555208299fd79a051b0910e56ce8d4cb9
>> Setting github.com/samuel/go-zookeeper/zk to version ad552be7b78b762b4a8040ffc5518bdaf5b7225d
>> Building package github.com/samuel/go-zookeeper/zk
>> Building package github.com/Shopify/sarama
../../Shopify/sarama/snappy.go:5:2: cannot find package "code.google.com/p/snappy-go/snappy" in any of:
    /usr/local/Cellar/go/1.4.2/libexec/src/code.google.com/p/snappy-go/snappy (from $GOROOT)
    /Users/adam/src/go/src/code.google.com/p/snappy-go/snappy (from $GOPATH)

Following that link leads me to https://github.com/golang/snappy, which I'm able to go get fine (as github.com/google/go-snappy/snappy), but this project seems to point back at the code.google.com version. So it seems that an upstream of a dependency here uses that?

Just doesn't start, no errors

[screenshot attached]

[general]
logdir=log
logconfig=config/logging.cfg
pidfile=burrow.pid
client-id=burrow-lagchecker
group-blacklist=^(console-consumer-|python-kafka-consumer-).*$

[zookeeper]
hostname=dockerhost
port=2181
timeout=6
lock-path=/burrow/notifier

[kafka "local"]
broker=dockerhost
broker-port=10251
zookeeper=dockerhost
zookeeper-port=2181
zookeeper-path=/
offsets-topic=__consumer_offsets

[tickers]
broker-offsets=60

[lagcheck]
intervals=10
expire-group=604800

[httpserver]
server=on
port=8000

[smtp]
server=mailserver.example.com
port=25
[email protected]
template=config/default-email.tmpl

[email "[email protected]"]
group=local,critical-consumer-group
group=local,other-consumer-group
interval=60

[httpnotifier]
url=http://notification.server.example.com:9000/v1/alert
interval=60
extra=app=burrow
extra=tier=STG
template-post=config/default-http-post.tmpl
template-delete=config/default-http-delete.tmpl

Option for on-request debugging of a consumer group

While debugging a problem with ZK offset checking, I found it was necessary to have some in-depth debugging of a particular consumer group so I could watch it. I added debug logging for offset commits to the code, but this can be overwhelming to turn on for a busy instance. It would be much nicer to turn on debug logging for a particular group in a particular cluster with a request to the API, and disable it again when done.

This shouldn't be too difficult. We just need to maintain a list of (cluster, group) tuples to emit debug logging for, similar to the group blacklist. It could even be just group (over all clusters) to simplify things. Probably makes sense to add a common debug logging method that encapsulates this logic.
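A hedged sketch of that structure (names are illustrative, not Burrow's actual code):

    package notifier

    import (
        "log"
        "sync"
    )

    // debugList holds the (cluster, group) tuples that currently have
    // debug logging enabled, analogous to the group blacklist.
    type debugList struct {
        mu     sync.RWMutex
        groups map[string]map[string]bool // cluster -> group -> enabled
    }

    // Set enables or disables debug logging for a tuple, e.g. in response
    // to an API request, and disables it again when done.
    func (d *debugList) Set(cluster, group string, on bool) {
        d.mu.Lock()
        defer d.mu.Unlock()
        if d.groups == nil {
            d.groups = make(map[string]map[string]bool)
        }
        if d.groups[cluster] == nil {
            d.groups[cluster] = make(map[string]bool)
        }
        d.groups[cluster][group] = on
    }

    // Debugf is the common debug logging method: it emits only when the
    // tuple is enabled, so a busy instance stays quiet by default.
    func (d *debugList) Debugf(cluster, group, format string, args ...interface{}) {
        d.mu.RLock()
        enabled := d.groups[cluster][group]
        d.mu.RUnlock()
        if enabled {
            log.Printf("[DEBUG] "+cluster+"/"+group+": "+format, args...)
        }
    }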

Consumer group expiration logic flawed

There's a bug in the logic to expire consumer groups that is causing 404s to be returned for valid groups when doing a status check. This is likely because the logic is overly aggressive in removing information.

Exception: replication factor: 3 larger than available brokers: 1

I got an error [kafka.admin.AdminOperationException: replication factor: 3 larger than available brokers: 1] when trying to start Burrow against a Kafka environment consisting of a single broker and ZooKeeper.

Details:

  • I started a new Kafka environment (for testing) on my laptop, consisting of 1 ZooKeeper node and 1 Kafka broker.
  • The environment does not have any topics.

When I try to start Burrow, I get the following exception:
==> /opt/zookeeper-3.4.8/logs/zookeeper.log <==
2016-02-29 15:11:18,536 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /0:0:0:0:0:0:0:1:50466
2016-02-29 15:11:18,536 [myid:] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@854] - Connection request from old client /0:0:0:0:0:0:0:1:50466; will be dropped if server is in r-o mode
2016-02-29 15:11:18,537 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@900] - Client attempting to establish new session at /0:0:0:0:0:0:0:1:50466
2016-02-29 15:11:18,537 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /0:0:0:0:0:0:0:1:50467
2016-02-29 15:11:18,537 [myid:] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@854] - Connection request from old client /0:0:0:0:0:0:0:1:50467; will be dropped if server is in r-o mode
2016-02-29 15:11:18,537 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@900] - Client attempting to establish new session at /0:0:0:0:0:0:0:1:50467
2016-02-29 15:11:18,537 [myid:] - INFO [SyncThread:0:ZooKeeperServer@645] - Established session 0x1532d22f1f30004 with negotiated timeout 10000 for client /0:0:0:0:0:0:0:1:50466
2016-02-29 15:11:18,538 [myid:] - INFO [SyncThread:0:ZooKeeperServer@645] - Established session 0x1532d22f1f30005 with negotiated timeout 10000 for client /0:0:0:0:0:0:0:1:50467
2016-02-29 15:11:18,542 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@489] - Processed session termination for sessionid: 0x1532d22f1f30005

==> /opt/kafka/logs/server.log <==
[2016-02-29 15:11:18,542] ERROR [KafkaApi-0] error when handling request Name: TopicMetadataRequest; Version: 0; CorrelationId: 1; ClientId: burrow-lagchecker; Topics: __consumer_offsets (kafka.server.KafkaApis)
kafka.admin.AdminOperationException: replication factor: 3 larger than available brokers: 1
at kafka.admin.AdminUtils$.assignReplicasToBrokers(AdminUtils.scala:70)
at kafka.admin.AdminUtils$.createTopic(AdminUtils.scala:171)
at kafka.server.KafkaApis$$anonfun$19.apply(KafkaApis.scala:513)
at kafka.server.KafkaApis$$anonfun$19.apply(KafkaApis.scala:503)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.immutable.Set$Set1.foreach(Set.scala:74)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.AbstractSet.scala$collection$SetLike$$super$map(Set.scala:47)
at scala.collection.SetLike$class.map(SetLike.scala:93)
at scala.collection.AbstractSet.map(Set.scala:47)
at kafka.server.KafkaApis.getTopicMetadata(KafkaApis.scala:503)
at kafka.server.KafkaApis.handleTopicMetadataRequest(KafkaApis.scala:542)
at kafka.server.KafkaApis.handle(KafkaApis.scala:62)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:59)
at java.lang.Thread.run(Thread.java:745)

==> /opt/zookeeper-3.4.8/logs/zookeeper.log <==
2016-02-29 15:11:18,544 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1008] - Closed socket connection for client /0:0:0:0:0:0:0:1:50467 which had sessionid 0x1532d22f1f30005
2016-02-29 15:11:18,544 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@489] - Processed session termination for sessionid: 0x1532d22f1f30004
2016-02-29 15:11:18,544 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1008] - Closed socket connection for client /0:0:0:0:0:0:0:1:50466 which had session 0x1532d22f1f30004

The root cause is the replication factor of 3 for the __consumer_offsets topic.

I have 1 broker and my default.replication.factor=1
$ cat server.properties | grep default.replication.factor
default.replication.factor=1

Is it normal that the Burrow client triggers creation of the topic with replication factor 3 on a single broker?

My burrow config:
[general]
logdir=/Users/montana/Documents/NAS-1/opt/kafka/tools/burrow/logs
logconfig=/Users/montana/Documents/NAS-1/opt/kafka/tools/burrow/config/logging.cfg
pidfile=burrow.pid
client-id=burrow-lagchecker
group-blacklist=^(console-consumer-|python-kafka-consumer-).*$

[zookeeper]
hostname=localhost
port=2181
timeout=10
lock-path=/burrow/notifier

[kafka "localhost"]
broker=localhost
broker-port=9092
zookeeper=localhost
zookeeper-port=2181
zookeeper-path=/
zookeeper-offsets=true
offsets-topic=__consumer_offsets

[tickers]
broker-offsets=60

[lagcheck]
intervals=10
expire-group=604800
min-distance=1
zookeeper-interval=60
zk-group-refresh=300

[httpserver]
server=on
port=8000

PS
I created the __consumer_offsets topic manually with replication factor 1, and Burrow started successfully.
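For what it's worth, the topic is auto-created on the broker side, not by Burrow: the stack trace above shows the metadata request triggering kafka.admin.AdminUtils.createTopic, and __consumer_offsets is created with the broker's offsets.topic.replication.factor (which defaults to 3) rather than default.replication.factor. That also explains why the manual creation in the PS works. A hedged single-broker workaround is to set the offsets-topic setting itself in server.properties before the topic is first created:

offsets.topic.replication.factor=1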
