Project author: nipuntalukdar

Project description:
Prometheus Kafka exporter for consumer offsets, consumer lags, topics
Language: Go
Repository: git://github.com/nipuntalukdar/kafka_consumer_exporter.git
Created: 2017-11-12T05:23:11Z
Project community: https://github.com/nipuntalukdar/kafka_consumer_exporter

License: MIT License

Kafka consumer stats exporter for Prometheus

Kafka consumer exporter

This exporter exposes offset details of topics, consumer offsets, consumer counts
in a group, Kafka cluster broker information, consumer lags, topic and partition details, etc.

For consumer lag and offset monitoring, it currently works only with the new high-level
consumers.
The old high-level consumer, where offsets are stored in ZooKeeper and consumer coordination happens
through ZooKeeper, has not been tested.

Installation

You need to have Go installed to build the tool. I haven't uploaded prebuilt binaries for
any platform. I will be adding a Dockerfile to build a Docker image shortly.

Run the below command:

$ go get github.com/nipuntalukdar/kafka_consumer_exporter

$ go install github.com/nipuntalukdar/kafka_consumer_exporter

Running the exporter

An example:
$ kafka_consumer_exporter -group agroup:topica,topicb -group anothergroup:topicx,topicy,topicz -topics mytopic,yourtopic

Detailed usage shown below:

  $ kafka_consumer_exporter -h
  Usage of ./kafka_consumer_exporter:
    -group value
          consumer-group and topics in the form of group1:topic1,topic2,topic3 etc
    -kafka_brokers string
          Comma-separated list of Kafka brokers (default "127.0.0.1:9092")
    -listen_address string
          http port where metrics are published (default ":10001")
    -metrics_url string
          URL where metrics are accessible (default "/metrics")
    -namespace string
          Namespace for metrics (default "kafka")
    -topics string
          Comma-separated list of kafka topics
    -with-go-metrics
          Should we import go runtime and http handler metrics
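Once the exporter is running, its metrics are served in the Prometheus text exposition format at the configured listen address. The sketch below (Python) filters such a payload down to the exporter's namespace; the sample payload and the metric name in it are purely illustrative, not the exporter's actual output.

```python
from urllib.request import urlopen

def filter_metrics(payload: str, namespace: str) -> list[str]:
    """Keep only sample lines whose metric name starts with '<namespace>_'."""
    prefix = namespace + "_"
    return [line for line in payload.splitlines()
            if line and not line.startswith("#") and line.startswith(prefix)]

# Against a live exporter you would fetch the payload like this:
# payload = urlopen("http://localhost:10001/metrics").read().decode()
payload = """# HELP kafka_consumer_lag hypothetical lag metric
kafka_consumer_lag{group="agroup",topic="topica"} 42
go_goroutines 8
"""
print(filter_metrics(payload, "kafka"))
# ['kafka_consumer_lag{group="agroup",topic="topica"} 42']
```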

How to Create Grafana Dashboard

Clone this repository.
Go to directory dashboardgen under the repository directory.
Update input.json with the topics and consumer groups you want to monitor. For example, suppose you want to monitor group “SomeGroup”, which consumes from topics TopicA, TopicB, and TopicC; another consumer group “SecretGroup”, which consumes from topics SecretA and SecretB; and the extra topics ExtraTopicA and ExtraTopicB. Then update input.json as shown below:

  {
    "consumergroups": {
      "SomeGroup": [
        "TopicA",
        "TopicB",
        "TopicC"
      ],
      "SecretGroup": [
        "SecretA",
        "SecretB"
      ]
    },
    "topics": [
      "TopicA",
      "TopicB",
      "TopicC",
      "SecretA",
      "SecretB",
      "ExtraTopicA",
      "ExtraTopicB"
    ]
  }
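Since the dashboard generator is already driven by Python, the same input.json can be written programmatically. A minimal sketch, mirroring the example above (the filename and keys come from that example; everything else is an assumption):

```python
import json

# Groups/topics taken from the example input.json above.
config = {
    "consumergroups": {
        "SomeGroup": ["TopicA", "TopicB", "TopicC"],
        "SecretGroup": ["SecretA", "SecretB"],
    },
    "topics": [
        "TopicA", "TopicB", "TopicC",
        "SecretA", "SecretB",
        "ExtraTopicA", "ExtraTopicB",
    ],
}

# Write it where kafkadashboard.py expects to find it.
with open("input.json", "w") as f:
    json.dump(config, f, indent=2)
```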

The dashboard needs an environment label (so that we can cater for multiple Kafka clusters). A Prometheus target may look like the one shown below:

  - job_name: 'KafkaTopicAndConsumer'
    honor_labels: true
    static_configs:
      - targets: ['localhost:10001']
        labels:
          env: 'myenv'

Now issue the below command (from the dashboardgen directory). We assume the environment label given to the Prometheus target is myenv.

$ python kafkadashboard.py myenv > kafkadashboard.json

kafkadashboard.json can now be imported into Grafana.
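Before importing, it can be worth confirming that the generated file is valid JSON, since Grafana's import dialog rejects malformed files. A tiny check with an illustrative sample payload; the real output of kafkadashboard.py will contain many more keys:

```python
import json

def dashboard_keys(text: str) -> list[str]:
    """Parse a dashboard JSON string and return its sorted top-level keys.

    Raises json.JSONDecodeError if the text is not valid JSON.
    """
    return sorted(json.loads(text))

# Illustrative sample only; not the actual kafkadashboard.py output.
sample = '{"title": "Kafka myenv", "rows": []}'
print(dashboard_keys(sample))  # ['rows', 'title']
```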

Grafana Dashboard

A sample dashboard screenshot is given below:

screenshot1
screenshot2
screenshot3