“Kafka™ is used for building real-time data pipelines and streaming apps. It is horizontally scalable, fault-tolerant, wicked fast, and runs in production in thousands of companies.” - Official Kafka Site
Kafka is essentially a powerful, fast message-brokering system used to transfer payloads/messages from many applications to many applications. It is a Java-based application that exposes metrics through MBeans.
There are four main components to Kafka: the Producer, the Consumer, the Broker, and Zookeeper.
The Kafka integration uses Datadog's JMXFetch application to pull metrics, just like our other Java-based integrations such as Cassandra, Tomcat, and the generic JMX check. It pulls metrics through MBeans; the engineering team has included a list of commonly used MBeans in the kafka.yaml file. You can extend this list with any other beans you would like to collect, or with additional metrics your version of Kafka supports.
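As a sketch of what extending that list looks like, an extra bean can be added under the `conf` key of an instance in kafka.yaml. The broker MBean below is a standard Kafka one, but the metric alias is illustrative, not part of the shipped file:

```yaml
init_config:
  is_jmx: true

instances:
  - host: localhost
    port: 9999
    conf:
      # Illustrative example: collect an additional broker MBean
      - include:
          domain: kafka.server
          bean: kafka.server:type=ReplicaManager,name=UnderReplicatedPartitions
          attribute:
            Value:
              metric_type: gauge
              alias: kafka.replication.under_replicated_partitions
```

Restart the Agent after editing the file so JMXFetch picks up the new bean.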
The Kafka_consumer integration collects metrics like our standard Python-based checks, using an internal Zookeeper API. Zookeeper is an Apache application responsible for managing configuration across the cluster of nodes that make up the Kafka brokers. (As of Kafka 0.9 things are a bit different: consumers can store their offsets in Kafka itself rather than in Zookeeper; see the Troubleshooting section for more information.) This check picks up only three metrics, and these do not come from JMXFetch.
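For reference, a minimal kafka_consumer.yaml instance looks roughly like the following; the consumer group, topic name, and partition list are illustrative placeholders:

```yaml
instances:
  - kafka_connect_str: localhost:9092   # Kafka broker connection string
    zk_connect_str: localhost:2181      # Zookeeper connection string
    consumer_groups:
      my_consumer_group:                # illustrative consumer group name
        my_topic: [0, 1]                # topic and the partitions to monitor
```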
There are a few common issues you may face with the Kafka integration. The most frequent ones are listed below.
This first troubleshooting issue only applies if you are running a Datadog Agent version older than 5.20. In older versions of Kafka, consumer offsets were stored exclusively in Zookeeper, and the initial Kafka_consumer check was written while that limitation was in place. As a result, you cannot get the kafka.consumer_lag metric if your offsets are stored in Kafka and you are using an older version of the Agent. Upgrade the Agent to the latest version to see these metrics.
The second most common issue is the following error for the Kafka Integration:
instance #kafka-localhost-<PORT_NUM> [ERROR]: 'Cannot connect to instance localhost:<PORT_NUM>. java.io.IOException: Failed to retrieve RMIServer stub
This error essentially means that the Datadog Agent is unable to connect to the Kafka instance to retrieve metrics from the exposed MBeans over the RMI protocol. It can be resolved by including the following JVM (Java Virtual Machine) arguments when starting the Kafka instance (required for the Producer, Consumer, and Broker, as they are all separate Java processes).
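The exact flags are not spelled out here, but the standard JMX remote options look like the following. The port is a placeholder (use a distinct port per process), and authentication/SSL are disabled only for illustration; secure them appropriately in production:

```shell
# Illustrative JMX options for a Kafka JVM (broker shown; Producers and
# Consumers need the same options with their own ports).
export KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=<PORT_NUM> \
  -Dcom.sun.management.jmxremote.rmi.port=<PORT_NUM> \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false \
  -Djava.rmi.server.hostname=<HOST_IP>"
bin/kafka-server-start.sh config/server.properties
```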
The next issue also affects the Kafka integration: users may not see Consumer and Producer metrics in their account. By default, we only collect broker-based metrics. Additionally, some users run custom Producer and Consumer clients that are not written in Java and/or do not expose MBeans, so enabling this collection would still yield zero metrics. If you are running Java-based Producers and Consumers, you can start pulling in these metrics by uncommenting this section of the yaml file and pointing the Agent to the proper ports:
# - host: remotehost
#   port: 9998 # Producer
#   tags:
#     kafka: producer0
#     env: stage
#     newTag: test
# - host: remotehost
#   port: 9997 # Consumer
#   tags:
#     kafka: consumer0
#     env: stage
#     newTag: test
If you are running Producers and Consumers written in other languages, this isn't an option, and you have to submit these metrics from your code another way, for instance through DogStatsD.
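As a minimal sketch of that approach, the snippet below speaks the DogStatsD wire format directly over UDP to the Agent's default port 8125, so it works from any client regardless of language bindings. The metric name and tags are illustrative, not metrics the integration itself collects:

```python
import socket

def dogstatsd_payload(metric, value, metric_type="g", tags=None):
    """Build a DogStatsD datagram: metric.name:value|type|#tag1:v1,tag2:v2"""
    payload = f"{metric}:{value}|{metric_type}"
    if tags:
        payload += "|#" + ",".join(tags)
    return payload

def send_metric(metric, value, metric_type="g", tags=None,
                host="127.0.0.1", port=8125):
    """Fire-and-forget UDP send to the local DogStatsD server."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(dogstatsd_payload(metric, value, metric_type, tags).encode(),
                (host, port))
    sock.close()

# Hypothetical producer-side gauge, tagged like the yaml example above
send_metric("kafka.producer.message_rate", 42, "g",
            tags=["kafka:producer0", "env:stage"])
```

In practice you would typically use the official `datadog` client library for your language rather than hand-rolling datagrams, but the wire format above is what every DogStatsD client emits.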
This issue is specifically for the Kafka_consumer check. If you specify a partition in your kafka_consumer.yaml file that doesn't exist in your environment, you will see the following error in the Agent's info output:
instance - #0 [Error]: ''
The solution is to specify only partitions that actually exist for your topic. This correlates to this specific line:
# my_topic: [0, 1, 4, 12]
Partition Context Limitation: the number of partition contexts collected is limited to 200. If you require more contexts, contact the Datadog support team.