Prerequisites

| Language | Library | Minimal tracer version | Recommended tracer version |
|----------|---------|------------------------|----------------------------|
| Java | kafka-clients (lag generation is not supported for v3.7) | 1.9.0 | 1.43.0 or later |
| Go | confluent-kafka-go | 1.56.1 | 1.66.0 or later |
| Go | Sarama | 1.56.1 | 1.66.0 or later |
| Go | kafka-go | 1.63.0 | 1.63.0 or later |
| Node.js | kafkajs | 2.39.0, 3.26.0, or 4.5.0 | 5.25.0 or later |
| Node.js | confluent-kafka-javascript | 5.52.0 | 5.52.0 or later |
| Python | confluent-kafka | 1.16.0 | 2.11.0 or later |
| Python | aiokafka | 4.1.0 | 4.1.0 or later |
| .NET | Confluent.Kafka | 2.28.0 | 2.41.0 or later |
| Ruby | Ruby Kafka | 2.23.0 | 2.23.0 or later |
| Ruby | Karafka | 2.23.0 | 2.23.0 or later |
Kafka Streams is only partially supported for Java; some latency measurements may be missed.

Supported Kafka deployments

Instrumenting your consumers and producers with Data Streams Monitoring allows you to view your topology and track your pipelines with ready-to-go metrics independently of how Kafka is deployed. Additionally, the following Kafka deployments have further integration support, providing more insights into the health of your Kafka cluster:

| Model | Integration |
|-------|-------------|
| Self-hosted | Kafka Broker & Kafka Consumer |
| Confluent Platform | Confluent Platform |
| Confluent Cloud | Confluent Cloud |
| Amazon MSK | Amazon MSK or Amazon MSK (Agent) |
| Redpanda | Not yet integrated |

Setting up Data Streams Monitoring

See setup instructions for Java, Go, Node.js, Python, .NET, or Ruby.
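As a sketch of what the per-language setup typically involves, the following shows Data Streams Monitoring being enabled for a Java service through the tracer. The agent jar path and service name are placeholders; `-Ddd.data.streams.enabled=true` (or the `DD_DATA_STREAMS_ENABLED=true` environment variable) is the standard Datadog tracer switch for this feature, but confirm the details against the setup instructions linked above:

```shell
# Sketch: enabling Data Streams Monitoring for a Java service.
# The jar path and service name are placeholders for your deployment.
java -javaagent:/path/to/dd-java-agent.jar \
     -Ddd.data.streams.enabled=true \
     -Ddd.service=my-kafka-service \
     -jar my-kafka-service.jar
```

The other supported languages follow the same pattern: install the tracer, then enable Data Streams Monitoring via the equivalent environment variable or configuration setting.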

Monitoring Kafka Pipelines

Data Streams Monitoring uses message headers to propagate context through Kafka streams. If log.message.format.version is set in the Kafka broker configuration, it must be set to 0.11.0.0 or higher. Data Streams Monitoring is not supported for versions lower than this.
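For example, on brokers where this property is set explicitly, the broker configuration (typically `server.properties`) must specify at least `0.11.0.0`, the first message format version with record header support; the exact value shown is illustrative:

```properties
# server.properties (Kafka broker configuration)
# Must be 0.11.0.0 or higher: record headers, which Data Streams
# Monitoring uses for context propagation, were introduced in 0.11.0.0.
log.message.format.version=0.11.0.0
```

If `log.message.format.version` is not set at all, the broker defaults to its own version, which satisfies this requirement on any broker new enough to run Data Streams Monitoring.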

Monitoring connectors

Confluent Cloud connectors

Data Streams Monitoring can automatically discover your Confluent Cloud connectors and visualize them within the context of your end-to-end streaming data pipeline.

Setup
  1. Install and configure the Datadog-Confluent Cloud integration.

  2. In Datadog, open the Confluent Cloud integration tile.

    [Image: The Confluent Cloud integration tile in Datadog, on the Configure tab. Under an Actions heading, a table titled "13 Resources autodiscovered" lists resources with a checkbox for each.]

    Under Actions, a list of resources populates with detected clusters and connectors. Datadog attempts to discover new connectors every time you view this integration tile.

  3. Select the resources you want to add.

  4. Click Add Resources.

  5. Navigate to Data Streams Monitoring to visualize the connectors and track connector status and throughput.

Self-hosted Kafka connectors

Requirements: dd-trace-java v1.44.0+

This feature is in Preview.

Data Streams Monitoring can collect information from your self-hosted Kafka connectors. In Datadog, these connectors are shown as services connected to Kafka topics. Datadog collects throughput to and from all Kafka topics. Datadog does not collect connector status or sinks and sources from self-hosted Kafka connectors.

Setup
  1. Ensure that the Datadog Agent is running on your Kafka Connect workers.
  2. Ensure that dd-trace-java is installed on your Kafka Connect workers.
  3. Modify your Java options to include dd-trace-java on your Kafka Connect worker nodes. For example, on Strimzi, modify STRIMZI_JAVA_OPTS to add -javaagent:/path/to/dd-java-agent.jar.
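For instance, on a Strimzi-managed Kafka Connect cluster, the option from step 3 can be wired in through the connect container's environment. The manifest below is an illustrative sketch, not a complete resource; the cluster name and agent jar path are placeholders:

```yaml
# Sketch: Strimzi KafkaConnect excerpt adding dd-trace-java via STRIMZI_JAVA_OPTS.
# The metadata name and the jar path are placeholders.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec:
  template:
    connectContainer:
      env:
        - name: STRIMZI_JAVA_OPTS
          value: "-javaagent:/path/to/dd-java-agent.jar -Ddd.data.streams.enabled=true"
```

On other deployment tooling, append the same `-javaagent` flag to whatever mechanism sets JVM options for the Kafka Connect worker process.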