---
title: Setup Data Streams Monitoring for Java
description: Set up Data Streams Monitoring for Java services.
breadcrumbs: >-
  Docs > Data Streams Monitoring > Setup Data Streams Monitoring > Setup Data
  Streams Monitoring for Java
---

# Setup Data Streams Monitoring for Java

{% callout %}
# Important note for users on the following Datadog sites: app.ddog-gov.com

{% alert level="danger" %}
This product is not supported for your selected [Datadog site](https://docs.datadoghq.com/getting_started/site).
{% /alert %}

{% /callout %}

### Prerequisites{% #prerequisites %}

- [Datadog Agent v7.34.0 or later](https://docs.datadoghq.com/agent)

### Supported libraries{% #supported-libraries %}

| Technology     | Library                                                                                                                        | Minimal tracer version | Recommended tracer version |
| -------------- | ------------------------------------------------------------------------------------------------------------------------------ | ---------------------- | -------------------------- |
| Kafka          | [kafka-clients](https://mvnrepository.com/artifact/org.apache.kafka/kafka-clients) (Lag generation is not supported for v3.7*) | 1.9.0                  | 1.43.0 or later            |
| RabbitMQ       | [amqp-client](https://mvnrepository.com/artifact/com.rabbitmq/amqp-client)                                                     | 1.9.0                  | 1.42.2 or later            |
| Amazon SQS     | [aws-java-sdk-sqs (v1)](https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk-sqs)                                     | 1.27.0                 | 1.42.2 or later            |
| Amazon SQS     | [sqs (v2)](https://mvnrepository.com/artifact/software.amazon.awssdk/sqs)                                                      | 1.27.0                 | 1.42.2 or later            |
| Amazon Kinesis | [Kinesis (v1)](https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk-kinesis)                                          | 1.22.0                 | 1.42.2 or later            |
| Amazon Kinesis | [Kinesis (v2)](https://mvnrepository.com/artifact/software.amazon.awssdk/kinesis)                                              | 1.22.0                 | 1.42.2 or later            |
| Amazon SNS     | [SNS (v1)](https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk-sns)                                                  | 1.31.0                 | 1.42.2 or later            |
| Amazon SNS     | [SNS (v2)](https://mvnrepository.com/artifact/software.amazon.awssdk/sns)                                                      | 1.31.0                 | 1.42.2 or later            |
| Google PubSub  | [Google Cloud Pub/Sub](https://mvnrepository.com/artifact/com.google.cloud/google-cloud-pubsub)                                | 1.25.0                 | 1.42.2 or later            |
| IBM MQ         | [IBM MQ classes for Java and JMS](https://mvnrepository.com/artifact/com.ibm.mq/com.ibm.mq.jakarta.client)                     | 1.55.0                 | 1.55.0 or later            |

\*Spring Boot 3.3.x and spring-kafka 3.2.x use kafka-clients 3.7.x, which does not support lag generation. To resolve this, [update your kafka-clients version](https://docs.spring.io/spring-kafka/reference/appendix/override-boot-dependencies.html) to 3.8.0 or newer.
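If you build with Maven, one way to do this is through the `kafka.version` property that Spring Boot's dependency management exposes. A minimal sketch; `3.8.0` is an example, use any release that supports lag generation:

```xml
<properties>
  <!-- Override Spring Boot's managed kafka-clients version (3.7.x by default
       in Spring Boot 3.3.x) with one that supports lag generation. -->
  <kafka.version>3.8.0</kafka.version>
</properties>
```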

### Installation{% #installation %}

To enable Data Streams Monitoring, set the following environment variables to `true` on services that are sending or consuming messages:

- `DD_DATA_STREAMS_ENABLED`
- `DD_TRACE_REMOVE_INTEGRATION_SERVICE_NAMES_ENABLED`

{% tab title="Environment variables" %}

```yaml
environment:
  - DD_DATA_STREAMS_ENABLED=true
  - DD_TRACE_REMOVE_INTEGRATION_SERVICE_NAMES_ENABLED=true
```

{% /tab %}

{% tab title="Command line" %}
Run the following when you start your Java application:

```shell
java -javaagent:/path/to/dd-java-agent.jar -Ddd.data.streams.enabled=true -Ddd.trace.remove.integration-service-names.enabled=true -jar path/to/your/app.jar
```

{% /tab %}

### One-Click Installation{% #one-click-installation %}

To set up Data Streams Monitoring from the Datadog UI without restarting your service, use [Configuration at Runtime](https://docs.datadoghq.com/remote_configuration). Navigate to the APM Service Page and select **Enable DSM**.

{% image
   source="https://docs.dd-static.net/images/data_streams/enable_dsm_service_catalog.2ceee0e87170254cb1402dc43ff7fdc1.png?auto=format"
   alt="Enable the Data Streams Monitoring from the Dependencies section of the APM Service Page" /%}

### Monitoring SQS pipelines{% #monitoring-sqs-pipelines %}

Data Streams Monitoring uses one [message attribute](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-message-metadata.html) to track a message's path through an SQS queue. Because Amazon SQS allows a maximum of 10 message attributes per message, every message streamed through your data pipelines must have 9 or fewer message attributes set, leaving one attribute free for Data Streams Monitoring.
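This constraint can be enforced with a small guard in producer code before sending. A minimal sketch with illustrative names (this is not a Datadog API); it only checks that a message's attribute map leaves one of the 10 SQS slots free:

```java
import java.util.Map;

// Sketch: verify an outgoing SQS message leaves room for the single message
// attribute that Data Streams Monitoring injects (SQS allows 10 in total).
public final class SqsAttributeGuard {
    public static final int SQS_MAX_ATTRIBUTES = 10;
    public static final int DSM_RESERVED_ATTRIBUTES = 1;

    private SqsAttributeGuard() {}

    // Returns true if DSM can still add its attribute to this message,
    // that is, if the message currently carries 9 or fewer attributes.
    public static boolean leavesRoomForDsm(Map<String, ?> messageAttributes) {
        return messageAttributes.size() <= SQS_MAX_ATTRIBUTES - DSM_RESERVED_ATTRIBUTES;
    }
}
```

Call `leavesRoomForDsm` on the attribute map you are about to pass to the SQS send request, and drop or consolidate attributes if it returns `false`.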

### Monitoring RabbitMQ pipelines{% #monitoring-rabbitmq-pipelines %}

The [RabbitMQ integration](https://docs.datadoghq.com/integrations/rabbitmq/?tab=host) can provide detailed monitoring and metrics of your RabbitMQ deployments. For full compatibility with Data Streams Monitoring, Datadog recommends configuring the integration as follows:

```yaml
instances:
  - prometheus_plugin:
      url: http://<HOST>:15692
      unaggregated_endpoint: detailed?family=queue_coarse_metrics&family=queue_consumer_count&family=channel_exchange_metrics&family=channel_queue_exchange_metrics&family=node_coarse_metrics
```

This ensures that all RabbitMQ graphs populate, and that you see detailed metrics for individual exchanges as well as queues.

### Monitoring SNS-to-SQS pipelines{% #monitoring-sns-to-sqs-pipelines %}

To monitor a data pipeline where Amazon SNS talks directly to Amazon SQS, you must perform the following additional configuration steps:

{% tab title="SQS v1" %}

- Set the environment variable `DD_TRACE_SQS_BODY_PROPAGATION_ENABLED` to `true`.

  For example:

  ```yaml
  environment:
    - DD_DATA_STREAMS_ENABLED=true
    - DD_TRACE_REMOVE_INTEGRATION_SERVICE_NAMES_ENABLED=true
    - DD_TRACE_SQS_BODY_PROPAGATION_ENABLED=true
  ```

- Ensure that you are using [Java tracer v1.44.0+](https://github.com/DataDog/dd-trace-java/releases).

{% /tab %}

{% tab title="SQS v2" %}
Enable [Amazon SNS raw message delivery](https://docs.aws.amazon.com/sns/latest/dg/sns-large-payload-raw-message-delivery.html).
{% /tab %}

### Monitoring Kinesis pipelines{% #monitoring-kinesis-pipelines %}

Kinesis has no message attributes in which to propagate context and track a message's full path through a stream. As a result, Data Streams Monitoring approximates end-to-end latency by summing the latency measured on each segment of a message's path, from the producing service, through the Kinesis stream, to the consuming service. Throughput metrics are computed over the same segments. The full topology of your data streams can still be visualized by instrumenting your services.
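The approximation amounts to a simple sum over per-segment latencies. A minimal sketch of the idea; the class and method names are illustrative, not part of the Datadog API:

```java
// Sketch: approximate end-to-end latency for a path with no propagated
// context (such as Kinesis) by summing the latency observed on each segment
// of the message's path (producer -> stream, stream -> consumer).
public final class KinesisLatencyApprox {
    private KinesisLatencyApprox() {}

    // segmentLatenciesMs: latency measured on each hop of the message's path.
    public static long approximateEndToEndMs(long[] segmentLatenciesMs) {
        long totalMs = 0;
        for (long segmentMs : segmentLatenciesMs) {
            totalMs += segmentMs;
        }
        return totalMs;
    }
}
```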

### Manual instrumentation{% #manual-instrumentation %}

Data Streams Monitoring propagates context through message headers. If you are using a message queue technology that is not supported by DSM, a technology without headers (such as Kinesis), or Lambdas, use [manual instrumentation to set up DSM](https://docs.datadoghq.com/data_streams/manual_instrumentation/?tab=java).

### Monitoring connectors{% #monitoring-connectors %}

#### Confluent Cloud connectors{% #confluent-cloud-connectors %}

Data Streams Monitoring can automatically discover your [Confluent Cloud](https://docs.datadoghq.com/integrations/confluent_cloud/) connectors and visualize them within the context of your end-to-end streaming data pipeline.

##### Setup{% #setup-1 %}

1. Install and configure the [Datadog-Confluent Cloud integration](https://app.datadoghq.com/integrations/confluent-cloud).

1. In Datadog, open the [Confluent Cloud integration tile](https://app.datadoghq.com/integrations/confluent-cloud).

   Under **Actions**, a list of resources populates with detected clusters and connectors. Datadog attempts to discover new connectors every time you view this integration tile.

1. Select the resources you want to add.

1. Click **Add Resources**.

1. Navigate to [Data Streams Monitoring](https://app.datadoghq.com/data-streams/) to visualize the connectors and track connector status and throughput.

#### Self-hosted Kafka connectors{% #self-hosted-kafka-connectors %}

*Requirements*: [`dd-trace-java` v1.44.0+](https://github.com/DataDog/dd-trace-java/releases/tag/v1.44.0)

{% alert level="info" %}
This feature is in Preview.
{% /alert %}

Data Streams Monitoring can collect information from your self-hosted Kafka connectors. In Datadog, these connectors appear as services connected to Kafka topics, and Datadog collects throughput to and from all Kafka topics. Datadog does not collect connector status, or sink and source information, from self-hosted Kafka connectors.

##### Setup{% #setup-2 %}

1. Ensure that the Datadog Agent is running on your Kafka Connect workers.
1. Ensure that [`dd-trace-java`](https://github.com/DataDog/dd-trace-java) is installed on your Kafka Connect workers.
1. Modify your Java options to include `dd-trace-java` on your Kafka Connect worker nodes. For example, on Strimzi, modify `STRIMZI_JAVA_OPTS` to add `-javaagent:/path/to/dd-java-agent.jar`.
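On Kubernetes with Strimzi, for example, the steps above come down to adding the agent to the Connect container's environment. A hedged sketch; the jar path assumes you have baked `dd-java-agent.jar` into your Connect image, and how you do that is deployment-specific:

```yaml
# Illustrative only: env entries for a Strimzi-managed Kafka Connect container.
env:
  - name: STRIMZI_JAVA_OPTS
    value: "-javaagent:/opt/datadog/dd-java-agent.jar"
  - name: DD_DATA_STREAMS_ENABLED
    value: "true"
  - name: DD_TRACE_REMOVE_INTEGRATION_SERVICE_NAMES_ENABLED
    value: "true"
```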

## Further reading{% #further-reading %}

- [Kafka Integration](https://docs.datadoghq.com/integrations/kafka/)
- [Software Catalog](https://docs.datadoghq.com/tracing/software_catalog/)
- [Autodiscover Confluent Cloud connectors and easily monitor performance in Data Streams Monitoring](https://www.datadoghq.com/blog/confluent-connector-dsm-autodiscovery/)
