---
title: Setup Data Streams Monitoring for Python
description: Datadog, the leading service for cloud-scale monitoring.
breadcrumbs: >-
  Docs > Data Streams Monitoring > Setup Data Streams Monitoring > Setup Data
  Streams Monitoring for Python
---

# Setup Data Streams Monitoring for Python

{% callout %}
# Important note for users on the following Datadog sites: app.ddog-gov.com

{% alert level="danger" %}
This product is not supported for your selected [Datadog site](https://docs.datadoghq.com/getting_started/site).
{% /alert %}

{% /callout %}

### Prerequisites{% #prerequisites %}

- [Datadog Agent v7.34.0 or later](https://docs.datadoghq.com/agent)

### Supported libraries{% #supported-libraries %}

| Technology     | Library                                                      | Minimal tracer version | Recommended tracer version | Minimal Lambda Library Version                                                |
| -------------- | ------------------------------------------------------------ | ---------------------- | -------------------------- | ----------------------------------------------------------------------------- |
| Kafka          | [confluent-kafka](https://pypi.org/project/confluent-kafka/) | 1.16.0                 | 2.11.0 or later            | [112](https://github.com/DataDog/datadog-lambda-python/releases/tag/v7.112.0) |
| Kafka          | [aiokafka](https://pypi.org/project/aiokafka/)               | 4.1.0                  | 4.1.0 or later             | Not supported                                                                 |
| RabbitMQ       | [Kombu](https://pypi.org/project/kombu/)                     | 2.6.0                  | 2.6.0 or later             | [112](https://github.com/DataDog/datadog-lambda-python/releases/tag/v7.112.0) |
| Amazon SQS     | [Botocore](https://pypi.org/project/botocore/)               | 1.20.0                 | 2.8.0 or later             | [112](https://github.com/DataDog/datadog-lambda-python/releases/tag/v7.112.0) |
| Amazon Kinesis | [Botocore](https://pypi.org/project/botocore/)               | 1.20.0                 | 2.8.0 or later             | [112](https://github.com/DataDog/datadog-lambda-python/releases/tag/v7.112.0) |
| Amazon SNS     | [Botocore](https://pypi.org/project/botocore/)               | 1.20.0                 | 2.8.0 or later             | [112](https://github.com/DataDog/datadog-lambda-python/releases/tag/v7.112.0) |

### Installation{% #installation %}

Python uses auto-instrumentation to inject and extract the additional metadata Data Streams Monitoring requires to measure end-to-end latencies and map the relationships between queues and services. To enable Data Streams Monitoring, set the `DD_DATA_STREAMS_ENABLED` environment variable to `true` on services that produce messages to, or consume messages from, a supported messaging technology.

For example:

```yaml
environment:
  - DD_DATA_STREAMS_ENABLED=true
  - DD_TRACE_REMOVE_INTEGRATION_SERVICE_NAMES_ENABLED=true
```
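In Kubernetes, the same variables can be set on the container spec. A minimal sketch (the container name and image are placeholders):

```yaml
# Illustrative Kubernetes container spec; names and image are placeholders.
containers:
  - name: my-consumer
    image: my-consumer:latest
    env:
      - name: DD_DATA_STREAMS_ENABLED
        value: "true"
      - name: DD_TRACE_REMOVE_INTEGRATION_SERVICE_NAMES_ENABLED
        value: "true"
```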

### Monitoring Kafka Pipelines{% #monitoring-kafka-pipelines %}

Data Streams Monitoring uses message headers to propagate context through Kafka streams. If `log.message.format.version` is set in the Kafka broker configuration, it must be set to `0.11.0.0` or higher. Data Streams Monitoring is not supported for versions lower than this.
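If your brokers pin the message format, the setting lives in the broker's `server.properties`. A sketch (the version value is illustrative; anything at or above `0.11.0.0` works):

```properties
# server.properties (Kafka broker)
# Must be 0.11.0.0 or higher for DSM's header-based context propagation.
log.message.format.version=3.0
```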

### Monitoring SQS pipelines{% #monitoring-sqs-pipelines %}

Data Streams Monitoring uses one [message attribute](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-message-metadata.html) to track a message's path through an SQS queue. Because Amazon SQS allows a maximum of 10 message attributes per message, every message flowing through your data pipelines must use 9 or fewer attributes, leaving one free for Data Streams Monitoring.
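As a pre-send sanity check, you can verify that a message leaves at least one attribute slot free. A minimal sketch (the 10-attribute ceiling comes from SQS; the helper name is ours):

```python
# SQS allows at most 10 message attributes per message; DSM needs one of them.
SQS_MAX_MESSAGE_ATTRIBUTES = 10
DSM_RESERVED_ATTRIBUTES = 1


def has_dsm_headroom(message_attributes: dict) -> bool:
    """Return True if the message leaves an attribute slot free for DSM."""
    return len(message_attributes) <= SQS_MAX_MESSAGE_ATTRIBUTES - DSM_RESERVED_ATTRIBUTES


# 9 attributes is fine; a 10th would leave no room for DSM's attribute.
attrs = {f"attr{i}": {"DataType": "String", "StringValue": str(i)} for i in range(9)}
assert has_dsm_headroom(attrs)
attrs["attr9"] = {"DataType": "String", "StringValue": "9"}
assert not has_dsm_headroom(attrs)
```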

### Monitoring RabbitMQ pipelines{% #monitoring-rabbitmq-pipelines %}

The [RabbitMQ integration](https://docs.datadoghq.com/integrations/rabbitmq/?tab=host) can provide detailed monitoring and metrics of your RabbitMQ deployments. For full compatibility with Data Streams Monitoring, Datadog recommends configuring the integration as follows:

```yaml
instances:
  - prometheus_plugin:
      url: http://<HOST>:15692
      unaggregated_endpoint: detailed?family=queue_coarse_metrics&family=queue_consumer_count&family=channel_exchange_metrics&family=channel_queue_exchange_metrics&family=node_coarse_metrics
```

This ensures that all RabbitMQ graphs populate, and that you see detailed metrics for individual exchanges as well as queues.

### Monitoring Kinesis pipelines{% #monitoring-kinesis-pipelines %}

Kinesis has no message attributes with which to propagate context and track a message's full path through a stream. As a result, Data Streams Monitoring approximates end-to-end latency by summing the latency measured on each segment of a message's path: from the producing service, through a Kinesis stream, to a consuming service. Throughput metrics are likewise measured per segment. The full topology of your data streams can still be visualized by instrumenting your services.

### Monitoring SNS-to-SQS pipelines{% #monitoring-sns-to-sqs-pipelines %}

To monitor a data pipeline where Amazon SNS talks directly to Amazon SQS, you must enable [Amazon SNS raw message delivery](https://docs.aws.amazon.com/sns/latest/dg/sns-large-payload-raw-message-delivery.html).

### Manual instrumentation{% #manual-instrumentation %}

Data Streams Monitoring propagates context through message headers. If you are using a message queue technology that is not supported by DSM, or a technology without message headers (such as Kinesis), use [manual instrumentation to set up DSM](https://docs.datadoghq.com/data_streams/manual_instrumentation/?tab=python).
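At its core, manual instrumentation follows a carrier pattern: the producer writes DSM context into a header map attached to the message, and the consumer reads it back out. A rough sketch of that flow (the `ddtrace` calls are shown as comments so the sketch runs standalone; see the linked manual instrumentation docs for the exact API, and note the stream and field names here are placeholders):

```python
# Sketch: propagating DSM context through a message's attribute map by hand.
# With ddtrace installed, the commented calls perform the actual checkpointing.


def produce_record(stream: str, payload: bytes) -> dict:
    """Build an outgoing record, attaching a header map for DSM context."""
    headers: dict = {}
    # from ddtrace.data_streams import set_produce_checkpoint
    # set_produce_checkpoint("kinesis", stream, headers.setdefault)
    return {"Data": payload, "Attributes": headers}


def consume_record(stream: str, record: dict) -> bytes:
    """Read an incoming record, restoring DSM context from its header map."""
    headers = record["Attributes"]
    # from ddtrace.data_streams import set_consume_checkpoint
    # set_consume_checkpoint("kinesis", stream, headers.get)
    return record["Data"]


record = produce_record("orders-stream", b"order-123")
assert consume_record("orders-stream", record) == b"order-123"
```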

### Monitoring connectors{% #monitoring-connectors %}

#### Confluent Cloud connectors{% #confluent-cloud-connectors %}

Data Streams Monitoring can automatically discover your [Confluent Cloud](https://docs.datadoghq.com/integrations/confluent_cloud/) connectors and visualize them within the context of your end-to-end streaming data pipeline.

##### Setup{% #setup %}

1. Install and configure the [Datadog-Confluent Cloud integration](https://app.datadoghq.com/integrations/confluent-cloud).

1. In Datadog, open the [Confluent Cloud integration tile](https://app.datadoghq.com/integrations/confluent-cloud).

   Under **Actions**, a list of resources populates with detected clusters and connectors. Datadog attempts to discover new connectors every time you view this integration tile.

1. Select the resources you want to add.

1. Click **Add Resources**.

1. Navigate to [Data Streams Monitoring](https://app.datadoghq.com/data-streams/) to visualize the connectors and track connector status and throughput.

## Further reading{% #further-reading %}

- [Kafka Integration](https://docs.datadoghq.com/integrations/kafka/)
- [Software Catalog](https://docs.datadoghq.com/tracing/software_catalog/)
- [Autodiscover Confluent Cloud connectors and easily monitor performance in Data Streams Monitoring](https://www.datadoghq.com/blog/confluent-connector-dsm-autodiscovery/)
