---
title: Destinations
description: Learn about the destinations available in Observability Pipelines.
breadcrumbs: Docs > Observability Pipelines > Destinations
---

# Destinations

{% callout %}
# Important note for users on the following Datadog site: app.ddog-gov.com

{% alert level="danger" %}
This product is not supported for your selected [Datadog site](https://docs.datadoghq.com/getting_started/site).
{% /alert %}

{% /callout %}

## Overview{% #overview %}

Use the Observability Pipelines Worker to send your processed logs and metrics (metrics support is in Preview, an early access stage you can opt into before the official release) to different destinations. Most Observability Pipelines destinations send events in batches to the downstream integration; see [Event batching](#event-batching) for more information. Some destinations also have fields that support template syntax, so you can set those fields dynamically based on log attributes; see [Template syntax](#template-syntax) for more information.

Select a destination in the left navigation menu to see more information about it.

## Destinations{% #destinations %}

These are the available destinations:

{% tab title="Logs" %}

- [Amazon OpenSearch](https://docs.datadoghq.com/observability_pipelines/destinations/amazon_opensearch/)
- [Amazon S3](https://docs.datadoghq.com/observability_pipelines/destinations/amazon_s3/)
- [Amazon Security Lake](https://docs.datadoghq.com/observability_pipelines/destinations/amazon_security_lake/)
- [Azure Storage](https://docs.datadoghq.com/observability_pipelines/destinations/azure_storage/)
- [Datadog CloudPrem](https://docs.datadoghq.com/observability_pipelines/destinations/cloudprem/)
- [CrowdStrike Next-Gen SIEM](https://docs.datadoghq.com/observability_pipelines/destinations/crowdstrike_ng_siem/)
- [Datadog Logs](https://docs.datadoghq.com/observability_pipelines/destinations/datadog_logs/)
- [Elasticsearch](https://docs.datadoghq.com/observability_pipelines/destinations/elasticsearch/)
- [Google Cloud Storage](https://docs.datadoghq.com/observability_pipelines/destinations/google_cloud_storage/)
- [Google Pub/Sub](https://docs.datadoghq.com/observability_pipelines/destinations/google_pubsub/)
- [Google SecOps](https://docs.datadoghq.com/observability_pipelines/destinations/google_secops/)
- [HTTP Client](https://docs.datadoghq.com/observability_pipelines/destinations/http_client/)
- [Kafka](https://docs.datadoghq.com/observability_pipelines/destinations/kafka/)
- [Microsoft Sentinel](https://docs.datadoghq.com/observability_pipelines/destinations/microsoft_sentinel/)
- [New Relic](https://docs.datadoghq.com/observability_pipelines/destinations/new_relic/)
- [OpenSearch](https://docs.datadoghq.com/observability_pipelines/destinations/opensearch/)
- [SentinelOne](https://docs.datadoghq.com/observability_pipelines/destinations/sentinelone/)
- [Socket](https://docs.datadoghq.com/observability_pipelines/destinations/socket/)
- [Splunk HTTP Event Collector (HEC)](https://docs.datadoghq.com/observability_pipelines/destinations/splunk_hec/)
- [Sumo Logic Hosted Collector](https://docs.datadoghq.com/observability_pipelines/destinations/sumo_logic_hosted_collector/)
- [Syslog](https://docs.datadoghq.com/observability_pipelines/destinations/syslog/)

{% /tab %}

{% tab title="Metrics" %}

- [Datadog Metrics](https://docs.datadoghq.com/observability_pipelines/destinations/datadog_metrics/)

{% /tab %}

## Template syntax{% #template-syntax %}

Logs are often stored in separate indexes based on log data, such as the service or environment the logs are coming from or another log attribute. In Observability Pipelines, you can use template syntax to route your logs to different indexes based on specific log fields.

When the Observability Pipelines Worker cannot resolve the field with the template syntax, the Worker defaults to a specified behavior for that destination. For example, if you are using the template `{{application_id}}` for the Amazon S3 destination's **Prefix** field, but there isn't an `application_id` field in the log, the Worker creates a folder called `OP_UNRESOLVED_TEMPLATE_LOGS/` and publishes the logs there.

The following table lists the destinations and fields that support template syntax, and what happens when the Worker cannot resolve the field:

| Destination       | Fields that support template syntax | Behavior when the field cannot be resolved                                                                             |
| ----------------- | ----------------------------------- | ---------------------------------------------------------------------------------------------------------------------- |
| Amazon OpenSearch    | Index       | The Worker writes logs to the `datadog-op` index.                                           |
| Amazon S3            | Prefix      | The Worker creates a folder named `OP_UNRESOLVED_TEMPLATE_LOGS/` and writes the logs there. |
| Azure Storage        | Prefix      | The Worker creates a folder named `OP_UNRESOLVED_TEMPLATE_LOGS/` and writes the logs there. |
| Elasticsearch        | Index       | The Worker writes logs to the `datadog-op` index.                                           |
| Google Chronicle     | Log type    | The Worker defaults to the `DATADOG` log type.                                              |
| Google Cloud Storage | Prefix      | The Worker creates a folder named `OP_UNRESOLVED_TEMPLATE_LOGS/` and writes the logs there. |
| OpenSearch           | Index       | The Worker writes logs to the `datadog-op` index.                                           |
| Splunk HEC           | Index       | The Worker sends the logs to the default index configured in Splunk.                        |
| Splunk HEC           | Source type | The Worker defaults to the `httpevent` source type.                                         |
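
The fallback behavior described above can be sketched as follows. This is a minimal illustration of the resolution rules, not the Worker's actual implementation; the field name and fallback folder mirror the Amazon S3 example.

```python
import re

UNRESOLVED_PREFIX = "OP_UNRESOLVED_TEMPLATE_LOGS/"

def resolve_template(template: str, event: dict) -> str:
    """Replace each {{ field }} with the event's value, or fall back
    to the unresolved-logs folder if any field is missing."""
    unresolved = False

    def substitute(match):
        nonlocal unresolved
        field = match.group(1)
        if field in event:
            return str(event[field])
        unresolved = True
        return ""

    resolved = re.sub(r"\{\{\s*(\w+)\s*\}\}", substitute, template)
    return UNRESOLVED_PREFIX if unresolved else resolved

# A log with an application_id field resolves normally:
print(resolve_template("application_id={{ application_id }}/", {"application_id": "web-01"}))
# A log without it falls back to the unresolved-logs folder:
print(resolve_template("application_id={{ application_id }}/", {"service": "api"}))
```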

#### Example{% #example %}

To route logs to the Amazon S3 destination based on a log field such as `application_id`, use the event fields syntax in the **Prefix to apply to all object keys** field.

{% image
   source="https://datadog-docs.imgix.net/images/observability_pipelines/amazon_s3_prefix_20250709.28cf38d62d2615c41887a931488494d6.png?auto=format"
   alt="The Amazon S3 destination showing the prefix field using the event fields syntax /application_id={{ application_id }}/" /%}

### Syntax{% #syntax %}

#### Event fields{% #event-fields %}

Use `{{ <field_name> }}` to access individual log event fields. For example:

```
{{ application_id }}
```

#### Strftime specifiers{% #strftime-specifiers %}

Use [strftime specifiers](https://docs.rs/chrono/0.4.19/chrono/format/strftime/index.html#specifiers) for the date and time. For example:

```
year=%Y/month=%m/day=%d
```

#### Escape characters{% #escape-characters %}

Prefix a character with `\` to escape it. This example escapes the event field syntax:

```
\{{ field_name }}
```

This example escapes the strftime specifiers:

```
year=\%Y/month=\%m/day=\%d/
```

## Event batching{% #event-batching %}

Observability Pipelines destinations send events in batches to the downstream integration. A batch of events is flushed when one of the following parameters is met:

- Maximum number of events
- Maximum number of bytes
- Timeout (seconds)

For example, if a destination's parameters are:

- Maximum number of events = 2
- Maximum number of bytes = 100,000
- Timeout (seconds) = 5

And the destination receives 1 event in a 5-second window, it flushes the batch at the 5-second timeout.

If the destination receives 3 events within 2 seconds, it flushes a batch with 2 events, and then flushes a second batch with the remaining event after the 5-second timeout. If the destination receives 1 event that is more than 100,000 bytes, it immediately flushes a batch containing that single event.
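
The flush conditions above can be sketched as a simple accumulator. This is an illustration of the batching rules only, not the Worker's implementation; the limits match the example values above.

```python
class Batcher:
    """Flushes a batch when max events, max bytes, or the timeout is reached."""

    def __init__(self, max_events=2, max_bytes=100_000, timeout_s=5):
        self.max_events = max_events
        self.max_bytes = max_bytes
        self.timeout_s = timeout_s
        self.events, self.size = [], 0
        self.started = None
        self.flushed = []  # batches sent downstream

    def add(self, event: bytes, now: float):
        if self.started is None:
            self.started = now
        self.events.append(event)
        self.size += len(event)
        # Flush immediately when the event or byte limit is reached.
        if len(self.events) >= self.max_events or self.size >= self.max_bytes:
            self.flush()

    def tick(self, now: float):
        # Called periodically; flushes a partial batch once the timeout elapses.
        if self.events and now - self.started >= self.timeout_s:
            self.flush()

    def flush(self):
        self.flushed.append(self.events)
        self.events, self.size, self.started = [], 0, None

# Three events within 2 seconds: the first two flush as a full batch,
# and the third flushes alone once the 5-second timeout elapses.
b = Batcher()
b.add(b"e1", now=0.0)
b.add(b"e2", now=1.0)  # batch of 2 flushes here
b.add(b"e3", now=2.0)
b.tick(now=7.0)        # timeout flushes the remaining event
print([len(batch) for batch in b.flushed])  # [2, 1]
```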

| Destination                                 | Maximum Events | Maximum Size (MB) | Timeout (seconds) |
| ------------------------------------------- | -------------- | ----------------- | ----------------- |
| Amazon OpenSearch                           | None           | 10                | 1                 |
| Amazon S3 (Datadog Log Archives)            | None           | 100               | 900               |
| Amazon Security Lake                        | None           | 256               | 300               |
| Azure Storage (Datadog Log Archives)        | None           | 100               | 900               |
| CrowdStrike                                 | None           | 1                 | 1                 |
| Datadog CloudPrem                           | 1,000          | 4.25              | 5                 |
| Datadog Logs                                | 1,000          | 4.25              | 5                 |
| Datadog Metrics                             | 100,000        | None              | 2                 |
| Elasticsearch                               | None           | 10                | 1                 |
| Google Chronicle                            | None           | 1                 | 15                |
| Google Cloud Storage (Datadog Log Archives) | None           | 100               | 900               |
| Google Pub/Sub                              | 1,000          | 10                | 1                 |
| HTTP Client                                 | 1,000          | 1                 | 1                 |
| Kafka                                       | 10,000         | 1                 | 1                 |
| Microsoft Sentinel                          | None           | 10                | 1                 |
| New Relic                                   | 100            | 1                 | 1                 |
| OpenSearch                                  | None           | 10                | 1                 |
| SentinelOne                                 | None           | 1                 | 1                 |
| Socket*                                     | N/A            | N/A               | N/A               |
| Splunk HTTP Event Collector (HEC)           | None           | 1                 | 1                 |
| Sumo Logic Hosted Collector                 | None           | 10                | 1                 |
| Syslog*                                     | N/A            | N/A               | N/A               |

\*Destination does not batch events.
