---
title: Elasticsearch Destination
description: Send logs or metrics from Observability Pipelines to Elasticsearch.
---

# Elasticsearch Destination

{% callout %}
##### Important note for users on the following Datadog sites: app.ddog-gov.com

{% alert level="danger" %}
This product is not supported for your selected [Datadog site](https://docs.datadoghq.com/getting_started/site).
{% /alert %}

{% /callout %}
Available for:
{% icon name="icon-logs" /%} Logs | {% icon name="icon-metrics" /%} Metrics
{% callout %}
##### Join the Preview!

Sending metrics to Observability Pipelines is in Preview. Fill out the form to request access.

[Request Access](https://www.datadoghq.com/product-preview/metrics-ingestion-and-cardinality-control-in-observability-pipelines/)
{% /callout %}

## Overview{% #overview %}

Use Observability Pipelines' Elasticsearch destination to send logs or metrics to Elasticsearch. Sending metrics is in Preview, an early access stage that you can opt into before its official release.

## Setup{% #setup %}

Set up the Elasticsearch destination and its environment variables when you [set up a pipeline](https://app.datadoghq.com/observability-pipelines). The information below is configured in the pipelines UI.

### Set up the destination{% #set-up-the-destination %}

{% alert level="danger" %}
Only enter the identifiers for the Elasticsearch endpoint URL, username, and password. Do not enter the actual values.
{% /alert %}

1. Enter the identifiers for your Elasticsearch username and password. If you leave these fields blank, the defaults are used.
1. Enter the identifier for your Elasticsearch endpoint URL. If you leave this field blank, the default is used.
1. (Optional) Enter the Elasticsearch version.
1. In the **Mode** dropdown menu, select **Bulk** or **Data stream**.
   - **Bulk** mode
     - Uses Elasticsearch's [Bulk API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-bulk) to send batched events directly into a standard index.
     - Choose this mode when you want direct control over index naming and lifecycle management. Data is appended to the index you specify, and you are responsible for handling rollovers, deletions, and mappings.
     - To configure **Bulk** mode:
       - (Optional) In the **Index** field, enter the name of the Elasticsearch index. You can use [template syntax](https://docs.datadoghq.com/observability_pipelines/destinations/#template-syntax) to dynamically route data to different indexes based on specific fields in your logs, for example `logs-{{service}}` or `metrics-{{service}}`.
   - **Data streams** mode
     - Uses [Elasticsearch Data Streams](https://www.elastic.co/docs/reference/fleet/data-streams) for data storage. Data streams automatically manage backing indexes and rollovers, making them ideal for time series log data.
     - Choose this mode when you want Elasticsearch to manage the index lifecycle for you. Data streams ensure smooth rollovers, Index Lifecycle Management (ILM) compatibility, and optimized handling of time-based data.
     - To configure **Data streams** mode, optionally specify the data stream name and configure routing and syncing settings.
       1. In the **Type** field, enter the category of data being ingested, for example `logs` or `metrics`.
       1. In the **Dataset** field, specify the format or data source that describes the structure, for example `apache`.
       1. In the **Namespace** field, enter the grouping for organizing your data streams, for example `production`.
       1. Enable the **Auto routing** toggle to automatically route events to a data stream based on the event content.
       1. Enable the **Sync fields** toggle to synchronize data stream fields with the Elasticsearch index mapping.

        - The UI shows a preview of the data stream name you configured. If the fields are left blank, the default data stream name is `logs-generic-default` for logs and `metrics-generic-default` for metrics. With the example inputs above, the data stream name that the Worker writes to is:
         - `logs-apache-production` for logs
         - `metrics-apache-production` for metrics
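
The `<type>-<dataset>-<namespace>` naming scheme above can be sketched as a small helper. This is illustrative only; the Worker composes the name internally, and the function name here is hypothetical:

```python
def data_stream_name(type_="logs", dataset="generic", namespace="default"):
    """Compose an Elasticsearch data stream name as <type>-<dataset>-<namespace>.

    The defaults reproduce the documented fallback names used when the
    Type, Dataset, and Namespace fields are left blank.
    """
    return f"{type_}-{dataset}-{namespace}"

print(data_stream_name())                                   # logs-generic-default
print(data_stream_name("logs", "apache", "production"))     # logs-apache-production
print(data_stream_name("metrics", "apache", "production"))  # metrics-apache-production
```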

#### Optional settings{% #optional-settings %}

##### Enable TLS{% #enable-tls %}

Toggle the switch to **Enable TLS**.

- If you are using Secrets Management, enter the identifier for the key pass. See [Set secrets](#set-secrets) for the default used if the field is left blank.
- The following certificate and key files are required for TLS:
  - `Server Certificate Path`: The path to the certificate file that has been signed by your Certificate Authority (CA) root file, in DER or PEM (X.509) format.
  - `CA Certificate Path`: The path to the certificate file that is your Certificate Authority (CA) root file, in DER or PEM (X.509) format.
  - `Private Key Path`: The path to the `.key` private key file that belongs to your server certificate, in DER or PEM (PKCS #8) format.
  - **Notes**:
    - The configuration data directory `/var/lib/observability-pipelines-worker/config/` is automatically appended to the file paths. See [Advanced Worker Configurations](https://docs.datadoghq.com/configuration/install_the_worker/advanced_worker_configurations/) for more information.
    - The files must be readable by the `observability-pipelines-worker` group and user.

##### Compression{% #compression %}

You might want to use compression if you are sending a high volume of events to your Elasticsearch clusters.

Toggle the switch to enable **Compression**. Select a compression algorithm (**gzip**, **snappy**, **zlib**, **zstd**) in the dropdown menu. The default is no compression.

##### Buffering{% #buffering %}

Toggle the switch to enable **Buffering Options**. A configurable buffer on your destination ensures that intermittent latency or an outage at the destination doesn't create immediate backpressure, and allows events to continue to be ingested from your source. Disk buffers can also increase pipeline durability by writing data to disk, so buffered data persists through a Worker restart. See [Destination buffers](https://docs.datadoghq.com/observability_pipelines/scaling_and_performance/buffering_and_backpressure/#destination-buffers) for more information.

- If left unconfigured, your destination uses a memory buffer with a capacity of 500 events.
- To configure a buffer on your destination:
  1. Select the buffer type you want to set (**Memory** or **Disk**).
  1. Enter the buffer size and select the unit.
     - Maximum memory buffer size is 128 GB.
     - Maximum disk buffer size is 500 GB.
  1. In the **Behavior on full buffer** dropdown menu, select whether you want to **block** events or **drop new events** when the buffer is full.

##### Advanced options{% #advanced-options %}

1. In the **ID Key** field, enter the name of the field used as the document ID in Elasticsearch.
1. In the **Pipeline** field, enter the name of an Elasticsearch ingest pipeline to apply to events before indexing.
1. Enable the **Retry partial failures** toggle to retry a failed bulk request when some events in a batch fail while others succeed.
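
To make the **ID Key** setting concrete, here is a sketch (not the Worker's code; the helper name is hypothetical) of how a per-event field can become the document `_id` in a Bulk API payload. The ingest pipeline named in the **Pipeline** field is applied by Elasticsearch before indexing, for example via the Bulk API's `pipeline` query parameter:

```python
import json

def bulk_body(events, id_key=None):
    """Assemble an NDJSON Bulk API body. When id_key is set and present
    in an event, its value becomes the Elasticsearch document _id."""
    lines = []
    for event in events:
        action = {"index": {}}
        if id_key and id_key in event:
            action["index"]["_id"] = event[id_key]
        lines.append(json.dumps(action))   # action metadata line
        lines.append(json.dumps(event))    # document source line
    return "\n".join(lines) + "\n"

body = bulk_body([{"trace_id": "abc123", "message": "hello"}], id_key="trace_id")
```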

### Set secrets{% #set-secrets %}

These are the defaults used for secret identifiers and environment variables.

**Note**: If you enter secret identifiers and then choose to use environment variables instead, the environment variable name is the identifier you entered prefixed with `DD_OP_`. For example, if you entered `PASSWORD_1` for a password identifier, the environment variable for that password is `DD_OP_PASSWORD_1`.
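
The identifier-to-environment-variable mapping described above amounts to (illustrative only; the function name is hypothetical):

```python
def secret_env_var(identifier):
    """Map a secret identifier to the environment variable name the
    Worker reads: the identifier prefixed with DD_OP_."""
    return f"DD_OP_{identifier}"

print(secret_env_var("PASSWORD_1"))  # DD_OP_PASSWORD_1
```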

{% tab title="Secrets Management" %}

- Elasticsearch endpoint URL identifier:
  - The default identifier is `DESTINATION_ELASTICSEARCH_ENDPOINT_URL`.
- Elasticsearch authentication username identifier:
  - The default identifier is `DESTINATION_ELASTICSEARCH_USERNAME`.
- Elasticsearch authentication password identifier:
  - The default identifier is `DESTINATION_ELASTICSEARCH_PASSWORD`.

{% /tab %}

{% tab title="Environment Variables" %}

- Elasticsearch authentication username:
  - The default environment variable is `DD_OP_DESTINATION_ELASTICSEARCH_USERNAME`.
- Elasticsearch authentication password:
  - The default environment variable is `DD_OP_DESTINATION_ELASTICSEARCH_PASSWORD`.
- Elasticsearch endpoint URL:
  - The default environment variable is `DD_OP_DESTINATION_ELASTICSEARCH_ENDPOINT_URL`.

{% /tab %}

## How the destination works{% #how-the-destination-works %}

### Event batching{% #event-batching %}

A batch of events is flushed when one of these limits is reached. See [event batching](https://docs.datadoghq.com/observability_pipelines/destinations/#event-batching) for more information.

| Maximum Events | Maximum Size (MB) | Timeout (seconds) |
| -------------- | ----------------- | ----------------- |
| None           | 10                | 1                 |
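
The flush condition in the table can be expressed as a simple predicate. This is a sketch, not the Worker's implementation; the exact byte accounting is internal and assumed here to be decimal megabytes:

```python
def should_flush(batch_bytes, elapsed_seconds,
                 max_bytes=10_000_000,   # 10 MB (assumed decimal; sketch only)
                 timeout_seconds=1.0):
    """A batch is flushed when either limit is reached. Maximum Events is
    None for this destination, so only size and timeout apply."""
    return batch_bytes >= max_bytes or elapsed_seconds >= timeout_seconds

print(should_flush(batch_bytes=4_000_000, elapsed_seconds=0.2))  # False
print(should_flush(batch_bytes=4_000_000, elapsed_seconds=1.0))  # True
```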
