Use Observability Pipelines’ Elasticsearch destination to send logs or metrics (Preview) to Elasticsearch.
Setup
Set up the Elasticsearch destination and its environment variables when you set up a pipeline. The information below is configured in the pipelines UI.
Set up the destination
Only enter the identifiers for the Elasticsearch endpoint URL, username, and password; do not enter the actual values.
Enter the identifiers for your Elasticsearch username and password. If you leave them blank, the defaults are used.
Enter the identifier for your Elasticsearch endpoint URL. If you leave it blank, the default is used.
(Optional) Enter the Elasticsearch version.
In the Mode dropdown menu, select Bulk or Data stream.
Bulk mode
Uses Elasticsearch’s Bulk API to send batched events directly into a standard index.
Choose this mode when you want direct control over index naming and lifecycle management. Data is appended to the index you specify, and you are responsible for handling rollovers, deletions, and mappings.
To configure Bulk mode:
(Optional) In the Index field, enter the name of the Elasticsearch index. You can use template syntax to dynamically route data to different indexes based on specific fields in your logs, for example logs-{{service}} or metrics-{{service}}.
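As an illustration of how template syntax can route data to different indexes, here is a minimal Python sketch of resolving `{{field}}` placeholders from an event. The `render_index` helper is hypothetical; the Worker's actual template engine may behave differently (for example, around missing fields).

```python
import re

def render_index(template: str, event: dict) -> str:
    """Resolve {{field}} placeholders in an index template from event fields.
    Illustrative only; fallback for missing fields is an assumption here."""
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(event.get(m.group(1), "unknown")),
        template,
    )

print(render_index("logs-{{service}}", {"service": "checkout"}))  # logs-checkout
```

With an event whose `service` field is `checkout`, the template `logs-{{service}}` resolves to the index name `logs-checkout`.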
Data streams mode
Uses Elasticsearch Data Streams for data storage. Data streams automatically manage backing indexes and rollovers, making them ideal for time series log data.
Choose this mode when you want Elasticsearch to manage the index lifecycle for you. Data streams ensure smooth rollovers, Index Lifecycle Management (ILM) compatibility, and optimized handling of time-based data.
To configure Data streams mode, optionally specify the data stream name and configure routing and syncing settings.
In the Type field, enter the category of data being ingested, for example logs or metrics.
In the Dataset field, specify the format or data source that describes the structure, for example apache.
In the Namespace field, enter the grouping for organizing your data streams, for example production.
Enable the Auto routing toggle to automatically route events to a data stream based on the event content.
Enable the Sync fields toggle to synchronize data stream fields with the Elasticsearch index mapping.
The UI shows a preview of the data stream name you configured. If the fields are left blank, the default data stream name is logs-generic-default for logs and metrics-generic-default for metrics. With the example inputs above, the data stream name that the Worker writes to is:
logs-apache-production for logs
metrics-apache-production for metrics
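The naming scheme above can be sketched as a small Python helper that joins the type, dataset, and namespace fields and falls back to the documented defaults when fields are blank:

```python
def data_stream_name(type_: str = "", dataset: str = "", namespace: str = "") -> str:
    """Build the <type>-<dataset>-<namespace> data stream name.
    Blank fields fall back to the defaults, yielding logs-generic-default."""
    return f"{type_ or 'logs'}-{dataset or 'generic'}-{namespace or 'default'}"

print(data_stream_name())                                   # logs-generic-default
print(data_stream_name("logs", "apache", "production"))     # logs-apache-production
print(data_stream_name("metrics", "apache", "production"))  # metrics-apache-production
```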
Optional settings
Enable TLS
Toggle the switch to Enable TLS.
If you are using Secrets Management, enter the identifier for the key pass. See Set secrets for the default used if the field is left blank.
The following certificate and key files are required for TLS:
Server Certificate Path: The path to the certificate file that has been signed by your Certificate Authority (CA) root file, in DER, PEM, or CRT (X.509) format.
CA Certificate Path: The path to the certificate file that is your Certificate Authority (CA) root file, in DER, PEM, or CRT (X.509) format.
Private Key Path: The path to the .key private key file that belongs to your server certificate, in DER or PEM (PKCS #8) format.
Notes:
The configuration data directory /var/lib/observability-pipelines-worker/config/ is automatically appended to the file paths. See Advanced Worker Configurations for more information.
The file must be readable by the observability-pipelines-worker group and user.
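One plausible reading of the note above, sketched in Python: relative certificate paths are resolved under the Worker's configuration data directory. The `resolve_tls_path` helper is hypothetical; the Worker's actual path resolution is internal to it.

```python
import os.path

# Directory named in the note above; treating it as a prefix for relative
# paths is an assumption of this sketch.
CONFIG_DATA_DIR = "/var/lib/observability-pipelines-worker/config/"

def resolve_tls_path(path: str) -> str:
    """Resolve a certificate or key path under the config data directory."""
    return path if os.path.isabs(path) else os.path.join(CONFIG_DATA_DIR, path)

print(resolve_tls_path("certs/server.pem"))
# -> /var/lib/observability-pipelines-worker/config/certs/server.pem
```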
Compression
You might want to use compression if you are sending a high volume of events to your Elasticsearch clusters.
Toggle the switch to enable Compression. Select a compression algorithm (gzip, snappy, zlib, zstd) in the dropdown menu. The default is no compression.
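To show what compression buys for a bulk payload, here is a minimal sketch using Python's standard gzip module on an NDJSON-style bulk body. The payload shape follows Elasticsearch's Bulk API convention (an action line followed by a source line per event); the Worker's actual wire format is not shown here.

```python
import gzip
import json

events = [{"message": "hello"}, {"message": "world"}]

# Bulk-style NDJSON body: one action line plus one source line per event.
body = "".join(
    json.dumps({"index": {}}) + "\n" + json.dumps(e) + "\n" for e in events
).encode()

# With gzip selected, the request body is compressed and the HTTP request
# carries Content-Encoding: gzip so Elasticsearch can decode it.
compressed = gzip.compress(body)
print(len(body), len(compressed))
```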
Buffering
Toggle the switch to enable Buffering Options. A configurable buffer on your destination ensures that intermittent latency or an outage at the destination does not create immediate backpressure, so events can continue to be ingested from your source. Disk buffers can also increase pipeline durability by writing data to disk, ensuring buffered data persists through a Worker restart. See Destination buffers for more information.
If left unconfigured, your destination uses a memory buffer with a capacity of 500 events.
To configure a buffer on your destination:
Select the buffer type you want to set (Memory or Disk).
Enter the buffer size and select the unit.
Maximum memory buffer size is 128 GB.
Maximum disk buffer size is 500 GB.
In the Behavior on full buffer dropdown menu, select whether you want to block events or drop new events when the buffer is full.
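The two full-buffer behaviors can be sketched with a toy bounded buffer in Python. This is illustrative only: "block" is modeled as rejecting the push so the caller must retry (backpressure), and "drop" discards the newest event.

```python
from collections import deque

class Buffer:
    """Toy bounded buffer illustrating 'block' vs 'drop newest' behavior."""

    def __init__(self, capacity: int, behavior: str = "block"):
        self.q = deque()
        self.capacity = capacity
        self.behavior = behavior
        self.dropped = 0

    def push(self, event) -> bool:
        if len(self.q) < self.capacity:
            self.q.append(event)
            return True
        if self.behavior == "drop":
            self.dropped += 1
            return True   # event silently discarded
        return False      # 'block': caller must retry, creating backpressure

buf = Buffer(capacity=2, behavior="drop")
for e in range(4):
    buf.push(e)
print(len(buf.q), buf.dropped)  # 2 2
```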
Advanced options
In the ID Key field, enter the name of the field used as the document ID in Elasticsearch.
In the Pipeline field, enter the name of an Elasticsearch ingest pipeline to apply to events before indexing.
Enable the Retry partial failures toggle to retry a failed bulk request when some events in a batch fail while others succeed.
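To illustrate what retrying a partial failure involves, here is a Python sketch that inspects an Elasticsearch Bulk API response (its real shape: an `errors` flag plus a per-item `status`) and collects only the items worth resending. Treating 429 and 5xx as retryable is an assumption of this sketch, not the Worker's documented policy.

```python
def failed_items(bulk_response: dict, request_items: list) -> list:
    """Return the request items whose bulk response entry reports a
    retryable error. Simplified sketch of partial-failure handling."""
    if not bulk_response.get("errors"):
        return []
    retry = []
    for item, result in zip(request_items, bulk_response["items"]):
        status = next(iter(result.values()))["status"]
        if status >= 500 or status == 429:  # retry server errors and throttling
            retry.append(item)
    return retry

resp = {"errors": True, "items": [
    {"index": {"status": 201}},   # indexed successfully
    {"index": {"status": 429}},   # throttled: retryable
    {"index": {"status": 400}},   # mapping error: not retryable
]}
print(failed_items(resp, ["a", "b", "c"]))  # ['b']
```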
Set secrets
These are the defaults used for secret identifiers and environment variables.
Note: If you enter secret identifiers and then choose to use environment variables, the environment variable is the identifier entered and prepended with DD_OP. For example, if you entered PASSWORD_1 for a password identifier, the environment variable for that password is DD_OP_PASSWORD_1.
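The identifier-to-environment-variable mapping described in the note is mechanical and can be expressed in one line of Python:

```python
def env_var_for(identifier: str) -> str:
    """Map a secret identifier to its environment variable name by
    prepending DD_OP_, per the note above."""
    return f"DD_OP_{identifier}"

print(env_var_for("PASSWORD_1"))  # DD_OP_PASSWORD_1
```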
Elasticsearch endpoint URL identifier:
The default identifier is DESTINATION_ELASTICSEARCH_ENDPOINT_URL.
Elasticsearch authentication username identifier:
The default identifier is DESTINATION_ELASTICSEARCH_USERNAME.
Elasticsearch authentication password identifier:
The default identifier is DESTINATION_ELASTICSEARCH_PASSWORD.
Elasticsearch authentication username:
The default environment variable is DD_OP_DESTINATION_ELASTICSEARCH_USERNAME.
Elasticsearch authentication password:
The default environment variable is DD_OP_DESTINATION_ELASTICSEARCH_PASSWORD.
Elasticsearch endpoint URL:
The default environment variable is DD_OP_DESTINATION_ELASTICSEARCH_ENDPOINT_URL.
How the destination works
Event batching
A batch of events is flushed when any one of these limits is reached. See event batching for more information.
Maximum Events: None
Maximum Size (MB): 10
Timeout (seconds): 1
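The flush conditions can be sketched as a small batcher in Python. This is a simplified model under the limits listed above (no maximum event count, a byte-size cap, and a timeout); the Worker's actual batching implementation is not shown here.

```python
import time

class Batcher:
    """Toy batcher: flush when the size cap or the timeout is hit.
    This destination sets no maximum event count."""

    def __init__(self, max_bytes: int = 10_000_000, timeout_s: float = 1.0):
        self.max_bytes, self.timeout_s = max_bytes, timeout_s
        self.events, self.size = [], 0
        self.started = time.monotonic()

    def add(self, payload: bytes):
        self.events.append(payload)
        self.size += len(payload)
        if (self.size >= self.max_bytes
                or time.monotonic() - self.started >= self.timeout_s):
            batch, self.events, self.size = self.events, [], 0
            self.started = time.monotonic()
            return batch   # flushed batch, ready to send
        return None        # keep accumulating

b = Batcher(max_bytes=10, timeout_s=60)
print(b.add(b"12345"))  # None: 5 bytes, under the cap
print(b.add(b"67890"))  # [b'12345', b'67890']: cap reached, flush
```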