Use the Observability Pipelines Worker to send your processed logs to different destinations.
Select and set up your destinations when you set up a pipeline. This is step 4 in the pipeline setup process.
Select a destination for more information.
Logs are often stored in separate indexes based on log data, such as the service or environment the logs come from, or another log attribute. In Observability Pipelines, you can use template syntax to route your logs to different indexes based on specific log fields.

When the Observability Pipelines Worker cannot resolve the field in the template syntax, it falls back to a default behavior for that destination. For example, if you use the template `{{application_id}}` for the Amazon S3 destination's Prefix field, but there is no `application_id` field in the log, the Worker creates a folder called `OP_UNRESOLVED_TEMPLATE_LOGS/` and publishes the logs there.
The following table lists the destinations and fields that support template syntax, and what happens when the Worker cannot resolve the field:
Destination | Fields that support template syntax | Behavior when the field cannot be resolved |
---|---|---|
Amazon OpenSearch | Index | The Worker writes logs to the `datadog-op` index. |
Amazon S3 | Prefix | The Worker creates a folder named `OP_UNRESOLVED_TEMPLATE_LOGS/` and writes the logs there. |
Azure Blob | Prefix | The Worker creates a folder named `OP_UNRESOLVED_TEMPLATE_LOGS/` and writes the logs there. |
Elasticsearch | Source type | The Worker writes logs to the `datadog-op` index. |
Google Chronicle | Log type | The Worker defaults to the `DATADOG` log type. |
Google Cloud | Prefix | The Worker creates a folder named `OP_UNRESOLVED_TEMPLATE_LOGS/` and writes the logs there. |
OpenSearch | Index | The Worker writes logs to the `datadog-op` index. |
Splunk HEC | Index, Source type | The Worker sends the logs to the default index configured in Splunk. The Worker defaults to the `httpevent` source type. |
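The fallback behavior in the table above can be sketched in a few lines. This is an illustrative sketch only, not the Worker's actual implementation; the function name, regex, and fallback handling are assumptions:

```python
import re

# The catch-all fallback used by prefix-style destinations (per the table above).
UNRESOLVED_FALLBACK = "OP_UNRESOLVED_TEMPLATE_LOGS/"

def resolve_template(template, log_event, fallback=UNRESOLVED_FALLBACK):
    """Replace {{ field }} placeholders with values from the log event.

    If any placeholder cannot be resolved, return the destination's
    fallback value instead (for example, a catch-all S3 folder).
    """
    unresolved = False

    def substitute(match):
        nonlocal unresolved
        field = match.group(1).strip()
        if field in log_event:
            return str(log_event[field])
        unresolved = True
        return ""

    result = re.sub(r"\{\{(.*?)\}\}", substitute, template)
    return fallback if unresolved else result

# A log that has application_id resolves normally:
print(resolve_template("{{ application_id }}/", {"application_id": "app123"}))
# A log without it falls back to the catch-all folder:
print(resolve_template("{{ application_id }}/", {"service": "web"}))
```

The key point the sketch captures is that a single unresolvable field sends the whole event to the destination-specific fallback rather than producing a partially resolved path.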
If you want to route logs based on the log's application ID field (for example, `application_id`) to the Amazon S3 destination, use the event fields syntax in the **Prefix to apply to all object keys** field.

- Use `{{ <field_name> }}` to access individual log event fields. For example: `{{ application_id }}`
- Use strftime specifiers for the date and time. For example: `year=%Y/month=%m/day=%d`
- Prefix a character with `\` to escape it. For example, `\{{ field_name }}` escapes the event field syntax, and `year=\%Y/month=\%m/day=\%d/` escapes the strftime specifiers.
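The strftime specifiers in a prefix expand against the event's date and time. A minimal Python sketch of how such a prefix resolves (the fixed timestamp is an example, not part of the product):

```python
from datetime import datetime, timezone

# Illustrative only: strftime expands %Y/%m/%d in a prefix template.
template = "year=%Y/month=%m/day=%d"
ts = datetime(2024, 1, 15, tzinfo=timezone.utc)  # example event timestamp
prefix = ts.strftime(template)
print(prefix)  # year=2024/month=01/day=15
```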
Observability Pipelines destinations send events in batches to the downstream integration. A batch of events is flushed when one of the following parameters is met: the maximum number of events, the maximum number of bytes, or the timeout.

For example, if a destination's parameters are:
- Maximum events: 2
- Maximum bytes: 100,000
- Timeout: 5 seconds

And the destination receives 1 event in a 5-second window, it flushes the batch at the 5-second timeout.
If the destination receives 3 events within 2 seconds, it flushes a batch with 2 events and then flushes a second batch with the remaining event after 5 seconds. If the destination receives 1 event that is more than 100,000 bytes, it flushes this batch with the 1 event.
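The three flush conditions can be sketched as a simple batcher. This is an illustrative sketch under the example parameters above, not the Worker's implementation; the class and method names are assumptions:

```python
import time

class Batcher:
    """Flush a batch when max_events, max_bytes, or the timeout is hit."""

    def __init__(self, max_events=2, max_bytes=100_000, timeout=5.0):
        self.max_events = max_events
        self.max_bytes = max_bytes
        self.timeout = timeout
        self.events = []
        self.bytes = 0
        self.started = None

    def add(self, event: bytes):
        """Add an event; return a flushed batch if a size limit was reached."""
        if self.started is None:
            self.started = time.monotonic()
        self.events.append(event)
        self.bytes += len(event)
        if len(self.events) >= self.max_events or self.bytes >= self.max_bytes:
            return self.flush()
        return None

    def poll(self):
        """Flush on timeout; called periodically by the pipeline loop."""
        if self.events and time.monotonic() - self.started >= self.timeout:
            return self.flush()
        return None

    def flush(self):
        batch, self.events, self.bytes, self.started = self.events, [], 0, None
        return batch

# With max_events=2: the first event waits, the second triggers a flush.
b = Batcher(max_events=2, max_bytes=100_000, timeout=5.0)
assert b.add(b"event-1") is None
assert b.add(b"event-2") == [b"event-1", b"event-2"]
# A single oversized event flushes immediately on the byte limit.
assert b.add(b"x" * 100_000) == [b"x" * 100_000]
```

In the 3-events-in-2-seconds case from the text, the second `add` flushes a 2-event batch, the third event starts a new batch, and `poll` flushes it once the 5-second timeout elapses.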
Destination | Maximum Events | Maximum Bytes | Timeout (seconds) |
---|---|---|---|
Amazon OpenSearch | None | 10,000,000 | 1 |
Amazon S3 (Datadog Log Archives) | None | 100,000,000 | 900 |
Azure Storage (Datadog Log Archives) | None | 100,000,000 | 900 |
Datadog Logs | 1,000 | 4,250,000 | 5 |
Elasticsearch | None | 10,000,000 | 1 |
Google Chronicle | None | 1,000,000 | 15 |
Google Cloud Storage (Datadog Log Archives) | None | 100,000,000 | 900 |
Microsoft Sentinel | None | 10,000,000 | 1 |
New Relic | 100 | 1,000,000 | 1 |
OpenSearch | None | 10,000,000 | 1 |
SentinelOne | None | 1,000,000 | 1 |
Splunk HTTP Event Collector (HEC) | None | 1,000,000 | 1 |
Sumo Logic Hosted Collector | None | 10,000,000 | 1 |
Note: The rsyslog and syslog-ng destinations do not batch events.