Overview
Datadog Observability Pipelines allows you to collect, process, and route logs within your own infrastructure. It gives you control over your observability data before it leaves your environment.
With out-of-the-box templates, you can build pipelines that redact sensitive data, enrich logs, filter out noisy logs, and route events to destinations like Datadog, SIEM tools, or cloud storage.
Key components
Observability Pipelines Worker
The Observability Pipelines Worker runs within your infrastructure to aggregate, process, and route logs.
Datadog recommends updating the Observability Pipelines Worker (OPW) with every minor and patch release, or at a minimum monthly. Upgrading to a major OPW version and keeping it updated is the only supported way to get the latest OPW functionality, fixes, and security updates. See Upgrade the Worker for instructions on updating to the latest Worker version.
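As a minimal sketch of what an upgrade can look like for a Docker-based deployment: the image name matches the one Datadog publishes, while the `opw` container name and `latest` tag are illustrative assumptions (pinning a specific release tag is generally safer).

```shell
# Minimal upgrade sketch for a Docker-based Worker. Pull the image at
# the version you want to run, then recreate the container so it
# starts on the updated image.
docker pull datadog/observability-pipelines-worker:latest
docker stop opw && docker rm opw
# Start a new container with your existing environment and flags;
# the full run command is sketched under "Get started" below.
```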
Observability Pipelines UI
The Observability Pipelines UI provides a centralized control plane where you can:
- Build and edit pipelines with guided templates.
- Deploy and manage Workers.
- Enable monitors to track pipeline health.
Get started
- Navigate to Observability Pipelines.
- Select a template based on your use case.
- Set up your pipeline:
  - Choose a log source.
  - Configure processors.
  - Add one or more destinations.
- Install the Worker in your environment (see the install sketch after this list).
- Enable monitors for real-time observability into your pipeline health.
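For the install step, here is a minimal sketch of running the Worker in Docker, based on the image name, `run` subcommand, and environment variables Datadog documents for the Worker; the placeholder values, container name, and exposed port are assumptions to adapt to your pipeline's source configuration.

```shell
# Minimal sketch: run the Worker in Docker. Replace the placeholders
# with your Datadog API key, the pipeline ID shown in the
# Observability Pipelines UI, and your Datadog site
# (for example, datadoghq.com).
docker run -d --name opw \
  -e DD_API_KEY=<YOUR_API_KEY> \
  -e DD_OP_PIPELINE_ID=<YOUR_PIPELINE_ID> \
  -e DD_SITE=<YOUR_DATADOG_SITE> \
  -p 8282:8282 \
  datadog/observability-pipelines-worker run
```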
See Set Up Pipelines for detailed instructions.
Common use cases and templates
Observability Pipelines includes prebuilt templates for common log routing and transformation workflows. You can fully customize or combine them to meet your needs.
| Template | Description |
|---|---|
| Log Volume Control | Reduce indexed log volume by filtering low-value logs before they’re stored. |
| Dual Ship Logs | Send the same log stream to multiple destinations (for example, Datadog and a SIEM). |
| Archive Logs | Store raw logs in Amazon S3, Google Cloud Storage, or Azure Storage for long-term retention and rehydration. |
| Split Logs | Route logs by type (for example, security vs. application) to different tools. |
| Sensitive Data Redaction | Detect and remove personally identifiable information (PII) and secrets using built-in or custom rules. |
| Log Enrichment | Add metadata from reference tables or static mappings for more effective querying. |
| Generate Metrics | Convert high-volume logs into count or distribution metrics to reduce storage needs. |
See Explore templates for more information.
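To make the Sensitive Data Redaction template concrete, here is a conceptual illustration of the pattern-match-and-replace behavior a redaction rule performs. This is not the product's configuration format (rules are built in the UI and applied inside the Worker); it uses a generic `sed` regex for email addresses purely to demonstrate the idea.

```shell
# Conceptual illustration only: a redaction rule matches a sensitive
# pattern in each log event and replaces it with a placeholder. This
# sed command demonstrates that behavior on a sample log line.
echo 'user=jane.doe@example.com action=login' \
  | sed -E 's/[[:alnum:]._%+-]+@[[:alnum:].-]+\.[[:alpha:]]{2,}/[REDACTED_EMAIL]/g'
# => user=[REDACTED_EMAIL] action=login
```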