(LEGACY) Observability Pipelines Documentation
Observability Pipelines is not available on the US1-FED Datadog site.
Upgrading an OP Worker from version 1.8 or earlier to version 2.0 or later breaks your existing pipelines. If you want to keep using OP Worker version 1.8 or earlier, do not upgrade the Worker. To use OP Worker 2.0 or later, you must migrate your OP Worker 1.8-or-earlier pipelines to OP Worker 2.x.
Datadog recommends updating to OP Worker version 2.0 or later. Upgrading to, and staying current with, major OP Worker versions is the only supported way to get the latest OP Worker features, fixes, and security updates.
The following documents apply to Observability Pipelines Worker 1.8 and older.
Reference: Configurations
Reference: Datadog Processing Language
Legacy Observability Pipelines
Overview
Observability Pipelines allows you to collect, process, and route logs from any source to any destination in infrastructure that you own or manage.
With Observability Pipelines, you can:
- Control your data volume before routing to manage costs.
- Route data anywhere to reduce vendor lock-in and simplify migrations.
- Transform logs by adding, parsing, enriching, and removing fields and tags.
- Redact sensitive data from your telemetry data.
The Observability Pipelines Worker is the software that runs in your infrastructure. It aggregates and centrally processes and routes your data. More specifically, the Worker can:
- Receive or pull all your observability data collected by your agents, collectors, or forwarders.
- Transform ingested data (for example: parse, filter, sample, enrich, and more).
- Route the processed data to any destination.
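As a rough illustration of the receive → transform → route flow described above, here is a minimal, hypothetical pipeline configuration sketch. It assumes the Vector-style `sources`/`transforms`/`sinks` layout used by Worker 1.x; all component names, options, and the remap expression are illustrative, not a definitive reference.

```yaml
# Hypothetical pipeline config sketch -- names and options are illustrative.
sources:
  datadog_agents:
    type: datadog_agent        # receive logs forwarded by Datadog Agents
    address: "0.0.0.0:8282"

transforms:
  parse_json:
    type: remap                # parse and enrich each event
    inputs: [datadog_agents]
    source: |
      . = merge(., object!(parse_json!(string!(.message))))

sinks:
  datadog_logs:
    type: datadog_logs         # route the processed events to Datadog
    inputs: [parse_json]
    default_api_key: "${DD_API_KEY}"
```

Each component names its `inputs`, so the pipeline forms a directed graph: you can fan one source out to several transforms, or route the same processed stream to multiple destinations.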
The Datadog UI provides a control plane for managing your Observability Pipelines Workers. Monitor your pipelines to understand their health, identify bottlenecks and latencies, fine-tune performance, validate data delivery, and investigate your largest volume contributors. You can also build or edit pipelines, whether that's routing a subset of data to a new destination or introducing a new sensitive data redaction rule, and roll out those changes to your active pipelines from the Datadog UI.
Get started
- Set up the Observability Pipelines Worker.
- Create pipelines to collect, transform, and route your data.
- Discover how to deploy Observability Pipelines at production scale.
Explore Observability Pipelines
Start getting insights into your Observability Pipelines:
Collect data from any source and route data to any destination
Collect data from any source and route it to any destination to reduce vendor lock-in and simplify migrations.
Control your data volume before it gets routed
Optimize volume and reduce the size of your observability data by sampling, filtering, deduplicating, and aggregating your logs.
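The volume controls above (sampling, filtering, deduplicating) can be sketched as transform steps in the pipeline config. This is a hypothetical fragment in the Vector-style layout assumed for Worker 1.x; the component names (`drop_debug`, `sample_info`), the upstream source name `datadog_agents`, and the option names are illustrative.

```yaml
# Hypothetical volume-control transforms -- names and options are illustrative.
transforms:
  drop_debug:
    type: filter               # drop noisy debug-level events entirely
    inputs: [datadog_agents]
    condition: '.status != "debug"'
  sample_info:
    type: sample               # keep roughly 1 in 10 of the remaining events
    inputs: [drop_debug]
    rate: 10
```

Because filtering runs before sampling here, high-signal events can be dropped or kept deterministically while only the remaining bulk is sampled down.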
Redact sensitive data from your telemetry data
Redact sensitive data before it is routed outside of your infrastructure, using out-of-the-box patterns to scan for PII, PCI data, private keys, and more.
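A redaction step like the one described above might look like the following sketch, again assuming the Vector-style config layout and its VRL `redact` function; the component name `scrub_pii`, the upstream input name, and the filter chosen are illustrative assumptions.

```yaml
# Hypothetical redaction transform -- names and the filter list are illustrative.
transforms:
  scrub_pii:
    type: remap
    inputs: [datadog_agents]
    source: |
      # Replace matched patterns (for example, US SSNs) before routing downstream
      .message = redact(string!(.message), filters: ["us_social_security_number"])
```

Placing this transform immediately after the source ensures every downstream destination receives only the redacted copy of the event.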
Monitor the health of your pipelines
Get a holistic view of all of your pipelines’ topologies and monitor key performance indicators, such as average load, error rate, and throughput for each of your flows.
Further Reading