Observability Pipelines

Overview

Diagram: data is aggregated from a variety of sources, processed and enriched by the Observability Pipelines Worker in your own environment, and then routed to the security, analytics, and storage destinations of your choice.

Datadog Observability Pipelines allows you to collect and process logs and metrics (metrics support is in Preview) within your own infrastructure, and then route the data to different destinations. It gives you control over your observability data before it leaves your environment.

With out-of-the-box templates, you can build pipelines that redact sensitive data, enrich data, filter out noisy events, and route data to destinations like Datadog, SIEM tools, or cloud storage.
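
To illustrate the kind of processing these templates perform before data leaves your network, here is a minimal Python sketch of a redact-and-filter step. This is a conceptual illustration only, not Datadog's implementation; the redaction patterns, log fields, and status levels are hypothetical.

```python
import re

# Hypothetical redaction rules, similar in spirit to the
# Sensitive Data Redaction template (illustration only).
REDACTION_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED_SSN]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[REDACTED_EMAIL]"),
]

NOISY_STATUSES = {"debug", "trace"}  # assumed low-value levels to drop


def process(event: dict) -> dict | None:
    """Redact sensitive values, then drop noisy events."""
    if event.get("status", "").lower() in NOISY_STATUSES:
        return None  # filtered out before it is stored or shipped
    message = event.get("message", "")
    for pattern, replacement in REDACTION_PATTERNS:
        message = pattern.sub(replacement, message)
    return {**event, "message": message}


events = [
    {"status": "info", "message": "login from alice@example.com"},
    {"status": "debug", "message": "cache miss for key 42"},
]
print([e for e in (process(ev) for ev in events) if e is not None])
```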

Key components

Observability Pipelines Worker

The Observability Pipelines Worker runs within your infrastructure to aggregate, process, and route data.

Datadog recommends that you update the Observability Pipelines Worker (OPW) with every minor and patch release, or, at a minimum, monthly.

Staying on the latest major OPW version and keeping it updated is the only supported way to get the latest OPW functionality, fixes, and security updates. See Upgrade the Worker for instructions on updating to the latest version.

Observability Pipelines UI

The Observability Pipelines UI provides a centralized control plane where you can:

  • Build and edit pipelines with guided templates.
  • Deploy and manage Workers.
  • Enable monitors to track pipeline health.

Get started

  1. Navigate to Observability Pipelines.
  2. Select a template based on your use case.
  3. Set up your pipeline:
    1. Choose a log source.
    2. Configure processors.
    3. Add one or more destinations.
  4. Install the Worker in your environment.
  5. Enable monitors for real-time observability into your pipeline health.

See Set Up Pipelines for detailed instructions.
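
Once the Worker is installed, you can verify the pipeline end to end by sending it a test event. The sketch below assumes a pipeline configured with an HTTP-style log source listening on port 8080 of the Worker host; the actual source type, address, and payload format depend on the source you chose in step 3.

```python
import json
import urllib.request

# Hypothetical Worker address and endpoint (assumption: your
# pipeline uses an HTTP-based log source on port 8080).
WORKER_URL = "http://worker.internal:8080/"

payload = json.dumps({
    "message": "observability pipelines test event",
    "service": "pipeline-smoke-test",
    "status": "info",
}).encode("utf-8")

request = urllib.request.Request(
    WORKER_URL,
    data=payload,
    headers={"Content-Type": "application/json"},
)

# If the pipeline is healthy, the event should appear at your
# configured destinations (for example, Datadog or cloud storage).
with urllib.request.urlopen(request, timeout=5) as response:
    print("Worker responded with HTTP", response.status)
```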

Common use cases and templates

Observability Pipelines includes prebuilt templates for common data routing and transformation workflows. You can fully customize or combine them to meet your needs.

Screenshot: the Observability Pipelines UI showing the eight templates.

Templates

  • Archive Logs: Store raw logs in Amazon S3, Google Cloud Storage, or Azure Storage for long-term retention and rehydration.
  • Dual Ship Logs: Send the same log stream to multiple destinations (for example, Datadog and a SIEM).
  • Generate Log-based Metrics: Convert high-volume logs into count or distribution metrics to reduce storage needs.
  • Log Enrichment: Add metadata from reference tables or static mappings for more effective querying.
  • Log Volume Control: Reduce indexed log volume by filtering low-value logs before they're stored.
  • Sensitive Data Redaction: Detect and remove personally identifiable information (PII) and secrets using built-in or custom rules.
  • Split Logs: Route logs by type (for example, security vs. application) to different tools.
  • Metrics Volume and Cardinality Control (Preview): Manage the quality and volume of your metrics by keeping only the metrics you need, standardizing metrics tagging, and removing unwanted tags to prevent high cardinality. Fill out the form to request access.

See Explore templates for more information.
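
To make the Generate Log-based Metrics template more concrete, here is a small Python sketch of the underlying idea: collapsing a high-volume log stream into a count metric keyed by a few tags. The field names and tags are hypothetical, and in practice the Worker performs this aggregation for you; this only illustrates the technique.

```python
from collections import Counter

# Hypothetical high-volume access logs.
logs = [
    {"service": "web", "status_code": 200},
    {"service": "web", "status_code": 500},
    {"service": "web", "status_code": 200},
    {"service": "api", "status_code": 200},
]

# A count metric keyed by (service, status_code) is far cheaper
# to store than indexing every individual log line.
counts = Counter((log["service"], log["status_code"]) for log in logs)

for (service, status_code), value in sorted(counts.items()):
    print(f"requests.count service:{service} status_code:{status_code} -> {value}")
```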

Further Reading