Observability Pipelines

Overview

[Figure: data is aggregated from a variety of sources, processed and enriched by the Observability Pipelines Worker in your environment, and then routed to the security, analytics, and storage destinations of your choice]

Datadog Observability Pipelines allows you to collect, process, and route logs within your own infrastructure. It gives you control over your observability data before it leaves your environment.

With out-of-the-box templates, you can build pipelines that redact sensitive data, enrich logs, filter out noisy logs, and route events to destinations like Datadog, SIEM tools, or cloud storage.
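
Conceptually, each pipeline is a chain of processors applied to every log event before it is routed. The following Python sketch is illustrative only; pipelines are configured in the Observability Pipelines UI, not written as code, and the processor rules here (a debug filter, email redaction, a static region mapping) are invented for the example:

```python
import re

# Illustrative only: this models the pipeline idea (each log event flows
# through processors, then fans out to destinations). It is not how the
# Worker is implemented or configured.

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def filter_noise(event):
    # Volume control: drop low-value logs (for example, debug-level).
    return None if event.get("status") == "debug" else event

def redact_pii(event):
    # Sensitive data redaction: replace email addresses with a placeholder.
    event["message"] = EMAIL.sub("[REDACTED]", event["message"])
    return event

def enrich(event):
    # Enrichment: add metadata from a static mapping
    # (a stand-in for a reference table).
    regions = {"web-1": "us-east-1", "web-2": "eu-west-1"}
    event["region"] = regions.get(event.get("host"), "unknown")
    return event

def route(event, destinations):
    # Dual-ship: send the same event to every configured destination.
    for send in destinations:
        send(event)

destinations = [
    lambda e: print("-> Datadog:", e),
    lambda e: print("-> SIEM:   ", e),
]

for event in [
    {"status": "info", "host": "web-1", "message": "login from alice@example.com"},
    {"status": "debug", "host": "web-2", "message": "cache miss"},
]:
    for processor in (filter_noise, redact_pii, enrich):
        event = processor(event)
        if event is None:
            break
    else:
        route(event, destinations)
```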

Key components

Observability Pipelines Worker

The Observability Pipelines Worker runs within your infrastructure to aggregate, process, and route logs.

Datadog recommends that you update the Observability Pipelines Worker (OPW) with every minor and patch release or, at a minimum, monthly.

Upgrading to a major OPW version and keeping it updated is the only supported way to get the latest OPW functionality, fixes, and security updates. See Upgrade the Worker to update to the latest Worker version.

Observability Pipelines UI

The Observability Pipelines UI provides a centralized control plane where you can:

  • Build and edit pipelines with guided templates.
  • Deploy and manage Workers.
  • Enable monitors to track pipeline health (see the sketch after this list).
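
Monitors can also be created programmatically with Datadog's API client. The sketch below is a minimal example that assumes the `datadog-api-client` Python package and a hypothetical pipeline metric name (`pipelines.utilization`); substitute the metric, threshold, and notification handle that fit your pipeline:

```python
# Sketch: create a metric monitor for pipeline health with the official
# datadog-api-client package (pip install datadog-api-client). Credentials
# are read from the DD_API_KEY / DD_APP_KEY environment variables.
from datadog_api_client import ApiClient, Configuration
from datadog_api_client.v1.api.monitors_api import MonitorsApi
from datadog_api_client.v1.model.monitor import Monitor
from datadog_api_client.v1.model.monitor_type import MonitorType

body = Monitor(
    name="Observability Pipelines Worker utilization is high",
    type=MonitorType.METRIC_ALERT,
    # Hypothetical metric and threshold: alert when average utilization
    # stays above 0.9 for five minutes.
    query="avg(last_5m):avg:pipelines.utilization{*} > 0.9",
    message="Worker utilization is high; consider scaling out. @your-team",
)

configuration = Configuration()
with ApiClient(configuration) as api_client:
    api = MonitorsApi(api_client)
    monitor = api.create_monitor(body=body)
    print(f"Created monitor {monitor.id}")
```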

Get started

  1. Navigate to Observability Pipelines.
  2. Select a template based on your use case.
  3. Set up your pipeline:
    1. Choose a log source.
    2. Configure processors.
    3. Add one or more destinations.
  4. Install the Worker in your environment.
  5. Enable monitors for real-time observability into your pipeline health.

See Set Up Pipelines for detailed instructions.
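
Once the Worker is installed (step 4), one quick way to validate a pipeline end to end is to send it a test event. This sketch assumes a pipeline whose source accepts HTTP requests, and the Worker address and path are hypothetical placeholders; take the real host, port, and path from your pipeline's source configuration:

```python
# Sketch: send a test log to a running Worker. The address, port, and path
# below are hypothetical; use the values from your pipeline's source
# configuration instead.
import json
import urllib.request

WORKER_URL = "http://opw.internal:8282/"  # hypothetical source endpoint

event = {
    "status": "info",
    "service": "checkout",
    "message": "test event for pipeline validation",
}

request = urllib.request.Request(
    WORKER_URL,
    data=json.dumps(event).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(request, timeout=5) as response:
    print("Worker responded with HTTP", response.status)
```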

Common use cases and templates

Observability Pipelines includes prebuilt templates for common log routing and transformation workflows. You can fully customize or combine them to meet your needs.

[Figure: the Observability Pipelines UI showing the available templates]
  • Log Volume Control: Reduce indexed log volume by filtering low-value logs before they’re stored.
  • Dual Ship Logs: Send the same log stream to multiple destinations (for example, Datadog and a SIEM).
  • Archive Logs: Store raw logs in Amazon S3, Google Cloud Storage, or Azure Storage for long-term retention and rehydration.
  • Split Logs: Route logs by type (for example, security vs. application) to different tools.
  • Sensitive Data Redaction: Detect and remove personally identifiable information (PII) and secrets using built-in or custom rules.
  • Log Enrichment: Add metadata from reference tables or static mappings for more effective querying.
  • Generate Metrics: Convert high-volume logs into count or distribution metrics to reduce storage needs.

See Explore templates for more information.
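
As an illustration of the Generate Metrics idea, the sketch below (plain Python, not Worker configuration) collapses raw log events into per-tag count metrics, so only a few metric points, under the hypothetical metric name `logs.events.count`, need to be retained instead of every raw line:

```python
from collections import Counter

# Illustrative only: the Generate Metrics template does this inside the
# Worker. The idea is to keep one counter per (service, status) pair
# instead of indexing every raw log line.
def logs_to_count_metrics(events):
    counts = Counter(
        (event.get("service", "unknown"), event.get("status", "unknown"))
        for event in events
    )
    # Emit one metric point per tag combination.
    return [
        {"metric": "logs.events.count", "value": n,
         "tags": [f"service:{service}", f"status:{status}"]}
        for (service, status), n in counts.items()
    ]

events = [
    {"service": "checkout", "status": "error", "message": "payment failed"},
    {"service": "checkout", "status": "error", "message": "timeout"},
    {"service": "search", "status": "info", "message": "query ok"},
]
for point in logs_to_count_metrics(events):
    print(point)
```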

Further Reading