Observability Pipelines

A graphic showing different data sources on the left that flow into three hexagons named transform, reduce, and route, with arrows pointing to different destinations for the modified data

What is Observability Pipelines and the Observability Pipelines Worker?

Observability Pipelines Worker

The Observability Pipelines Worker is an on-premises, end-to-end data pipeline solution designed to collect, process, and route logs and metrics from any source to any destination. You can deploy the Worker as an aggregator within your infrastructure for central processing: it collects data from multiple upstream sources and performs cross-host aggregation and analysis. With the Observability Pipelines Worker, you can also:

  • Control your data volume before routing to manage costs.
  • Route data anywhere to reduce vendor lock-in and simplify migrations.
  • Transform logs and metrics by adding, parsing, enriching, and removing fields and tags.
  • Redact sensitive data from your telemetry data.
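As an illustration, a Worker transform that parses, enriches, and trims log events might look like the following sketch. This is a hypothetical Vector-style `remap` configuration; the source ID, field names, and exact syntax are assumptions, so check the configuration reference for your Worker version:

```yaml
transforms:
  parse_and_enrich:
    type: remap              # hypothetical remap transform (VRL-style)
    inputs:
      - my_log_source        # assumed upstream source ID
    source: |
      # Parse the raw message as JSON and merge it into the event
      . = merge(., parse_json!(string!(.message)))
      # Enrich with a team tag (example value)
      .team = "platform"
      # Remove a noisy field to control volume
      del(.debug_payload)
```

A pipeline would then reference `parse_and_enrich` as an input to a downstream destination, so only the transformed events are routed onward.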

Observability Pipelines

Using Datadog, you can monitor, build, and manage all of your Observability Pipelines Worker deployments at scale.

Add your Datadog API key to your Observability Pipelines configuration to monitor your pipelines in Datadog: Identify bottlenecks and latencies, fine-tune performance, monitor data delivery, and more.
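As a sketch, the API key is typically supplied to the Worker at startup. The variable and binary names below are assumptions based on common Worker setups; confirm them against the install documentation for your version:

```shell
# Placeholders in angle brackets must be replaced with your values
DD_API_KEY=<YOUR_DATADOG_API_KEY> \
DD_OP_PIPELINE_ID=<YOUR_PIPELINE_ID> \
DD_SITE=<YOUR_DATADOG_SITE> \
  observability-pipelines-worker run
```

With the API key set, the Worker reports health and throughput metrics back to Datadog, which powers the monitoring views described below.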

Get started

  1. Install the Observability Pipelines Worker.
  2. Set up configurations to collect, transform, and route your data.
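For step 1, a container-based install is one common option. The following is a hedged sketch: the image name, port, and environment variables are assumptions, and the placeholders must be filled in from your Datadog account before it can run:

```shell
# Hypothetical Docker-based install; verify image and flags in the install docs
docker run -i \
  -e DD_API_KEY=<YOUR_DATADOG_API_KEY> \
  -e DD_OP_PIPELINE_ID=<YOUR_PIPELINE_ID> \
  -e DD_SITE=<YOUR_DATADOG_SITE> \
  -p 8282:8282 \
  datadog/observability-pipelines-worker run
```

Other deployment methods (for example, Helm charts or Linux packages) follow the same pattern: provide the API key and pipeline ID, then start the Worker with a configuration that defines its sources, transforms, and destinations.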

Explore Observability Pipelines

Start exploring and getting insights into your Observability Pipelines:

Monitor the health of your pipelines

Get a holistic view of all of your pipelines’ topologies and monitor key performance indicators, such as average load, error rate, and throughput for each of your flows.

The configuration map showing data coming from http, splunk_hec, and datadog, and flowing into different transforms and then sent to different destinations

Quickly identify bottlenecks and optimize performance

Dive into specific configuration components to understand how observability data flows through your pipeline, troubleshoot and pinpoint performance bottlenecks, and optimize your pipeline.

The S3 source configuration side panel showing graphs for events in and out per second, percentage of errors, and load average percentage

Ensure data delivery and reduce latency

Confirm that data is reaching its destinations and get full visibility into any latency issues so you can meet your SLIs and SLOs.

The Observability Pipelines page showing a list of active and inactive pipelines with columns for created date, number of hosts, version, events in, bytes in, and error rate

Further Reading