---
title: Processing Pipelines
description: >-
  Transform, normalize, and enrich span attributes after ingestion without
  modifying application code.
breadcrumbs: Docs > APM > The Trace Pipeline > Processing Pipelines
---

# Processing Pipelines

{% callout %}
# Important note for users on the following Datadog site: app.ddog-gov.com

{% alert level="danger" %}
This product is not supported for your selected [Datadog site](https://docs.datadoghq.com/getting_started/site.md).
{% /alert %}

{% /callout %}

{% image
   source="https://docs.dd-static.net/images/tracing/processing_pipelines/processing_pipelines_overview.dacd7ebda0cb9fb997b77985ef53268e.jpg?auto=format&fit=max&w=850 1x, https://docs.dd-static.net/images/tracing/processing_pipelines/processing_pipelines_overview.dacd7ebda0cb9fb997b77985ef53268e.jpg?auto=format&fit=max&w=850&dpr=2 2x"
   alt="APM data flow diagram showing instrumented applications, ingestion sampling rules, Processing Pipelines, metrics from spans, retention filters, and indexed search" /%}

APM Processing Pipelines let you transform, normalize, and enrich span attributes after ingestion and before storage, without modifying application code.

Use pipelines to:

- Standardize attribute naming across services
- Consolidate inconsistent keys into a single canonical attribute
- Extract structured data from string values

Processing Pipelines run in the Datadog backend and apply only to newly ingested spans. Each pipeline contains a filter query that defines which spans enter the pipeline, and one or more processors that define how to transform matching spans. Datadog evaluates pipelines in order from top to bottom.

## Create a pipeline{% #create-a-pipeline %}

To create a pipeline:

1. Navigate to [**APM > Settings > Pipelines**](https://app.datadoghq.com/apm/pipelines).
1. Click **Add Pipeline**.
1. Define a filter query using [query syntax](https://docs.datadoghq.com/tracing/trace_explorer/query_syntax.md). The pipeline only processes spans matching this filter.
1. Name the pipeline.
1. Click **Enable**.
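
For example, a filter query can scope a pipeline to a single service and environment. The sketch below uses the standard trace query syntax; the service name is illustrative:

```text
service:checkout env:production
```

Only spans that match this query enter the pipeline; all other spans pass through unchanged.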

{% image
   source="https://docs.dd-static.net/images/tracing/processing_pipelines/create_pipeline.6f017e149fde3b3a028c644884748179.png?auto=format&fit=max&w=850 1x, https://docs.dd-static.net/images/tracing/processing_pipelines/create_pipeline.6f017e149fde3b3a028c644884748179.png?auto=format&fit=max&w=850&dpr=2 2x"
   alt="The new pipeline dialog with a filter query field, a live preview table of matching spans, and a pipeline name field" /%}

## Manage pipelines{% #manage-pipelines %}

From the [Pipelines](https://app.datadoghq.com/apm/pipelines) page, you can:

- Enable or disable individual pipelines
- Reorder pipelines by dragging them
- Edit pipelines in draft mode
- Restrict access with [RBAC](https://docs.datadoghq.com/account_management/rbac.md)

Disabling a pipeline stops it from processing newly ingested spans. It does not retroactively modify previously stored spans.

{% image
   source="https://docs.dd-static.net/images/tracing/processing_pipelines/manage_pipelines.d5dd5ab4c8df14278c0f9037f3ae94f7.png?auto=format&fit=max&w=850 1x, https://docs.dd-static.net/images/tracing/processing_pipelines/manage_pipelines.d5dd5ab4c8df14278c0f9037f3ae94f7.png?auto=format&fit=max&w=850&dpr=2 2x"
   alt="The Processing Pipelines list showing three active pipelines with their filter queries and management controls" /%}

## Processors{% #processors %}

Processors define the transformations applied to matching spans. Within a pipeline, processors run sequentially. Attribute changes from one processor apply to all downstream processors in the same pipeline. To add a processor, expand a pipeline and click **Add Processor**.

{% alert level="info" %}
Processors can only be applied to [span attributes, not span tags](https://docs.datadoghq.com/tracing/trace_explorer/span_tags_attributes.md).
{% /alert %}

### Remapper processor{% #remapper-processor %}

The Remapper processor renames, merges, or removes span attributes to enforce consistent attribute naming across services. It modifies attribute keys, but *does not* extract new data from attribute values. To extract data from values, use the Parser processor.

The system attributes `env`, `service`, `resource_name`, `operation_name`, and `@duration` cannot be remapped. If you rename or remove attributes used in dashboards, monitors, or retention filters, update the affected dashboards, monitors, and retention filters accordingly.

For example, different services may emit `http.route`, `http.path`, or `http.target` for the same logical field. Use the Remapper to map all three to `http.route` so that every matching span contains a single, standardized attribute.
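
Conceptually, the remapping from that example looks like this (attribute values are hypothetical):

```text
Before remapping                      After remapping
{ "http.route":  "/api/users" }  ->   { "http.route": "/api/users" }
{ "http.path":   "/api/users" }  ->   { "http.route": "/api/users" }
{ "http.target": "/api/users" }  ->   { "http.route": "/api/users" }
```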

{% image
   source="https://docs.dd-static.net/images/tracing/processing_pipelines/remapper_processor.ee13561f50e9bfcdbdc949886b50e964.png?auto=format&fit=max&w=850 1x, https://docs.dd-static.net/images/tracing/processing_pipelines/remapper_processor.ee13561f50e9bfcdbdc949886b50e964.png?auto=format&fit=max&w=850&dpr=2 2x"
   alt="The Remapper processor configuration showing source attributes http.path and http.target mapped to a target attribute http.route, with a preview of matched spans" /%}

### Parser processor{% #parser-processor %}

The Parser processor extracts structured attributes from existing span attribute values using the same [Grok syntax as Log Management parsing](https://docs.datadoghq.com/logs/log_configuration/parsing.md), including all matchers and filters. Unlike the Remapper, the Parser creates new attributes based on parsed content. Use it to transform semi-structured text stored in span attributes into searchable, structured attributes. To rename or consolidate attribute keys, use the Remapper processor instead.

To configure Grok parsing rules:

1. **Define the parsing attribute and samples**: Select the attribute that you want to parse, and add sample data for the selected attribute.
1. **Define parsing rules**: Write your parsing rules in the rule editor.
1. **Preview parsing**: Select a sample to evaluate it against the parsing rules. All samples show a status (`match` or `no match`) indicating whether one of the Grok rules matches the sample.

{% image
   source="https://docs.dd-static.net/images/tracing/processing_pipelines/parser_processor.7e55761b74ef838c698dd7095d837f9e.png?auto=format&fit=max&w=850 1x, https://docs.dd-static.net/images/tracing/processing_pipelines/parser_processor.7e55761b74ef838c698dd7095d837f9e.png?auto=format&fit=max&w=850&dpr=2 2x"
   alt="The Parser processor configuration showing sample attribute values with match status, a Grok parsing rule editor, and a parsed output preview" /%}

When multiple Grok rules match the same sample, only the first matching rule is applied.
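
As a sketch of the workflow above, suppose a hypothetical attribute `db.query_summary` holds values like `SELECT users 12 rows 45ms`. A single Grok rule can extract each field into its own attribute (the rule name and target attribute names below are illustrative, not Datadog-defined):

```text
# Sample value: SELECT users 12 rows 45ms
extract_query %{word:db.operation} %{word:db.table} %{integer:db.rows_returned} rows %{integer:db.duration_ms}ms
```

After parsing, a matching span gains the attributes `db.operation`, `db.table`, `db.rows_returned`, and `db.duration_ms`, each searchable on its own. If you define several rules, order them from most to least specific, since only the first matching rule is applied.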

## Execution flow{% #execution-flow %}

Processing Pipelines run at a specific point in the span processing life cycle:

1. Spans are ingested.
1. Datadog enrichments are applied (infrastructure, CI, source code metadata).
1. Processing Pipelines run.
1. Retention filters and metrics from spans are computed.
1. Spans are stored and indexed.

## Preprocessed attributes{% #preprocessed-attributes %}

Datadog preprocesses some span attributes before pipelines run. For example, [Quantization of APM Data](https://docs.datadoghq.com/tracing/troubleshooting/quantization.md) normalizes resource names by default and cannot be disabled. You can define additional pipelines if you need further customization of these attributes.

## Further reading{% #further-reading %}

- [Generate Custom Metrics from Spans](https://docs.datadoghq.com/tracing/trace_pipeline/generate_metrics.md)
- [Trace Retention](https://docs.datadoghq.com/tracing/trace_pipeline/trace_retention.md)
- [Quantization of APM Data](https://docs.datadoghq.com/tracing/troubleshooting/quantization.md)
- [Sensitive Data Scanner](https://docs.datadoghq.com/security/sensitive_data_scanner.md)
- [Service Remapping Rules](https://docs.datadoghq.com/tracing/services/service_remapping_rules.md)
