APM Processing Pipelines let you transform, normalize, and enrich span attributes after ingestion and before storage, without modifying application code.
Use pipelines to:
- Standardize attribute naming across services
- Consolidate inconsistent keys into a single canonical attribute
- Extract structured data from string values
Processing Pipelines run in the Datadog backend and apply only to newly ingested spans. Each pipeline contains a filter query that defines which spans enter the pipeline, and one or more processors that define how to transform matching spans. Datadog evaluates pipelines in order, from top to bottom.
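The evaluation model can be sketched in plain Python (this is an illustration of the behavior described above, not the Datadog implementation; the pipeline structure, the `web-store` service, and the lowercasing processor are hypothetical):

```python
# Conceptual sketch: pipelines are evaluated top to bottom, and a span
# enters a pipeline only when it matches that pipeline's filter query.
def apply_pipelines(span, pipelines):
    for pipeline in pipelines:            # top-to-bottom order
        if pipeline["filter"](span):      # filter query decides entry
            for processor in pipeline["processors"]:
                span = processor(span)    # processors run sequentially
    return span

# Hypothetical pipeline: lowercase the "env" attribute on web-store spans.
pipelines = [{
    "filter": lambda s: s.get("service") == "web-store",
    "processors": [lambda s: {**s, "env": s.get("env", "").lower()}],
}]

span = {"service": "web-store", "env": "PROD"}
print(apply_pipelines(span, pipelines))  # {'service': 'web-store', 'env': 'prod'}
```

Spans that match no filter pass through unchanged.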
Create a pipeline
To create a pipeline:
- Navigate to APM > Settings > Pipelines.
- Click Add Pipeline.
- Define a filter query using query syntax. The pipeline only processes spans matching this filter.
- Name the pipeline.
- Click Enable.
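For example, a filter query can scope a pipeline to one service and environment using standard trace search syntax (the service name and status-code attribute here are hypothetical):

```
service:checkout env:production @http.status_code:5*
```

Only spans matching every term enter the pipeline.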
Manage pipelines
From the Pipelines page, you can:
- Enable or disable individual pipelines
- Reorder pipelines by dragging them
- Edit pipelines in draft mode
- Restrict access with RBAC
Disabling a pipeline stops it from processing newly ingested spans. It does not retroactively modify previously stored spans.
Processors
Processors define the transformations applied to matching spans. Within a pipeline, processors run sequentially. Attribute changes from one processor apply to all downstream processors in the same pipeline. To add a processor, expand a pipeline and click Add Processor.
Remapper processor
The Remapper processor renames, merges, or removes span attributes to enforce consistent attribute naming across services. It modifies attribute keys, but does not extract new data from attribute values. To extract data from values, use the Parser processor.
The system attributes env, service, resource_name, operation_name, and @duration cannot be remapped. If you rename or remove attributes used in dashboards, monitors, or retention filters, update the affected dashboards, monitors, and retention filters accordingly.
For example, different services may emit http.route, http.path, or http.target for the same logical field. Use the Remapper to map all three to http.route so that every matching span contains a single, standardized attribute.
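The consolidation behaves roughly like the following sketch (plain Python standing in for the Remapper configuration; the key names come from the example above, and the first-key-wins tie-breaking is an assumption of this sketch):

```python
# Illustrative sketch of attribute consolidation: merge several
# source keys into one canonical target key.
SOURCE_KEYS = ["http.route", "http.path", "http.target"]
TARGET_KEY = "http.route"

def remap(attributes):
    for key in SOURCE_KEYS:
        if key in attributes:
            # Remove the source key and write its value to the target key.
            attributes[TARGET_KEY] = attributes.pop(key)
            break  # first source key present wins (assumption of this sketch)
    return attributes

print(remap({"http.path": "/checkout"}))  # {'http.route': '/checkout'}
```

After remapping, queries and facets only need to reference `http.route`.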
Parser processor
The Parser processor extracts structured attributes from existing span attribute values using Grok parsing rules. It uses the same Grok syntax as Log Management parsing, including all matchers and filters. Unlike the Remapper, the Parser creates new attributes based on parsed content. Use it to transform semi-structured text stored in span attributes into searchable, structured attributes. To rename or consolidate attribute keys, use the Remapper processor instead.
To configure Grok parsing rules:
- Define the parsing attribute and samples: Select the attribute that you want to parse, and add sample data for the selected attribute.
- Define parsing rules: Write your parsing rules in the rule editor.
- Preview parsing: Select a sample to evaluate it against the parsing rules. All samples show a status (match or no match) indicating whether one of the Grok rules matches the sample.
When multiple Grok rules match the same sample, only the first matching rule is applied.
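The first-match semantics can be sketched with ordinary regular expressions standing in for Grok rules (the rules and the `user_name` attribute are hypothetical, for illustration only):

```python
import re

# Sketch of "first matching rule wins": rules are tried in order,
# and only the first match produces extracted attributes.
RULES = [
    re.compile(r"user=(?P<user_name>\w+)"),
    re.compile(r"(?P<user_name>\w+) logged in"),
]

def parse(value):
    for rule in RULES:
        m = rule.search(value)
        if m:
            return m.groupdict()  # attributes from the first match only
    return {}                     # no match: the span is left unchanged

print(parse("user=alice logged in"))  # {'user_name': 'alice'}
```

Even though both rules match `"user=alice logged in"`, only the first rule's capture is applied, so ordering your rules from most to least specific matters.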
Execution flow
Processing Pipelines run at a specific point in the span processing life cycle:
- Spans are ingested.
- Datadog enrichments are applied (infrastructure, CI, source code metadata).
- Processing Pipelines run.
- Retention filters and metrics from spans are computed.
- Spans are stored and indexed.
Preprocessed attributes
Datadog preprocesses some span attributes before pipelines run. For example, Quantization of APM Data normalizes resource names by default and cannot be disabled. You can define additional pipelines if you need further customization of these attributes.
Further reading