Multiple mechanisms determine whether the spans generated by your applications are sent to Datadog (ingested). The logic behind these mechanisms lies in the tracing libraries and in the Datadog Agent. Depending on the configuration, all or some of the traffic generated by instrumented services is ingested.
Each ingested span carries a unique ingestion reason referring to one of the mechanisms described on this page. The usage metric datadog.estimated_usage.apm.ingested_spans is tagged by the corresponding ingestion reason.
Use the Ingestion Reasons dashboard to investigate each of these ingestion reasons in context. Get an overview of the volume attributed to each mechanism so you can quickly identify which configuration options to focus on.
The default sampling mechanism is called head-based sampling. The decision of whether to keep or drop a trace is made at the very beginning of the trace, at the start of the root span. This decision is then propagated to other services as part of their request context, for example as an HTTP request header.
Because the decision is made at the beginning of the trace and then conveyed to all parts of the trace, the trace is guaranteed to be kept or dropped as a whole.
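The propagation can be sketched as follows. The helper functions are hypothetical, and real tracing libraries inject and read the x-datadog-* headers automatically; this only illustrates how a single root-level decision travels with the request:

```python
import random

def start_trace(sample_rate: float) -> dict:
    # The keep/drop decision is made once, when the root span starts...
    keep = random.random() < sample_rate
    # ...and is propagated downstream as part of the request context.
    return {
        "x-datadog-trace-id": str(random.getrandbits(64)),
        "x-datadog-sampling-priority": "1" if keep else "0",
    }

def downstream_service(headers: dict) -> bool:
    # Downstream services honor the upstream decision instead of
    # re-sampling, so the trace is kept or dropped as a whole.
    return headers["x-datadog-sampling-priority"] == "1"

headers = start_trace(sample_rate=0.5)
decision = downstream_service(headers)  # same decision on every hop
```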
You can set sampling rates for head-based sampling in two places:
- At the Agent level (default)
- At the tracing library level: any tracing library mechanism overrides the Agent setup.
In the Agent
The Datadog Agent continuously sends sampling rates to the tracing libraries to apply at the root of traces. The Agent adjusts these rates to achieve an overall target of 10 traces per second, distributed across services according to their traffic.
For instance, if service A has more traffic than service B, the Agent might vary the sampling rate for A such that A keeps no more than seven traces per second, and similarly adjust the sampling rate for B such that B keeps no more than three traces per second, for a total of 10 traces per second.
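The traffic-proportional split described above can be sketched as follows. This is illustrative only; the Agent's real feedback loop adjusts rates dynamically, and the function name is hypothetical:

```python
def distribute_target_tps(traffic: dict, target_tps: float = 10.0) -> dict:
    """Split an overall traces-per-second target across services
    in proportion to their observed traffic (illustrative sketch)."""
    total = sum(traffic.values())
    return {svc: target_tps * count / total for svc, count in traffic.items()}

# Service A sees 700 req/s, service B sees 300 req/s:
rates = distribute_target_tps({"A": 700, "B": 300})
# A is allotted 7 traces/s, B is allotted 3 traces/s, totaling 10
```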
Set the Agent's target traces-per-second in its main configuration file (datadog.yaml) or as an environment variable:
@param max_traces_per_second - integer - optional - default: 10
@env DD_APM_MAX_TPS - integer - optional - default: 10
Note: The traces-per-second sampling rate set in the Agent only applies to Datadog tracing libraries; it has no effect on other tracing libraries such as OpenTelemetry SDKs.
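For example, in datadog.yaml the setting could look like the following sketch, assuming the parameter sits under the apm_config section like the other APM parameters on this page:

```yaml
# datadog.yaml (illustrative placement)
apm_config:
  max_traces_per_second: 10
```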
In tracing libraries: user-defined rules
At the library level, more specific sampling configuration is available:
- Set a specific sampling rate to apply to all root services, overriding the Agent’s default mechanism.
- Set a sampling rate for a specific root service.
- Set a limit on the number of ingested traces per second.
Note: These rules also follow the head-based sampling mechanism. If the traffic for a service is higher than the configured maximum traces per second, traces are dropped at the root; this does not create incomplete traces.
The configuration can be set by environment variables or directly in the code:
@env DD_TRACE_SAMPLE_RATE - float - optional - default: null (defaults to the Agent feedback loop)
@env DD_TRACE_SAMPLING_RULES - string (JSON array) - optional - default: null
@env DD_TRACE_RATE_LIMIT - integer - optional - default: 100 (if using the Agent default mechanism, the rate limiter is ignored)
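As a sketch of the environment-variable form (the service name and values are example placeholders):

```shell
# keep 10% of traces by default, cap ingestion at 100 traces per second,
# and keep 100% of traces rooted in "my-service"
export DD_TRACE_SAMPLE_RATE=0.1
export DD_TRACE_RATE_LIMIT=100
export DD_TRACE_SAMPLING_RULES='[{"service": "my-service", "sample_rate": 1.0}]'
```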
The following Python example shows how to sample 10 percent of all traces, with a rate limit of 100 traces per second, and an overriding sampling rate for a specific service:
# in dd-trace-py
from ddtrace import tracer
from ddtrace.sampler import DatadogSampler, SamplingRule

tracer.configure(sampler=DatadogSampler(
    default_sample_rate=0.10,  # keep 10% of traces
    rate_limit=100,            # but at most 100 traces per second
    # 100% sampled for "my-service"; the 100 traces-per-second overall limit is still honored
    rules=[SamplingRule(sample_rate=1.0, service="my-service")],
))
Read more about configuring the ingestion in the tracing libraries documentation.
Note: Services configured with user-defined rules are marked as Configured in the Ingestion Control page's Configuration column. Services configured to use the default mechanism are labeled as Automatic.
Force keep and drop
The head-based sampling mechanism can be overridden at the tracing library level. For example, if you need to monitor a critical transaction, you can force the associated trace to be kept. On the other hand, for unnecessary or repetitive information like health checks, you can force the trace to be dropped.
Set ManualKeep on a span to indicate that it and all of its child spans should be ingested. The resulting trace might appear incomplete in the UI if the span in question is not the root span of the trace.
// in dd-trace-go
span.SetTag(ext.ManualKeep, true) // force this trace to be kept and ingested
Single spans (App Analytics)
On October 20, 2020, App Analytics was replaced by Tracing without Limits. This deprecated mechanism and its configuration options are relevant only to legacy App Analytics. Instead, use the head-based sampling configuration options to get full control over your data ingestion.
If you need to sample a specific span but don't need the full trace to be available, tracing libraries let you configure a sampling rate for a single span. The span is ingested at no less than the configured rate, even when the enclosing trace is dropped.
In the tracing libraries
To use the analytics mechanism, enable it either with an environment variable or in the code, and define a sampling rate to apply to all analytics-enabled spans:
@env DD_TRACE_ANALYTICS_ENABLED - boolean - optional - default: false
// in dd-trace-go
tracer.Start(tracer.WithAnalytics(true))     // set analytics_enabled by default
tracer.Start(tracer.WithAnalyticsRate(0.5))  // set a raw sampling rate to apply to all analytics_enabled spans
Tag any single span with analytics_enabled:true. In addition, specify a sampling rate to associate with the span:
// in dd-trace-go
// make a span analytics_enabled
s := tracer.StartSpan("web.request", tracer.Tag(ext.AnalyticsEvent, true))
// make a span analytics_enabled with a rate of 0.5
s := tracer.StartSpan("redis.cmd", tracer.AnalyticsRate(0.5))
In the Agent
In the Agent, an additional rate limiter is set to 200 spans per second. If the limit is reached, some spans are dropped and not forwarded to Datadog.
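A rate limiter of this kind can be sketched as a token bucket. This is an assumption for illustration, not the Agent's actual implementation:

```python
import time

class TokenBucket:
    """Illustrative span rate limiter: admits at most `rate` spans
    per second on average, with a burst allowance of `burst`."""

    def __init__(self, rate: float, burst: float):
        self.rate, self.capacity = rate, burst
        self.tokens, self.last = burst, time.monotonic()

    def allow(self) -> bool:
        # Refill tokens based on elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over the limit: span is dropped, not forwarded

limiter = TokenBucket(rate=200, burst=200)
accepted = sum(limiter.allow() for _ in range(1000))
# only about 200 spans of a 1000-span burst are admitted
```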
Set the rate in the Agent main configuration file (
datadog.yaml) or as an environment variable:
@param max_events_per_second - integer - optional - default: 200
@env DD_APM_MAX_EPS - integer - optional - default: 200
Error and rare traces
For traces not caught by head-based sampling, Agent mechanisms ensure that critical and diverse traces are kept and ingested. These two samplers keep a diverse set of traces by catching all combinations of a predetermined set of tags:
- Error traces: Sampling errors is important for providing visibility on potential system failures.
- Rare traces: Sampling rare traces allows you to keep visibility on your system as a whole, by making sure that low-traffic services and resources are still monitored.
Note: Error and rare samplers are ignored for services for which you set library sampling rules.
The error sampler catches pieces of traces containing error spans that were not caught by head-based sampling. It distributes a ten-traces-per-second rate to catch all combinations of a predetermined set of tags.
With Agent version 7.33 and forward, you can configure the error sampler in the Agent main configuration file (
datadog.yaml) or with environment variables:
@param errors_per_second - integer - optional - default: 10
@env DD_APM_ERROR_TPS - integer - optional - default: 10
Note: Set the parameter to
0 to disable the error sampler.
The rare sampler sends a set of rare spans to Datadog. Rare sampling is also a distributed rate, catching combinations of tags such as http.status. The default sampling rate for rare traces is five traces per second.
In Agent version 7.33 and forward, you can disable the rare sampler in the Agent main configuration file (
datadog.yaml) or with an environment variable:
@param apm_config.disable_rare_sampler - boolean - optional - default: false
@env DD_APM_DISABLE_RARE_SAMPLER - boolean - optional - default: false
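Both samplers can be tuned together in datadog.yaml; the following sketch uses the parameters documented above, assuming errors_per_second also sits under the apm_config section:

```yaml
# datadog.yaml (illustrative placement)
apm_config:
  errors_per_second: 10        # set to 0 to disable the error sampler
  disable_rare_sampler: false  # set to true to disable the rare sampler
```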
Note: Sampled rare traces may be incomplete, because this mechanism occurs downstream of the head-based sampling. There is no way to guarantee that the Agent will receive a complete trace from the tracing libraries.
Product ingested spans
Some additional ingestion reasons are attributed to spans that are generated by specific Datadog products:
|Product|Ingestion Reason|Ingestion Mechanism Description|
|---|---|---|
|Synthetic Monitoring| |An HTTP or browser test generates a trace when the backend services are instrumented. Find the backend trace from the Synthetic test run.|
|Real User Monitoring| |A browser request from a web or mobile application generates a trace when the backend services are instrumented. Find the backend trace from the RUM browser sessions and resources.|
|Serverless| |Traces received from serverless applications traced with Datadog tracing libraries or the AWS X-Ray integration.|
|Application Security Monitoring| |Traces ingested from Datadog tracing libraries and flagged by ASM as a threat.|
Additional helpful documentation, links, and articles: