Multiple mechanisms determine whether spans generated by your applications are sent to Datadog (ingested). The logic behind these mechanisms lies in the tracing libraries and in the Datadog Agent. Depending on the configuration, all or some of the traffic generated by instrumented services is ingested.
Each ingested span carries a unique ingestion reason referring to one of the mechanisms described on this page. The usage metrics `datadog.estimated_usage.apm.ingested_bytes` and `datadog.estimated_usage.apm.ingested_spans` are tagged by `ingestion_reason`.
Use the Ingestion Reasons dashboard to investigate each of these ingestion reasons in context. Get an overview of the volume attributed to each mechanism to quickly identify which configuration options to focus on.
The default sampling mechanism is called head-based sampling. The decision of whether to keep or drop a trace is made at the very beginning of the trace, at the start of the root span. This decision is then propagated to other services as part of their request context, for example as an HTTP request header.
Because the decision is made at the beginning of the trace and then conveyed to all parts of the trace, the trace is guaranteed to be kept or dropped as a whole.
You can set sampling rates for head-based sampling in two places:
ingestion_reason: auto
The Datadog Agent continuously sends sampling rates to tracing libraries to apply at the root of traces. The Agent adjusts these rates to achieve an overall target of 10 traces per second, distributed across services depending on their traffic.
For instance, if service `A` has more traffic than service `B`, the Agent might vary the sampling rate for `A` such that `A` keeps no more than seven traces per second, and similarly adjust the sampling rate for `B` such that `B` keeps no more than three traces per second, for a total of 10 traces per second.
Sampling rate configuration in the Agent is configurable remotely if you are using Agent version 7.42.0 or higher. To get started, set up Remote Configuration, then configure the target traces-per-second parameter from the Ingestion Control page. Remote Configuration allows you to change the parameter without restarting the Agent. Remotely set configuration takes precedence over local configuration, including environment variables and settings from `datadog.yaml`.
Set the Agent's target traces-per-second in its main configuration file (`datadog.yaml`) or as an environment variable:
@param max_traces_per_second - integer - optional - default: 10
@env DD_APM_MAX_TPS - integer - optional - default: 10
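For illustration, assuming the setting lives under `apm_config` in `datadog.yaml`, raising the target could look like the following sketch (the value `20` is an arbitrary example, not a recommendation):

```shell
# Equivalent datadog.yaml setting (assumed key path):
#   apm_config:
#     max_traces_per_second: 20
# Environment variable form, with an illustrative value:
export DD_APM_MAX_TPS=20
```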
Notes:
All the spans from a trace sampled using the Datadog Agent's automatically computed sampling rates are tagged with the ingestion reason `auto`. The `ingestion_reason` tag is also set on usage metrics. Services using the Datadog Agent's default mechanism are labeled as `Automatic` in the Ingestion Control page's Configuration column.
ingestion_reason: rule
For more granular control, use tracing library sampling configuration options:
Note: Sampling rules are also head-based sampling controls. If the traffic for a service is higher than the configured maximum traces per second, then traces are dropped at the root. It does not create incomplete traces.
The configuration can be set by environment variables or directly in the code:
Remote configuration
Read more about how to remotely configure sampling rates by service and resource in the Resource-based sampling guide.
Note: Remotely set configuration takes precedence over local configuration.
Local configuration
For Java applications, set by-service and by-resource (starting from version v1.26.0 for resource-based sampling) sampling rates with the `DD_TRACE_SAMPLING_RULES` environment variable.
For example, to capture 100% of traces for the resource `GET /checkout` from the service `my-service`, and 20% of other endpoints' traces, set:
# using system property
java -Ddd.trace.sampling.rules='[{"service": "my-service", "resource": "GET /checkout", "sample_rate": 1},{"service": "my-service", "sample_rate": 0.2}]' -javaagent:dd-java-agent.jar -jar my-app.jar
# using environment variables
export DD_TRACE_SAMPLING_RULES='[{"service": "my-service", "resource":"GET /checkout", "sample_rate": 1},{"service": "my-service", "sample_rate": 0.2}]'
The service name value is case sensitive and must match the case of the actual service name.
Configure a rate limit by setting the environment variable `DD_TRACE_RATE_LIMIT` to a number of traces per second per service instance. If no `DD_TRACE_RATE_LIMIT` value is set, a limit of 100 traces per second is applied.
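For example, a sketch of raising the per-instance limit (the value `200` is illustrative, not a recommendation):

```shell
# Allow each service instance to keep up to 200 traces per second
# before the tracer's rate limiter applies (illustrative value).
export DD_TRACE_RATE_LIMIT=200
```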
Note: The use of `DD_TRACE_SAMPLE_RATE` is deprecated. Use `DD_TRACE_SAMPLING_RULES` instead. For instance, if you already set `DD_TRACE_SAMPLE_RATE` to `0.1`, set `DD_TRACE_SAMPLING_RULES` to `[{"sample_rate":0.1}]` instead.
Read more about sampling controls in the Java tracing library documentation.
For Python applications, set by-service and by-resource (starting from version v2.8.0 for resource-based sampling) sampling rates with the `DD_TRACE_SAMPLING_RULES` environment variable.
For example, to capture 100% of traces for the resource `GET /checkout` from the service `my-service`, and 20% of other endpoints' traces, set:
export DD_TRACE_SAMPLING_RULES='[{"service": "my-service", "resource": "GET /checkout", "sample_rate": 1},{"service": "my-service", "sample_rate": 0.2}]'
Configure a rate limit by setting the environment variable `DD_TRACE_RATE_LIMIT` to a number of traces per second per service instance. If no `DD_TRACE_RATE_LIMIT` value is set, a limit of 100 traces per second is applied.
Note: The use of `DD_TRACE_SAMPLE_RATE` is deprecated. Use `DD_TRACE_SAMPLING_RULES` instead. For instance, if you already set `DD_TRACE_SAMPLE_RATE` to `0.1`, set `DD_TRACE_SAMPLING_RULES` to `[{"sample_rate":0.1}]` instead.
Read more about sampling controls in the Python tracing library documentation.
For Ruby applications, set a global sampling rate for the library using the `DD_TRACE_SAMPLE_RATE` environment variable. Set by-service sampling rates with the `DD_TRACE_SAMPLING_RULES` environment variable.
For example, to send 50% of the traces for the service named `my-service` and 10% of the rest of the traces:
export DD_TRACE_SAMPLE_RATE=0.1
export DD_TRACE_SAMPLING_RULES='[{"service": "my-service", "sample_rate": 0.5}]'
Configure a rate limit by setting the environment variable `DD_TRACE_RATE_LIMIT` to a number of traces per second per service instance. If no `DD_TRACE_RATE_LIMIT` value is set, a limit of 100 traces per second is applied.
Read more about sampling controls in the Ruby tracing library documentation.
Remote configuration
Read more about how to remotely configure sampling rates by service and resource in this article.
Note: The remotely set configuration takes precedence over local configuration.
Local configuration
For Go applications, set by-service and by-resource (starting from version v1.60.0 for resource-based sampling) sampling rates with the `DD_TRACE_SAMPLING_RULES` environment variable.
For example, to capture 100% of traces for the resource `GET /checkout` from the service `my-service`, and 20% of other endpoints' traces, set:
export DD_TRACE_SAMPLING_RULES='[{"service": "my-service", "resource": "GET /checkout", "sample_rate": 1},{"service": "my-service", "sample_rate": 0.2}]'
Configure a rate limit by setting the environment variable `DD_TRACE_RATE_LIMIT` to a number of traces per second per service instance. If no `DD_TRACE_RATE_LIMIT` value is set, a limit of 100 traces per second is applied.
Note: The use of `DD_TRACE_SAMPLE_RATE` is deprecated. Use `DD_TRACE_SAMPLING_RULES` instead. For instance, if you already set `DD_TRACE_SAMPLE_RATE` to `0.1`, set `DD_TRACE_SAMPLING_RULES` to `[{"sample_rate":0.1}]` instead.
Read more about sampling controls in the Go tracing library documentation.
For Node.js applications, set a global sampling rate in the library using the `DD_TRACE_SAMPLE_RATE` environment variable.
You can also set by-service sampling rates. For instance, to send 50% of the traces for the service named `my-service` and 10% for the rest of the traces:
tracer.init({
  ingestion: {
    sampler: {
      sampleRate: 0.1,
      rules: [
        { sampleRate: 0.5, service: 'my-service' }
      ]
    }
  }
});
Configure a rate limit by setting the environment variable `DD_TRACE_RATE_LIMIT` to a number of traces per second per service instance. If no `DD_TRACE_RATE_LIMIT` value is set, a limit of 100 traces per second is applied.
Read more about sampling controls in the Node.js tracing library documentation.
For PHP applications, set a global sampling rate for the library using the `DD_TRACE_SAMPLE_RATE` environment variable. Set by-service sampling rates with the `DD_TRACE_SAMPLING_RULES` environment variable.
For example, to send 50% of the traces for the service named `my-service` and 10% for the rest of the traces:
export DD_TRACE_SAMPLE_RATE=0.1
export DD_TRACE_SAMPLING_RULES='[{"service": "my-service", "sample_rate": 0.5}]'
Read more about sampling controls in the PHP tracing library documentation.
Starting in v0.1.0, the Datadog C++ library supports the following configurations:
- `DD_TRACE_SAMPLE_RATE` environment variable
- `DD_TRACE_SAMPLING_RULES` environment variable
- `DD_TRACE_RATE_LIMIT` environment variable
For example, to send 50% of the traces for the service named `my-service` and 10% for the rest of the traces:
export DD_TRACE_SAMPLE_RATE=0.1
export DD_TRACE_SAMPLING_RULES='[{"service": "my-service", "sample_rate": 0.5}]'
C++ does not provide integrations for automatic instrumentation, but it is used for proxy tracing in technologies such as Envoy, Nginx, and Istio. Read more about how to configure sampling for proxies in Tracing proxies.
For .NET applications, set a global sampling rate for the library using the `DD_TRACE_SAMPLE_RATE` environment variable. Set by-service sampling rates with the `DD_TRACE_SAMPLING_RULES` environment variable.
For example, to send 50% of the traces for the service named `my-service` and 10% for the rest of the traces:
export DD_TRACE_SAMPLE_RATE=0.1
export DD_TRACE_SAMPLING_RULES='[{"service": "my-service", "sample_rate": 0.5}]'
You can also configure `DD_TRACE_SAMPLE_RATE` in the Service Catalog UI.
Configure a rate limit by setting the environment variable `DD_TRACE_RATE_LIMIT` to a number of traces per second per service instance. If no `DD_TRACE_RATE_LIMIT` value is set, a limit of 100 traces per second is applied.
Read more about sampling controls in the .NET tracing library documentation.
Read more about configuring environment variables for .NET.
Note: All the spans from a trace sampled using a tracing library configuration are tagged with the ingestion reason `rule`. Services configured with user-defined sampling rules are marked as `Configured` in the Ingestion Control page's Configuration column.
For traces not caught by the head-based sampling, two additional Datadog Agent sampling mechanisms make sure that critical and diverse traces are kept and ingested. These two samplers keep a diverse set of local traces (set of spans from the same host) by catching all combinations of a predetermined set of tags:
Note: Error and rare samplers are ignored for services for which you set library sampling rules.
ingestion_reason: error
The error sampler catches pieces of traces that contain error spans that are not caught by head-based sampling. It catches error traces up to a rate of 10 traces per second (per Agent). It ensures comprehensive visibility on errors when the head-based sampling rate is low.
With Agent version 7.33 and later, you can configure the error sampler in the Agent main configuration file (`datadog.yaml`) or with environment variables:
@param errors_per_second - integer - optional - default: 10
@env DD_APM_ERROR_TPS - integer - optional - default: 10
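For example, assuming the setting lives under `apm_config` in `datadog.yaml`, doubling the error sampler's budget could look like this sketch (the value `20` is illustrative):

```shell
# Equivalent datadog.yaml setting (assumed key path):
#   apm_config:
#     errors_per_second: 20
# Environment variable form, with an illustrative value:
export DD_APM_ERROR_TPS=20
```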
Notes:
- Set the parameter to `0` to disable the error sampler.
- Spans dropped by tracing library rules or custom logic such as `manual.drop` are excluded by the error sampler.
The error sampler is remotely configurable if you're using Agent version 7.42.0 or higher. Follow the documentation to enable Remote Configuration in your Agents. With Remote Configuration, you can adjust error sampling without having to restart the Datadog Agent.
To override the default behavior so that spans dropped by the tracing library rules or custom logic such as `manual.drop` are included by the error sampler, enable the feature with `DD_APM_FEATURES=error_rare_sample_tracer_drop` in the Datadog Agent (or the dedicated Trace Agent container within the Datadog Agent pod in Kubernetes).
In earlier Agent versions, the error sampler's default behavior can't be changed. Upgrade the Datadog Agent to 6.41.0/7.41.0 or higher.
ingestion_reason: rare
The rare sampler sends a set of rare spans to Datadog. It catches combinations of `env`, `service`, `name`, `resource`, `error.type`, and `http.status`, up to 5 traces per second (per Agent). It ensures visibility into low-traffic resources when the head-based sampling rate is low.
Note: The rare sampler captures local traces at the Agent level. If the trace is distributed, there is no way to guarantee that the complete trace will be sent to Datadog.
The rare sampling rate is remotely configurable if you're using Agent version 7.42.0 or higher. Follow the documentation to enable Remote Configuration in your Agents. With Remote Configuration, you can change the parameter value without restarting the Datadog Agent.
In Agent versions 7.42.0 and higher, the rare sampler is disabled by default.
Note: When enabled, spans dropped by tracing library rules or custom logic such as `manual.drop` are excluded by this sampler.
To configure the rare sampler, update the `apm_config.enable_rare_sampler` setting in the Agent main configuration file (`datadog.yaml`) or with the environment variable `DD_APM_ENABLE_RARE_SAMPLER`:
@param apm_config.enable_rare_sampler - boolean - optional - default: false
@env DD_APM_ENABLE_RARE_SAMPLER - boolean - optional - default: false
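A minimal sketch of enabling the rare sampler through the environment variable form documented above:

```shell
# Turn on the rare sampler for this Agent.
export DD_APM_ENABLE_RARE_SAMPLER=true
```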
To evaluate spans dropped by tracing library rules or custom logic such as `manual.drop`, enable the feature with `DD_APM_FEATURES=error_rare_sample_tracer_drop` in the Trace Agent.
In Agent versions below 7.42.0, the rare sampler is enabled by default.
Note: When enabled, spans dropped by tracing library rules or custom logic such as `manual.drop` are excluded by this sampler. To include these spans in this logic, upgrade to Datadog Agent 6.41.0/7.41.0 or higher.
To change the default rare sampler settings, update the `apm_config.disable_rare_sampler` setting in the Agent main configuration file (`datadog.yaml`) or with the environment variable `DD_APM_DISABLE_RARE_SAMPLER`:
@param apm_config.disable_rare_sampler - boolean - optional - default: false
@env DD_APM_DISABLE_RARE_SAMPLER - boolean - optional - default: false
ingestion_reason: manual
The head-based sampling mechanism can be overridden at the tracing library level. For example, if you need to monitor a critical transaction, you can force the associated trace to be kept. On the other hand, for unnecessary or repetitive information like health checks, you can force the trace to be dropped.
Set Manual Keep on a span to indicate that it and all of its child spans should be ingested. The resulting trace might appear incomplete in the UI if the span in question is not the root span of the trace.
Set Manual Drop on a span to ensure that none of its child spans are ingested. The Agent's error and rare samplers ignore spans dropped this way.
Manually keep a trace:
import datadog.trace.api.DDTags;
import io.opentracing.Span;
import datadog.trace.api.Trace;
import io.opentracing.util.GlobalTracer;
public class MyClass {
    @Trace
    public static void myMethod() {
        // grab the active span out of the traced method
        Span span = GlobalTracer.get().activeSpan();
        // Always keep the trace
        span.setTag(DDTags.MANUAL_KEEP, true);
        // method impl follows
    }
}
Manually drop a trace:
import datadog.trace.api.DDTags;
import io.opentracing.Span;
import datadog.trace.api.Trace;
import io.opentracing.util.GlobalTracer;
public class MyClass {
    @Trace
    public static void myMethod() {
        // grab the active span out of the traced method
        Span span = GlobalTracer.get().activeSpan();
        // Always drop the trace
        span.setTag(DDTags.MANUAL_DROP, true);
        // method impl follows
    }
}
Manually keep a trace:
from ddtrace import tracer
from ddtrace.constants import MANUAL_DROP_KEY, MANUAL_KEEP_KEY
@tracer.wrap()
def handler():
    span = tracer.current_span()
    # Always keep the trace
    span.set_tag(MANUAL_KEEP_KEY)
    # method impl follows
Manually drop a trace:
from ddtrace import tracer
from ddtrace.constants import MANUAL_DROP_KEY, MANUAL_KEEP_KEY
@tracer.wrap()
def handler():
    span = tracer.current_span()
    # Always drop the trace
    span.set_tag(MANUAL_DROP_KEY)
    # method impl follows
Manually keep a trace:
Datadog::Tracing.trace(name, options) do |span, trace|
  trace.keep! # Affects the active trace
  # Method implementation follows
end
Manually drop a trace:
Datadog::Tracing.trace(name, options) do |span, trace|
  trace.reject! # Affects the active trace
  # Method implementation follows
end
Manually keep a trace:
package main

import (
    "net/http"

    "gopkg.in/DataDog/dd-trace-go.v1/ddtrace/ext"
    "gopkg.in/DataDog/dd-trace-go.v1/ddtrace/tracer"
)

func handler(w http.ResponseWriter, r *http.Request) {
    // Create a span for a web request at the /posts URL.
    span := tracer.StartSpan("web.request", tracer.ResourceName("/posts"))
    defer span.Finish()
    // Always keep this trace:
    span.SetTag(ext.ManualKeep, true)
    // method impl follows
}
Manually drop a trace:
package main

import (
    "net/http"

    "gopkg.in/DataDog/dd-trace-go.v1/ddtrace/ext"
    "gopkg.in/DataDog/dd-trace-go.v1/ddtrace/tracer"
)

func handler(w http.ResponseWriter, r *http.Request) {
    // Create a span for a web request at the /posts URL.
    span := tracer.StartSpan("web.request", tracer.ResourceName("/posts"))
    defer span.Finish()
    // Always drop this trace:
    span.SetTag(ext.ManualDrop, true)
    // method impl follows
}
Manually keep a trace:
const tracer = require('dd-trace')
const tags = require('dd-trace/ext/tags')
const span = tracer.startSpan('web.request')
// Always keep the trace
span.setTag(tags.MANUAL_KEEP)
//method impl follows
Manually drop a trace:
const tracer = require('dd-trace')
const tags = require('dd-trace/ext/tags')
const span = tracer.startSpan('web.request')
// Always drop the trace
span.setTag(tags.MANUAL_DROP)
//method impl follows
Manually keep a trace:
using Datadog.Trace;

using (var scope = Tracer.Instance.StartActive("my-operation"))
{
    var span = scope.Span;
    // Always keep this trace
    span.SetTag(Datadog.Trace.Tags.ManualKeep, "true");
    // method impl follows
}
Manually drop a trace:
using Datadog.Trace;

using (var scope = Tracer.Instance.StartActive("my-operation"))
{
    var span = scope.Span;
    // Always drop this trace
    span.SetTag(Datadog.Trace.Tags.ManualDrop, "true");
    // method impl follows
}
Manually keep a trace:
<?php
$tracer = \DDTrace\GlobalTracer::get();
$span = $tracer->getActiveSpan();
if (null !== $span) {
    // Always keep this trace
    $span->setTag(\DDTrace\Tag::MANUAL_KEEP, true);
}
?>
Manually drop a trace:
<?php
$tracer = \DDTrace\GlobalTracer::get();
$span = $tracer->getActiveSpan();
if (null !== $span) {
    // Always drop this trace
    $span->setTag(\DDTrace\Tag::MANUAL_DROP, true);
}
?>
Manually keep a trace:
...
#include <datadog/tags.h>
#include <datadog/trace_segment.h>
#include <datadog/sampling_priority.h>
...
namespace dd = datadog::tracing;
dd::SpanConfig span_cfg;
span_cfg.resource = "operation_name";
auto span = tracer.create_span(span_cfg);
// Always keep this trace
span.trace_segment().override_sampling_priority(int(dd::SamplingPriority::USER_KEEP));
// method impl follows
Manually drop a trace:
...
#include <datadog/tags.h>
#include <datadog/trace_segment.h>
#include <datadog/sampling_priority.h>
...
namespace dd = datadog::tracing;
dd::SpanConfig span_cfg;
span_cfg.resource = "operation_name";
auto span = tracer.create_span(span_cfg);
// Always drop this trace
span.trace_segment().override_sampling_priority(int(dd::SamplingPriority::USER_DROP));
// method impl follows
Manual trace keeping must happen before context propagation. If it happens after context propagation, the system can't ensure that the entire trace is kept across services. Additionally, because the manual keep decision is made at the tracing library level, the trace can still be dropped by the Agent or on the server side based on sampling rules.
ingestion_reason: single_span
If you need to sample a specific span, but don't need the full trace to be available, tracing libraries let you configure a sampling rate for a single span.
For example, if you are building metrics from spans to monitor specific services, you can configure span sampling rules to ensure that these metrics are based on 100% of the application traffic, without having to ingest 100% of traces for all the requests flowing through the service.
This feature is available for Datadog Agent v7.40.0+.
Note: Single span sampling rules cannot be used to drop spans that are kept by head-based sampling, only to keep additional spans that are dropped by head-based sampling.
Starting in tracing library version 1.7.0, for Java applications, set by-service and by-operation-name span sampling rules with the `DD_SPAN_SAMPLING_RULES` environment variable.
For example, to collect 100% of the spans from the service named `my-service`, for the operation `http.request`, up to 50 spans per second:
@env DD_SPAN_SAMPLING_RULES=[{"service": "my-service", "name": "http.request", "sample_rate":1.0, "max_per_second": 50}]
Read more about sampling controls in the Java tracing library documentation.
Starting from version v1.4.0, for Python applications, set by-service and by-operation-name span sampling rules with the `DD_SPAN_SAMPLING_RULES` environment variable.
For example, to collect 100% of the spans from the service named `my-service`, for the operation `http.request`, up to 50 spans per second:
@env DD_SPAN_SAMPLING_RULES=[{"service": "my-service", "name": "http.request", "sample_rate":1.0, "max_per_second": 50}]
Read more about sampling controls in the Python tracing library documentation.
Starting from version v1.5.0, for Ruby applications, set by-service and by-operation-name span sampling rules with the `DD_SPAN_SAMPLING_RULES` environment variable.
For example, to collect 100% of the spans from the service named `my-service`, for the operation `http.request`, up to 50 spans per second:
@env DD_SPAN_SAMPLING_RULES=[{"service": "my-service", "name": "http.request", "sample_rate":1.0, "max_per_second": 50}]
Read more about sampling controls in the Ruby tracing library documentation.
Starting from version v1.41.0, for Go applications, set by-service and by-operation-name span sampling rules with the `DD_SPAN_SAMPLING_RULES` environment variable.
For example, to collect 100% of the spans from the service named `my-service`, for the operation `http.request`, up to 50 spans per second:
@env DD_SPAN_SAMPLING_RULES=[{"service": "my-service", "name": "http.request", "sample_rate":1.0, "max_per_second": 50}]
Starting from version v1.60.0, for Go applications, set by-resource and by-tag span sampling rules with the `DD_SPAN_SAMPLING_RULES` environment variable.
For example, to collect 100% of the spans for the resource `POST /api/create_issue` with the tag `priority` set to `high`:
@env DD_SPAN_SAMPLING_RULES=[{"resource": "POST /api/create_issue", "tags": { "priority":"high" }, "sample_rate":1.0}]
Read more about sampling controls in the Go tracing library documentation.
For Node.js applications, set by-service and by-operation-name span sampling rules with the `DD_SPAN_SAMPLING_RULES` environment variable.
For example, to collect 100% of the spans from the service named `my-service`, for the operation `http.request`, up to 50 spans per second:
@env DD_SPAN_SAMPLING_RULES=[{"service": "my-service", "name": "http.request", "sample_rate":1.0, "max_per_second": 50}]
Read more about sampling controls in the Node.js tracing library documentation.
Starting from version v0.77.0, for PHP applications, set by-service and by-operation-name span sampling rules with the `DD_SPAN_SAMPLING_RULES` environment variable.
For example, to collect 100% of the spans from the service named `my-service`, for the operation `http.request`, up to 50 spans per second:
@env DD_SPAN_SAMPLING_RULES=[{"service": "my-service", "name": "http.request", "sample_rate":1.0, "max_per_second": 50}]
Read more about sampling controls in the PHP tracing library documentation.
Starting from version v0.1.0, for C++ applications, set by-service and by-operation-name span sampling rules with the `DD_SPAN_SAMPLING_RULES` environment variable.
For example, to collect 100% of the spans from the service named `my-service`, for the operation `http.request`, up to 50 spans per second:
@env DD_SPAN_SAMPLING_RULES=[{"service": "my-service", "name": "http.request", "sample_rate":1.0, "max_per_second": 50}]
Starting from version v2.18.0, for .NET applications, set by-service and by-operation-name span sampling rules with the `DD_SPAN_SAMPLING_RULES` environment variable.
For example, to collect 100% of the spans from the service named `my-service`, for the operation `http.request`, up to 50 spans per second:
@env DD_SPAN_SAMPLING_RULES='[{"service": "my-service", "name": "http.request", "sample_rate":1.0, "max_per_second": 50}]'
Read more about sampling controls in the .NET tracing library documentation.
ingestion_reason:rum
A request from a web or mobile application generates a trace when the backend services are instrumented. The APM integration with Real User Monitoring links web and mobile application requests to their corresponding backend traces so you can see your full frontend and backend data through one lens.
Beginning with version 4.30.0 of the RUM Browser SDK, you can control ingested volumes and keep a sampling of the backend traces by configuring the `traceSampleRate` initialization parameter. Set `traceSampleRate` to a number between `0` and `100`.
If no `traceSampleRate` value is set, 100% of the traces coming from browser requests are sent to Datadog by default.
Control the trace sampling rate in other RUM SDKs with similar parameters:
| SDK | Parameter | Minimum version |
|---|---|---|
| Browser | `traceSampleRate` | 4.30.0 |
| iOS | `tracingSamplingRate` | 1.11.0. The sampling rate is reported in the Ingestion Control page since 1.13.0. |
| Android | `traceSampleRate` | 1.13.0. The sampling rate is reported in the Ingestion Control page since 1.15.0. |
| Flutter | `tracingSamplingRate` | 1.0.0 |
| React Native | `tracingSamplingRate` | 1.0.0. The sampling rate is reported in the Ingestion Control page since 1.2.0. |
ingestion_reason:synthetics and ingestion_reason:synthetics-browser
HTTP and browser tests generate traces when the backend services are instrumented. The APM integration with Synthetic Testing links your synthetic tests with the corresponding backend traces. Navigate from a test run that failed to the root cause of the issue by looking at the trace generated by that test run.
By default, 100% of synthetic HTTP and browser tests generate backend traces.
Some additional ingestion reasons are attributed to spans that are generated by specific Datadog products:
| Product | Ingestion Reason | Ingestion Mechanism Description |
|---|---|---|
| Serverless | `lambda` and `xray` | Traces received from serverless applications traced with Datadog tracing libraries or the AWS X-Ray integration. |
| Application Security Management | `appsec` | Traces ingested from Datadog tracing libraries and flagged by ASM as a threat. |
| Data Jobs Monitoring | `data_jobs` | Traces ingested from the Datadog Java tracer Spark integration or the Databricks integration. |
ingestion_reason:otel
Depending on your setup with the OpenTelemetry SDKs (using the OpenTelemetry Collector or the Datadog Agent), you have multiple ways of controlling ingestion sampling. See Ingestion Sampling with OpenTelemetry for details about the options available for sampling at the OpenTelemetry SDK, OpenTelemetry Collector, and Datadog Agent level in various OpenTelemetry setups.