With the Admission Controller approach, the Agent uses the Kubernetes Admission Controller to intercept requests to the Kubernetes API and mutate new pods to inject the specified instrumentation library.
Library injection is applied on new pods only and does not have any impact on running pods.
To learn more about admission controllers, read the Kubernetes Admission Controllers Reference.
Requirements
- Kubernetes v1.14+
- Datadog Cluster Agent v7.40+ for Java, Python, and Node.js; Datadog Cluster Agent v7.44+ for .NET and Ruby.
- Datadog Admission Controller enabled. Note: In Helm chart v2.35.0 and later, Datadog Admission Controller is activated by default in the Cluster Agent.
- For Python, uWSGI applications are not supported at this time.
- For Ruby, library injection support is in Beta. Instrumentation is only supported for Ruby on Rails applications with Bundler version greater than 2.3 and without vendored gems (deployment mode or `BUNDLE_PATH`).
- Applications in Java, JavaScript, Python, .NET, or Ruby deployed on Linux with a supported architecture. Check the corresponding container registry for the complete list of supported architectures by language.
Container registries
Docker Hub is subject to image pull rate limits. If you are not a Docker Hub customer, Datadog recommends that you update your Datadog Agent and Cluster Agent configuration to pull from GCR or ECR. For instructions, see Changing your container registry.
Datadog publishes instrumentation library images on gcr.io, Docker Hub, and AWS ECR:
The `DD_ADMISSION_CONTROLLER_AUTO_INSTRUMENTATION_CONTAINER_REGISTRY` environment variable in the Datadog Cluster Agent configuration specifies the registry used by the Admission Controller. The default value is `gcr.io/datadoghq`. You can pull the tracing library from a different registry by changing it to `docker.io/datadog`, `public.ecr.aws/datadog`, or another URL if you are hosting the images in a local container registry.
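For example, with the Datadog Helm chart you could set this variable through the Cluster Agent's environment. This is a sketch, assuming the chart's `clusterAgent.env` list and that you want to pull from Docker Hub; verify the key names against your chart version:

```yaml
# values.yaml (sketch): pull injected tracer images from Docker Hub
clusterAgent:
  env:
    - name: DD_ADMISSION_CONTROLLER_AUTO_INSTRUMENTATION_CONTAINER_REGISTRY
      value: "docker.io/datadog"
```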
For your Kubernetes applications whose traces you want to send to Datadog, configure the Datadog Admission Controller to inject Java, JavaScript, Python, .NET, or Ruby instrumentation libraries automatically. At a high level, this involves the following steps, described in detail below:
- Enable Datadog Admission Controller to mutate your pods.
- Annotate your pods to select which instrumentation library to inject.
- Tag your pods with Unified Service Tags to tie Datadog telemetry together and navigate seamlessly across traces, metrics, and logs with consistent tags.
- Apply your new configuration.
You do not need to build a new application image to inject the library: library injection adds the instrumentation library for you, so no change to your application image is required.
Step 1 - Enable Datadog Admission Controller to mutate your pods
By default, the Datadog Admission Controller mutates only pods that have a specific label. To enable mutation on your pods, add the label `admission.datadoghq.com/enabled: "true"` to your pod spec.
Note: You can configure the Datadog Admission Controller to inject libraries into pods that do not have this label by setting `clusterAgent.admissionController.mutateUnlabelled` (or `DD_ADMISSION_CONTROLLER_MUTATE_UNLABELLED`) to `true` in the Cluster Agent configuration.
For more details, read the Datadog Admission Controller page.
For example:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    # (...)
spec:
  template:
    metadata:
      labels:
        admission.datadoghq.com/enabled: "true" # Enable Admission Controller to mutate new pods part of this deployment
    spec:
      containers:
        - # (...)
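Alternatively, a sketch of enabling mutation without per-pod labels through Helm values, assuming the chart exposes `clusterAgent.admissionController.mutateUnlabelled` (verify against your chart version):

```yaml
# values.yaml (sketch): let the Admission Controller mutate all new pods,
# even those without the admission.datadoghq.com/enabled label
clusterAgent:
  admissionController:
    enabled: true
    mutateUnlabelled: true
```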
Step 2 - Annotate your pods for library injection
To select your pods for library injection, annotate them with the following, corresponding to your application language, in your pod spec:
Language | Pod annotation |
---|
Java | admission.datadoghq.com/java-lib.version: "<CONTAINER IMAGE TAG>" |
JavaScript | admission.datadoghq.com/js-lib.version: "<CONTAINER IMAGE TAG>" |
Python | admission.datadoghq.com/python-lib.version: "<CONTAINER IMAGE TAG>" |
.NET | admission.datadoghq.com/dotnet-lib.version: "<CONTAINER IMAGE TAG>" |
Ruby | admission.datadoghq.com/ruby-lib.version: "<CONTAINER IMAGE TAG>" |
The available library versions are listed in each container registry, as well as in the tracer source repositories for each language:
- Java
- JavaScript
- Python
- .NET
- Note: For .NET library injection, if the application container uses a musl-based Linux distribution (such as Alpine), you must specify a tag with the `-musl` suffix in the pod annotation. For example, to use library version `v2.29.0`, specify container tag `v2.29.0-musl`.
- Ruby
Note: If you already have an application instrumented with version X of a library and then use library injection to instrument with version Y of the same tracer library, the tracer does not break. Rather, the library version loaded first is used. Because library injection happens at the admission controller level, before runtime, it takes precedence over manually configured libraries.
Note: Using the `latest` tag is supported, but use it with caution because major library releases can introduce breaking changes.
For example, to inject a Java library:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    # (...)
spec:
  template:
    metadata:
      labels:
        admission.datadoghq.com/enabled: "true" # Enable Admission Controller to mutate new pods in this deployment
      annotations:
        admission.datadoghq.com/java-lib.version: "<CONTAINER IMAGE TAG>"
    spec:
      containers:
        - # (...)
Step 3 - Tag your pods with Unified Service Tags
With Unified Service Tags, you can tie Datadog telemetry together and navigate seamlessly across traces, metrics, and logs with consistent tags. Set Unified Service Tagging on both the Deployment object and the pod template spec.
Set Unified Service tags by using the following labels:
metadata:
  labels:
    tags.datadoghq.com/env: "<ENV>"
    tags.datadoghq.com/service: "<SERVICE>"
    tags.datadoghq.com/version: "<VERSION>"
Note: It is not necessary to set the unified service tagging environment variables (`DD_ENV`, `DD_SERVICE`, `DD_VERSION`) in the pod template spec, because the Admission Controller propagates the tag values as environment variables when injecting the library.
For example:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    tags.datadoghq.com/env: "prod" # Unified service tag - Deployment Env tag
    tags.datadoghq.com/service: "my-service" # Unified service tag - Deployment Service tag
    tags.datadoghq.com/version: "1.1" # Unified service tag - Deployment Version tag
  # (...)
spec:
  template:
    metadata:
      labels:
        tags.datadoghq.com/env: "prod" # Unified service tag - Pod Env tag
        tags.datadoghq.com/service: "my-service" # Unified service tag - Pod Service tag
        tags.datadoghq.com/version: "1.1" # Unified service tag - Pod Version tag
        admission.datadoghq.com/enabled: "true" # Enable Admission Controller to mutate new pods part of this deployment
      annotations:
        admission.datadoghq.com/java-lib.version: "<CONTAINER IMAGE TAG>"
    spec:
      containers:
        - # (...)
Step 4 - Apply the configuration
Your pods are ready to be instrumented when their new configuration is applied.
The library is injected on new pods only and does not have any impact on running pods.
Check that the library injection was successful
Library injection works by adding a dedicated init container to your pods.
If the injection was successful, you can see an init container called `datadog-lib-init` in your pod. Run `kubectl describe pod <my-pod>` and check for the `datadog-lib-init` init container, or print the init container names directly with `kubectl get pod <my-pod> -o jsonpath='{.spec.initContainers[*].name}'`.
The instrumentation also starts sending telemetry to Datadog (for example, traces to APM).
Troubleshooting installation issues
If the application pod fails to start, run `kubectl logs <my-pod> --all-containers` to print the logs and compare them to the known issues below.
.NET installation issues
dotnet: error while loading shared libraries: libc.musl-x86_64.so.1: cannot open shared object file: No such file or directory
- Problem: The pod annotation for the dotnet library version included a `-musl` suffix, but the application container runs on a Linux distribution that uses glibc.
- Solution: Remove the `-musl` suffix from the dotnet library version.
Error loading shared library ld-linux-x86-64.so.2: No such file or directory (needed by /datadog-lib/continuousprofiler/Datadog.Linux.ApiWrapper.x64.so)
- Problem: The application container runs on a Linux distribution that uses musl-libc (for example, Alpine), but the pod annotation does not include the `-musl` suffix.
- Solution: Add the `-musl` suffix to the dotnet library version.
Python installation issues
Incompatible Python version
The library injection mechanism for Python only supports injecting the Python library in Python v3.7+.
`user-installed ddtrace found, aborting`
- Problem: The `ddtrace` library is already installed on the system, so the injection logic aborts rather than risk introducing a breaking change in the application.
- Solution: Remove the installed `ddtrace` package (for example, with `pip uninstall ddtrace`) if library injection is desired. Otherwise, use the installed library (see documentation) instead of library injection.
Tracing Library Injection on a host is in beta.
When both the Agent and your services are running on a host, real or virtual, Datadog injects the tracing library by using a preload library that overrides calls to `execve`. Any newly started processes are intercepted and the specified instrumentation library is injected into the services.
Note: Injection on arm64 is not supported.
Install both library injection and the Datadog Agent
Requirements: A host running Linux.
If the host does not yet have a Datadog Agent installed, or if you want to upgrade your Datadog Agent installation, use the Datadog Agent install script to install both the injection libraries and the Datadog Agent:
DD_APM_INSTRUMENTATION_ENABLED=host DD_API_KEY=<YOUR KEY> DD_SITE="<YOUR SITE>" bash -c "$(curl -L https://s3.amazonaws.com/dd-agent/scripts/install_script_agent7.sh)"
By default, running the script installs support for Java, Node.js, Python, Ruby, and .NET. If you want to specify which language support is installed, also set the `DD_APM_INSTRUMENTATION_LANGUAGES` environment variable. The valid values are `java`, `js`, `python`, `ruby`, and `dotnet`. Use a comma-separated list to specify more than one language:
DD_APM_INSTRUMENTATION_LANGUAGES=java,js DD_APM_INSTRUMENTATION_ENABLED=host DD_API_KEY=<YOUR KEY> DD_SITE="<YOUR SITE>" bash -c "$(curl -L https://s3.amazonaws.com/dd-agent/scripts/install_script_agent7.sh)"
Exit and open a new shell to use the injection library.
Install only library injection
Requirements:
If the host already has a Datadog Agent installed, you can install just the injection libraries:
Ensure your Agent is running.
Install the library with one of the following sets of commands, where `<LANG>` is one of `java`, `js`, `dotnet`, `python`, `ruby`, or `all`:
For Ubuntu, Debian or other Debian-based Linux distributions:
sudo apt-get update
sudo apt-get install datadog-apm-inject datadog-apm-library-<LANG>
For CentOS, RedHat, or another distribution that uses yum/RPM:
sudo yum makecache
sudo yum install datadog-apm-inject datadog-apm-library-<LANG>
Run the command `dd-host-install`.
Exit and open a new shell to use the preload library.
After enabling host injection, make sure that the `/etc/datadog-agent/datadog.yaml` configuration file has the following lines at the end:
# BEGIN LD PRELOAD CONFIG
apm_config:
  receiver_socket: /opt/datadog/apm/inject/run/apm.socket
use_dogstatsd: true
dogstatsd_socket: /opt/datadog/apm/inject/run/dsd.socket
remote_configuration:
  enabled: true
# END LD PRELOAD CONFIG
If these properties are set to different values, change them to match. If they are not present, add them. Restart the Datadog Agent.
Next steps
If you haven’t already, install your app and any supporting languages or libraries it requires.
When an app that is written in a supported language is launched, it is automatically injected with tracing enabled.
Configure host injection in one of the following ways:
- Set environment variables on the process being launched.
- Specify the configuration in the file `/etc/datadog-agent/inject/host_config.yaml`.
Values in environment variables override settings in the configuration file on a per-process basis.
Configuration file
Property name | Purpose | Default value | Valid values |
---|
log_level | The logging level | off | off , debug , info , warn , error |
output_paths | The location where log output is written | stderr | stderr or a file:// URL |
env | The default environment assigned to a process | none | n/a |
config_sources | The default configuration for a process | BASIC | See Config Sources |
Example
---
log_level: debug
output_paths:
- file:///tmp/host_injection.log
env: dev
config_sources: BASIC
Environment variables
The following environment variables configure library injection. You can pass them by exporting them on the command line (`export DD_CONFIG_SOURCES=BASIC`), in your shell configuration, or in the launch command.
Each of the fields in the config file corresponds to an environment variable. This environment variable is read from the environment of the process that’s being launched and affects only the process currently being launched.
Config file property | Environment Variable |
---|
log_level | DD_APM_INSTRUMENTATION_DEBUG |
output_paths | DD_APM_INSTRUMENTATION_OUTPUT_PATHS |
env | DD_ENV |
config_sources | DD_CONFIG_SOURCES |
The `DD_APM_INSTRUMENTATION_DEBUG` environment variable is limited to the values `true` and `false` (default `false`). Setting it to `true` sets `log_level` to `debug`; setting it to `false` (or not setting it at all) uses the `log_level` specified in the configuration file. The environment variable can only set the log level to `debug`, not to any other value.
The `DD_INSTRUMENT_SERVICE_WITH_APM` environment variable controls whether injection is enabled. It defaults to `TRUE`. Set it to `FALSE` to turn off library injection altogether.
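For example, a minimal sketch of opting one process out of injection (the launch command below is a placeholder; any command started from this shell inherits the variable):

```shell
# Opt this shell's processes out of library injection
export DD_INSTRUMENT_SERVICE_WITH_APM=FALSE
# then launch the service as usual, for example:
# java -jar my-service.jar
```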
Config sources
By default, the following settings are enabled in an instrumented process:
- Tracing
- Log injection, assuming the application uses structured logging (usually JSON). For traces to appear in non-structured logs, you must change your application’s log configuration to include placeholders for trace ID and span ID. See Connect Logs and Traces for more information.
- Health metrics
- Runtime metrics
You can change these settings for all instrumented processes by setting the `config_sources` property in the configuration file, or for a single process by setting the `DD_CONFIG_SOURCES` environment variable for that process. The valid settings for config sources are:
Configuration Source Name | Meaning |
---|
BASIC | Apply the configurations specified above. If no configuration source is specified, this is the default. |
LOCAL:PATH | Apply the configuration at the specified path on the local file system. The format of the configuration file is described below. Example: LOCAL:/opt/config/my_process_config.yaml |
BLOB:URL | Apply the configuration at the specified path in an S3-compatible object store. The connection URL and the format of the configuration file are described below. Example: BLOB:s3://config_bucket/my_process_config.yaml?region=us-east-1 |
The words `BASIC`, `LOCAL`, and `BLOB` must be uppercase.
Config source values can be separated by semicolons to indicate multiple possible locations. The first configuration that returns without an error is used. Configuration is not merged from multiple configuration sources. The following example checks an S3 bucket for configuration, then checks the local file system, and finally uses the built-in default configuration:
DD_CONFIG_SOURCES="BLOB:s3://config_bucket/my_process_config.yaml?region=us-east-1;LOCAL:/opt/config/my_process_config.yaml;BASIC"
Blob storage support
The supported blob storage solutions are:
- AWS S3 - Set the URL with the `s3://` prefix. If you have authenticated with the AWS CLI, it uses those credentials. See the AWS SDK documentation for information about configuring credentials using environment variables.
- GCP GCS - Set the URL with the `gs://` prefix. It uses Application Default Credentials. Authenticate with `gcloud auth application-default login`. See the Google Cloud authentication documentation for more information about configuring credentials using environment variables.
- Azure Blob - Set the URL with the `azblob://` prefix, and point to a storage container name. It uses the credentials found in `AZURE_STORAGE_ACCOUNT` (that is, the bucket name) plus at least one of `AZURE_STORAGE_KEY` and `AZURE_STORAGE_SAS_TOKEN`.
For more information about configuring `BLOB` or `LOCAL` settings, see Supplying configuration source.
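As a sketch (the bucket and container names below are hypothetical), the same configuration file referenced through each provider's URL scheme:

```yaml
# host_config.yaml (sketch): pick one config source per deployment
config_sources: BLOB:s3://my-bucket/my_process_config.yaml?region=us-east-1
# config_sources: BLOB:gs://my-bucket/my_process_config.yaml
# config_sources: BLOB:azblob://my-container/my_process_config.yaml
```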
Supplying configuration source
The config file for `LOCAL` and `BLOB` can be formatted as JSON:
{
  "version": 1,
  "service_language": "<LANG>",
  "tracing_enabled": true,
  "log_injection_enabled": true,
  "health_metrics_enabled": true,
  "runtime_metrics_enabled": true,
  "tracing_sampling_rate": 1.0,
  "tracing_rate_limit": 1,
  "tracing_tags": ["a=b", "foo"],
  "tracing_service_mapping": [
    { "from_key": "mysql", "to_name": "super_db" },
    { "from_key": "postgres", "to_name": "my_pg" }
  ],
  "tracing_agent_timeout": 1,
  "tracing_header_tags": [
    { "header": "HEADER", "tag_name": "tag" }
  ],
  "tracing_partial_flush_min_spans": 1,
  "tracing_debug": true,
  "tracing_log_level": "debug"
}
Or as YAML:
---
version: 1
service_language: <LANG>
tracing_enabled: true
log_injection_enabled: true
health_metrics_enabled: true
runtime_metrics_enabled: true
tracing_sampling_rate: 1.0
tracing_rate_limit: 1
tracing_tags:
- a=b
- foo
tracing_service_mapping:
  - from_key: mysql
    to_name: super_db
  - from_key: postgres
    to_name: my_pg
tracing_agent_timeout: 1
tracing_header_tags:
  - header: HEADER
    tag_name: tag
tracing_partial_flush_min_spans: 1
tracing_debug: true
tracing_log_level: debug
The value of `version` is always `1`. This refers to the configuration schema version in use, not the version of the content.
If the service's language is known, set `service_language` to one of `java`, `js`, `python`, `dotnet`, or `ruby`. If multiple languages are used, leave `service_language` unset.
The following table shows how the injection configuration values map to the corresponding tracing library configuration options:
Injection | Java tracer | NodeJS tracer | .NET tracer | Python tracer |
---|
tracing_enabled | dd.trace.enabled | DD_TRACE_ENABLED | DD_TRACE_ENABLED | DD_TRACE_ENABLED |
log_injection_enabled | dd.logs.injection | DD_LOGS_INJECTION | DD_LOGS_INJECTION | DD_LOGS_INJECTION |
health_metrics_enabled | dd.trace.health.metrics.enabled | n/a | n/a | n/a |
runtime_metrics_enabled | dd.jmxfetch.enabled | DD_RUNTIME_METRICS_ENABLED | DD_RUNTIME_METRICS_ENABLED | DD_RUNTIME_METRICS_ENABLED |
tracing_sampling_rate | dd.trace.sample.rate | DD_TRACE_SAMPLE_RATE | DD_TRACE_SAMPLE_RATE | DD_TRACE_SAMPLE_RATE |
tracing_rate_limit | n/a | DD_TRACE_RATE_LIMIT | DD_TRACE_RATE_LIMIT | DD_TRACE_RATE_LIMIT |
tracing_tags | dd.tags | DD_TAGS | DD_TAGS | DD_TAGS |
tracing_service_mapping | dd.service.mapping | DD_SERVICE_MAPPING | DD_TRACE_SERVICE_MAPPING | DD_SERVICE_MAPPING |
tracing_agent_timeout | dd.trace.agent.timeout | n/a | n/a | n/a |
tracing_header_tags | dd.trace.header.tags | n/a | DD_TRACE_HEADER_TAGS | DD_TRACE_HEADER_TAGS |
tracing_partial_flush_min_spans | dd.trace.partial.flush.min.spans | DD_TRACE_PARTIAL_FLUSH_MIN_SPANS | DD_TRACE_PARTIAL_FLUSH_ENABLED | n/a |
tracing_debug | dd.trace.debug | DD_TRACE_DEBUG | DD_TRACE_DEBUG | DD_TRACE_DEBUG |
tracing_log_level | datadog.slf4j.simpleLogger.defaultLogLevel | DD_TRACE_LOG_LEVEL | n/a | n/a |
Tracer library configuration options that aren’t mentioned in the injection configuration are still available for use through properties or environment variables in the usual way.
Basic configuration settings
`BASIC` configuration settings are equivalent to the following YAML settings:
---
version: 1
tracing_enabled: true
log_injection_enabled: true
health_metrics_enabled: true
runtime_metrics_enabled: true
Launch your services
Launch your services, indicating the preload library configuration in the launch command. If `DD_CONFIG_SOURCES` is not specified, the value specified for `config_sources` in the `/etc/datadog-agent/inject/host_config.yaml` config file is used. If that is not specified either, `DD_CONFIG_SOURCES` defaults to `BASIC`:
Java app example:
java -jar <SERVICE_1>.jar &
DD_CONFIG_SOURCES="LOCAL:/etc/<SERVICE_2>/config.yaml;BASIC" java -jar <SERVICE_2>.jar &
Node app example:
node index.js &
DD_CONFIG_SOURCES="LOCAL:/etc/<SERVICE_2>/config.yaml;BASIC" node index.js &
.NET app example:
dotnet <SERVICE_1>.dll &
DD_CONFIG_SOURCES="LOCAL:/etc/<SERVICE_2>/config.yaml;BASIC" dotnet <SERVICE_2>.dll &
Python app example:
python <SERVICE_1>.py &
DD_CONFIG_SOURCES="LOCAL:/etc/<SERVICE_2>/config.yaml;BASIC" python <SERVICE_2>.py &
Exercise your application to start generating telemetry data, which you can see as traces in APM.
Tracing Library Injection on hosts and containers is in beta.
When your Agent is running on a host, and your services are running in containers, Datadog injects the tracing library by intercepting container creation and configuring the Docker container.
Any newly started processes are intercepted and the specified instrumentation library is injected into the services.
Note: Injection on arm64 is not supported.
Install both library injection and the Datadog Agent
Requirements:
If the host does not yet have a Datadog Agent installed, or if you want to upgrade your Datadog Agent installation, use the Datadog Agent install script to install both the injection libraries and the Datadog Agent:
DD_APM_INSTRUMENTATION_ENABLED=all DD_API_KEY=<YOUR KEY> DD_SITE="<YOUR SITE>" bash -c "$(curl -L https://s3.amazonaws.com/dd-agent/scripts/install_script_agent7.sh)"
By default, running the script installs support for Java, Node.js, Python, Ruby, and .NET. If you want to specify which language support is installed, also set the `DD_APM_INSTRUMENTATION_LANGUAGES` environment variable. The valid values are `java`, `js`, `python`, `ruby`, and `dotnet`. Use a comma-separated list to specify more than one language:
DD_APM_INSTRUMENTATION_LANGUAGES=java,js DD_APM_INSTRUMENTATION_ENABLED=all DD_API_KEY=<YOUR KEY> DD_SITE="<YOUR SITE>" bash -c "$(curl -L https://s3.amazonaws.com/dd-agent/scripts/install_script_agent7.sh)"
Install only library injection
Requirements:
If the host already has a Datadog Agent installed, you can install just the injection libraries:
Ensure your Agent is running.
Install the library with one of the following sets of commands, where `<LANG>` is one of `java`, `js`, `dotnet`, `python`, `ruby`, or `all`:
For Ubuntu, Debian or other Debian-based Linux distributions:
sudo apt-get update
sudo apt-get install datadog-apm-inject datadog-apm-library-<LANG>
For CentOS, RedHat, or another distribution that uses yum/RPM:
sudo yum makecache
sudo yum install datadog-apm-inject datadog-apm-library-<LANG>
Run the command `dd-host-container-install`.
After enabling host injection, make sure that the `/etc/datadog-agent/datadog.yaml` configuration file has the following lines at the end:
# BEGIN LD PRELOAD CONFIG
apm_config:
  receiver_socket: /opt/datadog/apm/inject/run/apm.socket
use_dogstatsd: true
dogstatsd_socket: /opt/datadog/apm/inject/run/dsd.socket
remote_configuration:
  enabled: true
# END LD PRELOAD CONFIG
If these properties are set to different values, change them to match. If they are not present, add them. Restart the Datadog Agent.
Edit `/etc/datadog-agent/inject/docker_config.yaml` and add the following YAML configuration for the injection:
---
log_level: debug
output_paths:
- stderr
config_sources: BASIC
`config_sources`
- Turns library injection on or off and specifies a semicolon-separated, ordered list of places where configuration is stored. The first value that returns without an error is used. Configuration is not merged across configuration sources. The valid values are:
  - `BLOB:<URL>` - Load configuration from a blob store (S3-compatible) located at `<URL>`.
  - `LOCAL:<PATH>` - Load from a file on the local file system at `<PATH>`.
  - `BASIC` - Use default values. If `config_sources` is not specified, this configuration is used.
The words `BASIC`, `LOCAL`, and `BLOB` must be uppercase.
Config source values can be separated by semicolons to indicate multiple possible locations. The first configuration that returns without an error is used. Configuration is not merged from multiple configuration sources. The following example checks an S3 bucket for configuration, then checks the local file system, and finally uses the built-in default configuration:
config_sources: BLOB:s3://config_bucket/my_process_config.yaml?region=us-east-1;LOCAL:/opt/config/my_process_config.yaml;BASIC
For more information about configuring `BLOB` or `LOCAL` settings, see Supplying configuration source.
`log_level`
- Set to `debug` to log detailed information about what is happening, or to `info`, `warn`, or `error` to log far less. Default: `info`.
`output_paths`
- A list of one or more places to write logs. Default: `stderr`.
`env` (optional)
- Specifies the `DD_ENV` tag for the containers running in Docker, for example, `dev`, `prod`, or `staging`. Default: none.
Supplying configuration source
If you specify a `BLOB` or `LOCAL` configuration source, create a JSON or YAML file there, and provide the configuration either as JSON:
{
  "version": 1,
  "service_language": "<LANG>",
  "tracing_enabled": true,
  "log_injection_enabled": true,
  "health_metrics_enabled": true,
  "runtime_metrics_enabled": true,
  "tracing_sampling_rate": 1.0,
  "tracing_rate_limit": 1,
  "tracing_tags": ["a=b", "foo"],
  "tracing_service_mapping": [
    { "from_key": "mysql", "to_name": "super_db" },
    { "from_key": "postgres", "to_name": "my_pg" }
  ],
  "tracing_agent_timeout": 1,
  "tracing_header_tags": [
    { "header": "HEADER", "tag_name": "tag" }
  ],
  "tracing_partial_flush_min_spans": 1,
  "tracing_debug": true,
  "tracing_log_level": "debug"
}
Or as YAML:
---
version: 1
service_language: <LANG>
tracing_enabled: true
log_injection_enabled: true
health_metrics_enabled: true
runtime_metrics_enabled: true
tracing_sampling_rate: 1.0
tracing_rate_limit: 1
tracing_tags:
- a=b
- foo
tracing_service_mapping:
  - from_key: mysql
    to_name: super_db
  - from_key: postgres
    to_name: my_pg
tracing_agent_timeout: 1
tracing_header_tags:
  - header: HEADER
    tag_name: tag
tracing_partial_flush_min_spans: 1
tracing_debug: true
tracing_log_level: debug
Set `service_language` to one of `java`, `js`, `python`, `dotnet`, or `ruby`.
In this configuration file, the value of `version` is always `1`. This refers to the configuration schema version in use, not the version of the content.
The following table shows how the injection configuration values map to the corresponding tracing library configuration options:
Injection | Java tracer | NodeJS tracer | .NET tracer | Python tracer |
---|
tracing_enabled | dd.trace.enabled | DD_TRACE_ENABLED | DD_TRACE_ENABLED | DD_TRACE_ENABLED |
log_injection_enabled | dd.logs.injection | DD_LOGS_INJECTION | DD_LOGS_INJECTION | DD_LOGS_INJECTION |
health_metrics_enabled | dd.trace.health.metrics.enabled | n/a | n/a | n/a |
runtime_metrics_enabled | dd.jmxfetch.enabled | DD_RUNTIME_METRICS_ENABLED | DD_RUNTIME_METRICS_ENABLED | DD_RUNTIME_METRICS_ENABLED |
tracing_sampling_rate | dd.trace.sample.rate | DD_TRACE_SAMPLE_RATE | DD_TRACE_SAMPLE_RATE | DD_TRACE_SAMPLE_RATE |
tracing_rate_limit | n/a | DD_TRACE_RATE_LIMIT | DD_TRACE_RATE_LIMIT | DD_TRACE_RATE_LIMIT |
tracing_tags | dd.tags | DD_TAGS | DD_TAGS | DD_TAGS |
tracing_service_mapping | dd.service.mapping | DD_SERVICE_MAPPING | DD_TRACE_SERVICE_MAPPING | DD_SERVICE_MAPPING |
tracing_agent_timeout | dd.trace.agent.timeout | n/a | n/a | n/a |
tracing_header_tags | dd.trace.header.tags | n/a | DD_TRACE_HEADER_TAGS | DD_TRACE_HEADER_TAGS |
tracing_partial_flush_min_spans | dd.trace.partial.flush.min.spans | DD_TRACE_PARTIAL_FLUSH_MIN_SPANS | DD_TRACE_PARTIAL_FLUSH_ENABLED | n/a |
tracing_debug | dd.trace.debug | DD_TRACE_DEBUG | DD_TRACE_DEBUG | DD_TRACE_DEBUG |
tracing_log_level | datadog.slf4j.simpleLogger.defaultLogLevel | DD_TRACE_LOG_LEVEL | n/a | n/a |
Tracer library configuration options that aren’t mentioned in the injection configuration are still available for use through properties or environment variables in the usual way.
Blob storage support
The supported blob storage solutions are:
- AWS S3 - Set the URL with the `s3://` prefix. If you have authenticated with the AWS CLI, it uses those credentials. See the AWS SDK documentation for information about configuring credentials using environment variables.
- GCP GCS - Set the URL with the `gs://` prefix. It uses Application Default Credentials. Authenticate with `gcloud auth application-default login`. See the Google Cloud authentication documentation for more information about configuring credentials using environment variables.
- Azure Blob - Set the URL with the `azblob://` prefix, and point to a storage container name. It uses the credentials found in `AZURE_STORAGE_ACCOUNT` (that is, the bucket name) plus at least one of `AZURE_STORAGE_KEY` and `AZURE_STORAGE_SAS_TOKEN`.
For more information about configuring `BLOB` or `LOCAL` settings, see Supplying configuration source.
Basic configuration settings
`BASIC` configuration settings are equivalent to the following YAML settings:
---
version: 1
tracing_enabled: true
log_injection_enabled: true
health_metrics_enabled: true
runtime_metrics_enabled: true
If the environment variables `DD_ENV`, `DD_SERVICE`, or `DD_VERSION` are specified in a service container image, those values are used to tag telemetry from the container.
If they are not specified, `DD_ENV` uses the `env` value set in the `/etc/datadog-agent/inject/docker_config.yaml` config file, if any. `DD_SERVICE` is derived from the name of the Docker image: an image named `my-service:1.0` is tagged with a `DD_SERVICE` of `my-service`.
Launch your services
Start your Agent and launch your containerized services as usual.
Exercise your application to start generating telemetry data, which you can see as traces in APM.
Tracing Library Injection in containers is in beta.
When your Agent and services are running in separate Docker containers on the same host, Datadog injects the tracing library by intercepting container creation and configuring the Docker container.
Any newly started processes are intercepted and the specified instrumentation library is injected into the services.
Requirements:
Note: Injection on arm64 is not supported.
Install the preload library
Use the `install_script_docker_injection` shell script to automatically install Docker injection support. Docker must already be installed on the host machine.
bash -c "$(curl -L https://s3.amazonaws.com/dd-agent/scripts/install_script_docker_injection.sh)"
This installs language libraries for all supported languages. To install support for specific languages only, set the `DD_APM_INSTRUMENTATION_LANGUAGES` variable. The valid values are `java`, `js`, `python`, `ruby`, and `dotnet`:
DD_APM_INSTRUMENTATION_LANGUAGES=java,js bash -c "$(curl -L https://s3.amazonaws.com/dd-agent/scripts/install_script_docker_injection.sh)"
Edit `/etc/datadog-agent/inject/docker_config.yaml` and add the following YAML configuration for the injection:
---
log_level: debug
output_paths:
- stderr
config_sources: BASIC
`config_sources`
- Turns library injection on or off and specifies a semicolon-separated, ordered list of places where configuration is stored. The first value that returns without an error is used. Configuration is not merged across configuration sources. The valid values are:
  - `BLOB:<URL>` - Load configuration from a blob store (S3-compatible) located at `<URL>`.
  - `LOCAL:<PATH>` - Load from a file on the local file system at `<PATH>`.
  - `BASIC` - Use default values. If `config_sources` is not specified, this configuration is used.
The words `BASIC`, `LOCAL`, and `BLOB` must be uppercase.
Config source values can be separated by semicolons to indicate multiple possible locations. The first configuration that returns without an error is used. Configuration is not merged from multiple configuration sources. The following example checks an S3 bucket for configuration, then checks the local file system, and finally uses the built-in default configuration:
config_sources: BLOB:s3://config_bucket/my_process_config.yaml?region=us-east-1;LOCAL:/opt/config/my_process_config.yaml;BASIC
For more information about configuring `BLOB` or `LOCAL` settings, see Supplying configuration source.
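The fallback behavior described above can be sketched as a small resolver. This is an illustrative model only, not the Agent's actual implementation; `resolve_config`, `load_local`, and the loader table are hypothetical names:

```python
def resolve_config(config_sources, loaders):
    """Try each semicolon-separated source in order and return the first
    configuration that loads without an error; sources are never merged."""
    for source in config_sources.split(";"):
        scheme, _, location = source.partition(":")  # e.g. "LOCAL", "/opt/..."
        try:
            return loaders[scheme](location)
        except Exception:
            continue  # fall back to the next source in the list
    raise RuntimeError("no configuration source succeeded")

def load_local(path):
    # Hypothetical LOCAL loader: fails here to simulate a missing file.
    raise FileNotFoundError(path)

loaders = {
    "LOCAL": load_local,
    # BASIC takes no location; it returns built-in defaults.
    "BASIC": lambda _: {"version": 1, "tracing_enabled": True},
}

# LOCAL fails, so the resolver falls back to BASIC.
config = resolve_config("LOCAL:/opt/config/my_process_config.yaml;BASIC", loaders)
print(config)  # → {'version': 1, 'tracing_enabled': True}
```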
`log_level`
- Set to `debug` to log detailed information about what is happening, or `info` to log far less.

`output_paths`
- A list of one or more places to write logs. Default: `stderr`.

Optional:

`env`
- Specifies the `DD_ENV` tag for the containers running in Docker, for example, `dev`, `prod`, `staging`. Default: none.
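Putting these settings together, a `docker_config.yaml` that falls back from a local file to the built-in defaults might look like the following sketch (the config file path and `staging` value are illustrative):

```yaml
---
log_level: info
output_paths:
  - stderr
config_sources: LOCAL:/opt/config/my_process_config.yaml;BASIC
env: staging
```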
Supplying configuration source
If you specify BLOB
or LOCAL
configuration source, create a JSON or YAML file there, and provide the configuration either as JSON:
{
  "version": 1,
  "service_language": "<LANG>",
  "tracing_enabled": true,
  "log_injection_enabled": true,
  "health_metrics_enabled": true,
  "runtime_metrics_enabled": true,
  "tracing_sampling_rate": 1.0,
  "tracing_rate_limit": 1,
  "tracing_tags": ["a=b", "foo"],
  "tracing_service_mapping": [
    { "from_key": "mysql", "to_name": "super_db" },
    { "from_key": "postgres", "to_name": "my_pg" }
  ],
  "tracing_agent_timeout": 1,
  "tracing_header_tags": [
    { "header": "HEADER", "tag_name": "tag" }
  ],
  "tracing_partial_flush_min_spans": 1,
  "tracing_debug": true,
  "tracing_log_level": "debug"
}
Or as YAML:
---
version: 1
service_language: <LANG>
tracing_enabled: true
log_injection_enabled: true
health_metrics_enabled: true
runtime_metrics_enabled: true
tracing_sampling_rate: 1.0
tracing_rate_limit: 1
tracing_tags:
  - a=b
  - foo
tracing_service_mapping:
  - from_key: mysql
    to_name: super_db
  - from_key: postgres
    to_name: my_pg
tracing_agent_timeout: 1
tracing_header_tags:
  - header: HEADER
    tag_name: tag
tracing_partial_flush_min_spans: 1
tracing_debug: true
tracing_log_level: debug
Set `service_language` to one of the following values:
In this configuration file, the value of `version` is always `1`. This refers to the configuration schema version in use, not the version of the content.
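As a quick sanity check before publishing a configuration file, a minimal validator can flag an unexpected schema version or a misspelled key. The key list mirrors the options shown above; `validate_config` itself is a hypothetical helper, not part of any Datadog tooling:

```python
# Keys accepted by the version-1 injection configuration schema shown above.
KNOWN_KEYS = {
    "version", "service_language", "tracing_enabled", "log_injection_enabled",
    "health_metrics_enabled", "runtime_metrics_enabled", "tracing_sampling_rate",
    "tracing_rate_limit", "tracing_tags", "tracing_service_mapping",
    "tracing_agent_timeout", "tracing_header_tags",
    "tracing_partial_flush_min_spans", "tracing_debug", "tracing_log_level",
}

def validate_config(cfg):
    """Return a list of problems; an empty list means the config looks sane."""
    problems = []
    if cfg.get("version") != 1:
        problems.append("version must be 1 (the schema version, not a content version)")
    problems.extend(f"unknown key: {k}" for k in sorted(set(cfg) - KNOWN_KEYS))
    return problems

print(validate_config({"version": 1, "tracing_enabled": True}))   # → []
print(validate_config({"version": 2, "tracing_debugg": True}))    # two problems
```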
The following table shows how the injection configuration values map to the corresponding tracing library configuration options:
| Injection | Java tracer | NodeJS tracer | .NET tracer | Python tracer |
|---|---|---|---|---|
| tracing_enabled | dd.trace.enabled | DD_TRACE_ENABLED | DD_TRACE_ENABLED | DD_TRACE_ENABLED |
| log_injection_enabled | dd.logs.injection | DD_LOGS_INJECTION | DD_LOGS_INJECTION | DD_LOGS_INJECTION |
| health_metrics_enabled | dd.trace.health.metrics.enabled | n/a | n/a | n/a |
| runtime_metrics_enabled | dd.jmxfetch.enabled | DD_RUNTIME_METRICS_ENABLED | DD_RUNTIME_METRICS_ENABLED | DD_RUNTIME_METRICS_ENABLED |
| tracing_sampling_rate | dd.trace.sample.rate | DD_TRACE_SAMPLE_RATE | DD_TRACE_SAMPLE_RATE | DD_TRACE_SAMPLE_RATE |
| tracing_rate_limit | n/a | DD_TRACE_RATE_LIMIT | DD_TRACE_RATE_LIMIT | DD_TRACE_RATE_LIMIT |
| tracing_tags | dd.tags | DD_TAGS | DD_TAGS | DD_TAGS |
| tracing_service_mapping | dd.service.mapping | DD_SERVICE_MAPPING | DD_TRACE_SERVICE_MAPPING | DD_SERVICE_MAPPING |
| tracing_agent_timeout | dd.trace.agent.timeout | n/a | n/a | n/a |
| tracing_header_tags | dd.trace.header.tags | n/a | DD_TRACE_HEADER_TAGS | DD_TRACE_HEADER_TAGS |
| tracing_partial_flush_min_spans | dd.trace.partial.flush.min.spans | DD_TRACE_PARTIAL_FLUSH_MIN_SPANS | DD_TRACE_PARTIAL_FLUSH_ENABLED | n/a |
| tracing_debug | dd.trace.debug | DD_TRACE_DEBUG | DD_TRACE_DEBUG | DD_TRACE_DEBUG |
| tracing_log_level | datadog.slf4j.simpleLogger.defaultLogLevel | DD_TRACE_LOG_LEVEL | n/a | n/a |
Tracer library configuration options that are not covered by the injection configuration remain available through system properties or environment variables in the usual way.
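For example, a setting with no injection-configuration equivalent can be set directly on the service container in the Compose file. The service name below is illustrative, and `DD_PROFILING_ENABLED` is used only as an example of such an option:

```yaml
my-service:
  image: my-service:1.0
  environment:
    # Example of a tracer option set directly on the service container
    # because it is not part of the injection configuration schema.
    - DD_PROFILING_ENABLED=true
```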
Blob storage support
The supported blob storage solutions are:
- AWS S3 - Set the URL with the `s3://` prefix. If you have authenticated with the AWS CLI, it uses those credentials. See the AWS SDK documentation for information about configuring credentials using environment variables.
- GCP GCS - Set the URL with the `gs://` prefix. It uses Application Default Credentials. Authenticate with `gcloud auth application-default login`. See the Google Cloud authentication documentation for more information about configuring credentials using environment variables.
- Azure Blob - Set the URL with the `azblob://` prefix, and point to a storage container name. It uses the credentials found in `AZURE_STORAGE_ACCOUNT` (that is, the bucket name) plus at least one of `AZURE_STORAGE_KEY` and `AZURE_STORAGE_SAS_TOKEN`.
Basic configuration settings
`BASIC` configuration settings are equivalent to the following YAML settings:
---
version: 1
tracing_enabled: true
log_injection_enabled: true
health_metrics_enabled: true
runtime_metrics_enabled: true
In the Docker Compose file that launches your containers, use the following settings for the Agent, securely setting your own Datadog API key for `${DD_API_KEY}`:
dd-agent:
  container_name: dd-agent
  image: datadog/agent:7
  environment:
    - DD_API_KEY=${DD_API_KEY}
    - DD_APM_ENABLED=true
    - DD_APM_NON_LOCAL_TRAFFIC=true
    - DD_DOGSTATSD_NON_LOCAL_TRAFFIC=true
    - DD_APM_RECEIVER_SOCKET=/opt/datadog/apm/inject/run/apm.socket
    - DD_DOGSTATSD_SOCKET=/opt/datadog/apm/inject/run/dsd.socket
  volumes:
    - /opt/datadog/apm:/opt/datadog/apm
    - /var/run/docker.sock:/var/run/docker.sock:ro
If the environment variables `DD_ENV`, `DD_SERVICE`, or `DD_VERSION` are specified in a service container image, those values are used to tag telemetry from the container.
If they are not specified, `DD_ENV` uses the `env` value set in the `/etc/datadog-agent/inject/docker_config.yaml` config file, if any. `DD_SERVICE` is derived from the name of the Docker image. An image with the name `my-service:1.0` is tagged with a `DD_SERVICE` of `my-service`.
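The image-name derivation can be illustrated with a short sketch. This mirrors the rule described above for simple image references; it is not the Agent's actual implementation, and it does not handle registry hosts that include a port:

```python
def derive_service_name(image_ref):
    """Derive a DD_SERVICE value from a Docker image reference by
    dropping the tag and any registry/repository path components.
    Simplification: a registry host with a port (host:5000/img) is not handled."""
    name = image_ref.rsplit(":", 1)[0]   # strip the tag, if any
    return name.rsplit("/", 1)[-1]       # keep only the final path segment

print(derive_service_name("my-service:1.0"))                            # → my-service
print(derive_service_name("registry.example.com/team/my-service:2.3"))  # → my-service
```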
Launch the Agent on Docker
The `dd-agent` container must be launched before any service containers. Run:
docker-compose up -d dd-agent
Launch your services
Launch your containerized services as usual.
Exercise your application to start generating telemetry data, which you can see as traces in APM.