Injecting Libraries Locally

Overview

There are several ways to automatically instrument your application; this page covers injecting the instrumentation library locally.

For more information, see Automatic Instrumentation.

How to inject the library locally, without touching the application code at all, varies depending on where and how your Agent and application are installed. Select the scenario that represents your environment:

With the Admission Controller approach, the Agent uses the Kubernetes Admission Controller to intercept requests to the Kubernetes API and mutate new pods to inject the specified instrumentation library.

Library injection is applied on new pods only and does not have any impact on running pods.

To learn more about Kubernetes admission controllers, read the Kubernetes Admission Controllers Reference.

Requirements

  • Kubernetes v1.14+
  • Datadog Cluster Agent v7.40+ for Java, Python, and NodeJS; Datadog Cluster Agent v7.44+ for .NET and Ruby.
  • Datadog Admission Controller enabled. Note: In Helm chart v2.35.0 and later, Datadog Admission Controller is activated by default in the Cluster Agent.
  • For Python, uWSGI applications are not supported at this time.
  • For Ruby, library injection support is in Beta. Instrumentation is only supported for Ruby on Rails applications with Bundler version greater than 2.3 and without vendored gems (deployment mode or BUNDLE_PATH).
  • Applications in Java, JavaScript, Python, .NET, or Ruby deployed on Linux with a supported architecture. Check the corresponding container registry for the complete list of supported architectures by language.

Container registries

Docker Hub is subject to image pull rate limits. If you are not a Docker Hub customer, Datadog recommends that you update your Datadog Agent and Cluster Agent configuration to pull from GCR or ECR. For instructions, see Changing your container registry.

Datadog publishes instrumentation library images on gcr.io, Docker Hub, and Amazon ECR:

The DD_ADMISSION_CONTROLLER_AUTO_INSTRUMENTATION_CONTAINER_REGISTRY environment variable in the Datadog Cluster Agent configuration specifies the registry used by the Admission Controller. The default value is gcr.io/datadoghq.

You can pull the tracing library from a different registry by changing it to docker.io/datadog, public.ecr.aws/datadog, or another URL if you are hosting the images in a local container registry.
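For example, with a Helm-based installation, a values file along these lines points the Cluster Agent at Amazon ECR (a sketch; it assumes your chart version supports the clusterAgent.env list for passing environment variables):

```yaml
# values.yaml (sketch): pull instrumentation library images from Amazon ECR
# instead of the default gcr.io/datadoghq
clusterAgent:
  env:
    - name: DD_ADMISSION_CONTROLLER_AUTO_INSTRUMENTATION_CONTAINER_REGISTRY
      value: public.ecr.aws/datadog
```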

Configure instrumentation library injection

For your Kubernetes applications whose traces you want to send to Datadog, configure the Datadog Admission Controller to inject Java, JavaScript, Python, .NET, or Ruby instrumentation libraries automatically. At a high level, this involves the following steps, described in detail below:

  1. Enable Datadog Admission Controller to mutate your pods.
  2. Annotate your pods to select which instrumentation library to inject.
  3. Tag your pods with Unified Service Tags to tie Datadog telemetry together and navigate seamlessly across traces, metrics, and logs with consistent tags.
  4. Apply your new configuration.

You do not need to generate a new application image to inject the library. Library injection adds the instrumentation library for you, so no change to your application image is required.

Step 1 - Enable Datadog Admission Controller to mutate your pods

By default, the Datadog Admission Controller mutates only pods with a specific label. To enable mutation on your pods, add the label admission.datadoghq.com/enabled: "true" to your pod spec.

Note: You can configure the Datadog Admission Controller to inject libraries into pods without this label by setting clusterAgent.admissionController.mutateUnlabelled (or DD_ADMISSION_CONTROLLER_MUTATE_UNLABELLED) to true in the Cluster Agent configuration.
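With the datadog Helm chart, that setting can be sketched in a values file like this (key names assumed from the chart's clusterAgent.admissionController settings):

```yaml
# values.yaml (sketch): enable the Admission Controller and allow it to
# mutate pods that do not carry the admission.datadoghq.com/enabled label
clusterAgent:
  admissionController:
    enabled: true
    mutateUnlabelled: true
```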

For more details on how to configure this, read the Datadog Admission Controller page.

For example:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    # (...)
spec:
  template:
    metadata:
      labels:
        admission.datadoghq.com/enabled: "true" # Enable Admission Controller to mutate new pods part of this deployment
    spec:
      containers:
        - # (...)

Step 2 - Annotate your pods for library injection

To select your pods for library injection, use the annotations provided in the following table within your pod spec:

Language   | Pod annotation
Java       | admission.datadoghq.com/java-lib.version: "<CONTAINER IMAGE TAG>"
JavaScript | admission.datadoghq.com/js-lib.version: "<CONTAINER IMAGE TAG>"
Python     | admission.datadoghq.com/python-lib.version: "<CONTAINER IMAGE TAG>"
.NET       | admission.datadoghq.com/dotnet-lib.version: "<CONTAINER IMAGE TAG>"
Ruby       | admission.datadoghq.com/ruby-lib.version: "<CONTAINER IMAGE TAG>"

The available library versions are listed in each container registry, as well as in the tracer source repositories for each language:

  • Java
  • JavaScript
  • Python
  • .NET
    • Note: For .NET library injection, if the application container uses a musl-based Linux distribution (such as Alpine), you must specify a tag with the -musl suffix for the pod annotation. For example, to use library version v2.29.0, specify container tag v2.29.0-musl.
  • Ruby

Note: If you already have an application instrumented using version X of the library, and then use library injection to instrument using version Y of the same tracer library, the tracer does not break. Rather, the library version loaded first is used. Because library injection happens at the admission controller level prior to runtime, it takes precedence over manually configured libraries.

Note: Using the latest tag is supported, but use it with caution because major library releases can introduce breaking changes.

For example, to inject a Java library:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    # (...)
spec:
  template:
    metadata:
      labels:
        admission.datadoghq.com/enabled: "true" # Enable Admission Controller to mutate new pods in this deployment
      annotations:
        admission.datadoghq.com/java-lib.version: "<CONTAINER IMAGE TAG>"
    spec:
      containers:
        - # (...)

Step 3 - Tag your pods with Unified Service Tags

With Unified Service Tags, you can tie Datadog telemetry together and navigate seamlessly across traces, metrics, and logs with consistent tags. Set unified service tags on both the deployment object and the pod template spec by using the following labels:

  metadata:
    labels:
      tags.datadoghq.com/env: "<ENV>"
      tags.datadoghq.com/service: "<SERVICE>"
      tags.datadoghq.com/version: "<VERSION>"

Note: It is not necessary to set the unified service tagging environment variables (DD_ENV, DD_SERVICE, DD_VERSION) in the pod template spec, because the Admission Controller propagates the tag values as environment variables when injecting the library.

For example:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    tags.datadoghq.com/env: "prod" # Unified service tag - Deployment Env tag
    tags.datadoghq.com/service: "my-service" # Unified service tag - Deployment Service tag
    tags.datadoghq.com/version: "1.1" # Unified service tag - Deployment Version tag
  # (...)
spec:
  template:
    metadata:
      labels:
        tags.datadoghq.com/env: "prod" # Unified service tag - Pod Env tag
        tags.datadoghq.com/service: "my-service" # Unified service tag - Pod Service tag
        tags.datadoghq.com/version: "1.1" # Unified service tag - Pod Version tag
        admission.datadoghq.com/enabled: "true" # Enable Admission Controller to mutate new pods part of this deployment
      annotations:
        admission.datadoghq.com/java-lib.version: "<CONTAINER IMAGE TAG>"
    spec:
      containers:
        - # (...)

Step 4 - Apply the configuration

Your pods are ready to be instrumented when their new configuration is applied.

The library is injected on new pods only and does not have any impact on running pods.

Check that the library injection was successful

Library injection relies on a dedicated init container injected into pods. If the injection was successful, you can see an init container called datadog-lib-init in your pod:

Kubernetes environment details page showing init container in the pod.

Or run kubectl describe pod <my-pod> to see the datadog-lib-init init container listed.

The instrumentation also starts sending telemetry to Datadog (for example, traces to APM).

Troubleshooting installation issues

If the application pod fails to start, run kubectl logs <my-pod> --all-containers to print out the logs and compare them to the known issues below.

.NET installation issues

dotnet: error while loading shared libraries: libc.musl-x86_64.so.1: cannot open shared object file: No such file or directory
  • Problem: The pod annotation for the dotnet library version included a -musl suffix, but the application container runs on a Linux distribution that uses glibc.
  • Solution: Remove the -musl suffix from the dotnet library version.
Error loading shared library ld-linux-x86-64.so.2: No such file or directory (needed by /datadog-lib/continuousprofiler/Datadog.Linux.ApiWrapper.x64.so)
  • Problem: The application container runs on a Linux distribution that uses musl-libc (for example, Alpine), but the pod annotation does not include the -musl suffix.
  • Solution: Add the -musl suffix to the dotnet library version.

Python installation issues

Noisy library logs

In versions of the Python library earlier than 1.20.3, injection logs are written to stderr. Upgrade to 1.20.3 or later to suppress these logs by default. To re-enable them, set the environment variable DD_TRACE_DEBUG to 1.

Incompatible Python version

The library injection mechanism for Python only supports injecting the Python library in Python v3.7+.

user-installed ddtrace found, aborting
  • Problem: The ddtrace library is already installed on the system so the injection logic aborts injecting the library to avoid introducing a breaking change in the application.
  • Solution: Remove the installation of ddtrace if library injection is desired. Otherwise, use the installed library (see documentation) instead of library injection.

Tracing Library Injection on a host is in beta.

When both the Agent and your services are running on a host, real or virtual, Datadog injects the tracing library by using a preload library that overrides calls to execve. Any newly started processes are intercepted and the specified instrumentation library is injected into the services.

Note: Injection on arm64 is not supported.

Install both library injection and the Datadog Agent

Requirements: A host running Linux.

If the host does not yet have a Datadog Agent installed, or if you want to upgrade your Datadog Agent installation, use the Datadog Agent install script to install both the injection libraries and the Datadog Agent:

DD_APM_INSTRUMENTATION_ENABLED=host DD_API_KEY=<YOUR KEY> DD_SITE="<YOUR SITE>" bash -c "$(curl -L https://s3.amazonaws.com/dd-agent/scripts/install_script_agent7.sh)"

By default, running the script installs support for Java, Node.js, Python, Ruby, and .NET. If you want to specify which language support is installed, also set the DD_APM_INSTRUMENTATION_LANGUAGES environment variable. The valid values are java, js, python, ruby, and dotnet. Use a comma-separated list to specify more than one language:

DD_APM_INSTRUMENTATION_LANGUAGES=java,js DD_APM_INSTRUMENTATION_ENABLED=host DD_API_KEY=<YOUR KEY> DD_SITE="<YOUR SITE>" bash -c "$(curl -L https://s3.amazonaws.com/dd-agent/scripts/install_script_agent7.sh)"

Exit and open a new shell to use the injection library.

Next steps

If you haven’t already, install your app and any supporting languages or libraries it requires.

When an app that is written in a supported language is launched, it is automatically injected with tracing enabled.

Configure the injection

Configure host injection in one of the following ways:

  • Set environment variables on the process being launched.
  • Specify host injection configuration in the /etc/datadog-agent/inject/host_config.yaml file.

Values in environment variables override settings in the configuration file on a per-process basis.

Configuration file

Property name  | Purpose                                       | Default value | Valid values
log_level      | The logging level                             | off           | off, debug, info, warn, error
output_paths   | The location where log output is written      | stderr        | stderr or a file:// URL
env            | The default environment assigned to a process | none          | n/a
config_sources | The default configuration for a process       | BASIC         | See Config sources

Example

---
log_level: debug
output_paths:
  - file:///tmp/host_injection.log
env: dev
config_sources: BASIC

Environment variables

The following environment variables configure library injection. You can set them on the command line with export (for example, export DD_CONFIG_SOURCES=BASIC), in your shell configuration, or in the launch command.

Each field in the config file corresponds to an environment variable. The environment variable is read from the environment of the process being launched and affects only that process.

Config file property | Environment variable
log_level            | DD_APM_INSTRUMENTATION_DEBUG
output_paths         | DD_APM_INSTRUMENTATION_OUTPUT_PATHS
env                  | DD_ENV
config_sources       | DD_CONFIG_SOURCES

The DD_APM_INSTRUMENTATION_DEBUG environment variable is limited to the values true and false (default value false). Setting it to true sets log_level to debug and setting it to false (or not setting it at all) uses the log_level specified in the configuration file. The environment variable can only set the log level to debug, not any other log level values.

The DD_INSTRUMENT_SERVICE_WITH_APM environment variable controls whether or not injection is enabled. It defaults to TRUE. Set it to FALSE to turn off library injection altogether.
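For example, the two variables can be combined to control injection per process (a sketch; the service commands are placeholders):

```shell
# Disable injection entirely for one service...
DD_INSTRUMENT_SERVICE_WITH_APM=FALSE ./legacy-service &
# ...and turn on debug-level injection logs for another
DD_APM_INSTRUMENTATION_DEBUG=true java -jar my-service.jar &
```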

Config sources

By default, the following settings are enabled in an instrumented process:

  • Tracing
  • Log injection, assuming the application uses structured logging (usually JSON). For traces to appear in non-structured logs, you must change your application’s log configuration to include placeholders for trace ID and span ID. See Connect Logs and Traces for more information.
  • Health metrics
  • Runtime metrics

You can change these settings for all instrumented processes by setting the config_sources property in the configuration file or for a single process by setting the DD_CONFIG_SOURCES environment variable for the process. The valid settings for config sources are:

Configuration source name | Meaning
BASIC      | Apply the configurations specified above. If no configuration source is specified, this is the default.
LOCAL:PATH | Apply the configuration at the specified path on the local file system. The format of the configuration file is described below. Example: LOCAL:/opt/config/my_process_config.yaml
BLOB:URL   | Apply the configuration at the specified path in an S3-compatible object store. The connection URL and the format of the configuration file are described below. Example: BLOB:s3://config_bucket/my_process_config.yaml?region=us-east-1

The words BASIC, LOCAL, and BLOB must be uppercase.

Config source values can be separated by semicolons to indicate multiple possible locations. The first configuration that returns without an error is used. Configuration is not merged from multiple configuration sources. The following example checks an S3 bucket for configuration, then checks the local file system, and finally uses the built-in default configuration:

DD_CONFIG_SOURCES="BLOB:s3://config_bucket/my_process_config.yaml?region=us-east-1;LOCAL:/opt/config/my_process_config.yaml;BASIC"
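The fallback behavior can be sketched in shell (a minimal illustration of the documented first-success-wins semantics, handling only LOCAL and BASIC; this is not Datadog's implementation):

```shell
# Try each semicolon-separated source in order and use the first one that
# resolves without an error. Configurations are never merged.
resolve_config() {
  local IFS=';'
  local entry
  for entry in $1; do
    case "$entry" in
      LOCAL:*)
        # A LOCAL source succeeds only if the file exists and is readable.
        if [ -r "${entry#LOCAL:}" ]; then
          cat "${entry#LOCAL:}"
          return 0
        fi
        ;;
      BASIC)
        # BASIC always succeeds with the built-in defaults (abbreviated here).
        printf 'tracing_enabled: true\n'
        return 0
        ;;
    esac
  done
  return 1
}

# A missing LOCAL file falls through to BASIC:
resolve_config "LOCAL:/nonexistent/my_process_config.yaml;BASIC"  # prints "tracing_enabled: true"
```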

Blob storage support

The supported blob storage solutions are:

  • Amazon S3 - Set the URL with the s3:// prefix. If you have authenticated with the AWS CLI, it uses those credentials. See the AWS SDK documentation for information about configuring credentials using environment variables.
  • GCP GCS - Set the URL with the gs:// prefix. It uses Application Default Credentials. Authenticate with gcloud auth application-default login. See the Google Cloud authentication documentation for more information about configuring credentials using environment variables.
  • Azure Blob - Set the URL with the azblob:// prefix, and point to a storage container name. It uses the credentials found in AZURE_STORAGE_ACCOUNT (that is, the bucket name) plus at least one of AZURE_STORAGE_KEY and AZURE_STORAGE_SAS_TOKEN. For more information about configuring BLOB or LOCAL settings, see Supplying configuration source.

Supplying configuration source

The config file for LOCAL and BLOB can be formatted as JSON:

{
	"version": 1,
	"tracing_enabled": true,
	"log_injection_enabled": true,
	"health_metrics_enabled": true,
	"runtime_metrics_enabled": true,
	"tracing_sampling_rate": 1.0,
	"tracing_rate_limit": 1,
	"tracing_tags": ["a=b", "foo"],
	"tracing_service_mapping": [
		{ "from_key": "mysql", "to_name": "super_db"},
		{ "from_key": "postgres", "to_name": "my_pg"}
	],
	"tracing_agent_timeout": 1,
	"tracing_header_tags": [
		{"header": "HEADER", "tag_name":"tag"}
	],
	"tracing_partial_flush_min_spans": 1,
	"tracing_debug": true,
	"tracing_log_level": "debug",
}

Or as YAML:

---
version: 1
tracing_enabled: true
log_injection_enabled: true
health_metrics_enabled: true
runtime_metrics_enabled: true
tracing_sampling_rate: 1.0
tracing_rate_limit: 1
tracing_tags:
- a=b
- foo
tracing_service_mapping:
- from_key: mysql
  to_name: super_db
- from_key: postgres
  to_name: my_pg
tracing_agent_timeout: 1
tracing_header_tags:
- header: HEADER
  tag_name: tag
tracing_partial_flush_min_spans: 1
tracing_debug: true
tracing_log_level: debug

The value of version is always 1. This refers to the configuration schema version in use, not the version of the content.

The following table shows how the injection configuration values map to the corresponding tracing library configuration options:

Injection | Java tracer | NodeJS tracer | .NET tracer | Python tracer
tracing_enabled | dd.trace.enabled | DD_TRACE_ENABLED | DD_TRACE_ENABLED | DD_TRACE_ENABLED
log_injection_enabled | dd.logs.injection | DD_LOGS_INJECTION | DD_LOGS_INJECTION | DD_LOGS_INJECTION
health_metrics_enabled | dd.trace.health.metrics.enabled | n/a | n/a | n/a
runtime_metrics_enabled | dd.jmxfetch.enabled | DD_RUNTIME_METRICS_ENABLED | DD_RUNTIME_METRICS_ENABLED | DD_RUNTIME_METRICS_ENABLED
tracing_sampling_rate | dd.trace.sample.rate | DD_TRACE_SAMPLE_RATE | DD_TRACE_SAMPLE_RATE | DD_TRACE_SAMPLE_RATE
tracing_rate_limit | dd.trace.rate.limit | DD_TRACE_RATE_LIMIT | DD_TRACE_RATE_LIMIT | DD_TRACE_RATE_LIMIT
tracing_tags | dd.tags | DD_TAGS | DD_TAGS | DD_TAGS
tracing_service_mapping | dd.service.mapping | DD_SERVICE_MAPPING | DD_TRACE_SERVICE_MAPPING | DD_SERVICE_MAPPING
tracing_agent_timeout | dd.trace.agent.timeout | n/a | n/a | n/a
tracing_header_tags | dd.trace.header.tags | n/a | DD_TRACE_HEADER_TAGS | DD_TRACE_HEADER_TAGS
tracing_partial_flush_min_spans | dd.trace.partial.flush.min.spans | DD_TRACE_PARTIAL_FLUSH_MIN_SPANS | DD_TRACE_PARTIAL_FLUSH_ENABLED | n/a
tracing_debug | dd.trace.debug | DD_TRACE_DEBUG | DD_TRACE_DEBUG | DD_TRACE_DEBUG
tracing_log_level | datadog.slf4j.simpleLogger.defaultLogLevel | DD_TRACE_LOG_LEVEL | n/a | n/a

Tracer library configuration options that aren’t mentioned in the injection configuration are still available through properties or environment variables in the usual way.

Basic configuration settings

BASIC configuration settings are equivalent to the following YAML settings:

---
version: 1
tracing_enabled: true
log_injection_enabled: true
health_metrics_enabled: true
runtime_metrics_enabled: true

Launch your services

Launch your services, indicating the preload library configuration in the launch command. If DD_CONFIG_SOURCES is not specified, the value specified for config_sources in the /etc/datadog-agent/inject/host_config.yaml config file is used. If that is not specified either, DD_CONFIG_SOURCES defaults to BASIC:

Java app example:

java -jar <SERVICE_1>.jar &
DD_CONFIG_SOURCES="LOCAL:/etc/<SERVICE_2>/config.yaml;BASIC" java -jar <SERVICE_2>.jar &

Node app example:

node index.js &
DD_CONFIG_SOURCES="LOCAL:/etc/<SERVICE_2>/config.yaml;BASIC" node index.js &

.NET app example:

dotnet <SERVICE_1>.dll &
DD_CONFIG_SOURCES="LOCAL:/etc/<SERVICE_2>/config.yaml;BASIC" dotnet <SERVICE_2>.dll &

Python app example:

python <SERVICE_1>.py &
DD_CONFIG_SOURCES="LOCAL:/etc/<SERVICE_2>/config.yaml;BASIC" python <SERVICE_2>.py &

Exercise your application to start generating telemetry data, which you can see as traces in APM.

Tracing Library Injection on hosts and containers is in beta.

When your Agent is running on a host, and your services are running in containers, Datadog injects the tracing library by intercepting container creation and configuring the Docker container.

Any newly started processes are intercepted and the specified instrumentation library is injected into the services.

Note: Injection on arm64 is not supported.

Install both library injection and the Datadog Agent

Requirements:

If the host does not yet have a Datadog Agent installed, or if you want to upgrade your Datadog Agent installation, use the Datadog Agent install script to install both the injection libraries and the Datadog Agent:

DD_APM_INSTRUMENTATION_ENABLED=all DD_API_KEY=<YOUR KEY> DD_SITE="<YOUR SITE>" bash -c "$(curl -L https://s3.amazonaws.com/dd-agent/scripts/install_script_agent7.sh)"

By default, running the script installs support for Java, Node.js, Python, Ruby, and .NET. If you want to specify which language support is installed, also set the DD_APM_INSTRUMENTATION_LANGUAGES environment variable. The valid values are java, js, python, ruby, and dotnet. Use a comma-separated list to specify more than one language:

DD_APM_INSTRUMENTATION_LANGUAGES=java,js DD_APM_INSTRUMENTATION_ENABLED=all DD_API_KEY=<YOUR KEY> DD_SITE="<YOUR SITE>" bash -c "$(curl -L https://s3.amazonaws.com/dd-agent/scripts/install_script_agent7.sh)"

Configure Docker injection

If the default configuration doesn’t meet your needs, you can edit /etc/datadog-agent/inject/docker_config.yaml and add the following YAML configuration for the injection:

---
log_level: debug
output_paths:
- stderr
config_sources: BASIC

config_sources
Turn on or off library injection and specify a semicolon-separated ordered list of places where configuration is stored. The first value that returns without an error is used. Configuration is not merged across configuration sources. The valid values are:
  • BLOB:<URL> - Load configuration from a blob store (S3-compatible) located at <URL>.
  • LOCAL:<PATH> - Load from a file on the local file system at <PATH>.
  • BASIC - Use default values. If config_sources is not specified, this configuration is used.

The words BASIC, LOCAL, and BLOB must be uppercase.

Config source values can be separated by semicolons to indicate multiple possible locations. The first configuration that returns without an error is used. Configuration is not merged from multiple configuration sources. The following example checks an S3 bucket for configuration, then checks the local file system, and finally uses the built-in default configuration:

config_sources: BLOB:s3://config_bucket/my_process_config.yaml?region=us-east-1;LOCAL:/opt/config/my_process_config.yaml;BASIC

For more information about configuring BLOB or LOCAL settings, see Supplying configuration source.

log_level
Set to debug to log detailed information about what is happening, or info, warn, or error to log far less.
Default: info
output_paths
A list of one or more places to write logs.
Default: stderr
Optional: env
Specifies the DD_ENV tag for the containers running in Docker, for example, dev, prod, staging.
Default: none.

Supplying configuration source

If you specify a BLOB or LOCAL configuration source, create a JSON or YAML file there, and provide the configuration either as JSON:

{
	"version": 1,
	"tracing_enabled": true,
	"log_injection_enabled": true,
	"health_metrics_enabled": true,
	"runtime_metrics_enabled": true,
	"tracing_sampling_rate": 1.0,
	"tracing_rate_limit": 1,
	"tracing_tags": ["a=b", "foo"],
	"tracing_service_mapping": [
		{ "from_key": "mysql", "to_name": "super_db"},
		{ "from_key": "postgres", "to_name": "my_pg"}
	],
	"tracing_agent_timeout": 1,
	"tracing_header_tags": [
		{"header": "HEADER", "tag_name":"tag"}
	],
	"tracing_partial_flush_min_spans": 1,
	"tracing_debug": true,
	"tracing_log_level": "debug",
}

Or as YAML:

---
version: 1
tracing_enabled: true
log_injection_enabled: true
health_metrics_enabled: true
runtime_metrics_enabled: true
tracing_sampling_rate: 1.0
tracing_rate_limit: 1
tracing_tags:
- a=b
- foo
tracing_service_mapping:
- from_key: mysql
  to_name: super_db
- from_key: postgres
  to_name: my_pg
tracing_agent_timeout: 1
tracing_header_tags:
- header: HEADER
  tag_name: tag
tracing_partial_flush_min_spans: 1
tracing_debug: true
tracing_log_level: debug

The following table shows how the injection configuration values map to the corresponding tracing library configuration options:

Injection | Java tracer | NodeJS tracer | .NET tracer | Python tracer
tracing_enabled | dd.trace.enabled | DD_TRACE_ENABLED | DD_TRACE_ENABLED | DD_TRACE_ENABLED
log_injection_enabled | dd.logs.injection | DD_LOGS_INJECTION | DD_LOGS_INJECTION | DD_LOGS_INJECTION
health_metrics_enabled | dd.trace.health.metrics.enabled | n/a | n/a | n/a
runtime_metrics_enabled | dd.jmxfetch.enabled | DD_RUNTIME_METRICS_ENABLED | DD_RUNTIME_METRICS_ENABLED | DD_RUNTIME_METRICS_ENABLED
tracing_sampling_rate | dd.trace.sample.rate | DD_TRACE_SAMPLE_RATE | DD_TRACE_SAMPLE_RATE | DD_TRACE_SAMPLE_RATE
tracing_rate_limit | dd.trace.rate.limit | DD_TRACE_RATE_LIMIT | DD_TRACE_RATE_LIMIT | DD_TRACE_RATE_LIMIT
tracing_tags | dd.tags | DD_TAGS | DD_TAGS | DD_TAGS
tracing_service_mapping | dd.service.mapping | DD_SERVICE_MAPPING | DD_TRACE_SERVICE_MAPPING | DD_SERVICE_MAPPING
tracing_agent_timeout | dd.trace.agent.timeout | n/a | n/a | n/a
tracing_header_tags | dd.trace.header.tags | n/a | DD_TRACE_HEADER_TAGS | DD_TRACE_HEADER_TAGS
tracing_partial_flush_min_spans | dd.trace.partial.flush.min.spans | DD_TRACE_PARTIAL_FLUSH_MIN_SPANS | DD_TRACE_PARTIAL_FLUSH_ENABLED | n/a
tracing_debug | dd.trace.debug | DD_TRACE_DEBUG | DD_TRACE_DEBUG | DD_TRACE_DEBUG
tracing_log_level | datadog.slf4j.simpleLogger.defaultLogLevel | DD_TRACE_LOG_LEVEL | n/a | n/a

Tracer library configuration options that aren’t mentioned in the injection configuration are still available through properties or environment variables in the usual way.

Blob storage support

The supported blob storage solutions are:

  • Amazon S3 - Set the URL with the s3:// prefix. If you have authenticated with the AWS CLI, it uses those credentials. See the AWS SDK documentation for information about configuring credentials using environment variables.
  • GCP GCS - Set the URL with the gs:// prefix. It uses Application Default Credentials. Authenticate with gcloud auth application-default login. See the Google Cloud authentication documentation for more information about configuring credentials using environment variables.
  • Azure Blob - Set the URL with the azblob:// prefix, and point to a storage container name. It uses the credentials found in AZURE_STORAGE_ACCOUNT (that is, the bucket name) plus at least one of AZURE_STORAGE_KEY and AZURE_STORAGE_SAS_TOKEN. For more information about configuring BLOB or LOCAL settings, see Supplying configuration source.

Basic configuration settings

BASIC configuration settings are equivalent to the following YAML settings:

---
version: 1
tracing_enabled: true
log_injection_enabled: true
health_metrics_enabled: true
runtime_metrics_enabled: true

Specifying Unified Service Tags on containers

If the environment variables DD_ENV, DD_SERVICE, or DD_VERSION are specified in a service container image, those values are used to tag telemetry from the container.

If they are not specified, DD_ENV uses the env value set in the /etc/datadog-agent/inject/docker_config.yaml config file, if any. DD_SERVICE is derived from the name of the Docker image. An image with the name my-service:1.0 is tagged with DD_SERVICE of my-service.
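For example, with a hypothetical image my-service:1.0, the tags could instead be set explicitly at launch (a sketch; the image name and tag values are placeholders):

```shell
# Explicit environment variables override the derived defaults
# (DD_SERVICE would otherwise be derived from the image name "my-service")
docker run -e DD_ENV=prod -e DD_SERVICE=my-service -e DD_VERSION=1.0 my-service:1.0
```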

Launch your services

Start your Agent and launch your containerized services as usual.

Exercise your application to start generating telemetry data, which you can see as traces in APM.

Tracing Library Injection in containers is in beta.

When your Agent and services are running in separate Docker containers on the same host, Datadog injects the tracing library by intercepting container creation and configuring the Docker container.

Any newly started processes are intercepted and the specified instrumentation library is injected into the services.

Requirements:

Note: Injection on arm64 is not supported.

Install the preload library

Use the install_script_docker_injection shell script to automatically install Docker injection support. Docker must already be installed on the host machine.

bash -c "$(curl -L https://s3.amazonaws.com/dd-agent/scripts/install_script_docker_injection.sh)"

This installs language libraries for all supported languages. To install specific languages, set the DD_APM_INSTRUMENTATION_LANGUAGES variable. The valid values are java, js, python, ruby, and dotnet:

DD_APM_INSTRUMENTATION_LANGUAGES=java,js bash -c "$(curl -L https://s3.amazonaws.com/dd-agent/scripts/install_script_docker_injection.sh)"

Configure Docker injection

Edit /etc/datadog-agent/inject/docker_config.yaml and add the following YAML configuration for the injection:

---
log_level: debug
output_paths:
- stderr
config_sources: BASIC

config_sources
Turn on or off library injection and specify a semicolon-separated ordered list of places where configuration is stored. The first value that returns without an error is used. Configuration is not merged across configuration sources. The valid values are:
  • BLOB:<URL> - Load configuration from a blob store (S3-compatible) located at <URL>.
  • LOCAL:<PATH> - Load from a file on the local file system at <PATH>.
  • BASIC - Use default values. If config_sources is not specified, this configuration is used.

The words BASIC, LOCAL, and BLOB must be uppercase.

Config source values can be separated by semicolons to indicate multiple possible locations. The first configuration that returns without an error is used. Configuration is not merged from multiple configuration sources. The following example checks an S3 bucket for configuration, then checks the local file system, and finally uses the built-in default configuration:

config_sources: BLOB:s3://config_bucket/my_process_config.yaml?region=us-east-1;LOCAL:/opt/config/my_process_config.yaml;BASIC

For more information about configuring BLOB or LOCAL settings, see Supplying configuration source.

log_level
Set to debug to log detailed information about what is happening, or info to log far less.
output_paths
A list of one or more places to write logs.
Default: stderr
Optional: env
Specifies the DD_ENV tag for the containers running in Docker, for example, dev, prod, staging.
Default: none.

Supplying configuration source

If you specify a BLOB or LOCAL configuration source, create a JSON or YAML file there, and provide the configuration either as JSON:

{
	"version": 1,
	"tracing_enabled": true,
	"log_injection_enabled": true,
	"health_metrics_enabled": true,
	"runtime_metrics_enabled": true,
	"tracing_sampling_rate": 1.0,
	"tracing_rate_limit": 1,
	"tracing_tags": ["a=b", "foo"],
	"tracing_service_mapping": [
		{ "from_key": "mysql", "to_name": "super_db"},
		{ "from_key": "postgres", "to_name": "my_pg"}
	],
	"tracing_agent_timeout": 1,
	"tracing_header_tags": [
		{"header": "HEADER", "tag_name":"tag"}
	],
	"tracing_partial_flush_min_spans": 1,
	"tracing_debug": true,
	"tracing_log_level": "debug",
}

Or as YAML:

---
version: 1
tracing_enabled: true
log_injection_enabled: true
health_metrics_enabled: true
runtime_metrics_enabled: true
tracing_sampling_rate: 1.0
tracing_rate_limit: 1
tracing_tags:
- a=b
- foo
tracing_service_mapping:
- from_key: mysql
  to_name: super_db
- from_key: postgres
  to_name: my_pg
tracing_agent_timeout: 1
tracing_header_tags:
- header: HEADER
  tag_name: tag
tracing_partial_flush_min_spans: 1
tracing_debug: true
tracing_log_level: debug

In this configuration file, the value of version is always 1. This refers to the configuration schema version in use, not the version of the content.
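
Malformed JSON in a config source (for example, a trailing comma before the closing brace) causes that source to be skipped in favor of the next one, so it can be worth validating the file before deploying it. The following is a minimal sketch using only the standard library; it is an illustrative check, not part of the injector:

```python
import json

# A trimmed-down injection config, as it might appear in a config source file.
config_text = """
{
    "version": 1,
    "tracing_enabled": true,
    "tracing_sampling_rate": 1.0,
    "tracing_tags": ["a=b", "foo"],
    "tracing_service_mapping": [
        {"from_key": "mysql", "to_name": "super_db"}
    ]
}
"""

# json.loads raises ValueError on malformed JSON,
# such as a trailing comma before the final "}".
config = json.loads(config_text)
assert config["version"] == 1, "schema version must be 1"
print(config["tracing_service_mapping"][0]["to_name"])  # super_db
```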

The following table shows how the injection configuration values map to the corresponding tracing library configuration options:

| Injection | Java tracer | NodeJS tracer | .NET tracer | Python tracer |
| --- | --- | --- | --- | --- |
| tracing_enabled | dd.trace.enabled | DD_TRACE_ENABLED | DD_TRACE_ENABLED | DD_TRACE_ENABLED |
| log_injection_enabled | dd.logs.injection | DD_LOGS_INJECTION | DD_LOGS_INJECTION | DD_LOGS_INJECTION |
| health_metrics_enabled | dd.trace.health.metrics.enabled | n/a | n/a | n/a |
| runtime_metrics_enabled | dd.jmxfetch.enabled | DD_RUNTIME_METRICS_ENABLED | DD_RUNTIME_METRICS_ENABLED | DD_RUNTIME_METRICS_ENABLED |
| tracing_sampling_rate | dd.trace.sample.rate | DD_TRACE_SAMPLE_RATE | DD_TRACE_SAMPLE_RATE | DD_TRACE_SAMPLE_RATE |
| tracing_rate_limit | dd.trace.rate.limit | DD_TRACE_RATE_LIMIT | DD_TRACE_RATE_LIMIT | DD_TRACE_RATE_LIMIT |
| tracing_tags | dd.tags | DD_TAGS | DD_TAGS | DD_TAGS |
| tracing_service_mapping | dd.service.mapping | DD_SERVICE_MAPPING | DD_TRACE_SERVICE_MAPPING | DD_SERVICE_MAPPING |
| tracing_agent_timeout | dd.trace.agent.timeout | n/a | n/a | n/a |
| tracing_header_tags | dd.trace.header.tags | n/a | DD_TRACE_HEADER_TAGS | DD_TRACE_HEADER_TAGS |
| tracing_partial_flush_min_spans | dd.trace.partial.flush.min.spans | DD_TRACE_PARTIAL_FLUSH_MIN_SPANS | DD_TRACE_PARTIAL_FLUSH_ENABLED | n/a |
| tracing_debug | dd.trace.debug | DD_TRACE_DEBUG | DD_TRACE_DEBUG | DD_TRACE_DEBUG |
| tracing_log_level | datadog.slf4j.simpleLogger.defaultLogLevel | DD_TRACE_LOG_LEVEL | n/a | n/a |

Tracer library configuration options not covered by the injection configuration can still be set in the usual way, through system properties or environment variables.
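
To make the mapping concrete, the following sketch translates a fragment of injection config into the equivalent Python tracer environment variables, following the table above. PYTHON_TRACER_ENV and to_env are illustrative helper names, not part of any Datadog API, and comma-joining list values is an assumption that matches how DD_TAGS accepts multiple tags:

```python
# Injection key -> Python tracer env var, per the mapping table.
# Rows marked n/a for the Python tracer are omitted.
PYTHON_TRACER_ENV = {
    "tracing_enabled": "DD_TRACE_ENABLED",
    "log_injection_enabled": "DD_LOGS_INJECTION",
    "runtime_metrics_enabled": "DD_RUNTIME_METRICS_ENABLED",
    "tracing_sampling_rate": "DD_TRACE_SAMPLE_RATE",
    "tracing_rate_limit": "DD_TRACE_RATE_LIMIT",
    "tracing_tags": "DD_TAGS",
    "tracing_header_tags": "DD_TRACE_HEADER_TAGS",
    "tracing_debug": "DD_TRACE_DEBUG",
}

def to_env(config):
    """Return {ENV_VAR: string value} for keys the Python tracer understands."""
    env = {}
    for key, value in config.items():
        var = PYTHON_TRACER_ENV.get(key)
        if var is None:
            continue  # e.g. version, or options without a Python tracer equivalent
        if isinstance(value, list):
            # Assumption: flat string lists join with commas (DD_TAGS style).
            # Structured values such as tracing_service_mapping would need
            # their own serialization and are not handled here.
            value = ",".join(value)
        env[var] = str(value).lower() if isinstance(value, bool) else str(value)
    return env

print(to_env({"version": 1, "tracing_enabled": True, "tracing_tags": ["a=b", "foo"]}))
# {'DD_TRACE_ENABLED': 'true', 'DD_TAGS': 'a=b,foo'}
```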

Blob storage support

The supported blob storage solutions are:

  • Amazon S3 - Set the URL with the s3:// prefix. If you have authenticated with the AWS CLI, it uses those credentials. See the AWS SDK documentation for information about configuring credentials using environment variables.
  • GCP GCS - Set the URL with the gs:// prefix. It uses Application Default Credentials. Authenticate with gcloud auth application-default login. See the Google Cloud authentication documentation for more information about configuring credentials using environment variables.
  • Azure Blob - Set the URL with the azblob:// prefix, and point to a storage container name. It uses the credentials found in AZURE_STORAGE_ACCOUNT (the storage account name) plus at least one of AZURE_STORAGE_KEY and AZURE_STORAGE_SAS_TOKEN.
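
For example, a config_sources value that reads from a GCS bucket and falls back to the built-in defaults could look like the following, where my-config-bucket is a placeholder bucket name:

```
config_sources: BLOB:gs://my-config-bucket/my_process_config.yaml;BASIC
```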

Basic configuration settings

BASIC configuration settings are equivalent to the following YAML settings:

---
version: 1
tracing_enabled: true
log_injection_enabled: true
health_metrics_enabled: true
runtime_metrics_enabled: true

Configure the Agent

In the Docker Compose file that launches your containers, use the following settings for the Agent, securely setting your own Datadog API key as ${DD_API_KEY}:

  dd-agent:
    container_name: dd-agent
    image: datadog/agent:7
    environment:
      - DD_API_KEY=${DD_API_KEY}
      - DD_APM_ENABLED=true
      - DD_APM_NON_LOCAL_TRAFFIC=true
      - DD_DOGSTATSD_NON_LOCAL_TRAFFIC=true
      - DD_APM_RECEIVER_SOCKET=/opt/datadog/apm/inject/run/apm.socket
      - DD_DOGSTATSD_SOCKET=/opt/datadog/apm/inject/run/dsd.socket
    volumes:
      - /opt/datadog/apm:/opt/datadog/apm
      - /var/run/docker.sock:/var/run/docker.sock:ro
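
The dd-agent entry above belongs under the top-level services key of a Compose file. A minimal complete file might look like the following sketch, where your-app and my-service:1.0 are placeholders for one of your own services:

```yaml
version: "3"              # Compose file format version; an example choice
services:
  dd-agent:
    container_name: dd-agent
    image: datadog/agent:7
    environment:
      - DD_API_KEY=${DD_API_KEY}
      - DD_APM_ENABLED=true
      - DD_APM_NON_LOCAL_TRAFFIC=true
      - DD_DOGSTATSD_NON_LOCAL_TRAFFIC=true
      - DD_APM_RECEIVER_SOCKET=/opt/datadog/apm/inject/run/apm.socket
      - DD_DOGSTATSD_SOCKET=/opt/datadog/apm/inject/run/dsd.socket
    volumes:
      - /opt/datadog/apm:/opt/datadog/apm
      - /var/run/docker.sock:/var/run/docker.sock:ro

  your-app:               # placeholder for your own service
    image: my-service:1.0
    depends_on:
      - dd-agent          # the Agent container must start first
```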

Specifying Unified Service Tags on containers

If the environment variables DD_ENV, DD_SERVICE, or DD_VERSION are specified in a service container image, those values are used to tag telemetry from the container.

If they are not specified, DD_ENV uses the env value set in the /etc/datadog-agent/inject/docker_config.yaml config file, if any. DD_SERVICE is derived from the name of the Docker image. An image with the name my-service:1.0 is tagged with DD_SERVICE of my-service.

Launch the Agent on Docker

The dd-agent container must be launched before any service containers. Run:

docker-compose up -d dd-agent

Launch your services

Launch your containerized services as usual.

Exercise your application to start generating telemetry data, which you can see as traces in APM.

Uninstall library injection

Remove instrumentation for specific services

To stop producing traces for a specific service, set the DD_INSTRUMENT_SERVICE_WITH_APM environment variable to false and restart the service.

On a host:

  1. Add the DD_INSTRUMENT_SERVICE_WITH_APM environment variable to the service startup command:

    DD_INSTRUMENT_SERVICE_WITH_APM=false <service_start_command>
    
  2. Restart the service.

On Docker:

  1. Add the DD_INSTRUMENT_SERVICE_WITH_APM environment variable to the docker run command:
    docker run -e DD_INSTRUMENT_SERVICE_WITH_APM=false
    
  2. Restart the service.

Remove APM for all services on the infrastructure

To stop producing traces, remove the library injectors and restart the infrastructure.

On a host:

  1. Run:
    dd-host-install --uninstall
    
  2. Restart your host.

For containers:

  1. Uninstall local library injection:
    dd-container-install --uninstall
    
  2. Restart Docker:
    systemctl restart docker
    
    Or use the equivalent for your environment.

Configuring the library

The supported features and configuration options for the tracing library are the same for library injection as for other installation methods, and can be set with environment variables. Read the Datadog library configuration page for your language for more details.

For example, you can turn on Application Security Monitoring or Continuous Profiler, each of which may have billing impact:

  • For Kubernetes, set the DD_APPSEC_ENABLED or DD_PROFILING_ENABLED environment variables to true in the underlying application pod’s deployment file.

  • For hosts and containers, set the DD_APPSEC_ENABLED or DD_PROFILING_ENABLED container environment variables to true, or in the injection configuration, specify an additional_environment_variables section like the following YAML example:

    additional_environment_variables:
    - key: DD_PROFILING_ENABLED
      value: true
    - key: DD_APPSEC_ENABLED
      value: true
    

    Only configuration keys that start with DD_ can be set in the injection config source additional_environment_variables section.