Datadog supports a variety of open standards, including OpenTelemetry and OpenTracing.
The OpenTelemetry Collector is a vendor-agnostic agent process, separate from your applications, for collecting and exporting telemetry data emitted by many processes. Datadog has an exporter available within the OpenTelemetry Collector to receive traces and metrics data from the OpenTelemetry SDKs, and to forward the data on to Datadog (without the Datadog Agent). It works with all supported languages, and you can connect OpenTelemetry trace data with application logs.
You can deploy the OpenTelemetry Collector using any of the supported methods, and configure it by adding a datadog exporter to your OpenTelemetry configuration YAML file along with your Datadog API key:

datadog:
  api:
    key: "<API key>"
To send the data to the Datadog EU site, also set the site parameter:

datadog:
  api:
    key: "<API key>"
    site: datadoghq.eu
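If you prefer not to store the API key in the configuration file itself, the collector's configuration supports environment variable expansion. The following is a minimal sketch, assuming an environment variable named DD_API_KEY is set wherever the collector runs (the variable name is an assumption, not a requirement):

datadog:
  api:
    # ${DD_API_KEY} is expanded by the collector from its environment at startup;
    # DD_API_KEY is an assumed variable name chosen for this example.
    key: "${DD_API_KEY}"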
On each OpenTelemetry-instrumented application, set the resource attributes deployment.environment, service.name, and service.version using the language's SDK. As a fallback, you can also configure the hostname (optional), environment, service name, and service version at the collector level for unified service tagging by following the example configuration file. If you don't specify the hostname explicitly, the exporter attempts to determine a default automatically by checking a series of sources in order, falling back to the next one if the current one is unavailable or invalid.
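Many OpenTelemetry SDKs can also pick these attributes up from the standard OTEL_RESOURCE_ATTRIBUTES environment variable instead of code. The snippet below is a sketch of a container env entry; the attribute values are placeholders:

env:
  # Standard OpenTelemetry SDK environment variable; values are placeholders.
  - name: OTEL_RESOURCE_ATTRIBUTES
    value: "deployment.environment=prod,service.name=myservice,service.version=1.0.0"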
The OpenTelemetry Collector is configured by adding a pipeline to your otel-collector-configuration.yml file. Supply the relative path to this configuration file when you start the collector by passing it in via the --config=<path/to/configuration_file> command line argument. For examples of supplying a configuration file, see the environment-specific setup section below or the OpenTelemetry Collector documentation.
The exporter assumes you have a pipeline that uses the datadog exporter and includes a batch processor configured with a timeout setting of 10s (10 seconds). A batch representing 10 seconds of traces is a constraint of Datadog's API intake for trace-related statistics. Without this timeout setting, trace-related metrics including .hits, .errors, and .duration for different services and service resources will be inaccurate over periods of time.

Here is an example trace pipeline configured with an otlp receiver, batch processor, and datadog exporter:
receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:
    timeout: 10s

exporters:
  datadog/api:
    hostname: customhostname
    env: prod
    service: myservice
    version: myversion
    tags:
      - example:tag
    api:
      key: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
      site: datadoghq.eu

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [datadog/api]
Download the appropriate binary from the project repository's latest release.
Create an otel_collector_config.yaml file. Here is an example template to get started. It enables the collector's OTLP receiver and Datadog exporter.
Run the downloaded binary on the host, specifying the configuration YAML file via the --config parameter. For example:

otelcontribcol_linux_amd64 --config otel_collector_config.yaml
Run an OpenTelemetry Collector container to receive traces either from the installed host, or from other containers.
Create an otel_collector_config.yaml file. Here is an example template to get started. It enables the collector's OTLP receiver and the Datadog exporter.
Choose a published Docker image such as otel/opentelemetry-collector-contrib:latest.
Determine which ports to open on your container. OpenTelemetry traces are sent to the OpenTelemetry Collector over TCP or UDP on a number of ports, which must be exposed on the container. By default, traces are sent over OTLP/gRPC on port 55680, but common protocols and their ports include:

Zipkin/HTTP on port 9411
Jaeger/gRPC on port 14250
Jaeger/Thrift HTTP on port 14268
Jaeger/Thrift Compact (UDP) on port 6831
OTLP/gRPC on port 55680
OTLP/HTTP on port 55681
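To accept traces over the Zipkin or Jaeger protocols in addition to OTLP, the matching receivers also need to be enabled in the collector configuration and referenced in the traces pipeline's receivers list. The snippet below is a sketch of such a receivers section; the ports noted in the comments are the usual defaults and may differ between collector versions:

receivers:
  otlp:
    protocols:
      grpc:             # OTLP/gRPC, port 55680 by default in this collector version
      http:             # OTLP/HTTP, port 55681 by default in this collector version
  zipkin:               # Zipkin/HTTP, port 9411 by default
  jaeger:
    protocols:
      grpc:             # Jaeger/gRPC, port 14250 by default
      thrift_http:      # Jaeger/Thrift HTTP, port 14268 by default
      thrift_compact:   # Jaeger/Thrift Compact (UDP), port 6831 by default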
Run the container with the configured ports and an otel_collector_config.yaml file. For example:
$ docker run \
-p 55680:55680 \
-v $(pwd)/otel_collector_config.yaml:/etc/otel/config.yaml \
otel/opentelemetry-collector-contrib
Configure your application with the appropriate resource attributes for unified service tagging by adding the metadata described above.
Create an otel_collector_config.yaml file. Here is an example template to get started. It enables the collector's OTLP receiver and Datadog exporter.
Configure your application with the appropriate resource attributes for unified service tagging by adding the metadata described above.
Create a docker network:
docker network create <NETWORK_NAME>
Run the OpenTelemetry Collector container and application container in the same network. Note: When running the application container, ensure the environment variable OTEL_EXPORTER_OTLP_ENDPOINT is configured to use the appropriate hostname for the OpenTelemetry Collector. In the example below, this is opentelemetry-collector.

# OpenTelemetry Collector
docker run -d --name opentelemetry-collector \
--network <NETWORK_NAME> \
-v $(pwd)/otel_collector_config.yaml:/etc/otel/config.yaml \
otel/opentelemetry-collector-contrib
# Application
docker run -d --name app \
--network <NETWORK_NAME> \
-e OTEL_EXPORTER_OTLP_ENDPOINT=http://opentelemetry-collector:55680 \
company/app:latest
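As an alternative to creating the network manually, the same two-container setup can be sketched as a Docker Compose file. Docker Compose is not covered elsewhere in this guide, so treat this as an illustrative assumption; the service and image names mirror the example above:

version: "3"
services:
  opentelemetry-collector:
    image: otel/opentelemetry-collector-contrib
    volumes:
      - ./otel_collector_config.yaml:/etc/otel/config.yaml
  app:
    image: company/app:latest
    environment:
      # The collector is reachable by its Compose service name on the default network
      - OTEL_EXPORTER_OTLP_ENDPOINT=http://opentelemetry-collector:55680
    depends_on:
      - opentelemetry-collector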
The OpenTelemetry Collector can be run in two types of deployment scenarios:
As an OpenTelemetry Collector agent running on the same host as the application in a sidecar or daemonset; or
As a standalone service, for example a container or deployment, typically per-cluster, per-datacenter, or per-region.
To accurately track the appropriate metadata in Datadog, run the OpenTelemetry Collector in agent mode on each of the Kubernetes nodes.
When deploying the OpenTelemetry Collector as a daemonset, refer to the example configuration below as a guide.
On the application container, use the downward API to pull the host IP. The application container needs an environment variable that points to status.hostIP. The OpenTelemetry application SDKs expect this to be named OTEL_EXPORTER_OTLP_ENDPOINT. Use the example snippet below as a guide.
A full example Kubernetes manifest for deploying the OpenTelemetry Collector as both daemonset and standalone collector can be found here. Modify the example to suit your environment. The key sections that are specific to Datadog are as follows:
The example demonstrates deploying the OpenTelemetry Collectors in agent mode via daemonset, which collect relevant k8s node and pod specific metadata, and then forward telemetry data to an OpenTelemetry Collector in standalone collector mode. This OpenTelemetry Collector in standalone collector mode then exports to the Datadog backend. See this diagram of this deployment model.
For OpenTelemetry Collectors deployed as agent via daemonset, in the daemonset, spec.containers.env should use the downward API to capture status.podIP and add it as part of the OTEL_RESOURCE environment variable. This is used by the OpenTelemetry Collector's resourcedetection and k8s_tagger processors, which should be included along with a batch processor and added to the traces pipeline.

In the daemonset's spec.containers.env section:
# ...
env:
  # Get pod ip so that k8s_tagger can tag resources
  - name: POD_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
  # This is picked up by the resource detector
  - name: OTEL_RESOURCE
    value: "k8s.pod.ip=$(POD_IP)"
# ...
In the otel-agent-conf ConfigMap's data.otel-agent-config processors section:
# ...
# The resource detector injects the pod IP
# to every metric so that the k8s_tagger can
# fetch information afterwards.
resourcedetection:
  detectors: [env]
  timeout: 5s
  override: false
# The k8s_tagger in the Agent is in passthrough mode
# so that it only tags with the minimal info for the
# collector k8s_tagger to complete
k8s_tagger:
  passthrough: true
batch:
# ...
In the otel-agent-conf ConfigMap's data.otel-agent-config service.pipelines.traces section:
# ...
# resourcedetection must come before k8s_tagger
processors: [resourcedetection, k8s_tagger, batch]
# ...
For OpenTelemetry Collectors in standalone collector mode, which receive traces from downstream collectors and export to Datadog's backend, include a batch processor configured with a timeout of 10s as well as the k8s_tagger enabled. These should be included along with the datadog exporter and added to the traces pipeline.
In the otel-collector-conf ConfigMap's data.otel-collector-config processors section:
# ...
batch:
  timeout: 10s
k8s_tagger:
# ...
In the otel-collector-conf ConfigMap's data.otel-collector-config exporters section:
exporters:
  datadog:
    api:
      key: <YOUR_API_KEY>
In the otel-collector-conf ConfigMap's data.otel-collector-config service.pipelines.traces section:
# ...
processors: [k8s_tagger, batch]
exporters: [datadog]
# ...
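Taken together, the standalone collector's traces pipeline then looks something like the following sketch; the otlp receiver mirrors the earlier example and is an assumption about which receivers you enable:

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [k8s_tagger, batch]
      exporters: [datadog]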
In addition to the OpenTelemetry Collector configuration, ensure that OpenTelemetry SDKs installed in an application transmit telemetry data to the collector by configuring the environment variable OTEL_EXPORTER_OTLP_ENDPOINT with the host IP. Use the downward API to pull the host IP and set it as an environment variable, which is then interpolated when setting the OTEL_EXPORTER_OTLP_ENDPOINT environment variable:
apiVersion: apps/v1
kind: Deployment
...
spec:
  containers:
    - name: <CONTAINER_NAME>
      image: <CONTAINER_IMAGE>/<TAG>
      env:
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        # This is picked up by the opentelemetry sdks
        - name: OTEL_EXPORTER_OTLP_ENDPOINT
          value: "http://$(HOST_IP):55680"
To see more information and additional examples of how you might configure your collector, see the OpenTelemetry Collector configuration documentation.
To connect OpenTelemetry traces and logs so that your application log monitoring and analysis has the additional context provided by OpenTelemetry traces, see Connect OpenTelemetry Traces and Logs for language-specific instructions and example code.
Datadog recommends you use the OpenTelemetry Collector Datadog exporter in conjunction with OpenTelemetry tracing clients. However, if that doesn’t work for you:
Each of the supported languages also has support for sending OpenTracing data to Datadog.
Python, Ruby, and NodeJS also have language-specific OpenTelemetry Datadog span exporters, which export traces directly from OpenTelemetry tracing clients to a Datadog Agent.
Additional useful documentation, links, and articles: