The OpenTelemetry Collector enables you to collect, process, and export telemetry data from your applications in a vendor-neutral way. When configured with the Datadog Exporter and Datadog Connector, you can send your traces, logs, and metrics to Datadog without the Datadog Agent.
Datadog Exporter: Forwards trace, metric, and log data from OpenTelemetry SDKs to Datadog (without the Datadog Agent)
Datadog Connector: Calculates Trace Metrics from collected span data
To see which Datadog features are supported with this setup, see the feature compatibility table under Full OTel.
Install and configure
1 - Download the OpenTelemetry Collector
Download the latest release of the OpenTelemetry Collector Contrib distribution from the project’s repository.
2 - Configure the Datadog Exporter
To use the Datadog Exporter, add it to your OpenTelemetry Collector configuration. Set your Datadog API key as the DD_API_KEY environment variable.
The following examples use 0.0.0.0 as the endpoint address for convenience. This allows connections from any network interface. For enhanced security, especially in local deployments, consider using localhost instead.
For more information on secure endpoint configuration, see the OpenTelemetry security documentation.
```yaml
receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318
      grpc:
        endpoint: 0.0.0.0:4317
  # The hostmetrics receiver is required to get correct infrastructure metrics in Datadog.
  hostmetrics:
    collection_interval: 10s
    scrapers:
      paging:
        metrics:
          system.paging.utilization:
            enabled: true
      cpu:
        metrics:
          system.cpu.utilization:
            enabled: true
      disk:
      filesystem:
        metrics:
          system.filesystem.utilization:
            enabled: true
      load:
      memory:
      network:
      processes:
  # The prometheus receiver scrapes metrics needed for the OpenTelemetry Collector Dashboard.
  prometheus:
    config:
      scrape_configs:
        - job_name: 'otelcol'
          scrape_interval: 10s
          static_configs:
            - targets: ['0.0.0.0:8888']
  filelog:
    include_file_path: true
    poll_interval: 500ms
    include:
      - /var/log/**/*example*/*.log
processors:
  batch:
    send_batch_max_size: 100
    send_batch_size: 10
    timeout: 10s
connectors:
  datadog/connector:
exporters:
  datadog/exporter:
    api:
      site:
      key: ${env:DD_API_KEY}
service:
  pipelines:
    metrics:
      receivers: [hostmetrics, prometheus, otlp, datadog/connector]
      processors: [batch]
      exporters: [datadog/exporter]
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [datadog/connector, datadog/exporter]
    logs:
      receivers: [otlp, filelog]
      processors: [batch]
      exporters: [datadog/exporter]
```
This basic configuration enables receiving OTLP data over HTTP and gRPC, and sets up a batch processor.
For a complete list of configuration options for the Datadog Exporter, see the fully documented example configuration file. Additional options like api::site and host_metadata settings may be relevant depending on your deployment.
Batch processor configuration
The batch processor is required for non-development environments. The exact configuration depends on your specific workload and signal types.
Configure the batch processor based on Datadog’s intake limits, which vary by signal type. If you batch too much telemetry data into a single payload, you may get 413 - Request Entity Too Large errors.
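As an illustration, a conservative starting point might look like the following. The values are placeholders, not recommendations; tune them against your own workload and Datadog’s documented intake limits:

```yaml
processors:
  batch:
    # Flush a batch once it reaches this many items, even before the timeout.
    send_batch_size: 1000
    # Hard upper bound on batch size, to stay under intake payload limits.
    send_batch_max_size: 1000
    # Flush whatever has accumulated after this interval regardless of size.
    timeout: 10s
```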
3 - Configure your application
To get better metadata for traces and for smooth integration with Datadog:
Use resource detectors: If the language SDK provides them, use resource detectors to attach container information as resource attributes. For example, in Go, use the WithContainer() resource option.
Apply Unified Service Tagging: Make sure you’ve configured your application with the appropriate resource attributes for unified service tagging. This ties Datadog telemetry together with tags for service name, deployment environment, and service version. The application should set these tags using the OpenTelemetry semantic conventions: service.name, deployment.environment, and service.version.
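These attributes can often be set without code changes through the standard OTEL_SERVICE_NAME and OTEL_RESOURCE_ATTRIBUTES environment variables. For example, on a Kubernetes application container (the service name, environment, and version below are placeholders):

```yaml
# Illustrative values; replace with your own service metadata.
env:
  - name: OTEL_SERVICE_NAME
    value: "checkoutservice"
  - name: OTEL_RESOURCE_ATTRIBUTES
    value: "deployment.environment=prod,service.version=1.2.3"
```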
4 - Configure the logger for your application
Since the OpenTelemetry SDKs’ logging functionality is not fully supported (see your specific language in the OpenTelemetry documentation for more information), Datadog recommends using a standard logging library for your application. Follow the language-specific Log Collection documentation to set up the appropriate logger in your application. Datadog strongly encourages setting up your logging library to output your logs in JSON to avoid the need for custom parsing rules.
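As one illustration, a minimal JSON formatter can be built with only the Python standard library. The field names below mirror the sample log shown later in this section and are just one possible layout, not a required schema:

```python
import json
import logging


class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON line."""

    def format(self, record):
        payload = {
            "level": record.levelname.lower(),
            "message": record.getMessage(),
            # Placeholder service name; set this to your own service.
            "service": "checkoutservice",
            "timestamp": self.formatTime(record),
        }
        return json.dumps(payload)


handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("checkoutservice")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Emits one JSON line per log call, ready for the filelog receiver to parse.
logger.info("order confirmation email sent")
```

A dedicated logging library with built-in JSON output works just as well; the point is that each log line must be a single, valid JSON object.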
Configure the filelog receiver
Configure the filelog receiver using operators. For example, if a service checkoutservice writes logs to /var/log/pods/services/checkout/0.log, a sample log line might look like this:
{"level":"info","message":"order confirmation email sent to \"jack@example.com\"","service":"checkoutservice","span_id":"197492ff2b4e1c65","timestamp":"2022-10-10T22:17:14.841359661Z","trace_id":"e12c408e028299900d48a9dd29b0dc4c"}
start_at: end: Reads only new log content written after the Collector starts
poll_interval: Sets how often the receiver polls the log files for new content
Operators:
json_parser: Parses JSON logs. By default, the filelog receiver converts each log line into a log record, which is the body of the logs’ data model. Then, the json_parser converts the JSON body into attributes in the data model.
trace_parser: Extracts the trace_id and span_id from the log to correlate logs and traces in Datadog.
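Putting these pieces together, a filelog receiver configuration for this example service might look like the following. The file path matches the sample above, and the attribute names come from the sample JSON log; adjust both to your own layout:

```yaml
receivers:
  filelog:
    include:
      - /var/log/pods/services/checkout/*.log
    # Read only new content written after the Collector starts.
    start_at: end
    poll_interval: 500ms
    operators:
      # Parse the JSON log line in the body into log record attributes.
      - type: json_parser
      # Promote trace context so logs correlate with traces in Datadog.
      - type: trace_parser
        trace_id:
          parse_from: attributes.trace_id
        span_id:
          parse_from: attributes.span_id
```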
Remap OTel’s service.name attribute to service for logs
For Datadog Exporter versions 0.83.0 and later, the service field of OTel logs is populated from the OTel semantic convention attribute service.name. However, service.name is not one of the default service attributes in Datadog’s log preprocessing.
To populate the service field correctly in your logs, specify service.name as the source of a log’s service by setting a log service remapper processor.
Optional: Using Kubernetes
There are multiple ways to deploy the OpenTelemetry Collector and Datadog Exporter in a Kubernetes infrastructure. For the filelog receiver to work, the Agent/DaemonSet deployment is the recommended method.
In containerized environments, applications write logs to stdout or stderr. Kubernetes collects the logs and writes them to a standard location on the host node, which you must mount into the Collector so the filelog receiver can read it. Below is an example manifest with the mounts required for sending logs.
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: otel-agent
  labels:
    app: opentelemetry
    component: otel-collector
spec:
  template:
    metadata:
      labels:
        app: opentelemetry
        component: otel-collector
    spec:
      containers:
        - name: collector
          command:
            - "/otelcol-contrib"
            - "--config=/conf/otel-agent-config.yaml"
          image: otel/opentelemetry-collector-contrib:0.71.0
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            # The k8s.pod.ip is used to associate pods for k8sattributes
            - name: OTEL_RESOURCE_ATTRIBUTES
              value: "k8s.pod.ip=$(POD_IP)"
          ports:
            - containerPort: 4318 # default port for OpenTelemetry HTTP receiver.
              hostPort: 4318
            - containerPort: 4317 # default port for OpenTelemetry gRPC receiver.
              hostPort: 4317
            - containerPort: 8888 # Default endpoint for querying metrics.
          volumeMounts:
            - name: otel-agent-config-vol
              mountPath: /conf
            - name: varlogpods
              mountPath: /var/log/pods
              readOnly: true
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
      volumes:
        - name: otel-agent-config-vol
          configMap:
            name: otel-agent-conf
            items:
              - key: otel-agent-config
                path: otel-agent-config.yaml
        # Mount nodes log file location.
        - name: varlogpods
          hostPath:
            path: /var/log/pods
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
```
Out-of-the-box Datadog Exporter configuration
You can find working examples of out-of-the-box configuration for Datadog Exporter in the exporter/datadogexporter/examples folder in the OpenTelemetry Collector Contrib project. See the full configuration example file, ootb-ec2.yaml. Configure each of the following components to suit your needs: