
The OpenTelemetry Collector is a vendor-agnostic agent process for collecting and exporting telemetry data emitted by many processes. The Datadog Exporter for the OpenTelemetry Collector allows you to forward trace, metric, and log data from OpenTelemetry SDKs to Datadog (without the Datadog Agent). It works with all supported languages, and you can connect OpenTelemetry trace data with application logs.

Diagram: instrumented applications, cloud integrations, and other monitoring solutions (for example, Prometheus) send data to the Datadog Exporter inside the OpenTelemetry Collector, which forwards it to Datadog.

Setting up the OpenTelemetry Collector with the Datadog Exporter

To run the OpenTelemetry Collector along with the Datadog Exporter:

Step 1 - Download the OpenTelemetry Collector

Download the latest release of the OpenTelemetry Collector Contrib distribution from the project’s repository.

Step 2 - Configure the Datadog Exporter

To use the Datadog Exporter, add it to your OpenTelemetry Collector configuration. Create a configuration file named collector.yaml. The following example file provides a basic configuration that is ready to use after you set your Datadog API key as the DD_API_KEY environment variable:

collector.yaml

receivers:
  otlp:
    protocols:
      http:
      grpc:
  # The hostmetrics receiver is required to get correct infrastructure metrics in Datadog.
  hostmetrics:
    collection_interval: 10s
    scrapers:
      paging:
        metrics:
          system.paging.utilization:
            enabled: true
      cpu:
        metrics:
          system.cpu.utilization:
            enabled: true
      disk:
      filesystem:
        metrics:
          system.filesystem.utilization:
            enabled: true
      load:
      memory:
      network:
      processes:
  # The prometheus receiver scrapes metrics needed for the OpenTelemetry Collector Dashboard.
  prometheus:
    config:
      scrape_configs:
      - job_name: 'otelcol'
        scrape_interval: 10s
        static_configs:
        - targets: ['0.0.0.0:8888']

  filelog:
    include_file_path: true
    poll_interval: 500ms
    include:
      - /var/log/**/*example*/*.log

processors:
  batch:
    send_batch_max_size: 100
    send_batch_size: 10
    timeout: 10s

exporters:
  datadog:
    api:
      site: <DD_SITE>
      key: ${env:DD_API_KEY}

service:
  pipelines:
    metrics:
      receivers: [hostmetrics, prometheus, otlp]
      processors: [batch]
      exporters: [datadog]
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [datadog]
    logs:
      receivers: [otlp, filelog]
      processors: [batch]
      exporters: [datadog]

Where <DD_SITE> is your Datadog site (for example, datadoghq.com).

The above configuration enables receiving OTLP data from OpenTelemetry instrumentation libraries over HTTP and gRPC, and sets up a batch processor, which is mandatory for any non-development environment. Note that you may get 413 - Request Entity Too Large errors if you batch too much telemetry data in the batch processor.

The exact configuration of the batch processor depends on your specific workload as well as the signal types. Datadog intake has different payload size limits for the three signal types.
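
For example, here is a minimal sketch of how you might tune batching per signal by defining named instances of the batch processor and using them in the pipelines from the configuration above. The batch sizes shown are illustrative, not Datadog-recommended limits:

processors:
  # Separate batch processor instances let each signal use its own batch size.
  batch/traces:
    send_batch_max_size: 1000
    send_batch_size: 100
    timeout: 10s
  batch/logs:
    send_batch_max_size: 500
    send_batch_size: 50
    timeout: 10s

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch/traces]
      exporters: [datadog]
    logs:
      receivers: [otlp, filelog]
      processors: [batch/logs]
      exporters: [datadog]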

Advanced configuration

This fully documented example configuration file illustrates all possible configuration options for the Datadog Exporter. There may be other options relevant to your deployment, such as api::site or the ones in the host_metadata section.
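
For instance, here is a hedged sketch of those two option groups with illustrative values; consult the fully documented example file for the authoritative list of options and defaults:

exporters:
  datadog:
    api:
      # Send data to a specific Datadog site instead of the default.
      site: datadoghq.eu
      key: ${env:DD_API_KEY}
    host_metadata:
      # Controls how host metadata used for infrastructure monitoring is reported.
      enabled: true
      hostname_source: config_or_system
      tags: ["team:backend", "env:staging"]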

Step 3 - Configure your application

To get better metadata for traces and for smooth integration with Datadog:

  • Use resource detectors: If they are provided by the language SDK, attach container information as resource attributes. For example, in Go, use the WithContainer() resource option.

  • Apply Unified Service Tagging: Make sure you’ve configured your application with the appropriate resource attributes for unified service tagging. This ties Datadog telemetry together with tags for service name, deployment environment, and service version. The application should set these tags using the OpenTelemetry semantic conventions: service.name, deployment.environment, and service.version.
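
For example, in Kubernetes these resource attributes can be set through the standard OpenTelemetry SDK environment variables. The following is a minimal sketch of a container spec fragment; the service name, environment, and version values are illustrative:

env:
  # Maps to the service.name resource attribute.
  - name: OTEL_SERVICE_NAME
    value: "checkoutservice"
  # Remaining unified service tagging attributes.
  - name: OTEL_RESOURCE_ATTRIBUTES
    value: "deployment.environment=staging,service.version=1.4.2"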

Step 4 - Configure the logger for your application

Diagram: a host, container, or application sends data to the filelog receiver in the Collector, and the Datadog Exporter in the Collector sends the data to the Datadog backend.

Since the OpenTelemetry SDKs’ logging functionality is not fully supported (see your specific language in the OpenTelemetry documentation for more information), Datadog recommends using a standard logging library for your application. Follow the language-specific Log Collection documentation to set up the appropriate logger in your application. Datadog strongly encourages setting up your logging library to output your logs in JSON to avoid the need for custom parsing rules.

Configure the filelog receiver

Configure the filelog receiver using operators. For example, if there is a service checkoutservice that is writing logs to /var/log/pods/services/checkout/0.log, a sample log might look like this:

{"level":"info","message":"order confirmation email sent to \"jack@example.com\"","service":"checkoutservice","span_id":"197492ff2b4e1c65","timestamp":"2022-10-10T22:17:14.841359661Z","trace_id":"e12c408e028299900d48a9dd29b0dc4c"}

Example filelog configuration:

filelog:
   include:
     - /var/log/pods/**/*checkout*/*.log
   start_at: end
   poll_interval: 500ms
   operators:
     - id: parse_log
       type: json_parser
       parse_from: body
     - id: trace
       type: trace_parser
       trace_id:
         parse_from: attributes.trace_id
       span_id:
         parse_from: attributes.span_id
   attributes:
     ddtags: env:staging

  • include: The list of files the receiver tails
  • start_at: end: Reads only new content written after the Collector starts
  • poll_interval: Sets the poll frequency
  • Operators:
    • json_parser: Parses JSON logs. By default, the filelog receiver converts each log line into a log record, which is the body of the logs’ data model. Then, the json_parser converts the JSON body into attributes in the data model.
    • trace_parser: Extracts the trace_id and span_id from the log to correlate logs and traces in Datadog.
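
The json_parser operator can also map the log’s timestamp and severity onto the log record. The sketch below extends the parse_log operator from the example above, assuming the field names from the sample log record:

operators:
  - id: parse_log
    type: json_parser
    parse_from: body
    # Map the timestamp field of the JSON log onto the log record's timestamp.
    timestamp:
      parse_from: attributes.timestamp
      layout_type: gotime
      layout: '2006-01-02T15:04:05.999999999Z07:00'
    # Map the level field onto the log record's severity.
    severity:
      parse_from: attributes.level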

Remap OTel’s service.name attribute to service for logs

For Datadog Exporter versions 0.83.0 and later, the service field of OTel logs is populated from the OTel semantic convention attribute service.name. However, service.name is not one of the default service attributes in Datadog’s log preprocessing.

To get the service field correctly populated in your logs, you can specify service.name as the source of a log’s service by setting up a log service remapper processor.

Optional: Using Kubernetes

There are multiple ways to deploy the OpenTelemetry Collector and Datadog Exporter in a Kubernetes infrastructure. For the filelog receiver to work, the Agent/DaemonSet deployment is the recommended deployment method.

In containerized environments, applications write logs to stdout or stderr. Kubernetes collects the logs and writes them to a standard location on the host node. You need to mount that location into the Collector container so the filelog receiver can read it. Below is an example DaemonSet with the mounts required for sending logs.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: otel-agent
  labels:
    app: opentelemetry
    component: otel-collector
spec:
  selector:
    matchLabels:
      app: opentelemetry
      component: otel-collector
  template:
    metadata:
      labels:
        app: opentelemetry
        component: otel-collector
    spec:
      containers:
        - name: collector
          command:
            - "/otelcol-contrib"
            - "--config=/conf/otel-agent-config.yaml"
          image: otel/opentelemetry-collector-contrib:0.71.0
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            # The k8s.pod.ip is used to associate pods for k8sattributes
            - name: OTEL_RESOURCE_ATTRIBUTES
              value: "k8s.pod.ip=$(POD_IP)"
          ports:
            - containerPort: 4318 # default port for OpenTelemetry HTTP receiver.
              hostPort: 4318
            - containerPort: 4317 # default port for OpenTelemetry gRPC receiver.
              hostPort: 4317
            - containerPort: 8888 # Default endpoint for querying metrics.
          volumeMounts:
            - name: otel-agent-config-vol
              mountPath: /conf
            - name: varlogpods
              mountPath: /var/log/pods
              readOnly: true
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
      volumes:
        - name: otel-agent-config-vol
          configMap:
            name: otel-agent-conf
            items:
              - key: otel-agent-config
                path: otel-agent-config.yaml
        # Mount nodes log file location.
        - name: varlogpods
          hostPath:
            path: /var/log/pods
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
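
The DaemonSet above mounts a ConfigMap named otel-agent-conf. A minimal sketch of a matching ConfigMap follows; the embedded collector configuration is illustrative, and it assumes the DD_API_KEY environment variable is available to the collector container:

apiVersion: v1
kind: ConfigMap
metadata:
  name: otel-agent-conf
  labels:
    app: opentelemetry
    component: otel-collector
data:
  # The key must match the item key referenced by the otel-agent-config-vol volume.
  otel-agent-config: |
    receivers:
      filelog:
        include:
          - /var/log/pods/**/*.log
    exporters:
      datadog:
        api:
          key: ${env:DD_API_KEY}
    service:
      pipelines:
        logs:
          receivers: [filelog]
          exporters: [datadog]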

Step 5 - Run the collector

Run the collector, specifying the configuration file using the --config parameter:

otelcontribcol_linux_amd64 --config collector.yaml

To run the OpenTelemetry Collector as a Docker image and receive traces from the same host:

  1. Choose a published Docker image such as otel/opentelemetry-collector-contrib.

  2. Determine which ports to open on your container so that OpenTelemetry traces are sent to the OpenTelemetry Collector. By default, traces are sent over gRPC on port 4317. If you don’t use gRPC, use port 4318.

  3. Run the container and expose the necessary port, using the previously defined collector.yaml file. For example, if you are using port 4317:

    $ docker run \
        -p 4317:4317 \
        --hostname $(hostname) \
        -v $(pwd)/collector.yaml:/etc/otelcol-contrib/config.yaml \
        otel/opentelemetry-collector-contrib
    

To run the OpenTelemetry Collector as a Docker image and receive traces from other containers:

  1. Create a Docker network:

    docker network create <NETWORK_NAME>
    
  2. Run the OpenTelemetry Collector and application containers as part of the same network.

    # Run the OpenTelemetry Collector
    docker run -d --name opentelemetry-collector \
        --network <NETWORK_NAME> \
        --hostname $(hostname) \
        -v $(pwd)/collector.yaml:/etc/otelcol-contrib/config.yaml \
        otel/opentelemetry-collector-contrib
    

    When running the application container, ensure that the environment variable OTEL_EXPORTER_OTLP_ENDPOINT is configured to use the appropriate hostname for the OpenTelemetry Collector. In the example below, this is opentelemetry-collector.

    # Run the application container
    docker run -d --name app \
        --network <NETWORK_NAME> \
        --hostname $(hostname) \
        -e OTEL_EXPORTER_OTLP_ENDPOINT=http://opentelemetry-collector:4317 \
        company/app:latest
    

Using a DaemonSet is the most common and recommended way to configure OpenTelemetry collection in a Kubernetes environment. To deploy the OpenTelemetry Collector and Datadog Exporter in a Kubernetes infrastructure:

  1. Use this full example of configuring the OpenTelemetry Collector using the Datadog Exporter as a DaemonSet, including the application configuration example.

    In particular, note the configuration options from the example that ensure the DaemonSet’s essential ports are exposed and accessible to your application:

    # ...
            ports:
            - containerPort: 4318 # default port for OpenTelemetry HTTP receiver.
              hostPort: 4318
            - containerPort: 4317 # default port for OpenTelemetry gRPC receiver.
              hostPort: 4317
            - containerPort: 8888  # Default endpoint for querying Collector observability metrics.
    # ...
    

    If your application does not need both the standard HTTP and gRPC ports, you can remove the one you do not use.

  2. To collect valuable Kubernetes attributes, which are used for Datadog container tagging, report the Pod IP as a resource attribute, as shown in the example:

    # ...
            env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            # The k8s.pod.ip is used to associate pods for k8sattributes
            - name: OTEL_RESOURCE_ATTRIBUTES
              value: "k8s.pod.ip=$(POD_IP)"
    # ...
    

    This ensures that the Kubernetes Attributes Processor, which is used in the ConfigMap, is able to extract the necessary metadata to attach to traces. Additional roles need to be set to allow access to this metadata (a sketch of such roles follows this list). The example is complete, ready to use, and has the correct roles set up.

  3. Provide your application container, making sure it uses the correct OTLP endpoint hostname. The OpenTelemetry Collector runs as a DaemonSet, so the current host needs to be targeted. Set your application container’s OTEL_EXPORTER_OTLP_ENDPOINT environment variable accordingly, as in the example chart:

    # ...
            env:
            - name: HOST_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
              # The application SDK must use this environment variable in order to successfully
              # connect to the DaemonSet's collector.
            - name: OTEL_EXPORTER_OTLP_ENDPOINT
              value: "http://$(HOST_IP):4318"
    # ...
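
The roles mentioned in step 2 typically look like the following. This is a hedged sketch for reference only; the linked full example already ships equivalent RBAC objects, and the resource name here is illustrative:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-collector-k8sattributes   # illustrative name
rules:
  # The Kubernetes Attributes Processor watches pods and namespaces to build its metadata cache.
  - apiGroups: [""]
    resources: ["pods", "namespaces", "nodes"]
    verbs: ["get", "watch", "list"]
  # Needed to resolve deployment names from replica sets.
  - apiGroups: ["apps"]
    resources: ["replicasets"]
    verbs: ["get", "watch", "list"]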
    

To deploy the OpenTelemetry Collector and Datadog Exporter in a Kubernetes Gateway deployment:

  1. Use this full example of configuring the OpenTelemetry Collector using the Datadog Exporter as a DaemonSet, including the application configuration example.

    In particular, note the configuration options from the example that ensure the DaemonSet’s essential ports are exposed and accessible to your application:

    # ...
            ports:
            - containerPort: 4318 # default port for OpenTelemetry HTTP receiver.
              hostPort: 4318
            - containerPort: 4317 # default port for OpenTelemetry gRPC receiver.
              hostPort: 4317
            - containerPort: 8888  # Default endpoint for querying Collector observability metrics.
    # ...
    

    If your application does not need both the standard HTTP and gRPC ports, you can remove the one you do not use.

  2. To collect valuable Kubernetes attributes, which are used for Datadog container tagging, report the Pod IP as a resource attribute, as shown in the example:

    # ...
            env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            # The k8s.pod.ip is used to associate pods for k8sattributes
            - name: OTEL_RESOURCE_ATTRIBUTES
              value: "k8s.pod.ip=$(POD_IP)"
    # ...
    

    This ensures that the Kubernetes Attributes Processor, which is used in the ConfigMap, is able to extract the necessary metadata to attach to traces. Additional roles need to be set to allow access to this metadata. The example is complete, ready to use, and has the correct roles set up.

  3. Provide your application container, making sure it uses the correct OTLP endpoint hostname. The OpenTelemetry Collector runs as a DaemonSet, so the current host needs to be targeted. Set your application container’s OTEL_EXPORTER_OTLP_ENDPOINT environment variable accordingly, as in the example chart:

    # ...
            env:
            - name: HOST_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
              # The application SDK must use this environment variable in order to successfully
              # connect to the DaemonSet's collector.
            - name: OTEL_EXPORTER_OTLP_ENDPOINT
              value: "http://$(HOST_IP):4318"
    # ...
    
  4. Change the DaemonSet to include an OTLP exporter instead of the Datadog Exporter currently in place (if your Gateway endpoint is not served over TLS, see the note after this list):

    # ...
    exporters:
      otlp:
        endpoint: "<GATEWAY_HOSTNAME>:4317"
    # ...
    
  5. Make sure that the service pipelines use this exporter, instead of the Datadog one that is in place in the example:

    # ...
        service:
          pipelines:
            metrics:
              receivers: [hostmetrics, otlp]
              processors: [resourcedetection, k8sattributes, batch]
              exporters: [otlp]
            traces:
              receivers: [otlp]
              processors: [resourcedetection, k8sattributes, batch]
              exporters: [otlp]
    # ...
    

    This ensures that each agent forwards its data via the OTLP protocol to the Collector Gateway.

  6. Replace <GATEWAY_HOSTNAME> with the address of your OpenTelemetry Collector Gateway.

  7. To ensure that Kubernetes metadata continues to be applied to traces, tell the k8sattributes processor to forward the Pod IP to the Gateway Collector so that it can obtain the metadata:

    # ...
    k8sattributes:
      passthrough: true
    # ...
    

    For more information about the passthrough option, read its documentation.

  8. Make sure that the Gateway Collector’s configuration uses the same Datadog Exporter settings that were replaced by the OTLP exporter in the agents. For example (where <DD_SITE> is your Datadog site, such as datadoghq.com):

    # ...
    exporters:
      datadog:
        api:
          site: <DD_SITE>
          key: ${env:DD_API_KEY}
    # ...
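
Note: if the Gateway endpoint is not served over TLS, the agents’ OTLP gRPC exporter also needs insecure mode enabled; otherwise it attempts a TLS handshake. A minimal sketch, extending the exporter configuration from step 4:

exporters:
  otlp:
    endpoint: "<GATEWAY_HOSTNAME>:4317"
    tls:
      # Disable TLS only if the Gateway listens on a plain gRPC port.
      insecure: true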
    

To use the OpenTelemetry Operator:

  1. Follow the official documentation for deploying the OpenTelemetry Operator. As described there, deploy the certificate manager in addition to the Operator.

  2. Configure the Operator using one of the OpenTelemetry Collector standard Kubernetes configurations. For example:

    apiVersion: opentelemetry.io/v1alpha1
    kind: OpenTelemetryCollector
    metadata:
      name: opentelemetry-example
    spec:
      mode: daemonset
      hostNetwork: true
      image: otel/opentelemetry-collector-contrib
      env:
        - name: DD_API_KEY
          valueFrom:
            secretKeyRef:
              key:  datadog_api_key
              name: opentelemetry-example-otelcol-dd-secret
      config: |
        receivers:
          otlp:
            protocols:
              grpc:
              http:
          hostmetrics:
            collection_interval: 10s
            scrapers:
              paging:
                metrics:
                  system.paging.utilization:
                    enabled: true
              cpu:
                metrics:
                  system.cpu.utilization:
                    enabled: true
              disk:
              filesystem:
                metrics:
                  system.filesystem.utilization:
                    enabled: true
              load:
              memory:
              network:
        processors:
          k8sattributes:
          batch:
            send_batch_max_size: 100
            send_batch_size: 10
            timeout: 10s
        exporters:
          datadog:
            api:
              key: ${env:DD_API_KEY}
        service:
          pipelines:
            metrics:
              receivers: [hostmetrics, otlp]
              processors: [k8sattributes, batch]
              exporters: [datadog]
            traces:
              receivers: [otlp]
              processors: [k8sattributes, batch]
              exporters: [datadog]
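
The manifest above reads the Datadog API key from a Secret. A minimal sketch of a matching Secret follows; the name and key must match the secretKeyRef above, and the value placeholder is illustrative:

apiVersion: v1
kind: Secret
metadata:
  name: opentelemetry-example-otelcol-dd-secret
type: Opaque
stringData:
  datadog_api_key: <YOUR_DATADOG_API_KEY>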
    

Logs and traces correlation

If you are using the Datadog Exporter to also send OpenTelemetry traces to Datadog, use the trace_parser operator to extract the trace_id from each log and attach it to the log record. Datadog automatically correlates the related logs and traces. See Connect OpenTelemetry Traces and Logs for more information.

Screenshot: the trace panel showing a list of logs correlated with the trace.

Hostname resolution

See Mapping OpenTelemetry Semantic Conventions to Hostnames to understand how the hostname is resolved.

Deployment-based limitations

The OpenTelemetry Collector has two primary deployment methods: Agent and Gateway. Depending on your deployment method, some components are not available.

Deployment mode | Host metrics | Kubernetes orchestration metrics | Traces | Logs auto-ingestion
as Gateway
as Agent

Out-of-the-box dashboards

Datadog provides out-of-the-box dashboards that you can copy and customize. To use Datadog’s out-of-the-box OpenTelemetry dashboards:

  1. Install the OpenTelemetry integration.

  2. Go to Dashboards > Dashboards list and search for opentelemetry:

    Screenshot: the Dashboards list, showing two OpenTelemetry out-of-the-box dashboards: Host Metrics and Collector Metrics.

The Host Metrics dashboard is for data collected from the host metrics receiver. The Collector Metrics dashboard is for any other types of metrics collected, depending on which metrics receiver you choose to enable.

Containers overview dashboard

The Container Overview dashboard is in private beta. Fill out this form to try it out.
This feature is affected by the deprecation of Docker in Kubernetes, and you might not be able to use the Docker Stats receiver with Kubernetes versions 1.24 and later.

The Docker Stats receiver generates container metrics for the OpenTelemetry Collector. The Datadog Exporter translates container metrics to their Datadog counterparts.

Use the following configuration to enable additional metrics on the Docker Stats receiver, which populate the containers overview dashboard:

  docker_stats:
    metrics:
      container.network.io.usage.rx_packets:
        enabled: true
      container.network.io.usage.tx_packets:
        enabled: true
      container.cpu.usage.system:
        enabled: true
      container.memory.rss:
        enabled: true
      container.blockio.io_serviced_recursive:
        enabled: true
      container.uptime:
        enabled: true
      container.memory.hierarchical_memory_limit:
        enabled: true
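
For the dashboard to receive data, the receiver also has to be wired into a metrics pipeline that exports to Datadog. A minimal sketch, assuming the docker_stats receiver above is defined under receivers:

service:
  pipelines:
    metrics:
      # docker_stats feeds container metrics alongside any OTLP metrics you receive.
      receivers: [docker_stats, otlp]
      processors: [batch]
      exporters: [datadog]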

The minimum required OpenTelemetry Collector version that supports this feature is v0.78.0.

Further reading