This page guides you through various deployment options for the OpenTelemetry Collector with the Datadog Exporter, allowing you to send traces, metrics, and logs to Datadog.
To run the OpenTelemetry Collector along with the Datadog Exporter, download the latest release of the OpenTelemetry Collector Contrib distribution.
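For example, on a Linux AMD64 host, the download might look like the following sketch; the release tag is an assumption, so substitute the latest version from the project's releases page:

# Example only: replace v0.104.0 with the latest release tag.
curl -L -o otelcontribcol_linux_amd64 \
  https://github.com/open-telemetry/opentelemetry-collector-contrib/releases/download/v0.104.0/otelcontribcol_linux_amd64
chmod +x otelcontribcol_linux_amd64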
The OpenTelemetry Collector can be deployed in various environments to suit different infrastructure needs. This section covers the following deployment options:
Certain features and capabilities vary depending on the deployment method. For a detailed overview of these differences, see Deployment-based limitations.
Choose the deployment option that best fits your infrastructure and complete the following instructions.
Run the Collector, specifying the configuration file using the --config parameter:
otelcontribcol_linux_amd64 --config collector.yaml
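If you don't have a collector.yaml yet, a minimal sketch along these lines receives OTLP traffic and forwards traces to Datadog; <DD_SITE> and the DD_API_KEY environment variable are assumptions about your setup:

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
processors:
  batch:
exporters:
  datadog:
    api:
      site: <DD_SITE>
      key: ${env:DD_API_KEY}
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [datadog]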
To run the OpenTelemetry Collector as a Docker image and receive traces from the same host:
Choose a published Docker image such as otel/opentelemetry-collector-contrib.
Determine which ports to open on your container so that OpenTelemetry traces are sent to the OpenTelemetry Collector. By default, traces are sent over gRPC on port 4317. If you don't use gRPC, use port 4318 (OTLP over HTTP).
Run the container and expose the necessary port, using the collector.yaml file. For example, if you are using port 4317:
$ docker run \
-p 4317:4317 \
--hostname $(hostname) \
-v $(pwd)/otel_collector_config.yaml:/etc/otelcol-contrib/config.yaml \
otel/opentelemetry-collector-contrib
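Applications on the same host can then reach the Collector through the published port. For example, an SDK that reads the standard OTLP environment variable could be pointed at the container like this (a sketch, assuming gRPC on port 4317):

export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317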
To run the OpenTelemetry Collector as a Docker image and receive traces from other containers:
Create a Docker network:
docker network create <NETWORK_NAME>
Run the OpenTelemetry Collector and application containers as part of the same network.
# Run the OpenTelemetry Collector
docker run -d --name opentelemetry-collector \
--network <NETWORK_NAME> \
--hostname $(hostname) \
-v $(pwd)/otel_collector_config.yaml:/etc/otelcol-contrib/config.yaml \
otel/opentelemetry-collector-contrib
When running the application container, ensure that the environment variable OTEL_EXPORTER_OTLP_ENDPOINT is configured to use the appropriate hostname for the OpenTelemetry Collector. In the example below, this is opentelemetry-collector.
# Run the application container
docker run -d --name app \
--network <NETWORK_NAME> \
--hostname $(hostname) \
-e OTEL_EXPORTER_OTLP_ENDPOINT=http://opentelemetry-collector:4317 \
company/app:latest
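As an alternative to creating the network manually, Docker Compose places services on a shared network where they can reach each other by service name. A minimal equivalent sketch, reusing the image names and config file from the commands above:

services:
  opentelemetry-collector:
    image: otel/opentelemetry-collector-contrib
    volumes:
      - ./otel_collector_config.yaml:/etc/otelcol-contrib/config.yaml
  app:
    image: company/app:latest
    environment:
      # Compose resolves the service name to the collector container.
      - OTEL_EXPORTER_OTLP_ENDPOINT=http://opentelemetry-collector:4317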
Using a DaemonSet is the most common and recommended way to configure OpenTelemetry collection in a Kubernetes environment. To deploy the OpenTelemetry Collector and Datadog Exporter in a Kubernetes infrastructure:
Use this example configuration, including the application configuration, to set up the OpenTelemetry Collector with the Datadog Exporter as a DaemonSet.
Ensure that essential ports for the DaemonSet are exposed and accessible to your application. The following configuration options from the example define these ports:
# ...
ports:
- containerPort: 4318 # default port for OpenTelemetry HTTP receiver.
hostPort: 4318
- containerPort: 4317 # default port for OpenTelemetry gRPC receiver.
hostPort: 4317
- containerPort: 8888 # Default endpoint for querying Collector observability metrics.
# ...
To collect valuable Kubernetes attributes, which are used for Datadog container tagging, report the Pod IP as a resource attribute, as shown in the example:
# ...
env:
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
# The k8s.pod.ip is used to associate pods for k8sattributes
- name: OTEL_RESOURCE_ATTRIBUTES
value: "k8s.pod.ip=$(POD_IP)"
# ...
This ensures that the Kubernetes Attributes Processor, which is used in the config map, can extract the necessary metadata to attach to traces. Additional roles must be set to allow access to this metadata. The example is complete, ready to use, and has the correct roles set up.
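For reference, the association relies on the k8sattributes processor matching pods by that IP. A minimal sketch of the relevant fragment, which the example config map already covers:

k8sattributes:
  pod_association:
    - sources:
        # Match telemetry to pods using the k8s.pod.ip resource attribute
        # reported by the application above.
        - from: resource_attribute
          name: k8s.pod.ip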
Configure your application container to use the correct OTLP endpoint hostname. Since the OpenTelemetry Collector runs as a DaemonSet, the current host needs to be targeted. Set your application container's OTEL_EXPORTER_OTLP_ENDPOINT environment variable accordingly, as in the example chart:
# ...
env:
- name: HOST_IP
valueFrom:
fieldRef:
fieldPath: status.hostIP
# The application SDK must use this environment variable in order to successfully
# connect to the DaemonSet's collector.
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: "http://$(HOST_IP):4318"
# ...
Configure host metadata collection to ensure accurate host information. Set up your DaemonSet to collect and forward host metadata:
processors:
resourcedetection:
detectors: [system, env]
k8sattributes:
# existing k8sattributes config
transform:
trace_statements:
- context: resource
statements:
- set(attributes["datadog.host.use_as_metadata"], true)
...
service:
pipelines:
traces:
receivers: [otlp]
processors: [resourcedetection, k8sattributes, transform, batch]
exporters: [datadog]
This configuration collects host metadata using the resourcedetection processor, adds Kubernetes metadata with the k8sattributes processor, and sets the datadog.host.use_as_metadata attribute to true. For more information, see Mapping OpenTelemetry Semantic Conventions to Infrastructure List Host Information.
To deploy the OpenTelemetry Collector and Datadog Exporter in a Kubernetes Gateway deployment:
Use this example configuration, including the application configuration, to set up the OpenTelemetry Collector with the Datadog Exporter as a DaemonSet.
Ensure that essential ports for the DaemonSet are exposed and accessible to your application. The following configuration options from the example define these ports:
# ...
ports:
- containerPort: 4318 # default port for OpenTelemetry HTTP receiver.
hostPort: 4318
- containerPort: 4317 # default port for OpenTelemetry gRPC receiver.
hostPort: 4317
- containerPort: 8888 # Default endpoint for querying Collector observability metrics.
# ...
To collect valuable Kubernetes attributes, which are used for Datadog container tagging, report the Pod IP as a resource attribute, as shown in the example:
# ...
env:
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
# The k8s.pod.ip is used to associate pods for k8sattributes
- name: OTEL_RESOURCE_ATTRIBUTES
value: "k8s.pod.ip=$(POD_IP)"
# ...
This ensures that the Kubernetes Attributes Processor, which is used in the config map, can extract the necessary metadata to attach to traces. Additional roles must be set to allow access to this metadata. The example is complete, ready to use, and has the correct roles set up.
Configure your application container to use the correct OTLP endpoint hostname. Since the OpenTelemetry Collector runs as a DaemonSet, the current host needs to be targeted. Set your application container's OTEL_EXPORTER_OTLP_ENDPOINT environment variable accordingly, as in the example chart:
# ...
env:
- name: HOST_IP
valueFrom:
fieldRef:
fieldPath: status.hostIP
# The application SDK must use this environment variable in order to successfully
# connect to the DaemonSet's collector.
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: "http://$(HOST_IP):4318"
# ...
Change the DaemonSet to include an OTLP exporter instead of the Datadog Exporter currently in place:
# ...
exporters:
otlp:
endpoint: "<GATEWAY_HOSTNAME>:4317"
# ...
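Note that the Collector's OTLP gRPC exporter uses TLS by default. If your Gateway endpoint does not terminate TLS (an assumption about your setup), also mark the connection as insecure:

# ...
exporters:
  otlp:
    endpoint: "<GATEWAY_HOSTNAME>:4317"
    tls:
      insecure: true
# ...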
Make sure that the service pipelines use this exporter instead of the Datadog exporter used in the example:
# ...
service:
pipelines:
metrics:
receivers: [hostmetrics, otlp]
processors: [resourcedetection, k8sattributes, batch]
exporters: [otlp]
traces:
receivers: [otlp]
processors: [resourcedetection, k8sattributes, batch]
exporters: [otlp]
# ...
This ensures that each Agent forwards its data through the OTLP protocol to the Collector Gateway.
Replace <GATEWAY_HOSTNAME> with the address of your OpenTelemetry Collector Gateway.
Configure the k8sattributes processor to forward the Pod IP to the Gateway Collector so that it can obtain the metadata:
# ...
k8sattributes:
passthrough: true
# ...
For more information about the passthrough option, read its documentation.
Make sure that the Gateway Collector's configuration uses the same Datadog Exporter settings that have been replaced by the OTLP exporter in the Agents. For example (where <DD_SITE> is your Datadog site):
# ...
exporters:
datadog:
api:
site: <DD_SITE>
key: ${env:DD_API_KEY}
# ...
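One way to make DD_API_KEY available to the Gateway Collector pod is through a Kubernetes Secret; the Secret and key names in this sketch are assumptions:

# ...
env:
  - name: DD_API_KEY
    valueFrom:
      secretKeyRef:
        name: datadog-secret   # hypothetical Secret name
        key: api-key           # hypothetical key within the Secret
# ...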
Configure host metadata collection:
In a gateway deployment, you need to ensure that host metadata is collected by the agent collectors and preserved by the gateway collector, so that it is properly forwarded through the gateway to Datadog.
For more information, see Mapping OpenTelemetry Semantic Conventions to Infrastructure List Host Information.
Agent collector configuration:
processors:
  resourcedetection:
    detectors: [system, env]
  k8sattributes:
    passthrough: true
  transform:
    trace_statements:
      - context: resource
        statements:
          - set(attributes["datadog.host.use_as_metadata"], true)
  batch:
exporters:
  otlp:
    endpoint: "<GATEWAY_HOSTNAME>:4317"
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [resourcedetection, k8sattributes, transform, batch]
      exporters: [otlp]
Gateway collector configuration:
processors:
  k8sattributes:
    extract:
      metadata: [k8s.node.name]
  batch:
exporters:
  datadog:
    api:
      key: ${DD_API_KEY}
    host_metadata:
      hostname_source: first_resource
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [k8sattributes, batch]
      exporters: [datadog]
To use the OpenTelemetry Operator, follow the official documentation for deploying the OpenTelemetry Operator. As described there, deploy the certificate manager in addition to the Operator.
Configure the Operator using one of the OpenTelemetry Collector standard Kubernetes configurations:
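For illustration, a DaemonSet-mode OpenTelemetryCollector resource managed by the Operator might look like the following sketch; the mode, resource names, and Secret-based API key are assumptions to adapt to your chosen configuration:

apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel-agent
spec:
  mode: daemonset
  env:
    - name: DD_API_KEY
      valueFrom:
        secretKeyRef:
          name: datadog-secret   # hypothetical Secret
          key: api-key
  config:
    receivers:
      otlp:
        protocols:
          grpc: {}
          http: {}
    processors:
      batch: {}
    exporters:
      datadog:
        api:
          key: ${env:DD_API_KEY}
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [datadog]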
See Mapping OpenTelemetry Semantic Conventions to Hostnames to understand how the hostname is resolved.
The OpenTelemetry Collector has two primary deployment methods: Agent and Gateway. Depending on your deployment method, the following components are available:
| Deployment mode | Host metrics | Kubernetes orchestration metrics | Traces | Logs auto-ingestion |
|---|---|---|---|---|
| as Gateway | | | | |
| as Agent | | | | |