If you experience unexpected behavior using OpenTelemetry with Datadog, this guide may help you resolve the issue. If you continue to have trouble, contact Datadog Support for further assistance.
When using OpenTelemetry with Datadog, you might encounter various hostname-related issues. The following sections cover common scenarios and their solutions.
Symptom: When deploying in Kubernetes, the hostname reported by Datadog does not match the expected node name.
Cause: This is typically the result of missing k8s.node.name (and optionally k8s.cluster.name) tags.
Resolution:
Configure the k8s.pod.ip attribute for your application deployment:
env:
  - name: MY_POD_IP
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: status.podIP
  - name: OTEL_RESOURCE_ATTRIBUTES
    value: k8s.pod.ip=$(MY_POD_IP)
Enable the k8sattributes processor in your Collector:
k8sattributes:
  [...]
processors:
  - k8sattributes
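For reference, a minimal sketch of how this typically fits into a complete Collector configuration (the otlp receiver, batch processor, datadog exporter, and traces pipeline shown here are assumptions; adjust them to your setup):
processors:
  k8sattributes:
    passthrough: false
  batch:
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [k8sattributes, batch]
      exporters: [datadog]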
Alternatively, you can override the hostname using the datadog.host.name attribute:
processors:
  transform:
    trace_statements:
      - context: resource
        statements:
          - set(attributes["datadog.host.name"], "${NODE_NAME}")
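For ${NODE_NAME} to resolve, the node name must be exposed to the Collector pod as an environment variable, for example through the Kubernetes downward API (a minimal sketch, assuming the Collector runs as a Deployment or DaemonSet):
env:
  - name: NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName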
For more information on host-identifying attributes, see Mapping OpenTelemetry Semantic Conventions to Hostnames.
Symptom: In AWS Fargate environments, an incorrect hostname might be reported for traces.
Cause: In Fargate environments, the default resource detection may not properly identify the ECS metadata, leading to incorrect hostname assignment.
Resolution:
Configure the resourcedetection processor in your Collector configuration and enable the ecs detector:
processors:
  resourcedetection:
    detectors: [env, ecs]
    timeout: 2s
    override: false
Symptom: In a gateway deployment, telemetry from multiple hosts appears to come from a single host, or host metadata isn’t being properly forwarded.
Cause: This occurs when the gateway collector configuration doesn’t preserve or properly forward the host metadata attributes from the agent collectors.
Resolution:
Configure agent collectors to collect and forward host metadata:
processors:
  resourcedetection:
    detectors: [system, env]
  k8sattributes:
    passthrough: true
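In this pattern, the agent collectors usually forward telemetry to the gateway over OTLP; a minimal exporter sketch (the endpoint and TLS settings are placeholders for your environment):
exporters:
  otlp:
    endpoint: gateway-collector:4317
    tls:
      insecure: true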
Configure the gateway collector to extract and forward necessary metadata:
processors:
  k8sattributes:
    extract:
      metadata: [node.name, k8s.node.name]
  transform:
    trace_statements:
      - context: resource
        statements:
          - set(attributes["datadog.host.use_as_metadata"], true)
exporters:
  datadog:
    hostname_source: resource_attribute
For more information, see Mapping OpenTelemetry Semantic Conventions to Infrastructure List Host Information.
Symptom: A single host appears under multiple names in Datadog. For example, you might see one entry from the OpenTelemetry Collector (with the OTel logo) and another from the Datadog Agent.
Cause: When a host is monitored through more than one ingestion method (for example, OTLP + Datadog Agent, or DogStatsD + OTLP) without aligning on a single hostname resource attribute, Datadog treats each path as a separate host.
Resolution:
Align all ingestion methods on a single hostname by using the same host-identifying resource attribute everywhere (for example, k8s.node.name). If needed, set the datadog.host.name attribute explicitly so every path reports the same host:
processors:
  transform:
    trace_statements:
      - context: resource
        statements:
          - set(attributes["datadog.host.name"], "shared-hostname")
Symptom: You may experience a delay in host tags appearing on your telemetry data after starting the Datadog Agent or OpenTelemetry Collector. This delay typically lasts under 10 minutes but can extend up to 40-50 minutes in some cases.
Cause: This delay occurs because host metadata must be processed and indexed by Datadog’s backend before tags can be associated with telemetry data.
Resolution:
Host tags configured in either the Datadog exporter configuration (host_metadata::tags) or the Datadog Agent's tags section are not immediately applied to telemetry data. The tags eventually appear after the backend resolves the host metadata.
Choose your setup for specific instructions:
Configure expected_tags_duration in datadog.yaml to bridge the gap until host tags are resolved:
expected_tags_duration: "15m"
This configuration adds the expected tags to all telemetry for the specified duration (in this example, 15 minutes).
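For reference, a minimal datadog.yaml sketch combining the Agent's tags section with this setting (the tag values are placeholders):
tags:
  - env:prod
  - team:backend
expected_tags_duration: "15m"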
Use the transform processor to set your host tags as OTLP attributes. For example, to add environment and team tags:
processors:
  transform:
    trace_statements:
      - context: resource
        statements:
          # OpenTelemetry semantic conventions
          - set(attributes["deployment.environment.name"], "prod")
          # Datadog-specific host tags
          - set(attributes["ddtags"], "env:prod,team:backend")
...
This approach combines OpenTelemetry semantic conventions with Datadog-specific host tags to ensure proper functionality in both OpenTelemetry and Datadog environments.
Symptom: The team tag is not appearing in Datadog for logs and traces, despite being set as a resource attribute in OpenTelemetry configurations.
Cause: This happens because OpenTelemetry resource attributes need explicit mapping to Datadog's tag format using the ddtags attribute.
Resolution:
Use the OpenTelemetry Collector's transform processor to map the team resource attribute to the ddtags attribute:
processors:
  transform/datadog_team_tag:
    metric_statements:
      - context: datapoint
        statements:
          - set(attributes["ddtags"], Concat(["team:", resource.attributes["team"]],""))
    log_statements:
      - context: log
        statements:
          - set(attributes["ddtags"], Concat(["team:", resource.attributes["team"]],""))
    trace_statements:
      - context: span
        statements:
          - set(attributes["ddtags"], Concat(["team:", resource.attributes["team"]],""))
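A minimal sketch of wiring this processor into each signal's pipeline, so the tag is applied to metrics, logs, and traces alike (the receiver, exporter, and pipeline names are assumptions; adjust to your setup):
service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [transform/datadog_team_tag]
      exporters: [datadog]
    logs:
      receivers: [otlp]
      processors: [transform/datadog_team_tag]
      exporters: [datadog]
    traces:
      receivers: [otlp]
      processors: [transform/datadog_team_tag]
      exporters: [datadog]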
In the transform statements above, replace resource.attributes["team"] with the actual attribute name if it differs in your setup (for example, resource.attributes["arm.team.name"]).
To verify the configuration:
Symptom: Container tags are not appearing on the Containers page in Datadog, which affects container monitoring and management capabilities.
Cause: This occurs when container resource attributes aren’t properly mapped to Datadog’s expected container metadata format.
Resolution:
When using OTLP ingestion in the Datadog Agent, you need to set specific resource attributes to ensure proper container metadata association. For more information, see Resource Attribute Mapping.
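As an illustration, a sketch of setting container-related resource attributes on the instrumented application (the attribute values are placeholders; see Resource Attribute Mapping for the attributes Datadog expects):
env:
  - name: OTEL_RESOURCE_ATTRIBUTES
    value: container.id=<container-id>,container.name=my-app,container.image.name=my-app-image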
To verify the configuration:
Check that container resource attributes are mapped to the corresponding Datadog container tags (for example, container.id should become container_id).
Symptom: Metrics are not appearing in the Service Catalog and dashboards despite being properly collected.
Cause: This typically occurs due to incorrect or improperly mapped semantic conventions.
Resolution:
To verify the configuration:
Additional helpful documentation, links, and articles: