The Datadog OTLP metrics intake endpoint is in Preview.
Datadog’s OpenTelemetry Protocol (OTLP) metrics intake API endpoint allows you to send metrics directly to Datadog. With this feature, you don’t need to run the Datadog Agent or OpenTelemetry Collector + Datadog Exporter.
You might prefer this option if you’re looking for a straightforward setup and want to send metrics directly to Datadog without using the Datadog Agent or OpenTelemetry Collector.
This endpoint is particularly useful in the following scenarios:
OpenTelemetry distributions without Datadog Exporter support: Some OpenTelemetry distributions, such as the AWS Distro for OpenTelemetry (ADOT), have removed vendor-specific exporters in favor of a unified OTLP exporter. The OTLP metrics endpoint enables these distributions to send metrics directly to Datadog seamlessly.
Technical constraints using the Datadog Exporter or Agent: Ideal for scenarios where installing additional software is impractical or restrictive, such as third-party managed services (for example, Vercel), applications on customer devices, or environments requiring streamlined, Agentless observability pipelines. The OTLP metrics endpoint enables direct OTLP metric ingestion in these scenarios.
To export OTLP metrics data to the Datadog OTLP metrics intake endpoint:
- Configure the OTLP HTTP exporter to send metrics to the endpoint.
- Ensure that your metrics use delta temporality.
- Optionally, use the dd-otel-metric-config HTTP header to configure the metric translator behavior.
To send OTLP data to the Datadog OTLP metrics intake endpoint, use the OTLP HTTP exporter. For metrics, the exporter supports both HTTP Protobuf and HTTP JSON. HTTP Protobuf is recommended for better performance.
The process differs depending on whether you’re using automatic or manual instrumentation for OpenTelemetry.
The Datadog OTLP metrics intake endpoint accepts only delta metrics. If you attempt to send cumulative metrics (the default in most SDKs), you receive an error. Make sure to configure your OpenTelemetry SDK or Collector to produce delta metrics, for example by setting the OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE environment variable to delta.
If you are using OpenTelemetry automatic instrumentation, set the following environment variables:
export OTEL_EXPORTER_OTLP_METRICS_PROTOCOL="http/protobuf"
export OTEL_EXPORTER_OTLP_METRICS_ENDPOINT="https://api.datadoghq.com/api/intake/otlp/v1/metrics" # Use the intake endpoint for your Datadog site
export OTEL_EXPORTER_OTLP_METRICS_HEADERS="dd-api-key=${DD_API_KEY},dd-otlp-source=${YOUR_SITE}"
export OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE="delta"
If you are using manual instrumentation with OpenTelemetry SDKs, configure the OTLP HTTP Protobuf exporter programmatically.
The JavaScript exporter is @opentelemetry/exporter-metrics-otlp-proto. To configure the exporter, use the following code snippet:
const { OTLPMetricExporter } = require('@opentelemetry/exporter-metrics-otlp-proto');
// AggregationTemporalityPreference is exported by @opentelemetry/exporter-metrics-otlp-http.
const { AggregationTemporalityPreference } = require('@opentelemetry/exporter-metrics-otlp-http');

const exporter = new OTLPMetricExporter({
    url: 'https://api.datadoghq.com/api/intake/otlp/v1/metrics',
    temporalityPreference: AggregationTemporalityPreference.DELTA, // Ensure delta temporality
    headers: {
        'dd-api-key': process.env.DD_API_KEY,
        'dd-otel-metric-config': '{resource_attributes_as_tags: true}',
        'dd-otlp-source': '${YOUR_SITE}', // Replace this with the correct site
    },
});
The Java exporter is OtlpHttpMetricExporter. To configure the exporter, use the following code snippet:
import io.opentelemetry.exporter.otlp.http.metrics.OtlpHttpMetricExporter;
import io.opentelemetry.sdk.metrics.export.AggregationTemporalitySelector;

OtlpHttpMetricExporter exporter = OtlpHttpMetricExporter.builder()
        .setEndpoint("https://api.datadoghq.com/api/intake/otlp/v1/metrics")
        .setAggregationTemporalitySelector(
                AggregationTemporalitySelector.deltaPreferred()) // Ensure delta temporality
        .addHeader("dd-api-key", System.getenv("DD_API_KEY"))
        .addHeader("dd-otel-metric-config", "{resource_attributes_as_tags: true}")
        .addHeader("dd-otlp-source", "${YOUR_SITE}") // Replace this with the correct site
        .build();
The Go exporter is otlpmetrichttp. To configure the exporter, use the following code snippet:
import "go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp"
metricExporter, err := otlpmetrichttp.New(
ctx,
otlpmetrichttp.WithEndpoint("api.datadoghq.com"),
otlpmetrichttp.WithURLPath("/api/intake/otlp/v1/metrics"),
otlpmetrichttp.WithTemporalitySelector(deltaSelector), // Ensure delta temporality
otlpmetrichttp.WithHeaders(
map[string]string{
"dd-api-key": os.Getenv("DD_API_KEY"),
"dd-otel-metric-config": "{resource_attributes_as_tags: true}",
"dd-otlp-source": "${YOUR_SITE}", // Replace this with the correct site
}),
)
The Python exporter is OTLPMetricExporter. To configure the exporter, use the following code snippet:
import os
from opentelemetry.exporter.otlp.proto.http.metric_exporter import OTLPMetricExporter
from opentelemetry.sdk.metrics import Counter, Histogram, ObservableCounter
from opentelemetry.sdk.metrics.export import AggregationTemporality

# Prefer delta temporality for these instrument types.
deltaTemporality = {
    Counter: AggregationTemporality.DELTA,
    Histogram: AggregationTemporality.DELTA,
    ObservableCounter: AggregationTemporality.DELTA,
}
exporter = OTLPMetricExporter(
    endpoint="https://api.datadoghq.com/api/intake/otlp/v1/metrics",
    preferred_temporality=deltaTemporality,  # Ensure delta temporality
    headers={
        "dd-api-key": os.environ.get("DD_API_KEY"),
        "dd-otel-metric-config": "{resource_attributes_as_tags: true}",
        "dd-otlp-source": "${YOUR_SITE}"  # Replace this with the correct site
    },
)
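To see metrics flow end to end, the exporter can be registered with the SDK through a periodic reader. The following is a minimal sketch assuming the exporter configured above; the meter, instrument, and attribute names are illustrative placeholders, not part of the intake API.
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
from opentelemetry.sdk.resources import Resource

# Export on a periodic schedule using the exporter defined above.
reader = PeriodicExportingMetricReader(exporter)
provider = MeterProvider(
    metric_readers=[reader],
    resource=Resource.create({"service.name": "example-service"}),  # illustrative
)
metrics.set_meter_provider(provider)

# Record a test datapoint; it is exported to the intake endpoint as a delta metric.
meter = metrics.get_meter("example-meter")
request_counter = meter.create_counter("example.requests")
request_counter.add(1, {"endpoint": "/checkout"})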
Use the dd-otel-metric-config header to configure how metrics are translated and sent to Datadog. The JSON header contains the following fields:
- resource_attributes_as_tags (default: false): If true, transforms all resource attributes into metric labels, which are then converted into tags.
- instrumentation_scope_metadata_as_tags (default: false): If true, adds the name and version of the instrumentation scope that created a metric to the metric tags.
- histograms.mode: Controls how histograms are sent to Datadog.
  - distributions: sends histograms as Datadog distributions (recommended).
  - counters: sends histograms as Datadog counts, one metric per bucket.
  - nobuckets: sends no bucket histogram metrics.
- histograms.send_aggregation_metrics: If true, writes additional .sum, .count, .min, and .max metrics for histograms.
- summaries.mode: Controls how summaries are sent to Datadog.
  - noquantiles: sends no .quantile metrics. .sum and .count metrics are still sent.
  - gauges: sends .quantile metrics as gauges tagged by the quantile.
For example:
{
  "resource_attributes_as_tags": true,
  "instrumentation_scope_metadata_as_tags": true,
  "histograms": {
    "mode": "distributions",
    "send_aggregation_metrics": true
  },
  "summaries": {
    "mode": "gauges"
  }
}
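Because the header value is a JSON object, it can also be built programmatically rather than hand-written. The following is a minimal Python sketch, assuming the OTLPMetricExporter from the manual-instrumentation example above; it only shows the header wiring, and the configuration values mirror the example above.
import json
import os

from opentelemetry.exporter.otlp.proto.http.metric_exporter import OTLPMetricExporter

# Serialize the translator configuration shown above into the header value.
metric_config = json.dumps({
    "resource_attributes_as_tags": True,
    "instrumentation_scope_metadata_as_tags": True,
    "histograms": {"mode": "distributions", "send_aggregation_metrics": True},
    "summaries": {"mode": "gauges"},
})

exporter = OTLPMetricExporter(
    endpoint="https://api.datadoghq.com/api/intake/otlp/v1/metrics",
    # preferred_temporality from the earlier example should also be set here.
    headers={
        "dd-api-key": os.environ["DD_API_KEY"],
        "dd-otel-metric-config": metric_config,
        "dd-otlp-source": "${YOUR_SITE}",  # Replace this with the correct site
    },
)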
If you are using an OpenTelemetry Collector distribution that doesn’t support the Datadog Exporter, you can configure the otlphttpexporter to export metrics to the Datadog OTLP metrics intake endpoint.
For example, your config.yaml file would look like this:
...
exporters:
  otlphttp:
    metrics_endpoint: https://api.datadoghq.com/api/intake/otlp/v1/metrics # Use the intake endpoint for your Datadog site
    headers:
      dd-api-key: ${env:DD_API_KEY}
      dd-otel-metric-config: "{resource_attributes_as_tags: true}"
      dd-otlp-source: "${YOUR_SITE}" # Replace this with the correct site
...
service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch, cumulativetodelta]
      exporters: [otlphttp]
This configuration includes the cumulativetodelta processor in the pipeline, which converts cumulative metrics to delta metrics. Delta metrics are required for the OTLP metrics intake endpoint. For more information, see Configure delta temporality in OpenTelemetry.
If you receive a 403 Forbidden error when sending metrics to the Datadog OTLP metrics intake endpoint, it indicates one of the following issues:
The API key belongs to an organization that is not allowed to access the Datadog OTLP metrics intake endpoint. Solution: Verify that you are using an API key from an organization that is allowed to access the Datadog OTLP metrics intake endpoint.
The dd-otlp-source header is missing or has an incorrect value.
Solution: Ensure that the dd-otlp-source header is set with the proper value for your site. You should have received an allowlisted value for this header from Datadog if you are a platform partner.
The endpoint URL is incorrect for your organization.
Solution: Use the endpoint URL that matches your Datadog site. For example, if your site is datadoghq.com, use the https://api.datadoghq.com/api/intake/otlp/v1/metrics endpoint.
If you receive a 413 Request Entity Too Large error when sending metrics to the Datadog OTLP metrics intake endpoint, it indicates that the payload size sent by the OTLP exporter exceeds the Datadog metrics intake endpoint’s limit of 500KB for uncompressed payloads, or 5MB for compressed payloads after decompression.
This error usually occurs when the OpenTelemetry SDK batches too much telemetry data in a single request payload.
Solution: Reduce the export batch size of the SDK’s batch processor. For example, in the OpenTelemetry Java SDK, you can adjust BatchMetricExportProcessor.
If you notice missing datapoints or lower than expected metric values, it may be because you are sending multiple datapoints for a metric that have the same timestamp (in seconds) and same dimensions. In such cases, Datadog only accepts the last datapoint, and previous datapoints are dropped (last-write-wins). Datadog requires the timeseries data of a metric to be unique in the context of {timestamp + dimensions}.
Solution: Ensure that your datapoints of a given metric at one timestamp are uniquely tagged. For example, if you send multiple datapoints for a metric simultaneously from multiple AWS Lambda invocations, make sure to include unique identifiers (such as the Lambda ARN) as resource attributes in your metrics. Use the resource_attributes_as_tags option to add these resource attributes as metric tags.
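As a concrete illustration, the following Python sketch attaches a unique identifier as a resource attribute so that datapoints from concurrent producers do not collide on {timestamp + dimensions}. It assumes the exporter configured earlier with resource_attributes_as_tags enabled; the service name and ARN values are illustrative placeholders.
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
from opentelemetry.sdk.resources import Resource

# A unique identifier per producer (for example, the invoked Lambda function ARN)
# keeps timeseries distinct once resource attributes are added as tags.
resource = Resource.create({
    "service.name": "checkout-lambda",  # illustrative
    "faas.id": "arn:aws:lambda:us-east-1:123456789012:function:checkout",  # illustrative
})

provider = MeterProvider(
    metric_readers=[PeriodicExportingMetricReader(exporter)],
    resource=resource,
)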