---
title: Health Metrics
description: Datadog, the leading service for cloud-scale monitoring.
---

# Health Metrics

## Overview{% #overview %}

{% image
   source="https://docs.dd-static.net/images/opentelemetry/collector_exporter/collector_health_metrics.ffbf001b09f35ad7a4dfd6af804a1cfa.png?auto=format&fit=max&w=850 1x, https://docs.dd-static.net/images/opentelemetry/collector_exporter/collector_health_metrics.ffbf001b09f35ad7a4dfd6af804a1cfa.png?auto=format&fit=max&w=850&dpr=2 2x"
   alt="OpenTelemetry Collector health metrics dashboard" /%}

The OpenTelemetry Collector exposes internal telemetry as metrics. You can collect these metrics and send them to Datadog to monitor Collector health and pipeline throughput.

You can send the Collector's health metrics to Datadog in either of two ways:

- **Prometheus**: Scrape the Collector's internal Prometheus endpoint with the [Prometheus receiver](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/prometheusreceiver) and forward the metrics through a metrics pipeline to the Datadog Exporter.
- **OTLP**: Configure the Collector's internal telemetry to export metrics directly to the [Datadog OTLP metrics intake endpoint](https://docs.datadoghq.com/opentelemetry/setup/otlp_ingest/metrics.md) over OTLP HTTP.

## Setup{% #setup %}

### Configure the pipeline{% #configure-the-pipeline %}

The following tabs cover two approaches: a Prometheus-style scrape and a direct OTLP push. After configuring the pipeline, see the Configuration reference for all available options.

{% tab title="Prometheus" %}
Configure the Collector to expose its internal metrics on a Prometheus pull endpoint. Scrape that endpoint with the [Prometheus receiver](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/prometheusreceiver) and route the data through a metrics pipeline to the [Datadog Exporter](https://docs.datadoghq.com/opentelemetry/setup/collector_exporter.md).

```yaml
receivers:
  prometheus/internal:
    config:
      scrape_configs:
        - job_name: 'otelcol'
          scrape_interval: 10s
          static_configs:
            - targets: ['0.0.0.0:8888']

exporters:
  datadog:
    api:
      key: ${env:DD_API_KEY}
      site: <DATADOG_SITE>
    metrics:
      resource_attributes_as_tags: true
    sending_queue:
      batch:
        flush_timeout: 10s

service:
  telemetry:
    metrics:
      level: normal
      readers:
        - pull:
            exporter:
              prometheus:
                host: 0.0.0.0
                port: 8888
                without_type_suffix: true
                without_units: true
                without_scope_info: true
                # with_resource_constant_labels:
                #   included: ["service.name", "service.instance.id"]
                #   excluded: []
      # views:
      #   - selector:
      #       instrument_name: otelcol_processor_*
      #     stream:
      #       aggregation:
      #         drop: {}
  pipelines:
    metrics:
      receivers: [prometheus/internal]
      exporters: [datadog]
```

Replace `<DATADOG_SITE>` with your [Datadog site](https://docs.datadoghq.com/getting_started/site.md).

The `service.telemetry.metrics` block exposes the Collector's internal metrics on `0.0.0.0:8888`. The `prometheus/internal` receiver scrapes that same endpoint, and the metrics pipeline forwards the scraped metrics to Datadog.

For all available options, see the `pull.exporter.prometheus` options in the Configuration reference below.

#### Enrich with processors (optional){% #enrich-with-processors-optional %}

Add the [resource detection processor](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/resourcedetectionprocessor) to the metrics pipeline to automatically populate cloud and host resource attributes (for example, `host.id`, `cloud.provider`, `cloud.region`). You can also add other processors such as [`transform`](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/transformprocessor) or [`k8sattributes`](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/k8sattributesprocessor) to enrich or transform Collector health metrics before export.

```yaml
processors:
  resourcedetection:
    detectors: [env, system, ec2]
    override: false

service:
  pipelines:
    metrics:
      receivers: [prometheus/internal]
      processors: [resourcedetection]
      exporters: [datadog]
```

{% alert level="warning" %}
If you have a Datadog Agent running on the same host as an OpenTelemetry Collector or DDOT Collector that uses a Prometheus receiver to scrape Collector health metrics, make sure the Agent's [OpenMetrics integration](https://docs.datadoghq.com/integrations/openmetrics.md) is either turned off or scraping a different endpoint than the Collector health metrics endpoint. Otherwise, both the Agent and Collector scrape the same endpoint, resulting in duplicate Collector health metrics.
{% /alert %}

{% /tab %}

{% tab title="OTLP" %}
Configure the Collector's internal telemetry to push metrics directly to the [Datadog OTLP metrics intake endpoint](https://docs.datadoghq.com/opentelemetry/setup/otlp_ingest/metrics.md) using a periodic OTLP HTTP reader. This approach does not require a Prometheus receiver or a metrics pipeline for Collector health metrics.

```yaml
service:
  telemetry:
    metrics:
      level: normal
      readers:
        - periodic:
            interval: 10000
            timeout: 5000
            exporter:
              otlp:
                protocol: http/protobuf
                endpoint: <OTLP_METRICS_ENDPOINT>
                temporality_preference: delta
                compression: gzip
                timeout: 5000
                headers:
                  - name: dd-api-key
                    value: ${env:DD_API_KEY}
                  - name: dd-otel-metric-config
                    value: '{"resource_attributes_as_tags": true, "instrumentation_scope_metadata_as_tags": true}'
                # default_histogram_aggregation: explicit_bucket_histogram
                # headers_list: "dd-api-key=${env:DD_API_KEY}"
                # insecure: false
                # certificate: /path/to/ca.pem
                # client_certificate: /path/to/client.pem
                # client_key: /path/to/client.key
      # views:
      #   - selector:
      #       instrument_name: otelcol_processor_*
      #     stream:
      #       aggregation:
      #         drop: {}
```

Replace `<OTLP_METRICS_ENDPOINT>` with the [Datadog OTLP metrics intake endpoint](https://docs.datadoghq.com/opentelemetry/setup/otlp_ingest/metrics.md) for your [Datadog site](https://docs.datadoghq.com/getting_started/site.md).

The Datadog OTLP metrics intake endpoint accepts only delta metrics, so `temporality_preference: delta` is required. The `dd-api-key` header authenticates the request, and the `dd-otel-metric-config` header customizes how metrics are translated to Datadog. For all the YAML fields available under the `periodic` reader, see the `periodic.exporter.otlp` options in the Configuration reference below; for the full list of metric-translation options and troubleshooting, see [Datadog OTLP Metrics Intake Endpoint](https://docs.datadoghq.com/opentelemetry/setup/otlp_ingest/metrics.md).

{% alert level="warning" %}
This setup pushes metrics directly to the OTLP intake endpoint, bypassing any enrichment that pipeline processors (such as `resourcedetection` or `k8sattributes`) would otherwise apply. To populate Datadog tags and host metadata (which are needed for hostname resolution and the default Collector dashboard), set the relevant attributes explicitly under `service.telemetry.resource`. If you need automatic hostname and cloud-attribute detection, use the Prometheus tab instead.
{% /alert %}

{% /tab %}

### Tag with resource attributes (optional){% #tag-with-resource-attributes-optional %}

Use `service.telemetry.resource` to attach resource attributes (such as `k8s.cluster.name`, `service.instance.id`, or any [Datadog-mapped semantic convention](https://docs.datadoghq.com/opentelemetry/mapping/semantic_mapping.md)) to all telemetry the Collector emits about itself.

Use the legacy inline map format for a concise definition:

```yaml
service:
  telemetry:
    resource:
      k8s.cluster.name: my-cluster
      k8s.pod.name: ${env:HOSTNAME}
      service.instance.id: ${env:HOSTNAME}
      deployment.environment: prod
```

Alternatively, use the declarative `attributes` format, which supports explicit typing and a `schema_url`. This format requires Collector v0.151.0 or later:

```yaml
service:
  telemetry:
    resource:
      schema_url: https://opentelemetry.io/schemas/1.27.0
      attributes:
        - name: k8s.cluster.name
          value: my-cluster
        - name: k8s.pod.name
          value: ${env:HOSTNAME}
```

To suppress a default attribute such as `service.version`, specify it with a null value in the legacy inline format.
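For example, a minimal sketch that suppresses the default `service.version` attribute:

```yaml
service:
  telemetry:
    resource:
      # A null value removes the default service.version attribute
      service.version: null
```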

Datadog maps these attributes to tags and host metadata. For the full list of supported mappings, see [OpenTelemetry Semantic Conventions and Datadog Conventions](https://docs.datadoghq.com/opentelemetry/mapping/semantic_mapping.md) and [Mapping OpenTelemetry Semantic Conventions to Hostnames](https://docs.datadoghq.com/opentelemetry/mapping/hostname.md).

## Configuration reference{% #configuration-reference %}

### `service.telemetry.metrics` fields{% #servicetelemetrymetrics-fields %}

The following top-level fields apply to both Prometheus and OTLP setups:

{% dl %}

{% dt %}
`level`
{% /dt %}

{% dd %}
Verbosity of the Collector's internal metrics. One of `none`, `basic`, `normal` (default), or `detailed`. Set `level: detailed` to enable `views`.
{% /dd %}

{% dt %}
`readers`
{% /dt %}

{% dd %}
List of metric readers. At least one is required when `level` is not `none`. Each reader is either a `pull` reader (Prometheus) or a `periodic` reader (OTLP or console).
{% /dd %}

{% dt %}
`views`
{% /dt %}

{% dd %}
Optional list of [SDK views](https://opentelemetry.io/docs/specs/otel/metrics/sdk/#view) that drop, rename, filter, or re-aggregate specific instruments. Only available when `level: detailed`.
{% /dd %}

{% /dl %}
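As a sketch of the `views` field, the following uncomments the view shown in the setup examples: with `level: detailed`, it drops every internal instrument whose name matches `otelcol_processor_*`.

```yaml
service:
  telemetry:
    metrics:
      level: detailed   # views are only honored at this level
      views:
        - selector:
            instrument_name: otelcol_processor_*
          stream:
            aggregation:
              drop: {}
```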

### `pull.exporter.prometheus` options{% #pullexporterprometheus-options %}

{% dl %}

{% dt %}
`host` / `port`
{% /dt %}

{% dd %}
Address to expose the Prometheus endpoint on. Defaults to `localhost:8888`. Use `0.0.0.0` to expose outside the loopback interface.
{% /dd %}

{% dt %}
`without_type_suffix`
{% /dt %}

{% dd %}
When `true` (the Collector default), drops the type suffix (for example, `_total` for counters) from metric names. Names appear as `otelcol_exporter_sent_metric_points` instead of `otelcol_exporter_sent_metric_points_total`.
{% /dd %}

{% dt %}
`without_units`
{% /dt %}

{% dd %}
When `true`, drops the unit suffix (for example, `_seconds`, `_bytes`) from metric names.
{% /dd %}

{% dt %}
`without_scope_info`
{% /dt %}

{% dd %}
When `true`, suppresses the `otel_scope_info` metric and `otel_scope_*` labels.
{% /dd %}

{% dt %}
`with_resource_constant_labels.included` / `with_resource_constant_labels.excluded`
{% /dt %}

{% dd %}
Allowlist and denylist of resource attributes to copy onto every exported metric as constant labels.
{% /dd %}

{% /dl %}
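For example, a pull reader sketch that copies `service.name` and `service.instance.id` onto every exported metric as constant labels, mirroring the commented options in the setup example:

```yaml
service:
  telemetry:
    metrics:
      readers:
        - pull:
            exporter:
              prometheus:
                host: 0.0.0.0
                port: 8888
                with_resource_constant_labels:
                  # Only these resource attributes become constant labels
                  included: ["service.name", "service.instance.id"]
                  excluded: []
```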

### `periodic.exporter.otlp` options{% #periodicexporterotlp-options %}

The `periodic` reader (which contains the OTLP exporter) accepts these top-level options:

{% dl %}

{% dt %}
`interval`
{% /dt %}

{% dd %}
Time in milliseconds between exports. Defaults to `60000` (60 seconds).
{% /dd %}

{% dt %}
`timeout`
{% /dt %}

{% dd %}
Maximum time in milliseconds to wait for an export to complete. Defaults to `30000` (30 seconds).
{% /dd %}

{% /dl %}

The `otlp` block inside `periodic.exporter` accepts these options:

{% dl %}

{% dt %}
`protocol`
{% /dt %}

{% dd %}
One of `grpc`, `http/protobuf`, or `http/json`. Use `http/protobuf` for the Datadog OTLP metrics intake endpoint.
{% /dd %}

{% dt %}
`endpoint`
{% /dt %}

{% dd %}
URL of the OTLP receiver. For Datadog, use the [OTLP metrics intake endpoint](https://docs.datadoghq.com/opentelemetry/setup/otlp_ingest/metrics.md) for your Datadog site.
{% /dd %}

{% dt %}
`headers` / `headers_list`
{% /dt %}

{% dd %}
Headers to add to every export request. Use the structured `headers` list (each entry is a `{name, value}` pair) or `headers_list` as a URL-encoded string (for example, `dd-api-key=${env:DD_API_KEY}`).
{% /dd %}

{% dt %}
`compression`
{% /dt %}

{% dd %}
Compression algorithm. One of `gzip` or `none`.
{% /dd %}

{% dt %}
`timeout`
{% /dt %}

{% dd %}
Per-request timeout in milliseconds. Distinct from the reader-level `timeout`.
{% /dd %}

{% dt %}
`temporality_preference`
{% /dt %}

{% dd %}
One of `cumulative`, `delta`, or `lowmemory`. The Datadog OTLP metrics intake endpoint requires `delta`.
{% /dd %}

{% dt %}
`default_histogram_aggregation`
{% /dt %}

{% dd %}
One of `explicit_bucket_histogram` (default) or `base2_exponential_bucket_histogram`.
{% /dd %}

{% dt %}
`insecure`
{% /dt %}

{% dd %}
When `true`, disables TLS. Defaults to `false`.
{% /dd %}

{% dt %}
`certificate`, `client_certificate`, `client_key`
{% /dt %}

{% dd %}
Paths to PEM files for custom CA verification and mutual TLS (mTLS) client authentication.
{% /dd %}

{% /dl %}
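The two header formats are interchangeable. This fragment shows the same `dd-api-key` header expressed both ways; use one form or the other, not both:

```yaml
exporter:
  otlp:
    # Structured list of {name, value} pairs
    headers:
      - name: dd-api-key
        value: ${env:DD_API_KEY}
    # Equivalent URL-encoded string form:
    # headers_list: "dd-api-key=${env:DD_API_KEY}"
```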

## Data collected{% #data-collected %}

| OpenTelemetry Metric                              | Description                                                                                                                        |
| ------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------- |
| `otelcol_process_uptime`                          | Uptime of the process                                                                                                              |
| `otelcol_process_memory_rss`                      | Total physical memory (resident set size)                                                                                          |
| `otelcol_exporter_queue_size`                     | Current size of the retry queue (in batches)                                                                                       |
| `otelcol_exporter_sent_spans`                     | Number of spans successfully sent to destination                                                                                   |
| `otelcol_exporter_send_failed_metric_points`      | Number of metric points in failed attempts to send to destination                                                                  |
| `otelcol_exporter_send_failed_spans`              | Number of spans in failed attempts to send to destination                                                                          |
| `otelcol_process_cpu_seconds`                     | Total CPU user and system time in seconds                                                                                          |
| `otelcol_receiver_refused_spans`                  | Number of spans that could not be pushed into the pipeline                                                                         |
| `otelcol_exporter_queue_capacity`                 | Fixed capacity of the retry queue (in batches)                                                                                     |
| `otelcol_receiver_accepted_spans`                 | Number of spans successfully pushed into the pipeline                                                                              |
| `otelcol_exporter_sent_metric_points`             | Number of metric points successfully sent to destination                                                                           |
| `otelcol_exporter_enqueue_failed_spans`           | Number of spans that could not be added to the sending queue                                                                       |
| `otelcol_scraper_errored_metric_points`           | Number of metric points that could not be scraped                                                                                  |
| `otelcol_scraper_scraped_metric_points`           | Number of metric points successfully scraped                                                                                       |
| `otelcol_receiver_refused_metric_points`          | Number of metric points that could not be pushed into the pipeline                                                                 |
| `otelcol_receiver_accepted_metric_points`         | Number of metric points successfully pushed into the pipeline                                                                      |
| `otelcol_process_runtime_heap_alloc_bytes`        | Bytes of allocated heap objects (see 'go doc runtime.MemStats.HeapAlloc')                                                          |
| `otelcol_process_runtime_total_alloc_bytes`       | Cumulative bytes allocated for heap objects (see 'go doc runtime.MemStats.TotalAlloc')                                             |
| `otelcol_exporter_enqueue_failed_log_records`     | Number of log records that could not be added to the sending queue                                                                 |
| `otelcol_processor_batch_timeout_trigger_send`    | Number of times the batch was sent due to a timeout trigger                                                                        |
| `otelcol_exporter_enqueue_failed_metric_points`   | Number of metric points that could not be added to the sending queue                                                               |
| `otelcol_process_runtime_total_sys_memory_bytes`  | Total bytes of memory obtained from the OS (see [the Go docs for `runtime.MemStats.Sys`](https://pkg.go.dev/runtime#MemStats.Sys)) |
| `otelcol_processor_batch_batch_size_trigger_send` | Number of times the batch was sent due to a size trigger                                                                           |
| `otelcol_exporter_sent_log_records`               | Number of log records successfully sent to destination                                                                             |
| `otelcol_receiver_refused_log_records`            | Number of log records that could not be pushed into the pipeline                                                                   |
| `otelcol_receiver_accepted_log_records`           | Number of log records successfully pushed into the pipeline                                                                        |

## Example logging output{% #example-logging-output %}

```
ResourceMetrics #0
Resource SchemaURL: https://opentelemetry.io/schemas/1.6.1
Resource attributes:
     -> service.name: Str(opentelemetry-collector)
     -> net.host.name: Str(192.168.55.78)
     -> service.instance.id: Str(192.168.55.78:8888)
     -> net.host.port: Str(8888)
     -> http.scheme: Str(http)
     -> k8s.pod.ip: Str(192.168.55.78)
     -> cloud.provider: Str(aws)
     -> cloud.platform: Str(aws_ec2)
     -> cloud.region: Str(us-east-1)
     -> cloud.account.id: Str(XXXXXXXXX)
     -> cloud.availability_zone: Str(us-east-1c)
     -> host.id: Str(i-0368add8e328c28f7)
     -> host.image.id: Str(ami-08a2e6a8e82737230)
     -> host.type: Str(m5.large)
     -> host.name: Str(ip-192-168-53-115.ec2.internal)
     -> os.type: Str(linux)
     -> k8s.pod.name: Str(opentelemetry-collector-agent-gqwm8)
     -> k8s.daemonset.name: Str(opentelemetry-collector-agent)
     -> k8s.daemonset.uid: Str(6d6fef61-d4c7-4226-9b7b-7d6b893cb31d)
     -> k8s.node.name: Str(ip-192-168-53-115.ec2.internal)
     -> kube_app_name: Str(opentelemetry-collector)
     -> kube_app_instance: Str(opentelemetry-collector)
     -> k8s.namespace.name: Str(otel-staging)
     -> k8s.pod.start_time: Str(2023-11-20T12:53:23Z)
     -> k8s.pod.uid: Str(988d1bdc-5baf-4e98-942f-ab026a371daf)
ScopeMetrics #0
ScopeMetrics SchemaURL: 
InstrumentationScope otelcol/prometheusreceiver 0.88.0-dev
Metric #0
Descriptor:
     -> Name: otelcol_otelsvc_k8s_namespace_added
     -> Description: Number of namespace add events received
     -> Unit: 
     -> DataType: Sum
     -> IsMonotonic: true
     -> AggregationTemporality: Cumulative
NumberDataPoints #0
Data point attributes:
     -> service_instance_id: Str(d80d11f9-aa84-4e16-818d-3e7d868c0cfe)
     -> service_name: Str(otelcontribcol)
     -> service_version: Str(0.88.0-dev)
StartTimestamp: 1970-01-01 00:00:00 +0000 UTC
Timestamp: 2023-11-20 13:17:36.881 +0000 UTC
Value: 194151496.000000
Metric #9
Descriptor:
     -> Name: otelcol_receiver_accepted_spans
     -> Description: Number of spans successfully pushed into the pipeline.
     -> Unit: 
     -> DataType: Sum
     -> IsMonotonic: true
     -> AggregationTemporality: Cumulative
```
