Overview

Datadog products and visualizations are built on metrics and tags that adhere to specific naming patterns. Metrics from OpenTelemetry components that are sent to Datadog are mapped to corresponding Datadog metrics, as applicable. The creation of these additional metrics does not affect Datadog billing.

The following diagram shows the process of mapping the metrics from OpenTelemetry into metrics that Datadog uses:

The decision process for mapping OpenTelemetry metric names to Datadog metric names. If an OTel metric is not used by any Datadog product, or if its semantics are the same as Datadog's, it is sent as-is to Datadog. Otherwise, a Datadog-style metric is created from the OTel metric and sent to Datadog.
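
The following is a minimal sketch of that decision flow, not the Datadog Exporter's actual implementation; the rename table is a hypothetical stand-in for the per-metric mapping rules described in the sections below.

```python
# Illustrative only: how an OTel metric name might be chosen for export,
# following the decision process in the diagram above.
RENAMES = {"system.cpu.load_average.1m": "system.load.1"}  # example mapping

def metric_name_to_send(otel_name: str, used_by_datadog_product: bool,
                        same_semantics_as_datadog: bool) -> str:
    # Not used by any Datadog product, or semantics already match: send as-is.
    if not used_by_datadog_product or same_semantics_as_datadog:
        return otel_name
    # Otherwise a Datadog-style metric is created from the OTel metric.
    return RENAMES.get(otel_name, otel_name)
```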

Use of the otel prefix

To differentiate metrics captured by the host metrics receiver from those collected by the Datadog Agent, an otel prefix is added to metrics collected by the Collector: if a metric name starts with system. or process., otel. is prepended to the metric name. Monitoring the same infrastructure artifact with both the Agent and the Collector is not recommended.
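
A minimal sketch of this renaming rule, for illustration only (the real logic lives in the Datadog Exporter):

```python
def add_otel_prefix(metric_name: str) -> str:
    # Prepend "otel." only to host metrics in the system.* or process.* namespaces.
    if metric_name.startswith(("system.", "process.")):
        return "otel." + metric_name
    return metric_name

# add_otel_prefix("system.cpu.utilization")  -> "otel.system.cpu.utilization"
# add_otel_prefix("http.server.duration")    -> "http.server.duration" (unchanged)
```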

Datadog is evaluating ways to improve the OTLP metric experience, including potentially deprecating this otel prefix. If you have feedback on this, reach out to your account team to provide your input.

Host metrics

Host metrics are collected by the host metrics receiver. For information about setting up the receiver, see OpenTelemetry Collector Datadog Exporter.

These metrics, mapped to their Datadog counterparts, are used in the following views:

Note: To correlate trace and host metrics, configure Universal Service Monitoring attributes for each service, and set the host.name resource attribute to the corresponding underlying host for both service and collector instances.
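
One way to set these resource attributes from application code is with the OpenTelemetry Python SDK, as in the sketch below; the service and host names are example values, and the same attributes can also be supplied through the OTEL_RESOURCE_ATTRIBUTES environment variable (for example, OTEL_RESOURCE_ATTRIBUTES="host.name=my-host-01").

```python
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.metrics import MeterProvider

# host.name must match the host reported by the Collector instance
# running on the same machine for trace/host metric correlation to work.
resource = Resource.create({
    "service.name": "checkout-service",  # example service name
    "host.name": "my-host-01",           # example host name
})

tracer_provider = TracerProvider(resource=resource)
meter_provider = MeterProvider(resource=resource)
```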

The following table shows which Datadog host metric names are associated with corresponding OpenTelemetry host metric names, and, if applicable, what math is applied to the OTel host metric to transform it to Datadog units during the mapping.

| Datadog metric name | OTel metric name | Metric description | Transform done on OTel metric |
|---|---|---|---|
| system.load.1 | system.cpu.load_average.1m | The average system load over one minute. (Linux only) | |
| system.load.5 | system.cpu.load_average.5m | The average system load over five minutes. (Linux only) | |
| system.load.15 | system.cpu.load_average.15m | The average system load over 15 minutes. (Linux only) | |
| system.cpu.idle | system.cpu.utilization (attribute filter state: idle) | Fraction of time the CPU spent in an idle state. Shown as percent. | Multiplied by 100 |
| system.cpu.user | system.cpu.utilization (attribute filter state: user) | Fraction of time the CPU spent running user space processes. Shown as percent. | Multiplied by 100 |
| system.cpu.system | system.cpu.utilization (attribute filter state: system) | Fraction of time the CPU spent running the kernel. | Multiplied by 100 |
| system.cpu.iowait | system.cpu.utilization (attribute filter state: wait) | The percent of time the CPU spent waiting for IO operations to complete. | Multiplied by 100 |
| system.cpu.stolen | system.cpu.utilization (attribute filter state: steal) | The percent of time the virtual CPU spent waiting for the hypervisor to service another virtual CPU. Only applies to virtual machines. Shown as percent. | Multiplied by 100 |
| system.mem.total | system.memory.usage | The total amount of physical RAM in bytes. | Converted to MB (divided by 2^20) |
| system.mem.usable | system.memory.usage (attribute filter state: free, cached, buffered) | Value of MemAvailable from /proc/meminfo if present. If not present, falls back to adding free + buffered + cached memory. In bytes. | Converted to MB (divided by 2^20) |
| system.net.bytes_rcvd | system.network.io (attribute filter direction: receive) | The number of bytes received on a device per second. | |
| system.net.bytes_sent | system.network.io (attribute filter direction: transmit) | The number of bytes sent from a device per second. | |
| system.swap.free | system.paging.usage (attribute filter state: free) | The amount of free swap space, in bytes. | Converted to MB (divided by 2^20) |
| system.swap.used | system.paging.usage (attribute filter state: used) | The amount of swap space in use, in bytes. | Converted to MB (divided by 2^20) |
| system.disk.in_use | system.filesystem.utilization | The amount of disk space in use as a fraction of the total. | |
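
The following is a small numeric illustration of the transforms in the table above; the input values are examples, not real measurements.

```python
# Fraction-to-percent transform (system.cpu.* metrics): multiplied by 100.
cpu_utilization_idle = 0.85                         # OTel system.cpu.utilization, state: idle
system_cpu_idle = cpu_utilization_idle * 100        # Datadog system.cpu.idle -> 85.0 percent

# Bytes-to-MB transform (memory and swap metrics): divided by 2^20.
memory_usage_bytes = 8_589_934_592                  # OTel system.memory.usage, bytes
system_mem_total_mb = memory_usage_bytes / 2**20    # Datadog system.mem.total -> 8192.0 MB

# system.mem.usable sums the free, cached, and buffered states before the
# same bytes-to-MB conversion (example byte counts below).
usable_bytes = {"free": 1_073_741_824, "cached": 2_147_483_648, "buffered": 536_870_912}
system_mem_usable_mb = sum(usable_bytes.values()) / 2**20   # -> 3584.0 MB
```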

Container metrics

The Docker Stats receiver generates container metrics for the OpenTelemetry Collector. The Datadog Exporter translates container metrics to their Datadog counterparts for use in the following views:

Note: To correlate trace and container metrics, configure Universal Service Monitoring attributes for each service, and set the following resource attributes for each service:

  • k8s.container.name
  • k8s.pod.name
  • container.name
  • container.id

Learn more about mapping between OpenTelemetry and Datadog semantic conventions for resource attributes.
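
The sketch below shows one way to attach the resource attributes listed above using the OpenTelemetry Python SDK; the values are examples. In Kubernetes, these attributes are more commonly added automatically by the Collector's k8sattributes processor or supplied through the OTEL_RESOURCE_ATTRIBUTES environment variable.

```python
from opentelemetry.sdk.resources import Resource

# Example values only; replace with the identifiers of the running container.
resource = Resource.create({
    "k8s.container.name": "web",
    "k8s.pod.name": "web-5d4f8b6c9-abcde",
    "container.name": "web",
    "container.id": "0123456789abcdef",
})
```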

The following table shows which Datadog container metric names are associated with corresponding OpenTelemetry container metric names.

| Datadog metric name | OTel Docker Stats metric name | Metric description |
|---|---|---|
| container.cpu.usage | container.cpu.usage.total | The container total CPU usage |
| container.cpu.user | container.cpu.usage.usermode | The container userspace CPU usage |
| container.cpu.system | container.cpu.usage.system | The container system CPU usage |
| container.cpu.throttled | container.cpu.throttling_data.throttled_time | The total CPU throttled time |
| container.cpu.throttled.periods | container.cpu.throttling_data.throttled_periods | The number of periods during which the container was throttled |
| container.memory.usage | container.memory.usage.total | The container total memory usage |
| container.memory.kernel | container.memory.active_anon | The container kernel memory usage |
| container.memory.limit | container.memory.hierarchical_memory_limit | The container memory limit |
| container.memory.soft_limit | container.memory.usage.limit | The container memory soft limit |
| container.memory.cache | container.memory.total_cache | The container cache usage |
| container.memory.swap | container.memory.total_swap | The container swap usage |
| container.io.write | container.blockio.io_service_bytes_recursive (attribute filter operation: write) | The number of bytes written to disks by this container |
| container.io.read | container.blockio.io_service_bytes_recursive (attribute filter operation: read) | The number of bytes read from disks by this container |
| container.io.write.operations | container.blockio.io_serviced_recursive (attribute filter operation: write) | The number of write operations done by this container |
| container.io.read.operations | container.blockio.io_serviced_recursive (attribute filter operation: read) | The number of read operations done by this container |
| container.net.sent | container.network.io.usage.tx_bytes | The number of network bytes sent (per interface) |
| container.net.sent.packets | container.network.io.usage.tx_packets | The number of network packets sent (per interface) |
| container.net.rcvd | container.network.io.usage.rx_bytes | The number of network bytes received (per interface) |
| container.net.rcvd.packets | container.network.io.usage.rx_packets | The number of network packets received (per interface) |
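
The attribute-filter rows above split one OTel metric stream into separate Datadog metrics based on an attribute value. The following sketch illustrates the idea for the block I/O byte metrics; it is not the Datadog Exporter's actual code, and map_blockio_bytes is a hypothetical helper.

```python
BLOCKIO_BYTES_MAPPING = {
    "write": "container.io.write",
    "read": "container.io.read",
}

def map_blockio_bytes(datapoint_attributes: dict, value: float):
    """Map a container.blockio.io_service_bytes_recursive data point."""
    operation = datapoint_attributes.get("operation")
    if operation in BLOCKIO_BYTES_MAPPING:
        return BLOCKIO_BYTES_MAPPING[operation], value
    return None  # other operations are not mapped to a Datadog container metric

# map_blockio_bytes({"operation": "write"}, 4096.0) -> ("container.io.write", 4096.0)
# map_blockio_bytes({"operation": "read"}, 1024.0)  -> ("container.io.read", 1024.0)
```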

Kafka metrics

| OpenTelemetry metric | Datadog metric | Source | Transform done on Datadog metric |
|---|---|---|---|
| otel.kafka.producer.request-rate | kafka.producer.request_rate | JMX Receiver / JMX Metrics Gatherer {target_system:kafka-producer} | |
| otel.kafka.producer.response-rate | kafka.producer.response_rate | JMX Receiver / JMX Metrics Gatherer {target_system:kafka-producer} | |
| otel.kafka.producer.request-latency-avg | kafka.producer.request_latency_avg | JMX Receiver / JMX Metrics Gatherer {target_system:kafka-producer} | |
| kafka.producer.outgoing-byte-rate | kafka.producer.outgoing-byte-rate | JMX Receiver / JMX Metrics Gatherer {target_system:kafka-producer} | |
| kafka.producer.io-wait-time-ns-avg | kafka.producer.io_wait | JMX Receiver / JMX Metrics Gatherer {target_system:kafka-producer} | |
| kafka.producer.byte-rate | kafka.producer.bytes_out | JMX Receiver / JMX Metrics Gatherer {target_system:kafka-producer} | |
| kafka.consumer.total.bytes-consumed-rate | kafka.consumer.bytes_in | JMX Receiver / JMX Metrics Gatherer {target_system:kafka-consumer} | |
| kafka.consumer.total.records-consumed-rate | kafka.consumer.messages_in | JMX Receiver / JMX Metrics Gatherer {target_system:kafka-consumer} | |
| kafka.network.io{state:out} | kafka.net.bytes_out.rate | JMX Receiver / JMX Metrics Gatherer {target_system:kafka} | Computed as a rate per second and submitted as a gauge |
| kafka.network.io{state:in} | kafka.net.bytes_in.rate | JMX Receiver / JMX Metrics Gatherer {target_system:kafka} | Computed as a rate per second and submitted as a gauge |
| kafka.purgatory.size{type:produce} | kafka.request.producer_request_purgatory.size | JMX Receiver / JMX Metrics Gatherer {target_system:kafka} | |
| kafka.purgatory.size{type:fetch} | kafka.request.fetch_request_purgatory.size | JMX Receiver / JMX Metrics Gatherer {target_system:kafka} | |
| kafka.partition.under_replicated | kafka.replication.under_replicated_partitions | JMX Receiver / JMX Metrics Gatherer {target_system:kafka} | |
| kafka.isr.operation.count{operation:shrink} | kafka.replication.isr_shrinks.rate | JMX Receiver / JMX Metrics Gatherer {target_system:kafka} | Computed as a rate per second and submitted as a gauge |
| kafka.isr.operation.count{operation:expand} | kafka.replication.isr_expands.rate | JMX Receiver / JMX Metrics Gatherer {target_system:kafka} | Computed as a rate per second and submitted as a gauge |
| kafka.leader.election.rate | kafka.replication.leader_elections.rate | JMX Receiver / JMX Metrics Gatherer {target_system:kafka} | Computed as a rate per second and submitted as a gauge |
| kafka.partition.offline | kafka.replication.offline_partitions_count | JMX Receiver / JMX Metrics Gatherer {target_system:kafka} | |
| kafka.request.time.avg{type:produce} | kafka.request.produce.time.avg | JMX Receiver / JMX Metrics Gatherer {target_system:kafka} | |
| kafka.request.time.avg{type:fetchconsumer} | kafka.request.fetch_consumer.time.avg | JMX Receiver / JMX Metrics Gatherer {target_system:kafka} | |
| kafka.request.time.avg{type:fetchfollower} | kafka.request.fetch_follower.time.avg | JMX Receiver / JMX Metrics Gatherer {target_system:kafka} | |
| kafka.message.count | kafka.messages_in.rate | JMX Receiver / JMX Metrics Gatherer {target_system:kafka} | Computed as a rate per second and submitted as a gauge |
| kafka.request.failed{type:produce} | kafka.request.produce.failed.rate | JMX Receiver / JMX Metrics Gatherer {target_system:kafka} | Computed as a rate per second and submitted as a gauge |
| kafka.request.failed{type:fetch} | kafka.request.fetch.failed.rate | JMX Receiver / JMX Metrics Gatherer {target_system:kafka} | Computed as a rate per second and submitted as a gauge |
| kafka.request.time.99p{type:produce} | kafka.request.produce.time.99percentile | JMX Receiver / JMX Metrics Gatherer {target_system:kafka} | |
| kafka.request.time.99p{type:fetchconsumer} | kafka.request.fetch_consumer.time.99percentile | JMX Receiver / JMX Metrics Gatherer {target_system:kafka} | |
| kafka.request.time.99p{type:fetchfollower} | kafka.request.fetch_follower.time.99percentile | JMX Receiver / JMX Metrics Gatherer {target_system:kafka} | |
| kafka.partition.count | kafka.replication.partition_count | JMX Receiver / JMX Metrics Gatherer {target_system:kafka} | |
| kafka.max.lag | kafka.replication.max_lag | JMX Receiver / JMX Metrics Gatherer {target_system:kafka} | |
| kafka.controller.active.count | kafka.replication.active_controller_count | JMX Receiver / JMX Metrics Gatherer {target_system:kafka} | |
| kafka.unclean.election.rate | kafka.replication.unclean_leader_elections.rate | JMX Receiver / JMX Metrics Gatherer {target_system:kafka} | Computed as a rate per second and submitted as a gauge |
| kafka.request.queue | kafka.request.channel.queue.size | JMX Receiver / JMX Metrics Gatherer {target_system:kafka} | |
| kafka.logs.flush.time.count | kafka.log.flush_rate.rate | JMX Receiver / JMX Metrics Gatherer {target_system:kafka} | Computed as a rate per second and submitted as a gauge |
| kafka.consumer.bytes-consumed-rate | kafka.consumer.bytes_consumed | JMX Receiver / JMX Metrics Gatherer {target_system:kafka-consumer} | |
| kafka.consumer.records-consumed-rate | kafka.consumer.records_consumed | JMX Receiver / JMX Metrics Gatherer {target_system:kafka-consumer} | |
| otel.kafka.consumer.fetch-size-avg | kafka.consumer.fetch_size_avg | JMX Receiver / JMX Metrics Gatherer {target_system:kafka-consumer} | |
| otel.kafka.producer.compression-rate | kafka.producer.compression-rate | JMX Receiver / JMX Metrics Gatherer {target_system:kafka-producer} | |
| otel.kafka.producer.record-error-rate | kafka.producer.record_error_rate | JMX Receiver / JMX Metrics Gatherer {target_system:kafka-producer} | |
| otel.kafka.producer.record-retry-rate | kafka.producer.record_retry_rate | JMX Receiver / JMX Metrics Gatherer {target_system:kafka-producer} | |
| otel.kafka.producer.record-send-rate | kafka.producer.record_send_rate | JMX Receiver / JMX Metrics Gatherer {target_system:kafka-producer} | |
| kafka.partition.current_offset | kafka.broker_offset | kafkametricsreceiver | |
| kafka.consumer_group.lag | kafka.consumer_lag | kafkametricsreceiver | |
| kafka.consumer_group.offset | kafka.consumer_offset | kafkametricsreceiver | |
| jvm.gc.collections.count{name:Copy or name:PS Scavenge or name:ParNew or name:G1 Young Generation} | jvm.gc.minor_collection_count | JMX Receiver / JMX Metrics Gatherer {target_system:jvm} | Computed as a rate per second and submitted as a gauge |
| jvm.gc.collections.count{name:MarkSweepCompact or name:PS MarkSweep or name:ConcurrentMarkSweep or name:G1 Mixed Generation or G1 Old Generation or Shenandoah Cycles or ZGC} | jvm.gc.major_collection_count | JMX Receiver / JMX Metrics Gatherer {target_system:jvm} | Computed as a rate per second and submitted as a gauge |
| jvm.gc.collections.elapsed{name:Copy or name:PS Scavenge or name:ParNew or name:G1 Young Generation} | jvm.gc.minor_collection_time | JMX Receiver / JMX Metrics Gatherer {target_system:jvm} | Computed as a rate per second and submitted as a gauge |
| jvm.gc.collections.elapsed{name:MarkSweepCompact or name:PS MarkSweep or name:ConcurrentMarkSweep or name:G1 Mixed Generation or G1 Old Generation or Shenandoah Cycles or ZGC} | jvm.gc.major_collection_time | JMX Receiver / JMX Metrics Gatherer {target_system:jvm} | Computed as a rate per second and submitted as a gauge |

Note: In Datadog, the hyphen character (-) is translated to an underscore (_). For the metrics listed with an otel. prefix, this translation would make the OTel metric name and the Datadog metric name identical (for example, kafka.producer.request-rate and kafka.producer.request_rate). To avoid double counting these metrics, the OTel metric is prepended with the otel. prefix.
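
The sketch below illustrates the "computed as a rate per second and submitted as a gauge" transform noted in the table, using two consecutive scrapes of a cumulative OTel counter such as kafka.network.io{state: out}. It is not the Datadog Exporter's actual implementation; the sample values are made up.

```python
def rate_per_second(prev_value: float, prev_time_s: float,
                    curr_value: float, curr_time_s: float) -> float:
    """Convert two cumulative counter samples into a per-second rate gauge."""
    elapsed = curr_time_s - prev_time_s
    return (curr_value - prev_value) / elapsed if elapsed > 0 else 0.0

# Two scrapes 10 seconds apart: the counter grows from 1,500,000 to 2,100,000
# total bytes sent, so kafka.net.bytes_out.rate would be submitted as a gauge
# of 60,000 bytes per second.
print(rate_per_second(1_500_000, 0.0, 2_100_000, 10.0))  # 60000.0
```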

Further reading