Nvidia NVML

Supported OS: Linux, Windows, macOS

Integration version: 1.0.9

Overview
This check monitors metrics exposed by the NVIDIA Management Library (NVML) through the Datadog Agent and can correlate them with the exposed Kubernetes devices.

Installation
The NVML check is not included in the Datadog Agent package, so you need to install it.


For Agent v7.21+ / v6.21+, follow the instructions below to install the NVML check on your host. See Use Community Integrations to install with the Docker Agent or earlier versions of the Agent.

  1. Run the following command to install the Agent integration:

    datadog-agent integration install -t datadog-nvml==<INTEGRATION_VERSION>
    # You may also need to install dependencies since those aren't packaged into the wheel
    sudo -u dd-agent -H /opt/datadog-agent/embedded/bin/pip3 install grpcio pynvml
  2. Configure your integration similarly to core integrations.

If you are using Docker, there is an example Dockerfile in the NVML repository.

docker build -t dd-agent-nvml .

If you’re using Docker and Kubernetes, you need to expose the environment variables NVIDIA_VISIBLE_DEVICES and NVIDIA_DRIVER_CAPABILITIES. See the included Dockerfile for an example.
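As an illustration, these environment variables might be set on the Agent container in a Kubernetes pod spec like the following. This is a hedged sketch, not the official manifest: the container name and image are placeholders, and the variable values shown are common choices rather than requirements.

```yaml
# Sketch of a pod spec fragment exposing the NVIDIA runtime variables
# to the Agent container. Container name and image are illustrative.
spec:
  containers:
    - name: datadog-agent          # placeholder container name
      image: dd-agent-nvml:latest  # e.g. an image built from the example Dockerfile
      env:
        - name: NVIDIA_VISIBLE_DEVICES
          value: "all"             # expose all GPUs to the container
        - name: NVIDIA_DRIVER_CAPABILITIES
          value: "utility"         # capability set sufficient for NVML queries
```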

To correlate reserved Kubernetes NVIDIA devices with the Kubernetes pods using them, mount the Unix domain socket /var/lib/kubelet/pod-resources/kubelet.sock into your Agent's container. More information about this socket is on the Kubernetes website. Note: this socket is in beta support for version 1.15.
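A hostPath mount for the pod-resources socket could be sketched as follows. This is an assumption-laden fragment: the container and volume names are placeholders, and only the mount itself is shown.

```yaml
# Sketch: mounting the kubelet pod-resources directory into the Agent pod
# so the check can correlate GPU devices with pods. Names are illustrative.
spec:
  containers:
    - name: datadog-agent          # placeholder container name
      volumeMounts:
        - name: pod-resources      # placeholder volume name
          mountPath: /var/lib/kubelet/pod-resources
  volumes:
    - name: pod-resources
      hostPath:
        path: /var/lib/kubelet/pod-resources
```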

Configuration
  1. Edit the nvml.d/conf.yaml file in the conf.d/ folder at the root of your Agent's configuration directory to start collecting your NVML performance data. See the sample nvml.d/conf.yaml for all available configuration options.

  2. Restart the Agent.
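As a minimal sketch, an nvml.d/conf.yaml collecting with default settings might look like the following; the sample nvml.d/conf.yaml in the repository remains the authoritative reference for option names.

```yaml
# Minimal sketch of nvml.d/conf.yaml (assumed defaults; see the sample
# conf.yaml for the full set of supported options).
init_config:

instances:
  - {}   # a single instance with default settings
```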

Validation
Run the Agent’s status subcommand and look for nvml under the Checks section.

Data Collected

Metrics
- Number of GPUs on this instance.
- Percent of time over the past sample period during which one or more kernels were executing on the GPU. Shown as percent.
- Percent of time over the past sample period during which global (device) memory was being read or written. Shown as percent.
- Unallocated FB memory. Shown as byte.
- Allocated FB memory. Shown as byte.
- Total installed FB memory. Shown as byte.
- Power usage for this GPU and its associated circuitry (e.g. memory), in milliwatts.
- Total energy consumption for this GPU since the driver was last reloaded, in millijoules (mJ).
- The current utilization of the encoder. Shown as percent.
- The current utilization of the decoder. Shown as percent.
- PCIe TX utilization. Shown as kibibyte.
- PCIe RX utilization. Shown as kibibyte.
- Current temperature of this GPU, in degrees Celsius.
- The current utilization of the fan. Shown as percent.
- The current usage of GPU memory by process. Shown as byte.
The authoritative metric documentation is on the NVIDIA website.

When possible, metric names are matched with those of NVIDIA's Data Center GPU Manager (DCGM) exporter.

Events
NVML does not include any events.

Service Checks

Troubleshooting
Need help? Contact Datadog support.