HDFS DataNode

Supported OS: Linux, macOS

Integration version: 6.1.0

HDFS Dashboard (image)

Overview

Track disk utilization and failed volumes on each of your HDFS DataNodes. This Agent check collects metrics for these, as well as block- and cache-related metrics.

Use this check (hdfs_datanode) and its counterpart check (hdfs_namenode) instead of the older two-in-one check (hdfs), which is deprecated.

Setup

Follow the instructions below to install and configure this check for an Agent running on a host. For containerized environments, see the Autodiscovery Integration Templates for guidance on applying these instructions.

Installation

The HDFS DataNode check is included in the Datadog Agent package, so you don’t need to install anything else on your DataNodes.
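To confirm which version of the integration ships with your Agent, you can use the Agent's integration subcommand (shown here for a standard Linux install; the sudo prefix is an assumption about your setup):

    sudo datadog-agent integration show datadog-hdfs_datanode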

Configuration

Connect the Agent

Host

To configure this check for an Agent running on a host:

  1. Edit the hdfs_datanode.d/conf.yaml file, in the conf.d/ folder at the root of your Agent’s configuration directory. See the sample hdfs_datanode.d/conf.yaml for all available configuration options:

    init_config:
    
    instances:
      ## @param hdfs_datanode_jmx_uri - string - required
      ## The HDFS DataNode check retrieves metrics from the HDFS DataNode's JMX
      ## interface via HTTP(S) (not a JMX remote connection). This check must be
      ## installed on an HDFS DataNode. The HDFS DataNode JMX URI is composed of
      ## the DataNode's hostname and port.
      ##
      ## The hostname and port can be found in the hdfs-site.xml conf file under
      ## the property dfs.datanode.http.address:
      ## https://hadoop.apache.org/docs/r3.1.3/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml
      #
      - hdfs_datanode_jmx_uri: http://localhost:9864
    
  2. Restart the Agent.
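The URI configured above points at Hadoop's built-in JMX-over-HTTP servlet, so you can verify that the endpoint is reachable by querying it directly. The hostname and port below are the sample values; substitute your dfs.datanode.http.address setting:

    # A JSON response listing "beans" (for example FSDatasetState)
    # confirms the DataNode is serving JMX metrics over HTTP.
    curl 'http://localhost:9864/jmx'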

Containerized

For containerized environments, see the Autodiscovery Integration Templates for guidance on applying the parameters below.

Parameter           Value
<INTEGRATION_NAME>  hdfs_datanode
<INIT_CONFIG>       blank or {}
<INSTANCE_CONFIG>   {"hdfs_datanode_jmx_uri": "http://%%host%%:9864"}
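For example, on Kubernetes these parameters translate into Autodiscovery pod annotations. The sketch below assumes a pod whose DataNode container is named datanode; the container name in the annotation keys must match yours, and the image is a placeholder:

    apiVersion: v1
    kind: Pod
    metadata:
      name: datanode
      annotations:
        ad.datadoghq.com/datanode.check_names: '["hdfs_datanode"]'
        ad.datadoghq.com/datanode.init_configs: '[{}]'
        ad.datadoghq.com/datanode.instances: '[{"hdfs_datanode_jmx_uri": "http://%%host%%:9864"}]'
    spec:
      containers:
        - name: datanode        # must match the annotation prefix above
          image: apache/hadoop  # placeholder; use your DataNode image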

Log collection

Available for Agent >6.0

  1. Collecting logs is disabled by default in the Datadog Agent. Enable it in the datadog.yaml file with:

      logs_enabled: true
    
  2. Add this configuration block to your hdfs_datanode.d/conf.yaml file to start collecting your DataNode logs:

      logs:
        - type: file
          path: /var/log/hadoop-hdfs/*.log
          source: hdfs_datanode
          service: <SERVICE_NAME>
    

    Change the path and service parameter values to match your environment. For multi-line stack traces, see the optional processing-rule sketch after this list.

  3. Restart the Agent.
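Hadoop's default log4j layout starts each record with a date, so multi-line Java stack traces can optionally be grouped into single log events with a log_processing_rules entry. A minimal sketch, assuming the default yyyy-MM-dd timestamp prefix:

      logs:
        - type: file
          path: /var/log/hadoop-hdfs/*.log
          source: hdfs_datanode
          service: <SERVICE_NAME>
          log_processing_rules:
            - type: multi_line
              name: new_log_start_with_date
              pattern: \d{4}-\d{2}-\d{2}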

Validation

Run the Agent’s status subcommand and look for hdfs_datanode under the Checks section.
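For example, on a standard Linux install (the sudo prefix and invocation can vary by platform):

    # hdfs_datanode should appear under the Checks section of the output.
    sudo datadog-agent status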

Data Collected

Metrics

Metric                                        Type   Description
hdfs.datanode.cache_capacity                  gauge  Cache capacity in bytes (shown as byte)
hdfs.datanode.cache_used                      gauge  Cache used in bytes (shown as byte)
hdfs.datanode.dfs_capacity                    gauge  Disk capacity in bytes (shown as byte)
hdfs.datanode.dfs_remaining                   gauge  Remaining disk space in bytes (shown as byte)
hdfs.datanode.dfs_used                        gauge  Disk usage in bytes (shown as byte)
hdfs.datanode.estimated_capacity_lost_total   gauge  Estimated capacity lost in bytes (shown as byte)
hdfs.datanode.last_volume_failure_date        gauge  Date/time of the last volume failure, in milliseconds since epoch (shown as millisecond)
hdfs.datanode.num_blocks_cached               gauge  Number of blocks cached (shown as block)
hdfs.datanode.num_blocks_failed_to_cache      gauge  Number of blocks that failed to cache (shown as block)
hdfs.datanode.num_blocks_failed_to_uncache    gauge  Number of blocks that failed to be removed from the cache (shown as block)
hdfs.datanode.num_failed_volumes              gauge  Number of failed volumes

Events

The HDFS DataNode check does not include any events.

Service Checks

hdfs.datanode.jmx.can_connect

Returns CRITICAL if the Agent cannot connect to the DataNode’s JMX interface for any reason. Returns OK otherwise.

Statuses: ok, critical

Troubleshooting

Need help? Contact Datadog support.

Further Reading