---
title: Red Hat Gluster Storage
description: Monitor GlusterFS cluster node, volume, and brick status metrics.
breadcrumbs: Docs > Integrations > Red Hat Gluster Storage
---

# Red Hat Gluster Storage
**Integration version:** 3.4.1
## Overview{% #overview %}

This check monitors [Red Hat Gluster Storage](https://www.redhat.com/en/technologies/storage/gluster) cluster health, volume, and brick status through the Datadog Agent. This GlusterFS integration is compatible with both Red Hat vendored and open-source versions of GlusterFS.

**Minimum Agent version:** 7.25.1

## Setup{% #setup %}

Follow the instructions below to install and configure this check for an Agent running on a host. For containerized environments, see the [Autodiscovery Integration Templates](https://docs.datadoghq.com/agent/kubernetes/integrations/) for guidance on applying these instructions.

### Installation{% #installation %}

The GlusterFS check is included in the [Datadog Agent](https://app.datadoghq.com/account/settings/agent/latest) package. No additional installation is needed on your server.

### Configuration{% #configuration %}

1. Edit the `glusterfs.d/conf.yaml` file, in the `conf.d/` folder at the root of your Agent's configuration directory, to start collecting your GlusterFS performance data. See the [sample glusterfs.d/conf.yaml](https://github.com/DataDog/integrations-core/blob/master/glusterfs/datadog_checks/glusterfs/data/conf.yaml.example) for all available configuration options.

   ```yaml
   init_config:

     ## @param gstatus_path - string - optional - default: /opt/datadog-agent/embedded/sbin/gstatus
     ## Path to the gstatus command.
     ##
     ## A version of gstatus is shipped with the Agent binary.
     ## If you are using a source install, specify the location of gstatus.
     #
     # gstatus_path: /opt/datadog-agent/embedded/sbin/gstatus

   instances:
     -
       ## @param min_collection_interval - number - optional - default: 60
       ## The GlusterFS integration collects cluster-wide metrics, which can put additional load on the server.
       ## Increase the collection interval to reduce the frequency.
       ##
       ## This changes the collection interval of the check. For more information, see:
       ## https://docs.datadoghq.com/developers/write_agent_check/#collection-interval
       #
       min_collection_interval: 60
   ```
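   For example, if you built `gstatus` yourself as part of a source install, a minimal configuration might look like the following sketch. The `/usr/local/bin/gstatus` path is illustrative only; substitute your actual install location.

   ```yaml
   init_config:
     # Hypothetical path for a source-built gstatus; adjust to your install.
     gstatus_path: /usr/local/bin/gstatus

   instances:
     - min_collection_interval: 120
   ```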

**NOTE**: By default, [`gstatus`](https://github.com/gluster/gstatus#install) internally calls the `gluster` command, which requires running as superuser. Add a line like the following to your `sudoers` file:

   ```text
   dd-agent ALL=(ALL) NOPASSWD:/path/to/your/gstatus
   ```

If your GlusterFS environment does not require root, set the `use_sudo` configuration option to `false`.
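For example, on hosts where `gluster` can run unprivileged, the instance configuration could look like this sketch (only `use_sudo` is needed for this change; `min_collection_interval` is repeated from above for context):

```yaml
instances:
  - use_sudo: false
    min_collection_interval: 60
```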

1. [Restart the Agent](https://docs.datadoghq.com/agent/guide/agent-commands/#start-stop-and-restart-the-agent).

#### Log collection{% #log-collection %}

1. Collecting logs is disabled by default in the Datadog Agent. Enable it in your `datadog.yaml` file:

   ```yaml
   logs_enabled: true
   ```

1. Edit this configuration block in your `glusterfs.d/conf.yaml` file to start collecting your GlusterFS logs:

   ```yaml
   logs:
     - type: file
       path: /var/log/glusterfs/glusterd.log
       source: glusterfs
     - type: file
       path: /var/log/glusterfs/cli.log
       source: glusterfs
   ```

   Change the `path` parameter value based on your environment. See the [sample conf.yaml](https://github.com/DataDog/integrations-core/blob/master/glusterfs/datadog_checks/glusterfs/data/conf.yaml.example) for all available configuration options.

1. [Restart the Agent](https://docs.datadoghq.com/agent/guide/agent-commands/#start-stop-and-restart-the-agent).

For information on configuring the Agent for log collection in Kubernetes environments, see [Kubernetes Log Collection](https://docs.datadoghq.com/agent/kubernetes/log/).
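GlusterFS log entries typically begin with a bracketed timestamp (for example `[2021-03-01 12:00:00.000000]`), so multi-line messages such as tracebacks can be grouped into single log events with a `log_processing_rules` block. The following is an optional sketch that assumes this default timestamp format; adjust the pattern if your logs differ:

```yaml
logs:
  - type: file
    path: /var/log/glusterfs/glusterd.log
    source: glusterfs
    log_processing_rules:
      - type: multi_line
        name: new_log_start_with_timestamp
        # Lines not starting with "[YYYY-MM-DD" are appended to the previous event.
        pattern: \[\d{4}-\d{2}-\d{2}
```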

### Validation{% #validation %}

[Run the Agent's status subcommand](https://docs.datadoghq.com/agent/guide/agent-commands/#agent-status-and-information) and look for `glusterfs` under the Checks section.

## Data Collected{% #data-collected %}

### Metrics{% #metrics %}

| Metric | Description |
| --- | --- |
| **glusterfs.brick.block\_size**(gauge)           | Block Size of brick*Shown as byte*                     |
| **glusterfs.brick.inodes.free**(gauge)           | Free inodes in brick*Shown as byte*                    |
| **glusterfs.brick.inodes.total**(gauge)          | Total inodes in brick*Shown as byte*                   |
| **glusterfs.brick.inodes.used**(gauge)          | Inodes used in brick*Shown as byte*                    |
| **glusterfs.brick.online**(gauge)                | Number of bricks online*Shown as unit*                 |
| **glusterfs.brick.size.free**(gauge)             | Brick size free*Shown as byte*                         |
| **glusterfs.brick.size.total**(gauge)            | Total brick size*Shown as byte*                        |
| **glusterfs.brick.size.used**(gauge)             | Current bytes used in brick*Shown as byte*             |
| **glusterfs.cluster.nodes.active**(gauge)        | Current active nodes*Shown as node*                    |
| **glusterfs.cluster.nodes.count**(gauge)         | Total number of nodes in cluster*Shown as node*        |
| **glusterfs.cluster.volumes.count**(gauge)       | Number of volumes in cluster*Shown as unit*            |
| **glusterfs.cluster.volumes.started**(gauge)     | Number of volumes started in cluster*Shown as unit*    |
| **glusterfs.heal\_info.entries.count**(gauge)    | Number of entries requiring healing*Shown as unit*     |
| **glusterfs.subvol.disperse**(gauge)             | Disperse count of subvolume*Shown as unit*             |
| **glusterfs.subvol.disperse\_redundancy**(gauge) | Disperse redundancy of subvolume*Shown as unit*        |
| **glusterfs.subvol.replica**(gauge)              | Replicas in subvolume*Shown as unit*                   |
| **glusterfs.volume.bricks.count**(gauge)         | Number of bricks in volume*Shown as unit*              |
| **glusterfs.volume.disperse**(gauge)             | Number of dispersed in volume*Shown as unit*           |
| **glusterfs.volume.disperse\_redundancy**(gauge) | Number of disperse redundancy in volume*Shown as unit* |
| **glusterfs.volume.distribute**(gauge)           | Number of distributed*Shown as unit*                   |
| **glusterfs.volume.inodes.free**(gauge)          | Inodes free in volume*Shown as byte*                   |
| **glusterfs.volume.inodes.total**(gauge)         | Total inodes in volume*Shown as byte*                  |
| **glusterfs.volume.inodes.used**(gauge)          | Inodes used in volume*Shown as byte*                   |
| **glusterfs.volume.online**(gauge)               | Number of volumes online*Shown as unit*                |
| **glusterfs.volume.replica**(gauge)              | Replicas in volumes*Shown as unit*                     |
| **glusterfs.volume.size.free**(gauge)            | Bytes free in volume*Shown as byte*                    |
| **glusterfs.volume.size.total**(gauge)           | Bytes total in volume*Shown as byte*                   |
| **glusterfs.volume.size.used**(gauge)            | Bytes used in volume*Shown as byte*                    |
| **glusterfs.volume.snapshot.count**(gauge)       | Number of snapshots of volume*Shown as unit*           |
| **glusterfs.volume.used.percent**(gauge)         | Percentage of volume used*Shown as percent*            |

### Events{% #events %}

GlusterFS does not include any events.

### Service Checks{% #service-checks %}

**glusterfs.brick.health**

Returns `CRITICAL` if the subvolume is 'degraded'. Returns `OK` if 'up'.

*Statuses: ok, critical, warning*

**glusterfs.volume.health**

Returns `CRITICAL` if the volume is 'degraded'. Returns `OK` if 'up'.

*Statuses: ok, critical, warning*

**glusterfs.cluster.health**

Returns `CRITICAL` if the cluster is 'degraded'. Returns `OK` otherwise.

*Statuses: ok, critical, warning*

## Troubleshooting{% #troubleshooting %}

Need help? Contact [Datadog support](https://docs.datadoghq.com/help/).
