---
title: FluentD
description: Monitor buffer queues and retry counts for each Fluentd plugin you've enabled.
breadcrumbs: Docs > Integrations > FluentD
---

# FluentD
**Integration version:** 5.5.1


## Overview{% #overview %}

Get metrics from Fluentd to:

- Visualize Fluentd performance.
- Correlate the performance of Fluentd with the rest of your applications.

**Minimum Agent version:** 6.0.0

## Setup{% #setup %}

### Installation{% #installation %}

The Fluentd check is included in the [Datadog Agent](https://app.datadoghq.com/account/settings/agent/latest) package, so you don't need to install anything else on your Fluentd servers.

#### Prepare Fluentd{% #prepare-fluentd %}

In your Fluentd configuration file, add a `monitor_agent` source:

```text
<source>
  @type monitor_agent
  bind 0.0.0.0
  port 24220
</source>
```
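Once `monitor_agent` is enabled, Fluentd serves plugin metrics as JSON at `/api/plugins.json` (for example, `http://localhost:24220/api/plugins.json`). As a rough sketch of what the Agent check reads from that endpoint, the following parses an abbreviated, hypothetical response payload:

```python
import json

# Abbreviated example of a monitor_agent response; values are hypothetical.
sample = """
{
  "plugins": [
    {
      "plugin_id": "object:3fec669d6ac4",
      "type": "forward",
      "output_plugin": true,
      "buffer_queue_length": 0,
      "buffer_total_queued_size": 0,
      "retry_count": 0
    }
  ]
}
"""

data = json.loads(sample)
for plugin in data["plugins"]:
    # Counters like these are what the check reports as gauges.
    print(plugin["type"], plugin["buffer_queue_length"], plugin["retry_count"])
    # -> forward 0 0
```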

### Configuration{% #configuration %}

{% tab title="Host" %}
#### Host{% #host %}

To configure this check for an Agent running on a host:

##### Metric collection{% #metric-collection %}

1. Edit the `fluentd.d/conf.yaml` file, in the `conf.d/` folder at the root of your [Agent's configuration directory](https://docs.datadoghq.com/agent/guide/agent-configuration-files.md#agent-configuration-directory) to start collecting your Fluentd metrics. See the [sample fluentd.d/conf.yaml](https://github.com/DataDog/integrations-core/blob/master/fluentd/datadog_checks/fluentd/data/conf.yaml.example) for all available configuration options.

   ```yaml
   init_config:
   
   instances:
     ## @param monitor_agent_url - string - required
     ## Monitor Agent URL to connect to.
     #
     - monitor_agent_url: http://example.com:24220/api/plugins.json
   ```

1. [Restart the Agent](https://docs.datadoghq.com/agent/guide/agent-commands.md#start-stop-and-restart-the-agent).

##### Log collection{% #log-collection %}

You can use the [Datadog Fluentd plugin](https://github.com/DataDog/fluent-plugin-datadog) to forward the logs directly from Fluentd to your Datadog account.

###### Add metadata to your logs{% #add-metadata-to-your-logs %}

Proper metadata (including hostname and source) is the key to unlocking the full potential of your logs in Datadog. By default, the hostname and timestamp fields should be properly remapped with the [remapping for reserved attributes](https://docs.datadoghq.com/logs/processing.md#edit-reserved-attributes).
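For instance, a log event carrying these reserved attributes might look like the following (a purely illustrative payload; attribute values are hypothetical):

```
{
  "timestamp": "2019-05-01T08:15:30Z",
  "hostname": "i-0123456789abcdef",
  "message": "connection accepted",
  "ddsource": "fluentd",
  "ddtags": "env:prod,team:platform"
}
```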

###### Source and custom tags{% #source-and-custom-tags %}

Add the `ddsource` attribute with [the name of the log integration](https://docs.datadoghq.com/integrations.md#cat-log-collection) in your logs in order to trigger the [integration automatic setup](https://docs.datadoghq.com/logs/processing.md#integration-pipelines) in Datadog. [Host tags](https://docs.datadoghq.com/getting_started/tagging/assigning_tags.md) are automatically set on your logs if there is a matching hostname in your [infrastructure list](https://app.datadoghq.com/infrastructure). Use the `ddtags` attribute to add custom tags to your logs:

Setup Example:

```
# Match events tagged with "datadog.**" and
# send them to Datadog
<match datadog.**>
  @type datadog
  @id awesome_agent
  api_key <your_api_key>

  # Optional
  include_tag_key true
  tag_key 'tag'

  # Optional tags
  dd_source '<INTEGRATION_NAME>'
  dd_tags '<KEY1:VALUE1>,<KEY2:VALUE2>'

  <buffer>
    @type memory
    flush_thread_count 4
    flush_interval 3s
    chunk_limit_size 5m
    chunk_limit_records 500
  </buffer>
</match>
```

By default, the plugin is configured to send logs through HTTPS (port 443) using gzip compression. You can change this behavior by using the following parameters:

- `use_http`: Set this to `false` to use TCP forwarding, and update `host` and `port` accordingly (default: `true`).
- `use_compression`: Compression is only available for HTTP. Set this to `false` to disable it (default: `true`).
- `compression_level`: Set the gzip compression level for HTTP. The range is 1 to 9, with 9 being the best ratio (default: `6`).
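For example, to keep HTTPS forwarding but raise the compression ratio, a match block might look like this (a sketch; `compression_level 9` trades CPU for smaller payloads):

```
<match datadog.**>
  @type datadog
  api_key <your_api_key>

  # HTTPS with stronger gzip compression (default level is 6)
  use_http true
  use_compression true
  compression_level 9
</match>
```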

Additional parameters can be used to change the endpoint used in order to go through a proxy:

- `host`: The proxy endpoint for logs not directly forwarded to Datadog (default value: `http-intake.logs.datadoghq.com`).
- `port`: The proxy port for logs not directly forwarded to Datadog (default value: `80`).
- `ssl_port`: The port used for logs forwarded with a secure TCP/SSL connection to Datadog (default value: `443`).
- `use_ssl`: Instructs the Agent to initialize a secure TCP/SSL connection to Datadog (default value: `true`).
- `no_ssl_validation`: Disables SSL hostname validation (default value: `false`).

**Note**: Set `host` and `port` to match your Datadog site's region. For example, for the EU site:

```
<match datadog.**>

  #...
  host 'http-intake.logs.datadoghq.eu'

</match>
```

###### Kubernetes and Docker tags{% #kubernetes-and-docker-tags %}

Datadog tags are critical for navigating from one part of the product to another. Having the right metadata associated with your logs lets you jump from a container view, or any container metrics, to the most related logs.

If your logs contain any of the following attributes, these attributes are automatically added as Datadog tags on your logs:

- `kubernetes.container_image`
- `kubernetes.container_name`
- `kubernetes.namespace_name`
- `kubernetes.pod_name`
- `docker.container_id`

While the Datadog Agent collects Docker and Kubernetes metadata automatically, Fluentd requires a plugin for this. Datadog recommends using [fluent-plugin-kubernetes_metadata_filter](https://github.com/fabric8io/fluent-plugin-kubernetes_metadata_filter) to collect this metadata.

Configuration example:

```
# Collect metadata for logs tagged with "kubernetes.*"
<filter kubernetes.*>
  @type kubernetes_metadata
</filter>
```

{% /tab %}

{% tab title="Containerized" %}
#### Containerized{% #containerized %}

For containerized environments, see the [Autodiscovery Integration Templates](https://docs.datadoghq.com/agent/kubernetes/integrations.md) for guidance on applying the parameters below.

##### Metric collection{% #metric-collection %}

| Parameter            | Value                                                             |
| -------------------- | ----------------------------------------------------------------- |
| `<INTEGRATION_NAME>` | `fluentd`                                                         |
| `<INIT_CONFIG>`      | blank or `{}`                                                     |
| `<INSTANCE_CONFIG>`  | `{"monitor_agent_url": "http://%%host%%:24220/api/plugins.json"}` |
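As a sketch, these parameters map onto Kubernetes pod annotations in the Autodiscovery v1 annotation format (the container name is assumed to be `fluentd` here; adjust it to match your pod spec):

```yaml
annotations:
  ad.datadoghq.com/fluentd.check_names: '["fluentd"]'
  ad.datadoghq.com/fluentd.init_configs: '[{}]'
  ad.datadoghq.com/fluentd.instances: '[{"monitor_agent_url": "http://%%host%%:24220/api/plugins.json"}]'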

{% /tab %}

### Validation{% #validation %}

[Run the Agent's status subcommand](https://docs.datadoghq.com/agent/guide/agent-commands.md#agent-status-and-information) and look for `fluentd` under the Checks section.

## Data Collected{% #data-collected %}

### Metrics{% #metrics %}

| Metric | Description |
| ------ | ----------- |
| **fluentd.buffer\_available\_buffer\_space\_ratios** (gauge) | Available space for the buffer. |
| **fluentd.buffer\_queue\_byte\_size** (gauge) | Current byte size of queued buffer chunks. *Shown as byte* |
| **fluentd.buffer\_queue\_length** (gauge) | The length of the buffer queue for this plugin. *Shown as buffer* |
| **fluentd.buffer\_stage\_byte\_size** (gauge) | Current byte size of staged buffer chunks. *Shown as byte* |
| **fluentd.buffer\_stage\_length** (gauge) | The length of staged buffer chunks. |
| **fluentd.buffer\_total\_queued\_size** (gauge) | The size of the buffer queue for this plugin. *Shown as byte* |
| **fluentd.emit\_count** (gauge) | The total number of emit calls in the output plugin. *Shown as unit* |
| **fluentd.emit\_records** (gauge) | The total number of emitted records. *Shown as record* |
| **fluentd.flush\_time\_count** (gauge) | The total time spent flushing the buffer, in milliseconds. *Shown as millisecond* |
| **fluentd.retry\_count** (gauge) | The number of retries for this plugin. *Shown as time* |
| **fluentd.rollback\_count** (gauge) | The total number of rollbacks. A rollback happens when `write`/`try_write` fails. *Shown as unit* |
| **fluentd.slow\_flush\_count** (gauge) | The total number of slow flushes. This count is incremented when a buffer flush takes longer than `slow_flush_log_threshold`. *Shown as unit* |
| **fluentd.write\_count** (gauge) | The total number of `write`/`try_write` calls in the output plugin. *Shown as unit* |

### Events{% #events %}

The Fluentd check does not include any events.

### Service Checks{% #service-checks %}

**fluentd.is\_ok**

Returns `OK` if Fluentd and its monitor agent are running, `CRITICAL` otherwise.

*Statuses: ok, critical*

## Troubleshooting{% #troubleshooting %}

Need help? Contact [Datadog support](https://docs.datadoghq.com/help/).

## Further Reading{% #further-reading %}

- [How to monitor Fluentd with Datadog](https://www.datadoghq.com/blog/monitor-fluentd-datadog)
