---
title: Logstash
description: Monitor and collect runtime metrics from a Logstash instance
breadcrumbs: Docs > Integrations > Logstash
---

# Logstash
**Integration version**: 1.2.0
## Overview{% #overview %}

Get metrics from Logstash in real time to:

- Visualize and monitor Logstash states.
- Be notified about Logstash events.

## Setup{% #setup %}

### Installation{% #installation %}

The Logstash check is not included in the [Datadog Agent](https://app.datadoghq.com/account/settings/agent/latest) package, so you need to install it.

{% tab title="Host" %}
#### Host{% #host %}

For Agent v7.21+ / v6.21+, follow the instructions below to install the Logstash check on your host. For earlier versions of the Agent, see [Use Community Integrations](https://docs.datadoghq.com/agent/guide/use-community-integrations/).

1. Run the following command to install the Agent integration:

   ```shell
   datadog-agent integration install -t datadog-logstash==<INTEGRATION_VERSION>
   ```

1. Configure your integration similar to core [integrations](https://docs.datadoghq.com/getting_started/integrations/).

{% /tab %}

{% tab title="Containerized" %}
#### Containerized{% #containerized %}

Use the following Dockerfile to build a custom Datadog Agent image that includes the Logstash integration.

```dockerfile
FROM gcr.io/datadoghq/agent:latest
RUN datadog-agent integration install -r -t datadog-logstash==<INTEGRATION_VERSION>
```

If you are using Kubernetes, update your Datadog Operator or Helm chart configuration to pull this custom Datadog Agent image.
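For example, with the Datadog Helm chart, the image override might look like the following sketch. The registry and tag are placeholders for wherever you push the custom image:

```yaml
# values.yaml for the Datadog Helm chart.
# The repository and tag below are hypothetical -- point them at the
# custom image built from the Dockerfile above.
agents:
  image:
    repository: <YOUR_REGISTRY>/datadog-agent-logstash
    tag: <YOUR_TAG>
```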

See [Use Community Integrations](https://docs.datadoghq.com/agent/guide/use-community-integrations/) for more context.
{% /tab %}

### Configuration{% #configuration %}

#### Metric collection{% #metric-collection %}

{% tab title="Host" %}
##### Host{% #host %}

1. Edit the `logstash.d/conf.yaml` file in the `conf.d/` folder at the root of your [Agent's configuration directory](https://docs.datadoghq.com/agent/guide/agent-configuration-files/#agent-configuration-directory).

   ```yaml
   init_config:
   
   instances:
     # The URL where Logstash provides its monitoring API.
     # This will be used to fetch various runtime metrics about Logstash.
     #
     - url: http://localhost:9600
   ```

   See the [sample logstash.d/conf.yaml](https://github.com/DataDog/integrations-extras/blob/master/logstash/datadog_checks/logstash/data/conf.yaml.example) for all available configuration options.

1. [Restart the Agent](https://docs.datadoghq.com/agent/guide/agent-commands/#start-stop-and-restart-the-agent).

{% /tab %}

{% tab title="Containerized" %}
##### Containerized{% #containerized %}

For containerized environments, use an Autodiscovery template with the following parameters:

| Parameter            | Value                                |
| -------------------- | ------------------------------------ |
| `<INTEGRATION_NAME>` | `logstash`                           |
| `<INIT_CONFIG>`      | blank or `{}`                        |
| `<INSTANCE_CONFIG>`  | `{"url": "http://%%host%%:9600"}`    |

To learn how to apply this template, see [Docker Integrations](https://docs.datadoghq.com/containers/docker/integrations) or [Kubernetes Integrations](https://docs.datadoghq.com/containers/kubernetes/integrations/).
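As a sketch, on Kubernetes this template can be applied through pod annotations. The pod and container names below are illustrative, and the `ad.datadoghq.com/<CONTAINER_NAME>.checks` annotation format assumes a recent Agent version:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: logstash
  annotations:
    ad.datadoghq.com/logstash.checks: |
      {
        "logstash": {
          "init_config": {},
          "instances": [{"url": "http://%%host%%:9600"}]
        }
      }
spec:
  containers:
    - name: logstash  # must match the container name in the annotation key
      image: docker.elastic.co/logstash/logstash:7.17.0
```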

See the [sample logstash.d/conf.yaml](https://github.com/DataDog/integrations-extras/blob/master/logstash/datadog_checks/logstash/data/conf.yaml.example) for all available configuration options.
{% /tab %}

#### Log collection{% #log-collection %}

Datadog has [an output plugin](https://github.com/DataDog/logstash-output-datadog_logs) for Logstash that sends your logs to Datadog.

To install this plugin, run the following command:

```shell
logstash-plugin install logstash-output-datadog_logs
```

Then configure the `datadog_logs` plugin with your [Datadog API key](https://app.datadoghq.com/organization-settings/api-keys):

```
output {
    datadog_logs {
        api_key => "<DATADOG_API_KEY>"
    }
}
```

By default, the plugin is configured to send logs through HTTPS (port 443) using gzip compression. You can change this behavior by using the following parameters:

- `use_http`: Set this to `false` to use TCP forwarding instead, and update `host` and `port` accordingly (default: `true`).
- `use_compression`: Compression is only available for HTTP. Set this to `false` to disable it (default: `true`).
- `compression_level`: Set the compression level for HTTP. The range is 1 to 9, with 9 being the best ratio (default: `6`).
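For example, the following output block keeps the default HTTPS forwarding but raises the compression level to the maximum ratio:

```
output {
  datadog_logs {
    api_key => "<DATADOG_API_KEY>"
    use_compression => true
    compression_level => 9
  }
}
```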

##### Proxy configuration{% #proxy-configuration %}

To send logs through a proxy server, use the `http_proxy` parameter:

```
output {
   datadog_logs {
       api_key => "<DATADOG_API_KEY>"
       http_proxy => "http://<PROXY_SERVER>:3128"
   }
}
```

See the Datadog [Proxy documentation](https://docs.datadoghq.com/agent/proxy/#proxy-for-logs) for additional configuration options.

##### Regional endpoint configuration{% #regional-endpoint-configuration %}

By default, the plugin sends logs to the Datadog US region. To send logs to a different Datadog region, configure the `host` parameter:

**EU region example:**

```
output {
   datadog_logs {
       api_key => "<DATADOG_API_KEY>"
       host => "http-intake.logs.datadoghq.eu"
   }
}
```

**Note**: Set `host` to the log intake endpoint for your Datadog site.

##### Advanced endpoint configuration{% #advanced-endpoint-configuration %}

You can use additional parameters to customize the Datadog endpoint connection:

- `host`: The Datadog intake endpoint (default value: `http-intake.logs.datadoghq.com`).
- `port`: The port for HTTP connections (default value: `80`).
- `ssl_port`: The port used for secure TCP/SSL connections (default value: `443`).
- `use_ssl`: Initialize a secure TCP/SSL connection to Datadog (default value: `true`).
- `no_ssl_validation`: Disables SSL hostname validation (default value: `false`).
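For example, a secure TCP configuration using these parameters might look like the following sketch, which relies on the defaults listed above; in practice, set `host` and `ssl_port` to the TCP intake endpoint for your Datadog site:

```
output {
  datadog_logs {
    api_key => "<DATADOG_API_KEY>"
    use_http => false
    use_ssl => true
    ssl_port => 443
  }
}
```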

##### Add metadata to your logs{% #add-metadata-to-your-logs %}

To get the best use out of your logs in Datadog, it is important to have the proper metadata associated with your logs, including hostname and source. By default, the hostname and timestamp should be properly remapped thanks to Datadog's default [remapping for reserved attributes](https://docs.datadoghq.com/logs/#edit-reserved-attributes). To make sure the service is correctly remapped, add its attribute value to the service remapping list.

##### Source{% #source %}

Set up a Logstash filter to set the source (Datadog integration name) on your logs.

```
filter {
  mutate {
    add_field => {
      "ddsource" => "<MY_SOURCE_VALUE>"
    }
  }
}
```

This triggers the [integration automatic setup](https://docs.datadoghq.com/logs/processing/#integration-pipelines) in Datadog.

##### Custom tags{% #custom-tags %}

[Host tags](https://docs.datadoghq.com/getting_started/tagging/assigning_tags) are automatically set on your logs if there is a matching hostname in your [infrastructure list](https://app.datadoghq.com/infrastructure). Use the `ddtags` attribute to add custom tags to your logs:

```
filter {
  mutate {
    add_field => {
      "ddtags" => "env:test,<KEY:VALUE>"
    }
  }
}
```

### Validation{% #validation %}

[Run the Agent's `status` subcommand](https://docs.datadoghq.com/agent/guide/agent-commands/#service-status) and look for `logstash` under the Checks section.

## Compatibility{% #compatibility %}

The Logstash check is compatible with Logstash 5.x, 6.x, and 7.x. It also supports the multi-pipeline metrics introduced in Logstash 6.0. Tested with Logstash versions 5.6.15, 6.3.0, and 7.0.0.

## Data Collected{% #data-collected %}

### Metrics{% #metrics %}

| Metric | Description |
| ------ | ----------- |
| **logstash.process.open\_file\_descriptors**(gauge)                                  | The number of open file descriptors used by this process.                    |
| **logstash.process.peak\_open\_file\_descriptors**(gauge)                            | The peak number of open file descriptors used by this process.               |
| **logstash.process.max\_file\_descriptors**(gauge)                                   | The maximum number of file descriptors used by this process.                 |
| **logstash.process.mem.total\_virtual\_in\_bytes**(gauge)                            | Total virtual memory allocated to this process.*Shown as byte*               |
| **logstash.process.cpu.total\_in\_millis**(gauge)                                    | The CPU time in milliseconds.*Shown as millisecond*                          |
| **logstash.process.cpu.percent**(gauge)                                              | CPU utilization in percentage.*Shown as percent*                             |
| **logstash.process.cpu.load\_average.1m**(gauge)                                     | The average CPU load over one minute.                                        |
| **logstash.process.cpu.load\_average.5m**(gauge)                                     | The average CPU load over five minutes.                                      |
| **logstash.process.cpu.load\_average.15m**(gauge)                                    | The average CPU load over fifteen minutes.                                   |
| **logstash.jvm.threads.count**(gauge)                                                | Number of threads used by the JVM.*Shown as thread*                          |
| **logstash.jvm.threads.peak\_count**(gauge)                                          | The peak number of threads used by the JVM.*Shown as thread*                 |
| **logstash.jvm.mem.heap\_used\_percent**(gauge)                                      | Total Java heap memory used.*Shown as percent*                               |
| **logstash.jvm.mem.heap\_committed\_in\_bytes**(gauge)                               | Total Java heap memory committed.*Shown as byte*                             |
| **logstash.jvm.mem.heap\_max\_in\_bytes**(gauge)                                     | Maximum Java heap memory size.*Shown as byte*                                |
| **logstash.jvm.mem.heap\_used\_in\_bytes**(gauge)                                    | Total Java heap memory used.*Shown as byte*                                  |
| **logstash.jvm.mem.non\_heap\_used\_in\_bytes**(gauge)                               | Total Java non-heap memory used.*Shown as byte*                              |
| **logstash.jvm.mem.non\_heap\_committed\_in\_bytes**(gauge)                          | Total Java non-heap memory committed.*Shown as byte*                         |
| **logstash.jvm.mem.pools.survivor.peak\_used\_in\_bytes**(gauge)                     | The peak Java memory used in the Survivor space.*Shown as byte*              |
| **logstash.jvm.mem.pools.survivor.used\_in\_bytes**(gauge)                           | The Java memory used in the Survivor space.*Shown as byte*                   |
| **logstash.jvm.mem.pools.survivor.peak\_max\_in\_bytes**(gauge)                      | The peak maximum Java memory used in the Survivor space.*Shown as byte*      |
| **logstash.jvm.mem.pools.survivor.max\_in\_bytes**(gauge)                            | The maximum Java memory used in the Survivor space.*Shown as byte*           |
| **logstash.jvm.mem.pools.survivor.committed\_in\_bytes**(gauge)                      | The committed Java memory used in the Survivor space.*Shown as byte*         |
| **logstash.jvm.mem.pools.old.peak\_used\_in\_bytes**(gauge)                          | The peak Java memory used in the Old generation.*Shown as byte*              |
| **logstash.jvm.mem.pools.old.used\_in\_bytes**(gauge)                                | The Java memory used in the Old generation.*Shown as byte*                   |
| **logstash.jvm.mem.pools.old.peak\_max\_in\_bytes**(gauge)                           | The peak maximum Java memory used in the Old generation.*Shown as byte*      |
| **logstash.jvm.mem.pools.old.max\_in\_bytes**(gauge)                                 | The maximum Java memory used in the Old generation.*Shown as byte*           |
| **logstash.jvm.mem.pools.old.committed\_in\_bytes**(gauge)                           | The committed Java memory used in the Old generation.*Shown as byte*         |
| **logstash.jvm.mem.pools.young.peak\_used\_in\_bytes**(gauge)                        | The peak Java memory used in the Young space.*Shown as byte*                 |
| **logstash.jvm.mem.pools.young.used\_in\_bytes**(gauge)                              | The Java memory used in the Young generation.*Shown as byte*                 |
| **logstash.jvm.mem.pools.young.peak\_max\_in\_bytes**(gauge)                         | The peak maximum Java memory used in the Young generation.*Shown as byte*    |
| **logstash.jvm.mem.pools.young.max\_in\_bytes**(gauge)                               | The maximum Java memory used in the Young generation.*Shown as byte*         |
| **logstash.jvm.mem.pools.young.committed\_in\_bytes**(gauge)                         | The committed Java memory used in the Young generation.*Shown as byte*       |
| **logstash.jvm.gc.collectors.old.collection\_time\_in\_millis**(gauge)               | Garbage collection time spent in the Old generation.*Shown as millisecond*   |
| **logstash.jvm.gc.collectors.old.collection\_count**(gauge)                          | Garbage collection count in the Old generation.                              |
| **logstash.jvm.gc.collectors.young.collection\_time\_in\_millis**(gauge)             | Garbage collection time spent in the Young generation.*Shown as millisecond* |
| **logstash.jvm.gc.collectors.young.collection\_count**(gauge)                        | Garbage collection count in the Young generation.                            |
| **logstash.reloads.successes**(gauge)                                                | Number of successful configuration reloads.                                  |
| **logstash.reloads.failures**(gauge)                                                 | Number of failed configuration reloads.                                      |
| **logstash.pipeline.dead\_letter\_queue.queue\_size\_in\_bytes**(gauge)              | Total size of the dead letter queue.*Shown as byte*                          |
| **logstash.pipeline.events.duration\_in\_millis**(gauge)                             | Events duration in the pipeline.*Shown as millisecond*                       |
| **logstash.pipeline.events.in**(gauge)                                               | Number of events into the pipeline.                                          |
| **logstash.pipeline.events.out**(gauge)                                              | Number of events out from the pipeline.                                      |
| **logstash.pipeline.events.filtered**(gauge)                                         | Number of events filtered.                                                   |
| **logstash.pipeline.reloads.successes**(gauge)                                       | Number of successful pipeline reloads.                                       |
| **logstash.pipeline.reloads.failures**(gauge)                                        | Number of failed pipeline reloads.                                           |
| **logstash.pipeline.plugins.inputs.events.out**(gauge)                               | Number of events out from the input plugin.                                  |
| **logstash.pipeline.plugins.inputs.events.queue\_push\_duration\_in\_millis**(gauge) | Duration of queue push in the input plugin.*Shown as millisecond*            |
| **logstash.pipeline.plugins.outputs.events.in**(gauge)                               | Number of events into the output plugin.                                     |
| **logstash.pipeline.plugins.outputs.events.out**(gauge)                              | Number of events out from the output plugin.                                 |
| **logstash.pipeline.plugins.outputs.events.duration\_in\_millis**(gauge)             | Duration of events in the output plugin.*Shown as millisecond*               |
| **logstash.pipeline.plugins.filters.events.in**(gauge)                               | Number of events into the filter plugin.                                     |
| **logstash.pipeline.plugins.filters.events.out**(gauge)                              | Number of events out from the filter plugin.                                 |
| **logstash.pipeline.plugins.filters.events.duration\_in\_millis**(gauge)             | Duration of events in the filter plugin.*Shown as millisecond*               |
| **logstash.pipeline.queue.capacity.max\_queue\_size\_in\_bytes**(gauge)              | Maximum queue capacity in bytes of a persistent queue.*Shown as byte*        |
| **logstash.pipeline.queue.capacity.max\_unread\_events**(gauge)                      | Maximum unread events allowed in a persistent queue.                         |
| **logstash.pipeline.queue.capacity.page\_capacity\_in\_bytes**(gauge)                | Queue page capacity in bytes of a persistent queue.*Shown as byte*           |
| **logstash.pipeline.queue.capacity.queue\_size\_in\_bytes**(gauge)                   | Disk used in bytes of a persistent queue.*Shown as byte*                     |
| **logstash.pipeline.queue.events**(gauge)                                            | Number of events in a persistent queue.                                      |

### Events{% #events %}

The Logstash check does not include any events.

### Service Checks{% #service-checks %}

**logstash.can\_connect**

Returns `Critical` if the Agent cannot connect to Logstash to collect metrics; otherwise, returns `OK`.

*Statuses: ok, critical*

## Troubleshooting{% #troubleshooting %}

### Agent cannot connect{% #agent-cannot-connect %}

```text
    logstash
    -------
      - instance #0 [ERROR]: "('Connection aborted.', error(111, 'Connection refused'))"
      - Collected 0 metrics, 0 events & 1 service check
```

Check that the `url` in `conf.yaml` is correct.

If you need further help, contact [Datadog support](http://docs.datadoghq.com/help).
