---
title: Kong
description: Track total requests, response codes, client connections, and more.
breadcrumbs: Docs > Integrations > Kong
---

# Kong
**Integration version:** 6.3.0
## Overview{% #overview %}

The Agent's Kong check tracks total requests, response codes, client connections, and more.

You can also use Kong's [Datadog plugin](https://docs.konghq.com/hub/kong-inc/datadog/) to send API, connection, and database metrics to Datadog through the Datadog Agent using [DogStatsD](https://docs.datadoghq.com/developers/dogstatsd/). Read the [Monitor Kong with the Datadog integration](https://www.datadoghq.com/blog/monitor-kong-datadog) blog post for more information.

**Minimum Agent version:** 6.0.0

## Setup{% #setup %}

### Installation{% #installation %}

The Kong check is included in the [Datadog Agent](https://app.datadoghq.com/account/settings/agent/latest) package, so you don't need to install anything else on your Kong servers.

### Configuration{% #configuration %}

{% tab title="Host" %}
#### Host{% #host %}

To configure this check for an Agent running on a host:

##### Metric collection{% #metric-collection %}

1. Ensure that your Kong service exposes metrics in the OpenMetrics format by [enabling the Prometheus plugin](https://docs.konghq.com/hub/kong-inc/prometheus/). The Agent cannot collect Kong metrics until this plugin is enabled.

1. Add this configuration block to your `kong.d/conf.yaml` file in the `conf.d/` folder at the root of your [Agent's configuration directory](https://docs.datadoghq.com/agent/guide/agent-configuration-files/#agent-configuration-directory) to start gathering your Kong metrics. See the [sample kong.d/conf.yaml](https://github.com/DataDog/integrations-core/blob/master/kong/datadog_checks/kong/data/conf.yaml.example) for all available configuration options:

   ```yaml
   init_config:
   
   instances:
     ## @param openmetrics_endpoint - string - required
     ## The URL exposing metrics in the OpenMetrics format.
     #
     - openmetrics_endpoint: http://localhost:8001/metrics
   ```

1. [Restart the Agent](https://docs.datadoghq.com/agent/guide/agent-commands/#start-stop-and-restart-the-agent).

**Note**: The current version of the check (1.17.0+) uses [OpenMetrics](https://docs.datadoghq.com/integrations/openmetrics/) for metric collection, which requires Python 3. For hosts unable to use Python 3, or to use a legacy version of this check, see the following [config](https://github.com/DataDog/integrations-core/blob/7.27.x/kong/datadog_checks/kong/data/conf.yaml.example).
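The payload the Agent scrapes from the endpoint above is plain-text OpenMetrics exposition data. As a rough illustration of that format (the sample lines below are hypothetical, not actual Kong output), a minimal parser might look like:

```python
# Minimal sketch: parse OpenMetrics-style exposition text into
# (name, labels, value) tuples. The SAMPLE payload is hypothetical;
# real Kong output contains many more series.
import re

SAMPLE = """\
# TYPE kong_http_requests_total counter
kong_http_requests_total{service="my-api",code="200"} 42
kong_http_requests_total{service="my-api",code="500"} 3
"""

LINE_RE = re.compile(r'^(\w+)(?:\{(.*)\})?\s+([0-9.eE+-]+)$')

def parse_openmetrics(text):
    samples = []
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue  # skip HELP/TYPE metadata and blank lines
        m = LINE_RE.match(line)
        if not m:
            continue
        name, raw_labels, value = m.groups()
        labels = {}
        if raw_labels:
            for kv in raw_labels.split(","):
                k, v = kv.split("=", 1)
                labels[k] = v.strip('"')
        samples.append((name, labels, float(value)))
    return samples

print(parse_openmetrics(SAMPLE))
```

This is only a format illustration; the Agent's own OpenMetrics check handles parsing, metric typing, and tagging for you.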

##### Log collection{% #log-collection %}

*Available for Agent versions >6.0*

Kong access logs are generated by NGINX, so their default location is the same as for NGINX log files.

1. Collecting logs is disabled by default in the Datadog Agent. Enable it in your `datadog.yaml` file:

   ```yaml
   logs_enabled: true
   ```

1. Add this configuration block to your `kong.d/conf.yaml` file to start collecting your Kong Logs:

   ```yaml
   logs:
     - type: file
       path: /var/log/nginx/access.log
       service: '<SERVICE>'
       source: kong
   
     - type: file
       path: /var/log/nginx/error.log
       service: '<SERVICE>'
       source: kong
   ```

   Change the `path` and `service` parameter values to match your environment. See the [sample kong.d/conf.yaml](https://github.com/DataDog/integrations-core/blob/master/kong/datadog_checks/kong/data/conf.yaml.example) for all available configuration options.

1. [Restart the Agent](https://docs.datadoghq.com/agent/guide/agent-commands/#start-stop-and-restart-the-agent).
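To sanity-check what the Agent tails from those files, you can split a line in NGINX's default combined log format, which Kong's access logs use, into its fields. The log line below is a made-up example, not real Kong output:

```python
# Sketch: parse one NGINX combined-format access log line into named fields.
# The sample line is fabricated for illustration.
import re

LOG_RE = re.compile(
    r'(?P<remote_addr>\S+) \S+ \S+ \[(?P<time_local>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<bytes_sent>\d+) '
    r'"(?P<referer>[^"]*)" "(?P<user_agent>[^"]*)"'
)

line = ('203.0.113.7 - - [10/Oct/2024:13:55:36 +0000] '
        '"GET /my-api/items HTTP/1.1" 200 512 "-" "curl/8.0"')

fields = LOG_RE.match(line).groupdict()
print(fields["status"], fields["request"])
```

In practice the Agent's built-in `kong` log pipeline performs this parsing; the snippet only shows the structure your `path` entries should point at.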

{% /tab %}

{% tab title="Containerized" %}
#### Containerized{% #containerized %}

Ensure that your Kong service exposes metrics in the OpenMetrics format by [enabling the Prometheus plugin](https://docs.konghq.com/hub/kong-inc/prometheus/). The Agent cannot collect Kong metrics until this plugin is enabled. For containerized environments, see the [Autodiscovery Integration Templates](https://docs.datadoghq.com/agent/kubernetes/integrations/) for guidance on applying the parameters below.
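As a sketch, in DB-less Kong deployments the plugin can be enabled globally in the declarative configuration file (the exact `_format_version` and file path depend on your Kong version and deployment):

```yaml
# kong.yml (declarative configuration) - enables the Prometheus plugin globally.
# In database-backed deployments, enable it through the Admin API instead, e.g.:
#   curl -X POST http://localhost:8001/plugins --data "name=prometheus"
_format_version: "3.0"
plugins:
  - name: prometheus
```

See Kong's Prometheus plugin documentation for per-service or per-route scoping and the plugin's own configuration options.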

##### Metric collection{% #metric-collection %}

| Parameter            | Value                                                      |
| -------------------- | ---------------------------------------------------------- |
| `<INTEGRATION_NAME>` | `kong`                                                     |
| `<INIT_CONFIG>`      | blank or `{}`                                              |
| `<INSTANCE_CONFIG>`  | `{"openmetrics_endpoint": "http://%%host%%:8001/metrics"}` |
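On Kubernetes, for example, these parameters can be supplied as pod annotations. The snippet below is a sketch: the `kong` segment in each annotation key must match the container's name in your pod spec, and the image tag is illustrative:

```yaml
# Autodiscovery annotations applying the parameters above.
# "kong" in each annotation key must match the container name below.
apiVersion: v1
kind: Pod
metadata:
  name: kong
  annotations:
    ad.datadoghq.com/kong.check_names: '["kong"]'
    ad.datadoghq.com/kong.init_configs: '[{}]'
    ad.datadoghq.com/kong.instances: '[{"openmetrics_endpoint": "http://%%host%%:8001/metrics"}]'
spec:
  containers:
    - name: kong
      image: kong:3.4
```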

##### Log collection{% #log-collection %}

*Available for Agent versions >6.0*

Collecting logs is disabled by default in the Datadog Agent. To enable it, see [Kubernetes log collection documentation](https://docs.datadoghq.com/agent/kubernetes/log/).

| Parameter      | Value                                             |
| -------------- | ------------------------------------------------- |
| `<LOG_CONFIG>` | `{"source": "kong", "service": "<SERVICE_NAME>"}` |
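On Kubernetes, this log configuration can likewise be applied as a pod annotation (a sketch; `kong` in the key must match your container's name, and `<SERVICE_NAME>` is a placeholder for your own service name):

```yaml
# Log collection annotation; "kong" must match the container name in the pod spec.
metadata:
  annotations:
    ad.datadoghq.com/kong.logs: '[{"source": "kong", "service": "<SERVICE_NAME>"}]'
```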

{% /tab %}

### Validation{% #validation %}

[Run the Agent's status subcommand](https://docs.datadoghq.com/agent/guide/agent-commands/#agent-status-and-information) and look for `kong` under the Checks section.

## Data Collected{% #data-collected %}

### Metrics{% #metrics %}

| Metric | Description |
| ------ | ----------- |
| **kong.bandwidth.bytes.count**(count)                | [OpenMetrics V2] (Kong v3+) The total bandwidth in bytes consumed per service/route in Kong*Shown as byte*                                          |
| **kong.bandwidth.count**(count)                      | [OpenMetrics V2] (Kong < 3) The total bandwidth in bytes consumed per service/route in Kong*Shown as byte*                                          |
| **kong.connections\_accepted**(gauge)                | [Legacy] Total number of accepted client connections.*Shown as connection*                                                                          |
| **kong.connections\_active**(gauge)                  | [Legacy] Current number of active client connections including Waiting connections.*Shown as connection*                                            |
| **kong.connections\_handled**(gauge)                 | [Legacy] Total number of handled connections. (Same as accepts unless resource limits were reached).*Shown as connection*                           |
| **kong.connections\_reading**(gauge)                 | [Legacy] Current number of connections where Kong is reading the request header.*Shown as connection*                                               |
| **kong.connections\_waiting**(gauge)                 | [Legacy] Current number of idle client connections waiting for a request.*Shown as connection*                                                      |
| **kong.connections\_writing**(gauge)                 | [Legacy] Current number of connections where nginx is writing the response back to the client.*Shown as connection*                                 |
| **kong.http.consumer.status.count**(count)           | [OpenMetrics V2] (Kong < 3) HTTP status codes per consumer per service/route in Kong*Shown as request*                                              |
| **kong.http.requests.count**(count)                  | [OpenMetrics V2] (Kong v3+) HTTP status codes per service/route in Kong*Shown as request*                                                           |
| **kong.http.status**(count)                          | [OpenMetrics V2] (Kong < 3) HTTP status codes per service/route in Kong*Shown as request*                                                           |
| **kong.kong.latency.ms.bucket**(count)               | [OpenMetrics V2] (Kong v3+) The latency of Kong specifically*Shown as millisecond*                                                                  |
| **kong.kong.latency.ms.count**(count)                | [OpenMetrics V2] (Kong v3+) The latency of Kong specifically*Shown as millisecond*                                                                  |
| **kong.kong.latency.ms.sum**(count)                  | [OpenMetrics V2] (Kong v3+) The latency of Kong specifically*Shown as millisecond*                                                                  |
| **kong.latency.bucket**(count)                       | [OpenMetrics V2] (Kong < 3) The latency added by Kong, total request time and upstream latency for each service/route in Kong*Shown as millisecond* |
| **kong.latency.count**(count)                        | [OpenMetrics V2] (Kong < 3) The latency added by Kong, total request time and upstream latency for each service/route in Kong*Shown as millisecond* |
| **kong.latency.sum**(count)                          | [OpenMetrics V2] (Kong < 3) The latency added by Kong, total request time and upstream latency for each service/route in Kong*Shown as millisecond* |
| **kong.memory.lua.shared\_dict.bytes**(gauge)        | [OpenMetrics V2] The allocated slabs in bytes in a shared_dict*Shown as byte*                                                                       |
| **kong.memory.lua.shared\_dict.total\_bytes**(gauge) | [OpenMetrics V2] The total capacity in bytes of a shared_dict*Shown as byte*                                                                        |
| **kong.memory.workers.lua.vms.bytes**(gauge)         | [OpenMetrics V2] The allocated bytes in worker Lua VM*Shown as byte*                                                                                |
| **kong.nginx.connections.total**(gauge)              | [OpenMetrics V2] (Kong v3+) The number of HTTP and stream connections*Shown as connection*                                                          |
| **kong.nginx.http.current\_connections**(gauge)      | [OpenMetrics V2] (Kong < 3) The number of HTTP connections*Shown as connection*                                                                     |
| **kong.nginx.requests.total**(gauge)                 | [OpenMetrics V2] (Kong v3+) The total number of Nginx requests*Shown as request*                                                                    |
| **kong.nginx.stream.current\_connections**(gauge)    | [OpenMetrics V2] (Kong < 3) The number of stream connections*Shown as connection*                                                                   |
| **kong.nginx.timers**(gauge)                         | [OpenMetrics v2] (Kong v2.8+) Total number of Nginx timers in Running or Pending state.*Shown as item*                                              |
| **kong.request.latency.ms.bucket**(count)            | [OpenMetrics V2] (Kong v3+) The latency added by Kong to requests*Shown as millisecond*                                                             |
| **kong.request.latency.ms.count**(count)             | [OpenMetrics V2] (Kong v3+) The latency added by Kong to requests*Shown as millisecond*                                                             |
| **kong.request.latency.ms.sum**(count)               | [OpenMetrics V2] (Kong v3+) The latency added by Kong to requests*Shown as millisecond*                                                             |
| **kong.session.duration.ms**(count)                  | [OpenMetrics V2] (Kong v3+) The duration of a Kong stream*Shown as millisecond*                                                                     |
| **kong.stream.status.count**(count)                  | [OpenMetrics V2] The stream status codes per service/route in Kong*Shown as request*                                                                |
| **kong.total\_requests**(gauge)                      | [Legacy] Total number of client requests.*Shown as request*                                                                                         |
| **kong.upstream.latency.ms.bucket**(count)           | [OpenMetrics V2] (Kong v3+) The upstream latency added by Kong*Shown as millisecond*                                                                |
| **kong.upstream.latency.ms.count**(count)            | [OpenMetrics V2] (Kong v3+) The upstream latency added by Kong*Shown as millisecond*                                                                |
| **kong.upstream.latency.ms.sum**(count)              | [OpenMetrics V2] (Kong v3+) The upstream latency added by Kong*Shown as millisecond*                                                                |

### Events{% #events %}

The Kong check does not include any events.

### Service Checks{% #service-checks %}

**kong.can\_connect**

Returns `CRITICAL` if the Agent is unable to connect to the Kong instance. Returns `OK` otherwise.

*Statuses: ok, critical*

**kong.openmetrics.health**

Returns `CRITICAL` if the Agent is unable to connect to the OpenMetrics endpoint, otherwise returns `OK`.

*Statuses: ok, critical*

**kong.datastore.reachable**

Returns `CRITICAL` if Kong is unable to connect to the datastore, otherwise returns `OK`.

*Statuses: ok, critical*

**kong.upstream.target.health**

Returns `CRITICAL` if the target is unhealthy, otherwise returns `OK`.

*Statuses: ok, critical*

## Troubleshooting{% #troubleshooting %}

Need help? Contact [Datadog support](https://docs.datadoghq.com/help/).

## Further Reading{% #further-reading %}

- [Monitor Kong with our new Datadog integration](https://www.datadoghq.com/blog/monitor-kong-datadog)
