Kong

Supported OS: Linux, Windows, macOS

Integration version 5.0.0

Overview

The Agent’s Kong check tracks total requests, response codes, client connections, and more.

You can also use Kong’s Datadog plugin to send API, connection, and database metrics to Datadog through the Datadog Agent using DogStatsD. Read the Monitor Kong with the Datadog integration blog post for more information.
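For reference, Kong's Datadog plugin can be enabled through Kong's Admin API. The following is a minimal sketch, assuming the Admin API listens on localhost:8001 and the Agent's DogStatsD server is reachable at its default address 127.0.0.1:8125:

    curl -X POST http://localhost:8001/plugins \
      --data "name=datadog" \
      --data "config.host=127.0.0.1" \
      --data "config.port=8125"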

Setup

Installation

The Kong check is included in the Datadog Agent package, so you don’t need to install anything else on your Kong servers.

Configuration

Host

To configure this check for an Agent running on a host:

Metric collection
  1. Ensure that your Kong service exposes metrics in the OpenMetrics format by enabling the Prometheus plugin. The plugin must be enabled before the Agent can collect Kong metrics (see the sketch after these steps).

  2. Add this configuration block to your kong.d/conf.yaml file in the conf.d/ folder at the root of your Agent’s configuration directory to start gathering your Kong metrics. See the sample kong.d/conf.yaml for all available configuration options:

    init_config:
    
    instances:
      ## @param openmetrics_endpoint - string - required
      ## The URL exposing metrics in the OpenMetrics format.
      #
      - openmetrics_endpoint: http://localhost:8001/metrics
    
  3. Restart the Agent.

Note: The current version of the check (1.17.0+) uses OpenMetrics for metric collection, which requires Python 3. For hosts unable to use Python 3, or to use a legacy version of this check, see the following config.
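As referenced in step 1, the Prometheus plugin can be enabled globally through Kong's Admin API. A minimal sketch, assuming the Admin API listens on localhost:8001 and the Agent runs under systemd:

    # Enable the Prometheus plugin so Kong exposes /metrics on the Admin API
    curl -X POST http://localhost:8001/plugins --data "name=prometheus"

    # Restart the Agent after updating kong.d/conf.yaml (step 3)
    sudo systemctl restart datadog-agent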

Log collection

Available for Agent versions >6.0

Kong access logs are generated by NGINX, so the default location is the same as for NGINX files.

  1. Collecting logs is disabled by default in the Datadog Agent. Enable it in your datadog.yaml file:

    logs_enabled: true
    
  2. Add this configuration block to your kong.d/conf.yaml file to start collecting your Kong logs (a combined example follows these steps):

    logs:
      - type: file
        path: /var/log/nginx/access.log
        service: '<SERVICE>'
        source: kong
    
      - type: file
        path: /var/log/nginx/error.log
        service: '<SERVICE>'
        source: kong
    

    Change the path and service parameter values to match your environment. See the sample kong.d/conf.yaml for all available configuration options.

  3. Restart the Agent.
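Putting the metric and log blocks together, a minimal combined kong.d/conf.yaml might look like the following sketch, assuming the default OpenMetrics endpoint and NGINX log locations shown above, with kong used as a placeholder service name:

    init_config:

    instances:
      ## Metrics scraped from the Prometheus plugin endpoint
      - openmetrics_endpoint: http://localhost:8001/metrics

    logs:
      ## NGINX access and error logs written by Kong
      - type: file
        path: /var/log/nginx/access.log
        service: kong
        source: kong
      - type: file
        path: /var/log/nginx/error.log
        service: kong
        source: kong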

Containerized

Ensure that your Kong service exposes metrics in the OpenMetrics format by enabling the Prometheus plugin. The plugin must be enabled before the Agent can collect Kong metrics. For containerized environments, see the Autodiscovery Integration Templates for guidance on applying the parameters below.

Metric collection
Parameter            Value
<INTEGRATION_NAME>   kong
<INIT_CONFIG>        blank or {}
<INSTANCE_CONFIG>    {"openmetrics_endpoint": "http://%%host%%:8001/metrics"}
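As a sketch, these parameters map to Datadog Autodiscovery pod annotations such as the following, assuming the Kong container in the pod is named kong:

    ad.datadoghq.com/kong.check_names: '["kong"]'
    ad.datadoghq.com/kong.init_configs: '[{}]'
    ad.datadoghq.com/kong.instances: '[{"openmetrics_endpoint": "http://%%host%%:8001/metrics"}]'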
Log collection

Available for Agent versions >6.0

Collecting logs is disabled by default in the Datadog Agent. To enable it, see the Kubernetes log collection documentation.

Parameter       Value
<LOG_CONFIG>    {"source": "kong", "service": "<SERVICE_NAME>"}
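Expressed as a pod annotation, again assuming a container named kong, this corresponds to:

    ad.datadoghq.com/kong.logs: '[{"source": "kong", "service": "<SERVICE_NAME>"}]'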

Validation

Run the Agent’s status subcommand and look for kong under the Checks section.
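For example, on a Linux host installed from the official packages:

    sudo datadog-agent status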

Data Collected

Metrics

kong.bandwidth.bytes.count
(count)
[OpenMetrics V2] (Kong v3+) The total bandwidth in bytes consumed per service/route in Kong
Shown as byte
kong.bandwidth.count
(count)
[OpenMetrics V2] (Kong < 3) The total bandwidth in bytes consumed per service/route in Kong
Shown as byte
kong.connections_accepted
(gauge)
[Legacy] Total number of accepted client connections.
Shown as connection
kong.connections_active
(gauge)
[Legacy] Current number of active client connections including Waiting connections.
Shown as connection
kong.connections_handled
(gauge)
[Legacy] Total number of handled connections. (Same as accepts unless resource limits were reached).
Shown as connection
kong.connections_reading
(gauge)
[Legacy] Current number of connections where Kong is reading the request header.
Shown as connection
kong.connections_waiting
(gauge)
[Legacy] Current number of idle client connections waiting for a request.
Shown as connection
kong.connections_writing
(gauge)
[Legacy] Current number of connections where nginx is writing the response back to the client.
Shown as connection
kong.http.consumer.status.count
(count)
[OpenMetrics V2] (Kong < 3) HTTP status codes per consumer and service/route in Kong
Shown as request
kong.http.requests.count
(count)
[OpenMetrics V2] (Kong v3+) HTTP status codes per service/route in Kong
Shown as request
kong.http.status
(count)
[OpenMetrics V2] (Kong < 3) HTTP status codes per service/route in Kong
Shown as request
kong.kong.latency.ms.bucket
(count)
[OpenMetrics V2] (Kong v3+) The latency of Kong specifically
Shown as millisecond
kong.kong.latency.ms.count
(count)
[OpenMetrics V2] (Kong v3+) The latency of Kong specifically
Shown as millisecond
kong.kong.latency.ms.sum
(count)
[OpenMetrics V2] (Kong v3+) The latency of Kong specifically
Shown as millisecond
kong.latency.bucket
(count)
[OpenMetrics V2] (Kong < 3) The latency added by Kong, total request time and upstream latency for each service/route in Kong
Shown as millisecond
kong.latency.count
(count)
[OpenMetrics V2] (Kong < 3) The latency added by Kong, total request time and upstream latency for each service/route in Kong
Shown as millisecond
kong.latency.sum
(count)
[OpenMetrics V2] (Kong < 3) The latency added by Kong, total request time and upstream latency for each service/route in Kong
Shown as millisecond
kong.memory.lua.shared_dict.bytes
(gauge)
[OpenMetrics V2] The allocated slabs in bytes in a shared_dict
Shown as byte
kong.memory.lua.shared_dict.total_bytes
(gauge)
[OpenMetrics V2] The total capacity in bytes of a shared_dict
Shown as byte
kong.memory.workers.lua.vms.bytes
(gauge)
[OpenMetrics V2] The allocated bytes in worker Lua VM
Shown as byte
kong.nginx.connections.total
(gauge)
[OpenMetrics V2] (Kong v3+) The number of HTTP and stream connections
Shown as connection
kong.nginx.http.current_connections
(gauge)
[OpenMetrics V2] (Kong < 3) The number of HTTP connections
Shown as connection
kong.nginx.requests.total
(gauge)
[OpenMetrics V2] (Kong v3+) The total number of Nginx requests
Shown as request
kong.nginx.stream.current_connections
(gauge)
[OpenMetrics V2] (Kong < 3) The number of stream connections
Shown as connection
kong.nginx.timers
(gauge)
[OpenMetrics V2] (Kong v2.8+) Total number of Nginx timers in Running or Pending state.
Shown as item
kong.request.latency.ms.bucket
(count)
[OpenMetrics V2] (Kong v3+) The latency added by Kong to requests
Shown as millisecond
kong.request.latency.ms.count
(count)
[OpenMetrics V2] (Kong v3+) The latency added by Kong to requests
Shown as millisecond
kong.request.latency.ms.sum
(count)
[OpenMetrics V2] (Kong v3+) The latency added by Kong to requests
Shown as millisecond
kong.session.duration.ms
(count)
[OpenMetrics V2] (Kong v3+) The duration of a Kong stream
Shown as millisecond
kong.stream.status.count
(count)
[OpenMetrics V2] The stream status codes per service/route in Kong
Shown as request
kong.total_requests
(gauge)
[Legacy] Total number of client requests.
Shown as request
kong.upstream.latency.ms.bucket
(count)
[OpenMetrics V2] (Kong v3+) The latency added by the upstream response for each service/route in Kong
Shown as millisecond
kong.upstream.latency.ms.count
(count)
[OpenMetrics V2] (Kong v3+) The latency added by the upstream response for each service/route in Kong
Shown as millisecond
kong.upstream.latency.ms.sum
(count)
[OpenMetrics V2] (Kong v3+) The latency added by the upstream response for each service/route in Kong
Shown as millisecond

Events

The Kong check does not include any events.

Service Checks

kong.can_connect
Returns CRITICAL if the Agent is unable to connect to the Kong instance. Returns OK otherwise.
Statuses: ok, critical

kong.openmetrics.health
Returns CRITICAL if the Agent is unable to connect to the OpenMetrics endpoint, otherwise returns OK.
Statuses: ok, critical

kong.datastore.reachable
Returns CRITICAL if Kong is unable to connect to the datastore, otherwise returns OK.
Statuses: ok, critical

kong.upstream.target.health
Returns CRITICAL if the target is unhealthy, otherwise returns OK.
Statuses: ok, critical

Troubleshooting

Need help? Contact Datadog support.

Further Reading