The Agent’s Kong check tracks total requests, response codes, client connections, and more.
You can also use Kong’s Datadog plugin to send API, connection, and database metrics to Datadog through the Datadog Agent using DogStatsD. Read the Monitor Kong with the Datadog integration blog post for more information.
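For example, the Datadog plugin can be enabled through Kong's declarative configuration. A minimal sketch, assuming the Agent's DogStatsD listener is reachable from Kong at 127.0.0.1:8125 (adjust the host and port for your deployment):

_format_version: "3.0"
plugins:
  - name: datadog
    config:
      host: 127.0.0.1  # address of the Datadog Agent's DogStatsD listener (assumption)
      port: 8125       # default DogStatsD port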
The Kong check is included in the Datadog Agent package, so you don’t need to install anything else on your Kong servers.
To configure this check for an Agent running on a host:
Ensure that OpenMetrics metrics are exposed in your Kong service by enabling the Prometheus plugin. The plugin must be enabled before the Agent can collect Kong metrics.
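If you manage Kong with declarative configuration, enabling the plugin might look like the sketch below. The config flags shown enable the per-service/route metrics the check reads; they exist in recent Kong versions and are an assumption about your version (enabling the plugin with no config still exposes the /metrics endpoint):

_format_version: "3.0"
plugins:
  - name: prometheus
    config:
      # Optional flags for per-service/route metrics (recent Kong versions;
      # an assumption about your version).
      status_code_metrics: true
      latency_metrics: true
      bandwidth_metrics: true
      upstream_health_metrics: true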
Add this configuration block to your kong.d/conf.yaml file in the conf.d/ folder at the root of your Agent's configuration directory to start gathering your Kong metrics. See the sample kong.d/conf.yaml for all available configuration options:
init_config:

instances:
    ## @param openmetrics_endpoint - string - required
    ## The URL exposing metrics in the OpenMetrics format.
    #
  - openmetrics_endpoint: http://localhost:8001/metrics
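The instance also accepts the standard tags parameter if you want every metric from this endpoint tagged consistently; a sketch, where the tag values are placeholders:

instances:
  - openmetrics_endpoint: http://localhost:8001/metrics
    ## Optional: tags applied to every metric collected from this instance.
    tags:
      - env:staging        # example value
      - team:api-gateway   # example value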
Note: The current version of the check (1.17.0+) uses OpenMetrics for metric collection, which requires Python 3. For hosts unable to use Python 3, or to use a legacy version of this check, see the following config.
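For reference, a legacy-style instance points at Kong's status endpoint rather than the OpenMetrics endpoint. A minimal sketch; the kong_status_url parameter name comes from the older check and is an assumption to confirm against the legacy sample config:

init_config:

instances:
    ## Legacy (pre-OpenMetrics) configuration; parameter name is an assumption.
  - kong_status_url: http://localhost:8001/status/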
Available for Agent versions >6.0
Kong access logs are generated by NGINX, so the default location is the same as for NGINX files.
Collecting logs is disabled by default in the Datadog Agent. Enable it in your datadog.yaml file:
logs_enabled: true
Add this configuration block to your kong.d/conf.yaml file to start collecting your Kong logs:
logs:
  - type: file
    path: /var/log/nginx/access.log
    service: '<SERVICE>'
    source: kong
  - type: file
    path: /var/log/nginx/error.log
    service: '<SERVICE>'
    source: kong
Change the path and service parameter values and configure them for your environment. See the sample kong.d/conf.yaml for all available configuration options.
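For example, if your Kong installation writes its NGINX logs under the Kong prefix directory rather than /var/log/nginx, the block might become the following sketch (the paths and service name are assumptions about such a setup):

logs:
  - type: file
    path: /usr/local/kong/logs/access.log  # example path; adjust to your install
    service: kong-gateway                   # example service name
    source: kong
  - type: file
    path: /usr/local/kong/logs/error.log   # example path; adjust to your install
    service: kong-gateway
    source: kong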
Ensure that OpenMetrics metrics are exposed in your Kong service by enabling the Prometheus plugin. The plugin must be enabled before the Agent can collect Kong metrics. For containerized environments, see the Autodiscovery Integration Templates for guidance on applying the parameters below.
Parameter | Value |
---|---|
<INTEGRATION_NAME> | kong |
<INIT_CONFIG> | blank or {} |
<INSTANCE_CONFIG> | {"openmetrics_endpoint": "http://%%host%%:8001/metrics"} |
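On Kubernetes, for example, these parameters can be supplied as Autodiscovery pod annotations. A minimal sketch, assuming the Kong container in the pod is named kong and the image tag is only illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: kong
  annotations:
    # The container identifier in each annotation ("kong") must match the
    # container name declared below.
    ad.datadoghq.com/kong.check_names: '["kong"]'
    ad.datadoghq.com/kong.init_configs: '[{}]'
    ad.datadoghq.com/kong.instances: '[{"openmetrics_endpoint": "http://%%host%%:8001/metrics"}]'
spec:
  containers:
    - name: kong
      image: kong:3.4  # example image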
Available for Agent versions >6.0
Collecting logs is disabled by default in the Datadog Agent. To enable it, see Kubernetes log collection documentation.
Parameter | Value |
---|---|
<LOG_CONFIG> | {"source": "kong", "service": "<SERVICE_NAME>"} |
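As with metrics, the log configuration can be applied as a pod annotation on the Kong container; a sketch assuming the container is named kong:

metadata:
  annotations:
    ad.datadoghq.com/kong.logs: '[{"source": "kong", "service": "<SERVICE_NAME>"}]'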
Run the Agent's status subcommand and look for kong under the Checks section.
Metric | Description |
---|---|
kong.bandwidth.bytes.count (count) | [OpenMetrics V2] (Kong v3+) The total bandwidth in bytes consumed per service/route in Kong Shown as byte |
kong.bandwidth.count (count) | [OpenMetrics V2] (Kong < 3) The total bandwidth in bytes consumed per service/route in Kong Shown as byte |
kong.connections_accepted (gauge) | [Legacy] Total number of accepted client connections. Shown as connection |
kong.connections_active (gauge) | [Legacy] Current number of active client connections including Waiting connections. Shown as connection |
kong.connections_handled (gauge) | [Legacy] Total number of handled connections. (Same as accepts unless resource limits were reached). Shown as connection |
kong.connections_reading (gauge) | [Legacy] Current number of connections where Kong is reading the request header. Shown as connection |
kong.connections_waiting (gauge) | [Legacy] Current number of idle client connections waiting for a request. Shown as connection |
kong.connections_writing (gauge) | [Legacy] Current number of connections where nginx is writing the response back to the client. Shown as connection |
kong.http.consumer.status.count (count) | [OpenMetrics V2] (Kong < 3) HTTP status codes per consumer per service/route in Kong Shown as request |
kong.http.requests.count (count) | [OpenMetrics V2] (Kong v3+) HTTP status codes per service/route in Kong Shown as request |
kong.http.status (count) | [OpenMetrics V2] (Kong < 3) HTTP status codes per service/route in Kong Shown as request |
kong.kong.latency.ms.bucket (count) | [OpenMetrics V2] (Kong v3+) The latency of Kong specifically Shown as millisecond |
kong.kong.latency.ms.count (count) | [OpenMetrics V2] (Kong v3+) The latency of Kong specifically Shown as millisecond |
kong.kong.latency.ms.sum (count) | [OpenMetrics V2] (Kong v3+) The latency of Kong specifically Shown as millisecond |
kong.latency.bucket (count) | [OpenMetrics V2] (Kong < 3) The latency added by Kong, total request time and upstream latency for each service/route in Kong Shown as millisecond |
kong.latency.count (count) | [OpenMetrics V2] (Kong < 3) The latency added by Kong, total request time and upstream latency for each service/route in Kong Shown as millisecond |
kong.latency.sum (count) | [OpenMetrics V2] (Kong < 3) The latency added by Kong, total request time and upstream latency for each service/route in Kong Shown as millisecond |
kong.memory.lua.shared_dict.bytes (gauge) | [OpenMetrics V2] The allocated slabs in bytes in a shared_dict Shown as byte |
kong.memory.lua.shared_dict.total_bytes (gauge) | [OpenMetrics V2] The total capacity in bytes of a shared_dict Shown as byte |
kong.memory.workers.lua.vms.bytes (gauge) | [OpenMetrics V2] The allocated bytes in worker Lua VM Shown as byte |
kong.nginx.connections.total (gauge) | [OpenMetrics V2] (Kong v3+) The number of HTTP and stream connections Shown as connection |
kong.nginx.http.current_connections (gauge) | [OpenMetrics V2] (Kong < 3) The number of HTTP connections Shown as connection |
kong.nginx.requests.total (gauge) | [OpenMetrics V2] (Kong v3+) The total number of Nginx requests Shown as request |
kong.nginx.stream.current_connections (gauge) | [OpenMetrics V2] (Kong < 3) The number of stream connections Shown as connection |
kong.nginx.timers (gauge) | [OpenMetrics v2] (Kong v2.8+) Total number of Nginx timers in Running or Pending state. Shown as item |
kong.request.latency.ms.bucket (count) | [OpenMetrics V2] (Kong v3+) The latency added by Kong to requests Shown as millisecond |
kong.request.latency.ms.count (count) | [OpenMetrics V2] (Kong v3+) The latency added by Kong to requests Shown as millisecond |
kong.request.latency.ms.sum (count) | [OpenMetrics V2] (Kong v3+) The latency added by Kong to requests Shown as millisecond |
kong.session.duration.ms (count) | [OpenMetrics V2] (Kong v3+) The duration of a Kong stream Shown as millisecond |
kong.stream.status.count (count) | [OpenMetrics V2] The stream status codes per service/route in Kong Shown as request |
kong.total_requests (gauge) | [Legacy] Total number of client requests. Shown as request |
kong.upstream.latency.ms.bucket (count) | [OpenMetrics V2] (Kong v3+) The upstream latency added by Kong Shown as millisecond |
kong.upstream.latency.ms.count (count) | [OpenMetrics V2] (Kong v3+) The upstream latency added by Kong Shown as millisecond |
kong.upstream.latency.ms.sum (count) | [OpenMetrics V2] (Kong v3+) The upstream latency added by Kong Shown as millisecond |
The Kong check does not include any events.
kong.can_connect
Returns CRITICAL if the Agent is unable to connect to the Kong instance. Returns OK otherwise.
Statuses: ok, critical

kong.openmetrics.health
Returns CRITICAL if the Agent is unable to connect to the OpenMetrics endpoint, otherwise returns OK.
Statuses: ok, critical

kong.datastore.reachable
Returns CRITICAL if Kong is unable to connect to the datastore, otherwise returns OK.
Statuses: ok, critical

kong.upstream.target.health
Returns CRITICAL if the target is unhealthy, otherwise returns OK.
Statuses: ok, critical
Need help? Contact Datadog support.