Get metrics from Fluentd to:
- Visualize Fluentd performance.
- Correlate the performance of Fluentd with the rest of your applications.
The Fluentd check is included in the Datadog Agent package, so you don’t need to install anything else on your Fluentd servers.
In your Fluentd configuration file, add a monitor_agent source:
<source>
  @type monitor_agent
  bind 0.0.0.0
  port 24220
</source>
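The monitor_agent endpoint exposes plugin metrics over HTTP, which is what the Agent check polls. If the Agent runs on the same host as Fluentd, you can optionally bind the endpoint to the loopback interface so it is not reachable from other machines. This variation is a sketch, not part of the default setup; the monitor_agent_url in the Agent configuration must then point at localhost:

<source>
  @type monitor_agent
  # Optional hardening: only accept local connections when the Agent
  # runs on the same host (sketch; adjust monitor_agent_url to match).
  bind 127.0.0.1
  port 24220
</source>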
To configure this check for an Agent running on a host:
Edit the fluentd.d/conf.yaml file, in the conf.d/ folder at the root of your Agent’s configuration directory, to start collecting your Fluentd metrics. See the sample fluentd.d/conf.yaml for all available configuration options.
init_config:

instances:
  ## @param monitor_agent_url - string - required
  ## Monitor Agent URL to connect to.
  #
  - monitor_agent_url: http://example.com:24220/api/plugins.json
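Only monitor_agent_url is required. As with other Agent checks, instance-level tags can be added to label the metrics and service checks emitted by this instance; the tag values below are illustrative:

init_config:

instances:
  - monitor_agent_url: http://example.com:24220/api/plugins.json
    # Optional: custom tags attached to every metric and service check
    # from this instance (illustrative values).
    tags:
      - env:staging
      - service:fluentd

Restart the Agent after saving the file so the new configuration is picked up.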
You can use the Datadog FluentD plugin to forward the logs directly from FluentD to your Datadog account.
Proper metadata (including hostname and source) is the key to unlocking the full potential of your logs in Datadog. By default, the hostname and timestamp fields are remapped automatically through the remapping for reserved attributes.
Add the ddsource attribute with the name of the log integration in your logs to trigger the automatic integration setup in Datadog.
Host tags are automatically set on your logs if there is a matching hostname in your infrastructure list. Use the ddtags attribute to add custom tags to your logs:
Setup Example:
# Match events tagged with "datadog.**" and
# send them to Datadog
<match datadog.**>
  @type datadog
  @id awesome_agent
  api_key <your_api_key>

  # Optional
  include_tag_key true
  tag_key 'tag'

  # Optional tags
  dd_source '<INTEGRATION_NAME>'
  dd_tags '<KEY1:VALUE1>,<KEY2:VALUE2>'

  <buffer>
    @type memory
    flush_thread_count 4
    flush_interval 3s
    chunk_limit_size 5m
    chunk_limit_records 500
  </buffer>
</match>
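The match block only receives events whose tag begins with datadog.; how those events are produced is up to your pipeline. The sketch below is illustrative rather than part of the plugin documentation: it tails an application log file (hypothetical paths) and tags the records so that the match above forwards them:

# Illustrative source: tail an application log and tag it "datadog.myapp"
# so the <match datadog.**> block above forwards it to Datadog.
<source>
  @type tail
  path /var/log/myapp/app.log
  pos_file /var/log/td-agent/myapp.log.pos
  tag datadog.myapp
  <parse>
    @type none
  </parse>
</source>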
By default, the plugin is configured to send logs through HTTPS (port 443) using gzip compression. You can change this behavior by using the following parameters:
- use_http: Set this to false if you want to use TCP forwarding and update the host and port accordingly (default is true).
- use_compression: Compression is only available for HTTP. Disable it by setting this to false (default is true).
- compression_level: Set the compression level for HTTP. The range is from 1 to 9, 9 being the best ratio (default is 6).

Additional parameters can be used to change the endpoint in order to go through a proxy:

- host: The proxy endpoint for logs not directly forwarded to Datadog (default value: http-intake.logs.datadoghq.com).
- port: The proxy port for logs not directly forwarded to Datadog (default value: 80).
- ssl_port: The port used for logs forwarded with a secure TCP/SSL connection to Datadog (default value: 443).
- use_ssl: Instructs the Agent to initialize a secure TCP/SSL connection to Datadog (default value: true).
- no_ssl_validation: Disables SSL hostname validation (default value: false).

Note: Set host and port to your region.
<match datadog.**>
  #...
  host 'http-intake.logs.datadoghq.eu'
</match>
Datadog tags are critical for jumping from one part of the product to another. Having the right metadata associated with your logs is therefore important for moving from a container view, or any container metrics, to the most related logs.
If your logs contain any of the following attributes, these attributes are automatically added as Datadog tags on your logs:
kubernetes.container_image
kubernetes.container_name
kubernetes.namespace_name
kubernetes.pod_name
docker.container_id
While the Datadog Agent collects Docker and Kubernetes metadata automatically, FluentD requires a plugin for this. Datadog recommends using fluent-plugin-kubernetes_metadata_filter to collect this metadata.
Configuration example:
# Collect metadata for logs tagged with "kubernetes.**"
<filter kubernetes.**>
  @type kubernetes_metadata
</filter>
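Once the filter has enriched the records, the same Datadog output plugin shown earlier can forward the Kubernetes-tagged events. The block below is a sketch; the @id and dd_source values are placeholders to adapt to your setup:

# Forward the enriched Kubernetes logs to Datadog (sketch)
<match kubernetes.**>
  @type datadog
  @id datadog_kubernetes_logs
  api_key <your_api_key>
  dd_source 'kubernetes'
</match>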
For containerized environments, see the Autodiscovery Integration Templates for guidance on applying the parameters below.
Parameter | Value |
---|---|
<INTEGRATION_NAME> | fluentd |
<INIT_CONFIG> | blank or {} |
<INSTANCE_CONFIG> | {"monitor_agent_url": "http://%%host%%:24220/api/plugins.json"} |
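For example, on Kubernetes these parameters can be supplied as pod annotations. The manifest below is a minimal sketch assuming annotation-based Autodiscovery and a container named fluentd; the image tag is illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: fluentd
  annotations:
    ad.datadoghq.com/fluentd.check_names: '["fluentd"]'
    ad.datadoghq.com/fluentd.init_configs: '[{}]'
    ad.datadoghq.com/fluentd.instances: '[{"monitor_agent_url": "http://%%host%%:24220/api/plugins.json"}]'
spec:
  containers:
    - name: fluentd
      image: fluent/fluentd:v1.16-1  # illustrative image tag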
Run the Agent’s status subcommand and look for fluentd under the Checks section.
Metric | Description |
---|---|
fluentd.buffer_available_buffer_space_ratios (gauge) | Available space in the buffer |
fluentd.buffer_queue_byte_size (gauge) | Current byte size of queued buffer chunks. Shown as byte |
fluentd.buffer_queue_length (gauge) | The length of the buffer queue for this plugin. Shown as buffer |
fluentd.buffer_stage_byte_size (gauge) | Current byte size of staged buffer chunks. Shown as byte |
fluentd.buffer_stage_length (gauge) | The length of staged buffer chunks |
fluentd.buffer_total_queued_size (gauge) | The size of the buffer queue for this plugin. Shown as byte |
fluentd.emit_count (gauge) | The total number of emit calls in the output plugin. Shown as unit |
fluentd.emit_records (gauge) | The total number of emitted records. Shown as record |
fluentd.flush_time_count (gauge) | The total time spent flushing the buffer, in milliseconds. Shown as millisecond |
fluentd.retry_count (gauge) | The number of retries for this plugin. Shown as time |
fluentd.rollback_count (gauge) | The total number of rollbacks; a rollback happens when write/try_write fails. Shown as unit |
fluentd.slow_flush_count (gauge) | The total number of slow flushes; incremented when a buffer flush takes longer than slow_flush_log_threshold. Shown as unit |
fluentd.write_count (gauge) | The total number of write/try_write calls in the output plugin. Shown as unit |
The FluentD check does not include any events.
fluentd.is_ok
Returns OK if Fluentd and its monitor agent are running, CRITICAL otherwise.
Statuses: ok, critical
Need help? Contact Datadog support.