There are a number of common issues that can get in the way when sending new logs to Datadog via the log collector in the dd-agent. If you experience issues sending new logs to Datadog, this list helps you troubleshoot. If you continue to have trouble, contact Datadog support for further assistance.
Any configuration changes you make to the datadog-agent only take effect after you restart the Datadog Agent.
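On Linux hosts running systemd, for example, the restart typically looks like this (the exact command varies by platform):

sudo systemctl restart datadog-agent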
The Datadog Agent sends its logs to Datadog over TCP via port 10516. If that connection is not available, logs fail to be sent and an error to that effect is recorded in the agent.log file.
Manually test your connection by running a telnet or openssl command like the following (port 10514 also works, but is less secure):
openssl s_client -connect intake.logs.datadoghq.com:10516
telnet intake.logs.datadoghq.com 10514
Then send a log like the following:
<API_KEY> this is a test message
If the connection over port 10516 is blocked, you can use port 443 to forward logs (only available with the Datadog Agent) by adding the following in datadog.yaml:
logs_config:
  use_port_443: true
The Datadog Agent only collects logs that have been written after it has started trying to collect them (whether it be tailing or listening for them). In order to confirm whether log collection has been successfully set up, make sure that new logs have been written.
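For example, append a fresh line to a hypothetical tailed log file after the Agent has started, then confirm it shows up in Datadog:

echo "test log entry $(date)" | sudo tee -a /var/log/application/error.log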
datadog-agent does not run as root (and we do not recommend making it run as root, as a general best practice). For this reason, when you configure your
datadog-agent to tail log files (for custom logs or for integrations) you need to take special care to ensure the
datadog-agent user has read access to tail the log files you want to collect from.
Otherwise, a permission error to that effect appears in the Agent logs. In that case, use the
namei command to obtain more information about the file permissions:
> namei -m /var/log/application/error.log
f: /var/log/application/error.log
 drwxr-xr-x /
 drwxr-xr-x var
 drwxrwxr-x log
 drw-r--r-- application
 -rw-r----- error.log
In this example, the application directory is not executable, therefore the Agent cannot list its files. Furthermore, the Agent does not have read permissions on the error.log file.
Add the missing permissions via the chmod command.
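For the example above, a sketch of the fix might look like this (adjust modes to your own security requirements):

sudo chmod 755 /var/log/application
sudo chmod 644 /var/log/application/error.log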
Note: When adding the appropriate read permissions, make sure that these permissions are correctly set in your log rotation configuration. Otherwise, after the next log rotation, the Datadog Agent may lose its read permissions.
Set permissions to 644 in the log rotation configuration to make sure the Agent has read access to the files.
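For instance, with logrotate, a hypothetical rule that recreates the file with readable permissions could look like:

/var/log/application/error.log {
    weekly
    rotate 4
    create 644 root adm
}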
When collecting logs from journald, make sure that the Datadog Agent user is added in the systemd group as shown in the journald integration.
Note that journald sends an empty payload if the file permissions are incorrect. Accordingly, it is not possible to raise or send an explicit error message in this case.
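As a sketch, on most Linux distributions journal read access is granted through the systemd-journal group (verify the exact group name in the journald integration docs):

sudo usermod -a -G systemd-journal dd-agent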
These are a few of the common configuration issues that are worth triple-checking in your datadog-agent setup:
Run the Agent's status command to spot major configuration issues:
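On a Linux host, the status command is typically:

sudo datadog-agent status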
Check if the api_key is defined in datadog.yaml.
By default, the Agent does not collect any logs. Make sure there is at least one .yaml file in the Agent’s
conf.d/ directory that includes a logs section and the appropriate values.
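A minimal sketch of such a file, assuming a hypothetical conf.d/myapp.d/conf.yaml that tails the example file from earlier:

logs:
  - type: file
    path: /var/log/application/error.log
    service: myapp
    source: custom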
You may have some .yaml parsing errors in your configuration files. YAML can be finicky, so when in doubt, a good YAML validator is worth referencing.
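For a quick local check, a one-liner using Python's PyYAML (assuming it is installed) surfaces parse errors:

python3 -c 'import yaml, sys; yaml.safe_load(open(sys.argv[1]))' /etc/datadog-agent/conf.d/myapp.d/conf.yaml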
Check if you have logs_enabled: true in your datadog.yaml.
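Together, the two datadog.yaml settings above look like:

api_key: <API_KEY>
logs_enabled: true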
There might be an error in the Agent logs that explains the issue. Run the following command and check for errors:
sudo cat /var/log/datadog/agent.log | grep ERROR
In a Docker environment, if the Agent runs directly on the host, make sure the dd-agent user is in the docker group so it can read from the Docker socket:
usermod -a -G docker dd-agent
At least one valid log configuration must be set to start log collection. There are several options to configure log collection; ensure that at least one of them is activated:
DD_LOGS_CONFIG_CONTAINER_COLLECT_ALL=true, which collects logs from all containers (see the documentation on how to exclude a subset, and the docker run sketch after this list)
Autodiscovery via container labels. In this case, ensure that
datadog.yaml has the Docker listener and config provider:
listeners:
  - name: docker
config_providers:
  - name: docker
    polling: true
Autodiscovery via Kubernetes pod annotations. In this case, ensure that datadog.yaml has the kubelet listener and config provider:
listeners:
  - name: kubelet
config_providers:
  - name: kubelet
    polling: true
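For the container-collect-all option above, a minimal sketch of launching a containerized Agent (the image tag and mounts are assumptions that may vary with your setup):

docker run -d --name dd-agent \
  -e DD_API_KEY=<API_KEY> \
  -e DD_LOGS_ENABLED=true \
  -e DD_LOGS_CONFIG_CONTAINER_COLLECT_ALL=true \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  gcr.io/datadoghq/agent:latest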
When using journald in a containerized environment, make sure to follow the instructions in the journald integration, as a specific journal path must be mounted into the Agent container.
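As a sketch, assuming persistent journald storage under /var/log/journal, that directory is mounted read-only into the Agent container:

docker run -d --name dd-agent \
  -e DD_API_KEY=<API_KEY> \
  -e DD_LOGS_ENABLED=true \
  -v /var/log/journal:/var/log/journal:ro \
  gcr.io/datadoghq/agent:latest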
See the Datadog-AWS Log integration to configure your environment. If you still do not see your logs, double-check the following points:
Check the Datadog Lambda function's configuration parameters:
<API_KEY>: Should be set to your Datadog API key, either directly in the Python code or as an environment variable. If you manage several platforms, double-check that you are using the right
<API_KEY> for the right platform.
Check that the Datadog Lambda function is actually triggered by looking at the
aws.lambda.errors metric with the
functionname tag of your Datadog Lambda function within Datadog, or by checking for errors in the Datadog Lambda function's logs in CloudWatch.
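As a sketch, assuming the function reads its key from the DD_API_KEY environment variable, you can set it from the AWS CLI (the function name here is a placeholder):

aws lambda update-function-configuration \
  --function-name datadog-log-forwarder \
  --environment "Variables={DD_API_KEY=<API_KEY>}"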