Log Collection & Integrations

Overview

Choose a configuration option below to begin ingesting your logs. If you are already using a log-shipper daemon, refer to the dedicated documentation for Rsyslog, Syslog-ng, NXLog, Fluentd, or Logstash.
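For example, forwarding from rsyslog amounts to prefixing each record with your API key and pointing an action at a Datadog TCP intake. The file path below is hypothetical and the template is a minimal sketch; consult the dedicated Rsyslog documentation for the full recommended format:

```
# /etc/rsyslog.d/datadog.conf (hypothetical path)
# The template prefixes each record with your API key, as the TCP intake expects.
$template DatadogFormat,"<DATADOG_API_KEY> %syslogtag%%msg%\n"
# '@@' selects TCP; 10514 is the unencrypted intake port listed below.
*.* @@intake.logs.datadoghq.com:10514;DatadogFormat
```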

Consult the list of available Datadog log collection endpoints if you want to send your logs directly to Datadog.

Note: When sending logs in a JSON format to Datadog, there is a set of reserved attributes that have a specific meaning within Datadog. See the Reserved Attributes section to learn more.

Setup

Follow the Datadog Agent installation instructions to start forwarding logs alongside your metrics and traces. The Agent can tail log files or listen for logs sent over UDP/TCP, and you can configure it to filter out logs, scrub sensitive data, or aggregate multi-line logs.
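As an illustration, once log collection is enabled in `datadog.yaml` (`logs_enabled: true`), a custom file source is declared in a `conf.d/<APP_NAME>.d/conf.yaml` file. The application name, path, and multi-line pattern below are hypothetical:

```yaml
logs:
  - type: file
    path: /var/log/myapp/app.log   # hypothetical path
    service: myapp                 # hypothetical service name
    source: custom
    log_processing_rules:
      # Treat any line that does not start with a date as part of the
      # previous event (for example, a stack trace).
      - type: multi_line
        name: new_log_start_with_date
        pattern: \d{4}-\d{2}-\d{2}
```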

After you have enabled log collection, configure logging for your application's language:

Note: JSON-formatted logging helps handle multi-line application logs. JSON-formatted logs are automatically parsed by Datadog. If you have control over the log format you send to Datadog, it is recommended that you format logs as JSON to avoid the need for custom parsing rules.
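For instance, here is a minimal sketch of JSON-formatted logging that uses only the Python standard library; the field names echo the reserved attributes mentioned above, and the service name is hypothetical:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Serialize each log record as a single-line JSON object."""
    def format(self, record):
        return json.dumps({
            "timestamp": self.formatTime(record),
            "status": record.levelname,
            "logger": {"name": record.name, "thread_name": record.threadName},
            "message": record.getMessage(),
            "service": "my-service",  # hypothetical service name
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logging.basicConfig(level=logging.INFO, handlers=[handler])

logging.getLogger(__name__).info("user signed in")
```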

The Datadog Agent can collect logs directly from container stdout/stderr without using a logging driver. When the Agent's Docker check is enabled, container and orchestrator metadata are automatically added as tags to your logs. You can collect logs from all your containers or from a subset filtered by container image, label, or name. Autodiscovery can also be used to configure log collection directly in the container labels. In Kubernetes environments, you can also leverage the DaemonSet installation.
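For example, with Docker Autodiscovery, the log configuration can ride along as a container label; the image and values below are illustrative:

```yaml
# docker-compose.yml (illustrative)
services:
  web:
    image: nginx:latest
    labels:
      # Tell the Agent which source (integration pipeline) and service to apply.
      com.datadoghq.ad.logs: '[{"source": "nginx", "service": "webapp"}]'
```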

Choose your environment below to get dedicated log collection instructions:


Datadog collects logs from AWS Lambda. To enable this, refer to the serverless monitoring documentation.

Select your Cloud provider below to see how to automatically collect your logs and forward them to Datadog:


Datadog integrations and log collection are tied together. Use an integration's default configuration file to enable its dedicated processors, parsing, and facets in Datadog.

Consult the list of available supported integrations.
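For instance, tailing NGINX logs with `source: nginx` activates the corresponding integration pipeline and facets. The paths below are common defaults but may differ on your system:

```yaml
# conf.d/nginx.d/conf.yaml (sketch) — 'source' selects the integration pipeline.
logs:
  - type: file
    path: /var/log/nginx/access.log   # adjust to your install
    service: nginx
    source: nginx
  - type: file
    path: /var/log/nginx/error.log    # adjust to your install
    service: nginx
    source: nginx
```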

Additional configuration options

Logging endpoints

Datadog provides logging endpoints for both SSL-encrypted connections and unencrypted connections. Use the encrypted endpoint when possible. The Datadog Agent uses the encrypted endpoint to send logs to Datadog. More information is available in the Datadog security documentation.

The following endpoints can be used to send logs to Datadog over SSL-encrypted connections:

| Endpoint | Port | Description |
|----------|------|-------------|
| `agent-intake.logs.datadoghq.com` | 10516 | Used by the Agent to send logs in protobuf format over an SSL-encrypted TCP connection. |
| `agent-http-intake.logs.datadoghq.com` | 443 | Used by the Agent to send logs in JSON format over HTTPS. See the Host Agent Log collection documentation. |
| `http-intake.logs.datadoghq.com` | 443 | Used by custom forwarders to send logs in JSON or plain text format over HTTPS. See the Logs HTTP API documentation. |
| `intake.logs.datadoghq.com` | 10516 | Used by custom forwarders to send logs in raw, Syslog, or JSON format over an SSL-encrypted TCP connection. |
| `lambda-intake.logs.datadoghq.com` | 10516 | Used by Lambda functions to send logs in raw, Syslog, or JSON format over an SSL-encrypted TCP connection. |
| `lambda-http-intake.logs.datadoghq.com` | 443 | Used by Lambda functions to send logs in raw, Syslog, or JSON format over HTTPS. |
| `functions-intake.logs.datadoghq.com` | 10516 | Used by Azure functions to send logs in raw, Syslog, or JSON format over an SSL-encrypted TCP connection. Note: This endpoint may be useful with other cloud providers. |
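Because telnet cannot negotiate TLS, an encrypted TCP endpoint is easier to exercise with `openssl s_client`; the following is only a quick manual check:

```sh
# Open an SSL-encrypted TCP connection to the intake, then type a test log line.
openssl s_client -quiet -connect intake.logs.datadoghq.com:10516
<DATADOG_API_KEY> Log sent over the encrypted TCP endpoint
```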

The following endpoint can be used to send logs to Datadog over unencrypted connections:

| Endpoint | Port | Description |
|----------|------|-------------|
| `intake.logs.datadoghq.com` | 10514 | Used by custom forwarders to send logs in raw, Syslog, or JSON format over an unencrypted TCP connection. |

Custom log forwarding

Any custom process or logging library able to forward logs through TCP or HTTP can be used in conjunction with Datadog Logs.

You can send logs to the Datadog platform over HTTP. Refer to the Datadog Log HTTP API documentation to get started.
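As a quick illustration, a single JSON log event can be submitted with curl. The path below follows the v2 logs intake described in the API documentation; verify it against the current reference:

```sh
# Submit one JSON-formatted log event over HTTPS (v2 intake).
curl -X POST "https://http-intake.logs.datadoghq.com/api/v2/logs" \
  -H "Content-Type: application/json" \
  -H "DD-API-KEY: <DATADOG_API_KEY>" \
  -d '[{"message": "hello from curl", "ddsource": "my-integration", "service": "my-service", "hostname": "my-hostname"}]'
```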

For the Datadog US site, the secure TCP endpoint is `intake.logs.datadoghq.com:10516` (or port 10514 for insecure connections).

You must prefix the log entry with your [Datadog API Key][1], for example:

```
<DATADOG_API_KEY> <PAYLOAD>
```

Note: <PAYLOAD> can be in raw, Syslog, or JSON format.

Test it manually with telnet. Example of <PAYLOAD> in raw format:

```
telnet intake.logs.datadoghq.com 10514
<DATADOG_API_KEY> Log sent directly via TCP
```

The log then appears in your [live tail page][2].

If the <PAYLOAD> is in JSON format, Datadog automatically parses its attributes:

```
telnet intake.logs.datadoghq.com 10514
<DATADOG_API_KEY> {"message":"json formatted log", "ddtags":"env:my-env,user:my-user", "ddsource":"my-integration", "hostname":"my-hostname", "service":"my-service"}
```
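The same exchange can be scripted. Below is a minimal Python sketch of a custom TCP forwarder for the encrypted US endpoint, assuming one API-key-prefixed, newline-terminated frame per log event:

```python
import json
import socket
import ssl

API_KEY = "<DATADOG_API_KEY>"
HOST, PORT = "intake.logs.datadoghq.com", 10516  # SSL-encrypted TCP intake (US site)

def send_log(payload: dict) -> None:
    """Send one API-key-prefixed, newline-terminated JSON log event."""
    frame = f"{API_KEY} {json.dumps(payload)}\n".encode("utf-8")
    context = ssl.create_default_context()
    with socket.create_connection((HOST, PORT)) as sock:
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            tls.sendall(frame)

send_log({"message": "json formatted log", "service": "my-service"})
```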

For the Datadog EU site, the secure TCP endpoint is `tcp-intake.logs.datadoghq.eu:443` (or port 1883 for insecure connections).

You must prefix the log entry with your [Datadog API Key][1], for example:

```
<DATADOG_API_KEY> <PAYLOAD>
```

Note: <PAYLOAD> can be in raw, Syslog, or JSON format.

Test it manually with telnet. Example of <PAYLOAD> in raw format:

```
telnet tcp-intake.logs.datadoghq.eu 1883
<DATADOG_API_KEY> Log sent directly via TCP
```

The log then appears in your [live tail page][2].

If the <PAYLOAD> is in JSON format, Datadog automatically parses its attributes:

```
telnet tcp-intake.logs.datadoghq.eu 1883
<DATADOG_API_KEY> {"message":"json formatted log", "ddtags":"env:my-env,user:my-user", "ddsource":"my-integration", "hostname":"my-hostname", "service":"my-service"}
```

For other Datadog sites, a TCP endpoint is not supported; send logs over HTTPS instead.

Notes:

  • The HTTPS API supports logs of sizes up to 1 MB. However, for optimal performance, it is recommended that an individual log be no greater than 25 KB. If you use the Datadog Agent for logging, it is configured to split a log at 256 kB (256,000 bytes).
  • A log event should have no more than 100 tags, and each tag should not exceed 256 characters, for a maximum of 10 million unique tags per day.
  • A log event converted to JSON format should contain fewer than 256 attributes. Each attribute key should be less than 50 characters, nested in fewer than 10 successive levels, and its value should be less than 1024 characters if promoted as a facet.
  • Log events can be submitted up to 18 hours in the past and 2 hours in the future.

Log events that do not comply with these limits might be transformed or truncated by the system or not indexed if outside the provided time range. However, Datadog tries to preserve as much user data as possible.
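As a rough client-side guard against the per-event recommendation above, a forwarder can measure the encoded size before submission; a minimal sketch:

```python
MAX_RECOMMENDED_BYTES = 25_000  # recommended per-event size from the limits above

def within_recommended_size(payload: str) -> bool:
    """Return True if the UTF-8 encoded event stays within the recommended size."""
    return len(payload.encode("utf-8")) <= MAX_RECOMMENDED_BYTES
```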

Attributes and tags

Attributes prescribe log facets, which are used for filtering and searching in the Log Explorer. See the dedicated attributes and aliasing documentation for a list of reserved and standard attributes and to learn how to support a naming convention with log attributes and aliasing.

Attributes for stack traces

When logging stack traces, specific attributes have a dedicated UI display within your Datadog application, such as the logger name, the current thread, the error type, and the stack trace itself.

To enable these functionalities, use the following attribute names:

| Attribute | Description |
|-----------|-------------|
| `logger.name` | Name of the logger |
| `logger.thread_name` | Name of the current thread |
| `error.stack` | Actual stack trace |
| `error.message` | Error message contained in the stack trace |
| `error.kind` | The type or "kind" of an error (for example, `Exception` or `OSError`) |

Note: By default, integration pipelines attempt to remap default logging library parameters to those specific attributes and parse stack traces or tracebacks to automatically extract error.message and error.kind.
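To illustrate, a JSON log for a caught exception could populate these attributes explicitly. The following is a minimal sketch using the Python standard library; the nested `error` object maps to the `error.kind`, `error.message`, and `error.stack` attributes:

```python
import json
import traceback

def exception_as_log(message: str, exc: BaseException) -> str:
    """Render a caught exception as a JSON log with Datadog error attributes."""
    return json.dumps({
        "message": message,
        "error": {
            "kind": type(exc).__name__,   # e.g. "ValueError"
            "message": str(exc),
            "stack": "".join(
                traceback.format_exception(type(exc), exc, exc.__traceback__)
            ),
        },
    })

try:
    int("not-a-number")
except ValueError as exc:
    print(exception_as_log("failed to parse input", exc))
```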

For more information, see the complete source code attributes documentation.

Next steps

Once logs are collected and ingested, they are available in Log Explorer. Log Explorer is where you can search, enrich, and view alerts on your logs. See the Log Explorer documentation to begin analyzing your log data, or see the additional log management documentation below.
