Log Collection & Integrations

Follow the Datadog Agent installation instructions to start forwarding logs alongside your metrics and traces. The Agent can tail log files or listen for logs sent over UDP/TCP, and you can configure it to filter out logs, scrub sensitive data, or aggregate multi-line logs, as sketched below. Then choose your application language below to get dedicated logging best practices. If you are already using a log-shipper daemon, refer to the dedicated documentation for Rsyslog, Syslog-ng, NXLog, FluentD, or Logstash.
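
For example, a custom file-tailing configuration with processing rules might look like the following sketch; the file path, service name, and patterns are assumptions to adapt to your setup:

# conf.d/my_app.d/conf.yaml -- illustrative values throughout
logs:
  - type: file
    path: /var/log/my_app/app.log       # assumed application log file
    service: my-app
    source: python
    log_processing_rules:
      - type: exclude_at_match          # filter out logs: drop lines matching the pattern
        name: exclude_healthchecks
        pattern: GET /healthz
      - type: mask_sequences            # scrub sensitive data: redact matched sequences
        name: mask_api_keys
        replace_placeholder: "[REDACTED]"
        pattern: (?i)api_key=[a-z0-9]+
      - type: multi_line                # aggregate multi-line logs: a new entry starts with a date
        name: new_log_start_with_date
        pattern: \d{4}-\d{2}-\d{2}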

Datadog Log Management also comes with a set of out-of-the-box solutions to collect your logs and send them to Datadog:

Datadog Integrations and Log Collection are tied together. Use an integration's default configuration file to enable its dedicated processing, parsing, and facets in Datadog, as in the example below.
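
A sketch for the NGINX integration (the log path is an assumption; adjust it to your setup):

# conf.d/nginx.d/conf.yaml -- enables the NGINX log pipeline and facets
logs:
  - type: file
    path: /var/log/nginx/access.log   # assumed NGINX access log location
    source: nginx                     # must match the integration name to trigger its pipeline
    service: nginx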

If you want to send your logs directly to Datadog, see the list of available Datadog log collection endpoints at the bottom of this page.

Note: When sending logs in a JSON format to Datadog, there is a set of reserved attributes that have a specific meaning within Datadog. See the Reserved Attributes section to learn more.

Application Log Collection

After you have enabled log collection, configure your application language to generate logs:

Container Log Collection

The Datadog Agent can collect logs directly from container stdout/stderr without using a logging driver. When the Agent's Docker check is enabled, container and orchestrator metadata are automatically added as tags to your logs. You can collect logs from all your containers or only a subset filtered by container image, label, or name. Autodiscovery can also be used to configure log collection directly in the container labels, as in the sketch below. In Kubernetes environments, you can also leverage the DaemonSet installation.
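
For instance, a Docker Compose excerpt using an Autodiscovery label (the service definition and values are assumptions):

# docker-compose.yml excerpt -- the com.datadoghq.ad.logs label configures log collection
services:
  web:
    image: nginx:latest
    labels:
      com.datadoghq.ad.logs: '[{"source": "nginx", "service": "web"}]'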

Choose your environment below to get dedicated log collection instructions:


Serverless Log Collection

Datadog collects logs from AWS Lambda. To enable this, refer to the AWS Lambda integration documentation.

Cloud Providers Log Collection

Select your Cloud provider below to see how to automatically collect your logs and forward them to Datadog:


Custom Log Forwarder

Any custom process or logging library able to forward logs through TCP or HTTP can be used in conjunction with Datadog Logs.

For the Datadog US site, the public endpoint is http-intake.logs.datadoghq.com. The API key must be added either in the path or as a header, for instance:

curl -X POST https://http-intake.logs.datadoghq.com/v1/input \
     -H "Content-Type: text/plain" \
     -H "DD-API-KEY: <API_KEY>" \
     -d 'hello world'
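
A JSON payload works the same way; for instance (attribute values are illustrative, mirroring the TCP example later on this page):

curl -X POST https://http-intake.logs.datadoghq.com/v1/input \
     -H "Content-Type: application/json" \
     -H "DD-API-KEY: <API_KEY>" \
     -d '{"message":"json formatted log", "ddtags":"env:my-env", "ddsource":"my-integration", "service":"my-service"}'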

For more examples with JSON formats, multiple logs per request, or the use of query parameters, refer to the Datadog Log HTTP API documentation.

For the Datadog EU site, the public endpoint is http-intake.logs.datadoghq.eu. The API key must be added either in the path or as a header, for instance:

curl -X POST https://http-intake.logs.datadoghq.eu/v1/input \
     -H "Content-Type: text/plain" \
     -H "DD-API-KEY: <API_KEY>" \
     -d 'hello world'

For more examples with JSON formats, multiple logs per request, or the use of query parameters, refer to the Datadog Log HTTP API documentation.

For the Datadog US site, the secure TCP endpoint is intake.logs.datadoghq.com:10516 (or port 10514 for insecure connections).

You must prefix the log entry with your Datadog API Key, e.g.:

<DATADOG_API_KEY> <PAYLOAD>

Note: <PAYLOAD> can be in raw, Syslog, or JSON format.

Test it manually with telnet. Example of <PAYLOAD> in raw format:

telnet intake.logs.datadoghq.com 10514
<DATADOG_API_KEY> Log sent directly via TCP

The log then appears in your Live Tail page.

If <PAYLOAD> is in JSON format, Datadog automatically parses its attributes:

telnet intake.logs.datadoghq.com 10514
<DATADOG_API_KEY> {"message":"json formatted log", "ddtags":"env:my-env,user:my-user", "ddsource":"my-integration", "hostname":"my-hostname", "service":"my-service"}

For the Datadog EU site, the secure TCP endpoint is tcp-intake.logs.datadoghq.eu:443 (or port 1883 for insecure connections).

You must prefix the log entry with your Datadog API Key, e.g.:

<DATADOG_API_KEY> <PAYLOAD>

Note: <PAYLOAD> can be in raw, Syslog, or JSON format.

Test it manually with telnet. Example of <PAYLOAD> in raw format:

telnet tcp-intake.logs.datadoghq.eu 1883
<DATADOG_API_KEY> Log sent directly via TCP

The log then appears in your Live Tail page.

If <PAYLOAD> is in JSON format, Datadog automatically parses its attributes:

telnet tcp-intake.logs.datadoghq.eu 1883
<DATADOG_API_KEY> {"message":"json formatted log", "ddtags":"env:my-env,user:my-user", "ddsource":"my-integration", "hostname":"my-hostname", "service":"my-service"}

Datadog Logs Endpoints

Datadog provides logging endpoints for both SSL-encrypted connections and unencrypted connections. Use the encrypted endpoint when possible. The Datadog Agent uses the encrypted endpoint to send logs to Datadog. More information is available in the Datadog security documentation.

Endpoints that can be used to send logs to Datadog US region:

Endpoints for SSL-encrypted connections | Port | Description
agent-intake.logs.datadoghq.com | 10516 | Used by the Agent to send logs in protobuf format over an SSL-encrypted TCP connection.
agent-http-intake.logs.datadoghq.com | 443 | Used by the Agent to send logs in JSON format over HTTPS. See the How to send logs over HTTP documentation.
http-intake.logs.datadoghq.com | 443 | Used by custom forwarders to send logs in JSON or plain text format over HTTPS. See the How to send logs over HTTP documentation.
intake.logs.datadoghq.com | 10516 | Used by custom forwarders to send logs in raw, Syslog, or JSON format over an SSL-encrypted TCP connection.
lambda-intake.logs.datadoghq.com | 10516 | Used by Lambda functions to send logs in raw, Syslog, or JSON format over an SSL-encrypted TCP connection.
lambda-http-intake.logs.datadoghq.com | 443 | Used by Lambda functions to send logs in raw, Syslog, or JSON format over HTTPS.
functions-intake.logs.datadoghq.com | 10516 | Used by Azure functions to send logs in raw, Syslog, or JSON format over an SSL-encrypted TCP connection. Note: This endpoint may be useful with other cloud providers.

Endpoint for unencrypted connections | Port | Description
intake.logs.datadoghq.com | 10514 | Used by custom forwarders to send logs in raw, Syslog, or JSON format over an unencrypted TCP connection.

Endpoints that can be used to send logs to Datadog EU region:

Endpoints for SSL-encrypted connections | Port | Description
agent-intake.logs.datadoghq.eu | 443 | Used by the Agent to send logs in protobuf format over an SSL-encrypted TCP connection.
agent-http-intake.logs.datadoghq.eu | 443 | Used by the Agent to send logs in JSON format over HTTPS. See the Agent logs documentation.
http-intake.logs.datadoghq.eu | 443 | Used by custom forwarders to send logs in JSON or plain text format over HTTPS. See the Agent logs documentation.
tcp-intake.logs.datadoghq.eu | 443 | Used by custom forwarders to send logs in raw, Syslog, or JSON format over an SSL-encrypted TCP connection.
lambda-intake.logs.datadoghq.eu | 443 | Used by Lambda functions to send logs in raw, Syslog, or JSON format over an SSL-encrypted TCP connection.
lambda-http-intake.logs.datadoghq.eu | 443 | Used by Lambda functions to send logs in raw, Syslog, or JSON format over HTTPS.
functions-intake.logs.datadoghq.eu | 443 | Used by Azure functions to send logs in raw, Syslog, or JSON format over an SSL-encrypted TCP connection. Note: This endpoint may be useful with other cloud providers.

Endpoint for unencrypted connections | Port | Description
tcp-intake.logs.datadoghq.eu | 1883 | Used by custom forwarders to send logs in raw, Syslog, or JSON format over an unencrypted TCP connection.

Reserved attributes

Here are some key attributes you should pay attention to when setting up your project:

Attribute | Description
host | The name of the originating host as defined in metrics. Datadog automatically retrieves corresponding host tags from the matching host in Datadog and applies them to your logs. The Agent sets this value automatically.
source | This corresponds to the integration name: the technology from which the log originated. When it matches an integration name, Datadog automatically installs the corresponding parsers and facets. For example: nginx, postgresql, etc.
status | This corresponds to the level/severity of a log. It is used to define patterns and has a dedicated layout in the Datadog Log UI.
service | The name of the application or service generating the log events. It is used to switch from Logs to APM, so make sure you define the same value when you use both products.
message | By default, Datadog ingests the value of the message attribute as the body of the log entry. That value is then highlighted and displayed in the Logstream, where it is indexed for full text search.

Your logs are collected and centralized into the Log Explorer view. You can also search, enrich, and alert on your logs.

Unified service tagging

As a best practice for log collection, Datadog recommends configuring unified service tagging to tie Datadog telemetry together through three standard tags: env, service, and version. Refer to the dedicated unified service tagging documentation to get started; a sketch follows below.
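
As an illustration, in a containerized setup these three tags are commonly set through the DD_ENV, DD_SERVICE, and DD_VERSION environment variables (the service definition and values below are assumptions):

# docker-compose.yml excerpt -- illustrative values for the three standard tags
services:
  my-service:
    image: my-service:1.2.3          # assumed application image
    environment:
      - DD_ENV=prod
      - DD_SERVICE=my-service
      - DD_VERSION=1.2.3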

How to get the most out of your application logs

When logging stack traces, specific attributes have a dedicated UI display within your Datadog application, such as the logger name, the current thread, the error type, and the stack trace itself.

To enable these features, use the following attribute names:

Attribute | Description
logger.name | Name of the logger
logger.thread_name | Name of the current thread
error.stack | Actual stack trace
error.message | Error message contained in the stack trace
error.kind | The type or "kind" of an error (e.g., "Exception", "OSError")
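
For example, a JSON log event carrying these attributes could look like the following (all values are illustrative):

{
  "message": "Uncaught exception while handling request",
  "status": "error",
  "logger": {
    "name": "my.app.http",
    "thread_name": "worker-3"
  },
  "error": {
    "kind": "OSError",
    "message": "[Errno 2] No such file or directory: 'conf.yaml'",
    "stack": "Traceback (most recent call last): ..."
  }
}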

Note: By default, integration pipelines attempt to remap default logging library parameters to these specific attributes, and parse stack traces or tracebacks to automatically extract error.message and error.kind.

Send your application logs in JSON

For integrated frameworks, Datadog provides guidelines on how to log in JSON to a file. JSON-formatted logging helps handle multi-line application logs and is automatically parsed by Datadog; a minimal sketch follows below.
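
As a minimal sketch (Python standard library only, not an official Datadog logger; the file path and logger name are assumptions), a formatter that writes one JSON object per line:

import json
import logging

class JsonFormatter(logging.Formatter):
    # Emits one JSON object per line, using the reserved and error attributes above
    def format(self, record):
        log = {
            "message": record.getMessage(),
            "status": record.levelname.lower(),
            "logger": {"name": record.name, "thread_name": record.threadName},
        }
        if record.exc_info:
            log["error"] = {
                "kind": record.exc_info[0].__name__,
                "message": str(record.exc_info[1]),
                "stack": self.formatException(record.exc_info),
            }
        return json.dumps(log)

handler = logging.FileHandler("app.log")   # assumed log file; point the Agent at it
handler.setFormatter(JsonFormatter())
logging.getLogger("my.app").addHandler(handler)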

The Advantage of Collecting JSON-formatted Logs

Datadog automatically parses JSON-formatted logs. For this reason, if you have control over the log format you send to Datadog, it is recommended to format these logs as JSON to avoid the need for custom parsing rules.

Limits applied to ingested log events

  • For optimal use, Datadog recommends that a log event not exceed 25 KB in size. When using the Datadog Agent, log events greater than 256 KB are split into several entries. When using the Datadog TCP or HTTP API directly, log events up to 1 MB are accepted.
  • Log events can be submitted up to 18h in the past and 2h in the future.
  • A log event converted to JSON format should contain fewer than 256 attributes. Each attribute key should be less than 50 characters long and nested in fewer than 10 successive levels, and its value should be less than 1024 characters if promoted as a facet.
  • A log event should have no more than 100 tags, and each tag should not exceed 256 characters, for a maximum of 10 million unique tags per day.

Log events that do not comply with these limits might be transformed or truncated by the system, or not indexed if they fall outside the provided time range. However, Datadog tries to preserve as much user data as possible.

Further Reading


*Logging without Limits is a trademark of Datadog, Inc.