
Log Collection & Integrations

Follow the Datadog Agent installation instructions to start forwarding logs alongside your metrics and traces. The Agent can tail log files or listen for logs sent over UDP/TCP, and you can configure it to filter out logs, scrub sensitive data, or aggregate multi-line logs. Finally, choose your application language below to get dedicated logging best practices. If you are already using a log-shipper daemon, refer to the dedicated documentation for Rsyslog, Syslog-ng, NXLog, FluentD, and Logstash.
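As an illustration, log sources and processing rules are declared in the Agent's configuration. Below is a minimal sketch of a conf.d/<APP>.d/conf.yaml file (file path, rule names, and patterns are all illustrative) that tails a file, filters out health checks, scrubs card numbers, and aggregates multi-line entries:

# conf.d/myapp.d/conf.yaml -- path, names, and patterns are illustrative
logs:
  - type: file
    path: /var/log/myapp/app.log
    service: my-service
    source: python
    log_processing_rules:
      # Filter out noisy health-check lines
      - type: exclude_at_match
        name: exclude_healthchecks
        pattern: GET /health
      # Scrub card numbers before logs leave the host
      - type: mask_sequences
        name: mask_card_numbers
        replace_placeholder: "[masked]"
        pattern: (?:\d{4}-){3}\d{4}
      # Aggregate multi-line entries: a new log starts with a date
      - type: multi_line
        name: new_log_start_with_date
        pattern: \d{4}-\d{2}-\d{2}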

Datadog Log Management also comes with a set of out-of-the-box solutions to collect your logs and send them to Datadog:

Datadog Integrations and Log Collection are tied together. Use an integration's default configuration file to enable its dedicated processing, parsing, and facets in Datadog.

If you want to send your logs directly to Datadog, find the list of available Datadog log collection endpoints at the bottom of this page.

Note: When sending logs in JSON format to Datadog, there is a set of reserved attributes that have a specific meaning within Datadog. See the Reserved Attributes section to learn more.

Application Log collection

After you have enabled log collection, configure your application language to generate logs:


Container Log collection

The Datadog Agent can collect logs directly from container stdout/stderr without using a logging driver. When the Agent's Docker check is enabled, container and orchestrator metadata are automatically added as tags to your logs. It is possible to collect logs from all your containers or only a subset filtered by container image, label, or name. Autodiscovery can also be used to configure log collection directly in the container labels. In Kubernetes environments, you can also leverage the DaemonSet installation.
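For example, Autodiscovery log configuration can be attached directly to a container through its labels. A minimal Docker Compose sketch (image and service names are illustrative):

# docker-compose.yml excerpt -- image and service names are illustrative
services:
  web:
    image: nginx:latest
    labels:
      com.datadoghq.ad.logs: '[{"source": "nginx", "service": "my-web-service"}]'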

Choose your environment below to get dedicated log collection instructions:


Cloud Providers Log collection

Select your Cloud provider below to see how to automatically collect your logs and forward them to Datadog:


Custom Log forwarder

Any custom process or logging library able to forward logs through TCP or HTTP can be used in conjunction with Datadog Logs. Choose below which Datadog site you want to forward logs to:

For the US site, the secure TCP endpoint is intake.logs.datadoghq.com:10516 (or port 10514 for insecure connections).

You must prefix the log entry with your Datadog API Key, e.g.:

<DATADOG_API_KEY> this is my log

Test it manually with telnet:

telnet intake.logs.datadoghq.com 10514
<DATADOG_API_KEY> Log sent directly via TCP
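The same test can also be scripted. A minimal Python sketch of this unencrypted send (substitute your API key for the placeholder):

# Sketch: send one raw log line over unencrypted TCP (port 10514).
import socket

API_KEY = "<DATADOG_API_KEY>"  # replace with your Datadog API key
log_line = API_KEY + " Log sent directly via TCP\n"

# Open a plain TCP connection to the intake endpoint and send one line.
with socket.create_connection(("intake.logs.datadoghq.com", 10514)) as sock:
    sock.sendall(log_line.encode("utf-8"))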

This produces the following result in your Live Tail page:

Datadog automatically parses attributes out of JSON-formatted messages.

telnet intake.logs.datadoghq.com 10514
<DATADOG_API_KEY> {"message":"json formatted log", "ddtags":"env:my-env,user:my-user", "ddsource":"my-integration", "hostname":"my-hostname", "service":"my-service"}

For the EU site, the secure TCP endpoint is tcp-intake.logs.datadoghq.eu:443 (or port 1883 for insecure connections).

You must prefix the log entry with your Datadog API Key, e.g.:

<DATADOG_API_KEY> this is my log

Test it manually with telnet:

telnet tcp-intake.logs.datadoghq.eu 1883
<DATADOG_API_KEY> Log sent directly via TCP

This produces the following result in your Live Tail page:

Datadog automatically parses attributes out of JSON-formatted messages.

telnet tcp-intake.logs.datadoghq.eu 1883
<DATADOG_API_KEY> {"message":"json formatted log", "ddtags":"env:my-env,user:my-user", "ddsource":"my-integration", "hostname":"my-hostname", "service":"my-service"}

To send logs over HTTPS for the EU or US site, refer to the Datadog Log HTTP API documentation.
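As a sketch of that HTTP path, assuming the v1 intake endpoint described in the HTTP API documentation, a single log can be POSTed with the API key in the URL:

# Sketch: send one log over HTTPS; the v1 intake URL below is an assumption,
# confirm it against the Datadog Log HTTP API documentation.
import json
import urllib.request

API_KEY = "<DATADOG_API_KEY>"  # replace with your Datadog API key
url = "https://http-intake.logs.datadoghq.com/v1/input/" + API_KEY
body = json.dumps({"message": "Log sent over HTTPS", "service": "my-service"}).encode("utf-8")

req = urllib.request.Request(url, data=body, headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req) as resp:
    print(resp.status)  # 200 means the log was accepted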

Datadog Logs Endpoints

Datadog provides logging endpoints for both SSL-encrypted connections and unencrypted connections. You should use the encrypted endpoint when possible. The Datadog Agent uses the encrypted endpoint to send logs to Datadog (more information available in the Datadog security documentation).

Endpoints that can be used to send logs to Datadog:

US site:

Endpoints for SSL-encrypted connections | Port | Description
agent-intake.logs.datadoghq.com | 10516 | Used by the Agent to send logs in protobuf format over an SSL-encrypted TCP connection.
intake.logs.datadoghq.com | 10516 | Used by custom forwarders to send logs in raw, Syslog, or JSON format over an SSL-encrypted TCP connection.
lambda-intake.logs.datadoghq.com | 10516 | Used by Lambda functions to send logs in raw, Syslog, or JSON format over an SSL-encrypted TCP connection.
lambda-http-intake.logs.datadoghq.com | 443 | Used by Lambda functions to send logs in raw, Syslog, or JSON format over HTTPS.
functions-intake.logs.datadoghq.com | 10516 | Used by Azure functions to send logs in raw, Syslog, or JSON format over an SSL-encrypted TCP connection. Note: This endpoint may be useful with other cloud providers.

Endpoint for unencrypted connections | Port | Description
intake.logs.datadoghq.com | 10514 | Used by custom forwarders to send logs in raw, Syslog, or JSON format over an unencrypted TCP connection.
EU site:

Endpoints for SSL-encrypted connections | Port | Description
agent-intake.logs.datadoghq.eu | 443 | Used by the Agent to send logs in protobuf format over an SSL-encrypted TCP connection.
tcp-intake.logs.datadoghq.eu | 443 | Used by custom forwarders to send logs in raw, Syslog, or JSON format over an SSL-encrypted TCP connection.
lambda-intake.logs.datadoghq.eu | 443 | Used by Lambda functions to send logs in raw, Syslog, or JSON format over an SSL-encrypted TCP connection.
lambda-http-intake.logs.datadoghq.eu | 443 | Used by Lambda functions to send logs in raw, Syslog, or JSON format over HTTPS.
functions-intake.logs.datadoghq.eu | 443 | Used by Azure functions to send logs in raw, Syslog, or JSON format over an SSL-encrypted TCP connection. Note: This endpoint may be useful with other cloud providers.

Endpoint for unencrypted connections | Port | Description
tcp-intake.logs.datadoghq.eu | 1883 | Used by custom forwarders to send logs in raw, Syslog, or JSON format over an unencrypted TCP connection.

To send logs over HTTPS, refer to the Datadog Log HTTP API documentation.

Reserved attributes

Here are some key attributes you should pay attention to when setting up your project:

Attribute | Description
host | The name of the originating host as defined in metrics. We automatically retrieve corresponding host tags from the matching host in Datadog and apply them to your logs. The Agent sets this value automatically.
source | This corresponds to the integration name: the technology from which the log originated. When it matches an integration name, Datadog automatically installs the corresponding parsers and facets. For example: nginx, postgresql, etc.
status | This corresponds to the level/severity of a log. It is used to define patterns and has a dedicated layout in the Datadog Log UI.
service | The name of the application or service generating the log events. It is used to switch from Logs to APM, so make sure you define the same value when you use both products.
message | By default, Datadog ingests the value of the message attribute as the body of the log entry. That value is then highlighted and displayed in the Logstream, where it is indexed for full-text search.
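As an illustration, a JSON log that populates these reserved attributes could look like the following (all values are placeholders):

{
  "host": "i-0123456789abcdef0",
  "source": "python",
  "status": "error",
  "service": "payment-api",
  "message": "Payment declined for order 4242"
}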

Your logs are collected and centralized into the Log Explorer view. You can also search, enrich, and alert on your logs.

How to get the most out of your application logs

When logging stack traces, there are specific attributes that have a dedicated UI display within your Datadog application such as the logger name, the current thread, the error type, and the stack trace itself.

To enable these functionalities use the following attribute names:

Attribute | Description
logger.name | Name of the logger
logger.thread_name | Name of the current thread
error.stack | Actual stack trace
error.message | Error message contained in the stack trace
error.kind | The type or "kind" of an error (for example, "Exception", "OSError")

Note: By default, integration Pipelines attempt to remap default logging library parameters to these specific attributes and parse stack traces or tracebacks to automatically extract error.message and error.kind.
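For instance, here is a minimal Python sketch that fills these attributes from a caught exception (emitting JSON on stdout is just an illustrative transport):

# Sketch: emit a JSON log carrying the error.* attributes from a caught exception.
import json
import traceback

try:
    1 / 0
except Exception as exc:
    # Dotted attribute names match the table above.
    print(json.dumps({
        "message": "Division failed",
        "logger.name": "my.app.module",
        "error.kind": type(exc).__name__,       # e.g. "ZeroDivisionError"
        "error.message": str(exc),
        "error.stack": traceback.format_exc(),  # the actual stack trace
    }))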

Send your application logs in JSON

For integration frameworks, Datadog provides guidelines on how to log in JSON to a file. JSON-formatted logging helps handle multi-line application logs and is automatically parsed by Datadog.
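For example, with Python's standard logging module, a small custom formatter can write one JSON object per line. A sketch (file path and logger names are illustrative):

# Sketch: log JSON to a file using only the standard library.
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each record as a single JSON line."""
    def format(self, record):
        return json.dumps({
            "message": record.getMessage(),
            "status": record.levelname.lower(),
            "logger.name": record.name,
            "logger.thread_name": record.threadName,
        })

handler = logging.FileHandler("app.log")  # file path is illustrative
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("my-service")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("User signed in")  # written to app.log as one JSON object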

The Advantage of Collecting JSON-formatted logs

Datadog automatically parses JSON-formatted logs. For this reason, if you have control over the log format you send to Datadog, it is recommended to format these logs as JSON to avoid the need for custom parsing rules.

Further Reading