---
title: Log Collection and Integrations
description: >-
  Configure your environment to gather logs from your host, containers, and
  services.
---

# Log Collection and Integrations

## Overview{% #overview %}

Choose a configuration option below to begin ingesting your logs. If you are already using a log-shipper daemon, refer to the dedicated documentation for [Rsyslog](https://docs.datadoghq.com/integrations/rsyslog/), [Syslog-ng](https://docs.datadoghq.com/integrations/syslog_ng/), [NXlog](https://docs.datadoghq.com/integrations/nxlog/), [FluentD](https://docs.datadoghq.com/integrations/fluentd/#log-collection), or [Logstash](https://docs.datadoghq.com/integrations/logstash/#log-collection).

If you want to send your logs directly to Datadog, consult the list of [supported logging endpoints](#logging-endpoints) below.

**Note**: When sending logs in JSON format to Datadog, a set of reserved attributes has a specific meaning within Datadog. See the [Attributes and tags](#attributes-and-tags) section to learn more.

## Setup{% #setup %}

{% tab title="Host" %}

1. Install the [Datadog Agent](https://app.datadoghq.com/account/settings/agent/latest).
1. To enable log collection, change `logs_enabled: false` to `logs_enabled: true` in your Agent's main configuration file (`datadog.yaml`). See the [Host Agent Log collection documentation](https://docs.datadoghq.com/agent/logs/) for more information and examples.
1. Once enabled, the Datadog Agent can be configured to [tail log files or listen for logs sent over UDP/TCP](https://docs.datadoghq.com/agent/logs/#custom-log-collection), [filter out logs or scrub sensitive data](https://docs.datadoghq.com/agent/logs/advanced_log_collection/#filter-logs), and [aggregate multi-line logs](https://docs.datadoghq.com/agent/logs/advanced_log_collection/#multi-line-aggregation) (a sketch of sending a log over TCP follows this list).
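
Once the Agent is configured with a custom TCP log listener in `conf.yaml`, an application can write newline-delimited events to that socket. This is a minimal sketch, assuming a hypothetical port that must match your listener configuration:

```python
import json
import socket

# Hypothetical values: the port must match the "port" configured under
# "logs:" in the custom log collection conf.yaml on the Agent host.
AGENT_HOST = "localhost"
AGENT_PORT = 10518

# The Agent's TCP listener reads one log event per line.
event = {"message": "user login succeeded", "service": "auth", "status": "info"}

with socket.create_connection((AGENT_HOST, AGENT_PORT)) as sock:
    sock.sendall((json.dumps(event) + "\n").encode("utf-8"))
```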

{% /tab %}

{% tab title="Application" %}

1. Install the [Datadog Agent](https://app.datadoghq.com/account/settings/agent/latest).
1. To enable log collection, change `logs_enabled: false` to `logs_enabled: true` in your Agent's main configuration file (`datadog.yaml`). See the [Host Agent Log collection documentation](https://docs.datadoghq.com/agent/logs/) for more information and examples.
1. Follow your application language's installation instructions to configure a logger and start generating logs (a minimal Python sketch follows this list):

- [Java](https://docs.datadoghq.com/logs/log_collection/java)
- [Python](https://docs.datadoghq.com/logs/log_collection/python)
- [Go](https://docs.datadoghq.com/logs/log_collection/go)
- [Ruby](https://docs.datadoghq.com/logs/log_collection/ruby)
- [.NET](https://docs.datadoghq.com/logs/log_collection/csharp)
- [PHP](https://docs.datadoghq.com/logs/log_collection/php)
- [Node.js](https://docs.datadoghq.com/logs/log_collection/nodejs)
- [JavaScript](https://docs.datadoghq.com/logs/log_collection/javascript)
- [React Native](https://docs.datadoghq.com/logs/log_collection/reactnative)
- [Android](https://docs.datadoghq.com/logs/log_collection/android)
- [iOS](https://docs.datadoghq.com/logs/log_collection/ios)
- [Flutter](https://docs.datadoghq.com/logs/log_collection/flutter)
- [Roku](https://docs.datadoghq.com/logs/log_collection/roku)
- [Kotlin Multiplatform](https://docs.datadoghq.com/logs/log_collection/kotlin_multiplatform)
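
For Python, as a minimal illustration, a standard-library logger can write one JSON object per line to a file that the Agent tails. The file path and logger name below are placeholder assumptions; the language pages above describe the recommended setup:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as a single-line JSON object."""

    def format(self, record):
        return json.dumps({
            "timestamp": self.formatTime(record),
            "status": record.levelname,
            "logger": {"name": record.name},
            "message": record.getMessage(),
        })

# Placeholder path: point the Agent's log collection at this file.
handler = logging.FileHandler("/var/log/my_app/app.log")
handler.setFormatter(JsonFormatter())

logger = logging.getLogger("my_app")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.info("application started")
```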

{% /tab %}

{% tab title="Container" %}
Choose a container or orchestrator provider and follow their dedicated log collection instructions:

- [Docker](https://docs.datadoghq.com/agent/docker/log/)
- [Kubernetes](https://docs.datadoghq.com/agent/kubernetes/daemonset_setup/?tab=k8sfile#log-collection)
- [Red Hat OpenShift](https://docs.datadoghq.com/integrations/openshift/#log-collection)
- [Amazon ECS](https://docs.datadoghq.com/containers/amazon_ecs/logs/)
- [ECS Fargate](https://docs.datadoghq.com/integrations/ecs_fargate/#log-collection)

**Notes**:

- The Datadog Agent can [collect logs directly from container stdout/stderr](https://docs.datadoghq.com/agent/docker/log/) without using a logging driver. When the Agent's Docker check is enabled, container and orchestrator metadata are automatically added as tags to your logs.

- It is possible to collect logs from all your containers or [only a subset filtered by container image, label, or name](https://docs.datadoghq.com/agent/guide/autodiscovery-management/).

- Autodiscovery can also be used to [configure log collection directly in the container labels](https://docs.datadoghq.com/agent/kubernetes/integrations/).

- In Kubernetes environments, you can also leverage [the daemonset installation](https://docs.datadoghq.com/agent/basic_agent_usage/kubernetes/#log-collection-setup).

{% /tab %}

{% tab title="Serverless" %}
Use the Datadog Forwarder, an AWS Lambda function that ships logs from your environment to Datadog. To enable log collection in your AWS serverless environment, refer to the [Datadog Forwarder documentation](https://docs.datadoghq.com/serverless/forwarder).
{% /tab %}

{% tab title="Cloud/Integration" %}
Select your Cloud provider below to see how to automatically collect your logs and forward them to Datadog:

- [AWS](https://docs.datadoghq.com/integrations/amazon_web_services/?tab=allpermissions#log-collection)
- [Azure](https://docs.datadoghq.com/integrations/azure/?tab=azurecliv20#log-collection)
- [Google Cloud](https://docs.datadoghq.com/integrations/google_cloud_platform/?tab=datadogussite#log-collection)
- [Oracle Cloud Infrastructure](https://docs.datadoghq.com/integrations/oracle-cloud-infrastructure/?tab=ociquickstartpreviewrecommended#log-collection)
- [Heroku](https://docs.datadoghq.com/logs/guide/collect-heroku-logs/)

Datadog integrations and log collection are tied together. You can use an integration's default configuration file to enable dedicated [processors](https://docs.datadoghq.com/logs/log_configuration/processors), [parsing](https://docs.datadoghq.com/logs/log_configuration/parsing), and [facets](https://docs.datadoghq.com/logs/explorer/facets/) in Datadog. To begin log collection with an integration:

1. Select an integration from the [Integrations page](https://docs.datadoghq.com/integrations/#cat-log-collection) and follow the setup instructions.
1. Follow the integration's log collection instructions to uncomment the logs section in that integration's `conf.yaml` file and configure it for your environment.

## Reduce data transfer fees{% #reduce-data-transfer-fees %}

Use Datadog's [Cloud Network Monitoring](https://docs.datadoghq.com/network_monitoring/cloud_network_monitoring/) to identify your organization's highest throughput applications. Connect to Datadog over supported private connections and send data over a private network to avoid the public internet and reduce your data transfer fees. After you switch to private links, use Datadog's [Cloud Cost Management](https://docs.datadoghq.com/cloud_cost_management/) tools to verify the impact and monitor the reduction in your cloud costs.

For more information, see [How to send logs to Datadog while reducing data transfer fees](https://docs.datadoghq.com/logs/guide/reduce_data_transfer_fees/).
{% /tab %}

{% tab title="Agent Check" %}
If you are developing a custom Agent integration, you can submit logs programmatically from within your Agent check using the `send_log` method. This allows your custom integration to emit logs alongside metrics, events, and service checks.

To learn how to submit logs from your custom Agent check, see [Agent Integration Log Collection](https://docs.datadoghq.com/logs/log_collection/agent_checks/).
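
As a rough sketch, a custom check might submit a log event as shown below. The check name, message, and tags are placeholders, and the linked documentation is authoritative for the interface:

```python
from datadog_checks.base import AgentCheck

class MyCustomCheck(AgentCheck):  # hypothetical check name
    def check(self, instance):
        # ... collect data from the monitored system ...

        # Emit a log event alongside any metrics this check reports.
        self.send_log({
            "message": "scrape completed",
            "ddtags": "env:dev,check:my_custom_check",
        })
```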
{% /tab %}

## Additional configuration options{% #additional-configuration-options %}

### Logging endpoints{% #logging-endpoints %}

Datadog provides logging endpoints for both SSL-encrypted connections and unencrypted connections. Use the encrypted endpoint when possible. The Datadog Agent uses the encrypted endpoint to send logs to Datadog. More information is available in the [Datadog security documentation](https://docs.datadoghq.com/data_security/logs/#information-security).

#### Supported endpoints{% #supported-endpoints %}

Use the [site](https://docs.datadoghq.com/getting_started/site/) selector dropdown on the right side of the page to see supported endpoints by Datadog site.

| Site | Type  | Endpoint | Port | Description                                                                                                                                                                  |
| ---- | ----- | -------- | ---- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
|      | HTTPS | ``       | 443  | Used by custom forwarders to send logs in JSON or plain text format over HTTPS. See the [Logs HTTP API documentation](https://docs.datadoghq.com/api/latest/logs/#send-logs). |
|      | HTTPS | ``       | 443  | Used by the Agent to send logs in JSON format over HTTPS. See the [Host Agent Log collection documentation](https://docs.datadoghq.com/agent/logs/#send-logs-over-https).    |
|      | HTTPS | ``       | 443  | Used by Lambda functions to send logs in raw, Syslog, or JSON format over HTTPS.                                                                                             |
|      | HTTPS | `logs.`  | 443  | Used by the Browser SDK to send logs in JSON format over HTTPS.                                                                                                              |

### Custom log forwarding{% #custom-log-forwarding %}

Any custom process or logging library that can forward logs over **HTTP** can be used with Datadog Logs.

You can send logs to the Datadog platform over HTTP. Refer to the [Datadog Log HTTP API documentation](https://docs.datadoghq.com/api/latest/logs/#send-logs) to get started.
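
For example, the sketch below posts a single event to the HTTP intake with Python's `requests` library. The endpoint shown is for the US1 site (substitute your Datadog site's intake endpoint), and the hostname, service, and tags are placeholders:

```python
import requests

# US1 endpoint; use the logs intake endpoint for your Datadog site.
URL = "https://http-intake.logs.datadoghq.com/api/v2/logs"

payload = [{
    "ddsource": "python",
    "ddtags": "env:dev",
    "hostname": "my-host",      # placeholder
    "service": "my-service",    # placeholder
    "message": "user login succeeded",
}]

resp = requests.post(
    URL,
    json=payload,
    headers={"DD-API-KEY": "<YOUR_API_KEY>"},
    timeout=10,
)
resp.raise_for_status()
```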

**Notes**:

- The HTTPS API supports logs of sizes up to 1 MB. However, for optimal performance, an individual log should be no greater than 25 KB. If you use the Datadog Agent for logging, it is configured to split logs at 900 kB (900,000 bytes).
- A log event should not have more than 100 tags, and each tag should not exceed 256 characters, for a maximum of 10 million unique tags per day.
- A log event converted to JSON format should contain fewer than 256 attributes. Each attribute key should be less than 50 characters and nested in fewer than 20 successive levels, and each value should be less than 1024 characters if promoted as a facet.
- Log events can be submitted with a [timestamp](https://docs.datadoghq.com/logs/log_configuration/pipelines/?tab=date#date-attribute) that is up to 18 hours in the past.

{% alert level="info" %}
Preview available: You can submit logs from the past 7 days, instead of the current 18-hour limit. [Register for the Preview](https://www.datadoghq.com/product-preview/ingest-logs-up-to-7-days-old/).
{% /alert %}

Log events that do not comply with these limits might be transformed or truncated by the system or not indexed if outside the provided time range. However, Datadog tries to preserve as much user data as possible.

An additional field-level truncation applies only to indexed logs: values are truncated to 75 KiB for the message field and 25 KiB for non-message fields. Datadog still stores the full text, and it remains visible in regular list queries in the Logs Explorer. However, the truncated version is displayed in grouped queries, such as when grouping logs by that truncated field or performing similar operations that display it.
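
These limits can also be enforced client side before submission. The helper below is an illustrative sketch that mirrors the documented limits; the function and constant names are assumptions, not part of any Datadog library:

```python
import json

MAX_LOG_BYTES = 1_000_000      # hard intake limit: 1 MB per log
RECOMMENDED_BYTES = 25_000     # recommended maximum: 25 KB per log
MAX_TAGS = 100                 # maximum tags per log event
MAX_TAG_LEN = 256              # maximum characters per tag

def validate_event(event: dict) -> dict:
    """Raise or warn on events that exceed the documented intake limits."""
    raw = json.dumps(event).encode("utf-8")
    if len(raw) > MAX_LOG_BYTES:
        raise ValueError("log event exceeds the 1 MB intake limit")
    if len(raw) > RECOMMENDED_BYTES:
        print("warning: log event exceeds the recommended 25 KB")
    tags = [t for t in event.get("ddtags", "").split(",") if t]
    if len(tags) > MAX_TAGS or any(len(t) > MAX_TAG_LEN for t in tags):
        raise ValueError("tag count or tag length exceeds intake limits")
    return event
```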

{% collapsible-section %}
### TCP

{% alert level="warning" %}
TCP log collection is **not supported**. Datadog provides **no delivery or reliability guarantees** when using TCP, and log data may be lost without notice. For reliable ingestion, use the HTTP intake endpoint, an official Datadog Agent, or forwarder integration instead. For more information, see [Log Collection](https://docs.datadoghq.com/logs/log_collection/?tab=host).
{% /alert %}

| Site | Type        | Endpoint                              | Port  | Description                                                                                                                                                                 |
| ---- | ----------- | ------------------------------------- | ----- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| US   | TCP         | `agent-intake.logs.datadoghq.com`     | 10514 | Used by the Agent to send logs without TLS.                                                                                                                                 |
| US   | TCP and TLS | `agent-intake.logs.datadoghq.com`     | 10516 | Used by the Agent to send logs with TLS.                                                                                                                                    |
| US   | TCP and TLS | `intake.logs.datadoghq.com`           | 443   | Used by custom forwarders to send logs in raw, Syslog, or JSON format over an SSL-encrypted TCP connection.                                                                 |
| US   | TCP and TLS | `functions-intake.logs.datadoghq.com` | 443   | Used by Azure functions to send logs in raw, Syslog, or JSON format over an SSL-encrypted TCP connection. **Note**: This endpoint may be useful with other cloud providers. |
| US   | TCP and TLS | `lambda-intake.logs.datadoghq.com`    | 443   | Used by Lambda functions to send logs in raw, Syslog, or JSON format over an SSL-encrypted TCP connection.                                                                  |
| EU   | TCP and TLS | `agent-intake.logs.datadoghq.eu`      | 443   | Used by the Agent to send logs in protobuf format over an SSL-encrypted TCP connection.                                                                                     |
| EU   | TCP and TLS | `functions-intake.logs.datadoghq.eu`  | 443   | Used by Azure functions to send logs in raw, Syslog, or JSON format over an SSL-encrypted TCP connection. **Note**: This endpoint may be useful with other cloud providers. |
| EU   | TCP and TLS | `lambda-intake.logs.datadoghq.eu`     | 443   | Used by Lambda functions to send logs in raw, Syslog, or JSON format over an SSL-encrypted TCP connection.                                                                  |

{% /collapsible-section %}

### Attributes and tags{% #attributes-and-tags %}

Attributes prescribe [log facets](https://docs.datadoghq.com/logs/explorer/facets/), which are used for filtering and searching in the Log Explorer. See the dedicated [attributes and aliasing](https://docs.datadoghq.com/logs/log_configuration/attributes_naming_convention) documentation for a list of reserved and standard attributes, and to learn how to support a naming convention with log attributes and aliasing.

#### Attributes for stack traces{% #attributes-for-stack-traces %}

When logging stack traces, specific attributes have a dedicated UI display within your Datadog application, such as the logger name, the current thread, the error type, and the stack trace itself.

{% image
   source="https://docs.dd-static.net/images/logs/log_collection/stack_trace.6d73ceca149a8c347b307d31f696cd47.png?auto=format&fit=max&w=850 1x, https://docs.dd-static.net/images/logs/log_collection/stack_trace.6d73ceca149a8c347b307d31f696cd47.png?auto=format&fit=max&w=850&dpr=2 2x"
   alt="Attributes for a parsed stack trace" /%}

To enable these functionalities, use the following attribute names:

| Attribute            | Description                                                             |
| -------------------- | ----------------------------------------------------------------------- |
| `logger.name`        | Name of the logger                                                      |
| `logger.thread_name` | Name of the current thread                                              |
| `error.stack`        | Actual stack trace                                                      |
| `error.message`      | Error message contained in the stack trace                              |
| `error.kind`         | The type or "kind" of an error (for example, "Exception", or "OSError") |

**Note**: By default, integration pipelines attempt to remap default logging library parameters to these specific attributes and parse stack traces or tracebacks to automatically extract `error.message` and `error.kind`.
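
As an illustration, the sketch below builds a JSON-ready log event that populates these attributes from a caught Python exception (the message, logger name, and thread name are placeholders):

```python
import json
import traceback

def to_log_event(exc: BaseException) -> dict:
    """Build a log event using the dedicated error and logger attributes."""
    return {
        "message": "request failed",  # placeholder
        "logger": {"name": "my_app.http", "thread_name": "worker-1"},
        "error": {
            "kind": type(exc).__name__,   # error.kind
            "message": str(exc),          # error.message
            "stack": "".join(
                traceback.format_exception(type(exc), exc, exc.__traceback__)
            ),                            # error.stack
        },
    }

try:
    raise OSError("disk full")
except OSError as exc:
    print(json.dumps(to_log_event(exc)))
```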

For more information, see the complete [source code attributes documentation](https://docs.datadoghq.com/logs/log_configuration/attributes_naming_convention/#source-code).

## Next steps{% #next-steps %}

Once logs are collected and ingested, they are available in **Log Explorer**. Log Explorer is where you can search, enrich, and view alerts on your logs. See the [Log Explorer](https://docs.datadoghq.com/logs/explore/) documentation to begin analyzing your log data, or see the additional log management documentation below.

{% image
   source="https://docs.dd-static.net/images/logs/explore.b937574cd932232459c1c5746146a353.png?auto=format&fit=max&w=850 1x, https://docs.dd-static.net/images/logs/explore.b937574cd932232459c1c5746146a353.png?auto=format&fit=max&w=850&dpr=2 2x"
   alt="Logs appearing in the Log Explorer" /%}

## Further Reading{% #further-reading %}



- [How to manage log files using Logrotate](https://www.datadoghq.com/blog/log-file-control-with-logrotate/)
- [Advanced log collection configurations](https://docs.datadoghq.com/agent/logs/advanced_log_collection)
- [Discover how to process your logs](https://docs.datadoghq.com/logs/log_configuration/processors)
- [Learn more about parsing](https://docs.datadoghq.com/logs/log_configuration/parsing)
- [Datadog live tail functionality](https://docs.datadoghq.com/logs/live_tail/)
- [See how to explore your logs](https://docs.datadoghq.com/logs/explorer/)
- [Logging Without Limits*](https://docs.datadoghq.com/logs/logging_without_limits/)
\*Logging without Limits is a trademark of Datadog, Inc.

