---
title: Getting Started with Logs
description: >-
  Collect logs from multiple sources, process and analyze them, and correlate
  these logs with traces and metrics.
breadcrumbs: Docs > Getting Started > Getting Started with Logs
---

# Getting Started with Logs

## Overview{% #overview %}

Use Datadog Log Management, also called logs, to collect logs across multiple logging sources, such as your server, container, cloud environment, application, or existing log processors and forwarders. With conventional logging, you have to choose which logs to analyze and retain to maintain cost-efficiency. With Datadog Logging without Limits*, you can collect, process, archive, explore, and monitor your logs without logging limits.

This page shows you how to get started with Log Management in Datadog. If you haven't already, create a [Datadog account](https://www.datadoghq.com).

## Configure a logging source{% #configure-a-logging-source %}

With Log Management, you can analyze and explore data in the Log Explorer, connect [Tracing](https://docs.datadoghq.com/tracing/other_telemetry/connect_logs_and_traces/) and [Metrics](https://docs.datadoghq.com/logs/guide/correlate-logs-with-metrics/) to correlate valuable data across Datadog, and use ingested logs for Datadog [Cloud SIEM](https://docs.datadoghq.com/security/cloud_siem/). The lifecycle of a log within Datadog begins at ingestion from a logging source.

{% image
   source="https://datadog-docs.imgix.net/images/getting_started/logs/getting-started-overview.34ea2bb8a81d3f8530403cca26a9e609.png?auto=format"
   alt="Different types of log configurations" /%}

### Server{% #server %}

There are several [integrations](https://docs.datadoghq.com/getting_started/integrations/) available to forward logs from your server to Datadog. Integrations use a log configuration block in their `conf.yaml` file, which is available in the `conf.d/` folder at the root of your Agent's configuration directory, to forward logs to Datadog from your server.

```yaml
logs:
  - type: file
    path: /path/to/your/integration/access.log
    source: integration_name
    service: integration_name
    sourcecategory: http_web_access
```

To begin collecting logs from a server:

1. If you haven't already, install the [Datadog Agent](https://docs.datadoghq.com/agent/) based on your platform.

   **Note**: Log collection requires Datadog Agent v6+.

1. Collecting logs is **not enabled** by default in the Datadog Agent. To enable log collection, set `logs_enabled` to `true` in your `datadog.yaml` file.

   In the `datadog.yaml` file:

   ```yaml
   ## @param logs_enabled - boolean - optional - default: false
   ## @env DD_LOGS_ENABLED - boolean - optional - default: false
   ## Enable Datadog Agent log collection by setting logs_enabled to true.
   logs_enabled: false

   ## @param logs_config - custom object - optional
   ## Enter specific configurations for your Log collection.
   ## Uncomment this parameter and the one below to enable them.
   ## See https://docs.datadoghq.com/agent/logs/
   logs_config:

     ## @param container_collect_all - boolean - optional - default: false
     ## @env DD_LOGS_CONFIG_CONTAINER_COLLECT_ALL - boolean - optional - default: false
     ## Enable container log collection for all the containers (see ac_exclude to filter out containers)
     container_collect_all: false

     ## @param logs_dd_url - string - optional
     ## @env DD_LOGS_CONFIG_LOGS_DD_URL - string - optional
     ## Define the endpoint and port to hit when using a proxy for logs.
     ## As of Agent version 7.70.0, proxy paths are supported. To forward logs to a
     ## specific proxy path, the URL scheme must be specified: https://proxy.example.com:443/logs
     logs_dd_url: <ENDPOINT>:<PORT>

     ## @param logs_no_ssl - boolean - optional - default: false
     ## @env DD_LOGS_CONFIG_LOGS_NO_SSL - boolean - optional - default: false
     ## Disable the SSL encryption. This parameter should only be used when logs are
     ## forwarded locally to a proxy. It is highly recommended to then handle the SSL encryption
     ## on the proxy side.
     logs_no_ssl: false

     ## @param processing_rules - list of custom objects - optional
     ## @env DD_LOGS_CONFIG_PROCESSING_RULES - list of custom objects - optional
     ## Global processing rules that are applied to all logs. The available rules are
     ## "exclude_at_match", "include_at_match" and "mask_sequences". More information in Datadog documentation:
     ## https://docs.datadoghq.com/agent/logs/advanced_log_collection/#global-processing-rules
     processing_rules:
       - type: <RULE_TYPE>
         name: <RULE_NAME>
         pattern: <RULE_PATTERN>

     ## @param auto_multi_line_detection - boolean - optional - default: false
     ## @env DD_LOGS_CONFIG_AUTO_MULTI_LINE_DETECTION - boolean - optional - default: false
     ## Enable automatic aggregation of multi-line logs for common log patterns.
     ## More information can be found in Datadog documentation:
     ## https://docs.datadoghq.com/agent/logs/auto_multiline_detection/?tab=configurationfile
     auto_multi_line_detection: true

     ## @param force_use_http - boolean - optional - default: false
     ## @env DD_LOGS_CONFIG_FORCE_USE_HTTP - boolean - optional - default: false
     ## Set this parameter to `true` to always send logs via HTTP(S) protocol and never fall back to
     ## raw TCP forwarding (recommended).
     ##
     ## By default, the Agent sends logs in HTTPS batches if HTTPS connectivity can
     ## be established at Agent startup, and falls back to TCP otherwise. This parameter
     ## can be used to override this fallback behavior. It is recommended, but not the default, to
     ## maintain compatibility with previous Agent versions.
     ##
     ## Note: logs are forwarded via HTTPS (encrypted) by default. Use `logs_no_ssl` if you
     ## need unencrypted HTTP instead.
     force_use_http: true

     ## @param http_protocol - string - optional - default: auto
     ## @env DD_LOGS_CONFIG_HTTP_PROTOCOL - string - optional - default: auto
     ## The transport type to use for sending logs. Possible values are "auto" or "http1".
     http_protocol: auto

     ## @param http_timeout - integer - optional - default: 10
     ## @env DD_LOGS_CONFIG_HTTP_TIMEOUT - integer - optional - default: 10
     ## The HTTP timeout to use for sending logs, in seconds.
     http_timeout: 10

     ## @param force_use_tcp - boolean - optional - default: false
     ## @env DD_LOGS_CONFIG_FORCE_USE_TCP - boolean - optional - default: false
     ## By default, logs are sent via HTTP protocol if possible; set this parameter
     ## to `true` to always send logs via TCP. If `force_use_http` is set to `true`, this parameter
     ## is ignored.
     force_use_tcp: true

     ## @param use_compression - boolean - optional - default: true
     ## @env DD_LOGS_CONFIG_USE_COMPRESSION - boolean - optional - default: true
     ## This parameter is available when sending logs via HTTP protocol. If enabled, the Agent
     ## compresses logs before sending them.
     use_compression: true

     ## @param compression_level - integer - optional - default: 6
     ## @env DD_LOGS_CONFIG_COMPRESSION_LEVEL - integer - optional - default: 6
     ## The compression_level parameter accepts values from 0 (no compression)
     ## to 9 (maximum compression but higher resource usage). Only takes effect if
     ## `use_compression` is set to `true`.
     compression_level: 6

     ## @param batch_wait - integer - optional - default: 5
     ## @env DD_LOGS_CONFIG_BATCH_WAIT - integer - optional - default: 5
     ## The maximum time (in seconds) the Datadog Agent waits to fill each batch of logs before sending.
     batch_wait: 5

     ## @param close_timeout - integer - optional - default: 60
     ## @env DD_LOGS_CONFIG_CLOSE_TIMEOUT - integer - optional - default: 60
     ## The maximum number of seconds the Agent spends reading from a file after it has been rotated.
     close_timeout: 60

     ## @param open_files_limit - integer - optional - default: 500
     ## @env DD_LOGS_CONFIG_OPEN_FILES_LIMIT - integer - optional - default: 500
     ## The maximum number of files that can be tailed in parallel.
     ## Note: the default for macOS is 200. The default for
     ## all other systems is 500.
     open_files_limit: 500

     ## @param file_wildcard_selection_mode - string - optional - default: `by_name`
     ## @env DD_LOGS_CONFIG_FILE_WILDCARD_SELECTION_MODE - string - optional - default: `by_name`
     ## The strategy used to prioritize wildcard matches if they exceed the open file limit.
     ##
     ## Choices are `by_name` and `by_modification_time`.
     ##
     ## `by_name` means that each log source is considered and the matching files are ordered
     ## in reverse name order. While there are fewer than `logs_config.open_files_limit` files
     ## being tailed, this process repeats, collecting from each configured source.
     ##
     ## `by_modification_time` takes all log sources and first adds any log sources that
     ## point to a specific file. Next, it finds matches for all wildcard sources.
     ## This resulting list is ordered by which files have been most recently modified,
     ## and the top `logs_config.open_files_limit` most recently modified files are
     ## chosen for tailing.
     ##
     ## WARNING: `by_modification_time` is less performant than `by_name` and triggers
     ## more disk I/O at the configured wildcard log paths.
     file_wildcard_selection_mode: by_name

     ## @param max_message_size_bytes - integer - optional - default: 900000
     ## @env DD_LOGS_CONFIG_MAX_MESSAGE_SIZE_BYTES - integer - optional - default: 900000
     ## The maximum size of a single log message in bytes. Lines longer
     ## than this limit are split into multiple lines; each long line that is
     ## split has `...TRUNCATED...` added as a suffix, and each line that was created
     ## from a split of a previous line has `...TRUNCATED...` added as a prefix.
     ##
     ## Note: Datadog's ingest API truncates any logs greater than 1 MB by discarding the
     ## remainder. See https://docs.datadoghq.com/api/latest/logs/ for details.
     max_message_size_bytes: 900000

     ## @param integrations_logs_files_max_size - integer - optional - default: 10
     ## @env DD_LOGS_CONFIG_INTEGRATIONS_LOGS_FILES_MAX_SIZE - integer - optional - default: 10
     ## The maximum size in MB that an integration log file is allowed to use.
     integrations_logs_files_max_size: 10

     ## @param integrations_logs_total_usage - integer - optional - default: 100
     ## @env DD_LOGS_CONFIG_INTEGRATIONS_LOGS_TOTAL_USAGE - integer - optional - default: 100
     ## The total combined size in MB that all integration log files can use.
     integrations_logs_total_usage: 100

     ## @param k8s_container_use_kubelet_api - boolean - optional - default: false
     ## @env DD_LOGS_CONFIG_K8S_CONTAINER_USE_KUBELET_API - boolean - optional - default: false
     ## Enable container log collection via the kubelet API, typically used for EKS Fargate.
     k8s_container_use_kubelet_api: false

     ## @param streaming - custom object - optional
     ## This section allows you to configure streaming logs via remote config.
     streaming:
       ## @param streamlogs_log_file - string - optional
       ## @env DD_LOGS_CONFIG_STREAMING_STREAMLOGS_LOG_FILE - string - optional
       ## Path to the streamlogs log file.
       ## Default paths:
       ##   * Windows: c:\\programdata\\datadog\\logs\\streamlogs_info\\streamlogs.log
       ##   * Unix: /opt/log/datadog/streamlogs_info/streamlogs.log
       ##   * Linux: /var/log/datadog/streamlogs_info/streamlogs.log
       streamlogs_log_file: <path_to_streamlogs_log_file>
   ```

1. Restart the [Datadog Agent](https://docs.datadoghq.com/agent/configuration/agent-commands/#restart-the-agent).

1. Follow the integration [activation steps](https://app.datadoghq.com/logs/onboarding/server) or the custom files log collection steps on the Datadog site.

**Note**: If you're collecting logs from custom files and need examples for tail files, TCP/UDP, journald, or Windows Events, see [Custom log collection](https://docs.datadoghq.com/agent/logs/?tab=tailfiles#custom-log-collection).
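As a sketch of the tail-file case, a custom application's log configuration lives in its own `conf.d/<APP_NAME>.d/conf.yaml` and follows the same `logs` block shown earlier. The path, `service`, and `pattern` values below are hypothetical placeholders; the `multi_line` rule shape follows the custom log collection documentation linked above:

```yaml
## conf.d/<APP_NAME>.d/conf.yaml -- hypothetical custom log source
logs:
  - type: file
    path: /var/log/myapp/myapp.log   # placeholder path to your application's log file
    service: myapp                   # placeholder service name
    source: custom
    log_processing_rules:
      ## Aggregate stack traces into one event: a new log starts with a date.
      - type: multi_line
        name: new_log_start_with_date
        pattern: \d{4}-\d{2}-\d{2}
```

After adding or changing a `conf.yaml`, restart the Agent for the new source to take effect.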

### Container{% #container %}

As of Datadog Agent v6, the Agent can collect logs from containers. Each containerization service has specific configuration instructions based on where the Agent is deployed or run, and how logs are routed.

For example, [Docker](https://docs.datadoghq.com/agent/docker/log/?tab=containerinstallation) offers two types of Agent installation: installing the Agent on your host, external to the Docker environment, or deploying a containerized version of the Agent in your Docker environment.
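As a minimal sketch of the containerized approach, a Docker Compose service for the Agent might look like the following. The environment variables and read-only mounts follow the Docker log collection documentation linked above; adjust the image tag and paths for your environment:

```yaml
## docker-compose.yml -- minimal sketch of a containerized Agent with log collection
services:
  datadog-agent:
    image: gcr.io/datadoghq/agent:7
    environment:
      - DD_API_KEY=<YOUR_DATADOG_API_KEY>
      - DD_LOGS_ENABLED=true                        # equivalent to logs_enabled: true
      - DD_LOGS_CONFIG_CONTAINER_COLLECT_ALL=true   # tail logs from all containers
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
```

See the Docker log collection documentation for additional recommended mounts and per-container filtering options.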

[Kubernetes](https://docs.datadoghq.com/agent/kubernetes/log/?tab=daemonset) requires that the Datadog Agent run in your Kubernetes cluster, and log collection can be configured using a DaemonSet spec, Helm chart, or with the Datadog Operator.
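With the Helm chart, for example, log collection is controlled by two values. A minimal `values.yaml` sketch (key names from the `datadog/datadog` chart) might look like:

```yaml
## values.yaml -- enable log collection in the datadog/datadog Helm chart
datadog:
  apiKey: <YOUR_DATADOG_API_KEY>
  logs:
    enabled: true                # turn on the Agent's log collection
    containerCollectAll: true    # tail logs from all discovered containers
```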

To begin collecting logs from a container service, follow the [in-app instructions](https://app.datadoghq.com/logs/onboarding/container).

### Cloud{% #cloud %}

You can forward logs from multiple cloud providers, such as AWS, Azure, and Google Cloud, to Datadog. Each cloud provider has its own set of configuration instructions.

For example, AWS service logs are usually stored in S3 buckets or CloudWatch Log groups. You can subscribe to these logs, forward them to an Amazon Kinesis stream, and then forward them to one or multiple destinations. Datadog is one of the default destinations for Amazon Kinesis Delivery streams.

To begin collecting logs from a cloud service, follow the [in-app instructions](https://app.datadoghq.com/logs/onboarding/cloud).

### Client{% #client %}

Datadog supports log collection from clients through SDKs or libraries. For example, use the `datadog-logs` SDK to send logs to Datadog from JavaScript clients.

To begin collecting logs from a client, follow the [in-app instructions](https://app.datadoghq.com/logs/onboarding/client).

### Other{% #other %}

If you're using existing logging services or utilities such as rsyslog, Fluentd, or Logstash, Datadog offers plugins and log forwarding options.

If you don't see your integration, you can type it in the *other integrations* box to get notified when the integration becomes available.

To begin collecting logs from these services, follow the [in-app instructions](https://app.datadoghq.com/logs/onboarding/other).

## Explore your logs{% #explore-your-logs %}

Once a logging source is configured, your logs are available in the [Log Explorer](https://docs.datadoghq.com/logs/explorer/). This is where you can filter, aggregate, and visualize your logs.

For example, if you have logs flowing in from a service that you wish to examine further, filter by `service`. You can further filter by `status`, such as `ERROR`, and select [Group into Patterns](https://docs.datadoghq.com/logs/explorer/analytics/patterns/) to see which part of your service is logging the most errors.
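A search like the one described above combines a facet with a status. For example, using a hypothetical `payment-api` service:

```
service:payment-api status:error
```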

{% image
   source="https://datadog-docs.imgix.net/images/getting_started/logs/error-pattern-2024.286e9223737e7a0d6063de16c12f9ba4.png?auto=format"
   alt="Filtering in the Log Explorer by error pattern" /%}

Aggregate your logs into `Fields` and visualize them as a **Top List** to see your top logging services. Select a status, such as `info` or `warn`, and select **View Logs** from the dropdown menu. The side panel populates with logs based on your selection, so you can quickly see which hosts and services require attention.

{% image
   source="https://datadog-docs.imgix.net/images/getting_started/logs/top-list-view-2024.13ff0972b0aaaf5e6f787f58211bd895.png?auto=format"
   alt="A top list in the Log Explorer" /%}

The Log Explorer offers the following features for log troubleshooting and exploration:

- [Advanced search and filtering](https://docs.datadoghq.com/logs/explorer/search_syntax/) with facets and queries
- [Log Analytics](https://docs.datadoghq.com/logs/explorer/analytics/patterns/) for grouping logs into patterns and aggregating data
- [Visualizations](https://docs.datadoghq.com/logs/explorer/visualize/) to display log data in various formats
- [Saved Views](https://docs.datadoghq.com/logs/explorer/saved_views/) to save and share your search configurations
- [Export options](https://docs.datadoghq.com/logs/explorer/export/) to reuse your queries in different contexts

For detailed information about all Log Explorer features, see the [Log Explorer documentation](https://docs.datadoghq.com/logs/explorer/).

## What's next?{% #whats-next %}

Once a logging source is configured, and your logs are available in the Log Explorer, you can begin to explore a few other areas of log management.

### Log configuration{% #log-configuration %}

- Set [attributes and aliasing](https://docs.datadoghq.com/logs/log_configuration/attributes_naming_convention/) to unify your logs environment.
- Control how your logs are processed with [pipelines](https://docs.datadoghq.com/logs/log_configuration/pipelines/) and [processors](https://docs.datadoghq.com/logs/log_configuration/processors/).
- As Logging without Limits* decouples log ingestion and indexing, you can [configure your logs](https://docs.datadoghq.com/logs/log_configuration/) and choose which logs to [index](https://docs.datadoghq.com/logs/log_configuration/indexes), [retain](https://docs.datadoghq.com/logs/log_configuration/flex_logs), or [archive](https://docs.datadoghq.com/logs/log_configuration/archives).

### Log correlation{% #log-correlation %}

- [Connect logs and traces](https://docs.datadoghq.com/tracing/other_telemetry/connect_logs_and_traces/) to see the exact logs associated with a specific `env`, `service`, or `version`.
- If you're already using metrics in Datadog, you can [correlate logs and metrics](https://docs.datadoghq.com/logs/guide/correlate-logs-with-metrics/) to gain context of an issue.

### Guides{% #guides %}

- [Best practices for Log Management](https://docs.datadoghq.com/logs/guide/best-practices-for-log-management/)
- Dive further into [Logging without Limits*](https://docs.datadoghq.com/logs/guide/getting-started-lwl/)
- Manage sensitive log data with [RBAC settings](https://docs.datadoghq.com/logs/guide/logs-rbac/)

## Further reading{% #further-reading %}

- [Introduction to Log Management](https://learn.datadoghq.com/courses/intro-to-log-management)
- [Going Deeper with Logs Processing](https://learn.datadoghq.com/courses/going-deeper-with-logs-processing)
- [Manage and Monitor Indexed Log Volumes](https://learn.datadoghq.com/courses/log-indexes)
- [Build and Manage Log Pipelines](https://learn.datadoghq.com/courses/log-pipelines)
- [Process Logs Out of the Box with Integration Pipelines](https://learn.datadoghq.com/courses/integration-pipelines)
- [Log Collection & Integrations](https://docs.datadoghq.com/logs/log_collection/)
- [Learn how to configure unified service tagging](https://docs.datadoghq.com/getting_started/tagging/unified_service_tagging)
- [Join an interactive session to optimize your Log Management](https://dtdg.co/fe)

\*Logging without Limits is a trademark of Datadog, Inc.