---
title: Python Log Collection
description: Configure a Python logger to write logs to a file and tail that file with the Datadog Agent.
---

# Python Log Collection

## Overview{% #overview %}

To send your Python logs to Datadog, configure a Python logger to log to a file on your host and then [tail](https://docs.datadoghq.com/glossary/#tail) that file with the Datadog Agent.

## Configure your logger{% #configure-your-logger %}

Python logs can be complex to handle because of tracebacks. Tracebacks cause logs to be split into multiple lines, which makes them difficult to associate with the original log event. To address this issue, Datadog strongly recommends using a JSON formatter when logging so that you can:

- Ensure each stack trace is wrapped into the correct log.
- Ensure all the attributes of a log event are correctly extracted (severity, logger name, thread name, and so on).

See the setup examples for the following logging libraries:

- [JSON-log-formatter](https://pypi.python.org/pypi/JSON-log-formatter/)
- [Python-json-logger](https://github.com/nhairs/python-json-logger)
- [django-datadog-logger](https://pypi.org/project/django-datadog-logger/)\*

\*The [Python logger](https://docs.python.org/3/library/logging.html#logging) has an `extra` parameter for adding custom attributes. Use `DJANGO_DATADOG_LOGGER_EXTRA_INCLUDE` to specify a regex that matches the name of the loggers for which you want to add the `extra` parameter.
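
For example, a minimal setup with `python-json-logger` might look like the following sketch. The log path, logger name, and `customer_id` attribute are placeholders for illustration, not requirements:

```python
import logging

from pythonjsonlogger import jsonlogger  # pip install python-json-logger

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

# Write JSON-formatted logs to a file that the Datadog Agent can tail.
handler = logging.FileHandler("/var/log/myapp/app.log")  # hypothetical path
handler.setFormatter(
    jsonlogger.JsonFormatter(
        "%(asctime)s %(levelname)s %(name)s %(threadName)s %(message)s"
    )
)
logger.addHandler(handler)

# The `extra` parameter adds custom attributes to the JSON log event.
logger.info("User logged in", extra={"customer_id": 42})
```

With a JSON formatter, a traceback logged with `logger.exception(...)` is serialized inside a single JSON event, so the Agent forwards it as one log instead of one log per line.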

## Configure the Datadog Agent{% #configure-the-datadog-agent %}

Once [log collection](https://docs.datadoghq.com/agent/logs/?tab=tailfiles#activate-log-collection) is enabled, set up [custom log collection](https://docs.datadoghq.com/agent/logs/?tab=tailfiles#custom-log-collection) to tail your log files and send them to Datadog by doing the following:

1. Create a `python.d/` folder in the `conf.d/` Agent configuration directory.
1. Create a file `conf.yaml` in the `conf.d/python.d/` directory with the following content:
   ```yaml
   init_config:
   
   instances:
   
   ## Log section
   logs:
   
     - type: file
       path: "<PATH_TO_PYTHON_LOG>.log"
       service: "<SERVICE_NAME>"
       source: python
       sourcecategory: sourcecode
       # For multiline logs, if they start with a date in the yyyy-mm-dd format, uncomment the following processing rule
       #log_processing_rules:
       #  - type: multi_line
       #    name: new_log_start_with_date
       #    pattern: \d{4}\-(0?[1-9]|1[012])\-(0?[1-9]|[12][0-9]|3[01])
   ```
1. [Restart the Agent](https://docs.datadoghq.com/agent/configuration/agent-commands/).
1. Run the [Agent's status subcommand](https://docs.datadoghq.com/agent/configuration/agent-commands/?tab=agentv6v7#agent-status-and-information) and look for `python` under the `Checks` section to confirm that logs are successfully submitted to Datadog.

If logs are in JSON format, Datadog automatically [parses the log messages](https://docs.datadoghq.com/logs/log_configuration/parsing/) to extract log attributes. Use the [Log Explorer](https://docs.datadoghq.com/logs/explorer/#overview) to view and troubleshoot your logs.

## Connect your service across logs and traces{% #connect-your-service-across-logs-and-traces %}

If APM is enabled for this application, connect your logs and traces by automatically adding trace IDs, span IDs, `env`, `service`, and `version` to your logs by [following the APM Python instructions](https://docs.datadoghq.com/tracing/other_telemetry/connect_logs_and_traces/python).

**Note**: If the APM tracer injects `service` into your logs, it overrides the value set in the Agent configuration.
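
As a sketch of what the linked instructions set up, `ddtrace` can patch the standard `logging` module so that `dd.trace_id`, `dd.span_id`, `dd.env`, `dd.service`, and `dd.version` become available as log record attributes (this assumes `ddtrace` is installed and the tracer is running for your application):

```python
import logging

from ddtrace import patch

# Patch the standard logging module so dd.* attributes are added
# to every log record emitted while a trace is active.
patch(logging=True)

FORMAT = (
    "%(asctime)s %(levelname)s [%(name)s] [%(filename)s:%(lineno)d] "
    "[dd.service=%(dd.service)s dd.env=%(dd.env)s dd.version=%(dd.version)s "
    "dd.trace_id=%(dd.trace_id)s dd.span_id=%(dd.span_id)s] "
    "- %(message)s"
)
logging.basicConfig(format=FORMAT)
log = logging.getLogger(__name__)
log.setLevel(logging.INFO)
```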

Once this is done, the log should have the following format:

```text
2019-01-07 15:20:15,972 DEBUG [flask.app] [app.py:100] [dd.trace_id=5688176451479556031 dd.span_id=4663104081780224235] - this is an example
```

If logs are in JSON format, trace values are automatically extracted when they appear at the top level or in the top-level `extra` or `record.extra` blocks. The following are examples of valid JSON logs where trace values are automatically parsed.

```json
{
  "message":"Hello from the private method",
  "dd.trace_id":"18287620314539322434",
  "dd.span_id":"8440638443344356350",
  "dd.env":"dev",
  "dd.service":"logs",
  "dd.version":"1.0.0"
}
```

```json
{
  "message":"Hello from the private method",
  "extra":{
    "dd.trace_id":"18287620314539322434",
    "dd.span_id":"8440638443344356350",
    "dd.env":"dev",
    "dd.service":"logs",
    "dd.version":"1.0.0"
  }
}
```

```json
{
"message":"Hello from the private method",
  "record":{
    "extra":{
      "dd.trace_id":"1734396609740561719",
      "dd.span_id":"17877262712156101004",
      "dd.env":"dev",
      "dd.service":"logs",
      "dd.version":"1.0.0"
    }
  }
}
```
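
If you manage trace correlation yourself instead of patching the logger, one way to produce the top-level variant shown above is to read the identifiers from the active span and pass them through `extra`. This is a sketch that assumes `ddtrace` is installed and a JSON formatter that merges `extra` fields into the log event:

```python
import logging

from ddtrace import tracer

log = logging.getLogger(__name__)

def log_with_trace_context(message):
    # Read the identifiers of the currently active span, if any.
    span = tracer.current_span()
    trace_id = str(span.trace_id) if span else "0"
    span_id = str(span.span_id) if span else "0"

    # With a JSON formatter that merges `extra` into the event,
    # these appear as top-level dd.* attributes in the log.
    log.info(
        message,
        extra={
            "dd.trace_id": trace_id,
            "dd.span_id": span_id,
        },
    )
```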

## Further Reading{% #further-reading %}

- [How to collect, customize, and centralize Python logs](https://www.datadoghq.com/blog/python-logging-best-practices/)
- [Learn how to process your logs](https://docs.datadoghq.com/logs/log_configuration/processors)
- [Learn more about parsing](https://docs.datadoghq.com/logs/log_configuration/parsing)
- [Learn how to explore your logs](https://docs.datadoghq.com/logs/explorer/)
- [Log Collection Troubleshooting Guide](https://docs.datadoghq.com/logs/faq/log-collection-troubleshooting-guide/)
- [Glossary entry for "tail"](https://docs.datadoghq.com/glossary/#tail)
