---
title: Correlating Python Logs and Traces
description: Connect your Python logs and traces to correlate them in Datadog.
breadcrumbs: >-
  Docs > APM > Correlate APM Data with Other Telemetry > Correlate Logs and
  Traces > Correlating Python Logs and Traces
---

# Correlating Python Logs and Traces

## Injection{% #injection %}

### Standard library logging{% #standard-library-logging %}

To correlate your [traces](https://docs.datadoghq.com/tracing/glossary/#trace) with your logs, complete the following steps:

1. Activate automatic instrumentation.
1. Include required attributes from the log record.

#### Step 1 - Activate automatic instrumentation{% #step-1---activate-automatic-instrumentation %}

Activate automatic instrumentation using one of the following options:

Option 1: [Library Injection](https://docs.datadoghq.com/tracing/trace_collection/library_injection_local/):

1. Follow the instructions in [Library Injection](https://docs.datadoghq.com/tracing/trace_collection/library_injection_local/) to set up tracing.
1. For older trace versions (`ddtrace<3.11`), set the environment variable `DD_LOGS_INJECTION=true` in the application deployment manifest file.

Option 2: `ddtrace-run`:

1. Import **ddtrace** into the application.
1. Run the application with `ddtrace-run` (for example, `ddtrace-run python appname.py`).

Option 3: `import ddtrace.auto`:

1. Import **ddtrace.auto** into the application. This automatically enables the `logging`, `loguru`, and `structlog` integrations.

#### Step 2 - Include required attributes{% #step-2---include-required-attributes %}

Update your log format to include the required attributes from the log record.

Include the `dd.env`, `dd.service`, `dd.version`, `dd.trace_id`, and `dd.span_id` attributes from the log record in your format string.

Here is an example that uses `logging.basicConfig` to configure log injection. Note that running with `ddtrace-run` or adding `import ddtrace.auto` is required:

```python
import logging
from ddtrace import tracer

FORMAT = ('%(asctime)s %(levelname)s [%(name)s] [%(filename)s:%(lineno)d] '
          '[dd.service=%(dd.service)s dd.env=%(dd.env)s dd.version=%(dd.version)s dd.trace_id=%(dd.trace_id)s dd.span_id=%(dd.span_id)s] '
          '- %(message)s')
logging.basicConfig(format=FORMAT)
log = logging.getLogger(__name__)
log.setLevel(logging.INFO)

@tracer.wrap()
def hello():
    log.info('Hello, World!')

hello()
```

To learn more about logs injection, read the [ddtrace documentation](https://ddtrace.readthedocs.io/en/stable/advanced_usage.html#logs-injection).
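The `%(dd.*)s` placeholders above are ordinary `%`-style lookups into the log record's attribute dictionary. As a self-contained illustration (no tracer involved), the snippet below supplies those attributes by hand through `extra`; in a real application, ddtrace injects them onto the record for you:

```python
import io
import logging

# Illustration only: ddtrace normally injects these record attributes.
# Here they are supplied manually via `extra` to show how the format
# string resolves them.
FORMAT = ('%(levelname)s [dd.service=%(dd.service)s dd.env=%(dd.env)s '
          'dd.trace_id=%(dd.trace_id)s dd.span_id=%(dd.span_id)s] '
          '- %(message)s')

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter(FORMAT))

log = logging.getLogger("demo")
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("Hello, World!", extra={
    "dd.service": "hello",
    "dd.env": "dev",
    "dd.trace_id": "1234567890",
    "dd.span_id": "987654321",
})

print(stream.getvalue().strip())
```

Any attribute passed through `extra` (or injected by ddtrace) becomes available to `%`-style format strings this way.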

### No standard library logging{% #no-standard-library-logging %}

By default, `tracer.get_log_correlation_context()` returns a dictionary containing the `dd.env`, `dd.service`, `dd.version`, `dd.trace_id`, and `dd.span_id` values for the active trace context:

```python
from ddtrace import tracer

log_correlation_dict = tracer.get_log_correlation_context()
```
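If you are using a different logging library, you can merge the returned dictionary into each log entry yourself. The sketch below uses a hard-coded stand-in dictionary in place of a live `tracer.get_log_correlation_context()` call (which requires an initialized tracer), so the values shown are hypothetical:

```python
import json

# Stand-in for tracer.get_log_correlation_context(); in a real
# application, call the tracer so the ids reflect the active span.
log_correlation_dict = {
    "dd.env": "dev",
    "dd.service": "hello",
    "dd.version": "abc123",
    "dd.trace_id": "1234567890",
    "dd.span_id": "987654321",
}

def log_event(message):
    # Merge the correlation identifiers into every log entry.
    entry = {"event": message}
    entry.update(log_correlation_dict)
    return json.dumps(entry)

print(log_event("In tracer context"))
```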

As an illustration of this approach, the following example defines a function as a *processor* in `structlog` to add tracer fields to the log output:

```python
import ddtrace
from ddtrace import tracer

import structlog

def tracer_injection(logger, log_method, event_dict):
    # get correlation ids from current tracer context
    event_dict.update(tracer.get_log_correlation_context())
    return event_dict

structlog.configure(
    processors=[
        tracer_injection,
        structlog.processors.JSONRenderer()
    ]
)
log = structlog.get_logger()
```

Once the logger is configured, executing a traced function that logs an event yields the injected tracer information:

```text
>>> traced_func()
{"event": "In tracer context", "dd.trace_id": 9982398928418628468, "dd.span_id": 10130028953923355146, "dd.env": "dev", "dd.service": "hello", "dd.version": "abc123"}
```

**Note**: If you are not using a [Datadog Log Integration](https://docs.datadoghq.com/logs/log_collection/python/#configure-the-datadog-agent) to parse your logs, your custom log parsing rules must ensure that `dd.trace_id` and `dd.span_id` are parsed as strings and remapped using the [Trace Remapper](https://docs.datadoghq.com/logs/log_configuration/processors/#trace-remapper). For more information, see [Correlated Logs Not Showing Up in the Trace ID Panel](https://docs.datadoghq.com/tracing/troubleshooting/correlated-logs-not-showing-up-in-the-trace-id-panel/?tab=custom).
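One reason these ids must stay strings: 64-bit trace ids exceed the integer range that some JSON consumers (notably JavaScript-based pipelines) can represent exactly, so numeric serialization can silently corrupt them. A minimal sketch of stringifying the ids before serializing a log record (the record contents here are illustrative):

```python
import json

# Hypothetical log record: large 64-bit trace ids lose precision when
# downstream JSON consumers parse them as floating-point numbers, so
# convert them to strings before shipping the log.
record = {
    "message": "In tracer context",
    "dd.trace_id": 9982398928418628468,
    "dd.span_id": 10130028953923355146,
}

safe_record = {
    key: str(value) if key in ("dd.trace_id", "dd.span_id") else value
    for key, value in record.items()
}

print(json.dumps(safe_record))
```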

[See the Python logging documentation](https://docs.datadoghq.com/logs/log_collection/python/#configure-the-datadog-agent) to ensure that the Python Log Integration is properly configured so that your Python logs are automatically parsed.

## Further Reading{% #further-reading %}

- [Manually instrument your application to create traces.](https://docs.datadoghq.com/tracing/manual_instrumentation/)
- [Implement OpenTracing across your applications.](https://docs.datadoghq.com/tracing/opentracing/)
- [Explore your services, resources, and traces.](https://docs.datadoghq.com/tracing/glossary/)
- [Correlate request logs with traces automatically](https://www.datadoghq.com/blog/request-log-correlation/)
- [Ease troubleshooting with cross product correlation.](https://docs.datadoghq.com/logs/guide/ease-troubleshooting-with-cross-product-correlation/)
