---
title: Ease Troubleshooting With Cross-Product Correlation
description: Datadog, the leading service for cloud-scale monitoring.
breadcrumbs: >-
  Docs > Log Management > Logs Guides > Ease Troubleshooting With Cross-Product
  Correlation
---

# Ease Troubleshooting With Cross-Product Correlation

## Overview{% #overview %}

[Unified service tagging](https://docs.datadoghq.com/getting_started/tagging/unified_service_tagging) enables high-level correlation capabilities. There may be times when the starting point of your investigation is a single log, trace, view, or Synthetic test. Correlating logs, traces, and views with other data provides helpful context for estimating business impact and quickly identifying the root cause of an issue.

{% image
   source="https://docs.dd-static.net/images/logs/guide/ease-troubleshooting-with-cross-product-correlation/full-stack-cover.d1a3ae9a0b373d0c968df88aa49d064f.png?auto=format&fit=max&w=850 1x, https://docs.dd-static.net/images/logs/guide/ease-troubleshooting-with-cross-product-correlation/full-stack-cover.d1a3ae9a0b373d0c968df88aa49d064f.png?auto=format&fit=max&w=850&dpr=2 2x"
   alt="Full stack correlation" /%}

This guide walks you through how to correlate your full stack data. Depending on your use case, you may skip certain steps below. Steps that are dependent on others are explicitly stated.

1. Correlate server-side logs with traces
   - Correlate application logs
   - Correlate proxy logs
   - Correlate database logs
1. Correlate frontend products
   - Correlate browser logs with RUM & Session Replay
1. Correlate user experience with server behavior
   - Correlate RUM views with traces
   - Leverage trace correlation to troubleshoot Synthetic tests

## Correlate server-side logs with traces{% #correlate-server-side-logs-with-traces %}

When your users are encountering errors or high latency in your application, viewing the specific logs from a problematic request can reveal exactly what went wrong. By pulling together all the logs pertaining to a given request, you can see in rich detail how it was handled from beginning to end so you can quickly diagnose the issue.

Correlating your logs with traces also lets you apply an [aggressive sampling strategy without losing entity-level consistency](https://docs.datadoghq.com/logs/indexes/#sampling-consistently-with-higher-level-entities) through the use of `trace_id`.

Correlating application logs offers extensive visibility, but some use cases require correlation deeper in your stack. Follow the links to complete setup per use case:

- Correlate proxy logs
- Correlate database logs

### Correlate application logs{% #correlate-application-logs %}

#### Why?{% #why %}

Application logs give the most context around code and business logic issues. They can even help you solve issues in other services. For example, most ORMs log database errors.

#### How?{% #how %}

Use one of the [various OOTB correlations](https://docs.datadoghq.com/tracing/other_telemetry/connect_logs_and_traces). If you use a custom tracer or if you have any issues, follow the [correlation FAQ](https://docs.datadoghq.com/tracing/faq/why-cant-i-see-my-correlated-logs-in-the-trace-id-panel).
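To see what an injected log looks like, the shape can be sketched with the standard `logging` module. This is a minimal sketch: in a real application, ddtrace's log injection populates the trace fields automatically when `DD_LOGS_INJECTION=true`; the values below are stubbed for illustration.

```python
import logging

# Log format mirroring Datadog's trace correlation fields. In a real app,
# ddtrace's log injection fills dd.trace_id and dd.span_id automatically;
# here the values are stubbed to show the resulting log shape.
FORMAT = ("%(asctime)s %(levelname)s [dd.trace_id=%(dd_trace_id)s "
          "dd.span_id=%(dd_span_id)s] %(message)s")

logging.basicConfig(format=FORMAT)
log = logging.getLogger("app")

# Stubbed trace context (normally provided by the tracer).
ctx = {"dd_trace_id": "5678901234567890", "dd_span_id": "1234567890123456"}
log.warning("payment failed for order %s", "A-42", extra=ctx)
```

Logs formatted this way are picked up by the trace ID remapper once the `dd.trace_id` attribute is parsed out in your pipeline.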

### Correlate proxy logs{% #correlate-proxy-logs %}

#### Why?{% #why-1 %}

Proxy logs provide more information than application logs, as they cover more entry points and include information on static content and redirections.

#### How?{% #how-1 %}

The application tracer generates trace IDs by default. Alternatively, the trace ID can be set explicitly by injecting `x-datadog-trace-id` into HTTP request headers.

#### NGINX{% #nginx %}

To correlate NGINX logs with traces, you must configure your NGINX `log_format` to include the trace ID and then configure a Datadog pipeline to parse that ID from your logs.

See [Instrumenting NGINX](https://docs.datadoghq.com/tracing/trace_collection/proxy_setup/nginx) for complete, end-to-end setup instructions.
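As an illustrative sketch (format name and log path are placeholders), a `log_format` that records the trace ID carried in the `x-datadog-trace-id` request header could look like the following. NGINX exposes request headers through `$http_*` variables:

```nginx
http {
    # JSON access log format that captures the trace ID from the
    # x-datadog-trace-id request header via the $http_* variable family.
    log_format with_trace_id escape=json
        '{"timestamp": "$time_iso8601", '
        '"remote_addr": "$remote_addr", '
        '"request": "$request", '
        '"status": "$status", '
        '"dd.trace_id": "$http_x_datadog_trace_id"}';

    access_log /var/log/nginx/access.log with_trace_id;
}
```

A grok parser and trace ID remapper in your log pipeline can then extract and map the `dd.trace_id` attribute.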

### Correlate database logs{% #correlate-database-logs %}

#### Why?{% #why-2 %}

Database logs are often hard to contextualize due to query similarities, variable anonymization, and high usage.

For example, production slow queries are hard to reproduce and analyze without investing a lot of time and resources. Below is an example of how to correlate slow query analysis with traces.

#### How?{% #how-2 %}

#### PostgreSQL{% #postgresql %}

##### Enrich your database logs{% #enrich-your-database-logs %}

PostgreSQL default logs are not detailed. Follow [this integration guide](https://docs.datadoghq.com/integrations/postgres/?tab=host#log-collection) to enrich them.

Slow query best practices also suggest logging execution plans of slow statements automatically, so you don't have to run `EXPLAIN` by hand. To run `EXPLAIN` automatically, update `/etc/postgresql/<VERSION>/main/postgresql.conf` with:

```ini
session_preload_libraries = 'auto_explain'
auto_explain.log_min_duration = '500ms'
```

Queries that take longer than 500 ms log their execution plans.

**Note**: `auto_explain.log_analyze = 'true'` provides even more information, but greatly impacts performance. For more information, see the [official documentation](https://www.postgresql.org/docs/13/auto-explain.html).

##### Inject trace_id into your database logs{% #inject-trace_id-into-your-database-logs %}

Inject `trace_id` into most of your database logs with [SQL comments](https://www.postgresql.org/docs/13/sql-syntax-lexical.html#SQL-SYNTAX-COMMENTS). Here is an example with Flask and SQLAlchemy:

```python
import os

if os.environ.get('DD_LOGS_INJECTION') == 'true':
    from ddtrace import tracer
    from sqlalchemy import event
    from sqlalchemy.engine import Engine

    @event.listens_for(Engine, "before_cursor_execute", retval=True)
    def comment_sql_calls(conn, cursor, statement, parameters, context, executemany):
        # Append the active trace ID as a SQL comment so PostgreSQL logs
        # the statement together with its trace context.
        trace_ctx = tracer.get_log_correlation_context()
        statement = f"{statement} -- dd.trace_id=<{trace_ctx['trace_id']}>"
        return statement, parameters
```

**Note**: This only correlates logs that include a query statement. Error logs like `ERROR: duplicate key value violates unique constraint "<TABLE_KEY>"` stay out of context. Most of the time you can still get error information through your application logs.

Clone and customize the PostgreSQL pipeline:

1. Add a new [grok parser](https://docs.datadoghq.com/logs/log_configuration/processors/#grok-parser):

   ```text
   extract_trace %{data}\s+--\s+dd.trace_id=<%{notSpace:dd.trace_id}>\s+%{data}
   ```

1. Add a [trace ID remapper](https://docs.datadoghq.com/logs/log_configuration/processors/#trace-remapper) on the `dd.trace_id` attribute.
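
As a sanity check, the grok rule above is equivalent to the following regex; the sample log line is illustrative:

```python
import re

# Regex equivalent of the grok rule above: capture the trace ID from the
# "-- dd.trace_id=<...>" comment appended to each statement. The sample
# log line is illustrative.
pattern = re.compile(r"--\s+dd\.trace_id=<(?P<trace_id>[^\s>]+)>")

sample = ("duration: 650.1 ms  statement: SELECT * FROM orders "
          "WHERE user_id = 42 -- dd.trace_id=<5678901234567890>")
match = pattern.search(sample)
print(match.group("trace_id"))  # -> 5678901234567890
```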

Here is an example of a slow query execution plan from a slow trace:

{% image
   source="https://docs.dd-static.net/images/logs/guide/ease-troubleshooting-with-cross-product-correlation/slow-query-root-cause.d4ef88db7dfb318d5c7505c9e53b52a2.png?auto=format&fit=max&w=850 1x, https://docs.dd-static.net/images/logs/guide/ease-troubleshooting-with-cross-product-correlation/slow-query-root-cause.d4ef88db7dfb318d5c7505c9e53b52a2.png?auto=format&fit=max&w=850&dpr=2 2x"
   alt="Slow query logs correlation" /%}

## Correlate frontend products{% #correlate-frontend-products %}

### Correlate browser logs with RUM & Session Replay{% #correlate-browser-logs-with-rum--session-replay %}

#### Why?{% #why-3 %}

[Browser logs](https://docs.datadoghq.com/logs/log_collection/javascript/) inside a RUM event give context and insight into an issue. In the following example, browser logs indicate that the root cause of the bad query is an invalid user ID.

{% image
   source="https://docs.dd-static.net/images/logs/guide/ease-troubleshooting-with-cross-product-correlation/browser-logs-in-rum.118bdcdcf100c391cde110d537ca2d0e.png?auto=format&fit=max&w=850 1x, https://docs.dd-static.net/images/logs/guide/ease-troubleshooting-with-cross-product-correlation/browser-logs-in-rum.118bdcdcf100c391cde110d537ca2d0e.png?auto=format&fit=max&w=850&dpr=2 2x"
   alt="Browser logs in a RUM action" /%}

Correlating your browser logs with RUM also lets you apply an [aggressive sampling strategy without losing entity-level consistency](https://docs.datadoghq.com/logs/indexes/#sampling-consistently-with-higher-level-entities) through attributes like `session_id` and `view.id`.

#### How?{% #how-3 %}

Browser logs and RUM events are automatically correlated. For more information, see [RUM & Session Replay Billing](https://docs.datadoghq.com/account_management/billing/rum/#how-do-you-view-logs-from-the-browser-collector-in-rum). You must [match configurations between the RUM Browser SDK and Logs SDK](https://docs.datadoghq.com/real_user_monitoring/application_monitoring/browser/setup/#initialization-parameters).
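One way to keep the two SDKs aligned is to share the overlapping initialization parameters in a single object. This is a sketch only; all values are placeholders:

```javascript
// Settings that must match between the RUM Browser SDK and the Logs SDK
// for events to correlate. All values are placeholders.
const shared = {
  clientToken: '<CLIENT_TOKEN>',
  site: 'datadoghq.com',
  service: 'shop-frontend',
  env: 'production',
  version: '1.2.3',
  sessionSampleRate: 100,
};

// In an application, these objects would be passed to datadogRum.init()
// and datadogLogs.init() respectively.
const rumConfig = { applicationId: '<APP_ID>', ...shared };
const logsConfig = { forwardErrorsToLogs: true, ...shared };
```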

## Correlate user experience with server behavior{% #correlate-user-experience-with-server-behavior %}

Traditional backend and frontend monitoring are siloed and may require separate workflows to troubleshoot across a stack. Datadog's full stack correlations allow you to identify a root cause—whether it comes from a browser issue or a database downtime—and estimate the user impact.

This section walks you through how to enable these correlations:

- Correlate RUM views with traces
- Leverage the trace correlation to troubleshoot Synthetic tests

### Correlate RUM views with traces{% #correlate-rum-views-with-traces %}

#### Why?{% #why-4 %}

The APM integration with RUM & Session Replay allows you to see your frontend and backend data through one lens, so you can:

- Quickly pinpoint issues anywhere in your stack, including the frontend
- Fully understand what your users are experiencing

#### How?{% #how-4 %}

You can access RUM views in the [Trace Explorer](https://app.datadoghq.com/apm/traces) and APM traces in the [RUM Explorer](https://app.datadoghq.com/rum/explorer). For more information, see [Connect RUM and Traces](https://docs.datadoghq.com/real_user_monitoring/correlate_with_other_telemetry/apm).

{% image
   source="https://docs.dd-static.net/images/logs/guide/ease-troubleshooting-with-cross-product-correlation/trace-details-rum.3cdd5b3f1885ba3f7faaf8444357a983.png?auto=format&fit=max&w=850 1x, https://docs.dd-static.net/images/logs/guide/ease-troubleshooting-with-cross-product-correlation/trace-details-rum.3cdd5b3f1885ba3f7faaf8444357a983.png?auto=format&fit=max&w=850&dpr=2 2x"
   alt="RUM information in a trace" /%}

There is no direct correlation between RUM views and server logs. To see RUM events in a log and logs in a RUM event, click the **Traces** tab.

{% image
   source="https://docs.dd-static.net/images/logs/guide/ease-troubleshooting-with-cross-product-correlation/rum-action-server-logs.6550f92b29f925fc68c1b26277bd295c.png?auto=format&fit=max&w=850 1x, https://docs.dd-static.net/images/logs/guide/ease-troubleshooting-with-cross-product-correlation/rum-action-server-logs.6550f92b29f925fc68c1b26277bd295c.png?auto=format&fit=max&w=850&dpr=2 2x"
   alt="Logs in a RUM action trace preview" /%}

### Leverage trace correlation to troubleshoot Synthetic tests{% #leverage-trace-correlation-to-troubleshoot-synthetic-tests %}

#### Why?{% #why-5 %}

The APM integration with Synthetic Monitoring allows you to navigate from a failed test run to the root cause of the issue with the trace generated by the test.

{% image
   source="https://docs.dd-static.net/images/logs/guide/ease-troubleshooting-with-cross-product-correlation/synthetic-trace-root-cause.1f003e91a681749564a1b527ff1e675f.png?auto=format&fit=max&w=850 1x, https://docs.dd-static.net/images/logs/guide/ease-troubleshooting-with-cross-product-correlation/synthetic-trace-root-cause.1f003e91a681749564a1b527ff1e675f.png?auto=format&fit=max&w=850&dpr=2 2x"
   alt="Root cause of a synthetic test fail" /%}

Network-related specifics from your test, combined with backend, infrastructure, and log information from your trace, and RUM events (for [browser tests](https://docs.datadoghq.com/synthetics/browser_tests/) only), give you additional details about your application's behavior and user experience.

#### How?{% #how-5 %}

After enabling APM on your application's endpoint, you can access APM traces in the [Synthetic Monitoring & Continuous Testing page](https://app.datadoghq.com/synthetics/tests).

For more information, see [Connect Synthetic Tests and Traces](https://docs.datadoghq.com/synthetics/apm).

## Further Reading{% #further-reading %}

- [Learn about Unified Service Tagging](https://docs.datadoghq.com/getting_started/tagging/unified_service_tagging/)
- [Connect Logs and Traces](https://docs.datadoghq.com/tracing/other_telemetry/connect_logs_and_traces)
- [Connect RUM & Session Replay and Traces](https://docs.datadoghq.com/real_user_monitoring/correlate_with_other_telemetry/apm/)
- [Connect Synthetic Tests and Traces](https://docs.datadoghq.com/synthetics/apm/)
