---
title: Prompt Tracking
description: Use Prompt Tracking to track your prompt templates and versions.
breadcrumbs: Docs > LLM Observability > Monitoring > Prompt Tracking
---

# Prompt Tracking

{% callout %}
# Important note for users on the following Datadog site: app.ddog-gov.com

{% alert level="danger" %}
This product is not supported for your selected [Datadog site](https://docs.datadoghq.com/getting_started/site).
{% /alert %}

{% /callout %}

{% image
   source="https://datadog-docs.imgix.net/images/llm_observability/monitoring/llm-prompt-tracking-hero.89aa66162882d4080feba4f7aea67915.png?auto=format"
   alt="Prompts view for an app in LLM Observability." /%}

In Datadog's LLM Observability, the *Prompt Tracking* feature links prompt templates and versions to LLM calls. Prompt Tracking works alongside LLM Observability's traces, spans, and Playground.

Prompt Tracking enables you to:

- See all prompts used by your LLM application or agent, with call volume and latency over time
- Compare prompts or versions by calls, latency, tokens used, and cost
- See detailed information about a prompt: review its version history, view a text diff, and jump to traces using a specific version
- Filter [Trace Explorer](https://app.datadoghq.com/llm/traces) by prompt name, ID, or version to isolate impacted requests
- Reproduce a run by populating [LLM Observability Playground](https://app.datadoghq.com/llm/playground) with the exact template and variables from any span

## Set up Prompt Tracking{% #set-up-prompt-tracking %}

### With structured prompt metadata{% #with-structured-prompt-metadata %}

To use Prompt Tracking, submit structured prompt metadata (an ID, an optional version, a template, and variables) with your LLM spans.

#### LLM Observability Python SDK{% #llm-observability-python-sdk %}

If you are using the LLM Observability Python SDK (`dd-trace` v3.16.0+), attach prompt metadata to the LLM span using the `prompt` argument or helper. See the [LLM Observability Python SDK documentation](https://docs.datadoghq.com/llm_observability/instrumentation/sdk/?tab=python#prompt-tracking).
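As an illustration of the shape this metadata takes (field names follow the structured prompt metadata described in this guide; see the linked SDK documentation for the exact `prompt` argument signature in your `dd-trace` version), a minimal sketch:

```python
import json

# Hypothetical sketch of structured prompt metadata for Prompt Tracking.
# The exact object the SDK's `prompt` argument expects may differ; refer
# to the Python SDK documentation linked above.
prompt_metadata = {
    "id": "greeting-prompt",                               # stable prompt identifier
    "version": "v1",                                       # user-supplied version tag
    "template": "Hello {{name}}, tell me about {{topic}}", # template with placeholders
    "variables": {"name": "Alice", "topic": "weather"},    # substitutions for this call
}

# Metadata must serialize cleanly to JSON for submission.
serialized = json.dumps(prompt_metadata)
```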

#### LLM Observability Node.js SDK{% #llm-observability-nodejs-sdk %}

If you are using the LLM Observability Node.js SDK (`dd-trace` v5.83.0+), attach prompt metadata to the LLM span using the `prompt` option. See the [LLM Observability Node.js SDK documentation](https://docs.datadoghq.com/llm_observability/instrumentation/sdk/?tab=nodejs#prompt-tracking).

#### LLM Observability API{% #llm-observability-api %}

If you are using the LLM Observability API intake, submit prompt metadata to the Spans API endpoint. See the [LLM Observability HTTP API reference documentation](https://docs.datadoghq.com/llm_observability/instrumentation/api/?tab=model#prompt).

#### OpenTelemetry instrumentation{% #opentelemetry-instrumentation %}

If you are using [OpenTelemetry instrumentation](https://docs.datadoghq.com/llm_observability/instrumentation/otel_instrumentation), you can attach prompt metadata to your LLM spans by setting the `_dd.ml_obs.prompt_tracking` attribute with a JSON string containing your prompt information.

Set the attribute on any LLM span:

{% tab title="Python" %}

```python
import json

span.set_attribute("_dd.ml_obs.prompt_tracking", json.dumps({
    "name": "greeting-prompt",
    "version": "v1",
    "template": "Hello {{name}}, tell me about {{topic}}",
    "variables": {"name": "Alice", "topic": "weather"}
}))
```

{% /tab %}

{% tab title="JavaScript" %}

```javascript
span.setAttribute("_dd.ml_obs.prompt_tracking", JSON.stringify({
    name: "greeting-prompt",
    version: "v1",
    template: "Hello {{name}}, tell me about {{topic}}",
    variables: { name: "Alice", topic: "weather" }
}));
```

{% /tab %}

{% tab title="Go" %}

```go
span.SetAttributes(attribute.String("_dd.ml_obs.prompt_tracking",
    `{"name":"greeting-prompt","version":"v1","template":"Hello {{name}}, tell me about {{topic}}","variables":{"name":"Alice","topic":"weather"}}`,
))
```

{% /tab %}

The following fields are supported in the prompt tracking JSON:

| Field                   | Type             | Required                 | Description                                                                                       |
| ----------------------- | ---------------- | ------------------------ | ------------------------------------------------------------------------------------------------- |
| `template`              | string           | Yes (or `chat_template`) | Template string for single-message prompts                                                        |
| `chat_template`         | array            | Yes (or `template`)      | List of `{"role": "...", "content": "..."}` message templates                                     |
| `id`                    | string           | No                       | Unique identifier for the prompt. Defaults to `{ml_app}_unnamed-prompt` if omitted                |
| `name`                  | string           | No                       | Prompt name. Used as a fallback for `id` if `id` is omitted                                       |
| `version`               | string           | No                       | User-supplied version tag                                                                         |
| `variables`             | object           | No                       | Template variable substitutions                                                                   |
| `rag_context_variables` | array of strings | No                       | Names of variables in `variables` that contain RAG context (ground truth). Used by RAG evaluators |
| `rag_query_variables`   | array of strings | No                       | Names of variables in `variables` that contain the user query. Used by RAG evaluators             |
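As one way to sanity-check a payload before setting the attribute, a small helper (hypothetical, not part of any Datadog SDK) could enforce the required-field rule from the table above and preview how the template and variables combine:

```python
import json
import re

def build_prompt_tracking(meta: dict) -> str:
    """Validate prompt metadata against the field table and return the JSON
    string to set as the `_dd.ml_obs.prompt_tracking` attribute.
    Hypothetical helper for illustration only."""
    if "template" not in meta and "chat_template" not in meta:
        raise ValueError("either 'template' or 'chat_template' is required")
    return json.dumps(meta)

def render(template: str, variables: dict) -> str:
    """Substitute {{name}} placeholders to preview the final prompt text.
    Unknown placeholders are left untouched."""
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(variables.get(m.group(1), m.group(0))),
        template,
    )

payload = build_prompt_tracking({
    "id": "greeting-prompt",
    "version": "v1",
    "template": "Hello {{name}}, tell me about {{topic}}",
    "variables": {"name": "Alice", "topic": "weather"},
})
```

For example, `render("Hello {{name}}, tell me about {{topic}}", {"name": "Alice", "topic": "weather"})` produces `"Hello Alice, tell me about weather"`.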

{% alert level="info" %}
If you are using prompt templates, LLM Observability can automatically attach version information based on prompt content.
{% /alert %}

### With LangChain templates{% #with-langchain-templates %}

If you are using LangChain prompt templates, Datadog automatically captures prompt metadata without code changes. IDs are derived from module or template names. To override these IDs, see [LLM Observability Auto-instrumentation: LangChain](https://docs.datadoghq.com/llm_observability/instrumentation/auto_instrumentation?tab=python#langchain).

## Use Prompt Tracking in LLM Observability{% #use-prompt-tracking-in-llm-observability %}

View your app in LLM Observability and select **Prompts** on the left. The *Prompts view* features the following information:

- **Prompt Call Count**: A timeseries chart displaying calls per prompt (or per version) over time
- **Recent Prompt Updates**: Information about recent prompt updates, including time of last update, call count, average latency, and average tokens per call
- **Most Tokens Used**: Prompts ranked by total tokens (input and output)
- **Highest Latency Prompts**: Prompts ranked by average duration

{% image
   source="https://datadog-docs.imgix.net/images/llm_observability/monitoring/prompt_details.9e726a0f5dfed2ea84ae62b254a2310e.png?auto=format"
   alt="Detail view for a single prompt." /%}

Click a prompt to open a detailed side panel showing its version activity and per-version metrics. From the side panel, you can view a diff of two versions, open Trace Explorer pre-filtered to spans that use a selected version, or start a Playground session pre-populated with the selected version's template and variables.

{% image
   source="https://datadog-docs.imgix.net/images/llm_observability/monitoring/prompt_tracking_trace_explorer3.191bed7c31eee340bf4bb11bee360e01.png?auto=format"
   alt="Trace Explorer filtered by prompt facets in LLM Observability." /%}

Use the LLM Observability Trace Explorer to locate requests by prompt usage: a prompt's name, ID, and version are available as facets for both trace-level and span-level search. Click any LLM span to see the prompt that generated it.

## Further Reading{% #further-reading %}

- [Track, compare, and optimize your LLM prompts with Datadog LLM Observability](https://www.datadoghq.com/blog/llm-prompt-tracking)
