Trace an LLM Application

LLM Observability is not available in the US1-FED site.

LLM Observability is in public beta.


Your application can submit data to LLM Observability in two ways: with Datadog’s Python SDK, or with the LLM Observability API.

Each request fulfilled by your application is represented as a trace on the LLM Observability traces page in Datadog:

An LLM Observability trace displaying each span of a request

A given trace contains spans representing each choice made by an agent or each step of a workflow. A span represents some unit of work that your application is performing. Spans have a start time, duration, name, tags, and attributes.

Multiple spans combine to form a trace; the first span in a trace is its root span.

A trace can contain several kinds of spans. The span kind categorizes the type of work the span is performing.

Only three span kinds can be the root span of a trace:

  • LLM span: An individual LLM inference. LLM spans allow you to track inputs and outputs to your LLM calls; track tokens, error rates, and latencies for your LLM calls; and break down important metrics by models and model providers.
  • Workflow span: A grouping of LLM calls and their contextual operations, such as tool calls or preprocessing steps.
  • Agent span: A dynamic LLM workflow executed by an LLM agent.

Different span kinds also have different parent-child relationships. For details, see Span Kinds.
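To make these relationships concrete, here is a minimal, self-contained sketch of the trace/span model described above. This is a toy illustration only, not the Datadog SDK: the `Span` class and its field names are assumptions made for this example, while the real spans are created for you by the LLM Observability SDK.

```python
# Toy model of the span concepts above: a span has a name, a kind,
# a start time, a duration, tags, and an optional parent span.
import time
from dataclasses import dataclass, field

@dataclass
class Span:
    name: str
    kind: str                      # e.g. "llm", "workflow", "agent"
    parent: "Span | None" = None   # None for the root span of a trace
    tags: dict = field(default_factory=dict)
    start_ns: int = field(default_factory=time.time_ns)
    duration_ns: "int | None" = None

    def finish(self) -> None:
        # Record how long this unit of work took.
        self.duration_ns = time.time_ns() - self.start_ns

# A trace: a "workflow" root span with a child "llm" span,
# mirroring the parent-child relationships described above.
root = Span(name="process_message", kind="workflow", tags={"version": "1.0.0"})
llm_call = Span(name="chat_completion", kind="llm", parent=root)
llm_call.finish()
root.finish()

assert root.parent is None        # the root span starts the trace
assert llm_call.parent is root    # the LLM call is nested in the workflow
```

The nesting shown here is what the LLM Observability traces page visualizes: the workflow span is the root, and the LLM inference appears as its child.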

Instrument an LLM application

This guide uses the LLM Observability SDK for Python. If your application is not written in Python, you can complete the steps below with API requests instead of SDK function calls.

To trace an LLM application:

  1. Install the LLM Observability SDK.
  2. Configure the SDK by providing the required environment variables in your application startup command.
  3. In your code, use the SDK to create spans representing your application’s tasks.
  4. Annotate your spans with input data, output data, metadata (such as temperature), metrics (such as input_tokens), and key-value tags (such as version:1.0.0).
  5. Explore the resulting traces on the LLM Observability traces page, and the resulting metrics on the out-of-the-box LLM Observability dashboard.
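As a sketch of steps 1 and 2, installation and startup configuration might look like the following. Verify the exact environment variable names against the SDK documentation; `my-llm-app`, `app.py`, and the placeholder API key are assumptions for this example.

```shell
# Step 1: install the SDK
pip install ddtrace

# Step 2: provide the required environment variables in the startup command
DD_LLMOBS_ENABLED=1 \
DD_LLMOBS_ML_APP=my-llm-app \
DD_API_KEY=<YOUR_DATADOG_API_KEY> \
DD_SITE=datadoghq.com \
ddtrace-run python app.py
```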

Optionally, you can also enable additional tracing features; these are described in the SDK documentation.

Span creation example

To create a span, apply the LLM Observability SDK's ddtrace.llmobs.decorators.<SPAN_KIND> as a function decorator, replacing <SPAN_KIND> with the desired span kind.

The example below creates a workflow span:

from ddtrace.llmobs.decorators import workflow

@workflow
def process_message():
    ... # user application logic

For more information on alternative tracing methods and tracing features, see the SDK documentation.