Overview
The LLM Observability HTTP API provides an interface for developers to send LLM-related traces and spans to Datadog. If your application is written in Python, Node.js, or Java, you can use the LLM Observability SDKs.
The API accepts spans with timestamps no more than 24 hours old, allowing limited backfill of delayed data.
Spans API
Use this endpoint to send spans to Datadog. For details on the available kinds of spans, see Span Kinds.
{"data":{"type":"span","attributes":{"ml_app":"weather-bot","session_id":"1","tags":["service:weather-bot","env:staging","user_handle:example-user@example.com","user_id:1234"],"spans":[{"parent_id":"undefined","trace_id":"<TEST_TRACE_ID>","span_id":"<AGENT_SPAN_ID>","name":"health_coach_agent","meta":{"kind":"agent","input":{"value":"What is the weather like today and do i wear a jacket?"},"output":{"value":"It's very hot and sunny, there is no need for a jacket"}},"start_ns":1713889389104152000,"duration":10000000000},{"parent_id":"<AGENT_SPAN_ID>","trace_id":"<TEST_TRACE_ID>","span_id":"<WORKFLOW_ID>","name":"qa_workflow","meta":{"kind":"workflow","input":{"value":"What is the weather like today and do i wear a jacket?"},"output":{"value":"It's very hot and sunny, there is no need for a jacket"}},"start_ns":1713889389104152000,"duration":5000000000},{"parent_id":"<WORKFLOW_SPAN_ID>","trace_id":"<TEST_TRACE_ID>","span_id":"<LLM_SPAN_ID>","name":"generate_response","meta":{"kind":"llm","input":{"messages":[{"role":"system","content":"Your role is to ..."},{"role":"user","content":"What is the weather like today and do i wear a jacket?"}]},"output":{"messages":[{"content":"It's very hot and sunny, there is no need for a jacket","role":"assistant"}]}},"start_ns":1713889389104152000,"duration":2000000000}]}}}
Response
If the request is successful, the API responds with a 202 status code and an empty body.
API standards
Error
| Field | Type | Description |
| --- | --- | --- |
| message | string | The error message. |
| stack | string | The stack trace. |
| type | string | The error type. |
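For illustration only, the Error fields might appear on a failing span like the Python dict below; nesting the object under `meta.error` is an assumption for the sketch, not something this table states.

```python
# Hypothetical span fragment showing the Error fields; placing the object
# under meta.error is an assumption for illustration only.
failed_span = {
    "name": "generate_response",
    "span_id": "<LLM_SPAN_ID>",
    "trace_id": "<TEST_TRACE_ID>",
    "parent_id": "undefined",
    "start_ns": 1713889389104152000,
    "duration": 2_000_000_000,
    "meta": {
        "kind": "llm",
        "error": {
            "message": "rate limit exceeded",                        # the error message
            "stack": "Traceback (most recent call last): ...",       # the stack trace
            "type": "RateLimitError",                                # the error type
        },
    },
}
```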
IO

| Field | Type | Description |
| --- | --- | --- |
| value | string | Input or output value. If not set, this value is inferred from messages or documents. |

ToolResult

| Field | Type | Description |
| --- | --- | --- |
| tool_id | string | Unique identifier matching the corresponding tool call. |
| type | string | The type of tool result. |
ToolDefinition
| Field | Type | Description |
| --- | --- | --- |
| name | string | The name of the tool. |
| description | string | A description of what the tool does. |
| schema | Dict[key (string), value] | The schema defining the tool's parameters. |
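A sketch of a tool definition expressed as a Python dict. The JSON-Schema-style content of `schema` is an assumption for illustration; the table above only says the field defines the tool's parameters.

```python
# Illustrative tool definition; the JSON-Schema-like shape of "schema"
# is an assumption, not a documented requirement.
weather_tool = {
    "name": "get_weather",
    "description": "Looks up the current weather for a city.",
    "schema": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City to look up."},
        },
        "required": ["city"],
    },
}
```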
SpanField
| Field | Type | Description |
| --- | --- | --- |
| kind | string | The kind of span field. |
Prompt
LLM Observability registers new versions of templates when the template or chat_template value is updated. If the input is expected to change between invocations, extract the dynamic parts into a variable.
| Field | Type | Description |
| --- | --- | --- |
| id | string | Logical identifier for this prompt template. Should be unique per ml_app. |
| name | string | Human-readable name for the prompt. |
| version | string | Version tag for the prompt (for example, "1.0.0"). If not provided, LLM Observability automatically generates a version by computing a hash of the template content. |
| template | string | Single-string template form. Use placeholder syntax (like {{variable_name}}) to embed variables. This should not be set with chat_template. |
| chat_template | [Message] | Multi-message template form. Use placeholder syntax (like {{variable_name}}) to embed variables in message content. This should not be set with template. |
| variables | Dict[key (string), string] | Variables used to render the template. Keys correspond to placeholder names in the template. |
| query_variable_keys | [string] | Variable keys that contain the user query. Used for hallucination detection. |
| context_variable_keys | [string] | Variable keys that contain ground-truth or context content. Used for hallucination detection. |
| tags | Dict[key (string), string] | Tags to attach to the prompt run. |
{"id":"translation-prompt","chat_template":[{"role":"system","content":"You are a translation service. You translate to {{language}}."},{"role":"user","content":"{{user_input}}"}],"variables":{"language":"french","user_input":"<USER_INPUT_TEXT>"}}
Meta
| Field | Type | Description |
| --- | --- | --- |
| kind [required] | string | The span kind: "agent", "workflow", "llm", "tool", "task", "embedding", or "retrieval". |
| metadata | Dict[key (string), value] where the value is a float, bool, or string | Data about the span that is not input or output related. Use the following metadata keys for LLM spans: temperature, max_tokens, model_name, and model_provider. |
| model_name | string | The name of the model used for LLM spans. |
| model_provider | string | The provider of the model used for LLM spans. |
| model_version | string | The version of the model used for LLM spans. |
| embedding_for_prompt_idx | integer | The prompt index for which embeddings were computed. |

Metrics

A dictionary of metrics to collect for the span. The keys are metric names (strings) and the values are metric values (float64 pointers). Common metrics include:

- input_tokens - The number of input tokens (LLM spans)
- output_tokens - The number of output tokens (LLM spans)
- total_tokens - The total number of tokens (LLM spans)
- time_to_first_token - Time in seconds to the first output token (streaming LLM, root spans)
- time_per_output_token - Time in seconds per output token (streaming LLM, root spans)
- input_cost - Input cost in dollars (LLM and embedding spans)
- output_cost - Output cost in dollars (LLM spans)
- total_cost - Total cost in dollars (LLM spans)
- non_cached_input_cost - Non-cached input cost in dollars (LLM spans)
- cache_read_input_cost - Cache read input cost in dollars (LLM spans)
- cache_write_input_cost - Cache write input cost in dollars (LLM spans)

Type: Dict[key (string), float64]
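Putting the Meta fields and metrics together, an LLM span fragment might look like the sketch below. All values are illustrative, and placing the metrics dictionary at the span level next to meta is an assumption, since the table above does not state where it attaches.

```python
# Illustrative LLM span fragment combining meta.metadata and metrics;
# all numbers are made up, and the span-level "metrics" key is an assumption.
llm_span = {
    "name": "generate_response",
    "span_id": "<LLM_SPAN_ID>",
    "trace_id": "<TEST_TRACE_ID>",
    "parent_id": "<WORKFLOW_SPAN_ID>",
    "start_ns": 1713889389104152000,
    "duration": 2_000_000_000,
    "meta": {
        "kind": "llm",
        "metadata": {
            "temperature": 0.2,
            "max_tokens": 256,
            "model_name": "gpt-4o-mini",
            "model_provider": "openai",
        },
    },
    "metrics": {
        "input_tokens": 410.0,
        "output_tokens": 52.0,
        "total_tokens": 462.0,
        "total_cost": 0.0009,
    },
}
```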
Span
| Field | Type | Description |
| --- | --- | --- |
| name [required] | string | The name of the span. |
| span_id [required] | string | An ID unique to the span. |
| trace_id [required] | string | A unique ID shared by all spans in the same trace. |
| parent_id [required] | string | ID of the span's direct parent. If the span is a root span, the parent_id must be "undefined". |
| session_id | string | The session the list of spans belongs to. Can be overridden or set on individual spans as well. |
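A minimal root span as a Python dict, combining the required fields above with the timing fields (start_ns, duration) used in the request example earlier; the literal string "undefined" marks it as a root. IDs here are placeholders.

```python
import time

# Minimal root span with the required fields plus the timing fields shown
# in the earlier request example.  Replace the placeholder IDs with your own
# unique values.
root_span = {
    "name": "qa_workflow",
    "span_id": "<WORKFLOW_SPAN_ID>",
    "trace_id": "<TEST_TRACE_ID>",
    "parent_id": "undefined",      # literal string marks a root span
    "start_ns": time.time_ns(),    # start time in nanoseconds
    "duration": 5_000_000_000,     # duration in nanoseconds
}
```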
Tag
Tags should be formatted as a list of strings (for example, ["user_handle:dog@gmail.com", "app_version:1.0.0"]). They store contextual information about the span.
{"data":{"type":"evaluation_metric","attributes":{"metrics":[{"join_on":{"span":{"span_id":"20245611112024561111","trace_id":"13932955089405749200"}},"ml_app":"weather-bot","timestamp_ms":1609459200,"metric_type":"categorical","label":"Sentiment","categorical_value":"Positive",},{"join_on":{"tag":{"key":"msg_id","value":"1123132"}},"ml_app":"weather-bot","timestamp_ms":1609479200,"metric_type":"score","label":"Accuracy","score_value":3,"assessment":"fail","reasoning":"The response provided incorrect information about the weather forecast."},{"join_on":{"tag":{"key":"msg_id","value":"1123132"}},"ml_app":"weather-bot","timestamp_ms":1609479200,"metric_type":"boolean","label":"Topic Relevancy","boolean_value":true,}]}}}
{"data":{"type":"evaluation_metric","id":"456f4567-e89b-12d3-a456-426655440000","attributes":{"metrics":[{"id":"d4f36434-f0cd-47fc-884d-6996cee26da4","join_on":{"span":{"span_id":"20245611112024561111","trace_id":"13932955089405749200"}},"ml_app":"weather-bot","timestamp_ms":1609459200,"metric_type":"categorical","label":"Sentiment","categorical_value":"Positive"},{"id":"cdfc4fc7-e2f6-4149-9c35-edc4bbf7b525","join_on":{"tag":{"key":"msg_id","value":"1123132"}},"span_id":"20245611112024561111","trace_id":"13932955089405749200","ml_app":"weather-bot","timestamp_ms":1609479200,"metric_type":"score","label":"Accuracy","score_value":3,"assessment":"fail","reasoning":"The response provided incorrect information about the weather forecast."},{"id":"haz3fc7-g3p2-1s37-8m12-ndk4hbf7a522","join_on":{"tag":{"key":"msg_id","value":"1123132"}},"span_id":"20245611112024561111","trace_id":"13932955089405749200","ml_app":"weather-bot","timestamp_ms":1609479200,"metric_type":"boolean","label":"Topic Relevancy","boolean_value":true,}]}}}