
Overview

Datadog’s LLM Observability can automatically trace and annotate calls to supported LLM frameworks and libraries through various LLM integrations. When you run your LLM application with the LLM Observability SDK, these LLM integrations are enabled by default and provide out-of-the-box traces and observability, without you having to change your code.

Automatic instrumentation works for calls to supported frameworks and libraries. To trace other calls (for example: API calls, database queries, internal functions), see the LLM Observability SDK reference for how to add manual instrumentation.

Supported frameworks and libraries

Python

Framework              Supported Versions    Tracer Version
Amazon Bedrock         >= 1.31.57            >= 2.9.0
Amazon Bedrock Agents  >= 1.38.26            >= 3.10.0
Anthropic              >= 0.28.0             >= 2.10.0
CrewAI                 >= 0.105.0            >= 3.5.0
Google ADK             >= 1.0.0              >= 3.15.0
Google GenAI           >= 1.21.1             >= 3.11.0
Google GenerativeAI    >= 0.7.2              >= 2.14.0
LangChain              >= 0.0.192            >= 2.9.0
LangGraph              >= 0.2.23             >= 3.10.1
LiteLLM                >= 1.70.0             >= 3.9.0
MCP                    >= 1.10.0             >= 3.11.0
OpenAI, Azure OpenAI   >= 0.26.5             >= 2.9.0
OpenAI Agents          >= 0.0.2              >= 3.5.0
Pydantic AI            >= 0.3.0              >= 3.11.0
Vertex AI              >= 1.71.1             >= 2.18.0

Node.js

Framework              Supported Versions    Tracer Version
Amazon Bedrock         >= 3.422.0            >= 5.35.0 (CJS), >= 5.35.0 (ESM)
Anthropic              >= 0.14.0             >= 5.71.0 (CJS), >= 5.71.0 (ESM)
LangChain              >= 0.1.0              >= 5.32.0 (CJS), >= 5.38.0 (ESM)
OpenAI, Azure OpenAI   >= 3.0.0              >= 4.49.0, >= 5.25.0 (CJS), >= 5.38.0 (ESM)
Vercel AI SDK          >= 4.0.0              >= 5.63.0 (CJS), >= 5.63.0 (ESM)
VertexAI               >= 1.0.0              >= 5.44.0 (CJS), >= 5.44.0 (ESM)

Automatic instrumentation for ESM projects is supported starting with dd-trace@5.38.0. To enable automatic instrumentation in your ESM projects, run your application with the following Node.js option:

--import dd-trace/register.js

For command-line setup, use one of the following options instead:

--import dd-trace/initialize.mjs
# or
--loader dd-trace/initialize.mjs
Troubleshooting: Custom loader for module incompatibility

If your application fails to launch when using this option, the cause is likely a module incompatibility. You can create your own hook file that excludes the incompatible modules and files:

// hook.mjs

import { register } from 'node:module';

register('import-in-the-middle/hook.mjs', import.meta.url, {
  parentURL: import.meta.url,
  data: { exclude: [
    /langsmith/,
    /openai\/_shims/,
    /openai\/resources\/chat\/completions\/messages/,
    // Add any other modules you want to exclude
  ]}
});

To use this custom loader, run your application with the following Node option:

--import ./hook.mjs

To use LLM Observability integrations in bundled applications (esbuild, Webpack, Next.js), you must exclude these integrations’ modules from bundling.

esbuild

If you are using esbuild, see Bundling with the Node.js tracer.

Webpack or Next.js

For Webpack or Next.js bundling, specify the corresponding integration in the externals section of the webpack configuration:

// next.config.js
module.exports = {
  webpack: (config) => {
    // this may be a necessary inclusion
    config.resolve.fallback = {
      ...config.resolve.fallback,
      graphql: false,
    }

    // exclude OpenAI from bundling
    config.externals.push('openai')

    return config
  }
}

// webpack.config.js
module.exports = {
  resolve: {
    fallback: {
      graphql: false,
    }
  },
  externals: {
    openai: 'openai'
  }
}

LLM integrations

Datadog’s LLM integrations capture latency, errors, input parameters, input and output messages, and token usage (when available) for traced calls.

The Amazon Bedrock integration provides automatic instrumentation for the Amazon Bedrock Runtime Python SDK’s chat model calls (using Boto3/Botocore).

Traced methods

The Amazon Bedrock integration instruments the following methods:

The Amazon Bedrock integration does not support tracing embedding calls.
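
For example, assuming the tracer is enabled, a chat model call made through Botocore like the following sketch is traced automatically (the model ID is illustrative, and AWS credentials are assumed to be configured):

import boto3

# Bedrock Runtime client; region and credentials come from your AWS configuration
client = boto3.client("bedrock-runtime")

# Chat model call traced by the Amazon Bedrock integration
response = client.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
    messages=[{"role": "user", "content": [{"text": "Hello!"}]}],
)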

The Amazon Bedrock integration provides automatic tracing for the Amazon Bedrock Runtime Node.js SDK’s chat model calls (using BedrockRuntimeClient).

Traced methods

The Amazon Bedrock integration instruments the following methods:

The Amazon Bedrock Agents integration provides automatic tracing for the Amazon Bedrock Agents Runtime Python SDK’s agent invoke calls (using Boto3/Botocore).

Traced methods

The Amazon Bedrock Agents integration instruments the following methods:

By default, the Amazon Bedrock Agents integration only traces the overall InvokeAgent call. To trace intra-agent steps, set enableTrace=True in the InvokeAgent request parameters, as shown in the sketch below.
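
A minimal sketch with Boto3 (the agent, alias, and session identifiers are placeholders):

import boto3

client = boto3.client("bedrock-agent-runtime")

# enableTrace=True lets the integration trace intra-agent steps
response = client.invoke_agent(
    agentId="<YOUR_AGENT_ID>",
    agentAliasId="<YOUR_AGENT_ALIAS_ID>",
    sessionId="my-session",
    inputText="Hello!",
    enableTrace=True,
)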

The Anthropic integration provides automatic tracing for the Anthropic Python SDK’s chat message calls.

Traced methods

The Anthropic integration instruments the following methods:

  • Chat messages (including streamed calls):
    • Anthropic().messages.create(), AsyncAnthropic().messages.create()
  • Streamed chat messages:
    • Anthropic().messages.stream(), AsyncAnthropic().messages.stream()
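
For example, with the integration enabled, a standard chat message call like this sketch is traced automatically (the model name is illustrative; assumes ANTHROPIC_API_KEY is set):

import anthropic

client = anthropic.Anthropic()

# Traced as an LLM span by the Anthropic integration
message = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=256,
    messages=[{"role": "user", "content": "Hello!"}],
)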

The Anthropic integration provides automatic tracing for the Anthropic Node.js SDK’s chat message calls.

Traced methods

The Anthropic integration instruments the following methods:

The CrewAI integration automatically traces execution of Crew kickoffs, including task/agent/tool invocations, made through CrewAI’s Python SDK.

Traced methods

The CrewAI integration instruments the following methods:

The Google ADK integration provides automatic tracing for agent runs, tool calls, and code executions made through Google’s ADK Python SDK.

Traced methods

The Google ADK integration instruments the following methods:

Both run_live and run_async methods are supported.

The Google GenAI integration automatically traces methods in the Google GenAI Python SDK.

Note: The Google GenAI Python SDK succeeds the Google GenerativeAI SDK and supports both the Gemini Developer API and Vertex AI.

Traced methods

The Google GenAI integration instruments the following methods:

  • Generating content (including streamed calls):
    • models.generate_content() (Also captures chat.send_message())
    • aio.models.generate_content() (Also captures aio.chat.send_message())
  • Embedding content:
    • models.embed_content()
    • aio.models.embed_content()
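
For example, a minimal generation call with the Google GenAI SDK (the model name is illustrative; assumes GEMINI_API_KEY is set):

from google import genai

client = genai.Client()

# Traced by the Google GenAI integration
response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="Hello!",
)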

The Google GenerativeAI integration provides automatic tracing for the Google GenerativeAI Python SDK content generation calls.

Note: The Google GenerativeAI SDK is deprecated and has been succeeded by the Google GenAI SDK.

Traced methods

The Google GenerativeAI integration instruments the following methods:

  • Generating content (including streamed calls):
    • model.generate_content() (Also captures chat.send_message())
    • model.generate_content_async() (Also captures chat.send_message_async())
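
For example, a minimal sketch with this SDK (the model name is illustrative; assumes a Gemini API key):

import google.generativeai as genai

genai.configure(api_key="<YOUR_GEMINI_API_KEY>")
model = genai.GenerativeModel("gemini-1.5-flash")

# Traced by the Google GenerativeAI integration
response = model.generate_content("Hello!")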

The LangChain integration provides automatic tracing for the LangChain Python SDK’s LLM, chat model, and chain calls.

Traced methods

The LangChain integration instruments the following methods:

  • LLMs:
    • llm.invoke(), llm.ainvoke()
    • llm.stream(), llm.astream()
  • Chat models:
    • chat_model.invoke(), chat_model.ainvoke()
    • chat_model.stream(), chat_model.astream()
  • Chains/LCEL:
    • chain.invoke(), chain.ainvoke()
    • chain.batch(), chain.abatch()
    • chain.stream(), chain.astream()
  • Embeddings:
    • OpenAI: OpenAIEmbeddings.embed_documents(), OpenAIEmbeddings.embed_query()
  • Tools:
    • BaseTool.invoke(), BaseTool.ainvoke()
  • Retrieval:
    • langchain_community.<vectorstore>.similarity_search()
    • langchain_pinecone.similarity_search()
  • Prompt Templating:
    • BasePromptTemplate.invoke(), BasePromptTemplate.ainvoke()
    For best results, assign templates to variables with meaningful names; automatic instrumentation uses these names to identify prompts (see the end-to-end sketch below).
    # "translation_template" will be used to identify the template in Datadog
    translation_template = PromptTemplate.from_template("Translate {text} to {language}")
    chain = translation_template | llm
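
Putting it together, a minimal end-to-end sketch (assumes langchain-openai is installed and OPENAI_API_KEY is set; the model name is illustrative):

from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

# The variable name "translation_template" identifies this prompt in Datadog
translation_template = PromptTemplate.from_template("Translate {text} to {language}")

# The chain invocation, prompt templating, and chat model call are each traced
chain = translation_template | llm
result = chain.invoke({"text": "Hello!", "language": "French"})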
    

The LangChain integration provides automatic tracing for the LangChain Node.js SDK’s LLM, chat model, chain, and OpenAI embeddings calls.

Traced methods

The LangChain integration instruments the following methods:

The LangGraph integration automatically traces Pregel/CompiledGraph and RunnableSeq (node) invocations made through the LangGraph Python SDK.

Traced methods

The LangGraph integration instruments synchronous and asynchronous versions of the following methods:

The LiteLLM integration provides automatic tracing for the LiteLLM Python SDK and proxy server router methods.

Traced methods

The LiteLLM integration instruments the following methods:

  • Chat Completions (including streamed calls):
    • litellm.completion
    • litellm.acompletion
  • Completions (including streamed calls):
    • litellm.text_completion
    • litellm.atext_completion
  • Router Chat Completions (including streamed calls):
    • router.Router.completion
    • router.Router.acompletion
  • Router Completions (including streamed calls):
    • router.Router.text_completion
    • router.Router.atext_completion
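
For example, a chat completion through the LiteLLM Python SDK like this sketch is traced automatically (the model name is illustrative; assumes the corresponding provider API key is set):

import litellm

# Traced by the LiteLLM integration
response = litellm.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)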

The Model Context Protocol (MCP) integration instruments client and server tool calls in the MCP SDK.

Traced methods

The MCP integration instruments the following methods:

The OpenAI integration provides automatic tracing for the OpenAI Python SDK’s completion and chat completion endpoints to OpenAI and Azure OpenAI.

Traced methods

The OpenAI integration instruments the following methods, including streamed calls:

  • Completions:
    • OpenAI().completions.create(), AzureOpenAI().completions.create()
    • AsyncOpenAI().completions.create(), AsyncAzureOpenAI().completions.create()
  • Chat completions:
    • OpenAI().chat.completions.create(), AzureOpenAI().chat.completions.create()
    • AsyncOpenAI().chat.completions.create(), AsyncAzureOpenAI().chat.completions.create()
  • Responses:
    • OpenAI().responses.create()
    • AsyncOpenAI().responses.create()
  • Calls made to DeepSeek through the OpenAI Python SDK (as of ddtrace==3.1.0)
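
For example, with the tracer enabled, a standard chat completion like this sketch produces an LLM span automatically (the model name is illustrative; assumes OPENAI_API_KEY is set):

from openai import OpenAI

client = OpenAI()

# Traced by the OpenAI integration
completion = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)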

The OpenAI integration provides automatic tracing for the OpenAI Node.js SDK’s completion, chat completion, and embeddings endpoints to OpenAI and Azure OpenAI.

Traced methods

The OpenAI integration instruments the following methods, including streamed calls:

The OpenAI Agents integration registers a Datadog trace processor that converts the built-in traces from the OpenAI Agents SDK into LLM Observability format and sends them to Datadog LLM Observability.

The following operations are supported:

The Pydantic AI integration instruments agent invocations and tool calls made using the Pydantic AI agent framework.

Traced methods

The Pydantic AI integration instruments the following methods:

  • Agent Invocations (including any tools or toolsets associated with the agent):
    • agent.Agent.iter (also traces agent.Agent.run and agent.Agent.run_sync)
    • agent.Agent.run_stream
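
For example, a minimal agent run sketch with Pydantic AI (the model name is illustrative; assumes OPENAI_API_KEY is set):

from pydantic_ai import Agent

agent = Agent("openai:gpt-4o-mini")

# run_sync is traced through agent.Agent.iter by the Pydantic AI integration
result = agent.run_sync("Hello!")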

The Vercel AI SDK integration automatically traces text and object generation, embeddings, and tool calls by intercepting the OpenTelemetry spans created by the underlying core Vercel AI SDK and converting them into Datadog LLM Observability spans.

Traced methods

Vercel AI Core SDK telemetry

This integration automatically patches the tracer passed to each traced method through the experimental_telemetry option. If no experimental_telemetry configuration is passed in, the integration enables telemetry itself so that LLM Observability spans are still sent.

require('dd-trace').init({
  llmobs: {
    mlApp: 'my-ml-app',
  }
});

const { generateText } = require('ai');
const { openai } = require('@ai-sdk/openai');

async function main () {
  let result = await generateText({
    model: openai('gpt-4o'),
    prompt: 'Hello!', // illustrative prompt
    experimental_telemetry: {
      isEnabled: true,
      // this tracer is patched to format and send created spans to Datadog LLM Observability
      tracer: someTracerProvider.getTracer('ai'), // placeholder: your OpenTelemetry TracerProvider
    }
  });

  result = await generateText({
    model: openai('gpt-4o'),
    prompt: 'Hello again!', // illustrative prompt
  }); // since no tracer is passed in, the integration enables telemetry itself and still sends LLM Observability spans
}

Note: If experimental_telemetry.isEnabled is set to false, the integration does not turn it on, and does not send spans to LLM Observability.

The Vertex AI integration automatically traces content generation and chat message calls made through Google’s Vertex AI Python SDK.

Traced methods

The Vertex AI integration instruments the following methods:

  • Generating content (including streamed calls):

    • model.generate_content()
    • model.generate_content_async()
  • Chat Messages (including streamed calls):

    • chat.send_message()
    • chat.send_message_async()
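
For example, a content generation call with the Vertex AI Python SDK (the project, location, and model name are placeholders):

import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="<YOUR_GCP_PROJECT>", location="us-central1")
model = GenerativeModel("gemini-1.5-flash")

# Traced by the Vertex AI integration
response = model.generate_content("Hello!")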

The Vertex AI integration automatically traces content generation and chat message calls made through Google’s Vertex AI Node.js SDK.

Traced methods

The Vertex AI integration instruments the following methods:

Enable or disable LLM integrations

All integrations are enabled by default.

Disable all LLM integrations

Python

Use the in-code SDK setup and specify integrations_enabled=False.

Example: In-code SDK setup that disables all LLM integrations

from ddtrace.llmobs import LLMObs

LLMObs.enable(
  ml_app="<YOUR_ML_APP_NAME>",
  api_key="<YOUR_DATADOG_API_KEY>",
  integrations_enabled=False
)

Node.js

Use the in-code SDK setup and specify plugins: false.

Example: In-code SDK setup that disables all LLM integrations

const tracer = require('dd-trace').init({
  llmobs: { ... },
  plugins: false
});
const { llmobs } = tracer;

Only enable specific LLM integrations

Python

  1. Use the in-code SDK setup and disable all integrations with integrations_enabled=False.
  2. Manually enable select integrations with ddtrace.patch().

Example: In-code SDK setup that only enables the LangChain integration

from ddtrace import patch
from ddtrace.llmobs import LLMObs

LLMObs.enable(
  ml_app="<YOUR_ML_APP_NAME>",
  api_key="<YOUR_DATADOG_API_KEY>",
  integrations_enabled=False
)

patch(langchain=True)

Node.js

  1. Use the in-code SDK setup and disable all integrations with plugins: false.
  2. Manually enable select integrations with tracer.use().

Example: In-code SDK setup that only enables the LangChain integration

const tracer = require('dd-trace').init({
  llmobs: { ... },
  plugins: false
});
const { llmobs } = tracer;
tracer.use('langchain', true);

For more specific control over library patching and the integration that starts the span, you can set the following environment variables:

  • DD_TRACE_DISABLED_PLUGINS: A comma-separated string of integration names that are automatically disabled when the tracer is initialized.
    Example: DD_TRACE_DISABLED_PLUGINS=openai,http
  • DD_TRACE_DISABLED_INSTRUMENTATIONS: A comma-separated string of library names that are not patched when the tracer is initialized.
    Example: DD_TRACE_DISABLED_INSTRUMENTATIONS=openai,http
