---
title: Automatic Instrumentation for LLM Observability
description: >-
  Automatically trace calls to supported LLM frameworks and libraries with
  Datadog LLM Observability.
breadcrumbs: >-
  Docs > LLM Observability > LLM Observability Instrumentation > Automatic
  Instrumentation for LLM Observability
---

# Automatic Instrumentation for LLM Observability

{% callout %}
# Important note for users on the following Datadog site: app.ddog-gov.com

{% alert level="danger" %}
This product is not supported for your selected [Datadog site](https://docs.datadoghq.com/getting_started/site.md).
{% /alert %}

{% /callout %}

## Overview{% #overview %}

Datadog's LLM Observability can automatically trace and annotate calls to supported LLM frameworks and libraries through various LLM integrations. When you [run your LLM application with the LLM Observability SDK](https://docs.datadoghq.com/llm_observability/quickstart.md), these LLM integrations are enabled by default and provide out-of-the-box traces and observability, without you having to change your code.

{% alert level="info" %}
Automatic instrumentation works for calls to supported frameworks and libraries. To trace other calls (for example: API calls, database queries, internal functions), see the [LLM Observability SDK reference](https://docs.datadoghq.com/llm_observability/instrumentation/sdk.md) for how to add manual instrumentation.
{% /alert %}
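As a minimal sketch of what "run with the SDK" means, an agentless command-line setup might look like the following (the application file name and ML app name are placeholders; see the quickstart for the full set of options):

```shell
# Run an existing application under ddtrace-run; supported integrations
# are patched automatically, with no code changes required.
DD_LLMOBS_ENABLED=1 \
DD_LLMOBS_ML_APP=my-llm-app \
DD_API_KEY=<YOUR_DATADOG_API_KEY> \
ddtrace-run python app.py
```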

### Supported frameworks and libraries{% #supported-frameworks-and-libraries %}

{% tab title="Python" %}

| Framework             | Supported Versions | Tracer Version |
| --------------------- | ------------------ | -------------- |
| Amazon Bedrock        | \>= 1.31.57        | \>= 2.9.0      |
| Amazon Bedrock Agents | \>= 1.38.26        | \>= 3.10.0     |
| Anthropic             | \>= 0.28.0         | \>= 2.10.0     |
| CrewAI                | \>= 0.105.0        | \>= 3.5.0      |
| Google ADK            | \>= 1.0.0          | \>= 3.15.0     |
| Google GenAI          | \>= 1.21.1         | \>= 3.11.0     |
| LangChain             | \>= 0.0.192        | \>= 2.9.0      |
| LangGraph             | \>= 0.2.23         | \>= 3.10.1     |
| LiteLLM               | \>= 1.70.0         | \>= 3.9.0      |
| MCP                   | \>= 1.10.0         | \>= 3.11.0     |
| OpenAI, Azure OpenAI  | \>= 0.26.5         | \>= 2.9.0      |
| OpenAI Agents         | \>= 0.0.2          | \>= 3.5.0      |
| Pydantic AI           | \>= 0.3.0          | \>= 3.11.0     |
| Strands Agents        | \>= 1.11.0         | Any            |
| Vertex AI             | \>= 1.71.1         | \>= 2.18.0     |

{% /tab %}

{% tab title="Node.js" %}

| Framework            | Supported Versions | Tracer Version                               |
| -------------------- | ------------------ | -------------------------------------------- |
| Amazon Bedrock       | \>= 3.422.0        | \>= 5.35.0 (CJS), \>= 5.35.0 (ESM)             |
| Anthropic            | \>= 0.14.0         | \>= 5.71.0 (CJS), \>= 5.71.0 (ESM)             |
| Google GenAI         | \>= 1.19.0         | \>= 5.81.0 (CJS), \>= 5.81.0 (ESM)             |
| LangChain            | \>= 0.1.0          | \>= 5.32.0 (CJS), \>= 5.38.0 (ESM)             |
| OpenAI, Azure OpenAI | \>= 3.0.0          | \>= 4.49.0, \>= 5.25.0 (CJS), \>= 5.38.0 (ESM) |
| Vercel AI SDK        | \>= 4.0.0          | \>= 5.63.0 (CJS), \>= 5.63.0 (ESM)             |
| VertexAI             | \>= 1.0.0          | \>= 5.44.0 (CJS), \>= 5.44.0 (ESM)             |

{% collapsible-section #esm-support %}
#### Support for ECMAScript Modules (ESM)

Automatic instrumentation for ESM projects is supported starting from `dd-trace@>=5.38.0`. To enable automatic instrumentation in your ESM projects, use the [command-line setup](https://docs.datadoghq.com/llm_observability/instrumentation/sdk.md?tab=nodejs#command-line-setup) and the following Node.js option when running your application:

```bash
--import dd-trace/initialize.mjs
```

For example:

```bash
node --import dd-trace/initialize.mjs app.js
# or
NODE_OPTIONS="--import dd-trace/initialize.mjs" node app.js
```

{% /collapsible-section %}

{% collapsible-section #bundling-support %}
#### Support for bundled applications (esbuild, Webpack)

To use LLM Observability integrations in bundled applications (esbuild, Webpack), you must exclude these integrations' modules from bundling.

##### esbuild{% #esbuild %}

If you are using esbuild, see [Bundling with the Node.js tracer](https://docs.datadoghq.com/tracing/trace_collection/automatic_instrumentation/dd_libraries/nodejs.md#bundling).

##### Webpack{% #webpack %}

For Webpack, specify the corresponding integration in the `externals` section of the webpack configuration:

```javascript
// webpack.config.js
module.exports = {
  resolve: {
    fallback: {
      graphql: false,
    }
  },
  externals: {
    openai: 'openai'
  }
}
```

{% /collapsible-section %}

{% collapsible-section #nextjs-support %}
#### Support for Next.js

To ensure auto-instrumentation works correctly, properly initialize the tracer in your application. If you are using TypeScript or ESM for your Next.js application, initialize the tracer in an `instrumentation.{ts/js}` file as follows, specifying your configuration options as environment variables:

```typescript
// instrumentation.ts
export async function register() {
  if (process.env.NEXT_RUNTIME === 'nodejs') {
    const initializeImportName = 'dd-trace/initialize.mjs';
    await import(/* webpackIgnore: true */ initializeImportName as 'dd-trace/initialize.mjs')
  }

  // ...
}
```

Otherwise, for CommonJS Next.js applications, you can use the `init` function directly:

```javascript
// instrumentation.js
const tracer = require('dd-trace')

function register () {
  if (process.env.NEXT_RUNTIME === 'nodejs') {
    tracer.init({}); // specify options here or they will be read from environment variables
  }

  // ...
}

module.exports = register;
```

Then, make sure to specify `dd-trace` and any other supported integration package names in `serverExternalPackages` in your `next.config.{ts/js}` file:

```javascript
// next.config.ts
module.exports = {
  serverExternalPackages: ['dd-trace', '<INTEGRATION_PACKAGE_NAME>'], // add any other supported integration package names here to be auto-instrumented
}
```

{% /collapsible-section %}

{% /tab %}

{% tab title="Java" %}

| Framework            | Supported Versions | Tracer Version |
| -------------------- | ------------------ | -------------- |
| OpenAI, Azure OpenAI | \>= 3.0.0          | \>= 1.59.0     |

{% /tab %}

{% alert level="info" %}
Datadog LLM Observability also supports any framework that natively emits [OpenTelemetry GenAI semantic convention v1.37+](https://opentelemetry.io/docs/specs/semconv/gen-ai/)-compliant spans, without requiring the Datadog tracer. See [OpenTelemetry Instrumentation](https://docs.datadoghq.com/llm_observability/instrumentation/otel_instrumentation.md) for details.
{% /alert %}

## LLM integrations{% #llm-integrations %}

Datadog's LLM integrations capture latency, errors, input parameters, input and output messages, and token usage (when available) for traced calls.

{% collapsible-section #amazon-bedrock %}
### Amazon Bedrock

{% tab title="Python" %}
The [Amazon Bedrock integration](https://docs.datadoghq.com/integrations/amazon-bedrock.md) provides automatic instrumentation for the Amazon Bedrock Runtime Python SDK's chat model calls (using [Boto3](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/bedrock-runtime.html)/[Botocore](https://botocore.amazonaws.com/v1/documentation/api/latest/reference/services/bedrock-runtime.html)).

**Package name:** `boto3` **Integration name:** `botocore`

### Traced methods{% #traced-methods %}

The Amazon Bedrock integration instruments the following methods:

- [Chat messages](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html):
  - `InvokeModel`
- [Streamed chat messages](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModelWithResponseStream.html):
  - `InvokeModelWithResponseStream`
- [Chat messages](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html):
  - `Converse` (requires `ddtrace>=3.4.0`)
- [Streamed chat messages](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ConverseStream.html):
  - `ConverseStream` (requires `ddtrace>=3.5.0`)

{% alert level="info" %}
The Amazon Bedrock integration does not support tracing embedding calls.
{% /alert %}

{% /tab %}

{% tab title="Node.js" %}
The [Amazon Bedrock integration](https://docs.datadoghq.com/integrations/amazon-bedrock.md) provides automatic tracing for the Amazon Bedrock Runtime Node.js SDK's chat model calls (using [BedrockRuntimeClient](https://www.npmjs.com/package/@aws-sdk/client-bedrock-runtime)).

**Package name:** `@aws-sdk/client-bedrock-runtime` **Integration name:** `aws-sdk`

### Traced methods{% #traced-methods %}

The Amazon Bedrock integration instruments the following methods:

- [Chat messages](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html):
  - `InvokeModel`
  - `InvokeModelWithResponseStream`

{% /tab %}

{% /collapsible-section %}

{% collapsible-section #amazon-bedrock-agents %}
### Amazon Bedrock Agents

{% tab title="Python" %}
The Amazon Bedrock Agents integration provides automatic tracing for the Amazon Bedrock Agents Runtime Python SDK's agent invoke calls (using [Boto3](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/bedrock-runtime.html)/[Botocore](https://botocore.amazonaws.com/v1/documentation/api/latest/reference/services/bedrock-runtime.html)).

**Package name:** `boto3` **Integration name:** `botocore`

### Traced methods{% #traced-methods %}

The Amazon Bedrock Agents integration instruments the following methods:

- [Invoke Agent](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent-runtime_InvokeAgent.html):
  - `InvokeAgent` (requires `ddtrace>=3.10.0`)

{% alert level="info" %}
By default, the Amazon Bedrock Agents integration traces only the overall `InvokeAgent` call. To trace intra-agent steps, set `enableTrace=True` in the `InvokeAgent` request parameters.
{% /alert %}

{% /tab %}

{% /collapsible-section %}

{% collapsible-section #anthropic %}
### Anthropic

{% tab title="Python" %}
The [Anthropic integration](https://docs.datadoghq.com/integrations/anthropic.md) provides automatic tracing for the [Anthropic Python SDK's](https://docs.anthropic.com/en/api/client-sdks#python) chat message calls.

**Package name:** `anthropic` **Integration name:** `anthropic`

### Traced methods{% #traced-methods %}

The Anthropic integration instruments the following methods:

- [Chat messages](https://docs.anthropic.com/en/api/messages) (including streamed calls):
  - `Anthropic().messages.create()`, `AsyncAnthropic().messages.create()`
- [Streamed chat messages](https://docs.anthropic.com/en/api/messages-streaming):
  - `Anthropic().messages.stream()`, `AsyncAnthropic().messages.stream()`

{% /tab %}

{% tab title="Node.js" %}
The [Anthropic integration](https://docs.datadoghq.com/integrations/anthropic.md) provides automatic tracing for the [Anthropic Node.js SDK's](https://docs.claude.com/en/api/client-sdks#typescript) chat message calls.

**Package name:** `@anthropic-ai/sdk` **Integration name:** `anthropic`

### Traced methods{% #traced-methods %}

The Anthropic integration instruments the following methods:

- [Chat messages](https://docs.anthropic.com/en/api/messages) (including streamed calls):
  - `anthropic.messages.create()`
- [Streamed chat messages](https://docs.anthropic.com/en/api/messages-streaming):
  - `anthropic.messages.stream()`

{% /tab %}

{% /collapsible-section %}

{% collapsible-section #crewai %}
### CrewAI

{% tab title="Python" %}
The [CrewAI integration](https://docs.datadoghq.com/integrations/crewai.md) automatically traces execution of Crew kickoffs, including task/agent/tool invocations, made through [CrewAI's Python SDK](https://docs.crewai.com/introduction).

**Package name:** `crewai` **Integration name:** `crewai`

### Traced methods{% #traced-methods %}

The CrewAI integration instruments the following methods:

- [Crew Kickoff](https://docs.crewai.com/concepts/crews#kicking-off-a-crew):

  - `crew.kickoff()`
  - `crew.kickoff_async()`
  - `crew.kickoff_for_each()`
  - `crew.kickoff_for_each_async()`

- [Task Execution](https://docs.crewai.com/concepts/tasks):

  - `task.execute_sync()`
  - `task.execute_async()`

- [Agent Execution](https://docs.crewai.com/concepts/agents):

  - `agent.execute_task()`

- [Tool Invocation](https://docs.crewai.com/concepts/tools):

  - `tool.invoke()`

{% /tab %}

{% /collapsible-section %}

{% collapsible-section #google-adk %}
### Google ADK

{% tab title="Python" %}
The Google ADK integration provides automatic tracing for agent runs, tool calls, and code executions made through [Google's ADK Python SDK](https://google.github.io/adk-docs/#python).

**Package name:** `google-adk` **Integration name:** `google_adk`

### Traced methods{% #traced-methods %}

The Google ADK integration instruments the following methods:

- [Agent Runs](https://google.github.io/adk-docs/agents/)
- [Tool Calls](https://google.github.io/adk-docs/tools)
- [Code Executions](https://google.github.io/adk-docs/agents/llm-agents/#code-execution)

Both `run_live` and `run_async` methods are supported.
{% /tab %}

{% /collapsible-section %}

{% collapsible-section #google-genai %}
### Google GenAI

{% tab title="Python" %}
The Google GenAI integration automatically traces methods in the [Google GenAI Python SDK](https://ai.google.dev/gemini-api/docs).

**Package name:** `google-genai` **Integration name:** `google_genai`

**Note:** The [Google GenAI Python SDK](https://ai.google.dev/gemini-api/docs) supersedes the Google GenerativeAI SDK and supports both the Gemini Developer API and Vertex AI.

### Traced methods{% #traced-methods %}

The Google GenAI integration instruments the following methods:

- [Generating content](https://ai.google.dev/api/generate-content#method:-models.generatecontent) (including streamed calls):
  - `models.generate_content()` (Also captures `chat.send_message()`)
  - `aio.models.generate_content()` (Also captures `aio.chat.send_message()`)
- [Embedding content](https://ai.google.dev/api/embeddings#method:-models.embedcontent):
  - `models.embed_content()`
  - `aio.models.embed_content()`

{% /tab %}

{% tab title="Node.js" %}
The Google GenAI integration automatically traces methods in the [Google GenAI Node.js SDK](https://ai.google.dev/gemini-api/docs#javascript) by instrumenting the [`@google/genai` package](https://www.npmjs.com/package/@google/genai).

**Note:** The [Google GenAI Node.js SDK](https://ai.google.dev/gemini-api/docs#javascript) supersedes the [Google GenerativeAI SDK](https://www.npmjs.com/package/@google/generative-ai) and supports both the Gemini Developer API and Vertex AI.

**Package name:** `@google/genai` **Integration name:** `google-genai`

### Traced methods{% #traced-methods %}

The Google GenAI integration instruments the following methods:

- [Generating content](https://ai.google.dev/api/generate-content#text_gen_text_only_prompt-JAVASCRIPT) (including [streamed calls](https://ai.google.dev/api/generate-content#text_gen_text_only_prompt_streaming-JAVASCRIPT))
- [Embedding content](https://ai.google.dev/api/embeddings#embed_content-JAVASCRIPT)

{% /tab %}

{% /collapsible-section %}

{% collapsible-section #langchain %}
### LangChain

{% tab title="Python" %}
The [LangChain integration](https://docs.datadoghq.com/integrations/langchain.md) provides automatic tracing for the [LangChain Python SDK's](https://python.langchain.com/docs/introduction/) LLM, chat model, and chain calls.

**Package name:** `langchain`, `langchain_openai`, `langchain_anthropic`, and [other langchain partner packages](https://docs.langchain.com/oss/python/integrations/providers/all_providers) **Integration name:** `langchain`

### Traced methods{% #traced-methods %}

The LangChain integration instruments the following methods:

- [LLMs](https://python.langchain.com/v0.2/docs/concepts/#llms):

  - `llm.invoke()`, `llm.ainvoke()`
  - `llm.stream()`, `llm.astream()`

- [Chat models](https://python.langchain.com/docs/concepts/chat_models/)

  - `chat_model.invoke()`, `chat_model.ainvoke()`
  - `chat_model.stream()`, `chat_model.astream()`

- [Chains/LCEL](https://python.langchain.com/docs/concepts/runnables/)

  - `chain.invoke()`, `chain.ainvoke()`
  - `chain.batch()`, `chain.abatch()`
  - `chain.stream()`, `chain.astream()`

- [Embeddings](https://python.langchain.com/docs/concepts/embedding_models/)

  - OpenAI: `OpenAIEmbeddings.embed_documents()`, `OpenAIEmbeddings.embed_query()`

- [Tools](https://python.langchain.com/docs/concepts/tools/)

  - `BaseTool.invoke()`, `BaseTool.ainvoke()`

- [Retrieval](https://python.langchain.com/docs/concepts/retrieval/)

  - `langchain_community.<vectorstore>.similarity_search()`
  - `langchain_pinecone.similarity_search()`

- [Prompt Templating](https://docs.langchain.com/langsmith/manage-prompts-programmatically#pull-a-prompt)

  - `BasePromptTemplate.invoke()`, `BasePromptTemplate.ainvoke()`

  {% alert level="info" %}
  For best results, assign templates to variables with meaningful names. Automatic instrumentation uses these names to identify prompts.
  {% /alert %}

  ```python
  # "translation_template" will be used to identify the template in Datadog
  translation_template = PromptTemplate.from_template("Translate {text} to {language}")
  chain = translation_template | llm
  ```

{% /tab %}

{% tab title="Node.js" %}
The [LangChain integration](https://docs.datadoghq.com/integrations/langchain.md) provides automatic tracing for the [LangChain Node.js SDK's](https://js.langchain.com/docs/introduction/) LLM, chat model, chain, and OpenAI embeddings calls.

**Package name:** `langchain`, `@langchain/openai`, `@langchain/anthropic`, and [other langchain partner packages](https://docs.langchain.com/oss/javascript/integrations/providers/all_providers) **Integration name:** `langchain`

### Traced methods{% #traced-methods %}

The LangChain integration instruments the following methods:

- [LLMs](https://js.langchain.com/docs/integrations/llms/):
  - `llm.invoke()`
- [Chat models](https://js.langchain.com/docs/concepts/chat_models)
  - `chat_model.invoke()`
- [Chains](https://js.langchain.com/docs/how_to/sequence/)
  - `chain.invoke()`
  - `chain.batch()`
- [Embeddings](https://js.langchain.com/docs/integrations/text_embedding/)
  - `embeddings.embedQuery()`
  - `embeddings.embedDocuments()`

{% /tab %}

{% /collapsible-section %}

{% collapsible-section #langgraph %}
### LangGraph

{% tab title="Python" %}
The LangGraph integration automatically traces `Pregel/CompiledGraph` and `RunnableSeq (node)` invocations made through the [LangGraph Python SDK](https://langchain-ai.github.io/langgraph/concepts/sdk/).

**Package name:** `langgraph` **Integration name:** `langgraph`

### Traced methods{% #traced-methods %}

The LangGraph integration instruments synchronous and asynchronous versions of the following methods:

- [CompiledGraph.invoke(), Pregel.invoke(), CompiledGraph.stream(), Pregel.stream()](https://blog.langchain.dev/langgraph/#compile)
- [RunnableSeq.invoke()](https://blog.langchain.dev/langgraph/#nodes)

{% /tab %}

{% /collapsible-section %}

{% collapsible-section #litellm %}
### LiteLLM

{% tab title="Python" %}
The [LiteLLM integration](https://docs.datadoghq.com/integrations/litellm.md) provides automatic tracing for the [LiteLLM Python SDK](https://docs.litellm.ai/docs/#litellm-python-sdk) and [proxy server router methods](https://docs.litellm.ai/docs/routing).

**Package name:** `litellm` **Integration name:** `litellm`

### Traced methods{% #traced-methods %}

The LiteLLM integration instruments the following methods:

- [Chat Completions](https://docs.litellm.ai/docs/completion) (including streamed calls):
  - `litellm.completion`
  - `litellm.acompletion`
- [Completions](https://docs.litellm.ai/docs/text_completion) (including streamed calls):
  - `litellm.text_completion`
  - `litellm.atext_completion`
- Router Chat Completions (including streamed calls):
  - `router.Router.completion`
  - `router.Router.acompletion`
- Router Completions (including streamed calls):
  - `router.Router.text_completion`
  - `router.Router.atext_completion`

{% /tab %}

{% /collapsible-section %}

{% collapsible-section #mcp %}
### MCP

{% tab title="Python" %}
The Model Context Protocol (MCP) integration instruments client and server tool calls in the [MCP](https://modelcontextprotocol.io/docs/getting-started/intro) SDK.

**Package name:** `mcp` **Integration name:** `mcp`

### Traced methods{% #traced-methods %}

The MCP integration instruments the following methods:

- [Client Tool Calls](https://github.com/modelcontextprotocol/python-sdk?tab=readme-ov-file#writing-mcp-clients):

  - `mcp.client.session.ClientSession.call_tool`

- [Server Tool Calls](https://github.com/modelcontextprotocol/python-sdk?tab=readme-ov-file#tools):

  - `mcp.server.fastmcp.tools.tool_manager.ToolManager.call_tool`

{% /tab %}

{% /collapsible-section %}

{% collapsible-section #openai %}
### OpenAI, Azure OpenAI

{% tab title="Python" %}
The [OpenAI integration](https://docs.datadoghq.com/integrations/openai.md) provides automatic tracing for the [OpenAI Python SDK's](https://platform.openai.com/docs/api-reference/introduction) completion and chat completion endpoints to OpenAI and Azure OpenAI.

**Package name:** `openai` **Integration name:** `openai`

### Traced methods{% #traced-methods %}

The OpenAI integration instruments the following methods, including streamed calls:

- [Completions](https://platform.openai.com/docs/api-reference/completions):
  - `OpenAI().completions.create()`, `AzureOpenAI().completions.create()`
  - `AsyncOpenAI().completions.create()`, `AsyncAzureOpenAI().completions.create()`
- [Chat completions](https://platform.openai.com/docs/api-reference/chat):
  - `OpenAI().chat.completions.create()`, `AzureOpenAI().chat.completions.create()`
  - `AsyncOpenAI().chat.completions.create()`, `AsyncAzureOpenAI().chat.completions.create()`
- [Responses](https://platform.openai.com/docs/api-reference/responses):
  - `OpenAI().responses.create()`
  - `AsyncOpenAI().responses.create()`
  - `OpenAI().responses.parse()` (as of `ddtrace==3.17.0`)
  - `AsyncOpenAI().responses.parse()` (as of `ddtrace==3.17.0`)
- [Calls made to DeepSeek through the OpenAI Python SDK](https://api-docs.deepseek.com/) (as of `ddtrace==3.1.0`)

{% /tab %}

{% tab title="Node.js" %}
The [OpenAI integration](https://docs.datadoghq.com/integrations/openai.md) provides automatic tracing for the [OpenAI Node.js SDK's](https://platform.openai.com/docs/api-reference/introduction) completion, chat completion, and embeddings endpoints to OpenAI and [Azure OpenAI](https://www.npmjs.com/package/openai#microsoft-azure-openai).

**Package name:** `openai` **Integration name:** `openai`

### Traced methods{% #traced-methods %}

The OpenAI integration instruments the following methods, including streamed calls:

- [Completions](https://platform.openai.com/docs/api-reference/completions):
  - `openai.completions.create()` and `azureopenai.completions.create()`
- [Chat completions](https://platform.openai.com/docs/api-reference/chat):
  - `openai.chat.completions.create()` and `azureopenai.chat.completions.create()`
- [Embeddings](https://platform.openai.com/docs/api-reference/embeddings):
  - `openai.embeddings.create()` and `azureopenai.embeddings.create()`
- [Calls made to DeepSeek through the OpenAI Node.js SDK](https://api-docs.deepseek.com/) (as of `dd-trace@5.42.0`)
- [Responses](https://platform.openai.com/docs/api-reference/responses)
  - `openai.responses.create()` (as of `dd-trace@5.76.0`)

{% /tab %}

{% tab title="Java" %}
The [OpenAI integration](https://docs.datadoghq.com/integrations/openai.md) provides automatic tracing for the [OpenAI Java SDK's](https://platform.openai.com/docs/api-reference/introduction) completion, chat completion, embeddings, and responses endpoints to OpenAI and Azure OpenAI.

### Traced methods{% #traced-methods %}

The OpenAI integration instruments the following methods on `OpenAIClient`, including streamed calls:

- [Completions](https://platform.openai.com/docs/api-reference/completions):
  - `openAiClient.completions().create()`
  - `openAiClient.completions().createStreaming()`
  - `openAiClient.async().completions().create()`
  - `openAiClient.async().completions().createStreaming()`
- [Chat completions](https://platform.openai.com/docs/api-reference/chat):
  - `openAiClient.chat().completions().create()`
  - `openAiClient.chat().completions().createStreaming()`
  - `openAiClient.async().chat().completions().create()`
  - `openAiClient.async().chat().completions().createStreaming()`
- [Embeddings](https://platform.openai.com/docs/api-reference/embeddings):
  - `openAiClient.embeddings().create()`
  - `openAiClient.async().embeddings().create()`
- [Responses](https://platform.openai.com/docs/api-reference/responses):
  - `openAiClient.responses().create()`
  - `openAiClient.responses().createStreaming()`
  - `openAiClient.async().responses().create()`
  - `openAiClient.async().responses().createStreaming()`

The provider (OpenAI vs Azure OpenAI) is automatically detected based on the `baseUrl` configured in `ClientOptions`. All methods support both blocking and async (CompletableFuture-based) variants.
{% /tab %}

{% /collapsible-section %}

{% collapsible-section #openai-agents %}
### OpenAI Agents

{% tab title="Python" %}
The OpenAI Agents integration adds a Datadog trace processor that converts the [built-in tracing](https://openai.github.io/openai-agents-python/tracing/) from the [OpenAI Agents SDK](https://openai.github.io/openai-agents-python/) into LLM Observability spans and sends them to Datadog.

**Package name:** `openai-agents` **Integration name:** `openai_agents`

The following operations are supported:

- [`traces`](https://openai.github.io/openai-agents-python/ref/tracing/traces/)
- [`agent`](https://openai.github.io/openai-agents-python/ref/tracing/#agents.tracing.agent_span)
- [`generation`](https://openai.github.io/openai-agents-python/ref/tracing/#agents.tracing.generation_span) using Datadog's OpenAI integration
- [`response`](https://openai.github.io/openai-agents-python/ref/tracing/#agents.tracing.response_span)
- [`guardrail`](https://openai.github.io/openai-agents-python/ref/tracing/#agents.tracing.guardrail_span)
- [`handoff`](https://openai.github.io/openai-agents-python/ref/tracing/#agents.tracing.handoff_span)
- [`function`](https://openai.github.io/openai-agents-python/ref/tracing/#agents.tracing.function_span)
- [`custom`](https://openai.github.io/openai-agents-python/ref/tracing/#agents.tracing.custom_span)

{% /tab %}

{% /collapsible-section %}

{% collapsible-section #pydantic-ai %}
### Pydantic AI

{% tab title="Python" %}
The Pydantic AI integration instruments agent invocations and tool calls made using the [Pydantic AI](https://ai.pydantic.dev/) agent framework.

**Package name:** `pydantic-ai` **Integration name:** `pydantic_ai`

### Traced methods{% #traced-methods %}

The Pydantic AI integration instruments the following methods:

- [Agent Invocations](https://ai.pydantic.dev/agents/) (including any tools or toolsets associated with the agent):
  - `agent.Agent.iter` (also traces `agent.Agent.run` and `agent.Agent.run_sync`)
  - `agent.Agent.run_stream`

{% /tab %}

{% /collapsible-section %}

{% collapsible-section #strands-agents %}
### Strands Agents

{% tab title="Python" %}
Starting from [v1.11.0](https://github.com/strands-agents/sdk-python/releases/tag/v1.11.0), [Strands Agents](https://strandsagents.com) natively emits spans compliant with [OpenTelemetry GenAI semantic conventions v1.37](https://opentelemetry.io/docs/specs/semconv/gen-ai/), which Datadog LLM Observability automatically ingests without requiring the Datadog tracer.

For setup instructions and a complete example, see [OpenTelemetry Instrumentation — Using Strands Agents](https://docs.datadoghq.com/llm_observability/instrumentation/otel_instrumentation.md#using-strands-agents).
{% /tab %}

{% /collapsible-section %}

{% collapsible-section #vercel-ai-sdk %}
### Vercel AI SDK

{% tab title="Node.js" %}
The [Vercel AI SDK](https://docs.datadoghq.com/integrations/vercel-ai-sdk.md) integration automatically traces text and object generation, embeddings, and tool calls by intercepting the OpenTelemetry spans created by the underlying core [Vercel AI SDK](https://ai-sdk.dev/docs/introduction) and converting them into Datadog LLM Observability spans.

**Package name:** `ai` **Integration name:** `ai`

### Traced methods{% #traced-methods %}

- [Text generation](https://ai-sdk.dev/docs/ai-sdk-core/generating-text):
  - `generateText`
  - `streamText`
- [Object generation](https://ai-sdk.dev/docs/ai-sdk-core/generating-structured-data):
  - `generateObject`
  - `streamObject`
- [Embedding](https://ai-sdk.dev/docs/ai-sdk-core/embeddings):
  - `embed`
  - `embedMany`
- [Tool calling](https://ai-sdk.dev/docs/ai-sdk-core/tools-and-tool-calling):
  - `tool.execute`

### Vercel AI Core SDK telemetry{% #vercel-ai-core-sdk-telemetry %}

This integration automatically patches the tracer passed into each traced method through the [`experimental_telemetry` option](https://ai-sdk.dev/docs/ai-sdk-core/telemetry). If no `experimental_telemetry` configuration is passed in, the integration enables telemetry automatically so that LLM Observability spans are still sent.

```javascript
require('dd-trace').init({
  llmobs: {
    mlApp: 'my-ml-app',
  }
});

const { generateText } = require('ai');
const { openai } = require('@ai-sdk/openai');

async function main () {
  let result = await generateText({
    model: openai('gpt-4o'),
    ...
    experimental_telemetry: {
      isEnabled: true,
      tracer: someTracerProvider.getTracer('ai'), // this tracer will be patched to format and send created spans to Datadog LLM Observability
    }
  });

  result = await generateText({
    model: openai('gpt-4o'),
    ...
  }); // since no tracer is passed in, the integration will enable it to still send LLM Observability spans
}
```

**Note**: If `experimental_telemetry.isEnabled` is set to `false`, the integration does not turn it on, and does not send spans to LLM Observability.
{% /tab %}

{% /collapsible-section %}

{% collapsible-section #vertex-ai %}
### Vertex AI

{% tab title="Python" %}
The [Vertex AI integration](https://docs.datadoghq.com/integrations/google-cloud-vertex-ai.md) automatically traces content generation and chat message calls made through [Google's Vertex AI Python SDK](https://cloud.google.com/vertex-ai/generative-ai/docs/reference/python/latest).

**Package name:** `vertexai` **Integration name:** `vertexai`

### Traced methods{% #traced-methods %}

The Vertex AI integration instruments the following methods:

- [Generating content](https://cloud.google.com/vertex-ai/generative-ai/docs/reference/python/latest/summary_method#vertexai_preview_generative_models_GenerativeModel_generate_content_summary) (including streamed calls):

  - `model.generate_content()`
  - `model.generate_content_async()`

- [Chat Messages](https://cloud.google.com/vertex-ai/generative-ai/docs/reference/python/latest/summary_method#vertexai_generative_models_ChatSession_send_message_summary) (including streamed calls):

  - `chat.send_message()`
  - `chat.send_message_async()`

{% /tab %}

{% tab title="Node.js" %}
The [Vertex AI integration](https://docs.datadoghq.com/integrations/google-cloud-vertex-ai.md) automatically traces content generation and chat message calls made through [Google's Vertex AI Node.js SDK](https://cloud.google.com/vertex-ai/generative-ai/docs/reference/nodejs/latest).

**Package name:** `@google-cloud/vertexai` **Integration name:** `google-cloud-vertexai`

### Traced methods{% #traced-methods %}

The Vertex AI integration instruments the following methods:

- [Generating content](https://cloud.google.com/vertex-ai/generative-ai/docs/reference/nodejs/latest#send-text-prompt-requests):
  - `model.generateContent()`
  - `model.generateContentStream()`
- [Chat Messages](https://cloud.google.com/vertex-ai/generative-ai/docs/reference/nodejs/latest#send-multiturn-chat-requests):
  - `chat.sendMessage()`
  - `chat.sendMessageStream()`

{% /tab %}

{% /collapsible-section %}

## Enable or disable LLM integrations{% #enable-or-disable-llm-integrations %}

All integrations are **enabled by default**.

### Disable all LLM integrations{% #disable-all-llm-integrations %}

{% tab title="Python" %}
Use the [in-code SDK setup](https://docs.datadoghq.com/llm_observability/instrumentation/sdk.md?tab=python#in-code-setup) and specify `integrations_enabled=False`.

**Example**: In-code SDK setup that disables all LLM integrations

```python
from ddtrace.llmobs import LLMObs

LLMObs.enable(
  ml_app="<YOUR_ML_APP_NAME>",
  api_key="<YOUR_DATADOG_API_KEY>",
  integrations_enabled=False
)
```

{% /tab %}

{% tab title="Node.js" %}
Use the [in-code SDK setup](https://docs.datadoghq.com/llm_observability/instrumentation/sdk.md?tab=nodejs#in-code-setup) and specify `plugins: false`.

**Example**: In-code SDK setup that disables all LLM integrations

```javascript
const tracer = require('dd-trace').init({
  llmobs: { ... },
  plugins: false
});
const { llmobs } = tracer;
```

{% /tab %}

### Only enable specific LLM integrations{% #only-enable-specific-llm-integrations %}

{% tab title="Python" %}

1. Use the [in-code SDK setup](https://docs.datadoghq.com/llm_observability/instrumentation/sdk.md?tab=python#in-code-setup) and disable all integrations with `integrations_enabled=False`.
1. Manually enable select integrations with `ddtrace.patch()`.

**Example**: In-code SDK setup that only enables the LangChain integration

```python
from ddtrace import patch
from ddtrace.llmobs import LLMObs

LLMObs.enable(
  ml_app="<YOUR_ML_APP_NAME>",
  api_key="<YOUR_DATADOG_API_KEY>",
  integrations_enabled=False
)

patch(langchain=True)
```

{% /tab %}

{% tab title="Node.js" %}

1. Use the [in-code SDK setup](https://docs.datadoghq.com/llm_observability/instrumentation/sdk.md?tab=nodejs#in-code-setup) and disable all integrations with `plugins: false`.
1. Manually enable select integrations with `use()`.

**Example**: In-code SDK setup that only enables the LangChain integration

```javascript
const tracer = require('dd-trace').init({
  llmobs: { ... },
  plugins: false
});
const { llmobs } = tracer;
tracer.use('langchain', true);
```

{% /tab %}

For more specific control over library patching and the integration that starts the span, you can set the following environment variables:

{% dl %}

{% dt %}
`DD_TRACE_DISABLED_PLUGINS`
{% /dt %}

{% dd %}
A comma-separated string of integration names that are automatically disabled when the tracer is initialized. **Example**: `DD_TRACE_DISABLED_PLUGINS=openai,http`
{% /dd %}

{% dt %}
`DD_TRACE_DISABLED_INSTRUMENTATIONS`
{% /dt %}

{% dd %}
A comma-separated string of library names that are not patched when the tracer is initialized. **Example**: `DD_TRACE_DISABLED_INSTRUMENTATIONS=openai,http`
{% /dd %}

{% /dl %}
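For example, with the Node.js tracer you might disable a couple of integrations through the environment while leaving the rest enabled (the integration names and application file are illustrative):

```shell
# Disable the http integration's spans and skip patching openai entirely;
# all other integrations remain enabled.
export DD_TRACE_DISABLED_PLUGINS=http
export DD_TRACE_DISABLED_INSTRUMENTATIONS=openai
node app.js
```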

## Further Reading{% #further-reading %}

- [LLM Observability SDK Reference](https://docs.datadoghq.com/llm_observability/instrumentation/sdk.md)
- [Track, compare, and optimize your LLM prompts with Datadog LLM Observability](https://www.datadoghq.com/blog/llm-prompt-tracking)
- [Gain end-to-end visibility into MCP clients with Datadog LLM Observability](https://www.datadoghq.com/blog/mcp-client-monitoring)
