Datadog offers a variety of artificial intelligence (AI) and machine learning (ML) capabilities. The AI/ML integrations listed on the Integrations page and in the Datadog Marketplace provide platform-wide Datadog functionality.

For example, APM offers a native integration with OpenAI for monitoring your OpenAI usage, while Infrastructure Monitoring offers an integration with NVIDIA DCGM Exporter for monitoring compute-intensive AI workloads. These integrations are different from the LLM Observability offering.

Overview

Datadog’s LLM Observability Python SDK provides integrations that automatically trace and annotate calls to LLM frameworks and libraries. Without changing your code, you can get out-of-the-box traces and observability for calls that your LLM application makes to the following frameworks:

Framework             Supported Versions   Tracer Version
OpenAI, Azure OpenAI  >= 0.26.5            >= 2.9.0
LangChain             >= 0.0.192           >= 2.9.0
Amazon Bedrock        >= 1.31.57           >= 2.9.0
Anthropic             >= 0.28.0            >= 2.10.0
Google Gemini         >= 0.7.2             >= 2.14.0

You can programmatically enable automatic tracing of LLM calls to a supported LLM provider (such as OpenAI) or framework (such as LangChain) by setting integrations_enabled to true in the LLMObs.enable() function. In addition to capturing latency and errors, the integrations capture the input parameters, input and output messages, and token usage (when available) of each traced call.
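
For example, the following sketch enables the SDK with integrations turned on; the OpenAI call that follows is then traced automatically (the ml_app value and model name are placeholders, and an OPENAI_API_KEY is assumed to be set in the environment):

from ddtrace.llmobs import LLMObs
from openai import OpenAI

# Enable LLM Observability. integrations_enabled defaults to True and is
# shown explicitly here; "my-llm-app" is a placeholder application name.
LLMObs.enable(ml_app="my-llm-app", integrations_enabled=True)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Automatically traced by the OpenAI integration; no decorators or manual
# spans are required for this call.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)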

Note: When you use a supported LLM Observability framework or library, no additional manual instrumentation (such as function decorators) is required to capture these calls. For calls that are not traced automatically, such as API calls, database queries, or internal functions, you can use the SDK's function decorators to trace them manually and capture detailed spans for any part of your application that auto-instrumentation does not cover.
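
For instance, the sketch below wraps a custom step with the SDK's workflow decorator from ddtrace.llmobs.decorators; the process_question function is a hypothetical example, and the nested OpenAI call is still captured by auto-instrumentation:

from ddtrace.llmobs import LLMObs
from ddtrace.llmobs.decorators import workflow
from openai import OpenAI

LLMObs.enable(ml_app="my-llm-app")
client = OpenAI()

@workflow  # creates a span for this custom, non-LLM operation
def process_question(question):
    # Custom preprocessing that auto-instrumentation would not capture
    cleaned = question.strip()
    # The nested OpenAI call is traced automatically and appears as a
    # child span of the workflow span.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": cleaned}],
    )
    return response.choices[0].message.content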

Enabling and disabling integrations

All integrations are enabled by default.

To disable all integrations, use the in-code SDK setup and specify integrations_enabled=False.

To only enable specific integrations:

  1. Use the in-code SDK setup, specifying integrations_enabled=False.
  2. Manually enable each integration you want with ddtrace.patch() at the top of the entrypoint file of your LLM application:
from ddtrace import patch
from ddtrace.llmobs import LLMObs

LLMObs.enable(integrations_enabled=False, ...)
patch(openai=True)
patch(langchain=True)
patch(anthropic=True)
patch(gemini=True)
patch(botocore=True)

Note: Use botocore as the integration name when manually enabling the Amazon Bedrock integration.

OpenAI

The OpenAI integration provides automatic tracing for the OpenAI Python SDK’s completion and chat completion endpoints to OpenAI and Azure OpenAI.

Traced methods

The OpenAI integration instruments the following methods, including streamed calls:

  • Completions:
    • OpenAI().completions.create(), AzureOpenAI().completions.create()
    • AsyncOpenAI().completions.create(), AsyncAzureOpenAI().completions.create()
  • Chat completions:
    • OpenAI().chat.completions.create(), AzureOpenAI().chat.completions.create()
    • AsyncOpenAI().chat.completions.create(), AsyncAzureOpenAI().chat.completions.create()
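
For example, once the SDK is enabled, a standard chat completion call like this sketch is captured automatically (the model name is a placeholder):

from openai import OpenAI

client = OpenAI()

# Automatically traced by the OpenAI integration, including input/output
# messages and token usage when available.
completion = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize LLM Observability in one sentence."},
    ],
)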

LangChain

The LangChain integration provides automatic tracing for the LangChain Python SDK’s LLM, chat model, and chain calls.

Traced methods

The LangChain integration instruments the following methods:

  • LLMs:
    • llm.invoke(), llm.ainvoke()
  • Chat models:
    • chat_model.invoke(), chat_model.ainvoke()
  • Chains/LCEL:
    • chain.invoke(), chain.ainvoke()
    • chain.batch(), chain.abatch()
  • Embeddings:
    • OpenAI: OpenAIEmbeddings.embed_documents(), OpenAIEmbeddings.embed_query()

Note: The LangChain integration does not yet support tracing streamed calls.
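
As an illustration, invoking a simple LCEL chain like the sketch below is traced automatically (this assumes the langchain-core and langchain-openai packages are installed; the model name is a placeholder):

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Translate to French: {text}")
chain = prompt | ChatOpenAI(model="gpt-4o-mini") | StrOutputParser()

# chain.invoke() is instrumented by the LangChain integration; the nested
# chat model call is captured as well.
result = chain.invoke({"text": "Good morning"})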

Amazon Bedrock

The Amazon Bedrock integration provides automatic tracing for the Amazon Bedrock Runtime Python SDK’s chat model calls (using Boto3/Botocore).

Traced methods

The Amazon Bedrock integration instruments the following methods:

  • Chat model invocation (including streamed calls):
    • invoke_model(), invoke_model_with_response_stream()

Note: The Amazon Bedrock integration does not yet support tracing embedding calls.
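
For illustration, a chat model call made through Boto3, as in the sketch below, is picked up by the botocore-based integration (the model ID and Claude-style request body are assumptions for this example):

import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Traced through the botocore integration when LLM Observability is enabled.
response = bedrock.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "Hello!"}],
    }),
)
result = json.loads(response["body"].read())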

Anthropic

The Anthropic integration provides automatic tracing for the Anthropic Python SDK’s chat message calls.

Traced methods

The Anthropic integration instruments the following methods:

  • Chat messages (including streamed calls):
    • Anthropic().messages.create(), AsyncAnthropic().messages.create()
  • Streamed chat messages:
    • Anthropic().messages.stream(), AsyncAnthropic().messages.stream()
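
For example, a messages call like the following sketch is traced automatically (the model name is a placeholder, and an ANTHROPIC_API_KEY is assumed to be set in the environment):

from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Traced by the Anthropic integration, including input/output messages
# and token usage when available.
message = client.messages.create(
    model="claude-3-haiku-20240307",
    max_tokens=256,
    messages=[{"role": "user", "content": "Hello, Claude"}],
)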

Google Gemini

The Google Gemini integration provides automatic tracing for the Google AI Python SDK’s content generation calls.

Traced methods

The Google Gemini integration instruments the following methods:

  • Generating content (including streamed calls):
    • model.generate_content() (also captures chat.send_message())
    • model.generate_content_async() (also captures chat.send_message_async())
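
For example, the following content generation call is traced automatically (a sketch using the google-generativeai package; the API key and model name are placeholders):

import google.generativeai as genai

genai.configure(api_key="<YOUR_GEMINI_API_KEY>")
model = genai.GenerativeModel("gemini-1.5-flash")

# Traced by the Google Gemini integration.
response = model.generate_content("Write a haiku about observability.")
print(response.text)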

Overview

Datadog’s LLM Observability Node.js SDK provides integrations that automatically trace and annotate calls to LLM frameworks and libraries. Without changing your code, you can get out-of-the-box traces and observability for calls that your LLM application makes to the following frameworks:

Framework           Supported Versions   Tracer Version
OpenAI (CommonJS)   >= 3.0.0             >= 4.49.0, >= 5.25.0

In addition to capturing latency and errors, the integrations capture the input parameters, input and output messages, and token usage (when available) of each traced call.

Enabling and disabling integrations

All integrations are enabled by default.

To disable all integrations, use the in-code SDK setup and specify plugins: false on the general tracer configuration.

const tracer = require('dd-trace').init({
  llmobs: { ... },
  plugins: false
});
const { llmobs } = tracer;

To only enable specific integrations:

  1. Use the in-code SDK setup, specifying plugins: false.
  2. Manually enable the integration with tracer.use() at the top of the entrypoint file of your LLM application:
const tracer = require('dd-trace').init({
  llmobs: { ... },
  plugins: false
});

const { llmobs } = tracer;
tracer.use('openai', true);

Additionally, you can set the following environment variables for finer-grained control over which libraries are patched and which integrations start spans:

DD_TRACE_DISABLED_PLUGINS
A comma-separated list of integration names that are automatically disabled when the tracer is initialized.
Example: DD_TRACE_DISABLED_PLUGINS=openai,http

DD_TRACE_DISABLED_INSTRUMENTATIONS
A comma-separated list of library names that are not patched when the tracer is initialized.
Example: DD_TRACE_DISABLED_INSTRUMENTATIONS=openai,http

OpenAI

The OpenAI integration provides automatic tracing for the OpenAI Node.js SDK’s completion, chat completion, and embeddings endpoints.

Traced methods

The OpenAI integration instruments the following methods, including streamed calls:

  • Completions:
    • openai.completions.create()
  • Chat completions:
    • openai.chat.completions.create()
  • Embeddings:
    • openai.embeddings.create()

ESM support

The OpenAI integration for the Node.js tracer is not supported in ESM. To use OpenAI along with dd-trace in your ESM projects without errors, create the following script:

// register.mjs

import { register } from 'node:module';

register("import-in-the-middle/hook.mjs", import.meta.url, {
  parentURL: import.meta.url,
  data: { include: ["openai"]}, // this is the important bit here
});

And start your application with:

DD_SITE=<YOUR_DATADOG_SITE> node --import ./register.mjs --require dd-trace/init script.js

This avoids compatibility issues between OpenAI and dd-trace in ESM projects.

With this workaround, OpenAI calls are not traced automatically. To capture them for LLM Observability, instrument your OpenAI calls manually with the llmobs.trace() method:

const tracer = require('dd-trace').init({
  llmobs: { ... }
});
const { llmobs } = tracer;

// user application code

async function makeOpenAICall (input) {
  // user code
  const response = await llmobs.trace({ kind: 'llm', name: 'openai.createChatCompletion', modelName: 'gpt-4', modelProvider: 'openai' }, async () => {
    const res = await openai.chat.completions.create({ ... });
    llmobs.annotate({
      inputData: input,
      outputData: res.choices[0].message.content
    });

    return res;
  });

  // user code to do something with `response`
}

Bundling support

To use LLM Observability integrations in bundled applications (esbuild, Webpack, Next.js), you must exclude those integrations’ modules from bundling.

If you are using esbuild, or for more specific information on why tracing does not work directly with bundlers, refer to Bundling with the Node.js tracer.

For Webpack or Next.js bundling, specify the corresponding integration in the externals section of the webpack configuration:

// next.config.js
module.exports = {
  webpack: (config) => {
    // may be needed: stub out graphql, an optional dd-trace dependency the bundler may fail to resolve
    config.resolve.fallback = {
      ...config.resolve.fallback,
      graphql: false,
    }

    // exclude OpenAI from bundling
    config.externals.push('openai')

    return config
  }
}

// webpack.config.js
module.exports = {
  resolve: {
    fallback: {
      graphql: false,
    }
  },
  externals: {
    openai: 'openai'
  }
}
