Datadog’s LLM Observability Python SDK provides integrations that automatically trace and annotate calls to LLM frameworks and libraries. Without changing your code, you can get out-of-the-box traces and observability for calls that your LLM application makes to the following frameworks:
| Framework | Supported Versions | Tracer Version |
|---|---|---|
| OpenAI, Azure OpenAI | >= 0.26.5 | >= 2.9.0 |
| LangChain | >= 0.0.192 | >= 2.9.0 |
| Amazon Bedrock | >= 1.31.57 | >= 2.9.0 |
| Anthropic | >= 0.28.0 | >= 2.10.0 |
| Google Gemini | >= 0.7.2 | >= 2.14.0 |
| Vertex AI | >= 1.71.1 | >= 2.18.0 |
You can programmatically enable automatic tracing of LLM calls to a supported LLM provider like OpenAI or a framework like LangChain by setting `integrations_enabled` to `true` in the `LLMObs.enable()` function. In addition to capturing latency and errors, the integrations capture the input parameters, input and output messages, and token usage (when available) of each traced call.
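For example, a minimal setup sketch (the `ml_app` name is a placeholder, and a running Datadog Agent or agentless configuration is assumed):

```python
from ddtrace.llmobs import LLMObs

# integrations_enabled defaults to True; shown explicitly for clarity.
# "my-llm-app" is a placeholder application name.
LLMObs.enable(ml_app="my-llm-app", integrations_enabled=True)

# From this point on, calls made through a supported SDK
# (for example, openai or langchain) are traced automatically.
```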
Note: When using a supported framework or library, no additional manual instrumentation (such as function decorators) is required to capture these calls. For custom or additional operations within your LLM application that are not automatically traced (such as API calls, database queries, or internal functions), you can use function decorators to manually trace them and capture detailed spans for any part of your application that auto-instrumentation does not cover.
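As a sketch of the manual case, the SDK ships function decorators in `ddtrace.llmobs.decorators`; the function names below are hypothetical, and `ddtrace` is assumed to be installed and enabled:

```python
from ddtrace.llmobs import LLMObs
from ddtrace.llmobs.decorators import task, workflow

LLMObs.enable(ml_app="my-llm-app")  # placeholder app name

@task
def fetch_context(question: str) -> str:
    # For example, a database query that auto-instrumentation does not cover.
    return "retrieved context for: " + question

@workflow
def answer_question(question: str) -> str:
    context = fetch_context(question)  # traced as a child task span
    # ... a call to a supported LLM SDK here is still traced automatically ...
    return context
```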
All integrations are enabled by default.
To disable all integrations, use the in-code SDK setup and specify `integrations_enabled=False`.
To only enable specific integrations:

1. Use the in-code SDK setup, specifying `integrations_enabled=False`.
2. Manually enable specific integrations with `ddtrace.patch()` at the top of the entrypoint file of your LLM application:

```python
from ddtrace import patch
from ddtrace.llmobs import LLMObs

LLMObs.enable(integrations_enabled=False, ...)
patch(openai=True)
patch(langchain=True)
patch(anthropic=True)
patch(gemini=True)
patch(botocore=True)
```

Note: Use `botocore` as the name of the Amazon Bedrock integration when manually enabling.
The OpenAI integration provides automatic tracing for the OpenAI Python SDK’s completion and chat completion endpoints to OpenAI and Azure OpenAI.
The OpenAI integration instruments the following methods, including streamed calls:
- `OpenAI().completions.create()`, `AzureOpenAI().completions.create()`
- `AsyncOpenAI().completions.create()`, `AsyncAzureOpenAI().completions.create()`
- `OpenAI().chat.completions.create()`, `AzureOpenAI().chat.completions.create()`
- `AsyncOpenAI().chat.completions.create()`, `AsyncAzureOpenAI().chat.completions.create()`
The LangChain integration provides automatic tracing for the LangChain Python SDK’s LLM, chat model, and chain calls.
The LangChain integration instruments the following methods:
- `llm.invoke()`, `llm.ainvoke()`
- `llm.stream()`, `llm.astream()`
- `chat_model.invoke()`, `chat_model.ainvoke()`
- `chat_model.stream()`, `chat_model.astream()`
- `chain.invoke()`, `chain.ainvoke()`
- `chain.batch()`, `chain.abatch()`
- `chain.stream()`, `chain.astream()`
- `OpenAIEmbeddings.embed_documents()`, `OpenAIEmbeddings.embed_query()`
- `BaseTool.invoke()`, `BaseTool.ainvoke()`
- `langchain_community.<vectorstore>.similarity_search()`
- `langchain_pinecone.similarity_search()`
The Amazon Bedrock integration provides automatic tracing for the Amazon Bedrock Runtime Python SDK’s chat model calls (using Boto3/Botocore).
The Amazon Bedrock integration instruments the following methods:
- `InvokeModel`
- `InvokeModelWithResponseStream`
Note: The Amazon Bedrock integration does not yet support tracing embedding calls.
The Anthropic integration provides automatic tracing for the Anthropic Python SDK’s chat message calls.
The Anthropic integration instruments the following methods:
- `Anthropic().messages.create()`, `AsyncAnthropic().messages.create()`
- `Anthropic().messages.stream()`, `AsyncAnthropic().messages.stream()`
The Google Gemini integration provides automatic tracing for the Google AI Python SDK’s content generation calls.
The Google Gemini integration instruments the following methods:
- `model.generate_content()` (also captures `chat.send_message()`)
- `model.generate_content_async()` (also captures `chat.send_message_async()`)

The Vertex AI integration automatically traces content generation and chat message calls made through Google’s Vertex AI Python SDK.
The Vertex AI integration instruments the following methods:
Generating content (including streamed calls):

- `model.generate_content()`
- `model.generate_content_async()`

Chat messages (including streamed calls):

- `chat.send_message()`
- `chat.send_message_async()`
Datadog’s LLM Observability Node.js SDK provides integrations that automatically trace and annotate calls to LLM frameworks and libraries. Without changing your code, you can get out-of-the-box traces and observability for calls that your LLM application makes to the following frameworks:
| Framework | Supported Versions | Tracer Version |
|---|---|---|
| OpenAI (CommonJS) | >= 3.0.0 | >= 4.49.0, >= 5.25.0 |
In addition to capturing latency and errors, the integrations capture the input parameters, input and output messages, and token usage (when available) of each traced call.
All integrations are enabled by default.
To disable all integrations, use the in-code SDK setup and specify `plugins: false` in the general tracer configuration:

```javascript
const tracer = require('dd-trace').init({
  llmobs: { ... },
  plugins: false
});
const { llmobs } = tracer;
```
To only enable specific integrations:

1. Use the in-code SDK setup, specifying `plugins: false`.
2. Manually enable specific integrations with `tracer.use()` at the top of the entrypoint file of your LLM application:

```javascript
const tracer = require('dd-trace').init({
  llmobs: { ... },
  plugins: false
});
const { llmobs } = tracer;

tracer.use('openai', true);
```
Additionally, you can set the following environment variables for more specific control over library patching and the integration that starts the span:
- `DD_TRACE_DISABLED_PLUGINS`: a comma-separated list of plugins to disable when the tracer is initialized (for example, `DD_TRACE_DISABLED_PLUGINS=openai,http`)
- `DD_TRACE_DISABLED_INSTRUMENTATIONS`: a comma-separated list of libraries that are not instrumented (for example, `DD_TRACE_DISABLED_INSTRUMENTATIONS=openai,http`)
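For example, the same control can be applied at startup; a sketch, where `app.js` is a placeholder entrypoint:

```shell
# Disable only the openai plugin; all other integrations stay enabled.
DD_TRACE_DISABLED_PLUGINS=openai node --require dd-trace/init app.js
```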
The OpenAI integration provides automatic tracing for the OpenAI Node.js SDK’s completion, chat completion, and embeddings endpoints.
The OpenAI integration instruments the following methods, including streamed calls:
- `openai.completions.create()`
- `openai.chat.completions.create()`
- `openai.embeddings.create()`
The OpenAI integration for the Node.js tracer is not supported in ESM. To use OpenAI along with `dd-trace` in your ESM projects without errors, create the following script:
```javascript
// register.mjs
import { register } from 'node:module';

register("import-in-the-middle/hook.mjs", import.meta.url, {
  parentURL: import.meta.url,
  data: { include: ["openai"] }, // this is the important bit here
});
```
And start your application with:
```shell
DD_SITE=<YOUR_DATADOG_SITE> node --import ./register.mjs --require dd-trace/init script.js
```
This avoids any compatibility issues with OpenAI and `dd-trace` in ESM projects. In this case, automatic tracing is not applied to OpenAI calls. To add tracing for LLM Observability, you can instrument your OpenAI calls with the `llmobs.trace()` method:
```javascript
const tracer = require('dd-trace').init({
  llmobs: { ... }
});
const { llmobs } = tracer;

// user application code
async function makeOpenAICall (input) {
  // user code
  const response = await llmobs.trace(
    { kind: 'llm', name: 'openai.createChatCompletion', modelName: 'gpt-4', modelProvider: 'openai' },
    async () => {
      const res = await openai.chat.completions.create({ ... });
      llmobs.annotate({
        inputData: input,
        outputData: res.choices[0].message.content
      });
      return res;
    }
  );
  // user code to do something with `response`
}
```
To use LLM Observability integrations in bundled applications (esbuild, Webpack, Next.js), you must exclude those integrations’ modules from bundling.
If you are using esbuild, or for more specific information on why tracing does not work directly with bundlers, refer to Bundling with the Node.js tracer.
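As a sketch of the esbuild case (assuming the esbuild CLI and an `app.js` placeholder entrypoint), the integration's module can be kept out of the bundle with `--external`:

```shell
# Exclude openai (and dd-trace itself) from the bundle so the tracer
# can patch the real module at runtime.
esbuild app.js --bundle --platform=node \
  --external:openai --external:dd-trace \
  --outfile=dist/app.js
```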
For Webpack or Next.js bundling, specify the corresponding integration in the `externals` section of the webpack configuration:
```javascript
// next.config.js
module.exports = {
  webpack: (config) => {
    // this may be a necessary inclusion
    config.resolve.fallback = {
      ...config.resolve.fallback,
      graphql: false,
    }

    // exclude OpenAI from bundling
    config.externals.push('openai')

    return config
  }
}
```
```javascript
// webpack.config.js
module.exports = {
  resolve: {
    fallback: {
      graphql: false,
    }
  },
  externals: {
    openai: 'openai'
  }
}
```