The LLM Observability SDK for Node.js enhances the observability of your JavaScript-based LLM applications. The SDK supports Node.js versions 16 and newer. For information about LLM Observability’s integration support, see Auto Instrumentation.
You can install and configure tracing of various operations such as workflows, tasks, and API calls with wrapped functions or traced blocks. You can also annotate these traces with metadata for deeper insights into the performance and behavior of your applications, supporting multiple LLM services or models from the same environment.
The dd-trace package must be installed:
npm install dd-trace
Enable LLM Observability by running your application with NODE_OPTIONS="--import dd-trace/initialize.mjs" and specifying the required environment variables.
Note: dd-trace/initialize.mjs automatically turns on all APM integrations.
DD_SITE=<YOUR_DATADOG_SITE> DD_API_KEY=<YOUR_API_KEY> DD_LLMOBS_ENABLED=1 \
DD_LLMOBS_ML_APP=<YOUR_ML_APP_NAME> NODE_OPTIONS="--import dd-trace/initialize.mjs" node <YOUR_APP_ENTRYPOINT>
The following environment variables are used:
- DD_API_KEY: required - Your Datadog API key.
- DD_SITE: required - Your Datadog site.
- DD_LLMOBS_ENABLED: required - Set to 1 or true to enable LLM Observability.
- DD_LLMOBS_ML_APP: required - The name of your LLM application.
- DD_LLMOBS_AGENTLESS_ENABLED: optional - Defaults to false. Set to 1 or true only if you are not running the Datadog Agent.
Alternatively, enable LLM Observability programmatically through the init() function instead of running with the dd-trace/initialize.mjs command. Note: Do not use this setup method together with the dd-trace/initialize.mjs command.
const tracer = require('dd-trace').init({
llmobs: {
mlApp: "<YOUR_ML_APP_NAME>",
agentlessEnabled: true,
},
site: "<YOUR_DATADOG_SITE>",
env: "<YOUR_ENV>",
});
const llmobs = tracer.llmobs;
These options are set on the llmobs configuration:
- mlApp: optional - The name of your LLM application. If not provided, this defaults to the value of DD_LLMOBS_ML_APP.
- agentlessEnabled: optional - Defaults to false. Set to true only if you are not running the Datadog Agent. This configures the dd-trace library to not send any data that requires the Datadog Agent. If not provided, this defaults to the value of DD_LLMOBS_AGENTLESS_ENABLED.

These options can be set on the general tracer configuration:
- env: optional - The environment your application is running in (for example: prod, pre-prod, staging). If not provided, this defaults to the value of DD_ENV.
- service: optional - The name of the service your application runs as. If not provided, this defaults to the value of DD_SERVICE.

DD_API_KEY and DD_SITE are read from environment variables for configuration, and cannot be configured programmatically.
For AWS Lambda functions, use the llmobs.flush() function to flush all remaining spans from the tracer to LLM Observability at the end of the Lambda function.
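For example, a minimal sketch of a Lambda handler; handleMessage is a hypothetical function standing in for your traced application logic:
const tracer = require('dd-trace').init({ llmobs: { mlApp: '<YOUR_ML_APP_NAME>' } })
const llmobs = tracer.llmobs

// hypothetical application logic, traced as a workflow
let handleMessage = async (event) => {
  return 'done'
}
handleMessage = llmobs.wrap({ kind: 'workflow', name: 'handleMessage' }, handleMessage)

exports.handler = async (event) => {
  const result = await handleMessage(event)
  llmobs.flush() // submit buffered spans before the Lambda execution ends
  return result
}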
Your application name (the value of DD_LLMOBS_ML_APP) must be a lowercase Unicode string. It may contain the characters listed below:
- Alphanumerics
- Underscores
- Minuses
- Colons
- Periods
- Slashes

The name can be up to 193 characters long and may not contain contiguous or trailing underscores.
To trace a span, use llmobs.wrap(options, function)
as a function wrapper for the function you’d like to trace. For a list of available span kinds, see the Span Kinds documentation. For more granular tracing of operations within functions, see Tracing spans using inline methods.
Span kinds are required, and are specified on the options object passed to the llmobs tracing functions (trace, wrap, and decorate). See the Span Kinds documentation for a list of supported span kinds.
Note: Spans with an invalid span kind are not submitted to LLM Observability.
llmobs.wrap (along with llmobs.decorate for TypeScript) tries to automatically capture inputs, outputs, and the name of the function being traced. If you need to manually annotate a span, see Annotating a span. Inputs and outputs you annotate will override the automatic capturing. Additionally, to override the function name, pass the name property on the options object to the llmobs.wrap function:
function processMessage () {
... // user application logic
return
}
processMessage = llmobs.wrap({ kind: 'workflow', name: 'differentFunctionName' }, processMessage)
Note: If you are using any LLM providers or frameworks that are supported by Datadog's LLM integrations, you do not need to manually start an LLM span to trace these operations.
To trace an LLM span, specify the span kind as llm, and optionally specify the following arguments on the options object:
- modelName: optional - The name of the invoked model. If not provided, defaults to "custom".
- name: optional - The name of the operation. If not provided, name defaults to the name of the traced function.
- modelProvider: optional - The name of the model provider. If not provided, defaults to "custom".
- sessionId: optional - The ID of the underlying user session. Required for session tracking.
- mlApp: optional - The name of the ML application the operation belongs to. If not provided, defaults to the value of DD_LLMOBS_ML_APP.
function llmCall () {
const completion = ... // user application logic to invoke LLM
return completion
}
llmCall = llmobs.wrap({ kind: 'llm', name: 'invokeLLM', modelName: 'claude', modelProvider: 'anthropic' }, llmCall)
To trace a workflow span, specify the span kind as workflow, and optionally specify the following arguments on the options object:
- name: optional - The name of the operation. If not provided, name defaults to the name of the traced function.
- sessionId: optional - The ID of the underlying user session. Required for session tracking.
- mlApp: optional - The name of the ML application the operation belongs to. If not provided, defaults to the value of DD_LLMOBS_ML_APP.
function processMessage () {
... // user application logic
return
}
processMessage = llmobs.wrap({ kind: 'workflow' }, processMessage)
To trace an agent span, specify the span kind as agent, and optionally specify the following arguments on the options object:
- name: optional - The name of the operation. If not provided, name defaults to the name of the traced function.
- sessionId: optional - The ID of the underlying user session. Required for session tracking.
- mlApp: optional - The name of the ML application the operation belongs to. If not provided, defaults to the value of DD_LLMOBS_ML_APP.
function reactAgent () {
... // user application logic
return
}
reactAgent = llmobs.wrap({ kind: 'agent' }, reactAgent)
To trace a tool span, specify the span kind as tool, and optionally specify the following arguments on the options object:
- name: optional - The name of the operation. If not provided, name defaults to the name of the traced function.
- sessionId: optional - The ID of the underlying user session. Required for session tracking.
- mlApp: optional - The name of the ML application the operation belongs to. If not provided, defaults to the value of DD_LLMOBS_ML_APP.
function callWeatherApi () {
... // user application logic
return
}
callWeatherApi = llmobs.wrap({ kind: 'tool' }, callWeatherApi)
To trace a task span, specify the span kind as task, and optionally specify the following arguments on the options object:
- name: optional - The name of the operation. If not provided, name defaults to the name of the traced function.
- sessionId: optional - The ID of the underlying user session. Required for session tracking.
- mlApp: optional - The name of the ML application the operation belongs to. If not provided, defaults to the value of DD_LLMOBS_ML_APP.
function sanitizeInput () {
... // user application logic
return
}
sanitizeInput = llmobs.wrap({ kind: 'task' }, sanitizeInput)
To trace an embedding span, specify the span kind as embedding, and optionally specify the following arguments on the options object.
Note: Annotating an embedding span’s input requires different formatting than other span types. See Annotating a span for more details on how to specify embedding inputs.
- modelName: optional - The name of the embedding model. If not provided, defaults to "custom".
- name: optional - The name of the operation. If not provided, name is set to the name of the traced function.
- modelProvider: optional - The name of the model provider. If not provided, defaults to "custom".
- sessionId: optional - The ID of the underlying user session. Required for session tracking.
- mlApp: optional - The name of the ML application the operation belongs to. If not provided, defaults to the value of DD_LLMOBS_ML_APP.
function performEmbedding () {
... // user application logic
return
}
performEmbedding = llmobs.wrap({ kind: 'embedding', modelName: 'text-embedding-3', modelProvider: 'openai' }, performEmbedding)
To trace a retrieval span, specify the span kind as retrieval, and optionally specify the following arguments on the options object.
Note: Annotating a retrieval span’s output requires different formatting than other span types. See Annotating a span for more details on how to specify retrieval outputs.
- name: optional - The name of the operation. If not provided, name defaults to the name of the traced function.
- sessionId: optional - The ID of the underlying user session. Required for session tracking.
- mlApp: optional - The name of the ML application the operation belongs to. If not provided, defaults to the value of DD_LLMOBS_ML_APP.
The following also includes an example of annotating a span. See Annotating a span for more information.
function getRelevantDocs (question) {
const contextDocuments = ... // user application logic
llmobs.annotate({
inputData: question,
outputData: contextDocuments.map(doc => ({
id: doc.id,
score: doc.score,
text: doc.text,
name: doc.name
}))
})
return
}
getRelevantDocs = llmobs.wrap({ kind: 'retrieval' }, getRelevantDocs)
llmobs.wrap extends the underlying behavior of tracer.wrap. The underlying span created when the function is called is finished under the following conditions:
- If the function returns a promise, the span finishes when the promise resolves or rejects.
- If the last argument to the function is a callback, the span finishes when that callback is called.
- Otherwise, the span finishes when the function returns.
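As a sketch of the first condition, an async function's span finishes when the returned promise settles; callModel here is a hypothetical async operation:
async function fetchAnswer (question) {
  const answer = await callModel(question) // hypothetical async application logic
  return answer
}
fetchAnswer = llmobs.wrap({ kind: 'workflow', name: 'fetchAnswer' }, fetchAnswer)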
The following example demonstrates the second condition, where the last argument is a callback:
const express = require('express')
const app = express()
function myAgentMiddleware (req, res, next) {
const err = ... // user application logic
// the span for this function is finished when `next` is called
next(err)
}
myAgentMiddleware = llmobs.wrap({ kind: 'agent' }, myAgentMiddleware)
app.use(myAgentMiddleware)
If the application does not use the callback function, it is recommended to use an inline traced block instead. See Tracing spans using inline methods for more information.
const express = require('express')
const app = express()
function myAgentMiddleware (req, res) {
// the `next` callback is not being used here
return llmobs.trace({ kind: 'agent', name: 'myAgentMiddleware' }, () => {
return res.status(200).send('Hello World!')
})
}
app.use(myAgentMiddleware)
Session tracking allows you to associate multiple interactions with a given user. When starting a root span for a new trace or span in a new process, specify the sessionId
argument with the string ID of the underlying user session:
function processMessage () {
... // user application logic
return
}
processMessage = llmobs.wrap({ kind: 'workflow', sessionId: "<SESSION_ID>" }, processMessage)
The SDK provides the method llmobs.annotate()
to annotate spans with inputs, outputs, and metadata.
The llmobs.annotate() method accepts the following arguments:
- span: optional - The span to annotate. If span is not provided (as when using function wrappers), the SDK annotates the current active span.
- annotationOptions: An object containing the data to annotate the span with.

The annotationOptions object can contain the following:
- inputData: optional - A JSON-serializable value, or an object (or a list of objects) with the format {role: "...", content: "..."} (for LLM spans). Note: Embedding spans are a special case and require a string or an object (or a list of objects) with this format: {text: "..."}.
- outputData: optional - A JSON-serializable value, or an object (or a list of objects) with the format {role: "...", content: "..."} (for LLM spans). Note: Retrieval spans are a special case and require a string or an object (or a list of objects) with this format: {text: "...", name: "...", score: number, id: "..."}.
- metadata: optional - An object of JSON-serializable key-value pairs containing relevant metadata for the span (model_temperature, max_tokens, top_k, and so on).
- metrics: optional - An object of key-value pairs containing relevant metrics for the span (input_tokens, output_tokens, total_tokens, and so on).
- tags: optional - An object of key-value pairs containing relevant tags for the span (session, environment, system, versioning, and so on). For more information about tags, see Getting Started with Tags.

The following examples annotate LLM, workflow, embedding, and retrieval spans:
function llmCall (prompt) {
const completion = ... // user application logic to invoke LLM
llmobs.annotate({
inputData: [{ role: "user", content: "Hello world!" }],
outputData: [{ role: "assistant", content: "How can I help?" }],
metadata: { temperature: 0, max_tokens: 200 },
metrics: { input_tokens: 4, output_tokens: 6, total_tokens: 10 },
tags: { host: "host_name" }
})
return completion
}
llmCall = llmobs.wrap({ kind:'llm', modelName: 'modelName', modelProvider: 'modelProvider' }, llmCall)
function extractData (document) {
const resp = llmCall(document)
llmobs.annotate({
inputData: document,
outputData: resp,
tags: { host: "host_name" }
})
return resp
}
extractData = llmobs.wrap({ kind: 'workflow' }, extractData)
function performEmbedding () {
... // user application logic
llmobs.annotate(
undefined, { // this can be set to undefined or left out entirely
inputData: { text: "Hello world!" },
outputData: [0.0023064255, -0.009327292, ...],
metrics: { input_tokens: 4 },
tags: { host: "host_name" }
}
)
}
performEmbedding = llmobs.wrap({ kind: 'embedding', modelName: 'text-embedding-3', modelProvider: 'openai' }, performEmbedding)
function similaritySearch () {
... // user application logic
llmobs.annotate(undefined, {
inputData: "Hello world!",
outputData: [{ text: "Hello world is ...", name: "Hello, World! program", id: "document_id", score: 0.9893 }],
tags: { host: "host_name" }
})
return
}
similaritySearch = llmobs.wrap({ kind: 'retrieval', name: 'getRelevantDocs' }, similaritySearch)
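If you hold a span handle (for example, the span passed to an inline llmobs.trace block, described below), you can pass it as the first argument to annotate that specific span. A minimal sketch, where runSummarizer is hypothetical application logic:
function summarize (text) {
  return llmobs.trace({ kind: 'workflow', name: 'summarize' }, workflowSpan => {
    const summary = runSummarizer(text) // hypothetical application logic
    llmobs.annotate(workflowSpan, {
      inputData: text,
      outputData: summary
    })
    return summary
  })
}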
The LLM Observability SDK provides the methods llmobs.exportSpan()
and llmobs.submitEvaluation()
to help your traced LLM application submit evaluations to LLM Observability.
llmobs.exportSpan()
can be used to extract the span context from a span. You’ll need to use this method to associate your evaluation with the corresponding span.
The llmobs.exportSpan() method accepts the following argument:
- span: optional - The span to export. If not provided (as when using function wrappers), the SDK exports the current active span.
function llmCall () {
const completion = ... // user application logic to invoke LLM
const spanContext = llmobs.exportSpan()
return completion
}
llmCall = llmobs.wrap({ kind: 'llm', name: 'invokeLLM', modelName: 'claude', modelProvider: 'anthropic' }, llmCall)
llmobs.submitEvaluation() can be used to submit your custom evaluation associated with a given span.
The llmobs.submitEvaluation() method accepts the following arguments:
- spanContext: required - The span context of the span to submit the evaluation for, as returned by llmobs.exportSpan().
- evaluationOptions: An object containing the evaluation to submit.

The evaluationOptions object can contain the following:
- label: required - The name of the evaluation.
- metricType: required - The type of the evaluation metric, either "categorical" or "score".
- value: required - The value of the evaluation: a string (for a categorical metricType) or a number (for a score metricType).
- tags: optional - An object of key-value pairs containing relevant tags for the evaluation.
function llmCall () {
const completion = ... // user application logic to invoke LLM
const spanContext = llmobs.exportSpan()
llmobs.submitEvaluation(spanContext, {
label: "harmfulness",
metricType: "score",
value: 10,
tags: { evaluationProvider: "ragas" }
})
return completion
}
llmCall = llmobs.wrap({ kind: 'llm', name: 'invokeLLM', modelName: 'claude', modelProvider: 'anthropic' }, llmCall)
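Categorical evaluations follow the same pattern with a string value. A sketch, assuming a hypothetical sentiment label:
llmobs.submitEvaluation(spanContext, {
  label: 'sentiment', // hypothetical evaluation label
  metricType: 'categorical', // categorical evaluations take string values
  value: 'positive',
  tags: { evaluationProvider: 'custom' }
})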
The SDK provides llmobs.trace, a corresponding inline method to trace the operations a given code block entails. These methods have the same argument signature as their function wrapper counterparts, with the addition that name is required, as the name cannot be inferred from an anonymous callback. This method finishes the span under the following conditions:
- If the traced block returns a promise, the span finishes when the promise resolves or rejects.
- If the traced block accepts a second argument (a callback), the span finishes when that callback is called.
- Otherwise, the span finishes when the traced block returns.

The following example traces a workflow within an inline block:
function processMessage () {
return llmobs.trace({ kind: 'workflow', name: 'processMessage', sessionId: '<SESSION_ID>', mlApp: '<ML_APP>' }, workflowSpan => {
... // user application logic
return
})
}
To signal completion through a callback instead, accept a second argument in the traced block:
function processMessage () {
return llmobs.trace({ kind: 'workflow', name: 'processMessage', sessionId: '<SESSION_ID>', mlApp: '<ML_APP>' }, (workflowSpan, cb) => {
... // user application logic
let maybeError = ...
cb(maybeError) // the span will finish here, and tag the error if it is not null or undefined
return
})
}
The return type of this function matches the return type of the traced function:
function processMessage () {
const result = llmobs.trace({ kind: 'workflow', name: 'processMessage', sessionId: '<SESSION_ID>', mlApp: '<ML_APP>' }, workflowSpan => {
... // user application logic
return 'hello world'
})
console.log(result) // 'hello world'
return result
}
The Node.js LLM Observability SDK offers an llmobs.decorate function, which serves as a method decorator for TypeScript applications. This function's tracing behavior is the same as llmobs.wrap.
// index.ts
import tracer from 'dd-trace';
tracer.init({
llmobs: {
mlApp: "<YOUR_ML_APP_NAME>",
},
});
const { llmobs } = tracer;
class MyAgent {
@llmobs.decorate({ kind: 'agent' })
async runChain () {
... // user application logic
return
}
}
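Note that this decorator syntax assumes legacy (experimental) decorators are enabled in your TypeScript configuration, for example:
// tsconfig.json (assumption: llmobs.decorate targets legacy decorator syntax)
{
  "compilerOptions": {
    "experimentalDecorators": true
  }
}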
llmobs.flush()
is a blocking function that submits all buffered LLM Observability data to the Datadog backend. This can be useful in serverless environments to prevent an application from exiting until all LLM Observability traces are submitted.
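For example, a minimal sketch of a short-lived script, where processMessage is the traced function from the examples above:
async function main () {
  await processMessage()
  llmobs.flush() // blocks until buffered LLM Observability data is submitted
}
main()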
The SDK supports tracing multiple LLM applications from the same service.
You can set the DD_LLMOBS_ML_APP environment variable to the name of your LLM application; by default, all generated spans are grouped under it.
To override this configuration and use a different LLM application name for a given root span, pass the mlApp
argument with the string name of the underlying LLM application when starting a root span for a new trace or a span in a new process.
function processMessage () {
... // user application logic
return
}
processMessage = llmobs.wrap({ kind: 'workflow', name: 'processMessage', mlApp: '<NON_DEFAULT_ML_APP_NAME>' }, processMessage)
The SDK supports tracing across distributed services or hosts. Distributed tracing works by propagating span information across web requests.
The dd-trace library provides out-of-the-box integrations that support distributed tracing for popular web frameworks. Requiring the tracer automatically enables these integrations, but you can optionally disable any of them:
const tracer = require('dd-trace').init({
llmobs: { ... },
})
tracer.use('http', false) // disable the http integration
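With the http integration left enabled, outgoing requests carry the trace context automatically, so spans from this client join the same distributed trace as the downstream service. A minimal sketch, where http://llm-service.internal/answer is a hypothetical downstream service:
const http = require('http')

function askRemoteAgent () {
  return llmobs.trace({ kind: 'agent', name: 'askRemoteAgent' }, () => {
    return new Promise((resolve, reject) => {
      // the http integration injects trace headers into this outgoing request
      http.get('http://llm-service.internal/answer', res => {
        let body = ''
        res.on('data', chunk => { body += chunk })
        res.on('end', () => resolve(body))
      }).on('error', reject)
    })
  })
}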