LLM Observability Instrumentation

To get started with Datadog LLM Observability, instrument your LLM application or agent(s) by choosing from several approaches based on your programming language and setup. Datadog provides comprehensive instrumentation options designed to capture detailed traces, metrics, and evaluations from your LLM applications and agents with minimal code changes.

Instrumentation Options

You can instrument your application with the Python, Node.js, or Java SDKs, or by using the LLM Observability API.

Datadog provides native SDKs that offer the most comprehensive LLM observability features:

Language   SDK Available   Auto-Instrumentation   Custom Instrumentation
Python     Python 3.7+     Yes                    Yes
Node.js    Node.js 16+     Yes                    Yes
Java       Java 8+         No                     Yes

To instrument an LLM application with the SDK:

  1. Install the LLM Observability SDK.
  2. Configure the SDK by providing the required environment variables in your application startup command, or programmatically in code. Ensure you have configured your Datadog API key, Datadog site, and machine learning (ML) app name.
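
The environment-variable route in step 2 can be sketched as follows. This is a minimal sketch using placeholder values: the variable names follow Datadog's documented convention, while the app name and key shown here are hypothetical.

```python
import os

# Set the required configuration before the application (and the SDK) starts.
# DD_LLMOBS_ENABLED turns the feature on; DD_LLMOBS_ML_APP names your ML app.
os.environ.setdefault("DD_LLMOBS_ENABLED", "1")
os.environ.setdefault("DD_LLMOBS_ML_APP", "my-llm-app")       # hypothetical app name
os.environ.setdefault("DD_API_KEY", "<your-datadog-api-key>") # placeholder
os.environ.setdefault("DD_SITE", "datadoghq.com")             # your Datadog site
```

The same variables can instead be passed inline in the application startup command.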

Auto-instrumentation

Auto-instrumentation captures LLM calls for Python and Node.js applications without requiring code changes. It allows you to get out-of-the-box traces and observability into popular frameworks and providers. For additional details and a full list of supported frameworks and providers, see the Auto-instrumentation Documentation.

Auto-instrumentation automatically captures:

  • Input prompts and output completions
  • Token usage and costs
  • Latency and error information
  • Model parameters (temperature, max_tokens, etc.)
  • Framework-specific metadata

Note: When using supported frameworks, no manual span creation is required for LLM calls. The SDK automatically creates appropriate spans with rich metadata.

Custom instrumentation

In addition to auto-instrumentation, all supported SDKs offer advanced capabilities for custom instrumentation of your LLM applications, including:

  • Manual span creation using function decorators or context managers
  • Complex workflow tracing for multi-step LLM applications
  • Agent monitoring for autonomous LLM agents
  • Custom evaluations and quality measurements
  • Session tracking for user interactions

To learn more, see the SDK Reference Documentation.
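
Manual span creation with function decorators, the first capability above, might look like the following sketch. It assumes the `ddtrace` Python SDK's `llmobs` decorators; the no-op fallback keeps the snippet runnable where `ddtrace` is not installed, and the function names and bodies are illustrative placeholders.

```python
# Sketch of manual span creation with decorators, assuming the ddtrace
# Python SDK (ddtrace.llmobs.decorators). Names and logic are placeholders.
try:
    from ddtrace.llmobs.decorators import workflow, task
except ImportError:
    # No-op stand-ins so the sketch runs without ddtrace installed.
    def workflow(fn=None, **kwargs):
        if callable(fn):
            return fn
        return lambda f: f
    task = workflow

@workflow
def answer_question(question: str) -> str:
    # A workflow span wrapping a multi-step pipeline.
    return summarize(retrieve(question))

@task
def retrieve(question: str) -> str:
    # A task span for a retrieval step (placeholder logic).
    return f"context for: {question}"

@task
def summarize(context: str) -> str:
    # A task span for a summarization step (placeholder logic).
    return f"summary of ({context})"

print(answer_question("What is LLM observability?"))
```

Each decorated call produces a span of the corresponding kind, so the workflow span contains the two task spans in the resulting trace.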

HTTP API instrumentation

If your language is not supported by the SDKs or you are using custom integrations, you can instrument your application using Datadog’s HTTP API.

The API allows you to:

  • Submit spans directly via HTTP endpoints
  • Send custom evaluations associated with spans
  • Include full trace hierarchies for complex applications
  • Annotate spans with inputs, outputs, metadata, and metrics

API endpoints:

  • Spans API: POST https://api.<YOUR_DD_SITE>/api/intake/llm-obs/v1/trace/spans
  • Evaluations API: POST https://api.<YOUR_DD_SITE>/api/intake/llm-obs/v2/eval-metric

Replace <YOUR_DD_SITE> with your Datadog site (for example, datadoghq.com).
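
As a sketch, a minimal Spans API request body might be assembled like this. The field names (`ml_app`, `span_id`, `start_ns`, `meta.kind`, and so on) reflect Datadog's documented span schema at the time of writing and should be verified against the HTTP API reference; all values are placeholders.

```python
import json
import time

# Hypothetical single-span payload for the Spans API; values are placeholders.
now_ns = time.time_ns()
payload = {
    "data": {
        "type": "span",
        "attributes": {
            "ml_app": "my-llm-app",  # hypothetical ML app name
            "spans": [
                {
                    "name": "chat_completion",
                    "span_id": "6789",
                    "trace_id": "12345",
                    "parent_id": "undefined",  # "undefined" marks a root span
                    "start_ns": now_ns - 250_000_000,
                    "duration": 250_000_000,   # nanoseconds
                    "meta": {
                        "kind": "llm",
                        "input": {"messages": [{"role": "user", "content": "Hello"}]},
                        "output": {"messages": [{"role": "assistant", "content": "Hi!"}]},
                    },
                }
            ],
        },
    }
}

# The request itself would POST this JSON body with a DD-API-KEY header.
body = json.dumps(payload)
```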

To learn more, see the HTTP API Documentation.

Further Reading