Overview
LLM Observability supports evaluations in several ways. You can configure them by navigating to AI Observability > Evaluations.
Custom LLM-as-a-judge evaluations
Custom LLM-as-a-judge evaluations allow you to define your own evaluation logic using natural language prompts. You can create custom evaluations to assess subjective or objective criteria (like tone, helpfulness, or factuality) and run them at scale across your traces and spans.
Managed evaluations
Datadog builds and supports managed evaluations to support common use cases. You can enable and configure them within the LLM Observability application.
Submit external evaluations
You can also submit external evaluations using Datadog's API. This is useful if you already have your own evaluation system but want to centralize those results within Datadog.
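For example, with the Datadog Python SDK (ddtrace) you can export a span's context and attach a result from your own evaluation system to it using LLMObs.submit_evaluation. The sketch below assumes DD_API_KEY and DD_SITE are set in the environment; the application name, metric label, and evaluator are illustrative placeholders, and argument names may differ slightly between SDK versions.

```python
from ddtrace.llmobs import LLMObs

# Enable LLM Observability; assumes DD_API_KEY and DD_SITE are set in the environment.
LLMObs.enable(ml_app="my-chatbot")  # "my-chatbot" is an illustrative application name


def my_external_evaluator(text: str) -> float:
    # Stand-in for your own evaluation system; returns a score between 0 and 1.
    return 1.0 if "Paris" in text else 0.0


with LLMObs.llm(model_name="gpt-4o", name="answer_question") as span:
    answer = "Paris is the capital of France."  # placeholder for a real model call
    LLMObs.annotate(
        span=span,
        input_data="What is the capital of France?",
        output_data=answer,
    )

    # Export the span context (trace and span IDs) so the evaluation can be joined to it.
    span_context = LLMObs.export_span(span=span)

# Submit the result from your own evaluator as an external evaluation on that span.
LLMObs.submit_evaluation(
    span_context=span_context,
    label="answer_quality",   # illustrative metric name
    metric_type="score",      # "score" for numeric values, "categorical" for strings
    value=my_external_evaluator(answer),
)
```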
Evaluation integrations
Datadog also supports integrations with third-party evaluation frameworks, such as Ragas and NeMo.
Sensitive Data Scanner integration
In addition to evaluating the input and output of LLM requests, agents, workflows, or the application, LLM Observability integrates with Sensitive Data Scanner, which helps prevent data leakage by identifying and redacting any sensitive information.
Security
AI Guard provides real-time guardrails that help secure your AI apps and agents against prompt injection, jailbreaking, tool misuse, and sensitive data exfiltration attacks. AI Guard is available in Preview.
Permissions
LLM Observability Write permissions are necessary to configure evaluations.
Retrieving spans
LLM Observability offers an Export API that you can use to retrieve spans for running external evaluations. This removes the need to track evaluation-relevant data yourself at execution time.
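As a sketch of what calling such an export API could look like from Python, the snippet below uses the standard Datadog API authentication headers. The endpoint path and request fields shown are illustrative placeholders, not the documented route; refer to the LLM Observability API reference for the actual export endpoint and request body.

```python
import os

import requests

# Standard Datadog API authentication headers.
headers = {
    "DD-API-KEY": os.environ["DD_API_KEY"],
    "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
}

# NOTE: this path is an illustrative placeholder, not the documented route.
# Check the LLM Observability API reference for the actual export endpoint.
url = "https://api.datadoghq.com/api/<llm-obs-span-export-endpoint>"

# Illustrative request body: a time window and a query scoping the export to one app.
body = {
    "filter": {
        "from": "now-1h",
        "to": "now",
        "query": "@ml_app:my-chatbot",
    }
}

response = requests.post(url, headers=headers, json=body, timeout=30)
response.raise_for_status()

# Feed each exported span's input and output into your own evaluation pipeline,
# then submit the results back as external evaluations (see above).
for span in response.json().get("data", []):
    print(span)
```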