Managed Evaluations

Overview

Managed evaluations are built-in tools to assess your LLM application. LLM Observability associates evaluations with individual spans so you can view the inputs and outputs that led to a specific evaluation.

Learn more about the compatibility requirements.

Create new evaluations

  1. Navigate to AI Observability > Evaluations.
  2. Click the Create Evaluation button in the top-right corner.
  3. Select a specific managed evaluation. This opens the evaluation editor window.

After you click Save and Publish, the evaluation goes live. Alternatively, you can Save as Draft and edit or enable the evaluation later.

Edit existing evaluations

  1. Navigate to AI Observability > Evaluations.
  2. Hover over the evaluation you want to edit and click the Edit button.

Supported managed evaluations

  • Language Mismatch - Flags responses that are written in a different language than the user’s input
  • Sensitive Data Scanning - Flags the presence of sensitive or regulated information in model inputs or outputs
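To build intuition for what a Language Mismatch check does, here is a minimal, illustrative sketch. It is not Datadog's implementation: it uses a crude Unicode-script heuristic (so it can distinguish, say, Latin from Cyrillic text, but not English from French), and the function names `dominant_script` and `language_mismatch` are hypothetical.

```python
import unicodedata

def dominant_script(text: str) -> str:
    """Return the most common Unicode script family in the text (rough heuristic)."""
    counts: dict[str, int] = {}
    for ch in text:
        if not ch.isalpha():
            continue
        name = unicodedata.name(ch, "")
        # First word of the Unicode name is the script, e.g. "LATIN", "CYRILLIC", "CJK"
        script = name.split()[0] if name else "UNKNOWN"
        counts[script] = counts.get(script, 0) + 1
    return max(counts, key=counts.get) if counts else "UNKNOWN"

def language_mismatch(user_input: str, model_output: str) -> bool:
    """Flag a response whose dominant script differs from the user's input."""
    return dominant_script(user_input) != dominant_script(model_output)
```

A real evaluation would use proper language identification rather than script counting, but the shape is the same: compare a language signal derived from the span's input against one derived from its output, and flag the span on disagreement.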

Further Reading