Managed Evaluations


Overview

Managed evaluations are built-in tools to assess your LLM application. LLM Observability associates evaluations with individual spans so you can view the inputs and outputs that led to a specific evaluation.

Learn more about the compatibility requirements.

Create new evaluations

  1. Navigate to AI Observability > Evaluations.
  2. Click the Create Evaluation button in the top-right corner.
  3. Select a specific managed evaluation. This opens the evaluation editor window.

After you click Save and Publish, the evaluation goes live. Alternatively, you can Save as Draft and edit or enable it later.

Edit existing evaluations

  1. Navigate to AI Observability > Evaluations.
  2. Hover over the evaluation you want to edit and click the Edit button.

Supported managed evaluations

  • Language Mismatch - Flags responses that are written in a different language than the user’s input
  • Sensitive Data Scanning - Flags the presence of sensitive or regulated information in model inputs or outputs
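To build intuition for what a Language Mismatch evaluation checks, here is a minimal toy sketch in plain Python. It is not Datadog's implementation: it uses a crude Unicode-script heuristic (which conflates script with language) purely to illustrate the idea of comparing the language of a model's response against the user's input.

```python
import unicodedata

def dominant_script(text: str) -> str:
    """Return the most common Unicode script family in the text (toy heuristic)."""
    counts: dict[str, int] = {}
    for ch in text:
        if not ch.isalpha():
            continue
        name = unicodedata.name(ch, "")
        # The first word of a character's Unicode name is usually its script,
        # e.g. "LATIN SMALL LETTER A" or "HANGUL SYLLABLE AN".
        script = name.split(" ")[0] if name else "UNKNOWN"
        counts[script] = counts.get(script, 0) + 1
    return max(counts, key=counts.get) if counts else "UNKNOWN"

def language_mismatch(user_input: str, response: str) -> bool:
    """Flag when the response's dominant script differs from the input's."""
    return dominant_script(user_input) != dominant_script(response)
```

A real evaluation would use proper language identification rather than script detection, but the span-level shape is the same: take the input and output of a span, compare their detected languages, and flag a mismatch.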

Further Reading