Overview
Managed evaluations are built-in tools to assess your LLM application. LLM Observability associates evaluations with individual
spans so you can view the inputs and outputs that led to a specific evaluation.
Learn more about the compatibility requirements.
Create new evaluations
- Navigate to AI Observability > Evaluations.
- Click on the Create Evaluation button on the top right corner.
- Select a specific managed evaluation. This opens the evaluation editor window.
After you click Save and Publish, the evaluation goes live. Alternatively, you can Save as Draft and edit or enable it later.
Edit existing evaluations
- Navigate to AI Observability > Evaluations.
- Hover over the evaluation you want to edit and click the Edit button.
Supported managed evaluations
- Language Mismatch - Flags responses that are written in a different language than the user's input.
- Sensitive Data Scanning - Flags the presence of sensitive or regulated information in model inputs or outputs.
Further Reading
Additional helpful documentation, links, and articles: