LLM Observability offers several ways to support evaluations:
- Datadog builds and supports Out of the Box Evaluations that cover common use cases. You can enable and configure them within the LLM Observability application.
- You can also Submit Evaluations using Datadog's API. This mechanism is useful if you have your own evaluation system but want to centralize that information within Datadog.
- Datadog also supports integrations with third-party evaluation frameworks, such as Ragas and NeMo.
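For example, if you run your own evaluation system, you can centralize its results by posting an evaluation metric tied to the span it describes. The following is a minimal sketch of assembling such a request body; the endpoint path and field names (`span_id`, `label`, `metric_type`, `score_value`, and so on) are illustrative assumptions and should be checked against the LLM Observability API reference:

```python
import json

# Illustrative endpoint; verify the real path in the API reference.
EVAL_METRIC_ENDPOINT = "https://api.datadoghq.com/api/intake/llm-obs/v1/eval-metric"

def build_eval_payload(span_id: str, trace_id: str, ml_app: str,
                       label: str, score: float, timestamp_ms: int) -> dict:
    """Assemble a score-type evaluation metric for a specific span.

    Field names here are assumptions for illustration, not a
    guaranteed schema.
    """
    return {
        "data": {
            "type": "evaluation_metric",
            "attributes": {
                "metrics": [{
                    "span_id": span_id,        # which LLM span this score evaluates
                    "trace_id": trace_id,
                    "ml_app": ml_app,          # the LLM application name
                    "label": label,            # e.g. "relevance", "toxicity"
                    "metric_type": "score",    # numeric score (vs. categorical)
                    "score_value": score,
                    "timestamp_ms": timestamp_ms,
                }]
            },
        }
    }

payload = build_eval_payload("span-123", "trace-456", "chatbot",
                             "relevance", 0.92, 1700000000000)
body = json.dumps(payload)  # POST with your HTTP client and a DD-API-KEY header
```

Tying each evaluation to a `span_id`/`trace_id` pair lets Datadog display your external scores alongside the traced request they grade, rather than as a disconnected time series.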
In addition to evaluating the input and output of LLM requests, agents, workflows, or the application, LLM Observability integrates with Sensitive Data Scanner, which helps prevent data leakage by identifying and redacting any sensitive information (such as personal data, financial details, or proprietary information) that may be present in any step of your LLM application.
By proactively scanning for sensitive data, LLM Observability helps ensure that conversations remain secure and compliant with data protection regulations. This additional layer of security reinforces Datadog's commitment to maintaining the confidentiality and integrity of user interactions with LLMs.