The Robust Intelligence AI Firewall is a protective layer for AI models.
The AI Firewall inspects incoming user prompts to block malicious payloads, running checks such as prompt injection detection, prompt extraction detection, and PII detection. It also scans LLM model output to ensure it’s free of false information, sensitive data, and harmful content. Responses that fall outside your organization’s standards are blocked before they reach the application.
This integration monitors AI Firewall results through the Datadog Agent. It gives users observability into their AI security issues, including metrics for allowed and blocked data points and insight into why each data point was blocked.
Follow the instructions below to install and configure this check for an Agent running on a host. For containerized environments, see the Autodiscovery Integration Templates for guidance on applying these instructions.
For Agent v7.21+ / v6.21+, follow the instructions below to install the Robust Intelligence AI Firewall check on your host. See Use Community Integrations to install with the Docker Agent or earlier versions of the Agent.
Run the following command to install the Agent integration:
datadog-agent integration install -t datadog-robust-intelligence-ai-firewall==1.0.0
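To confirm which version of the integration package is installed, you can inspect it with the Agent's integration subcommand (a quick sanity check; output varies by Agent version):
datadog-agent integration show datadog-robust-intelligence-ai-firewall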
Configure your integration similar to core integrations. Refer to the Configuration section below for steps specific to this integration.
Edit the robust_intelligence_ai_firewall.d/conf.yaml file in the conf.d/ folder at the root of your Agent’s configuration directory to start collecting your Robust Intelligence AI Firewall performance data.
init_config:

instances:

    ## @param openmetrics_endpoint - string - required
    ## The URL to the Robust Intelligence AI Firewall
    ## internal metrics in Prometheus format.
    #
  - openmetrics_endpoint: http://localhost:8080/metrics
See the sample robust_intelligence_ai_firewall.d/conf.yaml file for all available configuration options.
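For example, a minimal sketch of an expanded instance configuration, assuming the standard options available to Agent checks (the tags and min_collection_interval values below are illustrative additions, not requirements):

init_config:

instances:
  - openmetrics_endpoint: http://localhost:8080/metrics
    ## Optional: custom tags attached to every metric from this instance.
    tags:
      - env:staging
      - team:ml-security
    ## Optional: how often the check runs, in seconds (the default is 15).
    min_collection_interval: 30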
To configure the integration for the AI Firewall running in a containerized environment, add the following annotation to your pods:
apiVersion: v1
kind: Pod
# (...)
metadata:
  name: '<POD_NAME>'
  annotations:
    ad.datadoghq.com/<CONTAINER_IDENTIFIER>.checks: |
      {
        "robust_intelligence_ai_firewall": {
          "init_config": {},
          "instances": [
            {
              "openmetrics_endpoint": "http://%%host%%:8080/metrics"
            }
          ]
        }
      }
# (...)
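If the AI Firewall runs as a plain Docker container instead of on Kubernetes, the same check can be applied through Autodiscovery container labels. The sketch below assumes an Agent version that supports the com.datadoghq.ad.checks label and simply mirrors the pod annotation above:

LABEL "com.datadoghq.ad.checks"='{"robust_intelligence_ai_firewall": {"init_config": {}, "instances": [{"openmetrics_endpoint": "http://%%host%%:8080/metrics"}]}}'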
Run the Agent’s status subcommand and look for robust_intelligence_ai_firewall under the Checks section.
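On a host install, that typically looks like the following (output formatting varies by Agent version):

datadog-agent status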
| Metric | Type | Description |
| --- | --- | --- |
| robust_intelligence_ai_firewall.firewall_requests.count | count | Number of times the firewall was called to validate a request |
| robust_intelligence_ai_firewall.rule_evaluated.count | count | Number of times a rule was evaluated by the firewall |
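As an illustration, these count metrics can be graphed or alerted on with a standard Datadog metric query; the query below is a sketch, and the tags available for filtering depend on what the AI Firewall reports:

sum:robust_intelligence_ai_firewall.firewall_requests.count{*}.as_count()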
Robust Intelligence AI Firewall does not include any service checks.
Robust Intelligence AI Firewall does not include any events.
Need Help? Contact Robust Intelligence Support.