The AI Firewall inspects incoming user prompts and blocks malicious payloads, including prompt injection attempts, prompt extraction attempts, and prompts containing personally identifiable information (PII). The AI Firewall also scans LLM model output to ensure it is free of false information, sensitive data, and harmful content. Responses that fall outside your organization's standards are blocked from the application.
This integration monitors the AI Firewall results through the Datadog Agent. It gives users observability into their AI security issues, including metrics for allowed and blocked data points and insight into why each data point was blocked.
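The metrics above are exposed by the AI Firewall in Prometheus text format and scraped by the Agent's OpenMetrics check. As a minimal sketch of what one exposed sample looks like, the snippet below parses a single Prometheus-format line; the metric and label names here are hypothetical, not taken from the actual AI Firewall endpoint.

```python
def parse_prometheus_line(line):
    """Parse one Prometheus exposition-format sample line into
    (metric_name, labels_dict, value)."""
    # Split the numeric value off the end of the line.
    name_labels, raw_value = line.rsplit(" ", 1)
    # Separate the metric name from the optional {label="value"} block.
    name, _, raw_labels = name_labels.partition("{")
    raw_labels = raw_labels.rstrip("}")
    labels = {}
    if raw_labels:
        for pair in raw_labels.split(","):
            key, _, val = pair.partition("=")
            labels[key] = val.strip('"')
    return name, labels, float(raw_value)

# Hypothetical sample resembling a blocked-data-point counter.
sample = 'ai_firewall_datapoints_blocked_total{rule="prompt_injection"} 3'
print(parse_prometheus_line(sample))
```

In practice the Agent handles this scraping for you; the sketch only illustrates the data shape the openmetrics endpoint serves.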
Follow the instructions below to install and configure this check for an Agent running on a host. For containerized environments, see the Autodiscovery Integration Templates for guidance on applying these instructions.
For Agent v7.21+ / v6.21+, follow the instructions below to install the Robust Intelligence AI Firewall check on your host. See Use Community Integrations to install with the Docker Agent or earlier versions of the Agent.
Run the following command to install the Agent integration:
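A sketch of the usual install command for Datadog community integrations, assuming the standard Agent CLI is on your path; replace <INTEGRATION_VERSION> with the version you want to install:

```shell
# Install the community integration via the Agent's integration command.
# <INTEGRATION_VERSION> is a placeholder, e.g. the latest released version.
datadog-agent integration install -t datadog-robust_intelligence_ai_firewall==<INTEGRATION_VERSION>
```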
To start collecting your Robust Intelligence AI Firewall performance data, edit the robust_intelligence_ai_firewall.d/conf.yaml file in the conf.d/ folder at the root of your Agent's configuration directory.
init_config:

instances:

    ## @param openmetrics_endpoint - string - required
    ## The URL exposing the Robust Intelligence AI Firewall
    ## internal metrics per loaded plugin in Prometheus
    ## format.
    #
  - openmetrics_endpoint: http://localhost:8080/metrics