---
title: Get Started with AI Guard
description: Datadog, the leading service for cloud-scale monitoring.
breadcrumbs: Docs > Datadog Security > AI Guard > Get Started with AI Guard
---

# Get Started with AI Guard

{% callout %}
# Important note for users on the following Datadog sites: app.ddog-gov.com, us2.ddog-gov.com



{% alert level="danger" %}
AI Guard isn't available in the  site.
{% /alert %}


{% /callout %}

AI Guard helps secure your AI apps and agents in real time against prompt injection, jailbreaking, tool misuse, and sensitive data exfiltration attacks. AI Guard can also detect sensitive data such as PII and secrets in LLM conversations. This page describes how to set it up so you can keep your data secure against these AI-based threats.

For an overview on AI Guard, see [AI Guard](https://docs.datadoghq.com/security/ai_guard.md).

## Setup{% #setup %}

To set up AI Guard, you need to create API keys, install an SDK, configure retention filters, and set AI Guard policies including blocking, evaluation sensitivity, and sensitive data scanning.

For full setup instructions, see [Set Up AI Guard](https://docs.datadoghq.com/security/ai_guard/setup.md).

## View AI Guard data in Datadog{% #in-datadog %}

After completing the [setup steps](https://docs.datadoghq.com/security/ai_guard/setup.md) and using an [SDK](https://docs.datadoghq.com/security/ai_guard/setup/sdk.md) to instrument your code, you can view your data in Datadog on the [AI Guard page](https://app.datadoghq.com/security/ai-guard/).

{% alert level="info" %}
You can't see data in Datadog for evaluations performed directly using the REST API.
{% /alert %}

## Security signals{% #security-signals %}

AI Guard generates security signals when it detects threats such as prompt injection, jailbreaking, or tool misuse. You can create custom detection rules, set thresholds for notifications, and investigate signals alongside other application security threats.

For more information, see [AI Guard Security Signals](https://docs.datadoghq.com/security/ai_guard/signals.md).

## Set up Datadog Monitors for alerting{% #set-up-datadog-monitors %}

To create monitors for alerting at certain thresholds, you can use [Datadog Monitors](https://docs.datadoghq.com/monitors.md). You can monitor AI Guard evaluations with either APM traces or with metrics. For both types of monitor, you should set your alert conditions, name for the alert, and define notifications; Datadog recommends using Slack.

### APM monitor{% #apm-monitor %}

Follow the instructions to create a new [APM monitor](https://docs.datadoghq.com/monitors/types/apm.md?tab=traceanalytics), with its scope set to **Trace Analytics**.

- To monitor evaluation traffic, use the query `@ai_guard.action: (DENY OR ABORT)`.
- To monitor blocked traffic, use the query `@ai_guard.blocked:true`.

### Metric monitor{% #metric-monitor %}

Follow the instructions to create a new [metric monitor](https://docs.datadoghq.com/monitors/types/metric.md).

- To monitor evaluation traffic, use the metric `datadog.ai_guard.evaluations` with the tags `action:deny OR action:abort`.
- To monitor blocked traffic, use the metric `datadog.ai_guard.evaluations` with the tag `blocking_enabled:true`.

## Evaluate conversations in AI Guard Playground{% #playground %}

The [AI Guard Playground](https://app.datadoghq.com/security/ai-guard/playground) lets you test AI Guard evaluations directly from the Datadog UI, without writing any code. Submit a conversation, including user input, assistant output, and tool calls, and see the evaluation result (action and reason) in real time.

Use the Playground to:

- Experiment with different prompt patterns and see how AI Guard responds.
- Verify that AI Guard correctly detects prompt injection, jailbreaking, or unsafe tool calls.
- Tweak the evaluation sensitivity threshold and see how it affects detection results. You can then adjust the threshold in AI Guard's [evaluation sensitivity](https://docs.datadoghq.com/security/ai_guard/setup.md#evaluation-sensitivity) settings.
- Test sensitive data scanning on your conversations.
- Share evaluation results with your team during development.

## Further reading{% #further-reading %}

- [AI Guard](https://docs.datadoghq.com/security/ai_guard.md)
- [Set Up AI Guard](https://docs.datadoghq.com/security/ai_guard/setup.md)
- [AI Guard Security Signals](https://docs.datadoghq.com/security/ai_guard/signals.md)
- [Protect agentic AI applications with Datadog AI Guard](https://www.datadoghq.com/blog/ai-guard)
- [LLM guardrails: Best practices for deploying LLM apps securely](https://www.datadoghq.com/blog/llm-guardrails-best-practices/)
