---
title: Manual Integrations
description: Datadog, the leading service for cloud-scale monitoring.
breadcrumbs: Docs > Datadog Security > AI Guard > Set Up AI Guard > Manual Integrations
---

# Manual Integrations

{% callout %}
# Important note for users on the following Datadog sites: app.ddog-gov.com

{% alert level="danger" %}
AI Guard isn't available in the app.ddog-gov.com site.
{% /alert %}
{% /callout %}

Manual integrations require additional configuration to enable AI Guard protection. Follow the instructions for each framework to set up AI Guard evaluations.

## Supported frameworks and libraries{% #supported-frameworks-and-libraries %}

### Python{% #python %}

| Framework      | Supported Versions | SDK Version |
| -------------- | ------------------ | ----------- |
| Amazon Strands | \>= 1.29.0         | \>= 4.7.0   |
| LiteLLM Proxy  | \>= 1.78.5         | \>= 4.8.0   |

## Set up the Datadog Agent

SDKs use the [Datadog Agent](https://docs.datadoghq.com/agent.md?tab=Host-based) to send AI Guard data to Datadog. The Agent must be running and accessible to your application.

If you don't use the Datadog Agent, the AI Guard evaluator API still works, but you can't see AI Guard traces in Datadog.

## Required environment variables

Set the following environment variables in your application:

| Variable              | Value                    |
| --------------------- | ------------------------ |
| `DD_AI_GUARD_ENABLED` | `true`                   |
| `DD_API_KEY`          | `<YOUR_API_KEY>`         |
| `DD_APP_KEY`          | `<YOUR_APPLICATION_KEY>` |
| `DD_ENV`              | `<YOUR_ENVIRONMENT>`     |
| `DD_SERVICE`          | `<YOUR_SERVICE>`         |
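
For example, in a shell environment (the `DD_ENV` and `DD_SERVICE` values below are illustrative; substitute your own keys and names):

```shell
# Enable AI Guard and identify this service to Datadog.
# Replace the placeholder values with your own credentials.
export DD_AI_GUARD_ENABLED=true
export DD_API_KEY="<YOUR_API_KEY>"
export DD_APP_KEY="<YOUR_APPLICATION_KEY>"
export DD_ENV="production"
export DD_SERVICE="my-llm-service"
```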

## Integrations{% #integrations %}

### Amazon Strands{% #amazon-strands %}

#### Python{% #python-1 %}

The Amazon Strands integration enables AI Guard evaluations for applications built with the [Amazon Strands Agents SDK](https://github.com/strands-agents/sdk-python).

##### Setup{% #setup %}

Install dd-trace-py v4.7.0 or later:

```shell
pip install "ddtrace>=4.7.0"
```

Next, define the entry point for the integration with a plugin or hook provider:

- Plugin (recommended):

```python
from strands import Agent

from ddtrace.appsec.ai_guard import AIGuardStrandsPlugin

# `model` is your configured Strands model provider.
agent = Agent(
    model=model,
    plugins=[AIGuardStrandsPlugin()],
)
```

- HookProvider (legacy):

```python
from strands import Agent

from ddtrace.appsec.ai_guard import AIGuardStrandsHookProvider

# `model` is your configured Strands model provider.
agent = Agent(
    model=model,
    hooks=[AIGuardStrandsHookProvider()],
)
```

### LiteLLM Proxy{% #litellm-proxy %}

#### Python{% #python-2 %}

The LiteLLM Proxy integration enables AI Guard evaluations for applications using the [LiteLLM Proxy](https://github.com/BerriAI/litellm).

##### Setup{% #setup-1 %}

Install dd-trace-py v4.8.0 or later:

```shell
pip install "ddtrace>=4.8.0"
```

Create a Python module next to your configuration file (for example, `guardrails.py`) that imports Datadog's LiteLLM guardrail:

```python
from ddtrace.appsec.ai_guard.integrations.litellm import DatadogAIGuardGuardrail

__all__ = ["DatadogAIGuardGuardrail"]
```

Add the imported guardrail to your configuration file:

```yaml
guardrails:
  - guardrail_name: datadog_ai_guard
    litellm_params:
      guardrail: guardrails.DatadogAIGuardGuardrail
      mode: [pre_call, post_call]
      on_input: true
      on_output: true
      block: true
```

The guardrail supports all three modes: `pre_call`, `post_call`, and `during_call`.

By default, the guardrail follows the blocking configuration set in the AI Guard service settings. To disable blocking, set the `block` parameter to `false` (equivalent to the `block` option in the [SDK](https://docs.datadoghq.com/security/ai_guard/setup/sdk.md) and [REST API](https://docs.datadoghq.com/security/ai_guard/setup/http_api.md)).
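
For example, a monitor-only configuration might look like the following sketch, which assumes the same `guardrails.py` module as above and keeps evaluations on input and output without blocking:

```yaml
guardrails:
  - guardrail_name: datadog_ai_guard
    litellm_params:
      guardrail: guardrails.DatadogAIGuardGuardrail
      mode: [pre_call, post_call]
      on_input: true
      on_output: true
      block: false # evaluate and report, but never block requests
```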

## Further reading{% #further-reading %}

- [Automatic Integrations](https://docs.datadoghq.com/security/ai_guard/setup/automatic_integrations.md)
- [SDK](https://docs.datadoghq.com/security/ai_guard/setup/sdk.md)
