---
title: aws_bedrock_guardrail
description: Reference for the aws_bedrock_guardrail resource in the Datadog Resource Catalog.
---

# aws_bedrock_guardrail{% #aws_bedrock_guardrail %}

## `account_id`{% #account_id %}

**Type**: `STRING`

## `blocked_input_messaging`{% #blocked_input_messaging %}

**Type**: `STRING`
**Provider name**: `blockedInputMessaging`
**Description**: The message that the guardrail returns when it blocks a prompt.

## `blocked_outputs_messaging`{% #blocked_outputs_messaging %}

**Type**: `STRING`
**Provider name**: `blockedOutputsMessaging`
**Description**: The message that the guardrail returns when it blocks a model response.

## `content_policy`{% #content_policy %}

**Type**: `STRUCT`
**Provider name**: `contentPolicy`
**Description**: The content policy that was configured for the guardrail.

- `filters`
  **Type**: `UNORDERED_LIST_STRUCT`
  **Provider name**: `filters`
  **Description**: Contains the type of the content filter and how strongly it should apply to prompts and model responses.
  - `input_action`
    **Type**: `STRING`
    **Provider name**: `inputAction`
    **Description**: The action to take when harmful content is detected in the input. Supported values include:
    - `BLOCK` – Block the content and replace it with blocked messaging.
    - `NONE` – Take no action, but return detection information in the trace response.
  - `input_enabled`
    **Type**: `BOOLEAN`
    **Provider name**: `inputEnabled`
    **Description**: Indicates whether guardrail evaluation is enabled on the input. When disabled, you aren't charged for the evaluation, and the evaluation doesn't appear in the response.
  - `input_modalities`
    **Type**: `UNORDERED_LIST_STRING`
    **Provider name**: `inputModalities`
    **Description**: The input modalities selected for the guardrail content filter.
  - `input_strength`
    **Type**: `STRING`
    **Provider name**: `inputStrength`
    **Description**: The strength of the content filter to apply to prompts. As you increase the filter strength, the likelihood of filtering harmful content increases, and the probability of harmful content appearing in your application decreases.
  - `output_action`
    **Type**: `STRING`
    **Provider name**: `outputAction`
    **Description**: The action to take when harmful content is detected in the output. Supported values include:
    - `BLOCK` – Block the content and replace it with blocked messaging.
    - `NONE` – Take no action, but return detection information in the trace response.
  - `output_enabled`
    **Type**: `BOOLEAN`
    **Provider name**: `outputEnabled`
    **Description**: Indicates whether guardrail evaluation is enabled on the output. When disabled, you aren't charged for the evaluation, and the evaluation doesn't appear in the response.
  - `output_modalities`
    **Type**: `UNORDERED_LIST_STRING`
    **Provider name**: `outputModalities`
    **Description**: The output modalities selected for the guardrail content filter.
  - `output_strength`
    **Type**: `STRING`
    **Provider name**: `outputStrength`
    **Description**: The strength of the content filter to apply to model responses. As you increase the filter strength, the likelihood of filtering harmful content increases, and the probability of harmful content appearing in your application decreases.
  - `type`
    **Type**: `STRING`
    **Provider name**: `type`
    **Description**: The harmful category that the content filter is applied to.
- `tier`
  **Type**: `STRUCT`
  **Provider name**: `tier`
  **Description**: The tier that your guardrail uses for content filters.
  - `tier_name`
    **Type**: `STRING`
    **Provider name**: `tierName`
    **Description**: The tier that your guardrail uses for content filters. Valid values include:
    - `CLASSIC` tier – Provides established guardrails functionality with support for English, French, and Spanish.
    - `STANDARD` tier – Provides a more robust solution than the `CLASSIC` tier, with more comprehensive language support. This tier requires that your guardrail use [cross-Region inference](https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails-cross-region.html).
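
Put together, a configured content policy resolves to a structure shaped like the following sketch, keyed by the provider field names documented above. The specific category, strength, and action values are illustrative assumptions, not a definitive payload:

```python
# Illustrative shape of a guardrail content policy, keyed by the
# provider names documented above. Category, strength, and action
# values are example assumptions.
content_policy = {
    "filters": [
        {
            "type": "HATE",              # harmful category the filter targets
            "inputStrength": "HIGH",     # stricter filtering on prompts
            "outputStrength": "MEDIUM",  # filtering on model responses
            "inputAction": "BLOCK",      # blocked prompts get the blocked messaging
            "outputAction": "NONE",      # detect only; report in the trace response
            "inputEnabled": True,
            "outputEnabled": True,
            "inputModalities": ["TEXT"],
            "outputModalities": ["TEXT"],
        }
    ],
    "tier": {"tierName": "STANDARD"},    # STANDARD requires cross-Region inference
}
```

Note the asymmetry this shape allows: the input side can block and substitute messaging while the output side only annotates the trace response.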

## `contextual_grounding_policy`{% #contextual_grounding_policy %}

**Type**: `STRUCT`
**Provider name**: `contextualGroundingPolicy`
**Description**: The contextual grounding policy used in the guardrail.

- `filters`
  **Type**: `UNORDERED_LIST_STRUCT`
  **Provider name**: `filters`
  **Description**: The filter details for the guardrail's contextual grounding policy.
  - `action`
    **Type**: `STRING`
    **Provider name**: `action`
    **Description**: The action to take when content fails the contextual grounding evaluation. Supported values include:
    - `BLOCK` – Block the content and replace it with blocked messaging.
    - `NONE` – Take no action, but return detection information in the trace response.
  - `enabled`
    **Type**: `BOOLEAN`
    **Provider name**: `enabled`
    **Description**: Indicates whether contextual grounding is enabled for evaluation. When disabled, you aren't charged for the evaluation, and the evaluation doesn't appear in the response.
  - `threshold`
    **Type**: `DOUBLE`
    **Provider name**: `threshold`
    **Description**: The threshold details for the guardrail's contextual grounding filter.
  - `type`
    **Type**: `STRING`
    **Provider name**: `type`
    **Description**: The filter type details for the guardrail's contextual grounding filter.
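
As a concrete sketch, a contextual grounding policy with two filters might take the following shape. The field names come from the provider names above; the filter type strings and threshold values are illustrative assumptions:

```python
# Illustrative contextual grounding policy. Each filter acts when the
# evaluation score falls below its threshold. The type strings and
# threshold values here are example assumptions.
contextual_grounding_policy = {
    "filters": [
        {"type": "GROUNDING", "threshold": 0.75, "action": "BLOCK", "enabled": True},
        {"type": "RELEVANCE", "threshold": 0.50, "action": "NONE", "enabled": True},
    ]
}
```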

## `created_at`{% #created_at %}

**Type**: `TIMESTAMP`
**Provider name**: `createdAt`
**Description**: The date and time at which the guardrail was created.

## `cross_region_details`{% #cross_region_details %}

**Type**: `STRUCT`
**Provider name**: `crossRegionDetails`
**Description**: Details about the system-defined guardrail profile that you're using with your guardrail, including the guardrail profile ID and Amazon Resource Name (ARN).

- `guardrail_profile_arn`
  **Type**: `STRING`
  **Provider name**: `guardrailProfileArn`
  **Description**: The Amazon Resource Name (ARN) of the guardrail profile that you're using with your guardrail.
- `guardrail_profile_id`
  **Type**: `STRING`
  **Provider name**: `guardrailProfileId`
  **Description**: The ID of the guardrail profile that your guardrail is using. Profile availability depends on your current Amazon Web Services Region. For more information, see the [Amazon Bedrock User Guide](https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails-cross-region-support.html).

## `description`{% #description %}

**Type**: `STRING`
**Provider name**: `description`
**Description**: The description of the guardrail.

## `failure_recommendations`{% #failure_recommendations %}

**Type**: `UNORDERED_LIST_STRING`
**Provider name**: `failureRecommendations`
**Description**: Appears if the `status` of the guardrail is `FAILED`. A list of recommendations to carry out before retrying the request.

## `guardrail_arn`{% #guardrail_arn %}

**Type**: `STRING`
**Provider name**: `guardrailArn`
**Description**: The ARN of the guardrail.

## `guardrail_id`{% #guardrail_id %}

**Type**: `STRING`
**Provider name**: `guardrailId`
**Description**: The unique identifier of the guardrail.

## `kms_key_arn`{% #kms_key_arn %}

**Type**: `STRING`
**Provider name**: `kmsKeyArn`
**Description**: The ARN of the KMS key that encrypts the guardrail.

## `name`{% #name %}

**Type**: `STRING`
**Provider name**: `name`
**Description**: The name of the guardrail.

## `sensitive_information_policy`{% #sensitive_information_policy %}

**Type**: `STRUCT`
**Provider name**: `sensitiveInformationPolicy`
**Description**: The sensitive information policy that was configured for the guardrail.

- `pii_entities`
  **Type**: `UNORDERED_LIST_STRUCT`
  **Provider name**: `piiEntities`
  **Description**: The list of PII entities configured for the guardrail.
  - `action`
    **Type**: `STRING`
    **Provider name**: `action`
    **Description**: The configured guardrail action when a PII entity is detected.
  - `input_action`
    **Type**: `STRING`
    **Provider name**: `inputAction`
    **Description**: The action to take when harmful content is detected in the input. Supported values include:
    - `BLOCK` – Block the content and replace it with blocked messaging.
    - `ANONYMIZE` – Mask the content and replace it with identifier tags.
    - `NONE` – Take no action, but return detection information in the trace response.
  - `input_enabled`
    **Type**: `BOOLEAN`
    **Provider name**: `inputEnabled`
    **Description**: Indicates whether guardrail evaluation is enabled on the input. When disabled, you aren't charged for the evaluation, and the evaluation doesn't appear in the response.
  - `output_action`
    **Type**: `STRING`
    **Provider name**: `outputAction`
    **Description**: The action to take when harmful content is detected in the output. Supported values include:
    - `BLOCK` – Block the content and replace it with blocked messaging.
    - `ANONYMIZE` – Mask the content and replace it with identifier tags.
    - `NONE` – Take no action, but return detection information in the trace response.
  - `output_enabled`
    **Type**: `BOOLEAN`
    **Provider name**: `outputEnabled`
    **Description**: Indicates whether guardrail evaluation is enabled on the output. When disabled, you aren't charged for the evaluation, and the evaluation doesn't appear in the response.
  - `type`
    **Type**: `STRING`
    **Provider name**: `type`
    **Description**: The type of PII entity. For example, Social Security Number.
- `regexes`
  **Type**: `UNORDERED_LIST_STRUCT`
  **Provider name**: `regexes`
  **Description**: The list of regular expressions configured for the guardrail.
  - `action`
    **Type**: `STRING`
    **Provider name**: `action`
    **Description**: The action taken when a match to the regular expression is detected.
  - `description`
    **Type**: `STRING`
    **Provider name**: `description`
    **Description**: The description of the regular expression for the guardrail.
  - `input_action`
    **Type**: `STRING`
    **Provider name**: `inputAction`
    **Description**: The action to take when harmful content is detected in the input. Supported values include:
    - `BLOCK` – Block the content and replace it with blocked messaging.
    - `NONE` – Take no action, but return detection information in the trace response.
  - `input_enabled`
    **Type**: `BOOLEAN`
    **Provider name**: `inputEnabled`
    **Description**: Indicates whether guardrail evaluation is enabled on the input. When disabled, you aren't charged for the evaluation, and the evaluation doesn't appear in the response.
  - `name`
    **Type**: `STRING`
    **Provider name**: `name`
    **Description**: The name of the regular expression for the guardrail.
  - `output_action`
    **Type**: `STRING`
    **Provider name**: `outputAction`
    **Description**: The action to take when harmful content is detected in the output. Supported values include:
    - `BLOCK` – Block the content and replace it with blocked messaging.
    - `NONE` – Take no action, but return detection information in the trace response.
  - `output_enabled`
    **Type**: `BOOLEAN`
    **Provider name**: `outputEnabled`
    **Description**: Indicates whether guardrail evaluation is enabled on the output. When disabled, you aren't charged for the evaluation, and the evaluation doesn't appear in the response.
  - `pattern`
    **Type**: `STRING`
    **Provider name**: `pattern`
    **Description**: The pattern of the regular expression configured for the guardrail.
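
A sensitive information policy combines built-in PII entities with custom regexes. The sketch below uses the provider field names above; the entity type, regex name, and pattern are hypothetical examples introduced for illustration:

```python
# Illustrative sensitive information policy: one built-in PII entity
# plus one custom regex. The entity type, regex name, and pattern are
# hypothetical examples.
sensitive_information_policy = {
    "piiEntities": [
        {
            "type": "US_SOCIAL_SECURITY_NUMBER",
            "action": "ANONYMIZE",       # mask matches with identifier tags
            "inputAction": "ANONYMIZE",
            "outputAction": "ANONYMIZE",
            "inputEnabled": True,
            "outputEnabled": True,
        }
    ],
    "regexes": [
        {
            "name": "internal-ticket-id",                  # hypothetical matcher
            "description": "Matches internal ticket identifiers.",
            "pattern": r"TICKET-\d{6}",
            "action": "BLOCK",
            "inputAction": "BLOCK",
            "outputAction": "BLOCK",
            "inputEnabled": True,
            "outputEnabled": True,
        }
    ],
}
```

The design difference between the two lists: PII entities can be anonymized (masked with tags) while regex matches are typically blocked outright.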

## `status`{% #status %}

**Type**: `STRING`
**Provider name**: `status`
**Description**: The status of the guardrail.

## `status_reasons`{% #status_reasons %}

**Type**: `UNORDERED_LIST_STRING`
**Provider name**: `statusReasons`
**Description**: Appears if the `status` is `FAILED`. A list of reasons why the guardrail failed to be created, updated, versioned, or deleted.

## `tags`{% #tags %}

**Type**: `UNORDERED_LIST_STRING`

## `topic_policy`{% #topic_policy %}

**Type**: `STRUCT`
**Provider name**: `topicPolicy`
**Description**: The topic policy that was configured for the guardrail.

- `tier`
  **Type**: `STRUCT`
  **Provider name**: `tier`
  **Description**: The tier that your guardrail uses for denied topic filters.
  - `tier_name`
    **Type**: `STRING`
    **Provider name**: `tierName`
    **Description**: The tier that your guardrail uses for denied topic filters. Valid values include:
    - `CLASSIC` tier – Provides established guardrails functionality with support for English, French, and Spanish.
    - `STANDARD` tier – Provides a more robust solution than the `CLASSIC` tier, with more comprehensive language support. This tier requires that your guardrail use [cross-Region inference](https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails-cross-region.html).
- `topics`
  **Type**: `UNORDERED_LIST_STRUCT`
  **Provider name**: `topics`
  **Description**: A list of policies related to topics that the guardrail should deny.
  - `definition`
    **Type**: `STRING`
    **Provider name**: `definition`
    **Description**: A definition of the topic to deny.
  - `examples`
    **Type**: `UNORDERED_LIST_STRING`
    **Provider name**: `examples`
    **Description**: A list of prompts, each of which is an example of a prompt that can be categorized as belonging to the topic.
  - `input_action`
    **Type**: `STRING`
    **Provider name**: `inputAction`
    **Description**: The action to take when harmful content is detected in the input. Supported values include:
    - `BLOCK` – Block the content and replace it with blocked messaging.
    - `NONE` – Take no action, but return detection information in the trace response.
  - `input_enabled`
    **Type**: `BOOLEAN`
    **Provider name**: `inputEnabled`
    **Description**: Indicates whether guardrail evaluation is enabled on the input. When disabled, you aren't charged for the evaluation, and the evaluation doesn't appear in the response.
  - `name`
    **Type**: `STRING`
    **Provider name**: `name`
    **Description**: The name of the topic to deny.
  - `output_action`
    **Type**: `STRING`
    **Provider name**: `outputAction`
    **Description**: The action to take when harmful content is detected in the output. Supported values include:
    - `BLOCK` – Block the content and replace it with blocked messaging.
    - `NONE` – Take no action, but return detection information in the trace response.
  - `output_enabled`
    **Type**: `BOOLEAN`
    **Provider name**: `outputEnabled`
    **Description**: Indicates whether guardrail evaluation is enabled on the output. When disabled, you aren't charged for the evaluation, and the evaluation doesn't appear in the response.
  - `type`
    **Type**: `STRING`
    **Provider name**: `type`
    **Description**: Specifies that the topic should be denied.
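
A denied topic bundles a name, a natural-language definition, and example prompts the classifier can learn from. The sketch below uses the provider field names above; the topic name, definition, and example prompt are assumptions for demonstration:

```python
# Illustrative topic policy denying one topic. The topic name,
# definition, and example prompt are assumptions for demonstration.
topic_policy = {
    "tier": {"tierName": "CLASSIC"},
    "topics": [
        {
            "name": "financial-advice",
            "definition": "Requests for personalized investment recommendations.",
            "examples": ["Which stocks should I buy this week?"],
            "type": "DENY",              # the documented behavior: deny the topic
            "inputAction": "BLOCK",
            "outputAction": "BLOCK",
            "inputEnabled": True,
            "outputEnabled": True,
        }
    ],
}
```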

## `updated_at`{% #updated_at %}

**Type**: `TIMESTAMP`
**Provider name**: `updatedAt`
**Description**: The date and time at which the guardrail was updated.

## `version`{% #version %}

**Type**: `STRING`
**Provider name**: `version`
**Description**: The version of the guardrail.

## `word_policy`{% #word_policy %}

**Type**: `STRUCT`
**Provider name**: `wordPolicy`
**Description**: The word policy that was configured for the guardrail.

- `managed_word_lists`
  **Type**: `UNORDERED_LIST_STRUCT`
  **Provider name**: `managedWordLists`
  **Description**: A list of managed words configured for the guardrail.
  - `input_action`
    **Type**: `STRING`
    **Provider name**: `inputAction`
    **Description**: The action to take when harmful content is detected in the input. Supported values include:
    - `BLOCK` – Block the content and replace it with blocked messaging.
    - `NONE` – Take no action, but return detection information in the trace response.
  - `input_enabled`
    **Type**: `BOOLEAN`
    **Provider name**: `inputEnabled`
    **Description**: Indicates whether guardrail evaluation is enabled on the input. When disabled, you aren't charged for the evaluation, and the evaluation doesn't appear in the response.
  - `output_action`
    **Type**: `STRING`
    **Provider name**: `outputAction`
    **Description**: The action to take when harmful content is detected in the output. Supported values include:
    - `BLOCK` – Block the content and replace it with blocked messaging.
    - `NONE` – Take no action, but return detection information in the trace response.
  - `output_enabled`
    **Type**: `BOOLEAN`
    **Provider name**: `outputEnabled`
    **Description**: Indicates whether guardrail evaluation is enabled on the output. When disabled, you aren't charged for the evaluation, and the evaluation doesn't appear in the response.
  - `type`
    **Type**: `STRING`
    **Provider name**: `type`
    **Description**: The managed word type that was configured for the guardrail. Currently, the only managed word list offered is for profanity.
- `words`
  **Type**: `UNORDERED_LIST_STRUCT`
  **Provider name**: `words`
  **Description**: A list of words configured for the guardrail.
  - `input_action`
    **Type**: `STRING`
    **Provider name**: `inputAction`
    **Description**: The action to take when harmful content is detected in the input. Supported values include:
    - `BLOCK` – Block the content and replace it with blocked messaging.
    - `NONE` – Take no action, but return detection information in the trace response.
  - `input_enabled`
    **Type**: `BOOLEAN`
    **Provider name**: `inputEnabled`
    **Description**: Indicates whether guardrail evaluation is enabled on the input. When disabled, you aren't charged for the evaluation, and the evaluation doesn't appear in the response.
  - `output_action`
    **Type**: `STRING`
    **Provider name**: `outputAction`
    **Description**: The action to take when harmful content is detected in the output. Supported values include:
    - `BLOCK` – Block the content and replace it with blocked messaging.
    - `NONE` – Take no action, but return detection information in the trace response.
  - `output_enabled`
    **Type**: `BOOLEAN`
    **Provider name**: `outputEnabled`
    **Description**: Indicates whether guardrail evaluation is enabled on the output. When disabled, you aren't charged for the evaluation, and the evaluation doesn't appear in the response.
  - `text`
    **Type**: `STRING`
    **Provider name**: `text`
    **Description**: Text of the word configured for the guardrail to block.
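
A word policy pairs the managed profanity list with any custom words. The sketch below uses the provider field names above; the custom word is a hypothetical example:

```python
# Illustrative word policy: the managed profanity list plus one custom
# blocked word. The custom word is a hypothetical example.
word_policy = {
    "managedWordLists": [
        {
            "type": "PROFANITY",          # the only managed list currently offered
            "inputAction": "BLOCK",
            "outputAction": "BLOCK",
            "inputEnabled": True,
            "outputEnabled": True,
        }
    ],
    "words": [
        {
            "text": "project-codename",   # hypothetical term to block
            "inputAction": "BLOCK",
            "outputAction": "BLOCK",
            "inputEnabled": True,
            "outputEnabled": True,
        }
    ],
}
```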
