---
title: aws_personalize_batch_inference_job
description: Reference for the `aws_personalize_batch_inference_job` resource in the Datadog Resource Catalog.
breadcrumbs: Docs > Infrastructure > Datadog Resource Catalog
---

# aws_personalize_batch_inference_job{% #aws_personalize_batch_inference_job %}

## `account_id`{% #account_id %}

**Type**: `STRING`

## `batch_inference_job_arn`{% #batch_inference_job_arn %}

**Type**: `STRING`
**Provider name**: `batchInferenceJobArn`
**Description**: The Amazon Resource Name (ARN) of the batch inference job.

## `batch_inference_job_config`{% #batch_inference_job_config %}

**Type**: `STRUCT`
**Provider name**: `batchInferenceJobConfig`
**Description**: A string to string map of the configuration details of a batch inference job.

- `item_exploration_config`
  **Type**: `MAP_STRING_STRING`
  **Provider name**: `itemExplorationConfig`
  **Description**: A string to string map specifying the exploration configuration hyperparameters, including `explorationWeight` and `explorationItemAgeCutOff`, that you want to use to configure the amount of item exploration Amazon Personalize uses when recommending items. See [User-Personalization](https://docs.aws.amazon.com/personalize/latest/dg/native-recipe-new-item-USER_PERSONALIZATION.html).
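Because `itemExplorationConfig` is a string-to-string map, both hyperparameters are passed as strings even though they represent numbers. A minimal sketch with illustrative (not prescriptive) values:

```python
# Hypothetical itemExplorationConfig map. Both keys take string values,
# since the field is declared MAP_STRING_STRING; the values below are
# illustrative, not defaults.
item_exploration_config = {
    "explorationWeight": "0.3",        # 0-1; higher means more exploration
    "explorationItemAgeCutOff": "30",  # maximum item age (days) to explore
}
```
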

## `batch_inference_job_mode`{% #batch_inference_job_mode %}

**Type**: `STRING`
**Provider name**: `batchInferenceJobMode`
**Description**: The job's mode: `BATCH_INFERENCE` or `THEME_GENERATION`.

## `creation_date_time`{% #creation_date_time %}

**Type**: `TIMESTAMP`
**Provider name**: `creationDateTime`
**Description**: The time at which the batch inference job was created.

## `failure_reason`{% #failure_reason %}

**Type**: `STRING`
**Provider name**: `failureReason`
**Description**: If the batch inference job failed, the reason for the failure.

## `filter_arn`{% #filter_arn %}

**Type**: `STRING`
**Provider name**: `filterArn`
**Description**: The ARN of the filter used on the batch inference job.

## `job_input`{% #job_input %}

**Type**: `STRUCT`
**Provider name**: `jobInput`
**Description**: The Amazon S3 path that leads to the input data used to generate the batch inference job.

- `s3_data_source`
  **Type**: `STRUCT`
  **Provider name**: `s3DataSource`
  **Description**: The URI of the Amazon S3 location that contains your input data. The Amazon S3 bucket must be in the same region as the API endpoint you are calling.
  - `kms_key_arn`
    **Type**: `STRING`
    **Provider name**: `kmsKeyArn`
    **Description**: The Amazon Resource Name (ARN) of the Key Management Service (KMS) key that Amazon Personalize uses to encrypt or decrypt the input and output files.
  - `path`
    **Type**: `STRING`
    **Provider name**: `path`
    **Description**: The file path of the Amazon S3 bucket.

## `job_name`{% #job_name %}

**Type**: `STRING`
**Provider name**: `jobName`
**Description**: The name of the batch inference job.

## `job_output`{% #job_output %}

**Type**: `STRUCT`
**Provider name**: `jobOutput`
**Description**: The Amazon S3 bucket that contains the output data generated by the batch inference job.

- `s3_data_destination`
  **Type**: `STRUCT`
  **Provider name**: `s3DataDestination`
  **Description**: Information on the Amazon S3 bucket in which the batch inference job's output is stored.
  - `kms_key_arn`
    **Type**: `STRING`
    **Provider name**: `kmsKeyArn`
    **Description**: The Amazon Resource Name (ARN) of the Key Management Service (KMS) key that Amazon Personalize uses to encrypt or decrypt the input and output files.
  - `path`
    **Type**: `STRING`
    **Provider name**: `path`
    **Description**: The file path of the Amazon S3 bucket.

## `last_updated_date_time`{% #last_updated_date_time %}

**Type**: `TIMESTAMP`
**Provider name**: `lastUpdatedDateTime`
**Description**: The time at which the batch inference job was last updated.

## `num_results`{% #num_results %}

**Type**: `INT32`
**Provider name**: `numResults`
**Description**: The number of recommendations generated by the batch inference job. This number includes the error messages generated for failed input records.

## `role_arn`{% #role_arn %}

**Type**: `STRING`
**Provider name**: `roleArn`
**Description**: The ARN of the AWS Identity and Access Management (IAM) role that requested the batch inference job.

## `solution_version_arn`{% #solution_version_arn %}

**Type**: `STRING`
**Provider name**: `solutionVersionArn`
**Description**: The Amazon Resource Name (ARN) of the solution version from which the batch inference job was created.

## `status`{% #status %}

**Type**: `STRING`
**Provider name**: `status`
**Description**: The status of the batch inference job. The status is one of the following values:

- PENDING
- IN PROGRESS
- ACTIVE
- CREATE FAILED

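Of the statuses above, `ACTIVE` and `CREATE FAILED` are terminal, while `PENDING` and `IN PROGRESS` indicate the job is still running. A minimal helper sketch for code that polls this status (for example, via the Personalize `DescribeBatchInferenceJob` API); the function name is an assumption, not part of any SDK:

```python
# Terminal states for a batch inference job, per the status list above.
TERMINAL_STATES = {"ACTIVE", "CREATE FAILED"}

def is_terminal(status: str) -> bool:
    """Return True once the job has finished, successfully or not."""
    return status in TERMINAL_STATES
```
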
## `tags`{% #tags %}

**Type**: `UNORDERED_LIST_STRING`

## `theme_generation_config`{% #theme_generation_config %}

**Type**: `STRUCT`
**Provider name**: `themeGenerationConfig`
**Description**: The job's theme generation settings.

- `fields_for_theme_generation`
  **Type**: `STRUCT`
  **Provider name**: `fieldsForThemeGeneration`
  **Description**: Fields used to generate descriptive themes for a batch inference job.
  - `item_name`
    **Type**: `STRING`
    **Provider name**: `itemName`
    **Description**: The name of the Items dataset column that stores the name of each item in the dataset.
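Taken together, the fields on this page describe the request that creates such a job. A hedged sketch, assembling the provider-named fields into one request dict; every ARN, path, name, and column is a placeholder, and with an AWS SDK such as boto3 this dict would be passed to the Personalize `CreateBatchInferenceJob` operation:

```python
# Sketch of a CreateBatchInferenceJob request built from the fields
# documented above. All ARNs, bucket paths, names, and the dataset
# column are placeholders, not real resources.
request = {
    "jobName": "example-batch-job",
    "solutionVersionArn": "arn:aws:personalize:us-east-1:123456789012:solution/example/abc123",
    "roleArn": "arn:aws:iam::123456789012:role/ExamplePersonalizeRole",
    "jobInput": {"s3DataSource": {"path": "s3://example-input-bucket/batch/users.json"}},
    "jobOutput": {"s3DataDestination": {"path": "s3://example-output-bucket/batch/results/"}},
    "numResults": 25,
    "batchInferenceJobMode": "THEME_GENERATION",
    "themeGenerationConfig": {
        "fieldsForThemeGeneration": {"itemName": "ITEM_NAME"}
    },
}
```
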
