---
title: aws_personalize_dataset_import_job
description: Reference for the aws_personalize_dataset_import_job resource in the Datadog Resource Catalog.
---

# aws_personalize_dataset_import_job{% #aws_personalize_dataset_import_job %}

## `account_id`{% #account_id %}

**Type**: `STRING`

## `creation_date_time`{% #creation_date_time %}

**Type**: `TIMESTAMP`

**Provider name**: `creationDateTime`

**Description**: The creation date and time (in Unix time) of the dataset import job.

## `data_source`{% #data_source %}

**Type**: `STRUCT`

**Provider name**: `dataSource`

**Description**: The Amazon S3 bucket that contains the training data to import.

- `data_location`

  **Type**: `STRING`

  **Provider name**: `dataLocation`

  **Description**: For dataset import jobs, the path to the Amazon S3 bucket where the data that you want to upload to your dataset is stored. For data deletion jobs, the path to the Amazon S3 bucket that stores the list of records to delete. For example: `s3://bucket-name/folder-name/fileName.csv`. If your CSV files are in a folder in your Amazon S3 bucket and you want your import job or data deletion job to consider multiple files, you can specify the path to the folder. With a data deletion job, Amazon Personalize uses all files in the folder and any subfolders. Use the following syntax with a `/` after the folder name: `s3://bucket-name/folder-name/`
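
To make the two path forms concrete, here is a minimal sketch of creating a dataset import job with boto3's `create_dataset_import_job`. The job name, ARNs, and bucket name are placeholder values for illustration, not values taken from this page.

```python
import boto3

personalize = boto3.client("personalize")

response = personalize.create_dataset_import_job(
    jobName="interactions-import",  # placeholder job name
    datasetArn="arn:aws:personalize:us-east-1:123456789012:dataset/example-group/INTERACTIONS",
    dataSource={
        # Point at a single CSV file...
        "dataLocation": "s3://bucket-name/folder-name/fileName.csv",
        # ...or end the path with a slash to import every file in the folder:
        # "dataLocation": "s3://bucket-name/folder-name/",
    },
    roleArn="arn:aws:iam::123456789012:role/PersonalizeS3Role",  # placeholder IAM role
    importMode="FULL",  # FULL replaces existing data; INCREMENTAL appends new records
)
print(response["datasetImportJobArn"])
```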

## `dataset_arn`{% #dataset_arn %}

**Type**: `STRING`

**Provider name**: `datasetArn`

**Description**: The Amazon Resource Name (ARN) of the dataset that receives the imported data.

## `dataset_import_job_arn`{% #dataset_import_job_arn %}

**Type**: `STRING`

**Provider name**: `datasetImportJobArn`

**Description**: The ARN of the dataset import job.

## `failure_reason`{% #failure_reason %}

**Type**: `STRING`

**Provider name**: `failureReason`

**Description**: If a dataset import job fails, provides the reason why.

## `import_mode`{% #import_mode %}

**Type**: `STRING`

**Provider name**: `importMode`

**Description**: The import mode used by the dataset import job to import new records.

## `job_name`{% #job_name %}

**Type**: `STRING`

**Provider name**: `jobName`

**Description**: The name of the import job.

## `last_updated_date_time`{% #last_updated_date_time %}

**Type**: `TIMESTAMP`

**Provider name**: `lastUpdatedDateTime`

**Description**: The date and time (in Unix time) the dataset was last updated.

## `publish_attribution_metrics_to_s3`{% #publish_attribution_metrics_to_s3 %}

**Type**: `BOOLEAN`

**Provider name**: `publishAttributionMetricsToS3`

**Description**: Whether the job publishes metrics to Amazon S3 for a metric attribution.

## `role_arn`{% #role_arn %}

**Type**: `STRING`

**Provider name**: `roleArn`

**Description**: The ARN of the IAM role that has permissions to read from the Amazon S3 data source.

## `status`{% #status %}

**Type**: `STRING`

**Provider name**: `status`

**Description**: The status of the dataset import job. A dataset import job can be in one of the following states:

- CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
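
Outside of Datadog, this status can be polled with boto3's `describe_dataset_import_job`. The sketch below assumes a placeholder job ARN and simple fixed-interval polling.

```python
import time

import boto3

personalize = boto3.client("personalize")

# Placeholder ARN for illustration only.
job_arn = (
    "arn:aws:personalize:us-east-1:123456789012:"
    "dataset-import-job/interactions-import"
)

# Poll until the job leaves the CREATE PENDING / CREATE IN_PROGRESS states.
while True:
    job = personalize.describe_dataset_import_job(
        datasetImportJobArn=job_arn
    )["datasetImportJob"]
    status = job["status"]
    print(f"status: {status}")
    if status == "ACTIVE":
        break
    if status == "CREATE FAILED":
        # failure_reason explains why the import failed.
        raise RuntimeError(job.get("failureReason", "unknown failure"))
    time.sleep(60)
```
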
## `tags`{% #tags %}

**Type**: `UNORDERED_LIST_STRING`
