---
title: aws_fsx_task
description: Reference for the `aws_fsx_task` resource in the Datadog Resource Catalog.
---

# aws_fsx_task{% #aws_fsx_task %}

## `account_id`{% #account_id %}

**Type**: `STRING`

## `capacity_to_release`{% #capacity_to_release %}

**Type**: `INT64`
**Provider name**: `CapacityToRelease`
**Description**: Specifies the amount of data to release, in GiB, by an Amazon File Cache AUTO_RELEASE_DATA task that automatically releases files from the cache.

## `creation_time`{% #creation_time %}

**Type**: `TIMESTAMP`
**Provider name**: `CreationTime`

## `end_time`{% #end_time %}

**Type**: `TIMESTAMP`
**Provider name**: `EndTime`
**Description**: The time the system completed processing the task, populated after the task is complete.

## `failure_details`{% #failure_details %}

**Type**: `STRUCT`
**Provider name**: `FailureDetails`
**Description**: Failure message describing why the task failed. It is populated only when `Lifecycle` is set to `FAILED`.

- `message`
  **Type**: `STRING`
  **Provider name**: `Message`

## `file_cache_id`{% #file_cache_id %}

**Type**: `STRING`
**Provider name**: `FileCacheId`
**Description**: The system-generated, unique ID of the cache.

## `file_system_id`{% #file_system_id %}

**Type**: `STRING`
**Provider name**: `FileSystemId`
**Description**: The globally unique ID of the file system.

## `lifecycle`{% #lifecycle %}

**Type**: `STRING`
**Provider name**: `Lifecycle`
**Description**: The lifecycle status of the data repository task, as follows:

- `PENDING` - The task has not started.
- `EXECUTING` - The task is in process.
- `FAILED` - The task could not be completed. For example, there may be files the task failed to process. The `DataRepositoryTaskFailureDetails` property provides more information about task failures.
- `SUCCEEDED` - The task has completed successfully.
- `CANCELED` - The task was canceled and it did not complete.
- `CANCELING` - The task is in process of being canceled.

You cannot delete an FSx for Lustre file system if there are data repository tasks for the file system in the `PENDING` or `EXECUTING` states. Retry when the data repository task is finished (with a status of `CANCELED`, `SUCCEEDED`, or `FAILED`). You can use the `DescribeDataRepositoryTask` action to monitor the task status. Contact the FSx team if you need to delete your file system immediately.
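The deletion constraint above can be checked programmatically. The following is a minimal sketch, assuming task records are dicts carrying the `Lifecycle` value described in this section (the helper name and record shape are illustrative, not part of the FSx API):

```python
# States that block deletion of the file system, per the rule above.
BLOCKING_STATES = {"PENDING", "EXECUTING"}

# Terminal states after which deletion can proceed.
TERMINAL_STATES = {"CANCELED", "SUCCEEDED", "FAILED"}

def deletion_blocked(tasks):
    """Return True if any task is still in a state that blocks
    file system deletion; tasks is a list of dicts with a "Lifecycle" key."""
    return any(t["Lifecycle"] in BLOCKING_STATES for t in tasks)
```

You would populate `tasks` from a `DescribeDataRepositoryTask` response and retry deletion once this returns `False`.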


## `paths`{% #paths %}

**Type**: `UNORDERED_LIST_STRING`
**Provider name**: `Paths`
**Description**: An array of paths that specify the data for the data repository task to process. For example, in an EXPORT_TO_REPOSITORY task, the paths specify which data to export to the linked data repository. If `Paths` is not specified, Amazon FSx uses the file system root directory by default.

## `release_configuration`{% #release_configuration %}

**Type**: `STRUCT`
**Provider name**: `ReleaseConfiguration`
**Description**: The configuration that specifies the last accessed time criteria for files that will be released from an Amazon FSx for Lustre file system.

- `duration_since_last_access`
  **Type**: `STRUCT`
  **Provider name**: `DurationSinceLastAccess`
  **Description**: Defines the point in time since an exported file was last accessed, in order for that file to be eligible for release. Only files that were last accessed before this point in time are eligible to be released from the file system.
  - `unit`
    **Type**: `STRING`
    **Provider name**: `Unit`
    **Description**: The unit of time used by the `Value` parameter to determine if a file can be released, based on when it was last accessed. `DAYS` is the only supported value. This is a required parameter.
  - `value`
    **Type**: `INT64`
    **Provider name**: `Value`
    **Description**: An integer that represents the minimum amount of time (in days) since a file was last accessed in the file system. Only exported files with a `MAX(atime, ctime, mtime)` timestamp that is more than this amount of time in the past (relative to the task create time) will be released. The default of `Value` is `0`. This is a required parameter. If an exported file meets the last accessed time criteria, its file or directory path must also be specified in the `Paths` parameter of the operation in order for the file to be released.
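The last-accessed rule can be illustrated with a small helper. This is a minimal sketch, assuming `datetime` timestamps and applying the `MAX(atime, ctime, mtime)` comparison literally (the function name is illustrative, not part of the FSx API):

```python
from datetime import datetime, timedelta

def eligible_for_release(atime, ctime, mtime, task_create_time, value_days):
    """True if the file's most recent access/change/modify timestamp is more
    than value_days before the task create time, per the Value rule above."""
    last_touched = max(atime, ctime, mtime)
    return last_touched < task_create_time - timedelta(days=value_days)
```

With the default `Value` of `0`, any file last touched before the task create time passes this check; remember that the file's path must also match the task's `Paths` for it to actually be released.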

## `report`{% #report %}

**Type**: `STRUCT`
**Provider name**: `Report`

- `enabled`
  **Type**: `BOOLEAN`
  **Provider name**: `Enabled`
  **Description**: Set `Enabled` to `true` to generate a `CompletionReport` when the task completes. If set to `true`, you must also provide a report `Scope`, `Path`, and `Format`. Set `Enabled` to `false` if you do not want a `CompletionReport` generated when the task completes.
- `format`
  **Type**: `STRING`
  **Provider name**: `Format`
  **Description**: Required if `Enabled` is set to `true`. Specifies the format of the `CompletionReport`. `REPORT_CSV_20191124` is the only format currently supported. When `Format` is set to `REPORT_CSV_20191124`, the `CompletionReport` is provided in CSV format and is delivered to `{path}/task-{id}/failures.csv`.
- `path`
  **Type**: `STRING`
  **Provider name**: `Path`
  **Description**: Required if `Enabled` is set to `true`. Specifies the location of the report on the file system's linked S3 data repository: an absolute path that defines where the completion report is stored in the destination location. The `Path` you provide must be located within the file system's `ExportPath`. An example `Path` value is `s3://amzn-s3-demo-bucket/myExportPath/optionalPrefix`. The report provides the following information for each file in the report: `FilePath`, `FileStatus`, and `ErrorCode`.
- `scope`
  **Type**: `STRING`
  **Provider name**: `Scope`
  **Description**: Required if `Enabled` is set to `true`. Specifies the scope of the `CompletionReport`; `FAILED_FILES_ONLY` is the only scope currently supported. When `Scope` is set to `FAILED_FILES_ONLY`, the `CompletionReport` only contains information about files that the data repository task failed to process.
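Given a report `Path` and a task ID, the delivery location follows the `{path}/task-{id}/failures.csv` template above. A minimal sketch (the function name and the sample task ID are illustrative):

```python
def completion_report_key(report_path, task_id):
    """Build the CompletionReport delivery location from the report Path
    and the task ID, per the {path}/task-{id}/failures.csv template."""
    return f"{report_path.rstrip('/')}/task-{task_id}/failures.csv"
```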

## `resource_arn`{% #resource_arn %}

**Type**: `STRING`
**Provider name**: `ResourceARN`

## `start_time`{% #start_time %}

**Type**: `TIMESTAMP`
**Provider name**: `StartTime`
**Description**: The time the system began processing the task.

## `status`{% #status %}

**Type**: `STRUCT`
**Provider name**: `Status`
**Description**: Provides the status of the number of files that the task has processed successfully and failed to process.

- `failed_count`
  **Type**: `INT64`
  **Provider name**: `FailedCount`
  **Description**: A running total of the number of files that the task failed to process.
- `last_updated_time`
  **Type**: `TIMESTAMP`
  **Provider name**: `LastUpdatedTime`
  **Description**: The time at which the task status was last updated.
- `released_capacity`
  **Type**: `INT64`
  **Provider name**: `ReleasedCapacity`
  **Description**: The total amount of data, in GiB, released by an Amazon File Cache AUTO_RELEASE_DATA task that automatically releases files from the cache.
- `succeeded_count`
  **Type**: `INT64`
  **Provider name**: `SucceededCount`
  **Description**: A running total of the number of files that the task has successfully processed.
- `total_count`
  **Type**: `INT64`
  **Provider name**: `TotalCount`
  **Description**: The total number of files that the task will process. While a task is executing, the sum of `SucceededCount` plus `FailedCount` may not equal `TotalCount`. When the task is complete, `TotalCount` equals the sum of `SucceededCount` plus `FailedCount`.
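The relationship between these counters gives a natural progress estimate for a running task. A minimal sketch, assuming a dict shaped like the `Status` fields above (the helper name is illustrative):

```python
def task_progress(status):
    """Fraction of files processed so far from a Status-shaped dict.
    While the task runs, SucceededCount + FailedCount may trail TotalCount,
    so this value can be below 1.0 until completion."""
    total = status["TotalCount"]
    if total == 0:
        return 0.0
    return (status["SucceededCount"] + status["FailedCount"]) / total
```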

## `tags`{% #tags %}

**Type**: `UNORDERED_LIST_STRING`

## `task_id`{% #task_id %}

**Type**: `STRING`
**Provider name**: `TaskId`
**Description**: The system-generated, unique 17-digit ID of the data repository task.

## `type`{% #type %}

**Type**: `STRING`
**Provider name**: `Type`
**Description**: The type of data repository task.

- `EXPORT_TO_REPOSITORY` tasks export from your Amazon FSx for Lustre file system to a linked data repository.
- `IMPORT_METADATA_FROM_REPOSITORY` tasks import metadata changes from a linked S3 bucket to your Amazon FSx for Lustre file system.
- `RELEASE_DATA_FROM_FILESYSTEM` tasks release files in your Amazon FSx for Lustre file system that have been exported to a linked S3 bucket and that meet your specified release criteria.
- `AUTO_RELEASE_DATA` tasks automatically release files from an Amazon File Cache resource.


