---
title: aws_databrew_job
description: Reference for the aws_databrew_job resource in the Datadog Resource Catalog.
---

# aws_databrew_job{% #aws_databrew_job %}

## `account_id`{% #account_id %}

**Type**: `STRING`

## `create_date`{% #create_date %}

**Type**: `TIMESTAMP`
**Provider name**: `CreateDate`
**Description**: The date and time that the job was created.

## `created_by`{% #created_by %}

**Type**: `STRING`
**Provider name**: `CreatedBy`
**Description**: The Amazon Resource Name (ARN) of the user who created the job.

## `data_catalog_outputs`{% #data_catalog_outputs %}

**Type**: `UNORDERED_LIST_STRUCT`
**Provider name**: `DataCatalogOutputs`
**Description**: One or more artifacts that represent the Glue Data Catalog output from running the job.

- `catalog_id`
  **Type**: `STRING`
  **Provider name**: `CatalogId`
  **Description**: The unique identifier of the Amazon Web Services account that holds the Data Catalog that stores the data.
- `database_name`
  **Type**: `STRING`
  **Provider name**: `DatabaseName`
  **Description**: The name of a database in the Data Catalog.
- `database_options`
  **Type**: `STRUCT`
  **Provider name**: `DatabaseOptions`
  **Description**: Represents options that specify how and where DataBrew writes the database output generated by recipe jobs.
  - `table_name`
    **Type**: `STRING`
    **Provider name**: `TableName`
    **Description**: A prefix for the name of a table DataBrew will create in the database.
  - `temp_directory`
    **Type**: `STRUCT`
    **Provider name**: `TempDirectory`
    **Description**: Represents an Amazon S3 location (bucket name and object key) where DataBrew can store intermediate results.
    - `bucket`
      **Type**: `STRING`
      **Provider name**: `Bucket`
      **Description**: The Amazon S3 bucket name.
    - `bucket_owner`
      **Type**: `STRING`
      **Provider name**: `BucketOwner`
      **Description**: The Amazon Web Services account ID of the bucket owner.
    - `key`
      **Type**: `STRING`
      **Provider name**: `Key`
      **Description**: The unique name of the object in the bucket.
- `overwrite`
  **Type**: `BOOLEAN`
  **Provider name**: `Overwrite`
  **Description**: A value that, if true, means that any data in the location specified for output is overwritten with new output. Not supported with `DatabaseOptions`.
- `s3_options`
  **Type**: `STRUCT`
  **Provider name**: `S3Options`
  **Description**: Represents options that specify how and where DataBrew writes the Amazon S3 output generated by recipe jobs.
  - `location`
    **Type**: `STRUCT`
    **Provider name**: `Location`
    **Description**: Represents an Amazon S3 location (bucket name and object key) where DataBrew can write output from a job.
    - `bucket`
      **Type**: `STRING`
      **Provider name**: `Bucket`
      **Description**: The Amazon S3 bucket name.
    - `bucket_owner`
      **Type**: `STRING`
      **Provider name**: `BucketOwner`
      **Description**: The Amazon Web Services account ID of the bucket owner.
    - `key`
      **Type**: `STRING`
      **Provider name**: `Key`
      **Description**: The unique name of the object in the bucket.
- `table_name`
  **Type**: `STRING`
  **Provider name**: `TableName`
  **Description**: The name of a table in the Data Catalog.
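
To make the nesting concrete, here is a sketch in Python of how a single `data_catalog_outputs` entry might look. All values (account ID, database, bucket, and key names) are invented placeholders; only the field names come from the schema above.

```python
# Hypothetical data_catalog_outputs entry; values are illustrative,
# only the field names are taken from the schema documented above.
data_catalog_output = {
    "catalog_id": "123456789012",      # account that holds the Data Catalog
    "database_name": "sales_db",       # database in the Data Catalog
    "table_name": "cleaned_orders",    # table in the Data Catalog
    "overwrite": True,                 # not supported together with database_options
    "s3_options": {
        "location": {
            "bucket": "my-databrew-output",
            "bucket_owner": "123456789012",
            "key": "jobs/cleaned-orders/",
        },
    },
}
```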

## `database_outputs`{% #database_outputs %}

**Type**: `UNORDERED_LIST_STRUCT`
**Provider name**: `DatabaseOutputs`
**Description**: Represents a list of JDBC database output objects that define the output destination for a DataBrew recipe job to write into.

- `database_options`
  **Type**: `STRUCT`
  **Provider name**: `DatabaseOptions`
  **Description**: Represents options that specify how and where DataBrew writes the database output generated by recipe jobs.
  - `table_name`
    **Type**: `STRING`
    **Provider name**: `TableName`
    **Description**: A prefix for the name of a table DataBrew will create in the database.
  - `temp_directory`
    **Type**: `STRUCT`
    **Provider name**: `TempDirectory`
    **Description**: Represents an Amazon S3 location (bucket name and object key) where DataBrew can store intermediate results.
    - `bucket`
      **Type**: `STRING`
      **Provider name**: `Bucket`
      **Description**: The Amazon S3 bucket name.
    - `bucket_owner`
      **Type**: `STRING`
      **Provider name**: `BucketOwner`
      **Description**: The Amazon Web Services account ID of the bucket owner.
    - `key`
      **Type**: `STRING`
      **Provider name**: `Key`
      **Description**: The unique name of the object in the bucket.
- `database_output_mode`
  **Type**: `STRING`
  **Provider name**: `DatabaseOutputMode`
  **Description**: The output mode to write into the database. Currently supported option: `NEW_TABLE`.
- `glue_connection_name`
  **Type**: `STRING`
  **Provider name**: `GlueConnectionName`
  **Description**: The Glue connection that stores the connection information for the target database.
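
As above, a hypothetical Python sketch of one `database_outputs` entry; the connection name, table prefix, and S3 locations are invented, and only the field names come from this schema.

```python
# Hypothetical database_outputs entry; values are placeholders.
database_output = {
    "glue_connection_name": "my-jdbc-connection",  # Glue connection to the target database
    "database_output_mode": "NEW_TABLE",           # currently the only supported mode
    "database_options": {
        "table_name": "databrew_out_",             # prefix for the table DataBrew creates
        "temp_directory": {                        # S3 scratch space for intermediate results
            "bucket": "my-databrew-temp",
            "bucket_owner": "123456789012",
            "key": "tmp/recipe-job/",
        },
    },
}
```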

## `dataset_name`{% #dataset_name %}

**Type**: `STRING`
**Provider name**: `DatasetName`
**Description**: A dataset that the job is to process.

## `encryption_key_arn`{% #encryption_key_arn %}

**Type**: `STRING`
**Provider name**: `EncryptionKeyArn`
**Description**: The Amazon Resource Name (ARN) of an encryption key that is used to protect the job output. For more information, see [Encrypting data written by DataBrew jobs](https://docs.aws.amazon.com/databrew/latest/dg/encryption-security-configuration.html).

## `encryption_mode`{% #encryption_mode %}

**Type**: `STRING`
**Provider name**: `EncryptionMode`
**Description**: The encryption mode for the job, which can be one of the following:

- `SSE-KMS` - Server-side encryption with keys managed by KMS.
- `SSE-S3` - Server-side encryption with keys managed by Amazon S3.



## `job_sample`{% #job_sample %}

**Type**: `STRUCT`
**Provider name**: `JobSample`
**Description**: A sample configuration for profile jobs only, which determines the number of rows on which the profile job is run. If a `JobSample` value isn't provided, the default is used: `CUSTOM_ROWS` for the mode parameter and 20,000 for the size parameter.

- `mode`
  **Type**: `STRING`
  **Provider name**: `Mode`
  **Description**: A value that determines whether the profile job is run on the entire dataset or a specified number of rows. This value must be one of the following:
  - `FULL_DATASET` - The profile job is run on the entire dataset.
  - `CUSTOM_ROWS` - The profile job is run on the number of rows specified in the `Size` parameter.
- `size`
  **Type**: `INT64`
  **Provider name**: `Size`
  **Description**: The `Size` parameter is only required when the mode is `CUSTOM_ROWS`. The profile job is run on the specified number of rows. The maximum value for size is `Long.MAX_VALUE` (9223372036854775807).
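
A minimal Python sketch of the two sampling shapes; the row count is an invented example, and only the field names and mode values come from the schema above.

```python
# Hypothetical job_sample values for a profile job.
job_sample_full = {"mode": "FULL_DATASET"}  # profile the entire dataset

job_sample_custom = {
    "mode": "CUSTOM_ROWS",  # profile only the first `size` rows
    "size": 50_000,         # required when mode is CUSTOM_ROWS; default is 20,000
}
```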

## `last_modified_by`{% #last_modified_by %}

**Type**: `STRING`
**Provider name**: `LastModifiedBy`
**Description**: The Amazon Resource Name (ARN) of the user who last modified the job.

## `last_modified_date`{% #last_modified_date %}

**Type**: `TIMESTAMP`
**Provider name**: `LastModifiedDate`
**Description**: The modification date and time of the job.

## `log_subscription`{% #log_subscription %}

**Type**: `STRING`
**Provider name**: `LogSubscription`
**Description**: The current status of Amazon CloudWatch logging for the job.

## `max_capacity`{% #max_capacity %}

**Type**: `INT32`
**Provider name**: `MaxCapacity`
**Description**: The maximum number of nodes that can be consumed when the job processes data.

## `max_retries`{% #max_retries %}

**Type**: `INT32`
**Provider name**: `MaxRetries`
**Description**: The maximum number of times to retry the job after a job run fails.

## `name`{% #name %}

**Type**: `STRING`
**Provider name**: `Name`
**Description**: The unique name of the job.

## `outputs`{% #outputs %}

**Type**: `UNORDERED_LIST_STRUCT`
**Provider name**: `Outputs`
**Description**: One or more artifacts that represent output from running the job.

- `compression_format`
  **Type**: `STRING`
  **Provider name**: `CompressionFormat`
  **Description**: The compression algorithm used to compress the output text of the job.
- `format`
  **Type**: `STRING`
  **Provider name**: `Format`
  **Description**: The data format of the output of the job.
- `format_options`
  **Type**: `STRUCT`
  **Provider name**: `FormatOptions`
  **Description**: Represents options that define how DataBrew formats job output files.
  - `csv`
    **Type**: `STRUCT`
    **Provider name**: `Csv`
    **Description**: Represents a set of options that define the structure of comma-separated value (CSV) job output.
    - `delimiter`
      **Type**: `STRING`
      **Provider name**: `Delimiter`
      **Description**: A single character that specifies the delimiter used to create CSV job output.
- `location`
  **Type**: `STRUCT`
  **Provider name**: `Location`
  **Description**: The location in Amazon S3 where the job writes its output.
  - `bucket`
    **Type**: `STRING`
    **Provider name**: `Bucket`
    **Description**: The Amazon S3 bucket name.
  - `bucket_owner`
    **Type**: `STRING`
    **Provider name**: `BucketOwner`
    **Description**: The Amazon Web Services account ID of the bucket owner.
  - `key`
    **Type**: `STRING`
    **Provider name**: `Key`
    **Description**: The unique name of the object in the bucket.
- `max_output_files`
  **Type**: `INT32`
  **Provider name**: `MaxOutputFiles`
  **Description**: Maximum number of files to be generated by the job and written to the output folder. For output partitioned by column(s), the `MaxOutputFiles` value is the maximum number of files per partition.
- `overwrite`
  **Type**: `BOOLEAN`
  **Provider name**: `Overwrite`
  **Description**: A value that, if true, means that any data in the location specified for output is overwritten with new output.
- `partition_columns`
  **Type**: `UNORDERED_LIST_STRING`
  **Provider name**: `PartitionColumns`
  **Description**: The names of one or more partition columns for the output of the job.
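
A hypothetical Python sketch of one `outputs` entry tying these fields together; the bucket, key, and column names are invented, and only the field names come from this schema.

```python
# Hypothetical outputs entry; values are placeholders.
output = {
    "format": "CSV",                                # data format of the job output
    "format_options": {"csv": {"delimiter": ","}},  # single-character CSV delimiter
    "compression_format": "GZIP",                   # compression applied to output text
    "location": {
        "bucket": "my-databrew-output",
        "bucket_owner": "123456789012",
        "key": "jobs/orders/",
    },
    "overwrite": True,
    "partition_columns": ["region", "order_date"],
    "max_output_files": 10,  # per partition when partition_columns is set
}
```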

## `project_name`{% #project_name %}

**Type**: `STRING`
**Provider name**: `ProjectName`
**Description**: The name of the project that the job is associated with.

## `recipe_reference`{% #recipe_reference %}

**Type**: `STRUCT`
**Provider name**: `RecipeReference`
**Description**: A set of steps that the job runs.

- `name`
  **Type**: `STRING`
  **Provider name**: `Name`
  **Description**: The name of the recipe.
- `recipe_version`
  **Type**: `STRING`
  **Provider name**: `RecipeVersion`
  **Description**: The identifier for the version of the recipe.
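
A hypothetical Python sketch of a `recipe_reference` value; the recipe name and version are invented placeholders.

```python
# Hypothetical recipe_reference value; values are placeholders.
recipe_reference = {
    "name": "orders-recipe",  # name of the recipe the job runs
    "recipe_version": "1.0",  # identifier of the recipe version
}
```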

## `resource_arn`{% #resource_arn %}

**Type**: `STRING`
**Provider name**: `ResourceArn`
**Description**: The unique Amazon Resource Name (ARN) for the job.

## `role_arn`{% #role_arn %}

**Type**: `STRING`
**Provider name**: `RoleArn`
**Description**: The Amazon Resource Name (ARN) of the role to be assumed for this job.

## `tags`{% #tags %}

**Type**: `UNORDERED_LIST_STRING`

## `timeout`{% #timeout %}

**Type**: `INT32`
**Provider name**: `Timeout`
**Description**: The job's timeout in minutes. A job that attempts to run longer than this timeout period ends with a status of `TIMEOUT`.

## `type`{% #type %}

**Type**: `STRING`
**Provider name**: `Type`
**Description**: The job type of the job, which must be one of the following:

- `PROFILE` - A job to analyze a dataset, to determine its size, data types, data distribution, and more.
- `RECIPE` - A job to apply one or more transformations to a dataset.
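
For context, the two job types correspond to separate creation calls in the AWS SDK. A minimal boto3 sketch of each, where every name, ARN, and bucket below is a placeholder:

```python
import boto3

databrew = boto3.client("databrew")

# RECIPE job: applies a recipe's transformations to a dataset.
databrew.create_recipe_job(
    Name="clean-orders",
    DatasetName="orders",
    RoleArn="arn:aws:iam::123456789012:role/DataBrewRole",
    RecipeReference={"Name": "orders-recipe", "RecipeVersion": "1.0"},
    Outputs=[{"Location": {"Bucket": "my-databrew-output", "Key": "jobs/orders/"}}],
    Timeout=60,     # minutes; exceeding it ends the run with status TIMEOUT
    MaxRetries=1,   # retries after a failed run
)

# PROFILE job: analyzes the dataset's size, data types, and distribution.
databrew.create_profile_job(
    Name="profile-orders",
    DatasetName="orders",
    RoleArn="arn:aws:iam::123456789012:role/DataBrewRole",
    OutputLocation={"Bucket": "my-databrew-output", "Key": "profiles/orders/"},
    JobSample={"Mode": "CUSTOM_ROWS", "Size": 20000},
)
```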



## `validation_configurations`{% #validation_configurations %}

**Type**: `UNORDERED_LIST_STRUCT`
**Provider name**: `ValidationConfigurations`
**Description**: List of validation configurations that are applied to the profile job.

- `ruleset_arn`
  **Type**: `STRING`
  **Provider name**: `RulesetArn`
  **Description**: The Amazon Resource Name (ARN) for the ruleset to be validated in the profile job. The `TargetArn` of the selected ruleset should be the same as the Amazon Resource Name (ARN) of the dataset that is associated with the profile job.
- `validation_mode`
  **Type**: `STRING`
  **Provider name**: `ValidationMode`
  **Description**: Mode of data quality validation. The default mode is `CHECK_ALL`, which verifies all rules defined in the selected ruleset.
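
A hypothetical Python sketch of one `validation_configurations` entry; the ARN is a placeholder, and only the field names come from this schema.

```python
# Hypothetical validation_configurations entry; the ARN is invented.
validation_configuration = {
    "ruleset_arn": "arn:aws:databrew:us-east-1:123456789012:ruleset/orders-rules",
    "validation_mode": "CHECK_ALL",  # verify every rule in the ruleset
}
```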
