---
title: Getting Started with Datadog
description: Datadog, the leading service for cloud-scale monitoring.
---

# aws_batch_job_definition{% #aws_batch_job_definition %}

## `account_id`{% #account_id %}

**Type**: `STRING`

## `consumable_resource_properties`{% #consumable_resource_properties %}

**Type**: `STRUCT`

**Provider name**: `consumableResourceProperties`

**Description**: Contains a list of consumable resources required by the job.

- `consumable_resource_list` **Type**: `UNORDERED_LIST_STRUCT` **Provider name**: `consumableResourceList` **Description**: The list of consumable resources required by a job.
  - `consumable_resource` **Type**: `STRING` **Provider name**: `consumableResource` **Description**: The name or ARN of the consumable resource.
  - `quantity` **Type**: `INT64` **Provider name**: `quantity` **Description**: The quantity of the consumable resource that is needed.
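
The fields above can be sketched as a small payload fragment; this is a minimal sketch, and the resource name and quantity are illustrative:

```python
# Illustrative consumableResourceProperties fragment for a job
# definition; the resource name and quantity are made up.
consumable_resource_properties = {
    "consumableResourceList": [
        {"consumableResource": "my-license-pool", "quantity": 2},
    ]
}

# Each entry names a consumable resource (name or ARN) and the units needed.
for entry in consumable_resource_properties["consumableResourceList"]:
    assert isinstance(entry["consumableResource"], str)
    assert entry["quantity"] > 0
```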

## `container_orchestration_type`{% #container_orchestration_type %}

**Type**: `STRING`

**Provider name**: `containerOrchestrationType`

**Description**: The orchestration type of the compute environment. The valid values are `ECS` (default) or `EKS`.

## `container_properties`{% #container_properties %}

**Type**: `STRUCT`

**Provider name**: `containerProperties`

**Description**: An object with properties specific to Amazon ECS-based jobs. When `containerProperties` is used in the job definition, it can't be used in addition to `eksProperties`, `ecsProperties`, or `nodeProperties`.

- `command` **Type**: `UNORDERED_LIST_STRING` **Provider name**: `command` **Description**: The command that's passed to the container. This parameter maps to `Cmd` in the [Create a container](https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.23/) and the `COMMAND` parameter to [docker run](https://docs.docker.com/engine/reference/run/). For more information, see [https://docs.docker.com/engine/reference/builder/#cmd](https://docs.docker.com/engine/reference/builder/#cmd).
- `environment` **Type**: `UNORDERED_LIST_STRUCT` **Provider name**: `environment` **Description**: The environment variables to pass to a container. This parameter maps to `Env` in the [Create a container](https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.23/) and the `--env` option to [docker run](https://docs.docker.com/engine/reference/run/). We don't recommend using plaintext environment variables for sensitive information, such as credential data. Environment variables cannot start with "`AWS_BATCH`". This naming convention is reserved for variables that Batch sets.
  - `name` **Type**: `STRING` **Provider name**: `name` **Description**: The name of the key-value pair. For environment variables, this is the name of the environment variable.
  - `value` **Type**: `STRING` **Provider name**: `value` **Description**: The value of the key-value pair. For environment variables, this is the value of the environment variable.
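
  A minimal sketch of the reserved-prefix rule described above (variable names are illustrative):

  ```python
  # Environment entries are name/value pairs; names must not start with
  # the reserved "AWS_BATCH" prefix, which Batch uses for its own variables.
  environment = [
      {"name": "STAGE", "value": "test"},
      {"name": "LOG_LEVEL", "value": "debug"},
  ]

  def valid_env_name(name: str) -> bool:
      return not name.startswith("AWS_BATCH")

  assert all(valid_env_name(e["name"]) for e in environment)
  assert not valid_env_name("AWS_BATCH_JOB_ID")
  ```
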
- `ephemeral_storage` **Type**: `STRUCT` **Provider name**: `ephemeralStorage` **Description**: The amount of ephemeral storage to allocate for the task. This parameter is used to expand the total amount of ephemeral storage available, beyond the default amount, for tasks hosted on Fargate.
  - `size_in_gib` **Type**: `INT32` **Provider name**: `sizeInGiB` **Description**: The total amount, in GiB, of ephemeral storage to set for the task. The minimum supported value is `21` GiB and the maximum supported value is `200` GiB.
- `execution_role_arn` **Type**: `STRING` **Provider name**: `executionRoleArn` **Description**: The Amazon Resource Name (ARN) of the execution role that Batch can assume. For jobs that run on Fargate resources, you must provide an execution role. For more information, see [Batch execution IAM role](https://docs.aws.amazon.com/batch/latest/userguide/execution-IAM-role.html) in the Batch User Guide.
- `fargate_platform_configuration` **Type**: `STRUCT` **Provider name**: `fargatePlatformConfiguration` **Description**: The platform configuration for jobs that are running on Fargate resources. Jobs that are running on Amazon EC2 resources must not specify this parameter.
  - `platform_version` **Type**: `STRING` **Provider name**: `platformVersion` **Description**: The Fargate platform version where the jobs are running. A platform version is specified only for jobs that are running on Fargate resources. If one isn't specified, the `LATEST` platform version is used by default. This uses a recent, approved version of the Fargate platform for compute resources. For more information, see [Fargate platform versions](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/platform_versions.html) in the Amazon Elastic Container Service Developer Guide.
- `image` **Type**: `STRING` **Provider name**: `image` **Description**: Required. The image used to start a container. This string is passed directly to the Docker daemon. Images in the Docker Hub registry are available by default. Other repositories are specified with `repository-url/image:tag`. It can be up to 255 characters long. It can contain uppercase and lowercase letters, numbers, hyphens (-), underscores (_), colons (:), periods (.), forward slashes (/), and number signs (#). This parameter maps to `Image` in the [Create a container](https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.23/) and the `IMAGE` parameter of [docker run](https://docs.docker.com/engine/reference/run/). Docker image architecture must match the processor architecture of the compute resources that they're scheduled on. For example, ARM-based Docker images can only run on ARM-based compute resources.
  - Images in Amazon ECR Public repositories use the full `registry/repository[:tag]` or `registry/repository[@digest]` naming conventions. For example, `public.ecr.aws/registry_alias/my-web-app:latest`.
  - Images in Amazon ECR repositories use the full registry and repository URI (for example, `123456789012.dkr.ecr.<region-name>.amazonaws.com/<repository-name>`).
  - Images in official repositories on Docker Hub use a single name (for example, `ubuntu` or `mongo`).
  - Images in other repositories on Docker Hub are qualified with an organization name (for example, `amazon/amazon-ecs-agent`).
  - Images in other online repositories are qualified further by a domain name (for example, `quay.io/assemblyline/ubuntu`).
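
  A few example strings, one per naming form above (all names hypothetical):

  ```python
  import re

  # One hypothetical image reference per naming form described above.
  images = [
      "public.ecr.aws/registry_alias/my-web-app:latest",          # Amazon ECR Public
      "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-repo:v1",  # Amazon ECR
      "ubuntu",                                                   # official Docker Hub repository
      "amazon/amazon-ecs-agent",                                  # Docker Hub organization
      "quay.io/assemblyline/ubuntu",                              # other online registry
  ]

  # Up to 255 characters drawn from the allowed set (letters, numbers,
  # hyphens, underscores, colons, periods, slashes, number signs).
  allowed = re.compile(r"^[A-Za-z0-9._:/#-]{1,255}$")
  assert all(allowed.match(i) for i in images)
  ```
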
- `instance_type` **Type**: `STRING` **Provider name**: `instanceType` **Description**: The instance type to use for a multi-node parallel job. All node groups in a multi-node parallel job must use the same instance type. This parameter isn't applicable to single-node container jobs or jobs that run on Fargate resources, and shouldn't be provided.
- `job_role_arn` **Type**: `STRING` **Provider name**: `jobRoleArn` **Description**: The Amazon Resource Name (ARN) of the IAM role that the container can assume for Amazon Web Services permissions. For more information, see [IAM roles for tasks](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html) in the Amazon Elastic Container Service Developer Guide.
- `linux_parameters` **Type**: `STRUCT` **Provider name**: `linuxParameters` **Description**: Linux-specific modifications that are applied to the container, such as details for device mappings.
  - `devices` **Type**: `UNORDERED_LIST_STRUCT` **Provider name**: `devices` **Description**: Any of the host devices to expose to the container. This parameter maps to `Devices` in the [Create a container](https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.23/) and the `--device` option to [docker run](https://docs.docker.com/engine/reference/run/). This parameter isn't applicable to jobs that are running on Fargate resources. Don't provide it for these jobs.
    - `container_path` **Type**: `STRING` **Provider name**: `containerPath` **Description**: The path inside the container that's used to expose the host device. By default, the `hostPath` value is used.
    - `host_path` **Type**: `STRING` **Provider name**: `hostPath` **Description**: The path for the device on the host container instance.
    - `permissions` **Type**: `UNORDERED_LIST_STRING` **Provider name**: `permissions` **Description**: The explicit permissions to provide to the container for the device. By default, the container has permissions for `read`, `write`, and `mknod` for the device.
  - `init_process_enabled` **Type**: `BOOLEAN` **Provider name**: `initProcessEnabled` **Description**: If true, run an `init` process inside the container that forwards signals and reaps processes. This parameter maps to the `--init` option to [docker run](https://docs.docker.com/engine/reference/run/). This parameter requires version 1.25 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version | grep "Server API version"`
  - `max_swap` **Type**: `INT32` **Provider name**: `maxSwap` **Description**: The total amount of swap memory (in MiB) a container can use. This parameter is translated to the `--memory-swap` option to [docker run](https://docs.docker.com/engine/reference/run/) where the value is the sum of the container memory plus the `maxSwap` value. For more information, see [`--memory-swap` details](https://docs.docker.com/config/containers/resource_constraints/#--memory-swap-details) in the Docker documentation. If a `maxSwap` value of `0` is specified, the container doesn't use swap. Accepted values are `0` or any positive integer. If the `maxSwap` parameter is omitted, the container doesn't use the swap configuration for the container instance that it's running on. A `maxSwap` value must be set for the `swappiness` parameter to be used. This parameter isn't applicable to jobs that are running on Fargate resources. Don't provide it for these jobs.
  - `shared_memory_size` **Type**: `INT32` **Provider name**: `sharedMemorySize` **Description**: The value for the size (in MiB) of the `/dev/shm` volume. This parameter maps to the `--shm-size` option to [docker run](https://docs.docker.com/engine/reference/run/). This parameter isn't applicable to jobs that are running on Fargate resources. Don't provide it for these jobs.
  - `swappiness` **Type**: `INT32` **Provider name**: `swappiness` **Description**: You can use this parameter to tune a container's memory swappiness behavior. A `swappiness` value of `0` causes swapping to not occur unless absolutely necessary. A `swappiness` value of `100` causes pages to be swapped aggressively. Valid values are whole numbers between `0` and `100`. If the `swappiness` parameter isn't specified, a default value of `60` is used. If a value isn't specified for `maxSwap`, then this parameter is ignored. If `maxSwap` is set to 0, the container doesn't use swap. This parameter maps to the `--memory-swappiness` option to [docker run](https://docs.docker.com/engine/reference/run/). Consider the following when you use a per-container swap configuration.
    - Swap space must be enabled and allocated on the container instance for the containers to use. By default, the Amazon ECS optimized AMIs don't have swap enabled. You must enable swap on the instance to use this feature. For more information, see [Instance store swap volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-store-swap-volumes.html) in the Amazon EC2 User Guide for Linux Instances or [How do I allocate memory to work as swap space in an Amazon EC2 instance by using a swap file?](http://aws.amazon.com/premiumsupport/knowledge-center/ec2-memory-swap-file/)
    - The swap space parameters are only supported for job definitions using EC2 resources.
    - If the `maxSwap` and `swappiness` parameters are omitted from a job definition, each container has a default `swappiness` value of 60. Moreover, the total swap usage is limited to two times the memory reservation of the container.

    This parameter isn't applicable to jobs that are running on Fargate resources. Don't provide it for these jobs.
  - `tmpfs` **Type**: `UNORDERED_LIST_STRUCT` **Provider name**: `tmpfs` **Description**: The container path, mount options, and size (in MiB) of the `tmpfs` mount. This parameter maps to the `--tmpfs` option to [docker run](https://docs.docker.com/engine/reference/run/). This parameter isn't applicable to jobs that are running on Fargate resources. Don't provide this parameter for this resource type.
    - `container_path` **Type**: `STRING` **Provider name**: `containerPath` **Description**: The absolute file path in the container where the `tmpfs` volume is mounted.
    - `mount_options` **Type**: `UNORDERED_LIST_STRING` **Provider name**: `mountOptions` **Description**: The list of `tmpfs` volume mount options. Valid values: "`defaults`" | "`ro`" | "`rw`" | "`suid`" | "`nosuid`" | "`dev`" | "`nodev`" | "`exec`" | "`noexec`" | "`sync`" | "`async`" | "`dirsync`" | "`remount`" | "`mand`" | "`nomand`" | "`atime`" | "`noatime`" | "`diratime`" | "`nodiratime`" | "`bind`" | "`rbind`" | "`unbindable`" | "`runbindable`" | "`private`" | "`rprivate`" | "`shared`" | "`rshared`" | "`slave`" | "`rslave`" | "`relatime`" | "`norelatime`" | "`strictatime`" | "`nostrictatime`" | "`mode`" | "`uid`" | "`gid`" | "`nr_inodes`" | "`nr_blocks`" | "`mpol`"
    - `size` **Type**: `INT32` **Provider name**: `size` **Description**: The size (in MiB) of the `tmpfs` volume.
- `log_configuration` **Type**: `STRUCT` **Provider name**: `logConfiguration` **Description**: The log configuration specification for the container. This parameter maps to `LogConfig` in the [Create a container](https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.23/) and the `--log-driver` option to [docker run](https://docs.docker.com/engine/reference/run/). By default, containers use the same logging driver that the Docker daemon uses. However, the container might use a different logging driver than the Docker daemon by specifying a log driver with this parameter in the container definition. To use a different logging driver for a container, the log system must be configured properly on the container instance (or on a different log server for remote logging options). For more information on the options for different supported log drivers, see [Configure logging drivers](https://docs.docker.com/engine/admin/logging/overview/) in the Docker documentation. Batch currently supports a subset of the logging drivers available to the Docker daemon (shown in the [LogConfiguration](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties-logconfiguration.html) data type). This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version | grep "Server API version"`. The Amazon ECS container agent running on a container instance must register the logging drivers available on that instance with the `ECS_AVAILABLE_LOGGING_DRIVERS` environment variable before containers placed on that instance can use these log configuration options. For more information, see [Amazon ECS container agent configuration](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-agent-config.html) in the Amazon Elastic Container Service Developer Guide.
  - `log_driver` **Type**: `STRING` **Provider name**: `logDriver` **Description**: The log driver to use for the container. The valid values that are listed for this parameter are log drivers that the Amazon ECS container agent can communicate with by default. The supported log drivers are `awslogs`, `fluentd`, `gelf`, `json-file`, `journald`, `logentries`, `syslog`, and `splunk`. Jobs that are running on Fargate resources are restricted to the `awslogs` and `splunk` log drivers.
    {% dl %}
    
    {% dt %}
awslogs
    {% /dt %}

    {% dd %}
    Specifies the Amazon CloudWatch Logs logging driver. For more information, see [Using the awslogs log driver](https://docs.aws.amazon.com/batch/latest/userguide/using_awslogs.html) in the Batch User Guide and [Amazon CloudWatch Logs logging driver](https://docs.docker.com/config/containers/logging/awslogs/) in the Docker documentation.
        {% /dd %}

    {% dt %}
fluentd
    {% /dt %}

    {% dd %}
    Specifies the Fluentd logging driver. For more information including usage and options, see [Fluentd logging driver](https://docs.docker.com/config/containers/logging/fluentd/) in the Docker documentation.
        {% /dd %}

    {% dt %}
gelf
    {% /dt %}

    {% dd %}
    Specifies the Graylog Extended Format (GELF) logging driver. For more information including usage and options, see [Graylog Extended Format logging driver](https://docs.docker.com/config/containers/logging/gelf/) in the Docker documentation.
        {% /dd %}

    {% dt %}
journald
    {% /dt %}

    {% dd %}
    Specifies the journald logging driver. For more information including usage and options, see [Journald logging driver](https://docs.docker.com/config/containers/logging/journald/) in the Docker documentation.
        {% /dd %}

    {% dt %}
json-file
    {% /dt %}

    {% dd %}
    Specifies the JSON file logging driver. For more information including usage and options, see [JSON File logging driver](https://docs.docker.com/config/containers/logging/json-file/) in the Docker documentation.
        {% /dd %}

    {% dt %}
splunk
    {% /dt %}

    {% dd %}
    Specifies the Splunk logging driver. For more information including usage and options, see [Splunk logging driver](https://docs.docker.com/config/containers/logging/splunk/) in the Docker documentation.
        {% /dd %}

    {% dt %}
syslog
    {% /dt %}

    {% dd %}
    Specifies the syslog logging driver. For more information including usage and options, see [Syslog logging driver](https://docs.docker.com/config/containers/logging/syslog/) in the Docker documentation.
        {% /dd %}

        {% /dl %}
If you have a custom driver that's not listed earlier that you want to work with the Amazon ECS container agent, you can fork the Amazon ECS container agent project that's [available on GitHub](https://github.com/aws/amazon-ecs-agent) and customize it to work with that driver. We encourage you to submit pull requests for changes that you want to have included. However, Amazon Web Services doesn't currently support running modified copies of this software. This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version | grep "Server API version"`
  - `options` **Type**: `MAP_STRING_STRING` **Provider name**: `options` **Description**: The configuration options to send to the log driver. This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version | grep "Server API version"`
  - `secret_options` **Type**: `UNORDERED_LIST_STRUCT` **Provider name**: `secretOptions` **Description**: The secrets to pass to the log configuration. For more information, see [Specifying sensitive data](https://docs.aws.amazon.com/batch/latest/userguide/specifying-sensitive-data.html) in the Batch User Guide.
    - `name` **Type**: `STRING` **Provider name**: `name` **Description**: The name of the secret.
    - `value_from` **Type**: `STRING` **Provider name**: `valueFrom` **Description**: The secret to expose to the container. The supported values are either the full Amazon Resource Name (ARN) of the Secrets Manager secret or the full ARN of the parameter in the Amazon Web Services Systems Manager Parameter Store. If the Amazon Web Services Systems Manager Parameter Store parameter exists in the same Region as the job you're launching, then you can use either the full Amazon Resource Name (ARN) or name of the parameter. If the parameter exists in a different Region, then the full ARN must be specified.
- `memory` **Type**: `INT32` **Provider name**: `memory` **Description**: This parameter is deprecated; use `resourceRequirements` to specify the memory requirements for the job definition. It's not supported for jobs running on Fargate resources. For jobs that run on Amazon EC2 resources, it specifies the memory hard limit (in MiB) for a container. If your container attempts to exceed the specified number, it's terminated. You must specify at least 4 MiB of memory for a job using this parameter. The memory hard limit can be specified in several places. It must be specified for each node at least once.
- `mount_points` **Type**: `UNORDERED_LIST_STRUCT` **Provider name**: `mountPoints` **Description**: The mount points for data volumes in your container. This parameter maps to `Volumes` in the [Create a container](https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.23/) and the `--volume` option to [docker run](https://docs.docker.com/engine/reference/run/).
  - `container_path` **Type**: `STRING` **Provider name**: `containerPath` **Description**: The path on the container where the host volume is mounted.
  - `read_only` **Type**: `BOOLEAN` **Provider name**: `readOnly` **Description**: If this value is `true`, the container has read-only access to the volume. Otherwise, the container can write to the volume. The default value is `false`.
  - `source_volume` **Type**: `STRING` **Provider name**: `sourceVolume` **Description**: The name of the volume to mount.
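
  A mount point pairs a container path with a volume named elsewhere in the job definition; a sketch with illustrative names:

  ```python
  # Illustrative mountPoints entry; the paths and volume name are made up.
  mount_points = [
      {
          "containerPath": "/mnt/data",   # where the volume appears in the container
          "readOnly": True,               # container gets read-only access
          "sourceVolume": "shared-data",  # name of a volume defined in the job definition
      }
  ]

  assert mount_points[0]["readOnly"] is True
  ```
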
- `network_configuration` **Type**: `STRUCT` **Provider name**: `networkConfiguration` **Description**: The network configuration for jobs that are running on Fargate resources. Jobs that are running on Amazon EC2 resources must not specify this parameter.
  - `assign_public_ip` **Type**: `STRING` **Provider name**: `assignPublicIp` **Description**: Indicates whether the job has a public IP address. For a job that's running on Fargate resources in a private subnet to send outbound traffic to the internet (for example, to pull container images), the private subnet requires that a NAT gateway be attached to route requests to the internet. For more information, see [Amazon ECS task networking](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking.html) in the Amazon Elastic Container Service Developer Guide. The default value is `DISABLED`.
- `privileged` **Type**: `BOOLEAN` **Provider name**: `privileged` **Description**: When this parameter is true, the container is given elevated permissions on the host container instance (similar to the `root` user). This parameter maps to `Privileged` in the [Create a container](https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.23/) and the `--privileged` option to [docker run](https://docs.docker.com/engine/reference/run/). The default value is false. This parameter isn't applicable to jobs that are running on Fargate resources; it shouldn't be provided, or should be specified as false.
- `readonly_root_filesystem` **Type**: `BOOLEAN` **Provider name**: `readonlyRootFilesystem` **Description**: When this parameter is true, the container is given read-only access to its root file system. This parameter maps to `ReadonlyRootfs` in the [Create a container](https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.23/) and the `--read-only` option to `docker run`.
- `repository_credentials` **Type**: `STRUCT` **Provider name**: `repositoryCredentials` **Description**: The private repository authentication credentials to use.
  - `credentials_parameter` **Type**: `STRING` **Provider name**: `credentialsParameter` **Description**: The Amazon Resource Name (ARN) of the secret containing the private repository credentials.
- `resource_requirements` **Type**: `UNORDERED_LIST_STRUCT` **Provider name**: `resourceRequirements` **Description**: The type and amount of resources to assign to a container. The supported resources include `GPU`, `MEMORY`, and `VCPU`.
  - `type` **Type**: `STRING` **Provider name**: `type` **Description**: The type of resource to assign to a container. The supported resources include `GPU`, `MEMORY`, and `VCPU`.
  - `value` **Type**: `STRING` **Provider name**: `value` **Description**: The quantity of the specified resource to reserve for the container. The values vary based on the `type` specified.
    {% dl %}
    
    {% dt %}
type="GPU"
    {% /dt %}

    {% dd %}
    The number of physical GPUs to reserve for the container. Make sure that the number of GPUs reserved for all containers in a job doesn't exceed the number of available GPUs on the compute resource that the job is launched on. GPUs aren't available for jobs that are running on Fargate resources.
        {% /dd %}

    {% dt %}
type="MEMORY"
    {% /dt %}

    {% dd %}
    The memory hard limit (in MiB) present to the container. This parameter is supported for jobs that are running on Amazon EC2 resources. If your container attempts to exceed the memory specified, the container is terminated. This parameter maps to `Memory` in the [Create a container](https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.23/) and the `--memory` option to [docker run](https://docs.docker.com/engine/reference/run/). You must specify at least 4 MiB of memory for a job. This is required but can be specified in several places for multi-node parallel (MNP) jobs. It must be specified for each node at least once. If you're trying to maximize your resource utilization by providing your jobs as much memory as possible for a particular instance type, see [Memory management](https://docs.aws.amazon.com/batch/latest/userguide/memory-management.html) in the Batch User Guide. For jobs that are running on Fargate resources, `value` is the hard limit (in MiB) and must match one of the supported values, and the `VCPU` value must be one of the values supported for that memory value.
    {% dl %}
    
    {% dt %}
value = 512
    {% /dt %}

    {% dd %}
    `VCPU` = 0.25
        {% /dd %}

    {% dt %}
value = 1024
    {% /dt %}

    {% dd %}
    `VCPU` = 0.25 or 0.5
        {% /dd %}

    {% dt %}
value = 2048
    {% /dt %}

    {% dd %}
    `VCPU` = 0.25, 0.5, or 1
        {% /dd %}

    {% dt %}
value = 3072
    {% /dt %}

    {% dd %}
    `VCPU` = 0.5 or 1
        {% /dd %}

    {% dt %}
value = 4096
    {% /dt %}

    {% dd %}
    `VCPU` = 0.5, 1, or 2
        {% /dd %}

    {% dt %}
value = 5120, 6144, or 7168
    {% /dt %}

    {% dd %}
    `VCPU` = 1 or 2
        {% /dd %}

    {% dt %}
value = 8192
    {% /dt %}

    {% dd %}
    `VCPU` = 1, 2, or 4
        {% /dd %}

    {% dt %}
value = 9216, 10240, 11264, 12288, 13312, 14336, or 15360
    {% /dt %}

    {% dd %}
    `VCPU` = 2 or 4
        {% /dd %}

    {% dt %}
value = 16384
    {% /dt %}

    {% dd %}
    `VCPU` = 2, 4, or 8
        {% /dd %}

    {% dt %}
value = 17408, 18432, 19456, 21504, 22528, 23552, 25600, 26624, 27648, 29696, or 30720
    {% /dt %}

    {% dd %}
    `VCPU` = 4
        {% /dd %}

    {% dt %}
value = 20480, 24576, or 28672
    {% /dt %}

    {% dd %}
    `VCPU` = 4 or 8
        {% /dd %}

    {% dt %}
value = 36864, 45056, 53248, or 61440
    {% /dt %}

    {% dd %}
    `VCPU` = 8
        {% /dd %}

    {% dt %}
value = 32768, 40960, 49152, or 57344
    {% /dt %}

    {% dd %}
    `VCPU` = 8 or 16
        {% /dd %}

    {% dt %}
value = 65536, 73728, 81920, 90112, 98304, 106496, 114688, or 122880
    {% /dt %}

    {% dd %}
    `VCPU` = 16
        {% /dd %}

        {% /dl %}

        {% /dd %}

    {% dt %}
type="VCPU"
    {% /dt %}

    {% dd %}
    The number of vCPUs reserved for the container. This parameter maps to `CpuShares` in the [Create a container](https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.23/) and the `--cpu-shares` option to [docker run](https://docs.docker.com/engine/reference/run/). Each vCPU is equivalent to 1,024 CPU shares. For Amazon EC2 resources, you must specify at least one vCPU. This is required but can be specified in several places; it must be specified for each node at least once. The default for the Fargate On-Demand vCPU resource count quota is 6 vCPUs. For more information about Fargate quotas, see [Fargate quotas](https://docs.aws.amazon.com/general/latest/gr/ecs-service.html#service-quotas-fargate) in the Amazon Web Services General Reference. For jobs that are running on Fargate resources, `value` must match one of the supported values, and the `MEMORY` value must be one of the values supported for that `VCPU` value. The supported values are 0.25, 0.5, 1, 2, 4, 8, and 16.
    {% dl %}
    
    {% dt %}
value = 0.25
    {% /dt %}

    {% dd %}
    `MEMORY` = 512, 1024, or 2048
        {% /dd %}

    {% dt %}
value = 0.5
    {% /dt %}

    {% dd %}
    `MEMORY` = 1024, 2048, 3072, or 4096
        {% /dd %}

    {% dt %}
value = 1
    {% /dt %}

    {% dd %}
    `MEMORY` = 2048, 3072, 4096, 5120, 6144, 7168, or 8192
        {% /dd %}

    {% dt %}
value = 2
    {% /dt %}

    {% dd %}
    `MEMORY` = 4096, 5120, 6144, 7168, 8192, 9216, 10240, 11264, 12288, 13312, 14336, 15360, or 16384
        {% /dd %}

    {% dt %}
value = 4
    {% /dt %}

    {% dd %}
    `MEMORY` = 8192, 9216, 10240, 11264, 12288, 13312, 14336, 15360, 16384, 17408, 18432, 19456, 20480, 21504, 22528, 23552, 24576, 25600, 26624, 27648, 28672, 29696, or 30720
        {% /dd %}

    {% dt %}
value = 8
    {% /dt %}

    {% dd %}
    `MEMORY` = 16384, 20480, 24576, 28672, 32768, 36864, 40960, 45056, 49152, 53248, 57344, or 61440
        {% /dd %}

    {% dt %}
value = 16
    {% /dt %}

    {% dd %}
    `MEMORY` = 32768, 40960, 49152, 57344, 65536, 73728, 81920, 90112, 98304, 106496, 114688, or 122880
        {% /dd %}

        {% /dl %}

        {% /dd %}

        {% /dl %}
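
  The Fargate `MEMORY`/`VCPU` pairings in the tables above can be encoded as a small lookup; a sketch for validating a requested pair:

  ```python
  # Supported Fargate MEMORY values (MiB) for each VCPU value, per the
  # tables above.
  FARGATE_MEMORY_BY_VCPU = {
      0.25: [512, 1024, 2048],
      0.5: [1024, 2048, 3072, 4096],
      1: list(range(2048, 8192 + 1, 1024)),
      2: list(range(4096, 16384 + 1, 1024)),
      4: list(range(8192, 30720 + 1, 1024)),
      8: list(range(16384, 61440 + 1, 4096)),
      16: list(range(32768, 122880 + 1, 8192)),
  }

  def fargate_pair_ok(vcpu, memory_mib):
      """True if this VCPU/MEMORY combination is a supported Fargate pair."""
      return memory_mib in FARGATE_MEMORY_BY_VCPU.get(vcpu, [])

  assert fargate_pair_ok(1, 4096)         # listed under value = 1
  assert not fargate_pair_ok(0.25, 4096)  # 4096 MiB needs at least 0.5 vCPU
  ```
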
- `runtime_platform` **Type**: `STRUCT` **Provider name**: `runtimePlatform` **Description**: An object that represents the compute environment architecture for Batch jobs on Fargate.
  - `cpu_architecture` **Type**: `STRING` **Provider name**: `cpuArchitecture` **Description**: The vCPU architecture. The default value is `X86_64`. Valid values are `X86_64` and `ARM64`. This parameter must be set to `X86_64` for Windows containers. Fargate Spot is not supported for `ARM64` and Windows-based containers on Fargate. A job queue will be blocked if a Fargate `ARM64` or Windows job is submitted to a job queue with only Fargate Spot compute environments. However, you can attach both `FARGATE` and `FARGATE_SPOT` compute environments to the same job queue.
  - `operating_system_family` **Type**: `STRING` **Provider name**: `operatingSystemFamily` **Description**: The operating system for the compute environment. Valid values are: `LINUX` (default), `WINDOWS_SERVER_2019_CORE`, `WINDOWS_SERVER_2019_FULL`, `WINDOWS_SERVER_2022_CORE`, and `WINDOWS_SERVER_2022_FULL`. The following parameters can't be set for Windows containers: `linuxParameters`, `privileged`, `user`, `ulimits`, `readonlyRootFilesystem`, and `efsVolumeConfiguration`. The Batch Scheduler checks the compute environments that are attached to the job queue before registering a task definition with Fargate. In this scenario, the job queue is where the job is submitted. If the job requires a Windows container and the first compute environment is `LINUX`, the compute environment is skipped and the next compute environment is checked until a Windows-based compute environment is found. Fargate Spot is not supported for `ARM64` and Windows-based containers on Fargate. A job queue will be blocked if a Fargate `ARM64` or Windows job is submitted to a job queue with only Fargate Spot compute environments. However, you can attach both `FARGATE` and `FARGATE_SPOT` compute environments to the same job queue.
- `secrets`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `secrets`**Description**: The secrets for the container. For more information, see [Specifying sensitive data](https://docs.aws.amazon.com/batch/latest/userguide/specifying-sensitive-data.html) in the Batch User Guide.
  - `name`**Type**: `STRING`**Provider name**: `name`**Description**: The name of the secret.
  - `value_from`**Type**: `STRING`**Provider name**: `valueFrom`**Description**: The secret to expose to the container. The supported values are either the full Amazon Resource Name (ARN) of the Secrets Manager secret or the full ARN of the parameter in the Amazon Web Services Systems Manager Parameter Store. If the Amazon Web Services Systems Manager Parameter Store parameter exists in the same Region as the job you're launching, then you can use either the full Amazon Resource Name (ARN) or name of the parameter. If the parameter exists in a different Region, then the full ARN must be specified.
- `ulimits`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `ulimits`**Description**: A list of `ulimits` to set in the container. This parameter maps to `Ulimits` in the [Create a container](https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.23/) and the `--ulimit` option to [docker run](https://docs.docker.com/engine/reference/run/). This parameter isn't applicable to jobs that are running on Fargate resources and shouldn't be provided.
  - `hard_limit`**Type**: `INT32`**Provider name**: `hardLimit`**Description**: The hard limit for the `ulimit` type.
  - `name`**Type**: `STRING`**Provider name**: `name`**Description**: The `type` of the `ulimit`. Valid values are: `core` | `cpu` | `data` | `fsize` | `locks` | `memlock` | `msgqueue` | `nice` | `nofile` | `nproc` | `rss` | `rtprio` | `rttime` | `sigpending` | `stack`.
  - `soft_limit`**Type**: `INT32`**Provider name**: `softLimit`**Description**: The soft limit for the `ulimit` type.
- `user`**Type**: `STRING`**Provider name**: `user`**Description**: The user name to use inside the container. This parameter maps to `User` in the [Create a container](https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.23/) and the `--user` option to [docker run](https://docs.docker.com/engine/reference/run/).
- `vcpus`**Type**: `INT32`**Provider name**: `vcpus`**Description**: This parameter is deprecated; use `resourceRequirements` to specify the vCPU requirements for the job definition. It's not supported for jobs running on Fargate resources. For jobs running on Amazon EC2 resources, it specifies the number of vCPUs reserved for the job. Each vCPU is equivalent to 1,024 CPU shares. This parameter maps to `CpuShares` in the [Create a container](https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.23/) and the `--cpu-shares` option to [docker run](https://docs.docker.com/engine/reference/run/). The number of vCPUs must be specified but can be specified in several places. You must specify it at least once for each node.
- `volumes`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `volumes`**Description**: A list of data volumes used in a job.
  - `efs_volume_configuration`**Type**: `STRUCT`**Provider name**: `efsVolumeConfiguration`**Description**: This parameter is specified when you're using an Amazon Elastic File System file system for job storage. Jobs that are running on Fargate resources must specify a `platformVersion` of at least `1.4.0`.
    - `authorization_config`**Type**: `STRUCT`**Provider name**: `authorizationConfig`**Description**: The authorization configuration details for the Amazon EFS file system.
      - `access_point_id`**Type**: `STRING`**Provider name**: `accessPointId`**Description**: The Amazon EFS access point ID to use. If an access point is specified, the root directory value specified in the `EFSVolumeConfiguration` must either be omitted or set to `/` which enforces the path set on the EFS access point. If an access point is used, transit encryption must be enabled in the `EFSVolumeConfiguration`. For more information, see [Working with Amazon EFS access points](https://docs.aws.amazon.com/efs/latest/ug/efs-access-points.html) in the Amazon Elastic File System User Guide.
      - `iam`**Type**: `STRING`**Provider name**: `iam`**Description**: Whether or not to use the Batch job IAM role defined in a job definition when mounting the Amazon EFS file system. If enabled, transit encryption must be enabled in the `EFSVolumeConfiguration`. If this parameter is omitted, the default value of `DISABLED` is used. For more information, see [Using Amazon EFS access points](https://docs.aws.amazon.com/batch/latest/userguide/efs-volumes.html#efs-volume-accesspoints) in the Batch User Guide. EFS IAM authorization requires that `TransitEncryption` be `ENABLED` and that a `JobRoleArn` is specified.
    - `file_system_id`**Type**: `STRING`**Provider name**: `fileSystemId`**Description**: The Amazon EFS file system ID to use.
    - `root_directory`**Type**: `STRING`**Provider name**: `rootDirectory`**Description**: The directory within the Amazon EFS file system to mount as the root directory inside the host. If this parameter is omitted, the root of the Amazon EFS volume is used instead. Specifying `/` has the same effect as omitting this parameter. The maximum length is 4,096 characters. If an EFS access point is specified in the `authorizationConfig`, the root directory parameter must either be omitted or set to `/`, which enforces the path set on the Amazon EFS access point.
    - `transit_encryption`**Type**: `STRING`**Provider name**: `transitEncryption`**Description**: Determines whether to enable encryption for Amazon EFS data in transit between the Amazon ECS host and the Amazon EFS server. Transit encryption must be enabled if Amazon EFS IAM authorization is used. If this parameter is omitted, the default value of `DISABLED` is used. For more information, see [Encrypting data in transit](https://docs.aws.amazon.com/efs/latest/ug/encryption-in-transit.html) in the Amazon Elastic File System User Guide.
    - `transit_encryption_port`**Type**: `INT32`**Provider name**: `transitEncryptionPort`**Description**: The port to use when sending encrypted data between the Amazon ECS host and the Amazon EFS server. If you don't specify a transit encryption port, it uses the port selection strategy that the Amazon EFS mount helper uses. The value must be between 0 and 65,535. For more information, see [EFS mount helper](https://docs.aws.amazon.com/efs/latest/ug/efs-mount-helper.html) in the Amazon Elastic File System User Guide.
  - `host`**Type**: `STRUCT`**Provider name**: `host`**Description**: The contents of the `host` parameter determine whether your data volume persists on the host container instance and where it's stored. If the host parameter is empty, then the Docker daemon assigns a host path for your data volume. However, the data isn't guaranteed to persist after the containers that are associated with it stop running. This parameter isn't applicable to jobs that are running on Fargate resources and shouldn't be provided.
    - `source_path`**Type**: `STRING`**Provider name**: `sourcePath`**Description**: The path on the host container instance that's presented to the container. If this parameter is empty, then the Docker daemon has assigned a host path for you. If this parameter contains a file location, then the data volume persists at the specified location on the host container instance until you delete it manually. If the source path location doesn't exist on the host container instance, the Docker daemon creates it. If the location does exist, the contents of the source path folder are exported. This parameter isn't applicable to jobs that run on Fargate resources. Don't provide this for these jobs.
  - `name`**Type**: `STRING`**Provider name**: `name`**Description**: The name of the volume. It can be up to 255 characters long. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_). This name is referenced in the `sourceVolume` parameter of container definition `mountPoints`.
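
To make the `containerProperties` fields above concrete, here is a minimal sketch of such a payload built as a Python dict, as you might pass it to Batch's `RegisterJobDefinition` API (for example via boto3's `register_job_definition`). The field names follow this page; the image, IDs, and ARNs are placeholder assumptions, not real resources.

```python
# Illustrative AWS Batch containerProperties payload (sketch only).
# Field names follow this page; all concrete values below (image, ARNs,
# file system and access point IDs) are placeholders, not real resources.
container_properties = {
    "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest",  # placeholder
    "command": ["python", "run_job.py"],
    # vcpus/memory at the top level are deprecated; use resourceRequirements.
    "resourceRequirements": [
        {"type": "VCPU", "value": "1"},
        {"type": "MEMORY", "value": "2048"},  # MiB
    ],
    "ulimits": [
        {"name": "nofile", "softLimit": 10240, "hardLimit": 10240},
    ],
    "secrets": [
        {
            "name": "DB_PASSWORD",
            # Full Secrets Manager ARN (placeholder account and secret name):
            "valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:db-password",
        },
    ],
    "volumes": [
        {
            "name": "shared-efs",  # referenced by a mount point's sourceVolume
            "efsVolumeConfiguration": {
                "fileSystemId": "fs-12345678",  # placeholder
                # With an access point, rootDirectory must be omitted or "/":
                "rootDirectory": "/",
                # EFS IAM authorization requires transit encryption:
                "transitEncryption": "ENABLED",
                "authorizationConfig": {
                    "accessPointId": "fsap-1234567890abcdef0",  # placeholder
                    "iam": "ENABLED",
                },
            },
        },
    ],
}
```

With boto3, this dict would be passed as the `containerProperties` argument of `register_job_definition`; the sketch only constructs the payload and makes no API calls.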

## `ecs_properties`{% #ecs_properties %}

**Type**: `STRUCT`**Provider name**: `ecsProperties`**Description**: An object that contains the properties for the Amazon ECS resources of a job. When `ecsProperties` is used in the job definition, it can't be used in addition to `containerProperties`, `eksProperties`, or `nodeProperties`.

- `task_properties`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `taskProperties`**Description**: An object that contains the properties for the Amazon ECS task definition of a job. This object is currently limited to one task element. However, the task element can run up to 10 containers.
  - `containers`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `containers`**Description**: This object is a list of containers.
    - `command`**Type**: `UNORDERED_LIST_STRING`**Provider name**: `command`**Description**: The command that's passed to the container. This parameter maps to `Cmd` in the [Create a container](https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.23/) and the `COMMAND` parameter to [docker run](https://docs.docker.com/engine/reference/run/). For more information, see [Dockerfile reference: CMD](https://docs.docker.com/engine/reference/builder/#cmd).
    - `depends_on`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `dependsOn`**Description**: A list of containers that this container depends on.
      - `condition`**Type**: `STRING`**Provider name**: `condition`**Description**: The dependency condition of the container. The following are the available conditions and their behavior:
        - `START` - This condition emulates the behavior of links and volumes today. It validates that a dependent container is started before permitting other containers to start.
        - `COMPLETE` - This condition validates that a dependent container runs to completion (exits) before permitting other containers to start. This can be useful for nonessential containers that run a script and then exit. This condition can't be set on an essential container.
        - `SUCCESS` - This condition is the same as `COMPLETE`, but it also requires that the container exits with a zero status. This condition can't be set on an essential container.
      - `container_name`**Type**: `STRING`**Provider name**: `containerName`**Description**: A unique identifier for the container.
    - `environment`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `environment`**Description**: The environment variables to pass to a container. This parameter maps to `Env` in the [Create a container](https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.23/) and the `--env` parameter to [docker run](https://docs.docker.com/engine/reference/run/). We don't recommend using plaintext environment variables for sensitive information, such as credential data. Environment variables cannot start with `AWS_BATCH`. This naming convention is reserved for variables that Batch sets.
      - `name`**Type**: `STRING`**Provider name**: `name`**Description**: The name of the key-value pair. For environment variables, this is the name of the environment variable.
      - `value`**Type**: `STRING`**Provider name**: `value`**Description**: The value of the key-value pair. For environment variables, this is the value of the environment variable.
    - `essential`**Type**: `BOOLEAN`**Provider name**: `essential`**Description**: If the `essential` parameter of a container is marked as `true`, and that container fails or stops for any reason, all other containers that are part of the task are stopped. If the `essential` parameter of a container is marked as `false`, its failure doesn't affect the rest of the containers in a task. If this parameter is omitted, a container is assumed to be essential. All jobs must have at least one essential container. If you have an application that's composed of multiple containers, group containers that are used for a common purpose into components, and separate the different components into multiple task definitions. For more information, see [Application Architecture](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/application_architecture.html) in the Amazon Elastic Container Service Developer Guide.
    - `image`**Type**: `STRING`**Provider name**: `image`**Description**: The image used to start a container. This string is passed directly to the Docker daemon. By default, images in the Docker Hub registry are available. Other repositories are specified with either `repository-url/image:tag` or `repository-url/image@digest`. Up to 255 letters (uppercase and lowercase), numbers, hyphens, underscores, colons, periods, forward slashes, and number signs are allowed. This parameter maps to `Image` in the [Create a container](https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.35/) and the `IMAGE` parameter of [docker run](https://docs.docker.com/engine/reference/run/#security-configuration).
    - `linux_parameters`**Type**: `STRUCT`**Provider name**: `linuxParameters`**Description**: Linux-specific modifications that are applied to the container, such as Linux kernel capabilities. For more information, see [KernelCapabilities](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_KernelCapabilities.html).
      - `devices`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `devices`**Description**: Any of the host devices to expose to the container. This parameter maps to `Devices` in the [Create a container](https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.23/) and the `--device` option to [docker run](https://docs.docker.com/engine/reference/run/). This parameter isn't applicable to jobs that are running on Fargate resources. Don't provide it for these jobs.
        - `container_path`**Type**: `STRING`**Provider name**: `containerPath`**Description**: The path inside the container that's used to expose the host device. By default, the `hostPath` value is used.
        - `host_path`**Type**: `STRING`**Provider name**: `hostPath`**Description**: The path for the device on the host container instance.
        - `permissions`**Type**: `UNORDERED_LIST_STRING`**Provider name**: `permissions`**Description**: The explicit permissions to provide to the container for the device. By default, the container has permissions for `read`, `write`, and `mknod` for the device.
      - `init_process_enabled`**Type**: `BOOLEAN`**Provider name**: `initProcessEnabled`**Description**: If true, run an `init` process inside the container that forwards signals and reaps processes. This parameter maps to the `--init` option to [docker run](https://docs.docker.com/engine/reference/run/). This parameter requires version 1.25 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version | grep "Server API version"`
      - `max_swap`**Type**: `INT32`**Provider name**: `maxSwap`**Description**: The total amount of swap memory (in MiB) a container can use. This parameter is translated to the `--memory-swap` option to [docker run](https://docs.docker.com/engine/reference/run/) where the value is the sum of the container memory plus the `maxSwap` value. For more information, see [`--memory-swap` details](https://docs.docker.com/config/containers/resource_constraints/#--memory-swap-details) in the Docker documentation. If a `maxSwap` value of `0` is specified, the container doesn't use swap. Accepted values are `0` or any positive integer. If the `maxSwap` parameter is omitted, the container doesn't use the swap configuration for the container instance that it's running on. A `maxSwap` value must be set for the `swappiness` parameter to be used. This parameter isn't applicable to jobs that are running on Fargate resources. Don't provide it for these jobs.
      - `shared_memory_size`**Type**: `INT32`**Provider name**: `sharedMemorySize`**Description**: The value for the size (in MiB) of the `/dev/shm` volume. This parameter maps to the `--shm-size` option to [docker run](https://docs.docker.com/engine/reference/run/). This parameter isn't applicable to jobs that are running on Fargate resources. Don't provide it for these jobs.
      - `swappiness`**Type**: `INT32`**Provider name**: `swappiness`**Description**: You can use this parameter to tune a container's memory swappiness behavior. A `swappiness` value of `0` causes swapping to not occur unless absolutely necessary. A `swappiness` value of `100` causes pages to be swapped aggressively. Valid values are whole numbers between `0` and `100`. If the `swappiness` parameter isn't specified, a default value of `60` is used. If a value isn't specified for `maxSwap`, then this parameter is ignored. If `maxSwap` is set to 0, the container doesn't use swap. This parameter maps to the `--memory-swappiness` option to [docker run](https://docs.docker.com/engine/reference/run/). Consider the following when you use a per-container swap configuration.
        - Swap space must be enabled and allocated on the container instance for the containers to use. By default, the Amazon ECS optimized AMIs don't have swap enabled. You must enable swap on the instance to use this feature. For more information, see [Instance store swap volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-store-swap-volumes.html) in the Amazon EC2 User Guide for Linux Instances or [How do I allocate memory to work as swap space in an Amazon EC2 instance by using a swap file?](http://aws.amazon.com/premiumsupport/knowledge-center/ec2-memory-swap-file/)
        - The swap space parameters are only supported for job definitions using EC2 resources.
        - If the `maxSwap` and `swappiness` parameters are omitted from a job definition, each container has a default `swappiness` value of 60. Moreover, the total swap usage is limited to two times the memory reservation of the container.

        This parameter isn't applicable to jobs that are running on Fargate resources. Don't provide it for these jobs.
      - `tmpfs`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `tmpfs`**Description**: The container path, mount options, and size (in MiB) of the `tmpfs` mount. This parameter maps to the `--tmpfs` option to [docker run](https://docs.docker.com/engine/reference/run/). This parameter isn't applicable to jobs that are running on Fargate resources. Don't provide this parameter for this resource type.
        - `container_path`**Type**: `STRING`**Provider name**: `containerPath`**Description**: The absolute file path in the container where the `tmpfs` volume is mounted.
        - `mount_options`**Type**: `UNORDERED_LIST_STRING`**Provider name**: `mountOptions`**Description**: The list of `tmpfs` volume mount options. Valid values: "`defaults`" | "`ro`" | "`rw`" | "`suid`" | "`nosuid`" | "`dev`" | "`nodev`" | "`exec`" | "`noexec`" | "`sync`" | "`async`" | "`dirsync`" | "`remount`" | "`mand`" | "`nomand`" | "`atime`" | "`noatime`" | "`diratime`" | "`nodiratime`" | "`bind`" | "`rbind`" | "`unbindable`" | "`runbindable`" | "`private`" | "`rprivate`" | "`shared`" | "`rshared`" | "`slave`" | "`rslave`" | "`relatime`" | "`norelatime`" | "`strictatime`" | "`nostrictatime`" | "`mode`" | "`uid`" | "`gid`" | "`nr_inodes`" | "`nr_blocks`" | "`mpol`"
        - `size`**Type**: `INT32`**Provider name**: `size`**Description**: The size (in MiB) of the `tmpfs` volume.
    - `log_configuration`**Type**: `STRUCT`**Provider name**: `logConfiguration`**Description**: The log configuration specification for the container. This parameter maps to `LogConfig` in the [Create a container](https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.35/) and the `--log-driver` option to [docker run](https://docs.docker.com/engine/reference/run/#security-configuration). By default, containers use the same logging driver that the Docker daemon uses. However, the container can use a different logging driver than the Docker daemon by specifying a log driver with this parameter in the container definition. To use a different logging driver for a container, the log system must be configured properly on the container instance (or on a different log server for remote logging options). For more information about the options for different supported log drivers, see [Configure logging drivers](https://docs.docker.com/engine/admin/logging/overview/) in the Docker documentation. Amazon ECS currently supports a subset of the logging drivers available to the Docker daemon (shown in the `LogConfiguration` data type). Additional log drivers may be available in future releases of the Amazon ECS container agent. This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version --format '{{.Server.APIVersion}}'`. The Amazon ECS container agent running on a container instance must register the logging drivers available on that instance with the `ECS_AVAILABLE_LOGGING_DRIVERS` environment variable before containers placed on that instance can use these log configuration options. For more information, see [Amazon ECS container agent configuration](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-agent-config.html) in the Amazon Elastic Container Service Developer Guide.
      - `log_driver`**Type**: `STRING`**Provider name**: `logDriver`**Description**: The log driver to use for the container. The valid values that are listed for this parameter are log drivers that the Amazon ECS container agent can communicate with by default. The supported log drivers are `awslogs`, `fluentd`, `gelf`, `json-file`, `journald`, `logentries`, `syslog`, and `splunk`. Jobs that are running on Fargate resources are restricted to the `awslogs` and `splunk` log drivers.
        {% dl %}
        
        {% dt %}
awslogs
        {% /dt %}

        {% dd %}
        Specifies the Amazon CloudWatch Logs logging driver. For more information, see [Using the awslogs log driver](https://docs.aws.amazon.com/batch/latest/userguide/using_awslogs.html) in the Batch User Guide and [Amazon CloudWatch Logs logging driver](https://docs.docker.com/config/containers/logging/awslogs/) in the Docker documentation.
        {% /dd %}

        {% dt %}
fluentd
        {% /dt %}

        {% dd %}
        Specifies the Fluentd logging driver. For more information including usage and options, see [Fluentd logging driver](https://docs.docker.com/config/containers/logging/fluentd/) in the Docker documentation.
        {% /dd %}

        {% dt %}
gelf
        {% /dt %}

        {% dd %}
        Specifies the Graylog Extended Format (GELF) logging driver. For more information including usage and options, see [Graylog Extended Format logging driver](https://docs.docker.com/config/containers/logging/gelf/) in the Docker documentation.
        {% /dd %}

        {% dt %}
journald
        {% /dt %}

        {% dd %}
        Specifies the journald logging driver. For more information including usage and options, see [Journald logging driver](https://docs.docker.com/config/containers/logging/journald/) in the Docker documentation.
        {% /dd %}

        {% dt %}
json-file
        {% /dt %}

        {% dd %}
        Specifies the JSON file logging driver. For more information including usage and options, see [JSON File logging driver](https://docs.docker.com/config/containers/logging/json-file/) in the Docker documentation.
        {% /dd %}

        {% dt %}
splunk
        {% /dt %}

        {% dd %}
        Specifies the Splunk logging driver. For more information including usage and options, see [Splunk logging driver](https://docs.docker.com/config/containers/logging/splunk/) in the Docker documentation.
        {% /dd %}

        {% dt %}
syslog
        {% /dt %}

        {% dd %}
        Specifies the syslog logging driver. For more information including usage and options, see [Syslog logging driver](https://docs.docker.com/config/containers/logging/syslog/) in the Docker documentation.
        {% /dd %}

        {% /dl %}
        If you have a custom driver that's not listed earlier that you want to work with the Amazon ECS container agent, you can fork the Amazon ECS container agent project that's [available on GitHub](https://github.com/aws/amazon-ecs-agent) and customize it to work with that driver. We encourage you to submit pull requests for changes that you want to have included. However, Amazon Web Services doesn't currently support running modified copies of this software. This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version | grep "Server API version"`
      - `options`**Type**: `MAP_STRING_STRING`**Provider name**: `options`**Description**: The configuration options to send to the log driver. This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version | grep "Server API version"`
      - `secret_options`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `secretOptions`**Description**: The secrets to pass to the log configuration. For more information, see [Specifying sensitive data](https://docs.aws.amazon.com/batch/latest/userguide/specifying-sensitive-data.html) in the Batch User Guide.
        - `name`**Type**: `STRING`**Provider name**: `name`**Description**: The name of the secret.
        - `value_from`**Type**: `STRING`**Provider name**: `valueFrom`**Description**: The secret to expose to the container. The supported values are either the full Amazon Resource Name (ARN) of the Secrets Manager secret or the full ARN of the parameter in the Amazon Web Services Systems Manager Parameter Store. If the Amazon Web Services Systems Manager Parameter Store parameter exists in the same Region as the job you're launching, then you can use either the full Amazon Resource Name (ARN) or name of the parameter. If the parameter exists in a different Region, then the full ARN must be specified.
    - `mount_points`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `mountPoints`**Description**: The mount points for data volumes in your container. This parameter maps to `Volumes` in the [Create a container](https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.35/) and the `--volume` option to [docker run](https://docs.docker.com/engine/reference/run/#security-configuration). Windows containers can mount whole directories on the same drive as `$env:ProgramData`. Windows containers can't mount directories on a different drive, and mount points can't be across drives.
      - `container_path`**Type**: `STRING`**Provider name**: `containerPath`**Description**: The path on the container where the host volume is mounted.
      - `read_only`**Type**: `BOOLEAN`**Provider name**: `readOnly`**Description**: If this value is `true`, the container has read-only access to the volume. Otherwise, the container can write to the volume. The default value is `false`.
      - `source_volume`**Type**: `STRING`**Provider name**: `sourceVolume`**Description**: The name of the volume to mount.
    - `name`**Type**: `STRING`**Provider name**: `name`**Description**: The name of a container. The name can be used as a unique identifier to target your `dependsOn` and `Overrides` objects.
    - `privileged`**Type**: `BOOLEAN`**Provider name**: `privileged`**Description**: When this parameter is `true`, the container is given elevated privileges on the host container instance (similar to the `root` user). This parameter maps to `Privileged` in the [Create a container](https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.35/) and the `--privileged` option to [docker run](https://docs.docker.com/engine/reference/run/#security-configuration). This parameter is not supported for Windows containers or tasks run on Fargate.
    - `readonly_root_filesystem`**Type**: `BOOLEAN`**Provider name**: `readonlyRootFilesystem`**Description**: When this parameter is true, the container is given read-only access to its root file system. This parameter maps to `ReadonlyRootfs` in the [Create a container](https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.35/) and the `--read-only` option to [docker run](https://docs.docker.com/engine/reference/run/#security-configuration). This parameter is not supported for Windows containers.
    - `repository_credentials`**Type**: `STRUCT`**Provider name**: `repositoryCredentials`**Description**: The private repository authentication credentials to use.
      - `credentials_parameter`**Type**: `STRING`**Provider name**: `credentialsParameter`**Description**: The Amazon Resource Name (ARN) of the secret containing the private repository credentials.
    - `resource_requirements`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `resourceRequirements`**Description**: The type and amount of a resource to assign to a container. The supported resources include `GPU`, `MEMORY`, and `VCPU`.
      - `type`**Type**: `STRING`**Provider name**: `type`**Description**: The type of resource to assign to a container. The supported resources include `GPU`, `MEMORY`, and `VCPU`.
      - `value`**Type**: `STRING`**Provider name**: `value`**Description**: The quantity of the specified resource to reserve for the container. The values vary based on the `type` specified.
        {% dl %}
        
        {% dt %}
type="GPU"
        {% /dt %}

        {% dd %}
        The number of physical GPUs to reserve for the container. Make sure that the number of GPUs reserved for all containers in a job doesn't exceed the number of available GPUs on the compute resource that the job is launched on. GPUs aren't available for jobs that are running on Fargate resources.
        {% /dd %}

        {% dt %}
type="MEMORY"
        {% /dt %}

        {% dd %}
        The memory hard limit (in MiB) presented to the container. This parameter is supported for jobs that are running on Amazon EC2 resources. If your container attempts to exceed the memory specified, the container is terminated. This parameter maps to `Memory` in the [Create a container](https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.23/) and the `--memory` option to [docker run](https://docs.docker.com/engine/reference/run/). You must specify at least 4 MiB of memory for a job. This is required but can be specified in several places for multi-node parallel (MNP) jobs; it must be specified for each node at least once. If you're trying to maximize your resource utilization by providing your jobs as much memory as possible for a particular instance type, see [Memory management](https://docs.aws.amazon.com/batch/latest/userguide/memory-management.html) in the Batch User Guide. For jobs that are running on Fargate resources, `value` is the hard limit (in MiB) and must match one of the supported values, and the `VCPU` value must be one of the values supported for that memory value.
        {% dl %}
        
        {% dt %}
value = 512
        {% /dt %}

        {% dd %}
        `VCPU` = 0.25
                {% /dd %}

        {% dt %}
value = 1024
        {% /dt %}

        {% dd %}
        `VCPU` = 0.25 or 0.5
                {% /dd %}

        {% dt %}
value = 2048
        {% /dt %}

        {% dd %}
        `VCPU` = 0.25, 0.5, or 1
                {% /dd %}

        {% dt %}
value = 3072
        {% /dt %}

        {% dd %}
        `VCPU` = 0.5 or 1
                {% /dd %}

        {% dt %}
value = 4096
        {% /dt %}

        {% dd %}
        `VCPU` = 0.5, 1, or 2
                {% /dd %}

        {% dt %}
value = 5120, 6144, or 7168
        {% /dt %}

        {% dd %}
        `VCPU` = 1 or 2
                {% /dd %}

        {% dt %}
value = 8192
        {% /dt %}

        {% dd %}
        `VCPU` = 1, 2, or 4
                {% /dd %}

        {% dt %}
value = 9216, 10240, 11264, 12288, 13312, 14336, or 15360
        {% /dt %}

        {% dd %}
        `VCPU` = 2 or 4
                {% /dd %}

        {% dt %}
value = 16384
        {% /dt %}

        {% dd %}
        `VCPU` = 2, 4, or 8
                {% /dd %}

        {% dt %}
value = 17408, 18432, 19456, 21504, 22528, 23552, 25600, 26624, 27648, 29696, or 30720
        {% /dt %}

        {% dd %}
        `VCPU` = 4
                {% /dd %}

        {% dt %}
value = 20480, 24576, or 28672
        {% /dt %}

        {% dd %}
        `VCPU` = 4 or 8
                {% /dd %}

        {% dt %}
value = 36864, 45056, 53248, or 61440
        {% /dt %}

        {% dd %}
        `VCPU` = 8
                {% /dd %}

        {% dt %}
value = 32768, 40960, 49152, or 57344
        {% /dt %}

        {% dd %}
        `VCPU` = 8 or 16
                {% /dd %}

        {% dt %}
value = 65536, 73728, 81920, 90112, 98304, 106496, 114688, or 122880
        {% /dt %}

        {% dd %}
        `VCPU` = 16
                {% /dd %}

                {% /dl %}

                {% /dd %}

        {% dt %}
type="VCPU"
        {% /dt %}

        {% dd %}
        The number of vCPUs reserved for the container. This parameter maps to `CpuShares` in the [Create a container](https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.23/) and the `--cpu-shares` option to [docker run](https://docs.docker.com/engine/reference/run/). Each vCPU is equivalent to 1,024 CPU shares. For Amazon EC2 resources, you must specify at least one vCPU. This parameter is required but can be specified in several places; it must be specified for each node at least once. The default for the Fargate On-Demand vCPU resource count quota is 6 vCPUs. For more information about Fargate quotas, see [Fargate quotas](https://docs.aws.amazon.com/general/latest/gr/ecs-service.html#service-quotas-fargate) in the Amazon Web Services General Reference. For jobs that are running on Fargate resources, `value` must match one of the supported values, and the `MEMORY` value must be one of the values supported for that `VCPU` value. The supported values are 0.25, 0.5, 1, 2, 4, 8, and 16.
        {% dl %}
        
        {% dt %}
value = 0.25
        {% /dt %}

        {% dd %}
        `MEMORY` = 512, 1024, or 2048
                {% /dd %}

        {% dt %}
value = 0.5
        {% /dt %}

        {% dd %}
        `MEMORY` = 1024, 2048, 3072, or 4096
                {% /dd %}

        {% dt %}
value = 1
        {% /dt %}

        {% dd %}
        `MEMORY` = 2048, 3072, 4096, 5120, 6144, 7168, or 8192
                {% /dd %}

        {% dt %}
value = 2
        {% /dt %}

        {% dd %}
        `MEMORY` = 4096, 5120, 6144, 7168, 8192, 9216, 10240, 11264, 12288, 13312, 14336, 15360, or 16384
                {% /dd %}

        {% dt %}
value = 4
        {% /dt %}

        {% dd %}
        `MEMORY` = 8192, 9216, 10240, 11264, 12288, 13312, 14336, 15360, 16384, 17408, 18432, 19456, 20480, 21504, 22528, 23552, 24576, 25600, 26624, 27648, 28672, 29696, or 30720
                {% /dd %}

        {% dt %}
value = 8
        {% /dt %}

        {% dd %}
        `MEMORY` = 16384, 20480, 24576, 28672, 32768, 36864, 40960, 45056, 49152, 53248, 57344, or 61440
                {% /dd %}

        {% dt %}
value = 16
        {% /dt %}

        {% dd %}
        `MEMORY` = 32768, 40960, 49152, 57344, 65536, 73728, 81920, 90112, 98304, 106496, 114688, or 122880
                {% /dd %}

                {% /dl %}

                {% /dd %}

                {% /dl %}
    - `secrets`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `secrets`**Description**: The secrets to pass to the container. For more information, see [Specifying Sensitive Data](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data.html) in the Amazon Elastic Container Service Developer Guide.
      - `name`**Type**: `STRING`**Provider name**: `name`**Description**: The name of the secret.
      - `value_from`**Type**: `STRING`**Provider name**: `valueFrom`**Description**: The secret to expose to the container. The supported values are either the full Amazon Resource Name (ARN) of the Secrets Manager secret or the full ARN of the parameter in the Amazon Web Services Systems Manager Parameter Store. If the Amazon Web Services Systems Manager Parameter Store parameter exists in the same Region as the job you're launching, then you can use either the full Amazon Resource Name (ARN) or name of the parameter. If the parameter exists in a different Region, then the full ARN must be specified.
    - `ulimits`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `ulimits`**Description**: A list of `ulimits` to set in the container. If a `ulimit` value is specified in a task definition, it overrides the default values set by Docker. This parameter maps to `Ulimits` in the [Create a container](https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.35/) and the `--ulimit` option to [docker run](https://docs.docker.com/engine/reference/run/#security-configuration). Amazon ECS tasks hosted on Fargate use the default resource limit values set by the operating system, with the exception of the `nofile` resource limit parameter, which Fargate overrides. The `nofile` resource limit sets a restriction on the number of open files that a container can use. The default `nofile` soft limit is `1024` and the default hard limit is `65535`. This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version --format '{{.Server.APIVersion}}'`. This parameter is not supported for Windows containers.
      - `hard_limit`**Type**: `INT32`**Provider name**: `hardLimit`**Description**: The hard limit for the `ulimit` type.
      - `name`**Type**: `STRING`**Provider name**: `name`**Description**: The `type` of the `ulimit`. Valid values are: `core` | `cpu` | `data` | `fsize` | `locks` | `memlock` | `msgqueue` | `nice` | `nofile` | `nproc` | `rss` | `rtprio` | `rttime` | `sigpending` | `stack`.
      - `soft_limit`**Type**: `INT32`**Provider name**: `softLimit`**Description**: The soft limit for the `ulimit` type.
    - `user`**Type**: `STRING`**Provider name**: `user`**Description**: The user to use inside the container. This parameter maps to `User` in the [Create a container](https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.23/) and the `--user` option to [docker run](https://docs.docker.com/engine/reference/run/). When running tasks using the `host` network mode, don't run containers using the root user (UID 0). We recommend using a non-root user for better security. You can specify the `user` using the following formats. If specifying a UID or GID, you must specify it as a positive integer. This parameter is not supported for Windows containers.
      - `user`
      - `user:group`
      - `uid`
      - `uid:gid`
      - `user:gid`
      - `uid:group`
  - `ephemeral_storage`**Type**: `STRUCT`**Provider name**: `ephemeralStorage`**Description**: The amount of ephemeral storage to allocate for the task. This parameter is used to expand the total amount of ephemeral storage available, beyond the default amount, for tasks hosted on Fargate.
    - `size_in_gib`**Type**: `INT32`**Provider name**: `sizeInGiB`**Description**: The total amount, in GiB, of ephemeral storage to set for the task. The minimum supported value is `21` GiB and the maximum supported value is `200` GiB.
  - `execution_role_arn`**Type**: `STRING`**Provider name**: `executionRoleArn`**Description**: The Amazon Resource Name (ARN) of the execution role that Batch can assume. For jobs that run on Fargate resources, you must provide an execution role. For more information, see [Batch execution IAM role](https://docs.aws.amazon.com/batch/latest/userguide/execution-IAM-role.html) in the Batch User Guide.
  - `ipc_mode`**Type**: `STRING`**Provider name**: `ipcMode`**Description**: The IPC resource namespace to use for the containers in the task. The valid values are `host`, `task`, or `none`. If `host` is specified, all containers within the tasks that specified the `host` IPC mode on the same container instance share the same IPC resources with the host Amazon EC2 instance. If `task` is specified, all containers within the specified `task` share the same IPC resources. If `none` is specified, the IPC resources within the containers of a task are private, and are not shared with other containers in a task or on the container instance. If no value is specified, then the IPC resource namespace sharing depends on the Docker daemon setting on the container instance. For more information, see [IPC settings](https://docs.docker.com/engine/reference/run/#ipc-settings---ipc) in the Docker run reference.
  - `network_configuration`**Type**: `STRUCT`**Provider name**: `networkConfiguration`**Description**: The network configuration for jobs that are running on Fargate resources. Jobs that are running on Amazon EC2 resources must not specify this parameter.
    - `assign_public_ip`**Type**: `STRING`**Provider name**: `assignPublicIp`**Description**: Indicates whether the job has a public IP address. For a job that's running on Fargate resources in a private subnet to send outbound traffic to the internet (for example, to pull container images), the private subnet requires a NAT gateway be attached to route requests to the internet. For more information, see [Amazon ECS task networking](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking.html) in the Amazon Elastic Container Service Developer Guide. The default value is "`DISABLED`".
  - `pid_mode`**Type**: `STRING`**Provider name**: `pidMode`**Description**: The process namespace to use for the containers in the task. The valid values are `host` or `task`. For example, monitoring sidecars might need `pidMode` to access information about other containers running in the same task. If `host` is specified, all containers within the tasks that specified the `host` PID mode on the same container instance share the process namespace with the host Amazon EC2 instance. If `task` is specified, all containers within the specified task share the same process namespace. If no value is specified, the default is a private namespace for each container. For more information, see [PID settings](https://docs.docker.com/engine/reference/run/#pid-settings---pid) in the Docker run reference.
  - `platform_version`**Type**: `STRING`**Provider name**: `platformVersion`**Description**: The Fargate platform version where the jobs are running. A platform version is specified only for jobs that are running on Fargate resources. If one isn't specified, the `LATEST` platform version is used by default. This uses a recent, approved version of the Fargate platform for compute resources. For more information, see [Fargate platform versions](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/platform_versions.html) in the Amazon Elastic Container Service Developer Guide.
  - `runtime_platform`**Type**: `STRUCT`**Provider name**: `runtimePlatform`**Description**: An object that represents the compute environment architecture for Batch jobs on Fargate.
    - `cpu_architecture`**Type**: `STRING`**Provider name**: `cpuArchitecture`**Description**: The vCPU architecture. The default value is `X86_64`. Valid values are `X86_64` and `ARM64`. This parameter must be set to `X86_64` for Windows containers. Fargate Spot is not supported for `ARM64` and Windows-based containers on Fargate. A job queue will be blocked if a Fargate `ARM64` or Windows job is submitted to a job queue with only Fargate Spot compute environments. However, you can attach both `FARGATE` and `FARGATE_SPOT` compute environments to the same job queue.
    - `operating_system_family`**Type**: `STRING`**Provider name**: `operatingSystemFamily`**Description**: The operating system for the compute environment. Valid values are: `LINUX` (default), `WINDOWS_SERVER_2019_CORE`, `WINDOWS_SERVER_2019_FULL`, `WINDOWS_SERVER_2022_CORE`, and `WINDOWS_SERVER_2022_FULL`. The following parameters can't be set for Windows containers: `linuxParameters`, `privileged`, `user`, `ulimits`, `readonlyRootFilesystem`, and `efsVolumeConfiguration`. The Batch Scheduler checks the compute environments that are attached to the job queue before registering a task definition with Fargate. In this scenario, the job queue is where the job is submitted. If the job requires a Windows container and the first compute environment is `LINUX`, the compute environment is skipped and the next compute environment is checked until a Windows-based compute environment is found. Fargate Spot is not supported for `ARM64` and Windows-based containers on Fargate. A job queue will be blocked if a Fargate `ARM64` or Windows job is submitted to a job queue with only Fargate Spot compute environments. However, you can attach both `FARGATE` and `FARGATE_SPOT` compute environments to the same job queue.
  - `task_role_arn`**Type**: `STRING`**Provider name**: `taskRoleArn`**Description**: The Amazon Resource Name (ARN) that's associated with the Amazon ECS task. This object is comparable to [ContainerProperties:jobRoleArn](https://docs.aws.amazon.com/batch/latest/APIReference/API_ContainerProperties.html).
  - `volumes`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `volumes`**Description**: A list of volumes that are associated with the job.
    - `efs_volume_configuration`**Type**: `STRUCT`**Provider name**: `efsVolumeConfiguration`**Description**: This parameter is specified when you're using an Amazon Elastic File System file system for job storage. Jobs that are running on Fargate resources must specify a `platformVersion` of at least `1.4.0`.
      - `authorization_config`**Type**: `STRUCT`**Provider name**: `authorizationConfig`**Description**: The authorization configuration details for the Amazon EFS file system.
        - `access_point_id`**Type**: `STRING`**Provider name**: `accessPointId`**Description**: The Amazon EFS access point ID to use. If an access point is specified, the root directory value specified in the `EFSVolumeConfiguration` must either be omitted or set to `/` which enforces the path set on the EFS access point. If an access point is used, transit encryption must be enabled in the `EFSVolumeConfiguration`. For more information, see [Working with Amazon EFS access points](https://docs.aws.amazon.com/efs/latest/ug/efs-access-points.html) in the Amazon Elastic File System User Guide.
        - `iam`**Type**: `STRING`**Provider name**: `iam`**Description**: Whether or not to use the Batch job IAM role defined in a job definition when mounting the Amazon EFS file system. If enabled, transit encryption must be enabled in the `EFSVolumeConfiguration`. If this parameter is omitted, the default value of `DISABLED` is used. For more information, see [Using Amazon EFS access points](https://docs.aws.amazon.com/batch/latest/userguide/efs-volumes.html#efs-volume-accesspoints) in the Batch User Guide. EFS IAM authorization requires that `TransitEncryption` be `ENABLED` and that a `JobRoleArn` is specified.
      - `file_system_id`**Type**: `STRING`**Provider name**: `fileSystemId`**Description**: The Amazon EFS file system ID to use.
      - `root_directory`**Type**: `STRING`**Provider name**: `rootDirectory`**Description**: The directory within the Amazon EFS file system to mount as the root directory inside the host. If this parameter is omitted, the root of the Amazon EFS volume is used instead. Specifying `/` has the same effect as omitting this parameter. The maximum length is 4,096 characters. If an EFS access point is specified in the `authorizationConfig`, the root directory parameter must either be omitted or set to `/`, which enforces the path set on the Amazon EFS access point.
      - `transit_encryption`**Type**: `STRING`**Provider name**: `transitEncryption`**Description**: Determines whether to enable encryption for Amazon EFS data in transit between the Amazon ECS host and the Amazon EFS server. Transit encryption must be enabled if Amazon EFS IAM authorization is used. If this parameter is omitted, the default value of `DISABLED` is used. For more information, see [Encrypting data in transit](https://docs.aws.amazon.com/efs/latest/ug/encryption-in-transit.html) in the Amazon Elastic File System User Guide.
      - `transit_encryption_port`**Type**: `INT32`**Provider name**: `transitEncryptionPort`**Description**: The port to use when sending encrypted data between the Amazon ECS host and the Amazon EFS server. If you don't specify a transit encryption port, it uses the port selection strategy that the Amazon EFS mount helper uses. The value must be between 0 and 65,535. For more information, see [EFS mount helper](https://docs.aws.amazon.com/efs/latest/ug/efs-mount-helper.html) in the Amazon Elastic File System User Guide.
    - `host`**Type**: `STRUCT`**Provider name**: `host`**Description**: The contents of the `host` parameter determine whether your data volume persists on the host container instance and where it's stored. If the host parameter is empty, then the Docker daemon assigns a host path for your data volume. However, the data isn't guaranteed to persist after the containers that are associated with it stop running. This parameter isn't applicable to jobs that are running on Fargate resources and shouldn't be provided.
      - `source_path`**Type**: `STRING`**Provider name**: `sourcePath`**Description**: The path on the host container instance that's presented to the container. If this parameter is empty, then the Docker daemon has assigned a host path for you. If this parameter contains a file location, then the data volume persists at the specified location on the host container instance until you delete it manually. If the source path location doesn't exist on the host container instance, the Docker daemon creates it. If the location does exist, the contents of the source path folder are exported. This parameter isn't applicable to jobs that run on Fargate resources. Don't provide this for these jobs.
    - `name`**Type**: `STRING`**Provider name**: `name`**Description**: The name of the volume. It can be up to 255 characters long. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_). This name is referenced in the `sourceVolume` parameter of container definition `mountPoints`.
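
The Fargate `MEMORY`/`VCPU` pairings listed under `resource_requirements` above form a fixed lookup table. As an illustrative sketch (not part of any AWS SDK), the supported combinations can be transcribed and checked like this:

```python
# Valid Fargate vCPU -> memory (MiB) combinations, transcribed from the
# resourceRequirements table above. Illustrative helper, not an AWS API.
FARGATE_VCPU_MEMORY = {
    0.25: {512, 1024, 2048},
    0.5:  {1024 * i for i in range(1, 5)},    # 1024-4096, 1 GiB steps
    1:    {1024 * i for i in range(2, 9)},    # 2048-8192, 1 GiB steps
    2:    {1024 * i for i in range(4, 17)},   # 4096-16384, 1 GiB steps
    4:    {1024 * i for i in range(8, 31)},   # 8192-30720, 1 GiB steps
    8:    {4096 * i for i in range(4, 16)},   # 16384-61440, 4 GiB steps
    16:   {8192 * i for i in range(4, 16)},   # 32768-122880, 8 GiB steps
}

def is_valid_fargate_pair(vcpu: float, memory_mib: int) -> bool:
    """Return True if the VCPU/MEMORY resourceRequirements pair is supported."""
    return memory_mib in FARGATE_VCPU_MEMORY.get(vcpu, set())
```

For example, 3072 MiB pairs only with 0.5 or 1 vCPU, matching the table above.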

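Taken together, the `containerProperties` fields above map onto the request body of the Batch `RegisterJobDefinition` API. A minimal, hypothetical Fargate payload might look like the following sketch; every name, ARN, and ID is a placeholder:

```python
# Hypothetical RegisterJobDefinition payload for a Fargate job. With boto3,
# this dict would be passed as keyword arguments to
# client("batch").register_job_definition(**job_definition).
job_definition = {
    "jobDefinitionName": "example-fargate-job",   # placeholder
    "type": "container",
    "platformCapabilities": ["FARGATE"],
    "containerProperties": {
        "image": "public.ecr.aws/docker/library/busybox:latest",
        "command": ["echo", "hello"],
        "resourceRequirements": [
            {"type": "VCPU", "value": "0.5"},
            {"type": "MEMORY", "value": "1024"},  # must pair with the VCPU value
        ],
        # Fargate jobs require an execution role (placeholder ARN).
        "executionRoleArn": "arn:aws:iam::123456789012:role/example-exec-role",
        "networkConfiguration": {"assignPublicIp": "ENABLED"},
        "ephemeralStorage": {"sizeInGiB": 21},    # supported range: 21-200 GiB
        "secrets": [
            {"name": "DB_PASSWORD",
             "valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:example"},
        ],
        "volumes": [
            {"name": "efs-data",
             "efsVolumeConfiguration": {
                 "fileSystemId": "fs-12345678",       # placeholder
                 "transitEncryption": "ENABLED",      # required for EFS IAM auth
             }},
        ],
    },
}
```
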
## `eks_properties`{% #eks_properties %}

**Type**: `STRUCT`**Provider name**: `eksProperties`**Description**: An object with properties that are specific to Amazon EKS-based jobs. When `eksProperties` is used in the job definition, it can't be used in addition to `containerProperties`, `ecsProperties`, or `nodeProperties`.
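
The `limits`/`requests` consistency rules described under `resources` below (`memory` and `nvidia.com/gpu` must be equal when given in both maps, while `cpu` in `limits` must be at least the `requests` value) can be sketched as a hypothetical validation helper:

```python
def check_eks_resources(limits: dict, requests: dict) -> list:
    """Return a list of rule violations for EKS container resources.

    Illustrative only; key names follow the `resources` field description.
    """
    problems = []
    # memory and nvidia.com/gpu must match exactly when set in both maps.
    for key in ("memory", "nvidia.com/gpu"):
        if key in limits and key in requests and limits[key] != requests[key]:
            problems.append(f"{key}: limits must equal requests")
    # cpu (vCPU count, even multiples of 0.25) may exceed the request.
    if "cpu" in limits and "cpu" in requests:
        if float(limits["cpu"]) < float(requests["cpu"]):
            problems.append("cpu: limits must be at least requests")
    return problems
```
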

- `pod_properties`**Type**: `STRUCT`**Provider name**: `podProperties`**Description**: The properties for the Kubernetes pod resources of a job.
  - `containers`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `containers`**Description**: The properties of the container that's used on the Amazon EKS pod. This object is limited to 10 elements.
    - `args`**Type**: `UNORDERED_LIST_STRING`**Provider name**: `args`**Description**: An array of arguments to the entrypoint. If this isn't specified, the `CMD` of the container image is used. This corresponds to the `args` member in the [Entrypoint](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#entrypoint) portion of the [Pod](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/) in Kubernetes. Environment variable references are expanded using the container's environment. If the referenced environment variable doesn't exist, the reference in the command isn't changed. For example, if the reference is to "`$(NAME1)`" and the `NAME1` environment variable doesn't exist, the command string will remain "`$(NAME1)`." `$$` is replaced with `$`, and the resulting string isn't expanded. For example, `$$(VAR_NAME)` is passed as `$(VAR_NAME)` whether or not the `VAR_NAME` environment variable exists. For more information, see [Dockerfile reference: CMD](https://docs.docker.com/engine/reference/builder/#cmd) and [Define a command and arguments for a pod](https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/) in the Kubernetes documentation.
    - `command`**Type**: `UNORDERED_LIST_STRING`**Provider name**: `command`**Description**: The entrypoint for the container. This isn't run within a shell. If this isn't specified, the `ENTRYPOINT` of the container image is used. Environment variable references are expanded using the container's environment. If the referenced environment variable doesn't exist, the reference in the command isn't changed. For example, if the reference is to "`$(NAME1)`" and the `NAME1` environment variable doesn't exist, the command string will remain "`$(NAME1)`." `$$` is replaced with `$` and the resulting string isn't expanded. For example, `$$(VAR_NAME)` will be passed as `$(VAR_NAME)` whether or not the `VAR_NAME` environment variable exists. The entrypoint can't be updated. For more information, see [ENTRYPOINT](https://docs.docker.com/engine/reference/builder/#entrypoint) in the Dockerfile reference and [Define a command and arguments for a container](https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/) and [Entrypoint](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#entrypoint) in the Kubernetes documentation.
    - `env`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `env`**Description**: The environment variables to pass to a container. Environment variables cannot start with "`AWS_BATCH`". This naming convention is reserved for variables that Batch sets.
      - `name`**Type**: `STRING`**Provider name**: `name`**Description**: The name of the environment variable.
      - `value`**Type**: `STRING`**Provider name**: `value`**Description**: The value of the environment variable.
    - `image`**Type**: `STRING`**Provider name**: `image`**Description**: The Docker image used to start the container.
    - `image_pull_policy`**Type**: `STRING`**Provider name**: `imagePullPolicy`**Description**: The image pull policy for the container. Supported values are `Always`, `IfNotPresent`, and `Never`. This parameter defaults to `IfNotPresent`. However, if the `:latest` tag is specified, it defaults to `Always`. For more information, see [Updating images](https://kubernetes.io/docs/concepts/containers/images/#updating-images) in the Kubernetes documentation.
    - `name`**Type**: `STRING`**Provider name**: `name`**Description**: The name of the container. If the name isn't specified, the default name "`Default`" is used. Each container in a pod must have a unique name.
    - `resources`**Type**: `STRUCT`**Provider name**: `resources`**Description**: The type and amount of resources to assign to a container. The supported resources include `memory`, `cpu`, and `nvidia.com/gpu`. For more information, see [Resource management for pods and containers](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/) in the Kubernetes documentation.
      - `limits`**Type**: `MAP_STRING_STRING`**Provider name**: `limits`**Description**: The type and quantity of the resources to reserve for the container. The values vary based on the `name` that's specified. Resources can be requested using either the `limits` or the `requests` objects.
        {% dl %}
        
        {% dt %}
memory
        {% /dt %}

        {% dd %}
        The memory hard limit (in MiB) for the container, using whole integers, with a "Mi" suffix. If your container attempts to exceed the memory specified, the container is terminated. You must specify at least 4 MiB of memory for a job. `memory` can be specified in `limits`, `requests`, or both. If `memory` is specified in both places, then the value that's specified in `limits` must be equal to the value that's specified in `requests`. To maximize your resource utilization, provide your jobs with as much memory as possible for the specific instance type that you are using. To learn how, see [Memory management](https://docs.aws.amazon.com/batch/latest/userguide/memory-management.html) in the Batch User Guide.
                {% /dd %}

        {% dt %}
cpu
        {% /dt %}

        {% dd %}
        The number of CPUs that are reserved for the container. Values must be an even multiple of `0.25`. `cpu` can be specified in `limits`, `requests`, or both. If `cpu` is specified in both places, then the value that's specified in `limits` must be at least as large as the value that's specified in `requests`.
                {% /dd %}

        {% dt %}
nvidia.com/gpu
        {% /dt %}

        {% dd %}
        The number of GPUs that are reserved for the container. Values must be a whole integer. `nvidia.com/gpu` can be specified in `limits`, `requests`, or both. If `nvidia.com/gpu` is specified in both places, then the value that's specified in `limits` must be equal to the value that's specified in `requests`.
                {% /dd %}

                {% /dl %}
      - `requests`**Type**: `MAP_STRING_STRING`**Provider name**: `requests`**Description**: The type and quantity of the resources to request for the container. The values vary based on the `name` that's specified. Resources can be requested by using either the `limits` or the `requests` objects.
        {% dl %}
        
        {% dt %}
memory
        {% /dt %}

        {% dd %}
        The memory hard limit (in MiB) for the container, using whole integers, with a "Mi" suffix. If your container attempts to exceed the memory specified, the container is terminated. You must specify at least 4 MiB of memory for a job. `memory` can be specified in `limits`, `requests`, or both. If `memory` is specified in both, then the value that's specified in `limits` must be equal to the value that's specified in `requests`. If you're trying to maximize your resource utilization by providing your jobs as much memory as possible for a particular instance type, see [Memory management](https://docs.aws.amazon.com/batch/latest/userguide/memory-management.html) in the Batch User Guide.
                {% /dd %}

        {% dt %}
cpu
        {% /dt %}

        {% dd %}
        The number of CPUs that are reserved for the container. Values must be an even multiple of `0.25`. `cpu` can be specified in `limits`, `requests`, or both. If `cpu` is specified in both, then the value that's specified in `limits` must be at least as large as the value that's specified in `requests`.
                {% /dd %}

        {% dt %}
nvidia.com/gpu
        {% /dt %}

        {% dd %}
        The number of GPUs that are reserved for the container. Values must be a whole integer. `nvidia.com/gpu` can be specified in `limits`, `requests`, or both. If `nvidia.com/gpu` is specified in both, then the value that's specified in `limits` must be equal to the value that's specified in `requests`.
                {% /dd %}

                {% /dl %}
    - `security_context`**Type**: `STRUCT`**Provider name**: `securityContext`**Description**: The security context for a job. For more information, see [Configure a security context for a pod or container](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) in the Kubernetes documentation.
      - `allow_privilege_escalation`**Type**: `BOOLEAN`**Provider name**: `allowPrivilegeEscalation`**Description**: Whether or not a container or a Kubernetes pod is allowed to gain more privileges than its parent process. The default value is `false`.
      - `privileged`**Type**: `BOOLEAN`**Provider name**: `privileged`**Description**: When this parameter is `true`, the container is given elevated permissions on the host container instance. The level of permissions are similar to the `root` user permissions. The default value is `false`. This parameter maps to `privileged` policy in the [Privileged pod security policies](https://kubernetes.io/docs/concepts/security/pod-security-policy/#privileged) in the Kubernetes documentation.
      - `read_only_root_filesystem`**Type**: `BOOLEAN`**Provider name**: `readOnlyRootFilesystem`**Description**: When this parameter is `true`, the container is given read-only access to its root file system. The default value is `false`. This parameter maps to `ReadOnlyRootFilesystem` policy in the [Volumes and file systems pod security policies](https://kubernetes.io/docs/concepts/security/pod-security-policy/#volumes-and-file-systems) in the Kubernetes documentation.
      - `run_as_group`**Type**: `INT64`**Provider name**: `runAsGroup`**Description**: When this parameter is specified, the container is run as the specified group ID (`gid`). If this parameter isn't specified, the default is the group that's specified in the image metadata. This parameter maps to `RunAsGroup` and `MustRunAs` policy in the [Users and groups pod security policies](https://kubernetes.io/docs/concepts/security/pod-security-policy/#users-and-groups) in the Kubernetes documentation.
      - `run_as_non_root`**Type**: `BOOLEAN`**Provider name**: `runAsNonRoot`**Description**: When this parameter is specified, the container is run as a user with a `uid` other than 0. If this parameter isn't specified, no such rule is enforced. This parameter maps to `RunAsUser` and `MustRunAsNonRoot` policy in the [Users and groups pod security policies](https://kubernetes.io/docs/concepts/security/pod-security-policy/#users-and-groups) in the Kubernetes documentation.
      - `run_as_user`**Type**: `INT64`**Provider name**: `runAsUser`**Description**: When this parameter is specified, the container is run as the specified user ID (`uid`). If this parameter isn't specified, the default is the user that's specified in the image metadata. This parameter maps to `RunAsUser` and `MustRunAs` policy in the [Users and groups pod security policies](https://kubernetes.io/docs/concepts/security/pod-security-policy/#users-and-groups) in the Kubernetes documentation.
    - `volume_mounts`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `volumeMounts`**Description**: The volume mounts for the container. Batch supports `emptyDir`, `hostPath`, and `secret` volume types. For more information about volumes and volume mounts in Kubernetes, see [Volumes](https://kubernetes.io/docs/concepts/storage/volumes/) in the Kubernetes documentation.
      - `mount_path`**Type**: `STRING`**Provider name**: `mountPath`**Description**: The path on the container where the volume is mounted.
      - `name`**Type**: `STRING`**Provider name**: `name`**Description**: The name the volume mount. This must match the name of one of the volumes in the pod.
      - `read_only`**Type**: `BOOLEAN`**Provider name**: `readOnly`**Description**: If this value is `true`, the container has read-only access to the volume. Otherwise, the container can write to the volume. The default value is `false`.
      - `sub_path`**Type**: `STRING`**Provider name**: `subPath`**Description**: A sub-path inside the referenced volume instead of its root.
  - `dns_policy`**Type**: `STRING`**Provider name**: `dnsPolicy`**Description**: The DNS policy for the pod. The default value is `ClusterFirst`. If the `hostNetwork` parameter is not specified, the default is `ClusterFirstWithHostNet`. `ClusterFirst` indicates that any DNS query that does not match the configured cluster domain suffix is forwarded to the upstream nameserver inherited from the node. For more information, see [Pod's DNS policy](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy) in the Kubernetes documentation. Valid values: `Default` | `ClusterFirst` | `ClusterFirstWithHostNet`
  - `host_network`**Type**: `BOOLEAN`**Provider name**: `hostNetwork`**Description**: Indicates if the pod uses the host's network IP address. The default value is `true`. Setting this to `false` enables the Kubernetes pod networking model. Most Batch workloads are egress-only and don't require the overhead of IP allocation for each pod for incoming connections. For more information, see [Host namespaces](https://kubernetes.io/docs/concepts/security/pod-security-policy/#host-namespaces) and [Pod networking](https://kubernetes.io/docs/concepts/workloads/pods/#pod-networking) in the Kubernetes documentation.
  - `image_pull_secrets`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `imagePullSecrets`**Description**: References a Kubernetes secret resource. It holds a list of secrets. These secrets grant access to pull images from a private registry. `ImagePullSecret$name` is required when this object is used.
    - `name`**Type**: `STRING`**Provider name**: `name`**Description**: Provides a unique identifier for the `ImagePullSecret`. This object is required when `EksPodProperties$imagePullSecrets` is used.
  - `init_containers`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `initContainers`**Description**: These containers run before application containers, always run to completion, and must complete successfully before the next container starts. These containers are registered with the Amazon EKS Connector agent, which persists the registration information in the Kubernetes backend data store. For more information, see [Init Containers](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/) in the Kubernetes documentation. This object is limited to 10 elements.
    - `args`**Type**: `UNORDERED_LIST_STRING`**Provider name**: `args`**Description**: An array of arguments to the entrypoint. If this isn't specified, the `CMD` of the container image is used. This corresponds to the `args` member in the [Entrypoint](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#entrypoint) portion of the [Pod](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/) in Kubernetes. Environment variable references are expanded using the container's environment. If the referenced environment variable doesn't exist, the reference in the command isn't changed. For example, if the reference is to "`$(NAME1)`" and the `NAME1` environment variable doesn't exist, the command string will remain "`$(NAME1)`." `$$` is replaced with `$`, and the resulting string isn't expanded. For example, `$$(VAR_NAME)` is passed as `$(VAR_NAME)` whether or not the `VAR_NAME` environment variable exists. For more information, see [Dockerfile reference: CMD](https://docs.docker.com/engine/reference/builder/#cmd) and [Define a command and arguments for a pod](https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/) in the Kubernetes documentation.
    - `command`**Type**: `UNORDERED_LIST_STRING`**Provider name**: `command`**Description**: The entrypoint for the container. This isn't run within a shell. If this isn't specified, the `ENTRYPOINT` of the container image is used. Environment variable references are expanded using the container's environment. If the referenced environment variable doesn't exist, the reference in the command isn't changed. For example, if the reference is to "`$(NAME1)`" and the `NAME1` environment variable doesn't exist, the command string will remain "`$(NAME1)`." `$$` is replaced with `$` and the resulting string isn't expanded. For example, `$$(VAR_NAME)` will be passed as `$(VAR_NAME)` whether or not the `VAR_NAME` environment variable exists. The entrypoint can't be updated. For more information, see [ENTRYPOINT](https://docs.docker.com/engine/reference/builder/#entrypoint) in the Dockerfile reference and [Define a command and arguments for a container](https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/) and [Entrypoint](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#entrypoint) in the Kubernetes documentation.
    - `env`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `env`**Description**: The environment variables to pass to a container. Environment variables cannot start with "`AWS_BATCH`". This naming convention is reserved for variables that Batch sets.
      - `name`**Type**: `STRING`**Provider name**: `name`**Description**: The name of the environment variable.
      - `value`**Type**: `STRING`**Provider name**: `value`**Description**: The value of the environment variable.
    - `image`**Type**: `STRING`**Provider name**: `image`**Description**: The Docker image used to start the container.
    - `image_pull_policy`**Type**: `STRING`**Provider name**: `imagePullPolicy`**Description**: The image pull policy for the container. Supported values are `Always`, `IfNotPresent`, and `Never`. This parameter defaults to `IfNotPresent`. However, if the `:latest` tag is specified, it defaults to `Always`. For more information, see [Updating images](https://kubernetes.io/docs/concepts/containers/images/#updating-images) in the Kubernetes documentation.
    - `name`**Type**: `STRING`**Provider name**: `name`**Description**: The name of the container. If the name isn't specified, the default name "`Default`" is used. Each container in a pod must have a unique name.
    - `resources`**Type**: `STRUCT`**Provider name**: `resources`**Description**: The type and amount of resources to assign to a container. The supported resources include `memory`, `cpu`, and `nvidia.com/gpu`. For more information, see [Resource management for pods and containers](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/) in the Kubernetes documentation.
      - `limits`**Type**: `MAP_STRING_STRING`**Provider name**: `limits`**Description**: The type and quantity of the resources to reserve for the container. The values vary based on the `name` that's specified. Resources can be requested using either the `limits` or the `requests` objects.
        {% dl %}
        
        {% dt %}
memory
        {% /dt %}

        {% dd %}
        The memory hard limit (in MiB) for the container, using whole integers, with a "Mi" suffix. If your container attempts to exceed the memory specified, the container is terminated. You must specify at least 4 MiB of memory for a job. `memory` can be specified in `limits`, `requests`, or both. If `memory` is specified in both places, then the value that's specified in `limits` must be equal to the value that's specified in `requests`. To maximize your resource utilization, provide your jobs with as much memory as possible for the specific instance type that you are using. To learn how, see [Memory management](https://docs.aws.amazon.com/batch/latest/userguide/memory-management.html) in the Batch User Guide.
                {% /dd %}

        {% dt %}
cpu
        {% /dt %}

        {% dd %}
        The number of CPUs that are reserved for the container. Values must be an even multiple of `0.25`. `cpu` can be specified in `limits`, `requests`, or both. If `cpu` is specified in both places, then the value that's specified in `limits` must be at least as large as the value that's specified in `requests`.
                {% /dd %}

        {% dt %}
nvidia.com/gpu
        {% /dt %}

        {% dd %}
        The number of GPUs that are reserved for the container. Values must be a whole integer. `nvidia.com/gpu` can be specified in `limits`, `requests`, or both. If `nvidia.com/gpu` is specified in both places, then the value that's specified in `limits` must be equal to the value that's specified in `requests`.
                {% /dd %}

                {% /dl %}
      - `requests`**Type**: `MAP_STRING_STRING`**Provider name**: `requests`**Description**: The type and quantity of the resources to request for the container. The values vary based on the `name` that's specified. Resources can be requested by using either the `limits` or the `requests` objects.
        {% dl %}
        
        {% dt %}
memory
        {% /dt %}

        {% dd %}
        The memory hard limit (in MiB) for the container, using whole integers, with a "Mi" suffix. If your container attempts to exceed the memory specified, the container is terminated. You must specify at least 4 MiB of memory for a job. `memory` can be specified in `limits`, `requests`, or both. If `memory` is specified in both, then the value that's specified in `limits` must be equal to the value that's specified in `requests`. If you're trying to maximize your resource utilization by providing your jobs as much memory as possible for a particular instance type, see [Memory management](https://docs.aws.amazon.com/batch/latest/userguide/memory-management.html) in the Batch User Guide.
                {% /dd %}

        {% dt %}
cpu
        {% /dt %}

        {% dd %}
        The number of CPUs that are reserved for the container. Values must be an even multiple of `0.25`. `cpu` can be specified in `limits`, `requests`, or both. If `cpu` is specified in both, then the value that's specified in `limits` must be at least as large as the value that's specified in `requests`.
                {% /dd %}

        {% dt %}
nvidia.com/gpu
        {% /dt %}

        {% dd %}
        The number of GPUs that are reserved for the container. Values must be a whole integer. `nvidia.com/gpu` can be specified in `limits`, `requests`, or both. If `nvidia.com/gpu` is specified in both, then the value that's specified in `limits` must be equal to the value that's specified in `requests`.
                {% /dd %}

                {% /dl %}
    - `security_context`**Type**: `STRUCT`**Provider name**: `securityContext`**Description**: The security context for a job. For more information, see [Configure a security context for a pod or container](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) in the Kubernetes documentation.
      - `allow_privilege_escalation`**Type**: `BOOLEAN`**Provider name**: `allowPrivilegeEscalation`**Description**: Whether or not a container or a Kubernetes pod is allowed to gain more privileges than its parent process. The default value is `false`.
      - `privileged`**Type**: `BOOLEAN`**Provider name**: `privileged`**Description**: When this parameter is `true`, the container is given elevated permissions on the host container instance. The level of permissions is similar to the `root` user permissions. The default value is `false`. This parameter maps to `privileged` policy in the [Privileged pod security policies](https://kubernetes.io/docs/concepts/security/pod-security-policy/#privileged) in the Kubernetes documentation.
      - `read_only_root_filesystem`**Type**: `BOOLEAN`**Provider name**: `readOnlyRootFilesystem`**Description**: When this parameter is `true`, the container is given read-only access to its root file system. The default value is `false`. This parameter maps to `ReadOnlyRootFilesystem` policy in the [Volumes and file systems pod security policies](https://kubernetes.io/docs/concepts/security/pod-security-policy/#volumes-and-file-systems) in the Kubernetes documentation.
      - `run_as_group`**Type**: `INT64`**Provider name**: `runAsGroup`**Description**: When this parameter is specified, the container is run as the specified group ID (`gid`). If this parameter isn't specified, the default is the group that's specified in the image metadata. This parameter maps to `RunAsGroup` and `MustRunAs` policy in the [Users and groups pod security policies](https://kubernetes.io/docs/concepts/security/pod-security-policy/#users-and-groups) in the Kubernetes documentation.
      - `run_as_non_root`**Type**: `BOOLEAN`**Provider name**: `runAsNonRoot`**Description**: When this parameter is specified, the container is run as a user with a `uid` other than 0. If this parameter isn't specified, no such rule is enforced. This parameter maps to `RunAsUser` and `MustRunAsNonRoot` policy in the [Users and groups pod security policies](https://kubernetes.io/docs/concepts/security/pod-security-policy/#users-and-groups) in the Kubernetes documentation.
      - `run_as_user`**Type**: `INT64`**Provider name**: `runAsUser`**Description**: When this parameter is specified, the container is run as the specified user ID (`uid`). If this parameter isn't specified, the default is the user that's specified in the image metadata. This parameter maps to `RunAsUser` and `MustRunAs` policy in the [Users and groups pod security policies](https://kubernetes.io/docs/concepts/security/pod-security-policy/#users-and-groups) in the Kubernetes documentation.
    - `volume_mounts`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `volumeMounts`**Description**: The volume mounts for the container. Batch supports `emptyDir`, `hostPath`, and `secret` volume types. For more information about volumes and volume mounts in Kubernetes, see [Volumes](https://kubernetes.io/docs/concepts/storage/volumes/) in the Kubernetes documentation.
      - `mount_path`**Type**: `STRING`**Provider name**: `mountPath`**Description**: The path on the container where the volume is mounted.
      - `name`**Type**: `STRING`**Provider name**: `name`**Description**: The name of the volume mount. This must match the name of one of the volumes in the pod.
      - `read_only`**Type**: `BOOLEAN`**Provider name**: `readOnly`**Description**: If this value is `true`, the container has read-only access to the volume. Otherwise, the container can write to the volume. The default value is `false`.
      - `sub_path`**Type**: `STRING`**Provider name**: `subPath`**Description**: A sub-path inside the referenced volume instead of its root.
  - `metadata`**Type**: `STRUCT`**Provider name**: `metadata`**Description**: Metadata about the Kubernetes pod. For more information, see [Understanding Kubernetes Objects](https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/) in the Kubernetes documentation.
    - `annotations`**Type**: `MAP_STRING_STRING`**Provider name**: `annotations`**Description**: Key-value pairs used to attach arbitrary, non-identifying metadata to Kubernetes objects. Valid annotation keys have two segments: an optional prefix and a name, separated by a slash (/).
      - The prefix is optional and must be 253 characters or less. If specified, the prefix must be a DNS subdomain: a series of DNS labels separated by dots (.), and it must end with a slash (/).
      - The name segment is required and must be 63 characters or less. It can include alphanumeric characters ([a-z0-9A-Z]), dashes (-), underscores (_), and dots (.), but must begin and end with an alphanumeric character.
Annotation values must be 255 characters or less. Annotations can be added or modified at any time. Each resource can have multiple annotations.
    - `labels`**Type**: `MAP_STRING_STRING`**Provider name**: `labels`**Description**: Key-value pairs used to identify, sort, and organize Kubernetes resources. Can contain up to 63 uppercase letters, lowercase letters, numbers, hyphens (-), and underscores (_). Labels can be added or modified at any time. Each resource can have multiple labels, but each key must be unique for a given object.
    - `namespace`**Type**: `STRING`**Provider name**: `namespace`**Description**: The namespace of the Amazon EKS cluster. In Kubernetes, namespaces provide a mechanism for isolating groups of resources within a single cluster. Names of resources need to be unique within a namespace, but not across namespaces. Batch places Batch Job pods in this namespace. If this field is provided, the value can't be empty or null. It must meet the following requirements:
      - 1-63 characters long
      - Can't be set to `default`
      - Can't start with `kube`
      - Must match the following regular expression: `^[a-z0-9]([-a-z0-9]*[a-z0-9])?$`
For more information, see [Namespaces](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) in the Kubernetes documentation. This namespace can be different from the `kubernetesNamespace` set in the compute environment's `EksConfiguration`, but must have identical role-based access control (RBAC) roles as the compute environment's `kubernetesNamespace`. For multi-node parallel jobs, the same value must be provided across all the node ranges.
  - `service_account_name`**Type**: `STRING`**Provider name**: `serviceAccountName`**Description**: The name of the service account that's used to run the pod. For more information, see [Kubernetes service accounts](https://docs.aws.amazon.com/eks/latest/userguide/service-accounts.html) and [Configure a Kubernetes service account to assume an IAM role](https://docs.aws.amazon.com/eks/latest/userguide/associate-service-account-role.html) in the Amazon EKS User Guide and [Configure service accounts for pods](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/) in the Kubernetes documentation.
  - `share_process_namespace`**Type**: `BOOLEAN`**Provider name**: `shareProcessNamespace`**Description**: Indicates if the processes in a container are shared, or visible, to other containers in the same pod. For more information, see [Share Process Namespace between Containers in a Pod](https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/).
  - `volumes`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `volumes`**Description**: Specifies the volumes for a job definition that uses Amazon EKS resources.
    - `empty_dir`**Type**: `STRUCT`**Provider name**: `emptyDir`**Description**: Specifies the configuration of a Kubernetes `emptyDir` volume. For more information, see [emptyDir](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) in the Kubernetes documentation.
      - `medium`**Type**: `STRING`**Provider name**: `medium`**Description**: The medium to store the volume. The default value is an empty string, which uses the storage of the node.
        {% dl %}
        
        {% dt %}
""
        {% /dt %}

        {% dd %}
        (Default) Use the disk storage of the node.
                {% /dd %}

        {% dt %}
"Memory"
        {% /dt %}

        {% dd %}
        Use the `tmpfs` volume that's backed by the RAM of the node. Contents of the volume are lost when the node reboots, and any storage on the volume counts against the container's memory limit.
                {% /dd %}

                {% /dl %}
      - `size_limit`**Type**: `STRING`**Provider name**: `sizeLimit`**Description**: The maximum size of the volume. By default, there's no maximum size defined.
    - `host_path`**Type**: `STRUCT`**Provider name**: `hostPath`**Description**: Specifies the configuration of a Kubernetes `hostPath` volume. For more information, see [hostPath](https://kubernetes.io/docs/concepts/storage/volumes/#hostpath) in the Kubernetes documentation.
      - `path`**Type**: `STRING`**Provider name**: `path`**Description**: The path of the file or directory on the host to mount into containers on the pod.
    - `name`**Type**: `STRING`**Provider name**: `name`**Description**: The name of the volume. The name must be allowed as a DNS subdomain name. For more information, see [DNS subdomain names](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names) in the Kubernetes documentation.
    - `persistent_volume_claim`**Type**: `STRUCT`**Provider name**: `persistentVolumeClaim`**Description**: Specifies the configuration of a Kubernetes `persistentVolumeClaim` bound to a `persistentVolume`. For more information, see [Persistent Volume Claims](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) in the Kubernetes documentation.
      - `claim_name`**Type**: `STRING`**Provider name**: `claimName`**Description**: The name of the `persistentVolumeClaim` bound to a `persistentVolume`. For more information, see [Persistent Volume Claims](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) in the Kubernetes documentation.
      - `read_only`**Type**: `BOOLEAN`**Provider name**: `readOnly`**Description**: An optional boolean value indicating whether the mount is read-only. The default is `false`. For more information, see [Read Only Mounts](https://kubernetes.io/docs/concepts/storage/volumes/#read-only-mounts) in the Kubernetes documentation.
    - `secret`**Type**: `STRUCT`**Provider name**: `secret`**Description**: Specifies the configuration of a Kubernetes `secret` volume. For more information, see [secret](https://kubernetes.io/docs/concepts/storage/volumes/#secret) in the Kubernetes documentation.
      - `optional`**Type**: `BOOLEAN`**Provider name**: `optional`**Description**: Specifies whether the secret or the secret's keys must be defined.
      - `secret_name`**Type**: `STRING`**Provider name**: `secretName`**Description**: The name of the secret. The name must be allowed as a DNS subdomain name. For more information, see [DNS subdomain names](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names) in the Kubernetes documentation.
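
As a worked example of the container resource rules above (`memory` and `nvidia.com/gpu` must be equal across `limits` and `requests`, `cpu` limits must be at least as large as requests and a multiple of `0.25`) and the pod namespace naming rules, here is a minimal, hypothetical Python validator. The helper names and the reconstructed namespace pattern are illustrative assumptions, not part of the Batch API:

```python
import re

# Hypothetical validator for the EKS container resource rules described above;
# the helper names are illustrative, not part of the Batch API.
CPU_STEP = 0.25

def validate_resources(limits, requests):
    """Return a list of rule violations for a limits/requests pair."""
    errors = []
    # memory and nvidia.com/gpu: if given in both places, the values must match.
    for key in ("memory", "nvidia.com/gpu"):
        if key in limits and key in requests and limits[key] != requests[key]:
            errors.append(f"{key}: limits must equal requests")
    # cpu: values must be an even multiple of 0.25, and the limit must be
    # at least as large as the request.
    for label, res in (("limits", limits), ("requests", requests)):
        if "cpu" in res and (float(res["cpu"]) / CPU_STEP) % 1 != 0:
            errors.append(f"cpu in {label} must be a multiple of {CPU_STEP}")
    if "cpu" in limits and "cpu" in requests:
        if float(limits["cpu"]) < float(requests["cpu"]):
            errors.append("cpu: limits must be at least as large as requests")
    return errors

# Namespace rules: 1-63 characters, not "default", no "kube" prefix, and
# (assumed reconstruction) lowercase alphanumerics with interior hyphens.
NAMESPACE_RE = re.compile(r"^[a-z0-9]([-a-z0-9]*[a-z0-9])?$")

def valid_namespace(ns):
    return (1 <= len(ns) <= 63
            and ns != "default"
            and not ns.startswith("kube")
            and NAMESPACE_RE.fullmatch(ns) is not None)
```

A job-definition tool could run checks like these locally before calling the Batch API, so violations surface before registration rather than at submission time.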

## `job_definition_arn`{% #job_definition_arn %}

**Type**: `STRING`**Provider name**: `jobDefinitionArn`**Description**: The Amazon Resource Name (ARN) for the job definition.

## `job_definition_name`{% #job_definition_name %}

**Type**: `STRING`**Provider name**: `jobDefinitionName`**Description**: The name of the job definition.

## `node_properties`{% #node_properties %}

**Type**: `STRUCT`**Provider name**: `nodeProperties`**Description**: An object with properties that are specific to multi-node parallel jobs. When `nodeProperties` is used in the job definition, it can't be used in addition to `containerProperties`, `ecsProperties`, or `eksProperties`. If the job runs on Fargate resources, don't specify `nodeProperties`. Use `containerProperties` instead.

- `main_node`**Type**: `INT32`**Provider name**: `mainNode`**Description**: Specifies the node index for the main node of a multi-node parallel job. This node index value must be less than the number of nodes.
- `node_range_properties`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `nodeRangeProperties`**Description**: A list of node ranges and their properties that are associated with a multi-node parallel job.
  - `consumable_resource_properties`**Type**: `STRUCT`**Provider name**: `consumableResourceProperties`**Description**: Contains a list of consumable resources required by a job.
    - `consumable_resource_list`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `consumableResourceList`**Description**: The list of consumable resources required by a job.
      - `consumable_resource`**Type**: `STRING`**Provider name**: `consumableResource`**Description**: The name or ARN of the consumable resource.
      - `quantity`**Type**: `INT64`**Provider name**: `quantity`**Description**: The quantity of the consumable resource that is needed.
  - `container`**Type**: `STRUCT`**Provider name**: `container`**Description**: The container details for the node range.
    - `command`**Type**: `UNORDERED_LIST_STRING`**Provider name**: `command`**Description**: The command that's passed to the container. This parameter maps to `Cmd` in the [Create a container](https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.23/) and the `COMMAND` parameter to [docker run](https://docs.docker.com/engine/reference/run/). For more information, see [https://docs.docker.com/engine/reference/builder/#cmd](https://docs.docker.com/engine/reference/builder/#cmd).
    - `environment`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `environment`**Description**: The environment variables to pass to a container. This parameter maps to `Env` in the [Create a container](https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.23/) and the `--env` option to [docker run](https://docs.docker.com/engine/reference/run/). We don't recommend using plaintext environment variables for sensitive information, such as credential data. Environment variables cannot start with "`AWS_BATCH`". This naming convention is reserved for variables that Batch sets.
      - `name`**Type**: `STRING`**Provider name**: `name`**Description**: The name of the key-value pair. For environment variables, this is the name of the environment variable.
      - `value`**Type**: `STRING`**Provider name**: `value`**Description**: The value of the key-value pair. For environment variables, this is the value of the environment variable.
    - `ephemeral_storage`**Type**: `STRUCT`**Provider name**: `ephemeralStorage`**Description**: The amount of ephemeral storage to allocate for the task. This parameter is used to expand the total amount of ephemeral storage available, beyond the default amount, for tasks hosted on Fargate.
      - `size_in_gib`**Type**: `INT32`**Provider name**: `sizeInGiB`**Description**: The total amount, in GiB, of ephemeral storage to set for the task. The minimum supported value is `21` GiB and the maximum supported value is `200` GiB.
    - `execution_role_arn`**Type**: `STRING`**Provider name**: `executionRoleArn`**Description**: The Amazon Resource Name (ARN) of the execution role that Batch can assume. For jobs that run on Fargate resources, you must provide an execution role. For more information, see [Batch execution IAM role](https://docs.aws.amazon.com/batch/latest/userguide/execution-IAM-role.html) in the Batch User Guide.
    - `fargate_platform_configuration`**Type**: `STRUCT`**Provider name**: `fargatePlatformConfiguration`**Description**: The platform configuration for jobs that are running on Fargate resources. Jobs that are running on Amazon EC2 resources must not specify this parameter.
      - `platform_version`**Type**: `STRING`**Provider name**: `platformVersion`**Description**: The Fargate platform version where the jobs are running. A platform version is specified only for jobs that are running on Fargate resources. If one isn't specified, the `LATEST` platform version is used by default. This uses a recent, approved version of the Fargate platform for compute resources. For more information, see [Fargate platform versions](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/platform_versions.html) in the Amazon Elastic Container Service Developer Guide.
    - `image`**Type**: `STRING`**Provider name**: `image`**Description**: Required. The image used to start a container. This string is passed directly to the Docker daemon. Images in the Docker Hub registry are available by default. Other repositories are specified with `repository-url/image:tag`. It can be up to 255 characters long. It can contain uppercase and lowercase letters, numbers, hyphens (-), underscores (_), colons (:), periods (.), forward slashes (/), and number signs (#). This parameter maps to `Image` in the [Create a container](https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.23/) and the `IMAGE` parameter of [docker run](https://docs.docker.com/engine/reference/run/). Docker image architecture must match the processor architecture of the compute resources that they're scheduled on. For example, ARM-based Docker images can only run on ARM-based compute resources.
      - Images in Amazon ECR Public repositories use the full `registry/repository[:tag]` or `registry/repository[@digest]` naming conventions. For example, `public.ecr.aws/registry_alias/my-web-app:latest`.
      - Images in Amazon ECR repositories use the full registry and repository URI (for example, `123456789012.dkr.ecr.<region-name>.amazonaws.com/<repository-name>`).
      - Images in official repositories on Docker Hub use a single name (for example, `ubuntu` or `mongo`).
      - Images in other repositories on Docker Hub are qualified with an organization name (for example, `amazon/amazon-ecs-agent`).
      - Images in other online repositories are qualified further by a domain name (for example, `quay.io/assemblyline/ubuntu`).
    - `instance_type`**Type**: `STRING`**Provider name**: `instanceType`**Description**: The instance type to use for a multi-node parallel job. All node groups in a multi-node parallel job must use the same instance type. This parameter isn't applicable to single-node container jobs or jobs that run on Fargate resources, and shouldn't be provided.
    - `job_role_arn`**Type**: `STRING`**Provider name**: `jobRoleArn`**Description**: The Amazon Resource Name (ARN) of the IAM role that the container can assume for Amazon Web Services permissions. For more information, see [IAM roles for tasks](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html) in the Amazon Elastic Container Service Developer Guide.
    - `linux_parameters`**Type**: `STRUCT`**Provider name**: `linuxParameters`**Description**: Linux-specific modifications that are applied to the container, such as details for device mappings.
      - `devices`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `devices`**Description**: Any of the host devices to expose to the container. This parameter maps to `Devices` in the [Create a container](https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.23/) and the `--device` option to [docker run](https://docs.docker.com/engine/reference/run/). This parameter isn't applicable to jobs that are running on Fargate resources. Don't provide it for these jobs.
        - `container_path`**Type**: `STRING`**Provider name**: `containerPath`**Description**: The path inside the container that's used to expose the host device. By default, the `hostPath` value is used.
        - `host_path`**Type**: `STRING`**Provider name**: `hostPath`**Description**: The path for the device on the host container instance.
        - `permissions`**Type**: `UNORDERED_LIST_STRING`**Provider name**: `permissions`**Description**: The explicit permissions to provide to the container for the device. By default, the container has permissions for `read`, `write`, and `mknod` for the device.
      - `init_process_enabled`**Type**: `BOOLEAN`**Provider name**: `initProcessEnabled`**Description**: If true, run an `init` process inside the container that forwards signals and reaps processes. This parameter maps to the `--init` option to [docker run](https://docs.docker.com/engine/reference/run/). This parameter requires version 1.25 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version | grep "Server API version"`
      - `max_swap`**Type**: `INT32`**Provider name**: `maxSwap`**Description**: The total amount of swap memory (in MiB) a container can use. This parameter is translated to the `--memory-swap` option to [docker run](https://docs.docker.com/engine/reference/run/), where the value is the sum of the container memory plus the `maxSwap` value. For more information, see [`--memory-swap` details](https://docs.docker.com/config/containers/resource_constraints/#--memory-swap-details) in the Docker documentation. If a `maxSwap` value of `0` is specified, the container doesn't use swap. Accepted values are `0` or any positive integer. If the `maxSwap` parameter is omitted, the container doesn't use the swap configuration for the container instance that it's running on. A `maxSwap` value must be set for the `swappiness` parameter to be used. This parameter isn't applicable to jobs that are running on Fargate resources. Don't provide it for these jobs.
      - `shared_memory_size`**Type**: `INT32`**Provider name**: `sharedMemorySize`**Description**: The value for the size (in MiB) of the `/dev/shm` volume. This parameter maps to the `–shm-size` option to [docker run](https://docs.docker.com/engine/reference/run/).This parameter isn't applicable to jobs that are running on Fargate resources. Don't provide it for these jobs.
      - `swappiness`**Type**: `INT32`**Provider name**: `swappiness`**Description**: You can use this parameter to tune a container's memory swappiness behavior. A `swappiness` value of `0` causes swapping to not occur unless absolutely necessary. A `swappiness` value of `100` causes pages to be swapped aggressively. Valid values are whole numbers between `0` and `100`. If the `swappiness` parameter isn't specified, a default value of `60` is used. If a value isn't specified for `maxSwap`, then this parameter is ignored. If `maxSwap` is set to 0, the container doesn't use swap. This parameter maps to the `–memory-swappiness` option to [docker run](https://docs.docker.com/engine/reference/run/). Consider the following when you use a per-container swap configuration.
        - Swap space must be enabled and allocated on the container instance for the containers to use. By default, the Amazon ECS optimized AMIs don't have swap enabled. You must enable swap on the instance to use this feature. For more information, see [Instance store swap volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-store-swap-volumes.html) in the Amazon EC2 User Guide for Linux Instances or [How do I allocate memory to work as swap space in an Amazon EC2 instance by using a swap file?](http://aws.amazon.com/premiumsupport/knowledge-center/ec2-memory-swap-file/)
        - The swap space parameters are only supported for job definitions using EC2 resources.
        - If the `maxSwap` and `swappiness` parameters are omitted from a job definition, each container has a default `swappiness` value of 60. Moreover, the total swap usage is limited to two times the memory reservation of the container.
This parameter isn't applicable to jobs that are running on Fargate resources. Don't provide it for these jobs.
      - `tmpfs`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `tmpfs`**Description**: The container path, mount options, and size (in MiB) of the `tmpfs` mount. This parameter maps to the `--tmpfs` option to [docker run](https://docs.docker.com/engine/reference/run/). This parameter isn't applicable to jobs that are running on Fargate resources. Don't provide this parameter for this resource type.
        - `container_path`**Type**: `STRING`**Provider name**: `containerPath`**Description**: The absolute file path in the container where the `tmpfs` volume is mounted.
        - `mount_options`**Type**: `UNORDERED_LIST_STRING`**Provider name**: `mountOptions`**Description**: The list of `tmpfs` volume mount options. Valid values: `defaults` | `ro` | `rw` | `suid` | `nosuid` | `dev` | `nodev` | `exec` | `noexec` | `sync` | `async` | `dirsync` | `remount` | `mand` | `nomand` | `atime` | `noatime` | `diratime` | `nodiratime` | `bind` | `rbind` | `unbindable` | `runbindable` | `private` | `rprivate` | `shared` | `rshared` | `slave` | `rslave` | `relatime` | `norelatime` | `strictatime` | `nostrictatime` | `mode` | `uid` | `gid` | `nr_inodes` | `nr_blocks` | `mpol`
        - `size`**Type**: `INT32`**Provider name**: `size`**Description**: The size (in MiB) of the `tmpfs` volume.
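The Linux-specific settings above come together in the `linuxParameters` block of a job definition. A minimal sketch in Python, written as the dictionary you might pass under `containerProperties` to boto3's `register_job_definition`; the device path, sizes, and mount options are illustrative assumptions, not recommendations:

```python
# Hypothetical linuxParameters fragment for an EC2-based job definition.
# Swap settings (maxSwap/swappiness) and tmpfs are EC2-only; omit them on Fargate.
linux_parameters = {
    "initProcessEnabled": True,   # run an init process to reap zombies (--init)
    "sharedMemorySize": 256,      # /dev/shm size in MiB (--shm-size)
    "maxSwap": 2048,              # total swap a container may use, in MiB
    "swappiness": 60,             # 0-100; ignored unless maxSwap is set
    "devices": [
        {
            "hostPath": "/dev/fuse",           # device on the host (example path)
            "containerPath": "/dev/fuse",      # defaults to hostPath if omitted
            "permissions": ["read", "write"],  # default also includes mknod
        }
    ],
    "tmpfs": [
        {"containerPath": "/scratch", "size": 64, "mountOptions": ["rw", "noexec"]}
    ],
}
```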
    - `log_configuration`**Type**: `STRUCT`**Provider name**: `logConfiguration`**Description**: The log configuration specification for the container. This parameter maps to `LogConfig` in the [Create a container](https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.23/) and the `--log-driver` option to [docker run](https://docs.docker.com/engine/reference/run/). By default, containers use the same logging driver that the Docker daemon uses. However, the container might use a different logging driver than the Docker daemon by specifying a log driver with this parameter in the container definition. To use a different logging driver for a container, the log system must be configured properly on the container instance (or on a different log server for remote logging options). For more information on the options for different supported log drivers, see [Configure logging drivers](https://docs.docker.com/engine/admin/logging/overview/) in the Docker documentation. Batch currently supports a subset of the logging drivers available to the Docker daemon (shown in the [LogConfiguration](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties-logconfiguration.html) data type). This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version | grep "Server API version"` The Amazon ECS container agent running on a container instance must register the logging drivers available on that instance with the `ECS_AVAILABLE_LOGGING_DRIVERS` environment variable before containers placed on that instance can use these log configuration options.
For more information, see [Amazon ECS container agent configuration](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-agent-config.html) in the Amazon Elastic Container Service Developer Guide.
      - `log_driver`**Type**: `STRING`**Provider name**: `logDriver`**Description**: The log driver to use for the container. The valid values that are listed for this parameter are log drivers that the Amazon ECS container agent can communicate with by default. The supported log drivers are `awslogs`, `fluentd`, `gelf`, `json-file`, `journald`, `logentries`, `syslog`, and `splunk`. Jobs that are running on Fargate resources are restricted to the `awslogs` and `splunk` log drivers.
        {% dl %}
        
        {% dt %}
awslogs
        {% /dt %}

        {% dd %}
        Specifies the Amazon CloudWatch Logs logging driver. For more information, see [Using the awslogs log driver](https://docs.aws.amazon.com/batch/latest/userguide/using_awslogs.html) in the Batch User Guide and [Amazon CloudWatch Logs logging driver](https://docs.docker.com/config/containers/logging/awslogs/) in the Docker documentation.
                {% /dd %}

        {% dt %}
fluentd
        {% /dt %}

        {% dd %}
        Specifies the Fluentd logging driver. For more information including usage and options, see [Fluentd logging driver](https://docs.docker.com/config/containers/logging/fluentd/) in the Docker documentation.
                {% /dd %}

        {% dt %}
gelf
        {% /dt %}

        {% dd %}
        Specifies the Graylog Extended Format (GELF) logging driver. For more information including usage and options, see [Graylog Extended Format logging driver](https://docs.docker.com/config/containers/logging/gelf/) in the Docker documentation.
                {% /dd %}

        {% dt %}
journald
        {% /dt %}

        {% dd %}
        Specifies the journald logging driver. For more information including usage and options, see [Journald logging driver](https://docs.docker.com/config/containers/logging/journald/) in the Docker documentation.
                {% /dd %}

        {% dt %}
json-file
        {% /dt %}

        {% dd %}
        Specifies the JSON file logging driver. For more information including usage and options, see [JSON File logging driver](https://docs.docker.com/config/containers/logging/json-file/) in the Docker documentation.
                {% /dd %}

        {% dt %}
splunk
        {% /dt %}

        {% dd %}
        Specifies the Splunk logging driver. For more information including usage and options, see [Splunk logging driver](https://docs.docker.com/config/containers/logging/splunk/) in the Docker documentation.
                {% /dd %}

        {% dt %}
syslog
        {% /dt %}

        {% dd %}
        Specifies the syslog logging driver. For more information including usage and options, see [Syslog logging driver](https://docs.docker.com/config/containers/logging/syslog/) in the Docker documentation.
                {% /dd %}

                {% /dl %}
If you have a custom driver that's not listed earlier that you want to work with the Amazon ECS container agent, you can fork the Amazon ECS container agent project that's [available on GitHub](https://github.com/aws/amazon-ecs-agent) and customize it to work with that driver. We encourage you to submit pull requests for changes that you want to have included. However, Amazon Web Services doesn't currently support running modified copies of this software. This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version | grep "Server API version"`
      - `options`**Type**: `MAP_STRING_STRING`**Provider name**: `options`**Description**: The configuration options to send to the log driver. This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version | grep "Server API version"`
      - `secret_options`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `secretOptions`**Description**: The secrets to pass to the log configuration. For more information, see [Specifying sensitive data](https://docs.aws.amazon.com/batch/latest/userguide/specifying-sensitive-data.html) in the Batch User Guide.
        - `name`**Type**: `STRING`**Provider name**: `name`**Description**: The name of the secret.
        - `value_from`**Type**: `STRING`**Provider name**: `valueFrom`**Description**: The secret to expose to the container. The supported values are either the full Amazon Resource Name (ARN) of the Secrets Manager secret or the full ARN of the parameter in the Amazon Web Services Systems Manager Parameter Store. If the Amazon Web Services Systems Manager Parameter Store parameter exists in the same Region as the job you're launching, then you can use either the full Amazon Resource Name (ARN) or name of the parameter. If the parameter exists in a different Region, then the full ARN must be specified.
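As a concrete illustration of the log configuration fields above, here is a sketch of a `logConfiguration` block using the Splunk driver, with the driver's token supplied through `secretOptions` rather than in plain text. The endpoint URL and secret ARN are placeholders, not real resources:

```python
# Hypothetical logConfiguration fragment. splunk-url and splunk-token are
# standard Docker Splunk driver options; the token is injected from a secret.
log_configuration = {
    "logDriver": "splunk",
    "options": {
        "splunk-url": "https://splunk.example.com:8088",  # placeholder endpoint
    },
    "secretOptions": [
        {
            "name": "splunk-token",  # driver option whose value comes from the secret
            "valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:splunk-token",
        }
    ],
}
```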
    - `memory`**Type**: `INT32`**Provider name**: `memory`**Description**: This parameter is deprecated; use `resourceRequirements` to specify the memory requirements for the job definition. It's not supported for jobs running on Fargate resources. For jobs that run on Amazon EC2 resources, it specifies the memory hard limit (in MiB) for a container. If your container attempts to exceed the specified number, it's terminated. You must specify at least 4 MiB of memory for a job using this parameter. The memory hard limit can be specified in several places. It must be specified for each node at least once.
    - `mount_points`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `mountPoints`**Description**: The mount points for data volumes in your container. This parameter maps to `Volumes` in the [Create a container](https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.23/) and the `--volume` option to [docker run](https://docs.docker.com/engine/reference/run/).
      - `container_path`**Type**: `STRING`**Provider name**: `containerPath`**Description**: The path on the container where the host volume is mounted.
      - `read_only`**Type**: `BOOLEAN`**Provider name**: `readOnly`**Description**: If this value is `true`, the container has read-only access to the volume. Otherwise, the container can write to the volume. The default value is `false`.
      - `source_volume`**Type**: `STRING`**Provider name**: `sourceVolume`**Description**: The name of the volume to mount.
    - `network_configuration`**Type**: `STRUCT`**Provider name**: `networkConfiguration`**Description**: The network configuration for jobs that are running on Fargate resources. Jobs that are running on Amazon EC2 resources must not specify this parameter.
      - `assign_public_ip`**Type**: `STRING`**Provider name**: `assignPublicIp`**Description**: Indicates whether the job has a public IP address. For a job that's running on Fargate resources in a private subnet to send outbound traffic to the internet (for example, to pull container images), the private subnet requires a NAT gateway be attached to route requests to the internet. For more information, see [Amazon ECS task networking](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking.html) in the Amazon Elastic Container Service Developer Guide. The default value is "`DISABLED`".
    - `privileged`**Type**: `BOOLEAN`**Provider name**: `privileged`**Description**: When this parameter is true, the container is given elevated permissions on the host container instance (similar to the `root` user). This parameter maps to `Privileged` in the [Create a container](https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.23/) and the `--privileged` option to [docker run](https://docs.docker.com/engine/reference/run/). The default value is false. This parameter isn't applicable to jobs that are running on Fargate resources and shouldn't be provided, or specified as false.
    - `readonly_root_filesystem`**Type**: `BOOLEAN`**Provider name**: `readonlyRootFilesystem`**Description**: When this parameter is true, the container is given read-only access to its root file system. This parameter maps to `ReadonlyRootfs` in the [Create a container](https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.23/) and the `--read-only` option to `docker run`.
    - `repository_credentials`**Type**: `STRUCT`**Provider name**: `repositoryCredentials`**Description**: The private repository authentication credentials to use.
      - `credentials_parameter`**Type**: `STRING`**Provider name**: `credentialsParameter`**Description**: The Amazon Resource Name (ARN) of the secret containing the private repository credentials.
    - `resource_requirements`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `resourceRequirements`**Description**: The type and amount of resources to assign to a container. The supported resources include `GPU`, `MEMORY`, and `VCPU`.
      - `type`**Type**: `STRING`**Provider name**: `type`**Description**: The type of resource to assign to a container. The supported resources include `GPU`, `MEMORY`, and `VCPU`.
      - `value`**Type**: `STRING`**Provider name**: `value`**Description**: The quantity of the specified resource to reserve for the container. The values vary based on the `type` specified.
        {% dl %}
        
        {% dt %}
type="GPU"
        {% /dt %}

        {% dd %}
        The number of physical GPUs to reserve for the container. Make sure that the number of GPUs reserved for all containers in a job doesn't exceed the number of available GPUs on the compute resource that the job is launched on. GPUs aren't available for jobs that are running on Fargate resources.
                {% /dd %}

        {% dt %}
type="MEMORY"
        {% /dt %}

        {% dd %}
        The memory hard limit (in MiB) presented to the container. This parameter is supported for jobs that are running on Amazon EC2 resources. If your container attempts to exceed the memory specified, the container is terminated. This parameter maps to `Memory` in the [Create a container](https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.23/) and the `--memory` option to [docker run](https://docs.docker.com/engine/reference/run/). You must specify at least 4 MiB of memory for a job. This is required but can be specified in several places for multi-node parallel (MNP) jobs; it must be specified for each node at least once. If you're trying to maximize your resource utilization by providing your jobs as much memory as possible for a particular instance type, see [Memory management](https://docs.aws.amazon.com/batch/latest/userguide/memory-management.html) in the Batch User Guide. For jobs that are running on Fargate resources, `value` is the hard limit (in MiB) and must match one of the supported values, and the `VCPU` value must be one of the values supported for that memory value.
        {% dl %}
        
        {% dt %}
value = 512
        {% /dt %}

        {% dd %}
        `VCPU` = 0.25
                {% /dd %}

        {% dt %}
value = 1024
        {% /dt %}

        {% dd %}
        `VCPU` = 0.25 or 0.5
                {% /dd %}

        {% dt %}
value = 2048
        {% /dt %}

        {% dd %}
        `VCPU` = 0.25, 0.5, or 1
                {% /dd %}

        {% dt %}
value = 3072
        {% /dt %}

        {% dd %}
        `VCPU` = 0.5, or 1
                {% /dd %}

        {% dt %}
value = 4096
        {% /dt %}

        {% dd %}
        `VCPU` = 0.5, 1, or 2
                {% /dd %}

        {% dt %}
value = 5120, 6144, or 7168
        {% /dt %}

        {% dd %}
        `VCPU` = 1 or 2
                {% /dd %}

        {% dt %}
value = 8192
        {% /dt %}

        {% dd %}
        `VCPU` = 1, 2, or 4
                {% /dd %}

        {% dt %}
value = 9216, 10240, 11264, 12288, 13312, 14336, or 15360
        {% /dt %}

        {% dd %}
        `VCPU` = 2 or 4
                {% /dd %}

        {% dt %}
value = 16384
        {% /dt %}

        {% dd %}
        `VCPU` = 2, 4, or 8
                {% /dd %}

        {% dt %}
value = 17408, 18432, 19456, 21504, 22528, 23552, 25600, 26624, 27648, 29696, or 30720
        {% /dt %}

        {% dd %}
        `VCPU` = 4
                {% /dd %}

        {% dt %}
value = 20480, 24576, or 28672
        {% /dt %}

        {% dd %}
        `VCPU` = 4 or 8
                {% /dd %}

        {% dt %}
value = 36864, 45056, 53248, or 61440
        {% /dt %}

        {% dd %}
        `VCPU` = 8
                {% /dd %}

        {% dt %}
value = 32768, 40960, 49152, or 57344
        {% /dt %}

        {% dd %}
        `VCPU` = 8 or 16
                {% /dd %}

        {% dt %}
value = 65536, 73728, 81920, 90112, 98304, 106496, 114688, or 122880
        {% /dt %}

        {% dd %}
        `VCPU` = 16
                {% /dd %}

                {% /dl %}

                {% /dd %}

        {% dt %}
type="VCPU"
        {% /dt %}

        {% dd %}
        The number of vCPUs reserved for the container. This parameter maps to `CpuShares` in the [Create a container](https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.23/) and the `--cpu-shares` option to [docker run](https://docs.docker.com/engine/reference/run/). Each vCPU is equivalent to 1,024 CPU shares. For Amazon EC2 resources, you must specify at least one vCPU. This is required but can be specified in several places; it must be specified for each node at least once. The default for the Fargate On-Demand vCPU resource count quota is 6 vCPUs. For more information about Fargate quotas, see [Fargate quotas](https://docs.aws.amazon.com/general/latest/gr/ecs-service.html#service-quotas-fargate) in the Amazon Web Services General Reference. For jobs that are running on Fargate resources, `value` must match one of the supported values, and the `MEMORY` value must be one of the values supported for that `VCPU` value. The supported values are 0.25, 0.5, 1, 2, 4, 8, and 16.
        {% dl %}
        
        {% dt %}
value = 0.25
        {% /dt %}

        {% dd %}
        `MEMORY` = 512, 1024, or 2048
                {% /dd %}

        {% dt %}
value = 0.5
        {% /dt %}

        {% dd %}
        `MEMORY` = 1024, 2048, 3072, or 4096
                {% /dd %}

        {% dt %}
value = 1
        {% /dt %}

        {% dd %}
        `MEMORY` = 2048, 3072, 4096, 5120, 6144, 7168, or 8192
                {% /dd %}

        {% dt %}
value = 2
        {% /dt %}

        {% dd %}
        `MEMORY` = 4096, 5120, 6144, 7168, 8192, 9216, 10240, 11264, 12288, 13312, 14336, 15360, or 16384
                {% /dd %}

        {% dt %}
value = 4
        {% /dt %}

        {% dd %}
        `MEMORY` = 8192, 9216, 10240, 11264, 12288, 13312, 14336, 15360, 16384, 17408, 18432, 19456, 20480, 21504, 22528, 23552, 24576, 25600, 26624, 27648, 28672, 29696, or 30720
                {% /dd %}

        {% dt %}
value = 8
        {% /dt %}

        {% dd %}
        `MEMORY` = 16384, 20480, 24576, 28672, 32768, 36864, 40960, 45056, 49152, 53248, 57344, or 61440
                {% /dd %}

        {% dt %}
value = 16
        {% /dt %}

        {% dd %}
        `MEMORY` = 32768, 40960, 49152, 57344, 65536, 73728, 81920, 90112, 98304, 106496, 114688, or 122880
                {% /dd %}

                {% /dl %}

                {% /dd %}

                {% /dl %}
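The Fargate MEMORY/VCPU pairing rules listed above can be encoded as a quick validity check. A sketch in Python (the mapping covers only a few of the pairs from the tables above; values are strings in the Batch API, with MEMORY in MiB):

```python
# Hypothetical resourceRequirements for a Fargate job: 1 vCPU with 4 GiB memory.
resource_requirements = [
    {"type": "VCPU", "value": "1"},
    {"type": "MEMORY", "value": "4096"},
]

# A subset of the valid Fargate VCPU -> MEMORY pairings from the tables above.
valid_memory_for_vcpu = {
    "0.25": {512, 1024, 2048},
    "0.5": {1024, 2048, 3072, 4096},
    "1": {2048, 3072, 4096, 5120, 6144, 7168, 8192},
}

vcpu = next(r["value"] for r in resource_requirements if r["type"] == "VCPU")
memory = int(next(r["value"] for r in resource_requirements if r["type"] == "MEMORY"))
assert memory in valid_memory_for_vcpu[vcpu]  # 4096 MiB is a valid pairing for 1 vCPU
```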
    - `runtime_platform`**Type**: `STRUCT`**Provider name**: `runtimePlatform`**Description**: An object that represents the compute environment architecture for Batch jobs on Fargate.
      - `cpu_architecture`**Type**: `STRING`**Provider name**: `cpuArchitecture`**Description**: The vCPU architecture. The default value is `X86_64`. Valid values are `X86_64` and `ARM64`. This parameter must be set to `X86_64` for Windows containers. Fargate Spot is not supported for `ARM64` and Windows-based containers on Fargate. A job queue will be blocked if a Fargate `ARM64` or Windows job is submitted to a job queue with only Fargate Spot compute environments. However, you can attach both `FARGATE` and `FARGATE_SPOT` compute environments to the same job queue.
      - `operating_system_family`**Type**: `STRING`**Provider name**: `operatingSystemFamily`**Description**: The operating system for the compute environment. Valid values are: `LINUX` (default), `WINDOWS_SERVER_2019_CORE`, `WINDOWS_SERVER_2019_FULL`, `WINDOWS_SERVER_2022_CORE`, and `WINDOWS_SERVER_2022_FULL`. The following parameters can't be set for Windows containers: `linuxParameters`, `privileged`, `user`, `ulimits`, `readonlyRootFilesystem`, and `efsVolumeConfiguration`. The Batch Scheduler checks the compute environments that are attached to the job queue before registering a task definition with Fargate. In this scenario, the job queue is where the job is submitted. If the job requires a Windows container and the first compute environment is `LINUX`, the compute environment is skipped and the next compute environment is checked until a Windows-based compute environment is found. Fargate Spot is not supported for `ARM64` and Windows-based containers on Fargate. A job queue will be blocked if a Fargate `ARM64` or Windows job is submitted to a job queue with only Fargate Spot compute environments. However, you can attach both `FARGATE` and `FARGATE_SPOT` compute environments to the same job queue.
    - `secrets`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `secrets`**Description**: The secrets for the container. For more information, see [Specifying sensitive data](https://docs.aws.amazon.com/batch/latest/userguide/specifying-sensitive-data.html) in the Batch User Guide.
      - `name`**Type**: `STRING`**Provider name**: `name`**Description**: The name of the secret.
      - `value_from`**Type**: `STRING`**Provider name**: `valueFrom`**Description**: The secret to expose to the container. The supported values are either the full Amazon Resource Name (ARN) of the Secrets Manager secret or the full ARN of the parameter in the Amazon Web Services Systems Manager Parameter Store. If the Amazon Web Services Systems Manager Parameter Store parameter exists in the same Region as the job you're launching, then you can use either the full Amazon Resource Name (ARN) or name of the parameter. If the parameter exists in a different Region, then the full ARN must be specified.
    - `ulimits`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `ulimits`**Description**: A list of `ulimits` to set in the container. This parameter maps to `Ulimits` in the [Create a container](https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.23/) and the `--ulimit` option to [docker run](https://docs.docker.com/engine/reference/run/). This parameter isn't applicable to jobs that are running on Fargate resources and shouldn't be provided.
      - `hard_limit`**Type**: `INT32`**Provider name**: `hardLimit`**Description**: The hard limit for the `ulimit` type.
      - `name`**Type**: `STRING`**Provider name**: `name`**Description**: The `type` of the `ulimit`. Valid values are: `core` | `cpu` | `data` | `fsize` | `locks` | `memlock` | `msgqueue` | `nice` | `nofile` | `nproc` | `rss` | `rtprio` | `rttime` | `sigpending` | `stack`.
      - `soft_limit`**Type**: `INT32`**Provider name**: `softLimit`**Description**: The soft limit for the `ulimit` type.
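For example, an EC2-based job definition might raise the open-file limit with a `ulimits` entry like this sketch (the limit values are arbitrary illustrations):

```python
# Hypothetical ulimits fragment: raise the open-file limit (nofile).
# Not valid for jobs running on Fargate resources.
ulimits = [
    {"name": "nofile", "softLimit": 4096, "hardLimit": 8192},
]
```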
    - `user`**Type**: `STRING`**Provider name**: `user`**Description**: The user name to use inside the container. This parameter maps to `User` in the [Create a container](https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.23/) and the `--user` option to [docker run](https://docs.docker.com/engine/reference/run/).
    - `vcpus`**Type**: `INT32`**Provider name**: `vcpus`**Description**: This parameter is deprecated; use `resourceRequirements` to specify the vCPU requirements for the job definition. It's not supported for jobs running on Fargate resources. For jobs running on Amazon EC2 resources, it specifies the number of vCPUs reserved for the job. Each vCPU is equivalent to 1,024 CPU shares. This parameter maps to `CpuShares` in the [Create a container](https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.23/) and the `--cpu-shares` option to [docker run](https://docs.docker.com/engine/reference/run/). The number of vCPUs must be specified but can be specified in several places. You must specify it at least once for each node.
    - `volumes`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `volumes`**Description**: A list of data volumes used in a job.
      - `efs_volume_configuration`**Type**: `STRUCT`**Provider name**: `efsVolumeConfiguration`**Description**: This parameter is specified when you're using an Amazon Elastic File System file system for job storage. Jobs that are running on Fargate resources must specify a `platformVersion` of at least `1.4.0`.
        - `authorization_config`**Type**: `STRUCT`**Provider name**: `authorizationConfig`**Description**: The authorization configuration details for the Amazon EFS file system.
          - `access_point_id`**Type**: `STRING`**Provider name**: `accessPointId`**Description**: The Amazon EFS access point ID to use. If an access point is specified, the root directory value specified in the `EFSVolumeConfiguration` must either be omitted or set to `/` which enforces the path set on the EFS access point. If an access point is used, transit encryption must be enabled in the `EFSVolumeConfiguration`. For more information, see [Working with Amazon EFS access points](https://docs.aws.amazon.com/efs/latest/ug/efs-access-points.html) in the Amazon Elastic File System User Guide.
          - `iam`**Type**: `STRING`**Provider name**: `iam`**Description**: Whether or not to use the Batch job IAM role defined in a job definition when mounting the Amazon EFS file system. If enabled, transit encryption must be enabled in the `EFSVolumeConfiguration`. If this parameter is omitted, the default value of `DISABLED` is used. For more information, see [Using Amazon EFS access points](https://docs.aws.amazon.com/batch/latest/userguide/efs-volumes.html#efs-volume-accesspoints) in the Batch User Guide. EFS IAM authorization requires that `TransitEncryption` be `ENABLED` and that a `JobRoleArn` is specified.
        - `file_system_id`**Type**: `STRING`**Provider name**: `fileSystemId`**Description**: The Amazon EFS file system ID to use.
        - `root_directory`**Type**: `STRING`**Provider name**: `rootDirectory`**Description**: The directory within the Amazon EFS file system to mount as the root directory inside the host. If this parameter is omitted, the root of the Amazon EFS volume is used instead. Specifying `/` has the same effect as omitting this parameter. The maximum length is 4,096 characters. If an EFS access point is specified in the `authorizationConfig`, the root directory parameter must either be omitted or set to `/`, which enforces the path set on the Amazon EFS access point.
        - `transit_encryption`**Type**: `STRING`**Provider name**: `transitEncryption`**Description**: Determines whether to enable encryption for Amazon EFS data in transit between the Amazon ECS host and the Amazon EFS server. Transit encryption must be enabled if Amazon EFS IAM authorization is used. If this parameter is omitted, the default value of `DISABLED` is used. For more information, see [Encrypting data in transit](https://docs.aws.amazon.com/efs/latest/ug/encryption-in-transit.html) in the Amazon Elastic File System User Guide.
        - `transit_encryption_port`**Type**: `INT32`**Provider name**: `transitEncryptionPort`**Description**: The port to use when sending encrypted data between the Amazon ECS host and the Amazon EFS server. If you don't specify a transit encryption port, it uses the port selection strategy that the Amazon EFS mount helper uses. The value must be between 0 and 65,535. For more information, see [EFS mount helper](https://docs.aws.amazon.com/efs/latest/ug/efs-mount-helper.html) in the Amazon Elastic File System User Guide.
      - `host`**Type**: `STRUCT`**Provider name**: `host`**Description**: The contents of the `host` parameter determine whether your data volume persists on the host container instance and where it's stored. If the host parameter is empty, then the Docker daemon assigns a host path for your data volume. However, the data isn't guaranteed to persist after the containers that are associated with it stop running. This parameter isn't applicable to jobs that are running on Fargate resources and shouldn't be provided.
        - `source_path`**Type**: `STRING`**Provider name**: `sourcePath`**Description**: The path on the host container instance that's presented to the container. If this parameter is empty, then the Docker daemon assigns a host path for you. If this parameter contains a file location, then the data volume persists at the specified location on the host container instance until you delete it manually. If the source path location doesn't exist on the host container instance, the Docker daemon creates it. If the location does exist, the contents of the source path folder are exported. This parameter isn't applicable to jobs that run on Fargate resources. Don't provide this for these jobs.
      - `name`**Type**: `STRING`**Provider name**: `name`**Description**: The name of the volume. It can be up to 255 characters long. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_). This name is referenced in the `sourceVolume` parameter of container definition `mountPoints`.
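A `volumes` entry and the `mountPoints` entry that references it travel together in a job definition. A sketch pairing an Amazon EFS volume (with an access point and transit encryption, as the constraints above require) with its mount point; the file system and access point IDs are placeholders:

```python
# Hypothetical volume/mount-point pairing for an EFS-backed job.
# Fargate jobs using EFS need a platformVersion of at least 1.4.0.
volumes = [
    {
        "name": "shared-data",
        "efsVolumeConfiguration": {
            "fileSystemId": "fs-0123456789abcdef0",         # placeholder ID
            "rootDirectory": "/",                           # must be "/" (or omitted) with an access point
            "transitEncryption": "ENABLED",                 # required when using an access point or IAM auth
            "authorizationConfig": {
                "accessPointId": "fsap-0123456789abcdef0",  # placeholder ID
                "iam": "ENABLED",                           # requires a JobRoleArn on the job
            },
        },
    }
]
mount_points = [
    # sourceVolume must match a name declared in volumes above
    {"sourceVolume": "shared-data", "containerPath": "/mnt/shared", "readOnly": False}
]
```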
  - `ecs_properties`**Type**: `STRUCT`**Provider name**: `ecsProperties`**Description**: This is an object that represents the properties of the node range for a multi-node parallel job.
    - `task_properties`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `taskProperties`**Description**: An object that contains the properties for the Amazon ECS task definition of a job. This object is currently limited to one task element. However, the task element can run up to 10 containers.
      - `containers`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `containers`**Description**: This object is a list of containers.
        - `command`**Type**: `UNORDERED_LIST_STRING`**Provider name**: `command`**Description**: The command that's passed to the container. This parameter maps to `Cmd` in the [Create a container](https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.23/) and the `COMMAND` parameter to [docker run](https://docs.docker.com/engine/reference/run/). For more information, see [Dockerfile reference: CMD](https://docs.docker.com/engine/reference/builder/#cmd).
        - `depends_on`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `dependsOn`**Description**: A list of containers that this container depends on.
          - `condition`**Type**: `STRING`**Provider name**: `condition`**Description**: The dependency condition of the container. The following are the available conditions and their behavior:
            - `START` - This condition emulates the behavior of links and volumes today. It validates that a dependent container is started before permitting other containers to start.
            - `COMPLETE` - This condition validates that a dependent container runs to completion (exits) before permitting other containers to start. This can be useful for nonessential containers that run a script and then exit. This condition can't be set on an essential container.
            - `SUCCESS` - This condition is the same as `COMPLETE`, but it also requires that the container exits with a zero status. This condition can't be set on an essential container.
          - `container_name`**Type**: `STRING`**Provider name**: `containerName`**Description**: A unique identifier for the container.
        - `environment`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `environment`**Description**: The environment variables to pass to a container. This parameter maps to `Env` in the [Create a container](https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.23/) and the `--env` parameter to [docker run](https://docs.docker.com/engine/reference/run/). We don't recommend using plaintext environment variables for sensitive information, such as credential data. Environment variables cannot start with `AWS_BATCH`. This naming convention is reserved for variables that Batch sets.
          - `name`**Type**: `STRING`**Provider name**: `name`**Description**: The name of the key-value pair. For environment variables, this is the name of the environment variable.
          - `value`**Type**: `STRING`**Provider name**: `value`**Description**: The value of the key-value pair. For environment variables, this is the value of the environment variable.
        - `essential`**Type**: `BOOLEAN`**Provider name**: `essential`**Description**: If the `essential` parameter of a container is marked as `true`, and that container fails or stops for any reason, all other containers that are part of the task are stopped. If the `essential` parameter of a container is marked as `false`, its failure doesn't affect the rest of the containers in a task. If this parameter is omitted, a container is assumed to be essential. All jobs must have at least one essential container. If you have an application that's composed of multiple containers, group containers that are used for a common purpose into components, and separate the different components into multiple task definitions. For more information, see [Application Architecture](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/application_architecture.html) in the Amazon Elastic Container Service Developer Guide.
        - `image`**Type**: `STRING`**Provider name**: `image`**Description**: The image used to start a container. This string is passed directly to the Docker daemon. By default, images in the Docker Hub registry are available. Other repositories are specified with either `repository-url/image:tag` or `repository-url/image@digest`. Up to 255 letters (uppercase and lowercase), numbers, hyphens, underscores, colons, periods, forward slashes, and number signs are allowed. This parameter maps to `Image` in the [Create a container](https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.35/) and the `IMAGE` parameter of [docker run](https://docs.docker.com/engine/reference/run/#security-configuration).
        - `linux_parameters`**Type**: `STRUCT`**Provider name**: `linuxParameters`**Description**: Linux-specific modifications that are applied to the container, such as Linux kernel capabilities. For more information, see [KernelCapabilities](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_KernelCapabilities.html).
          - `devices`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `devices`**Description**: Any of the host devices to expose to the container. This parameter maps to `Devices` in the [Create a container](https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.23/) and the `--device` option to [docker run](https://docs.docker.com/engine/reference/run/). This parameter isn't applicable to jobs that are running on Fargate resources. Don't provide it for these jobs.
            - `container_path`**Type**: `STRING`**Provider name**: `containerPath`**Description**: The path inside the container that's used to expose the host device. By default, the `hostPath` value is used.
            - `host_path`**Type**: `STRING`**Provider name**: `hostPath`**Description**: The path for the device on the host container instance.
            - `permissions`**Type**: `UNORDERED_LIST_STRING`**Provider name**: `permissions`**Description**: The explicit permissions to provide to the container for the device. By default, the container has permissions for `read`, `write`, and `mknod` for the device.
          - `init_process_enabled`**Type**: `BOOLEAN`**Provider name**: `initProcessEnabled`**Description**: If true, run an `init` process inside the container that forwards signals and reaps processes. This parameter maps to the `--init` option to [docker run](https://docs.docker.com/engine/reference/run/). This parameter requires version 1.25 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version | grep "Server API version"`
          - `max_swap`**Type**: `INT32`**Provider name**: `maxSwap`**Description**: The total amount of swap memory (in MiB) a container can use. This parameter is translated to the `--memory-swap` option to [docker run](https://docs.docker.com/engine/reference/run/) where the value is the sum of the container memory plus the `maxSwap` value. For more information, see [`--memory-swap` details](https://docs.docker.com/config/containers/resource_constraints/#--memory-swap-details) in the Docker documentation. If a `maxSwap` value of `0` is specified, the container doesn't use swap. Accepted values are `0` or any positive integer. If the `maxSwap` parameter is omitted, the container doesn't use the swap configuration for the container instance that it's running on. A `maxSwap` value must be set for the `swappiness` parameter to be used. This parameter isn't applicable to jobs that are running on Fargate resources. Don't provide it for these jobs.
          - `shared_memory_size`**Type**: `INT32`**Provider name**: `sharedMemorySize`**Description**: The value for the size (in MiB) of the `/dev/shm` volume. This parameter maps to the `--shm-size` option to [docker run](https://docs.docker.com/engine/reference/run/). This parameter isn't applicable to jobs that are running on Fargate resources. Don't provide it for these jobs.
          - `swappiness`**Type**: `INT32`**Provider name**: `swappiness`**Description**: You can use this parameter to tune a container's memory swappiness behavior. A `swappiness` value of `0` causes swapping to not occur unless absolutely necessary. A `swappiness` value of `100` causes pages to be swapped aggressively. Valid values are whole numbers between `0` and `100`. If the `swappiness` parameter isn't specified, a default value of `60` is used. If a value isn't specified for `maxSwap`, then this parameter is ignored. If `maxSwap` is set to 0, the container doesn't use swap. This parameter maps to the `--memory-swappiness` option to [docker run](https://docs.docker.com/engine/reference/run/). Consider the following when you use a per-container swap configuration.
            - Swap space must be enabled and allocated on the container instance for the containers to use. By default, the Amazon ECS optimized AMIs don't have swap enabled. You must enable swap on the instance to use this feature. For more information, see [Instance store swap volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-store-swap-volumes.html) in the Amazon EC2 User Guide for Linux Instances or [How do I allocate memory to work as swap space in an Amazon EC2 instance by using a swap file?](http://aws.amazon.com/premiumsupport/knowledge-center/ec2-memory-swap-file/)
            - The swap space parameters are only supported for job definitions using EC2 resources.
            - If the `maxSwap` and `swappiness` parameters are omitted from a job definition, each container has a default `swappiness` value of 60. Moreover, the total swap usage is limited to two times the memory reservation of the container.
This parameter isn't applicable to jobs that are running on Fargate resources. Don't provide it for these jobs.
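As a minimal sketch, the swap settings above might appear in a job definition's `linuxParameters` block like this (the values are illustrative, not recommendations):

```json
{
  "linuxParameters": {
    "maxSwap": 4096,
    "swappiness": 30
  }
}
```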
          - `tmpfs`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `tmpfs`**Description**: The container path, mount options, and size (in MiB) of the `tmpfs` mount. This parameter maps to the `--tmpfs` option to [docker run](https://docs.docker.com/engine/reference/run/). This parameter isn't applicable to jobs that are running on Fargate resources. Don't provide this parameter for this resource type.
            - `container_path`**Type**: `STRING`**Provider name**: `containerPath`**Description**: The absolute file path in the container where the `tmpfs` volume is mounted.
            - `mount_options`**Type**: `UNORDERED_LIST_STRING`**Provider name**: `mountOptions`**Description**: The list of `tmpfs` volume mount options. Valid values: "`defaults`" | "`ro`" | "`rw`" | "`suid`" | "`nosuid`" | "`dev`" | "`nodev`" | "`exec`" | "`noexec`" | "`sync`" | "`async`" | "`dirsync`" | "`remount`" | "`mand`" | "`nomand`" | "`atime`" | "`noatime`" | "`diratime`" | "`nodiratime`" | "`bind`" | "`rbind`" | "`unbindable`" | "`runbindable`" | "`private`" | "`rprivate`" | "`shared`" | "`rshared`" | "`slave`" | "`rslave`" | "`relatime`" | "`norelatime`" | "`strictatime`" | "`nostrictatime`" | "`mode`" | "`uid`" | "`gid`" | "`nr_inodes`" | "`nr_blocks`" | "`mpol`"
            - `size`**Type**: `INT32`**Provider name**: `size`**Description**: The size (in MiB) of the `tmpfs` volume.
        - `log_configuration`**Type**: `STRUCT`**Provider name**: `logConfiguration`**Description**: The log configuration specification for the container. This parameter maps to `LogConfig` in the [Create a container](https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.35/) and the `--log-driver` option to [docker run](https://docs.docker.com/engine/reference/run/#security-configuration). By default, containers use the same logging driver that the Docker daemon uses. However, the container can use a different logging driver than the Docker daemon by specifying a log driver with this parameter in the container definition. To use a different logging driver for a container, the log system must be configured properly on the container instance (or on a different log server for remote logging options). For more information about the options for different supported log drivers, see [Configure logging drivers](https://docs.docker.com/engine/admin/logging/overview/) in the Docker documentation. Amazon ECS currently supports a subset of the logging drivers available to the Docker daemon (shown in the `LogConfiguration` data type). Additional log drivers may be available in future releases of the Amazon ECS container agent. This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version --format '{{.Server.APIVersion}}'` The Amazon ECS container agent running on a container instance must register the logging drivers available on that instance with the `ECS_AVAILABLE_LOGGING_DRIVERS` environment variable before containers placed on that instance can use these log configuration options.
For more information, see [Amazon ECS container agent configuration](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-agent-config.html) in the Amazon Elastic Container Service Developer Guide.
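For illustration, a `logConfiguration` using the `awslogs` driver might look like the following fragment of a job definition (the log group and Region values are hypothetical):

```json
{
  "logConfiguration": {
    "logDriver": "awslogs",
    "options": {
      "awslogs-group": "/aws/batch/example-job",
      "awslogs-region": "us-east-1"
    }
  }
}
```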
          - `log_driver`**Type**: `STRING`**Provider name**: `logDriver`**Description**: The log driver to use for the container. The valid values that are listed for this parameter are log drivers that the Amazon ECS container agent can communicate with by default. The supported log drivers are `awslogs`, `fluentd`, `gelf`, `json-file`, `journald`, `logentries`, `syslog`, and `splunk`.Jobs that are running on Fargate resources are restricted to the `awslogs` and `splunk` log drivers.
            {% dl %}
            
            {% dt %}
awslogs
            {% /dt %}

            {% dd %}
            Specifies the Amazon CloudWatch Logs logging driver. For more information, see [Using the awslogs log driver](https://docs.aws.amazon.com/batch/latest/userguide/using_awslogs.html) in the Batch User Guide and [Amazon CloudWatch Logs logging driver](https://docs.docker.com/config/containers/logging/awslogs/) in the Docker documentation.
                        {% /dd %}

            {% dt %}
fluentd
            {% /dt %}

            {% dd %}
            Specifies the Fluentd logging driver. For more information including usage and options, see [Fluentd logging driver](https://docs.docker.com/config/containers/logging/fluentd/) in the Docker documentation.
                        {% /dd %}

            {% dt %}
gelf
            {% /dt %}

            {% dd %}
            Specifies the Graylog Extended Format (GELF) logging driver. For more information including usage and options, see [Graylog Extended Format logging driver](https://docs.docker.com/config/containers/logging/gelf/) in the Docker documentation.
                        {% /dd %}

            {% dt %}
journald
            {% /dt %}

            {% dd %}
            Specifies the journald logging driver. For more information including usage and options, see [Journald logging driver](https://docs.docker.com/config/containers/logging/journald/) in the Docker documentation.
                        {% /dd %}

            {% dt %}
json-file
            {% /dt %}

            {% dd %}
            Specifies the JSON file logging driver. For more information including usage and options, see [JSON File logging driver](https://docs.docker.com/config/containers/logging/json-file/) in the Docker documentation.
                        {% /dd %}

            {% dt %}
splunk
            {% /dt %}

            {% dd %}
            Specifies the Splunk logging driver. For more information including usage and options, see [Splunk logging driver](https://docs.docker.com/config/containers/logging/splunk/) in the Docker documentation.
                        {% /dd %}

            {% dt %}
syslog
            {% /dt %}

            {% dd %}
            Specifies the syslog logging driver. For more information including usage and options, see [Syslog logging driver](https://docs.docker.com/config/containers/logging/syslog/) in the Docker documentation.
                        {% /dd %}

                        {% /dl %}
If you have a custom driver that's not listed earlier that you want to work with the Amazon ECS container agent, you can fork the Amazon ECS container agent project that's [available on GitHub](https://github.com/aws/amazon-ecs-agent) and customize it to work with that driver. We encourage you to submit pull requests for changes that you want to have included. However, Amazon Web Services doesn't currently support running modified copies of this software. This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version | grep "Server API version"`
          - `options`**Type**: `MAP_STRING_STRING`**Provider name**: `options`**Description**: The configuration options to send to the log driver. This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version | grep "Server API version"`
          - `secret_options`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `secretOptions`**Description**: The secrets to pass to the log configuration. For more information, see [Specifying sensitive data](https://docs.aws.amazon.com/batch/latest/userguide/specifying-sensitive-data.html) in the Batch User Guide.
            - `name`**Type**: `STRING`**Provider name**: `name`**Description**: The name of the secret.
            - `value_from`**Type**: `STRING`**Provider name**: `valueFrom`**Description**: The secret to expose to the container. The supported values are either the full Amazon Resource Name (ARN) of the Secrets Manager secret or the full ARN of the parameter in the Amazon Web Services Systems Manager Parameter Store. If the Amazon Web Services Systems Manager Parameter Store parameter exists in the same Region as the job you're launching, then you can use either the full Amazon Resource Name (ARN) or name of the parameter. If the parameter exists in a different Region, then the full ARN must be specified.
        - `mount_points`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `mountPoints`**Description**: The mount points for data volumes in your container. This parameter maps to `Volumes` in the [Create a container](https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.35/) and the `--volume` option to [docker run](https://docs.docker.com/engine/reference/run/#security-configuration). Windows containers can mount whole directories on the same drive as `$env:ProgramData`. Windows containers can't mount directories on a different drive, and mount points can't be across drives.
          - `container_path`**Type**: `STRING`**Provider name**: `containerPath`**Description**: The path on the container where the host volume is mounted.
          - `read_only`**Type**: `BOOLEAN`**Provider name**: `readOnly`**Description**: If this value is `true`, the container has read-only access to the volume. Otherwise, the container can write to the volume. The default value is `false`.
          - `source_volume`**Type**: `STRING`**Provider name**: `sourceVolume`**Description**: The name of the volume to mount.
        - `name`**Type**: `STRING`**Provider name**: `name`**Description**: The name of a container. The name can be used as a unique identifier to target your `dependsOn` and `Overrides` objects.
        - `privileged`**Type**: `BOOLEAN`**Provider name**: `privileged`**Description**: When this parameter is `true`, the container is given elevated privileges on the host container instance (similar to the `root` user). This parameter maps to `Privileged` in the [Create a container](https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.35/) and the `--privileged` option to [docker run](https://docs.docker.com/engine/reference/run/#security-configuration). This parameter is not supported for Windows containers or tasks run on Fargate.
        - `readonly_root_filesystem`**Type**: `BOOLEAN`**Provider name**: `readonlyRootFilesystem`**Description**: When this parameter is `true`, the container is given read-only access to its root file system. This parameter maps to `ReadonlyRootfs` in the [Create a container](https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.35/) and the `--read-only` option to [docker run](https://docs.docker.com/engine/reference/run/#security-configuration). This parameter is not supported for Windows containers.
        - `repository_credentials`**Type**: `STRUCT`**Provider name**: `repositoryCredentials`**Description**: The private repository authentication credentials to use.
          - `credentials_parameter`**Type**: `STRING`**Provider name**: `credentialsParameter`**Description**: The Amazon Resource Name (ARN) of the secret containing the private repository credentials.
        - `resource_requirements`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `resourceRequirements`**Description**: The type and amount of a resource to assign to a container. The supported resources include `GPU`, `MEMORY`, and `VCPU`.
          - `type`**Type**: `STRING`**Provider name**: `type`**Description**: The type of resource to assign to a container. The supported resources include `GPU`, `MEMORY`, and `VCPU`.
          - `value`**Type**: `STRING`**Provider name**: `value`**Description**: The quantity of the specified resource to reserve for the container. The values vary based on the `type` specified.
            {% dl %}
            
            {% dt %}
type="GPU"
            {% /dt %}

            {% dd %}
            The number of physical GPUs to reserve for the container. Make sure that the number of GPUs reserved for all containers in a job doesn't exceed the number of available GPUs on the compute resource that the job is launched on. GPUs aren't available for jobs that are running on Fargate resources.
                        {% /dd %}

            {% dt %}
type="MEMORY"
            {% /dt %}

            {% dd %}
            The memory hard limit (in MiB) present to the container. This parameter is supported for jobs that are running on Amazon EC2 resources. If your container attempts to exceed the memory specified, the container is terminated. This parameter maps to `Memory` in the [Create a container](https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.23/) and the `--memory` option to [docker run](https://docs.docker.com/engine/reference/run/). You must specify at least 4 MiB of memory for a job. This parameter is required but can be specified in several places for multi-node parallel (MNP) jobs; it must be specified for each node at least once. If you're trying to maximize your resource utilization by providing your jobs as much memory as possible for a particular instance type, see [Memory management](https://docs.aws.amazon.com/batch/latest/userguide/memory-management.html) in the Batch User Guide. For jobs that are running on Fargate resources, `value` is the hard limit (in MiB) and must match one of the supported values, and the `VCPU` values must be one of the values supported for that memory value.
            {% dl %}
            
            {% dt %}
value = 512
            {% /dt %}

            {% dd %}
            `VCPU` = 0.25
                        {% /dd %}

            {% dt %}
value = 1024
            {% /dt %}

            {% dd %}
            `VCPU` = 0.25 or 0.5
                        {% /dd %}

            {% dt %}
value = 2048
            {% /dt %}

            {% dd %}
            `VCPU` = 0.25, 0.5, or 1
                        {% /dd %}

            {% dt %}
value = 3072
            {% /dt %}

            {% dd %}
            `VCPU` = 0.5 or 1
                        {% /dd %}

            {% dt %}
value = 4096
            {% /dt %}

            {% dd %}
            `VCPU` = 0.5, 1, or 2
                        {% /dd %}

            {% dt %}
value = 5120, 6144, or 7168
            {% /dt %}

            {% dd %}
            `VCPU` = 1 or 2
                        {% /dd %}

            {% dt %}
value = 8192
            {% /dt %}

            {% dd %}
            `VCPU` = 1, 2, or 4
                        {% /dd %}

            {% dt %}
value = 9216, 10240, 11264, 12288, 13312, 14336, or 15360
            {% /dt %}

            {% dd %}
            `VCPU` = 2 or 4
                        {% /dd %}

            {% dt %}
value = 16384
            {% /dt %}

            {% dd %}
            `VCPU` = 2, 4, or 8
                        {% /dd %}

            {% dt %}
value = 17408, 18432, 19456, 21504, 22528, 23552, 25600, 26624, 27648, 29696, or 30720
            {% /dt %}

            {% dd %}
            `VCPU` = 4
                        {% /dd %}

            {% dt %}
value = 20480, 24576, or 28672
            {% /dt %}

            {% dd %}
            `VCPU` = 4 or 8
                        {% /dd %}

            {% dt %}
value = 36864, 45056, 53248, or 61440
            {% /dt %}

            {% dd %}
            `VCPU` = 8
                        {% /dd %}

            {% dt %}
value = 32768, 40960, 49152, or 57344
            {% /dt %}

            {% dd %}
            `VCPU` = 8 or 16
                        {% /dd %}

            {% dt %}
value = 65536, 73728, 81920, 90112, 98304, 106496, 114688, or 122880
            {% /dt %}

            {% dd %}
            `VCPU` = 16
                        {% /dd %}

                        {% /dl %}

                        {% /dd %}

            {% dt %}
type="VCPU"
            {% /dt %}

            {% dd %}
            The number of vCPUs reserved for the container. This parameter maps to `CpuShares` in the [Create a container](https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.23/) and the `--cpu-shares` option to [docker run](https://docs.docker.com/engine/reference/run/). Each vCPU is equivalent to 1,024 CPU shares. For Amazon EC2 resources, you must specify at least one vCPU. This is required but can be specified in several places; it must be specified for each node at least once. The default for the Fargate On-Demand vCPU resource count quota is 6 vCPUs. For more information about Fargate quotas, see [Fargate quotas](https://docs.aws.amazon.com/general/latest/gr/ecs-service.html#service-quotas-fargate) in the Amazon Web Services General Reference. For jobs that are running on Fargate resources, `value` must match one of the supported values, and the `MEMORY` values must be one of the values supported for that `VCPU` value. The supported values are 0.25, 0.5, 1, 2, 4, 8, and 16.
            {% dl %}
            
            {% dt %}
value = 0.25
            {% /dt %}

            {% dd %}
            `MEMORY` = 512, 1024, or 2048
                        {% /dd %}

            {% dt %}
value = 0.5
            {% /dt %}

            {% dd %}
            `MEMORY` = 1024, 2048, 3072, or 4096
                        {% /dd %}

            {% dt %}
value = 1
            {% /dt %}

            {% dd %}
            `MEMORY` = 2048, 3072, 4096, 5120, 6144, 7168, or 8192
                        {% /dd %}

            {% dt %}
value = 2
            {% /dt %}

            {% dd %}
            `MEMORY` = 4096, 5120, 6144, 7168, 8192, 9216, 10240, 11264, 12288, 13312, 14336, 15360, or 16384
                        {% /dd %}

            {% dt %}
value = 4
            {% /dt %}

            {% dd %}
            `MEMORY` = 8192, 9216, 10240, 11264, 12288, 13312, 14336, 15360, 16384, 17408, 18432, 19456, 20480, 21504, 22528, 23552, 24576, 25600, 26624, 27648, 28672, 29696, or 30720
                        {% /dd %}

            {% dt %}
value = 8
            {% /dt %}

            {% dd %}
            `MEMORY` = 16384, 20480, 24576, 28672, 32768, 36864, 40960, 45056, 49152, 53248, 57344, or 61440
                        {% /dd %}

            {% dt %}
value = 16
            {% /dt %}

            {% dd %}
            `MEMORY` = 32768, 40960, 49152, 57344, 65536, 73728, 81920, 90112, 98304, 106496, 114688, or 122880
                        {% /dd %}

                        {% /dl %}

                        {% /dd %}

                        {% /dl %}
        - `secrets`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `secrets`**Description**: The secrets to pass to the container. For more information, see [Specifying Sensitive Data](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data.html) in the Amazon Elastic Container Service Developer Guide.
          - `name`**Type**: `STRING`**Provider name**: `name`**Description**: The name of the secret.
          - `value_from`**Type**: `STRING`**Provider name**: `valueFrom`**Description**: The secret to expose to the container. The supported values are either the full Amazon Resource Name (ARN) of the Secrets Manager secret or the full ARN of the parameter in the Amazon Web Services Systems Manager Parameter Store. If the Amazon Web Services Systems Manager Parameter Store parameter exists in the same Region as the job you're launching, then you can use either the full Amazon Resource Name (ARN) or name of the parameter. If the parameter exists in a different Region, then the full ARN must be specified.
        - `ulimits`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `ulimits`**Description**: A list of `ulimits` to set in the container. If a `ulimit` value is specified in a task definition, it overrides the default values set by Docker. This parameter maps to `Ulimits` in the [Create a container](https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.35/) and the `--ulimit` option to [docker run](https://docs.docker.com/engine/reference/run/#security-configuration). Amazon ECS tasks hosted on Fargate use the default resource limit values set by the operating system, with the exception of the `nofile` resource limit parameter, which Fargate overrides. The `nofile` resource limit sets a restriction on the number of open files that a container can use. The default `nofile` soft limit is `1024` and the default hard limit is `65535`. This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version --format '{{.Server.APIVersion}}'` This parameter is not supported for Windows containers.
          - `hard_limit`**Type**: `INT32`**Provider name**: `hardLimit`**Description**: The hard limit for the `ulimit` type.
          - `name`**Type**: `STRING`**Provider name**: `name`**Description**: The `type` of the `ulimit`. Valid values are: `core` | `cpu` | `data` | `fsize` | `locks` | `memlock` | `msgqueue` | `nice` | `nofile` | `nproc` | `rss` | `rtprio` | `rttime` | `sigpending` | `stack`.
          - `soft_limit`**Type**: `INT32`**Provider name**: `softLimit`**Description**: The soft limit for the `ulimit` type.
        - `user`**Type**: `STRING`**Provider name**: `user`**Description**: The user to use inside the container. This parameter maps to `User` in the Create a container section of the Docker Remote API and the `--user` option to docker run. When running tasks using the `host` network mode, don't run containers using the root user (UID 0). We recommend using a non-root user for better security. You can specify the `user` using the following formats. If specifying a UID or GID, you must specify it as a positive integer.
          - `user`
          - `user:group`
          - `uid`
          - `uid:gid`
          - `user:gid`
          - `uid:group`
This parameter is not supported for Windows containers.
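Taken together, the `secrets`, `ulimits`, and `user` fields above might appear in a job definition's container properties as follows. This is a minimal, hypothetical sketch using the camelCase provider names documented on this page; the ARN, limit, and UID values are placeholders.

```python
# Hypothetical containerProperties fragment (placeholder values throughout).
container_properties = {
    "secrets": [
        {
            "name": "DB_PASSWORD",  # environment variable name inside the container
            # Full Secrets Manager ARN; a bare name only works for an SSM
            # parameter in the same Region as the job.
            "valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:db-pass-AbCdEf",
        }
    ],
    "ulimits": [
        # Raise the open-file limit; a soft limit must not exceed its hard limit.
        {"name": "nofile", "softLimit": 8192, "hardLimit": 65535}
    ],
    # uid:gid form; UIDs and GIDs must be positive integers.
    "user": "1000:1000",
}

# Sanity check mirroring the ulimit constraint described above.
for u in container_properties["ulimits"]:
    assert u["softLimit"] <= u["hardLimit"]
```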
      - `ephemeral_storage`**Type**: `STRUCT`**Provider name**: `ephemeralStorage`**Description**: The amount of ephemeral storage to allocate for the task. This parameter is used to expand the total amount of ephemeral storage available, beyond the default amount, for tasks hosted on Fargate.
        - `size_in_gib`**Type**: `INT32`**Provider name**: `sizeInGiB`**Description**: The total amount, in GiB, of ephemeral storage to set for the task. The minimum supported value is `21` GiB and the maximum supported value is `200` GiB.
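For example, the documented 21-200 GiB bound on `sizeInGiB` could be validated before building the block. The helper function below is ours for illustration, not part of any AWS SDK:

```python
def ephemeral_storage(size_in_gib: int) -> dict:
    """Build an ephemeralStorage block, enforcing the documented bounds."""
    if not 21 <= size_in_gib <= 200:
        raise ValueError("sizeInGiB must be between 21 and 200 GiB")
    return {"sizeInGiB": size_in_gib}
```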
      - `execution_role_arn`**Type**: `STRING`**Provider name**: `executionRoleArn`**Description**: The Amazon Resource Name (ARN) of the execution role that Batch can assume. For jobs that run on Fargate resources, you must provide an execution role. For more information, see [Batch execution IAM role](https://docs.aws.amazon.com/batch/latest/userguide/execution-IAM-role.html) in the Batch User Guide.
      - `ipc_mode`**Type**: `STRING`**Provider name**: `ipcMode`**Description**: The IPC resource namespace to use for the containers in the task. The valid values are `host`, `task`, or `none`. If `host` is specified, all containers within the tasks that specified the `host` IPC mode on the same container instance share the same IPC resources with the host Amazon EC2 instance. If `task` is specified, all containers within the specified `task` share the same IPC resources. If `none` is specified, the IPC resources within the containers of a task are private, and are not shared with other containers in a task or on the container instance. If no value is specified, then the IPC resource namespace sharing depends on the Docker daemon setting on the container instance. For more information, see [IPC settings](https://docs.docker.com/engine/reference/run/#ipc-settings---ipc) in the Docker run reference.
      - `network_configuration`**Type**: `STRUCT`**Provider name**: `networkConfiguration`**Description**: The network configuration for jobs that are running on Fargate resources. Jobs that are running on Amazon EC2 resources must not specify this parameter.
        - `assign_public_ip`**Type**: `STRING`**Provider name**: `assignPublicIp`**Description**: Indicates whether the job has a public IP address. For a job that's running on Fargate resources in a private subnet to send outbound traffic to the internet (for example, to pull container images), the private subnet requires a NAT gateway be attached to route requests to the internet. For more information, see [Amazon ECS task networking](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking.html) in the Amazon Elastic Container Service Developer Guide. The default value is "`DISABLED`".
      - `pid_mode`**Type**: `STRING`**Provider name**: `pidMode`**Description**: The process namespace to use for the containers in the task. The valid values are `host` or `task`. For example, monitoring sidecars might need `pidMode` to access information about other containers running in the same task. If `host` is specified, all containers within the tasks that specified the `host` PID mode on the same container instance share the process namespace with the host Amazon EC2 instance. If `task` is specified, all containers within the specified task share the same process namespace. If no value is specified, the default is a private namespace for each container. For more information, see [PID settings](https://docs.docker.com/engine/reference/run/#pid-settings---pid) in the Docker run reference.
      - `platform_version`**Type**: `STRING`**Provider name**: `platformVersion`**Description**: The Fargate platform version where the jobs are running. A platform version is specified only for jobs that are running on Fargate resources. If one isn't specified, the `LATEST` platform version is used by default. This uses a recent, approved version of the Fargate platform for compute resources. For more information, see [Fargate platform versions](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/platform_versions.html) in the Amazon Elastic Container Service Developer Guide.
      - `runtime_platform`**Type**: `STRUCT`**Provider name**: `runtimePlatform`**Description**: An object that represents the compute environment architecture for Batch jobs on Fargate.
        - `cpu_architecture`**Type**: `STRING`**Provider name**: `cpuArchitecture`**Description**: The vCPU architecture. The default value is `X86_64`. Valid values are `X86_64` and `ARM64`. This parameter must be set to `X86_64` for Windows containers. Fargate Spot is not supported for `ARM64` and Windows-based containers on Fargate. A job queue will be blocked if a Fargate `ARM64` or Windows job is submitted to a job queue with only Fargate Spot compute environments. However, you can attach both `FARGATE` and `FARGATE_SPOT` compute environments to the same job queue.
        - `operating_system_family`**Type**: `STRING`**Provider name**: `operatingSystemFamily`**Description**: The operating system for the compute environment. Valid values are: `LINUX` (default), `WINDOWS_SERVER_2019_CORE`, `WINDOWS_SERVER_2019_FULL`, `WINDOWS_SERVER_2022_CORE`, and `WINDOWS_SERVER_2022_FULL`. The following parameters can't be set for Windows containers: `linuxParameters`, `privileged`, `user`, `ulimits`, `readonlyRootFilesystem`, and `efsVolumeConfiguration`. The Batch Scheduler checks the compute environments that are attached to the job queue before registering a task definition with Fargate. In this scenario, the job queue is where the job is submitted. If the job requires a Windows container and the first compute environment is `LINUX`, the compute environment is skipped and the next compute environment is checked until a Windows-based compute environment is found. Fargate Spot is not supported for `ARM64` and Windows-based containers on Fargate. A job queue will be blocked if a Fargate `ARM64` or Windows job is submitted to a job queue with only Fargate Spot compute environments. However, you can attach both `FARGATE` and `FARGATE_SPOT` compute environments to the same job queue.
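A hypothetical helper illustrating the constraint above that Windows containers must use the `X86_64` architecture (the function name and validation logic are ours, not an AWS API):

```python
def runtime_platform(os_family: str = "LINUX", cpu_arch: str = "X86_64") -> dict:
    """Build a runtimePlatform block; Windows containers require X86_64."""
    if os_family.startswith("WINDOWS") and cpu_arch != "X86_64":
        raise ValueError("Windows containers must use the X86_64 architecture")
    return {"operatingSystemFamily": os_family, "cpuArchitecture": cpu_arch}
```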
      - `task_role_arn`**Type**: `STRING`**Provider name**: `taskRoleArn`**Description**: The Amazon Resource Name (ARN) that's associated with the Amazon ECS task. This object is comparable to [ContainerProperties:jobRoleArn](https://docs.aws.amazon.com/batch/latest/APIReference/API_ContainerProperties.html).
      - `volumes`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `volumes`**Description**: A list of volumes that are associated with the job.
        - `efs_volume_configuration`**Type**: `STRUCT`**Provider name**: `efsVolumeConfiguration`**Description**: This parameter is specified when you're using an Amazon Elastic File System file system for job storage. Jobs that are running on Fargate resources must specify a `platformVersion` of at least `1.4.0`.
          - `authorization_config`**Type**: `STRUCT`**Provider name**: `authorizationConfig`**Description**: The authorization configuration details for the Amazon EFS file system.
            - `access_point_id`**Type**: `STRING`**Provider name**: `accessPointId`**Description**: The Amazon EFS access point ID to use. If an access point is specified, the root directory value specified in the `EFSVolumeConfiguration` must either be omitted or set to `/` which enforces the path set on the EFS access point. If an access point is used, transit encryption must be enabled in the `EFSVolumeConfiguration`. For more information, see [Working with Amazon EFS access points](https://docs.aws.amazon.com/efs/latest/ug/efs-access-points.html) in the Amazon Elastic File System User Guide.
            - `iam`**Type**: `STRING`**Provider name**: `iam`**Description**: Whether or not to use the Batch job IAM role defined in a job definition when mounting the Amazon EFS file system. If enabled, transit encryption must be enabled in the `EFSVolumeConfiguration`. If this parameter is omitted, the default value of `DISABLED` is used. For more information, see [Using Amazon EFS access points](https://docs.aws.amazon.com/batch/latest/userguide/efs-volumes.html#efs-volume-accesspoints) in the Batch User Guide. EFS IAM authorization requires that `TransitEncryption` be `ENABLED` and that a `JobRoleArn` is specified.
          - `file_system_id`**Type**: `STRING`**Provider name**: `fileSystemId`**Description**: The Amazon EFS file system ID to use.
          - `root_directory`**Type**: `STRING`**Provider name**: `rootDirectory`**Description**: The directory within the Amazon EFS file system to mount as the root directory inside the host. If this parameter is omitted, the root of the Amazon EFS volume is used instead. Specifying `/` has the same effect as omitting this parameter. The maximum length is 4,096 characters. If an EFS access point is specified in the `authorizationConfig`, the root directory parameter must either be omitted or set to `/`, which enforces the path set on the Amazon EFS access point.
          - `transit_encryption`**Type**: `STRING`**Provider name**: `transitEncryption`**Description**: Determines whether to enable encryption for Amazon EFS data in transit between the Amazon ECS host and the Amazon EFS server. Transit encryption must be enabled if Amazon EFS IAM authorization is used. If this parameter is omitted, the default value of `DISABLED` is used. For more information, see [Encrypting data in transit](https://docs.aws.amazon.com/efs/latest/ug/encryption-in-transit.html) in the Amazon Elastic File System User Guide.
          - `transit_encryption_port`**Type**: `INT32`**Provider name**: `transitEncryptionPort`**Description**: The port to use when sending encrypted data between the Amazon ECS host and the Amazon EFS server. If you don't specify a transit encryption port, it uses the port selection strategy that the Amazon EFS mount helper uses. The value must be between 0 and 65,535. For more information, see [EFS mount helper](https://docs.aws.amazon.com/efs/latest/ug/efs-mount-helper.html) in the Amazon Elastic File System User Guide.
        - `host`**Type**: `STRUCT`**Provider name**: `host`**Description**: The contents of the `host` parameter determine whether your data volume persists on the host container instance and where it's stored. If the host parameter is empty, then the Docker daemon assigns a host path for your data volume. However, the data isn't guaranteed to persist after the containers that are associated with it stop running. This parameter isn't applicable to jobs that are running on Fargate resources and shouldn't be provided.
          - `source_path`**Type**: `STRING`**Provider name**: `sourcePath`**Description**: The path on the host container instance that's presented to the container. If this parameter is empty, then the Docker daemon has assigned a host path for you. If this parameter contains a file location, then the data volume persists at the specified location on the host container instance until you delete it manually. If the source path location doesn't exist on the host container instance, the Docker daemon creates it. If the location does exist, the contents of the source path folder are exported. This parameter isn't applicable to jobs that run on Fargate resources. Don't provide this for these jobs.
        - `name`**Type**: `STRING`**Provider name**: `name`**Description**: The name of the volume. It can be up to 255 characters long. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_). This name is referenced in the `sourceVolume` parameter of container definition `mountPoints`.
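The volume fields above can be sketched as a single `volumes` entry backed by Amazon EFS. This is an illustrative fragment with placeholder IDs; per the fields documented above, using an access point requires transit encryption and a root directory that is `/` or omitted:

```python
# Hypothetical volumes list for a job definition (placeholder IDs).
volumes = [
    {
        "name": "shared-data",  # referenced by sourceVolume in container mountPoints
        "efsVolumeConfiguration": {
            "fileSystemId": "fs-0123456789abcdef0",         # placeholder
            "rootDirectory": "/",                           # must be "/" (or omitted) with an access point
            "transitEncryption": "ENABLED",                 # required with IAM auth or access points
            "authorizationConfig": {
                "accessPointId": "fsap-0123456789abcdef0",  # placeholder
                "iam": "ENABLED",                           # also requires a jobRoleArn
            },
        },
    }
]
```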
  - `eks_properties`**Type**: `STRUCT`**Provider name**: `eksProperties`**Description**: An object with properties that are specific to Amazon EKS-based jobs.
    - `pod_properties`**Type**: `STRUCT`**Provider name**: `podProperties`**Description**: The properties for the Kubernetes pod resources of a job.
      - `containers`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `containers`**Description**: The properties of the container that's used on the Amazon EKS pod. This object is limited to 10 elements.
        - `args`**Type**: `UNORDERED_LIST_STRING`**Provider name**: `args`**Description**: An array of arguments to the entrypoint. If this isn't specified, the `CMD` of the container image is used. This corresponds to the `args` member in the [Entrypoint](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#entrypoint) portion of the [Pod](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/) in Kubernetes. Environment variable references are expanded using the container's environment. If the referenced environment variable doesn't exist, the reference in the command isn't changed. For example, if the reference is to "`$(NAME1)`" and the `NAME1` environment variable doesn't exist, the command string will remain "`$(NAME1)`." `$$` is replaced with `$`, and the resulting string isn't expanded. For example, `$$(VAR_NAME)` is passed as `$(VAR_NAME)` whether or not the `VAR_NAME` environment variable exists. For more information, see [Dockerfile reference: CMD](https://docs.docker.com/engine/reference/builder/#cmd) and [Define a command and arguments for a pod](https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/) in the Kubernetes documentation.
        - `command`**Type**: `UNORDERED_LIST_STRING`**Provider name**: `command`**Description**: The entrypoint for the container. This isn't run within a shell. If this isn't specified, the `ENTRYPOINT` of the container image is used. Environment variable references are expanded using the container's environment. If the referenced environment variable doesn't exist, the reference in the command isn't changed. For example, if the reference is to "`$(NAME1)`" and the `NAME1` environment variable doesn't exist, the command string will remain "`$(NAME1)`." `$$` is replaced with `$` and the resulting string isn't expanded. For example, `$$(VAR_NAME)` will be passed as `$(VAR_NAME)` whether or not the `VAR_NAME` environment variable exists. The entrypoint can't be updated. For more information, see [ENTRYPOINT](https://docs.docker.com/engine/reference/builder/#entrypoint) in the Dockerfile reference and [Define a command and arguments for a container](https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/) and [Entrypoint](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#entrypoint) in the Kubernetes documentation.
        - `env`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `env`**Description**: The environment variables to pass to a container. Environment variables cannot start with "`AWS_BATCH`". This naming convention is reserved for variables that Batch sets.
          - `name`**Type**: `STRING`**Provider name**: `name`**Description**: The name of the environment variable.
          - `value`**Type**: `STRING`**Provider name**: `value`**Description**: The value of the environment variable.
        - `image`**Type**: `STRING`**Provider name**: `image`**Description**: The Docker image used to start the container.
        - `image_pull_policy`**Type**: `STRING`**Provider name**: `imagePullPolicy`**Description**: The image pull policy for the container. Supported values are `Always`, `IfNotPresent`, and `Never`. This parameter defaults to `IfNotPresent`. However, if the `:latest` tag is specified, it defaults to `Always`. For more information, see [Updating images](https://kubernetes.io/docs/concepts/containers/images/#updating-images) in the Kubernetes documentation.
        - `name`**Type**: `STRING`**Provider name**: `name`**Description**: The name of the container. If the name isn't specified, the default name "`Default`" is used. Each container in a pod must have a unique name.
        - `resources`**Type**: `STRUCT`**Provider name**: `resources`**Description**: The type and amount of resources to assign to a container. The supported resources include `memory`, `cpu`, and `nvidia.com/gpu`. For more information, see [Resource management for pods and containers](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/) in the Kubernetes documentation.
          - `limits`**Type**: `MAP_STRING_STRING`**Provider name**: `limits`**Description**: The type and quantity of the resources to reserve for the container. The values vary based on the `name` that's specified. Resources can be requested using either the `limits` or the `requests` objects.
            {% dl %}
            
            {% dt %}
memory
            {% /dt %}

            {% dd %}
            The memory hard limit (in MiB) for the container, using whole integers, with a "Mi" suffix. If your container attempts to exceed the memory specified, the container is terminated. You must specify at least 4 MiB of memory for a job. `memory` can be specified in `limits`, `requests`, or both. If `memory` is specified in both places, then the value that's specified in `limits` must be equal to the value that's specified in `requests`. To maximize your resource utilization, provide your jobs with as much memory as possible for the specific instance type that you are using. To learn how, see [Memory management](https://docs.aws.amazon.com/batch/latest/userguide/memory-management.html) in the Batch User Guide.
                        {% /dd %}

            {% dt %}
cpu
            {% /dt %}

            {% dd %}
            The number of CPUs that are reserved for the container. Values must be an even multiple of `0.25`. `cpu` can be specified in `limits`, `requests`, or both. If `cpu` is specified in both places, then the value that's specified in `limits` must be at least as large as the value that's specified in `requests`.
                        {% /dd %}

            {% dt %}
nvidia.com/gpu
            {% /dt %}

            {% dd %}
            The number of GPUs that are reserved for the container. Values must be a whole integer. `nvidia.com/gpu` can be specified in `limits`, `requests`, or both. If `nvidia.com/gpu` is specified in both places, then the value that's specified in `limits` must be equal to the value that's specified in `requests`.
                        {% /dd %}

                        {% /dl %}
          - `requests`**Type**: `MAP_STRING_STRING`**Provider name**: `requests`**Description**: The type and quantity of the resources to request for the container. The values vary based on the `name` that's specified. Resources can be requested by using either the `limits` or the `requests` objects.
            {% dl %}
            
            {% dt %}
memory
            {% /dt %}

            {% dd %}
            The memory hard limit (in MiB) for the container, using whole integers, with a "Mi" suffix. If your container attempts to exceed the memory specified, the container is terminated. You must specify at least 4 MiB of memory for a job. `memory` can be specified in `limits`, `requests`, or both. If `memory` is specified in both, then the value that's specified in `limits` must be equal to the value that's specified in `requests`. If you're trying to maximize your resource utilization by providing your jobs as much memory as possible for a particular instance type, see [Memory management](https://docs.aws.amazon.com/batch/latest/userguide/memory-management.html) in the Batch User Guide.
                        {% /dd %}

            {% dt %}
cpu
            {% /dt %}

            {% dd %}
            The number of CPUs that are reserved for the container. Values must be an even multiple of `0.25`. `cpu` can be specified in `limits`, `requests`, or both. If `cpu` is specified in both, then the value that's specified in `limits` must be at least as large as the value that's specified in `requests`.
                        {% /dd %}

            {% dt %}
nvidia.com/gpu
            {% /dt %}

            {% dd %}
            The number of GPUs that are reserved for the container. Values must be a whole integer. `nvidia.com/gpu` can be specified in `limits`, `requests`, or both. If `nvidia.com/gpu` is specified in both, then the value that's specified in `limits` must be equal to the value that's specified in `requests`.
                        {% /dd %}

                        {% /dl %}
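The `limits`/`requests` rules above can be expressed as a quick consistency check on an example `resources` block (the values are illustrative): `memory` and `nvidia.com/gpu` must match across the two objects, while the `cpu` value in `limits` must be at least as large as the one in `requests`.

```python
# Illustrative EKS resources block obeying the documented constraints.
resources = {
    "requests": {"memory": "2048Mi", "cpu": "1.0", "nvidia.com/gpu": "1"},
    "limits":   {"memory": "2048Mi", "cpu": "2.0", "nvidia.com/gpu": "1"},
}

# memory and GPU count must be equal across limits and requests.
assert resources["limits"]["memory"] == resources["requests"]["memory"]
assert resources["limits"]["nvidia.com/gpu"] == resources["requests"]["nvidia.com/gpu"]
# cpu in limits must be at least as large as in requests (multiples of 0.25).
assert float(resources["limits"]["cpu"]) >= float(resources["requests"]["cpu"])
```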
        - `security_context`**Type**: `STRUCT`**Provider name**: `securityContext`**Description**: The security context for a job. For more information, see [Configure a security context for a pod or container](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) in the Kubernetes documentation.
          - `allow_privilege_escalation`**Type**: `BOOLEAN`**Provider name**: `allowPrivilegeEscalation`**Description**: Whether or not a container or a Kubernetes pod is allowed to gain more privileges than its parent process. The default value is `false`.
          - `privileged`**Type**: `BOOLEAN`**Provider name**: `privileged`**Description**: When this parameter is `true`, the container is given elevated permissions on the host container instance. The level of permissions are similar to the `root` user permissions. The default value is `false`. This parameter maps to `privileged` policy in the [Privileged pod security policies](https://kubernetes.io/docs/concepts/security/pod-security-policy/#privileged) in the Kubernetes documentation.
          - `read_only_root_filesystem`**Type**: `BOOLEAN`**Provider name**: `readOnlyRootFilesystem`**Description**: When this parameter is `true`, the container is given read-only access to its root file system. The default value is `false`. This parameter maps to `ReadOnlyRootFilesystem` policy in the [Volumes and file systems pod security policies](https://kubernetes.io/docs/concepts/security/pod-security-policy/#volumes-and-file-systems) in the Kubernetes documentation.
          - `run_as_group`**Type**: `INT64`**Provider name**: `runAsGroup`**Description**: When this parameter is specified, the container is run as the specified group ID (`gid`). If this parameter isn't specified, the default is the group that's specified in the image metadata. This parameter maps to `RunAsGroup` and `MustRunAs` policy in the [Users and groups pod security policies](https://kubernetes.io/docs/concepts/security/pod-security-policy/#users-and-groups) in the Kubernetes documentation.
          - `run_as_non_root`**Type**: `BOOLEAN`**Provider name**: `runAsNonRoot`**Description**: When this parameter is specified, the container is run as a user with a `uid` other than 0. If this parameter isn't specified, no such rule is enforced. This parameter maps to `RunAsUser` and `MustRunAsNonRoot` policy in the [Users and groups pod security policies](https://kubernetes.io/docs/concepts/security/pod-security-policy/#users-and-groups) in the Kubernetes documentation.
          - `run_as_user`**Type**: `INT64`**Provider name**: `runAsUser`**Description**: When this parameter is specified, the container is run as the specified user ID (`uid`). If this parameter isn't specified, the default is the user that's specified in the image metadata. This parameter maps to `RunAsUser` and `MustRunAs` policy in the [Users and groups pod security policies](https://kubernetes.io/docs/concepts/security/pod-security-policy/#users-and-groups) in the Kubernetes documentation.
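A least-privilege `securityContext` combining the fields above might look like the following (the UID/GID values are illustrative):

```python
# Illustrative securityContext for an EKS-based Batch job.
security_context = {
    "runAsNonRoot": True,              # refuse to start as UID 0
    "runAsUser": 1000,                 # illustrative UID
    "runAsGroup": 1000,                # illustrative GID
    "allowPrivilegeEscalation": False, # block privilege escalation
    "privileged": False,               # no elevated host permissions
    "readOnlyRootFilesystem": True,    # read-only root file system
}
```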
        - `volume_mounts`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `volumeMounts`**Description**: The volume mounts for the container. Batch supports `emptyDir`, `hostPath`, and `secret` volume types. For more information about volumes and volume mounts in Kubernetes, see [Volumes](https://kubernetes.io/docs/concepts/storage/volumes/) in the Kubernetes documentation.
          - `mount_path`**Type**: `STRING`**Provider name**: `mountPath`**Description**: The path on the container where the volume is mounted.
          - `name`**Type**: `STRING`**Provider name**: `name`**Description**: The name of the volume mount. This must match the name of one of the volumes in the pod.
          - `read_only`**Type**: `BOOLEAN`**Provider name**: `readOnly`**Description**: If this value is `true`, the container has read-only access to the volume. Otherwise, the container can write to the volume. The default value is `false`.
          - `sub_path`**Type**: `STRING`**Provider name**: `subPath`**Description**: A sub-path inside the referenced volume instead of its root.
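A `volumeMounts` entry tying these fields together; the `name` must match a volume defined on the pod, and the paths here are placeholders:

```python
# Illustrative volumeMounts list (placeholder names and paths).
volume_mounts = [
    {
        "name": "shared-data",       # must match a pod volume name
        "mountPath": "/mnt/shared",  # path inside the container
        "readOnly": True,            # container cannot write to the volume
        "subPath": "jobs",           # mount this sub-path instead of the volume root
    }
]
```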
      - `dns_policy`**Type**: `STRING`**Provider name**: `dnsPolicy`**Description**: The DNS policy for the pod. The default value is `ClusterFirst`. If the `hostNetwork` parameter is not specified, the default is `ClusterFirstWithHostNet`. `ClusterFirst` indicates that any DNS query that does not match the configured cluster domain suffix is forwarded to the upstream nameserver inherited from the node. For more information, see [Pod's DNS policy](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy) in the Kubernetes documentation. Valid values: `Default` | `ClusterFirst` | `ClusterFirstWithHostNet`
      - `host_network`**Type**: `BOOLEAN`**Provider name**: `hostNetwork`**Description**: Indicates if the pod uses the host's network IP address. The default value is `true`. Setting this to `false` enables the Kubernetes pod networking model. Most Batch workloads are egress-only and don't require the overhead of IP allocation for each pod for incoming connections. For more information, see [Host namespaces](https://kubernetes.io/docs/concepts/security/pod-security-policy/#host-namespaces) and [Pod networking](https://kubernetes.io/docs/concepts/workloads/pods/#pod-networking) in the Kubernetes documentation.
      - `image_pull_secrets`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `imagePullSecrets`**Description**: References a Kubernetes secret resource. It holds a list of secrets that grant access to pull images from a private registry. `ImagePullSecret$name` is required when this object is used.
        - `name`**Type**: `STRING`**Provider name**: `name`**Description**: Provides a unique identifier for the `ImagePullSecret`. This object is required when `EksPodProperties$imagePullSecrets` is used.
      - `init_containers`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `initContainers`**Description**: These containers run before application containers, always run to completion, and must complete successfully before the next container starts. These containers are registered with the Amazon EKS Connector agent, and the registration information is persisted in the Kubernetes backend data store. For more information, see [Init Containers](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/) in the Kubernetes documentation. This object is limited to 10 elements.
        - `args`**Type**: `UNORDERED_LIST_STRING`**Provider name**: `args`**Description**: An array of arguments to the entrypoint. If this isn't specified, the `CMD` of the container image is used. This corresponds to the `args` member in the [Entrypoint](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#entrypoint) portion of the [Pod](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/) in Kubernetes. Environment variable references are expanded using the container's environment. If the referenced environment variable doesn't exist, the reference in the command isn't changed. For example, if the reference is to "`$(NAME1)`" and the `NAME1` environment variable doesn't exist, the command string will remain "`$(NAME1)`." `$$` is replaced with `$`, and the resulting string isn't expanded. For example, `$$(VAR_NAME)` is passed as `$(VAR_NAME)` whether or not the `VAR_NAME` environment variable exists. For more information, see [Dockerfile reference: CMD](https://docs.docker.com/engine/reference/builder/#cmd) and [Define a command and arguments for a pod](https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/) in the Kubernetes documentation.
        - `command`**Type**: `UNORDERED_LIST_STRING`**Provider name**: `command`**Description**: The entrypoint for the container. This isn't run within a shell. If this isn't specified, the `ENTRYPOINT` of the container image is used. Environment variable references are expanded using the container's environment. If the referenced environment variable doesn't exist, the reference in the command isn't changed. For example, if the reference is to "`$(NAME1)`" and the `NAME1` environment variable doesn't exist, the command string will remain "`$(NAME1)`." `$$` is replaced with `$` and the resulting string isn't expanded. For example, `$$(VAR_NAME)` will be passed as `$(VAR_NAME)` whether or not the `VAR_NAME` environment variable exists. The entrypoint can't be updated. For more information, see [ENTRYPOINT](https://docs.docker.com/engine/reference/builder/#entrypoint) in the Dockerfile reference and [Define a command and arguments for a container](https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/) and [Entrypoint](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#entrypoint) in the Kubernetes documentation.
        - `env`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `env`**Description**: The environment variables to pass to a container. Environment variables cannot start with "`AWS_BATCH`". This naming convention is reserved for variables that Batch sets.
          - `name`**Type**: `STRING`**Provider name**: `name`**Description**: The name of the environment variable.
          - `value`**Type**: `STRING`**Provider name**: `value`**Description**: The value of the environment variable.
        - `image`**Type**: `STRING`**Provider name**: `image`**Description**: The Docker image used to start the container.
        - `image_pull_policy`**Type**: `STRING`**Provider name**: `imagePullPolicy`**Description**: The image pull policy for the container. Supported values are `Always`, `IfNotPresent`, and `Never`. This parameter defaults to `IfNotPresent`. However, if the `:latest` tag is specified, it defaults to `Always`. For more information, see [Updating images](https://kubernetes.io/docs/concepts/containers/images/#updating-images) in the Kubernetes documentation.
        - `name`**Type**: `STRING`**Provider name**: `name`**Description**: The name of the container. If the name isn't specified, the default name "`Default`" is used. Each container in a pod must have a unique name.
        - `resources`**Type**: `STRUCT`**Provider name**: `resources`**Description**: The type and amount of resources to assign to a container. The supported resources include `memory`, `cpu`, and `nvidia.com/gpu`. For more information, see [Resource management for pods and containers](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/) in the Kubernetes documentation.
          - `limits`**Type**: `MAP_STRING_STRING`**Provider name**: `limits`**Description**: The type and quantity of the resources to reserve for the container. The values vary based on the `name` that's specified. Resources can be requested using either the `limits` or the `requests` objects.
            {% dl %}
            
            {% dt %}
memory
            {% /dt %}

            {% dd %}
            The memory hard limit (in MiB) for the container, using whole integers, with a "Mi" suffix. If your container attempts to exceed the memory specified, the container is terminated. You must specify at least 4 MiB of memory for a job. `memory` can be specified in `limits`, `requests`, or both. If `memory` is specified in both places, then the value that's specified in `limits` must be equal to the value that's specified in `requests`. To maximize your resource utilization, provide your jobs with as much memory as possible for the specific instance type that you are using. To learn how, see [Memory management](https://docs.aws.amazon.com/batch/latest/userguide/memory-management.html) in the Batch User Guide.
                        {% /dd %}

            {% dt %}
cpu
            {% /dt %}

            {% dd %}
            The number of CPUs that's reserved for the container. Values must be an even multiple of `0.25`. `cpu` can be specified in `limits`, `requests`, or both. If `cpu` is specified in both places, then the value that's specified in `limits` must be at least as large as the value that's specified in `requests`.
                        {% /dd %}

            {% dt %}
nvidia.com/gpu
            {% /dt %}

            {% dd %}
            The number of GPUs that's reserved for the container. Values must be a whole integer. `nvidia.com/gpu` can be specified in `limits`, `requests`, or both. If `nvidia.com/gpu` is specified in both places, then the value that's specified in `limits` must be equal to the value that's specified in `requests`.
                        {% /dd %}

                        {% /dl %}
          - `requests`**Type**: `MAP_STRING_STRING`**Provider name**: `requests`**Description**: The type and quantity of the resources to request for the container. The values vary based on the `name` that's specified. Resources can be requested by using either the `limits` or the `requests` objects.
            {% dl %}
            
            {% dt %}
memory
            {% /dt %}

            {% dd %}
            The memory hard limit (in MiB) for the container, using whole integers, with a "Mi" suffix. If your container attempts to exceed the memory specified, the container is terminated. You must specify at least 4 MiB of memory for a job. `memory` can be specified in `limits`, `requests`, or both. If `memory` is specified in both, then the value that's specified in `limits` must be equal to the value that's specified in `requests`. If you're trying to maximize your resource utilization by providing your jobs as much memory as possible for a particular instance type, see [Memory management](https://docs.aws.amazon.com/batch/latest/userguide/memory-management.html) in the Batch User Guide.
                        {% /dd %}

            {% dt %}
cpu
            {% /dt %}

            {% dd %}
            The number of CPUs that are reserved for the container. Values must be an even multiple of `0.25`. `cpu` can be specified in `limits`, `requests`, or both. If `cpu` is specified in both, then the value that's specified in `limits` must be at least as large as the value that's specified in `requests`.
                        {% /dd %}

            {% dt %}
nvidia.com/gpu
            {% /dt %}

            {% dd %}
            The number of GPUs that are reserved for the container. Values must be a whole integer. `nvidia.com/gpu` can be specified in `limits`, `requests`, or both. If `nvidia.com/gpu` is specified in both, then the value that's specified in `limits` must be equal to the value that's specified in `requests`.
                        {% /dd %}

                        {% /dl %}
        - `security_context`**Type**: `STRUCT`**Provider name**: `securityContext`**Description**: The security context for a job. For more information, see [Configure a security context for a pod or container](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) in the Kubernetes documentation.
          - `allow_privilege_escalation`**Type**: `BOOLEAN`**Provider name**: `allowPrivilegeEscalation`**Description**: Whether or not a container or a Kubernetes pod is allowed to gain more privileges than its parent process. The default value is `false`.
          - `privileged`**Type**: `BOOLEAN`**Provider name**: `privileged`**Description**: When this parameter is `true`, the container is given elevated permissions on the host container instance. The level of permissions are similar to the `root` user permissions. The default value is `false`. This parameter maps to `privileged` policy in the [Privileged pod security policies](https://kubernetes.io/docs/concepts/security/pod-security-policy/#privileged) in the Kubernetes documentation.
          - `read_only_root_filesystem`**Type**: `BOOLEAN`**Provider name**: `readOnlyRootFilesystem`**Description**: When this parameter is `true`, the container is given read-only access to its root file system. The default value is `false`. This parameter maps to `ReadOnlyRootFilesystem` policy in the [Volumes and file systems pod security policies](https://kubernetes.io/docs/concepts/security/pod-security-policy/#volumes-and-file-systems) in the Kubernetes documentation.
          - `run_as_group`**Type**: `INT64`**Provider name**: `runAsGroup`**Description**: When this parameter is specified, the container is run as the specified group ID (`gid`). If this parameter isn't specified, the default is the group that's specified in the image metadata. This parameter maps to `RunAsGroup` and `MustRunAs` policy in the [Users and groups pod security policies](https://kubernetes.io/docs/concepts/security/pod-security-policy/#users-and-groups) in the Kubernetes documentation.
          - `run_as_non_root`**Type**: `BOOLEAN`**Provider name**: `runAsNonRoot`**Description**: When this parameter is specified, the container is run as a user with a `uid` other than 0. If this parameter isn't specified, no such rule is enforced. This parameter maps to `RunAsUser` and `MustRunAsNonRoot` policy in the [Users and groups pod security policies](https://kubernetes.io/docs/concepts/security/pod-security-policy/#users-and-groups) in the Kubernetes documentation.
          - `run_as_user`**Type**: `INT64`**Provider name**: `runAsUser`**Description**: When this parameter is specified, the container is run as the specified user ID (`uid`). If this parameter isn't specified, the default is the user that's specified in the image metadata. This parameter maps to `RunAsUser` and `MustRunAs` policy in the [Users and groups pod security policies](https://kubernetes.io/docs/concepts/security/pod-security-policy/#users-and-groups) in the Kubernetes documentation.
        - `volume_mounts`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `volumeMounts`**Description**: The volume mounts for the container. Batch supports `emptyDir`, `hostPath`, and `secret` volume types. For more information about volumes and volume mounts in Kubernetes, see [Volumes](https://kubernetes.io/docs/concepts/storage/volumes/) in the Kubernetes documentation.
          - `mount_path`**Type**: `STRING`**Provider name**: `mountPath`**Description**: The path on the container where the volume is mounted.
          - `name`**Type**: `STRING`**Provider name**: `name`**Description**: The name of the volume mount. This must match the name of one of the volumes in the pod.
          - `read_only`**Type**: `BOOLEAN`**Provider name**: `readOnly`**Description**: If this value is `true`, the container has read-only access to the volume. Otherwise, the container can write to the volume. The default value is `false`.
          - `sub_path`**Type**: `STRING`**Provider name**: `subPath`**Description**: A sub-path inside the referenced volume instead of its root.
      - `metadata`**Type**: `STRUCT`**Provider name**: `metadata`**Description**: Metadata about the Kubernetes pod. For more information, see [Understanding Kubernetes Objects](https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/) in the Kubernetes documentation.
        - `annotations`**Type**: `MAP_STRING_STRING`**Provider name**: `annotations`**Description**: Key-value pairs used to attach arbitrary, non-identifying metadata to Kubernetes objects. Valid annotation keys have two segments: an optional prefix and a name, separated by a slash (/).
          - The prefix is optional and must be 253 characters or less. If specified, the prefix must be a DNS subdomain: a series of DNS labels separated by dots (.), and it must end with a slash (/).
          - The name segment is required and must be 63 characters or less. It can include alphanumeric characters ([a-z0-9A-Z]), dashes (-), underscores (_), and dots (.), but must begin and end with an alphanumeric character.
Annotation values must be 255 characters or less. Annotations can be added or modified at any time. Each resource can have multiple annotations.
        - `labels`**Type**: `MAP_STRING_STRING`**Provider name**: `labels`**Description**: Key-value pairs used to identify, sort, and organize Kubernetes resources. Can contain up to 63 uppercase letters, lowercase letters, numbers, hyphens (-), and underscores (_). Labels can be added or modified at any time. Each resource can have multiple labels, but each key must be unique for a given object.
        - `namespace`**Type**: `STRING`**Provider name**: `namespace`**Description**: The namespace of the Amazon EKS cluster. In Kubernetes, namespaces provide a mechanism for isolating groups of resources within a single cluster. Names of resources need to be unique within a namespace, but not across namespaces. Batch places Batch Job pods in this namespace. If this field is provided, the value can't be empty or null. It must meet the following requirements:
          - 1-63 characters long
          - Can't be set to `default`
          - Can't start with `kube`
          - Must match the following regular expression: `^[a-z0-9]([-a-z0-9]*[a-z0-9])?$`
For more information, see [Namespaces](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) in the Kubernetes documentation. This namespace can be different from the `kubernetesNamespace` set in the compute environment's `EksConfiguration`, but must have identical role-based access control (RBAC) roles as the compute environment's `kubernetesNamespace`. For multi-node parallel jobs, the same value must be provided across all the node ranges.
      - `service_account_name`**Type**: `STRING`**Provider name**: `serviceAccountName`**Description**: The name of the service account that's used to run the pod. For more information, see [Kubernetes service accounts](https://docs.aws.amazon.com/eks/latest/userguide/service-accounts.html) and [Configure a Kubernetes service account to assume an IAM role](https://docs.aws.amazon.com/eks/latest/userguide/associate-service-account-role.html) in the Amazon EKS User Guide and [Configure service accounts for pods](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/) in the Kubernetes documentation.
      - `share_process_namespace`**Type**: `BOOLEAN`**Provider name**: `shareProcessNamespace`**Description**: Indicates if the processes in a container are shared, or visible, to other containers in the same pod. For more information, see [Share Process Namespace between Containers in a Pod](https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/).
      - `volumes`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `volumes`**Description**: Specifies the volumes for a job definition that uses Amazon EKS resources.
        - `empty_dir`**Type**: `STRUCT`**Provider name**: `emptyDir`**Description**: Specifies the configuration of a Kubernetes `emptyDir` volume. For more information, see [emptyDir](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) in the Kubernetes documentation.
          - `medium`**Type**: `STRING`**Provider name**: `medium`**Description**: The medium to store the volume. The default value is an empty string, which uses the storage of the node.
            {% dl %}
            
            {% dt %}
""
            {% /dt %}

            {% dd %}
            (Default) Use the disk storage of the node.
                        {% /dd %}

            {% dt %}
"Memory"
            {% /dt %}

            {% dd %}
            Use the `tmpfs` volume that's backed by the RAM of the node. Contents of the volume are lost when the node reboots, and any storage on the volume counts against the container's memory limit.
                        {% /dd %}

                        {% /dl %}
          - `size_limit`**Type**: `STRING`**Provider name**: `sizeLimit`**Description**: The maximum size of the volume. By default, there's no maximum size defined.
        - `host_path`**Type**: `STRUCT`**Provider name**: `hostPath`**Description**: Specifies the configuration of a Kubernetes `hostPath` volume. For more information, see [hostPath](https://kubernetes.io/docs/concepts/storage/volumes/#hostpath) in the Kubernetes documentation.
          - `path`**Type**: `STRING`**Provider name**: `path`**Description**: The path of the file or directory on the host to mount into containers on the pod.
        - `name`**Type**: `STRING`**Provider name**: `name`**Description**: The name of the volume. The name must be allowed as a DNS subdomain name. For more information, see [DNS subdomain names](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names) in the Kubernetes documentation.
        - `persistent_volume_claim`**Type**: `STRUCT`**Provider name**: `persistentVolumeClaim`**Description**: Specifies the configuration of a Kubernetes `persistentVolumeClaim` bounded to a `persistentVolume`. For more information, see [Persistent Volume Claims](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) in the Kubernetes documentation.
          - `claim_name`**Type**: `STRING`**Provider name**: `claimName`**Description**: The name of the `persistentVolumeClaim` bounded to a `persistentVolume`. For more information, see [Persistent Volume Claims](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) in the Kubernetes documentation.
          - `read_only`**Type**: `BOOLEAN`**Provider name**: `readOnly`**Description**: An optional Boolean value indicating whether the mount is read-only. The default is `false`. For more information, see [Read Only Mounts](https://kubernetes.io/docs/concepts/storage/volumes/#read-only-mounts) in the Kubernetes documentation.
        - `secret`**Type**: `STRUCT`**Provider name**: `secret`**Description**: Specifies the configuration of a Kubernetes `secret` volume. For more information, see [secret](https://kubernetes.io/docs/concepts/storage/volumes/#secret) in the Kubernetes documentation.
          - `optional`**Type**: `BOOLEAN`**Provider name**: `optional`**Description**: Specifies whether the secret or the secret's keys must be defined.
          - `secret_name`**Type**: `STRING`**Provider name**: `secretName`**Description**: The name of the secret. The name must be allowed as a DNS subdomain name. For more information, see [DNS subdomain names](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names) in the Kubernetes documentation.
  - `instance_types`**Type**: `UNORDERED_LIST_STRING`**Provider name**: `instanceTypes`**Description**: The instance types of the underlying host infrastructure of a multi-node parallel job.This parameter isn't applicable to jobs that are running on Fargate resources. In addition, this list object is currently limited to one element.
  - `target_nodes`**Type**: `STRING`**Provider name**: `targetNodes`**Description**: The range of nodes, using node index values. A range of `0:3` indicates nodes with index values of `0` through `3`. If the starting range value is omitted (`:n`), then `0` is used to start the range. If the ending range value is omitted (`n:`), then the highest possible node index is used to end the range. Your accumulative node ranges must account for all nodes (`0:n`). You can nest node ranges (for example, `0:10` and `4:5`). In this case, the `4:5` range properties override the `0:10` properties.
- `num_nodes`**Type**: `INT32`**Provider name**: `numNodes`**Description**: The number of nodes that are associated with a multi-node parallel job.
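
The `targetNodes` range syntax above can be sketched with a small parser. This helper is purely illustrative and not part of the Batch API; the range strings and node count below are hypothetical.

```python
def parse_target_nodes(rng: str, num_nodes: int) -> tuple[int, int]:
    """Resolve a targetNodes range like "0:3", ":5", or "4:".

    An omitted starting value means index 0; an omitted ending value means
    the highest possible node index (num_nodes - 1).
    """
    start, _, end = rng.partition(":")
    return (int(start) if start else 0,
            int(end) if end else num_nodes - 1)

# For a 10-node job, ":5" covers nodes 0-5 and "4:" covers nodes 4-9; the
# "4:" properties would override the overlapping portion of a "0:10" range.
ranges = [parse_target_nodes(r, 10) for r in ("0:3", ":5", "4:")]
```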

## `parameters`{% #parameters %}

**Type**: `MAP_STRING_STRING`**Provider name**: `parameters`**Description**: Default parameters or parameter substitution placeholders that are set in the job definition. Parameters are specified as a key-value pair mapping. Parameters in a `SubmitJob` request override any corresponding parameter defaults from the job definition. For more information about specifying parameters, see [Job definition parameters](https://docs.aws.amazon.com/batch/latest/userguide/job_definition_parameters.html) in the Batch User Guide.
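
As a sketch of how these placeholders behave, the snippet below substitutes `Ref::` tokens in a container command from the parameters map. The parameter names and command are hypothetical, and the helper only illustrates the substitution rule, not Batch's internals.

```python
# Default parameters from the job definition; a SubmitJob request could
# override either value.
parameters = {"inputfile": "s3://my-bucket/in.txt", "codec": "mp4"}

# A container command using Ref:: placeholders.
command = ["ffmpeg", "-i", "Ref::inputfile", "-c:v", "Ref::codec", "out.mp4"]

def substitute(cmd, params):
    # Replace each Ref::<name> token with its value; unknown refs pass through.
    return [params.get(tok[len("Ref::"):], tok) if tok.startswith("Ref::") else tok
            for tok in cmd]
```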

## `platform_capabilities`{% #platform_capabilities %}

**Type**: `UNORDERED_LIST_STRING`**Provider name**: `platformCapabilities`**Description**: The platform capabilities required by the job definition. If no value is specified, it defaults to `EC2`. Jobs run on Fargate resources specify `FARGATE`.

## `propagate_tags`{% #propagate_tags %}

**Type**: `BOOLEAN`**Provider name**: `propagateTags`**Description**: Specifies whether to propagate the tags from the job or job definition to the corresponding Amazon ECS task. If no value is specified, the tags aren't propagated. Tags can only be propagated to the tasks when the tasks are created. For tags with the same name, job tags are given priority over job definition tags. If the total number of combined tags from the job and job definition is over 50, the job is moved to the `FAILED` state.
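
The priority rule above can be sketched as a dictionary merge, where job tags override job-definition tags for duplicate keys. The tag names here are hypothetical.

```python
# Hypothetical tag sets on a job definition and on the job itself.
job_definition_tags = {"team": "batch", "env": "staging"}
job_tags = {"env": "prod"}

# Job tags take priority for duplicate keys; the combined count must not
# exceed 50, or the job is moved to the FAILED state.
combined = {**job_definition_tags, **job_tags}
```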

## `retry_strategy`{% #retry_strategy %}

**Type**: `STRUCT`**Provider name**: `retryStrategy`**Description**: The retry strategy to use for failed jobs that are submitted with this job definition.

- `attempts`**Type**: `INT32`**Provider name**: `attempts`**Description**: The number of times to move a job to the `RUNNABLE` status. You can specify between 1 and 10 attempts. If the value of `attempts` is greater than one, the job is retried on failure up to that many times.
- `evaluate_on_exit`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `evaluateOnExit`**Description**: Array of up to 5 objects that specify the conditions where jobs are retried or failed. If this parameter is specified, then the `attempts` parameter must also be specified. If none of the listed conditions match, then the job is retried.
  - `action`**Type**: `STRING`**Provider name**: `action`**Description**: Specifies the action to take if all of the specified conditions (`onStatusReason`, `onReason`, and `onExitCode`) are met. The values aren't case sensitive.
  - `on_exit_code`**Type**: `STRING`**Provider name**: `onExitCode`**Description**: Contains a glob pattern to match against the decimal representation of the `ExitCode` returned for a job. The pattern can contain up to 512 characters. It can contain only numbers, and can optionally end with an asterisk (*) so that only the start of the string needs to be an exact match.
  - `on_reason`**Type**: `STRING`**Provider name**: `onReason`**Description**: Contains a glob pattern to match against the `Reason` returned for a job. The pattern can contain up to 512 characters. It can contain letters, numbers, periods (.), colons (:), and white space (including spaces and tabs). It can optionally end with an asterisk (*) so that only the start of the string needs to be an exact match.
  - `on_status_reason`**Type**: `STRING`**Provider name**: `onStatusReason`**Description**: Contains a glob pattern to match against the `StatusReason` returned for a job. The pattern can contain up to 512 characters. It can contain letters, numbers, periods (.), colons (:), and white spaces (including spaces or tabs). It can optionally end with an asterisk (*) so that only the start of the string needs to be an exact match.
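
A sketch of how the conditions above compose: rules are checked in order, the first rule whose conditions all match decides the action, and a job with no matching rule is retried. Glob matching is simplified here with `fnmatch`, and the rules themselves are hypothetical.

```python
from fnmatch import fnmatch

# Hypothetical evaluateOnExit rules: retry OOM kills (exit code 137),
# fail fast on image pull errors, and fail everything else.
rules = [
    {"onExitCode": "137", "action": "RETRY"},
    {"onReason": "CannotPullContainerError*", "action": "EXIT"},
    {"onExitCode": "*", "action": "EXIT"},
]

def decide(exit_code, reason, rules):
    for rule in rules:
        # All conditions present on a rule must match for it to apply.
        if (fnmatch(str(exit_code), rule.get("onExitCode", "*"))
                and fnmatch(reason, rule.get("onReason", "*"))):
            return rule["action"]
    return "RETRY"  # no condition matched, so the job is retried
```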

## `revision`{% #revision %}

**Type**: `INT32`**Provider name**: `revision`**Description**: The revision of the job definition.

## `scheduling_priority`{% #scheduling_priority %}

**Type**: `INT32`**Provider name**: `schedulingPriority`**Description**: The scheduling priority of the job definition. This only affects jobs in job queues with a fair share policy. Jobs with a higher scheduling priority are scheduled before jobs with a lower scheduling priority.
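
For illustration, the ordering a fair share queue derives from this field can be sketched as a descending sort; the job names and priorities below are hypothetical.

```python
# Hypothetical jobs waiting in a fair share queue.
jobs = [
    {"jobName": "low", "schedulingPriority": 1},
    {"jobName": "high", "schedulingPriority": 100},
    {"jobName": "mid", "schedulingPriority": 50},
]

# Jobs with a higher schedulingPriority are scheduled first.
ordered = sorted(jobs, key=lambda j: j["schedulingPriority"], reverse=True)
```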

## `status`{% #status %}

**Type**: `STRING`**Provider name**: `status`**Description**: The status of the job definition.

## `tags`{% #tags %}

**Type**: `UNORDERED_LIST_STRING`

## `timeout`{% #timeout %}

**Type**: `STRUCT`**Provider name**: `timeout`**Description**: The timeout time for jobs that are submitted with this job definition. After the amount of time you specify passes, Batch terminates your jobs if they aren't finished.

- `attempt_duration_seconds`**Type**: `INT32`**Provider name**: `attemptDurationSeconds`**Description**: The job timeout time (in seconds) that's measured from the job attempt's `startedAt` timestamp. After this time passes, Batch terminates your jobs if they aren't finished. The minimum value for the timeout is 60 seconds. For array jobs, the timeout applies to the child jobs, not to the parent array job. For multi-node parallel (MNP) jobs, the timeout applies to the whole job, not to the individual nodes.
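
As a sketch, the deadline implied by `attemptDurationSeconds` can be computed from the attempt's `startedAt` timestamp. The timestamp below is hypothetical, and this only mirrors the rule described above rather than reproducing Batch's scheduler.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical timeout fragment and job attempt start time.
timeout = {"attemptDurationSeconds": 3600}
started_at = datetime(2024, 1, 1, 12, 0, 0, tzinfo=timezone.utc)

# Batch terminates the attempt once this deadline passes, if it isn't finished.
deadline = started_at + timedelta(seconds=timeout["attemptDurationSeconds"])
```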

## `type`{% #type %}

**Type**: `STRING`**Provider name**: `type`**Description**: The type of job definition. It's either `container` or `multinode`. If the job is run on Fargate resources, then `multinode` isn't supported. For more information about multi-node parallel jobs, see [Creating a multi-node parallel job definition](https://docs.aws.amazon.com/batch/latest/userguide/multi-node-job-def.html) in the Batch User Guide.
