---
title: Amazon ECS on AWS Fargate
description: Track metrics for containers running with ECS Fargate
breadcrumbs: Docs > Integrations > Amazon ECS on AWS Fargate
---

# Amazon ECS on AWS Fargate
Supported OS: Linux, Windows

Integration version: 7.3.0
## Overview{% #overview %}

{% alert level="warning" %}
This page describes the ECS Fargate integration. For EKS Fargate, see the documentation for Datadog's [EKS Fargate integration](http://docs.datadoghq.com/integrations/eks_fargate).
{% /alert %}

Get metrics from all your containers running in ECS Fargate:

- CPU and memory usage and limit metrics
- Application metrics collected through Datadog integrations or custom metrics

The Datadog Agent retrieves metrics for the task definition's containers with the ECS task metadata endpoint. According to the [ECS Documentation](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-metadata-endpoint.html) on that endpoint:

- This endpoint returns Docker stats JSON for all of the containers associated with the task. For more information about each of the returned stats, see [ContainerStats](https://docs.docker.com/engine/api/v1.30/#operation/ContainerStats) in the Docker API documentation.

The Task Metadata endpoint is only available from within the task definition itself, which is why the Datadog Agent needs to be run as an additional container within each task definition to be monitored.
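As a purely illustrative sketch, a container inside a task could query the stats route of the v4 metadata endpoint using the `ECS_CONTAINER_METADATA_URI_V4` environment variable that Fargate injects into each container (the helper function and printed keys are assumptions, not part of the Agent):

```python
import json
import os
import urllib.request


def task_stats_url(base_uri: str) -> str:
    """Build the Docker-stats URL from the metadata URI ECS injects."""
    return base_uri.rstrip("/") + "/task/stats"


if __name__ == "__main__":
    # ECS_CONTAINER_METADATA_URI_V4 is only set inside a running ECS task.
    base = os.environ.get("ECS_CONTAINER_METADATA_URI_V4")
    if base:
        with urllib.request.urlopen(task_stats_url(base)) as resp:
            # Docker stats JSON, keyed by container ID.
            stats = json.load(resp)
            print(list(stats))
```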

To enable metric collection, set the environment variable `ECS_FARGATE` to `"true"` in the Datadog container definition.
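A minimal sketch of the relevant portion of the Agent container definition (the API key is a placeholder; the full setup is covered in the Setup section of this page):

```json
{
  "name": "datadog-agent",
  "image": "public.ecr.aws/datadog/agent:latest",
  "environment": [
    { "name": "DD_API_KEY", "value": "<DATADOG_API_KEY>" },
    { "name": "ECS_FARGATE", "value": "true" }
  ]
}
```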

## Setup{% #setup %}

The following steps cover setup of the Datadog Container Agent within Amazon ECS Fargate. **Note**: Datadog Agent version 6.1.1 or higher is needed to take full advantage of the Fargate integration.

Tasks that do not have the Datadog Agent still report metrics through CloudWatch; however, the Agent is needed for Autodiscovery, detailed container metrics, tracing, and more. Additionally, CloudWatch metrics are less granular and have more latency in reporting than metrics shipped directly through the Datadog Agent.

### Installation{% #installation %}

{% alert level="info" %}
You can also monitor AWS Batch jobs on ECS Fargate. See Installation for AWS Batch.
{% /alert %}

To monitor your ECS Fargate tasks with Datadog, run the Agent as a container in the **same task definition** as your application containers. Each task definition to be monitored should include a Datadog Agent container in addition to the application containers. Follow these setup steps:

1. **Create an ECS Fargate task**
1. **Create or Modify your IAM Policy**
1. **Run the task as a replica service**

#### Create an ECS Fargate task{% #create-an-ecs-fargate-task %}

The primary unit of work in Fargate is the task, which is configured in the task definition. A task definition is comparable to a pod in Kubernetes. A task definition must contain one or more containers. In order to run the Datadog Agent, create your task definition to run your application container(s), as well as the Datadog Agent container.

The instructions below show you how to configure the task using the [Amazon Web Console](https://aws.amazon.com/console), [AWS CLI tools](https://aws.amazon.com/cli), or [AWS CloudFormation](https://aws.amazon.com/cloudformation/).

{% tab title="Web UI" %}
##### Web UI Task Definition{% #web-ui-task-definition %}

1. Log in to your [AWS Web Console](https://aws.amazon.com/console) and navigate to the ECS section.
1. Click on **Task Definitions** in the left menu, then click the **Create new Task Definition** button or choose an existing Fargate task definition.
1. For new task definitions:
   1. Select **Fargate** as the launch type, then click the **Next step** button.
   1. Enter a **Task Definition Name**, such as `my-app-and-datadog`.
   1. Select a task execution IAM role. See permission requirements in the Create or Modify your IAM Policy section below.
   1. Choose **Task memory** and **Task CPU** based on your needs.
1. Click the **Add container** button to begin adding the Datadog Agent container.
   1. For **Container name** enter `datadog-agent`.
   1. For **Image** enter `public.ecr.aws/datadog/agent:latest`.
   1. For **Env Variables**, add the **Key** `DD_API_KEY` and enter your [Datadog API Key](https://app.datadoghq.com/organization-settings/api-keys) as the value.
   1. Add another environment variable using the **Key** `ECS_FARGATE` and the value `true`. Click **Add** to add the container.
   1. Add another environment variable using the **Key** `DD_SITE` and the value of your [Datadog site](https://docs.datadoghq.com/getting_started/site/) (for example, `datadoghq.com`). This defaults to `datadoghq.com` if you don't set it.
   1. (Windows Only) Select `C:\` as the working directory.
1. Add your other application containers to the task definition. For details on collecting integration metrics, see [Integration Setup for ECS Fargate](http://docs.datadoghq.com/integrations/faq/integration-setup-ecs-fargate).
1. Click **Create** to create the task definition.

{% /tab %}

{% tab title="AWS CLI" %}
##### AWS CLI Task Definition{% #aws-cli-task-definition %}

1. Download [datadog-agent-ecs-fargate.json](https://docs.datadoghq.com/resources/json/datadog-agent-ecs-fargate.json). **Note**: If you are using Internet Explorer, this may download as a gzip file, which contains the JSON file mentioned below.

2. Update the JSON with a `TASK_NAME`, your [Datadog API Key](https://app.datadoghq.com/organization-settings/api-keys), and the appropriate `DD_SITE` value for your [Datadog site](https://docs.datadoghq.com/getting_started/site/) (for example, `datadoghq.com`). **Note**: The environment variable `ECS_FARGATE` is already set to `"true"`.

Add your other application containers to the task definition. For details on collecting integration metrics, see [Integration Setup for ECS Fargate](http://docs.datadoghq.com/integrations/faq/integration-setup-ecs-fargate).

Optionally, add an Agent health check.

Add the following to your ECS task definition to create an Agent health check:

```json
"healthCheck": {
  "retries": 3,
  "command": ["CMD-SHELL","agent health"],
  "timeout": 5,
  "interval": 30,
  "startPeriod": 15
}
```

Execute the following command to register the ECS task definition:

```bash
aws ecs register-task-definition --cli-input-json file://<PATH_TO_FILE>/datadog-agent-ecs-fargate.json
```

{% /tab %}

{% tab title="CloudFormation" %}
##### AWS CloudFormation Task Definition{% #aws-cloudformation-task-definition %}

You can use [AWS CloudFormation](https://aws.amazon.com/cloudformation/) templating to configure your Fargate containers. Use the `AWS::ECS::TaskDefinition` resource within your CloudFormation template to set the Amazon ECS task and specify `FARGATE` as the required launch type for that task.

Update the CloudFormation template below with your [Datadog API Key](https://app.datadoghq.com/organization-settings/api-keys), and include the appropriate `DD_SITE` environment variable for your [Datadog site](https://docs.datadoghq.com/getting_started/site/) if necessary, as it defaults to `datadoghq.com` if you don't set it.

```yaml
Resources:
  ECSTaskDefinition:
    Type: 'AWS::ECS::TaskDefinition'
    Properties:
      NetworkMode: awsvpc
      RequiresCompatibilities:
        - FARGATE
      Cpu: 256
      Memory: 512
      ContainerDefinitions:
        - Name: datadog-agent
          Image: 'public.ecr.aws/datadog/agent:latest'
          Environment:
            - Name: DD_API_KEY
              Value: <DATADOG_API_KEY>
            - Name: ECS_FARGATE
              Value: "true"
            - Name: DD_SITE
              Value: <DATADOG_SITE>
```

Lastly, include your other application containers within the `ContainerDefinitions` and deploy through CloudFormation.

For more information on CloudFormation templating and syntax, see the [AWS CloudFormation task definition documentation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ecs-taskdefinition.html).
{% /tab %}

{% tab title="CDK" %}
##### Datadog CDK Task Definition{% #datadog-cdk-task-definition %}

You can use the [Datadog CDK Constructs](https://github.com/datadog/datadog-cdk-constructs/) to configure your ECS Fargate task definition. Use the `DatadogECSFargate` construct to instrument your containers for desired Datadog features. This is supported in TypeScript, JavaScript, Python, and Go.

Update the construct definition below with your [Datadog API Key](https://app.datadoghq.com/organization-settings/api-keys), and include the appropriate `site` property for your [Datadog site](https://docs.datadoghq.com/getting_started/site/) if necessary, as it defaults to `datadoghq.com` if you don't set it.

```typescript
const ecsDatadog = new DatadogECSFargate({
  apiKey: <DATADOG_API_KEY>,
  site: <DATADOG_SITE>,
});
```

Then, define your task definition using [`FargateTaskDefinitionProps`](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_ecs.FargateTaskDefinitionProps.html).

```typescript
const fargateTaskDefinition = ecsDatadog.fargateTaskDefinition(
  this,
  <TASK_ID>,
  <FARGATE_TASK_DEFINITION_PROPS>
);
```

Lastly, include your other application containers by adding your [`ContainerDefinitionOptions`](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_ecs.ContainerDefinitionOptions.html).

```typescript
fargateTaskDefinition.addContainer(<CONTAINER_ID>, <CONTAINER_DEFINITION_OPTIONS>);
```

For more information on the `DatadogECSFargate` construct instrumentation and syntax, see the [Datadog ECS Fargate CDK documentation](https://github.com/DataDog/datadog-cdk-constructs/blob/main/src/ecs/fargate/README.md).
{% /tab %}

{% tab title="Terraform" %}
##### Datadog Terraform Task Definition{% #datadog-terraform-task-definition %}

You can use the [Datadog ECS Fargate Terraform module](https://registry.terraform.io/modules/DataDog/ecs-datadog/aws/latest) to configure your containers for Datadog. This Terraform module wraps the [`aws_ecs_task_definition`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ecs_task_definition) resource and automatically instruments your task definition for Datadog. Pass your input arguments into the Datadog ECS Fargate Terraform module similarly to `aws_ecs_task_definition`. Make sure to include your task `family` and `container_definitions`.

Update the Terraform module below with your [Datadog API Key](https://app.datadoghq.com/organization-settings/api-keys), and include the appropriate `dd_site` value for your [Datadog site](https://docs.datadoghq.com/getting_started/site/) if necessary, as it defaults to `datadoghq.com` if you don't set it.

```hcl
module "ecs_fargate_task" {
  source  = "DataDog/ecs-datadog/aws//modules/ecs_fargate"
  version = "1.0.0"

  # Configure Datadog
  dd_api_key = <DATADOG_API_KEY>
  dd_site    = <DATADOG_SITE>
  dd_dogstatsd = {
    enabled = true,
  }
  dd_apm = {
    enabled = true,
  }

  # Configure Task Definition
  family                   = <TASK_FAMILY>
  container_definitions    = <CONTAINER_DEFINITIONS>
  cpu                      = 256
  memory                   = 512
  network_mode             = "awsvpc"
  requires_compatibilities = ["FARGATE"]
}
```

Lastly, include your other application containers within `container_definitions` and deploy through Terraform.

For more information on the Terraform module, see the [Datadog ECS Fargate Terraform documentation](https://registry.terraform.io/modules/DataDog/ecs-datadog/aws/latest/submodules/ecs_fargate).
{% /tab %}

#### Run the task as a replica service{% #run-the-task-as-a-replica-service %}

The only option in ECS Fargate is to run the task as a [Replica Service](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_services.html#service_scheduler_replica). The Datadog Agent runs in the same task definition as your application and integration containers.

{% tab title="Web UI" %}
##### Web UI Replica Service{% #web-ui-replica-service %}

1. Log in to your [AWS Web Console](https://aws.amazon.com/console) and navigate to the ECS section. If needed, create a cluster with the **Networking only** cluster template.
1. Choose the cluster to run the Datadog Agent on.
1. On the **Services** tab, click the **Create** button.
1. For **Launch type**, choose **FARGATE**.
1. For **Task Definition**, select the task created in the previous steps.
1. Enter a **Service name**.
1. For **Number of tasks** enter `1`, then click the **Next step** button.
1. Select the **Cluster VPC**, **Subnets**, and **Security Groups**.
1. **Load balancing** and **Service discovery** are optional based on your preference.
1. Click the **Next step** button.
1. **Auto Scaling** is optional based on your preference.
1. Click the **Next step** button, then click the **Create service** button.

{% /tab %}

{% tab title="AWS CLI" %}
##### AWS CLI Replica Service{% #aws-cli-replica-service %}

Run the following commands using the [AWS CLI tools](https://aws.amazon.com/cli).

**Note**: Fargate version 1.1.0 or greater is required, so the command below specifies the platform version.

If needed, create a cluster:

```bash
aws ecs create-cluster --cluster-name "<CLUSTER_NAME>"
```

Run the task as a service for your cluster:

```bash
aws ecs run-task --cluster <CLUSTER_NAME> \
--network-configuration "awsvpcConfiguration={subnets=[<PRIVATE_SUBNET>],securityGroups=[<SECURITY_GROUP>]}" \
--task-definition arn:aws:ecs:us-east-1:<AWS_ACCOUNT_NUMBER>:task-definition/<TASK_NAME>:1 \
--region <AWS_REGION> --launch-type FARGATE --platform-version 1.4.0
```

{% /tab %}

{% tab title="CloudFormation" %}
##### AWS CloudFormation Replica Service{% #aws-cloudformation-replica-service %}

In the CloudFormation template, you can reference the `ECSTaskDefinition` resource created in the previous example in the `AWS::ECS::Service` resource being created. Then, specify your `Cluster`, `DesiredCount`, and any other parameters necessary for your application in your replica service.

```yaml
Resources:
  ECSTaskDefinition:
    #(...)
  ECSService:
    Type: 'AWS::ECS::Service'
    Properties:
      Cluster: <CLUSTER_NAME>
      TaskDefinition:
        Ref: "ECSTaskDefinition"
      DesiredCount: 1
      #(...)
```

For more information on CloudFormation templating and syntax, see the [AWS CloudFormation ECS service documentation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ecs-service.html).
{% /tab %}

{% tab title="CDK" %}
##### AWS CDK Replica Service{% #aws-cdk-replica-service %}

In the CDK code, you can reference the `fargateTaskDefinition` resource created in the previous example in the `FargateService` resource being created. Then, specify your `cluster`, `desiredCount`, and any other properties necessary for your application in your replica service.

```typescript
const service = new ecs.FargateService(this, <SERVICE_ID>, {
  cluster: <CLUSTER>,
  taskDefinition: fargateTaskDefinition,
  desiredCount: 1,
});
```

For more information on the CDK ECS service construct and syntax, see the [AWS CDK ECS Service documentation](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_ecs.FargateService.html).
{% /tab %}

{% tab title="Terraform" %}
##### AWS Terraform Replica Service{% #aws-terraform-replica-service %}

In the Terraform code, you can reference the Datadog ECS Fargate module created in the previous example in the `aws_ecs_service` resource being created. Then, specify your `cluster`, `desired_count`, and any other arguments necessary for your application in your replica service.

```hcl
resource "aws_ecs_service" "<SERVICE_ID>" {
  name            = <SERVICE_NAME>
  cluster         = <CLUSTER_ID>
  task_definition = module.ecs_fargate_task.arn
  desired_count   = 1
}
```

For more information on the Terraform ECS service module and syntax, see the [AWS Terraform ECS service documentation](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ecs_service).
{% /tab %}

To provide your Datadog API key as a secret, see Using secrets.

#### Installation for AWS Batch{% #installation-for-aws-batch %}

To monitor your AWS Batch jobs with Datadog, see [AWS Batch with ECS Fargate and the Datadog Agent](https://docs.datadoghq.com/containers/guide/aws-batch-ecs-fargate).

#### Create or modify your IAM policy{% #create-or-modify-your-iam-policy %}

Add the following permissions to your [Datadog IAM policy](https://docs.datadoghq.com/integrations/amazon_web_services/#installation) to collect ECS Fargate metrics. For more information, see the [ECS policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/list_ecs.html) on the AWS website.

| AWS Permission                   | Description                                                       |
| -------------------------------- | ----------------------------------------------------------------- |
| `ecs:ListClusters`               | List available clusters.                                          |
| `ecs:ListContainerInstances`     | List instances of a cluster.                                      |
| `ecs:DescribeContainerInstances` | Describe instances to add metrics on resources and tasks running. |
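The permissions above can be granted with a policy statement along the following lines (a sketch; scope `Resource` down as appropriate for your environment):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecs:ListClusters",
        "ecs:ListContainerInstances",
        "ecs:DescribeContainerInstances"
      ],
      "Resource": "*"
    }
  ]
}
```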

#### Using secrets{% #using-secrets %}

As an alternative to populating the `DD_API_KEY` environment variable with your API key in plaintext, you can instead reference the [ARN of a plaintext secret stored in AWS Secrets Manager](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data-tutorial.html). Place the `DD_API_KEY` environment variable under the `containerDefinitions.secrets` section of the task or job definition file. Ensure that the task/job execution role has the necessary permission to fetch secrets from AWS Secrets Manager.
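A sketch of what that section of the container definition might look like (the ARN is a placeholder for your own secret):

```json
"secrets": [
  {
    "name": "DD_API_KEY",
    "valueFrom": "arn:aws:secretsmanager:<REGION>:<ACCOUNT_ID>:secret:<SECRET_NAME>"
  }
]
```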

### Metric collection{% #metric-collection %}

After the Datadog Agent is set up as described above, the [ecs_fargate check](https://github.com/DataDog/integrations-core/blob/master/ecs_fargate/datadog_checks/ecs_fargate/data/conf.yaml.example) collects metrics with Autodiscovery enabled. Add Docker labels to your other containers in the same task to collect additional metrics.

Although the integration works on Linux and Windows, some metrics are OS dependent. All metrics exposed when running on Windows are also exposed on Linux, but there are some metrics that are only available on Linux. See Data Collected for the list of metrics provided by this integration. The list also specifies which metrics are Linux-only.

For details on collecting integration metrics, see [Integration Setup for ECS Fargate](http://docs.datadoghq.com/integrations/faq/integration-setup-ecs-fargate).

#### DogStatsD{% #dogstatsd %}

Metrics are collected with [DogStatsD](https://docs.datadoghq.com/developers/dogstatsd/) through UDP port 8125.
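Because containers in a Fargate task share the same network namespace, your application can reach the Agent's DogStatsD listener on `localhost`. As an illustrative sketch using only the Python standard library (the metric name and tags are made up; in practice you would typically use a DogStatsD client library):

```python
import socket


def dogstatsd_datagram(name, value, mtype="g", tags=()):
    """Format a metric in the DogStatsD wire format: name:value|type|#tags."""
    payload = f"{name}:{value}|{mtype}"
    if tags:
        payload += "|#" + ",".join(tags)
    return payload


def send_metric(payload, host="127.0.0.1", port=8125):
    # The Agent container listens for DogStatsD traffic on UDP port 8125.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload.encode("utf-8"), (host, port))


if __name__ == "__main__":
    # Hypothetical gauge with an illustrative tag.
    send_metric(dogstatsd_datagram("app.queue.depth", 12, "g", ["env:prod"]))
```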

#### Other environment variables{% #other-environment-variables %}

For environment variables available with the Docker Agent container, see the [Docker Agent](https://docs.datadoghq.com/agent/docker/#environment-variables) page. **Note**: Some variables are not available for Fargate.

| Environment Variable           | Description                                       |
| ------------------------------ | ------------------------------------------------- |
| `DD_TAGS`                      | Add tags. For example: `key1:value1 key2:value2`. |
| `DD_DOCKER_LABELS_AS_TAGS`     | Extract Docker container labels as tags.          |
| `DD_CHECKS_TAG_CARDINALITY`    | Add tags to check metrics.                        |
| `DD_DOGSTATSD_TAG_CARDINALITY` | Add tags to custom metrics.                       |

For global tagging, it is recommended to use `DD_DOCKER_LABELS_AS_TAGS`. With this method, the Agent pulls in tags from your container labels. This requires you to add the appropriate labels to your other containers. Labels can be added directly in the [task definition](https://docs.aws.amazon.com/AmazonECS/latest/userguide/task_definition_parameters.html#container_definition_labels).

Format for the Agent container:

```json
{
  "name": "DD_DOCKER_LABELS_AS_TAGS",
  "value": "{\"<LABEL_NAME_TO_COLLECT>\":\"<TAG_KEY_FOR_DATADOG>\"}"
}
```

Example for the Agent container:

```json
{
  "name": "DD_DOCKER_LABELS_AS_TAGS",
  "value": "{\"com.docker.compose.service\":\"service_name\"}"
}
```

CloudFormation example (YAML):

```yaml
      ContainerDefinitions:
        - #(...)
          Environment:
            - Name: DD_DOCKER_LABELS_AS_TAGS
              Value: "{\"com.docker.compose.service\":\"service_name\"}"
```

**Note**: You should not use `DD_HOSTNAME` since there is no concept of a host to the user in Fargate. Using this tag can cause your tasks to appear as APM Hosts in the Infrastructure list, potentially impacting your billing. Instead, `DD_TAGS` is traditionally used to assign host tags. As of Datadog Agent version 6.13.0, you can also use the `DD_TAGS` environment variable to set global tags on your integration metrics.
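For example, a hypothetical pair of global tags could be set on the Agent container like this (tag values are illustrative; tags in `DD_TAGS` are space-separated):

```json
{
  "name": "DD_TAGS",
  "value": "env:prod team:payments"
}
```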

### Crawler-based metrics{% #crawler-based-metrics %}

In addition to the metrics collected by the Datadog Agent, Datadog has a CloudWatch based ECS integration. This integration collects the [Amazon ECS CloudWatch Metrics](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/cloudwatch-metrics.html).

As noted there, Fargate tasks also report metrics in this way:

- The metrics made available will depend on the launch type of the tasks and services in your clusters or batch jobs. If you are using the Fargate launch type for your services then CPU and memory utilization metrics are provided to assist in the monitoring of your services.

Since this method does not use the Datadog Agent, you need to configure the AWS integration by checking **ECS** on the integration tile. Then, Datadog pulls these CloudWatch metrics (namespaced `aws.ecs.*` in Datadog) on your behalf. See the [Data Collected](https://docs.datadoghq.com/integrations/amazon_ecs/#data-collected) section of the documentation.

If these are the only metrics you need, you could rely on this integration for collection using CloudWatch metrics. **Note**: CloudWatch data is less granular (1-5 min depending on the type of monitoring you have enabled) and delayed in reporting to Datadog. This is because the data collection from CloudWatch must adhere to AWS API limits, instead of pushing it to Datadog with the Agent.

Datadog's default CloudWatch crawler polls metrics once every 10 minutes. If you need a faster crawl schedule, contact [Datadog support](https://docs.datadoghq.com/help/) for availability. **Note**: There are cost increases involved on the AWS side as CloudWatch bills for API calls.

### Log collection{% #log-collection %}

You can monitor Fargate logs by using either:

- The AWS FireLens integration, built on Datadog's Fluent Bit output plugin, to send logs directly to Datadog
- The `awslogs` log driver to store logs in a CloudWatch log group, plus a Lambda function to route logs to Datadog

Datadog recommends using AWS FireLens for the following reasons:

- You can configure Fluent Bit directly in your Fargate tasks.
- The Datadog Fluent Bit output plugin provides additional tagging on logs. The [ECS Explorer](https://docs.datadoghq.com/infrastructure/containers/amazon_elastic_container_explorer) uses the tags to correlate logs with ECS resources.

#### Fluent Bit and FireLens{% #fluent-bit-and-firelens %}

Configure the AWS FireLens integration built on Datadog's Fluent Bit output plugin to connect your FireLens monitored log data to Datadog Logs. You can find a full [sample task definition for this configuration here](https://github.com/aws-samples/amazon-ecs-firelens-examples/tree/mainline/examples/fluent-bit/datadog).

1. Add the Fluent Bit FireLens log router container in your existing Fargate task. For more information about enabling FireLens, see the dedicated [AWS FireLens docs](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_firelens.html). For more information about Fargate container definitions, see the [AWS docs on Container Definitions](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#container_definitions). AWS recommends that you use [the regional Docker image](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_firelens.html#firelens-using-fluentbit). Here is an example snippet of a task definition where the Fluent Bit image is configured:

   ```json
   {
     "essential": true,
     "image": "amazon/aws-for-fluent-bit:stable",
     "name": "log_router",
     "firelensConfiguration": {
       "type": "fluentbit",
       "options": { "enable-ecs-log-metadata": "true" }
     }
   }
   ```

If your containers are publishing serialized JSON logs over stdout, you should use this [extra FireLens configuration](https://github.com/aws-samples/amazon-ecs-firelens-examples/tree/master/examples/fluent-bit/parse-json) to get them correctly parsed within Datadog:

   ```json
   {
     "essential": true,
     "image": "amazon/aws-for-fluent-bit:stable",
     "name": "log_router",
     "firelensConfiguration": {
       "type": "fluentbit",
       "options": {
         "enable-ecs-log-metadata": "true",
         "config-file-type": "file",
         "config-file-value": "/fluent-bit/configs/parse-json.conf"
       }
     }
   }
   ```

This converts serialized JSON from the `log:` field into top-level fields. See the AWS sample [Parsing container stdout logs that are serialized JSON](https://github.com/aws-samples/amazon-ecs-firelens-examples/tree/master/examples/fluent-bit/parse-json) for more details.

2. Next, in the same Fargate task, define a log configuration for the desired containers to ship logs. This log configuration should use AWS FireLens as the log driver, with data output to Fluent Bit. Here is an example snippet of a task definition where FireLens is the log driver and it is outputting data to Fluent Bit:

{% callout %}
# Important note for users on the following Datadog sites: app.datadoghq.com



```json
{
  "logConfiguration": {
    "logDriver": "awsfirelens",
    "options": {
      "Name": "datadog",
      "apikey": "<DATADOG_API_KEY>",
      "Host": "http-intake.logs.datadoghq.com",
      "dd_service": "firelens-test",
      "dd_source": "redis",
      "dd_message_key": "log",
      "dd_tags": "region:us-west-2,project:fluentbit",
      "TLS": "on",
      "provider": "ecs"
    }
  }
}
```


{% /callout %}

{% callout %}
# Important note for users on the following Datadog sites: us3.datadoghq.com



```json
{
  "logConfiguration": {
    "logDriver": "awsfirelens",
    "options": {
      "Name": "datadog",
      "apikey": "<DATADOG_API_KEY>",
      "Host": "http-intake.logs.us3.datadoghq.com",
      "dd_service": "firelens-test",
      "dd_source": "redis",
      "dd_message_key": "log",
      "dd_tags": "region:us-west-2,project:fluentbit",
      "TLS": "on",
      "provider": "ecs"
    }
  }
}
```


{% /callout %}

{% callout %}
# Important note for users on the following Datadog sites: us5.datadoghq.com



```json
{
  "logConfiguration": {
    "logDriver": "awsfirelens",
    "options": {
      "Name": "datadog",
      "apikey": "<DATADOG_API_KEY>",
      "Host": "http-intake.logs.us5.datadoghq.com",
      "dd_service": "firelens-test",
      "dd_source": "redis",
      "dd_message_key": "log",
      "dd_tags": "region:us-west-2,project:fluentbit",
      "TLS": "on",
      "provider": "ecs"
    }
  }
}
```


{% /callout %}

{% callout %}
# Important note for users on the following Datadog sites: app.datadoghq.eu



```json
{
  "logConfiguration": {
    "logDriver": "awsfirelens",
    "options": {
      "Name": "datadog",
      "apikey": "<DATADOG_API_KEY>",
      "Host": "http-intake.logs.datadoghq.eu",
      "dd_service": "firelens-test",
      "dd_source": "redis",
      "dd_message_key": "log",
      "dd_tags": "region:us-west-2,project:fluentbit",
      "TLS": "on",
      "provider": "ecs"
    }
  }
}
```


{% /callout %}

{% callout %}
# Important note for users on the following Datadog sites: ap1.datadoghq.com



```json
{
  "logConfiguration": {
    "logDriver": "awsfirelens",
    "options": {
      "Name": "datadog",
      "apikey": "<DATADOG_API_KEY>",
      "Host": "http-intake.logs.ap1.datadoghq.com",
      "dd_service": "firelens-test",
      "dd_source": "redis",
      "dd_message_key": "log",
      "dd_tags": "region:us-west-2,project:fluentbit",
      "TLS": "on",
      "provider": "ecs"
    }
  }
}
```


{% /callout %}

{% callout %}
# Important note for users on the following Datadog sites: app.ddog-gov.com



```json
{
  "logConfiguration": {
    "logDriver": "awsfirelens",
    "options": {
      "Name": "datadog",
      "apikey": "<DATADOG_API_KEY>",
      "Host": "http-intake.logs.ddog-gov.com",
      "dd_service": "firelens-test",
      "dd_source": "redis",
      "dd_message_key": "log",
      "dd_tags": "region:us-west-2,project:fluentbit",
      "TLS": "on",
      "provider": "ecs"
    }
  }
}
```


{% /callout %}

**Note**: Separate tags with commas in the `dd_tags` field.

{% collapsible-section %}
#### Example using secretOptions to avoid exposing the API Key in plain text

{% callout %}
# Important note for users on the following Datadog sites: app.datadoghq.com



```json
{
  "logConfiguration": {
    "logDriver": "awsfirelens",
    "options": {
      "Name": "datadog",
      "Host": "http-intake.logs.datadoghq.com",
      "dd_service": "firelens-test",
      "dd_source": "redis",
      "dd_message_key": "log",
      "dd_tags": "region:us-west-2,project:fluentbit",
      "TLS": "on",
      "provider": "ecs"
    },
    "secretOptions": [
    {
      "name": "apikey",
      "valueFrom": "<API_SECRET_ARN>"
    }
  ]
 }
}
```


{% /callout %}

{% callout %}
# Important note for users on the following Datadog sites: us3.datadoghq.com



```json
{
  "logConfiguration": {
    "logDriver": "awsfirelens",
    "options": {
      "Name": "datadog",
      "Host": "http-intake.logs.us3.datadoghq.com",
      "dd_service": "firelens-test",
      "dd_source": "redis",
      "dd_message_key": "log",
      "dd_tags": "region:us-west-2,project:fluentbit",
      "TLS": "on",
      "provider": "ecs"
    },
    "secretOptions": [
    {
      "name": "apikey",
      "valueFrom": "<API_SECRET_ARN>"
    }
  ]
  }
}
```


{% /callout %}

{% callout %}
# Important note for users on the following Datadog sites: us5.datadoghq.com



```json
{
  "logConfiguration": {
    "logDriver": "awsfirelens",
    "options": {
      "Name": "datadog",
      "Host": "http-intake.logs.us5.datadoghq.com",
      "dd_service": "firelens-test",
      "dd_source": "redis",
      "dd_message_key": "log",
      "dd_tags": "region:us-west-2,project:fluentbit",
      "TLS": "on",
      "provider": "ecs"
    },
    "secretOptions": [
    {
      "name": "apikey",
      "valueFrom": "<API_SECRET_ARN>"
    }
  ]
  }
}
```


{% /callout %}

{% callout %}
# Important note for users on the following Datadog sites: app.datadoghq.eu



```json
{
  "logConfiguration": {
    "logDriver": "awsfirelens",
    "options": {
      "Name": "datadog",
      "Host": "http-intake.logs.datadoghq.eu",
      "dd_service": "firelens-test",
      "dd_source": "redis",
      "dd_message_key": "log",
      "dd_tags": "region:us-west-2,project:fluentbit",
      "TLS": "on",
      "provider": "ecs"
    },
    "secretOptions": [
      {
        "name": "apikey",
        "valueFrom": "<API_SECRET_ARN>"
      }
    ]
  }
}
```


{% /callout %}

{% callout %}
# Important note for users on the following Datadog sites: ap1.datadoghq.com



```json
{
  "logConfiguration": {
    "logDriver": "awsfirelens",
    "options": {
      "Name": "datadog",
      "Host": "http-intake.logs.ap1.datadoghq.com",
      "dd_service": "firelens-test",
      "dd_source": "redis",
      "dd_message_key": "log",
      "dd_tags": "region:us-west-2,project:fluentbit",
      "TLS": "on",
      "provider": "ecs"
    },
    "secretOptions": [
      {
        "name": "apikey",
        "valueFrom": "<API_SECRET_ARN>"
      }
    ]
  }
}
```


{% /callout %}

{% callout %}
# Important note for users on the following Datadog sites: app.ddog-gov.com



```json
{
  "logConfiguration": {
    "logDriver": "awsfirelens",
    "options": {
      "Name": "datadog",
      "Host": "http-intake.logs.ddog-gov.com",
      "dd_service": "firelens-test",
      "dd_source": "redis",
      "dd_message_key": "log",
      "dd_tags": "region:us-west-2,project:fluentbit",
      "TLS": "on",
      "provider": "ecs"
    },
    "secretOptions": [
      {
        "name": "apikey",
        "valueFrom": "<API_SECRET_ARN>"
      }
    ]
  }
}
```


{% /callout %}

To provide your Datadog API key as a secret, see Using secrets.
{% /collapsible-section %}

{% callout %}
# Important note for users on the following Datadog sites: app.datadoghq.com, us3.datadoghq.com, us5.datadoghq.com, app.datadoghq.eu, ap1.datadoghq.com, app.ddog-gov.com

**Note**: Set your `apikey`, and set the `Host` to the `http-intake.logs.` endpoint for your Datadog site, as shown in the examples above. The full list of available parameters is described in the [Datadog Fluent Bit documentation](https://docs.datadoghq.com/integrations/fluentbit/#configuration-parameters).
{% /callout %}

The `dd_service`, `dd_source`, and `dd_tags` options can be adjusted to apply your desired tags.
Whenever a Fargate task runs, Fluent Bit sends the container logs to Datadog with information about all of the containers managed by your Fargate tasks. You can see the raw logs on the [Log Explorer page](https://app.datadoghq.com/logs), [build monitors](https://docs.datadoghq.com/monitors/monitor_types/) for the logs, and use the [Live Container view](https://docs.datadoghq.com/infrastructure/livecontainers/?tab=linuxwindows).
{% tab title="Web UI" %}
##### Web UI{% #web-ui %}

To add the Fluent Bit container to your existing Task Definition, check the **Enable FireLens integration** checkbox under **Log router integration** to automatically create the `log_router` container for you. This pulls the regional image; however, Datadog recommends using the `stable` image tag instead of `latest`. Once you click **Apply**, the base container is created. To further customize the `firelensConfiguration`, click the **Configure via JSON** button at the bottom and edit it manually.

After the log router has been added, edit the application container in your Task Definition that you want to submit logs from, change its **Log driver** to `awsfirelens`, and fill in the **Log options** with the keys shown in the example above.
{% /tab %}

{% tab title="AWS CLI" %}
##### AWS CLI{% #aws-cli %}

Edit your existing JSON task definition file to include the `log_router` container and the updated `logConfiguration` for your application container, as described in the previous section. After this is done, create a new revision of your task definition with the following command:

```bash
aws ecs register-task-definition --cli-input-json file://<PATH_TO_FILE>/datadog-agent-ecs-fargate.json
```

{% /tab %}

{% tab title="CloudFormation" %}
##### AWS CloudFormation{% #aws-cloudformation %}

To use [AWS CloudFormation](https://aws.amazon.com/cloudformation/) templating, use the `AWS::ECS::TaskDefinition` resource and configure the `awsfirelens` log driver with the `datadog` output plugin to set up log management.

For example, to configure Fluent Bit to send logs to Datadog:

{% callout %}
# Important note for users on the following Datadog sites: app.datadoghq.com



```yaml
Resources:
  ECSTaskDefinition:
    Type: 'AWS::ECS::TaskDefinition'
    Properties:
      NetworkMode: awsvpc
      RequiresCompatibilities:
          - FARGATE
      Cpu: 256
      Memory: 1GB
      ContainerDefinitions:
        - Name: tomcat-test
          Image: 'tomcat:jdk8-adoptopenjdk-openj9'
          LogConfiguration:
            LogDriver: awsfirelens
            Options:
              Name: datadog
              apikey: <DATADOG_API_KEY>
              Host: http-intake.logs.datadoghq.com
              dd_service: test-service
              dd_source: test-source
              TLS: 'on'
              provider: ecs
          MemoryReservation: 500
        - Name: log_router
          Image: 'amazon/aws-for-fluent-bit:stable'
          Essential: true
          FirelensConfiguration:
            Type: fluentbit
            Options:
              enable-ecs-log-metadata: true
          MemoryReservation: 50
```


{% /callout %}

{% callout %}
# Important note for users on the following Datadog sites: us3.datadoghq.com



```yaml
Resources:
  ECSTaskDefinition:
    Type: 'AWS::ECS::TaskDefinition'
    Properties:
      NetworkMode: awsvpc
      RequiresCompatibilities:
          - FARGATE
      Cpu: 256
      Memory: 1GB
      ContainerDefinitions:
        - Name: tomcat-test
          Image: 'tomcat:jdk8-adoptopenjdk-openj9'
          LogConfiguration:
            LogDriver: awsfirelens
            Options:
              Name: datadog
              apikey: <DATADOG_API_KEY>
              Host: http-intake.logs.us3.datadoghq.com
              dd_service: test-service
              dd_source: test-source
              TLS: 'on'
              provider: ecs
          MemoryReservation: 500
        - Name: log_router
          Image: 'amazon/aws-for-fluent-bit:stable'
          Essential: true
          FirelensConfiguration:
            Type: fluentbit
            Options:
              enable-ecs-log-metadata: true
          MemoryReservation: 50
```


{% /callout %}

{% callout %}
# Important note for users on the following Datadog sites: us5.datadoghq.com



```yaml
Resources:
  ECSTaskDefinition:
    Type: 'AWS::ECS::TaskDefinition'
    Properties:
      NetworkMode: awsvpc
      RequiresCompatibilities:
          - FARGATE
      Cpu: 256
      Memory: 1GB
      ContainerDefinitions:
        - Name: tomcat-test
          Image: 'tomcat:jdk8-adoptopenjdk-openj9'
          LogConfiguration:
            LogDriver: awsfirelens
            Options:
              Name: datadog
              apikey: <DATADOG_API_KEY>
              Host: http-intake.logs.us5.datadoghq.com
              dd_service: test-service
              dd_source: test-source
              TLS: 'on'
              provider: ecs
          MemoryReservation: 500
        - Name: log_router
          Image: 'amazon/aws-for-fluent-bit:stable'
          Essential: true
          FirelensConfiguration:
            Type: fluentbit
            Options:
              enable-ecs-log-metadata: true
          MemoryReservation: 50
```


{% /callout %}

{% callout %}
# Important note for users on the following Datadog sites: app.datadoghq.eu



```yaml
Resources:
  ECSTaskDefinition:
    Type: 'AWS::ECS::TaskDefinition'
    Properties:
      NetworkMode: awsvpc
      RequiresCompatibilities:
          - FARGATE
      Cpu: 256
      Memory: 1GB
      ContainerDefinitions:
        - Name: tomcat-test
          Image: 'tomcat:jdk8-adoptopenjdk-openj9'
          LogConfiguration:
            LogDriver: awsfirelens
            Options:
              Name: datadog
              apikey: <DATADOG_API_KEY>
              Host: http-intake.logs.datadoghq.eu
              dd_service: test-service
              dd_source: test-source
              TLS: 'on'
              provider: ecs
          MemoryReservation: 500
        - Name: log_router
          Image: 'amazon/aws-for-fluent-bit:stable'
          Essential: true
          FirelensConfiguration:
            Type: fluentbit
            Options:
              enable-ecs-log-metadata: true
          MemoryReservation: 50
```


{% /callout %}

{% callout %}
# Important note for users on the following Datadog sites: app.ddog-gov.com



```yaml
Resources:
  ECSTaskDefinition:
    Type: 'AWS::ECS::TaskDefinition'
    Properties:
      NetworkMode: awsvpc
      RequiresCompatibilities:
          - FARGATE
      Cpu: 256
      Memory: 1GB
      ContainerDefinitions:
        - Name: tomcat-test
          Image: 'tomcat:jdk8-adoptopenjdk-openj9'
          LogConfiguration:
            LogDriver: awsfirelens
            Options:
              Name: datadog
              apikey: <DATADOG_API_KEY>
              Host: http-intake.logs.ddog-gov.com
              dd_service: test-service
              dd_source: test-source
              TLS: 'on'
              provider: ecs
          MemoryReservation: 500
        - Name: log_router
          Image: 'amazon/aws-for-fluent-bit:stable'
          Essential: true
          FirelensConfiguration:
            Type: fluentbit
            Options:
              enable-ecs-log-metadata: true
          MemoryReservation: 50
```


{% /callout %}

For more information on CloudFormation templating and syntax, see the [AWS CloudFormation documentation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ecs-taskdefinition.html).
{% /tab %}

{% tab title="CDK" %}
##### Datadog ECS Fargate CDK Construct{% #datadog-ecs-fargate-cdk-construct %}

To enable logging through the [Datadog ECS Fargate CDK](https://github.com/DataDog/datadog-cdk-constructs/blob/main/src/ecs/fargate/README.md) construct, configure the `logCollection` property as seen below:

```typescript
const ecsDatadog = new DatadogECSFargate({
  apiKey: <DATADOG_API_KEY>,
  site: <DATADOG_SITE>,
  logCollection: {
    isEnabled: true,
  }
});
```

{% /tab %}

{% tab title="Terraform" %}
##### Datadog ECS Fargate Terraform Module{% #datadog-ecs-fargate-terraform-module %}

To enable logging through the [Datadog ECS Fargate Terraform](https://registry.terraform.io/modules/DataDog/ecs-datadog/aws/latest) module, configure the `dd_log_collection` input argument as seen below:

```hcl
module "ecs_fargate_task" {
  source  = "DataDog/ecs-datadog/aws//modules/ecs_fargate"
  version = "1.0.0"

  # Configure Datadog
  dd_api_key = <DATADOG_API_KEY>
  dd_site    = <DATADOG_SITE>
  dd_log_collection = {
    enabled = true,
  }

  # Configure Task Definition
  family                   = <TASK_FAMILY>
  container_definitions    = <CONTAINER_DEFINITIONS>
  cpu                      = 256
  memory                   = 512
  network_mode             = "awsvpc"
  requires_compatibilities = ["FARGATE"]
}
```

{% /tab %}

**Note**: Use a [TaskDefinition secret](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ecs-taskdefinition-secret.html) to avoid exposing the `apikey` in plain text.

#### AWS log driver{% #aws-log-driver %}

Monitor Fargate logs by using the `awslogs` log driver and a Lambda function to route logs to Datadog.

1. Define the log driver as `awslogs` in the application container in the task or job you want to collect logs from. [Consult the AWS Fargate developer guide](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/AWS_Fargate.html) for instructions.

1. This configures your Fargate tasks or jobs to send log information to Amazon CloudWatch Logs. The following shows a snippet of a task/job definition where the awslogs log driver is configured:

   ```json
   {
     "logConfiguration": {
       "logDriver": "awslogs",
       "options": {
         "awslogs-group": "/ecs/fargate-task|job-definition",
         "awslogs-region": "us-east-1",
         "awslogs-stream-prefix": "ecs"
       }
     }
   }
   ```

For more information about using the `awslogs` log driver in your task or job definitions to send container logs to CloudWatch Logs, see [Using the awslogs Log Driver](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html). This driver collects logs generated by the container and sends them to CloudWatch directly.

1. Finally, use the [Datadog Lambda Log Forwarder function](https://docs.datadoghq.com/logs/guide/forwarder/) to collect logs from CloudWatch and send them to Datadog. To automatically enrich logs with ECS tags (task_arn, service_arn, cluster_arn, …), ensure the following configuration:

   1. The CloudWatch Log Group must be named `/ecs/<ECS_CLUSTER_NAME>`.
   1. The Log Stream must follow the default naming format: `<awslogs-stream-prefix>/<container_name>/<task_id>`.
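For example, for a hypothetical ECS cluster named `my-ecs-cluster`, the `awslogs` options below satisfy both naming rules: the group matches the cluster name, and the `ecs` stream prefix produces the default `ecs/<container_name>/<task_id>` stream names:

```json
{
  "logConfiguration": {
    "logDriver": "awslogs",
    "options": {
      "awslogs-group": "/ecs/my-ecs-cluster",
      "awslogs-region": "us-east-1",
      "awslogs-stream-prefix": "ecs"
    }
  }
}
```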

### Trace collection{% #trace-collection %}

{% callout %}
# Important note for users on the following Datadog sites: app.datadoghq.com, us3.datadoghq.com, us5.datadoghq.com, app.datadoghq.eu, ap1.datadoghq.com, app.ddog-gov.com



1. Follow the instructions above to add the Datadog Agent container to your task or job definition, with the additional environment variable `DD_APM_ENABLED` set to `true`. Set the `DD_SITE` variable to your Datadog site. It defaults to `datadoghq.com` if you don't set it.


{% /callout %}

1. Instrument your application based on your setup:

**Note**: With Fargate APM applications, do **not** set `DD_AGENT_HOST`; the default of `localhost` works.

| Language                                                                                                                                  |
| ----------------------------------------------------------------------------------------------------------------------------------------- |
| [Java](https://docs.datadoghq.com/tracing/trace_collection/dd_libraries/java?tab=containers#automatic-instrumentation)                    |
| [Python](https://docs.datadoghq.com/tracing/trace_collection/dd_libraries/python?tab=containers#instrument-your-application)              |
| [Ruby](https://docs.datadoghq.com/tracing/trace_collection/dd_libraries/ruby#instrument-your-application)                                 |
| [Go](https://docs.datadoghq.com/tracing/trace_collection/dd_libraries/go/?tab=containers#activate-go-integrations-to-create-spans)        |
| [Node.js](https://docs.datadoghq.com/tracing/trace_collection/dd_libraries/nodejs?tab=containers#instrument-your-application)             |
| [PHP](https://docs.datadoghq.com/tracing/trace_collection/dd_libraries/php?tab=containers#automatic-instrumentation)                      |
| [C++](https://docs.datadoghq.com/tracing/trace_collection/dd_libraries/cpp?tab=containers#instrument-your-application)                    |
| [.NET Core](https://docs.datadoghq.com/tracing/trace_collection/dd_libraries/dotnet-core?tab=containers#custom-instrumentation)           |
| [.NET Framework](https://docs.datadoghq.com/tracing/trace_collection/dd_libraries/dotnet-framework?tab=containers#custom-instrumentation) |

See more general information about [Sending Traces to Datadog](https://docs.datadoghq.com/tracing/setup/).

1. Ensure your application is running in the same task or job definition as the Datadog Agent container.

### Process collection{% #process-collection %}

{% alert level="warning" %}
You can view your ECS Fargate processes in Datadog. To see their relationship to ECS Fargate containers, use Datadog Agent v7.50.0 or later.
{% /alert %}

You can monitor processes in ECS Fargate in Datadog by using the [Live Processes page](https://app.datadoghq.com/process). To enable process collection, add the [`PidMode` parameter](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#other_task_definition_params) in the Task Definition and set it to `task` as follows:

```text
"pidMode": "task"
```
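Note that `pidMode` is a top-level task definition parameter, a sibling of `containerDefinitions`, not a per-container option. A minimal sketch of its placement, with a hypothetical family name:

```json
{
  "family": "my-fargate-task",
  "pidMode": "task",
  "requiresCompatibilities": ["FARGATE"],
  "containerDefinitions": [ ... ]
}
```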

To filter processes by ECS, use the `AWS Fargate` Containers facet or enter `fargate:ecs` in the search query on the Live Processes page.

## Out-of-the-box tags{% #out-of-the-box-tags %}

The Agent can autodiscover and attach tags to all data emitted by the entire task or an individual container within this task or job. The list of tags automatically attached depends on the Agent's [cardinality configuration](https://docs.datadoghq.com/getting_started/tagging/assigning_tags/?tab=containerizedenvironments#environment-variables).

**Note**: Set the `env` and `service` tags in your task definition to get the full benefits of Datadog's unified service tagging. See the [full configuration section](https://docs.datadoghq.com/getting_started/tagging/unified_service_tagging/?tab=ecs#full-configuration) of the unified service tagging documentation for instructions.

| Tag                  | Cardinality  | Source  |
| -------------------- | ------------ | ------- |
| `container_name`     | High         | ECS API |
| `container_id`       | High         | ECS API |
| `docker_image`       | Low          | ECS API |
| `image_name`         | Low          | ECS API |
| `short_image`        | Low          | ECS API |
| `image_tag`          | Low          | ECS API |
| `ecs_cluster_name`   | Low          | ECS API |
| `ecs_container_name` | Low          | ECS API |
| `task_arn`           | Orchestrator | ECS API |
| `task_family`        | Low          | ECS API |
| `task_name`          | Low          | ECS API |
| `task_version`       | Low          | ECS API |
| `availability-zone`  | Low          | ECS API |
| `region`             | Low          | ECS API |

## Data Collected{% #data-collected %}

### Metrics{% #metrics %}

| Metric | Description |
| ------ | ----------- |
| **ecs.fargate.cpu.limit**(gauge)                       | Soft limit (CPU Shares) in CPU Units.                                                                                                                                                          |
| **ecs.fargate.cpu.percent**(gauge)                     | Percentage of CPU used per container (Linux only).*Shown as percent*                                                                                                                           |
| **ecs.fargate.cpu.system**(gauge)                      | System CPU time.*Shown as nanocore*                                                                                                                                                            |
| **ecs.fargate.cpu.task.limit**(gauge)                  | Task CPU Limit (shared by all containers).*Shown as nanocore*                                                                                                                                  |
| **ecs.fargate.cpu.usage**(gauge)                       | Total CPU Usage.*Shown as nanocore*                                                                                                                                                            |
| **ecs.fargate.cpu.user**(gauge)                        | User CPU time.*Shown as nanocore*                                                                                                                                                              |
| **ecs.fargate.ephemeral\_storage.reserved**(gauge)     | The reserved ephemeral storage of this task. (Fargate 1.4.0+ required).*Shown as mebibyte*                                                                                                     |
| **ecs.fargate.ephemeral\_storage.utilized**(gauge)     | The current ephemeral storage usage of this task. (Fargate 1.4.0+ required).*Shown as mebibyte*                                                                                                |
| **ecs.fargate.io.bytes.read**(gauge)                   | Number of bytes read on the disk.*Shown as byte*                                                                                                                                               |
| **ecs.fargate.io.bytes.write**(gauge)                  | Number of bytes written to the disk.*Shown as byte*                                                                                                                                            |
| **ecs.fargate.io.ops.read**(gauge)                     | Number of read operations on the disk.                                                                                                                                                         |
| **ecs.fargate.io.ops.write**(gauge)                    | Number of write operations to the disk.                                                                                                                                                        |
| **ecs.fargate.mem.active\_anon**(gauge)                | Number of bytes of anonymous and swap cache memory on active LRU list (Linux only).*Shown as byte*                                                                                             |
| **ecs.fargate.mem.active\_file**(gauge)                | Number of bytes of file-backed memory on active LRU list (Linux only).*Shown as byte*                                                                                                          |
| **ecs.fargate.mem.cache**(gauge)                       | Number of bytes of page cache memory (Linux only).*Shown as byte*                                                                                                                              |
| **ecs.fargate.mem.hierarchical\_memory\_limit**(gauge) | Number of bytes of memory limit with regard to hierarchy under which the memory cgroup is (Linux only).*Shown as byte*                                                                         |
| **ecs.fargate.mem.hierarchical\_memsw\_limit**(gauge)  | Number of bytes of memory+swap limit with regard to hierarchy under which memory cgroup is (Linux only).*Shown as byte*                                                                        |
| **ecs.fargate.mem.inactive\_file**(gauge)              | Number of bytes of file-backed memory on inactive LRU list (Linux only).*Shown as byte*                                                                                                        |
| **ecs.fargate.mem.limit**(gauge)                       | Number of bytes memory limit (Linux only).*Shown as byte*                                                                                                                                      |
| **ecs.fargate.mem.mapped\_file**(gauge)                | Number of bytes of mapped file (includes tmpfs/shmem) (Linux only).*Shown as byte*                                                                                                             |
| **ecs.fargate.mem.max\_usage**(gauge)                  | Maximum memory usage recorded.*Shown as byte*                                                                                                                                                  |
| **ecs.fargate.mem.pgfault**(gauge)                     | Number of page faults per second (Linux only).                                                                                                                                                 |
| **ecs.fargate.mem.pgmajfault**(gauge)                  | Number of major page faults per second (Linux only).                                                                                                                                           |
| **ecs.fargate.mem.pgpgin**(gauge)                      | Number of charging events to the memory cgroup. The charging event happens each time a page is accounted as either mapped anon page(RSS) or cache page(Page Cache) to the cgroup (Linux only). |
| **ecs.fargate.mem.pgpgout**(gauge)                     | Number of uncharging events to the memory cgroup. The uncharging event happens each time a page is unaccounted from the cgroup (Linux only).                                                   |
| **ecs.fargate.mem.rss**(gauge)                         | Number of bytes of anonymous and swap cache memory (includes transparent hugepages) (Linux only).*Shown as byte*                                                                               |
| **ecs.fargate.mem.task.limit**(gauge)                  | Task Memory Limit (shared by all containers).*Shown as byte*                                                                                                                                   |
| **ecs.fargate.mem.usage**(gauge)                       | Number of bytes of memory used.*Shown as byte*                                                                                                                                                 |
| **ecs.fargate.net.bytes\_rcvd**(gauge)                 | Number of bytes received (Fargate 1.4.0+ required).*Shown as byte*                                                                                                                             |
| **ecs.fargate.net.bytes\_sent**(gauge)                 | Number of bytes sent (Fargate 1.4.0+ required).*Shown as byte*                                                                                                                                 |
| **ecs.fargate.net.packet.in\_dropped**(gauge)          | Number of ingoing packets dropped (Fargate 1.4.0+ required).*Shown as packet*                                                                                                                  |
| **ecs.fargate.net.packet.out\_dropped**(gauge)         | Number of outgoing packets dropped (Fargate 1.4.0+ required).*Shown as packet*                                                                                                                 |
| **ecs.fargate.net.rcvd\_errors**(gauge)                | Number of received errors (Fargate 1.4.0+ required).*Shown as error*                                                                                                                           |
| **ecs.fargate.net.sent\_errors**(gauge)                | Number of sent errors (Fargate 1.4.0+ required).*Shown as error*                                                                                                                               |

### Events{% #events %}

The ECS Fargate check does not include any events.

### Service Checks{% #service-checks %}

**fargate\_check**

Returns `CRITICAL` if the Agent is unable to connect to Fargate, otherwise returns `OK`.

*Statuses: ok, critical*

## Troubleshooting{% #troubleshooting %}

### Agent does not start on a read-only filesystem{% #agent-does-not-start-on-a-read-only-filesystem %}

If you experience issues starting the Agent on a filesystem with the setting `"readonlyRootFilesystem": true`, follow either of the approaches below to remediate this:

{% tab title="Create a custom Agent image (recommended)" %}

1. Use a Dockerfile like the example below to add the volume at the necessary path, and copy over the existing `datadog.yaml` file. The `datadog.yaml` file can have any content or be empty, but it must be present.

```dockerfile
FROM gcr.io/datadoghq/agent:latest
VOLUME /etc/datadog-agent
ADD datadog.yaml /etc/datadog-agent/datadog.yaml
```
1. Build the container image. Datadog recommends tagging it with the version and type; for example, `docker.io/example/agent:7.62.2-rofs` (**r**ead **o**nly **f**ile **s**ystem).
1. Reference the image in your task definition, as shown in the example below.
1. Set `"readonlyRootFilesystem": true` on the Agent container, as shown in the example below.

```json
    "containerDefinitions": [
        {
            "name": "datadog-agent",
            "image": "docker.io/example/agent:7.62.2-rofs",
            ...
            "environment": [
                {
                    "name": "ECS_FARGATE",
                    "value": "true"
                },
                {
                    "name": "DD_API_KEY",
                    "value": "<API_KEY>"
                }
            ],
            "readonlyRootFilesystem": true
        },
        {
            "name": "example-app-container",
            "image": "example-image",
            ...
        }
    ]
```

{% /tab %}

{% tab title="Mount an empty volume on the Agent container" %}
If you cannot build a custom Agent image, you can follow the steps below to add an empty volume dynamically to the Agent.

{% alert level="warning" %}
This configuration deletes all the preexisting files in the `/etc/datadog-agent` folder, including:

- All the Autodiscovery config files (`/auto_conf.yaml`)
- JMX `metrics.yaml` files
- The main ECS Fargate `/etc/datadog-agent/conf.d/ecs_fargate.d/conf.yaml.default` file

As such, you must set up the integration with Autodiscovery Docker labels on the Datadog Agent container. This requires setting the `ignore_autodiscovery_tags: true` flag in the configuration. Otherwise, metrics from the app container are double-tagged with the Agent container's tags.
{% /alert %}

1. Create an empty volume for the Agent container to use. In the example below, this is named `agent_conf`.
1. Add this volume to the Agent's task definition.
1. Set `"readonlyRootFilesystem": true` on the Agent container.
1. Add `dockerLabels` to have the Agent start the `ecs_fargate` check manually.

The example below displays this configuration:

```json
    "containerDefinitions": [
        {
            "name": "datadog-agent",
            "image": "public.ecr.aws/datadog/agent:latest",
            ...
            "environment": [
                {
                    "name": "ECS_FARGATE",
                    "value": "true"
                },
                {
                    "name": "DD_API_KEY",
                    "value": "<API_KEY>"
                }
            ],
            "mountPoints": [
                {
                    "sourceVolume": "agent_conf",
                    "containerPath": "/etc/datadog-agent",
                    "readOnly": false
                }
            ],
            "readonlyRootFilesystem": true,
            "dockerLabels": {
                "com.datadoghq.ad.checks": "{\"ecs_fargate\":{\"ignore_autodiscovery_tags\":true,\"instances\":[{}]}}"
            }
        },
        {
            "name": "example-app-container",
            "image": "example-image",
            ...
        }
    ],
    "volumes": [
        {
            "name": "agent_conf",
            "host": {}
        }
    ]
```
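The `com.datadoghq.ad.checks` label value is an Autodiscovery configuration serialized as an escaped JSON string. Rather than hand-escaping it, you can generate it programmatically; a minimal sketch (not part of the task definition itself):

```typescript
// Autodiscovery configuration for the ecs_fargate check
const checkConfig = {
  ecs_fargate: { ignore_autodiscovery_tags: true, instances: [{}] },
};

// The label value is the JSON-serialized configuration...
const labelValue: string = JSON.stringify(checkConfig);

// ...and serializing it a second time shows the escaped form that
// appears inside the task definition JSON under "dockerLabels".
console.log(labelValue);
console.log(JSON.stringify(labelValue));
```

Paste the second printed value (including its surrounding quotes) as the value of `com.datadoghq.ad.checks` in your task definition file.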

{% /tab %}

Need help? Contact [Datadog support](https://docs.datadoghq.com/help/).

## Further Reading{% #further-reading %}

- [Monitor ECS applications on AWS Fargate with Datadog](https://www.datadoghq.com/blog/monitor-aws-fargate)
- [Integration Setup for ECS Fargate](https://docs.datadoghq.com/integrations/faq/integration-setup-ecs-fargate)
- [Monitor your Fargate container logs with FireLens and Datadog](https://www.datadoghq.com/blog/collect-fargate-logs-with-firelens/)
- [Key metrics for monitoring AWS Fargate](https://www.datadoghq.com/blog/aws-fargate-metrics/)
- [How to collect metrics and logs from AWS Fargate workloads](https://www.datadoghq.com/blog/tools-for-collecting-aws-fargate-metrics/)
- [AWS Fargate monitoring with Datadog](https://www.datadoghq.com/blog/aws-fargate-monitoring-with-datadog/)
