---
title: Set Up the Worker in ECS Fargate
description: Set up the Observability Pipelines Worker in ECS Fargate with a task definition, an ECS service, and a load balancer.
breadcrumbs: >-
  Docs > Observability Pipelines > Observability Pipelines Guides > Set Up the
  Worker in ECS Fargate
---

# Set Up the Worker in ECS Fargate

{% callout %}
# Important note for users on the following Datadog site: app.ddog-gov.com

{% alert level="danger" %}
This product is not supported for your selected [Datadog site](https://docs.datadoghq.com/getting_started/site).
{% /alert %}

{% /callout %}

## Overview{% #overview %}

This guide describes one way to set up the Observability Pipelines Worker in ECS Fargate.

## Setup{% #setup %}

The setup for this example consists of a Fargate task, a Fargate service, and a load balancer.

{% image
   source="https://docs.dd-static.net/images/observability_pipelines/worker_fargate_architecture.a5b2ebc6ad385bb1535541ab4967877d.png?auto=format"
   alt="An architecture diagram with logs going to an application load balancer, an Observability Pipelines Worker task, and the Fargate service" /%}

## Configure the task definition{% #configure-the-task-definition %}

[Create a task definition](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-task-definition.html). The task definition describes which containers to run, the configuration (such as the environment variables and ports), and the CPU and memory resources allocated for the task.

Deploy the tasks as replicas with auto scaling enabled, where the minimum number of containers is based on your log volume and the maximum number of containers is high enough to absorb any spikes or growth in log volume. See [Best Practices for Scaling Observability Pipelines](https://docs.datadoghq.com/observability_pipelines/scaling_and_performance/best_practices_for_scaling_observability_pipelines/) to help determine how much CPU and memory to allocate.

**Notes**:

- The guidance for CPU and memory allocation is not for a single instance of the task, but for the total number of tasks. For example, if you want to send 3 TB of logs to the Worker, you could either deploy three replicas with one vCPU each or deploy one replica with three vCPUs.
- Datadog recommends enabling load balancers for the pool of replica tasks.
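For example, Fargate expresses CPU in units of 1024 per vCPU, so a single one-vCPU replica of the task described above might use task-level values like the following (illustrative values only; size these according to your own log volume and the scaling best practices linked above):

```json
{
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "1024",
  "memory": "2048"
}
```

Fargate only accepts specific CPU and memory pairings, so check that your chosen memory value is valid for the CPU value you select.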

Set the `DD_OP_SOURCE_*` environment variables according to the pipeline configuration and port mappings. `DD_OP_API_ENABLED` and `DD_OP_API_ADDRESS` allow the load balancer to run health checks against the Observability Pipelines Worker.

An example task definition:

```json
{
  "family": "my-opw",
  "containerDefinitions": [
    {
      "name": "my-opw",
      "image": "datadog/observability-pipelines-worker",
      "cpu": 0,
      "portMappings": [
        {
          "name": "my-opw-80-tcp",
          "containerPort": 80,
          "hostPort": 80,
          "protocol": "tcp"
        }
      ],
      "essential": true,
      "command": [
        "run"
      ],
      "environment": [
        {
          "name": "DD_OP_API_ENABLED",
          "value": "true"
        },
        {
          "name": "DD_API_KEY",
          "value": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
        },
        {
          "name": "DD_SITE",
          "value": "datadoghq.com"
        },
        {
          "name": "DD_OP_API_ADDRESS",
          "value": "0.0.0.0:8181"
        },
        {
          "name": "DD_OP_SOURCE_HTTP_SERVER_ADDRESS",
          "value": "0.0.0.0:80"
        },
        {
          "name": "DD_OP_PIPELINE_ID",
          "value": "xxxxxxx-xxxx-xxxx-xxxx-xxxx"
        }
      ],
      "mountPoints": [],
      "volumesFrom": [],
      "systemControls": []
    }
  ],
  "tags": [
    {
      "key": "PrincipalId",
      "value": "AROAYYB64AB3JW3TEST"
    },
    {
      "key": "User",
      "value": "username@test.com"
    }
  ],
  "executionRoleArn": "arn:aws:iam::60142xxxxxx:role/ecsTaskExecutionRole",
  "networkMode": "awsvpc",
  "volumes": [],
  "placementConstraints": [],
  "requiresCompatibilities": [
    "FARGATE"
  ],
  "cpu": "xxx",
  "memory": "xxx"
}
```

## Configure the ECS service{% #configure-the-ecs-service %}

[Create an ECS service](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-service-console-v2.html). The service configuration sets the number of Worker replicas to run and the scaling policy. In this example, the scaling policy is set to target an average CPU utilization of 70% with a minimum of two replicas and a maximum of five replicas.
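As a sketch, the target tracking policy described above could be expressed with an Application Auto Scaling configuration like the following, after registering the service as a scalable target with a minimum capacity of 2 and a maximum capacity of 5 (the cooldown values are illustrative assumptions, not recommendations from this guide):

```json
{
  "TargetValue": 70.0,
  "PredefinedMetricSpecification": {
    "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
  },
  "ScaleInCooldown": 60,
  "ScaleOutCooldown": 60
}
```

With this policy, ECS adds Worker replicas when average CPU utilization across the service rises above 70% and removes them as utilization falls, within the 2–5 replica bounds set on the scalable target.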

## Set up load balancing{% #set-up-load-balancing %}

Depending on your use case, configure either an [Application Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-application-load-balancer.html) or a [Network Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-network-load-balancer.html) to target the group of Fargate tasks you defined earlier. Configure the health check against the Observability Pipelines Worker's API port that was set in the task definition.
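For example, a target group's health check settings could point at the Worker's API port (`8181`, as set by `DD_OP_API_ADDRESS` in the task definition above). This sketch assumes the API serves a health endpoint at `/health`; verify the path against your Worker version:

```json
{
  "HealthCheckProtocol": "HTTP",
  "HealthCheckPort": "8181",
  "HealthCheckPath": "/health",
  "HealthCheckIntervalSeconds": 30,
  "HealthyThresholdCount": 3,
  "UnhealthyThresholdCount": 3
}
```

Because the health check uses the API port rather than the source port (`80` in this example), the load balancer can detect an unhealthy Worker even while the source listener is still accepting connections.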
