---
title: Send Amazon EKS Fargate Logs with Amazon Data Firehose
description: Datadog, the leading service for cloud-scale monitoring.
breadcrumbs: >-
  Docs > Log Management > Logs Guides > Send Amazon EKS Fargate Logs with Amazon
  Data Firehose
---

# Send Amazon EKS Fargate Logs with Amazon Data Firehose

## Overview{% #overview %}

AWS Fargate on EKS provides a fully managed experience for running Kubernetes workloads. Amazon Data Firehose can be used with EKS Fargate's built-in Fluent Bit log router to forward logs to Datadog. This guide compares log forwarding through Amazon Data Firehose with log forwarding through CloudWatch, then walks through a sample EKS Fargate application that sends logs to Datadog through Amazon Data Firehose.

{% image
   source="https://docs.dd-static.net/images/logs/guide/aws-eks-fargate-logs-with-kinesis-data-firehose/log_streaming_diagram.5dad1113a422c44b59f3d8228178c505.png?auto=format&fit=max&w=850 1x, https://docs.dd-static.net/images/logs/guide/aws-eks-fargate-logs-with-kinesis-data-firehose/log_streaming_diagram.5dad1113a422c44b59f3d8228178c505.png?auto=format&fit=max&w=850&dpr=2 2x"
   alt="Diagram of the log flow depicting a Fargate EKS cluster sending container logs through Fluent Bit log router to Amazon data firehose and an S3 backup bucket within AWS and then on to Datadog" /%}

### Amazon Data Firehose and CloudWatch log forwarding{% #amazon-data-firehose-and-cloudwatch-log-forwarding %}

The following are key differences between using Amazon Data Firehose and CloudWatch log forwarding.

- **Metadata and tagging**: Metadata, such as the Kubernetes namespace and container ID, is accessible as structured attributes when sending logs with Amazon Data Firehose.

- **AWS costs**: AWS costs vary by use case, but Amazon Data Firehose ingestion is generally less expensive than comparable CloudWatch Logs ingestion.

## Requirements{% #requirements %}

1. The following command line tools: [`kubectl`](https://kubernetes.io/docs/tasks/tools/), [`aws`](https://aws.amazon.com/cli/).
1. An EKS cluster with a [Fargate profile](https://docs.aws.amazon.com/eks/latest/userguide/fargate-profile.html) and Fargate pod execution role. In this guide, the cluster is named `fargate-cluster` with a Fargate profile named `fargate-profile` applied to the namespace `fargate-namespace`. If you don't already have these resources, use [Getting Started with Amazon EKS](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html) to create the cluster and [Getting Started with AWS Fargate using Amazon EKS](https://docs.aws.amazon.com/eks/latest/userguide/fargate-getting-started.html) to create the Fargate profile and pod execution role.

## Setup{% #setup %}

The following steps outline the process for sending logs from a sample application deployed on an EKS cluster through Fluent Bit and an Amazon Data Firehose delivery stream to Datadog. To maximize consistency with standard Kubernetes tags in Datadog, instructions are included to remap selected attributes to tag keys.

1. Create an Amazon Data Firehose delivery stream that delivers logs to Datadog, along with an S3 backup bucket for any failed log deliveries.
1. Configure Fluent Bit for Firehose on EKS Fargate.
1. Deploy a sample application.
1. Apply remapper processors for correlation using Kubernetes tags and the `container_id` tag.

### Create an Amazon Data Firehose delivery stream{% #create-an-amazon-data-firehose-delivery-stream %}

See the [Send AWS Services Logs with the Datadog Amazon Data Firehose Destination](https://docs.datadoghq.com/logs/guide/send-aws-services-logs-with-the-datadog-kinesis-firehose-destination.md?tab=amazondatafirehosedeliverystream#setup) guide to set up an Amazon Data Firehose delivery stream. **Note**: Set the **Source** to `Direct PUT`.

### Configure Fluent Bit for Firehose on an EKS Fargate cluster{% #configure-fluent-bit-for-firehose-on-an-eks-fargate-cluster %}

1. Create the `aws-observability` namespace.

   ```shell
   kubectl create namespace aws-observability
   ```

1. Create the following Kubernetes ConfigMap for Fluent Bit as `aws-logging-configmap.yaml`, substituting the name of your delivery stream.

   **Note**: For the newer, higher-performance [Kinesis Firehose plugin](https://docs.fluentbit.io/manual/pipeline/outputs/firehose), use the plugin name `kinesis_firehose` instead of the older `firehose` plugin.
   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: aws-logging
     namespace: aws-observability
   data:
     filters.conf: |
       [FILTER]
           Name                kubernetes
           Match               kube.*
           Merge_Log           On
           Buffer_Size         0
           Kube_Meta_Cache_TTL 300s

     flb_log_cw: 'true'

     output.conf: |
       [OUTPUT]
           Name kinesis_firehose
           Match kube.*
           region <REGION>
           delivery_stream <YOUR-DELIVERY-STREAM-NAME>
   ```

1. Use `kubectl` to apply the ConfigMap manifest.

   ```shell
   kubectl apply -f aws-logging-configmap.yaml
   ```

1. Create an IAM policy and attach it to the pod execution role to allow the log router running on AWS Fargate to write to your Amazon Data Firehose delivery stream. Use the example below, replacing the ARN in the **Resource** field with the ARN of your delivery stream, and specifying your region and account ID.

   In the `allow_firehose_put_permission.json` file:

   ```json
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Sid": "VisualEditor0",
               "Effect": "Allow",
               "Action": [
                   "firehose:PutRecord",
                   "firehose:PutRecordBatch"
               ],
               "Resource": "arn:aws:firehose:<REGION>:<ACCOUNTID>:deliverystream/<DELIVERY-STREAM-NAME>"
           }
       ]
   }
   ```

   1. Create the policy.
      ```shell
      aws iam create-policy \
          --policy-name FluentBitEKSFargate \
          --policy-document file://allow_firehose_put_permission.json
      ```
   1. Retrieve the Fargate Pod Execution Role and attach the IAM policy.
      ```shell
      POD_EXEC_ROLE=$(aws eks describe-fargate-profile \
        --cluster-name fargate-cluster \
        --fargate-profile-name fargate-profile \
        --query 'fargateProfile.podExecutionRoleArn' --output text | cut -d '/' -f 2)
      aws iam attach-role-policy \
        --policy-arn arn:aws:iam::<ACCOUNTID>:policy/FluentBitEKSFargate \
        --role-name $POD_EXEC_ROLE
      ```
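
The `cut -d '/' -f 2` in the step above extracts the role name from the role ARN, because `aws iam attach-role-policy` expects a role name rather than a full ARN. A minimal sketch of that extraction, using a hypothetical ARN:

```shell
# Hypothetical pod execution role ARN, for illustration only
POD_EXEC_ROLE_ARN="arn:aws:iam::123456789012:role/AmazonEKSFargatePodExecutionRole"

# Splitting on "/" leaves the role name in the second field
POD_EXEC_ROLE=$(echo "$POD_EXEC_ROLE_ARN" | cut -d '/' -f 2)
echo "$POD_EXEC_ROLE"   # AmazonEKSFargatePodExecutionRole
```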

### Deploy a sample application{% #deploy-a-sample-application %}

To generate logs and test the Amazon Data Firehose delivery stream, deploy a sample workload to your EKS Fargate cluster.

1. Create a deployment manifest `sample-deployment.yaml`.

   ```yaml
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: sample-app
     namespace: fargate-namespace
   spec:
     selector:
       matchLabels:
         app: nginx
     replicas: 3
     template:
       metadata:
         labels:
           app: nginx
       spec:
         containers:
         - name: nginx
           image: nginx
           ports:
           - containerPort: 80
   ```

1. Create the `fargate-namespace` namespace.
   ```shell
   kubectl create namespace fargate-namespace
   ```

1. Use `kubectl` to apply the deployment manifest.
   ```shell
   kubectl apply -f sample-deployment.yaml
   ```

### Validation{% #validation %}

1. Verify that `sample-app` pods are running in the namespace `fargate-namespace`.

   ```shell
   kubectl get pods -n fargate-namespace
   ```

   Expected output:

   ```bash
   NAME                          READY   STATUS    RESTARTS   AGE
   sample-app-6c8b449b8f-kq2qz   1/1     Running   0          3m56s
   sample-app-6c8b449b8f-nn2w7   1/1     Running   0          3m56s
   sample-app-6c8b449b8f-wzsjj   1/1     Running   0          3m56s
   ```

1. Use `kubectl describe pod` to confirm that the Fargate logging feature is enabled.

   ```shell
   kubectl describe pod <POD-NAME> -n fargate-namespace | grep Logging
   ```

   Expected output:

   ```bash
   Logging: LoggingEnabled
   Normal  LoggingEnabled   5m   fargate-scheduler  Successfully enabled logging for pod
   ```

1. Inspect deployment logs.

   ```shell
   kubectl logs -l app=nginx -n fargate-namespace
   ```

   Expected output:

   ```bash
   /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
   /docker-entrypoint.sh: Configuration complete; ready for start up
   2023/01/27 16:53:42 [notice] 1#1: using the "epoll" event method
   2023/01/27 16:53:42 [notice] 1#1: nginx/1.23.3
   2023/01/27 16:53:42 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6)
   2023/01/27 16:53:42 [notice] 1#1: OS: Linux 4.14.294-220.533.amzn2.x86_64
   2023/01/27 16:53:42 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1024:65535
   2023/01/27 16:53:42 [notice] 1#1: start worker processes
   ...
   ```

1. Verify that the logs are in Datadog. In the [Datadog Log Explorer](https://app.datadoghq.com/logs), search for `@aws.firehose.arn:"<ARN>"`, replacing `<ARN>` with your Amazon Data Firehose ARN, to filter for logs from Amazon Data Firehose.

   {% image
      source="https://docs.dd-static.net/images/logs/guide/aws-eks-fargate-logs-with-kinesis-data-firehose/log_verification.0811a40a8785a8429ef5dc083ee1a6a9.jpg?auto=format&fit=max&w=850 1x, https://docs.dd-static.net/images/logs/guide/aws-eks-fargate-logs-with-kinesis-data-firehose/log_verification.0811a40a8785a8429ef5dc083ee1a6a9.jpg?auto=format&fit=max&w=850&dpr=2 2x"
      alt="Verification of the nginx log lines in Datadog Log Explorer" /%}

### Remap attributes for log correlation{% #remap-attributes-for-log-correlation %}

Logs from this configuration require some attributes to be remapped to maximize consistency with standard Kubernetes tags in Datadog.

1. Go to the [Datadog Log Pipelines](https://app.datadoghq.com/logs/pipelines) page.

1. Create a new pipeline with **Name** `EKS Fargate Log Pipeline` and **Filter** `service:aws source:aws`.

1. Create four [Remapper processors](https://docs.datadoghq.com/logs/log_configuration/processors.md?tab=ui#remapper) to remap the following attributes to tag keys:

| Attribute to remap          | Target Tag Key        |
| --------------------------- | --------------------- |
| `kubernetes.container_name` | `kube_container_name` |
| `kubernetes.namespace_name` | `kube_namespace`      |
| `kubernetes.pod_name`       | `pod_name`            |
| `kubernetes.docker_id`      | `container_id`        |

1. After you create this pipeline, logs emitted by the sample app have the attributes above remapped to Kubernetes tags, as in this example:

   {% image
      source="https://docs.dd-static.net/images/logs/guide/aws-eks-fargate-logs-with-kinesis-data-firehose/log_example_remapped.496a2809feaadfb904af7a8840a7e140.jpg?auto=format&fit=max&w=850 1x, https://docs.dd-static.net/images/logs/guide/aws-eks-fargate-logs-with-kinesis-data-firehose/log_example_remapped.496a2809feaadfb904af7a8840a7e140.jpg?auto=format&fit=max&w=850&dpr=2 2x"
      alt="The detail view of a log in Datadog with the container_id, kube_container_name, kube_namespace, and pod_name tags" /%}
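
Each remapper promotes one attribute's value to a tag under the target key; the log content itself is unchanged. The mapping in the table can be sketched as follows (an illustration of the mapping only, not how Datadog implements remappers):

```shell
# Illustration only: return the target tag key for a given log attribute,
# mirroring the remapping table above
remap_attribute() {
  case "$1" in
    kubernetes.container_name) echo "kube_container_name" ;;
    kubernetes.namespace_name) echo "kube_namespace" ;;
    kubernetes.pod_name)       echo "pod_name" ;;
    kubernetes.docker_id)      echo "container_id" ;;
    *)                         echo "unmapped" ;;
  esac
}

# A log with the attribute kubernetes.namespace_name:fargate-namespace
# gains the tag kube_namespace:fargate-namespace after the pipeline runs
remap_attribute kubernetes.namespace_name   # prints kube_namespace
```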

## Further Reading{% #further-reading %}

- [Processors](https://docs.datadoghq.com/logs/log_configuration/processors.md)
- [Fargate logging](https://docs.aws.amazon.com/eks/latest/userguide/fargate-logging.html)
- [AWS Fargate profile](https://docs.aws.amazon.com/eks/latest/userguide/fargate-profile.html)
- [How to send logs to Datadog while reducing data transfer fees](https://docs.datadoghq.com/logs/guide/reduce_data_transfer_fees.md)
