Send AWS EKS Fargate Logs with Kinesis Data Firehose


AWS Fargate on EKS provides a fully managed experience for running Kubernetes workloads. Kinesis Data Firehose can be used with EKS’s Fluent Bit log router to collect logs in Datadog. This guide provides a comparison of log forwarding through Kinesis Data Firehose and CloudWatch logs, as well as a sample EKS Fargate application to send logs to Datadog through Kinesis Data Firehose.

Diagram of the log flow depicting a Fargate EKS cluster sending container logs through Fluent Bit log router to Kinesis data firehose and an S3 backup bucket within AWS and then on to Datadog

Kinesis Data Firehose and CloudWatch log forwarding

The following are key differences between using Kinesis Data Firehose and CloudWatch log forwarding.

  • Metadata and tagging: Metadata such as Kubernetes namespace and container ID are accessible as structured attributes when sending logs with Kinesis Data Firehose.

  • AWS costs: AWS costs vary by use case, but Kinesis Data Firehose ingestion is generally less expensive than comparable CloudWatch Logs ingestion.


Prerequisites

  1. The following command line tools: kubectl, aws.
  2. An EKS cluster with a Fargate profile and Fargate pod execution role. In this guide, the cluster is named fargate-cluster with a Fargate profile named fargate-profile applied to the namespace fargate-namespace. If you don’t already have these resources, use Getting Started with Amazon EKS to create the cluster and Getting Started with AWS Fargate using Amazon EKS to create the Fargate profile and pod execution role.


Setup

The following steps outline the process for sending logs from a sample application deployed on an EKS cluster through Fluent Bit and a Kinesis Data Firehose delivery stream to Datadog. To maximize consistency with standard Kubernetes tags in Datadog, instructions are included to remap selected attributes to tag keys.

  1. Create a Kinesis Data Firehose delivery stream that delivers logs to Datadog, along with an S3 Backup for any failed log deliveries.
  2. Configure Fluent Bit for Firehose on EKS Fargate.
  3. Deploy a sample application.
  4. Apply remapper processors for correlation using Kubernetes tags and the container_id tag.

Create Kinesis Delivery Stream

See the Send AWS service logs with the Datadog Kinesis Firehose Destination guide to set up a Kinesis Firehose Delivery stream. Note: Set the Source as Direct PUT.
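If you prefer the AWS CLI to the console, the delivery stream can also be sketched as a `--cli-input-json` payload for `aws firehose create-delivery-stream`. This is a minimal sketch, not a complete configuration: the stream name, role ARN, and bucket ARN below are placeholders, and the field names follow the Firehose `CreateDeliveryStream` API. Verify the intake URL for your Datadog site before using it.

    {
        "DeliveryStreamName": "<YOUR-DELIVERY-STREAM-NAME>",
        "DeliveryStreamType": "DirectPut",
        "HttpEndpointDestinationConfiguration": {
            "EndpointConfiguration": {
                "Url": "https://aws-kinesis-http-intake.logs.datadoghq.com/v1/input",
                "Name": "Datadog",
                "AccessKey": "<DATADOG-API-KEY>"
            },
            "S3BackupMode": "FailedDataOnly",
            "S3Configuration": {
                "RoleARN": "arn:aws:iam::<ACCOUNTID>:role/<FIREHOSE-S3-ROLE>",
                "BucketARN": "arn:aws:s3:::<BACKUP-BUCKET-NAME>"
            }
        }
    }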

Configure Fluent Bit for Firehose on an EKS Fargate cluster

  1. Create the aws-observability namespace.
kubectl create namespace aws-observability
  2. Create the following Kubernetes ConfigMap for Fluent Bit as aws-logging-configmap.yaml. Substitute your region and the name of your delivery stream.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-logging
  namespace: aws-observability
data:
  filters.conf: |
    [FILTER]
        Name                kubernetes
        Match               kube.*
        Merge_Log           On
        Buffer_Size         0
        Kube_Meta_Cache_TTL 300s

  flb_log_cw: 'true'

  output.conf: |
    [OUTPUT]
        Name kinesis_firehose
        Match kube.*
        region <REGION>
        delivery_stream <YOUR-DELIVERY-STREAM-NAME>
  3. Use kubectl to apply the ConfigMap manifest.
kubectl apply -f aws-logging-configmap.yaml
  4. Create an IAM policy and attach it to the pod execution role to allow the log router running on AWS Fargate to write to the Kinesis Data Firehose delivery stream. You can use the example below, replacing the ARN in the Resource field with the ARN of your delivery stream, as well as specifying your region and account ID. Save the policy as allow_kinesis_put_permission.json.


    "Version": "2012-10-17",
    "Statement": [
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [

a. Create the policy.

aws iam create-policy \
         --policy-name FluentBitEKSFargate \
         --policy-document file://allow_kinesis_put_permission.json

b. Retrieve the Fargate Pod Execution Role and attach the IAM policy.

 POD_EXEC_ROLE=$(aws eks describe-fargate-profile \
   --cluster-name fargate-cluster \
   --fargate-profile-name fargate-profile \
   --query 'fargateProfile.podExecutionRoleArn' --output text | cut -d '/' -f 2)
 aws iam attach-role-policy \
         --policy-arn arn:aws:iam::<ACCOUNTID>:policy/FluentBitEKSFargate \
         --role-name $POD_EXEC_ROLE
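The `cut` in the first command strips the ARN prefix, because `attach-role-policy` expects a role name rather than a full ARN. A standalone sketch of that extraction, using a hypothetical role ARN:

```shell
# Hypothetical pod execution role ARN (yours will differ)
ARN="arn:aws:iam::123456789012:role/AmazonEKSFargatePodExecutionRole"

# Split on '/' and keep the second field: everything after 'role/' is the role name
ROLE_NAME=$(echo "$ARN" | cut -d '/' -f 2)
echo "$ROLE_NAME"   # AmazonEKSFargatePodExecutionRole
```

Note that this assumes the role has no IAM path; a role ARN with a path (for example `role/some/path/name`) would need a different field selection.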

Deploy a sample application

To generate logs and test the Kinesis pipeline, deploy a sample workload to your EKS Fargate cluster.

  1. Create a deployment manifest sample-deployment.yaml.


 apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: sample-app
   namespace: fargate-namespace
 spec:
   selector:
     matchLabels:
       app: nginx
   replicas: 3
   template:
     metadata:
       labels:
         app: nginx
     spec:
       containers:
       - name: nginx
         image: nginx
         ports:
         - containerPort: 80
  2. Create the fargate-namespace namespace.
 kubectl create namespace fargate-namespace
  3. Use kubectl to apply the deployment manifest.
 kubectl apply -f sample-deployment.yaml


  4. Verify that sample-app pods are running in the namespace fargate-namespace.
 kubectl get pods -n fargate-namespace

Expected output:

 NAME                          READY   STATUS    RESTARTS   AGE
 sample-app-6c8b449b8f-kq2qz   1/1     Running   0          3m56s
 sample-app-6c8b449b8f-nn2w7   1/1     Running   0          3m56s
 sample-app-6c8b449b8f-wzsjj   1/1     Running   0          3m56s
  5. Use kubectl describe pod to confirm that the Fargate logging feature is enabled.
 kubectl describe pod <POD-NAME> -n fargate-namespace | grep Logging

Expected output:

                    Logging: LoggingEnabled
 Normal  LoggingEnabled   5m   fargate-scheduler  Successfully enabled logging for pod
  6. Inspect deployment logs.
 kubectl logs -l app=nginx -n fargate-namespace

Expected output:

 /docker-entrypoint.sh: Launching /docker-entrypoint.d/...
 /docker-entrypoint.sh: Configuration complete; ready for start up
 2023/01/27 16:53:42 [notice] 1#1: using the "epoll" event method
 2023/01/27 16:53:42 [notice] 1#1: nginx/1.23.3
 2023/01/27 16:53:42 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6)
 2023/01/27 16:53:42 [notice] 1#1: OS: Linux 4.14.294-220.533.amzn2.x86_64
 2023/01/27 16:53:42 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1024:65535
 2023/01/27 16:53:42 [notice] 1#1: start worker processes
  7. Verify logs in the Datadog UI. Select source:aws to filter for logs from Kinesis Data Firehose.
    Verification of the nginx log lines in Datadog Log Explorer

Remap attributes for log correlation

Logs from this configuration require some attributes to be remapped to maximize consistency with standard Kubernetes tags in Datadog.

  1. Go to the Datadog Log Pipelines page.

  2. Create a new pipeline with Name EKS Fargate Log Pipeline and Filter service:aws source:aws.

  3. Create four Remapper processors to remap the following attributes to tag keys:

     Attribute to remap           Target tag key
     kubernetes.container_name    kube_container_name
     kubernetes.namespace_name    kube_namespace
     kubernetes.pod_name          pod_name
     kubernetes.docker_id         container_id
  4. After creating this pipeline, logs emitted by the sample app are tagged like this example with the log attributes remapped to Kubernetes tags:

    The detail view of a log in Datadog with the container_id, kube_container_name, kube_namespace, and pod_name tags
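If you manage pipelines through the Datadog Logs Pipelines API rather than the UI, each remapper corresponds to an attribute remapper processor. A sketch of the processor body for one of the four remappers, assuming the attribute names produced by the Fluent Bit kubernetes filter; the other three differ only in the sources and target fields:

    {
        "type": "attribute-remapper",
        "name": "Map kubernetes.namespace_name to kube_namespace",
        "sources": ["kubernetes.namespace_name"],
        "source_type": "attribute",
        "target": "kube_namespace",
        "target_type": "tag",
        "preserve_source": false,
        "override_on_conflict": false
    }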

Further Reading