Amazon EKS on AWS Fargate is a managed Kubernetes service that automates certain aspects of deployment and maintenance for any standard Kubernetes environment. Kubernetes nodes are managed by AWS Fargate and abstracted away from the user.
These steps cover the setup of the Datadog Agent v7.17+ in a container within Amazon EKS on AWS Fargate. Refer to the Datadog-Amazon EKS integration documentation if you are not using AWS Fargate.
AWS Fargate pods are not physical pods, so host-based system checks such as CPU and memory are not available. To collect data from your AWS Fargate pods, you must run the Agent as a sidecar of your application pod with custom RBAC. This enables the features covered in the sections below: metrics collection, integrations through Autodiscovery, DogStatsD, and trace collection.
If you don’t specify through an AWS Fargate profile that your pods should run on Fargate, your pods may be scheduled on classic EC2 machines. If that is the case, refer to the Datadog-Amazon EKS integration setup to collect data from them. This works by running the Agent as an EC2-type workload. The Agent setup is the same as the Kubernetes Agent setup, and all options are available. To deploy the Agent on EC2 nodes, use the DaemonSet setup for the Datadog Agent.
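For reference, whether a pod lands on Fargate is determined by a Fargate profile's selectors. The following eksctl ClusterConfig is a minimal, illustrative sketch only; the cluster name, region, namespace, and node group values are placeholders and not part of this guide. Pods created in the example fargate namespace run on Fargate, while the EC2 node group handles everything else:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: "<CLUSTER_NAME>"
  region: us-east-1
fargateProfiles:
  - name: fargate-workloads
    selectors:
      ## Pods created in this namespace are scheduled on Fargate
      - namespace: fargate
nodeGroups:
  ## Classic EC2 nodes for everything that does not match a Fargate profile
  - name: ec2-workloads
    instanceType: m5.large
    desiredCapacity: 2
Fargate profiles can also be created from the AWS console or CLI; in every case, the profile's selectors determine which pods are scheduled on Fargate.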
To get the best observability coverage when monitoring workloads in AWS EKS Fargate, install the Datadog integrations for Kubernetes, AWS, EKS, and EKS Fargate.
Also, set up integrations for any other AWS services you are running with EKS (for example, ELB).
To install, use the datadog/agent image with version v7.17 or above.
If the Agent is running as a sidecar, it can communicate only with containers in the same pod. Run an Agent for every pod you wish to monitor.
To collect data from your applications running in AWS EKS Fargate on a Fargate node, follow the setup steps below: set up the Agent RBAC rules, deploy the Agent as a sidecar, and enable metrics, DogStatsD, Live Container, and trace collection as needed.
To have EKS Fargate containers in the Datadog Live Container View, enable shareProcessNamespace
on your pod spec. See Process Collection.
Use the following Agent RBAC when deploying the Agent as a sidecar in AWS EKS Fargate:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: datadog-agent
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/metrics
      - nodes/spec
      - nodes/stats
      - nodes/proxy
      - nodes/pods
      - nodes/healthz
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: datadog-agent
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: datadog-agent
subjects:
  - kind: ServiceAccount
    name: datadog-agent
    namespace: default
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: datadog-agent
  namespace: default
To start collecting data from your Fargate-type pods, deploy the Datadog Agent v7.17+ as a sidecar of your application. The following is the minimum configuration required to collect metrics from your application running in the pod. Note the addition of DD_EKS_FARGATE=true
in the manifest to deploy your Datadog Agent sidecar.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "<APPLICATION_NAME>"
  namespace: default
spec:
  replicas: 1
  ## Required by apps/v1 Deployments: must match the pod template labels
  selector:
    matchLabels:
      app: "<APPLICATION_NAME>"
  template:
    metadata:
      labels:
        app: "<APPLICATION_NAME>"
      name: "<POD_NAME>"
    spec:
      serviceAccountName: datadog-agent
      containers:
        - name: "<APPLICATION_NAME>"
          image: "<APPLICATION_IMAGE>"
        ## Running the Agent as a side-car
        - image: datadog/agent
          name: datadog-agent
          env:
            - name: DD_API_KEY
              value: "<YOUR_DATADOG_API_KEY>"
            ## Set DD_SITE to "datadoghq.eu" to send your
            ## Agent data to the Datadog EU site
            - name: DD_SITE
              value: "datadoghq.com"
            - name: DD_EKS_FARGATE
              value: "true"
            - name: DD_KUBERNETES_KUBELET_NODENAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
          resources:
            requests:
              memory: "256Mi"
              cpu: "200m"
            limits:
              memory: "256Mi"
              cpu: "200m"
Note: Don’t forget to replace <YOUR_DATADOG_API_KEY>
with the Datadog API key from your organization.
Use Autodiscovery annotations with your application container to start collecting its metrics for the supported Agent integrations.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "<APPLICATION_NAME>"
  namespace: default
spec:
  replicas: 1
  ## Required by apps/v1 Deployments: must match the pod template labels
  selector:
    matchLabels:
      app: "<APPLICATION_NAME>"
  template:
    metadata:
      labels:
        app: "<APPLICATION_NAME>"
      name: "<POD_NAME>"
      annotations:
        ad.datadoghq.com/<CONTAINER_IDENTIFIER>.check_names: '[<CHECK_NAME>]'
        ad.datadoghq.com/<CONTAINER_IDENTIFIER>.init_configs: '[<INIT_CONFIG>]'
        ad.datadoghq.com/<CONTAINER_IDENTIFIER>.instances: '[<INSTANCE_CONFIG>]'
    spec:
      serviceAccountName: datadog-agent
      containers:
        - name: "<APPLICATION_NAME>"
          image: "<APPLICATION_IMAGE>"
        ## Running the Agent as a side-car
        - image: datadog/agent
          name: datadog-agent
          env:
            - name: DD_API_KEY
              value: "<YOUR_DATADOG_API_KEY>"
            ## Set DD_SITE to "datadoghq.eu" to send your
            ## Agent data to the Datadog EU site
            - name: DD_SITE
              value: "datadoghq.com"
            - name: DD_EKS_FARGATE
              value: "true"
            - name: DD_KUBERNETES_KUBELET_NODENAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
          resources:
            requests:
              memory: "256Mi"
              cpu: "200m"
            limits:
              memory: "256Mi"
              cpu: "200m"
Notes:
- Don’t forget to replace <YOUR_DATADOG_API_KEY> with the Datadog API key from your organization.
- The cgroups volume from the host can’t be mounted into the Agent.
Set up the container port 8125 on your Agent container to forward DogStatsD metrics from your application container to Datadog.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "<APPLICATION_NAME>"
  namespace: default
spec:
  replicas: 1
  ## Required by apps/v1 Deployments: must match the pod template labels
  selector:
    matchLabels:
      app: "<APPLICATION_NAME>"
  template:
    metadata:
      labels:
        app: "<APPLICATION_NAME>"
      name: "<POD_NAME>"
    spec:
      serviceAccountName: datadog-agent
      containers:
        - name: "<APPLICATION_NAME>"
          image: "<APPLICATION_IMAGE>"
        ## Running the Agent as a side-car
        - image: datadog/agent
          name: datadog-agent
          ## Enabling port 8125 for DogStatsD metric collection
          ports:
            - containerPort: 8125
              name: dogstatsdport
              protocol: UDP
          env:
            - name: DD_API_KEY
              value: "<YOUR_DATADOG_API_KEY>"
            ## Set DD_SITE to "datadoghq.eu" to send your
            ## Agent data to the Datadog EU site
            - name: DD_SITE
              value: "datadoghq.com"
            - name: DD_EKS_FARGATE
              value: "true"
            - name: DD_KUBERNETES_KUBELET_NODENAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
          resources:
            requests:
              memory: "256Mi"
              cpu: "200m"
            limits:
              memory: "256Mi"
              cpu: "200m"
Note: Don’t forget to replace <YOUR_DATADOG_API_KEY>
with the Datadog API key from your organization.
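Because the Agent runs as a sidecar, it shares the pod's network namespace with your application, so the DogStatsD endpoint is reachable at 127.0.0.1:8125 over UDP. The snippet below is a minimal, illustrative sketch of the application container's environment, assuming your application uses a Datadog client library that reads DD_AGENT_HOST; the placeholders are the same as in the manifest above:
      containers:
        - name: "<APPLICATION_NAME>"
          image: "<APPLICATION_IMAGE>"
          env:
            ## Point the DogStatsD client at the sidecar; in a sidecar setup
            ## the Agent listens on localhost within the same pod.
            - name: DD_AGENT_HOST
              value: "127.0.0.1"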
Set up the container port 8126 on your Agent container to collect traces from your application container. Read more about how to set up tracing.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "<APPLICATION_NAME>"
  namespace: default
spec:
  replicas: 1
  ## Required by apps/v1 Deployments: must match the pod template labels
  selector:
    matchLabels:
      app: "<APPLICATION_NAME>"
  template:
    metadata:
      labels:
        app: "<APPLICATION_NAME>"
      name: "<POD_NAME>"
    spec:
      serviceAccountName: datadog-agent
      containers:
        - name: "<APPLICATION_NAME>"
          image: "<APPLICATION_IMAGE>"
        ## Running the Agent as a side-car
        - image: datadog/agent
          name: datadog-agent
          ## Enabling port 8126 for Trace collection
          ports:
            - containerPort: 8126
              name: traceport
              protocol: TCP
          env:
            - name: DD_API_KEY
              value: "<YOUR_DATADOG_API_KEY>"
            ## Set DD_SITE to "datadoghq.eu" to send your
            ## Agent data to the Datadog EU site
            - name: DD_SITE
              value: "datadoghq.com"
            - name: DD_EKS_FARGATE
              value: "true"
            - name: DD_APM_ENABLED
              value: "true"
            - name: DD_KUBERNETES_KUBELET_NODENAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
          resources:
            requests:
              memory: "256Mi"
              cpu: "200m"
            limits:
              memory: "256Mi"
              cpu: "200m"
Note: Don’t forget to replace <YOUR_DATADOG_API_KEY>
with the Datadog API key from your organization.
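As with DogStatsD, the application container reaches the trace Agent sidecar over localhost. The snippet below is an illustrative sketch of the application container's environment, assuming a Datadog tracing library that reads DD_AGENT_HOST and DD_TRACE_AGENT_PORT; the values shown correspond to the sidecar configuration above:
      containers:
        - name: "<APPLICATION_NAME>"
          image: "<APPLICATION_IMAGE>"
          env:
            ## Datadog tracers use these variables to locate the Agent;
            ## with a sidecar, the trace Agent listens on localhost:8126.
            - name: DD_AGENT_HOST
              value: "127.0.0.1"
            - name: DD_TRACE_AGENT_PORT
              value: "8126"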
To collect events from your AWS EKS Fargate API server, run the Datadog Cluster Agent in an AWS EKS EC2 pod within your Kubernetes cluster.
Optionally, deploy cluster check runners in addition to setting up the Datadog Cluster Agent to enable cluster checks.
Note: You can also collect events if you run the Datadog Cluster Agent in a pod in Fargate.
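If you also want the sidecar Agents to communicate with the Cluster Agent (for example, for cluster checks), they need its service URL and authentication token. The following is a hedged sketch of the additional environment variables on the sidecar Agent container; the service name, namespace, and Secret name are assumptions matching a typical Cluster Agent deployment, not values from this guide:
          env:
            - name: DD_CLUSTER_AGENT_ENABLED
              value: "true"
            ## URL of the Cluster Agent service; "datadog-cluster-agent" in the
            ## default namespace is an assumed name.
            - name: DD_CLUSTER_AGENT_URL
              value: "https://datadog-cluster-agent.default.svc.cluster.local:5005"
            ## Token shared with the Cluster Agent; the Secret name and key are
            ## assumptions matching a typical Cluster Agent deployment.
            - name: DD_CLUSTER_AGENT_AUTH_TOKEN
              valueFrom:
                secretKeyRef:
                  name: datadog-cluster-agent
                  key: token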
For Agent 6.19+/7.19+, Process Collection is available. Enable shareProcessNamespace
on your pod spec to collect all processes running on your Fargate pod. For example:
apiVersion: v1
kind: Pod
metadata:
  name: <NAME>
spec:
  shareProcessNamespace: true
  ...
Note: CPU and memory metrics are not available.
The eks_fargate check submits a heartbeat metric eks.fargate.pods.running
that is tagged by pod_name
and virtual_node
so you can keep track of how many pods are running.
eks_fargate does not include any service checks.
eks_fargate does not include any events.
Need help? Contact Datadog support.