Overview
The DatadogPodAutoscaler (DPA) is a Kubernetes custom resource definition (CRD) that enables autoscaling of Kubernetes workloads using Datadog Kubernetes Autoscaling (DKA). This guide demonstrates how to use Terraform and HashiCorp’s Kubernetes provider to deploy and manage DatadogPodAutoscaler resources.
Prerequisites
Before you begin, ensure you have the following:
Kubernetes cluster: A working Kubernetes cluster that you can access with kubectl
Terraform: Terraform installed (version 0.13 or later recommended)
Datadog API credentials: Valid Datadog API key and application key
Project structure
This guide uses a multi-stage deployment approach so that Terraform creates dependencies in the correct order.
.
├── providers.tf       # Provider configurations
├── variables.tf       # Input variables
├── main.tf            # Stage 1: Datadog secret and operator (CRDs)
├── terraform.tfvars   # Example variable values
├── datadogagent/      # Stage 2: DatadogAgent CRD resource
│   └── main.tf        # DatadogAgent manifest
└── nginx-dpa/         # Stage 3: Nginx application with DPA
    └── main.tf        # Nginx namespace, deployment, and DPA
Deployment stages
A multi-stage deployment approach is essential when working with Kubernetes custom resource definitions (CRDs) and Terraform, because each stage depends on resources created by the stage before it.
Kubernetes CRDs must be installed in the cluster before you can create custom resources that use them. The DatadogPodAutoscaler CRD is created when you install the Datadog Operator in Stage 1. Terraform needs to know about these CRDs before it can manage resources that depend on them.
The Terraform Kubernetes provider discovers available resource types at initialization time. If you try to create a DatadogPodAutoscaler resource before the CRD is installed, Terraform will fail because it doesn’t recognize the custom resource type.
Stage 1 (Datadog Operator and CRDs): Creates the Datadog secret and installs the Operator
- Datadog API and application key secret
- Datadog Operator installed using Helm (creates the CRDs)
Stage 2 (Datadog Agent): Deploys the Datadog Agent configured for Datadog Kubernetes Autoscaling
- DatadogAgent custom resource with the Cluster Agent enabled
Stage 3 (Autoscaled workload): Deploys an application with a DatadogPodAutoscaler
- Nginx namespace and deployment
- DatadogPodAutoscaler resource for autoscaling the nginx deployment
Set up configuration files
First, set up the following configuration files for each stage in the process.
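Stage 1: Datadog Operator and CRDs
Each stage is a separate Terraform root module, so each directory needs provider configuration. A minimal providers.tf sketch for Stage 1; the kubeconfig path is an assumption, so adjust it for how you access your cluster:

terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.0"
    }
    helm = {
      source  = "hashicorp/helm"
      version = "~> 2.0"
    }
  }
}

provider "kubernetes" {
  # Assumes a local kubeconfig; adjust for your cluster access method
  config_path = "~/.kube/config"
}

provider "helm" {
  kubernetes {
    config_path = "~/.kube/config"
  }
}

In main.tf, install the Datadog Operator with Helm. The operator chart also installs the Datadog CRDs, including DatadogPodAutoscaler: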
resource"helm_release" "datadog_operator" {
name="datadog-operator" namespace="datadog" repository="https://helm.datadoghq.com" chart="datadog-operator" version="2.11.1" # You can update to the latest stable version
create_namespace=true}
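Stage 1 also creates the Kubernetes secret that holds the Datadog credentials for the Agent in Stage 2. A sketch of the variable declarations (variables.tf) and the secret resource; the secret and key names (datadog-secret, api-key, app-key) are example values that must match what your DatadogAgent manifest references:

# variables.tf
variable "datadog_api_key" {
  description = "Datadog API key"
  type        = string
  sensitive   = true
}

variable "datadog_app_key" {
  description = "Datadog application key"
  type        = string
  sensitive   = true
}

# main.tf: credentials consumed by the DatadogAgent resource in Stage 2
resource "kubernetes_secret" "datadog" {
  metadata {
    name      = "datadog-secret"
    namespace = "datadog"
  }

  data = {
    "api-key" = var.datadog_api_key
    "app-key" = var.datadog_app_key
  }

  # The datadog namespace is created by the operator release above
  depends_on = [helm_release.datadog_operator]
}

Provide values for these variables in terraform.tfvars.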
Alternatively, set the TF_VAR_datadog_api_key and TF_VAR_datadog_app_key environment variables in your shell.
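With the Stage 1 files in place, initialize and apply from the repository root:
terraform init
terraform apply
Stage 2: Datadog Agent
The datadogagent directory is its own root module, so it needs the same provider setup as Stage 1. Its main.tf deploys the DatadogAgent custom resource with Datadog Kubernetes Autoscaling enabled. A minimal sketch using kubernetes_manifest; the cluster name is a placeholder, and the v2alpha1 fields reflect the documented schema at the time of writing, so check the current DatadogAgent reference before applying:

resource "kubernetes_manifest" "datadog_agent" {
  manifest = {
    apiVersion = "datadoghq.com/v2alpha1"
    kind       = "DatadogAgent"
    metadata = {
      name      = "datadog"
      namespace = "datadog"
    }
    spec = {
      global = {
        clusterName = "my-cluster" # Placeholder: use your cluster name
        credentials = {
          apiSecret = {
            secretName = "datadog-secret"
            keyName    = "api-key"
          }
          appSecret = {
            secretName = "datadog-secret"
            keyName    = "app-key"
          }
        }
      }
      features = {
        # Enables Datadog Kubernetes Autoscaling; the Operator deploys
        # the Cluster Agent by default
        autoscaling = {
          workload = {
            enabled = true
          }
        }
      }
    }
  }
}

Deploy the Datadog Agent: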
cd datadogagent
terraform init
terraform apply
Verify that the Datadog Agent is deployed:
kubectl get datadogagent -n datadog
You should see the DatadogAgent custom resource, and it should reach the Running state before you proceed. Also verify that the Datadog Agent and datadog-cluster-agent pods are running:
kubectl get pods -n datadog
Stage 3: Application with DatadogPodAutoscaler
Next, deploy the nginx application together with its DatadogPodAutoscaler.
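The nginx-dpa/main.tf configuration creates the namespace, the deployment, and the DatadogPodAutoscaler. A sketch; the replica bounds and the CPU utilization target are example values, and the DPA fields follow the v2alpha1 schema at the time of writing, so confirm them against the current Datadog Kubernetes Autoscaling reference:

resource "kubernetes_namespace" "nginx" {
  metadata {
    name = "nginx"
  }
}

resource "kubernetes_deployment" "nginx" {
  metadata {
    name      = "nginx"
    namespace = kubernetes_namespace.nginx.metadata[0].name
  }

  spec {
    replicas = 2

    selector {
      match_labels = {
        app = "nginx"
      }
    }

    template {
      metadata {
        labels = {
          app = "nginx"
        }
      }

      spec {
        container {
          name  = "nginx"
          image = "nginx:1.27"

          # Resource requests are required for utilization-based scaling
          resources {
            requests = {
              cpu    = "100m"
              memory = "128Mi"
            }
          }
        }
      }
    }
  }
}

# Requires the DatadogPodAutoscaler CRD from Stage 1 to already exist
resource "kubernetes_manifest" "nginx_dpa" {
  manifest = {
    apiVersion = "datadoghq.com/v2alpha1"
    kind       = "DatadogPodAutoscaler"
    metadata = {
      name      = "nginx-dpa"
      namespace = kubernetes_namespace.nginx.metadata[0].name
    }
    spec = {
      # The workload this DPA scales
      targetRef = {
        apiVersion = "apps/v1"
        kind       = "Deployment"
        name       = "nginx"
      }
      owner = "Local"
      applyPolicy = {
        mode = "Apply" # Apply scaling recommendations automatically
      }
      constraints = {
        minReplicas = 1
        maxReplicas = 5
      }
      objectives = [
        {
          type = "PodResource"
          podResource = {
            name = "cpu"
            value = {
              type        = "Utilization"
              utilization = 80 # Target 80% average CPU utilization
            }
          }
        }
      ]
    }
  }
}

Then initialize and apply: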
cd ../nginx-dpa
terraform init
terraform apply
After deployment, verify that all components are working correctly.
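For example, confirm that the nginx pods are running and that the DatadogPodAutoscaler resource exists (datadogpodautoscalers is the resource name the CRD registers):
kubectl get pods -n nginx
kubectl get datadogpodautoscalers -n nginx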