Overview
Before you install CloudPrem on Azure, you must provision a set of supporting infrastructure resources. These components provide the foundational compute, storage, database, and networking services that CloudPrem depends on. This page outlines all the resources you must set up in your Azure account before you proceed to the installation steps described in the Azure AKS Installation Guide.
Azure Kubernetes Service (AKS): A cluster with sufficient capacity to run the CloudPrem workload.
Azure Database for PostgreSQL Flexible Server: Stores CloudPrem metadata and configuration.
Azure Blob Storage: A dedicated container that persists logs.
Azure AD application: A service principal with read/write access to the Blob Storage container.
NGINX Ingress Controllers: Installed on the AKS cluster (public and internal) to route external traffic to CloudPrem services.
Datadog Agent: Deployed on the AKS cluster to collect and send logs to CloudPrem.
Azure Kubernetes Service (AKS)
CloudPrem runs entirely on Kubernetes. You need an AKS cluster with sufficient CPU, memory, and disk space configured for your workload. See the Kubernetes cluster sizing recommendations for guidance.
To confirm the cluster is reachable and nodes are in the Ready state, run the following command:
kubectl get nodes -o wide
Azure PostgreSQL Flexible Server
CloudPrem stores its metadata and configuration in a PostgreSQL database. Datadog recommends Azure Database for PostgreSQL Flexible Server. The server must be reachable from the AKS cluster, ideally with private networking enabled. See the Postgres sizing recommendations for details.
For security, create a dedicated database and user for CloudPrem, and grant the user rights only on that database, not cluster-wide.
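As a sketch, the dedicated database and user could be created like this (the names and password are placeholders, not values mandated by CloudPrem):

```sql
-- Illustrative only: create a user and a database it owns.
-- The owner has full rights on its own database and nothing cluster-wide.
CREATE USER cloudprem WITH PASSWORD '<strong-password>';
CREATE DATABASE cloudprem OWNER cloudprem;
```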
Connect to your PostgreSQL database from within the AKS network using the PostgreSQL client, psql. First, start an interactive pod in your Kubernetes cluster using an image that includes psql:
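For example, a throwaway pod can be started like this (the image choice and connection parameters are illustrative; replace the placeholders with your own values):

```shell
# Start a temporary pod with psql and open a connection to the
# Flexible Server. The pod is removed when the session ends.
kubectl run psql-client --rm -it --restart=Never --image=postgres:16 -- \
  psql "host=<server-name>.postgres.database.azure.com port=5432 user=<user> dbname=<database> sslmode=require"
```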
Azure Blob Storage

CloudPrem uses Azure Blob Storage to persist logs. Create a dedicated container for this purpose.

Create a Blob Storage container
Use a dedicated container per environment (for example, cloudprem-prod, cloudprem-staging), and assign least-privilege RBAC roles at the container level, rather than at the storage account scope.
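With the Azure CLI, creating such a container might look like the following (the storage account and container names are placeholders):

```shell
# Create a dedicated container for CloudPrem logs in an existing
# storage account, authenticating with your Azure AD identity.
az storage container create \
  --name cloudprem-prod \
  --account-name <storage-account> \
  --auth-mode login
```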
An Azure AD application must be granted read/write access to the Blob Storage container. Register a dedicated application for CloudPrem and assign the corresponding service principal the Storage Blob Data Contributor role, scoped to the Blob Storage container created above. (The management-plane Contributor role does not grant data-plane access to blobs.)
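A sketch of the app registration and role assignment with the Azure CLI (the application name, IDs, and scope are placeholders):

```shell
# Register an application and create its service principal;
# note the appId in the command output.
az ad sp create-for-rbac --name cloudprem

# Grant the service principal blob data access, scoped to the
# single container rather than the whole storage account.
az role assignment create \
  --assignee <appId> \
  --role "Storage Blob Data Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/blobServices/default/containers/cloudprem-prod"
```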
Public NGINX Ingress Controller

The public ingress is essential for enabling Datadog's control plane and query service to manage and query CloudPrem clusters over the public internet. It provides secure access to the CloudPrem gRPC API through the following mechanisms:
Creates an internet-facing Azure Load Balancer that accepts traffic from Datadog services
Implements TLS encryption with termination at the ingress controller level
Uses HTTP/2 (gRPC) for communication between Datadog and CloudPrem clusters
Requires mutual TLS (mTLS) authentication where Datadog services must present valid client certificates
Configures the controller in TLS passthrough mode to forward client certificates to CloudPrem pods with the ssl-client-cert header
Rejects requests that are missing valid client certificates or the certificate header
Use the following nginx-public.yaml Helm values file to create the public NGINX Ingress Controller:
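As a rough sketch of what such a values file might contain (assuming the community ingress-nginx Helm chart; the class name and annotations are illustrative, not the exact Datadog-provided values):

```yaml
# Illustrative ingress-nginx values for a public controller.
controller:
  ingressClassResource:
    name: nginx-public
    controllerValue: k8s.io/ingress-nginx-public
  extraArgs:
    # Needed so client certificates can reach CloudPrem pods
    enable-ssl-passthrough: true
  service:
    annotations:
      # Request an internet-facing Azure Load Balancer
      service.beta.kubernetes.io/azure-load-balancer-internal: "false"
```

You would then install it with something like `helm upgrade --install nginx-public ingress-nginx/ingress-nginx --namespace nginx-ingress-public --create-namespace -f nginx-public.yaml` (chart and release names assumed).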
Verify that the controller pods are running:
kubectl get pods -n nginx-ingress-public -l app.kubernetes.io/component=controller
Verify that the service exposes an external IP:
kubectl get svc -n nginx-ingress-public -l app.kubernetes.io/component=controller
Internal NGINX Ingress Controller
The internal ingress enables log ingestion over HTTP from Datadog Agents and other log collectors within your environment. Use the following nginx-internal.yaml Helm values file to create the internal NGINX Ingress Controller:
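A sketch of what such a values file might contain (again assuming the community ingress-nginx Helm chart; names and annotations are illustrative):

```yaml
# Illustrative ingress-nginx values for an internal (private) controller.
controller:
  ingressClassResource:
    name: nginx-internal
    controllerValue: k8s.io/ingress-nginx-internal
  service:
    annotations:
      # Request an internal Azure Load Balancer with a private VNet IP
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
```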
Verify that the controller pods are running:
kubectl get pods -n nginx-ingress-internal -l app.kubernetes.io/component=controller
Verify that the service exposes an IP address (for an internal load balancer, the EXTERNAL-IP column shows a private IP from your VNet):
kubectl get svc -n nginx-ingress-internal -l app.kubernetes.io/component=controller
DNS
Optionally, you can add a DNS entry that points to the IP of the public load balancer, so that future IP changes don't require updating the configuration on the Datadog side.
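If your zone is hosted in Azure DNS, the record could be added like this (the resource group, zone, record name, and IP are placeholders):

```shell
# Create an A record pointing at the public load balancer's external IP
az network dns record-set a add-record \
  --resource-group <resource-group> \
  --zone-name <your-zone.com> \
  --record-set-name cloudprem \
  --ipv4-address <public-lb-ip>
```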