Before you install CloudPrem on Azure, you must set up a set of supporting infrastructure resources. These components provide the foundational compute, storage, database, and networking services that CloudPrem depends on. This documentation outlines all the resources you must set up in your Azure account before you proceed to the installation steps described in the Azure AKS Installation Guide.
Here are the components you must provision:
CloudPrem runs entirely on Kubernetes. You need an AKS cluster with sufficient CPU, memory, and disk space configured for your workload. See the Kubernetes cluster sizing recommendations for guidance.
To confirm the cluster is reachable and nodes are in the Ready state, run the following command:
kubectl get nodes -o wide
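If you do not already have a cluster, one way to provision a basic AKS cluster is with the Azure CLI. This is a sketch, not a prescribed configuration: the resource group, cluster name, node count, and VM size below are illustrative assumptions, and you should size the cluster according to the sizing recommendations above.

```shell
# Create a three-node AKS cluster (adjust node count and VM size for your workload)
az aks create \
  --resource-group <RESOURCE_GROUP> \
  --name cloudprem-aks \
  --node-count 3 \
  --node-vm-size Standard_D8s_v5 \
  --generate-ssh-keys

# Merge the cluster credentials into your local kubeconfig so kubectl can reach it
az aks get-credentials \
  --resource-group <RESOURCE_GROUP> \
  --name cloudprem-aks
```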
CloudPrem stores its metadata and configuration in a PostgreSQL database. Datadog recommends the Azure Database for PostgreSQL Flexible Server. It must be reachable from the AKS cluster, ideally with private networking enabled. See the Postgres sizing recommendations for details.
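As a sketch of this step, a Flexible Server can be provisioned with the Azure CLI and attached to the AKS virtual network for private connectivity. The server name, tier, SKU, and PostgreSQL version below are illustrative assumptions; follow the sizing recommendations for actual values.

```shell
# Create a PostgreSQL Flexible Server with private networking in the AKS VNet
az postgres flexible-server create \
  --resource-group <RESOURCE_GROUP> \
  --name cloudprem-postgres \
  --tier GeneralPurpose \
  --sku-name Standard_D4s_v3 \
  --version 15 \
  --vnet <VNET_NAME> \
  --subnet <SUBNET_NAME>
```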
Connect to your PostgreSQL database from within the AKS network using the PostgreSQL client, psql. First, start an interactive pod in your Kubernetes cluster using an image that includes psql:
kubectl run psql-client \
-n <NAMESPACE_NAME> \
--rm -it \
--image=bitnami/postgresql:latest \
--command -- bash
Then, run the following command directly from the shell, replacing the placeholder values with your actual values:
psql "host=<HOST> \
port=<PORT> \
dbname=<DATABASE> \
user=<USERNAME> \
password=<PASSWORD>"
If successful, you should see a prompt similar to:
psql (15.2)
SSL connection (protocol: TLS...)
Type "help" for help.
<DATABASE>=>
CloudPrem uses Azure Blob Storage to persist logs. Create a dedicated container for this purpose.
Use a dedicated container per environment (for example, cloudprem-prod, cloudprem-staging), and assign least-privilege RBAC roles at the container level rather than at the storage account scope.
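For example, the storage account and container can be created with the Azure CLI. The account and container names below are illustrative assumptions; choose names that follow your own conventions.

```shell
# Create a storage account to hold the CloudPrem container
az storage account create \
  --resource-group <RESOURCE_GROUP> \
  --name <STORAGE_ACCOUNT_NAME> \
  --sku Standard_LRS

# Create a dedicated per-environment container, authenticating with your Entra ID identity
az storage container create \
  --account-name <STORAGE_ACCOUNT_NAME> \
  --name cloudprem-prod \
  --auth-mode login
```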
An Azure AD (Microsoft Entra ID) application must be granted read/write access to the Blob Storage container. Register a dedicated application for CloudPrem and assign the corresponding service principal the Storage Blob Data Contributor role on the Blob Storage container created above. (The data-plane Storage Blob Data Contributor role is required for blob read/write access when authenticating with Entra ID; the general-purpose Contributor role does not grant access to blob contents.)
Register an application in Microsoft Entra ID
Assign an Azure role for access to blob data
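These two steps can be sketched with the Azure CLI. The application name is an illustrative assumption, and the role scope targets only the container created above rather than the whole storage account; Storage Blob Data Contributor is the built-in Azure role that grants blob read/write through Entra ID authentication.

```shell
# Register an application and create its service principal; note the appId in the output
az ad sp create-for-rbac --name cloudprem-app

# Grant the service principal blob read/write on the dedicated container only
az role assignment create \
  --assignee <APP_ID> \
  --role "Storage Blob Data Contributor" \
  --scope "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.Storage/storageAccounts/<STORAGE_ACCOUNT_NAME>/blobServices/default/containers/<CONTAINER_NAME>"
```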
The public ingress is essential for enabling Datadog's control plane and query service to manage and query CloudPrem clusters over the public internet. It provides secure access to the CloudPrem gRPC API, including forwarding the client certificate to CloudPrem in the ssl-client-cert header.
Use the following nginx-public.yaml Helm values file to create the public NGINX Ingress Controller:
nginx-public.yaml
controller:
  electionID: public-ingress-controller-leader
  ingressClass: nginx-public
  ingressClassResource:
    name: nginx-public
    enabled: true
    default: false
    controllerValue: k8s.io/public-ingress-nginx
  service:
    type: LoadBalancer
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path: /healthz
Then, install the controller with Helm using the following command:
helm upgrade --install nginx-public ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace nginx-ingress-public \
--create-namespace \
-f nginx-public.yaml
Verify that the controller pod is running:
kubectl get pods -n nginx-ingress-public -l app.kubernetes.io/component=controller
Verify that the service exposes an external IP:
kubectl get svc -n nginx-ingress-public -l app.kubernetes.io/component=controller
The internal ingress enables log ingestion from Datadog Agents and other log collectors within your environment through HTTP. Use the following nginx-internal.yaml Helm values file to create the internal NGINX Ingress Controller:
nginx-internal.yaml
controller:
  electionID: internal-ingress-controller-leader
  ingressClass: nginx-internal
  ingressClassResource:
    name: nginx-internal
    enabled: true
    default: false
    controllerValue: k8s.io/internal-ingress-nginx
  service:
    type: LoadBalancer
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
      service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path: /healthz
Then, install the controller with Helm using the following command:
helm upgrade --install nginx-internal ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace nginx-ingress-internal \
--create-namespace \
-f nginx-internal.yaml
Verify that the controller pod is running:
kubectl get pods -n nginx-ingress-internal -l app.kubernetes.io/component=controller
Verify that the service exposes an IP address (for this internal load balancer, the EXTERNAL-IP column shows a private IP from your virtual network):
kubectl get svc -n nginx-ingress-internal -l app.kubernetes.io/component=controller
Optionally, you can add a DNS entry pointing to the IP of the public load balancer, so future IP changes won’t require updating the configuration on the Datadog side.
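For example, if the zone is hosted in Azure DNS, a record can be added with the Azure CLI. The zone and record names below are illustrative assumptions.

```shell
# Point cloudprem.<YOUR_ZONE> at the public load balancer's IP address
az network dns record-set a add-record \
  --resource-group <RESOURCE_GROUP> \
  --zone-name <YOUR_ZONE> \
  --record-set-name cloudprem \
  --ipv4-address <PUBLIC_LB_IP>
```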
After completing the Azure configuration, proceed to the installation steps described in the Azure AKS Installation Guide.