Use compute-optimized instances with at least 8 vCPUs and 16 GiB of memory. These are ideal units for horizontally scaling the Observability Pipelines Worker aggregator. Observability Pipelines Worker can also scale vertically and automatically takes advantage of additional resources if you choose larger instances. Choose a size that allows for at least two Observability Pipelines Worker instances for your data volume to improve availability.
| Cloud Provider | Recommendation |
| --- | --- |
| AWS | c6i.2xlarge (recommended) or c6g.2xlarge |
| Azure | f8 |
| Google Cloud | c2 (8 vCPUs, 16 GiB of memory) |
| Private | 8 vCPUs, 16 GiB of memory; local disk is not required |
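To turn an expected data volume into an instance count, a minimal sketch is shown below. It assumes the roughly 10 MiB/s-per-vCPU throughput figure quoted in the disk sizing footnote later in this guide; the function name and example volume are illustrative, not part of the product.

```python
# Rough sizing sketch, assuming ~10 MiB/s per vCPU (the figure cited in the
# disk sizing footnote below). Names and numbers are illustrative.
import math

MIB_PER_SEC_PER_VCPU = 10   # assumed sustained throughput per vCPU
VCPUS_PER_INSTANCE = 8      # recommended instance size (for example, c6i.2xlarge)

def worker_instances_needed(sustained_mib_per_sec: float) -> int:
    """Return the number of 8-vCPU Worker instances for a sustained data volume."""
    per_instance = MIB_PER_SEC_PER_VCPU * VCPUS_PER_INSTANCE  # 80 MiB/s per instance
    needed = math.ceil(sustained_mib_per_sec / per_instance)
    return max(needed, 2)  # keep at least two instances for availability

# Example: a 300 MiB/s pipeline needs ceil(300 / 80) = 4 instances.
print(worker_instances_needed(300))
```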
Most Observability Pipelines Worker workloads are CPU constrained and benefit from modern CPUs.
| Cloud Provider | Recommendation |
| --- | --- |
| AWS | Latest generation Intel Xeon, 8 vCPUs (recommended), at least 4 vCPUs |
| Azure | Latest generation Intel Xeon, 8 vCPUs (recommended), at least 4 vCPUs |
| Google Cloud | Latest generation Intel Xeon, 8 vCPUs (recommended), at least 4 vCPUs |
| Private | Latest generation Intel Xeon, 8 vCPUs (recommended), at least 4 vCPUs |
Observability Pipelines Worker runs on modern CPU architectures. The x86_64 architecture offers the best performance for Observability Pipelines Worker.
Due to Observability Pipelines Worker’s affine type system, memory is rarely a constraint for its workloads. Therefore, Datadog recommends at least 2 GiB of memory per vCPU. Memory usage increases with the number of sinks because of in-memory buffering and batching. If you have many sinks, consider increasing the memory or switching to disk buffers.
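As a minimal sketch of the 2 GiB-per-vCPU guideline, the helper below computes a baseline figure; the headroom parameter is an illustrative placeholder for extra sink buffering, not a documented formula.

```python
# Baseline memory sizing sketch: 2 GiB per vCPU, plus optional headroom for
# in-memory sink buffering. The headroom value is illustrative only.
def min_memory_gib(vcpus: int, sink_headroom_gib: float = 0.0) -> float:
    return vcpus * 2.0 + sink_headroom_gib

print(min_memory_gib(8))        # 16 GiB baseline for the recommended 8 vCPUs
print(min_memory_gib(8, 4.0))   # example with extra headroom for many sinks
```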
If you’re using Observability Pipelines Worker’s disk buffers for high durability (recommended), provision at least 36 GiB of disk space per vCPU. With the recommended 8 vCPUs, that is 288 GiB of disk space (10 MiB * 60 seconds * 60 minutes * 8 vCPUs).
| Cloud Provider | Recommendation* |
| --- | --- |
| AWS | EBS gp3, 36 GiB per vCPU, no additional IOPS or throughput |
| Azure | Ultra-disk or standard SSD, 36 GiB per vCPU |
| Google Cloud | Balanced or SSD persistent disks, 36 GiB per vCPU |
| Private | Network-based block storage equivalent, 36 GiB per vCPU |
*The recommended sizes are calculated at Observability Pipelines Worker’s 10 MiB/s/vCPU throughput for one hour. For example, an 8 vCPU machine would require 288 GiB of disk space (10 MiB * 60 seconds * 60 minutes * 8 vCPUs).
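The footnote's arithmetic can be written out directly. The sketch below uses the same 10 MiB/s-per-vCPU figure and one-hour window; the comments note where the guide rounds the totals.

```python
# Disk buffer sizing sketch, following the footnote's formula:
# 10 MiB/s per vCPU, buffered for one hour.
MIB_PER_SEC_PER_VCPU = 10
BUFFER_SECONDS = 60 * 60  # one hour

def buffer_mib(vcpus: int) -> int:
    """Total MiB written in one hour at 10 MiB/s per vCPU."""
    return MIB_PER_SEC_PER_VCPU * BUFFER_SECONDS * vcpus

print(buffer_mib(1))  # 36,000 MiB per vCPU, which the guide rounds to 36 GiB
print(buffer_mib(8))  # 288,000 MiB for 8 vCPUs, quoted above as 288 GiB
```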
Choose a disk type that optimizes for durability and recovery. For example, standard block storage is ideal because it is decoupled from the instance and replicates data across multiple disks for high durability. High-performance local drives are not recommended because their throughput exceeds Observability Pipelines Worker’s needs, and their durability is reduced relative to block storage.
In addition, network file systems like Amazon’s EFS are usable, but only if sufficient throughput is provisioned; burst throughput modes do not suffice. Datadog recommends configuring at least twice the maximum expected throughput to give headroom for spikes in demand. The recommended disks above all have sufficient throughput that this is not a concern.
See Preventing Data Loss for more information on why disks are used in this architecture.
Choose a Linux-based operating system using GNU libc (glibc) ≥ 2.14 (released in 2011) if possible. Observability Pipelines Worker runs on other platforms, but this combination produces the best performance in Datadog’s benchmarks.
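To confirm the glibc version from a script, one option is Python's standard library, which reports the C library the interpreter is linked against; this assumes CPython running on a glibc-based distribution.

```python
# Report the C library the interpreter is linked against
# (assumes CPython on a glibc-based Linux distribution).
import platform

lib, version = platform.libc_ver()
print(lib or "unknown", version or "unknown")  # for example: "glibc 2.31"

# The guide recommends glibc >= 2.14.
if lib == "glibc" and version:
    parts = [int(p) for p in version.split(".")[:2]]
    if tuple(parts) >= (2, 14):
        print("glibc meets the recommended minimum (2.14)")
```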