---
title: Cluster Sizing
description: Learn about cluster sizing for CloudPrem
breadcrumbs: Docs > CloudPrem > Operate CloudPrem > Cluster Sizing
---

# Cluster Sizing

{% callout %}
##### Important note for users on the following Datadog sites: app.ddog-gov.com

{% alert level="danger" %}
This product is not supported for your selected [Datadog site](https://docs.datadoghq.com/getting_started/site.md).
{% /alert %}

{% /callout %}

{% callout %}
##### CloudPrem is in Preview

Join the CloudPrem Preview to access new self-hosted log management features.

[Request Access](https://www.datadoghq.com/product-preview/cloudprem/)
{% /callout %}

## Overview{% #overview %}

Proper cluster sizing ensures optimal performance, cost efficiency, and reliability for your CloudPrem deployment. Your sizing requirements depend on several factors including log ingestion volume, query patterns, and the complexity of your log data.

This guide provides baseline recommendations for dimensioning your CloudPrem cluster components—indexers, searchers, supporting services, and the PostgreSQL database.
Use your expected daily log volume and peak ingestion rates as starting points, then monitor your cluster's performance and adjust sizing as needed.

## Indexers{% #indexers %}

Indexers receive logs from Datadog Agents, then process, index, and store them as index files (called *splits*) in object storage. Proper sizing is critical for maintaining ingestion throughput and ensuring your cluster can handle your log volume.

| Specification        | Recommendation                 | Notes                                                                                                                                                                                                                                                                                                                                                                  |
| -------------------- | ------------------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Performance**      | 5 MB/s per vCPU                | Baseline throughput to determine initial sizing. Actual performance depends on log characteristics (size, number of attributes, nesting level)                                                                                                                                                                                                                         |
| **Memory**           | 4 GB RAM per vCPU              |                                                                                                                                                                                                                                                                                                                                                                        |
| **Minimum Pod Size** | 2 vCPUs, 8 GB RAM              | Recommended minimum for indexer pods                                                                                                                                                                                                                                                                                                                                   |
| **Storage Capacity** | At least 250 GB                | Required for temporary data while creating and merging index files                                                                                                                                                                                                                                                                                                     |
| **Storage Type**     | Network-attached block storage | For example: Amazon EBS gp3, Azure Managed Disks, or GCP Persistent Disk. Data is temporarily stored in a write-ahead log (WAL) before being uploaded to object storage. The WAL is not replicated, so using local (ephemeral) SSDs increases the risk of losing a few minutes of data if the disk fails. Network-attached block storage provides built-in redundancy. |
| **Disk I/O**         | ~20 MB/s per vCPU              | Equivalent to 320 IOPS per vCPU for Amazon EBS (assuming 64 KB per I/O operation)                                                                                                                                                                                                                                                                                      |

{% collapsible-section %}
#### Example: Sizing for 1 TB of logs per day

To index 1 TB of logs per day (~11.6 MB/s), follow these steps:

1. **Calculate vCPUs:** `11.6 MB/s ÷ 5 MB/s per vCPU ≈ 2.3 vCPUs`
1. **Calculate RAM:** `2.3 vCPUs × 4 GB RAM ≈ 9 GB RAM`
1. **Add headroom:** Start with one indexer pod configured with **3 vCPUs, 12 GB RAM, and a 250 GB disk**. Adjust these values based on observed performance and redundancy needs.

{% /collapsible-section %}
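The arithmetic above can be sketched as a small helper. This is an illustrative calculation only, using the baselines from the table (5 MB/s and 4 GB RAM per vCPU, 2 vCPU/8 GB pod minimum); the function name is hypothetical and not part of any CloudPrem tooling.

```python
import math

def size_indexers(daily_volume_gb: float) -> dict:
    """Estimate indexer resources from daily log volume, using the
    baselines above: 5 MB/s per vCPU and 4 GB RAM per vCPU."""
    mb_per_s = daily_volume_gb * 1000 / 86_400        # average ingestion rate
    vcpus = max(2, math.ceil(mb_per_s / 5))           # 5 MB/s per vCPU, pod minimum 2
    ram_gb = max(8, vcpus * 4)                        # 4 GB RAM per vCPU, pod minimum 8
    return {"vcpus": vcpus, "ram_gb": ram_gb}

# 1 TB/day (~11.6 MB/s) works out to 3 vCPUs and 12 GB RAM after rounding up
```

Remember that this reflects average throughput; size against peak ingestion rates if they differ significantly from the daily average.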

## Searchers{% #searchers %}

Searchers handle search queries from the Datadog UI, reading metadata from the Metastore and fetching data from object storage.

A general starting point is to provision roughly double the total number of vCPUs allocated to Indexers.

- **Performance:** Search performance depends heavily on the workload (query complexity, concurrency, amount of data scanned). For instance, term queries (`status:error AND message:exception`) are usually computationally less expensive than aggregations.
- **Memory:** 4 GB of RAM per searcher vCPU. Provision more RAM if you expect many concurrent aggregation requests.
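The doubling rule and the memory baseline can be combined into a rough starting-point calculation. The names here are illustrative only; actual searcher needs depend on your query workload.

```python
def size_searchers(total_indexer_vcpus: int) -> dict:
    """Starting point: roughly 2x the total indexer vCPUs,
    with 4 GB RAM per searcher vCPU."""
    vcpus = total_indexer_vcpus * 2
    return {"vcpus": vcpus, "ram_gb": vcpus * 4}

# A 3-vCPU indexer fleet suggests 6 searcher vCPUs and 24 GB RAM as a baseline
```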

## Other services{% #other-services %}

Allocate the following resources for these lightweight components:

| Service           | vCPUs | RAM  | Replicas |
| ----------------- | ----- | ---- | -------- |
| **Control Plane** | 2     | 4 GB | 1        |
| **Metastore**     | 2     | 4 GB | 2        |
| **Janitor**       | 2     | 4 GB | 1        |

## PostgreSQL database{% #postgresql-database %}

- **Instance Size:** For most use cases, a PostgreSQL instance with 1 vCPU and 4 GB of RAM is sufficient
- **AWS RDS Recommendation:** If using AWS RDS, the `t4g.medium` instance type is a suitable starting point
- **High Availability:** Enable Multi-AZ deployment with one standby replica for high availability

## Helm chart sizing tiers{% #helm-chart-sizing-tiers %}

The CloudPrem Helm chart provides predefined sizing tiers through the `indexer.podSize` and `searcher.podSize` parameters. Each tier sets the vCPU and memory resource limits for a pod, and automatically configures component-specific settings.

| Size    | vCPUs | Memory |
| ------- | ----- | ------ |
| medium  | 1     | 4 GB   |
| large   | 2     | 8 GB   |
| xlarge  | 4     | 16 GB  |
| 2xlarge | 8     | 32 GB  |
| 4xlarge | 16    | 64 GB  |
| 6xlarge | 24    | 96 GB  |
| 8xlarge | 32    | 128 GB |
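To map a computed vCPU requirement onto a tier, you can pick the smallest tier that covers it. The tier data below is copied from the table above; the helper itself is a hypothetical sketch, not part of the Helm chart.

```python
# (tier name, vCPUs, memory in GB), from the tier table above
POD_SIZES = [
    ("medium", 1, 4), ("large", 2, 8), ("xlarge", 4, 16),
    ("2xlarge", 8, 32), ("4xlarge", 16, 64),
    ("6xlarge", 24, 96), ("8xlarge", 32, 128),
]

def smallest_tier(required_vcpus: int) -> str:
    """Return the smallest podSize tier with at least the required vCPUs."""
    for name, vcpus, _mem_gb in POD_SIZES:
        if vcpus >= required_vcpus:
            return name
    raise ValueError("requirement exceeds the largest tier; add more replicas")

# A 3-vCPU indexer requirement maps to the "xlarge" tier
```

Because each tier keeps the 4 GB-per-vCPU ratio, selecting by vCPUs also satisfies the memory baseline.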

{% collapsible-section %}
#### Indexer configuration per tier

The following values are automatically applied when you set `indexer.podSize` in the Helm chart. For more details on each parameter, see the [Quickwit Indexer configuration](https://quickwit.io/docs/configuration/node-config#indexer-configuration).

| Size    | split_store_max_num_bytes | split_store_max_num_splits |
| ------- | ------------------------- | -------------------------- |
| medium  | 200G                      | 10000                      |
| large   | 200G                      | 10000                      |
| xlarge  | 200G                      | 10000                      |
| 2xlarge | 200G                      | 10000                      |
| 4xlarge | 200G                      | 10000                      |
| 6xlarge | 200G                      | 10000                      |
| 8xlarge | 200G                      | 10000                      |

{% /collapsible-section %}

{% collapsible-section %}
#### Ingest API configuration per tier

The following values are automatically applied when you set `indexer.podSize` in the Helm chart. For more details on each parameter, see the [Quickwit Ingest API configuration](https://quickwit.io/docs/configuration/node-config#ingest-api-configuration).

| Size    | max_queue_memory_usage | max_queue_disk_usage |
| ------- | ---------------------- | -------------------- |
| medium  | 2GiB                   | 4GiB                 |
| large   | 4GiB                   | 8GiB                 |
| xlarge  | 8GiB                   | 16GiB                |
| 2xlarge | 16GiB                  | 32GiB                |
| 4xlarge | 32GiB                  | 64GiB                |
| 6xlarge | 48GiB                  | 96GiB                |
| 8xlarge | 64GiB                  | 128GiB               |

{% /collapsible-section %}

{% collapsible-section %}
#### Searcher configuration per tier

The following values are automatically applied to searcher configuration when you set `searcher.podSize` in the Helm chart. For more details on each parameter, see the [Quickwit Searcher configuration](https://quickwit.io/docs/configuration/node-config#searcher-configuration).

| Size    | fast_field_cache_capacity | split_footer_cache_capacity | partial_request_cache_capacity | max_num_concurrent_split_searches | aggregation_memory_limit |
| ------- | ------------------------- | --------------------------- | ------------------------------ | --------------------------------- | ------------------------ |
| medium  | 1GiB                      | 500MiB                      | 64MiB                          | 2                                 | 500MiB                   |
| large   | 2GiB                      | 1GiB                        | 128MiB                         | 4                                 | 1GiB                     |
| xlarge  | 4GiB                      | 2GiB                        | 256MiB                         | 8                                 | 2GiB                     |
| 2xlarge | 8GiB                      | 4GiB                        | 512MiB                         | 16                                | 4GiB                     |
| 4xlarge | 16GiB                     | 8GiB                        | 1GiB                           | 32                                | 8GiB                     |
| 6xlarge | 24GiB                     | 12GiB                       | 1536MiB                        | 48                                | 12GiB                    |
| 8xlarge | 32GiB                     | 16GiB                       | 2GiB                           | 64                                | 16GiB                    |

{% /collapsible-section %}

## Further reading{% #further-reading %}

- [Configure CloudPrem Ingress](https://docs.datadoghq.com/cloudprem/configure/ingress.md)
- [Configure CloudPrem Log Processing](https://docs.datadoghq.com/cloudprem/configure/pipelines.md)
- [Learn more about CloudPrem Architecture](https://docs.datadoghq.com/cloudprem/introduction/architecture.md)
