---
title: Install the DDOT Collector as a Gateway on Kubernetes
description: Learn how to deploy the DDOT Collector as a gateway on Kubernetes using the Datadog Operator or Helm.
---

# Install the DDOT Collector as a Gateway on Kubernetes

{% callout %}
# Important note for users on the following Datadog sites: app.ddog-gov.com

{% alert level="danger" %}
This product is not supported for your selected [Datadog site](https://docs.datadoghq.com/getting_started/site).
{% /alert %}

{% /callout %}

{% alert level="info" %}
This guide assumes you are familiar with deploying the DDOT Collector as a DaemonSet. For more information, see [Install the DDOT Collector as a DaemonSet on Kubernetes](https://docs.datadoghq.com/opentelemetry/setup/ddot_collector/install/kubernetes_daemonset).
{% /alert %}

## Overview{% #overview %}

The OpenTelemetry Collector can be deployed in multiple ways. The *daemonset* pattern is a common deployment where a Collector instance runs on every Kubernetes node alongside the core Datadog Agent.

{% image
   source="https://datadog-docs.imgix.net/images/opentelemetry/embedded_collector/ddot_daemonset.24a4844a23b99cf6f3b186dec4380254.png?auto=format"
   alt="Architecture diagram of the OpenTelemetry Collector daemonset pattern. A Kubernetes cluster contains three nodes. On each node, an application instrumented with OpenTelemetry sends OTLP data to a local Agent DaemonSet. The Agent DaemonSet then forwards this data directly to the Datadog backend." /%}

The [gateway](https://opentelemetry.io/docs/collector/deployment/gateway/) pattern provides an additional deployment option that uses a centralized, standalone Collector service. This gateway layer can perform actions such as tail-based sampling, aggregation, filtering, and routing before exporting the data to one or more backends such as Datadog. It acts as a central point for managing and enforcing observability policies.

{% image
   source="https://datadog-docs.imgix.net/images/opentelemetry/embedded_collector/ddot_gateway.5fde504010b15297fd93cdb31ec712f1.png?auto=format"
   alt="Architecture diagram of the OpenTelemetry Collector gateway pattern. Applications send OTLP data to local Agent DaemonSets running on each node. The DaemonSets forward this data to a central load balancer, which distributes it to a separate deployment of gateway Collector pods. These gateway pods then process and send the telemetry data to Datadog." /%}

When you enable the gateway:

1. A Kubernetes Deployment (`<RELEASE_NAME>-datadog-otel-agent-gateway-deployment`) manages the standalone **gateway Collector pods**.
1. A Kubernetes Service (`<RELEASE_NAME>-datadog-otel-agent-gateway`) exposes the gateway pods and provides load balancing.
1. The existing **DaemonSet Collector pods** are configured by default to send their telemetry data to the gateway service instead of directly to Datadog.
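You can verify these resources after deployment. The commands below are a sketch that follows the resource names above; the exact names depend on whether you use Helm or the Operator and on any `fullnameOverride` you set:

```shell
# Replace <RELEASE_NAME> with your Helm release or DatadogAgent name
kubectl get deployment <RELEASE_NAME>-datadog-otel-agent-gateway-deployment
kubectl get service <RELEASE_NAME>-datadog-otel-agent-gateway
```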

## Requirements{% #requirements %}

Before you begin, ensure you have the following:

- **Datadog Account**:
  - A [Datadog account](https://www.datadoghq.com/free-datadog-trial/).
  - Your Datadog [API key](https://app.datadoghq.com/organization-settings/api-keys/).
- **Software**:
  - A Kubernetes cluster (v1.29+). EKS Fargate and GKE Autopilot are not supported.
  - [Helm](https://helm.sh) (v3+).
  - Datadog Helm chart version 3.160.1+ or Datadog Operator version 1.23.0+.
  - [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl).
- **Network**: When using the Datadog SDK with OpenTelemetry API support, telemetry is routed to different components depending on the signal source. Ensure the following ports are accessible on your Datadog Agent or Collector (see the example snippet after this table):

| Signal Source             | Protocol             | Port        | Destination Component                         |
| ------------------------- | -------------------- | ----------- | --------------------------------------------- |
| OTel Metrics and Logs API | OTLP (gRPC/HTTP)     | 4317 / 4318 | Datadog Agent OTLP Receiver or DDOT Collector |
| Datadog Tracing           | Datadog trace intake | 8126 (TCP)  | Datadog Trace Agent                           |
| Runtime Metrics           | DogStatsD            | 8125 (UDP)  | DogStatsD Server                              |
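For example, an application pod can reach the node-local Agent or DDOT Collector through the Kubernetes downward API. The snippet below is a sketch, not part of the Datadog charts; `OTEL_EXPORTER_OTLP_ENDPOINT` and `DD_AGENT_HOST` are the standard OpenTelemetry and Datadog SDK environment variables:

```yaml
# Hypothetical container spec snippet for an instrumented application
env:
  - name: HOST_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
  # OTLP metrics and logs to the OTLP receiver (use 4317 for gRPC)
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: "http://$(HOST_IP):4318"
  # Datadog tracing (8126) and DogStatsD (8125) resolve the host through DD_AGENT_HOST
  - name: DD_AGENT_HOST
    value: "$(HOST_IP)"
```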

## Installation and configuration{% #installation-and-configuration %}

This guide shows how to configure the DDOT Collector gateway using either the Datadog Operator or Helm chart.

{% alert level="info" %}
This installation is required for both Datadog SDK + DDOT and OpenTelemetry SDK + DDOT configurations. While the Datadog SDK implements the OpenTelemetry API, it still requires the DDOT Collector to process and forward OTLP metrics and logs.
{% /alert %}

Choose one of the following installation methods:

- **Datadog Operator**: A Kubernetes-native approach that automatically reconciles and maintains your Datadog setup. It reports deployment status, health, and errors in its Custom Resource status, and it reduces the risk of misconfiguration through higher-level configuration options.
- **Helm chart**: A straightforward way to deploy the Datadog Agent. It provides versioning, rollback, and templating capabilities, making deployments consistent and easier to replicate.

### Install the Datadog Operator or Helm{% #install-the-datadog-operator-or-helm %}

{% tab title="Datadog Operator" %}
If you haven't already installed the Datadog Operator, you can install it in your cluster using the Datadog Operator Helm chart:

```shell
helm repo add datadog https://helm.datadoghq.com
helm repo update
helm install datadog-operator datadog/datadog-operator
```

For more information, see the [Datadog Operator documentation](https://github.com/DataDog/datadog-operator).
{% /tab %}

{% tab title="Helm" %}
If you haven't already added the Datadog Helm repository, add it now:

```shell
helm repo add datadog https://helm.datadoghq.com
helm repo update
```

For more information about Helm configuration options, see the [Datadog Helm chart README](https://github.com/DataDog/helm-charts/blob/main/charts/datadog/README.md).
{% /tab %}

### Deploying the gateway with a DaemonSet{% #deploying-the-gateway-with-a-daemonset %}

{% tab title="Datadog Operator" %}
To get started, enable both the gateway and the DaemonSet Collector in your `DatadogAgent` resource. This is the most common setup.

Create a file named `datadog-agent.yaml`:

```yaml
apiVersion: datadoghq.com/v2alpha1
kind: DatadogAgent
metadata:
  name: datadog
spec:
  global:
    credentials:
      apiSecret:
        secretName: datadog-secret
        keyName: api-key

  features:
    # Enable the Collector in the Agent DaemonSet
    otelCollector:
      enabled: true

    # Enable the standalone Gateway Deployment
    otelAgentGateway:
      enabled: true

  override:
    otelAgentGateway:
      # Number of replicas
      replicas: 3
      # Control placement of gateway pods
      nodeSelector:
        gateway: "true"
```

Apply the configuration:

```shell
kubectl apply -f datadog-agent.yaml
```

{% /tab %}

{% tab title="Helm" %}
To get started, enable both the gateway and the DaemonSet Collector in your `values.yaml` file. This is the most common setup.

```yaml
# values.yaml
targetSystem: "linux"
datadog:
  apiKey: <DATADOG_API_KEY>
  appKey: <DATADOG_APP_KEY>
  # Enable the Collector in the Agent Daemonset
  otelCollector:
    enabled: true

# Enable the standalone Gateway Deployment
otelAgentGateway:
  enabled: true
  replicas: 3
  nodeSelector:
    # Example selector to place gateway pods on specific nodes
    gateway: "true"
```

{% /tab %}

In this case, the DaemonSet Collector uses a default configuration that sends OTLP data to the gateway's Kubernetes service:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
exporters:
  debug:
    verbosity: detailed
  otlphttp:
    endpoint: http://<release>-datadog-otel-agent-gateway:4318
    tls:
      insecure: true
    sending_queue:
      batch:
        flush_timeout: 10s
processors:
  infraattributes:
    cardinality: 2
connectors:
  datadog/connector:
    traces:
      compute_top_level_by_span_kind: true
      peer_tags_aggregation: true
      compute_stats_by_span_kind: true
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [infraattributes]
      exporters: [otlphttp, datadog/connector]
    metrics:
      receivers: [otlp, datadog/connector]
      processors: [infraattributes]
      exporters: [otlphttp]
    logs:
      receivers: [otlp]
      processors: [infraattributes]
      exporters: [otlphttp]
```

The gateway Collector uses a default config that listens on the service ports and sends data to Datadog:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
exporters:
  debug:
    verbosity: detailed
  datadog:
    api:
      key: ${env:DD_API_KEY}
    sending_queue:
      batch:
        flush_timeout: 10s
processors:
extensions:
  datadog:
    api:
      key: ${env:DD_API_KEY}
    deployment_type: gateway
service:
  extensions: [datadog]
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [datadog]
    metrics:
      receivers: [otlp]
      exporters: [datadog]
    logs:
      receivers: [otlp]
      exporters: [datadog]
```

**For Helm users:** Configure `otelAgentGateway.affinity` or `otelAgentGateway.nodeSelector` to control pod placement, and adjust `otelAgentGateway.replicas` to scale the gateway.

**For Operator users:** Use `override.otelAgentGateway.affinity`, `override.otelAgentGateway.nodeSelector`, and `override.otelAgentGateway.replicas` for these settings.

### Deploying a standalone gateway{% #deploying-a-standalone-gateway %}

{% tab title="Datadog Operator" %}
If you have an existing DaemonSet deployment, you can deploy the gateway independently by disabling other components:

```yaml
apiVersion: datadoghq.com/v2alpha1
kind: DatadogAgent
metadata:
  name: datadog-gateway
spec:
  global:
    credentials:
      apiSecret:
        secretName: datadog-secret
        keyName: api-key

  features:
    otelAgentGateway:
      enabled: true

  override:
    otelAgentGateway:
      # Number of replicas
      replicas: 3
      # Control placement of gateway pods
      nodeSelector:
        gateway: "true"

    # Disable the Agent DaemonSet
    nodeAgent:
      disabled: true
    # Disable the Cluster Agent
    clusterAgent:
      disabled: true
```

After deploying the gateway, you must update the configuration of your existing DaemonSet Collectors to send data to the new gateway service endpoint (for example, `http://datadog-gateway-otel-agent-gateway:4318`).
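For example, the exporter section of your DaemonSet Collector configuration might look like the following sketch, mirroring the default configuration shown earlier; the service name depends on the name of your gateway `DatadogAgent` resource:

```yaml
exporters:
  otlphttp:
    endpoint: http://datadog-gateway-otel-agent-gateway:4318
    tls:
      insecure: true
```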
{% /tab %}

{% tab title="Helm" %}
If you have an existing DaemonSet deployment, you can deploy the gateway independently.

```yaml
# values.yaml
targetSystem: "linux"
fullnameOverride: "gw-only"
agents:
  enabled: false
clusterAgent:
  enabled: false
datadog:
  apiKey: <DATADOG_API_KEY>
  appKey: <DATADOG_APP_KEY>
otelAgentGateway:
  enabled: true
  replicas: 3
  nodeSelector:
    gateway: "true"
```

After deploying the gateway, you must update the configuration of your existing DaemonSet Collectors to send data to the new gateway service endpoint (for example, `http://gw-only-otel-agent-gateway:4318`).
{% /tab %}

### Customizing Collector configurations{% #customizing-collector-configurations %}

{% tab title="Datadog Operator" %}
You can customize the gateway Collector configuration using ConfigMaps. Create a ConfigMap with your custom configuration:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: otel-gateway-config
data:
  otel-gateway-config.yaml: |
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: "0.0.0.0:4317"
          http:
            endpoint: "0.0.0.0:4318"
    exporters:
      datadog:
        api:
          key: ${env:DD_API_KEY}
        sending_queue:
          batch:
            flush_timeout: 10s
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [datadog]
        metrics:
          receivers: [otlp]
          exporters: [datadog]
        logs:
          receivers: [otlp]
          exporters: [datadog]
```

Then reference it in your `DatadogAgent` resource:

```yaml
apiVersion: datadoghq.com/v2alpha1
kind: DatadogAgent
metadata:
  name: datadog
spec:
  global:
    credentials:
      apiSecret:
        secretName: datadog-secret
        keyName: api-key

  features:
    otelAgentGateway:
      enabled: true
      # Reference the custom ConfigMap
      config:
        configMap:
          name: otel-gateway-config

  override:
    otelAgentGateway:
      replicas: 3
```

For multi-item ConfigMaps or inline configuration, see the [DatadogAgent examples](https://github.com/DataDog/datadog-operator/tree/main/examples/datadogagent).
{% /tab %}

{% tab title="Helm" %}
You can override the default configurations for both the DaemonSet and gateway Collectors using the `datadog.otelCollector.config` and `otelAgentGateway.config` values, respectively.

```yaml
# values.yaml
targetSystem: "linux"
fullnameOverride: "my-gw"
datadog:
  apiKey: <DATADOG_API_KEY>
  appKey: <DATADOG_APP_KEY>
  # Enable and configure the DaemonSet Collector
  otelCollector:
    enabled: true
    config: |
      receivers:
        otlp:
          protocols:
            grpc:
              endpoint: "localhost:4317"
      exporters:
        otlp:
          endpoint: http://my-gw-otel-agent-gateway:4317
          tls:
            insecure: true
      service:
        pipelines:
          traces:
            receivers: [otlp]
            exporters: [otlp]
          metrics:
            receivers: [otlp]
            exporters: [otlp]
          logs:
            receivers: [otlp]
            exporters: [otlp]

# Enable and configure the gateway Collector
otelAgentGateway:
  enabled: true
  replicas: 3
  nodeSelector:
    gateway: "true"
  ports:
    - containerPort: 4317
      name: "otel-grpc"
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: "0.0.0.0:4317"
    exporters:
      datadog:
        api:
          key: ${env:DD_API_KEY}
        sending_queue:
          batch:
            flush_timeout: 10s
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [datadog]
        metrics:
          receivers: [otlp]
          exporters: [datadog]
        logs:
          receivers: [otlp]
          exporters: [datadog]
```

{% alert level="warning" %}
For the `infraattributes` processor to add Kubernetes tags, your telemetry must include the `container.id` resource attribute. This is often, but not always, added by OTel SDK auto-instrumentation.

If your tags are missing, see the [troubleshooting guide](https://docs.datadoghq.com/opentelemetry/troubleshooting/#infrastructure-tags-are-missing-from-telemetry) for details on how to add this attribute.
{% /alert %}

{% alert level="info" %}
If you set `fullnameOverride`, the gateway's Kubernetes service name becomes `<fullnameOverride>-otel-agent-gateway`. The ports defined in `otelAgentGateway.ports` are exposed on this service. Ensure these ports match the OTLP receiver configuration in the gateway and the OTLP exporter configuration in the DaemonSet.
{% /alert %}

{% /tab %}

The example configurations use insecure TLS for simplicity. Follow the [OTel configtls instructions](https://github.com/open-telemetry/opentelemetry-collector/tree/main/config/configtls) if you want to enable TLS.
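For instance, here is a minimal sketch of mutual TLS between the DaemonSet and the gateway, assuming the certificates are mounted from Secrets at the paths shown:

```yaml
# DaemonSet Collector: the OTLP exporter presents a client certificate
exporters:
  otlphttp:
    endpoint: https://<release>-datadog-otel-agent-gateway:4318
    tls:
      ca_file: /etc/otel/certs/ca.crt
      cert_file: /etc/otel/certs/client.crt
      key_file: /etc/otel/certs/client.key

# Gateway Collector: the OTLP receiver terminates TLS and verifies clients
receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318
        tls:
          cert_file: /etc/otel/certs/server.crt
          key_file: /etc/otel/certs/server.key
          client_ca_file: /etc/otel/certs/ca.crt
```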

### Advanced configuration options{% #advanced-configuration-options %}

{% tab title="Datadog Operator" %}
The Datadog Operator provides additional configuration options for the OTel Agent Gateway under `override.otelAgentGateway` (not `features.otelAgentGateway`, with the exception of `featureGates`):

```yaml
apiVersion: datadoghq.com/v2alpha1
kind: DatadogAgent
metadata:
  name: datadog
spec:
  global:
    credentials:
      apiSecret:
        secretName: datadog-secret
        keyName: api-key

  features:
    otelAgentGateway:
      enabled: true

      # Feature gates for OTel collector (feature-specific configuration)
      featureGates: "telemetry.UseLocalHostAsDefaultMetricsAddress"

  override:
    otelAgentGateway:
      # Number of replicas
      replicas: 3

      # Node selector for pod placement
      nodeSelector:
        kubernetes.io/os: linux
        gateway: "true"

      # Affinity configuration
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - datadog-otel-agent-gateway
              topologyKey: kubernetes.io/hostname

      # Tolerations for tainted nodes
      tolerations:
      - key: "dedicated"
        operator: "Equal"
        value: "otel-gateway"
        effect: "NoSchedule"

      # Priority class for scheduling
      priorityClassName: high-priority

      # Environment variables
      env:
      - name: OTEL_LOG_LEVEL
        value: "info"

      # Environment variables from ConfigMaps or Secrets
      envFrom:
      - configMapRef:
          name: otel-gateway-config

      # Custom image (optional)
      image:
        name: ddot-collector
        tag: "7.77.0"
        pullPolicy: IfNotPresent

      # Pod-level security context
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000

      # Configure resources
      containers:
        otel-agent:
          resources:
            requests:
              cpu: 200m
              memory: 512Mi
            limits:
              cpu: 500m
              memory: 1Gi

      # Additional labels and annotations
      labels:
        team: observability
      annotations:
        prometheus.io/scrape: "true"
```

For a complete reference of all available options, see the [DatadogAgent v2alpha1 configuration documentation](https://github.com/DataDog/datadog-operator/blob/main/docs/configuration.v2alpha1.md).
{% /tab %}

{% tab title="Helm" %}
For Helm-based deployments, many of these advanced configuration options can be set directly in the `values.yaml` file under the `otelAgentGateway` section. For a complete reference, see the [Datadog Helm chart README](https://github.com/DataDog/helm-charts/blob/main/charts/datadog/README.md).
{% /tab %}

## Advanced use cases{% #advanced-use-cases %}

### Tail sampling with the load balancing exporter{% #tail-sampling-with-the-load-balancing-exporter %}

A primary use case for the gateway is tail-based sampling. To ensure that all spans for a given trace are processed by the same gateway pod, use the **load balancing exporter** in your DaemonSet Collectors. This exporter consistently routes spans based on a key, such as `traceID`.

{% tab title="Datadog Operator" %}
The DaemonSet Collector is configured with the `loadbalancing` exporter, which uses the Kubernetes service resolver to discover and route data to the gateway pods. The gateway Collector uses the `tail_sampling` processor to sample traces based on defined policies before exporting them to Datadog.

**Note**: RBAC permissions are required for the k8s resolver in the loadbalancing exporter.

Create a ConfigMap for the DaemonSet Collector configuration with the load balancing exporter:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: otel-daemonset-config
data:
  otel-config.yaml: |
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: "localhost:4317"
    exporters:
      loadbalancing:
        routing_key: "traceID"
        protocol:
          otlp:
            tls:
              insecure: true
        resolver:
          k8s:
            service: datadog-otel-agent-gateway
            ports:
              - 4317
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [loadbalancing]
```

Create a ConfigMap for the gateway Collector configuration with tail sampling:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: otel-gateway-tailsampling-config
data:
  otel-gateway-config.yaml: |
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: "0.0.0.0:4317"
    processors:
      tail_sampling:
        decision_wait: 10s
        policies:
          # Add your sampling policies here
          - name: sample-errors
            type: status_code
            status_code:
              status_codes: [ERROR]
          - name: sample-slow-traces
            type: latency
            latency:
              threshold_ms: 1000
    connectors:
      datadog/connector:
    exporters:
      datadog:
        api:
          key: ${env:DD_API_KEY}
    service:
      pipelines:
        traces/sample:
          receivers: [otlp]
          processors: [tail_sampling]
          exporters: [datadog]
        traces:
          receivers: [otlp]
          exporters: [datadog/connector]
        metrics:
          receivers: [datadog/connector]
          exporters: [datadog]
```

Apply the DatadogAgent configuration:

```yaml
apiVersion: datadoghq.com/v2alpha1
kind: DatadogAgent
metadata:
  name: datadog
spec:
  global:
    credentials:
      apiSecret:
        secretName: datadog-secret
        keyName: api-key

  features:
    otelCollector:
      enabled: true
      # Reference the custom DaemonSet config
      config:
        configMap:
          name: otel-daemonset-config
      # RBAC permissions for the k8s resolver
      rbac:
        create: true

    otelAgentGateway:
      enabled: true
      # Reference the custom gateway config
      config:
        configMap:
          name: otel-gateway-tailsampling-config

  override:
    otelAgentGateway:
      replicas: 3
```

Create a ClusterRole for the DaemonSet to access endpoints:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-collector-k8s-resolver
rules:
- apiGroups: [""]
  resources: ["endpoints"] # for v0.139.0 and before
  verbs: ["get", "watch", "list"]
- apiGroups: ["discovery.k8s.io"]
  resources: ["endpointslices"] # for v0.140.0 and after
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otel-collector-k8s-resolver
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: otel-collector-k8s-resolver
subjects:
- kind: ServiceAccount
  name: datadog-agent
  namespace: default
```

{% alert level="warning" %}
To ensure APM Stats are calculated on 100% of your traces before sampling, the `datadog/connector` runs in a separate pipeline without the `tail_sampling` processor. The Connector can run in either the DaemonSet or the gateway layer.
{% /alert %}

{% /tab %}

{% tab title="Helm" %}
In the configuration below:

1. The DaemonSet Collector (`datadog.otelCollector`) is configured with the `loadbalancing` exporter, which uses the Kubernetes service resolver to discover and route data to the gateway pods.
1. The gateway Collector (`otelAgentGateway`) uses the `tail_sampling` processor to sample traces based on defined policies before exporting them to Datadog.

```yaml
# values.yaml
targetSystem: "linux"
fullnameOverride: "my-gw"
datadog:
  apiKey: <DATADOG_API_KEY>
  appKey: <DATADOG_APP_KEY>
  otelCollector:
    enabled: true
    # RBAC permissions are required for the k8s resolver in the loadbalancing exporter
    rbac:
      create: true
      rules:
        - apiGroups: [""]
          resources: ["endpoints"] # for v0.139.0 and before
          verbs: ["get", "watch", "list"]
        - apiGroups: ["discovery.k8s.io"]
          resources: ["endpointslices"] # for v0.140.0 and after
          verbs: ["get", "watch", "list"]
    config: |
      receivers:
        otlp:
          protocols:
            grpc:
              endpoint: "localhost:4317"
      exporters:
        loadbalancing:
          routing_key: "traceID"
          protocol:
            otlp:
              tls:
                insecure: true
          resolver:
            k8s:
              service: my-gw-otel-agent-gateway
              ports:
                - 4317
      service:
        pipelines:
          traces:
            receivers: [otlp]
            exporters: [loadbalancing]

otelAgentGateway:
  enabled: true
  replicas: 3
  ports:
    - containerPort: 4317
      name: "otel-grpc"
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: "0.0.0.0:4317"
    processors:
      tail_sampling:
        decision_wait: 10s
        policies: <Add your sampling policies here>
    connectors:
      datadog/connector:
    exporters:
      datadog:
        api:
          key: ${env:DD_API_KEY}
    service:
      pipelines:
        traces/sample:
          receivers: [otlp]
          processors: [tail_sampling]
          exporters: [datadog]
        traces:
          receivers: [otlp]
          exporters: [datadog/connector]
        metrics:
          receivers: [datadog/connector]
          exporters: [datadog]
```

{% alert level="warning" %}
To ensure APM Stats are calculated on 100% of your traces before sampling, the `datadog/connector` runs in a separate pipeline without the `tail_sampling` processor. The Connector can run in either the DaemonSet or the gateway layer.
{% /alert %}

{% /tab %}

### Using a custom Collector image{% #using-a-custom-collector-image %}

To use a custom-built Collector image for your gateway, specify the image repository and tag. If you need instructions on how to build the custom images, see [Use Custom OpenTelemetry Components](https://docs.datadoghq.com/opentelemetry/setup/ddot_collector/custom_components).

{% alert level="info" %}
**Note:** The Datadog Operator supports the following image name formats:
- `name` - The image name (for example, `ddot-collector`)
- `name:tag` - Image name with tag (for example, `ddot-collector:7.77.0`)
- `registry/name:tag` - Full image reference (for example, `gcr.io/datadoghq/ddot-collector:7.77.0`)

The `registry/name` format (without tag in the name field) is **not supported** when using a separate `tag` field. Either include the full image reference with tag in the `name` field, or use the image name with a separate `tag` field.
{% /alert %}

{% tab title="Datadog Operator" %}

```yaml
apiVersion: datadoghq.com/v2alpha1
kind: DatadogAgent
metadata:
  name: datadog
spec:
  global:
    credentials:
      apiSecret:
        secretName: datadog-secret
        keyName: api-key

  features:
    otelAgentGateway:
      enabled: true

  override:
    otelAgentGateway:
      image:
        name: <YOUR REPO>:<IMAGE TAG>
```

{% /tab %}

{% tab title="Helm" %}

```yaml
# values.yaml
targetSystem: "linux"
agents:
  enabled: false
clusterAgent:
  enabled: false
otelAgentGateway:
  enabled: true
  image:
    repository: <YOUR REPO>
    tag: <IMAGE TAG>
    doNotCheckTag: true
  ports:
    - containerPort: "4317"
      name: "otel-grpc"
  config: |
    <YOUR CONFIG>
```

{% /tab %}

### Enable Autoscaling with Horizontal Pod Autoscaler (HPA){% #enable-autoscaling-with-horizontal-pod-autoscaler-hpa %}

The DDOT Collector gateway supports autoscaling with the Kubernetes Horizontal Pod Autoscaler (HPA) feature.

{% tab title="Datadog Operator" %}
**Note**: The Datadog Operator does not directly manage HPA resources. You need to create the HPA resource separately and configure it to target the OTel Agent Gateway deployment.

Create an HPA resource:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: datadog-otel-agent-gateway-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: datadog-otel-agent-gateway
  minReplicas: 2
  maxReplicas: 10
  metrics:
  # Aim for high CPU utilization for higher throughput
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 30
    scaleDown:
      stabilizationWindowSeconds: 60
```

Apply the DatadogAgent configuration with resource requests/limits (required for HPA):

```yaml
apiVersion: datadoghq.com/v2alpha1
kind: DatadogAgent
metadata:
  name: datadog
spec:
  global:
    credentials:
      apiSecret:
        secretName: datadog-secret
        keyName: api-key

  features:
    otelAgentGateway:
      enabled: true

  override:
    otelAgentGateway:
      replicas: 4  # Initial replicas, HPA will override based on metrics
      containers:
        otel-agent:
          resources:
            requests:
              cpu: 200m
              memory: 512Mi
            limits:
              cpu: 500m
              memory: 1Gi
```

{% /tab %}

{% tab title="Helm" %}
To enable HPA, configure `otelAgentGateway.autoscaling`:

```yaml
# values.yaml
targetSystem: "linux"
agents:
  enabled: false
clusterAgent:
  enabled: false
otelAgentGateway:
  enabled: true
  ports:
    - containerPort: "4317"
      name: "otel-grpc"
  config: |
    <YOUR CONFIG>
  replicas: 4  # Initial replicas; HPA overrides this based on metrics
  autoscaling:
    enabled: true
    minReplicas: 2
    maxReplicas: 10
    metrics:
      # Aim for high CPU utilization for higher throughput
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 80
    behavior:
      scaleUp:
        stabilizationWindowSeconds: 30
      scaleDown:
        stabilizationWindowSeconds: 60
```

{% /tab %}

You can use resource metrics (CPU or memory), custom metrics (Kubernetes Pod or Object), or external metrics as autoscaling inputs. For resource metrics, ensure that the [Kubernetes metrics server](https://github.com/kubernetes-sigs/metrics-server) is running in your cluster. For custom or external metrics, consider configuring the [Datadog Cluster Agent metrics provider](https://docs.datadoghq.com/containers/guide/cluster_agent_autoscaling_metrics/?tab=helm).
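For instance, here is a sketch of an HPA that scales on an external metric served by the Cluster Agent metrics provider. The `DatadogMetric` named `otel-gateway-load` and its namespace are assumptions for illustration:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: datadog-otel-agent-gateway-hpa-external
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: datadog-otel-agent-gateway
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: External
      external:
        metric:
          # References a DatadogMetric resource named "otel-gateway-load" in the "default" namespace
          name: datadogmetric@default:otel-gateway-load
        target:
          type: AverageValue
          averageValue: "1000"
```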

### Deploying a multi-layer gateway{% #deploying-a-multi-layer-gateway %}

For advanced scenarios, you can deploy multiple gateway layers to create a processing chain.

{% tab title="Datadog Operator" %}
Deploy each layer as a separate `DatadogAgent` resource, starting from the final layer and working backward.

1. **Deploy Layer 1 (Final Layer):** This layer receives from Layer 2 and exports to Datadog.

```yaml
apiVersion: datadoghq.com/v2alpha1
kind: DatadogAgent
metadata:
  name: datadog-gw-layer-1
spec:
  global:
    credentials:
      apiSecret:
        secretName: datadog-secret
        keyName: api-key

  features:
    otelAgentGateway:
      enabled: true
      config:
        configMap:
          name: gw-layer-1-config

  override:
    otelAgentGateway:
      replicas: 3
      nodeSelector:
        gateway: "gw-node-1"

    nodeAgent:
      disabled: true
    clusterAgent:
      disabled: true
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: gw-layer-1-config
data:
  otel-gateway-config.yaml: |
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: "0.0.0.0:4317"
    exporters:
      datadog:
        api:
          key: ${env:DD_API_KEY}
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [datadog]
        metrics:
          receivers: [otlp]
          exporters: [datadog]
        logs:
          receivers: [otlp]
          exporters: [datadog]
```

2. **Deploy Layer 2 (Intermediate Layer):** This layer receives from the DaemonSet and exports to Layer 1.

```yaml
apiVersion: datadoghq.com/v2alpha1
kind: DatadogAgent
metadata:
  name: datadog-gw-layer-2
spec:
  global:
    credentials:
      apiSecret:
        secretName: datadog-secret
        keyName: api-key

  features:
    otelAgentGateway:
      enabled: true
      config:
        configMap:
          name: gw-layer-2-config

  override:
    otelAgentGateway:
      replicas: 3
      nodeSelector:
        gateway: "gw-node-2"

    nodeAgent:
      disabled: true
    clusterAgent:
      disabled: true
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: gw-layer-2-config
data:
  otel-gateway-config.yaml: |
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: "0.0.0.0:4317"
    exporters:
      otlp:
        endpoint: http://datadog-gw-layer-1-otel-agent-gateway:4317
        tls:
          insecure: true
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [otlp]
        metrics:
          receivers: [otlp]
          exporters: [otlp]
        logs:
          receivers: [otlp]
          exporters: [otlp]
```

3. **Deploy DaemonSet:** Configure the DaemonSet to export to Layer 2.

```yaml
apiVersion: datadoghq.com/v2alpha1
kind: DatadogAgent
metadata:
  name: datadog
spec:
  global:
    credentials:
      apiSecret:
        secretName: datadog-secret
        keyName: api-key

  features:
    otelCollector:
      enabled: true
      config:
        configMap:
          name: daemonset-layer2-config
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: daemonset-layer2-config
data:
  otel-config.yaml: |
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: "localhost:4317"
    exporters:
      otlp:
        endpoint: http://datadog-gw-layer-2-otel-agent-gateway:4317
        tls:
          insecure: true
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [otlp]
        metrics:
          receivers: [otlp]
          exporters: [otlp]
        logs:
          receivers: [otlp]
          exporters: [otlp]
```

{% /tab %}

{% tab title="Helm" %}
Deploy each layer as a separate Helm release, starting from the final layer and working backward.

1. **Deploy Layer 1 (Final Layer):** This layer receives from Layer 2 and exports to Datadog.

   ```yaml
   # layer-1-values.yaml
   targetSystem: "linux"
   fullnameOverride: "gw-layer-1"
   agents:
     enabled: false
   clusterAgent:
     enabled: false
   otelAgentGateway:
     enabled: true
     replicas: 3
     nodeSelector:
       gateway: "gw-node-1"
     ports:
       - containerPort: "4317"
         hostPort: "4317"
         name: "otel-grpc"
     config: |
       receivers:
         otlp:
           protocols:
             grpc:
               endpoint: "0.0.0.0:4317"
       exporters:
         datadog:
           api:
             key: <API Key>
       service:
         pipelines:
           traces:
             receivers: [otlp]
             exporters: [datadog]
           metrics:
             receivers: [otlp]
             exporters: [datadog]
           logs:
             receivers: [otlp]
             exporters: [datadog]
   ```

1. **Deploy Layer 2 (Intermediate Layer):** This layer receives from the DaemonSet and exports to Layer 1.

   ```yaml
   # layer-2-values.yaml
   targetSystem: "linux"
   fullnameOverride: "gw-layer-2"
   agents:
     enabled: false
   clusterAgent:
     enabled: false
   otelAgentGateway:
     enabled: true
     replicas: 3
     nodeSelector:
       gateway: "gw-node-2"
     ports:
       - containerPort: "4317"
         hostPort: "4317"
         name: "otel-grpc"
     config: |
       receivers:
         otlp:
           protocols:
             grpc:
               endpoint: "0.0.0.0:4317"
       exporters:
         otlp:
           endpoint: http://gw-layer-1-otel-agent-gateway:4317
           tls:
             insecure: true
       service:
         pipelines:
           traces:
             receivers: [otlp]
             exporters: [otlp]
           metrics:
             receivers: [otlp]
             exporters: [otlp]
           logs:
             receivers: [otlp]
             exporters: [otlp]
   ```

1. **Deploy DaemonSet:** Configure the DaemonSet to export to Layer 2.

   ```yaml
   # daemonset-values.yaml
   targetSystem: "linux"
   datadog:
     apiKey: <DATADOG_API_KEY>
     appKey: <DATADOG_APP_KEY>
     otelCollector:
       enabled: true
       config: |
         receivers:
           otlp:
             protocols:
               grpc:
                 endpoint: "localhost:4317"
         exporters:
           otlp:
             endpoint: http://gw-layer-2-otel-agent-gateway:4317
             tls:
               insecure: true
         service:
           pipelines:
             traces:
               receivers: [otlp]
               exporters: [otlp]
             metrics:
               receivers: [otlp]
               exporters: [otlp]
             logs:
               receivers: [otlp]
               exporters: [otlp]
   ```

{% /tab %}

## View gateway pods on Fleet Automation{% #view-gateway-pods-on-fleet-automation %}

The DDOT Collector gateway includes the [Datadog extension](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/extension/datadogextension) by default. This extension exports Collector build information and configurations to Datadog, allowing you to monitor your telemetry pipeline from Infrastructure Monitoring and Fleet Automation.
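If you provide a custom gateway configuration and want to keep this reporting, you can include the extension explicitly. This is a minimal sketch based on the default configuration shown earlier:

```yaml
extensions:
  datadog:
    api:
      key: ${env:DD_API_KEY}
service:
  extensions: [datadog]
  pipelines:
    # ... your pipelines ...
```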

To view your gateway pods:

1. Navigate to **Integrations > Fleet Automation**.

{% image
   source="https://datadog-docs.imgix.net/images/opentelemetry/embedded_collector/fleet_automation2.66cdc09bae35f663a4cc4504d7784057.png?auto=format"
   alt="Fleet Automation page showing DDOT gateway pods" /%}

2. Select a gateway pod to view detailed build information and the running Collector configuration.

{% image
   source="https://datadog-docs.imgix.net/images/opentelemetry/embedded_collector/fleet_automation3.6149357755c51e1a63d4aace7aca95ae.png?auto=format"
   alt="Fleet Automation page showing the collector config of one DDOT gateway pod" /%}

## Known limitations{% #known-limitations %}

- **Startup race condition**: When deploying the DaemonSet and gateway in the same release, DaemonSet pods might start before the gateway service is ready, causing initial connection error logs. The OTLP exporter automatically retries, so these logs can be safely ignored. Alternatively, deploy the gateway first and wait for it to become ready before deploying the DaemonSet.
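
For example, with separate releases you can wait for the gateway Deployment to become available before installing the DaemonSet release. This is a sketch; the values file names are placeholders, and the Deployment name follows the pattern described in the Overview (adjust for any `fullnameOverride`):

```shell
# Deploy the gateway release first
helm install <GATEWAY_RELEASE> datadog/datadog -f gateway-values.yaml
# Wait for the gateway Deployment to become ready
kubectl rollout status deployment/<GATEWAY_RELEASE>-datadog-otel-agent-gateway-deployment --timeout=5m
# Then deploy the DaemonSet release
helm install <RELEASE_NAME> datadog/datadog -f daemonset-values.yaml
```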

## Further reading{% #further-reading %}

- [Centralize and govern your OpenTelemetry pipeline with the DDOT gateway](https://www.datadoghq.com/blog/ddot-gateway)
- [Use Custom OpenTelemetry Components with Datadog Agent](https://docs.datadoghq.com/opentelemetry/setup/ddot_collector/custom_components)
- [Collector Deployment: Gateway](https://opentelemetry.io/docs/collector/deployment/gateway/)
- [Load Balancing Exporter](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/loadbalancingexporter)
- [Tail Sampling Processor](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/tailsamplingprocessor)
