---
title: Further Configure the Datadog Agent on Kubernetes
description: >-
  Additional configuration options for APM, logs, processes, events, and other
  capabilities after installing the Datadog Agent
breadcrumbs: >-
  Docs > Containers > Kubernetes > Further Configure the Datadog Agent on
  Kubernetes
---

# Further Configure the Datadog Agent on Kubernetes

## Overview{% #overview %}

After you have installed the Datadog Agent in your Kubernetes environment, you can choose from the following additional configuration options.

### Enable Datadog to collect:{% #enable-datadog-to-collect %}

- Traces (APM)
- Kubernetes events
- Cloud Network Monitoring (CNM)
- Logs
- Processes

### Other capabilities{% #other-capabilities %}

- Datadog Cluster Agent
- Integrations
- Containers view
- Orchestrator Explorer
- External metrics server

### More configurations{% #more-configurations %}

- Environment variables
- DogStatsD for custom metrics
- Tag mapping
- Secrets
- Ignore containers
- Kubernetes API server timeout
- Proxy settings
- Autodiscovery
- Set cluster name
- Miscellaneous

## Enable APM and tracing{% #enable-apm-and-tracing %}

{% tab title="Datadog Operator" %}
Edit your `datadog-agent.yaml` to set `features.apm.enabled` to `true`.

```yaml
apiVersion: datadoghq.com/v2alpha1
kind: DatadogAgent
metadata:
  name: datadog
spec:
  global:
    credentials:
      apiKey: <DATADOG_API_KEY>

  features:
    apm:
      enabled: true
```

After making your changes, apply the new configuration by using the following command:

```shell
kubectl apply -n $DD_NAMESPACE -f datadog-agent.yaml
```

{% /tab %}

{% tab title="Helm" %}
In Helm, APM is **enabled by default** over UDS (Unix Domain Socket) or Windows named pipe.

To verify, ensure that `datadog.apm.socketEnabled` is set to `true` in your `values.yaml`.

```yaml
datadog:
  apm:
    socketEnabled: true
```

{% /tab %}

For more information, see [Kubernetes Trace Collection](https://docs.datadoghq.com/containers/kubernetes/apm.md).
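After APM is enabled, application pods need a way to reach the trace Agent. One common pattern is to mount the host's APM Unix Domain Socket into the application container. The following is a minimal sketch; the pod name, image, and socket path are illustrative and depend on your Agent configuration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app        # illustrative name
spec:
  containers:
    - name: app
      image: example-app:latest
      env:
        # Point the tracer at the Agent's APM socket
        - name: DD_TRACE_AGENT_URL
          value: "unix:///var/run/datadog/apm.socket"
      volumeMounts:
        - name: apmsocketpath
          mountPath: /var/run/datadog
  volumes:
    # Host directory where the Agent exposes its APM socket
    - name: apmsocketpath
      hostPath:
        path: /var/run/datadog/
```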

## Enable Kubernetes event collection{% #enable-kubernetes-event-collection %}

Use the [Datadog Cluster Agent](https://docs.datadoghq.com/containers/cluster_agent.md) to collect Kubernetes events.

{% tab title="Datadog Operator" %}
The Datadog Operator enables event collection by default. You can manage this with the `features.eventCollection.collectKubernetesEvents` setting in your `datadog-agent.yaml`.

```yaml
apiVersion: datadoghq.com/v2alpha1
kind: DatadogAgent
metadata:
  name: datadog
spec:
  global:
    credentials:
      apiKey: <DATADOG_API_KEY>
    site: <DATADOG_SITE>

  features:
    eventCollection:
      collectKubernetesEvents: true
```

{% /tab %}

{% tab title="Helm" %}
To collect Kubernetes events with the Datadog Cluster Agent, ensure that the `clusterAgent.enabled`, `datadog.collectEvents`, and `clusterAgent.rbac.create` options are set to `true` in your `datadog-values.yaml` file.

```yaml
datadog:
  collectEvents: true
clusterAgent:
  enabled: true
  rbac:
    create: true
```

If you don't want to use the Cluster Agent, you can still have a Node Agent collect Kubernetes events by setting `datadog.leaderElection`, `datadog.collectEvents`, and `agents.rbac.create` options to `true` in your `datadog-values.yaml` file.

```yaml
datadog:
  leaderElection: true
  collectEvents: true
agents:
  rbac:
    create: true
```

{% /tab %}

For DaemonSet configuration, see [DaemonSet Cluster Agent event collection](https://docs.datadoghq.com/containers/guide/kubernetes_daemonset.md#cluster-agent-event-collection).

## Enable CNM collection{% #enable-cnm-collection %}

{% tab title="Datadog Operator" %}
In your `datadog-agent.yaml`, set `features.npm.enabled` to `true`.

```yaml
apiVersion: datadoghq.com/v2alpha1
kind: DatadogAgent
metadata:
  name: datadog
spec:
  global:
    credentials:
      apiKey: <DATADOG_API_KEY>

  features:
    npm:
      enabled: true
```

Then apply the new configuration:

```shell
kubectl apply -n $DD_NAMESPACE -f datadog-agent.yaml
```

{% /tab %}

{% tab title="Helm" %}
Update your `datadog-values.yaml` with the following configuration:

```yaml
datadog:
  # (...)
  networkMonitoring:
    enabled: true
```

Then upgrade your Helm chart:

```shell
helm upgrade -f datadog-values.yaml <RELEASE_NAME> datadog/datadog
```

{% /tab %}

For more information, see [Cloud Network Monitoring](https://docs.datadoghq.com/network_monitoring/cloud_network_monitoring.md).

## Enable log collection{% #enable-log-collection %}

{% tab title="Datadog Operator" %}
In your `datadog-agent.yaml`, set `features.logCollection.enabled` and `features.logCollection.containerCollectAll` to `true`.

```yaml
apiVersion: datadoghq.com/v2alpha1
kind: DatadogAgent
metadata:
  name: datadog
spec:
  global:
    credentials:
      apiKey: <DATADOG_API_KEY>

  features:
    logCollection:
      enabled: true
      containerCollectAll: true
```

Then apply the new configuration:

```shell
kubectl apply -n $DD_NAMESPACE -f datadog-agent.yaml
```

{% /tab %}

{% tab title="Helm" %}
Update your `datadog-values.yaml` with the following configuration:

```yaml
datadog:
  # (...)
  logs:
    enabled: true
    containerCollectAll: true
```

Then upgrade your Helm chart:

```shell
helm upgrade -f datadog-values.yaml <RELEASE_NAME> datadog/datadog
```

{% /tab %}

For more information, see [Kubernetes log collection](https://docs.datadoghq.com/containers/kubernetes/log.md).
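With `containerCollectAll` enabled, the Agent collects logs from every container's stdout and stderr. To assign a specific `source` and `service` to one container's logs, you can annotate its pod with `ad.datadoghq.com/<CONTAINER_NAME>.logs`. A sketch, where the pod name, container name, and values are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app
  annotations:
    # The key segment "app" must match the container name below
    ad.datadoghq.com/app.logs: '[{"source": "java", "service": "example-app"}]'
spec:
  containers:
    - name: app
      image: example-app:latest
```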

## Enable process collection{% #enable-process-collection %}

{% tab title="Datadog Operator" %}
In your `datadog-agent.yaml`, set `features.liveProcessCollection.enabled` to `true`.

```yaml
apiVersion: datadoghq.com/v2alpha1
kind: DatadogAgent
metadata:
  name: datadog
spec:
  global:
    credentials:
      apiKey: <DATADOG_API_KEY>

  features:
    liveProcessCollection:
      enabled: true
```

Then apply the new configuration:

```shell
kubectl apply -n $DD_NAMESPACE -f datadog-agent.yaml
```

{% /tab %}

{% tab title="Helm" %}
Update your `datadog-values.yaml` with the following configuration:

```yaml
datadog:
  # (...)
  processAgent:
    enabled: true
    processCollection: true
```

Then upgrade your Helm chart:

```shell
helm upgrade -f datadog-values.yaml <RELEASE_NAME> datadog/datadog
```

{% /tab %}

For more information, see [Live Processes](https://docs.datadoghq.com/infrastructure/process.md).

## Datadog Cluster Agent{% #datadog-cluster-agent %}

The Datadog Cluster Agent provides a streamlined, centralized approach to collecting cluster-level monitoring data. Datadog strongly recommends using the Cluster Agent for monitoring Kubernetes.

The Datadog Operator v1.0.0+ and Helm chart v2.7.0+ **enable the Cluster Agent by default**. No further configuration is necessary.

{% tab title="Datadog Operator" %}
The Datadog Operator v1.0.0+ enables the Cluster Agent by default. The Operator creates the necessary RBACs and deploys the Cluster Agent. Both Agents use the same API key.

The Operator automatically generates a random token in a Kubernetes `Secret` to be shared by the Datadog Agent and Cluster Agent for secure communication.

You can manually specify this token in the `global.clusterAgentToken` field in your `datadog-agent.yaml`:

```yaml
apiVersion: datadoghq.com/v2alpha1
kind: DatadogAgent
metadata:
  name: datadog
spec:
  global:
    credentials:
      apiKey: <DATADOG_API_KEY>
      appKey: <DATADOG_APP_KEY>
    clusterAgentToken: <DATADOG_CLUSTER_AGENT_TOKEN>
```

Alternatively, you can specify this token by referencing the name of an existing `Secret` and the data key containing this token:

```yaml
apiVersion: datadoghq.com/v2alpha1
kind: DatadogAgent
metadata:
  name: datadog
spec:
  global:
    credentials:
      apiKey: <DATADOG_API_KEY>
      appKey: <DATADOG_APP_KEY>
    clusterAgentTokenSecret: 
      secretName: <SECRET_NAME>
      keyName: <KEY_NAME>
```

**Note**: When set manually, this token must be 32 alphanumeric characters.

Then apply the new configuration:

```shell
kubectl apply -n $DD_NAMESPACE -f datadog-agent.yaml
```

{% /tab %}

{% tab title="Helm" %}
Helm chart v2.7.0+ enables the Cluster Agent by default.

For verification, ensure that `clusterAgent.enabled` is set to `true` in your `datadog-values.yaml`:

```yaml
clusterAgent:
  enabled: true
```

Helm automatically generates a random token in a Kubernetes `Secret` to be shared by the Datadog Agent and Cluster Agent for secure communication.

You can manually specify this token in the `clusterAgent.token` field in your `datadog-values.yaml`:

```yaml
clusterAgent:
  enabled: true
  token: <DATADOG_CLUSTER_AGENT_TOKEN>
```

Alternatively, you can specify this token by referencing the name of an existing `Secret`, where the token is in a key named `token`:

```yaml
clusterAgent:
  enabled: true
  tokenExistingSecret: <SECRET_NAME>
```
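To create such a token, one approach is to hex-encode 16 random bytes, which yields 32 alphanumeric characters. This sketch assumes `openssl` is available:

```shell
# Generate a random 32-character alphanumeric token (16 bytes, hex-encoded)
TOKEN=$(openssl rand -hex 16)
echo "$TOKEN"
```

You can then store it in a Secret, for example with `kubectl create secret generic <SECRET_NAME> --from-literal=token="$TOKEN"`, and reference `<SECRET_NAME>` in `tokenExistingSecret`.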

{% /tab %}

For more information, see the [Datadog Cluster Agent documentation](https://docs.datadoghq.com/containers/cluster_agent.md).

## Custom metrics server{% #custom-metrics-server %}

To use the Cluster Agent's [custom metrics server](https://docs.datadoghq.com/containers/guide/cluster_agent_autoscaling_metrics.md?tab=helm) feature, you must supply a Datadog [application key](https://docs.datadoghq.com/account_management/api-app-keys.md#application-keys) and enable the metrics provider.

{% tab title="Datadog Operator" %}
In `datadog-agent.yaml`, supply an application key under `spec.global.credentials.appKey` and set `features.externalMetricsServer.enabled` to `true`.

```yaml
apiVersion: datadoghq.com/v2alpha1
kind: DatadogAgent
metadata:
  name: datadog
spec:
  global:
    credentials:
      apiKey: <DATADOG_API_KEY>
      appKey: <DATADOG_APP_KEY>

  features:
    externalMetricsServer:
      enabled: true
```

Then apply the new configuration:

```shell
kubectl apply -n $DD_NAMESPACE -f datadog-agent.yaml
```

{% /tab %}

{% tab title="Helm" %}
In `datadog-values.yaml`, supply an application key under `datadog.appKey` and set `clusterAgent.metricsProvider.enabled` to `true`.

```yaml
datadog:
  apiKey: <DATADOG_API_KEY>
  appKey: <DATADOG_APP_KEY>

clusterAgent:
  enabled: true
  metricsProvider:
    enabled: true
```

Then upgrade your Helm chart:

```shell
helm upgrade -f datadog-values.yaml <RELEASE_NAME> datadog/datadog
```

{% /tab %}
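Once the metrics provider is running, Kubernetes can autoscale workloads against Datadog metrics through the external metrics API. A sketch of a HorizontalPodAutoscaler using an external metric; the deployment name, metric name, label selector, and target value are illustrative:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginxext          # illustrative name
spec:
  minReplicas: 1
  maxReplicas: 3
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  metrics:
    - type: External
      external:
        metric:
          name: nginx.net.request_per_s   # a Datadog metric, served by the metrics provider
          selector:
            matchLabels:
              kube_container_name: nginx
        target:
          type: AverageValue
          averageValue: "9"
```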

## Integrations{% #integrations %}

Once the Agent is up and running in your cluster, use [Datadog's Autodiscovery feature](https://docs.datadoghq.com/containers/kubernetes/integrations.md) to collect metrics and logs automatically from your pods.

## Containers view{% #containers-view %}

To make use of Datadog's [Container Explorer](https://app.datadoghq.com/containers), enable the Process Agent. The Datadog Operator and Helm chart **enable the Process Agent by default**. No further configuration is necessary.

{% tab title="Datadog Operator" %}
The Datadog Operator enables the Process Agent by default.

For verification, ensure that `features.liveContainerCollection.enabled` is set to `true` in your `datadog-agent.yaml`:

```yaml
apiVersion: datadoghq.com/v2alpha1
kind: DatadogAgent
metadata:
  name: datadog
spec:
  global:
    credentials:
      apiKey: <DATADOG_API_KEY>
      appKey: <DATADOG_APP_KEY>
  features:
    liveContainerCollection:
      enabled: true
```

In some setups, the Process Agent and Cluster Agent cannot automatically detect a Kubernetes cluster name. If this happens, the feature does not start, and the following warning displays in the Cluster Agent log: `Orchestrator explorer enabled but no cluster name set: disabling`. In this case, you must set `spec.global.clusterName` to your cluster name in `datadog-agent.yaml`:

```yaml
apiVersion: datadoghq.com/v2alpha1
kind: DatadogAgent
metadata:
  name: datadog
spec:
  global:
    clusterName: <YOUR_CLUSTER_NAME>
    credentials:
      apiKey: <DATADOG_API_KEY>
      appKey: <DATADOG_APP_KEY>
  features:
    orchestratorExplorer:
      enabled: true
```

{% /tab %}

{% tab title="Helm" %}
The Helm chart enables the Process Agent by default.

For verification, ensure that `processAgent.enabled` is set to `true` in your `datadog-values.yaml`:

```yaml
datadog:
  # (...)
  processAgent:
    enabled: true
```

In some setups, the Process Agent and Cluster Agent cannot automatically detect a Kubernetes cluster name. If this happens, the feature does not start, and the following warning displays in the Cluster Agent log: `Orchestrator explorer enabled but no cluster name set: disabling`. In this case, you must set `datadog.clusterName` to your cluster name in `datadog-values.yaml`.

```yaml
datadog:
  #(...)
  clusterName: <YOUR_CLUSTER_NAME>
  #(...)
  processAgent:
    enabled: true
```

{% /tab %}

For restrictions on valid cluster names, see Set cluster name.

See the [Containers view](https://docs.datadoghq.com/infrastructure/containers.md) documentation for additional information.

## Orchestrator Explorer{% #orchestrator-explorer %}

The Datadog Operator and Helm chart **enable Datadog's [Orchestrator Explorer](https://app.datadoghq.com/orchestration/overview) by default**. No further configuration is necessary.

{% tab title="Datadog Operator" %}
The Orchestrator Explorer is enabled in the Datadog Operator by default.

For verification, ensure that the `features.orchestratorExplorer.enabled` parameter is set to `true` in your `datadog-agent.yaml`:

```yaml
apiVersion: datadoghq.com/v2alpha1
kind: DatadogAgent
metadata:
  name: datadog
spec:
  global:
    credentials:
      apiKey: <DATADOG_API_KEY>
      appKey: <DATADOG_APP_KEY>
  features:
    orchestratorExplorer:
      enabled: true
```

In some setups, the Process Agent and Cluster Agent cannot automatically detect a Kubernetes cluster name. If this happens, the feature does not start, and the following warning displays in the Cluster Agent log: `Orchestrator explorer enabled but no cluster name set: disabling`. In this case, you must set `spec.global.clusterName` to your cluster name in `datadog-agent.yaml`:

```yaml
apiVersion: datadoghq.com/v2alpha1
kind: DatadogAgent
metadata:
  name: datadog
spec:
  global:
    clusterName: <YOUR_CLUSTER_NAME>
    credentials:
      apiKey: <DATADOG_API_KEY>
      appKey: <DATADOG_APP_KEY>
  features:
    orchestratorExplorer:
      enabled: true
```

{% /tab %}

{% tab title="Helm" %}
The Helm chart enables Orchestrator Explorer by default.

For verification, ensure that the `orchestratorExplorer.enabled` parameter is set to `true` in your `datadog-values.yaml` file:

```yaml
datadog:
  # (...)
  processAgent:
    enabled: true
  orchestratorExplorer:
    enabled: true
```

In some setups, the Process Agent and Cluster Agent cannot automatically detect a Kubernetes cluster name. If this happens, the feature does not start, and the following warning displays in the Cluster Agent log: `Orchestrator explorer enabled but no cluster name set: disabling`. In this case, you must set `datadog.clusterName` to your cluster name in `datadog-values.yaml`.

```yaml
datadog:
  #(...)
  clusterName: <YOUR_CLUSTER_NAME>
  #(...)
  processAgent:
    enabled: true
  orchestratorExplorer:
    enabled: true
```

{% /tab %}

For restrictions on valid cluster names, see Set cluster name.

See the [Orchestrator Explorer documentation](https://docs.datadoghq.com/infrastructure/containers/orchestrator_explorer.md) for additional information.

## Basic configuration{% #basic-configuration %}

Use the following configuration fields to set up the Datadog Agent.

{% tab title="Datadog Operator" %}

| Parameter (v2alpha1)                      | Description                                                                                                                                                                   |
| ----------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `global.credentials.apiKey`               | Configures your Datadog API key.                                                                                                                                              |
| `global.credentials.apiSecret.secretName` | Instead of `global.credentials.apiKey`, supply the name of a Kubernetes `Secret` containing your Datadog API key.                                                             |
| `global.credentials.apiSecret.keyName`    | Instead of `global.credentials.apiKey`, supply the key of the Kubernetes `Secret` named in `global.credentials.apiSecret.secretName`.                                         |
| `global.credentials.appKey`               | Configures your Datadog application key. If you are using the external metrics server, you must set a Datadog application key for read access to your metrics.                |
| `global.credentials.appSecret.secretName` | Instead of `global.credentials.appKey`, supply the name of a Kubernetes `Secret` containing your Datadog application key.                                                     |
| `global.credentials.appSecret.keyName`    | Instead of `global.credentials.appKey`, supply the key of the Kubernetes `Secret` named in `global.credentials.appSecret.secretName`.                                         |
| `global.logLevel`                         | Sets logging verbosity. This can be overridden by the container. Valid log levels are: `trace`, `debug`, `info`, `warn`, `error`, `critical`, and `off`. Default: `info`.     |
| `global.registry`                         | Image registry to use for all Agent images. Default: `gcr.io/datadoghq`.                                                                                                      |
| `global.site`                             | Sets the Datadog [intake site](https://docs.datadoghq.com/getting_started/site.md) to which Agent data is sent. Defaults to `datadoghq.com`.                                  |
| `global.tags`                             | A list of tags to attach to every metric, event, and service check collected.                                                                                                 |

For a complete list of configuration fields for the Datadog Operator, see the [Operator v2alpha1 spec](https://github.com/DataDog/datadog-operator/blob/main/docs/configuration.v2alpha1.md). For older versions, see [Migrate DatadogAgent CRDs to v2alpha1](https://docs.datadoghq.com/containers/guide/v2alpha1_migration.md). Configuration fields can also be queried using `kubectl explain datadogagent --recursive`.
{% /tab %}

{% tab title="Helm" %}

| Helm                           | Description                                                                                                                                                                        |
| ------------------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `datadog.apiKey`               | Configures your Datadog API key.                                                                                                                                                   |
| `datadog.apiKeyExistingSecret` | Instead of `datadog.apiKey`, supply the name of an existing Kubernetes `Secret` containing your Datadog API key, set with the key name `api-key`.                                  |
| `datadog.appKey`               | Configures your Datadog application key. If you are using the external metrics server, you must set a Datadog application key for read access to your metrics.                     |
| `datadog.appKeyExistingSecret` | Instead of `datadog.appKey`, supply the name of an existing Kubernetes `Secret` containing your Datadog app key, set with the key name `app-key`.                                  |
| `datadog.logLevel`             | Sets logging verbosity. This can be overridden by the container. Valid log levels are: `trace`, `debug`, `info`, `warn`, `error`, `critical`, and `off`. Default: `info`.          |
| `registry`                     | Image registry to use for all Agent images. Default: `gcr.io/datadoghq`.                                                                                                           |
| `datadog.site`                 | Sets the Datadog [intake site](https://docs.datadoghq.com/getting_started/site.md) to which Agent data is sent. Defaults to `datadoghq.com`.                                        |
| `datadog.tags`                 | A list of tags to attach to every metric, event, and service check collected.                                                                                                      |

For a complete list of environment variables for the Helm chart, see the [full list of options](https://github.com/DataDog/helm-charts/tree/main/charts/datadog#all-configuration-options) for `datadog-values.yaml`.
{% /tab %}

{% tab title="DaemonSet" %}

| Env Variable           | Description                                                                                                                                                                                                                                                                                                                                      |
| ---------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `DD_API_KEY`           | Your Datadog API key (**required**)                                                                                                                                                                                                                                                                                                              |
| `DD_ENV`               | Sets the global `env` tag for all data emitted.                                                                                                                                                                                                                                                                                                  |
| `DD_HOSTNAME`          | Hostname to use for metrics (if autodetection fails)                                                                                                                                                                                                                                                                                             |
| `DD_TAGS`              | Host tags separated by spaces. For example: `simple-tag-0 tag-key-1:tag-value-1`                                                                                                                                                                                                                                                                 |
| `DD_SITE`              | Destination site for your metrics, traces, and logs. Defaults to `datadoghq.com`.                                                                                                                                                                                                                                                                |
| `DD_DD_URL`            | Optional setting to override the URL for metric submission.                                                                                                                                                                                                                                                                                      |
| `DD_URL` (6.36+/7.36+) | Alias for `DD_DD_URL`. Ignored if `DD_DD_URL` is already set.                                                                                                                                                                                                                                                                                    |
| `DD_CHECK_RUNNERS`     | The Agent runs all checks concurrently by default (default value = `4` runners). To run the checks sequentially, set the value to `1`. If you need to run a high number of checks (or slow checks) the `collector-queue` component might fall behind and fail the healthcheck. You can increase the number of runners to run checks in parallel. |
| `DD_LEADER_ELECTION`   | If multiple instances of the Agent are running in your cluster, set this variable to `true` to avoid the duplication of event collection.                                                                                                                                                                                                        |

{% /tab %}

## Environment variables{% #environment-variables %}

The containerized Datadog Agent can be configured using environment variables. For an extensive list of supported environment variables, see the [Environment variables](https://docs.datadoghq.com/containers/docker.md?tab=standard#environment-variables) section of the Docker Agent documentation.

### Examples{% #examples %}

{% tab title="Datadog Operator" %}
When using the Datadog Operator, you can set additional environment variables under `spec.override`: at the component level with `override.[key].env`, or at the container level with `override.[key].containers.[key].env`. The following component keys are supported:

- `nodeAgent`
- `clusterAgent`
- `clusterChecksRunner`

Container-level settings take priority over any component-level settings.

```yaml
apiVersion: datadoghq.com/v2alpha1
kind: DatadogAgent
metadata:
  name: datadog
spec:
  override:
    nodeAgent:
      env:
        - name: <ENV_VAR_NAME>
          value: <ENV_VAR_VALUE>
    clusterAgent:
      containers:
        cluster-agent:
          env:
            - name: <ENV_VAR_NAME>
              value: <ENV_VAR_VALUE>
```

{% /tab %}

{% tab title="Helm" %}

```yaml
datadog:
  env:
  - name: <ENV_VAR_NAME>
    value: <ENV_VAR_VALUE>
clusterAgent:
  env:
  - name: <ENV_VAR_NAME>
    value: <ENV_VAR_VALUE>
```

{% /tab %}

{% tab title="DaemonSet" %}
Add environment variables to the DaemonSet (for the Datadog Agent) or the Deployment (for the Datadog Cluster Agent).

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: datadog
spec:
  template:
    spec:
      containers:
        - name: agent
          ...
          env:
            - name: <ENV_VAR_NAME>
              value: <ENV_VAR_VALUE>
```

{% /tab %}

## Configure DogStatsD{% #configure-dogstatsd %}

DogStatsD can send custom metrics over UDP using the StatsD protocol. **The Datadog Operator and Helm chart enable DogStatsD by default**. See the [DogStatsD documentation](https://docs.datadoghq.com/extend/dogstatsd.md) for more information.

You can use the following environment variables to configure DogStatsD with DaemonSet:

| Env Variable                     | Description                                                                                                                                           |
| -------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------- |
| `DD_DOGSTATSD_NON_LOCAL_TRAFFIC` | Listen to DogStatsD packets from other containers (required to send custom metrics).                                                                  |
| `DD_HISTOGRAM_PERCENTILES`       | The histogram percentiles to compute (separated by spaces). The default is `0.95`.                                                                    |
| `DD_HISTOGRAM_AGGREGATES`        | The histogram aggregates to compute (separated by spaces). The default is `"max median avg count"`.                                                   |
| `DD_DOGSTATSD_SOCKET`            | Path to the Unix socket to listen to. Must be in a `rw` mounted volume.                                                                               |
| `DD_DOGSTATSD_ORIGIN_DETECTION`  | Enable container detection and tagging for Unix socket metrics.                                                                                       |
| `DD_DOGSTATSD_TAGS`              | Additional tags to append to all metrics, events, and service checks received by this DogStatsD server, for example: `"env:golden group:retrievers"`. |
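For example, in a DaemonSet deployment, the Agent container's environment might combine several of these settings to accept DogStatsD traffic from other pods over UDP and over a Unix socket. The socket path is a commonly used default; adjust it to your volume layout:

```yaml
env:
  # Required to receive custom metrics from other containers over UDP
  - name: DD_DOGSTATSD_NON_LOCAL_TRAFFIC
    value: "true"
  # Unix socket path; must be in a rw mounted volume
  - name: DD_DOGSTATSD_SOCKET
    value: /var/run/datadog/dsd.socket
  # Extra tags appended to all metrics received by this DogStatsD server
  - name: DD_DOGSTATSD_TAGS
    value: "env:golden group:retrievers"
```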

## Configure tag mapping{% #configure-tag-mapping %}

Datadog automatically collects common tags from Kubernetes.

In addition, you can map Kubernetes node labels, pod labels, and annotations to Datadog tags. Use the following environment variables to configure this mapping:

{% tab title="Datadog Operator" %}

| Parameter (v2alpha1)           | Description                                                                                                         |
| ------------------------------ | ------------------------------------------------------------------------------------------------------------------- |
| `global.namespaceLabelsAsTags` | Provide a mapping of Kubernetes namespace labels to Datadog tags. `<KUBERNETES_NAMESPACE_LABEL>: <DATADOG_TAG_KEY>` |
| `global.nodeLabelsAsTags`      | Provide a mapping of Kubernetes node labels to Datadog tags. `<KUBERNETES_NODE_LABEL>: <DATADOG_TAG_KEY>`           |
| `global.podAnnotationsAsTags`  | Provide a mapping of Kubernetes Annotations to Datadog tags. `<KUBERNETES_ANNOTATION>: <DATADOG_TAG_KEY>`           |
| `global.podLabelsAsTags`       | Provide a mapping of Kubernetes labels to Datadog tags. `<KUBERNETES_LABEL>: <DATADOG_TAG_KEY>`                     |

### Examples{% #examples %}

```yaml
apiVersion: datadoghq.com/v2alpha1
kind: DatadogAgent
metadata:
  name: datadog
spec:
  global:
    credentials:
      apiKey: <DATADOG_API_KEY>
    namespaceLabelsAsTags:
      env: environment
      # <KUBERNETES_NAMESPACE_LABEL>: <DATADOG_TAG_KEY>
    nodeLabelsAsTags:
      beta.kubernetes.io/instance-type: aws-instance-type
      kubernetes.io/role: kube_role
      # <KUBERNETES_NODE_LABEL>: <DATADOG_TAG_KEY>
    podLabelsAsTags:
      app: kube_app
      release: helm_release
      # <KUBERNETES_LABEL>: <DATADOG_TAG_KEY>
    podAnnotationsAsTags:
      iam.amazonaws.com/role: kube_iamrole
      # <KUBERNETES_ANNOTATION>: <DATADOG_TAG_KEY>
```

{% /tab %}

{% tab title="Helm" %}

| Helm                            | Description                                                                                                         |
| ------------------------------- | ------------------------------------------------------------------------------------------------------------------- |
| `datadog.namespaceLabelsAsTags` | Provide a mapping of Kubernetes namespace labels to Datadog tags. `<KUBERNETES_NAMESPACE_LABEL>: <DATADOG_TAG_KEY>` |
| `datadog.nodeLabelsAsTags`      | Provide a mapping of Kubernetes node labels to Datadog tags. `<KUBERNETES_NODE_LABEL>: <DATADOG_TAG_KEY>`           |
| `datadog.podAnnotationsAsTags`  | Provide a mapping of Kubernetes Annotations to Datadog tags. `<KUBERNETES_ANNOTATION>: <DATADOG_TAG_KEY>`           |
| `datadog.podLabelsAsTags`       | Provide a mapping of Kubernetes labels to Datadog tags. `<KUBERNETES_LABEL>: <DATADOG_TAG_KEY>`                     |

### Examples{% #examples %}

```yaml
datadog:
  # (...)
  namespaceLabelsAsTags:
    env: environment
    # <KUBERNETES_NAMESPACE_LABEL>: <DATADOG_TAG_KEY>
  nodeLabelsAsTags:
    beta.kubernetes.io/instance-type: aws-instance-type
    kubernetes.io/role: kube_role
    # <KUBERNETES_NODE_LABEL>: <DATADOG_TAG_KEY>
  podLabelsAsTags:
    app: kube_app
    release: helm_release
    # <KUBERNETES_LABEL>: <DATADOG_TAG_KEY>
  podAnnotationsAsTags:
    iam.amazonaws.com/role: kube_iamrole
    # <KUBERNETES_ANNOTATION>: <DATADOG_TAG_KEY>
```

{% /tab %}

## Using secret files{% #using-secret-files %}

Integration credentials can be stored in Docker or Kubernetes secrets and used in Autodiscovery templates. For more information, see [Secrets Management](https://docs.datadoghq.com/agent/configuration/secrets-management.md).
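As an illustration, an Autodiscovery check configuration can reference a secret with the `ENC[]` notation, which the Agent resolves through its configured secret backend at runtime. The annotation below is a hypothetical Postgres example; the container name (`postgres`) and secret handle (`db_password`) are placeholders:

```yaml
# Hypothetical pod annotation: the password is resolved by the Agent's
# secret backend at runtime instead of being stored in plain text.
metadata:
  annotations:
    ad.datadoghq.com/postgres.checks: |
      {
        "postgres": {
          "instances": [
            {"host": "%%host%%", "port": 5432, "username": "datadog", "password": "ENC[db_password]"}
          ]
        }
      }
```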

## Ignore containers{% #ignore-containers %}

Exclude containers from log collection, metric collection, and Autodiscovery. Datadog excludes Kubernetes and OpenShift `pause` containers by default. These allowlists and blocklists apply to Autodiscovery only; traces and DogStatsD are not affected. The values of these environment variables support regular expressions.

See the [Container Discovery Management](https://docs.datadoghq.com/agent/guide/autodiscovery-management.md) page for examples.

**Note**: The `kubernetes.containers.running`, `kubernetes.pods.running`, `docker.containers.running`, `docker.containers.stopped`, `docker.containers.running.total`, and `docker.containers.stopped.total` metrics are not affected by these settings; all containers are counted.
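For instance, exclusion rules can be set through the Helm chart, which maps them to the corresponding Agent environment variables. The image and name patterns below are hypothetical:

```yaml
# Sketch (Helm values.yaml): exclude all containers by image name, then
# re-include only those matching an allowlist. Patterns are regular expressions.
datadog:
  containerExclude: "image:.*"
  containerInclude: "image:my-app.* name:nginx.*"
```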

## Kubernetes API server timeout{% #kubernetes-api-server-timeout %}

By default, the [Kubernetes State Metrics Core check](https://docs.datadoghq.com/integrations/kubernetes_state_core.md) waits 10 seconds for a response from the Kubernetes API server. For large clusters, the request may time out, resulting in missing metrics.

You can avoid this by setting the environment variable `DD_KUBERNETES_APISERVER_CLIENT_TIMEOUT` to a higher value than the default 10 seconds.

{% tab title="Datadog Operator" %}
Update your `datadog-agent.yaml` with the following configuration:

```yaml
apiVersion: datadoghq.com/v2alpha1
kind: DatadogAgent
metadata:
  name: datadog
spec:
  override:
    clusterAgent:
      env:
        - name: DD_KUBERNETES_APISERVER_CLIENT_TIMEOUT
          value: "<value_greater_than_10>"
```

Then apply the new configuration:

```shell
kubectl apply -n $DD_NAMESPACE -f datadog-agent.yaml
```

{% /tab %}

{% tab title="Helm" %}
Update your `datadog-values.yaml` with the following configuration:

```yaml
clusterAgent:
  env:
    - name: DD_KUBERNETES_APISERVER_CLIENT_TIMEOUT
      value: "<value_greater_than_10>"
```

Then upgrade your Helm chart:

```shell
helm upgrade -f datadog-values.yaml <RELEASE_NAME> datadog/datadog
```

{% /tab %}

## Proxy settings{% #proxy-settings %}

Starting with Agent v6.4.0 (and v6.5.0 for the Trace Agent), you can override the Agent proxy settings with the following environment variables:

| Env Variable             | Description                                                            |
| ------------------------ | ---------------------------------------------------------------------- |
| `DD_PROXY_HTTP`          | An HTTP URL to use as a proxy for `http` requests.                     |
| `DD_PROXY_HTTPS`         | An HTTPS URL to use as a proxy for `https` requests.                   |
| `DD_PROXY_NO_PROXY`      | A space-separated list of URLs for which no proxy should be used.      |
| `DD_SKIP_SSL_VALIDATION` | Set to `true` to skip TLS certificate validation, to test whether connection issues to Datadog are certificate-related. |
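As a sketch, these variables can be set on the Agent containers through the Helm chart's `datadog.env` list. The proxy host `proxy.internal:3128` below is a hypothetical example:

```yaml
# Sketch (Helm values.yaml): route Agent traffic through a proxy, while
# exempting the cloud metadata endpoint and in-cluster Kubernetes API.
datadog:
  env:
    - name: DD_PROXY_HTTPS
      value: "http://proxy.internal:3128"
    - name: DD_PROXY_NO_PROXY
      value: "169.254.169.254 kubernetes.default.svc"
```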

## Set cluster name{% #set-cluster-name %}

Some capabilities require that you set a Kubernetes cluster name. A valid cluster name must be unique and dot-separated, with the following restrictions:

- Can contain only lowercase letters, numbers, and hyphens
- Must start with a letter
- Must end with a number or a letter
- Overall length is less than or equal to 80 characters
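The rules above can be expressed as a short validation sketch (illustrative only, not Datadog's actual validation code):

```python
import re

# Each dot-separated part: lowercase letters, numbers, and hyphens only,
# starting with a letter and ending with a letter or number.
PART = re.compile(r"^[a-z]([a-z0-9-]*[a-z0-9])?$")

def is_valid_cluster_name(name: str) -> bool:
    """Check a cluster name against the restrictions listed above."""
    if not name or len(name) > 80:
        return False
    return all(PART.match(part) for part in name.split("."))
```

For example, `prod.us-east-1` passes, while `1cluster` (starts with a number) and `My.Cluster` (uppercase) do not.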

{% tab title="Datadog Operator" %}
Set `spec.global.clusterName` to your cluster name in `datadog-agent.yaml`:

```yaml
apiVersion: datadoghq.com/v2alpha1
kind: DatadogAgent
metadata:
  name: datadog
spec:
  global:
    clusterName: <YOUR_CLUSTER_NAME>
```

{% /tab %}

{% tab title="Helm" %}
Set `datadog.clusterName` to your cluster name in `datadog-values.yaml`.

```yaml
datadog:
  #(...)
  clusterName: <YOUR_CLUSTER_NAME>
```

{% /tab %}

## Autodiscovery{% #autodiscovery %}

| Env Variable                | Description                                                                                                                                                                                                                                                                                                                                                               |
| --------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `DD_LISTENERS`              | Autodiscovery listeners to run.                                                                                                                                                                                                                                                                                                                                           |
| `DD_EXTRA_LISTENERS`        | Additional Autodiscovery listeners to run. They are added in addition to the variables defined in the `listeners` section of the `datadog.yaml` configuration file.                                                                                                                                                                                                       |
| `DD_CONFIG_PROVIDERS`       | The providers the Agent should call to collect check configurations. Available providers: `kubelet` (templates embedded in pod annotations); `docker` (templates embedded in container labels); `clusterchecks` (cluster-level check configurations retrieved from the Cluster Agent); `kube_services` (Kubernetes services watched for cluster checks). |
| `DD_EXTRA_CONFIG_PROVIDERS` | Additional Autodiscovery configuration providers to use. They are added in addition to the variables defined in the `config_providers` section of the `datadog.yaml` configuration file.                                                                                                                                                                                  |
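For example, the extra provider and listener variables can be set through the Helm chart's `datadog.env` list. This sketch adds the cluster-checks provider and the Kubernetes services listener on top of the defaults:

```yaml
# Sketch (Helm values.yaml): extend Autodiscovery without replacing the
# providers and listeners already configured in datadog.yaml.
datadog:
  env:
    - name: DD_EXTRA_CONFIG_PROVIDERS
      value: "clusterchecks"
    - name: DD_EXTRA_LISTENERS
      value: "kube_services"
```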

## Miscellaneous{% #miscellaneous %}

| Env Variable                        | Description                                                                                                                                                                                                                                                         |
| ----------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `DD_PROCESS_AGENT_CONTAINER_SOURCE` | Overrides container source auto-detection to force a single source, for example `docker`, `ecs_fargate`, or `kubelet`. No longer needed since Agent v7.35.0.                                                                                        |
| `DD_HEALTH_PORT`                    | Set this to `5555` to expose the Agent health check at port `5555`.                                                                                                                                                                                                 |
| `DD_CLUSTER_NAME`                   | Set a custom Kubernetes cluster identifier to avoid host alias collisions. The cluster name can be up to 40 characters with the following restrictions: Lowercase letters, numbers, and hyphens only. Must start with a letter. Must end with a number or a letter. |
| `DD_COLLECT_KUBERNETES_EVENTS`      | Enable event collection with the Agent. If you are running multiple instances of the Agent in your cluster, set `DD_LEADER_ELECTION` to `true` as well.                                                                                                             |
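As an illustration of `DD_HEALTH_PORT`, the exposed endpoint can back a liveness probe in the Agent's pod spec. This is a sketch assuming the Agent's `/live` health endpoint on the configured port; the probe timings are illustrative:

```yaml
# Sketch (Agent container spec fragment): expose the health check and probe it.
env:
  - name: DD_HEALTH_PORT
    value: "5555"
livenessProbe:
  httpGet:
    path: /live
    port: 5555
  initialDelaySeconds: 15
  periodSeconds: 15
```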
