---
title: Kubernetes and Integrations
description: >-
  Configure monitoring integrations for applications running in Kubernetes using
  Autodiscovery templates
breadcrumbs: Docs > Containers > Kubernetes > Kubernetes and Integrations
---

# Kubernetes and Integrations

This page covers how to install and configure integrations for your Kubernetes infrastructure by using a Datadog feature known as *Autodiscovery*. This enables you to use [variables](https://docs.datadoghq.com/containers/guide/template_variables/) like `%%host%%` to dynamically populate your configuration settings. For a detailed explanation of how Autodiscovery works, see [Getting Started with Containers: Autodiscovery](https://docs.datadoghq.com/getting_started/containers/autodiscovery). For advanced Autodiscovery options, such as excluding certain containers from Autodiscovery or tolerating unready pods, see [Container Discovery Management](https://docs.datadoghq.com/containers/guide/autodiscovery-management).

If you are using Docker or Amazon ECS, see [Docker and Integrations](https://docs.datadoghq.com/agent/docker/integrations/).

{% alert level="info" %}
Some Datadog integrations don't work with Autodiscovery because they require either process tree data or filesystem access: [Ceph](https://docs.datadoghq.com/integrations/ceph), [Varnish](https://docs.datadoghq.com/integrations/varnish), [Postfix](https://docs.datadoghq.com/integrations/postfix), [Cassandra Nodetool](https://docs.datadoghq.com/integrations/cassandra/#agent-check-cassandra-nodetool), and [Gunicorn](https://docs.datadoghq.com/integrations/gunicorn). To monitor integrations that are not compatible with Autodiscovery, you can run a Prometheus exporter in the pod to expose an HTTP endpoint, and then use the [OpenMetrics integration](https://docs.datadoghq.com/integrations/openmetrics/) (which supports Autodiscovery) to find the pod and query the endpoint.
{% /alert %}

## Set up your integration{% #set-up-your-integration %}

Some integrations require setup steps, such as creating an access token or granting read permission to the Datadog Agent. Follow the instructions in the **Setup** section of your integration's documentation.

### Community integrations{% #community-integrations %}

To use an integration that is not packaged with the Datadog Agent, you must build a custom image that contains your desired integration. See [Use Community Integrations](https://docs.datadoghq.com/agent/guide/use-community-integrations/) for instructions.

## Configuration{% #configuration %}

Some commonly used integrations come with default configuration for Autodiscovery. See [Autodiscovery auto-configuration](https://docs.datadoghq.com/containers/guide/auto_conf) for details, including a list of auto-configured integrations and their corresponding default configuration files. If your integration is in this list, and the default configuration is sufficient for your use case, no further action is required.
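These defaults follow the same template format as the rest of this page. As an illustration, a sketch approximating the Agent's bundled Redis auto-configuration (the authoritative contents are in the `redisdb.d/auto_conf.yaml` file shipped with your Agent version):

```yaml
ad_identifiers:
  - redis
init_config:
instances:
  - host: "%%host%%"
    port: "6379"
```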

Otherwise:

1. Choose a configuration method (Kubernetes pod annotations, a local file, a ConfigMap, a key-value store, a Datadog Operator manifest, or a Helm chart) that suits your use case.
1. Reference the template format for your chosen method. Each format contains placeholders, such as `<CONTAINER_NAME>`.
1. Supply values for these placeholders.

{% tab title="Annotations" %}
If you define your Kubernetes pods directly with `kind: Pod`, add each pod's annotations directly under its `metadata` section, as shown:

**Autodiscovery annotations v2** (for Datadog Agent v7.36+)

```yaml
apiVersion: v1
kind: Pod
# (...)
metadata:
  name: '<POD_NAME>'
  annotations:
    ad.datadoghq.com/<CONTAINER_NAME>.checks: |
      {
        "<INTEGRATION_NAME>": {
          "init_config": <INIT_CONFIG>,
          "instances": [<INSTANCES_CONFIG>]
        }
      }
    ad.datadoghq.com/<CONTAINER_NAME>.logs: '[<LOGS_CONFIG>]'
    # (...)
spec:
  containers:
    - name: '<CONTAINER_NAME>'
# (...)
```

**Autodiscovery annotations v1**

```yaml
apiVersion: v1
kind: Pod
# (...)
metadata:
  name: '<POD_NAME>'
  annotations:
    ad.datadoghq.com/<CONTAINER_NAME>.check_names: '[<INTEGRATION_NAME>]'
    ad.datadoghq.com/<CONTAINER_NAME>.init_configs: '[<INIT_CONFIG>]'
    ad.datadoghq.com/<CONTAINER_NAME>.instances: '[<INSTANCES_CONFIG>]'
    ad.datadoghq.com/<CONTAINER_NAME>.logs: '[<LOGS_CONFIG>]'
    # (...)
spec:
  containers:
    - name: '<CONTAINER_NAME>'
# (...)
```

If you define pods indirectly (with Deployments, ReplicaSets, or ReplicationControllers), add pod annotations under `spec.template.metadata`.
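For example, with a Deployment the same v2 annotations sit on the pod template (placeholders as above; fields not relevant to Autodiscovery are elided with `# (...)`):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: '<DEPLOYMENT_NAME>'
# (...)
spec:
  template:
    metadata:
      annotations:
        ad.datadoghq.com/<CONTAINER_NAME>.checks: |
          {
            "<INTEGRATION_NAME>": {
              "init_config": <INIT_CONFIG>,
              "instances": [<INSTANCES_CONFIG>]
            }
          }
    spec:
      containers:
        - name: '<CONTAINER_NAME>'
# (...)
```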
{% /tab %}

{% tab title="Local file" %}
You can store Autodiscovery templates as local files inside the mounted `conf.d` directory (`/etc/datadog-agent/conf.d`). You must restart your Agent containers each time you change, add, or remove templates.

1. Create a `conf.d/<INTEGRATION_NAME>.d/conf.yaml` file on your host:

   ```yaml
   ad_identifiers:
     - <CONTAINER_IMAGE>
   
   init_config:
     <INIT_CONFIG>
   
   instances:
     <INSTANCES_CONFIG>
   
   logs:
     <LOGS_CONFIG>
   ```

1. Mount your host `conf.d/` folder to the containerized Agent's `conf.d` folder.

   For Datadog Operator:

   ```yaml
   spec:
     override:
       nodeAgent:
         volumes:
           - hostPath:
               path: <PATH_TO_LOCAL_FOLDER>/conf.d
             name: confd
   ```

   For Helm:

   ```yaml
   agents:
     volumes:
     - hostPath:
         path: <PATH_TO_LOCAL_FOLDER>/conf.d
       name: confd
     volumeMounts:
     - name: confd
       mountPath: /conf.d
   ```

{% /tab %}

{% tab title="ConfigMap" %}
You can use [ConfigMaps](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap) to externally define configurations and subsequently mount them.

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: "<NAME>-config-map"
  namespace: default
data:
  <INTEGRATION_NAME>-config: |-
    ad_identifiers:
      - <CONTAINER_IMAGE>
    init_config:
      <INIT_CONFIG>
    instances:
      <INSTANCES_CONFIG>
    logs:
      <LOGS_CONFIG>
```
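The Agent only reads this template after it is mounted as a `conf.yaml` file under its `conf.d` directory. A sketch of the corresponding volume and mount in the Agent pod spec (names mirror the ConfigMap above):

```yaml
# In the Agent container spec:
        volumeMounts:
          - name: "<NAME>-config-map"
            mountPath: /etc/datadog-agent/conf.d/<INTEGRATION_NAME>.d
# In the Agent pod spec:
      volumes:
        - name: "<NAME>-config-map"
          configMap:
            name: "<NAME>-config-map"
            items:
              - key: <INTEGRATION_NAME>-config
                path: conf.yaml
```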

{% /tab %}

{% tab title="Key-value store" %}
You can source Autodiscovery templates from [Consul](https://docs.datadoghq.com/integrations/consul/), [etcd](https://docs.datadoghq.com/integrations/etcd/), or [ZooKeeper](https://docs.datadoghq.com/integrations/zk/). You can configure your key-value store in the `datadog.yaml` configuration file (and subsequently mount this file inside the Agent container), or as environment variables in the Agent container.

**Configure in datadog.yaml**:

In `datadog.yaml`, set the `<KV_STORE_IP>` address and `<KV_STORE_PORT>` port of your key-value store:

```yaml
config_providers:
  - name: etcd
    polling: true
    template_dir: /datadog/check_configs
    template_url: '<KV_STORE_IP>:<KV_STORE_PORT>'
    username:
    password:

  - name: consul
    polling: true
    template_dir: datadog/check_configs
    template_url: '<KV_STORE_IP>:<KV_STORE_PORT>'
    ca_file:
    ca_path:
    cert_file:
    key_file:
    username:
    password:
    token:

  - name: zookeeper
    polling: true
    template_dir: /datadog/check_configs
    template_url: '<KV_STORE_IP>:<KV_STORE_PORT>'
    username:
    password:
```

[Restart the Datadog Agent](https://docs.datadoghq.com/agent/configuration/agent-commands/) to apply your changes.

**Configure in environment variables**:

With the key-value store enabled as a template source, the Agent looks for templates under the key `/datadog/check_configs`. Autodiscovery expects a key-value hierarchy like this:

```yaml
/datadog/
  check_configs/
    <CONTAINER_IMAGE>/
      - check_names: ["<INTEGRATION_NAME>"]
      - init_configs: ["<INIT_CONFIG>"]
      - instances: ["<INSTANCES_CONFIG>"]
      - logs: ["<LOGS_CONFIG>"]
    ...
```

{% /tab %}

{% tab title="Datadog Operator" %}
To configure integrations in `datadog-agent.yaml`, add an override `extraConfd.configDataMap` to the `nodeAgent` component of your `DatadogAgent` configuration. Each key becomes a file in the Agent's `conf.d` directory.

```yaml
apiVersion: datadoghq.com/v2alpha1
kind: DatadogAgent
metadata:
  name: datadog
spec:
  global:
    [...]
  features:
    [...]
  override:
    nodeAgent:
      extraConfd:
        configDataMap:
          <INTEGRATION_NAME>.yaml: |-
            ad_identifiers:
              - <CONTAINER_IMAGE>
            init_config:
              <INIT_CONFIG>
            instances:
              <INSTANCES_CONFIG>
            logs:
              <LOGS_CONFIG>
```

{% alert level="info" %}
When multiple deployed `DatadogAgent` CRDs use `configDataMap`, each CRD writes to a shared ConfigMap named `nodeagent-extra-confd`. This can cause configurations to override each other.
{% /alert %}

To monitor a [Cluster Check](https://docs.datadoghq.com/agent/cluster_agent/clusterchecks), add an override `extraConfd.configDataMap` to the `clusterAgent` component. You must also enable cluster checks by setting `features.clusterChecks.enabled: true`.

```yaml
apiVersion: datadoghq.com/v2alpha1
kind: DatadogAgent
metadata:
  name: datadog
spec:
  global:
    [...]
  features:
    clusterChecks:
      enabled: true
    [...]
  override:
    nodeAgent:
      [...]
    clusterAgent:
      extraConfd:
        configDataMap:
          <INTEGRATION_NAME>.yaml: |-
            ad_identifiers:
              - <CONTAINER_IMAGE>
            cluster_check: true
            init_config:
              <INIT_CONFIG>
            instances:
              <INSTANCES_CONFIG>
            logs:
              <LOGS_CONFIG>
```

See [Cluster Checks](https://docs.datadoghq.com/agent/cluster_agent/clusterchecks) for more context.
{% /tab %}

{% tab title="Helm" %}
Your `datadog-values.yaml` file contains a `datadog.confd` section where you can define Autodiscovery templates. You can find inline examples in the sample [values.yaml](https://github.com/DataDog/helm-charts/blob/92fd908e3dd7b7149ce02de1fe859ae5ac717d03/charts/datadog/values.yaml#L315-L330). Each key becomes a file in the Agent's `conf.d` directory.

```yaml
datadog:
  confd:
    <INTEGRATION_NAME>.yaml: |-
      ad_identifiers:
        - <CONTAINER_IMAGE>
      init_config:
        <INIT_CONFIG>
      instances:
        <INSTANCES_CONFIG>
      logs:
        <LOGS_CONFIG>
```

To monitor a [Cluster Check](https://docs.datadoghq.com/agent/cluster_agent/clusterchecks), define your template under `clusterAgent.confd`. You can find inline examples in the sample [values.yaml](https://github.com/DataDog/helm-charts/blob/92fd908e3dd7b7149ce02de1fe859ae5ac717d03/charts/datadog/values.yaml#L680-L689). You must also enable the Cluster Agent by setting `clusterAgent.enabled: true` and enable cluster checks by setting `datadog.clusterChecks.enabled: true`.

```yaml
datadog:
  clusterChecks:
    enabled: true
clusterAgent:
  enabled: true
  confd:
    <INTEGRATION_NAME>.yaml: |-
      ad_identifiers:
        - <CONTAINER_IMAGE>
      cluster_check: true
      init_config:
        <INIT_CONFIG>
      instances:
        <INSTANCES_CONFIG>
      logs:
        <LOGS_CONFIG>
```

See [Cluster Checks](https://docs.datadoghq.com/agent/cluster_agent/clusterchecks) for more context.
{% /tab %}

### Placeholder values{% #placeholder-values %}

Supply placeholder values as follows:

{% dl %}

{% dt %}
`<INTEGRATION_NAME>`
{% /dt %}

{% dd %}
The name of your Datadog integration, such as `etcd` or `redisdb`.
{% /dd %}

{% dt %}
`<CONTAINER_NAME>`
{% /dt %}

{% dd %}
An identifier to match against the names (`spec.containers[i].name`, **not** `spec.containers[i].image`) of the containers that correspond to your integration.
{% /dd %}

{% dt %}
`<CONTAINER_IMAGE>`
{% /dt %}

{% dd %}
An identifier to match against the container image (`spec.containers[i].image`). For example, if you supply `redis` as a container identifier, your Autodiscovery template is applied to all containers whose image names match `redis`: if one container runs `foo/redis:latest` and another runs `bar/redis:v2`, the template is applied to both.

The `ad_identifiers` parameter takes a list, so you can supply multiple container identifiers. You can also use custom identifiers. See [Custom Autodiscovery Identifiers](https://docs.datadoghq.com/containers/guide/ad_identifiers).
{% /dd %}

{% dt %}
`<INIT_CONFIG>`
{% /dt %}

{% dd %}
The configuration parameters listed under `init_config` in your integration's `<INTEGRATION_NAME>.d/conf.yaml.example` file. The `init_config` section is usually empty.
{% /dd %}

{% dt %}
`<INSTANCES_CONFIG>`
{% /dt %}

{% dd %}
The configuration parameters listed under `instances` in your integration's `<INTEGRATION_NAME>.d/conf.yaml.example` file.
{% /dd %}

{% dt %}
`<LOGS_CONFIG>`
{% /dt %}

{% dd %}
The configuration parameters listed under `logs` in your integration's `<INTEGRATION_NAME>.d/conf.yaml.example` file.
{% /dd %}

{% /dl %}

### Advanced annotation parameters{% #advanced-annotation-parameters %}

In addition to the core Autodiscovery annotations for checks, logs, and instances, you can use additional annotations to customize check behavior:

#### Tag cardinality{% #tag-cardinality %}

Set the tag cardinality level for a specific check using the `check_tag_cardinality` annotation. This overrides the global Agent tag cardinality setting for metrics collected by that check.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: '<POD_NAME>'
  annotations:
    ad.datadoghq.com/<CONTAINER_NAME>.checks: |
      {
        "<INTEGRATION_NAME>": {
          "init_config": <INIT_CONFIG>,
          "instances": [<INSTANCES_CONFIG>]
        }
      }
    ad.datadoghq.com/<CONTAINER_NAME>.check_tag_cardinality: "<low|orchestrator|high>"
spec:
  containers:
    - name: '<CONTAINER_NAME>'
```

{% alert level="info" %}
The `check_tag_cardinality` annotation only affects metrics collected by integration checks. It does not affect metrics sent through DogStatsD. To configure DogStatsD tag cardinality, use the global `dogstatsd_tag_cardinality` configuration parameter or the `DD_DOGSTATSD_TAG_CARDINALITY` environment variable.
{% /alert %}

For more information about tag cardinality, see [Per-check tag configuration](https://docs.datadoghq.com/getting_started/integrations/#per-check-tag-configuration).

### Auto-configuration{% #auto-configuration %}

The Datadog Agent automatically recognizes and supplies basic configuration for some common technologies. For a complete list, see [Autodiscovery auto-configuration](https://docs.datadoghq.com/containers/guide/auto_conf).

Configurations set with Kubernetes annotations take precedence over auto-configuration, but auto-configuration takes precedence over configurations set with Datadog Operator or Helm. To use Datadog Operator or Helm to configure an integration in the [Autodiscovery auto-configuration](https://docs.datadoghq.com/containers/guide/auto_conf) list, you must [disable auto-configuration](https://docs.datadoghq.com/containers/guide/auto_conf/#disable-auto-configuration).

## Example: Postgres integration{% #example-postgres-integration %}

In this example scenario, you deployed Postgres on Kubernetes. You want to set up and configure the [Datadog-Postgres integration](https://docs.datadoghq.com/integrations/postgres). All of your Postgres containers have container names that contain the string `postgres`.

First, reference the [Postgres integration documentation](https://docs.datadoghq.com/integrations/postgres) for any additional setup steps. The Postgres integration requires that you create a read-only user named `datadog` and store the corresponding password as an environment variable named `PG_PASSWORD`.

If you were to configure this integration **on a host**, you could reference [`postgresql.d/conf.yaml.example`](https://github.com/DataDog/integrations-core/blob/master/postgres/datadog_checks/postgres/data/conf.yaml.example) for parameters and create a `postgresql.d/conf.yaml` file that contains the following:

```yaml
init_config:
instances:
  - host: localhost
    port: 5432
    username: datadog
    password: <PASSWORD>
logs:
  - type: file
    path: /var/log/postgres.log
    source: postgresql
    service: pg_service
```

Here, `<PASSWORD>` corresponds to the password for the `datadog` user you created.

To apply this configuration to your Postgres containers:

{% tab title="Annotations" %}
In your pod manifest:

**Autodiscovery annotations v2** (for Datadog Agent v7.36+)

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: postgres
  annotations:
    ad.datadoghq.com/postgres.checks: |
      {
        "postgres": {
          "instances": [
            {
              "host": "%%host%%",
              "port": "5432",
              "username": "datadog",
              "password":"%%env_PG_PASSWORD%%"
            }
          ]
        }
      }
    ad.datadoghq.com/postgres.logs: |
      [
        {
          "type": "file",
          "path": "/var/log/postgres.log",
          "source": "postgresql",
          "service": "pg_service"
        }
      ]
spec:
  containers:
    - name: postgres
```

**Autodiscovery annotations v1**

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: postgres
  annotations:
    ad.datadoghq.com/postgres.check_names: '["postgres"]'
    ad.datadoghq.com/postgres.init_configs: '[{}]'
    ad.datadoghq.com/postgres.instances: |
      [
        {
          "host": "%%host%%",
          "port": "5432",
          "username": "datadog",
          "password":"%%env_PG_PASSWORD%%"
        }
      ]
    ad.datadoghq.com/postgres.logs: |
      [
        {
          "type": "file",
          "path": "/var/log/postgres.log",
          "source": "postgresql",
          "service": "pg_service"
        }
      ]
spec:
  containers:
    - name: postgres
```

{% /tab %}

{% tab title="Local file" %}

1. Create a `conf.d/postgresql.d/conf.yaml` file on your host:

   ```yaml
   ad_identifiers:
     - postgres
   init_config:
   instances:
     - host: "%%host%%"
       port: "5432"
       username: "datadog"
       password: "%%env_PG_PASSWORD%%"
   logs:
     - type: "file"
       path: "/var/log/postgres.log"
       source: "postgresql"
       service: "pg_service"
   ```

1. Mount your host `conf.d/` folder to the containerized Agent's `conf.d` folder.

   For Datadog Operator:

   ```yaml
   spec:
     override:
       nodeAgent:
         volumes:
           - hostPath:
               path: <PATH_TO_LOCAL_FOLDER>/conf.d
             name: confd
   ```

   For Helm:

   ```yaml
   agents:
     volumes:
     - hostPath:
         path: <PATH_TO_LOCAL_FOLDER>/conf.d
       name: confd
     volumeMounts:
     - name: confd
       mountPath: /conf.d
   ```

{% /tab %}

{% tab title="ConfigMap" %}
In a ConfigMap:

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: postgresql-config-map
  namespace: default
data:
  postgresql-config: |-
    ad_identifiers:
      - postgres
    init_config:
    instances:
      - host: "%%host%%"
        port: "5432"
        username: "datadog"
        password: "%%env_PG_PASSWORD%%"
    logs:
      - type: "file"
        path: "/var/log/postgres.log"
        source: "postgresql"
        service: "pg_service"
```

Then, in your manifest, define the `volumeMounts` and `volumes`:

```yaml
# [...]
        volumeMounts:
        # [...]
          - name: postgresql-config-map
            mountPath: /etc/datadog-agent/conf.d/postgresql.d
        # [...]
      volumes:
      # [...]
        - name: postgresql-config-map
          configMap:
            name: postgresql-config-map
            items:
              - key: postgresql-config
                path: conf.yaml
# [...]
```

{% /tab %}

{% tab title="Key-value store" %}
The following etcd commands create a Postgres integration template with a custom `password` parameter:

```shell
etcdctl mkdir /datadog/check_configs/postgres
etcdctl set /datadog/check_configs/postgres/check_names '["postgres"]'
etcdctl set /datadog/check_configs/postgres/init_configs '[{}]'
etcdctl set /datadog/check_configs/postgres/instances '[{"host": "%%host%%","port":"5432","username":"datadog","password":"%%env_PG_PASSWORD%%"}]'
```

Notice that each of the three values is a list. Autodiscovery assembles list items into the integration configurations based on shared list indexes. In this case, it composes the first (and only) check configuration from `check_names[0]`, `init_configs[0]` and `instances[0]`.
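Assembled from index 0 of each list, the template above is equivalent to this file-based configuration:

```yaml
ad_identifiers:
  - postgres
init_config:
instances:
  - host: "%%host%%"
    port: "5432"
    username: "datadog"
    password: "%%env_PG_PASSWORD%%"
```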
{% /tab %}

{% tab title="Datadog Operator" %}
In `datadog-agent.yaml`:

```yaml
apiVersion: datadoghq.com/v2alpha1
kind: DatadogAgent
metadata:
  name: datadog
spec:
  global:
    [...]
  features:
    [...]
  override:
    nodeAgent:
      extraConfd:
        configDataMap:
          postgres.yaml: |-
            ad_identifiers:
              - postgres
            init_config:
            instances:
              - host: "%%host%%"
                port: 5432
                username: "datadog"
                password: "%%env_PG_PASSWORD%%"
```

As a result, the Agent contains a `postgres.yaml` file with the above configuration in the `conf.d` directory.
{% /tab %}

{% tab title="Helm" %}
In `datadog-values.yaml`:

```yaml
datadog:
  confd:
    postgres.yaml: |-
      ad_identifiers:
        - postgres
      init_config:
      instances:
        - host: "%%host%%"
          port: 5432
          username: "datadog"
          password: "%%env_PG_PASSWORD%%"
```

As a result, the Agent contains a `postgres.yaml` file with the above configuration in the `conf.d` directory.
{% /tab %}

These templates make use of [Autodiscovery template variables](https://docs.datadoghq.com/containers/guide/template_variables/):

- `%%host%%` is dynamically populated with the container's IP.
- `%%env_PG_PASSWORD%%` references an environment variable named `PG_PASSWORD` as seen by the Agent process.
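Because the Agent resolves `%%env_PG_PASSWORD%%` from its own environment, the `PG_PASSWORD` variable must be set on the Agent container itself. A sketch for the Datadog Operator, assuming a Kubernetes Secret named `postgres-credentials` (hypothetical) holds the `datadog` user's password:

```yaml
spec:
  override:
    nodeAgent:
      env:
        - name: PG_PASSWORD
          valueFrom:
            secretKeyRef:
              name: postgres-credentials  # hypothetical Secret
              key: password
```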

For more examples, including how to configure multiple checks for multiple sets of containers, see [Autodiscovery: Scenarios & Examples](https://docs.datadoghq.com/containers/guide/autodiscovery-examples).

## Further Reading{% #further-reading %}

- [Collect your application logs](https://docs.datadoghq.com/agent/kubernetes/log/)
- [Collect your application traces](https://docs.datadoghq.com/agent/kubernetes/apm/)
- [Collect your Prometheus metrics](https://docs.datadoghq.com/agent/kubernetes/prometheus/)
- [Limit data collection to a subset of containers only](https://docs.datadoghq.com/agent/guide/autodiscovery-management/)
- [Assign tags to all data emitted by a container](https://docs.datadoghq.com/agent/kubernetes/tag/)
- [Datadog Tips & Tricks: How to write annotations in Kubernetes with JSON for Datadog Autodiscovery](https://www.youtube.com/watch?v=nuxmVf9ByE0)
