Configure Containers View


This page lists configuration options for the Containers page in Datadog. To learn more about the Containers page and its capabilities, see Containers View documentation.

Configuration options

Include or exclude containers

Include and exclude containers from real-time collection:

  • Exclude containers either by passing the environment variable DD_CONTAINER_EXCLUDE or by adding container_exclude: in your datadog.yaml main configuration file.
  • Include containers either by passing the environment variable DD_CONTAINER_INCLUDE or by adding container_include: in your datadog.yaml main configuration file.

Both arguments take an image name as value. Regular expressions are also supported.

For example, to exclude all Debian images except containers with a name starting with frontend, add these two configuration lines in your datadog.yaml file:

container_exclude: ["image:debian"]
container_include: ["name:frontend.*"]

Note: For Agent 5, instead of including the above in the datadog.conf main configuration file, explicitly add a datadog.yaml file to /etc/datadog-agent/, as the Process Agent requires all configuration options to be in that file. This configuration only excludes containers from real-time collection, not from Autodiscovery.
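If you run a containerized Agent, the same filter can be expressed with the environment variables instead of datadog.yaml. A sketch for a Kubernetes pod spec, using the same values as the example above:

```yaml
env:
  - name: DD_CONTAINER_EXCLUDE
    value: "image:debian"
  - name: DD_CONTAINER_INCLUDE
    value: "name:frontend.*"
```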

Scrubbing sensitive information

To prevent the leaking of sensitive data, you can scrub sensitive words in container YAML files. Container scrubbing is enabled by default for Helm charts, and some default sensitive words are provided:

  • password
  • passwd
  • mysql_pwd
  • access_token
  • auth_token
  • api_key
  • apikey
  • pwd
  • secret
  • credentials
  • stripetoken

You can set additional sensitive words by providing a list of words to the environment variable DD_ORCHESTRATOR_EXPLORER_CUSTOM_SENSITIVE_WORDS. This adds to, and does not overwrite, the default words.

Note: The additional sensitive words must be lowercase, because the Agent lowercases the text before comparing it against the pattern. This means the word password scrubs MY_PASSWORD to MY_*******, while the word PASSWORD matches nothing.

You need to set up this environment variable for the following Agents:

  • process-agent
  • cluster-agent

For example:

env:
    - name: DD_ORCHESTRATOR_EXPLORER_CUSTOM_SENSITIVE_WORDS
      value: "customword1 customword2 customword3"
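If you deploy through the official Helm chart, the same variable can be set on both Agents in values.yaml. A sketch, assuming the chart's agents.containers.processAgent.env and clusterAgent.env keys (the same keys used later in this page for extra tags):

```yaml
agents:
  containers:
    processAgent:
      env:
        - name: DD_ORCHESTRATOR_EXPLORER_CUSTOM_SENSITIVE_WORDS
          value: "customword1 customword2 customword3"
clusterAgent:
  env:
    - name: DD_ORCHESTRATOR_EXPLORER_CUSTOM_SENSITIVE_WORDS
      value: "customword1 customword2 customword3"
```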

For example, because password is a sensitive word, the scrubber changes <MY_PASSWORD> in any of the following to a string of asterisks, ***********:

password <MY_PASSWORD>
password=<MY_PASSWORD>
password: <MY_PASSWORD>
password::::== <MY_PASSWORD>

However, the scrubber does not scrub paths that contain sensitive words. For example, it does not overwrite /etc/vaultd/secret/haproxy-crt.pem with /etc/vaultd/******/haproxy-crt.pem even though secret is a sensitive word.

Configure Orchestrator Explorer

Resource collection compatibility matrix

The following table presents the list of collected resources and the minimal Agent, Cluster Agent, and Helm chart versions for each.

| Resource | Minimal Agent version | Minimal Cluster Agent version* | Minimal Helm chart version | Minimal Kubernetes version |
| --- | --- | --- | --- | --- |
| ClusterRoleBindings | 7.33.0 | 1.19.0 | 2.30.9 | 1.14.0 |
| ClusterRoles | 7.33.0 | 1.19.0 | 2.30.9 | 1.14.0 |
| Clusters | 7.33.0 | 1.18.0 | 2.10.0 | 1.17.0 |
| CronJobs | 7.33.0 | 7.40.0 | 2.15.5 | 1.16.0 |
| DaemonSets | 7.33.0 | 1.18.0 | 2.16.3 | 1.16.0 |
| Deployments | 7.33.0 | 1.18.0 | 2.10.0 | 1.16.0 |
| HorizontalPodAutoscalers | 7.33.0 | 7.51.0 | 2.10.0 | 1.1.1 |
| Ingresses | 7.33.0 | 1.22.0 | 2.30.7 | 1.21.0 |
| Jobs | 7.33.0 | 1.18.0 | 2.15.5 | 1.16.0 |
| Namespaces | 7.33.0 | 7.41.0 | 2.30.9 | 1.17.0 |
| Network Policies | 7.33.0 | 7.56.0 | 3.57.2 | 1.14.0 |
| Nodes | 7.33.0 | 1.18.0 | 2.10.0 | 1.17.0 |
| PersistentVolumes | 7.33.0 | 1.18.0 | 2.30.4 | 1.17.0 |
| PersistentVolumeClaims | 7.33.0 | 1.18.0 | 2.30.4 | 1.17.0 |
| Pods | 7.33.0 | 1.18.0 | 3.9.0 | 1.17.0 |
| ReplicaSets | 7.33.0 | 1.18.0 | 2.10.0 | 1.16.0 |
| RoleBindings | 7.33.0 | 1.19.0 | 2.30.9 | 1.14.0 |
| Roles | 7.33.0 | 1.19.0 | 2.30.9 | 1.14.0 |
| ServiceAccounts | 7.33.0 | 1.19.0 | 2.30.9 | 1.17.0 |
| Services | 7.33.0 | 1.18.0 | 2.10.0 | 1.17.0 |
| StatefulSets | 7.33.0 | 1.15.0 | 2.20.1 | 1.16.0 |
| VerticalPodAutoscalers | 7.33.0 | 7.46.0 | 3.6.8 | 1.16.0 |

Note: After version 1.22, Cluster Agent version numbering follows Agent release numbering, starting with version 7.39.0.

Add custom tags to resources

You can add custom tags to Kubernetes resources to ease filtering inside the Kubernetes resources view.

Additional tags are added through the DD_ORCHESTRATOR_EXPLORER_EXTRA_TAGS environment variable.

Note: These tags only show up in the Kubernetes resources view.

If you are using the Datadog Operator, add the environment variable on both the Process Agent and the Cluster Agent by setting override.agents.containers.processAgent.env and override.clusterAgent.env in your DatadogAgent manifest, datadog-agent.yaml.

apiVersion: datadoghq.com/v2alpha1
kind: DatadogAgent
metadata:
  name: datadog
spec:
  global:
    credentials:
      apiKey: <DATADOG_API_KEY>
      appKey: <DATADOG_APP_KEY>
  features:
    liveContainerCollection:
      enabled: true
    orchestratorExplorer:
      enabled: true
  override:
    agents:
      containers:
        processAgent:
          env:
            - name: "DD_ORCHESTRATOR_EXPLORER_EXTRA_TAGS"
              value: "tag1:value1 tag2:value2"
    clusterAgent:
      env:
        - name: "DD_ORCHESTRATOR_EXPLORER_EXTRA_TAGS"
          value: "tag1:value1 tag2:value2"

Then, apply the new configuration:

kubectl apply -n $DD_NAMESPACE -f datadog-agent.yaml

If you are using the official Helm chart, add the environment variable on both the Process Agent and the Cluster Agent by setting agents.containers.processAgent.env and clusterAgent.env in values.yaml.

agents:
  containers:
    processAgent:
      env:
        - name: "DD_ORCHESTRATOR_EXPLORER_EXTRA_TAGS"
          value: "tag1:value1 tag2:value2"
clusterAgent:
  env:
    - name: "DD_ORCHESTRATOR_EXPLORER_EXTRA_TAGS"
      value: "tag1:value1 tag2:value2"

Then, upgrade your Helm chart.
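For example, assuming a release named datadog installed from the datadog/datadog chart (adjust the release name and values file to your setup):

```shell
helm upgrade datadog datadog/datadog -f values.yaml
```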

If you are configuring the Agent manually, set the environment variable on both the Process Agent and Cluster Agent containers:

- name: DD_ORCHESTRATOR_EXPLORER_EXTRA_TAGS
  value: "tag1:value1 tag2:value2"

Collect custom resources and CustomResourceDefinitions

The Orchestrator Explorer collects CustomResourceDefinitions by default. These definitions appear in Datadog without any user configuration required.

To collect custom resources, you must both configure the Datadog Agent and set up indexing.

  1. Configure the Datadog Agent:

    If you are using Helm, add the following configuration to datadog-values.yaml:

    orchestratorExplorer:
        customResources:
            - <CUSTOM_RESOURCE_NAME>
    

    Each <CUSTOM_RESOURCE_NAME> must use the format group/version/kind.

    The Datadog Operator needs permission to allow the Agent to collect custom resources. Install the Operator with an option that grants this permission:

    helm install datadog-operator datadog/datadog-operator --set clusterRole.allowReadAllResources=true
    

    If you are using the Datadog Operator, add the following configuration to your DatadogAgent manifest, datadogagent.yaml:

    features:
      orchestratorExplorer:
        customResources:
          - <CUSTOM_RESOURCE_NAME>
    

    Each <CUSTOM_RESOURCE_NAME> must use the format group/version/kind.

  2. In Datadog, open Orchestrator Explorer.

  3. On the left panel, under Select Resources, select Kubernetes > Custom Resources > Resource Definitions.

  4. Locate the custom resource definition that corresponds to the resource you want to visualize in the explorer. Click on the version under the Versions column.

  5. Click to select the fields you would like to index from the Custom Resource (maximum of 50 fields per resource), then click Enable Indexing to save.

Once fields are indexed, they are available to add as columns in the explorer or as part of Saved Views.
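The group/version/kind format from step 1 can be illustrated with a hypothetical custom resource, where example.com, v1, and MyApp are placeholders rather than a real CRD:

```yaml
features:
  orchestratorExplorer:
    customResources:
      - example.com/v1/MyApp
```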

Further reading