Hostname information collected from OpenTelemetry

Overview

To extract the correct hostname and host tags, the Datadog Exporter uses the resource detection processor and the Kubernetes attributes processor. These processors extract information from hosts and containers in the form of resource semantic conventions, which is then used to build the hostname, host tags, and container tags. These tags enable automatic correlation among telemetry signals and tag-based navigation for filtering and grouping telemetry data within Datadog.

For more information, see the OpenTelemetry project documentation for the resource detection and Kubernetes attributes processors.

Setup

Add the following lines to your Collector configuration:

processors:
  resourcedetection:
    # bare metal
    detectors: [env, system]
    system:
      resource_attributes:
        os.description:
          enabled: true
        host.arch:
          enabled: true
        host.cpu.vendor.id:
          enabled: true
        host.cpu.family:
          enabled: true
        host.cpu.model.id:
          enabled: true
        host.cpu.model.name:
          enabled: true
        host.cpu.stepping:
          enabled: true
        host.cpu.cache.l2.size:
          enabled: true
    # For cloud hosts, replace the detectors list above with the one for your provider:
    # GCP
    # detectors: [env, gcp, system]
    # AWS
    # detectors: [env, ecs, ec2, system]
    # Azure
    # detectors: [env, azure, system]
    timeout: 2s
    override: false
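
For the processor to take effect, reference it in the pipelines that feed the Datadog exporter. The following is a minimal sketch; the OTLP receiver, batch processor, and datadog exporter settings are illustrative and should be adapted to your existing configuration:

receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:
  # resourcedetection is configured as shown above

exporters:
  datadog:
    api:
      key: ${env:DD_API_KEY}

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [resourcedetection, batch]
      exporters: [datadog]
    traces:
      receivers: [otlp]
      processors: [resourcedetection, batch]
      exporters: [datadog]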

Add the following lines to values.yaml:

presets:
  kubernetesAttributes:
    enabled: true

The Helm kubernetesAttributes preset sets up the service account necessary for the Kubernetes attributes processor to extract metadata from pods. Read Important Components for Kubernetes for additional information about the required service account.
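
If you deploy the Collector with the OpenTelemetry Collector Helm chart, the preset and the Collector configuration can live in the same values.yaml. A minimal sketch, assuming the chart's mode and config keys; adapt it to your deployment:

mode: daemonset

presets:
  kubernetesAttributes:
    enabled: true

config:
  processors:
    # k8sattributes and resourcedetection from the next step go here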

Add the following to the Collector configuration:

processors:
  k8sattributes:
    passthrough: false
    auth_type: "serviceAccount"
    pod_association:
      - sources:
          - from: resource_attribute
            name: k8s.pod.ip
    extract:
      metadata:
        - k8s.pod.name
        - k8s.pod.uid
        - k8s.deployment.name
        - k8s.node.name
        - k8s.namespace.name
        - k8s.pod.start_time
        - k8s.replicaset.name
        - k8s.replicaset.uid
        - k8s.daemonset.name
        - k8s.daemonset.uid
        - k8s.job.name
        - k8s.job.uid
        - k8s.cronjob.name
        - k8s.statefulset.name
        - k8s.statefulset.uid
        - container.image.name
        - container.image.tag
        - container.id
        - k8s.container.name
      labels:
        - tag_name: kube_app_name
          key: app.kubernetes.io/name
          from: pod
        - tag_name: kube_app_instance
          key: app.kubernetes.io/instance
          from: pod
        - tag_name: kube_app_version
          key: app.kubernetes.io/version
          from: pod
        - tag_name: kube_app_component
          key: app.kubernetes.io/component
          from: pod
        - tag_name: kube_app_part_of
          key: app.kubernetes.io/part-of
          from: pod
        - tag_name: kube_app_managed_by
          key: app.kubernetes.io/managed-by
          from: pod
  resourcedetection:
    # remove the ones that you do not use
    detectors: [env, eks, ec2, aks, azure, gke, gce, system]
    timeout: 2s
    override: false
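
Then reference both processors in the pipelines that send data to the Datadog exporter. The sketch below assumes an OTLP receiver, a batch processor, and a datadog exporter defined elsewhere in your configuration, and the ordering shown is one common arrangement:

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [resourcedetection, k8sattributes, batch]
      exporters: [datadog]
    traces:
      receivers: [otlp]
      processors: [resourcedetection, k8sattributes, batch]
      exporters: [datadog]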

Add the following lines to values.yaml:

presets:
  kubernetesAttributes:
    enabled: true

Use the Helm kubernetesAttributes preset in both the DaemonSet and the Gateway to set up the service account necessary for the k8sattributes processor to extract metadata from pods. Read Important Components for Kubernetes for additional information about the required service account.

DaemonSet:

processors:
  k8sattributes:
    passthrough: true
    auth_type: "serviceAccount"
  resourcedetection:
    detectors: [env, <eks/ec2>, <aks/azure>, <gke/gce>, system]
    timeout: 2s
    override: false

Because the processor is in passthrough mode in the DaemonSet, it adds only the pod IP addresses. These addresses are then used by the Gateway processor to make Kubernetes API calls and extract metadata.
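
For example, the DaemonSet Collector can forward all signals to the Gateway over OTLP. In this sketch the Gateway service address, batch processor, and receiver are placeholders for your existing configuration:

exporters:
  otlp:
    endpoint: "<GATEWAY_SERVICE>.<NAMESPACE>.svc.cluster.local:4317"
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [k8sattributes, resourcedetection, batch]
      exporters: [otlp]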

Gateway:

processors:
  k8sattributes:
    passthrough: false
    auth_type: "serviceAccount"
    pod_association:
      - sources:
          - from: resource_attribute
            name: k8s.pod.ip
    extract:
      metadata:
        - k8s.pod.name
        - k8s.pod.uid
        - k8s.deployment.name
        - k8s.node.name
        - k8s.namespace.name
        - k8s.pod.start_time
        - k8s.replicaset.name
        - k8s.replicaset.uid
        - k8s.daemonset.name
        - k8s.daemonset.uid
        - k8s.job.name
        - k8s.job.uid
        - k8s.cronjob.name
        - k8s.statefulset.name
        - k8s.statefulset.uid
        - container.image.name
        - container.image.tag
        - container.id
        - k8s.container.name
      labels:
        - tag_name: kube_app_name
          key: app.kubernetes.io/name
          from: pod
        - tag_name: kube_app_instance
          key: app.kubernetes.io/instance
          from: pod
        - tag_name: kube_app_version
          key: app.kubernetes.io/version
          from: pod
        - tag_name: kube_app_component
          key: app.kubernetes.io/component
          from: pod
        - tag_name: kube_app_part_of
          key: app.kubernetes.io/part-of
          from: pod
        - tag_name: kube_app_managed_by
          key: app.kubernetes.io/managed-by
          from: pod
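
On the Gateway, the processor then runs before the Datadog exporter, for example as follows; the receiver, batch processor, and exporter settings are placeholders for your existing configuration:

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [k8sattributes, batch]
      exporters: [datadog]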

Add the following lines to values.yaml:

presets:
  kubernetesAttributes:
    enabled: true

The Helm kubernetesAttributes preset sets up the service account necessary for the Kubernetes attributes processor to extract metadata from pods. Read Important Components for Kubernetes for additional information about the required service account.

Add the following to the Collector configuration:

processors:
  k8sattributes:
    passthrough: false
    auth_type: "serviceAccount"
    pod_association:
      - sources:
          - from: resource_attribute
            name: k8s.pod.ip
    extract:
      metadata:
        - k8s.pod.name
        - k8s.pod.uid
        - k8s.deployment.name
        - k8s.node.name
        - k8s.namespace.name
        - k8s.pod.start_time
        - k8s.replicaset.name
        - k8s.replicaset.uid
        - k8s.daemonset.name
        - k8s.daemonset.uid
        - k8s.job.name
        - k8s.job.uid
        - k8s.cronjob.name
        - k8s.statefulset.name
        - k8s.statefulset.uid
        - container.image.name
        - container.image.tag
        - container.id
        - k8s.container.name
      labels:
        - tag_name: kube_app_name
          key: app.kubernetes.io/name
          from: pod
        - tag_name: kube_app_instance
          key: app.kubernetes.io/instance
          from: pod
        - tag_name: kube_app_version
          key: app.kubernetes.io/version
          from: pod
        - tag_name: kube_app_component
          key: app.kubernetes.io/component
          from: pod
        - tag_name: kube_app_part_of
          key: app.kubernetes.io/part-of
          from: pod
        - tag_name: kube_app_managed_by
          key: app.kubernetes.io/managed-by
          from: pod
  resourcedetection:
    detectors: [env, <eks/ec2>, <aks/azure>, <gke/gce>, system]
    timeout: 2s
    override: false

Data collected

| OpenTelemetry attribute | Datadog Tag | Processor |
|---|---|---|
| host.arch | | resourcedetectionprocessor{system} |
| host.name | | resourcedetectionprocessor{system,gcp,ec2,azure} |
| host.id | | resourcedetectionprocessor{system,gcp,ec2,azure} |
| host.cpu.vendor.id | | resourcedetectionprocessor{system} |
| host.cpu.family | | resourcedetectionprocessor{system} |
| host.cpu.model.id | | resourcedetectionprocessor{system} |
| host.cpu.model.name | | resourcedetectionprocessor{system} |
| host.cpu.stepping | | resourcedetectionprocessor{system} |
| host.cpu.cache.l2.size | | resourcedetectionprocessor{system} |
| os.description | | resourcedetectionprocessor{system} |
| os.type | | resourcedetectionprocessor{system} |
| cloud.provider | cloud_provider | resourcedetectionprocessor{gcp,ec2,ecs,eks,azure,aks} |
| cloud.platform | | resourcedetectionprocessor{gcp,ec2,ecs,eks,azure,aks} |
| cloud.account.id | | resourcedetectionprocessor{gcp,ec2,ecs,azure} |
| cloud.region | region | resourcedetectionprocessor{gcp,ec2,ecs,azure} |
| cloud.availability_zone | zone | resourcedetectionprocessor{gcp,ec2,ecs} |
| host.type | | resourcedetectionprocessor{gcp,ec2} |
| gcp.gce.instance.hostname | | resourcedetectionprocessor{gcp} |
| gcp.gce.instance.name | | resourcedetectionprocessor{gcp} |
| k8s.cluster.name | kube_cluster_name | resourcedetectionprocessor{gcp,eks} |
| host.image.id | | resourcedetectionprocessor{ec2} |
| aws.ecs.cluster.arn | ecs_cluster_name | k8sattributes |
| aws.ecs.task.arn | task_arn | k8sattributes |
| aws.ecs.task.family | task_family | k8sattributes |
| aws.ecs.task.revision | task_version | k8sattributes |
| aws.ecs.launchtype | | k8sattributes |
| aws.log.group.names | | k8sattributes |
| aws.log.group.arns | | k8sattributes |
| aws.log.stream.names | | k8sattributes |
| aws.log.stream.arns | | k8sattributes |
| azure.vm.name | | k8sattributes |
| azure.vm.size | | k8sattributes |
| azure.vm.scaleset.name | | k8sattributes |
| azure.resourcegroup.name | | k8sattributes |
| k8s.cluster.uid | | k8sattributes |
| k8s.namespace.name | kube_namespace | k8sattributes |
| k8s.pod.name | pod_name | k8sattributes |
| k8s.pod.uid | | k8sattributes |
| k8s.pod.start_time | | k8sattributes |
| k8s.deployment.name | kube_deployment | k8sattributes |
| k8s.replicaset.name | kube_replica_set | k8sattributes |
| k8s.replicaset.uid | | k8sattributes |
| k8s.daemonset.name | kube_daemon_set | k8sattributes |
| k8s.daemonset.uid | | k8sattributes |
| k8s.statefulset.name | kube_stateful_set | k8sattributes |
| k8s.statefulset.uid | | k8sattributes |
| k8s.container.name | kube_container_name | k8sattributes |
| k8s.job.name | kube_job | k8sattributes |
| k8s.job.uid | | k8sattributes |
| k8s.cronjob.name | kube_cronjob | k8sattributes |
| k8s.node.name | | k8sattributes |
| container.id | container_id | k8sattributes |
| container.image.name | image_name | k8sattributes |
| container.image.tag | image_tag | k8sattributes |

Full example configuration

For a full working example configuration with the Datadog exporter, see k8s-values.yaml. This example is for Amazon EKS.

Example logging output

ResourceSpans #0
Resource SchemaURL: https://opentelemetry.io/schemas/1.6.1
Resource attributes:
     -> container.id: Str(0cb82a1bf21466b4189414cf326683d653114c0f61994c73f78d1750b9fcdf06)
     -> service.name: Str(cartservice)
     -> service.instance.id: Str(5f35cd94-1b9c-47ff-bf45-50ac4a998a6b)
     -> service.namespace: Str(opentelemetry-demo)
     -> k8s.namespace.name: Str(otel-gateway)
     -> k8s.node.name: Str(ip-192-168-61-208.ec2.internal)
     -> k8s.pod.name: Str(opentelemetry-demo-cartservice-567765cd64-cbmwz)
     -> deployment.environment: Str(otel-gateway)
     -> k8s.pod.ip: Str(192.168.45.90)
     -> telemetry.sdk.name: Str(opentelemetry)
     -> telemetry.sdk.language: Str(dotnet)
     -> telemetry.sdk.version: Str(1.5.1)
     -> cloud.provider: Str(aws)
     -> cloud.platform: Str(aws_ec2)
     -> cloud.region: Str(us-east-1)
     -> cloud.account.id: Str(XXXXXXXXXX)
     -> cloud.availability_zone: Str(us-east-1c)
     -> host.id: Str(i-09e82186d7d8d7c95)
     -> host.image.id: Str(ami-06f28e19c3ba73ef7)
     -> host.type: Str(m5.large)
     -> host.name: Str(ip-192-168-50-0.ec2.internal)
     -> os.type: Str(linux)
     -> k8s.deployment.name: Str(opentelemetry-demo-cartservice)
     -> kube_app_name: Str(opentelemetry-demo-cartservice)
     -> k8s.replicaset.uid: Str(ddb3d058-6d6d-4423-aca9-0437c3688217)
     -> k8s.replicaset.name: Str(opentelemetry-demo-cartservice-567765cd64)
     -> kube_app_instance: Str(opentelemetry-demo)
     -> kube_app_component: Str(cartservice)
     -> k8s.pod.start_time: Str(2023-11-13T15:03:46Z)
     -> k8s.pod.uid: Str(5f35cd94-1b9c-47ff-bf45-50ac4a998a6b)
     -> k8s.container.name: Str(cartservice)
     -> container.image.name: Str(XXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/otel-demo)
     -> container.image.tag: Str(v4615c8d7-cartservice)
ScopeSpans #0
ScopeSpans SchemaURL: 
InstrumentationScope Microsoft.AspNetCore 
Span #0
    Trace ID       : fc6794b53df7e44bab9dced42bdfbf7b
    Parent ID      : 2d3ba75ad6a6b1a0
    ID             : f669b0fcd98365b9
    Name           : oteldemo.CartService/AddItem
    Kind           : Server
    Start time     : 2023-11-20 13:37:11.2060978 +0000 UTC
    End time       : 2023-11-20 13:37:11.2084166 +0000 UTC
    Status code    : Unset
    Status message : 
Attributes:
     -> net.host.name: Str(opentelemetry-demo-cartservice)
     -> net.host.port: Int(8080)
     -> http.method: Str(POST)
     -> http.scheme: Str(http)
     -> http.target: Str(/oteldemo.CartService/AddItem)
     -> http.url: Str(http://opentelemetry-demo-cartservice:8080/oteldemo.CartService/AddItem)
     -> http.flavor: Str(2.0)
     -> http.user_agent: Str(grpc-node-js/1.8.14)
     -> app.user.id: Str(e8521c8c-87a9-11ee-b20a-4eaeb9e6ddbc)
     -> app.product.id: Str(LS4PSXUNUM)
     -> app.product.quantity: Int(3)
     -> http.status_code: Int(200)
     -> rpc.system: Str(grpc)
     -> net.peer.ip: Str(::ffff:192.168.36.112)
     -> net.peer.port: Int(36654)
     -> rpc.service: Str(oteldemo.CartService)
     -> rpc.method: Str(AddItem)
     -> rpc.grpc.status_code: Int(0)

Custom tagging

Custom host tags

In the Datadog exporter

Set custom host tags directly in the Datadog exporter:

      ## @param tags - list of strings - optional - default: empty list
      ## List of host tags to be sent as part of the host metadata.
      ## These tags will be attached to telemetry signals that have the host metadata hostname.
      ##
      ## To attach tags to telemetry signals regardless of the host, use a processor instead.
      #
      tags: ["team:infra", "<TAG_KEY>:<TAG_VALUE>"]

See all configuration options here.
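
In context, this list belongs to the datadog exporter block; in recent Collector versions, host tags are configured under host_metadata. A minimal sketch, with an illustrative API key reference:

exporters:
  datadog:
    api:
      key: ${env:DD_API_KEY}
    host_metadata:
      tags: ["team:infra", "<TAG_KEY>:<TAG_VALUE>"]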

As OTLP resource attributes

Custom host tags can also be set as resource attributes that start with the namespace datadog.host.tag.

You can set this as an environment variable, OTEL_RESOURCE_ATTRIBUTES=datadog.host.tag.<custom_tag_name>=<custom_tag_value>, in an OTel SDK, or in a processor:

processors:
  resource:
    attributes:
    - key: datadog.host.tag.<custom_tag_name>
      action: upsert
      from_attribute: <custom_tag_name>
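
For the environment variable approach on a Kubernetes workload instrumented with an OTel SDK, a minimal container spec sketch follows; the container name and the tag placeholders are illustrative:

spec:
  containers:
    - name: <YOUR_APP_CONTAINER>
      env:
        - name: OTEL_RESOURCE_ATTRIBUTES
          value: "datadog.host.tag.<custom_tag_name>=<custom_tag_value>"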

Note: This is only supported if you have opted in as described here.

Custom container tags

As with custom host tags, custom container tags can be set as resource attributes prefixed with datadog.container.tag in your OTel instrumentation.

You can set this as an environment variable, OTEL_RESOURCE_ATTRIBUTES=datadog.container.tag.<custom_tag_name>=<custom_tag_value>, in an OTel SDK, or in a processor:

processors:
  resource:
    attributes:
    - key: datadog.container.tag.<custom_tag_name>
      action: upsert
      from_attribute: <custom_tag_name>