
Assigning Tags

Overview

Tagging is used throughout Datadog to query the machines and metrics you monitor. Without the ability to assign and filter based on tags, finding problems in your environment and narrowing them down enough to discover the true causes could be difficult. Learn how to define tags in Datadog before going further.

There are several places tags can be assigned: configuration files, environment variables, your traces, the Datadog UI, the API, DogStatsD, and integration inheritance. It is recommended that you use configuration files and integration inheritance for most of your tagging needs.

Configuration Files

Hostname

The hostname (tag key host) is assigned automatically by the Datadog Agent. To customize the hostname, use the Agent configuration file, datadog.yaml:

# Set the hostname (default: auto-detected)
# Must comply with RFC-1123, which permits only:
# "A" to "Z", "a" to "z", "0" to "9", and the hyphen (-)
hostname: mymachine.mydomain

Changing the hostname

  • The old hostname remains in the UI for 2 hours but does not show new metrics.
  • Any data from hosts with the old hostname can be queried with the API.
  • To graph metrics with the old and new hostname in one graph, use Arithmetic between two metrics (see the example query after this list).
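
For example, a graph query along these lines adds a metric's series from the old and new hostnames into a single line (the metric and hostnames are illustrative):

avg:system.cpu.user{host:old-hostname} + avg:system.cpu.user{host:new-hostname}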

Add tags

The Agent configuration file (datadog.yaml) is also used to set host tags which apply to all metrics, traces, and logs forwarded by the Datadog Agent (see YAML formats below).

If you are still using the older Agent v5 configuration file (datadog.conf), host tags must be in the format:

tags: <KEY_1>:<VALUE_1>, <KEY_2>:<VALUE_2>, <KEY_3>:<VALUE_3>

Tags for the integrations installed with the Agent are configured with YAML files located in the conf.d directory of the Agent install. To locate the configuration files, refer to Agent configuration files.
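
As an illustration, a hypothetical check configuration in conf.d might assign tags like this (the host, port, and tag values are examples only, not the required schema of any specific integration):

instances:
  - host: localhost
    port: 6379
    tags:
      - service:cache
      - env:prod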

YAML formats

In YAML files, use a tag dictionary to assign a list of tags. Tag dictionaries have two different yet functionally equivalent forms:

tags: <KEY_1>:<VALUE_1>, <KEY_2>:<VALUE_2>, <KEY_3>:<VALUE_3>

or

tags:
    - <KEY_1>:<VALUE_1>
    - <KEY_2>:<VALUE_2>
    - <KEY_3>:<VALUE_3>

It is recommended to assign tags as <KEY>:<VALUE> pairs, but simple tags are also accepted. See defining tags for more details.
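
For example, a datadog.yaml host tag block mixing <KEY>:<VALUE> pairs with a simple tag (values are illustrative):

tags:
  - env:prod
  - role:database
  - mytag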

Environment Variables

When installing the containerized Datadog Agent, set your host tags using the environment variable DD_TAGS. Datadog automatically collects common tags from Docker, Kubernetes, ECS, Swarm, Mesos, Nomad, and Rancher. To extract even more tags, use the following options:

| Environment Variable | Description |
| --- | --- |
| DD_DOCKER_LABELS_AS_TAGS | Extract Docker container labels |
| DD_DOCKER_ENV_AS_TAGS | Extract Docker container environment variables |
| DD_KUBERNETES_POD_LABELS_AS_TAGS | Extract pod labels |
| DD_CHECKS_TAG_CARDINALITY | Add tags to check metrics |
| DD_DOGSTATSD_TAG_CARDINALITY | Add tags to custom metrics |

Examples:

DD_KUBERNETES_POD_LABELS_AS_TAGS='{"app":"kube_app","release":"helm_release"}'
DD_DOCKER_LABELS_AS_TAGS='{"com.docker.compose.service":"service_name"}'

When using DD_KUBERNETES_POD_LABELS_AS_TAGS, you can use wildcards in the format:

{"foo", "bar_%%label%%"}

For example, {"app*": "kube_%%label%%"} resolves to the tag name kube_application for the label application. Further, {"*": "kube_%%label%%"} adds all pod labels as tags prefixed with kube_.

When using the DD_DOCKER_LABELS_AS_TAGS variable within a Docker Swarm docker-compose.yaml file, remove the apostrophes, for example:

DD_DOCKER_LABELS_AS_TAGS={"com.docker.compose.service":"service_name"}

When adding labels to Docker containers, the placement of the labels: keyword inside the docker-compose.yaml file is very important. If the container needs to be labeled, place the labels: keyword inside the services: section, not inside the deploy: section; place the labels: keyword inside the deploy: section only when the service needs to be labeled. Without this placement, the Datadog Agent has no labels to extract from the containers.

The sample docker-compose.yaml below demonstrates this. The labels in the myapplication: section, my.custom.label.project and my.custom.label.version, each have unique values. Using the DD_DOCKER_LABELS_AS_TAGS environment variable in the datadog: section extracts the labels and produces these tags for the myapplication container:

Inside the myapplication container the labels are: my.custom.label.project and my.custom.label.version

After the Agent extracts the labels from the container the tags are: projecttag:projectA versiontag:1

Sample docker-compose.yaml:

services:
  datadog:
    volumes:
      - '/var/run/docker.sock:/var/run/docker.sock:ro'
      - '/proc:/host/proc:ro'
      - '/sys/fs/cgroup/:/host/sys/fs/cgroup:ro'
    environment:
      - DD_API_KEY=abcdefghijklmnop
      - DD_DOCKER_LABELS_AS_TAGS={"my.custom.label.project":"projecttag","my.custom.label.version":"versiontag"}
      - DD_TAGS="key1:value1 key2:value2 key3:value3"
    image: 'datadog/agent:latest'
    deploy:
      restart_policy:
        condition: on-failure
      mode: replicated
      replicas: 1
  myapplication:
    image: 'myapplication'
    labels:
      my.custom.label.project: 'projectA'
      my.custom.label.version: '1'
    deploy:
      restart_policy:
        condition: on-failure
      mode: replicated
      replicas: 1

Either define the variables in your custom datadog.yaml, or set them as JSON maps in these environment variables. The map key is the source (label/envvar) name, and the map value is the Datadog tag name.
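
For instance, assuming the datadog.yaml equivalents of the variables above (docker_labels_as_tags and kubernetes_pod_labels_as_tags, as named in the Agent configuration template), the same mappings could be written as:

docker_labels_as_tags:
  com.docker.compose.service: service_name
kubernetes_pod_labels_as_tags:
  app: kube_app
  release: helm_release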

There are two environment variables that set tag cardinality: DD_CHECKS_TAG_CARDINALITY and DD_DOGSTATSD_TAG_CARDINALITY. Because DogStatsD is priced differently, its tag cardinality setting is kept separate to allow finer configuration. Otherwise, these variables function the same way: they accept the values low, orchestrator, or high. Both default to low, which pulls in host-level tags.

Setting the variable to orchestrator adds the following tags: pod_name (Kubernetes), oshift_deployment (OpenShift), task_arn (ECS and Fargate), mesos_task (Mesos).

Setting the variable to high additionally adds the following tags: container_name (Docker), container_id (Docker), display_container_name (Kubelet).
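
For example, to raise the cardinality for check metrics and custom metrics respectively (the values shown are illustrative choices among low, orchestrator, and high):

DD_CHECKS_TAG_CARDINALITY=orchestrator
DD_DOGSTATSD_TAG_CARDINALITY=high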

Traces

When submitting a single trace, tag its spans to override Agent configuration tags and/or the host tags value (if any) for those traces:

The following examples use the default primary tag env:<ENVIRONMENT> but you can use any <KEY>:<VALUE> tag instead.

Go:

tracer.SetTag("env", "<ENVIRONMENT>")

For OpenTracing use the tracer.WithGlobalTag start option to set the environment globally.

Java, via system property:

-Ddd.trace.span.tags=env:<ENVIRONMENT>

Java, via environment variable:

DD_TRACE_SPAN_TAGS="env:<ENVIRONMENT>"

Ruby:

Datadog.tracer.set_tags('env' => '<ENVIRONMENT>')

Python:

from ddtrace import tracer
tracer.set_tags({'env': '<ENVIRONMENT>'})

.NET:

using Datadog.Trace;
Tracer.Instance.ActiveScope.Span.SetTag("env", "<ENVIRONMENT>");

Note: Span metadata must respect a typed tree structure. Each node of the tree is split by a . and a node can only be of a single type: for instance, it cannot be both an object (with sub-nodes) and a string.

So this example of span tags is invalid:

{
  "key": "value",
  "key.subkey": "value_2"
}
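
By contrast, a structure like the following is valid, since key is an object whose leaves are each a single string (the keys and values are illustrative):

{
  "key.subkey": "value",
  "key.other": "value_2"
}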

UI

Assign host tags in the UI via the Host Map page. Click on any hexagon (host) to show the host overlay on the bottom of the page. Then, under the User section, click the Edit Tags button. Enter the tags as a comma separated list, then click Save Tags. Note: Changes to metric tags made via the UI may take up to 30 minutes to apply.

Assign host tags in the UI via the Infrastructure List page. Click on any host to show the host overlay on the right of the page. Then, under the User section, click the Edit Tags button. Enter the tags as a comma separated list, then click Save Tags. Note: Changes to metric tags made via the UI may take up to 30 minutes to apply.

From the Manage Monitors page, select the checkbox next to each monitor to add tags (select one or multiple monitors). Click the Edit Tags button. Enter a tag or select one used previously. Then click Add Tag tag:name or Apply Changes. If tags were added previously, multiple tags can be assigned at once using the tag checkboxes.

When creating a monitor, assign monitor tags under step 4, Say what's happening.

Create percentile aggregations within Distribution Metrics by applying a whitelist of up to ten tags to a metric - this creates a timeseries for every potentially queryable combination of tag values. For more information on counting custom metrics and timeseries emitted from distribution metrics, see Custom Metrics.

Note: Apply up to ten tags. Exclusionary tags are not accepted.

The AWS integration tile allows you to assign additional tags to all metrics at the account level. Use a comma separated list of tags in the form <KEY>:<VALUE>.
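
For example, a tag list entered in the AWS integration tile could look like this (values are illustrative):

team:web, env:prod, cost-center:platform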

API

Tags can be assigned in various ways with the Datadog API; see the Datadog API documentation for the endpoints that accept tags.

Tagging within Datadog is a powerful way to aggregate your metrics. For a quick example, suppose you want the sum of the following metrics coming from your website (example.com):

Web server 1: api.metric('page.views', [(1317652676, 100), ...], host="example_prod_1")
Web server 2: api.metric('page.views', [(1317652676, 500), ...], host="example_prod_2")

Datadog recommends adding the tag domain:example.com and leaving off the hostname (the Datadog API determines the hostname automatically):

Web server 1: api.metric('page.views', [(1317652676, 100), ...], tags=['domain:example.com'])
Web server 2: api.metric('page.views', [(1317652676, 500), ...], tags=['domain:example.com'])
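
As a rough sketch using the datadogpy client (the API key is a placeholder and the timestamp and value are taken from the example above), the tagged submission could look like:

from datadog import initialize, api

initialize(api_key='<DATADOG_API_KEY>')

# Submit page views with the domain tag; no host is attached.
api.Metric.send(
    metric='page.views',
    points=[(1317652676, 100)],
    tags=['domain:example.com'],
)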

With the domain:example.com tag, the page views can be summed across hosts:

sum:page.views{domain:example.com}

To get a breakdown by host, use:

sum:page.views{domain:example.com} by {host}

DogStatsD

Add tags to any metric, event, or service check you send to DogStatsD. For example, compare the performance of two algorithms by tagging a timer metric with the algorithm version:

from datadog import statsd

@statsd.timed('algorithm.run_time', tags=['algorithm:one'])
def algorithm_one():
    # Do fancy things here ...
    pass

@statsd.timed('algorithm.run_time', tags=['algorithm:two'])
def algorithm_two():
    # Do fancy things (maybe faster?) here ...
    pass

Note: Tagging is a Datadog-specific extension to StatsD.
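
Events and service checks can be tagged the same way. A minimal sketch using datadogpy (the names, message, and tag values are illustrative):

from datadog import statsd

# Tag an event so it can be filtered in the event stream.
statsd.event('Deploy finished', 'Version 1.2.3 deployed', tags=['algorithm:one', 'env:prod'])

# Tag a service check (status 0 corresponds to OK).
statsd.service_check('myapp.is_ok', 0, tags=['env:prod'])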

Special consideration is necessary when assigning the host tag to DogStatsD metrics. For more information on the host tag key, see the DogStatsD section.

Integration inheritance

The most efficient method for assigning tags is to rely on integration inheritance. Tags you assign to your AWS instances, Chef recipes, and other integrations are automatically inherited by hosts and metrics you send to Datadog.

Cloud integrations

Cloud integrations are authentication based. Datadog recommends using the main cloud integration tile (AWS, Azure, Google Cloud, etc.) and installing the Agent where possible. Note: If you choose to use the Agent only, some integration tags are not available.

Amazon Web Services

The following tags are collected from AWS integrations. Note: Some tags only display on specific metrics.

| Integration | Datadog Tag Keys |
| --- | --- |
| All | region |
| API Gateway | apiid, apiname, method, resource, stage |
| Auto Scaling | autoscalinggroupname, autoscaling_group |
| Billing | account_id, budget_name, budget_type, currency, servicename, time_unit |
| CloudFront | distributionid |
| CodeBuild | project_name |
| CodeDeploy | application, creator, deployment_config, deployment_group, deployment_option, deployment_type, status |
| Direct Connect | connectionid |
| DynamoDB | globalsecondaryindexname, operation, streamlabel, tablename |
| EBS | volumeid, volume-name, volume-type |
| EC2 | autoscaling_group, availability-zone, image, instance-id, instance-type, kernel, name, security_group_name |
| ECS | clustername, servicename, instance_id |
| EFS | filesystemid |
| ElastiCache | cachenodeid, cache_node_type, cacheclusterid, cluster_name, engine, engine_version, prefered_availability-zone, replication_group |
| Elastic Beanstalk | environmentname, enviromentid |
| ELB | availability-zone, hostname, loadbalancername, name, targetgroup |
| EMR | cluster_name, jobflowid |
| ES | dedicated_master_enabled, ebs_enabled, elasticsearch_version, instance_type, zone_awareness_enabled |
| Firehose | deliverystreamname |
| Health | event_category, status, service |
| IoT | actiontype, protocol, rulename |
| Kinesis | streamname, name, state |
| KMS | keyid |
| Lambda | functionname, resource, executedversion, memorysize, runtime |
| Machine Learning | mlmodelid, requestmode |
| MQ | broker, queue, topic |
| OpsWorks | stackid, layerid, instanceid |
| Polly | operation |
| RDS | auto_minor_version_upgrade, dbinstanceclass, dbclusteridentifier, dbinstanceidentifier, dbname, engine, engineversion, hostname, name, publicly_accessible, secondary_availability-zone |
| Redshift | clusteridentifier, latency, nodeid, service_class, stage, wlmid |
| Route 53 | healthcheckid |
| S3 | bucketname, filterid, storagetype |
| SES | Tag keys are custom set in AWS. |
| SNS | topicname |
| SQS | queuename |
| VPC | nategatewayid, vpnid, tunnelipaddress |
| WorkSpaces | directoryid, workspaceid |

Azure

Azure integration metrics, events, and service checks receive the following tags:

| Integration | Namespace | Datadog Tag Keys |
| --- | --- | --- |
| All Azure integrations | All | cloud_provider, region, kind, type, name, resource_group, tenant_name, subscription_name, subscription_id, status (if applicable) |
| Azure VM integrations | azure.vm.* | host, size, operating_system, availability_zone |
| Azure App Service Plans (1) | azure.web_serverfarms.* | per_site_scaling, plan_size, plan_tier, operating_system |
| Azure App Services Web Apps & Functions (1) | azure.app_services.*, azure.functions.* | operating_system, server_farm_id, reserved, usage_state, fx_version (Linux web apps only), php_version, dot_net_framework_version, java_version, node_version, python_version |
| Azure SQL DB (1) | azure.sql_servers_databases.* | license_type, max_size_mb, server_name, role, zone_redundant. For replication links only: state, primary_server_name, primary_server_region, secondary_server_name, secondary_server_region |
| Azure Load Balancer (1) | azure.network_loadbalancers.* | sku_name |
| Azure Usage and Quota (1) | azure.usage.* | usage_category, usage_name |

(1) Resource-specific tags are in beta.

Google Cloud Platform

See the Google Cloud Platform integration documentation.

Web integrations

Web integrations are authentication based. Metrics are collected with API calls. Note: CamelCase tags are converted to underscores by Datadog, for example TestTag -> test_tag.

Further Reading