
Assigning Tags

Tagging is used throughout Datadog to query the machines and metrics you monitor. Without the ability to assign and filter based on tags, finding problems in your environment and narrowing them down enough to discover the true causes could be difficult. Learn how to define tags in Datadog before going further.

There are several places tags can be assigned: configuration files, environment variables, your traces, the Datadog UI, the API, DogStatsD, and inheritance from integrations. It is recommended that you use configuration files and integration inheritance for most of your tagging needs.

Configuration Files

The hostname (tag key host) is assigned automatically by the Datadog Agent. To customize the hostname, use the Agent configuration file, datadog.yaml:

# Set the hostname (default: auto-detected)
# Must comply with RFC-1123, which permits only:
# "A" to "Z", "a" to "z", "0" to "9", and the hyphen (-)
hostname: mymachine.mydomain
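
As an illustration of the RFC-1123 constraint, here is a hypothetical Python check (a sketch, not the Agent's actual validation logic):

```python
import re

# RFC-1123 hostname labels: letters, digits, and hyphens, joined by dots.
# A label must not start or end with a hyphen. Illustrative sketch only.
HOSTNAME_RE = re.compile(
    r"^(?!-)[A-Za-z0-9-]{1,63}(?<!-)"
    r"(\.(?!-)[A-Za-z0-9-]{1,63}(?<!-))*$"
)

def is_valid_hostname(name):
    return len(name) <= 253 and bool(HOSTNAME_RE.match(name))

print(is_valid_hostname("mymachine.mydomain"))  # True
print(is_valid_hostname("bad_host!"))           # False
```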

When changing the hostname:

  • The old hostname remains in the UI for 24 hours but does not show new metrics.
  • Any data from hosts with the old hostname can be queried with the API.
  • To graph metrics with the old and new hostname in one graph, use Arithmetic between two metrics.

The Agent configuration file (datadog.yaml) is also used to set host tags which apply to all metrics, traces, and logs forwarded by the Datadog Agent (see YAML formats below).

Tags for the integrations installed with the Agent are configured via YAML files located in the conf.d directory of the Agent install. To locate the configuration files, refer to the Agent configuration files FAQ.

YAML formats

In YAML files, use a tag dictionary with a list of tags you want assigned at that level. Tag dictionaries have two different yet functionally equivalent forms:

tags: <KEY_1>:<VALUE_1>, <KEY_2>:<VALUE_2>, <KEY_3>:<VALUE_3>

tags:
    - <KEY_1>:<VALUE_1>
    - <KEY_2>:<VALUE_2>
    - <KEY_3>:<VALUE_3>

It is recommended you assign tags as <KEY>:<VALUE> pairs, but simple tags are also accepted. See defining tags for more details.
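
To illustrate the convention, here is a minimal Python sketch (not the Agent's actual parser) of how a comma-separated tag string breaks down into <KEY>:<VALUE> pairs and simple tags:

```python
def parse_tags(tag_string):
    """Split a comma-separated tag string into (key, value) pairs.

    Simple tags (no colon) get a value of None. Illustrative sketch
    only; the Datadog Agent's real parsing may differ in edge cases.
    """
    tags = []
    for raw in tag_string.split(","):
        raw = raw.strip()
        if not raw:
            continue
        # split on the first colon only, so values may contain colons
        key, sep, value = raw.partition(":")
        tags.append((key, value if sep else None))
    return tags

print(parse_tags("env:prod, role:database, experimental"))
# [('env', 'prod'), ('role', 'database'), ('experimental', None)]
```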

Environment Variables

When installing the containerized Datadog Agent, set your host tags using the environment variable DD_TAGS. The Agent automatically collects common tags from Docker, Kubernetes, ECS, Swarm, Mesos, Nomad, and Rancher. To extract even more tags, use the following options:

  • DD_DOCKER_LABELS_AS_TAGS: extract Docker container labels
  • DD_DOCKER_ENV_AS_TAGS: extract Docker container environment variables
  • DD_CHECKS_TAG_CARDINALITY: add tags to check metrics
  • DD_DOGSTATSD_TAG_CARDINALITY: add tags to custom metrics



When using the DD_DOCKER_LABELS_AS_TAGS variable within a Docker Swarm docker-compose.yaml file, remove the apostrophes, for example:

  DD_DOCKER_LABELS_AS_TAGS={"my.custom.label.project":"projecttag"}

When adding labels to Docker containers, the placement of the labels: keyword inside the docker-compose.yaml file is very important. To label the containers themselves, place the labels: keyword inside the services: section, not inside the deploy: section. Place the labels: keyword inside the deploy: section only when the service itself, rather than its containers, needs to be labeled. Without this placement, the Datadog Agent has no labels to extract from the containers.

In the working sample docker-compose.yaml file below, the labels my.custom.label.project and my.custom.label.version in the myapplication: section each have unique values. The DD_DOCKER_LABELS_AS_TAGS environment variable in the datadog: section extracts the labels and produces these two tags for the myapplication container:

  • Inside the myapplication container, the labels are my.custom.label.project and my.custom.label.version.
  • After the Agent extracts the labels from the container, the tags are projecttag:projectA and versiontag:1.

Sample docker-compose.yaml:

services:
  datadog:
    volumes:
      - '/var/run/docker.sock:/var/run/docker.sock:ro'
      - '/proc:/host/proc:ro'
      - '/sys/fs/cgroup/:/host/sys/fs/cgroup:ro'
    environment:
      - DD_API_KEY=abcdefghijklmnop
      - DD_DOCKER_LABELS_AS_TAGS={"my.custom.label.project":"projecttag","my.custom.label.version":"versiontag"}
      - DD_TAGS="key1:value1 key2:value2 key3:value3"
    image: 'datadog/agent:latest'
    deploy:
      restart_policy:
        condition: on-failure
      mode: replicated
      replicas: 1
  myapplication:
    image: 'myapplication'
    labels:
      my.custom.label.project: 'projectA'
      my.custom.label.version: '1'
    deploy:
      restart_policy:
        condition: on-failure
      mode: replicated
      replicas: 1

Either define the variables in your custom datadog.yaml, or set them as JSON maps in these environment variables. The map key is the source (label/envvar) name, and the map value is the Datadog tag name.
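
As a sketch of composing such a JSON map in Python (the label and tag names below are the hypothetical ones from the sample above):

```python
import json
import os

# Hypothetical mapping: Docker label name -> Datadog tag name
label_map = {
    "my.custom.label.project": "projecttag",
    "my.custom.label.version": "versiontag",
}

# The environment variable value is the JSON-serialized map,
# e.g. exported into the environment that launches the Agent.
os.environ["DD_DOCKER_LABELS_AS_TAGS"] = json.dumps(label_map)
print(os.environ["DD_DOCKER_LABELS_AS_TAGS"])
```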

The environment variables that set tag cardinality (DD_CHECKS_TAG_CARDINALITY and DD_DOGSTATSD_TAG_CARDINALITY) can have values low, orchestrator, or high. They both default to low.

Setting the variable to orchestrator adds the following tags: pod_name (Kubernetes), oshift_deployment (OpenShift), task_arn (ECS and Fargate), mesos_task (Mesos).

Setting the variable to high additionally adds the following tags: container_name (Docker), container_id (Docker), display_container_name (Kubelet).


Traces

When submitting a single trace, tag its spans to override the Agent configuration tags and host tags (if any) for those traces:

The following examples use the default primary tag env:<ENVIRONMENT> but you can use any <KEY>:<VALUE> tag instead.

Go:

tracer.SetTag("env", "<ENVIRONMENT>")

For OpenTracing, use the tracer.WithGlobalTag start option to set the environment globally.

Java, via sysprop:


Java, via environment variables:


Ruby:

Datadog.tracer.set_tags('env' => '<ENVIRONMENT>')

Python:

from ddtrace import tracer
tracer.set_tags({'env': '<ENVIRONMENT>'})

C#:

using Datadog.Tracing;
Tracer.Instance.ActiveScope.Span.SetTag("env", "<ENVIRONMENT>");

Note: Span metadata must respect a typed tree structure. Nodes of the tree are separated by a . and each node can be of only a single type: it can’t be both an object (with sub-nodes) and a string, for instance.

So this example of span metadata is invalid:

{
  "key": "value",
  "key.subkey": "value_2"
}


UI

Assign host tags in the UI via the Host Map page. Click on any hexagon (host) to show the host overlay on the bottom of the page. Then, under the User section, click the Edit Tags button. Enter the tags as a comma-separated list, then click Save Tags. Note: Changes to metric tags made via the UI may take up to 30 minutes to apply.

Host Map Tags

Assign host tags in the UI via the Infrastructure List page. Click on any host to show the host overlay on the right of the page. Then, under the User section, click the Edit Tags button. Enter the tags as a comma separated list, then click Save Tags. Note: Changes to metric tags made via the UI may take up to 30 minutes to apply.

Infrastructure List Tags

From the Manage Monitors page, select the checkbox next to each monitor you want to tag (one or multiple monitors), then click the Edit Tags button. Enter a tag or select one used previously, then click Add Tag tag:name or Apply Changes. If tags were added previously, multiple tags can be assigned at once using the tag checkboxes.

Manage Monitors Tags

When creating a monitor, assign monitor tags under step 4 Say what’s happening:

Create Monitor Tags

Assign tag keys within Distribution Metrics (Beta) to create aggregate timeseries: apply a set of tags to a metric, and a timeseries is created for every combination of tag values within the set.

Sets of tags are limited to groups of four:

Distribution Metrics Tags

The AWS integration tile allows you to assign additional tags to all metrics at the account level. Use a comma separated list of tags in the form <KEY>:<VALUE>.

AWS Tags


API

Tags can be assigned in various ways with the Datadog API.

Tagging within Datadog is a powerful way to gather your metrics. For a quick example, perhaps you’re looking for a sum of the following metrics coming from your website:

Web server 1: api.metric('page.views', [(1317652676, 100), ...], host="example_prod_1")
Web server 2: api.metric('page.views', [(1317652676, 500), ...], host="example_prod_2")

Datadog recommends adding the tag and leaving off the hostname (the Datadog API will determine the hostname automatically):

Web server 1: api.metric('page.views', [(1317652676, 100), ...], tags=[''])
Web server 2: api.metric('page.views', [(1317652676, 500), ...], tags=[''])

With the tag, the page views can be summed across hosts:


To get a breakdown by host, use:

sum:page.views{} by {host}
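
To illustrate the difference between summing across hosts and breaking down by the host tag, here is a small Python sketch using fabricated sample points (the host names are the ones from the example above):

```python
from collections import defaultdict

# Fabricated sample points: (host tag value, page.views value)
# at the same timestamp.
points = [("example_prod_1", 100), ("example_prod_2", 500)]

# Summing across hosts yields one combined series.
total = sum(value for _, value in points)
print(total)  # 600

# Grouping "by {host}" yields one series per host tag value.
by_host = defaultdict(int)
for host, value in points:
    by_host[host] += value
print(dict(by_host))  # {'example_prod_1': 100, 'example_prod_2': 500}
```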


DogStatsD

Add tags to any metric, event, or service check you send to DogStatsD. For example, compare the performance of two algorithms by tagging a timer metric with the algorithm version:

from datadog import statsd

@statsd.timed('algorithm.run_time', tags=['algorithm:one'])
def algorithm_one():
    # Do fancy things here ...
    pass

@statsd.timed('algorithm.run_time', tags=['algorithm:two'])
def algorithm_two():
    # Do fancy things (maybe faster?) here ...
    pass

Note that tagging is a Datadog-specific extension to StatsD.

Special consideration is necessary when assigning the host tag to DogStatsD metrics. For more information on the host tag key, see the DogStatsD section.

Integration Inheritance

The most efficient method for assigning tags is to rely on your integrations. Tags assigned to your Amazon Web Services instances, Chef recipes, and more are all automatically assigned to the hosts and metrics when they are brought into Datadog. Note: CamelCase tags are converted to underscores by Datadog, for example TestTag –> test_tag.
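
Datadog performs this conversion on ingestion; the equivalent transformation, sketched in Python (a mirror of the documented TestTag example, not Datadog's actual code, and its exact handling of digits or acronyms may differ):

```python
import re

def camel_to_snake(tag):
    """Insert an underscore before each interior uppercase letter,
    then lowercase the result, e.g. TestTag -> test_tag."""
    return re.sub(r"(?<!^)(?=[A-Z])", "_", tag).lower()

print(camel_to_snake("TestTag"))  # test_tag
```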

The following integration sources create tags automatically in Datadog:

  • Amazon CloudFront: Distribution
  • Amazon EC2: AMI, Customer Gateway, DHCP Option, EBS Volume, Instance, Internet Gateway, Network ACL, Network Interface, Reserved Instance, Reserved Instance Listing, Route Table, Security Group - EC2 Classic, Security Group - VPC, Snapshot, Spot Batch, Spot Instance Request, Spot Instances, Subnet, Virtual Private Gateway, VPC, VPN Connection
  • Amazon Elastic File System: Filesystem
  • Amazon Kinesis: Stream State
  • Amazon Machine Learning: BatchPrediction, DataSource, Evaluation, MLModel
  • Amazon Route 53: Domains, Healthchecks, HostedZone
  • Amazon WorkSpaces: WorkSpaces
  • AWS CloudTrail: CloudTrail
  • AWS Elastic Load Balancing: Loadbalancer, TargetGroups
  • AWS Identity and Access Management: Profile Name
  • AWS SQS: Queue Name
  • Apache: Apache Host and Port
  • Azure: Tenant Name, Status, Tags, Subscription ID and Name; Availability Zone in common with the AWS tag after contacting Datadog support
  • BTRFS: Usage and Replication Type
  • Chef: Chef Roles
  • Consul: Previous and Current Consul Leaders and Followers, Consul Datacenter, Service Name, Service ID
  • CouchDB: Database Name, Instance Name
  • CouchBase: CouchBase Tags, Instance Name
  • Docker: Docker, Kubernetes, ECS, Swarm, Mesos, Nomad, and Rancher tags; collect more tags with the Docker Agent tags collection options
  • Dyn: Zone, Record Type
  • Elasticsearch: Cluster Name, Host Name, Port Number
  • Etcd: State (Leader or Follower)
  • Fluentd: Host Name, Port Number
  • Google App Engine: Project Name, Version ID, Task Queue
  • Google Cloud Platform: Zone, Instance Type and ID, Automatic Restart, Project Name and ID, Name; Availability Zone in common with the AWS tag after contacting Datadog support
  • Go Expvar: Expvar Path
  • Gunicorn: State (Idle or Working), App Name
  • HAProxy: Service Name, Availability, Backend Host, Status, Type
  • HTTP Check: URL, Instance
  • IIS: Site
  • Jenkins: Job Name, Build Number, Branch, and Results
  • Kafka: Topic
  • Kubernetes: Minion Name, Namespace, Replication Controller, Labels, Container Alias
  • Marathon: URL
  • Memcached: Host, Port, Request, Cache Hit or Miss
  • Mesos: Role, URL, PID, Slave or Master Role, Node, Cluster
  • Mongo: Server Name
  • OpenStack: Network ID, Network Name, Hypervisor Name, ID, and Type, Tenant ID, Availability Zone
  • PHP FPM: Pool Name
  • Pivotal: Current State, Owner, Labels, Requester, Story Type
  • Postfix: Queue, Instance
  • Puppet: Puppet Tags
  • RabbitMQ: Node, Queue Name, Vhost, Policy, Host
  • Redis: Host, Port, Slave or Master
  • RiakCS: Aggregation Key
  • SNMP: Device IP Address
  • Supervisord: Server Name, Process Name
  • TeamCity: Tags, Code Deployments, Build Number
  • TokuMX: Role (Primary or Secondary), Replset, Replstate, Db, Coll, Shard
  • Varnish: Name, Backend
  • VSphere: Host, Datacenter, Server, Instance
  • Win32 Events: Event ID
  • Windows Services: Service Name

Further Reading