From the query to the graph

While setting up graphs is pretty simple in Datadog, this page aims to help you get even more value out of the Datadog graphing system.

This article describes the steps performed by Datadog’s graphing system from the query to the graph, so that you get a good idea of how to choose your graph settings.

TL;DR? There is a short version of this article.

Take the metric system.disk.total as an example. Say you want to graph data associated with this metric, coming from a specific server (host:moby).

When setting up a new graph in a Timeboard/Screenboard, you can use the editor—but you can also switch to the JSON tab to set up advanced queries:

[Image: graph_metric]
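
As a point of reference, here is a minimal sketch of what the JSON behind such a graph could look like (the exact fields generated by the editor may differ; the query string simply combines the parameters discussed in the rest of this article):

    {
      "viz": "timeseries",
      "requests": [
        {
          "q": "abs(sum:system.disk.total{host:moby}.rollup(avg, 60))",
          "type": "line"
        }
      ]
    }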

Now, follow each step executed by the Datadog backend to perform the query and render a graph line on your dashboard.

At each step, this article notes the effect of each parameter of the query.

Before the query (storage): data is stored separately depending on the tags

The metric system.disk.total (collected by default by the datadog-agent) is seen from different sources.

This is because this metric is reported by different hosts, and also because each datadog-agent collects this metric per device. For instance, it adds the tag device:tmpfs to the metric system.disk.total when sending data associated with the device of that name, and so on.

Thus, this metric is seen with different {host, device} tag combinations.

For each source (defined by a host and a set of tags), data is stored separately. In this example, suppose host:moby has 5 devices. Datadog therefore stores 5 timeseries (all datapoints submitted over time for a source), one for each of:

  • {host:moby, device:tmpfs}
  • {host:moby, device:cgroup_root}
  • {host:moby, device:/dev/vda1}
  • {host:moby, device:overlay}
  • {host:moby, device:shm}

Now, consider the successive steps followed by the backend for the query presented above.

Find which timeseries are needed for the query

In this query, you only asked for data associated with host:moby. So the first step for Datadog’s backend is to scan all sources (in this case, all {host, device} combinations with which the metric system.disk.total is submitted) and retain only those matching the scope of the query.

As you may have guessed, the backend finds five matching sources (see previous paragraph).

[Image: metrics_graph_2]

The idea is then to aggregate data from these sources to give you a metric representing system.disk.total for your host. This is done at step 3.

Note: The tagging system adopted by Datadog is simple and powerful. You don’t have to know or specify the sources to combine—you just give a tag, i.e. an ID, and Datadog combines all data carrying this ID and ignores the rest. For instance, you don’t need to know how many hosts or devices you have when you query system.disk.total{*}: Datadog aggregates data from all sources for you.

More information about timeseries and tag cardinality

Parameter involved: scope
You can use more than one tag, e.g. {host:moby, device:udev}, if you want to fetch data matching both tags.
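
To illustrate how the scope narrows which timeseries are fetched, compare these queries, from broadest to narrowest (the device name is one of the five listed above):

    system.disk.total{*}
    system.disk.total{host:moby}
    system.disk.total{host:moby, device:/dev/vda1}

The first aggregates data from all sources, the second retains the five timeseries of host:moby, and the third matches a single timeseries, so no space aggregation is needed for it.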

Perform time aggregation

Our backend selects all data corresponding to the time period of your graph.

However, before combining data from the different sources (step 3), Datadog needs to perform time aggregation.

Why?

As Datadog stores data at a 1-second granularity, it cannot display all real data on graphs. See this article to learn more about how data is aggregated in graphs.

A graph over a 1-week time window would require sending hundreds of thousands of values to your browser—and besides, most of these points could not be rendered on a widget occupying a small portion of your screen. For these reasons, Datadog has to aggregate data and send a limited number of points to your browser to render a graph.

Which granularity?

For instance, on a one-day view with the ‘lines’ display, you’ll have one datapoint every 5 minutes. The Datadog backend slices the 1-day interval into 288 buckets of 5 minutes. For each bucket, the backend rolls up all data into a single value. For example, the datapoint rendered on your graph with timestamp 07:00 is actually an aggregate of all real datapoints submitted between 07:00:00 and 07:05:00 that day.

How?

By default, the Datadog backend computes the rollup aggregate by averaging all real values, which tends to smooth out graphs as you zoom out. See more information about why zooming out on a timeframe also smooths out your graphs. Data aggregation needs to occur whether you have 1 or 1,000 sources, as long as you look at a large time window. What you generally see on a graph is not the real values submitted, but local aggregates.

[Image: metrics_graph_3]

Our backend computes a series of local aggregates for each source corresponding to the query.

However, you can control how this aggregation is performed.

Parameter involved: rollup (optional)
How to use the ‘rollup’ function?

In this example, rollup(avg,60) defines an aggregation period of 60 seconds: the graph’s time window is sliced into intervals of 1 minute each. Data within a given minute is aggregated into a single point, which shows up on your graph (after step 3, the space aggregation).

Note that the Datadog backend tries to keep the number of intervals below ~300. So if you use rollup(60) over a 2-month time window, you won’t get the one-minute granularity you requested.
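
For this example, the rollup is appended right after the scope (the sum: prefix is the space aggregator discussed in the next step; the 60-second period is the one used in this article):

    sum:system.disk.total{host:moby}.rollup(avg, 60)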

Perform space aggregation

Now you can mix data from different sources into a single line.

You have ~300 points for each source. Each of them represents a minute. In this example, for each minute, Datadog computes the sum across all sources, resulting in the following graph:

[Image: metrics_graph_4]

The value obtained (25.74GB) is the sum of the values reported by all sources (see previous image).

Note: Of course, if there is only one source (for instance, if we had chosen the scope {host:moby, device:/dev/disk} for the query), using sum/avg/max/min has no effect as no space aggregation needs to be performed. See here for more information.

Parameter involved: space aggregator

Datadog offers 4 space aggregators:

  • max
  • min
  • avg
  • sum
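
In the query string, the space aggregator is the prefix placed before the metric name. For this example (sum is the one used in this article; swapping the prefix changes how the sources are combined):

    sum:system.disk.total{host:moby}
    avg:system.disk.total{host:moby}
    max:system.disk.total{host:moby}
    min:system.disk.total{host:moby}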

Apply functions (optional)

Most functions are applied at this last step. From the ~300 points obtained after time (step 2) and space (step 3) aggregation, the function computes new values, which are displayed on your graph.

In this example, the abs function makes sure that your results are positive numbers.

Parameter involved: function
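
Functions wrap the query obtained after the previous steps. Here is a sketch of the full query reconstructed from the parameters discussed in this article, with abs applied on top:

    abs(sum:system.disk.total{host:moby}.rollup(avg, 60))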

Grouped queries, arithmetic, as_count/rate

Grouped queries

[Image: metric_graph_6]

The logic is the same:

  1. The Datadog backend finds all the different devices associated with the selected scope.
  2. For each device, the backend performs the query system.disk.total{host:example, device:<device>} as explained in this article.
  3. All final results are graphed on the same graph.
[Image: metric_graph_2]

Note: rollup or as_count modifiers must be placed after the by {device} clause.

Note 2: You can use multiple tags, for instance: system.disk.in_use{*} by {host,device}.
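
For instance, keeping the host used in the screenshot above, a grouped query with a rollup could look like the following (note the rollup placed after the by {device} clause, and the multi-tag grouping from Note 2):

    sum:system.disk.total{host:example} by {device}.rollup(avg, 60)
    system.disk.in_use{*} by {host,device}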

Arithmetic

Arithmetic is also applied after time and space aggregation, at step 4 (Apply functions).

[Image: metric_graph_8]
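
As a minimal sketch, arithmetic can combine two queries or apply a constant to one. The examples below are purely illustrative: system.disk.used is assumed to be reported alongside system.disk.total, and system.disk.in_use is assumed to be a fraction between 0 and 1.

    system.disk.used{host:moby} / system.disk.total{host:moby}
    system.disk.in_use{host:moby} * 100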

as_count and as_rate

as_count and as_rate are time aggregators specific to rates and counters submitted via StatsD/DogStatsD. They make it possible to view metrics as a rate per second, or as raw counts. Syntax: instead of adding a rollup, you can append .as_count() or .as_rate().
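
A sketch of the syntax, using a hypothetical counter metric my_app.requests submitted via DogStatsD:

    sum:my_app.requests{*}.as_count()
    sum:my_app.requests{*}.as_rate()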

More information in this blog post. Documentation about StatsD/DogStatsD.