Datadog graphs generally display local aggregates rather than the original submitted values.
Data is stored at 1-second granularity, but it can be aggregated when displayed.
Rendering a graph over a 1-week time window would require sending hundreds of thousands of values to your browser, and not all of those points could be drawn on a widget occupying a small portion of your screen. For these reasons, we aggregate the data and send a limited number of points to your browser to render a graph.
For instance, on a one-day view with the ‘lines’ display, you’ll have one datapoint every 5 minutes: our backend slices the 1-day interval into 288 buckets of 5 minutes each, and rolls up all the data in each bucket into a single value. For example, the datapoint rendered on your graph with timestamp 07:00 is actually an aggregate of all the real datapoints submitted between 07:00:00 and 07:05:00 that day.
By default our backend computes the rollup aggregate by averaging all real values, which tends to smooth out graphs as you zoom out.
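The bucketing and averaging described above can be sketched in a few lines of Python. This is only an illustration of the idea, not Datadog's actual backend code; the function name `rollup_avg` and the sample data are ours.

```python
from statistics import mean

def rollup_avg(points, bucket_seconds):
    """Illustrative rollup: slice time into fixed buckets and average
    the raw points in each bucket. points: list of (timestamp, value);
    returns one averaged datapoint per bucket, keyed by bucket start."""
    buckets = {}
    for ts, value in points:
        bucket_start = ts - (ts % bucket_seconds)
        buckets.setdefault(bucket_start, []).append(value)
    return {start: mean(vals) for start, vals in sorted(buckets.items())}

# Raw points submitted every 15 s between 07:00:00 and 07:05:00
# (07:00 = 25200 s after midnight) collapse into one 300 s bucket:
raw = [(25200 + 15 * i, 10.0 + i) for i in range(20)]
print(rollup_avg(raw, 300))  # → {25200: 19.5}
```

Averaging each bucket is why graphs look smoother as you zoom out: short spikes inside a bucket are diluted by the surrounding values.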
Data aggregation needs to occur whether you have 1 or 1,000 sources, as long as you are looking at a large enough time window.
However, you can control how this aggregation is performed by using the rollup function:
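For example, a query like the following (the metric name is just an illustration) replaces the default averaging with the maximum value per 60-second bucket:

```
avg:system.load.1{*}.rollup(max, 60)
```

Here the first argument to rollup picks the aggregation method for each bucket, and the second sets the bucket size in seconds.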
Note that our backend tries to keep the number of intervals below ~300, so if you apply rollup(60) over a 2-month time window, you won’t get the one-minute granularity you requested.
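The effect of that cap can be sketched as follows: the effective rollup interval is the requested one, widened until the window fits into roughly 300 buckets. The exact cap and rounding here are assumptions for illustration, not Datadog's documented behavior.

```python
MAX_POINTS = 300  # approximate cap described above (assumption)

def effective_rollup(window_seconds, requested_seconds):
    """Widen the requested rollup interval until the window
    fits into at most ~MAX_POINTS buckets."""
    minimum = -(-window_seconds // MAX_POINTS)  # ceiling division
    return max(requested_seconds, minimum)

two_months = 60 * 24 * 3600  # ~60 days, in seconds
print(effective_rollup(two_months, 60))  # rollup(60) is widened to 17280 s (~4.8 h buckets)
print(effective_rollup(3600, 60))        # over 1 hour, rollup(60) is honored: 60 s
```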
The graph above is a bar graph over the past 2 hours. On this graph you have one datapoint per minute; what you see are not the real submitted values but local aggregates, each representing one minute of your metric data.
Yes, if you zoom in enough you’ll get the original values. For instance, with the datadog-agent (which submits data every ~15 seconds), if you look at a 45-minute (or shorter) time window, you see unaggregated values.