Monitor-based SLOs

Overview

Select a monitor-based source if you want to build your SLO based on existing or new Datadog monitors. For more information about monitors, see the Monitor documentation. Monitor-based SLOs are useful for a time-based stream of data where you are differentiating time of good behavior from time of bad behavior. The sum of good time divided by the sum of total time provides the Service Level Indicator (SLI).
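
As an illustration, a minimal sketch of that calculation (the minute counts below are hypothetical, for illustration only):

```python
# Sketch: a monitor-based SLI is good time divided by total time.
# The minute counts are hypothetical, not pulled from any real monitor.

good_minutes = 43_100   # minutes the monitor spent in the OK state
total_minutes = 43_200  # minutes in a 30-day window (30 * 24 * 60)

sli = good_minutes / total_minutes
print(f"SLI: {sli:.4%}")  # -> SLI: 99.7685%
```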

Setup

On the SLO status page, select New SLO +. Then select Monitor.

Define queries

To start, you need to be using Datadog monitors. To set up a new monitor, go to the monitor creation page and select one of the monitor types supported by SLOs (listed below). Search for a monitor by name and click on it to add it to the source list.

For example, if you have a Metric Monitor that is configured to alert when user request latency is greater than 250ms, you could set a monitor-based SLO on that monitor. Suppose you choose an SLO target of 99% over the past 30 days: the latency of user requests should then be less than 250ms 99% of the time over those 30 days. To set this up, you would do one of the following:

  1. Select a single monitor,
  2. Select multiple monitors (up to 20), or
  3. Select a single multi-alert monitor and select specific monitor groups (up to 20) to be included in the SLO calculation using the Calculate on selected groups toggle.
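
As an illustration, the latency example above could also be created programmatically. The following is a sketch using the Datadog SLO API; the monitor ID is a placeholder and the payload fields should be checked against the current API reference before use:

```python
import os
import requests

# Sketch: creating a monitor-based SLO through the Datadog API.
# The monitor ID below is a placeholder; verify the payload schema
# against the SLO API reference before relying on this.
payload = {
    "type": "monitor",
    "name": "User request latency < 250ms",
    "monitor_ids": [12345678],  # placeholder ID of the latency monitor
    "thresholds": [
        {"timeframe": "30d", "target": 99.0},
    ],
}

response = requests.post(
    "https://api.datadoghq.com/api/v1/slo",
    headers={
        "DD-API-KEY": os.environ["DD_API_KEY"],
        "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
    },
    json=payload,
)
response.raise_for_status()
print(response.json())
```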

Supported Monitor Types:

  • Metric Monitor Types (Metric, Integration, APM Metric, Anomaly, Forecast, Outlier)
  • Synthetic
  • Service Checks (open beta)

Example: You might be tracking the uptime of a physical device. You have already configured a metric monitor on host:foo using a custom metric. This monitor might also ping your on-call team if the host is no longer reachable. To avoid burnout, you want to track how often this host is down.

Set your SLO targets

An SLO target comprises the target percentage and the time window. When you set a target for a monitor-based SLO, the target percentage specifies what portion of the time the underlying monitor(s) of the SLO should be in an OK state, while the time window specifies the rolling time period over which the target is tracked.

Example: 99% of the time requests should have a latency of less than 300ms over the past 7 days.

While the SLO remains above the target percentage, the SLO’s status will be displayed in green font. When the target percentage is violated, the SLO’s status will be displayed in red font. You can also optionally include a warning percentage that is greater than the target percentage to indicate when you are approaching an SLO breach. When the warning percentage is violated (but the target percentage is not violated), the SLO status will be displayed in yellow font.
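
As an illustration, a minimal sketch of how a status value maps to these three display states (the target and warning percentages below are hypothetical):

```python
def slo_state(status: float, target: float, warning: float | None = None) -> str:
    """Map an SLO status percentage to a display state.

    Sketch only: `warning`, when provided, must be greater than `target`.
    """
    if status < target:
        return "breached (red)"
    if warning is not None and status < warning:
        return "warning (yellow)"
    return "ok (green)"

# Hypothetical thresholds: 99% target with a 99.5% warning percentage.
print(slo_state(99.7, target=99.0, warning=99.5))  # ok (green)
print(slo_state(99.2, target=99.0, warning=99.5))  # warning (yellow)
print(slo_state(98.4, target=99.0, warning=99.5))  # breached (red)
```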

Identify this indicator

Here you can add contextual information about the purpose of the SLO, including any related information or resources, in the description field, and add any tags you would like to associate with the SLO.

Overall Status Calculation

The overall status is calculated as the percentage of time during which all monitors, or all of the calculated groups in a single multi-alert monitor, are in the OK state. It is not the average of the individual monitor statuses or group statuses, respectively.

Consider the following example for 3 monitors (this is also applicable to a monitor-based SLO based on a single multi-alert monitor):

Monitor          t1    t2    t3      t4    t5      t6    t7    t8    t9      t10   Status
Monitor 1        OK    OK    OK      OK    ALERT   OK    OK    OK    OK      OK    90%
Monitor 2        OK    OK    OK      OK    OK      OK    OK    OK    ALERT   OK    90%
Monitor 3        OK    OK    ALERT   OK    ALERT   OK    OK    OK    OK      OK    80%
Overall Status   OK    OK    ALERT   OK    ALERT   OK    OK    OK    ALERT   OK    70%

This can result in the overall status being lower than the average of the individual statuses.
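
A minimal sketch of this calculation, using the per-slice statuses from the table above:

```python
# Sketch: overall status is the fraction of time slices in which
# *every* monitor is OK, not the average of the per-monitor statuses.
monitors = {
    "Monitor 1": ["OK", "OK", "OK",    "OK", "ALERT", "OK", "OK", "OK", "OK",    "OK"],
    "Monitor 2": ["OK", "OK", "OK",    "OK", "OK",    "OK", "OK", "OK", "ALERT", "OK"],
    "Monitor 3": ["OK", "OK", "ALERT", "OK", "ALERT", "OK", "OK", "OK", "OK",    "OK"],
}

slices = list(zip(*monitors.values()))
overall_ok = sum(all(s == "OK" for s in time_slice) for time_slice in slices)
overall_status = overall_ok / len(slices)

average_of_monitors = sum(
    statuses.count("OK") / len(statuses) for statuses in monitors.values()
) / len(monitors)

print(f"Overall status: {overall_status:.0%}")                    # -> 70%
print(f"Average of monitor statuses: {average_of_monitors:.0%}")  # -> 87%
```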

Exceptions for Synthetic Tests

In certain cases, there is an exception to the status calculation for monitor-based SLOs that consist of a single grouped Synthetic test. Synthetic tests have optional special alerting conditions that change when the test enters the ALERT state and consequently impact the overall uptime:

  • Wait until the groups are failing for a specified number of minutes (default: 0)
  • Wait until a specified number of the groups are failing (default: 1)
  • Retry a specified number of times before a location’s test is considered a failure (default: 0)

By changing any of these conditions to something other than their defaults, the overall status for a monitor-based SLO using just that one Synthetic test could appear to be better than the aggregated statuses of the Synthetic test’s individual groups.

For more information on Synthetic test alerting conditions, visit the Synthetic Monitoring documentation.

Underlying monitor and SLO histories

SLOs based on metric monitor types have a feature called SLO Replay that backfills SLO statuses with historical data pulled from the underlying monitors' metrics and query configurations. This means that if you create a new Metric Monitor and set an SLO on it, rather than waiting a full 7, 30, or 90 days for the SLO's status to fill out, SLO Replay looks at the monitor's underlying metric and query to fill in the status sooner. SLO Replay also triggers when the underlying metric monitor's query is changed (for example, the threshold is changed) to correct the status based on the new monitor configuration. Because SLO Replay recalculates an SLO's status history, the monitor's status history and the SLO's status history may not match after a monitor update.

Note: SLO Replay is not supported for SLOs based on Synthetic tests or Service Checks.

Datadog recommends against using monitors with Alert Recovery Threshold and Warning Recovery Threshold, as these thresholds can also affect your SLO calculations and do not allow you to cleanly differentiate between an SLI's good behavior and bad behavior.

SLO calculations do not take into account when a monitor is resolved manually or as a result of the After x hours automatically resolve this monitor from a triggered state setting. If these are important tools for your workflow, consider cloning your monitor, removing auto-resolve settings and @-notifications, and using the clone for your SLO.

Confirm you are using the preferred SLI type for your use case. Datadog supports monitor-based SLIs and metric-based SLIs as described in the SLO metric documentation.
