Monitor-based SLOs

Overview

Select a monitor-based source if you want to build your SLO based on existing or new Datadog monitors. For more information about monitors, see the Monitor documentation. Monitor-based SLOs are useful for a time-based stream of data where you differentiate time of good behavior from time of bad behavior. Dividing the sum of the good time by the sum of total time provides a Service Level Indicator (SLI).
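
As a minimal sketch of that arithmetic (the durations below are illustrative placeholders, not values from any real monitor):

```python
# Illustrative only: a time-based SLI is good time divided by total time.
good_minutes = 43_100   # minutes the underlying monitor spent in the OK state
total_minutes = 43_200  # minutes in a 30-day window (30 * 24 * 60)

sli = good_minutes / total_minutes
print(f"SLI: {sli:.4%}")  # -> SLI: 99.7685%
```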

Setup

On the SLO status page, select New SLO +. Then select Monitor.

Define queries

To start, you need existing Datadog monitors. To set up a new monitor, go to the monitor creation page and select one of the monitor types supported by SLOs (listed below). Search for monitors by name and click on a monitor to add it to the source list. An example of a monitor-based SLO: the latency of all user requests should be less than 250ms 99% of the time in any 30-day window. To set this up, you would do one of the following:

  1. Select a single monitor,
  2. Select multiple monitors (up to 20), or
  3. Select a single multi-alert monitor and use the Calculate on selected groups toggle to include specific monitor groups (up to 20) in the SLO calculation.

Supported Monitor Types:

  • Metric Monitor Types (Metric, Integration, APM Metric, Anomaly, Forecast, Outlier)
  • Synthetic
  • Service Checks (open beta)

Example: You might be tracking the uptime of a physical device. You have already configured a metric monitor on host:foo using a custom metric. This monitor might also ping your on-call team if the device is no longer reachable. To avoid burnout, you want to track how often this host is down.
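
If you prefer to define such an SLO programmatically rather than in the UI, the same configuration can be sent to the SLO endpoint of the Datadog API. A minimal sketch using Python and the requests library follows; the monitor ID, name, tags, and threshold values are placeholder assumptions, and the thresholds map to the targets described in the next section:

```python
# Sketch: create a monitor-based SLO through the Datadog API instead of the UI.
# The monitor ID, name, tags, and thresholds are placeholders for illustration.
import os
import requests

payload = {
    "name": "host:foo uptime",
    "type": "monitor",               # monitor-based SLO
    "monitor_ids": [12345],          # ID of the existing metric monitor
    "thresholds": [
        {"timeframe": "30d", "target": 99.0, "warning": 99.5}
    ],
    "tags": ["team:on-call", "device:foo"],
}

response = requests.post(
    "https://api.datadoghq.com/api/v1/slo",
    headers={
        "DD-API-KEY": os.environ["DD_API_KEY"],
        "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
    },
    json=payload,
)
response.raise_for_status()
print(response.json())
```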

Set your SLO targets

An SLO target consists of a target percentage and a time window. When you set a target for a monitor-based SLO, the target percentage specifies what portion of the time the underlying monitor(s) of the SLO should be in an OK state, while the time window specifies the rolling time period over which the target is tracked.

Example: 99% of the time requests should have a latency of less than 300ms over the past 7 days.

While the SLO remains above the target percentage, the SLO’s status is displayed in green. When the target percentage is violated, the SLO’s status is displayed in red. You can also optionally include a warning percentage greater than the target percentage to indicate when you are approaching an SLO breach. When the warning percentage is violated but the target percentage is not, the SLO status is displayed in yellow.
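
To make a target concrete, here is a quick sketch of the error budget implied by a target percentage and time window; the 99% over 7 days figure comes from the example above, and the helper name is illustrative:

```python
# Illustrative arithmetic: how much "bad" time a target leaves over a window.
def error_budget_minutes(target_pct: float, window_days: int) -> float:
    """Minutes the underlying monitor(s) may spend outside OK before the SLO is breached."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1 - target_pct / 100)

print(error_budget_minutes(99.0, 7))   # 100.8 minutes over 7 days
print(error_budget_minutes(99.9, 30))  # 43.2 minutes over 30 days
```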

Identify this indicator

Here you can add contextual information about the purpose of the SLO: include any related information or resources in the description, and add any tags you would like to associate with the SLO.

Overall Status Calculation

The overall status is the percentage of time during which all of the monitors are simultaneously in the OK state. It is not the average of the individual monitors’ statuses.

Consider the following example for 3 monitors:

| Monitor        | t1 | t2 | t3    | t4 | t5    | t6 | t7 | t8 | t9    | t10 | Status |
|----------------|----|----|-------|----|-------|----|----|----|-------|-----|--------|
| Monitor 1      | OK | OK | OK    | OK | ALERT | OK | OK | OK | OK    | OK  | 90%    |
| Monitor 2      | OK | OK | OK    | OK | OK    | OK | OK | OK | ALERT | OK  | 90%    |
| Monitor 3      | OK | OK | ALERT | OK | ALERT | OK | OK | OK | OK    | OK  | 80%    |
| Overall Status | OK | OK | ALERT | OK | ALERT | OK | OK | OK | ALERT | OK  | 70%    |

This can result in the overall status being lower than the average of the individual statuses.
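
The table above can be reproduced with a short sketch: a time slice counts toward the overall status only if every monitor is OK in that slice. The monitor histories below are copied from the table:

```python
# Reproduce the example: the overall status counts a time slice as OK
# only when every monitor is OK in that slice.
histories = {
    "Monitor 1": ["OK", "OK", "OK",    "OK", "ALERT", "OK", "OK", "OK", "OK",    "OK"],
    "Monitor 2": ["OK", "OK", "OK",    "OK", "OK",    "OK", "OK", "OK", "ALERT", "OK"],
    "Monitor 3": ["OK", "OK", "ALERT", "OK", "ALERT", "OK", "OK", "OK", "OK",    "OK"],
}

slices = list(zip(*histories.values()))
overall_ok = sum(all(state == "OK" for state in s) for s in slices)

for name, states in histories.items():
    print(name, f"{states.count('OK') / len(states):.0%}")   # 90%, 90%, 80%
print("Overall Status", f"{overall_ok / len(slices):.0%}")   # 70%
```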

Underlying monitor and SLO histories

SLOs based on the metric monitor types have a feature called SLO Replay that backfills SLO statuses with historical data pulled from the underlying monitors’ metrics and query configurations. This means that if you create a new metric monitor and set an SLO on it, you do not have to wait a full 7, 30, or 90 days for the SLO’s status to fill out: SLO Replay looks at the monitor’s underlying metric and query to calculate the status sooner. SLO Replay also triggers when the underlying metric monitor’s query is changed (for example, when the threshold is changed) to correct the status based on the new monitor configuration. Because SLO Replay recalculates the SLO’s status history, the monitor’s status history and the SLO’s status history may not match after a monitor update.

Note: SLO Replay is not supported for SLOs based on Synthetic tests or Service Checks.

Datadog recommends against using monitors with the Alert Recovery Threshold and Warning Recovery Threshold settings, as they affect your SLO calculations and do not allow you to cleanly differentiate between an SLI’s good behavior and bad behavior.

SLO calculations do not take into account when a monitor is resolved manually or as a result of the After x hours automatically resolve this monitor from a triggered state setting. If these are important tools for your workflow, consider cloning your monitor, removing auto-resolve settings and @-notifications, and using the clone for your SLO.

Confirm you are using the preferred SLI type for your use case. Datadog supports monitor-based SLIs and metric-based SLIs as described in the SLO metric documentation.
