Logs Monitor

Overview

Once log management is enabled for your organization, you can create a logs monitor that alerts you when logs of a specified type exceed a user-defined threshold over a given period of time.

Monitor creation

To create a logs monitor in Datadog, use the main navigation: Monitors –> New Monitor –> Logs.
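The UI flow above is the standard way to create a logs monitor. For reference, the same kind of monitor can also be created programmatically through the monitor API (POST /api/v1/monitor). The sketch below is illustrative only: the search query, index, thresholds, notification handle, and environment variable names are placeholder assumptions, and the payload should be checked against the current API reference.

```python
# Minimal sketch: create a logs monitor via the Datadog monitor API.
# The query, index, thresholds, and notification handle are placeholders.
import os

import requests

payload = {
    "name": "High error log volume",
    "type": "log alert",
    # Count logs matching the search over the last 5 minutes and
    # compare the count to the thresholds below.
    "query": 'logs("service:web status:error").index("*").rollup("count").last("5m") > 100',
    "message": "Error log volume is above the threshold. @slack-ops-channel",
    "options": {"thresholds": {"critical": 100, "warning": 50}},
}

response = requests.post(
    "https://api.datadoghq.com/api/v1/monitor",
    headers={
        "Content-Type": "application/json",
        "DD-API-KEY": os.environ["DD_API_KEY"],          # assumed env var names
        "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
    },
    json=payload,
)
response.raise_for_status()
print(response.json()["id"])  # ID of the newly created monitor
```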

Define the search query

As you define the search query, the graph above the search fields updates.

  1. Construct a search query using the same logic as a Log Explorer search.
  2. Choose to monitor over a log count, facet, or measure:
    • Monitor over a log count: Use the search bar (optional) and do not select a facet or measure. Datadog evaluates the number of logs over a selected time frame, then compares it to the threshold conditions.
    • Monitor over a facet: If a facet is selected, the monitor alerts over the Unique value count of the facet.
    • Monitor over a measure: If a measure is selected, the monitor alerts over the numerical value of the log measure (similar to a metric monitor), and an aggregation must be selected (min, avg, sum, median, pc75, pc90, pc95, pc98, pc99, or max). See the example queries after this list.
  3. Configure the alerting grouping strategy (optional):
    • Simple-Alert: Simple alerts aggregate over all reporting sources. You receive one alert when the aggregated value meets the set conditions. This works best to monitor a log count from a single source, or an aggregated value across many sources, and can be selected to reduce notification noise.
    • Multi-Alert: Multi alerts apply the alert to each source according to your group parameters, up to 100 matching groups. An alerting event is generated for each group that meets the set conditions. For example, you could group by service to receive a separate alert for each service that exceeds the error log threshold.
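The three choices above map to differently shaped log alert queries. The examples below sketch the general pattern; the service names, facets, measures, thresholds, and rollup method names are illustrative assumptions to verify against the monitor API reference.

```python
# Illustrative log alert query shapes (facets, measures, and thresholds are assumptions).

# Monitor over a log count: alert when more than 100 matching logs arrive in 5 minutes.
count_query = 'logs("service:web status:error").index("*").rollup("count").last("5m") > 100'

# Monitor over a facet: alert on the unique value count of a facet
# (here a hypothetical @client.ip facet).
facet_query = 'logs("service:web").index("*").rollup("cardinality", "@client.ip").last("5m") > 500'

# Monitor over a measure: alert on an aggregation (avg here) of a numeric measure,
# similar to a metric monitor.
measure_query = 'logs("service:web").index("*").rollup("avg", "@duration").last("15m") > 1000'

# Multi alert: group the results so that each matching group can alert separately.
multi_alert_query = 'logs("status:error").index("*").rollup("count").by("service").last("5m") > 10'
```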

Set alert conditions

  • Trigger when the metric is above, above or equal to, below, or below or equal to the threshold during the last 5 minutes, 15 minutes, 1 hour, etc.
  • Alert threshold <NUMBER>
  • Warning threshold <NUMBER>
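When a monitor is defined through the API rather than the form, these conditions map onto two places: the comparison operator and time window live in the query string, while the alert and warning thresholds live under options.thresholds. A minimal sketch, with placeholder values:

```python
# Sketch of how the alert conditions map onto a log alert definition.
# The operator and time window live in the query; thresholds live in options.
monitor_conditions = {
    # "above" the threshold during the last 15 minutes
    "query": 'logs("status:error service:payments").index("*").rollup("count").last("15m") > 100',
    "options": {
        "thresholds": {
            "critical": 100,  # Alert threshold
            "warning": 50,    # Warning threshold
        }
    },
}
```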

No data and below alerts

To receive a notification when all groups in a service have stopped sending logs, set the condition to below 1. This notifies you when no logs match the monitor query in the given time frame, across all aggregation groups.

When splitting the monitor by any dimension (tag or facet) and using a below condition, the alert triggers in exactly two cases: there are logs for a given group and its count is below the threshold, or there are no logs for any of the groups.

Examples:

  • This monitor triggers if and only if there are no logs for all services (see the first query sketch below).
  • This monitor triggers if there are no logs for the service backend (see the second query sketch below).
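As an illustration, the two examples above could correspond to queries shaped like the following; the index, time window, and wildcard search are assumptions.

```python
# Split by service with a "below 1" condition: a group with any logs always has a
# count of at least 1, so this only triggers when every service is silent.
all_services_silent = 'logs("*").index("*").rollup("count").by("service").last("10m") < 1'

# Scoped to one service with a "below 1" condition: triggers when the service
# "backend" stops sending logs.
backend_silent = 'logs("service:backend").index("*").rollup("count").last("10m") < 1'
```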

Notifications

For detailed instructions on the Say what’s happening and Notify your team sections, see the Notifications page.

Log samples

By default, when a logs monitor is triggered, samples or values are added to the notification message.

What is added depends on what the monitor is over:

  • Log count, grouped: the top 10 breaching values and their corresponding counts.
  • Log count, ungrouped: up to 10 log samples.
  • Facet or measure: the top 10 facet or measure values.

Samples and values are available for notifications sent to Slack, Jira, webhooks, Microsoft Teams, PagerDuty, and email. Note: Samples are not displayed for recovery notifications.

To disable log samples, uncheck the box at the bottom of the Say what’s happening section. The text next to the box depends on your monitor’s grouping, as described above.
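For monitors managed through the API, the checkbox appears to correspond to the enable_logs_sample monitor option; treat the field name as an assumption and confirm it against the current monitor API reference.

```python
# Hypothetical sketch: toggling log samples on an API-managed logs monitor.
# "enable_logs_sample" is assumed to be the relevant options field.
options = {
    "enable_logs_sample": True,  # include log samples or values in notifications
    "thresholds": {"critical": 100},
}
```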

Examples

Include a table of the top 10 breaching values:

Include a sample of 10 logs in the alert notification:

Further Reading