---
title: Generate Metrics from Ingested Logs
description: Create count and distribution metrics from the full stream of ingested logs.
breadcrumbs: >-
  Docs > Log Management > Log Configuration > Generate Metrics from Ingested
  Logs
---

# Generate Metrics from Ingested Logs

## Overview{% #overview %}

{% alert level="info" %}
The solutions outlined in this documentation are specific to cloud-based logging environments. To generate metrics from on-premises logs, see the [Observability Pipelines](https://docs.datadoghq.com/observability_pipelines/configuration/explore_templates#generate-metrics) documentation.
{% /alert %}

Datadog's [Logging without Limits](https://docs.datadoghq.com/logs/)\* lets you dynamically decide what to include in or exclude from your indexes for storage and querying. At the same time, many types of logs are meant to be used for telemetry, to track trends such as KPIs over long periods of time. Log-based metrics are a cost-efficient way to summarize log data from the entire ingest stream. This means that even if you use [exclusion filters](https://docs.datadoghq.com/logs/indexes/#exclusion-filters) to limit what you store for exploration, you can still visualize trends and anomalies across all of your log data at 10-second granularity for 15 months.

With log-based metrics, you can generate a count metric of logs that match a query or a [distribution metric](https://docs.datadoghq.com/metrics/distributions/#overview) of a numeric value contained in the logs, such as request duration.

**Billing Note:** Metrics created from ingested logs are billed as [Custom Metrics](https://docs.datadoghq.com/metrics/custom_metrics/).

## Generate a log-based metric{% #generate-a-log-based-metric %}

{% image
   source="https://datadog-docs.imgix.net/images/logs/processing/logs_to_metrics/generate_logs_to_metric.f6d084362cc8c62f05c78517dfd821bf.png?auto=format"
   alt="Generate Logs to metric" /%}

To generate a new log-based metric:

1. Navigate to the [Generate Metrics](https://app.datadoghq.com/logs/pipelines/generate-metrics) page.
1. Select the **Generate Metrics** tab.
1. Click **+New Metric**.

You can also create metrics from an Analytics search by selecting the **Generate new metric** option from the **Export** menu.

{% image
   source="https://datadog-docs.imgix.net/images/logs/processing/logs_to_metrics/metrics_from_analytics2.314ef1196fef67cd0d5b689e4ab29818.jpg?auto=format"
   alt="Generate Logs to metric" /%}

### Add a new log-based metric{% #add-a-new-log-based-metric %}

{% image
   source="https://datadog-docs.imgix.net/images/logs/processing/logs_to_metrics/create_custom_metrics2.777139e5c24df097da0684e4e4b9532c.png?auto=format"
   alt="Create a Logs to metric" /%}

1. **Input a query to filter the log stream**: The query syntax is the same as for the [Log Explorer Search](https://docs.datadoghq.com/logs/search_syntax/). Only logs ingested with a timestamp within the past 20 minutes are considered for aggregation. The query cannot filter on an index, because metrics are generated from the entire ingest stream.
1. **Select the field you would like to track**: Select `*` to generate a count of all logs matching your query or enter a log attribute (for example, `@network.bytes_written`) to aggregate a numeric value and create its corresponding `count`, `min`, `max`, `sum`, and `avg` aggregated metrics. If the log attribute facet is a [measure](https://docs.datadoghq.com/logs/explorer/facets/#quantitative-facets-measures), the value of the metric is the value of the log attribute.
1. **Add dimensions to `group by`**: By default, metrics generated from logs do not have any tags unless explicitly added. Any attribute or tag dimension that exists in your logs (for example, `@network.bytes_written`, `env`) can be used to create metric [tags](https://docs.datadoghq.com/getting_started/tagging/). Metric tag names match the originating attribute or tag name, without the `@`.
1. **Add percentile aggregations**: For distribution metrics, you can optionally generate p50, p75, p90, p95, and p99 percentiles. Percentile metrics are also considered custom metrics, and [billed accordingly](https://docs.datadoghq.com/account_management/billing/custom_metrics/?tab=countrategauge).
1. **Name your metric**: Log-based metric names must follow the [custom metric naming convention](https://docs.datadoghq.com/metrics/custom_metrics/#naming-custom-metrics).

**Note**: Data points for log-based metrics are generated at 10-second intervals. When you create a [dashboard graph](https://docs.datadoghq.com/dashboards/querying/) for log-based metrics, the `count unique` parameter is based on the values within the 10-second interval.

{% image
   source="https://datadog-docs.imgix.net/images/logs/processing/logs_to_metrics/count_unique.3c9d601604ba7e9ea291c6d935270b35.png?auto=format"
   alt="The timeseries graph configuration page with the count unique query parameter highlighted" /%}

{% alert level="danger" %}
Log-based metrics are considered [custom metrics](https://docs.datadoghq.com/metrics/custom_metrics/) and billed accordingly. Do not group by unbounded or extremely high-cardinality attributes such as timestamps, user IDs, request IDs, or session IDs, as this can significantly increase your billing.
{% /alert %}
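
Beyond the UI, log-based metrics can also be created programmatically through Datadog's Logs Metrics API (`POST /api/v2/logs/metrics`). The sketch below builds the request payload for a hypothetical distribution metric over `@network.bytes_written`, filtered and grouped as in the steps above; the metric name and filter query are illustrative, and the payload shape assumes the v2 API schema.

```python
# Sketch: build the JSON:API payload for a log-based metric,
# as accepted by POST /api/v2/logs/metrics (Datadog API v2).
# Metric name, query, and grouping below are illustrative.
import json


def build_logs_metric_payload(metric_name, query, path=None,
                              group_by=None, include_percentiles=False):
    """Build the payload for a log-based metric.

    If `path` is None, a count of matching logs is generated;
    otherwise a distribution over the numeric attribute at `path`.
    """
    compute = (
        {"aggregation_type": "count"}
        if path is None
        else {"aggregation_type": "distribution",
              "path": path,
              "include_percentiles": include_percentiles}
    )
    attributes = {"compute": compute, "filter": {"query": query}}
    if group_by:
        # Metric tag names mirror the source attribute, without the "@".
        attributes["group_by"] = [
            {"path": p, "tag_name": p.lstrip("@")} for p in group_by
        ]
    return {"data": {"id": metric_name,
                     "type": "logs_metrics",
                     "attributes": attributes}}


payload = build_logs_metric_payload(
    metric_name="logs.network.bytes_written",  # hypothetical metric name
    query="service:web env:prod",              # hypothetical filter
    path="@network.bytes_written",
    group_by=["env"],
    include_percentiles=True,
)
print(json.dumps(payload, indent=2))
```

Send the resulting payload in an authenticated `POST` request (with `DD-API-KEY` and `DD-APPLICATION-KEY` headers) as described in Datadog's API reference.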

### Update a log-based metric{% #update-a-log-based-metric %}

After a metric is created, the following fields can be updated:

- Stream filter query: To change the set of matching logs to be aggregated into metrics
- Aggregation groups: To update the tags or manage the cardinality of the generated metrics
- Percentile selection: Check or uncheck the **Calculate percentiles** box to remove or generate percentile metrics

To change the metric type or name, a new metric must be created.
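
The updatable fields above map onto a `PATCH` request against the Logs Metrics API (`PATCH /api/v2/logs/metrics/{metric_id}`). A minimal sketch of such a payload, assuming the v2 API schema and a hypothetical existing metric:

```python
# Sketch: payload to update a log-based metric's mutable fields
# (filter query, grouping, percentile selection). The metric's
# name/id and type cannot be changed after creation.
import json

update_payload = {
    "data": {
        "id": "logs.network.bytes_written",  # hypothetical existing metric
        "type": "logs_metrics",
        "attributes": {
            "filter": {"query": "service:web env:staging"},
            "compute": {"include_percentiles": False},
        },
    }
}
print(json.dumps(update_payload, indent=2))
```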

## Logs usage metrics{% #logs-usage-metrics %}

{% image
   source="https://datadog-docs.imgix.net/images/logs/processing/logs_to_metrics/estimated_usage_metrics.cd6886c77d24cee568de34fa4bfab7cb.png?auto=format"
   alt="Recommended Usage Metrics" /%}

Usage metrics are estimates of your current Datadog usage in near real-time. They enable you to:

- Graph your estimated usage.
- Create monitors around your estimated usage.
- Get instant alerts about spikes or drops in your usage.
- Assess the potential impact of code changes on your usage in near real-time.

Log Management usage metrics come with three tags that can be used for more granular monitoring:

| Tag                   | Description                                                          |
| --------------------- | -------------------------------------------------------------------- |
| `datadog_index`       | Indicates the routing query that matches a log to an intended index. |
| `datadog_is_excluded` | Indicates whether or not a log matches an exclusion query.           |
| `service`             | The service attribute of the log event.                              |

**Note**: The `datadog_is_excluded` and `datadog_index` tags can have a value of `N/A`. This indicates that a log was ingested but didn't match any inclusion or exclusion criteria to be explicitly routed to an index.

An extra `status` tag is available on the `datadog.estimated_usage.logs.ingested_events` metric to reflect the log status (`info`, `warning`, etc.).
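
These tags can be combined in standard metric queries. As an illustrative sketch, the following query graphs indexed (non-excluded) log volume broken down by index:

```
sum:datadog.estimated_usage.logs.ingested_events{datadog_is_excluded:false} by {datadog_index}
```

The same query can back a monitor to alert on unexpected spikes in indexed log volume.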

## Further Reading{% #further-reading %}

- [Learn how to process your logs](https://docs.datadoghq.com/logs/log_configuration/processors)
- [Use CIDR notation queries to filter your network traffic logs](https://www.datadoghq.com/blog/cidr-queries-datadog-log-management/)
\*Logging without Limits is a trademark of Datadog, Inc.

