---
title: Generate Log-based Metrics Processor
description: Generate count, gauge, or distribution metrics from your logs with the Observability Pipelines generate log-based metrics processor.
breadcrumbs: >-
  Docs > Observability Pipelines > Processors > Generate Log-based Metrics
  Processor
---

# Generate Log-based Metrics Processor

{% callout %}
# Important note for users on the following Datadog sites: app.ddog-gov.com

{% alert level="danger" %}
This product is not supported for your selected [Datadog site](https://docs.datadoghq.com/getting_started/site).
{% /alert %}

{% /callout %}
Available for: {% icon name="icon-logs" /%} Logs

## Overview{% #overview %}

Many types of logs are meant to be used for telemetry to track trends, such as KPIs, over long periods of time. Generating metrics from your logs is a cost-effective way to summarize log data from high-volume logs, such as CDN logs, VPC flow logs, firewall logs, and network logs. Use the generate metrics processor to generate either a count metric of logs that match a query or a distribution metric of a numeric value contained in the logs, such as a request duration.

**Note**: The metrics generated are [custom metrics](https://docs.datadoghq.com/metrics/custom_metrics/) and billed accordingly. See [Custom Metrics Billing](https://docs.datadoghq.com/account_management/billing/custom_metrics/) for more information.

## Setup{% #setup %}

To set up the processor:

Click **Manage Metrics** to create new metrics or edit existing metrics. This opens a side panel.

- If you have not created any metrics yet, enter the metric parameters as described in the Add a metric section to create a metric.
- If you have already created metrics, click on the metric's row in the overview table to edit or delete it. Use the search bar to find a specific metric by its name, and then select the metric to edit or delete it. Click **Add Metric** to add another metric.

##### Add a metric{% #add-a-metric %}

1. Enter a filter query. Only logs that match the specified filter query are processed. All logs, regardless of whether they match the filter query, are sent to the next step in the pipeline. See [Search Syntax](https://docs.datadoghq.com/observability_pipelines/search_syntax/logs/) for more information. **Note**: Since a single processor can generate multiple metrics, you can define a different filter query for each metric.
1. Enter a name for the metric.
1. In the **Define parameters** section, select the metric type (count, gauge, or distribution). See the [Count metric example](#count-metric-example), the [Distribution metric example](#distribution-metric-example), and [Metrics types](#metrics-types) for more information.
   - For gauge and distribution metric types, select a log field with a numeric value (or a string that parses to a number); that value is used as the value of the generated metric.
   - For the distribution metric type, the log field's value can also be an array of numeric values (or parseable numeric strings), which is used as the generated metric's sample set.
   - The **Group by** field determines how the metric values are grouped together. For example, if you have hundreds of hosts spread across four regions, grouping by region allows you to graph one line for every region. The fields listed in the **Group by** setting are set as tags on the configured metric.
1. Click **Add Metric**.
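The filter behavior in step 1 can be sketched as follows. This is an illustrative Python model only (not Datadog's implementation): logs that match the filter query contribute to the metric, while every log, matching or not, continues to the next pipeline step.

```python
# Illustrative model: matching logs feed the generated metric,
# and all logs continue down the pipeline unchanged.
def process(logs, matches_filter, emit_metric):
    for log in logs:
        if matches_filter(log):
            emit_metric(log)  # contributes to the generated metric
        yield log             # every log continues to the next step

logs = [
    {"status": "error", "env": "prod"},
    {"status": "ok", "env": "prod"},
]
counted = []
passed = list(process(logs, lambda l: l.get("status") == "error", counted.append))
print(len(passed), len(counted))  # 2 1
```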

##### Metrics types{% #metrics-types %}

You can generate these types of metrics for your logs. See the [Metrics types](https://docs.datadoghq.com/metrics/types/) and [Distributions](https://docs.datadoghq.com/metrics/distributions/) documentation for more details.

| Metric type  | Description                                                                                                                                     | Example                                                                                             |
| ------------ | ----------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------- |
| COUNT        | Represents the total number of event occurrences in one time interval. This value can be reset to zero, but cannot be decreased.                | You want to count the number of logs with `status:error`.                                           |
| GAUGE        | Represents a snapshot of events in one time interval.                                                                                           | You want to measure the latest CPU utilization per host for all logs in the production environment. |
| DISTRIBUTION | Represents the global statistical distribution of a set of values calculated across your entire distributed infrastructure in one time interval. | You want to measure the average time it takes for an API call to be made.                           |
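As a sketch of the gauge row above (field names such as `cpu_utilization` are illustrative, not Datadog's), a gauge keeps only the latest value seen for each tag combination within a time interval:

```python
# Hedged sketch of gauge semantics: within one interval, the latest
# value per group (here, host) overwrites any earlier reading.
def gauge_snapshot(logs, value_field, group_field):
    latest = {}
    for log in logs:
        latest[log[group_field]] = log[value_field]  # later values win
    return latest

logs = [
    {"host": "web-1", "cpu_utilization": 40.0},
    {"host": "web-2", "cpu_utilization": 55.0},
    {"host": "web-1", "cpu_utilization": 62.5},  # newer reading for web-1
]
print(gauge_snapshot(logs, "cpu_utilization", "host"))
# {'web-1': 62.5, 'web-2': 55.0}
```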

##### Count metric example{% #count-metric-example %}

For this `status:error` log example:

```
{"status": "error", "env": "prod", "host": "ip-172-25-222-111.ec2.internal"}
```

To create a count metric that counts the number of logs that contain `"status":"error"` and groups them by `env` and `host`, enter the following information:

| Input parameters | Value                |
| ---------------- | -------------------- |
| Filter query     | `@status:error`      |
| Metric name      | `status_error_total` |
| Metric type      | Count                |
| Group by         | `env`, `host`        |
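A minimal Python sketch of this count aggregation (illustrative only, not how Observability Pipelines is implemented): logs with `status:error` are counted per `env`/`host` tag combination.

```python
from collections import Counter

# Sample logs shaped like the example above.
logs = [
    {"status": "error", "env": "prod", "host": "ip-172-25-222-111.ec2.internal"},
    {"status": "error", "env": "prod", "host": "ip-172-25-222-112.ec2.internal"},
    {"status": "error", "env": "prod", "host": "ip-172-25-222-111.ec2.internal"},
    {"status": "ok",    "env": "prod", "host": "ip-172-25-222-111.ec2.internal"},
]

# Count only matching logs, keyed by the group-by fields (tags).
status_error_total = Counter(
    (log["env"], log["host"]) for log in logs if log["status"] == "error"
)
print(status_error_total[("prod", "ip-172-25-222-111.ec2.internal")])  # 2
```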

##### Distribution metric example{% #distribution-metric-example %}

For this example of an API response log:

```
{
    "timestamp": "2018-10-15T17:01:33Z",
    "method": "GET",
    "status": 200,
    "request_body": "{"information"}",
    "response_time_seconds: 10
}
```

To create a distribution metric that measures the average time it takes for an API call to be made, enter the following information:

| Input parameters       | Value                   |
| ---------------------- | ----------------------- |
| Filter query           | `@method`               |
| Metric name            | `status_200_response`   |
| Metric type            | Distribution            |
| Select a log attribute | `response_time_seconds` |
| Group by               | `method`                |
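The same aggregation can be sketched in Python (illustrative only): matching logs contribute their `response_time_seconds` value to a per-`method` sample set, which is then summarized.

```python
from collections import defaultdict
from statistics import mean

logs = [
    {"method": "GET",  "response_time_seconds": 10},
    {"method": "GET",  "response_time_seconds": 4},
    {"method": "POST", "response_time_seconds": 7},
]

# Collect every matching value into a sample set per group-by tag.
samples = defaultdict(list)
for log in logs:
    samples[log["method"]].append(log["response_time_seconds"])

# Summarize each sample set; a real distribution also supports
# percentiles, min/max, and counts over the same samples.
averages = {method: mean(values) for method, values in samples.items()}
print(averages)  # {'GET': 7, 'POST': 7}
```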
