---
title: Custom OpenMetrics Check
description: Write a custom Agent check using the OpenMetricsBaseCheckV2 interface.
---

# Custom OpenMetrics Check

## Overview{% #overview %}

This page dives into the `OpenMetricsBaseCheckV2` interface for more advanced usage, including an example of a simple check that collects timing and status metrics from [Kong](https://github.com/DataDog/integrations-core/blob/459e8c12a9c828a0b3faff59df69c2e1f083309c/kong/datadog_checks/kong/check.py). For details on configuring a basic OpenMetrics check, see [Kubernetes Prometheus and OpenMetrics metrics collection](https://docs.datadoghq.com/agent/kubernetes/prometheus/).

**Note**: `OpenMetricsBaseCheckV2` is available in Agent v7.26.x and later, and requires Python 3.

{% alert level="info" %}
If you are looking for the custom check guide for the legacy implementation of the `OpenMetricsBaseCheck` interface, see [Custom Legacy OpenMetrics Check](https://docs.datadoghq.com/extend/faq/legacy-openmetrics/).
{% /alert %}

## Advanced usage: OpenMetrics check interface{% #advanced-usage-openmetrics-check-interface %}

If you have more advanced needs than the generic check, such as metrics preprocessing, you can write a custom `OpenMetricsBaseCheckV2`. It's the [base class](https://github.com/DataDog/integrations-core/tree/master/datadog_checks_base/datadog_checks/base/checks/openmetrics/v2) of the generic check, and it provides a structure and some helpers for collecting metrics, events, and service checks exposed with Prometheus. The minimal configuration for checks based on this class includes:

- Creating a default instance with the `namespace` and `metrics` mapping.
- Implementing the `check()` method, and/or creating a method named after the OpenMetrics metric it handles (see `self.prometheus_metric_name`).

See this [example in the Kong integration](https://github.com/DataDog/integrations-core/blob/459e8c12a9c828a0b3faff59df69c2e1f083309c/kong/datadog_checks/kong/check.py#L22-L45), where the value of the Prometheus metric `kong_upstream_target_health` is used as a service check.
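As a rough, dependency-free illustration of that idea (the function and constants below are made up for this sketch and are not the actual Kong code), a transformer can map a metric sample's value to a service-check status using the Agent's standard status codes:

```python
# Hypothetical sketch: translate an OpenMetrics gauge value into a
# Datadog service-check status. The Agent's status codes are
# OK = 0, WARNING = 1, CRITICAL = 2, UNKNOWN = 3.
OK, CRITICAL = 0, 2

def health_to_status(sample_value):
    """Map a health gauge (1 = healthy) to a service-check status code."""
    return OK if sample_value == 1 else CRITICAL

print([health_to_status(v) for v in (1, 0, 1)])  # [0, 2, 0]
```

In the real integration, this logic lives inside a metric transformer that submits the service check as each sample is parsed.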

## Writing a custom OpenMetrics check{% #writing-a-custom-openmetrics-check %}

This simple example of writing a Kong check illustrates usage of the `OpenMetricsBaseCheckV2` class. The example below replicates the functionality of the following generic OpenMetrics check:

```yaml
instances:
  - openmetrics_endpoint: http://localhost:8001/status/
    namespace: "kong"
    metrics:
      - kong_bandwidth: bandwidth
      - kong_http_consumer_status: http.consumer.status
      - kong_http_status: http.status
      - kong_latency:
          name: latency
          type: counter
      - kong_memory_lua_shared_dict_bytes: memory.lua.shared_dict.bytes
      - kong_memory_lua_shared_dict_total_bytes: memory.lua.shared_dict.total_bytes
      - kong_nginx_http_current_connections: nginx.http.current_connections
      - kong_nginx_stream_current_connections: nginx.stream.current_connections
      - kong_stream_status: stream.status
```
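Conceptually, each entry in the `metrics` mapping renames a raw OpenMetrics sample, and the `namespace` is then prefixed to the result. A plain-Python sketch of that renaming (an illustration of the mapping above, not Agent code):

```python
# Illustration only: how a raw OpenMetrics name becomes the submitted name.
metrics_map = {
    'kong_bandwidth': 'bandwidth',
    'kong_http_status': 'http.status',
}

def datadog_name(raw_name, namespace='kong'):
    # The base check renames via the mapping, then prefixes the namespace.
    return f"{namespace}.{metrics_map[raw_name]}"

print(datadog_name('kong_bandwidth'))  # kong.bandwidth
```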

### Configuration{% #configuration %}

{% alert level="danger" %}
The names of the configuration and check files must match. If your check is called `mycheck.py` your configuration file *must* be named `mycheck.yaml`.
{% /alert %}

Configuration for an OpenMetrics check is almost the same as for a regular [Agent check](https://docs.datadoghq.com/extend/integrations/). The main difference is that you must include the `openmetrics_endpoint` variable in your `check.yaml` file. This example goes into `conf.d/kong.yaml`:

```yaml
init_config:

instances:
    # URL of the Prometheus metrics endpoint
  - openmetrics_endpoint: http://localhost:8001/status/
```

### Writing the check{% #writing-the-check %}

All OpenMetrics checks inherit from the [`OpenMetricsBaseCheckV2` class](https://github.com/DataDog/integrations-core/tree/master/datadog_checks_base/datadog_checks/base/checks/openmetrics/v2/base.py):

```python
from datadog_checks.base import OpenMetricsBaseCheckV2

class KongCheck(OpenMetricsBaseCheckV2):
```

#### Define the integration namespace{% #define-the-integration-namespace %}

The value of `__NAMESPACE__` prefixes all metrics and service checks collected by this integration.

```python
from datadog_checks.base import OpenMetricsBaseCheckV2

class KongCheck(OpenMetricsBaseCheckV2):
    __NAMESPACE__ = "kong"
```

#### Define a metrics mapping{% #define-a-metrics-mapping %}

The [metrics](https://github.com/DataDog/integrations-core/blob/459e8c12a9c828a0b3faff59df69c2e1f083309c/openmetrics/datadog_checks/openmetrics/data/conf.yaml.example#L65-L104) mapping allows you to rename the metric name and override the native metric type.

```python
from datadog_checks.base import OpenMetricsBaseCheckV2

class KongCheck(OpenMetricsBaseCheckV2):
    __NAMESPACE__ = "kong"

    def __init__(self, name, init_config, instances):
        super(KongCheck, self).__init__(name, init_config, instances)

        self.metrics_map = {
            'kong_bandwidth': 'bandwidth',
            'kong_http_consumer_status': 'http.consumer.status',
            'kong_http_status': 'http.status',
            'kong_latency': {
                'name': 'latency',
                'type': 'counter',
            },
            'kong_memory_lua_shared_dict_bytes': 'memory.lua.shared_dict.bytes',
            'kong_memory_lua_shared_dict_total_bytes': 'memory.lua.shared_dict.total_bytes',
            'kong_nginx_http_current_connections': 'nginx.http.current_connections',
            'kong_nginx_stream_current_connections': 'nginx.stream.current_connections',
            'kong_stream_status': 'stream.status',
        }
```

#### Define a default instance{% #define-a-default-instance %}

A default instance is the basic configuration used for the check. The default instance should override `metrics` and `openmetrics_endpoint`. [Override](https://github.com/DataDog/integrations-core/blob/459e8c12a9c828a0b3faff59df69c2e1f083309c/datadog_checks_base/datadog_checks/base/checks/openmetrics/v2/base.py#L86-L87) the `get_default_config` method of `OpenMetricsBaseCheckV2` with your default instance.

```python
from datadog_checks.base import OpenMetricsBaseCheckV2

class KongCheck(OpenMetricsBaseCheckV2):
    __NAMESPACE__ = "kong"

    def __init__(self, name, init_config, instances):
        super(KongCheck, self).__init__(name, init_config, instances)

        self.metrics_map = {
            'kong_bandwidth': 'bandwidth',
            'kong_http_consumer_status': 'http.consumer.status',
            'kong_http_status': 'http.status',
            'kong_latency': {
                'name': 'latency',
                'type': 'counter',
            },
            'kong_memory_lua_shared_dict_bytes': 'memory.lua.shared_dict.bytes',
            'kong_memory_lua_shared_dict_total_bytes': 'memory.lua.shared_dict.total_bytes',
            'kong_nginx_http_current_connections': 'nginx.http.current_connections',
            'kong_nginx_stream_current_connections': 'nginx.stream.current_connections',
            'kong_stream_status': 'stream.status',
        }

    def get_default_config(self):
        return {'metrics': self.metrics_map}
```

#### Implementing the check method{% #implementing-the-check-method %}

If you want to implement additional features, override the `check()` function.

From the `instance`, retrieve `openmetrics_endpoint`, which is the Prometheus or OpenMetrics endpoint to poll metrics from:

```python
def check(self, instance):
    endpoint = instance.get('openmetrics_endpoint')
```

##### Exceptions{% #exceptions %}

If a check cannot run because of improper configuration, a programming error, or because it could not collect any metrics, it should raise a meaningful exception. This exception is logged and is shown in the Agent [status command](https://docs.datadoghq.com/agent/configuration/agent-commands/?tab=agentv6v7#agent-status-and-information) for debugging. For example:

```
$ sudo /etc/init.d/datadog-agent info

  Checks
  ======

    my_custom_check
    ---------------
      - instance #0 [ERROR]: Unable to find openmetrics_endpoint in config file.
      - Collected 0 metrics & 0 events
```

Improve your `check()` method with `ConfigurationError`:

```python
from datadog_checks.base import ConfigurationError

def check(self, instance):
    endpoint = instance.get('openmetrics_endpoint')
    if endpoint is None:
        raise ConfigurationError("Unable to find openmetrics_endpoint in config file.")
```

Then, once the endpoint is validated, delegate to the base class's `check()` method to collect and submit the metrics:

```python
def check(self, instance):
    endpoint = instance.get('openmetrics_endpoint')
    if endpoint is None:
        raise ConfigurationError("Unable to find openmetrics_endpoint in config file.")

    super().check(instance)
```

### Putting it all together{% #putting-it-all-together %}

```python
from datadog_checks.base import ConfigurationError, OpenMetricsBaseCheckV2

class KongCheck(OpenMetricsBaseCheckV2):
    __NAMESPACE__ = "kong"

    def __init__(self, name, init_config, instances):
        super(KongCheck, self).__init__(name, init_config, instances)

        self.metrics_map = {
            'kong_bandwidth': 'bandwidth',
            'kong_http_consumer_status': 'http.consumer.status',
            'kong_http_status': 'http.status',
            'kong_latency': {
                'name': 'latency',
                'type': 'counter',
            },
            'kong_memory_lua_shared_dict_bytes': 'memory.lua.shared_dict.bytes',
            'kong_memory_lua_shared_dict_total_bytes': 'memory.lua.shared_dict.total_bytes',
            'kong_nginx_http_current_connections': 'nginx.http.current_connections',
            'kong_nginx_stream_current_connections': 'nginx.stream.current_connections',
            'kong_stream_status': 'stream.status',
        }

    def get_default_config(self):
        return {'metrics': self.metrics_map}

    def check(self, instance):
        endpoint = instance.get('openmetrics_endpoint')
        if endpoint is None:
            raise ConfigurationError("Unable to find openmetrics_endpoint in config file.")

        super().check(instance)
```

## Going further{% #going-further %}

To read more about Prometheus and OpenMetrics base integrations, see the integrations [developer docs](https://datadoghq.dev/integrations-core/base/openmetrics/).

To see all configuration options available in OpenMetrics, see the [conf.yaml.example](https://github.com/DataDog/integrations-core/blob/master/openmetrics/datadog_checks/openmetrics/data/conf.yaml.example). You can improve your OpenMetrics check by including default values for additional configuration options:

{% dl %}

{% dt %}
`exclude_metrics`
{% /dt %}

{% dd %}
Some metrics are ignored because they are duplicates or introduce high cardinality. Metrics included in this list are silently skipped without an `Unable to handle metric` debug line in the logs. To exclude all metrics except those matching a specific filter, use a negative lookahead regex like `- ^(?!foo).*$`.
{% /dd %}

{% dt %}
`share_labels`
{% /dt %}

{% dd %}
If provided, the `share_labels` mapping allows labels to be shared across multiple metrics. The keys represent the exposed metrics from which to share labels, and the values are mappings that configure the sharing behavior. Each mapping must have at least one of the following keys: `labels`, `match`, or `values`.
{% /dd %}

{% dt %}
`exclude_labels`
{% /dt %}

{% dd %}
`exclude_labels` is an array of labels to exclude. These labels are not added as tags when submitting the metric.
{% /dd %}

{% /dl %}
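The negative-lookahead pattern mentioned under `exclude_metrics` can be sanity-checked in plain Python before adding it to your configuration:

```python
import re

# Exclude every metric except those whose names start with "foo".
pattern = re.compile(r'^(?!foo).*$')

names = ['foo_requests_total', 'bar_latency', 'baz_memory_bytes']
excluded = [n for n in names if pattern.match(n)]
print(excluded)  # ['bar_latency', 'baz_memory_bytes']
```

Metrics matching the pattern are excluded, so the lookahead inverts the filter: only names beginning with `foo` survive.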

## Further Reading{% #further-reading %}

- [Configuring an OpenMetrics Check](https://docs.datadoghq.com/agent/kubernetes/prometheus)
- [Write a Custom Check](https://docs.datadoghq.com/extend/custom_checks/write_agent_check/)
- [Introduction to Agent-based Integrations](https://docs.datadoghq.com/extend/integrations/)
