---
title: Algorithmia
description: Monitor metrics for machine learning models in production
breadcrumbs: Docs > Integrations > Algorithmia
---

# Algorithmia
## Overview{% #overview %}

[Algorithmia](https://algorithmia.com/) is an MLOps platform that includes capabilities for data scientists, application developers, and IT operators to deploy, manage, govern, and secure machine learning and other probabilistic models in production.



Algorithmia Insights is a feature of Algorithmia Enterprise that provides a metrics pipeline you can use to instrument, measure, and monitor your machine learning models. Use cases for monitoring inference-related metrics from machine learning models include detecting model drift, data drift, and model bias.

This integration allows you to stream operational metrics, as well as user-defined, inference-related metrics, from Algorithmia through Kafka to the Datadog metrics API.

## Setup{% #setup %}

1. From your Algorithmia instance, configure Algorithmia Insights to connect to a Kafka broker (external to Algorithmia).

1. See the [Algorithmia Integrations repository](https://github.com/algorithmiaio/integrations) to install, configure, and start the Datadog message forwarding service used in this integration, which forwards metrics from a Kafka topic to the metrics API in Datadog.
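The forwarding service's core step is submitting a metrics payload to the Datadog v1 series endpoint. The following is a minimal sketch of building that HTTP request; the `build_submission` helper and the example key are illustrative, not part of the actual forwarding script (the `DD-API-KEY` header and `/api/v1/series` endpoint are real Datadog API conventions):

```python
import json

# Datadog v1 endpoint for submitting timeseries points.
DATADOG_URL = "https://api.datadoghq.com/api/v1/series"

def build_submission(api_key: str, series: list) -> tuple:
    """Return (url, headers, body) for a Datadog metrics submission."""
    headers = {
        "Content-Type": "application/json",
        "DD-API-KEY": api_key,  # API key is passed as a request header
    }
    body = json.dumps({"series": series})
    return DATADOG_URL, headers, body

# Example payload carrying the default operational metric from Insights.
url, headers, body = build_submission(
    "example-key",
    [{"metric": "algorithmia.duration_milliseconds",
      "type": "gauge",
      "points": [[1700000000, 12.3]]}],
)
```

In the actual service, the `series` list would be built from messages consumed off the configured Kafka topic, and the request sent with an HTTP client such as `requests`.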

### Validation{% #validation %}

1. From Algorithmia, query an algorithm that has Insights enabled.
1. In the Datadog interface, navigate to the **Metrics** summary page.
1. Verify that the metrics from Insights are present in Datadog by filtering for `algorithmia`.

### Streaming metrics{% #streaming-metrics %}

This integration streams metrics from Algorithmia when a model that has Insights enabled is queried. Each log entry includes operational metrics and inference-related metrics.

The `duration_milliseconds` metric is one of the operational metrics that is included in the default payload from Algorithmia. To help you get started, this integration also includes a dashboard and monitor for this default metric.

Additional metrics can include any user-defined, inference-related metrics that are specified in Algorithmia by the algorithm developer. User-defined metrics depend on your specific machine learning framework and use case, but might include values such as prediction probabilities from a regression model in scikit-learn, confidence scores from an image classifier in TensorFlow, or input data from incoming API requests. **Note**: The message forwarding script provided in this integration prefixes user-defined metrics with `algorithmia.` in Datadog.
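The prefixing behavior described above can be sketched as a small transformation over an Insights record. This is a hypothetical illustration, not the forwarding script itself: the field names in the example record are invented, and the rule of "keep numeric fields, prefix with `algorithmia.`" is an assumption about how the script distinguishes metrics from other payload fields:

```python
def prefix_user_metrics(record: dict, prefix: str = "algorithmia.") -> dict:
    """Extract numeric fields from an Insights record and prefix them
    so they appear under the `algorithmia.` namespace in Datadog."""
    metrics = {}
    for key, value in record.items():
        # Skip non-numeric fields (names, IDs) and booleans.
        if isinstance(value, bool) or not isinstance(value, (int, float)):
            continue
        metrics[prefix + key] = float(value)
    return metrics

# Hypothetical record: one operational metric plus one user-defined metric.
metrics = prefix_user_metrics({"algorithm_name": "img_classifier",
                               "confidence": 0.97,
                               "duration_milliseconds": 12.3})
# metrics → {"algorithmia.confidence": 0.97,
#            "algorithmia.duration_milliseconds": 12.3}
```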

## Data Collected{% #data-collected %}

### Metrics{% #metrics %}

| Metric | Description |
| --- | --- |
| **algorithmia.duration\_milliseconds** (gauge) | Duration of algorithm run. *Shown as millisecond* |

### Service Checks{% #service-checks %}

The Algorithmia check does not include any service checks.

### Events{% #events %}

The Algorithmia check does not include any events.

## Troubleshooting{% #troubleshooting %}

Need help? Contact [Algorithmia support](https://algorithmia.com/contact).
