---
title: Sidekiq
description: Track metrics about your Sidekiq jobs, queues, and batches.
breadcrumbs: Docs > Integrations > Sidekiq
---

# Sidekiq
**Integration version:** 3.2.0
## Overview{% #overview %}

This integration monitors [Sidekiq](https://sidekiq.org/) through [DogStatsD](https://docs.datadoghq.com/developers/dogstatsd/). It collects metrics through [Datadog's DogStatsD Ruby client](https://github.com/DataDog/dogstatsd-ruby).

**Note**: Metric collection requires Sidekiq Pro (>= 3.6) or Sidekiq Enterprise (>= 1.1.0).

**Minimum Agent version:** 7.19.0

## Setup{% #setup %}

### Installation{% #installation %}

The Sidekiq integration is packaged with the [Datadog Agent](https://app.datadoghq.com/account/settings/agent/latest). No additional installation is needed on your server.

### Configuration{% #configuration %}

1. Install the `dogstatsd-ruby` [gem](https://github.com/DataDog/dogstatsd-ruby):

   ```shell
   gem install dogstatsd-ruby
   ```

1. Enable Sidekiq Pro metric collection by including this in your initializer; for a containerized deployment, update `localhost` to your Agent container address:

   ```ruby
   require 'datadog/statsd' # gem 'dogstatsd-ruby'

   Sidekiq::Pro.dogstatsd = -> { Datadog::Statsd.new('localhost', 8125, namespace: 'sidekiq') }

   Sidekiq.configure_server do |config|
     config.server_middleware do |chain|
       require 'sidekiq/middleware/server/statsd'
       chain.add Sidekiq::Middleware::Server::Statsd
     end
   end
   ```
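For containerized deployments, the Agent address can be resolved from the environment instead of hard-coding `localhost`. A minimal sketch, assuming your orchestrator injects the conventional `DD_AGENT_HOST` and `DD_DOGSTATSD_PORT` variables (the helper name is illustrative):

```ruby
# Resolve the DogStatsD endpoint from the environment, falling back to a
# local Agent. DD_AGENT_HOST / DD_DOGSTATSD_PORT are conventions commonly set
# by the Datadog Agent's container manifests; treat them as assumptions here.
def dogstatsd_endpoint(env = ENV)
  host = env.fetch('DD_AGENT_HOST', 'localhost')
  port = Integer(env.fetch('DD_DOGSTATSD_PORT', '8125'))
  [host, port]
end

host, port = dogstatsd_endpoint
# In the initializer above, this pair would feed:
#   Datadog::Statsd.new(host, port, namespace: 'sidekiq')
```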

If you are using Sidekiq Enterprise and would like to collect historical metrics, include this line as well:

   ```ruby
   Sidekiq.configure_server do |config|
     # history is captured every 30 seconds by default
     config.retain_history(30)
   end
   ```

See the Sidekiq [Pro](https://github.com/mperham/sidekiq/wiki/Pro-Metrics) and [Enterprise](https://github.com/mperham/sidekiq/wiki/Ent-Historical-Metrics) documentation for more information, and the [Dogstatsd Ruby](https://github.com/DataDog/dogstatsd-ruby) documentation for further configuration options.

1. Update the [Datadog Agent main configuration file](https://docs.datadoghq.com/agent/guide/agent-configuration-files/) `datadog.yaml` by adding the following configuration:

   ```yaml
   # dogstatsd_mapper_cache_size: 1000  # defaults to 1000
   dogstatsd_mapper_profiles:
     - name: sidekiq
       prefix: "sidekiq."
       mappings:
         - match: 'sidekiq\.sidekiq\.(.*)'
           match_type: "regex"
           name: "sidekiq.$1"
         - match: 'sidekiq\.jobs\.(.*)\.perform'
           name: "sidekiq.jobs.perform"
           match_type: "regex"
           tags:
             worker: "$1"
         - match: 'sidekiq\.jobs\.(.*)\.(count|success|failure)'
           name: "sidekiq.jobs.worker.$2"
           match_type: "regex"
           tags:
             worker: "$1"
   ```

These parameters can also be set by adding the `DD_DOGSTATSD_MAPPER_PROFILES` environment variable to the Datadog Agent.
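The mapping rules above can be sanity-checked with plain Ruby, since rules with `match_type: regex` are ordinary regular expressions. Here the second rule is reproduced; `MyWorker` is a hypothetical worker class used only for illustration:

```ruby
# Reproduce the `sidekiq\.jobs\.(.*)\.perform` mapping rule from
# dogstatsd_mapper_profiles: the worker class name is captured out of the raw
# metric name and moved into a `worker` tag.
RULE = /\Asidekiq\.jobs\.(.*)\.perform\z/

def map_metric(raw_name)
  match = RULE.match(raw_name)
  return nil unless match

  { name: 'sidekiq.jobs.perform', tags: ["worker:#{match[1]}"] }
end

p map_metric('sidekiq.jobs.MyWorker.perform')
```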

1. [Restart the Agent](https://docs.datadoghq.com/agent/guide/agent-commands/#start-stop-and-restart-the-agent).

## Data Collected{% #data-collected %}

### Metrics{% #metrics %}

| Metric | Description |
| --- | --- |
| **sidekiq.batches.complete** (count) | Count of when a batch is completed |
| **sidekiq.batches.created** (count) | Count of when a batch is created |
| **sidekiq.batches.success** (count) | Count of when a batch is successful |
| **sidekiq.busy** (count) | Total busy size (Enterprise only). Shown as job |
| **sidekiq.dead** (count) | Total dead size (Enterprise only). Shown as job |
| **sidekiq.enqueued** (count) | Total size of all known queues (Enterprise only). Shown as job |
| **sidekiq.failed** (count) | Number of job executions which raised an error (Enterprise only). Shown as job |
| **sidekiq.jobs.count** (count) | Total count of Sidekiq jobs. Shown as job |
| **sidekiq.jobs.expired** (count) | Count of when a job is expired. Shown as job |
| **sidekiq.jobs.failure** (count) | Total count of failed Sidekiq jobs. Shown as job |
| **sidekiq.jobs.perform.95percentile** (gauge) | 95th percentile of amount of time spent in a worker. Shown as millisecond |
| **sidekiq.jobs.perform.avg** (gauge) | Average amount of time spent in a worker. Shown as millisecond |
| **sidekiq.jobs.perform.count** (count) | The number of times that the amount of time spent in a worker was measured |
| **sidekiq.jobs.perform.max** (gauge) | Max amount of time spent in a worker. Shown as millisecond |
| **sidekiq.jobs.perform.median** (gauge) | Median amount of time spent in a worker. Shown as millisecond |
| **sidekiq.jobs.recovered.fetch** (count) | Count of when a job is recovered by super_fetch after a process crash. Shown as job |
| **sidekiq.jobs.recovered.push** (count) | Count of when a job is recovered by reliable_push after a network outage. Shown as job |
| **sidekiq.jobs.success** (count) | Total count of successful Sidekiq jobs. Shown as job |
| **sidekiq.jobs.worker.count** (count) | Count of Sidekiq jobs. Shown as job |
| **sidekiq.jobs.worker.failure** (count) | Count of failed Sidekiq jobs. Shown as job |
| **sidekiq.jobs.worker.success** (count) | Count of successful Sidekiq jobs. Shown as job |
| **sidekiq.processed** (count) | Number of job executions completed (success or failure) (Enterprise only). Shown as job |
| **sidekiq.retries** (count) | Total retries size (Enterprise only). Shown as job |
| **sidekiq.scheduled** (count) | Total scheduled size (Enterprise only). Shown as job |

The Sidekiq integration also allows custom metrics; see [Sidekiq Enterprise Historical Metrics](https://github.com/mperham/sidekiq/wiki/Ent-Historical-Metrics#custom).
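As a sketch of what a custom metric hook can look like, the server middleware below times each job with a `Datadog::Statsd`-compatible client. The class name and metric name are hypothetical, not part of the integration:

```ruby
# Illustrative Sidekiq server middleware emitting a custom timing metric.
# `statsd` can be any object with a Datadog::Statsd-compatible #timing method.
class JobTimingMiddleware
  def initialize(statsd)
    @statsd = statsd
  end

  # Sidekiq server middleware contract: call(worker, job, queue) and yield
  # to run the job itself.
  def call(worker, _job, queue)
    started = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    yield
  ensure
    elapsed_ms = (Process.clock_gettime(Process::CLOCK_MONOTONIC) - started) * 1000.0
    @statsd.timing('jobs.custom.duration', elapsed_ms,
                   tags: ["queue:#{queue}", "worker:#{worker.class.name}"])
  end
end
```

It would be registered in the initializer with `chain.add JobTimingMiddleware, statsd` alongside the bundled Statsd middleware.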

### Log collection{% #log-collection %}

1. Collecting logs is disabled by default in the Datadog Agent. Enable it in the `datadog.yaml` file with:

   ```yaml
   logs_enabled: true
   ```

1. Add this configuration block to your `sidekiq.d/conf.yaml` file to start collecting your Sidekiq logs:

   ```yaml
   logs:
     - type: file
       path: /var/log/sidekiq.log
       source: sidekiq
       service: <SERVICE>
   ```

Change the `path` and `service` parameter values to match your environment. If you cannot find your logs, see [Sidekiq Logging](https://github.com/mperham/sidekiq/wiki/Logging#log-file).
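Sidekiq logs to stdout by default, so the file at `path` has to be produced by your deployment, for example by pointing a standard `Logger` at that path and assigning it to Sidekiq (e.g. `Sidekiq.logger = ...` in the initializer). A stdlib-only sketch, with `StringIO` standing in for the real log file:

```ruby
require 'logger'
require 'stringio'

# Stand-in for Logger.new('/var/log/sidekiq.log'); in a real initializer the
# Logger instance would be assigned to Sidekiq's logger.
device = StringIO.new
logger = Logger.new(device)
logger.info('Sidekiq booted')

puts device.string
```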

1. [Restart the Agent](https://docs.datadoghq.com/agent/guide/agent-commands/#start-stop-and-restart-the-agent).

### Service Checks{% #service-checks %}

Sidekiq does not include any service checks.

### Events{% #events %}

Sidekiq does not include any events.

## Troubleshooting{% #troubleshooting %}

Need help? Contact [Datadog support](https://docs.datadoghq.com/help/).
