---
title: Test Impact Analysis for Python
description: Datadog, the leading service for cloud-scale monitoring.
breadcrumbs: >-
  Docs > Test Optimization in Datadog > Test Impact Analysis > Configure Test
  Impact Analysis > Test Impact Analysis for Python
---

# Test Impact Analysis for Python

{% callout %}
# Important note for users on the following Datadog sites: app.ddog-gov.com

{% alert level="danger" %}
This product is not supported for your selected [Datadog site](https://docs.datadoghq.com/getting_started/site).
{% /alert %}

{% /callout %}

## Compatibility{% #compatibility %}

Test Impact Analysis is only supported in the following versions and testing frameworks:

- `pytest>=7.2.0`
  - From `ddtrace>=2.1.0`.
  - From `Python>=3.7`.
  - Requires `coverage>=5.5`.
  - Incompatible with `pytest-cov` (see known limitations).
- `unittest`
  - From `ddtrace>=2.2.0`.
  - From `Python>=3.7`.
- `coverage`
  - Incompatible with coverage collection while Test Impact Analysis is enabled (see known limitations).

## Setup{% #setup %}

### Test Optimization{% #test-optimization %}

Prior to setting up Test Impact Analysis, set up [Test Optimization for Python](https://docs.datadoghq.com/continuous_integration/tests/python). If you are reporting data through the Agent, use Datadog Agent v6.40+ or v7.40+.

### Activate Test Impact Analysis for the test service{% #activate-test-impact-analysis-for-the-test-service %}

You, or a user in your organization with the **Intelligent Test Runner Activation** (`intelligent_test_runner_activation_write`) permission, must activate Test Impact Analysis on the [Test Service Settings](https://app.datadoghq.com/ci/settings/test-service) page.

### Required dependencies{% #required-dependencies %}

Test Impact Analysis requires the [`coverage` package](https://pypi.org/project/coverage/).

Install the package in your CI test environment by adding it to the relevant requirements file, or by installing it with `pip`:

```shell
pip install coverage
```
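If you pin dependencies in a requirements file instead, the equivalent entry might look like the following (a sketch; the file name is hypothetical, and the version pin follows the compatibility list above):

```text
# requirements-test.txt (hypothetical file name)
coverage>=5.5
```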

See known limitations if you are already using the `coverage` package or a plugin like `pytest-cov`.

## Running tests with Test Impact Analysis enabled{% #running-tests-with-test-impact-analysis-enabled %}

Test Impact Analysis is enabled when you run tests with the Datadog integration active. Run your tests with the following command:

{% tab title="Pytest" %}

```shell
DD_ENV=ci DD_SERVICE=my-python-app pytest --ddtrace
```

{% /tab %}

{% tab title="Unittest" %}

```shell
DD_ENV=ci DD_SERVICE=my-python-app ddtrace-run python -m unittest
```

{% /tab %}

### Temporarily disabling Test Impact Analysis{% #temporarily-disabling-test-impact-analysis %}

Test Impact Analysis can be disabled locally by setting the `DD_CIVISIBILITY_ITR_ENABLED` environment variable to `false` or `0`.

{% dl %}

{% dt %}
`DD_CIVISIBILITY_ITR_ENABLED` (Optional)
{% /dt %}

{% dd %}
Enables the Test Impact Analysis coverage collection and test skipping features.

**Default**: `true`
{% /dd %}

{% /dl %}

Run the following command to disable Test Impact Analysis:

{% tab title="Pytest" %}

```shell
DD_ENV=ci DD_SERVICE=my-python-app DD_CIVISIBILITY_ITR_ENABLED=false pytest --ddtrace
```

{% /tab %}

{% tab title="Unittest" %}

```shell
DD_ENV=ci DD_SERVICE=my-python-app DD_CIVISIBILITY_ITR_ENABLED=false ddtrace-run python -m unittest
```

{% /tab %}

## Disabling skipping for specific tests{% #disabling-skipping-for-specific-tests %}

You can override Test Impact Analysis's behavior and prevent specific tests from being skipped. These tests are referred to as unskippable tests.

### Why make tests unskippable?{% #why-make-tests-unskippable %}

Test Impact Analysis uses code coverage data to determine whether tests should be skipped. In some cases, this data may not be sufficient to make this determination.

Examples include:

- Tests that read data from text files
- Tests that interact with APIs outside of the code being tested (such as remote REST APIs)

Designating tests as unskippable ensures that Test Impact Analysis runs them regardless of coverage data.

{% tab title="Pytest" %}
### Compatibility{% #compatibility %}

Unskippable tests are supported in the following versions:

- `pytest`
  - From `ddtrace>=1.19.0`.

### Marking tests as unskippable{% #marking-tests-as-unskippable %}

You can use [`pytest`](https://pytest.org/)'s [`skipif` mark](https://docs.pytest.org/en/latest/reference/reference.html#pytest-mark-skipif-ref) to prevent Test Impact Analysis from skipping individual tests or modules. Specify the `condition` as `False`, and the `reason` as `"datadog_itr_unskippable"`.

#### Individual tests{% #individual-tests %}

Individual tests can be marked as unskippable using the `@pytest.mark.skipif` decorator as follows:

```python
import pytest

@pytest.mark.skipif(False, reason="datadog_itr_unskippable")
def test_function():
    assert True
```

#### Modules{% #modules %}

Modules can be skipped using the [`pytestmark` global variable](https://docs.pytest.org/en/latest/reference/reference.html#globalvar-pytestmark) as follows:

```python
import pytest

pytestmark = pytest.mark.skipif(False, reason="datadog_itr_unskippable")

def test_function():
    assert True
```

**Note**: This does not override any other `skip` marks, or `skipif` marks that have a `condition` evaluating to `True`.
{% /tab %}

{% tab title="Unittest" %}
### Compatibility{% #compatibility %}

Unskippable tests are supported in the following versions:

- `unittest`
  - From `ddtrace>=2.2.0`.

### Marking tests as unskippable in `unittest`{% #marking-tests-as-unskippable-in-unittest %}

You can use [`unittest`](https://docs.python.org/3/library/unittest.html)'s [`skipIf` decorator](https://docs.python.org/3/library/unittest.html#unittest.skipIf) to prevent Test Impact Analysis from skipping individual tests. Specify the `condition` as `False`, and the `reason` as `"datadog_itr_unskippable"`.

#### Individual tests{% #individual-tests %}

Individual tests can be marked as unskippable using the `@unittest.skipIf` decorator as follows:

```python
import unittest

class MyTestCase(unittest.TestCase):
    @unittest.skipIf(False, reason="datadog_itr_unskippable")
    def test_function(self):
        assert True
```

Using `@unittest.skipIf` does not override any other `skip` decorators, or `skipIf` decorators that have a `condition` evaluating to `True`.
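As a runnable illustration of this precedence (standard library only; the class name and skip condition are made up):

```python
import unittest

class PrecedenceExample(unittest.TestCase):
    # Decorators apply bottom-up: the unskippable mark (condition False)
    # leaves the test unchanged, then the outer skipIf (condition True)
    # skips it anyway.
    @unittest.skipIf(True, "unsupported platform (hypothetical condition)")
    @unittest.skipIf(False, reason="datadog_itr_unskippable")
    def test_function(self):
        assert True

# Run the case programmatically to show the test is reported as skipped.
suite = unittest.TestLoader().loadTestsFromTestCase(PrecedenceExample)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```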
{% /tab %}

## Known limitations{% #known-limitations %}

### Code coverage collection{% #code-coverage-collection %}

#### Interaction with coverage tools{% #interaction-with-coverage-tools %}

Coverage data may appear incomplete when Test Impact Analysis is enabled. Lines of code that would normally be covered by tests are not covered when those tests are skipped.

#### Interaction with the coverage package{% #interaction-with-the-coverage-package %}

Test Impact Analysis uses the [`coverage`](https://pypi.org/project/coverage/) package's API to collect code coverage. Data from `coverage run` or plugins like `pytest-cov` is incomplete as a result of `ddtrace`'s use of the `Coverage` class.

Some race conditions may cause exceptions when using `pytest` plugins such as `pytest-xdist` that change test execution order or introduce parallelization.

## Further reading{% #further-reading %}

- [Explore Test Results and Performance](https://docs.datadoghq.com/continuous_integration/tests)
- [Troubleshooting CI Visibility](https://docs.datadoghq.com/continuous_integration/troubleshooting/)
