---
title: Test Impact Analysis
description: Datadog, the leading service for cloud-scale monitoring.
breadcrumbs: Docs > Test Optimization in Datadog > Test Impact Analysis
---

# Test Impact Analysis

{% callout %}
# Important note for users on the app.ddog-gov.com Datadog site

{% alert level="danger" %}
This product is not supported for your selected [Datadog site](https://docs.datadoghq.com/getting_started/site).
{% /alert %}

{% /callout %}

{% alert level="danger" %}
This feature was formerly known as Intelligent Test Runner, and some tags still contain "itr".
{% /alert %}

## Overview{% #overview %}

Test Impact Analysis automatically selects and runs only the tests relevant to a given commit, based on the code being changed. This significantly reduces time spent testing and overall CI costs while maintaining test coverage.

{% image
   source="https://datadog-docs.imgix.net/images/continuous_integration/itr_savings.450d19438e5d38530cb3e9418b9759d9.png?auto=format"
   alt="Test Impact Analysis enabled in a test session showing its time savings." /%}

Test Impact Analysis works by analyzing your test suite to identify the code each test covers. It then cross-references that coverage with the files impacted by a new code change. Datadog uses this information to run a selection of relevant, impacted tests, omitting the ones unaffected by the code change and reducing the overall testing duration. For more details, see [How It Works](https://docs.datadoghq.com/tests/test_impact_analysis/how_it_works/).

By minimizing the number of tests run per commit, Test Impact Analysis reduces how often [flaky tests](https://docs.datadoghq.com/glossary/#flaky-test) disrupt your pipelines. Flaky tests are particularly frustrating when the flakiness is unrelated to the code change being tested. After you enable Test Impact Analysis for your test services, each commit runs only its relevant tests, so flaky tests unrelated to your code change don't arbitrarily break your build.

### Out-of-the-box configuration limitations{% #out-of-the-box-configuration-limitations %}

With the default configuration, there are known situations that can cause Test Impact Analysis to skip tests that should have been run. Specifically, Test Impact Analysis is not able to automatically detect changes in:

- Library dependencies
- Compiler options
- External services
- Data files used in data-driven tests

In these scenarios, Test Impact Analysis might skip impacted tests with the out-of-the-box configuration.

There are several configuration mechanisms that you can use in these scenarios to ensure that no tests are skipped:

- You can mark certain files in your repository as tracked files, which causes all tests to run whenever these files are changed. Dockerfiles, Makefiles, dependency files, and other build configuration files are good candidates for tracked files.
- You can mark certain tests in your source as unskippable to ensure they are always run. This is a good fit for data-driven tests or tests that interact with external systems. For more information, see the [setup page](https://docs.datadoghq.com/tests/test_impact_analysis/setup).
- If you are authoring a risky commit and you'd like to run all tests, add `ITR:NoSkip` (case insensitive) anywhere in your Git commit message.
- If GitHub is your source code management provider, use the `ITR:NoSkip` label (case insensitive) to prevent Test Impact Analysis from skipping tests in pull requests. To use this feature, configure the GitHub App using the [GitHub integration tile](https://docs.datadoghq.com/integrations/github/) with the `Software Delivery: Collect Pull Request Information` feature enabled. This mechanism does not work with tests executed on GitHub Actions workflows triggered by `pull_request` events.
- You can add a list of excluded branches, which disables Test Impact Analysis in those branches.
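As an illustration of the commit message mechanism above, the documented rule is a case-insensitive match for `ITR:NoSkip` anywhere in the message. Here is a minimal sketch of that rule; the helper function and commit messages are hypothetical, not Datadog's implementation:

```python
# Hypothetical helper illustrating the documented rule: a commit opts out
# of test skipping if "ITR:NoSkip" appears anywhere in its message,
# matched case-insensitively.
import re

def runs_full_suite(commit_message: str) -> bool:
    return re.search(r"ITR:NoSkip", commit_message, re.IGNORECASE) is not None

print(runs_full_suite("Migrate build scripts (ITR:NoSkip)"))  # True
print(runs_full_suite("Risky refactor itr:noskip"))           # True
print(runs_full_suite("Routine dependency bump"))             # False
```

Because the match is case insensitive and position independent, `itr:noskip` at the end of a message works just as well as `ITR:NoSkip` at the start.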

## Set up a Datadog library{% #set-up-a-datadog-library %}

Before setting up Test Impact Analysis, you must configure [Test Optimization](https://docs.datadoghq.com/continuous_integration/tests/) for your particular language. If you are reporting data through the Agent, use Agent v6.40+ or v7.40+.

- [.NET](https://docs.datadoghq.com/intelligent_test_runner/setup/dotnet)
- [Java](https://docs.datadoghq.com/intelligent_test_runner/setup/java)
- [JavaScript](https://docs.datadoghq.com/intelligent_test_runner/setup/javascript)
- [Swift](https://docs.datadoghq.com/intelligent_test_runner/setup/swift)
- [Python](https://docs.datadoghq.com/intelligent_test_runner/setup/python)
- [Ruby](https://docs.datadoghq.com/intelligent_test_runner/setup/ruby)
- [Go](https://docs.datadoghq.com/intelligent_test_runner/setup/go)

## Configuration{% #configuration %}

Once you have set up your Datadog library for Test Impact Analysis, configure it from the [Test Service Settings](https://app.datadoghq.com/ci/settings/test-optimization) page. Enabling Test Impact Analysis requires the `Test Optimization Settings Write` permission.

{% image
   source="https://datadog-docs.imgix.net/images/getting_started/intelligent_test_runner/test-impact-analysis-gs-configuration.fde9c54fae1798940e2c5a39ffb623b6.png?auto=format"
   alt="Enable Test Impact Analysis for a test service on the Test Optimization Settings page" /%}

### Git executable{% #git-executable %}

For Test Impact Analysis to work, [Git](https://git-scm.com/) needs to be available in the host running tests.

### Excluded branches{% #excluded-branches %}

Due to the limitations described above, the default branch of your repository is automatically excluded from having Test Impact Analysis enabled. Datadog recommends this configuration to ensure that all of your tests run prior to reaching production.

If there are other branches you want to exclude, add them on the Test Optimization Settings page. The query bar supports using the wildcard character `*` to exclude any branches that match, such as `release_*`.
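The wildcard behavior can be pictured with ordinary glob semantics. The sketch below uses Python's `fnmatch` as a stand-in, with hypothetical branch names; Datadog's actual matcher may differ in edge cases:

```python
# Stand-in illustration of branch-exclusion wildcards using fnmatch;
# the branch names are hypothetical.
from fnmatch import fnmatch

branches = ["main", "release_1.2", "release_2.0", "feature/login"]

# "release_*" excludes every branch whose name starts with "release_".
excluded = [b for b in branches if fnmatch(b, "release_*")]
print(excluded)  # ['release_1.2', 'release_2.0']
```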

Excluded branches collect per-test code coverage, which adds to the total testing time. Datadog mitigates this impact by collecting code coverage only when it detects that doing so would generate enough new coverage information to offset the cost of collection. You can check whether a test session has code coverage enabled by looking at the `@test.code_coverage.enabled` field.

### Tracked files{% #tracked-files %}

Tracked files are non-code files that can potentially impact your tests. Changes in tracked files could make your tests fail or change the code coverage of your tests. Examples of files that are good candidates to add as tracked files are:

- Dockerfiles used for the CI environment
- Files that define your dependencies (for example, `pom.xml` in Maven, `requirements.txt` in Python, or `package.json` in JavaScript)
- Makefiles

When you specify a set of tracked files, Test Impact Analysis runs all tests if any of these files change.

All file paths are considered to be relative to the root of the repository. You may use the `*` and `**` wildcard characters to match multiple files or directories. For instance, `**/*.mdx` matches any `.mdx` file in the repository.
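To picture how a pattern like `**/*.mdx` behaves, here is a sketch using Python's recursive globbing over a throwaway directory. The file layout is hypothetical, and Datadog's server-side matcher may differ in edge cases:

```python
# Sketch of the documented wildcard semantics using Python's glob module;
# the repository layout below is hypothetical.
import glob
import os
import tempfile

repo = tempfile.mkdtemp()
for rel in ["README.mdx", "docs/guides/intro.mdx", "src/main.py"]:
    path = os.path.join(repo, rel)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    open(path, "w").close()

# "**/*.mdx" matches any .mdx file at any depth, relative to the repo root.
matches = sorted(
    os.path.relpath(p, repo)
    for p in glob.glob(os.path.join(repo, "**", "*.mdx"), recursive=True)
)
print(matches)  # ['README.mdx', 'docs/guides/intro.mdx']
```

Note that `**` crosses directory boundaries while `*` does not, which is why `**/*.mdx` also matches files at the repository root.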

{% image
   source="https://datadog-docs.imgix.net/images/getting_started/intelligent_test_runner/test-impact-analysis-gs-config.237f2f7c4b6bf439612f921ea3f90405.png?auto=format"
   alt="Select branches to exclude and tracked files" /%}

## Explore test sessions{% #explore-test-sessions %}

You can explore the time savings you get from Test Impact Analysis by looking at the test commit page and test sessions panel.

{% image
   source="https://datadog-docs.imgix.net/images/continuous_integration/itr_commit.fd1676897714459374d2aa9a0e8e4e18.png?auto=format"
   alt="Test commit page with Test Impact Analysis" /%}

{% image
   source="https://datadog-docs.imgix.net/images/continuous_integration/itr_savings.450d19438e5d38530cb3e9418b9759d9.png?auto=format"
   alt="Test Impact Analysis enabled in a test session showing its time savings." /%}

When Test Impact Analysis is active and skipping tests, purple text displays the amount of time saved on each test session or on each commit. The duration bar also changes color to purple so you can identify which test sessions are using Test Impact Analysis on the [Test Runs](https://app.datadoghq.com/ci/test-runs) page.

## Explore adoption and global savings{% #explore-adoption-and-global-savings %}

Track your organization's savings and adoption of Test Impact Analysis through the out-of-the-box [Test Impact Analysis dashboard](https://app.datadoghq.com/dash/integration/30941/ci-visibility-intelligent-test-runner). The dashboard includes widgets to track your overall savings as well as a per-repository, per-committer, and per-service view of the data. View the dashboard to understand which parts of your organization are using and getting the most out of Test Impact Analysis.

{% image
   source="https://datadog-docs.imgix.net/images/continuous_integration/itr_dashboard1.02eb16c44444579173a3a943298d4986.png?auto=format"
   alt="Test Impact Analysis dashboard" /%}

The dashboard also tracks adoption of Test Impact Analysis throughout your organization.

{% image
   source="https://datadog-docs.imgix.net/images/continuous_integration/itr_dashboard2.1a0b68cf9475ff36704d09821c26426a.png?auto=format"
   alt="Test Impact Analysis dashboard" /%}

## Further Reading{% #further-reading %}

- [Check out the latest Software Delivery releases! (App login required)](https://app.datadoghq.com/release-notes?category=Software%20Delivery)
- [Streamline CI testing with Datadog Intelligent Test Runner](https://www.datadoghq.com/blog/streamline-ci-testing-with-datadog-intelligent-test-runner/)
- [Monitor all your CI pipelines with Datadog](https://www.datadoghq.com/blog/monitor-ci-pipelines/)
