Test Impact Analysis allows you to skip tests that are unaffected by a code change.
With Test Optimization, development teams can configure Test Impact Analysis for their test services, set branches to exclude (such as the default branch), and define files to track (changes to any tracked file trigger a full run of all tests).
Configure and enable Test Impact Analysis for your test services to reduce unnecessary testing time, improve CI test efficiency, and cut costs, while maintaining reliability and performance across your CI environments.
Test Impact Analysis uses code coverage data to determine whether tests should be skipped. For more information, see How Test Impact Analysis Works in Datadog.
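Conceptually, coverage-based selection maps each test to the files it exercises and runs only the tests whose files intersect a commit's change set. The following toy sketch illustrates the idea only; it is not Datadog's implementation, and the coverage map and test names are made up:

```python
# Toy illustration of coverage-based test selection, not Datadog's
# implementation. Assumes a per-test coverage map already exists
# (test name -> set of source files that test executes).
coverage_map = {
    "test_checkout": {"src/cart.py", "src/checkout.py"},
    "test_search": {"src/search.py"},
}

def tests_to_run(changed_files):
    """Select only the tests whose covered files intersect the change set."""
    changed = set(changed_files)
    return sorted(
        test for test, covered in coverage_map.items() if covered & changed
    )

print(tests_to_run(["src/search.py"]))  # ['test_search'] -- test_checkout is skipped
```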
To set up Test Impact Analysis, see the setup documentation for your programming language.
Enable Test Impact Analysis from the Test Impact Analysis column for a service. You must have the Test Impact Analysis Activation Write permission. For more information, see the Datadog Role Permissions documentation.
Disabling Test Impact Analysis on critical branches (such as your default branch) ensures comprehensive test coverage, while enabling it on feature or development branches helps maximize testing efficiency.
You can configure Test Impact Analysis to prevent specific tests from being skipped. These tests are known as unskippable tests, and are run regardless of code coverage data.
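For example, with the dd-trace-py pytest integration, a test can be marked unskippable with a marker; the sketch below assumes that integration's datadog_itr_unskippable marker, and the test itself is hypothetical:

```python
import pytest

# Minimal sketch assuming the dd-trace-py pytest integration, which
# provides the datadog_itr_unskippable marker. Tests carrying this marker
# always run, even when Test Impact Analysis would otherwise skip them.

@pytest.mark.datadog_itr_unskippable
def test_payment_total():  # hypothetical test name for illustration
    assert 2 + 2 == 4
```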
To configure Test Impact Analysis, define the files for Datadog to track (for example, documentation/content/** or domains/shopist/apps/api/BUILD.bazel). Test Impact Analysis runs all CI tests when any of these tracked files change.

Once you've configured Test Impact Analysis on a test service, execute a test suite run on your default branch. This establishes a baseline for Test Impact Analysis to accurately skip irrelevant tests in future commits.
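Tracked-file patterns behave like path globs. The sketch below is a toy illustration of how a change set might be checked against tracked patterns, not Datadog's matching logic; Python's fnmatch lets * cross path separators, so it stands in for ** here:

```python
# Toy illustration only, not Datadog's matching logic: a change to any
# tracked file forces a full test run instead of a skipped-down run.
from fnmatch import fnmatch

TRACKED_PATTERNS = [
    "documentation/content/**",              # docs tree (example from above)
    "domains/shopist/apps/api/BUILD.bazel",  # a specific build file
]

def full_run_required(changed_files):
    """Return True if any changed file matches a tracked pattern."""
    # fnmatch's * already matches across path separators, approximating **.
    return any(
        fnmatch(path, pattern)
        for path in changed_files
        for pattern in TRACKED_PATTERNS
    )

print(full_run_required(["documentation/content/guide.md"]))  # True
print(full_run_required(["src/cart.py"]))                     # False
```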
Explore the data collected by Test Impact Analysis, such as the time saved by skipping tests and your organization's usage of the feature, to improve your CI efficiency.
You can create dashboards to visualize your testing metrics, or use an out-of-the-box dashboard whose widgets are populated with Test Impact Analysis data to help you identify usage patterns, trends, and areas for improvement.
The Test Optimization Explorer allows you to create visualizations and filter test spans using the data collected from Test Optimization and Test Impact Analysis. When Test Impact Analysis is active, it displays the amount of time saved for each test session or commit. The duration bars turn purple to indicate active test skipping.
Navigate to Software Delivery > Test Optimization > Test Runs and select Session to start filtering your test session span results.
Navigate to Software Delivery > Test Optimization > Test Runs and select Module to start filtering your test module span results.
Navigate to Software Delivery > Test Optimization > Test Runs and select Suite to start filtering your test suite span results.
Navigate to Software Delivery > Test Optimization > Test Runs and select Test to start filtering your test span results.
Use the out-of-the-box Test Impact Analysis facets to customize the search query. For example, to filter test session runs that have Test Skipping Enabled, you can use @test.itr.tests_skipping.enabled:true in the search query.
Then, click a test session run to see the amount of time saved by Test Impact Analysis in the Test Session Details section of the test session side panel.