Test Optimization allows you to better understand your test posture, identify commits introducing flaky tests, identify performance regressions, and troubleshoot complex test failures.
You can visualize the performance of your test runs as traces, where spans represent the execution of different parts of the test.
Test Optimization enables development teams to debug, optimize, and accelerate software testing across CI environments by providing insights about test performance, flakiness, and failures. Test Optimization automatically instruments each test and integrates intelligent test selection using the Test Impact Analysis, enhancing test efficiency and reducing redundancy.
With historical test data, teams can understand performance regressions, compare the outcome of tests from feature branches to default branches, and establish performance benchmarks. By using Test Optimization, teams can improve their developer workflows and maintain quality code output.
Test Optimization tracks the performance and results of your CI tests and displays the outcome of each test run.
To start instrumenting and running tests, see the documentation for your language.
Test Optimization is compatible with any CI provider and is not limited to those supported by CI Visibility. For more information about supported features, see Test Optimization.
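For example, a Python project using pytest can be instrumented with Datadog's `ddtrace` library. This is a minimal sketch; the test name and assertion are illustrative:

```python
# test_cart.py -- an ordinary pytest test; no Datadog-specific code is needed.
# Install the tracer first:
#   pip install -U ddtrace
# Then run the suite with instrumentation enabled, for example:
#   DD_ENV=ci DD_SERVICE=my-test-service pytest --ddtrace

def test_cart_total():
    # The ddtrace pytest plugin reports this run as a test span,
    # including its status and duration.
    prices = [10, 15, 25]
    assert sum(prices) == 50
```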
Access your tests’ metrics (such as executions, duration, duration distribution, overall success rate, and failure rate) to start identifying important trends and patterns using the data collected from your tests across CI pipelines.
You can create dashboards for monitoring flaky tests, performance regressions, and test failures occurring within your tests. Alternatively, you can use an out-of-the-box dashboard containing widgets populated with data collected in Test Optimization to visualize the health and performance of your CI test sessions, modules, suites, and tests.
A flaky test is a test that exhibits both a passing and failing status across multiple test runs for the same commit. For example, if you commit code, a test fails in CI, and the same test passes when you run the same commit through CI again, that test is unreliable and is marked as flaky.
You can access flaky test information in the Flaky Tests section of a test run’s overview page, or as a column on your list of test services on the Test List page.
For each branch, the list shows the number of new flaky tests, the number of commits affected by those flaky tests, total test time, and the branch’s latest commit details.
In a commit’s Flaky Tests section, Test Optimization displays graphs to help you understand your flaky test trends and the impact of your flaky tests.
To ignore new flaky tests for a commit when you’ve determined they were detected by mistake, click a test that has a New Flaky value with a dropdown option, and click Ignore flaky tests. For more information, see Flaky Test Management.
The Test Optimization Explorer allows you to create visualizations and filter test spans using the data collected from your testing. Each test run is reported as a trace, which includes additional spans generated by the test request.
Navigate to Software Delivery > Test Optimization > Test Runs and select one of the following levels to start filtering span results:

- Session to filter your test session span results.
- Module to filter your test module span results.
- Suite to filter your test suite span results.
- Test to filter your test span results.
Use facets to customize the search query and identify changes in time spent on each level of your test run.
When you click a test on the Test List page, you can see a flame graph or a list of spans on the Trace tab.
You can identify bottlenecks in your test runs and examine individual levels ranked from the largest to smallest percentage of execution time.
You can programmatically search and manage test events using the CI Visibility Tests API endpoint. For more information, see the API documentation.
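For example, here is a minimal Python sketch of searching failed test runs through that endpoint. The path and request body follow the public API reference; the query, Datadog site, and environment variable names are assumptions to adapt to your setup:

```python
import os
import requests

# Search recent failed test runs on main through the CI Visibility Tests API.
# Adjust the Datadog site (for example, datadoghq.eu) and query as needed.
response = requests.post(
    "https://api.datadoghq.com/api/v2/ci/tests/events/search",
    headers={
        "DD-API-KEY": os.environ["DD_API_KEY"],
        "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
    },
    json={
        "filter": {
            "query": "@test.status:fail @git.branch:main",
            "from": "now-15m",
            "to": "now",
        },
        "page": {"limit": 25},
        "sort": "timestamp",
    },
)
response.raise_for_status()
for event in response.json().get("data", []):
    print(event["id"])
```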
To enhance the data collected from your CI tests, you can programmatically add tags or measures (like memory usage) directly to the spans created during test execution. For more information, see Add Custom Measures To Your Tests.
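For example, with the Python tracer, a test can attach custom tags and measures to its own span. This is a sketch: the tag and measure names (`test_owner`, `memory_allocations`) and their values are illustrative:

```python
from ddtrace import tracer

def test_checkout_total():
    # The ddtrace pytest plugin creates a span for this test;
    # attach extra context to it at runtime.
    span = tracer.current_span()
    if span is not None:
        span.set_tag("test_owner", "checkout-team")  # custom tag (string)
        span.set_metric("memory_allocations", 16)    # custom measure (number)
    assert sum([10, 15]) == 25
```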
Alert relevant teams in your organization when test performance regresses, test failures occur, or new flaky tests appear.
To set up a monitor that alerts when the number of test failures exceeds a threshold of one failure:

1. Select New Flaky Test to trigger alerts when new flaky tests are added to your code base, Test Failures to trigger alerts for test failures, or Test Performance to trigger alerts for test performance regressions, or customize your own search query. In this example, select the Branch (@git.branch) facet to filter your test runs on the main branch.
2. In the Evaluate the query over the section, select last 15 minutes.
3. Set the Alert threshold to > 1.

For more information, see the CI Monitor documentation.
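As a sketch, an equivalent alert can also be created programmatically through the Monitors API. The monitor type and query syntax below follow the CI Test monitor documentation; the monitor name, message, and notification handle are illustrative:

```python
import os
import requests

# Create a CI Test monitor that fires when more than one test failure
# occurs on the main branch within the last 15 minutes.
monitor = {
    "name": "Test failures on main",
    "type": "ci-tests alert",
    "query": 'ci-tests("@test.status:fail @git.branch:main")'
             '.rollup("count").last("15m") > 1',
    "message": "Test failures on main exceeded the threshold. @your-team",
    "options": {"thresholds": {"critical": 1}},
}
response = requests.post(
    "https://api.datadoghq.com/api/v1/monitor",
    headers={
        "DD-API-KEY": os.environ["DD_API_KEY"],
        "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
    },
    json=monitor,
)
response.raise_for_status()
print(response.json()["id"])
```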