Test Optimization provides a test-first view into your CI health by displaying important metrics and results from your tests. It can help you investigate performance problems and test failures that are most relevant to your work, focusing on the code you are responsible for, rather than the pipelines which run your tests.
Select an option to configure Test Optimization in Datadog:
In addition to tests, Test Optimization provides visibility over the whole testing phase of your project.
Test Optimization supports the following languages and test report formats: .NET, Java/JVM-based, JavaScript, Python, Ruby, Swift, Go, and JUnit XML. Feature support varies by language, and some features are opt-in and need to be enabled on the Test Service Settings page.
Tests evaluate the behavior of code under a given set of conditions. Some of those conditions relate to the environment where the tests are run, such as the operating system or the runtime used. The same code executed under different sets of conditions can behave differently, so developers usually configure their tests to run under several sets of conditions and validate that the behavior is as expected in all of them. Each specific set of conditions is called a configuration.
In Test Optimization, a test with multiple configurations is treated as a separate test for each configuration. If one of the configurations fails while the others pass, only that specific test and configuration combination is marked as failed.
For example, suppose you're testing a single commit and you have a Python test that runs against three different Python versions. If the test fails for one of those versions, that specific version is marked as failed, while the other versions are marked as passed. If you retry the tests against the same commit and the test now passes for all three Python versions, the version that previously failed is marked as both passed and flaky, while the other two versions remain passed, with no flakiness detected.
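To make the configuration idea concrete, here is a minimal, hypothetical pytest sketch (the test name and assertion are illustrative, not taken from Datadog's documentation) of a test whose outcome depends on the operating system it runs under:

```python
import os


def test_path_separator_is_forward_slash():
    # The same test code yields different results per configuration: it passes
    # on Linux and macOS runners but fails on Windows, so only the Windows
    # (test, configuration) combination would be marked as failed.
    assert os.sep == "/"
```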
When you run your tests with Test Optimization, the library detects and reports information about the environment where tests are run as test tags. For example, the operating system name, such as `Windows` or `Linux`, and the architecture of the platform, such as `arm64` or `x86_64`, are added as tags on each test. These values are shown on the commit and branch overview pages when a test fails or is flaky for a specific configuration but not others.
The following tags are automatically collected to identify test configurations, and some may only apply to specific platforms:
| Tag Name | Description |
|---|---|
| `os.platform` | Name of the operating system where the tests are run. |
| `os.family` | Family of the operating system where the tests are run. |
| `os.version` | Version of the operating system where the tests are run. |
| `os.architecture` | Architecture of the operating system where the tests are run. |
| `runtime.name` | Name of the runtime system for the tests. |
| `runtime.version` | Version of the runtime system. |
| `runtime.vendor` | Vendor that built the runtime platform where the tests are run. |
| `runtime.architecture` | Architecture of the runtime system for the tests. |
| `device.model` | Model of the device running the tests. |
| `device.name` | Name of the device. |
| `ui.appearance` | User interface style. |
| `ui.orientation` | Orientation the UI is run in. |
| `ui.localization` | Language of the application. |
When you run parameterized tests, the library detects and reports information about the parameters used. Parameters are part of the test configuration, so the same test case executed with different parameters is considered two different tests in Test Optimization.
If a test parameter is non-deterministic and has a different value every time a test is run, each test execution is considered a new test in Test Optimization. As a consequence, some product features may not work correctly for such tests: history of executions, flakiness detection, Test Impact Analysis, and others.
Some examples of non-deterministic test parameters are:
- Current dates or timestamps
- Random values
- Objects whose default string representation changes between runs (for example, when the `toString()` method is not overridden)

Avoid using non-deterministic test parameters. If this is not possible, some testing frameworks provide a way to specify a deterministic string representation for a non-deterministic parameter, such as overriding the parameter display name.
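As an illustration, assuming pytest as the test framework (other frameworks offer similar mechanisms), the sketch below passes a non-deterministic parameter value but pins its display name with the `ids` argument, so every run reports the same test:

```python
import datetime

import pytest


# Without `ids`, the generated test ID would embed the timestamp string, which
# changes on every run, so each execution would appear as a brand-new test.
@pytest.mark.parametrize(
    "timestamp",
    [datetime.datetime.now().isoformat()],
    ids=["current-timestamp"],  # deterministic display name for the parameter
)
def test_timestamp_is_parseable(timestamp):
    assert datetime.datetime.fromisoformat(timestamp)
```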
There are some configurations that cannot be directly identified and reported automatically because they can depend on environment variables, test run arguments, or other approaches that developers use. For those cases, you must provide the configuration details to the library so Test Optimization can properly identify them.
Define these tags as part of the `DD_TAGS` environment variable using the `test.configuration` prefix.
For example, the following test configuration tags identify a test configuration where disk response time is slow and available memory is low:
`DD_TAGS=test.configuration.disk:slow,test.configuration.memory:low`
All tags with the `test.configuration` prefix are used as configuration tags, in addition to the automatically collected ones.
Note: Nested `test.configuration` tags, such as `test.configuration.cpu.memory`, are not supported.
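For instance, here is a minimal sketch (not an official Datadog setup step) of setting `DD_TAGS` programmatically before launching a test run; in practice you would typically export the variable in your shell or CI configuration, and the `pytest` command and `tests/` path below are assumptions for illustration:

```python
import os
import subprocess

env = dict(os.environ)
# Tags under the test.configuration prefix are treated as configuration tags,
# in addition to the automatically collected ones.
env["DD_TAGS"] = "test.configuration.disk:slow,test.configuration.memory:low"

# Launch the test run with the custom configuration tags applied.
subprocess.run(["pytest", "tests/"], env=env, check=True)
```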
To filter by these configuration tags, you must create facets for them.
Integrate Test Optimization with other tools to report code coverage data, enhance browser tests with RUM, and access insights across platforms, streamlining issue identification and resolution in your development cycle.
When Test Visibility is enabled, the following data is collected from your project:
When creating a dashboard or a notebook, you can use CI test data in your search query, which updates the visualization widget options. For more information, see the Dashboards and Notebooks documentation.
When you’re evaluating failed or flaky tests, or the performance of a CI test, you can export your search query in the Test Optimization Explorer to a CI Test monitor by clicking the Export button.
Additional helpful documentation, links, and articles: