Test Optimization in Datadog

Overview

Test Optimization provides a test-first view into your CI health by displaying important metrics and results from your tests. It can help you investigate the performance problems and test failures that are most relevant to your work, focusing on the code you are responsible for rather than on the pipelines that run your tests.

Setup

Select an option to configure Test Optimization in Datadog:

  • .NET
  • Java
  • JavaScript
  • Python
  • Ruby
  • Swift
  • Go
  • Upload JUnit tests to Datadog

In addition to tests, Test Optimization provides visibility over the whole testing phase of your project.

Supported features

The following features are available for .NET, Java/JVM-based languages, JavaScript, Python, Ruby, Swift, Go, and JUnit XML report uploads. Support for individual features varies by language.

Accurate time/duration results

Microsecond resolution in test start time and duration.

Distributed traces on integration tests

Tests that make calls to external services instrumented with Datadog show the full distributed trace in their test details.

Agent-based reporting

Ability to report test information through the Datadog Agent.

Agentless reporting

Ability to report test information without the Datadog Agent.

Test suite level visibility

Visibility over the whole testing process, including sessions, modules, suites, and tests.

Manual API

Ability to programmatically create CI Visibility events for test frameworks that are not supported by Datadog's automatic instrumentation.

Codeowner by test

Automatic detection of the owner of a test file based on the CODEOWNERS file (partial support in some languages).

Source code start/end

Automatic report of the start and end lines of a test (some languages report only the start line).

CI and git info

Automatic collection of git and CI environment metadata, such as CI provider, git commit SHA or pipeline URL.

Git metadata upload

Automatic upload of git tree information used for Test Impact Analysis.

Test Impact Analysis *

Capability to enable Test Impact Analysis, which intelligently skips tests based on code coverage and git metadata.

Code coverage support

Ability to report total code coverage metrics (manual in some languages).

Benchmark tests support

Automatic detection of performance statistics for benchmark tests.

Parameterized tests

Automatic detection of parameterized tests.

Early flake detection *

Automatically retry new tests to detect flakiness.

Auto test retries *

Automatically retry failed tests up to N times to avoid failing the build due to test flakiness.

Selenium RUM integration

Automatically link browser sessions to test cases when testing RUM-instrumented applications.

* These features are opt-in and need to be enabled on the Test Optimization Settings page.

Default configurations

Tests evaluate the behavior of code under a given set of conditions. Some of those conditions relate to the environment where the tests run, such as the operating system or the runtime used. Because the same code can behave differently under different conditions, developers usually run their tests under several sets of conditions and validate that the behavior is as expected for each of them. Each specific set of conditions is called a configuration.

In Test Optimization, a test with multiple configurations is treated as a separate test for each configuration. If one configuration fails while the others pass, only that specific test and configuration combination is marked as failed.

For example, suppose you're testing a single commit with a Python test that runs against three different Python versions. If the test fails for one of those versions, that specific test is marked as failed, while the other versions are marked as passed. If you retry the tests against the same commit and the test now passes for all three Python versions, the version that previously failed is marked as both passed and flaky, while the other two versions remain passed with no flakiness detected.
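
To make this concrete, here is a minimal, illustrative pytest-style test whose outcome depends on the runtime configuration rather than on the code under test (the version threshold is arbitrary):

    # Illustrative only: the same test passes or fails depending on the runtime it runs on.
    # Test Optimization reports each runtime.version as a separate configuration of this test.
    import sys

    def test_requires_recent_python():
        # Passes on Python 3.10 and later, fails on older runtimes.
        assert sys.version_info >= (3, 10)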

Test configuration attributes

When you run your tests with Test Optimization, the library detects and reports information about the environment where the tests run as test tags. For example, the operating system name, such as Windows or Linux, and the platform architecture, such as arm64 or x86_64, are added as tags on each test. These values are shown on the commit and branch overview pages when a test fails or is flaky for a specific configuration but not for others.

The following tags are automatically collected to identify test configurations, and some may only apply to specific platforms:

  • os.platform: Name of the operating system where the tests are run.
  • os.family: Family of the operating system where the tests are run.
  • os.version: Version of the operating system where the tests are run.
  • os.architecture: Architecture of the operating system where the tests are run.
  • runtime.name: Name of the runtime system for the tests.
  • runtime.version: Version of the runtime system.
  • runtime.vendor: Vendor that built the runtime platform where the tests are run.
  • runtime.architecture: Architecture of the runtime system for the tests.
  • device.model: The device model running the tests.
  • device.name: Name of the device.
  • ui.appearance: User interface style.
  • ui.orientation: Orientation the UI is run in.
  • ui.localization: Language of the application.

Parameterized test configurations

When you run parameterized tests, the library detects and reports information about the parameters used. Parameters are part of the test configuration, so the same test case executed with different parameters is treated as separate tests in Test Optimization.

If a test parameter is non-deterministic and has a different value every time the test is run, each test execution is considered a new test in Test Optimization. As a consequence, some product features, such as execution history, flakiness detection, and Test Impact Analysis, may not work correctly for such tests.

Some examples of non-deterministic test parameters are:

  • current date
  • a random value
  • a value that depends on the test execution environment (such as an absolute file path or the current username)
  • a value that has no deterministic string representation (for example an instance of a Java class whose toString() method is not overridden)

Avoid using non-deterministic test parameters. If this is not possible, some testing frameworks provide a way to specify a deterministic string representation for a non-deterministic parameter, such as overriding the parameter display name.
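
For example, pytest's ids argument is one way to override parameter display names; the following minimal sketch uses illustrative test and parameter names:

    # A non-deterministic parameter (a random UUID) gets a stable display name through ids,
    # so Test Optimization reports the same test and parameters on every run.
    import uuid

    import pytest

    CASES = [
        ("small-payload", uuid.uuid4()),  # random value: non-deterministic
        ("large-payload", uuid.uuid4()),
    ]

    @pytest.mark.parametrize(
        "case_id, payload_id",
        CASES,
        ids=[case_id for case_id, _ in CASES],  # deterministic display names
    )
    def test_process_payload(case_id, payload_id):
        # The test still receives the random value, but its reported name stays stable.
        assert payload_id is not None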

Custom configurations

Some configurations cannot be identified and reported automatically because they depend on environment variables, test run arguments, or other conventions specific to your project. In those cases, you must provide the configuration details to the library so Test Optimization can properly identify them.

Define these tags as part of the DD_TAGS environment variable using the test.configuration prefix.

For example, the following test configuration tags identify a test configuration where disk response time is slow and available memory is low:

DD_TAGS=test.configuration.disk:slow,test.configuration.memory:low

All tags with the test.configuration prefix are used as configuration tags, in addition to the automatically collected ones.

Note: Nested test.configuration tags, such as test.configuration.cpu.memory, are not supported.
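
If you define several custom configuration values, join them with commas. As a rough Python sketch (using the same illustrative disk and memory keys as above), the value can be assembled like this:

    # Build a DD_TAGS value from custom test.configuration entries (illustrative keys).
    custom_configuration = {"disk": "slow", "memory": "low"}

    dd_tags = ",".join(
        f"test.configuration.{key}:{value}" for key, value in custom_configuration.items()
    )
    print(dd_tags)  # test.configuration.disk:slow,test.configuration.memory:low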

To filter using these configuration tags, you must create facets for them.

Enhance your developer workflow

Integrate Test Optimization with other tools to report code coverage data, enhance browser tests with RUM, and access insights across platforms, streamlining issue identification and resolution in your development cycle.


Use CI tests data

When Test Visibility is enabled, the following data is collected from your project:

  • Test names and durations.
  • Predefined environment variables set by CI providers.
  • Git commit history including the hash, message, author information, and files changed (without file contents).
  • Information from the CODEOWNERS file.

When creating a dashboard or a notebook, you can use CI test data in your search query, which updates the visualization widget options. For more information, see the Dashboards and Notebooks documentation.

Alert on test data

When you’re evaluating failed or flaky tests, or the performance of a CI test, you can export your search query in the Test Optimization Explorer to a CI Test monitor by clicking the Export button.

Further reading