Overview
Test Visibility provides a test-first view into your CI health by displaying important metrics and results from your tests. It can help you investigate performance problems and test failures that concern you the most because you work on the related code, not because you maintain the pipelines they are run in.
Setup
Choose a language to set up Test Visibility in Datadog:
In addition to tests, CI Visibility provides visibility over the whole testing phase of your project (except for Ruby).
Supported features
Test Visibility is available for the following languages and test report formats: .NET, Java, JavaScript, Python, Ruby, Swift, and JUnit XML report uploads. Feature support varies by language:

Accurate time/duration results: Microsecond resolution in test start time and duration.
Distributed traces on integration tests: Tests that make calls to external services instrumented with Datadog show the full distributed trace in their test details.
Agent-based reporting: Ability to report test information through the Datadog Agent.
Agentless reporting: Ability to report test information without the Datadog Agent.
Test suite level visibility: Visibility over the whole testing process, including sessions, modules, suites, and tests.
Manual API: Ability to programmatically create CI Visibility events for test frameworks that are not supported by Datadog's automatic instrumentation.
Codeowner by test: Automatic detection of the owner of a test file based on the CODEOWNERS file.
Source code start/end: Automatic report of the start and end lines of a test (some languages report only the start line).
CI and git info: Automatic collection of git and CI environment metadata, such as the CI provider, git commit SHA, or pipeline URL.
Git metadata upload: Automatic upload of the git tree information used for Intelligent Test Runner.
Intelligent Test Runner: Capability to enable Intelligent Test Runner, which intelligently skips tests based on code coverage and git metadata.
Code coverage support: Ability to report total code coverage metrics (manual in some languages).
Benchmark tests support: Automatic detection of performance statistics for benchmark tests.
Parameterized tests: Automatic detection of parameterized tests.
Default configurations
Tests evaluate the behavior of code for a set of given conditions. Some of those conditions relate to the environment where the tests run, such as the operating system or the runtime used. The same code executed under different sets of conditions can behave differently, so developers usually configure their tests to run under several sets of conditions and validate that the behavior is as expected for all of them. Each specific set of conditions is called a configuration.
In CI Visibility, a test with multiple configurations is treated as multiple tests with a separate test for each configuration. In the case where one of the configurations fails but the others pass, only that specific test and configuration combination is marked as failed.
For example, suppose you're testing a single commit and you have a Python test that runs against three different Python versions. If the test fails for one of those versions, that specific test and version combination is marked as failed, while the other versions are marked as passed. If you retry the tests against the same commit and the test now passes for all three Python versions, the version that previously failed is marked as both passed and flaky, while the other two versions remain passed, with no flakiness detected.
Test configuration attributes
When you run your tests with CI Visibility, the library detects and reports information about the environment where tests are run as test tags. For example, the operating system name, such as Windows or Linux, and the architecture of the platform, such as arm64 or x86_64, are added as tags on each test. These values are shown in the commit and on branch overview pages when a test fails or is flaky for a specific configuration but not others.
The following tags are automatically collected to identify test configurations, and some may only apply to specific platforms:
os.platform: Name of the operating system where the tests are run.
os.family: Family of the operating system where the tests are run.
os.version: Version of the operating system where the tests are run.
os.architecture: Architecture of the operating system where the tests are run.
runtime.name: Name of the runtime system for the tests.
runtime.version: Version of the runtime system.
runtime.vendor: Vendor that built the runtime platform where the tests are run.
runtime.architecture: Architecture of the runtime system for the tests.
device.model: The device model running the tests.
device.name: Name of the device.
ui.appearance: User interface style.
ui.orientation: Orientation the UI is run in.
ui.localization: Language of the application.
Custom configurations
There are some configurations that cannot be directly identified and reported automatically because they can depend on environment variables, test run arguments, or other approaches that developers use. For those cases, you must provide the configuration details to the library so CI Visibility can properly identify them.
Define these tags as part of the DD_TAGS environment variable using the test.configuration prefix.
For example, the following test configuration tags identify a test configuration where disk response time is slow and available memory is low:
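A minimal sketch of such a setting (the disk and memory keys are illustrative choices, not predefined tags; any key defined under the test.configuration prefix in DD_TAGS is reported as a custom configuration tag):

DD_TAGS=test.configuration.disk:slow,test.configuration.memory:low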
If Intelligent Test Runner is enabled for .NET, Java, JavaScript, or Swift, per-test code coverage information, including the file names and line numbers covered by each test, is collected from your projects.
When creating a dashboard or a notebook, you can use test execution data in your search query, which updates the visualization widget options.
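For example, assuming the standard CI Visibility test facets, a query along these lines could scope a widget to failed test runs on a given branch (the facet names and branch value are assumptions for illustration, not taken from this page):

@test.status:fail @git.branch:main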
Alert on test data
When you evaluate failed or flaky tests, or the performance of a CI test on the Test Runs page, click Create Monitor to create a CI Test monitor.
Further reading
Additional helpful documentation, links, and articles: