---
title: Test Optimization Troubleshooting
description: Datadog, the leading service for cloud-scale monitoring.
breadcrumbs: Docs > Test Optimization in Datadog > Test Optimization Troubleshooting
---

# Test Optimization Troubleshooting

{% callout %}
# Important note for users on the following Datadog sites: app.ddog-gov.com

{% alert level="danger" %}
This product is not supported for your selected [Datadog site](https://docs.datadoghq.com/getting_started/site).
{% /alert %}

{% /callout %}

## Overview{% #overview %}

This page provides information to help you troubleshoot issues with Test Optimization. If you need additional help, contact [Datadog Support](https://docs.datadoghq.com/help/).

## Your tests are instrumented, but Datadog isn't showing any data{% #your-tests-are-instrumented-but-datadog-isnt-showing-any-data %}

1. Go to the [**Tests**](https://docs.datadoghq.com/continuous_integration/tests/) page for the language you're instrumenting and check that the testing framework you are using is supported in the **Compatibility** section.
1. Check if you see any test results in the [**Test Runs**](https://app.datadoghq.com/ci/test/runs) section. If you do see results there, but not in [**Test Health**](https://app.datadoghq.com/ci/test/health) when viewing the repositories list or an individual repository, Git information is missing. See [Data appears in Test Runs but not Test Health](#data-appears-in-test-runs-but-not-test-health) to troubleshoot it.
1. If you are reporting the data through the Datadog Agent, make sure there is [network connectivity](https://docs.datadoghq.com/agent/configuration/network/) from your test-running host to the Agent's host and port. Run your tests with the Agent hostname set in the `DD_AGENT_HOST` environment variable and the appropriate port in `DD_TRACE_AGENT_PORT`. You can activate [debug mode](https://docs.datadoghq.com/tracing/troubleshooting/tracer_debug_logs) in the tracer to verify connectivity to the Agent.
1. If you are reporting the data directly to Datadog ("Agentless mode"), make sure there is [network connectivity](https://docs.datadoghq.com/tests/network/) from the test-running hosts to Datadog's hosts. You can activate [debug mode](https://docs.datadoghq.com/tracing/troubleshooting/tracer_debug_logs) in the tracer to verify connectivity to Datadog.
1. If you still don't see any results, [contact Support](https://docs.datadoghq.com/help/) for troubleshooting help.
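As a sketch of steps 3 and 4 above, the connectivity variables can be set on the environment of the process that runs the tests. The host name below is a placeholder, not a real default:

```python
import os

# Hypothetical values: point the tracer at the host and port where the
# Datadog Agent is listening, and enable tracer debug logs to verify
# connectivity.
env = dict(os.environ)
env["DD_AGENT_HOST"] = "my-agent.internal"  # placeholder Agent hostname
env["DD_TRACE_AGENT_PORT"] = "8126"         # default trace intake port
env["DD_TRACE_DEBUG"] = "true"              # tracer debug mode

# Pass `env` to the process that runs your tests, for example:
# subprocess.run(["pytest"], env=env, check=True)
```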

## You are uploading JUnit test reports with `datadog-ci` but some or all tests are missing{% #you-are-uploading-junit-test-reports-with-datadog-ci-but-some-or-all-tests-are-missing %}

If you are uploading JUnit test report files with the `datadog-ci` CLI and you do not see the tests, it is likely that the tests are being discarded because the report is considered incorrect.

The following aspects make a JUnit test report incorrect:

- A timestamp on the reported tests that is more than **71 hours** older than the moment the report is uploaded.
- A testsuite without a name.
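For reference, a minimal report that avoids both problems might look like the following (the suite and test names are illustrative): the `testsuite` element has a `name`, and its `timestamp` must fall within the upload window.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical minimal JUnit report: the testsuite is named, and the
     timestamp must be recent relative to the moment of upload. -->
<testsuites>
  <testsuite name="MyTestSuite" tests="1" failures="0" timestamp="2024-05-01T12:00:00">
    <testcase name="test_addition" classname="MyTestSuite" time="0.012"/>
  </testsuite>
</testsuites>
```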

## Data appears in Test Runs but not Test Health{% #data-appears-in-test-runs-but-not-test-health %}

If you can see test results data in the **Test Runs** tab, but not in **Test Health** when viewing the repositories list or an individual repository, Git metadata (repository, commit, or branch) is probably missing. To confirm this is the case, open a test execution in the [**Test Runs**](https://app.datadoghq.com/ci/test/runs) section, and check that there is no `git.repository_url`, `git.commit.sha`, or `git.branch`. If these tags are not populated, nothing shows in repository sections of the [**Test Health**](https://app.datadoghq.com/ci/test/health) page.

1. Tracers first use the environment variables, if any, set by the CI provider to collect Git information. See [Running tests inside a container](https://docs.datadoghq.com/continuous_integration/tests/containers/) for a list of environment variables that the tracer attempts to read for each supported CI provider. At a minimum, this populates the repository, commit hash, and branch information.

1. Next, tracers fetch Git metadata using the local `.git` folder, if present, by executing `git` commands. This populates all Git metadata fields, including commit message, author, and committer information. Ensure the `.git` folder is present and the `git` binary is installed and in `$PATH`. This information is used to populate attributes not detected in the previous step.

1. You can also provide Git information manually using environment variables, which override information detected by any of the previous steps.

The supported environment variables for providing Git information are:

   {% dl %}
   
   {% dt %}
   `DD_GIT_REPOSITORY_URL` **(required)**
      {% /dt %}

   {% dd %}
   URL of the repository where the code is stored. Both HTTP and SSH URLs are supported. **Example**: `git@github.com:MyCompany/MyApp.git`, `https://github.com/MyCompany/MyApp.git`
      {% /dd %}

   {% dt %}
   `DD_GIT_COMMIT_SHA` **(required)**
      {% /dt %}

   {% dd %}
   Full commit hash (40-character SHA-1). **Example**: `a18ebf361cc831f5535e58ec4fae04ffd98d8152`
      {% /dd %}

   {% dt %}
   `DD_GIT_COMMIT_AUTHOR_EMAIL` **(required)**
      {% /dt %}

   {% dd %}
   Commit author email. **Example**: `john@example.com`
      {% /dd %}

   {% dt %}
`DD_GIT_COMMIT_AUTHOR_NAME`
   {% /dt %}

   {% dd %}
   Commit author name. **Example**: `John Smith`
      {% /dd %}

   {% dt %}
`DD_GIT_COMMIT_AUTHOR_DATE`
   {% /dt %}

   {% dd %}
   Commit author date in ISO 8601 format. **Example**: `2021-03-12T16:00:28Z`
      {% /dd %}

   {% dt %}
`DD_GIT_COMMIT_COMMITTER_EMAIL`
   {% /dt %}

   {% dd %}
   Commit committer email. **Example**: `jane@example.com`
      {% /dd %}

   {% dt %}
`DD_GIT_COMMIT_COMMITTER_NAME`
   {% /dt %}

   {% dd %}
   Commit committer name. **Example**: `Jane Smith`
      {% /dd %}

   {% dt %}
`DD_GIT_COMMIT_COMMITTER_DATE`
   {% /dt %}

   {% dd %}
   Commit committer date in ISO 8601 format. **Example**: `2021-03-12T16:00:28Z`
      {% /dd %}

   {% dt %}
`DD_GIT_COMMIT_MESSAGE`
   {% /dt %}

   {% dd %}
   Commit message. **Example**: `Set release number`
      {% /dd %}

   {% dt %}
`DD_GIT_BRANCH`
   {% /dt %}

   {% dd %}
   Git branch being tested. Leave empty if providing tag information instead. **Example**: `develop`
      {% /dd %}

   {% dt %}
`DD_GIT_TAG`
   {% /dt %}

   {% dd %}
   Git tag being tested (if applicable). Leave empty if providing branch information instead. **Example**: `1.0.1`
      {% /dd %}

      {% /dl %}

1. If no Git information can be collected by any of these methods, test results are sent with no Git metadata.
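As a sketch of step 3, the required variables can be set on the environment of the test process. The values below are illustrative:

```python
import os

# Illustrative values only: provide Git metadata manually when neither CI
# provider variables nor a local .git folder are available.
env = dict(os.environ)
env["DD_GIT_REPOSITORY_URL"] = "https://github.com/MyCompany/MyApp.git"
env["DD_GIT_COMMIT_SHA"] = "a18ebf361cc831f5535e58ec4fae04ffd98d8152"  # full 40-char SHA-1
env["DD_GIT_BRANCH"] = "develop"  # leave unset if DD_GIT_TAG is provided instead

# The tracer reads these variables from the environment of the test process.
```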

### The total test time is empty{% #the-total-test-time-is-empty %}

If you cannot see the total test time, it is likely that test suite level visibility is not enabled. To confirm, check if your language supports test suite level visibility in [Supported features](https://docs.datadoghq.com/tests/#supported-features). If test suite level visibility is supported, update your tracer to the latest version.

If you still don't see the total time after updating the tracer version, contact [Datadog support](https://docs.datadoghq.com/help/) for help.

### The total test time is different than expected{% #the-total-test-time-is-different-than-expected %}

#### How total time is calculated{% #how-total-time-is-calculated %}

The total time is defined as the sum of the maximum test session durations.

1. The maximum duration of a test session grouped by the test session fingerprint is calculated.
1. The maximum test session durations are summed.
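The two steps above can be sketched as follows; the fingerprints and durations are made up for illustration:

```python
from collections import defaultdict

# Step 1: group sessions by fingerprint and keep the maximum duration.
# Step 2: sum the per-fingerprint maxima to get the total time.
sessions = [
    ("fingerprint-a", 120.0),  # first attempt
    ("fingerprint-a", 95.0),   # retry of the same session
    ("fingerprint-b", 60.0),
]

max_by_fingerprint = defaultdict(float)
for fingerprint, duration in sessions:
    max_by_fingerprint[fingerprint] = max(max_by_fingerprint[fingerprint], duration)

total_time = sum(max_by_fingerprint.values())
print(total_time)  # 180.0
```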

## The test status numbers are not what is expected{% #the-test-status-numbers-are-not-what-is-expected %}

The test status numbers are calculated based on the unique tests that were collected. The uniqueness of a test is defined not only by its suite and name, but by its test parameters and test configurations as well.

### The numbers are lower than expected{% #the-numbers-are-lower-than-expected %}

If the numbers are lower than expected, it is likely that the library or tool you are using to collect test data cannot collect test parameters or some test configurations.

1. If you are uploading JUnit test report files:
   1. If you are running the same tests in different environment configurations, [make sure you are setting those configuration tags during the upload](https://docs.datadoghq.com/continuous_integration/tests/junit_upload/?tabs=linux#collecting-environment-configuration-metadata).
   1. If you are running parameterized tests, it's very likely that the JUnit report does not have that information. [Try using a native library to report test data](https://docs.datadoghq.com/continuous_integration/tests/).
1. If you still don't see the expected results, [contact Datadog support](https://docs.datadoghq.com/help/) for troubleshooting help.

### The passed/failed/skipped numbers are different than expected{% #the-passedfailedskipped-numbers-are-different-than-expected %}

If the same test is collected several times for the same commit but with different statuses, the aggregated result follows the algorithm in the table below:

| **Test Status - First Try** | **Test Status - Retry #1** | **Result** |
| --------------------------- | -------------------------- | ---------- |
| `Passed`                    | `Passed`                   | `Passed`   |
| `Passed`                    | `Failed`                   | `Passed`   |
| `Passed`                    | `Skipped`                  | `Passed`   |
| `Failed`                    | `Passed`                   | `Passed`   |
| `Failed`                    | `Failed`                   | `Failed`   |
| `Failed`                    | `Skipped`                  | `Failed`   |
| `Skipped`                   | `Passed`                   | `Passed`   |
| `Skipped`                   | `Failed`                   | `Failed`   |
| `Skipped`                   | `Skipped`                  | `Skipped`  |
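In other words, any passed attempt makes the aggregate `Passed`; failing that, any failed attempt makes it `Failed`; the aggregate is `Skipped` only when every attempt was skipped. A minimal sketch of that rule:

```python
def aggregate_status(statuses):
    """Aggregate statuses of the same test for one commit, per the table above:
    any pass wins; otherwise any failure wins; skipped only if all were skipped."""
    if "Passed" in statuses:
        return "Passed"
    if "Failed" in statuses:
        return "Failed"
    return "Skipped"

print(aggregate_status(["Failed", "Passed"]))    # Passed
print(aggregate_status(["Skipped", "Failed"]))   # Failed
print(aggregate_status(["Skipped", "Skipped"]))  # Skipped
```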

## The default branch is not correct{% #the-default-branch-is-not-correct %}

### How it impacts the product{% #how-it-impacts-the-product %}

The default branch is used to power some features of the product, namely:

- Default branches list on the Tests page: This list only displays default branches. Setting the wrong default branch can result in missing or incorrect data in the default branches list.

- New flaky tests: Tests that are not currently classified as flaky in the default branch. If the default branch is not properly set, this can lead to an incorrect count of detected new flaky tests.

- Pipelines list: The pipelines list only displays default branches. Setting the wrong default branch can result in missing or incorrect data in the pipelines list.

### How to fix the default branch{% #how-to-fix-the-default-branch %}

If you have admin access, you can update the default branch from the [Repository Settings Page](https://app.datadoghq.com/source-code/repositories).

## Execution history is not available for a specific test case{% #execution-history-is-not-available-for-a-specific-test-case %}

Other symptoms of the same issue include:

- A test case is not classified as flaky even if it exhibits flakiness.
- A test case cannot be skipped by [Test Impact Analysis](https://docs.datadoghq.com/tests/test_impact_analysis/).

It is likely that the [test case configuration](https://docs.datadoghq.com/tests/#parameterized-test-configurations) is unstable because one or more of the test parameters are non-deterministic (for instance, they include the current date or a random number).

The best way to fix this is to make sure that the test parameters are the same between test runs.
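For illustration (the parameter names are hypothetical), compare an unstable parameter set with a stable one:

```python
import datetime
import random

# Unstable: these values differ on every run, so each execution appears to
# Datadog as a new, unrelated test configuration.
unstable_params = {"as_of": str(datetime.date.today()), "seed": random.randint(0, 10**6)}

# Stable: identical between runs, so execution history can be correlated.
stable_params = {"as_of": "2024-01-01", "seed": 42}
```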

## Session history, performance, or code coverage tabs only show a single execution{% #session-history-performance-or-code-coverage-tab-only-show-a-single-execution %}

This is likely caused by an unstable test session fingerprint. There's a set of parameters that Datadog checks to establish correspondence between test sessions. The test command used to execute the tests is one of them. If the test command contains a string that changes for every execution, such as a temporary folder, Datadog considers the sessions to be unrelated to each other. For example:

- `yarn test --temp-dir=/var/folders/t1/rs2htfh55mz9px2j4prmpg_c0000gq/T`
- `mvn test --temp-dir=/var/folders/t1/rs2htfh55mz9px2j4prmpg_c0000gq/T`
- `bundle exec rspec --temp-dir=/var/folders/t1/rs2htfh55mz9px2j4prmpg_c0000gq/T`
- `dotnet test --results-directory /var/folders/t1/rs2htfh55mz9px2j4prmpg_c0000gq/T`

This can be solved with the `DD_TEST_SESSION_NAME` environment variable. Set `DD_TEST_SESSION_NAME` to a value that identifies a group of tests and stays stable between runs. Example values for this tag include:

- `unit-tests`
- `integration-tests`
- `smoke-tests`
- `flaky-tests`
- `ui-tests`
- `backend-tests`
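For example, the session name can stay fixed while the command itself still varies between runs (the test command below is illustrative):

```python
import os
import tempfile

# The temp dir changes on every run, which would make the raw test command
# unstable; DD_TEST_SESSION_NAME gives Datadog a stable session identifier.
env = dict(os.environ)
env["DD_TEST_SESSION_NAME"] = "unit-tests"

cmd = ["yarn", "test", f"--temp-dir={tempfile.mkdtemp()}"]  # command varies per run
# subprocess.run(cmd, env=env, check=True)  # session name stays stable
```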

## Test Impact Analysis does not show any time saved{% #test-impact-analysis-does-not-show-any-time-saved %}

This is also caused by an unstable test session fingerprint. See [Session history, performance or code coverage tab only show a single execution](#session-history-performance-or-code-coverage-tab-only-show-a-single-execution) for more information.

## Flaky test management tags are missing or have an unexpected order in test events{% #flaky-test-management-tags-are-missing-or-have-an-unexpected-order-in-test-events %}

When retrying a flaky test multiple times within a short span of time (less than a second), test run events might contain unexpected `@test.is_flaky`, `@test.is_known_flaky`, or `@test.is_new_flaky` tags. This is a known limitation that occurs due to a race condition in the flaky test detection system. In some cases, test run events might be processed out of order, causing the tags to not follow the logical order of events.

## Further reading{% #further-reading %}

- [Learn how to monitor your CI tests](https://docs.datadoghq.com/continuous_integration/tests)
