---
title: Python Tests
description: Datadog, the leading service for cloud-scale monitoring.
breadcrumbs: >-
  Docs > Test Optimization in Datadog > Configure Test Optimization > Python
  Tests
---

# Python Tests

{% callout %}
# Important note for users on the following Datadog sites: app.ddog-gov.com

{% alert level="danger" %}
This product is not supported for your selected [Datadog site](https://docs.datadoghq.com/getting_started/site).
{% /alert %}

{% /callout %}

## Compatibility{% #compatibility %}

Supported languages:

| Language | Version |
| -------- | ------- |
| Python 2 | \>= 2.7 |
| Python 3 | \>= 3.6 |

Supported test frameworks:

| Test Framework     | Version   |
| ------------------ | --------- |
| `pytest`           | \>= 3.0.0 |
| `pytest-benchmark` | \>= 3.1.0 |
| `unittest`         | \>= 3.7   |

## Configuring reporting method{% #configuring-reporting-method %}

To report test results to Datadog, you need to configure the Datadog Python library:

{% tab title="CI Provider with Auto-Instrumentation Support" %}
We support auto-instrumentation for the following CI providers:

| CI Provider    | Auto-Instrumentation method                                                                                                                         |
| -------------- | --------------------------------------------------------------------------------------------------------------------------------------------------- |
| GitHub Actions | [Datadog Test Visibility Github Action](https://github.com/marketplace/actions/configure-datadog-test-visibility)                                   |
| Jenkins        | [UI-based configuration](https://docs.datadoghq.com/continuous_integration/pipelines/jenkins/#enable-test-optimization) with Datadog Jenkins plugin |
| GitLab         | [Datadog Test Visibility GitLab Script](https://github.com/DataDog/test-visibility-gitlab-script)                                                   |
| CircleCI       | [Datadog Test Visibility CircleCI Orb](https://circleci.com/orbs/registry/orb/datadog/test-visibility-circleci-orb)                                 |

If you are using auto-instrumentation for one of these providers, you can skip the rest of the setup steps below.
{% /tab %}

{% tab title="Other Cloud CI Provider" %}
If you are using a cloud CI provider without access to the underlying worker nodes, such as GitHub Actions or CircleCI, configure the library to use the Agentless mode. For this, set the following environment variables:

{% dl %}

{% dt %}
`DD_CIVISIBILITY_AGENTLESS_ENABLED=true` (Required)
{% /dt %}

{% dd %}
Enables or disables Agentless mode.**Default**: `false`
{% /dd %}

{% dt %}
`DD_API_KEY` (Required)
{% /dt %}

{% dd %}
The [Datadog API key](https://app.datadoghq.com/organization-settings/api-keys) used to upload the test results.**Default**: `(empty)`
{% /dd %}

{% /dl %}

Additionally, configure the [Datadog site](https://docs.datadoghq.com/getting_started/site/) to which you want to send data.

{% dl %}

{% dt %}
`DD_SITE` (Required)
{% /dt %}

{% dd %}
The [Datadog site](https://docs.datadoghq.com/getting_started/site/) to upload results to.**Default**: `datadoghq.com`
{% /dd %}

{% /dl %}
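Taken together, a minimal Agentless setup can be sketched as follows (the API key value is a placeholder):

```shell
# Hypothetical values: enable Agentless mode and target the default US1 site
export DD_CIVISIBILITY_AGENTLESS_ENABLED=true
export DD_API_KEY="<your-api-key>"
export DD_SITE="datadoghq.com"
# Then run your tests, for example: pytest --ddtrace
```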

{% /tab %}

{% tab title="On-Premises CI Provider" %}
If you are running tests on an on-premises CI provider, such as Jenkins or self-managed GitLab CI, install the Datadog Agent on each worker node by following the [Agent installation instructions](https://docs.datadoghq.com/agent/). This is the recommended option as it allows you to automatically link test results to [logs](https://docs.datadoghq.com/tracing/other_telemetry/connect_logs_and_traces/) and [underlying host metrics](https://docs.datadoghq.com/infrastructure/).

If you are using a Kubernetes executor, Datadog recommends using the [Datadog Operator](https://docs.datadoghq.com/containers/datadog_operator/). The operator includes [Datadog Admission Controller](https://docs.datadoghq.com/agent/cluster_agent/admission_controller/) which can automatically [inject the tracer library](https://docs.datadoghq.com/tracing/trace_collection/library_injection_local/?tab=kubernetes) into the build pods. **Note:** If you use the Datadog Operator, there is no need to download and inject the tracer library since the Admission Controller can do this for you, so you can skip the corresponding step below. However, you still need to make sure that your pods set the environment variables or command-line parameters necessary to enable Test Visibility.

If you are not using Kubernetes or can't use the Datadog Admission Controller and the CI provider is using a container-based executor, set the `DD_TRACE_AGENT_URL` environment variable (which defaults to `http://localhost:8126`) in the build container running the tracer to an endpoint that is accessible from within that container. **Note:** Inside the build container, `localhost` references the container itself, not the underlying worker node or any container where the Agent might be running.

`DD_TRACE_AGENT_URL` includes the protocol and port (for example, `http://localhost:8126`). It takes precedence over `DD_AGENT_HOST` and `DD_TRACE_AGENT_PORT`, and is the recommended parameter for configuring the Datadog Agent's URL for CI Visibility.
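For example, in a container-based executor the variable might point at the node or sidecar running the Agent rather than `localhost` (the address below is a placeholder):

```shell
# Hypothetical address of the host or sidecar container running the Datadog Agent
export DD_TRACE_AGENT_URL="http://169.254.1.10:8126"
# Then run your tests, for example: pytest --ddtrace
```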

If you still have issues connecting to the Datadog Agent, use the Agentless Mode. **Note:** When using this method, tests are not correlated with [logs](https://docs.datadoghq.com/tracing/other_telemetry/connect_logs_and_traces/) and [infrastructure metrics](https://docs.datadoghq.com/infrastructure/).
{% /tab %}

## Installing the Python tracer{% #installing-the-python-tracer %}

Install the Python tracer by running:

```shell
pip install -U ddtrace
```

For more information, see the [Python tracer installation documentation](https://docs.datadoghq.com/tracing/trace_collection/dd_libraries/python/).

## Instrumenting your tests{% #instrumenting-your-tests %}

{% tab title="pytest" %}
To enable instrumentation of `pytest` tests, add the `--ddtrace` option when running `pytest`.

```shell
pytest --ddtrace
```

If you also want to enable the rest of the APM integrations to get more information in your flamegraph, add the `--ddtrace-patch-all` option:

```shell
pytest --ddtrace --ddtrace-patch-all
```

For additional configuration, see [Configuration settings](#configuration-settings).

### Adding custom tags to tests{% #adding-custom-tags-to-tests %}

To add custom tags to your tests, declare `ddspan` as an argument in your test:

```python
# Declare `ddspan` as an argument to your test
def test_simple_case(ddspan):
    # Set your tags
    ddspan.set_tag("test_owner", "my_team")
    # test continues normally
    # ...
```

To create filters or `group by` fields for these tags, you must first create facets. For more information about adding tags, see the [Adding Tags](https://docs.datadoghq.com/tracing/trace_collection/custom_instrumentation/python?tab=locally#adding-tags) section of the Python custom instrumentation documentation.

### Adding custom measures to tests{% #adding-custom-measures-to-tests %}

As with tags, add custom measures to your tests by using the `ddspan` fixture:

```python
# Declare `ddspan` as an argument to your test
def test_simple_case(ddspan):
    # Set your measures
    ddspan.set_tag("memory_allocations", 16)
    # test continues normally
    # ...
```

Read more about custom measures in the [Add Custom Measures Guide](https://docs.datadoghq.com/tests/guides/add_custom_measures/?tab=python).
{% /tab %}

{% tab title="pytest-benchmark" %}
To instrument your benchmark tests with `pytest-benchmark`, run them with the `--ddtrace` option; Datadog detects `pytest-benchmark` metrics automatically:

```python
def square_value(value):
    return value * value


def test_square_value(benchmark):
    result = benchmark(square_value, 5)
    assert result == 25
```

For additional configuration, see [Configuration settings](#configuration-settings).
{% /tab %}

{% tab title="unittest" %}
To enable instrumentation of `unittest` tests, prepend `ddtrace-run` to your `unittest` command:

```shell
ddtrace-run python -m unittest
```

Alternatively, to enable `unittest` instrumentation manually, use `patch()`:

```python
from ddtrace import patch
import unittest

patch(unittest=True)


class MyTest(unittest.TestCase):
    def test_will_pass(self):
        assert True
```

For additional configuration, see [Configuration settings](#configuration-settings).
{% /tab %}

{% tab title="Manual instrumentation (beta)" %}
### Manual testing API{% #manual-testing-api %}

{% alert level="warning" %}
The Test Optimization manual testing API is in **beta** and subject to change.
{% /alert %}

As of version `2.13.0`, the [Datadog Python tracer](https://github.com/DataDog/dd-trace-py) provides the Test Optimization API (`ddtrace.ext.test_visibility`) to submit test optimization results as needed.

#### API execution{% #api-execution %}

The API uses classes to provide namespaced methods to submit test optimization events.

Test execution has two phases:

- Discovery: inform the API what items to expect
- Execution: submit results (using start and finish calls)

The distinct discovery and execution phases allow for a gap between the test runner process collecting the tests and the tests starting.

API users must provide consistent identifiers (described below) that are used as references for Test Optimization items within the API's state storage.

##### Enable `test_visibility`{% #enable-test_visibility %}

You must call the `ddtrace.ext.test_visibility.api.enable_test_visibility()` function before using the Test Optimization API.

Call the `ddtrace.ext.test_visibility.api.disable_test_visibility()` function before process shutdown to ensure proper flushing of data.

#### Domain model{% #domain-model %}

The API is based around four concepts: test session, test module, test suite, and test.

Modules, suites, and tests form a hierarchy in the Python Test Optimization API, represented by the item identifier's parent relationship.

##### Test session{% #test-session %}

A test session represents a project's test execution, typically corresponding to the execution of a test command. Only one session can be discovered, started, and finished during the execution of a Test Optimization program.

Call `ddtrace.ext.test_visibility.api.TestSession.discover()` to discover the session, passing the test command, a given framework name, and version.

Call `ddtrace.ext.test_visibility.api.TestSession.start()` to start the session.

When tests have completed, call `ddtrace.ext.test_visibility.api.TestSession.finish()`.

##### Test module{% #test-module %}

A test module represents a smaller unit of work within a project's tests run (a directory, for example).

Call `ddtrace.ext.test_visibility.api.TestModuleId()`, providing the module name as a parameter, to create a `TestModuleId`.

Call `ddtrace.ext.test_visibility.api.TestModule.discover()`, passing the `TestModuleId` object as an argument, to discover the module.

Call `ddtrace.ext.test_visibility.api.TestModule.start()`, passing the `TestModuleId` object as an argument, to start the module.

After all the children items within the module have completed, call `ddtrace.ext.test_visibility.api.TestModule.finish()`, passing the `TestModuleId` object as an argument.

##### Test suite{% #test-suite %}

A test suite represents a subset of tests within a project's modules (`.py` file, for example).

Call `ddtrace.ext.test_visibility.api.TestSuiteId()`, providing the parent module's `TestModuleId` and the suite's name as arguments, to create a `TestSuiteId`.

Call `ddtrace.ext.test_visibility.api.TestSuite.discover()`, passing the `TestSuiteId` object as an argument, to discover the suite.

Call `ddtrace.ext.test_visibility.api.TestSuite.start()`, passing the `TestSuiteId` object as an argument, to start the suite.

After all the child items within the suite have completed, call `ddtrace.ext.test_visibility.api.TestSuite.finish()`, passing the `TestSuiteId` object as an argument.

##### Test{% #test %}

A test represents a single test case that is executed as part of a test suite.

Call `ddtrace.ext.test_visibility.api.TestId()`, providing the parent suite's `TestSuiteId` and the test's name as arguments, to create a `TestId`. The `TestId()` method accepts a JSON-parseable string as the optional `parameters` argument. The `parameters` argument can be used to distinguish parametrized tests that have the same name, but different parameter values.

Call `ddtrace.ext.test_visibility.api.Test.discover()`, passing the `TestId` object as an argument, to discover the test. The `Test.discover()` classmethod accepts a string as the optional `resource` parameter, which defaults to the `TestId`'s `name`.

Call `ddtrace.ext.test_visibility.api.Test.start()`, passing the `TestId` object as an argument, to start the test.

Call `ddtrace.ext.test_visibility.api.Test.mark_pass()`, passing the `TestId` object as an argument, to mark that the test passed.

Call `ddtrace.ext.test_visibility.api.Test.mark_fail()`, passing the `TestId` object as an argument, to mark that the test failed. `mark_fail()` accepts an optional `TestExcInfo` object as the `exc_info` parameter.

Call `ddtrace.ext.test_visibility.api.Test.mark_skip()`, passing the `TestId` object as an argument, to mark that the test was skipped. `mark_skip()` accepts an optional string as the `skip_reason` parameter.

###### Exception information{% #exception-information %}

Information about exceptions encountered during a test's failure is passed to the `ddtrace.ext.test_visibility.api.Test.mark_fail()` classmethod through its optional `exc_info` parameter.

The `ddtrace.ext.test_visibility.api.TestExcInfo()` method takes three positional parameters:

- `exc_type`: the type of the exception encountered
- `exc_value`: the `BaseException` object for the exception
- `exc_traceback`: the `Traceback` object for the exception

###### Codeowner information{% #codeowner-information %}

The `ddtrace.ext.test_visibility.api.Test.discover()` classmethod accepts an optional list of strings as the `codeowners` parameter.

###### Test source file information{% #test-source-file-information %}

The `ddtrace.ext.test_visibility.api.Test.discover()` classmethod accepts an optional `TestSourceFileInfo` object as the `source_file_info` parameter. A `TestSourceFileInfo` object represents the path and optionally, the start and end lines for a given test.

The `ddtrace.ext.test_visibility.api.TestSourceFileInfo()` method accepts three positional parameters:

- `path`: a `pathlib.Path` object (made relative to the repo root by the Test Optimization API)
- `start_line`: an optional integer representing the start line of the test in the file
- `end_line`: an optional integer representing the end line of the test in the file

###### Setting parameters after test discovery{% #setting-parameters-after-test-discovery %}

The `ddtrace.ext.test_visibility.api.Test.set_parameters()` classmethod accepts a `TestId` object and a JSON-parseable string as arguments, and sets the `parameters` for the test.

**Note:** This overwrites the parameters associated with the test, but does not modify the `TestId` object's `parameters` field.

Setting parameters after a test has been discovered requires that the `TestId` object be unique even without the `parameters` field being set.

#### Code example{% #code-example %}

```python
from ddtrace.ext.test_visibility import api
import pathlib
import sys

if __name__ == "__main__":
    # Enable the Test Optimization service
    api.enable_test_visibility()

    # Discover items
    api.TestSession.discover("manual_test_api_example", "my_manual_framework", "1.0.0")
    test_module_1_id = api.TestModuleId("module_1")
    api.TestModule.discover(test_module_1_id)

    test_suite_1_id = api.TestSuiteId(test_module_1_id, "suite_1")
    api.TestSuite.discover(test_suite_1_id)

    test_1_id = api.TestId(test_suite_1_id, "test_1")
    api.Test.discover(test_1_id)

    # A parameterized test with codeowners and a source file
    test_2_codeowners = ["team_1", "team_2"]
    test_2_source_info = api.TestSourceFileInfo(pathlib.Path("/path/to_my/tests.py"), 16, 35)

    parametrized_test_2_a_id = api.TestId(
        test_suite_1_id,
        "test_2",
        parameters='{"parameter_1": "value_is_a"}'
    )
    api.Test.discover(
        parametrized_test_2_a_id,
        codeowners=test_2_codeowners,
        source_file_info=test_2_source_info,
        resource="overridden resource name A",
    )

    parametrized_test_2_b_id = api.TestId(
        test_suite_1_id,
        "test_2",
        parameters='{"parameter_1": "value_is_b"}'
    )
    api.Test.discover(
        parametrized_test_2_b_id,
        codeowners=test_2_codeowners,
        source_file_info=test_2_source_info,
        resource="overridden resource name B",
    )

    test_3_id = api.TestId(test_suite_1_id, "test_3")
    api.Test.discover(test_3_id)

    test_4_id = api.TestId(test_suite_1_id, "test_4")
    api.Test.discover(test_4_id)


    # Start and execute items
    api.TestSession.start()

    api.TestModule.start(test_module_1_id)
    api.TestSuite.start(test_suite_1_id)

    # test_1 passes successfully
    api.Test.start(test_1_id)
    api.Test.mark_pass(test_1_id)

    # test_2's first parametrized test succeeds, but the second fails without attaching exception info
    api.Test.start(parametrized_test_2_a_id)
    api.Test.mark_pass(parametrized_test_2_a_id)

    api.Test.start(parametrized_test_2_b_id)
    api.Test.mark_fail(parametrized_test_2_b_id)

    # test_3 is skipped
    api.Test.start(test_3_id)
    api.Test.mark_skip(test_3_id, skip_reason="example skipped test")

    # test_4 fails, and attaches exception info
    api.Test.start(test_4_id)
    try:
        raise ValueError("this test failed")
    except ValueError:
        api.Test.mark_fail(test_4_id, exc_info=api.TestExcInfo(*sys.exc_info()))

    # Finish suites and modules
    api.TestSuite.finish(test_suite_1_id)
    api.TestModule.finish(test_module_1_id)
    api.TestSession.finish()
```

For additional configuration, see [Configuration settings](#configuration-settings).
{% /tab %}

## Configuration settings{% #configuration-settings %}

The following is a list of the most important configuration settings that can be used with the tracer, either in code or using environment variables:

{% dl %}

{% dt %}
`DD_TEST_SESSION_NAME`
{% /dt %}

{% dd %}
Identifies a group of tests, such as `integration-tests`, `unit-tests` or `smoke-tests`.**Environment variable**: `DD_TEST_SESSION_NAME`**Default**: (CI job name + test command)**Example**: `unit-tests`, `integration-tests`, `smoke-tests`
{% /dd %}

{% dt %}
`DD_SERVICE`
{% /dt %}

{% dd %}
Name of the service or library under test.**Environment variable**: `DD_SERVICE`**Default**: `pytest`**Example**: `my-python-app`
{% /dd %}

{% dt %}
`DD_ENV`
{% /dt %}

{% dd %}
Name of the environment where tests are being run.**Environment variable**: `DD_ENV`**Default**: `none`**Examples**: `local`, `ci`
{% /dd %}

{% /dl %}

For more information about `service` and `env` reserved tags, see [Unified Service Tagging](https://docs.datadoghq.com/getting_started/tagging/unified_service_tagging).
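For example, these settings can be combined as environment variables in a CI job (the values are illustrative):

```shell
# Hypothetical values: identify the service, environment, and test session
export DD_TEST_SESSION_NAME="unit-tests"
export DD_SERVICE="my-python-app"
export DD_ENV="ci"
# Then run your tests, for example: pytest --ddtrace
```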

The following environment variable can be used to configure the location of the Datadog Agent:

{% dl %}

{% dt %}
`DD_TRACE_AGENT_URL`
{% /dt %}

{% dd %}
Datadog Agent URL for trace collection in the form `http://hostname:port`.**Default**: `http://localhost:8126`
{% /dd %}

{% /dl %}

All other [Datadog Tracer configuration](https://docs.datadoghq.com/tracing/trace_collection/library_config/python/?tab=containers#configuration) options can also be used.

## Collecting Git metadata{% #collecting-git-metadata %}

Datadog uses Git information for visualizing your test results and grouping them by repository, branch, and commit. Git metadata is automatically collected by the test instrumentation from CI provider environment variables and the local `.git` folder in the project path, if available.

If you are running tests in non-supported CI providers or with no `.git` folder, you can set the Git information manually using environment variables. These environment variables take precedence over any auto-detected information. Set the following environment variables to provide Git information:

{% dl %}

{% dt %}
`DD_GIT_REPOSITORY_URL`
{% /dt %}

{% dd %}
URL of the repository where the code is stored. Both HTTP and SSH URLs are supported.**Example**: `git@github.com:MyCompany/MyApp.git`, `https://github.com/MyCompany/MyApp.git`
{% /dd %}

{% dt %}
`DD_GIT_BRANCH`
{% /dt %}

{% dd %}
Git branch being tested. Leave empty if providing tag information instead.**Example**: `develop`
{% /dd %}

{% dt %}
`DD_GIT_TAG`
{% /dt %}

{% dd %}
Git tag being tested (if applicable). Leave empty if providing branch information instead.**Example**: `1.0.1`
{% /dd %}

{% dt %}
`DD_GIT_COMMIT_SHA`
{% /dt %}

{% dd %}
Full commit hash.**Example**: `a18ebf361cc831f5535e58ec4fae04ffd98d8152`
{% /dd %}

{% dt %}
`DD_GIT_COMMIT_MESSAGE`
{% /dt %}

{% dd %}
Commit message.**Example**: `Set release number`
{% /dd %}

{% dt %}
`DD_GIT_COMMIT_AUTHOR_NAME`
{% /dt %}

{% dd %}
Commit author name.**Example**: `John Smith`
{% /dd %}

{% dt %}
`DD_GIT_COMMIT_AUTHOR_EMAIL`
{% /dt %}

{% dd %}
Commit author email.**Example**: `john@example.com`
{% /dd %}

{% dt %}
`DD_GIT_COMMIT_AUTHOR_DATE`
{% /dt %}

{% dd %}
Commit author date in ISO 8601 format.**Example**: `2021-03-12T16:00:28Z`
{% /dd %}

{% dt %}
`DD_GIT_COMMIT_COMMITTER_NAME`
{% /dt %}

{% dd %}
Commit committer name.**Example**: `Jane Smith`
{% /dd %}

{% dt %}
`DD_GIT_COMMIT_COMMITTER_EMAIL`
{% /dt %}

{% dd %}
Commit committer email.**Example**: `jane@example.com`
{% /dd %}

{% dt %}
`DD_GIT_COMMIT_COMMITTER_DATE`
{% /dt %}

{% dd %}
Commit committer date in ISO 8601 format.**Example**: `2021-03-12T16:00:28Z`
{% /dd %}

{% /dl %}
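For example, a CI job running without a `.git` folder might export a minimal set of Git metadata like this (the values are illustrative):

```shell
# Hypothetical values: provide Git metadata manually when auto-detection is unavailable
export DD_GIT_REPOSITORY_URL="https://github.com/MyCompany/MyApp.git"
export DD_GIT_BRANCH="develop"
export DD_GIT_COMMIT_SHA="a18ebf361cc831f5535e58ec4fae04ffd98d8152"
```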

## Best practices{% #best-practices %}

### Test session name `DD_TEST_SESSION_NAME`{% #test-session-name-dd_test_session_name %}

Use `DD_TEST_SESSION_NAME` to define the name of the test session and the related group of tests. Examples of values for this tag would be:

- `unit-tests`
- `integration-tests`
- `smoke-tests`
- `flaky-tests`
- `ui-tests`
- `backend-tests`

If `DD_TEST_SESSION_NAME` is not specified, the default value used is a combination of the:

- CI job name
- Command used to run the tests (such as `pytest --ddtrace`)

The test session name needs to be unique within a repository to help you distinguish different groups of tests.

#### When to use `DD_TEST_SESSION_NAME`{% #when-to-use-dd_test_session_name %}

There's a set of parameters that Datadog checks to establish correspondence between test sessions. The test command used to execute the tests is one of them. If the test command contains a string that changes for every execution, such as a temporary folder, Datadog considers the sessions to be unrelated to each other. For example:

- `pytest --temp-dir=/var/folders/t1/rs2htfh55mz9px2j4prmpg_c0000gq/T`

Datadog recommends setting `DD_TEST_SESSION_NAME` explicitly if your test commands vary between executions.
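In that case, pinning the session name keeps Datadog from treating each run as a new, unrelated session (the value is illustrative):

```shell
# Hypothetical value: a stable session name independent of varying command arguments
export DD_TEST_SESSION_NAME="unit-tests"
# Then run your tests, for example: pytest --ddtrace
```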

## Known limitations{% #known-limitations %}

{% tab title="pytest" %}
Plugins for `pytest` that alter test execution may cause unexpected behavior.

### Parallelization{% #parallelization %}

Plugins that introduce parallelization to `pytest` (such as [`pytest-xdist`](https://pypi.org/project/pytest-xdist/) or [`pytest-forked`](https://pypi.org/project/pytest-forked/)) create one session event for each parallelized instance.

Several issues can occur when these plugins are used together with `ddtrace`, although they have been resolved for `pytest-xdist` in recent versions of `dd-trace-py` (3.12.6 and later). For example, a session, module, or suite may pass even when individual tests fail; likewise, all tests may pass while the suite, module, or session fails. This happens because these plugins create worker subprocesses, and spans created in the parent process may not reflect the results from the child processes. For this reason, **using `ddtrace` together with `pytest-forked` is not supported at the moment, and `pytest-xdist` is supported only with `ddtrace>=3.12.6`.**

Each worker reports test results to Datadog independently, so tests from the same module running in different processes generate separate module or suite events.

The overall count of test events (and their correctness) remains unaffected. With `pytest-forked`, individual session, module, or suite events can have results that are inconsistent with other events in the same `pytest` run.

### Test ordering{% #test-ordering %}

Plugins that change the ordering of test execution (such as [`pytest-randomly`](https://pypi.org/project/pytest-randomly/)) can create multiple module or suite events. The duration and results of module or suite events may also be inconsistent with the results reported by `pytest`.

The overall count of test events (and their correctness) remains unaffected.
{% /tab %}

{% tab title="unittest" %}
If your `unittest` tests are executed in parallel, the instrumentation may break and test optimization may be affected.

To avoid this, Datadog recommends running `unittest` tests in a single process.
{% /tab %}

## Further reading{% #further-reading %}

- [Forwarding Environment Variables for Tests in Containers](https://docs.datadoghq.com/continuous_integration/tests/containers/)
- [Explore Test Results and Performance](https://docs.datadoghq.com/continuous_integration/tests)
- [Troubleshooting Test Optimization](https://docs.datadoghq.com/tests/troubleshooting/)
