
Compatibility

Supported languages:

Language      Version
Python 2      >= 2.7
Python 3      >= 3.6

Supported test frameworks:

Test Framework        Version
pytest                >= 3.0.0
pytest-benchmark      >= 3.1.0
unittest              >= 3.7

Configuring reporting method

To report test results to Datadog, you need to configure the Datadog Python library:

We support auto-instrumentation for the following CI providers:

CI Provider         Auto-instrumentation method
GitHub Actions      Datadog Test Visibility GitHub Action
Jenkins             UI-based configuration with the Datadog Jenkins plugin
GitLab              Datadog Test Visibility GitLab Script
CircleCI            Datadog Test Visibility CircleCI Orb

If you are using auto-instrumentation for one of these providers, you can skip the rest of the setup steps below.

If you are using a cloud CI provider without access to the underlying worker nodes, such as GitHub Actions or CircleCI, configure the library to use Agentless mode. To do this, set the following environment variables:

DD_CIVISIBILITY_AGENTLESS_ENABLED=true (Required)
Enables or disables Agentless mode.
Default: false
DD_API_KEY (Required)
The Datadog API key used to upload the test results.
Default: (empty)

Additionally, configure which Datadog site to send the data to.

DD_SITE (Required)
The Datadog site to upload results to.
Default: datadoghq.com
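
For example, an Agentless pytest run could look like the following sketch, where the service name is illustrative and DD_API_KEY is assumed to be provided as a CI secret:

DD_CIVISIBILITY_AGENTLESS_ENABLED=true DD_SITE=datadoghq.com DD_SERVICE=my-python-app DD_ENV=ci pytest --ddtrace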

If you are running tests on an on-premises CI provider, such as Jenkins or self-managed GitLab CI, install the Datadog Agent on each worker node by following the Agent installation instructions. This is the recommended option because it allows test results to be automatically linked to logs and to the underlying host metrics.

If you are using a Kubernetes executor, Datadog recommends using the Datadog Operator. The Operator includes the Datadog Admission Controller, which can automatically inject the tracer library into build pods. Note: If you use the Datadog Operator, you can skip the steps below, because the Admission Controller downloads and injects the tracer library. However, you still need to set the environment variables or command-line parameters required to enable Test Visibility in your pods.

If you are not using Kubernetes or cannot use the Datadog Admission Controller, and your CI provider uses container-based executors, set the DD_TRACE_AGENT_URL environment variable (which defaults to http://localhost:8126) in the build container running the tracer to an endpoint that is reachable from within that container. Note: Inside the build container, localhost refers to the container itself, not to the underlying worker node or to the container the Agent is running in.

DD_TRACE_AGENT_URL includes the protocol and port (for example, http://localhost:8126), takes precedence over DD_AGENT_HOST and DD_TRACE_AGENT_PORT, and is the recommended configuration parameter for setting the Datadog Agent URL for CI Visibility.
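
For example, if the Agent is reachable from the build container at a hypothetical hostname such as datadog-agent, the tests could be run as follows:

DD_TRACE_AGENT_URL=http://datadog-agent:8126 DD_SERVICE=my-python-app DD_ENV=ci pytest --ddtrace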

If you still have issues connecting to the Datadog Agent, use Agentless mode. Note: When using this method, tests are not correlated with logs or infrastructure metrics.

Installing the Python tracer

Install the Python tracer by running:

pip install -U ddtrace

For more information, see the Python tracer installation documentation.

Instrumenting your tests

To enable instrumentation of pytest tests, add the --ddtrace option when running pytest. Specify the name of the service or library under test in the DD_SERVICE environment variable, and the environment where tests are being run (for example, local when running tests on a developer workstation, or ci when running them on a CI provider) in the DD_ENV environment variable:

DD_SERVICE=my-python-app DD_ENV=ci pytest --ddtrace

If you also want to enable the rest of the APM integrations to get more information in your flamegraph, add the --ddtrace-patch-all option:

DD_SERVICE=my-python-app DD_ENV=ci pytest --ddtrace --ddtrace-patch-all

Adding custom tags to tests

To add custom tags to your tests, declare ddspan as an argument in your test:

from ddtrace import tracer

# Declare `ddspan` as argument to your test
def test_simple_case(ddspan):
    # Set your tags
    ddspan.set_tag("test_owner", "my_team")
    # test continues normally
    # ...

To create filters or group by fields for these tags, you must first create facets. For more information about adding tags, see the Adding Tags section of the Python custom instrumentation documentation.

Adding custom measures to tests

Just like tags, to add custom measures to your tests, use the current active span:

from ddtrace import tracer

# Declare `ddspan` as an argument to your test
def test_simple_case(ddspan):
    # Set your measures
    ddspan.set_tag("memory_allocations", 16)
    # test continues normally
    # ...

Read more about custom measures in the Add Custom Measures Guide.

To instrument your benchmark tests with pytest-benchmark, run pytest with the --ddtrace option; Datadog detects metrics from pytest-benchmark automatically:

def square_value(value):
    return value * value


def test_square_value(benchmark):
    result = benchmark(square_value, 5)
    assert result == 25

To enable instrumentation of unittest tests, run your tests by prepending ddtrace-run to your unittest command.

Make sure to specify the name of the service or library under test in the DD_SERVICE environment variable. Additionally, you may declare the environment where tests are being run in the DD_ENV environment variable:

DD_SERVICE=my-python-app DD_ENV=ci ddtrace-run python -m unittest

Alternatively, if you wish to enable unittest instrumentation manually, use patch() to enable the integration:

from ddtrace import patch
import unittest
patch(unittest=True)

class MyTest(unittest.TestCase):
    def test_will_pass(self):
        assert True

Manual testing API

Note: The Test Visibility manual testing API is in beta and subject to change.

As of version 2.13.0, the Datadog Python tracer provides the Test Visibility API (ddtrace.ext.test_visibility) to submit test visibility results as needed.

API execution

The API uses classes to provide namespaced methods to submit test visibility events.

Test execution has two phases:

  • Discovery: inform the API what items to expect
  • Execution: submit results (using start and finish calls)

The distinct discovery and execution phases allow for a gap between the test runner process collecting the tests and the tests starting.

API users must provide consistent identifiers (described below) that are used as references for Test Visibility items within the API’s state storage.

Enable test_visibility

You must call the ddtrace.ext.test_visibility.api.enable_test_visibility() function before using the Test Visibility API.

Call the ddtrace.ext.test_visibility.api.disable_test_visibility() function before process shutdown to ensure proper flushing of data.
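
For example, a minimal sketch of wrapping a manual test run looks like this:

from ddtrace.ext.test_visibility import api

api.enable_test_visibility()

# ... discover, start, and finish sessions, modules, suites, and tests ...

api.disable_test_visibility()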

Domain model

The API is based around four concepts: test session, test module, test suite, and test.

Modules, suites, and tests form a hierarchy in the Python Test Visibility API, represented by the item identifier’s parent relationship.

Test session

A test session represents a project's test execution, typically corresponding to the execution of a test command. Only one session can be discovered, started, and finished during the execution of a Test Visibility program.

Call ddtrace.ext.test_visibility.api.TestSession.discover() to discover the session, passing the test command, a given framework name, and version.

Call ddtrace.ext.test_visibility.api.TestSession.start() to start the session.

When tests have completed, call ddtrace.ext.test_visibility.api.TestSession.finish().
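
With Test Visibility already enabled, a session's lifecycle looks like the following sketch (the test command, framework name, and version are illustrative values):

from ddtrace.ext.test_visibility import api

api.TestSession.discover("pytest", "my_framework", "1.0.0")
api.TestSession.start()
# ... discover, start, and finish modules, suites, and tests ...
api.TestSession.finish()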

Test module

A test module represents a smaller unit of work within a project's test run (a directory, for example).

Call ddtrace.ext.test_visibility.api.TestModuleId(), providing the module name as a parameter, to create a TestModuleId.

Call ddtrace.ext.test_visibility.api.TestModule.discover(), passing the TestModuleId object as an argument, to discover the module.

Call ddtrace.ext.test_visibility.api.TestModule.start(), passing the TestModuleId object as an argument, to start the module.

After all the child items within the module have completed, call ddtrace.ext.test_visibility.api.TestModule.finish(), passing the TestModuleId object as an argument.

Test suite

A test suite represents a subset of tests within a project's modules (a .py file, for example).

Call ddtrace.ext.test_visibility.api.TestSuiteId(), providing the parent module’s TestModuleId and the suite’s name as arguments, to create a TestSuiteId.

Call ddtrace.ext.test_visibility.api.TestSuite.discover(), passing the TestSuiteId object as an argument, to discover the suite.

Call ddtrace.ext.test_visibility.api.TestSuite.start(), passing the TestSuiteId object as an argument, to start the suite.

After all the child items within the suite have completed, call ddtrace.ext.test_visibility.api.TestSuite.finish(), passing the TestSuiteId object as an argument.

Test

A test represents a single test case that is executed as part of a test suite.

Call ddtrace.ext.test_visibility.api.TestId(), providing the parent suite’s TestSuiteId and the test’s name as arguments, to create a TestId. The TestId() method accepts a JSON-parseable string as the optional parameters argument. The parameters argument can be used to distinguish parametrized tests that have the same name, but different parameter values.

Call ddtrace.ext.test_visibility.api.Test.discover(), passing the TestId object as an argument, to discover the test. The Test.discover() classmethod accepts a string as the optional resource parameter, which defaults to the TestId’s name.

Call ddtrace.ext.test_visibility.api.Test.start(), passing the TestId object as an argument, to start the test.

Call ddtrace.ext.test_visibility.api.Test.mark_pass(), passing the TestId object as an argument, to mark that the test has passed successfully. Call ddtrace.ext.test_visibility.api.Test.mark_fail(), passing the TestId object as an argument, to mark that the test has failed. mark_fail() accepts an optional TestExcInfo object as the exc_info parameter. Call ddtrace.ext.test_visibility.api.Test.mark_skip(), passing the TestId object as an argument, to mark that the test was skipped. mark_skip() accepts an optional string as the skip_reason parameter.

Exception information

The ddtrace.ext.test_visibility.api.TestExcInfo object holds information about exceptions encountered during a test's failure, and can be passed to Test.mark_fail() as the exc_info parameter.

The ddtrace.ext.test_visibility.api.TestExcInfo() method takes three positional parameters:

  • exc_type: the type of the exception encountered
  • exc_value: the BaseException object for the exception
  • exc_traceback: the Traceback object for the exception
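
For example, the exception information can be captured with sys.exc_info() inside an except block; test_id here is assumed to be a previously discovered and started TestId (see the full example below):

import sys
from ddtrace.ext.test_visibility import api

try:
    raise ValueError("this test failed")
except Exception:
    api.Test.mark_fail(test_id, exc_info=api.TestExcInfo(*sys.exc_info()))
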
Codeowner information

The ddtrace.ext.test_visibility.api.Test.discover() classmethod accepts an optional list of strings as the codeowners parameter.

Test source file information

The ddtrace.ext.test_visibility.api.Test.discover() classmethod accepts an optional TestSourceFileInfo object as the source_file_info parameter. A TestSourceFileInfo object represents the path and optionally, the start and end lines for a given test.

The ddtrace.ext.test_visibility.api.TestSourceFileInfo() method accepts three positional parameters:

  • path: a pathlib.Path object (made relative to the repo root by the Test Visibility API)
  • start_line: an optional integer representing the start line of the test in the file
  • end_line: an optional integer representing the end line of the test in the file

Setting parameters after test discovery

The ddtrace.ext.test_visibility.api.Test.set_parameters() classmethod accepts a TestId object and a JSON-parseable string as arguments, and sets the parameters for the test.

Note: this overwrites the parameters associated with the test, but does not modify the TestId object’s parameters field.

Setting parameters after a test has been discovered requires that the TestId object be unique even without the parameters field being set.
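
As a sketch, assuming test_3_id is the TestId discovered in the code example below, the test's parameters could be overwritten like this:

from ddtrace.ext.test_visibility import api

# Overwrites the parameters reported for the test; the TestId object itself is unchanged
api.Test.set_parameters(test_3_id, '{"parameter_1": "value_is_c"}')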

Code example

from ddtrace.ext.test_visibility import api
import pathlib
import sys

if __name__ == "__main__":
    # Enable the Test Visibility service
    api.enable_test_visibility()

    # Discover items
    api.TestSession.discover("manual_test_api_example", "my_manual_framework", "1.0.0")
    test_module_1_id = api.TestModuleId("module_1")
    api.TestModule.discover(test_module_1_id)

    test_suite_1_id = api.TestSuiteId(test_module_1_id, "suite_1")
    api.TestSuite.discover(test_suite_1_id)

    test_1_id = api.TestId(test_suite_1_id, "test_1")
    api.Test.discover(test_1_id)

    # A parameterized test with codeowners and a source file
    test_2_codeowners = ["team_1", "team_2"]
    test_2_source_info = api.TestSourceFileInfo(pathlib.Path("/path/to_my/tests.py"), 16, 35)

    parametrized_test_2_a_id = api.TestId(
        test_suite_1_id,
        "test_2",
        parameters='{"parameter_1": "value_is_a"}'
    )
    api.Test.discover(
        parametrized_test_2_a_id,
        codeowners=test_2_codeowners,
        source_file_info=test_2_source_info,
        resource="overriden resource name A",
    )

    parametrized_test_2_b_id = api.TestId(
        test_suite_1_id,
        "test_2",
        parameters='{"parameter_1": "value_is_b"}'
    )
    api.Test.discover(
        parametrized_test_2_b_id,
        codeowners=test_2_codeowners,
        source_file_info=test_2_source_info,
        resource="overridden resource name B",
    )

    test_3_id = api.TestId(test_suite_1_id, "test_3")
    api.Test.discover(test_3_id)

    test_4_id = api.TestId(test_suite_1_id, "test_4")
    api.Test.discover(test_4_id)


    # Start and execute items
    api.TestSession.start()

    api.TestModule.start(test_module_1_id)
    api.TestSuite.start(test_suite_1_id)

    # test_1 passes successfully
    api.Test.start(test_1_id)
    api.Test.mark_pass(test_1_id)

    # test_2's first parametrized test succeeds, but the second fails without attaching exception info
    api.Test.start(parametrized_test_2_a_id)
    api.Test.mark_pass(parametrized_test_2_a_id)

    api.Test.start(parametrized_test_2_b_id)
    api.Test.mark_fail(parametrized_test_2_b_id)

    # test_3 is skipped
    api.Test.start(test_3_id)
    api.Test.mark_skip(test_3_id, skip_reason="example skipped test")

    # test_4 fails, and attaches exception info
    api.Test.start(test_4_id)
    try:
        raise ValueError("this test failed")
    except Exception:
        api.Test.mark_fail(test_4_id, exc_info=api.TestExcInfo(*sys.exc_info()))

    # Finish suites and modules
    api.TestSuite.finish(test_suite_1_id)
    api.TestModule.finish(test_module_1_id)
    api.TestSession.finish()

Configuration settings

The following is a list of the most important configuration settings that can be used with the tracer, either in code or using environment variables:

DD_SERVICE
Name of the service or library under test.
Environment variable: DD_SERVICE
Default: pytest
Example: my-python-app
DD_ENV
Name of the environment where tests are being run.
Environment variable: DD_ENV
Default: none
Examples: local, ci

For more information about service and env reserved tags, see Unified Service Tagging.

The following environment variable can be used to configure the location of the Datadog Agent:

DD_TRACE_AGENT_URL
Datadog Agent URL for trace collection in the form http://hostname:port.
Default: http://localhost:8126

All other Datadog Tracer configuration options can also be used.

Collecting Git metadata

Datadog uses Git information to visualize your test results and group them by repository, branch, and commit. Git metadata is collected automatically by the test instrumentation from CI provider environment variables and from the local .git folder in the project path, if available.

If you are running tests in an unsupported CI provider, or without a .git folder, you can set the Git information manually using environment variables. These environment variables take precedence over any auto-detected information. Set the following environment variables to provide Git information:

DD_GIT_REPOSITORY_URL
URL of the repository where the code is stored. Both HTTP and SSH URLs are supported.
Examples: git@github.com:MyCompany/MyApp.git, https://github.com/MyCompany/MyApp.git
DD_GIT_BRANCH
Git branch being tested. Leave empty if providing tag information instead.
Example: develop
DD_GIT_TAG
Git tag being tested (if applicable). Leave empty if providing branch information instead.
Example: 1.0.1
DD_GIT_COMMIT_SHA
Full commit hash.
Example: a18ebf361cc831f5535e58ec4fae04ffd98d8152
DD_GIT_COMMIT_MESSAGE
Commit message.
Example: Set release number
DD_GIT_COMMIT_AUTHOR_NAME
Commit author name.
Example: John Smith
DD_GIT_COMMIT_AUTHOR_EMAIL
Commit author email.
Example: john@example.com
DD_GIT_COMMIT_AUTHOR_DATE
Commit author date in ISO 8601 format.
Example: 2021-03-12T16:00:28Z
DD_GIT_COMMIT_COMMITTER_NAME
Commit committer name.
Example: Jane Smith
DD_GIT_COMMIT_COMMITTER_EMAIL
Commit committer email.
Example: jane@example.com
DD_GIT_COMMIT_COMMITTER_DATE
Commit committer date in ISO 8601 format.
Example: 2021-03-12T16:00:28Z
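
For example, Git information could be provided manually alongside the test command as in the following sketch (the values shown are illustrative):

DD_GIT_REPOSITORY_URL=https://github.com/MyCompany/MyApp.git DD_GIT_BRANCH=develop DD_GIT_COMMIT_SHA=a18ebf361cc831f5535e58ec4fae04ffd98d8152 DD_SERVICE=my-python-app DD_ENV=ci pytest --ddtrace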

Known limitations

Plugins for pytest that alter test execution may cause unexpected behavior.

Parallelization

Plugins that introduce parallelization to pytest (such as pytest-xdist or pytest-forked) create one session event for each parallelized instance. Multiple module or suite events may be created if tests from the same package or module execute in different processes.

The overall count of test events (and their correctness) remains unaffected. Individual session, module, or suite events may have results that are inconsistent with other events in the same pytest run.

Test ordering

Plugins that change the ordering of test execution (such as pytest-randomly) can create multiple module or suite events. The duration and results of module or suite events may also be inconsistent with the results reported by pytest.

The overall count of test events (and their correctness) remains unaffected.

In some cases, running your unittest tests in parallel may break the instrumentation and affect test visibility.

Datadog recommends running unittest tests in a single process to avoid affecting test visibility.

Further reading