Supported languages:

| Language | Version |
|---|---|
| Python 2 | >= 2.7 |
| Python 3 | >= 3.6 |
Supported test frameworks:

| Test Framework | Version |
|---|---|
| pytest | >= 3.0.0 |
| pytest-benchmark | >= 3.1.0 |
| unittest | >= 3.7 |
To report test results to Datadog, you need to configure the Datadog Python library:
We support auto-instrumentation for the following CI providers:

| CI Provider | Auto-Instrumentation method |
|---|---|
| GitHub Actions | Datadog Test Visibility GitHub Action |
| Jenkins | UI-based configuration with the Datadog Jenkins plugin |
| GitLab | Datadog Test Visibility GitLab Script |
| CircleCI | Datadog Test Visibility CircleCI Orb |
If you are using auto-instrumentation for one of these providers, you can skip the rest of the setup steps below.
If you are using a cloud CI provider without access to the underlying worker nodes, such as GitHub Actions or CircleCI, configure the library to use agentless mode. To do so, set the following environment variables:

| Environment variable | Value | Default |
|---|---|---|
| `DD_CIVISIBILITY_AGENTLESS_ENABLED` (Required) | `true` | `false` |
| `DD_API_KEY` (Required) | Your Datadog API key | (empty) |

Additionally, configure the Datadog site to which you want to send data:

| Environment variable | Default |
|---|---|
| `DD_SITE` (Required) | `datadoghq.com` |
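For example, a minimal agentless `pytest` run might look like the following sketch (the service name `my-python-app` is illustrative, and `<YOUR_API_KEY>` stands in for a valid Datadog API key):

```shell
# Illustrative agentless run; adjust DD_SITE to your Datadog site.
DD_CIVISIBILITY_AGENTLESS_ENABLED=true \
DD_API_KEY=<YOUR_API_KEY> \
DD_SITE=datadoghq.com \
DD_SERVICE=my-python-app DD_ENV=ci pytest --ddtrace
```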
If you are running tests on an on-premises CI provider, such as Jenkins or self-managed GitLab CI, install the Datadog Agent on each worker node by following the Agent installation instructions. This is the recommended option because test results are automatically correlated with logs and underlying host metrics.

If you are using a Kubernetes executor, Datadog recommends using the Datadog Operator. The Operator includes the Datadog Admission Controller, which automatically injects the tracer library into build pods. Note: If you use the Datadog Operator, the Admission Controller does this work for you, so you do not need to download and inject the tracer library and can skip that step below. However, you still need to set the environment variables or command-line parameters the pods require to enable Test Visibility.

If you are not using Kubernetes or cannot use the Datadog Admission Controller, and your CI provider uses container-based executors, set the `DD_TRACE_AGENT_URL` environment variable (default `http://localhost:8126`) in the build container running the tracer to an endpoint that is accessible from within that container. Note: Inside the build, `localhost` references the container itself, not the underlying worker node or any container the Agent is running in.

`DD_TRACE_AGENT_URL` includes the protocol and port (for example, `http://localhost:8126`) and takes precedence over `DD_AGENT_HOST` and `DD_TRACE_AGENT_PORT`. It is the recommended configuration parameter for setting the Datadog Agent URL for CI Visibility.
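For example, if your builds run in containers and the Agent is reachable under a hostname such as `datadog-agent` (an illustrative name, not a value the library defines), a sketch of the configuration might look like this:

```shell
# Illustrative: point the tracer at an Agent endpoint reachable from the build container.
export DD_TRACE_AGENT_URL=http://datadog-agent:8126
DD_SERVICE=my-python-app DD_ENV=ci pytest --ddtrace
```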
If you still have trouble connecting to the Datadog Agent, use agentless mode. Note: When using this method, tests are not correlated with logs and infrastructure metrics.
Install the Python tracer by running:
pip install -U ddtrace
For more information, see the Python tracer installation documentation.
To enable instrumentation of `pytest` tests, add the `--ddtrace` option when running `pytest`, specifying the name of the service or library under test in the `DD_SERVICE` environment variable, and the environment where tests are being run (for example, `local` when running tests on a developer workstation, or `ci` when running them on a CI provider) in the `DD_ENV` environment variable:

DD_SERVICE=my-python-app DD_ENV=ci pytest --ddtrace

If you also want to enable the rest of the APM integrations to get more information in your flamegraph, add the `--ddtrace-patch-all` option:

DD_SERVICE=my-python-app DD_ENV=ci pytest --ddtrace --ddtrace-patch-all
To add custom tags to your tests, declare `ddspan` as an argument in your test:

```python
from ddtrace import tracer

# Declare `ddspan` as an argument to your test
def test_simple_case(ddspan):
    # Set your tags
    ddspan.set_tag("test_owner", "my_team")
    # test continues normally
    # ...
```
To create filters or `group by` fields for these tags, you must first create facets. For more information about adding tags, see the Adding Tags section of the Python custom instrumentation documentation.
Just like tags, to add custom measures to your tests, use the current active span:

```python
from ddtrace import tracer

# Declare `ddspan` as an argument to your test
def test_simple_case(ddspan):
    # Set your measures
    ddspan.set_metric("memory_allocations", 16)
    # test continues normally
    # ...
```
Read more about custom measures in the Add Custom Measures Guide.
To instrument your benchmark tests with `pytest-benchmark`, run your benchmark tests with the `--ddtrace` option when running `pytest`, and Datadog detects metrics from `pytest-benchmark` automatically:

```python
def square_value(value):
    return value * value


def test_square_value(benchmark):
    result = benchmark(square_value, 5)
    assert result == 25
```
To enable instrumentation of `unittest` tests, run your tests by prepending `ddtrace-run` to your `unittest` command. Make sure to specify the name of the service or library under test in the `DD_SERVICE` environment variable. Additionally, you may declare the environment where tests are being run in the `DD_ENV` environment variable:

DD_SERVICE=my-python-app DD_ENV=ci ddtrace-run python -m unittest

Alternatively, if you wish to enable `unittest` instrumentation manually, use `patch()` to enable the integration:

```python
from ddtrace import patch
import unittest

patch(unittest=True)


class MyTest(unittest.TestCase):
    def test_will_pass(self):
        assert True
```
As of version `2.13.0`, the Datadog Python tracer provides the Test Optimization API (`ddtrace.ext.test_visibility`) to submit test optimization results as needed. The API uses classes to provide namespaced methods to submit test optimization events.

Test execution has two phases: discovery and execution. The distinct discovery and execution phases allow for a gap between the test runner process collecting the tests and the tests starting. API users must provide consistent identifiers (described below) that are used as references for Test Optimization items within the API's state storage.
The `test_visibility` service

You must call the `ddtrace.ext.test_visibility.api.enable_test_visibility()` function before using the Test Optimization API. Call the `ddtrace.ext.test_visibility.api.disable_test_visibility()` function before process shutdown to ensure proper flushing of data.
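As a minimal sketch, these two calls bracket any other use of the API:

```python
from ddtrace.ext.test_visibility import api

# Enable the Test Optimization service before making any other API calls
api.enable_test_visibility()

# ... discover, start, and finish sessions, modules, suites, and tests here ...

# Disable before process shutdown so that buffered data is flushed
api.disable_test_visibility()
```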
The API is based around four concepts: test session, test module, test suite, and test.
Modules, suites, and tests form a hierarchy in the Python Test Optimization API, represented by the item identifier’s parent relationship.
A test session represents a project's test execution, typically corresponding to the execution of a test command. Only one session can be discovered, started, and finished in the execution of a Test Optimization program.
Call `ddtrace.ext.test_visibility.api.TestSession.discover()` to discover the session, passing the test command, a given framework name, and version.

Call `ddtrace.ext.test_visibility.api.TestSession.start()` to start the session.

When tests have completed, call `ddtrace.ext.test_visibility.api.TestSession.finish()`.
A test module represents a smaller unit of work within a project's test run (a directory, for example).

Call `ddtrace.ext.test_visibility.api.TestModuleId()`, providing the module name as a parameter, to create a `TestModuleId`.

Call `ddtrace.ext.test_visibility.api.TestModule.discover()`, passing the `TestModuleId` object as an argument, to discover the module.

Call `ddtrace.ext.test_visibility.api.TestModule.start()`, passing the `TestModuleId` object as an argument, to start the module.

After all the child items within the module have completed, call `ddtrace.ext.test_visibility.api.TestModule.finish()`, passing the `TestModuleId` object as an argument.
A test suite represents a subset of tests within a project's modules (a `.py` file, for example).

Call `ddtrace.ext.test_visibility.api.TestSuiteId()`, providing the parent module's `TestModuleId` and the suite's name as arguments, to create a `TestSuiteId`.

Call `ddtrace.ext.test_visibility.api.TestSuite.discover()`, passing the `TestSuiteId` object as an argument, to discover the suite.

Call `ddtrace.ext.test_visibility.api.TestSuite.start()`, passing the `TestSuiteId` object as an argument, to start the suite.

After all the child items within the suite have completed, call `ddtrace.ext.test_visibility.api.TestSuite.finish()`, passing the `TestSuiteId` object as an argument.
A test represents a single test case that is executed as part of a test suite.

Call `ddtrace.ext.test_visibility.api.TestId()`, providing the parent suite's `TestSuiteId` and the test's name as arguments, to create a `TestId`. The `TestId()` method accepts a JSON-parseable string as the optional `parameters` argument. The `parameters` argument can be used to distinguish parametrized tests that have the same name, but different parameter values.

Call `ddtrace.ext.test_visibility.api.Test.discover()`, passing the `TestId` object as an argument, to discover the test. The `Test.discover()` classmethod accepts a string as the optional `resource` parameter, which defaults to the `TestId`'s `name`.

Call `ddtrace.ext.test_visibility.api.Test.start()`, passing the `TestId` object as an argument, to start the test.

Call `ddtrace.ext.test_visibility.api.Test.mark_pass()`, passing the `TestId` object as an argument, to mark that the test has passed successfully.

Call `ddtrace.ext.test_visibility.api.Test.mark_fail()`, passing the `TestId` object as an argument, to mark that the test has failed. `mark_fail()` accepts an optional `TestExcInfo` object as the `exc_info` parameter.

Call `ddtrace.ext.test_visibility.api.Test.mark_skip()`, passing the `TestId` object as an argument, to mark that the test was skipped. `mark_skip()` accepts an optional string as the `skip_reason` parameter.
The optional `exc_info` parameter of the `ddtrace.ext.test_visibility.api.Test.mark_fail()` classmethod holds information about exceptions encountered during a test's failure.

The `ddtrace.ext.test_visibility.api.TestExcInfo()` method takes three positional parameters:

- `exc_type`: the type of the exception encountered
- `exc_value`: the `BaseException` object for the exception
- `exc_traceback`: the `Traceback` object for the exception

The `ddtrace.ext.test_visibility.api.Test.discover()` classmethod accepts an optional list of strings as the `codeowners` parameter.
The `ddtrace.ext.test_visibility.api.Test.discover()` classmethod accepts an optional `TestSourceFileInfo` object as the `source_file_info` parameter. A `TestSourceFileInfo` object represents the path and, optionally, the start and end lines for a given test.

The `ddtrace.ext.test_visibility.api.TestSourceFileInfo()` method accepts three positional parameters:

- `path`: a `pathlib.Path` object (made relative to the repo root by the Test Optimization API)
- `start_line`: an optional integer representing the start line of the test in the file
- `end_line`: an optional integer representing the end line of the test in the file

The `ddtrace.ext.test_visibility.api.Test.set_parameters()` classmethod accepts a `TestId` object as an argument, and a JSON-parseable string, to set the `parameters` for the test.

Note: this overwrites the parameters associated with the test, but does not modify the `TestId` object's `parameters` field. Setting parameters after a test has been discovered requires that the `TestId` object be unique even without the `parameters` field being set.
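The following sketch shows this flow, assuming `test_suite_1_id` has already been discovered (as in the full example below); the test name is hypothetical:

```python
# Assumes enable_test_visibility() has been called and test_suite_1_id was discovered.
late_parameters_test_id = api.TestId(test_suite_1_id, "test_with_late_parameters")
api.Test.discover(late_parameters_test_id)

# Overwrites the parameters associated with the test;
# the TestId object's own `parameters` field is not modified.
api.Test.set_parameters(late_parameters_test_id, '{"parameter_1": "value_is_a"}')
```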
```python
from ddtrace.ext.test_visibility import api
import pathlib
import sys

if __name__ == "__main__":
    # Enable the Test Optimization service
    api.enable_test_visibility()

    # Discover items
    api.TestSession.discover("manual_test_api_example", "my_manual_framework", "1.0.0")

    test_module_1_id = api.TestModuleId("module_1")
    api.TestModule.discover(test_module_1_id)

    test_suite_1_id = api.TestSuiteId(test_module_1_id, "suite_1")
    api.TestSuite.discover(test_suite_1_id)

    test_1_id = api.TestId(test_suite_1_id, "test_1")
    api.Test.discover(test_1_id)

    # A parametrized test with codeowners and a source file
    test_2_codeowners = ["team_1", "team_2"]
    test_2_source_info = api.TestSourceFileInfo(pathlib.Path("/path/to_my/tests.py"), 16, 35)

    parametrized_test_2_a_id = api.TestId(
        test_suite_1_id,
        "test_2",
        parameters='{"parameter_1": "value_is_a"}'
    )
    api.Test.discover(
        parametrized_test_2_a_id,
        codeowners=test_2_codeowners,
        source_file_info=test_2_source_info,
        resource="overridden resource name A",
    )

    parametrized_test_2_b_id = api.TestId(
        test_suite_1_id,
        "test_2",
        parameters='{"parameter_1": "value_is_b"}'
    )
    api.Test.discover(
        parametrized_test_2_b_id,
        codeowners=test_2_codeowners,
        source_file_info=test_2_source_info,
        resource="overridden resource name B"
    )

    test_3_id = api.TestId(test_suite_1_id, "test_3")
    api.Test.discover(test_3_id)

    test_4_id = api.TestId(test_suite_1_id, "test_4")
    api.Test.discover(test_4_id)

    # Start and execute items
    api.TestSession.start()
    api.TestModule.start(test_module_1_id)
    api.TestSuite.start(test_suite_1_id)

    # test_1 passes successfully
    api.Test.start(test_1_id)
    api.Test.mark_pass(test_1_id)

    # test_2's first parametrized test succeeds, but the second fails without attaching exception info
    api.Test.start(parametrized_test_2_a_id)
    api.Test.mark_pass(parametrized_test_2_a_id)
    api.Test.start(parametrized_test_2_b_id)
    api.Test.mark_fail(parametrized_test_2_b_id)

    # test_3 is skipped
    api.Test.start(test_3_id)
    api.Test.mark_skip(test_3_id, skip_reason="example skipped test")

    # test_4 fails, and attaches exception info
    api.Test.start(test_4_id)
    try:
        raise ValueError("this test failed")
    except ValueError:
        api.Test.mark_fail(test_4_id, exc_info=api.TestExcInfo(*sys.exc_info()))

    # Finish suites and modules
    api.TestSuite.finish(test_suite_1_id)
    api.TestModule.finish(test_module_1_id)
    api.TestSession.finish()
```
The following is a list of the most important configuration settings that can be used with the tracer, either in code or using environment variables:
| Setting | Environment variable | Default | Example |
|---|---|---|---|
| `DD_SERVICE` | `DD_SERVICE` | `pytest` | `my-python-app` |
| `DD_ENV` | `DD_ENV` | none | `local`, `ci` |

For more information about `service` and `env` reserved tags, see Unified Service Tagging.
The following environment variable can be used to configure the location of the Datadog Agent:

| Environment variable | Description | Default |
|---|---|---|
| `DD_TRACE_AGENT_URL` | Datadog Agent URL, in the form `http://hostname:port`. | `http://localhost:8126` |
All other Datadog Tracer configuration options can also be used.
Datadog uses Git information to visualize your test results and group them by repository, branch, and commit. Git metadata is automatically collected by the test instrumentation from CI provider environment variables and the local `.git` folder in the project path, if available.

If you are running tests in an unsupported CI provider, or with no `.git` folder, you can set the Git information manually using environment variables. These environment variables take precedence over any automatically detected information. Set the following environment variables to provide Git information:

| Environment variable | Example |
|---|---|
| `DD_GIT_REPOSITORY_URL` | `git@github.com:MyCompany/MyApp.git`, `https://github.com/MyCompany/MyApp.git` |
| `DD_GIT_BRANCH` | `develop` |
| `DD_GIT_TAG` | `1.0.1` |
| `DD_GIT_COMMIT_SHA` | `a18ebf361cc831f5535e58ec4fae04ffd98d8152` |
| `DD_GIT_COMMIT_MESSAGE` | `Set release number` |
| `DD_GIT_COMMIT_AUTHOR_NAME` | `John Smith` |
| `DD_GIT_COMMIT_AUTHOR_EMAIL` | `john@example.com` |
| `DD_GIT_COMMIT_AUTHOR_DATE` | `2021-03-12T16:00:28Z` |
| `DD_GIT_COMMIT_COMMITTER_NAME` | `Jane Smith` |
| `DD_GIT_COMMIT_COMMITTER_EMAIL` | `jane@example.com` |
| `DD_GIT_COMMIT_COMMITTER_DATE` | `2021-03-12T16:00:28Z` |
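For example, to provide Git metadata manually, you might export a subset of these variables before running your test command (the values below are the illustrative ones from the table above):

```shell
# Illustrative values; set these before invoking your test command.
export DD_GIT_REPOSITORY_URL="https://github.com/MyCompany/MyApp.git"
export DD_GIT_BRANCH="develop"
export DD_GIT_COMMIT_SHA="a18ebf361cc831f5535e58ec4fae04ffd98d8152"
export DD_GIT_COMMIT_MESSAGE="Set release number"
```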
Plugins for `pytest` that alter test execution may cause unexpected behavior.

Plugins that introduce parallelization to `pytest` (such as `pytest-xdist` or `pytest-forked`) create one session event for each parallelized instance. Multiple module or suite events may be created if tests from the same package or module execute in different processes. The overall count of test events (and their correctness) remains unaffected, but individual session, module, or suite events may have results that are inconsistent with other events in the same `pytest` run.

Plugins that change the ordering of test execution (such as `pytest-randomly`) can create multiple module or suite events. The duration and results of module or suite events may also be inconsistent with the results reported by `pytest`. The overall count of test events (and their correctness) remains unaffected.

In some cases, running your `unittest` tests in parallel may break the instrumentation and affect test optimization. Datadog recommends using no more than one process at a time to avoid affecting test optimization.