If your CI provider is Jenkins, you can use UI-based configuration to enable Test Visibility for your jobs and pipelines.

Compatibility

Supported test frameworks:

Jest (>= 24.8.0)
  • Only jsdom (in the jest-environment-jsdom package) and node (in the jest-environment-node package) are supported as test environments. Custom environments like @jest-runner/electron/environment in jest-electron-runner are not supported.
  • Only jest-circus is supported as testRunner.
  • Jest >= 28 is only supported from dd-trace>=2.7.0.
  • test.concurrent is not supported.

Mocha (>= 5.2.0)
  • Mocha >= 9.0.0 has partial support.
  • Mocha parallel mode is not supported.

Cucumber (>= 7.0.0)
  • Cucumber-js parallel mode is not supported.

Cypress (>= 6.7.0)
  • Supported from dd-trace>=1.4.0.

Playwright (>= 1.18.0)
  • Supported from dd-trace>=3.13.0, and from dd-trace>=2.26.0 for the 2.x release line.

The instrumentation works at runtime, so any transpilers such as TypeScript, Webpack, or Babel are supported out-of-the-box.

Configuring reporting method

To report test results to Datadog, you need to configure the Datadog JavaScript library:

Agentless mode is available in Datadog JavaScript library versions >= 2.5.0

If you are using a cloud CI provider without access to the underlying worker nodes, such as GitHub Actions or CircleCI, configure the library to use the Agentless mode. For this, set the following environment variables:

DD_CIVISIBILITY_AGENTLESS_ENABLED=true (Required)
Enables or disables Agentless mode.
Default: false
DD_API_KEY (Required)
The Datadog API key used to upload the test results.
Default: (empty)

Additionally, configure the Datadog site to which you want to send data.

DD_SITE (Required)
The Datadog site to upload results to.
Default: datadoghq.com
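
For example, in an Agentless setup all of these variables can be provided directly on the test command. This is a minimal sketch; the API key value is a placeholder:

DD_CIVISIBILITY_AGENTLESS_ENABLED=true DD_API_KEY=<YOUR_API_KEY> DD_SITE=datadoghq.com NODE_OPTIONS="-r dd-trace/ci/init" DD_ENV=ci DD_SERVICE=my-javascript-app yarn test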

If you are running tests on an on-premises CI provider, such as Jenkins or self-managed GitLab CI, install the Datadog Agent on each worker node by following the Agent installation instructions. This is the recommended option as it allows you to automatically link test results to logs and underlying host metrics.

If you are using a Kubernetes executor, Datadog recommends using the Datadog Operator. The operator includes the Datadog Admission Controller, which can automatically inject the tracer library into the build pods. Note: If you use the Datadog Operator, you do not need to download and inject the tracer library yourself because the Admission Controller does this for you, so you can skip the corresponding step below. However, you still need to make sure that your pods set the environment variables or command-line parameters necessary to enable Test Visibility.

If you are not using Kubernetes or can’t use the Datadog Admission Controller, and the CI provider uses a container-based executor, set the DD_TRACE_AGENT_URL environment variable (which defaults to http://localhost:8126) in the build container running the tracer to an endpoint that is accessible from within that container. Note: Using localhost inside the build container references the container itself, not the underlying worker node or any container in which the Agent might be running.

DD_TRACE_AGENT_URL includes the protocol and port (for example, http://localhost:8126), takes precedence over DD_AGENT_HOST and DD_TRACE_AGENT_PORT, and is the recommended way to configure the Datadog Agent’s URL for CI Visibility.
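
For example, if the Agent is reachable from the build container at a hypothetical hostname datadog-agent, the URL can be set directly on the test command (a sketch, not a definitive configuration):

DD_TRACE_AGENT_URL=http://datadog-agent:8126 NODE_OPTIONS="-r dd-trace/ci/init" DD_ENV=ci DD_SERVICE=my-javascript-app yarn test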

If you still have issues connecting to the Datadog Agent, use Agentless mode. Note: When using this method, tests are not correlated with logs and infrastructure metrics.

Installing the JavaScript tracer

To install the JavaScript Tracer, run:

yarn add --dev dd-trace

For more information, see the JavaScript Tracer installation documentation.

Instrument your tests

Set the NODE_OPTIONS environment variable to -r dd-trace/ci/init. Run your tests as you normally would, specifying the environment where the tests are run in the DD_ENV environment variable. For example, set DD_ENV to local when running tests on a developer workstation, or ci when running them on a CI provider:

NODE_OPTIONS="-r dd-trace/ci/init" DD_ENV=ci DD_SERVICE=my-javascript-app yarn test

Note: If you set a value for NODE_OPTIONS, make sure it does not overwrite -r dd-trace/ci/init. This can be done using the ${NODE_OPTIONS:-} clause:

package.json

{
  "scripts": {
    "test": "NODE_OPTIONS=\"--max-old-space-size=12288 ${NODE_OPTIONS:-}\" jest"
  }
}

Adding custom tags to tests

You can add custom tags to your tests by using the current active span:

  it('sum function can sum', () => {
    const testSpan = require('dd-trace').scope().active()
    testSpan.setTag('team_owner', 'my_team')
    // test continues normally
    // ...
  })

To create filters or group by fields for these tags, you must first create facets. For more information about adding tags, see the Adding Tags section of the Node.js custom instrumentation documentation.

Adding custom measures to tests

Just like tags, you can add custom measures to your tests by using the current active span:

  it('sum function can sum', () => {
    const testSpan = require('dd-trace').scope().active()
    testSpan.setTag('memory_allocations', 16)
    // test continues normally
    // ...
  })

For more information about custom measures, see the Add Custom Measures Guide.

Set the NODE_OPTIONS environment variable to -r dd-trace/ci/init. Run your tests as you normally would, specifying the environment where the tests are run in the DD_ENV environment variable. For example, set DD_ENV to local when running tests on a developer workstation, or ci when running them on a CI provider:

NODE_OPTIONS="-r dd-trace/ci/init" DD_ENV=ci DD_SERVICE=my-javascript-app yarn test

Note: If you set a value for NODE_OPTIONS, make sure it does not overwrite -r dd-trace/ci/init. This can be done using the ${NODE_OPTIONS:-} clause:

package.json

{
  "scripts": {
    "test": "NODE_OPTIONS=\"--max-old-space-size=12288 ${NODE_OPTIONS:-}\" jest"
  }
}

Adding custom tags to tests

You can add custom tags to your tests by using the custom annotations API from Playwright:

test('user profile', async ({ page }) => {
  test.info().annotations.push({
    type: 'DD_TAGS[test.memory.usage]', // DD_TAGS is mandatory and case sensitive
    description: 'low',
  });
  test.info().annotations.push({
    type: 'DD_TAGS[test.task.id]',
    description: '41123',
  });
  // ...
});

test('landing page', async ({ page }) => {
  test.info().annotations.push({
    type: 'DD_TAGS[test.cpu.usage]',
    description: 'high',
  });
  // ...
});

The format of the annotations is the following, where $TAG_NAME and $TAG_VALUE are strings representing tag name and value respectively:

{
  "type": "DD_TAGS[$TAG_NAME]",
  "description": "$TAG_VALUE"
}

Adding custom measures to tests

Custom measures also use custom annotations:

test('user profile', async ({ page }) => {
  test.info().annotations.push({
    type: 'DD_TAGS[test.memory.allocations]', // DD_TAGS is mandatory and case sensitive
    description: 16, // this is a number
  });
});

The format of the annotations is the following, where $TAG_NAME is a string representing the tag name and $TAG_VALUE is a number representing the tag value:

{
  "type": "DD_TAGS[$TAG_NAME]",
  "description": $TAG_VALUE
}

Note: description values in annotations are typed as strings. Numbers also work, but you may need to disable the typing error with // @ts-expect-error.
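
For example, in a TypeScript test file the numeric value can be kept by suppressing the type error on that line (a minimal sketch based on the annotation example above):

test('user profile', async ({ page }) => {
  test.info().annotations.push({
    type: 'DD_TAGS[test.memory.allocations]',
    // @ts-expect-error: description is typed as a string, but a number is required for a measure
    description: 16,
  });
});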

Important: The DD_TAGS prefix is mandatory and case sensitive.

Set the NODE_OPTIONS environment variable to -r dd-trace/ci/init. Run your tests as you normally would, specifying the environment where the tests are run in the DD_ENV environment variable. For example, set DD_ENV to local when running tests on a developer workstation, or ci when running them on a CI provider:

NODE_OPTIONS="-r dd-trace/ci/init" DD_ENV=ci DD_SERVICE=my-javascript-app yarn test

Note: If you set a value for NODE_OPTIONS, make sure it does not overwrite -r dd-trace/ci/init. This can be done using the ${NODE_OPTIONS:-} clause:

package.json

{
  "scripts": {
    "test": "NODE_OPTIONS=\"--max-old-space-size=12288 ${NODE_OPTIONS:-}\" jest"
  }
}

Adding custom tags to tests

You can add custom tags to your test by grabbing the current active span:

  When('the function is called', function () {
    const stepSpan = require('dd-trace').scope().active()
    stepSpan.setTag('team_owner', 'my_team')
    // test continues normally
    // ...
  })

To create filters or group by fields for these tags, you must first create facets. For more information about adding tags, see the Adding Tags section of the Node.js custom instrumentation documentation.

Adding custom measures to tests

You may also add custom measures to your test by grabbing the current active span:

  When('the function is called', function () {
    const stepSpan = require('dd-trace').scope().active()
    stepSpan.setTag('memory_allocations', 16)
    // test continues normally
    // ...
  })

For more information about custom measures, see the Add Custom Measures Guide.

Cypress version 10 or later

Use the Cypress API documentation to learn how to use plugins for cypress>=10.

In your cypress.config.js file, set the following:

cypress.config.js

const { defineConfig } = require('cypress')

module.exports = defineConfig({
  e2e: {
    setupNodeEvents: require('dd-trace/ci/cypress/plugin'),
    supportFile: 'cypress/support/e2e.js'
  }
})

Add the following line to the top level of your supportFile:

cypress/support/e2e.js

// Your code can be before this line
// require('./commands')
require('dd-trace/ci/cypress/support')
// Also supported:
// import 'dd-trace/ci/cypress/support'
// Your code can also be after this line
// Cypress.Commands.add('login', (email, pw) => {})

If you’re using other Cypress plugins, your cypress.config.js file should contain the following:

cypress.config.js

const { defineConfig } = require('cypress')

module.exports = defineConfig({
  e2e: {
    setupNodeEvents(on, config) {
      // your previous code is before this line
      require('dd-trace/ci/cypress/plugin')(on, config)
    }
  }
})

Cypress after:run event

Datadog requires the after:run Cypress event to work, and Cypress does not allow multiple handlers for that event. If you defined handlers for after:run already, add the Datadog handler manually by importing 'dd-trace/ci/cypress/after-run':

cypress.config.js

const { defineConfig } = require('cypress')

module.exports = defineConfig({
  e2e: {
    setupNodeEvents(on, config) {
      require('dd-trace/ci/cypress/plugin')(on, config)
      // other plugins
      on('after:run', (details) => {
        // other 'after:run' handlers
        // important that this function call is returned
        return require('dd-trace/ci/cypress/after-run')(details)
      })
    }
  }
})

Cypress after:spec event

Datadog requires the after:spec Cypress event to work, and Cypress does not allow multiple handlers for that event. If you defined handlers for after:spec already, add the Datadog handler manually by importing 'dd-trace/ci/cypress/after-spec':

cypress.config.js

const { defineConfig } = require('cypress')

module.exports = defineConfig({
  e2e: {
    setupNodeEvents(on, config) {
      require('dd-trace/ci/cypress/plugin')(on, config)
      // other plugins
      on('after:spec', (...args) => {
        // other 'after:spec' handlers
        // Important that this function call is returned
        // Important that all the arguments are passed
        return require('dd-trace/ci/cypress/after-spec')(...args)
      })
    }
  }
})

Cypress before version 10

These are the instructions if you’re using a version older than cypress@10. See the Cypress documentation for more information about migrating to a newer version.

  1. Set pluginsFile to "dd-trace/ci/cypress/plugin", for example, through cypress.json:

cypress.json

{
  "pluginsFile": "dd-trace/ci/cypress/plugin"
}

If you already defined a pluginsFile, initialize the instrumentation with:

cypress/plugins/index.js

module.exports = (on, config) => {
  // your previous code is before this line
  require('dd-trace/ci/cypress/plugin')(on, config)
}
  2. Add the following line to the top level of your supportFile:

cypress/support/index.js

// Your code can be before this line
// require('./commands')
require('dd-trace/ci/cypress/support')
// Your code can also be after this line
// Cypress.Commands.add('login', (email, pw) => {})

Cypress after:run event

Datadog requires the after:run Cypress event to work, and Cypress does not allow multiple handlers for that event. If you defined handlers for after:run already, add the Datadog handler manually by importing 'dd-trace/ci/cypress/after-run':

cypress/plugins/index.js

module.exports = (on, config) => {
  // your previous code is before this line
  require('dd-trace/ci/cypress/plugin')(on, config)
  on('after:run', (details) => {
    // other 'after:run' handlers
    // important that this function call is returned
    return require('dd-trace/ci/cypress/after-run')(details)
  })
}

Cypress after:spec event

Datadog requires the after:spec Cypress event to work, and Cypress does not allow multiple handlers for that event. If you defined handlers for after:spec already, add the Datadog handler manually by importing 'dd-trace/ci/cypress/after-spec':

cypress/plugins/index.js

module.exports = (on, config) => {
  // your previous code is before this line
  require('dd-trace/ci/cypress/plugin')(on, config)
  on('after:spec', (...args) => {
    // other 'after:spec' handlers
    // Important that this function call is returned
    // Important that all the arguments are passed
    return require('dd-trace/ci/cypress/after-spec')(...args)
  })
}

Run your tests as you normally do, specifying the environment where tests are being run (for example, local when running tests on a developer workstation, or ci when running them on a CI provider) in the DD_ENV environment variable. For example:

DD_ENV=ci DD_SERVICE=my-ui-app npm test

Adding custom tags to tests

To add additional information to your tests, such as the team owner, use cy.task('dd:addTags', { yourTags: 'here' }) in your test or hooks.

For example:

beforeEach(() => {
  cy.task('dd:addTags', {
    'before.each': 'certain.information'
  })
})
it('renders a hello world', () => {
  cy.task('dd:addTags', {
    'team.owner': 'ui'
  })
  cy.get('.hello-world')
    .should('have.text', 'Hello World')
})

To create filters or group by fields for these tags, you must first create facets. For more information about adding tags, see the Adding Tags section of the Node.js custom instrumentation documentation.

Adding custom measures to tests

To add custom measures to your tests, such as memory allocations, use cy.task('dd:addTags', { yourNumericalTags: 1 }) in your test or hooks.

For example:

it('renders a hello world', () => {
  cy.task('dd:addTags', {
    'memory_allocations': 16
  })
  cy.get('.hello-world')
    .should('have.text', 'Hello World')
})

For more information about custom measures, see the Add Custom Measures Guide.

Cypress - RUM integration

If the browser application being tested is instrumented using Browser Monitoring, the Cypress test results and their generated RUM browser sessions and session replays are automatically linked. For more information, see the Instrumenting your browser tests with RUM guide.

How to fix “Cannot find module ‘dd-trace/ci/init’” errors

When using dd-trace, you might encounter the following error message:

 Error: Cannot find module 'dd-trace/ci/init'

This might be because of an incorrect usage of NODE_OPTIONS.

For example, if your GitHub Action looks like this:

jobs:
  my-job:
    name: Run tests
    runs-on: ubuntu-latest
    env:
      NODE_OPTIONS: -r dd-trace/ci/init
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3
      - name: Install node
        uses: actions/setup-node@v3
      - name: Install dependencies
        run: npm install
      - name: Run tests
        run: npm test

Note: This does not work because NODE_OPTIONS is interpreted by every Node.js process, including the one spawned by npm install. Because dd-trace is not yet installed when npm install runs, that step fails when it tries to load dd-trace/ci/init.

Your GitHub Action should instead look like this:

jobs:
  my-job:
    name: Run tests
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3
      - name: Install node
        uses: actions/setup-node@v3
      - name: Install dependencies
        run: npm install
      - name: Run tests
        run: npm test
        env:
          NODE_OPTIONS: -r dd-trace/ci/init

Follow these best practices:

  • Make sure the NODE_OPTIONS environment variable is set only for the process that runs the tests.
  • Specifically avoid defining NODE_OPTIONS in the global environment variables settings in your pipeline or job definition.

Using Yarn 2 or later

If you’re using yarn>=2 and a .pnp.cjs file, you might also get the same error:

 Error: Cannot find module 'dd-trace/ci/init'

You can fix it by setting NODE_OPTIONS to the following:

NODE_OPTIONS="-r $(pwd)/.pnp.cjs -r dd-trace/ci/init" yarn test

Reporting code coverage

When tests are instrumented with Istanbul, the Datadog Tracer (v3.20.0 or later) reports the code coverage under the test.code_coverage.lines_pct tag for your test sessions.
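
For example, assuming Jest (whose --coverage flag collects Istanbul coverage), running the instrumented tests with coverage enabled is enough for the tracer to pick up the value:

NODE_OPTIONS="-r dd-trace/ci/init" DD_ENV=ci DD_SERVICE=my-javascript-app yarn jest --coverage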

You can see the evolution of the test coverage in the Coverage tab of a test session.

For more information, see Code Coverage.

Configuration settings

The following is a list of the most important configuration settings that can be used with the tracer.

service
Name of the service or library under test.
Environment variable: DD_SERVICE
Default: (test framework name)
Example: my-ui
env
Name of the environment where tests are being run.
Environment variable: DD_ENV
Default: none
Examples: local, ci
url
Datadog Agent URL for trace collection in the form http://hostname:port.
Environment variable: DD_TRACE_AGENT_URL
Default: http://localhost:8126

For more information about service and env reserved tags, see Unified Service Tagging. All other Datadog Tracer configuration options can also be used.

Collecting Git metadata

Datadog uses Git information for visualizing your test results and grouping them by repository, branch, and commit. Git metadata is automatically collected by the test instrumentation from CI provider environment variables and the local .git folder in the project path, if available.

If you are running tests in unsupported CI providers or with no .git folder, you can set the Git information manually using environment variables. These environment variables take precedence over any auto-detected information. Set the following environment variables to provide Git information; a combined example follows the list:

DD_GIT_REPOSITORY_URL
URL of the repository where the code is stored. Both HTTP and SSH URLs are supported.
Example: git@github.com:MyCompany/MyApp.git, https://github.com/MyCompany/MyApp.git
DD_GIT_BRANCH
Git branch being tested. Leave empty if providing tag information instead.
Example: develop
DD_GIT_TAG
Git tag being tested (if applicable). Leave empty if providing branch information instead.
Example: 1.0.1
DD_GIT_COMMIT_SHA
Full commit hash.
Example: a18ebf361cc831f5535e58ec4fae04ffd98d8152
DD_GIT_COMMIT_MESSAGE
Commit message.
Example: Set release number
DD_GIT_COMMIT_AUTHOR_NAME
Commit author name.
Example: John Smith
DD_GIT_COMMIT_AUTHOR_EMAIL
Commit author email.
Example: john@example.com
DD_GIT_COMMIT_AUTHOR_DATE
Commit author date in ISO 8601 format.
Example: 2021-03-12T16:00:28Z
DD_GIT_COMMIT_COMMITTER_NAME
Commit committer name.
Example: Jane Smith
DD_GIT_COMMIT_COMMITTER_EMAIL
Commit committer email.
Example: jane@example.com
DD_GIT_COMMIT_COMMITTER_DATE
Commit committer date in ISO 8601 format.
Example: 2021-03-12T16:00:28Z
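
For example, to supply the repository, commit, and branch manually (reusing the example values above), export the variables before running the instrumented tests:

export DD_GIT_REPOSITORY_URL=https://github.com/MyCompany/MyApp.git
export DD_GIT_COMMIT_SHA=a18ebf361cc831f5535e58ec4fae04ffd98d8152
export DD_GIT_BRANCH=develop
NODE_OPTIONS="-r dd-trace/ci/init" DD_ENV=ci DD_SERVICE=my-javascript-app yarn test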

Manual testing API

Note: To use the manual testing API, you must pass DD_CIVISIBILITY_MANUAL_API_ENABLED=1 as an environment variable.
Note: The manual testing API is in beta, so its API might change. It is available starting in dd-trace versions 4.4.0, 3.25.0, and 2.38.0.

If you use Jest, Mocha, Cypress, Playwright, or Cucumber, do not use the manual testing API, as CI Visibility automatically instruments them and sends the test results to Datadog. The manual testing API is incompatible with already supported testing frameworks.

Use the manual testing API only if you use an unsupported testing framework or have a different testing mechanism.

The manual testing API leverages the node:diagnostics_channel module from Node.js and is based on channels you can publish to:

const { channel } = require('node:diagnostics_channel')

const { describe, test, beforeEach, afterEach, assert } = require('my-custom-test-framework')

const testStartCh = channel('dd-trace:ci:manual:test:start')
const testFinishCh = channel('dd-trace:ci:manual:test:finish')
const testSuite = __filename

describe('can run tests', () => {
  beforeEach((testName) => {
    testStartCh.publish({ testName, testSuite })
  })
  afterEach((status, error) => {
    testFinishCh.publish({ status, error })
  })
  test('first test will pass', () => {
    assert.equal(1, 1)
  })
})

Test start channel

Grab this channel by its ID dd-trace:ci:manual:test:start to publish that a test is starting. A good place to do this is a beforeEach hook or similar.

const { channel } = require('node:diagnostics_channel')
const testStartCh = channel('dd-trace:ci:manual:test:start')

// ... code for your testing framework goes here
  beforeEach(() => {
    const testDefinition = {
      testName: 'a-string-that-identifies-this-test',
      testSuite: 'what-suite-this-test-is-from.js'
    }
    testStartCh.publish(testDefinition)
  })
// code for your testing framework continues here ...

The payload to be published has attributes testName and testSuite, both strings, that identify the test that is about to start.

Test finish channel

Grab this channel by its ID dd-trace:ci:manual:test:finish to publish that a test is ending. A good place to do this is an afterEach hook or similar.

const { channel } = require('node:diagnostics_channel')
const testFinishCh = channel('dd-trace:ci:manual:test:finish')

// ... code for your testing framework goes here
  afterEach(() => {
    const testStatusPayload = {
      status: 'fail',
      error: new Error('assertion error')
    }
    testFinishCh.publish(testStatusPayload)
  })
// code for your testing framework continues here ...

The payload to be published has attributes status and error:

  • status is a string that takes one of three values:

    • 'pass' when a test passes.
    • 'fail' when a test fails.
    • 'skip' when a test has been skipped.
  • error is an Error object containing the reason why a test failed.

Add tags channel

Grab this channel by its ID dd-trace:ci:manual:test:addTags to publish that a test needs custom tags. This can be done within the test function:

const { channel } = require('node:diagnostics_channel')
const testAddTagsCh = channel('dd-trace:ci:manual:test:addTags')

// ... code for your testing framework goes here
  test('can sum', () => {
    testAddTagsCh.publish({ 'test.owner': 'my-team', 'number.assertions': 3 })
    const result = sum(2, 1)
    assert.equal(result, 3)
  })
// code for your testing framework continues here ...

The payload to be published is a dictionary <string, string|number> of tags or measures that are added to the test.

Run the tests

When the test start and end channels are in your code, run your testing framework like you normally do, including the following environment variables:

NODE_OPTIONS="-r dd-trace/ci/init" DD_CIVISIBILITY_MANUAL_API_ENABLED=1 DD_ENV=ci DD_SERVICE=my-custom-framework-tests yarn run-my-test-framework

Known limitations

ES modules

Mocha >=9.0.0 uses an ESM-first approach to load test files. That means that if ES modules are used (for example, by defining test files with the .mjs extension), the instrumentation is limited: tests are detected, but there is no visibility into the tests themselves. For more information about ES modules, see the Node.js documentation.

Browser tests

Browser tests executed with mocha, jest, cucumber, cypress, and playwright are instrumented by dd-trace-js, but visibility into the browser session itself is not provided by default (for example, network calls, user actions, and page loads).

If you want visibility into the browser process, consider using RUM & Session Replay. When using Cypress, test results and their generated RUM browser sessions and session replays are automatically linked. For more information, see the Instrumenting your browser tests with RUM guide.

Cypress interactive mode

Cypress interactive mode (which you can enter by running cypress open) is not supported by CI Visibility because some Cypress events, such as before:run, are not fired. If you want to try it anyway, set experimentalInteractiveRunEvents: true in your Cypress configuration file.

Mocha parallel tests

Mocha’s parallel mode is not supported. Tests run in parallel mode are not instrumented by CI Visibility.

Cucumber parallel tests

Cucumber’s parallel mode is not supported. Tests run in parallel mode are not instrumented by CI Visibility.

Jest’s test.concurrent

Jest’s test.concurrent is not supported.

Jest’s --forceExit

Jest’s --forceExit option may cause data loss. Datadog tries to send data immediately after your tests finish, but shutting down the process abruptly can cause some requests to fail. Use --forceExit with caution.

Mocha’s --exit

Mocha’s --exit option may cause data loss. Datadog tries to send data immediately after your tests finish, but shutting down the process abruptly can cause some requests to fail. Use --exit with caution.

Best practices

Follow these practices to take full advantage of the testing framework and CI Visibility.

Parameterized tests

Whenever possible, leverage the tools that testing frameworks provide for parameterized tests. For example, for jest:

Avoid this:

[[1,2,3], [3,4,7]].forEach(([a,b,expected]) => {
  test('sums correctly', () => {
    expect(a+b).toEqual(expected)
  })
})

And use test.each instead:

test.each([[1,2,3], [3,4,7]])('sums correctly %i and %i', (a,b,expected) => {
  expect(a+b).toEqual(expected)
})

For mocha, use mocha-each:

const forEach = require('mocha-each');
forEach([
  [1,2,3],
  [3,4,7]
])
.it('adds %i and %i then returns %i', (a,b,expected) => {
  expect(a+b).to.equal(expected)
});

When you use this approach, both the testing framework and CI Visibility can tell your tests apart.

Further reading