Overview

Test Impact Analysis for JavaScript skips entire test suites (test files) rather than individual tests.

Compatibility

Test Impact Analysis is only supported by the following testing frameworks and versions:

  • jest>=24.8.0
    • From dd-trace>=4.17.0 or dd-trace>=3.38.0.
    • Only jest-circus/runner is supported as testRunner.
    • Only jsdom and node are supported as test environments.
  • mocha>=5.2.0
    • From dd-trace>=4.17.0 or dd-trace>=3.38.0.
    • Run mocha with nyc to enable code coverage (see the sketch after this list).
  • cucumber-js>=7.0.0
    • From dd-trace>=4.17.0 or dd-trace>=3.38.0.
    • Run cucumber-js with nyc to enable code coverage (see the sketch after this list).
  • cypress>=6.7.0
    • From dd-trace>=4.17.0 or dd-trace>=3.38.0.
    • Instrument your web application with code coverage.
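
For mocha and cucumber-js, code coverage is enabled by wrapping your test command with nyc. A minimal sketch, assuming nyc is installed as a dev dependency (adjust to however your test script is defined):

nyc mocha          # instead of running mocha directly
nyc cucumber-js    # instead of running cucumber-js directly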

Setup

Test Optimization

Prior to setting up Test Impact Analysis, set up Test Optimization for JavaScript and TypeScript. If you are reporting data through the Agent, use v6.40 or later, or v7.40 or later.

Activate Test Impact Analysis for the test service

You, or a user in your organization with the Intelligent Test Runner Activation (intelligent_test_runner_activation_write) permission, must activate Test Impact Analysis on the Test Service Settings page.

Test Impact Analysis enabled in test service settings in the CI section of Datadog.

Run tests with Test Impact Analysis enabled

After completing setup, run your tests as you normally do. If you are reporting data through the Datadog Agent, for example:

NODE_OPTIONS="-r dd-trace/ci/init" DD_ENV=ci DD_SERVICE=my-javascript-app yarn test

If you are reporting data in Agentless mode instead, for example:

NODE_OPTIONS="-r dd-trace/ci/init" DD_ENV=ci DD_SERVICE=my-javascript-app DD_CIVISIBILITY_AGENTLESS_ENABLED=true DD_API_KEY=$DD_API_KEY yarn test

Cypress

For Test Impact Analysis to work with Cypress, you must instrument your web application with code coverage. For more information about enabling code coverage, see the Cypress documentation.

To check that you’ve successfully enabled code coverage, navigate to your web app with Cypress and check the window.__coverage__ global variable. This is what dd-trace uses to collect code coverage for Test Impact Analysis.
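
A quick way to verify this is a small spec that asserts the coverage object exists. A minimal sketch, assuming your application is served at the baseUrl configured for Cypress:

// Fails if the application under test is not instrumented with code coverage
it('exposes window.__coverage__', () => {
  cy.visit('/');
  cy.window().should('have.property', '__coverage__');
});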

Inconsistent test durations

In some frameworks, such as jest, cache mechanisms make tests faster after other tests have run (see the jest cache docs). If Test Impact Analysis skips all but a few test files, the remaining suites might run slower than usual because they run with a colder cache. Even so, the total execution time of your test command should still be reduced.

Disabling skipping for specific tests

You can override the Test Impact Analysis behavior and prevent specific tests from being skipped. These tests are referred to as unskippable tests.

Why make tests unskippable?

Test Impact Analysis uses code coverage data to determine whether or not tests should be skipped. In some cases, this data may not be sufficient to make this determination.

Examples include:

  • Tests that read data from text files
  • Tests that interact with APIs outside of the code being tested (such as remote REST APIs)

Designating tests as unskippable ensures that Test Impact Analysis runs them regardless of coverage data.

Marking tests as unskippable

You can use the following docblock at the top of your test file to mark a suite as unskippable. This prevents any of the tests defined in the test file from being skipped by Test Impact Analysis. This is similar to how jest's testEnvironmentOptions docblock works.

/**
 * @datadog {"unskippable": true}
 */

describe('context', () => {
  it('can sum', () => {
    expect(1 + 2).to.equal(3)
  })
})

For cucumber, you can use the @datadog:unskippable tag at the top of your feature file to mark it as unskippable. This prevents any of the scenarios defined in the feature file from being skipped by Test Impact Analysis.

@datadog:unskippable
Feature: Greetings
  Scenario: Say greetings
    When the greeter says greetings
    Then I should have heard "greetings"

Examples of tests that should be unskippable

This section shows some examples of tests that should be marked as unskippable.

Tests that depend on fixtures

/**
 * We have a `payload.json` fixture file in `./fixtures/payload`
 * that is processed by `processPayload` and put into a snapshot.
 * Changes in `payload.json` do not affect the test code coverage but can
 * make the test fail.
 */

/**
 * @datadog {"unskippable": true}
 */
import processPayload from './process-payload';
import payload from './fixtures/payload';

it('can process payload', () => {
    expect(processPayload(payload)).toMatchSnapshot();
});

Tests that communicate with external services

/**
 * We query an external service running outside the context of
 * the test.
 * Changes in this external service do not affect the test code coverage
 * but can make the test fail.
 */

/**
 * @datadog {"unskippable": true}
 */
it('can query data', (done) => {
    fetch('https://www.external-service.com/path')
        .then((res) => res.json())
        .then((json) => {
            expect(json.data[0]).toEqual('value');
            done();
        });
});

# Same as above: we're requesting an external service.
@datadog:unskippable
Feature: Process the payload
  Scenario: Server responds correctly
    When the server responds correctly
    Then I should have received "value"
