
Overview

If you experience issues setting up or running Datadog Experiments, use this page to troubleshoot. If you continue to have trouble, contact Datadog support.

Experiment results do not appear

If experiment results are missing after you launch an experiment, start by checking whether the experiment is assigning users. Then, navigate to the appropriate troubleshooting step.

Step 1: Confirm the experiment is assigning users

On the Experiments page, select your experiment to open its detail page. Hover over the values of the metric scorecard:

  • If the subject assignment count for each variant is zero, go to Step 2 to debug traffic.
  • If the subject assignment count is greater than zero, but the metric values are zero, skip to Step 3.

In the following example, the subject assignment count is 12,427 for Variant A and 12,573 for Variant B.

If your metric scorecard shows N/A for metric values or subject assignment counts, this means the analysis has not run yet. Wait for it to run, then continue with the troubleshooting steps as needed.
An experiment scorecard tooltip showing the metric name, the average user-level metric value per variant, the total metric value, and the subject assignment count for each variant.

Step 2: Confirm the experiment is receiving traffic

Verify that your feature flag is enabled and evaluates in the correct environment. Then, confirm that traffic reaches the experiment’s targeting rule.

  1. On the Experiments page, select your experiment to open its detail page. Hover over the experiment flag label (for example, new-product-photos).

  2. Note the Environment where the experiment is running, then click Go to Flag.

    An experiment page showing a tooltip on the feature flag with the environment (dev, enabled) and a Go to Flag link highlighted.
  3. On the Feature Flags page, select the correct environment tab and confirm that the flag is Enabled. If the flag is disabled, enable it before proceeding.

    The Feature Flags page with the Enabled toggle highlighted in the Targeting rules and rollouts section.
  4. In the Real-time metric overview section, confirm that the bar chart shows exposure events.

    The Real-time metric overview section of the Feature Flags page, showing a bar chart of exposures over time broken down by variant.

Based on what you see in the Real-time metric overview, follow the appropriate path:

The flag is not receiving traffic

If the Real-time metric overview section shows no exposure events, the flag is not receiving traffic from your application.

Confirm the flag is enabled in the environment where your application runs. You can manage environments on the Environments page. See the Getting Started with Feature Flags guide for details on environments.

After you enable the flag, check the Real-time metric overview for incoming exposure events. Then, return to Step 1 to verify that the subject assignment count is increasing.

The flag is receiving traffic but experiment assignments are zero

If the flag shows exposures but the metric scorecard shows zero assignments, traffic is not reaching the experiment’s targeting rule.

The Targeting Rules & Rollouts section displays a list of targeting rules that the flag evaluates from top to bottom. Rules above the experiment’s targeting rule, such as rules that exclude internal users or specific organizations, can capture traffic before it reaches the experiment.
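This first-match behavior can be pictured as a walk down an ordered rule list. The following sketch uses hypothetical rule names and user attributes to show why a rule above the experiment can starve it of traffic:

```python
# Sketch: first-match evaluation of an ordered targeting-rule list.
# Rule names and user attributes are hypothetical.

def evaluate(rules, user):
    """Return the name of the first rule whose predicate matches."""
    for name, predicate in rules:
        if predicate(user):
            return name
    return "default"

rules = [
    # An exclusion rule above the experiment captures traffic first.
    ("exclude-internal", lambda u: u["email"].endswith("@example.com")),
    ("experiment",       lambda u: True),  # the experiment's targeting rule
]

internal = {"email": "dev@example.com"}
external = {"email": "shopper@gmail.com"}

print(evaluate(rules, internal))  # exclude-internal: never reaches the experiment
print(evaluate(rules, external))  # experiment
```

Because evaluation stops at the first matching rule, moving or narrowing an upstream rule is often enough to restore traffic to the experiment rule.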

The Targeting Rules & Rollouts section of a feature flag showing the experiment targeting rule with 269 users and rollout percentages for each variant across four stages.

Check the following and edit the targeting rule and traffic exposure as needed:

  • Targeting rule order: Are targeting rules above the experiment capturing traffic before it reaches the experiment rule?
  • Targeting rule filters: Does incoming traffic match the filters in the experiment’s targeting rule?
  • Traffic exposure: Is the traffic exposure for the targeting rule set correctly?

After making the necessary changes, return to Step 1 to verify that the subject assignment count is increasing.

Step 3: Confirm metric events are firing

If users are assigned to the experiment but the metric values are missing, check whether the assigned users have associated metric events.

Work through the following checks in order. Each builds on the previous one, so continue to the next if the issue persists.

A metric event must meet two criteria for Datadog to include it in experiment results:
  • The event must come from a user with at least one experiment exposure event.
  • The event must occur after the user's first experiment exposure.
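The two criteria amount to a join against each user's first exposure timestamp. The sketch below illustrates the filtering logic; the event shapes and timestamps are invented for illustration:

```python
# Sketch: keep only metric events from exposed users, fired after
# that user's first exposure. Event shapes are hypothetical.

def eligible_events(exposures, metric_events):
    """Filter metric events by the two inclusion criteria."""
    # Earliest exposure time per user.
    first_exposure = {}
    for user, ts in exposures:
        if user not in first_exposure or ts < first_exposure[user]:
            first_exposure[user] = ts
    # Criterion 1: the user has at least one exposure event.
    # Criterion 2: the event occurs after the user's first exposure.
    return [
        (user, ts) for user, ts in metric_events
        if user in first_exposure and ts > first_exposure[user]
    ]

exposures = [("u1", 100), ("u2", 200)]
metric_events = [
    ("u1", 150),  # counted: fired after u1's first exposure
    ("u1", 50),   # dropped: fired before u1 was exposed
    ("u3", 300),  # dropped: u3 was never exposed
]
print(eligible_events(exposures, metric_events))  # [('u1', 150)]
```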

Check the metric scorecard

  1. From the metric scorecard you checked in Step 1, hover over the metric with a zero value and click the ⋮ menu icon. Select Edit Metric to open the metric definition page.

  2. Verify that the metric event name is correct (for example, check for typos). Then, review the event volume chart on the right side of the page to confirm the event is firing.

    The Edit Metric page showing the metric definition on the left and a bar chart of metric event volume over the past week on the right.

If the event is firing but metric values are still zero, the metric events may not be matching to experiment exposures. Continue to the next section to verify that the exposure and metric identifiers match.

Verify that the exposure and metric identifiers match

Datadog matches metric events to experiment exposures by subject identifier: the targetingKey your SDK passes with each flag evaluation must match the subject type attribute configured in Datadog. If these identifiers do not match, the experiment cannot associate metrics with users.

  1. On the Experiments page, select your experiment to open its detail page.

  2. Select the Flag & Exposures tab. Then, click View Exposures Log to see a list of recently exposed subjects. For details on how exposure events are tracked, see the SDK documentation.

    The Exposures log showing the flag key and allocation key as header metadata, with a table of recently exposed users listing timestamp, subject, and variant columns.
  3. The Subject column shows the value your SDK passes as targetingKey. Confirm that the targetingKey in your SDK matches the subject type attribute (for example, @usr.id).

  4. To resolve a mismatch, update either the targetingKey in your SDK or the attribute on the Subject Types page so that both use the same identifier.
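The mismatch described above behaves like a failed join key: exposures keyed on one identifier never associate with metric events keyed on another. A toy sketch with invented identifiers:

```python
# Sketch: metric events only associate with users when the exposure
# subject (the SDK's targetingKey) and the metric event's identifier
# are the same string. Identifiers here are hypothetical.

def associated_users(exposure_subjects, metric_event_subjects):
    """Users whose metric events can be matched to an exposure."""
    return set(exposure_subjects) & set(metric_event_subjects)

# The SDK passes the user ID as targetingKey...
exposure_subjects = ["user-123", "user-456"]

# ...but metric events are keyed on email, so nothing joins.
mismatched = ["user-123@example.com", "user-456@example.com"]
print(associated_users(exposure_subjects, mismatched))  # set()

# With both sides on the same identifier, the join succeeds.
print(associated_users(exposure_subjects, ["user-123"]))  # {'user-123'}
```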

Inspect individual sessions

If identifiers match and users are assigned to the experiment but experiment results are still missing, inspect individual sessions to identify why specific users are not generating metric events.

  1. On the Activity Stream page, filter for experiment sessions using the following syntax:

    @feature_flags.<flag-key>:<variant-value>
    

    For example, to filter sessions for the false variant of the new-product-photos flag, use @feature_flags.new-product-photos:false.

    The Product Analytics Activity Stream page filtered by @feature_flags.new-product-photos:false, showing a list of sessions with columns for date, session type, time spent, view count, error count, action count, frustration count, initial view name, and last view name.
  2. Select a session from a user assigned to the experiment. In the session timeline, check for the following:

    • Is the metric event present? Verify that the expected metric event is firing within the session.
    • Does the metric event occur after the feature flag evaluation? Events that occur before the feature flag evaluates do not count toward experiment results.

    If the metric event is missing or fires before the feature flag evaluation, contact Datadog support with the session URL, the experiment name, and the metric event name.

    An individual session detail view showing a timeline of events including a view load and multiple _dd_exposure custom actions fired at 5.39 seconds into the session.

Check outlier handling

If you have confirmed that metric events are firing and identifiers match, but metric values are still zero, outlier handling may be the cause.

When outlier handling is enabled, Datadog calculates a threshold based on the distribution of metric values across users. If the number of users with a metric event is small, Datadog may compute the threshold as zero, which truncates all metric values to zero.
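The truncation effect can be reproduced with a simple percentile threshold. The nearest-rank percentile math below is an illustrative assumption, not Datadog's exact algorithm:

```python
# Sketch: upper-bound percentile truncation over per-user metric
# values. The percentile calculation is an assumption for this
# sketch, not the exact algorithm Datadog uses.

def truncate(values, upper_percentile):
    """Clamp each value to the given percentile of the distribution."""
    ordered = sorted(values)
    # Nearest-rank percentile (an assumption for this sketch).
    rank = max(0, int(len(ordered) * upper_percentile / 100) - 1)
    threshold = ordered[rank]
    return [min(v, threshold) for v in values]

# Most users have no metric event, so their value is 0. With only one
# nonzero user out of 100, the 99th-percentile threshold lands on 0
# and every value, including the real one, is truncated to 0.
values = [0] * 99 + [12]
print(truncate(values, 99))  # all zeros
```

This is why a small number of users with metric events can make every metric value appear as zero until outlier handling is relaxed or disabled.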

To check if outlier handling is the cause:

  1. On the Experiments page, select your experiment to open its detail page.
  2. Hover over the metric name, click the ⋮ menu icon, and select Edit Metric to open the metric definition page.
  3. Expand the Experiment settings accordion. Under Outlier handling, toggle off both Lower bound percentile and Upper bound percentile.
  4. Save the metric.
  5. To trigger an immediate recompute, go to the Metrics section of the experiment’s detail page. Click the ⋮ menu icon next to Last Updated and select run an update now. Otherwise, wait for the next scheduled update.
The experiment's detail page, in the Metrics section showing the Last Updated menu with the option to run an update now.

If metric values appear after disabling outlier handling, the threshold was truncating your data. To resolve this, keep outlier handling disabled or set a higher threshold on the Edit Metric page.

The Edit Metric page with the Outlier handling toggles highlighted.

If the issue persists after completing all checks, contact the Datadog support team.

Further reading

Additional helpful documentation, links, and articles: