---
title: Reading Experiment Results
description: Read and understand the results of your experiments.
---

# Reading Experiment Results

{% callout %}
# Important note for users on the following Datadog site: app.ddog-gov.com

{% alert level="danger" %}
This product is not supported for your selected [Datadog site](https://docs.datadoghq.com/getting_started/site).
{% /alert %}

{% /callout %}

## Overview{% #overview %}

After [launching your experiment](https://docs.datadoghq.com/experiments/), Datadog begins calculating results for your selected metrics. You can add more metrics, organize metrics into groups, and explore related user sessions to understand the impact of each variant.

{% image
   source="https://datadog-docs.imgix.net/images/product_analytics/experiment/exp_reading_exps_overview.f782a3fd7d20dcaf1da9210056753f49.png?auto=format"
   alt="The experiment results overview showing a decision metrics table with control and treatment values, relative lift, and confidence interval bars for three metrics." /%}

## Confidence intervals{% #confidence-intervals %}

For each metric, Datadog shows the average per-subject value (typically per user) for both the control and treatment variants. It also reports the relative lift and the associated confidence interval.

The relative lift is defined as:

```
                   (Average metric value per treatment subject - Average metric value per control subject)
 Relative lift  =  ---------------------------------------------------------------------------------------
                                      (Average metric value per control subject)
```

The confidence interval represents the range of lift values that are plausibly supported by the experiment's data. While the true lift could fall outside this range, values inside the interval are statistically more consistent with the observed data.

If the entire confidence interval is above zero, then the result is statistically significant. This suggests that the observed difference in metrics is unlikely to be attributable to random noise, and supports the conclusion that the experiment produced a true effect.
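Datadog's exact statistical methodology is not documented here, but the idea can be sketched with a standard delta-method normal approximation for the ratio of two sample means. The function below is purely illustrative: it estimates a 95% confidence interval for the relative lift from raw per-subject values, after which the "entire interval above zero" check is a simple comparison:

```python
import math

def lift_confidence_interval(control, treatment, z=1.96):
    """Approximate 95% CI for relative lift via a delta-method normal
    approximation. Illustrative only; Datadog's internal methodology may differ.

    `control` and `treatment` are lists of per-subject metric values.
    Returns (lower, upper) bounds on the relative lift.
    """
    n_c, n_t = len(control), len(treatment)
    mean_c = sum(control) / n_c
    mean_t = sum(treatment) / n_t
    # Sample variances of the per-subject values
    var_c = sum((x - mean_c) ** 2 for x in control) / (n_c - 1)
    var_t = sum((x - mean_t) ** 2 for x in treatment) / (n_t - 1)
    lift = (mean_t - mean_c) / mean_c
    # Delta-method standard error of the ratio mean_t / mean_c
    se = math.sqrt(var_t / (n_t * mean_c**2)
                   + (mean_t**2 * var_c) / (n_c * mean_c**4))
    return lift - z * se, lift + z * se
```

With this sketch, a result is statistically significant (at the 95% level) when `lower > 0`: even the most pessimistic plausible lift is still positive.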

## Exploring results{% #exploring-results %}

To dive deeper into experiment results, hover over a metric and click **Chart**. From there, you can compare the experiment's impact across different user segments.

### Segment-level results{% #segment-level-results %}

Subject-level properties are based on a subject's attributes at the initial time of exposure (for example, a user's region, or whether they are a new or repeat visitor). This is useful for understanding whether certain cohorts of users reacted differently to the new experience.

{% image
   source="https://datadog-docs.imgix.net/images/product_analytics/experiment/exp_segment_view.0310c285096a01282426f2a823c60ca4.png?auto=format"
   alt="Segment-level view of a metric split by Country ISO Code, showing a bar chart of relative lift and a data table with control and treatment values per country." /%}

## Further reading{% #further-reading %}

- [Make data-driven design decisions with Product Analytics](https://www.datadoghq.com/blog/datadog-product-analytics/)
- [Analytics Explorer](https://docs.datadoghq.com/product_analytics/analytics_explorer/)
