---
title: Experiments
description: >-
  Plan, run, and analyze randomized experiments across your stack with Datadog
  Experiments.
breadcrumbs: Docs > Experiments
---

# Experiments

{% callout %}
# Important note for users on the following Datadog site: app.ddog-gov.com

{% alert level="danger" %}
This product is not supported for your selected [Datadog site](https://docs.datadoghq.com/getting_started/site).
{% /alert %}

{% /callout %}

## Overview{% #overview %}

Datadog Experiments helps teams run and analyze randomized experiments, such as A/B tests. These experiments help you understand how new features affect business outcomes, user behavior, and application performance, so you can make confident, data-backed decisions about what to implement.

Datadog Experiments consists of two components:

- An integration with [Datadog Feature Flags](https://docs.datadoghq.com/feature_flags/) for deploying and managing randomized experiments.
- A statistical analysis of [Real User Monitoring (RUM)](https://docs.datadoghq.com/real_user_monitoring/), [Product Analytics](https://docs.datadoghq.com/product_analytics/#getting-started), and [warehouse](https://docs.datadoghq.com/experiments/guide/) data to evaluate experiment results.
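At its core, the analysis component compares a metric between the control and variant groups and reports the relative lift, as shown in the metrics view below. As a rough, self-contained illustration of those two quantities (not Datadog's implementation, which also applies variance-reduction techniques such as CUPED):

```python
import math

def relative_lift(control_mean, variant_mean):
    """Relative lift of the variant over the control, as a fraction.

    For example, a control mean of 10.0 and a variant mean of 11.0
    yields a lift of 0.10 (a 10% improvement).
    """
    return (variant_mean - control_mean) / control_mean

def two_sample_z(control, variant):
    """Two-sample z statistic for a difference in means (large samples).

    Uses sample variances (n - 1 denominator); a larger absolute value
    means stronger evidence that the groups differ.
    """
    n1, n2 = len(control), len(variant)
    m1 = sum(control) / n1
    m2 = sum(variant) / n2
    v1 = sum((x - m1) ** 2 for x in control) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in variant) / (n2 - 1)
    return (m2 - m1) / math.sqrt(v1 / n1 + v2 / n2)
```

This is only a sketch of the underlying statistics; in Datadog Experiments these computations happen automatically from the data sources you connect.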

## Getting started{% #getting-started %}

To start using Datadog Experiments, configure at least one of the following data sources:

- [Real User Monitoring (RUM)](https://docs.datadoghq.com/real_user_monitoring/) for client-side and performance signals.
- [Product Analytics](https://docs.datadoghq.com/product_analytics/#getting-started) for user behavior and journey metrics.
- [Data warehouse](https://docs.datadoghq.com/experiments/guide/) for running experiment analysis directly in your warehouse using Snowflake, BigQuery, Redshift, or Databricks.

After configuring a data source, follow these steps to launch your experiment:

1. **[Create a feature flag](https://docs.datadoghq.com/getting_started/feature_flags/#create-your-first-feature-flag)** and implement it using the [SDK](https://docs.datadoghq.com/getting_started/feature_flags/#feature-flags-sdks) to assign users to the control and variant groups. A feature flag is required to launch your experiment.
1. **[Create a metric](https://docs.datadoghq.com/experiments/defining_metrics)** to evaluate your experiment.
1. **[Create an experiment](https://docs.datadoghq.com/experiments/plan_and_launch_experiments)** to define your hypothesis and optionally calculate a [sample size](https://docs.datadoghq.com/experiments/plan_and_launch_experiments#add-a-sample-size-calculation-optional).
1. **[Launch your experiment](https://docs.datadoghq.com/experiments/plan_and_launch_experiments#step-3---launch-your-experiment)** to see the impact of your change on business outcomes, user journeys, and application performance.
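To build intuition for the optional sample-size step, the standard two-proportion approximation estimates how many users each group needs to detect a given change in a conversion rate. This is a generic statistical sketch, not Datadog's calculator:

```python
import math

def sample_size_per_group(p_control, p_variant, z_alpha=1.96, z_power=0.84):
    """Approximate sample size per group for a two-proportion test.

    The defaults correspond to a 5% two-sided significance level
    (z_alpha = 1.96) and 80% power (z_power = 0.84).
    """
    # Combined variance of the two Bernoulli outcomes.
    variance = p_control * (1 - p_control) + p_variant * (1 - p_variant)
    # Minimum detectable effect: the absolute change you want to detect.
    effect = p_variant - p_control
    return math.ceil((z_alpha + z_power) ** 2 * variance / effect ** 2)
```

For example, detecting a lift from a 10% to a 12% conversion rate at these defaults requires roughly 3,800 users per group; smaller effects require dramatically larger samples, which is why planning the sample size before launch matters.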

{% image
   source="https://datadog-docs.imgix.net/images/product_analytics/experiment/overview_metrics_view-1.e2f0d2533689f53c091dd432a8299def.png?auto=format"
   alt="The Experiments metrics view showing business, funnel, and performance metrics with control and variant values and relative lift for each metric. A tooltip is open on the Revenue metric showing Non-CUPED values for Revenue per User, Total Revenue, and User Assignment Count across the control and variant groups." /%}

## Further reading{% #further-reading %}

- [Feature Flags](https://docs.datadoghq.com/feature_flags/)
- [Product Analytics](https://docs.datadoghq.com/product_analytics/)
