---
title: Java Feature Flags
description: Set up Datadog Feature Flags for Java applications.
breadcrumbs: Docs > Feature Flags > Server-Side Feature Flags > Java Feature Flags
---

# Java Feature Flags

{% callout %}
# Important note for users on the following Datadog sites: app.ddog-gov.com

{% alert level="danger" %}
This product is not supported for your selected [Datadog site](https://docs.datadoghq.com/getting_started/site.md).
{% /alert %}

{% /callout %}

{% alert level="warning" %}
Java Feature Flags support is experimental and requires enabling an experimental flag in the tracer. See the Configuration section for details.
{% /alert %}

## Overview{% #overview %}

This page describes how to instrument a Java application with the Datadog Feature Flags SDK. Datadog feature flags provide a unified way to remotely control feature availability in your app, experiment safely, and deliver new experiences with confidence.

The Java SDK integrates feature flags directly into the Datadog APM tracer and implements the [OpenFeature](https://openfeature.dev/) standard for maximum flexibility and compatibility.

{% alert level="info" %}
If you're using Datadog APM and your application already has the Datadog Java tracer and Remote Configuration enabled, skip to Initialize the OpenFeature provider. You only need to add the OpenFeature dependencies and initialize the provider.
{% /alert %}

## Compatibility requirements{% #compatibility-requirements %}

The Datadog Feature Flags SDK for Java requires:

- **Java 11 or higher**
- **Datadog Java APM Tracer**: Version **1.57.0** or later
- **OpenFeature SDK**: Version **1.18.2** or later
- **Datadog Agent**: Version **7.x or later** with [Remote Configuration](https://docs.datadoghq.com/remote_configuration.md) enabled
- **Datadog API Key**: Required for Remote Configuration

For a full list of supported Java versions and frameworks, see [Compatibility Requirements](https://docs.datadoghq.com/tracing/trace_collection/compatibility/java.md).

## Getting started{% #getting-started %}

Before you begin, make sure you've already [installed and configured the Agent](https://docs.datadoghq.com/tracing/trace_collection/automatic_instrumentation/dd_libraries/java.md#install-and-configure-the-agent).

## Installation{% #installation %}

Feature flagging is integrated into the Datadog Java APM tracer. You need the tracer JAR and the OpenFeature SDK dependencies.

{% tab title="Gradle (Groovy)" %}
Add the following dependencies to your `build.gradle`:

```groovy
dependencies {
    // OpenFeature SDK for flag evaluation
    implementation 'dev.openfeature:sdk:1.18.2'

    // Datadog OpenFeature Provider
    implementation 'com.datadoghq:dd-openfeature:1.57.0'
}
```

{% /tab %}

{% tab title="Gradle (Kotlin)" %}
Add the following dependencies to your `build.gradle.kts`:

```kotlin
dependencies {
    // OpenFeature SDK for flag evaluation
    implementation("dev.openfeature:sdk:1.18.2")

    // Datadog OpenFeature Provider
    implementation("com.datadoghq:dd-openfeature:1.57.0")
}
```

{% /tab %}

{% tab title="Maven" %}
Add the following dependencies to your `pom.xml`:

```xml
<dependencies>
    <!-- OpenFeature SDK for flag evaluation -->
    <dependency>
        <groupId>dev.openfeature</groupId>
        <artifactId>sdk</artifactId>
        <version>1.18.2</version>
    </dependency>

    <!-- Datadog OpenFeature Provider -->
    <dependency>
        <groupId>com.datadoghq</groupId>
        <artifactId>dd-openfeature</artifactId>
        <version>1.57.0</version>
    </dependency>
</dependencies>
```

{% /tab %}

## Configuration{% #configuration %}

If your Datadog Agent already has Remote Configuration enabled for other features (like Dynamic Instrumentation or Application Security), you can skip the Agent configuration and go directly to Application configuration.

### Agent configuration{% #agent-configuration %}

Configure your Datadog Agent to enable Remote Configuration:

In the `datadog.yaml` file:

```yaml
# Enable Remote Configuration
remote_configuration:
  enabled: true

# Set your API key
api_key: <YOUR_API_KEY>
```

### Application configuration{% #application-configuration %}

If your application already runs with `-javaagent:dd-java-agent.jar` and has Remote Configuration enabled (`DD_REMOTE_CONFIG_ENABLED=true`), you only need to add the experimental feature flag (`DD_EXPERIMENTAL_FLAGGING_PROVIDER_ENABLED=true`). Skip the tracer download and JVM configuration steps.

Configure your Java application with the required environment variables or system properties:

{% tab title="Environment Variables" %}

```bash
# Required: Enable Remote Configuration in the tracer
export DD_REMOTE_CONFIG_ENABLED=true

# Required: Enable experimental feature flagging support
export DD_EXPERIMENTAL_FLAGGING_PROVIDER_ENABLED=true

# Required: Your Datadog API key
export DD_API_KEY=<YOUR_API_KEY>

# Required: Service name
export DD_SERVICE=<YOUR_SERVICE_NAME>

# Required: Environment (e.g., prod, staging, dev)
export DD_ENV=<YOUR_ENVIRONMENT>

# Optional: Version
export DD_VERSION=<YOUR_APP_VERSION>

# Start your application with the tracer
java -javaagent:path/to/dd-java-agent.jar -jar your-application.jar
```

{% /tab %}

{% tab title="System Properties" %}

```bash
java -javaagent:path/to/dd-java-agent.jar \
  -Ddd.remote.config.enabled=true \
  -Ddd.experimental.flagging.provider.enabled=true \
  -Ddd.api.key=<YOUR_API_KEY> \
  -Ddd.service=<YOUR_SERVICE_NAME> \
  -Ddd.env=<YOUR_ENVIRONMENT> \
  -Ddd.version=<YOUR_APP_VERSION> \
  -jar your-application.jar
```

{% /tab %}

The Datadog feature flagging system starts automatically when the tracer is initialized with both Remote Configuration and the experimental flagging provider enabled. No additional initialization code is required in your application.

{% alert level="danger" %}
Feature flagging requires both `DD_REMOTE_CONFIG_ENABLED=true` and `DD_EXPERIMENTAL_FLAGGING_PROVIDER_ENABLED=true`. Without the experimental flag, the feature flagging system does not start and the `Provider` returns the programmatic default.
{% /alert %}

### Add the Java tracer to the JVM{% #add-the-java-tracer-to-the-jvm %}

For instructions on how to add the `-javaagent` argument to your application server or framework, see [Add the Java Tracer to the JVM](https://docs.datadoghq.com/tracing/trace_collection/automatic_instrumentation/dd_libraries/java.md#add-the-java-tracer-to-the-jvm).

Make sure to include the feature flagging configuration flags:

- `-Ddd.remote.config.enabled=true`
- `-Ddd.experimental.flagging.provider.enabled=true`

## Initialize the OpenFeature provider{% #initialize-the-openfeature-provider %}

Initialize the Datadog OpenFeature provider in your application startup code. The provider connects to the feature flagging system running in the Datadog tracer.

```java
import dev.openfeature.sdk.OpenFeatureAPI;
import dev.openfeature.sdk.Client;
import datadog.trace.api.openfeature.Provider;
import dev.openfeature.sdk.exceptions.ProviderNotReadyError;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class App {
    private static final Logger logger = LoggerFactory.getLogger(App.class);
    private static Client client;

    public static void main(String[] args) throws Exception {
        // Initialize the Datadog provider
        logger.info("Initializing Datadog OpenFeature Provider...");
        OpenFeatureAPI api = OpenFeatureAPI.getInstance();

        try {
            // Set provider and wait for initial configuration (recommended)
            api.setProviderAndWait(new Provider());
            client = api.getClient("my-app");
            logger.info("OpenFeature provider initialized successfully");
        } catch (ProviderNotReadyError e) {
            // Handle gracefully - app will use default flag values
            logger.warn("Provider not ready (no tracer/config available), continuing with defaults", e);
            client = api.getClient("my-app");
            logger.info("App will use default flag values until provider is ready");
        } catch (Exception e) {
            logger.error("Failed to initialize OpenFeature provider", e);
            throw e;
        }

        // Your application code here
    }
}
```

Use `setProviderAndWait()` to block the calling thread until the initial flag configuration is received from Remote Configuration. This ensures flags are ready before the application starts serving traffic. The default timeout is 30 seconds.

`ProviderNotReadyError` is an OpenFeature SDK exception thrown when the provider times out during initialization. Catching it allows the application to start with default flag values if Remote Configuration is unavailable. If not caught, the exception propagates and may prevent application startup. Handle this based on your availability requirements.

### Asynchronous initialization{% #asynchronous-initialization %}

For non-blocking initialization, use `setProvider()` and listen for provider events:

```java
import dev.openfeature.sdk.ProviderEvent;

OpenFeatureAPI api = OpenFeatureAPI.getInstance();
Client client = api.getClient();

// Listen for provider state changes
client.on(ProviderEvent.PROVIDER_READY, (event) -> {
    logger.info("Feature flags ready!");
});

client.on(ProviderEvent.PROVIDER_ERROR, (event) -> {
    logger.error("Provider error: {}", event.getMessage());
});

client.on(ProviderEvent.PROVIDER_STALE, (event) -> {
    logger.warn("Provider configuration is stale");
});

// Set provider asynchronously
api.setProvider(new Provider());
```
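
If some code paths must not evaluate flags before the first configuration arrives, you can gate them on the `PROVIDER_READY` event. The following is a minimal, framework-agnostic sketch; the `FlagReadinessGate` class is illustrative (not part of any SDK), and you would call `markReady()` from the `PROVIDER_READY` listener shown above:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Illustrative helper: blocks callers until flags are ready, with a timeout.
class FlagReadinessGate {
    private final CountDownLatch ready = new CountDownLatch(1);

    // Call this from the PROVIDER_READY event listener.
    void markReady() {
        ready.countDown();
    }

    // Returns true if the provider became ready within the timeout,
    // false if the caller should fall back to default flag values.
    boolean awaitReady(long timeout, TimeUnit unit) throws InterruptedException {
        return ready.await(timeout, unit);
    }
}
```

For example, a request handler could call `gate.awaitReady(5, TimeUnit.SECONDS)` and serve default flag values whenever it returns `false`.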

## Set the evaluation context{% #set-the-evaluation-context %}

The evaluation context defines the subject (user, device, session) for flag evaluation. It determines which flag variations are returned based on targeting rules.

```java
import dev.openfeature.sdk.EvaluationContext;
import dev.openfeature.sdk.MutableContext;

// Create an evaluation context with a targeting key and attributes
EvaluationContext context = new MutableContext("user-123")
    .add("email", "user@example.com")
    .add("tier", "premium");

// Use the context for flag evaluations (see next section)
```

The `targetingKey` (for example, `user-123`) is the primary identifier used for consistent flag evaluations and percentage-based rollouts. It's typically a user ID, session ID, or device ID.
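
To see why a stable key matters, consider how deterministic bucketing for percentage rollouts generally works. The sketch below illustrates the general technique only; it is not Datadog's actual algorithm. The same targeting key always hashes to the same bucket, so a user stays in (or out of) a rollout across evaluations, whereas a random or changing key would flip them back and forth:

```java
// Illustrative only: deterministic bucketing, not Datadog's actual algorithm.
class RolloutSketch {
    // The same targeting key always maps to the same bucket in [0, 100).
    static int bucketFor(String targetingKey) {
        // Math.floorMod avoids negative buckets for negative hash codes.
        return Math.floorMod(targetingKey.hashCode(), 100);
    }

    static boolean inRollout(String targetingKey, int rolloutPercent) {
        return bucketFor(targetingKey) < rolloutPercent;
    }
}
```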

## Evaluate flags{% #evaluate-flags %}

Evaluate feature flags using the OpenFeature client. All flag types are supported: Boolean, string, integer, double, and object.

{% tab title="Boolean" %}

```java
// Simple Boolean evaluation
boolean enabled = client.getBooleanValue("checkout.new", false, context);

if (enabled) {
    // New checkout flow
} else {
    // Old checkout flow
}

// Get detailed evaluation result
import dev.openfeature.sdk.FlagEvaluationDetails;

FlagEvaluationDetails<Boolean> details =
    client.getBooleanDetails("checkout.new", false, context);

logger.info("Value: {}", details.getValue());
logger.info("Variant: {}", details.getVariant());
logger.info("Reason: {}", details.getReason());
```

{% /tab %}

{% tab title="String" %}

```java
// Evaluate string flags (e.g., UI themes, API endpoints)
String theme = client.getStringValue("ui.theme", "light", context);

String apiEndpoint = client.getStringValue(
    "payment.api.endpoint",
    "https://api.example.com/v1",
    context
);
```

{% /tab %}

{% tab title="Number" %}

```java
// Integer flags (e.g., limits, quotas)
int maxRetries = client.getIntegerValue("retries.max", 3, context);

// Double flags (e.g., thresholds, rates)
double discountRate = client.getDoubleValue("pricing.discount.rate", 0.0, context);
```

{% /tab %}

{% tab title="Object" %}

```java
import dev.openfeature.sdk.Value;

// Evaluate object/JSON flags for complex configuration
Value config = client.getObjectValue("ui.config", new Value(), context);

// Access structured data
if (config.isStructure()) {
    Value timeout = config.asStructure().getValue("timeout");
    Value endpoint = config.asStructure().getValue("endpoint");
}
```

{% /tab %}

## Error handling{% #error-handling %}

The OpenFeature SDK uses a default value pattern. If evaluation fails for any reason, the default value you provide is returned.

```java
import dev.openfeature.sdk.ErrorCode;

// Check evaluation details for errors
FlagEvaluationDetails<Boolean> details =
    client.getBooleanDetails("checkout.new", false, context);

if (details.getErrorCode() != null) {
    switch (details.getErrorCode()) {
        case FLAG_NOT_FOUND:
            logger.warn("Flag does not exist: {}", "checkout.new");
            break;
        case PROVIDER_NOT_READY:
            logger.warn("Provider not initialized yet");
            break;
        case TARGETING_KEY_MISSING:
            logger.warn("Evaluation context missing targeting key");
            break;
        case TYPE_MISMATCH:
            logger.error("Flag value type doesn't match requested type");
            break;
        default:
            logger.error("Evaluation error for flag {}: {}", "checkout.new", details.getErrorCode());
    }
}
```

### Common error codes{% #common-error-codes %}

| Error Code              | Description                                     | Resolution                                                     |
| ----------------------- | ----------------------------------------------- | -------------------------------------------------------------- |
| `PROVIDER_NOT_READY`    | Initial configuration not received              | Wait for provider initialization or use `setProviderAndWait()` |
| `FLAG_NOT_FOUND`        | Flag doesn't exist in configuration             | Check flag key or create flag in Datadog UI                    |
| `TARGETING_KEY_MISSING` | No targeting key in evaluation context          | Provide a targeting key when creating context                  |
| `TYPE_MISMATCH`         | Flag value can't be converted to requested type | Use correct evaluation method for flag type                    |
| `INVALID_CONTEXT`       | Evaluation context is null                      | Provide a valid evaluation context                             |

## Advanced configuration{% #advanced-configuration %}

### Custom initialization timeout{% #custom-initialization-timeout %}

Configure how long the provider waits for initial configuration:

```java
import datadog.trace.api.openfeature.Provider;
import java.util.concurrent.TimeUnit;

Provider.Options options = new Provider.Options()
    .initTimeout(10, TimeUnit.SECONDS);

api.setProviderAndWait(new Provider(options));
```

### Configuration change events{% #configuration-change-events %}

Listen for configuration updates from Remote Configuration:

```java
import dev.openfeature.sdk.ProviderEvent;

client.on(ProviderEvent.PROVIDER_CONFIGURATION_CHANGED, (event) -> {
    logger.info("Flag configuration updated: {}", event.getMessage());
    // Optionally re-evaluate flags or trigger cache refresh
});
```

`PROVIDER_CONFIGURATION_CHANGED` is an optional OpenFeature event. Check the Datadog provider documentation to verify this event is supported in your version.

### Multiple clients{% #multiple-clients %}

Use named clients to organize context and flags by domain or team:

```java
// Named clients share the same provider instance but can have different contexts
Client checkoutClient = api.getClient("checkout");
Client analyticsClient = api.getClient("analytics");

// Each client can have its own evaluation context
EvaluationContext checkoutContext = new MutableContext("session-abc");
EvaluationContext analyticsContext = new MutableContext("user-123");

boolean newCheckout = checkoutClient.getBooleanValue(
    "checkout.ui.new", false, checkoutContext
);

boolean enhancedAnalytics = analyticsClient.getBooleanValue(
    "analytics.enhanced", false, analyticsContext
);
```

The `Provider` instance is shared globally. Client names are for organizational purposes only and don't create separate provider instances. All clients use the same underlying Datadog provider and flag configurations.

## Best practices{% #best-practices %}

### Initialize early{% #initialize-early %}

Initialize the OpenFeature provider as early as possible in your application lifecycle (for example, in `main()` or application startup). This ensures flags are ready before business logic executes.

### Use meaningful default values{% #use-meaningful-default-values %}

Always provide sensible default values that maintain safe behavior if flag evaluation fails:

```java
// Good: Safe default that maintains current behavior
boolean useNewAlgorithm = client.getBooleanValue("algorithm.new", false, context);

// Good: Conservative default for limits
int rateLimit = client.getIntegerValue("rate.limit", 100, context);
```

### Create context once{% #create-context-once %}

Create the evaluation context once per request/user/session and reuse it for all flag evaluations:

```java
// In a web filter or request handler
EvaluationContext userContext = new MutableContext(userId)
    .add("email", user.getEmail())
    .add("tier", user.getTier());

// Reuse context for all flags in this request
boolean featureA = client.getBooleanValue("feature.a", false, userContext);
boolean featureB = client.getBooleanValue("feature.b", false, userContext);
```

Rebuilding the evaluation context for every flag evaluation adds unnecessary overhead. Create the context once at the start of the request lifecycle, then pass it to all subsequent flag evaluations.
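
One simple way to reuse a context for the whole request is a thread-local holder, assuming a thread-per-request server model. The `RequestScoped` class below is an illustrative sketch, not part of the OpenFeature SDK:

```java
// Illustrative thread-local holder for per-request state, such as an
// evaluation context. Assumes one thread handles the whole request.
class RequestScoped<T> {
    private final ThreadLocal<T> value = new ThreadLocal<>();

    void set(T v) {
        value.set(v);
    }

    T get() {
        return value.get();
    }

    // Always clear at the end of the request to avoid leaking state
    // between requests on pooled threads.
    void clear() {
        value.remove();
    }
}
```

A request filter would call `set(userContext)` on entry and `clear()` in a `finally` block; handlers then call `get()` for every flag evaluation in that request.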

### Handle initialization failures (optional){% #handle-initialization-failures-optional %}

Consider handling initialization failures if your application can function with default flag values:

```java
try {
    api.setProviderAndWait(new Provider());
} catch (ProviderNotReadyError e) {
    // Log error and continue with defaults
    logger.warn("Feature flags not ready, using defaults", e);
    // Application will use default values for all flags
}
```

If feature flags are critical for your application to function, let the exception propagate to prevent startup.

### Use consistent targeting keys{% #use-consistent-targeting-keys %}

Use consistent, stable identifiers as targeting keys:

- **Good**: User IDs, session IDs, device IDs
- **Avoid**: Timestamps, random values, frequently changing IDs
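
A common pattern is to prefer a user ID and fall back to a stored anonymous ID, generating a new random ID only once (and persisting it) rather than per evaluation. The helper below is an illustrative sketch; the method and class names are not part of any SDK:

```java
import java.util.UUID;

// Illustrative helper for choosing a stable targeting key.
class TargetingKeys {
    // Prefer the stable user ID; otherwise fall back to a stored anonymous ID.
    // Never generate a fresh random value per evaluation: that breaks
    // consistent bucketing and percentage rollouts.
    static String targetingKeyFor(String userId, String storedAnonymousId) {
        if (userId != null && !userId.isEmpty()) {
            return userId;
        }
        if (storedAnonymousId != null && !storedAnonymousId.isEmpty()) {
            return storedAnonymousId;
        }
        // Generate once, then persist (for example, in a cookie or device
        // storage) so later evaluations reuse the same key.
        return UUID.randomUUID().toString();
    }
}
```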

### Monitor flag evaluation{% #monitor-flag-evaluation %}

Use the detailed evaluation results for logging and debugging:

```java
FlagEvaluationDetails<Boolean> details =
    client.getBooleanDetails("feature.critical", false, context);

logger.info("Flag: {} | Value: {} | Variant: {} | Reason: {}",
    "feature.critical",
    details.getValue(),
    details.getVariant(),
    details.getReason()
);
```

## Troubleshooting{% #troubleshooting %}

### Start here: verify prerequisites{% #start-here-verify-prerequisites %}

Before investigating specific errors, confirm these prerequisites are in place:

1. **The Datadog Agent is healthy and reachable**: See [APM Connection Errors](https://docs.datadoghq.com/tracing/troubleshooting/connection_errors.md) to verify Agent connectivity.
1. **The experimental flagging provider is enabled on the tracer**: Set `DD_EXPERIMENTAL_FLAGGING_PROVIDER_ENABLED=true`.
1. **Required tracer environment variables are set**: `DD_API_KEY`, `DD_ENV`, and `DD_SITE`.
1. **Your `DD_ENV` value appears in the Feature Flag environments list**: Confirm your environment is visible in the [Feature Flag Environments](https://app.datadoghq.com/feature-flags/settings/environments) settings.

After confirming all prerequisites, continue with the following sections if feature flags still aren't working.

### Debug flag evaluations{% #debug-flag-evaluations %}

If flags evaluate but return unexpected values, use `getBooleanDetails()` instead of `getBooleanValue()`. The `Details` variant of each evaluation method returns a `FlagEvaluationDetails` object that exposes the provider's internal state, including the reason, variant, and any error code.

```java
FlagEvaluationDetails<Boolean> details =
    client.getBooleanDetails("your.flag.key", false, context);

logger.info("Flag evaluation details: value={}, variant={}, reason={}, errorCode={}",
    details.getValue(),
    details.getVariant(),
    details.getReason(),
    details.getErrorCode());
```

Review the logged output to understand why the provider returned a particular result.

### Monitor provider state changes{% #monitor-provider-state-changes %}

Add event listeners early in your application startup to observe provider life cycle transitions:

```java
import dev.openfeature.sdk.ProviderEvent;

client.on(ProviderEvent.PROVIDER_READY, (event) -> {
    logger.info("Feature flag provider is ready");
});

client.on(ProviderEvent.PROVIDER_ERROR, (event) -> {
    logger.error("Feature flag provider error: {}", event.getMessage());
});

client.on(ProviderEvent.PROVIDER_STALE, (event) -> {
    logger.warn("Feature flag provider configuration is stale");
});

client.on(ProviderEvent.PROVIDER_CONFIGURATION_CHANGED, (event) -> {
    logger.info("Feature flag configuration updated");
});
```

A `PROVIDER_STALE` or `PROVIDER_ERROR` event after a period of normal operation indicates a loss of connectivity to the Agent or a Remote Configuration disruption.

### Provider not ready{% #provider-not-ready %}

**Problem**: `PROVIDER_NOT_READY` errors when evaluating flags

`PROVIDER_NOT_READY` is returned when flag evaluation is attempted before the provider has received its first configuration from Remote Configuration. This state persists until the tracer receives its initial flag configuration payload from the Agent.

**Common causes**:

1. **Async initialization**: `setProvider()` was used instead of `setProviderAndWait()`. Evaluations that happen before the first Remote Configuration payload arrives return `PROVIDER_NOT_READY`.
1. **Initialization timeout**: `setProviderAndWait()` timed out (default 30 seconds) and threw `ProviderNotReadyError`, which was caught. The application continues evaluating flags while still waiting for the first configuration.

**Solutions**:

1. **Enable debug logging** to see the feature flagging system startup sequence. These messages are emitted at DEBUG level—set `DD_TRACE_DEBUG=true` to see them:
   ```
   [dd.trace] Feature Flagging system starting
   [dd.trace] Feature Flagging system started
   ```
1. **Wait for Remote Configuration sync** (can take 30-60 seconds after publishing flags)
1. **Verify flags are published** in Datadog UI to the correct service and environment
1. If none of these apply, verify the Datadog Agent is healthy and reachable. See [APM Connection Errors](https://docs.datadoghq.com/tracing/troubleshooting/connection_errors.md).

### EVP proxy not available error{% #evp-proxy-not-available-error %}

**Problem**: Logs show `Cannot create backend API client since agentless mode is disabled, and agent does not support EVP proxy`.

Verify the Datadog Agent is healthy and reachable. See [APM Connection Errors](https://docs.datadoghq.com/tracing/troubleshooting/connection_errors.md).

### No exposures in Datadog{% #no-exposures-in-datadog %}

**Problem**: Experiment exposures aren't appearing in Datadog

**Solution**: Verify the flag is associated with an experiment in the Datadog UI. Exposures are only recorded for flags that are part of an experiment—standard feature flags without an experiment association do not generate exposure events.

## Further reading{% #further-reading %}

- [Server-Side Feature Flags](https://docs.datadoghq.com/feature_flags/server.md)
- [Java APM and Distributed Tracing](https://docs.datadoghq.com/tracing/trace_collection/automatic_instrumentation/dd_libraries/java.md)
