---
title: Java Feature Flags
description: Set up Datadog Feature Flags for Java applications.
breadcrumbs: Docs > Feature Flags > Server-Side Feature Flags > Java Feature Flags
---

# Java Feature Flags

{% callout %}
# Important note for users on the following Datadog sites: app.ddog-gov.com, us2.ddog-gov.com

{% alert level="danger" %}
This product is not supported for your selected [Datadog site](https://docs.datadoghq.com/getting_started/site.md).
{% /alert %}

{% /callout %}

{% alert level="info" %}
Enable Java Feature Flags by setting `DD_EXPERIMENTAL_FLAGGING_PROVIDER_ENABLED=true`. The `EXPERIMENTAL_` prefix is kept for backwards compatibility; the provider is stable. See the Configuration section for details.
{% /alert %}

## Overview{% #overview %}

This page describes how to instrument a Java application with the Datadog Feature Flags SDK. Datadog feature flags provide a unified way to remotely control feature availability in your app, experiment safely, and deliver new experiences with confidence.

The Java SDK integrates feature flags directly into the Datadog SDK and implements the [OpenFeature](https://openfeature.dev/) standard for maximum flexibility and compatibility.

{% alert level="info" %}
If you're using Datadog APM and your application already has the Datadog Java SDK and Remote Configuration enabled, skip to [Initialize the OpenFeature provider](#initialize-the-openfeature-provider). You only need to add the OpenFeature dependencies and initialize the provider.
{% /alert %}

## Compatibility requirements{% #compatibility-requirements %}

The Datadog Feature Flags SDK for Java requires:

- **Java 11 or higher**
- **Datadog Java SDK**: Version **1.57.0** or later
- **OpenFeature SDK**: Version **1.18.2** or later
- **Datadog Agent**: Version **7.x or later** with [Remote Configuration](https://docs.datadoghq.com/remote_configuration.md) enabled
- **Datadog [API key](https://docs.datadoghq.com/account_management/api-app-keys.md#api-keys)**: Configured on the Agent (not the application) for Remote Configuration

For a full list of Datadog's Java version and framework support, read [Compatibility Requirements](https://docs.datadoghq.com/tracing/trace_collection/compatibility/java.md).

## Getting started{% #getting-started %}

Before you begin, make sure you've already [installed and configured the Agent](https://docs.datadoghq.com/tracing/trace_collection/automatic_instrumentation/dd_libraries/java.md#install-and-configure-the-agent).

## Installation{% #installation %}

Feature flagging is integrated into the Datadog Java SDK. You need the Datadog Java agent JAR (`dd-java-agent.jar`) plus the OpenFeature SDK and Datadog provider dependencies.

{% tab title="Gradle (Groovy)" %}
Add the following dependencies to your `build.gradle` file:

```groovy
dependencies {
    // OpenFeature SDK for flag evaluation
    implementation 'dev.openfeature:sdk:1.18.2'

    // Datadog OpenFeature Provider
    implementation 'com.datadoghq:dd-openfeature:1.57.0'
}
```

{% /tab %}

{% tab title="Gradle (Kotlin)" %}
Add the following dependencies to your `build.gradle.kts` file:

```kotlin
dependencies {
    // OpenFeature SDK for flag evaluation
    implementation("dev.openfeature:sdk:1.18.2")

    // Datadog OpenFeature Provider
    implementation("com.datadoghq:dd-openfeature:1.57.0")
}
```

{% /tab %}

{% tab title="Maven" %}
Add the following dependencies to your `pom.xml` file:

```xml
<dependencies>
    <!-- OpenFeature SDK for flag evaluation -->
    <dependency>
        <groupId>dev.openfeature</groupId>
        <artifactId>sdk</artifactId>
        <version>1.18.2</version>
    </dependency>

    <!-- Datadog OpenFeature Provider -->
    <dependency>
        <groupId>com.datadoghq</groupId>
        <artifactId>dd-openfeature</artifactId>
        <version>1.57.0</version>
    </dependency>
</dependencies>
```

{% /tab %}

## Configuration{% #configuration %}

If your Datadog Agent already has Remote Configuration enabled for other features (like Dynamic Instrumentation or Application Security), you can skip the Agent configuration and go directly to [Application configuration](#application-configuration).

### Agent configuration{% #agent-configuration %}

Configure your Datadog Agent to enable Remote Configuration:

In the `datadog.yaml` file:

```yaml
# Enable Remote Configuration
remote_configuration:
  enabled: true

# Set your API key
api_key: <YOUR_API_KEY>
```

### Application configuration{% #application-configuration %}

If your application already runs with `-javaagent:dd-java-agent.jar` and has Remote Configuration enabled (`DD_REMOTE_CONFIG_ENABLED=true`), you only need to add the feature flagging variable (`DD_EXPERIMENTAL_FLAGGING_PROVIDER_ENABLED=true`). Skip the SDK download and JVM configuration steps.

Configure your Java application with the required environment variables or system properties:

{% tab title="Environment Variables" %}

```bash
# Required: Enable Remote Configuration in the SDK
export DD_REMOTE_CONFIG_ENABLED=true

# Required: Enable the feature flagging provider
# The EXPERIMENTAL_ prefix is historical; the provider is no longer experimental.
export DD_EXPERIMENTAL_FLAGGING_PROVIDER_ENABLED=true

# Required: Service name
export DD_SERVICE=<YOUR_SERVICE_NAME>

# Required: Environment (e.g., prod, staging, dev)
export DD_ENV=<YOUR_ENVIRONMENT>

# Optional: Version
export DD_VERSION=<YOUR_APP_VERSION>

# Start your application with the SDK
java -javaagent:path/to/dd-java-agent.jar -jar your-application.jar
```

{% /tab %}

{% tab title="System Properties" %}

```bash
java -javaagent:path/to/dd-java-agent.jar \
  -Ddd.remote.config.enabled=true \
  -Ddd.experimental.flagging.provider.enabled=true \
  -Ddd.service=<YOUR_SERVICE_NAME> \
  -Ddd.env=<YOUR_ENVIRONMENT> \
  -Ddd.version=<YOUR_APP_VERSION> \
  -jar your-application.jar
```

{% /tab %}

The Datadog feature flagging system starts automatically when the tracer is initialized with both Remote Configuration and the feature flagging provider enabled. No additional initialization code is required in your application.

{% alert level="danger" %}
Feature flagging requires both `DD_REMOTE_CONFIG_ENABLED=true` and `DD_EXPERIMENTAL_FLAGGING_PROVIDER_ENABLED=true`. Without these settings, the feature flagging system does not start and the `Provider` returns the programmatic default.
{% /alert %}

### Add the Java tracer to the JVM{% #add-the-java-tracer-to-the-jvm %}

For instructions on how to add the `-javaagent` argument to your application server or framework, see [Add the Java SDK to the JVM](https://docs.datadoghq.com/tracing/trace_collection/automatic_instrumentation/dd_libraries/java.md#add-the-java-sdk-to-the-jvm).

Make sure to include the feature flagging configuration flags:

- `-Ddd.remote.config.enabled=true`
- `-Ddd.experimental.flagging.provider.enabled=true`

## Initialize the OpenFeature provider{% #initialize-the-openfeature-provider %}

Initialize the Datadog OpenFeature provider in your application startup code. The provider connects to the feature flagging system running in the Datadog SDK.

```java
import dev.openfeature.sdk.OpenFeatureAPI;
import dev.openfeature.sdk.Client;
import datadog.trace.api.openfeature.Provider;
import dev.openfeature.sdk.exceptions.ProviderNotReadyError;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class App {
    private static final Logger logger = LoggerFactory.getLogger(App.class);
    private static Client client;

    public static void main(String[] args) throws Exception {
        // Initialize the Datadog provider
        logger.info("Initializing Datadog OpenFeature Provider...");
        OpenFeatureAPI api = OpenFeatureAPI.getInstance();

        try {
            // Set provider and wait for initial configuration (recommended)
            api.setProviderAndWait(new Provider());
            client = api.getClient("my-app");
            logger.info("OpenFeature provider initialized successfully");
        } catch (ProviderNotReadyError e) {
            // Handle gracefully - app will use default flag values
            logger.warn("Provider not ready (no tracer/config available), continuing with defaults", e);
            client = api.getClient("my-app");
            logger.info("App will use default flag values until provider is ready");
        } catch (Exception e) {
            logger.error("Failed to initialize OpenFeature provider", e);
            throw e;
        }

        // Your application code here
    }
}
```

Use `setProviderAndWait()` to block evaluation until the initial flag configuration is received from Remote Configuration. This helps ensure flags are ready before the application starts serving traffic. The default timeout is 30 seconds.

`ProviderNotReadyError` is an OpenFeature SDK exception thrown when the provider times out during initialization. Catching it allows the application to start with default flag values if Remote Configuration is unavailable. If not caught, the exception propagates and may prevent application startup. Handle this based on your availability requirements.

### Asynchronous initialization{% #asynchronous-initialization %}

For non-blocking initialization, use `setProvider()` and listen for provider events:

```java
import dev.openfeature.sdk.ProviderEvent;

OpenFeatureAPI api = OpenFeatureAPI.getInstance();
Client client = api.getClient();

// Listen for provider state changes
client.on(ProviderEvent.PROVIDER_READY, (event) -> {
    logger.info("Feature flags ready!");
});

client.on(ProviderEvent.PROVIDER_ERROR, (event) -> {
    logger.error("Provider error: {}", event.getMessage());
});

client.on(ProviderEvent.PROVIDER_STALE, (event) -> {
    logger.warn("Provider configuration is stale");
});

// Set provider asynchronously
api.setProvider(new Provider());
```

## Set the evaluation context{% #set-the-evaluation-context %}

The evaluation context defines the subject (user, device, session) for flag evaluation. It determines which flag variations are returned based on targeting rules.

```java
import dev.openfeature.sdk.EvaluationContext;
import dev.openfeature.sdk.MutableContext;

// Create an evaluation context with a targeting key and attributes
EvaluationContext context = new MutableContext("user-123")
    .add("email", "user@example.com")
    .add("tier", "premium");

// Use the context for flag evaluations (see next section)
```

The `targetingKey` (for example, `user-123`) is the primary identifier used for consistent flag evaluations and percentage-based rollouts. It's typically a user ID, session ID, or device ID.
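To see why a stable targeting key matters for percentage-based rollouts, consider this illustrative sketch (not Datadog's actual bucketing algorithm): hashing the flag key and targeting key together yields a deterministic bucket, so the same subject consistently falls inside or outside a rollout percentage.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Illustrative only: a deterministic hash-based bucketing sketch.
// The flag key and targeting key are hashed together, so each subject
// gets a stable bucket in [0, 100) per flag.
public class RolloutBucket {
    static int bucket(String flagKey, String targetingKey) {
        try {
            byte[] d = MessageDigest.getInstance("SHA-256")
                .digest((flagKey + ":" + targetingKey).getBytes(StandardCharsets.UTF_8));
            // First four bytes as a non-negative int, reduced to a percentile.
            int value = ((d[0] & 0x7F) << 24) | ((d[1] & 0xFF) << 16)
                | ((d[2] & 0xFF) << 8) | (d[3] & 0xFF);
            return value % 100;
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is always available
        }
    }

    public static void main(String[] args) {
        // A stable key (user ID) lands in the same bucket on every evaluation;
        // a random or time-based key would re-roll the dice each time.
        System.out.println(bucket("checkout.new", "user-123")
            == bucket("checkout.new", "user-123")); // prints true
    }
}
```

Under a 20% rollout, subjects whose bucket is below 20 see the feature; because the bucket is stable, the exposed population does not change between evaluations.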

## Evaluate flags{% #evaluate-flags %}

Evaluate feature flags using the OpenFeature client. All flag types are supported: Boolean, string, integer, double, and object.

{% tab title="Boolean" %}

```java
import dev.openfeature.sdk.FlagEvaluationDetails;

// Simple Boolean evaluation
boolean enabled = client.getBooleanValue("checkout.new", false, context);

if (enabled) {
    // New checkout flow
} else {
    // Old checkout flow
}

// Get detailed evaluation result
FlagEvaluationDetails<Boolean> details =
    client.getBooleanDetails("checkout.new", false, context);

logger.info("Value: {}", details.getValue());
logger.info("Variant: {}", details.getVariant());
logger.info("Reason: {}", details.getReason());
```

{% /tab %}

{% tab title="String" %}

```java
// Evaluate string flags (e.g., UI themes, API endpoints)
String theme = client.getStringValue("ui.theme", "light", context);

String apiEndpoint = client.getStringValue(
    "payment.api.endpoint",
    "https://api.example.com/v1",
    context
);
```

{% /tab %}

{% tab title="Number" %}

```java
// Integer flags (e.g., limits, quotas)
int maxRetries = client.getIntegerValue("retries.max", 3, context);

// Double flags (e.g., thresholds, rates)
double discountRate = client.getDoubleValue("pricing.discount.rate", 0.0, context);
```

{% /tab %}

{% tab title="Object" %}

```java
import dev.openfeature.sdk.Value;

// Evaluate object/JSON flags for complex configuration
Value config = client.getObjectValue("ui.config", new Value(), context);

// Access structured data
if (config.isStructure()) {
    Value timeout = config.asStructure().getValue("timeout");
    Value endpoint = config.asStructure().getValue("endpoint");
}
```

{% /tab %}

## Error handling{% #error-handling %}

The OpenFeature SDK uses a default value pattern. If evaluation fails for any reason, the default value you provide is returned.

```java
import dev.openfeature.sdk.ErrorCode;

// Check evaluation details for errors
FlagEvaluationDetails<Boolean> details =
    client.getBooleanDetails("checkout.new", false, context);

if (details.getErrorCode() != null) {
    switch (details.getErrorCode()) {
        case FLAG_NOT_FOUND:
            logger.warn("Flag does not exist: {}", "checkout.new");
            break;
        case PROVIDER_NOT_READY:
            logger.warn("Provider not initialized yet");
            break;
        case TARGETING_KEY_MISSING:
            logger.warn("Evaluation context missing targeting key");
            break;
        case TYPE_MISMATCH:
            logger.error("Flag value type doesn't match requested type");
            break;
        default:
            logger.error("Evaluation error for flag {}: {}", "checkout.new", details.getErrorCode());
    }
}
```

### Common error codes{% #common-error-codes %}

| Error Code              | Description                                     | Resolution                                                     |
| ----------------------- | ----------------------------------------------- | -------------------------------------------------------------- |
| `PROVIDER_NOT_READY`    | Initial configuration not received              | Wait for provider initialization or use `setProviderAndWait()` |
| `FLAG_NOT_FOUND`        | Flag doesn't exist in configuration             | Check flag key or create flag in Datadog UI                    |
| `TARGETING_KEY_MISSING` | No targeting key in evaluation context          | Provide a targeting key when creating context                  |
| `TYPE_MISMATCH`         | Flag value can't be converted to requested type | Use correct evaluation method for flag type                    |
| `INVALID_CONTEXT`       | Evaluation context is null                      | Provide a valid evaluation context                             |

## Advanced configuration{% #advanced-configuration %}

### Custom initialization timeout{% #custom-initialization-timeout %}

Configure how long the provider waits for initial configuration:

```java
import datadog.trace.api.openfeature.Provider;
import java.util.concurrent.TimeUnit;

Provider.Options options = new Provider.Options()
    .initTimeout(10, TimeUnit.SECONDS);

api.setProviderAndWait(new Provider(options));
```

### Configuration change events{% #configuration-change-events %}

Listen for configuration updates from Remote Configuration:

```java
import dev.openfeature.sdk.ProviderEvent;

client.on(ProviderEvent.PROVIDER_CONFIGURATION_CHANGED, (event) -> {
    logger.info("Flag configuration updated: {}", event.getMessage());
    // Optionally re-evaluate flags or trigger cache refresh
});
```

`PROVIDER_CONFIGURATION_CHANGED` is an optional OpenFeature event. Check the Datadog provider documentation to verify this event is supported in your version.

### Multiple clients{% #multiple-clients %}

Use named clients to organize context and flags by domain or team:

```java
// Named clients share the same provider instance but can have different contexts
Client checkoutClient = api.getClient("checkout");
Client analyticsClient = api.getClient("analytics");

// Each client can have its own evaluation context
EvaluationContext checkoutContext = new MutableContext("session-abc");
EvaluationContext analyticsContext = new MutableContext("user-123");

boolean newCheckout = checkoutClient.getBooleanValue(
    "checkout.ui.new", false, checkoutContext
);

boolean enhancedAnalytics = analyticsClient.getBooleanValue(
    "analytics.enhanced", false, analyticsContext
);
```

The `Provider` instance is shared globally. Client names are for organizational purposes only and don't create separate provider instances. All clients use the same underlying Datadog provider and flag configurations.

## Best practices{% #best-practices %}

### Initialize early{% #initialize-early %}

Initialize the OpenFeature provider as early as possible in your application lifecycle (for example, in `main()` or application startup). This helps ensure flags are ready before business logic executes.

### Use meaningful default values{% #use-meaningful-default-values %}

Always provide sensible default values that maintain safe behavior if flag evaluation fails:

```java
// Good: Safe default that maintains current behavior
boolean useNewAlgorithm = client.getBooleanValue("algorithm.new", false, context);

// Good: Conservative default for limits
int rateLimit = client.getIntegerValue("rate.limit", 100, context);
```

### Create context once{% #create-context-once %}

Create the evaluation context once per request/user/session and reuse it for all flag evaluations:

```java
// In a web filter or request handler
EvaluationContext userContext = new MutableContext(userId)
    .add("email", user.getEmail())
    .add("tier", user.getTier());

// Reuse context for all flags in this request
boolean featureA = client.getBooleanValue("feature.a", false, userContext);
boolean featureB = client.getBooleanValue("feature.b", false, userContext);
```

Rebuilding the evaluation context for every flag evaluation adds unnecessary overhead. Create the context once at the start of the request lifecycle, then pass it to all subsequent flag evaluations.

### Handle initialization failures (optional){% #handle-initialization-failures-optional %}

Consider handling initialization failures if your application can function with default flag values:

```java
try {
    api.setProviderAndWait(new Provider());
} catch (ProviderNotReadyError e) {
    // Log error and continue with defaults
    logger.warn("Feature flags not ready, using defaults", e);
    // Application will use default values for all flags
}
```

If feature flags are critical for your application to function, let the exception propagate to prevent startup.

### Use consistent targeting keys{% #use-consistent-targeting-keys %}

Use consistent, stable identifiers as targeting keys:

- **Good**: User IDs, session IDs, device IDs
- **Avoid**: Timestamps, random values, frequently changing IDs

### Monitor flag evaluation{% #monitor-flag-evaluation %}

Use the detailed evaluation results for logging and debugging:

```java
FlagEvaluationDetails<Boolean> details =
    client.getBooleanDetails("feature.critical", false, context);

logger.info("Flag: {} | Value: {} | Variant: {} | Reason: {}",
    "feature.critical",
    details.getValue(),
    details.getVariant(),
    details.getReason()
);
```

## Testing{% #testing %}

You can test against a dedicated Datadog test environment with the real Datadog `Provider`, or swap it for OpenFeature's `InMemoryProvider` to control flag values directly in test code. This section shows the in-memory approach, which keeps tests hermetic and offline. `InMemoryProvider` ships in `dev.openfeature:sdk`, so no additional library is required; add `dev.openfeature:sdk` to your test configuration if it is not already present.

```java
import dev.openfeature.sdk.Client;
import dev.openfeature.sdk.OpenFeatureAPI;
import dev.openfeature.sdk.providers.memory.Flag;
import dev.openfeature.sdk.providers.memory.InMemoryProvider;
import java.util.Map;
import org.junit.jupiter.api.*;
import static org.junit.jupiter.api.Assertions.*;

class CheckoutFlagTest {
    private Client client;

    @BeforeEach
    void setUp() {
        Map<String, Flag<?>> flags = Map.of(
            "new-checkout-flow", Flag.<Boolean>builder()
                .variant("on", true)
                .variant("off", false)
                .defaultVariant("on")
                .build(),
            "ui-theme", Flag.<String>builder()
                .variant("dark", "dark")
                .variant("light", "light")
                .defaultVariant("light")
                .build()
        );

        OpenFeatureAPI api = OpenFeatureAPI.getInstance();
        api.setProviderAndWait(new InMemoryProvider(flags));
        client = api.getClient();
    }

    @AfterEach
    void tearDown() {
        OpenFeatureAPI.getInstance().shutdown();
    }

    @Test
    void newCheckoutEnabledByDefault() {
        assertTrue(client.getBooleanValue("new-checkout-flow", false));
    }

    @Test
    void missingFlagReturnsDefault() {
        assertFalse(client.getBooleanValue("does-not-exist", false));
    }
}
```

`OpenFeatureAPI.getInstance()` is a singleton. Always call `shutdown()` in `@AfterEach` (or equivalent); otherwise, provider state leaks between test classes and causes flaky suites.

In Spring Boot tests, register the `InMemoryProvider` through a `@TestConfiguration` bean or in a `@BeforeAll` hook on an `@SpringBootTest` class. Because the OpenFeature API singleton persists for the lifetime of the Spring context, initialization only needs to run once.
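A minimal sketch of the `@TestConfiguration` approach, assuming Spring Boot 3 (use `javax.annotation.PostConstruct` on Boot 2); the class name and flag keys are illustrative:

```java
import dev.openfeature.sdk.OpenFeatureAPI;
import dev.openfeature.sdk.providers.memory.Flag;
import dev.openfeature.sdk.providers.memory.InMemoryProvider;
import jakarta.annotation.PostConstruct;
import java.util.Map;
import org.springframework.boot.test.context.TestConfiguration;

// Illustrative: pull this into tests with @Import(FeatureFlagTestConfig.class).
@TestConfiguration
public class FeatureFlagTestConfig {

    @PostConstruct
    void registerInMemoryProvider() {
        Map<String, Flag<?>> flags = Map.of(
            "new-checkout-flow", Flag.<Boolean>builder()
                .variant("on", true)
                .variant("off", false)
                .defaultVariant("on")
                .build()
        );
        // Runs once when the Spring context is created; the OpenFeature
        // singleton then serves these values for the context's lifetime.
        OpenFeatureAPI.getInstance().setProviderAndWait(new InMemoryProvider(flags));
    }
}
```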

## Troubleshooting{% #troubleshooting %}

This guide follows the flag data path from the **Flagging Platform** (where flags are configured) through **Remote Configuration** and the **Datadog Agent** to the **Java SDK**, and back to the **Flagging Platform** (where evaluations and exposures appear). Work through each step in sequence to isolate where a problem originates.

### 1. Flagging platform: Verify flag configuration

Before checking infrastructure, confirm the flag itself is set up correctly:

1. The flag is **enabled** for the target environment. Flags are disabled by default in each environment.
1. The flag targets the **correct environment** (`DD_ENV`). Flags do not target specific services; they apply to all services within the enabled environment.
1. Your `DD_ENV` value appears in [Feature Flag Environments](https://app.datadoghq.com/feature-flags/settings/environments). If it is absent, the environment has not received any flag traffic yet.

### 2. Remote Configuration: Verify the path

Remote Configuration delivers flag configurations from the Datadog backend to the Agent.

1. **RC is enabled on the Agent**: Set `remote_configuration.enabled: true` in `datadog.yaml` or `DD_REMOTE_CONFIG_ENABLED=true`. See [Remote Configuration](https://docs.datadoghq.com/remote_configuration.md).
1. **`DD_API_KEY` is valid on the Agent** and belongs to the target organization.
1. **`DD_SITE` is set correctly** on the Agent (`site` in `datadog.yaml` or `DD_SITE` env var). See [Agent Site Issues](https://docs.datadoghq.com/agent/troubleshooting/site.md).
1. **Fleet Automation**: Open [Fleet Automation](https://app.datadoghq.com/fleet), select the Agent your application connects to, and confirm Remote Configuration is active.
1. **Agent CLI**: Run `datadog-agent status` and review the Remote Configuration section of the output. See [Agent Commands](https://docs.datadoghq.com/agent/configuration/agent-commands.md).

### 3. Agent: Verify Agent health and connectivity

1. **Agent is running and reachable**: See [APM Connection Errors](https://docs.datadoghq.com/tracing/troubleshooting/connection_errors.md) for steps to verify Agent connectivity from the tracer.

1. **Agent version**: Feature flagging requires Agent 7.x or later with EVP Proxy support.

1. **EVP proxy is available**: Query the Agent's info endpoint and confirm the response includes `evp_proxy/v4/` and `v0.7/config` in the `endpoints` array:

   ```bash
   curl http://localhost:8126/info
   ```

If the Agent logs show the following error, the Agent either started after the tracer connected or does not support EVP Proxy. In this case, check the Agent version and restart order.

```
Cannot create backend API client since Agentless mode is disabled, and agent does not support EVP proxy
```

### 4. SDK: Verify Java SDK state

#### Enable debug logging{% #enable-debug-logging %}

All feature flagging startup messages are emitted at DEBUG level. Set `DD_TRACE_DEBUG=true` and look for the startup sequence:

```
[dd.trace] Feature Flagging system starting
[dd.trace] Feature Flagging system started
```

Then confirm the EVP Proxy and config endpoints were discovered:

```
discovered ... evpProxyEndpoint=evp_proxy/v4/ configEndpoint=v0.7/config
```

If these messages are absent, verify `DD_EXPERIMENTAL_FLAGGING_PROVIDER_ENABLED=true` is set and the tracer started correctly.

#### Monitor provider state changes{% #monitor-provider-state-changes %}

Add event listeners early in application startup to observe provider life cycle transitions. Event listeners detect connectivity changes after initialization:

```java
import dev.openfeature.sdk.ProviderEvent;

client.on(ProviderEvent.PROVIDER_READY, (event) -> {
    logger.info("Feature flag provider is ready");
});

client.on(ProviderEvent.PROVIDER_ERROR, (event) -> {
    logger.error("Feature flag provider error: {}", event.getMessage());
});

client.on(ProviderEvent.PROVIDER_STALE, (event) -> {
    logger.warn("Feature flag provider configuration is stale");
});

client.on(ProviderEvent.PROVIDER_CONFIGURATION_CHANGED, (event) -> {
    logger.info("Feature flag configuration updated");
});
```

A `PROVIDER_ERROR` or `PROVIDER_STALE` event after a period of normal operation indicates a loss of connectivity to the Agent or a Remote Configuration disruption.

#### Provider not ready{% #provider-not-ready %}

`PROVIDER_NOT_READY` is returned when flag evaluation is attempted before the provider has received its first configuration from Remote Configuration. This state persists until the tracer receives its initial flag configuration payload from the Agent.

Common causes:

- **Async initialization**: `setProvider()` was used instead of `setProviderAndWait()`. Evaluations that happen before the first Remote Configuration payload arrives return `PROVIDER_NOT_READY`.
- **Initialization timeout**: `setProviderAndWait()` timed out (default 30 seconds) and threw `ProviderNotReadyError`, which was caught. The application continues evaluating flags while waiting for the first configuration.

Remote Configuration sync can take 30-60 seconds after publishing flags. If `PROVIDER_NOT_READY` persists beyond that, re-check steps 2 and 3.
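If you initialize asynchronously, a small readiness gate can keep critical paths from evaluating flags before the first configuration arrives. The sketch below uses only the JDK; the gate class is a hypothetical helper (not part of either SDK), and `markReady()` would be called from the `PROVIDER_READY` listener shown earlier.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Hypothetical helper: gate the first flag evaluation on provider readiness.
public class FlagReadinessGate {
    private final CountDownLatch ready = new CountDownLatch(1);

    // Call from the PROVIDER_READY event listener.
    public void markReady() {
        ready.countDown();
    }

    // Returns true if the provider became ready within the timeout,
    // false on timeout or interruption.
    public boolean awaitReady(long timeout, TimeUnit unit) {
        try {
            return ready.await(timeout, unit);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }

    public static void main(String[] args) {
        FlagReadinessGate gate = new FlagReadinessGate();
        gate.markReady(); // in production, invoked by the PROVIDER_READY callback
        System.out.println(gate.awaitReady(1, TimeUnit.SECONDS)); // prints true
    }
}
```

Callers that time out can proceed with their programmatic defaults, matching the provider's own fallback behavior.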

#### Debug flag evaluations{% #debug-flag-evaluations %}

If flags return unexpected values, use `getBooleanDetails()` instead of `getBooleanValue()`. The `Details` variant returns a `FlagEvaluationDetails` object exposing the provider's internal state:

```java
FlagEvaluationDetails<Boolean> details =
    client.getBooleanDetails("your.flag.key", false, context);

logger.info("Flag evaluation details: value={}, variant={}, reason={}, errorCode={}",
    details.getValue(),
    details.getVariant(),
    details.getReason(),
    details.getErrorCode());
```

Review `reason` and `errorCode` to understand why the provider returned a given result.

#### Type mismatch errors{% #type-mismatch-errors %}

`TYPE_MISMATCH` is returned when the evaluation method does not match the flag's configured type. Use the correct method for each flag type: `getBooleanValue()`, `getStringValue()`, `getIntegerValue()`, `getDoubleValue()`.

### 5. Flagging platform: Verify data appears in Datadog

#### Flag evaluation metrics{% #flag-evaluation-metrics %}

Flag evaluation counts appear in Datadog when `DD_METRICS_OTEL_ENABLED=true` is set on the tracer. Each evaluation emits a `feature_flag.evaluations` counter metric tagged with the flag key, result variant, and evaluation reason. If this metric does not appear, verify the setting is enabled and the tracer version supports it.

#### Experiment exposures{% #experiment-exposures %}

Exposures appear in Datadog only for flags associated with an experiment. Standard feature flags without an experiment association do not generate exposure events. If exposures are missing:

1. Verify the flag is associated with an experiment in the Datadog UI.
1. Verify the Agent's `DD_API_KEY` is correct and the Agent is receiving events.

## Further reading{% #further-reading %}

- [Server-Side Feature Flags](https://docs.datadoghq.com/feature_flags/server.md)
- [Java APM and Distributed Tracing](https://docs.datadoghq.com/tracing/trace_collection/automatic_instrumentation/dd_libraries/java.md)
