---
title: Azure Storage Destination
description: Send logs to an Azure Storage bucket with the Observability Pipelines Azure Storage destination.
breadcrumbs: Docs > Observability Pipelines > Destinations > Azure Storage Destination
---

# Azure Storage Destination

{% callout %}
# Important note for users on the following Datadog sites: app.ddog-gov.com

{% alert level="danger" %}
This product is not supported for your selected [Datadog site](https://docs.datadoghq.com/getting_started/site).
{% /alert %}

{% /callout %}
Available for: {% icon name="icon-logs" /%} Logs

Use the Azure Storage destination to send logs to an Azure Storage bucket. If you want to send logs to Azure Storage for [archiving](https://docs.datadoghq.com/logs/log_configuration/archives/) and [rehydration](https://docs.datadoghq.com/logs/log_configuration/rehydrating/), you must configure Log Archives. If you don't want to rehydrate logs in Datadog, skip to Set up the destination for your pipeline.

## Configure Log Archives{% #configure-log-archives %}

This step is only required if you want to send logs to Azure Storage in Datadog-rehydratable format for [archiving](https://docs.datadoghq.com/logs/log_configuration/archives/) and [rehydration](https://docs.datadoghq.com/logs/log_configuration/rehydrating/), and you don't already have a Datadog Log Archive configured for Observability Pipelines. If you already have a Datadog Log Archive configured or don't want to rehydrate logs in Datadog, skip to Set up the destination for your pipeline.

You need to have Datadog's [Azure integration](https://docs.datadoghq.com/integrations/azure/#setup) installed to set up Datadog Log Archives.

#### Create a storage account{% #create-a-storage-account %}

Create an [Azure storage account](https://learn.microsoft.com/en-us/azure/storage/common/storage-account-create?tabs=azure-portal) if you don't already have one.

1. Navigate to [Storage accounts](https://portal.azure.com/#browse/Microsoft.Storage%2FStorageAccounts).
1. Click **Create**.
1. Select the subscription and resource group you want to use.
1. Enter a name for your storage account.
1. Select a region in the dropdown menu.
1. For **Performance**, select **Standard** or **Premium**.
1. Click **Next**.
1. In the **Blob storage** section, select **Hot** or **Cool** storage.
1. Click **Review + create**.

#### Create a storage bucket{% #create-a-storage-bucket %}

1. In your storage account, click **Containers** under **Data storage** in the left navigation menu.
1. Click **+ Container** at the top to create a new container.
1. Enter a name for the new container. This name is used later when you set up the Observability Pipelines Azure Storage destination.

**Note**: Do not set [immutability policies](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-immutability-policies-manage) because the most recent data might need to be rewritten in rare cases (typically when there is a timeout).

#### Connect the Azure container to Datadog Log Archives{% #connect-the-azure-container-to-datadog-log-archives %}

1. Navigate to Datadog [Log Forwarding](https://app.datadoghq.com/logs/pipelines/log-forwarding).
1. Click **New archive**.
1. Enter a descriptive archive name.
1. Add a query that filters out all logs going through log pipelines so that none of those logs go into this archive. For example, add the query `observability_pipelines_read_only_archive`, assuming no logs going through the pipeline have that tag added.
1. Select **Azure Storage**.
1. Select the Azure tenant and client your storage account is in.
1. Enter the name of the storage account.
1. Enter the name of the container you created earlier.
1. Optionally, enter a path.
1. Optionally, set permissions, add tags, and define the maximum scan size for rehydration. See [Advanced settings](https://docs.datadoghq.com/logs/log_configuration/archives/?tab=awss3#advanced-settings) for more information.
1. Click **Save**.

See the [Log Archives documentation](https://docs.datadoghq.com/logs/log_configuration/archives) for additional information.

## Set up the destination for your pipeline{% #set-up-the-destination-for-your-pipeline %}

Set up the Azure Storage destination and its environment variables when you [set up an Archive Logs pipeline](https://docs.datadoghq.com/observability_pipelines/configuration/explore_templates/?tab=logs#archive-logs). The information below is configured in the pipelines UI.

1. Enter the identifier for your Azure connection string. If you leave it blank, the default is used.
   - **Note**: Only enter the identifier for the connection string. Do **not** enter the actual connection string.
1. Enter the name of the Azure container you created earlier.

### Optional settings{% #optional-settings %}

#### Prefix to apply to all object keys{% #prefix-to-apply-to-all-key-objects %}

Enter a prefix to apply to all object keys.

- Prefixes are useful for partitioning objects. For example, you can use a prefix as an object key to store objects under a particular directory. If using a prefix for this purpose, it must end in `/` to act as a directory path; a trailing `/` is not automatically added.
- See [template syntax](https://docs.datadoghq.com/observability_pipelines/destinations/#template-syntax) if you want to route logs to different object keys based on specific fields in your logs.
  - **Note**: Datadog recommends that you start your prefixes with the directory name and without a leading slash (`/`). For example, `app-logs/` or `service-logs/`.
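To make the trailing-slash behavior concrete, here is a minimal sketch of how a prefix combines with an object name. The function and names are hypothetical illustrations, not the Worker's actual key layout:

```python
# Sketch: a prefix is joined to the object name verbatim; no "/" is
# added automatically, so a directory-style prefix must end in "/".
# The names below are hypothetical examples.
def object_key(prefix: str, name: str) -> str:
    """Join a user-supplied prefix with an object name, unchanged."""
    return prefix + name

# With a trailing slash, objects land under a directory:
print(object_key("app-logs/", "archive_1.json.gz"))  # app-logs/archive_1.json.gz
# Without it, the prefix runs straight into the object name:
print(object_key("app-logs", "archive_1.json.gz"))   # app-logsarchive_1.json.gz
```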

#### Buffering{% #buffering %}

Toggle the switch to enable **Buffering Options**. Enabling a configurable buffer on your destination ensures that intermittent latency or an outage at the destination doesn't create immediate backpressure, and allows events to continue to be ingested from your source. Disk buffers can also increase pipeline durability by writing data to disk, so buffered data persists through a Worker restart. See [Destination buffers](https://docs.datadoghq.com/observability_pipelines/scaling_and_performance/buffering_and_backpressure/#destination-buffers) for more information.

- If left unconfigured, your destination uses a memory buffer with a capacity of 500 events.
- To configure a buffer on your destination:
  1. Select the buffer type you want to set (**Memory** or **Disk**).
  1. Enter the buffer size and select the unit.
     1. Maximum memory buffer size is 128 GB.
     1. Maximum disk buffer size is 500 GB.
  1. In the **Behavior on full buffer** dropdown menu, select whether you want to **block** events or **drop new events** when the buffer is full.
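The two full-buffer behaviors above can be sketched as follows. This is an illustrative model only, not the Worker's implementation; the function name and return values are hypothetical:

```python
from collections import deque

def offer(buffer: deque, capacity: int, event, on_full: str) -> str:
    """Try to enqueue an event into a bounded destination buffer.

    Returns "accepted" if the event was buffered; otherwise "dropped"
    (on_full="drop": the new event is discarded) or "blocked"
    (on_full="block": the caller must apply backpressure upstream).
    """
    if len(buffer) < capacity:
        buffer.append(event)
        return "accepted"
    return "dropped" if on_full == "drop" else "blocked"

buf = deque()
for i in range(600):
    offer(buf, 500, i, on_full="drop")  # default: memory buffer, 500 events
print(len(buf))  # 500
```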

### Set secrets{% #set-secrets %}

These are the defaults used for secret identifiers and environment variables.

**Note**: If you enter secret identifiers and then choose to use environment variables, the environment variable is the identifier entered and prepended with `DD_OP`. For example, if you entered `PASSWORD_1` for a password identifier, the environment variable for that password is `DD_OP_PASSWORD_1`.
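The naming rule above is a simple prefix mapping, sketched here for clarity:

```python
def op_env_var(identifier: str) -> str:
    """Map a secret identifier to its environment variable name by
    prepending DD_OP_, as described in the note above."""
    return f"DD_OP_{identifier}"

print(op_env_var("PASSWORD_1"))  # DD_OP_PASSWORD_1
```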

{% tab title="Secrets Management" %}

- Azure connection string identifier:
  - References the connection string that gives the Worker access to your Azure Storage bucket.
  - The default identifier is `DESTINATION_DATADOG_ARCHIVES_AZURE_BLOB_CONNECTION_STRING`.

{% /tab %}

{% tab title="Environment Variables" %}
#### Azure Storage{% #azure-storage %}

- Azure connection string to give the Worker access to your Azure Storage bucket.
  - The default environment variable is `DD_OP_DESTINATION_DATADOG_ARCHIVES_AZURE_BLOB_CONNECTION_STRING`.

To get the connection string:

1. Navigate to [Azure Storage accounts](https://portal.azure.com/#browse/Microsoft.Storage%2FStorageAccounts).
1. Click **Access keys** under **Security and networking** in the left navigation menu.
1. Copy the connection string for the storage account and paste it into the **Azure connection string** field on the Observability Pipelines Worker installation page.
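Azure Storage connection strings are semicolon-delimited `key=value` pairs. A small sketch that pulls the account name out of one; the string below is a fabricated example with a fake key, not real credentials:

```python
def parse_connection_string(cs: str) -> dict:
    """Split an Azure connection string into its key=value fields.
    maxsplit=1 keeps "=" padding in the account key intact."""
    pairs = (part.split("=", 1) for part in cs.split(";") if part)
    return {k: v for k, v in pairs}

# Fabricated example for illustration only:
example = (
    "DefaultEndpointsProtocol=https;"
    "AccountName=mystorageaccount;"
    "AccountKey=FAKEKEY==;"
    "EndpointSuffix=core.windows.net"
)
print(parse_connection_string(example)["AccountName"])  # mystorageaccount
```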

{% /tab %}

## How the destination works{% #how-the-destination-works %}

### Event batching{% #event-batching %}

A batch of events is flushed when any one of these limits is reached. See [event batching](https://docs.datadoghq.com/observability_pipelines/destinations/#event-batching) for more information.

| Maximum Events | Maximum Size (MB) | Timeout (seconds) |
| -------------- | ----------------- | ----------------- |
| None           | 100               | 900               |
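The flush rule in the table reads as "whichever limit is hit first." A sketch with the table's values (no maximum event count, so only size and timeout can trigger a flush):

```python
MAX_BYTES = 100 * 1024 * 1024  # Maximum Size: 100 MB
TIMEOUT_S = 900                # Timeout: 900 seconds
# Maximum Events is "None", so event count never triggers a flush.

def should_flush(batch_bytes: int, seconds_since_first_event: float) -> bool:
    """A batch is flushed when any configured limit is reached."""
    return batch_bytes >= MAX_BYTES or seconds_since_first_event >= TIMEOUT_S

print(should_flush(50 * 1024 * 1024, 10))   # False: under both limits
print(should_flush(100 * 1024 * 1024, 10))  # True: size limit reached
print(should_flush(1024, 900))              # True: timeout reached
```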
