---
title: Datadog Archives Destination
description: Use the Datadog Archives destination to send logs to Amazon S3 in Datadog-rehydratable format.
---

# Datadog Archives Destination

{% callout %}
# Important note for users on the following Datadog site: app.ddog-gov.com

{% alert level="danger" %}
This product is not supported for your selected [Datadog site](https://docs.datadoghq.com/getting_started/site.md).
{% /alert %}

{% /callout %}
Available for: {% icon name="icon-logs" /%} Logs

## Overview{% #overview %}

Use the Datadog Archives destination to send logs to Amazon S3 for [archiving](https://docs.datadoghq.com/logs/log_configuration/archives.md) in Datadog-rehydratable format. You can [rehydrate](https://docs.datadoghq.com/logs/log_configuration/rehydrating.md) these logs later when you want to analyze and investigate them in Datadog.

**Note**: Use the [Amazon S3](https://docs.datadoghq.com/observability_pipelines/destinations/amazon_s3.md) destination if you want to send your logs to Amazon S3 in JSON or Parquet format.

You can also route logs to Snowflake using the Datadog Archives destination.

## Prerequisites{% #prerequisites %}

To use the Datadog Archives destination, you must install Datadog's [AWS integration](https://docs.datadoghq.com/integrations/amazon_web_services.md#setup) so you can configure Datadog Log Archives.

## Configure Log Archives{% #configure-log-archives %}

If you already have Datadog Log Archives configured, skip to [Set up the destination for your pipeline](#set-up-the-destination-for-your-pipeline).

### Create an Amazon S3 bucket{% #create-an-amazon-s3-bucket %}

1. Navigate to [Amazon S3 buckets](https://s3.console.aws.amazon.com/s3/home).
1. Click **Create bucket**.
1. Enter a descriptive name for your bucket.
1. Do not make your bucket publicly readable.
1. Optionally, add tags.
1. Click **Create bucket**.
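
If you prefer to script this step instead of using the console, the following is a minimal sketch using boto3. The bucket name and region are placeholder assumptions; substitute your own values before running.

```python
import boto3

bucket_name = "my-op-archive-bucket"  # placeholder: choose a descriptive name
region = "us-east-1"                  # placeholder: your bucket's region

s3 = boto3.client("s3", region_name=region)

# Create the bucket. us-east-1 rejects a LocationConstraint, so omit it there.
if region == "us-east-1":
    s3.create_bucket(Bucket=bucket_name)
else:
    s3.create_bucket(
        Bucket=bucket_name,
        CreateBucketConfiguration={"LocationConstraint": region},
    )

# Block all public access, per the "do not make your bucket publicly
# readable" step above.
s3.put_public_access_block(
    Bucket=bucket_name,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```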

### Set up an IAM policy that allows Workers to write to the S3 bucket{% #set-up-an-iam-policy-that-allows-workers-to-write-to-the-s3-bucket %}

1. Navigate to the [IAM console](https://console.aws.amazon.com/iam/).
1. Select **Policies** in the left side menu.
1. Click **Create policy**.
1. Click **JSON** in the **Specify permissions** section.
1. Copy the policy below and paste it into the **Policy editor**. Replace `<MY_BUCKET_NAME_1>` with the name of the S3 bucket you created in the previous section, and `<MY_OPTIONAL_BUCKET_PATH_1>` with an optional path within that bucket.
   ```json
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Sid": "DatadogUploadAndRehydrateLogArchives",
               "Effect": "Allow",
               "Action": ["s3:PutObject", "s3:GetObject"],
               "Resource": "arn:aws:s3:::<MY_BUCKET_NAME_1>/<MY_OPTIONAL_BUCKET_PATH_1>/*"
           },
           {
               "Sid": "DatadogRehydrateLogArchivesListBucket",
               "Effect": "Allow",
               "Action": "s3:ListBucket",
               "Resource": "arn:aws:s3:::<MY_BUCKET_NAME_1>"
           }
       ]
   }
   ```
1. Click **Next**.
1. Enter a descriptive policy name.
1. Optionally, add tags.
1. Click **Create policy**.
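
Alternatively, the same policy can be created programmatically. This is a minimal sketch using boto3; the policy name is a placeholder, and the bucket and path placeholders match the JSON above.

```python
import json

import boto3

bucket = "<MY_BUCKET_NAME_1>"            # placeholder: your bucket name
path = "<MY_OPTIONAL_BUCKET_PATH_1>"     # placeholder: optional bucket path

# Same policy document as the JSON shown above.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DatadogUploadAndRehydrateLogArchives",
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:GetObject"],
            "Resource": f"arn:aws:s3:::{bucket}/{path}/*",
        },
        {
            "Sid": "DatadogRehydrateLogArchivesListBucket",
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": f"arn:aws:s3:::{bucket}",
        },
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="datadog-op-archives-policy",  # placeholder name
    PolicyDocument=json.dumps(policy_document),
)
```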

{% tab title="Docker" %}
#### Create an IAM user or role{% #create-an-iam-user-or-role %}

Create an IAM [user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html) or [role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create.html) and attach the policy to it.
{% /tab %}

{% tab title="Amazon EKS" %}
#### Create a service account{% #create-a-service-account %}

[Create a service account](https://docs.aws.amazon.com/eks/latest/userguide/associate-service-account-role.html) to use the policy you created above.
{% /tab %}

{% tab title="Linux (APT)" %}
#### Create an IAM user or role{% #create-an-iam-user-or-role %}

Create an IAM [user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html) or [role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create.html) and attach the policy to it.
{% /tab %}

{% tab title="Linux (RPM)" %}
#### Create an IAM user or role{% #create-an-iam-user-or-role %}

Create an IAM [user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html) or [role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create.html) and attach the policy to it.
{% /tab %}

### Connect the S3 bucket to Datadog Log Archives{% #connect-the-s3-bucket-to-datadog-log-archives %}

1. Navigate to Datadog [Log Forwarding](https://app.datadoghq.com/logs/pipelines/log-forwarding).
1. Click **New archive**.
1. Enter a descriptive archive name.
1. Add a query that filters out all logs going through log pipelines so that none of those logs go into this archive. For example, add the query `observability_pipelines_read_only_archive`, assuming no logs going through the pipeline have that tag added.
1. Select **AWS S3**.
1. Select the AWS account that your bucket is in.
1. Enter the name of the S3 bucket.
1. Optionally, enter a path.
1. Check the confirmation statement.
1. Optionally, add tags and define the maximum scan size for rehydration. See [Advanced settings](https://docs.datadoghq.com/logs/log_configuration/archives.md?tab=awss3#advanced-settings) for more information.
1. Click **Save**.

See the [Log Archives documentation](https://docs.datadoghq.com/logs/log_configuration/archives.md) for additional information.

## Set up the destination for your pipeline{% #set-up-the-destination-for-your-pipeline %}

Set up the Datadog Archives destination and its environment variables when you [set up an Archive Logs pipeline](https://docs.datadoghq.com/observability_pipelines/configuration/explore_templates.md?tab=logs#archive-logs). The information below is configured in the pipelines UI.

1. Enter your S3 bucket name. If you configured Log Archives, it's the name of the bucket you created earlier.
1. Enter the AWS region the S3 bucket is in.
1. Enter the key prefix.
   - Prefixes are useful for partitioning objects. For example, you can use a prefix as part of the object key to store objects under a particular directory. If you use a prefix this way, it must end in `/` to act as a directory path; a trailing `/` is not added automatically.
   - See [template syntax](https://docs.datadoghq.com/observability_pipelines/destinations.md#template-syntax) if you want to route logs to different object keys based on specific fields in your logs.
   - **Note**: Datadog recommends that you start your prefixes with the directory name and without a lead slash (`/`). For example, `app-logs/` or `service-logs/`.
1. Select the storage class for your S3 bucket in the **Storage Class** dropdown menu. If you plan to archive and rehydrate your logs, note the following:
   - **Note**: Rehydration only supports the following [storage classes](https://docs.datadoghq.com/logs/log_configuration/archives.md?tab=awss3#storage-class):
     - Standard
     - Intelligent-Tiering, only if [the optional asynchronous archive access tiers](https://aws.amazon.com/s3/storage-classes/intelligent-tiering/) are both disabled.
     - Standard-IA
     - One Zone-IA
   - If you wish to rehydrate from archives in another storage class, you must first move them to one of the supported storage classes above.
   - See the [Example destination and log archive setup](#example-destination-and-log-archive-setup) section of this page for how to configure your Log Archive based on your Amazon S3 destination setup.

### Optional settings{% #optional-settings %}

#### AWS authentication{% #aws-authentication %}

Select an AWS authentication option. If the user or role you created earlier is used directly for authentication, do not select **Assume role**. Select **Assume role** only if that user or role needs to assume a different role to access the AWS resource; the assumed role's permissions must be explicitly defined. If you select **Assume role** (see the sketch after these steps):

1. Enter the ARN of the IAM role you want to assume.
   - **Note:** The user or role you created earlier must have permission to assume this role so that the Worker can authenticate with AWS.
1. (Optional) Enter the assumed role session name and external ID.
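
For context, assuming a role corresponds to an `sts:AssumeRole` call like the one sketched below. The Worker handles this internally, so the snippet is illustrative only; the role ARN, session name, and external ID are placeholders.

```python
import boto3

sts = boto3.client("sts")

# Illustration of what "Assume role" amounts to; the Worker performs the
# equivalent internally. All values below are placeholders.
response = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/op-archives-writer",
    RoleSessionName="op-worker-session",  # the optional session name above
    ExternalId="my-external-id",          # the optional external ID above
)

# Temporary credentials the caller would use for subsequent S3 requests.
credentials = response["Credentials"]
```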

#### Buffering{% #buffering %}

Toggle the switch to enable **Buffering Options**. A configurable buffer on your destination ensures that intermittent latency or an outage at the destination doesn't create immediate backpressure, and allows events to continue to be ingested from your source. Disk buffers can also increase pipeline durability by writing data to disk, so buffered data persists through a Worker restart. See [Destination buffers](https://docs.datadoghq.com/observability_pipelines/scaling_and_performance/buffering_and_backpressure.md#destination-buffers) for more information.

- If left unconfigured, your destination uses a memory buffer with a capacity of 500 events.
- To configure a buffer on your destination:
  1. Select the buffer type you want to set (**Memory** or **Disk**).
  1. Enter the buffer size and select the unit.
     1. Maximum memory buffer size is 128 GB.
     1. Maximum disk buffer size is 500 GB.
  1. In the **Behavior on full buffer** dropdown menu, select whether you want to **block** events or **drop new events** when the buffer is full.

### Example destination and log archive setup{% #example-destination-and-log-archive-setup %}

If you enter the following values for your Datadog Archives destination:

- S3 Bucket Name: `test-op-bucket`
- Prefix to apply to all object keys: `op-logs`
- Storage class for the created objects: `Standard`

{% image
   source="https://docs.dd-static.net/images/observability_pipelines/setup/amazon_s3_destination.7a57596dfb8b770b3b18fc4143d43368.png?auto=format&fit=max&w=850 1x, https://docs.dd-static.net/images/observability_pipelines/setup/amazon_s3_destination.7a57596dfb8b770b3b18fc4143d43368.png?auto=format&fit=max&w=850&dpr=2 2x"
   alt="The Datadog Archives destination setup with the example values" /%}

Then these are the values you enter for configuring the S3 bucket for Log Archives:

- S3 bucket: `test-op-bucket`
- Path: `op-logs`
- Storage class: `Standard`

{% image
   source="https://docs.dd-static.net/images/observability_pipelines/setup/amazon_s3_archive.dbaecf54ca740beaedf90bc0d791475d.png?auto=format&fit=max&w=850 1x, https://docs.dd-static.net/images/observability_pipelines/setup/amazon_s3_archive.dbaecf54ca740beaedf90bc0d791475d.png?auto=format&fit=max&w=850&dpr=2 2x"
   alt="The log archive configuration with the example values" /%}

### Set secrets{% #set-secrets %}

These are the defaults used for secret identifiers and environment variables.

**Note**: If you enter secret identifiers and then choose to use environment variables, the environment variable name is the identifier you entered prefixed with `DD_OP_`. For example, if you entered `PASSWORD_1` as a password identifier, the environment variable for that password is `DD_OP_PASSWORD_1`.

{% tab title="Secrets Management" %}
There are no secret identifiers to configure.
{% /tab %}

{% tab title="Environment Variables" %}
There are no environment variables to configure.
{% /tab %}

## Route logs to Snowflake using the Datadog Archives destination{% #route-logs-to-snowflake-using-the-datadog-archives-destination %}

When logs are collected by Observability Pipelines and written to an S3 bucket by the Datadog Archives destination, you can route them to Snowflake by configuring Snowpipe to automatically ingest them. Snowpipe continuously monitors your S3 bucket for new files and automatically loads them into your Snowflake tables, ensuring near real-time data availability for analytics or further processing. To set this up (a Snowpipe sketch follows these steps):

1. Configure Log Archives.
1. [Set up a pipeline](https://docs.datadoghq.com/observability_pipelines/configuration/set_up_pipelines.md) to use Datadog Archives as the log destination, with the configuration detailed in [Set up the destination for your pipeline](#set-up-the-destination-for-your-pipeline).
1. Set up Snowpipe in Snowflake. See [Automating Snowpipe for Amazon S3](https://docs.snowflake.com/en/user-guide/data-load-snowpipe-auto-s3) for instructions.
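
As a rough illustration of step 3, the following sketch uses the Snowflake Python connector to create an auto-ingest pipe over the example bucket and prefix from earlier on this page. Every connection parameter, credential, and object name here is a placeholder; the linked Snowflake guide remains the authoritative reference (auto-ingest also requires S3 event notifications, which this sketch does not set up).

```python
import snowflake.connector

# All connection parameters below are placeholders.
conn = snowflake.connector.connect(
    account="my_account",
    user="my_user",
    password="my_password",
    warehouse="my_warehouse",
    database="my_database",
    schema="PUBLIC",
)
cur = conn.cursor()
try:
    # External stage pointing at the archive bucket and prefix (placeholders).
    cur.execute("""
        CREATE STAGE IF NOT EXISTS op_archive_stage
          URL = 's3://test-op-bucket/op-logs/'
          CREDENTIALS = (AWS_KEY_ID = '<KEY_ID>' AWS_SECRET_KEY = '<SECRET_KEY>')
    """)
    # Landing table for the raw JSON events (placeholder schema).
    cur.execute("CREATE TABLE IF NOT EXISTS op_logs (record VARIANT)")
    # Auto-ingest pipe: Snowpipe loads new files when S3 notifies Snowflake.
    cur.execute("""
        CREATE PIPE IF NOT EXISTS op_archive_pipe AUTO_INGEST = TRUE AS
          COPY INTO op_logs
          FROM @op_archive_stage
          FILE_FORMAT = (TYPE = 'JSON')
    """)
finally:
    cur.close()
    conn.close()
```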

## How the destination works{% #how-the-destination-works %}

### AWS Authentication{% #aws-authentication-1 %}

The Observability Pipelines Worker uses the standard AWS credential provider chain for authentication. See [AWS SDKs and Tools standardized credential providers](https://docs.aws.amazon.com/sdkref/latest/guide/standardized-credentials.html) for more information.

#### Permissions{% #permissions %}

The Observability Pipelines Worker requires these policy permissions to send logs to Amazon S3 (a smoke-test sketch follows this list):

- `s3:ListBucket`
- `s3:PutObject`
- `s3:GetObject`
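
To sanity-check that a set of credentials actually grants these permissions, a quick sketch like the following exercises all three against your bucket. The bucket name and prefix are the example values from earlier on this page; substitute your own.

```python
import boto3

# Example values from earlier on this page; substitute your own.
bucket = "test-op-bucket"
prefix = "op-logs/"

s3 = boto3.client("s3")
key = prefix + "permissions-check.txt"

s3.put_object(Bucket=bucket, Key=key, Body=b"ok")  # s3:PutObject
s3.get_object(Bucket=bucket, Key=key)              # s3:GetObject
s3.list_objects_v2(Bucket=bucket, Prefix=prefix)   # s3:ListBucket

# Note: deleting the test object requires s3:DeleteObject, which the policy
# above does not grant; remove it manually if needed.
```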

### Event batching{% #event-batching %}

A batch of events is flushed when any one of these limits is reached (an illustrative sketch follows the table). See [event batching](https://docs.datadoghq.com/observability_pipelines/destinations.md#event-batching) for more information.

| Maximum Events | Maximum Size (MB) | Timeout (seconds) |
| -------------- | ----------------- | ----------------- |
| None           | 100               | 900               |
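
To make the flush rule concrete, here is an illustrative sketch (not the Worker's actual implementation) of a batcher that flushes when any of the limits in the table is reached:

```python
import time

# Limits mirroring the table above: no event-count limit, 100 MB, 900 seconds.
MAX_EVENTS = None
MAX_BYTES = 100 * 1000 * 1000
TIMEOUT_SECONDS = 900

class Batch:
    """Accumulates events and reports when any flush condition is met."""

    def __init__(self) -> None:
        self.events: list[bytes] = []
        self.size = 0
        self.started = time.monotonic()

    def add(self, event: bytes) -> None:
        self.events.append(event)
        self.size += len(event)

    def should_flush(self) -> bool:
        if MAX_EVENTS is not None and len(self.events) >= MAX_EVENTS:
            return True
        if self.size >= MAX_BYTES:
            return True
        return time.monotonic() - self.started >= TIMEOUT_SECONDS
```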
