---
title: Amazon Security Lake Destination
description: Send logs to Amazon Security Lake with the Observability Pipelines Amazon Security Lake destination.
breadcrumbs: >-
  Docs > Observability Pipelines > Destinations > Amazon Security Lake
  Destination
---

# Amazon Security Lake Destination

{% callout %}
# Important note for users on the following Datadog site: app.ddog-gov.com

{% alert level="danger" %}
This product is not supported for your selected [Datadog site](https://docs.datadoghq.com/getting_started/site).
{% /alert %}

{% /callout %}
Available for:
{% icon name="icon-logs" /%}
 Logs 
Use Observability Pipelines' Amazon Security Lake destination to send logs to Amazon Security Lake.

## Prerequisites{% #prerequisites %}

You need to do the following before setting up the Amazon Security Lake destination:

1. Follow the [Getting Started with Amazon Security Lake](https://docs.aws.amazon.com/security-lake/latest/userguide/getting-started.html) guide to set up Amazon Security Lake, and make sure to:
   - Enable Amazon Security Lake for the AWS account.
   - Select the AWS regions where S3 buckets will be created for OCSF data.
1. Follow [Collecting data from custom sources in Security Lake](https://docs.aws.amazon.com/security-lake/latest/userguide/custom-sources.html) to create a custom source in Amazon Security Lake.
   - When you [configure a custom log source in Security Lake in the AWS console](https://docs.aws.amazon.com/security-lake/latest/userguide/get-started-console.html):
     - Enter a source name.
     - Select the OCSF event class for the log source and type.
     - Enter the account details for the AWS account that will write logs to Amazon Security Lake:
       - AWS account ID
       - External ID
   - Select **Create and use a new service** for service access.
   - Take note of the name of the bucket that is created because you need it when you set up the Amazon Security Lake destination later on.
     - To find the bucket name, navigate to [Custom Sources](https://console.aws.amazon.com/securitylake/home?custom-sources). The bucket name is in the location for your custom source. For example, if the location is `s3://aws-security-data-lake-us-east-2-qjh9pr8hy/ext/op-api-activity-test`, the bucket name is `aws-security-data-lake-us-east-2-qjh9pr8hy`.
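The bucket-name lookup described above can be sketched as a small helper. This is an illustrative snippet, not part of any Datadog or AWS tooling; `bucket_name_from_location` is a hypothetical name:

```python
from urllib.parse import urlparse

def bucket_name_from_location(location: str) -> str:
    """Extract the S3 bucket name from a Security Lake custom source location URI."""
    parsed = urlparse(location)
    if parsed.scheme != "s3":
        raise ValueError(f"expected an s3:// URI, got: {location}")
    # For an s3:// URI, the network location component is the bucket name.
    return parsed.netloc

# Using the example location from the Custom Sources page:
print(bucket_name_from_location(
    "s3://aws-security-data-lake-us-east-2-qjh9pr8hy/ext/op-api-activity-test"
))
# -> aws-security-data-lake-us-east-2-qjh9pr8hy
```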

## Setup{% #setup %}

Set up the Amazon Security Lake destination and its environment variables when you [set up a pipeline](https://app.datadoghq.com/observability-pipelines). The information below is configured in the pipelines UI.

**Notes**:

- When you add the Amazon Security Lake destination, the OCSF processor is automatically added so that you can remap your logs to the OCSF format before they are sent to Amazon Security Lake. See the [Remap to OCSF documentation](https://docs.datadoghq.com/observability_pipelines/processors/remap_ocsf) for setup instructions.
- Only logs formatted by the OCSF processor are converted to Parquet.

### Set up the destination{% #set-up-the-destination %}

1. Enter your S3 bucket name.
1. Enter the AWS region.
1. Enter the custom source name.

#### Optional settings{% #optional-settings %}

##### AWS authentication{% #aws-authentication %}

1. Select an [AWS authentication](https://docs.datadoghq.com/observability_pipelines/destinations/amazon_security_lake/#aws-authentication) option.
1. Enter the ARN of the IAM role you want to assume.
1. Optionally, enter the assumed role session name and external ID.

##### Enable TLS{% #enable-tls %}

Toggle the switch to **Enable TLS**. If you enable TLS, the following certificate and key files are required. **Note**: All file paths are made relative to the configuration data directory, which is `/var/lib/observability-pipelines-worker/config/` by default. See [Advanced Worker Configurations](https://docs.datadoghq.com/observability_pipelines/configuration/install_the_worker/advanced_worker_configurations/) for more information. The files must be owned by the `observability-pipelines-worker` group and `observability-pipelines-worker` user, or at least readable by that group or user.

- Enter the identifier for your Amazon Security Lake key pass. If you leave it blank, the default is used.
  - **Note**: Only enter the identifier for the key pass. Do **not** enter the actual key pass.
- `Server Certificate Path`: The path to the certificate file that has been signed by your Certificate Authority (CA) root file, in DER or PEM (X.509) format.
- `CA Certificate Path`: The path to your Certificate Authority (CA) root certificate file, in DER or PEM (X.509) format.
- `Private Key Path`: The path to the `.key` private key file that belongs to your Server Certificate Path in DER or PEM (PKCS#8) format.
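The relative-path behavior noted above can be sketched as follows. This is an illustrative snippet, not Worker code; `resolve_config_path` is a hypothetical helper, and the directory constant is the default noted in this section:

```python
import os

# Default configuration data directory (see note above); adjust if you changed it.
CONFIG_DATA_DIR = "/var/lib/observability-pipelines-worker/config/"

def resolve_config_path(path: str) -> str:
    """Resolve a TLS file path: relative paths are taken relative to
    the configuration data directory; absolute paths are used as-is."""
    if os.path.isabs(path):
        return path
    return os.path.join(CONFIG_DATA_DIR, path)

print(resolve_config_path("tls/server.crt"))
# -> /var/lib/observability-pipelines-worker/config/tls/server.crt
print(resolve_config_path("/etc/ssl/private/server.key"))
# -> /etc/ssl/private/server.key
```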

##### Buffering{% #buffering %}

Toggle the switch to enable **Buffering Options**. Enable a configurable buffer on your destination to ensure that intermittent latency or an outage at the destination doesn't create immediate backpressure, and that events can continue to be ingested from your source. Disk buffers can also increase pipeline durability by writing data to disk, so buffered data persists through a Worker restart. See [Destination buffers](https://docs.datadoghq.com/observability_pipelines/scaling_and_performance/buffering_and_backpressure/#destination-buffers) for more information.

- If left unconfigured, your destination uses a memory buffer with a capacity of 500 events.
- To configure a buffer on your destination:
  1. Select the buffer type you want to set (**Memory** or **Disk**).
  1. Enter the buffer size and select the unit.
     - The maximum memory buffer size is 128 GB.
     - The maximum disk buffer size is 500 GB.
  1. In the **Behavior on full buffer** dropdown menu, select whether you want to **block** events or **drop new events** when the buffer is full.
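The two full-buffer behaviors can be modeled with a toy sketch. This is illustrative only, not Worker code; the class name is hypothetical, and the default capacity of 500 events comes from this section:

```python
from collections import deque

class DestinationBuffer:
    """Toy model of a destination buffer with a configurable full-buffer policy."""

    def __init__(self, capacity: int = 500, on_full: str = "block"):
        self.events = deque()
        self.capacity = capacity
        self.on_full = on_full  # "block" or "drop_new"

    def offer(self, event) -> bool:
        """Return True if the event was buffered. When the buffer is full:
        "drop_new" discards the newest event; "block" means the caller
        should apply backpressure and retry. Both return False here."""
        if len(self.events) < self.capacity:
            self.events.append(event)
            return True
        return False

buf = DestinationBuffer(capacity=2, on_full="drop_new")
buf.offer("event-1")
buf.offer("event-2")
print(buf.offer("event-3"))  # -> False: buffer is full, the newest event is dropped
```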

### Set secrets{% #set-secrets %}

These are the defaults used for secret identifiers and environment variables.

**Note**: If you enter secret identifiers and then choose to use environment variables, each environment variable is the identifier you entered, prefixed with `DD_OP_`. For example, if you entered `PASSWORD_1` for a password identifier, the environment variable for that password is `DD_OP_PASSWORD_1`.
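The naming rule above amounts to a simple prefix. A minimal illustration (the function name is hypothetical, not part of any Datadog tooling):

```python
def env_var_for_identifier(identifier: str) -> str:
    """Environment variable name derived from a secret identifier."""
    return f"DD_OP_{identifier}"

print(env_var_for_identifier("PASSWORD_1"))
# -> DD_OP_PASSWORD_1
```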

{% tab title="Secrets Management" %}

- Amazon Security Lake TLS passphrase identifier (when TLS is enabled):
  - The default identifier is `DESTINATION_AWS_SECURITY_LAKE_KEY_PASS`.

{% /tab %}

{% tab title="Environment Variables" %}

- Amazon Security Lake TLS passphrase (when enabled):
  - The default environment variable is `DD_OP_DESTINATION_AMAZON_SECURITY_LAKE_KEY_PASS`.

{% /tab %}

## How the destination works{% #how-the-destination-works %}

### AWS Authentication{% #aws-authentication-1 %}

The Observability Pipelines Worker uses the standard AWS credential provider chain for authentication. See [AWS SDKs and Tools standardized credential providers](https://docs.aws.amazon.com/sdkref/latest/guide/standardized-credentials.html) for more information.

#### Permissions{% #permissions %}

For Observability Pipelines to send logs to Amazon Security Lake, the following policy permissions are required:

- `s3:ListBucket`
- `s3:PutObject`
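An IAM policy granting these permissions might look like the following sketch; the bucket name is a placeholder, so substitute the bucket created for your custom source:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::<your-security-lake-bucket>"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": "arn:aws:s3:::<your-security-lake-bucket>/*"
    }
  ]
}
```

Note that `s3:ListBucket` applies to the bucket ARN itself, while `s3:PutObject` applies to objects within it, hence the `/*` suffix on the second resource.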

### Event batching{% #event-batching %}

A batch of events is flushed when one of these parameters is met. See [event batching](https://docs.datadoghq.com/observability_pipelines/destinations/#event-batching) for more information.

| Maximum Events | Maximum Size (MB) | Timeout (seconds) |
| -------------- | ----------------- | ----------------- |
| None           | 256               | 300               |
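The flush conditions in the table can be sketched as follows. This is an illustrative model, not Worker code; since the maximum event count is `None`, only size and timeout trigger a flush:

```python
import time

MAX_BATCH_BYTES = 256 * 1024 * 1024  # 256 MB maximum batch size
MAX_BATCH_SECONDS = 300              # 300-second timeout; no event-count limit

class Batcher:
    """Toy sketch of the batch flush conditions in the table above."""

    def __init__(self):
        self.size_bytes = 0
        self.started_at = time.monotonic()

    def should_flush(self) -> bool:
        # A batch is flushed when either the size limit or the timeout is reached.
        return (
            self.size_bytes >= MAX_BATCH_BYTES
            or time.monotonic() - self.started_at >= MAX_BATCH_SECONDS
        )
```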
