---
title: Google Cloud Storage Destination
description: Datadog, the leading service for cloud-scale monitoring.
breadcrumbs: >-
  Docs > Observability Pipelines > Destinations > Google Cloud Storage
  Destination
---

# Google Cloud Storage Destination

{% callout %}
# Important note for users on the following Datadog sites: app.ddog-gov.com

{% alert level="danger" %}
This product is not supported for your selected [Datadog site](https://docs.datadoghq.com/getting_started/site.md).
{% /alert %}

{% /callout %}
Available for:
{% icon name="icon-logs" /%}
 Logs 
{% alert level="info" %}
For Worker versions 2.7 and later, the Google Cloud Storage destination supports [uniform bucket-level access](https://cloud.google.com/storage/docs/uniform-bucket-level-access). Google [recommends](https://cloud.google.com/storage/docs/uniform-bucket-level-access#should-you-use) using uniform bucket-level access. For Worker versions older than 2.7, only [Access Control Lists](https://cloud.google.com/storage/docs/access-control/lists) are supported.
{% /alert %}

Use the Google Cloud Storage destination to send your logs to a Google Cloud Storage bucket. If you want to send logs to Google Cloud Storage for [archiving](https://docs.datadoghq.com/logs/log_configuration/archives.md) and [rehydration](https://docs.datadoghq.com/logs/log_configuration/rehydrating.md), you must configure Log Archives. If you do not want to rehydrate logs in Datadog, skip to Set up the destination for your pipeline.

The Observability Pipelines Worker uses standard Google authentication methods. See [Authentication methods at Google](https://cloud.google.com/docs/authentication#auth-flowchart) for more information about choosing the authentication method for your use case.
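As a hedged illustration of those standard methods (not Worker configuration code), the Python sketch below shows how Application Default Credentials are resolved by a Google client library; the bucket name is a placeholder.

```python
# Illustrative only: how standard Google authentication (Application Default
# Credentials) is picked up by a Google client library. The Worker uses the
# same standard methods; this is not Worker configuration code.
# Requires: pip install google-cloud-storage
from google.cloud import storage

# storage.Client() resolves credentials in the standard order:
# GOOGLE_APPLICATION_CREDENTIALS, gcloud application-default credentials,
# or the attached service account / workload identity.
client = storage.Client()

# "example-op-archive-bucket" is a placeholder bucket name.
print(client.bucket("example-op-archive-bucket").exists())
```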

## Configure Log Archives{% #configure-log-archives %}

This step is only required if you want to send logs to Google Cloud Storage for [archiving](https://docs.datadoghq.com/logs/log_configuration/archives.md) and [rehydration](https://docs.datadoghq.com/logs/log_configuration/rehydrating.md), and you don't already have a Datadog Log Archive configured for Observability Pipelines. If you already have a Datadog Log Archive configured or do not want to rehydrate your logs in Datadog, skip to Set up the destination for your pipeline.

You need to have Datadog's [Google Cloud Platform integration](https://docs.datadoghq.com/integrations/google_cloud_platform.md#setup) installed to set up Datadog Log Archives.

### Create a storage bucket{% #create-a-storage-bucket %}

1. Navigate to [Google Cloud Storage](https://console.cloud.google.com/storage).
1. On the Buckets page, click **Create** to create a bucket for your archives.
1. Enter a name for the bucket and choose where to store your data.
1. Select **Fine-grained** in the **Choose how to control access to objects** section.
1. Do not add a retention policy, because the most recent data needs to be rewritten in some rare cases (typically after a timeout).
1. Click **Create**.
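If you prefer to create the bucket programmatically, the following is a minimal sketch of the console steps above using the `google-cloud-storage` Python library; the bucket name and location are placeholders, not values required by Observability Pipelines.

```python
# A minimal sketch of the console steps above, using the google-cloud-storage
# Python library (pip install google-cloud-storage). The bucket name and
# location are placeholders; adjust them for your environment.
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("example-op-archive-bucket")

# "Fine-grained" access in the console corresponds to disabling
# uniform bucket-level access (object ACLs are used instead).
bucket.iam_configuration.uniform_bucket_level_access_enabled = False

# No retention policy is set, per the guidance above.
client.create_bucket(bucket, location="US")
```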

### Create a service account to allow Workers to write to the bucket{% #create-a-service-account-to-allow-workers-to-write-to-the-bucket %}

1. Create a Google Cloud Storage [service account](https://console.cloud.google.com/iam-admin/serviceaccounts).
   - Grant the service account `Storage Admin` and `Storage Object Admin` permissions on your bucket.
   - If you want to authenticate with a credentials file, download the service account key file and place it under `DD_OP_DATA_DIR/config`. You reference this file when you set up the Google Cloud Storage destination later on.
1. Follow these [instructions](https://cloud.google.com/iam/docs/keys-create-delete#creating) to create a service account key. Choose `json` for the key type.
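After you download the key, a quick way to confirm it can write to your bucket is to load it explicitly and upload a test object. This sketch is illustrative only; the file path, bucket name, and object key are placeholders.

```python
# Illustrative check that the downloaded service account key can write to the
# bucket. The path, bucket name, and object key below are placeholders; the
# key file itself should live under DD_OP_DATA_DIR/config as described above.
from google.cloud import storage

client = storage.Client.from_service_account_json(
    "/path/to/DD_OP_DATA_DIR/config/service-account-key.json"
)
bucket = client.bucket("example-op-archive-bucket")

# Write and list a small test object to confirm write access.
bucket.blob("op-connectivity-check/test.txt").upload_from_string("ok")
print([b.name for b in client.list_blobs(bucket, prefix="op-connectivity-check/")])
```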

### Connect the storage bucket to Datadog Log Archives{% #connect-the-storage-bucket-to-datadog-log-archives %}

1. Navigate to Datadog [Log Forwarding](https://app.datadoghq.com/logs/pipelines/log-forwarding).
1. Click **New archive**.
1. Enter a descriptive archive name.
1. Add a query that filters out all logs going through log pipelines so that none of those logs go into this archive. For example, add the query `observability_pipelines_read_only_archive`, assuming no logs going through the pipeline have that tag added.
1. Select **Google Cloud Storage**.
1. Select the service account that has access to your storage bucket.
1. Select the project.
1. Enter the name of the storage bucket you created earlier.
1. Optionally, enter a path.
1. Optionally, set permissions, add tags, and define the maximum scan size for rehydration. See [Advanced settings](https://docs.datadoghq.com/logs/log_configuration/archives.md?tab=awss3#advanced-settings) for more information.
1. Click **Save**.

See the [Log Archives documentation](https://docs.datadoghq.com/logs/log_configuration/archives.md) for additional information.

## Set up the destination for your pipeline{% #set-up-the-destinations %}

Set up the Google Cloud Storage destination and its environment variables when you [set up an Archive Logs pipeline](https://docs.datadoghq.com/observability_pipelines/configuration/explore_templates.md?tab=logs#archive-logs). The information below is configured in the pipelines UI.

1. Enter the name of your Google Cloud storage bucket. If you configured Log Archives, it's the bucket you created earlier.
1. If you have a credentials JSON file, enter the path to your credentials JSON file. If you configured Log Archives, these are the credentials you downloaded earlier. The credentials file must be placed under `DD_OP_DATA_DIR/config`. Alternatively, you can use the `GOOGLE_APPLICATION_CREDENTIALS` environment variable to provide the credentials path.
   - If you're using [workload identity](https://cloud.google.com/kubernetes-engine/docs/concepts/workload-identity) on Google Kubernetes Engine (GKE), the `GOOGLE_APPLICATION_CREDENTIALS` environment variable is provided for you.
   - The Worker uses standard [Google authentication methods](https://cloud.google.com/docs/authentication#auth-flowchart).
1. Select the storage class for the created objects.
1. Select the access level of the created objects.

#### Optional settings{% #optional-settings %}

##### Prefix to apply to all key objects{% #prefix-to-apply-to-all-key-objects %}

Enter a prefix that you want to apply to all key objects.

- Prefixes are useful for partitioning objects. For example, you can use a prefix as an object key to store objects under a particular directory. If you use a prefix for this purpose, it must end in `/` to act as a directory path; a trailing `/` is not added automatically. See the sketch after this list for an illustration.
- See [template syntax](https://docs.datadoghq.com/observability_pipelines/destinations.md#template-syntax) if you want to route logs to different object keys based on specific fields in your logs.
  - **Note**: Datadog recommends that you start your prefixes with the directory name and without a lead slash (`/`). For example, `app-logs/` or `service-logs/`.
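The sketch below (with placeholder bucket and prefix names) illustrates how a trailing-slash prefix groups objects so they can be listed as if they were in a directory; it is not part of the destination configuration.

```python
# Illustration of how a trailing-slash prefix acts as a directory path in
# Cloud Storage. The bucket name and prefix are placeholders.
from google.cloud import storage

client = storage.Client()

# Objects written with the prefix "app-logs/" can be listed as a group,
# the same way you would browse a directory.
for blob in client.list_blobs("example-op-archive-bucket", prefix="app-logs/"):
    print(blob.name)
```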

##### Metadata{% #metadata %}

1. Click **Add Header** to add metadata.
1. Enter the header name and value.

##### Buffering{% #buffering %}

Toggle the switch to enable **Buffering Options**. Enable a configurable buffer on your destination to ensure that intermittent latency or an outage at the destination doesn't create immediate backpressure, and to allow events to continue to be ingested from your source. Disk buffers can also increase pipeline durability by writing data to disk, so buffered data persists through a Worker restart. See [Destination buffers](https://docs.datadoghq.com/observability_pipelines/scaling_and_performance/buffering_and_backpressure.md#destination-buffers) for more information.

- If left unconfigured, your destination uses a memory buffer with a capacity of 500 events.
- To configure a buffer on your destination:
  1. Select the buffer type you want to set (**Memory** or **Disk**).
  1. Enter the buffer size and select the unit.
     1. Maximum memory buffer size is 128 GB.
     1. Maximum disk buffer size is 500 GB.
  1. In the **Behavior on full buffer** dropdown menu, select whether you want to **block** events or **drop new events** when the buffer is full.

### Set secrets{% #set-secrets %}

These are the defaults used for secret identifiers and environment variables.

**Note**: If you enter secret identifiers and then choose to use environment variables, the environment variable name is the identifier you entered, prefixed with `DD_OP_`. For example, if you entered `PASSWORD_1` as a password identifier, the environment variable for that password is `DD_OP_PASSWORD_1`.

{% tab title="Secrets Management" %}
There are no secret identifiers to configure.
{% /tab %}

{% tab title="Environment Variables" %}
#### Google Cloud Storage{% #google-cloud-storage %}

There are no environment variables to configure.
{% /tab %}

## How the destination works{% #how-the-destination-works %}

### Event batching{% #event-batching %}

A batch of events is flushed when one of these parameters is met. See [event batching](https://docs.datadoghq.com/observability_pipelines/destinations.md#event-batching) for more information.

| Maximum Events | Maximum Size (MB) | Timeout (seconds) |
| -------------- | ----------------- | ----------------- |
| None           | 100               | 900               |
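As a rough mental model only (not the Worker's implementation), a batch for this destination is flushed as soon as either threshold is crossed:

```python
# Rough, illustrative model of the flush conditions in the table above;
# this is not the Worker's implementation.
MAX_BATCH_BYTES = 100 * 1000 * 1000   # Maximum Size: 100 MB (illustrative byte count)
TIMEOUT_SECONDS = 900                 # Timeout: 900 seconds

def should_flush(batch_bytes: int, seconds_since_last_flush: float) -> bool:
    # There is no maximum event count for this destination, so only the
    # size and timeout thresholds trigger a flush.
    return batch_bytes >= MAX_BATCH_BYTES or seconds_since_last_flush >= TIMEOUT_SECONDS
```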
