Google Cloud Storage Destination

For Worker versions 2.7 and later, the Google Cloud Storage destination supports uniform bucket-level access, which Google recommends.
For Worker versions older than 2.7, only Access Control Lists (ACLs) are supported.
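If you are on Worker 2.7 or later and want to move an existing bucket from ACLs to uniform bucket-level access, the google-cloud-storage Python client can do it. A minimal sketch, assuming a hypothetical bucket name:

```python
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-op-archive-bucket")  # hypothetical bucket name

# Switch the bucket to uniform bucket-level access, the mode Google recommends.
bucket.iam_configuration.uniform_bucket_level_access_enabled = True
bucket.patch()
```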

Use the Google Cloud Storage destination to send your logs to a Google Cloud Storage bucket. If you want to send logs to Google Cloud Storage for archiving and rehydration, you must configure Log Archives. If you do not want to rehydrate logs in Datadog, skip to Set up the destination for your pipeline.

The Observability Pipelines Worker uses standard Google authentication methods. See Authentication methods at Google for more information about choosing the authentication method for your use case.
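For example, the Google client libraries resolve credentials through Application Default Credentials, checking the GOOGLE_APPLICATION_CREDENTIALS environment variable before falling back to gcloud user credentials or an attached service account. A quick way to check what your environment resolves to (illustrative Python, not Worker code):

```python
import google.auth

# Resolves Application Default Credentials in the standard order:
# GOOGLE_APPLICATION_CREDENTIALS, gcloud user credentials, attached service account.
credentials, project_id = google.auth.default()
print(f"Resolved credentials for project: {project_id}")
```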

Configure Log Archives

This step is only required if you want to send logs to Google Cloud Storage for archiving and rehydration, and you don’t already have a Datadog Log Archive configured for Observability Pipelines. If you already have a Datadog Log Archive configured or do not want to rehydrate your logs in Datadog, skip to Set up the destination for your pipeline.

You need to have Datadog’s Google Cloud Platform integration installed to set up Datadog Log Archives.

Create a storage bucket

  1. Navigate to Google Cloud Storage.
  2. On the Buckets page, click Create to create a bucket for your archives (for a scripted equivalent, see the sketch after this list).
  3. Enter a name for the bucket and choose where to store your data.
  4. Select Fine-grained in the Choose how to control access to objects section.
  5. Do not add a retention policy because the most recent data needs to be rewritten in some rare cases (typically a timeout case).
  6. Click Create.
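For a scripted equivalent of these console steps, here is a minimal sketch using the google-cloud-storage Python client; the bucket name and location are assumptions:

```python
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-op-archive-bucket")  # hypothetical name

# Creating the bucket without enabling uniform bucket-level access leaves it
# in fine-grained (ACL) mode, matching step 4. No retention policy is set (step 5).
new_bucket = client.create_bucket(bucket, location="us-central1")
print(f"Created bucket: {new_bucket.name}")
```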

Create a service account to allow Workers to write to the bucket

  1. Create a Google Cloud Storage service account.
    • Grant the service account the Storage Admin and Storage Object Admin permissions on your bucket. These grants can also be scripted, as shown in the sketch after this list.
    • If you want to authenticate with a credentials file, download the service account key file and place it under DD_OP_DATA_DIR/config. You reference this file when you set up the Google Cloud Storage destination later on.
  2. Follow these instructions to create a service account key. Choose JSON for the key type.
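The role grants from step 1 can also be applied with the same Python client. A sketch, assuming a hypothetical service account email and the bucket created earlier:

```python
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-op-archive-bucket")  # hypothetical name

# Hypothetical Worker service account email.
member = "serviceAccount:op-worker@my-project.iam.gserviceaccount.com"

# Add the Storage Admin and Storage Object Admin roles to the bucket's IAM policy.
policy = bucket.get_iam_policy(requested_policy_version=3)
for role in ("roles/storage.admin", "roles/storage.objectAdmin"):
    policy.bindings.append({"role": role, "members": {member}})
bucket.set_iam_policy(policy)
```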

Connect the storage bucket to Datadog Log Archives

  1. Navigate to Datadog Log Forwarding.
  2. Click New archive.
  3. Enter a descriptive archive name.
  4. Add a query that filters out all logs going through log pipelines so that none of those logs go into this archive. For example, add the query observability_pipelines_read_only_archive, assuming no logs going through the pipeline have that tag added.
  5. Select Google Cloud Storage.
  6. Select the service account that has access to your storage bucket.
  7. Select the project.
  8. Enter the name of the storage bucket you created earlier.
  9. Optionally, enter a path.
  10. Optionally, set permissions, add tags, and define the maximum scan size for rehydration. See Advanced settings for more information.
  11. Click Save.
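If you prefer automation over the UI, Datadog’s v2 Logs Archives API can create the same archive. A hedged sketch in Python: the payload shape follows the public API reference, and every name and value here is a placeholder to verify against the current docs:

```python
import os
import requests

# Sketch only: creates a GCS log archive through Datadog's v2 Logs Archives API.
payload = {
    "data": {
        "type": "archives",
        "attributes": {
            "name": "op-archive",  # hypothetical archive name
            "query": "observability_pipelines_read_only_archive",
            "destination": {
                "type": "gcs",
                "bucket": "my-op-archive-bucket",  # bucket created earlier
                "path": "/op",                     # optional path
                "integration": {
                    "client_email": "op-worker@my-project.iam.gserviceaccount.com",
                    "project_id": "my-project",
                },
            },
        },
    }
}
resp = requests.post(
    "https://api.datadoghq.com/api/v2/logs/config/archives",
    headers={
        "DD-API-KEY": os.environ["DD_API_KEY"],
        "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
    },
    json=payload,
)
resp.raise_for_status()
```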

See the Log Archives documentation for additional information.

Set up the destination for your pipeline

Set up the Google Cloud Storage destination and its environment variables when you set up an Archive Logs pipeline. The information below is configured in the pipelines UI.

  1. Enter the name of your Google Cloud storage bucket. If you configured Log Archives, it’s the bucket you created earlier.
  2. If you have a credentials JSON file, enter the path to your credentials JSON file. If you configured Log Archives, this is the key file you downloaded earlier. The credentials file must be placed under DD_OP_DATA_DIR/config. Alternatively, you can use the GOOGLE_APPLICATION_CREDENTIALS environment variable to provide the credentials path.
  3. Select the storage class for the created objects.
  4. Select the access level of the created objects.
  5. Optionally, enter a prefix.
    • Prefixes are useful for partitioning objects. For example, you can use a prefix as an object key to store objects under a particular directory. If using a prefix for this purpose, it must end in / to act as a directory path; a trailing / is not automatically added.
    • See template syntax if you want to route logs to different object keys based on specific fields in your logs.
    • Note: Datadog recommends that you start your prefixes with the directory name and without a lead slash (/). For example, app-logs/ or service-logs/.
  6. Optionally, click Add Header to add metadata.
  7. Optionally, toggle the switch to enable Buffering Options.
    Note: Buffering options is in Preview. Contact your account manager to request access.
    • If left disabled, the maximum size for buffering is 500 events.
    • If enabled:
      1. Select the buffer type you want to set (Memory or Disk).
      2. Enter the buffer size and select the unit.

Set the environment variables

There are no environment variables to configure.

How the destination works

Event batching

A batch of events is flushed when any of these limits is reached. See event batching for more information.

Max Events: None
Max Bytes: 100,000,000
Timeout (seconds): 900
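Conceptually, the flush rule is a size-or-age check. This illustrative Python sketch models the two limits above; it is not Worker code:

```python
import time

MAX_BYTES = 100_000_000  # flush once a batch reaches 100,000,000 bytes
TIMEOUT_SECS = 900       # or once the batch is 900 seconds old

class Batch:
    """Toy model of the destination's batching limits (Max Events is None)."""

    def __init__(self):
        self.events: list[bytes] = []
        self.size = 0
        self.started = time.monotonic()

    def add(self, event: bytes) -> None:
        self.events.append(event)
        self.size += len(event)

    def should_flush(self) -> bool:
        age = time.monotonic() - self.started
        return self.size >= MAX_BYTES or age >= TIMEOUT_SECS
```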