Google Cloud Storage Destination

The Google Cloud Storage destination is available for the Archive Logs template. Use this destination to send your logs in Datadog-rehydratable format to a Google Cloud Storage bucket for archiving. You need to set up Datadog Log Archives if you haven’t already, and then set up the destination in the pipeline UI.

Configure Log Archives

If you already have a Datadog Log Archive configured for Observability Pipelines, skip to Set up the destination for your pipeline.

You need to have Datadog’s Google Cloud Platform integration installed to set up Datadog Log Archives.

Create a storage bucket

  1. Navigate to Google Cloud Storage.
  2. On the Buckets page, click Create to create a bucket for your archives.
  3. Enter a name for the bucket and choose where to store your data.
  4. Select Fine-grained in the Choose how to control access to objects section.
  5. Do not add a retention policy, because in rare cases (typically after a timeout) the most recent data needs to be rewritten.
  6. Click Create.
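If you prefer to script bucket creation, the same settings can be applied with the google-cloud-storage Python client. This is a minimal sketch: the project ID, bucket name, and location are placeholders, and "fine-grained" access corresponds to disabling uniform bucket-level access.

```python
from google.cloud import storage

# Placeholder values; substitute your own project, bucket name, and location.
client = storage.Client(project="my-gcp-project")

bucket = client.bucket("my-op-log-archive-bucket")
bucket.storage_class = "STANDARD"

# "Fine-grained" access control means uniform bucket-level access is disabled,
# so per-object ACLs remain available.
bucket.iam_configuration.uniform_bucket_level_access_enabled = False

# Do not set a retention policy: the most recent objects may need to be
# rewritten in rare cases (typically after a timeout).
new_bucket = client.create_bucket(bucket, location="us-central1")
print(f"Created bucket {new_bucket.name} in {new_bucket.location}")
```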

Allow the Observability Pipeline Worker to write to the bucket

To authenticate the Observability Pipelines Worker for Google Cloud Storage, contact your Google Security Operations representative for a Google Developer Service Account Credential. This credential is a JSON file and must be placed under DD_OP_DATA_DIR/config. See Getting API authentication credential for more information.
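Before starting the Worker, you can sanity-check that the credential file parses and yields a usable service account identity. The sketch below assumes the file was saved as gcp-credentials.json under DD_OP_DATA_DIR/config; both the file name and the fallback data directory are assumptions, so adjust them to match your setup.

```python
import os

from google.oauth2 import service_account

# Assumed data directory and file name; adjust to match your installation.
data_dir = os.environ.get("DD_OP_DATA_DIR", "/var/lib/observability-pipelines-worker")
cred_path = os.path.join(data_dir, "config", "gcp-credentials.json")

credentials = service_account.Credentials.from_service_account_file(
    cred_path,
    scopes=["https://www.googleapis.com/auth/devstorage.read_write"],
)
print(f"Loaded credentials for {credentials.service_account_email}")
```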

Note: If you are installing the Worker in Kubernetes, see Referencing files in Kubernetes for information on how to reference the credentials file.

Connect the storage bucket to Datadog Log Archives

  1. Navigate to Datadog Log Forwarding.
  2. Click New archive.
  3. Enter a descriptive archive name.
  4. Add a query that filters out all logs going through log pipelines so that none of those logs go into this archive. For example, add the query observability_pipelines_read_only_archive, assuming no logs going through the pipeline have that tag added.
  5. Select Google Cloud Storage.
  6. Select the service account that has access to your storage bucket.
  7. Select the project.
  8. Enter the name of the storage bucket you created earlier.
  9. Optionally, enter a path.
  10. Optionally, set permissions, add tags, and define the maximum scan size for rehydration. See Advanced settings for more information.
  11. Click Save.

See the Log Archives documentation for additional information.
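The same archive can also be created through the Datadog API rather than the UI. The sketch below is an illustration only: it targets the Logs Archives endpoint (POST /api/v2/logs/config/archives), the bucket, path, project, and service account email are placeholders, and the DD_API_KEY/DD_APP_KEY environment variable names are assumptions, so verify the payload against the API reference before relying on it.

```python
import os

import requests

# Placeholder archive definition; the destination fields mirror the values
# selected in the Log Forwarding UI.
payload = {
    "data": {
        "type": "archives",
        "attributes": {
            "name": "observability-pipelines-archive",
            "query": "observability_pipelines_read_only_archive",
            "destination": {
                "type": "gcs",
                "bucket": "my-op-log-archive-bucket",
                "path": "/op-archives",
                "integration": {
                    "project_id": "my-gcp-project",
                    "client_email": "my-sa@my-gcp-project.iam.gserviceaccount.com",
                },
            },
        },
    }
}

response = requests.post(
    "https://api.datadoghq.com/api/v2/logs/config/archives",
    headers={
        "DD-API-KEY": os.environ["DD_API_KEY"],
        "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
        "Content-Type": "application/json",
    },
    json=payload,
)
response.raise_for_status()
print(response.json()["data"]["id"])
```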

Set up the destination for your pipeline

Set up the Google Cloud Storage destination and its environment variables when you set up an Archive Logs pipeline. The information below is configured in the pipelines UI.

  1. Enter the name of the Google Cloud Storage bucket you created earlier.
  2. Enter the path to the credentials JSON file you downloaded earlier.
  3. Select the storage class for the created objects.
  4. Select the access level of the created objects.
  5. Optionally, enter a prefix. Prefixes are useful for partitioning objects. For example, you can use a prefix as an object key to store objects under a particular directory, as shown in the sketch after this list. If you use a prefix for this purpose, it must end in / to act as a directory path; a trailing / is not automatically added.
  6. Optionally, click Add Header to add metadata.
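To see how a prefix ending in / behaves like a directory, you can list the archived objects stored under it with the Google Cloud Storage client. This is a small sketch with placeholder project, bucket, and prefix names.

```python
from google.cloud import storage

client = storage.Client(project="my-gcp-project")

# A prefix ending in "/" groups objects as if they lived under a directory.
prefix = "op-archives/"

for blob in client.list_blobs("my-op-log-archive-bucket", prefix=prefix):
    print(blob.name, blob.size)
```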

Set the environment variables

There are no environment variables to configure.

How the destination works

Event batching

A batch of events is flushed when one of these parameters is met. See event batching for more information.

Max Events    Max Bytes       Timeout (seconds)
None          100,000,000     900
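In other words, a batch is flushed when its buffered size reaches 100,000,000 bytes or when 900 seconds have elapsed since the batch was started, whichever comes first; there is no event-count limit. The sketch below only illustrates that flush logic and is not the Worker's implementation.

```python
import time


class Batcher:
    """Illustrative batcher that flushes on max_bytes or timeout_seconds."""

    def __init__(self, max_bytes=100_000_000, timeout_seconds=900):
        self.max_bytes = max_bytes
        self.timeout_seconds = timeout_seconds
        self.events = []
        self.bytes_buffered = 0
        self.started_at = time.monotonic()

    def add(self, event: bytes):
        self.events.append(event)
        self.bytes_buffered += len(event)
        # Simplification: the timeout is only checked when an event arrives;
        # a real implementation would also flush on a background timer.
        if self.should_flush():
            self.flush()

    def should_flush(self) -> bool:
        return (
            self.bytes_buffered >= self.max_bytes
            or time.monotonic() - self.started_at >= self.timeout_seconds
        )

    def flush(self):
        print(f"flushing {len(self.events)} events ({self.bytes_buffered} bytes)")
        self.events = []
        self.bytes_buffered = 0
        self.started_at = time.monotonic()
```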