Use the Amazon S3 destination to send logs to Amazon S3. If you want to send logs in Datadog-rehydratable format to Amazon S3 for archiving and rehydration, you must configure Log Archives. If you want to send your logs directly to Amazon S3, without converting them to Datadog-rehydratable format, skip to Set up the destination for your pipeline.
This step is only required if you want to send logs to Amazon S3 in Datadog-rehydratable format for archiving and rehydration, and you don’t already have a Datadog Log Archive configured for Observability Pipelines. If you already have a Datadog Log Archive configured or only want to send your logs directly to Amazon S3, skip to Set up the destination for your pipeline.
You need to have Datadog’s AWS integration installed to set up Datadog Log Archives.
Copy the policy below and paste it into the Policy editor. Replace <MY_BUCKET_NAME> and <MY_BUCKET_NAME_1_/_MY_OPTIONAL_BUCKET_PATH_1> with the name and, if applicable, the path of the S3 bucket you created earlier.
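As a reference, a minimal sketch of such a policy is shown below. The statement IDs and the exact set of actions here are illustrative assumptions; use the policy provided in the Datadog Log Archives documentation as the source of truth.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DatadogUploadAndRehydrateLogArchives",
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject"],
      "Resource": "arn:aws:s3:::<MY_BUCKET_NAME_1_/_MY_OPTIONAL_BUCKET_PATH_1>/*"
    },
    {
      "Sid": "DatadogRehydrateLogArchivesListBucket",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::<MY_BUCKET_NAME>"
    }
  ]
}
```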
Add a query that filters out all logs going through log pipelines so that none of those logs go into this archive. For example, add the query observability_pipelines_read_only_archive, assuming no logs going through the pipeline have that tag added.
Select AWS S3.
Select the AWS account that your bucket is in.
Enter the name of the S3 bucket.
Optionally, enter a path.
Check the confirmation statement.
Optionally, add tags and define the maximum scan size for rehydration. See Advanced settings for more information.
Set up the Amazon S3 destination and its environment variables when you set up an Archive Logs pipeline. The information below is configured in the pipelines UI.
Enter the S3 bucket name for the S3 bucket you created earlier.
Enter the AWS region the S3 bucket is in.
Enter the key prefix.
Prefixes are useful for partitioning objects. For example, you can use a prefix as an object key to store objects under a particular directory. If using a prefix for this purpose, it must end in / to act as a directory path; a trailing / is not automatically added.
See template syntax if you want to route logs to different object keys based on specific fields in your logs.
Note: Datadog recommends that you start your prefixes with the directory name and without a leading slash (/). For example, app-logs/ or service-logs/.
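For instance, a prefix that routes logs into per-service directories might look like the following. The service field name is only an example, and the double-brace field syntax is the one described in the template syntax documentation:

```
app-logs/{{service}}/
```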
Select the storage class for your S3 bucket in the Storage Class dropdown menu. If you plan to archive and rehydrate your logs, note that rehydration only supports the following storage classes:
Optionally, select an AWS authentication option. If you are only using the user or role you created earlier for authentication, do not select Assume role. Select Assume role only if that user or role needs to assume a different role to access the specific AWS resource, and that permission must be explicitly defined. If you select Assume role:
Enter the ARN of the IAM role you want to assume.
Optionally, enter the assumed role session name and external ID.
Note: The user or role you created earlier must have permission to assume this role so that the Worker can authenticate with AWS.
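As an illustration, granting that permission typically means attaching a statement like the following to the user or role the Worker uses; the account ID and role name are placeholder assumptions:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::<ACCOUNT_ID>:role/<ROLE_TO_ASSUME>"
    }
  ]
}
```

The assumed role's trust policy must also list the Worker's user or role as a trusted principal.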
Example destination and log archive setup
If you enter the following values for your Amazon S3 destination:
S3 Bucket Name: test-op-bucket
Prefix to apply to all object keys: op-logs
Storage class for the created objects: Standard
Then these are the values you enter for configuring the S3 bucket for Log Archives:
S3 bucket: test-op-bucket
Path: op-logs
Storage class: Standard
Set the environment variables
There are no environment variables to configure.
Route logs to Snowflake using the Amazon S3 destination
You can route logs from Observability Pipelines to Snowflake using the Amazon S3 destination by configuring Snowpipe in Snowflake to automatically ingest those logs. To set this up:
Configure Log Archives if you want to archive and rehydrate your logs. If you only want to send logs to Amazon S3, skip to step 2.
Set up a pipeline to use Amazon S3 as the log destination. When logs are collected by Observability Pipelines, they are written to an S3 bucket using the same configuration detailed in Set up the destination for your pipeline, which includes AWS authentication, region settings, and permissions.
Set up Snowpipe in Snowflake. See Automating Snowpipe for Amazon S3 for instructions. Snowpipe continuously monitors your S3 bucket for new files and automatically ingests them into your Snowflake tables, ensuring near real-time data availability for analytics or further processing.
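As a rough sketch of the Snowflake side, the objects might be created as follows. This assumes a storage integration named s3_op_integration, a target table named op_logs with a single VARIANT column, JSON objects written by the Amazon S3 destination, and the example bucket and prefix above; see the Snowflake documentation for the full setup, including the S3 event notifications that trigger auto-ingest.

```sql
-- Target table: one VARIANT column to hold each JSON log record
CREATE TABLE op_logs (record VARIANT);

-- External stage pointing at the bucket and prefix the Amazon S3 destination writes to
CREATE STAGE op_logs_stage
  URL = 's3://test-op-bucket/op-logs/'
  STORAGE_INTEGRATION = s3_op_integration
  FILE_FORMAT = (TYPE = 'JSON');

-- Pipe that automatically ingests new objects from the stage into the target table
CREATE PIPE op_logs_pipe
  AUTO_INGEST = TRUE
AS
  COPY INTO op_logs
  FROM @op_logs_stage;
```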