The Observability Pipelines datadog_archives destination formats logs into a Datadog-rehydratable format and then routes them to Log Archives. These logs are not ingested into Datadog, but are routed directly to the archive. You can then rehydrate the archive in Datadog when you need to analyze and investigate them.
The Observability Pipelines Datadog Archives destination is useful when:
For example, in this first diagram, some logs are sent to cloud storage for archiving and others to Datadog for analysis and investigation. However, the logs sent directly to cloud storage cannot be rehydrated in Datadog when you need to investigate them.
In this second diagram, all logs go to the Datadog Agent, including the logs that went to cloud storage in the first diagram. However, in the second scenario, before the logs are ingested into Datadog, the datadog_archives destination formats and routes the logs that would have gone directly to cloud storage to Datadog Log Archives instead. The logs in the Log Archive can be rehydrated in Datadog when needed.
This guide walks you through how to:
datadog_archives is available for Observability Pipelines Worker version 1.5 and later.
See AWS Pricing for inter-region data transfer fees and how cloud storage costs may be impacted.
Replace <MY_BUCKET_NAME> and <MY_BUCKET_NAME_1_/_MY_OPTIONAL_BUCKET_PATH_1> with the information for the S3 bucket you created earlier.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DatadogUploadAndRehydrateLogArchives",
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject"],
      "Resource": "arn:aws:s3:::<MY_BUCKET_NAME_1_/_MY_OPTIONAL_BUCKET_PATH_1>/*"
    },
    {
      "Sid": "DatadogRehydrateLogArchivesListBucket",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::<MY_BUCKET_NAME>"
    }
  ]
}
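For illustration only, with a hypothetical bucket named my-op-archive-bucket and no optional path, the two Resource values would be arn:aws:s3:::my-op-archive-bucket/* and arn:aws:s3:::my-op-archive-bucket, respectively.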
Create an IAM user and attach the IAM policy you created earlier to it. Create access credentials for the new IAM user. Save these credentials as AWS_ACCESS_KEY and AWS_SECRET_ACCESS_KEY.
Create a service account to use the policy you created above. In the Helm configuration, replace ${DD_ARCHIVES_SERVICE_ACCOUNT} with the name of the service account.
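As a rough illustration only, the service account could be declared in your Helm values. The key names below follow common Helm chart conventions rather than the official chart, and the role ARN is a hypothetical example of a role that has the S3 policy above attached:

serviceAccount:
  create: true
  name: opw-archiver            # use this value for ${DD_ARCHIVES_SERVICE_ACCOUNT}
  annotations:
    # Hypothetical IAM role (IRSA) with the archive policy attached
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/opw-archiver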
Attach the policy to the IAM Instance Profile that is created with Terraform, which you can find under the iam-role-name output.
observability_pipelines_read_only_archive, assuming no logs going through the pipeline have that tag added. See the Log Archives documentation for additional information.
Configure the datadog_archives destination

You can configure the datadog_archives destination using the configuration file or the pipeline builder UI.
For example, if logs are sent to datadog_archives and those logs have the status tagged as severity instead of the reserved attribute status, and the host tagged as hostname instead of the reserved attribute host, then when these logs are rehydrated in Datadog, the status for all of the logs is set to info and none of the logs will have a hostname tag.

For manual deployments, the sample pipelines configuration file for Datadog includes a sink for sending logs to Amazon S3 in a Datadog-rehydratable format.
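The sample file itself is not reproduced here; the following is a rough, non-authoritative sketch of what the sink portion might look like, written as Vector-style YAML because the Worker is built on Vector. The component name datadog_agent and the exact nesting of the fields are illustrative assumptions; the placeholders are the ones referenced in the steps below.

sinks:
  datadog_archives_s3:
    type: datadog_archives
    inputs:
      - datadog_agent                        # hypothetical upstream component
    service: aws_s3
    bucket: "${DD_ARCHIVES_BUCKET}"
    aws_s3:
      region: "${DD_ARCHIVES_REGION}"
      auth:
        access_key_id: "${AWS_ACCESS_KEY_ID}"
        secret_access_key: "${AWS_SECRET_ACCESS_KEY}"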
In the sample pipelines configuration file, replace AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY with the AWS credentials you created earlier.
Replace the ${DD_ARCHIVES_BUCKET} and ${DD_ARCHIVES_REGION} parameters based on your S3 configuration.
In the pipeline builder UI, select datadog_archives.
Enter .sender = "observability_pipelines_worker" in the Source section.
Select aws_s3, azure_blob, or gcp_cloud_storage in the Service field, depending on which cloud storage provider you are archiving to.
If you are using Remote Configuration, deploy the change to your pipeline in the UI. For manual configuration, download the updated configuration and restart the Worker.
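If you manage the configuration file rather than the UI, the equivalent of entering .sender = "observability_pipelines_worker" in the Source section is a remap step. The sketch below assumes Vector-style YAML and a hypothetical upstream component named datadog_agent:

transforms:
  tag_sender:
    type: remap
    inputs:
      - datadog_agent                        # hypothetical upstream component
    source: |
      # Mark each event as having passed through the Observability Pipelines Worker
      .sender = "observability_pipelines_worker"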
See Datadog Archives reference for details on all configuration options.
See Rehydrating from Archives for instructions on how to rehydrate your archive in Datadog so that you can start analyzing and investigating those logs.
Additional helpful documentation, links, and articles: