The Google Cloud Storage destination is available for the Archive Logs template. Use this destination to send your logs in Datadog-rehydratable format to a Google Cloud Storage bucket for archiving. You need to set up Datadog Log Archives if you haven’t already, and then set up the destination in the pipeline UI.
If you already have a Datadog Log Archive configured for Observability Pipelines, skip to Set up the destination for your pipeline.
You need to have Datadog’s Google Cloud Platform integration installed to set up Datadog Log Archives.
To authenticate the Observability Pipelines Worker for Google Cloud Storage, contact your Google Security Operations representative for a Google Developer Service Account Credential. This credential is a JSON file and must be placed under DD_OP_DATA_DIR/config. See Getting API authentication credential for more information.
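As a sketch of this step, assuming DD_OP_DATA_DIR is set in the Worker's environment and the credential was downloaded as credential.json (an assumed filename), a small Python helper could validate and place the file:

```python
import json
import os
import shutil
from pathlib import Path

# DD_OP_DATA_DIR is the Observability Pipelines Worker's data directory;
# the credential JSON must end up in its config/ subdirectory.
config_dir = Path(os.environ["DD_OP_DATA_DIR"]) / "config"
config_dir.mkdir(parents=True, exist_ok=True)

# "credential.json" is an assumed download name for the Google Developer
# Service Account Credential; substitute the file you actually received.
source = Path("credential.json")

# Fail early if the file is not valid JSON, since a JSON credential is expected.
json.loads(source.read_text())

shutil.copy2(source, config_dir / source.name)
print(f"Credential placed at {config_dir / source.name}")
```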
When you create the archive, add a query that filters out all logs going through the pipeline so that none of those logs also go into this archive. For example, add the query observability_pipelines_read_only_archive, assuming no logs going through the pipeline have that tag added. See the Log Archives documentation for additional information.
Set up the Google Cloud Storage destination and its environment variables when you set up an Archive Logs pipeline. The information below is configured in the pipelines UI.
Optionally, enter a prefix. If you use a prefix to store objects under a particular directory, it must end in / to act as a directory path; a trailing / is not automatically added.

There are no environment variables to configure.
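To illustrate why the trailing / matters: Google Cloud Storage has no real directories, so the prefix is simply prepended to each object name. A minimal sketch, where the prefix and object name are made up for illustration:

```python
def object_key(prefix: str, name: str) -> str:
    # The prefix is prepended verbatim; no separator is inserted
    # between the prefix and the object name.
    return f"{prefix}{name}"

# With a trailing slash, objects group under a directory-like path:
print(object_key("archives/", "dt=20240101/archive_1.json.gz"))
# -> archives/dt=20240101/archive_1.json.gz

# Without it, the prefix fuses into the first path segment:
print(object_key("archives", "dt=20240101/archive_1.json.gz"))
# -> archivesdt=20240101/archive_1.json.gz
```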
A batch of events is flushed when one of these parameters is met. See event batching for more information.
| Max Events | Max Bytes   | Timeout (seconds) |
|------------|-------------|-------------------|
| None       | 100,000,000 | 900               |
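For illustration, a batch governed by these values flushes on whichever limit is reached first; because Max Events is None here, only the byte and timeout limits apply. A minimal sketch of that flush rule, assuming hypothetical names rather than the Worker's actual internals:

```python
import time

MAX_BYTES = 100_000_000  # flush once the batch holds 100,000,000 bytes
TIMEOUT_SECS = 900       # or once 900 seconds have elapsed
# Max Events is None for this destination, so no event-count limit applies.

class Batch:
    def __init__(self) -> None:
        self.events: list[bytes] = []
        self.bytes = 0
        self.started = time.monotonic()

    def add(self, event: bytes) -> bool:
        """Add an event; return True when the batch should be flushed."""
        self.events.append(event)
        self.bytes += len(event)
        return (
            self.bytes >= MAX_BYTES
            or time.monotonic() - self.started >= TIMEOUT_SECS
        )
```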