Use Observability Pipelines’ Amazon Data Firehose source to receive logs from Amazon Data Firehose. Select and set up this source when you set up a pipeline. The information below describes the source settings in the pipeline UI.
Optionally, toggle the switch to enable TLS. If you enable TLS, the following certificate and key files are required:

Server Certificate Path
: The path to the certificate file that has been signed by your Certificate Authority (CA) Root File, in DER or PEM (X.509) format.

CA Certificate Path
: The path to the certificate file that is your Certificate Authority (CA) Root File, in DER or PEM (X.509) format.

Private Key Path
: The path to the `.key` private key file that belongs to your Server Certificate Path, in DER or PEM (PKCS#8) format.

Because Amazon Data Firehose can only deliver data over HTTP to an HTTPS URL, you need to deploy the Observability Pipelines Worker with a publicly exposed endpoint and handle TLS termination. To handle TLS termination, either front the Worker with a load balancer or configure the TLS options above. See Understand HTTP endpoint delivery request and response specifications for more information.
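Before you enter these paths in the UI, it can help to confirm that the files load together as a valid certificate chain. The snippet below is a minimal sketch using Python's standard `ssl` module; the file paths are hypothetical placeholders, and it assumes the files are PEM-encoded (DER-encoded files cannot be loaded this way).

```python
import ssl

# Hypothetical paths -- replace with the values you plan to enter for
# Server Certificate Path, Private Key Path, and CA Certificate Path.
SERVER_CERT = "/etc/observability-pipelines/tls/server.crt"
PRIVATE_KEY = "/etc/observability-pipelines/tls/server.key"
CA_CERT = "/etc/observability-pipelines/tls/ca.crt"


def check_tls_files() -> None:
    """Fail fast if the certificate, key, and CA files cannot be loaded together."""
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    # Raises ssl.SSLError if the server certificate and private key
    # do not match or are not valid PEM material.
    context.load_cert_chain(certfile=SERVER_CERT, keyfile=PRIVATE_KEY)
    # Raises if the CA root file cannot be parsed.
    context.load_verify_locations(cafile=CA_CERT)
    print("Server certificate, private key, and CA file all loaded successfully.")


if __name__ == "__main__":
    check_tls_files()
```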
To send logs to the Observability Pipelines Worker, set up an Amazon Data Firehose stream with an HTTP endpoint destination in the region where your logs are, and set the endpoint URL to the publicly exposed endpoint where the Worker is deployed.
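If you prefer to create the delivery stream programmatically rather than in the AWS console, the sketch below shows one way to do it with boto3's `create_delivery_stream` call. The stream name, region, role and bucket ARNs, and the endpoint URL are placeholders, not values from this guide; Amazon Data Firehose also requires an S3 configuration for backing up records that fail to deliver.

```python
import boto3

# All names, ARNs, and the endpoint URL are placeholders -- replace them with
# your own values. Use the region where your logs are.
firehose = boto3.client("firehose", region_name="us-east-1")

firehose.create_delivery_stream(
    DeliveryStreamName="observability-pipelines-logs",
    DeliveryStreamType="DirectPut",
    HttpEndpointDestinationConfiguration={
        # Publicly exposed HTTPS endpoint where the Observability Pipelines
        # Worker (or the load balancer in front of it) receives requests.
        "EndpointConfiguration": {
            "Url": "https://opw.example.com/",
            "Name": "observability-pipelines-worker",
        },
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-delivery-role",
        # Firehose requires an S3 destination to back up records it
        # cannot deliver to the HTTP endpoint.
        "S3BackupMode": "FailedDataOnly",
        "S3Configuration": {
            "RoleARN": "arn:aws:iam::123456789012:role/firehose-delivery-role",
            "BucketARN": "arn:aws:s3:::firehose-failed-logs-backup",
        },
    },
)
```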