Use Observability Pipelines’ sources to receive logs from your different log sources.
Select and set up your source when you build a pipeline in the UI. This is step 3 of the pipeline setup process.
Sources have different prerequisites and settings. Some sources also need to be configured to send logs to the Observability Pipelines Worker.
Select a source for more information:
All sources add the following standard metadata fields to ingested events:
| Field name | Value type | Example |
|---|---|---|
| `hostname` | String | `"ip-34-2-553.us.test"` |
| `timestamp` | String | `"2024-06-17T22:25:55.439Z"` |
| `source_type` | String | `"splunk_tcp"` |
For example, if this is the raw event:

```json
{
  "foo": "bar"
}
```
Then the enriched event with the standard metadata fields is:

```json
{
  "foo": "bar",
  "hostname": "ip-34-2-553.us.test",
  "timestamp": "2024-06-17T22:25:55.439Z",
  "source_type": "splunk_tcp"
}
```
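Conceptually, the enrichment above is a shallow merge of the standard metadata fields into the raw event. A minimal sketch in Python, using a hypothetical `enrich` helper (this is illustrative only, not Observability Pipelines Worker code; the hostname and source type values are the examples from the table above):

```python
from datetime import datetime, timezone

def enrich(raw_event: dict, hostname: str, source_type: str) -> dict:
    """Return a copy of the raw event with the standard metadata fields added."""
    enriched = dict(raw_event)  # leave the raw event untouched
    enriched["hostname"] = hostname
    # ISO 8601 timestamp with millisecond precision and a "Z" (UTC) suffix
    enriched["timestamp"] = (
        datetime.now(timezone.utc)
        .isoformat(timespec="milliseconds")
        .replace("+00:00", "Z")
    )
    enriched["source_type"] = source_type
    return enriched

event = enrich({"foo": "bar"}, "ip-34-2-553.us.test", "splunk_tcp")
```

The original `"foo": "bar"` field is preserved, and the three standard metadata fields are added alongside it.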
You can see these standard metadata fields when you use the `tap` command to view the events sent through the source.
After the source ingests events, they are sent to the configured processors and destinations, which might update those fields. For example, if an event is sent to the Datadog Logs destination, the `timestamp` field is converted to UNIX format.
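As a sketch of that timestamp conversion, the ISO 8601 string from the example above can be turned into a UNIX epoch value like this (the exact representation the Datadog Logs destination uses, such as seconds versus milliseconds, is an assumption here):

```python
from datetime import datetime

iso_timestamp = "2024-06-17T22:25:55.439Z"

# Python versions before 3.11 do not accept the "Z" suffix in
# fromisoformat, so replace it with an explicit UTC offset first.
dt = datetime.fromisoformat(iso_timestamp.replace("Z", "+00:00"))

# UNIX time in milliseconds, preserving the sub-second precision.
unix_ms = int(dt.timestamp() * 1000)
```

Both forms identify the same instant; only the notation changes.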
Note: The bytes in per second metric in the UI is based on ingested raw events, not enriched events.