This page walks Technology Partners through creating a log pipeline. A log pipeline is required if your integration sends logs to Datadog.
When developing your integration to send logs to Datadog, follow these guidelines to ensure the best experience for your users.
Before creating a log pipeline, consider the following guidelines and best practices:
- Use Datadog's existing log intake endpoints to send logs: the http-intake.logs endpoint for your Datadog site, with tags passed in the ddtags=<TAGS> query parameter (a sketch follows this list).
- Set the source tag to the integration name: <integration_name> (source:okta) for an application. The source must be set before sending logs to Datadog's endpoints, as it cannot be remapped in the Datadog UI. The source tag must be in lowercase and must not be editable by users, as it is used to enable integration pipelines and dashboards.
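For illustration, the following is a minimal sketch of sending one log through the HTTP intake with the ddtags query parameter and a lowercase source. It assumes the US1 intake host, the v2 logs endpoint, and Python's requests library; the API key, tags, and log content are placeholders.

```python
# A minimal sketch of sending one log to Datadog's HTTP intake (assumes the US1
# site host and the v2 logs endpoint; API key, tags, and log content are placeholders).
import requests

DD_API_KEY = "<YOUR_API_KEY>"  # placeholder

response = requests.post(
    "https://http-intake.logs.datadoghq.com/api/v2/logs",
    params={"ddtags": "env:prod,version:1.2.3"},  # the ddtags=<TAGS> query parameter
    headers={"DD-API-KEY": DD_API_KEY, "Content-Type": "application/json"},
    json=[
        {
            "ddsource": "okta",  # lowercase source matching the integration name
            "service": "okta",
            "hostname": "example-host",
            "message": '{"eventType": "user.session.start"}',  # placeholder raw log
        }
    ],
)
response.raise_for_status()
```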
You can create and design your log integration assets directly within your Datadog partner account.
Log integrations consist of two sets of assets: pipelines and associated facets. Centralizing logs from various technologies and applications can produce many unique attributes. To use out-of-the-box dashboards, Technology Partner Integrations should rely on Datadog’s standard naming convention when creating integrations.
After finalizing your Datadog Integration design and successfully sending logs to Datadog's log endpoint(s), define your log pipelines and facets to enrich and structure your integration's logs.
For information about becoming a Datadog Technology Partner, and gaining access to an integration development sandbox, read Build an Integration.
Logs sent to Datadog are processed in log pipelines using pipeline processors. These processors allow users to parse, remap, and extract attribute information, enriching and standardizing logs for use across the platform.
Create a log pipeline to process specified logs with pipeline processors.
Filter the pipeline on the source tag that defines the log source for the Technology Partner's logs. For example, source:okta for the Okta integration.

Note: Make sure that logs sent through the integration are tagged with the correct source tags before they are sent to Datadog.

After you set up a pipeline, add processors to enrich and structure the logs further.
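For partners who script their setup, a rough sketch of the same source filter applied through Datadog's Logs Pipelines API is shown below; the pipeline name and credentials are placeholders, and creating the pipeline in the Datadog UI as described above works just as well.

```python
# A rough sketch of creating a pipeline with a source filter through the Logs
# Pipelines API; credentials and the pipeline name are placeholders.
import requests

headers = {
    "DD-API-KEY": "<YOUR_API_KEY>",          # placeholder
    "DD-APPLICATION-KEY": "<YOUR_APP_KEY>",  # placeholder
    "Content-Type": "application/json",
}

pipeline = {
    "name": "Okta",
    "is_enabled": True,
    "filter": {"query": "source:okta"},  # only logs with this source enter the pipeline
    "processors": [],                    # processors are added in the next step
}

resp = requests.post(
    "https://api.datadoghq.com/api/v1/logs/config/pipelines",
    headers=headers,
    json=pipeline,
)
resp.raise_for_status()
```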
Before defining your pipeline processors, review Datadog’s Standard Attributes.
Use processors within your pipelines to enrich and restructure your data, and generate log attributes. For a list of all log processors, see the Processors documentation.
When building your pipeline:

- Remap attributes that hold client IP addresses to the standard attribute network.client.ip.
- Remap the service tag to the name of the service producing telemetry. Use a Service Remapper to remap the service attribute. When source and service share the same value, remap the service tag to the source tag. service tags must be lowercase.
- Remap the log's severity to the status attribute. Use a Status Remapper to remap the status of a log, or a Category Processor for statuses mapped to a range (as with HTTP status codes).
- Remap the attribute that contains the log's free-text content to the message attribute.
- Remap custom attributes into a namespace based on the integration name. For example, file would be remapped to integration_name.file. Use the Attribute Remapper to set attribute keys to a new namespaced attribute.
- Where calculated or combined values are needed, the Arithmetic Processor can be used to calculate information based off of attributes, or the String Builder Processor can concatenate multiple string attributes.

Tips:

- When remapping an attribute, set preserveSource:false to remove the original attribute. This helps avoid confusion and removes duplicates.
- Avoid wildcard parsing rules such as %{data:} and %{regex(".*"):}. Make your parsing statements as specific as possible, as illustrated in the sketch below.
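To make the guidance above concrete, here is a hedged sketch of a few processor definitions in the Logs Pipelines API format (the UI export shows options with different casing, such as preserveSource): an Attribute Remapper that namespaces a custom attribute, a Service Remapper, and a Grok Parser whose rule uses specific matchers instead of %{data:} wildcards. The attribute names, sample line, and grok rule are placeholders.

```python
# Illustrative processor definitions in the Logs Pipelines API format; attribute
# names, the sample line, and the grok rule are placeholders.
processors = [
    {
        # Namespace a custom attribute under the integration name and drop the
        # original (shown as preserveSource:false in the UI export).
        "type": "attribute-remapper",
        "name": "Map file to integration_name.file",
        "sources": ["file"],
        "source_type": "attribute",
        "target": "integration_name.file",
        "target_type": "attribute",
        "preserve_source": False,
        "override_on_conflict": False,
    },
    {
        # Remap the service attribute with a Service Remapper.
        "type": "service-remapper",
        "name": "Define service",
        "sources": ["service"],
    },
    {
        # Parse the raw message with specific matchers rather than %{data:} wildcards.
        "type": "grok-parser",
        "name": "Parse access logs",
        "source": "message",
        "samples": ["192.0.2.1 GET /health 200"],
        "grok": {
            "support_rules": "",
            "match_rules": "access %{ip:network.client.ip} %{word:http.method} "
                           "%{notSpace:http.url} %{number:http.status_code}",
        },
    },
]
```

A list like this could be passed as the processors array when creating or updating the pipeline from the earlier sketch.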
Facets are specific qualitative or quantitative attributes that can be used to filter and narrow down search results. While facets are not strictly necessary for filtering search results, they play a crucial role in helping users understand the available dimensions for refining their search.
Facets for standard attributes are automatically added by Datadog when a pipeline is published. Review whether your attributes should be remapped to Datadog Standard Attributes.
Not all attributes are meant to be used as a facet. Facets in integrations should focus on the dimensions users need to refine their searches, and on giving complex attributes readable labels: for example, @deviceCPUper → Device CPU Utilization Percentage.

You can create facets in the Log Explorer.
Correctly defining facets is important as they improve the usability of indexed logs in analytics, monitors, and aggregation features across Datadog’s Log Management product.
They allow for better findability of application logs by populating autocomplete features across Log Management.
When creating facets:

- Namespace custom attributes by remapping attribute_name to integration_name.attribute_name.
- Group the integration's facets under the source name by setting the facet's Group value to the source, same as the integration's name.
- Add a facet or measure for each attribute users need to filter or aggregate on.
Tips:

- Where it makes sense, define measures with units. Two families of units are supported, TIME and BYTES, with units such as millisecond or gibibyte.
- If an attribute is duplicated because of the preserveSource:true option, define a facet on only a single one.
- When facets appear in the exported .yaml configuration files, note they are assigned a source. This refers to where the attribute is captured from and can be log for attributes or tag for tags.
Datadog reviews the log integration based on the guidelines and requirements documented on this page and provides feedback to the Technology Partner through GitHub. In turn, the Technology Partner reviews and makes changes accordingly.
To start a review process, export your log pipeline and relevant custom facets using the Export icon on the Logs Configuration page.
Include sample raw logs with all the attributes you expect to be sent into Datadog by your integration. Raw logs comprise the raw messages generated directly from the source application before they are sent to Datadog.
Exporting your log pipeline includes two YAML files:
- One containing the pipeline definition and any custom facets. The exported file is named pipeline-name.yaml.
- One containing your raw log samples with an empty result section. The exported file is named pipeline-name_test.yaml.

Note: Depending on your browser, you may need to adjust your settings to allow file downloads.
After you’ve downloaded these files, navigate to your integration’s pull request on GitHub and add them in the Assets > Logs directory. If a Logs folder does not exist yet, you can create one.
Validations run automatically in your pull request and check your pipeline against the raw samples provided. They produce a result that you can set as the result section of your pipeline-name_test.yaml file.
Once the validations run again, if no issues are detected, the logs validation should succeed.
Three common validation errors are:
- A mismatched id field in the YAML files: Ensure that the id field matches the app_id field in your integration's manifest.json file to connect your pipeline to your integration (a quick local check is sketched after this list).
- A missing result of running the raw logs you provided against your pipeline: If the resulting output from the validation is accurate, take that output and add it to the result field in the YAML file containing the raw example logs.
- If your integration supports service as a parameter instead of sending it in the log payload, you must include the service field below your log samples within the YAML file.
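Before opening or updating the pull request, a hypothetical local check like the one below can catch the first error. It assumes the id field sits at the top level of each exported YAML file and that the files live under assets/logs; adjust the paths to match your repository.

```python
# A hypothetical pre-submission check: confirm that the id field in each exported
# YAML file matches app_id in manifest.json. The id field is assumed to be
# top-level in the exported files, and the paths below are illustrative.
import json

import yaml  # requires PyYAML

with open("manifest.json") as f:
    app_id = json.load(f)["app_id"]

for path in ("assets/logs/pipeline-name.yaml", "assets/logs/pipeline-name_test.yaml"):
    with open(path) as f:
        exported = yaml.safe_load(f)
    if exported.get("id") != app_id:
        print(f"{path}: id {exported.get('id')!r} does not match app_id {app_id!r}")
```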
After validations pass, Datadog creates and deploys the new log integration assets. If you have any questions, add them as comments in your pull request. Datadog team members respond within 2-3 business days.