Control how your logs are processed from the log configuration page.
Pipelines and processors can be applied to any type of log. You don't need to change your logging configuration or deploy changes to any server-side processing rules; everything can be configured from the pipeline configuration page.
Implementing a log processing strategy is also beneficial because it establishes an attribute naming convention for your organization.
Define custom processing rules for custom log formats. Use any log syntax to extract all attributes and, when necessary, remap them to global or canonical attributes.
For example, custom processing rules can transform a raw, unstructured log line into a structured log with parsed attributes.
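As a rough illustration of what such a transformation looks like, the sketch below uses a Python regular expression standing in for a Grok parsing rule. The raw log line, the regex, and the attribute names are all hypothetical examples, not a specific Datadog integration or parsing syntax:

```python
import json
import re

# Hypothetical raw access log line (illustrative only).
raw_log = '172.20.0.1 - - [06/Jan/2019:16:16:37 +0000] "GET /api/v1/users HTTP/1.1" 200 512'

# A regex standing in for a Grok parsing rule: extract attributes
# from the raw text into named, structured fields.
pattern = re.compile(
    r'(?P<network_client_ip>\S+) \S+ \S+ '
    r'\[(?P<date_access>[^\]]+)\] '
    r'"(?P<http_method>\S+) (?P<http_url>\S+) \S+" '
    r'(?P<http_status_code>\d+) (?P<network_bytes_written>\d+)'
)

match = pattern.match(raw_log)
structured = match.groupdict()
print(json.dumps(structured, indent=2))
```

The unstructured line becomes a set of key-value attributes (`network_client_ip`, `http_status_code`, and so on) that can then be remapped to standard attributes, faceted, and searched.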
See the processor documentation for a full list of processors available.
For optimal use of the Log Management solution, Datadog recommends using at most 20 processors per pipeline and 10 parsing rules within a Grok processor.
Datadog reserves the right to disable underperforming parsing rules, processors, or pipelines that might impact Datadog’s service performance.
A processing pipeline takes a filtered subset of incoming logs and applies a list of processors to them sequentially. See the log pipelines documentation to learn more about preprocessing for JSON logs and integration pipelines.
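The filter-then-process model described above can be sketched in a few lines. Everything here is illustrative (the `Pipeline` class, filter, and processors are hypothetical, not Datadog's implementation): logs that match the pipeline's filter pass through each processor in order, while non-matching logs pass through unchanged.

```python
from typing import Callable

Log = dict
Processor = Callable[[Log], Log]

class Pipeline:
    """Illustrative model: a filter plus an ordered list of processors."""

    def __init__(self, filter_fn: Callable[[Log], bool], processors: list):
        self.filter_fn = filter_fn
        self.processors = processors

    def process(self, log: Log) -> Log:
        # Logs outside the filtered subset are not modified.
        if not self.filter_fn(log):
            return log
        # Processors run sequentially; each sees the previous one's output.
        for processor in self.processors:
            log = processor(log)
        return log

# Example: a pipeline scoped to nginx logs that uppercases the HTTP
# method, then remaps a custom "code" attribute to a standard name.
def upper_method(log):
    log["http.method"] = log["http.method"].upper()
    return log

def remap_status(log):
    log["http.status_code"] = log.pop("code")
    return log

nginx_pipeline = Pipeline(
    filter_fn=lambda log: log.get("service") == "nginx",
    processors=[upper_method, remap_status],
)

out = nginx_pipeline.process(
    {"service": "nginx", "http.method": "get", "code": 200}
)
print(out)  # {'service': 'nginx', 'http.method': 'GET', 'http.status_code': 200}
```

Because processors run in sequence, ordering matters: a remapper placed before a parser would not see the attributes the parser extracts.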