Control how your logs are processed from the log configuration page.

Pipelines and processors can be applied to any type of log. You don’t need to change logging configuration or deploy changes to any server-side processing rules. Everything can be configured within the pipeline configuration page.

Implementing a log processing strategy is beneficial because it establishes an attribute naming convention for your organization.

Custom logs

Define custom processing rules for custom log formats. Use any log syntax to extract all attributes and, when necessary, remap them to global or canonical attributes.

For example, custom processing rules can transform a raw, unstructured log line into a structured log with its attributes extracted and remapped.
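As a rough sketch (the log line, rule name, and attribute names below are illustrative, not taken from this page), a Grok-style parsing rule behaves much like a regular expression with named capture groups:

```python
import re

# Illustrative only: a Grok-style rule such as
#   MyParsingRule %{word:user} connected on %{date("MM/dd/yyyy"):connect_date}
# is conceptually a regex whose named groups become log attributes.
RULE = re.compile(r"(?P<user>\w+) connected on (?P<connect_date>\d{2}/\d{2}/\d{4})")

def parse(raw_log: str) -> dict:
    """Extract attributes from a raw log line; return {} if it doesn't match."""
    match = RULE.match(raw_log)
    return match.groupdict() if match else {}

print(parse("john connected on 11/08/2017"))
# {'user': 'john', 'connect_date': '11/08/2017'}
```

In an actual pipeline, the extracted attributes could then be remapped to global or canonical attributes by downstream processors.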

See the pipelines documentation to learn more about performing actions on a subset of your logs with the pipeline filters.

See the processor documentation for a full list of processors available.

To learn more about parsing capabilities, see the parsing documentation. There are also guides on parsing best practices and parsing troubleshooting.


  • For optimal use of the Log Management solution, Datadog recommends using at most 20 processors per pipeline and 10 parsing rules within a Grok processor.

  • Datadog reserves the right to disable underperforming parsing rules, processors, or pipelines that might impact Datadog’s service performance.

Processing pipelines

A processing pipeline takes a filtered subset of incoming logs and applies a list of sequential processors to them. See the log pipelines documentation to learn more about preprocessing for JSON logs and integration pipelines.
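As a minimal sketch of how a pipeline ties a filter to a list of processors, the JSON below follows the general shape of the Logs Pipelines API; the pipeline name, filter query, and parsing rule are placeholders, not values from this page:

```json
{
  "name": "Web access logs",
  "is_enabled": true,
  "filter": { "query": "source:nginx" },
  "processors": [
    {
      "type": "grok-parser",
      "name": "Parse access log lines",
      "source": "message",
      "grok": {
        "support_rules": "",
        "match_rules": "access_rule %{word:user} connected on %{date(\"MM/dd/yyyy\"):connect_date}"
      }
    }
  ]
}
```

Only logs matching the `filter` query enter the pipeline, and each processor in `processors` runs in order on those logs.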
