Processing

Overview

To access the configuration panel, use the Logs menu on the left, then the Configuration submenu.

The log configuration page gives you full control over how your logs are processed with Datadog Pipelines and Processors.

Pipelines and Processors can be applied to any type of log, regardless of its origin or format.

You therefore don't need to change how you log, and you don't need to deploy changes to any server-side processing rules: everything happens in, and is configured from, the Datadog processing page.

Another benefit of implementing a log processing strategy is the ability to enforce an attribute naming convention across your organization.

Log Processing

Integration logs

For integration logs, an Integration Pipeline is automatically installed; it parses your logs and adds the corresponding facets to your Logs Explorer.

Consult the current list of supported integrations.

Custom logs

Log formats, however, can be entirely custom, which is why you can define custom processing rules. With any log syntax, you can extract all of your attributes and, when necessary, remap them to more global or canonical ones.

For instance, custom processing rules let you transform a raw, unstructured log line into a structured event with extracted attributes.
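As a purely illustrative sketch (the log line, rule name, and attribute names below are hypothetical, not a prescribed convention), a raw access log such as:

    172.20.0.1 - frank [14/Oct/2019:09:11:42 +0000] "GET /api/v1/users HTTP/1.1" 200 1337

could be matched by a Grok Parser rule along these lines:

    my_access_rule %{ipOrHost:network.client.ip} %{notSpace:http.ident} %{notSpace:http.auth} \[%{date("dd/MMM/yyyy:HH:mm:ss Z"):date_access}\] "%{word:http.method} %{notSpace:http.url} HTTP\/%{number:http.version}" %{integer:http.status_code} %{integer:network.bytes_written}

which would yield a structured event with searchable attributes:

    {
      "network.client.ip": "172.20.0.1",
      "http.ident": "-",
      "http.auth": "frank",
      "date_access": "14/Oct/2019:09:11:42 +0000",
      "http.method": "GET",
      "http.url": "/api/v1/users",
      "http.version": "1.1",
      "http.status_code": 200,
      "network.bytes_written": 1337
    }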

Consult the Pipelines documentation page to learn more about how to perform actions on only a subset of your logs with Pipeline filters.

To discover the full list of available Processors, refer to the dedicated Processor documentation page.

If you want to learn more about the parsing capabilities of the Datadog application, follow the parsing training guide. There are also parsing best practices and parsing troubleshooting guides.

For optimal usage of the Log Management solution, Datadog recommends using at most 20 processors per pipeline and 10 parsing rules within a grok processor. Datadog reserves the right to disable underperforming parsing rules, processors, or pipelines that might impact Datadog’s service performance.

JSON Logs Preprocessing

JSON logs preprocessing applies to all logs before they enter Log Pipelines processing. Preprocessing runs a series of operations based on the following reserved attributes, each described below: source, host, date, message, status, service, and trace_id.

JSON logs preprocessing comes with a default configuration that works for standard log forwarders. Edit this configuration at any time to adapt it to custom or specific log forwarding approaches. To change the default values, go to the configuration page and edit the Preprocessing for JSON logs settings:

JSON Logs Preprocessing Tile

source attribute

If a JSON formatted log file includes the ddsource attribute, Datadog interprets its value as the log’s source. To use the same source names Datadog uses, see the Integration Pipeline Library.
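For example, a forwarder might emit a JSON log like the following (the values here are illustrative); Datadog would then set the log's source to nginx:

    {
      "ddsource": "nginx",
      "message": "GET /api/v1/users 200"
    }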

Note: Logs coming from a containerized environment require the use of an environment variable to override the default source and service values.

host attribute

Using the Datadog Agent or the RFC5424 format automatically sets the host value on your logs. However, if a JSON formatted log file includes one of the following attributes, Datadog interprets its value as the log's host (see the example after this list):

  • host
  • hostname
  • syslog.hostname
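For instance, either of these illustrative logs (the hostnames are hypothetical) would set the log's host to i-0123456789:

    { "hostname": "i-0123456789", "message": "Starting worker" }
    { "syslog": { "hostname": "i-0123456789" }, "message": "Starting worker" }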

date attribute

By default, Datadog generates a timestamp and appends it as a date attribute when logs are received. However, if a JSON formatted log file includes one of the following attributes, Datadog interprets its value as the log's official date (see the example at the end of this section):

  • @timestamp
  • timestamp
  • _timestamp
  • Timestamp
  • eventTime
  • date
  • published_date
  • syslog.timestamp

You can also specify alternate attributes to use as the source of a log’s date by setting a log date remapper processor.

Note: Datadog rejects a log entry if its official date is older than 18 hours in the past.

The recognized date formats are: ISO8601, UNIX (the milliseconds EPOCH format), and RFC3164.
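For instance, each of these illustrative logs carries a valid official date, in ISO8601 and millisecond UNIX epoch format respectively (both denote the same instant):

    { "@timestamp": "2019-10-14T09:11:42Z", "message": "User logged in" }
    { "timestamp": 1571044302000, "message": "User logged in" }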

message attribute

By default, Datadog ingests the message value as the body of the log entry. That value is then highlighted and displayed in the logstream, where it is indexed for full text search.
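For example, in this illustrative log, the message value becomes the searchable body of the entry:

    { "message": "Connection to database lost, retrying in 5s", "service": "web-store" }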

status attribute

Each log entry may specify a status level, which is made available for faceted search within Datadog. If a JSON formatted log file includes one of the following attributes, Datadog interprets its value as the log's official status (see the example after this list):

  • status
  • severity
  • level
  • syslog.severity
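For instance, either of these illustrative logs would give the entry an error status:

    { "status": "error", "message": "Payment failed" }
    { "level": "error", "message": "Payment failed" }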

If you would like to remap a status existing in the status attribute, you can do so with the log status remapper.

service attribute

Using the Datadog Agent or the RFC5424 format automatically sets the service value on your logs. However, if a JSON formatted log file includes one of the following attributes, Datadog interprets its value as the log's service (see the example after this list):

  • service
  • syslog.appname
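For instance, this illustrative log would be attached to the web-store service:

    { "service": "web-store", "message": "Order created" }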

trace_id attribute

By default, Datadog tracers can automatically inject trace and span IDs into your logs. However, if a JSON formatted log includes one of the following attributes, Datadog interprets its value as the log's trace_id (see the example after this list):

  • dd.trace_id
  • contextMap.dd.trace_id
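For instance, this illustrative log (the ID value is hypothetical) would be correlated with the trace whose ID is carried in dd.trace_id:

    { "dd.trace_id": "5647489193626582401", "message": "Charging credit card" }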

Further Reading