Datadog automatically parses JSON-formatted logs. When logs are not JSON-formatted, you can add value to your raw logs by sending them through a processing pipeline. Pipelines take logs from a wide variety of formats and translate them into a common format in Datadog. Implementing a log pipeline and processing strategy is beneficial because it introduces an attribute naming convention for your organization.
With pipelines, logs are parsed and enriched by chaining them sequentially through processors. This extracts meaningful information or attributes from semi-structured text to reuse as facets. Each log that comes through the pipelines is tested against every pipeline filter. If it matches a filter, then all the processors are applied sequentially before moving to the next pipeline.
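The matching behavior above can be sketched as a small conceptual model: each log is tested against every pipeline's filter, and when it matches, that pipeline's processors run in order before the log moves on to the next pipeline. This is an illustration only, not Datadog's internals; the filter and processor here are hypothetical.

```python
def run_pipelines(log, pipelines):
    """Apply each matching pipeline's processors to the log, in order."""
    for pipeline in pipelines:
        if pipeline["filter"](log):          # the pipeline's filter query
            for processor in pipeline["processors"]:
                log = processor(log)         # processors run sequentially
    return log

# Example: one pipeline that only handles nginx logs and tags them.
pipelines = [
    {
        "filter": lambda log: log.get("service") == "nginx",
        "processors": [lambda log: {**log, "team": "web"}],
    }
]

result = run_pipelines({"service": "nginx", "message": "GET /"}, pipelines)
```

A log whose service does not match the filter passes through this pipeline untouched, which mirrors how a non-matching log simply moves on to the next pipeline.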
Pipelines and processors can be applied to any type of log. You don’t need to change logging configuration or deploy changes to any server-side processing rules. Everything can be configured within the pipeline configuration page.
Note: For optimal use of the Log Management solution, Datadog recommends using at most 20 processors per pipeline and 10 parsing rules within a Grok processor. Datadog reserves the right to disable underperforming parsing rules, processors, or pipelines that might impact Datadog’s service performance.
Preprocessing of JSON logs occurs before logs enter pipeline processing. Preprocessing runs a series of operations based on reserved attributes, such as message. If you have different attribute names in your JSON logs, use preprocessing to map your log attribute names to those in the reserved attribute list.
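What this mapping accomplishes can be sketched in a few lines: custom attribute names are renamed to their reserved equivalents before any pipeline runs. The custom names here ("msg", "hostname") are hypothetical examples; "message" and "host" are reserved attributes named in this document.

```python
# Custom attribute name -> reserved attribute name.
RESERVED_MAP = {
    "msg": "message",
    "hostname": "host",
}

def preprocess(log):
    """Rename custom attributes to their reserved equivalents."""
    return {RESERVED_MAP.get(key, key): value for key, value in log.items()}

remapped = preprocess({"msg": "login ok", "hostname": "web-01", "level": "info"})
# remapped now carries the reserved "message" and "host" keys
```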
JSON log preprocessing comes with a default configuration that works for standard log forwarders. To edit this configuration to accommodate custom or specific log forwarding approaches:
Note: Preprocessing JSON logs is the only way to define one of your log attributes as host for your logs.
Change the default mapping based on a reserved attribute:
If a JSON formatted log file includes the ddsource attribute, Datadog interprets its value as the log's source. To use the same source names Datadog uses, see the Integration Pipeline Library.
Note: Logs coming from a containerized environment require the use of an environment variable to override the default source and service values.
Using the Datadog Agent or the RFC5424 format automatically sets the host value on your logs. However, if a JSON formatted log file includes the following attribute, Datadog interprets its value as the log’s host:
By default, Datadog generates a timestamp and appends it in a date attribute when logs are received. However, if a JSON formatted log file includes one of the following attributes, Datadog interprets its value as the log's official date:
Specify alternate attributes to use as the source of a log’s date by setting a log date remapper processor.
Note: Datadog rejects a log entry if its official date is older than 18 hours in the past.
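The 18-hour cutoff above can be expressed as a simple check. This is purely an illustration of the rule, not Datadog's intake code.

```python
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(hours=18)

def is_accepted(log_date, now=None):
    """Return True if the log's official date is within the 18-hour window."""
    now = now or datetime.now(timezone.utc)
    return now - log_date <= MAX_AGE

now = datetime(2024, 1, 2, 12, 0, tzinfo=timezone.utc)
fresh = datetime(2024, 1, 2, 11, 0, tzinfo=timezone.utc)   # 1 hour old: accepted
stale = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)   # 24 hours old: rejected
```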
Specify alternate attributes to use as the source of a log’s message by setting a log message remapper processor.
Each log entry may specify a status level, which is made available for faceted search within Datadog. However, if a JSON formatted log file includes one of the following attributes, Datadog interprets its value as the log's official status:
To remap a status existing in the status attribute, use the log status remapper.
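What a status remapper does can be sketched as copying the value of a chosen source attribute into the log's official status field. The source attribute name "level" is a hypothetical example here; lowercasing mimics normalization, not Datadog's exact behavior.

```python
def remap_status(log, source_attribute="level"):
    """Copy the source attribute's value into the official status field."""
    if source_attribute in log:
        log = {**log, "status": log[source_attribute].lower()}
    return log

remapped = remap_status({"level": "WARN", "message": "disk nearly full"})
# remapped["status"] is now derived from the "level" attribute
```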
Using the Datadog Agent or the RFC5424 format automatically sets the service value on your logs. However, if a JSON formatted log file includes the following attribute, Datadog interprets its value as the log’s service:
Specify alternate attributes to use as the source of a log’s service by setting a log service remapper processor.
By default, Datadog tracers can automatically inject trace and span IDs into your logs. However, if a JSON formatted log includes the following attributes, Datadog interprets their values as the log's trace and span IDs:
Specify alternate attributes to use as the source of a log’s trace ID by setting a trace ID remapper processor.
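For illustration, a JSON log line that carries trace correlation attributes might look like the following. The "dd.trace_id" and "dd.span_id" keys are the attributes Datadog tracers commonly inject; the ID values here are made up.

```python
import json

# A JSON log line with injected trace correlation attributes.
log_line = json.dumps({
    "message": "payment processed",
    "dd.trace_id": "1234567890",   # hypothetical trace ID
    "dd.span_id": "987654321",     # hypothetical span ID
})

parsed = json.loads(log_line)
```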
Navigate to Pipelines in the Datadog app.
Select New Pipeline.
Select a log from the live tail preview to apply a filter, or apply your own filter. Choose a filter from the dropdown menu or create your own filter query by selecting the </> icon. Filters let you limit what kinds of logs a pipeline applies to.
Note: The pipeline filtering is applied before any of the pipeline’s processors. For this reason, you cannot filter on an attribute that is extracted in the pipeline itself.
Name your pipeline, and press Save.
An example of a log transformed by a pipeline:
Integration processing pipelines are available for certain sources when they are set up to collect logs. These pipelines are read-only and parse out your logs in ways appropriate for the particular source. For integration logs, an integration pipeline is automatically installed that takes care of parsing your logs and adds the corresponding facet in your Logs Explorer.
To view an integration pipeline, navigate to the Pipelines page. To edit an integration pipeline, clone it and then edit the clone:
See the ELB logs example below:
To see the full list of integration pipelines that Datadog offers, browse the integration pipeline library. The pipeline library shows how Datadog processes different log formats by default.
To use an integration pipeline, Datadog recommends installing the integration by configuring the corresponding log source. Once Datadog receives the first log with this source, the installation is automatically triggered and the integration pipeline is added to the processing pipelines list. To configure the log source, refer to the corresponding integration documentation.
It’s also possible to copy an integration pipeline using the clone button.
A processor executes within a pipeline to complete a data-structuring action. See the Processors docs to learn how to add and configure a processor by processor type, within the app or with the API.
Nested pipelines are pipelines within a pipeline. Use nested pipelines to split the processing into two steps. For example, first use a high-level filter such as team and then a second level of filtering based on the integration, service, or any other tag or attribute.
A pipeline can contain nested pipelines and processors whereas a nested pipeline can only contain processors.
It is possible to drag and drop a pipeline into another pipeline to transform it into a nested pipeline:
Additional helpful documentation, links, and articles: