Datadog automatically parses JSON-formatted logs. You can then add value to all your logs (raw and JSON) by sending them through a processing pipeline. Pipelines take logs from a wide variety of formats and translate them into a common format in Datadog. Implementing a log pipeline and processing strategy is beneficial because it introduces an attribute naming convention for your organization.
With pipelines, logs are parsed and enriched by chaining them sequentially through processors. This extracts meaningful information or attributes from semi-structured text to reuse as facets. Each log that comes through the pipelines is tested against every pipeline filter. If it matches a filter, then all the processors are applied sequentially before moving to the next pipeline.
Pipelines and processors can be applied to any type of log. You don’t need to change logging configuration or deploy changes to any server-side processing rules. Everything can be configured within the pipeline configuration page.
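As a rough mental model (not Datadog’s internal implementation), the matching behavior described above can be sketched in a few lines of Python; the filter and processor shown are hypothetical:

```python
# Conceptual sketch only -- not Datadog internals. Each log is tested against
# every pipeline's filter; on a match, that pipeline's processors run in order
# before the log moves on to the next pipeline.
pipelines = [
    {
        "filter": lambda log: log.get("service") == "auth-api",   # hypothetical filter
        "processors": [lambda log: {**log, "team": "identity"}],  # hypothetical processor
    },
]

def run_pipelines(log, pipelines):
    for pipeline in pipelines:
        if pipeline["filter"](log):
            for processor in pipeline["processors"]:
                log = processor(log)
    return log

print(run_pipelines({"service": "auth-api", "message": "login ok"}, pipelines))
# {'service': 'auth-api', 'message': 'login ok', 'team': 'identity'}
```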
Note: For optimal use of the Log Management solution, Datadog recommends using at most 20 processors per pipeline and 10 parsing rules within a Grok processor. Datadog reserves the right to disable underperforming parsing rules, processors, or pipelines that might impact Datadog’s service performance.
Preprocessing of JSON logs occurs before logs enter pipeline processing. Preprocessing runs a series of operations based on reserved attributes, such as message. If you have different attribute names in your JSON logs, use preprocessing to map your log attribute names to those in the reserved attribute list.
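For example (a minimal sketch, not Datadog internals, using the hypothetical attribute names log_message and appname), preprocessing conceptually renames your custom keys to the reserved ones:

```python
# Illustrative sketch of what JSON log preprocessing achieves: custom attribute
# names are mapped onto reserved attributes. The custom names below
# ("log_message", "appname") are hypothetical.
RESERVED_ATTRIBUTE_MAPPING = {
    "log_message": "message",  # custom name -> reserved attribute
    "appname": "service",
}

def preprocess(json_log: dict) -> dict:
    """Return a copy of the log with custom keys remapped to reserved keys."""
    remapped = dict(json_log)
    for custom_name, reserved_name in RESERVED_ATTRIBUTE_MAPPING.items():
        if custom_name in remapped and reserved_name not in remapped:
            remapped[reserved_name] = remapped.pop(custom_name)
    return remapped

print(preprocess({"log_message": "User logged in", "appname": "auth-api"}))
# {'message': 'User logged in', 'service': 'auth-api'}
```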
JSON log preprocessing comes with a default configuration that works for standard log forwarders. To edit this configuration and adapt it to custom or specific log forwarding approaches:
Navigate to Pipelines in the Datadog app and select Preprocessing for JSON logs.
Note: Preprocessing JSON logs is the only way to define one of your log attributes as host for your logs.
Change the default mapping for the following reserved attributes as needed:
If a JSON formatted log file includes the ddsource attribute, Datadog interprets its value as the log’s source. To use the same source names Datadog uses, see the Integration Pipeline Library.
Note: Logs coming from a containerized environment require the use of an environment variable to override the default source and service values.
Using the Datadog Agent or the RFC5424 format automatically sets the host value on your logs. However, if a JSON formatted log file includes the following attribute, Datadog interprets its value as the log’s host:
By default, Datadog generates a timestamp and stores it in a date attribute when logs are received. However, if a JSON formatted log file includes one of the following attributes, Datadog interprets its value as the log’s official date:
Specify alternate attributes to use as the source of a log’s date by setting a log date remapper processor.
Note: Datadog rejects a log entry if its official date is more than 18 hours in the past.
By default, Datadog ingests the message value as the body of the log entry. That value is then highlighted and displayed in the Log Explorer, where it is indexed for full text search.
Specify alternate attributes to use as the source of a log’s message by setting a log message remapper processor.
Each log entry may specify a status level, which is made available for faceted search within Datadog. However, if a JSON formatted log file includes one of the following attributes, Datadog interprets its value as the log’s official status:
Specify alternate attributes to use as the source of a log’s status by setting a log status remapper processor.
Using the Datadog Agent or the RFC5424 format automatically sets the service value on your logs. However, if a JSON formatted log file includes the following attribute, Datadog interprets its value as the log’s service:
Specify alternate attributes to use as the source of a log’s service by setting a log service remapper processor.
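As a rough illustration of what such remappers look like when defined through the v1 Logs Pipelines API (field names as best understood here; the source attributes published_at, level, and appname are hypothetical examples):

```python
# Hedged sketch of remapper processor definitions for the v1 Logs Pipelines
# API. Each remapper points Datadog at an alternate source attribute; the
# attribute names used here are hypothetical.
date_remapper = {
    "type": "date-remapper",
    "name": "Use published_at as the official log date",
    "is_enabled": True,
    "sources": ["published_at"],
}

status_remapper = {
    "type": "status-remapper",
    "name": "Use level as the log status",
    "is_enabled": True,
    "sources": ["level"],
}

service_remapper = {
    "type": "service-remapper",
    "name": "Use appname as the log service",
    "is_enabled": True,
    "sources": ["appname"],
}
```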
Create a pipeline
- Navigate to Pipelines in the Datadog app.
- Select New Pipeline.
- Select a log from the live tail preview to apply a filter, or apply your own filter. Choose a filter from the dropdown menu or create your own filter query by selecting the </> icon. Filters let you limit what kinds of logs a pipeline applies to.
  Note: The pipeline filtering is applied before any of the pipeline’s processors. For this reason, you cannot filter on an attribute that is extracted in the pipeline itself.
- Name your pipeline.
- (Optional) Grant editing access to processors in the pipeline.
- (Optional) Add tags and a description to the pipeline. The description and tags can be used to state the pipeline’s purpose and which team owns it.
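The steps above create the pipeline in the UI. As a sketch of the equivalent API call (assuming the v1 Logs Pipelines endpoint and API/application keys in environment variables; the filter query and grok rule are hypothetical examples):

```python
# Hedged sketch: creating a pipeline programmatically with the v1 Logs
# Pipelines API (POST /api/v1/logs/config/pipelines). The filter query and
# grok rule below are illustrative, not required values.
import os
import requests

payload = {
    "name": "Web access logs",
    "is_enabled": True,
    "filter": {"query": "source:my-web-app"},  # hypothetical source tag
    "processors": [
        {
            "type": "grok-parser",
            "name": "Parse access log lines",
            "is_enabled": True,
            "source": "message",
            "grok": {
                "support_rules": "",
                "match_rules": "access_rule %{ipOrHost:network.client.ip} %{word:http.method} %{notSpace:http.url}",
            },
        }
    ],
}

response = requests.post(
    "https://api.datadoghq.com/api/v1/logs/config/pipelines",
    headers={
        "DD-API-KEY": os.environ["DD_API_KEY"],
        "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
        "Content-Type": "application/json",
    },
    json=payload,
)
response.raise_for_status()
print(response.json()["id"])
```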
An example of a log transformed by a pipeline:
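As an illustrative sketch of that transformation (hypothetical values), a grok parser like the one in the API example above turns a raw line into structured attributes roughly as follows:

```python
import re

raw = "access 93.184.216.34 GET /checkout"

# Rough Python equivalent of the grok rule sketched above -- for illustration
# only; Datadog's grok matchers are richer than this regex.
pattern = re.compile(r"access (?P<client_ip>\S+) (?P<method>\w+) (?P<url>\S+)")
attributes = pattern.match(raw).groupdict()

print(attributes)
# {'client_ip': '93.184.216.34', 'method': 'GET', 'url': '/checkout'}
```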
Integration processing pipelines are available for certain sources when they are set up to collect logs. These pipelines are read-only and parse your logs in ways appropriate to the particular source. For integration logs, an integration pipeline is automatically installed that takes care of parsing your logs and adds the corresponding facets to your Log Explorer.
To view an integration pipeline, navigate to the Pipelines page. To edit an integration pipeline (for example, the ELB logs pipeline), clone it and then edit the clone.
Integration pipeline library
To see the full list of integration pipelines that Datadog offers, browse the integration pipeline library. The pipeline library shows how Datadog processes different log formats by default.
To use an integration pipeline, Datadog recommends installing the integration by configuring the corresponding log source. Once Datadog receives the first log with this source, the installation is automatically triggered and the integration pipeline is added to the processing pipelines list. To configure the log source, refer to the corresponding integration documentation.
It’s also possible to copy an integration pipeline using the clone button.
Add a processor or nested pipeline
- Navigate to Pipelines in the Datadog app.
- Hover over a pipeline and click the arrow next to it to expand processors and nested pipelines.
- Select Add Processor or Add Nested Pipeline.
A processor executes within a pipeline to complete a data-structuring action. See the Processors docs to learn how to add and configure a processor by processor type, within the app or with the API.
Nested pipelines are pipelines within a pipeline. Use nested pipelines to split the processing into two steps. For example, first use a high-level filter such as team, and then apply a second level of filtering based on the integration, service, or any other tag or attribute.
A pipeline can contain nested pipelines and processors, whereas a nested pipeline can only contain processors.
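As a rough sketch of how this structure looks in the Logs Pipelines API (assuming nested pipelines appear as processors of type "pipeline"; the team and service tags are hypothetical):

```python
# Hedged sketch: in the Logs Pipelines API, a nested pipeline is represented
# as a processor of type "pipeline" with its own filter and processors.
# The team/service tags below are hypothetical examples.
parent_pipeline = {
    "name": "Team checkout logs",
    "is_enabled": True,
    "filter": {"query": "team:checkout"},              # high-level filter
    "processors": [
        {
            "type": "pipeline",                         # nested pipeline
            "name": "Payments service",
            "is_enabled": True,
            "filter": {"query": "service:payments"},   # second-level filter
            "processors": [
                {
                    "type": "status-remapper",
                    "name": "Use level as the log status",
                    "is_enabled": True,
                    "sources": ["level"],
                }
            ],
        }
    ],
}
```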
It is possible to move a pipeline into another pipeline to transform it into a nested pipeline.
Manage your pipelines
Identify when the last change to a pipeline or processor was made and which user made the change using the modification information on the pipeline. Filter your pipelines using this modification information, as well as other faceted properties such as whether the pipeline is enabled or read-only.
Reorder pipelines precisely with the Move to option in the sliding option panel. In the Move to modal, scroll to and click the exact position to move the selected pipeline to. Pipelines cannot be moved into other read-only pipelines. Pipelines containing nested pipelines can only be moved to other top-level positions. They cannot be moved into other pipelines.
Estimated usage metrics
Estimated usage metrics are displayed per pipeline: specifically, the volume and count of logs ingested and modified by each pipeline. Every pipeline also links to the out-of-the-box Logs Estimated Usage Dashboard, where you can view that pipeline’s usage metrics in more detailed charts.