Overview

Datadog automatically parses JSON-formatted logs. You can then add value to all your logs (raw and JSON) by sending them through a processing pipeline. Pipelines take logs from a wide variety of formats and translate them into a common format in Datadog. Implementing a log pipeline and processing strategy is beneficial because it introduces an attribute naming convention for your organization.

With pipelines, logs are parsed and enriched by chaining them sequentially through processors. This extracts meaningful information or attributes from semi-structured text to reuse as facets. Each log that comes through the pipelines is tested against every pipeline filter. If it matches a filter, then all the processors are applied sequentially before moving to the next pipeline.
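
For example, a parsing processor can turn a raw access log line into structured attributes that become facets. The snippet below is only an illustration, with a made-up log line and attribute names that follow common Datadog naming conventions; the actual output depends on the parsing rules you configure.

```python
# Illustration only: a raw, semi-structured log line and the kind of
# structured attributes a parsing processor could extract from it.
raw_log = '127.0.0.1 - frank [13/Jul/2016:10:55:36 +0000] "GET /apache_pb.gif HTTP/1.0" 200 2326'

extracted_attributes = {
    "network.client.ip": "127.0.0.1",
    "http.method": "GET",
    "http.url": "/apache_pb.gif",
    "http.status_code": 200,
    "network.bytes_written": 2326,
}
```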

Pipelines and processors can be applied to any type of log. You don’t need to change logging configuration or deploy changes to any server-side processing rules. Everything can be configured within the pipeline configuration page.

Note: For optimal use of the Log Management solution, Datadog recommends using at most 20 processors per pipeline and 10 parsing rules within a Grok processor. Datadog reserves the right to disable underperforming parsing rules, processors, or pipelines that might impact Datadog’s service performance.

Preprocessing

Preprocessing of JSON logs occurs before logs enter pipeline processing. Preprocessing runs a series of operations based on reserved attributes, such as timestamp, status, host, service, and message. If you have different attribute names in your JSON logs, use preprocessing to map your log attribute names to those in the reserved attribute list.
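
For example, suppose your JSON logs use custom attribute names that are not in the reserved list. The names below are hypothetical; preprocessing is where you map them to the reserved attributes.

```python
# Hypothetical JSON log with non-standard attribute names.
incoming_log = {
    "machine": "i-0123456789abcdef0",   # map to the reserved "host"
    "loglevel": "warn",                 # map to the reserved "status"
    "component": "checkout",            # map to the reserved "service"
    "msg": "payment retry scheduled",   # map to the reserved "message"
}
# In Preprocessing for JSON logs, adding "machine", "loglevel", "component",
# and "msg" to the corresponding reserved attribute mappings makes Datadog
# treat these values as the log's host, status, service, and message.
```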

JSON log preprocessing comes with a default configuration that works for standard log forwarders. To edit this configuration for custom or specific log forwarding approaches:

  1. Navigate to Pipelines in the Datadog app and select Preprocessing for JSON logs.

    Note: Preprocessing JSON logs is the only way to define one of your log attributes as host for your logs.

  2. Change the default mapping for each reserved attribute:

Source attribute

If a JSON formatted log file includes the ddsource attribute, Datadog interprets its value as the log’s source. To use the same source names Datadog uses, see the Integration Pipeline Library.

Note: Logs coming from a containerized environment require the use of an environment variable to override the default source and service values.

Host attribute

Using the Datadog Agent or the RFC5424 format automatically sets the host value on your logs. However, if a JSON formatted log file includes one of the following attributes, Datadog interprets its value as the log’s host:

  • host
  • hostname
  • syslog.hostname

Date attribute

By default, Datadog generates a timestamp and appends it in a date attribute when logs are received. However, if a JSON formatted log file includes one of the following attributes, Datadog interprets its value as the log’s official date:

  • @timestamp
  • timestamp
  • _timestamp
  • Timestamp
  • eventTime
  • date
  • published_date
  • syslog.timestamp

Specify alternate attributes to use as the source of a log’s date by setting a log date remapper processor.

Note: Datadog rejects a log entry if its official date is more than 18 hours in the past.

The recognized date formats are: ISO8601, UNIX (the milliseconds EPOCH format), and RFC3164.
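
For illustration, here is the same arbitrary instant expressed in each accepted format:

```python
# The same arbitrary instant expressed in each accepted date format.
iso8601 = "2024-05-14T09:21:30.000Z"   # ISO 8601
unix_epoch_ms = 1715678490000          # UNIX epoch, in milliseconds
rfc3164 = "May 14 09:21:30"            # RFC 3164 (classic syslog timestamp)
```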

Message attribute

By default, Datadog ingests the message value as the body of the log entry. That value is then highlighted and displayed in the Log Explorer, where it is indexed for full text search.

Specify alternate attributes to use as the source of a log’s message by setting a log message remapper processor.

Status attribute

Each log entry may specify a status level which is made available for faceted search within Datadog. However, if a JSON formatted log file includes one of the following attributes, Datadog interprets its value as the log’s official status:

  • status
  • severity
  • level
  • syslog.severity

Specify alternate attributes to use as the source of a log’s status by setting a log status remapper processor.
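
As a sketch, a status remapper is declared with the attribute (or attributes) to read the status from. The `log_level` attribute below is a hypothetical custom attribute; the payload shape follows the Logs Pipelines API representation of a status remapper.

```python
# Sketch of a log status remapper processor in a pipeline payload.
# "log_level" is a hypothetical custom attribute used as the status source.
status_remapper = {
    "type": "status-remapper",
    "name": "Use log_level as the official status",
    "is_enabled": True,
    "sources": ["log_level"],
}
```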

Service attribute

Using the Datadog Agent or the RFC5424 format automatically sets the service value on your logs. However, if a JSON formatted log file includes one of the following attributes, Datadog interprets its value as the log’s service:

  • service
  • syslog.appname

Specify alternate attributes to use as the source of a log’s service by setting a log service remapper processor.

Trace ID attribute

By default, Datadog tracers can automatically inject trace and span IDs into your logs. However, if a JSON formatted log includes one of the following attributes, Datadog interprets its value as the log’s trace_id:

  • dd.trace_id
  • contextMap.dd.trace_id

Specify alternate attributes to use as the source of a log’s trace ID by setting a trace ID remapper processor.

Span ID attribute

By default, Datadog tracers can automatically inject span IDs into your logs. However, if a JSON formatted log includes one of the following attributes, Datadog interprets its value as the log’s span_id:

  • dd.span_id
  • contextMap.dd.span_id
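
Putting the reserved attributes together, a JSON log shaped like the sketch below (all values are made up) is preprocessed without any extra configuration: the date, status, host, service, source, message, trace ID, and span ID are all picked up from the attributes listed above.

```python
# Illustrative JSON log whose attributes all match the reserved list,
# so preprocessing maps them without extra configuration.
json_log = {
    "timestamp": "2024-05-14T09:21:30.000Z",   # official date
    "status": "error",                         # official status
    "host": "i-0123456789abcdef0",             # host
    "service": "checkout",                     # service
    "ddsource": "python",                      # source
    "message": "payment failed for order 42",  # log body shown in the Log Explorer
    "dd.trace_id": "1234567890123456789",      # trace correlation
    "dd.span_id": "987654321987654321",        # span correlation
}
```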

Create a pipeline

  1. Navigate to Pipelines in the Datadog app.

  2. Select New Pipeline.

  3. Select a log from the live tail preview to apply a filter, or apply your own filter. Choose a filter from the dropdown menu or create your own filter query by selecting the </> icon. Filters let you limit what kinds of logs a pipeline applies to.

    Note: The pipeline filtering is applied before any of the pipeline’s processors. For this reason, you cannot filter on an attribute that is extracted in the pipeline itself.

  4. Name your pipeline.

  5. (Optional) Grant editing access to processors in the pipeline. If you assign a role to a pipeline, the role receives logs_write_processor permissions specifically scoped to that pipeline. Roles that are granted the logs_write_processor permission globally (by editing the role) cannot be selected, because they already have access to all pipelines.

  6. (Optional) Add tags and a description to the pipeline. The description and tags can be used to state the pipeline’s purpose and which team owns it.

  7. Press Create.
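
The same pipeline can also be created programmatically. The sketch below assumes the public Logs Pipelines API on the US1 site, with placeholder keys and an arbitrary name and filter query; adapt the endpoint and fields to your own setup.

```python
import requests

# Minimal sketch: create a pipeline through the Logs Pipelines API.
# Replace the placeholder keys with your own; the name and filter query
# are arbitrary examples. Adjust the domain for your Datadog site.
resp = requests.post(
    "https://api.datadoghq.com/api/v1/logs/config/pipelines",
    headers={
        "DD-API-KEY": "<DATADOG_API_KEY>",
        "DD-APPLICATION-KEY": "<DATADOG_APPLICATION_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "name": "Checkout service logs",
        "is_enabled": True,
        "filter": {"query": "source:python service:checkout"},
        "processors": [],  # processors can be added later in the UI or API
    },
)
resp.raise_for_status()
print(resp.json())
```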

An example of a log transformed by a pipeline:


Integration pipelines

Integration processing pipelines are available for certain sources when they are set up to collect logs. These pipelines are read-only and parse your logs in ways appropriate for the particular source. For integration logs, an integration pipeline is automatically installed that handles parsing and adds the corresponding facets to your Log Explorer.

To view an integration pipeline, navigate to the Pipelines page. To edit an integration pipeline, clone it and then edit the clone:

Cloning pipeline

See the ELB logs example below:

ELB log post processing

Note: Integration pipelines cannot be deleted, only disabled.

Integration pipeline library

To see the full list of integration pipelines that Datadog offers, browse the integration pipeline library. The pipeline library shows how Datadog processes different log formats by default.

To use an integration pipeline, Datadog recommends installing the integration by configuring the corresponding log source. Once Datadog receives the first log with this source, the installation is automatically triggered and the integration pipeline is added to the processing pipelines list. To configure the log source, refer to the corresponding integration documentation.

It’s also possible to copy an integration pipeline using the clone button.

Add a processor or nested pipeline

  1. Navigate to Pipelines in the Datadog app.
  2. Hover over a pipeline and click the arrow next to it to expand processors and nested pipelines.
  3. Select Add Processor or Add Nested Pipeline.

Processors

A processor executes within a pipeline to complete a data-structuring action. See the Processors docs to learn how to add and configure a processor by processor type, within the app or with the API.

See Parsing dates for more information about parsing a custom date and time format and for information on the timezone parameter, which is needed if your timestamps are not in UTC.
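
As a sketch, a Grok processor bundles the source attribute to parse, optional log samples, and the parsing rules. The rule below parses a hypothetical line such as `john connected on 11/08/2017`; the rule and attribute names are arbitrary.

```python
# Sketch of a Grok parser processor as it could appear in a pipeline
# definition. It extracts "user" and "connect_date" attributes from a
# hypothetical connection log line.
grok_processor = {
    "type": "grok-parser",
    "name": "Parse connection events",
    "is_enabled": True,
    "source": "message",                            # attribute containing the raw text
    "samples": ["john connected on 11/08/2017"],
    "grok": {
        "support_rules": "",
        "match_rules": 'connect_rule %{word:user} connected on %{date("MM/dd/yyyy"):connect_date}',
    },
}
```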

Nested pipelines

Nested pipelines are pipelines within a pipeline. Use nested pipelines to split the processing into two steps. For example, first use a high-level filter such as team and then a second level of filtering based on the integration, service, or any other tag or attribute.

A pipeline can contain nested pipelines and processors whereas a nested pipeline can only contain processors.
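
A nested pipeline can be expressed in a pipeline payload as a pipeline-type entry inside the parent’s processors list, carrying its own filter and processors. The sketch below uses made-up names and filter queries.

```python
# Sketch of a parent pipeline with one nested pipeline.
parent_pipeline = {
    "name": "Team checkout",
    "is_enabled": True,
    "filter": {"query": "team:checkout"},            # high-level filter
    "processors": [
        {
            "type": "pipeline",                      # the nested pipeline
            "name": "Payment service",
            "is_enabled": True,
            "filter": {"query": "service:payment"},  # second-level filter
            "processors": [],                        # nested pipelines can only contain processors
        },
    ],
}
```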


Move a pipeline into another pipeline to make it into a nested pipeline:

  1. Hover over the pipeline you want to move, and click on the Move to icon.
  2. Select the pipeline you want to move the original pipeline into. Note: Pipelines containing nested pipelines can only be moved to another top level position. They cannot be moved into another pipeline.
  3. Click Move.

Manage your pipelines

Identify when the last change to a pipeline or processor was made and which user made the change using the modification information on the pipeline. Filter your pipelines using this modification information, as well as other faceted properties such as whether the pipeline is enabled or read-only.

How to manage your pipelines with faceted search, pipeline modification information, and the reordering modal

Reorder pipelines precisely with the Move to option in the sliding option panel. Scroll through the Move to modal and click the exact position to move the selected pipeline to. Pipelines cannot be moved into read-only pipelines. Pipelines containing nested pipelines can only be moved to other top level positions; they cannot be moved into other pipelines.

How to reorder your pipelines precisely using the move to modal

Estimated usage metrics

Estimated usage metrics are displayed per pipeline - specifically, the volume and count of logs being ingested and modified by each pipeline. There is also a link to the out-of-the-box Logs Estimated Usage Dashboard from every pipeline where you can view that pipeline’s usage metrics in more detailed charts.

How to get a quick view of your pipelines' usage metrics

*Logging without Limits is a trademark of Datadog, Inc.