Parsed logs are central to using Datadog Log Management to its full capacity: for queries, monitors, aggregations, and automatic enrichments such as Sensitive Data Scanner. As your log volume scales, it can be challenging to identify and fix log patterns that are not parsed by your pipelines.
To identify and control the volume of unparsed logs in your organization:
To determine whether a specific log has been parsed by your pipelines, open the log and check the Event Attributes panel. If the log is unparsed, the panel shows a message saying that no attributes were extracted, instead of listing attributes extracted from your log.
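In an automated check, the same test can be approximated by looking at whether any custom attributes are present on the event. A minimal sketch, assuming a simplified event shape (the `custom_attributes` key is an illustrative assumption, not the exact Datadog event schema):

```python
def is_unparsed(event: dict) -> bool:
    """Treat a log as unparsed when pipelines extracted no custom attributes.

    The "custom_attributes" key is an assumed, simplified representation
    of what a pipeline would extract; it is not the exact Datadog schema.
    """
    return not event.get("custom_attributes")


# A raw log with nothing extracted versus one a parser handled:
raw_log = {"message": "user=42 action=login", "custom_attributes": {}}
parsed_log = {
    "message": "user=42 action=login",
    "custom_attributes": {"user": "42", "action": "login"},
}
```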
If you have many logs, making one-by-one checking impractical, you can instead query for unparsed logs by using the filter datadog.pipelines:false in the Log Explorer.
This filter returns all indexed logs that have no custom attributes after pipeline processing. The Patterns aggregation shows an aggregated view of the common patterns among the unparsed logs, which can kickstart the creation of custom pipelines.
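The same filter can also be used programmatically. A hedged sketch of a request body for Datadog's Logs Search API (POST api/v2/logs/events/search); the time range and page limit below are illustrative values, not requirements:

```python
import json

# Sketch of a Logs Search API request body restricted to unparsed logs.
# Only the query string comes from this guide; the time range and page
# limit are illustrative assumptions.
payload = {
    "filter": {
        "query": "datadog.pipelines:false",  # indexed logs with no custom attributes
        "from": "now-15m",
        "to": "now",
    },
    "page": {"limit": 100},
}

print(json.dumps(payload, indent=2))
```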
Querying for unparsed logs lets you select the unparsed indexed logs. It is also good practice to ensure that even the logs you do not index are parsed, so that the content of your archives is structured.
To create a metric for unparsed logs, create a custom log-based metric using the datadog.pipelines:false filter. As with any log-based metric, you can add dimensions in the group by field, for example grouping by team. Group by the dimensions that you use to define the ownership of a log.
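Log-based metrics can also be managed through the API. A hedged sketch of a creation request body, assuming the metric is named logs.unparsed.count and that ownership is tracked in a team attribute (both names are illustrative, not prescribed by this guide):

```python
import json

# Sketch of a request body for creating a log-based metric
# (POST api/v2/logs/config/metrics). The metric name "logs.unparsed.count"
# and the "@team" group-by path are assumptions for illustration.
payload = {
    "data": {
        "id": "logs.unparsed.count",
        "type": "logs_metrics",
        "attributes": {
            # Count every log matching the filter below.
            "compute": {"aggregation_type": "count"},
            "filter": {"query": "datadog.pipelines:false"},
            # Emit one series per team to track ownership.
            "group_by": [{"path": "@team", "tag_name": "team"}],
        },
    }
}

print(json.dumps(payload, indent=2))
```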
To keep log parsing in your organization under control, apply a quota to the volume of unparsed logs. This approach is similar to the daily quotas you can set on indexes.
To monitor the volume of unparsed logs:
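As one possible starting point, a metric monitor over a custom unparsed-log count can alert when a team exceeds its daily quota. A hedged sketch of a monitor definition (the metric name logs.unparsed.count, the threshold, and the message are illustrative assumptions):

```python
# Sketch of a monitor definition (POST api/v1/monitor) alerting when a
# team's daily unparsed-log volume exceeds a quota. The metric name,
# threshold, and message are assumptions for illustration.
monitor = {
    "name": "Unparsed logs over quota",
    "type": "query alert",
    # One alert per team, evaluated over the last day.
    "query": "sum(last_1d):sum:logs.unparsed.count{*} by {team} > 1000000",
    "message": (
        "{{team.name}} exceeded its unparsed-log quota. "
        "Review the team's pipelines and add parsing rules."
    ),
    "options": {"thresholds": {"critical": 1000000}},
}

print(monitor["query"])
```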