
Parsing

Overview

If your logs are JSON-formatted, Datadog automatically parses them. For other formats, Datadog lets you enrich your logs with the Grok Parser. The Grok syntax provides an easier way to parse logs than pure regular expressions. The main use of the Grok Parser is to extract attributes from semi-structured text messages.

Grok comes with many reusable patterns to parse integers, IP addresses, hostnames, and more.

Parsing rules can be written with the %{MATCHER:EXTRACT:FILTER} syntax:

  • Matcher: a rule (possibly a reference to another token rule) that describes what to expect (number, word, notSpace, and so on)

  • Extract (optional): an identifier representing the capture destination for the piece of text matched by the MATCHER.

  • Filter (optional): a post-processor of the match to transform it

Example for this classic unstructured log:

john connected on 11/08/2017

With the following parsing rule:

MyParsingRule %{word:user} connected on %{date("MM/dd/yyyy"):connect_date}

At the end, you would have this structured log:
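A sketch of the extracted attributes (the date matcher produces a millisecond Unix timestamp; the value below assumes UTC):

{
  "user": "john",
  "connect_date": 1510099200000
}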

Note: If you have multiple parsing rules in a single Grok parser, only one of them can match a given log. The first rule that matches, from top to bottom, is the one that does the parsing.
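For example, a Grok parser with the two rules below (the rule names and the fallback pattern are illustrative, not from Datadog's examples) parses "john connected on 11/08/2017" with rule_connect only, because it matches first; rule_fallback only handles logs that the first rule rejects:

rule_connect %{word:user} connected on %{date("MM/dd/yyyy"):connect_date}
rule_fallback %{word:user} %{data:message}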

Matcher and Filter

Here is the list of all the matchers and filters natively implemented by Datadog:

Matchers:

| Pattern | Usage |
|---|---|
| date("pattern"[, "timezoneId"[, "localeId"]]) | Matches a date with the specified pattern and parses it to produce a Unix timestamp. See the date Matcher examples. |
| regex("pattern") | Matches a regex. Check the regex Matcher examples. |
| data | Matches any string, including spaces and newlines. Equivalent to .*. |
| notSpace | Matches any string until the next space. |
| boolean("truePattern", "falsePattern") | Matches and parses a boolean, optionally defining the true and false patterns (defaults to 'true' and 'false', ignoring case). |
| numberStr | Matches a decimal floating point number and parses it as a string. |
| number | Matches a decimal floating point number and parses it as a double precision number. |
| numberExtStr | Matches a floating point number (with scientific notation support) and parses it as a string. |
| numberExt | Matches a floating point number (with scientific notation support) and parses it as a double precision number. |
| integerStr | Matches a decimal integer number and parses it as a string. |
| integer | Matches a decimal integer number and parses it as an integer number. |
| integerExtStr | Matches an integer number (with scientific notation support) and parses it as a string. |
| integerExt | Matches an integer number (with scientific notation support) and parses it as an integer number. |
| word | Matches characters from a-z, A-Z, 0-9, including the _ (underscore) character. |
| doubleQuotedString | Matches a double-quoted string. |
| singleQuotedString | Matches a single-quoted string. |
| quotedString | Matches a double-quoted or single-quoted string. |
| uuid | Matches a UUID. |
| mac | Matches a MAC address. |
| ipv4 | Matches an IPv4 address. |
| ipv6 | Matches an IPv6 address. |
| ip | Matches an IP address (v4 or v6). |
| hostname | Matches a hostname. |
| ipOrHost | Matches a hostname or IP address. |
| port | Matches a port number. |
Filters:

| Pattern | Usage |
|---|---|
| number | Parses a match as a double precision number. |
| integer | Parses a match as an integer number. |
| boolean | Parses 'true' and 'false' strings as booleans, ignoring case. |
| date("pattern"[, "timezoneId"[, "localeId"]]) | Parses a date with the specified pattern to produce a Unix timestamp. See the date Filter examples. |
| nullIf("value") | Returns null if the match is equal to the provided value. |
| json | Parses properly formatted JSON. |
| rubyhash | Parses a properly formatted Ruby hash (for example, {name => "John", "job" => {"company" => "Big Company", "title" => "CTO"}}). |
| useragent([decodeuricomponent:true/false]) | Parses a user-agent and returns a JSON object that contains the device, OS, and browser represented by the agent. Check the User Agent processor. |
| querystring | Extracts all the key-value pairs in a matching URL query string (for example, ?productId=superproduct&promotionCode=superpromo). |
| decodeuricomponent | Decodes URI components. |
| lowercase | Returns the lower-cased string. |
| uppercase | Returns the upper-cased string. |
| keyvalue([separatorStr[, characterWhiteList[, quotingStr]]]) | Extracts key-value patterns and returns a JSON object. See the key-value Filter examples. |
| scale(factor) | Multiplies the expected numerical value by the provided factor. |
| array([[openCloseStr, ] separator][, subRuleOrFilter]) | Parses a string sequence of tokens and returns it as an array. |
| url | Parses a URL and returns all the tokenized members (domain, query params, port, etc.) in a JSON object. More info on how to parse URLs. |
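As a sketch of how a filter composes with a matcher (the rule name and log line below are hypothetical, not taken from Datadog's examples), the scale filter can convert a value at parse time:

Log:

response_time=1.5 status=ok

Rule:

my_rule response_time=%{number:duration:scale(1000)} status=%{word:status}

Here duration would be stored as 1500, so a value recorded in seconds is kept as milliseconds.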

Advanced Settings

At the bottom of your Grok processor tiles, there is an Advanced Settings section:

  • Use the Extract from field to apply your grok processor on a given attribute instead of the default message attribute.

  • Use the Helper Rules field to define tokens for your parsing rules. Helper rules help you factor out Grok patterns across your parsing rules, which is useful when several rules in the same Grok parser use the same tokens.

Example for this classic unstructured log:

john id:12345 connected on 11/08/2017 on server XYZ in production

You could use the following parsing rule:

MyParsingRule %{user} %{connection} %{server}

with the following helpers:

user %{word:user.name} id:%{integer:user.id}
connection connected on %{date("MM/dd/yyyy"):connect_date}
server on server %{notSpace:server.name} in %{notSpace:server.env}
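Combined, MyParsingRule expands to the three helpers in sequence. A sketch of the attributes it would extract from the log above (the millisecond timestamp assumes UTC):

{
  "user": { "name": "john", "id": 12345 },
  "connect_date": 1510099200000,
  "server": { "name": "XYZ", "env": "production" }
}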

Examples

The following examples demonstrate how to use parsers:

Key value

This is the key-value core filter: keyvalue([separatorStr[, characterWhiteList[, quotingStr]]]), where:

  • separatorStr: defines the separator between keys and values. Default: =
  • characterWhiteList: defines additional non-escaped value characters. Default: \\w.\\-_@
  • quotingStr: defines quotes. The default behavior detects quotes (<>, "", …). When defined, the default behavior is replaced by allowing only the defined quoting characters. For example, <> matches test=<toto sda> test2=test.

Use filters such as keyvalue() to map strings to attributes more easily:

Log:

user=john connect_date=11/08/2017 id=123 action=click

Rule:

rule %{data::keyvalue}

You don't need to specify the names of your parameters, as they are already contained in the log. If you add an extract attribute my_attribute in your rule pattern (rule %{data:my_attribute:keyvalue}), you would have:
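A sketch of the result with my_attribute (the value types are an assumption; without the extract name, the same pairs would sit at the top level of the event):

{
  "my_attribute": {
    "user": "john",
    "connect_date": "11/08/2017",
    "id": "123",
    "action": "click"
  }
}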

If your keys and values are not separated by the default =, add a parameter in your parsing rule with the wanted separator.

Log:

user: john connect_date: 11/08/2017 id: 123 action: click

Rule:

rule %{data::keyvalue(": ")}

If logs contain special characters in an attribute value, such as / in a URL, add it to the whitelist in the parsing rule:

Log:

url=https://app.datadoghq.com/event/stream user=john

Rule:

rule %{data::keyvalue("=","/:")}
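As a sketch, this rule would yield:

{
  "url": "https://app.datadoghq.com/event/stream",
  "user": "john"
}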

Other examples:

| Raw string | Parsing rule | Result |
|---|---|---|
| key=valueStr | %{data::keyvalue} | {"key": "valueStr"} |
| key=<valueStr> | %{data::keyvalue} | {"key": "valueStr"} |
| key:valueStr | %{data::keyvalue(":")} | {"key": "valueStr"} |
| key:"/valueStr" | %{data::keyvalue(":", "/")} | {"key": "/valueStr"} |
| key:={valueStr} | %{data::keyvalue(":=", "", "{}")} | {"key": "valueStr"} |
| key:=valueStr | %{data::keyvalue(":=", "")} | {"key": "valueStr"} |

Parsing dates

The date matcher transforms your timestamp into EPOCH format (a Unix timestamp in milliseconds).

| Raw string | Parsing rule | Result |
|---|---|---|
| 14:20:15 | %{date("HH:mm:ss"):date} | {"date": 51615000} |
| 11/10/2014 | %{date("dd/MM/yyyy"):date} | {"date": 1412978400000} |
| Thu Jun 16 08:29:03 2016 | %{date("EEE MMM dd HH:mm:ss yyyy"):date} | {"date": 1466065743000} |
| Tue Nov 1 08:29:03 2016 | %{date("EEE MMM d HH:mm:ss yyyy"):date} | {"date": 1477988943000} |
| 06/Mar/2013:01:36:30 +0900 | %{date("dd/MMM/yyyy:HH:mm:ss Z"):date} | {"date": 1362501390000} |
| 2016-11-29T16:21:36.431+0000 | %{date("yyyy-MM-dd'T'HH:mm:ss.SSSZ"):date} | {"date": 1480436496431} |
| 2016-11-29T16:21:36.431+00:00 | %{date("yyyy-MM-dd'T'HH:mm:ss.SSSZZ"):date} | {"date": 1480436496431} |
| 06/Feb/2009:12:14:14.655 | %{date("dd/MMM/yyyy:HH:mm:ss.SSS"):date} | {"date": 1233922454655} |
| Thu Jun 16 08:29:03 2016 (1) | %{date("EEE MMM dd HH:mm:ss yyyy","Europe/Paris"):date} | {"date": 1466058543000} |
| 2007-08-31 19:22:22.427 ADT | %{date("yyyy-MM-dd HH:mm:ss.SSS z"):date} | {"date": 1188598942427} |

(1) Use this format if you perform your own localizations and your timestamps are not in UTC. Timezone IDs are pulled from the TZ database. For more information, see the TZ database names.

Note: Parsing a date doesn't set its value as the log's official date. For that, use the Log Date Remapper in a subsequent processor.

Conditional pattern

You might have logs with two possible formats which differ in only one attribute. These cases can be handled with a single rule, using conditionals with |.

Log:

john connected on 11/08/2017
12345 connected on 11/08/2017

Rule: Note that "id" is an integer and not a string, thanks to the "integer" matcher in the rule.

MyParsingRule (%{integer:user.id}|%{word:user.firstname}) connected on %{date("MM/dd/yyyy"):connect_date}

Results:
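A sketch of the two resulting events, in the order of the logs above (the millisecond timestamps assume UTC):

{
  "user": { "firstname": "john" },
  "connect_date": 1510099200000
}

{
  "user": { "id": 12345 },
  "connect_date": 1510099200000
}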

Optional attribute

Some logs contain values that only appear part of the time. In these cases, you can make attribute extraction optional with ()?, extracting the attribute only when it is present in your log.

Log:

john 1234 connected on 11/08/2017

Rule:

MyParsingRule %{word:user.firstname} (%{integer:user.id} )?connected on %{date("MM/dd/yyyy"):connect_date}

Note: You usually need to include the space inside the optional group; otherwise, when the attribute is absent, the rule would expect two consecutive spaces and would no longer match.
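A sketch of the two possible outcomes for this rule, with and without the optional id (the millisecond timestamps assume UTC):

john 1234 connected on 11/08/2017:

{
  "user": { "firstname": "john", "id": 1234 },
  "connect_date": 1510099200000
}

john connected on 11/08/2017:

{
  "user": { "firstname": "john" },
  "connect_date": 1510099200000
}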

Nested JSON

Use the json filter to parse a JSON object nested after a raw text prefix:

Log:

Sep 06 09:13:38 vagrant program[123]: server.1 {"method":"GET", "status_code":200, "url":"https://app.datadoghq.com/logs/pipelines", "duration":123456}

Rule:

parsing_rule %{date("MMM dd HH:mm:ss"):timestamp} %{word:vm} %{word:app}\[%{number:logger.thread_id}\]: %{notSpace:server} %{data::json}
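A sketch of the resulting event, assuming the parsed JSON attributes are merged at the top level of the event (the timestamp value is omitted because the log line carries no year):

{
  "timestamp": "...",
  "vm": "vagrant",
  "app": "program",
  "logger": { "thread_id": 123 },
  "server": "server.1",
  "method": "GET",
  "status_code": 200,
  "url": "https://app.datadoghq.com/logs/pipelines",
  "duration": 123456
}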

Regex

Use the regex matcher to match any substring of your log message based on literal regex rules.

Log:

john_1a2b3c4 connected on 11/08/2017

Rule: Here, regex matchers extract the firstname and the id from the username token:

MyParsingRule %{regex("[a-z]*"):user.firstname}_%{regex("[a-zA-Z0-9]*"):user.id} .*
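A sketch of the resulting event (the regex matcher captures strings, so user.id stays a string):

{
  "user": { "firstname": "john", "id": "1a2b3c4" }
}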

Further Reading