This processor splits nested arrays into distinct events so that you can query, filter, alert, and visualize data within an array. The arrays need to already be parsed. For example, the processor can process [item_1, item_2], but cannot process "[item_1, item2]". The items in the array can be JSON objects, strings, integers, floats, or Booleans. All unmodified fields are added to the child events. For example, if you are sending the following items to the Observability Pipelines Worker:
{
  "host": "my-host",
  "env": "prod",
  "batched_items": [item_1, item_2]
}
Use the Split Array processor to send each item in batched_items as a separate event:
{
  "host": "my-host",
  "env": "prod",
  "batched_items": item_1
}
{
  "host": "my-host",
  "env": "prod",
  "batched_items": item_2
}
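The split above can be sketched as follows, treating an event as a plain dictionary. This is an illustrative sketch, not the actual Worker implementation, and split_array is a hypothetical helper name:

```python
import copy

def split_array(event, field):
    """Emit one child event per item in event[field].

    Hypothetical sketch of the Split Array processor's behavior:
    all unmodified fields are copied into each child event, and the
    array value is replaced by a single item from the original array.
    """
    children = []
    for item in event[field]:
        child = copy.deepcopy(event)  # unmodified fields carry over
        child[field] = item           # array replaced by a single item
        children.append(child)
    return children

event = {"host": "my-host", "env": "prod", "batched_items": ["item_1", "item_2"]}
children = split_array(event, "batched_items")
# children[0]["batched_items"] == "item_1"
# children[1]["batched_items"] == "item_2"
```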
See the split array example for more detail.
To set up this processor:
Click Manage arrays to split to add a new array to split, or to edit an existing one. This opens a side panel.
Use path notation <OUTER_FIELD>.<INNER_FIELD> to match subfields. See the Path notation example below. This is an example event:
{
  "ddtags": ["tag1", "tag2"],
  "host": "my-host",
  "env": "prod",
  "message": {
    "isMessage": true,
    "myfield": {
      "timestamp": 14500000,
      "firstarray": ["one", 2]
    }
  },
  "secondarray": [
    {
      "some": "json",
      "Object": "works"
    },
    44
  ]
}
If the processor is splitting the arrays "message.myfield.firstarray" and "secondarray", it outputs child events that are identical to the parent event, except that the values of "message.myfield.firstarray" and "secondarray" each become a single item from their respective original arrays. Each child event is a unique combination of items from the two arrays, so four child events (2 items * 2 items = 4 combinations) are created in this example:
{
  "ddtags": ["tag1", "tag2"],
  "host": "my-host",
  "env": "prod",
  "message": {
    "isMessage": true,
    "myfield": {"timestamp": 14500000, "firstarray": "one"}
  },
  "secondarray": {
    "some": "json",
    "Object": "works"
  }
}
{
  "ddtags": ["tag1", "tag2"],
  "host": "my-host",
  "env": "prod",
  "message": {
    "isMessage": true,
    "myfield": {"timestamp": 14500000, "firstarray": "one"}
  },
  "secondarray": 44
}
{
  "ddtags": ["tag1", "tag2"],
  "host": "my-host",
  "env": "prod",
  "message": {
    "isMessage": true,
    "myfield": {"timestamp": 14500000, "firstarray": 2}
  },
  "secondarray": {
    "some": "json",
    "Object": "works"
  }
}
{
  "ddtags": ["tag1", "tag2"],
  "host": "my-host",
  "env": "prod",
  "message": {
    "isMessage": true,
    "myfield": {"timestamp": 14500000, "firstarray": 2}
  },
  "secondarray": 44
}
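The multi-array case can be sketched as a Cartesian product over the configured paths. This is an illustrative sketch of the behavior described above, not the actual Worker implementation; get_path, set_path, and split_arrays are hypothetical helper names:

```python
import copy
import itertools

def get_path(event, path):
    """Resolve a dotted path like 'message.myfield.firstarray'."""
    node = event
    for key in path.split("."):
        node = node[key]
    return node

def set_path(event, path, value):
    """Set the value at a dotted path."""
    keys = path.split(".")
    node = event
    for key in keys[:-1]:
        node = node[key]
    node[keys[-1]] = value

def split_arrays(event, paths):
    """Emit one child event per combination of items across all configured arrays."""
    arrays = [get_path(event, p) for p in paths]
    children = []
    for combo in itertools.product(*arrays):
        child = copy.deepcopy(event)
        for path, item in zip(paths, combo):
            set_path(child, path, item)
        children.append(child)
    return children

event = {
    "ddtags": ["tag1", "tag2"],
    "host": "my-host",
    "env": "prod",
    "message": {"isMessage": True, "myfield": {"timestamp": 14500000, "firstarray": ["one", 2]}},
    "secondarray": [{"some": "json", "Object": "works"}, 44],
}
children = split_arrays(event, ["message.myfield.firstarray", "secondarray"])
# 2 items * 2 items = 4 child events
```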
For the following message structure, use outer_key.a.double_inner_key to refer to the key with the value double_inner_value.
{
  "outer_key": {
    "inner_key": "inner_value",
    "a": {
      "double_inner_key": "double_inner_value",
      "b": "b value"
    },
    "c": "c value"
  },
  "d": "d value"
}
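A dotted path can be resolved by walking the event one key at a time. This sketch follows the structure shown above; get_path is a hypothetical helper, not part of the Observability Pipelines API:

```python
def get_path(event, path):
    """Resolve a dotted path such as 'outer_key.a.double_inner_key'."""
    node = event
    for key in path.split("."):
        node = node[key]
    return node

message = {
    "outer_key": {
        "inner_key": "inner_value",
        "a": {"double_inner_key": "double_inner_value", "b": "b value"},
        "c": "c value",
    },
    "d": "d value",
}
print(get_path(message, "outer_key.a.double_inner_key"))  # double_inner_value
```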
Each processor has a corresponding filter query in its fields. Processors only process logs that match their filter query. For all processors except the filter processor, logs that do not match the query are sent to the next step of the pipeline. For the filter processor, logs that do not match the query are dropped.
For any attribute, tag, or key:value pair that is not a reserved attribute, your query must start with @. Conversely, to filter reserved attributes, you do not need to prepend @ to your filter query.
For example, to filter out and drop status:info logs, set your filter to NOT (status:info). To filter out and drop system-status:info, set your filter to NOT (@system-status:info).
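The @-prefix rule can be sketched as follows. The RESERVED set here is an assumed illustrative subset of Datadog's reserved attributes, and filter_term is a hypothetical helper, not part of any Datadog API:

```python
RESERVED = {"host", "source", "status", "service"}  # assumed subset, for illustration only

def filter_term(key, value, negate=False):
    """Build a filter-query term, prefixing non-reserved attributes with @."""
    prefix = "" if key in RESERVED else "@"
    term = f"{prefix}{key}:{value}"
    return f"NOT ({term})" if negate else term

print(filter_term("status", "info", negate=True))         # NOT (status:info)
print(filter_term("system-status", "info", negate=True))  # NOT (@system-status:info)
```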
Filter query examples:
NOT (status:debug): This filters for only logs that do not have the status DEBUG.
status:ok service:flask-web-app: This filters for all logs with the status OK from your flask-web-app service. status:ok AND service:flask-web-app returns the same results.
host:COMP-A9JNGYK OR host:COMP-J58KAS: This filter query only matches logs from the labeled hosts.
@user.status:inactive: This filters for logs with the status inactive nested under the user attribute.
Queries run in the Observability Pipelines Worker are case sensitive. Learn more about writing filter queries in Datadog's Log Search Syntax.