The reduce processor groups multiple log events into a single log, based on the fields specified and the merge strategies selected. Logs are grouped at 10-second intervals. After the interval has elapsed for the group, the reduced log for that group is sent to the next step in the pipeline.
To set up the reduce processor:
- Define a filter query. Only logs that match the specified filter query are processed. Reduced logs and logs that do not match the filter query are sent to the next step in the pipeline.
- In the Group By section, enter the field you want to group the logs by.
- Click Add Group by Field to add additional fields.
- In the Merge Strategy section:
  - In On Field, enter the name of the field you want to merge the logs on.
  - In the Apply dropdown menu, select the merge strategy used to combine the events. See the Merge strategies section below for descriptions of the available strategies.
  - Click Add Merge Strategy to add additional strategies.
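As a conceptual illustration of the behavior described above, the following Python sketch groups one 10-second window of logs by the configured fields and collapses each group using per-field merge functions. The `reduce_window` function and the log shapes are illustrative assumptions, not the processor's actual implementation.

```python
# Conceptual sketch of the reduce step (not Datadog's implementation):
# logs arriving within one 10-second window are grouped by the configured
# fields, and each group is collapsed into a single log using the merge
# strategy configured per field.
from collections import defaultdict

def reduce_window(logs, group_by, merge_strategies):
    groups = defaultdict(list)
    for log in logs:
        key = tuple(log.get(f) for f in group_by)
        groups[key].append(log)

    reduced = []
    for group in groups.values():
        merged = dict(group[0])  # start from the first event in the group
        for field, strategy in merge_strategies.items():
            values = [log[field] for log in group if field in log]
            merged[field] = strategy(values)  # e.g. sum, array, retain, ...
        reduced.append(merged)
    return reduced

# One window's worth of logs, grouped by "service", summing "count":
logs = [
    {"service": "web", "count": 1},
    {"service": "web", "count": 2},
    {"service": "db",  "count": 5},
]
print(reduce_window(logs, ["service"], {"count": sum}))
# [{'service': 'web', 'count': 3}, {'service': 'db', 'count': 5}]
```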
Merge strategies
These are the available merge strategies for combining log events.
| Name | Description |
| --- | --- |
| Array | Appends each value to an array. |
| Concat | Concatenates each string value, delimited with a space. |
| Concat newline | Concatenates each string value, delimited with a newline. |
| Concat raw | Concatenates each string value, without a delimiter. |
| Discard | Discards all values except the first value that was received. |
| Flat unique | Creates a flattened array of all unique values that were received. |
| Longest array | Keeps the longest array that was received. |
| Max | Keeps the maximum numeric value that was received. |
| Min | Keeps the minimum numeric value that was received. |
| Retain | Discards all values except the last value that was received. Works as a way to coalesce values by not retaining `null`. |
| Shortest array | Keeps the shortest array that was received. |
| Sum | Sums all numeric values that were received. |
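To make the semantics concrete, the sketch below, again illustrative rather than Datadog's actual implementation, applies a few of these strategies to the values collected for a field within a single group, in arrival order.

```python
# Illustrative sketch of merge-strategy semantics (not Datadog's implementation).
# Each strategy receives the list of values seen for a field, in arrival order.

def merge(strategy, values):
    if strategy == "array":
        return list(values)                      # append each value to an array
    if strategy == "concat":
        return " ".join(str(v) for v in values)  # space-delimited concatenation
    if strategy == "discard":
        return values[0]                         # keep only the first value
    if strategy == "retain":
        non_null = [v for v in values if v is not None]
        return non_null[-1] if non_null else None  # keep the last non-null value
    if strategy == "sum":
        return sum(values)                       # sum of numeric values
    if strategy == "flat_unique":
        flat = []
        for v in values:
            flat.extend(v if isinstance(v, list) else [v])
        return list(dict.fromkeys(flat))         # unique values, order preserved
    raise ValueError(f"unknown strategy: {strategy}")

# Values received for one field within a single 10-second group:
received = [1, None, 3, 3]
print(merge("array", received))    # [1, None, 3, 3]
print(merge("retain", received))   # 3  (last non-null value)
print(merge("sum", [1, 3, 3]))     # 7
```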
Filter query syntax
Each processor has a corresponding filter query in its fields. Processors only process logs that match their filter query. In all processors except the filter processor, logs that do not match the query are sent to the next step of the pipeline; for the filter processor, logs that do not match the query are dropped.
For any attribute, tag, or `key:value` pair that is not a reserved attribute, your query must start with `@`. Conversely, to filter on reserved attributes, you do not need to prepend `@` to your filter query.
For example, to filter out and drop `status:info` logs, your filter can be set as `NOT (status:info)`. To filter out and drop `system-status:info`, your filter must be `NOT (@system-status:info)`.
Filter query examples:
- `NOT (status:debug)`: This filters for only logs that do not have the status `DEBUG`.
- `status:ok service:flask-web-app`: This filters for all logs with the status `OK` from your `flask-web-app` service.
  - This query can also be written as `status:ok AND service:flask-web-app`.
- `host:COMP-A9JNGYK OR host:COMP-J58KAS`: This filter query only matches logs from the labeled hosts.
- `@user.status:inactive`: This filters for logs with the status `inactive` nested under the `user` attribute.
Queries run in the Observability Pipelines Worker are case sensitive. Learn more about writing filter queries in Datadog's Log Search Syntax.
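As a rough illustration of the matching rules above, here is a minimal Python sketch of a case-sensitive matcher for single `key:value` terms. It is a simplification made up for this example only; the real Log Search Syntax supports far more (boolean operators, wildcards, ranges, and so on), and the reserved-attribute set shown is an assumed subset.

```python
# Simplified, illustrative matcher for single key:value filter terms.
# Real filter queries use Datadog's Log Search Syntax and support much more.

RESERVED = {"host", "service", "status", "source"}  # illustrative subset

def matches(log, term):
    key, _, value = term.partition(":")
    if key.startswith("@"):
        # Non-reserved attributes are queried with a leading @,
        # and may be nested (e.g. @user.status).
        target = log.get("attributes", {})
        for part in key[1:].split("."):
            if not isinstance(target, dict):
                return False
            target = target.get(part)
        return target == value
    # Reserved attributes are queried without the @ prefix.
    return key in RESERVED and log.get(key) == value

log = {"status": "ok", "service": "flask-web-app",
       "attributes": {"user": {"status": "inactive"}}}
print(matches(log, "status:ok"))              # True  (reserved attribute)
print(matches(log, "@user.status:inactive"))  # True  (nested attribute)
print(matches(log, "status:OK"))              # False (matching is case sensitive)
```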