---
title: Upgrade the Worker Guide
description: >-
  Learn about new features, enhancements, and fixes for Worker versions 2.7 to
  2.15.
breadcrumbs: >-
  Docs > Observability Pipelines > Observability Pipelines Guides > Upgrade the
  Worker Guide
---

# Upgrade the Worker Guide

{% callout %}
# Important note for users on the following Datadog site: app.ddog-gov.com

{% alert level="danger" %}
This product is not supported for your selected [Datadog site](https://docs.datadoghq.com/getting_started/site.md).
{% /alert %}

{% /callout %}

## Overview{% #overview %}

{% alert level="info" %}
Datadog recommends updating the Observability Pipelines Worker (OPW) with every minor and patch release, or monthly at a minimum. Upgrading to the latest major OPW version and keeping it updated is the only supported way to get new OPW features, fixes, and security updates.
{% /alert %}

This guide goes over how to upgrade to a specific Worker version and the updates for that version.

## Worker version 2.15.0{% #worker-version-2150 %}

To upgrade to Worker version 2.15.0:

- Docker: Run the `docker pull` command for the [2.15.0 image](https://hub.docker.com/r/datadog/observability-pipelines-worker/tags?name=2.15.0).
- Kubernetes: See the [Helm chart](https://github.com/DataDog/helm-charts/tree/main/charts/observability-pipelines-worker#observability-pipelines-worker).
- APT: Run the command `apt-get install observability-pipelines-worker=2.15.0`.
- RPM: Run the command `sudo yum install observability-pipelines-worker-2.15.0`.

Worker version 2.15.0 gives you access to the following:

#### New features{% #new-features %}

- New OCSF mappings have been added for the following log types:
  - AWS GuardDuty for all finding types (for example, EKS Audit, EC2, Lambda, IAM, DNS) in a single mapping named `AWS GuardDuty`.
  - Infoblox NIOS: DNS Activity, DHCP Activity, audit (Authentication and API Activity), and port and syslog Base Event (Infoblox DNS Query, Infoblox DHCP, Infoblox Audit Authentication, Infoblox Audit API, Infoblox Port).
  - Zscaler ZPA App Connector Status logs to OCSF schema version 1.3.0 (Authentication, class 3002) with `datetime` and `host` profiles.
  - Zscaler ZPA User Activity logs to OCSF schema version 1.3.0 (Network Activity, class 4001) with `datetime`, `host`, `network_proxy`, and `security_control` profiles.
  - AWS WAF Web ACL logs. Transforms WAF log events into OCSF HTTP Activity (class 4002) with `cloud` and `security_control` profiles.
  - Zscaler ZPA User Status logs to OCSF schema version 1.3.0 (Authentication, class 3002) with `datetime` and `host` profiles.
- The OpenTelemetry source now supports metrics pipelines.
- The Elasticsearch destination is now available for metrics pipelines.
- The `parse_yaml` function is now available for the Custom Processor. This function parses YAML according to the [YAML 1.1 spec](https://yaml.org/spec/1.1/).

#### Enhancements{% #enhancements %}

- The Amazon S3 source now accepts compressed data.
- The Elasticsearch destination has been updated with new options: `auto_routing`, `compression`, `id_key`, `pipeline`, `request_retry_partial`, `sync_fields`, and `tls`.
- Mapping array-of-object source fields into OCSF array-of-object destinations is now supported.
- The Datadog Metrics destination now defaults to the Datadog series v2 endpoint (`/api/v2/series`).
- The Enrichment Table processor's GeoIP option now includes a `network` field containing the CIDR network associated with the lookup result. The `network` field is available for all database types (City, ISP/ASN, Connection-Type, Anonymous-IP).
- The Custom Processor now has an `encode_csv` function that encodes an array of values into a CSV-formatted string. This is the inverse of the `parse_csv` function and supports an optional single-byte delimiter (defaults to `,`).
- Field names now support `.`, such as `foo."bar.baz"`.
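
The `encode_csv`/`parse_csv` pair described above is a standard encode/decode round trip. As an illustration only, the same behavior can be sketched with Python's standard `csv` module (this is not the Worker's implementation):

```python
import csv
import io

def encode_csv(values, delimiter=","):
    # Encode a list of values as one CSV-formatted line; fields that
    # contain the delimiter are quoted automatically.
    buf = io.StringIO()
    csv.writer(buf, delimiter=delimiter, lineterminator="").writerow(values)
    return buf.getvalue()

def parse_csv(line, delimiter=","):
    # Inverse operation: split a CSV line back into a list of values.
    return next(csv.reader(io.StringIO(line), delimiter=delimiter))

row = ["alice", "login,success", "10.0.0.1"]
encoded = encode_csv(row)  # 'alice,"login,success",10.0.0.1'
assert parse_csv(encoded) == row
```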

#### Fixes{% #fixes %}

- Improved the accuracy of the buffer utilization metric tracking.
- For the Custom Processor, incorrect parameter types for `floor`, `md5`, `parse_key_value`, `precision`, and `seahash` have been fixed.

## Worker version 2.14.1{% #worker-version-2141 %}

To upgrade to Worker version 2.14.1:

- Docker: Run the `docker pull` command for the [2.14.1 image](https://hub.docker.com/r/datadog/observability-pipelines-worker/tags?name=2.14.1).
- Kubernetes: See the [Helm chart](https://github.com/DataDog/helm-charts/tree/main/charts/observability-pipelines-worker#observability-pipelines-worker).
- APT: Run the command `apt-get install observability-pipelines-worker=2.14.1`.
- RPM: Run the command `sudo yum install observability-pipelines-worker-2.14.1`.

Worker version 2.14.1 gives you access to the following:

#### Fixes{% #fixes-1 %}

- Fixed how an empty path in a processor field is handled, such as when the Parse JSON processor's `Field to parse JSON on` field is set to `.`.

## Worker version 2.14.0{% #worker-version-2140 %}

To upgrade to Worker version 2.14.0:

- Docker: Run the `docker pull` command for the [2.14.0 image](https://hub.docker.com/r/datadog/observability-pipelines-worker/tags?name=2.14.0).
- Kubernetes: See the [Helm chart](https://github.com/DataDog/helm-charts/tree/main/charts/observability-pipelines-worker#observability-pipelines-worker).
- APT: Run the command `apt-get install observability-pipelines-worker=2.14.0`.
- RPM: Run the command `sudo yum install observability-pipelines-worker-2.14.0`.

Worker version 2.14.0 gives you access to the following:

#### New features{% #new-features-1 %}

- OCSF mappings for Palo Alto Networks Threat events have been added.
- The Database source has been updated with timeout-related changes.
- The `component_latency_seconds` histogram and `component_latency_mean_seconds` gauge internal metrics have been added. The metrics are based on the time an event spends in a single processor, including in the processor buffer.
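
Conceptually, these latency metrics time each event from the moment it enters a processor's buffer until processing completes. A rough sketch of that measurement, with hypothetical names (illustrative only, not the Worker's internals):

```python
import time
from statistics import mean

class LatencyTracker:
    # Conceptual sketch: track per-event processor latency, including
    # the time an event waits in the processor's buffer.
    def __init__(self):
        self.samples = []

    def enqueue(self, event):
        # Timestamp the event as it enters the buffer.
        return (event, time.monotonic())

    def dequeue(self, entry):
        # Record elapsed time when processing finishes.
        event, enqueued_at = entry
        self.samples.append(time.monotonic() - enqueued_at)
        return event

    def mean_seconds(self):
        # The kind of value a gauge like component_latency_mean_seconds reports.
        return mean(self.samples) if self.samples else 0.0
```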

#### Enhancements{% #enhancements-1 %}

- Enrichment Table error reporting now uses Reference Tables metrics to reduce the count of similar logs.
- The Splunk HEC destination now supports extracting index fields from events.
- The OCSF mapper now has an option to retain unmatched fields.
- For the Enrichment Table processor, the local cache retention time of entries not found in a Reference Table has been increased. The retention time is now 30 minutes, up from 10 minutes.
- The Database Source SQL validation checks have been improved.
- The Sensitive Data Scanner library now has new and updated out-of-the-box scanning rules for PII, credentials, and financial data. Minor bugs have also been fixed.
- The `observability-pipelines-worker top` command has new keybinds for scrolling, sorting, and filtering.
- The Datadog Logs destination has been updated to default to `zstd` compression instead of no compression.
- The environment variable for the Datadog Agent source address is now configurable.

#### Fixes{% #fixes-2 %}

- Fixed a bug where a stale error state persisted even after Remote Configuration was successfully polled.
- Fixed buffer utilization metrics so they record the actual utilization level.
- Fixed a Worker shutdown race condition between closing the memory buffer and in-progress send operations that could potentially cause event loss.
- The Generate Metrics processor now handles aggregated histogram and aggregated summary metrics correctly.
- Live Capture now supports child events in the split array processor.
- Reference Tables buffer size and request frequency have been reduced to avoid out-of-memory (OOM) and rate limit errors.
- The Reference Tables processor now rejects empty or blank lookup keys and supports integer keys.

## Worker version 2.13.2{% #worker-version-2132 %}

To upgrade to Worker version 2.13.2:

- Docker: Run the `docker pull` command for the [2.13.2 image](https://hub.docker.com/r/datadog/observability-pipelines-worker/tags?name=2.13.2).
- Kubernetes: See the [Helm chart](https://github.com/DataDog/helm-charts/tree/main/charts/observability-pipelines-worker#observability-pipelines-worker).
- APT: Run the command `apt-get install observability-pipelines-worker=2.13.2`.
- RPM: Run the command `sudo yum install observability-pipelines-worker-2.13.2`.

Worker version 2.13.2 gives you access to the following:

#### Fixes{% #fixes-3 %}

- Fixed `exists` and `missing` queries so they correctly match objects.

## Worker version 2.13.1{% #worker-version-2131 %}

To upgrade to Worker version 2.13.1:

- Docker: Run the `docker pull` command for the [2.13.1 image](https://hub.docker.com/r/datadog/observability-pipelines-worker/tags?name=2.13.1).
- Kubernetes: See the [Helm chart](https://github.com/DataDog/helm-charts/tree/main/charts/observability-pipelines-worker#observability-pipelines-worker).
- APT: Run the command `apt-get install observability-pipelines-worker=2.13.1`.
- RPM: Run the command `sudo yum install observability-pipelines-worker-2.13.1`.

Worker version 2.13.1 gives you access to the following:

#### Fixes{% #fixes-4 %}

- All processors have been updated to gracefully handle incorrect filter query syntax.

## Worker version 2.13.0{% #worker-version-2130 %}

To upgrade to Worker version 2.13.0:

- Docker: Run the `docker pull` command for the [2.13.0 image](https://hub.docker.com/r/datadog/observability-pipelines-worker/tags?name=2.13.0).
- Kubernetes: See the [Helm chart](https://github.com/DataDog/helm-charts/tree/main/charts/observability-pipelines-worker#observability-pipelines-worker).
- APT: Run the command `apt-get install observability-pipelines-worker=2.13.0`.
- RPM: Run the command `sudo yum install observability-pipelines-worker-2.13.0`.

Worker version 2.13.0 gives you access to the following:

#### New features{% #new-features-2 %}

- [Custom Processor](https://docs.datadoghq.com/observability_pipelines/processors/custom_processor.md) for metrics: Use VRL to transform metric events.
- [Secrets Management](https://docs.datadoghq.com/observability_pipelines/configuration/secrets_management.md): Observability Pipelines can retrieve secrets using Datadog Secrets Management.
- [Live capture](https://docs.datadoghq.com/observability_pipelines/configuration/live_capture.md) is available for metrics pipelines.
- The [Enrichment Tables](https://docs.datadoghq.com/observability_pipelines/processors/enrichment_table.md) processor can use datasets in Reference Tables.

#### Enhancements{% #enhancements-2 %}

- [Disk buffers](https://docs.datadoghq.com/observability_pipelines/scaling_and_performance/buffering_and_backpressure.md#destination-buffers) have been updated to drop logs when the buffer is full.
- The Dedupe processor has been updated with a configurable cache size.
- The Datadog Agent source has been updated with configurable request timeouts.
- Source buffers have been updated to record the utilization level of the buffer with these metrics:
  - `source_buffer_max_byte_size`
  - `source_buffer_max_event_size`
  - `source_buffer_utilization`
  - `source_buffer_utilization_level`
- Processor buffers have been updated to record the utilization level of the buffers with these metrics:
  - `transform_buffer_max_byte_size`
  - `transform_buffer_max_event_size`
  - `transform_buffer_utilization`
  - `transform_buffer_utilization_level`
- The TLS implementation has been updated to store credentials in FIPS-compliant PEM format.

#### Fixes{% #fixes-5 %}

- Live Capture has been updated and bugs have been fixed.
- The Search Syntax bug with handling hyphenated segments has been fixed.
- The syslog source in UDP mode now emits the standard `component_received` metrics, as it does in TCP mode:
  - `component_received_events_total`
  - `component_received_event_bytes_total`
  - `component_received_bytes_total`

## Worker version 2.12.0{% #worker-version-2120 %}

To upgrade to Worker version 2.12.0:

- Docker: Run the `docker pull` command for the [2.12.0 image](https://hub.docker.com/r/datadog/observability-pipelines-worker/tags?name=2.12.0).
- Kubernetes: See the [Helm chart](https://github.com/DataDog/helm-charts/tree/main/charts/observability-pipelines-worker#observability-pipelines-worker).
- APT: Run the command `apt-get install observability-pipelines-worker=2.12.0`.
- RPM: Run the command `sudo yum install observability-pipelines-worker-2.12.0`.

Worker version 2.12.0 gives you access to the following:

#### New features{% #new-features-3 %}

- [HTTP destination](https://docs.datadoghq.com/observability_pipelines/destinations/http_client.md) for metrics pipelines: Routes metrics to an HTTP client endpoint.
- [MySQL Source](https://docs.datadoghq.com/observability_pipelines/sources/mysql.md): Sends logs from a MySQL database to Observability Pipelines.

#### Enhancements{% #enhancements-3 %}

- The HTTP Client source and destination have been updated so you can set a custom authorization strategy.
- The metrics filter processor was updated to filter metrics on `kind` and `value`.
- Processor groups that route and process only targeted events have been updated to reduce processing overhead.
- The Datadog Agent source has been updated to support timeouts, incrementing the `component_timed_out_events_total` and `component_timed_out_requests_total` metrics.

#### Fixes{% #fixes-6 %}

- The Amazon S3 destination has been updated to ensure the `message` field is always a string, JSON-encoding it if necessary.
- A Worker bug has been fixed to ensure Worker logs are reported correctly.
- The `hostname` field is renamed to `host` when sending logs to Datadog Archives.
- For metrics sources, Workers have been updated to use their own copy of the Datadog key for authentication, disregarding any keys sent in by the Datadog Agent to prevent the use of stale keys.
- The Worker uses proxy settings configured with environment variables (for example, the `DD_PROXY_HTTPS` environment variable) or in the bootstrap file when it publishes events to Live Capture.

## Worker version 2.11.0{% #worker-version-2110 %}

To upgrade to Worker version 2.11.0:

- Docker: Run the `docker pull` command for the [2.11.0 image](https://hub.docker.com/r/datadog/observability-pipelines-worker/tags?name=2.11).
- Kubernetes: See the [Helm chart](https://github.com/DataDog/helm-charts/tree/main/charts/observability-pipelines-worker#observability-pipelines-worker).
- APT: Run the command `apt-get install observability-pipelines-worker=2.11.0`.
- RPM: Run the command `sudo yum install observability-pipelines-worker-2.11.0`.

{% alert level="info" %}
For pipelines that are running Worker 2.10 or older, note the following after you upgrade to Worker 2.11:

- Your processor filter queries continue to use the legacy search syntax.
- You must manually update your filter queries to the new Search Syntax.
- Then, enable the New Search Syntax toggle in the UI, or set `use_legacy_search_syntax` to `false` using the API or Terraform.

See [Upgrade Your Filter Queries to the New Search Syntax](https://docs.datadoghq.com/observability_pipelines/guide/upgrade_your_filter_queries_to_the_new_search_syntax.md) for more information.
{% /alert %}

Worker version 2.11.0 gives you access to the following:

#### New features{% #new-features-4 %}

- More than 100 out-of-the-box rules for the Sensitive Data Scanner processor have been added. These rules redact Personally Identifiable Information (PII) and access key information.
- The updated [Search Syntax](https://docs.datadoghq.com/observability_pipelines/search_syntax/logs.md) lets you:
  - Dereference arrays
  - Perform case-insensitive searches within log messages
  - Deterministically target log attributes without using the `@` symbol

## Worker version 2.10.0{% #worker-version-2100 %}

To upgrade to Worker version 2.10.0:

- Docker: Run the `docker pull` command for the [2.10.0 image](https://hub.docker.com/r/datadog/observability-pipelines-worker/tags?name=2.10).
- Kubernetes: See the [Helm chart](https://github.com/DataDog/helm-charts/tree/main/charts/observability-pipelines-worker#observability-pipelines-worker).
- APT: Run the command `apt-get install observability-pipelines-worker=2.10.0`.
- RPM: Run the command `sudo yum install observability-pipelines-worker-2.10.0`.

Worker version 2.10.0 gives you access to the following:

#### New features{% #new-features-5 %}

- [Kafka destination](https://docs.datadoghq.com/observability_pipelines/destinations/kafka.md): Send logs from Observability Pipelines to your Kafka topics.
- New and updated [Custom Processor functions](https://docs.datadoghq.com/observability_pipelines/processors/custom_processor.md#custom-functions):
  - The `pop` function removes the last item from an array.
  - The `encrypt_ip` and `decrypt_ip` cryptographic functions encrypt and decrypt IP addresses.
    - These functions use the IPCrypt specification and support both IPv4 and IPv6 addresses with two encryption modes:
      - `aes128` (IPCrypt deterministic, 16-byte key)
      - `pfx` (IPCryptPfx, 32-byte key)
    - Both modes are format-preserving (the output is a valid IP address) and deterministic.
  - The `xxhash` function implements `xxh32`, `xxh64`, `xxh3_64`, and `xxh3_128` hashing algorithms.
  - The `parse_aws_alb_log` function has been updated with an optional `strict_mode` parameter.
    - When `strict_mode` is set to `false`, the parser ignores any newly added or trailing fields in AWS ALB logs, instead of failing.
    - Defaults to `true` to preserve current behavior.
- [Metrics pipelines](https://docs.datadoghq.com/observability_pipelines/configuration/set_up_pipelines.md?tab=metrics#set-up-a-pipeline-in-the-ui):
  - [Datadog Agent source](https://docs.datadoghq.com/observability_pipelines/sources/datadog_agent.md?tab=metrics): Send metrics from the Datadog Agent to Observability Pipelines for processing.
  - [Filter processor](https://docs.datadoghq.com/observability_pipelines/processors/filter.md?tab=metrics): Filter the metrics you want to process.
  - [Tag processor](https://docs.datadoghq.com/observability_pipelines/processors/tag_control/metrics.md): Include or exclude specific tags in your metrics.
  - [Datadog Metrics destination](https://docs.datadoghq.com/observability_pipelines/destinations/datadog_metrics.md?tab=secretsmanagement): Send your processed metrics to Datadog.
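
The `strict_mode` behavior of `parse_aws_alb_log` described above can be sketched as follows. The field list and function name here are hypothetical stand-ins, not the Worker's implementation; the point is that a lenient parser tolerates newly added trailing fields instead of failing:

```python
# Abbreviated, illustrative field schema -- the real ALB log format has many more fields.
EXPECTED_FIELDS = ["type", "time", "elb", "client_port"]

def parse_alb_fields(fields, strict_mode=True):
    # Strict mode rejects unexpected trailing fields; lenient mode drops them.
    if len(fields) > len(EXPECTED_FIELDS):
        if strict_mode:
            raise ValueError(f"unexpected trailing fields: {fields[len(EXPECTED_FIELDS):]}")
        fields = fields[:len(EXPECTED_FIELDS)]
    return dict(zip(EXPECTED_FIELDS, fields))

fields = ["https", "2024-01-01T00:00:00Z", "my-elb", "10.0.0.1:443", "new-field"]
lenient = parse_alb_fields(fields, strict_mode=False)  # succeeds, ignores "new-field"
```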

#### Enhancements{% #enhancements-4 %}

- The Custom Processor's performance has been improved.
- Workers have been updated to use their own copy of the Datadog key for authentication, disregarding any keys sent in by the Datadog Agent to prevent the use of stale keys.
- Error reporting has been improved when validating JSON schema in custom functions that use the `validate_json_schema` function.

#### Fixes{% #fixes-7 %}

- Group-level filtering logic has been fixed to exclude correct logs.

## Worker version 2.9.1{% #worker-version-291 %}

To upgrade to Worker version 2.9.1:

- Docker: Run the `docker pull` command to pull the [2.9.1 image](https://hub.docker.com/r/datadog/observability-pipelines-worker/tags?name=2.9.1).
- Kubernetes: See the [Helm chart](https://github.com/DataDog/helm-charts/tree/main/charts/observability-pipelines-worker#observability-pipelines-worker).
- APT: Run the command `apt-get install observability-pipelines-worker=2.9.1`.
- RPM: Run the command `sudo yum install observability-pipelines-worker-2.9.1`.

Worker version 2.9.1 gives you access to the following:

#### Fixes{% #fixes-8 %}

- The Microsoft Sentinel destination has been limited to batch sizes of 1 MB when sending logs using the Azure Logs Ingestion API. This limit is based on the [Azure documentation](https://learn.microsoft.com/en-us/azure/azure-monitor/fundamentals/service-limits#logs-ingestion-api).
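
Conceptually, a size-bounded batcher accumulates events until adding the next one would push the batch past the cap, then flushes. A minimal sketch of that logic (illustrative only, not the destination's implementation):

```python
MAX_BATCH_BYTES = 1_000_000  # ~1 MB cap, per the Azure Logs Ingestion API limits

def batch_by_size(events, max_bytes=MAX_BATCH_BYTES):
    # Group serialized events into batches whose total size stays under max_bytes.
    batches, current, size = [], [], 0
    for event in events:
        n = len(event.encode("utf-8"))
        if current and size + n > max_bytes:
            batches.append(current)  # flush before exceeding the cap
            current, size = [], 0
        current.append(event)
        size += n
    if current:
        batches.append(current)
    return batches

# Three ~600 KB events cannot share a 1 MB batch, so each lands in its own.
events = ["x" * 600_000] * 3
assert [len(b) for b in batch_by_size(events)] == [1, 1, 1]
```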

## Worker version 2.9.0{% #worker-version-290 %}

To upgrade to Worker version 2.9.0:

- Docker: Run the `docker pull` command to pull the [2.9.0 image](https://hub.docker.com/r/datadog/observability-pipelines-worker/tags?name=2.9.0).
- Kubernetes: See the [Helm chart](https://github.com/DataDog/helm-charts/tree/main/charts/observability-pipelines-worker#observability-pipelines-worker).
- APT: Run the command `apt-get install observability-pipelines-worker=2.9.0`.
- RPM: Run the command `sudo yum install observability-pipelines-worker-2.9.0`.

Worker version 2.9.0 gives you access to the following:

#### New features{% #new-features-6 %}

- [OpenTelemetry Collector source](https://docs.datadoghq.com/observability_pipelines/sources/opentelemetry.md): Ingest logs from your OpenTelemetry Collector into Observability Pipelines.
- [Datadog CloudPrem destination](https://docs.datadoghq.com/observability_pipelines/destinations/cloudprem.md): Route logs to the Datadog CloudPrem destination.
- [Google Pub/Sub destination](https://docs.datadoghq.com/observability_pipelines/destinations/google_pubsub.md): Send logs from Observability Pipelines to the Google Pub/Sub messaging system.
- The `haversine` custom function calculates the haversine distance and bearing between two geographic points.
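
The haversine distance follows the standard great-circle formula. A reference sketch in Python for the distance part (the VRL function's exact signature, units, and bearing calculation are not shown here):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points in kilometers,
    # using the haversine formula on a spherical Earth (R ≈ 6371 km).
    R = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

# Paris to London is roughly 344 km along the great circle.
d = haversine_km(48.8566, 2.3522, 51.5074, -0.1278)
```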

#### Enhancements{% #enhancements-5 %}

- The Observability Pipelines Worker's internal logs have been updated to partially redact the Datadog API key (first 28 characters only), to help investigate API-key related issues.
- The performance of Remote Configuration delivery time has been improved.
- The `parse_cef` and `parse_syslog` custom functions have enhanced parsing.

## Worker version 2.8.1{% #worker-version-281 %}

To upgrade to Worker version 2.8.1:

- Docker: Run the `docker pull` command to pull the [2.8.1 image](https://hub.docker.com/r/datadog/observability-pipelines-worker/tags?name=2.8.1).
- Kubernetes: See the [Helm chart](https://github.com/DataDog/helm-charts/tree/main/charts/observability-pipelines-worker#observability-pipelines-worker).
- APT: Run the command `apt-get install observability-pipelines-worker=2.8.1`.
- RPM: Run the command `sudo yum install observability-pipelines-worker-2.8.1`.

Worker version 2.8.1 gives you access to the following:

#### Fixes{% #fixes-9 %}

- The HTTP Client source's authorization strategy has been fixed.

## Worker version 2.8.0{% #worker-version-280 %}

To upgrade to Worker version 2.8.0:

- Docker: Run the `docker pull` command to pull the [2.8.0 image](https://hub.docker.com/r/datadog/observability-pipelines-worker/tags?name=2.8.0).
- Kubernetes: See the [Helm chart](https://github.com/DataDog/helm-charts/tree/main/charts/observability-pipelines-worker#observability-pipelines-worker).
- APT: Run the command `apt-get install observability-pipelines-worker=2.8.0`.
- RPM: Run the command `sudo yum install observability-pipelines-worker-2.8.0`.

Worker version 2.8.0 gives you access to the following:

#### New features{% #new-features-7 %}

- All sources and destinations have been updated to support custom environment variables.

#### Enhancements{% #enhancements-6 %}

- The Elasticsearch destination's [indexing strategy](https://docs.datadoghq.com/observability_pipelines/destinations/elasticsearch.md#set-up-the-destination) has been updated to include data streams.
- The HTTP Client destination supports template syntax.

#### Fixes{% #fixes-10 %}

- The HTTP Server source's TLS enablement has been fixed.
- Worker health metrics have been fixed.
- OpenSearch's basic authentication has been fixed.

## Worker version 2.7.0{% #worker-version-270 %}

To upgrade to Worker version 2.7.0:

- Docker: Run the `docker pull` command to pull the [2.7.0 image](https://hub.docker.com/r/datadog/observability-pipelines-worker/tags?name=2.7.0).
- Kubernetes: See the [Helm chart](https://github.com/DataDog/helm-charts/tree/main/charts/observability-pipelines-worker#observability-pipelines-worker).
- APT: Run the command `apt-get install observability-pipelines-worker=2.7.0`.
- RPM: Run the command `sudo yum install observability-pipelines-worker-2.7.0`.

Worker version 2.7.0 gives you access to the following:

#### New features{% #new-features-8 %}

- [The HTTP Client destination](https://docs.datadoghq.com/observability_pipelines/destinations/http_client.md): Send logs to an HTTP Client, such as a logging platform or SIEM.
- [Processor Groups](https://docs.datadoghq.com/observability_pipelines/processors.md#processor-groups): Organize your processors into logical groups to help you manage them.
- [Disk and memory](https://docs.datadoghq.com/observability_pipelines/scaling_and_performance/buffering_and_backpressure.md#destination-buffers) buffering options are available for destinations.

#### Enhancements{% #enhancements-7 %}

- The `decode_lz4` custom function has been updated to support decompressing `lz4` frame data.
- The Azure Blob Storage and Google Cloud Storage archive destinations' prefix fields support template syntax.
- The Splunk HEC destination has a custom environment variable.
- The sample processor has an optional [`group_by` parameter](https://docs.datadoghq.com/observability_pipelines/processors/sample.md#group-by-example).
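
Grouped sampling like the `group_by` option described above keeps a separate sample counter per group key, so low-volume groups are not drowned out by high-volume ones. A conceptual sketch with a hypothetical helper (not the processor's implementation):

```python
from collections import defaultdict

def sample_by_group(events, key, rate):
    # Keep 1 out of every `rate` events independently per group key.
    counters = defaultdict(int)
    kept = []
    for event in events:
        group = event.get(key, "")
        if counters[group] % rate == 0:
            kept.append(event)
        counters[group] += 1
    return kept

events = [{"service": "web"}] * 10 + [{"service": "db"}] * 2
kept = sample_by_group(events, key="service", rate=10)
# One "web" event out of ten is kept, and the first "db" event is kept too.
```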

#### Fixes{% #fixes-11 %}

- The Datadog Logs destination's default compression has been updated to `zstd`, which matches the Datadog Agent's default compression.
- The Amazon S3, Google Cloud Storage, and Azure Blob Storage destinations have been fixed to resolve log timestamps correctly.
- The custom OCSF mapper's performance has been improved.
- The filter processor's flag logic has been fixed so that events are passed to the next processor correctly.
