---
title: Update Existing Pipelines
description: Update source settings, destination settings, processors, and environment variables for existing pipelines in Observability Pipelines.
breadcrumbs: Docs > Observability Pipelines > Configuration > Update Existing Pipelines
---

# Update Existing Pipelines

{% callout %}
# Important note for users on the following Datadog sites: app.ddog-gov.com

{% alert level="danger" %}
This product is not supported for your selected [Datadog site](https://docs.datadoghq.com/getting_started/site).
{% /alert %}

{% /callout %}

## Overview{% #overview %}

For existing pipelines in Observability Pipelines, you can update and deploy changes for source settings, destination settings, and processors in the Observability Pipelines UI. However, if you want to update source or destination environment variables, you must manually update the Worker with the new values.

This document goes through updating the pipeline in the UI. You can also use the [update a pipeline](https://docs.datadoghq.com/api/latest/observability-pipelines/#update-a-pipeline) API or [datadog_observability_pipeline](https://registry.terraform.io/providers/DataDog/datadog/latest/docs/resources/observability_pipeline) Terraform resource to update existing pipelines.

See [Export a Pipeline Configuration to JSON or Terraform](https://docs.datadoghq.com/observability_pipelines/configuration/export_pipeline_configuration/) if you want to programmatically deploy a pipeline updated in the UI.

## Update an existing pipeline{% #update-an-existing-pipeline %}

1. Navigate to [Observability Pipelines](https://app.datadoghq.com/observability-pipelines).
1. Select the pipeline you want to update.
1. Click **Edit Pipeline** in the top right corner.
1. Make changes to the pipeline.
   - If you are updating the source or destination settings shown in the tiles, or updating and adding processors, make the changes and then click **Deploy Changes**.
   - To update source or destination environment variables, click **Go to Worker Installation Steps** and see [Update source or destination environment variables](#update-source-or-destination-environment-variables) for instructions.
1. If you add, update, or delete a source, destination, or corresponding secrets, you must restart the Worker using a command such as `sudo systemctl restart observability-pipelines-worker` for the change to take effect.

### Update source or destination environment variables{% #update-source-or-destination-environment-variables %}

On the Worker installation page:

1. Select your platform in the **Choose your installation platform** dropdown menu.

1. If you want to update source environment variables, update the information for your data source.

   {% tab title="Amazon Data Firehose" %}

   - Amazon Data Firehose address:
     - The Observability Pipelines Worker listens to this socket address to receive logs from Amazon Data Firehose.
     - The default environment variable is `DD_OP_SOURCE_AWS_DATA_FIREHOSE_ADDRESS`.
   - Amazon Data Firehose TLS passphrase (when enabled):
     - The default environment variable is `DD_OP_SOURCE_AWS_DATA_FIREHOSE_KEY_PASS`.

   {% /tab %}

   {% tab title="Amazon S3" %}

   - Amazon S3 SQS URL:
     - The URL of the SQS queue to which the S3 bucket sends the notification events.
     - The default environment variable is `DD_OP_SOURCE_AWS_S3_SQS_URL`.
   - AWS_CONFIG_FILE path:
     - The path to the AWS configuration file local to this node.
     - The default environment variable is `AWS_CONFIG_FILE`.
   - AWS_PROFILE name:
     - The name of the profile to use within these files.
     - The default environment variable is `AWS_PROFILE`.
   - AWS S3 TLS passphrase (when enabled):
     - The default environment variable is `DD_OP_SOURCE_AWS_S3_KEY_PASS`.

   {% /tab %}

   {% tab title="Datadog Agent" %}

   - Datadog Agent address:
     - The Observability Pipelines Worker listens to this socket address to receive logs from the Datadog Agent.
     - The default environment variable is `DD_OP_SOURCE_DATADOG_AGENT_ADDRESS`.
   - Datadog Agent TLS passphrase (when enabled):
     - The default environment variable is `DD_OP_SOURCE_DATADOG_AGENT_KEY_PASS`.

   {% /tab %}

   {% tab title="Fluent" %}

   - Fluent socket address and port:
     - The Observability Pipelines Worker listens on this address for incoming log messages.
     - The default environment variable is `DD_OP_SOURCE_FLUENT_ADDRESS`.
   - Fluent Bit TLS passphrase (when enabled):
     - The default environment variable is `DD_OP_SOURCE_FLUENT_KEY_PASS`.

   {% /tab %}

   {% tab title="Google Pub/Sub" %}
There are no environment variables for the Google Pub/Sub source.
   {% /tab %}

   {% tab title="HTTP Client" %}

   - HTTP/s endpoint URL:
     - The Observability Pipelines Worker collects log events from this endpoint. For example, `https://127.0.0.8/logs`.
     - The default environment variable is `DD_OP_SOURCE_HTTP_CLIENT_ENDPOINT_URL`.
   - HTTP/S Client TLS passphrase (when enabled):
     - The default environment variable is `DD_OP_SOURCE_HTTP_CLIENT_KEY_PASS`.
   - If you are using basic authentication:
     - HTTP/S endpoint authentication username and password.
     - The default environment variable is `DD_OP_SOURCE_HTTP_CLIENT_USERNAME` and `DD_OP_SOURCE_HTTP_CLIENT_PASSWORD`.
   - If you are using bearer authentication:
     - HTTP/S endpoint bearer token.
     - The default environment variable is `DD_OP_SOURCE_HTTP_CLIENT_BEARER_TOKEN`.

   {% /tab %}

   {% tab title="HTTP Server" %}

   - HTTP/S server address:
     - The Observability Pipelines Worker listens to this socket address, such as `0.0.0.0:9997`, for your HTTP client logs.
     - The default environment variable is `DD_OP_SOURCE_HTTP_SERVER_ADDRESS`.
   - If you are using plain authentication:
     - HTTP/S endpoint authentication username.
       - The default environment variable is `DD_OP_SOURCE_HTTP_SERVER_USERNAME`.
     - HTTP/S endpoint authentication password.
       - The default environment variable is `DD_OP_SOURCE_HTTP_SERVER_PASSWORD`.
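
   If your HTTP client authenticates with plain (basic) authentication, the `Authorization` header it sends is the base64 encoding of `username:password`. A minimal shell sketch with hypothetical credentials, useful for verifying what a client should send to the Worker:

   ```shell
   # Hypothetical values for DD_OP_SOURCE_HTTP_SERVER_USERNAME / _PASSWORD
   username="op-user"
   password="op-pass"

   # The header a client sends with basic authentication
   printf 'Authorization: Basic %s\n' "$(printf '%s' "${username}:${password}" | base64)"
   # prints: Authorization: Basic b3AtdXNlcjpvcC1wYXNz
   ```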

   {% /tab %}

   {% tab title="Kafka" %}

   - The host and port of the Kafka bootstrap servers.
     - The bootstrap server that the client uses to connect to the Kafka cluster and discover all the other hosts in the cluster. The host and port must be entered in the format of `host:port`, such as `10.14.22.123:9092`. If there is more than one server, use commas to separate them.
     - The default environment variable is `DD_OP_SOURCE_KAFKA_BOOTSTRAP_SERVERS`.
   - SASL (when enabled):
     - Kafka SASL username
       - The default environment variable is `DD_OP_SOURCE_KAFKA_SASL_USERNAME`.
     - Kafka SASL password
       - The default environment variable is `DD_OP_SOURCE_KAFKA_SASL_PASSWORD`.
   - Kafka TLS passphrase (when enabled):
     - The default environment variable is `DD_OP_SOURCE_KAFKA_KEY_PASS`.
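
   The bootstrap server list is a common source of typos. A quick format check, sketched in shell with a hypothetical server list (comma-separated `host:port` pairs, no spaces):

   ```shell
   # Hypothetical value for DD_OP_SOURCE_KAFKA_BOOTSTRAP_SERVERS
   servers="10.14.22.123:9092,10.14.22.124:9092"

   # Each entry must be host:port; entries are separated by commas only
   if printf '%s' "$servers" | grep -Eq '^[^,: ]+:[0-9]+(,[^,: ]+:[0-9]+)*$'; then
     echo "format OK"
   else
     echo "format invalid"
   fi
   ```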

   {% /tab %}

   {% tab title="Logstash" %}

   - Logstash address and port:
     - The Observability Pipelines Worker listens on this address, such as `0.0.0.0:9997`, for incoming log messages.
     - The default environment variable is `DD_OP_SOURCE_LOGSTASH_ADDRESS`.
   - Logstash TLS passphrase:
     - The default environment variable is `DD_OP_SOURCE_LOGSTASH_KEY_PASS`.

   {% /tab %}

   {% tab title="OpenTelemetry" %}
You must provide both HTTP and gRPC endpoints. Configure your OTLP exporters to point to one of these endpoints. See [Send logs to the Observability Pipelines Worker](https://docs.datadoghq.com/observability_pipelines/sources/opentelemetry/#send-logs-to-the-observability-pipelines-worker) for more information.

   - HTTP listener address

     - The Observability Pipelines Worker listens to this socket address to receive logs from the OTel collector.
     - The default environment variable is `DD_OP_SOURCE_OTEL_HTTP_ADDRESS`.

   - gRPC listener address

     - The Observability Pipelines Worker listens to this socket address to receive logs from the OTel collector.
     - The default environment variable is `DD_OP_SOURCE_OTEL_GRPC_ADDRESS`.

   - OpenTelemetry TLS passphrase (when enabled):
     - The default environment variable is `DD_OP_SOURCE_OTEL_KEY_PASS`.

   {% /tab %}

   {% tab title="Socket" %}

   - Socket address:

     - The address and port where the Observability Pipelines Worker listens for incoming logs.
     - The default environment variable is `DD_OP_SOURCE_SOCKET_ADDRESS`.

   - TLS passphrase (when enabled):

     - The default environment variable is `DD_OP_SOURCE_SOCKET_KEY_PASS`.

   {% /tab %}

   {% tab title="Splunk HEC" %}

   - Splunk HEC address:
     - The bind address that your Observability Pipelines Worker listens on to receive logs originally intended for the Splunk indexer. For example, `0.0.0.0:8088`. **Note**: `/services/collector/event` is automatically appended to the endpoint.
     - The default environment variable is `DD_OP_SOURCE_SPLUNK_HEC_ADDRESS`.
   - Splunk HEC TLS passphrase (when enabled):
     - The default environment variable is `DD_OP_SOURCE_SPLUNK_HEC_KEY_PASS`.

   {% /tab %}

   {% tab title="Splunk TCP" %}

   - Splunk TCP address:
     - The Observability Pipelines Worker listens to this socket address to receive logs from the Splunk Forwarder. For example, `0.0.0.0:9997`.
     - The default environment variable is `DD_OP_SOURCE_SPLUNK_TCP_ADDRESS`.
   - Splunk TCP TLS passphrase (when enabled):
     - The default environment variable is `DD_OP_SOURCE_SPLUNK_TCP_KEY_PASS`.

   {% /tab %}

   {% tab title="Sumo Logic" %}

   - Sumo Logic address:
     - The bind address that your Observability Pipelines Worker listens on to receive logs originally intended for the Sumo Logic HTTP Source. For example, `0.0.0.0:80`. **Note**: The `/receiver/v1/http/` path is automatically appended to the endpoint.
     - The default environment variable is `DD_OP_SOURCE_SUMO_LOGIC_ADDRESS`.

   {% /tab %}

   {% tab title="Syslog" %}

   - rsyslog or syslog-ng address:
     - The Observability Pipelines Worker listens on this bind address to receive logs from the Syslog forwarder. For example, `0.0.0.0:9997`.
     - The default environment variable is `DD_OP_SOURCE_SYSLOG_ADDRESS`.
   - rsyslog or syslog-ng TLS passphrase (when enabled):
     - The default environment variable is `DD_OP_SOURCE_SYSLOG_KEY_PASS`.

   {% /tab %}



1. If you want to update destination environment variables, update the information for your data destination.

   {% tab title="Amazon OpenSearch" %}

   - Amazon OpenSearch authentication username:
     - The default environment variable is `DD_OP_DESTINATION_AMAZON_OPENSEARCH_USERNAME`.
   - Amazon OpenSearch authentication password:
     - The default environment variable is `DD_OP_DESTINATION_AMAZON_OPENSEARCH_PASSWORD`.
   - Amazon OpenSearch endpoint URL:
     - The default environment variable is `DD_OP_DESTINATION_AMAZON_OPENSEARCH_ENDPOINT_URL`.

   {% /tab %}

   {% tab title="Amazon Security Lake" %}

   - Amazon Security Lake TLS passphrase (when enabled):
     - The default environment variable is `DD_OP_DESTINATION_AMAZON_SECURITY_LAKE_KEY_PASS`.

   {% /tab %}

   {% tab title="CrowdStrike NG-SIEM" %}

   - CrowdStrike HEC ingestion URL:
     - **Note**: Do **not** include the suffix `/services/collector` in the URL. The URL must follow this format: `https://<your_instance_id>.ingest.us-1.crowdstrike.com`.
     - The default environment variable is `DD_OP_DESTINATION_CROWDSTRIKE_NEXT_GEN_SIEM_ENDPOINT_URL`.
   - CrowdStrike HEC API token:
     - The default environment variable is `DD_OP_DESTINATION_CROWDSTRIKE_NEXT_GEN_SIEM_TOKEN`.
   - CrowdStrike Next-Gen SIEM HEC TLS passphrase:
     - The default environment variable is `DD_OP_DESTINATION_CROWDSTRIKE_NEXT_GEN_SIEM_KEY_PASS`.

   {% /tab %}

   {% tab title="Datadog Logs" %}
No environment variables required.
   {% /tab %}

   {% tab title="Datadog Metrics" %}
No environment variables required.
   {% /tab %}

   {% tab title="Datadog Archives" %}

   - Amazon S3: There are no environment variables to configure.
   - Google Cloud Storage: There are no environment variables to configure.
   - Azure Storage:
     - The Azure connection string that gives the Worker access to your Azure Storage bucket.
     - The default environment variable is `DD_OP_DESTINATION_DATADOG_ARCHIVES_AZURE_BLOB_CONNECTION_STRING`.

   To get the connection string:

   1. Navigate to [Azure Storage accounts](https://portal.azure.com/#browse/Microsoft.Storage%2FStorageAccounts).
   1. Click **Access keys** under **Security and networking** in the left navigation menu.
   1. Copy the connection string for the storage account and paste it into the **Azure connection string** field on the Observability Pipelines Worker installation page.
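
   Azure connection strings are long and easy to truncate when pasting. A quick sanity check that the required fields survived the copy, sketched in shell (the value shown is a made-up example, not a real key):

   ```shell
   # Made-up example; a real connection string comes from the Access keys page
   conn="DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=abc123==;EndpointSuffix=core.windows.net"

   # A usable connection string must carry both the account name and key
   for field in AccountName= AccountKey=; do
     case "$conn" in
       *"$field"*) echo "found $field" ;;
       *) echo "missing $field" ;;
     esac
   done
   ```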

   {% /tab %}

   {% tab title="Elasticsearch" %}

   - Elasticsearch authentication username:
     - The default environment variable is `DD_OP_DESTINATION_ELASTICSEARCH_USERNAME`.
   - Elasticsearch authentication password:
     - The default environment variable is `DD_OP_DESTINATION_ELASTICSEARCH_PASSWORD`.
   - Elasticsearch endpoint URL:
     - The default environment variable is `DD_OP_DESTINATION_ELASTICSEARCH_ENDPOINT_URL`.

   {% /tab %}

   {% tab title="Google Pub/Sub" %}

   - Google Pub/Sub alternative endpoint URL:
     - By default, the Worker sends data to the global endpoint `https://pubsub.googleapis.com`. If your Pub/Sub topic is region-specific, configure the alternative endpoint URL with the regional endpoint. See [About Pub/Sub endpoints](https://cloud.google.com/pubsub/docs/reference/service_apis_overview#pubsub_endpoints) for more information.
     - The default environment variable is `DD_OP_DESTINATION_GCP_PUBSUB_ENDPOINT_URL`.
   - Google Pub/Sub TLS passphrase (when enabled):
     - The default environment variable is `DD_OP_DESTINATION_GCP_PUBSUB_KEY_PASS`.

   {% /tab %}

   {% tab title="Google SecOps" %}

   - Google SecOps endpoint URL:
     - The default environment variable is `DD_OP_DESTINATION_GOOGLE_CHRONICLE_UNSTRUCTURED_ENDPOINT_URL`.

   {% /tab %}

   {% tab title="HTTP Client" %}

   - HTTP/S client URI endpoint:
     - The default environment variable is `DD_OP_DESTINATION_HTTP_CLIENT_URI`.
   - HTTP/S Client TLS passphrase (when enabled):
     - The default environment variable is `DD_OP_DESTINATION_HTTP_CLIENT_KEY_PASS`.
   - If you are using basic authentication:
     - HTTP/S endpoint authentication username and password.
     - The default environment variable is `DD_OP_DESTINATION_HTTP_CLIENT_USERNAME` and `DD_OP_DESTINATION_HTTP_CLIENT_PASSWORD`.
   - If you are using bearer authentication:
     - HTTP/S endpoint bearer token.
     - The default environment variable is `DD_OP_DESTINATION_HTTP_CLIENT_BEARER_TOKEN`.

   {% /tab %}

   {% tab title="Kafka" %}

   - Kafka bootstrap servers:
     - The host and port of the Kafka bootstrap servers. The bootstrap server is what the client uses to connect to the Kafka cluster and discover all the other hosts in the cluster. The host and port must be entered in the format `host:port`, such as `10.14.22.123:9092`. If there is more than one server, use commas to separate them.
     - The default environment variable is `DD_OP_DESTINATION_KAFKA_BOOTSTRAP_SERVERS`.
   - Kafka TLS passphrase (when enabled):
     - The default environment variable is `DD_OP_DESTINATION_KAFKA_KEY_PASS`.
   - SASL (when enabled):
     - Kafka SASL username:
       - The default environment variable is `DD_OP_DESTINATION_KAFKA_SASL_USERNAME`.
     - Kafka SASL password:
       - The default environment variable is `DD_OP_DESTINATION_KAFKA_SASL_PASSWORD`.

   {% /tab %}

   {% tab title="Microsoft Sentinel" %}

   - Data collection endpoint (DCE):
     - The DCE endpoint URL is shown as the **Logs Ingestion Endpoint** or **Data Collection Endpoint** on the DCR Overview page. An example URL: `https://<DCE-ID>.ingest.monitor.azure.com`.
     - The default environment variable is `DD_OP_DESTINATION_MICROSOFT_SENTINEL_DCE_URI`.
   - Client secret:
     - The Azure AD application's client secret, such as `550e8400-e29b-41d4-a716-446655440000`.
     - The default environment variable is `DD_OP_DESTINATION_MICROSOFT_SENTINEL_CLIENT_SECRET`.
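
   A DCE URI that includes an extra path or trailing slash is a common source of ingestion errors. A shell sketch that checks the expected shape (the DCE ID is hypothetical; the check only pins the `ingest.monitor.azure.com` suffix, so region-qualified hostnames also pass):

   ```shell
   # Hypothetical value for DD_OP_DESTINATION_MICROSOFT_SENTINEL_DCE_URI
   dce="https://my-dce-abc1.ingest.monitor.azure.com"

   # Must be https, end at the hostname, and carry no path or trailing slash
   if printf '%s' "$dce" | grep -Eq '^https://[^/]+\.ingest\.monitor\.azure\.com$'; then
     echo "DCE URI looks OK"
   else
     echo "unexpected DCE URI format"
   fi
   ```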

   {% /tab %}

   {% tab title="New Relic" %}

   - New Relic account ID:
     - The default environment variable is `DD_OP_DESTINATION_NEW_RELIC_ACCOUNT_ID`.
   - New Relic license:
     - The default environment variable is `DD_OP_DESTINATION_NEW_RELIC_LICENSE_KEY`.

   {% /tab %}

   {% tab title="OpenSearch" %}

   - OpenSearch authentication username:
     - The default environment variable is `DD_OP_DESTINATION_OPENSEARCH_USERNAME`.
   - OpenSearch authentication password:
     - The default environment variable is `DD_OP_DESTINATION_OPENSEARCH_PASSWORD`.
   - OpenSearch endpoint URL:
     - The default environment variable is `DD_OP_DESTINATION_OPENSEARCH_ENDPOINT_URL`.

   {% /tab %}

   {% tab title="SentinelOne" %}

   - SentinelOne write access token:
     - The default environment variable is `DD_OP_DESTINATION_SENTINEL_ONE_TOKEN`.

   {% /tab %}

   {% tab title="Socket" %}

   - Socket address:
     - The address to which the Observability Pipelines Worker sends processed logs.
     - The default environment variable is `DD_OP_DESTINATION_SOCKET_ADDRESS`.
   - TLS passphrase:
     - The default environment variable is `DD_OP_DESTINATION_SOCKET_KEY_PASS`.

   {% /tab %}

   {% tab title="Splunk HEC" %}

   - Splunk HEC token:
     - The Splunk HEC token for the Splunk indexer. **Note**: Depending on your shell and environment, you may not want to wrap your environment variable in quotes.
     - The default environment variable is `DD_OP_DESTINATION_SPLUNK_HEC_TOKEN`.
   - Base URL of the Splunk instance:
     - The Splunk HTTP Event Collector endpoint your Observability Pipelines Worker sends processed logs to. For example, `https://hec.splunkcloud.com:8088`. **Note**: The `/services/collector/event` path is automatically appended to the endpoint.
     - The default environment variable is `DD_OP_DESTINATION_SPLUNK_HEC_ENDPOINT_URL`.
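
   Because the Worker appends the collector path itself, the environment variable should hold only the base URL. A small sketch of the URL the Worker ends up targeting (the base URL is the hypothetical example from above):

   ```shell
   # Hypothetical value for DD_OP_DESTINATION_SPLUNK_HEC_ENDPOINT_URL
   base="https://hec.splunkcloud.com:8088"

   # The Worker appends /services/collector/event; do not include it yourself
   printf '%s\n' "${base%/}/services/collector/event"
   # prints: https://hec.splunkcloud.com:8088/services/collector/event
   ```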

   {% /tab %}

   {% tab title="Sumo Logic" %}

   - Unique URL generated for the HTTP Logs and Metrics Source to receive log data.
     - The Sumo Logic HTTP Source endpoint. The Observability Pipelines Worker sends processed logs to this endpoint. For example, `https://<ENDPOINT>.collection.sumologic.com/receiver/v1/http/<UNIQUE_HTTP_COLLECTOR_CODE>`, where:
       - `<ENDPOINT>` is your Sumo collection endpoint.
       - `<UNIQUE_HTTP_COLLECTOR_CODE>` is the string that follows the last forward slash (`/`) in the upload URL for the HTTP source.
     - The default environment variable is `DD_OP_DESTINATION_SUMO_LOGIC_HTTP_COLLECTOR_URL`.
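
   To double-check which part of your upload URL is the unique collector code, it is the segment after the last `/`. A shell sketch with a made-up URL:

   ```shell
   # Made-up example of a Sumo Logic HTTP Source upload URL
   url="https://endpoint1.collection.sumologic.com/receiver/v1/http/AbCdEf123=="

   # Everything after the last slash is the unique HTTP collector code
   printf '%s\n' "${url##*/}"
   # prints: AbCdEf123==
   ```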

   {% /tab %}

   {% tab title="Syslog" %}

   - The rsyslog or syslog-ng endpoint URL. For example, `127.0.0.1:9997`.
     - The Observability Pipelines Worker sends logs to this address and port.
     - The default environment variable is `DD_OP_DESTINATION_SYSLOG_ENDPOINT_URL`.
   - The rsyslog or syslog-ng TLS passphrase (when enabled):
     - The default environment variable is `DD_OP_DESTINATION_SYSLOG_KEY_PASS`.

   {% /tab %}



1. Follow the instructions for your environment to update the Worker:

   {% tab title="Docker" %}

   1. Click **Select API key** to choose the Datadog API key you want to use.
   1. Run the command provided in the UI to install the Worker. The command is automatically populated with the environment variables you entered earlier.
      ```shell
      docker run -i -e DD_API_KEY=<DATADOG_API_KEY> \
          -e DD_OP_PIPELINE_ID=<PIPELINE_ID> \
          -e DD_SITE=<DATADOG_SITE> \
          -e <SOURCE_ENV_VARIABLE> \
          -e <DESTINATION_ENV_VARIABLE> \
          -p 8088:8088 \
          datadog/observability-pipelines-worker run
      ```
      **Note**: By default, the `docker run` command exposes the same port the Worker is listening on. If you want to map the Worker's container port to a different port on the Docker host, use the `-p | --publish` option:
      ```
      -p 8282:8088 datadog/observability-pipelines-worker run
      ```
   1. Click **Navigate Back** to go back to the Observability Pipelines edit pipeline page.
   1. Click **Deploy Changes**.

   {% /tab %}

   {% tab title="Kubernetes" %}

   1. Download the [Helm chart values file](https://docs.datadoghq.com/resources/yaml/observability_pipelines/v2/setup/values.yaml).
   1. Click **Select API key** to choose the Datadog API key you want to use.
   1. Update the Datadog Helm chart to the latest version:
      ```shell
      helm repo update
      ```
   1. Run the command provided in the UI to install the Worker. The command is automatically populated with the environment variables you entered earlier.
      ```shell
      helm upgrade --install opw \
      -f values.yaml \
      --set datadog.apiKey=<DATADOG_API_KEY> \
      --set datadog.pipelineId=<PIPELINE_ID> \
      --set <SOURCE_ENV_VARIABLES> \
      --set <DESTINATION_ENV_VARIABLES> \
      --set service.ports[0].protocol=TCP,service.ports[0].port=<SERVICE_PORT>,service.ports[0].targetPort=<TARGET_PORT> \
      datadog/observability-pipelines-worker
      ```
      **Note**: By default, the Kubernetes Service maps incoming port `<SERVICE_PORT>` to the port the Worker is listening on (`<TARGET_PORT>`). If you want to map the Worker's pod port to a different incoming port of the Kubernetes Service, use the following `service.ports[0].port` and `service.ports[0].targetPort` values:
      ```
      --set service.ports[0].protocol=TCP,service.ports[0].port=8088,service.ports[0].targetPort=8282
      ```
   1. Click **Navigate Back** to go back to the Observability Pipelines edit pipeline page.
   1. Click **Deploy Changes**.

   {% /tab %}

   {% tab title="Linux (APT)" %}
   
   1. Click **Select API key** to choose the Datadog API key you want to use.

   1. Run the one-step command provided in the UI to re-install the Worker.

      **Note**: The environment variables used by the Worker in `/etc/default/observability-pipelines-worker` are not updated on subsequent runs of the install script. If changes are needed, update the file manually and restart the Worker.

If you prefer not to use the one-line installation script, follow these step-by-step instructions:

   1. Run the following commands to update your local `apt` repo and install the latest Worker version:
      ```shell
      sudo apt-get update
      sudo apt-get install observability-pipelines-worker datadog-signing-keys
      ```
   1. Add your keys, site (for example `datadoghq.com` for US1), source, and destination environment variables to the Worker's environment file:
      ```shell
      sudo tee /etc/default/observability-pipelines-worker > /dev/null <<EOF
      DD_API_KEY=<DATADOG_API_KEY>
      DD_OP_PIPELINE_ID=<PIPELINE_ID>
      DD_SITE=<DATADOG_SITE>
      <SOURCE_ENV_VARIABLES>
      <DESTINATION_ENV_VARIABLES>
      EOF
      ```
   1. Restart the worker:
      ```shell
      sudo systemctl restart observability-pipelines-worker
      ```
   1. Click **Navigate Back** to go back to the Observability Pipelines edit pipeline page.
   1. Click **Deploy Changes**.
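
   After updating the environment file, it can help to confirm that `/etc/default/observability-pipelines-worker` actually contains the required keys before restarting the Worker. A small helper sketch (the function name is hypothetical; pass a different path to check another file):

   ```shell
   # Verify that an Observability Pipelines env file defines the required keys.
   # Hypothetical helper; defaults to the Worker's environment file.
   check_op_env() {
     envfile="${1:-/etc/default/observability-pipelines-worker}"
     for key in DD_API_KEY DD_OP_PIPELINE_ID DD_SITE; do
       grep -q "^${key}=" "$envfile" || { echo "missing: $key"; return 1; }
     done
     echo "env file OK"
   }
   ```

   For example, `check_op_env` prints `env file OK` when all three keys are set, and names the first missing key otherwise.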

   {% /tab %}

   {% tab title="Linux (RPM)" %}
   
   1. Click **Select API key** to choose the Datadog API key you want to use.

   1. Run the one-step command provided in the UI to re-install the Worker.

      **Note**: The environment variables used by the Worker in `/etc/default/observability-pipelines-worker` are not updated on subsequent runs of the install script. If changes are needed, update the file manually and restart the Worker.

If you prefer not to use the one-line installation script, follow these step-by-step instructions:

   1. Update your packages and install the latest version of Worker:
      ```shell
      sudo yum makecache
      sudo yum install observability-pipelines-worker
      ```
   1. Add your keys, site (for example `datadoghq.com` for US1), and updated source and destination environment variables to the Worker's environment file:
      ```shell
      sudo tee /etc/default/observability-pipelines-worker > /dev/null <<EOF
      DD_API_KEY=<API_KEY>
      DD_OP_PIPELINE_ID=<PIPELINE_ID>
      DD_SITE=<SITE>
      <SOURCE_ENV_VARIABLES>
      <DESTINATION_ENV_VARIABLES>
      EOF
      ```
   1. Restart the worker:
      ```shell
      sudo systemctl restart observability-pipelines-worker
      ```
   1. Click **Navigate Back** to go back to the Observability Pipelines edit pipeline page.
   1. Click **Deploy Changes**.

   {% /tab %}

   {% tab title="CloudFormation" %}

   1. Select the expected log volume for the pipeline from the dropdown.
   1. Select the AWS region you want to use to install the Worker.
   1. Click **Select API key** to choose the Datadog API key you want to use.
   1. Click **Launch CloudFormation Template** to navigate to the AWS Console to review the stack configuration and then launch it. Make sure the CloudFormation parameters are set as expected.
   1. Select the VPC and subnet that you want to use to install the Worker.
   1. Review and check the necessary permissions checkboxes for IAM. Click **Submit** to create the stack. CloudFormation handles the installation at this point; the Worker instances are launched, the necessary software is downloaded, and the Worker starts automatically.
   1. Delete the previous CloudFormation stack and resources associated with it.
   1. Click **Navigate Back** to go back to the Observability Pipelines edit pipeline page.
   1. Click **Deploy Changes**.

   {% /tab %}
