---
title: Run Multiple Pipelines on a Host
description: Run multiple Observability Pipelines Workers on a single host.
breadcrumbs: >-
  Docs > Observability Pipelines > Configuration > Install the Worker > Run
  Multiple Pipelines on a Host
---

# Run Multiple Pipelines on a Host

{% callout %}
# Important note for users on the following Datadog sites: app.ddog-gov.com

{% alert level="danger" %}
This product is not supported for your selected [Datadog site](https://docs.datadoghq.com/getting_started/site.md).
{% /alert %}

{% /callout %}

## Overview{% #overview %}

If you want to run multiple pipelines on a single host to send logs or metrics (Preview (PREVIEW indicates an early access version of a major product or feature that you can opt into before its official release.)) from different sources, you need to manually add the Worker files for any additional Workers. This document explains which files you need to add and modify to run those Workers.

## Prerequisites{% #prerequisites %}

[Set up the first pipeline](https://docs.datadoghq.com/observability_pipelines/configuration/set_up_pipelines.md?tab=pipelineui) and install the Worker on your host.

## Create an additional pipeline{% #create-an-additional-pipeline %}

[Set up another pipeline](https://docs.datadoghq.com/observability_pipelines/configuration/set_up_pipelines.md?tab=pipelineui) for the additional Worker that you want to run on the same host. When you reach the Install page, follow the below steps to run the Worker for this pipeline.

## Run the Worker for the additional pipeline{% #run-the-worker-for-the-additional-pipeline %}

When you installed the first Worker, by default you have:

- A service binary: `/usr/bin/observability-pipelines-worker`
- A service definition file, `/lib/systemd/system/observability-pipelines-worker.service`, that looks like:

  ```bash
  [Unit]
  Description="Observability Pipelines Worker"
  Documentation=https://docs.datadoghq.com/observability_pipelines/
  After=network-online.target
  Wants=network-online.target

  [Service]
  User=observability-pipelines-worker
  Group=observability-pipelines-worker
  ExecStart=/usr/bin/observability-pipelines-worker run
  Restart=always
  AmbientCapabilities=CAP_NET_BIND_SERVICE
  EnvironmentFile=-/etc/default/observability-pipelines-worker

  [Install]
  WantedBy=multi-user.target
  ```

- An environment file, `/etc/default/observability-pipelines-worker`, that looks like:

  ```bash
  DD_API_KEY=<datadog_api_key>
  DD_SITE=<dd_site>
  DD_OP_PIPELINE_ID=<pipeline_id>
  ```

- A data directory: `/var/lib/observability-pipelines-worker`
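
Before adding a second Worker, you can confirm this default layout on your host with a quick read-only check. This is an optional sketch, not part of the Worker tooling; the paths are the defaults listed above:

```shell
#!/bin/sh
# Report which of the default Worker files are present on this host.
# Read-only: this only inspects paths, it never writes.
check_defaults() {
  for path in \
    /usr/bin/observability-pipelines-worker \
    /lib/systemd/system/observability-pipelines-worker.service \
    /etc/default/observability-pipelines-worker \
    /var/lib/observability-pipelines-worker
  do
    if [ -e "$path" ]; then
      echo "present: $path"
    else
      echo "missing: $path"
    fi
  done
}

check_defaults
```

If any of these report `missing`, revisit the [first pipeline setup](https://docs.datadoghq.com/observability_pipelines/configuration/set_up_pipelines.md?tab=pipelineui) before continuing.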

### Configure the additional Worker{% #configure-the-additional-worker %}

For this example, another pipeline was created with the Fluent source. To configure a Worker for this pipeline:

1. Run the following command to create a new data directory, replacing `op-fluent` with a directory name that fits your use case:

   ```shell
   sudo mkdir /var/lib/op-fluent
   ```

1. Run the following command to change the owner of the data directory to `observability-pipelines-worker:observability-pipelines-worker`. Make sure to update `op-fluent` to your data directory's name.

   ```shell
   sudo chown -R observability-pipelines-worker:observability-pipelines-worker /var/lib/op-fluent/
   ```

1. Create an environment file for the new systemd service, such as `/etc/default/op-fluent`, where `op-fluent` is replaced with your specific filename. Example file content:

   ```bash
   DD_API_KEY=<datadog_api_key>
   DD_OP_PIPELINE_ID=<pipeline_id>
   DD_SITE=<dd_site>
   <destination_environment_variables>
   DD_OP_SOURCE_FLUENT_ADDRESS=0.0.0.0:9091
   DD_OP_DATA_DIR=/var/lib/op-fluent
   ```

   In this example:

   - `DD_OP_DATA_DIR` is set to `/var/lib/op-fluent`. Replace `/var/lib/op-fluent` with the path to your data directory.
   - `DD_OP_SOURCE_FLUENT_ADDRESS=0.0.0.0:9091` is the environment variable required for the Fluent source in this example. Replace it with the [environment variable](https://docs.datadoghq.com/observability_pipelines/guide/environment_variables.md?tab=sources) for your source.

Also, make sure to replace:

   - `<datadog_api_key>` with your [Datadog API key](https://app.datadoghq.com/organization-settings/api-keys).
   - `<pipeline_id>` with the ID of the [pipeline](https://docs.datadoghq.com/observability_pipelines/configuration/set_up_pipelines.md?tab=pipelineui) for this Worker.
   - `<dd_site>` with your [Datadog Site](https://docs.datadoghq.com/getting_started/site.md).
   - `<destination_environment_variables>` with the [environment variables](https://docs.datadoghq.com/observability_pipelines/guide/environment_variables.md?tab=sources) for your destinations.

1. Create a new systemd service entry, such as `/lib/systemd/system/op-fluent.service`. Example content for the entry:

   ```bash
   [Unit]
   Description="OPW for Fluent Pipeline"
   Documentation=https://docs.datadoghq.com/observability_pipelines/
   After=network-online.target
   Wants=network-online.target

   [Service]
   User=observability-pipelines-worker
   Group=observability-pipelines-worker
   ExecStart=/usr/bin/observability-pipelines-worker run
   Restart=always
   AmbientCapabilities=CAP_NET_BIND_SERVICE
   EnvironmentFile=-/etc/default/op-fluent

   [Install]
   WantedBy=multi-user.target
   ```

   In this example:
   - The service name is `op-fluent` because the pipeline is using the Fluent source. Replace `op-fluent.service` with a service name for your use case.
   - The `Description` is `OPW for Fluent Pipeline`. Replace `OPW for Fluent Pipeline` with a description for your use case.
   - `EnvironmentFile` is set to `-/etc/default/op-fluent`. Replace `-/etc/default/op-fluent` with the path to the environment file you created for your Worker. The leading `-` tells systemd to skip the file if it does not exist instead of failing.

1. Run this command to reload systemd:

   ```shell
   sudo systemctl daemon-reload
   ```

1. Run this command to enable and start the new service:

   ```shell
   sudo systemctl enable --now op-fluent
   ```

1. Run this command to verify the service is running:

   ```shell
   sudo systemctl status op-fluent
   ```

Additionally, you can use the command `sudo journalctl -u op-fluent.service` to help you debug any issues.
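
Taken together, the provisioning steps above can be collected into a small helper script. This is a hypothetical convenience wrapper, not official Datadog tooling: `PIPELINE` is the `op-fluent` placeholder from this example, and the script defaults to a dry run that prints each command so you can review it, then re-run with `DRY_RUN=0` as root after writing the environment and unit files from steps 3 and 4 by hand:

```shell
#!/bin/sh
# Hypothetical helper: provision and start an additional Worker service.
# Defaults to a dry run; set DRY_RUN=0 to execute the commands for real.
set -eu

PIPELINE="op-fluent"   # replace with your service/data-directory name
OPW_USER="observability-pipelines-worker"
DRY_RUN="${DRY_RUN:-1}"

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "+ $*"        # dry run: print the command instead of running it
  else
    "$@"
  fi
}

# Steps 1-2: create the data directory and hand it to the Worker user.
run mkdir -p "/var/lib/${PIPELINE}"
run chown -R "${OPW_USER}:${OPW_USER}" "/var/lib/${PIPELINE}"

# Steps 3-4 (environment file and systemd unit) are written by hand, as above.
# Steps 5-7: reload systemd, then enable, start, and inspect the service.
run systemctl daemon-reload
run systemctl enable --now "${PIPELINE}"
run systemctl status "${PIPELINE}"
```

Because the dry run only echoes commands, it is safe to execute without root while you adapt the names to your pipeline.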

## Deploy the pipeline{% #deploy-the-pipeline %}

1. Navigate to the additional pipeline's Install page.
1. In the **Deploy your pipeline** section, you should see your additional Worker detected. Click **Deploy**.

## Further reading{% #further-reading %}

- [Set up a pipeline](https://docs.datadoghq.com/observability_pipelines/configuration/set_up_pipelines.md)
- [Environment variables for sources, processors, and destinations](https://docs.datadoghq.com/observability_pipelines/guide/environment_variables.md)
