---
title: Splunk Heavy or Universal Forwarders (TCP) Source
description: Datadog, the leading service for cloud-scale monitoring.
breadcrumbs: >-
  Docs > Observability Pipelines > Sources > Splunk Heavy or Universal
  Forwarders (TCP) Source
---

# Splunk Heavy or Universal Forwarders (TCP) Source

{% callout %}
# Important note for users on the following Datadog sites: app.ddog-gov.com

{% alert level="danger" %}
This product is not supported for your selected [Datadog site](https://docs.datadoghq.com/getting_started/site).
{% /alert %}

{% /callout %}
Available for: {% icon name="icon-logs" /%} Logs

## Overview{% #overview %}

Use Observability Pipelines' Splunk Heavy and Universal Forwarders (TCP) source to receive logs sent to your Splunk forwarders.

## Prerequisites{% #prerequisites %}

To use Observability Pipelines' Splunk TCP source, you need a Splunk Enterprise or Cloud instance along with either a Splunk Universal Forwarder or a Splunk Heavy Forwarder routing data to that instance. You also need the following information available:

- The bind address that your Observability Pipelines Worker will listen on to receive logs from your applications. For example, `0.0.0.0:8088`. Later on, you configure your applications to send logs to this address.
- The appropriate [TLS certificates](https://docs.splunk.com/Documentation/Splunk/9.2.0/Security/StepstosecuringSplunkwithTLS#2._Obtain_the_certificates_that_you_need_to_secure_your_Splunk_platform_deployment) and the password you used to create your private key if your forwarders are globally configured to enable SSL.

See [Deploy a Universal Forwarder](https://docs.splunk.com/Documentation/Forwarder/9.2.0/Forwarder/Installtheuniversalforwardersoftware) or [Deploy a Heavy Forwarder](https://docs.splunk.com/Documentation/Splunk/9.2.1/Forwarding/Deployaheavyforwarder) for more information on Splunk forwarders.

## Setup{% #setup %}

Set up this source when you [set up a pipeline](https://docs.datadoghq.com/observability_pipelines/configuration/set_up_pipelines/). You can set up a pipeline in the [UI](https://app.datadoghq.com/observability-pipelines), using the [API](https://docs.datadoghq.com/api/latest/observability-pipelines/), or with [Terraform](https://registry.terraform.io/providers/datadog/datadog/latest/docs/resources/observability_pipeline). The instructions in this section are for setting up the source in the UI.

{% alert level="danger" %}
Only enter the identifiers for the Splunk TCP address and, if applicable, the TLS key pass. Do not enter the actual values.
{% /alert %}

- Enter the identifier for your Splunk TCP address. If you leave it blank, the default is used.

### Optional TLS settings{% #optional-tls-settings %}

Toggle the switch to **Enable TLS**.

- If you are using Secrets Management, enter the identifier for the key pass. See [Set secrets](#set-secrets) for the default used if the field is left blank.
- The following certificate and key files are required:
  - `Server Certificate Path`: The path to the certificate file that has been signed by your Certificate Authority (CA) root file, in DER, PEM, or CRT (X.509) format.
  - `CA Certificate Path`: The path to your Certificate Authority (CA) root certificate file, in DER, PEM, or CRT (X.509) format.
  - `Private Key Path`: The path to the `.key` private key file that belongs to your Server Certificate Path, in DER, PEM, or CRT (PKCS #8) format.
  - **Notes**:
    - The configuration data directory `/var/lib/observability-pipelines-worker/config/` is automatically prepended to the file paths. See [Advanced Worker Configurations](https://docs.datadoghq.com/observability_pipelines/advanced_configurations/) for more information.
    - The files must be readable by the `observability-pipelines-worker` group and user.
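
For example, if you enter `certs/server.crt` as the Server Certificate Path (the file names below are hypothetical), the Worker resolves the files relative to the configuration data directory:

```
/var/lib/observability-pipelines-worker/config/
└── certs/
    ├── server.crt   # Server Certificate Path
    ├── ca.crt       # CA Certificate Path
    └── server.key   # Private Key Path
```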

## Set secrets{% #set-secrets %}

These are the defaults used for secret identifiers and environment variables.

**Note**: If you enter secret identifiers and then choose to use environment variables, the environment variable is the identifier entered and prepended with `DD_OP`. For example, if you entered `PASSWORD_1` for a password identifier, the environment variable for that password is `DD_OP_PASSWORD_1`.

{% tab title="Secrets Management" %}

- Splunk TCP address identifier:
  - References the socket address, such as `0.0.0.0:9997` on which the Observability Pipelines Worker listens to receive logs from the Splunk Forwarder.
  - The default identifier is `SOURCE_SPLUNK_TCP_ADDRESS`.
- Splunk TCP TLS passphrase identifier (when TLS is enabled):
  - The default identifier is `SOURCE_SPLUNK_TCP_KEY_PASS`.

{% /tab %}

{% tab title="Environment Variables" %}

- Splunk TCP address:
  - The Observability Pipelines Worker listens to this socket address to receive logs from the Splunk Forwarder. For example, `0.0.0.0:9997`.
  - The default environment variable is `DD_OP_SOURCE_SPLUNK_TCP_ADDRESS`.
- Splunk TCP TLS passphrase (when enabled):
  - The default environment variable is `DD_OP_SOURCE_SPLUNK_TCP_KEY_PASS`.

{% /tab %}
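
If you choose environment variables, export the values on the host running the Observability Pipelines Worker before starting it. A minimal sketch; the address and passphrase values below are examples:

```shell
# Socket address the Worker listens on for Splunk forwarder traffic
export DD_OP_SOURCE_SPLUNK_TCP_ADDRESS="0.0.0.0:9997"
# Passphrase for the TLS private key (only needed when TLS is enabled)
export DD_OP_SOURCE_SPLUNK_TCP_KEY_PASS="example-key-passphrase"
```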

## Connect Splunk Forwarder to the Observability Pipelines Worker{% #connect-splunk-forwarder-to-the-observability-pipelines-worker %}

To forward your logs to the Worker, add the following configuration to your Splunk Heavy or Universal Forwarder's `$SPLUNK_HOME/etc/system/local/outputs.conf`:

```
[tcpout]
compressed=false
sendCookedData=false
defaultGroup=opw

[tcpout:opw]
server=<OPW_HOST>:8099
```

`<OPW_HOST>` is the IP/URL of the host (or load balancer) associated with the Observability Pipelines Worker. For CloudFormation installs, the `LoadBalancerDNS` CloudFormation output has the correct URL to use. For Kubernetes installs, the internal DNS record of the Observability Pipelines Worker service can be used. For example: `opw-observability-pipelines-worker.default.svc.cluster.local`.
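
If you enabled TLS on the source, the forwarder side of the connection also needs client SSL settings in the same `outputs.conf` stanza. A hedged sketch, assuming Splunk 9.x `outputs.conf` setting names; the certificate path and password are placeholders:

```
[tcpout:opw]
server=<OPW_HOST>:8099
# Enable TLS for this output group
useSSL=true
# Client certificate (PEM) presented to the Worker; path is a placeholder
clientCert=$SPLUNK_HOME/etc/auth/client.pem
sslPassword=<PRIVATE_KEY_PASSWORD>
# Set to true to verify the Worker's certificate against your CA
sslVerifyServerCert=false
```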

At this point, your logs should be flowing to the Worker, where they are processed by the pipeline and delivered to the configured destination.
