---
title: Splunk HTTP Event Collector (HEC) Destination
description: >-
  Send logs to Splunk HEC with the Observability Pipelines Splunk HTTP Event
  Collector (HEC) destination.
---

# Splunk HTTP Event Collector (HEC) Destination

{% callout %}
# Important note for users on the following Datadog sites: app.ddog-gov.com

{% alert level="danger" %}
This product is not supported for your selected [Datadog site](https://docs.datadoghq.com/getting_started/site.md).
{% /alert %}

{% /callout %}
Available for: {% icon name="icon-logs" /%} Logs
Use Observability Pipelines' Splunk HTTP Event Collector (HEC) destination to send logs to Splunk HEC.

## Setup{% #setup %}

Set up the Splunk HEC destination and its environment variables when you [set up a pipeline](https://app.datadoghq.com/observability-pipelines). The information below is configured in the pipelines UI.

### Set up the destination{% #set-up-the-destination %}

{% alert level="danger" %}
Observability Pipelines compresses logs with the gzip (level 6) algorithm. Only enter the identifiers for the Splunk HEC token and endpoint; do not enter the actual values.
{% /alert %}

1. Enter the identifier for your token. If you leave it blank, the default is used.
1. Enter the identifier for your endpoint URL. If you leave it blank, the default is used.
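As noted above, the Worker compresses payloads with gzip at level 6 before sending them to Splunk. A minimal sketch of equivalent compression in Python (the event content here is a hypothetical example, not Worker code):

```python
import gzip
import json

# Hypothetical log batch serialized as newline-delimited JSON.
events = [{"event": "user login", "sourcetype": "httpevent"}]
payload = "\n".join(json.dumps(e) for e in events).encode("utf-8")

# compresslevel=6 mirrors the gzip level the Worker uses.
compressed = gzip.compress(payload, compresslevel=6)

# Round-trip check: decompressing recovers the original payload.
assert gzip.decompress(compressed) == payload
```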

#### Optional settings{% #optional-settings %}

##### Splunk index{% #splunk-index %}

Enter the name of the Splunk index you want to send your data to. It must be an index that your HEC token is allowed to write to. See [template syntax](https://docs.datadoghq.com/observability_pipelines/destinations.md#template-syntax) if you want to route logs to different indexes based on specific fields in your logs.

##### Auto-extract timestamp{% #auto-extract-timestamp %}

Select whether the timestamp should be auto-extracted. If set to `true`, Splunk extracts the timestamp from the message, which is expected to be in the format `yyyy-mm-dd hh:mm:ss`.
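For reference, the `yyyy-mm-dd hh:mm:ss` layout above corresponds to a timestamp like the one produced here (the `strftime` format string is our mapping of that layout, and the date value is arbitrary):

```python
from datetime import datetime, timezone

# An arbitrary example timestamp.
ts = datetime(2024, 5, 1, 13, 45, 30, tzinfo=timezone.utc)

# Render it in the `yyyy-mm-dd hh:mm:ss` layout Splunk's
# auto-extraction expects.
formatted = ts.strftime("%Y-%m-%d %H:%M:%S")
print(formatted)  # → 2024-05-01 13:45:30
```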

##### Sourcetype override{% #sourcetype-override %}

Set the `sourcetype` to override Splunk's default value, which is `httpevent` for HEC data. See [template syntax](https://docs.datadoghq.com/observability_pipelines/destinations.md#template-syntax) if you want to route logs to different source types based on specific fields in your logs.

##### Encoding{% #encoding %}

Select the **Encoding** in the dropdown menu (**JSON** or **Raw**).

- If you selected **JSON**, optionally click **Add Field** to add keys of fields you want extracted as [indexed fields](https://help.splunk.com/en/splunk-enterprise/get-started/get-data-in/9.0/get-data-with-http-event-collector/automate-indexed-field-extractions-with-http-event-collector). This indexes the specified fields when the Splunk HTTP Event Collector ingests the logs.
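A sketch of what a resulting HEC JSON event can look like, following Splunk's documented HEC event format; the keys under `"fields"` are what the **Add Field** option maps to, and the specific field names and values here are hypothetical:

```python
import json

# Hypothetical HEC event payload. Splunk indexes the keys under
# "fields" at ingest time (indexed field extractions).
event = {
    "sourcetype": "httpevent",
    "index": "main",  # must be an index the HEC token may write to
    "event": "payment processed",
    "fields": {"service": "checkout", "env": "prod"},
}
body = json.dumps(event)
```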

##### Buffering{% #buffering %}

Toggle the switch to enable **Buffering Options**. A configurable buffer on your destination ensures that intermittent latency or an outage at the destination does not create immediate backpressure, allowing events to continue to be ingested from your source. Disk buffers can also increase pipeline durability by writing data to disk, so buffered data persists through a Worker restart. See [Destination buffers](https://docs.datadoghq.com/observability_pipelines/scaling_and_performance/buffering_and_backpressure.md#destination-buffers) for more information.

- If left unconfigured, your destination uses a memory buffer with a capacity of 500 events.
- To configure a buffer on your destination:
  1. Select the buffer type you want to set (**Memory** or **Disk**).
  1. Enter the buffer size and select the unit.
     1. Maximum memory buffer size is 128 GB.
     1. Maximum disk buffer size is 500 GB.
  1. In the **Behavior on full buffer** dropdown menu, select whether you want to **block** events or **drop new events** when the buffer is full.
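The two full-buffer behaviors can be illustrated with a toy bounded buffer (this is an illustration of the semantics only, not Worker code): **drop new events** discards the incoming event, while **block** makes the source wait until space frees up.

```python
from collections import deque

def offer(buffer: deque, event, capacity: int, policy: str) -> bool:
    """Try to enqueue an event; returns True if it was accepted."""
    if len(buffer) < capacity:
        buffer.append(event)
        return True
    if policy == "drop new events":
        return False  # the incoming event is discarded
    # "block" would instead wait here until the buffer drains.
    raise NotImplementedError("'block' waits for free space")

buf: deque = deque()
assert offer(buf, "e1", capacity=1, policy="drop new events")
assert not offer(buf, "e2", capacity=1, policy="drop new events")
```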

### Set secrets{% #set-secrets %}

These are the defaults used for secret identifiers and environment variables.

**Note**: If you enter secret identifiers and then choose to use environment variables, the environment variable name is the identifier you entered, prefixed with `DD_OP_`. For example, if you entered `PASSWORD_1` for a password identifier, the environment variable for that password is `DD_OP_PASSWORD_1`.
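The naming rule above can be sketched as a one-line transformation (the `PASSWORD_1` identifier is the example from the note):

```python
def op_env_var(identifier: str) -> str:
    """Environment variable name: the identifier prefixed with DD_OP_."""
    return f"DD_OP_{identifier}"

print(op_env_var("PASSWORD_1"))  # → DD_OP_PASSWORD_1
```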

{% tab title="Secrets Management" %}

- Splunk HEC token identifier:
  - References the Splunk HEC token for the Splunk indexer.
  - The default identifier is `DESTINATION_SPLUNK_HEC_TOKEN`.
- Splunk HEC endpoint URL identifier:
  - References the Splunk HTTP Event Collector endpoint your Observability Pipelines Worker sends processed logs to. For example, `https://hec.splunkcloud.com:8088`.
  - **Note**: The `/services/collector/event` path is automatically appended to the endpoint.
  - The default identifier is `DESTINATION_SPLUNK_HEC_ENDPOINT_URL`.

{% /tab %}

{% tab title="Environment Variables" %}

- Splunk HEC token:
  - The Splunk HEC token for the Splunk indexer. **Note**: Depending on your shell and environment, you may not want to wrap your environment variable in quotes.
  - The default environment variable is `DD_OP_DESTINATION_SPLUNK_HEC_TOKEN`.
- Base URL of the Splunk instance:
  - The Splunk HTTP Event Collector endpoint your Observability Pipelines Worker sends processed logs to. For example, `https://hec.splunkcloud.com:8088`. **Note**: The `/services/collector/event` path is automatically appended to the endpoint.
  - The default environment variable is `DD_OP_DESTINATION_SPLUNK_HEC_ENDPOINT_URL`.
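Because the `/services/collector/event` path is appended automatically, the final URL the Worker posts to looks like this (the endpoint value below is the example from the note above):

```python
# Base endpoint you provide via the environment variable.
endpoint = "https://hec.splunkcloud.com:8088"

# The Worker appends the HEC event path to it.
full_url = endpoint.rstrip("/") + "/services/collector/event"
print(full_url)  # → https://hec.splunkcloud.com:8088/services/collector/event
```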

{% /tab %}

### How the destination works{% #how-the-destination-works %}

#### Event batching{% #event-batching %}

A batch of events is flushed when one of these parameters is met. See [event batching](https://docs.datadoghq.com/observability_pipelines/destinations.md#event-batching) for more information.

| Maximum Events | Maximum Size (MB) | Timeout (seconds) |
| -------------- | ----------------- | ----------------- |
| None           | 1                 | 1                 |
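The flush conditions in the table can be sketched as a simple predicate (a toy illustration, not Worker code; 1 MB is taken here as 10^6 bytes, and with no maximum event count only size and timeout apply):

```python
MAX_BYTES = 1_000_000  # 1 MB maximum batch size
TIMEOUT_S = 1.0        # 1 second timeout

def should_flush(batch_bytes: int, batch_age_s: float) -> bool:
    """Flush when the size limit or the timeout is reached."""
    return batch_bytes >= MAX_BYTES or batch_age_s >= TIMEOUT_S

assert should_flush(1_000_000, 0.1)   # size limit reached
assert should_flush(10, 1.5)          # timeout reached
assert not should_flush(10, 0.1)      # neither condition met
```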
