---
title: Sumo Logic Hosted Collector
description: Datadog, the leading service for cloud-scale monitoring.
breadcrumbs: Docs > Observability Pipelines > Sources > Sumo Logic Hosted Collector
---

# Sumo Logic Hosted Collector

{% callout %}
# Important note for users on the following Datadog sites: app.ddog-gov.com

{% alert level="danger" %}
This product is not supported for your selected [Datadog site](https://docs.datadoghq.com/getting_started/site).
{% /alert %}

{% /callout %}
Available for: {% icon name="icon-logs" /%} Logs

## Overview{% #overview %}

Use Observability Pipelines' Sumo Logic Hosted Collector source to receive logs sent to your Sumo Logic Hosted Collector.

## Prerequisites{% #prerequisites %}

To use Observability Pipelines' Sumo Logic source, make sure your applications are sending data to Sumo Logic in the [expected format](https://help.sumologic.com/docs/send-data/hosted-collectors/http-source/logs-metrics/upload-logs/).

To use Observability Pipelines' Sumo Logic destination, you need a Hosted Sumo Logic Collector with an HTTP Logs source, and the following information available:

- The bind address that your Observability Pipelines Worker will listen on to receive logs. For example, `0.0.0.0:80`.
- The URL of the Sumo Logic HTTP Logs Source that the Worker will send processed logs to. This URL is provided by Sumo Logic once you configure your hosted collector and set up an HTTP Logs and Metrics source.

See [Configure HTTP Logs Source on Sumo Logic](https://help.sumologic.com/docs/send-data/hosted-collectors/http-source/logs-metrics/) for more information.

## Setup{% #setup %}

Set up this source when you [set up a pipeline](https://docs.datadoghq.com/observability_pipelines/configuration/set_up_pipelines/). You can set up a pipeline in the [UI](https://app.datadoghq.com/observability-pipelines), using the [API](https://docs.datadoghq.com/api/latest/observability-pipelines/), or with [Terraform](https://registry.terraform.io/providers/datadog/datadog/latest/docs/resources/observability_pipeline). The instructions in this section are for setting up the source in the UI.

- Enter the identifier for your Sumo Logic address. If you leave it blank, the default is used.
  - **Note**: Only enter the identifier for the address. Do **not** enter the actual address.

### Optional settings{% #optional-settings %}

In the **Decoding** dropdown menu, select whether your input format is raw **Bytes**, **JSON**, Graylog Extended Log Format (**Gelf**), or **Syslog**. If no decoding is selected, the decoding defaults to JSON.
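As an illustration, the same log event might look like the following under each decoding. These payloads are hypothetical examples, not output from any specific application:

```
Bytes:   2024-05-01T12:00:00Z INFO user logged in
JSON:    {"timestamp":"2024-05-01T12:00:00Z","level":"info","message":"user logged in"}
Gelf:    {"version":"1.1","host":"web01","short_message":"user logged in","level":6}
Syslog:  <14>1 2024-05-01T12:00:00Z web01 app - - - user logged in
```

Pick the decoding that matches what your applications already send to Sumo Logic, so the Worker can parse events without changes upstream.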

## Set secrets{% #set-secrets %}

These are the defaults used for secret identifiers and environment variables.

**Note**: If you enter secret identifiers and then choose to use environment variables, the environment variable is the identifier you entered, prefixed with `DD_OP_`. For example, if you entered `PASSWORD_1` for a password identifier, the environment variable for that password is `DD_OP_PASSWORD_1`.

{% tab title="Secrets Management" %}

- Sumo Logic address identifier:
  - References the bind address, such as `0.0.0.0:80`, that your Observability Pipelines Worker listens on to receive logs originally intended for the Sumo Logic HTTP Source.
  - The default identifier is `SOURCE_SUMO_LOGIC_ADDRESS`.

{% /tab %}

{% tab title="Environment Variables" %}

- Sumo Logic address:
  - The bind address that your Observability Pipelines Worker listens on to receive logs originally intended for the Sumo Logic HTTP Source. For example, `0.0.0.0:80`. **Note**: The `/receiver/v1/http/` path is automatically appended to the endpoint.
  - The default environment variable is `DD_OP_SOURCE_SUMO_LOGIC_ADDRESS`.

{% /tab %}
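If you use environment variables, a minimal sketch of setting the bind address before starting the Worker (the value shown is an example; substitute your own bind address):

```shell
# Example only: bind the Worker's Sumo Logic source to all interfaces on port 80.
export DD_OP_SOURCE_SUMO_LOGIC_ADDRESS="0.0.0.0:80"

# Confirm the value the Worker will pick up.
echo "$DD_OP_SOURCE_SUMO_LOGIC_ADDRESS"
```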

## Send logs to the Observability Pipelines Worker over Sumo Logic HTTP Source{% #send-logs-to-the-observability-pipelines-worker-over-sumo-logic-http-source %}

After you install the Observability Pipelines Worker and deploy the configuration, the Worker exposes an HTTP endpoint that uses the [Sumo Logic HTTP Source API](https://help.sumologic.com/docs/send-data/hosted-collectors/http-source/logs-metrics/upload-logs/).

To send logs through the Worker instead of directly to your Sumo Logic HTTP Source, point your existing log senders to the Worker:

```shell
curl -v -X POST -T [local_file_name] http://<OPW_HOST>/receiver/v1/http/<UNIQUE_HTTP_COLLECTOR_CODE>
```

`<OPW_HOST>` is the IP/URL of the host (or load balancer) associated with the Observability Pipelines Worker. For CloudFormation installs, the `LoadBalancerDNS` CloudFormation output has the correct URL to use. For Kubernetes installs, the internal DNS record of the Observability Pipelines Worker service can be used, such as `opw-observability-pipelines-worker.default.svc.cluster.local`.

`<UNIQUE_HTTP_COLLECTOR_CODE>` is the string that follows the last forward slash (`/`) in the upload URL for the HTTP source that you provided in the Install the Observability Pipelines Worker step.
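Putting the two values together, a Kubernetes install might look like the following sketch. The service name and namespace are assumptions based on a default install (verify yours with `kubectl get svc`), and the collector code remains a placeholder from your Sumo Logic setup:

```shell
# Assumed internal DNS name for the Worker service; adjust to your deployment.
OPW_HOST="opw-observability-pipelines-worker.default.svc.cluster.local"

# Upload a local log file to the Worker's Sumo Logic HTTP Source endpoint.
curl -v -X POST -T app.log "http://${OPW_HOST}/receiver/v1/http/<UNIQUE_HTTP_COLLECTOR_CODE>"
```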

At this point, your logs should be flowing to the Worker, processed by the pipeline, and delivered to the configured destination.
