---
title: OpenTelemetry Source
description: Collect logs or metrics from your OpenTelemetry Collector with the Observability Pipelines OpenTelemetry source.
breadcrumbs: Docs > Observability Pipelines > Sources > OpenTelemetry Source
---

# OpenTelemetry Source

{% callout %}
##### Important note for users on the following Datadog site: app.ddog-gov.com

{% alert level="danger" %}
This product is not supported for your selected [Datadog site](https://docs.datadoghq.com/getting_started/site).
{% /alert %}

{% /callout %}
Available for: {% icon name="icon-logs" /%} Logs | {% icon name="icon-metrics" /%} Metrics
{% callout %}
##### Join the Preview!

Sending metrics to Observability Pipelines is in Preview. Fill out the form to request access.

[Request Access](https://www.datadoghq.com/product-preview/metrics-ingestion-and-cardinality-control-in-observability-pipelines/)
{% /callout %}

## Overview{% #overview %}

Use the Observability Pipelines OpenTelemetry (OTel) source to collect logs or metrics (in Preview) from your OTel Collector over HTTP or gRPC. Select and set up this source when you set up a pipeline.

**Notes**:

- If you are using the Datadog Distribution of OpenTelemetry (DDOT) Collector, use the OpenTelemetry source to send data to Observability Pipelines.
- If you are using the Splunk HEC Distribution of the OpenTelemetry Collector, use the [Splunk HEC source](https://docs.datadoghq.com/observability_pipelines/sources/splunk_hec/#send-logs-from-the-splunk-distributor-of-the-opentelemetry-collector-to-observability-pipelines) to send logs to Observability Pipelines.

### When to use this source{% #when-to-use-this-source %}

Common scenarios when you might use this source:

- You are using [OpenTelemetry](https://opentelemetry.io/docs/collector/) as your standard method for collecting and routing data, and you want to normalize that data before routing it to different destinations.
- You are collecting data from multiple sources and want to aggregate them in a central place for consistent processing.
  - For example, if some of your services export logs using OpenTelemetry, while other services use Datadog Agents or other Observability Pipelines [sources](https://docs.datadoghq.com/observability_pipelines/sources/), you can aggregate all of your data in Observability Pipelines for processing.

## Prerequisites{% #prerequisites %}

If your forwarders are globally configured to enable SSL, you need the appropriate TLS certificates and the password you used to create your private key.

## Setup{% #setup %}

Set up this source when you [set up a pipeline](https://docs.datadoghq.com/observability_pipelines/configuration/set_up_pipelines/). You can set up a pipeline in the [UI](https://app.datadoghq.com/observability-pipelines), using the [API](https://docs.datadoghq.com/api/latest/observability-pipelines/), or with [Terraform](https://registry.terraform.io/providers/datadog/datadog/latest/docs/resources/observability_pipeline). The instructions in this section are for setting up the source in the UI.

{% alert level="danger" %}
Only enter the identifiers for the OpenTelemetry HTTP and gRPC listener addresses and, if applicable, the TLS key pass. Do not enter the actual values.
{% /alert %}

1. Enter the identifier for your HTTP listener address. If you leave it blank, the default is used.
1. Enter the identifier for your gRPC listener address. If you leave it blank, the default is used.

### Optional TLS settings{% #optional-tls-settings %}

Toggle the switch to enable TLS. The following certificate and key files are required for TLS. **Note**: All file paths are relative to the configuration data directory, which is `/var/lib/observability-pipelines-worker/config/` by default. See [Advanced Worker Configurations](https://docs.datadoghq.com/observability_pipelines/configuration/install_the_worker/advanced_worker_configurations/#bootstrap-options) for more information. The files must be owned by the `observability-pipelines-worker` group and `observability-pipelines-worker` user, or at least readable by that group or user.

- Enter the identifier for your OTel TLS key pass. If you leave it blank, the default is used.
- `Server Certificate Path`: The path to the certificate file that has been signed by your Certificate Authority (CA) root file in DER or PEM (X.509).
- `CA Certificate Path`: The path to the certificate file that is your Certificate Authority (CA) root file in DER or PEM (X.509).
- `Private Key Path`: The path to the `.key` private key file that belongs to your Server Certificate Path in DER or PEM (PKCS #8) format.
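The ownership and readability requirement above can be sketched as follows. This is a demonstration in a temporary directory with placeholder filenames, since the real paths and the `observability-pipelines-worker` user exist only on a Worker host; on that host you would target the files under `/var/lib/observability-pipelines-worker/config/` and the Worker's user and group instead:

```shell
# Demonstration: create placeholder certificate and key files and give them
# owner/group-readable permissions, as the Worker expects for its TLS files.
CONFIG_DIR="$(mktemp -d)"
touch "$CONFIG_DIR/server.crt" "$CONFIG_DIR/ca.crt" "$CONFIG_DIR/server.key"

# On a Worker host, also transfer ownership to the Worker's user and group:
#   chown observability-pipelines-worker:observability-pipelines-worker <files>
chmod 640 "$CONFIG_DIR/server.crt" "$CONFIG_DIR/ca.crt" "$CONFIG_DIR/server.key"

ls -l "$CONFIG_DIR"
```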

{% image
   source="https://datadog-docs.imgix.net/images/observability_pipelines/sources/otel_settings.3a2ebf246f37024001e8e2f6ef02e2f3.png?auto=format"
   alt="The OpenTelemetry source settings" /%}

## Set secrets{% #set-secrets %}

These are the defaults used for secret identifiers and environment variables.

**Note**: If you enter secret identifiers and then choose to use environment variables, the environment variable is the identifier you entered prefixed with `DD_OP_`. For example, if you entered `PASSWORD_1` as a password identifier, the environment variable for that password is `DD_OP_PASSWORD_1`.
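The naming rule can be sketched in a few lines of Python (a minimal illustration; `PASSWORD_1` is the example identifier from the note above):

```python
def to_env_var(identifier: str) -> str:
    """Map a secret identifier to its Worker environment variable name."""
    return f"DD_OP_{identifier}"

print(to_env_var("PASSWORD_1"))  # DD_OP_PASSWORD_1
```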

{% tab title="Secrets Management" %}

- HTTP address identifier:
  - References the HTTP socket address on which the Observability Pipelines Worker listens for data from the OTel collector.
  - The default identifier is `SOURCE_OTEL_HTTP_ADDRESS`.
- gRPC address identifier:
  - References the gRPC socket address on which the Observability Pipelines Worker listens for data from the OTel collector.
  - The default identifier is `SOURCE_OTEL_GRPC_ADDRESS`.
- TLS passphrase identifier (when TLS is enabled):
  - The default identifier is `SOURCE_OTEL_KEY_PASS`.

{% /tab %}

{% tab title="Environment Variables" %}
You must provide both HTTP and gRPC endpoints. Configure your OTLP exporters to point to one of these endpoints. See [Send data to the Observability Pipelines Worker](https://docs.datadoghq.com/observability_pipelines/sources/opentelemetry/#send-data-to-the-observability-pipelines-worker) for more information.

- HTTP listener address

  - The Observability Pipelines Worker listens to this socket address to receive data from the OTel collector.
  - The default environment variable is `DD_OP_SOURCE_OTEL_HTTP_ADDRESS`.

- gRPC listener address

  - The Observability Pipelines Worker listens to this socket address to receive data from the OTel collector.
  - The default environment variable is `DD_OP_SOURCE_OTEL_GRPC_ADDRESS`.

If TLS is enabled:

- OpenTelemetry TLS passphrase
  - The default environment variable is `DD_OP_SOURCE_OTEL_KEY_PASS`.

{% /tab %}

## Send data to the Observability Pipelines Worker{% #send-data-to-the-observability-pipelines-worker %}

Configure your OTel exporters to point to the Worker's HTTP or gRPC endpoint. The Worker exposes a configurable listener port for each protocol.

{% alert level="info" %}
The ports 4318 (HTTP) and 4317 (gRPC) shown below are examples only. You can configure the port value for either protocol in the Worker. Ensure your OTel exporters match the port value you choose.
{% /alert %}

{% tab title="Logs" %}
### HTTP configuration example{% #http-configuration-example %}

The Worker exposes the HTTP endpoint on port 4318, which is the default port. You can configure the port value in the Worker.

For example, to configure an OTel log exporter over HTTP in Python:

```python
from opentelemetry.exporter.otlp.proto.http._log_exporter import OTLPLogExporter

http_exporter = OTLPLogExporter(
    endpoint="http://worker:4318/v1/logs"
)
```

### gRPC configuration example{% #grpc-configuration-example %}

The Worker exposes the gRPC endpoint on port 4317, which is the default port. You can configure the port value in the Worker.

For example, to configure an OTel log exporter over gRPC in Python:

```python
from opentelemetry.exporter.otlp.proto.grpc._log_exporter import OTLPLogExporter

grpc_exporter = OTLPLogExporter(
    endpoint="grpc://worker:4317"
)
```

Set the listener address environment variables to the following default values. If you configured different port values in the Worker, use those instead.

- HTTP listener address: `worker:4318`
- gRPC listener address: `worker:4317`
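For example, on the host where you run the Worker, the defaults above can be set as environment variables (a sketch; `worker` is the placeholder hostname used throughout these examples):

```shell
# Default OTel source listener addresses; change the ports here if you
# configured different values in the Worker.
export DD_OP_SOURCE_OTEL_HTTP_ADDRESS="worker:4318"
export DD_OP_SOURCE_OTEL_GRPC_ADDRESS="worker:4317"
```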

{% /tab %}

{% tab title="Metrics" %}
### HTTP configuration example{% #http-configuration-example %}

The Worker exposes the HTTP endpoint on port 4318, which is the default port. You can configure the port value in the Worker.

For example, to configure an OTel metric exporter over HTTP in Python:

```python
from opentelemetry.exporter.otlp.proto.http.metric_exporter import OTLPMetricExporter

http_exporter = OTLPMetricExporter(
    endpoint="http://worker:4318/v1/metrics"
)
```

### gRPC configuration example{% #grpc-configuration-example %}

The Worker exposes the gRPC endpoint on port 4317, which is the default port. You can configure the port value in the Worker.

For example, to configure an OTel metric exporter over gRPC in Python:

```python
from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import OTLPMetricExporter

grpc_exporter = OTLPMetricExporter(
    endpoint="grpc://worker:4317"
)
```

Set the listener address environment variables to the following default values. If you configured different port values in the Worker, use those instead.

- HTTP listener address: `worker:4318`
- gRPC listener address: `worker:4317`

{% /tab %}

## Send data from the Datadog Distribution of OpenTelemetry Collector to Observability Pipelines{% #send-data-from-the-datadog-distribution-of-opentelemetry-collector-to-observability-pipelines %}

{% tab title="Logs" %}
To send logs from the Datadog Distribution of the OpenTelemetry (DDOT) Collector:

1. Deploy the DDOT Collector using Helm. See [Install the DDOT Collector as a Kubernetes DaemonSet](https://docs.datadoghq.com/opentelemetry/setup/ddot_collector/install/kubernetes_daemonset/?tab=datadogoperator) for instructions.
1. [Set up a pipeline](https://docs.datadoghq.com/observability_pipelines/configuration/set_up_pipelines/) on Observability Pipelines using the OpenTelemetry source.
   1. (Optional) Datadog recommends adding an [Edit Fields processor](https://docs.datadoghq.com/observability_pipelines/processors/edit_fields#add-field) to the pipeline that appends the field `op_otel_ddot:true`.
   1. When you install the Worker, for the OpenTelemetry source environment variables:
      1. Set your HTTP listener to `0.0.0.0:4318`.
      1. Set your gRPC listener to `0.0.0.0:4317`.
   1. After you install the Worker and deploy the pipeline, update the OpenTelemetry Collector's [`otel-config.yaml`](https://docs.datadoghq.com/opentelemetry/setup/ddot_collector/install/kubernetes_daemonset/?tab=helm#configure-the-opentelemetry-collector) to include an exporter that sends logs to Observability Pipelines. For example:
      ```yaml
      exporters:
          otlphttp:
              endpoint: http://opw-observability-pipelines-worker.<NAMESPACE>.svc.cluster.local:4318
      ...
      service:
          pipelines:
              logs:
                  exporters: [otlphttp]
      ```
      Replace `<NAMESPACE>` with the Kubernetes namespace where the Observability Pipelines Worker is deployed (for example, `default`).
   1. Redeploy the Datadog Agent with the updated [`otel-config.yaml`](https://docs.datadoghq.com/opentelemetry/setup/ddot_collector/install/kubernetes_daemonset/?tab=helm#configure-the-opentelemetry-collector). For example, if the Agent is installed in Kubernetes:
      ```shell
      helm upgrade --install datadog-agent datadog/datadog \
      --values ./agent.yaml \
      --set-file datadog.otelCollector.config=./otel-config.yaml
      ```

**Notes**:

- Because DDOT is sending logs to Observability Pipelines, and not the Datadog Agent, the following settings do not work for sending logs from DDOT to Observability Pipelines:
  - `DD_OBSERVABILITY_PIPELINES_WORKER_LOGS_ENABLED`
  - `DD_OBSERVABILITY_PIPELINES_WORKER_LOGS_URL`
- Logs sent from DDOT might have nested objects that prevent Datadog from parsing the logs correctly. To resolve this, Datadog recommends using the [Custom Processor](https://docs.datadoghq.com/observability_pipelines/processors/custom_processor) to flatten the nested `resource` object.
- If the DDOT Collector and the Observability Pipelines Worker are running on the same host, their default OTLP receiver ports (4317/4318) may conflict. In a typical Kubernetes deployment, the Collector and the Worker run in separate pods, so this is not an issue.

{% /tab %}

{% tab title="Metrics" %}
To send metrics from the Datadog Distribution of the OpenTelemetry (DDOT) Collector:

1. Deploy the DDOT Collector using Helm. See [Install the DDOT Collector as a Kubernetes DaemonSet](https://docs.datadoghq.com/opentelemetry/setup/ddot_collector/install/kubernetes_daemonset/?tab=datadogoperator) for instructions.
1. [Set up a pipeline](https://docs.datadoghq.com/observability_pipelines/configuration/set_up_pipelines/) on Observability Pipelines using the OpenTelemetry source.
   1. (Optional) Datadog recommends adding an [Edit Fields processor](https://docs.datadoghq.com/observability_pipelines/processors/edit_fields#add-field) to the pipeline that appends the field `op_otel_ddot:true`.
   1. When you install the Worker, for the OpenTelemetry source environment variables:
      1. Set your HTTP listener to `0.0.0.0:4318`.
      1. Set your gRPC listener to `0.0.0.0:4317`.
   1. After you install the Worker and deploy the pipeline, update the OpenTelemetry Collector's [`otel-config.yaml`](https://docs.datadoghq.com/opentelemetry/setup/ddot_collector/install/kubernetes_daemonset/?tab=helm#configure-the-opentelemetry-collector) to include an exporter that sends metrics to Observability Pipelines. For example:
      ```yaml
      exporters:
          otlphttp:
              endpoint: http://opw-observability-pipelines-worker.<NAMESPACE>.svc.cluster.local:4318
      ...
      service:
          pipelines:
              metrics:
                  exporters: [otlphttp]
      ```
      Replace `<NAMESPACE>` with the Kubernetes namespace where the Observability Pipelines Worker is deployed (for example, `default`).
   1. Redeploy the Datadog Agent with the updated [`otel-config.yaml`](https://docs.datadoghq.com/opentelemetry/setup/ddot_collector/install/kubernetes_daemonset/?tab=helm#configure-the-opentelemetry-collector). For example, if the Agent is installed in Kubernetes:
      ```shell
      helm upgrade --install datadog-agent datadog/datadog \
      --values ./agent.yaml \
      --set-file datadog.otelCollector.config=./otel-config.yaml
      ```

**Notes**:

- Because DDOT is sending metrics to Observability Pipelines, and not the Datadog Agent, the following settings do not work for sending metrics from DDOT to Observability Pipelines:
  - `DD_OBSERVABILITY_PIPELINES_WORKER_METRICS_ENABLED`
  - `DD_OBSERVABILITY_PIPELINES_WORKER_METRICS_URL`
- Metrics sent from DDOT might have nested objects that prevent Datadog from parsing the metrics correctly. To resolve this, Datadog recommends using the [Custom Processor](https://docs.datadoghq.com/observability_pipelines/processors/custom_processor) to flatten the nested `resource` object.
- If the DDOT Collector and the Observability Pipelines Worker are running on the same host, their default OTLP receiver ports (4317/4318) may conflict. In a typical Kubernetes deployment, the Collector and the Worker run in separate pods, so this is not an issue.

{% /tab %}

## Further Reading{% #further-reading %}

- [Manage metric volume and tags in your environment with Observability Pipelines](https://www.datadoghq.com/blog/manage-metrics-cost-control-with-observability-pipelines)
