---
title: Google Pub/Sub Destination
description: Publish logs to the Google Pub/Sub messaging system with the Observability Pipelines Google Pub/Sub destination.
breadcrumbs: Docs > Observability Pipelines > Destinations > Google Pub/Sub Destination
---

# Google Pub/Sub Destination

{% callout %}
# Important note for users on the following Datadog site: app.ddog-gov.com

{% alert level="danger" %}
This product is not supported for your selected [Datadog site](https://docs.datadoghq.com/getting_started/site.md).
{% /alert %}

{% /callout %}
Available for: {% icon name="icon-logs" /%} Logs

## Overview{% #overview %}

Use Observability Pipelines' Google Pub/Sub destination to publish logs to the Google Pub/Sub messaging system, so the logs can be sent to downstream services, data lakes, or custom applications.

### When to use this destination{% #when-to-use-this-destination %}

Common scenarios when you might use this destination:

- For analytics pipelines: Route logs downstream into Google BigQuery, a data lake, or custom machine learning workflows.
- For event-driven processing: Publish logs to a Pub/Sub topic so that Google Cloud Functions, Cloud Run functions, and Dataflow jobs can carry out actions in real time based on the log data.

## Prerequisites{% #prerequisites %}

Before you configure the destination, you need the following:

- Pub/Sub subscription: Create a Pub/Sub topic and at least one subscription to consume the messages.
- Authentication: Set up a [standard Google Cloud authentication method](https://cloud.google.com/docs/authentication#auth-flowchart). These options include:
  - A service account key (JSON file)
  - A workload identity (Google Kubernetes Engine (GKE))
- IAM roles:
  - `roles/pubsub.publisher` is required for publishing events.
  - `roles/pubsub.viewer` is recommended for health checks.
    - If the role is missing, the error `Healthcheck endpoint forbidden` is logged and the Worker proceeds as usual.
  - See [Available Pub/Sub roles](https://cloud.google.com/pubsub/docs/access-control#roles) for more information.

### Set up a service account for the Worker{% #set-up-a-service-account-for-the-worker %}

A service account in Google Cloud is a type of account used only by applications or services.

- It has its own identity and credentials (a JSON key file).
- You assign it IAM roles so it can access specific resources.
- In this case, the Observability Pipelines Worker uses a service account to authenticate and send logs to Pub/Sub on your behalf.

To authenticate using a service account:

1. In the Google Cloud console, navigate to **IAM & Admin** > **[Service Accounts](https://console.cloud.google.com/iam-admin/serviceaccounts)**.
1. Click **+ Create service account**.
1. Enter a name and click **Create and continue**.
1. Assign roles:
   - **Pub/Sub Publisher**
   - **Pub/Sub Viewer**
1. Click **Done**.

#### Authentication methods{% #authentication-methods %}

After you've created the service account with the correct roles, set up one of the following authentication methods:

##### Option A: Workload Identity method (for GKE, recommended){% #option-a-workload-identity-method-for-gke-recommended %}

1. Bind the service account to a Kubernetes service account (KSA).
1. Allow the service account to be impersonated by that KSA.
1. Annotate the KSA so GKE knows which Google service account to use.
1. Authentication then comes from the Google Cloud metadata server.
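As a sketch, the Workload Identity binding boils down to a single annotation on the KSA. The annotation key `iam.gke.io/gcp-service-account` is the standard GKE Workload Identity annotation; the account and project names below are hypothetical:

```python
# Sketch: build the KSA metadata annotation that tells GKE Workload Identity
# which Google service account (GSA) the Worker's pods should impersonate.
# "op-worker" and "my-project" are placeholder names for illustration.

def workload_identity_annotation(gsa_name, project_id):
    """Return the KSA annotation mapping for Workload Identity."""
    gsa_email = f"{gsa_name}@{project_id}.iam.gserviceaccount.com"
    return {"iam.gke.io/gcp-service-account": gsa_email}

annotation = workload_identity_annotation("op-worker", "my-project")
print(annotation)
# {'iam.gke.io/gcp-service-account': 'op-worker@my-project.iam.gserviceaccount.com'}
```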

##### Option B: Attach the GSA directly to a VM (for Google Compute Engine){% #option-b-attach-the-gsa-directly-to-a-vm-for-google-compute-engine %}

Use this authentication method if you're running the Observability Pipelines Worker on a Google Compute Engine (GCE) VM.

- When you create or edit the VM, specify the Google service account under **Identity and API access** > **Service account**.

##### Option C: Run the service as the GSA (for Cloud Run or Cloud Functions){% #option-c-run-the-service-as-the-gsa-for-cloud-run-or-cloud-functions %}

Use this authentication method if you're deploying the Worker as a Cloud Run service or Cloud Function.

- In the Cloud Run or Cloud Functions deployment settings, set the **Execution service account** to the Google service account you created.

##### Option D: JSON key method (any environment without identity bindings){% #option-d-json-key-method-any-environment-without-identity-bindings %}

1. Open the new service account and navigate to **Keys** > **Add key** > **Create new key**.
1. Choose the JSON format.
1. Save the downloaded JSON file in a secure location.
1. After you install the Worker, copy or mount the JSON file into `DD_OP_DATA_DIR/config/`. You reference this file in the Google Pub/Sub destination's **Credentials path** field when you set up the destination in the Pipelines UI.
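Before mounting the key, it can help to sanity-check that the downloaded file really is a service account key. The field names below are the standard fields of a Google service account JSON key; the validation helper itself is an illustrative sketch, not part of the Worker:

```python
import json

# Sketch: verify a downloaded key file has the fields service account
# authentication needs before placing it in DD_OP_DATA_DIR/config/.
REQUIRED_FIELDS = {"type", "project_id", "private_key", "client_email"}

def validate_key_file(contents):
    """Return True if the JSON key looks like a usable service account key."""
    key = json.loads(contents)
    missing = REQUIRED_FIELDS - key.keys()
    return key.get("type") == "service_account" and not missing

# Placeholder key contents for illustration:
sample = ('{"type": "service_account", "project_id": "my-project", '
          '"private_key": "...", '
          '"client_email": "op-worker@my-project.iam.gserviceaccount.com"}')
print(validate_key_file(sample))  # True
```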

## Setup{% #setup %}

Set up the Google Pub/Sub destination and its environment variables when you [set up a pipeline](https://app.datadoghq.com/observability-pipelines). The information below is configured in the Pipelines UI.

### Set up the destination{% #set-up-the-destination %}

1. Enter the destination project name.
   - This is the GCP project where your Pub/Sub topic lives.
1. Enter the topic.
   - This is the Pub/Sub topic to publish logs to.
1. In the **Encoding** dropdown menu, select whether you want to encode your pipeline's output in **JSON** or **Raw message**.
   - **JSON**: Logs are structured as JSON (recommended if downstream tools need structured data).
   - **Raw**: Logs are sent as raw strings (preserves the original format).
1. If you have a credentials JSON file, enter the path to your credentials JSON file.
   - If you're using a service account JSON file, enter the path `DD_OP_DATA_DIR/config/<your-service-account>.json`.
   - Or set the `GOOGLE_APPLICATION_CREDENTIALS` environment variable.
   - Credentials are automatically managed if you're using [workload identity](https://cloud.google.com/kubernetes-engine/docs/concepts/workload-identity) on GKE.
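To illustrate the difference between the two encoding options, here is a sketch; the event fields and the exact bytes the Worker emits are assumptions for illustration:

```python
import json

# A hypothetical structured log event:
event = {"message": "disk usage at 91%", "host": "web-1", "status": "warn"}

# JSON encoding: the whole event is serialized, so downstream consumers
# (BigQuery, Dataflow, custom subscribers) can parse individual fields.
json_payload = json.dumps(event, sort_keys=True).encode("utf-8")

# Raw message encoding: the message text is sent as-is, preserving the
# original string but dropping the surrounding structure.
raw_payload = event["message"].encode("utf-8")

print(json_payload)
print(raw_payload)
```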

#### Optional settings{% #optional-settings %}

##### Enable TLS{% #enable-tls %}

Toggle the switch to **Enable TLS**.

- If you are using Secrets Management, enter the identifier for the key passphrase. See [Set secrets](#set-secrets) for the default used if the field is left blank.
- The following certificate and key files are required:
  - `Server Certificate Path`: The path to the certificate file that has been signed by your Certificate Authority (CA) root file in DER, PEM, or CRT (X.509).
  - `CA Certificate Path`: The path to the certificate file that is your Certificate Authority (CA) root file in DER, PEM, or CRT (X.509).
  - `Private Key Path`: The path to the `.key` private key file that belongs to your Server Certificate Path in DER, PEM, or CRT (PKCS #8) format.
  - **Notes**:
    - The configuration data directory `/var/lib/observability-pipelines-worker/config/` is automatically appended to the file paths. See [Advanced Worker Configurations](https://docs.datadoghq.com/observability_pipelines/configuration/install_the_worker/advanced_worker_configurations.md) for more information.
    - The files must be readable by the `observability-pipelines-worker` group and user.

##### Buffering{% #buffering %}

Toggle the switch to enable **Buffering Options**. Enable a configurable buffer on your destination to ensure intermittent latency or an outage at the destination doesn't create immediate backpressure, and allow events to continue to be ingested from your source. Disk buffers can also increase pipeline durability by writing data to disk, ensuring buffered data persists through a Worker restart. See [Destination buffers](https://docs.datadoghq.com/observability_pipelines/scaling_and_performance/buffering_and_backpressure.md#destination-buffers) for more information.

- If left unconfigured, your destination uses a memory buffer with a capacity of 500 events.
- To configure a buffer on your destination:
  1. Select the buffer type you want to set (**Memory** or **Disk**).
  1. Enter the buffer size and select the unit.
     1. Maximum memory buffer size is 128 GB.
     1. Maximum disk buffer size is 500 GB.
  1. In the **Behavior on full buffer** dropdown menu, select whether you want to **block** events or **drop new events** when the buffer is full.
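The two full-buffer behaviors can be sketched as follows; this illustrates the semantics only and is not the Worker's actual implementation:

```python
from collections import deque

class Buffer:
    """Illustrative bounded buffer with the two "Behavior on full buffer" policies."""

    def __init__(self, capacity, on_full):
        assert on_full in ("block", "drop_new")
        self.events = deque()
        self.capacity = capacity
        self.on_full = on_full

    def push(self, event):
        """Return True if the event was accepted."""
        if len(self.events) < self.capacity:
            self.events.append(event)
            return True
        if self.on_full == "drop_new":
            return False  # the newest event is discarded
        # "block": the source must wait, creating backpressure upstream
        raise BlockingIOError("buffer full: source must wait")

buf = Buffer(capacity=2, on_full="drop_new")
print([buf.push(e) for e in ("a", "b", "c")])  # [True, True, False]
```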

{% image
   source="https://docs.dd-static.net/images/observability_pipelines/destinations/google_pubsub_settings.0cf669a55345e21f5ed25f980befc402.png?auto=format&fit=max&w=850 1x, https://docs.dd-static.net/images/observability_pipelines/destinations/google_pubsub_settings.0cf669a55345e21f5ed25f980befc402.png?auto=format&fit=max&w=850&dpr=2 2x"
   alt="The google pub/sub destination with sample values" /%}

### Set secrets{% #set-secrets %}

These are the defaults used for secret identifiers and environment variables.

**Note**: If you enter secret identifiers and then choose to use environment variables, the environment variable is the identifier you entered, prepended with `DD_OP_`. For example, if you entered `PASSWORD_1` for a password identifier, the environment variable for that password is `DD_OP_PASSWORD_1`.
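The naming rule above can be expressed as a one-line helper (illustrative only):

```python
# Sketch of the identifier-to-environment-variable naming rule: the secret
# identifier is prefixed with DD_OP_.

def op_env_var(identifier):
    return f"DD_OP_{identifier}"

print(op_env_var("PASSWORD_1"))                       # DD_OP_PASSWORD_1
print(op_env_var("DESTINATION_GCP_PUBSUB_KEY_PASS"))  # DD_OP_DESTINATION_GCP_PUBSUB_KEY_PASS
```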

{% tab title="Secrets Management" %}

- (Optional) Google Pub/Sub endpoint URL identifier:
  - By default the Worker sends data to the global endpoint: `https://pubsub.googleapis.com`.
  - If your Pub/Sub topic is region-specific, configure the Google Pub/Sub alternative endpoint URL with the regional endpoint. See [About Pub/Sub endpoints](https://cloud.google.com/pubsub/docs/reference/service_apis_overview#pubsub_endpoints) for more information. Enter the configured endpoint URL into your secrets manager.
  - The default identifier is `DESTINATION_GCP_PUBSUB_ENDPOINT_URL`.
- Google Pub/Sub TLS passphrase identifier (when TLS is enabled):
  - The default identifier is `DESTINATION_GCP_PUBSUB_KEY_PASS`.

{% /tab %}

{% tab title="Environment Variables" %}
#### Optional alternative Pub/Sub endpoints{% #optional-alternative-pubsub-endpoints %}

{% image
   source="https://docs.dd-static.net/images/observability_pipelines/destinations/google_pubsub_env_var.25fae691b6f3c86a2c3b3d9dc1918b42.png?auto=format&fit=max&w=850 1x, https://docs.dd-static.net/images/observability_pipelines/destinations/google_pubsub_env_var.25fae691b6f3c86a2c3b3d9dc1918b42.png?auto=format&fit=max&w=850&dpr=2 2x"
   alt="The install page showing the Google Pub/Sub environment variable field" /%}

By default the Worker sends data to the global endpoint: `https://pubsub.googleapis.com`.

If your Pub/Sub topic is region-specific, configure the Google Pub/Sub alternative endpoint URL with the regional endpoint. See [About Pub/Sub endpoints](https://cloud.google.com/pubsub/docs/reference/service_apis_overview#pubsub_endpoints) for more information.

The default environment variable is `DD_OP_DESTINATION_GCP_PUBSUB_ENDPOINT_URL`.
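Pub/Sub locational endpoints generally follow the pattern `https://<region>-pubsub.googleapis.com`; verify the exact endpoint for your region in the Pub/Sub endpoint reference linked above. As a sketch:

```python
# Sketch: derive the endpoint URL to put in the environment variable.
# The regional pattern is an assumption based on Pub/Sub's locational
# endpoints; confirm it for your region before use.

GLOBAL_ENDPOINT = "https://pubsub.googleapis.com"

def pubsub_endpoint(region=None):
    """Return the global endpoint, or a locational endpoint for a region."""
    if region is None:
        return GLOBAL_ENDPOINT
    return f"https://{region}-pubsub.googleapis.com"

print(pubsub_endpoint())               # https://pubsub.googleapis.com
print(pubsub_endpoint("us-central1"))  # https://us-central1-pubsub.googleapis.com
```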

#### TLS (when enabled){% #tls-when-enabled %}

- Google Pub/Sub TLS passphrase:
  - The default environment variable is `DD_OP_DESTINATION_GCP_PUBSUB_KEY_PASS`.

{% /tab %}

## Troubleshooting{% #troubleshooting %}

Common issues and fixes:

- Healthcheck forbidden
  - Check the `roles/pubsub.viewer` IAM role.
- Permission denied
  - Ensure the service account has `roles/pubsub.publisher`.
- Authentication errors
  - Verify the credentials JSON path or GKE Workload Identity setup.
- Dropped events
  - Check the `pipelines.component_discarded_events_total` and `pipelines.buffer_discarded_events_total` metrics.
  - Increase the buffer size or fix misconfigured filters as needed to resolve the issue.
- High latency
  - Reduce the buffer size and timeout, or scale your Workers.
- No logs are arriving
  - In your Google Pub/Sub destination setup, double-check the topic name, project, and Pub/Sub endpoint (global vs regional).

## Metrics{% #metrics %}

### Worker health metrics{% #worker-health-metrics %}

See the [Observability Pipelines Metrics](https://docs.datadoghq.com/observability_pipelines/monitoring_and_troubleshooting/pipeline_usage_metrics.md) for a full list of available health metrics.

### Component metrics{% #component-metrics %}

Monitor the health of your destination with the following key metrics:

{% dl %}

{% dt %}
`pipelines.component_sent_events_total`
{% /dt %}

{% dd %}
Events successfully delivered.
{% /dd %}

{% dt %}
`pipelines.component_discarded_events_total`
{% /dt %}

{% dd %}
Events dropped.
{% /dd %}

{% dt %}
`pipelines.component_errors_total`
{% /dt %}

{% dd %}
Errors in the destination component.
{% /dd %}

{% dt %}
`pipelines.component_sent_events_bytes_total`
{% /dt %}

{% dd %}
Total event bytes sent.
{% /dd %}

{% dt %}
`pipelines.utilization`
{% /dt %}

{% dd %}
Worker resource usage.
{% /dd %}

{% /dl %}

### Buffer metrics (when enabled){% #buffer-metrics-when-enabled %}

These metrics are specific to destination buffers, located upstream of a destination. Each destination emits its own respective buffer metrics.

- Use the `component_id` tag to filter or group by individual components.
- Use the `component_type` tag to filter or group by the destination type, such as `datadog_logs` for the Datadog Logs destination.

{% dl %}

{% dt %}
`pipelines.buffer_size_events`
{% /dt %}

{% dd %}
**Description**: Number of events in a destination's buffer.
{% /dd %}

{% dd %}
**Metric type**: gauge
{% /dd %}

{% dt %}
`pipelines.buffer_size_bytes`
{% /dt %}

{% dd %}
**Description**: Number of bytes in a destination's buffer.
{% /dd %}

{% dd %}
**Metric type**: gauge
{% /dd %}

{% dt %}
`pipelines.buffer_received_events_total`
{% /dt %}

{% dd %}
**Description**: Events received by a destination's buffer. **Note**: This metric represents the count per second and not the cumulative total, even though `total` is in the metric name.
{% /dd %}

{% dd %}
**Metric type**: counter
{% /dd %}

{% dt %}
`pipelines.buffer_received_bytes_total`
{% /dt %}

{% dd %}
**Description**: Bytes received by a destination's buffer. **Note**: This metric represents the count per second and not the cumulative total, even though `total` is in the metric name.
{% /dd %}

{% dd %}
**Metric type**: counter
{% /dd %}

{% dt %}
`pipelines.buffer_sent_events_total`
{% /dt %}

{% dd %}
**Description**: Events sent downstream by a destination's buffer. **Note**: This metric represents the count per second and not the cumulative total, even though `total` is in the metric name.
{% /dd %}

{% dd %}
**Metric type**: counter
{% /dd %}

{% dt %}
`pipelines.buffer_sent_bytes_total`
{% /dt %}

{% dd %}
**Description**: Bytes sent downstream by a destination's buffer. **Note**: This metric represents the count per second and not the cumulative total, even though `total` is in the metric name.
{% /dd %}

{% dd %}
**Metric type**: counter
{% /dd %}

{% dt %}
`pipelines.buffer_discarded_events_total`
{% /dt %}

{% dd %}
**Description**: Events discarded by the buffer. **Note**: This metric represents the count per second and not the cumulative total, even though `total` is in the metric name.
{% /dd %}

{% dd %}
**Metric type**: counter
{% /dd %}

{% dd %}
**Additional tags**: `intentional:true` means an incoming event was dropped because the buffer was configured to drop the newest logs when it's full. `intentional:false` means the event was dropped due to an error.
{% /dd %}

{% dt %}
`pipelines.buffer_discarded_bytes_total`
{% /dt %}

{% dd %}
**Description**: Bytes discarded by the buffer. **Note**: This metric represents the count per second and not the cumulative total, even though `total` is in the metric name.
{% /dd %}

{% dd %}
**Metric type**: counter
{% /dd %}

{% dd %}
**Additional tags**: `intentional:true` means an incoming event was dropped because the buffer was configured to drop the newest logs when it's full. `intentional:false` means the event was dropped due to an error.
{% /dd %}

{% /dl %}

#### Deprecated buffer metrics{% #deprecated-buffer-metrics %}

These metrics are still emitted by the Observability Pipelines Worker for backwards compatibility. Datadog recommends using the replacements when possible.

{% dl %}

{% dt %}
`pipelines.buffer_events`
{% /dt %}

{% dd %}
**Description**: Number of events in a destination's buffer. Use `pipelines.buffer_size_events` instead.
{% /dd %}

{% dd %}
**Metric type**: gauge
{% /dd %}

{% dt %}
`pipelines.buffer_byte_size`
{% /dt %}

{% dd %}
**Description**: Number of bytes in a destination's buffer. Use `pipelines.buffer_size_bytes` instead.
{% /dd %}

{% dd %}
**Metric type**: gauge
{% /dd %}

{% /dl %}

### Event batching{% #event-batching %}

A batch of events is flushed when one of these parameters is met. See [event batching](https://docs.datadoghq.com/observability_pipelines/destinations.md#event-batching) for more information.

| Maximum Events | Maximum Size (MB) | Timeout (seconds) |
| -------------- | ----------------- | ----------------- |
| 1,000          | 10                | 1                 |
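The flush conditions in the table can be sketched as a single predicate; this illustrates the "whichever comes first" semantics and is not the Worker's implementation:

```python
# Sketch: a batch is flushed when it reaches 1,000 events, 10 MB, or a
# 1-second timeout, whichever condition is met first.

MAX_EVENTS = 1_000
MAX_BYTES = 10 * 1024 * 1024  # 10 MB
TIMEOUT_S = 1.0

def should_flush(n_events, n_bytes, age_s):
    """Return True if any flush condition has been reached."""
    return n_events >= MAX_EVENTS or n_bytes >= MAX_BYTES or age_s >= TIMEOUT_S

print(should_flush(1_000, 512, 0.2))             # True  (event count reached)
print(should_flush(10, 10 * 1024 * 1024, 0.2))   # True  (size reached)
print(should_flush(10, 512, 0.2))                # False (keep batching)
```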
