Google Pub/Sub Destination
This product is not supported for your selected Datadog site.
Overview
Use Observability Pipelines’ Google Pub/Sub destination to publish logs to the Google Cloud Pub/Sub messaging system, so the logs can be sent to downstream services, data lakes, or custom applications.
When to use this destination
Common scenarios when you might use this destination:
- For analytics pipelines: Route logs downstream into Google BigQuery, a data lake, or custom machine learning workflows.
- For event-driven processing: Publish logs to a Pub/Sub topic so that Google Cloud Functions, Cloud Run functions, and Dataflow jobs can carry out actions in real time based on the log data.
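For example, a function can react to each log the Worker publishes. The following is a minimal, illustrative sketch of a Python Cloud Function using the Functions Framework; it assumes the destination's Encoding is set to JSON (see Setup below), and the `status` and `message` fields are hypothetical log attributes you would adjust to your pipeline's output schema.

```python
import base64
import json

import functions_framework


# Illustrative sketch: triggered by messages on the Pub/Sub topic the
# Worker publishes to. Assumes the destination's Encoding is set to JSON.
@functions_framework.cloud_event
def process_log(cloud_event):
    # Pub/Sub delivers the payload base64-encoded inside the CloudEvent.
    payload = base64.b64decode(cloud_event.data["message"]["data"])
    log = json.loads(payload)

    # "status" and "message" are hypothetical log fields; adjust them to
    # your pipeline's output schema.
    if log.get("status") == "error":
        print(f"Acting on error log: {log.get('message')}")
```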
Prerequisites
Before you configure the destination, you need the following:
- Pub/Sub subscription: Create a Pub/Sub topic and at least one subscription to consume the messages.
- Authentication: Set up a standard Google Cloud authentication method. The options include:
  - A service account key (JSON file)
  - Workload Identity (for Google Kubernetes Engine (GKE))
- IAM roles:
  - `roles/pubsub.publisher` is required for publishing events.
  - `roles/pubsub.viewer` is recommended for health checks.
  - If the role is missing, the error `Healthcheck endpoint forbidden` is logged and the Worker proceeds as usual.
  - See Available Pub/Sub roles for more information.
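If you prefer to script the prerequisites, the sketch below uses the google-cloud-pubsub Python client to create a topic and subscription, then checks that the active credentials can publish. The project, topic, and subscription names are placeholders.

```python
from google.cloud import pubsub_v1
from google.iam.v1 import iam_policy_pb2

# Placeholder names; substitute your own project, topic, and subscription.
project_id = "my-gcp-project"
publisher = pubsub_v1.PublisherClient()
subscriber = pubsub_v1.SubscriberClient()

topic_path = publisher.topic_path(project_id, "op-worker-logs")
subscription_path = subscriber.subscription_path(project_id, "op-worker-logs-sub")

# Create the topic and at least one subscription to consume the messages.
publisher.create_topic(request={"name": topic_path})
subscriber.create_subscription(
    request={"name": subscription_path, "topic": topic_path}
)

# Confirm the active credentials can publish (granted by roles/pubsub.publisher).
response = publisher.test_iam_permissions(
    request=iam_policy_pb2.TestIamPermissionsRequest(
        resource=topic_path,
        permissions=["pubsub.topics.publish"],
    )
)
print("Granted permissions:", list(response.permissions))
```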
Set up a service account for the Worker
A service account in Google Cloud is a type of account used only by applications or services.
- It has its own identity and credentials (a JSON key file).
- You assign it IAM roles so it can access specific resources.
- In this case, the Observability Pipelines Worker uses a service account to authenticate and send logs to Pub/Sub on your behalf.
To authenticate using a service account:
- In the Google Cloud console, navigate to IAM & Admin > Service Accounts.
- Click + Create service account.
- Enter a name and click Create and continue.
- Assign roles:
- Pub/Sub Publisher
- Pub/Sub Viewer
- Click Done.
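If you want to script these steps instead of using the console, the sketch below uses the google-cloud-iam Python client. The project and account names are placeholders, and the role assignments from step 4 still need to be granted separately (in the console or through your usual infrastructure-as-code tooling).

```python
from google.cloud import iam_admin_v1

# Placeholder names; adjust to your environment.
client = iam_admin_v1.IAMClient()
request = iam_admin_v1.CreateServiceAccountRequest(
    name="projects/my-gcp-project",
    account_id="op-worker",
    service_account=iam_admin_v1.ServiceAccount(
        display_name="Observability Pipelines Worker"
    ),
)
account = client.create_service_account(request=request)
print("Created service account:", account.email)

# Grant roles/pubsub.publisher and roles/pubsub.viewer to this account
# separately (console: IAM & Admin > IAM, or your IaC tooling).
```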
Authentication methods
After you’ve created the service account with the correct roles, set up one of the following authentication methods:
Option A: Workload Identity method (for GKE, recommended)
- Bind the service account to a Kubernetes service account (KSA).
- Allow the service account to be impersonated by that KSA.
- Annotate the KSA so GKE knows which Google service account to use.
- Authentication then comes from the GCP metadata server.
Option B: Attach the Google service account (GSA) directly to a VM (for Google Compute Engine)
Use this authentication method if you’re running the Observability Pipelines Worker on a Google Compute Engine (GCE) VM.
- When you create or edit the VM, specify the Google service account under Identity and API access > Service account.
Option C: Run the service as the GSA (for Cloud Run or Cloud Functions)
Use this authentication method if you’re deploying the Worker as a Cloud Run service or Cloud Function.
- In the Cloud Run or Cloud Functions deployment settings, set the Execution service account to the Google service account you created.
Option D: JSON key method (any environment without identity bindings)
- Open the new service account and navigate to Keys > Add key > Create new key.
- Choose the JSON format.
- Save the downloaded JSON file in a secure location.
- After you install the Worker, copy or mount the JSON file into `DD_OP_DATA_DIR/config/`.
You reference this file in the Google Pub/Sub destination’s Credentials path field when you set up the destination in the Pipelines UI.
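To confirm a key file works before wiring it into the Worker, you can publish a test message with it. The following is a minimal sketch using the google-cloud-pubsub Python client; the path and names are placeholders.

```python
import json

from google.cloud import pubsub_v1
from google.oauth2 import service_account

# Placeholder path and names; point these at your own key file and topic.
key_path = "/path/to/op-worker-key.json"
credentials = service_account.Credentials.from_service_account_file(key_path)

publisher = pubsub_v1.PublisherClient(credentials=credentials)
topic_path = publisher.topic_path("my-gcp-project", "op-worker-logs")

# A successful publish confirms the key grants pubsub.topics.publish.
future = publisher.publish(
    topic_path, json.dumps({"message": "test"}).encode("utf-8")
)
print("Published message ID:", future.result())
```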
Setup
Set up the Google Pub/Sub destination and its environment variables when you set up a pipeline. The information below is configured in the Pipelines UI.
Set up the destination
- Enter the destination project name.
- This is the GCP project where your Pub/Sub topic lives.
- Enter the topic.
- This is the Pub/Sub topic to publish logs to.
- In the Encoding dropdown menu, select whether to encode your pipeline's output as JSON or Raw message.
- JSON: Logs are structured as JSON (recommended if downstream tools need structured data).
- Raw: Logs are sent as raw strings (preserves the original format).
- If you have a credentials JSON file, enter the path to it.
  - If you're using a service account JSON file, enter the path `DD_OP_DATA_DIR/config/<your-service-account>.json`.
  - Alternatively, set the `GOOGLE_APPLICATION_CREDENTIALS` environment variable.
  - Credentials are managed automatically if you're using Workload Identity on GKE.
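To see the effect of the Encoding setting, you can pull a few messages from your subscription once the pipeline is running. A minimal consumer sketch with placeholder names: with JSON encoding each body parses as a structured event, while Raw message bodies are plain strings.

```python
import json

from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(
    "my-gcp-project", "op-worker-logs-sub"
)

# Synchronously pull a small batch of the logs the Worker published.
response = subscriber.pull(
    request={"subscription": subscription_path, "max_messages": 5},
    timeout=10.0,
)
for received in response.received_messages:
    body = received.message.data
    try:
        print(json.loads(body))   # Encoding = JSON: structured log event
    except ValueError:
        print(body.decode())      # Encoding = Raw message: original string

# Acknowledge the batch so the messages are not redelivered.
ack_ids = [m.ack_id for m in response.received_messages]
if ack_ids:
    subscriber.acknowledge(
        request={"subscription": subscription_path, "ack_ids": ack_ids}
    )
```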
Optional settings
- Toggle the switch to Enable TLS if your organization requires secure connections with custom certificates.
  - Server Certificate Path: The path to the certificate file that has been signed by your Certificate Authority (CA) Root File, in DER or PEM (X.509) format.
  - CA Certificate Path: The path to the certificate file that is your Certificate Authority (CA) Root File, in DER or PEM (X.509) format.
  - Private Key Path: The path to the `.key` private key file that belongs to your Server Certificate Path, in DER or PEM (PKCS#8) format.
- Toggle the switch to enable Buffering Options (Preview). Note: Contact your account manager to request access to the Preview.
  - If disabled (default): Up to 500 events are buffered before a flush.
  - If enabled:
    - Select the buffer type you want to set:
      - Memory: Fast, but limited by RAM.
      - Disk: Durable; survives restarts.
    - Enter the buffer size and select the unit (maximum capacity in MB or GB).
Set environment variables
Optional alternative Pub/Sub endpoints
By default, the Worker sends data to the global endpoint: `https://pubsub.googleapis.com`.
If your Pub/Sub topic is region-specific, configure the Google Pub/Sub alternative endpoint URL with the regional endpoint. See About Pub/Sub endpoints for more information.
This setting is stored as the environment variable `DD_OP_DESTINATION_GCP_PUBSUB_ENDPOINT_URL`.
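Before setting the environment variable, you can verify that a regional endpoint is reachable from your network with the client library's `api_endpoint` option. A sketch with placeholder names; us-central1 is only an example region.

```python
from google.cloud import pubsub_v1

# Point the client at a regional Pub/Sub endpoint instead of the global one.
publisher = pubsub_v1.PublisherClient(
    client_options={"api_endpoint": "us-central1-pubsub.googleapis.com:443"}
)
topic_path = publisher.topic_path("my-gcp-project", "op-worker-logs")

# A successful publish confirms the regional endpoint is reachable.
future = publisher.publish(topic_path, b"regional endpoint test")
print("Published message ID:", future.result())
```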
Troubleshooting
Common issues and fixes:
- Healthcheck forbidden
  - Check that the service account has the `roles/pubsub.viewer` IAM role.
- Permission denied
  - Ensure the service account has `roles/pubsub.publisher`.
- Authentication errors
  - Verify the credentials JSON path or the GKE Workload Identity setup.
- Dropped events
  - Check the `pipelines.component_discarded_events_total` and `pipelines.buffer_discarded_events_total` metrics.
  - Increase the buffer size or fix misconfigured filters as needed to resolve the issue.
- High latency
  - Reduce the buffer size and timeout, or scale your Workers.
- No logs are arriving
  - In your Google Pub/Sub destination setup, double-check the topic name, project, and Pub/Sub endpoint (global versus regional).
How the destination works
Worker health metrics
See Observability Pipelines Metrics for a full list of available health metrics.
Component metrics
Monitor the health of your Pub/Sub destination with the following key metrics:
- `pipelines.component_sent_events_total`: Events successfully delivered.
- `pipelines.component_discarded_events_total`: Events dropped.
- `pipelines.component_errors_total`: Errors in the destination component.
- `pipelines.component_sent_event_bytes_total`: Total event bytes sent.
- `pipelines.utilization`: Worker resource usage.
Buffer metrics (when buffering is enabled)
Track buffer behavior with these additional metrics:
- `pipelines.buffer_events`: Number of events currently in the buffer.
- `pipelines.buffer_byte_size`: Current buffer size in bytes.
- `pipelines.buffer_received_events_total`: Total events added to the buffer.
- `pipelines.buffer_received_event_bytes_total`: Total bytes added to the buffer.
- `pipelines.buffer_sent_events_total`: Total events successfully flushed from the buffer.
- `pipelines.buffer_sent_event_bytes_total`: Total bytes successfully flushed from the buffer.
- `pipelines.buffer_discarded_events_total`: Events discarded from the buffer (for example, due to overflow).
Event batching
A batch of events is flushed when one of these parameters is met. See event batching for more information.

| Max Events | Max Bytes | Timeout (seconds) |
|---|---|---|
| 1,000 | 10,000,000 | 1 |
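In other words, a batch is flushed as soon as whichever limit trips first. The sketch below illustrates that rule only; it is not the Worker's actual implementation.

```python
import time

# Batching limits from the table above.
MAX_EVENTS = 1_000
MAX_BYTES = 10_000_000
TIMEOUT_SECONDS = 1.0


class Batch:
    def __init__(self):
        self.events: list[bytes] = []
        self.size_bytes = 0
        self.started = time.monotonic()

    def add(self, event: bytes) -> bool:
        """Add an event and report whether the batch should be flushed now."""
        self.events.append(event)
        self.size_bytes += len(event)
        # Flush when any one of the three limits is reached.
        return (
            len(self.events) >= MAX_EVENTS
            or self.size_bytes >= MAX_BYTES
            or time.monotonic() - self.started >= TIMEOUT_SECONDS
        )
```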