Google BigQuery

Join the Preview!

Advanced BigQuery monitoring is in Preview. Sign up with this form to start gaining insights into your query performance.

Request Access

Overview

BigQuery is Google’s fully managed, petabyte-scale, low-cost enterprise data warehouse for analytics.

Get metrics from Google BigQuery to:

  • Visualize the performance of your BigQuery queries.
  • Correlate the performance of your BigQuery queries with your applications.

Setup

Installation

If you haven’t already, set up the Google Cloud Platform integration first. There are no other installation steps.

Log collection

Google BigQuery logs are collected with Google Cloud Logging and sent to a Dataflow job through a Cloud Pub/Sub topic. If you haven’t already, set up logging with the Datadog Dataflow template.
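
Setting up the Dataflow template involves creating a Pub/Sub topic, along with a subscription for the Dataflow job to consume. If you prefer to script that part instead of using the console, the sketch below shows one way to do it with the google-cloud-pubsub Python client; the project, topic, and subscription names are placeholders, not values required by the integration.

```python
# Minimal sketch: create the Pub/Sub topic the log export will publish to, and
# a subscription for the Dataflow job to read from. Project, topic, and
# subscription names are placeholders.
from google.cloud import pubsub_v1

project_id = "my-gcp-project"                  # placeholder project ID
topic_id = "export-bigquery-logs-to-datadog"   # placeholder topic name
subscription_id = f"{topic_id}-sub"            # read by the Dataflow job

publisher = pubsub_v1.PublisherClient()
subscriber = pubsub_v1.SubscriberClient()

topic_path = publisher.topic_path(project_id, topic_id)
subscription_path = subscriber.subscription_path(project_id, subscription_id)

# Create the topic that the Cloud Logging sink will write to.
publisher.create_topic(name=topic_path)

# Create the subscription the Datadog Dataflow template reads from.
subscriber.create_subscription(name=subscription_path, topic=topic_path)
```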

Once this is done, export your Google BigQuery logs from Google Cloud Logging to the Pub/Sub topic:

  1. Go to the Google Cloud Logging page and filter the Google BigQuery logs.
  2. Click Create Export and name the sink.
  3. Choose “Cloud Pub/Sub” as the destination and select the Pub/Sub topic that was created for that purpose. Note: The Pub/Sub topic can be located in a different project.
  4. Click Create and wait for the confirmation message to appear. A scripted alternative to these steps is sketched below.
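
If you would rather create the export programmatically, the following sketch shows one way to do it with the google-cloud-logging Python client. The sink, project, and topic names are placeholders, and the filter shown (resource.type="bigquery_resource") is only one way to scope the sink to BigQuery logs; adjust it to match the logs you want to export.

```python
# Minimal sketch of steps 1-4 using the google-cloud-logging client instead of
# the console. The sink, project, and topic names are placeholders, and the
# filter is only one way to match BigQuery logs.
from google.cloud import logging

project_id = "my-gcp-project"                  # placeholder project ID
topic_id = "export-bigquery-logs-to-datadog"   # placeholder topic name
sink_name = "bigquery-logs-to-pubsub"          # placeholder sink name

client = logging.Client(project=project_id)

sink = client.sink(
    sink_name,
    filter_='resource.type="bigquery_resource"',   # scope the sink to BigQuery logs
    # The Pub/Sub topic can be located in a different project.
    destination=f"pubsub.googleapis.com/projects/{project_id}/topics/{topic_id}",
)

if not sink.exists():
    sink.create()
```

However the sink is created, Cloud Logging publishes to the topic using the sink's writer identity, so that service account needs permission to publish to the destination Pub/Sub topic.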

Data Collected

Metrics

gcp.bigquery.job.num_in_flight
(gauge)
Number of in-flight jobs.
Shown as job
gcp.bigquery.query.biengine_fallback_count
(count)
The number of queries that fell back from BI Engine execution, broken down by fallback reason.
Shown as query
gcp.bigquery.query.column_metadata_index_staleness.avg
(gauge)
The average distribution of staleness in milliseconds of the column metadata index for queries that successfully used the column metadata index in the last sampling interval.
Shown as millisecond
gcp.bigquery.query.column_metadata_index_staleness.samplecount
(gauge)
The sample count for distribution of staleness in milliseconds of the column metadata index for queries that successfully used the column metadata index in the last sampling interval.
Shown as millisecond
gcp.bigquery.query.column_metadata_index_staleness.sumsqdev
(gauge)
The sum of squared deviation for distribution of staleness in milliseconds of the column metadata index for queries that successfully used the column metadata index in the last sampling interval.
Shown as millisecond
gcp.bigquery.query.count
(gauge)
Queries in flight.
Shown as query
gcp.bigquery.query.execution_count
(count)
Number of queries executed.
Shown as query
gcp.bigquery.query.execution_times.avg
(gauge)
Average of query execution times.
Shown as second
gcp.bigquery.query.execution_times.samplecount
(count)
Sample count of query execution times.
Shown as second
gcp.bigquery.query.execution_times.sumsqdev
(gauge)
Sum of squared deviation for query execution times.
Shown as second
gcp.bigquery.query.scanned_bytes
(rate)
Number of scanned bytes. Note: this metric is available with a six-hour delay.
Shown as byte
gcp.bigquery.query.scanned_bytes_billed
(rate)
Number of scanned bytes billed. Note: this metric is available with a six-hour delay.
Shown as byte
gcp.bigquery.query.statement_scanned_bytes
(count)
Scanned bytes broken down by statement type. Note: this metric is available with a six-hour delay.
Shown as byte
gcp.bigquery.query.statement_scanned_bytes_billed
(count)
Scanned bytes billed broken down by statement type. Note: this metric is available with a six-hour delay.
Shown as byte
gcp.bigquery.slots.allocated
(gauge)
Number of BigQuery slots currently allocated for the project. Slot allocation can be broken down by reservation and job type.
gcp.bigquery.slots.allocated_for_project
(gauge)
Number of BigQuery slots currently allocated for the project.
gcp.bigquery.slots.allocated_for_project_and_job_type
(gauge)
Number of BigQuery slots currently allocated for the project and job type.
gcp.bigquery.slots.allocated_for_reservation
(gauge)
Number of BigQuery slots currently allocated for the project in the reservation.
gcp.bigquery.slots.assigned
(gauge)
The number of slots assigned to the given project or organization.
gcp.bigquery.slots.capacity_committed
(gauge)
The total slot capacity commitments purchased through this administrator project or organization.
gcp.bigquery.slots.max_assigned
(gauge)
The maximum number of slots assigned to the given project or organization.
gcp.bigquery.slots.total_allocated_for_reservation
(gauge)
Number of BigQuery slots currently allocated across all projects in the reservation.
gcp.bigquery.storage.insertall_inserted_bytes
(count)
The number of bytes uploaded by the project using the InsertAll streaming API.
Shown as byte
gcp.bigquery.storage.insertall_inserted_rows
(count)
The number of rows uploaded by the project using the InsertAll streaming API.
Shown as row
gcp.bigquery.storage.stored_bytes
(gauge)
Number of bytes stored. Note: this metric is available with a three-hour delay.
Shown as byte
gcp.bigquery.storage.table_count
(gauge)
Number of tables. Note: this metric is available with a three-hour delay.
Shown as table
gcp.bigquery.storage.uploaded_bytes
(count)
Number of uploaded bytes. Note: this metric is available with a six-hour delay.
Shown as byte
gcp.bigquery.storage.uploaded_bytes_billed
(count)
Number of uploaded bytes billed. Note: this metric is available with a six-hour delay.
Shown as byte
gcp.bigquery.storage.uploaded_row_count
(count)
Number of uploaded rows. Note: this metric is available with a six-hour delay.
Shown as row
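
Beyond dashboards, these metrics can also be pulled through the Datadog API. The sketch below is a minimal example using the datadog-api-client Python package to query gcp.bigquery.query.execution_times.avg over the last hour; it assumes DD_API_KEY and DD_APP_KEY are set in the environment, and the query scope ({*}) is only an example.

```python
# Minimal sketch: pull one of the BigQuery metrics listed above through the
# Datadog API. Assumes the datadog-api-client package is installed and that
# DD_API_KEY and DD_APP_KEY are set in the environment.
import time

from datadog_api_client import ApiClient, Configuration
from datadog_api_client.v1.api.metrics_api import MetricsApi

configuration = Configuration()  # picks up DD_API_KEY / DD_APP_KEY from the environment

with ApiClient(configuration) as api_client:
    api = MetricsApi(api_client)
    now = int(time.time())
    response = api.query_metrics(
        _from=now - 3600,  # last hour
        to=now,
        query="avg:gcp.bigquery.query.execution_times.avg{*}",  # example scope
    )
    for series in response.to_dict().get("series", []):
        print(series["metric"], "points:", len(series.get("pointlist", [])))
```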

Events

The Google BigQuery integration does not include any events.

Service Checks

The Google BigQuery integration does not include any service checks.

Troubleshooting

Need help? Contact Datadog support.