Confluent Cloud

Overview

The Confluent Cloud integration is not supported for the Datadog site.

Confluent Cloud is a fully managed, cloud-hosted data streaming service. Connect Datadog with Confluent Cloud to visualize and alert on key metrics for your Confluent Cloud resources.

Datadog's out-of-the-box Confluent Cloud dashboard shows you key cluster metrics for monitoring the health and performance of your environment, including information such as the rate of change in active connections and the ratio of average records consumed to produced.

You can use the recommended monitors to notify and alert your team when topic lag gets too high, or use these metrics to create your own.
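As an illustration, a topic-lag monitor could be built on the `confluent_cloud.kafka.consumer_lag_offsets` metric. The query below is a hypothetical sketch: the cluster ID, tag names, and threshold are placeholders, not values from this page.

```
avg(last_5m):avg:confluent_cloud.kafka.consumer_lag_offsets{kafka_id:lkc-example} by {topic,consumer_group_id} > 1000
```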

Setup

Installation

Install the integration with the Confluent Cloud integration tile in Datadog.

Configuration

  1. In Confluent Cloud, click + Add API Key to enter your Confluent Cloud API key and secret.
    • Create a Cloud Resource Management API key and secret.
    • In the Datadog integration tile, add the API key and secret to the API Key and API Secret fields.
    • Click Save. Datadog searches for the accounts associated with those credentials.
  2. Add your Confluent Cloud cluster ID or connector ID. Datadog crawls the Confluent Cloud metrics and loads them within minutes.
  3. To collect your tags defined in Confluent Cloud (optional):
    • Create a Schema Registry API key and secret. Read more about Schema Management on Confluent Cloud.
    • In the Datadog integration tile, add the API key and secret to the Schema Registry API Key and Secret fields.
    • Click Save. Datadog collects the tags defined in Confluent Cloud.
  4. If you use Cloud Cost Management, enable cost data collection.
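Before pasting a key/secret pair into the integration tile, you can sanity-check it against the Confluent Cloud Metrics API. This is a minimal sketch, assuming the `/v2/metrics/cloud/descriptors/metrics` endpoint; the credential values are placeholders.

```python
# Minimal sketch: verify a Confluent Cloud API key/secret against the
# Metrics API before adding it to the Datadog integration tile.
# The endpoint path and credential values are assumptions/placeholders.
import base64
import urllib.error
import urllib.request

METRICS_API = "https://api.telemetry.confluent.cloud/v2/metrics/cloud/descriptors/metrics"

def basic_auth_header(api_key: str, api_secret: str) -> str:
    """Build the HTTP Basic Authorization value for a key/secret pair."""
    token = base64.b64encode(f"{api_key}:{api_secret}".encode()).decode()
    return f"Basic {token}"

def credentials_work(api_key: str, api_secret: str) -> bool:
    """Return True if the Metrics API answers 200 for these credentials."""
    req = urllib.request.Request(METRICS_API)
    req.add_header("Authorization", basic_auth_header(api_key, api_secret))
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status == 200
    except (urllib.error.HTTPError, urllib.error.URLError):
        return False

# Example (placeholder values):
# credentials_work("CLOUD_API_KEY", "CLOUD_API_SECRET")
```

A 401 response here means the key/secret pair is wrong or lacks the MetricsViewer role described below.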

For more information about configuring resources such as clusters and connectors, see the Confluent Cloud integration documentation.

API Key and Secret

To create your Confluent Cloud API key and secret, see Add the MetricsViewer role to a new service account in the UI.

Cluster ID

To find your Confluent Cloud cluster ID:

  1. In Confluent Cloud, navigate to Environment Overview and select the cluster you want to monitor.
  2. In the left-hand navigation, click Cluster overview > Cluster settings.
  3. Under Identification, copy the cluster ID beginning with lkc.

Connector ID

To find your Confluent Cloud connector ID:

  1. In Confluent Cloud, navigate to Environment Overview and select the cluster you want to monitor.
  2. In the left-hand navigation, click Data integration > Connectors.
  3. Under Connectors, copy the connector ID beginning with lcc.
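Since cluster IDs begin with lkc and connector IDs with lcc, a small validation step can catch copy/paste mistakes before an ID reaches the integration tile. A sketch follows; the exact character set after the prefix is an assumption.

```python
# Sketch: validate Confluent Cloud resource IDs by their documented prefixes
# (lkc for clusters, lcc for connectors). The suffix pattern is an assumption.
import re

_ID_PATTERNS = {
    "cluster": re.compile(r"^lkc-[a-z0-9]+$"),
    "connector": re.compile(r"^lcc-[a-z0-9]+$"),
}

def is_valid_id(resource_type: str, resource_id: str) -> bool:
    """Return True if resource_id has the expected prefix for its type."""
    pattern = _ID_PATTERNS.get(resource_type)
    return bool(pattern and pattern.match(resource_id))
```

For example, `is_valid_id("cluster", "lkc-abc123")` returns True, while passing the same ID as a `"connector"` returns False.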

Dashboards

After configuring the integration, see the out-of-the-box Confluent Cloud dashboard for an overview of Kafka cluster and connector metrics.

By default, all metrics collected from Confluent Cloud are displayed.

Data Collected

Metrics

confluent_cloud.kafka.received_bytes
(count)
The delta count of bytes received from the network. Each sample is the number of bytes received since the previous data sample. The count is sampled every 60 seconds.
Shown as byte
confluent_cloud.kafka.sent_bytes
(count)
The delta count of bytes sent over the network. Each sample is the number of bytes sent since the previous data point. The count is sampled every 60 seconds.
Shown as byte
confluent_cloud.kafka.received_records
(count)
The delta count of records received. Each sample is the number of records received since the previous data sample. The count is sampled every 60 seconds.
Shown as record
confluent_cloud.kafka.sent_records
(count)
The delta count of records sent. Each sample is the number of records sent since the previous data point. The count is sampled every 60 seconds.
Shown as record
confluent_cloud.kafka.retained_bytes
(gauge)
The current count of bytes retained by the cluster. The count is sampled every 60 seconds.
Shown as byte
confluent_cloud.kafka.active_connection_count
(gauge)
The count of active authenticated connections.
Shown as connection
confluent_cloud.kafka.request_count
(count)
The delta count of requests received over the network. Each sample is the number of requests received since the previous data point. The count is sampled every 60 seconds.
Shown as request
confluent_cloud.kafka.partition_count
(gauge)
The number of partitions.
confluent_cloud.kafka.successful_authentication_count
(count)
The delta count of successful authentications. Each sample is the number of successful authentications since the previous data point. The count is sampled every 60 seconds.
Shown as attempt
confluent_cloud.kafka.cluster_link_destination_response_bytes
(count)
The delta count of cluster linking response bytes from all request types. Each sample is the number of bytes sent since the previous data point. The count is sampled every 60 seconds.
Shown as byte
confluent_cloud.kafka.cluster_link_source_response_bytes
(count)
The delta count of cluster linking source response bytes from all request types. Each sample is the number of bytes sent since the previous data point. The count is sampled every 60 seconds.
Shown as byte
confluent_cloud.kafka.cluster_active_link_count
(gauge)
The current count of active cluster links. The count is sampled every 60 seconds. The implicit time aggregation for this metric is MAX.
confluent_cloud.kafka.cluster_load_percent
(gauge)
A measure of the utilization of the cluster. The value is between 0.0 and 1.0.
Shown as percent
confluent_cloud.kafka.cluster_load_percent_max
(gauge)
A measure of the maximum broker utilization across the cluster. The value is between 0.0 and 1.0.
Shown as percent
confluent_cloud.kafka.cluster_load_percent_avg
(gauge)
A measure of the average utilization across the cluster. The value is between 0.0 and 1.0.
Shown as percent
confluent_cloud.kafka.consumer_lag_offsets
(gauge)
The lag between a group member's committed offset and the partition's high watermark.
confluent_cloud.kafka.cluster_link_count
(gauge)
The current count of cluster links. The count is sampled every 60 seconds. The implicit time aggregation for this metric is MAX.
confluent_cloud.kafka.cluster_link_mirror_topic_bytes
(count)
The delta count of cluster linking mirror topic bytes. The count is sampled every 60 seconds.
confluent_cloud.kafka.cluster_link_mirror_topic_count
(gauge)
The cluster linking mirror topic count for a link. The count is sampled every 60 seconds.
confluent_cloud.kafka.cluster_link_mirror_topic_offset_lag
(gauge)
The cluster linking mirror topic offset lag maximum across all partitions. The lag is sampled every 60 seconds.
confluent_cloud.kafka.request_bytes
(gauge)
The delta count of total request bytes from the specified request types sent over the network. Each sample is the number of bytes sent since the previous data point. The count is sampled every 60 seconds.
confluent_cloud.kafka.response_bytes
(gauge)
The delta count of total response bytes from the specified response types sent over the network. Each sample is the number of bytes sent since the previous data point. The count is sampled every 60 seconds.
confluent_cloud.kafka.rest_produce_request_bytes
(count)
The delta count of total request bytes from Kafka REST produce calls sent over the network.
confluent_cloud.connect.sent_records
(count)
The delta count of total number of records sent from the transformations and written to Kafka for the source connector. Each sample is the number of records sent since the previous data point. The count is sampled every 60 seconds.
Shown as record
confluent_cloud.connect.received_records
(count)
The delta count of total number of records received by the sink connector. Each sample is the number of records received since the previous data point. The count is sampled every 60 seconds.
Shown as record
confluent_cloud.connect.sent_bytes
(count)
The delta count of total bytes sent from the transformations and written to Kafka for the source connector. Each sample is the number of bytes sent since the previous data point. The count is sampled every 60 seconds.
Shown as byte
confluent_cloud.connect.received_bytes
(count)
The delta count of total bytes received by the sink connector. Each sample is the number of bytes received since the previous data point. The count is sampled every 60 seconds.
Shown as byte
confluent_cloud.connect.dead_letter_queue_records
(count)
The delta count of dead letter queue records written to Kafka for the sink connector. The count is sampled every 60 seconds.
Shown as record
confluent_cloud.ksql.streaming_unit_count
(gauge)
The count of Confluent Streaming Units (CSUs) for this KSQL instance. The count is sampled every 60 seconds. The implicit time aggregation for this metric is MAX.
Shown as unit
confluent_cloud.ksql.query_saturation
(gauge)
The maximum saturation for a given ksqlDB query across all nodes. Returns a value between 0 and 1. A value close to 1 indicates that ksqlDB query processing is bottlenecked on available resources.
confluent_cloud.ksql.task_stored_bytes
(gauge)
The size of a given task's state stores in bytes.
Shown as byte
confluent_cloud.ksql.storage_utilization
(gauge)
The total storage utilization for a given ksqlDB application.
confluent_cloud.schema_registry.schema_count
(gauge)
The number of registered schemas.
confluent_cloud.schema_registry.request_count
(count)
The delta count of requests received by the schema registry server. Each sample is the number of requests received since the previous data point. The count is sampled every 60 seconds.
confluent_cloud.schema_registry.schema_operations_count
(count)
The delta count of schema-related operations. Each sample is the number of requests received since the previous data point. The count is sampled every 60 seconds.
confluent_cloud.flink.num_records_in
(count)
Total number of records all Flink SQL statements leveraging a Flink compute pool have received.
confluent_cloud.flink.num_records_out
(count)
Total number of records all Flink SQL statements leveraging a Flink compute pool have emitted.
confluent_cloud.flink.pending_records
(gauge)
Total backlog of all Flink SQL statements leveraging a Flink compute pool.
confluent_cloud.flink.compute_pool_utilization.current_cfus
(gauge)
The absolute number of CFUs at a given moment.
confluent_cloud.flink.compute_pool_utilization.cfu_minutes_consumed
(count)
The number of CFUs consumed since the last measurement.
confluent_cloud.flink.compute_pool_utilization.cfu_limit
(gauge)
The maximum possible number of CFUs for the pool.
confluent_cloud.flink.current_input_watermark_ms
(gauge)
The last watermark this statement has received (in milliseconds) for the given table.
confluent_cloud.flink.current_output_watermark_ms
(gauge)
The last watermark this statement has produced (in milliseconds) to the given table.
confluent_cloud.custom.kafka.consumer_lag_offsets
(gauge)
The lag between a group member's committed offset and the partition's high watermark.
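Because the record metrics above are delta counts sampled every 60 seconds, the consumed-to-produced ratio shown on the dashboard can be approximated by summing deltas over a window. A minimal sketch, assuming sent_records reflects records consumed from the cluster and received_records reflects records produced to it; the sample values are made up.

```python
# Sketch: approximate the consumed-to-produced record ratio from delta-count
# samples (each sample covers one 60-second interval). Values are made up.

def consumed_to_produced_ratio(sent_deltas, received_deltas):
    """Sum of consumed record deltas divided by sum of produced record deltas."""
    produced = sum(received_deltas)
    return sum(sent_deltas) / produced if produced else 0.0

# Three 60-second samples, i.e. a 3-minute window:
sent = [120, 130, 110]      # confluent_cloud.kafka.sent_records (consumed)
received = [120, 120, 120]  # confluent_cloud.kafka.received_records (produced)
print(consumed_to_produced_ratio(sent, received))  # → 1.0
```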

Events

The Confluent Cloud integration does not include any events.

Service Checks

The Confluent Cloud integration does not include any service checks.

Troubleshooting

Need help? Contact Datadog support.

Further reading