Confluent Cloud

Overview

The Confluent Cloud integration is not supported for this Datadog site.

Connect Datadog with Confluent Cloud to visualize Kafka cluster metrics by topic, as well as Kafka connector metrics. You can create monitors and dashboards from these metrics.

Setup

Installation

Install the integration with the Datadog Confluent Cloud integration tile.

Configuration

  1. In the integration tile, navigate to the Configuration tab.
  2. Click + Add API Key to enter your Confluent Cloud API key and API secret.
  3. Click Save. Datadog searches for any accounts associated with those credentials.
  4. Add your Confluent Cloud cluster ID or connector ID. Datadog crawls the Confluent Cloud metrics and displays them within a few minutes. If you prefer to script this step, see the sketch after this list.
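If you would rather automate account setup than use the tile, the same step can be performed against the Datadog API. The following Python sketch is a minimal example, assuming the v2 Confluent Cloud integration endpoint (/api/v2/integrations/confluent-cloud/accounts), a US1 Datadog site, and credentials supplied through environment variables; verify the exact path and payload shape in the Datadog API reference for your site.

```python
# Minimal sketch: register a Confluent Cloud account with Datadog.
# Assumptions: v2 Confluent Cloud integration endpoint, US1 site,
# DD_API_KEY/DD_APP_KEY and the Confluent credentials in the environment.
import os

import requests

DD_SITE = "datadoghq.com"  # adjust to your Datadog site
url = f"https://api.{DD_SITE}/api/v2/integrations/confluent-cloud/accounts"

headers = {
    "DD-API-KEY": os.environ["DD_API_KEY"],
    "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
    "Content-Type": "application/json",
}

payload = {
    "data": {
        "type": "confluent-cloud-accounts",
        "attributes": {
            # API key/secret created for the MetricsViewer service account
            "api_key": os.environ["CONFLUENT_API_KEY"],
            "api_secret": os.environ["CONFLUENT_API_SECRET"],
        },
    }
}

resp = requests.post(url, headers=headers, json=payload, timeout=30)
resp.raise_for_status()
print(resp.json())  # the returned account id is used when attaching resources
```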

API key and secret

To create your Confluent Cloud API key and secret, see Add the MetricsViewer role to a new service account in the UI.

Cluster ID

To find your Confluent Cloud cluster ID:

  1. In Confluent Cloud, navigate to Environment Overview and select the cluster you want to monitor.
  2. In the left-hand navigation, click Cluster overview > Cluster settings.
  3. Under Identification, copy the cluster ID beginning with lkc.

Connector ID

To find your Confluent Cloud connector ID:

  1. In Confluent Cloud, navigate to Environment Overview and select the cluster you want to monitor.
  2. In the left-hand navigation, click Data integration > Connectors.
  3. Under Connectors, copy the connector ID beginning with lcc. To attach either kind of ID through the API, see the sketch after this list.
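Once you have a cluster or connector ID, it can be attached to the registered account programmatically. The Python sketch below is illustrative only: the resources endpoint path and the resource_type values ("kafka" for clusters, "connector" for connectors) are assumptions to check against the Datadog API reference, and the IDs shown are hypothetical.

```python
# Minimal sketch: attach a cluster (lkc-...) or connector (lcc-...) to a
# Confluent Cloud account already registered with Datadog.
# The endpoint path and resource_type values are assumptions.
import os

import requests

DD_SITE = "datadoghq.com"
ACCOUNT_ID = "<ACCOUNT_ID>"  # returned when the account was created
url = (
    f"https://api.{DD_SITE}/api/v2/integrations/confluent-cloud/"
    f"accounts/{ACCOUNT_ID}/resources"
)

headers = {
    "DD-API-KEY": os.environ["DD_API_KEY"],
    "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
    "Content-Type": "application/json",
}

payload = {
    "data": {
        "type": "confluent-cloud-resources",
        "attributes": {
            "resource_type": "kafka",  # use "connector" for an lcc- ID
            "id": "lkc-abc123",        # hypothetical cluster ID
        },
    }
}

resp = requests.post(url, headers=headers, json=payload, timeout=30)
resp.raise_for_status()
```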

Dashboards

Once the integration is configured, use the out-of-the-box Confluent Cloud dashboard for an overview of your Kafka cluster and connector metrics.

By default, all metrics collected from Confluent Cloud are displayed.
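Because these arrive as standard Datadog metrics, you can also alert on them. The following Python sketch creates a metric monitor on consumer lag through the Datadog v1 monitors endpoint; the threshold, query scope, and notification handle are placeholders to adapt.

```python
# Minimal sketch: alert when Confluent Cloud consumer lag grows too large.
# Threshold, scope ({*}), and the @-handle are placeholders.
import os

import requests

DD_SITE = "datadoghq.com"
url = f"https://api.{DD_SITE}/api/v1/monitor"

headers = {
    "DD-API-KEY": os.environ["DD_API_KEY"],
    "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
    "Content-Type": "application/json",
}

monitor = {
    "type": "metric alert",
    "name": "Confluent Cloud consumer lag is high",
    "query": "avg(last_5m):avg:confluent_cloud.kafka.consumer_lag_offsets{*} > 1000",
    "message": "Consumer lag exceeded 1000 offsets. @your-team",
    "options": {"thresholds": {"critical": 1000}},
}

resp = requests.post(url, headers=headers, json=monitor, timeout=30)
resp.raise_for_status()
print(resp.json()["id"])
```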

Data Collected

Metrics

confluent_cloud.kafka.received_bytes
(count)
The delta count of bytes received from the network. Each sample is the number of bytes received since the previous data sample. The count is sampled every 60 seconds.
Shown as byte
confluent_cloud.kafka.sent_bytes
(count)
The delta count of bytes sent over the network. Each sample is the number of bytes sent since the previous data point. The count is sampled every 60 seconds.
Shown as byte
confluent_cloud.kafka.received_records
(count)
The delta count of records received. Each sample is the number of records received since the previous data sample. The count is sampled every 60 seconds.
Shown as record
confluent_cloud.kafka.sent_records
(count)
The delta count of records sent. Each sample is the number of records sent since the previous data point. The count is sampled every 60 seconds.
Shown as record
confluent_cloud.kafka.retained_bytes
(gauge)
The current count of bytes retained by the cluster. The count is sampled every 60 seconds.
Shown as byte
confluent_cloud.kafka.active_connection_count
(gauge)
The count of active authenticated connections.
Shown as connection
confluent_cloud.kafka.consumer_lag_offsets
(gauge)
The lag between a group member's committed offset and the partition's high watermark. This metric will be tagged with the kafka id, consumer group id, and topic id.
confluent_cloud.custom.kafka.consumer_lag_offsets
(gauge)
The lag between a group member's committed offset and the partition's high watermark. This metric will be tagged with kafka id, consumer group id, topic, consumer group member id, client id, and partition. Enabling this metric will result in the generation of custom metrics, which are billable. Each unique combination of tags will result in a single custom metric.
confluent_cloud.kafka.request_count
(count)
The delta count of requests received over the network. Each sample is the number of requests received since the previous data point. The count is sampled every 60 seconds.
Shown as request
confluent_cloud.kafka.partition_count
(gauge)
The number of partitions.
confluent_cloud.kafka.successful_authentication_count
(count)
The delta count of successful authentications. Each sample is the number of successful authentications since the previous data point. The count is sampled every 60 seconds.
Shown as attempt
confluent_cloud.kafka.cluster_link_destination_response_bytes
(count)
The delta count of cluster linking response bytes from all request types. Each sample is the number of bytes sent since the previous data point. The count is sampled every 60 seconds.
Shown as byte
confluent_cloud.kafka.cluster_link_source_response_bytes
(count)
The delta count of cluster linking source response bytes from all request types. Each sample is the number of bytes sent since the previous data point. The count is sampled every 60 seconds.
Shown as byte
confluent_cloud.kafka.cluster_active_link_count
(gauge)
The current count of active cluster links. The count is sampled every 60 seconds. The implicit time aggregation for this metric is MAX.
confluent_cloud.kafka.cluster_load_percent
(gauge)
A measure of the utilization of the cluster. The value is between 0.0 and 1.0.
Shown as percent
confluent_cloud.connect.sent_records
(count)
The delta count of records sent from the transformations and written to Kafka for the source connector. Each sample is the number of records sent since the previous data point. The count is sampled every 60 seconds.
Shown as record
confluent_cloud.connect.received_records
(count)
The delta count of records received by the sink connector. Each sample is the number of records received since the previous data point. The count is sampled every 60 seconds.
Shown as record
confluent_cloud.connect.sent_bytes
(count)
The delta count of total bytes sent from the transformations and written to Kafka for the source connector. Each sample is the number of bytes sent since the previous data point. The count is sampled every 60 seconds.
Shown as byte
confluent_cloud.connect.received_bytes
(count)
The delta count of total bytes received by the sink connector. Each sample is the number of bytes received since the previous data point. The count is sampled every 60 seconds.
Shown as byte
confluent_cloud.connect.dead_letter_queue_records
(count)
The delta count of dead letter queue records written to Kafka for the sink connector. The count is sampled every 60 seconds.
Shown as record
confluent_cloud.ksql.streaming_unit_count
(gauge)
The count of Confluent Streaming Units (CSUs) for this KSQL instance. The count is sampled every 60 seconds. The implicit time aggregation for this metric is MAX.
Shown as unit
confluent_cloud.ksql.query_saturation
(gauge)
The maximum saturation for a given ksqlDB query across all nodes. Returns a value between 0 and 1. A value close to 1 indicates that ksqlDB query processing is bottlenecked on available resources.
confluent_cloud.ksql.task_stored_bytes
(gauge)
The size of a given task's state stores in bytes.
Shown as byte
confluent_cloud.ksql.storage_utilization
(gauge)
The total storage utilization for a given ksqlDB application.
confluent_cloud.schema_registry.schema_count
(gauge)
The number of registered schemas.
confluent_cloud.schema_registry.request_count
(count)
The delta count of requests received by the schema registry server. Each sample is the number of requests received since the previous data point. The count is sampled every 60 seconds.
confluent_cloud.schema_registry.schema_operations_count
(count)
The delta count of schema-related operations. Each sample is the number of operations performed since the previous data point. The count is sampled every 60 seconds.
confluent_cloud.flink.num_records_in
(count)
Total number of records all Flink SQL statements leveraging a Flink compute pool have received.
confluent_cloud.flink.num_records_out
(count)
Total number of records all Flink SQL statements leveraging a Flink compute pool have emitted.
confluent_cloud.flink.pending_records
(gauge)
Total backlog of all Flink SQL statements leveraging a Flink compute pool.
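To verify ingestion, or to pull any of the series above into your own tooling, you can read them back through the Datadog v1 timeseries query endpoint. A minimal Python sketch, assuming DD_API_KEY and DD_APP_KEY are set in the environment:

```python
# Minimal sketch: query the last hour of Confluent Cloud throughput
# from the Datadog v1 timeseries endpoint.
import os
import time

import requests

DD_SITE = "datadoghq.com"
url = f"https://api.{DD_SITE}/api/v1/query"

now = int(time.time())
params = {
    "from": now - 3600,  # one hour ago
    "to": now,
    "query": "sum:confluent_cloud.kafka.received_bytes{*}.as_count()",
}

headers = {
    "DD-API-KEY": os.environ["DD_API_KEY"],
    "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
}

resp = requests.get(url, params=params, headers=headers, timeout=30)
resp.raise_for_status()
for series in resp.json().get("series", []):
    print(series["metric"], len(series["pointlist"]), "points")
```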

Events

The Confluent Cloud integration does not include any events.

Service Checks

The Confluent Cloud integration does not include any service checks.

Troubleshooting

Need help? Contact Datadog support.