ClickHouse

Supported OS: Linux, Windows, macOS

Integration version: 3.1.0

Overview

This check monitors ClickHouse through the Datadog Agent.

Setup

Follow the instructions below to install and configure this check for an Agent running on a host. For containerized environments, see the Autodiscovery Integration Templates for guidance on applying these instructions.

Installation

The ClickHouse check is included in the Datadog Agent package. No additional installation is needed on your server.

Configuration

Host

To configure this check for an Agent running on a host:

Metric collection

  1. To start collecting your ClickHouse performance data, edit the clickhouse.d/conf.yaml file in the conf.d/ folder at the root of your Agent’s configuration directory. See the sample clickhouse.d/conf.yaml for all available configuration options; a minimal sketch follows these steps.

  2. Restart the Agent.
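
For reference, here is a minimal sketch of an instance configuration. The server and port values below are placeholders (port 9000 assumes the ClickHouse native TCP interface), and the parameter names are the same ones used in the instance configuration shown in the Containerized section; adjust everything for your environment.

    init_config:

    instances:
        # Address and native TCP port of the ClickHouse server to monitor.
      - server: localhost
        port: 9000
        # Credentials of a ClickHouse user allowed to query the system tables.
        username: "<USER>"
        password: "<PASSWORD>"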

Log collection
  1. Collecting logs is disabled by default in the Datadog Agent. Enable it in your datadog.yaml file:

    logs_enabled: true
    
  2. Add the log files you are interested in to your clickhouse.d/conf.yaml file to start collecting your ClickHouse logs:

      logs:
        - type: file
          path: /var/log/clickhouse-server/clickhouse-server.log
          source: clickhouse
          service: "<SERVICE_NAME>"
    

    Change the path and service parameter values and configure them for your environment. See the sample clickhouse.d/conf.yaml for all available configuration options.

  3. Restart the Agent.

Containerized

For containerized environments, see the Autodiscovery Integration Templates for guidance on applying the parameters below.

Metric collection

Parameter             Value
<INTEGRATION_NAME>    clickhouse
<INIT_CONFIG>         blank or {}
<INSTANCE_CONFIG>     {"server": "%%host%%", "port": "%%port%%", "username": "<USER>", "password": "<PASSWORD>"}
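
On Kubernetes, for example, these parameters can be applied as pod annotations using the standard ad.datadoghq.com Autodiscovery annotation format. The sketch below assumes a container named clickhouse and an illustrative image; adjust the container name, image, user, and password for your environment.

    apiVersion: v1
    kind: Pod
    metadata:
      name: clickhouse
      annotations:
        # Autodiscovery annotations keyed by the container name ("clickhouse" here).
        ad.datadoghq.com/clickhouse.check_names: '["clickhouse"]'
        ad.datadoghq.com/clickhouse.init_configs: '[{}]'
        ad.datadoghq.com/clickhouse.instances: '[{"server": "%%host%%", "port": "%%port%%", "username": "<USER>", "password": "<PASSWORD>"}]'
    spec:
      containers:
        - name: clickhouse
          image: clickhouse/clickhouse-server
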
Log collection

Collecting logs is disabled by default in the Datadog Agent. To enable it, see Kubernetes log collection.

Parameter       Value
<LOG_CONFIG>    {"source": "clickhouse", "service": "<SERVICE_NAME>"}
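
On Kubernetes this can be set as a pod annotation on the same container as in the metric collection sketch above (the container name clickhouse is an assumption):

    metadata:
      annotations:
        # Tell the Agent which source and service to attach to logs from this container.
        ad.datadoghq.com/clickhouse.logs: '[{"source": "clickhouse", "service": "<SERVICE_NAME>"}]'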

Validation

Run the Agent’s status subcommand and look for clickhouse under the Checks section.

Data Collected

Metrics

clickhouse.background_pool.processing.task.active
(gauge)
The number of active tasks in BackgroundProcessingPool (merges, mutations, fetches, or replication queue bookkeeping)
Shown as task
clickhouse.background_pool.move.task.active
(gauge)
The number of active tasks in BackgroundProcessingPool for moves.
Shown as task
clickhouse.background_pool.schedule.task.active
(gauge)
The number of active tasks in BackgroundSchedulePool. This pool is used for periodic ReplicatedMergeTree tasks, like cleaning old data parts, altering data parts, replica re-initialization, etc.
Shown as task
clickhouse.cache_dictionary.update_queue.batches
(gauge)
Number of 'batches' (a set of keys) in update queue in CacheDictionaries.
clickhouse.cache_dictionary.update_queue.keys
(gauge)
Exact number of keys in update queue in CacheDictionaries.
Shown as key
clickhouse.thread.lock.context.waiting
(gauge)
The number of threads waiting for a lock in Context. This is a global lock.
Shown as thread
clickhouse.query.insert.delayed
(gauge)
The number of INSERT queries that are throttled due to high number of active data parts for partition in a MergeTree table.
Shown as query
clickhouse.dictionary.request.cache
(gauge)
The number of in-flight requests to data sources of dictionaries of cache type.
Shown as request
clickhouse.merge.disk.reserved
(gauge)
Disk space reserved for currently running background merges. It is slightly more than the total size of currently merging parts.
Shown as byte
clickhouse.table.distributed.file.insert.pending
(gauge)
The number of pending files to process for asynchronous insertion into Distributed tables. Number of files for every shard is summed.
Shown as file
clickhouse.table.distributed.connection.inserted
(gauge)
The number of connections to remote servers sending data that was INSERTed into Distributed tables. Both synchronous and asynchronous mode.
Shown as connection
clickhouse.zk.node.ephemeral
(gauge)
The number of ephemeral nodes held in ZooKeeper.
Shown as node
clickhouse.thread.global.total
(gauge)
The number of threads in global thread pool.
Shown as thread
clickhouse.thread.global.active
(gauge)
The number of threads in global thread pool running a task.
Shown as thread
clickhouse.connection.http
(gauge)
The number of connections to HTTP server
Shown as connection
clickhouse.connection.interserver
(gauge)
The number of connections from other replicas to fetch parts
Shown as connection
clickhouse.replica.leader.election
(gauge)
The number of replicas participating in leader election. Equals the total number of replicas in usual cases.
Shown as shard
clickhouse.table.replicated.leader
(gauge)
The number of Replicated tables that are leaders. The leader replica is responsible for assigning merges, cleaning old blocks for deduplication, and a few more bookkeeping tasks. There may be no more than one leader across all replicas at any moment in time. If there is no leader, one will be elected soon; otherwise it indicates an issue.
Shown as table
clickhouse.thread.local.total
(gauge)
The number of threads in local thread pools. Should be similar to GlobalThreadActive.
Shown as thread
clickhouse.thread.local.active
(gauge)
The number of threads in local thread pools running a task.
Shown as thread
clickhouse.query.memory
(gauge)
Total amount of memory allocated in currently executing queries. Note that some memory allocations may not be accounted.
Shown as byte
clickhouse.merge.memory
(gauge)
Total amount of memory allocated for background merges. Included in MemoryTrackingInBackgroundProcessingPool. Note that this value may include a drift when the memory was allocated in the context of the background processing pool and freed in another context, or vice versa. This happens naturally due to caches for table indexes and doesn't indicate memory leaks.
Shown as byte
clickhouse.background_pool.processing.memory
(gauge)
Total amount of memory allocated in the background processing pool (which is dedicated to background merges, mutations, and fetches). Note that this value may include a drift when the memory was allocated in the context of the background processing pool and freed in another context, or vice versa. This happens naturally due to caches for table indexes and doesn't indicate memory leaks.
Shown as byte
clickhouse.background_pool.move.memory
(gauge)
Total amount of memory (bytes) allocated in the background processing pool (which is dedicated to background moves). Note that this value may include a drift when the memory was allocated in the context of the background processing pool and freed in another context, or vice versa. This happens naturally due to caches for table indexes and doesn't indicate memory leaks.
Shown as byte
clickhouse.background_pool.schedule.memory
(gauge)
Total amount of memory allocated in background schedule pool (that is dedicated for bookkeeping tasks of Replicated tables).
Shown as byte
clickhouse.merge.active
(gauge)
The number of executing background merges
Shown as merge
clickhouse.file.open.read
(gauge)
The number of files open for reading
Shown as file
clickhouse.file.open.write
(gauge)
The number of files open for writing
Shown as file
clickhouse.query.mutation
(gauge)
The number of mutations (ALTER DELETE/UPDATE)
Shown as query
clickhouse.query.active
(gauge)
The number of executing queries
Shown as query
clickhouse.query.waiting
(gauge)
The number of queries that are stopped and waiting due to 'priority' setting.
Shown as query
clickhouse.thread.query
(gauge)
The number of query processing threads
Shown as thread
clickhouse.thread.lock.rw.active.read
(gauge)
The number of threads holding read lock in a table RWLock.
Shown as thread
clickhouse.thread.lock.rw.active.write
(gauge)
The number of threads holding write lock in a table RWLock.
Shown as thread
clickhouse.thread.lock.rw.waiting.read
(gauge)
The number of threads waiting for read on a table RWLock.
Shown as thread
clickhouse.thread.lock.rw.waiting.write
(gauge)
The number of threads waiting for write on a table RWLock.
Shown as thread
clickhouse.syscall.read
(gauge)
The number of in-flight read (read, pread, io_getevents, etc.) syscalls.
Shown as read
clickhouse.table.replicated.readonly
(gauge)
The number of Replicated tables that are currently in readonly state due to re-initialization after ZooKeeper session loss or due to startup without ZooKeeper configured.
Shown as table
clickhouse.table.replicated.part.check
(gauge)
The number of data parts checking for consistency
Shown as item
clickhouse.table.replicated.part.fetch
(gauge)
The number of data parts being fetched from replica
Shown as item
clickhouse.table.replicated.part.send
(gauge)
The number of data parts being sent to replicas
Shown as item
clickhouse.connection.send.external
(gauge)
The number of connections that are sending data for external tables to remote servers. External tables are used to implement GLOBAL IN and GLOBAL JOIN operators with distributed subqueries.
Shown as connection
clickhouse.connection.send.scalar
(gauge)
The number of connections that are sending data for scalars to remote servers.
Shown as connection
clickhouse.table.buffer.size
(gauge)
Size of buffers of Buffer tables.
Shown as byte
clickhouse.table.buffer.row
(gauge)
The number of rows in buffers of Buffer tables.
Shown as row
clickhouse.connection.mysql
(gauge)
Number of client connections using MySQL protocol.
Shown as connection
clickhouse.connection.tcp
(gauge)
The number of connections to TCP server (clients with native interface).
Shown as connection
clickhouse.syscall.write
(gauge)
The number of in-flight write (write, pwrite, io_getevents, etc.) syscalls.
Shown as write
clickhouse.zk.request
(gauge)
The number of in-flight requests to ZooKeeper.
Shown as request
clickhouse.zk.connection
(gauge)
The number of sessions (connections) to ZooKeeper. Should be no more than one, because using more than one connection to ZooKeeper may lead to bugs due to the lack of linearizability (stale reads) that the ZooKeeper consistency model allows.
Shown as connection
clickhouse.zk.watch
(gauge)
The number of watches (event subscriptions) in ZooKeeper.
Shown as event
clickhouse.lock.context.acquisition.count
(count)
The number of times the lock of Context was acquired or an acquisition was attempted during the last interval. This is a global lock.
Shown as event
clickhouse.lock.context.acquisition.total
(gauge)
The total number of times the lock of Context was acquired or an acquisition was attempted. This is a global lock.
Shown as event
clickhouse.syscall.write.wait
(gauge)
The percentage of time spent waiting for write syscalls during the last interval. This includes writes to the page cache.
Shown as percent
clickhouse.file.open.count
(count)
The number of files opened during the last interval.
Shown as file
clickhouse.file.open.total
(gauge)
The total number of files opened.
Shown as file
clickhouse.query.count
(count)
The number of queries to be interpreted and potentially executed during the last interval. Does not include queries that failed to parse or were rejected due to AST size limits, quota limits or limits on the number of simultaneously running queries. May include internal queries initiated by ClickHouse itself. Does not count subqueries.
Shown as query
clickhouse.query.total
(gauge)
The total number of queries to be interpreted and potentially executed. Does not include queries that failed to parse or were rejected due to AST size limits, quota limits or limits on the number of simultaneously running queries. May include internal queries initiated by ClickHouse itself. Does not count subqueries.
Shown as query
clickhouse.file.read.count
(count)
The number of reads (read/pread) from a file descriptor during the last interval. Does not include sockets.
Shown as read
clickhouse.file.read.total
(gauge)
The total number of reads (read/pread) from a file descriptor. Does not include sockets.
Shown as read
clickhouse.thread.process_time
(gauge)
The percentage of time threads spent processing (queries and other tasks) during the last interval.
Shown as percent
clickhouse.query.insert.count
(count)
The number of INSERT queries to be interpreted and potentially executed during the last interval. Does not include queries that failed to parse or were rejected due to AST size limits, quota limits or limits on the number of simultaneously running queries. May include internal queries initiated by ClickHouse itself. Does not count subqueries.
Shown as query
clickhouse.query.insert.total
(gauge)
The total number of INSERT queries to be interpreted and potentially executed. Does not include queries that failed to parse or were rejected due to AST size limits, quota limits or limits on the number of simultaneously running queries. May include internal queries initiated by ClickHouse itself. Does not count subqueries.
Shown as query
clickhouse.query.select.count
(count)
The number of SELECT queries to be interpreted and potentially executed during the last interval. Does not include queries that failed to parse or were rejected due to AST size limits, quota limits or limits on the number of simultaneously running queries. May include internal queries initiated by ClickHouse itself. Does not count subqueries.
Shown as query
clickhouse.query.select.total
(gauge)
The total number of SELECT queries to be interpreted and potentially executed. Does not include queries that failed to parse or were rejected due to AST size limits, quota limits or limits on the number of simultaneously running queries. May include internal queries initiated by ClickHouse itself. Does not count subqueries.
Shown as query
clickhouse.thread.system.process_time
(gauge)
The percentage of time processing threads (queries and other tasks) spent executing CPU instructions in OS kernel space during the last interval. This includes time the CPU pipeline was stalled due to cache misses, branch mispredictions, hyper-threading, etc.
Shown as percent
clickhouse.thread.user.process_time
(gauge)
The percentage of time processing threads (queries and other tasks) spent executing CPU instructions in user space during the last interval. This includes time the CPU pipeline was stalled due to cache misses, branch mispredictions, hyper-threading, etc.
Shown as percent
clickhouse.file.write.count
(count)
The number of writes (write/pwrite) to a file descriptor during the last interval. Does not include sockets.
Shown as write
clickhouse.file.write.total
(gauge)
The total number of writes (write/pwrite) to a file descriptor. Does not include sockets.
Shown as write
clickhouse.file.write.size.count
(count)
The number of bytes written to file descriptors during the last interval. If the file is compressed, this will show compressed data size.
Shown as byte
clickhouse.file.write.size.total
(gauge)
The total number of bytes written to file descriptors. If the file is compressed, this will show compressed data size.
Shown as byte
clickhouse.mmapped.file.current
(gauge)
Total number of mmapped files.
Shown as file
clickhouse.mmapped.file.size
(gauge)
Sum size of mmapped file regions.
Shown as byte
clickhouse.network.receive.elapsed.time
(gauge)
Total time spent waiting for data to receive or receiving data from the network.
Shown as microsecond
clickhouse.network.receive.size.count
(count)
The number of bytes received from network.
Shown as byte
clickhouse.network.receive.size.total
(gauge)
The total number of bytes received from network.
Shown as byte
clickhouse.network.send.elapsed.time
(gauge)
Total time spent waiting for data to send to network or sending data to network.
Shown as microsecond
clickhouse.network.send.size.count
(count)
The number of bytes sent to the network.
Shown as byte
clickhouse.network.send.size.total
(gauge)
The total number of bytes sent to the network.
Shown as byte
clickhouse.network.threads.receive
(gauge)
Number of threads receiving data from the network.
Shown as thread
clickhouse.network.threads.send
(gauge)
Number of threads sending data to the network.
Shown as thread
clickhouse.node.remove.count
(count)
The number of times an error happened while trying to remove ephemeral node during the last interval. This is usually not an issue, because ClickHouse's implementation of ZooKeeper library guarantees that the session will expire and the node will be removed.
Shown as error
clickhouse.node.remove.total
(gauge)
The total number of times an error happened while trying to remove ephemeral node. This is usually not an issue, because ClickHouse's implementation of ZooKeeper library guarantees that the session will expire and the node will be removed.
Shown as error
clickhouse.buffer.write.discard.count
(count)
The number of stack traces dropped by query profiler or signal handler because pipe is full or cannot write to pipe during the last interval.
Shown as error
clickhouse.buffer.write.discard.total
(gauge)
The total number of stack traces dropped by query profiler or signal handler because pipe is full or cannot write to pipe.
Shown as error
clickhouse.compilation.attempt.count
(count)
The number of times a compilation of generated C++ code was initiated during the last interval.
Shown as event
clickhouse.compilation.attempt.total
(gauge)
The total number of times a compilation of generated C++ code was initiated.
Shown as event
clickhouse.compilation.size.count
(count)
The number of bytes used for expressions compilation during the last interval.
Shown as byte
clickhouse.compilation.size.total
(gauge)
The total number of bytes used for expressions compilation.
Shown as byte
clickhouse.compilation.time
(gauge)
The percentage of time spent for compilation of expressions to LLVM code during the last interval.
Shown as percent
clickhouse.compilation.llvm.attempt.count
(count)
The number of times a compilation of generated LLVM code (to create fused function for complex expressions) was initiated during the last interval.
Shown as event
clickhouse.compilation.llvm.attempt.total
(gauge)
The total number of times a compilation of generated LLVM code (to create fused function for complex expressions) was initiated.
Shown as event
clickhouse.compilation.success.count
(count)
The number of times a compilation of generated C++ code was successful during the last interval.
Shown as event
clickhouse.compilation.success.total
(gauge)
The total number of times a compilation of generated C++ code was successful.
Shown as event
clickhouse.compilation.function.execute.count
(count)
The number of times a compiled function was executed during the last interval.
Shown as execution
clickhouse.compilation.function.execute.total
(gauge)
The total number of times a compiled function was executed.
Shown as execution
clickhouse.connection.http.create.count
(count)
The number of created HTTP connections (closed or opened) during the last interval.
Shown as connection
clickhouse.connection.http.create.total
(gauge)
The total number of created HTTP connections (closed or opened).
Shown as connection
clickhouse.table.mergetree.insert.delayed.count
(count)
The number of times the INSERT of a block to a MergeTree table was throttled due to high number of active data parts for partition during the last interval.
Shown as throttle
clickhouse.table.mergetree.insert.delayed.total
(gauge)
The total number of times the INSERT of a block to a MergeTree table was throttled due to high number of active data parts for partition.
Shown as throttle
clickhouse.table.mergetree.insert.delayed.time
(gauge)
The percentage of time spent while the INSERT of a block to a MergeTree table was throttled due to high number of active data parts for partition during the last interval.
Shown as percent
clickhouse.syscall.read.wait
(gauge)
The percentage of time spent waiting for read syscall during the last interval. This includes reads from page cache.
Shown as percent
clickhouse.table.mergetree.replicated.insert.deduplicate.count
(count)
The number of times the INSERTed block to a ReplicatedMergeTree table was deduplicated during the last interval.
Shown as operation
clickhouse.table.mergetree.replicated.insert.deduplicate.total
(gauge)
The total number of times the INSERTed block to a ReplicatedMergeTree table was deduplicated.
Shown as operation
clickhouse.table.insert.size.count
(count)
The number of bytes (uncompressed; for columns as they are stored in memory) INSERTed to all tables during the last interval.
Shown as byte
clickhouse.table.insert.size.total
(gauge)
The total number of bytes (uncompressed; for columns as they are stored in memory) INSERTed to all tables.
Shown as byte
clickhouse.table.insert.row.count
(count)
The number of rows INSERTed to all tables during the last interval.
Shown as row
clickhouse.table.insert.row.total
(gauge)
The total number of rows INSERTed to all tables.
Shown as row
clickhouse.table.mergetree.replicated.leader.elected.count
(count)
The number of times a ReplicatedMergeTree table became a leader during the last interval. Leader replica is responsible for assigning merges, cleaning old blocks for deduplications and a few more bookkeeping tasks.
Shown as event
clickhouse.table.mergetree.replicated.leader.elected.total
(gauge)
The total number of times a ReplicatedMergeTree table became a leader. Leader replica is responsible for assigning merges, cleaning old blocks for deduplications and a few more bookkeeping tasks.
Shown as event
clickhouse.merge.count
(count)
The number of launched background merges during the last interval.
Shown as merge
clickhouse.merge.total
(gauge)
The total number of launched background merges.
Shown as merge
clickhouse.table.mergetree.insert.block.count
(count)
The number of blocks INSERTed to MergeTree tables during the last interval. Each block forms a data part of level zero.
Shown as block
clickhouse.table.mergetree.insert.block.total
(gauge)
The total number of blocks INSERTed to MergeTree tables. Each block forms a data part of level zero.
Shown as block
clickhouse.table.mergetree.insert.block.already_sorted.count
(count)
The number of blocks INSERTed to MergeTree tables that appeared to be already sorted during the last interval.
Shown as block
clickhouse.table.mergetree.insert.block.already_sorted.total
(gauge)
The total number of blocks INSERTed to MergeTree tables that appeared to be already sorted.
Shown as block
clickhouse.table.mergetree.insert.write.size.compressed.count
(count)
The number of bytes written to filesystem for data INSERTed to MergeTree tables during the last interval.
Shown as byte
clickhouse.table.mergetree.insert.write.size.compressed.total
(gauge)
The total number of bytes written to filesystem for data INSERTed to MergeTree tables.
Shown as byte
clickhouse.table.mergetree.insert.row.count
(count)
The number of rows INSERTed to MergeTree tables during the last interval.
Shown as row
clickhouse.table.mergetree.insert.row.total
(gauge)
The total number of rows INSERTed to MergeTree tables.
Shown as row
clickhouse.table.mergetree.insert.write.size.uncompressed.count
(count)
The number of uncompressed bytes (for columns as they are stored in memory) INSERTed to MergeTree tables during the last interval.
Shown as byte
clickhouse.table.mergetree.insert.write.size.uncompressed.total
(gauge)
The total number of uncompressed bytes (for columns as they are stored in memory) INSERTed to MergeTree tables.
Shown as byte
clickhouse.merge.row.read.count
(count)
The number of rows read for background merges during the last interval. This is the number of rows before merge.
Shown as row
clickhouse.merge.row.read.total
(gauge)
The total number of rows read for background merges. This is the number of rows before merge.
Shown as row
clickhouse.merge.read.size.uncompressed.count
(count)
The number of uncompressed bytes (for columns as they are stored in memory) that were read for background merges during the last interval. This is the number before merge.
Shown as byte
clickhouse.merge.read.size.uncompressed.total
(gauge)
The total number of uncompressed bytes (for columns as they are stored in memory) that were read for background merges. This is the number before merge.
Shown as byte
clickhouse.merge.time
(gauge)
The percentage of time spent for background merges during the last interval.
Shown as percent
clickhouse.cpu.time
(gauge)
The percentage of CPU time spent, as seen by the OS, during the last interval. Does not include involuntary waits due to virtualization.
Shown as percent
clickhouse.thread.cpu.wait
(gauge)
The percentage of time a thread was ready for execution but waiting to be scheduled by OS (from the OS point of view) during the last interval.
Shown as percent
clickhouse.thread.io.wait
(gauge)
The percentage of time a thread spent waiting for a result of IO operation (from the OS point of view) during the last interval. This is real IO that doesn't include page cache.
Shown as percent
clickhouse.disk.read.size.count
(count)
The number of bytes read from disks or block devices during the last interval. Doesn't include bytes read from page cache. May include excessive data due to block size, readahead, etc.
Shown as byte
clickhouse.disk.read.size.total
(gauge)
The total number of bytes read from disks or block devices. Doesn't include bytes read from page cache. May include excessive data due to block size, readahead, etc.
Shown as byte
clickhouse.fs.read.size.count
(count)
The number of bytes read from filesystem (including page cache) during the last interval.
Shown as byte
clickhouse.fs.read.size.total
(gauge)
The total number of bytes read from filesystem (including page cache).
Shown as byte
clickhouse.disk.write.size.count
(count)
The number of bytes written to disks or block devices during the last interval. Doesn't include bytes that are in page cache dirty pages. May not include data that was written by OS asynchronously.
Shown as byte
clickhouse.disk.write.size.total
(gauge)
The total number of bytes written to disks or block devices. Doesn't include bytes that are in page cache dirty pages. May not include data that was written by OS asynchronously.
Shown as byte
clickhouse.fs.write.size.count
(count)
The number of bytes written to filesystem (including page cache) during the last interval.
Shown as byte
clickhouse.fs.write.size.total
(gauge)
The total number of bytes written to filesystem (including page cache).
Shown as byte
clickhouse.query.mask.match.count
(count)
The number of times query masking rules were successfully matched during the last interval.
Shown as occurrence
clickhouse.query.mask.match.total
(gauge)
The total number of times query masking rules were successfully matched.
Shown as occurrence
clickhouse.query.signal.dropped.count
(count)
The number of times the processing of a signal was dropped due to overrun plus the number of signals that the OS has not delivered due to overrun during the last interval.
Shown as occurrence
clickhouse.query.signal.dropped.total
(gauge)
The total number of times the processing of a signal was dropped due to overrun plus the number of signals that the OS has not delivered due to overrun.
Shown as occurrence
clickhouse.query.read.backoff.count
(count)
The number of times the number of query processing threads was lowered due to slow reads during the last interval.
Shown as occurrence
clickhouse.query.read.backoff.total
(gauge)
The total number of times the number of query processing threads was lowered due to slow reads.
Shown as occurrence
clickhouse.file.read.size.count
(count)
The number of bytes read from file descriptors during the last interval. If the file is compressed, this will show the compressed data size.
Shown as byte
clickhouse.file.read.size.total
(gauge)
The total number of bytes read from file descriptors. If the file is compressed, this will show the compressed data size.
Shown as byte
clickhouse.file.read.fail.count
(count)
The number of times a read (read/pread) from a file descriptor failed during the last interval.
Shown as read
clickhouse.file.read.fail.total
(gauge)
The total number of times a read (read/pread) from a file descriptor failed.
Shown as read
clickhouse.compilation.regex.count
(count)
The number of regular expressions compiled during the last interval. Identical regular expressions are compiled just once and cached forever.
Shown as event
clickhouse.compilation.regex.total
(gauge)
The total number of regular expressions compiled. Identical regular expressions are compiled just once and cached forever.
Shown as event
clickhouse.table.mergetree.insert.block.rejected.count
(count)
The number of times the INSERT of a block to a MergeTree table was rejected with a 'Too many parts' exception due to a high number of active data parts for the partition during the last interval.
Shown as block
clickhouse.table.mergetree.insert.block.rejected.total
(gauge)
The total number of times the INSERT of a block to a MergeTree table was rejected with a 'Too many parts' exception due to a high number of active data parts for the partition.
Shown as block
clickhouse.table.replicated.leader.yield.count
(count)
The number of times Replicated table yielded its leadership due to large replication lag relative to other replicas during the last interval.
Shown as event
clickhouse.table.replicated.leader.yield.total
(gauge)
The total number of times Replicated table yielded its leadership due to large replication lag relative to other replicas.
Shown as event
clickhouse.table.replicated.part.loss.count
(count)
The number of times a required data part did not exist on any replica (even on replicas that are currently offline) during the last interval. Those data parts are definitely lost. This is normal with asynchronous replication (if quorum inserts were not enabled): the replica on which the data part was written failed, and when it came back online it no longer contained that data part.
Shown as item
clickhouse.table.replicated.part.loss.total
(gauge)
The total number of times a required data part did not exist on any replica (even on replicas that are currently offline). Those data parts are definitely lost. This is normal with asynchronous replication (if quorum inserts were not enabled): the replica on which the data part was written failed, and when it came back online it no longer contained that data part.
Shown as item
clickhouse.table.mergetree.replicated.fetch.replica.count
(count)
The number of times a data part was downloaded from replica of a ReplicatedMergeTree table during the last interval.
Shown as fetch
clickhouse.table.mergetree.replicated.fetch.replica.total
(gauge)
The total number of times a data part was downloaded from replica of a ReplicatedMergeTree table.
Shown as fetch
clickhouse.table.mergetree.replicated.fetch.merged.count
(count)
The number of times ClickHouse preferred to download an already merged part from a replica of a ReplicatedMergeTree table instead of performing the merge itself (usually it prefers doing the merge itself to save network traffic) during the last interval. This happens when ClickHouse does not have all source parts to perform the merge or when the data part is old enough.
Shown as fetch
clickhouse.table.mergetree.replicated.fetch.merged.total
(gauge)
The total number of times ClickHouse preferred to download an already merged part from a replica of a ReplicatedMergeTree table instead of performing the merge itself (usually it prefers doing the merge itself to save network traffic). This happens when ClickHouse does not have all source parts to perform the merge or when the data part is old enough.
Shown as fetch
clickhouse.file.seek.count
(count)
The number of times the lseek function was called during the last interval.
Shown as operation
clickhouse.file.seek.total
(gauge)
The total number of times the lseek function was called.
Shown as operation
clickhouse.table.mergetree.mark.selected.count
(count)
The number of marks (index granules) selected to read from a MergeTree table during the last interval.
Shown as index
clickhouse.table.mergetree.mark.selected.total
(gauge)
The total number of marks (index granules) selected to read from a MergeTree table.
Shown as index
clickhouse.table.mergetree.part.selected.count
(count)
The number of data parts selected to read from a MergeTree table during the last interval.
Shown as item
clickhouse.table.mergetree.part.selected.total
(gauge)
The total number of data parts selected to read from a MergeTree table.
Shown as item
clickhouse.table.mergetree.range.selected.count
(count)
The number of non-adjacent ranges in all data parts selected to read from a MergeTree table during the last interval.
Shown as item
clickhouse.table.mergetree.range.selected.total
(gauge)
The total number of non-adjacent ranges in all data parts selected to read from a MergeTree table.
Shown as item
clickhouse.file.read.slow.count
(count)
The number of reads from a file that were slow during the last interval. This indicates system overload. Thresholds are controlled by the read_backoff_* settings.
Shown as read
clickhouse.file.read.slow.total
(gauge)
The total number of reads from a file that were slow. This indicates system overload. Thresholds are controlled by the read_backoff_* settings.
Shown as read
clickhouse.query.sleep.time
(gauge)
The percentage of time a query was sleeping to conform to the max_network_bandwidth setting during the last interval.
Shown as percent
clickhouse.file.write.fail.count
(count)
The number of times a write (write/pwrite) to a file descriptor failed during the last interval.
Shown as write
clickhouse.file.write.fail.total
(gauge)
The total number of times a write (write/pwrite) to a file descriptor failed.
Shown as write
clickhouse.CompiledExpressionCacheCount
(gauge)
Total entries in the cache of JIT-compiled code.
Shown as item
clickhouse.table.mergetree.storage.mark.cache
(gauge)
The size of the cache of marks for StorageMergeTree.
Shown as byte
clickhouse.MarkCacheFiles
(gauge)
The number of mark files cached in the mark cache.
Shown as item
clickhouse.part.max
(gauge)
The maximum number of active parts in partitions.
Shown as item
clickhouse.database.total
(gauge)
The current number of databases.
Shown as instance
clickhouse.table.total
(gauge)
The current number of tables.
Shown as table
clickhouse.replica.delay.absolute
(gauge)
The maximum replica queue delay relative to current time.
Shown as millisecond
clickhouse.ReplicasMaxInsertsInQueue
(gauge)
Maximum number of INSERT operations in the queue (still to be replicated) across Replicated tables.
Shown as item
clickhouse.ReplicasMaxMergesInQueue
(gauge)
Maximum number of merge operations in the queue (still to be applied) across Replicated tables.
Shown as item
clickhouse.ReplicasMaxQueueSize
(gauge)
Maximum queue size (in the number of operations like get, merge) across Replicated tables.
Shown as item
clickhouse.replica.delay.relative
(gauge)
The maximum difference of absolute delay from any other replica.
Shown as millisecond
clickhouse.ReplicasSumInsertsInQueue
(gauge)
Sum of INSERT operations in the queue (still to be replicated) across Replicated tables.
Shown as item
clickhouse.ReplicasSumMergesInQueue
(gauge)
Sum of merge operations in the queue (still to be applied) across Replicated tables.
Shown as item
clickhouse.replica.queue.size
(gauge)
The number of replication tasks in queue.
Shown as task
clickhouse.UncompressedCacheBytes
(gauge)
Total size of uncompressed cache in bytes. Uncompressed cache does not usually improve the performance and should be mostly avoided.
Shown as byte
clickhouse.UncompressedCacheCells
(gauge)
Total number of entries in the uncompressed cache. Each entry represents a decompressed block of data. Uncompressed cache does not usually improve performance and should be mostly avoided.
Shown as item
clickhouse.uptime
(gauge)
The amount of time ClickHouse has been active.
Shown as second
clickhouse.jemalloc.active
(gauge)
(EXPERIMENTAL)
Shown as byte
clickhouse.jemalloc.allocated
(gauge)
The amount of memory allocated by ClickHouse.
Shown as byte
clickhouse.jemalloc.background_thread.num_runs
(gauge)
(EXPERIMENTAL)
Shown as byte
clickhouse.jemalloc.background_thread.num_threads
(gauge)
(EXPERIMENTAL)
Shown as thread
clickhouse.jemalloc.background_thread.run_interval
(gauge)
(EXPERIMENTAL)
Shown as byte
clickhouse.jemalloc.mapped
(gauge)
The amount of memory in active extents mapped by the allocator.
Shown as byte
clickhouse.jemalloc.metadata
(gauge)
The amount of memory dedicated to metadata, which comprise base allocations used for bootstrap-sensitive allocator metadata structures and internal allocations.
Shown as byte
clickhouse.jemalloc.metadata_thp
(gauge)
(EXPERIMENTAL)
Shown as byte
clickhouse.jemalloc.resident
(gauge)
The amount of memory in physically resident data pages mapped by the allocator, comprising all pages dedicated to allocator metadata, pages backing active allocations, and unused dirty pages.
Shown as byte
clickhouse.jemalloc.retained
(gauge)
The amount of memory in virtual memory mappings that were retained rather than being returned to the operating system.
Shown as byte
clickhouse.dictionary.memory.used
(gauge)
The total amount of memory used by a dictionary.
Shown as byte
clickhouse.dictionary.item.current
(gauge)
The number of items stored in a dictionary.
Shown as item
clickhouse.dictionary.load
(gauge)
The percentage filled in a dictionary (for a hashed dictionary, the percentage filled in the hash table).
Shown as percent
clickhouse.table.mergetree.size
(gauge)
The total size of all data part files of a MergeTree table.
Shown as byte
clickhouse.table.mergetree.part.current
(gauge)
The total number of data parts of a MergeTree table.
Shown as object
clickhouse.table.mergetree.row.current
(gauge)
The total number of rows in a MergeTree table.
Shown as row
clickhouse.table.replicated.part.future
(gauge)
The number of data parts that will appear as the result of INSERTs or merges that haven't been done yet.
Shown as item
clickhouse.table.replicated.part.suspect
(gauge)
The number of data parts in the queue for verification. A part is put in the verification queue if there is suspicion that it might be damaged.
Shown as item
clickhouse.table.replicated.version
(gauge)
Version number of the table structure indicating how many times ALTER was performed. If replicas have different versions, it means some replicas haven't made all of the ALTERs yet.
Shown as operation
clickhouse.table.replicated.queue.size
(gauge)
Size of the queue for operations waiting to be performed. Operations include inserting blocks of data, merges, and certain other actions. It usually coincides with clickhouse.table.replicated.part.future.
Shown as operation
clickhouse.table.replicated.queue.insert
(gauge)
The number of inserts of blocks of data that need to be made. Insertions are usually replicated fairly quickly. If this number is large, it means something is wrong.
Shown as operation
clickhouse.table.replicated.queue.merge
(gauge)
The number of merges waiting to be made. Sometimes merges are lengthy, so this value may be greater than zero for a long time.
Shown as merge
clickhouse.table.replicated.log.max
(gauge)
Maximum entry number in the log of general activity.
Shown as item
clickhouse.table.replicated.log.pointer
(gauge)
Maximum entry number in the log of general activity that the replica copied to its execution queue, plus one. If this is much smaller than clickhouse.table.replicated.log.max, something is wrong.
Shown as item
clickhouse.table.replicated.total
(gauge)
The total number of known replicas of this table.
Shown as table
clickhouse.table.replicated.active
(gauge)
The number of replicas of this table that have a session in ZooKeeper (i.e., the number of functioning replicas).
Shown as table
clickhouse.read.compressed.block.count
(count)
The number of compressed blocks (blocks of data that are compressed independently of each other) read from compressed sources (files, network) during the last interval.
Shown as block
clickhouse.read.compressed.block.total
(gauge)
The total number of compressed blocks (blocks of data that are compressed independently of each other) read from compressed sources (files, network).
Shown as block
clickhouse.read.compressed.raw.size.count
(count)
The number of uncompressed bytes (the number of bytes after decompression) read from compressed sources (files, network) during the last interval.
Shown as byte
clickhouse.read.compressed.raw.size.total
(gauge)
The total number of uncompressed bytes (the number of bytes after decompression) read from compressed sources (files, network).
Shown as byte
clickhouse.read.compressed.size.count
(count)
The number of bytes (the number of bytes before decompression) read from compressed sources (files, network) during the last interval.
Shown as byte
clickhouse.read.compressed.size.total
(gauge)
The total number of bytes (the number of bytes before decompression) read from compressed sources (files, network).
Shown as byte
clickhouse.table.mergetree.replicated.fetch.replica.fail.count
(count)
The number of times a data part failed to download from a replica of a ReplicatedMergeTree table during the last interval.
Shown as byte
clickhouse.table.mergetree.replicated.fetch.replica.fail.total
(gauge)
The total number of times a data part failed to download from a replica of a ReplicatedMergeTree table.
Shown as byte
clickhouse.table.mergetree.replicated.merge.count
(count)
The number of times data parts of ReplicatedMergeTree tables were successfully merged during the last interval.
Shown as byte
clickhouse.table.mergetree.replicated.merge.total
(gauge)
The total number of times data parts of ReplicatedMergeTree tables were successfully merged.
Shown as byte
clickhouse.background_pool.buffer_flush_schedule.task.active
(gauge)
Number of active tasks in BackgroundBufferFlushSchedulePool. This pool is used for periodic Buffer flushes
Shown as task
clickhouse.background_pool.distributed.task.active
(gauge)
Number of active tasks in BackgroundDistributedSchedulePool. This pool is used for distributed sends that are done in the background.
Shown as task
clickhouse.background_pool.fetches.task.active
(gauge)
Number of active tasks in BackgroundFetchesPool
Shown as task
clickhouse.background_pool.message_broker.task.active
(gauge)
Number of active tasks in BackgroundProcessingPool for message streaming
Shown as task
clickhouse.ddl.max_processed
(gauge)
Max processed DDL entry of DDLWorker.
clickhouse.parts.committed
(gauge)
Active data part, used by current and upcoming SELECTs.
Shown as item
clickhouse.parts.compact
(gauge)
Compact parts.
Shown as item
clickhouse.parts.active
(gauge)
[Only versions >= 22.7.1] Active data part used by current and upcoming SELECTs.
Shown as item
clickhouse.parts.pre_active
(gauge)
[Only versions >= 22.7.1] The part is in data_parts but not used for SELECTs.
Shown as item
clickhouse.parts.delete_on_destroy
(gauge)
The part was moved to another disk and should be deleted in its own destructor.
Shown as item
clickhouse.parts.deleting
(gauge)
Not an active data part with an identity refcounter; it is being deleted right now by a cleaner.
Shown as item
clickhouse.parts.inmemory
(gauge)
In-memory parts.
Shown as item
clickhouse.parts.outdated
(gauge)
Not an active data part, but it can be used only by current SELECTs and can be deleted after those SELECTs finish.
Shown as item
clickhouse.parts.precommitted
(gauge)
The part is in data_parts, but not used for SELECTs.
Shown as item
clickhouse.parts.temporary
(gauge)
The part is being generated now; it is not in the data_parts list.
Shown as item
clickhouse.parts.wide
(gauge)
Wide parts.
Shown as item
clickhouse.postgresql.connection
(gauge)
Number of client connections using PostgreSQL protocol
Shown as connection
clickhouse.tables_to_drop.queue.total
(gauge)
Number of dropped tables that are waiting for background data removal.
Shown as table
clickhouse.log.entry.merge.created.total
(gauge)
Total number of successfully created log entries to merge parts in ReplicatedMergeTree.
Shown as event
clickhouse.log.entry.merge.created.count
(count)
Successfully created log entry to merge parts in ReplicatedMergeTree.
Shown as event
clickhouse.log.entry.mutation.created.total
(gauge)
Total number of successfully created log entries to mutate parts in ReplicatedMergeTree.
Shown as event
clickhouse.log.entry.mutation.created.count
(count)
Successfully created log entry to mutate parts in ReplicatedMergeTree.
Shown as event
clickhouse.error.dns.total
(gauge)
Total count of errors in DNS resolution
Shown as error
clickhouse.error.dns.count
(count)
Number of errors in DNS resolution
Shown as error
clickhouse.distributed.connection.fail_at_all.total
(gauge)
Total count of distributed connection failures after all retries finished.
Shown as connection
clickhouse.distributed.connection.fail_at_all.count
(count)
Count of distributed connection failures after all retries finished.
Shown as connection
clickhouse.distributed.connection.fail_try.total
(gauge)
Total count of distributed connection failures with retry.
Shown as connection
clickhouse.distributed.connection.fail_try.count
(count)
Count of distributed connection failures with retry.
Shown as connection
clickhouse.distributed.inserts.delayed.total
(gauge)
Total number of times the INSERT of a block to a Distributed table was throttled due to high number of pending bytes.
Shown as query
clickhouse.distributed.inserts.delayed.count
(count)
Number of times the INSERT of a block to a Distributed table was throttled due to high number of pending bytes.
Shown as query
clickhouse.distributed.delayed.inserts.time
(gauge)
Total number of milliseconds spent while the INSERT of a block to a Distributed table was throttled due to high number of pending bytes.
Shown as microsecond
clickhouse.distributed.inserts.rejected.total
(gauge)
Total number of times the INSERT of a block to a Distributed table was rejected with 'Too many bytes' exception due to high number of pending bytes.
Shown as query
clickhouse.distributed.inserts.rejected.count
(count)
Number of times the INSERT of a block to a Distributed table was rejected with 'Too many bytes' exception due to high number of pending bytes.
Shown as query
clickhouse.query.insert.failed.total
(gauge)
Same as FailedQuery, but only for INSERT queries.
Shown as query
clickhouse.query.insert.failed.count
(count)
Same as FailedQuery, but only for INSERT queries.
Shown as query
clickhouse.query.failed.total
(gauge)
Total number of failed queries.
Shown as query
clickhouse.query.failed.count
(count)
Number of failed queries.
Shown as query
clickhouse.select.query.select.failed.total
(gauge)
Same as FailedQuery, but only for SELECT queries.
Shown as query
clickhouse.select.query.select.failed.count
(count)
Same as FailedQuery, but only for SELECT queries.
Shown as query
clickhouse.insert.query.time
(gauge)
Total time of INSERT queries.
Shown as microsecond
clickhouse.log.entry.merge.not_created.total
(gauge)
Total log entries to merge parts in ReplicatedMergeTree not created due to concurrent log update by another replica.
Shown as event
clickhouse.log.entry.merge.not_created.count
(count)
Log entry to merge parts in ReplicatedMergeTree is not created due to concurrent log update by another replica.
Shown as event
clickhouse.log.entry.mutation.not_created.total
(gauge)
Total log entries to mutate parts in ReplicatedMergeTree not created due to concurrent log update by another replica.
Shown as event
clickhouse.log.entry.mutation.not_created.count
(count)
Log entry to mutate parts in ReplicatedMergeTree is not created due to concurrent log update by another replica.
Shown as event
clickhouse.perf.alignment.faults.total
(gauge)
Total number of alignment faults. These occur on unaligned memory accesses; the kernel can handle them, but it reduces performance. This happens only on some architectures (never on x86).
Shown as event
clickhouse.perf.alignment.faults.count
(count)
Number of alignment faults. These occur on unaligned memory accesses; the kernel can handle them, but it reduces performance. This happens only on some architectures (never on x86).
Shown as event
clickhouse.perf.branch.instructions.total
(gauge)
Total retired branch instructions. Prior to Linux 2.6.35, this used the wrong event on AMD processors.
Shown as unit
clickhouse.perf.branch.instructions.count
(count)
Retired branch instructions. Prior to Linux 2.6.35, this used the wrong event on AMD processors.
Shown as unit
clickhouse.perf.branch.misses.total
(gauge)
Total mispredicted branch instructions.
Shown as unit
clickhouse.perf.branch.misses.count
(count)
Mispredicted branch instructions.
Shown as unit
clickhouse.perf.bus.cycles.total
(gauge)
Total bus cycles, which can be different from total cycles.
Shown as unit
clickhouse.perf.bus.cycles.count
(count)
Bus cycles, which can be different from total cycles.
Shown as unit
clickhouse.perf.cache.misses.total
(gauge)
Cache misses. Usually this indicates total Last Level Cache misses; this is intended to be used in conjunction with the PERF_COUNT_HW_CACHE_REFERENCES event to calculate cache miss rates.
Shown as miss
clickhouse.perf.cache.misses.count
(count)
Cache misses. Usually this indicates Last Level Cache misses; this is intended to be used in conjunction with the PERF_COUNT_HW_CACHE_REFERENCES event to calculate cache miss rates.
Shown as miss
clickhouse.perf.cache.references.total
(gauge)
Cache accesses. Usually this indicates total Last Level Cache accesses but this may vary depending on your CPU. This may include prefetches and coherency messages; again this depends on the design of your CPU.
Shown as unit
clickhouse.perf.cache.references.count
(count)
Cache accesses. Usually this indicates Last Level Cache accesses but this may vary depending on your CPU. This may include prefetches and coherency messages; again this depends on the design of your CPU.
Shown as unit
clickhouse.perf.context.switches.total
(gauge)
Total number of context switches
clickhouse.perf.context.switches.count
(count)
Number of context switches
clickhouse.perf.cpu.clock
(gauge)
The CPU clock, a high-resolution per-CPU timer.
Shown as unit
clickhouse.perf.cpu.cycles.total
(gauge)
Total CPU cycles. Be wary of what happens during CPU frequency scaling.
Shown as unit
clickhouse.perf.cpu.cycles.count
(count)
CPU cycles. Be wary of what happens during CPU frequency scaling.
Shown as unit
clickhouse.perf.cpu.migrations.total
(gauge)
Total number of times the process has migrated to a new CPU
Shown as unit
clickhouse.perf.cpu.migrations.count
(count)
Number of times the process has migrated to a new CPU
Shown as unit
clickhouse.perf.data.tlb.misses.total
(gauge)
Total data TLB misses
Shown as miss
clickhouse.perf.data.tlb.misses.count
(count)
Data TLB misses
Shown as miss
clickhouse.perf.data.tlb.references.total
(gauge)
Total data TLB references
Shown as unit
clickhouse.perf.data.tlb.references.count
(count)
Data TLB references
Shown as unit
clickhouse.perf.emulation.faults.total
(gauge)
Total number of emulation faults. The kernel sometimes traps on unimplemented instructions and emulates them for user space. This can negatively impact performance.
Shown as fault
clickhouse.perf.emulation.faults.count
(count)
Number of emulation faults. The kernel sometimes traps on unimplemented instructions and emulates them for user space. This can negatively impact performance.
Shown as fault
clickhouse.perf.instruction.tlb.misses.total
(gauge)
Total instruction TLB misses
Shown as miss
clickhouse.perf.instruction.tlb.misses.count
(count)
Instruction TLB misses
Shown as miss
clickhouse.perf.instruction.tlb.references.total
(gauge)
Total instruction TLB references
Shown as unit
clickhouse.perf.instruction.tlb.references.count
(count)
Instruction TLB references
Shown as unit
clickhouse.perf.instructions.total
(gauge)
Total retired instructions. Be careful, these can be affected by various issues, most notably hardware interrupt counts.
Shown as unit
clickhouse.perf.instructions.count
(count)
Retired instructions. Be careful, these can be affected by various issues, most notably hardware interrupt counts.
Shown as unit
clickhouse.perf.local_memory.misses.total
(gauge)
Total local NUMA node memory read misses
Shown as miss
clickhouse.perf.local_memory.misses.count
(count)
Local NUMA node memory read misses
Shown as miss
clickhouse.perf.local_memory.references.total
(gauge)
Total local NUMA node memory reads
Shown as unit
clickhouse.perf.local_memory.references.count
(count)
Local NUMA node memory reads
Shown as unit
clickhouse.perf.min_enabled.running_time
(gauge)
Running time for event with minimum enabled time. Used to track the amount of event multiplexing
Shown as microsecond
clickhouse.perf.min_enabled.min_time
(gauge)
For all events, minimum time that an event was enabled. Used to track event multiplexing influence.
Shown as microsecond
clickhouse.perf.cpu.ref_cycles.total
(gauge)
Total cycles; not affected by CPU frequency scaling.
Shown as unit
clickhouse.perf.cpu.ref_cycles.count
(count)
CPU cycles; not affected by CPU frequency scaling.
Shown as unit
clickhouse.perf.stalled_cycles.backend.total
(gauge)
Total stalled cycles during retirement.
Shown as unit
clickhouse.perf.stalled_cycles.backend.count
(count)
Stalled cycles during retirement.
Shown as unit
clickhouse.perf.stalled_cycles.frontend.total
(gauge)
Total stalled cycles during issue.
Shown as unit
clickhouse.perf.stalled_cycles.frontend.count
(count)
Stalled cycles during issue.
Shown as unit
clickhouse.perf.task.clock
(gauge)
A clock count specific to the task that is running
clickhouse.query.memory.limit_exceeded.total
(gauge)
Total number of times the memory limit was exceeded for a query.
clickhouse.query.memory.limit_exceeded.count
(count)
Number of times the memory limit was exceeded for a query.
clickhouse.query.time
(gauge)
Total time of all queries.
Shown as microsecond
clickhouse.table.replica.partial.shutdown.total
(gauge)
Total number of times a Replicated table had to deinitialize its state due to session expiration in ZooKeeper. The state is reinitialized every time ZooKeeper becomes available again.
clickhouse.table.replica.partial.shutdown.count
(count)
How many times a Replicated table had to deinitialize its state due to session expiration in ZooKeeper. The state is reinitialized every time ZooKeeper becomes available again.
clickhouse.s3.read.bytes.total
(gauge)
Total read bytes (incoming) in GET and HEAD requests to S3 storage.
Shown as byte
clickhouse.s3.read.bytes.count
(count)
Read bytes (incoming) in GET and HEAD requests to S3 storage.
Shown as byte
clickhouse.s3.read.time
(gauge)
Time of GET and HEAD requests to S3 storage.
Shown as microsecond
clickhouse.s3.read.requests.total
(gauge)
Total number of GET and HEAD requests to S3 storage.
Shown as request
clickhouse.s3.read.requests.count
(count)
Number of GET and HEAD requests to S3 storage.
Shown as request
clickhouse.s3.read.requests.errors.total
(gauge)
Total number of non-throttling errors in GET and HEAD requests to S3 storage.
Shown as error
clickhouse.s3.read.requests.errors.count
(count)
Number of non-throttling errors in GET and HEAD requests to S3 storage.
Shown as error
clickhouse.s3.read.requests.redirects.total
(gauge)
Total number of redirects in GET and HEAD requests to S3 storage.
Shown as unit
clickhouse.s3.read.requests.redirects.count
(count)
Number of redirects in GET and HEAD requests to S3 storage.
Shown as unit
clickhouse.s3.read.requests.throttling.total
(gauge)
Total number of 429 and 503 errors in GET and HEAD requests to S3 storage.
Shown as error
clickhouse.s3.read.requests.throttling.count
(count)
Number of 429 and 503 errors in GET and HEAD requests to S3 storage.
Shown as error
clickhouse.s3.write.bytes.total
(gauge)
Total write bytes (outgoing) in POST, DELETE, PUT and PATCH requests to S3 storage.
Shown as byte
clickhouse.s3.write.bytes.count
(count)
Write bytes (outgoing) in POST, DELETE, PUT and PATCH requests to S3 storage.
Shown as byte
clickhouse.s3.write.time
(gauge)
Time of POST, DELETE, PUT and PATCH requests to S3 storage.
Shown as microsecond
clickhouse.s3.write.requests.total
(gauge)
Total number of POST, DELETE, PUT and PATCH requests to S3 storage.
Shown as request
clickhouse.s3.write.requests.count
(count)
Number of POST, DELETE, PUT and PATCH requests to S3 storage.
Shown as request
clickhouse.s3.write.requests.errors.total
(gauge)
Total number of non-throttling errors in POST, DELETE, PUT and PATCH requests to S3 storage.
Shown as request
clickhouse.s3.write.requests.errors.count
(count)
Number of non-throttling errors in POST, DELETE, PUT and PATCH requests to S3 storage.
Shown as request
clickhouse.s3.write.requests.redirects.total
(gauge)
Total number of redirects in POST, DELETE, PUT and PATCH requests to S3 storage.
Shown as request
clickhouse.s3.write.requests.redirects.count
(count)
Number of redirects in POST, DELETE, PUT and PATCH requests to S3 storage.
Shown as request
clickhouse.s3.write.requests.throttling.total
(gauge)
Total number of 429 and 503 errors in POST, DELETE, PUT and PATCH requests to S3 storage.
Shown as request
clickhouse.s3.write.requests.throttling.count
(count)
Number of 429 and 503 errors in POST, DELETE, PUT and PATCH requests to S3 storage.
Shown as request
clickhouse.query.select.time
(gauge)
Total time of SELECT queries.
Shown as microsecond
clickhouse.selected.bytes.total
(gauge)
Total number of bytes (uncompressed; for columns as they are stored in memory) SELECTed from all tables.
Shown as byte
clickhouse.selected.bytes.count
(count)
Number of bytes (uncompressed; for columns as they are stored in memory) SELECTed from all tables.
Shown as byte
clickhouse.selected.rows.total
(gauge)
Total number of rows SELECTed from all tables.
Shown as row
clickhouse.selected.rows.count
(count)
Number of rows SELECTed from all tables.
Shown as row
clickhouse.aio.read.total
(gauge)
Total number of reads with Linux or FreeBSD AIO interface.
Shown as read
clickhouse.aio.read.count
(count)
Number of reads with Linux or FreeBSD AIO interface.
Shown as read
clickhouse.aio.read.size.total
(gauge)
Total number of bytes read with Linux or FreeBSD AIO interface.
Shown as byte
clickhouse.aio.read.size.count
(count)
Number of bytes read with Linux or FreeBSD AIO interface.
Shown as byte
clickhouse.aio.write.size.total
(gauge)
Total number of bytes written with the Linux or FreeBSD AIO interface.
Shown as byte
clickhouse.aio.write.size.count
(count)
Number of bytes written with the Linux or FreeBSD AIO interface.
Shown as byte
clickhouse.aio.write.total
(gauge)
Total number of writes with Linux or FreeBSD AIO interface.
Shown as write
clickhouse.aio.write.count
(count)
Number of writes with Linux or FreeBSD AIO interface.
Shown as write
clickhouse.drained_connections.async
(gauge)
Number of connections drained asynchronously.
Shown as connection
clickhouse.drained_connections.sync
(gauge)
Number of connections drained synchronously.
Shown as connection
clickhouse.drained_connections.async.active
(gauge)
Number of active connections drained asynchronously.
Shown as connection
clickhouse.drained_connections.sync.active
(gauge)
Number of active connections drained synchronously.
Shown as connection
clickhouse.table.distributed.file.insert.broken
(gauge)
Number of files for asynchronous insertion into Distributed tables that have been marked as broken. This metric starts from 0 on startup. The number of files for every shard is summed.
Shown as file
clickhouse.table.mergetree.insert.block.projection.total
(gauge)
Total number of blocks INSERTed to MergeTree tables projection. Each block forms a data part of level zero.
Shown as block
clickhouse.table.mergetree.insert.block.projection.count
(count)
Number of blocks INSERTed to MergeTree tables projection. Each block forms a data part of level zero.
Shown as block
clickhouse.table.mergetree.insert.block.already_sorted.projection.total
(gauge)
Total number of blocks INSERTed to MergeTree tables projection that appeared to be already sorted.
Shown as block
clickhouse.table.mergetree.insert.block.size.compressed.projection.count
(count)
Number of blocks INSERTed to MergeTree tables projection that appeared to be already sorted.
Shown as block
clickhouse.table.mergetree.insert.write.row.projection.total
(gauge)
Total number of rows INSERTed to MergeTree tables projection.
Shown as row
clickhouse.table.mergetree.insert.write.row.projection.count
(count)
Number of rows INSERTed to MergeTree tables projection.
Shown as row
clickhouse.table.mergetree.insert.write.size.uncompressed.projection.total
(gauge)
Total uncompressed bytes (for columns as they are stored in memory) INSERTed to MergeTree tables projection.
Shown as byte
clickhouse.table.mergetree.insert.write.size.uncompressed.projection.count
(count)
Uncompressed bytes (for columns as they are stored in memory) INSERTed to MergeTree tables projection.
Shown as byte
clickhouse.table.replica.change.hedged_requests.total
(gauge)
Total count when timeout for changing replica expired in hedged requests.
Shown as timeout
clickhouse.table.replica.change.hedged_requests.count
(gauge)
Count when timeout for changing replica expired in hedged requests.
Shown as timeout

Events

The ClickHouse check does not include any events.

Service Checks

clickhouse.can_connect
Returns CRITICAL if the Agent is unable to connect to the monitored ClickHouse database, otherwise returns OK.
Statuses: ok, critical

Troubleshooting

Need help? Contact Datadog support.