Database Monitoring provides deep visibility into your Postgres databases by exposing query metrics, query samples, explain plans, database states, failovers, and events.
The Agent collects telemetry directly from the database by logging in as a read-only user. Complete the setup steps below to enable Database Monitoring with your Postgres database.

Query metrics are collected with the `pg_stat_statements` extension, which ships with the `postgresql-contrib` package on most distributions. When the Agent is installed on the same host as the database, connecting over `127.0.0.1` or the socket is preferred. The Agent should not connect to the database through a proxy, load balancer, or connection pooler such as `pgbouncer`. While this can be an anti-pattern for client applications, each Agent must have knowledge of the underlying hostname and should stick to a single host for its lifetime, even in cases of failover. If the Datadog Agent connects to different hosts while it is running, the values of metrics will be incorrect.

Configure the following parameters in the `postgresql.conf` file and then restart the server for the settings to take effect. For more information about these parameters, see the Postgres documentation.
| Parameter | Value | Description |
|---|---|---|
| `shared_preload_libraries` | `pg_stat_statements` | Required for `postgresql.queries.*` metrics. Enables collection of query metrics using the `pg_stat_statements` extension. |
| `track_activity_query_size` | `4096` | Required for collection of larger queries. Increases the size of SQL text in `pg_stat_activity` and `pg_stat_statements`. If left at the default value, queries longer than 1024 characters are not collected. |
| `pg_stat_statements.track` | `ALL` | Optional. Enables tracking of statements within stored procedures and functions. |
| `pg_stat_statements.max` | `10000` | Optional. Increases the number of normalized queries tracked in `pg_stat_statements`. This setting is recommended for high-volume databases that see many different types of queries from many different clients. |
| `track_io_timing` | `on` | Optional. Enables collection of block read and write times for queries. |
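Taken together, the parameters above can be applied as a `postgresql.conf` fragment (a sketch; adjust the optional values to your workload):

```ini
# postgresql.conf -- settings for Database Monitoring (sketch)
shared_preload_libraries = 'pg_stat_statements'  # requires a server restart
track_activity_query_size = 4096   # capture SQL text up to 4096 characters
pg_stat_statements.track = ALL     # optional: track statements inside functions
pg_stat_statements.max = 10000     # optional: track more normalized queries
track_io_timing = on               # optional: block read/write timings
```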
The Datadog Agent requires read-only access to the database server in order to collect statistics and queries.

Choose a PostgreSQL database on the database server for the Agent to connect to. The Agent can collect telemetry from all databases on the database server regardless of which one it connects to, so a good option is to use the default `postgres` database. Choose a different database only if you need the Agent to run custom queries against data unique to that database.
Connect to the chosen database as a superuser (or another user with sufficient permissions). For example, if your chosen database is `postgres`, connect as the `postgres` user using psql:

```shell
psql -h mydb.example.com -d postgres -U postgres
```
Create the `datadog` user:

```sql
CREATE USER datadog WITH password '<PASSWORD>';
```
For PostgreSQL 10 and newer, create the following schema in every database (the `pg_monitor` role is available starting in Postgres 10):

```sql
CREATE SCHEMA datadog;
GRANT USAGE ON SCHEMA datadog TO datadog;
GRANT USAGE ON SCHEMA public TO datadog;
GRANT pg_monitor TO datadog;
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
```
For PostgreSQL 9.6, where the `pg_monitor` role is not available, grant read access on the statistics view directly. Create the following schema in every database:

```sql
CREATE SCHEMA datadog;
GRANT USAGE ON SCHEMA datadog TO datadog;
GRANT USAGE ON SCHEMA public TO datadog;
GRANT SELECT ON pg_stat_database TO datadog;
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
```
Create functions in every database to enable the Agent to read the full contents of `pg_stat_activity` and `pg_stat_statements`:

```sql
CREATE OR REPLACE FUNCTION datadog.pg_stat_activity() RETURNS SETOF pg_stat_activity AS
  $$ SELECT * FROM pg_catalog.pg_stat_activity; $$
LANGUAGE sql
SECURITY DEFINER;

CREATE OR REPLACE FUNCTION datadog.pg_stat_statements() RETURNS SETOF pg_stat_statements AS
  $$ SELECT * FROM pg_stat_statements; $$
LANGUAGE sql
SECURITY DEFINER;
```
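Because these functions are `SECURITY DEFINER`, they run with the privileges of their owner, so the read-only `datadog` user can see all sessions and statements without superuser rights. A quick way to confirm they work is to run them as the `datadog` user (a sketch, assuming the functions above were created):

```sql
-- Each call should return rows for all backends / tracked statements,
-- including those owned by other users.
SELECT * FROM datadog.pg_stat_activity() LIMIT 1;
SELECT * FROM datadog.pg_stat_statements() LIMIT 1;
```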
Note: When generating custom metrics that require querying additional tables, you may need to grant the `SELECT` permission on those tables to the `datadog` user. Example: `GRANT SELECT ON <TABLE_NAME> TO datadog;`. See PostgreSQL custom metric collection explained for more information.
Create the function in every database to enable the Agent to collect explain plans:

```sql
CREATE OR REPLACE FUNCTION datadog.explain_statement(
    l_query TEXT,
    OUT explain JSON
)
RETURNS SETOF JSON AS
$$
DECLARE
    curs REFCURSOR;
    plan JSON;
BEGIN
    OPEN curs FOR EXECUTE pg_catalog.concat('EXPLAIN (FORMAT JSON) ', l_query);
    FETCH curs INTO plan;
    CLOSE curs;
    RETURN QUERY SELECT plan;
END;
$$
LANGUAGE 'plpgsql'
RETURNS NULL ON NULL INPUT
SECURITY DEFINER;
```
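You can exercise the function manually to confirm it returns a JSON plan (a sketch; the query text passed in is only an example, and any read-only statement works):

```sql
-- Returns the JSON explain plan for the given query text.
SELECT explain FROM datadog.explain_statement('SELECT 1');
```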
To verify the permissions are correct, run the following commands to confirm the Agent user is able to connect to the database and read the core tables:

```shell
psql -h localhost -U datadog postgres -A \
  -c "select * from pg_stat_database limit 1;" \
  && echo -e "\e[0;32mPostgres connection - OK\e[0m" \
  || echo -e "\e[0;31mCannot connect to Postgres\e[0m"

psql -h localhost -U datadog postgres -A \
  -c "select * from pg_stat_activity limit 1;" \
  && echo -e "\e[0;32mPostgres pg_stat_activity read OK\e[0m" \
  || echo -e "\e[0;31mCannot read from pg_stat_activity\e[0m"

psql -h localhost -U datadog postgres -A \
  -c "select * from pg_stat_statements limit 1;" \
  && echo -e "\e[0;32mPostgres pg_stat_statements read OK\e[0m" \
  || echo -e "\e[0;31mCannot read from pg_stat_statements\e[0m"
```
When psql prompts for a password, use the password you entered when you created the `datadog` user.
Installing the Datadog Agent also installs the Postgres check which is required for Database Monitoring on Postgres. If you haven’t already installed the Agent for your Postgres database host, see the Agent installation instructions.
Edit the `conf.d/postgres.d/conf.yaml` file to point to your `host` / `port` and set the hosts to monitor. See the sample postgres.d/conf.yaml for all available configuration options.

For PostgreSQL 10 and newer:

```yaml
init_config:
instances:
  - dbm: true
    host: localhost
    port: 5432
    username: datadog
    password: '<PASSWORD>'
    ## Optional: Connect to a different database if needed for `custom_queries`
    # dbname: '<DB_NAME>'
```
For PostgreSQL 9.6, also point the Agent at the `SECURITY DEFINER` functions created earlier:

```yaml
init_config:
instances:
  - dbm: true
    host: localhost
    port: 5432
    username: datadog
    password: '<PASSWORD>'
    pg_stat_statements_view: datadog.pg_stat_statements()
    pg_stat_activity_view: datadog.pg_stat_activity()
    ## Optional: Connect to a different database if needed for `custom_queries`
    # dbname: '<DB_NAME>'
```
PostgreSQL logs to `stderr` by default, and these logs do not include detailed information. It is recommended to log to a file, with additional details specified in the log line prefix. Refer to the PostgreSQL documentation on this topic for additional details.
Logging is configured in the file `/etc/postgresql/<VERSION>/main/postgresql.conf`. For regular log results, including statement outputs, uncomment the following parameters in the log section:

```ini
logging_collector = on
log_directory = 'pg_log'  # directory where log files are written,
                          # can be absolute or relative to PGDATA
log_filename = 'pg.log'   # log file name, can include pattern
log_statement = 'all'     # log all queries
#log_duration = on
log_line_prefix= '%m [%p] %d %a %u %h %c '
log_file_mode = 0644
## For Windows
#log_destination = 'eventlog'
```
Note that the `log_statement` and `log_duration` options can be commented out; see the discussion on this topic here. The configuration above logs all statements. To reduce the output to statements that exceed a certain duration, set the `log_min_duration_statement` value to the desired minimum duration in milliseconds (check that logging the full SQL statement complies with your organization's privacy requirements):
```ini
log_min_duration_statement = 0  # -1 is disabled, 0 logs all statements
                                # and their durations, > 0 logs only
                                # statements running at least this number
                                # of milliseconds
#log_statement = 'all'
#log_duration = on
```
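With the `log_line_prefix` shown above (`'%m [%p] %d %a %u %h %c '`), each log line starts with the timestamp, process ID, database, application name, user, client host, and session ID, in that order. A minimal Python sketch of pulling these fields out of such a line (the sample line and the exact regex are illustrative, and assume all prefix fields are non-empty):

```python
import re

# Matches lines produced with log_line_prefix = '%m [%p] %d %a %u %h %c '
# %m = timestamp with ms, %p = pid, %d = database, %a = application name,
# %u = user, %h = client host, %c = session id
PREFIX = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+ \S+) "
    r"\[(?P<pid>\d+)\] (?P<db>\S+) (?P<app>\S+) (?P<user>\S+) "
    r"(?P<host>\S+) (?P<session>\S+) (?P<rest>.*)$"
)

line = ("2024-01-15 10:23:45.678 UTC [4242] postgres psql datadog "
        "127.0.0.1 65a1b2c3.1092 LOG:  statement: SELECT 1;")
m = PREFIX.match(line)
print(m.group("pid"), m.group("db"), m.group("user"))
```

A structured prefix like this is what lets the log pipeline attribute each statement to a database, user, and session.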
Enable log collection in the Agent's `datadog.yaml` file:

```yaml
logs_enabled: true
```
Add this configuration block to your `conf.d/postgres.d/conf.yaml` file to start collecting your PostgreSQL logs:

```yaml
logs:
  - type: file
    path: "<LOG_FILE_PATH>"
    source: postgresql
    service: "<SERVICE_NAME>"
    # To handle multi-line entries that start with yyyy-mm-dd, use the following pattern:
    # log_processing_rules:
    #   - type: multi_line
    #     pattern: \d{4}\-(0?[1-9]|1[012])\-(0?[1-9]|[12][0-9]|3[01])
    #     name: new_log_start_with_date
```
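The `multi_line` pattern treats any line beginning with a `yyyy-mm-dd` date as the start of a new log entry; lines that do not match are appended to the previous entry. A quick Python check of that same regex against sample lines (the sample lines are illustrative):

```python
import re

# Same pattern as in the log_processing_rules above: a line starting
# with a yyyy-mm-dd date begins a new log entry.
pattern = re.compile(r"\d{4}\-(0?[1-9]|1[012])\-(0?[1-9]|[12][0-9]|3[01])")

new_entry = "2024-03-15 10:23:45.678 UTC [4242] LOG:  duration: 1.2 ms"
continuation = "\tHINT: this line belongs to the previous entry"

print(bool(pattern.match(new_entry)))      # True: starts a new entry
print(bool(pattern.match(continuation)))   # False: joined to previous entry
```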
Remember to set the `service` and `path` parameter values to match your environment. See the sample postgres.d/conf.yaml for all available configuration options.

Run the Agent's status subcommand and look for `postgres` under the Checks section. Or visit the Databases page to get started!
If you have installed and configured the integrations and Agent as described and it is not working as expected, see Troubleshooting.