PostgreSQL is a widely used open source database and has been the DBMS of the Year for the past two years in the DB-Engines rankings. As such, there is always a need for reliable and robust monitoring solutions. While there are some commercial monitoring tools, there is an equally good number of open source tools available for monitoring PostgreSQL. Percona Monitoring and Management (PMM) is one of those open source solutions; it receives continuous improvements and is maintained by Percona. It is simple to set up and easy to use.

PMM can monitor not only PostgreSQL but also MySQL and MongoDB databases, so it is a simple monitoring solution for monitoring multiple types of databases. In this blog post, you will see all the steps involved in monitoring PostgreSQL databases using PMM.

This is what we will be discussing:

- Using the PMM docker image to create a PMM server
- Installing the PMM client on a remote PostgreSQL server and connecting the PostgreSQL client to the PMM server
- Creating required users and permissions on the PostgreSQL server
- Enabling PostgreSQL monitoring with and without QAN (Query Analytics)

If you already know how to create a PMM Server, please skip the PMM server setup and proceed to the PostgreSQL client setup.

Using the PMM docker image to create a PMM server

PMM has a client-server architecture where the clients are the PostgreSQL, MySQL, or MongoDB database servers and the server is the PMM Server. We see the metrics on Grafana dashboards by connecting to the PMM server's UI. In order to demonstrate this setup, I have created 2 virtual machines, where one of them is the PMM Server and the second is the PostgreSQL database server.

```
192.168.80.10 is my PMM Server
192.168.80.20 is my PG 11 Server
```

Step 1 :

On the PMM Server, install and start docker.

```shell
# yum install docker -y
# systemctl start docker
```

Here are the installation instructions for the PMM Server.

Step 2 :

Pull the pmm-server docker image. I am using the latest PMM2 docker image for this setup.

```shell
$ docker pull percona/pmm-server:2
```

You should see a Docker image of about 1.48 GB downloaded after the above step.

```shell
$ docker image ls
REPOSITORY                     TAG   IMAGE ID       CREATED       SIZE
docker.io/percona/pmm-server   2     cd30e7343bb1   2 weeks ago   1.48 GB
```

Step 3 :

Create a container for persistent PMM data.

```shell
$ docker create \
  -v /srv \
  --name pmm-data \
  percona/pmm-server:2 /bin/true
```

Step 4 :

Create and launch the PMM Server. In the following step, we bind port 80 of the container to port 80 of the host machine, and likewise for port 443.

```shell
$ docker run -d \
  -p 80:80 \
  -p 443:443 \
  --volumes-from pmm-data \
  --name pmm-server \
  --restart always \
  percona/pmm-server:2
```

At this stage, you can modify certain settings such as the memory you wish to allocate to the container or its CPU share. You can see more such configurable options using docker run --help. The following is just an example of how you can modify the above step with memory and CPU allocations.

```shell
$ docker run -d \
  -p 80:80 \
  -p 443:443 \
  --volumes-from pmm-data \
  --name pmm-server \
  --cpu-shares 100 \
  --memory 1024m \
  --restart always \
  percona/pmm-server:2
```

You can list the containers started for validation using docker ps.

```shell
$ docker ps
CONTAINER ID   IMAGE                  COMMAND                CREATED              STATUS              PORTS                                      NAMES
bb6043082d3b   percona/pmm-server:2   "/opt/entrypoint.sh"   About a minute ago   Up About a minute   0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   pmm-server
```

Step 5 :

You can now see the PMM Server dashboard in the browser using the host IP address. For my setup, the PMM Server's IP address is 192.168.80.10. As soon as you enter the IP in the browser, you will be asked for credentials as seen in the image below. The default username and password are both: admin

And then you will be asked to change the password or skip.

PMM Server setup is completed after this step.

Installing PMM client on a Remote PostgreSQL server

I have a PostgreSQL 11.5 server running on 192.168.80.20. The following steps demonstrate how we can install and configure the PMM client to enable monitoring from the PMM server (192.168.80.10).

Before you proceed further, you must ensure that ports 80 and 443 are both enabled on the PMM server for the PG 11 Server to connect. In order to test that, I have used telnet to validate whether ports 80 and 443 are open on the PMM Server for the pg11 server.

```shell
[root@pg11]$ hostname -I
192.168.80.20

[root@pg11]$ telnet 192.168.80.10 80
Trying 192.168.80.10...
Connected to 192.168.80.10.
Escape character is '^]'.

[root@pg11]$ telnet 192.168.80.10 443
Trying 192.168.80.10...
Connected to 192.168.80.10.
Escape character is '^]'.
```
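If telnet is not installed on the database server, bash's built-in /dev/tcp pseudo-device offers a quick alternative. This is only a sketch, not part of the PMM tooling; check_port is a hypothetical helper name.

```shell
#!/usr/bin/env bash
# check_port: report whether a TCP port on a host accepts connections,
# using bash's /dev/tcp pseudo-device (no telnet package required).
check_port() {
  local host="$1" port="$2"
  if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "port ${port} open"
  else
    echo "port ${port} closed"
  fi
}

# Check the PMM server's HTTP and HTTPS ports from the PostgreSQL host:
check_port 192.168.80.10 80
check_port 192.168.80.10 443
```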

Step 6 :

Only a few steps need to be performed on the PostgreSQL server to enable it as a client for the PMM server. The first step is to install the PMM client on the PostgreSQL database server as follows. Based on the current PMM release, I am installing pmm2-client today, but this may change once we have a new PMM release.

```shell
$ sudo yum install https://repo.percona.com/yum/percona-release-latest.noarch.rpm
$ sudo yum install pmm2-client -y
```

Step 7 :

The next step is to connect the client (the PostgreSQL server) to the PMM server. We can use pmm-admin config to achieve that. The following is the general syntax.

```shell
$ pmm-admin config [<flags>] [<node-address>] [<node-type>] [<node-name>]
```

The following are the flags and other options I could use with my setup.

```
flags        : --server-insecure-tls
               --server-url=https://admin:admin@192.168.80.10:443
               (--server-url should contain the PMM Server host information)

node-address : 192.168.80.20
               (My PostgreSQL server)

node-type    : generic
               (As I am running my PostgreSQL database on a virtual machine
               and not in a container, it is generic.)

node-name    : pg-client
               (Can be any node name you use to uniquely identify this
               database server on your PMM Server dashboard)
```

So the final command for my setup looks like the following. We can run this command as root or by using sudo.

Syntax 7a :

```shell
$ pmm-admin config --server-insecure-tls --server-url=https://admin:admin@192.168.80.10:443 192.168.80.20 generic pg-client
```

```shell
$ pmm-admin config --server-insecure-tls --server-url=https://admin:admin@192.168.80.10:443 192.168.80.20 generic pg-client
Checking local pmm-agent status...
pmm-agent is running.
Registering pmm-agent on PMM Server...
Registered.
Configuration file /usr/local/percona/pmm2/config/pmm-agent.yaml updated.
Reloading pmm-agent configuration...
Configuration reloaded.
Checking local pmm-agent status...
pmm-agent is running.
```
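If you register many database hosts, the same command can be assembled from variables. The sketch below only echoes the command it would run (the values are the ones from this setup); drop the echo to execute it for real.

```shell
#!/usr/bin/env bash
# Assemble the pmm-admin config invocation from variables so one script
# can register different PostgreSQL hosts against the same PMM server.
PMM_SERVER="192.168.80.10"
NODE_ADDRESS="192.168.80.20"
NODE_TYPE="generic"
NODE_NAME="pg-client"

CMD="pmm-admin config --server-insecure-tls --server-url=https://admin:admin@${PMM_SERVER}:443 ${NODE_ADDRESS} ${NODE_TYPE} ${NODE_NAME}"

# Print the command instead of executing it; replace echo with eval
# (or run $CMD directly) once the values are verified.
echo "$CMD"
```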

Syntax 7b :

You could also use a simpler syntax such as the following, without node-address, node-type, and node-name:

```shell
$ pmm-admin config --server-insecure-tls --server-url=https://admin:admin@192.168.80.10:443
```

But when you use such a simple syntax, node-address, node-type, and node-name default to certain values. If the defaults are incorrect for your server configuration, you should pass these details explicitly, as I have done in Syntax 7a. To validate whether the defaults are correct, you can simply use # pmm-admin config --help. In the following log, you see that node-address defaults to 10.0.2.15, which is incorrect for my setup; it should be 192.168.80.20.

```shell
# pmm-admin config --help
usage: pmm-admin config [<flags>] [<node-address>] [<node-type>] [<node-name>]

Configure local pmm-agent

Flags:
  -h, --help     Show context-sensitive help (also try --help-long and --help-man)
      --version  Show application version
...
...
...
Args:
  [<node-address>]  Node address (autodetected default: 10.0.2.15)
```

Below is an example where the default settings were perfect because I had configured my database server the right way.

```shell
# pmm-admin config --help
usage: pmm-admin config [<flags>] [<node-address>] [<node-type>] [<node-name>]

Configure local pmm-agent

Flags:
  -h, --help  Show context-sensitive help (also try --help-long and --help-man)
...
...
Args:
  [<node-address>]  Node address (autodetected default: 192.168.80.20)
  [<node-type>]     Node type, one of: generic, container (default: generic)
  [<node-name>]     Node name (autodetected default: pg-client)
```

Using Step 6 and Syntax 7a, I have finished installing the PMM client on the PostgreSQL server and connected it to the PMM server. If the above steps were successful, you should see the client listed under Nodes, as seen in the following image. If not, something went wrong and you should revisit the previous steps.

Creating required users and permissions on the PostgreSQL server

In order to monitor your PostgreSQL server using PMM, you need to create a user through which the PMM agent can collect the database stats. Starting from PostgreSQL 10, you do not need to grant SUPERUSER or use SECURITY DEFINER (to avoid granting SUPERUSER); you can simply grant the pg_monitor role to a monitoring user. In my next blog post, you will see how we can use SECURITY DEFINER to avoid granting SUPERUSER when monitoring PostgreSQL 9.6 or older.

Assuming that your PostgreSQL Version is 10 or higher, you can use the following steps.

Step 1 :

Create a PostgreSQL user that can be used for monitoring. You can choose any username; pmm_user in the following command is just an example.

```shell
$ psql -c "CREATE USER pmm_user WITH ENCRYPTED PASSWORD 'secret'"
```
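Rather than hard-coding a password like 'secret', you may want to generate one. The following is an optional sketch; it assumes openssl is available and only prints the CREATE USER statement so you can run it yourself.

```shell
#!/usr/bin/env bash
# Generate a 24-character hex password and embed it in the CREATE USER
# statement. Keep the generated value somewhere safe: pmm-admin needs
# it later via --password.
PMM_PASS="$(openssl rand -hex 12)"
SQL="CREATE USER pmm_user WITH ENCRYPTED PASSWORD '${PMM_PASS}'"
echo "$SQL"
# To actually create the user: psql -c "$SQL"
```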

Step 2 :

Grant pg_monitor role to the pmm_user.

```shell
$ psql -c "GRANT pg_monitor to pmm_user"
```

Step 3 :

If you are not using localhost, but the IP address of the PostgreSQL server, while enabling monitoring in the next steps, you must add appropriate entries to the pg_hba.conf file to allow connections from that IP for pmm_user.

```shell
$ echo "host all pmm_user 192.168.80.20/32 md5" >> $PGDATA/pg_hba.conf
$ psql -c "select pg_reload_conf()"
```

In the above step, replace 192.168.80.20 with the appropriate PostgreSQL Server’s IP address.
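The pg_hba.conf rule can also be built from variables so the IP is easy to swap per environment. A sketch (it only prints the rule; CLIENT_IP and PMM_DB_USER are placeholder names for illustration):

```shell
#!/usr/bin/env bash
# Build the pg_hba.conf rule for the monitoring user from variables.
# CLIENT_IP is the PostgreSQL server's own address (the one the PMM
# agent connects to), 192.168.80.20 in this setup.
CLIENT_IP="192.168.80.20"
PMM_DB_USER="pmm_user"
RULE="host all ${PMM_DB_USER} ${CLIENT_IP}/32 md5"
echo "$RULE"
# Append with: echo "$RULE" >> "$PGDATA/pg_hba.conf"
# Then reload: psql -c "select pg_reload_conf()"
```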

Step 4 :

Validate that you are able to connect as pmm_user to the postgres database from the PostgreSQL server itself.

```shell
# psql -h 192.168.80.20 -p 5432 -U pmm_user -d postgres
Password for user pmm_user:
psql (11.5)
Type "help" for help.

postgres=>
```

Enabling PostgreSQL Monitoring with and without QAN (Query Analytics)

Using PMM, we can monitor several metrics in PostgreSQL such as database connections, locks, checkpoint stats, transactions, temp usage, etc. However, you could additionally enable Query Analytics to look at the query performance and understand the queries that need some tuning. Let us see how we can simply enable PostgreSQL monitoring with and without QAN.

Without QAN

Step 1 :

In order to start monitoring PostgreSQL, we can simply use pmm-admin add postgresql. It accepts additional arguments such as the service name and the PostgreSQL address and port. As we are enabling monitoring without QAN, we use the flag --query-source=none to disable QAN.

```shell
# pmm-admin add postgresql --query-source=none --username=pmm_user --password=secret postgres 192.168.80.20:5432
PostgreSQL Service added.
Service ID  : /service_id/b2ca71cf-a2a4-48e3-9c5b-6ecd1a596aea
Service name: postgres
```

Step 2 :

Once you have enabled monitoring, you can validate it using pmm-admin list.

```shell
# pmm-admin list
Service type  Service name  Address and port    Service ID
PostgreSQL    postgres      192.168.80.20:5432  /service_id/b2ca71cf-a2a4-48e3-9c5b-6ecd1a596aea

Agent type         Status     Agent ID                                        Service ID
pmm-agent          connected  /agent_id/13fd2e0a-a01a-4ac2-909a-cae533eba72e
node_exporter      running    /agent_id/f6ba099c-b7ba-43dd-a3b3-f9d65394976d
postgres_exporter  running    /agent_id/1d046311-dad7-467e-b024-d2c8cb7f33c2  /service_id/b2ca71cf-a2a4-48e3-9c5b-6ecd1a596aea
```

You can now access the PostgreSQL Dashboards and see several metrics being monitored.

With QAN

With PMM2, an additional step is needed to enable QAN: you should create a database with the same name as the monitoring user (pmm_user here), and then create the pg_stat_statements extension in that database. This behavior is going to change in the next release so that creating the database can be avoided.

Step 1 :

Create the database with the same name as the monitoring user. Create the extension: pg_stat_statements in the database.

```shell
$ psql -c "CREATE DATABASE pmm_user"
$ psql -d pmm_user -c "CREATE EXTENSION pg_stat_statements"
```

Step 2 :

If shared_preload_libraries has not been set to pg_stat_statements, we need to set it and restart PostgreSQL.

```shell
$ psql -c "ALTER SYSTEM SET shared_preload_libraries TO 'pg_stat_statements'"
$ pg_ctl -D $PGDATA restart -mf
waiting for server to shut down.... done
server stopped
...
...
 done
server started
```

Step 3 :

In the previous steps, we used the flag --query-source=none to disable QAN. To enable QAN, simply remove this flag and run pmm-admin add postgresql without it.

```shell
# pmm-admin add postgresql --username=pmm_user --password=secret postgres 192.168.80.20:5432
PostgreSQL Service added.
Service ID  : /service_id/24efa8b2-02c2-4a39-8543-d5fd54314f73
Service name: postgres
```

Step 4 :

Once the above step is completed, you can validate it again using pmm-admin list. This time, you should see an additional agent: qan-postgresql-pgstatements-agent.

```shell
# pmm-admin list
Service type  Service name  Address and port    Service ID
PostgreSQL    postgres      192.168.80.20:5432  /service_id/24efa8b2-02c2-4a39-8543-d5fd54314f73

Agent type                         Status     Agent ID                                        Service ID
pmm-agent                          connected  /agent_id/13fd2e0a-a01a-4ac2-909a-cae533eba72e
node_exporter                      running    /agent_id/f6ba099c-b7ba-43dd-a3b3-f9d65394976d
postgres_exporter                  running    /agent_id/7039f7c4-1431-4518-9cbd-880c679513fb  /service_id/24efa8b2-02c2-4a39-8543-d5fd54314f73
qan-postgresql-pgstatements-agent  running    /agent_id/7f0c2a30-6710-4191-9373-fec179726422  /service_id/24efa8b2-02c2-4a39-8543-d5fd54314f73
```

After this step, you can now see the Queries and their statistics captured on the Query Analytics Dashboard.

Meanwhile, have you tried Percona Distribution for PostgreSQL? It is a collection of carefully tested open source tools and extensions along with PostgreSQL 11, maintained by Percona. PMM works with both community PostgreSQL and Percona Distribution for PostgreSQL. Please subscribe to our blog posts to learn more interesting features in PostgreSQL.