We recently needed an instance of PostgreSQL as part of our development environment, which raised a number of questions: do we provision an instance in IBM Cloud? Do we host an instance on our own (virtualised) hardware? Or do we take a pre-built Docker image and run it locally?

Since we’re already using Docker as part of our development environment the choice was not that difficult (not to mention the fact that no-one wants to incur cross-charging for provisioning an instance in IBM Cloud). Docker image it was. So far, so good. We take the official 9.6 image from https://hub.docker.com/_/postgres/ and we’re up and running.

Next, we need to ensure that all of our PostgreSQL client code requires SSL connections to the PostgreSQL server, which means modifying the Docker image so that the server accepts SSL connections. This is easier said than done.

Simple self-signed certificate

Our first attempt was to generate a self-signed certificate for the PostgreSQL server and create our own Docker image based upon the official PostgreSQL image which references the certificate we generated.

After digging out the correct sequence of openssl commands to do this we had something that worked when connecting to PostgreSQL from a Python client (https://pypi.org/project/psycopg2). The Python client allowed us to specify the self-signed certificate of the PostgreSQL server which meant that the client would trust that certificate even though it wasn’t signed by a Certificate Authority (CA).

However, this was not sufficient to allow an SSL connection from a Node.js client (https://node-postgres.com), because the underlying TLS implementation used by the Node.js client will not trust a self-signed certificate unless it has been signed by another certificate (which can itself be self-signed). Without this, the connection fails with the error DEPTH_ZERO_SELF_SIGNED_CERT. There’s plenty of chatter about working around this by setting NODE_TLS_REJECT_UNAUTHORIZED, but that’s not something we wanted to do as it would turn off all SSL verification.

Signed certificate as a custom Certificate Authority (CA)

To overcome this problem we realised we had to create a certificate and sign it using our own custom Certificate Authority (CA). The sequence of openssl commands now becomes sufficiently complex that we created a script to generate the required certificate artefacts.
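A sketch of such a script is shown below. The subject names and validity period are assumptions on our part, but the overall shape (create a CA key pair, then create a server key pair and sign its certificate with the CA) produces exactly the artefacts described next:

```shell
#!/bin/sh
set -e

# 1. Create the custom CA: a private key plus a self-signed root certificate.
openssl genrsa -out rootCA.key 2048
openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 365 \
  -subj "/CN=Dev Root CA" -out rootCA.crt

# 2. Create the PostgreSQL server's private key and a certificate
#    signing request (CSR) for it.
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=localhost" -out server.csr

# 3. Sign the server's CSR with the custom CA, producing the server
#    certificate that the Docker image will use.
openssl x509 -req -in server.csr -CA rootCA.crt -CAkey rootCA.key \
  -CAcreateserial -days 365 -sha256 -out server.crt
```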

This script produces several files, but the main ones of interest are described below. For the interested, the full output of the script can be seen here.

rootCA.crt: our custom CA’s public certificate. This is the certificate which should be added to clients’ trust stores (typically done by base64 encoding the certificate file).

rootCA.key: the private key for our custom CA, required to sign the certificate for the PostgreSQL server.

server.crt: the public certificate for the PostgreSQL server, signed by our custom CA and required by the PostgreSQL Docker image.

server.key: the private key for our PostgreSQL server certificate, required by the PostgreSQL Docker image.

We also have a development environment requirement that the PostgreSQL server instance be addressable using three different hostnames: localhost, aios-localhost, and postgres_ssl. At the SSL handshake level, this is achieved by specifying a subject alternative name (SAN) extension both when the PostgreSQL server certificate is generated and when it is signed by our custom CA.
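With openssl, that looks roughly as follows (the key and CA generation is repeated here so the snippet stands alone, and -addext needs OpenSSL 1.1.1 or later). Note that `openssl x509 -req` does not copy extensions from the CSR, so the SAN has to be supplied again at signing time:

```shell
set -e

# The three hostnames our development environment uses for the server.
SAN="subjectAltName=DNS:localhost,DNS:aios-localhost,DNS:postgres_ssl"

# CA and server keys regenerated here so the snippet stands alone.
openssl genrsa -out rootCA.key 2048
openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 365 \
  -subj "/CN=Dev Root CA" -out rootCA.crt
openssl genrsa -out server.key 2048

# Include the SAN extension in the CSR...
openssl req -new -key server.key -subj "/CN=localhost" \
  -addext "$SAN" -out server.csr

# ...and supply it again when signing with the custom CA, because
# `openssl x509 -req` does not carry extensions over from the CSR.
printf '%s\n' "$SAN" > san.ext
openssl x509 -req -in server.csr -CA rootCA.crt -CAkey rootCA.key \
  -CAcreateserial -days 365 -sha256 -extfile san.ext -out server.crt
```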

PostgreSQL Dockerfile

With the SSL certificate now generated and signed, the Dockerfile to pull it all together is actually pretty trivial, as shown:

FROM postgres:9.6

LABEL "Product"="PostgreSQL (SSL enabled)"

COPY ssl/server.key /var/lib/postgresql/server.key
COPY ssl/server.crt /var/lib/postgresql/server.crt

RUN chown postgres /var/lib/postgresql/server.key && \
    chmod 600 /var/lib/postgresql/server.key
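The Dockerfile puts the certificate and key in place, but the server still has to be told to use them. One way to do this (an assumption on our part; ssl, ssl_cert_file, and ssl_key_file are standard postgresql.conf settings, passed here as command-line overrides to the image’s entrypoint) is at container start time:

```shell
# Build the SSL-enabled image from the Dockerfile above.
docker build -t postgres-ssl .

# Start the server with SSL switched on, pointing it at the copied files.
# The container name and password below are placeholders.
docker run -d --name postgres-ssl -p 5432:5432 \
  -e POSTGRES_PASSWORD=postgres \
  postgres-ssl \
  -c ssl=on \
  -c ssl_cert_file=/var/lib/postgresql/server.crt \
  -c ssl_key_file=/var/lib/postgresql/server.key
```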

Verifying the connection with pgAdmin shows that the connection is SSL based and that we can use the SSL mode verify-full, having specified the path to our custom CA’s public certificate.
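The same check can be made from the command line with psql, assuming the server is reachable on localhost and rootCA.crt is in the current directory (user and database names below are placeholders):

```shell
# verify-full checks both that the server certificate chains to our
# custom CA and that the hostname matches the certificate's CN or SANs.
psql "host=localhost port=5432 user=postgres dbname=postgres \
      sslmode=verify-full sslrootcert=rootCA.crt"
```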