Install CentOS via USB

Installing CentOS on the NUC is as easy as plugging one of the USB drives into it, powering it on, and running through the installer. After the installation is complete, the OS will be on the internal M.2 SSD, so you can shut down, unplug the USB boot drive, and power back on to boot from the internal drive.

The “everything” installer has an option to install GNOME for you. Although it also includes an option to install PostgreSQL, Blackmagic Design specifies PostgreSQL 9.5 in particular, so wait and install 9.5 via yum after the installation. When choosing options, I only added GNOME.

Set up a DHCP address from the USB installer

From the USB installer, set up a DHCP connection on the wired Ethernet port so that you can get connected to the Internet and install PostgreSQL and other OS updates. Later, we’ll assign a static IP address so that the different Resolve workstations can connect consistently.

Install the correct version of PostgreSQL for Resolve

For Resolve 14.3 specifically, PostgreSQL 9.5 is recommended. Since CentOS is in the RHEL family, install PostgreSQL 9.5 via yum with the following commands.

Install the repository RPM:

sudo yum install https://download.postgresql.org/pub/repos/yum/9.5/redhat/rhel-7-x86_64/pgdg-centos95-9.5-3.noarch.rpm

Install the client packages:

sudo yum install postgresql95

Install the server packages:

sudo yum install postgresql95-server

Initialize the database and enable automatic start:

sudo /usr/pgsql-9.5/bin/postgresql95-setup initdb

sudo systemctl enable postgresql-9.5

sudo systemctl start postgresql-9.5
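If you want to confirm that initdb actually populated the data directory before enabling the service, here’s a quick sketch of a check. The pg_initialized helper name is mine, not a PostgreSQL tool; it just relies on the fact that initdb writes a PG_VERSION file into the data directory.

```shell
# Succeeds if the given directory looks like an initialized
# PostgreSQL data directory (initdb writes a PG_VERSION file there).
pg_initialized() {
  [ -f "$1/PG_VERSION" ]
}
# Usage: pg_initialized /var/lib/pgsql/9.5/data && echo "initialized"
```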

Set up a static IP address

ITzGeek provides some guidance on setting up a static IP address.

To have different remote workstations log into the PostgreSQL server, we’ll need to make sure that the IP address of the NUC is static. If we were to leave it as DHCP, the IP address on the NUC could change unpredictably, and then client workstations wouldn’t be able to connect. With a static IP address, client workstations store just the NUC’s one IP address and can connect to it reliably without any hassle.

By running ifconfig -a or ip a, you can see the name of the Ethernet device that was connected earlier via DHCP.
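If the interface list is noisy, a small filter can pull out just the device names. The iface_names helper below is my own sketch, assuming iproute2’s one-line `ip -o link` output format:

```shell
# Print only the interface names from `ip -o link` output, whose
# lines look like "2: eno1: <BROADCAST,MULTICAST,UP> mtu 1500 ...".
iface_names() {
  awk -F': ' '{print $2}'
}
# Usage: ip -o link | iface_names
```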

Then, you can vi into /etc/sysconfig/network-scripts/ifcfg-<yourinterfacename>.

Modify these specific parameters:

BOOTPROTO=none

IPADDR=<yourIPaddress> # Here, after the =, you can enter whatever address you had previously with DHCP, since we know that the address assigned via DHCP worked

NETMASK=255.255.255.0 # Here's the subnet mask. Mine is 255.255.255.0, but your network might be different

GATEWAY=<yourgatewayIPaddress> # This one is going to depend on your particular router. Check your router

DNS1=75.75.75.75 # This will depend on your ISP. Since mine is Comcast, I use 75.75.75.75

DEFROUTE=yes

IPV4_FAILURE_FATAL=no

# Disable IPv6
IPV6INIT=no

# Activate on boot
ONBOOT=yes

Then restart the network service for these changes to take effect.

sudo systemctl restart network

Now, you have a static IP address.

Configure PostgreSQL for sharing by configuring the pg_hba.conf file

In the Resolve 14.3 manual, Blackmagic Design removed the information about manually configuring the PostgreSQL pg_hba.conf file on macOS. To some degree, that’s understandable, since using the 14.3 “Project Server” GUI application for macOS and Windows is a much more user-friendly experience than fiddling around in a command-line interface. On the other hand, no such “Project Server” app exists for Linux. We can refer back to the 12.5 manual, which does have some information about modifying the pg_hba.conf file, but it’s geared toward macOS, so I poked around the Linux and PostgreSQL documentation scattered across the Internet to put together this guide.

To set up sharing, become the postgres superuser:

sudo su - postgres

This will get you right to where the relevant PostgreSQL files are. If you ls out what’s here, you’ll just see a folder named for this version of PostgreSQL, 9.5.

cd into 9.5 and then into data. If you check pwd, you’ll see that you’re in /var/lib/pgsql/9.5/data. Here we’ll need to configure a couple of important files: pg_hba.conf and postgresql.conf. Let’s start with pg_hba.conf.

Let’s make a copy of it, just in case anything goes wrong:

cp pg_hba.conf pg_hba.conf.backup

Now, let’s modify it so as to enable sharing.

Add a line at the very bottom of this file to reflect the range of IP addresses on your local network:

host all all <your NUC's static IP>/24 md5

About notation for ranges of IP addresses

It’s worth reading over Digital Ocean’s guide to IP addresses, subnets, and CIDR notation for networking.

We’re using IPv4 addresses on the local network, and by appending /24 we specify a range covering every address on the subnet, so plenty of workstations on the local network will be able to connect.
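As a quick illustration of what /24 means here, the sketch below derives the /24 network from a host address by zeroing the last octet. The cidr24 helper is hypothetical, purely for illustration:

```shell
# Derive the /24 network containing a given IPv4 host address:
# keep the first three octets and zero the last one.
cidr24() {
  echo "$1" | awk -F. '{print $1"."$2"."$3".0/24"}'
}
# e.g. cidr24 192.168.1.42 prints 192.168.1.0/24, a 256-address block
```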

Modify postgresql.conf to allow incoming TCP/IP sockets

Let’s modify the other file, postgresql.conf, to allow incoming TCP/IP sockets.

vi into /var/lib/pgsql/9.5/data/postgresql.conf.

Inside postgresql.conf , scroll all the way to the bottom, and add the uncommented line:

listen_addresses = '*'

This is required for other computers on the local network to be able to connect and use PostgreSQL.
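If you’d rather not edit the file interactively, the same change can be appended from the shell. The enable_listen_all helper below is a sketch of mine, written against an arbitrary path so you can point it at /var/lib/pgsql/9.5/data/postgresql.conf while you’re the postgres user:

```shell
# Append the listen_addresses setting to the given postgresql.conf path.
enable_listen_all() {
  echo "listen_addresses = '*'" >> "$1"
}
# Usage (as postgres): enable_listen_all /var/lib/pgsql/9.5/data/postgresql.conf
```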

Assign a default password of DaVinci to the postgres user account

When we added a line to pg_hba.conf specifying a range of IP addresses allowed to connect, the md5 method made it so that a client workstation must connect with a password. There’s just one problem: the postgres user account doesn’t actually have a password by default! So, let’s create one for the postgres user.

While you’re still the postgres superuser, enter the psql shell:

psql

The prompt should change from -bash-4.2$ to postgres=#.

Create a password for the user postgres by entering:

\password

Enter DaVinci (case-sensitive) for the password, since that’s the default value stored on the DaVinci Resolve Studio client workstations. The psql shell will prompt you to reenter the password to confirm that you’ve typed it correctly.

You can then exit the psql shell by entering \q.

Finally, exit from being the postgres superuser and get back to your regular user account by entering exit.

Allow the client workstations through the default CentOS firewall

By default, CentOS has a firewall that prevents potentially malicious connections. We’ll need to make an exception for our local network’s range of IP addresses so that the DaVinci Resolve workstations are allowed to connect.

Enter the following commands to allow workstations on the local network to connect through CentOS’s default firewall.

sudo firewall-cmd --permanent --zone=trusted --add-source=<yourstaticIPaddress>/24

sudo firewall-cmd --permanent --zone=trusted --add-port=5432/tcp

sudo firewall-cmd --reload

Then go ahead and reboot to make sure that everything takes effect.

Verify that the PostgreSQL server is running properly

Let’s just check that the NUC is actually running PostgreSQL correctly.

Enter:

cat /etc/services | grep 5432

You should see:

postgresql 5432/tcp postgres # POSTGRES

postgresql 5432/udp postgres # POSTGRES

This confirms that 5432 is the registered port for PostgreSQL.

You can also check:

netstat -tulpn | grep 5432

which should show that PostgreSQL is listening for TCP connections on 0.0.0.0:5432, i.e. on every IPv4 address of the machine:

tcp 0 0 0.0.0.0:5432 0.0.0.0:* LISTEN

Finally, check the service itself:

sudo systemctl status postgresql-9.5

which should report the postgresql-9.5 service as active (running).
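From a client workstation, you can also sanity-check reachability before launching Resolve. The pg_port_open function below is my own sketch using bash’s /dev/tcp redirection, so it assumes bash and the coreutils timeout command on the client:

```shell
# Succeeds (exit 0) if a TCP connection to host $1, port $2 can be opened.
pg_port_open() {
  timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}
# Usage: pg_port_open <yourstaticIPaddress> 5432 && echo "reachable"
```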

Connect client workstations

Now all the different DaVinci Resolve Studio clients on the network should be able to create and connect to PostgreSQL databases. The parameters at a client workstation would be:

Name: <whatever database name you want, all lowercase, only numbers and letters>

Location: the NUC's static IP

Username: postgres

Password: DaVinci
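Since the database name is assumed to be all lowercase letters and numbers, here’s a quick pre-flight check you can run before typing it in. The valid_db_name helper is hypothetical, not part of Resolve or PostgreSQL:

```shell
# Succeeds only if the candidate name is non-empty and contains
# nothing but lowercase letters and digits.
valid_db_name() {
  case "$1" in
    ''|*[!a-z0-9]*) return 1 ;;
    *) return 0 ;;
  esac
}
# Usage: valid_db_name myproject01 && echo "ok"
```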

Have a backup strategy

It would be irresponsible, in a guide about how to set up a PostgreSQL server, not to include a strong recommendation to back up the PostgreSQL databases.

I have a script available on GitHub that will let you effortlessly set up systemd units and timers that will automatically back up and optimize your PostgreSQL databases.

Personally, I use the script to set my backups to upload into a Google Drive folder via Insync.
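If you’d rather roll a minimal backup by hand instead, the general shape is to dump everything with pg_dumpall on a schedule. Below is a hedged sketch: the backup_name helper and the backups path are mine, and it assumes pg_dumpall can authenticate as the postgres user.

```shell
# Build a dated backup filename, e.g. resolve-backup-2018-03-01.sql.gz
backup_name() {
  printf 'resolve-backup-%s.sql.gz' "$(date +%F)"
}
# Then, e.g. from cron or a systemd timer:
# sudo -u postgres pg_dumpall | gzip > "/path/to/backups/$(backup_name)"
```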

Get in touch

I hope you’ve found this guide useful. If you have questions or respectful comments, leave a comment here on Medium or tweet at me.