Red Hat OpenShift 4.1 is the latest version: the CoreOS features are now embedded and the setup process is completely refreshed. While the AWS installation is straightforward, a bare metal installation requires a lot of preparation work. This article explores those prerequisites for a 4.1 installation.

Test Environment

The test environment uses VirtualBox to simulate the bare metal servers. In a real-life setup, the same ideas apply whether the machines are VMs or bare metal boxes.

I am using the latest VirtualBox. Additionally, to make network booting work, the VirtualBox Extension Pack is installed after VirtualBox is set up.

I have one VirtualBox VM where all the required components, such as DNS, DHCP, TFTP, and the load balancer, are installed. I call it the gateway server. The other VMs simulate the boxes of the OpenShift cluster, such as the bootstrap server and the control plane servers.

The cluster VMs each have a single NIC, with the adapter set to “Internal Network”. The gateway server has two NICs: one on the Internal Network, and the other a NAT adapter, which can access the internet through the host network.

The gateway VM runs Ubuntu 18.04. The network is configured in the file /etc/netplan/50-cloud-init.yaml:

network:
  ethernets:
    enp0s3:
      dhcp4: true
    enp0s8:
      addresses:
        - 172.16.1.10/24
      gateway4: 172.16.1.10
      nameservers:
        addresses:
          - 8.8.8.8
          - 4.4.4.4
  version: 2

To allow the internal network to access the internet, apply the following settings on the gateway server.

sudo sysctl -w net.ipv4.ip_forward=1

echo net.ipv4.ip_forward=1 | sudo tee -a /etc/sysctl.conf

The first command enables IP forwarding immediately; the second persists it in /etc/sysctl.conf. Restart the VM to make sure IP forwarding is in effect.

Then configure the iptables rules:

sudo iptables --flush
sudo iptables --table nat --flush
sudo iptables -t nat -A POSTROUTING -o enp0s3 -j MASQUERADE
sudo iptables -A FORWARD -i enp0s8 -o enp0s3 -j ACCEPT
sudo iptables -A FORWARD -i enp0s3 -o enp0s8 -m state --state RELATED,ESTABLISHED -j ACCEPT

Dnsmasq

The OpenShift installation requires a DHCP server to allocate IP addresses. It also requires a DNS server to resolve predefined hostnames and SRV records during reboots and cluster installation.

Instead of running individual DHCP and DNS servers, I am utilizing Dnsmasq, which provides the network infrastructure for small networks: DNS, DHCP, and network boot.

On the Ubuntu gateway VM, install Dnsmasq:

sudo apt update

sudo apt -y install dnsmasq

DHCP Server

Update /etc/dnsmasq.conf with the following to enable the DHCP service:

interface=enp0s8
domain=oc.io

# Gateway
dhcp-option=3,172.16.1.10
# DNS
dhcp-option=6,172.16.1.10

dhcp-range=172.16.1.20,172.16.1.100,12h
dhcp-host=08:00:27:36:0F:CA,172.16.1.30
dhcp-host=08:00:27:2E:1E:61,172.16.1.31

These settings make Dnsmasq listen on the internal network interface and set the default gateway and DNS server to the gateway server. Copy the MAC address of each VM and assign it a fixed IP with a dhcp-host entry.
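The dhcp-host entries need each VM's MAC address in colon-separated form, while VirtualBox (for example, `VBoxManage showvminfo <vm> --machinereadable`) reports MACs without separators. A small sketch of the conversion, with a sample line hard-coded in place of the real VBoxManage output:

```shell
# Sample line as VBoxManage prints it (hard-coded for illustration)
line='macaddress1="080027360FCA"'

# Extract the raw MAC from between the quotes
raw=$(echo "$line" | sed 's/.*="\(.*\)"/\1/')

# Insert a colon after every two hex digits, then drop the trailing colon
mac=$(echo "$raw" | sed 's/../&:/g; s/:$//')

echo "$mac"   # 08:00:27:36:0F:CA
```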

Create a new VM with Ubuntu and boot it up. Validate that the IP address and DNS are correctly set:

$ ip addr

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000

link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

inet 127.0.0.1/8 scope host lo

valid_lft forever preferred_lft forever

inet6 ::1/128 scope host

valid_lft forever preferred_lft forever

2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000

link/ether 08:00:27:2e:1e:61 brd ff:ff:ff:ff:ff:ff

inet 172.16.1.31/24 brd 172.16.1.255 scope global dynamic enp0s3

valid_lft 42122sec preferred_lft 42122sec

inet6 fe80::a00:27ff:fe2e:1e61/64 scope link

valid_lft forever preferred_lft forever

and

$ systemd-resolve --status | grep DNS

DNSSEC NTA: 10.in-addr.arpa

Current Scopes: DNS

MulticastDNS setting: no

DNSSEC setting: no

DNSSEC supported: no

DNS Servers: 172.16.1.10

DNS Domain: oc.io

DNS Server

The OpenShift installer requires a number of DNS records to resolve. This can also be achieved with Dnsmasq. Append the following additional settings to /etc/dnsmasq.conf.

# static DNS assignments
address=/bootstrap.oc.io/172.16.1.20
address=/master1.oc.io/172.16.1.30
address=/master2.oc.io/172.16.1.31
address=/master3.oc.io/172.16.1.32
address=/worker1.oc.io/172.16.1.40
address=/worker2.oc.io/172.16.1.41
address=/worker3.oc.io/172.16.1.42
address=/matchbox.oc.io/172.16.1.10
address=/api.oc.io/172.16.1.10
address=/api-int.oc.io/172.16.1.10
address=/etcd-0.oc.io/172.16.1.30
address=/etcd-1.oc.io/172.16.1.31
address=/etcd-2.oc.io/172.16.1.32
address=/.apps.oc.io/172.16.1.10

srv-host=_etcd-server-ssl._tcp,etcd-0.oc.io,2380
srv-host=_etcd-server-ssl._tcp,etcd-1.oc.io,2380
srv-host=_etcd-server-ssl._tcp,etcd-2.oc.io,2380

First, we map each server name to its fixed DHCP address with settings like address=/master1.oc.io/172.16.1.30.

OpenShift requires DNS names such as api, api-int, etcd-[012], and the wildcard application domain. Configure them accordingly, as shown above.

It also requires the SRV records for etcd. The srv-host entries in the config define these records.

Validate the DNS with the following commands:

$ dig api.oc.io +short
172.16.1.10

$ dig api-int.oc.io +short
172.16.1.10

$ dig any.apps.oc.io +short
172.16.1.10

$ dig etcd-0.oc.io +short
172.16.1.30

$ dig etcd-1.oc.io +short
172.16.1.31

$ dig etcd-2.oc.io +short
172.16.1.32

$ dig srv _etcd-server-ssl._tcp.oc.io

; <<>> DiG 9.11.3-1ubuntu1.8-Ubuntu <<>> srv _etcd-server-ssl._tcp.oc.io
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 22003
;; flags: qr aa rd ra ad; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;_etcd-server-ssl._tcp.oc.io. IN SRV

;; ANSWER SECTION:
_etcd-server-ssl._tcp.oc.io. 0 IN SRV 0 0 2380 etcd-2.oc.io.
_etcd-server-ssl._tcp.oc.io. 0 IN SRV 0 0 2380 etcd-1.oc.io.
_etcd-server-ssl._tcp.oc.io. 0 IN SRV 0 0 2380 etcd-0.oc.io.

;; Query time: 0 msec
;; SERVER: 172.16.1.10#53(172.16.1.10)
;; WHEN: Sun Jul 28 13:03:08 UTC 2019
;; MSG SIZE rcvd: 152

Network Boot

We could boot the VMs from the RHCOS ISO file, but during boot the image URL and ignition URL would still need to be entered manually. Instead, we want network boot to fully automate the whole setup. This is what Dnsmasq and CoreOS Matchbox provide.

Append the following additional settings to /etc/dnsmasq.conf.

enable-tftp
tftp-root=/var/lib/tftp
tftp-secure

# Legacy PXE
dhcp-match=set:bios,option:client-arch,0
dhcp-boot=tag:bios,undionly.kpxe

# UEFI
dhcp-match=set:efi32,option:client-arch,6
dhcp-boot=tag:efi32,ipxe.efi
dhcp-match=set:efibc,option:client-arch,7
dhcp-boot=tag:efibc,ipxe.efi
dhcp-match=set:efi64,option:client-arch,9
dhcp-boot=tag:efi64,ipxe.efi

# iPXE - chainload to matchbox ipxe boot script
dhcp-userclass=set:ipxe,iPXE
dhcp-boot=tag:ipxe,http://matchbox.oc.io:8080/boot.ipxe

Install iPXE on the gateway server:

sudo apt -y install ipxe



sudo mkdir -p /var/lib/tftp
sudo cp /usr/lib/ipxe/{undionly.kpxe,ipxe.efi} /var/lib/tftp
sudo chown dnsmasq:nogroup /var/lib/tftp/*

During network boot, Dnsmasq serves the boot script, which chainloads to the Matchbox service. Based on the VM's settings, Matchbox then determines the profile, the OS image, and the ignition file, and boots the VM accordingly.

Matchbox

On the gateway server, install the Matchbox service:



curl -LO https://github.com/poseidon/matchbox/releases/download/v0.8.0/matchbox-v0.8.0-linux-amd64.tar.gz
tar zxvf matchbox-v0.8.0-linux-amd64.tar.gz
cd matchbox-v0.8.0-linux-amd64
sudo cp matchbox /usr/local/bin

sudo useradd -U matchbox
sudo mkdir -p /var/lib/matchbox/{assets,groups,ignition,profiles}
sudo chown -R matchbox:matchbox /var/lib/matchbox

sudo cp contrib/systemd/matchbox-local.service /etc/systemd/system/matchbox.service
sudo systemctl enable matchbox
sudo systemctl start matchbox

Assets

Download the RHCOS binaries and save them into the Matchbox assets directory.

cd /var/lib/matchbox/assets
sudo curl -LO https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.1/latest/rhcos-4.1.0-x86_64-installer-initramfs.img
sudo curl -LO https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.1/latest/rhcos-4.1.0-x86_64-installer-kernel
sudo curl -LO https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.1/latest/rhcos-4.1.0-x86_64-metal-bios.raw.gz
sudo chown -R matchbox:matchbox /var/lib/matchbox

Groups

Define a group for each server based on its MAC address. For example:

$ sudo cat /var/lib/matchbox/groups/bootstrap.json
{
  "id": "bootstrap",
  "name": "Bootstrap server",
  "profile": "bootstrap",
  "selector": {
    "mac": "08:00:27:36:0F:CA"
  }
}

So whichever VM has the above-defined MAC address will match this group, and its profile is set to bootstrap.
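Matchbox itself performs this matching, but the selection logic can be illustrated with a quick sketch: write two throwaway group files into a temp directory (standing in for /var/lib/matchbox/groups) and find the one whose selector carries a given MAC.

```shell
# Throwaway group files standing in for /var/lib/matchbox/groups
dir=$(mktemp -d)
cat > "$dir/bootstrap.json" <<'EOF'
{"id": "bootstrap", "profile": "bootstrap", "selector": {"mac": "08:00:27:36:0F:CA"}}
EOF
cat > "$dir/master1.json" <<'EOF'
{"id": "master1", "profile": "master", "selector": {"mac": "08:00:27:2E:1E:61"}}
EOF

# Pick the group whose selector matches the booting machine's MAC
mac="08:00:27:36:0F:CA"
match=$(grep -l "\"mac\": \"$mac\"" "$dir"/*.json)
echo "matched group: $(basename "$match")"   # matched group: bootstrap.json

rm -rf "$dir"
```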

Profiles

Create the bootstrap.json file to define the profile

Note the ignition_id, which points to the ignition file for this profile.
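The profile file itself is not reproduced above; the sketch below is consistent with the iPXE output shown in the Validation section, and the ignition_id value (bootstrap.ign) is an assumption about how the ignition file is named.

```json
{
  "id": "bootstrap",
  "name": "RHCOS bootstrap",
  "boot": {
    "kernel": "/assets/rhcos-4.1.0-x86_64-installer-kernel",
    "initrd": ["/assets/rhcos-4.1.0-x86_64-installer-initramfs.img"],
    "args": [
      "ip=dhcp",
      "rd.neednet=1",
      "console=tty0",
      "console=ttyS0",
      "coreos.inst=yes",
      "coreos.inst.install_dev=sda",
      "coreos.inst.image_url=http://matchbox.oc.io:8080/assets/rhcos-4.1.0-x86_64-metal-bios.raw.gz",
      "coreos.inst.ignition_url=http://matchbox.oc.io:8080/ignition?mac=${mac:hexhyp}"
    ]
  },
  "ignition_id": "bootstrap.ign"
}
```

Save it as /var/lib/matchbox/profiles/bootstrap.json so the group's "profile": "bootstrap" can resolve to it.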

Ignition

The OpenShift installer program creates the ignition files from the install-config.yaml file.
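For reference, a minimal install-config.yaml consistent with this article's naming (cluster name oc, base domain io, so the API endpoint becomes api.oc.io) might look like the sketch below; the pull secret and SSH key are placeholders, and the field values are assumptions for a bare metal (platform: none) install.

```yaml
apiVersion: v1
baseDomain: io
metadata:
  name: oc
compute:
- name: worker
  replicas: 0
controlPlane:
  name: master
  replicas: 3
platform:
  none: {}
pullSecret: '<your pull secret>'
sshKey: '<your public ssh key>'
```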

Upload those ignition files to the /var/lib/matchbox/ignition directory. Remember to change their ownership to the matchbox user.

Validation

Run the following test with the MAC address of the bootstrap server.



$ curl http://localhost:8080/ipxe?mac=08:00:27:36:0f:CA
#!ipxe
kernel /assets/rhcos-4.1.0-x86_64-installer-kernel ip=dhcp rd.neednet=1 console=tty0 console=ttyS0 coreos.inst=yes coreos.inst.install_dev=sda coreos.inst.image_url=http://matchbox.oc.io:8080/assets/rhcos-4.1.0-x86_64-metal-bios.raw.gz coreos.inst.ignition_url=http://matchbox.oc.io:8080/ignition?mac=${mac:hexhyp}
initrd /assets/rhcos-4.1.0-x86_64-installer-initramfs.img
boot

Then run curl http://localhost:8080/ignition?mac=08:00:27:36:0f:CA to validate that the correct ignition file is selected.
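In the generated boot script, iPXE expands ${mac:hexhyp} to the lowercase, hyphen-separated form of the MAC. When testing the ignition endpoint by hand, that form can be derived from the colon-separated address (a sketch):

```shell
mac="08:00:27:36:0F:CA"

# Swap colons for hyphens and lowercase the hex digits,
# matching what iPXE's ${mac:hexhyp} produces
hexhyp=$(echo "$mac" | tr ':' '-' | tr '[:upper:]' '[:lower:]')

echo "$hexhyp"   # 08-00-27-36-0f-ca
```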

Load Balancer

Following Data Mattsson, I chose Gobetween as the load balancer: a single-binary, high-performance L4 TCP, TLS, and UDP load balancer.

We use the same gateway server as the load balancer. Install Gobetween on it:





curl -LO https://github.com/yyyar/gobetween/releases/download/0.7.0/gobetween_0.7.0_linux_amd64.tar.gz
mkdir gobetween
cd gobetween
tar xzvf ../gobetween_0.7.0_linux_amd64.tar.gz
sudo cp gobetween /usr/local/bin

Create the following TOML config file:

[servers]

[servers.api]
protocol = "tcp"
bind = "0.0.0.0:6443"

[servers.api.discovery]
kind = "static"
static_list = [
  "172.16.1.20:6443",
  "172.16.1.30:6443",
  "172.16.1.31:6443",
  "172.16.1.32:6443"
]

[servers.mcs]
protocol = "tcp"
bind = "0.0.0.0:22623"

[servers.mcs.discovery]
kind = "static"
static_list = [
  "172.16.1.20:22623",
  "172.16.1.30:22623",
  "172.16.1.31:22623",
  "172.16.1.32:22623"
]

Start the LB: gobetween -c gobetween.toml
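Optionally, Gobetween can be run under systemd so it starts on boot; a minimal unit sketch, where the binary location matches the install step above but the config path /etc/gobetween/gobetween.toml is an assumption:

```ini
[Unit]
Description=Gobetween load balancer
After=network-online.target

[Service]
ExecStart=/usr/local/bin/gobetween -c /etc/gobetween/gobetween.toml
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Place it at /etc/systemd/system/gobetween.service, then enable and start it with systemctl as done for the matchbox service.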

Take the API server as an example. Working together with Dnsmasq, api.oc.io resolves to the gateway server, where the Gobetween load balancer sits. Once traffic arrives on port 6443, Gobetween forwards it to the internal list of API servers.

For both the API server and the Machine Config server (mcs) lists, the initial bootstrap entry (172.16.1.20) can be removed after the cluster is initialized.

Installing Red Hat OpenShift

With all the prerequisites in place, we can start the installation of OpenShift itself, which is then pretty straightforward.
