Assuming we have a big enough box, let’s set up OpenShift Container Platform (OCP) 4.2 using VMs created and managed by KVM.

Architecture considerations

OCP 4.2 requires DHCP and network boot, and it also requires full DNS support for A, PTR and SRV records. KVM can meet some of these requirements, but not all. Therefore we will use external DNS and DHCP services to meet these requirements fully. I am using Dnsmasq to serve as the DNS, DHCP and netboot server. Additionally, Matchbox is used to provide the ignition files for the different node roles during CoreOS bootup.

Inside KVM, we provision VMs to form the OCP cluster. OCP 4.2 requires a load balancer to distribute the load for the API (6443), the Machine Config Server (22623), and application HTTP and HTTPS traffic. I dedicate a minimal Ubuntu VM within the cluster network for the load balancer, to avoid internal traffic unnecessarily leaving the host. I choose the L4 LB software gobetween.

Once the OCP cluster is set up, we need an L7 reverse proxy on the KVM host server to route external traffic to the underlying OCP cluster based on the host information. Here I am using Traefik.

In the following sections, I will cover

KVM Setup

Dnsmasq set up and common configuration

Configure Ubuntu to use local Dnsmasq

Dnsmasq cluster-specific configuration

Matchbox setup

LoadBalancer VM and Gobetween configuration

OCP installation configuration

OCP VMs

OCP cluster bootstrap and completion of the installation

OCP authentication configuration

Traefik configuration

KVM Setup

Install KVM on the Host OS, Ubuntu 18.04.

First, validate that KVM is supported on this machine. Run the following

sudo apt install -y cpu-checker

sudo kvm-ok

If we see the following output, then KVM can be installed and used.

INFO: /dev/kvm exists

KVM acceleration can be used

Install KVM and start it,

sudo apt install -y libosinfo-bin

sudo apt -y install qemu qemu-kvm libvirt-bin bridge-utils virt-manager
sudo systemctl enable libvirtd

sudo systemctl start libvirtd

Additionally, install the uvt-kvm tool and synchronize the Ubuntu 18.04 image locally so that we can create Ubuntu-based VMs easily.

sudo apt -y install uvtool
sudo uvt-simplestreams-libvirt --verbose sync release=bionic arch=amd64
# create the key to access the VM

ssh-keygen -b 4096 -t rsa -f ~/.ssh/id_rsa -N ""

Create a KVM network but disable its DNS and DHCP (no dhcp settings in the ip element), as shown in the following XML file, net_ocp.xml

<network>
  <name>ocp</name>
  <forward mode='nat'/>
  <bridge name='br-ocp' stp='on' delay='0'/>
  <dns enable="no"/>
  <ip address='192.168.10.1' netmask='255.255.255.0'>
  </ip>
</network>

Define the network and make it auto start.

virsh net-define net_ocp.xml

virsh net-autostart ocp

virsh net-start ocp
systemctl restart libvirt-bin

The virtual bridge is created

# ifconfig br-ocp

br-ocp: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500

inet 192.168.10.1 netmask 255.255.255.0 broadcast 192.168.10.255

ether 52:54:00:d1:12:f6 txqueuelen 1000 (Ethernet)

RX packets 43449450 bytes 37736240429 (37.7 GB)

RX errors 0 dropped 0 overruns 0 frame 0

TX packets 54047625 bytes 179923041069 (179.9 GB)

TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

Dnsmasq setup and common configuration

On the KVM host, install Dnsmasq and the required TFTP and iPXE pieces.

sudo apt install -y dnsmasq

sudo systemctl enable dnsmasq

sudo systemctl start dnsmasq
sudo apt -y install ipxe

sudo mkdir -p /var/lib/tftp

sudo cp /usr/lib/ipxe/{undionly.kpxe,ipxe.efi} /var/lib/tftp

sudo chown dnsmasq:nogroup /var/lib/tftp/*

Create the file /etc/dnsmasq.d/common.conf that defines the common portion across different OCP clusters.

# Listen on lo and br-ocp only
bind-interfaces
interface=lo,br-ocp

# DHCP
dhcp-option=option:router,192.168.10.1
dhcp-option=option:dns-server,192.168.10.1
dhcp-range=192.168.10.10,192.168.10.254,12h

# forward, use original DNS server
server=10.0.xxx.xxx
server=10.0.xxx.xxx

enable-tftp
tftp-root=/var/lib/tftp
tftp-secure

# Legacy PXE
dhcp-match=set:bios,option:client-arch,0
dhcp-boot=tag:bios,undionly.kpxe

# UEFI
dhcp-match=set:efi32,option:client-arch,6
dhcp-boot=tag:efi32,ipxe.efi
dhcp-match=set:efibc,option:client-arch,7
dhcp-boot=tag:efibc,ipxe.efi
dhcp-match=set:efi64,option:client-arch,9
dhcp-boot=tag:efi64,ipxe.efi

# iPXE - chainload to matchbox ipxe boot script
dhcp-userclass=set:ipxe,iPXE
# matchbox can be shared across different clusters
dhcp-boot=tag:ipxe,http://matchbox.ibmcloud.io.cpak:8080/boot.ipxe

address=/matchbox.ibmcloud.io.cpak/192.168.10.1

Listen only on the lo and br-ocp interfaces.

Define the DHCP default route and the default DNS server.

Chain iPXE to the matchbox server, matchbox.ibmcloud.io.cpak, which we will install and make listen on the bridge interface. Define the DNS entry for it so that the VMs are able to resolve the matchbox server during netboot.

Configure Ubuntu to use local Dnsmasq

Additionally, configure the Ubuntu server (18.04) to use this DNS server.

Rename the traditional resolv.conf file, as some traditional applications like ping still use this configuration.

mv /etc/resolv.conf /etc/resolv.conf.orig

Back up and replace the file /etc/systemd/resolved.conf with the content,

[Resolve]

DNS=127.0.0.1

Restart the systemd-resolved service and verify the status,

sudo systemctl restart systemd-resolved

systemd-resolve --status

Now the KVM host will use the local Dnsmasq as its DNS server.
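A quick check confirms the local Dnsmasq answers for the matchbox record defined in common.conf (dig is provided by the dnsutils package):

dig +short @127.0.0.1 matchbox.ibmcloud.io.cpak
# expected output: 192.168.10.1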

Dnsmasq cluster-specific configuration

For each OCP cluster to be created in the KVM, create a Dnsmasq conf file, for example, /etc/dnsmasq.d/exp-ocp4.conf .

Inside this file, for every VM in the cluster define the following:

the DHCP IP address by MAC address

the DNS A record

the DNS PTR record, which is required for RHCOS to set its hostname

A sample snippet is listed below,

dhcp-host=52:54:00:25:2e:77,192.168.10.40

address=/exp-bootstrap.exp-ocp4.ibmcloud.io.cpak/192.168.10.40

ptr-record=40.10.168.192.in-addr.arpa,exp-bootstrap.exp-ocp4.ibmcloud.io.cpak

Following that, define the cluster’s DNS records pointing to the LB, including the wildcard record for apps.

address=/api.exp-ocp4.ibmcloud.io.cpak/192.168.10.49

address=/api-int.exp-ocp4.ibmcloud.io.cpak/192.168.10.49

address=/.apps.exp-ocp4.ibmcloud.io.cpak/192.168.10.49

Lastly, define the A and SRV records for each etcd instance. If there is one etcd member on each of the 3 masters, then

address=/etcd-0.exp-ocp4.ibmcloud.io.cpak/192.168.10.41
srv-host=_etcd-server-ssl._tcp.exp-ocp4.ibmcloud.io.cpak,etcd-0.exp-ocp4.ibmcloud.io.cpak,2380
address=/etcd-1.exp-ocp4.ibmcloud.io.cpak/192.168.10.42
srv-host=_etcd-server-ssl._tcp.exp-ocp4.ibmcloud.io.cpak,etcd-1.exp-ocp4.ibmcloud.io.cpak,2380
address=/etcd-2.exp-ocp4.ibmcloud.io.cpak/192.168.10.43
srv-host=_etcd-server-ssl._tcp.exp-ocp4.ibmcloud.io.cpak,etcd-2.exp-ocp4.ibmcloud.io.cpak,2380

Restart the Dnsmasq service for the changes to take effect. We may need to clear the lease cache before restarting the service to make sure the DHCP IP allocation is not cached.

sudo rm -rf /var/lib/misc/dnsmasq.leases

sudo touch /var/lib/misc/dnsmasq.leases
sudo systemctl restart dnsmasq
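To verify the cluster records are served correctly, query the local Dnsmasq directly; the names below are the sample bootstrap and etcd records defined above:

dig +short @127.0.0.1 exp-bootstrap.exp-ocp4.ibmcloud.io.cpak
# expected: 192.168.10.40
dig +short @127.0.0.1 -x 192.168.10.40
# expected: exp-bootstrap.exp-ocp4.ibmcloud.io.cpak.
dig +short @127.0.0.1 SRV _etcd-server-ssl._tcp.exp-ocp4.ibmcloud.io.cpak
# expected: three SRV entries pointing to etcd-0, etcd-1 and etcd-2 on port 2380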

Matchbox Setup

Install matchbox on the KVM host and set it up,



curl -LO https://github.com/poseidon/matchbox/releases/download/v0.8.3/matchbox-v0.8.3-linux-amd64.tar.gz
tar zxvf matchbox-v0.8.3-linux-amd64.tar.gz
cd matchbox-v0.8.3-linux-amd64
sudo cp matchbox /usr/local/bin
sudo useradd -U matchbox
sudo mkdir -p /var/lib/matchbox/{assets,groups,ignition,profiles}
sudo chown -R matchbox:matchbox /var/lib/matchbox
sudo cp contrib/systemd/matchbox-local.service /etc/systemd/system/matchbox.service
sudo systemctl enable matchbox
sudo systemctl start matchbox
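As a quick sanity check (assuming the matchbox DNS entry from the common Dnsmasq configuration is already in place), the service should answer on the same URL the iPXE chainload uses:

curl http://matchbox.ibmcloud.io.cpak:8080
# matchbox
curl http://matchbox.ibmcloud.io.cpak:8080/boot.ipxe
# an iPXE chainload script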

Assets

Download the RHCOS images into the assets directory,





cd /var/lib/matchbox/assets
sudo axel -o rhcos-{{ .ver }}.0-x86_64-installer-initramfs.img https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/{{ .ver }}/latest/rhcos-{{ .ver }}.0-x86_64-installer-initramfs.img
sudo axel -o rhcos-{{ .ver }}.0-x86_64-installer-kernel https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/{{ .ver }}/latest/rhcos-{{ .ver }}.0-x86_64-installer-kernel
sudo axel -o rhcos-{{ .ver }}.0-x86_64-metal-bios.raw.gz https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/{{ .ver }}/latest/rhcos-{{ .ver }}.0-x86_64-metal-bios.raw.gz
sudo chown -R matchbox:matchbox /var/lib/matchbox

I am using the Golang template format; {{ .ver }} could be 4.2.0.

Group

For each of the VMs, prepare a group file based on its MAC address. For example, /var/lib/matchbox/groups/exp-master1.json

{
  "id": "exp-master1",
  "name": "exp-master1",
  "profile": "exp-master1",
  "selector": {
    "mac": "52:54:00:34:c7:90"
  }
}

Profile

For each of the VMs, prepare its profile. For example, for the first master VM, exp-master1, create the following and save it as /var/lib/matchbox/profiles/exp-master1.json

Update the kernel and initrd arguments to point to the corresponding files downloaded in the assets directory. Update the coreos URL arguments to the matchbox URL. As we are using KVM, the disk is named vda .
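For reference, a profile for exp-master1 could look like the sketch below. It assumes version 4.2.0 for the asset file names and follows the standard matchbox pattern of serving the installer-generated master.ign through an ignition_id and the /ignition endpoint (the ${mac:hexhyp} variable is expanded by iPXE at boot time); adjust the names to your environment.

{
  "id": "exp-master1",
  "name": "exp-master1",
  "ignition_id": "master.ign",
  "boot": {
    "kernel": "/assets/rhcos-4.2.0-x86_64-installer-kernel",
    "initrd": ["/assets/rhcos-4.2.0-x86_64-installer-initramfs.img"],
    "args": [
      "ip=dhcp",
      "rd.neednet=1",
      "console=tty0",
      "console=ttyS0",
      "coreos.inst=yes",
      "coreos.inst.install_dev=vda",
      "coreos.inst.image_url=http://matchbox.ibmcloud.io.cpak:8080/assets/rhcos-4.2.0-x86_64-metal-bios.raw.gz",
      "coreos.inst.ignition_url=http://matchbox.ibmcloud.io.cpak:8080/ignition?mac=${mac:hexhyp}"
    ]
  }
}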

Ignition

The OpenShift installer program creates the ignition files based on the install-config.yaml file. Upload those ignition files to the /var/lib/matchbox/ignition directory. Remember to change the ownership to the user matchbox. See the section OCP installation configuration for more detail.

Upon network bootup, the CoreOS VM contacts the matchbox service, is matched by its MAC address, and therefore gets the installation image and ignition file accordingly.

LoadBalancer VM and Gobetween configuration

For each OCP cluster, create a minimal Ubuntu VM,

uvt-kvm create exp-lb release=bionic --memory 4096 --cpu 4 --disk 50 --bridge br-ocp --password password

The bridge is the KVM bridge defined earlier.

Find out the MAC address,

virsh dumpxml exp-lb | grep 'mac address' | cut -d\' -f 2

Update the Dnsmasq configuration with the DHCP MAC/IP address pair and the DNS A and PTR records for the LB.

dhcp-host=52:54:00:71:a6:e0,192.168.10.49

address=/exp-lb.exp-ocp4.ibmcloud.io.cpak/192.168.10.49

ptr-record=49.10.168.192.in-addr.arpa,exp-lb.exp-ocp4.ibmcloud.io.cpak

Clear the lease cache and restart Dnsmasq. Restart the VM to make sure it gets the assigned IP,

virsh destroy exp-lb

virsh start exp-lb

Install the Gobetween LB software,



curl -LO https://github.com/yyyar/gobetween/releases/download/0.7.0/gobetween_0.7.0_linux_amd64.tar.gz
mkdir -p gobetween
cd gobetween
tar xzvf ../gobetween_0.7.0_linux_amd64.tar.gz
sudo cp gobetween /usr/local/bin

Create a Systemd service,



[Unit]
Description=Gobetween - modern LB for cloud era
Documentation=https://github.com/yyyar/gobetween/wiki
After=network.target

[Service]
Type=simple
PIDFile=/run/gobetween.pid
#ExecStartPre=prestart some command
ExecStart=/usr/local/bin/gobetween -c /etc/gobetween/gobetween.toml
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s QUIT $MAINPID
PrivateTmp=true

[Install]
WantedBy=multi-user.target

Prepare the following TOML config file /etc/gobetween/gobetween.toml

[servers]

[servers.api]
protocol = "tcp"
bind = "0.0.0.0:6443"

[servers.api.discovery]
kind = "static"
static_list = [ "192.168.10.40:6443","192.168.10.41:6443","192.168.10.42:6443","192.168.10.43:6443" ]

[servers.api.healthcheck]
kind = "ping"
fails = 1
passes = 1
interval = "2s"
timeout = "1s"
ping_timeout_duration = "500ms"

[servers.mcs]
protocol = "tcp"
bind = "0.0.0.0:22623"

[servers.mcs.discovery]
kind = "static"
static_list = [ "192.168.10.40:22623","192.168.10.41:22623","192.168.10.42:22623","192.168.10.43:22623" ]

[servers.mcs.healthcheck]
kind = "ping"
fails = 1
passes = 1
interval = "2s"
timeout = "1s"
ping_timeout_duration = "500ms"

[servers.http]
protocol = "tcp"
bind = "0.0.0.0:80"

[servers.http.discovery]
kind = "static"
static_list = [ "192.168.10.44:80","192.168.10.45:80","192.168.10.46:80" ]

[servers.http.healthcheck]
kind = "ping"
fails = 1
passes = 1
interval = "2s"
timeout = "1s"
ping_timeout_duration = "500ms"

[servers.https]
protocol = "tcp"
bind = "0.0.0.0:443"

[servers.https.discovery]
kind = "static"
static_list = [ "192.168.10.44:443","192.168.10.45:443","192.168.10.46:443" ]

[servers.https.healthcheck]
kind = "ping"
fails = 1
passes = 1
interval = "2s"
timeout = "1s"
ping_timeout_duration = "500ms"

At this moment, the API and MCS server pools include the bootstrap server. After the cluster is bootstrapped, the records need an update to remove the bootstrap VM’s IP.

Restart the service for the configuration to take effect.
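Assuming the unit file above was saved as /etc/systemd/system/gobetween.service (an assumed path, not stated above), this amounts to:

sudo systemctl daemon-reload
sudo systemctl enable gobetween
sudo systemctl restart gobetween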

OCP installation configuration

Prepare the following configuration, install-config.yaml

apiVersion: v1
baseDomain: ibmcloud.io.cpak
compute:
- hyperthreading: Enabled
  name: worker
  # must be 0 for user provisioned infra as cluster will not create these workers
  replicas: 0
controlPlane:
  hyperthreading: Enabled
  name: master
  replicas: 3
metadata:
  name: exp-ocp4
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}
pullSecret: 'Get From the Redhat Website, copy and paste here'
sshKey: 'SSH Public Key to access the coreos'

Create a directory, generate the ignition files, and copy them to the matchbox folder.

rm -rf ocp-install
mkdir -p ocp-install
cp install-config.yaml ocp-install
cd ocp-install
openshift-install create ignition-configs --dir .
cp *ign /var/lib/matchbox/ignition
sudo chown -R matchbox:matchbox /var/lib/matchbox

Note that we remove the old directory to clean all the temporary files from the last installation.

OCP VMs

Create the required OCP VMs based on the recommended sizing. Let's say 3 master VMs, 3 worker VMs, and one bootstrap VM. For example,

virsh vol-create-as ocp-pool exp-bootstrap.qcow2 120G
virt-install --name=exp-bootstrap --ram=16384 --vcpus=8 --disk path=/ocp-pool/kvm-image/exp-bootstrap.qcow2,bus=virtio --pxe --noautoconsole --graphics=vnc --hvm --network network=ocp,model=virtio --boot hd,network

Notice the VM is created with --pxe for network boot. The boot order is set as hard disk then network. After the first network boot, CoreOS will be installed on the hard disk and the subsequent boots will be from the hard disk.
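The master and worker VMs are created the same way; a sketch for the first master, with names and sizing that are illustrative only (roughly the documented minimums for a 4.2 control-plane node), would be:

virsh vol-create-as ocp-pool exp-master1.qcow2 120G
virt-install --name=exp-master1 --ram=16384 --vcpus=4 --disk path=/ocp-pool/kvm-image/exp-master1.qcow2,bus=virtio --pxe --noautoconsole --graphics=vnc --hvm --network network=ocp,model=virtio --boot hd,network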

Get the VM MAC address and update the Dnsmasq configuration file as described in the Dnsmasq section. Remove the lease cache file and restart the Dnsmasq service.

Prepare the matchbox group and profile files based on the MAC address for each of the VMs. Restart the matchbox service.

It’s advised to automate these processes to avoid typos and other human errors. I am using magefile and sshkit to achieve the automation.

Restart the VMs with the virsh destroy and virsh start commands to let CoreOS install.

Tip: we can use virsh console vmName to monitor the bootup.

OCP cluster bootstrap and completion of the installation

Once the VMs are booted up, or even earlier, we can start the OCP cluster bootstrap.

cd ocp-install
openshift-install --dir=. wait-for bootstrap-complete --log-level=debug

Wait for the command to finish successfully. You will see logs similar to the below,



level=debug msg="Built from commit 6b629f0c847887f22c7a95586e49b0e2434161c3"

level=info msg="Waiting up to 30m0s for the Kubernetes API at

level=debug msg="Still waiting for the Kubernetes API: the server could not find the requested resource"

level=debug msg="Still waiting for the Kubernetes API: the server could not find the requested resource" level=debug msg="OpenShift Installer v4.2.11"level=debug msg="Built from commit 6b629f0c847887f22c7a95586e49b0e2434161c3"level=info msg="Waiting up to 30m0s for the Kubernetes API at https://api.exp-ocp4.ibmcloud.io.cpak:6443 ..."level=debug msg="Still waiting for the Kubernetes API: the server could not find the requested resource"level=debug msg="Still waiting for the Kubernetes API: the server could not find the requested resource" ...SKIP SOME LINES... level=info msg="API v1.14.6+32dc4a0 up"

level=info msg="Waiting up to 30m0s for bootstrapping to complete..."

level=debug msg="Bootstrap status: complete"

level=info msg="It is now safe to remove the bootstrap resources"

Update the LB Gobetween configuration to remove the bootstrap IP and restart the Gobetween service. We can also stop and remove the bootstrap VM now.
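For example, following the earlier VM and volume names,

virsh destroy exp-bootstrap
virsh undefine exp-bootstrap
virsh vol-delete exp-bootstrap.qcow2 --pool ocp-pool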

If we have external persistent storage ready, such as GlusterFS, create the storage class and create the PVC for the image registry. Update the registry config to make the storage persistent.

export KUBECONFIG=/root/ocp-install/auth/kubeconfig
oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"pvc":{"claim": "pvc-image-registry"}}}}'

You can also complete the config by using emptyDir for the storage, though this is not recommended for a production setup.

oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'

Finish the cluster creation with the command below,

cd ocp-install

openshift-install --dir=. wait-for install-complete --log-level=debug

You will see some logs as below,



level=debug msg="Built from commit 6b629f0c847887f22c7a95586e49b0e2434161c3"

level=info msg="Waiting up to 30m0s for the cluster at

level=debug msg="Still waiting for the cluster to initialize: Working towards 4.2.12: 100% complete, waiting on image-registry"

level=debug msg="Cluster is initialized"

level=info msg="Waiting up to 10m0s for the openshift-console route to be created..."

level=debug msg="Route found in openshift-console namespace: console"

level=debug msg="Route found in openshift-console namespace: downloads"

level=debug msg="OpenShift console route is created"

level=info msg="Install complete!"

level=info msg="To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/root/ocp-install/auth/kubeconfig'"

level=info msg="Access the OpenShift web-console here:

level=info msg="Login to the console with user: kubeadmin, password: xxxxxxxxx" level=debug msg="OpenShift Installer v4.2.11"level=debug msg="Built from commit 6b629f0c847887f22c7a95586e49b0e2434161c3"level=info msg="Waiting up to 30m0s for the cluster at https://api.exp-ocp4.ibmcloud.io.cpak:6443 to initialize..."level=debug msg="Still waiting for the cluster to initialize: Working towards 4.2.12: 100% complete, waiting on image-registry"level=debug msg="Cluster is initialized"level=info msg="Waiting up to 10m0s for the openshift-console route to be created..."level=debug msg="Route found in openshift-console namespace: console"level=debug msg="Route found in openshift-console namespace: downloads"level=debug msg="OpenShift console route is created"level=info msg="Install complete!"level=info msg="To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/root/ocp-install/auth/kubeconfig'"level=info msg="Access the OpenShift web-console here: https://console-openshift-console.apps.exp-ocp4.ibmcloud.io.cpak level=info msg="Login to the console with user: kubeadmin, password: xxxxxxxxx"

OCP authentication configuration

We can configure an htpasswd file for authentication.

apt install -y apache2-utils
htpasswd -c -B -b htpasswd.txt admin password
export KUBECONFIG=/root/ocp-install/auth/kubeconfig
oc create secret generic htpass-secret --from-file=htpasswd=htpasswd.txt -n openshift-config

Prepare the following YAML file htpasswd.yaml

apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: my_htpasswd_provider
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret

Apply it and assign cluster-admin rights.

export KUBECONFIG=/root/ocp-install/auth/kubeconfig

oc apply -f htpasswd.yaml
oc adm policy add-cluster-role-to-user cluster-admin admin

Now we are ready to use the oc command to log in to the cluster.
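For example, against the API endpoint defined earlier,

oc login https://api.exp-ocp4.ibmcloud.io.cpak:6443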

Input the user name and password to log in.

Traefik configuration

To allow remote access to the cluster inside the KVM, we need to configure the L7 reverse proxy.

Install traefik on the KVM host,

curl -LO https://github.com/containous/traefik/releases/download/v2.1.1/traefik_v2.1.1_linux_amd64.tar.gz
tar zxvf traefik_v2.1.1_linux_amd64.tar.gz
sudo mv traefik /usr/local/bin
sudo mkdir -p /etc/traefik
sudo mkdir -p /etc/traefik/conf.d

Create the following Systemd service file and enable the service

[Unit]
Description=Traefik
After=network.target

[Service]
Type=simple
PIDFile=/run/traefik.pid
ExecStart=/usr/local/bin/traefik
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s QUIT $MAINPID
PrivateTmp=true

[Install]
WantedBy=multi-user.target
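Assuming the unit file is saved as /etc/systemd/system/traefik.service (an assumed path), enable and start it:

sudo systemctl daemon-reload
sudo systemctl enable traefik
sudo systemctl start traefik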

Create a base common config file /etc/traefik/traefik.toml

[log]
level = "DEBUG"

[accessLog]

[entryPoints]
[entryPoints.api]
address = ":6443"
[entryPoints.https]
address = ":443"

[providers]
[providers.file]
directory = "/etc/traefik/conf.d"
watch = true

Listen on the KVM host’s port 6443 for cluster API access and port 443 for HTTPS access.

For each of the clusters, for example the cluster named exp-ocp, create the following file /etc/traefik/conf.d/exp-ocp.toml

[tcp.routers]
[tcp.routers.exp-ocp4-api]
entryPoints = ["api"]  # entry points are shared among the clusters
rule = "HostSNI(`api.exp-ocp4.ibmcloud.io.cpak`)"
service = "service-exp-ocp4-api"
[tcp.routers.exp-ocp4-api.tls]
passthrough = true

[tcp.routers.exp-ocp4-https]
entryPoints = ["https"]
rule = "HostSNI(`oauth-openshift.apps.exp-ocp4.ibmcloud.io.cpak`,`console-openshift-console.apps.exp-ocp4.ibmcloud.io.cpak`)"
service = "service-exp-ocp4-https"
[tcp.routers.exp-ocp4-https.tls]
passthrough = true

[tcp.services]
[tcp.services.service-exp-ocp4-api.loadBalancer]
[[tcp.services.service-exp-ocp4-api.loadBalancer.servers]]
address = "192.168.10.49:6443"

[tcp.services.service-exp-ocp4-https.loadBalancer]
[[tcp.services.service-exp-ocp4-https.loadBalancer.servers]]
address = "192.168.10.49:443"

We create two routers, one for the cluster API and one for the cluster apps over HTTPS. The TLS mode is passthrough. Use HostSNI to let Traefik route the request to the corresponding OCP cluster. Currently, the full name of every host has to be listed in the HostSNI expression. For example, the cluster HTTPS router has to define the two hosts below for the console login to succeed.

HostSNI(`oauth-openshift.apps.exp-ocp4.ibmcloud.io.cpak`,`console-openshift-console.apps.exp-ocp4.ibmcloud.io.cpak`)

Define the Traefik service to point to the specific cluster’s LoadBalancer.

Note that the Traefik routers and services are all named with the OCP cluster’s name so that the configurations won’t overwrite each other.

On the remote laptop, configure the local DNS server to resolve the cluster’s names to the KVM host. Refer to my article https://medium.com/@zhimin.wen/setup-local-dns-server-on-macbook-82ad22e76f2a for more details.

Now we can access the OCP 4.2 web console using the URL https://console-openshift-console.apps.exp-ocp4.ibmcloud.io.cpak