This tutorial teaches you how to execute a deployment scenario of your Kubernetes resources to a Kubernetes cluster on Google Cloud. I describe step by step what to do from creating an account, setting up the cluster, setting up a registry, accessing the cluster with your local client, up to the actual deployment and running the Kubernetes Dashboard via proxy.

If you want to deploy your containerized application to a Kubernetes cluster, you have a choice between several cloud providers. The major cloud providers for Kubernetes are:

Amazon AWS,

Microsoft Azure,

IBM Kubernetes Service (IKS) on IBM Cloud,

RedHat OpenShift (now IBM), and

Google Kubernetes Engine (GKE) on Google Cloud Platform (GCP).

Credit goes to the big drivers of Kubernetes: the Linux Foundation, the Open Container Initiative (OCI), the Cloud Native Computing Foundation (CNCF), and the companies that have been great advocates of open source software for a long time now: Google, IBM and RedHat. I really like the way IBM offers IBM Cloud Kubernetes Service (IKS), and as an IBM employee that is what I know best. IBM also offers a free, open source Community Edition (CE) on Github of its on-prem cloud called IBM Cloud Private (ICP). RedHat’s OpenShift platform holds a firm share of the Kubernetes market as well, but here I want to venture into how to use Google Kubernetes Engine (GKE) on Google Cloud Platform (GCP) instead. A more complete list of Kubernetes Certified Service Providers (KCSPs) can be found on the Kubernetes Partners site.

These are the steps to deploy a service to GKE:

Create a Google Cloud Account

Create a Project

Create a Kubernetes Cluster

Install Google Cloud SDK

Initialize Google Cloud SDK

Set kube config

Push Image to the Container Registry

Deploy Kubernetes Resources

Create an Ingress Load Balancer

Run Kubernetes Web UI Dashboard

Note that I do not explain how to define your Kubernetes resources in this tutorial; that was not my objective, and I assume you have already created the Kubernetes resources. Although I found the Google documentation very complete and clear, it felt a bit scattered for a deployment scenario, and I had to jump from here to there quite a bit. Hopefully, you will find this tutorial useful to get started with GKE.

Create a Google Cloud Account

First things first, go to https://cloud.google.com/ to sign up for a Google Cloud account if you do not already have one. If you already have an account, sign in, otherwise click ‘Get Started for Free’ and create a new account.

Sign in and go to console Home.

Create a Project

If you do not have a project, click the drop down in the header toolbar and create a new project by clicking the ‘NEW PROJECT’ button.

If you already have a project, click the drop down in the header toolbar and select your project, then click the ‘Open’ button.

Create a Kubernetes Cluster

Go to the main navigation menu drop down and select Kubernetes Engine > Clusters,

In the Clusters overview, click the ‘Create cluster’ button,

Configure your cluster as needed. Just be aware that if the ‘Machine type’ you choose is too small, your containers might not have sufficient capacity to deploy.

I choose the default zone ‘us-central1-a’ (see the documentation here for which zones are in which regions).

Click the ‘Create’ button, after a few minutes your new Google Kubernetes Engine (GKE) cluster will be ready.
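Alternatively, once the Google Cloud SDK is installed (covered in a later step), you can create the cluster from the command line instead of the console. A minimal sketch, using the cluster name ‘standard-cluster’ and zone from this tutorial; the machine type and node count below are illustrative defaults, not taken from the original setup:

```shell
# Create a GKE cluster named 'standard-cluster' in zone us-central1-a.
# --machine-type and --num-nodes are illustrative; pick sizes that fit
# your workload (too small a machine type may fail to schedule pods).
gcloud container clusters create standard-cluster \
  --zone us-central1-a \
  --machine-type n1-standard-1 \
  --num-nodes 3
```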

Install Google Cloud SDK

To access your new GKE cluster from your client, you need to install the gcloud CLI that is included in the Google Cloud SDK.

Go to https://cloud.google.com/sdk and follow the instructions to install the SDK on your client platform, e.g. click the ‘INSTALL FOR MACOS’ button if your client is a Mac running macOS.

Initialize Google Cloud SDK

Initialize Google Cloud SDK,

$ gcloud init
Welcome! This command will take you through the configuration of gcloud.
Settings from your current configuration [default] are:
compute:
  region: us
  zone: us-central1-a
core:
  account: remkohdev@gmail.com
  disable_usage_reporting: 'True'
  project: szirine
Pick configuration to use:
 [1] Re-initialize this configuration [default] with new settings
 [2] Create a new configuration
Please enter your numeric choice:

Update the Google Cloud SDK components,

$ gcloud components update

Set kube config

By default, kubectl looks for a file named ‘config’ in the ‘~/.kube’ directory to access the API server of a cluster.
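You can see which kubeconfig file kubectl will use before running any cluster commands; when the KUBECONFIG environment variable is not set, kubectl falls back to the default path. A small sketch of that resolution logic:

```shell
# kubectl uses the file(s) in $KUBECONFIG if the variable is set,
# otherwise it falls back to the default '~/.kube/config'.
echo "${KUBECONFIG:-$HOME/.kube/config}"
```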

The ‘gcloud auth login’ command obtains access credentials via a web-based authorization flow and sets the configuration.

To authenticate with Google Cloud SDK,

$ gcloud auth login
Your browser has been opened to visit:
https://accounts.google.com/o/oauth2/auth?redirect_uri=http%3A%2F%2Flocalhost%3A8085%2F&prompt=select_account&response_type=code&client_id=32555940559.apps.googleusercontent.com&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.email+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fappengine.admin+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcompute+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Faccounts.reauth&access_type=offline
WARNING: `gcloud auth login` no longer writes application default credentials.
If you need to use ADC, see:
  gcloud auth application-default --help
You are now logged in as [remkohdev@gmail.com].
Your current project is [szirine]. You can change this setting by running:
  $ gcloud config set project PROJECT_ID

In the browser window that opens, click the ‘Allow’ button.

If you get an authentication error,

error: cannot construct google default token source: google: could not find default credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.

You can update an existing kube config file with the credentials of a specific cluster by running the following command,

$ gcloud container clusters get-credentials standard-cluster

You might get an error for missing location,

$ gcloud container clusters get-credentials standard-cluster

ERROR: (gcloud.container.clusters.get-credentials) One of [--zone, --region] must be supplied: Please specify location.

Add the location by including the zone and project settings for your cluster,

$ gcloud container clusters get-credentials standard-cluster --zone us-central1-a --project szirine

Fetching cluster endpoint and auth data.
kubeconfig entry generated for standard-cluster.

You can also set the project, zone and region settings separately by running the following individual commands,

$ gcloud config set project szirine

$ gcloud config set compute/zone us-central1-a

$ gcloud config set compute/region us

View your current-context,

$ kubectl config current-context

gke_szirine_us-central1-a_standard-cluster

Your client is now connected to the remote cluster on GKE.
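To double-check the connection, you can query the cluster with kubectl; for example, listing the control plane endpoints and worker nodes (the node names you see will depend on your cluster):

```shell
# Confirm the current context can reach the GKE control plane
kubectl cluster-info
# List the worker nodes of the cluster
kubectl get nodes
```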

Push Image to Container Registry

See Pushing and Pulling Images,

You can register the gcloud cli as a Docker credentials helper to access the Google Container Registry (GCR). GKE can use GCR to pull images for the Kubernetes resources.

I want to use gcloud authentication to allow my ‘docker push’ command to push Docker images to the GCR.

Run the command,

$ gcloud auth configure-docker
WARNING: Your config file at [/Users/remkohdev@us.ibm.com/.docker/config.json] contains these credential helper entries:
{
  "credHelpers": {}
}
These will be overwritten.
The following settings will be added to your Docker config file located at [/Users/remkohdev@us.ibm.com/.docker/config.json]:
{
  "credHelpers": {
    "gcr.io": "gcloud",
    "us.gcr.io": "gcloud",
    "eu.gcr.io": "gcloud",
    "asia.gcr.io": "gcloud",
    "staging-k8s.gcr.io": "gcloud",
    "marketplace.gcr.io": "gcloud"
  }
}
Do you want to continue (Y/n)? Y
Docker configuration file updated.

This updates the ‘~/.docker/config.json’ file, registering gcloud as a credential helper, alongside the ‘osxkeychain’ credentials store,

"credHelpers" : {
  "us.gcr.io" : "gcloud",
  "asia.gcr.io" : "gcloud",
  "staging-k8s.gcr.io" : "gcloud",
  "marketplace.gcr.io" : "gcloud",
  "gcr.io" : "gcloud",
  "eu.gcr.io" : "gcloud"
},
"credsStore" : "osxkeychain",

If you go to the GCR console, you will see an empty registry,

Now, run your ‘docker build’, ‘docker tag’ and ‘docker push’ commands. For example, I have a Docker image for an API server called ‘szirine-api’,

$ docker build --no-cache -t szirine-api .

$ docker tag szirine-api:latest us.gcr.io/szirine/szirine-api:0.1.0

$ docker push us.gcr.io/szirine/szirine-api:0.1.0

Go to the GCR registry again and you should see your image being pushed to the registry,
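You can also verify the push from the command line; assuming the project ‘szirine’ and the ‘us.gcr.io’ registry host used above:

```shell
# List the images in the project's us.gcr.io registry
gcloud container images list --repository=us.gcr.io/szirine
# List the tags pushed for the szirine-api image
gcloud container images list-tags us.gcr.io/szirine/szirine-api
```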

Now, the image is available to be pulled by the Kubernetes deployment resource,

spec:
  containers:
  - name: szirine-api
    image: us.gcr.io/szirine/szirine-api:0.1.0

Deploy Kubernetes Resources

Now that you have set up and configured your kubectl access to GKE, we can deploy our Kubernetes resources.

I created the following bash script to deploy my resources for the application called Szirine API. You can of course look at great projects like Ansible, Terraform, or other deployment and configuration tools, but I have found that writing my own bash scripts and including them in my Jenkins CI/CD (Continuous Integration and Continuous Deployment) pipeline is, in many cases, the leanest and cleanest thing to do.

echo '=====>delete dev-ns'
kubectl delete namespace dev-ns
echo '=====>create dev-ns'
kubectl create -f ./k8s/templates/dev-namespace.yaml

echo '=====>delete szirine-api-configmap'
kubectl delete configmap -n dev-ns szirine-api-configmap
echo '=====>create szirine-api-configmap'
kubectl create -f ./k8s/templates/dev-configmap.yaml

echo '=====>delete szirine-api-deployment<====='
kubectl delete deployment -n dev-ns szirine-api-deployment
# while the resource still exists, wait
rc=$(kubectl get deployment -n dev-ns szirine-api-deployment 2>/dev/null)
while [ -n "$rc" ]
do
  sleep 1
  rc=$(kubectl get deployment -n dev-ns szirine-api-deployment 2>/dev/null)
done
echo '=====>create szirine-api-deployment<====='
kubectl create -f ./k8s/templates/dev-deployment.yaml

echo '=====>delete szirine-api-svc<====='
kubectl delete svc -n dev-ns szirine-api-svc
# while the resource still exists, wait
rc=$(kubectl get svc -n dev-ns szirine-api-svc 2>/dev/null)
while [ -n "$rc" ]
do
  sleep 1
  rc=$(kubectl get svc -n dev-ns szirine-api-svc 2>/dev/null)
done
echo '=====>create szirine-api-svc<====='
kubectl create -f ./k8s/templates/dev-svc.yaml

echo '=====>delete szirine-api-hpa<====='
kubectl delete hpa -n dev-ns szirine-api-hpa
# while the resource still exists, wait (note: query hpa, not svc)
rc=$(kubectl get hpa -n dev-ns szirine-api-hpa 2>/dev/null)
while [ -n "$rc" ]
do
  sleep 1
  rc=$(kubectl get hpa -n dev-ns szirine-api-hpa 2>/dev/null)
done
echo '=====>create szirine-api-hpa<====='
kubectl create -f ./k8s/templates/dev-hpa.yaml
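After the script finishes, you can check that all resources were created in the ‘dev-ns’ namespace, for example:

```shell
# List the deployment, service, horizontal pod autoscaler and pods
# created by the script in the dev-ns namespace
kubectl get deployment,svc,hpa,pods -n dev-ns
```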

My deployment resource ‘k8s/templates/dev-deployment.yaml’ looks as follows,

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: szirine-api-deployment
  namespace: dev-ns
  labels:
    app: szirine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: szirine-api
  template:
    metadata:
      labels:
        app: szirine-api
    spec:
      containers:
      - name: szirine-api
        image: us.gcr.io/szirine/szirine-api:0.1.0
        ports:
        - name: main
          protocol: TCP
          containerPort: 3000
        envFrom:
        - configMapRef:
            name: szirine-api-configmap

telling Kubernetes to pull the Docker image from the GCR via ‘image: us.gcr.io/szirine/szirine-api:0.1.0’.
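To confirm that the image was indeed pulled from GCR, you can watch the rollout and inspect the pod details; a sketch, assuming the deployment and label from the manifest above:

```shell
# Wait for the deployment rollout to complete
kubectl rollout status deployment/szirine-api-deployment -n dev-ns
# Show the image and pull events for the szirine-api pods
kubectl describe pods -n dev-ns -l app=szirine-api
```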