A major part of our project work at Caylent is migrating clients away from legacy architecture and obsolete tech onto new platforms. A significant player in our suite of tools is, of course, Kubernetes, as it lends itself to comprehensive customization right from the installation phase.



Accordingly, we can tailor business-critical solutions to a multitude of requirements, whether in the cloud, on-premises, or in a hybrid setup. Naturally, each Kubernetes build is individually tailored to meet different client needs: control, ease of maintenance, security, resource provisioning, and the proficiency required to run and manage a cluster.



This case study outlines the Kubernetes installation and configuration documentation we provided to a particular client. Sharing this information considerably improved awareness, visibility, and navigation experience for the team we were working with, so we thought we’d share it here for our Caylent audience too.

Kubeconfig

One of the fundamental challenges when working with multiple Kubernetes environments is keeping track of exactly which environment you are interacting with, and which namespace you are working in inside that environment.



In Kubernetes, each individual environment and its associated working namespaces are managed within a file referred to as a kubeconfig file. There is a kubeconfig file behind every working kubectl command. By default, utilities like kubectl expect your kubeconfig file to be located at ~/.kube/config, but you can override this behavior.



One way to do so in real-time is to pass --kubeconfig=<kubeconfig filename> as an argument to kubectl, like so:

kubectl config --kubeconfig=/home/dev01/.kube/config-dev view

A more persistent way is to create a KUBECONFIG environment variable pointing at the alternative kubeconfig file you want to use. A common way to do this is to add a line like the following to your ~/.bashrc:

export KUBECONFIG=~/.kube/config-dev

Changing your working kubeconfig file is then as simple as overriding this KUBECONFIG variable. You can also override the file per-command, either with the --kubeconfig flag or inline via the environment variable:



kubectl get pods --kubeconfig=file1
kubectl get pods --kubeconfig=file2

# or:



KUBECONFIG=file1 kubectl get pods
KUBECONFIG=file2 kubectl get pods
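One lesser-known behavior worth noting: KUBECONFIG accepts a colon-separated list of files, which kubectl merges into a single logical configuration. A minimal sketch (the file names config-dev and config-prod are illustrative stand-ins for your own files):

```shell
# KUBECONFIG may list several files, separated by colons; kubectl merges
# them all into one logical configuration at runtime. The file names
# below are hypothetical.
export KUBECONFIG="$HOME/.kube/config-dev:$HOME/.kube/config-prod"
echo "$KUBECONFIG"

# To inspect the merged result, you could then run:
#   kubectl config view --flatten
```

This is handy when a cloud provider hands you a standalone kubeconfig per cluster and you want all of their contexts visible at once.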

Contexts

Inside each kubeconfig file are one or more clusters, along with one or more contexts. These are typically used to identify distinct Kubernetes environments that you interact with regularly.



For example, a developer might have a local minikube environment on their laptop, as well as an integration environment in AWS which they’re interacting with often. It would be typical to put both of these clusters within the same kubeconfig file, but with different context names. For instance:



apiVersion: v1
kind: Config
current-context: minikube
clusters:
- cluster:
    certificate-authority-data: <redacted>
    server: https://AC9854B932B5BFA5606730C0F43340B53.yl4.us-east-1.eks.amazonaws.com
  name: Integration-EKS
- cluster:
    certificate-authority: /home/dev01/.minikube/ca.crt
    server: https://127.0.0.1:8443
  name: minikube
contexts:
- context:
    cluster: Integration-EKS
    namespace: my-microservice
    user: aws-Int-EKSAdmin
  name: Integration-EKS
- context:
    cluster: minikube
    namespace: default
    user: minikube
  name: minikube

By selecting a different context, you change where your kubectl actions will take effect. This document doesn’t go into setting up new clusters or configuring your client to connect to them, but the official Kubernetes setup documentation covers this in detail: https://kubernetes.io/docs/setup/
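Under the hood, `kubectl config use-context` simply rewrites the current-context field at the top of the kubeconfig file. A small sketch demonstrating this with a throwaway file (the context name is taken from the sample above):

```shell
# A sketch: write a throwaway kubeconfig and read back the active context.
# `kubectl config use-context` rewrites exactly this field; nothing more.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
apiVersion: v1
kind: Config
current-context: minikube
EOF

# The active context is simply whatever current-context names:
ctx=$(sed -n 's/^current-context: //p' "$cfg")
echo "$ctx"
rm -f "$cfg"
```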



One way to set both the kubeconfig file and a context is to use kubectl arguments directly, like this:

kubectl config --kubeconfig=/home/dev01/.kube/config-dev use-context minikube

In many cases, developers are best served by keeping their development activities in a single kubeconfig file and using contexts to switch between environments. This allows for greater productivity, but creates a new consideration—the need to be sure you are applying your changes in the place you intended, not somewhere else! While you can always check by running kubectl config current-context, when you have plenty of other work to do, this doesn’t always cross your mind. A much better solution is to improve awareness and visibility of context and namespace passively, so it does not rely on an explicit action on the part of the developer.

Visibility

A great solution to improve awareness and visibility is a suite of four tools:

kubectx (a utility which lets you switch between contexts easily and adds bash tab-completion support for Kubernetes contexts)

kubens (a utility which lets you switch between namespaces within the current context easily and adds bash tab-completion support for Kubernetes namespaces)

kube-ps1 (a shell script which can add the current context and namespace to the user’s bash PS1 prompt in real-time, for greatly enhanced awareness and visibility)

fzf (a fuzzy finder utility used to make an arbitrary list of data navigable with the keyboard)

When using kubectx and kubens, you can easily list and switch between contexts and namespaces with a single short command.
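For example (a sketch, assuming both tools are installed and the contexts from the sample kubeconfig above exist; the commands are guarded so the snippet is safe to paste anywhere):

```shell
# Guarded sketch: only runs the tools if they are actually installed.
if command -v kubectx >/dev/null 2>&1; then
  kubectx              # with no arguments: list all contexts
  kubectx minikube     # switch to the minikube context
  kubectx -            # jump back to the previous context, like `cd -`
  kubens kube-system   # switch namespace within the current context
  kubens -             # jump back to the previous namespace
else
  echo "kubectx/kubens not installed"
fi
```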


When you add fzf into the equation, you gain the ability to both see all available options and navigate them via keyboard up/down/enter selection.

When you add kube-ps1, you have real-time visibility into your current context and namespace as part of your bash prompt.

Tool Installation

To install these tools, you should start with fzf, then add kubectx, kubens, and kube-ps1.

Installing fzf: For Linux (and pretty much anywhere else), use the git method from the fzf README:

git clone --depth 1 https://github.com/junegunn/fzf.git ~/.fzf
~/.fzf/install

For MacOS, use Homebrew:

brew install fzf

# To install useful key bindings and fuzzy completion:

$(brew --prefix)/opt/fzf/install

Installing kubectx/kubens: For Linux (Debian-based only):

Enable unstable/testing repos (outside the scope of this document; see the Debian documentation for more info).

Install the kubectx package from the unstable repos: sudo apt install kubectx

For Linux (manual install, per the kubectx README):

Clone the kubectx repo locally and add kubectx and kubens to a directory within your $PATH, for example (the following assumes /usr/local/bin is in your $PATH, adjust to your own needs):

sudo git clone https://github.com/ahmetb/kubectx /opt/kubectx
sudo ln -s /opt/kubectx/kubectx /usr/local/bin/kubectx
sudo ln -s /opt/kubectx/kubens /usr/local/bin/kubens

Add bash tab completion support to your shell, by adding the following to the bottom of your ~/.bashrc file:

# kubectx and kubens tab completion
source /opt/kubectx/completion/kubens.bash
source /opt/kubectx/completion/kubectx.bash

For MacOS:

Install kubectx using Homebrew:

$ brew update
$ brew install kubectx

Installing kube-ps1: For Linux (per the kube-ps1 README):

Clone the kube-ps1 repo:

sudo git clone https://github.com/jonmosco/kube-ps1.git /opt/kube-ps1

Add kube-ps1 to the user’s bash prompt by adding the following to the bottom of their ~/.bashrc file:

# Add kube-ps1.sh to the prompt
source /opt/kube-ps1/kube-ps1.sh
PS1='[\u@\h \W $(kube_ps1)]\$ '

For MacOS:

Install kube-ps1 using Homebrew:

$ brew update
$ brew install kube-ps1

After the brew install completes, make sure you apply the shell update it instructs you to perform at the end of the install output, or your prompt will not change.
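For reference, the Homebrew caveat typically asks you to source the script from the brew prefix—something like the following in your ~/.bashrc (treat the exact path as an assumption and defer to what brew actually prints):

```shell
# Path per the kube-ps1 README for Homebrew installs; verify against the
# caveats printed by `brew install kube-ps1` on your machine.
source "$(brew --prefix kube-ps1)/share/kube-ps1.sh"
PS1='$(kube_ps1)'$PS1
```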

To chat about any of the Kubernetes installation steps outlined here, or to discuss our DevOps-as-a-Service offering for a customized solution for your business, contact us today.

Caylent provides a critical DevOps-as-a-Service function to high growth companies looking for expert support with Kubernetes, cloud security, cloud infrastructure, and CI/CD pipelines. Our managed and consulting services are a more cost-effective option than hiring in-house, and we scale as your team and company grow. Check out some of the use cases, learn how we work with clients, and read more about our DevOps-as-a-Service offering.

