Kubernetes kubectl Tips and Tricks

By Duffie Cooley

Kubectl is a familiar tool if you use Kubernetes, and its broad functionality takes time to master; it can be a more powerful tool than many people expect. Great resources exist for working with the kubectl command line interface. This is a collection of tips and tricks that will expand your ability to work with kubectl. Be sure to take a look at the cheat sheet in the kubernetes.io docs section as well!

We'll go over all the following tips and more in our June 8 webinar. Register here for more information.

Shell Tips with kubectl

kubectl comes with great shell completion for bash and zsh built in, making it much easier to autocomplete commands, flags, and objects like namespaces or pod names. Here's the reference guide for installing kubectl and shell completion. The commands below show how to set up shell completion:

# Install bash completion on a Mac using homebrew
brew install bash-completion
printf "
# Bash completion support
source $(brew --prefix)/etc/bash_completion
" >> $HOME/.bash_profile
source $HOME/.bash_profile

# Load the kubectl completion code for bash into the current shell
source <(kubectl completion bash)

# Write bash completion code to a file and source it from .bash_profile
kubectl completion bash > ~/.kube/completion.bash.inc
printf "
# Kubectl shell completion
source '$HOME/.kube/completion.bash.inc'
" >> $HOME/.bash_profile
source $HOME/.bash_profile

# Load the kubectl completion code for zsh into the current shell
source <(kubectl completion zsh)

Merging Kubernetes configurations is a common pattern if you interact with multiple Kubernetes clusters. When working with multiple configs, you use the concept of a context to describe the parameters that kubectl will use to target a specific cluster. Merging configurations properly can be complex. To make it easier, you can use the environment variable KUBECONFIG to point at your configuration files and merge them. Learn more about KUBECONFIG.
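As a minimal sketch of the mechanics, KUBECONFIG holds a colon-separated list of file paths, and kubectl merges them in order, with earlier files taking precedence. The snippet below (with hypothetical file names) only splits and prints the list to show the ordering; it doesn't need a cluster to run:

```shell
# KUBECONFIG is a colon-separated list; kubectl merges the files in order,
# and the first file to define a value wins on conflicts.
export KUBECONFIG="$HOME/.kube/config:$HOME/.kube/cluster1-config"

# Split the list the same way kubectl does, just to show the ordering:
IFS=':' read -r -a kubeconfig_files <<< "$KUBECONFIG"
for f in "${kubeconfig_files[@]}"; do
  echo "merge candidate: $f"
done
```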

Say you have two Kubernetes config files for different Kubernetes clusters that you want to merge.

Here's cluster1-config:

$ kubectl config view --minify > cluster1-config
$ cat cluster1-config
apiVersion: v1
clusters:
- cluster:
    certificate-authority: cluster1_ca.crt
    server: https://cluster1
  name: cluster1
contexts:
- context:
    cluster: cluster1
    user: cluster1
  name: cluster1
current-context: cluster1
kind: Config
preferences: {}
users:
- name: cluster1
  user:
    client-certificate: cluster1_apiserver.crt
    client-key: cluster1_apiserver.key

And here's cluster2-config:

$ cat cluster2-config
apiVersion: v1
clusters:
- cluster:
    certificate-authority: cluster2_ca.crt
    server: https://cluster2
  name: cluster2
contexts:
- context:
    cluster: cluster2
    user: cluster2
  name: cluster2
current-context: cluster2
kind: Config
preferences: {}
users:
- name: cluster2
  user:
    client-certificate: cluster2_apiserver.crt
    client-key: cluster2_apiserver.key

You can then leverage KUBECONFIG to merge them.

The benefit of merging these files is being able to switch between contexts dynamically. A context is a grouping of a cluster, a user, and a name that lets you reference the configuration used to authenticate to and interact with a cluster. With the --kubeconfig flag, you can take a look at the context of each file.

$ kubectl --kubeconfig=cluster1-config config get-contexts
CURRENT   NAME       CLUSTER    AUTHINFO   NAMESPACE
*         cluster1   cluster1   cluster1
$ kubectl --kubeconfig=cluster2-config config get-contexts
CURRENT   NAME       CLUSTER    AUTHINFO   NAMESPACE
*         cluster2   cluster2   cluster2

We can see that each config file has a single context and that they don't conflict. Merging the two files with KUBECONFIG displays both contexts. To hold the current context, create a new empty file called cluster-merge.

$ export KUBECONFIG=cluster-merge:cluster1-config:cluster2-config
$ kubectl config get-contexts
CURRENT   NAME       CLUSTER    AUTHINFO   NAMESPACE
*         cluster1   cluster1   cluster1
          cluster2   cluster2   cluster2

Since the list of files exported with KUBECONFIG is loaded in order, the selected context is the one specified as current-context in the first config file that sets it. Changing context to cluster2 moves the * down to that context, and all kubectl commands will apply to that second context.
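That first-file-wins rule can be sketched without a cluster: the loop below scans files in order and keeps the first current-context it finds, mirroring the merge behavior (the demo files are created inline; real kubeconfigs contain much more than this one key):

```shell
# Two throwaway files standing in for kubeconfigs, each naming a context:
printf 'current-context: cluster1\n' > /tmp/demo-config-a
printf 'current-context: cluster2\n' > /tmp/demo-config-b

# Walk the files in KUBECONFIG order; the first current-context found wins.
current=""
for f in /tmp/demo-config-a /tmp/demo-config-b; do
  ctx=$(awk '/^current-context:/ {print $2}' "$f")
  if [ -z "$current" ] && [ -n "$ctx" ]; then
    current=$ctx
  fi
done
echo "selected context: $current"
```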

$ kubectl config get-contexts
CURRENT   NAME       CLUSTER    AUTHINFO   NAMESPACE
*         cluster1   cluster1   cluster1
          cluster2   cluster2   cluster2
$ kubectl config use-context cluster2
Switched to context "cluster2".
$ kubectl config get-contexts
CURRENT   NAME       CLUSTER    AUTHINFO   NAMESPACE
          cluster1   cluster1   cluster1
*         cluster2   cluster2   cluster2

$ cat cluster-merge
apiVersion: v1
clusters: []
contexts: []
current-context: cluster2
kind: Config
preferences: {}
users: []

The only thing the cluster-merge file holds is current-context. Kubernetes contexts are powerful and can be used and merged in many ways. For example, you can create a context that sets the namespace that all kubectl commands will apply to.

$ kubectl config set-context cluster1_kube-system --cluster=cluster1 --namespace=kube-system --user=cluster1
Context "cluster1_kube-system" set.
$ cat cluster-merge
apiVersion: v1
clusters: []
contexts:
- context:
    cluster: cluster1
    namespace: kube-system
    user: cluster1
  name: cluster1_kube-system
current-context: cluster2
kind: Config
preferences: {}
users: []

We can use the new context like this:

$ kubectl config use-context cluster1_kube-system
Switched to context "cluster1_kube-system".
$ kubectl get pods
NAME                             READY     STATUS    RESTARTS   AGE
default-http-backend-fwx3g       1/1       Running   0          28m
kube-addon-manager-cluster       1/1       Running   0          28m
kube-dns-268032401-snq3h         3/3       Running   0          28m
kubernetes-dashboard-b0thj       1/1       Running   0          28m
nginx-ingress-controller-b15xz   1/1       Running   0          28m

For those interested in leveraging the Kubernetes API, you can get the swagger.json document:

# Run the proxy in the background (or in a separate terminal)
kubectl proxy &
curl -O 127.0.0.1:8001/swagger.json

You can also browse to http://localhost:8001/api/ and look at how the pathing works in the Kubernetes API.

Since swagger.json is a JSON document, we can inspect it with jq. The jq tool is a lightweight JSON processor that can filter, compare, and transform JSON, and quite a bit more. Learn more about jq.
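If you want to try the filter before downloading the real document, here's a tiny hand-made stand-in for swagger.json (requires jq to be installed; the file content is invented for illustration):

```shell
# A minimal JSON document with the same shape as swagger.json's .paths map:
cat > /tmp/mini-swagger.json <<'EOF'
{"paths": {"/api/": {}, "/api/v1/pods": {}, "/version/": {}}}
EOF

# List the path keys, just as we do against the real swagger.json:
jq -r '.paths | keys[]' /tmp/mini-swagger.json
```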

Looking at swagger.json can help you make sense of the Kubernetes API. It is a complex API; the functions are broken into groups, which can be difficult to understand.

$ jq '.paths | keys[]' swagger.json
"/api/"
"/api/v1/"
"/api/v1/configmaps"
"/api/v1/endpoints"
"/api/v1/events"
"/api/v1/namespaces"
"/api/v1/nodes"
"/api/v1/persistentvolumeclaims"
"/api/v1/persistentvolumes"
"/api/v1/pods"
"/api/v1/podtemplates"
"/api/v1/replicationcontrollers"
"/api/v1/resourcequotas"
"/api/v1/secrets"
"/api/v1/serviceaccounts"
"/api/v1/services"
"/apis/"
"/apis/apps/"
"/apis/apps/v1beta1/"
"/apis/apps/v1beta1/statefulsets"
"/apis/autoscaling/"
"/apis/batch/"
"/apis/certificates.k8s.io/"
"/apis/extensions/"
"/apis/extensions/v1beta1/"
"/apis/extensions/v1beta1/daemonsets"
"/apis/extensions/v1beta1/deployments"
"/apis/extensions/v1beta1/horizontalpodautoscalers"
"/apis/extensions/v1beta1/ingresses"
"/apis/extensions/v1beta1/jobs"
"/apis/extensions/v1beta1/networkpolicies"
"/apis/extensions/v1beta1/replicasets"
"/apis/extensions/v1beta1/thirdpartyresources"
"/apis/policy/"
"/apis/policy/v1beta1/poddisruptionbudgets"
"/apis/rbac.authorization.k8s.io/"
"/apis/storage.k8s.io/"
"/logs/"
"/version/"

The following command lists the API versions exposed by the Kubernetes cluster you have access to.

Note that this command was run as an admin user; with RBAC enabled, your result may describe a different API set.

$ kubectl api-versions
apps/v1beta1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1beta1
autoscaling/v1
batch/v1
batch/v2alpha1
certificates.k8s.io/v1alpha1
coreos.com/v1
etcd.coreos.com/v1beta1
extensions/v1beta1
oidc.coreos.com/v1
policy/v1beta1
rbac.authorization.k8s.io/v1alpha1
storage.k8s.io/v1beta1
v1

The kubectl explain functionality helps you better understand what all the parts do.

$ kubectl explain
You must specify the type of resource to explain. Valid resource types include:
   * all
   * certificatesigningrequests (aka 'csr')
   * clusters (valid only for federation apiservers)
   * clusterrolebindings
   * clusterroles
   * componentstatuses (aka 'cs')
   * configmaps (aka 'cm')
   * daemonsets (aka 'ds')
   * deployments (aka 'deploy')
   * endpoints (aka 'ep')
   * events (aka 'ev')
   * horizontalpodautoscalers (aka 'hpa')
   * ingresses (aka 'ing')
   * jobs
   * limitranges (aka 'limits')
   * namespaces (aka 'ns')
   * networkpolicies
   * nodes (aka 'no')
   * persistentvolumeclaims (aka 'pvc')
   * persistentvolumes (aka 'pv')
   * pods (aka 'po')
   * poddisruptionbudgets (aka 'pdb')
   * podsecuritypolicies (aka 'psp')
   * podtemplates
   * replicasets (aka 'rs')
   * replicationcontrollers (aka 'rc')
   * resourcequotas (aka 'quota')
   * rolebindings
   * roles
   * secrets
   * serviceaccounts (aka 'sa')
   * services (aka 'svc')
   * statefulsets
   * storageclasses
   * thirdpartyresources
error: Required resource not specified.
See 'kubectl explain -h' for help and examples.

Try running the command kubectl explain deploy. The explain functionality works at different depths, which allows you to reference dependent objects as well.

$ kubectl explain deploy.spec.template.spec.containers.livenessProbe.exec
RESOURCE: exec <Object>

DESCRIPTION:
     One and only one of the following should be specified. Exec specifies the
     action to take.

     ExecAction describes a "run in container" action.

FIELDS:
   command   <[]string>
     Command is the command line to execute inside the container, the working
     directory for the command is root ('/') in the container's filesystem. The
     command is simply exec'd, it is not run inside a shell, so traditional
     shell instructions ('|', etc) won't work. To use a shell, you need to
     explicitly call out to that shell. Exit status of 0 is treated as
     live/healthy and non-zero is unhealthy.

Stay tuned for part two of this series, where we’ll explore some of the pod and node functionality kubectl offers.

Don’t miss the upcoming webinar where we’ll cover all of these features and more, June 8 at 10 a.m. PT. Sign up today for further information and an opportunity to ask Duffie questions directly.

If you want to hear about our other Kubernetes CLI tips and tricks, ask the CoreOS team directly at CoreOS Fest! The Kubernetes and distributed systems conference takes place May 31 and June 1 in San Francisco - join us for two days of talks from the community on the latest developments in the open source container ecosystem. Register today!