After moving from Docker to Docker Swarm to Kubernetes, and then dealing with all of the various API changes over the years, I’ve become quite comfortable with finding out what’s wrong with my deployments and getting them fixed.

It wasn’t always that way, and you have to start somewhere. Perhaps that’s where you are today: at the start? Wherever you are on that journey, I want to give you 5 tips for troubleshooting that I have found useful, along with some additional tips on usage.

Introducing your “Swiss Army Knife”

Good tools show signs of use.

Your Swiss Army Knife (read “Leatherman”, if you’re from the US) is a multi-purpose tool, and like all good tools should be well used and kind of worn out.

And you guessed it: it’s kubectl. Let's start with 5 "attachments" and how you can use them when things go wrong.

The scenarios are going to be: “my YAML was accepted, but my service isn’t started” and “it started, but it’s not working right”.

1. kubectl get deployment/pods

This is your first port of call and you probably know this one already, but the reason it’s so important is that it surfaces the top-level information without you having to do much typing.

If you are using a Deployment for your workload (which you should be), then you have a couple of options:

kubectl get deploy

kubectl get deploy -n namespace

kubectl get deploy --all-namespaces [or "-A"] (you're welcome)

Ideally you’re looking to see 1/1 or the equivalent: 2/2, 3/3 and so on. This shows that your deployment was accepted and has tried to start its Pods.

Next, you may want to look at kubectl get pods to see if the backing Pod for the deployment started correctly.
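As a quick sketch of what to look for, the READY column shows ready/desired replicas, and you can even script a check for anything that isn't fully ready. The column layout below is real kubectl output, but the rows are made up for illustration:

```shell
# Hypothetical `kubectl get deploy -A` output piped through awk to flag
# any deployment whose READY column (ready/desired) doesn't match up.
printf '%s\n' \
  'NAMESPACE   NAME      READY   UP-TO-DATE   AVAILABLE   AGE' \
  'default     nginx-1   1/1     1            1           5d' \
  'openfaas    figlet    0/1     1            0           2m' |
awk 'NR > 1 { split($3, r, "/"); if (r[1] != r[2]) print $2 " is not ready (" $3 ")" }'
# Prints: figlet is not ready (0/1)
```

Against a real cluster you would replace the printf with `kubectl get deploy -A` and pipe that into the same awk.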

2. kubectl get events

I’m surprised how often I have to explain this little gem to folks having issues with Kubernetes. This command prints out events in a given namespace and is great for finding key problems like a crashing pod or a container image that cannot be pulled.

Events in Kubernetes are *not ordered* by default, so you will want to add the following flag, taken from the OpenFaaS docs:

$ kubectl get events --sort-by=.metadata.creationTimestamp
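To see why that flag matters: event timestamps are ISO-8601 strings, which happen to sort chronologically even as plain text. The event lines below are made up for illustration:

```shell
# Hypothetical, unordered event lines; ISO-8601 timestamps sort
# chronologically with a plain `sort`, which is what
# --sort-by=.metadata.creationTimestamp does properly via JSONPath.
printf '%s\n' \
  '2024-05-01T10:05:00Z Warning BackOff back-off restarting failed container' \
  '2024-05-01T10:01:00Z Normal Pulling pulling image "nginx"' \
  '2024-05-01T10:03:00Z Warning Failed failed to pull image "nginx:bad-tag"' | sort
# The first line printed is now the earliest event (10:01:00Z).
```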

The cousin of kubectl get events is kubectl describe, and just like get deploy/pod, it works with the name of an object:

kubectl describe deploy/figlet -n openfaas

You’ll get very detailed information here. And before we forget, you can describe most things, including nodes, which will show if you cannot schedule a Pod due to resource constraints or other problems.

3. kubectl logs

You knew this one was coming, but again, so many people are using it wrong: the hard way.

If you have a deployment, let’s say cert-manager in the cert-manager namespace, many people think that they first have to find the long (unique) name of its Pod and pass that as the parameter. Wrong!

kubectl logs deploy/cert-manager -n cert-manager

To follow the logs as they come in, add -f:

kubectl logs deploy/cert-manager -n cert-manager -f

To avoid fetching every possible log entry, you can also use --tail to limit that to the last few lines:

kubectl logs deploy/cert-manager -n cert-manager --tail 100

And you can combine all three: the namespace flag, -f and --tail.

If your Deployment or Pod has any labels, you can use -l app=name or any other set of labels to attach to the logs of one or more matching Pods.

kubectl logs -l app=nginx
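A label selector like -l app=nginx matches every Pod carrying that label. As a sketch of what the selector is doing, here is some hypothetical `kubectl get pods --show-labels` output filtered the same way (the Pod names and labels are made up):

```shell
# Hypothetical `kubectl get pods --show-labels` output; the awk filter
# mimics what `-l app=nginx` does server-side by matching the LABELS column.
printf '%s\n' \
  'NAME          READY   STATUS    LABELS' \
  'nginx-abc12   1/1     Running   app=nginx,tier=web' \
  'redis-def34   1/1     Running   app=redis' |
awk 'NR > 1 && $4 ~ /app=nginx/ { print $1 }'
# Prints: nginx-abc12
```

With kubectl itself, the selector is evaluated by the API server, so you never have to grep names by hand.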

There are some tools like stern and kail that can help you match patterns and save on a little typing, but I find them to be a distraction.

4. kubectl get -o yaml

When you begin to work with YAML that is generated by another project, or another tool like Helm, you are going to need this pretty fast. It’s also useful in production to check the version of an image or the annotations that you have set somewhere.

kubectl run nginx-1 --image=nginx --port=80 --restart=Always

Note that on newer versions of kubectl, kubectl run creates a bare Pod rather than a Deployment; kubectl create deployment nginx-1 --image=nginx is the closest equivalent there.

Did I deploy that with restart “Always” or restart “Never”?

kubectl get deploy/nginx-1 -o yaml

Now we know. And what is more, we can save the YAML locally to edit it and apply it again. The --export flag used to help by stripping cluster-specific fields, but it was deprecated and removed in recent versions of kubectl, so you may have to remove fields like status by hand.
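When the YAML gets long, piping it through grep is often enough to answer a question like the one above. The manifest fragment below is hypothetical, standing in for real `kubectl get deploy/nginx-1 -o yaml` output:

```shell
# Grep a (hypothetical) manifest fragment for the field we care about;
# with a live cluster you'd pipe `kubectl get deploy/nginx-1 -o yaml`
# into grep instead of using a here-doc.
cat <<'EOF' | grep 'restartPolicy'
spec:
  template:
    spec:
      containers:
      - image: nginx
        name: nginx-1
      restartPolicy: Always
EOF
# Matches the line containing: restartPolicy: Always
```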

The other option for editing YAML live is kubectl edit. If you are getting stuck with vim and have no idea what’s going on, prefix the command with KUBE_EDITOR=nano for an easier editor.

5. kubectl scale — have you turned it on and off again yet?

kubectl scale can be used to scale a Deployment and its Pods down to zero replicas, in effect killing all of them. When you scale back up to 1/1, a new Pod will be created, restarting your application.

The syntax is really easy and you’ll be able to restart your code and run through testing again.

kubectl scale deploy/nginx-1 --replicas=0

kubectl scale deploy/nginx-1 --replicas=1

An alternative you can use is kubectl rollout restart deploy/nginx-1.

To know whether a deployment is “ready”, use kubectl rollout status deploy/nginx-1; it will block until the rollout has passed its health checks. You may also need an extended timeout such as --timeout=60s, or --timeout=0s to wait indefinitely.

6. Port forwarding

I know I said there were 5 tips, but we need another one. Port forwarding via kubectl lets us take a service running in a local or remote cluster and access it on our own computer, on any configured port, without exposing it on the Internet.

Here’s an example of accessing the Nginx deployment, but locally:

kubectl port-forward deploy/nginx-1 8080:80

Again, some people think this only works with Deployments or Pods; they are wrong. Services are fair game, and are often the right thing to port-forward, since they mimic the configuration of your production cluster.

If you do want to expose a service on the Internet, you’ll usually use a LoadBalancer service, or run kubectl expose:

kubectl expose deployment nginx-1 --port=80 --type=LoadBalancer

The alternative is to use a Cloud Native Tunnel like the inlets-operator.

Now try it out

I hope you found these 6 commands and tips useful. Now it’s over to you to go and test them out on a real cluster.

You can create a cluster on your laptop or use a managed cloud offering.

My favourite options for local development include KinD — Kubernetes In Docker.

Did you enjoy this post? Would you like to see more? Subscribe to my Premium Newsletter via GitHub Sponsors or Follow me on Twitter/LinkedIn