How `kubectl exec` works

If you are interested in finding out how `kubectl exec` works, I hope you will find this post useful. We will look at how the command works by examining the relevant code in kubectl, the K8s API Server, the Kubelet, and the Container Runtime Interface (CRI) Docker API.

About This Command

The kubectl exec command is an invaluable tool for those of us who regularly work with containerized workloads on Kubernetes. It allows us to inspect and debug our applications by executing commands inside our containers.

Let’s use kubectl v1.15.0 to run an example:

`kubectl exec` example

The first exec command runs the `date` command inside my Nginx container. The second exec command uses the `-i` and `-t` flags to get a shell to my Nginx container.

The CLI Code

Let’s repeat the command with increased log verbosity:

`kubectl exec` with verbose output

Notice that there are two HTTP requests:

- a GET request to fetch the pod
- a POST request to the pod’s exec subresource

The server responds with a 101 Upgrade response header, indicating to the client that it has switched to the SPDY protocol.

The API Server Code

Let’s examine the API Server’s code to see how it registers the rest.ExecRest handler to handle /exec subresource requests. This handler is used to determine which node endpoint to connect to for the exec request.

One of the things that the API Server does when starting is to instruct its embedded GenericAPIServer to install the ‘legacy’ API:

During the API installation, an instance of the LegacyRESTStorage type is instantiated, which creates a storage.PodStorage instance:

This storage.PodStorage instance is then added to the restStorageMap map. Notice that in this map, the pods/exec path is mapped to the podStorage’s rest.ExecRest handler:

This map then becomes part of an apiGroupInfo instance, which gets added to the GenericAPIServer:

The GoRestfulContainer has a ServeMux that knows how to map incoming request URLs to the different handlers.

Let’s take a closer look at how the rest.ExecRest handler works. Its Connect() method calls the pod.ExecLocation() function to determine the exec subresource URL of a pod container:

The URL returned by the pod.ExecLocation() function is used by the API Server to determine which node to connect to.

Now let’s look at the Kubelet code.

The Kubelet Code

How does the Kubelet register its exec handler? What does its interaction with the Docker API look like?

The Kubelet initialization process is quite involved. The following two functions are most relevant to this post:

- PreInitRuntimeService(), which initializes the CRI using the dockershim package
- RunKubelet(), which registers the handlers and starts the server

Setting up the Handler

As the Kubelet is starting up, its RunKubelet() function calls the unexported startKubelet() function, which starts the ListenAndServe() method of the kubelet.Kubelet instance. This method then calls the ListenAndServeKubeletServer() function, which uses the NewServer() constructor to install the “debugging” handlers:

The InstallDebuggingHandlers() function registers the HTTP request patterns with the getExec() handler:

The getExec() handler calls the GetExec() method of the s.host instance:

The s.host is instantiated as an instance of the kubelet.Kubelet type. It has a nested reference to the StreamingRuntime interface, which is instantiated as a kubeGenericRuntimeManager instance. This runtime manager is the key component that interacts with the Docker API. It implements the GetExec() method:

This method invokes the runtimeService.Exec() method. The runtimeService is an interface defined in the CRI package. The kuberuntime.kubeGenericRuntimeManager’s runtimeService object was instantiated as a kuberuntime.instrumentedRuntimeService type, which implements the runtimeService.Exec() method:

Furthermore, the nested service object of this instrumentedRuntimeService instance is instantiated as an instance of the remote.RemoteRuntimeService type, which has its own Exec() method:

This Exec() method issues a gRPC call to the /runtime.v1alpha2.RuntimeService/Exec endpoint to prepare a streaming endpoint that will be used to execute commands in the container. (See the next subsection, Setting up the Docker shim, for more on setting up the Docker shim as a gRPC server.)

The gRPC server handles this by invoking the RuntimeServiceServer.Exec() method, which is implemented by the dockershim.dockerService struct:

The streamingServer used here is a streaming.Server interface. It is instantiated in the dockershim.NewDockerService() constructor:

Let’s look at the implementation of its GetExec() method:

This is where the streaming endpoint is built and returned to the gRPC client.

As seen above, the restful.WebService instance then routes pod exec requests to this endpoint.

Setting up the Docker shim

The PreInitRuntimeService() function creates and starts the Docker shim as a gRPC server. While instantiating an instance of the dockershim.dockerService type, its nested streamingRuntime instance is assigned a reference to an instance of dockershim.NativeExecHandler, which implements the dockershim.ExecHandler interface:

The NativeExecHandler.ExecInContainer() method is the key to executing commands in containers using Docker’s exec API: