Telepresence is a versatile CNCF sandbox project that aims to provide “Fast, local development for Kubernetes and OpenShift microservices.”

I personally believe that Telepresence is more than that — it simplifies configuration management for local development, allowing developers to work as though the application were running in the remote environment.

This article has two sections: an introduction to the Telepresence shell, and swapping remote deployments. The goal is to outline, from a functional standpoint, when you would use the Telepresence shell and how it can bridge the gap between remote and local development configuration.

Making use of the Telepresence shell

You just joined a team working on a shopping website. There are three non-public services running in Kubernetes that you want to use: Orders, Products and Users. Since the services are only available inside the cluster, you open three kubectl port-forwards to make the services accessible for local development.
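Concretely, that setup might look something like this (service names and ports are illustrative, not taken from a real cluster):

```shell
# Forward each in-cluster service to a local port.
# Run each command in its own terminal, or background it with &.
kubectl port-forward svc/orders 10001:80
kubectl port-forward svc/products 10002:80
kubectl port-forward svc/users 10003:80
```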

The first and most obvious problem with the setup above is the maintenance of the tunnels. Depending on Node configuration, the connection will time out and close if it’s not actively being used (on AKS, timeout happens after 5 minutes).

A more serious problem is that of environment configuration drift — you now have a local configuration and a remote configuration. The remote configuration will use the DNS of the services, and the local configuration will use the localhost URLs.

With Telepresence, there is no need to make this local/remote distinction.

Starting a shell

To start a new Telepresence shell, run:

telepresence [--run-shell]

And… it seems like nothing happened! The first time I ran Telepresence, this really threw me off — I didn’t really know what was going on.

Under the surface, there is now a proxy running inside the cluster that makes the local shell act as if it were a Pod inside the Kubernetes cluster.

By running printenv in the local shell, you will notice environment variables such as KUBERNETES_SERVICE_HOST that are normally injected into Kubernetes pods.
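For example, inside the Telepresence shell you could run something like this (the exact output will vary per cluster):

```shell
# Filter for the service-discovery variables that Kubernetes
# normally injects into pods; they now appear in your local shell.
printenv | grep KUBERNETES_SERVICE
```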

To check the new deployment and pod that was created by Telepresence, run:

kubectl get all -l "telepresence"

The pod that proxies the environment runs with the Docker image datawire/telepresence-k8s. This deployment will be cleaned up as soon as you exit the Telepresence shell.

Going back to the shopping example, there would no longer be a need to use these localhost addresses:

ORDERS_SERVICE_URL=localhost:10001
PRODUCTS_SERVICE_URL=localhost:10002
USERS_SERVICE_URL=localhost:10003

Instead they could be the same as you’d expect in the Kubernetes environment:

ORDERS_SERVICE_URL=orders.default.svc
PRODUCTS_SERVICE_URL=products.default.svc
USERS_SERVICE_URL=users.default.svc

Note: there are some limitations to using --method vpn-tcp (default proxy method). For more info, visit the docs.

Running the shell inside a Docker container

At some point before the first deployment, you’ll want to test the Docker image of your application. Just like running telepresence --run-shell starts a new shell, using --docker-run will give you a local Docker container which has its environment proxied into the Kubernetes cluster:

docker build . -t shopweb

telepresence --docker-run --rm -it \
  -p 3000:3000 \
  -v $(pwd):/path/to/workdir \
  shopweb \
  bash

Note how there is no need to push the image to a remote registry. As long as the local Telepresence container starts successfully, you’ll be able to try out the image as if it was in the cluster.

Running a container changes the proxy method from vpn-tcp to container.

Testing things in your cluster

With Telepresence in your toolbelt, there is no need to run commands such as this one:

kubectl run -it --rm --restart=Never --image=pstauffer/curl test

Instead, you may just as well start a new Telepresence shell and enjoy the availability of all the tools you have on your local machine:

telepresence --run bash

You can also run Postman to query your services directly using their internal domain names:

telepresence --run postman

# on macOS

telepresence --run open /Applications/Postman.app

Swapping Deployments

The shopping application has grown and now consists of three components: a web client responsible for serving web pages, a GraphQL API acting as the interface to the back-end services, and a Redis session store. To manage deployment and configuration of the application, releases are now done via Helm. The Helm chart manages the configuration for connectivity between services within the chart. For example, when Alice installs a Helm release named alice-shopweb, its Redis store will, by convention, be named alice-shopweb-redis. The Redis hostname is put in the env section of both the web and GraphQL pod specifications.
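As a rough sketch, the rendered pod spec for the web client might contain an env entry along these lines (the variable name REDIS_HOST is hypothetical, not taken from the actual chart):

```yaml
# Illustrative fragment of the web client's Deployment, assuming
# Alice's release name alice-shopweb. REDIS_HOST is a hypothetical
# variable name chosen for this sketch.
env:
  - name: REDIS_HOST
    value: alice-shopweb-redis
```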

Let’s consider local development with the setup above. A traditional approach would be to run Docker Compose on the host network, containing the GraphQL server, web server and Redis cache. Then, either the dependencies on back-end services would be mocked, or the containers would use the host network to gain access to port-forwards into Kubernetes.

The local development situation with Docker Compose would look something like this: