In this blog post I want to tell you a story about developers’ everyday problems and challenges when working on microservices, and how the right tool can save you a lot of time. And money.

Imagine you are a developer working on some microservice — let’s call it users. The service is quite simple and you quickly wrote the MVP. It’s time to run it together with the other microservices to test whether your service can communicate with them. OK, let’s set up an environment.

First approach — docker-compose

Every microservice is packed into a Docker image, so the natural choice is docker-compose. Piece of cake. Your application communicates with auth, balances and transactions, so you contact the other teams to ask them how to run those services. Unfortunately, the team working on auth is extremely busy today and the guy who knows everything about transactions is on holiday. Great. You’re blocked. OK, let’s refactor something…

The next day you know everything: balances needs a MySQL database, transactions needs Cassandra. Moreover, balances communicates with transactions using Kafka. Ah, one more thing: transactions is pretty resource demanding and needs a minimum of 4GB of memory. Have I mentioned Kafka needs ZooKeeper to run?
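The compose file you end up sketching looks something like this (the image names and versions are made up for illustration, and a real Kafka-in-Docker setup needs considerably more configuration than shown here):

```yaml
version: "3"
services:
  users:
    build: .                            # your MVP, built locally
    depends_on: [auth, balances, transactions]
  auth:
    image: example/auth:latest          # hypothetical image name
  balances:
    image: example/balances:latest      # hypothetical image name
    depends_on: [mysql, kafka]
  transactions:
    image: example/transactions:latest  # hypothetical image name
    depends_on: [cassandra, kafka]
    mem_limit: 4g                       # transactions needs 4GB minimum
  mysql:
    image: mysql:5.7
  cassandra:
    image: cassandra:3
  kafka:
    image: wurstmeister/kafka           # Kafka in Docker is fiddly to configure
    depends_on: [zookeeper]
  zookeeper:
    image: zookeeper:3.5
```

Eight containers, two databases and a message broker, just to test one small service.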

So you have to run three services plus MySQL, Cassandra, Kafka and ZooKeeper. After some struggling with Kafka (it’s not so easy to configure Kafka to run in Docker) you eventually start your setup, and… well, you didn’t know you had so many fans in your laptop. The noise is remarkable, and when you start your IDE your laptop becomes unresponsive. OK, so this is probably not the way to go.

Second approach — port forwarding

Well, all the services and databases are already deployed on a Kubernetes cluster. Maybe you can just connect to the remote services using the port forwarding provided by Kubernetes?

So you fire up 6 terminal sessions (you also need access to the databases and Kafka to debug some things), reconfigure your application using complicated command line switches to use localhost instead of the k8s service names, and it works. Well, almost works, because Kafka needs some extra configuration to work this way (advertised.listeners has to be changed). But it’s much better: quiet, and your laptop works again (you were already thinking about buying a new 32GB MBP, but when you checked the price you had to breathe into a paper bag for a while).
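Those six sessions amount to something like the following (the service names and ports are the hypothetical ones from this story, and each command blocks, so each needs its own terminal tab):

```shell
kubectl port-forward svc/auth 9300:9300
kubectl port-forward svc/balances 8081:8081
kubectl port-forward svc/transactions 8082:8082
kubectl port-forward svc/mysql 3306:3306
kubectl port-forward svc/cassandra 9042:9042
kubectl port-forward svc/kafka 9092:9092
```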

But this approach is complicated and inconvenient. It’s easy to forget about one of the port forwards or misconfigure your application to use the custom endpoints.

And then you find Telepresence

You start thinking: am I the first one who has this problem? There should be a better solution for this.

Actually, there is.

You start googling, and find Telepresence.io. What is it? According to documentation:

Telepresence is an open source tool that lets you run a single service locally, while connecting that service to a remote Kubernetes cluster. This lets developers working on multi-service applications to:

- Do fast local development of a single service, even if that service depends on other services in your cluster. Make a change to your service, save, and you can immediately see the new service in action.

- Use any tool installed locally to test/debug/edit your service. For example, you can use a debugger or IDE!

- Make your local development machine operate as if it’s part of your Kubernetes cluster. If you’ve got an application on your machine that you want to run against a service in the cluster — it’s easy to do.

Sounds promising! Let’s try it.

Installation

It’s very easy — on Mac or Linux it’s 2 lines in the terminal. Windows is currently not supported.

On Mac:

brew cask install osxfuse

brew install socat datawire/blackbird/telepresence

On Linux it depends on the distribution; detailed instructions are here.

The package manager will install all dependencies.

Trying it out

Telepresence proxies traffic from your local machine to the remote k8s cluster. You can access Kubernetes services directly — in fact your laptop (or the Docker containers running on it, depending on the proxying method) behaves like a part of the cluster. Let’s try it out:



telepresence
T: Invoking sudo. Please enter your sudo password.
Password:
T: Starting proxy with method 'vpn-tcp', which has the following limitations: All processes are affected,
T: only one telepresence can run per machine, and you can't use other VPNs. You may need to add cloud
T: hosts and headless services with --also-proxy. For a full list of method limitations see
T: https://telepresence.io/reference/methods.html
T: Volumes are rooted at $TELEPRESENCE_ROOT. See https://telepresence.io/howto/volumes.html for details.
T: No traffic is being forwarded from the remote Deployment to your local machine. You can use the
T: --expose option to specify which ports you want to forward.
T: Guessing that Services IP range is 10.15.240.0/20. Services started after this point will be
T: inaccessible if are outside this range; restart telepresence if you can't access a new Service.
test_cluster|bash-3.2$

Telepresence supports several proxying methods; vpn-tcp is the default one. It affects all processes running on the local machine, so you can access services running on the cluster straight from your shell.
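While the vpn-tcp proxy is up, cluster DNS names resolve locally, so you can reach a service by its k8s name (auth, port 9300 and the /health endpoint are the hypothetical names from this story):

```shell
# Cluster service names work from the local shell as if you were a pod:
curl http://auth:9300/health
```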

Telepresence also spawns a bash shell where the k8s environment variables are set, for example:

AUTH_SERVICE_PORT=9300

AUTH_PORT_9300_TCP_PROTO=tcp

This is useful if your service relies on those variables.
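A handy trick is to read them with a fallback, so the same command works both inside and outside the Telepresence shell. Kubernetes also injects AUTH_SERVICE_HOST alongside the port variables shown above; the defaults below are made up for illustration:

```shell
# Fall back to localhost defaults when the k8s variables are absent:
echo "auth is at ${AUTH_SERVICE_HOST:-localhost}:${AUTH_SERVICE_PORT:-9300}"
```

Inside the Telepresence shell this prints the cluster address; outside it, the local defaults.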

Proxying traffic from cluster to local machine

You may ask “Ok, I can access services running on k8s. But can these services access my locally running application?” Yes, they can.

You can run Telepresence with the --expose option followed by a port number, and traffic from k8s to this port will be forwarded to your local machine. Internally, Telepresence creates a deployment and a corresponding service. You can set the deployment and service name with the --new-deployment NAME option; if you don’t set the name, a random one is used (which is rather useless). The traffic goes to this service, then to the pod created by the Telepresence deployment, and then to your machine. For example:

telepresence --expose 8080 --new-deployment example \
  --run python3 -m http.server 8080

BEWARE! Trying this yourself will expose your local directory to the Kubernetes cluster. Don’t do it if you are not 100% sure what you’re doing!

This runs a new deployment on k8s named example and a local Python process serving a simple HTTP server. Let’s check if it works:



kubectl run -i -t ubuntu --image ubuntu -- bash
root@ubuntu-8fb7b556f-jgx7n:/# apt update && apt install -y curl
[cut]
root@ubuntu-8fb7b556f-jgx7n:/# curl http://example:8080
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<title>Directory listing for /</title>
</head>
<body>
<h1>Directory listing for /</h1>
<hr>
<ul>
[cut]

Debugging local application

One of the most common use cases is debugging the application running in k8s cluster.

Telepresence can “fake” the remote deployment with its own one, making all the services on k8s “talk” to your local process:

telepresence --swap-deployment users

This scales the original users deployment down to 0 replicas and creates a new one with the same labels, so the service forwards its traffic to your local application.
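In practice you combine the swap with --expose and --run, so the redirected cluster traffic lands on a locally started instance of your service (the port and the start command below are hypothetical):

```shell
# Swap out the remote `users` deployment and run the local build in its place:
telepresence --swap-deployment users --expose 8080 \
  --run python3 users.py
```

When you stop Telepresence, the original deployment is scaled back up and the cluster returns to its previous state.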

There are also tutorials in the Telepresence documentation on how to debug a Java application or how to use Telepresence with IntelliJ.

Summary

Telepresence can proxy traffic from your machine to the Kubernetes cluster and vice versa. It’s a great tool which can speed up developing microservices running on a Kubernetes cluster. It allows you to run applications locally as if they were part of the cluster. Moreover, there is no need to use external tools like docker-compose, which create an unrealistic environment.

Plus, you don’t need to buy a new laptop! ;-)

This is not a complete Telepresence tutorial. The project has excellent documentation with many tutorials and use cases; it’s worth reading it and playing a bit with the tool. I’m pretty sure you’ll love it.