tunnel-tool: A dashboard and API to share services.

Easy and secure access to your team's services. By vicjicama

Introduction

This post is about a tool that helps developers have their services where they need them, in an easy and secure way. It is a dashboard that lets you control, share and review which services are available on your devices. The dashboard is useful for developers without SSH tunnel experience who want to share their services, and just as helpful if you have a lot of tunnel experience but need a tool to share multiple services across devices.

You can see the tool in action in the next video: I create an outlet from the banana-pi device and then create an inlet to my laptop. The goal of this tool is to save time and help a team of remote developers share their services easily with other team members, IOT devices, Kubernetes clusters or integration test environments, in a secure and intuitive way.

Features

Here are the highlight features compared with other alternatives.

The tool is completely self-hosted; you don't need to register or get a token.

Share services from/to only the target devices, without exposing the service on the exit node.

You can have the same port multiple times on your local machine. (Super useful if you want to have the same hostname:port across the team devices or test environments.)

A GUI to control and review the state of the devices, their ports and IPs.

A GraphQL API to query and control everything that you see/control on the UI, for easy automation and scripting.

Awareness of multiple devices, which makes collaboration easier. A device can be your laptop, a Raspberry Pi, a Kubernetes deployment, a node, etc.

Save and control (Start/Stop) multiple endpoints per device.

Split connection loads between multiple sshd instances/exit nodes to avoid slowness and instability. (For example, to copy big files between devices, share a docker registry or stream media.)

How does it work?

The tool is just a helper for something that you might already be doing to share your services: an ssh -R port-forward on the remote plus an ssh -L port-forward to listen. The tool consists of two parts: one for the server that runs on the exit node, and one for the client/dashboard that is executed on the target devices. If you want to share your services with the public, you might be doing something like: ssh -R port-forward on the remote plus a proxy.

The tool manages and controls multiple endpoints that follow these steps for every connection:

Forward a local service port to the exit node as an "outlet" using an SSH -R connection.

To access the service, a port-forward is done from the exit node to the target receptor as an "inlet" using an SSH -L connection.

You can also expose the service on the exit node directly using the reverse proxy of your preference (if that is what you need).
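As a sketch, the outlet/inlet steps above map to the plain SSH commands that the tool automates for you; the service port, forwarded port, user name and exit-node hostname below are placeholder assumptions, not values produced by the tool:

```shell
# Outlet (run on the device that owns the service): publish local port 6379
# on the exit node as port 7099 with a reverse forward (ssh -R).
# -N skips remote command execution; -p 25000 is the tool's default SSH port.
ssh -N -p 25000 -R 7099:localhost:6379 tunnel@exit-node.example.com

# Inlet (run on the device that consumes the service): bring the exit node's
# port 7099 back to local port 6379 with a local forward (ssh -L).
ssh -N -p 25000 -L 6379:localhost:7099 tunnel@exit-node.example.com
```

The exit node only relays traffic between the two forwards; the service itself is never bound to a public interface on it.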

The tool uses containers to keep your current SSH configuration and ports separate from those of the server and the clients. The containers allow us to have the same port on a client device; for example, you can access www.device.local:6379 and www.another-device.local:6379 on the same device.
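For example, name-based access to the same port could be wired up with host entries like the ones below; the hostnames and container IPs are illustrative assumptions, not output of the tool:

```
# /etc/hosts: each device name resolves to its own container IP,
# so port 6379 can be reused per hostname on the same machine.
172.18.0.2   www.device.local
172.18.0.3   www.another-device.local
```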

Some additional considerations for the connections and executions are:

Never use the root user, execute as root or run privileged executions.

Use a non-standard SSH port by default (25000), but allow any port to be configured.

Disable password auth for SSH.

Independent SSH/SSHD services and configuration for the server and target devices.

Disable command execution on the port-forward connections.
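A hand-written setup reflecting the same hardening might look like the fragment below. This is an illustrative sshd_config/authorized_keys sketch, not the configuration the tool actually generates:

```
# sshd_config (exit node): non-standard port, no root login, key-only auth.
Port 25000
PermitRootLogin no
PasswordAuthentication no

# authorized_keys entry for a tunnel-only key: "restrict" disables PTY
# allocation and command execution, "port-forwarding" re-enables the
# forwarding needed for outlets/inlets. The key material is a placeholder.
restrict,port-forwarding ssh-ed25519 AAAA... device-key
```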

GraphQL API

The UI in the examples is just a way to present and control the underlying API for the devices, outlets and inlets. Here is an example query to get the list of devices and their outlets/inlets, followed by examples of how a connection is started and stopped.

query List {
  viewer {
    devices {
      list {
        deviceid
        outlets {
          list {
            outletid
            src { host port }
            state {
              status
              worker { workerid ip port }
            }
          }
        }
        inlets {
          list {
            inletid
            dest { host port }
            state { status }
          }
        }
      }
    }
  }
}

mutation Start {
  viewer {
    devices {
      device(deviceid: "banana-pi") {
        inlets {
          inlet(inletid: "vicjicama-lap.local:7099") {
            state {
              start { deviceid }
            }
          }
        }
      }
    }
  }
}

mutation Stop {
  viewer {
    devices {
      device(deviceid: "banana-pi") {
        inlets {
          inlet(inletid: "vicjicama-lap.local:7099") {
            state {
              stop { deviceid }
            }
          }
        }
      }
    }
  }
}
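For scripting, a query like the one above can be sent as a standard GraphQL HTTP request; the endpoint URL and port below are assumptions for illustration, so check your deployment for the actual GraphQL address:

```shell
# Hypothetical endpoint; replace with your dashboard's GraphQL URL.
curl -s -X POST "http://localhost:8000/graphql" \
  -H "Content-Type: application/json" \
  -d '{"query": "query List { viewer { devices { list { deviceid } } } }"}'
```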

Getting Started

You need to have nodeJs, docker-compose and docker installed on the server and on the client devices that have control access to the server. (For pure edge devices like a raspberry, all you need is nodeJs, and there are no additional requirements in the case of a kubernetes deployment edge device.)

nodeJs v10.13

docker v19.03

docker-compose v1.20

Server exit node

For the server side you only need to execute the startup script and open the port that you selected for the sshd service; 25000 is the default. (You can change this to 80 or 443, for example.)

In our example we are going to use an EC2 instance that can be reached at tunnels.repoflow.com

cd ~/server   # Use the path of your preference
curl -s "https://raw.githubusercontent.com/vicjicaman/tunnel-server/master/start.sh" > start.sh
bash start.sh

After you execute the script you will see the keys folder printed on the console; in this example the folder is /home/gn5/server/workspace/keys. We will need it to add the client devices' public keys.

You can start both the server and a client on your localhost in case you are working offline with a network of local IOT devices, Minikube or a bare-metal cluster in your network; just use the network IP.

Local client device

Once you have the server up and running, you need to initialize the client device. First, get the startup script for the client:

cd ~/local   # Use the path of your preference
curl -s "https://raw.githubusercontent.com/vicjicaman/tunnel-local/master/start.sh" > start.sh

To initialize the device, run the script with just the deviceid argument: bash start.sh DEVICEID

bash start.sh vicjicama-lap

This will create a key file that we need to copy to the keys folder on the server. For this particular example we need to copy the file /home/victor/local/workspace/keys/vicjicama-lap/vicjicama-lap.json to the folder /home/gn5/server/workspace/keys on the server.
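One way to copy the key file is scp; the paths come from this example, while the remote user and the use of the server hostname are placeholder assumptions for your own setup:

```shell
# Copy the client device's public key file into the server's keys folder.
# "gn5" and the server hostname are placeholders; use your own user/host.
scp /home/victor/local/workspace/keys/vicjicama-lap/vicjicama-lap.json \
    gn5@tunnels.repoflow.com:/home/gn5/server/workspace/keys/
```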

Once you have the key file in place, you need to run the start script again, but now with the target server hostname or IP, like this: bash start.sh DEVICEID HOSTNAME|IP

bash start.sh vicjicama-lap tunnels.repoflow.com

In our example we are using the server name tunnels.repoflow.com, but you can use the public IP as well.

If you installed the server on your localhost, make sure that you use your network IP instead of tunnels.repoflow.com

We are going to repeat the process for another client device named kube-node; after starting the server and both client devices, the UI will look like the next screen.

The next video shows the process of adding an outlet from kube-node to vicjicama-lap; for this example we are forwarding a React app port.

If you add more devices, outlets and inlets, your dashboard will look something like the next screen. A general dashboard with all this information is very useful once you start having multiple services across multiple environments and multiple developers.

Use cases

Here is a list of some useful service-forwarding scenarios that we use:

Easy bulk scripts to access and control IOT devices.

Share a single environment of services between multiple team members, especially useful for integration tests.

Kubernetes development based on forwarded services. (More details here: link)

Dedicated tunnels to sync a development docker registry across nodes and remote developers.

Inject locally patched diagnostic services or debugger-attached services into integration test environments.

Sharing services with remote developers out of the office, or private showcases to customers.

I will write more about related use cases and features, like pure edge containers, integration with Kubernetes and integration with a microservices workflow.

Support and development

This version of the tool is free. The development and maintenance of the tool are funded by dedicated support, enterprise/custom features, managed exit nodes and other services.

Conclusion

Thanks for reading, I hope that you liked the presentation of the tool and that you give it a try! I am sure that it will help you save time while working with multiple services, their versions and environments.

The idea and motivation behind this tool come from the feedback of linker-tool users. The linker-tool is heavily integrated with the Kubernetes API and services, and many of its users asked me whether it could be used standalone without needing a Kubernetes cluster, without user tokens, with easier public shared ports, with pure edge clients, and more. The tunnel-tool is the result of that feedback and those ideas; we will be moving all those improvements to the linker-tool as well, stay tuned!

If you have any feedback, questions or use cases to discuss, or if you just want to reach out, don't hesitate to contact me; my email is vic@repoflow.com