Hello all,

In my previous articles, I presented our vision of the ideal microservices stack, and why we were building it.

Today, I am proud to announce that a working prototype is available at https://empower.sh. This article is illustrated with screenshots from our demo platform, running on Kubernetes. You can follow this tutorial to install the prototype on your own Kubernetes cluster.

You can find the code of the prototype in the example repository. This example project is also the starting point for using the stack: you can fork the repository and adapt it to create your own architecture.

The example repository will deploy nine services:

- The product CRUD service and its database, serving a product model

- The customer CRUD service and its database, serving a customer model

- The invoicing CRUD service and its database, serving invoice and invoice line models

- The login service and its database, serving account and role models. It also serves the login web page and acts as the Hydra consent application.

- The GraphQL API gateway, which aggregates the APIs of all services and exposes them to the Internet.

- The admin service, which will serve the Admin GUI web application, connected to the gateway.

- The Hydra service, which will perform the authentication checks

- The NATS service, which will manage the asynchronous messaging in your backend

- The Jaeger service, which will implement distributed tracing

The whole CI/CD pipeline is described in the .gitlab-ci.yml file.

After linking the GitLab repository to our Kubernetes cluster (which you can now get from almost any cloud provider), we can trigger a new pipeline.

This default pipeline has essentially three steps:

The build steps will build the Admin GUI image and the Golang services into a go-factory image, and store them in the GitLab registry.

The test steps will execute the test suite and code linter on the images, and will block the deployment if any error arises.

Finally, the deployment step will deploy the images on Kubernetes, and update any missing configuration.
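
To make the three steps concrete, here is an illustrative sketch of what such a pipeline can look like. The stage names, job names, and scripts below are assumptions for illustration; the real definitions live in the repository's .gitlab-ci.yml.

```yaml
# Illustrative shape only -- see the repository's .gitlab-ci.yml for the
# actual stages, job names, and scripts.
stages:
  - build
  - test
  - deploy

build-services:
  stage: build
  script:
    - docker build -t $CI_REGISTRY_IMAGE/go-factory .
    - docker push $CI_REGISTRY_IMAGE/go-factory

test-services:
  stage: test
  script:
    - go vet ./...
    - go test ./...

deploy-production:
  stage: deploy
  script:
    - kubectl apply -f ./kubernetes/
```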

All Kubernetes files are in the ./kubernetes folder. With these files and the .gitlab-ci.yml, you keep complete control of what you are deploying on your cluster.

You can connect to your Kubernetes Dashboard. A new namespace should have been created, here example-13078944-production. You can configure your repository to deploy your dev branches to another namespace, or even to a completely different Kubernetes cluster.

If you check the deployments in this namespace, you will see all the services that were deployed.

Their databases are deployed as StatefulSets. Their data will be stored on block storage volumes at your cloud provider.

In the Ingress resources, you can see all services exposed to the Internet.

All these URLs are automatically served over HTTPS, thanks to Let's Encrypt.
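
As a sketch of how such an exposed, TLS-terminated service can be declared, here is a minimal Ingress resource. It assumes cert-manager with a Let's Encrypt cluster issuer; the hostname, issuer name, and service name are hypothetical, and the actual manifests are in the ./kubernetes folder.

```yaml
# Hypothetical example, assuming cert-manager handles the Let's Encrypt
# certificates; real manifests are in ./kubernetes.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gateway
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-production
spec:
  tls:
    - hosts:
        - gateway.example.com
      secretName: gateway-tls
  rules:
    - host: gateway.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: gateway
                port:
                  number: 80
```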

Finally, let’s see what our stack is capable of. First, let’s connect to our Admin GUI.

We are automatically redirected to the login lock screen. If we take any action here, the Admin GUI will redirect us to the Hydra service, which will in turn redirect us to the Login service for global authentication.

On a successful login, a lot of things happen behind the scenes:

The Login service will contact the Hydra service to confirm that access was granted.

Hydra will issue a new token and redirect you to the Admin GUI.

The Admin GUI will detect the token, and try it on the GraphQL API Gateway.

Finally, if the token is valid, the menu will open on your resources.
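
The token check against Hydra in the steps above can be sketched as follows. This is a stdlib-only illustration, not the stack's actual gateway code: the response shape is modeled on Hydra's OAuth2 token introspection endpoint, and `newFakeHydra` is a test double standing in for a real Hydra instance.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
	"net/url"
)

// introspectionResult mirrors the fields we care about in an OAuth2
// token introspection response (as served by Hydra).
type introspectionResult struct {
	Active  bool   `json:"active"`
	Subject string `json:"sub"`
}

// introspect POSTs the token to the introspection endpoint, as the
// gateway might do to check a token's validity.
func introspect(endpoint, token string) (introspectionResult, error) {
	var res introspectionResult
	resp, err := http.PostForm(endpoint, url.Values{"token": {token}})
	if err != nil {
		return res, err
	}
	defer resp.Body.Close()
	err = json.NewDecoder(resp.Body).Decode(&res)
	return res, err
}

// newFakeHydra returns a test server that accepts exactly one token.
// It is a stand-in for a real Hydra instance.
func newFakeHydra(validToken string) *httptest.Server {
	return httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		r.ParseForm()
		active := r.FormValue("token") == validToken
		json.NewEncoder(w).Encode(introspectionResult{Active: active, Subject: "user-uuid-1234"})
	}))
}

func main() {
	hydra := newFakeHydra("good-token")
	defer hydra.Close()
	res, _ := introspect(hydra.URL+"/oauth2/introspect", "good-token")
	fmt.Println("active:", res.Active) // active: true
}
```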

You can see that the Customers/Products/Invoices/Invoice Lines resources all have their menu entries here. The same goes for the Accounts and Roles resources, because the data of your Login service itself is accessible through the API Gateway.

Right now, our customers list is empty. Let’s create one.

On save, many things happen again:

The Admin GUI will send the customer creation request to the GraphQL API gateway, with the authentication token.

The API gateway will check the token against the Hydra service to verify its validity.

If successful, it will get the UUID of the user. It will also query the Login service for any roles associated with this user. The roles and user UUID are then inserted into the request header as a signed JWT.
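
As a rough illustration of what signing and verifying such a token involves, here is a minimal, stdlib-only sketch of an HMAC-signed compact token carrying the user UUID and roles. The real stack presumably uses a proper JWT library; the key name and claims here are hypothetical.

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/base64"
	"encoding/json"
	"fmt"
	"strings"
)

// sign builds a compact JWT-style token:
// base64url(header).base64url(claims).base64url(HMAC-SHA256 signature).
func sign(claims map[string]any, key []byte) string {
	enc := base64.RawURLEncoding
	header, _ := json.Marshal(map[string]string{"alg": "HS256", "typ": "JWT"})
	payload, _ := json.Marshal(claims)
	signingInput := enc.EncodeToString(header) + "." + enc.EncodeToString(payload)
	mac := hmac.New(sha256.New, key)
	mac.Write([]byte(signingInput))
	return signingInput + "." + enc.EncodeToString(mac.Sum(nil))
}

// verify recomputes the signature over the first two segments and
// compares it to the third in constant time.
func verify(token string, key []byte) bool {
	parts := strings.Split(token, ".")
	if len(parts) != 3 {
		return false
	}
	mac := hmac.New(sha256.New, key)
	mac.Write([]byte(parts[0] + "." + parts[1]))
	expected := base64.RawURLEncoding.EncodeToString(mac.Sum(nil))
	return hmac.Equal([]byte(expected), []byte(parts[2]))
}

func main() {
	key := []byte("gateway-signing-key") // hypothetical shared secret
	token := sign(map[string]any{"sub": "user-uuid-1234", "roles": []string{"admin"}}, key)
	fmt.Println("token valid:", verify(token, key)) // token valid: true
}
```

A backend service receiving the request would run the same `verify` step with the shared key before trusting the roles in the claims.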

The gateway will then forward the request to the service managing CRUD operations on the customer resource, the customer service. Communication between the gateway and the backend services is done over gRPC.

The customer service will receive the request, verify the JWT signature, and also verify that the user has the role needed to perform the operation.

Finally, the customer service will create the customer in its database, and return the customer data, with its new UUID, to the gateway and Admin GUI.

All these operations can be seen on the Jaeger service, through a distributed trace.

Distributed tracing is essential for monitoring your architecture. It is the only way to see the end-to-end life of a request, and to pinpoint where errors occur.

We can now see our new customer in the Admin GUI.

Let’s now create an invoice.

You will notice the many-to-one field referencing a customer. When we insert our invoice, the invoicing service will contact the customer service to check that the customer UUID exists, and will return an error otherwise.

Let’s now create a product.

And now an invoice line, which has invoiceUUID and productUUID fields.

Things are a little different here. The product resource is configured to also be stored in the invoicing service, which means the invoicing service has its own product table.

This is where NATS comes into play. Each time a service performs a CRUD operation, an event is fired at NATS. Each event is also stored in the service's database, keeping a full log of all operations.

This is what happened when we created our product: the product service fired a product creation event at NATS. The invoicing service, which subscribes to such events, received the event and updated its product table accordingly.
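
The publish/subscribe mechanics can be sketched with a minimal in-memory bus, standing in for NATS (no real NATS client involved; the event type and field names are hypothetical):

```go
package main

import "fmt"

// productCreated is the event the product service fires when a product is created.
type productCreated struct {
	UUID, Name string
}

// bus is a minimal in-memory stand-in for NATS: subscribers register handlers,
// and publish fans each event out to all of them.
type bus struct {
	handlers []func(productCreated)
}

func (b *bus) subscribe(h func(productCreated)) { b.handlers = append(b.handlers, h) }

func (b *bus) publish(e productCreated) {
	for _, h := range b.handlers {
		h(e)
	}
}

func main() {
	nats := &bus{}

	// The invoicing service keeps its own product table, updated from events.
	invoicingProducts := map[string]string{}
	nats.subscribe(func(e productCreated) { invoicingProducts[e.UUID] = e.Name })

	// The product service creates a product and fires the event.
	nats.publish(productCreated{UUID: "p-1", Name: "Widget"})

	fmt.Println(invoicingProducts["p-1"]) // Widget
}
```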

Finally, when we created our invoice line, the invoicing service simply checked whether the product UUID existed in its own product table, and allowed the creation on that basis. We don’t consider this process foolproof: if the product UUID is not found locally, a check request is made to the product service before an error is triggered.
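
The local-check-with-fallback logic reads roughly like this sketch (again not the stack's actual code; the remote call is modeled as a plain function where the real stack would use gRPC):

```go
package main

import "fmt"

// productExists first checks the invoicing service's local product table, and
// only on a miss falls back to asking the product service directly.
func productExists(uuid string, localTable map[string]bool, askProductService func(string) bool) bool {
	if localTable[uuid] {
		return true
	}
	return askProductService(uuid)
}

func main() {
	local := map[string]bool{"p-1": true}
	// p-2 exists upstream but its creation event has not been processed yet.
	remote := func(uuid string) bool { return uuid == "p-2" }

	fmt.Println(productExists("p-1", local, remote)) // true, served from the local table
	fmt.Println(productExists("p-2", local, remote)) // true, confirmed by the product service
	fmt.Println(productExists("p-9", local, remote)) // false, the creation would be rejected
}
```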

And that’s it. With this, you already have a good overview of what the Empower Stack is capable of, especially regarding its CRUD capabilities. Of course, you can also create your own custom functions to implement the logic your organization needs in your backend.

In this article, I presented the result you get when you install the prototype on Kubernetes, but for an even easier start a docker-compose file is available, so you can get the prototype working on your local computer in just a few minutes. Check the installation documentation to learn more.

The Empower Stack is still an early-alpha project, so please be careful when using it. If you like what you see here, I kindly invite you to join the community on our mailing list at https://www.freelists.org/list/empower-stack, and to check out our website https://empower.sh to learn more about the project. The website also provides more information about the global architecture and principles behind the stack.

Thank you for your attention.