Backend Challenge

The core challenge is scaling the services that integrate with both the Big Data and Blockchain environments. It is vital that the platform as a whole, including the communication among all of these environments, experiences minimal downtime. For this reason, a microservice architecture is needed.

The backend stack is formed by PostgreSQL, Cassandra and Elasticsearch databases. Because the Blockchain stores every transaction generated in its environment, and querying it directly would penalise the user experience, these databases are used as a cache layer: they store all operations, both from the Blockchain and from other application sources, making the user experience faster and smoother.
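The cache-layer idea above can be sketched with the classic cache-aside pattern. This is a minimal, hypothetical illustration: the class and method names (`TxCache`, `findTransaction`) are ours, an in-memory map stands in for the PostgreSQL/Cassandra/Elasticsearch stores, and a function stands in for the slow blockchain query.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Illustrative cache-aside sketch (not the platform's actual code):
// fast databases sit in front of the slow blockchain lookup.
class TxCache {
    private final Map<String, String> cache = new HashMap<>(); // stands in for the database layer
    private final Function<String, String> chainLookup;        // stands in for a blockchain query

    TxCache(Function<String, String> chainLookup) {
        this.chainLookup = chainLookup;
    }

    // Serve from the cache when possible; otherwise fall back to the
    // chain and remember the result so the next read is fast.
    String findTransaction(String txHash) {
        return cache.computeIfAbsent(txHash, chainLookup);
    }

    boolean isCached(String txHash) {
        return cache.containsKey(txHash);
    }
}
```

The same shape applies whichever store backs the cache; only the read/write adapters change.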

Spring Boot was selected to develop the backend for a number of reasons:

Ease of learning: there is a great deal of public documentation about the framework.

Numerous integrations: the Spring ecosystem offers many modules for integrating different technologies.

Excellent reactive support with Spring 5: our architecture is hexagonal and is prepared for this change.

For deployments, the technologies employed are Docker with Kubernetes, supported by a Jenkins pipeline that automates all the necessary steps. We selected Docker because it is the most widely adopted container technology. We chose Kubernetes for the following reasons:

It supports multiple cloud service providers and on-premises data centres. Additionally, it can connect several Kubernetes clusters together.

It provides self-healing and health-check capabilities for the deployed software.

It can automatically scale the software in scenarios of large-scale demand.
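The health-check and auto-scaling points above map onto standard Kubernetes objects. The manifest below is purely illustrative: the deployment name, image, replica counts and CPU threshold are assumptions, not the platform's real configuration (the `/actuator/health` path is the default Spring Boot Actuator health endpoint).

```yaml
# Illustrative manifest, not the platform's actual configuration:
# a Deployment with a liveness probe (self-healing / health checks)
# plus a HorizontalPodAutoscaler for demand spikes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-api          # hypothetical service name
spec:
  replicas: 2
  selector:
    matchLabels: { app: backend-api }
  template:
    metadata:
      labels: { app: backend-api }
    spec:
      containers:
        - name: backend-api
          image: example/backend-api:1.0   # hypothetical image
          ports:
            - containerPort: 8080
          livenessProbe:                   # Kubernetes restarts the pod if this fails
            httpGet: { path: /actuator/health, port: 8080 }
            initialDelaySeconds: 30
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: backend-api
spec:
  scaleTargetRef: { apiVersion: apps/v1, kind: Deployment, name: backend-api }
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource: { name: cpu, target: { type: Utilization, averageUtilization: 70 } }
```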

Big Data Integration

The core of the big data environment is designed around Spark jobs, which are used for the following tasks:

Generation of PFM values from user data.

Forecast prediction and regeneration of these models.

Product matching based on the user's economic profile and real needs.

Credit-scoring calculation.

The backend does not contain the logic that decides when to initiate these tasks; the chosen architecture is event-driven. The backend sends every event on the platform to the big data environment using RabbitMQ.
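An event published this way needs a body and a routing key the listeners can bind to. The sketch below is hypothetical: the envelope fields and the `<domain>.<action>` routing-key convention are our assumptions, not the platform's real schema, and a real publisher would hand this payload to a RabbitMQ channel rather than just build the strings.

```java
// Hypothetical event envelope for publishing platform events to the
// big data environment over RabbitMQ (names and schema are illustrative).
class EventEnvelope {
    final String domain;   // e.g. "pfm", "scoring"
    final String action;   // e.g. "user-data-updated"
    final String payload;  // JSON body

    EventEnvelope(String domain, String action, String payload) {
        this.domain = domain;
        this.action = action;
        this.payload = payload;
    }

    // Routing key a topic exchange would use to steer the event to the
    // right Spark-job listener.
    String routingKey() {
        return domain + "." + action;
    }

    // Minimal hand-rolled serialisation; a real service would use a
    // JSON library such as Jackson.
    String toJson() {
        return String.format("{\"domain\":\"%s\",\"action\":\"%s\",\"payload\":%s}",
                domain, action, payload);
    }
}
```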

The platform has many big data listeners, built with the same microservice architecture, that ingest these events and manage the launch of the Spark jobs. Since these executions are fire-and-forget, notifications tell us if anything went wrong.

Why not Kafka?

RabbitMQ supports several standardised protocols (AMQP, MQTT and STOMP, among others).

RabbitMQ supports complex routing scenarios out of the box, via exchanges and binding keys.
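The "complex routing" point refers to RabbitMQ's topic exchanges, where a binding key may use `*` (exactly one word) and `#` (zero or more words) against dot-separated routing keys. As a self-contained illustration of those matching rules (this re-implements the semantics for demonstration; the broker does this itself):

```java
// Demonstrates RabbitMQ topic-exchange matching semantics:
// '*' matches exactly one dot-separated word, '#' matches zero or more.
class TopicMatcher {
    static boolean matches(String bindingKey, String routingKey) {
        return match(bindingKey.split("\\."), 0, routingKey.split("\\."), 0);
    }

    private static boolean match(String[] b, int i, String[] r, int j) {
        if (i == b.length) return j == r.length;   // binding exhausted: must consume all words
        if (b[i].equals("#")) {                    // '#': try absorbing zero or more words
            for (int k = j; k <= r.length; k++)
                if (match(b, i + 1, r, k)) return true;
            return false;
        }
        if (j == r.length) return false;           // routing key exhausted too early
        if (b[i].equals("*") || b[i].equals(r[j])) // '*' or an exact word match
            return match(b, i + 1, r, j + 1);
        return false;
    }
}
```

With bindings like `pfm.*` or `#.scoring`, each big data listener subscribes only to the event families it cares about, which is harder to express with Kafka's plain topic subscription.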

Blockchain Integration

Public

Transactional information lives on the blockchain. We use the Ethereum platform, and we connect to this environment through Infura.
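Providers such as Infura expose Ethereum's standard JSON-RPC interface over HTTPS. As a sketch of what a read-only contract query looks like at that level, the helper below builds an `eth_call` request body; the contract address and call data in the test are placeholders, and a real client would POST this body to the provider's endpoint (typically via a library such as web3j rather than by hand).

```java
// Illustrative builder for an Ethereum JSON-RPC "eth_call" request body,
// as accepted by any Ethereum node or provider such as Infura.
class EthCallRequest {
    static String body(String to, String data, int id) {
        return String.format(
            "{\"jsonrpc\":\"2.0\",\"method\":\"eth_call\","
          + "\"params\":[{\"to\":\"%s\",\"data\":\"%s\"},\"latest\"],\"id\":%d}",
            to, data, id);
    }
}
```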

Private

We have deployed it in the Kubernetes environment, and we access the Ethereum smart contracts through a customised REST API that runs as another microservice. We defined a custom Ethereum node architecture to provide a network that is both fully scalable and recoverable in the event of a disaster.

All blockchain keys are stored in secret vaults and managed by the backend logic.

Summary

We use a number of different tools and techniques to optimise the backend for scalability.

Fundamental to our product is the user experience: it must be fluid and trouble-free.

Combining different architectural technologies in the design of this backend is necessary to ensure an optimal user experience. We have brought the available technologies together to deliver the best user experience without sacrificing scalability.