Let’s take a look at the minimum requirements for web developers in the 2000s. In the good old days, you could build a more or less working website with a backend programming language (e.g. PHP, Python, Ruby), some JavaScript with jQuery, and the ability to turn a PSD into an HTML layout. That was all. Code was deployed mostly over FTP. CI/CD, pipelines, DevOps, SRE, and other now-mainstream concepts were not on the list of minimum requirements for web developers.

Yet technologies change rapidly. The number of users grows every day, so we faced scaling issues, and almost every popular web service nowadays has become distributed. CQRS, Event Sourcing, SOA, and the CAP theorem became must-know topics for a developer position. The minimum requirements have risen for backend developers as well.

Nowadays web developers have to care about mobile phones too. Responsive and adaptive layouts and SPAs were introduced and quickly became trends in web development for the following years. Today, developers mostly split into two camps: frontend and backend. Web development has become more challenging for beginners, and the frontend part of a web project has turned into an independent project that integrates with a backend via an API.

For frontend and backend integration, as web developers, we have two basic options to create an HTTP API: RPC and REST.

What is an RPC API?

RPC stands for “Remote Procedure Call”. Calling a remote method looks almost the same as calling a function in JS/Python/PHP/Go or any other programming language: a method name and arguments. You have the freedom to name your methods however you like, which makes it a very convenient way to create APIs.
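As a minimal sketch (the method name `user.create` and the single-dispatch design are illustrative, not any real framework), an RPC-style API boils down to a map from method names to handlers:

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
)

// Request is a hypothetical RPC envelope: a method name plus arbitrary
// arguments, as it might arrive in the body of a single POST endpoint.
type Request struct {
	Method string          `json:"method"`
	Params json.RawMessage `json:"params"`
}

// The server side is just a map from method names to handlers --
// we are free to name methods whatever we like.
var handlers = map[string]func(json.RawMessage) (any, error){
	"user.create": func(p json.RawMessage) (any, error) {
		var in struct{ Name string }
		if err := json.Unmarshal(p, &in); err != nil {
			return nil, err
		}
		return map[string]string{"id": "1", "name": in.Name}, nil
	},
}

// Dispatch looks up the named procedure and calls it with its params.
func Dispatch(raw []byte) (any, error) {
	var req Request
	if err := json.Unmarshal(raw, &req); err != nil {
		return nil, err
	}
	h, ok := handlers[req.Method]
	if !ok {
		return nil, errors.New("unknown method: " + req.Method)
	}
	return h(req.Params)
}

func main() {
	res, err := Dispatch([]byte(`{"method":"user.create","params":{"name":"Ada"}}`))
	fmt.Println(res, err)
}
```

The URL structure doesn’t matter here; the method name in the payload decides everything, which is exactly the naming freedom RPC gives you.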

What is a REST API?

REST stands for “REpresentational State Transfer”. REST is about the representation of data in simple formats (mostly JSON), and it’s all about resources. We use HTTP methods for getting (GET), updating (POST/PUT/PATCH), and deleting resources (DELETE). Unfortunately, many developers have their own idea of what REST is, which leads to a lot of confusion and disagreement across teams and the development community in general. Yet some big players are trying to come up with a universal guide for REST API development. In my opinion, the Zalando REST API guidelines are the most comprehensive, detailed, and well-written guide on REST APIs that exists today.

A few years ago Facebook introduced GraphQL, “a query language for your APIs”. Google, in turn, came up with gRPC, which is similar to other RPC approaches or SOAP with some differences: it uses HTTP/2 as the transport protocol and Protocol Buffers for data serialization. Hence the choice of API implementation became even more challenging.
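For illustration, a gRPC API is defined in a `.proto` file, from which client and server stubs are generated; the transport over HTTP/2 is handled by gRPC itself. Service and message names below are hypothetical:

```proto
syntax = "proto3";

package events;

// Hypothetical service: one unary RPC a Core-like service could expose.
service EventService {
  rpc AddEvent (Event) returns (AddEventResponse);
}

message Event {
  int64  id      = 1;
  string payload = 2;
}

message AddEventResponse {
  bool accepted = 1;
}
```

Note that this reads like RPC, not REST: you name methods (`AddEvent`), not resources, and the serialization format is binary rather than JSON.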

Methods for microservice communication

In my previous article about REST vs. gRPC with the Go programming language, I explained how to use gRPC to build APIs for microservice communication and mobile apps. Now let’s talk about microservices. Nginx has a detailed guide on building communication between microservices, but in short, developers have two ways to implement it: synchronous and asynchronous. Synchronous communication is usually implemented with an HTTP API, and asynchronous communication with message queues; Kafka, NATS, or RabbitMQ would be a great choice for your project. Yet gRPC, unlike REST or GraphQL, can also be asynchronous, and we can use it to build simple async communication between our services. We can get by with it for a long time before we need to add a message queue to our system, even if we need guaranteed delivery between services.

Asynchronous communication using gRPC without MQ

In one of our projects, the development team had to provide guaranteed delivery between services, but adding an MQ would have been overkill. The project was about processing events, and the events had to be delivered in order.

We had two microservices:

The Core is simple: it has an external REST API for clients. We save all accepted events to a PostgreSQL database and pass them to the Processor; once an event is processed, we save the result.

The Processor is even simpler: for each event received from the Core, we process it with one of our algorithms.

In the first version of the service, communication between the microservices used a unidirectional gRPC stream. The Core had an AddEvent method that accepts events, writes them to Postgres, and passes them to a Go channel.

The Go channel feeds the events stream to the Processor. The Processor opens a stream to the Core; the Core first sends the data stored in Postgres and then the events arriving on the channel.
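The Core side of this pipeline can be sketched in plain Go; names are illustrative. An interface stands in for the generated gRPC stream (in real code, the generated `pb.Core_...Server` type), and the Postgres write is stubbed with a comment:

```go
package main

import "fmt"

// Event is the message passed between the services (simplified).
type Event struct {
	ID      int
	Payload string
}

// EventStream stands in for the server side of a gRPC stream.
type EventStream interface {
	Send(Event) error
}

// Core holds the channel that connects AddEvent to the stream loop.
type Core struct {
	events chan Event
}

// AddEvent is what the REST handler calls: persist the event,
// then hand it to the channel. Persistence is stubbed out here.
func (c *Core) AddEvent(e Event) {
	// INSERT INTO events ... (omitted: write to PostgreSQL)
	c.events <- e
}

// Events is the loop the Core runs once the Processor opens the
// stream: it drains the channel into the stream, in order.
func (c *Core) Events(stream EventStream) error {
	for e := range c.events {
		if err := stream.Send(e); err != nil {
			return err
		}
	}
	return nil
}

// collector is a toy stream that just records what was sent.
type collector struct{ got []Event }

func (c *collector) Send(e Event) error { c.got = append(c.got, e); return nil }

func main() {
	core := &Core{events: make(chan Event, 8)}
	core.AddEvent(Event{ID: 1, Payload: "signup"})
	core.AddEvent(Event{ID: 2, Payload: "login"})
	close(core.events)

	sink := &collector{}
	core.Events(sink)
	fmt.Println(len(sink.got)) // 2
}
```

Because a single goroutine drains a single channel, events reach the stream in the order AddEvent accepted them, which is what the ordering requirement needed.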

Guaranteed delivery on gRPC streams

Once we have more than one service communicating with others, we have a distributed system, and that brings various problems.

If we need guaranteed delivery, we can’t assume the network works properly. The network is unreliable: sometimes the delivery order is broken, and there is always a chance of packet loss and packet duplication.

Once we decide to build a delivery guarantee at the application level, we need to solve these network issues ourselves.

We need to retransmit a packet when a message wasn’t received.

Retransmission, in turn, can produce duplicates.

Also, we need to choose a delivery method from the following:

at most once, which can lead to messages being lost, but they cannot be redelivered or duplicated

at least once, which ensures messages are never lost but may be duplicated

We had to come up with a protocol to implement either of them.

The protocol can cover at-most-once delivery, yet in our case at-least-once was enough because we run only one instance of each service. Still, duplicated messages could appear in our system.

To prevent duplicated messages, we kept a simple map guarded by a mutex on the Processor side, and we resent events with exponential backoff on the Core side.

Conclusion

Usually, I don’t like to reinvent the wheel. Yet sometimes it’s easier to implement your own solution to a problem if it takes little time. You can live without any MQ for a very long time if you choose gRPC for communication between your microservices. It does not save you from every issue, though: there can be a couple of problems with load balancing, which I will explain in the next post. Stay tuned!