Speed of feature delivery is a key benefit of microservice-based architectures. We use this architectural style to deliver solutions faster and more frequently. Instead of building large monolithic systems, we divide them into small, autonomous components – services that are easier to develop and maintain. However, that autonomy and speed come at a price: microservice-based architecture brings a set of new challenges that must be addressed.

For example, we have to leave our safe harbor of RDBMSs and ACID and connect our services to different data stores. We need to implement secure and reliable communication between services in synchronous and asynchronous manners. And we need to provide proper ways to monitor and scale our solutions.

For all these purposes, we need tools. At our company, we have been building microservice-based systems since 2015. Our tech stack is based primarily on the Spring framework and its extension, Spring Boot, with a little help from Spring Cloud. These are great tools, created to make the development of web applications in Java easier and faster, but like all tools, they have their shortcomings. That’s why we monitor new frameworks, and we are constantly looking for tools that will improve our efficiency.

Micronaut is a very promising candidate to achieve these goals.

Micronaut: A New Hope

When we heard about Micronaut for the first time, we were very excited. Finally, a tool targeting microservices and serverless computing for Java developers, a tool that addresses common challenges and increases developer productivity and satisfaction, a tool that makes Java development fun again.

Micronaut was built by the same team that brought us Grails. We were great fans of Grails productivity, so we decided to give it a try.

What Is Micronaut?

Micronaut is a framework designed with microservices and cloud computing in mind. It is lightweight and reactive. It aims to provide developers with the productivity features of Grails while producing small and fast executables.

Micronaut supports Java, Kotlin, and Groovy development, and it supports both Maven and Gradle as build tools.

Micronaut’s key features, according to its creators, are as follows:

Compile-time AOP and dependency injection (which, of course, is much faster than reflection-based runtime dependency injection).

Reactive HTTP client and server (based on Netty)

A suite of cloud-native features, like support for service discovery, distributed tracing and logging, asynchronous communication using Kafka, retriable HTTP clients, circuit breakers, scalability and load balancing, and security using JWT and OAuth2.

Sample Project: Insurance Sales Portal

In order to test the Micronaut framework, we decided to use it to implement an extremely simplified version of an insurance sales system. We removed a lot of real-world business complexity, but retained enough requirements to test the following aspects of microservice development:

Project creation and development

Access to both relational and NoSQL databases

Blocking and non-blocking operations implementation

Microservice to microservice communication (synchronous and asynchronous)

Securing access with JWT

Distributed tracing

Service discovery

Running background jobs

Management and monitoring

Our example system had the architecture and components displayed below.

agent-portal-gateway – Gateway pattern from the EAA Catalog. The complexity of the “business microservices” was hidden behind a Gateway. This component was responsible for redirecting requests to the appropriate services based on configuration. The frontend application communicated only with this component. It showed the usage of non-blocking, declarative HTTP clients.

payment-service – managed policy accounts. Main responsibilities: creating a policy account, showing the policy account list, and registering payments from a bank statement file. Once a policy was created, an account was created in this service with the expected income. Payment-service also implemented a scheduled process in which a CSV file with payments was imported and payments were assigned to policy accounts. This component showed asynchronous communication between services using Kafka and the ability to create background jobs using Micronaut. It also accessed the database using JPA.

policy-service – created offers, converted offers to insurance policies, and allowed for termination of policies. In this service, we demonstrated the use of a CQRS pattern for better read/write operation isolation. This service demonstrated two ways of communicating between services: synchronous REST-based calls to the pricing-service through an HTTP client to get the price, and asynchronous event-based calls using Apache Kafka to publish information about newly created policies. In this service, we also accessed an RDBMS using JPA.

policy-search-service – provided an insurance policy search function. This module listened for events from Kafka, converted received DTOs to a “read model” (used later in search), and saved the results in a database. It also exposed a REST endpoint for searching policies.

pricing-service – calculated a price for the selected insurance product. For each product, a tariff was defined. The tariff was a set of rules on the basis of which the price was calculated. MVEL language was used to define these rules. During the policy purchase process, the policy-service connected with this service to calculate a price. The price was calculated based on the user’s answers to defined questions.
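The actual rules are written in MVEL, but the mechanism can be sketched in plain Java (all names and numbers below are hypothetical): a tariff applies a list of rules to a base price, based on the user's answers to the defined questions.

```java
import java.math.BigDecimal;
import java.util.List;
import java.util.Map;
import java.util.function.BiFunction;

// Hypothetical sketch of a tariff: each rule inspects the answers
// and transforms the running price.
class TariffSketch {

    // A rule takes (answers, currentPrice) and returns an adjusted price.
    private final List<BiFunction<Map<String, String>, BigDecimal, BigDecimal>> rules;
    private final BigDecimal basePrice;

    TariffSketch(BigDecimal basePrice,
                 List<BiFunction<Map<String, String>, BigDecimal, BigDecimal>> rules) {
        this.basePrice = basePrice;
        this.rules = rules;
    }

    BigDecimal calculatePrice(Map<String, String> answers) {
        BigDecimal price = basePrice;
        for (BiFunction<Map<String, String>, BigDecimal, BigDecimal> rule : rules) {
            price = rule.apply(answers, price);
        }
        return price;
    }

    public static void main(String[] args) {
        // Example rule: add 50% if the insured house is of wooden construction.
        TariffSketch tariff = new TariffSketch(
                new BigDecimal("100"),
                List.of((answers, price) ->
                        "WOOD".equals(answers.get("construction"))
                                ? price.multiply(new BigDecimal("1.5"))
                                : price));
        System.out.println(tariff.calculatePrice(Map.of("construction", "WOOD"))); // 150.0
    }
}
```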


product-service – a simple insurance product catalog. This held information about products that were stored in MongoDB. Each product had a code, name, image, description, cover-list, and question-list, which affected the price defined by the tariff. This module showed usage of a reactive Mongo client.

web-vue – an SPA application built with Vue.js and Bootstrap for Vue.

auth-service – a JWT-based authentication service for login functionality. Based on login and password credentials, users were authenticated, and a JWT token with their privileges was created and returned. This service used built-in Micronaut support for JWT-based security.

Each business microservice had an *-API module (payment-service-API, policy-service-API etc.) where we defined commands, events, queries, and operations.

In the picture, you can also see the internal-command-bus. This component was used internally by microservices that wanted to apply a CQS pattern (you can view a simple example in OfferController in policy-service).
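The idea behind such a command bus can be sketched in plain Java (a hypothetical, heavily simplified version of what the PoC does): handlers are registered per command type, and the bus dispatches each command to its handler.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Minimal command-bus sketch: maps a command's class to its handler.
class SimpleCommandBus {

    private final Map<Class<?>, Function<Object, Object>> handlers = new HashMap<>();

    <C, R> void register(Class<C> commandType, Function<C, R> handler) {
        handlers.put(commandType, cmd -> handler.apply(commandType.cast(cmd)));
    }

    @SuppressWarnings("unchecked")
    <R> R execute(Object command) {
        Function<Object, Object> handler = handlers.get(command.getClass());
        if (handler == null) {
            throw new IllegalArgumentException("No handler for " + command.getClass());
        }
        return (R) handler.apply(command);
    }
}

record CreateOfferCommand(String productCode) {} // hypothetical command

class CommandBusDemo {
    public static void main(String[] args) {
        SimpleCommandBus bus = new SimpleCommandBus();
        bus.register(CreateOfferCommand.class,
                cmd -> "offer-for-" + cmd.productCode());
        String result = bus.execute(new CreateOfferCommand("CAR"));
        System.out.println(result); // offer-for-CAR
    }
}
```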

In most modules, Lombok is used, so if you don’t already know it, it’s high time to check it out.

Building Services

Project Generation

Micronaut has a great command line interface (CLI) – Micronaut CLI. Thanks to this, you can generate projects directly from the console. If you have ever worked with Spring, you are probably familiar with Spring Initializr or Spring Roo. In Micronaut, the CLI plays this role.

You can install the CLI using SDKMAN or through binary installation. The best option is to use a Unix system. On Windows, it’s a bit problematic because you have to do it through bash (Cygwin/Git Bash) or the Windows Subsystem for Linux. We have done it both ways on Windows, and we recommend the second one.

Let’s get to the CLI itself. If you want to create an app (microservice) with Maven as a build tool, Spock as a test framework, and Java as your source code language, write the following in the console:

mn create-app pl.altkom.asc.lab.[SERVICE-NAME] -f spock -b maven

You can add a lot of features from the CLI, including Consul/Eureka as a discovery server, Hibernate, Kafka, Mongo, Neo4j, Redis, security (JWT/session), Zipkin and Jaeger.

You can check out the full list here.

You can also create functions, command line apps, federations (services with shared profile/features), and profiles.

The CLI is a powerful tool that can help a lot in a programmer’s daily work.

Accessing Relational Databases With JPA

The first step is to add the required dependencies. We used Maven as a build tool, and the import looked like this:

<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <version>1.4.197</version>
</dependency>
<dependency>
    <groupId>io.micronaut.configuration</groupId>
    <artifactId>jdbc-hikari</artifactId>
</dependency>
<dependency>
    <groupId>io.micronaut.configuration</groupId>
    <artifactId>hibernate-jpa</artifactId>
</dependency>

h2database.h2 – An in-memory relational database

– An in-memory relational database micronaut.configuration.jdbc-hikari – Configures SQL DataSource instances using a Hikari connection pool

– Configures SQL DataSource instances using a Hikari connection pool micronaut.configuration.hibernate-jpa – Configures Hibernate/JPA EntityManagerFactory beans

The second step is to add the configuration to your application.yml file, which looks like this:

---
datasources:
  default:
    url: jdbc:h2:mem:devDb;MVCC=TRUE;LOCK_TIMEOUT=10000;DB_CLOSE_ON_EXIT=FALSE
    driverClassName: org.h2.Driver
    username: sa
    password: ''
---
jpa:
  default:
    packages-to-scan:
      - 'pl.altkom.asc.lab.micronaut.poc.policy.domain'
    properties:
      hibernate:
        hbm2ddl:
          auto: update
        show_sql: true
---

This configuration allows multiple data sources to be set up (more info in the docs). We configured a connection to H2 (in our opinion, this is sufficient for a PoC). In packages-to-scan, you should list the packages in which entities are defined.

We started this project when M3 was the latest version of Micronaut. During development, a new version (M4) was released.

During the update to the new version, we decided to replace the SessionFactory with EntityManager, and the current repository looks like this:

@Singleton
public class HibernateOffersRepository implements OfferRepository {

    @Inject
    @CurrentSession
    private EntityManager entityManager;

    @Transactional
    @Override
    public void add(Offer offer) {
        entityManager.persist(offer);
    }

    @Transactional
    @Override
    public Offer getByNumber(String number) {
        return query("from Offer o where o.number = :number")
                .setParameter("number", number)
                .getSingleResult();
    }

    private TypedQuery<Offer> query(String queryText) {
        return entityManager.createQuery(queryText, Offer.class);
    }
}

Changes after the Micronaut upgrade and transition to EntityManager from SessionFactory.

Mock Database for Testing

Thanks to two annotations (@Replaces and @Requires), we can use a very simple hashtable-based database with pre-defined data instead of the injected repository backed by an EntityManager bean.

import io.micronaut.context.annotation.Replaces;
import io.micronaut.context.annotation.Requires;
import io.micronaut.context.env.Environment;
import io.micronaut.spring.tx.annotation.Transactional;
import pl.altkom.asc.lab.micronaut.poc.policy.domain.Offer;
import pl.altkom.asc.lab.micronaut.poc.policy.domain.OfferRepository;
import pl.altkom.asc.lab.micronaut.poc.policy.infrastructure.adapters.db.HibernateOffersRepository;
import javax.inject.Singleton;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

@Replaces(HibernateOffersRepository.class)
@Requires(env = Environment.TEST)
@Singleton
public class MockOfferRepository implements OfferRepository {

    private Map<String, Offer> map = new ConcurrentHashMap<>();

    @Transactional
    @Override
    public void add(Offer offer) {
        map.put(offer.getNumber(), offer);
    }

    @Transactional
    @Override
    public Offer getByNumber(String number) {
        return map.get(number);
    }
}





Accessing MongoDB

Micronaut features the ability to automatically configure the native MongoDB Java driver.

Currently, we have two options for configuring MongoDB: non-blocking or blocking. The options differ in the added dependency and in the configuration in application.yml. We decided to use the non-blocking Reactive Streams MongoClient.

pom.xml

<dependency>
    <groupId>io.micronaut.configuration</groupId>
    <artifactId>mongo-reactive</artifactId>
</dependency>

application.yml

mongodb:
  uri: "mongodb://${MONGO_HOST:localhost}:${MONGO_PORT:27017}/products-demo"
  cluster:
    maxWaitQueueSize: 5
  connectionPool:
    maxSize: 20

Then the non-blocking MongoClient will be available for injection and can be used in our repository:

import com.mongodb.client.model.Filters;
import com.mongodb.reactivestreams.client.MongoClient;
import com.mongodb.reactivestreams.client.MongoCollection;
import io.reactivex.Flowable;
import io.reactivex.Maybe;
import io.reactivex.Single;
import lombok.RequiredArgsConstructor;
import pl.altkom.asc.lab.micronaut.poc.product.service.domain.Product;
import pl.altkom.asc.lab.micronaut.poc.product.service.domain.Products;
import javax.inject.Singleton;
import java.util.List;

@Singleton
@RequiredArgsConstructor
public class ProductsRepository implements Products {

    private final MongoClient mongoClient;

    @Override
    public Single<Product> add(Product product) {
        return Single.fromPublisher(
                getCollection().insertOne(product)
        ).map(success -> product);
    }

    @Override
    public Single<List<Product>> findAll() {
        return Flowable.fromPublisher(
                getCollection().find()
        ).toList();
    }

    @Override
    public Maybe<Product> findOne(String productCode) {
        return Flowable.fromPublisher(
                getCollection()
                        .find(Filters.eq("code", productCode))
                        .limit(1)
        ).firstElement();
    }

    private MongoCollection<Product> getCollection() {
        return mongoClient
                .getDatabase("products-demo")
                .getCollection("product", Product.class);
    }
}

Exposing REST Endpoints

REST endpoints are a basic way of communicating between the server application and the client application.

The easiest way is to create a controller with annotations from the io.micronaut.http.annotation package:

import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;
import io.micronaut.http.HttpStatus;
import pl.altkom.asc.lab.micronaut.poc.policy.service.api.v1.Health;

@Controller("/hello") // base path applied to all paths defined in the class
public class HelloController {

    @Get
    public HttpStatus index() {
        return HttpStatus.OK;
    }

    @Get("/version") // example: http://localhost:XXXX/hello/version
    public Health version() {
        return new Health("1.0", "OK");
    }
}

To maintain better consistency between client definitions and controllers, we tried to keep this convention:

In *-API modules, we created an *Operations interface, where we defined all operations supported by this microservice.

Example Interface:

import io.micronaut.http.annotation.Body;
import io.micronaut.http.annotation.Get;
import io.micronaut.http.annotation.Post;
import pl.altkom.asc.lab.micronaut.poc.policy.service.api.v1.commands.*;
import pl.altkom.asc.lab.micronaut.poc.policy.service.api.v1.queries.*;

public interface PolicyOperations {

    @Get("/{policyNumber}")
    GetPolicyDetailsQueryResult get(String policyNumber);

    @Post
    CreatePolicyResult create(@Body CreatePolicyCommand cmd);

    @Post("/terminate")
    TerminatePolicyResult terminate(@Body TerminatePolicyCommand cmd);
}

We use Micronaut annotations like @Get, @Post, and @Body to tell Micronaut how we want our operations to be exposed and what parameters should be bound from the HTTP request to method parameters.

The *Operations interface should be implemented by the Controller in the module and by all clients who want to use the service methods (more about this will follow later).

PolicyOperations implementations.

Example Controller:

@RequiredArgsConstructor
@Controller("/policies")
public class PolicyController implements PolicyOperations {

    private final CommandBus bus;

    @Override
    public GetPolicyDetailsQueryResult get(String policyNumber) {
        return bus.executeQuery(new GetPolicyDetailsQuery(policyNumber));
    }

    @Override
    public CreatePolicyResult create(CreatePolicyCommand cmd) {
        return bus.executeCommand(cmd);
    }

    @Override
    public TerminatePolicyResult terminate(TerminatePolicyCommand cmd) {
        return bus.executeCommand(cmd);
    }
}

PolicyController is a simple proxy: the injected CommandBus transmits each command and query to the appropriate handler.

In the controller, we do not repeat the @Get/@Post/@Body annotations (everything was already defined in the interface).

Talking to Other Services Using Kafka

In systems based on microservice architecture, the preferred method of communication is asynchronous.

We often have to deal with situations in which one service must tell another: “Hi, I finished my work.”

In our example system, all the most important events are related to the policy.

Let’s look at “register a policy” from a business point of view. After registering a new policy (PolicyRegisteredEvent), we should create an account (PolicyAccount) for which premiums can be paid. The same module that manages the policies should not be responsible for creating a new account and accepting payments.
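The publish-subscribe idea itself can be illustrated with a toy in-process broker in plain Java (a hypothetical sketch; in the PoC, this role is played by Kafka):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

// Toy publish-subscribe broker: topics map to lists of subscribers.
// In the real system, this role is played by Kafka.
class ToyBroker {

    private final Map<String, List<Consumer<Object>>> subscribers = new ConcurrentHashMap<>();

    void subscribe(String topic, Consumer<Object> listener) {
        subscribers.computeIfAbsent(topic, t -> new ArrayList<>()).add(listener);
    }

    void publish(String topic, Object event) {
        subscribers.getOrDefault(topic, List.of()).forEach(l -> l.accept(event));
    }

    public static void main(String[] args) {
        ToyBroker broker = new ToyBroker();
        List<String> accounts = new ArrayList<>();
        // payment-service subscribes: creates an account for each new policy
        broker.subscribe("policy-registered",
                event -> accounts.add("account-for-" + event));
        // policy-service publishes after registering a policy
        broker.publish("policy-registered", "POL-001");
        System.out.println(accounts); // [account-for-POL-001]
    }
}
```

The point of the pattern is visible even in the toy: the publisher knows nothing about who reacts to the event.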

Message brokers and the publish-subscribe pattern are ideally suited for solving such problems. In our opinion, the best open-source solutions of this type are Apache Kafka and RabbitMQ.

Kafka can be used for even more applications. This is from the official site:

Kafka® is used for building real-time data pipelines and streaming apps. It is horizontally scalable, fault-tolerant, wicked fast, and runs in production in thousands of companies.

Micronaut fully supports Kafka. RabbitMQ is not supported in the same way as Kafka yet, but an issue has been created on GitHub.

To add support for Kafka, first add the Micronaut Kafka configuration to your build configuration and set the value of kafka.bootstrap.servers in application.yml.

pom.xml

<dependency>
    <groupId>io.micronaut.configuration</groupId>
    <artifactId>kafka</artifactId>
</dependency>

application.yml

kafka:
  bootstrap:
    servers: "${KAFKA_HOST:localhost}:${KAFKA_PORT:9092}"

If you have never used Kafka before, look at our PoC, where we created scripts for provisioning the required infrastructure. Everything is described in the README.

To send messages to Kafka, we created the EventPublisher interface with the @KafkaClient annotation and two methods with the @Topic annotation.

import io.micronaut.configuration.kafka.annotation.KafkaClient;
import io.micronaut.configuration.kafka.annotation.KafkaKey;
import io.micronaut.configuration.kafka.annotation.Topic;
import pl.altkom.asc.lab.micronaut.poc.policy.service.api.v1.events.PolicyRegisteredEvent;
import pl.altkom.asc.lab.micronaut.poc.policy.service.api.v1.events.PolicyTerminatedEvent;

@KafkaClient
public interface EventPublisher {

    @Topic("policy-registered")
    void policyRegisteredEvent(@KafkaKey String policyNumber, PolicyRegisteredEvent event);

    @Topic("policy-terminated")
    void policyTerminatedEvent(@KafkaKey String policyNumber, PolicyTerminatedEvent event);
}

To define a message listener, we use the @KafkaListener annotation.

@RequiredArgsConstructor
@KafkaListener(offsetReset = OffsetReset.EARLIEST)
public class PolicyRegisteredListener {

    private final PolicyAccountRepository policyAccountRepository;
    private final PolicyAccountNumberGenerator policyAccountNumberGenerator;

    @Topic("policy-registered")
    void onPolicyRegistered(PolicyRegisteredEvent event) {
        Optional<PolicyAccount> accountOpt =
                policyAccountRepository.findForPolicy(event.getPolicy().getNumber());
        if (!accountOpt.isPresent())
            createAccount(event.getPolicy());
    }

    private void createAccount(PolicyDto policy) {
        policyAccountRepository.add(
                new PolicyAccount(policy.getNumber(), policyAccountNumberGenerator.generate()));
    }
}

Our simple example shows only the basic features that Micronaut offers. With Micronaut’s Kafka support, you can also:

Add message headers,

Change default serializers,

Send records in batch,

Create a consumer thread pool configuration.

Talking to Other Services With HttpClient

We created a lot of clients in agent-portal-gateway, because the main responsibility of this module is the proper redirection of requests to the appropriate services based on the configuration.





Example Client:

import io.micronaut.http.client.Client;
import pl.altkom.asc.lab.micronaut.poc.policy.search.service.api.v1.PolicySearchOperations;

@Client(id = "policy-search-service", path = "/policies")
public interface PolicySearchGatewayClient extends PolicySearchOperations {
}

Thanks to the service discovery mechanism and Consul (more about this later), we can define a client with an id set to the application name instead of using the exact address of the service (for example: localhost:5065).

In our PoC project, we have a situation in which we want to create two @Clients with the same id but different paths. Unfortunately, this is currently impossible, but it should be possible in the next version (RC1).

For now, we have solved this problem by adding a method to an existing client and overwriting defined paths. Check the interface PolicyGatewayClient for details.

Client Usage in Controller:

import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;
import pl.altkom.asc.lab.micronaut.poc.gateway.client.v1.PaymentGatewayClient;
import pl.altkom.asc.lab.micronaut.poc.payment.service.api.v1.PolicyAccountDto;
import javax.inject.Inject;
import java.util.Collection;

@Controller("/api/payments")
public class PaymentGatewayController {

    @Inject
    private PaymentGatewayClient paymentClient;

    @Get("/accounts")
    Collection<PolicyAccountDto> accounts() {
        return paymentClient.accounts();
    }
}

There are many other topics related to HTTP clients, such as retries, fallback, and circuit breakers.

In real-world applications, issues occur and we should be prepared for them. The above-mentioned topics are patterns that help us handle unexpected situations.

Retry

For example, say the agent-portal-gateway sends a request to policy-search-service because the user wants to search for a policy, but policy-search-service is located in a data center that is currently down.

Maybe the unavailability of the policy-search-service will last only a couple of seconds. In that case, it makes sense to retry the request after a short delay.

We can achieve this scenario thanks to the @Retryable annotation.

@Client(id = "policy-search-service", path = "/policies")
@Retryable(attempts = "2", delay = "3s")
public interface PolicySearchGatewayClient extends PolicySearchOperations {
}

This results in two retries with a three-second delay between attempts.
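Conceptually, what the annotation does can be sketched in plain Java (a simplified, hypothetical helper that ignores Micronaut's reactive and AOP machinery):

```java
import java.util.concurrent.Callable;

// Plain-Java sketch of retry-with-delay: call the task, and on failure
// wait and try again, up to the configured number of retries.
class RetrySketch {

    static <T> T retry(Callable<T> task, int retries, long delayMillis) throws Exception {
        Exception last = null;
        for (int attempt = 0; attempt <= retries; attempt++) {
            try {
                return task.call();
            } catch (Exception e) {
                last = e;
                if (attempt < retries) {
                    Thread.sleep(delayMillis); // wait before the next attempt
                }
            }
        }
        throw last; // all attempts failed
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        // Fails on the first call, succeeds on the second - like a service
        // that is briefly unavailable.
        String result = retry(() -> {
            if (++calls[0] < 2) throw new RuntimeException("service unavailable");
            return "ok";
        }, 2, 10);
        System.out.println(result + " after " + calls[0] + " calls"); // ok after 2 calls
    }
}
```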

Fallback

But what if the service still does not work?

We should have an emergency plan called a fallback mechanism. A fallback mechanism is a second way of doing things, in case the first way fails.

For each client, we should define a fallback client, which should be called in an emergency situation and return some standard values in line with business requirements.

@Singleton
@Fallback
public class PolicySearchGatewayClientFallback implements PolicySearchOperations {

    @Override
    public FindPolicyQueryResult policies() {
        return FindPolicyQueryResult.empty();
    }
}

Circuit Breaker

In microservice architecture, retry is useful, but in some cases, using the Circuit Breaker pattern is a better choice.

From the Micronaut docs:

The Circuit Breaker pattern is designed to resolve this issue by essentially allowing a certain number of failing requests and then opening a circuit that remains open for a period before allowing any additional retry attempts...The Circuit Breaker annotation is a variation of the @Retryable annotation that supports a reset member that indicates how long the circuit should remain open before it is reset (the default is 20 seconds).

@Client(id = "policy-search-service", path = "/policies")
public interface PolicySearchGatewayClient extends PolicySearchOperations {

    @CircuitBreaker(reset = "25s")
    FindPolicyQueryResult policies();
}

In the example above, the policies method is retried three times; after that, the circuit opens for 25 seconds.
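The state machine behind a circuit breaker can be sketched in plain Java (a simplified, hypothetical version without the half-open state that real implementations have):

```java
// Simplified circuit-breaker state machine: after `threshold` consecutive
// failures the circuit opens and calls are rejected until `resetMillis`
// has elapsed. (Real implementations also have a half-open state.)
class CircuitBreakerSketch {

    private final int threshold;
    private final long resetMillis;
    private int failures = 0;
    private long openedAt = -1;

    CircuitBreakerSketch(int threshold, long resetMillis) {
        this.threshold = threshold;
        this.resetMillis = resetMillis;
    }

    synchronized boolean allowRequest(long now) {
        if (openedAt >= 0) {
            if (now - openedAt < resetMillis) {
                return false;      // circuit open: reject immediately
            }
            openedAt = -1;         // reset period elapsed: close again
            failures = 0;
        }
        return true;
    }

    synchronized void recordFailure(long now) {
        if (++failures >= threshold) {
            openedAt = now;        // too many failures: open the circuit
        }
    }

    synchronized void recordSuccess() {
        failures = 0;
    }

    public static void main(String[] args) {
        CircuitBreakerSketch cb = new CircuitBreakerSketch(3, 25_000);
        for (int i = 0; i < 3; i++) cb.recordFailure(0);
        System.out.println(cb.allowRequest(1_000));  // false: circuit open
        System.out.println(cb.allowRequest(26_000)); // true: reset period elapsed
    }
}
```

While the circuit is open, callers fail fast instead of piling more load onto a struggling service.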

Service Discovery With Consul

Service discovery is one of the basic patterns used in a microservice architecture.

Our microservices cannot assume a fixed port at startup. Instead, each microservice needs a dynamic port allocation to avoid collisions during replication.

Micronaut supports Consul, Eureka, and Kubernetes. For this PoC, we used Consul.

Consul support in Micronaut is great! Just add a dependency, enter the address, and enable self-registration in the configuration.

pom.xml

<dependency>
    <groupId>io.micronaut</groupId>
    <artifactId>discovery-client</artifactId>
    <scope>compile</scope>
</dependency>

application.yml

consul:
  client:
    registration:
      enabled: true
    defaultZone: "${CONSUL_HOST:localhost}:${CONSUL_PORT:8500}"

If you have never used Consul before, we created scripts, in our PoC, for provisioning required infrastructure. Everything is described in the README.

We added the above dependency and configuration to all microservices. After starting (from the IDE or by script), we can view the service list in Consul’s dashboard:

Consul dashboard.

Interestingly, Micronaut has its own Consul client implementation.

Why? You’ll find the answer in the Micronaut FAQ:

The majority of Consul and Eureka clients that exist are blocking and include a mountain of external dependencies that inflate your JAR files...Micronaut’s DiscoveryClient uses Micronaut’s native HTTP client, thus greatly reducing the need for external dependencies and providing a reactive API onto both discovery servers.

Client-Side Load Balancing

Client-side load balancing is the next important pattern in a microservice architecture. The default load balancing algorithm in Micronaut is Round Robin. This algorithm continuously rotates a list of services that are attached to it. When a request arrives, the algorithm assigns the connection to the first service on the list and then moves that service to the bottom of the list.
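Round Robin selection itself is only a few lines of plain Java (a hypothetical sketch with made-up host names):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal round-robin selector: each call returns the next service
// instance from the list, wrapping around at the end.
class RoundRobinSketch {

    private final List<String> instances;
    private final AtomicInteger next = new AtomicInteger();

    RoundRobinSketch(List<String> instances) {
        this.instances = List.copyOf(instances);
    }

    String select() {
        // floorMod keeps the index valid even if the counter overflows
        int i = Math.floorMod(next.getAndIncrement(), instances.size());
        return instances.get(i);
    }

    public static void main(String[] args) {
        RoundRobinSketch lb = new RoundRobinSketch(
                List.of("host-a:8080", "host-b:8080", "host-c:8080"));
        System.out.println(lb.select()); // host-a:8080
        System.out.println(lb.select()); // host-b:8080
        System.out.println(lb.select()); // host-c:8080
        System.out.println(lb.select()); // host-a:8080
    }
}
```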

However, sometimes this standard algorithm is not enough, for example, when we want to direct traffic to the servers with the best overall response times first.

In this situation, Netflix Ribbon can help.

Ribbon is an Inter-Process Communication (remote procedure calls) library with built-in software load balancers. The primary usage model involves REST calls with various serialization scheme support (source).

This commit contains everything you need to add Ribbon support to your Micronaut microservice. To add Ribbon, just add a new dependency to the app and three lines of configuration.

In our PoC, most communication goes through the agent-portal-gateway, so it is the best place to add this.

pom.xml

<dependency>
    <groupId>io.micronaut.configuration</groupId>
    <artifactId>netflix-ribbon</artifactId>
    <scope>compile</scope>
</dependency>

application.yml

ribbon:
  VipAddress: test
  ServerListRefreshInterval: 2000

More information about available configuration settings is available in the docs.

Securing Services With JWT

Micronaut comes with JWT support built in, so securing our application requires only a few steps. For the sake of simplicity, we only did this for our gateway service.

Let’s begin with the dependencies:

pom.xml

<dependency>
    <groupId>io.micronaut</groupId>
    <artifactId>security-jwt</artifactId>
    <scope>compile</scope>
</dependency>

And add some basic configuration:

application.yml

micronaut:
  security:
    enabled: true
    token:
      jwt:
        enabled: true
        signatures:
          secret:
            generator:
              secret: pleaseChangeThisSecretForANewOne

Please note that hardcoding passwords is very bad practice. Normally, they should be obtained from external configuration (using, for example, Vault).

At this point, all endpoints exposed by the gateway are secured. In order to make them accessible, we need to define some rules using the @Secured annotation, for example:

@Secured("isAuthenticated()")
@Controller("/api/policies")
public class PolicyGatewayController {
    [...]
}

This commit contains all the described changes required to secure gateway endpoints.

To make it work, we need one more thing – a service that will authenticate a user and generate JWT tokens. In this commit, we built a simple one with a pre-populated in-memory database (for demonstration purposes).
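Independently of any library, a JWT is just three Base64URL-encoded parts (header, payload, signature) joined with dots, so inspecting the claims requires nothing beyond the JDK. A small sketch with a hand-made toy token (hypothetical values, not a token issued by the auth-service):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Sketch: a JWT is header.payload.signature, each part Base64URL-encoded.
// Here we build a toy token (with a fake signature, for illustration only)
// and decode its payload back.
class JwtSketch {
    public static void main(String[] args) {
        Base64.Encoder enc = Base64.getUrlEncoder().withoutPadding();
        String header = enc.encodeToString(
                "{\"alg\":\"HS256\"}".getBytes(StandardCharsets.UTF_8));
        String payload = enc.encodeToString(
                "{\"sub\":\"agent1\"}".getBytes(StandardCharsets.UTF_8));
        String token = header + "." + payload + ".fake-signature";

        // Reading the claims is just splitting on '.' and Base64URL-decoding:
        String claims = new String(
                Base64.getUrlDecoder().decode(token.split("\\.")[1]),
                StandardCharsets.UTF_8);
        System.out.println(claims); // {"sub":"agent1"}
    }
}
```

In a real token, the signature is an HMAC or asymmetric signature over the first two parts, which is what Micronaut verifies with the configured secret.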

Running Scheduled Tasks

In Micronaut, the annotation @Scheduled is used to define scheduled tasks.

import io.micronaut.context.annotation.Prototype;
import io.micronaut.scheduling.annotation.Scheduled;
import java.time.LocalDate;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import pl.altkom.asc.lab.micronaut.poc.payment.domain.InPaymentRegistrationService;

@Prototype
@Slf4j
@RequiredArgsConstructor
public class BankStatementImportJob {

    private final BankStatementImportJobCfg jobCfg;
    private final InPaymentRegistrationService inPaymentRegistrationService;

    @Scheduled(fixedRate = "8h")
    public void importBankStatement() {
        log.info("Starting bank statement import job");
        inPaymentRegistrationService.registerInPayments(jobCfg.getImportDir(), LocalDate.now());
    }
}

Our task executes every eight hours.

Scheduling can be configured at a fixed rate (fixedRate), with a fixed delay (fixedDelay), or as a cron task (cron). More examples are available in the docs.

We defined this bean as @Prototype, but @Singleton could also work. It all depends on what we want to achieve.

Remember that the scope of the bean has an impact on behavior: @Singleton beans share state (the fields of the instance) across executions, while with a @Prototype bean, a new instance is created for each execution (source).
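The difference is easy to demonstrate outside any DI container with plain Java (a toy illustration):

```java
// Toy illustration of bean scope and state (no DI container involved):
// a counter field behaves differently depending on whether executions
// share one instance or each get a fresh one.
class CountingJob {

    private int runs = 0;

    int execute() {
        return ++runs; // instance state survives between calls on the same object
    }
}

class ScopeDemo {
    public static void main(String[] args) {
        // "Singleton-like": one shared instance, so state accumulates.
        CountingJob shared = new CountingJob();
        shared.execute();
        System.out.println(shared.execute()); // 2

        // "Prototype-like": a new instance per execution, so state resets.
        System.out.println(new CountingJob().execute()); // 1
        System.out.println(new CountingJob().execute()); // 1
    }
}
```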

Tracing With Zipkin

Tracing is the next important step when building microservices. In real-world applications, requests can be sent between many services. Well-designed architecture should allow for tracing requests end to end and visualizing interactions between components.

Distributed tracing solutions offer such functionality. The most well-known are Zipkin, from Twitter, and Jaeger, from Uber. If you want to read more about how tracing works, check it out here.

We chose Zipkin because we had prior experience with it.

If you have never used Zipkin before, we created scripts in our PoC for provisioning the required infrastructure, or you can run a one-line Docker command. Everything is described in the README.

To add Zipkin support, just add a few dependencies and a few lines of configuration. Remember, this should be added to every microservice that you want to participate in tracing.

pom.xml

<dependency>
    <groupId>io.micronaut</groupId>
    <artifactId>tracing</artifactId>
    <scope>compile</scope>
</dependency>
<dependency>
    <groupId>io.zipkin.brave</groupId>
    <artifactId>brave-instrumentation-http</artifactId>
    <version>4.19.0</version>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>io.zipkin.reporter2</groupId>
    <artifactId>zipkin-reporter</artifactId>
    <version>2.5.0</version>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>io.opentracing.brave</groupId>
    <artifactId>brave-opentracing</artifactId>
    <version>0.30.0</version>
    <scope>compile</scope>
</dependency>

application.yml

tracing:
  zipkin:
    enabled: true
    http:
      url: http://localhost:9411
    sampler:
      probability: 1.0

The above configuration sends 100% (probability: 1.0) of requests to Zipkin for processing. In a real production system, that could be overwhelming.

After these steps, we can go to Zipkin’s dashboard to track our requests.

Zipkin Dashboard.

Management and Monitoring

Micronaut adds support for monitoring your application via endpoints: special URIs that return details about the health and state of your application.

Built-in endpoints can return a lot of information about systems, such as metrics, a list of loaded beans, health, application state, a list of available loggers, and URIs. We configured two example endpoints: health and metrics.

pom.xml

<dependency>
    <groupId>io.micronaut</groupId>
    <artifactId>management</artifactId>
</dependency>
<dependency>
    <groupId>io.micronaut.configuration</groupId>
    <artifactId>micrometer-core</artifactId>
</dependency>
<dependency>
    <groupId>io.micronaut.configuration</groupId>
    <artifactId>micrometer-registry-statsd</artifactId>
</dependency>

application.yml

micronaut:
  application:
    name: product-service
  metrics:
    enabled: true
---
endpoints:
  health:
    enabled: true
    sensitive: false
  metrics:
    enabled: true
    sensitive: false

Some of the information provided by these endpoints is sensitive. Sensitive data must be restricted to authenticated users. Micronaut’s built-in endpoints are integrated with security, and each of them can be easily configured to be secured or not, with the sensitive option.

Summary

Micronaut is a very promising framework. Even though it’s still not in RC phase, it already has most of the features required to quickly and easily build microservices.

Pros

You can access various data stores, both in blocking and non-blocking ways, connect your services via REST HTTP calls or asynchronously through a message broker, and secure your system with JWT. You can also easily connect your services to service discovery and tracing infrastructure.

Cons

As long-time Spring Data users, we miss the ability to easily create data access code using annotations and queries generated dynamically from method names (if you need something like this in Micronaut, you have to use Groovy and GORM Data Services). We also miss RabbitMQ support (we use this message broker in most of our production systems), but this should be available in RC.

There are also still some bugs (Micronaut is still under development), but the Micronaut team is very responsive and helpful, and problems are fixed quickly.

We will continue to upgrade our demo application with the next versions of the framework and will run performance and scalability tests when it reaches version 1.0.

We recommend that everyone building microservices on JVM give Micronaut a try.

Authors:

Wojciech Suwała, Robert Witkowski, Robert Kuśmierek