Developing failsafe and scalable Java applications involves domain-specific challenges, and the common solutions are often misunderstood. Even when building failsafe and scalable Java backends on top of frameworks or toolkits advertised as easy to use, automating these services in dynamically changing environments, such as microservices deployed across multiple environments, remains a problem. We recently published an article on the dynamically changing configuration problem. For a fast development process in a rapidly changing environment, dynamic configuration management should not be an obstacle for developers.

Vert.x is a toolkit for building reactive applications using an event-driven, non-blocking model, which lets us scale applications easily with a handful of configuration changes. As a result, we can create high-performance microservices at scale. This is made possible by Vert.x's nervous system, the event bus. Any node that is a member of a given cluster has its own responsibility, and information is exchanged via the event bus. Vert.x relies on its cluster-manager interface to discover other nodes on the network. One popular cluster manager for Vert.x is the Hazelcast cluster manager.
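As a minimal sketch of how these pieces fit together, a node can be started with the Hazelcast cluster manager and then publish and consume messages on a topic over the event bus. This assumes the vertx-core and vertx-hazelcast artifacts are on the classpath; the class name and the topic address are our own illustrative choices:

```java
import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;
import io.vertx.spi.cluster.hazelcast.HazelcastClusterManager;

public class ClusteredNode {
    public static void main(String[] args) {
        // The cluster manager reads cluster.xml (or default-cluster.xml)
        // from the classpath to decide how to discover other members.
        VertxOptions options = new VertxOptions()
                .setClusterManager(new HazelcastClusterManager());

        Vertx.clusteredVertx(options, res -> {
            if (res.succeeded()) {
                Vertx vertx = res.result();
                // Watch a topic: every member subscribed to "news.feed"
                // receives the messages published to it.
                vertx.eventBus().consumer("news.feed",
                        msg -> System.out.println("Received: " + msg.body()));
                vertx.eventBus().publish("news.feed", "node joined the cluster");
            } else {
                res.cause().printStackTrace();
            }
        });
    }
}
```

Note that `clusteredVertx` is asynchronous: the node only becomes usable once the cluster manager has finished joining (or failing to join) the cluster.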

It is a little tricky to understand how cluster initialization actually proceeds. Thanks to discussions like this, it is easier to understand which configuration settings change the discovery logic in the background.

Any newly created node with clustering enabled reads its Hazelcast configuration (cluster.xml or default-cluster.xml) and tries to join the cluster by discovering its environment. Hazelcast provides a few options for different cases. With multicast enabled, nodes are discovered via the multicast ip:port bindings on the given network interface: any newly created node publishes a join message over the network so it can be found through the given multicast group. The tcp-ip option has a section that takes a list of members as input for discovery; in this scenario, any new incoming node must already know the running members in order to join them. The AWS option is a cloud-specific configuration. Once the nodes are aware of each other, they can form a cluster. All nodes simply watch one or more topics and send or receive messages on those topics over the event bus. The event bus has its own configuration parameters, such as clusterHost (the interface the pub-sub service binds to on startup) and clusterPort (the port the pub-sub service listens on). If no port is provided in the configuration, an arbitrary port is used.
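In cluster.xml, these discovery options live under the network/join section. A sketch of the two common variants follows; the multicast group, member addresses, and ports here are illustrative, and only one variant should be enabled at a time:

```xml
<network>
  <port auto-increment="true">5701</port>
  <join>
    <!-- Variant 1: multicast discovery. New nodes announce themselves
         to the multicast group and are found automatically. -->
    <multicast enabled="true">
      <multicast-group>224.2.2.3</multicast-group>
      <multicast-port>54327</multicast-port>
    </multicast>
    <!-- Variant 2: tcp-ip discovery. A joining node must already know
         at least one running member. -->
    <tcp-ip enabled="false">
      <member>192.168.1.10:5701</member>
      <member>192.168.1.11:5701</member>
    </tcp-ip>
    <aws enabled="false"/>
  </join>
</network>
```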

Fair as this seems, in a microservice environment where configurations change rapidly, these static definitions simply do not work in practice. In a containerized environment, where virtual network stack layers sit between processes, each deployment means a new interface or perhaps even a new VM. Therefore, new processes require a dynamically generated cluster.xml file.
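One way to generate the file, sketched here with nothing but the JDK, is to render cluster.xml from a template at container startup. The `${HOST}`/`${PORT}` placeholders and the `CLUSTER_HOST`/`CLUSTER_PORT` environment variables are our own invention for this sketch, not a Vert.x or Hazelcast convention:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ClusterXmlGenerator {

    /** Replace the ${HOST} and ${PORT} placeholders in a cluster.xml template. */
    static String render(String template, String host, String port) {
        return template
                .replace("${HOST}", host)
                .replace("${PORT}", port);
    }

    public static void main(String[] args) throws IOException {
        String template = new String(
                Files.readAllBytes(Paths.get("cluster.xml.template")),
                StandardCharsets.UTF_8);

        // In a container, the bind address typically comes from the environment.
        String host = System.getenv().getOrDefault("CLUSTER_HOST", "127.0.0.1");
        String port = System.getenv().getOrDefault("CLUSTER_PORT", "5701");

        Files.write(Paths.get("cluster.xml"),
                render(template, host, port).getBytes(StandardCharsets.UTF_8));
    }
}
```

Running this as an entrypoint step before the JVM that hosts Vert.x keeps the Hazelcast configuration in sync with whatever interface the container was given.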

Before Vert.x 3.3.0, all of these service-discovery settings were statically defined properties. Anyone who wanted to use Vert.x in a Docker container had to solve the service-discovery problem themselves. With Vert.x 3.3.0, however, many changes were introduced, and the one below solves our service-discovery issue:

The latest stable version of Hazelcast (3.6.3) became the default. Hazelcast 3.6.3 added a plugin-based service-discovery feature: with this release, service-discovery backends (such as ZooKeeper, etcd, and Consul) become available to Hazelcast as configuration. Accordingly, the cluster.xml file now contains a new section called discovery-strategies, in which one can specify the desired service-discovery backend. We found the ZooKeeper plugin (developed by the same community) to be the most mature one to take to production. The ZooKeeper plugin requires curator-x-discovery as a project dependency. Note that ZooKeeper 3.5.x is still alpha and is compatible with the latest stable version of Curator, 3.2; in order to use a stable version of ZooKeeper (3.4.8 at the time of writing), Curator needs to be downgraded to 2.11.0. Below is a sample dependency from our build.gradle.
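As a sketch of both pieces, first the Gradle dependencies (the exact plugin version is an assumption; verify the coordinates against your repository):

```groovy
// build.gradle -- Hazelcast ZooKeeper discovery plugin plus a Curator
// version compatible with stable ZooKeeper 3.4.x
dependencies {
    compile 'com.hazelcast:hazelcast:3.6.3'
    compile 'com.hazelcast:hazelcast-zookeeper:3.6.1'
    compile 'org.apache.curator:curator-x-discovery:2.11.0'
}
```

And then the matching discovery-strategies section of cluster.xml; the ZooKeeper URL, path, and group values here are illustrative:

```xml
<join>
  <multicast enabled="false"/>
  <tcp-ip enabled="false"/>
  <discovery-strategies>
    <discovery-strategy enabled="true"
        class="com.hazelcast.zookeeper.ZookeeperDiscoveryStrategy">
      <properties>
        <property name="zookeeper_url">zookeeper:2181</property>
        <property name="zookeeper_path">/discovery/hazelcast</property>
        <property name="group">vertx-cluster</property>
      </properties>
    </discovery-strategy>
  </discovery-strategies>
</join>
```

The discovery SPI itself must also be switched on via the `hazelcast.discovery.enabled` property in the properties section of cluster.xml, otherwise the strategy is ignored.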