Key Takeaways

Service meshes transparently add required technical cross-cutting concerns to microservices.

Concerns such as routing, resiliency, or authentication become a responsibility of the service mesh.

Application code becomes leaner and focuses more on the actual business logic.

Istio transparently enhances workloads, such as Kubernetes pods, via sidecar proxy containers.

Java EE with modern application servers integrates well with cloud native technology by enabling developers to implement lean business logic.

Java EE, cloud native and service meshes — this doesn’t really sound like a good fit. Or does it? Is it possible to develop modern, cloud native Java Enterprise applications that fulfill concerns such as scalability, monitoring, tracing, or routing — without implementing everything ourselves? And if so, how?

In an enterprise landscape of microservices there is the challenge of adding technical concerns, such as discovery, security, monitoring, tracing, routing, or failure handling, to multiple or all services in a consistent way. Software teams can potentially implement their individual services in different technologies, yet they need to comply with organizational standards. Adding a shared asset such as an API gateway tangles the services together and somehow defeats the purpose of a microservice architecture. Redundancy, however, should be avoided as well.

Service meshes transparently enhance each microservice that is part of the mesh with consistent technical concerns. These enhancements are added in a technology-agnostic way, without affecting the application. The application therefore focuses on implementing the business logic; the environment adds the technical necessities on top.

Instruments and 3D printers

The showcase applications that we’re going to use are an instrument craft shop and a 3D printer maker bot. Imagine an instrument craft shop SaaS application where clients order crafted instruments. Our shop doesn’t offer the best quality instruments and only forwards the requests to a maker bot backend which 3D-prints our instruments.

The two cloud native microservices are implemented in Java EE 8, deployed to a Kubernetes cluster, and managed by Istio.

Enter cloud native technologies

In order to manage Java EE applications with Kubernetes and Istio, we need to package them as containers. Docker images are created by defining Dockerfiles. These Infrastructure as Code files specify the runtime of the whole application, including the configuration, the Java runtime (that is, the JRE and an application container), and the required operating system binaries. The target environment only starts a fully-configured container from that image.

The following shows a Dockerfile for the packaged instrument craft shop application that uses a custom base image including an OpenLiberty application server:

FROM docker.example.com/open-liberty:1

COPY target/instrument-craft-shop.war $DEPLOYMENT_DIR

The OpenLiberty base image already includes what's necessary to run the application server. The application's Dockerfile adds any required configuration and, as a last step, the application archive. By using a thin deployment artifact approach we leverage Docker's Copy-On-Write file system, which gives us extremely fast builds and short transmission times.
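A minimal sketch of how such an image could be layered; the server.xml path is an assumption about the base image's configuration directory, not taken from the article's setup:

```dockerfile
FROM docker.example.com/open-liberty:1

# rarely-changing server configuration first, so its layer stays cached
COPY server.xml /config/

# the thin WAR changes on every build; only this last layer is rebuilt
# and transmitted to the registry
COPY target/instrument-craft-shop.war $DEPLOYMENT_DIR
```

Ordering the instructions from least to most frequently changing is what makes rebuilds and pushes cheap: all layers above the changed one are reused from cache.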

The built image will be run in an orchestrated environment, in our case in a Kubernetes cluster.

Therefore, the Infrastructure as Code files for the Kubernetes environment also become part of the application repository. The YAML descriptors define how the cluster should run, distribute, and organize our application, that is, our Docker containers.

The following shows the service definition for the instrument-craft-shop application. A Kubernetes service is a logical abstraction over an application.

kind: Service
apiVersion: v1
metadata:
  name: instrument-craft-shop
  labels:
    app: instrument-craft-shop
spec:
  selector:
    app: instrument-craft-shop
  ports:
    - port: 9080
      name: http

The service will load-balance the requests to the actual running instances. The containers are managed by a Kubernetes deployment. The deployment defines how the Kubernetes pods, the actual running workloads, are executed and how many replicas are desired:

kind: Deployment
apiVersion: apps/v1beta1
metadata:
  name: instrument-craft-shop
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: instrument-craft-shop
        version: v1
    spec:
      containers:
        - name: instrument-craft-shop
          image: docker.example.com/instrument-craft-shop:1
          imagePullPolicy: IfNotPresent
      restartPolicy: Always

The service will take the pods that match the defined selector into account. Here the app label, a de facto standard name, matches our application. It's good practice to also define a version label, which allows us to further customize the service routing once multiple application versions exist simultaneously.
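To make that concrete, a second version could be deployed alongside v1 as a separate deployment. The following is a hypothetical sketch (the v2 name and image tag are assumptions, not part of the example project); the service's app selector matches both deployments, while the version label keeps the pods distinguishable for routing:

```yaml
kind: Deployment
apiVersion: apps/v1beta1
metadata:
  name: instrument-craft-shop-v2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        # same app label: the service load-balances over v1 and v2 pods
        app: instrument-craft-shop
        # distinct version label: route rules can target v2 specifically
        version: v2
    spec:
      containers:
        - name: instrument-craft-shop
          image: docker.example.com/instrument-craft-shop:2
          imagePullPolicy: IfNotPresent
      restartPolicy: Always
```

Without the version label, traffic shifting such as canary releases between v1 and v2 would have no selector to act upon.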

The instrument craft shop will be invoked by a client from outside of the cluster. Kubernetes ingress resources route the ingress traffic to the corresponding services:

kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: instrument-craft-shop
  annotations:
    kubernetes.io/ingress.class: istio
spec:
  rules:
    - http:
        paths:
          - path: /instrument-craft-shop/.*
            backend:
              serviceName: instrument-craft-shop
              servicePort: 9080

The ingress.class annotation specifies istio as the ingress implementation. Kubernetes will therefore deploy the correct Istio ingress for our system.

The instrument craft shop application will communicate with the maker bot backend via HTTP. The maker bot application defines similar Kubernetes service and deployment resources, named maker-bot.

Since both applications are part of the Kubernetes cluster, they can communicate using the service definitions as host names. Kubernetes internally resolves the service names via DNS.

The following shows the maker bot client which is part of the instrument craft shop application:

@ApplicationScoped
public class MakerBot {

    private Client client;
    private WebTarget target;

    @PostConstruct
    private void initClient() {
        client = ClientBuilder.newBuilder()
                .connectTimeout(1, TimeUnit.SECONDS)
                .readTimeout(3, TimeUnit.SECONDS)
                .build();
        target = client.target("http://maker-bot:9080/maker-bot/resources/jobs");
    }

    public void printInstrument(InstrumentType type) {
        JsonObject requestBody = createRequestBody(type);
        Response response = sendRequest(requestBody);
        validateResponse(response);
    }

    private JsonObject createRequestBody(InstrumentType type) {
        return Json.createObjectBuilder()
                .add("instrument", type.name().toLowerCase())
                .build();
    }

    private Response sendRequest(JsonObject requestBody) {
        try {
            return target.request().post(Entity.json(requestBody));
        } catch (Exception e) {
            throw new IllegalStateException("Could not print instrument, reason: " + e.getMessage(), e);
        }
    }

    private void validateResponse(Response response) {
        if (response.getStatusInfo().getFamily() != Response.Status.Family.SUCCESSFUL)
            throw new IllegalStateException("Could not print instrument, status: " + response.getStatus());
    }

    @PreDestroy
    private void closeClient() {
        client.close();
    }
}

Since Java EE 8, the JAX-RS client builder API supports the connectTimeout and readTimeout methods. It’s highly advisable to set these timeouts to prevent long blocking threads.

As you can see, the maker bot backend is addressed via the host name maker-bot and port 9080, which match the Kubernetes service definition. This enables us to get rid of service discovery configuration, such as defining different target endpoints, IP addresses, or host names per environment. The URL is stable across all Kubernetes cluster environments and resolved appropriately.

Enter Istio

We're going to showcase Istio, one of the most widely used service mesh implementations.

Istio transparently adds technical cross-cutting concerns to applications. It enhances the application pods with proxy sidecar containers that capture the inbound and outbound traffic from and to the main container. The main application container connects to the desired service and has no knowledge of the proxy. We can think of Istio as aspects, as in aspect-oriented programming, that are added to applications in a transparent way. Istio can run on top of several orchestration frameworks, including Kubernetes.

Our example applications are deployed to a Kubernetes cluster that is enhanced with Istio and automatic sidecar-injection. The sidecar injector automatically adds the Istio proxy container to each pod.

The Istio Pilot is responsible for configuring the sidecar proxies in regard to routing and resiliency. We configure the Istio aspects in declarative YAML files, similar to Kubernetes resources.

As a best practice, we add default routes for the corresponding application services:

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: instrument-craft-shop-default
spec:
  destination:
    name: instrument-craft-shop
  precedence: 1
  route:
    - weight: 100
      labels:
        version: v1

This route rule specifies that all traffic to the instrument-craft-shop service will be routed to the instances with version v1. The Istio resources are added to the cluster in the same way as Kubernetes resources, for example via the kubectl command line. We can now enhance these routes with further aspects.

The following route rule for the maker bot backends adds a timeout of 2 seconds:

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: maker-bot-default
spec:
  destination:
    name: maker-bot
  precedence: 1
  route:
    - weight: 100
      labels:
        version: v1
  httpReqTimeout:
    simpleTimeout:
      timeout: 2s

The timeout is triggered independently of other, application-level timeouts and causes the proxy to return a 503 error code. This prevents the system from blocking indefinitely, even if no application-level timeout, such as the JAX-RS client configuration in the MakerBot class, has been defined. The client receives whichever timeout is triggered first.
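The race between the two timeouts can be sketched as a small decision table. This is a plain-JDK illustration of the observable behavior, not Istio's actual mechanics; the millisecond values mirror the 2s proxy timeout and the 3s JAX-RS read timeout from above:

```java
// Illustrates which of two independent timeouts the caller observes:
// the sidecar proxy's route timeout or the JAX-RS client's read timeout.
public class TimeoutRace {

    /**
     * Returns what the caller observes for a backend that takes
     * backendMillis to answer, given both timeout settings.
     */
    public static String observe(long backendMillis, long proxyMillis, long clientMillis) {
        if (backendMillis <= Math.min(proxyMillis, clientMillis)) {
            return "200"; // backend answered before either timeout fired
        }
        // the shorter timeout wins the race
        return proxyMillis < clientMillis
                ? "503 from sidecar proxy"
                : "ProcessingException (client read timeout)";
    }
}
```

With the article's settings (proxy 2s, client 3s), a slow backend always surfaces as a 503 from the proxy, because the proxy's timeout fires first; if the client timeout were the shorter one, the caller would instead see a client-side exception.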

Another feature of Istio is adding circuit breakers, which prevent an application from being overloaded and from failing as a whole. The following destination policy for the maker bot backend adds circuit breaking behavior:

apiVersion: config.istio.io/v1alpha2
kind: DestinationPolicy
metadata:
  name: maker-bot-circuit-breaker
spec:
  destination:
    name: maker-bot
  circuitBreaker:
    simpleCb:
      httpConsecutiveErrors: 1
      sleepWindow: 10s
      httpDetectionInterval: 10s
      httpMaxEjectionPercent: 100
      maxConnections: 1
      httpMaxPendingRequests: 1
      httpMaxRequestsPerConnection: 1

This overly strict destination policy allows only one connection at a time and rejects additional connections. There are different ways to configure how the circuit opens and closes again; these need to be tuned for the specific system setup.
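What "one connection at a time, reject the rest" means can be sketched with a fail-fast limiter. This is a plain-JDK analogy for the proxy's maxConnections behavior, not Envoy's implementation; class and method names are made up for illustration:

```java
import java.util.concurrent.Semaphore;
import java.util.function.Supplier;

// Fail-fast connection limiter: a fixed number of requests may be in
// flight; any additional request is rejected immediately instead of
// queueing, analogous to maxConnections: 1 in the destination policy.
public class ConnectionLimiter {

    private final Semaphore permits;

    public ConnectionLimiter(int maxConnections) {
        this.permits = new Semaphore(maxConnections);
    }

    /** Runs the request if a permit is free, otherwise fails fast. */
    public <T> T call(Supplier<T> request) {
        if (!permits.tryAcquire()) {
            // no blocking, no queue: the caller gets an immediate error,
            // just as the proxy answers with an error instead of waiting
            throw new IllegalStateException("rejected: connection limit reached");
        }
        try {
            return request.get();
        } finally {
            permits.release();
        }
    }
}
```

The point of rejecting instead of queueing is back-pressure: a struggling backend is shielded from a growing backlog, and callers fail fast while the circuit logic decides when to let traffic through again.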

Other aspects that are added transparently to the existing application are monitoring, logging, and tracing, as well as authentication. The Envoy proxies contained in the sidecar containers add these cross-cutting concerns and expose them to the environment.

The DevOps engineers can access the required information, for example by inspecting the Grafana and Prometheus extensions or the tracing solutions that are part of the Istio cluster. Authentication is added by mutually authenticating and encrypting the connections between sidecar proxies. Users can add their own certificates and additionally configure policies that define which communication is allowed.

Conclusion

Java EE fits the idea behind service meshes very well. Technical cross-cutting concerns, such as routing, resiliency, or authentication, become a responsibility of the environment, the service mesh.

In fact, Java EE was always built around that idea. The application itself should concern itself with the business logic, the actual problem domain to solve. This is what ultimately provides value to the application's users. Technical responsibilities, however, such as life cycle management, dependency injection, transactions, or threading, were part of the application container.

Orchestration frameworks and service meshes take that approach further and make service discovery, resilience, authentication, monitoring, or tracing a responsibility of the environment. These responsibilities are thus no longer a concern of the application code. They shouldn't be; the application should focus on implementing the business logic.

Plain Java EE 8, or Jakarta EE in the future, is sufficient to build the packaged application. Technical cross-cutting concerns are added from outside of the application.

If the domain requires additional concerns such as business-related metrics, these can be added by integrating third-party extensions, for example MicroProfile Metrics. Using a container that supports MicroProfile, or installing third-party libraries into the application container as a lower Docker image layer, still allows us to leverage the advantages of thin deployment artifacts. This idea matches the principle of separation of concerns.

The combination of cloud native technologies, such as Docker, Kubernetes, and Istio together with Java EE, or what will be Jakarta EE in the future, is thus a future-proof choice to realize productive enterprise applications.

About the Author

Sebastian Daschner is a self-employed Java consultant, author, and trainer who is enthusiastic about programming and Java (EE). He is the author of the book 'Architecting Modern Java EE Applications'. Sebastian participates in the JCP, helping to form the future standards of Java EE, serving in the JAX-RS, JSON-P, and Config Expert Groups and collaborating on various open source projects. For his contributions to the Java community and ecosystem he was recognized as a Java Champion, Oracle Developer Champion, and double 2016 JavaOne Rockstar. Besides Java, Sebastian is also a heavy user of Linux and cloud native technologies. He evangelizes computer science practices on his blog, his newsletter, and on Twitter via @DaschnerS. When not working with Java, he also loves to travel the world, either by plane or motorbike.