From a security point of view, the benefits of this first transformation can be summed up as follows:

Components deployed as containers can take advantage of the multi-tenancy, isolation and resource densification features of the underlying host system. Have a look at the Ten Layers of Container Security white paper for full details on this topic,

Kubernetes / OpenShift allows fine-grained control over what's exposed to other services and to the outside world. Deployment units (aka Pods) are no longer directly exposed and addressable,

Database credentials are managed as Secrets, independently from the application deployment. They can be viewed and edited by dedicated Ops people, using a powerful RBAC model,

Exposure to the outside world is controlled via an OpenShift Route with TLS support.
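To make the RBAC point concrete: this is just regular Kubernetes RBAC. As a sketch (the `ops-team` group name is a made-up example, not something created by this tutorial), granting an Ops group the right to manage resources, including secrets, in our project could look like this RoleBinding:

```yaml
# Sketch of a RoleBinding granting the built-in 'edit' ClusterRole
# (which includes secret management) to a hypothetical 'ops-team'
# group, scoped to our project's namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ops-can-edit
  namespace: fruits-catalog
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: ops-team
```

Developers, meanwhile, can be restricted to roles that cannot read secrets at all, which is the whole point of moving credentials out of the application deployment.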

How to apply it?

It is assumed that you have some kind of OpenShift cluster up and running. This instance can take several forms depending on your environment and needs:

A full-blown OpenShift cluster at your site or on your cloud instance, see how to Install OpenShift at your site,

The Red Hat Container Development Kit on your laptop, see how to Get Started with CDK,

The lightweight Minishift on your laptop, see the Minishift project page.

Once you’re logged onto your OpenShift environment, start by creating a new project (essentially a Kubernetes namespace with extra annotations) for our components.

$ oc new-project fruits-catalog --display-name="Fruits Catalog"

Then, from the root of your repository clone, start creating the new elements in your project. First, deploy a new MongoDB database, after having relaxed some constraints:

$ oc adm policy add-scc-to-user anyuid -z default -n fruits-catalog

$ oc adm policy add-scc-to-user privileged -z default -n fruits-catalog

$ oc new-app mongodb-persistent --name=mongodb -p DATABASE_SERVICE_NAME=mongodb -p MONGODB_DATABASE=sampledb -l app=fruits-catalog -n fruits-catalog

Though it is not mandatory right now to relax these constraints and allow anyuid and privileged, we’ll need it in part 5 for some advanced tweaking.

The MongoDB deployment on OpenShift automatically creates a new Kubernetes secret for you, holding the username, password and admin password used to connect to the database. Then, simply deploy our application as a new OpenShift DeploymentConfig on your cluster with this command:
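If you are curious about that secret, its values are stored base64-encoded. A quick sketch of retrieving and decoding one (the encoded value below is a made-up example, since the template generates random credentials):

```shell
# With a live cluster you would retrieve a value with something like:
#   oc get secret mongodb -o jsonpath='{.data.database-user}' -n fruits-catalog | base64 -d
# The decoding step itself, shown on an example value:
echo "dXNlckFCMQ==" | base64 -d
```

This is why reading a secret in the console shows seemingly opaque strings: base64 is an encoding, not encryption, which is exactly why access to secrets should be locked down via RBAC.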

$ mvn fabric8:deploy -Popenshift

[...]

Wait a few minutes for everything to deploy; you can check the logs from the console or via the oc logs command. After that, you should get something like this when checking the pods and the routes:

$ oc get pods -n fruits-catalog
NAME                         READY     STATUS      RESTARTS   AGE
fruits-catalog-1-xx7nd       1/1       Running     0          1h
fruits-catalog-s2i-1-build   0/1       Completed   0          1h
mongodb-1-t85wm              1/1       Running     0          1h

$ oc get routes
NAME             HOST/PORT                                             PATH   SERVICES         PORT   TERMINATION   WILDCARD
fruits-catalog   fruits-catalog-fruits-catalog-2.apps.x.x.x.x.nip.io          fruits-catalog   8080                 None

Wow! All these objects created with a single command! That’s the magic of the Fabric8 Maven plugin, which uses conventions (that you can override with fragments) to create all these resources for you. You can see that the default Route is created with no TLS termination. Finally, you’ll have to patch the Route in order to add the Edge TLS termination:

$ oc patch route/fruits-catalog --type=json -p '[{"op":"add", "path":"/spec/tls", "value":{"termination":"edge"}}]' -n fruits-catalog
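For reference, the patch simply adds the following tls section to the Route spec; a Fabric8 fragment (e.g. a hypothetical src/main/fabric8/route.yml, not present in the repo) could carry the same fragment so the Route is generated with TLS from the start:

```yaml
# Resulting Route spec fragment: 'edge' means TLS terminates at the
# OpenShift router, and router-to-pod traffic stays in plain HTTP.
spec:
  tls:
    termination: edge
```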

You should now have everything running for this first part!

Check that it works as expected

Simply open a browser window using the URL of the Route we just patched, and check that the application is running as expected. The application is now served over TLS and you can access the certificate details.

We get some errors in the screenshot above because my installation uses a custom Certificate Authority that produces self-signed certificates, but you can easily configure OpenShift to use external providers like Let’s Encrypt.
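You can reproduce the browser's complaint outside the browser with openssl: a self-signed certificate fails verification against the system trust store. A minimal local sketch (the CN is just an example hostname, not the real route):

```shell
# Generate a throwaway self-signed certificate with an example CN...
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=fruits-catalog.example.com" \
  -keyout /tmp/selfsigned.key -out /tmp/selfsigned.crt 2>/dev/null
# ...then verify it: this fails because no trusted CA signed it,
# which is exactly the warning the browser reports.
openssl verify /tmp/selfsigned.crt 2>&1 || true
```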

Finally, we see that our application is talking to its backend without us having configured anything. Well, that’s not really the case… things aren’t magic to that point. Have a look at the src/main/fabric8/deployment.yml fragment in the repo. You’ll notice that we have prepared things so that our application can retrieve the credentials from the Kubernetes secret and build a connection string from them.

env:
  - name: SPRING_DATA_MONGODB_USER
    valueFrom:
      secretKeyRef:
        key: database-user
        name: mongodb
  - name: SPRING_DATA_MONGODB_PASSWORD
    valueFrom:
      secretKeyRef:
        key: database-password
        name: mongodb
  - name: SPRING_DATA_MONGODB_URI
    value: mongodb://${SPRING_DATA_MONGODB_USER}:${SPRING_DATA_MONGODB_PASSWORD}@mongodb/sampledb

Remember the secret that was created when deploying the MongoDB pod? We are now able to refer to it and use its values even though we don’t know them. The SPRING_DATA_* environment variables are built from these values, and these well-known variables are picked up by the application to create database connections at start-up.
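To make the substitution concrete, here is a quick shell simulation of what the container environment ends up with once the ${...} placeholders are resolved (the credential values are made-up examples; the real ones come from the mongodb secret):

```shell
# Example values standing in for what secretKeyRef injects from the secret.
SPRING_DATA_MONGODB_USER="userAB1"
SPRING_DATA_MONGODB_PASSWORD="s3cret"
# The URI is assembled from those variables, just like in deployment.yml:
SPRING_DATA_MONGODB_URI="mongodb://${SPRING_DATA_MONGODB_USER}:${SPRING_DATA_MONGODB_PASSWORD}@mongodb/sampledb"
echo "$SPRING_DATA_MONGODB_URI"
```

Note that `mongodb` in the host part of the URI is the Kubernetes Service name, resolved by the cluster's internal DNS, so the application never needs a hard-coded database address either.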

Conclusion and next step

In this first part, we have seen how the simple Fabric8 Maven plugin can help us easily deploy our application to the OpenShift Kubernetes distribution. Deploying our application as containers allows us to take advantage of platform services like secret management. Moreover, it brings efficient isolation, resource control and multi-tenancy from the host system. And finally, it allows us to easily control the exposure of our application through TLS, without having to modify it or configure Java keystores, truststores and that kind of stuff.

After this little warm-up, we’ll get into serious things: we’ll see how to add authentication and authorization to our application using Keycloak. Stay tuned for the second part!