APPLICATION MODERNIZATION

Part 2: Incremental App Migration from VMs to Kubernetes — Pitfalls, Pipelines, and Avoiding Complexity

Migration guidelines for implementing continuous delivery and avoiding common pitfalls and antipatterns

One of the core goals when modernising software systems is to decouple applications from the underlying infrastructure on which they are running. This can provide many benefits, including: workload portability, integration with cloud AI/ML services, reducing costs, and improving/delegating specific aspects of security. The use of containers and orchestration frameworks like Kubernetes can decouple the deployment and execution of applications from the underlying hardware.

In the previous article of this series I explored how to begin the technical journey within an application modernisation program by deploying an Ambassador API gateway at the edge of your system and routing user traffic across existing VM-based services and newly deployed Kubernetes-based services.
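As a brief recap of that approach, edge routing to a VM-based service can be expressed from inside the cluster by pairing an Ambassador `Mapping` with a Kubernetes `ExternalName` Service. The sketch below assumes Ambassador's annotation-based configuration; the hostnames, names, and prefix are illustrative, not from the example project:

```yaml
# Sketch: route /legacy/ traffic through Ambassador to a service still on a VM.
# The externalName host and service names are hypothetical placeholders.
apiVersion: v1
kind: Service
metadata:
  name: legacy-vm-service
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: legacy_vm_mapping
      prefix: /legacy/
      service: legacy-vm-service
spec:
  type: ExternalName
  externalName: legacy.vm.internal.example.com
  ports:
  - port: 80
```

Because the `Mapping` targets a regular Kubernetes Service name, the same prefix can later be pointed at a containerised replacement without changing anything at the edge.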

This second article builds on this journey, and provides an overview of how you can plan the migration, and also provides guidance on containerising workloads and some networking gotchas to watch out for. The next article in the series will look at using a service mesh, like HashiCorp’s Consul, to route service-to-service traffic seamlessly across all platform types, regardless of whether your applications have been containerised or not.

Planning a Migration: Common Pitfalls

I’m going to assume that you are already sold on the benefits of modernising your application stack, but there are some caveats that need to be stated upfront:

You can’t expect to migrate your stack overnight. There are simply too many moving parts and too much complexity within a typical existing (legacy / heritage) stack. Any migration needs to be planned and undertaken in a piecemeal fashion, and the plan and the underlying infrastructure need to be flexible enough to adapt, for example, if one team decides that they will continue to run their applications on VMs for the next year, but also wants to utilise the new SSO authentication or rate limiting protection. Your migration must be resilient and capable of adapting to the inevitable issues that you encounter.

You should not plan to roll out a cloud migration as a big bang. Even for teams with a relatively small IT estate, the amount of risk involved with updating practically anything in a big bang fashion is too high, let alone changing your entire underlying infrastructure stack. Your migration must support incremental rollout.

You will have to ensure that all teams (both dev and ops) understand the new technologies, and update their shared mental models accordingly. Traditionally, operations may have thought of an infrastructure platform as consisting of compute nodes and layer 3/4 networking that they fully control, with the concept of component identity within the system typically thought of as an IP address and ports. In tandem, developers often believe that the configuration of the underlying platform infrastructure and communication properties, such as service discovery, security, and rate limiting, are “someone else’s problem”. A migration towards cloud technologies must ensure that everyone embraces the concept of a shared, self-service platform, that system identity is based around service identity, and that dev and ops work together to configure the runtime communication properties of applications.

Migration Tactics

Given the above requirements, let’s now look at several tactics for how this might be implemented.

Packaging in Containers

I talked about the challenges of packaging existing “heritage” applications within containers at DockerCon EU last year in “Continuous Delivery with Docker Containers and Java: The Good, the Bad, and the Ugly”. The talk is focused on the Java platform, but there should be useful takeaways for other language stacks, too.

If you have subscribed to Docker Enterprise, then the Docker team provide several tools for automatically packaging existing .NET applications into a Docker container. There are also initiatives by other organisations, such as Google, who have released the Jib container build tool. The CloudBees and Red Hat teams provide buildpack-style integration with their Jenkins X and OpenShift tooling, respectively, that assists with automatically generating a Dockerfile for existing applications. There have been demonstrations at previous DockerCons where technology like the Cloud Native Application Bundle (CNAB) has been combined with CLI tooling to automatically package applications, too.
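If you end up writing a Dockerfile by hand rather than using one of these tools, the core idea is simply to replicate how the application currently runs on its VM. A minimal sketch for an existing Java service might look like the following; the base image, artifact name, and JVM flags are illustrative assumptions, and you would substitute whatever your application actually uses today:

```dockerfile
# Sketch: packaging an existing (heritage) Java service.
# Base image and jar name are hypothetical; mirror your VM's Java version.
FROM openjdk:8-jre-slim
WORKDIR /app

# Copy the artifact your existing build already produces
COPY target/legacy-service.jar app.jar

# Reproduce the same JVM flags the service used on its VM
ENTRYPOINT ["java", "-Xmx512m", "-jar", "app.jar"]
```

Keeping the container's launch command faithful to the VM's startup script makes it much easier to attribute any behavioural differences to the container runtime itself rather than to a changed configuration.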

Adapt Your Delivery Pipeline

Containerising existing applications can require some shell script magic, but fundamentally the approach to this task is quite formulaic — understand how your application runs now, and replicate this within a container. The biggest challenge is often verifying that the app runs correctly over a variety of use cases. To perform this quality assurance you will typically need to enhance your delivery pipeline, or to create one if you don’t already have this in place. Delivery pipeline tooling like the previously mentioned Jenkins X will help here, and there are a variety of open source and commercial products, too, such as CircleCI, GoCD, and GitLab.
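To make the shape of such a pipeline concrete, here is a sketch using GitLab CI syntax (any of the tools above can express the same stages). All image names, registry hosts, and the test script are illustrative placeholders, not part of the example project:

```yaml
# Sketch of a build-then-test-in-container pipeline (GitLab CI syntax).
# Registry host, image name, and test script are hypothetical.
stages:
  - build
  - test

build-image:
  stage: build
  script:
    - docker build -t registry.example.com/legacy-service:$CI_COMMIT_SHA .
    - docker push registry.example.com/legacy-service:$CI_COMMIT_SHA

integration-test:
  stage: test
  script:
    # Run the tests against the containerised service, not the raw binary
    - docker run -d --name svc -p 8080:8080 registry.example.com/legacy-service:$CI_COMMIT_SHA
    - ./run-integration-tests.sh http://localhost:8080
```

The important design choice is that the `test` stage starts the freshly built container and exercises it, rather than testing the uncontainerised binary.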

I talked about adapting a continuous delivery pipeline to build containers in my DockerCon EU presentation, and the accompanying example project provides some practical demonstrations. The key takeaway is to ensure that you are executing all of your component-level and service-integration tests against the application or service running within a container.

I have seen some organisations that continue to execute tests as they always have done against the application binary, and then package the app in a container as the final step of the pipeline. This approach frequently results in problems, as container technology can subtly alter the runtime characteristics of the infrastructure, such as limiting CPU time or memory, providing different I/O performance from an underlying block store, or not providing enough entropy via /dev/random to run cryptographic operations such as token generation.
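One way to surface these differences early is to run your tests against a container constrained with the same resource limits your orchestrator will apply in production. The `--cpus` and `--memory` flags below are standard `docker run` options; the limit values and image name are illustrative:

```shell
# Reproduce production-like constraints locally so CPU- or memory-related
# failures appear during testing rather than after deployment.
# (Limit values and image name are hypothetical.)
docker run -d --cpus="0.5" --memory="256m" --name svc-under-test legacy-service:latest
```

If the application passes its test suite under these limits, you have much higher confidence that the same image will behave correctly when Kubernetes enforces equivalent resource requests and limits.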

Watch for Network Complexity

In the next article in this series I will demonstrate how to use the Consul service mesh to extend the example application included with part one of this series, which was deployed on Google Cloud Platform VMs and Kubernetes. However, it is worth highlighting one of the primary issues that you will encounter: the need for a fully connected network, which typically means the use of either a flat network or a series of routers or gateways to bridge disparate networks.

There are several users of Ambassador who use it to segment networks, or join existing segments, and other organisations such as HashiCorp and Rancher are working on implementing gateways that can bridge multiple clusters, with Consul Gateways and Submariner, respectively.

Stay Tuned

In this second article in the series on application migration, I have signposted some of the challenges that the Datawire team and I have seen when customers are modernising applications. In the next article, I’ll introduce the use of the Consul service mesh, and demonstrate how this integrates with Ambassador in order to ease the transition between VM-based applications and container-based services.

If you have any questions, please contact us via the website or at @getambassadorio on Twitter.