APPLICATION MODERNIZATION

Part 1: Incremental App Migration from VMs to Kubernetes — Routing Traffic Across Platforms & Clouds

Using the Ambassador API gateway and Consul to route traffic across multiple platforms and infrastructure

At Datawire, we are seeing more organizations migrating to their “next-generation” cloud-native platform built around Docker and Kubernetes. However, this migration doesn’t happen overnight. Instead, we see the proliferation of multi-platform data centers and cloud environments where applications span both VMs and containers. In these data centers the Ambassador API gateway is being used as a central point of ingress, consolidating authentication, rate limiting, and other cross-cutting operational concerns.

This article is the first in a series on how to use Ambassador as a multi-platform ingress solution when incrementally migrating applications to Kubernetes. We’ve added sample Terraform code to the Ambassador Pro Reference Architecture GitHub repo which enables the creation of a multi-platform “sandbox” infrastructure on Google Cloud Platform. This will allow you to spin up a Kubernetes cluster and several VMs, and practice routing traffic from Ambassador to the existing applications.

Edge Routing in a Multi-Platform World

I’ve written previously about using an edge proxy or gateway to help with a migration from a monolith to microservices, or a migration from on premises to the cloud. Ambassador can act as an API gateway or edge router for all types of platforms, and although it was designed and built to run exclusively on Kubernetes, it is trivial to configure traffic routing from the cluster to external network targets, such as endpoints within VPNs or virtual private clouds (VPCs), cloud services, cloud load balancers, or individual VMs. If you have network access to the endpoint, then Ambassador can route to it.
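As a sketch of what routing to an external target looks like, the Mapping below (the name and IP are illustrative, not part of the reference architecture) forwards requests with a given prefix to an endpoint outside the cluster simply by specifying a routable host or IP as the service:

```yaml
---
apiVersion: ambassador/v1
kind: Mapping
name: legacy_vm_mapping
# Requests to /legacy/ are forwarded to an endpoint outside the cluster.
prefix: /legacy/
# Any host:port reachable from the cluster works here -- a VM,
# a cloud load balancer, or a service behind a VPN/VPC peering.
service: 10.128.0.42:8080
```

This is the same mechanism used later in this article to route to the VM-based shop services via their GCP load balancer.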

Our Ambassador Pro Reference Architecture GitHub repo contains several folders that provide documentation and examples to help you understand how best to use all of the features that Ambassador supports, like rate limiting and distributed tracing. There is also a “cloud-infrastructure” folder that contains the necessary Terraform code and scripts to spin up a sample multi-platform VM / Kubernetes infrastructure using Google Cloud Platform (GCP). The resulting infrastructure stack is shown below:

Building an Example VM / Kubernetes Platform

The Terraformed infrastructure example provided in the Ambassador Reference Architecture repo will create a simple regional network in GCP with a Kubernetes (GKE) cluster and several VM-based services deployed behind (publicly addressable) load balancers. The application deployed on the VMs has been taken from my “Docker Java Shopping” example of a very simple e-commerce shop, which consists of two Java services using Spring Boot and one using Dropwizard.

Deploying Ambassador within the Kubernetes cluster enables the simplification of ingress for the entire network, and also allows the engineering team to centralise and standardise the management of this gateway. Centralising operations at the gateway and edge of the network provides many benefits, such as the reduction of “authentication sprawl” and the ability to standardise cross-cutting concerns such as TLS termination or pass-through, context-based routing (e.g. using Filters to route based on HTTP headers), and rate limiting.
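To make the context-based routing idea concrete, here is a minimal sketch (service and header names are hypothetical) of a Mapping that only matches requests carrying a specific HTTP header, which is a common way to implement canary or user-segment routing at the edge:

```yaml
---
apiVersion: ambassador/v1
kind: Mapping
name: shopfront_canary_mapping
prefix: /shopfront/
# Only requests that carry this exact header are routed to the
# canary deployment; all other traffic follows the default Mapping.
headers:
  x-canary-user: "true"
service: shopfront-canary:8080
```

Because this logic lives in the gateway configuration, it applies uniformly whether the upstream service runs in Kubernetes or on a VM.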

After cloning the reference architecture repo, navigate to the folder containing the GCP Terraform code and you will find a README with step-by-step instructions required to replicate our configuration. Be aware that spinning up this infrastructure will cost you money if you are outside of your GCP free trial credit:



$ git clone git@github.com:datawire/pro-ref-arch.git
$ cd pro-ref-arch/cloud-infrastructure/google-cloud-platform

Once you have everything configured and have run terraform apply successfully (which may take several minutes to complete), the infrastructure shown in the diagram above will have been created within your GCP account. You will also see some outputs from Terraform that can be used to configure your local kubectl tool, and also set up Ambassador.

...

Apply complete! Resources: 15 added, 0 changed, 0 destroyed.

Outputs:

gcloud_get_creds = gcloud container clusters get-credentials ambassador-demo --project nodal-flagstaff-XXXX --zone us-central1-f

shop_loadbalancer_ip_port = 35.192.25.31:80

shopfront_ambassador_config =

---
apiVersion: v1
kind: Service
metadata:
  name: shopfront
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: shopfront_mapping
      prefix: /shopfront/
      service: 35.192.25.31:80
spec:
  ports:
  - name: shopfront
    port: 80

The first output, named gcloud_get_creds, can be run to configure your local kubectl to point to the newly Terraformed Kubernetes cluster. For example, from the output above, I would run the following at my local terminal:

$ gcloud container clusters get-credentials ambassador-demo --project nodal-flagstaff-XXXX --zone us-central1-f
$ kubectl get svc

NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.59.240.1   <none>        443/TCP   28m

You can now install Ambassador into the cluster by following the Getting Started instructions, or the quick-start in the README. Once the gateway is up and running and you have obtained the external GCP load balancer IP for the Ambassador Kubernetes Service, you can deploy an Ambassador Mapping that routes to a GCP load balancer located outside of the Kubernetes cluster. I’ve deliberately kept the network routing and firewall rules simple with the current infrastructure, but future iterations of this tutorial will introduce more challenging configurations.
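Obtaining the external IP can be done with a JSONPath query; the sketch below assumes Ambassador was installed as a Service named ambassador in the default namespace (adjust the name and namespace to match your installation):

```shell
# Wait until GCP has assigned an external IP to the Ambassador Service
# (the EXTERNAL-IP column will show <pending> until then), then extract it.
AMBASSADOR_LB_IP=$(kubectl get service ambassador \
  -o "jsonpath={.status.loadBalancer.ingress[0].ip}")
echo "Ambassador is reachable at http://${AMBASSADOR_LB_IP}/"
```
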

The Terraform output named shopfront_ambassador_config provides Kubernetes configuration that can be copy-pasted into a YAML file and applied to the cluster. You should then be able to access the Shopfront service that is running on a VM (and communicating with other upstream services also running on VMs) via the Ambassador IP and the associated Mapping, e.g.: http://{AMBASSADOR_LB_IP}/shopfront/
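Assuming you saved the shopfront_ambassador_config output to a file (the filename here is just an example) and exported the Ambassador load balancer IP, applying the configuration and exercising the route might look like:

```shell
# Apply the Service, whose annotation carries the Ambassador Mapping,
# then request the VM-hosted shopfront via Ambassador's external IP.
kubectl apply -f shopfront-service.yaml
curl -i "http://${AMBASSADOR_LB_IP}/shopfront/"
```
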

If all goes well, you should see the shopfront UI rendered in your browser:

This is just the beginning of a range of tutorials we will present over the coming months. We are keen to add more complexity, for example, creating network segments with peered VPCs and more complicated firewall rules, and we will also be looking to demonstrate using Kubernetes ExternalName services and Consul Connect to implement a multicluster service mesh for full end-to-end TLS.

When you’ve finished experimenting with the Terraformed infrastructure, don’t forget to destroy the resources and clean up; otherwise you could be facing an unexpected GCP invoice!

$ terraform destroy -force

Wrapping Up

This article and associated multi-platform data center example have been designed to help engineers migrating applications from VMs to a Kubernetes cluster. Ambassador is often used as a central point of ingress for the entire estate, and this allows the consolidation of authentication, rate limiting, and other cross-cutting operational concerns.

We will continue to iterate on the example infrastructure code, and also plan to add support for additional cloud platforms like DigitalOcean and AWS. Please do reach out to me if you have any particular requests for cloud vendors or complicated routing scenarios.

As usual, you can also ask any questions you may have via Twitter (@getambassadorio), Slack or raise issues via GitHub.