Why did PhraseApp decide to leverage the open-source Kubernetes container management tool for its core software product? Maturity, flexibility, and openness ranked high on the list of reasons, says Tobias Schwab, co-founder of Dynport, better known by its flagship product name, PhraseApp.

Schwab had also considered Amazon EC2 Container Service (ECS). “But for us, Kubernetes feels much more like a framework, and that means we can pick the parts we like and not use the parts we do not like,” he says. That said, moving forward with Kubernetes has been a journey with a learning curve.

It's been 10 months since deployment. Here's what Schwab's team has learned along the way.

A journey to Kubernetes begins

PhraseApp sells a software localization platform, built using Ruby on Rails, that lets users ship software in different languages faster, and with less effort. The application, also called PhraseApp, has multiple dependencies in terms of system packages and libraries. Containers now provide the immutable infrastructure required to ensure the greatest robustness and efficiency for the application architecture and meet the high-agility demands of supporting multiple source code deployments per day, Schwab says.

PhraseApp and its database began life on a bare-metal server at a German hosting company, where Ruby on Rails and its system packages were manually installed. Schwab's team used Capistrano, the open source remote server automation tool, to migrate PhraseApp and its database to AWS in early 2013, and used golden master Amazon Machine Images (AMIs) to horizontally scale up its application servers.

The team then moved PhraseApp to Docker to deal with a few remaining challenges, such as slow deployments. With Docker, Schwab could maintain the OS, packages, libraries, and code in one image to reduce deployment times.

"The container model helped us because we now only need to provide Docker images for the application, whose service-oriented architecture encompasses log indices, metrics, offsite backups of its database, and so on," he says. “We no longer need to package these, for example, as Linux packages or provide some other reliable way to deliver them. We have separate repositories in our Docker Amazon EC2 Container Registry (ECR) and run all these services in containers.”

The PhraseApp team initially used a custom infrastructure management tool, an internal server cluster scheduler the team dubbed Wunderproxy, to create Docker images and new containers by way of APIs. While this led to faster deployments and simpler rollbacks, Schwab's team wanted to get away from having to support its own custom controller management code and being limited to a legacy version of Docker.

Kubernetes vs. Amazon ECS

In 2016, Schwab began evaluating Kubernetes and Amazon ECS to see which would better serve as a highly scalable system for PhraseApp’s container management requirements.

“When we evaluated ECS, it still felt a bit immature,” Schwab says, describing incidents such as the ECS agent crashing on its nodes for no apparent reason. The AWS Elastic Load Balancing (ELB) Classic Load Balancer (CLB) also had limitations, such as not supporting the ability to run multiple containers of the same kind on a single host.

“But the biggest issue we had was the lack of transparency,” he says, noting that ECS is closed source and “felt like a black box.” The PhraseApp team found itself dealing with many broken requests due to time-outs, but there was no way for it to determine if that was due to the ELB or ECS itself.

Although PhraseApp’s cloud version lives on AWS, its developers evaluated Kubernetes on Google Container Engine. Things started off on the right foot because the group did not have to set up any parts of its cluster itself, which had been a requirement of ECS at the time of its testing.

That meant the product team could focus more on implementing its continuous delivery pipeline for PhraseApp, Schwab says. Adding to its appeal was the fact that Kubernetes didn’t force-feed the developers capabilities they didn’t want, such as using Ingress controllers to handle incoming traffic to the cluster servers.

“ECS gave us more of a ‘this way or the highway’ feeling, that we were forced to use it as the people at AWS thought it should be used. Kubernetes gives us quite a lot of options that help us solve the challenges we need to solve, like running cronjobs and asynchronous jobs.”

—Tobias Schwab

Today with Kubernetes, PhraseApp's cron jobs can be distributed across the whole cluster instead of running on a dedicated host, where things would slow down whenever multiple jobs ran at once. The team can also deploy, in sync, the same codebase for both its front-end API and the worker processes that handle long-running web requests. In fact, Schwab says the most important test it ran to decide between Kubernetes and ECS was deploying the front-end API component of PhraseApp on both architectures.
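Distributing cron jobs across the cluster is what the Kubernetes CronJob resource provides: the scheduler places each run on whatever node has capacity, rather than on one dedicated host. A minimal sketch of such a manifest follows; the job name, schedule, image, and Rails task are illustrative assumptions, not PhraseApp's actual configuration:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: report-aggregator        # hypothetical job name
spec:
  schedule: "*/15 * * * *"       # standard cron syntax: every 15 minutes
  concurrencyPolicy: Forbid      # skip a run if the previous one is still going
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: aggregator
            image: registry.example.com/phraseapp:latest  # illustrative image
            command: ["rake", "reports:aggregate"]        # hypothetical Rails task
          restartPolicy: OnFailure
```

Because the scheduler picks a node per run, several such jobs can fire at the same time without competing for one machine's resources.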

Another option available is to run PhraseApp services such as the In-Context Editor application—where translators can edit the copy of a website through an overlay interface while browsing that website—in the Kubernetes cluster, too. “We also provide an on-premises solution of PhraseApp which is already based on Docker,” Schwab adds. “Kubernetes might also give us the option to provide a much easier setup.”

PhraseApp, Kubernetes go to production

In 2017, Schwab's team finally deployed PhraseApp with Kubernetes. Between the evaluation that began in 2016 and that step, the team also committed to kc, its generic tool to build and deploy applications on Kubernetes, including the business-growth application its sales and marketing team uses to gather data about customers and trials.

“It was the first client of our AWS-based Kubernetes setup while we learned how to run our own cluster,” Schwab says. His company, he adds, likes to first introduce new technologies in ways that don’t directly impact customers, in case any issues arise that would affect their operations.

“This also allows us to move much faster in these areas, compared to the caution we would need to take if we migrated our production system first,” he says. Additionally, it’s a good way to build the same amount of confidence in the brand-new Kubernetes deployment that it had in its previous architecture, which was rock solid and hardly ever gave the organization problems.

It was invaluable, Schwab says, to have time to gain experience by using its growth app to build trust in Kubernetes itself; to determine how to set up its clusters (which changed a few times in the process and which Schwab says was definitely the “biggest pain”); and to shape its ideas about how it would use the container management platform.

The big wins

Schwab calls the company's experience since February, when it deployed PhraseApp with Kubernetes, "100% positive," adding, "We never had any production issues which were related to the Kubernetes cluster at all."

The gains included:

Not having to think about how to deploy and configure a specific service (such as one for monitoring its SQS queues), especially in combination with the kc tool.

Being able to quickly test specific changes in the codebase before deploying them to end users.

Doing away with configuration management updates. Instead of changing already-running services, it can create new containers and terminate the old ones.

Having a fully transparent infrastructure, with the ability to get lists of the services and exact configurations needed to run all aspects of its application.

Driving better resource utilization. It can deploy many services on a single node without having them interfere with one another.

Having the scalability to launch new nodes into its cluster in less than ten minutes, and remove them even faster.
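The "new containers instead of changing already-running services" gain maps directly to a Deployment's rolling-update strategy: bumping the image tag makes Kubernetes start fresh pods and terminate the old ones, with no in-place configuration changes. A sketch, with illustrative names and an assumed registry:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: phraseapp-web            # illustrative name
spec:
  replicas: 4
  selector:
    matchLabels:
      app: phraseapp-web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0          # keep full capacity during the rollout
      maxSurge: 1                # bring up one replacement pod at a time
  template:
    metadata:
      labels:
        app: phraseapp-web
    spec:
      containers:
      - name: web
        image: registry.example.com/phraseapp:v2  # changing this tag triggers the rollout
        ports:
        - containerPort: 3000    # assumed Rails app port
```

Rolling back is the same mechanism in reverse: point the Deployment at the previous image tag and Kubernetes replaces the pods again.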

Taking Kubernetes to the next level

PhraseApp's plans include moving from running the cluster with kubeadm, part of Kubernetes, to the kops opinionated provisioning system for AWS. That would bring to the table things such as independent node groups, nodes running in autoscaling groups, and the ability to upgrade the Kubernetes version more or less for free, Schwab says.
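In kops, those independent node groups are modeled as InstanceGroup resources, each backed by an AWS Auto Scaling group. A minimal sketch of one such group; the group name, cluster name, machine type, sizes, and availability zone here are all assumptions for illustration:

```yaml
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  name: workers                  # hypothetical node-group name
  labels:
    kops.k8s.io/cluster: cluster.example.com  # illustrative cluster name
spec:
  role: Node
  machineType: m4.large          # assumed EC2 instance type
  minSize: 3                     # Auto Scaling group lower bound
  maxSize: 10                    # Auto Scaling group upper bound
  subnets:
  - eu-central-1a                # assumed availability zone
```

Because each group is its own Auto Scaling group, a cluster can mix instance types and scale worker pools independently of the masters.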

Kubernetes gives the company a tool to move PhraseApp away from a monolithic architecture to one that is more microservices-oriented, with reasonable, rather than excessive, operational overhead, says Schwab.

“The effort and overhead to deploy a microservice architecture would have been way too high for us if we had wanted to deploy it without a cluster scheduler like Kubernetes,” he says. “We do not have a dedicated IT operations team, but we try to ‘live DevOps,’ focusing instead on delivering features to our clients and then on maintaining our infrastructure.”

Kubernetes lessons from the trenches

If you're wondering whether Kubernetes is the right container management platform for your organization, Schwab has a few recommendations.

If you’re starting a new project, use the Google Container Engine cluster manager. It is “by far the simplest and most robust way to deploy a Kubernetes cluster in production,” he says.

If you’re already bound to AWS or prefer AWS over Google Cloud, though, use kops or work with a service provider that can help you set up and run the cluster for you.

“Kubeadm is already a really nice tool, but we would like to have somebody who manages our cluster for us and provides us with a highly available master and a robust way to upgrade the cluster,” he says.

Schwab acknowledges that dedicated Ops teams can probably be trained to run and maintain a Kubernetes cluster. But, he advises, bring in external consultants to get your Ops team up to speed.

Finally, Schwab says, join the Kubernetes Slack community. “You will find a lot of people who are willing to help you with any kind of problem.”
