Interview Toward the end of this month, CoreOS CEO Alex Polvi expects Amazon will introduce a managed Kubernetes service at its AWS re:Invent event.

If so – CoreOS CTO Brandon Philips cites some Kubernetes bug reports from Amazon as evidence – it will be an admission of what most people focused on software containers already know: that Kubernetes has become the industry standard for container orchestration.

After Docker's announcement last month that it will support Kubernetes in its enterprise product, Amazon is the largest major cloud vendor that hasn't yet made a serious commitment to the Google-spawned open-source project. It did, however, tip its hand by joining the Cloud Native Computing Foundation, which oversees Kubernetes, in August.

"Kubernetes has clearly won the space," said Polvi during lunch with The Register and other tech press at CoreOS's San Francisco, California, headquarters.

Polvi and Philips anticipate a Kubernetes colonization race, as enterprise vendors scramble to create the management layer for running containerized IT infrastructure.

CoreOS is already on its way, with its Tectonic enterprise Kubernetes platform. So is Red Hat, with OpenShift. Google has GKE. Microsoft has AKS. IBM is offering its Bluemix, er, Cloud Container Service. Pivotal has PKS. Oracle has teamed with CoreOS. Cloud Foundry has Cloud Foundry Container Runtime. Cisco too has thrown its hat into the ring through a Google partnership. And the list goes on.

"What Kubernetes really solves is how do you run a ton of different applications with a consistent model," said Polvi. "That consistency is what allows a company with 20,000 applications to have a small operations team running it all. Essentially you have software running these applications instead of humans doing it."
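The "software running these applications instead of humans" idea is Kubernetes' declarative, desired-state model. Below is a minimal sketch of that reconciliation pattern in Python; the function and data shapes are illustrative inventions, not the real Kubernetes API.

```python
# Sketch of Kubernetes-style reconciliation: you declare a desired state,
# and a control loop computes the actions that converge the actual state
# toward it. All names here are illustrative, not real Kubernetes objects.

def reconcile(desired, actual):
    """Return the actions needed to move `actual` toward `desired`.

    desired/actual map app name -> replica count.
    """
    actions = []
    for app, replicas in desired.items():
        running = actual.get(app, 0)
        if running < replicas:
            actions.append(("start", app, replicas - running))
        elif running > replicas:
            actions.append(("stop", app, running - replicas))
    # Anything running that is no longer declared gets torn down.
    for app, running in actual.items():
        if app not in desired:
            actions.append(("stop", app, running))
    return actions


desired = {"web": 3, "worker": 2}
actual = {"web": 1, "batch": 4}
print(reconcile(desired, actual))
# → [('start', 'web', 2), ('start', 'worker', 2), ('stop', 'batch', 4)]
```

Because the operator only ever states the end goal, the same small loop scales from 20 applications to 20,000 — which is the consistency Polvi is pointing at.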

For experienced IT professionals, the emerging Kubernetes frenzy may evoke a sense of deja vu. "We're doing a redo of everything that happened in VMs," said Philips. "So it's things like monitoring and management, identity and integration with identity services, security, and lifecycle management."

Polvi said the plan for CoreOS is to offer a path toward more automated IT operations on Kubernetes. The upstart already provides the means to automate the open-source Prometheus monitoring software for container clusters, and it will soon be doing so with other open-source projects like Vault, which does secrets management.

CoreOS is laying the foundation for any business to do this, with its own software, the chief exec added.

Automated

"Overall, our whole thesis as a company has been bringing automated operations forward," said Polvi. "We think that automated operations, which is really about simplifying operations, is the key to security, to making the cloud-side of the web more robust."

The traditional cloud service provider, said Polvi, provides hosting and operations. CoreOS, he added, wants to just provide the operations, because the hosting is a commodity.


Essentially, he's focused on selling software that runs other software. What software might need such automation? Enterprise applications for scaling, failure recovery, secrets management, provisioning, deprovisioning, installation, and monitoring – the sort of code that fills out the container orchestration layer.

"When the value of the software we're selling you is the automated operations instead of the functionality of the code itself, like the traditional proprietary IP side of things, it means we're aligned with open source," Polvi said. "It means we can take upstream Prometheus and we want that to be as big and popular as possible, so that drives more demand for our code that runs your code."

Polvi said the closest equivalent to this model is the Rackspace Managed Cloud, where that provider would go into a customer's data center to run it.

"It's like that except it's built in pure software," he explained. "The closest equivalent is like the autopilot in a self-driving car. Traditional IT operations is: you buy the car and you hire a driver. That's your ops person. Then there's cloud, where you hire their car and their driver. It's like a chauffeur service. You just sit in the back and don't have to do anything. We're actually proposing a self-driving model, where you buy the car and you push a button and it drives itself around."

Cannibalized by containers

That may sound a bit like automation offered by the likes of configuration management toolmakers Puppet and Chef, but Polvi and Philips see those tools operating at a lower level: deploying apps. And containerization, they contend, is replacing that.

"In the past, people hooked up Puppet or Chef to the CI/CD pipelines of their app and now they're hooking up the Kubernetes APIs for the CI/CD pipelines to deploy a new version or for testing," said Philips.
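The handoff Philips describes can be made concrete: instead of triggering a Puppet or Chef run, the pipeline's deploy step patches a Deployment object through the Kubernetes API. The sketch below just builds the patch as a plain dict; the registry name and container name are hypothetical, and a real pipeline would submit the patch with kubectl or a Kubernetes client library rather than printing it.

```python
# Illustrative CI/CD deploy step in the Kubernetes world: build the
# strategic-merge patch that updates a Deployment's container image to
# the tag the pipeline just built. Names below are made up for the example.

def rollout_patch(image, tag):
    """Build the patch body a pipeline would send to the Kubernetes API."""
    return {
        "spec": {
            "template": {
                "spec": {
                    "containers": [
                        # Matching on container name, Kubernetes merges in
                        # the new image and rolls pods over to it.
                        {"name": "app", "image": f"{image}:{tag}"}
                    ]
                }
            }
        }
    }


patch = rollout_patch("registry.example.com/shop", "v1.4.2")
print(patch["spec"]["template"]["spec"]["containers"][0]["image"])
# → registry.example.com/shop:v1.4.2
```

The point is that the deploy step becomes a declarative API call — "run this image" — rather than a procedural script that logs into machines, which is exactly the layer Polvi says containers are eating.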

Polvi described Puppet and Chef as languages to tell a computer how to run infrastructure. They have advantages for some operations teams and there's no reason people can't keep using them, he said. "But I think those companies need to keep a close eye on this [container-focused] world because a lot of the functionality is being replaced," he added.

That's a better state of affairs than the platform-as-a-service (PaaS) market. "I think PaaS is dead," said Polvi. "That's why you see OpenShift and Cloud Foundry and everyone pivoting to Kubernetes. What's going to happen is PaaS will be reborn as serverless on the other side of the Kubernetes transition."

Polvi subsequently walked back the death declaration, and suggested PaaS is evolving. He sees Kubernetes and container vendors building services atop the Kubernetes layer, and PaaS adopting serverless architecture.

Serverless computing, for those who have managed to escape the hype thus far, works like this: rather than spinning up virtual or physical machines, installing a web server on them, developing an application on top that talks to clients via your API, and then managing and patching all those layers, you simply go serverless. In that case, someone, like AWS, takes care of all the fiddly stuff of deploying and maintaining the infrastructure, leaving you to write the application logic on top. However, in doing so, your software is hardwired into the provider's interfaces, so that your code can receive and service requests and events from clients and mobile apps that connect in.
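To make the "hardwired into the provider's interfaces" point concrete, here is a Lambda-style function in Python. The handler signature and the event's shape are dictated by the platform, not by you; the event below mimics an API Gateway proxy payload, with field names following AWS conventions, but the whole example is an illustrative sketch rather than production code.

```python
# Sketch of a Lambda-style serverless handler. There is no server loop,
# no socket, no process lifecycle to manage -- the platform invokes the
# function and hands it a provider-defined `event` dict.
import json


def handler(event, context):
    # `event["body"]` is how an API Gateway proxy integration delivers the
    # request payload; that coupling is the lock-in Polvi is complaining about.
    name = json.loads(event.get("body") or "{}").get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"greeting": f"hello, {name}"}),
    }


# Outside the provider's data center you can only simulate the invocation:
resp = handler({"body": json.dumps({"name": "k8s"})}, context=None)
print(resp["body"])
```

The application logic is trivially portable; the invocation contract around it is not — which is why moving such a function off its provider means rewriting everything except the middle three lines.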

"Serverless is going on its own right now but the enterprise application of serverless will happen in the post-Kubernetes deployment phase of things," he said.

The problem with PaaS, as Polvi put it, is that it's too restrictive and not broad enough. "It was never the entire way the company did business," he said. "Kubernetes fixes that."

That doesn't mean Polvi is a fan. "Lambda and serverless is one of the worst forms of proprietary lock-in that we've ever seen in the history of humanity," said Polvi, only partly in jest, referring to the most widely used serverless offering, AWS Lambda. "It's seriously as bad as it gets."

He elaborated: "It's code that's tied not just to hardware – which we've seen before – but to a data center; you can't even get the hardware yourself. And that hardware is now custom fabbed for the cloud providers with dark fiber that runs all around the world, just for them. So literally the application you write will never get the performance or responsiveness or the ability to be ported somewhere else without having the deployment footprint of Amazon."

That, Polvi says, is why the open-source community has to provide alternatives.

"We've heard from our customers, if you cross $100,000 a month on AWS, they'll negotiate your bill down," said Polvi. "If you cross a million a month, they'll no longer negotiate with you because they know you're so locked that you're not going anywhere. That's the level where we're trying to provide some relief."

With a grin, Polvi said: "We haven't really used this in our messaging but we could make the argument 'put us down and your ROI is your ability to negotiate down your Amazon bill later.'" ®