ADAM GLICK: My Thanksgiving was lovely. I got to visit with friends and family and share good news and hear from other folks and have a nice day off and a day with much to be thankful for.

CRAIG BOX: Great. That sounds exactly like what you'd hope. Well, being-- I don't want to say distinctly un-American, because that--

ADAM GLICK: Although you are distinctly un-American. [LAUGHS]

CRAIG BOX: I do have some friends who are connected to the US, if not from there directly, lived there for a few years. And they invited us around for a Thanksgiving meal on Saturday, which I guess seeing as we don't get Thursday off, we can be given a pass for. And had a lovely turkey, three types of stuffing. It's fantastic.

You can have the stuffing that was cooked outside the bird or the stuffing that was cooked inside the bird, but just generally, a lot of sausage to go along with the turkey. And we were instructed to bring along a pecan pie. So that was a fun experiment, figuring out how one would bake a pecan pie, especially using ingredients that you can source in the UK as opposed to in the US. No corn syrup was harmed in the making of this pie.

ADAM GLICK: Wonderful. I also wanted to say thank you to all of our listeners. One of the things that Craig and I both feel really thankful for is all of you who have tuned in each week and helped the growth of this podcast. It's been great to be a part of all of this. And thank you all for your listenership. It's one of the things that really has made my year special.

CRAIG BOX: I agree completely.

ADAM GLICK: Shall we get to the news?

[MUSIC PLAYING]

CRAIG BOX: Our show last week looked at the Kubernetes ecosystem in China. And this week, we have a new Chinese project to highlight. Wayne is a web-based, multi-cluster, management platform for Kubernetes. It was published on GitHub this week by 360 Search, the second largest search engine in China. Wayne has been powering the Kubernetes environment at 360 Search for more than three years, running nearly 1,000 applications and tens of thousands of containers.

Some translation to English has begun, as the documentation is primarily in Chinese. Projects at 360 Search are named after DC Comics characters. And so Wayne is named after Batman's alter ego. Hopefully, that's not a spoiler to anyone. Interestingly, a 2016 comic called "New Super-Man, Volume 1" introduces a Chinese version of the Justice League with a Batman of China named Baixi Wang.

ADAM GLICK: Weaveworks has released version 1.10 of Weave Scope, a visualization tool for Docker and Kubernetes environments. Scope automatically generates a map of your application, enabling you to understand, monitor, and control microservices. 1.10 adds support for snapshotting and cloning Kubernetes persistent volumes from the application and improves performance when querying Kubernetes objects. Weave Scope is also an upcoming candidate for donation to the CNCF Sandbox.

CRAIG BOX: Another recent open-source release is Dive, built by Alex Goodman, which released version 0.3 this week. Dive is a tool for exploring a container image and its constituent layers. It lets you visually understand the differences between layers and is a great tool for discovering ways to shrink a Docker image.
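As a sketch of how a typical Dive session looks (assuming Dive is installed locally; the image name here is just an example):

```shell
# Open an interactive view of an image's layers: for each layer, Dive
# shows the filesystem changes it introduced and an estimated
# "wasted space" / efficiency score for the whole image.
dive nginx:latest

# Dive can also run non-interactively, e.g. in CI, and fail the build
# if the image falls below a configured efficiency threshold.
CI=true dive nginx:latest --lowestEfficiency 0.9
```

The second form is what makes it useful as a build gate rather than just an exploration tool.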

ADAM GLICK: KubeCon US is now officially sold out two weeks prior to the start of the event. And all 7,500 tickets are spoken for. The lucky ticket holders can expect attendance almost twice that of last year's conference in Austin. A wait list is available for people who want to attend but didn't get a ticket. Tickets for Ice Cube-Con are still showing as available, but you will need a KubeCon badge to attend.

CRAIG BOX: Over the past few months, the industry has declared containerd production-ready for use with Kubernetes. And you can now use the containerd runtime with Google's GKE in beta. Nodes now include the crictl tool, a runtime-independent command-line interface for CRI-compatible container runtimes. Eventually, containerd will replace the Docker runtime in GKE. So we encourage you to try it out and send feedback to the Google Cloud team.
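As an illustrative sketch (the cluster name and zone are examples, and the flag reflects the beta at the time of this episode), creating a containerd-based GKE cluster and inspecting containers on a node might look like:

```shell
# Create a GKE cluster whose nodes use containerd as the runtime,
# via the COS_CONTAINERD image type (beta at time of recording).
gcloud beta container clusters create demo-cluster \
    --image-type=COS_CONTAINERD \
    --zone=us-central1-a

# On a node, crictl works against any CRI-compatible runtime,
# so the same commands apply whether the runtime is containerd
# or something else that speaks CRI.
crictl ps        # list running containers
crictl images    # list images pulled onto the node
```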

ADAM GLICK: Finally, if you like certification and bargains, the Linux Foundation is offering a Cyber Monday special on training and certification bundles. Bundles that usually cost around $599, covering both an online training course and a certification exam, can be had for $179 US. They'll also throw in a free limited-edition t-shirt. Bundles are available for Kubernetes, Linux, and OpenStack. And the offer is valid until December 3rd of this year.

CRAIG BOX: And that's the news.

[MUSIC PLAYING]

ADAM GLICK: Jari Kolehmainen is the CTO of Kontena and a container hacker since the early days of Docker. Previously, he was a Cloud Architect at Digia building cloud services. Welcome to the show, Jari.

JARI KOLEHMAINEN: Thank you. Nice to be here.

CRAIG BOX: You were an early user of Docker. Take us back to that time at Digia. What is it that you were doing that Docker became a good solution for?

JARI KOLEHMAINEN: Yeah, we were building a private platform as a service. And I think it was Docker 0.4 or 0.5 when we actually started to experiment: could this be used as a basis for this platform? And finally, we got into production on a small scale with Docker 0.6 or something.

CRAIG BOX: What year would that have been?

JARI KOLEHMAINEN: 2014, I think it was, something like that. It was kind of crazy, you know. It wasn't super stable back then. And the migrations between Docker versions were quite tricky to get right. But it worked. And we saw that the idea was really good. And you could actually build something on top of a container. So, yeah.

CRAIG BOX: What was the functionality of that platform?

JARI KOLEHMAINEN: If you know Heroku, maybe everyone knows Heroku. So it was something similar. But for the company, internal use and stuff like that.

ADAM GLICK: For those that aren't familiar, is it safe to say that Heroku is a PaaS platform?

JARI KOLEHMAINEN: Yeah.

ADAM GLICK: How did you decide to build a company around this?

JARI KOLEHMAINEN: When we were experimenting with early Docker versions, we saw the opportunity and, like, the idea that people want to use the technology. But for clustering and everything, there weren't actually good solutions that people could spin up easily and, without too much maintenance burden, actually have the clusters up and running and go to production.

So we thought that we should build one. And it should be something that small teams could actually, you know, maintain and use by themselves. Later on, Docker Swarm was something similar to what we did back in those days. But the idea was that small teams can handle the complexity by themselves.

CRAIG BOX: In leaving Digia and founding the company Kontena, you were building a platform. Was your goal to build a similar platform in that it was addressed to people pushing code? Or were you looking directly now at the container abstraction as being the right way to go?

JARI KOLEHMAINEN: We didn't want to make yet another platform as a service. So that's why we went to the container orchestration and tried to make that as easy as possible.

CRAIG BOX: Before Kubernetes came along, what were the features of the first version of your platform?

JARI KOLEHMAINEN: The first version had, like, basic scheduling of stateful services, load balancing, stuff like that. And the 1.0 version had this fancy UI, something similar to what Kontena Pharos 2.0 has. The 1.0 had that, but it was like a cloud service. So you could connect your own clusters to the cloud, and then you could see the state of the cluster.

ADAM GLICK: So you started, interestingly enough, with stateful scenarios, even though I normally think of containers as being most often associated with stateless.

JARI KOLEHMAINEN: Yeah, I wanted to highlight that because that was one of the key features back then. Because nobody else did anything stateful. But we did.

ADAM GLICK: Well before StatefulSets.

JARI KOLEHMAINEN: Yeah, yeah.
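For context, StatefulSets are how Kubernetes eventually addressed this class of workload: each replica gets a stable network identity and its own persistent volume. A minimal sketch (names, image, and storage size are illustrative):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db          # headless Service giving each pod stable DNS
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: postgres:11
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:    # each replica gets its own PersistentVolumeClaim
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```

Pods are created as db-0, db-1, db-2, and each keeps its own claim across rescheduling, which is exactly the stable identity that stateless Deployments don't provide.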

CRAIG BOX: Where was the lifecycle of your product at the time of the release of Kubernetes?

JARI KOLEHMAINEN: I think we might have started at the same time. When the actual Kubernetes went public, we had already done a code and--

CRAIG BOX: A number of people were in that situation. And on behalf of Google, we're very sorry.

JARI KOLEHMAINEN: Yeah, yeah. And quite many people asked, why didn't you just use Kubernetes back then? But Kubernetes was really raw. It didn't have the stateful, you know, things.

CRAIG BOX: Of course.

JARI KOLEHMAINEN: And we wanted to have something that can handle stateful apps. But later on, these paths crossed again. Because nowadays, Kubernetes has very good support for almost any workload. So--

CRAIG BOX: When did it become apparent that Kubernetes was the right horse to bet on and that you should re-platform Kontena on top of it?

JARI KOLEHMAINEN: It was quite obvious, let's say, one year ago that most of the industry had moved to be behind Kubernetes. It wasn't so much behind Docker anymore. So all the, like, new stuff was actually happening around Kubernetes. And that was the key thing for us, that we should actually move from plain Docker, with our own orchestrator, to Kubernetes.

ADAM GLICK: How would you describe what Pharos is?

JARI KOLEHMAINEN: Pharos is a Kubernetes distribution. It's certified by the CNCF. It's open source, but we run an open core model. The primary goal for Kontena Pharos was to make setup of on-prem, bare metal Kubernetes clusters as easy as possible. And we tried almost everything out there and tried to learn what the good things and bad things are about these products or projects. And then we made our own decision based on those.

Should we actually do the product or not? Maybe there is something out there that is actually so good that we don't have to do anything. But quite fast, we realized that there is no project out there that actually has the features that we want to have. So that's why we made Pharos. And the basis of Pharos is just a single binary that can bootstrap the cluster. It uses SSH connections to the machines. And then it's just one single command and you have the whole cluster up and running.
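A sketch of what that single-binary, SSH-driven flow looks like (host addresses, user, and file name are illustrative, following Pharos's documented cluster.yml inventory format):

```yaml
# cluster.yml: describe the machines; Pharos connects to each over SSH.
hosts:
  - address: 10.0.0.10
    user: ubuntu
    role: master
  - address: 10.0.0.11
    user: ubuntu
    role: worker
  - address: 10.0.0.12
    user: ubuntu
    role: worker
```

With a file like that in place, a single command along the lines of `pharos up -c cluster.yml` bootstraps the control plane and joins the workers, with no agent pre-installed on the machines beyond SSH access.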

CRAIG BOX: And then once the cluster is running, there are various add-ons which are available as part of your commercial product?

JARI KOLEHMAINEN: Yeah.

CRAIG BOX: What are the areas that you've chosen to differentiate Pharos from open-source Kubernetes?

JARI KOLEHMAINEN: Maybe the most, like, visual thing is, of course, the dashboard that we now released with 2.0. So, yeah, that's the big thing. We saw that many people are struggling with storage on-prem, on bare metal. So we decided to include a storage solution as a commercial add-on. And, of course, backups are always critical in a production setup. So that was the third one in 2.0. But, of course, we are going to bring more of these add-ons later on. Some of them will be open source. Some of them will be commercial.

CRAIG BOX: How do you decide which to open source?

JARI KOLEHMAINEN: That's a good question. We try to keep these kind of, like, foundation pieces in the cluster as open source, like ingress, or maybe in the future a service mesh is something that we want to have in the open-source version. Yeah, something fundamental to the cluster needs to be open source, and then you can run something better on top of that, like service meshes.

CRAIG BOX: How lucky was it that you named your company something starting with a K?

JARI KOLEHMAINEN: Yeah, quite lucky.

CRAIG BOX: So "Kontena" is the Japanese word for "container"?

JARI KOLEHMAINEN: Yeah, yeah, yeah, yeah. "Kontena" means "container" in Japanese. And the history behind the name is also that our CEO had a background in Japan. So that's the history behind the name.

ADAM GLICK: Where'd you get the name Pharos from?

JARI KOLEHMAINEN: You know, naming things is pretty hard. So we tried to, you know, look to ancient Greek, whatever there is. And we even found "Pharos," which is the lighthouse of Alexandria. So it comes from there. Some relationship to Kubernetes, but pretty loose.

ADAM GLICK: Who are you building this for? It sounds like it may be developer focused, as it takes a lot of the setup and configuration pieces out of it.

JARI KOLEHMAINEN: Yeah, our history is in developer-focused platforms. But I think the main focus for this system is actually the companies that might have the ops talent to configure everything, like the storage, the backups, or whatever is related to the Kubernetes cluster. So in a sense, developers can actually quite easily get the Pharos cluster running. But I think Kubernetes still needs a bit of the ops side, too. So you cannot just, you know, turn on Kubernetes without actually knowing the internals of the system, at least not today.

CRAIG BOX: How do you support people along the lifecycle, after they've installed the product? Are you looking to support them on premise with version upgrades?

JARI KOLEHMAINEN: Yeah.

CRAIG BOX: Or is it a managed service?

JARI KOLEHMAINEN: Yeah, one of the key features of Pharos is actually the lifecycle management. So usually people are a bit afraid of upgrading between Kubernetes versions, at least after they have been trying it out. There is usually something breaking. And Pharos is trying to solve this problem for them. So you can just download the new version of Pharos and then execute the same binary that you used for the initial installation. And it will handle the upgrade for you.
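As a sketch of that upgrade workflow (assuming a cluster defined in a cluster.yml inventory file; exact command names can vary between Pharos versions):

```shell
# Download the new Pharos release, then re-run the same bootstrap
# command used for the initial install. Pharos compares the desired
# component versions against the running cluster and rolls out only
# what has changed, node by node.
pharos up -c cluster.yml

# Afterwards, confirm the control plane and all nodes report the
# expected Kubernetes version and are Ready.
kubectl get nodes -o wide
```

The key design point is that upgrade and install are the same operation, so there is no separate upgrade procedure to learn or get wrong.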

CRAIG BOX: One of the features in the Enterprise Edition of Pharos is multi-cluster management.

JARI KOLEHMAINEN: Yeah.

CRAIG BOX: What does that mean to you?

JARI KOLEHMAINEN: Multi-cluster management is basically, like, a manager for Kontena Pharos clusters. So it's an overview of the multiple clusters that companies might have. And in the future, it will also have, like, provisioning capabilities. So you can just do some kind of self-service of Kubernetes clusters within a company or enterprise or whatever the environment is.

CRAIG BOX: If I want to run some of my infrastructure on a cloud and the cloud provider has a service like GKE and I want to run some on premise, how does Kontena Pharos help me with that?

JARI KOLEHMAINEN: Actually, many people are using Kontena Pharos like that. So they have a public cloud like Google's Kubernetes Engine, but they have some need for running workloads on-prem or in a different cloud provider, and for some reason they cannot use a built-in tool there. So then they usually come to us and ask, could this be used as an on-prem, or private cloud, or multi-cloud kind of solution? And usually it works quite nicely for them, because it's so easy to spin up the cluster.

ADAM GLICK: What is Magneto?

JARI KOLEHMAINEN: It's our new tool for, like, automating worker node provisioning on bare metal. So if you are familiar with CoreOS Matchbox, the PXE boot thing, Magneto is something similar, but it's actually, like, Kubernetes-native. So the Magneto server, or the master, is actually running on top of Kubernetes. And when you boot those bare metal boxes, they will call to Magneto and it will start configuring the boxes. So that's something that quite many enterprises are after, because provisioning bare metal is usually quite a complex topic.

ADAM GLICK: You've mentioned bare metal a couple of times.

JARI KOLEHMAINEN: Yes.

ADAM GLICK: What made you decide to focus on bare metal versus other abstraction layers like virtual machines?

JARI KOLEHMAINEN: First of all, we like bare metal because there's always a bit of overhead running something on top of a VM. Yeah, that's the main reason.

CRAIG BOX: The secondary reason is bare metal generates heat. And Finland is a very cold country. You need all the heat.

JARI KOLEHMAINEN: Yeah, that's true. Yeah. So basically, with Kubernetes, you don't necessarily need an actual virtualization layer. So why not use all the power that you have? If you have a private data center and you have the machines, why not?

ADAM GLICK: Have you had to go through a hardware certification process then? Or can this install on any x86 box?

JARI KOLEHMAINEN: We haven't done that, yeah. But maybe someday we have to, yeah. And actually, speaking of bare metal boxes, we are also supporting the ARM64 architecture. So there are quite crazy machines out there, like 96-core beast boxes, that you can use for bare metal. And if you put Kubernetes on those, you will have quite a powerful cluster.

CRAIG BOX: When you're building a distribution of Kubernetes, even if we think of a Linux distribution, the vendors have opinions. And they pick what software to include in their package repository or what software to pre-install.

JARI KOLEHMAINEN: Yeah.

CRAIG BOX: How do you make the choices on behalf of your customers as to which things you will support in the distribution?

JARI KOLEHMAINEN: Yeah, that's quite a complex process, actually. When we released the initial version of Pharos, we tried to look at something that most people are using, like, for example, Ingress-NGINX. So we decided to pick that, even though there might be something better out there. If it's something that most people are using, we decided to pick that as an add-on for the Pharos distribution.

But then there are some things in the commercial offering, like the storage, that is actually built on top of Rook, on top of Ceph. And I'm not sure if that's actually the most common storage system today for Kubernetes. But that was something that we tested, and we asked users and also tried to, you know, check what the actual use cases for storage are. You don't necessarily need this storage in the cloud, because there you can use whatever persistence there is, through the cloud integration that Kubernetes has.

So we decided to focus on on-prem, something that works on-prem where the actual cluster is, like, semi-static. You can dedicate boxes for the storage. So that was the idea behind why we chose, for example, Rook. So semi-opinionated but, yeah, there is some common sense behind our choices.

ADAM GLICK: What's next for Kontena?

JARI KOLEHMAINEN: Yeah, I think Pharos 2.0 has been well-received by the community. So I think we are going to add features, maybe make it more stable, and build upon that. So more add-ons, a more beautiful UI, maybe someday something for developers, like Knative integration or something like that.

CRAIG BOX: Are there any features that you would like to see the community build in order to help support the bare-metal use case?

JARI KOLEHMAINEN: Yeah, I think there are many things that are actually improving the situation on bare metal. Bare metal is also interesting because of these device plugins. So there might be some use cases where you want to have a GPU for Kubernetes. There might be some cases where you're actually running Kubernetes clusters on edge computing. So I think the vendors are actually doing a very good job there. So, yeah.

CRAIG BOX: You mentioned ARM64. Do you feel that clusters of low-powered devices, or for IoT, for example-- do you think that's a use case that either Kubernetes or your business might move in the direction of?

JARI KOLEHMAINEN: Yeah, I think there is some demand for this kind of edge computing clusters. We are already seeing those out in the wild. And I think when the next generation of mobile networks come along, they will generate so much more data that the edge computing, the IoT stuff will become more common. And there might be a big opportunity for ARM64-based machines. You never know.

CRAIG BOX: A Beowulf cluster of iPhones.

JARI KOLEHMAINEN: Yeah, yeah.

ADAM GLICK: Thanks, Jari. It was really great having you on.

JARI KOLEHMAINEN: Thank you.

ADAM GLICK: To learn more about Kontena Pharos, you can go to kontena-- that's K-O-N-T-E-N-A-- dot io.

[MUSIC PLAYING]

ADAM GLICK: Thanks for listening. As always, if you've enjoyed the show, please help us spread the word and tell a friend. If you have any feedback for us, you can find us on Twitter @kubernetespod or reach us by email at kubernetespodcast@google.com.

CRAIG BOX: You can also check out our website at kubernetespodcast.com for show notes and transcriptions. Until next week, take care.

ADAM GLICK: Catch you next week.

[MUSIC PLAYING]