CRAIG BOX: I love a blog series by a guy called Jimmy Maher, who writes this great blog called "The Digital Antiquarian," where every couple of weeks, he's working his way through computer gaming history, starting at the 8-bit era, the 1980s, working his way up.

He's at 1993 now. And this week's article was about what is actually my favorite game of all time, a game called "Day of the Tentacle."

ADAM GLICK: Oh, yes.

CRAIG BOX: It's inspired me that I need to go out-- there's been a remastered version of it made in the last few years. So I'm going to go and replay that game and relive the experience of being a teenager and trying to solve all those puzzles.

ADAM GLICK: All those old SCUMM games were just so good. But I think it was the writing that really made those games. The jokes are fantastic.

CRAIG BOX: Yeah. So one of the things you can do-- I remember a lot of the answers to the puzzles, so there's not going to be so much of a challenge there. But it's great being able to do things like turn on the director's commentary and listen to what they were thinking about when they designed all these features.

ADAM GLICK: Awesome. Speaking of interesting games and writing, I stumbled across a game this week called "The Vigil Files." It's another one of these where your device-- basically, they give you an OS experience within the OS. So they create fake email, fake texting, calls, files, all that stuff. But it ties it into the real world in a way that other ones of these games haven't.

So as you are doing your investigation into what has happened-- a person has gone missing and you're trying to figure out what happened-- you actually have to go to real-world websites and actually search the regular internet. Because they've taken it a step further than just the experience in the device, and actually built websites and built profiles of things out on the internet for you to go find and use that data back in the game. So it's another level of immersion. If we continue to follow this to its logical conclusion, at some point, you'll just have regular old life mysteries requiring no devices at all.

CRAIG BOX: I wonder, though. What happens when people solve the game and put up a walk-through? And the action you have to take in order to solve the game is to search for something on the internet. Surely you'll just find the answers before you find the things that they tried to hide for you.

ADAM GLICK: That is an interesting question.

CRAIG BOX: Better play it quickly before it's spoiled for you.

ADAM GLICK: I have only just begun the game, and I can tell you, from the internet searches that I've done, the things that show up are indeed the sites that I'm meant to find. I've not found any spoilers yet. So--

CRAIG BOX: In a week's time, it'll all be "Gamasutra."

[LAUGHTER]

ADAM GLICK: We'll see. They've done a good job so far.

CRAIG BOX: Let's get to the news.

[MUSIC PLAYING]

ADAM GLICK: Break out your party hats, streamers, and cake, because Kubernetes' birthday is upon us. Last week marked the fifth anniversary of Google making Kubernetes available, and there have been lots of events, interviews, meetups, tweet storms, AMAs, and infographics this past week. Personally, we'd like to wish Kubernetes a happy 5th birthday, with many more good years to come.

CRAIG BOX: Apple held their Worldwide Developers Conference last week, and I am somewhat surprised to announce that there was some Kubernetes-related news. MacStadium, a hosting provider for Mac hardware commonly used by iOS developers, announced Orka with a K, which stands for Orchestration with Kubernetes on Apple. Orka puts macOS in a Docker container, and lets you use Kubernetes to manage the lifecycle of macOS VMs running on real Mac hardware, as required by Apple's license agreement. The platform has been launched into public beta and is now taking sign-ups, with full release slated for later this year, alongside the return of the cheese grater Mac Pro.

ADAM GLICK: Platform9 ran a survey at their booth at KubeCon EU, and points to five big takeaways in a blog post this week. They said that Kubernetes has had a massive increase in production use, with over 47% of respondents saying that they run Kubernetes in production today. Operational complexity is cited as the biggest challenge for organizations, followed by the migration of legacy apps and access to talent. They also noted that hybrid deployments are growing, and a majority of respondents stated that they run both in a public cloud and on-prem, though on-prem is still the most popular deployment location. Other OSS projects in the cloud native world are growing fast, with Prometheus being the most common in production, and Istio being the one that is being evaluated by the most people. Finally, and perhaps unsurprisingly, CI/CD was identified as the most common workload running in people's Kubernetes clusters.

CRAIG BOX: Did he just say access to talent is a challenge? Fine people such as yourselves with Kubernetes skills continue to be in high demand. An article in The Enterprisers Project ran the numbers. And while their method is decidedly unscientific, they call out how much you could be making bringing container orchestration to the world. In particular, the article claims the US national average for jobs that mention Kubernetes is over $140,000 per year. The number of jobs asking for Kubernetes has increased 30% since January on one US job site. Plus, we're always hiring at Google.

ADAM GLICK: And that's the news.

[MUSIC PLAYING]

Darren Shepherd is the co-founder and chief architect at Rancher Labs, where he's led the development and creation of RancherOS, Longhorn, k3OS, and Rio, amongst others. Prior to Rancher, Darren served as senior principal engineer at Citrix, where he worked on CloudStack, OpenStack, Docker, and building infrastructure orchestration technology. Prior to Citrix, he worked at GoDaddy, where he designed and led the team that implemented both public and private IaaS clouds. Welcome to the show, Darren.

DARREN SHEPHERD: Thank you. I'm really excited to be here.

CRAIG BOX: Your Twitter handle is @ibuildthecloud, and with that bio, that seems quite valid.

DARREN SHEPHERD: Yeah.

CRAIG BOX: You've been working at Rancher since 2014. That was the year that Kubernetes was founded. What was the motivation for starting the company?

DARREN SHEPHERD: My background-- my handle is @ibuildthecloud, so I've been doing cloud-based technology. Specifically, I started really in IaaS, so I worked a lot in the OpenStack and CloudStack space. And so I was working in that space. That's what I was doing when I was at Citrix. And so when Docker came on the scene, with all the containers and everything, it was pretty clear that it was going to be the next major thing.

CRAIG BOX: Yeah.

DARREN SHEPHERD: And so really, why we started Rancher was just trying to figure out some way to make containers usable and manage them. We had a history of building orchestration systems around VMs, so we set out to build orchestration systems around containers.

CRAIG BOX: So when Rancher was launched as a platform, what orchestration systems existed and how did you adopt them?

DARREN SHEPHERD: At the time when we started, there wasn't really a lot. I mean, I believe it was before Kubernetes was actually announced. So there really weren't many orchestration systems, so we actually started building our own. Our original product, Rancher 1.0-- we're on Rancher 2.2 or something now-- was built on our own orchestration technology. And at the time-- we've always been a very, very, very user-focused company, so we just followed where the users were. Users were all basically just running Docker, like single Docker daemons. And so we put together an orchestration system that would piece together all the single Docker daemons, make them into a cluster, and deploy containers. So that's where we started.

Then eventually, as Kubernetes became more and more popular, we ended up with our Rancher 2.0, doing kind of a hard pivot and just moving completely to Kubernetes. It was pretty clear that that's where the market was going, but also, it's just a superior technology to anything we built.

ADAM GLICK: How did you move from your own orchestration system? When people had built on the 1.0 product, how did you make that change to 2.0? Was that a complete rewrite?

DARREN SHEPHERD: Unfortunately, yeah, it was a complete rewrite. I mean, it was a pretty drastic change. So what we had before was actually written in Java. It was a Java-based system and it was just talking to the Docker daemons. And so when we initially picked up Kubernetes, we integrated Kubernetes into Rancher.

So our Rancher 1.6 product does support Kubernetes. And so it was kind of a mixture of our orchestration and Kubernetes. And technically, that's really not the best way to do things. You kind of fight between the two systems. So when we did 2.0, we were like, we're going to go completely all in on Kubernetes, just 110%.

I think we were kind of ahead of the curve, in terms of writing controllers and whatnot. Because the entire Rancher architecture and everything is based on top of Kubernetes. It's just a collection of controllers. Rancher just actually runs as a set of controllers on top of Kubernetes. So you have a dedicated Kubernetes cluster for the management plane, which then manages a bunch of other clusters.

So it was a pretty big rewrite, pretty massive undertaking, and it was quite a learning experience. We learned quite a bit. I mean, it was nice to program everything in Kubernetes style because that helped us understand a lot of what users are going through. And then eventually, as people-- like the operator market and those things have taken off, we have a really good understanding of how to build those things.

CRAIG BOX: As you made that pivot to being more pure play Kubernetes, the cloud vendors were starting to bring services out as well. Do you target your product more at people who want to run in the cloud or on premises, or a combination of the two?

DARREN SHEPHERD: It's really a combination of the two. What we're selling right now-- our pitch or product or whatever-- is really a multi-cluster management tool. And that's where we've always focused. Because it was-- there was kind of this idea in the beginning of I'm going to run this big, massive Kubernetes cluster. And then as we saw users adopt it and pick it up in enterprise, we weren't really seeing that. They were actually building a lot of clusters. And so we pretty quickly determined that we needed some solution that was not just oriented towards a single cluster, but managing a bunch of them.

And so we have our own distribution of Kubernetes called RKE, but that's not an essential part of Rancher. If you're using GKE, EKS, AKS, one of those cloud-hosted offerings-- or DigitalOcean-- basically any Kubernetes, we'll support. And I think that's what's helped us from an open source perspective. People love tools that will work pretty much anywhere. It's not heavily tied to our distribution.

ADAM GLICK: You mentioned in the intro that you worked on Longhorn.

DARREN SHEPHERD: Yeah.

ADAM GLICK: You want to say what Longhorn is? I assume it is not the ill-fated version of Microsoft Windows.

DARREN SHEPHERD: So Longhorn is actually our storage technology. The funny story there is actually, when we first pitched the company, we actually pitched building a storage technology, and pitched to the investors or whatever. By the time we started the company, we were pretty heavily focused on containers. But the original pitch was actually storage technology, so we started developing a storage technology quite early, and that's what Longhorn is. And so we really wanted to orient it towards just container-native storage, and we've worked on that technology for quite a few years.

That market is evolving. It's coming along. So we're finally just going to be pushing out probably a GA of that product this year, so four or five years later. So it's like, we started with storage, but we ended up doing all this orchestration and management.

But one of the interesting things about Longhorn, as we put that out there, is that OpenEBS was actually built initially on Longhorn. Yeah. I think, at this point, they've replaced the storage engine or whatever. But we started working with them pretty early.

CRAIG BOX: In retrospect, do you regret choosing that name?

[DARREN CHUCKLES]

DARREN SHEPHERD: Not really, no. When we started, it was at the time of Docker, and everything was a whale or a dolphin, or I don't know. So we wanted something that was just completely different, so we went with cows. So all of our terms and everything are typically a lot more like cow and rodeo and cowboy-type stuff.

CRAIG BOX: So they're from the pets versus cattle? Is that where that came from?

DARREN SHEPHERD: Yeah, that was actually where-- so the origin of the company, Rancher, was-- our first orchestration system was written as an open source project when I was working at Citrix, and it was called Cattle, and it was that analogy. So that analogy became really popular in the OpenStack world. I mean, it existed before that, but in the OpenStack world, it started becoming this popular thing that people were throwing around.

And so when I built that orchestration system, I called it Cattle. And then it was like, well, you need a name for our company, and then Rancher was something related in the same realm.

CRAIG BOX: You can see a picture of Darren in his Rancher shirt in the show notes, shaped like the food pyramid of beef, I guess, chopping up all of the parts of the cow and how they relate to Kubernetes.

DARREN SHEPHERD: Yeah, I always feel bad when people think that I actually ranch cows.

ADAM GLICK: They're like, I've never heard of the infrastructure part of a cow.

DARREN SHEPHERD: Yeah, yeah.

ADAM GLICK: Which cut is that?

DARREN SHEPHERD: Yeah.

CRAIG BOX: Is that tasty? How do I roast that?

DARREN SHEPHERD: They'll come up and they're like, oh, my brother's a rancher or something. Then they get a little closer and start reading the shirt and they're like, wait, what is that? I'm sorry, I don't know anything about that. I just program computers.

CRAIG BOX: Well, in fairness to you, we have been having a bit of fun with some of the names of your products recently.

DARREN SHEPHERD: Oh, yeah.

ADAM GLICK: So recently, you launched k3s.

DARREN SHEPHERD: Mm-hmm.

CRAIG BOX: Or "Keys," as I like to call it.

DARREN SHEPHERD: Yeah.

ADAM GLICK: First off, how do you pronounce it?

DARREN SHEPHERD: Yeah, so we haven't actually said there's an official pronunciation. I always say K-3-S.

CRAIG BOX: Would you like to make that official here and now?

DARREN SHEPHERD: I don't know. I appreciate all the funny different ways that people pronounce it.

ADAM GLICK: The bug thread, I do believe, has Boaty McBoatFace as one of the potential pronunciations.

DARREN SHEPHERD: Yes, yes. That was the one I was hoping would officially catch on.

CRAIG BOX: Not cow-themed.

DARREN SHEPHERD: No, no. And so k3s-- where the project came from and the name came from-- it was a side project of my own. I was working on this project Rio, which is something we actually just recently announced. So at the time, we were just looking for-- we were like, I just need a simple, simple, simple way to spin up a Kubernetes cluster, and so I built k3s. And so when I first created the repo, I just called it k3s, and it still has the title. So it's five less than Kubernetes. Because the idea was that it was just supposed to be smaller.

CRAIG BOX: Was it not called K-A-T-E-S at one point? Like, KATES as an abbreviation?

DARREN SHEPHERD: It was, it was. We dropped that because--

CRAIG BOX: I said that wasn't half-confusing.

DARREN SHEPHERD: So the actual-- it used to be-- so it was k3s, which stands for KATES, K-A-T-E-S, which is abbreviated. What was it? No, it's--

CRAIG BOX: It's a longer abbreviation of a shorter term.

DARREN SHEPHERD: Yeah. So it was like, k3s stands for KATES, which is another way of saying K-8-S, which is Kubernetes-- so it was completely geeky.

CRAIG BOX: So Boaty McBoatFace it is.

DARREN SHEPHERD: Yes, yes. Yeah, yeah. Our marketing team hates it.

ADAM GLICK: What changes did you make from standard upstream Kubernetes with k3s?

DARREN SHEPHERD: So the idea-- when I first built it-- because I was saying it was just kind of a side project-- I just ripped out all the functionality I didn't like. So that was the objective criteria there. But no, once we finally launched it as a project-- there's a whole story of how it kind of came to be, but when we finally launched it, what we focused on was just pulling out the functionality that most likely will be removed from Kubernetes anyways.

So we're just kind of ahead of the curve on lightening up Kubernetes. So we pulled out cloud providers, all the in-tree storage driver plug-ins, because pretty much all these things can be replaced with an out-of-tree solution. But then we also deleted all of the non-default admission controllers. And the reason for that is just our experiences with the cloud-hosted Kubernetes. You don't have the ability to flip on arbitrary admission controllers.

CRAIG BOX: Mm-hmm.

DARREN SHEPHERD: So as a user, you really shouldn't build a solution around one of these custom things because it's not very portable. It's not going to work anywhere. So we just kind of said, well, you really shouldn't be using these things, so we just deleted them all.

CRAIG BOX: Do we need replica sets? They can go.

DARREN SHEPHERD: We actually-- we tried to delete them, but no.

[LAUGHTER]

So we started off deleting APIs too-- older APIs. But we found various weird use cases. People are still using them. So we brought them back in. So the majority of things that are missing right now are the admission controllers, storage drivers, and cloud providers.

ADAM GLICK: I've seen people talking about k3s in the context of edge computing, and on low-power machinery. But a lot of the chatter that I've seen, when people talk about it online, and some of the things you've talked about today, is about really simplification. What's your vision for where this will fit for people?

DARREN SHEPHERD: All of the above. So going back to why we ended up launching this thing, it was a side project that some people noticed that I was working on, and then they got interested in. They were like, hey, that'd be really cool for a little Kubernetes. So we saw that there was some interest.

And so when we actually looked at putting together a real project and making a big announcement, at the same time, we were basically starting to get all these requests from companies that wanted to put Kubernetes on the edge. So it really made sense that there was an immediate use case and a business case for this, and that was edge computing.

So when we marketed this, we put everything towards edge. That was the message we put out there. But the reality is k3s is really two things. It's a smaller distribution-- it's trying to reduce the memory consumption. But it's also very easy to install. So for the edge, that works well because it's a low-touch environment. You want more of an appliance. So it worked well for the edge. But that easy-to-install part, I think, is what's really caught on with the community.

So even though we marketed it towards edge, and we're having good success there and working with a lot of companies putting it on the edge, from a user perspective, they've just been putting it in development, CI/CD. It's just so easy to install that people are just using it everywhere now.
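For reference, the install experience Darren describes is the published k3s quick start: a single shell command per node. The server URL and token in the agent step below are placeholders you would substitute from your own installation.

```shell
# Install k3s and start a server (install script published at get.k3s.io):
curl -sfL https://get.k3s.io | sh -

# Join an additional node as an agent. K3S_URL and K3S_TOKEN are
# placeholders; the token is read from
# /var/lib/rancher/k3s/server/node-token on the server node.
curl -sfL https://get.k3s.io | K3S_URL=https://myserver:6443 K3S_TOKEN=mynodetoken sh -
```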

ADAM GLICK: What does the architecture look like?

DARREN SHEPHERD: So it's still just Kubernetes, but we don't have an HA. So the way we package it is-- Kubernetes has the various components-- API server, controller manager, scheduler, kube-proxy. We package everything into two basic components, a server and an agent. So for a user, it's pretty obvious if you launch a server, you run agents on the-- right now, we don't support HA.

HA is coming. That's kind of the big blocker for us to say that it's beta. But we wanted to make sure that-- obviously, Kubernetes already supports HA, but the idea of putting HA in k3s was like, well, HA should be just as easy as the standalone. And so we're really putting a lot of effort into making it really easy, not have to worry about all the load balancing issues and spinning up etcd and stuff.

ADAM GLICK: For the edge scenarios, do you support ARM as well as X86?

DARREN SHEPHERD: Yeah, so out of the box, we support Intel x86 64-bit, and then ARM64 and ARMv7. We do actually get a lot of requests for ARMv6. We don't support v6 and older. That's just because of the ecosystem-- there's just not a lot that's available. But technically, you can compile it. It's just a lot of effort.

But so we support ARMv7. And there was actually a lot of effort put in to getting things to just seamlessly work on ARM, especially Raspberry Pi, because it's so popular among hobbyists. And I mean, there are valid business cases, too, where people are putting Raspberry Pis in production.

CRAIG BOX: When it comes to edge or appliance computing, quite often, you're talking about devices that, by definition, they're not in the corporate center and they are possibly in stores or in factories or so on. What are the considerations in making something like that easy to install for people who are in an environment where maybe there's no IT person around, or you're connected to the larger corporate network, but you're not in the core of it?

DARREN SHEPHERD: Yeah. So one of the real common use cases that we're seeing right now is putting it in the store, so some type of retail branch. And so for those situations, they don't really have a lot of technical expertise on premises. They just need something where they can basically ship the devices with something already pre-installed. It should basically just work. If it crashes or something, it should be able to be rebuilt very easily-- basically just re-image it or reflash the device.

So the operations need to-- it really needs to be treated like an appliance. And these days, what we're seeing with a lot of-- the way people are deploying-- if you're deploying Kubernetes yourself-- when we talk about like that pets versus cattle thing, is people really treat clusters as pets. Because they hold state and they're very protective of them. And so we can't treat these things like that. It has to just be you can just kind of blow away the cluster and it comes back. So that's one of the considerations.

The other is that when you're looking at, I've got 2,000 stores-- well now, how do I manage 2,000 clusters? How do I roll out applications across all these things? And to make it even worse, they typically have very poor connections. Some of them might still have dial-up or whatever. So how do we manage those things? So those become some of the challenges.

And one of the things that we're focused on, besides just k3s, which is just the distribution, is how do we put together the full solution? So that's where k3OS comes in, and then also a management portion for managing the applications.

CRAIG BOX: You've just mentioned the operating system. So it seems like if you want to help people get something installed, then that's something that would benefit you to have more control of it. What's the story of how k3OS came to be?

DARREN SHEPHERD: Yeah, k3OS was just kind of the logical extension of, OK, well, if I'm putting this distribution on the edge, what's the operating system that I use? And it's like, well you can really use any operating system. It works on anything. But the majority of the OS is not really needed anymore. So the idea with k3OS was, what's the minimal amount of OS that we can have to run k3s, which is basically nothing.

But then, the key thing that we wanted to do was, well, can we actually manage the operating system using Kubernetes? So one of the key components of the k3OS is an operator that runs in the cluster, which is also looking at the OS. And so the way that you upgrade the OS is you're actually just updating a resource in Kubernetes.

So now, I could push an update for the operating system to my cluster through a Helm chart. It just fits into whatever Kubernetes flow you have. One of the things that we absolutely love about Kubernetes is the API, the consistency, how it plugs into all these tools and everything. So it's like, if I have operators that can, let's say, deploy a database, why can't they manage the OS too?

CRAIG BOX: You already had a Rancher OS. Why did you build a new operating system?

DARREN SHEPHERD: Really, the reason for it is-- Rancher OS, when we built it-- the idea was always that it was supposed to be singularly focused on just running Docker containers. We didn't want any scope creep. We just wanted to do one thing and do it really well.

And so we built Rancher OS, and we have a good user base and customers, and people really like that product. And it didn't really make sense to take that and then turn it into a Kubernetes thing. If Rancher OS 2 was then Kubernetes, it just didn't seem to be the right thing. So we really wanted to build an OS really tailored for Kubernetes. We really felt it was a different product. So that's why we have Rancher OS and then k3OS. They're two different realms right now.

ADAM GLICK: How should we think about k3OS in the existing ecosystem of container and Kubernetes-focused operating systems?

DARREN SHEPHERD: Oh, that's a good question. I think it fits into the realm of container distributions, but I think it goes a little bit further than just a container distribution. Because k3OS is really intended to be an operating system specifically for a Kubernetes cluster. So if you're not running a Kubernetes cluster, it only has half of its functionality. You can't really upgrade it, for example. So it's very similar to CoreOS, Atomic, Rancher OS-- the previous one.

So it kind of fits into that realm. It has a lot of features similar to those things. But then it kind of takes it one step further of not just being a container distribution. It's more like a-- I don't know-- distribution for a cluster. So it's a little bit of uncharted territory.

CRAIG BOX: Now, again, we didn't have any guidance on how to pronounce this, so a little birdie told me it might be pronounced "chaos," and I'm going to tell you, the little birdie was probably me. I don't want to pretend that Adam made me do it or anything.

DARREN SHEPHERD: That's a good one, yeah. Well, the k3s name was so bad, then we figured the logical thing would be a k3s OS, but that just seemed like too much. So we just dropped the S and went with k3OS.

CRAIG BOX: You could drop the S and call it "keso."

DARREN SHEPHERD: Yeah.

ADAM GLICK: That's kind of a cheesy name.

DARREN SHEPHERD: No, I think it works out perfect because when we launched Rancher OS, the joke was always that it was rancheros, like huevos rancheros.

CRAIG BOX: Oh.

DARREN SHEPHERD: Yeah. So we could have huevos rancheros and queso.

CRAIG BOX: Cheese comes from cows.

DARREN SHEPHERD: Yeah.

ADAM GLICK: Can we look forward to more things that have the k3 prefix on them?

DARREN SHEPHERD: Yeah. So there was something that came out of the community, and then we kind of adopted it. It's kind of this official community project. So there's k3d, which is this wrapper around k3s, specifically for development, that follows the same style as "kind", which is Kubernetes in Docker. So people really, really like this tool. It's just a simple command line that will spin up k3s clusters in Docker on your laptop or whatever. So for development, you get a cluster in like three seconds. So it's been really cool.
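As a rough sketch of the workflow Darren describes, spinning up a local cluster with k3d looks something like this. Note the command syntax shown is from the early k3d releases current around the time of this episode; later versions moved to a noun-verb style ("k3d cluster create").

```shell
# Create a local k3s cluster running inside Docker (early k3d syntax;
# newer releases use 'k3d cluster create dev' instead):
k3d create --name dev

# Point kubectl at the new cluster and check that it came up:
export KUBECONFIG="$(k3d get-kubeconfig --name dev)"
kubectl get nodes
```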

So our marketing team, they hate k3s. But we keep pitching more. So there's another project that we just started working on, but we haven't really fully put it out there yet, which is called k3v, which is in the virtualized Kubernetes realm. Yeah, so you'll probably see us fool around with the name a little bit more.

CRAIG BOX: Have you heard of a band called the Kaiser Chiefs? No one ever abbreviates them K10s.

DARREN SHEPHERD: Yeah.

CRAIG BOX: That's dumb.

ADAM GLICK: I think you should write that into them.

DARREN SHEPHERD: Yeah.

ADAM GLICK: So breaking from the k3 prefix, you recently announced at KubeCon, the release of Rio.

DARREN SHEPHERD: Mm-hmm.

ADAM GLICK: What is Rio?

DARREN SHEPHERD: Rio-- I mean, I think for a lot of people, they'll kind of understand-- so Rio is built on top of Istio and Knative, so it brings in a lot of functionality from those platforms. Really, from an end user perspective, what we've been calling it is a MicroPaaS. Because it has a lot of the same functionality, historically, that you would see in a PaaS, but we feel a lighter-weight, simpler approach to this. In terms of the type of functionality that you're going to get out of it, it's going to be automatic DNS, TLS, HTTPS, auto scaling, source-to-image building, so like Git webhooks. All the service mesh type functionality, which is monitoring and the metrics, routing, circuit breakers-- so all those types of things.

But fundamentally, Rio is really focused on the end user of trying to get users using Kubernetes. So not so much-- we do a lot right now around people operating and running clusters. And Rio is much more of how do you, within an organization, expose Kubernetes to people and get them to easily consume it?

CRAIG BOX: What is the interface that one of those users will have?

DARREN SHEPHERD: The whole thing is written in a Kubernetes-style architecture, so it's basically a controller running. It's represented as a couple of very simple Kubernetes types, so custom resources. So you can interact with it through kubectl or any other existing Kubernetes flow that you have. But then we also provide a CLI, which makes the experience really, really nice and really easy. So even if you don't want to deal with the underlying types, you can just use the CLI. But if you're more savvy, you can use kubectl and automate it however you want.

ADAM GLICK: Is it tied to a particular language that people would use, or is it language agnostic?

DARREN SHEPHERD: Under the hood, we're using Knative. So Knative has these build templates, and those templates are very agnostic-- they can build pretty much anything. So with Rio, we actually start with just Dockerfile-based builds. And so it'll work with any language-- basically anything. But with the build templates, you can bring in buildpacks, which then would support specific Java or the Heroku style, or you can even customize it further and just do one-off things that you might see within an enterprise.

CRAIG BOX: Do you see that it's practical for developers to be given a hook in a Git repository where they push things and the magic happens? Or do you feel that they will actually want to have a bit more control and customization over how their code is built?

DARREN SHEPHERD: I think it depends. And that was kind of the idea of why we say Micro-PaaS. We were trying to figure out smaller, less restrictive components or primitives. And so if you want, you can just basically give Rio a Git location and it will build it and deploy it. If you don't, then you can plug it into a pipeline really easily because it is still just a Kubernetes API, and you can just build your image your own way, and then just push a resource into Kubernetes.

So we think it's flexible enough that it will accomplish both. Because we see it even within our own company, the stuff that we deploy to production. Some things, we have running through a pipeline and it's very extensive, and then some things are like our website, which we just really want to push out to production through a very simple build.

ADAM GLICK: We've talked about a number of the projects that you've put together at Rancher and put out there. We haven't talked about foundations. Do you intend for these to eventually go into a foundation, or are they things that you want to kind of keep at Rancher, but make sure that they're open source?

DARREN SHEPHERD: No, all of these projects that we've mentioned-- k3s, k3OS, and Rio-- we'd very much like to be able to push into a foundation if we can. Rio, since it just came out at KubeCon, it's a little early to start those discussions. But k3s, which is the one we announced first-- talks are going on right now about how we can push that, if it makes sense. We'd very much like to see it go into the CNCF. As a company, we're very, very focused on open source. We actually don't sell-- we don't have any proprietary stuff. And so we're very interested in getting these things into a foundation.

CRAIG BOX: I've just realized that Rio can legitimately be abbreviated R1o.

[DARREN CHUCKLES]

There you go. That's tying together all the naming.

ADAM GLICK: I think, if they're going to do that, they need to make it dance on the sand.

DARREN SHEPHERD: Wow, that's pretty good.

CRAIG BOX: And with that, we'd like to say thank you very much to Darren for joining us today.

DARREN SHEPHERD: All right, thank you.

CRAIG BOX: You can find Rancher Labs at rancher.com, and find Darren on Twitter @ibuildthecloud.

[MUSIC PLAYING]

ADAM GLICK: Thanks for listening. As always, if you've enjoyed this show, please help us spread the word by telling a friend. If you have any feedback for us, you can find us on Twitter @KubernetesPod, or reach us by email at kubernetespodcast@google.com.

CRAIG BOX: You can also check out our website at kubernetespodcast.com with our show notes and episode transcripts. Until next time, take care.

ADAM GLICK: Catch you next week.

[MUSIC PLAYING]