ADAM GLICK: Having a little fun this week enjoying some puzzles. I don't know how many people out there have gotten into letterboxing, or puzzle hunts, or some of the famous ones, like the hunt MIT runs every year here in the States; a number of companies run their own, too. But in some of the buildings here at Google, they have puzzles that go throughout the building. And you can start going and trying to solve them. And I'm working my way through the floors in one of the buildings right now. I've made it through three floors, and it's just a lot of fun to dig through the puzzles.

ADAM GLICK: There's a storyline as always, and usually there's some sort of meta puzzle after you solve the others. But in this case, it is a story about lost data from a data center and finding the encryption keys to decrypt the data and put the data back. And so you have a bunch of sets of letters that are usually some sort of coded piece, and something that can become some sort of decryption key if you can put it all together. And the question is, how can you do that? And each one of them is a little different.

CRAIG BOX: Given that these buildings are in California and you're normally in Seattle, what do you do when you need the next clue?

ADAM GLICK: That is a good question. I've taken a bunch of pictures and hope I have them all. But since they are literally scattered and hidden throughout the building, if I have missed one, then I may have to wait until my next trip down to California.

CRAIG BOX: Now you mentioned letterboxing before. I haven't heard of that. What's letterboxing?

ADAM GLICK: Letterboxing, I think, was actually English originally. But it is kind of a mixture of puzzle solving and a physical challenge. Typically, you get a set of puzzles, and those puzzles give you a location that you have to go to. Those locations usually have little boxes with stamps, and you get a stamp and put it in your book. And so you have a competition as to who can collect the most stamps within a fixed period of time.

CRAIG BOX: So this is a little bit like geocaching.

ADAM GLICK: It is, but it's a competition. So there are a bunch of teams that all start at the same time. Usually, it's a daytime activity, so it's six hours during a day. And if you have a team of four people, three people will be solving some place, and one person will be the runner. And they're out running for most of the time, going to these locations.

So depending on which order you pick, you don't know where the solutions take you. And so you're trying to solve enough of them that you have a certain number batched up in an area, and you send the runner there so they're not running miles back and forth between different areas. And usually you trade off the runner, because you don't want anyone running for six hours straight.

CRAIG BOX: So it's like nerd orienteering.

ADAM GLICK: [LAUGHS] Yes. Yeah, maybe. It's a lot of fun. I encourage you to try one sometime.

CRAIG BOX: I look forward to it. Let's get to the news.

[MUSIC PLAYING]

ADAM GLICK: Security engineers from Netflix and Google have found multiple vulnerabilities in the HTTP/2 protocol, which could lead to denial of service attacks. If you use HTTP/2 either directly or using something like gRPC, which uses it as a transport, and you allow connections from untrusted clients or to untrusted servers, you may be at risk.

Vendors, including Microsoft, Apple, Google, Cloudflare, and Akamai, have updated their software stacks for downloadable software. New releases are available for Node.js, NGINX, Envoy, and gRPC, amongst others.

CRAIG BOX: The HTTP library in Go is also vulnerable to this issue, and new releases of Kubernetes have just been published to address it.

The CNCF announced the archival of the rkt (pronounced 'rocket') container runtime. Projects like OCI, containerd, and CRI-O achieved rkt's stated goal of having a standalone runtime and a published spec for running Linux containers, and as such, rkt's user base has mostly moved to those technologies. rkt remains open source software, though it will no longer be actively promoted by the CNCF.

Original authors CoreOS and their many acquirers are no longer involved. Latter-day maintainers Kinvolk, a German software company, say they still have plans for the software. They describe it as the Firefox of container runtimes, saying even if you don't use it, you should appreciate the standards that it has helped to drive.

ADAM GLICK: GitHub recently announced new continuous integration and deployment support for its Actions workflow service, available in beta. GitHub's parent Microsoft has now announced the preview of GitHub Actions for Azure, joining actions already available for Google Cloud and AWS. Two of the actions are specific to Azure, allowing users to easily authenticate to Azure and deploy to Azure's App Service. The other two actions work with common tools to allow connections to container registries and deployment to any Kubernetes cluster.

CRAIG BOX: Another week, another new Kubernetes web UI. OK, so two points doesn't exactly make a trend. But following on from last week's announcement of Octant from VMware, this week, Kubernetes failure stories curator and Episode 38 guest Henning Jacobs announced Kubernetes Web View, to help with support and troubleshooting across multiple clusters.

Henning's use case was the support and incident response teams for Zalando's 100-plus Kubernetes clusters with 900-plus users. The system supports links to objects with well-known URLs, and even deep linking into the YAML, so you can pass around URLs and multiple people can be sure they're looking at the same part of the same object. The dashboard is read only, so entire classes of security concerns can be mitigated. The code is on GitHub, with a live demo pointing at a k3s cluster.

ADAM GLICK: Speaking of Rancher Labs' k3s, or 'keys', which you learned about in Episode 57: you might think, well, k3s takes installing Kubernetes manually down from hours to minutes, but who has minutes these days? k3sup, which the author Alex Ellis has asked us to pronounce 'ketchup', promises to get you from zero to kubeconfig in 60 seconds or your money back. Thankfully, it's a free utility on GitHub, so no money need change hands.
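As a sketch of that zero-to-kubeconfig flow: the commands below follow k3sup's README at the time, but the installer URL and flags may have changed since, so treat them as illustrative rather than definitive.

```shell
# Fetch the k3sup binary (installer script from the project).
curl -sLS https://get.k3sup.dev | sh

# Install k3s on a remote host over SSH and fetch a kubeconfig back.
# The IP address and user are placeholders for your own test machine.
k3sup install --ip 192.168.0.10 --user ubuntu

# Point kubectl at the freshly created single-node cluster.
export KUBECONFIG=$PWD/kubeconfig
kubectl get nodes
```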

CRAIG BOX: Louis Ryan, our guest in Episode 58, has written a blog post, along with his colleague Sandeep Parikh, talking about design decisions in service mesh APIs and giving some hints on where Istio is going. This is a must-read for anyone who has ever asked why there are so many different API objects required to configure anything in Istio.

Louis and Sandeep talk about how the project supports higher-level abstractions and is working to infer more configuration, like container ports and protocols, to reduce the configuration surface. Some of these features are starting to appear in the upcoming Istio 1.3. The release branch for 1.3 was cut this week, and it is scheduled to launch in mid-September.

ADAM GLICK: Not all GPU workloads are machine learning jobs in a data center. If you're running on commodity hardware at home, you probably have a GPU available to you. Brian Carey was able to use the same device plugin technology used for NVIDIA GPUs with the integrated Intel GPU in his home theater PC running Kubernetes, cutting CPU usage for transcoding by four times and allowing five simultaneous streams. We applaud Brian for his commitment to the latest in home technology.

CRAIG BOX: GoDaddy has created a tool called Kubernetes Gated Deployments, which automates the deployment of a canary version of a service to their Kubernetes environment, measures its performance against a series of operator-defined metrics, and allows rolling it back if the deployment is causing harm to the service. The plugin is on GitHub and soliciting contributions. And if you really love contributing to it, GoDaddy are hiring.

ADAM GLICK: The CNCF announced last week that it had hit 100 members of the end user community. These are member organizations that have joined the CNCF, but do not sell CNCF projects as services external to their company. This basically means companies who aren't cloud sellers like Google, Amazon, or Microsoft, or on-premise sellers like VMware, Docker, or Red Hat. The end user community has a strong list of members, including Intuit, Apple, Capital One, and the New York Times. This group helps suggest new projects for inclusion into the CNCF and helps ensure that the CNCF stays vendor neutral.

CRAIG BOX: VMware has announced that they are looking to buy Pivotal, reuniting the two companies which were split in 2012. The acquisition, which is not confirmed and could be descheduled by VMware at any time, will take $15 per share from Dell's left pocket and deposit it into Dell's right pocket. In an interview published this week, Pivotal's CTO Cornelia Davis said that Kubernetes has, quote, "an energy that is independent of Pivotal," and that it has captured the mindshare of thousands of organizations, more so than their Cloud Foundry based services.

VMware is likewise touting the number of Kubernetes sessions at its upcoming VMworld conference, claiming that Kubernetes is going to, quote, "take over the event." In other Dell news, the hardware-selling parent has announced its intention to work with AT&T on Airship, their platform for running 5G network services on top of Kubernetes.

ADAM GLICK: Finally, the CNCF has announced the first Helm Summit in Europe. The summit will be held in Amsterdam on September 11 and 12. Early bird pricing is available until August 26.

CRAIG BOX: And that's the news.

[MUSIC PLAYING]

ADAM GLICK: Chris Chapman is the SVP of software development for MacStadium and the former CTO of Virtual Command, which was acquired by MacStadium last year. Welcome to the show, Chris.

CHRIS CHAPMAN: Thanks for having me. Good to be here.

ADAM GLICK: These days, when it comes to desktop, we basically have three main platforms. You've got the Windows machines, the Linux machines, and Macs. One of these is not like the other. What is it that makes Mac development different?

CHRIS CHAPMAN: Part of it is how Apple helps developers be creative, but it's also completely opposite of how modern development works. You run in Xcode, you run on Mac OS, and you run on Apple hardware. So things like GCE, or clouds, or these scalable platforms that people are used to developing on on the other side of the world just don't work for iOS and Mac. You have to build it on their platform and then check it in to their App Store. And there are a lot of manual steps, so it feels more like the year 2000 than today.

ADAM GLICK: Many of the technologies you mentioned-- I mean, Mac moved off of the 68000 series of processors quite a while ago. Everyone's running on x86 or x64 processors. What is it that makes it different?

CHRIS CHAPMAN: It is honestly the combination of their OS and their hardware and how they like to tune them both together in a specific way. And then, quite frankly, the iPhone is the big driver for everything. So it's the iPhone architecture and the security around that that makes the way you build on Apple very different.

CRAIG BOX: A few years ago, you weren't allowed to run virtual Mac OS instances at all. What's changed since then?

CHRIS CHAPMAN: I think with the advent of, I believe it was Snow Leopard, they did have the ability to start creating virtual Mac instances within the desktop. It was primarily honestly a start to sandbox things so that you could perform an upgrade or a test of something on your Mac in a virtual instance and then shut it back down.

So the EULA was written a little wonky, in that you're technically supposed to have only two VMs per machine. But Apple will not clarify what constitutes a VM or what constitutes a machine, and we have asked several times. So there are virtualization technologies that don't really, from a technical perspective, gate how many you can run.

And Apple's never gone after anyone for spinning up more than two VMs, but that's sort of how the EULA reads. But the effect of that is there are virtual Macs, and you can now use them. And it's at your discretion and peril as to how many you run and when.

CRAIG BOX: You work for MacStadium. What is MacStadium?

CHRIS CHAPMAN: We are an enterprise-scale cloud for Apple hardware, so we're effectively what a Google Cloud or AWS or Azure is, but it's all Apple, top to bottom. So we take the thing that lives on your desktop, because, again, a big difference between Apple and other platforms at this point is that all of their compute experience is designed to be end user based and sit on the desktop, which is great for the user experience, but not so great for the enterprise developer.

So when you transition to managing that at scale, what we do is take that consumer platform, bake it into proprietary racks that have redundant power, and fiber connected network, and Cisco firewalls on top of it with flash connected storage. And we really scale it out so that it can run 24 by 7 in a data center, and we spread it all over the place.

CRAIG BOX: If I'm a traditional MacStadium customer, am I renting an entire physical Mac machine from you, or am I renting a VM that runs on one of those pieces of Apple hardware?

CHRIS CHAPMAN: That is a key point with Apple, actually, and I think why some of the larger cloud players don't necessarily provide the service that we do, is Apple is insistent that there is no such thing as a public cloud when it comes to Apple. So you have to control or own the hardware that you use. So what we do is provide a private dedicated cloud of hardware.

Now, you can either use that as bare metal, or you can virtualize it. And when you virtualize it, once you own and control it, you can carve it up into your own public cloud internal to your company or to your purpose. But for us, it's a dedicated private cloud of hardware.

CRAIG BOX: There are services that you can call upon to do testing on devices. My understanding is that you're actually connecting to a real physical phone in someone's data center as opposed to a virtual instance. Is that correct?

CHRIS CHAPMAN: Yeah. In those cases, you're absolutely doing that. So a typical lifecycle for an iOS build is: you build it on a Mac, then you run it in the simulator. And then certain subsets of applications, maybe ones that want to take advantage of gyroscopic controls or things that are very specific to the hardware device itself, will require an actual physical device to test.

So a lot of people use things like HockeyApp or TestFlight to kind of push that back down and distribute it, so they can test physically. But other companies prefer to use a build farm of iOS devices that are in a rack somewhere.

CRAIG BOX: Your Kubernetes product is Orka. It stands for Orchestration for Kubernetes on Apple. Your target audience being Mac and iPhone developers, when did you start seeing Docker and Kubernetes in use with that audience?

CHRIS CHAPMAN: Well, there was definitely Docker for Mac. That kind of probably started it all for them. And they don't use it for iOS and Apple development, but they start tinkering around and seeing how containers work and that sort of thing. And moreover for that community, they're usually sitting next to the fancy Android guy or the non-Mac guy. And he's doing all the cool builds and pushing everything up to the cloud and getting it all done super fast. And there's a little bit of envy there.

So we've always noticed them kind of exploring that world without being able to play in it. So for us as developers who live on Macs and know about Macs but live in the Docker Kubernetes world, we thought it'd be great to blend those two together.

CRAIG BOX: Did you want to build this thing and then look to put Mac OS in Docker, or did you find that [you could] and then say, cool, that's a thing we can do, and build a product from it?

CHRIS CHAPMAN: Yeah, it was the second. It sort of spun out of Virtual Command. We were connecting and orchestrating Apple hardware, both physical and VMware, and it was a pain in the butt. And we started trying to find ways to virtualize that. And Docker was one of the approaches we took, and we sort of got that working.

And then it was a neat thing to do. But once we figured out we could actually make it perform, that was sort of the lightning in a bottle: well, if this performs, and we get all the flexibility and power of Docker and Kubernetes out of this, that's a product. That's a thing. And it's going to bring the developer in closer, instead of having to get the infrastructure guy and the DevOps guy involved. So that's when it really took off.

CRAIG BOX: If what you're doing is automating the provisioning of machines, there are many different tools you could have used to do this. Why did you settle on using Docker?

CHRIS CHAPMAN: Well, a little bit of it is that at my previous company, we used Docker a lot in the technology we built. And it was how we deployed our code. And even at MacStadium, the technology we're developing with isn't iOS. That's who we serve, but we live in the Node.js and React Native world and databases. So we're used to wrapping all of our things in Docker containers, and spreading them around, and orchestrating that way.

We just find that it's a more developer centric and a developer focused way to deliver infrastructure and scale it. And given our customer base at MacStadium, it seemed like the right way to go because the biggest complaint, back to the previous discussion around how it's different with Apple, is that it really hacks people off to get a bunch of infrastructure from us and then go, all right, guys, here it is. Here's a bunch of bare metal. Have a blast.

Or oh, no, wait, we made it better for you. Here's VMware, but go orchestrate VMware and figure it out. And you see all the developers just kind of get super sad super fast because they just want to focus on code and they want to develop. And that's the opposite of that.

So for us, this was an opportunity to say, all right, if we took the tools that we use every day and brought that experience to the Apple ecosystem, these guys could kind of be on par with their peers on the non-Apple side and use development tools, but get the infrastructure experience they want.

ADAM GLICK: Orka meets our standards for a great name, given that it contains a K and can be unambiguously pronounced. How did it get that name?

CHRIS CHAPMAN: I would love to take credit for that, but honestly, our sales and marketing folks are super creative and talented. But we were trying to figure out a name for it. And they literally kind of went in the room and they were like, well, it's all about orchestration. And it's got this Kubernetes stuff in it, and we're Apple guys, and what can we do?

And I said, well, if you're going that route, there's a lot of nautical stuff in this whole situation. You've got Kubernetes, and Helm, and Docker's a whale, and I kind of went off. And they came back super fast, and they're like, you're going to call this Orka. And I'm like, what? It's a whale, and it's nautical, but it describes what it is and it's an acronym. But it's the thing, and it just fit. And everybody loved it kind of right off the start, and we ran with it. So it worked out.

ADAM GLICK: What do you need to get Mac OS running in Docker?

CHRIS CHAPMAN: It's a little bit wonky. We've basically created a virtualization pass through inside of a Docker container to get down to the hardware. So you're still running on Mac hardware with Mac OS, but we've sort of created a virtual sandwich in there and wrapped it with YAML. My lead developer, Taylor Moss, he likes to say that it is the pill that delivers the OS with all the considerations for virtualization and mapping rolled into it. That's kind of why Docker for us. It helped us really YAML-ly encapsulate things that we wanted to do around this and make it easy to use.

CRAIG BOX: People use Docker to virtualize applications. In this case, if you're virtualizing an entire machine, there are services like KubeVirt from Red Hat, for example, that use the Docker image format and then run on top of a hypervisor. Are you running Mac OS as a process in the way that you'd virtualize an application, or are you using the Docker tooling to work with one of those hypervisors?

CHRIS CHAPMAN: It's the second. We use it to work with the hypervisor. So Docker, again, kind of becomes the way we encapsulate the OS. Now, we do a little weird segmentation of it, so it isn't a super lightweight Docker application like you're used to. It is a full-weight OS, but it is a split image. So really, the full-weight VM lives on storage.

And then when we actually run it, it is a differential split that goes into a Docker container and runs on the local host. So it's lighter weight, but it is virtual, and it runs down to the hardware as a virtual emulation to the hardware. And then Docker wraps the whole thing.

CRAIG BOX: Who provides the image that runs here? Is this something that is provided and maintained by you or by Apple, or does the customer provide it?

CHRIS CHAPMAN: We can provide images, but the customer provides the licensing. So you're allowed to go to Apple and get a free OS license. You just have to be the owner of the license.

ADAM GLICK: And is there a way that you actually, like, plug that into the image, or is that, just, you need to prove to Apple if they want to do a software audit that you have such a thing?

CHRIS CHAPMAN: It is the latter. So we have a library of licenses and ISOs that anybody can use for sure. They just need to make sure that they're covered on the licensing side. And we tell them that when they get it. They just have to make sure they're covered on that, but there's no technical step that they really have to go through to create that.

ADAM GLICK: What kind of performance hit do you take when using this?

CHRIS CHAPMAN: Well, after a lot of suffering, and testing, and tweaking, you don't take much of a performance hit at all. In testing with a lot of different types of builds, we have gotten pretty much bare-metal performance, which is pretty special, because normally what you see on a VMware-type deployment is about a 20% to 30% downgrade on the hardware. And like I said, we've pretty much gotten that to zero or near zero.

CRAIG BOX: If you're using QEMU or VMware or something to virtualize Mac OS, running on top of Mac OS, or running on top of Linux, for example, did you have to build anything in terms of drivers or runtime interfaces to be able to support the use case of running it in the virtualization system you're now using with Docker?

CHRIS CHAPMAN: We don't build our own custom drivers at all, so one of the things we were trying to make sure we were very careful with, especially with Mac OS, is that we didn't do anything that Apple could break quickly and easily for us, by messing up a driver. So we do a lot of passthrough, and we do a lot of generic emulation that lets us basically stay clean and not have to modify the Mac OS in any way or modify the hardware in any way.

ADAM GLICK: So if you're using Docker to control the virtual Mac OS that you have there, why use Kubernetes?

CHRIS CHAPMAN: Well, again, for us, previously, we started getting into the Kubernetes world because of how easily it handles infrastructure scale concerns. It made it a lot easier, and we thought with data centers and mass deployments of Apple, it was sort of a natural fit. I prefer Kubernetes to some other ways that you can orchestrate large deployments of Docker. And we found that it basically gave us a whole range of tools and built-in technologies out of the box that made it a lot less work for us to write and orchestrate things across the enterprise.

And it's just, quite frankly, the most large-scale-capable orchestration tool that I can think of. So it was far better than trying to do some sort of weird MDM or image management pushy thing, or some sort of homegrown version of something. Plus, again, our goal was to bring best-of-breed tools that modern development teams use to the Mac OS community, and Kubernetes is top of the list.

ADAM GLICK: Does this mean that you're effectively using Kubernetes to manage a bunch of Mac Minis as your nodes?

CHRIS CHAPMAN: Well, technically, for Orka, we actually take Pros. And a normal Orka cluster is three Mac Pros with an HA mesh Kubernetes cluster across them to start. And then each Pro that you add becomes both a physical machine and a Kubernetes node in and of itself. So then it just grows and spreads that way.

CRAIG BOX: What do you do to stop the Mac Pros rolling away?

CHRIS CHAPMAN: [LAUGHS] We actually pin them. It is literally a square peg in a round hole kind of situation. We take the round thing and slam it into a sled that goes into a rack. And so we have a sled that we typically bring to trade shows, and it looks somewhat like a bomb. But it's very effective. It's got fiber connected stuff and power on the back. And it locks the Mac in, and you strap it down. And then it goes in. You can do two side by side in a rack, and then we can rack them all the way up.

CRAIG BOX: Carrying on that theme, Apple stopped making rack mount hardware in 2008. And they've just reintroduced it with the conversion kit for their new super expensive Mac Pro cheese grater 2019. So most of what people are using, as I understand it, is the Mac Mini. You've obviously mentioned here the Mac Pro. Is this sort of signaling Apple moving more in the direction of wanting to support these kinds of use cases?

CHRIS CHAPMAN: Oh, we think so, but still not quite there yet. So a lot of people use the Mini for development. But what they typically find, as they virtualize it and they scale up, the Pro still makes a lot of sense. So then they switch to the Pros with us.

The new Pro still has some things to sort out, because, as you say, it's a massively expensive, interesting machine. And the rack mount version of it is really so that the guy on the set of "Game of Thrones" in the production truck doing the video editing can slam it up in there and do his thing. It's not a typical fit for a data center rack, and it's got some strange power considerations. It's a single power supply. It's 5U. It's got a lot of weirdness to it, so there's going to be some interesting things.

And then on the EULA side and the usage side, they still haven't modified that EULA I talked about before. So you've got this giant server with a terabyte and a half of RAM in it and 24 cores, but if you can only fairly run a couple of VMs on it, then you're talking about a couple-thousand-dollar build machine or a $20,000 VM to run your code, which presents other challenges.

So we're still trying to figure that one out. But they have stopped making the 2013 Mac Pro, and this is the next thing. So what we see them doing is probably modifying the EULA at some point and making this be a better fit for the enterprise and for some other use cases beyond video production.

CRAIG BOX: One of the other new custom pieces of Mac hardware is the T2 chip, which, as I understand it, is a chip that basically lets you have provenance of the code that runs on the hardware, so that Apple basically needs to sign the boot loaders and so on. You're running Linux as your underlying operating system that then runs the Mac OS VMs on top of it. Do you have any problems with that running with the new T2 chip?

CHRIS CHAPMAN: Oh, yes. It's a lot of fun. So it does cause interference. We are currently deploying Orka on the non-T2 version. And we do have a pathway to get it working on the T2 version. Longer term, what we feel like will ultimately happen is that Apple will mature the hypervisor to the point where that's the better route to go than the Linux kernel currently. But again, performance isn't there with that yet. So until they reach that point, and realistically, there's going to be a T3, or a T4, or whatever.

But they have said that they are going to tighten the security circle more, and more, and more as they go forward. Not only with the T2 chips, but with the kernel extensions and the OS. So ultimately, they're not going to want someone between their software and their hardware, which means we'll ultimately have to play with their virtualization. So I think that's the longer term roadmap. But in the short term, there is no clear answer from Apple to what to use to do that. So Linux is the answer for now, and you can make it work. There are ways to do it.

ADAM GLICK: You've mentioned a lot of things about Apple licensing and what you can do with certain Apple hardware and things that you're doing. Are you doing this with Apple's knowledge, or it's kind of under the radar in terms of how you build this out?

CHRIS CHAPMAN: Thankfully, we are doing this with Apple's knowledge. It makes us feel good that they know who we are. They know who we are very well, and they like us. We actually got brought up in a keynote last year: when they rolled out the 2018 Mini, we were shown on stage and described as the best way to deploy Mac at scale.

So Apple likes us, knows us, and blesses us. And we do our best again to-- as much as they can share or not share, talk to them regularly and make sure we're lining up with them. Because the reality is our business is based on what they do, so we need to stay in compliance with what they want.

CRAIG BOX: Aside from the Mac OS VM running in a pod, what other supporting infrastructure, either in terms of sidecars or other pods that run in the same namespace, are used in order to turn the Mac machine into something that's connected to an enterprise customer's network or to the build systems that they're interacting with?

CHRIS CHAPMAN: For us, the way our Orka technology works is that we have a namespace pod that is dedicated to Orka. It's a restricted pod, but it runs our software, which actually does some of the hardware consideration, and scheduling, and resource scaling concerns. It modifies that so that you can't oversubscribe the machine or do something that would be a configuration that would basically screw up your performance.

And then we provide multiple what I would call standard Kubernetes namespaces alongside of that, where you basically have free rein. You can use kubectl. You can use Helm and anything else you'd like to deploy control systems or any other types of software that's not Mac OS build related, and drive it from there.

So our goal is to basically serve two things. One, to provide a Kubernetes space to do anything and everything you would normally do with Kubernetes and still be able to interact with Mac OS on the build side.

But two, for the Apple developers, it's an interesting ecosystem because what we get is a lot of people excited. They're like, oh, gosh, you brought some Kubernetes stuff to Apple. This is great. And you're like, yes, what do you know about Kubernetes? And you get like, well, it seems like it's going to be really great. And so they don't actually know how to use Kubernetes.

So one of the things the Orka namespace does is provide a CLI and a UI that basically create those pods and all the considerations for them for the customer with a simple CLI command. And then as they get more comfortable with how Kubernetes actually works, they can jump to the other side and do it in a more piecemeal manual or more customized way.

ADAM GLICK: So you create kind of an abstraction layer through your own UI and API, but you still expose the underlying kubectl access if people want to go directly to that layer?

CHRIS CHAPMAN: Yeah, and a lot of our customers have kind of both sides of the coin. Because they'll run Mac OS for their build, which is what the one namespace is doing. And then they'll have databases, or Git repos, or Jenkins masters, or things of that nature that they'll want to deploy on the other side in a normal Kubernetes namespace. And they'll spin them up there.
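That split might look something like this in practice. The namespace name and the Jenkins deployment below are illustrative placeholders, not MacStadium's actual defaults: the point is simply that the non-Orka namespaces behave like any other Kubernetes cluster.

```shell
# Build VMs live in the restricted Orka namespace and are managed
# through MacStadium's CLI/REST API. Supporting services (Jenkins
# masters, Git repos, databases) go in an ordinary namespace where
# plain kubectl and Helm work as usual.
kubectl create namespace build-support
kubectl --namespace build-support create deployment jenkins \
  --image=jenkins/jenkins:lts
kubectl --namespace build-support get pods
```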

ADAM GLICK: What are some of the most common use cases for Orka?

CHRIS CHAPMAN: It's pretty much build, test, and simulate for iOS and Mac apps. So again, a lot of the folks that come to MacStadium are solving that problem regularly. But they do a lot of DevOps and CI/CD. And Orka's making that much, much simpler and much more straightforward.

We're really trying to drive MacStadium to become more developer friendly, more self-service based, and less of a traditional hosted data center. So this is a step in that direction, because you get a REST API and command line options. And now you get the interfaces of Kubernetes to be able to basically manage that deployment.

So the normal use case is a company comes in with their dev team. They're building iOS apps all day long. They're doing continuous development on it. They're pushing versions left, right, and center. They want to test every different version of macOS they can think of, in five different versions of Xcode, in 10 different browsers. So they're scaling this across a set of hardware. And then every time they run a build, it goes out and runs all those things.

Orka helps do that ephemerally, because traditionally, folks have used just bare metal to do it. And as you get them used to having a big cloud, they start to understand the benefit of virtualization. So then they do VMs, but then what they end up doing is spinning up ever-present, long-standing VMs that are just sitting there forever and ever. And that's great.

At least the VM has more consideration around being able to clean itself up, but then you eventually have to coach them into seeing that the real power of these clouds is the flexibility of having scalable resources and using ephemeral machines that come and go as you need them. And Orka really drives in that direction, where each VM that spins up lives as long as it needs to to perform the task and then goes away, because it's an ephemeral container that lives in a pod that comes and goes.

CRAIG BOX: With ephemeral workloads, you normally build an image for each particular workload that you want to run. For a more VM-based workload, you might turn on a base install, which is automatically updated by your vendor, and then when that turns on, have it run some sort of startup script in order to pull down the software that it needs. Which approach do you see people taking for testing builds and so on with Orka?

CHRIS CHAPMAN: A bit of both. There are a lot of folks that do have the pull start once they spin the machine up. Or they'll put, like, a Jenkins master or a GitLab runner or something in the container and have it start pulling things for the pipeline's build at that point.

The way we actually construct Orka is sort of a hybridization of that, in that you have a set of base images, and the ephemeral containers are spun up off of those base images. So a lot of times, customers can configure a set of base images, and then they can do commits and saves to create as many images and versions as they want. And then, in more of a Docker fashion, they basically describe that image set and the tooling that they want for the runtime when they build.
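As an analogy only, that base-image-plus-commit workflow mirrors how plain Docker handles image layering. These are Docker's commands, not Orka's CLI, and the image and script names are made up:

```shell
# Docker analogy for the base-image workflow described above.
# (Docker commands, not Orka's; names are illustrative.)
docker run --name builder base-build-env:latest ./install-extra-tooling.sh

# Save the customized state as a new versioned image.
docker commit builder team/build-env:v2

# Ephemeral instances then spin up from that image and clean up after themselves.
docker run --rm team/build-env:v2 ./run-build.sh
```

The commit-and-version step is the key parallel: a configured base becomes a named image that many short-lived build instances can be cloned from.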

ADAM GLICK: Do you ever see people using it for what I'd call more unique use cases? You know, things like VDI, or running unsupported operating systems.

CHRIS CHAPMAN: It's technically possible to do things like that. With the VDI situation, Apple wants to control the user experience as tightly as possible, so they're not a fan of the VDI use case at this point. But we do see people wanting to do that from a development perspective, of having a dev team have access to this system and use that remote desktop to sort of set up the environment.

So from a build perspective, what is fair is to have folks log in, do a lot of the configuration work and the environmental settings, and have an easy experience that way. And it's nice to have a container that you can configure specially for one type of build and have it just revert back to the base state anytime you want to.

CRAIG BOX: Do you know of people running Docker for Mac and Mac on Docker?

CHRIS CHAPMAN: [LAUGHS] I think it's possible. It's going to be a little--

CRAIG BOX: 'Yo dawg, I hear you like Docker and Mac'.

CHRIS CHAPMAN: Yeah, exactly.

ADAM GLICK: I feel like you should be spinning a little top at this moment.

CHRIS CHAPMAN: [LAUGHS] Exactly. Well, ironically, one of the code names for Orka when it first started was Turducken. It's that horrible thing of the thing stuffed in the thing. So we do run Docker for Mac on our Macs to do development, and then we develop a Mac that goes in a Docker container when we deploy Orka. So I guess that's the best answer I can give for that question.

CRAIG BOX: You announced the beta in the week of the Apple Worldwide Developers Conference. And then general availability was last week at the DevOps World and Jenkins World Conference. You've also got plugins for using Orka with Jenkins. Do you see Jenkins as the tooling that is most commonly used by people developing apps in the space?

CHRIS CHAPMAN: Yeah, it really is. It's kind of the de facto standard for build. It's not necessarily everybody's passion to have it, but it seems to be the thing that everybody does to get the job done. So we felt like that was the best place to start, and it's definitely the widest use case. There are other choices out there, like GitLab. GitHub's obviously about to jump into the game, and Buildkite and others like that. But Jenkins is definitely the most widespread and the most popular.
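For context, a minimal Jenkins declarative pipeline targeting a Mac build agent looks something like this. The agent label, scheme name, and xcodebuild arguments are illustrative, not specific to the Orka plugin:

```groovy
pipeline {
    // 'macos' is a hypothetical agent label for a Mac build node.
    agent { label 'macos' }
    stages {
        stage('Build and Test') {
            steps {
                // Illustrative xcodebuild invocation; the scheme name is made up.
                sh 'xcodebuild -scheme MyApp -destination "platform=iOS Simulator,name=iPhone 11" test'
            }
        }
    }
}
```

A plugin like the one mentioned here would typically handle provisioning the agent behind that label, so the pipeline itself stays unchanged.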

ADAM GLICK: Orka is currently a hosted product. Do you intend to open source the technology?

CHRIS CHAPMAN: It is hosted because of, again, the considerations we put around staying tightly compliant with Apple, and also some of the performance enhancements we make to the hardware in the data center to make it build very fast and effectively. Some of the technology in Orka, if it were open sourced, could potentially be used for non-compliant purposes. So we're probably going to keep a tight grip on the parts that make it work well in MacStadium data centers.

That said, there's definitely an opportunity in the roadmap to do some hybrid pods and hybrid connectivity to on-prem capacity. And as we go down that road, there will certainly be parts of it that would be open sourced. Custom resource definitions might be created, for example, to allow it to do certain things. And that would be an open source segment of it.

ADAM GLICK: So what's next for Orka?

CHRIS CHAPMAN: Kind of along that path, we're adding a lot more integrations and plug-ins. It has a REST API, so we're really trying to scale the useful tooling that people use in continuous build and continuous integration into it. And then we're constantly trying to make sure that it can leverage as many new Kubernetes features as possible.

But one of the things we definitely see moving forward is what we kind of call our remote node, where we can let you put it on the Mac in your closet or the Mac on your desktop and have the primary cluster treat that as an additional node that you can run a single build on. The idea, and the goal of getting developers closer to Docker and Kubernetes, is really to be able to fix something on your local machine, push it up to the cloud, and know that what you did there is what's going to happen in the cloud at scale. So we're really driving toward that workflow and that use case for our developers.

ADAM GLICK: How would people get started with it?

CHRIS CHAPMAN: Well, they can go to macstadium.com/orka and talk to us about it. And we can put them on a POC and let them try it out. We're working on creating some demo space where they can actually go play with it. But you can also just go to macstadium.com/orka, and you can read the full set of docs and look at the API. You can check out the UI and see some of the demos and workflows. And then you can talk to us. And we love to talk to people and set it up and get them going.

CRAIG BOX: Chris, thank you very much for joining us today.

CHRIS CHAPMAN: Awesome, guys. I appreciate it. I've had a lot of fun.

CRAIG BOX: You can find MacStadium on Twitter at @MacStadium. If you want to learn more about Orka, go to macstadium.com/orka, with a K.

[MUSIC PLAYING]

ADAM GLICK: Thanks for listening. As always, if you enjoyed this show, please help us spread the word and tell a friend. If you have any feedback for us, you can find us on Twitter at @KubernetesPod, or reach us by email at kubernetespodcast@google.com.

CRAIG BOX: You can also check out our website at kubernetespodcast.com where you will find transcripts and show notes. Until next time, take care.

ADAM GLICK: Catch you next week.

[MUSIC PLAYING]