CRAIG BOX: Well, it's summertime. And you can tell it's summertime because there's cricket on. The Cricket World Cup is happening here. But going out on the weekend, there was also rugby on. And rugby is a winter sport. And in my head, winter sports are played in winter. And at 27 of my British degrees, it wasn't really the right kind of day. Why is it that these two sports are being played at the same time, do you think? Riddle me that.

ADAM GLICK: Well, would it have to do with hemispheres?

CRAIG BOX: No. No. This is exactly in the same place. I think even in the southern hemisphere, I find that the same thing is happening. It's that there's so much money to be made simply by playing more of the winter sports that they just keep going longer and longer. And they're gradually working their way into the fact that we have this now combined rugby and cricket season where you have to pick a side, really. You go out on the weekend and watch either sport.

ADAM GLICK: Are these like the full seven day contests? Or are these the abbreviated ones?

CRAIG BOX: Well, no. This is 90 minutes of rugby and seven hours of cricket, which is the medium length version.

ADAM GLICK: That's quite a day.

CRAIG BOX: Yeah. Do you have that in America? Do you find that the seasons are getting longer and longer?

ADAM GLICK: Not that I've seen. Mostly because there are so many sports, and every one of them has their spot on the calendar that they take up. You've got your summer sports with baseball. And then you get into your football later in the year, which kind of takes you through to the end and finishes up in January or February.

CRAIG BOX: Is there ever a time when both are on concurrently?

ADAM GLICK: Maybe in the postseason. I'd have to check. Not during the regular season, I don't think.

CRAIG BOX: Why do we have a postseason? We have a season. And then we're like, we want to play some more. We'll just extend the season already.

ADAM GLICK: Because people are willing to watch. So speaking of times in the summer, I was out at our favorite park. And there's always something new and interesting in the park, as usual. This time, a musical flavor to it all. It was a gentleman who had set up a whole bunch of band instruments-- everything from drums, to guitars, to tambourines-- and was just like, come jam with me. So he's kind of upping the game. You had the person who was like, come talk to me. No, no-- now you need an instrument. They're upping the game.

CRAIG BOX: "I desire a musical conversation."

ADAM GLICK: Exactly. He had an incredible looking bass there, which I'll see if I can link a picture of it. It would fit on stage with Queensryche certainly. And just that out in the park and seeing little kids picking up and playing with it was awesome.

CRAIG BOX: Were you inspired?

ADAM GLICK: I was. Though, I had little one with me. So I didn't want to do anything too noisy. But otherwise, I would totally do something like that. What's the worst that could happen, right?

CRAIG BOX: I hear that early stage parenting comes with a lot of opportunity for watching TV.

ADAM GLICK: Indeed it does. I've gotten to catch up on some things. Glad to see that "Black Mirror" season 5 will be coming out soon. I've enjoyed that. And I caught season two of a show called "The Rain." Now, have you seen this show?

CRAIG BOX: I have not.

ADAM GLICK: What do you think would be a key element that you would see in an episode of "The Rain"?

CRAIG BOX: Well, it sounds like Scandinavian noir just by the title. But I'd guess there might be some weather involved.

ADAM GLICK: Oh, that would be really, really keen of you there. And indeed, it is Scandinavian. It is a really interesting show. But season two, the thing that surprised me most about it is I've now finished the entire season and the one thing that I do not think I saw in any of the episodes was, indeed, rain, which seemed to be kind of a little bit of false advertising there. The first season, plenty of rain, lots of rain. Felt very comfortable for those of us from Seattle. Season two, not as much.

CRAIG BOX: Do you think it was a bit more of a drought?

ADAM GLICK: Perhaps. "The Rain" season two, the drought. Special shout out to David Youngman who I ran into while at an event this week, and came up and said hi, got some stickers. Great to see you. Shall we get to the news?

CRAIG BOX: Let's get to the news.

[MUSIC PLAYING]

CRAIG BOX: With most companies having spent all their news budget on the week of KubeCon EU, we bring you a security news week. First, a Docker vulnerability discovered by Aleksa Sarai of SUSE. A "time of check, time of use" attack is possible using symbolic links, where a container running on a host could be used to get access to the contents of that host.

To simplify: you have a link pointing somewhere that is allowed when it is checked. You overwrite that link to point somewhere forbidden. And then you use it to access any files you wish. A common path to this exploit would give you read-write access to host files using the docker cp command. And a workaround is to pause containers before you copy files to them.

Docker said in a statement that the attack scenario needed to exploit this vulnerability is unlikely, and that a fix to automatically pause when copying will be merged in the next monthly Docker release.
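The class of bug described here can be sketched with a plain filesystem example. This is an illustrative toy in ordinary Python, not the actual Docker exploit; the directory and file names are hypothetical.

```python
import os
import tempfile

# Sketch of a "time of check, time of use" symlink race: the link is
# validated while it points somewhere allowed, then swapped to point
# somewhere forbidden before it is actually used.
workdir = tempfile.mkdtemp()
allowed = os.path.join(workdir, "allowed.txt")
forbidden = os.path.join(workdir, "forbidden.txt")
link = os.path.join(workdir, "link")

with open(allowed, "w") as f:
    f.write("harmless contents")
with open(forbidden, "w") as f:
    f.write("secret host file")

os.symlink(allowed, link)
checked = os.path.realpath(link)  # time of check: resolves to allowed.txt

os.remove(link)                   # the attacker swaps the link...
os.symlink(forbidden, link)       # ...between the check and the use

with open(link) as f:             # time of use: reads the forbidden file
    used = f.read()

print(checked.endswith("allowed.txt"))  # True
print(used)                             # secret host file
```

Pausing the container before the copy, as the workaround suggests, closes the window between the check and the use because the filesystem can no longer change in between.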

ADAM GLICK: Next up, a Kubernetes vulnerability of medium severity. A bug in two specific Kubelet versions will return 0 when asked what user to run an image as. And user 0 is, of course, root. If you're using a runAsUser security context, either in your pod spec or via policy, you will see the correct behavior. But if you have set MustRunAsNonRoot and this bug is triggered, your pod will refuse to start. Fixes include setting a runAsUser, downgrading the Kubelet by one patch version, or waiting for the next release.
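As a sketch of the first fix, an explicit runAsUser in the pod's security context might look like this; the pod name, image, and UID below are illustrative, not from the advisory:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod              # hypothetical name
spec:
  securityContext:
    runAsUser: 1000              # any explicit non-zero UID sidesteps the bug
  containers:
  - name: app
    image: example.com/app:1.0   # hypothetical image
```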

CRAIG BOX: One company who hadn't spent all their budget during the KubeCon week was Palo Alto Networks, who this week announced their intention to acquire container security vendor Twistlock for $410 million. Twistlock, who have 120 employees and 290 customers, provide container vulnerability scanning, compliance, and runtime protection tools that run in your Kubernetes cluster. They've also made a name finding those vulnerabilities, with 15 CVEs discovered and reported by Twistlock Labs.

ADAM GLICK: The official Kubernetes client library for JavaScript released version 0.9 last week. The library is written in TypeScript and made for use with Node.js. While there are plans to write a jQuery-compatible version, use in a browser is made more complicated by the self-signed certificates used by most Kubernetes masters. Official client libraries are also maintained for Go, Python, Java, and .NET.

CRAIG BOX: NVIDIA has announced an edge computing platform for machine learning built on Kubernetes. The EGX platform, supported by over 25 hardware vendors, is powered by the NVIDIA Edge Stack software, which combines the NVIDIA kernel drivers with a plugin for Kubernetes and containers from the NGC registry of GPU-ready software.

ADAM GLICK: If you're one of the 1,700 certified Kubernetes administrators or the 9,000 people who have registered for the exam, you can have another year on your certification. The CNCF has announced that the certificate, previously valid for 24 months, will now be valid for 36 months. And the upcoming refresh of the curriculum will now come in 2020.

CRAIG BOX: Microsoft has announced GA of the Azure Kubernetes Service in the South Africa North region. This means that almost half of Azure regions now have the AKS service available. Additionally, they have added OCI types and Helm 3 charts to the Azure Container Registry in preview, and announced that Azure Monitor can now monitor AKS clusters with Windows Server nodes.

ADAM GLICK: And that's the news.

[MUSIC PLAYING]

ADAM GLICK: Evan Powell is the CEO and chairman of MayaData. Prior to MayaData, he was the founder and CEO of StackStorm, which was acquired by Brocade in 2016. Welcome to the show, Evan.

EVAN POWELL: Oh, thanks for having me, guys. I'm really excited about chatting with you.

CRAIG BOX: Congratulations on becoming the latest CNCF hosted project with OpenEBS.

EVAN POWELL: Thank you. Thank you.

CRAIG BOX: For the audience, what is OpenEBS?

EVAN POWELL: OpenEBS is the leading open source example of a new kind of storage. And it's one in which we're actually using Kubernetes itself as really the storage substrate. So the storage services are in containers, can be orchestrated by Kubernetes. And we call it container attached storage. You can think of it as cloud native storage. We can get into all sorts of definitions here in a second. But that's what OpenEBS is.

CRAIG BOX: Now the name sort of hints to a connection to EBS, which I take to be Amazon's Elastic Block Storage, which is a way of provisioning disks on Amazon. Is that true?

EVAN POWELL: Well, it is block storage. And it is elastic. But it does more, in some ways, than that. You can also do file out the front for example. And it is open. So that part is accurate.

CRAIG BOX: But why that exact name, I guess?

EVAN POWELL: It's a historical reason. The project's now a few years old since it was open sourced. And it's really intended to convey, hey, you need block. What we're seeing a lot in Kubernetes environments is a proliferation of microservices that have little databases running in them. We have users with hundreds of these.

So how can you very simply manage these? And so we're using storage classes. We can get into all of that. So elastic-- we call it data agility now. That's another one. It can be a little bit of a buzzword. But we're very focused on this idea that, just like at StackStorm, where we had a lot of DevOps users that achieved unbelievable throughput or agility-- how can you enable storage to not be an impediment, but actually fit into those pipelines?

CRAIG BOX: I quite often think of storage in the context of a cloud provider. And I can call an API and I can get an attached disk. Or I can get access to block storage. Is this a problem that you're trying to solve only for on-premises users who don't necessarily have those APIs? Or do you see this being a broad thing that's applicable to Kubernetes in all environments?

EVAN POWELL: We're really trying to extend Kubernetes to the data layer. And OpenEBS itself runs in user space. So it runs anywhere you can run Kubernetes. No kernel module is necessary.

So most of our paying customers, for sure, have on-premise. But if you look at the actual usage, as we do-- we actually published some data on this-- in the OpenEBS community, more than half of the clusters are up in the cloud. It's very typical to have both. And part of the value prop is to have a consistent experience, not just in terms of APIs but in terms of capabilities, whether you're running on Amazon, on-prem, on-prem with legacy storage, whatever.

ADAM GLICK: Why did you create OpenEBS?

EVAN POWELL: OpenEBS really scratches an itch that we've seen for years in the storage industry and in the management of these workloads. Many years ago, a number of us founded a software-defined storage company. And what we found is that storage is too hard. And while everyone was, a couple of years ago, saying, well, Kubernetes makes storage harder, we saw almost the opposite. We saw a possibility of really using Kubernetes to fix some of these pain points around automated provisioning.

Automated provisioning-- we get, let's say, a flavor of HA for free, because some of what we do is stateless controllers. So all of that is done for us for free. And really, it's about the application calling the tune. In fact, OpenEBS really is per-workload storage. So we saw this as a way to enable these users to treat storage in a way that they're familiar with in the DevOps world, let's say.

We were in a keynote recently at KubeCon in Europe. And that has definitely helped us get more attention. The other thing that KubeCon has done is you meet your users. In the open source world, you often don't actually know who these folks are. You see counts for downloads, Docker pulls, et cetera. But when a car company comes up to you and says, hey, we're using it on the website, in production, it's a whole another thing. So KubeCon has been amazing.

CRAIG BOX: We have the same thing with our listeners. An interview guest comes up and asks me how the foxes in my back yard are doing. It really shows that we're getting out there.

ADAM GLICK: You founded MayaData. What did you found MayaData to do?

EVAN POWELL: So MayaData-- we call it the data agility company. And with OpenEBS, with other open source like Litmus, and with our monitoring, we're trying to really make the life of a Kubernetes administrator easier when it comes to the management of stateful workloads. So that's what MayaData is about.

ADAM GLICK: Where do you get the name from?

EVAN POWELL: Well, "maya" means magic in Sanskrit. And so the goal is to make data-- or at least the management of data-- well, you may not want to make the data disappear. But you want to make the management of the data as simple as possible. And so it really resonates with our core community, which we started. The community really started out of Bangalore, in India.

ADAM GLICK: OpenEBS runs within a container. And I typically think of containers as being immutable. But with storage, I normally think of it as you want to be able to change what's in it-- immutability is not always what you're looking for. So how do you handle persistence with OpenEBS?

EVAN POWELL: Well, one of the things we talk about is OpenEBS is not yet another scale out storage system. And so what you're actually doing with OpenEBS is you're writing data to a number of replicas. Those replicas themselves are containerized. And that can be one replica which means you have one copy of your data. It can be three. It can be more.

And those are synchronous writes. So what that means is we will not tell the database, or the logging system, or whatever the stateful workload is, that the write has been accepted until it's been written in all of those places. And so you're adding a level of resilience there.

You can also do asynchronous, not to geek out too much. But I think for the listeners you can, in addition, when you have high latency or you're going from cloud A to cloud B, you don't want that to be synchronous because your writes will take a long time. So then you do async. And you have that kind of capability as well.

But it's pretty cool because your actual storage services, things like the target that you write to, those are in containers and those are stateless. So those can be blown away. And we're benefiting from Kubernetes rescheduling those.

The underlying data, if it's blown away, we have another issue. And we do a background rebuild. Let's say you lose a node that has a replica. We-- we being OpenEBS and the community project-- would do a background rebuild of that data. So it's sort of a contained mini storage system almost per workload.
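The synchronous write rule Evan describes can be sketched in a few lines of Python. This is a toy model for illustration-- the function and names here are not OpenEBS code.

```python
# Toy model of synchronous replication: the stateful workload is only
# told the write succeeded after every replica has accepted it.
def replicated_write(block, replicas):
    for replica in replicas:
        replica.append(block)  # write to each replica first...
    return "ack"               # ...and only then acknowledge the write

# Three replicas means three identical copies of the data.
r1, r2, r3 = [], [], []
status = replicated_write("block-0", [r1, r2, r3])
print(status)                          # ack
print(r1 == r2 == r3 == ["block-0"])   # True
```

In the asynchronous case mentioned above, the acknowledgement would be returned before the loop completes, trading durability guarantees for lower write latency across high-latency links.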

CRAIG BOX: And conceptually, that sounds quite similar to things like Cassandra and replicated databases. But this is doing it at the file and block layer.

EVAN POWELL: Yes. Exactly. And you know, the rise of Cassandra specifically, or NoSQL, or other scale-out workloads, is one reason we kind of reimagined storage. Because traditionally, storage was always scale out, and your applications weren't. And traditionally, disk drives were really slow. Very slow. And so you needed to scale out to get performance.

Now, you actually want to scale out as little as possible in order to get to the performance of the underlying-- let's say you're paying for high IOPs, I don't know, cloud volumes. You want to get to that performance. You don't want to wait for the write to cross lots of locations. So we minimize it. We're really a slimmed down storage system that just gives you the capabilities you need.

ADAM GLICK: Is there a way to do some of the writes out, maybe asynchronously, to some sort of persistent disk to avoid something like a cluster getting deleted and all of your replica nodes basically going down at the same time?

EVAN POWELL: Yes. So clusters are becoming surprisingly ephemeral, at least surprising to me. And it's fascinating to see. And so in upgrades, I was just chatting with somebody who's using us behind Kafka, and these rolling sort of upgrades for Kafka, that's exactly what happens. You want the data to be across multiple clusters in this particular case. And they're blowing away whole clusters. So that is done typically asynchronously.

One of the things we kind of soft launched-- or at least we published the GitHub for it recently-- is something we're calling KubeMove, which is quite nascent. But it's basically the idea that you should be able to, from the Kubernetes layer, say, I'd like the data moved from here to there. Or, I'd like the data back. I'd like to do a switch from this cluster to that cluster.

You can do all of that in OpenEBS. But we prefer to be called by Kubernetes. When you do these higher level things like movement, it's our own APIs. And being tiny and new-- like, what APIs are these?-- we're seeing some fracturing in the community, or let's say the broader cloud world, where everyone is starting to implement their own APIs. So this is above Kubernetes, when you're doing things like data mobility.

So the idea behind KubeMove-- and it really needs to be picked up by folks who are setting the agenda in Kubernetes a little more than us-- is to make this really a standard and an open approach. We have some, for instance, CRDs out there. But they're quite nascent, as I said.

CRAIG BOX: The people who are setting the agenda are SIG Storage. And I had a chance at KubeCon recently to catch up with Saad Ali who is the lead for SIG Storage. And he explained to me the difference in the two ways that you think about storage.

So you have things on Kubernetes that need to consume storage. And then you also have the ability for Kubernetes to provide storage. And that provision-- running things like OpenEBS-- may not be to workloads that are running on Kubernetes. It might be using Kubernetes as the substrate for a storage service that serves VMs or tin in a data center, perhaps. Do you see OpenEBS as having its clients being primarily container workloads?

EVAN POWELL: Yes. So we have really focused on using Kubernetes with OpenEBS to deliver storage to Kubernetes workloads. But you're absolutely right. You could-- it's iSCSI out the front, as an example, and it'll be NVMe over Fabrics in the future-- run other block workloads or file workloads on top. But you've got to pick your spots. And really, for us, it's about Kubernetes workloads today.

CRAIG BOX: Is it mostly people calling into the storage from the same cluster it's being served from? Or for those redundancy reasons you mentioned before, are people doing it across clusters?

EVAN POWELL: They're typically doing it on the same cluster. And then it kind of complicates, or gives you more options, around HCI. So you may logically be on the same cluster, but very likely be on separate hardware or separate cloud volumes-- maybe identified because they have certain performance capabilities.

And so part of what a system like OpenEBS does-- and I should say, it's not the only one of the category. We're the open source one that people maybe have heard of, but there are other container attached storage solutions out there.

What we tend to do is then flag nodes-- this node is really good, again, for performance. And just incidentally, I think we probably have storage vendors listening to this, I would imagine. And I would just say, take a look at NDM, which is a subcomponent of OpenEBS.

CRAIG BOX: What does that stand for?

EVAN POWELL: Node Disk Manager. Although typically, they're not disks. They're cloud volumes. It doesn't matter-- it's a place to store the data. Because a challenge you have in storage, of course, is where can I store my data, and what's the status of it, and, oh no, this one's failing. So what we are doing is using CRDs and effectively extending etcd itself as that source of truth. So OpenEBS is not the only natural consumer of that kind of subsystem. There are even databases, and certainly other storage systems, that might be interested, we think, in NDM.

CRAIG BOX: Are you making use of Kubernetes features like node affinity to make sure that you're not scheduling various workloads on the same physical hardware?

EVAN POWELL: Yes. Affinity, anti-affinity. And you get into these interesting cases where you may want the controller, which is telling the workload, hey, I've got your data, to be close to the workload. And you may want the data-- which is the replica, in our nomenclature-- to not be close to it, to be on another physical host. So you have that kind of nuance.

And the beautiful thing about Kubernetes, amongst other things, is of course you have this defined via YAML. I know there have been discussions about YAML and the beauty of YAML. But at least it's infrastructure as code, and you have storage classes. And there you go. So you now have these parameters set by workload. And the developer can think about it as much or as little as they'd like, which is really exciting.
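The "parameters set by workload" idea would be expressed through a StorageClass. The provisioner string and annotation below are assumptions for illustration-- check the OpenEBS documentation for the exact keys:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: per-workload-replicated            # hypothetical name
  annotations:
    # Illustrative OpenEBS-style setting; the real key may differ.
    openebs.io/replica-count: "3"
provisioner: openebs.io/provisioner-iscsi  # assumed provisioner name
```

A workload then requests this class by name in its PersistentVolumeClaim, so each application can pick its own replication and performance profile.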

ADAM GLICK: So through OpenEBS and the connection that you have to Kubernetes, is your goal to eventually make that schedulable the same way that it can schedule containers, so that it can also make sure that the storage pods are available?

EVAN POWELL: Effectively, that is what is happening. We are, through affinity and other rules, enabling Kubernetes, if we need it to, to treat our pods-- or the pods that we're running on-- slightly differently than other pods. And you can set that up. And we have to be very cognizant of the fact that, as an example in storage, if you're on a piece of hardware, you can be a memory hog. If you're in a shared host and you become a memory hog, you may get evicted. So there goes your storage. What about that? So we have to be very smart about that. But when you are, it gives you an unprecedented level of control, we think, and simplicity.
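The affinity and anti-affinity rules discussed here might look something like this in a replica pod spec. All names, labels, and images are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: storage-replica-0                  # hypothetical name
  labels:
    app: storage-replica
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: storage-replica           # keep replicas apart...
        topologyKey: kubernetes.io/hostname  # ...one per physical host
  containers:
  - name: replica
    image: example.com/replica:1.0         # hypothetical image
```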

ADAM GLICK: Are there other companies that are involved in building OpenEBS?

EVAN POWELL: Yes. We are a sandbox project, so I think technically we didn't need another maintainer. But there is a company called Sanmina, which is a large company that, if you're not living in storage, you may not have heard of. It makes systems for other storage companies as well as cloud companies. And they have a-- again, in storage-- pretty well-known storage engineer as a maintainer, Richard Elling, now of OpenEBS. And then there are other companies, including Wipro, providing services that include OpenEBS, or effectively enterprise support for OpenEBS-based systems.

CRAIG BOX: The economics of storage have changed substantially over the last 10 years as we've moved from spinning rust to solid state devices and memory based storage. How has that impacted the storage ecosystem?

EVAN POWELL: That's a great question as well. To roll it back a little bit: the move to a world in which direct-attached storage is massively faster than scale-out storage has led to the sort of collective market share of storage dropping. And I gave this talk a couple of years ago at the Storage Developer Conference, with a lot of Homer Simpson references, which was interesting.

And my basic point there is we have more or less failed. Because increasingly, workloads are saying, hey, storage industry, thanks but no thanks. I'll just figure it out. The complexity you bring to me, and the cost on top of these commodity pieces of SSD and so forth, is not worth it. So I'd rather just figure it out myself. And if I lose a disk, maybe I just say the node is dead.

It's a little embarrassing, as somebody who's spent many, many, many years in storage, that we came to that point. Which again comes back to why you might want just enough storage, which is kind of CAS, or the Container Attached Storage pattern. And then there are the economics in the cloud as well.

I have a friend who says the storage companies are doing really well. I'm like, really? I mean, are they? No. No. The clouds are storage companies. So if you actually look at the economics and where the gross margins and so forth are coming from, it is profitable if you're running it at that scale.

CRAIG BOX: Do you think there will be a sea change in technology as things like Optane and things that make memory and storage look the same to a node start becoming commercially available?

EVAN POWELL: Absolutely. And we are beginning to see it. And one of the really fascinating things, to geek out slightly, is NVMe over Fabrics, an initiative that has been pushed for years and has now gotten pretty mature. And they added NVMe over TCP.

And what the listeners might be interested in is there's a requirement in that standard that you can add, I think, no more than 10 microseconds of latency. So what this means is, if you're in a data center, the device all the way over there is effectively attached directly to me. So there are things like this that will change the architecture quite a bit.

Another thing is something called SPDK, the Storage Performance Development Kit, out of Intel. But basically, with this approach to handling the massive speeds of these underlying pieces of hardware, you may want to not go into the kernel. So now user space is faster than the kernel, which, to an old storage guy, is like, what? Dogs and cats living together. How is this possible?

But in fact, it is the case. So it's not to say that OpenEBS is your fastest storage in the world today. We do have a forthcoming Rust-based engine that'll be a lot faster. And we do support local PV, which is, again-- if you want us to help you manage the local disk, and that local disk is fast, we can do that for you. But it's the right architecture to embrace this kind of thing going forward.

ADAM GLICK: Which fabric introduces the lowest latency? Is it linen or denim?

EVAN POWELL: Oh, yes. Good question. It's the polyester pantsuit fabric.

ADAM GLICK: You're also working on other open source projects. In particular, one of them is called Litmus. What does Litmus do?

EVAN POWELL: So what Litmus is, is a chaos engineering toolkit that is very containerized, for Kubernetes and stateful workloads on Kubernetes. So you install it via Helm chart. The workloads that you want to test, also via Helm chart. And then you can do things like, I wonder if my Postgres really will serve data back to me when I take down three nodes. And you can do that, as we do.

On openebs.ci, we test every commit to master against-- I don't know, it's like 36 permutations of clouds and workloads, including Postgres, under different scenarios. So it's not really chaos engineering. It's end-to-end testing. Or you can also, as some of our users do, use it in production to kind of keep your environment-- and your environmental engineers who built the environment-- honest. So you can do true chaos engineering.

And it's really amazing. At my last company, StackStorm, one of its big use cases at Netflix actually was responding to chaos engineering, which was truly awesome. So chaos monkeys would take down nodes, and we'd bring them back up. And it's like the war of the machines. But we are seeing chaos engineering especially given, how is this Kubernetes working? Is it rescheduling my data, and OpenEBS itself, and my databases, in such a way that the end user will still be happy? So Litmus really fits there.

We have not contributed it to the CNCF. But that could happen in the future. It's 100% open source.

ADAM GLICK: This reminds me a little bit of the Chaos Monkey tool out there. Was it inspired by that? Or is this a completely parallel project?

EVAN POWELL: It was definitely inspired by that and the whole simian army coming out of Netflix. But it really grows out of, well, we had an idea and an itch to scratch. What we've really tried to do as a community and as a company is listen. And what we have found is that Kubernetes administrators are smart folks. They're not necessarily people that have spent 20 years in storage.

And so it's a different persona. So what do they need to be able to sleep at night? And so Litmus came out of that. That's what we were using to sort of unnaturally advance more quickly the quality of OpenEBS. And so we thought, hey, let's productize this and package it at least and make it open source for folks.

We do a similar thing on the monitoring side. We contribute upstream to a project called Weave Scope. It's from Weave. We're a core maintainer. And that answers questions like, dude, where's my data. And it gives you that visibility.

Because again, if you're this Kubernetes administrator, good news. You can run the same storage anywhere, any environment that simplifies your life. But at the end of the day, you want to know that when things start to break, you'll still get the data back and that you can kind of point to it and tell your teams, no, I see what's happening here. So we're trying to do the job for them, help them do their job in software.

CRAIG BOX: Is the chaos that you're introducing killing the processes that serve things like the database? Or are you disconnecting them from the storage or removing the storage explicitly?

EVAN POWELL: Both. Or you can actually take down the entire Kubernetes node. I mean, it is quite an attack vector. You want to be pretty smart about it, of course. But you've got a whole range of things. And if you want to try it out-- and I think you can even do it on your phone-- on openebs.ci, you can inject live chaos into our own CI pipeline.

CRAIG BOX: That's brave.

EVAN POWELL: That is brave. And it'll keep the engineers very excited as people try that over the next years.

CRAIG BOX: Vote now for which node you'd like to see taken offline.

EVAN POWELL: Yes, exactly.

ADAM GLICK: What's the most interesting thing that you've found as a bug and fixed, based upon the use of Litmus?

EVAN POWELL: There is a long-running bug in the iSCSI Linux subsystem, which ironically was introduced by, of course, the people that developed iSCSI-- Open-iSCSI-- with whom I founded a company years ago in the storage space. So we were very tempted to ping Dmitri and Alex and say, dudes, can you fix this bug? But it's been in there a long time. It's not getting fixed.

And it manifests itself under certain conditions. And so what storage like ours, that's up at the container level-- or anyone else using the Kubernetes iSCSI subsystem-- needs to do is figure out a way to work around that. And so we found that, thankfully, through a combination of load testing and chaos engineering.

CRAIG BOX: The OpenEBS logo is a mule. That sounds like something there will be a story behind.

EVAN POWELL: Yes. And actually our corporate values are plow. So we really buy into this whole mule theme. And we have various stories about why we came up with the mule. But it's something about hybrid, dev and ops, storage and Kubernetes.

CRAIG BOX: Or stubbornness.

EVAN POWELL: And stubbornness. We're proud to be stubborn. We're also US-headquartered, but our real engineering core is out of India. So there are a lot of these sort of hybrid aspects to the company and the project.

CRAIG BOX: Evan, thank you very much for joining us today.

EVAN POWELL: Thank you. It's a real pleasure. Thanks, guys.

CRAIG BOX: You can find Evan Powell on Twitter @epowell101 and OpenEBS at openebs.io.

[MUSIC PLAYING]

CRAIG BOX: Thanks for listening. As always, if you've enjoyed the show, please help us spread the word and tell a friend. If you have any feedback for us, you can find us on Twitter @kubernetespod or reach us by email at kubernetespodcast@google.com.

ADAM GLICK: You can also check out our website at kubernetespodcast.com to get copies of our show notes and transcripts of each of the shows. Until next time, take care.

CRAIG BOX: See you next week!