I like to watch a TV show here called "University Challenge." For anyone who's familiar with any kind of quiz bowl thing, you've basically got two teams of four competitors. And they are asked questions by a wizened old gentleman named Jeremy Paxman, who challenges them on various topics. There's a lot of classical music, literature, all the sorts of things that you might expect from higher education in Britain in the 1950s. For some reason, the questions haven't really moved on past that point in a lot of ways.

But there was one in the picture round which really got me this week, and a lot of other people on the internet as well. They showed a picture of a computer. And the computer had its name and number scrubbed off. And you had to look at this breadbox computer with its obvious keys that gave me so many hours of pleasure as a child, and then watch these teams full of people with an average age of 20 completely fail to recognize it. And it just hurts your heart, really.

ADAM GLICK: Would this computer happen to have come in a distinctive brown case?

CRAIG BOX: Indeed, a fine beige, if you will. We've got a link to the video in the show notes. But in the follow-up question, they were asked to identify a white box that you would put video game cartridges into, from about a similar time. And it was just funny watching them guess what that could be.

They got halfway there. I won't spoil it too much for anyone who wants to check it out. It's about 30 seconds' worth of pure "oh my god, how can people not know this?" for a very thin slice of the population that I think you and I are both right in the middle of.

ADAM GLICK: Oh yeah, it was one of my first machines. I believe it actually was the first one that I owned myself. And I loved it.

Speaking of gaming systems, I've been playing a few more games. I love mobile games while I'm on the go. And I checked out a game this week called Golf Peaks, which is this interesting mix of a puzzle game and a card game, and it has a golf theme. I wasn't sure where I was going to go with that, because golf isn't quite my game.

The closest I get to golf is really riding around with a bunch of buddies, drinking and playing. My handicap isn't so good. But it's actually a really fun puzzle game. And it's rated really well in both the iTunes and the Google stores. I gave it a shot, and I've been having fun playing that one. That's been my little bit of joy for the week.

CRAIG BOX: Did you ever play a game called Desert Golfing?

ADAM GLICK: I did not.

CRAIG BOX: I didn't personally, but it was a bit of a meme on the internet for a while. I think the goal was basically you're on Mars, or insert some inhospitable faraway place. And you just have to hit the ball. And then you walk to the ball and hit it again. And the goal is just to go as far as you can. And that's all you do. It was an eight-bit theme, I guess, to tie it back to our original discussion.

ADAM GLICK: I was like, it's kind of the futility of "Papers, Please" put into an '80s eight-bit style.

CRAIG BOX: A little bit.

ADAM GLICK: Shall we get to the news?

CRAIG BOX: Let's get to the news.

ADAM GLICK: Apache Flink is at version 1.10. With that, Kubernetes support has come into beta. Flink is an open source data stream processing tool for distributed computing that differs from other such tools in that it handles its processing in a stateful way. Kubernetes joins YARN and Mesos as supported platforms.

CRAIG BOX: Linkerd 2.7 is out. The project has added support for external certificate issuers like Vault and cert-manager, and finally added the ability to rotate TLS certificates. Rounding out the release are extra dashboard features and upgraded Helm charts, with a breaking change to watch out for.

ADAM GLICK: If you're using Azure Container Registry for your container storage, make sure you're using TLS 1.2 connections to access your containers. Starting March 13th, the currently supported TLS 1.0 and 1.1 connections will be turned off. To avoid any service disruptions, make sure you update your transport security protocols for your container pulls now.

CRAIG BOX: You still have Linux running under your Kubernetes-- well, unless, of course, you are running Windows. And you still have to be aware of bugs and vagaries in the kernel. Fayiz Musthafa from Omio writes about a bug in the Linux CFS scheduler which was causing unnecessary throttling and preventing containers from reaching their allowed quota. His solution was to disable the CFS quota in the kubelet until rolling out a patched kernel.
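As a sketch of that workaround (not from the article itself; the field name is the standard kubelet configuration one), disabling CFS quota enforcement on a node looks like this:

```yaml
# Sketch: a KubeletConfiguration fragment that turns off CFS quota
# enforcement for containers on the node. Apply it via the kubelet's
# --config file. This trades quota enforcement for fewer stalls,
# so treat it as a stopgap until a patched kernel is rolled out.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cpuCFSQuota: false
```

The equivalent command-line flag is `--cpu-cfs-quota=false`.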

ADAM GLICK: Kiosk is a new multi-tenancy extension for Kubernetes built by DevSpace Cloud. The core idea is to use Kubernetes' namespaces as workspaces, where tenant applications can run isolated from each other. To minimize admin overhead, cluster admins configure kiosk, conveniently named with a K on each end, which then becomes a self-service system for provisioning namespaces for teams.

This soft multi-tenancy environment can be extended with a sandbox like gVisor to provide hard multi-tenancy. Author Lukas Gentele says that a guide for how to do this is coming soon, as well as multi-cluster support.

CRAIG BOX: Docker has announced the donation of the CNAB to OCI library to the CNAB project. This tool lets you save a bundle into a container registry. Learn more about CNAB in episode 61 with Ralph Squillace and Jeremy Rickard from Microsoft.

ADAM GLICK: Ever tried to debug a Kubernetes application? If so, you know it can be very different from what you may have been used to in the past. The CNCF has a blog post this week covering ways you can debug your Kubernetes applications. Although the post starts off slow with things that might seem obvious, like Googling for the answer, it does eventually get into a discussion of resource constraints, accessing running containers through a command shell, and tools that can be very helpful-- a good read for any new users.

CRAIG BOX: Hyperconvergence vendor Nutanix has updated its Kubernetes platform Karbon, with a K, to 2.0. New features include single-click version upgrades, air-gapped clusters, and tighter integration with their Prism user interface.

ADAM GLICK: Emily Omier posted a good reminder this week that KubeCon will have free childcare available for all attendees. It's a great service for those of us that are parents and a wonderful way to enable people to enjoy KubeCon without having to choose between taking care of your child and developing professionally. Additionally, if you want to attend KubeCon, there's a 15% off discount code available for our listeners in the show notes.

In case you're worried about the health implications of the novel coronavirus, the CNCF are monitoring the situation and have no plans to change the event. Although they will be encouraging everyone not to shake hands and to keep three feet between each other while talking.

CRAIG BOX: Back in August, IBM announced that after closing their Red Hat acquisition, they would be launching OpenShift on their Zed or Z series mainframes and their LinuxONE platform. This week sees that support move to GA, proving that you can teach old iron new tricks.

ADAM GLICK: Chip Zoller writes about why Kubernetes runs in VMs. He's taken a slightly humorous style of writing to point out many of the challenges that exist with running Kubernetes on bare metal. He makes a number of important points for anyone thinking about running on bare metal, including the challenges with persistent storage, networking, hardware management, failure recovery, and more. It's a good read if you're thinking about running Kubernetes on bare metal, to make sure you've thought through all of the implications.

CRAIG BOX: As people are building multi-cloud applications, authentication becomes a challenge. Vendors have good integration with their own services. But calling hosted services from one vendor with a Kubernetes cluster in another requires some negotiation of identity and keys. Exporting and storing keys with long expiration times is an anti-pattern. Alexei Ledenev from DoiT has written up the situation using the example of AWS and Google Cloud. By granting a Google account access to an AWS role and then exchanging tokens using a webhook, he was able to build a franken-auth system, which gives AWS credentials to pods running on GKE.

ADAM GLICK: Finally, Carbon Relay, a Kubernetes management and deployment tool startup, has closed a $63 million funding round this past week. Following on the previous $5 million round last January, Carbon Relay is aiming to simplify and automate many common Kubernetes tasks for their users. Congratulations to the team.

CRAIG BOX: And that's the news.

[MUSIC PLAYING]

ADAM GLICK: Leonardo Di Donato is an open-source software engineer at Sysdig and a core maintainer of Falco, as well as the creator of kubectl trace. He was previously at InfluxData and is a longtime open-source lover. Welcome to the show, Leonardo.

LEONARDO DI DONATO: Thank you two for having me here. The pleasure is all mine.

ADAM GLICK: How did you get started in the world of Kubernetes?

LEONARDO DI DONATO: It was 2015. With some friends of mine, I was building the tech platform for a startup aiming to sell early stage video games-- crazy idea. At that time, we were all really young and crazy, so we decided to orchestrate the microservices platform composing the startup software with a cutting-edge technology called Kubernetes.

I don't remember the exact version we launched the product with in the end. But I remember myself spending some time playing with PetSets and things like that. So I would assume it was like Kubernetes version 1.3. Maybe I'm wrong.

Anyway, in the third quarter of 2016, we launched the startup with Kubernetes under the roots-- yay-- and guess what? The startup went really bad really soon. But not because of Kubernetes.

CRAIG BOX: Well, that's good to hear.

LEONARDO DI DONATO: Yeah. So I moved on. But I'm still very, very grateful for that experience, because it taught me a lot. Especially, I started learning Kubernetes there, and I never stopped using and expanding it every job I had from that moment on.

CRAIG BOX: When did you first get involved with the debugging and tracing of systems running Kubernetes?

LEONARDO DI DONATO: The first time that I really got into this topic was two years ago, at InfluxData. Basically, I was working in a team responsible for creating the 2.0 cloud for InfluxDB. And while doing that, we encountered some unique challenges around debugging InfluxDB without even touching it, or debugging the platform without even changing its code.

So we needed a way to go look under the hood, into the kernel, where everything really is. So that was the first time that I found myself in need of tracing things for a production system, for a real system.

CRAIG BOX: Tracing can mean many things to many people. How do you define "tracing"?

LEONARDO DI DONATO: This is a difficult question, because in computer science, the difference between tracing and debugging is shady. It's not so clear. Things often overlap.

But I often think about tracing as a way to record things happening-- who made them happen and when they happened-- with the aim, and often the hope, of being able to understand why they happened. But the tracing that I like the most is the kind that records without getting noticed too much. This is probably the reason I love eBPF and got involved in the eBPF world.

CRAIG BOX: What are the systems for tracing a Linux kernel, and how have they evolved over time?

LEONARDO DI DONATO: There are a lot of technologies and tools for tracing the Linux kernel. There are tools like perf. There are tools like strace. There is a whole set of tools for tracing the Linux kernel that have been built over time. Every tool has its pros and cons.

But nowadays, in a cloud native world, we have an issue. The issue is that the tools we have for tracing the Linux kernel were not made for today's world, where we build distributed software in cloud native environments. So basically, we are passing through a process of creating other tools that use the existing tools to make them work better in cloud native environments.

This is the evolution that I see in Linux tracing over the years. What I found from my experience is that, among the various tools that we have for tracing the Linux kernel, eBPF is the one that is best suited to also work in cloud native environments.

ADAM GLICK: What is eBPF? And how did it come about?

LEONARDO DI DONATO: If I had to describe eBPF in 10 words, I could say that eBPF is to the kernel what JavaScript is to the browser, with the exception that eBPF cannot crash the kernel. [CHUCKLING] Because eBPF code gets compiled and gets verified by the eBPF verifier, which, for example, verifies that there are no unbounded loops, that there is no access to null pointers, and things like that.

This is the reason that eBPF is really a technology growing in adoption and becoming pervasive in cloud native environments. Because basically, we can describe eBPF as a technology that makes the Linux kernel fully programmable without having to write a kernel module. I think that everyone who has ever tried to write a kernel module knows what I'm talking about-- knows the issues that you can encounter writing a kernel module and destroying your kernel every 10 minutes.

CRAIG BOX: BPF was the Berkeley Packet Filter, which was technology developed in the early Unix systems around filtering network packets. How did we get the E?

LEONARDO DI DONATO: There is a bit of history here. In 1992, if I don't remember wrong, Steven McCanne and Van Jacobson wrote a paper describing how they implemented a network packet filter for the Unix kernel that was like 20 times faster than the state of the art in packet filtering at that time. It was a really big innovation in that field.

That paper introduced a new virtual machine that was designed to work efficiently with register-based CPUs, and the usage of per-application buffers that could filter packets without copying all the packet data, thus minimizing the amount of data BPF requires to filter packets. To summarize, it was a very efficient packet filtering mechanism.

Then in early 2014, Alexei [Starovoitov] extended the BPF implementation, completely reworking its implementation. He optimized it for modern hardware, and he made the resulting instruction set way faster than the machine code generated by the existing BPF interpreter at that time. He also increased, to give some technical details, the number of registers in the BPF virtual machine from two 32-bit registers to 10 64-bit registers, which opened the possibility to write more and more complex programs using function parameters.

So that was the time when eBPF was born, proving to be four times faster than the previous BPF implementation. In June of that year, if I don't remember wrong, the extended version of BPF was finally exposed to user space. That was the inflection point-- no return from this point on. BPF became a top-level kernel subsystem, and it stopped being limited to the network stack only.

CRAIG BOX: How do you run eBPF programs on a single machine?

LEONARDO DI DONATO: It depends on the kind of eBPF program. You can have XDP-- eXpress Data Path-- programs, and you have to attach them to network interfaces after compiling them, clearly, because eBPF programs are written in restricted C-- a kind of C that you have to write.

Or you can attach it to raw tracepoints. Or you can write uprobes to trace function calls in user space. Or you can attach to kprobes for tracing functions at the kernel level. So there are various ways to run BPF. But basically, the first step is to use the Clang/LLVM project to compile this restricted C against the BPF virtual machine target.
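That compile-and-attach step can be sketched as a pair of commands (not from the episode; the file and interface names are placeholders, and attaching requires root):

```shell
# Sketch: compile a restricted-C eBPF program against the BPF target
# using Clang/LLVM, then attach it as an XDP program with iproute2.
# (xdp_prog.c and eth0 are hypothetical names.)
clang -O2 -g -target bpf -c xdp_prog.c -o xdp_prog.o
ip link set dev eth0 xdp obj xdp_prog.o sec xdp
```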

ADAM GLICK: You created the kubectl trace tool. What is that tool?

LEONARDO DI DONATO: I built that tool with my friend Lorenzo Fontana, because we both felt the need to provide an easy way to schedule the execution of eBPF programs in a Kubernetes cluster. Well, technically, kube-cuttle trace, or kube-C-T-L trace-- I don't want to pick this battle-- schedules bpftrace programs. Anyway, at the time we open-sourced it, there were not many options out there to schedule and run eBPF programs in a cloud native way.

And we needed a way of doing it, and doing it effortlessly, because it was part, basically, of our daily job at InfluxData. To give more context, the complete story is that, while at InfluxData in 2018, Lorenzo and I were part of a team responsible for building the InfluxDB cloud, a cloud backed by Kubernetes. And during the development, we encountered some challenges. We needed tools to easily trace what's happening under the hood on the platform.

Thus, after some digging around I started investigating, learning, and then using eBPF programs targeting that cloud. I remember spending whole days pairing with Lorenzo, writing C code for eBPF programs able to trace I/O operations and latencies of the InfluxDB operations. I had very long days writing XDP programs for IP networking policies for the clusters.

Soon I realized three very important things. Tracing tools, as I said, exist, but they are not made to be aware of the abstractions that a distributed system like Kubernetes imposes. Writing restricted C for the BPF virtual machine can be really frustrating and hard. And scheduling and running such programs against Kubernetes is error-prone and really time consuming. So I started digging around again and then found bpftrace.

bpftrace is a tracing language for eBPF from the Linux Foundation IO Visor group. It's more high level than C. So it matched our needs. We basically switched from having to write five lines of C to using bpftrace one-liners. And that solved half of the problem that we were having on a daily basis.

Then we only needed a simple way to easily schedule and execute them on a Kubernetes cluster, avoiding YAML boilerplate. Suddenly, I realized that SSH for Kubernetes is kubectl, right? And this is how we ended up writing a kubectl plugin for bpftrace on Kubernetes.

When we put it on GitHub, it soon gained some really good traction, confirming to us that we were not the only people that want to experiment and use eBPF programs on Kubernetes. And this is sort of the tool and what that tool does. It enables people to schedule and execute bpftrace programs on their Kubernetes cluster with simple one-liners through kubectl.
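A one-liner of the kind Leonardo describes might look like this (an illustrative sketch in the style of the kubectl-trace README; the node name and probe are hypothetical):

```shell
# Sketch: schedule a bpftrace one-liner on a cluster node via the
# kubectl trace plugin; it runs the program in a pod on that node.
kubectl trace run node/example-node \
  -e 'tracepoint:syscalls:sys_enter_openat { printf("%s %s\n", comm, str(args->filename)); }'
```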

CRAIG BOX: What is the IO Visor Project, and how did kubectl trace end up in that?

LEONARDO DI DONATO: IO Visor Project is a group in the Linux Foundation focused on the eBPF world. In fact, the bpftrace language I was talking about is part of the IO Visor Project. It's basically a high level language, as I said. But in that project, in that group, there are also other tools relating to the eBPF environment, to the eBPF world. For example, there is BCC, the BPF Compiler Collection, a collection of BPF tools written in Python. There is also gobpf, a Go library for writing BPF from Go.

So that's the main goal of that group. How we ended up in that group, I remember receiving an email from someone-- I don't remember the name, but someone in that group-- asking me and Lorenzo if we were interested in donating the project to that group. And we said, why not?

ADAM GLICK: Are tracing and security the same discipline?

LEONARDO DI DONATO: Well, I consider tracing and security, especially runtime security, two opposite sides of the same coin.

ADAM GLICK: How do you think the two relate to each other then?

LEONARDO DI DONATO: Recently, we introduced a lot of abstractions. We create cloud native software. We deploy things in a different way. We write software in a different way. So we introduced a lot of complexity that we now have to take into account.

When I mention abstraction and complexity, I explicitly refer to Kubernetes, but not in a negative manner. It's just that abstractions hide some complexity by definition. But at the same time, the same abstractions also increase the entropy. It turns out that, to secure things, you need to dig deeper into them, uncovering all the complexity that you carefully tried to avoid by putting those abstractions in place, right?

For example, to securely run our applications on our Kubernetes clusters, we first need to understand how Kubernetes layers interface with the Linux kernel. And to understand it, we need to have full visibility from the kernel up. So we need tracing tools to do that.

This is the relationship I see between tracing and security. They are two different sides of the same coin. You can't have security without tracing things. In fact, I believe that runtime security has yet to be solved in Kubernetes for this reason, because we need tracing signals, from the kernel up, to understand what's really going on in our Kubernetes clusters, in our applications running on them.

To be honest, this is what really inspires me nowadays.

ADAM GLICK: How did you end up at Sysdig?

LEONARDO DI DONATO: This is a funny story. I was working, as I said, at InfluxData, working on a daily basis with eBPF. And at some point, we open-sourced kubectl trace. Someone from Sysdig-- since Sysdig is based on eBPF too-- reached out to me and Lorenzo, both of us. First me, but then I also said, well, there's a friend of mine; I would like to work with him, too.

And we started talking. They initially wanted to hire me for the commercial side, the commercial product of Sysdig. But in reality, I was not interested in going to build another Kubernetes-backed cloud. At that time, I was in love with eBPF. And I've always been in love with open-source. I just wanted to do open-source and do cutting-edge things with eBPF, in the tracing or the security field, or both.

So basically, I declined their offer. And then I started talking with the CTO of Sysdig, who is Italian, so it was simple for me speaking in Italian explaining my needs, explaining my desires and-- with my hands.

[LAUGHTER]

So basically, Loris Degioanni, the CTO of Sysdig, the creator of Wireshark, contacted me again, saying, well, we really want you two, because you are doing cool things in the cloud native ecosystem with eBPF. We in reality have an open-source project we would like you two to take care of. And that's how I ended up at Sysdig.

CRAIG BOX: The open-source project that you're working on is obviously Falco.

LEONARDO DI DONATO: Yeah.

CRAIG BOX: How would you describe Falco?

LEONARDO DI DONATO: Falco is a runtime security project originally created by Sysdig. Falco was contributed, by the way, to the CNCF in October 2018. The CNCF now owns and runs the Falco Project as an incubating project. To make a long story short, Falco consumes signals from the Linux kernel and from container management tools such as Docker and Kubernetes. Also, Falco is capable of consuming Kubernetes audit logs.

Then all that Falco does is parse all these signals together and assess them against security rules. If a rule has been violated, Falco triggers an alert. That's it. This is the short way I would describe Falco.
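As a sketch of what such a rule looks like (simplified from the shape of Falco's default ruleset; `shell_procs` stands in for a macro and is an assumption here):

```yaml
# Sketch of a Falco rule: alert when an interactive shell is spawned
# inside a container. The fields follow Falco's rule syntax; the
# condition uses macros like spawned_process and container from the
# default ruleset, plus a hypothetical shell_procs macro matching
# shell binaries.
- rule: Terminal shell in container
  desc: A shell was started in a container with an attached terminal
  condition: spawned_process and container and shell_procs and proc.tty != 0
  output: "Shell spawned in container (user=%user.name container=%container.id cmd=%proc.cmdline)"
  priority: NOTICE
```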

ADAM GLICK: What problem is Falco being built to solve?

LEONARDO DI DONATO: Falco is being made to solve the runtime security problem, to help people detect malicious behavior, mainly in cloud native environments, but not only there. Some examples of malicious activities are exploits of unpatched and new vulnerabilities in applications or in Kubernetes itself; insecure configurations in applications or in Kubernetes itself; leaked or weak credentials or secrets; and insider threats from agents and applications running at the same layer.

So we're building Falco so that, by plugging it into the security response workflow, the end user can reduce their risk: for example, immediately responding to policy violation alerts; leveraging up-to-date rules using community-sourced detection of malicious activities and CVE exploits; and strengthening their security by creating custom rules with a flexible engine to define unexpected behaviors in their applications.

CRAIG BOX: Falco started out consuming those signals from the Linux kernel via a kernel module that was installed on every kernel that it wanted to monitor. When did it change to using eBPF?

LEONARDO DI DONATO: That changed some time ago, because, as I say, having to use a kernel module is not something that every end user can do. It's more complicated. And also, eBPF, as I said, is something that allows us to have basically feature parity with the kernel module-- to extract the same amount of signals, but without having to deploy a kernel module into a node, into a kernel.

So that was something that was really needed, and we did it. That's it.

ADAM GLICK: How is Falco installed? Is it something that people put into their cluster? Does it run on every node? How does it get set up within your environment?

LEONARDO DI DONATO: Nope. At the moment, the main way of deploying Falco is a DaemonSet. And then for example, you can point the Kubernetes audit logs towards it, and it will process Kubernetes audit logs in order to trigger alerts. We are working to change the way that Falco can be deployed towards a Deployment, because we want to be able-- we don't want to impose such a requirement, to be honest.

But you can also deploy Falco like a sidecar when you need it, or with only one Falco instance. It depends on the needs that you have. We are working to make it easier and simpler to deploy on a Kubernetes cluster.
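For reference, the DaemonSet deployment described above is typically done with the community Helm chart (a sketch, not from the episode; chart coordinates as published by the falcosecurity organization):

```shell
# Sketch: install Falco as a DaemonSet on every node using Helm
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
helm install falco falcosecurity/falco
```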

CRAIG BOX: Are there any risks that come along with effectively adding code to the kernel on each machine in the cluster?

LEONARDO DI DONATO: The risk that I see from a security standpoint, to be honest, is that to do so you have to use privileged: true, which is not something that I would suggest to everyone. We are working on that, for this reason. We are designing a new architecture that will make Falco not have these requirements.

ADAM GLICK: What made Sysdig decide to donate Falco to the CNCF?

LEONARDO DI DONATO: As I said, Loris, the founder of Sysdig, is the creator of Wireshark, a really famous open-source tool. He, like me, deeply believes in open-source. So the main reason for donating Falco to the CNCF was the belief that doing so would help the whole community consider the runtime security problem. It was a way to expose a topic, a field, that needs work, that needs someone taking care of it.

Also, being backed by the CNCF and not by a commercial company means gaining access to a set of policies, communities, and different kinds of support that help the project be a really healthy open-source project. Because being open-source does not mean being on GitHub. Putting code on GitHub does not mean, to me, being open-source.

Open-source is way more. It's the processes that you use, the way that you make decisions about the project and its future, its architecture-- listening to everyone, letting everyone take part in the process. It's about being open in the whole process, not only the code. Because software is not just code nowadays. So this was the reason that Sysdig decided to donate Falco to the CNCF.

ADAM GLICK: Sysdig has a commercial product as well. What's in the Sysdig commercial product that's not in the Falco open-source product?

LEONARDO DI DONATO: As I said, I'm an open-source software engineer working on and taking care of Falco. I'm 100% on it. So I'm lucky enough to not have to care about proprietary products, which is something that I always dreamed of. But anyway, I know for sure that the Sysdig proprietary product is also composed of other wonderful open-source products, in order to cover broader security topics.

But I don't really know the details. So this is all I can say.

ADAM GLICK: Currently, Falco is incubating within the CNCF. When do you think that it will make the graduated stage?

LEONARDO DI DONATO: I'm Italian. I'm superstitious. I can't reply to this question.

[LAUGHTER]

In reality, the main focus now is not graduating, to be honest. It's doing things well, as I say-- doing open-source in the right way, improving Falco, for example improving the way that it can be deployed on Kubernetes, shaping the future of Falco, cultivating the project and the community in the correct way. I'm pretty sure that as long as we do all these things together, and we do them well, the adoption will continue to grow, and the graduation step will be a natural one.

I don't know if in one year or two. I don't care. I care to do things well.

ADAM GLICK: KubeCon EU is coming up shortly. And you are speaking at it, is that correct?

LEONARDO DI DONATO: Yes, that's correct.

ADAM GLICK: What are you talking about?

LEONARDO DI DONATO: OK. I have two talks-- one about the way that we, in the falcosecurity GitHub organization, set up rules for healthy contributions, for healthy discussions, and the way that we automate the enforcement of such rules, basically using a system similar to the one Kubernetes itself uses.

And then I have another talk about work I'm doing for Falco. Basically, I'm trying to design a gRPC interface for input signals. In that talk, I'll present all the findings, all the challenges that I'm encountering while designing and implementing such an interface. You can agree that passing millions of syscalls captured with eBPF over gRPC presents a set of unique challenges and performance concerns, right?

ADAM GLICK: Yeah.

LEONARDO DI DONATO: Things complicate even more, since the interface will also have to accept other kinds of signals, like the Kubernetes audit logs. So it's going to be fun for sure. I'm already investigating, for example, the usage of Google FlatBuffers rather than protobuf, to avoid encoding and decoding time. But I will talk more at KubeCon about this.

ADAM GLICK: Sounds great. What comes next for Falco?

LEONARDO DI DONATO: After the CNCF incubation, as I said, we are now focusing primarily on making Falco extremely consumable and modular. The main focus is to release the first stable version, the 1.0 or 1.x, with a set of well-defined mutually TLS-authenticated gRPC endpoints. I recently created a gRPC streaming API to let users receive output alerts over the wire.

My main goal now for Falco 1.x is to decide on and implement a gRPC API for input signals. When this interface is ready, I'd say it will also impact and change the way that Falco can be deployed on Kubernetes. The plan is also to create a whole set of Falco clients to interact with the inputs and alerts, and then with rules or general Falco configuration, through gRPC.

Some clients are already in the making-- a Go one, a Python one, a Rust one. The way that we decided what comes next for Falco, as I said, has been in the open. Yeah, Lorenzo, Kris Nova, and I met in person various times during the past months in order to make such plans. But also-- and this is very important for me and for the project-- we run weekly community calls, during which we discuss in the open with everyone-- we welcome everyone-- choices, directions, plans, even issues and pull requests-- everything, everything in the open.

So in case you want to contribute some Go code, some C++ code, some Rust code, we have things to do. We need contributors. We are open to them, and we welcome them. Or maybe you can just join the calls because you're only interested in digging into Falco. Join in with the calls. Maybe in the notes of the podcast, we can share links about this.

ADAM GLICK: Yeah. And we'll be glad to add links to the GitHub and the Slack channel, ways for people to connect with you and get involved in the Falco project. We have listeners around the world. Is there anything you'd like to say to our Italian listeners?

LEONARDO DI DONATO: We have listeners from all around the world. So I will stick with English. What I can say is something about open-source. I really love open-source. It's something that changed my life. It's something that I have loved since I was a little child, alone in a room in southern Italy, discovering things thanks to open-source, learning things thanks to open-source. It was, and still is, something really important for me.

And it also changed the world and continues to change the world. If you think about the tools that, for example, Google open-sourced-- like the ideas from Borg that then became Kubernetes-- the whole world has changed. The way that we write software has changed. The way that we deploy has changed. And this approach proved to help us in building better software, building better communities, building a better place.

And this is something that I really care about a lot, and it inspires me on a daily basis. So I would say to people: get involved in open-source. Maybe you can't write code; you can help with documentation. Please join and help other people. It's a way to share knowledge, to create relationships, to create a better place. This is something that I'd love to say to everyone.

ADAM GLICK: Thank you for your passion and your commitment to open-source and Falco. It's been great having you on the show, Leonardo.

LEONARDO DI DONATO: Grazie. Thank you all.

ADAM GLICK: You can find Leonardo Di Donato on Twitter at @leodido.

[MUSIC PLAYING]

ADAM GLICK: Thank you for listening. As always, if you've enjoyed the show, please help us spread the word and tell a friend. If you have any feedback for us, you can find us on Twitter @KubernetesPod or reach us by email at kubernetespodcast@google.com.

CRAIG BOX: You can also check out our website at kubernetespodcast.com, where you will find transcripts and show notes. Until next time, take care.

ADAM GLICK: Catch you next week.

[THEME MUSIC]