04/11/2019

At one end of the spectrum is the young Zuck encouraging his hackers to “move fast and break things.” And then there’s Hillel Wayne with a very different sort of advice: move a bit slower and get things right. Unsurprisingly, the more mature Mark Zuckerberg of today would now agree with Hillel. “When you build something that you don’t have to fix 10 times, you can move forward on top of what you’ve built,” Zuckerberg told BI.

Hillel’s road to this wisdom was much shorter than Zuck’s. A couple of years ago Hillel was working at a web development company that ran into a hairy distributed systems problem. The sheer complexity of it was overwhelming, so he started looking for a way to make it manageable. That’s when he stumbled on TLA+. Long story short: Hillel fell in love.

Now Hillel is a renowned formal methods consultant, advising and training companies on TLA+, Alloy, and various other formal methods. It’s part of his personal mission to evangelize the benefits of formal methods to everyday programmers.

Most engineers in industry don’t get particularly excited when they hear “formal methods.” They seem like they’d be, well, formal. But that’s not entirely true. Formal methods is a big field, and some parts are more “formal” than others. There are two main categories: specification and verification. And then within each of those two categories there are: design and code. In other words, there’s:

- design specification
- design verification
- code specification
- code verification

Most of us are familiar with the benefits (and detriments) of a few kinds of code verification: type systems, tests, contracts, etc. Hillel’s schtick is educating the public on the virtues of a more obscure corner of formal methods: design specification.

TLA+ isn’t the first design specification language ever created, but it was the first Hillel came in contact with, and it’s still his favorite. It was created by Leslie Lamport, better known for LaTeX and his seminal work on distributed computing (which earned him the 2013 Turing Award). Hillel explains that the main benefit of TLA+ is codifying all the scattered whiteboardings, UML diagrams, and documentation into a single formal notation that can also be automatically stress-tested for issues. It’s documentation for the broader system’s design, with added assurances provided by the TLA+ brute-force model checker. From Wikipedia: “TLA+ has been described as exhaustively-testable pseudocode, and its use likened to drawing blueprints for software systems.”

As Hillel would be the first to tell you, formal methods aren’t going to change your life. If you aren’t getting enough sleep currently, fix that first. Then maybe consider formal methods, such as TLA+ or Alloy. If you want fewer customers getting upset at you over production bugs, and don’t ever again want to spend two weeks of your life crawling through distributed systems logs: formal methods may be right for you.

Transcript sponsored by repl.it

Corrections to this transcript are much appreciated!

SK: Hello, and welcome to the Future of Coding. This is Steve Krouse. Today, we have a guest on the podcast that, if you've been listening carefully to the other interviews, has actually been mentioned at least two, maybe more, times by other guests. Hillel Wayne is best known for his work trying to explain and promote TLA+ to a broader audience of more practical engineers, people who might not think that what we call, "formal methods" would apply to, you know, building products, building web applications, building technologies for the startup they work for.

SK: If you aren't familiar with the term, "formal methods," I think this is a really great podcast to get your foot in the door. We start by contextualizing what formal methods are. We break up the field into four quadrants, and we go kind of quadrant by quadrant, and think about what each of the different techniques is used for, and the practicality of it. Well, I think I might be overselling how ordered this conversation is. Hillel will explain something, and I'll think I understand it, and then maybe 10 minutes later, I'll be like, "Wait a second. How is that different from that other thing we were talking about?" Then he kind of has to backtrack and clarify for me. But I think you'll be able to follow. And I think you'll get a greater sense for this small but active community of research that has a lot to offer to the future of what software engineering could look like.

SK: Before I bring you Hillel, a quick message from our sponsor, Repl.it.

SK: Repl.it is an online REPL for over 30 languages. It started out as a code playground, but now it scales up to a full development environment where you can do everything from deploying web servers to training ML models, all driven by the REPL. They're a small startup in San Francisco, but they reach millions of programmers, students, and teachers. They're looking for hackers interested in the future of coding and making software tools more accessible and enjoyable. So email jobs@repl.it if you're interested in learning more.

SK: Without any further ado, I bring you Hillel Wayne. Welcome, Hillel.

HW: Thanks for having me on.

SK: Yeah. It's really great to have you. I'm excited for this conversation.

HW: Mm-hmm (affirmative).

SK: I think I originally heard of you as the TLA+ guy. Potentially, I think I may have heard of you for the first time via another interview or two on this podcast. I think we have a few mutual friends in common.

HW: Yeah. I think it was a couple people. I think it was Kevin and I think James Koppel, both ... I think they were both interviewed by you, and they're sort of, we're sort of in the same circles, so I imagine it's one of those too.

SK: Yeah, and I think it may have actually been both of them saying-

HW: Oh, hey.

SK: ... "You really have to talk to-"

HW: I'm popular.

SK: Yeah, I think you are. I think you are. At least with the people I talk to. You have the illusion of popularity, given who I talk to. So, given that I know you as the TLA+ guy, I'd be curious to hear about your origin stories as this TLA+ superhero. Were you originally bitten by a radioactive Leslie Lamport, or did it happen some other way?

HW: God, you sort of put me on the spot there, because now I've got to think of like a really clever comeback to that, but I can't. So actually, it wasn't really anything that interesting. I was doing some work at a web development company, and ran into a really complicated distributed systems problem with their product. And what happened is I was looking for ways to make it a little bit more manageable. I stumbled on TLA+ and it worked out really well in my favor. And I'm like, "Hey, this is really great. Like, this is incredibly useful for my problem, and nobody really expected it to be that way. Why is there so little documentation?" So I figured I'd write some documentation for it, and then I wrote documentation. Then I figured I'd give a talk on it, and then I figured I'd write a book on it, and then it just kept going from there.

SK: Yeah, okay, so you have, I think the first thing I saw was learntla.com. Was that the documentation you were talking about?

HW: Yeah. It was supposed to be a tutorial, because there weren't any easy tutorials online that I could find. And then it just kept going from there, but that was the first thing. That was like early 2017.

SK: Yeah, and then I saw some talks.

HW: Yeah.

SK: And then you also have a workshop too?

HW: Yeah. I also published a book on it, actually just a few months back, Practical TLA+, with Apress.

SK: Cool, and then I saw that, consulting, that's TLA+ specific? You do trainings? Is that-

HW: Well, it's a lot of TLA+ but I also do a few other things. I do Alloy, I've done some consulting on MiniZinc constraint optimization. Just essentially whatever I feel comfortable with in the formal methods space that I feel like I can really teach well. It's a lot of that stuff too.

SK: Yeah. I just find it fascinating that you came across formal methods in your start-up work, and it seems like you just, "Well, now I just want to do this and only this for my life, for my career." Is that kind of like how it happened? You fell in love with this topic?

HW: Well, sort of. Because, I mean, it's definitely a really interesting topic. I obviously relate to it. But I think what I really enjoy is sort of technical writing and technical communication. If you see a lot of the writings I do, a lot of it is on formal methods because that's what I think I'm best at. Some of it's on accent analysis, lightweight specification, and the history of programming. I just really like communicating and teaching ideas. And I think formal methods at this stage has the highest sort of strength-to-obscurity ratio, where it's way more useful than the number of people who know about it would suggest. And that's sort of why I focus on teaching it, for that reason.

SK: Got it. Got it. That makes a lot of sense. Yeah, I can tell from the way that you write that you enjoy it. Or at least it feels that way, that you enjoy writing. I enjoy your writing and I think it comes across.

HW: Nobody actually enjoys writing. It's more of a compulsive thing for most people, for most writers.

SK: Yeah. I liked, in one of your essays you talked about how you asked your editor to be needlessly cruel and that he gets extra points if he makes you cry.

HW: Yeah. He didn't. I win.

SK: Did he make you cry?

HW: No. He didn't. I won.

SK: You won. Yeah, I feel empathy for you, because, maybe we'll talk about it later, but I think you'd have interesting things to say about this, but I spent a lot of time writing an essay this past week, and I got some terrible reviews that almost but didn't quite bring me to tears. So it is hard getting cruel feedback. Or not cruel, but just harsh feedback.

HW: Yeah.

SK: Okay. So let's get into the formal methods stuff. I thought it would be useful to start by situating ourselves and defining terms. I think your recent essay, "Why People Don't Use Formal Methods", that was on the front page of Hacker News, I think you did a really wonderful job.

HW: Thank you.

SK: Yeah. There are a lot of separate topics, and they all, I think the words that correspond to those topics are pretty good, and I'm glad that there are separate words for all these things. Could you roll off the top of your head? Or I wrote some of them down if you want me to kind of tee them up for you...

HW: Sure, I can talk a little bit more about this. So one thing to keep in mind is that, like any field in programming, formal methods is a big field. Saying, "I do formal methods", is kind of like saying, "I do web." It kind of gives somebody an impression, but there's a lot of nuance there. But also, a lot of fields in programming are very big. For example, with web, there are probably more people who do web development than live in New York State. But formal methods is extremely small, and it's also very fractured, because everybody who's in it often knows one or two things but doesn't really know the whole space of it. And the consequence of that is that there are a lot of ideas in there and some of the ideas overlap, but the people whose ideas overlap don't necessarily share the same terms.

HW: So I ended up inventing a lot of new terms for that essay, not necessarily because I think these are better terms, but just because, again, not everybody shares the same terminology. It was easier for me to talk to a public audience about it by just inventing terms and being clear that they were terms I invented on the spot to talk about the differences.

SK: Oh. Okay. Yeah, that's a good clarification.

HW: Yeah. So yeah, I divided it into two categories. Again, these are very, very fuzzy categories. There's a lot of overlap, and there are things that don't belong to either category, and things that belong to both categories, of thinking about code and thinking about designs. And then we divide each of those into two separate categories of how do we specify and how do we verify. Specifying being how we describe what we want to be the case. And verification being how we show that what we want to be the case is the case. And that's pretty much all formal methods is: specification and verification, to one degree or another.

SK: Got it. Okay. So just to recap, is it like we imagine four quadrants: specifying and verifying on one axis, and code and design on the other?

HW: Yeah.

SK: So there are four? Yeah.

HW: Again, yeah, very, very, very, very, broad, probably wrong, but wrong in a very useful way.

SK: Okay. So yeah, do you want to talk about each of the four quadrants a bit? Or what are the next important distinctions to make?

HW: Yeah, so actually, I should probably just mention this right now, because I realized I didn't actually define the term. Formal methods is sort of the study of how we can show that things are correct in ways that are sort of irrefutable. So for example, you might be familiar with, say, testing, right?

SK: Yeah.

HW: So testing works, but it only gives very limited amounts of verification. If you prove your thing works for inputs one through a hundred, maybe it fails for input 101. So formal verification is a way of saying, "Okay, we're going to test every possible thing, and we're going to show that no matter what you put in, it will always give what we expect."
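As a toy illustration of that "fails for input 101" point (a contrived example, not anything from Hillel's work): a function whose bug only shows up past the inputs the test suite happens to cover, caught by exhaustively checking a bounded domain.

```python
def double(x):
    # Hypothetical buggy implementation: correct for x <= 100,
    # off by one for everything after.
    return x * 2 if x <= 100 else x * 2 + 1

# A typical test suite samples some inputs and passes happily:
assert all(double(x) == x * 2 for x in range(1, 101))

# Exhaustively checking a bounded domain finds the counterexamples:
counterexamples = [x for x in range(1, 201) if double(x) != x * 2]
```

`counterexamples` starts at exactly 101. Full formal verification goes one step further still: it argues about all inputs, not just a bounded range you can afford to enumerate.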

SK: Well, so I think you used a few terms there that are interesting. You used, in particular, irrefutable.

HW: Yeah.

SK: I think is an interesting word.

HW: Which is also incredibly misleading.

SK: Okay.

HW: Yeah.

SK: Okay, well, I'll let you go give the high level and then we'll drill down into the specifics in a sec.

HW: Yeah. So, drill down into irrefutable or give the high level of the rest of the space first? Which would you prefer?

SK: Yeah, yeah, sorry. Let's give the high level and then we'll drill down into some of the specifics later.

HW: Okay, so, in those four quadrant-ish things, and again, this is sort of a formal methods thing of just always qualifying all my statements. Again, this is more of just a very rough model than anything else. So for code specification, you have a few different things. You have external theorems, which is essentially writing your code and then writing, in a separate file, the properties of that code. That's very similar to what we call testing, but more rigorous. We have really strong type systems, like dependent types or refinement types. They're sort of like static types, but harder to check and more comprehensive. And then we have this thing called logics and conditions, originally called Hoare logic, but now there's a bunch of different branches, where you essentially say, in a function, given these inputs, this property should be true of the outputs. And this corresponds to something called contracts in programming, which is a very powerful verification technique. But of the three ways that we verify code informally, it's the most obscure by far. Essentially, the easiest way to describe it is assertions.
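Contracts are easiest to see as assertions at function boundaries. A minimal sketch, using plain `assert`s as a stand-in for a real contract system (the function and its conditions here are hypothetical, invented for illustration):

```python
def withdraw(balance, amount):
    # Precondition ("requires"): what the caller must guarantee.
    assert amount > 0, "amount must be positive"
    assert amount <= balance, "cannot overdraw"

    new_balance = balance - amount

    # Postcondition ("ensures"): what the function guarantees back.
    assert new_balance >= 0
    assert new_balance == balance - amount
    return new_balance
```

Checked at runtime, these are just assertions; Hoare logic is the formal counterpart, where the same precondition/postcondition pair is proved to hold for every possible call rather than checked on the calls that happen to occur.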

SK: Yep. Yep. Okay. And so, this question probably should have been asked earlier, but you just used, in that last description, informally versus formally.

HW: Yeah.

SK: The way you're using that term, informally means kind of like eyeballing it and formally means a computer is checking something?

HW: Here I'm using informally to mean it can be automated. In fact, it usually should be automated, but it's automated in a way that doesn't give you complete confidence. Essentially, informal verification is still automated verification, it's just done in a way that's, one, much easier and, two, not as comprehensive as formal verification.
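One way to picture "automated but not complete confidence" (a contrived sketch, not a real property-based testing library): randomized testing samples the input space, so a bug that lives entirely outside the sampled region survives, while exhaustive checking of even a small domain catches it.

```python
import random

def square(x):
    # Hypothetical bug: only triggered by negative inputs.
    return x * x if x >= 0 else -(x * x)

def random_check(samples=1000, seed=0):
    """Informal, automated verification: randomly sample the inputs.
    Here the generator only draws non-negative values, so the bug
    is never exercised."""
    rng = random.Random(seed)
    return all(square(x) == x * x
               for x in (rng.randint(0, 10**6) for _ in range(samples)))

def exhaustive_check(lo=-10, hi=10):
    """Closer to formal: cover every input in a (small) domain."""
    return all(square(x) == x * x for x in range(lo, hi + 1))
```

`random_check()` passes and `exhaustive_check()` fails, which is the spectrum in miniature: confidence depends on how much of the input space your verification actually covers.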

SK: Okay, so I guess it's a spectrum-y thing?

HW: Yes, it's a spectrum-y thing.

SK: Okay, so formal is at 100 or it's just past 50%?

HW: It's basically 100%. In fact, that's one of the things that people miss, is that for the most part, formal verification, while the most powerful way of verifying that stuff is correct, is probably not the most productive in most cases. Because to get to 100%, you have to work much, much, much harder than it takes to get to 99%.

SK: But 99% is informal.

HW: Yes, ish. I mean, that's why we have to sort of put these things on a spectrum, because what does it mean to be 99% correct versus 98% correct, right?

SK: Yes. Well, I guess I'm just trying to figure out what the whole field is because, you know, when you talk about formal methods versus informal, I don't know. Maybe this was a bad digression. I'll let you get back to the high level.

HW: I know. We should probably drill into that digression at some point. So in any case, the thing is that all these ways that we can spec, we could test both formally and informally. For example, if I write, "this is the specification of my function," I could test it using full verification, or I could write a million manual tests, or put it through a really intense code review. But if I want to formally verify it, what I could do is write, for example, what's called a proof. Which is essentially a mathematical statement showing, from our basic premises, how we can conclude that this is going to be correct, in a way that a machine can check. Often these days, that's considered really, really hard. So when we can, we usually shell out to a solver, for example a SAT solver or what's called an SMT solver, to automate some steps for us. That's pretty much the main way that we formally verify code is correct.
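What a SAT solver decides can be shown by brute force on a tiny formula (real solvers are vastly cleverer, but the question they answer is the same). The formula below is an arbitrary example in conjunctive normal form, made up for illustration:

```python
from itertools import product

# (a or b) and (not a or c) and (not b or not c), in CNF.
# Each literal is (variable_index, wanted_truth_value).
CLAUSES = [
    [(0, True), (1, True)],      # a or b
    [(0, False), (2, True)],     # not a or c
    [(1, False), (2, False)],    # not b or not c
]

def satisfying_assignments(clauses, nvars=3):
    """Try all 2**nvars assignments; keep those where every clause
    has at least one satisfied literal."""
    return [
        assign
        for assign in product([False, True], repeat=nvars)
        if all(any(assign[var] == want for var, want in clause)
               for clause in clauses)
    ]
```

For this formula, exactly two of the eight assignments satisfy every clause. An SMT solver extends this kind of decision procedure from booleans to richer theories (integers, arrays, and so on), which is why proof tools shell out to one for the tedious steps.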

HW: And it works, but it's also really labor intensive. I think the fastest anybody's ever done it was four lines a day, using cutting-edge tooling and all the resources of Microsoft combined. And that's one of the reasons why, for the foreseeable future, code verification is probably going to remain in the realm of experts. Code specification is really powerful and I think could be more widely used. But code verification, at least formally, isn't really on the horizon for mainstream use.

SK: Got it. Okay. So, okay, so just to recap because I feel like that got a little bit messy there.

HW: Yeah. No. It's a messy topic.

SK: Yeah. Yeah. And I interrupted with all sorts of things.

HW: No worries.

SK: I thought, maybe let's just start over in a sense of-

HW: With an example?

SK: Well, I was thinking maybe, just to get a picture in my head, even just for me: the field is called formal methods.

HW: Yes.

SK: And that field has four subcategories: code verification, design verification, code specification, and design specification. Is that correct?

HW: Four ways of thinking about it. Usually people do either code or design. So they do both code verification and specification, or design verification and specification. I mostly make that division to make it a bit clearer what we're talking about when we talk about proving something correct versus specifying it correct and whatnot.

SK: So what you're saying is that usually any given person only deals with code or design?

HW: Usually, and I'm waving my hands very dramatically here when I say usually.

SK: Okay. So does that hold for you too?

HW: I'm mostly design verification and specification. I mostly do design work.

SK: Are design and specification like the same thing?

HW: So when I say I do design... most people, when they say that they do formal spec... okay, this is weird. Usually, if a person says that they primarily work with a specification language, that they mostly do formal specification, they usually mean that they do design specification and design verification.

SK: Yep, okay.

HW: Which might surprise you, because I said formal specification, so why are they doing verification? But you know, again: fuzzy terms, small field, fractured field, lots of different pieces to it.

SK: Okay. One question that I have here in my notes was: what's the difference between verification and validation?

HW: So verification is basically taking a description of how the code should be and proving the code matches that. So for example, let's say I say that this code should always sort a list in descending order. The specification would say that for every two indices, if one index is greater than the other, the element at it is going to be no higher than the element at the other. And then verification is basically showing how that's always going to be true, right? Validation is when you say, "Wait, do we actually need this sorting function? Maybe we actually need a maximum function. Maybe we are doing the wrong thing entirely." It's at the level of: what are the human requirements, and how do we show we match the human requirements of the total system? And that's usually outside the scope of formal methods, because that deals a lot more with social systems and understanding customer requirements.
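[Editor's note: that sorting specification is easy to write down as an executable property. A minimal Python sketch, with function names invented for this example:]

```python
def satisfies_descending_spec(xs):
    # The spec: for every two indices i < j, the element at the smaller
    # index is at least as large as the element at the larger index.
    return all(xs[i] >= xs[j]
               for i in range(len(xs))
               for j in range(i + 1, len(xs)))

def sort_descending(xs):
    return sorted(xs, reverse=True)

# Verification asks: does sort_descending satisfy the spec for ALL inputs?
# Validation asks: did we even want sorting, or would max(xs) have done?
assert satisfies_descending_spec(sort_descending([3, 1, 4, 1, 5]))
assert satisfies_descending_spec([])
```

[Running the asserts only tests a couple of inputs; formal verification would show the property holds for every possible list.]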

SK: Okay. So it sounds like on one level you have reality, and you're trying to match your specification to reality. And then below that you have code, and you're trying to match the code to the specification.

HW: Yeah. That's, I think, a really good way of putting it. And the former would be validation; the second one would be verification.

SK: Okay, got it. Yeah. So we're trying to validate our business specification with the market, and then we're trying to verify that our code meets the business specification?

HW: Exactly.

SK: Got it. And that involves both code verification and design verification, because in order to validate that the specification meets the business needs, we first do the design verification stuff. And then, once the design is verified, we'll write the code and then try to verify that the code meets the specifications.

HW: Yeah. In an ideal world with an infinite amount of resources, yes, that's how it would look.

SK: Got it. Okay. And so today, what you were alluding to is that, at the fastest, you can do four lines of code a day. Potentially, in 100 years, we'll have some improvements in our theoretical understanding of this stuff, or machines will just be a lot faster, and potentially we'll be able to do all this for every line of code, and it'll be just as fast as we write code today?

HW: I don't know. I'm hesitant to make predictions about 10 years from now. Because, again, remember: 50 years ago, when we first started doing formal verification, we were like, "Oh yeah, in 10 years we're going to have everything verified." And then it was like, "No, that's crazy. In 20 years we'll have everything verified." And now it's been 50 years and nothing's verified. So I mean, it's really hard. Proofs are hard, validation's hard. We often don't really know how to represent specs. These are all really difficult topics, so it's hard to make predictions about how things will turn out. I think there's definitely going to be expanded use of design verification, just because right now we've seen it be really successful. But code verification, I think, will for the foreseeable future remain a niche topic that's done in special cases by experts. And I don't know if or when it will ever be a thing that everybody's doing.

SK: Okay. Interesting.

HW: Yeah.

SK: I guess to drill down a bit, you talked about irrefutable proofs, things that are checked by the computer.

HW: Yeah.

SK: One of the things I found in your writing that relates to this is you were talking about how 20% of published mathematical proofs aren't actually correct. There is an error that the person who wrote it missed and the reviewers missed?

HW: At least according to that one reference I found. It could be that the reference is wrong, or it could be that the reference is understating things.

SK: Yes. Well, yeah, I agree. You know, in the spirit of questioning everything, you have to even question the thing that refutes things.

HW: Yeah. So when I say the proof is irrefutable, I mean: assuming the following is true. Assuming that the prover is correct, assuming that any auxiliary tools you're using with the prover are correct, and finally, assuming that you have all the requirements. At which point, we show it's irrefutable for the specific context you're talking about. Because a common thing that gets said is, "You can never actually prove a thing will always work, because for all you know, as soon as you start the computation, somebody's going to hit the server with a baseball bat."

SK: Yes, of course. And so, I guess, I think the word irrefutable is interesting, because you could say the same thing about mathematics. A proof is irrefutable if the people who reviewed it for the journal didn't make any mistakes.

HW: Yeah.

SK: If you assume that to be true. Because that's essentially what you're doing when you're assuming that the code verifier has no bugs.

HW: Yeah. Yeah, so I believe that for some of them, for example, and don't quote me on this, but the core of the Coq prover has been proven by hand to be correct. So we know that the core is essentially irrefutable. But all the auxiliary tooling is sort of hodgepodged together as academic projects, so that's less trustworthy.

SK: Well, so I'm struggling. Why is Coq irrefutably correct? Because it was done by hand?

HW: So here's the thing: that's what I've heard, and this is what I've heard from people who worked on it. I cannot say exactly how that's the case, and if you basically put a gun to my head and said, "Is this true?" I'd say, "Maybe." I don't know enough about the topic. Again, I do have to clarify here that one of the effects of talking a lot about formal methods and working with them is that I'm not really comfortable saying things I don't know are absolutely the case. So given that I've not worked directly with Coq and I haven't looked at the papers, I don't know enough about how they verified it to tell you how it's been verified.

SK: I see. I see. So I guess what I'm driving towards is that the main difference between formal methods, like computer methods, and mathematical proofs is whether a human or a computer is doing the checking?

HW: Mathematical proofs tend to be less rigorous than formal methods.

SK: Yeah, why is that?

HW: Because most mathematical proofs aren't automated. So the thing is, if we have a computer checking things, assuming that we built it all correctly, assuming, assuming, assuming, we can essentially say whether every step is a correct or incorrect inference, given that we had to break it down to a level the computer can handle. But with mathematics, often the purpose of a mathematical proof is to convince people, not to 100% prove something's the case. So there will be things like: this one step, we can show with a heuristic argument, and everybody looks at it and goes, "Yeah, that makes sense," and they can skip that one step of the mathematical proof. And that's often done because, you know, you don't want to sit down and say, "Okay, in this context, we can prove that addition is associative, and we're going to prove that we can use induction and that induction is actually a reasonable principle to have in the first place." Whereas the computer has to do all that stuff. So it reduces the chance that you will accidentally assume something is possible or easy when it turns out that, in this one very particular instance, you can't do it.

SK: I see. That's quite a claim. I think I've heard it before, but I just want to repeat what you said: that mathematics is about convincing and explaining to other humans, it's not about making sure that you're not fooling yourself. Is that kind of what you're getting at?

HW: I mean, you're trying to convince other humans who are very, very invested in not fooling themselves.

SK: I see.

HW: I think one good example of what I mean, this was something I read, I think by Terence Tao: one difference between the recent claimed proof of the ABC conjecture and the proof of Fermat's Last Theorem is that from the first five pages of the proof of Fermat's Last Theorem, people were getting interesting results. So even though it was a really, really, really huge proof, very early on people were saying, "Oh, this is interesting. This gives us some really cool new machinery to work with. This has already been useful to us." That convinced people that there was something apparently interesting there. Whereas with the ABC proof, which I think has recently been claimed to be invalid, you had to read the entire thousand-page document to get any value out of it whatsoever. So that made mathematicians less convinced that it was all correct, that it was a useful document. Does that make sense?

SK: So I understand the story. I'm just not exactly sure how it relates, what I'm supposed to get out of the story.

HW: Yeah, basically just the idea that mathematics, like anything else we do, in addition to being a technical institution is also a social institution: it's all about how mathematicians interact and how we all do things as a group. And similarly, formal methods is a social institution as well as a technical institution. One of the consequences is that with mathematics as a social institution, some amount of mathematics is in the social act of convincing and rhetoric, which is how it should be, given that we're not machines. Whereas with formal verification, often the only thing we care about is making something pass the formal verification tooling, which means that, in that one context, it's almost entirely about making sure that every single thing is correct.

SK: Got it. Okay. Well, I think this is a good point to transition to the difference between theorem provers, which is what I think most of what we've been talking about has involved, and model checkers.

HW: So essentially, there are two ways to show something's correct. You can either construct a rigorous argument showing it's correct, or you can show it's impossible for it to be incorrect by brute force over the entire space of possibilities. So I guess, here's a simple example of what I'm talking about. Let's say you have something that works over 32-bit floats, right? You have some function that takes two 32-bit floats and returns a float, right?

SK: Sure.

HW: So there are only about four billion 32-bit floats, right? So you could literally just go and check every single one of those combinations. And if you do that, you can actually brute-force it and make sure that every single thing does what you expect it to. And that does sound like a bit of a burden, but proving stuff is kind of also a giant cluster, so... Does that make sense?
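[Editor's note: here's that idea in miniature. Exhausting all ~4 billion 32-bit floats is slow in Python, so this sketch, an invented example rather than Hillel's, brute-forces a property over the full 16-bit integer space instead.]

```python
# Exhaustive checking in miniature: verify a bit-trick power-of-two
# test against the obvious definition for every 16-bit value.
# No proof needed -- we simply try all 65536 inputs.

def is_power_of_two(x):
    # Bit trick under test: a power of two has exactly one bit set,
    # so clearing the lowest set bit (x & (x - 1)) leaves zero.
    return x != 0 and (x & (x - 1)) == 0

# Reference "specification": the slow, obvious enumeration.
powers = {1 << k for k in range(16)}

# Brute-force the entire 16-bit space.
for x in range(1 << 16):
    assert is_power_of_two(x) == (x in powers)

print("property holds for all 65536 inputs")
```

[The same loop over all 2^32 bit patterns is what Hillel describes for 32-bit floats; it is feasible on a machine, just not in a quick Python script.]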

SK: Yeah, that makes a lot of sense.

HW: Yeah.

SK: And then the last kind of thing I'll ask at this abstract level, before we get concrete, is, I think, a very common programmer question. I don't even have to speak for other programmers, I can just speak very personally. When I hear about specification and verification, I really want these things to be tied to my code. I don't like having to duplicate the effort of specifying and then writing code and then having to eyeball or verify. Basically, I wonder if I could just write the specification and then the compiler writes the code for me, or the specification is the code. I feel like you have terms for that...?

HW: Yeah. So the term for taking a specification and generating the code from it is synthesis. This is not something that we can do mainstream; it's still a very niche academic topic, and one that a lot of people are obviously working really hard on. But it turns out that generating code is really hard. I can actually link some stuff, because Nadia Polikarpova is one of the big people doing a lot of the really cool work in this space, and she recently did a talk at Strange Loop about some of her work. It's all really cool stuff. But she's also very clear this is not going to be mainstream in the next 10 years. It's not going to be getting close to mainstream in the next 10 years. So yeah, it turns out that's a lot harder to do than just sitting down and showing that code matches a specification, which is itself a lot harder to do than showing that code most likely matches a specification informally. So there are all these tiers of difficulty. And I think one of the things that happens is that people get fixated on the golden mean, the end state, to the point where they ignore all the really big benefits that we can get in between.

SK: Okay, yeah. Yeah, I know what you mean. And so I think let's start talking about some of those benefits you can get even now, before that perfect end state, in the imperfect world we live in. You have been singing from the rooftops the virtues of using TLA+ for design verification. So let's hear more about that.

HW: Yes. So one thing I do want to say first, really quickly: we do also get benefits with code verification. For example, a lot of type systems do some partial verification. Rust has a borrow checker, and that basically lets you do a lot of verification automatically. So we are making a lot of steps to make certain aspects of verifying code more accessible, and we've seen a lot of success for a lot of those steps. So it's not just working with designs where we see immediate benefits.

SK: Got it. So you're saying that, both for designs and for code, even though we're not at the end stage, intermediate or 80/20 versions of these formal methods are useful, both in code and design?

HW: Yes. But I think what happens is that a lot of the code verification stuff has basically been tied to a language, which is really good. But design verification has not been tied to a language, so you don't have to use a particular language in your code base to get the benefits of design verification, which is one of the reasons I think it's so valuable. One of the reasons I think things like TLA+ and Alloy have a lot of really good uses even today.

SK: Oh, interesting. So if you're using Agda you can get those benefits. But if you're not, then you're kind of screwed. And that's the appeal of something like TLA+, you can use it with any language?

HW: Yes? Part of the appeal anyway.

SK: Part of the appeal. I see.

HW: Yeah.

SK: Yeah, that definitely makes a lot of sense. And I guess it's similar to test driven development, like unit tests can be done in any language.

HW: Exactly.

SK: Yeah. Okay. Same idea. Or Agile, Agile can be done in any language.

HW: Yeah. And that ends up being really important for the development, socially, of a lot of these ideas, because if you can start getting the benefits without having to change your entire code base, you're more likely to do it than if you have to rewrite everything from scratch to get some value out of it.

SK: Yeah, of course. Okay, so let's finally dig into it. TLA+. So for those who aren't familiar, could you do your, you know, whatever, two minute spiel of what TLA+ is, the motivations behind it, how it came about? All that jazz.

HW: Okay. Sure. It's actually one of the interesting challenges: how do you explain this without demos? I found that the easiest way to describe it is to show people demos, but obviously we can't do that on a podcast. So, okay. When we're designing, basically, we're building systems that involve, say, multiple actors or multiple programs or clients and servers. We have the code, right, that actually embeds all of these things. But the code is simply how we do these implementations. It doesn't show our high-level understanding of what should be going on and what is going on. For example, imagine you have something as simple as, say, a web app that has both a front end and a back end, and then services and a deployment system. You're looking at a space that can't really be encoded in just a single code base. You're at the very least looking at multiple code bases all interacting with each other. Right?

SK: Yep.

HW: So none of the code that you've written really expresses or is aware of the full design of your system. And because of that, it can't really help you with verifying the design itself. People implicitly understand this. That's why people do things like whiteboarding or draw UML diagrams or talk about doing acceptance-driven development. It's this additional understanding of, "Hey, there's this broader design that has its own challenges beyond just how each line of code is working or not working." But if we have this idea that we have a larger-scale design that we care about, why not specify it, and then why not test that specification for issues? And that's a lot of the motivation behind TLA+, which is by Leslie Lamport, the same person who did LaTeX and basically half of distributed computing.

SK: So it sounds like you can almost think about TLA+ as a direct replacement for documentation or whiteboarding or UML diagrams?

HW: Augmentation, not replacement. Still write your documents, please document stuff.

SK: Okay. Because the specification isn't understandable, you can't just read a specification and understand the system. You still need documentation.

HW: Oh, no, you can totally read a specification, and it can give you a lot of insight. But I think it was David McKeever who said, "Sure, caffeine can help you replace sleep, but caffeine isn't sleep." Things like tests and specifications can help you understand a system, but they're not documentation. Documentation exists at a human level, even higher than any specifications you can write. Still write your documentation, write your requirements analysis, and then write your specifications.

SK: Yeah.

HW: I'm not really selling TLA+, am I? Basically just going, "No, it's not that great. It's not great for everyone."

SK: Well, I guess what I'm reacting to is it seems like we just keep layering things on, you know?

HW: Yes.

SK: We have our code, and then, okay, well actually, you have to write these tests for the code. Oh, and I have to do all this Agile stuff to write the code in the right way. And oh, actually, you also need these integration tests. And oh, actually, now you need to document your code. And then also, you now have to write this TLA+ specification for your code. So it would be nice if one of these things could replace some of the other ones, so we could simplify some of these other things. It feels like we're just going to keep layering on things, and eventually we'll all be stuck writing four lines of code a day.

HW: I mean, there's a reason we're paid a lot of money as engineers. This is hard stuff. I mean, this is really fundamentally hard stuff. And there's a reason we're paid a lot of money to do software. Right?

SK: Well, I think that's an interesting claim. Are we paid a lot of money to do software because it's fundamentally hard or is it, you know, incidentally hard?

HW: I guess both. Okay. So just to clarify, I guess what you're asking is: it seems like we have to do all this extra stuff, so is it worth the effort? Because you're talking about it from a productivity perspective, you're worried about it slowing everything down, right?

SK: It's not "is it all worth it," it's more that it feels like each thing we add on, unit tests, integration tests, formal specification, documentation, feels a bit like an ad hoc solution to one part of the problem. And it's not a unified solution to anything. Does that make sense at all?

HW: It does.

SK: Yeah. Okay.

HW: So my thought there is that a unified solution that solves everything for us would be nice. Historically and empirically, almost all the ones we've tried have not worked out. It turns out that complicated problems often do, unfortunately, require complicated solutions.

SK: Yeah, well, actually, now just hearing myself talk and hearing what you just said, it reminds me of the "No Silver Bullet" essay.

HW: Yeah.

SK: Which most people misunderstand. But the central metaphor of that, that I remember, is medicine: how before germ theory we thought there'd be some magical cure, some simple magical cure for diseases. But then once we finally accepted germ theory, we realized that there would be no one big solution; it'd be a lot of tiny little solutions that'd be hard to find.

HW: Yeah.

SK: I guess that's kind of what you're saying with software. There's gonna be no unified one solution. It's going to be a bunch of little add-on things that we'll have to keep adding on to software to make it better incrementally over time.

HW: Yeah.

SK: Just like we have to take a flu shot every year, and we also take a tetanus shot, and we also take a polio vaccine. There's no one magical shot that combines all of those vaccines. We have to take them all.

HW: Yeah. And I think that that's true with almost any sort of human system. I think with almost every system you're gonna look at, whether it's software engineering or medicine, and I'm assuming, and I could be wrong, since as we all know, we do not know other fields very well, with other kinds of engineering and also with [inaudible] and such, it's just that there are really complicated problems that there's a million small solutions for, and no one ever finds one magical thing that just fixes everything.

SK: Yep. So I hear that for sure. And then I feel like on the other hand, there are times when you have the geocentric theory and you add epicycles and epicycles and epicycles, and all of a sudden you realize, "Oh, wait, if we just make it a heliocentric theory, we get rid of all those epicycles and everything's more elegant." And we've replaced these ad hoc things with a new elegant foundation. So that happens too sometimes.

HW: And then you have to start figuring out the precessions of stuff, and then you realize you have to add in general relativity and special relativity to adjust for other things, which are verified but also incredibly complicated.

SK: Yeah, yeah. Well, I guess that's kind of the Beginning of Infinity thing. We'll never quite be able to explain everything.

HW: Yeah.

SK: I guess, to go back to my original skepticism, it's really skepticism of ad hoc-ness.

HW: Can you clarify what you mean by ad hoc-ness?

SK: Yeah. Ad hoc-ness is a hard thing to define. But I guess what I'm getting at is: if I asked you to list all of the practices you would recommend for an engineering team, like unit tests, writing code, version control, you know, maybe just list some of the things you would recommend.

HW: I guess some of the ones that I would recommend would be formal specification, obviously, I've got to say that. Obviously writing code, you have to do writing code. Version control is important. Code review is extremely important. It's one of the few things that we are empirically sure, with multiple studies, is a great idea.

SK: Code review, probably?

HW: Yeah, sorry. Code review. Did I say something else?

SK: Oh, I thought maybe you said, I'm sorry. Nevermind. I'm sure that's what you said.

HW: Yeah.

SK: You were just dropping out a bit.

HW: Yeah. My mistake. Yeah, basically: code review, really, really good. Taking time to do stuff, adequate sleep, exercise, good relationships with clients, constant feedback, really careful post-mortem system analysis, really careful pre-mortems. I realize a lot of this isn't actually at the code level. Do you want what I think would be effective things for coding?

SK: Oh, well, yes.

HW: Yeah.

SK: Yeah, go for it.

HW: Coding: probably unified style, randomized testing, although that's interesting, because we're not quite sure what the best tests to write are. I think a lot of people are really fond of unit tests. I think those are great, and you can write them really fast. But there are also other things that are really powerful, like large-scale testing, probably some measure of observability. I'm not sure if this is really supporting your point or mine here.

SK: Yeah, I'm not sure either, but I kind of like where this is going. I think neither of us really know where it's going.

HW: Yeah.

SK: Yeah, I guess maybe this is just how it is. To pick another example, if you were to say, if you want to be the best tennis player in the world, what are all the things you have to do? I guess the list would kind of be long and complicated. And then someone will be like, "No, actually, you have to add this thing too, now that we know this feature of our rackets is important." You have to worry about that too. And actually, you know, we didn't really know that gluten was bad or whatever, gluten was good or whatever it is.

HW: Yeah.

SK: I guess-

HW: I do see one simplification that I think you would find interesting as a simplification. I think a lot of unit tests and integration tests, not all of them, but a lot of them, can be folded into a combination of property tests and contracts.

SK: Okay.

HW: That's just my opinion though.

SK: Yeah, yeah. Tell me more about that.

HW: So are you familiar with contracts?

SK: Yes, but let's assume not.

HW: Okay. So essentially, a contract is an assertion that you make, usually as either a precondition or a postcondition of your function. So say, if you have something that takes the tail of a list, you can write the postcondition saying that it will have one less element than the original list, and also that if you append the head of the list to the output, you'll get the same thing back. So these are essentially specifications that reside in the code itself. And they can be used for formal verification, but they can also be, and more commonly are, used for runtime verification. Every time you call the function, you just check the preconditions and postconditions. And if they're wrong, you just stop execution and raise an error. And it turns out that if you do this, one, it's really effective, but two, you can now start to test by just pumping randomized inputs through the system. And if you have a bug, the appropriate contract will stop and raise the issue. So you start to get really simple integration tests from that.
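To make that concrete, here is a minimal sketch of the tail-of-list contract Hillel describes, plus the randomized-input testing on top of it. The `contract` decorator is a hypothetical stand-in, not any particular contracts library:

```python
import functools
import random

def contract(pre=None, post=None):
    """Hypothetical contract decorator: check a precondition on the
    arguments and a postcondition relating the result back to the
    arguments, on every call."""
    def deco(fn):
        @functools.wraps(fn)
        def wrapper(*args):
            if pre is not None:
                assert pre(*args), f"precondition failed for {fn.__name__}"
            result = fn(*args)
            if post is not None:
                assert post(result, *args), f"postcondition failed for {fn.__name__}"
            return result
        return wrapper
    return deco

@contract(
    pre=lambda lst: len(lst) > 0,
    # The two postconditions from the example: the tail is one element
    # shorter, and head + tail reassembles the original list.
    post=lambda result, lst: len(result) == len(lst) - 1
                             and [lst[0]] + result == lst,
)
def tail(lst):
    return lst[1:]

# Randomized testing: pump random inputs through and let the contracts
# themselves flag any bug.
for _ in range(1000):
    xs = [random.randint(0, 9) for _ in range(random.randint(1, 20))]
    tail(xs)
```

If `tail` were buggy (say, it dropped two elements), the postcondition would halt execution on the first random input that exposed it.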

SK: Okay, yeah, that's very interesting.

HW: Yeah.

SK: So, you know, the chaos monkey approach. Well, Chaos Monkey, I guess, is more about letting servers die and stuff.

HW: Yeah, but it's essentially the same idea: randomized testing with fine-grained responses.

SK: Well, the randomized testing reminds me of Haskell's QuickCheck, where you generate tests based on types.
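The QuickCheck style being referenced can be sketched in a few lines of plain Python. This is a toy version: real property-based testing tools like QuickCheck (or Hypothesis in Python) also derive generators from types and shrink failing inputs to minimal counterexamples, which this sketch does not:

```python
import random

def for_all(gen, prop, runs=500):
    """QuickCheck-style check (toy version, no shrinking): generate many
    random inputs and return the first counterexample to the property,
    or None if the property held on every sample."""
    for _ in range(runs):
        x = gen()
        if not prop(x):
            return x
    return None

def random_list():
    return [random.randint(-100, 100) for _ in range(random.randint(0, 30))]

# Property: reversing a list twice gives back the original list.
assert for_all(random_list, lambda xs: list(reversed(list(reversed(xs)))) == xs) is None
```

The point is the same one made about contracts: you state a general property once, and the machine hunts for inputs that break it.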

HW: Yeah.

SK: And then these are runtime assertions, which I guess in a type world would be kind of like dependent types.

HW: Sort of. So I guess a quick distinction between contracts and types: there's a lot of overlap between the two ideas. The main difference is that types aim for what's called legibility. They aim for being able to really easily analyze them statically, while contracts aim for expressivity. They aim for the ability to encode arbitrary assertions. So, for example, if you really wanted to, you could write a contract that says this function is only called by functions that have palindromic names.
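The palindromic-caller contract is deliberately silly, but it really is expressible, which is the whole point about expressivity. A toy sketch (hypothetical decorator name, using `inspect` to read the caller's function name):

```python
import functools
import inspect

def requires_palindromic_caller(fn):
    """Toy contract: fn may only be called from a function whose name
    reads the same forwards and backwards."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        # Frame 0 is this wrapper; frame 1 is whoever called fn.
        caller = inspect.stack()[1].function
        assert caller == caller[::-1], (
            f"{fn.__name__} called from non-palindromic {caller!r}")
        return fn(*args, **kwargs)
    return wrapper

@requires_palindromic_caller
def answer():
    return 42

def noon():   # palindromic name, so the call is allowed
    return answer()
```

No mainstream static type system can encode "my caller's name is a palindrome"; a runtime contract can, because it is just arbitrary code run at call time.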

SK: I see. I see.

HW: Yeah, which is probably not something you want to do. But you can easily do things like, say, refinement typing: this should only be called with values that are greater than zero and will always return values that are less than zero.
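That refinement-style example maps directly onto a precondition/postcondition pair. A minimal sketch, again with a hypothetical decorator rather than a real contracts library:

```python
import functools

def refine(pre, post):
    """Hypothetical runtime refinement check: the input must satisfy
    pre, and the result must satisfy post."""
    def deco(fn):
        @functools.wraps(fn)
        def wrapper(x):
            assert pre(x), f"{fn.__name__}: input {x!r} violates precondition"
            result = fn(x)
            assert post(result), f"{fn.__name__}: result {result!r} violates postcondition"
            return result
        return wrapper
    return deco

# "Only called with values greater than zero, always returns values
#  less than zero."
@refine(pre=lambda x: x > 0, post=lambda r: r < 0)
def negate(x):
    return -x
```

Languages with refinement types (Liquid Haskell, for instance) can discharge these checks statically; the contract version trades that static guarantee for the ability to bolt the same predicates onto any language at runtime.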

SK: That's more of a type level thing. But you're saying that contracts are much more expressive?

HW: Yeah.

SK: It sounds like contracts have access to not only the static AST, but also the actual code. So you can get the name of the function and also have access to runtime information.

HW: Yeah.

SK: Maybe even have access to past runs. Like, if I've been tested three times before, then fail?

HW: I mean, I think that's probably not something you want to be doing. But it's more along the lines of, I guess, here's a more reasonable thing: after this is run, this mutation should happen in this class, kind of thing.
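That "after this runs, this mutation should have happened" style of contract can be sketched as a postcondition over object state, comparing a snapshot taken before the call against the object afterwards (hypothetical decorator name):

```python
import functools

def ensures_mutation(check):
    """Hypothetical state postcondition: after the method runs,
    check(before_snapshot, self) must hold."""
    def deco(fn):
        @functools.wraps(fn)
        def wrapper(self, *args, **kwargs):
            before = dict(vars(self))   # shallow snapshot of the fields
            result = fn(self, *args, **kwargs)
            assert check(before, self), (
                f"{fn.__name__}: expected mutation did not happen")
            return result
        return wrapper
    return deco

class Counter:
    def __init__(self):
        self.count = 0

    # Contract: after bump() runs, count must have gone up by exactly one.
    @ensures_mutation(lambda before, obj: obj.count == before["count"] + 1)
    def bump(self):
        self.count += 1
```

This is the runtime analogue of the "old value" operator (`old(x)` in Eiffel-style design-by-contract): the postcondition relates the post-state to the pre-state.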

SK: Yeah. Okay. Got it. So, yeah, well, I agree that that's something that excites me. I like the idea of simplifying. It's like a mathematical idea, you know, being able to describe the same amount of things, or more things, with fewer words. But then, yeah, I also understand the no-silver-bullet side of it: it's a complicated thing, and we just have to keep layering on improvements over time.

HW: Yeah.

SK: Or maybe, at one point, we'll get a paradigm shift and we can have a new foundation. But yeah, anyway.

HW: I mean, also, the thing is that you might end up in a case where individual things are simplified but, on the whole, things are more complicated. It might turn out that we can fold five ideas into one, but also that we need six ideas total. I'm not doing a great job explaining this. I'm sorry.

SK: No, no.

HW: There's a reason I'm much better at writing than public speaking.

SK: Yeah, well, you're quite a good writer. So, well, I guess, let's focus on the things that I think we do a good job of, the discussion kinds of things. Because with your essays, and I've done this a lot in other podcasts too, when I interview people who are good writers, I try to start with their writing as a foundation and then go wherever I find interesting. And hopefully that serves other people who've read your things and wish they could have asked you these questions.

HW: Oh, I see. Okay. So in that case, let me try changing my answer to that.

SK: Sure.

HW: Because, okay, so I think that I'm going to actually shift this about why do we have to keep on layering stuff. And I'm going to sort of shift this in a slightly different direction then, into something that I think might be more interesting for discussion. Are you familiar with systems theory?

SK: Let's drill into it regardless.

HW: Okay. So it's a mixture of really interesting ideas and really cultish ideas that started forming around early last century and kept developing, which is the idea that often we approach systems, we approach problems and think, "Oh, this really complicated problem, there's actually a simplification to it that we can make, that makes it easy and we can sort of abstract it out." And systems theory is the group of people going, "Wait a minute, what if that's actually not always the right approach? Maybe there's a better approach, which is what if we sort of look at this complicated problem and say, 'Hey, this complicated problem is actually still a complicated problem. And what we should be doing instead is trying to find all the patterns inside the complicated problem that help make it complicated. And then try to think of the complicated solutions, or the simple solutions, that address all these interconnected issues.'"

HW: So the idea here is that instead of sort of saying, "Oh, there's an easy answer here," we say, "No, there's no easy answer. And now that we accept that there's no easy answer, how does this help us, to have that revelation?" Does that make sense?

SK: Yeah. Well, maybe, can you walk us through a concrete example?

HW: Okay, sure. So here's a concrete example, and this is the stuff that I've mostly been interested in applying to system safety, which is how properties of safety and security arise in a system emergently. So actually, yeah, actually, I think I might have actually had a better idea for an example here, which is future direction, which I think is a little bit easier to talk about. So imagine you have a system where you've got the following. People can sort of sign up and register for an account. And they have to validate the email address, right?
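[Editor's note: the sign-up-then-validate flow Hillel starts to describe here is the kind of system you could hand to a brute-force model checker like TLC, mentioned earlier in this post. As a rough illustration only, here is a hypothetical Python sketch, with invented states and actions, of the exhaustive state exploration such a checker performs.]

```python
# Hypothetical sketch of brute-force state exploration, in the spirit of
# TLA+'s TLC model checker. The states and actions model the sign-up flow
# Hillel describes: a user registers, then validates their email address.

STATES = {"unregistered", "registered", "validated"}

# Allowed transitions: action name -> (from_state, to_state)
ACTIONS = {
    "sign_up":     ("unregistered", "registered"),
    "click_email": ("registered", "validated"),
}

def next_states(state):
    """All states reachable in one action from `state`."""
    return {to for (frm, to) in ACTIONS.values() if frm == state}

def explore(start="unregistered"):
    """Exhaustively enumerate every state reachable from `start`."""
    seen, frontier = {start}, [start]
    while frontier:
        state = frontier.pop()
        for nxt in next_states(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

reachable = explore()

# Invariant check: no single action jumps straight from "unregistered"
# to "validated" -- i.e., you can't be validated without registering.
assert all(not (frm == "unregistered" and to == "validated")
           for (frm, to) in ACTIONS.values())
```

A real TLA+ spec would express the same idea declaratively (an initial state, a next-state relation, and an invariant), and TLC would explore the state space for you, including interleavings that a hand-rolled loop like this easily misses.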