06/13/2019

This episode explores the intersections between various flavors of math and programming, and the ways in which they can be mixed, matched, and combined. Michael Arntzenius, “rntz” for short, is a PhD student at the University of Birmingham building a programming language that combines some of the best features of logic, relational, and functional programming. The goal of the project is “to find a sweet spot of something that is more powerful than Datalog, but still constrained enough that we can apply existing optimizations to it and imitate what has been done in the database community and the Datalog community.” The challenge is combining the key part of Datalog (simple relational computations without worrying too much about underlying representations) and of functional programming (being able to abstract out repeated patterns) in a way that is reasonably performant.

This is a wide-ranging conversation including: Lisp macros, FRP, Eve, miniKanren, decidability, computability, higher-order logics and their correspondence to higher-order types, lattices, partial orders, avoiding logical paradoxes by disallowing negation (or requiring monotonicity) in self-reference (or recursion), modal logic, CRDTs (which are semi-lattices), and the place for formalism in programming. This was a great opportunity for me to brush up on (or learn for the first time) some useful mathematical and type theory key words. Hope you get a lot out of it as well – enjoy!

Transcript sponsored by repl.it

Corrections to this transcript are much appreciated!

SK: So, welcome, Michael.

MA: Hello.

SK: So, you go by Michael, yeah?

MA: Yeah.

SK: Because your online username is @rntz.

MA: Yes, rntz.

SK: And that's how it's pronounced, rntz?

MA: That's how it's pronounced, rntz.

SK: Where did you undergrad?

MA: Carnegie Mellon.

SK: And you studied CS?

MA: Yeah, computer science.

SK: And so, when did you get into programming languages specifically in computer science?

MA: Very shortly after I got into programming.

SK: Oh, interesting.

MA: So, I think the thing that I, sort of, vaguely wanted to do when I started programming was make video games, which-

SK: That's a common one, yeah.

MA: Yeah. But I, sort of, very quickly got frustrated with the tools available to me, right? And started bouncing through programming languages. What programming language did I start with? It might have been Python. It might have been Visual Basic. It might have been... I don't even remember now. But anyway, I eventually found my way to Lisp and to Scheme. And that was really sort of a revelation. I really enjoyed Lisp and Scheme.

MA: And I just sort of started going down the building tools for making your own tools rabbit hole, right? Because any time I would try to do something concrete, I would get frustrated that it was hard. And I would think, how could I make it easier to do this thing? I started thinking about building a tool for that thing, and if you keep doing that, you end up with a programming language. And I've been going down that rabbit hole for, I guess, more than a decade now.

SK: You just never stopped yak shaving.

MA: Yeah. Yeah, sort of. I've like narrowed my scope a lot, right? Which academia will do to you, right? You have to focus if you're going to get anything done.

SK: Yeah, yeah, well said. So, I find that a lot of us go through various paradigms and topics, like blocks-based programming or structured editors or logic programming or functional programming. Databases, you know. There are a bunch of different ways to improve programming. What was your arc through all those topics? Or was it not as winding? Did you kind of know early on?

MA: No, I mean, there's been some rambling. So, the first place I embarked was Lisp and Scheme. Which is sort of-

SK: It's a common starting place for programming-interested people.

MA: Right. And Lisp and Scheme had, sort of, a couple of interesting ideas that have stayed with me. Lisp when it first came out had dozens of ideas that other languages didn't have, like garbage collection. But nowadays, garbage collection is really common. So, garbage collection didn't leave a lasting impact on me, other than that, like, yeah. I don't like having to do manual memory management, but that's solved. We know how to do that now.

MA: But the things that are, even now, not entirely mainstream about it are s-expressions and using s-expressions to represent all of your data, right, or most of your data. Functional programming, that's obviously getting much more traction nowadays, but it's still not entirely mainstream. I don't know. I guess it depends on who you ask. And macros. And so, early on, the one that seemed the most exciting to me and most cool was macros.

SK: Yeah. And I guess it goes hand-in-hand with the s-expressions thing. I guess it's almost less the s-expressions and more the homoiconicity.

MA: That's another phrase that I-

SK: You don't like the phrase?

MA: Well, I find it kind of ambiguous. People make a big deal out of it, but they can never define exactly what counts as being homoiconic. The thing that I think is important is it has a built-in data structure for representing the syntax of your language, but that's not unique to it. Python also has this. Most people don't know it, but Python has an AST datatype in the standard library.

SK: Oh, wow.

MA: But also, this datatype, s-expressions, is sort of the thing used to represent almost everything, right? It's not a special-purpose datatype, s-expressions. You use its building blocks everywhere else. You use lists. You use symbols. You use numbers, right? In Python, if I want to understand the AST, I have to go read the documentation specifically for the AST. In Lisp, I'm already... there's very little distance between the tools that you familiarize yourself with for general programming and the tools that you use to write macros.

MA: And so, that's sort of what I think homoiconicity is: the thing that makes macros easier. It's that the same data structures and concepts you use to write ordinary code are the ones you use to write macros. There's not a huge gulf between them, which makes it really easy to get started writing macros.
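Python's standard-library AST datatype can be seen with a few lines of code; here is a minimal sketch of poking at it, which also illustrates the contrast Michael draws: the AST is a dedicated class hierarchy with its own tools, rather than the lists and symbols you use everywhere else.

```python
import ast

# Parse a small program into Python's built-in AST datatype.
tree = ast.parse("x = 1 + 2")
node = tree.body[0].value

# The AST is a special-purpose class hierarchy (ast.BinOp, ast.Constant, ...),
# inspected with AST-specific tools like ast.dump -- not the ordinary
# lists and dicts you already use for everyday Python programming.
print(type(node).__name__)  # BinOp
print(ast.dump(node))
```

In Scheme, by contrast, `'(+ 1 2)` is just a list you can take apart with the same operations you use on any other data.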

SK: Yeah, that's really well said. I think that captures part of what's really, really powerful about the s-expressions and macros pairing. You can turn the tool on itself and use it in the same way you've been using it to do other things, but on itself.

MA: Yeah.

SK: Yeah, I guess it's quite empowering, because I think it's in the theme of blurring the line between a creator of a tool and a user of a tool.

MA: Yeah, definitely. I mean, it's kind of intoxicatingly powerful, right? Everybody gets turned on to... I don't know about everybody, but a lot of people get turned on to macros. And then, some never stop trying to use macros to solve every problem, right? It's really fun to write a macro that gives you a little language for solving a particular problem. So, right, oh, and also, the other thing that's relevant. I applied to work at my dream job, which was working on Eve.

SK: Oh, with Chris Granger and, yeah, yeah.

MA: With Chris Granger and Jamie Brandon. And I should remember the names of the other people who were working on it, but I don't.

SK: Corey and... yeah.

MA: Corey was after I applied.

SK: Yeah, I also sent an email. I don't know if applied is the right word for, "I want to work for you.", in an email. So, I think we have that in common. I imagine a lot of the listeners to the podcast are, well, saying, "Yeah, yeah, I emailed Chris, too."

MA: Yeah. Well, I know them. And then, they flew me out to interview.

SK: Oh, wow. Okay, great.

MA: And then, turned me down.

SK: You got farther than I did. I got a, I think, less than a sentence of like, "Sorry, we're not interested." Or like, "Sorry, no."

MA: Yeah, but talking with them was really cool and gave me a clear idea of what they were trying to do. And part of the core technology they were building on was this Datalog-like stuff.

SK: Yeah. Oh, that's when you first heard about... You first got into it?

MA: That's really where I first got interested in the relational algebra.

SK: No way! That's fascinating. So, I guess, I probably have said this in the intro to the podcast, but you're like really into Datalog, it seems. That's what you're basically about. So, that's a fascinating... yeah, a fascinating little historical tidbit that you got it from the-

MA: Yeah, my research direction has been determined by getting turned down for my dream job.

SK: Well, it's just so funny to think that Eve was... which just feels so outside of academia. They took things from academia, but like the fact that they were then able to influence academia, I just find that somehow fascinating and wonderful.

MA: Yeah, I mean, I think that academia is more open-minded than a lot of people might think.

SK: Yeah, of course, than their reputation. Yeah. I guess they're just people on like, you know-

MA: Especially grad students.

SK: Especially grad students.

MA: You give less as a person when you get tenure.

SK: I see. I see.

MA: I'll regret saying that at some point in my life.

SK: Well, I guess, there's like a period of time in which you can like choose which various sliver of knowledge you're going to be an expert on. And once you've established that, it's not... You can still change, but once you've established-

MA: But it's hard, right?

SK: It's kind of a sunk cost thing, but it's also like... You're already there. You might as well just keep going.

MA: Yeah. You can think of it as sunk cost, or you can think of it as: you had built up an expertise in a very specific area, and it's sort of a matter of your relative advantages, right? You have a lot of knowledge of this area, so you have a relative advantage in working on it... starting in a new area is like starting all over again.

SK: Yeah, yeah, exactly. So, when you're like a grad student, it's very easy to be influenced by things. But then, 10 years from now, you're not going to want to... If the next Chris Granger comes up with a new company in 10 years with a new direction, you're like not going to switch to that thing, you know? It's like a one-time thing, maybe.

MA: Yeah, or it gets harder.

SK: It gets harder, yeah.

MA: Or less common. Yeah.

SK: Cool.

MA: I've lost track of where we are.

SK: I'm curious how you originally got into FRP.

MA: Right, FRP.

SK: And how you found that.

MA: I got-

SK: I'm still obsessed with it. I've been obsessed with it for years. Since I saw React JS, I was obsessed with it. I was into all the front-end frameworks. Then, finally, I found Conal Elliott's work. And then, I was like... Ahhhhh. And I'm really into it. And now, I'm like annoying, because I'm so into it. And I have like nobody to talk to, because I almost feel like it was like... Or anyways, the people I talk to aren't really interested in it the way that I am.

MA: Yeah. I mean, I think it sort of gained a brief moment of being slightly more mainstream, especially with Elm, right? And then, Elm actually kind of abandoned the FRP approach. And there haven't been a lot of attempts to really push it forward since then. I mean, there's been academic work on it, but it's not in the spotlight anymore. And it was never hugely in the spotlight.

MA: So, I got interested in it, more or less, because, yeah. I think Elm might have been part of it, exposing me to it. And it seemed like a nicer way to write user-facing programs. It still seems like it might be a nicer way to write user-facing programs. Although, I think my attention has turned more generally to the problem of incremental computation. Which FRP, I think, is... or dealing with change is how I would summarize the problem, as I see it, of front-end programming.

SK: Yeah. Well, I guess because, like, events, like, you know, you have some UI. And it mostly stays the same. But as users interact with it, the UI slowly changes.

MA: Yeah. But also, the external world is changing, right? You're running a website where someone has a shopping basket, right? And maybe you're in a distributed setting. Now, things can change. They can change at different places, at different points in time. And you have to integrate all of that somehow.

SK: That's interesting. I guess, because I don't... I usually make the distinction in my head between batch programs and then reactive programs. Reactive programs, like, respond to the environment. Batch programs just need to process something once. And I guess what a reactive program is is something that changes over time, but it has inertia. It's not a step-function thing. It's usually a smooth kind of changing thing, smoothish. Occasionally, I'll press a button and the entire page will change. But usually, it's like-

MA: Most changes are small.

SK: Most changes are small. Anyways, maybe that doesn't make any sense.

MA: No, I mean, it makes sense. Continuity is sort of a huge theme that connects to everything, if you look into it deep enough. And I don't fully understand it.

SK: Almost like differentiability.

MA: Yeah. In a certain sense, only continuous functions are computable. There's this connection with topology and computation that I do not fully understand.

SK: I see. That's interesting.

MA: Anyway, yeah. So, I got into FRP, because I was interested in it as a better, or nicer, model of writing UI programs. And why did I end up not so interested in it? I guess I basically got sidelined in my mind by the Datalog and Datafun and relational programming ideas.

SK: Cool. Well, so, it feels like you first got into the logic programming, then got into relational programming? Or it kind of happened at the same-

MA: Happened at the same time. I treat them as kind of the same.

SK: Oh, relational programming and logic programming. Oh, they're kind of synonyms. Because to me, one feels like... Relational, I think SQL databases. And logic, I think Prolog. But I guess part of your thing is you want to unify them.

MA: Oh, I don't know about whether unifying them fully, but I do think of them as strongly related, right?

SK: Do most logic programmers think of that in that way?

MA: I don't know. Certainly Will Byrd and his collaborators, like Dan Friedman, refer to their work as relational programming.

SK: Who's Will Byrd?

MA: Will Byrd is a... How do I explain Will Byrd? He's an academic. He's, I think, probably mostly known for miniKanren, which is a relational/logic programming language that is distinctly not Prolog. It's built in Scheme. And its, sort of, notable features, well, there's two things I would mention. First of all, there's a mini version of it called microKanren, which is notable for having an implementation that is like 50 lines long.

MA: And that has been ported to every language under the sun, because it has an implementation that's about 50 lines long. But it captures the essence of miniKanren, so it's a really fun thing to play around with, if you're interested in relational programming. And the interesting thing about miniKanren, beyond that, is that unlike Prolog, its search strategy is complete. So, in Prolog, you can write a particular thing that, when you read it, logically specifies something.

MA: Like, let's say, you write transitive closure. So, you have edges in the graph. You have a relation, a predicate, edge, that takes two arguments and says there's an edge from this node to that node. And you want to find a predicate that gives you reachability, that tells you there's a path from this node to that node. If you do this in the obvious way, which is just: there's a path from X to Y if there's an edge from X to Y, and there's a path from X to Z if there's a path from X to Y and a path from Y to Z, right? It's edges, but transitive. If you do this and you feed it to Prolog, it will infinite loop and generate nothing of interest.
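Evaluated bottom-up, Datalog-style, rather than by Prolog's top-down search, the same two rules do terminate. Here is a small Python sketch of that fixed-point computation (the edge data is made up for illustration):

```python
def transitive_closure(edges):
    """Bottom-up, Datalog-style evaluation of the two rules:
         path(X, Y) :- edge(X, Y).
         path(X, Z) :- path(X, Y), path(Y, Z).
       Apply the second rule repeatedly until no new facts appear."""
    path = set(edges)                  # rule 1: every edge is a path
    while True:
        derived = {(x, z)
                   for (x, y) in path
                   for (y2, z) in path if y == y2}   # rule 2
        if derived <= path:            # fixed point: nothing new derived
            return path
        path |= derived

# Hypothetical edge data, just for illustration.
edges = {("a", "b"), ("b", "c"), ("c", "d")}
print(sorted(transitive_closure(edges)))
# [('a', 'b'), ('a', 'c'), ('a', 'd'), ('b', 'c'), ('b', 'd'), ('c', 'd')]
```

Because the set of facts only ever grows and is bounded by the finite set of node pairs, the loop must reach a fixed point; this monotonicity is what Datalog's termination guarantee rests on.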

SK: Because it'll just keep generating extra facts that you already know?

MA: Yeah, so, it'll take the second clause, which is that a path can be built from a concatenation of two paths. And if you ask it, "Hey, what are the paths?" It will keep applying that second rule indefinitely.

SK: I see.

MA: Because it does sort of depth-first search, and the search tree you've given it has infinite branches.

SK: I see.

MA: And so, this is really annoying from a pure logic point of view. I've given you the logical definition of this. Why aren't you computing it? This is the promise of logic programming discarded for the sake of a simple implementation. And miniKanren is like, "No, we will not discard that promise." We give the complete search strategy. If you give us some rules, we will give you eventually all of their consequences. No matter how you order the rules, no matter what you do, we will eventually find all the consequences of these rules.

SK: And they're able to do this, because...?

MA: They changed the search strategy.

SK: Oh, it's just an implementation detail, like-

MA: Well-

SK: It didn't restrict the way you could-

MA: There are a couple of features in Prolog that are specifically about the search strategy and are about sort of extralogical things. So, for example, there's the cut operator, or the bang operator, that prevents backtracking. That's about the search strategy. It prevents backtracking.

SK: Wait, this is in miniKanren?

MA: No, cut is in Prolog.

SK: I see.

MA: It is not in miniKanren.

SK: So, and miniKanren's almost like higher level? It wouldn't let you-

MA: Yeah, it would not let you do this.

SK: Explain the... like, direct the search strategy.

MA: That's not entirely true. The order in which you put things will affect the order in which the tree gets searched, but it will eventually find... search the whole tree.

SK: How will it know when to stop, in a way that Prolog doesn't know when to stop?

MA: So, if you give it an infinite search tree, it will never stop. But if you give Prolog an infinite search tree, it might never stop and also not explore the whole tree. Right? So, it might just get stuck going down one particular branch of the tree and never come back up. Whereas, miniKanren is more like doing... It's not doing a breadth-first search, but it's doing something more similar to a breadth-first search where eventually, it will reach any node in the tree.
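The contrast can be sketched in a few lines of Python (the generators and names here are invented for illustration, not miniKanren's actual implementation): a depth-first strategy gets stuck inside the first infinite branch, while an interleaved strategy, in the spirit of miniKanren's complete search, eventually yields an answer from every branch.

```python
from itertools import count, islice

# Two infinite "branches" of a search tree, modeled as generators of answers.
def branch_a():
    for n in count(0):
        yield f"a{n}"   # infinitely many answers down this branch

def branch_b():
    for n in count(0):
        yield f"b{n}"

# Prolog-style depth-first: exhaust the first branch before trying the second.
# Since branch_a is infinite, no answer from branch_b is ever produced.
def dfs(*branches):
    for b in branches:
        yield from b

# miniKanren-style interleaving: alternate between branches, so every answer
# in the tree is eventually reached even when some branches are infinite.
def interleave(*branches):
    queue = list(branches)
    while queue:
        b = queue.pop(0)
        try:
            yield next(b)
            queue.append(b)   # rotate the branch to the back of the queue
        except StopIteration:
            pass              # this branch is exhausted; drop it

print(list(islice(dfs(branch_a(), branch_b()), 4)))
# ['a0', 'a1', 'a2', 'a3']  -- branch_b is starved forever
print(list(islice(interleave(branch_a(), branch_b()), 4)))
# ['a0', 'b0', 'a1', 'b1']  -- both branches make progress
```

Neither strategy terminates on an infinite tree, as the conversation notes; the difference is only that interleaving is *complete*: any particular answer is reached after finitely many steps.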

SK: Oh, but it might keep going forever.

MA: But it might keep going forever if the tree's infinite. Yeah, that's your problem.

SK: I see.

MA: Right. Datalog, on the other hand, simply does not allow infinite searches. That's the area I focused on. I focused on very decidable logic programming.

SK: Okay. Well, let's rewind and unpack some of these terms, because I want to give... I want to use this opportunity to give a good foundation for these topics, because I think a lot... most of us, I think, have heard of these things, Prolog and logic programming and relational things. But anyways, I just want to start on firm foundations. So, when I hear relational, I think of Codd and databases.

SK: So, maybe give like the brief history. Is that kind of where relational came from?

MA: Yeah, yeah. It's a perfectly reasonable thing to think of when you hear relational, right? Like, Codd created relational algebra or relational calculus. I still don't know what the difference between those two is, by the way. And from that came SQL and most of our modern database work.

SK: And so, when I think of that, I think of path independence and normalization. That's where my brain goes, but is that... That's not-

MA: What is path independence?

SK: The opposite of path independence is when I have like a nested JSON data structure, I realize like, oh crap, I actually want... If I have a list of people and each person has a list of favorite things. I'm like, "Oh crap, actually, I want to know how many distinct favorite things there are." Basically, I know I can get in trouble if I just nest the data structure in the way that I'm going to want the data. And then, I'm like, "Oh crap, I actually want the data a different way." And usually what happens to me is I end up taking that data structure and then, unfurling it into orthogonal lists that point to each other.

MA: Yeah. Which is very much the, sort of, relational approach, right? Just have a bunch of relations saying how your data relates. Don't think too hard about nesting everything so it's efficient. Leave that up to the query optimizer and hope you have a good query optimizer.

SK: What is the relationship between relational algebra and SQL?

MA: So, relational algebra is this formalism that Codd came up with in the 70's...? Could be the 60's. I could be wrong. But anyway-

SK: Is it like lambda calculus is to functional programming?

MA: Kind of, yeah, right? So, SQL can be thought of as an implementation of relational algebra plus some other stuff, except not quite. So, it's relational algebra plus some stuff, but it gives up on some of the simplicity of relational algebra. For example, it has bag semantics, not set semantics. So, there's a difference between having multiple of the same thing, right? And it also adds some stuff to relational algebra that's really important, like aggregations.
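The bag-versus-set distinction can be illustrated with a rough Python sketch (the rows here are made up): under relational-algebra set semantics, two identical rows collapse into one; under SQL-style bag semantics, the duplicate survives with a multiplicity.

```python
from collections import Counter

# A query result containing a duplicate row.
rows = [("alice", "chocolate"), ("alice", "chocolate"), ("bob", "strawberry")]

# Relational-algebra set semantics: a relation is a set, duplicates collapse
# (roughly what SELECT DISTINCT gives you in SQL).
as_set = set(rows)

# SQL-style bag semantics: duplicates are kept, with multiplicities
# (what a plain SELECT gives you).
as_bag = Counter(rows)

print(len(as_set))                     # 2 distinct rows
print(as_bag[("alice", "chocolate")])  # multiplicity 2
```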

MA: So, anyway, before talking about what it adds, what is relational algebra? Relational algebra is you have a bunch of relations, right? A relation is basically just a set of tuples, right? So, it's a-

SK: Rows.

MA: Oh, a set of rows.

SK: Okay. And a tuple is just like a dictionary or an object? Like, it's key values.

MA: You can think of it as key values, if you'd like, right? But you think of a relation as having a bunch of columns, right? Like, there might be first name, last name, user ID. The elements of a relation are individual rows, with the value of first name and the value of last name and a user ID. And that's what a relation is, right? So, it's a collection of rows. And all the rows have the same shape. They have values for each column.

SK: And you say that's a set of tuples. So, it's not a list. It's-

MA: Yeah, it's not ordered. It does not care about duplicates.

SK: Okay. And IDs, did we talk about that or not? Not yet.

MA: No, no, that's not particularly important. I don't even know whether the concept of a primary key is in the relational algebra. It's certainly not-

SK: Foreign keys.

MA: Again, that's sort of in my mind... I haven't read any of the original stuff on relational algebra, only secondary sources. I read Wikipedia. But in my mind, that's sort of just a concept layered on top of it that formalizes a pattern of using relational algebra, right? So, you have relations. Now, how do you use relations? And the answer is you have various operators that combine them.

MA: Some of them are simple filtering. You can say, "Throw out the things in this relation that don't satisfy such and such a condition." Union: if two relations have the same column names, right, or contain the same shape of stuff, you can take their union. And then, the most interesting one, of course, is relational joins.

MA: And what a join is is... Actually, perhaps before we even talk about joins, we can talk about cross product.

SK: Unions, I thought that was what joins were.

MA: No, so a union is just like give me anything that is in either of these sets.

SK: Oh, yeah, yeah, yeah, yeah.

MA: Right. So, it's literal set theory union.

SK: So, a relation like... I usually think of a... In a database, you have a customer table and it's all of the customers. But a relation wouldn't be... Like, I could have a relation of two different subsets of customers and make a union.

MA: Yeah, you could if you wanted to. And you could do that in SQL, too. SQL has unions.

SK: I see.

MA: They're not all that commonly used, but they are there.

SK: I see. So, joins are-

MA: Joins are like the thing.

SK: I see.

MA: All this other stuff is useful and sometimes necessary. But joins are the single most common operation. And what they are, the way I like to think of them, although this may not be immediately obvious, is they're a cross product followed by a filter followed by a projection. So, hold on. What are each of those things? A cross product is just: I have two relations. Give me all possible pairings of things from those relations.

MA: So, if I have a table of customers and I have a table of ice cream flavors, let's say, I have Charlie Coder, user ID 0, and Hilary Hacker, user ID 1. And I have chocolate and strawberry. The cross product will be Charlie Coder, user ID 0, strawberry; Charlie Coder, user ID 0, chocolate; Hilary Hacker, user ID 1, strawberry; Hilary Hacker, user ID 1, chocolate. Right?

MA: All possible combinations. This can get very big. So, okay, why would you want to do that? Well, you can then filter this by some predicates. And I've chosen a bad example, because there's no obvious way those things are connected. Maybe we can say that the parity of somebody's user ID determines whether they like chocolate or strawberry ice cream. Or another way to do it would be you have... a realistic way is you have a table of users. And then, a table of orders. You know, the user ID and then, the order ID, right? So, the table relating user IDs to order IDs and a table relating user IDs to their names. And you want to have the order IDs and the names paired together. So, you can take your cross product, which just gives you every possible user paired with every possible order. And then, you filter it down by requiring that the user IDs match.

MA: So, that's called an equijoin, because you're requiring the two things to be equal. And then, yeah, you throw out the junk, which is the projection. That's not particularly important, right? Because you have two copies of the user ID column and you're requiring them to be equal, so you simply throw out one.
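The decomposition just described (cross product, then filter, then projection) can be spelled out directly over sets of tuples in Python; the users and orders data here are invented for illustration.

```python
# Relations as sets of tuples.
# users:  (user_id, name)      orders: (user_id, order_id)
users  = {(0, "Charlie Coder"), (1, "Hilary Hacker")}
orders = {(0, "order-A"), (1, "order-B"), (1, "order-C")}

# 1. Cross product: every possible pairing of a user row with an order row.
cross = {(u, o) for u in users for o in orders}

# 2. Filter (the equijoin condition): keep pairs whose user IDs match.
matched = {(u, o) for (u, o) in cross if u[0] == o[0]}

# 3. Projection: throw out the duplicate user_id column.
joined = {(u[0], u[1], o[1]) for (u, o) in matched}

print(sorted(joined))
# [(0, 'Charlie Coder', 'order-A'), (1, 'Hilary Hacker', 'order-B'),
#  (1, 'Hilary Hacker', 'order-C')]
```

A real database never materializes the full cross product, of course; this is the semantic definition, which the query optimizer is free to implement more cleverly.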

SK: Okay. So, the select is kind of the project. The join is the cross product. And then, the predicate is the join on-

MA: Yeah, right.

MA: So, yeah, this is when you have two relations and you want to correlate them somehow. You want to say, "Hey, give me all combinations of things from this relation and that relation that satisfy some predicate," right, where these things match. And that, to me, that's the relational algebra. You have relations. You have joins. You have a few other things like unions and filters. And you're done. And you can do a whole lot of stuff with this, but not everything.

MA: For example, if you want to have the sum of something, the relational algebra does not do that. It only deals with relations. A sum is a number, not a relation. All right. It just does not have aggregations.

SK: Oh, okay. That's interesting.

MA: Which, I mean, obviously, this is a limitation. It's not as if anybody has ever thought, oh, that's enough.

SK: Why would you leave that out? But I guess, when you have an algebra, you have types. And you have operations on those types. So, if you had... like we have algebra for numbers and we can add 1 and 2, and it'll give you the number 3. But let's say I want the word "three". It would never give you the word "three". It would just give you the number 3.

MA: Right. It's useful to formalize numbers, even without formalizing how to print them in strings. Because, well, you can add that part if you like. But here's how we do numbers. Relational algebra is here is how we do relations.

SK: I see.

MA: And then, you can add extra stuff on top of that. And SQL does and it's useful.

SK: And aggregations.

MA: Well, it's interesting, because Datalog goes in a totally different direction. Datalog adds some-

SK: So, Datalog, maybe give the logic programming background.

MA: Yeah. So, well, one way of explaining Datalog, so Datalog can be thought of as a logic programming language.

SK: Or-

MA: It can be thought of as a database language. It's, sort of, somewhere between the two.

SK: Okay. So, sorry for interrupting. Keep going with what you were saying.

MA: Yeah. And one way of thinking about it is it takes relational algebra and it adds something to it, just like SQL, but it adds a completely different thing. It adds the ability to define relations, to construct relations recursively. And so, the classic example of this is what I already gave, transitive closure in a graph. You have the edges. And you want to find all the pairs of nodes which are reachable from one another. So, an edge relation would have a source and a destination column. And in relational algebra...

SK: Oh, I see.

MA: Pick any number, N, and I can find you the paths of distance N.

SK: I see. I see, because-

MA: I can't find you all the paths.

SK: I see. I see. Because if you do the cross product once and then you filter, that gets you one path. And then, you can do the cross product again. You could do the cross product infinitely or, like, until it doesn't change or something like that. I see.
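That "repeat the cross product until it doesn't change" idea is exactly the naive fixpoint computation Datalog uses for transitive closure. A Python sketch (the edge data is invented, and real Datalog engines use smarter semi-naive evaluation):

```python
# Datalog rules being imitated:
#   path(x, y) :- edge(x, y).
#   path(x, z) :- path(x, y), edge(y, z).
edge = {("a", "b"), ("b", "c"), ("c", "d")}

path = set(edge)  # first rule: every edge is a path
while True:
    # Second rule: cross product of path with edge, filtered on the
    # shared middle node, projected down to the two endpoints.
    extended = {(x, z) for (x, y1) in path for (y2, z) in edge if y1 == y2}
    if extended <= path:   # fixpoint: no new facts, so stop
        break
    path |= extended

print(sorted(path))
# [('a', 'b'), ('a', 'c'), ('a', 'd'), ('b', 'c'), ('b', 'd'), ('c', 'd')]
```

Because the relations are finite, the loop must terminate: it only ever adds pairs drawn from a finite set of nodes, so it can grow only finitely many times. That finiteness is the heart of Datalog's decidability, discussed below.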

MA: Yeah, exactly.

SK: Okay. So, yeah. I want to spend a lot of time talking about computability with you, because I feel like... because I think that's something that comes up a lot when people discount logic programming. It's too slow or it'll infinite loop forever. Basically, it's like too abstract. It's like let's stick closer to the bits, because we know that if we're controlling all the bits, we know the program will end, because we have a tight rein on it.

SK: So, yeah. So, I guess, maybe, let's talk theoretical. What is computability?

MA: Well, whether something can be computed or not by some machine. Usually, we think of a Turing machine or whatever. It hardly matters.

SK: Is it related to decidability?

MA: Yeah, decidability is the same thing, basically. Decidability, strictly speaking, is: pose a question with a definite yes/no answer, right. Or think of a question, a class of questions parameterized by something, right, with definite yes/no answers. So, an example would be "are these two numbers equal?" So, that's a class of questions. It's not a specific question. A specific question would be like, "Does two equal four?"

MA: And, of course, that can be answered by a machine. Just build a machine that returns no. But it only gets interesting once it's a class of questions. So, it's whether two numbers are equal. So, it has two placeholders, two variables in it, X and Y. You can call them whatever. That question is decidable if your numbers are natural numbers. It is not decidable if your numbers are real numbers.

SK: Oh, I see. Because real numbers could be infinite? They could-

MA: Yeah, real numbers have infinite precision. And you cannot tell in advance how many digits you'll have to look at. You might have a number that you... Let's say you have the number one, two, one, three, four, five, six, seven. And you have another number one, two, one, three, four, five, six, seven. And they keep going. And they keep going forever. How do you know when you're done? How do you know that they really are equal? Maybe there's a digit that's not equal just beyond where you looked.

SK: So, decidability and computability have a lot to do with looping forever?

MA: Yeah, right? To say that a question is decidable is to say there is a Turing machine, or a computer program, that will answer every single question of that form. And for each one, it will answer it in finite time, right, with yes or no. So that it never infinite loops on any particular instance of that problem. If it infinite loops on some instance, then it's not a decision procedure. So, the problem is not decided by that program.

SK: I see.

MA: Okay, right. So, real number equality is an interesting case, because it has the property that if these two numbers are not equal, then the sort of obvious program, just compare the digits one by one, right, will eventually say, "These aren't equal." If two numbers aren't equal, eventually you'll get to a digit where they differ. And you'll be like, "They're not equal."
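This asymmetry is easy to see with a toy model in Python, where a real number is an infinite stream of digits (the function names and streams are invented for illustration). Comparing digit by digit detects a difference in finite time, but can never confirm equality, so the comparison below is capped rather than allowed to loop forever.

```python
from itertools import count, islice

def third():
    """Digits of 1/3 = 0.3333... (an infinite stream)."""
    while True:
        yield 3

def almost_third(k):
    """Agrees with 1/3 up to position k, then differs once."""
    for i in count(0):
        yield 4 if i == k else 3

def digits_differ(x, y, give_up_after):
    """Compare two digit streams. Returns True as soon as a digit differs.
    Returns None (undecided) if the first give_up_after digits agree,
    since no finite prefix can prove two infinite streams equal."""
    for a, b in islice(zip(x, y), give_up_after):
        if a != b:
            return True
    return None

print(digits_differ(third(), almost_third(5), give_up_after=100))  # True
print(digits_differ(third(), third(), give_up_after=100))          # None
```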

SK: It's only when-

MA: It's only when they are equal that-

SK: You-

MA: You won't be able to terminate, yeah.

SK: Oh, I see. I see. That is interesting.

MA: Yeah, so that's called either semi-decidability or co-semi-decidability. I never remember which is which.

SK: Oh, I see. Semi-decidability is when it's decidable in specific cases.

MA: Semi-decidable is when like you have a program that can... that if the answer is no, it will eventually answer no. But if the answer is yes, it might infinite loop.

SK: Okay. And so, I forget how we got in this tangent. It's related to Datalog? Okay. And so, I forget how we got in this tangent. It's related to Datalog?

MA: You're sort of thinking about decidability and computability and logic programming. You're sort of thinking about decidability and computability and logic programming.

SK: Oh, okay. I think, let's unroll the stack. And before we continue down this thread, I want to ask you about... You were playing at three things, relations- Oh, okay. I think, let's unroll the stack. And before we continue down this thread, I want to ask you about... You were playing at three things, relations-

MA: Right. Tables, relations and predicates. Right. Tables, relations and predicates.

SK: Okay. And do you remember why... oh, because you said... you were explaining how you can get to Datalog from the relational path. Okay. And do you remember why... oh, because you said... you were explaining how you can get to Datalog from the relational path.

MA: Right, yeah. Right, yeah.

SK: Or you can get to it from the logic path. Do you know the history of Prolog, which way it come from? Or was it influenced by both? Or you can get to it from the logic path. Do you know the history of Prolog, which way it come from? Or was it influenced by both?

MA: So, I think what's sort of... I'm not sure. Datalog, sort of, rose... It was after the relational algebra. I think it arose mostly in the 80's, but people sort of noticing... I don't know whether they noticed that the syntax looks like Prolog. So, based on that, I imagine they noticed, "Hey, if we limit Prolog in such and such a way, then suddenly it is decidable, right?", which is to say, in Prolog you can ask queries where it will infinite loop. In Datalog, you cannot. Every program in Datalog terminates. Every query in Datalog terminates. It will always answer any question you pose of it. So, I think what's sort of... I'm not sure. Datalog, sort of, rose... It was after the relational algebra. I think it arose mostly in the 80's, but people sort of noticing... I don't know whether they noticed that the syntax looks like Prolog. So, based on that, I imagine they noticed, "Hey, if we limit Prolog in such and such a way, then suddenly it is decidable, right?", which is to say, in Prolog you can ask queries where it will infinite loop. In Datalog, you cannot. Every program in Datalog terminates. Every query in Datalog terminates. It will always answer any question you pose of it.

SK: Well, what did they remove?

MA: Basically every predicate, or relation or table, in a Datalog program has to be finite. So, in Prolog for example, you can define a predicate that takes three lists, I'll call them X, Y, and Z, and is true if X appended to Y is Z. And you can run this.

MA: One of the wonders of logic programming, you can run this relation in any direction. So, you can give it two lists, X and Y, and it will give you, spit back at you, the append of these two lists. But you can also give it one list for Z and it will spit back all the lists, which when appended make that list. In other words, it will find all the ways to split a list into two smaller lists. And these are both the same relation. You write it once and you get both ways of doing it.

SK: I may have missed it. So, the same relation goes in both... So, maybe give me an example. Maybe give me concrete lists.

MA: Sure. So, if I say, you know, append list containing 1, list containing 2, Z, right?

SK: What's Z?

MA: Where Z is a variable. It's an unknown. And I'm asking it. When you do this, this is called a query. And it's saying, "Give me all the values of Z such that this is true."

SK: So, it's the-

MA: Such that the append of one and two is Z.

SK: In a normal program, it would just say Z equals append one, two?

MA: Yeah, right, yeah.

SK: But in Prolog, you-

MA: You just give the logical expression that you want to be true and it finds the solutions. And in this case, the solution is Z equals one, two. But you can also put variables for the other parts of it. You can say, "Give me X and Y such that append X, Y is one, two." All right?

SK: I see, okay.

MA: So, this is like saying one, two equals X plus Y, which in a normal programming language would be a syntax error, probably.

SK: I see. I see. I see, okay.

MA: But in Prolog, it will give you back multiple answers. It will say, "Okay, one solution is X is the list one, two and Y is the empty list." Another one is X is the list one and Y is the list two. And the final one is X is the empty list and Y is the list one, two. And the same code, and I can write down the code. Well, I mean, this is a podcast so that's probably not helpful, but I can write down the code for this. It's not very complicated. And it can be used in both directions.

SK: Yeah, just write down the code and show it to the microphone.

MA: Yeah.
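[For readers: a sketch of the bidirectional append relation under discussion, transcribed into Python rather than Prolog — a hand-rolled enumerator standing in for Prolog's unification-based search; the function name is illustrative.]

```python
def append_solutions(x=None, y=None, z=None):
    """Enumerate triples (X, Y, Z) with X ++ Y == Z; pass None for any unknown."""
    if x is not None and y is not None:
        yield (x, y, x + y)          # forwards: Z is the concatenation of X and Y
    elif z is not None:
        for i in range(len(z) + 1):  # backwards: every way to split Z in two
            yield (z[:i], z[i:], z)
```

Running it "backwards" with `append_solutions(z=[1, 2])` yields the three splits described in the conversation: `([], [1, 2])`, `([1], [2])`, and `([1, 2], [])`.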

SK: Okay. And so, this is almost like... this can lead us to bad things in Prolog? This can lead to undecidability?

MA: I mean, on the one hand, this is part of why Prolog is awesome. And it's also dangerous, because it makes your language more powerful and it can lead to programs that infinite loop. This is not particularly any more dangerous than any other Turing-complete programming language. Every programming language that we know can write infinite loops, except for ones that are very, very carefully limited like Datalog.

SK: Datalog cannot infinite loop.

MA: Cannot infinite loop.

SK: Most programming languages, you could just write "while true." Datalog doesn't have "while true".

MA: No, right. The equivalent of "while true" terminates with false. That's a little bit of an unfair comparison. But there's a concrete example of this, which is in Prolog, you can feed it, not the liar's paradox. So, it's a logic programming language. So, you can actually translate paradoxes into it. But the liar's paradox is "this sentence is false." And this is problematic, because if it's false, it's true. And if it's true, it's false.

MA: But there is another, not exactly paradox, the truth teller's paradox. I'm not sure if that's the standard name, but it's "this sentence is true." And this isn't really a paradox, because like you can say it's false. And then, it's false, right? Because it says it's true and it's not true, so it's false, okay? But you can also say it's true, because it says it's true. And it's true, so it's true.

MA: So, it's unclear what truth value it should have, but it is not paradoxical to assign it a particular truth value. Now, if you feed the equivalent of this to Prolog, you say basically foo holds of the variable X, if foo holds of the variable X. Foo of X, if foo of X. And then, if you ask Prolog, does foo hold of two? It will infinite loop. Now, in Datalog, if you do the equivalent thing, it will simply say, no. No, it's false.

MA: So, Datalog has an answer to the question "What is the value of this sentence? This sentence is true," and its answer is false. And the reason for this is, basically, Datalog has a, sort of, minimum least fixed point semantics. Or it's sometimes called a minimum model semantics. But what it basically means is that if you don't say something is true, it assumes it is false.

MA: For example, if you say, "There is an edge from two to three." And then, you end your program, right? You say, "That's all there is in the program: there is an edge from two to three." And then, you ask it, "Is there an edge from three to seven?" No. You didn't write that, so it's not true. So, it infers only the minimum set of things consistent with the program that you've written.

MA: It will not infer anything that you didn't write. And so, if you say, "Foo of X, if foo of X." It will not infer that foo of two is true, because there's no way to get to that. It's consistent that it be true, just like it's consistent with what you wrote down, that there being an edge from four to seven. You didn't explicitly say that it was false. But, sort of, normally, we only write down the things that are true.

MA: Sort of intuitively, if we're describing something, we say all the things that are true of the situation, not all the things that were false. Because there are too goddamn many. And so, based on that, right, based on that idea, only assume things are true if there's a clear way to prove them. Datalog will give you, sort of, the minimum level. It will not... Yeah.
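[An editor's sketch of this least-fixed-point idea in Python: iterate the rules until nothing new is derivable, and anything never derived is simply false. The rule encoding and names are illustrative, not an actual Datalog engine.]

```python
def least_fixed_point(step, facts=frozenset()):
    """Iterate the rules until no new facts appear (Datalog's minimum-model semantics)."""
    while True:
        new = step(facts) | facts
        if new == facts:
            return facts
        facts = new

# The program: edge(2,3). edge(3,5).
#   path(X,Y) :- edge(X,Y).
#   path(X,Z) :- path(X,Y), edge(Y,Z).
edges = {(2, 3), (3, 5)}

def step(facts):
    derived = {('path', a, b) for (a, b) in edges}
    derived |= {('path', a, c) for (_, a, b) in facts for (b2, c) in edges if b == b2}
    return frozenset(derived)

model = least_fixed_point(step)
# path(2,5) gets derived; path(3,7) is never derivable, so it is simply false.
```

Note that the self-referential rule foo(X) :- foo(X) derives nothing under this scheme, so foo(2) comes out false — the behavior described above.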

SK: I feel like there's a phrase of mathematics that does this, that only proves things that are-

MA: That's sort of what minimum model or least fixed point is.

SK: So, we talked through the basis of relational stuff was relational algebra. The basis for logic programming is logic.

MA: First-order logic, yeah.

SK: And first-order refers to?

MA: First-order means you can quantify over objects, but not over sets of objects. So, first-order logic is the kind of logic that we're, sort of, most familiar with, right? We can say things like, "For any number, X, X plus one is greater than X." And calling it first-order means that you can write that "for any number X." So, there's also something less powerful than that, which is propositional logic, where you cannot write "for all."

SK: I see.

MA: You can take primitive propositions and you can conjunct them. You can say, X and Y. You can disjunct them, X or Y, and so on. But you can't quantify over all variables. And then, there's higher-order logics, which let you effectively not just quantify over individual things, like numbers, but let you quantify over properties of numbers. For any property P of numbers, there exists a number X which P satisfies. This isn't true.

SK: I see. I see.

MA: That's a false proposition, because consider the property that doesn't hold of any number. But it allows you to quantify over even larger stuff.

SK: And now, I see how it's higher-order. I see how the phrase makes sense. You can have specific propositions about specific numbers. And then, you can have propositions for a bunch of numbers. And then, you could have propositions about propositions. I see.

MA: Yeah. And the weird thing is... This is a total tangent, but sort of the weird thing about this is, so logicians, more or less, figured out propositional logic, then first-order logic, then higher-order logic. There's obviously still work on each of these things, but sort of that's the order in which they started considering things. Type theorists and programming language theorists figured out the equivalent of propositional logic, very simple type systems.

MA: And then, they figured out precisely second order type systems, right? Type systems that let you quantify not over values, but over types. And then, we're beginning to figure out... Well, we are figuring out dependent types, which are kind of first-order, as well as higher-order.

SK: Oh, that's interesting.

MA: So, it's kind of like we skipped over the just first-order phase. There are type systems that are kind of directly correspondent to first-order logics, but they're kind of weird. The first thing they figured out was, so called, parametric polymorphism, which is where you're allowed to quantify over types. You're allowed to say, "This function has type... For any type alpha, it takes alpha to alpha."

SK: So, in my head, I have... in Haskell, I'm thinking Int -> Int is like first... is the base.

MA: That's about... There's many different kinds of higher-orderness in programming languages. And so, one of them is like, "Are your functions higher-order?", which I think is what you were thinking of.

SK: Yeah, you're right. I don't even have to talk about functions.

MA: Maybe, I was missing it.

SK: No, no. I just was over-complicating my example. We have Ints. And then, we have lists of Ints, which is... and like a list of an Int, is that polymorphic at all or higher-order? That's still first-order? Can be parameterized...

MA: That's like even a different direction. That's talking about types parameterized by other types.

SK: Is that-

MA: Well, it's related, but it's not exactly the same thing. So, quantification in a type system is, for example, having a function, take the identity function for example, that works at any type. Or the map function, which maps a function. Well, that complicates things, because it also involves lists. But-

SK: If you can like describe types. If you have a type that admits other types, like subtyping...?

MA: No.

SK: Like Num is-

MA: It's not about subtyping, not really. It's about: what is the type of the identity function? Well, you could say it has the type Int -> Int, because it takes Ints. You can say it has the type Bool -> Bool. But neither of these is really a most general type to give it. The most general type to give it is to say, "For any type A, it is type A -> A." That "for A" I put there, that's the equivalent of a "for all" in logic.

MA: But what is it quantifying over? It's not quantifying over values. It's not for any value X. It's quantifying over types. It's for any type X.

SK: I see.

MA: And that's what makes it a second order quantification.

SK: But is it only useful for the ID function?

MA: No. It's useful for a lot of other functions. They usually have to involve some sort of data structure. So, an example would be the map function that takes a function and a list and applies the function to every part of that list. It doesn't care whether it's a list of integers or a list of booleans. So, it has the type: for any type A and any type B, give me a function from A to B and give me a list of A. And I'll give you a list of B.
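[The quantification being described can be spelled out in, say, Python's optional type annotations — an editor's sketch, with `TypeVar` playing the role of the "for any A"; in Haskell this would be `forall a b. (a -> b) -> [a] -> [b]`.]

```python
from typing import Callable, TypeVar

A = TypeVar('A')
B = TypeVar('B')

def identity(x: A) -> A:
    """The identity function: implicitly 'for any type A, A -> A'."""
    return x

def map_list(f: Callable[[A], B], xs: list[A]) -> list[B]:
    """map: 'for any A and any B, (A -> B) -> list of A -> list of B'."""
    return [f(x) for x in xs]
```

The same `map_list` works at a list of integers or a list of booleans; the quantifier over types is what makes one definition serve every element type.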

SK: I guess, I normally think about... yeah, it doesn't occur to me that there's an implicit "for any A" and "for any B". The only time I think of the implicit "for any" is if it's like "Show A => ..." you know? Because then, it's like, "Oh, okay, this is for any Show."

MA: Yeah, because then it's explicit in the syntax.

SK: Yes, exactly.

MA: Yeah, but to a type theorist, we think of there being an implicit "for all A".

SK: For any A. I see. I see. And, okay, that's interesting that we skipped... So, the middle level that we skipped would be-

MA: First-order, first-order. The equivalent to first-order logic.

SK: So, what would be a type in first-order?

MA: Well, the reason that we skipped it is it's very not obvious what it would be, right? Because-

SK: It's just less useful?

MA: Well, I don't know about less useful. The obvious example that I could give is dependent types. But dependent types don't really correspond to first-order logic. They correspond to very higher-order logic, like all the bloody layers. There's not a clear correspondence anymore.

SK: Oh, okay, interesting.

MA: But the thing that dependent types allow you to do that is like first-order logic is they allow you to quantify over all the values of a given type. So, I can say, "For any natural number, N, this will take a list of length N of, I don't know, integers and return a list of length N of integers." So, I'm not quantifying over a type there. I'm quantifying over natural numbers, right, for the length of the list. That's like what first-order logic lets you do.

SK: Okay. Interesting. Okay, so-

MA: That was a kind of huge tangent.

SK: Yeah, yeah. Well, I feel like this has been great. I feel like we've been talking about interesting things, but we should probably get to your main project. I think we spent enough time laying the foundations and talking around it. So, yeah, give the quick summary...

MA: The spiel for Datafun. So, we have Datalog, right, which is this language that can be thought of as logic programming, but limited, right? Limited so that it's no longer Turing-complete. It always terminates. But because of those limitations, we have, for example, much more efficient implementation strategies for it. And, yeah, I mean, that's basically the idea. It makes the implementation strategies more efficient and lets you do interesting things.

MA: Or you can think of it as relational algebra plus fixed points, so it's like SQL with extra stuff... Except aggregations are a pain. So, I'll talk more about that later. But anyway, it's between these two cool areas, logic programming and relational programming.

SK: This is Datalog or-

MA: Datalog. That's what Datalog is. But what Datalog doesn't let you do is it doesn't let you notice that there's a repeated pattern in your code and break it out into a function. This is an ability that logic programming has, because logic programming doesn't have the limitations of Datalog, right? But once you impose the limitations of Datalog, which are nice, you lose that ability.

MA: But it's also something that functional programming has, because we have functions. See a repeated pattern? Just write the function that encapsulates that repeated pattern. Take the parts that are varying and make them arguments to the function. And take the parts that are constant and make them the code of the function, right?

MA: And it seems like this would be a useful ability to have in Datalog. For example, transitive closure, the standard Datalog example.

SK: You have a lot of graphs in your life.

MA: Yeah. You can write transitive closure in Datalog, but you cannot write a function that, given a graph, takes its transitive closure.

SK: It only works for specific graphs.

MA: Right. You have to hard code. You have to pick a relation that represents the graph that you want to take the transitive closure of and write the thing that takes its transitive closure. And it's hard coded to that graph. You cannot plug in a different graph.

SK: It's like writing a macro to plug in a different graph, or the ability to write functions.

MA: Right. Or add the ability to write goddamn functions. So, that's kind of what Datafun is. It's an attempt to allow you to write what is effectively Datalog code, but in a functional language so that if you see a repeated pattern in your code, you can just abstract over it. And along the way, we sort of end up adding a bunch of interesting things, because it's easy and natural to add them in the context of a functional language.
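[A sketch of the abstraction being described, in Python: transitive closure as a reusable function over any graph, rather than hard-coded to one relation. An editor's illustration, not Datafun syntax.]

```python
def transitive_closure(edges):
    """Take any graph (a set of (source, target) pairs) to its transitive closure."""
    closure = set(edges)
    while True:
        # One derivation step: if a -> b and b -> c are known, add a -> c.
        new = {(a, c) for (a, b) in closure for (b2, c) in closure if b == b2}
        if new <= closure:       # nothing new derivable: we've hit the fixed point
            return closure
        closure |= new
```

For example, `transitive_closure({(1, 2), (2, 3)})` gives `{(1, 2), (2, 3), (1, 3)}` — and the same function can be applied to any other graph, which is exactly the plug-in ability plain Datalog lacks.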

MA: So, for example, we can add types. Datalog is traditionally kind of untyped. There's no particular problem with adding types directly on Datalog. But as long as we're going through a functional language and we know how to use types for that, we add those. So, you can have sum types now, if you want sum types. Also, lattices, so Datalog... How do I explain the use of lattices in logic programming and in Datalog and in Datafun?

SK: I always forget what a lattice is.

MA: So, in this case, what I'm actually concerned with are join semi-lattices. People often call them lattices, because saying join semi-lattice every time gets to be a mouthful. But what that means is you have... There's two ways of thinking about it. One way of thinking about it is you have a binary operator that is associative, commutative, and idempotent. So, associative, the parens don't matter.

MA: Commutative, the order doesn't matter. Swap things around as much as you'd like. Idempotent, doing things twice doesn't matter. X join... the operator is usually called join, which is confusing, because it's not database join. It's a different operator. So, X join X is X. That's what idempotence means. And it has an identity element, a thing that does nothing. So, the classic example of a join semi-lattice is sets under union.

MA: Union is associative. The parens don't matter. It's commutative. X union Y equals Y union X. Order doesn't matter. It's idempotent. A thing union itself is that thing. Adding a set to itself, it has the same elements.
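The laws just listed can be checked directly in Python, where set union is the `|` operator (my own illustration for readers following along):

```python
# Join-semilattice laws for sets under union (join = |, identity = empty set).
a, b, c = {1, 2}, {2, 3}, {3, 4}

assert (a | b) | c == a | (b | c)   # associative: the parens don't matter
assert a | b == b | a               # commutative: the order doesn't matter
assert a | a == a                   # idempotent: union with itself does nothing
assert a | set() == a               # the empty set is the identity element
```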

SK: I see. I see.

MA: Right? And the identity element, the thing that does nothing, is the empty set.

SK: Addition and multiplication are?

MA: Addition and multiplication are not semi-lattices, because they're not idempotent.

SK: Oh, if you add a number-

MA: Two plus two is four.

SK: I see, yeah, I see.

MA: But maximum is a semi-lattice on the natural numbers.

MA: Or minimum, I guess?
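As a quick sanity check on the maximum example (again my own illustration, not from the conversation): max on the natural numbers satisfies all the semilattice laws, with 0 as the identity, while min fails only because the naturals have no largest element to serve as its identity.

```python
# Max over the natural numbers is a join semilattice with identity 0.
assert max(max(2, 5), 7) == max(2, max(5, 7))  # associative
assert max(3, 9) == max(9, 3)                  # commutative
assert max(4, 4) == 4                          # idempotent
assert max(4, 0) == 4                          # 0 does nothing: it's the identity

# Min is also associative, commutative, and idempotent, but on the naturals
# it has no identity element -- you'd need a largest number ("infinity").
```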

MA: Minimum on the negative numbers would be. I need an identity element. So, let's go through each of these properties. Maximum is associative, yes? It's commutative. X max Y is Y m