Subscribe to RCast on iTunes | Google Play | Stitcher



In Part 2 of a two-part series, Greg Meredith is joined by Christian Williams and Isaac DeFrain to investigate biological metaphors in thinking about distributed consensus platforms that relate to Proof of Stake and Casper.

Slides referenced in this discussion are available here.

Transcript

Greg: First of all, thank you, Derek, for hosting, and thanks to Christian and Isaac for joining. We also have here in the room Pawel. We’re going to be talking about the second part of Towards A Living World. Since Isaac wasn’t here for the first part, and other listeners might want to catch up, I’ll just do a quick recap.

I’m essentially operating from the draft of a document that I have entitled “Towards A Living World.” The work itself is in two parts, but it’s also at two levels. The two parts are elaborating a framework that plugs into what we’ve already got with CBC Casper so that we can do programmable liveness constraints. This culminates in a variant of the Rho calculus that allows us to make contexts first-class, and thereby gives us a mechanism for expressing synchronization constraints as encoder-decoder pairs for an error-correcting code, where the context is running those encoder-decoder pairs. That’s the basic idea.

When you make these contexts be first-class things that you can pass around, that allows you to program the synchronization constraints as the protocol evolves, rather than having the encoder-decoder pair specified for the entire lifetime of the protocol.

This approach to capturing synchronization constraints requires a few insights. First of all, we don’t want wall-clock time, because wall-clock time requires a lot of trust and it also introduces a whole bunch of complexity. One example of the kind of complexity you begin to have to deal with: if we wanted to run these protocols between a satellite and the ground, we would have to deal with relativistic adjustments. GPS systems today deal with these kinds of relativistic adjustments, so that introduces a lot of complexity.

You also have to put a lot of trust in your network time service. That’s a source of trust, and the network time service can be hacked. Between those two basic issues, wall-clock time is a bad idea as a device for expressing synchrony constraints. Instead, we use this error-correcting code idea, and then we allow the notion of wall-clock time to emerge from the kind of synchrony constraints that we can express with these encoder-decoder pairs.
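To make the encoder-decoder idea concrete, here is a minimal sketch, in Python and purely illustrative (the actual protocol is not specified in this episode), using the simplest error-correcting code there is: a 3x repetition code with majority-vote decoding. The function names are hypothetical.

```python
def encode(bits):
    """Encoder: repeat each bit three times (3x repetition code)."""
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(coded):
    """Decoder: majority-vote each group of three, correcting any
    single bit flip within a group."""
    out = []
    for i in range(0, len(coded), 3):
        group = coded[i:i + 3]
        out.append(1 if sum(group) >= 2 else 0)
    return out

message = [1, 0, 1, 1]
sent = encode(message)
sent[4] ^= 1                     # a single-bit transmission error
assert decode(sent) == message   # the decoder corrects it
```

The tradeoff Greg discusses next is visible even here: tripling the data buys tolerance of one flipped bit per group.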

That’s one piece of the puzzle. It culminates in a variant of the Rho calculus, which I had originally called the Space calculus because names are now locations in the sense of Conor McBride’s notion of a location or Huet’s notion of a location that derives from the notion of zipper. Names are pairs of contexts and terms and that allows you to locate processes.

Because we’re able to show that we can also express synchrony constraints, then it’s not just about space, it’s also about time. The programmability of the contexts gives you the ability to have time and space be stretchy. They can bend and morph depending upon how the whole thing evolves. It starts to look relativistic. We have to deal with relativity if we do wall-clock time; here relativity is baked into the system.

That’s part one. Part two is reimagining the validator harness. I’m not using the metaphors of gas but using biological metaphors: metabolic control and reproduction and adaptation. This next part is how to fit those two together. Let me stop there and make sure that the recap made sense.

Isaac: That definitely makes sense. I see how having this sort of native notion of time should be advantageous. I wonder about the performance, though. It almost seems like you get something for free if you just tack on wall-clock time, but like you said, then there’s this whole trust issue, which is a huge issue in and of itself. So with this notion of time coming directly from the protocol itself, I wonder how that compares in performance to something like tacking on wall-clock time.

Greg: That’s a really good question. Essentially, in terms of the protocol, it comes down to the cost of encoding data and decoding data for error correction. I have decades and decades of experience with error-correcting codes and how to make them efficient. We know where the tradeoff points are for getting more error correction and what that costs in terms of encoding and decoding. There’s lots of literature on exactly this issue, so we can do a fairly detailed comparison between that and wall-clock time. The nice thing is that we’ve set it up so that we can answer the question with engineering. I like to use math to be able to answer some of those questions.

Isaac: Perfect. That’s the best possible answer.

Greg: For people who might be coming to this later on, I know it’s a podcast, but there’s a read-along text, which is this Towards A Living World document. The summary of the first part is given on page 21: error-correcting codes and the Space calculus summary. It’s as tiny as the Rho calculus; it fits on a page to give the entire syntax and semantics. The structural equivalence is not on the page, but you could fit it on the page if you did it cleverly. The main point is that the spec is very tiny and mostly self-contained. For listeners who want to see what the combination is, they can just look on page 21.

I also want to suggest that for listeners who are interested, the other piece of the puzzle that’s worth taking a look at is the validator harness on pages 35-37. It’s all one piece of code, but on each of those pages we identify elements of the validator harness. The magic trick, which we walked through with Christian, is that we started from first principles for how to write down lifelike characteristics: metabolic control of activity; reproduction, autonomous reproduction like the splitting of a cell; and finally, some notion of adaptation. All of that is represented on a single page with this single harness.

The metabolic control of computation matches directly with the token gas split that we see in Ethereum, or even in the RChain implementation now. The reproduction piece makes explicit an issue that we have been struggling with because it was implicit in the way we’ve been thinking about the validator’s structure, which is how many threads of validation activity are associated with a single staking identity. The reproduction part of the structure makes that explicit, so you know that you have some population of validator behavior that’s associated with that staking identity. Once you’ve made it explicit, you can begin to put constraints on it. Those constraints correspond exactly with the notion of staking.

If you starve the harness of the resource, then you can either kill off the entire population of validator behaviors, or, dually, you can reward it and increase the population, or you can do something in between where you starve only a portion of it. That range of possibilities gives rise to slashing all the way to the bone. For example, if the validator’s equivocating, then you really don’t want that behavior to continue. You want to kill off that population of behaviors. But if the validator has, for reasons outside of its control, created an invalid block, you might want to punish that but not eliminate that kind of behavior.

Then there’s the notion of a functional behavior, which we encoded into the framework. Here functional behavior, thought of biologically, relates to, for example, I put a picture in the slides of a spider weaving a web. That’s functional behavior, even though it’s related to how the spider feeds itself. But it’s also an extension of the spider’s perceptual system. We’ve all seen that. You see a spider and a web and you touch the web and the vibrations created by the touch ripple throughout the web, then the spider responds. The weaving of the web is a kind of functional behavior.

We can build that into the framework. That functional behavior corresponds to the validator accepting client requests. In other words, running contracts. The execution of smart contracts is functional behavior.

The launch point for this talk is an observation that I made in the previous one, where we first recognized that the validator behavior is not the only thing that needs to be resource-controlled. We also want to resource-control the functional behavior. When we start running a smart contract, we need to make sure that it only makes as much progress as there are resources provided to it.

We observed that because of the way we had written down the behavior for the validator, we can just iterate that: wrap the smart contract in another layer of validation behavior, which creates the metabolic control over the steps of the smart contract. It shows the value of this approach.

Now comes a really interesting trick, which is that we can create a fixed point, where instead of just doing two levels, we do an infinite number of levels. We close up the validator behavior, this validator harness, with reflective fixed points. The equation is on slide 38: if P is our lifelike behavior, our validator behavior, then R of P is going to be defined to be P of R of P. Then where we would put the normal functional behavior, we put R of P. Now you get this infinite tower of validators inside validators inside validators inside validators.
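As a rough illustration of the fixed-point equation, here is a hedged Python sketch (not RChain code; the names `tower` and `descend` are invented). Laziness is what makes the infinite tower implementable: each level produces the next only on demand.

```python
def tower(level=0):
    """R = P(R): each level is a validator whose functional-behavior
    slot holds the next level of the same tower, built lazily."""
    return {"level": level, "inner": lambda: tower(level + 1)}

def descend(t, fuel):
    """Metabolic control: unfold the tower only as far as fuel allows,
    one resource unit per level of simulation."""
    while fuel > 0:
        t = t["inner"]()
        fuel -= 1
    return t["level"]

assert descend(tower(), 5) == 5   # five units of fuel reach level 5
```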

You can view this infinite tower as a tower of simulators. The outer simulator is effectively running the inner simulator as a control behavior. You could imagine running all of RChain as a Rholang program inside of RChain, and then inside that, you run this other nested RChain, and inside that, you run this nested RChain. It’s a fun picture, this kind of Rick and Morty-like picture of the world as a simulation. The teenyverse and the microverse: that’s the sort of idea being expressed here.

There’s a bit more to it and that’s where we climb up to the second level. Before we do that, let me again stop and check in and make sure that, Isaac and Christian, you’ve had a chance to grok what I’m talking about here.

Christian: Just for everyone’s sake, would you mind reminding us how to interpret this reflective tower of behavior and how you impose resource constraints on it?

Greg: With the reflective tower, you can think about any given level as running RChain. You have a bunch of these validator behaviors in parallel and they’re communicating with each other around certain client requests. Those client requests are to run something. But what is it that the client requests are going to run? They’re going to run another level of validation.

It’s as if we have this nested notion of simulations. You can see a validator as a universal Turing machine. As part of your client request, you ship a Turing machine that the universal Turing machine is going to interpret. This allows you to do this at infinitely many levels. It’s very similar to the reflective tower of 3-Lisp that was defined by Brian Smith back in the day. It’s really an infinitely nested universal Turing machine. Because each one of these is metabolically controlled, at any given level you have a staking token that is used to control how many steps in the simulation you’re allowed to go.

Christian: If you have two such towers in parallel and their outer levels are the same, if you go down a level or go in a level, are those connected or are they separate within each tower?

Greg: If you have a community of validators, and each one of them has inside it a community of validators, the community of validators inside one validator is not connected to the community of validators inside another validator. But the validators within a given community are all connected to each other and they can do stuff. It looks like the blockchain version of the Mandelbrot set.

Christian: If lower levels or inner levels need to communicate with inner levels of other towers, is that something that you would need?

Greg: It’s not needed. You could certainly arrange it so that you could do that, but then it gets even more complicated. As for the relationship between level N and level N-1: level N-1 is controlling how many steps level N can go, that is, how much of the simulation you can run. The interesting thing, in terms of self-hosting, is that we can now self-host the full Rho calculus. The idea is that we’re now nesting these universal Turing machines.

In fact, we can self-host any flavor of the Rho calculus. In particular, there’s a way to encode the Space calculus in the Rho calculus, or there’s a way to encode the Rho calculus in the Space calculus. We could rewrite everything that we’ve done in this sort of biologically-inspired model of consensus in the Space-Time calculus, and then be simulating a notion of space-time at each level.

We have this lifelike simulator for space-time that is infinitely nested. There’s an infinitely-nested notion of space-time. I don’t know how else to say it; I’m trying to find another way to communicate the idea. The important point is that I’ve got this tower of simulators for the Space-Time calculus. At the inner level, I’m simulating a notion of space-time inside the outer level. The outer level is providing a resource constraint on the simulator for the Space-Time calculus on the inner level.

Isaac: I intuitively understand this infinitely nested business. Coming from a purely mathematical standpoint, it makes sense to have infinitely nested things. The thing that’s hard to understand is how this is actually all implemented on a machine.

Greg: That’s a good one. You can implement it on a machine. That’s why I wrote down the fixed point approach. I could literally write this down in Haskell; I can also write it down in Rholang. The important point is that I’ve given a soluble recursive equation.

There are some domain equations which are not soluble. For example, D of X is equal to the power set of D of X. That’s not soluble. You have to at least get off the ground. But this one is carefully constructed so that you have a guard around the recursion that allows it to be soluble.

Christian: What is that guard?

Greg: The guard is the P constructor itself. That’s the one observation you need in order to know that we can implement this in Haskell or Scala or Rholang.

The next piece of the puzzle is to investigate some of the metaphors; here it becomes a philosophical investigation. The first part of the talk was just the practical side. We build up this machinery; the reason we’re building up this machinery is to create a next-gen blockchain architecture. RChain gives approximation zero. Then there’s approximation one, which makes sure that we have this nice programmable liveness and improves the construction for the validators. The two fit together because RChain is effectively just wrapping a Casper consensus around the Rho engine. Now we’re going to wrap the Casper consensus, reimagined in terms of this biological metaphor, around not the Rho calculus but the Space-Time calculus. It’s just a refinement of the existing RChain architecture with some spiffier gadgets.

That’s nothing new or super fancy. But we notice that we can do lots of things with those gadgets that are slightly harder to do in the architecture as originally conceived. Once we have fit those two together, it turns out we can look at some of the more mathematical or philosophical consequences of these constructions.

The first one is this nested tower of simulations, a kind of Rick and Morty contemplation, but it gives rise to interesting observations. The first one I want to chat about is something that happens in standard model physics. There’s an implicit assumption that smaller equals simpler. This assumption goes back 3,000 years, to when there began to be this debate between the continuists and the atomists, for lack of better nomenclature.

Atomists argued that the physical universe is ultimately grounded out into some collection of building blocks that can be assembled in various ways to get the complexity that we see as physical reality. The continuists said, “No, it doesn’t work that way. You don’t have these sort of discrete building blocks. It can be infinitely sliced and diced.” I want to argue that at least at the mathematical level we can have our cake and eat it too. We can build things out of discrete building blocks, but the discrete building blocks are not aligned with the spatiotemporal locality.

If you look at standard model physics, by the time you get down to the scale of the quark, you’ve lost a whole bunch of capacity for organizational complexity. In fact, there’s a general principle, which is: as you descend in spatiotemporal scale, you also descend in terms of your capacity for organizational complexity, until finally you get down to quarks, where only very limited kinds of organizational complexity are expressible. This is summarized in the slogan that smaller equals simpler. Does that make sense?

Christian: Yes.

Greg: This was an important insight. Until I could write a model that breaks that assumption and yet is still consistent, one I can calculate with, I couldn’t see the assumption. But that assumption doesn’t have to be true, and we just built a world where it isn’t true. We can treat nesting as the notion of getting smaller and smaller or bigger and bigger, depending on which direction. In many ways, it doesn’t matter which direction you associate with getting smaller or getting bigger. You could imagine that getting smaller is going inward in terms of nesting. But you could also imagine that going inward is getting bigger, because you have to add more and more addressing information in order to say where you are. So they’re dual to each other, and that’s because you have this infinity to work with.

But organizational complexity has nothing whatsoever to do with where you are in that nesting hierarchy. There’s a corresponding question that arises. Einstein posits something about information flow as related to the speed of light. In Einstein’s proposals, what we get is that the speed of light is like the clock speed of the universe as a computer. You can’t get information to flow faster than the speed of light, in a certain sense. Quantum mechanics breaks this intuition in a certain way, but there’s something fundamental about how information flow is related to the speed of light.

Here, if you break the assumption that smaller equals simpler, then there are weird things that you could do. For example, since you have this whole computational universe available to you at lower scales, you could consult that computational universe as a kind of oracle. You can say, “Hey, you down there in the teenyverse, go and calculate this question for me.” At my scale, it will appear to take a single computational step, even though when you go downward, it might take a lot of steps and might consult other oracles even further downward, which takes even more steps.

In fact, who’s to say where the bound is? There’s an interesting question about how much information flow or how much computation can be done in a fixed computational step at a particular level of simulation. Does that make sense? Do you understand what I’m driving at?

Isaac: You’re saying, is there a bound on the steps taken in some of these sub-layers that correspond to a single step in, I guess, the original layer? Because you can go up or down from basically anywhere you are.

Greg: That’s exactly right. I want to argue that we can steal the principle of the speed of light being constant and import it into this setting, and say that we only get to do a finite amount of descent and re-ascent in a single computational step. It has to be bounded. Otherwise, you can go do magical, miraculous things. Maybe the nature of the universe is to allow magical, miraculous things: turning water into wine and all kinds of stuff like that. At least now we have a place where we get to impose a notion of continuity, which says that if you relax that continuity, then you are allowing for miracles.

I also want to point out (and again, I’m not really facile with renormalization calculations) that from what I know about them, this bound on not being able to do infinite descent and re-ascent is close to what’s going on when we talk about renormalization. Essentially, renormalization is getting rid of similar kinds of infinities: infinities that you’d like to exclude from your calculations. There’s a connection there. I haven’t taken the time to go and elucidate the connection, but I hypothesize that we can probably express notions of renormalization in this setting.

Christian: Since on each level you’re focused on resource constraints, it seems like there would need to be a similar constraint on the traversal of the levels themselves. That can’t be taken for granted as infinite because on the physical level, somehow these are being collapsed to a single computational level. It doesn’t give you the inception effect on time. If you go down a level, it’s still going to take as long as it did if you did it in the top level.

Greg: Yes, exactly. Or rather, it’s not so much that it has to take as long, but we want to be able to record a notion in which we can impose that kind of constraint because our intuition about reality—at least currently—suggests that that’s the way things operate. But we don’t know. We’re still working it out.

What’s interesting about this framework from a philosophical point of view, apart from its practical applications—and I want to stress that there are very practical applications of the mathematics that I’ve been laying out—it has these philosophical implications that are pretty striking. It uncovers this assumption that smaller equals simpler, which does not have to be the case. It also uncovers a new meaning to the intuitions behind the speed of light constant.

Another aspect of the standard model of physics that this begins to challenge is the idea of certain kinds of symmetries. In particular, in standard model physics (just from my naive understanding of it; I’m not a physicist and don’t play one on TV), if you have a physical situation in which you’ve got an electron in one location and another electron in another location, and you can swap them while preserving all their properties, then the two resulting universes in which they’re swapped are the same universe. There’s no observation or measurement that would be able to detect that you’ve done this swap.

This notion of swapping is very akin to fungibility in economics. If Christian has a dollar and Isaac has a dollar, and they swap their dollars, you haven’t changed their economic universe. From the point of view of their capacity as economic agents of the U.S. or global economy, it doesn’t matter if Christian has marked his dollar in a certain way. As long as it’s still spendable, even though the dollars are distinguishable by a mark or a serial number, their economic capacities are the same, and so the economic universes are the same.

These kinds of symmetries are fundamental to both standard model physics and certain aspects of economics. However, in this new setup, you can break those symmetries. In particular, since we no longer have this smaller-equals-simpler assumption in play, you could have these electron-like programs that for all intents and purposes look the same.

If we conduct a bisimulation calculation with observations that are resource-bounded, where our observations are only allowed to use probes up to a certain amount of resourcing, then we can’t distinguish them. But if our observations are allowed to vary in terms of resources, then we can distinguish them.

These electrons might be subject to evolutionary pressure; they’re under constant adaptation. The way in which we effect evolutionary pressure is by measurement. We’re constantly measuring the properties of these things. If the measurers are far enough away, in terms of levels, from the things that they’re observing, then those observations could reflect millions and millions of inner generations for these electron-like things. Since there are so many iterations from the scale of the observer, they look and feel, or appear, to be the same entity computationally, when in fact they are not. As a result, you get a contextualized notion of symmetry. It’s symmetry up to how many resources you can spend to distinguish them.
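A toy model of this resource-bounded symmetry, as a Python sketch with invented names (the real notion would be bisimulation over terms of the calculus): two “particles” that agree on every probe up to some inner depth, so a fuel-bounded observer sees one entity where a richer observer sees two.

```python
def particle_a(step):
    """Always observes as 0."""
    return 0

def particle_b(step):
    """Identical to particle_a until 1000 inner generations have passed."""
    return 0 if step < 1000 else 1

def distinguishable(p, q, fuel):
    """Probe both processes for `fuel` steps; True if any probe differs."""
    return any(p(i) != q(i) for i in range(fuel))

assert not distinguishable(particle_a, particle_b, 100)   # symmetric up to 100 probes
assert distinguishable(particle_a, particle_b, 2000)      # broken with more resources
```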

Christian: In practice, what’s an example? What are we thinking about as fungible versus distinguishable?

Greg: I’m arguing that we might not have a fixed set of types of particles.

Christian: I mean in RChain.

Greg: In terms of RChain, this would be like saying that two different smart contracts look like they could be substituted for each other if you don’t account for the costs of going inward by so many levels.

Christian: Okay.

Greg: If you’re willing to only put a fuzzy bound on how far inward they could go, then they appear to be bisimilar. But if you swap them, and the corresponding validators they get swapped to have different resources with respect to going inward, then they won’t necessarily work.

The analogy to physics is that now we no longer have a finitary set of particles. There’s now an infinite zoo of particles. It doesn’t mean that we have to leave behind the notion of a crisp classification. This is where the LADL types come in. We can now start using our LADL types to describe and classify computational behaviors. The classification is set up to deal with the fact that we now have an infinite zoo of things. To get better classifications, we refine the types, which was the other aspect of what I was talking about in the previous talk.

Isaac, you weren’t here, but I showed that we can do a notion of genetic algorithms inside the Rho calculus, because the code of P is to P as genome is to phenotype. This can be made very, very precise. I went over the rough ideas of what genetic algorithms are and how they’re applied to the generation of programs, giving rise to genetic programming. And I showed that for all bipartite term constructors in the Rho calculus we can define crossover. Because LADL turns every term constructor into a type constructor, we can also do the genetic algorithms at the type level.

Now we can do these two-level search algorithms, where you propose a type and then you propose a term that is an inhabitant of that type but maybe doesn’t fit some search criterion. That’s the counterexample. Then you refine the type to eliminate that class of terms, and you iterate; you go through that process. You’re searching feature space both in terms of types and in terms of terms, which is an incredibly powerful thing.
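The crossover on a bipartite (two-argument) term constructor can be sketched as follows; this is an illustrative Python fragment over toy terms, not the LADL machinery itself.

```python
def crossover(parent1, parent2):
    """One-point crossover on a binary term constructor (op, left, right):
    each child inherits one subtree from each parent."""
    op1, l1, r1 = parent1
    op2, l2, r2 = parent2
    return (op1, l1, r2), (op2, l2, r1)

# toy terms built from a binary "par" constructor, standing in for
# a bipartite Rho calculus term constructor
t1 = ("par", "P1", "Q1")
t2 = ("par", "P2", "Q2")
c1, c2 = crossover(t1, t2)
assert c1 == ("par", "P1", "Q2")
assert c2 == ("par", "P2", "Q1")
```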

As a mathematician, my inner dialogue is divided into two parts. I’ve got a player or proposer side that’s proposing a type: this must be true. The type is like a theorem: the following is true. Then another part of my mind, the opponent or opposer part, is saying: but what about this counterexample? That’s the term side. Then the proposer responds to that by refining the proposal. Back and forth, back and forth, until I find the construction that matches the thing I’m searching for.

This computational framework that I’ve laid out in these talks supports that process. The reason I’m talking about particle physics in this context is that if we jettison the idea that there’s some finitary collection of building blocks—maybe there’s an infinite collection of building blocks—it doesn’t mean that we have to lose good crisp boundaries in terms of classification. We still have incredibly powerful tools for classifying things. Not only do we have tools for classifying things, but they fit into a framework that allows for genetic evolution of our classification schemes according to selection criteria.

Christian: In LADL, term constructors in the Rho calculus or the Space calculus don’t explicitly reference reflection itself. If you already have some reflective tower in place, how do you implement types in LADL that describe this sort of level-boundedness you’re talking about, that a certain process can only go so deep?

Greg: That’s a great point. The term constructors do reference the reflection. In fact, an interesting aspect of the whole Rho construction in general is that it’s carefully crafted. One of the distinctions between the Rho calculus and an alternative that Matthias Radestock considered is that I have a very, very careful discipline: there is a strict alternation between name constructors and process constructors. This corresponds precisely to the player-opponent game-theoretic construction that I was talking about in terms of my own inner dialogue.

There’s a notion of alternating gameplay built into the syntax of the Rho calculus. There’s a critical point where that’s utilized in the proofs of the well-foundedness of alpha equivalence in the Rho calculus. You need that alternating property or things go wrong. I mention all of this as context and background to address the fact that the @ sign does, in fact, give you a notion of level.

In the standard terminology, the @ sign is reification and the * is reflection. In our calculations, we’re not just looking at that notion of level; we’re looking at the notion of level as instantiated in terms of whether you’re a functional behavior or you’re the outer harness. That is easily syntactically covered as well. We’re able to detect that in a type-like way.

Christian: I feel like there’s so much fascinating math when it comes to this tower.

Greg: Yes, exactly. It’s a lot of fun. It’s a big space to go and explore. In terms of this type idea, we mentioned it last time but it bears repeating, since types give you a notion of query. The basic idea for types as query: suppose I have a repository of source code, like GitHub. I could use a type to query that repository by simply asking for all of the programs in the repository and type-checking them against the type. Those that do type-check are the ones that are returned by the query; they answer the query. That’s a brute-force way; there are lots and lots of optimizations that can be done. Because types form a query mechanism, we can imagine that we use types in two different ways.

You can use them as a compile-time phenomenon. You write down validator harnesses that will only accept client requests for contracts that type-check in a particular way. Everything else would not even be considered; you know that before you ever start the validator code. You can also do this at run-time. You can write a validator that is searching for client requests for deployments of smart contracts that match a particular type, and it will pick nothing else. That allows you to write validators that are much more constrained, much pickier, and thereby give you a much more refined and interesting marketplace, at least in my opinion.
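The brute-force types-as-query idea in miniature, as a hedged Python sketch: the repository, the sample-based “type check,” and all names here are stand-ins for a real type checker over contracts.

```python
repository = {
    "double": lambda x: x * 2,
    "shout":  lambda s: s.upper(),
    "negate": lambda x: -x,
}

def checks_against(program, sample, expected_type):
    """Crude stand-in for type-checking: run the program on a sample
    input and inspect the result's type."""
    try:
        return isinstance(program(sample), expected_type)
    except Exception:
        return False

# the query: every program in the repository of (loosely) type int -> int
matches = sorted(name for name, prog in repository.items()
                 if checks_against(prog, 3, int))
assert matches == ["double", "negate"]   # "shout" fails the query
```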

That then feeds into these philosophical considerations. I’ve been yammering away and we’re getting close to the top of the hour. I wanted to hear from you guys as to your thoughts, having been exposed to these ideas.

Isaac: You were talking about having different validators that can pick contracts to run based on type. The first thing I think of is that this gives a sort of richness to the different types of validator behavior. That immediately makes me think that you can have services that will validate contracts that have a certain type, and there are going to be a whole plethora of these different services based on the certain types of contracts that they would run. You can see how that expands the expressiveness.

Greg: Exactly. To me, this is not just a nice-to-have. In terms of trying to build out a computational framework that’s practical, this is a must-have. To the best of my knowledge, there’s not a single smart contracting platform out there that has anything like this, that has even contemplated this level of functionality. And we can write it down very specifically. What about you, Christian?

Christian: What I’m wondering right now is a lot less practical. Thinking about this tower, I’m wondering about the logistics. If you make one process reflected in itself ad infinitum, how do you describe taking a process from outside and placing it inside that process at an arbitrary location? I have a bunch of similar questions about how towers communicate and that kind of thing, and I feel like the answer is that contexts, and how everything is structured in the Space-Time calculus (just the actual syntax), mean that everything is always being accounted for within these contexts. All you have to do is zoom up or zoom down.

Greg: That’s the whole idea. You got it.

I want to make one last little point. I’ll just read it from the document, “Towards A Living World.” It talks about my own philosophical motivations for working at the second level. The basic idea is that the conceptual framework of physics makes it very difficult to fit life in. In the language of physics, we don’t recognize the purpose of a planetary orbit or the gravitational effects of a star. We don’t use the language of telos or purpose.

By the time we get to biology, it’s almost impossible not to speak about purpose. When you look at the immunological response to a viral infection, you have to use the language of purpose, or the language becomes really awkward. I want to take on board that purpose is part and parcel of life, of our conceptualization of life. Life means purpose.

I want to suggest that it’s probably not an accident that if the dominant metaphors of the scientific culture are lifeless and purposeless, if our scientific rumination and our scientific conceptualization of the world is of a universe that is lifeless and purposeless, that framework of thought might play a role in our culture’s headlong descent towards a lifeless planet.

If you ever go whitewater rafting or kayaking, you notice right away that your boat goes where your attention is. If your attention is riveted to a lifeless and purposeless universe, that might be where you’d head. In the same way that we need something like scalable blockchain technology to conceive and manifest a way of coordinating ourselves with the speed and agility that’s required to escape mass extinction, I think we also need a headspace, a conceptual framework, something like a roadmap towards a living world.

I want to argue that it might very well be that we never had to try to get life to emerge out of lifelessness. Maybe life was the ground to begin with; it is life that begets life. The mathematics of a living world might be richer. In order to step into a living world, we don’t have to jettison scientific rigor or accuracy or precision of thought or precision of discourse. In fact, it might be the other way around: if we include life in our computational and calculational framework, it’s just a lot richer. It’s more fun. That’s what motivates this other level of work.