CHURCH SPEAKS

I would say that there are two things I’m obsessing about recently. One is global warming and the other is augmentation. Global warming is something that strikes me as an interesting social phenomenon and scientific challenge. From the social side, you’ve got denialism, which, to me, is more important. You have denialism on a bunch of fronts. You've got denial of the Holocaust and evolution, but those aren’t things that necessarily in and of themselves impact our lives. It’s very heartrending and callous that anyone would deny the Holocaust, but as long as they don’t add to that a lot of other racism, nobody’s going to get hurt by it.

I imagine that we could probably populate my company enEvolv, which has evolution in the title, mostly with creationists and they would still get the products out. You just follow a recipe. Even though you’re doing evolution, you don’t need to believe it. Maybe it would help if the very top scientists believed in neo-Darwinism or something. Those are curious things that people fight about and have deep feelings about, but they don’t affect day-to-day life.

Global warming is something that could be catastrophic. You could argue that it’s in the same category because you can’t prove that my life today is worse because of global warming, but it’s something where it could be exponential. The odds are against it, but we don’t even know how to calculate the odds. It’s not like we’re playing blackjack or something like that. There’s more carbon in the Arctic tundra than in the entire atmosphere plus all the rain forests put together. And that carbon, unlike the rain forest where you have to burn the rain forest to release it, goes into the atmosphere as soon as you get melting. It’s already many gigatons per year going up. That’s something that could spiral out of control.
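The tundra claim can be sanity-checked with round, commonly cited carbon-stock estimates. The specific figures below are my assumptions for illustration (published ranges vary widely), not numbers from the text:

```python
# Rough sanity check of the carbon-stock comparison, using commonly cited
# round estimates (assumptions for illustration; published ranges vary):
#   permafrost/tundra soils : ~1500 GtC
#   atmosphere              : ~870 GtC (CO2 near 410 ppm, ~2.13 GtC per ppm)
#   tropical rain forests   : ~300 GtC in biomass
permafrost_gtc = 1500
atmosphere_gtc = 870
rainforest_gtc = 300

# Does the tundra hold more carbon than atmosphere plus rain forests combined?
print(permafrost_gtc > atmosphere_gtc + rainforest_gtc)  # True: 1500 > 1170
```

Even with the uncertainty in each estimate, the ordering holds: the frozen soils hold more carbon than the atmosphere and the rain forests together, which is why an uncontrolled thaw is the exponential worry.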

Even for ultra-concerned citizens, almost all the suggestions are not about how to prevent an exponential release, but how to slow down the inevitable. It's like the extinction problem: If you don’t have a way of reversing it, then you’re fighting a losing battle. That’s not psychologically a good thing, it’s hard to get enthusiastic funding for it, and you will ultimately fail. Whether it’s solar panels, or not using your SUVs as much, or not buying SUVs, or having smaller houses—all of these things are slowing down the inevitable. It’s hard to get excited about that.

The other thing that is problematic socially is the whole idea that it’s an "inconvenient truth." To some extent Gore’s phrase is brilliant, but it’s also counterproductive because the people for whom it is inconvenient don’t want to believe it’s inconvenient. People don’t want to give up their SUVs and their steak meals. It would be better to talk about a convenient solution, whether or not that’s the real solution or the best solution, just talk about it so you get acceptance first. You need acceptance before you can get to the best solution.

The other part that makes acceptance difficult is blame. People will say, "It’s not my fault," and that gets confused for "it’s not anybody’s fault." You could make an argument that it’s not your fault because you weren’t around during the Industrial Revolution. You didn’t personally do that much; you’re just one seven billionth of the problem at most. You could make an argument that you’re not personally to blame, but then expanding that to no human being has had anything to do with it is where things go off the tracks. The thing that got us into the position of denial was the blame game. You want everybody to be inconvenienced because it’s their fault. That’s two strikes against you.

I don't know if you’ve read The Righteous Mind, but Jon Haidt makes the point that even people who consider themselves very rational are not using a rational argument when making decisions. They’re making decisions and then using the rational argument to rationalize. A lot of what he says sounds obvious once you restate it, but I found the way he says it and backs it up with social science research very illuminating, if not compelling.

The elephant, as he refers to it, the thing that’s making your decisions in your life, is deciding that this person is telling you that you’re responsible for something you don’t feel responsible for. It's telling you that you have to sacrifice many things that you don’t want to sacrifice. From your viewpoint, that person is inconvenient, incorrect, and you’re going to ignore them. The more they insult you and your way of life, the less you’re going to listen to them, and then you’re going to make a bunch of rationalizations about that. This is why we have problems.

So, let’s reframe it as you’re not responsible necessarily, but here’s an opportunity. Let’s say nature caused this global warming, and maybe this global warming could get worse. Let’s not even say that it has gotten to a horrible place. We're above 400 parts per million, and maybe that's too far, but it could get exponentially worse. If it keeps going up exponentially, we’ll eventually release all of the carbon dioxide and all the methane that’s right below the surface, and we know there’s tons of methane in the cold water that could be released. Methane, by the way, is a global warming gas that's twenty-eight times worse than carbon dioxide. All this stuff could result in melting of all the ice, and then you could get to temperatures that aren’t necessarily in the historical record.

There’s a tendency to use history as an indicator of the future—of course, the SEC warns us against doing this when investing in stocks. There is a non-zero probability that we could go somewhere that is without historical precedent. Look at Venus. Venus has an atmosphere that’s ninety-five times higher pressure and temperatures that would carbonize life. At one point something must have happened in its history that had never happened before and was irreversible at that point. We don’t want Earth to turn into Venus.

If we stop blaming and start talking about opportunities, what could we do? We already do lots of things that are unnatural. Natural would be to let weeds grow all over the planet instead of planting crops. The fact is there was an opportunity there. It’s not our fault that the world was not covered with beautiful crops; it’s our opportunity to cover it with beautiful crops. The same thing would happen if an asteroid were headed our way, or if we had seismic information that there was going to be some super volcano; it wouldn’t be our fault, but you wouldn’t deny the possibility that the human race should band together and solve the problem proactively in anticipation of possible disaster. Even if it’s not guaranteed that that asteroid is going to hit us, if it’s big enough and it’s headed our way, we would band together as a population to fight it.

Once you get past that "not my fault" attitude, treat it as an opportunity, and then you get to the science and engineering aspects, which are more interesting.

~ ~ ~ ~

I probably started seriously in the ‘90s when I got interested in photosynthesis. We published some papers together with Penny Chisholm, a professor at MIT, on the most abundant photosynthetic organism on the planet. There are on the order of 10^23 of these organisms on the planet. It’s a mind-boggling number of them. Nobody knew about them before Penny started working on them.

People would look at the ocean and see something that looks pretty close to sterile. Occasionally, a fish would float by. If you scoop up the ocean, you’ll find diatoms and a few heterotrophs, various kinds of phytoplankton and zooplankton, things that whales might filter feed. It’s not that dense, not like what you would get from a scum-covered pond or the excrement of animals, which is solid bacteria. There are more bacterial cells in just two kilograms of your intestines than in the whole rest of your body.

In the ocean, even though they’re dilute, there are more of these photosynthetic organisms than the rest. The reason people missed them is because they’re so small. They’re even smaller than the bacteria that people notice. They just look like little blips in the microscope, like they might be a mistake or a little speck of something that isn’t real. What Penny noticed was that they were highly fluorescent. And she eventually figured out how to culture them, even though they don’t culture very well.

I used to think of the ocean as a harsh and dynamic environment, and anything that survived there had to be versatile and able to handle all the differences of storms and sunlight, but these are fussy, fragile creatures, even though they’re one of the most abundant creatures on the planet. If you get the iron concentration or the copper concentration just a little bit off—too much or too little—they say, "I’m out of here. I’m dead. Forget about it." These cyanobacteria are very fastidious for the most part, and they might be part of the solution.

How does this relate to my genome work? I met Penny because I was a technology developer. Most of my interesting collaborations came because I had a technology, and people would either seek me out so they could use the technology or I would seek them out. I’ve had Department of Energy funding ever since I started my laboratory in 1986, ’87. In fact, it’s the only grant that I’ve had continuously since 1987. I was developing technology for the Human Genome Project, and then I thought, wow, this is the Department of Energy, we should be taking this technology developed for the Human Genome Project, which really isn’t in the purview of the Department of Energy. They have a health effects component, but microorganisms that impact energy were much more in their realm.

The biggest energy creators in the world, the ones that take solar energy and turn it into a form that’s useful to humans, are these photosynthetic organisms. The cyanobacteria fix [carbon via] light as well as or better than land plants. Under ideal circumstances, they can be maybe seven to ten times more productive per photon.

I sought out Penny because I had this genome hammer. I had a set of tools for analyzing genomes, transcriptomes, and proteomes, and I thought we should apply it to something that would be of benefit to the Department of Energy. Penny and I collaborated until there was a reorganization of the Department of Energy and we were encouraged to get separate grants. It left a lasting impression on me because she came from a very different culture than I did. She was technically in the Department of Civil Engineering at MIT, and I’m clearly not a civil engineer. Moreover, she was an ecologist. She loves the oceans and these photosynthetic bacteria. This was her life, so I learned to appreciate it from her viewpoint, which was great. I’m always a technologist the same way she’s always a biologist, so I kept thinking about how we could use these cyanobacteria to solve energy problems.

One of the companies I started was called Joule. Like most of the biofuel companies, I knew it was going to be a difficult problem not just scientifically but also from a social and economic standpoint. At any moment, the price per barrel could be engineered downward. It’s just trivial for OPEC or anybody else in that business to temporarily drop the price to the point where it destabilizes all the competing technologies and then drive it back up again at their leisure.

It struck me as an important and interesting problem, so I wanted to become more conversant with it. Both at Joule and LS9 we discovered ways that you could turn carbon dioxide or other carbon compounds into alkanes, the things that make up gasoline and diesel fuel. They’re chains of carbons with hydrogens coming off—hydrocarbons—and we figured out how to make those enzymatically. We found the enzymes that occur in nature, which was not obvious, and both companies have patents on making those alkanes. The survival of those kinds of biofuel companies has depended on creating more valuable chemicals rather than solving the energy problem, unfortunately. Some of them have survived, like Amyris, which has thrived by pivoting from biofuels to flavors, fragrances, and other high-value biochemicals.

It’s not in my main stream of technology development or even in applications of our technology, but we do have modest efforts that at a minimum help raise consciousness. Sometimes it takes a relatively small suggestion or demonstration to make a big change, and certainly that’s happened to us in the past. A small contribution to DNA sequencing, or DNA editing, or DNA synthesis has caused million-fold changes in the economics of those fields. The one we’re most known for, which is a tiny project in the lab but has gotten a disproportionate share of attention, is getting mammoths, cold-resistant elephants, to stomp around and change the temperature of the soil in the Arctic. There’s so much carbon in it that you don’t want the soil to rise in temperature. There’s experimental evidence that that sort of activity could reduce the soil temperature. But the fact is that even if that were spectacularly successful, it is just another one of the many methods of slowing down the inevitable. In other words, if you lower the temperature of the Arctic tundra, you don’t reverse the carbon problem. You’re missing the opportunity of improving or intentionally changing the amount of carbon and other global warming gases. It’s like solar panels and lowering your consumption, things like that.

[Arctic grass and] cyanobacteria, on the other hand, they fix [carbon]. Cyanobacteria turn carbon dioxide, a global warming gas, into carbohydrates and other carbon-containing polymers, which sequester the carbon so that they're no longer global warming gases. They turn it into their own bodies. They do this on such a big scale that about 15 percent of the carbon dioxide in the atmosphere is fixed every year by these cyanobacteria, which is roughly the amount that we’re off from the pre-industrial era. If all of the material that they fix didn’t turn back into carbon dioxide, we’d have solved the global warming problem in a year or two. The reality, however, is that almost as soon as they divide and make baby bacteria, phages break them open, spilling their guts, and they start turning into carbon dioxide. Then all the other things around them start chomping on the bits left over from the phages.
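That "year or two" figure is easy to check on the back of an envelope. The ppm values below are round assumptions of mine (roughly today's and pre-industrial CO2 concentrations), not numbers from the text:

```python
# Back-of-envelope: if photosynthetic microbes fix ~15% of atmospheric CO2
# per year, how long would permanent sequestration take to erase the
# industrial-era excess?
# Assumed round figures: ~410 ppm CO2 today, ~280 ppm pre-industrial.
current_ppm = 410.0
preindustrial_ppm = 280.0
fraction_fixed_per_year = 0.15

excess_ppm = current_ppm - preindustrial_ppm                 # ~130 ppm excess
fixed_ppm_per_year = fraction_fixed_per_year * current_ppm   # ~61.5 ppm/year

years = excess_ppm / fixed_ppm_per_year
print(f"{years:.1f} years")  # prints "2.1 years"
```

In other words, if none of the fixed carbon leaked back out, the drawdown really would take on the order of two years; the entire problem is the leak.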

If you could make the cyanobacteria resistant to the phages, then you might be able to reduce that. Even a small reduction in that immediate turnover—carbon comes in, gets fixed into carbohydrates, the cell protecting those carbohydrates breaks open, and all the hungry heterotrophs eat it and turn it back into carbon dioxide—would matter; if you could break that cycle even a little bit, you’d start sequestering carbon dioxide.

Where are these bacteria in the real world? They are in all the oceans of the world, all the lakes, all the rivers—every body of water is filled with these bacteria. There are different strains—there’s a freshwater set, a deepwater set, and so forth—but as a class, photosynthetic bacteria are so simple. They’re point-like. They’re not colonies for the most part. The abundant ones that do most of the photosynthesis are rugged individuals. They’re out there by themselves doing their thing, fixing carbon, and they’re at great risk.

The phages are also everywhere. There are about ten times as many phages, which are barely alive. They’re parasites. They’re in every ocean, every lake, every stream, and they’re constantly killing the bacteria.

The most abundant bacteria—cyanobacteria, of which Prochlorococcus is a particularly abundant example—are so small that you can’t see them under a conventional microscope. There are some super-resolution microscopes that allow you to see some detail, and certainly with electron microscopes you can see them, but you can’t see them with the naked eye. If you pick up a glass of ocean water, it doesn’t even look green. There’s not even a bulk measurement that makes it look like they’re there, but they are. The phages that destroy them are even smaller. Those can only be seen by electron microscopes at the best resolution. They’re tiny killing machines that just have DNA or RNA plus a protein coat. They go in and they take over the cell.

The world has slowly come to see what an important part of the ecosystem cyanobacteria are, though nobody discusses them as a serious way of dealing with carbon sequestration. I could be completely off base in saying that we should give this a thorough look, to see if there's a way that we could harness this incredible amount of photosynthesis and use it to turn carbon dioxide and water into carbohydrates and just have that sequestered.

It’s ironic that one of the big efforts in biotechnology has been making biodegradable plastics. That’s been considered a great success story, but what we really want to do is make non-degradable or not easily degradable plastics out of the carbon dioxide. We want to sequester carbon dioxide into things that don’t turn over, and the problem with the cyanobacteria is they do turn over. While they’re intact, they’re in pretty good shape. They’re good at fixing the carbon; they’re just not good at keeping it fixed. If we could change that, then the amount of photosynthetic capability is vast, and we could quickly start pulling carbon dioxide out of the air.

There are two options that tie into my research, in addition to the older research with Penny where we had elucidated many of the characteristics of these organisms. My newer research is about making bacteria that are resistant to all phages. We’ve done a demonstration project in E. coli where we’ve changed the genetic code. It’s one of the largest genome engineering projects where we had to change hundreds of genes in a particular way. This is not just arbitrarily making a copy of a genome; this is engineering hundreds of genes so that you could change the genetic code, where you could remove from the cell something that phages depend on. In fact, the cell depends on it. Normally, you couldn’t remove it from the cell because it was an essential gene, but by moving around the genetic code, it becomes possible to delete the thing that was previously essential. (In the codon table there are 64 codons for 21 functions—20 amino acids plus stop—so there are extra, redundant codons.) Now when the phage comes in, it can’t set up shop. It can’t do anything because it’s missing something that is absolutely essential for it. It used to be essential for the host, but the host has been taken to the shop, where that dependency was made nonessential and deleted; the phage wasn’t present during that transaction, so it’s still dependent on it and can’t grow. Not only can’t it grow, it’s so messed up by this change that it can’t even evolve.
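In the published E. coli work, the recoding replaced the UAG (amber) stop codon with the synonymous UAA genome-wide, which then let the team delete release factor RF1, the protein that terminates translation at UAG. Here is a toy sketch of that one substitution, using made-up sequences for illustration:

```python
# Toy illustration of codon reassignment: swap every TAG stop codon (DNA form
# of the UAG amber stop) for the synonymous TAA, freeing the cell to delete
# the machinery that reads TAG. The gene sequences below are invented.

def recode_gene(dna: str) -> str:
    """Replace TAG codons with TAA, scanning in the reading frame."""
    codons = [dna[i:i + 3] for i in range(0, len(dna), 3)]
    recoded = ["TAA" if codon == "TAG" else codon for codon in codons]
    return "".join(recoded)

toy_genes = {
    "geneA": "ATGGCTTAG",      # ends in TAG (amber stop)
    "geneB": "ATGAAACCGTAA",   # already ends in TAA, unchanged
}

recoded = {name: recode_gene(seq) for name, seq in toy_genes.items()}
print(recoded["geneA"])  # prints "ATGGCTTAA"

# Once no host gene uses TAG, the factor that terminates translation at TAG
# (RF1 in E. coli) is nonessential and can be deleted. A phage genome still
# littered with TAG stops then can't complete translation in that host.
```

The real project touched hundreds of sites at once rather than one toy gene, but the logic is the same: exploit the redundancy of the 64-codon table to free up a codon, then delete what reads it.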

There’s a constant warfare in which the bacteria become a little resistant to the phage, the phage becomes resistant to the resistance, and you keep playing this little cat-and-mouse game. In this case, you’ve changed it so radically that the phage can’t evolve. Most of this is a pretty strong theoretical argument. Some of it is backed up by experiments, since we've made one genome which has a changed genetic code.

How does this go from the lab into the real world? The organism we’ve done the largest genome engineering project on so far is E. coli. It’s an industrial microorganism, and it has a phage problem. The hope is that when we perfect the strain which is resistant to all viruses, all E. coli bacterial phages, that will then become the favorite E. coli to use in industry.

There’s a bunch of things we have to do to make sure that it manufactures the things that industry cares about in addition to being phage-resistant. If it were just phage-resistant and didn’t manufacture well, they’re not going to take it. That’ll be the first product that we engineer and release into the real world that is resistant to all phages. Industry won’t have to worry quite so much about phages, certainly not about phages they’ve never seen before. This is a profound thought, that you can be resistant to viruses and phages you’ve never seen before. You don’t have to make a specific drug, or a specific vaccine, or a specific strategy for that particular virus.

We then need to do it again, but on new bacteria. And maybe we start with other bacteria that have industrial and agricultural significance, like bacteria in the dairy industry—yogurt and cheese, for example, have a big phage problem. Then you might want to do it on these cyanobacteria, these photosynthetic organisms, and show that you can make them resistant to phages, and determine whether that increases their productivity, whether they now fix a lot of the carbon and the carbon stays fixed.

What’s interesting is this doesn’t necessarily involve vast amounts of money, but it could save vast amounts of money. Like many things in early-stage science and engineering, it’s highly leveraged. A hundred thousand dollars or a couple million dollars can get you a breakthrough that is transformative and self-fueling from that point on. Once you’ve made the breakthrough that makes it obvious, then there’s plenty of money and plenty of will to scale it up.

Even though this might make sense from an existential risk standpoint, from a society-wide standpoint, it could be a tragedy of the commons, where we can’t figure out how individuals, companies, or countries can benefit. We can see how it would be of great benefit especially if we have an exponential increase in temperature and release of global warming gases.

I’m always looking for the win-win, something where you don’t necessarily have to make a lot of sacrifices or crazy investments. Biofuels were one strategy. We knew there wasn’t necessarily an easy path to get companies interested in engineering these organisms. Another one might be carbon credits, to the extent that those are accepted. This would be a good way of achieving carbon credits.

The final and the most interesting way is if you take the carbohydrates that these organisms make and turn them into something valuable. In other words, something that is non-biodegradable and valuable. So, you take advantage of the fact that these things are self-assembling, that biology can build very large structures with very little management. If you want to have a forest, you just don’t interfere with it and you’ll have a forest. The ecology might go one way or another, but it will fill the land with photosynthetic organisms unless you interfere with it on a regular basis.

The same thing could happen in the ocean with a little nudge. This is very whimsical and intentionally playful, but you could have these things fix the carbon into a structure that was valuable: a bridge across the Atlantic and Pacific oceans, or floating cities. You could make things that have intrinsic value and, because of their self-assembly nature, don’t cost a lot of money. They can create as many jobs as you want to create, but they don’t have to.

You could conceivably imagine a scenario where the bacteria are doing all the work, just as if a volcano or a forest fire clears the land, you don’t have to do that much; it’s going to create a forest there. Here, you’d have to nudge it in some kind of engineered way, but the engineering and the science behind it isn’t even in the same league of cost as most of industry and all the money that is at risk due to temperature change.

~ ~ ~ ~

Any book that is about or adjacent to an existential risk needs to, or ideally would, raise consciousness about the existential risk. The goal might be to calm people down or rile people up, but the book should either come to a conclusion or argue why it’s premature to come to a conclusion about whether this is a sufficiently big existential risk to apply resources.

Technology developers, in particular, ought to at least think about existential risk. Some of our time should be set aside for talking with the public; some for ethical standards, safety, and security; and part of it should go toward big existential risk questions, even if they have a low probability of happening during our lifetime or our kids'. It seems like almost everything that I get close to and care about is exponential. These exponentials in computers, electronics, DNA reading and writing are all things where I’m seeing million-fold changes in less than a decade. I can imagine that happening with some of these other things. An asteroid could come out of nowhere and we would have very little time to react. Global warming could go exponential and we would have relatively little time to react.

In some cases it’s better to overreact to an imaginary problem or a problem that could materialize than to underreact and not have enough time, because you start making bad decisions when you don’t have enough time. Take, for example, Y2K: We don’t know whether that was a big crisis or not. It turned out to be a fizzle in the end, but was it a fizzle because we reacted to it in advance? In fact, many of the things where safety engineers and security experts are most effective, they get the least credit for, because they were so effective that no one ever knew it was a problem. The reason that we push back on existential risks where we have unknown or low probabilities is that it’s going to be inconvenient, it’s going to make us have to sacrifice. Even if we don't know the probability, when we know the consequences are huge, we should come up with a clever solution that doesn't require us to give up our sacred cows.

That’s easy to say and harder to pull off, but I’ve seen many cases of win-win in my life. Rather than having the gut-wrenching decision of whether we should sequence your genome or mine at $3 billion, we just said, "Let’s bring it down to $1,000 and then we can do both of them." That’s a win-win. We didn’t have to have a big national debate as to who gets sequenced and who doesn’t. We didn’t have to have death panels decide who gets to benefit from this new technology or who doesn’t.

The main risk in AI, to my mind, is not so much whether we can mathematically understand what they’re thinking; it’s whether we’re capable of teaching them ethical behavior. We’re barely capable of teaching each other ethical behavior. We’re barely capable of agreeing on what it is. But over long periods of time and over large numbers of cultures, we tend to agree on enough that things might even be improving. They’re acceptable enough so that our population is growing. Our middle class and even maybe our upper class is growing in numbers, and if Steven Pinker and others are correct about violence decreasing, maybe our ethics are good enough. But maybe we bring in a wildcard: Let’s say we genetically engineer an octopus to be brilliant and able to manipulate bombs and rockets. We have no idea how an octopus thinks. We have no idea why its ethics would align with ours at all.

If we teach a dog, we’ve got a slightly better chance because they're a little bit more aligned with humans. But if we teach a piece of silicon, we don’t know that it's going to follow our rules. You can say, "Well, they won’t necessarily be doing human tasks. They’ll be calculating big sums of numbers and doing statistics. They’ll be doing advertising." Wait a minute, they are doing advertising that’s starting to influence human life. "They’ll be guiding drones." Well, that could affect a human being. "They’ll be manipulating markets." They’re starting to do things that could really impact our lives. They’re not just doing boring mechanical tasks. We better make sure that their ethics are somewhat aligned with ours.

We haven’t reduced our own ethics to a consensus or mathematics, so it’s all done by gut feeling. No matter how much rationalization we wrap around it, it’s mostly gut feeling. It’s a combination of instincts and deep culture, admittedly multicultural, but there’s something that we have in common that pushes us in the right direction. I am doubtful that we can guarantee or even come close. We never let guarantees interfere with technology. We always take the technology. We’re very greedy. I’m pretty sure that there will be general consensus that we can’t teach these things ethics. The alternative is teaching ourselves how to be as clever as the machines. Rather than trying to teach them ethics, let's teach ourselves to do the task that they’re currently doing. It's not so farfetched.

Things like Jeopardy, Go, or Chess aren’t tasks that we need to do. They were always activities that give you bragging rights. Except for game playing as an end in itself, our ancestors did not depend on being able to win those games. They were representative of intellectual skills that would be beneficial, like the ability to be a good businessperson. The point is, in order for a computer to win at those games, it has to use 100,000 watts of power continuously while a human brain is using 20 watts. Admittedly, the body it’s in is using another 80 watts, and maybe that body has creature comforts that require more watts, but the fact is we’re very energy-efficient for doing this. Humans are also doing a lot more than losing games of Chess, Go, and Jeopardy; we’re worrying about our family, about our careers, and about existential risk. We’re doing all kinds of things that computers can’t yet do. The thing is we’re ahead, and biotechnology is going faster than computer technology.

Corporations are like machines, and yet they behave like people. They have some of the rights of citizens. They’ve been granted certain rights, but they’re also like machines in that we don’t know whether the ethics of the companies will perfectly coincide with people’s. The difference is that those are machines made out of people, so there is some hope that they might more naturally align their ethics with the population. Even when people talk about evil corporations, what they mean is that the customers that are supporting the corporations want things that are not in their own best interests. So, if I bought cigarettes, I would be supporting a company that’s giving me cancer, but I’m the one that’s doing it. Maybe they did an ad campaign that sucked me in, and that may make them evil, but I ultimately paid for that ad campaign.

Corporations can have ethical problems because they’re composed in a way that’s not culturally and biologically aligned perfectly with the people that make up the corporation. That could happen with the big five AI-related companies, or it could be that they are perfectly aligned with the population, hypothetically, in terms of ethics, but they don’t know how to teach the machines ethics.

There might even be a consensus that’s derivable about the importance of value alignment. So many people that think about it at all end up in that direction. It may be something we could agree on as a goal, but very few of us would have a clue as to how to achieve that goal. We could get some agreement that it would be nice to eliminate poverty, diseases of poverty, diseases in general, but we don’t know how. We can’t write a recipe for that, and the same thing goes for value alignment. We can agree that it’s a cool thing, but how do you do that? How do you convince yourself you’ve done it?

We are definitely living in exponential times, where many of these things are reinforcing each other. We’re using deep machine learning to accelerate our biological research. We might soon be using the biological research to accelerate the production of better algorithms. We have a grant from IARPA that’s aimed at improving visual deep machine learning by figuring out exactly how a rodent processes visual information in its visual cortex. That could result in much better algorithms.

It could be that some of the brain initiative projects allow us to build human brains that are more consistent with our ethics and capable of doing advanced tasks like artificial intelligence. Artificial intelligence has the connotation of silicon-based, so you’d have to give it a new name—superintelligence or human-based intelligence—to distinguish it both from artificial intelligence and human intelligence. The safest path by far is getting humans to do all the tasks that they would like to delegate to machines, but we’re not obviously on that super safe path.

There’s this confluence of technologies where we have autocatalytic cycles, a particular technology that feeds on itself. I can use biotechnology to find new nanomachines in the wild and turn them into new biotechnologies. We can use those biotechnologies to engineer new biotechnologies, and you get a tight loop. Or we can have a bigger loop where the artificial intelligence will help us build biotechnologies that will help us build artificial intelligence. The point is all these things are autocatalytic in that the more of them we have, the more we get, and it just goes exponential. That can buy us all kinds of medium to short-term benefits. Because it’s going so fast now, it’s not unusual to be able to get breakthroughs that may allow us to conquer malaria and Lyme disease by engineering the animal vectors of those, and to cure our transplantation crisis by engineering pigs to be humanized enough so that they can be organ donors. That may have its own little autocatalytic loop because it’s hard to debug enhancement in human beings. You can enhance the pig organs so that they’re resistant to viruses, resistant to senescence, resistant to cancer, maybe cryopreservable, and you can work out this whole preventative medicine, this whole enhancement medicine in pigs that are headed for desperately ill people. The ethics are well-aligned. You want to give a desperately ill person the best organ you can, including enhancement.

That’s a whole other possible loop that would result in enhancement of human beings that may save us from enhancements of non-human beings. That’s very important. All these loops that make us more intelligent, possibly more ethical, may also help us see opportunities that are staring us in the face. Whether or not you want to deny our involvement in climate change, there’s an opportunity there for getting the climate to be what we want it to be, where we want it to be. Just like we wouldn’t miss an opportunity to deflect an asteroid or improve crop productivity, we’re probably not going to miss the opportunity to make the climate what we want it to be. There may not be immediate alignment on that, but there’s a remarkable level of alignment, even when you play the blame game and the belt-tightening game that came out of the accords that the United States is not currently a signatory to—it was a signatory, and it may not be one again in the near future. There's enough consensus there, and there would probably be even more if it were a win-win that didn’t require sacrifice.

All these things come together in a time of exponential change. It’s not necessarily some panacea that’s full of abundance where you don’t have to think and it’s easy, but there are some win-wins to be had if we think about it deeply and talk about it as if science were a real thing rather than something that’s inconvenient.