
0:00:00 Sean Carroll: Hello everybody, and welcome to The Mindscape Podcast. I’m your host, Sean Carroll, and today we’re thinking about the future. Implicitly, of course, we think about the future a lot on this podcast and elsewhere, but today we’re being a bit more upfront about it. There are people who describe themselves, and are described by the outside world, as professional futurists, whose job it is to predict what will happen down the road. But there’s this other genre, which is also very successful, called science fiction: writing fictional narratives that are often set in the future, not necessarily trying to predict exactly what will happen, but at least to imagine different possible futures.

0:00:39 SC: This helps us think about how we should approach the future, as well as how we should approach the present. When I was a kid, I was a big science fiction fan, mostly reading novels but also watching TV and movies. But I can tell you from reading more recent science fiction that the level of sophistication has gone way up. Both the literary quality and the scientific quality of modern science fiction are as high as they’ve ever been. In fact, on today’s podcast, I put forward the hypothesis that modern science fiction writers are in some sense the last great generalists, because not only do you have to understand a lot about science, you also have to understand a lot about humanity, so you need to understand sociology and psychology, and you need to be able to write; you need to be able to tell a good story, to evoke a vivid world, to invent interesting and colorful characters.

0:01:29 SC: So, our guest today, Annalee Newitz, is absolutely one of those generalists, someone who can think in interesting ways about a wide variety of things. Annalee got her PhD in English and American Studies from Berkeley, but then she became a writer, specializing in science and technology. She was the founder of the famous blog io9. She was then the editor-in-chief of Gizmodo, and she’s right now the editor-at-large at Ars Technica, where she has a very wide variety of experience writing about both individual technological breakthroughs and the background science behind those breakthroughs. Recently, Annalee has decided to turn that experience to writing science fiction. She and science fiction writer Charlie Jane Anders co-host their own podcast, which recently started; it’s called Our Opinions Are Correct, and on it Charlie Jane and Annalee discuss the meaning of science fiction. I encourage you to check that out. And in her recent novel, “Autonomous”, Annalee deals with the biochemistry of pharmaceuticals, the ethics of robotics and artificial intelligence, and my favorite, the economics of what it means to have a right to work.

0:02:37 SC: She talks about slavery and indenture and people trying to work in different parts of the world. We take for granted the idea that we’re allowed to work but maybe in the future that won’t be the case. This is the kind of speculative scenario that science fiction is perfectly made for. So on the podcast we’ll talk about science, technology, science fiction, the difference between writing fiction and nonfiction and how we should think about what the future has in store. So, let’s go.

[music]

0:03:20 SC: Annalee Newitz, welcome to The Mindscape Podcast.

0:03:22 Annalee Newitz: Yeah, thanks for having me.

0:03:23 SC: So I was reading an article in Slate from a few months ago by you, and if I could paraphrase the lesson, it would be that science fiction needs more economics in it. Is that accurate and do you actually believe that?

0:03:40 AN: I actually do believe that. I thought I was very clever for coining the phrase “dismal science fiction”, which I probably didn’t really coin, but it felt like I did at the time, to sort of describe this cluster of issues that shows up in some science fiction novels, and also some fantasy as well, where we kind of see characters grappling with the economic dimensions of what’s going on, whether that’s space travel and colonizing other worlds, or whether that’s getting a grant to study something, which we almost never see in science fiction. And it consumes the lives of scientists. It’s actually a big part of how you do science.

0:04:23 SC: It certainly is. Just for the audience out there, I read Annalee’s wonderful debut novel “Autonomous”, and yeah, there’s a lot more about being the principal investigator in that novel, and applying for grants, than I’ve read in almost any other novel that I can remember.

0:04:39 AN: So I’ve gotten a lot of appreciative tweets from graduate students, saying “ah at last, our labor has been acknowledged.” Yeah, I think that one of the things we talk about a lot as science fiction writers is world building. And how do you make an imaginary world feel lived in, and feel real so that your reader… You know, partly so your reader has a fun experience, but also so that it actually inspires people to really rethink reality, which is ultimately what we’re writing about, since we don’t actually travel into the future and report on it. So…

0:05:20 AN: And I think that, especially right now, economics is becoming a really important part of how we think about science, how we deal with each other, what’s shaping the future. And so, some writers are dealing with this more and more, and economists are really interested in building scenarios about the future, too. So I feel like there’s a lot of good opportunities there for us to be thinking about science in context, if that makes sense, sort of thinking about not just, “Hey, we discovered a new particle”, but how did that happen? How did this dude get to be the one who discovered this particle, out of all the people who were working on the project? How did this project get the funding that allowed them to discover the particle? Why did this particle accelerator lose its funding? Why did this one, even though it has lots of problems, get to keep its funding? There’s all these weird questions that sound really wonky, but they’re actually part of making a really exciting story, because part of what’s fun is seeing people struggle to get what they want, and that struggle isn’t just shooting particles down an accelerator, although it would be cool if it were.

0:06:40 SC: That would be cool, but you’re right, that’s not exactly what it is. I have to ask before I forget. Have you read the famous Paul Krugman article about the economics of faster than light travel?

0:06:51 AN: I have, yeah, and that was part of a huge debate within the science fiction community. People like Charles Stross were proposing for a while that we have this kind of ultra-realistic science fiction, “mundane science fiction”, basically, where we were just not gonna have faster than light travel because it was just absurd. And so, a few people signed on for that, and Charlie Stross wound up getting around it by having teleportation gates in his work, and I was like, [laughter] “Oh okay, so we’re not gonna have faster than light travel ’cause that’s just silly, but teleportation gates, no prob.”

[laughter]

0:07:29 SC: Well, we have standards for ourselves and standards for other people. That’s okay, but…

0:07:33 AN: Yeah, exactly.

0:07:34 SC: But I love this idea about making it realistic and making the world that you’re creating realistic both in an economic sense, as well as a technological sense, ’cause after all, economics is an enormous influence on how we live our lives at a very basic level. So it led me to the following conjecture. Are science fiction writers the last great generalists because of all the stuff you have to know?

0:08:00 AN: Wow, that’s a really interesting question. I mean, I think it’s really gonna depend on the science fiction writer and also how they view their relationship to science, because there are definitely science fiction writers out there, maybe people like David Brin, for example, and a lot of writers who’ve now passed on who were really popular in the ’50s and ’60s, who really thought of themselves as what we would probably today call a foresight analyst or a futurist. I think they really did not feel like they were generalists who were commenting on the world today, but that they were actually predicting things that would happen based on what they knew of the world. A kind of well-informed prediction. And now, yeah, some science fiction writers for sure are generalists. One of the big breakout new voices in science fiction is Ann Leckie, who won every award in the universe for her “Ancillary Justice” series.

0:09:08 SC: Yeah, I love those.

0:09:08 AN: It starts with [0:09:08] ____. Yeah, they’re fantastic. And I mean, she deals with everything. She deals with economics, she deals with space travel, she deals with AI, she deals with archaeology. And I know from talking to her that she is a voracious reader, and she’ll read everything from medieval Chinese novels to the latest articles about discoveries in science, so I think she may be an outlier.

0:09:42 SC: There will always be a spectrum, right? Not every science fiction writer cares that much about doing the research and thinking about these things at an academic level. But it seems like if you want to do a certain kind of writing, not only do you have to catch up on humanity and economics and sociology and political science and technology and physics and biology, but you have to be able to tell a story. So the humanities are in there too. So it does strike me that the best-equipped science fiction writers are true generalists in the old-school sense of the word.

0:10:12 AN: Yeah, I think that’s right. And it’s funny, ’cause the novel that I’m writing right now, which won’t be out for a year or so, sorry that I’m kind of teasing it a tiny bit, but like Autonomous, it has a lot of academics in it. Scientists, but also social scientists. And it’s set in an alternate timeline where there is a need for science and social science to work hand in hand and basically be in the same academic department. So this is a very kind of narrow fantasy of an alternate history…

0:10:48 SC: Completely unrealistic. Yeah, I mean… [0:10:49] ____ had some teleportation machines in there. It’s much more believable.

0:10:51 AN: I know. There are time machines. So that’s sort of what has to happen, right? You have to have time machines in order to have an academic department that’s completely integrated social science and science.

0:11:02 SC: Right. [laughter]

0:11:03 AN: And physical science, ’cause it’s geology. And so part of what I’ve been thinking about a lot in this book, is where does Social Science fit into the scientific project? And I promise that the book is actually just mostly people running around in time machines and fighting and stuff, but there’s underlying…

0:11:22 SC: It’s mostly footnotes. It’s like one of these law review articles that has three lines of text at the top and then a hundred lines of footnotes and small [0:11:29] ____.

0:11:29 AN: Yeah, there will actually be… [laughter] There will be an afterword with some end notes. But like I said, you won’t need to go into the novel being like, “I have some thoughts about humanities and the sciences”, but some of the characters are social scientists and they’re integral to this scientific project. And I was thinking about exactly what you’re saying about storytelling, and how what the humanities and social sciences bring to science is a sense of history, and also an ability to analyze social data, which is increasingly important, especially in the realm of technology, but also in areas like infrastructure design that really do hit up against hard sciences or hard engineering. You know, we actually need to understand how humans act over time and what humans have done at various pressure points, or what they do when they get into a big crowd versus what they do when they disperse. And there are patterns. It’s not perfectly reproducible the way an ideal scientific experiment would be. And that’s very frustrating, because if only we always just acted the same way all the time, we could fix a lot of problems with humanity.

0:12:51 SC: That’s why it’s better to become a physicist than a sociologist. It’s way easier to study atoms, than people.

0:12:57 AN: Yeah, except when there’s all these… There’s dark matter and magical particles. [laughter]

0:13:08 SC: It’s very simple, really.

0:13:10 AN: No.

[laughter]

0:13:11 SC: Yeah, it really is. Compared to any person.

0:13:13 AN: Is it simple because you just put dark in front of it? And then we use, is it dark matter, is it dark energy, whatever.

0:13:21 SC: No, you can literally write down on one page a model that fits all of the data, right? Can you even imagine doing that with people? No, it’s just not… Of course, we don’t know exactly what the dark matter is. It could be quite complicated, but in terms of explaining what you see out there in galaxies in the universe, the physics you need to imagine the dark matter has is actually really simple. So I’m very happy in telling my students that people with short attention spans should go into physics rather than the social sciences, because you can simplify things way down and the thing still works. That’s the miracle.

0:13:55 AN: Yeah, I mean, I do think there are efforts to do that with the social sciences. I think game theory is a great example of social science trying to reduce all of human interaction to a few basic laws. And what’s wrong with game theory is that it only works sometimes, in some situations, if you’re literally playing a game. Or actually, I love that in the new movie Crazy Rich Asians, game theory becomes an integral part of how to have a romance, which in fact is written into the history of game theory in a way. So anyway, it works in some cases, but it certainly doesn’t explain all the ambiguity.

0:14:39 SC: Well, to be fair, I think game theory is just right in terms of its mathematical demonstrations of things, but the question, as always with math, is: do the assumptions behind these theorems that you’ve proven match onto the situation that you care about in reality? And that’s where game theory or calculus or algebra or anything can leave you wrong. Certainly extrapolation and regression can leave you terribly, terribly wrong.
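The gap between game theory’s clean proofs and messy reality can be seen in the classic prisoner’s dilemma. The math unambiguously picks out mutual defection as the only Nash equilibrium, even though mutual cooperation pays both players more; a minimal sketch (standard textbook payoff values, chosen here for illustration):

```python
# Payoffs (row player, column player) for the classic prisoner's dilemma.
# C = cooperate, D = defect.
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}
moves = ["C", "D"]

def is_nash(a, b):
    # (a, b) is a Nash equilibrium if neither player can gain by
    # unilaterally switching their own move.
    row_ok = all(payoffs[(a, b)][0] >= payoffs[(x, b)][0] for x in moves)
    col_ok = all(payoffs[(a, b)][1] >= payoffs[(a, y)][1] for y in moves)
    return row_ok and col_ok

equilibria = [(a, b) for a in moves for b in moves if is_nash(a, b)]
print(equilibria)  # [('D', 'D')] -- both defect, though (C, C) pays each player more
```

Whether real people in a real situation actually face these payoffs, and actually play best responses, is exactly the kind of assumption being questioned here.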

0:15:03 AN: That’s right, yeah. And a lot of that does end up coming down to, as you’re saying, the fact that individual humans sometimes react in ways that we don’t expect. Crowds of humans sometimes react in ways that we don’t expect. And you can say, okay, we see these general rules. If we look at the long arc of recorded history in the West, we can say, “Alright, we see general patterns emerging in terms of how people group together, and what kinds of political structures they seem to enjoy”, but there’s a huge variety of those political structures, there’s a huge variety of ways that we’ve gotten together. So it’s hard. It’s so tempting to be like, “Alright, I’m gonna come up with the universal theory of human relationships or of human development.” And everybody’s tried it. Karl Marx tried to do it in the 19th century, and it doesn’t ever completely work.

0:16:04 SC: Well, I think we even know why it doesn’t work out, right? The famous example is Isaac Asimov and Psychohistory. I think it was pretty explicit in the novels: he had this analogy that when you have a bunch of atoms in a box of gas, you can extrapolate general rules for their behavior as a fluid, thermodynamics and statistical mechanics. So of course, when you get many people, you can do the same thing. But as a physicist, there’s an obvious flaw in that reasoning, which is that, in the case of atoms, they just bump into each other and respond in a very linear way. Small things remain small or even get smoothed out. Wild fluctuations get cancelled out by all the other fluctuations. Whereas in a group of human beings, who interact very strongly and nonlinearly, tiny fluctuations can be amplified very, very strongly.

0:16:56 SC: It is absolutely possible from a physics point of view to understand why groups of people can be influenced by individuals that have a strong impact on the others.
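The atoms-versus-crowds contrast can be shown with a toy simulation; the update rules below are invented for illustration, not drawn from any real physical or social model. Under a linear, averaging interaction a small bias stays small, while a nonlinear, self-reinforcing interaction amplifies the same tiny bias until it takes over the whole group:

```python
def linear_step(xs):
    # Gas-atom-style interaction: each value relaxes toward the group mean,
    # so spread gets smoothed out and the mean is preserved.
    m = sum(xs) / len(xs)
    return [x + 0.5 * (m - x) for x in xs]

def nonlinear_step(xs):
    # Crowd-style interaction: everyone over-reacts to the group mean,
    # so a tiny initial bias feeds back on itself and grows (clipped to [-1, 1]).
    m = sum(xs) / len(xs)
    return [max(-1.0, min(1.0, x + 0.5 * m)) for x in xs]

# 99 neutral agents plus one slightly "loud" individual.
start = [0.1] + [0.0] * 99

lin, non = list(start), list(start)
for _ in range(50):
    lin = linear_step(lin)
    non = nonlinear_step(non)

mean_lin = sum(lin) / len(lin)
mean_non = sum(non) / len(non)
print(mean_lin)  # stays at the tiny initial bias (~0.001)
print(mean_non)  # the whole group has saturated near 1.0
```

This is the flaw in the Psychohistory analogy in miniature: the same initial condition, run through linear versus nonlinear dynamics, gives wildly different collective outcomes.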

0:17:06 AN: Yeah, I’m reading a novel right now called “An Unkindness of Ghosts” by Rivers Solomon, which just came out. It’s really great, it’s a generation ship novel and so it’s set in the future because we don’t have generation ships yet and I…

0:17:20 SC: That’s what they’ll have you believe.

[laughter]

0:17:21 AN: Yeah, I’m sure that Harp is working on it. But one of the things that’s great in Solomon’s novel is that she deals with the way that this generation ship has recreated plantation life. And so they have slaves and servants who are living in the lower decks and doing all the scut work, and then they have the plantation owners, and it does break down along familiar racial lines in this book where there’s sort of lighter skinned people on top, darker skinned people on the bottom. And this is a great example of that… Of an author trying to play with what you’re talking about where a tiny or a large fluctuation continues to echo over centuries. So there’s been this incredible historical disturbance of the slavery of Africans in the United States. And it doesn’t just go away, it doesn’t… The kind of ripples that have propagated out from that historical trauma just don’t… You would hope, you would wish that they would just kind of bump into other things and that those ripples would bump into the Obama presidency and be like, “A-ha everything’s fixed”, but it isn’t fixed. And in fact, sometimes the effects get bigger over time, or they return in new ways that are just as pernicious.

0:18:41 AN: And so that’s the fun part of being a writer. It’s the sad part of history, but the fun part of being a writer is thinking about how there are these social effects that can have very unexpected consequences, and also that they can continue. To me, writing about the future, that’s one of the pleasures of it is, I think, for some science fiction writers, the pleasure is coming up with all the new shit, like, “Oh my God, we’re gonna have faster than light travel or we’re gonna implant other personalities inside our heads.” And to me, I was really interested in how would history come back in some new form, and how would we continue to have some of the same struggle. Which is probably partly why I was excited about talking about economics, because I don’t think in 150 years, we’re gonna have gotten rid of capitalism and we’re not gonna be… Science is not magically gonna pay for itself. And so I really wanted to think about how people in the future would still be, even though there’s many, many things that are different, even though they have hard AI, which is a kind of magical dubious thing, but they also are still trying to get grants and they’re still struggling with social inequality.

0:20:01 SC: Tell us a little bit more about exactly what you imagined economically in Autonomous. Or in fact, tell us about Autonomous just a little bit, so those who have not yet read it, shame on you, [laughter] can follow along.

0:20:10 AN: So Autonomous is about a pharmaceutical pirate. She is a former academic, a synthetic biologist, and wants to bring medicine to people who are poor or just don’t have any money at all, who don’t have access to it. And she’s found that working in academia isn’t allowing her to do that. There’s too much political stuff, there’s too many corporations that wanna hold on to those drug patents. So she becomes a pirate and starts creating versions of these expensive medicines for free or for very cheap to give to people who can’t afford them. And she comes on the radar of the corporation whose drugs she’s ripping off and they send a couple of agents after her, basically to render her because there’s a couple of other things that they’re trying to cover up that she knows about and…

0:21:08 SC: And “render” here being a euphemism for exterminate.

0:21:11 AN: Pretty much. [laughter] The corporation isn’t worried about her health.

0:21:15 SC: It’s not touchy-feely. Yeah.

0:21:18 AN: And so the agents who come after her become main characters in the book as well, and we switch back and forth between Jack the pirate and Paladin the robot, who is part of this team of agents chasing after Jack. And so the world that they’re in is a world where basically the UN has been replaced by a group that I call the International Property Coalition. So human rights have been superseded by property rights, and that’s how people think of human rights, is in the context of property, which makes sense because robots are human-equivalent in this world and they are born indentured to pay off the cost of their manufacture. And eventually through some legal shenanigans that have happened in the past, and there’s a tiny info dump where we learn this in the book, humans also are now subject to what they call the “human rights indenture laws”.

0:22:16 AN: So you have the lucky right that if you need to, you can sell yourself to someone, and they will own you for a set period of time. And of course there’s a ton of abuses in the system. Basically, it’s a story about the future of slavery, and how we reinvent slavery and give it a bunch of euphemisms. And a big part of that is the economic system in one of the large economic coalitions that we see a lot, which is basically North America: Canada and the US have become this kind of free-trade zone, and in order to work or own a house or go to school in the free-trade zone, you have to buy a franchise in a city. When you do that, you pay a certain amount of money to get in, and then you pay every year, and then you get free health care, you get the right to work and live there, you get free schooling, free internet, emergency services; all the things that you would expect to get paid for by the government in our world. Well, not all of those, but many of those…

0:23:23 SC: Not all of those. We dream about them, yeah.

0:23:25 AN: We dream about some of those, but things like having roads and having access to education, we think of as just, well, that’s just what you get. Of course you have roads; that’s part of what you pay taxes for, at a minimum. We also imagine in our world, “Oh well, you should have the right to work anywhere.” And in this future, that right just doesn’t exist anymore. If you wanna work somewhere, you have to pay in. So you…

0:23:55 SC: And there’s this parallelism between human beings, who are allowed to sell themselves into indenture, and the AI robots, which are automatically indentured until they can buy their way out. Am I remembering it correctly?

0:24:08 AN: Yeah. So basically, pretty much all robots when they’re born, and these are of course artificially intelligent robots, so their consciousness isn’t like a human’s, but it is equivalent to a human’s. They have feelings and desires and things like that. So yeah, the law says that they can be indentured for up to ten years to whoever manufactures them, like I said, to pay off the cost of their manufacture. And not all robots survive that period of time, but if they do, then they are given an autonomy key and are allowed to go be productive members of society. And we do get to see a couple of moments where we meet some robots that have gotten out of indenture. And one of the things they love to do is just go shopping. ’Cause they’re basically people.

0:25:01 AN: They’re like, “Now we’re free and so we’ll go shopping at a mall in Vancouver.” So the economy is… It’s a hyper-capitalist economy where a lot of the things that we think of as our rights, which have nothing to do with capitalism, they just have to do with how the United States was set up, those rights have been taken away. Like I said, the right to work wherever you want, the right to live wherever you want. But they have been replaced by new property rights where you can pay to work where you want and pay to live where you want. And so that’s why people get indentured, because if you don’t have enough money to pay to be part of a city or some other community, you’re just gonna die, unless you sell yourself. So those are your choices: Die or become indentured. So…

0:25:49 SC: And it reminds us, we do take things for granted, right? Both explicitly and implicitly in our laws, we have rights that we label. And I was talking to a law professor on a different episode of the podcast, not yet released, but about the fact that there’s a lot of things we think should be rights that no one ever thought to put in the constitution because everyone knows that they’re rights, right?

0:26:14 AN: Mm-hmm.

0:26:16 SC: And we might even have mentioned the right to work as one of them, but that can be taken away, especially if the economic system changes.

0:26:25 AN: It’s very true. And it was because I was talking to an economist as I was coming up, as I was doing world-building on this that I thought about that, because he pointed out that…

0:26:37 SC: This is our friend Noah Smith, I bet. Right?

0:26:39 AN: This is our friend Noah Smith, who is a dedicated dystopian thinker, and even though he’s a pretty cheerful guy…

0:26:47 SC: @Noahpinion on Twitter.

0:26:49 AN: Yes, notorious Twitter personality. And he said exactly what we’re saying. He was like, well, we think that we should just be allowed to live wherever we want. But actually that’s a really amazing right to just have that at birth. A lot of civilizations didn’t have that and don’t have that. So basically, I just had to take away some of these basic rights from my characters, and suddenly it made perfect sense that people would choose to be indentured. And indeed, even in our world now, there’s plenty of jobs that people do that are basically indenture. Well, like being a grad student, for example, [laughter] and I was definitely thinking of that. I spent many, many years as a grad student, and I definitely had a grumpy feeling about my employment situation.

0:27:39 SC: We prefer “apprenticeship” to indentured servitude. But… [laughter]

0:27:42 AN: Which is just another way of saying indenture. And in fact, you’re paying in. That’s the university system: you pay in in order to have the right to work there, at least in my experience. Some people get fancier deals, probably, but the point is…

0:27:58 SC: You were a humanities graduate student. Physics graduate students get paid, it’s true.

0:28:02 AN: Yeah. We got paid a little bit, [laughter] and I was kind of on the border with social science, so maybe… But we, yeah, it certainly was not… In some ways it’s good that it’s not like science ’cause there’s nobody who has to scramble to get money to pay us because…

0:28:19 SC: Because there’s no money.

0:28:20 AN: There’s no money, but the department would try to hand out jobs in a relatively equitable way. So it’s like, “Here’s $1000 for this semester. Everybody gets $1000 for the semester.” Yay! [laughter]

0:28:33 SC: Yay. And how did it work in terms of the mechanics of writing the novel, going back and forth between world-building and storytelling? I presume that the world-building influences the storytelling, but do you have pages and pages of unpublished world-building materials stuck on your hard drive somewhere?

0:28:54 AN: I do have an embarrassingly large amount of world-building material that never made it in. A lot of stuff, I think, has to be just barely painted in, in the background, because readers don’t wanna wade through a giant info dump where it’s like, “As you know, Bob, in this world here’s how indenture works. And so as you move into the cities, this is what’s happening.” So really, the way the book came together for me, and I think the way novels generally come together for me, is it starts with characters, and I start to think about the world that they’re in. And then I have to really focus on the characters to write the book, and I feel like that’s laying the groundwork: “Okay, here’s the characters, here’s what happens to them, here’s how they change or don’t change.” And then I have to come back again at the end and say, “Okay, so what are the rules of the world? I need to smooth this out and make sure everything is consistent. Otherwise, every nerd in the universe is gonna be smacking their face as they read this book, because it’s just completely [laughter] incoherent in terms of how the world works.”

0:30:04 SC: That’s interesting. I think, and I noticed this with my wife Jennifer, who is also a science writer…

0:30:09 AN: An awesome science writer.

0:30:09 SC: That we write… An awesome science writer, that was implicit. We write our books very differently. And I’m guessing that if I were to write a novel, I would write it very differently than that. I would have to do all the world-building first, and then have to come up with what the characters are doing in that world. Just like when I write my nonfiction books, I start with chapter one and then chapter two and then chapter three, and Jennifer can start with chapter seven just like it’s no big deal, and it drives me crazy.

0:30:35 AN: I definitely, with both my fiction and nonfiction writing, ’cause I do nonfiction books as well, I have to start with chapter one, too. And I know people like Jennifer who are like, “No, but start in the middle.” And I have writer friends who do fiction who… They’ll write a bunch of scenes and then figure out where they go, and that…

0:31:00 SC: Yeah.

0:31:00 AN: Drives me crazy. I have to know… I have to know what the world is, and I have to know what the… I have to start with the characters from the beginning and go to the end. And it’s true, when I get to the end, I go back to the beginning and clean everything up.

0:31:12 SC: Sure.

0:31:14 AN: But yeah, I am very linear, I guess. I live in linear time, so I try to honor that in my work.

0:31:21 SC: And that’s one of the good things about different ways of creating things. I remember when we did a science consult with Ridley Scott for one of his movies that hasn’t been made yet; he was going to adapt The Forever War, the Joe Haldeman novel.

0:31:34 AN: Yeah.

0:31:34 SC: And so there was no script written yet. Of course, the novel had been written, but it was clear that Ridley Scott had in his mind certain very specific scenes, all painted out. He knew visually what it was going to be like. And the words would appear to fit whatever he wanted to happen, but he knew what it would look like from the start. And that’s yet another way to start with a project like that.

0:31:56 AN: Yeah, and I think, for sure, like the novel I’m working on right now, which I’m almost done with, there were certain scenes that I knew I wanted in there toward the end, and I kept thinking… I didn’t write them down because I wasn’t there yet, but I was so… Just the other day, I wrote one of them finally, and I was like, “Yes! I finally wrote that scene.” And so there is definitely… I feel like… In TV writing, people call them “beats”, and you know where the beats are gonna be, and then you kind of write yourself into them.

0:32:33 SC: And do you… Sorry, let me back up. There’s a whole long tradition of scientists diving in and writing the occasional science fiction novel, with mixed results, shall we say, right? We’re trained through a lifetime of being an academic to talk and think one way, and being a fiction writer is very different. Now, you got your start as a science writer, I guess as a tech writer and then a science writer, and then you made the transition into fiction. So what was that like? Were you always planning to write fiction at some point, or was it a huge change of gears?

0:33:07 AN: It was a pretty big change of gears. I thought I had already been through the big transition when I moved from writing as an academic to writing popular work, writing journalism and pop science. And also of course, I switched from a humanities focus to looking at tech and science. So I was like, “Okay, I’m done with all my big transitions.” [laughter] And yeah, it never really works out that way. And so making the transition from academic writing to popular writing, I have to say, was actually much more difficult in many ways than transitioning from science nonfiction to science fiction. Because when you’re writing for an academic audience, you’re trying to do so many things that are very different.

0:33:57 AN: You’re not worried about popular or mass appeal. You often are buried in a long tradition of thought, and so you’re having to talk about, “Here’s what everybody else in this field has done. Here’s my one tiny contribution that takes us slightly forward and to the left.” And with pop fiction and nonfiction, you don’t have to do as much of that kind of work. Part of your job is also just to engage the reader, which… It’s a whole craft that you have to learn, how to be engaging and how to tell a story that people wanna keep reading. And in academic writing, you don’t have to do that.

0:34:37 SC: A craft that so many academics have zero training or interest in learning about. Yeah.

0:34:41 AN: Right, but that was actually like a hard transition, because one of the freedoms that academic writing gives you is to be as weird and quirky as you want and have a title for your essay that’s some weird play on an 18th-century poem, and the three people that read that article are gonna be like, “Yes! That’s so funny, Annalee. Good job!”

0:35:04 SC: That was hilarious. [laughter]

0:35:06 AN: And you feel like, “Wow, that was so sophisticated. There were 90 levels of references there.” And like, “That’s not something you can do in popular fiction”. In some limited ways you can do that, for sure, and I definitely have weird obscure references in my books. But that can’t be the whole thing. You also have to tell a story that people can relate to and that speaks to a lot of people and not just like the five people that are interested in Marxist cultural studies, which is what I was doing.

0:35:39 SC: So you think that the experience of going through that transition and becoming a popular science writer was extremely helpful for your next transition into fiction.

0:35:47 AN: It really was, because I had already, before I wrote Autonomous, or actually I should say in the middle of writing Autonomous, ’cause I wrote a really shitty first draft, which I put away, and then wrote Scatter, Adapt, and Remember, which is a nonfiction science book about mass extinction and how humans will survive the current mass extinction. And with that book, I did a lot of learning about how to structure an arc in a book, how to make it have a beginning, middle, and end. I don’t think I fully succeeded, but the failures there helped to teach me: “Oh, so I need to have this kind of connective tissue.” And so…

0:36:31 SC: Aristotle was right about storytelling, right? Someone pointed out that this is a reason why Hollywood movies do well compared to a lot of foreign films. It’s not that Hollywood movies appeal to the lowest common denominator, but that they understand story structure. There are acts, there are conflicts that get resolved at the right time, and that’s a whole skill to have to learn.

0:36:52 AN: Yeah, that’s a big part of it. And certainly… I’ve been reading science fiction my whole life, and so I definitely have a template in my head where it’s like, “Oh, this is how you tell a story. You do what Octavia Butler would do, or you do what Ursula Le Guin would do”. There have been many times when I’ve thought, “What would Octavia Butler do?”

0:37:15 SC: What would Octavia do? Of course.

0:37:16 AN: Yeah, and she would basically… She’s famous for having spent lots of time with false starts and revision and writing half a novel and tossing it aside ’cause she was very exacting about how to tell a story, and that’s why her stories are so engaging and stick with you for so long. So I think I really enjoyed making the transition. I was really surprised that it worked so well, and I was even more surprised that people actually liked Autonomous, ’cause I thought it was just gonna be a weird book about robot sex and people would be… There’d be like four. It’d be like my academic writing. Four people would be like, “This is the greatest thing ever!” and everybody else would be like, “That book happened? I don’t know.” So I was so gratified that my weird ideas, and my efforts to make them appeal to more than just me actually worked.

0:38:10 SC: To be fair, there is robot sex there in the book. Or actually, is there robot sex? There’s robot-on-human sex.

0:38:17 AN: I mean that’s robot sex. Anytime robots are having sex…

0:38:19 SC: Yeah, okay. Sex with a robot.

0:38:19 AN: Anytime robots are having sex is robot sex. But yeah, and that was part of what… There’s some quirky stuff in this novel, as well as, of course, my exegesis on the future of economics. There’s also human relationships and romance and stuff like that, and I think that’s part of what people really liked. Even though Paladin the robot does some mean things, she’s also a really relatable character. And she’s just been born, in a way. She’s just been booted up, so she’s learning not to be a jerk.

0:38:56 SC: Well, we have to talk about Paladin a little bit. Paladin the robot. We talked about the economics and how that goes into the world building. There’s also a lot of ideas in the novel about AI and robots in general. And so one of the things you already touched on, so I just wanted to touch on it again. We’re certainly sympathetic with Paladin. She’s discovering herself and sort of a newbie in the world and an ingenue in discovering things. And then she kills people pretty ruthlessly all the time. And…

0:39:25 AN: Yeah.

0:39:25 SC: It’s almost like you have to read it twice, like, did that really just happen? Was that an intentional kind of mood-setting device? Or how did that come to you?

0:39:36 AN: So, Paladin was the first character that came to me in the novel, so I sort of started with Paladin and kind of ended with Paladin too. ‘Cause when I did my major revision, Paladin was the character that changed the most. And I felt like I… I like characters who are complex and in a gray area. And this is a robot that was designed to be a combat robot. Like, there’s no question why Paladin is there: basically just to do some reconnaissance and then do some killing.

0:40:13 SC: Killing people, part of the job. Yeah, part of the programming.

0:40:15 AN: Part of the job and part of his programming. And he starts out as basically owned by a military branch of the African Federation, and the African Federation is kinda like the EU for African nations in this future. And so it’s a bunch of African nations kind of sharing the same currency and things like that, so… So he’s owned by the African Federation. Everyone just kind of assumes he’s a he. Because if you’re a big giant bulky terminator-looking motherfucker, you must be a dude, and he doesn’t really care one way or the other but, but his…

0:40:52 SC: He’s a robot.

0:40:53 AN: Gender identity. He’s a robot, he’s like, “Whatever. I don’t really have gender.” But as the book goes along and he starts learning more about people and how people relate to each other and what they think of as being important, he starts to… He wants to change his pronoun, so he becomes she in the middle of the book, kind of arbitrarily, for a bunch of reasons that I don’t want to spoil. And so she kind of goes through some of the things that a human teenager might go through. You know, she’s just been booted up, she’s questioning the role that she’s been assigned in the world, she’s questioning who she is, and who she can love and how she wants humans to perceive her. Because changing her pronoun, of course, is all about what humans perceive. ‘Cause again, robots don’t really care about gender that much. So, I wanted people to have this sort of crunchy, difficult relationship with Paladin and not have it be just like Paladin is an innocent, sweet robot, you know. Paladin crushes people’s brains, that’s her job. And you know, she eventually toward the end is kind of like, “Huh, why am I doing this job? Well, I’ll just kill some more people.”

0:42:09 SC: Brain crushing. Is that really my purpose? Yeah.

[laughter]

0:42:12 AN: And she never has, again, I mean, I don’t kind of spoon-feed the reader. She never has a moment where she kind of scratches her head, and she’s like, “What is the meaning of life?”, but she does start questioning her programming, and she’s given a chance to really see how she’s been programmed. And that’s a really important turning point. And again, for me, a lot of it was thinking about what it’s like to be a teenager or what it’s like to become a young adult, and suddenly you realize that there’s all this stuff you’re doing just because you’ve been told that you should do it and just because you’ve been told it’s right. And you know, lots of people, for example, grow up with religious programming and they just have never questioned it, and all of a sudden, they’re like, “Wait, why am I doing all this stuff? Do I really believe this? How do I wanna be religious? Do I wanna be?”

0:43:02 AN: And Paladin’s kinda going through something like that with her programming. She’s kind of rethinking all the stuff she’s been taught. And so that was so fun to write, especially because I’m a person who can’t necessarily go into their own mind and be like, “Here, I have discovered the programming called Catholic Church, and now I can remove it”. Because you just can’t do that, you can’t remove it if you’re a person. But for a robot, she actually can go in eventually and see all the programs she’s running and be like, “Oh, that’s why I was feeling that way. Oh, I could modify that”. So robots would be great in therapy. That’s what I think. [laughter]

0:43:40 SC: It’ll be very… Not a very lucrative customer, ’cause you can just fix them, and then they go away. That’s not what a therapist wants at all.

0:43:48 AN: Yeah, but it might be really complex. Therapist might have to… Essentially like a robot therapist would have to be basically some kind of programmer or maybe even a hacker, ’cause you’d have to build little hacks to get around like, “Well, you need this program, but this program is also telling you to fall in love with guys who are really bad for you. But we wanna keep part of the program, but not that other part”. So it could be complex.
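The therapy-as-programming metaphor above can be sketched in a few lines of Python. Everything here is invented for illustration (the novel specifies no such interface): a mind whose motivations are named, inspectable programs that a robot therapist could keep but dial up or down, exactly the kind of partial hack Annalee describes.

```python
# A toy sketch of a mind whose motivations are named, inspectable
# "programs". All program names and weights are hypothetical.
class RobotMind:
    def __init__(self):
        # Hypothetical motivation programs with adjustable weights.
        self.programs = {
            "obey_owner": 0.9,
            "fall_for_bad_boys": 0.8,
            "curiosity": 0.4,
        }

    def introspect(self):
        """See every program you're running -- something humans can't do."""
        return dict(self.programs)

    def modify(self, name, weight):
        """'Oh, I could modify that': keep the program, change its pull."""
        if name not in self.programs:
            raise KeyError(f"no such program: {name}")
        self.programs[name] = weight


mind = RobotMind()
mind.introspect()                      # "Oh, that's why I was feeling that way."
mind.modify("fall_for_bad_boys", 0.1)  # the hack: keep the program, weaken it
```

The point of the sketch is the asymmetry the conversation turns to next: a robot can call `introspect()` on itself, while a human can only guess at what they are running.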

0:44:11 SC: Well, so I have thoughts on this, both ways. One is, it’s provocative to think about an artificially intelligent mind being able to pinpoint programs that give them motivation to do certain things in certain ways, because it teaches us or makes us think something about human beings. In some sense, we’re robots, we’re computers. Anyway, we’re physical organisms. There’s things going on in our brains.

0:44:40 AN: That’s right.

0:44:41 SC: We might imagine pinpointing, “Oh, there is the part of my brain that makes me fall in love with bad boys who I know will cheat on me”. And would we want to go in there, or pay money to a surgeon to go in there, and fix that part of our brain?

0:44:56 AN: Yeah, I mean that’s kind of the question that’s asked in the movie “Eternal Sunshine of the Spotless Mind”, where there’s a technology. I love that movie, and I’m sure it’s part of what influenced me. And in that movie, there’s a technology where you can nuke a set of memories, so that you can forget a mean person in your life, or maybe you can forget your lust for bad boys or whatever. So yeah, I think that that is a question that these characters have to deal with. For Paladin, that’s less of a concern because she can see all of the modifications she’s making. That said, there are moments where she’s having to second-guess the humans around her, because she knows that the humans don’t have that same ability, and so she has to say, “Okay, is this person speaking to me out of their programming, or do they really mean it?” And again, that’s a very human question, because you never know, “Is this person just being polite or do they really like me?”

[laughter]

0:46:00 AN: And so that’s a… And so, the fun thing with Paladin is that while she’s having these very human thoughts, I also based her consciousness on basically a UNIX system and thought about how early UNIX computers networked with each other. And what would it be like if you had a consciousness that had kind of grown out of computer networks and how would these AIs communicate with each other? And so I have them doing secure handshakes and when they need each other, they’re speaking to each other wirelessly, the robots are, and kind of announcing themselves in the same way that data announces itself to a computer on the network where it says, “Hello, here comes my data”. So I’m kind of translating from what packets are saying as they reach a port. And so I felt like I made Paladin sufficiently alien and sufficiently connected to a tradition of computer minds that some of her problems really aren’t human problems. Some of her problems are really network security problems, and there is a whole bit where she has to… She has to deal with what it means to have a mind that is stored in the cloud, and she doesn’t control that. The company that owns her basically owns her backups, which is her mind. And so, that’s a thing that humans haven’t really had to deal with quite yet. Not in a literal sense anyway.
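The “hello, here comes my data” greeting described above is essentially a TCP-style three-way handshake, the announcement real packets make before any data flows. A minimal sketch, with robot names invented for illustration; real network stacks exchange SYN, SYN-ACK, and ACK packets rather than strings:

```python
# A minimal simulation of the packet-style greeting described above:
# a TCP-like three-way handshake (SYN, SYN-ACK, ACK) rendered as strings.
def handshake(initiator, responder):
    """Return the three announcements that precede any data exchange."""
    return [
        f"{initiator} -> {responder}: SYN",      # "hello"
        f"{responder} -> {initiator}: SYN-ACK",  # "hello, I hear you"
        f"{initiator} -> {responder}: ACK",      # "here comes my data"
    ]

for line in handshake("robot_a", "robot_b"):
    print(line)
```

Only after this exchange does either side send its actual payload, which is the behavior translated into the robots’ wireless greetings.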

0:47:36 SC: And I think that this also raises questions about artificial intelligence. So I think that the fact that Paladin can pinpoint in her brain certain programs that make her feel certain ways provokes us into thinking a certain way about human free will, right? But on the flip side of that, I’m a little skeptical that artificial intelligence would work as sort of conscious, human-level AI if it couldn’t reprogram itself. I mean, I hear so many conversations about artificial intelligence that are about how we should program the computer so it doesn’t do bad things, and I can’t really imagine that that would qualify as artificial intelligence if it couldn’t change its mind in some profound way.

0:48:21 AN: I completely agree with you. And one of the things I’ve said about this book over and over is that it is science fiction, but there is this fantasy at the heart of it, which is that we just sort of wave our hands and have strong AI and don’t ask any questions.

0:48:37 SC: Right.

0:48:38 AN: Because I think that in real life, as we explore more and more what machine learning means and what it means to build machine learning algorithms into other applications, we’re realizing that, first of all, we don’t know what the hell intelligence is at all, in humans, in anything. We are totally at square one with that. And I mean, sure, there’s a lot of people who have a lot of different definitions, but trying to recreate something that we can’t even quantify in ourselves is really tough. The other thing that I think we’re learning is that intelligence, to the extent that it can be talked about, is really an ecosystem.

0:49:17 AN: There’s different kinds of intelligence, and I’m not just talking about dumb stuff, like emotional intelligence, which I think, whatever. That’s a self-help book thing. I mean kinds of intelligence that work on different kinds of problems, intelligence that presents itself as maybe not seeming very intelligent but actually is brilliant at doing one particular task or a set of tasks. And I think the whole movement around the idea of neurotypicality being actually not really that typical is helping us, in a way, much more than you might think in terms of defining intelligence. Because once we have an idea that there’s a spectrum of what it means to have consciousness and intelligence, and it’s not just, here’s the one kind of consciousness that we recognize as intelligent and everything other than that is autistic or schizophrenic or whatever the other names are that we’re using at whatever point in history… Which is a very abstract way of saying that if we ever do have something that resembles what gets called strong AI, which is basically human-equivalent AI, it may actually look nothing like Paladin.

0:50:36 AN: It may express itself in ways that are really hard for us to relate to. It may be like talking to a very non-neurotypical person, and we’re gonna have to learn to translate from those machines. And I think some of them might turn out to be kinda like Paladin, but I agree with you that if we really wanted to imitate a human mind, it would have to be self-modifying in some way. And if it weren’t self-modifying, I don’t know if I would say it couldn’t be human-equivalent, but it would be equivalent to a human who had been horribly abused and kind of culturally programmed, maybe like a cult victim or something like that. Someone who hadn’t been allowed to make any choices, and as a result, maybe had experienced a huge amount of trauma. So if you can imagine…

0:51:32 SC: Go ahead.

0:51:33 AN: Go ahead.

[laughter]

0:51:34 SC: I was gonna say my own suspicion is that… My suspicion is that we’ll have a hundred different kinds of artificial intelligence before we have anything that is really like a human being. I think that we way underestimate how hard it is to get something that’s like a human being. I think it will happen; there’s no obstacle in principle. But there’s many different ways to be intelligent, we’re discovering this already. It’s way easier to build a computer that will win at chess than a computer that will carry on a fun conversation with you. And part of that is that it’s not just processing power, it’s also motivations. Our brains are embodied in bodies that get hungry and thirsty, and want to reproduce and wanna sing songs. You could have other motivations for an embodied robot, but they wouldn’t be the same. So I would be surprised if the first, second, third, fourth generations of AI were anything like human beings at all.

0:52:30 AN: I think, as we’ve been saying, I think it’s gonna be an ecosystem, so I think there’s gonna be some AIs whose motivations are incredibly difficult for us to relate to. We might know what those motivations are, but they won’t fit anything; we won’t be able to feel those motivations in the same way an AI might. On the other hand, if we do assume that we can create a human-equivalent AI and program it, we might very well program it to have the same kinds of fucked up desires that humans do, right? We might give it what Foucault calls a perverse implantation, [laughter] which is just his way of talking about how we sort of teach people to think that their own desires are terrible and wrong even though we feel them all the time. ‘Cause, as you said, we live in meat sacks, and we have built-in urges and needs. But using culture, we can try to train people that these urges and needs are bad and can modify them, so we can modify how people want to have sex, or we can modify how people want to eat. That’s why people have so many weird rituals around sex and food, because these are these basic things that we want, and culture has just swarmed all over them and messed them up and made them complex in ways that are infinite, just like infinite complexity laid on top of basically just eating and reproducing.

[laughter]

0:54:00 AN: It’s not… These are not complex activities, but we fetishize them to the point of absurdity. And so I think you could imagine a robot that had a lot of hangups. Martha Wells has a great series called Murderbot, which is a series of novellas, which I highly recommend, where she has a robot who has a lot of social problems. Partly due to its programming, partly due to the fact that, like Paladin, it’s kind of a murder… It’s a murderbot, it’s designed to kill and protect, and it copes with all of its problems and all of its hangups by downloading a huge amount of soap operas to its local memory and then just watching soaps. And it’s this very human tic that this robot has.

0:54:52 AN: But it made me think about how we really could, through cultural conditioning, create AI that are very human, and that learn about life from stories and that crave stories in the same way that people do. And so, I think like I said, it’s gonna be an ecosystem and it’s gonna be way weirder than we thought. I don’t think we’re gonna have a robot uprising. I think it’s much more like robots and AI and machine learning will kind of merge into human society in ways that are sometimes hard to tell apart from human society. If you get an implant that’s kind of an AI or you kind of become partly AI, I think it’s… I think we’re gonna end up with a spectrum where there’s totally biological human intelligence, all the way on the other side is like totally machine intelligence and everything in the middle.

0:55:47 SC: Yeah, I think that’s exactly right. I’m on your side there. Did you get… Did you do a lot of conversing with AI experts and robot experts for the novel, just like economics?

0:55:57 AN: I did, yes, I talked to… I talked to hardware experts about how a robot body would be put together because that was something that I didn’t know very much about. I read a lot about AI and machine learning. I had also already been writing in my journalism a lot about it and a lot about just computer networks in general. I used to, when I first started my career, I wrote a lot about computer security and hacking which is actually a great way to learn about computer networking, because it’s all about the ways that subversive activities can take place and people can kind of sneak around all of the protocols or all of the things that are supposedly safe. And so it’s, in a way, it’s almost like studying computer security or writing about it, it’s almost like looking at computer network neurosis because it’s all about how to…

0:56:52 SC: Right.

0:56:53 AN: Like unwanted thoughts and applications appear in this good pure mind of the computer or mind of your network of computers. And so… So I had all that stuff in my head and it had been sitting in my head for a pretty long time, so I was like “At last! I can finally use this knowledge to build a robot.”

0:57:12 SC: So your grand unified theory is that everything we do is a kind of therapy for one thing or another.

[laughter]

0:57:18 AN: Actually… Well… Or the opposite, right? That everything we do…

0:57:22 SC: Everything is a neurosis and all we’re doing is trying to deal with our neurosis, become functionally neurotic.

0:57:27 AN: Work it out, yeah.

0:57:28 SC: Whether we’re humans or machine.

0:57:31 AN: And just to not kill each other. Ultimately I think a lot of my work, including the novel I’m working on now, the big struggle is just like, just don’t kill each other. I know you’re mad, [laughter] I know you’re feeling neurotic, but just put the gun down. No brain crushing, let’s just work it out like grownup robots.

0:57:54 SC: Speaking of neurotic, there’s also yet another technological angle in Autonomous, where there are these synthetic drugs being made and then pirated. That’s what Jack does for a living. And so, were you also talking to pharma people and biochemists about how drugs could make you more or less neurotic, or more or less helpful?

0:58:16 AN: Yes, I did talk to a lot of very kind and sympathetic biologists. I talked to a couple of synthetic biologists. I talked to a genetic engineer about the lab scenes. I gave him an early copy of the book and it was really funny, because he was like, “Well, actually the lab scenes are fine and this is all good, but you call… There’s this one machine that you have, and it’s a Fabber.”

0:58:47 SC: Yep.

0:58:47 AN: And he’s like, “Why is everything a Fabber? [laughter] That is not how it would work. It might have the same technology, but there’d be a bunch of different special use cases and all different kinds of Fabbers, so that is just terrible.” And so I actually did go in and change it so that a lot of things that used to be called a Fabber now have other names. There’s cookers and there’s devices… Again, special-use devices that are basically 3D printers, but can print tissue or print various molecular structures, and stuff like that. So that was really helpful. And then I did talk to neuroscientists about the drug that the characters are taking, which is getting them addicted to work. And I was like, “Okay, here’s how the drug would work. What do you think?” And they were like, “Oh yeah, wow, that could work. Oh, that’s so evil, yes. Oh, very evil.” So they were excited about the evil.

0:59:42 SC: Happens all the time. I mean, when I have done the movie consulting, it’s… Physicists consulting with people making movies is fairly straightforward, right? Like the answer to everything is you need a wormhole to do it. But biologists…

0:59:56 AN: That’s what you told me when I consulted with you.

0:59:58 SC: Yeah, because it’s the truth.

1:00:00 AN: It is, I’m excited…

1:00:00 SC: You don’t need a physics consultant, you just need an AI chat bot to say “Yes, you need a wormhole. Just put the wormhole in there. It’s not realistic, but you need it anyway”. But if you want… If you’re a writer, a producer, whatever, and you say, “Okay I need an organism, a germ that will go in and have the following terrible effect on humankind.” The biologist will go, “No, I don’t think that… Oh, oh yeah. Oh yeah, sure. We can do that. No problem”, and it’s kind of scary from that point of view.

1:00:28 AN: Yeah, and in fact I just outright stole some ideas, with permission, from some of the biologists I talked to. I have a final, final boss fight scene where they’re using a bunch of cool, futuristic technology. They’re sort of repurposing stuff, and the biologist I was talking to said, “Oh, but you know, you couldn’t do that, but you could do this other thing which would cause his skin to erupt with stuff”, and I was like, “Oh yeah”, so anyway, that’s not too much of a spoiler, but they do…

1:01:00 SC: And you’re like, “They pay me to do this, this is the best thing ever. How could I get a job so good?”

1:01:04 AN: Yeah, well, I also just… I always have a warm feeling when a scientist tells me that something is plausible, even if they’re sort of saying, “With a caveat, that of course, it’s not plausible”. [laughter] And it’s like, when you told me about wormholes I was like, “Okay, I can use wormholes, ’cause Sean said it’s okay, even though he said also wormholes couldn’t really exist, but he said they could kinda maybe possibly exist!” [laughter]

1:01:23 SC: They’re not gonna exist. There are different levels, there’s an ecosystem of violating the laws of physics, so… So, that’s perfectly… There’s acceptable ones, and less acceptable ones.

1:01:34 AN: Yeah, yeah, exactly. I think the thing that’s most important to me when representing science is trying to represent the scientific process accurately, which is to say…

1:01:43 SC: Sure.

1:01:43 AN: Showing people testing things and retesting them, and how do you test things? I have a character who’s testing photonic molecule levels, which is not a thing; there are no photonic molecules. But you would in fact have to go test the levels on a device over and over to kind of make the suppositions that she does, so I wanted at least to be honest in that way, so that people understand that science is actually a lot of dreary testing and not a lot of gazing rapturously into glowing holes and saying, “It’s full of stars”.

1:02:19 SC: We’ll allow some absurdity, but we want our absurdities to be grounded in the scientific method as much as possible.

1:02:23 AN: Exactly.

1:02:25 SC: And how much of the discussion of the intersection between economics and pharma in the book was a commentary on, a reflection of, or inspired by the real world right now?

1:02:36 AN: 100% inspired by the real world. I mean, I live in the real world, maybe to my detriment, and so everything I write about is… My brain takes it in, and I either consciously or probably a lot of times, unconsciously, kinda smoosh it up and try to rethink it and put it on the page. So I’ve… Like a lot of people in the United States, I’ve seen… I’ve had many friends that suffered horribly because they couldn’t afford medical treatments, and so that was very much on my mind. And pharma companies are huge, it’s a huge business and it’s poorly understood and the way that they make money overlaps a lot with the patent system, which is also poorly understood. And so it’s a great way to just rip people off, and you always have a captive audience. Nobody’s gonna say no to a drug that’s gonna cure them or prolong their life. So it’s easy… In some ways, it’s an easy target for a big bad, but it’s also a… It’s a source of great anxiety and pain for lots of people. So it seemed like an obvious place to poke. And who doesn’t want a pharmaceutical pirate? That’s the best pirate.

1:03:54 SC: Well, I was gonna say, your protagonist is a biochemical pharmaceutical pirate, so that’s pretty good.

1:04:00 AN: Yeah, I think… And it’s funny ’cause after the novel came out, a few people have come out identifying as pharmaceutical pirates. And so I was excited about that, I was like, “Yes, steal from the rich, give to the poor.” That’s my hope, is that… Inspire that.

1:04:16 SC: It is closer to Robin Hood than real piracy, right? I mean, real pirates are in it for themselves.

1:04:21 AN: Yes. It’s… Jack’s character is very much like in the Robin Hood tradition. I was explicitly thinking about Robin Hood, who is like one of my very favorite mythical characters ever, so…

1:04:35 SC: And so, why don’t you just give us a teaser for the next one? Are you… I should say, are you mostly a fiction writer now? Or you’re still writing nonfiction also?

1:04:43 AN: I am trying to be bi, which is hard. Nobody really acknowledges bitextual people like me. But I am working on two books right now, one of which is nonfiction, which is about ancient abandoned cities, focusing on four cities specifically and looking at why people abandoned them even though they were kind of cities that were at the heart of their civilizations. And sort of using that to think about the modern world and what we’re gonna do with our megacities. And I’m also working on a novel which is about time-travelling geologists. So it’s great to be working on something about ancient history while I’m also writing about time travel, ’cause whenever I feel the need, as I’m writing about archeology, to desperately say, “I wish I could just go back and find out what really happened,” I’m like, “I will go over to my novel, where all they have to do is get a grant to use the time machine in order to go back and find out what happened.” So it’s been a really fun past year, although very stressful, because I’m trying to write a lot of stuff. But the novel should be out next fall, and the nonfiction book will probably be out in 2020.

1:05:58 SC: Yeah, I think there’s a kind of a theme here. I don’t wanna sort of force a theme where it’s not there, but you’ve written about how we’re going to survive an apocalypse. Sorry, what was the title of that book again?

1:06:08 AN: Scatter, Adapt, and Remember.

1:06:09 SC: Right.

1:06:10 AN: How we will survive a mass extinction.

1:06:13 SC: Yes, and it depends on the mass extinction, right? And…

1:06:16 AN: Very much so. There’s a lot of instructions in the book.

1:06:19 SC: Yeah, and now you have the book on ancient cities where they… Was it mostly… Spoiler alert, I suppose, but is it mostly that they chose to leave them or were they forced to leave them for some reason or another?

1:06:31 AN: It’s always a combination. These were cities that were abandoned, which means that people left them, they weren’t forced out by invading forces in any of the cases I’m looking at. In one case, like with Pompeii, the city was buried in ash. But other cities that were buried in ash at that same time were dug out, and… Like Naples, which was affected as well. But Pompeii was not. And so I’m looking at what are the precipitating factors that turn a huge city like, say, Angkor, which had a million people about 1,000 years ago, which made it the biggest city on earth at that time. How did that go from the biggest city on earth, and then within 400 years, to just a few monks living in kind of ruined remains?

1:07:25 SC: Can we blame neurosis for this?

1:07:28 AN: You know, I mean, there’s the neurosis that once you get a bunch of humans together in a city, the neuroses become things like historical trauma and also infrastructure upkeep. That turns out to be a big issue. Politics are a huge part of what causes a city to fail, but politics alone won’t do it. It’s really got to be combined with some kind of environmental problem, whether the problem is actually climate change or mismanaging the water supply or whatever. So I don’t know if that’s a neurosis or just… But it is certainly a recurring human problem, so…

1:08:09 SC: Yeah.

1:08:09 AN: We see it over and over again.

1:08:12 SC: So Scatter, Adapt and Remember? That was the title?

1:08:15 AN: That’s the title of my first book, and the nonfiction book I’m working on now, about abandoned cities, is just called Four Lost Cities.

1:08:24 SC: But I like it. Scatter, Adapt and Remember is kind of the theme, right? Running through the fiction as well as the nonfiction. There’s human beings trying to adapt to challenging circumstances and choosing different modes of doing that.

1:08:36 AN: That’s very true. For me, in all of my work, I have a strong wish to urge people to survive, and try to survive in a way that helps lots of other people survive with them. I mean, I was joking earlier about the whole goal with my fiction is just to tell people not to kill each other.

1:08:55 SC: Not killing. Rule one.

1:08:56 AN: Don’t kill, that’s number one. And that’s part of… That’s a joke, but also that is part of what I’m trying to do in my nonfiction, is kind of show people ways that humans have survived and how, even when horrible circumstances kick them out of a really awesome city, they still manage to survive, their culture lives on, humans live on, and we create new social structures that are hopefully a little bit less inclined to encourage people to kill each other.

1:09:28 SC: Yeah.

1:09:28 AN: That’s the goal. Just like less and less pressure to kill. [laughter] More and more pressure to take care of everyone. So yeah, survival and… Survival in style, I would say. And…

1:09:42 SC: Survival in style is good. I did notice in one interview someone wanted to paint Autonomous as a dystopian novel. But you said it’s just a topia, it’s neither a dystopia nor a utopia, it’s the usual in between somewhere.

1:09:55 AN: Yeah, just how things are now. Sometimes things are a little worse, sometimes they’re a little better for certain groups of people, but yeah, I mean, I… Definitely Autonomous, there’s a lot of things that happen that are… Yeah, they’re positive. There’s great medicine available, if you can afford it. So, you just need a pirate to help make that a better system. And certainly in the time travel novel I’m working on now, there’s like… There’s some bad stuff, but there’s some good stuff. It’s all balanced out.

1:10:24 SC: Well, you can’t write a novel where everyone’s happy and just sitting around saying how happy they are, but maybe the descriptions of less than perfect situations can help nudge us here in the real world towards making things a little bit better.

1:10:35 AN: I think that’s true, and I don’t think we’re ever gonna reach a point as a civilization or a species where we’re happy all the time, even if we have an awesome Gini coefficient and we don’t have an extreme division between rich and poor, and we don’t have massive social injustice, we’re still gonna argue. We’re still gonna have jealousy.

1:10:53 SC: Oh yeah.

1:10:54 AN: We’re still gonna be sad when someone dies or someone leaves us, there’s always gonna be drama and sadness. It’s just that, hopefully, we can keep it on a kind of… Keep it on the… Keep it limited. Back to my point about not killing.

1:11:13 SC: I like that, I think…

1:11:14 AN: Let’s just have interpersonal drama and not have wars.

1:11:19 SC: I think that’s a perfect note to end on. Annalee Newitz, thanks so much for being on the podcast.

1:11:22 AN: Yeah, thank you for having me.

[music]