If we don’t know the dangers that the future brings, how can we prepare for them?

The End of the World is a podcast that explores existential risks, those natural and man-made catastrophes that could bring about the extinction of humanity. The series begins with the origin of life and goes on to list all of the ways Homo sapiens could be snuffed out by nature, before focusing on all the ways we might do it ourselves. Artificial intelligence, biotechnology, physics experiments, nanobots—we may have an odd 5 billion years before we get swallowed up by the sun, but at the rate we’re innovating, it’ll be a wonder if we see another century.

Sound bleak? Well, yes, it is. But the podcast comes at a particularly dark time in human history. Never before have we been so haphazardly equipped with the tools to bring about our own demise. We’re recklessly building AIs and algorithms that have no empathy for their human creators. Pandemic-level pathogens escape their labs on a shockingly regular basis. And we’re only a decade or two away from a climate-change point of no return: global warming’s own event horizon. Yet we don’t seem to be in any great rush to course-correct. (The Doomsday Clock currently has us at two minutes to midnight, for what it’s worth.)

Host Josh Clark takes the stance that the more you know, the more you can do about it. And he knows a lot. The co-host of the Stuff You Should Know podcast and a former senior writer at HowStuffWorks, an educational website, Clark has made a living making sense of the world and teaching it to others. Through all of his research, he thankfully doesn’t think we’re doomed—not yet, at least. We just have to make it through the next couple of years alive.

In this surprisingly uplifting conversation, Clark walks us through the various ways we could potentially send the species into a tailspin, and how we can avoid that terrible fate.

Quartz: It feels like we’re at an inflection point of society’s trajectory. But haven’t we always felt like we could be on the brink of destruction? Why is this moment different?

Josh Clark: Existential risk is a different category of threat that we’re not used to. The difference between an existential risk and every other risk that we’ve come across up to this point is that we’ve never had technology powerful enough to actually wipe ourselves out. For a long time, we thought that nuclear war or even catastrophic climate change could do it. And those would be really bad for humanity! If the kind of climate change that we’re going to start facing in 12 years happens, a lot of people are going to die, and a lot of people will be displaced. It’ll be a terrible period of adaptation for humanity. But we will adapt. There will be people left to rebuild.

But with these new risks, there won’t be anyone left to rebuild. There are no chances left. If one existential risk befalls us, that’s it! Humanity is done. We tend to think because we’ve made it this far, we’ll be okay. But we’ve never been exposed to anything that could take out all of humanity. Nothing’s had the goods. But the technology we’ve started to develop now does.

We could make it to a utopian, post-biological society. But there’s a big catch: To make it there, we have to safely get through this period beforehand, which is the most dangerous that society has ever lived through.

In the artificial intelligence episode of your podcast, you bring up one of my favorite AI thought experiments, Nick Bostrom’s Paperclip Maximizer, except you adapt it for morality. In the original story, someone programs an AI to make paperclips, and once it runs out of the obvious stuff to recycle the metals from, it starts ripping apart the world, looking for more. But what if you tasked the same AI with slowing down climate change? The quickest way to do that would be to kill off all the humans, not some high-tech solution. We’d be goners. If we’re so good at screwing things up, maybe it’s only natural that we’re going to bring about our own demise?

I was really surprised to run across a lot of people who believe that humanity doesn’t deserve to survive. I understand every single point that they make: We’re a holy terror on every other species; we’re terrible stewards of the biosphere; we do truly horrific things to animals—and to each other. So you could make a great case for why humanity doesn’t deserve to live.

But at the bottom of that, I disagree with the sentiment. It punishes the humans to come for the crimes of those of us alive today. I also think that the humans to come could be way better at being good humans than we are.


Our moral trajectory is on an upward climb. It goes up and down—and sometimes it goes backward and it’s ugly and messy—but overall, it’s still moving upward. Take how we treat animals, for instance. In the last 50 years we went from animals having the same rights as the toaster in your house to prosecuting humans who mistreat animals. We now recognize that we humans—the most amazing creatures on Earth—don’t have the right to violate these animals’ natural rights. Once you open that door and step through, our view of other animals is going to become more harmonious and friendly. Maybe all humans will soon be on a vegan diet, eating in-vitro meat, and all of the animals’ habitats will be preserved. And I can’t believe that won’t be expanded to all life.

But we can’t do that if we all die—if we commit omnicide, as Bostrom puts it. If we take humans out of the equation, we also take out the possibility that we could make the world better for all life. As the most intelligent life on Earth, we could see it as our responsibility to make life better for the rest, rather than take, take, take. That’s my hope.

If we let Earth go along its natural path, the probability of a catastrophic natural event, like a world-ending volcanic eruption or a stray cosmic ray, is pretty low. But we’re accelerating our existence. We’re so obsessed with dangerous research, whether it’s smashing atoms together in the Large Hadron Collider or artificially manufacturing deadly flu strains. How do we find the balance between innovation and preservation?

Even if we can bumble through our next million years on Earth with this level of technological maturity, we still run into a natural risk eventually. Maybe it’s an asteroid. Maybe it’s a horrible super-volcanic eruption. It could be anything. But in a billion or so years, the sun will grow hotter and brighter and will actually deplete the CO2 in Earth’s atmosphere, messing up the climate and making photosynthesis impossible. So life would basically cease here. That means there’s a 1.1 to 1.2 billion-year expiration date ahead of humanity, if we can just stick around on Earth for the next billion years.

But can we stick around on Earth for the next billion years?

My money says no! I don’t think so. But if we use our technology to get off the Earth and start spreading out, maybe we can avoid that billion-year expiration date. There’s value in using our smarts—but you can also make the case that our smarts are what got us into this mess in the first place.

So it’s a catch-22?

Yes, even if you took our smarts out of the equation, we’re still doomed at some point—but the only way we can avoid being doomed is to use our smarts. I was worried that some people were going to be like, “Josh, let’s light some torches and chase the scientists out of town.” So I really went to some lengths to say that banning science is not going to help. The only way that we can pull ourselves out of this natural extinction threat is to use our smarts and use technology.

Let’s get ourselves off this rock and into the universe, then! The series starts with an episode on the Fermi Paradox, which asks: If it’s statistically likely that other intelligent life is out there, why haven’t we found it? What’s your opinion—are we alone?

I believe that, right now, we are the most intelligent evolved life in the universe; that Earth is the only home to intelligent life. But there’s a question mark on whether we’re separated from other intelligent life by time—that they might have already died out, or that they’re still to come.

I used to think that no, this universe is way too big for it just to be us; it’s bonkers to think otherwise. But when I was researching the series, I changed my mind, big time. Belief in the Fermi Paradox—or a belief in aliens, or other intelligent life—is almost a religiously held view. I mean, people got mad that I wasn’t like, “There probably are aliens.” I didn’t put it in [the podcast], as my conclusion is that there probably aren’t. And of the people I talked to—with the exception of Seth Shostak, who heads SETI [the Search for Extraterrestrial Intelligence Institute] and obviously thinks there are—none believe that there is other intelligent life out there in the universe. Maybe other intelligent life has existed, and maybe there is more to come. But right now, we’re the only ones.

Do you think we’ll successfully colonize other planets?

Give it enough time, yes. Totally. Absolutely. I mean, we’re so close right now to spreading out into the solar system. Really, what we’re lacking isn’t technology—it’s political will. If we globally came together and said we wanted to do it, no problem, we could do it in a short time.

As long as we can stay alive, we will definitely spread off of Earth. There are a lot of challenges between now and then. But I don’t think it’s anything that we can’t surmount pretty easily if we put our minds to it.

After listening to the podcast, what’s the most common question people ask you?


People keep on asking what they can do to help. I didn’t have an answer, so I started to look around. There are actually a lot of things that a person can do (short of going back to school and learning nanotechnology, that is).

The best thing that anyone can do is start talking about [existential threats]. If we start talking about it, something that seems remote and weird and odd and crazy becomes obvious. It becomes imperative to talk about it. We have existential threats coming our way, and if we don’t do anything about them, we’re in really big trouble. That’s the barrier right now. This stuff seems fringey and crackpotty, but as more people talk about it, it’ll seem less so.

When things can seem so bleak, how do you keep people hopeful?

You lose people when you fearmonger. You push them into despondency and apathy; they become fearful and overwhelmed. You might think, “What’s the point? We may as well let the asteroid come for us if there’s nothing else we can do.”

But if you can explain how something can happen, it becomes less scary. I think vagueness is the scariest thing. It’s like in any horror movie. If they show you the monster and you know what it looks like, it’s automatically less scary. But if they just mess with you and don’t let you see it, it’s terrifying. What I’m doing is dragging the monster out into the light and saying, “See? Yeah, this thing can kill us—but it’s not as scary as it seems.”