The 21st century is the most important century in human history.

At least that’s what a number of thinkers say. Their argument is fairly simple: there are huge challenges we have to surmount this century to get any future at all, which makes this the most consequential of all centuries so far. And solving those challenges would likely leave humanity farther from the brink of destruction — which makes this century more pivotal than future centuries, too.

Not all that long ago — in 1945, with the first wartime use of nuclear weapons — humankind developed the ability to destroy ourselves. Since then, we’ve only gotten better at it. There are now tens of thousands of nuclear weapons, and we’re proceeding at great speed toward other ways to endanger our civilization — from climate change to engineered pandemics to artificial intelligence to other, even more speculative future technologies.

“Unless we get our act together as a species, there’s only so many of these centuries that we’re going to be able to survive,” Oxford philosopher Toby Ord has argued. It’s not that any one of these things is guaranteed to destroy us — it’s that, if every year we get a little bit lucky not to have a nuclear war, a little bit lucky not to have a global pandemic, a little bit lucky not to have a dangerous incident of some other kind, then, eventually, we’ll run out of luck.

That, according to this view, makes this a pivotal time in history — the era between when we invented ways to destroy ourselves and when we (hopefully) invent some form of structure or governance that means we can address such problems in a coordinated, systematic way without relying on luck.

This argument has been influential in the budding academic field that studies existential risk. But it also has its critics. One of them is Ord’s fellow Oxford professor Will MacAskill, who argued recently that we’re probably not living in the most important era in history, after all.

His core claim is this: For almost every century in all of human history, people who think they’re in the most important century of human history will be wrong. There are 50,000 years of human history behind us, and potentially hundreds of thousands more ahead of us. Sure, we might be facing some big crises now, but the idea that they’re the biggest crises we’ll ever face starts out as extremely implausible.

This might not sound like it has important implications, but it does. If this is a particularly critical century, then focusing on the immediate challenges in front of us is the best thing to do for the long-term future — for example, throwing all our resources at tackling the biggest threats on the horizon. If this is not a critical century, then it makes more sense to focus on how we can shape future generations — perhaps by setting up new long-lasting institutions, funding philosophy and ethics research, and working to educate future generations.

The argument for this century’s critical importance

“We live during the hinge of history,” famous British philosopher Derek Parfit argued in his 2011 book On What Matters. “Given the scientific and technological discoveries of the last two centuries, the world has never changed as fast. We shall soon have even greater powers to transform, not only our surroundings, but ourselves and our successors. If we act wisely in the next few centuries, humanity will survive its most dangerous and decisive period.”

There are three core arguments, as MacAskill identifies them, that we’re living in a critical moment in human history. The first is the one presented above: call it the “time of perils” argument.

It used to be impossible for the human race to wholly wipe itself out. Experts disagree on exactly how disastrous a nuclear war would be, but suffice it to say that self-destruction is probably no longer impossible — and it’s getting easier every year. A global pandemic comparable to the one in 1918 could be catastrophic for the world today, and there’s no real reason to think that the pandemic of 1918 is the worst it can get. And engineered pandemics could be even worse.

Other forms of risk to humanity are more distant. Climate change will not make the Earth uninhabitable, but it can certainly make it more fragile, less resilient, less globally coordinated, and more vulnerable to additional shocks to the ecosystem or to the geopolitical environment. Artificial intelligence researchers disagree on whether transformative AI is ten years away or 200, but many agree that, unless it is carefully designed, its arrival could be catastrophic.

“We’re currently in this very special time by the standards of human history, when our actions could destroy our world, or at least it’s very plausible that they could,” Ord argues. To be clear, he’s not sure that this very special time is precisely a century long; it could, he told me, easily last several hundred years. But it can’t last forever. If we survive each year only by getting lucky, some year we’ll get unlucky. For that reason, existential risk researchers hope we can end the global situation that produces such perils entirely, instead of just trying to survive the perils of each given year. If we did that, it’d have an enormous, decisive effect on the future.
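Ord’s “running out of luck” point is, at bottom, compound probability. A minimal sketch of the arithmetic — the 0.1 percent annual risk figure below is an arbitrary placeholder, not Ord’s estimate or anyone else’s:

```python
# Sketch of the "running out of luck" argument: even a small annual risk of
# catastrophe compounds toward near-certainty over enough years. The 0.1%
# figure is an illustrative placeholder, not an estimate from Ord.
def survival_probability(years: int, annual_risk: float) -> float:
    """Chance of getting 'lucky' every single year for `years` years."""
    return (1 - annual_risk) ** years

annual_risk = 0.001  # hypothetical 0.1% chance of global catastrophe per year

for years in (100, 1_000, 10_000):
    odds = survival_probability(years, annual_risk)
    print(f"{years:>6} years: {odds:.1%} chance of an unbroken lucky streak")
```

Even if each individual year is quite safe, the odds of an unbroken lucky streak fall off exponentially — which is why Ord argues for ending the risky situation rather than trying to ride it out indefinitely.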

This is a significant line of argument for the importance of this century. But it’s not the only one. Another is the “values lock-in” view: the idea that there are some ways humans in the near future could lock in one particular course for humans in the more distant future. And if we do that, we need to make sure we don’t lock out the potential for future moral progress.

Some researchers believe that developing more advanced technology will involve handing off many of the most important questions about human values — putting them, effectively, outside of human control. If we program the first advanced computer system to share our values, then that’s what it will do, even if we later decide we wanted something different.

Most researchers who are concerned with “values lock-in” situations are thinking about artificial intelligence. But there’s a more general form of the argument. MacAskill summarizes it as “the most pivotal point in time is when we develop techniques for engineering the motivations and values of the subsequent generation (such as through AI, but also perhaps through other technology, such as genetic engineering or advanced brainwashing technology).” Or, if a sufficiently authoritarian and powerful dictatorship were to emerge, new technology might mean that future generations were powerless to overthrow or reform it.

That all sounds firmly in the realm of science fiction. But the science fiction of 2019 may well describe the genuine social problems of 2100 — a century is a very long time.

There’s a final line of argument about what makes this time unusual. For most of history, the whole world couldn’t coordinate around one course of action — even if we agreed on it. Global communications technology has changed that. For most of history, economic growth was slow or nonexistent. Now, there’s rapid and transformative economic growth, and some people think this cannot possibly be sustained into the distant future. All of that makes this an unusual time — and perhaps a time when people determined to change the world are unusually empowered to do so.

Why we have lots of competition for the most important century

So that’s the case that this is the most important time in history. What’s the case against?

MacAskill’s argument goes like this. Human history could potentially stretch for billions of years, and has already stretched for tens or hundreds of thousands (depending on when you start counting civilizations as relevantly part of human history). So, he argues, if we’re trying to figure out when the most critical century in history is, we should think about all of those billions of potential years.

“Out of all those years, there is just one time that is the most influential. According to [the hypothesis], that time is… right now. If true, that would seem like an extraordinary coincidence, which should make us suspicious of whatever reasoning led us to that conclusion,” he says.
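MacAskill’s “extraordinary coincidence” point can be read as a claim about priors. A toy version of the base-rate calculation — both horizon figures below are illustrative placeholders, not his actual numbers:

```python
# Toy version of the base-rate argument: if history could span millions of
# centuries, a uniform prior gives any one century only a tiny chance of
# being the single most influential. Both figures are placeholders.
centuries_so_far = 500                   # ~50,000 years of human history
potential_future_centuries = 10_000_000  # if civilization lasts a billion years

total_centuries = centuries_so_far + potential_future_centuries
prior = 1 / total_centuries  # uniform prior over all centuries

print(f"Prior that this century is the most influential: 1 in {total_centuries:,}")
```

Evidence — nuclear weapons, transformative AI — can raise that prior, but MacAskill’s point is that it starts so low that the evidence has to do an enormous amount of work.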

Another way to think about it: the above few paragraphs make the case that this is a uniquely important era. But one could easily imagine an article from Vox in 1750 making the case that the century ahead was a uniquely important era — after all, by the end of it, America would be independent, the international slave trade would be ending, and the French Revolution and the Napoleonic era lay ahead, poised to catastrophically shake Europe. A century after that, a series of democratic revolutions across Europe struck many of the era’s observers as the most important moment in all of history.

A Vox article at the time of the collapse of the Roman Empire would have a pretty good claim to be talking about the most important era in human history. And what about the eras when major world religions were founded?

And that’s before we even start thinking about the distant future. Perhaps Vox in 3400 would be making the case, via whatever channels of communication are popular in 3400, that the decision of how to colonize the Andromeda Galaxy is the most important moment in human history.

Sure, we have some pretty good arguments for the importance of our era. But ... doesn’t everybody? Are the arguments for the 21st century really that much stronger than the arguments for the 1st century, or for centuries yet to come?

Under this view, sure, we have some serious challenges ahead of us. But it’s a mistake to think we’re in a unique moment in history. There’s every reason to think that the challenges faced in future centuries will be as significant.

Why it matters how much this century matters

It’s easy to see this debate as impossibly abstract. Who cares whether we’re in the most important century in history or just a very important century that, statistically speaking, will probably be surpassed by some other one millions of years into the future?

There’s certainly something to this observation. The philosophers who debate this actually disagree on fairly little. They agree that there’s some evidence this century is uniquely important (though they differ on whether it’s enough evidence to overcome the inherent unlikeliness). Almost all of them want vastly more resources to be devoted to combating existential risks — things that might wipe us out in the next few centuries. Almost all of them think that one of the great moral failings of our generation is our failure to ensure there’ll be a next generation.

But it’s not just an abstract philosophy argument, either. If humanity’s biggest problems are best left to our grandchildren and their grandchildren, then it makes sense to set up enduring human institutions — foundations that will be around for centuries, with the power to influence successive generations. If, instead, this is the crucial moment in human history, such long-lasting foundations aren’t a top priority; the balance of our efforts should be spent less on long-term prioritization questions and more on action — like political efforts to reverse course on dangerous human activities, and research on how to mitigate the immediate dangers of present threats.

Some people who study the far future think we mostly need targeted interventions to benefit it — something like stopping an asteroid from hitting Earth. Others favor broad ways of helping the far future, like making the population more altruistic, educated, or compassionate, trusting that whatever problems arise, these traits are likely to help. The question of whether this century is unique — or at least highly unusual — might affect whether targeted or broad interventions seem better.

In short, how we think about the threats of this century might significantly affect how we go about addressing the threats of this century. And since — here, all the researchers agree — there are a lot of challenges ahead of us, we want to make sure we’re addressing them right.
