Roughly 108 billion people have ever been alive on planet Earth. If humanity survives another 50 million years (a reasonable length of time compared to other species’ tenures), then the total number of people who will ever live is about 3 quadrillion, or 3 million billion.

If you care about improving human lives, you should overwhelmingly care about those quadrillions of lives rather than the comparatively small number of people alive today. The 7.6 billion people now living, after all, amount to less than 0.0003 percent of everyone who will ever live. It’s reasonable to suggest that those quadrillions of future people have, accordingly, hundreds of thousands of times more moral weight than those of us living here today do.
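As a sanity check on the arithmetic above: the exact assumptions behind the 3 quadrillion figure aren’t spelled out here, but under purely illustrative assumptions of mine — a steady-state population of about 5 billion with 80-year average lifespans — a number of that order falls out:

```python
# Illustrative back-of-the-envelope check (assumed figures, not the
# dissertation's actual model): a steady-state population of 5 billion
# people living ~80 years each implies ~62.5 million births per year.
steady_state_population = 5e9   # assumed average population
lifespan_years = 80             # assumed average lifespan
years_remaining = 50e6          # survival horizon from the text

births_per_year = steady_state_population / lifespan_years
total_future_people = births_per_year * years_remaining
print(f"{total_future_people:.2e}")  # → 3.12e+15, i.e. ~3 quadrillion

# The 7.6 billion people alive today as a share of that total:
share_today = 7.6e9 / total_future_people
print(f"{share_today:.6%}")  # a tiny fraction of one percent
```

The exact inputs matter much less than the conclusion: on any remotely plausible assumptions, the living are a rounding error next to the potential future population.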

That’s the basic argument behind Nick Beckstead’s 2013 Rutgers philosophy dissertation, “On the overwhelming importance of shaping the far future.” It’s a glorious mindfuck of a thesis, not least because Beckstead shows very convincingly that this is a conclusion any plausible moral view would reach. It’s not just something that weird utilitarians have to deal with.

And Beckstead, to his considerable credit, walks the walk on this. He works at the Open Philanthropy Project on grants relating to the far future and runs a charitable fund for donors who want to prioritize the far future. And arguments from him and others have turned “long-termism” into a very vibrant, important strand of the effective altruism community.

But what does prioritizing the far future even mean?

The most literal thing it could mean is preventing human extinction, to ensure that the species persists as long as possible. For the long-term-focused effective altruists I know, that typically means identifying concrete threats to humanity’s continued existence — like unfriendly artificial intelligence, a pandemic, or runaway global warming and out-of-control geoengineering — and engaging in activities to prevent that specific eventuality.

But in a set of slides he made in 2013, Beckstead makes a compelling case that while that’s certainly part of what caring about the far future entails, approaches that address specific threats to humanity (which he calls “targeted” approaches) need to be complemented by “broad” approaches. Instead of trying to predict what’s going to kill us all, broad approaches just generally try to keep civilization running as well as it can, so that humanity as a whole is well-equipped to deal with potential extinction events — not just in 2030 or 2040, but in the year 3500, or 95,000, or even 37 million.

In other words, caring about the far future doesn’t mean just paying attention to low-probability risks of total annihilation; it also means acting on pressing needs now.

For example: We’re going to be better prepared to prevent extinction from AI or a supervirus or global warming if society as a whole makes a lot of scientific progress. And a significant bottleneck there is that the vast majority of humanity doesn’t get a high-enough-quality education to engage in scientific research, even if they want to, which reduces the odds that we’ll have enough trained scientists to come up with the breakthroughs we need as a civilization to survive and thrive.

So maybe one of the best things we can do for the far future is to improve school systems — here and now — to harness the group economist Raj Chetty calls “lost Einsteins” (potential innovators who are thwarted by poverty and inequality in rich countries) and, more importantly, the hundreds of millions of kids in developing countries dealing with even worse education systems than those in depressed communities in the rich world.

What if living ethically for the far future means living ethically now?

Beckstead mentions some other broad, or very broad, ideas (these are all his descriptions):

Help make computers faster so that people everywhere can work more efficiently

Change intellectual property law so that technological innovation can happen more quickly

Advocate for open borders so that people from poorly governed countries can move to better-governed countries and be more productive

Meta-research: improve incentives and norms in academic work to better advance human knowledge

Improve education

Advocate for political party X to make future people have values more like political party X

“If you look at these areas (economic growth and technological progress, access to information, individual capability, social coordination, motives) a lot of everyday good works contribute,” Beckstead writes. “An implication of this is that a lot of everyday good works are good from a broad perspective, even though hardly anyone thinks explicitly in terms of far future standards.”

Look at those examples again: It’s just a list of what normal altruistically motivated people, not effective altruism folks, generally do. Charities in the US love talking about the lost opportunities for innovation that poverty creates. Lots of smart people who want to make a difference become scientists, or try to work as teachers or on improving education policy, and lord knows there are plenty of people who become political party operatives out of a conviction that the moral consequences of the party’s platform are good.

All of which is to say: Maybe effective altruists aren’t that special, or at least maybe we don’t have access to that many specific and weird conclusions about how best to help the world. If the far future is what matters, and generally trying to make the world work better is among the best ways to help the far future, then effective altruism just becomes plain ol’ do-goodery.*

*One big caveat: A lot of this logic becomes dramatically complicated if you consider that non-human animals have moral worth. They were here before us, they will be here after us, and especially given that most of them live in the wild, where we have very limited knowledge of how to improve animal welfare, it would appear that their presence and moral weight turn the altruistic efforts of humans into … tinkering at the margins of the moral universe.
