The Fermi Paradox is the apparent contradiction between the high probability of extraterrestrial life in the universe and humanity's lack of contact with alien civilizations. For every grain of sand on Earth there are an estimated 10,000 stars, a vastness so immense it escapes comprehension. Humanity should not be the only intelligent life in the universe, yet no evidence suggests otherwise. Robin Hanson is among the pragmatic thinkers who answer the paradox by sharpening the distinction between finding life and finding intelligent life.

In 1961, Frank Drake set out to find a systematic means of estimating the probability of alien life. The Drake equation multiplies together the rate of star formation, the fraction of stars with planets, the number of planets per system that could support life, the fraction of those planets that develop life, the fraction of life that becomes intelligent, and the fraction of intelligent life that develops detectable technology. Even conservative calculations estimate there are at least 1,000 space-faring civilizations in our own galaxy, yet searches by the Search for Extraterrestrial Intelligence (SETI) institute have found nothing, and the messages humanity has beamed into space go unanswered.
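The equation itself is simply a product of those factors. A minimal sketch in Python, where every input value is an illustrative assumption chosen for demonstration, not a measured quantity:

```python
# Drake equation: N = R* * fp * ne * fl * fi * fc * L
# All input values below are illustrative assumptions, not measurements.
R_star = 2.0      # average rate of star formation in the galaxy (stars/year)
f_p    = 0.5      # fraction of stars with planetary systems
n_e    = 2.0      # planets per system that could support life
f_l    = 0.5      # fraction of those planets where life actually develops
f_i    = 0.1      # fraction of life-bearing planets that evolve intelligence
f_c    = 0.1      # fraction of intelligent species with detectable technology
L      = 100_000  # years a civilization releases detectable signals

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(f"Estimated communicating civilizations: {N:.0f}")  # → 1000
```

Because several of these fractions are pure guesswork, published estimates of N range from far below one to millions; the inputs above were picked to land near the 1,000-civilization figure.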

The Drake equation estimates the number of intelligent civilizations, but the Kardashev scale is needed to classify them. Scientists are looking for advanced life that uses technology. The scale, created by the Soviet astronomer Nikolai Kardashev, measures an intelligent civilization's level of technological capability. It sorts civilizations into three categories, Type 1, Type 2, and Type 3, based on how much energy they can harness.

Type 1 Civilization: harnesses all of the energy available on its planet. Human beings are not yet a Type 1 civilization, but may reach that level within the next 100–200 years.

Type 2 Civilization: harnesses all of the energy of its local star, tapping directly into the star's output with hypothetical technology such as a Dyson sphere.

Type 3 Civilization: commands power comparable to the total energy output of the Milky Way galaxy. A Type 3 civilization might look at humans the way humans look at chimpanzees.
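Carl Sagan later proposed a continuous version of the scale that interpolates between the three types: K = (log10 P − 6) / 10, where P is the power a civilization commands in watts. A short sketch (humanity's roughly 2×10^13 W of power use is an approximate figure, and the thresholds are the round numbers used in Sagan's formulation):

```python
import math

def kardashev_type(power_watts: float) -> float:
    """Sagan's interpolation of the Kardashev scale:
    K = (log10(P) - 6) / 10, with P in watts."""
    return (math.log10(power_watts) - 6) / 10

# Humanity's current power use, approximately 2e13 W
print(f"Humanity today: Type {kardashev_type(2e13):.2f}")   # → ~0.73
print(f"Type 1 threshold: {kardashev_type(1e16):.1f}")      # planetary scale
print(f"Type 2 threshold: {kardashev_type(1e26):.1f}")      # stellar scale
print(f"Type 3 threshold: {kardashev_type(1e36):.1f}")      # galactic scale
```

On this continuous scale humanity sits near 0.7, which is why a full Type 1 civilization remains decades or centuries away.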

Advancing from Type 1 to Type 3 takes millions, perhaps billions, of years. If the Drake equation is correct, then at least one of those 1,000 intelligent civilizations should have advanced to Type 3 by now. Given its power, a Type 3 civilization should leave waste or remnants of technology scattered about space, yet the only trash in space comes from Earth. Something must impede intelligent civilizations from advancing from one type to the next.

Robin Hanson proposed an answer: the failure to find extraterrestrial life implies something is wrong with the argument that advanced civilizations are probable. Some "filter" acts as a barrier along the evolutionary path to intelligence, an event that halts a civilization's development and may be impossible to overcome. This barrier is known as the Great Filter. The question becomes: where in the evolutionary chain of events does it occur?

Optimists contend that human beings have already passed the Great Filter; it may simply be very rare for life to achieve the level of intelligence humanity possesses. Life may be far rarer than the Drake equation predicts. Hundreds of millions of years passed after Earth formed before life appeared. Maybe the stars are devoid of life because there really is no other life.

Life does not survive easily. It persisted as simple prokaryotic cells for roughly two billion years before evolving into eukaryotic cells, complex cells with a nucleus. The barrier might have been that jump from prokaryote to eukaryote: the universe might be crawling with single-celled organisms that never evolve into more complex life. If the Great Filter lies in the past, then humanity is free to colonize the stars and eventually harness the power of galaxies.

If the Great Filter has not already happened, then the future looks grim. Some cataclysmic event may be waiting for us, whether an unseen asteroid or sentient robots. Perhaps all intelligent life is inherently self-destructive; humanity nearly committed suicide numerous times during the Cold War. Researchers can only theorize about what apocalyptic event might end humanity. The universe itself may be too inhospitable for intelligent life to thrive and become a Type 3 civilization.

Oxford University philosopher Nick Bostrom says the discovery of fossilized complex life on another world "would be by far the worst news ever printed on a newspaper cover." The discovery would nearly confirm that the Great Filter is waiting for us somewhere in the future. It would imply that life begins easily throughout the universe but that its advance is halted by some later factor. If we find no other life on nearby planets such as Mars, then humanity may have already passed safely through the Great Filter.

The conditions of the universe may have only recently changed to allow life to flourish. The first 13 billion years may have been steeped in chaos and cataclysm, with gamma-ray bursts immolating planets and supernovas destroying solar systems. Humanity may exist at the right time, and in the right place, to move toward super-intelligence. In the trailer for Interstellar, Michael Caine says, "We must confront the reality that nothing in our solar system can help us." The universe is indifferent to humanity's predicament. Even Earth was inhospitable for billions of years before life evolved; volcanoes, earthquakes, and meteors combined to keep the planet in a constant state of apocalypse.

The universe is 13.8 billion years old, and the modern world has existed for only the last second of the last hour of the last day on the cosmic calendar. Scientists searching for life may not have discovered any evidence of extraterrestrials, but that does not mean they do not exist. The observable universe is only a slice of a colossal space, and modern technology has existed for only a microsecond of cosmic history. We may not be looking in the right place, or for the right signals. Maybe aliens know about humanity and have ostracized us from the galactic community. Alien civilizations may be so different from our own that they are incomprehensible. Arthur C. Clarke said, "Two possibilities exist: either we are alone in the Universe or we are not. Both are equally terrifying." Humanity will continue to search the stars until its curiosity is satisfied.

As mentioned earlier, Professor Robin Hanson takes a logical approach to the search for extraterrestrial life. The Great Filter, as he calls it, provides a meaningful basis for reasoning about the possibilities of life on other planets.


An Interview with Robin Hanson

It is great to hear that you are familiar with and remember OMNI magazine from the 1980s. Is there anything about OMNI that stands out in your memory?

I vaguely recall complaints that OMNI was “too speculative”, though not being personally bothered by anything specific. Looking back now it seems OMNI was biased to expecting more rapid tech and social progress than was actually realized. But this bias was probably what readers wanted; they’d buy less of a more realistic magazine.

How do you think that science fiction and scientific discovery interact in today’s society?

Science fiction engages readers by drawing from news and speculation on science and technology. This inspires many to enter into sci/tech careers. More rarely, science fiction inspires particular sci/tech projects. Some claim science fiction helps us predict the future, but suspiciously no one ever collects datasets to formally test this claim. In my experience, when people respond to futuristic topics with science fiction references, the referenced works rarely contain much relevant insight.

Have we passed the Great Filter? You explored this question in The Great Filter - Are We Almost Past It? in 1998. Have your theories on this changed at all since then?

Since 1998 we have learned that the future window for life on Earth seems wider, suggesting that fewer very hard filter steps could have happened on Earth, plausibly only one to three. (And none in the last half billion years.) We’ve also learned that anthropic selection arguments make a future filter more likely than it otherwise would be. The first result weakly suggests more optimism, but the latter result more strongly suggests pessimism.

In your piece The Great Filter - Are We Almost Past It? you state, “The larger the remaining filter we face, the more carefully humanity should try to avoid negative scenarios.” What is the most important thing that we’re doing wrong?

Our obvious failing is not putting enough effort into thinking about what could go wrong and how to stop such things, and we are not actually doing most of what we’ve thought of so far. For example, we could do much more to prevent pandemics. More generally, I’ve proposed “refuge futures” to predict the causes and chances of many kinds of big disasters.

"The more I study science, the more I believe in God." - Albert Einstein. What do you think of this quote? How do religion and science interact in your own life?

Religion seems quite useful; religious people tend to be happier, live longer, smoke less, exercise more, earn more, get and stay married more, commit less crime, use less illegal drugs, have more social connections, donate and volunteer more, and have more kids. And it isn’t crazy to think there might be vast powers far away out there. Even so, given all we know it is basically crazy to think such vast powers regularly listen to human prayers and respond by intervening in the details of individual human lives.

Why did you decide to write The Age of Em as a book instead of a research paper or a thesis? How does writing a book compare to the other pieces you’ve written for publication in the past?

The Age of Em is about work, love, and life when a certain kind of robot (brain emulations) becomes smart and cheap enough to displace humans on most jobs, and in effect rule the Earth. I’ve waited to write books in part because I think they should be reserved for points one can’t make as persuasively in a sequence of articles. I have several such points to make, but for my first book I picked the topic that would most engross me, so I’d finish it. After all, I hear half the people who get a book contract never deliver a book. I learned that taking four years to write a very unusual book can be lonely; few can offer discussion or encouragement until the book is finished.

What do you think about the theory of technological singularity? How does your upcoming book, The Age of Em, explore and support/disprove this?

The word “singularity” has many associations. For some it means “when robots are as smart or smarter than humans,” and my book details exactly such a scenario. For some it means “when rates of change greatly increase,” and this also happens in my scenario, as it has happened before at least three times, at the introduction of humans, farming, and industry. But for some “singularity” means “a horizon beyond which we cannot see.” I deny this claim, and try to show in detail how we can see well beyond such changes.

What kind of backlash have you received from your theories?

My book estimates that the future is like the past in seeing only modest global coordination. Some readers disagree and expect great global coordination, such as via the first group with a technology lead parlaying that into world conquest. A lack of coordination suggests the reversal of many consistent industrial-era trends, such as increasing income, leisure, democracy, and civil rights. Some readers are offended that I’ve violated a common taboo against estimating that we will fail to prevent outcomes many see as undesirable. However, many other readers are fascinated by the strange yet understandable world I describe.

Do you believe that the Great Filter is a natural event, or is intelligent life inherently self destructive?

Past filter steps are both driven by internal dynamics, and by how such dynamics influence life’s resilience against external disturbances. But future filter steps seem to be almost all about internal dynamics.

What is the most bizarre alternative to the Great Filter theory that you’ve heard?

Honestly I’m not good at remembering things I consider bizarre; they seem too unlikely to be worth remembering.

Can you elaborate on your decision to have your head cryonically frozen in the event of a medical death? Why your head as opposed to your whole body?

I’m gambling that the organization that freezes my head will last for many decades, and then that future brain scans of that frozen head can read enough information to make a brain emulation of me. Freezing my body adds to the cost but little to the gain of this plan. By my calculation, I need only a 5% chance of success to justify the expense.
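Hanson's 5% figure follows from ordinary expected-value reasoning: a gamble is worth taking when the probability of success times the payoff exceeds the cost. A toy illustration (the 20:1 value-to-cost ratio below is an assumption chosen for demonstration, not Hanson's actual estimate):

```python
def break_even_probability(cost: float, value: float) -> float:
    """Smallest success probability at which an expected-value bet
    pays off: take the gamble when p * value > cost."""
    return cost / value

# Illustrative numbers only: if a successful revival is valued at
# 20x the cost of preservation, a 5% chance of success breaks even.
p = break_even_probability(cost=1.0, value=20.0)
print(f"Break-even probability: {p:.0%}")  # → 5%
```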

Finally, what can we expect from you over the next couple of years?

My second book, coauthored with Kevin Simler and also from Oxford University Press, comes in spring 2017: The Elephant in the Brain: Hidden Motives in Everyday Life. More big-idea books will probably follow.

How is it possible for you to forecast the social consequences of a very disruptive future technology?

Really, other than asking an unusual question I'm not being creative or contrarian. I happen to know the basics of an unusually wide range of academic fields, but in each field I’m just applying our simplest standard theories. I expect experts in each field to mostly recognize and approve of my applications. Perhaps the particular scenario I consider is especially well-suited to this approach, but even so a big issue is that people have mostly just not tried this sort of straightforward methodical approach.

What will Earth be like if robots rule the world? Professor Robin Hanson believes we can find the answer before it happens. In The Age of Em, he uses standard theories to paint a detailed picture of a world where brain emulations, or ems, dominate the economy and the world. Humans may not change greatly in the em era, but the ems themselves differ greatly from us, and they will make future humans question common assumptions about moral progress.