It is not easy to dismiss the Fermi paradox. Either technological civilizations, or even life itself, are extremely unlikely or too short-lived, or some exotic theory holds, of which there are many. For instance, aliens may have already conquered the galaxy and are among us; or they retreated into some digital simulated life rather than venturing into space; or perhaps we live in a computer simulation in which Earth is the only simulated planet with life. Here, just for the record, I briefly go over the three explanations I find most likely (at least today).

One important requirement for an explanation is that it should not rely on undue assumptions about other civilizations. “Other civilizations don’t wish to expand like we do” won’t do: some of them may not, but some may. It only takes one civilization with a mentality similar to ours to expand across the galaxy. If we are the first, and if we don’t destroy ourselves, we will most likely eventually expand.
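The force of this argument is a timescale comparison: even a leisurely expansion fills the galaxy in a blink of galactic time, so a single expansionist civilization should already be everywhere. A minimal back-of-the-envelope sketch, where every parameter is an illustrative assumption rather than a measured value:

```python
# Rough timescale for one civilization to span the Milky Way.
# All numbers below are illustrative assumptions, not measurements.
GALAXY_DIAMETER_LY = 100_000   # Milky Way disk diameter, light-years
GALAXY_AGE_YR = 13e9           # approximate age of the galaxy, years
EXPANSION_SPEED_C = 0.01       # assumed net expansion speed, fraction of c
SETTLING_OVERHEAD = 10         # hypothetical slowdown factor for pauses
                               # to build ships and factories en route

travel_time = GALAXY_DIAMETER_LY / EXPANSION_SPEED_C  # years to cross at 1% c
colonization_time = travel_time * SETTLING_OVERHEAD

print(f"crossing time:   {travel_time:.1e} yr")       # 1.0e+07 yr
print(f"with overhead:   {colonization_time:.1e} yr")
print(f"share of galactic age: {colonization_time / GALAXY_AGE_YR:.1%}")
```

Even with a tenfold overhead for settling along the way, the whole galaxy is covered in well under 1% of its age, which is why “some may expand” is enough to generate the paradox.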

So here are my top three explanations:

A technological great filter lies ahead of us. I find it likely that science and technology require a certain free spirit of innovation, exploration, and individuality. Across our history, the leverage of an individual to cause damage has increased steadily. Ten thousand years ago, a strong and mean individual could kill a few people and bring down a hut or two. Today, a bad actor can cause far more harm. What if there is a technology that will unavoidably be invented, one that gives anyone the ability to instantly and irreversibly destroy the civilization? For example, an exotic and easily tapped energy source, or downloadable code for grey goo. If such a technology inexorably lies ahead of us, which is plausible, it is difficult to imagine how we could prevent every single individual from deploying it. What about other civilizations: could a collectivist civilization, akin to an ant colony, avoid such doom? Brains are expensive; in a collectivist civilization that confers no evolutionary advantage on individual intelligence, “free-riders” would shed their brains. It is therefore conceivable that every technological civilization consists of competing individuals, and that in every single one of them some individual eventually and inexorably triggers the doomsday machine. One catch to this explanation: for “best results,” the doomsday machine must be triggered before exponential space exploration commences.

Aliens are among us. The first civilization to develop space travel, if similar to us in mindset, will likely want to expand at least defensively across the galaxy and beyond, if nothing else to prevent future aggressor civilizations from expanding, or because it is aware of the destructive abilities of even inferior civilizations (think: grey goo) and wants to monitor the galaxy. A defensive expansion is more likely, a no-brainer compared to rapid colonization, which has the downside of creating potential future competitors. A civilization that interconnects into a big internet-brain may have little use for distant colonies and may expand at a rate much lower than 1% of the speed of light. In the defensive-expansion scenario, the civilization will still rapidly send out robot factories to build drones that monitor all interesting planetary systems, ready to unleash destructive force on anything that looks threatening. Incidentally, UFOs are becoming mainstream. If UFO reports are to be believed (OK, a big IF), then the reported UFOs are acting exactly as expected of drones that inspect things, are unconcerned about us, and are ready to engage if anything they deem threatening appears. Which raises the important question of what they might deem threatening. Or perhaps aliens are among us in the quantum realm or in some other unexpected physical form. Exponential technological progress has to reach one or a few phase transitions, after which all bets are off. To advanced aliens, components such as neurons or silicon transistors will seem hopelessly bulky and inefficient as computational building blocks. Hence, as a colleague pointed out, SETI is severely outdated, using the technology and reasoning of the 1950s to search for aliens, and should broaden its scope and methods. I bet Carl Sagan, my childhood hero and a pioneer of SETI, would agree.

Technological civilizations are unlikely. This is the explanation I find least likely (rather, I leave room for an entirely different explanation, such as a specific and compelling hypothesis of why a sufficiently advanced civilization finds the visible universe uninteresting or explores it invisibly). Nervous systems most likely evolved only once on Earth; higher intelligence, however, has evolved independently multiple times. Orangutans and chimps, dolphins and whales, elephants, ravens and crows, kea and African Grey parrots, and, quite independently, octopuses and squids all show remarkable intelligence. Many species use tools. We are the first to develop technology on Earth, but isn’t it a stretch to assert that if we weren’t around, no other species on Earth would develop technology in the next 100 million years? Or 1 billion years? What if life itself is vanishingly unlikely? Again, I don’t think that’s a robust explanation. The first step of life cannot be unlikely: while liquid water appeared on Earth 4.4 billion years ago, the first evidence of life may date back to 4.3 billion years ago, which hints at life originating quickly, in geological terms, once conditions are right. If any step in the evolution toward intelligence were vanishingly unlikely, that step would most likely have taken a disproportionately long time on Earth. That is not what we observe: the last universal common ancestor appears about 3.5 billion years ago (bya), after a steady evolution of basic biomolecular functions; photosynthesis appears 3 bya; land microbes 2.8 bya; cyanobacteria’s oxygenic photosynthesis 2.5 bya; eukaryotes 1.85 bya; land fungi 1.3 bya; sexual reproduction 1.2 bya; marine eukaryotes 1 bya; protozoa 750 million years ago; and so on, steadily evolving into intelligent species over the past few hundred million years. The coarse-grained breakdown of evolution’s steps in the early billions of years reflects our lack of data on the ancient progression of molecular biology rather than any single vanishingly unlikely event.


Incidentally, I want to caution against jumping to the anthropic principle and declaring that there is nothing puzzling about seemingly being alone, on the grounds that whichever civilization happens to be the sole intelligent one will necessarily find itself puzzled about being alone. The anthropic principle is quite unsatisfying to begin with in cosmology. At least there, however, we have a single observed event to explain, namely the universe and its cosmological properties, and no expectation of observing other similar events (i.e., other universes). In the case of the Fermi paradox, because there may be as-yet-unobserved civilizations lurking around, we have to weigh any theory of us being alone against some prior probability of it being true. Given our observations on Earth, the prior probability we assign to technological civilizations cannot be vanishingly small: everything points to steady biochemical and then organismal evolution, from the formation of water all the way to intelligent tool-using species. We therefore have to make every effort to exclude other explanations before we jump to the conclusion that we are alone.

So where does this leave us? I hope the first explanation is false. The second is no good news either. The third is wishful thinking, or perhaps scary too. I would love to see a better explanation. If you have a favorite explanation in mind, or thoughts to share, please comment below!
