Forget robo-apocalypses: today we’re supposed to be seeing the advent (word chosen deliberately) of the one and only original Biblical apocalypse, at least according to some (Family Radio).

If it is after 6 pm somewhere in the world and there has not been an Earth-shattering earthquake, keep on reading, as I think robopocalypse makes a much better story than “judgement day”. If there has been an Earth-shattering earthquake somewhere in the world and people all around you are flying up into the air, well, try dodging the pilotless airplanes, the driverless cars and the unsupervised automated whatevers that will most likely blow up, and enjoy the show.

I’m going to set this up as quickly as I can: recently, the SETI program (Search for Extra-Terrestrial Intelligence) was defunded. I’ve been a fan of SETI for years and have run the SETI@Home program on my PC for quite some time. The program was just beginning to get to the point where it might have produced some results. (And given our luck, the once-a-millennium signal that our ultra-advanced neighbors send out will arrive just as the antennae are shut down.) Work goes on, but at a much less effective pace.

I recently reviewed Daniel H. Wilson’s novel Robopocalypse, which is pretty much what you’d expect given the title. The book is raising quite a storm, as it was optioned for the Spielberg blockbuster treatment even before the novel was finished; the film is scheduled for release in 2013.

I was a bit surprised that the story wasn’t as earth-shattering as I expected, given that the author is a PhD candidate in robotics and has produced a number of other robotically based works – though I did give the book kudos for pace and action. (It slows down a bit at the end.)

As such works are wont to do, it got me thinking heavily on the subject of the impending arrival of our robotic masters. I’m working on a rather lengthy piece on that subject for my blog right now, but here where we try to take hold of the ethereal, I’m going to restrict myself to one aspect of those musings.

Fermi’s Paradox, if you are not already familiar with it, goes something like this: if the Drake equation (the one we use to estimate how many extra-solar intelligences there might be in the galaxy) is even marginally correct, then there are millions of technologically based civilizations out there. Some are groping towards the stone age; others have achieved capabilities that we only read about in the 50s.
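To make that “millions” figure concrete, here is a minimal sketch of the Drake equation. Every parameter value below is an assumption chosen purely for illustration – none of these inputs is a measured quantity, and the real values are hotly debated:

```python
# Illustrative Drake equation estimate. All parameter values are assumed,
# optimistic placeholders -- the true values are unknown.

def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """N = R* * fp * ne * fl * fi * fc * L  (communicating civilizations)."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

N = drake(
    R_star=7,    # new stars formed per year in the galaxy (assumed)
    f_p=0.5,     # fraction of stars with planets (assumed)
    n_e=2,       # habitable planets per such star (assumed)
    f_l=0.5,     # fraction where life arises (assumed)
    f_i=0.1,     # fraction developing intelligence (assumed)
    f_c=0.1,     # fraction becoming detectable (assumed)
    L=10_000,    # years a civilization stays detectable (assumed)
)
print(f"N ≈ {N:.0f}")  # N ≈ 350 with these assumed inputs
```

Nudge any of those fractions upward and the count climbs into the thousands or millions; the point is only that even cautious guesses leave the galaxy far from empty.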

With that being the case – where are they?

Fermi’s question was simple. Given the estimated number of advanced civilizations in our galaxy, we ought to have run into some evidence of at least one by now. Why haven’t we?

A whole host of possible explanations has been offered to date, none of them very satisfactory: the ‘zoo’ hypothesis (we’re being kept isolated), the ‘too alien’ hypothesis (they are there, but our frames of reference are so different we’d never recognize their intelligence even if contacted), we’re the first (improbable), they’re all extinct, and so on. Every explanation put forth so far resides firmly in the realm of the unsatisfactory. Perhaps a shift in the frame of reference can offer further insight: one that solves the problem by removing us from the equation.

Reading Robopocalypse, I’ve begun thinking about a hybrid explanation; call it the ‘accidental zoo’ hypothesis.

It goes something like this: biological life that evolves to the point of becoming technological is merely a precursor. Its evolution necessarily leads down one of two paths – destruction and extinction, or the creation of so-called artificial intelligences (possibly followed by extinction).

Those artificial intelligences are the true ‘universal’ life form. The galaxy is rife with them, and they are all talking to each other on a plane and at a level that is at least one ‘singularity’ beyond us. (Biological entities are frail, short-lived, environmentally constrained, slow and completely unadapted to interstellar life and civilization.)

In other words, there are millions of technological civilizations throughout the galaxy, but so far as they are all concerned, we are nothing more than zygotes. We’re an evolutionary step on the way to intelligence and civilization, rather than the end point.

If you think about it, almost all of the grand concepts found in science fiction are ones for which a non-biological, (initially) machine-based entity is eminently suited.

They are effectively immortal.

Interstellar travel may still take place at or just under light speed (electromagnetic signal transmission), but it can take place without slowboats, hibernation or generation ships.

Cloning is as simple as making a copy.

‘You’ can be everywhere at once.

Evolution is a whim, not a long drawn out, random process.

Machine-based AIs are, in fact, the perfect entity for exploring and colonizing a galaxy.

The impact of being free from death can’t be overstated; even if physical exploration is limited to a slower-than-light von Neumann machine methodology – who cares? An AI could presumably alter its time sense, or literally shut itself down for centuries without concern. Since a copy of itself would be included with every package sent to another solar system, light-speed communications could begin immediately upon the arrival of the probe. And again, with life-spans comparable to the age of the universe, the time lag of communicating over interstellar distances is not an issue.
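For a sense of scale, here is a back-of-envelope sketch of how long a slower-than-light von Neumann wavefront might take to cross the galaxy. Every number in it – probe speed, hop distance, replication time – is an assumption for illustration only:

```python
# Rough timescale for a von Neumann probe wavefront crossing the galaxy.
# All parameters are assumed illustrative values, not measurements.

GALAXY_RADIUS_LY = 50_000  # rough radius of the Milky Way, in light years
YEARS_PER_LY = 100         # assumed probe speed of 0.01 c => 100 years per light year
HOP_LY = 5                 # assumed distance between successively colonized systems
REPLICATION_YRS = 500      # assumed time to build the next probe at each stop

hops = GALAXY_RADIUS_LY / HOP_LY
years = hops * (HOP_LY * YEARS_PER_LY + REPLICATION_YRS)
print(f"{years:,.0f} years")  # 10,000,000 years with these assumptions
```

Ten million years sounds enormous to a biological, but to an effectively immortal entity that can shut itself down between stops, it is, as argued above, simply not an issue.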

Furthermore, it could potentially be a very non-intrusive colonization (and, need I mention, one that is potentially exponential in development); the AI is not tied to a planet. In fact, resources for replicating itself are probably more economically obtained in an Oort cloud around the target system – which has the additional benefit of being an ideal launching point when it sends its own packages on to yet more systems. The mass of the Oort cloud is estimated at approximately five Earth masses; its boundary is thought to lie as much as 2 light years from Sol. Out there, you’re already well over 90% of the way out of the Sun’s gravity well.
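The gravity-well claim is easy to check, since the Newtonian potential scales as 1/r: the fraction of the Sun’s well (measured from Earth’s orbit) already behind you at distance r is 1 − 1 AU/r. The 2,000 AU figure for the inner Oort cloud below is an assumed round number:

```python
# Back-of-envelope check of the gravity-well claim. The Sun's gravitational
# potential per unit mass is -GM/r, so the fraction of the well already
# climbed, starting from Earth's orbit at 1 AU, is 1 - (1 AU / r).

def fraction_escaped(r_au):
    """Fraction of the Sun's gravity well (relative to 1 AU) behind you at r AU."""
    return 1 - 1 / r_au

print(fraction_escaped(10))    # ~0.9    -- 90% of the way out at only 10 AU
print(fraction_escaped(2000))  # ~0.9995 -- inner Oort cloud (assumed ~2,000 AU)
```

At Oort-cloud distances the well is, for practical purposes, already escaped; the climb out happens in the first few AU.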

Our presumed interstellar AI can most likely obtain energy from a wide variety of sources (starlight, heat transfer, chemical reaction), but one thing is for sure: it won’t rely on the crude conversion of sunlight via photosynthesis and a long food chain (though perhaps it might utilize a very efficient version of photosynthesis). Such methods are far too inefficient. I think it safe to say that whatever its humble beginnings, our AI entity will be all about improving efficiencies and capabilities, if for no other reason than that it can.

This is another tremendous adaptive ability possessed by AIs and only crudely employed by us: being electromagnetic in nature, the AI is not confined to one physical form, nor to the workings of biology to implement evolutionary change. We’re talking patches and upgrades here, not Mendel and breeding.

The preceding thoughts are obviously attempts at glimpsing beyond the singularity, something that by its very nature is impossible to do. We’ve got no way to see beyond the ‘change’. What form would the AI’s society take? Would it consist of one ‘mega-mind’, or of individual entities? (Would those be derived from the same source with different data sets, or from multiple origins?)

Would AIs that evolved around different solar systems ‘merge’ with one another or retain separate identities?

What moral and ethical structures would they operate under? It’s pretty clear from the preceding that beyond the instantiation event, the AI does not need biologicals for anything. There is, of course, nothing preventing AIs whose origin was extra-solar from visiting us, wiping us out (or putting us in a zoo or preserve) and installing their own kind in our solar system – but if we continue to suppose that such an entity would rely more on logic and reality than we do, it seems that doing such a thing would make no sense. There’s no economic benefit to it. Curiosity? Research? Perhaps, but such pursuits would not necessarily require the direct presence of the AI, nor that we be aware of its presence.

We assume that our new robotic overlords will be inimically inclined towards us for two very good reasons: friendly, helpful, obedient and subservient robots don’t make for good apocalyptic tales (some Asimov and Williamson stories excepted), and we’ve still got a very healthy survival instinct. It is best to be cautious of the unknown. (Can we eat that new animal, or can it eat us?)

The ultimate reality though may be a very mundane one: we’re not the end point of the evolution of intelligence, merely a stage along the path. We’re not being ignored, or observed, or saved for some nefarious purpose. We’re that hitchhiker at the on ramp that no one dares pick up.

We’re just being passed by.

Steve Davidson is an author and editor and a wanna-be fiction writer. He has enjoyed success as a non-fiction author with several books about the sport of paintball; his most recent, A Parent’s Guide to Paintball (Liaison Press), was recently distributed at the 100th Boy Scout Jamboree. He currently edits the paintball news website www.68caliber.com and has contributed a cross-over article to Fantasy Magazine. He enjoys wanna-be status in the realm of fiction writing, having placed only two flash fiction pieces (“Rough Trade”, “House For Sale”). He also blogs as The Crotchety Old Fan and curates The Classic Science Fiction Channel.