Someday soon, say tech optimists, humans might be able to upload their consciousness to machines. There it could live forever, be backed up in the cloud, replicated across the planet, and downloaded into new hardware whenever needed. Boosters call such a moment "the singularity," since it would represent a point beyond which the human race would be forever and unpredictably altered. Critics, on the other hand, just roll their eyes.

But if, by some miracle, humanity does manage to turn itself into and/or build a host of Cylons, that would be a Pretty Big Change, and things that create Pretty Big Changes should be studied. But should they be studied even at a cost of $150 billion?

That's the argument of Max Tegmark, an MIT physicist, writing for "big questions" site Edge.org. He's not convinced the singularity will arrive, and he's not convinced its arrival would even be a good thing. But he is convinced the singularity would have absolutely stunning consequences for humanity.

On one hand, it could potentially solve most of our problems, even mortality. It could also open up space, the final frontier: unshackled by the limitations of our human bodies, such advanced life could rise up and eventually make much of our observable universe come alive. On the other hand, it could destroy life as we know it and everything we care about... Objectively, whoever or whatever controls this technology would rapidly become the world's wealthiest and most powerful, outsmarting all financial markets, out-inventing and out-patenting all human researchers, and out-manipulating all human leaders. Even if we humans nominally merge with such machines, we might have no guarantees whatsoever about the ultimate outcome, making it feel less like a merger and more like a hostile corporate takeover. Subjectively, these machines wouldn't feel like we do. Would they feel anything at all? I believe that consciousness is the way information feels when being processed. I therefore think it's likely that they too would feel self-aware, and should be viewed not as mere lifeless machines but as conscious beings like us—but with a consciousness that subjectively feels quite different from ours.

And, if there's even a tiny chance that the singularity could arrive, he says, we had better get a research program going to think about the best ways to deal with the coming immortality/cyborg apocalypse/colonization of the universe. That research program may be expensive, however. Tegmark has a modest proposal:

[The singularity] could be the best or worst thing ever to happen to life as we know it, so if there's even a one percent chance that there'll be a singularity in our lifetime, I think a reasonable precaution would be to spend at least one percent of our GDP studying the issue and deciding what to do about it. Yet we largely ignore it, and are curiously complacent about life as we know it getting transformed. What we should be worried about is that we're not worried.

Let's assume that "our GDP" here refers solely to the United States. In 2011, US gross domestic product hit approximately $15 trillion; one percent of that figure would come to a whopping $150 billion. If the EU did its own singularity research at one percent of its GDP, that would add another $170 billion to the pot. Should China and other states contribute at similar levels, this singularity research project could approach the $400 billion range.
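The back-of-the-envelope math above can be sketched in a few lines of Python. The US and EU GDP figures are the approximate 2011 values cited here; the China figure is an assumption added purely for illustration of how the total approaches $400 billion.

```python
# Tegmark's "one percent of GDP" proposal, as rough arithmetic.
# US and EU figures are the article's approximate 2011 numbers;
# the China figure is an assumed value for illustration only.
gdp_2011 = {
    "US": 15e12,      # ~$15 trillion
    "EU": 17e12,      # ~$17 trillion
    "China": 7.5e12,  # assumed ~$7.5 trillion
}

SHARE = 0.01  # Tegmark's suggested one percent

# Each region's hypothetical research budget.
budgets = {region: gdp * SHARE for region, gdp in gdp_2011.items()}
for region, budget in budgets.items():
    print(f"{region}: ${budget / 1e9:.0f} billion")

total = sum(budgets.values())
print(f"Combined: ${total / 1e9:.0f} billion")
```

With these inputs, the US contribution comes to $150 billion, the EU's to $170 billion, and the combined total lands just shy of $400 billion, matching the figures in the paragraph above.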

That's a lot of cash. Whether a singularity research project would be worth the money seems to depend on how likely the singularity is to become a reality. Say, for the sake of argument, that we accept Tegmark's "one percent chance" threshold as the proper one—does the singularity have a greater than one percent chance of happening this century?

As a perennial skeptic of most ideas that involve "uploading our consciousness" or "superhuman artificial intelligence," I'm more than a little doubtful. Sci-fi author Bruce Sterling, who also wrote the nonfiction classic The Hacker Crackdown, is with me.

"It's just not happening," Sterling wrote in his own Edge.org commentary. "All the symptoms are absent. Computer hardware is not accelerating on any exponential runway beyond all hope of control. We're no closer to 'self-aware' machines than we were in the remote 1960s. Modern wireless devices in a modern Cloud are an entirely different cyber-paradigm than imaginary 1990s 'minds on nonbiological substrates' that might allegedly have the 'computational power of a human brain.' A Singularity has no business model, no major power group in our society is interested in provoking one, nobody who matters sees any reason to create one, there's no there there."

On the other hand, if the singularity does arrive despite my skepticism, I've already picked out the machine I'd like to house my future consciousness: the model six Cylon.