I think I'm turning cyborg-ese, I really think so

Publication date: 18 April 2009

Originally published (in a much smaller version) in Atomic: Maximum Power Computing, 2008.

Last modified 03-Dec-2011.

Let me tell you how you can live forever.

Or, at least, for a very long time. Possibly a very, very long time, subjectively.

If it is possible for the activities of a human brain to be emulated by some other computing device, then it is possible for a human mind to be copied to a computer - "uploaded", in current sci-fi parlance. The computer can then run the human-mind program indefinitely. If it runs the human-mind program much faster than a normal brain can, then from the point of view of the emulated brain, thousands of years could go by per real-world minute.

Which could be very good, if you've got lots of interesting things to do in there. Or very bad, if your immortal computerised consciousness is just clawing at the inside of a featureless white digital coffin for a trillion trillion subjective years.

There's no evidence to suggest that a computer actually can simulate any brain more complex than an insect's, but there's also no reason to believe, as far as I can see, that the human brain is anything other than a fiendishly complex analogue computer. It's already easy for digital hardware to simulate simple analogue computers and other such hardware, and you can run a neural-network simulation with more complexity than a worm brain on a Commodore 64.
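To put the worm-brain claim in perspective, here's a minimal sketch. The neuron count is the 302 neurons of the C. elegans worm; the all-to-all wiring, random weights and tanh response are illustrative assumptions, not worm biology:

```python
import math
import random

random.seed(0)

N = 302      # roughly the entire neuron count of the C. elegans worm
STEPS = 100  # timesteps to simulate

# Random all-to-all weights: 302 x 302 is only ~91,000 numbers.
weights = [[random.uniform(-0.1, 0.1) for _ in range(N)] for _ in range(N)]
state = [random.uniform(-1.0, 1.0) for _ in range(N)]

for _ in range(STEPS):
    # Each neuron's next activation is a squashed weighted sum of all the others.
    state = [math.tanh(sum(w * s for w, s in zip(row, state))) for row in weights]

print(len(state), min(state) >= -1.0 and max(state) <= 1.0)  # prints "302 True"
```

A Commodore 64 would grind through this rather more slowly, but the point stands: the raw arithmetic of a worm-scale network is trivial.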

So let's assume, for the sake of argument, that it will, one day, be possible for computers to emulate a human brain[1], with any luck before everyone alive today has died of old age.

And, importantly, let's also assume that we'll come up with some way to scan a brain and upload it.

Hey presto: Something close to immortality for that brain, right?

Well, yes. But also no.

For all you know, a cloaked alien saucer just flew over you, scanned your brain and immediately set it running in a wonderfully stimulating simulated environment in the saucer's on-board computer.

That may be very fascinating for the copy in the computer, but it doesn't help the version of you that's still sitting here reading this Web site. That version will still die when it dies.

I don't much care how many copies of me exist, or will exist in the future. I'm concerned about the version of me that's sitting here typing these words.

Mathematician and physicist Frank J. Tipler has this problem all stitched up. In the distant future, he says, "Omega Point" ultra-computers will simulate all possible minds and environments, including all of the ancestors of whatever entities built the computers.

This also means that, statistically, it's virtually certain that we all actually already are simulated minds in an "ancestor simulation", since there'll be so many more simulated minds at the Omega Point than there have been in the entire previous history of the universe.

(If space-time does have a fundamental granularity, then that's pretty much what you'd expect to see in a simulated universe, based on something like Conway's Life except, y'know, probably more complicated.)

And, oh, the Omega Point is quite literally God, and He loves us.

I believe it is fair to describe the Omega Point Theory's reception among other physicists, computer scientists and philosophers as "chilly".

So let's go right out on a limb and assume that we are not living in an ancestor simulation running in a late-universe Matrioshka hypercomputer. So we all are, in fact, living creatures in what we may laughingly call a real material world.

We are, therefore, still faced with the problem of uploading ourselves without just making a copy.

The problem, it seems to me, is discontinuity.

All of us are very different creatures from the ones we were when we were five years old, but we're not concerned about the "death" of the young version of ourselves. That version just slowly and continuously turned into the current version.

Heck, if a serious change in brain-state counts as a death, then you die every time you go to sleep. The person that wakes up in the morning just happens to share the name, body and memories of the one who fell asleep last night, and who thus ceased to exist[2].

Fortunately, this shatteringly horrifying (to me, at least) idea about the nature of the identity/consciousness relationship can be interpreted in other ways.

One can, for instance, say that pauses in the procedure-that-is-consciousness don't matter, as long as the procedure starts up again in much the same state. This is what happens when we wake up, recover from general anaesthesia, or are revived after drowning in a freezing lake. No worries; you are still you.

(Pretty much everybody, of course, holds this belief that interruptions of consciousness do not constitute death, though most people haven't explicitly spelled it out to themselves. Only those of us who like to sit quietly in a chair and scare the living crap out of ourselves with existential musings have bothered to codify this belief, and then chase its bobbing little fluffy tail into the ontological rabbit hole. That hole leads in a distinctively Omega-Point-y direction: If being dead for a few minutes in cold water or on an operating table doesn't matter, then by extension being dead for zillions of years doesn't matter either, as long as your consciousness is then reconstituted. I don't think there's any way to escape this conclusion, without insisting for no clearly-explained reason that it just doesn't count unless the consciousness is recreated in the same body that it was in at the start of the interruption. This unsettling argument doesn't really matter, though, if you don't buy the argument that all possible consciousnesses actually will be created and/or recreated in computers in the future.)

The idea that interruptions of consciousness are acceptable does not necessarily lead to the notion that uploading is OK. But it does, in some people's opinion, legitimise the otherwise similarly horrifying type of teleporter, which scans you, dismantles you, dumps your molecules in its bulk matter store, and transmits sufficient information to another teleporter for that other machine to construct a copy of you from its own matter store.

This procedure is depicted in convenient cartoon form here.

The scan-and-dismantle teleporter isn't a problem, if "you" are just your consciousness-process. Why should you care whether your consciousness-process is running in your old body or in the new one that just got built?

I'm none too keen on that sort of teleporter myself (me and Doctor McCoy...), but I'm unconcerned about gradual consciousness changes. As long as I don't stop being one thing and suddenly start being another, I think it's fair to say that there's no point of identity-death to actually worry about. This could, of course, merely be because I have to think that, if I'm not to conclude that I die every time I go to sleep.

Here's what I, and numerous people much cleverer than me, have come up with as an alternative to the copying problem.

Presume that there is a computer augmentation that you can attach to your skull, like Lobot from The Empire Strikes Back. When you first attach it, it grows nanotech tendrils into your brain, and uses them to read your neural activity and build up its own neural net. Its net, strapped to the back of your head, slowly comes to mirror your own.

In the next stage, when your brain cells die or when the computer decides there's something it can help you with, it starts to write to your brain when appropriate. Initially only very slightly, replacing the occasional lost pathway or popping a little information into your consciousness that your own brain couldn't generate, but doing more and more as time goes by.

After a while you've got a half-organic, half-computer brain.

And eventually, the organic brain can have died off completely.

But there's been no distinct point when "you" moved from the meat to the computer, any more than there was a distinct point when "you" stopped being a baby. You could take the death of your last organic brain cell to be an arbitrary point of loss-of-human-ness, but only the most reductio-ad-absurdum fanatic would say that "you" resided in that one last cell.

And now, hey presto, you're uploaded. And, with any luck, also considerably augmented.

Please form an orderly queue for the procedure.

(It'll help everyone out if you shave your own head while you're waiting.)

Footnotes

(Well, foot-articles, really. The first of these "notes" is longer than the whole main article.)

Footnote 1:

After this column ran in Atomic magazine, I got a bit of mail from readers about it. To quote one of them:

I have to say that, to me at least, it is a self-evident truth that no amount of mathematical complexity can lead to human consciousness and awareness.

He then recommended the eminent mathematician Roger Penrose's 1989 book, "The Emperor's New Mind".

Penrose, and his followers and philosophical predecessors, say that true, "strong", artificial intelligence is impossible. Whether you try to build up an AI in a computer from scratch, or copy some other intelligence into it, it won't work, unless the computer has a special something that nobody has yet identified.

Penrose reckons that the known laws of physics are inadequate to explain consciousness, and suspects that some form of quantum physics beyond that currently understood will be necessary if we're ever to come up with a real theory of consciousness.

And, by extension, he says that unless we manage to make computers that include this new "correct quantum gravity", or whatever other new theory consciousness turns out to require, we'll never be able to create real artificial intelligence.

In particular, Penrose says that all current computers are deterministic - their future state is 100% predictable, based on their hardware, programming and inputs - and a deterministic computer can never be conscious, because determinism precludes free will.
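What "deterministic" means here is easy to demonstrate. Even a computer's "random" numbers are fully determined by their starting conditions - in this sketch, a seed standing in for "hardware, programming and inputs":

```python
import random

def run(seed):
    # Even "random" numbers are fully determined by the seed they start from.
    rng = random.Random(seed)
    return [rng.randint(0, 9) for _ in range(5)]

# Same program, same input: the future state is 100% reproducible.
assert run(42) == run(42)
```

Whether that reproducibility really precludes consciousness is, of course, the part of Penrose's argument that's in dispute.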

This argument seems to me to be putting the cart before the horse.

We don't yet have the ability to make a computer of a similar order of complexity to the human brain, so stating a priori that such computers will never exhibit intelligence strikes me as being like someone in 1850 saying that no heavier-than-air powered aircraft will ever work.

Many people had tried to make aeroplanes by 1850, but we now know that their efforts were doomed to failure. They didn't understand aerodynamic principles, and - more prosaically - they didn't have powerful engines that were light enough (though a steam aeroplane did actually fly, in 1933!).

Likewise, we have over the last few decades made half-arsed attempts at making true artificial intelligence in computers, which have all been dismal failures. I don't think those failures are grounds for claiming that human brains operate in some mystic way beyond all current physics, though, any more than the failure of nineteenth-century pedal-powered aeroplanes to get off the ground meant that physics would never explain the flight of birds.

If it turns out that you can't just make an analogue computer of similar complexity to the human brain, or an even larger digital computer with low-level programming that gives it similar functionality to an analogue computer, and teach such a computer to act like a conscious mind, then there may be grounds for Penrose's conclusion that some utterly new field of physics will need to be created to explain the magic of consciousness.

Until we've actually genuinely tried, though, it strikes me as goofy to just proclaim the effort to be futile and then start rambling on about quantum gravity.

In more than one way, the grey mush in our heads greatly surpasses the biggest supercomputers we've made so far. The human brain has about a hundred billion neurons, each of which has on average about seven thousand connections to other neurons, with all sorts of different weightings and influences and finesses and tweaks. We're currently only a couple of orders of magnitude away from having a hundred billion transistors in one CPU, but CPU transistors just have a source, a drain and a gate, and are either 100% on or 100% off. They aren't really anything like a neuron.
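The arithmetic behind that comparison is easy to check. Assuming the commonly quoted figures of a hundred billion neurons and seven thousand connections apiece (and a rough guess at a big late-2000s CPU), the back-of-the-envelope sum looks like this:

```python
neurons = 100e9             # ~10^11 neurons, the commonly quoted figure
synapses_per_neuron = 7000  # average connections per neuron
synapses = neurons * synapses_per_neuron

cpu_transistors = 2e9       # rough guess at a big CPU of the late 2000s

print(f"{synapses:.0e} synapses")  # prints "7e+14 synapses"
print(f"{synapses / cpu_transistors:,.0f} synapses per transistor-equivalent")
```

Seven hundred trillion synapses versus a couple of billion transistors, and each synapse is doing something far subtler than switching on and off.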

To (attempt to) properly simulate a brain, we'd need to make (or emulate) some sort of processor whose "transistors" have the same vast interconnectedness as neurons, and also have the analogue response of neurons. Emulation looks like the most promising way to do this, but it will of course require an awful lot of computing power, plus further research into exactly how neurons do connect to each other, which is currently only somewhat hazily understood. (Heck, we don't even really know how anaesthesia works, yet.)
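What "the analogue response of neurons" means can be sketched with the classic leaky integrate-and-fire model - the standard textbook toy neuron, with all the constants below being illustrative values rather than measurements of anything:

```python
# Leaky integrate-and-fire neuron: membrane voltage leaks back toward rest,
# accumulates injected current, and fires a spike when it crosses threshold.
V_REST, V_THRESH, V_RESET = -70.0, -55.0, -75.0  # millivolts, textbook-ish values
LEAK, DT = 0.1, 1.0

def simulate(currents):
    v, spikes = V_REST, []
    for t, i in enumerate(currents):
        v += DT * (LEAK * (V_REST - v) + i)  # leak term plus injected current
        if v >= V_THRESH:                    # threshold crossed: spike, then reset
            spikes.append(t)
            v = V_RESET
    return spikes

spikes = simulate([2.0] * 50)  # constant drive produces regular spiking
```

Even this crude model responds continuously to its input history, which is exactly the behaviour a plain on/off transistor doesn't have.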

If we manage to make such a processor and it doesn't turn out to do any brain-like things, then I think Penrose's claims should be taken seriously. At the moment, though, we've barely even begun to try to make such a thing, so it strikes me as extremely premature to say that it's "self-evident" that the whole exercise would be pointless.

(It seems to me that Penrose's ideas about consciousness are another of those cases where onlookers only seem to be impressed by that portion of the theory that doesn't intersect with their own field of expertise. It's like Immanuel Velikovsky's "Worlds In Collision" stuff; astrophysicists said "Velikovsky's astrophysics doesn't make any sense at all, but what he has to say about biology is very intriguing..." and biologists said "Velikovsky has some real insights into planetary dynamics, though he's completely wrong about evolution, of course...")

Even if it turns out that brains can't be simulated by deterministic computers, I don't think that's much of an argument against AI. I don't think anybody in the last 20 years has postulated a brain-simulator that was deterministic.

Even quite simple analogue computers can be non-deterministic, and it's piss-easy, for that matter, to add robust randomness to a straight digital computer. Lots of very interesting research is being done in this field; evolutionary computation, for instance, shows great promise, and has the advantage that you don't need to go into the project with a low-level understanding of how the brains you're making are going to work. You just glom a ton of connections together and apply selective pressure, and things design themselves.

(The down side of this is that even quite simple evolved systems can be very resistant to analysis. I wouldn't be at all surprised if, 50 years from now, robot brains are common but nobody knows how they work in any more detail than we currently know how our own brains work.)
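The "glom connections together and apply selective pressure" recipe is just a genetic algorithm. Here's a toy version evolving bitstrings toward an arbitrary goal - counting 1-bits, a standard textbook stand-in for "scoring the behaviour of a network" - with population sizes and rates that are illustrative guesses:

```python
import random

random.seed(1)

def fitness(genome):
    # Toy task: count the 1-bits. A real "evolve a brain" project would
    # score behaviour (does the network drive the robot well?) instead.
    return sum(genome)

def evolve(pop_size=40, length=32, generations=60, mutation=0.02):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]  # selective pressure: keep the best half
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(length)
            child = a[:cut] + b[cut:]     # crossover between two survivors
            child = [1 - g if random.random() < mutation else g for g in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

Nobody designed the winning genome; it designed itself under pressure. Swap the bit-counting for a behavioural score and you have the brain-evolving scheme described above, no low-level understanding required.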

Humans have made countless systems that do weird random stuff, and nobody's saying that whole new logic has to be created to deal with those (if you don't count chaos theory, of course), or that their behaviour can only be explained by quantum effects. Perhaps Penrose really does have some deep insights here, but I don't have anything like the mathematical knowledge to be able to tell wisdom from gibberish at the nitty-gritty level where mathematical theories actually get proved.

Penrose says X, Minsky says Y; neither of 'em's made a brain yet, so I'll check back in a few years, OK?

Penrose knows an awful lot about mathematical physics and relativity and cosmology, but I am unconvinced that any of these fields - particularly the latter two - are actually relevant to AI. Perhaps I'm quite wrong, and they are, but I can barely figure out calculus, never mind the rest of the reasoning Penrose uses to argue that all of the people working in the AI field - which, of course, does not include Penrose himself - are wasting their time.

That said, it is unquestionable that, as I mentioned above, a brain is not just a single complex computer running a program. It's a whole ecosystem, if you will, of systems and sub-systems and sub-sub-systems, intersecting and interacting and cooperating and competing. There are literally hundreds of distinctly identified brain regions, whose normal function is understood to varying degrees - but abnormal brains, lacking whole large sections, sometimes leave their owners far less handicapped than you'd expect.

Anybody who's cracked an Oliver Sacks book will understand the educational value of brains that don't work like most people's. But anybody who's brushed up against the software-development world - heck, anybody who's spent a reasonable amount of time just using computers - ought, I think, to also see some parallels between the bizarre derangements of malfunctioning human brains and the more mystifying kinds of computer problems.

Apparently unrelated systems that interact, emergent behaviour that seems unconnected with anything you told the system to do, problems that're as fascinating as they are frustrating; computers have all of these, and so do brains.

Computers don't yet have anything like the infinite analogue flexibility of human minds, and humans of course have a much more impressive "view" of what our minds are doing than we have of what's going on inside a computer, which we can only see via its output devices. But it strikes me as entirely unwarranted to argue that computers and minds are clearly totally unrelated and will never meet.

The development of computers and their software is slow and stumbling enough, compared with the complexity of existent biological systems, that I don't think it's justified to say that computers are definitely heading toward the status of brains. But I don't think it's at all out of the question. And, as with brains, failure modes are a window into what computers are becoming.

When something goes amiss in your brain, very weird things can happen. You can, for instance, get a bump on the head, and then become "alexic", entirely unable to discern the meaning of written words. And, stranger yet, you'll be unable even to remember what written words looked like. Try to think of the Coca-Cola logo, or the simple capital-letters TOYOTA on the back of a Hilux pickup, and all that'll come to mind is a meaningless glyph-jumble like the Hebrew or Arabic versions of the Coke logo, for people who can't read those scripts.

There are plenty of similarly bizarre brain disorders. Lose your right parietal cortex to a stroke and you can end up losing not only the left side of your perceptual universe ("hemispatial neglect"), but the very concept of "leftness". Your brain is no longer able to think that anything definably "left" can possibly be of any interest at all.

And, on top of that, you may very well be "anosognosic"; unable to understand that there is anything wrong. You may later learn to turn your dinner-plate 180 degrees to bring into view the half of it that previously didn't exist, but that won't change your gut certainty that the second half of the plate might as well have been an invisible pink unicorn, for all of the reality it'd had before you turned it.

Blindness caused by brain damage can be anosognosic, as well. Essentially, the brain seems to just expand the "painting over" feature that it normally uses to fill in your eyes' blind spots, and to hide the vast difference in resolution between your central macular vision and the rest of your visual field. Now, everything you see is painted by the brain, which may take cues from your other remaining senses, but really does just make it all up.

As far as an anosognosic blind person is concerned, they can see just fine. But everything they see is internally constructed, as when dreaming. (This'd be a pretty neat thing to be able to turn on and off at will.)

If you happen to get a dose of frontal-lobe damage along with your vision-centre loss, you can end up not only unaware of your blindness, but no longer able to think about the concept of "seeing" at all. The result seems to be a vague confabulation whenever "looking" or "watching" or "seeing" is mentioned; not only do you generate what you "see" entirely internally, but you're entirely unable to realise that other people don't do the same.

If one of these strange maladies befalls you, then depending on how the damage was caused, you may stay alexic, or blind and anosognosic, for the rest of your life. Or you may be fine five days later. Brains are like that.

But, to a more than coincidental degree, software is like that too.

Failures that screw up stuff that had nothing at all to do with whatever a programmer just changed. Systems that're 100% certain that nothing is wrong, when they're clearly actually not working at all. Seemingly irrelevant voodoo rituals that fix everything again.

Most software doesn't do very exciting things, so most of these sorts of failures lack the human interest of rare delusional disorders or sudden loss of language. But that doesn't mean they're not conceptually related, or that they won't become more obviously so in the future.

I confidently predict that as we get closer to practical artificial intelligence, we'll see software whose failure modes more and more resemble the "failure modes" of the human brain. By "practical", here, I mean AI systems that are totally useless for the Turing test, and which clearly have as much similarity to general intelligence as a hang-glider has to a seabird, but which can do things that previously required direct human control, like driving cars.

We're already coming up with computer vision systems that can interpret normal real-world scenes - they're essential for driverless cars. If there doesn't end up being some commonality between the way those systems figure out what's where and the way the human brain does, and also some commonality between what fools them and what fools us, I shall purchase a hat, and then eat it.

I mean, it's possible for a human to lose "movement vision" - the ability to perceive objects in motion as being in motion, rather than as a sort of infrequently-updated freeze-frame slide-show. That sounds just like a less-than-perfect computer-vision system, to me.

If it turns out that we need a whole general Theory of Mind to be able to make any sort of artificial consciousness, then it may take us a long, long time to do that, if we ever do.

But we didn't have to make a bird from scratch in order to achieve heavier-than-air flight. We still can't make a bird, but we can make an aeroplane that's far bigger and faster than any bird.

Footnote 2:

Even if large changes in brain-state do, in your opinion, constitute death, it's readily arguable that sleep and several other altered states of consciousness do not. Several of these states can fairly be said to be somewhere between normal wakefulness and normal sleep, and this fact by itself suggests that this is a continuum of fuzzily-defined states, not a collection of distinct and separate states. Meditation, hypnotic or other trance states, the vivid hallucinations that wakefully conscious people have when they're in a sensory deprivation tank or wearing ganzfeld gear; the list goes on and on, and all of these states share some characteristics of wakefulness and some of sleep.

I think this concept is well summed up by this footnote - yes, this footnote of mine is citing another footnote - from page 57 of the Vintage Books edition of Oliver Sacks' An Anthropologist on Mars:

Rodolfo Llinás and his colleagues at New York University, comparing the electrophysiological properties of the brain in waking and dreaming, postulate a single fundamental mechanism for both - a ceaseless inner talking between cerebral cortex and thalamus, a ceaseless interplay of image and feeling, irrespective of whether there is sensory input or not. When there is sensory input, this interplay integrates it to generate waking consciousness, but in the absence of sensory input it continues to generate brain states, those brain states we call fantasy, hallucination, or dreams. Thus waking consciousness is dreaming - but dreaming constrained by external reality.

(Note that the above may be an excellent answer for more than one question in a special edition of Trivial Pursuit.)