Mind uploading is a popular topic when people talk about the future of humanity. Mind uploading involves the storage and recreation of a mind in a computer. Many futurists believe that this will become a part of most people’s end-of-life strategy: by uploading your brain, you can avoid the certain death of your physical body. Some even think that it will become popular to upload our minds earlier in our lives, to gain advantages such as being able to exist in a simulated world, or to avoid unexpected accidental death. An uploaded brain might be able to store ‘save points’ that an individual can return to if their artificial body is destroyed.

Despite the amazing possibilities of copying and recreating a conscious being, I think that many adherents of mind uploading are selling a false promise: that mind uploading will give you, or what we think is you, eternal life. To break this notion down, I will address several key misconceptions and physical problems that could make the project impossible. I will start with the easier misconceptions and move through to the more difficult or complicated issues.

What is consciousness?

For the purposes of this discussion, I will define consciousness as having qualia: that is, having experiences of the world. This is distinct from acting in the world. When you see a sunrise, smell a rose, or taste a steak, these are all qualia. Our senses aren’t simply sensors; we don’t just take in information, process it, then perform an output, like a computer might (although, depending on what we’re doing, it may feel that way! e.g. driving). There is something in between, which is how we feel as we do these things. We could conceivably make a robot that performs functions very similar to a human, appears to be intelligent, and gives us reason to believe that it is conscious. But since we do not experience what the robot experiences, we cannot know whether the robot is actually conscious and having conscious experiences of the world.

The question of consciousness is a difficult one not just for robots, but for other humans as well. It is possible that everyone around us is simply a meat-bot, acting intelligently but not actually experiencing the world around them; this is the classic problem of other minds, and the stance that only one’s own mind is known to exist is called solipsism. For this argument, we will assume that the people around us are real and really do have conscious experiences of the world. We have no evidence that everyone around us has been created in a contrived system to trick us into thinking they are conscious. We do have evidence that the people around us are somewhat like us, and hence, by extrapolation, that they may share some qualities with us, including consciousness. On the other hand, we do know that people are trying to create computers and robots that have the appearance of intelligence but possibly not consciousness.

The metaphysical implications of mind uploading for consciousness are important because we do not simply want a robot representation of ourselves to replace us. The purpose of mind uploading is to actually experience the world long after we would otherwise have died, or to experience a different world through an electronic version of ourselves.

Storing your memories

The fallacy I see in most people’s idea of mind uploading is that they have taken the analogy of brains as computers too far. That is, they think that there are parts of the brain dedicated to memory storage (like RAM or hard drives in a computer) and parts of the brain dedicated to processing (like the CPU). This could not be further from the truth. The brain is made up of roughly 86 billion neurons, and each of those neurons provides both memory and processing simultaneously. Amazingly, each neuron also generates, within the cell itself, all the energy it needs to perform its function.

Simply storing your memories on a hard drive will no more ensure eternal longevity than recording your voice will. In a computer, you can load the RAM with data, run it on a similar processor, and get the same results; but the difference between brains is not simply the memories stored within them, but also the way each brain processes information.

To recreate the same conscious being (or at least a being resembling the old one), you would need to capture the complex interplay between the memories that you have and the methodology by which they are stored, recalled, and used to influence the production of new ideas and outputs.

Complex behaviour is not evidence of consciousness

Complex behaviour is often cited as evidence of ‘consciousness’ via the Turing test. The Turing test, invented by Alan Turing, is designed to determine whether a computer can replicate human behaviour well enough to resemble an intelligent human being. In the original test, a human operator has a text discussion with a computer designed to appear human. The computer passes if the operator cannot distinguish whether a human or a computer is having the discussion on the other side. Despite the common misconception, the Turing test does not prove intelligent behaviour, let alone consciousness. In fact, Alan Turing did not name the test after himself; he called it “the imitation game”. Many people have equated intelligent-looking behaviour with intelligence, which is not quite true. To ascertain intelligence, many more tests would be required than a simple Turing test, as the Turing test covers only a tiny subset of human intelligence and behaviour.

Even more worrying than equating a passed Turing test with intelligent behaviour, some people have equated intelligent-looking behaviour with consciousness. Complex behaviour is not evidence of consciousness; it is simply evidence of being able to perform complex tasks. While a computer may be able to fool a human into thinking that a human is responsible for the words typed onto a screen, that complex behaviour shows nothing about what it is like for the computer to have a conversation with a human.

Functionalism

The functional theory of mind holds that the sensory inputs and behavioural outputs of the brain are what matter for establishing consciousness, not the methodology by which those sensory inputs are converted into behavioural outputs. Functionalism asserts that two minds are equivalent if they return the same outputs given the same inputs. Functionalism has already been heavily criticised with relatively strong paradoxical examples, such as the China brain, the Chinese room, and the inverted spectrum, each of which is worth exploring on your own.

Below I will try to explain, using a pseudo-mathematical representation, one particular criticism I have of functionalist brain–simulation equivalence. If you don’t follow it, please move on to the next subheading, where you should be able to pick up again.

A functionalist’s view of the brain with time

Imagine a real brain, B, that takes a sensory input vector u_B and returns a behavioural output v_B, i.e. v_B = B(u_B). When the brain B has operated on u_B to produce v_B, a qualia q_B is generated. Functionalists claim that two minds, B and F (a functional representation of the brain B), are equivalent when u_F = u_B and v_F = v_B. The implication, from the functionalist point of view, is that if u_F = u_B and v_F = v_B, then necessarily q_F = q_B. That is, when inputs and outputs are identical, the brain and the functional representation of the brain experience equivalent qualia.

Let us consider a real brain. A real brain in the current world lives for a limited amount of time, over which it receives inputs and gives outputs. The set of input vectors that a brain experiences through its lifetime, U_L, is a subset of all possible input vectors U (U_L ⊂ U). Imagine a machine that records all of the inputs U_L and the outputs V_L through the lifetime of an individual brain B. A brain simulation function G can then be constructed such that for any u_B ∈ U_L, whenever u_G = u_B, G produces v_G = v_B.

The functionalist position must hold that this function G perfectly replicates the inputs and outputs of the person’s brain B for their entire life, despite being little more than a recording of inputs and outputs, and hence that it counts as a perfect representation of the qualia of that person’s life. Although no life-experience other than the recorded one can occur, the qualia must be equivalent to those recorded by the machine. The function G can be significantly less complicated than the brain B: the brain B must be able to produce results v_B, intelligent or not, for all u_B ∈ U, where U is the set of all possible inputs, whereas the function G has a much more restricted domain, accepting only u_G ∈ U_L. For all u_G ∈ U \ U_L (inputs in U but not in U_L), G can either be undefined or return garbage in v_G. This means that G is not truly functionally equivalent to B for all u.

Despite B and G not being truly functionally equivalent, they are functionally equivalent over a specific person’s life experience. So, while an individual brain B may be able to generate a qualia set Q_L’ different from Q_L when given inputs U_L’ different from U_L, the brain simulation G would not be able to generate Q_L’. Despite this, the real brain B has only ever had U_L as inputs during its entire lifetime, and hence the only qualia that B has experienced are Q_L. Therefore the function G, which is simply a recording of inputs and outputs, has qualia equivalent to those of the brain B.
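The recording function G can be sketched in code. This is only an illustrative toy, with the brain function, its inputs, and its outputs invented for the example: a lookup table built from a recorded lifetime is indistinguishable from the brain on every input it actually received, yet undefined everywhere else.

```python
# A toy "brain": any deterministic function from inputs to outputs.
# (A hypothetical stand-in for B; the argument doesn't depend on its details.)
def brain_B(u):
    return sum(u) % 7  # arbitrary deterministic behaviour

# U_L: the finite set of inputs this brain happens to receive in its
# lifetime, a small subset of the set U of all possible inputs.
U_L = [(1, 2), (3, 4), (5, 6)]

# Build the recording function G by replaying the brain's life and
# storing every (input, output) pair.
G_table = {u: brain_B(u) for u in U_L}

def G(u):
    # G is only defined on U_L; anything outside that set is not in its domain.
    return G_table[u]

# On every lifetime input, G and B are functionally identical...
assert all(G(u) == brain_B(u) for u in U_L)

# ...but on a novel input u ∈ U \ U_L, G has nothing to say.
try:
    G((7, 8))
except KeyError:
    print("G is undefined outside U_L")
```

The lookup table satisfies the equivalence condition for the whole recorded life while clearly doing no computation of the kind a brain does, which is the intuition the argument above trades on.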

If a functionalist rejects this notion and believes that a simulated brain must be capable of producing a v_G for all u_G ∈ U such that v_G = v_B whenever u_G = u_B, then the functionalist believes that it is not simply the outputs returned for given inputs that matter, but also the nature of the calculation by which they are produced. Such a functionalist point of view means that the brain cannot be considered a black box that delivers qualia irrespective of the types of calculations taking place inside it.

Therefore only two positions remain for people who reject the claim that the recording function G has equivalent qualia: a) that only identical physical brains with identical physical inputs can have the same qualia (I’ll call this the Real Brain Qualia Hypothesis), or (the weaker position) b) that in order to generate qualia equivalent to those of a real brain B, a functional representation S of the brain must simulate all physical processes that occur in the brain perfectly (I’ll call this the Whole Brain Simulation Qualia Hypothesis).

Metaphysical implications of the Real Brain Qualia Hypothesis (RBQH) and the Whole Brain Simulation Qualia Hypothesis (WBSQH)

The RBQH assumes that there is something special about the interaction of the matter in the brain that produces the qualia specific to us and our experiences. The WBSQH implies that the only thing that matters for the qualia generated in the brain is the calculation of the states of its matter. Note that the RBQH also holds that the calculation of the states of the matter is important (the best representation of reality is reality itself, after all, and hence the universe is calculating the next step in the universe perfectly at all times). It should be noted that adherents of the WBSQH assert that the physical nature of the interaction of matter during the operation of the brain is not important to the nature of consciousness. While this position is possible, the lack of evidence supporting it means that the WBSQH cannot be said to be more or less likely to be true than the RBQH.

To temper this position against the WBSQH, it should be noted that a silicon (or otherwise) system used to run a simulation of a complex brain may have some form of consciousness. Electrical currents are generated in both, and many similar atomic and subatomic particles are present in each. We can therefore maintain the RBQH for the type of experience a real brain would have, while allowing that complex computers may have a type of consciousness which is qualitatively different from that of a typical brain, owing to the physical and computational differences between them.

To summarise, there are roughly three positions:

a) Only Neuron Qualia Hypothesis: only real neurons (and possibly only human neurons) can establish any type of consciousness.

b) RBQH, with possible qualia in complicated computers that are qualitatively different from our own.

c) WBSQH: any qualia can be generated through simulation of the brain’s components.

I am obviously an adherent of position b.

What needs to be specified in the WBSQH is the level at which the simulation needs to occur. Do we need to simulate the firing of neurons? The molecules? The atoms? The subatomic particles? The quarks? For the purposes of this argument, I will assume that WBSQH adherents believe that only a full physical simulation, down to the smallest important particle, is sufficient to simulate the brain and create qualia.

Whole brain simulation: why reading in a brain perfectly is likely impossible

I hope by now that I have convinced you that, at a minimum, storing the memories or recreating the inputs and outputs of a brain is not enough to create a brain with qualia equivalent to yours or mine. Let’s assume, for a moment, that the WBSQH is possible.

When we want to read a brain into the simulation, we are confronted with a problem in the physics of matter itself. The Heisenberg uncertainty principle makes it impossible to determine a particle’s position and momentum (or energy state) precisely at the same time: measuring one disturbs the other. This, unfortunately, means that measuring a brain in its exact state is impossible. When we determine the states of neurons and the positions of electrons and atoms, we do so with uncertainty about either their position or their momentum (or energy state). After the read-in, then, despite starting with the same inputs and outputs, the brain and the simulated whole brain would begin to diverge in their results, even if their inputs are kept identical. At first this will be a small problem: quantum effects may alter only a few neural firings. But because the brain relies on its own output as part of its input, those misfirings will cascade into a totally different brain state at some point.
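The cascade can be illustrated with the logistic map, a standard toy chaotic system that is emphatically not a model of the brain; the iteration rule, starting value, and error size below are arbitrary choices for the demonstration. Two states differing by one part in a trillion, iterated under the same rule, end up bearing no resemblance to each other:

```python
# Logistic map: a simple chaotic system used only to illustrate how a
# tiny read-in error is amplified by repeated feedback.
def step(x, r=3.9):
    return r * x * (1 - x)

x_real = 0.4           # the "real" state
x_scan = 0.4 + 1e-12   # the scanned copy, off by one part in a trillion

divergence = []
for _ in range(100):
    x_real, x_scan = step(x_real), step(x_scan)
    divergence.append(abs(x_real - x_scan))

# The gap starts immeasurably small, then grows until the two
# trajectories no longer resemble each other at all.
print(divergence[0], max(divergence))
```

The same feedback structure (output becoming input) is what the argument above attributes to the brain, which is why an initially negligible measurement error need not stay negligible.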

This argument does not preclude the idea that you can make a consciousness that is very similar to a real brain using whole brain simulation. It does, however, mean that generating what most people consider to be the ‘same’ brain is highly unlikely.

Dual experience problem

Let’s say you could somehow overcome the problem posed by the Heisenberg uncertainty principle when reading in the brain. If we could scan and recreate the brain perfectly, we could theoretically create two brains in exactly the same state as at scanning, or create one copy and leave the original brain in its current state. At that point, you have created two brains whose only connection is their shared past memories. There is no way for those two brains to communicate beyond the traditional means of talking and so on.

We know that our experiences are linked to the inputs and outputs of our brains. With completely separate inputs and outputs for each brain, the two brains would therefore have independent conscious states: the experiences of one would be separate from the experiences of the other.

Now, it may be possible to argue that our consciousness is never the same, that it is constantly changing. The conscious you of today may be entirely different from the conscious you that went to sleep last night, with only the memories of the past, the same body, and the same rough location to maintain a continuous self-identity. What we do know for certain, however, is that there is no continuity of conscious experience between human A and a simulation of human A’s brain.

The dual experience problem doesn’t say whether the real and simulated brains are computationally equivalent. It also says nothing about whether the simulated brain is conscious; a computer simulation of the brain may be complicated enough to have some form of consciousness. The dual experience problem simply says that the simulated brain cannot be the same consciousness as the original brain on which it is based.