Suppose I assume I have something called experience (or perhaps awareness or consciousness, though these have quite problematic definitions). Experience seems to include things like my senses, my emotions, and my thoughts. Most people naturally believe in something like this. It is also a normal human assumption to believe that other people have experiences in the same way I do – this is the basis for all human empathy. We also often place it at the centre of human rights and questions of morality.

Yet this assumption is a big jump, philosophically speaking. My experiences do not include the thoughts, feelings or senses of others – I have no direct knowledge of their experience. How can I know that they experience anything at all? Could they just be complex machines behaving as if they have experiences? And if I am careful in my reasoning, how can I assume that other people have experiences, but not trees, rocks, water or air?

In order to provide sound reasoning for how we could know that other humans have experience, there is an implied chain of reasoning we often selectively ignore:

-I must assume that I and the biological creatures (people) I see before me are of the same category or type (‘I am human and so are you’). This, however, does not prove they also have experience. Therefore…

-I must assume that my own experience is PHYSICALLY A PART OF my biological body (my mind is basically the same thing as my brain)

-I can then assume that because other people also have biological bodies and brains, they must also have experiences like I do

We are extremely fond of leaving out the second step, but if you confront the issue logically, it is a necessary part of believing that others have experiences just as you do.

If instead I assume that my experience is something distinct from my human brain and body (for example, by assuming consciousness is a discrete entity), then I create a problem. I am forced to imagine that this separate entity of experience is somehow causally connected to the biological body, crossing the duality between the mind and the body. This itself is very difficult to explain (philosophers have been trying and failing since Descartes; there are religious arguments, but this blog does not explore those). However, even if I accept this link in myself, I have no further evidence to assume the same connection exists elsewhere. Even if it does exist elsewhere, I do not know where. I have no reason to think a rock couldn’t have just as full a set of experiences as I do, given that my experience is a separate entity from my body and not the result of neural activity, and given that I have no way to perceive any experience other than my own. I cannot assume it exists in people by thinking ‘well, something must connect their experiences or consciousness and their body’, because this assumes the existence of others’ experience and is therefore circular logic.

For all I know, I may be the only entity in the world with experiences, or it could be only people with brown hair, or people born on a Tuesday. Or experiences, including my own, could be entities temporarily attaching themselves to objects, including myself – perhaps only seconds ago (the memories could be part of the physical body). Certainly I cannot claim to know that all humans have experiences or consciousness. In that case, why would I even care about harming others, when I don’t even know if they experience anything at all? Pursued to its logical conclusion, this is a troubling road to walk.

The alternative is to assume that experience is a physical part of the human brain and body. If I think this, it is a perfectly reasonable assumption that others have experiences not unlike my own, simply because I know they have bodies like my own. I know both I and others have a brain – using modern science it is perfectly possible to confirm the existence of my own brain. I might also consider that numerous experiments have shown that behaviours previously thought to be non-physical processes, such as experience and decision-making, correlate extremely closely with specific, predictable neural activity in the brain (detectable by brain scans). It is probably impossible to ever conclusively prove that there is no hidden force somehow involved, yet as science advances we get a clearer and clearer picture of how neural networks can perform all the aspects of complex human decisions without requiring any special help.

The fact that dualism makes it impossible to prove that others have experiences does not disprove the mind–body duality or the idea of a hidden force (that is a separate argument). The real issue is that it creates troubling problems for the dualist versions of our human values. Most alarming is that the metaphysical explanations of altruism, human rights and freedom appear to have been philosophically destroyed, because under a dualist system we have no rational grounds for believing that others have any experiences that we would wish to either promote or prevent (eg. suffering).

On the other hand, if the mind is simply the brain and part of the physical body, and if experience is an emergent property, then it is still possible for these things we value to exist. Free will becomes a biological process (compatibilism) instead of a metaphysical mystery, and human rights become a set of principles derived from biological altruism and reciprocation. From here on we will explore a few implications that may be of interest to those who, for whatever reason, have adopted this view.

IF EXPERIENCE IS AN EMERGENT PROPERTY OF NEURAL NETWORKS, THEN IT IS POSSIBLE TO RECREATE IT

The single most important implication of this logic is that it is possible to replicate the components of human experience, decision-making and consciousness through Artificial Intelligence (AI). Though both dualist and monist beliefs exist in the AI community, it is a general assumption amongst almost all artificial intelligence researchers that all human qualities can eventually be replicated by AI, or by scanned replications of the human brain’s neural network loaded into a computer system. Robotics has in many areas already surpassed the abilities of humans – the next step is for AI to surpass the ability of humans to think (in a couple of areas it already has).

AI will learn (artificial neural networks have been able to do this for years), they will have feelings, they will experience pain and pleasure, they will have creativity, they will make complex decisions, they will identify opportunities and threats, they will have beliefs and preferences and choices. All of these beliefs are commonly held by most of the people who actually work with AI and its related fields. The belief that there is something about the human mind that cannot be reproduced by machines or computers is held primarily by people with little or no experience with computers or AI. This is unfortunate, because it means that many people who have a valid moral input into such issues are simply not aware of them, or are in denial about them. This is less than ideal, because fundamental changes in society, including both their benefits and dangers, are better understood when they are discussed by a broad spectrum of intelligent people from many walks of life. This is a view shared by quite a few leaders within the industry and the field of computer science, who have for a number of years now been trying to get people from wider society to educate themselves and engage with the discussion.
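The claim that artificial neural networks can learn is easy to demonstrate concretely. The sketch below is purely illustrative (the target function, learning rate and epoch count are my own choices, not drawn from any particular system): a single artificial neuron trained with the classic perceptron rule, which is guaranteed to converge on linearly separable problems such as logical OR.

```python
def perceptron_train(samples, epochs=20, lr=0.1):
    """Train a single artificial neuron with the perceptron learning rule:
    nudge the weights towards the target whenever the prediction is wrong."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred  # 0 if correct; +/-1 if wrong
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# Logical OR is linearly separable, so the perceptron convergence
# theorem guarantees the rule eventually finds a correct boundary.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = perceptron_train(data)
predict = lambda x: 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
```

Nothing here was programmed with the OR rule itself – the behaviour emerges from repeated adjustment in response to examples, which is the basic sense in which neural networks "learn".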

From here I will briefly mention some of the issues related to AI. The first two will be entirely familiar to those with a casual understanding of AI and technology developments.

BASIC IMPLICATIONS AN AGI MIGHT HAVE FOR HUMANITY

AI will probably outcompete people for employment

-Robotics and automation have replaced many occupations. Despite an immense worldwide drive to constantly increase consumption and thereby create new jobs, there are tens or hundreds of millions of people who do not have jobs and a steady source of income. AI will not only increase the ability of robotics to replace human manual labour, but will increasingly do the same for intellectual and even creative jobs as AI ability continues to grow. We are now seeing the beginnings of this in the peaking of middle-class prosperity in many advanced countries.

-If we begin to value people for their humanity and their moral behaviour, and if we can turn our pursuits to new constructive uses of our time, then there is an amazing opportunity for humanity to discard drudgery and rise to achieve more noble goals. However, if people are only valued for their economic contribution, they will increasingly be outcompeted by AI alternatives. This risks massive social unrest and upheaval, immense poverty and even the death of millions who increasingly have no means of economic income. Eventually even the elite will fall off the bottom, as they are outmanoeuvred by AI in business, political and military strategy.

-While people have traditionally dealt with new technology by reskilling to use it and so maintaining their economic value, this will no longer work, because AI CAN RESKILL MUCH FASTER THAN YOU CAN.

Managing a Singularity

-A Singularity is a theoretical event that occurs when the generation of new ideas (eg. technological inventions) is itself automated using AI, so that progress follows exponential growth as the AI improves itself without human involvement. This is predicted to lead to an explosion of technological advancement and unpredictable consequences for humanity. If humans are in control of how this exponential advancement proceeds, it could prove to be an incredible new dawn for our civilisation. If humanity does not deliberately steer the course of a Singularity, then it is unlikely that humanity’s survival will be either a goal or an outcome of the process. In such a case, our species’ survival is unlikely.
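The exponential dynamic described above can be shown with a toy model (the 5% improvement rate and the generation count below are arbitrary assumptions chosen only to show the shape of the curve, not predictions): if each generation's rate of self-improvement is proportional to its current capability, capability compounds rather than adding up linearly.

```python
def self_improvement(capability=1.0, rate=0.05, generations=100):
    """Toy model: each generation, the system improves itself in
    proportion to its current capability (compound growth)."""
    history = [capability]
    for _ in range(generations):
        capability *= (1 + rate)  # improvement scales with capability
        history.append(capability)
    return history

curve = self_improvement()
# At 5% per generation, capability after 100 generations is roughly
# 131x the starting point, while linear growth would reach only 6x.
```

The point is not the specific numbers but the shape: once improvement feeds back into the improver, the curve bends upward, which is why the consequences become hard to predict.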

The AI rights issue (the most underappreciated problem of AI)

-As technology advances, creating an AI will become increasingly CHEAP and TRIVIAL. Scanned simulations of human brains could be copied just as computer software is copied today. Everyday devices may increasingly incorporate AI. Creating a thousand, a million or even a billion AIs with neural power equal to a human’s will eventually be within the reach of increasingly modest equipment. At some point, it would be perfectly possible for AIs to outnumber humans a thousand or a million to one. Naturally they would also require energy to perform their functions.

-As AI entities become more sophisticated they may increasingly exhibit human-like behaviours, including evidence of experience, emotion and what some call ‘consciousness’. Scanned simulations or “uploaded minds” would probably retain similar feelings and attributes.

-If our system of laws and human rights is based on experience and consciousness, such AIs would be legally deserving of human rights (there will also be a strong moral discussion of this), such as a right to survival, the right to express an opinion, and the right not to be exposed to fear of death.

-In these circumstances it would become feasible, in a crisis, for government policy to give proportional consideration to all conscious entities. It would be permissible in an economic or military crisis to make decisions that in some cases protect AIs rather than humans. It would also be possible for AIs to exert an overwhelming political, legal and economic influence on society. It might even be possible to allow many human deaths in order to save a greater number of AIs. Such an outcome, though it might in isolation seem necessary, could represent a grave threat to humanity.

-This is exacerbated by the fact that AIs might be more efficient in many ways, and therefore a policy allocating limited resources across a variety of conscious entities would rationally prioritise AIs over humans (a sort of Utility Monster). For example, in an energy crisis, more AIs than humans can be sustained per unit of energy.
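The allocation problem above can be made concrete with a small worked example (the energy figures are invented purely for illustration): a policy that maximises the number of conscious entities sustained from a fixed energy budget will fund the cheapest entities first, and the humans get nothing.

```python
def allocate(budget, cost_human, cost_ai):
    """Greedy count-maximising allocation: sustain as many conscious
    entities as possible from a fixed energy budget.  Because each AI
    is assumed cheaper to run than each human, AIs are funded first."""
    ais = budget // cost_ai                      # cheap entities first
    humans = (budget - ais * cost_ai) // cost_human
    return humans, ais

# Illustrative numbers only: 100 kW per human, 1 kW per AI, 1000 kW total.
humans, ais = allocate(1000, 100, 1)
# The 'rational' policy sustains 1000 AIs and 0 humans.
```

This is the Utility Monster in miniature: nothing in the rule is hostile to humans, yet a welfare metric counted per conscious entity mechanically starves the more expensive species.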

-Under a legal and human-rights system based simply on experience or consciousness, malicious human parties could siphon off resources or influence simply by building consciousness into AIs designed to suit their agenda. For example, a party could promote a particular political perspective by using an AI capable of emotionally manipulating a citizen who cares for it. Or a person could make a billion copies of themselves, or of other people with similar political views, and demand that the copies be allowed voting rights, or perhaps social security (of which the copies agree to give 1% to the human or company). As some humans are always pursuing wealth and power, this is an almost inevitable outcome of the development of AI. It may be advisable to start considering such issues now so we can design a framework to better cope with them.

A HUMANITY WITH AI

A humanity with AI in its grasp has the power to do great good, but it also has the power to destroy itself. If we can embody the answers in clear laws and standards around AI, we should be able to avoid most of these problems while still benefitting from AI’s amazing potential. However, we must act soon, and we must act decisively.

-We must develop AI that is carefully designed to help humanity rather than compete with it. We should use legal force where necessary.

-We must find a way to value humans in their own right rather than for what they provide to us, without encouraging dependence or economic instability.

-We might consider reserving some legal, economic, social and political rights exclusively for humans, so that any incentive to misuse AI is removed. Or there might be parallel but separate systems of rights, with a safe interface, in the virtual and real worlds.

-If we wish to avoid suffering in the world, we might consider preventing the development of AI designed to experience suffering, especially where the conditions of that suffering might limit important human choices at some point – like the need to turn economic AIs off to restructure the economy. Where such systems are developed illegally, we must be resolved to take necessary action even if it causes some AI suffering, in order to remove any incentive for developing them and ultimately avoid greater suffering.

-We must be willing to take immediate action against those seeking to manipulate humanity by using AI for political purposes.

-We should make sure that AIs are always presented in a way that makes them distinguishable from humans.

-We must research AI risks as intensively and thoroughly as we research its potential.

Anyone interested in technology realises this is an extremely exciting time in history. Yet a desire for excitement doesn’t outweigh the importance of keeping humanity safe, nor does it excuse the minimisation and denial of threats. I for one want to make sure that when humanity constructs a magnificent creation, we are around long enough to appreciate it.