Summary

To reduce suffering, we need to determine which entities experience conscious emotions. This makes the problem of consciousness an issue not just of philosophical speculation but of practical importance, especially since most of the potentially conscious beings in the universe—animals, insects, digital agents, and more?—are not humans, who we can be fairly sure have subjective experience. However, the problem of consciousness is not a "hard" problem in the way that David Chalmers asserts. Consciousness is neither a thing which exists "out there" nor an ontologically fundamental property of matter; it's a definitional category into which we classify minds. "Is this digital mind really conscious?" is analogous to "Is a rock that people use to eat on really a table?" That consciousness is a cluster in thingspace rather than a concrete property of the world does not make reducing suffering less important. After all, even if consciousness were something concrete rather than conceptual, we would still have to choose to care about it.

A dialogue that further discusses some of these ideas and gives more airing to objections against reductionism is "My Confusions about the Hard Problem of Consciousness".

See also "A Simple Program to Illustrate the Hard Problem of Consciousness".

Introduction

Throughout the period from 2007 to early 2009, I was quite confused about consciousness. I thought there was indeed an explanatory gap between the material operations that happen in our brains and our subjective experiences. My friend Carl Shulman engaged with me through many long conversations on the topic, aiming to explain his understanding of consciousness. Eventually, in fall 2009, I finally grasped what he was trying to convey, and suddenly it fell into place. I had the sensation of a student struggling to figure out a math problem who suddenly realizes the solution.

My aim in this essay is to explain this view of consciousness and show that it's actually rather simple and unconfusing. These ideas are not original; similar ideas can be found in the writings of Daniel Dennett, as well as many AI researchers.

I begin by explaining why the question matters and what the fuss over the "hard problem" is all about.

Why qualia matter

Anyone who wants to reduce suffering is fundamentally concerned with qualia—in particular, preventing experiences of distressful qualia. Then we must ask, What sorts of entities experience these qualia? In particular, below are a few instances of questions of this sort whose answers have significant ramifications for the types of policies that suffering reducers should support and the causes on which they should expend resources.

Which animals can suffer? How far down the evolutionary tree does conscious awareness of pain extend? In particular, can insects suffer? Fortunately, these questions do not require a complete understanding of consciousness before we can arrive at good answers. Before learning about consciousness in depth, I would have been, say, 95% sure that you can feel conscious pain, because you appear to be an organism with nearly identical physiology to my own, behaving in very similar ways. Similarly, I was ~90% certain that cats could suffer for similar reasons, including shared phylogenetic heritage, comparability of neural structures, and similarity of behavior under distress. However, when it came to fish, I assigned only, say, 70% probability to the capacity for conscious suffering, and for insects even lower than that.

Can non-animals suffer? One view of consciousness is panpsychism, which holds that all matter in the universe has "mind stuff" that gives rise to phenomenal experience. If this were true, at what level would conscious minds begin? I as a whole organism feel conscious, but what about each of my neurons? Should I worry about rocks being conscious? Even if so, the utilitarian implications wouldn't be obvious—Do rocks prefer to lie in this pile or that one?—but it's possible that further reflection on the matter would suggest nontrivial recommendations for action.

Can artificial computers be made conscious? This is relevant for a number of reasons. For example, if humans do create conscious computers, we will need to enforce "computer welfare" guidelines in the same way that we impose animal-welfare standards.

For the first of these questions, Which animals can suffer?, one very promising approach is to identify the neural correlates of consciousness (NCCs) in humans. Even if we don't understand the causes of consciousness, we can get a pretty good sense of what sorts of neural activity are associated with consciousness and then check whether activity of this type occurs in other animals.

Of course, as we move further backwards in the evolutionary tree, using this criterion alone becomes increasingly dubious: Do animals need to have the specific neural structures that humans do in order to feel qualia? Might they not have phenomenal experience through other biological mechanisms? And when it comes to the second and third questions, the NCC approach is almost entirely inapplicable. Rather than merely identifying the neural correlates of consciousness, we really want to know the correlates of consciousness in general: What necessary and sufficient criteria would allow us to tell that entity X is suffering while entity Y isn't?

A second approach to assessing the presence of consciousness, familiar from the animal-welfare literature, is to examine behavior: Does this organism, when given a noxious stimulus, exhibit prolonged aversive responses? Does the stimulus induce learning and change motivational tradeoffs? Is it remembered? How similar are the behavioral reactions to humans'? In general, it seems safe to assume that consciousness is somewhat widespread throughout the animal kingdom, because it exists in Homo sapiens and appears to be a useful adaptation. Or is it? This raises the question: What exactly is consciousness for? Sure, perhaps a system for reflective awareness allows an organism to organize its thoughts, imagine counterfactual scenarios, and execute sophisticated novel behaviors. But why does the process "feel like" anything? Why doesn't it take place "in the dark," to quote David Chalmers?

The "hard problem"

This last question is the "hard problem of consciousness" that Chalmers has promulgated:

It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all?

Chalmers himself proposes that consciousness may be fundamental to reality, in the same way that, say, general relativity takes space-time as a foundational building block that is simply assumed to exist. On this property-dualist view, physical entities interact with mental entities in a "structurally coherent" way, i.e., "any information that is consciously experienced will also be cognitively represented." Others have criticized this position. In opposition to property dualism, Eliezer Yudkowsky writes:

Why say that you could subtract this true stuff of consciousness, and leave all the atoms in the same place doing the same things? If that's true, we need some separate physical explanation for why Chalmers talks about "the mysterious redness of red". That is, there exists both a mysterious redness of red, which is extra-physical, and an entirely separate reason, within physics, why Chalmers talks about the "mysterious redness of red". [...] To postulate this stuff of consciousness, and then further postulate that it doesn't do anything—for the love of cute kittens, why?

A plethora of alternate views have been proposed regarding the philosophy of mind. My sense from what I have read is that philosophers are making the topic way more complicated than it needs to be. In the remainder of this piece, I elaborate on a very simple approach to the issue.

Removing confusion

It doesn't make sense to ask questions like, Does a computer program of a mind really instantiate consciousness? That question reifies consciousness as a thing that may or may not be produced by a program. Rather, the particles constituting the computer just move—and that's it. The question of whether a given physical operation is "conscious" is not a factual dispute but a definitional one: Do we want to define consciousness as including those sorts of physical operations?

Picking a definition

We could appeal to various reasons for a particular definition. For example, we could look in the dictionary for the word "conscious" and try to decide whether the operation we're examining fits that description. We could decide based on which categories are most useful for scientists in their work, or in explaining their work to laypeople. We could ask philosophers of mind what properties they feel best capture the sentiment of what consciousness should entail.

This is all fine as a linguistic exercise, but what I'm ultimately interested in is whether a given mind deserves ethical consideration. I tend to say things like, "Minds matter morally if and only if they have conscious emotions." In that case, why not define "conscious emotion" as "physical operations that I care about morally"? That's what I'll do in the remainder of this piece, but it would be straightforward to modify what I say below to use a different definition instead.

As an analogy, I would say that the word "consciousness" is like the word "justice": It's a broad, vague term that can mean many specific things to different people. Given a particular definition of justice, it becomes a factual question how much a given society is implementing justice, just like given a specific definition of consciousness, it becomes a factual question how much a given mind is implementing consciousness. But some of the debate is over what the definition should be in the first place. This is why consciousness is partly a moral issue. Of course, it's also partly a factual issue in the same sense that justice is: You have to actually go look at what a given society/brain is doing to see how it measures up according to your metric.

Questions and answers

Question: Why does it feel like anything to have the type of self-awareness that humans do? (the hard problem) Answer: When the particles that compose self-aware brains like ours move around, they do so in such a way that the parts of the brain that perform higher-level aggregation and analysis (what we call the "conscious" parts) receive signals of a type that cause them to move in such a way that they, among other things, transmit signals to the mouth to utter the words, "That feels like something." Qualia feel like something because the organisms that "experience" them execute cognitive algorithms that make them act in ways that we call "believing that it feels like something." I'll put it another way: If you hold an implicitly dualist view of consciousness, what do you think happens mechanically in the brain when an organism feels qualia? Well, those mechanical operations are what qualia are. Why we have them and how they work in the brain are interesting questions, but they're not mysterious—they're akin to the question, Why does Lisp have gensyms and how are they implemented (both at the level of source code and at the level of hardware)?
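To see how un-mysterious such implementation questions are, here's a toy sketch of a gensym-style generator in Python rather than Lisp. The names (`gensym`, the prefix `"g"`) are my own invention for illustration; the point is that "guaranteed-fresh symbols" arise from a mundane mechanism, not from anything metaphysically special.

```python
import itertools

# A counter shared by all calls, so no two generated names can collide.
_counter = itertools.count()

def gensym(prefix="g"):
    """Return a fresh symbol name that will never repeat.

    Uniqueness comes from incrementing a counter—a thoroughly
    ordinary mechanism, despite the seemingly magical guarantee.
    """
    return f"{prefix}{next(_counter)}"

a = gensym()
b = gensym("tmp")
```

Asking "but where does the freshness really come from?" after seeing this code would be confused in roughly the way that asking "but is the mind really conscious?" is, once the mechanism is laid bare.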

Question: Do insects suffer? Answer: I'll rephrase the question: Are the physical operations that go on inside insect brains sufficiently similar to the operations in our brains which we call "suffering" that we decide we want to care about them? Fundamentally, it's up to us. Still, the question isn't a trivial one. What we need to do is decide upon some set of criteria for whether a given physical process is of the type that we want to regard as being a "mind" that we care about. For example, we may decide to regard as bad those mechanical operations which correspond to the execution of an algorithm for self-awareness of "painful" input signals (where "self-awareness" may have a very sophisticated definition, if we so choose). Whether insects do this type of self-modeling of their reception of emotional signals (or whether they receive the emotional signals without a higher-level understanding of what's going on) is then an open factual question—and a perfectly understandable one. One way to begin crafting that set of criteria for what deserves to be called "conscious suffering" would be to examine the neural correlates of suffering in humans and other complex animals, and try to generalize what it is about those particular brain algorithms or processes that we can in general say is bad.

Question: Is a rock conscious? Answer: Do you want to care about the atomic movements in a rock as though they're conscious? I personally attach little concern to the particle movements in a rock. I may care to a non-zero degree that certain strained interpretations of rock particle movements correspond to certain algorithms that, running in animal minds, constitute self-awareness of suffering, but this degree of concern is tiny—probably small enough to omit entirely from my calculations. In any event, it's not clear whether smashing rocks leads to better or worse experiences under these strained interpretations.

Question: If we write a simple Python program with an Organism class that has a field for Pleasure and set it equal to the string "infinity," have we brought an infinite amount of pleasure into the multiverse? Answer: It depends on whether you want to regard such a program as having such infinite value. I don't. In fact, I may very well regard it as having no value.
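The program in question might look something like the following minimal sketch (the class and field names come straight from the question; everything else is filler):

```python
class Organism:
    """A trivial 'mind' whose pleasure is just a stored value."""

    def __init__(self, pleasure):
        self.pleasure = pleasure


# Set the Pleasure field equal to the string "infinity".
blissful = Organism(pleasure="infinity")
print(blissful.pleasure)
```

Running this does nothing but move some particles around in a computer; whether those movements count as "infinite pleasure" is exactly the kind of definitional choice the answer above describes, and nothing in the program itself can settle it.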

More Q&A

A friend posed some questions regarding my above position. Below I present some snippets from that exchange.

Question: It seems completely arbitrary to me to show concern for certain movements of particles and not for others. Answer: Arbitrary, yes. But that's the way I'm hard-wired: I'm a machine that responds to thoughts about suffering (i.e., certain particle movements) with a desire to act to reduce them. All machines respond to certain types of particle movements—that's how machines work. Indeed, how could things be otherwise? What else could it possibly look like for a being to care about something and respond to it other than to care about real things in the world and take physical actions in response? Suppose there were some sort of "pain particle" corresponding to the quale of suffering. Why care about that? What makes that any less arbitrary than a particular class of particle movements corresponding to particular cognitive algorithms within certain sorts of self-aware brains? Maybe someone could reply that "It is indeed arbitrary to care about certain kinds of physical particles and not others; however, pain is not physical but somehow 'other-worldly' or of a different metaphysical type." I'm not sure what it means to say that something which interacts with the physical world is "other-worldly," but okay, sure. There's still the question, Why care about other-worldly things? Why care about things with different metaphysical types? Suppose the only thing that had a different metaphysical type than physical things was the feeling you get when you put your foot into silly putty. Would that then be all that you care about?

Question: It remains completely unexplained why you do in fact care about such patterns and not others. Also, it doesn't explain why, once you realize that there is nothing special about them, you still care about them. Answer: Say you enjoy eating ice cream. Well, ice cream is just a particular collection of a subset of the atoms from the periodic table arranged in a particular way. This doesn't make it taste less good or lead you to desire it less. This just restates the point that all ethics is ultimately arbitrary—it's we who have to decide what we want to care about. I don't see caring about a certain class of algorithms instantiated by physical brains as less arbitrary than caring about anything else. If it helps, we can think about this class as being "special"—whatever that means. We can have a sense that there's a unique, mysterious thing that it's like to be in pain, and indeed, that language describes what we mean by our caring. But that is, basically, poetry—it shouldn't prevent us from understanding physically what's really going on. Talking in a distanced way about physical mechanisms is important, to ensure that we have a correct understanding of the physical world. However, it shouldn't lead us to see emotions coldly or lose motivation to prevent suffering. There are many levels of abstraction for describing a process, each useful for its own purposes. The more poetic description of self-awareness of certain neural processes that we call "feelings" is entirely appropriate as well. We can also remind ourselves of the seriousness of suffering by raw experience: Does pain hurt any less once you've learned that it's produced by chemical and electrical signals transmitted via nerve cells? Knowing how something happens doesn't make me stop caring about it.

Question: Finally, it is unclear to me how, once you acknowledge that your concern for pain is ultimately arbitrary, can you draw a fundamental difference between concern for pain and the innumerable other concerns that we have pre-reflectively. If all these concerns are on a par, it seems very unlikely that only one of them would survive a process of critical examination while all others would eventually dissolve or disappear. Again, it seems that we must recognize pain as a special type of concern to explain the fact that it is the only one that remains. Answer: That's the way I'm built. Not only do I care about lots of particle movements (preventing suffering, eating ice cream, sleeping when tired, etc.) but not intrinsically about others (e.g., pushing sand around in circles, sorting pebbles into heaps), I also have a special regard for one of those ways of moving particles, namely preventing suffering: I (claim to) place it far above the others in importance. (As a practical matter, it's not clear how much that's the case: I am also a somewhat selfish human being who often fails to spend my resources in a completely optimal manner.) The fact that this machine thinks about certain particle movements as belonging to a unique class is just another of the machine's interesting attributes. A good way to summarize is as follows. At an intuitive level, I still do think in precisely the terms you describe: A "what it's like to feel pain" that's specially bad and whose alleviation ought to take priority over other things people value. But "how the cognitive algorithm feels from the inside" shouldn't get in the way of our really understanding it. One reason it's important to look at the algorithm from the outside as well is that doing so helps us avoid, for instance, the mistake that our AIs will automatically be friendly or empathetic.

Consciousness and tableness

Because our brains have such a hard time with intuitions about consciousness, it's much easier to frame the problem in a different domain. Consider the following object:

[Image: an ordinary table]
Now how about this? Is the following a table?

[Image: a rock being used as a table]
Our concepts—clusters in thingspace—are fuzzy and have trouble with border cases. It seems clear to us that our subjective feelings are forms of consciousness, just as it seems clear that the first image is a table. But it's less clear whether what happens in a Grandroid mind is also a form of consciousness: It has some of the properties of our minds but not all. Just like the rock, it's debatable whether the label "conscious" should apply. Yet while everyone can agree that debating whether the second image is a table is merely a dispute over definitions, people don't always make the same acknowledgment with respect to consciousness; they feel like there's some "objective answer" as to whether a mind is conscious or not. My claim is that there is no fundamental difference.

When we view consciousness as a cluster in thingspace, it becomes natural to see it as coming in gradations. After all, any object that exists is a point in thingspace, and any point in thingspace is some distance away from the centroid of the "consciousness" cluster. Some objects are closer (e.g., computers) and some are farther (e.g., rocks), but no point is infinitely distant. Hence, it's plausible to see anything as resembling consciousness to at least a tiny degree, though of course, the degree of similarity may be so small as to not matter. And in fact, there's nothing special about consciousness in this discussion. The same thinking applies for any concept. Anything can be seen to share at least some traits with a table. For instance, both atoms and tables have mass, can be stacked on top of each other, recoil when objects are thrown against them, and so on. These comparisons are very weak, but the "degree of tableness" of an atom seems to be just very tiny, not fully zero. Of course, if you prefer, you can set definite boundaries around a cluster in thingspace, as long as you don't mind accepting arbitrariness at the border between what's in and what's out.
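One way to picture graded cluster membership is to score each object by its distance from the cluster's centroid in some feature space. The coordinates below are entirely made up for illustration; nothing hinges on them except the qualitative ordering.

```python
import math

# Hypothetical coordinates in "thingspace" (invented for illustration).
centroid_consciousness = (1.0, 1.0, 1.0)
objects = {
    "human":    (1.0, 0.9, 1.0),
    "computer": (0.6, 0.4, 0.3),
    "rock":     (0.0, 0.0, 0.1),
}

def resemblance(point, centroid):
    """Map distance from the centroid to a similarity score in (0, 1].

    The score shrinks with distance but never reaches exactly zero,
    mirroring the claim that no point in thingspace is infinitely
    far from the cluster.
    """
    return 1.0 / (1.0 + math.dist(point, centroid))

scores = {name: resemblance(p, centroid_consciousness)
          for name, p in objects.items()}
```

Under any scoring of this shape, a rock gets a tiny but strictly positive "degree of consciousness," just as an atom gets a tiny but strictly positive "degree of tableness"; drawing a hard cutoff somewhere along the score axis is possible but unavoidably arbitrary.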

Deictic definitions?

In one discussion, Adriano Mannino suggested:

When I speak of consciousness/qualia/subjective experience, I know what I am talking about by an internal deictic definition. I know what pain-qualia are because I (sometimes) have them - and *this* is what I mean by "being in pain". Algorithms don't enter the definition.

As an analogy to this view, suppose we pointed to an electron and said, "this is what I mean by an electron." That would be a fully adequate definition of electron-ness, because electrons are physically identical things.

But we can't do the same for consciousness. If we say "this thing I'm feeling is what consciousness is," then any mind even slightly different from the one you're pointing at is not conscious. There's no unique "inner property" that you're pointing to that's somehow shared across all minds in the same way as the physical constitution of an electron is shared across all electrons. Rather, we have to get messy: We have to start defining similarity measures and creating clusters in mind-space based on brain algorithms, behavioral properties, and other mental features we find morally relevant.

The same is true with tables. We can't point to the first image above and say "this is what I mean by a table," because then what about the thing you have in your dining room to eat on, if it's made of metal rather than wood and is round rather than rectangular? What about the rock used as a picnic table? And so on.

Similar comments apply to other mental states besides "consciousness". Take the mental state of "pleasure". Suppose a person's brain is receiving reward signals and broadcasting this good news along to other brain processes, triggering lots of follow-on consequences. If we ask the person what it feels like, she'll say "It's pleasurable." If we then ask the person what "pleasure" is, she'll point to her brain and say, "It's this stuff that's going on in my head now." But what stuff exactly is being pointed to? And how rigid are the boundaries for what sorts of other brains also count as experiencing pleasure?

Sloman (1991), p. 695:

A fly, a mouse, and a person may all be aware of a moving object: Is that the same [mental] state? There's no answer because there's nothing unique that you have and others definitely do or do not have. I am not denying the existence of what's attended to — just its unique identification. Your state is very complex and other things may have states that are partly similar, partly different. But in what ways? How many different substates underlie "conscious" states? What feels like something simple that is either present or absent is actually something complex, aspects of which may be present or absent in different combinations.

Hard problem of tableness

We can pose the "hard problem of tableness." Rewording from Chalmers:

It is undeniable that some objects are tables. But the question of how it is that these objects are tables is perplexing. Why is it that when an object is used to hold our food, it acquires the tableness property? It is widely agreed that tableness arises from a physical basis, but we have no good explanation of why and how it so arises.

To be fair, there is a real issue with consciousness that makes it unique from tableness. But it's not the issue people think it is. Consciousness causes people to feel a certain "this is subjective experience" sensation and puzzle about why they feel it. Explaining consciousness then partly consists in identifying what sorts of neural processes happen in brains to produce this sensation. This falls under the category of "easy" problems within the Chalmers framework, because although it's of course a very difficult scientific undertaking, there's nothing fundamentally confusing about it. The task is like determining why a computer whose source code you can't directly inspect is producing a given output under certain conditions.

Schwitzgebel (2016) proposes definition by example as a way to define consciousness "innocently enough that its existence should be accepted even by philosophers who wish to avoid dubious epistemic and metaphysical commitments such as dualism, infallibilism, privacy, inexplicability, or intrinsic simplicity." I agree that definition by example is metaphysically harmless, as long as it's understood that the category so defined doesn't have special metaphysical status.

Schwitzgebel (2016) proposes that for phenomenal consciousness, "Positive examples include sensory experiences, imagery experiences, vivid emotions, and dreams. Negative examples include growth hormone release, dispositional knowledge, standing intentions, and sensory reactivity to masked visual displays. Phenomenal consciousness is the most folk psychologically obvious thing or feature that the positive examples possess and that the negative examples lack". This is again fine if it's not taken too seriously, but I feel as though our judgments about whether, e.g., growth-hormone release should count as conscious might well be revised as we learn more, and it seems premature to me to give such negative examples the status of "ground truth". For example, Schwitzgebel (2016) wonders "whether snails might be conscious despite (presumably) their not being disposed to reach phenomenal judgments about their experience." But it's not implausible that computations in the human hypothalamus (and other systems controlling hormone release) are in some sense about as sentient as snails. So we should retain uncertainty about whether hormone release is conscious too. But in that case, our list of negative examples is called into question. Schwitzgebel (2016) acknowledges that "if the putative negative examples failed to be negative, as in some versions of panpsychism, we might still be able to salvage the concept, by targeting the feature that the positive examples have and that the negative examples are falsely assumed to lack." The process of revising our list of positive and negative examples in light of new empirical and theoretical discoveries, in a reflective-equilibrium sort of way, seems right to me.

Schwitzgebel (2016) avoids committing on certain disputed cases of qualia. "I did not, for example, list a peripheral [unattended] experience of the feeling of your feet in your shoes among the positive examples, nor did I list a nonconscious knowledge of the state of your feet among the negative examples." However, I think "sensory reactivity to masked visual displays", which Schwitzgebel (2016) does include among the negative examples, is about as likely to eventually fall under our conception of consciousness as unattended sensations from one's feet. My guess based on what I know of global-workspace research is that both of these cases involve small, local brain signals that don't get amplified to brain-wide notability.

Schwitzgebel (2016) says he's committed to the premise that "The folk category is not empty or broken but rather picks out a feature that (most of) the positive examples share and the negative examples presumably lack. If the target examples had nothing important in common and were only a hodgepodge, this assumption would be violated." I don't think "consciousness" as defined by Schwitzgebel (2016) is a hodgepodge. One reason is that, e.g., whether something is broadcast brain-wide (as in global-workspace theory) already seems to distinguish Schwitzgebel (2016)'s positive vs. negative examples pretty well in humans. However, I'm much more skeptical than Schwitzgebel (2016) seems to be that there's a clear dividing line for consciousness that most people would agree upon if they knew all the relevant neuroscientific details, especially when we stray from human minds. Schwitzgebel (2016) acknowledges that consciousness is "perhaps blurry-edged"; I expect it's very blurry-edged.

Schwitzgebel (2016) encourages readers not to "pick out some scientifically constructed but folk-psychologically non-obvious feature like accessibility to the 'central workspace'". Instead, Schwitzgebel (2016) insists that, like with furniture, we can intuitively classify conscious vs. non-conscious things pretty well without an analysis of the concept. That may be true, but many religious people can probably classify "things with souls" from "things without souls" fairly consistently, at least within a given religion. Humans are clearly among the positive examples of things with souls, and rocks are clearly among the negative examples (except in the case of animism, etc.). That we can identify this folk concept doesn't mean we have to accept "soul realism".

Most of Schwitzgebel (2016)'s positive and negative examples concern human minds. But this makes it hard to generalize beyond humans, much as it's riskier to extrapolate than to interpolate. Imagine that I'm training an image-classification neural network by presenting two classes of labeled images: deciduous trees and evergreen trees. The distinction between these classes is somewhat clear, and the classifier could probably achieve good accuracy. Now suppose we present the classifier with a picture of a dandelion. Is this deciduous or evergreen? I incline toward saying it's more deciduous because the leaves look more like the leaves of deciduous trees than those of evergreen trees. But I would feel uncomfortable making this judgment, because dandelions are quite different from trees. By analogy, I think we should feel uncomfortable generalizing the concept of consciousness from the human domain to non-humans.
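A toy nearest-centroid classifier makes the extrapolation worry concrete. The 2D feature coordinates and labels below are invented for illustration (think of the axes as something like leaf broadness and seasonal leaf loss); the point is only that such a classifier happily emits a label even for a query far from everything it was trained on.

```python
import math

# Invented training features for two classes of trees.
TRAINING = {
    "deciduous": [(0.9, 0.9), (0.8, 1.0), (1.0, 0.8)],
    "evergreen": [(0.1, 0.1), (0.2, 0.0), (0.0, 0.2)],
}

def centroid(points):
    """Component-wise mean of a list of points."""
    return tuple(sum(c) / len(points) for c in zip(*points))

def classify(query):
    """Return the nearest class label and the distance to its centroid."""
    dist, label = min(
        (math.dist(query, centroid(pts)), label)
        for label, pts in TRAINING.items()
    )
    return label, dist

# A "dandelion": broad-leaved, but unlike either training class.
label, dist = classify((0.9, 0.3))
```

The classifier returns a confident-looking label even though the query's distance to the nearest centroid is several times larger than any training point's. A distance-aware system could flag such out-of-distribution queries instead; analogously, non-human minds may be cases where the human-trained concept of consciousness extrapolates poorly and deserves a flag rather than a verdict.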

Like I did earlier in my article, Schwitzgebel (2016) draws an analogy with furniture:

I might say “by furniture I mean tables, chairs, desks, lamps, ottomans, and that sort of thing; and not pictures, doors, sinks, toys, or vacuum cleaners”. Hopefully, you will latch on to approximately the relevant concept (e.g., not being tempted to think of a ballpoint pen as furniture but being inclined to think that a dresser probably is).

Ironically, when I first read this passage, I thought to myself that "pictures" do seem like a kind of furniture, because they hang on the wall like mirrors and some coat racks do. Dictionary.com defines "furniture" as "the movable articles, as tables, chairs, desks or cabinets, required for use or ornament in a house, office, or the like." Pictures (especially when framed) are movable articles that ornament a room, so I feel like they do sort of qualify under the definition. Anyway, this particular dispute doesn't matter, but it serves to illustrate my general point: I expect the boundaries of even an "innocent" concept like "furniture" or "consciousness" will provoke debate.

Schwitzgebel (2016) cautions that "one crucial background condition that is necessary for definition by example to succeed" is that "There must be an obvious or natural category or concept that the audience will latch onto once sufficiently many positive and negative examples have been provided." I don't even think this is true for furniture, as the question of how to classify "pictures" illustrates. Moreover, the concept of furniture becomes even hazier when we venture outside the human realm. Is a bird's nest furniture? Is honey in a bee hive furniture? Do computers have furniture (such as digital tables and chairs in The Sims)?

In addition, I'm not sure what the point of this discussion is. Rather than debating what is and isn't furniture, we should talk about individual items in the house and move on with our lives. Or at least we should create more precise categories, like "wooden furniture", "room decorations", "exercise equipment", and so on. Perhaps Schwitzgebel (2016) cares more about consciousness because of his "wonderfulness condition": "One necessary condition on being substantively interesting in the relevant sense is that phenomenal consciousness should retain at least a superficial air of mystery and epistemic difficulty, rather than collapsing immediately into something as straightforwardly deflationary as dispositions to verbal report, or functional 'access consciousness' in Block’s (1995/2007) sense, or an 'easy problem' in Chalmers’ (1995) sense. [...] Frankish’s quasi-phenomenality, characterized in terms of our dispositions to make phenomenal judgments, does not appear to meet the wonderfulness condition." I side with Frankish here. I think there are lots of things that brains do, and there are likely to be predictors (like global availability) that separate intuitively conscious from intuitively unconscious brain states in humans. But I expect that the mysteries waiting to be discovered will turn out to look a lot like Frankish’s proposed deflation and other similarly "mundane" (though ingenious) explanations. Of course, I might be wrong, and there might turn out to be some non-deflationary yet still physicalist way in which the folk-psychological concept of consciousness does carve nature at its joints to a surprising degree.

Dissolving the question

Eliezer Yudkowsky explained the concept of "dissolving the question" in the context of confusion about free will:

The philosopher's instinct is to find the most defensible position, publish it, and move on. But the "naive" view, the instinctive view, is a fact about human psychology. You can prove that free will is impossible until the Sun goes cold, but this leaves an unexplained fact of cognitive science: If free will doesn't exist, what goes on inside the head of a human being who thinks it does? This is not a rhetorical question! [...] You could look at the Standard Dispute over "If a tree falls in the forest, and no one hears it, does it make a sound?", and you could do the Traditional Rationalist thing: Observe that the two don't disagree on any point of anticipated experience, and triumphantly declare the argument pointless. That happens to be correct in this particular case; but, as a question of cognitive science, why did the arguers make that mistake in the first place? [...] [One possible explanation is that] there can be a dangling unit in the center of a neural network, which does not correspond to any real thing, or any real property of any real thing, existent anywhere in the real world. [...] This dangling unit feels like an unresolved question, even after every answerable query is answered. No matter how much anyone proves to you that no difference of anticipated experience depends on the question, you're left wondering: "But does the falling tree really make a sound, or not?" [Or, in the case of consciousness, "Is this mind really conscious or not?" ...] But when you can lay out the cognitive algorithm in sufficient detail that you can walk through the thought process, step by step, and describe how each intuitive perception arises—decompose the confusion into smaller pieces not themselves confusing—then you're done.

Aaron Sloman discusses the same idea in "Phenomenal and Access Consciousness and the 'Hard' Problem: A View from the Designer Stance." He calls consciousness a "polymorphic" concept, in the sense that what exactly it means depends on the context of usage and what you're trying to say about it. Similar polymorphic concepts include "efficient" or "impediment"; what it looks like for a lawn-mower to be efficient is different from what it looks like for a cough cure to be efficient. So talking about "consciousness" as a single, unified thing is mistaken. There are lots of concrete processes that go on in a brain, and asking "Is X really conscious?" is the wrong question. The answer depends on what details you're trying to figure out.

Water and H₂O

Philosophy of mind often makes use of a comparison with statements like "water is H₂O". In this section I'll explain why "consciousness is [certain kinds of] computations" and "water is H₂O" are parallel reductions.

First, imagine that you live in Africa 20,000 years ago. You drink a fluid that trickles out of your hands. It's transparent, comes from the skies, relieves your thirst, and so on. If you spoke English, you would call a thing with traits like these "water". "Water" is hence defined by how we experience it—by what kinds of things we see it do and how it seems to us.

Also in Africa, you reflect on your own subjective experience. It proceeds in a serial fashion, shifting from one topic of focus to another. It contains a variety of textures depending on what you happen to be doing. There are times when your experience seems "intense" in a way that you don't like. If you spoke English, you would call a thing with traits like these "consciousness".

Now imagine you're a student in the year 2014 in an elementary-school classroom. Your teacher tells you that scientists now know that the thing we call "water" can be modeled successfully as atoms that arrange in certain ways. In particular, when scientists imagine two hydrogen atoms together with an oxygen atom, the resulting molecule exhibits properties that, in aggregate, make it transparent, cause it to trickle out of your hand, etc.

Likewise, your teacher tells you that the thing called "consciousness" can be successfully modeled as neurons that arrange in certain ways. In particular, when scientists postulate billions of neurons arranged in the network configurations of your brain and actively firing with certain patterns, the resulting system generates thoughts that are serially ordered, shift from one topic to another, distinguish among a variety of sensations using what the experiencer calls "qualia", etc.

Sometimes it's objected that, before knowing about the physics of our world, consciousness could have turned out to be something other than brain states. But before knowing about the physics of our world, water could have turned out to be something other than H₂O; for instance, it could have been some substance XYZ. After knowing the physics of our world, we can relate the general, observable features of water/consciousness to the more specific material bases that they turned out to have.

So the general traits we call "water" or "consciousness" could have reduced to something else. But let's ask the converse question: Could H₂O or brain states have turned out not to be water or consciousness?

If the description of H₂O includes its physical and chemical properties, then H₂O must necessarily have the properties of water, because those properties are derivable from the underlying physics. For instance, it must be that liquid H₂O trickles out of your hand. Given the environmental conditions on Earth, H₂O must fall down from the skies. And so on.

Likewise, brain states as found in, e.g., awake, normal humans must necessarily have the properties of consciousness, like serial ordering of attention, discrimination among stimuli and internal states based on what the mind regards as qualia, etc. Seeing that these properties are logically implied by neuroscience is a bit trickier and less intuitive than with water because of their first-person nature, but there's no fundamental difference.

If brain states imply consciousness, then philosophical zombies are not logically possible, just as non-watery H₂O (at room temperature and normal pressure on Earth) is not logically possible. Eric Funkhouser makes a similar point in "A Call for Modesty: A Priori Philosophy and the Mind-Body Problem", p. 28.

The water analogy elucidates other debates in philosophy of mind. Functionalists object to standard type physicalism on the grounds that it fails to allow for multiple realizability. As an analogy, I could find a non-H₂O chemical, say ABC, that is liquid at room temperature, trickles out of my hands, is clear, and has most of the other properties of water. We can imagine another planet where ABC fills the oceans and where aliens have adapted to ingesting ABC. I could say that ABC deserves to be called water too—at least, water of a different kind than H₂O, but still water nonetheless. This would be akin to functionalism. Alternatively, one could say that "water" should be reserved for H₂O, and we should call ABC "water2". This stance would be akin to non-functionalist type physicalism.a In "The Meaning of 'Meaning'", Hilary Putnam gives the example of "jade", which can refer to either jadeite or nephrite, yet both of these minerals have similar superficial properties.

We can see that the functionalist-vs.-type-physicalist debate is trivial in the sense that it just rearranges definitions—moving fences while keeping the ground underneath the same. If we had ethical codes relating to water, people would debate whether ABC was "really water" or whether it didn't count. One could point out that even if ABC was called "water2" rather than "water", we might still care about it anyway. We could operationalize the definition of "water" as being "substances that have moral relevance in our ethical code". People might write essays titled, "Which Chemical Substances Do I Care About?"

Other views on the "philosophy of water" might emerge. For instance:

Dualists would contend that H₂O's properties by themselves cannot explain the "waterness" of water ("the hard problem of water"), and there must be some additional waterness, which either exists as a separate substance (substance dualism) or is an epiphenomenal property of water's physical behavior (property dualism).

Some philosophers might advance views of "panwaterism"—the idea that all normal matter on Earth is at least a little bit like water. For instance, all (or at least most) normal matter on Earth is composed of small particles, obeys the rules of physics and chemistry, has kinetic energy and atomic vibrations, can be made liquid at some temperature, etc.

Others might argue for "water eliminativism"—that the folk concept of "water" as something that trickles from your hand, looks clear, can be drunk, etc. is too fuzzy to be useful and should be replaced by more accurate descriptions in terms of physics and chemistry.

People might ask themselves questions like: "If you put salt in water, is it still water? Is frozen water still water? Is another chemical that looks, smells, and tastes just like water really water?"

As far as I can tell, the only disanalogy between water and consciousness is that people feel consciousness is not explainable in physical terms, while water is. I can feel this intuition myself and understand where such views are coming from. But when you think about it, it's weird that anything in the universe exists or that anything is the way it is. Consciousness is just one thing within the universe, so it's indeed weird, but that's the way things are. There's no need to invent flowery metaphysics that disguises the weirdness without doing anything to make it go away. Consciousness will always feel weird—no matter what the explanation.

Declaring "consciousness" to be a thing apart from physical operations in an effort to explain qualia doesn't solve anything. It's like declaring "God" to be a thing apart from physical operations in an effort to explain the universe. "Consciousness" or "qualia" as things are semantic stopsigns: They cut off further exploration into what those things actually are. It's just as weird for consciousness to be some primitive as for it to reduce to processes involving other primitives. The main difference is that when we take the reductionist viewpoint, we actually think about what consciousness is, and this muddies the calm waters of what would have been a semantic stopsign instead.

Does neuroscience explain phenomenology?

The "explanatory gap" claims that no amount of neuroscience modeling can explain why our experience has the unique texture it does. Why do we see coherent objects and have distinct sensations rather than seeing/feeling a blur or nothing at all? The idea of a philosophical zombie proposes that there might exist a brain that performs all the same algorithms and functions as our brains but that doesn't "feel like" anything.

But if you study neuroscience, you see that neuroscientists' models do (begin to) explain why we perceive coherent objects, distinct sensations, and so on. We can see how various structures perform various functions that are part of the whole process of phenomenal experience. This realization took a lot of reading before it sank in for me, but one example paper for getting started is "Global Workspace Dynamics: Cortical 'Binding and Propagation' Enables Conscious Contents." I'm not committed to the global-workspace model specifically, but it illustrates the outlines of what a theory of consciousness might look like. Of course, there are many parts missing, including being able to trace the specific origins of thoughts like "Wow, I have phenomenal experience."

Optical illusions present an interesting case study in which cognitive science can predict phenomenology to an impressive degree of detail. Dennett notes this in the case of motion capture, where he claims that "Ramachandran and Gregory predicted this motion capture phenomenon, an entirely novel and artificial subjective experience, on the basis of their knowledge of how the brain processes vision." (I'm not sure exactly what study Dennett is referring to. One paper by Ramachandran takes motion capture as data for a theory to explain, rather than predicting the phenomenon in advance. But it's certainly plausible to imagine cognitive scientists predicting the results of novel optical illusions.)

Once we internalize how neuroscience models actually do (crudely begin to) explain where the various parts of our cognitive lives come from, zombies become inconceivable. Having a mind that implements all these components and yet isn't called "conscious" would be like having a set of legs that press one after another against the ground in order to move a body forward and yet aren't considered to be "walking." When I go for a few months without reading philosophy of mind—instead focusing on computer and cognitive sciences—it seems obvious to me that our phenomenal experiences result from mechanical processes, and imagining zombies feels almost like imagining square circles. Consciousness is complex computation of certain sorts and can't be removed from a working mind any more than capitalism can be removed from a market economy.

It helps to really internalize ourselves as being mechanical computers in a mechanical, computational world. We rationally know this is true from the viewpoint of physicalist monism, but conjuring the right mental feelings about it is also important. If we think of ourselves as machines, we can begin to see how we really are just part of the material world, and our inveterate sense of difference between ourselves and everything else in nature diminishes. While not exactly panpsychism, this opens up the door to seeing consciousness-like systems in more places than we'd ordinarily imagine them to be, although we can debate what degree of resemblance is required before ethical implications become significant.

When I picture a global-workspace account of brain processing, doing the numerous different things that our brains do many times per second—including combining external perceptions with a sense of self and telling us that there's a something it's like to be us (cf. Vincent Picciuto's "mental quotation")—this seems to be a satisfactory account of subjective experience to me. Probably you can train your brain to find it either intuitive or unintuitive over time, just like with quantum mechanics or transfinite arithmetic. For that matter, many people find God a satisfying explanation of what I see as a genuine hard problem in philosophy: why the multiverse exists. My intuition that reductionist explanations of consciousness are satisfactory doesn't prove the reduction is true, but by the same token, our intuitions also don't imply there is an explanatory gap, and I see no indications from neuroscience that there should be one.

The main idea

When people feel confused about consciousness, I recommend the following: Consider the moment when you feel like "Consciousness is more than just neural operations; there's a peculiar 'feeling of what it's like' that goes beyond just a computational algorithm playing out." This thought is some physical computation in your brain, right? Well, what produced that thought? What processes in your brain caused you to verbalize that sentiment? Those processes are (one instance of) what consciousness is.

Where would knowledge of non-computational phenomenal experience come from? The knowledge is presumably stored in your material memory neurons. How did it get there? By anything other than a computational process to put it there?

Ultimately, there is just physics, and "experience" is not a physical primitive. I don't believe that some systems are in some objective sense "actually" unitary subjects of experience while others are not; rather, we decide to call some systems (most notably ourselves!) such subjects when our brains respond to them with a "unitary subject of experience" classifier.

The fact that our "feeling conscious" is part of this process of what brains do when they detect consciousness is key. That's why it's so hard to tell people that consciousness is not a physical primitive. It's sort of like telling people that "red" is not a physical primitive when they're staring at an apple. Of course, redness is not an intrinsic property of the universe but arises when our visual systems receive 700-nm light waves. Likewise, consciousness is not an intrinsic property of the universe but is perceived when our brains reflect upon certain systems—typically in our heads but also sometimes in others' heads, perhaps with somewhat different algorithms when we think about others compared with ourselves. (See Michael Graziano's attention schema theory of consciousness.)
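The point about redness can be made concrete with a deliberately crude sketch (the wavelength range is approximate, and real color vision is far more complex): "red" is a detector's classification of its input, and the classification lives in the detector, not in the light.

```python
# Crude sketch: "red" as a detector's response to input, not an intrinsic
# property of the light itself. The 620-750 nm range is approximate.
def looks_red(wavelength_nm):
    return 620 <= wavelength_nm <= 750

print(looks_red(700))  # True: the detector responds "red"
print(looks_red(470))  # False: same physics of light, different classification
```

Asking whether 700-nm light is "really red" apart from any detector is, on this view, the same kind of question as asking whether a system is "really conscious" apart from any classifier that labels it so.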

Treating something as conscious is an act of projection—whether for other minds or ourselves. Of course, the exact details of how this projection happens differ by cases. For ourselves, we have an immediate, overwhelming, unshakable sense that "there's something it's like" to be me. The strength of this sensation probably explains why we have a hard time understanding consciousness as not being an objective property of the universe, though we can see people's sense of self breaking down somewhat in conditions like schizophrenia. With other people and higher animals, the attribution of consciousness is also quite natural, though it doesn't feel the same as our own consciousness because we lack access to the internals of their brains, and our minds tell us that we're "us" and they're "them." For lower animals, computational systems, and so on, attributions of consciousness typically happen via rational comparison-making along various attributes, at least until the comparisons are made enough that they can become internalized as more intuitive.

Brian McLaughlin:

The reason why it seems like there are two things has to do with the way our cognitive architecture is structured and the concepts we use. So our neurobiological concepts—the kind of concepts that people use in neurobiology—are very different from our concepts of consciousness or concepts of things like the feel of pain. Those concepts have very different roles in our cognitive economy. They're kind of independent of each other. And it's because of the independence of the two kinds of concepts that it seems like they're two different things.

Finding the view unsatisfying

Consciousness is one of those domains like free will where our intuitions are almost built to not understand it. We have a certain cognitive stance for computers/machines/material operations, and thinking about ourselves in the same way seems to violate that stance. The fact that it feels weird/unsatisfying is a consistent emotional reaction that our brains produce, but anything other than the physicalist explanation seems either confused or vastly overcomplicated.

In Consciousness Explained, Dennett gives another example of a reduction that might feel unsatisfying (p. 454):

When we learn that the only difference between gold and silver is the number of subatomic particles in their atoms, we may feel cheated or angry—those physicists have explained something away: The goldness is gone from gold; they've left out the very silveriness of silver that we appreciate. [...] But of course there has to be some "leaving out"—otherwise we wouldn't have begun to explain. Leaving something out is not a feature of failed explanations, but of successful explanations.

I suspect that some confusion about consciousness is due to cultural legacy—folk-psychological intuitions about Cartesian souls and such. As Susan Blackmore says, "We can't even describe the problem of consciousness without implying dualism." Like with the idea of libertarian free will, our language is built around confusion. If we had grown up being told from an early age about how consciousness is a poetic means to describe a complex series of processes that happen when intelligent systems process information in certain ways—rather than being told that consciousness is a "great mystery"—I suspect we might be more inclined to take consciousness for granted, in a similar way as we take for granted other seemingly preposterous reductions, like explaining that apples fall because of the curvature of spacetime or that tables may be made of vibrating strings in 11 dimensions. Consciousness has to be something, and it's not any stranger for it to be certain physical operations than for it to be anything else.

I'm not a zombie

One article points out that differing intuitions on philosophy of mind may partly stem from differing cognitive dispositions. For instance, the author claims (without citations) that "Philosophical Idealists, for instance, seem to have exceptionally high openness to experience, whereas dualists are typically the opposite. Physicalism seems to correlate with autism spectrum disorder." The article further proposes that reductionists on consciousness like Dennett may have weaker phenomenal experience than most people and so don't recognize the explanatory gap. (Of course, if weaker phenomenal experience caused the difference in Dennett's behavior, this would require that phenomenal experience not be epiphenomenal.)

One of my friends once asked me if I was a quasi-zombie with a low degree of phenomenal experience to account for my view on consciousness. But it's not true: I have pretty vivid "feelings of what it's like" to experience the world, and indeed, these contributed to my being confused about consciousness until around 2009. To this day, I still find it rather weird that consciousness is just a collection of physical algorithms, but logic forces me to accept it, in a similar way as I'm forced to accept counterintuitive theorems in mathematics that have been shown to follow from their premises. Over time, physicalist reductionism on consciousness does become less strange, especially as I learn more about neural mechanisms and can see how my experiences do make a lot of sense in this framework. But it remains an uphill battle.

In fact, rather than falling on the autistic side and seeing agents in the world as mostly unconscious automatons, I lean toward the opposite extreme: I tend to see consciousness all over, including in places where others don't—including insects, crudely in even present-day computing systems, and just maybe even in natural information processing exhibited by fundamental physics. Given that there's no objectively right answer for what entities do and don't have phenomenal experience, differences in personality will impact the conclusions we draw about who matters and who doesn't.

As it turns out, Dennett himself finds his position counterintuitive too (quote from "The Fantasy of First-Person Science"):

I think the [Dennettian] A team wins, but I don't think it is obvious. In fact, I think it takes a rather remarkable exercise of the imagination to see how it might even be possible, but I do think one can present a powerful case for it. [...] David Chalmers is the captain of the B team [...]. He insists that he just knows that the A team leaves out consciousness. [...] I know the intuition well. I can feel it myself. [...] I feel it, but I don't credit it. [...] We can come to see it, in the end, as a misleader, a roadblock to understanding. We've learned to dismiss other such intuitions in the past--the obstacles that so long prevented us from seeing the Earth as revolving around the sun, or seeing that living things were composed of non-living matter. [...] So now, do you want to join me in leaping over the Zombic Hunch, or do you want to stay put, transfixed by this intuition that won't budge?

First vs. third person

Does it make sense to talk about "first person" as distinct from "third person"? Does our first-person experience show there's something more going on than algorithms?

There's not a difference in kind between first and third person—they're just two ways of talking about the same thing. It's a matter of perspective, like whether you look at an object from the front or behind. The difference is whether our brains are analyzing our internal states or the minds of others. We have much greater access to our own heads, just like a computer has much greater access to its internal variables than it does to other computers that it has to observe or message over a channel.

In general, drawing analogies with computers helps make the confusing feeling go away. We have a first-person perspective in the same way that a computer can reference itself and be aware of what's happening to it.
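The asymmetry of access can be sketched as follows (a hypothetical toy design, not a claim about brain architecture): each agent reads its own internal variables directly, while other agents see only its outward reports. The privacy here is merely by convention in the code, but it mirrors the first-person/third-person split.

```python
class Agent:
    def __init__(self, name, mood):
        self.name = name
        self._mood = mood  # internal state; by convention, read directly only by self

    def introspect(self):
        # "First person": privileged, direct access to internal variables.
        return f"I ({self.name}) observe that my mood variable is {self._mood}."

    def report(self):
        # "Third person": the coarse outward behavior that others can observe.
        return f"{self.name} seems {'cheerful' if self._mood > 0 else 'glum'}."

alice = Agent("Alice", mood=0.7)
bob = Agent("Bob", mood=-0.3)

print(alice.introspect())  # Alice's direct access to her own state
print(bob.report())        # All Alice gets about Bob is his outward report
```

Nothing non-physical distinguishes the two methods; they differ only in which variables they can read, just as a brain has richer access channels to its own states than to anyone else's.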

In the case of brains that have "self-awareness," this self-reference is of a variety where the brain creates a feeling of unity/selfhood, which manifests through various kinds of special responses that create a concept of "what it feels like to be me from a first-person standpoint." Whatever thoughts and behaviors you have as part of this first-person sensation are algorithms going on in your brain that generate and respond to that impression. That's what the first person is: a series of special brain responses that do whatever brains do when they have a feeling of self-awareness.

A friend of mine once said he finds this account somehow "unsatisfying." I kind of know what he's talking about, because it feels like the world of computer processing doesn't capture what's going on when we have experiences. But we find it unsatisfying because of how our brains respond to things. They're built to have this special view of our own processing and the mental processing of others as being distinct from mere material stuff. It's like being in love and unable to see the other person as just an ordinary human, much less a mobile sack of water. Or pareidolia, where you can't avoid interpreting a cliff as a face. Our internal feeling of consciousness is strong but ultimately no more mysterious than seeing faces in toast. That we see faces in cliffs and toast doesn't mean they somehow exist in a special way beyond the ordinary matter of which they're composed.

Postscript: A conversation on the problem of tableness

Alice: The object in my dining room is a table. This is the most certain fact in the world to me. But it's a mystery why it's a table rather than not a table. Sure, it has legs, is made of wood, and supports dinner plates, but none of that explains where its fundamental "tableness" nature comes from. I mean, why would a combination of features like legs and a wooden surface give rise to a table rather than just being legs and a wooden surface? And how do I tell if other objects are tables? What if they're made of metal? Might they still intrinsically be tables? This seems like a deep philosophical problem....

Brian: When you see that thing in your dining room, your brain immediately calls it a "table" in your inner monologue. That strong and immediate feeling is why you say it's the clearest thing in the world to you that your dining-room furniture piece is a table.

Alice: Maybe, but surely whether it's a table does not depend on my mental attitude toward it! Some things just are tables, and some are not. I don't see how the essence of being a table could be up to our whims to decide.

Brian: This "essence of being a table"—what is that? How does it relate to the physical world?

Alice: Tableness is the intrinsic nature of what a table is. I know for sure that the thing in my dining room is a table. Whether other things are also tables I'm never certain, but I can make guesses based on their similarity to what's in my dining room.

Brian: Why do you think the essence of tablehood is related to the physical traits of a table? Why couldn't it attach to random things?

Alice: Maybe it could, but it seems simpler to have a model in which tableness correlates with the properties of the object.

Brian: I see. This "tableness"—does it have any impact on the physical world?

Alice: No, it's an epiphenomenal property of certain pieces of matter. Some material configurations instantiate tableness, while others do not.

Brian: So how do you know something is a table? Couldn't we imagine a world where the thing in your dining room is actually not a table, but you falsely come to believe it is?

Alice: I'm more certain that the thing in my dining room is a table than I am about anything else, including the laws of physics and so on. It's just obvious that the thing in my dining room is a table. Are you denying that tables exist?

Brian: No, I believe tables exist. But I would think of tables differently. In my view, there are various types of matter, and some of them are similar to each other in the ways they're shaped, how they're used, and so on. We decide to call "tables" those pieces of matter that have legs and a flat surface, are used for eating on, etc. So the thing in your dining room is a table by definition or common convention. Fundamentally, it is what it is—a piece of matter—and what we call it doesn't change that.

Alice: No, but don't you see? You're not explaining why that thing is a table. You're saying it has certain properties—fine. But where does this essence of tableness come from? Your physicalist account can't explain that!

Brian: Could you elaborate more on why you feel the thing in your dining room can't be just its collection of legs, surface, functional attributes, etc.? Isn't it simpler to see it as being those things than to postulate something extra?

Alice: The reason is that I can imagine an object with legs and a flat surface that's used for eating on and yet still isn't a table. When I picture just legs and a flat surface, it doesn't feel like it has the same character as the tableness property.

Brian: So could the issue be that your brain generates a "tableness" response when it sees your table, and it doesn't do that when it sees other collections of legs, surfaces, supine dinner plates, etc.? Then you mistake this feeling as representing "something extra" rather than just being a way that your brain reacts to that collection of inputs? In a similar way, you might see a beautiful landscape and think "that's objectively beautiful," when in fact, it's just your brain's response to that collection of inputs that you're referring to by that "objective beauty" notion?

Alice: That might be true for landscapes being beautiful, but tableness is different. It has an all-encompassing impression to it. There's a special "something that it is" to be a table.

Brian: I wonder whether if you went to work with a wood craftswoman, you'd develop different intuitions. She could show you how you can create the table's legs, fit them into the surface, sand it, gloss it, and so on. You might see how the currently atomic concept of "tableness" that you now have can be broken into components that fit together to create a seemingly unified whole. You could create an object very similar to your table, and maybe you'd call that object "a table" as well. Then you could make an object with slightly different legs, or a square top instead of a round top, and so on, and you could see how it shares many of the same component parts, functionality, and properties of your current table. Eventually, you'd have a richer vocabulary for what "tableness" is all about, and you could see how the thing in your dining room can be described by many individual features that work together. Calling the thing in your dining room a "table" might then become like using the name "car" to describe a 2014 Honda Accord LX 4dr Sedan with 185 BHP, 5-passenger seating, and 15.8 ft3 cargo capacity. The added depth of your conceptual tools might dissolve the idea that "tableness" is some special, intrinsic essence.

Alice: Mmm, I'm skeptical. The tableness of my dining-room furniture piece is just so clear. But thanks for the chat, Brian.

Further exchanges

The above fictional conversation was inspired by many past discussions I've had (about consciousness, not tables). Following are some real-life exchanges excerpted and modified from this discussion on Facebook with some friends.

Adriano: There is no one, and there can't be anyone, who has the concept of "chair" without also having the concept of "stuff arranged chair-wise". That's not surprising: They're identical. By contrast, many people (historically, children today, people ignorant about science today) do have the concept of "consciousness" or, more specifically, e.g. of "pain" while not having the concept of "stuff arranged brain-wise (in such and such a way)".

Brian: Not true. Plato would have said there's a perfect essence of chair-ness living in the realm of ideal forms. When we think of a neural-network model of a concept, it's easy to imagine someone's "chair" node being activated by the lower-level features of a chair without that person realizing what the lower-level features are. Analogously, lots of people have their "attractive face" classifiers triggered by symmetry, absence of blemishes, smooth skin, pronounced cheek/jaw bones signifying high levels of sex hormones, etc. without decomposing all those pieces. Consciousness is the same way, and it's just a particularly strong and overwhelming conceptual-node activation. It's one of the last vestiges of Platonist confusion among educated people. If it's too easy to see the components with a chair, we could talk about a wristwatch, or a computer, or whatever. I could refer to a computer without having much of any idea how a computer works, what it's made of, etc.
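To make the neural-network picture concrete, here's a minimal sketch (my own toy example, with invented feature names and weights, not anything from the literature) of a concept node that fires on lower-level features even if the person using the concept can't articulate what those features are:

```python
# Toy "chair" concept node: a thresholded weighted sum of low-level
# feature activations. The user of the concept need only experience
# the node firing, not know the ingredients that made it fire.

def concept_activation(features, weights, threshold=1.0):
    """Return True if the weighted feature sum crosses the threshold."""
    total = sum(weights[f] * v for f, v in features.items() if f in weights)
    return total >= threshold

# Hypothetical low-level features feeding the "chair" node.
chair_weights = {"has_legs": 0.4, "has_seat": 0.5, "has_back": 0.3}

office_chair = {"has_legs": 1.0, "has_seat": 1.0, "has_back": 1.0}
stool = {"has_legs": 1.0, "has_seat": 1.0, "has_back": 0.0}

print(concept_activation(office_chair, chair_weights))  # fires: 1.2 >= 1.0
print(concept_activation(stool, chair_weights))         # borderline: 0.9 < 1.0
```

The "chair" verdict is just a thresholded combination of humbler ingredients; nothing extra is added to the world when the node fires.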

Adriano: If the [neural correlates (NCs)] of conscious states are empirical discoveries—as opposed to (trivial) definitional issues—then you can't be right. And they are empirical discoveries, unless you're going to claim you can know what the NCs are of different pains, say, from the armchair.

Brian: The facts that a chair has four legs, a seat, and a back, is made of wood, etc. are also empirical discoveries. A 2-year-old may not know what "wood" is and may not be able to count to four, but he still has a general, vague sense of "chairness." Then as he gets older, he learns about wood, legs, atoms, etc. These are empirical discoveries.

Think of it like this. You see a computer for the first time. It's a thing with a screen. You press buttons, and it does stuff. You learn how to search for files and browse the web. You type messages. You see that the computer can display screen savers and raise alerts. Now you read a book about computer design, take apart your computer, learn programming and hardware engineering, etc. You come to realize that displaying the screen saver and browsing the web are XYZ physical operations / algorithms by the computer. For your conscious mind, the analogy is similar, except in that case, you observe your consciousness from inside your own head rather than from the external world. It's still just sense perceptions, though. Not a fundamental difference.

Adriano: If there's the behavior we totally arbitrarily choose to call "pain behavior", there can be no doubt whatsoever that there is pain, because "there is pain" just means "there is the behavior we have labeled 'pain behavior'". That's implausible. If I just chose to call your pain behavior "pleasure behavior" and your pleasure behavior "pain behavior", your reaction wouldn't just be: Ah, ok, we can't be disagreeing about anything, we must just be using these expressions differently.

Brian: "Pain" and "pleasure" behavior bake in ethical notions. When we're disagreeing about whether something is pain behavior, we're also implicitly disagreeing about what we want our ethical response to be, which is where the substance of the debate lies. Absent ethics, I agree it would be a trivial semantic dispute.

Adriano: Also note that Brian's above claim could not deal with the possible empirical finding that my own skull is empty. On Brian's view, I would be forced to conclude that I lack consciousness in this case.

Brian: Not true. :) My view of consciousness incorporates many features that we may consider relevant, including behavioral dispositions, verbal claims, etc. If you could act like Adriano without having anything in your skull, I'd still call you conscious. Note that one of my main points about game NPCs [in this essay] was that they exhibit goal-directed behavior, appear to want things, respond aversively to injurious stimuli, etc. All of those attributes could also be seen in a brainless Adriano. It is true that you would not be conscious in exactly the same way as the real Adriano is. You would have a different sort of consciousness. But I would still put you in the category of "conscious beings."

Postscript: Can mind swapping tell us what's conscious?

David Pearce has suggested that "In principle, you or I could engineer a reversible thalamic bridge with another human or a pig. (cf. 'Could Conjoined Twins Share a Mind?')" in order to determine whether the other mind was conscious. The idea of exchanging mind parts or hooking minds together is also elaborated in Marc D. Hauser's chapter "Swappable Minds" in the book The Next Fifty Years. Is this a way to overcome the limitations of our first-person experience and determine "definitively" whether another mind is conscious?

One issue here is that by hooking up the wires, you would be changing the brains of the participants involved. They would not be the same as when they operated individually. To see why this is a problem, imagine hooking yourself up to a book. Presumably the book isn't conscious, but perhaps the neural wiring could be configured such that the book's sentences directly stimulate your verbal centers in such a way that you feel those sentences running through your own head. Would you be thereby "tapping into the conscious thoughts of the book"? And indeed, just reading a book is one crude form of this process, with the "neural wiring" consisting of photons from the book's page entering your eyes, being processed by layers of your visual system, and ultimately converting into inner thoughts. Likewise, scientists are beginning to develop brain implants that directly feel virtual objects. A further step could be to hook up people's minds to the thoughts and emotions of a virtual character. Does that mean the virtual character is conscious? Or is the wiring itself serving as stimulation that generates those thoughts and feelings? In other words, the appearance of another mind would come mainly from the bridge wiring, not necessarily the object to which your brain was being connected.

These examples illuminate the more fundamental problem with trying to assess whether something is "really conscious": Consciousness is not a thing to be grasped but is a label we apply to a collection of processes. It's certainly the case that mind-connections could increase people's empathy and decrease their sense of separateness. This might radically transform the way society looks at the world. But it doesn't change the fundamental nature of what consciousness is or our access to it. As I noted above, everything is a perception by us, whether it's an internal thought or feeling, or an external stimulus from the world. Hooking up to other minds represents just one more form of perception—analogous to touching an object that we could before only see and reason about theoretically.

Postscript: Giulio Tononi and panpsychism

Giulio Tononi has advanced an interesting measure of consciousness that, if taken philosophically, implies a sort of panpsychism. Christof Koch wrote a summary of Tononi's work for Scientific American Mind titled "A 'Complex' Theory of Consciousness." Tononi's contribution is significant, and his ideas have contributed much to the field of consciousness. (That said, I probably prefer for neuroscience research to proceed slower rather than faster.)

Tononi's Phi formula is a measure on network systems that describes their information content and connectedness. Phi is a more sophisticated measure than, say, "number of neurons" or "number of synapses," but it's not different in kind. The "number of neurons" of an organism is surely relevant to consciousness, but it's not a theory of consciousness.
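Tononi's actual Phi is defined information-theoretically over partitions of a system, and I won't reproduce it here. But as a cartoon of what "a measure on network systems" means, here's a much cruder connectedness score of my own invention (emphatically not Phi): the fewest edges that must be cut to split a network into two independent halves.

```python
# Cartoon "integration" score (NOT Tononi's Phi): the minimum number of
# edges crossing any bipartition of the nodes. A system that decomposes
# cleanly scores 0; a system whose parts all constrain each other scores
# higher, no matter how you try to carve it up.

from itertools import combinations

def min_cut_integration(nodes, edges):
    nodes = list(nodes)
    best = float("inf")
    # Check every bipartition (up to symmetry: one side has <= half the nodes).
    for k in range(1, len(nodes) // 2 + 1):
        for part in combinations(nodes, k):
            side = set(part)
            crossing = sum(1 for a, b in edges if (a in side) != (b in side))
            best = min(best, crossing)
    return best

# Two isolated pairs: fully decomposable, integration 0.
print(min_cut_integration("abcd", [("a", "b"), ("c", "d")]))  # 0
# A 4-cycle: every bipartition cuts at least 2 edges.
print(min_cut_integration("abcd",
      [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]))      # 2
```

Like Phi, this toy score is "not different in kind" from counting synapses: it summarizes network structure without saying anything about what the network is doing.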

Phi is presumed to be highest for those brain regions that seem most crucial for consciousness in humans, which suggests that Phi is a useful metric. However, it's not clear that Phi is identical with the features of a mind that we care about. After all, almost any system has nonzero Phi. Do we want to care about hydrogen ions to a nonzero degree? There may be particular, specific things that happen in brains during emotional experiences that are the only features we wish to value, or at least the features we wish to value most. Perhaps these usually accompany high Phi, but they may be a small subset of all possible systems which have high Phi. Work remains to articulate exactly what those features are that we care about. Such work would be aided by a deeper understanding of the mechanisms of conscious experience, in addition to this aggregate measure that seems to be generally correlated with consciousness. (Of course, that aggregate measure is a great tool, but it's far from the whole story.)

Why can't it be the case that Phi actually is the consciousness that we care about, with no extra complications? Well, it could be, and I won't rule it out. But it seems to me that Phi doesn't explain dynamically how consciousness arises. Consciousness is not a reified thing; it's not a physical property of the universe that just exists intrinsically. Rather, instances of consciousness are algorithms that are implemented in specific steps. (I. J. Good parodying Wittgenstein: "what can be said at all can be said clearly and it can be programmed.") Consciousness involves specific things that brains do. What's the exact sequence of steps that leads you to say "Wow, I'm conscious"? I'm confident that process is more specific than a general property of network systems. Of course, the neural precursors to that utterance are not the only things we count as consciousness, but they wave their hands in a direction that's probably more specific and algorithmic than a very elementary property of how networks are arranged.
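As a deliberately crude illustration of what "an algorithm implemented in specific steps" could mean (entirely my own toy, not a serious model of any brain), consider an agent that inspects its own internal registers and, on that basis, emits a claim of experience:

```python
# A toy agent whose self-report is produced by inspecting internal state,
# not by detecting any extra nonphysical property. The point is only that
# "saying 'Wow, I'm conscious'" bottoms out in concrete processing steps.

class ToyAgent:
    def __init__(self):
        self.state = {"stimulus": None, "attention": None}

    def perceive(self, stimulus):
        self.state["stimulus"] = stimulus
        self.state["attention"] = stimulus  # broadcast to a crude "workspace"

    def introspect(self):
        # The report is generated by reading internal registers.
        if self.state["attention"] is not None:
            return "Wow, I'm experiencing " + self.state["attention"] + "!"
        return "Nothing going on."

agent = ToyAgent()
print(agent.introspect())   # "Nothing going on."
agent.perceive("redness")
print(agent.introspect())   # "Wow, I'm experiencing redness!"
```

Of course, real brains implement vastly richer versions of such steps; the sketch just gestures at why the relevant process seems more specific and algorithmic than an aggregate network statistic.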

It's not incoherent to care about something really simple. We could, for example, decide that consciousness is the number of times that electrons jump energy levels in atoms. But that measure doesn't capture what moves us when we see an animal writhing in pain.

I probably care about many different components of what brains do, each to a nonzero degree. Thus, I might indeed give some weight to the Phi measure, along with (potentially more) weight to other aspects of what I think consciousness is. I personally prefer an "active," algorithmically focused view rather than a passive formula based on aggregate statistics, but there's room for both in what I value.

To be sure, I think Tononi's work is interesting and valuable, but the degree of media attention and fascination that it gets is unwarranted. It's not really more groundbreaking than tons of more "boring" neuroscience work. The public is awed by sexy math they can't understand, but if the ideas were explained more plainly, people would say, "Oh, that's it?" Math is like makeup for ideas.

Ben Goertzel suggests that attempts to identify a single, parsimonious theory of consciousness represent physics envy because they take consciousness to be simpler than it is. On the one hand, I can see how simpler moral theories can seem more justified and less hacky, but on the other hand, I think what I value is complex.

What do I think of panpsychism more broadly? According to my reductionist account of consciousness, the idea that all matter contains "mind stuff" is kind of empty. We can interpret any piece of matter as being conscious if we want to, but in many cases it doesn't make sense to most of us to speak that way. Panpsychism is analogous to "pantableism"—the view that tableness is intrinsic to all matter. There is some sense in which you can interpret any piece of matter as being a table. After all, for any (solid) clump of atoms, you can put stuff on it, and it can support the things that it holds. But this definition is really a stretch for most objects. So it is with saying that everything is conscious.

Analogy: Theories of reproduction

This discussion can be summarized by an analogy. Suppose we introduce rabbits to Australia and want to explain their reproduction. Tononi's Phi is analogous to drawing a logistic-growth curve as a "theory of reproduction." It captures well some aggregate statistics about the rabbit populations, but it doesn't explain the mechanics of how they engage in courtship, have sex, give birth, rear the children, etc. Sex and child-rearing can be described at various levels of abstraction: goals and high-level procedure, concrete operations of organs and hormones, and micro-level biology of cells and molecules. These mirror David Marr's three levels for describing a cognitive system. These descriptions enrich our understanding of reproduction beyond just looking at a logistic curve.

Moreover, a logistic curve can describe many other phenomena, like the spread of a rumor in a population or the growth of an economy with resource constraints. Tononi's theory of reproduction would then say that rumors and economies also "reproduce." Certainly we can imagine saying that in a metaphorical way, but do we actually want to consider those to be instances of reproduction? I don't know; it depends on the context. Moreover, if you look at a small piece of a logistic curve, it appears approximately linear. Lines appear in many places throughout nature. Does this mean that shadows of reproduction can be found in all matter? Does this imply pan-reproductionism?
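The underdetermination is easy to see in code. Below is a sketch (with invented parameters) in which the same discrete logistic update generates both a "rabbit population" curve and a "rumor spread" curve; the aggregate trajectories belong to the same family even though the micro-mechanisms they summarize have nothing in common:

```python
# Discrete logistic growth: x_{t+1} = x_t + r * x_t * (1 - x_t / capacity).
# The same update rule "fits" rabbits and rumors alike, which is why a
# curve-level theory underdetermines the underlying mechanism.

def logistic_series(x0, r, capacity, steps):
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(x + r * x * (1 - x / capacity))
    return xs

rabbits = logistic_series(x0=10, r=0.5, capacity=1000, steps=60)  # rabbit population
rumor = logistic_series(x0=3, r=0.9, capacity=5000, steps=60)     # people who heard a rumor

# Both trajectories saturate at their carrying capacities.
print(round(rabbits[-1]), round(rumor[-1]))
```

Nothing in either output series tells you whether the thing growing was courtship-and-birth or word-of-mouth.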

These issues may sound silly, but exactly the same sorts of arguments take place in the philosophy of mind.

While we're on the subject of reproduction, it's worth noting the comparison between the so-called explanatory gap with consciousness and vitalism, the idea that "living organisms are fundamentally different from non-living entities because they contain some non-physical element or are governed by different principles than are inanimate things." Dennett raises this analogy in "Facing Backwards on the Problem of Consciousness". Sam Harris quotes J.S. Haldane as making a statement that we might call "the hard problem of reproduction":

It is exactly the same with the closely related phenomena of reproduction. We cannot by any stretch of the imagination conceive a delicate and complex mechanism which is capable, like a living organism, of reproducing itself indefinitely often.

Harris goes on to dispute this analogy, arguing that "life" is third-person while "consciousness" is first-person. But I don't think there's a fundamental, ontological distinction between these two viewpoints: Both consist in things that we learn from lower-level processes in our brains.

Postscript: Philosophy of cognitive science

One of the things that has most transformed the way I look at the world has been cognitive science, specifically the philosophical understanding that grounds it: Seeing the brain as a collection of cognitive algorithms running on biological hardware. This focus on not just what the brain does but how it might do it is fundamentally transformative.

For as long as I can remember, I had known about the types of psychological facts commonly reported in the news: For instance, that this particular region of the brain controls this particular function, or that certain drugs can treat certain brain disorders by acting in certain ways. And it's basic knowledge to almost everyone on the planet that operations inside the head are somehow important for cognitive function, because when people damage their brains, they lose certain abilities.

While I knew all of this abstractly, I never thought much about what it implied philosophically. I saw myself largely as a homunculus, a black box that performed various behaviors and had various emotions over time. Psychology, then, was like macroeconomics or population biology: What sort of trends do these black boxes tend to exhibit in given circumstances? I didn't think about the fact that my behaviors could be reduced further to particular cognitive-processing steps inside my brain.

Yet it seems pretty clear that such a reduction is possible. Think about computers, for instance. Like a human, a computer exhibits particular behaviors in particular circumstances, and certain types of damage cause certain, predictable malfunctions. Yet I don't think I ever pictured a computer as a distinct inner self that might potentially have free will; there were no ghosts or phantoms inside the machine. Once I had some exposure to computer architecture and software design, I could imagine what kinds of operations might be going on behind, say, my text-editor program. So why did I picture other people and myself differently? My conceptions reflected how an algorithm feels from inside; I simply stopped at the basic homunculus intuition without breaking it apart.

Picturing yourself as a (really complicated and kludgey) computer program casts life in a new light. Rather than simply doing a particular, habitual action in a particular situation, I like to reflect upon, What sort of cognitive algorithm might be causing this behavior? Of course, I rarely have good answers—studying that is what cognitive science is for—but the fact that there is an answer soluble in principle gives a new angle on my own psychology. It's perhaps like the Buddhist notion of looking at yourself from the outside, distanced from the in-the-trenches raw experience of an emotion. And, optimistically, such a perspective might suggest ways to improve your psychology, perhaps by adopting new cognitive rituals. That is, of course, what self-help books have done for ages; the computer analogy (e.g., "brain hacks" or "mind hacks," as they're sometimes called) is just one more metaphor for describing the same thing. (That said, I personally haven't found brain hacks very successful, at least other than the common-sense ones.)

Related is the realization that thought isn't a magical, instantaneous operation but, rather, requires physical work. Planning, envisioning scenarios, calculating the results of possible actions, acquiring information, debating different hypotheses about the way the world works, proving theorems, and so on are not—as, say, logicians or economists often imagine them—immediate and obvious; they involve computational effort that requires moving atoms around in the real world.

The idea of divine foreknowledge imagines that God can somehow know everything that will happen in the universe in an immaterial way. But in fact, knowledge is an arrangement of matter in a certain representational form, and it seems the most accurate means for God to learn how the universe will turn out is to run the universe and see. (Note that I ethically oppose creating universes because of all the suffering they would contain if actualized.) Of course, some general trends may be deducible with less precision than what's obtained by creating a whole universe. Even human cosmologists have a reasonable handle on the long-term future of the universe just with tiny models running in their brains.

Given that thought necessarily takes time and energy, the fact that you considered an option and then disregarded it is not a "wasted effort," because there's no other way to figure out the right answer than actually to do the calculation. Similarly, you're not at fault for failing to know something or for temporarily holding a misconception; the process of acquiring correct (or at least "less wrong") beliefs about the world requires substantive computation and physical interaction with other people. Changing your opinions when you discover you're in error isn't something to be embarrassed about—it's an intrinsic step in the algorithm of acquiring better opinions itself.

Postscript: Definitional disputes in philosophy

I think about 2/3 of philosophy is very useful for clarifying one's thinking or challenging one's intuitions. This is particularly true for philosophy based on various sciences, as well as for some moral philosophy. The other 1/3 of philosophy feels "off" to me in some way—due to its being confused, confusing, or just irrelevant.

Part of the problem is that some of the fields are simply not important, like aesthetics. These domains can provide intellectual tickles, but they're probably not going to help substantially toward reducing suffering. In contrast, other topics like probability theory, epistemology of disagreement, anthropic reasoning, philosophy of physics, and so on are quite relevant to the altruistic enterprise.

However, another part of the problem is that many disputes in philosophy are definitional, like "If a tree falls in a deserted forest, does it make a sound?" Lest you think I'm exaggerating, look at Wikipedia's "List of unsolved problems in philosophy." At least the following so-called puzzles described there are basically entirely definitional disputes:

Art objects

Gettier problem

Sorites paradox

Demarcation problem

There are plenty more examples. For instance, the Stanford Encyclopedia of Philosophy entry on "Holes" consists almost entirely in proposing various definitions of holes and showing how they may be inconsistent with some current human uses of language.

I agree it can be important to define words well, because some definitions are more helpful for discussion than others, but at some point we just have to say, "Fine, let's accept that choice and move on." As Luke Muehlhauser said in "Conceptual Analysis and Moral Theory":

[At MIRI,] Within 20 seconds of arguing about the definition of 'desire', someone will say, "Screw it. Taboo 'desire' so we can argue about facts and anticipations, not definitions."

There's nothing intrinsic about the content of a definition. As Humpty Dumpty told Alice: "When I use a word, it means just what I choose it to mean—neither more nor less."

Yet it seems as though philosophers feel otherwise. As Luke explains in his "Conceptual Analysis" piece:

The trouble is that philosophers often take this "what we mean by" question so seriously that thousands of pages of debate concern which definition to use rather than which facts are true and what to anticipate. In one chapter, Schroeder offers 8 objections [...] to a popular conceptual analysis of 'desire' called the 'action-based theory of desire'. Seven of these objections concern our intuitions about the meaning of the word 'desire', including one which asks us to imagine the existence of alien life forms that have desires about the weather but have no dispositions to act to affect the weather.

There are times when philosophy-like distinctions are important, like when deciding which computations we care about and how much. This project feels like conceptual analysis, because it involves unpacking corner cases and shifting our intuitions based on different plausibility arguments and intuition pumps. The difference is that what we care about actually matters, because it dramatically affects where our altruistic energies go. The question of whether a justified true belief is always "knowledge" does not affect our altruistic actions; yet philosophers argue about it as though their lives depended on it.

I suspect that many debates in philosophy reduce to definition disputes, even if they don't seem that way. This may be the case with moral realism, though I'm not sure. It's sometimes the case for free will.

I close with a quote from Dave Yount that's relevant to the current essay: "If you cannot adequately define what a chair is ... you might be a philosopher." Delineating the boundaries of what makes something a chair, or a table, is not my concern. On the other hand, delineating what makes a mind morally important is very significant, even though it's fundamentally the same kind of question.

Postscript: Perception as a world model

Naively it feels as though the external world is "really" the way it looks to us. If so, it might seem astonishing that our brains can reproduce in our heads the experience of the way the world looks externally. How does the brain do that?

But in fact, there is no such thing as how the world "really" looks. The way something looks is always relative to an observer's perception. Neurons in our brains create some representation of the world, and then whatever that happens to look like is how we see the world. A different animal might see/hear/sense the world differently, and its experiences would be relative to that different perceptual system.

These representations that brains produce are "world models" that help agents make sense of and predict stimuli. A Turing machine that lived on an infinite bitstring tape, if it were sufficiently sophisticated so as to execute consciousness-like algorithms, would also develop a model of its environment, but that environment would look very different from what we can imagine.
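As a minimal sketch of a "world model" in this sense (my own toy construction, not anything from the essay's sources), here's an agent on a stream of symbols that keeps transition counts and uses them to predict the next stimulus. Its "model" is nothing over and above matter (here, a dictionary) arranged in a representational form:

```python
# A toy predictive world model: the agent tracks how often each symbol
# follows each other symbol, and predicts the likeliest successor of the
# most recent stimulus.

from collections import defaultdict

class StreamModeler:
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.prev = None

    def observe(self, symbol):
        if self.prev is not None:
            self.counts[self.prev][symbol] += 1
        self.prev = symbol

    def predict(self):
        """Most likely next symbol given the last one seen, or None."""
        options = self.counts.get(self.prev)
        if not options:
            return None
        return max(options, key=options.get)

model = StreamModeler()
for s in "0101010101":
    model.observe(s)
print(model.predict())  # having just seen "1", it predicts "0"
```

An agent living on a different tape, with different regularities, would build a correspondingly different model; neither model is how the environment "really looks."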

This point alone doesn't answer the "hard problem" of why there is anything at all that it's like to see the world, but it does dispel the mistaken presumption that our brains successfully reproduce the way the world actually looks.

Postscript: Being here rather than there

"Why am I my current self rather than my past self or future self? Why is it 'now' now? Why isn't it some other time?" This is a question that makes my brain crash; it feels so weird to think about.

This question is another form of a related confusing question: "Why am I me rather than someone else? There are other conscious minds in the world, so why am I myself rather than them? Why am I located here rather than there?" Answering this question would also answer the previous question about time, because my past and future selves are additional people within spacetime that I could be but am not. (There seems to be a lot of fuss in the philosophy of time about why now is now, but my impression is that there's less confusion about why one finds oneself here rather than there, even though these are both the same kind of question.)

What is my answer? Well, an unconfusing physicalist account is just to say that spacetime exists, and of course there are going to be some things located in some places and other things in other places. I am a particular "thing" engaged in complex self-reference when I'm thinking about these questions, but fundamentally I'm not different in this respect than, say, an arrow on a piece of paper that's pointing at itself. Why is this arrow here and not somewhere else? That's just the way the matter of the world is arranged. It has to be somewhere, and the way the laws of physics have unfolded has led to it being there.

But with subjective experience, the question feels different. It feels like there's actually something special about my viewpoint. I'm a conscious being who is undergoing transformations in time and space. This is different than an arrow being located somewhere on a page, because the world is happening to me, and time is moving forward for me. I am here now, not somewhere else.

Yet fundamentally there is no difference between me and the arrow. Consciousness is not ontologically privileged in spacetime, and at bottom it is a complex sort of self-pointing arrow. You are pointing at yourself, yes, but so are your other selves pointing at themselves, just as every looped arrow is pointing at itself and not some other arrow. If I ask, "Will the real Slim Shady please stand up?", all of the versions of Slim Shady over his lifetime would stand up and claim to be the "special" one that's experiencing the world as it is now.

That said, I agree it feels weird, just like the reductionist view on consciousness feels weird. My brain still trips itself up thinking about it, just like I might feel vertigo watching someone peer over a balcony on a movie screen. [Update, Sep. 2014: A few months after writing that sentence, I discovered Benj Hellie's vertiginous question. Our common reference to "vertigo" was coincidental.] We seem to have certain instinctive feelings about these issues that lead us to ask questions that may ultimately not make sense.