[http://bigthink.com/ideas/40965 Megan Erickson on November 6, 2011, 12:00 AM]

She says, “Discovering how mechanistic processes work – the firing of neurons or the earth revolving around the sun, for example – is considered by some to be an “easy” problem because it involves observation, the description of an event from a third person point of view. “Hard” problems, on the other hand, involve first person experience. They’re the questions that persist even after physical processes have been mapped and explained.

. . .

So there’s no reason to assume that consciousness is eternally inexplicable. However, it may never be explained through neurobiology, says David Chalmers, the philosopher who originally made the distinction. “In so many other fields physical explanation has been successful… but there seems to be this big gap in the case of consciousness,” he says. “It’s just very hard to see how [neurological] interactions are going to give you subjective experience.”

. . .

What do you think? Is the distinction between “hard problems” and “soft problems” useful, or reductive? Does the brain create consciousness? Will we ever empirically understand where it comes from or how it works?”

She spawned quite a discussion. To my mind, the distinction between “hard” and “soft” problems illuminates, more than anything, how flabbergasted most people still are when contemplating crossing the objective/subjective divide. It illustrates how inconceivable bridging that explanatory gap has seemed to be, and that tells us how little most of us still understand. It turns out, however, that there are a great many reasons to conclude that the brain does indeed create consciousness; Erickson herself points to some in her article. What had been lacking until relatively recently was an overarching framework or theory through which to grasp the nature of consciousness. The lack of a general theory of consciousness, of how it comes to be that there is something that it is like to be, was really the last rational bastion of opposition to the scientific assertion that consciousness emerges from the brain.

The solution to the hard problem is rather simple; it has only been so hard to see for certain evolutionary and cultural reasons. The solution is that the “self” is an illusion: the conscious part of us is actually a representational process. For simple reasons of caloric efficiency, humans have only ever needed, in order to survive, to assume that they directly experience reality. This is now clearly false. We do not directly experience reality. We have various limited sense organs through which, from an existence far richer than we can perceive, information about the environment is taken in and an estimation of reality constructed. Other species construct a different map of the same external world from different sets of senses: some have eyes tuned to a different slice of the electromagnetic spectrum, some use echolocation, some are sensitive to electromagnetic fields, and so on.

It isn’t at all the case, for example, that you “see with your eyes.” You see with your visual system. But we didn’t need to know that to survive. We didn’t need to know that when we saw a large feline predator, mostly orange with black stripes, we were, strictly speaking, seeing a representation of a tiger. We just needed to know to get the fuck out of dodge. If you had a brain tempted to make the (likely correct) philosophical intuition that your experience of reality is actually virtual, those extra milliseconds of reflection could cost you your life in moments of danger; and in most cases, when not under direct threat, until very recently the extra caloric cost of that higher-order thought process (a representation of a representation) would just make you hungrier faster, without much evolutionary benefit.

We are not just naive realists in our understanding of the external world; we are equally (perhaps especially) naive realists in our understanding of our internal world. Hence the common cultural dichotomy of body and mind, and the often magical explanations of consciousness. We don’t experience the neural underpinnings of consciousness, and thus, if what we experience is what is real, consciousness is inexplicable and magical. As Arthur C. Clarke put it, “Any sufficiently advanced technology is indistinguishable from magic.” We just happen to have been born in “technological” equipment like that.

The nature of agency itself is thus tied to the extent to which modern neuroscience may or may not be indicating that consciousness, and hence all of rationality, functions in the brain as a special kind of sense perception of the world. I recommend Thomas Metzinger’s Graduate Council lecture at UC Berkeley for a quick overview of the self-model theory of subjectivity. This is the first paragraph of his book, Being No One:

“This is a book about consciousness, the phenomenal self, and the first-person perspective. Its main thesis is that no such things as selves exist in the world: Nobody ever was or had a self. All that ever existed were conscious self-models that could not be recognized as models. The phenomenal self is not a thing, but a process—and that subjective experience of being someone emerges if a conscious information-processing system operates under a transparent self-model. You are such a system right now, as you read these sentences. Because you cannot recognize your self-model as a model, it is transparent: you look right through it. You don’t see it. But you see with it. In other, more metaphorical, words, the central claim of this book is that as you read these lines you constantly confuse yourself with the content of the self-model currently activated by your brain.”[ii]

This explains the research increasingly indicating the extraordinary extent to which seemingly conscious decisions have already been made before subjects think they consciously made them. Many so-called conscious decisions appear to actually be subconscious, with our conscious awareness merely being informed of the decision. It is not that consciousness has no meaningful role to play in our behavior; rather, it seems to play a larger role the more reflection is involved in any given behavior. Consider, for example, its role in the choice to move to a different city, as opposed to realizing that you just scratched an itch, or the larger role it plays the more we train ourselves in daily practices of mindfulness.

Furthermore, these findings have interesting repercussions for the discussion of agency in general and in the context of freedom.[iv] It seems the true limits of our freedom are not external to our bodies, as the determinism misperception implies; the autonomy of our human entity is not externally determined (only confined). Rather, it is consciousness itself that has been shown to be far more limited than traditionally expected. We don’t have bodies; bodies have us.

Interestingly enough, the descriptions of many mystical experiences are cast in terms of the abandonment of the “self.” I hypothesize that meditative practices, in witnessing the mind as an object, are stepping stones to a higher-order consciousness, one that implicitly recognizes Metzinger’s scientific perspective. It might be more proper to say that mystical experiences are compatible with the self-model theory of consciousness, as indeed they would have to be if the model is any good. I do not want to suggest that Metzinger’s theory in any way implies universal consciousness or any other speculative metaphysics justified on experiential grounds. I speculate on the only way I could find universal consciousness rationally plausible here.

[ii] Metzinger, Thomas. Being No One: The Self-Model Theory of Subjectivity. Cambridge, Mass.: MIT Press, 2003. Print.

[iv] Levy, Neil. “Are Zombies Responsible? The Role of Consciousness in Moral Responsibility.” Centre for Applied Philosophy and Public Ethics, University of Melbourne. nllevy@unimelb.edu.au. http://www.keepandshare.com/doc/3254946/are-zombies-responsible-the-role-of-consciousness-in-moral-responsibility-pdf-november-8-2011?da=y