In the last decade, a number of neuroscientists have become interested in the question of consciousness, for example Christof Koch, Stanislas Dehaene, and Gerald Edelman, among many others. There have been a number of interesting new insights into this old subject, mostly focused on the so-called “neural correlates of consciousness”, that is, the properties of neural activity that are associated with conscious states, as opposed to, say, coma. However, to my mind there is still no convincing theory that explains what consciousness is, why we are conscious at all, and why we feel anything at all (phenomenal consciousness). But there have been attempts. A recent one is the integrated information theory (IIT) proposed by Tononi, which holds that consciousness is a property of all systems that have a high level of “integrated information”. In a nutshell, such a system is a dynamical system that cannot be divided into smaller independent (or weakly dependent) subsystems. The term “information” should be understood in the sense of information theory: how much information (uncertainty reduction) the state of one subsystem carries about the future of another subsystem. The problem with this theory, in a nutshell, is that it is as much about consciousness as information theory is about information.

Christof Koch is a well-known advocate of IIT. He describes the theory in a popular science article entitled “Ubiquitous minds” (available on his web page). I should point out that it is not an academic paper, so perhaps my criticisms will seem unfair. To be fair, then, let us say that what follows is a criticism of the arguments in that article, but not necessarily of Koch's thought in general (admittedly, I have not read his book yet).

Koch correctly presents IIT as a modern form of panpsychism, that is, the idea that lots of things are conscious to some degree. Animals, of course, but also any kind of system, living or not, that has high “integrated information” (named “phi”). On his blog, Scott Aaronson, a theoretical computer scientist, gives an example of a matrix multiplication system that has this property and that should therefore be highly conscious according to IIT, if it were physically implemented. Now Tononi and Koch do not see this counter-intuitive implication as a problem with the theory; on the contrary, they embrace it as a highly interesting implication. Koch speculates, for example, that the internet might be conscious.

Koch starts by describing the naïve version of panpsychism, which indeed can easily be dismissed. Naïve panpsychism states that everything is conscious, to different degrees: a brain, a tree, a rock. This immediately raises a big problem (referred to as the “problem of aggregates” in the article): you might claim that everything is conscious, but then you need to define what a “thing” is. Is half a rock conscious? Then which half? Is any set of 1000 particles randomly chosen in the universe conscious? Is half of my brain plus half of your stomach a conscious entity?

IIT is more restricted than naïve panpsychism, but it suffers from the same problem: how do you define a “system”? Wouldn't a subsystem of a conscious system also be conscious, according to the theory? As Koch writes, the theory offers no intrinsic solution to this problem; it must be augmented with an ad hoc postulate (“that only ‘local maxima’ of integrated information exist”). What puzzles me is that the article ends with the claim that IIT offers an “elegant explanation for [the existence of] subjective experience”. What I have read here is an interesting theory of interdependence in systems, and then a claim that systems made of interdependent parts are conscious. Where is the explanation in that? A word (“consciousness”) was arbitrarily attached to this particular property of systems, but no hint was provided at any point of any connection between the meaning of that word and the property of those systems. Why would this property produce consciousness? No explanation is given by the theory.

If it is not an explanation, then it must simply be a hypothesis: the hypothesis that systems with high integrated information are conscious. That is, it is a hypothesis about which systems are conscious and which are not. As we noted above, this hypothesis assigns consciousness to non-living things, possibly including the internet, and definitely including some rather stupid machines that no one would consider conscious. I would consider this a problem, but proponents of IIT would simply adopt panpsychism and consider that, counter-intuitively, those things actually are conscious. But then this means admitting that no observation whatsoever can give us any hint about which systems are conscious (contrary to the first pages of Koch's article, where he argues on precisely such grounds that animals are conscious); in other words, that the hypothesis is metaphysical and not testable. So the hypothesis is either unscientific or wrong.

Now I am not saying that the theory is uninteresting. I simply think that it is a theory about consciousness and not of consciousness. What is it about, exactly? Let us go back to what integrated information is supposed to mean. Essentially, high integrated information means that the system cannot be subdivided into two independent systems: the future state of system A depends on the current state of system B, and conversely. This corresponds to an important property of consciousness: the unity of consciousness. You experience a single stream of consciousness that integrates sound, vision, etc. Sound and vision are not experienced by two separate minds but by a single one. Yet that is what we would expect if sound and light were processed by two unconnected brain areas. Thus a necessary condition for a unified conscious experience is that the substrate of consciousness cannot be divided into causally independent subsets. This is an important requirement, and therefore I do think that the theory has interesting things to say about consciousness, in particular about what its substrate is, but it explains nothing about why there is a conscious experience at all. It provides a necessary condition for consciousness – and that's already quite good for a theory about consciousness.
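This notion of causal interdependence can be made concrete with a toy calculation (a sketch of the underlying idea only, not Tononi's actual phi measure, which is considerably more involved). Consider a two-unit binary system and ask how much the current state of unit A tells us about the next state of unit B, comparing coupled dynamics (each unit copies the other) against decoupled dynamics (each unit keeps to itself):

```python
from collections import Counter
from math import log2

def mutual_information(pairs):
    """Mutual information (bits) between the two coordinates of a list of
    (x, y) pairs, each pair assumed equally likely."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# All four initial states (a, b) of a two-bit system, equally likely.
states = [(a, b) for a in (0, 1) for b in (0, 1)]

# Pairs (current state of A, next state of B) under each dynamics:
coupled = [(a, a) for (a, b) in states]    # coupled: B's next state copies A
decoupled = [(a, b) for (a, b) in states]  # decoupled: B's next state is its own

print(mutual_information(coupled))    # 1.0 bit: A's state predicts B's future
print(mutual_information(decoupled))  # 0.0 bits: B's future ignores A entirely
```

In the coupled case the system cannot be cut into two independent halves without losing predictive power, which is the property IIT quantifies; in the decoupled case the cut costs nothing.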

But that's it. It does not explain why an interdependent system should be conscious – and in fact, given some examples of such systems, it seems unlikely that this is the case. What is missing in the theory? I hinted at it in my introduction: the problem with integrated information theory is that it is as much about consciousness as information theory is about information. The word “information” in information theory has little to do with information in the common sense of the word, that is, something that carries meaning for the receiver. Information theory is actually better described as a theory of communication. In fact, one should remember that Shannon's seminal paper was entitled “A Mathematical Theory of Communication”, not of information. In a communication channel, A is encoded into B by a dictionary, and B carries information about A insofar as one can recover A from B. But of course this only makes sense for the person sitting at the receiving end of the channel if 1) she has the dictionary, and 2) A already makes sense to her. “Information” theory says nothing about how A acquires any meaning at all; it is only about the communication of information. For this reason, “integrated information” fails to address another important aspect of consciousness, which in philosophy is named “intentionality”: the idea that one is always conscious of something, i.e. that consciousness has a “content”, not just a “quantity of consciousness”. Any theory that is solely based on information in Shannon's sense (dictionary) cannot say much about phenomenal consciousness (what it feels like).
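The indifference of Shannon's measure to meaning can also be shown with a toy sketch (the codebook below is hypothetical, purely for illustration): relabel the source symbols into arbitrary nonsense, and the mutual information across the channel is unchanged, because the measure only tracks how well B lets you recover A, not what A means to anyone.

```python
from collections import Counter
from math import log2

def mutual_information(pairs):
    """Mutual information (bits) between the two coordinates of a list of
    (x, y) pairs, each pair assumed equally likely."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# A "dictionary" (codebook) mapping source symbols to channel symbols.
code = {"cat": "00", "dog": "01", "sun": "10", "sky": "11"}
source = ["cat", "dog", "sun", "sky"]          # equiprobable messages
channel = [(m, code[m]) for m in source]

# Strip the words of all meaning by relabelling them arbitrarily...
scramble = {"cat": "x7", "dog": "q2", "sun": "k9", "sky": "m4"}
nonsense = [(scramble[m], code[m]) for m in source]

# ...yet Shannon's "information" is identical in both cases.
print(mutual_information(channel))   # 2.0 bits
print(mutual_information(nonsense))  # 2.0 bits
```

Two bits flow through the channel either way; nothing in the formalism distinguishes a meaningful message from gibberish, which is exactly the gap between “integrated information” and conscious content.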

For the end of this post, I will simply quote Scott Aaronson:

“But let me end on a positive note. In my opinion, the fact that Integrated Information Theory is wrong—demonstrably wrong, for reasons that go to its core—puts it in something like the top 2% of all mathematical theories of consciousness ever proposed. Almost all competing theories of consciousness, it seems to me, have been so vague, fluffy, and malleable that they can only aspire to wrongness.”