Daniel Dennett, in his multiple drafts model of consciousness, argues that we believe our consciousness is far more unified than it really is. This is true as far as it goes, both experimentally and philosophically. But Dennett and other philosophers of mind (and nearly everyone I’ve met who roots for him) take this one step further: the unity of consciousness is an illusion. This is not incidental. From a functionalist point of view, any sort of unity of consciousness is not something you would actually be able to predict. Hence, in order to justify that model, whatever unity of consciousness does exist has to be explained away somehow (as an “illusion,” more often than not).
The need for a radically new paradigm, however, does not come from the idea that our consciousness is always fully unified in the naïve way common sense suggests. It comes simply from the existence of some unity, any amount of unity at all, because a purely computational account would not predict any unity of the sort we see in consciousness.
I usually start with the following: “your left and right visual fields are currently being experienced as a unitary visual field.” Many object to this, in part perhaps because you can only focus your attention on one side at a time (though awareness of the other side, as opposed to attention, does remain in most cases). A harder-to-dispute instance of unitary consciousness is the fact that you can recognize objects, which itself requires instantaneous information unity. For example, when you look at your hand, you don’t just see it as a collection of shapes and colors; you see it and recognize it as a “hand.” The very concept of a hand implicitly contains a wide variety of pieces of information instantaneously joined together.
Some argue that the unity of consciousness is not good for anything, that it is just an epiphenomenon. But it isn’t. You just need to look at what happens when it breaks down, as in simultanagnosia, schizophrenia, or high doses of ketamine. Not being able to unify the features of an experience into an integrated whole impairs the information processing your mind typically accomplishes. So here is another big hint: phenomenal binding is computationally relevant. I think a strong argument could be made for “why we are conscious” merely by looking at the advantages of phenomenal binding over classical computers. If such advantages exist, they may explain why natural selection would have *recruited* consciousness as an information-processing device instead of sticking to classical information processing.
Some people bring the Integrated Information Theory of consciousness (Tononi) into the picture when asked about phenomenal binding. However, IIT does not provide a mechanism of action for the unity of consciousness. It *assumes it is the result of irreducible information*! IIT acknowledges that conscious states simultaneously represent multiple pieces of information in an indivisible whole. The problem, though, is that rather than asking “what is the mechanism of action for this unity?” the theory makes a false start: it asks “under the assumption that the mind is a classical computer, what kind of physical systems would display integrated information?” But if you go out looking for integrated information in that sense, already defined within a classical paradigm, you have already made a mistake. You have assumed a specific framework for how the information gets integrated, one that cannot work even in principle. This is because the various parts of a classical system are not in direct, instantaneous communication with each other. If you remove a part, the news that it no longer exists takes time to reach the other parts. And at any given time you cannot really define a “global state,” because the changes in each part have not yet had the chance to influence the other parts.
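To make the propagation-delay point concrete, here is a toy simulation (my own illustration, not anything from the IIT literature): a ring of nodes that can only learn about the world through their immediate neighbors. When one node is removed, the news reaches the farthest node only after several ticks, so there is no single instant at which all the parts share one up-to-date “global state.”

```python
# Toy classical message-passing system: a ring of N nodes, each of which
# only updates its picture of the world from its two neighbors, one tick
# at a time. Remove a node and count how long the news takes to spread.

N = 6
alive = {i: True for i in range(N)}                           # ground truth
belief = {i: {j: True for j in range(N)} for i in range(N)}   # each node's view

alive[0] = False   # remove node 0
ticks = 0
while any(belief[i][0] for i in range(N) if alive[i]):
    # One tick: each surviving node gossips with its ring neighbors,
    # using the beliefs they held at the START of the tick (the delay).
    snapshot = {i: dict(belief[i]) for i in range(N)}
    for i in range(N):
        if not alive[i]:
            continue
        for nb in ((i - 1) % N, (i + 1) % N):
            if not alive[nb]:
                belief[i][nb] = False          # direct neighbors notice the loss
            else:
                for j in range(N):             # merge the neighbor's (older) view
                    belief[i][j] = belief[i][j] and snapshot[nb][j]
    ticks += 1

print(f"All surviving parts learned of the removal only after {ticks} ticks.")
```

Running this prints 3 ticks for a six-node ring: at every intermediate moment, some parts of the system are acting on a picture of the whole that is already out of date.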
In contrast to a classical system, each part of your local consciousness (whatever bundle IS unified, as opposed to the naïve version Dennett discredits) is instantaneously a part of a whole. A “whole,” due to delays in information propagation, cannot be defined in a classical system, and yet it is a central feature of consciousness.
In addition, in a classical system you can fully account for the emergent behavior with a strict bottom-up approach. The large-scale behavior is an emergent phenomenon of the small-scale interactions (such as wetness being nothing but the interaction of water molecules). Contra classical systems, your consciousness has an instantaneous bottom-up *and* top-down relationship. Not only is the meaning of the whole determined by the interactions of the parts, but the nature of the parts is determined by the whole. An example of this is how, when you see a cube, each of its square sides stops being just a square: each becomes “the side of a cube.” The very nature of the experience of those squares as the sides of a cube cannot be accounted for without taking the entire experience into account.
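By contrast, here is a minimal sketch (again my own toy example, under the assumption that classical composition is a fair stand-in for classical computation generally) of how classical wholes leave their parts untouched: a part embedded in a whole is exactly the same object as the part on its own, so the whole adds nothing to the part’s intrinsic nature.

```python
# In a classical data structure, composition is strictly bottom-up:
# being placed inside a "whole" changes nothing about the part itself.

from dataclasses import dataclass

@dataclass(frozen=True)
class Square:
    side: float

@dataclass(frozen=True)
class Cube:
    faces: tuple  # six Squares

lone_square = Square(side=1.0)
cube = Cube(faces=tuple(Square(side=1.0) for _ in range(6)))

# Each face is indistinguishable from a free-standing square:
print(cube.faces[0] == lone_square)   # True: being "a side of a cube"
                                      # changed nothing about the square
```

There is no classical analogue of the face *becoming* “the side of a cube”: the square’s intrinsic state is identical whether it sits inside the cube or not, which is exactly the disanalogy with the phenomenal case.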
The correct approach, I think, is to focus on the unity of consciousness itself and ask “what sort of beast is this?” rather than assume it has to be the result of some predefined process. Tentatively, I think the way to investigate this would be to try to replace the corpus callosum with other machinery that is functionally identical in some sense. If we replace it with synthetic neurons that have the same macrofunctionality as biological neurons and the unity of consciousness breaks down, we would know that such unity is not accounted for by simple information transmission in the classical sense. In addition, and in parallel, we could start a research program that seeks to define the computational advantages of consciousness over classical systems. Personally, I think this will be a very fruitful project, and it will ultimately have tremendous applications (Better Computing Through Qualia!).