Why not computing qualia?

Qualia are the given. In our minds, qualia comparisons and interactions are an essential component of our information processing pipeline. This, however, is a particular property of the medium: consciousness.

David Marr, a cognitive scientist and vision researcher, developed a useful conceptual framework for analyzing information processing systems: Marr’s three levels of analysis.

The computational level describes the system in terms of the problems that it can and does solve. What does it do? How fast can it do it? What kind of environment does it need to perform the task?

The algorithmic level describes the system in terms of the specific methods it uses to solve the problems specified at the computational level. In many cases, two information processing systems do the exact same thing from an input-output point of view and yet are algorithmically very different. Even when both systems have near-identical time and memory demands, you cannot rule out the possibility that they use very different algorithms. A thorough analysis of the state-space of possible algorithms and their relative implementation demands could rule this out, but that is hard to do. (The sketch after the abacus example below makes the input-output point concrete.)

The implementation level describes the system in terms of the very substrate it uses. Two systems that perform the same exact algorithms can still differ in the substrate used to implement them.

An abacus is a simple information processing system that is easy to describe in terms of Marr’s three levels of analysis. First, computationally: the abacus performs addition, subtraction, and various other arithmetic computations. Then algorithms: those used to process information involve moving {beads} along {sticks} (I use ‘{}’ to denote that the sense of these words is about their algorithmic-level abstractions rather than their physical instantiation). And the implementation: not only can you choose between a metal and a wooden abacus, you can even get your abacus implemented using people’s minds!
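
To make the distinction between the computational and the algorithmic level concrete, here is a minimal sketch (an illustration with hypothetical function names, not something from Marr): two procedures with identical input-output behavior, and therefore identical at the computational level, that nonetheless use different algorithms. Either one could then be implemented in silicon, in wood, or in a person’s mind.

    def sum_iterative(n):
        # Linear-time algorithm: add the integers 1..n one at a time,
        # much as you would on an abacus.
        total = 0
        for k in range(1, n + 1):
            total += k
        return total

    def sum_closed_form(n):
        # Constant-time algorithm: Gauss's formula n(n+1)/2.
        return n * (n + 1) // 2

    # Identical input-output behavior, different algorithms:
    assert all(sum_iterative(n) == sum_closed_form(n) for n in range(1000))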

What about the mind itself? The mind is an information processing system. At the computational level, the mind has very general power: it can solve problems never before presented to it, and it can design and implement computers to solve narrow problems more efficiently. At the algorithmic level we know very little about the human mind, though various fields center on this level. Computational psychology models the mind at the algorithmic and computational levels. Psychophysics, too, attempts to reveal the parameters of the algorithmic component of our minds and their relationship to parameters at the implementation level. When we reason about logical problems, we do so using specific algorithms; even counting is something that kids do with algorithms they learn.

The implementation level of the mind is a very tricky subject. There is a lot of research on the possible implementation of algorithms that a neural network abstraction of the brain can instantiate. This is an incredibly important part of the puzzle, but it cannot fully describe the implementation of the human mind, because some of the algorithms performed by the mind seem to be implemented with phenomenal binding: the simultaneous presence of diverse phenomenologies in a unified experience. When we make decisions, we compare how pleasant each option would be. To reason, we bundle up sensory-symbolic representations within the scope of the attention of our conscious mind. In general, all algorithmic applications of sensations require the use of phenomenal binding in specific ways. The mind is implemented by a network of binding tendencies between sensations.

A full theory of the mind will require a complete account of the computational properties of qualia. To obtain it, we will have to bridge the computational-level and implementation-level descriptions. We can do this from the bottom up:

  1. An account of the dynamics of qualia (to be able to predict the interactions between sensations just as we can currently predict how silicon transistors will behave) is needed to describe the implementation level of the mind.
  2. An account of the algorithms of the mind (how the dynamics of qualia are harnessed to implement algorithms) is needed to describe the algorithmic level of the mind.
  3. And an account of the capabilities of the mind (the computational limits of qualia algorithms) is needed to describe the computational level of the mind.

Why not call it computing qualia? Computational theories of mind suggest that your conscious experience is the result of information processing per se. But information processing cannot account for the substrate that implements it; to claim otherwise is to confuse the first and second levels with the third. Even a computer cannot exist without a substrate to implement it. In the case of the mind, the fact that you experience multiple pieces of information at once is information about the implementation level of your mind, not about the algorithmic or computational level.

Isn’t information processing substrate-independent?

Not when the substrate has properties needed by the specific information processing system at hand. If you go into your brain and replace one neuron at a time with a silicon neuron that is functionally identical, at what point would your consciousness disappear? Would it fade gradually? Or would nothing happen? Functionalists would say that nothing should happen, since all the functions, local and global, are maintained throughout the process. But what if this is not possible?

If you start with a quantum computer, for example, then part of the computational horsepower is being produced by the very quantum substrate of the universe, which the computer harnesses. Replacing every component, one at a time, with a classical counterpart (a non-quantum chip with similar non-quantum properties) would actually lead to a crash: at some point the quantum part of the computer would break down and no longer be useful.
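
As a toy sketch of this failure mode (an illustration with made-up numbers, not a simulation of real quantum hardware): take a Bell pair, whose two halves always agree when measured in the computational basis, and replace one half with a classical bit that has the same local 50/50 statistics. Each component is locally indistinguishable from the part it replaced, yet the joint behavior collapses. (In a single fixed basis the correlation could still be faked with shared randomness; it is correlations across multiple measurement bases that no classical part-swap can reproduce.)

    import numpy as np

    rng = np.random.default_rng(0)
    trials = 100_000

    # Bell pair (|00> + |11>)/sqrt(2): measured in the computational basis,
    # the two qubits always yield the same (individually random) outcome.
    shared = rng.integers(0, 2, trials)
    bell_a, bell_b = shared, shared

    # Part-by-part "classical replacement": each bit keeps its local 50/50
    # statistics, but the holistic correlation is gone.
    cls_a = rng.integers(0, 2, trials)
    cls_b = rng.integers(0, 2, trials)

    print(np.mean(bell_a == bell_b))  # 1.0  -> perfectly correlated
    print(np.mean(cls_a == cls_b))    # ~0.5 -> correlation destroyed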

Likewise with the mind. If the substrate of the mind is actually relevant from a computational point of view, then replacing a brain by seemingly functionally identical components could also lead to an inevitable crash. Functionalists would suggest that there is no reason to think that there is anything functionally special about the mind. But phenomenal binding seems to be it. Uniting pieces of information in a unitary experience is an integral part of our information processing pipeline, and it is precisely that functionality that we do not know how to conceivably implement without consciousness.


Textures: implementing something with phenomenal binding, rather than implementing phenomenal binding itself (which is not possible).


On a related note: if you build a conscious robot and you do not take phenomenal binding into account, your robot will have a jumbled-up mind. You need to incorporate phenomenal binding into the training pipeline. If you want your conscious robot to have a semantically meaningful interpretation of the sentence “the cat eats”, you need to be able to bind its sensory-symbolic representations of cats and of eating to each other.

Comments

  1. randalkoene · December 11, 2017

    While the description of Marr’s levels is nice, unfortunately, the last part of this essay is ‘begging the question’ (or, self-referentially basing conclusions on an unsupported assumption). The author simply states that “phenomenal binding is it”, as if pointing to phenomenal binding should explain to everyone in which way a functionalist approach to the mind must be wrong, or that pointing to phenomenal binding would clearly implicate an unknown attribute of the substrate. But how has the author determined that other, functional, solutions to the problem have been exhausted??
    Take for example this hypothesis: That simultaneous activation of neurons in specific phases (time intervals) of oscillatory brain activity achieve binding and that such binding can be transferred usefully to further processing by the phase-locked cooperation between brain regions.

    • algekalipso · December 16, 2017

      Hi Randal! The key thing I want to leave you with is the notion that what we are trying to explain is not merely the functional properties of the brain, but also the subjective qualities of experience. Within some accounts of consciousness/theories of mind, both of these things are accomplished at the same time by providing an algorithmic description that explains the input-output behavior of our brain. And if, say, functionalism is true, then this might be all you need to do. But in this line of argument, one would then have to explain why functionalism is true and how it addresses the deeper problems that arise from contemplating that which has to be explained about consciousness. You may know that we do not think that functionalism succeeds as a theory of consciousness, and we are still looking for counter-arguments to our criticisms of it (see: https://qualiacomputing.com/2017/07/22/why-i-think-the-foundational-research-institute-should-rethink-its-approach/).

      In particular, I would point out that the hypothesis you discuss is great insofar as we are talking about explaining “functional unity” and “integration of information to produce an appropriate output”. Simultaneous activation of neurons in specific phases *does seem to be empirically highly correlated with local binding* (but not global binding, cf. https://qualiacomputing.com/2016/12/16/the-binding-problem/). Yet the *explanatory status* of these facts remains at the computational level of abstraction of Marr’s three levels, and it does not (yet) address the *ontological unity* of moments of experience.

      As a quip, I’d say: explaining functional unity is not sufficient to explain ontological unity. Now, you may claim that functional unity (e.g. defined as information integration to produce an output, à la IIT) is numerically the same as ontological unity (as is often claimed within functionalist paradigms), but I am not convinced. That said, I admit that this makes “ontological unity” sound epiphenomenal, as in: it does not have a measurable effect over and beyond the functional unity we already explained. So here is the kicker: I think that ontological unity *has* behavioral consequences, and in fact, without it, I think that the outputs of our nervous system would look different. We have yet to actually explain and prove scientifically that classical accounts of binding are sufficient to account for behavior (so far it is more of a heuristic). And in time, I suspect that we will find strong counter-examples to this, with information integration that produces outputs in defiance of the account of binding that you are proposing.

      In particular, I think that the sort of research you are carrying out will greatly accelerate this process:

      Work on neural prostheses could be used to falsify many implications of functionalism. What I expect to see is that substituting parts of the brain with pseudo-I/O-equivalent neurochip components will work in some places, somewhat work in others, and totally fail in yet others. At first it will be possible to explain away the incomplete successes/problems with neurochips by saying that neurons are really finicky and that of course we can’t expect to get it right easily. The weird thing will be that some of those chips will nonetheless work fantastically well (e.g. the hippocampus, potentially) but really poorly in other places (e.g., I suspect, the corpus callosum or thalamic regions, anywhere in which the syntax of qualia computing is quintessential to the functioning of the brain, or where there is large-scale binding/‘boundrying’). Either way, the work in this area will provide amazingly useful pieces of information.

  2. Linc cholke · June 25, 2016

    It seems that all that can be studied is some form of cognition, and the post can be restated as a study of cognition. We have no way to even start to describe the aspect of qualia which is not cognitive, the aspect I call the private observer experience.

    I would like your comments on this.

    • algekalipso · July 13, 2016

      I think that it is possible to bootstrap our way into describing every single variety of qualia. We are very far from being able to do that, of course, since we lack the primitive concepts that develop in the presence of entirely different varieties of qualia. We also lack primitive concepts for the majority of the qualia that we experience on a daily basis, let alone the rarer sorts of experience people have every once in a while.

      Bootstrapping, however, may be possible if we manage to create in our consciousness the conditions for a well-defined ecology of arbitrary sets of qualia varieties. Then we could develop new conceptual primitives in those spaces and be able to develop a language and new forms of existence.

      We can start, though, by developing a better language for our own experiences. As I outlined in Psychophysics for Psychedelic Research: Textures (https://qualiacomputing.com/2015/04/20/psychophysics-for-psychedelic-research-textures/), the state-space of textures has 5 spectra, each corresponding to a histogram of summary statistics. These summary statistics correspond exactly to the various convolutional filters that our cortex applies during early visual processing. So, as it turns out, our visual experiential world can only really be described properly by referring to a state-space of summary statistics. And that is a more precise language that can at least help us convey and analyze the qualities of our experience (even if they weren’t cognitive to begin with).
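
      Here is a minimal sketch of the idea (the oriented Gabor filters, the number of orientations, and the bin count below are illustrative assumptions, not the exact summary statistics from the linked post): apply a few oriented filters, histogram each filter’s responses to get one “spectrum” per filter, and concatenate the histograms into a crude coordinate in the state-space of textures.

          import numpy as np
          from scipy.signal import convolve2d

          def gabor_kernel(theta, size=9, freq=0.25, sigma=2.0):
              # Oriented Gabor filter: a stand-in for a cortical edge detector.
              half = size // 2
              y, x = np.mgrid[-half:half + 1, -half:half + 1]
              xr = x * np.cos(theta) + y * np.sin(theta)
              envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
              return envelope * np.cos(2 * np.pi * freq * xr)

          def texture_descriptor(image, n_orientations=4, n_bins=16):
              # One histogram ("spectrum") of filter responses per orientation,
              # concatenated into a single descriptor.
              spectra = []
              for k in range(n_orientations):
                  theta = k * np.pi / n_orientations
                  response = convolve2d(image, gabor_kernel(theta), mode="valid")
                  hist, _ = np.histogram(response, bins=n_bins, density=True)
                  spectra.append(hist)
              return np.concatenate(spectra)

          rng = np.random.default_rng(0)
          noise = rng.standard_normal((128, 128))  # an unstructured "texture"
          stripes = np.sin(np.linspace(0, 40 * np.pi, 128))[None, :].repeat(128, axis=0)
          # Different textures land at different points in the state-space:
          print(texture_descriptor(noise)[:4])
          print(texture_descriptor(stripes)[:4])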

      • Tim Tyler · December 9, 2017

        “If the substrate of the mind is actually relevant from a computational point of view, then replacing a brain by seemingly functionally identical components could also lead to an inevitable crash. Functionalists would suggest that there is no reason to think that there is anything functionally special about the mind. But phenomenal binding seems to be it.”

        There’s nothing terribly special about phenomenal binding IMO. The alleged unitary nature of consciousness is not physical – if it exists in the first place it is illusory – i.e. it just seems unitary, and is actually implemented as a distributed system in the brain, complete with asynchronous operation, time lags, etc. The evidence for phenomenal binding being somehow beyond Turing machines is really shitty, poor-quality evidence. Note that even quantum computing is not beyond Turing: a classical computer can simulate a quantum one, just slowly.

