Generalized Wada Test and the Total Order of Consciousness

In a Wada test, a single hemisphere is sedated with sodium amobarbital. While the sedated hemisphere is unresponsive, the other hemisphere undergoes a cognitive examination. The test is used to determine whether ablative surgery on a given hemisphere is a viable treatment for epileptic seizures: it helps surgeons avoid irreversible damage to brain areas crucial for everyday life, such as language production regions.


The Generalized Wada Test

The thought of targeting an isolated brain region for drug therapy is very stimulating. But do we have to sedate it? Sodium amobarbital may have properties that make it a good fit for the Wada test, but it is unlikely to be the only substance that can be used. More broadly, there seem to be a variety of compounds suitable for intracarotid drug delivery.

In all likelihood there are a number of psychedelic compounds that could selectively affect brain regions via intracarotid delivery. One thought is to inject 2C-B (or whichever psychedelic has the desired pharmacological properties) into one hemisphere so that a person can compare the two sides of her visual field. This way, she would be able to compare side by side the features and patterns highlighted by the algorithms of her visual system (which would, presumably, differ on each side). In turn, this would enable us to catalogue more precisely the specific differences in visual experience under the influence of various drugs.

Even more generally, one could also make use of additional brain interventions such as tDCS, ultrasound, optogenetics, etc. For example, imagine using ketamine and tDCS on the right hemisphere while the left receives ultrasound stimulation. We have a combinatorial explosion. A good one. I call this the Generalized Wada Test (GWT).
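To get a sense of how fast this space grows, here is a minimal sketch in Python. The intervention lists are purely illustrative placeholders, not a proposed protocol:

```python
# A minimal sketch of how quickly the space of GWT conditions grows.
# The intervention lists below are illustrative placeholders.
from itertools import product

drugs = ["none", "amobarbital", "ketamine", "2C-B"]
stimulation = ["none", "tDCS", "ultrasound", "optogenetics"]

# One (drug, stimulation) pair per hemisphere.
hemisphere_conditions = list(product(drugs, stimulation))
experiments = list(product(hemisphere_conditions, hemisphere_conditions))

print(len(hemisphere_conditions))  # 16 conditions per hemisphere
print(len(experiments))            # 256 left/right combinations
```

Even with only four options per category, there are already 256 distinct left/right configurations before considering doses or timing.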


Philosophical Applications of the Generalized Wada Test

This technique presents a striking possibility: approaching philosophical problems empirically. More specifically, this technique might be used to:

  1. Test the properties of phenomenal binding, and
  2. Allow “incommensurable” experiences to “experience each other” as the halves of a unitary consciousness

Phenomenal binding can be put under a microscope by using a GWT to infer the necessary chemical properties that brain regions require in order to enable the integration of phenomenal features into unitary experiential wholes. The speed at which binding takes place between the hemispheres could also be quantified. If phenomenal binding is not possible between two given states of consciousness, that would also be very valuable information for consciousness research.

With regards to the second possibility…


Is there a Total Order of Subjective Preferences?

Take two states of consciousness A and B. Suppose we use a GWT to make A manifest in the left hemisphere, while B does so in the right. The subject as a whole is asked to decide which of the two states of consciousness is subjectively preferable. If A is preferred over B, then a directed edge from B to A is added to the graph (with a weight proportional to the certainty/degree of preference). By adding the corresponding weighted edge between every pair of states of consciousness inducible on a GWT we would map a large portion of the state-space of consciousness available to humans. Let’s call this graph the directed network of subjective preferences.

Now, once we have fully populated such a graph… would it actually be a directed acyclic graph (DAG)? Could we extract a Total Order? In other words, does the directed network of subjective preferences reveal a proper ordering of experiences from least to most preferred?

Can we make a universal scale of subjective preferability? Is it possible to infer a scale that, as David Pearce would call it, shows us the utility function of the universe?

But what if we find cycles?
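To make the bookkeeping concrete, here is a minimal sketch of the directed network of subjective preferences, assuming a handful of hypothetical GWT judgments (the state names and weights are made up). It checks whether the graph is a DAG, extracts one least-to-most-preferred ordering if it is, and lists the cycles if it is not:

```python
# A sketch of the directed network of subjective preferences.
# Requires networkx; all judgments below are hypothetical.
import networkx as nx

G = nx.DiGraph()
# An edge (B, A, weight=w) encodes "A is preferred over B with certainty w".
hypothetical_judgments = [
    ("boredom", "mild amusement", 0.7),
    ("mild amusement", "LSD bliss", 0.9),
    ("mild amusement", "opioid bliss", 0.8),
    ("opioid bliss", "LSD bliss", 0.6),
]
for less_preferred, more_preferred, certainty in hypothetical_judgments:
    G.add_edge(less_preferred, more_preferred, weight=certainty)

if nx.is_directed_acyclic_graph(G):
    # One linear extension, from least to most preferred.
    print(list(nx.topological_sort(G)))
else:
    # Cycles mean the preferences cannot be linearized as they stand.
    print(list(nx.simple_cycles(G)))
```

Note that a topological sort only yields one ordering consistent with the recorded preferences; a genuine Total Order additionally requires every pair of states to be comparable.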


Hedonic Tone

Even though there is a very close relationship between bliss and activity in the outer shell of the nucleus accumbens (and various other nearby hedonic hot-spots), it is not yet clear whether all pleasurable, blissful or otherwise subjectively valuable states are triggered by the activation of this area. We know that classic psychedelics, for example, do not have pharmacological dopaminergic or opioidergic action, and thus don’t activate the nucleus accumbens directly. And yet, people do report ecstatic and blissful states of consciousness on LSD…

It is not yet clear whether that bliss is mediated by hedonic hot-spot activity (thankfully, we may soon find out). If psychedelic bliss is fundamentally dissociated from dopaminergic and opioidergic activity, what would that say about the nature of pleasure? Could there be higher levels of bliss that are unrelated to current neurobiological models of subjective reward? What if everyone on acid bliss says that acid bliss is better than heroin bliss, while everyone on heroin bliss says the opposite? What do we make of Dostoevsky’s epileptic bliss?

For several instants I experience a happiness that is impossible in an ordinary state, and of which other people have no conception. I feel full harmony in myself and in the whole world, and the feeling is so strong and sweet that for a few seconds of such bliss one could give up ten years of life, perhaps all of life.

I felt that heaven descended to earth and swallowed me. I really attained god and was imbued with him. All of you healthy people don’t even suspect what happiness is, that happiness that we epileptics experience for a second before an attack.

Nothing short of a Generalized Wada Test would be able to approach these questions.

Why not computing qualia?

Qualia is the given. In our minds, qualia comparisons and interactions are an essential component of our information processing pipeline. However, this is a particular property of the medium: Consciousness.

David Marr, a cognitive scientist and vision researcher, developed an interesting conceptual framework to analyze information processing systems. This is Marr’s three levels of analysis:

The computational level describes the system in terms of the problems that it can and does solve. What does it do? How fast can it do it? What kind of environment does it need to perform the task?

The algorithmic level describes the system in terms of the specific methods it uses to solve the problems specified on the computational level. In many cases, two information processing systems do the exact same thing from an input-output point of view, and yet they are algorithmically very different. Even when both systems have near identical time and memory demands, you cannot rule out the possibility that they use very different algorithms. A thorough analysis of the state-space of possible algorithms and their relative implementation demands could rule out the use of different algorithms, but this is hard to do.

The implementation level describes the system in terms of the very substrate it uses. Two systems that perform the same exact algorithms can still differ in the substrate used to implement them.

An abacus is a simple information processing system that is easy to describe in terms of Marr’s three levels of analysis. First, computationally: the abacus performs addition, subtraction and various other arithmetic computations. Then the algorithms: those used to process information involve moving {beads} along {sticks} (I use ‘{}’ to denote that the sense of these words is about their algorithmic-level abstractions rather than physical instantiation). And the implementation: not only can you choose between a metallic and a wooden abacus, you can also get your abacus implemented using people’s minds!
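As a toy illustration of the computational/algorithmic distinction, here are two procedures that are indistinguishable at the computational level (both map n to 1 + 2 + … + n) but differ at the algorithmic level; the implementation level is whatever substrate happens to run them:

```python
# Two systems with identical computational-level descriptions but
# different algorithmic-level descriptions.

def sum_by_counting(n: int) -> int:
    # Algorithm 1: accumulate one term at a time, like sliding abacus beads.
    total = 0
    for k in range(1, n + 1):
        total += k
    return total

def sum_by_formula(n: int) -> int:
    # Algorithm 2: Gauss's closed form n(n + 1)/2; same input-output behavior.
    return n * (n + 1) // 2

assert sum_by_counting(100) == sum_by_formula(100) == 5050
```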

What about the mind itself? The mind is an information processing system. At the computational level, the mind has a very general power: it can solve problems never before presented to it, and it can also design and implement computers to solve narrow problems more efficiently. At the algorithmic level, we know very little about the human mind, though various fields center on this level. Computational psychology models the algorithmic and the computational levels of the mind. Psychophysics, too, attempts to reveal the parameters of the algorithmic component of our minds and their relationship to parameters at the implementation level. When we reason about logical problems, we do so using specific algorithms, and even counting is something that kids do with algorithms they learn.

The implementation level of the mind is a very tricky subject. There is a lot of research on the possible implementation of algorithms that a neural network abstraction of the brain can instantiate. This is an incredibly important part of the puzzle, but it cannot fully describe the implementation of the human mind. This is because some of the algorithms performed by the mind seem to be implemented with phenomenal binding: The simultaneous presence of diverse phenomenologies in a unified experience. When we make decisions we compare how pleasant each option would be. And to reason, we bundle up sensory-symbolic representations within the scope of the attention of our conscious mind. In general, all algorithmic applications of sensations require the use of phenomenal binding in specific ways. The mind is implemented by a network of binding tendencies between sensations.

A full theory of the mind will require a complete account of the computational properties of qualia. To obtain those we will have to bridge the computational and the implementation level descriptions. We can do this from the bottom up:

  1. An account of the dynamics of qualia (to be able to predict the interactions between sensations just as we can currently predict how silicon transistors will behave) is needed to describe the implementation level of the mind.
  2. An account of the algorithms of the mind (how the dynamics of qualia are harnessed to implement algorithms) is needed to describe the algorithmic level of the mind.
  3. And an account of the capabilities of the mind (the computational limits of qualia algorithms) is needed to describe the computational level of the mind.

Why not call it computing qualia? Computational theories of mind suggest that your conscious experience is the result of information processing per se. But information processing cannot account for the substrate that implements it. Otherwise you are confusing the first and the second level with the third. Even a computer cannot exist without first making sure that there is a substrate that can implement it. In the case of the mind, the fact that you experience multiple pieces of information at once is information concerning the implementation level of your mind, not the algorithmic or computational level.

Isn’t information processing substrate-independent?

Not when the substrate has properties needed for the specific information processing system at hand. If you go to your brain and replace one neuron at a time by a silicon neuron that is functionally identical, at what point would your consciousness disappear? Would it fade gradually? Or would nothing happen? Functionalists would say that nothing should happen, since all the functions, locally and globally, are maintained throughout this process. But what if this is not possible?

If you start with a quantum computer, for example, then you have the problem that part of the computational horsepower is being produced by the very quantum substrate of the universe, which the computer harnesses. Replacing every component, one at a time, by a classical counterpart (a non-quantum chip with similar non-quantum properties) would actually lead to a crash. At some point the quantum part of the computer will break down and will no longer be useful.

Likewise with the mind. If the substrate of the mind is actually relevant from a computational point of view, then replacing a brain by seemingly functionally identical components could also lead to an inevitable crash. Functionalists would suggest that there is no reason to think that there is anything functionally special about the mind. But phenomenal binding seems to be exactly that. Uniting pieces of information in a unitary experience is an integral part of our information processing pipeline, and precisely that functionality is the one we do not know how to conceivably implement without consciousness.


Textures

Implementing something with phenomenal binding, rather than implementing phenomenal binding (which is not possible)


On a related note: If you build a conscious robot and you neglect phenomenal binding, your robot will have a jumbled-up mind. You need to incorporate phenomenal binding into the training pipeline. If you want your conscious robot to have a semantically meaningful interpretation of the sentence “the cat eats,” you need to be able to bind its sensory-symbolic representations of cats and of eating to each other.

A workable solution to the problem of other minds

Deciding whether other entities are also conscious is not an insoluble philosophical problem. It is tricky. A good analogy might be a wire puzzle. At first glance, the piece you have to free looks completely locked. And yet a solution does exist; it just requires representing a number of facts and features so large that our working memory is not enough.

Usually showing the solution once will not fully satisfy one’s curiosity. It takes some time to develop a personally satisfying account. And to do so, we need to unpack how the various components interact with one another. After a while the reason why the free piece is not locked becomes intuitive, and at the same time you may also encounter mathematical arguments and principles to complement your understanding.

At first, though, the free piece looks and feels locked.

I think the problem of other minds is perceived similarly to a wire puzzle. At first it looks and feels insoluble. After a while, though, many suspect that the problem can be solved. This essay proposes a protocol that may point in the right direction. It could have some flaws as it is currently formulated, so I’m open to refinements of any kind. But I believe that it represents a drastic improvement over previous protocols, and it gets close to being a fully functioning proof of concept.

Starting from the basics: an approach that is widely discussed is the application of a Turing test. But a Turing test has several serious flaws when used as a test of consciousness. First, many conscious entities can’t pass a Turing test, so we know that it could have very poor recall (missing most conscious entities). This problem is also present in every protocol I’m aware of. The major problem is that when an entity passes a Turing test, this can be counted as probabilistic evidence in favor of a large number of hypotheses, and not only of the desired conclusion that “this entity is conscious.” In principle, highly persuasive chatbots could hack your entity recognition module by presenting hyperstimuli created by analyzing your biases for styles of conversation.

Your brain sees faces everywhere (cartoons, 2D computer screens, even clouds). It also sees entities where there are none. It might be much simpler to *trick* your judgement than to actually create a sentient intelligence. Could the entity given the Turing test be an elaborate chatbot with no phenomenal binding? It seems plausible that this could take place.

Thus passing a Turing test is also not a guarantee that an entity is conscious. The method would have low recall and probably low accuracy too.

The second approach would be to simply *connect* your brain to the other entity’s brain (that is, of course, if you are not talking about a disembodied entity). We already have something like the Corpus Callosum, which seems to be capable of providing a bridge that solves the phenomenal binding problem between the hemispheres of a single person. In principle we could create a biologically similar, microfunctionally equivalent neural bridge between two persons.

Assuming physicalism, it seems very likely that there is a way for this to be done. Here, rather than merely observing the other person’s conscious experience, the point of connecting would be to become one entity. Strong, extremely compelling personal identity problems aside (Who are you really? Can you expect to ‘survive’ after the union? If you are the merged entity, does that mean you were always the same consciousness as the one with whom you merged? Etc. More on this in later posts), this possibility opens up the opportunity to actually corroborate that another entity is indeed conscious.

Indeed separate hemispheres can have very different opinions about the nature of reality. Assuming physicalism, why would it be the case that you can’t actually revert (or instantiate for the first time) the union between brains?

The previous idea has been proposed before. I think it is a significant improvement over the use of a Turing test, since you are directly addressing the main phenomenon in question (rather than its ripples). That said, the method has problems and epistemic holes. In brief, a big unknown is the effect that interfacing with another conscious experience has on both conscious experiences. For example, some people (like Eliezer Yudkowsky and Brian Tomasik) have argued that your interaction with the other brain could functionally expand your own mind, as if you had obtained a large hardware upgrade. Thus it could be that the whole experience of being connected and becoming one with another entity is a fantasy of your recently-expanded mind. It can give you the impression that the other brain was already conscious before you were connected to it. So you can’t rule out that it was a zombie before the connection began and after it was over.

But there is a way out. And this is the stimulating part of the essay. Because I’m about to untangle the wires.

The great idea behind this solution is: phenomenal puzzles. The phenomenal puzzle linked here is about figuring out the appropriate geometry of color (arranging the state-space in a Euclidean manifold so that the degrees of subjective difference between colors are proportional to their distances). Doing this requires the ability to compare the various parts of an experience to each other and to remember the comparison. This can be iterated to generate a map of subjective differences. It is an instance of what I call qualia computing, where you need to be in touch with the subjective quality of your experience and to be capable of comparing sensations.
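As a sketch of what solving such a puzzle amounts to formally, classical multidimensional scaling recovers a Euclidean arrangement whose distances are proportional to judged dissimilarities. The color names and the dissimilarity matrix below are made-up placeholders, not real psychophysical data:

```python
# Classical MDS: embed judged color dissimilarities in a Euclidean plane.
import numpy as np

colors = ["red", "orange", "yellow", "green", "blue"]
D = np.array([  # hypothetical subjective differences (symmetric, zero diagonal)
    [0.0, 1.0, 2.0, 3.0, 4.0],
    [1.0, 0.0, 1.0, 2.0, 3.5],
    [2.0, 1.0, 0.0, 1.5, 3.0],
    [3.0, 2.0, 1.5, 0.0, 2.0],
    [4.0, 3.5, 3.0, 2.0, 0.0],
])

# Double-center the squared dissimilarities, then embed with the top
# two eigenvectors.
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
eigvals, eigvecs = np.linalg.eigh(B)
top = np.argsort(eigvals)[::-1][:2]
coords = eigvecs[:, top] * np.sqrt(np.maximum(eigvals[top], 0))

for name, (x, y) in zip(colors, coords):
    print(f"{name}: ({x:.2f}, {y:.2f})")
```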

In brief, you want to give the other entity a puzzle that can only be solved by a conscious entity via manipulating and comparing qualia. The medium used to deliver the puzzle will be a first-person merging of brains: To share the puzzle you first connect with the entity you want to test.

By doing this, by sharing the puzzle while you are connected to the other entity, you will be able to know its inner referents in terms of qualia. While connected, you can point to a yellow patch and say “this is yellow.” Possibly, both halves will have their own system of private referents (a natural consequence of having slightly different sense organs which make variable mappings between physical stimuli and qualia). But as a whole the merged entity will be able to compare notes with itself about the mapping of stimuli to qualia in both halves. The entity could look at the same object from the point of view of its two heads at the same time and form a unified visual field, which incorporates the feed from the two former “personal-sized” visual fields (similarly to how you incorporate sensory stimuli from two eyes; now you’ll see with four). The color appearance of the object could have a slightly different quality when the two visual fields are compared. That’s the fascinating thing about phenomenal binding. The differences in mappings between stimuli and qualia of the two former entities can be compared, which means that this difference can be analyzed and reasoned about and added to both repertoires of hippocampal snapshots of the current experience.

Then, when you disconnect from the other and there are two streams of consciousness going on again, you will both know what that “yellow” referred to. This overcomes the age-old problem of communicating private referents and mutually agreeing on names for them. This way, the pieces of the (phenomenal) puzzle will be the same in both minds.

For the test to work, the specific question needs to stay secret until it is revealed shortly before merging.

Imagine that you have a set of standardized phenomenal puzzles. Psychologists and people who have done the test before tend to agree that the puzzles in the set do require you to explore a minimum number of states of consciousness. The tests have precise conceptual answers. These answers are extremely difficult to deliver by accident or luck.

The puzzles may require you to use external tools like an image editor or a computer. This is because computers can enable you to program combinations of sensory input in precise ways, which expands the phenomenal gamut you can reach. In turn, one can calibrate sensory input to have nice properties (e.g., gamma correction). The puzzles will also be selected based on the time sentient beings typically take to solve them.

When you want to perform the test, you meet with the entity right after you finish reading the phenomenal puzzle. The puzzle is calibrated to not be solvable in the time it will take you to connect to the other entity.

When you connect your brain to the other entity and become one conscious narrative, the entire entity reads the puzzle to itself. In other words, you state out loud the phenomenal puzzle by clearly pointing to the referents of the puzzle within your own “shared” experience. Then you disconnect the two brains.

While the other entity is trying to solve the puzzle, you distract yourself so that you don’t solve it on your own. Ideally, you might want to bring your state of consciousness down to very low activity. The other entity will have all of its stimuli controlled to guarantee there is no incoming information, so that all the “qualia processing” is going on through approved channels. When the entity claims to have solved the puzzle, at that point you connect your brain back to it.

Does the merged entity know anything about the solution to the puzzle? You search for a memory thread that shows the process of solving the puzzle and the eventual answer. Thanks to the calibration of this puzzle (it has also been given to “merged” entities before) we know you would need more time to solve it. Now you may find yourself in a position where you realize that if the other entity was a zombie, you would have somehow solved a phenomenal puzzle without using experience at all. If so, where did that information come from?

With the memory thread you can remember how the other entity arrived at the conclusion. All of the hard work can be attributed to the other entity now. You witness this confirmation as the merged entity, and then you disconnect. You will still hold redundant memories of the period of merging (both brains do, like the hemispheres in split-brain patients). Do you know the answer to the puzzle? You can now check your memory for it and see that you can reconstruct the answer very quickly. The whole process may even take less time than it would take you to solve the puzzle.

If you know the answer to the puzzle you can infer that the other entity is capable of manipulating qualia in the same way that you can. You would now have information that your mind/brain could only obtain by exploring a large region of the state-space of consciousness… which takes time. The answer to the puzzle is a verifiable fact about the structure of your conscious experience. It gives you information about your own qualia gamut (think CIELAB). In summary, the other entity figured out a fact about your own conscious experience, and explained it to you using your own private referents.
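The inference rests on a simple timing argument, sketched below with hypothetical numbers: if the answer can be reconstructed in less time than the calibrated minimum solo-solving time, the qualia processing must have happened in the other entity.

```python
# A toy sketch of the timing argument behind puzzle calibration.
# All numbers and names are hypothetical.

def solved_by_other(elapsed_seconds: float,
                    calibrated_min_solo_solve_seconds: float,
                    answer_reconstructed: bool) -> bool:
    """True if the reconstructed answer cannot be explained by the tester
    solving the puzzle alone within the elapsed time."""
    return answer_reconstructed and (
        elapsed_seconds < calibrated_min_solo_solve_seconds)

# Example: merging, reading, and reconnecting took 20 minutes, but the
# puzzle is calibrated to require at least 3 hours of solo exploration.
print(solved_by_other(20 * 60, 3 * 3600, True))  # True
```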

You can then conclude that if the entity solved the phenomenal puzzle for you, it must be capable of manipulating its qualia in a way semantically consistent with how you do it. A positive result reveals that the entity utilizes conscious algorithms. Perhaps even stronger: it also shares the generalizable computational power of a sapient mind.

Unfortunately, just as for the Turing test, not passing this test is no guarantee that the other entity lacks consciousness. What the test guarantees is high precision: nearly every entity that passes the test is conscious. And that is a milestone, I think.

Do you agree that the problem of other minds is like a wire puzzle?

Now go ahead and brainstorm more phenomenal puzzles!

Manifolds of Consciousness: The emerging geometries of iterated local binding

The qualia manifolds

Ever noticed implicit geometries in the structure of the qualia you deal with on a daily basis?

So here is one observation about our experience. Visual experience has two major dimensions and one minor one (depth). This sensory modality is experienced as either 2 or 3 dimensional (with ambiguous points in between also instantiated at times). It also has specific topological features. It seems that the edges of the visual field are the edges of a patch in Euclidean space; the edges are not connected to each other. At first, it might take you by surprise to consider hypothetical visual fields whose edges are actually connected. Maybe you could make it a torus, by connecting the left and right edges as well as those at the top and bottom of the visual field. It would make a manifold of experience. You may also twist it before connecting it, making a Klein bottle or a projective plane.

Real Projective Plane. Imagine your visual field connected to itself in this way by twisting and joining the edges. 

A common reaction to this idea is “it may be impossible to do that; maybe the geometry of our visual field is the only possible one.” Without actually interfering with your mind and brain directly, it is unlikely that I’ll be able to show conclusively that it is possible. But there is a strong intuition pump available to help you conceive of the possibility.

So, touch your arm. Your wrist, more specifically. Using a finger, trace a circle around the wrist. You end up where you started, and yet you only advanced in one direction.
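For what it's worth, the edge identifications described above can be written down explicitly. The sketch below assumes a rectangular visual field of width w and height h; the function names and parameters are illustrative. Walking off one edge re-enters on the identified edge, and the Klein bottle flips the horizontal coordinate on every vertical wrap:

```python
# Canonicalize a point on a rectangular field with identified edges.

def wrap_torus(x: float, y: float, w: float, h: float):
    # Identifications: (x, y) ~ (x + w, y) and (x, y) ~ (x, y + h)
    return x % w, y % h

def wrap_klein_bottle(x: float, y: float, w: float, h: float):
    # Identifications: (x, y) ~ (x + w, y) and (x, y) ~ (w - x, y + h)
    if int(y // h) % 2 != 0:      # an odd number of vertical wraps
        x = -x                    # reverses the horizontal direction
    return x % w, y % h

# Moving past the right edge of a torus re-enters on the left:
print(wrap_torus(11.0, 3.0, w=10.0, h=10.0))         # (1.0, 3.0)
# Moving past the top edge of a Klein bottle mirrors you horizontally:
print(wrap_klein_bottle(2.0, 11.0, w=10.0, h=10.0))  # (8.0, 1.0)
```

The wrist circle works the same way: you keep advancing in one direction and end up back where you started, exactly as if you had walked across one of these identified edges.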