A workable solution to the problem of other minds

Deciding whether other entities are also conscious is not an insoluble philosophical problem. It is merely tricky. A good analogy is a wire puzzle: at first glance, the piece you have to free looks completely locked. And yet a solution does exist; it just requires representing more facts and features at once than our working memory can hold.


Usually, being shown the solution once will not fully satisfy one’s curiosity. It takes time to develop a personally satisfying account, and to do so we need to unpack how the various components interact with one another. After a while the reason the free piece is not actually locked becomes intuitive, and along the way you may also encounter mathematical arguments and principles that complement your understanding.

At first, though, the free piece looks and feels locked.

I think the problem of other minds is perceived much like a wire puzzle: at first it looks and feels insoluble, yet after a while many come to suspect that it can be solved. This essay proposes a protocol that may point in the right direction. It may have flaws as currently formulated, so I’m open to refinements of any kind. But I believe it represents a drastic improvement over previous protocols, and it comes close to being a fully functioning proof of concept.

Starting from the basics: an approach that is widely discussed is the application of a Turing test. But the Turing test has several serious flaws when used as a test of consciousness. First, many conscious entities can’t pass it, so we know it could have very poor recall (missing most conscious entities). This problem is shared by every protocol I’m aware of. The major problem is that when an entity passes a Turing test, the result counts as probabilistic evidence in favor of a large number of hypotheses, not only the desired conclusion that “this entity is conscious.” In principle, highly persuasive chatbots could hack your entity-recognition module by presenting hyperstimuli crafted from an analysis of your biases for styles of conversation.

Your brain sees faces everywhere (cartoons, 2D computer screens, even clouds). It also sees entities where there are none. It might be much simpler to *trick* your judgement than to actually create a sentient intelligence. Could the entity taking the Turing test be an elaborate chatbot with no phenomenal binding? That seems entirely plausible.

Thus passing a Turing test is also no guarantee that an entity is conscious. The method would have low recall and probably low precision too.
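To make that framing concrete, it helps to treat the test as a binary classifier and compute its recall and precision. A minimal sketch in Python; the counts below are invented purely for illustration:

```python
# Toy illustration of why the Turing test fares poorly as a consciousness
# test. All counts are hypothetical.

def recall(tp: int, fn: int) -> float:
    """Fraction of truly conscious entities that manage to pass the test."""
    return tp / (tp + fn)

def precision(tp: int, fp: int) -> float:
    """Fraction of entities that pass the test which are truly conscious."""
    return tp / (tp + fp)

# Hypothetical population: most conscious entities (animals, infants,
# non-verbal humans) cannot pass a Turing test, and some persuasive
# chatbots can pass it by hacking our conversational biases.
tp = 10   # conscious entities that pass
fn = 90   # conscious entities that fail (most of them)
fp = 40   # non-conscious chatbots that pass

print(f"recall    = {recall(tp, fn):.2f}")     # 0.10 -- misses most minds
print(f"precision = {precision(tp, fp):.2f}")  # 0.20 -- passing proves little
```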

The second approach is to simply *connect* your brain to the other entity’s brain (assuming, of course, that you are not dealing with a disembodied entity). We already have something like this in the corpus callosum, which seems capable of providing a bridge that solves the phenomenal binding problem between the two hemispheres of a single person. In principle we could create a biologically similar, microfunctionally equivalent neural bridge between two persons.

Assuming physicalism, it seems very likely that this could be done. Here, rather than merely observing the other person’s conscious experience, the point of connecting is to become one entity. Setting aside the extremely compelling personal-identity problems this raises (Who are you, really? Can you expect to ‘survive’ the union? If you are the merged entity, does that mean you were always the same consciousness as the one you merged with? More on this in later posts), this possibility opens up the opportunity to actually corroborate that another entity is indeed conscious.

Indeed, separate hemispheres can hold very different opinions about the nature of reality. Assuming physicalism, why should it be impossible to revert (or instantiate for the first time) a union between brains?

This idea has been proposed before, and I think it is a significant improvement over the Turing test, since it directly addresses the main phenomenon in question rather than its ripples. That said, the method has problems and epistemic holes. In brief, a big unknown is the effect that interfacing with another conscious experience has on both experiences. Some people (like Eliezer Yudkowsky and Brian Tomasik) have argued that interacting with the other brain could functionally expand your own mind, as if you had obtained a large hardware upgrade. The whole experience of being connected and becoming one with another entity could therefore be a fantasy of your recently expanded mind, one that gives you the impression that the other brain was already conscious before you connected to it. You cannot rule out that it was a zombie before the connection began and after it was over.

But there is a way out. And this is the stimulating part of the essay. Because I’m about to untangle the wires.

The great idea behind this solution is: phenomenal puzzles. The phenomenal puzzle linked here is about figuring out the appropriate geometry of color: arranging the color state-space in a Euclidean manifold so that the degrees of subjective difference between colors are proportional to their distances. Doing this requires the ability to compare the various parts of an experience to each other and to remember the comparisons; iterating this process generates a map of subjective differences. This is an instance of what I call qualia computing, in which you need to be in touch with the subjective quality of your experience and be capable of comparing sensations.
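To get a computational feel for what solving this puzzle involves: given a matrix of pairwise subjective color dissimilarities, recovering a Euclidean arrangement is essentially multidimensional scaling (MDS). A minimal sketch, assuming scikit-learn is available; the dissimilarity judgments are made up for illustration:

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical pairwise subjective dissimilarities between four color
# sensations (symmetric, zero diagonal). A real subject would produce
# these by introspectively comparing qualia, pair by pair.
colors = ["red", "orange", "yellow", "blue"]
dissimilarity = np.array([
    [0.0, 1.0, 2.0, 4.0],
    [1.0, 0.0, 1.0, 4.5],
    [2.0, 1.0, 0.0, 3.5],
    [4.0, 4.5, 3.5, 0.0],
])

# Embed the judgments in a 2D Euclidean manifold so that distances are
# as close to proportional to the reported subjective differences as possible.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)

for name, (x, y) in zip(colors, coords):
    print(f"{name:>6}: ({x:+.2f}, {y:+.2f})")
```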

In brief, you want to give the other entity a puzzle that can only be solved by a conscious entity, via manipulating and comparing qualia. The medium used to deliver the puzzle is a first-person merging of brains: to share the puzzle, you first connect with the entity you want to test.

By sharing the puzzle while you are connected to the other entity, you come to know its inner referents in terms of qualia. While connected, you can point to a yellow patch and say “this is yellow.” Both halves will likely have their own systems of private referents (a natural consequence of having slightly different sense organs, which map physical stimuli to qualia in variable ways), but as a whole the merged entity can compare notes with itself about the stimulus-to-qualia mappings in both halves. The entity could look at the same object from the point of view of its two heads at once and form a unified visual field that incorporates the feeds from the two former “personal-sized” visual fields (similar to how you incorporate sensory stimuli from two eyes; now you would see with four). The color appearance of the object could have a slightly different quality when the two visual fields are compared. That is the fascinating thing about phenomenal binding: the differences between the stimulus-qualia mappings of the two former entities can be compared, which means this difference can be analyzed, reasoned about, and added to both repertoires of hippocampal snapshots of the current experience.

Then, when you disconnect and there are two streams of consciousness again, you will both know what that “yellow” referred to. This overcomes the age-old problem of communicating private referents and mutually agreeing on names for them. This way, the pieces of the (phenomenal) puzzle will be the same in both minds.

For the test to work, the specific question needs to stay secret until it is revealed shortly before merging.

Imagine that you have a set of standardized phenomenal puzzles. Psychologists and people who have taken the test before agree that the puzzles in the set require you to explore a minimum number of states of consciousness. The puzzles have precise conceptual answers, and these answers are extremely difficult to arrive at by accident or luck.

The puzzles may require you to use external tools such as an image editor or a computer, since computers let you program combinations of sensory input in precise ways, expanding the phenomenal gamut you can reach. In turn, one can calibrate the sensory input to have nice properties (e.g., gamma correction). The puzzles are also selected based on the time sentient beings typically take to solve them.
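As an example of the kind of calibration meant here, gamma correction compensates for a display’s nonlinear response so that programmed intensity steps land on perceptually sensible values. A minimal sketch, assuming a standard display gamma of about 2.2:

```python
import numpy as np

def gamma_correct(intensity: np.ndarray, gamma: float = 2.2) -> np.ndarray:
    """Encode linear intensities (0..1) for a display with the given gamma,
    so that programmed stimulus steps map onto even perceptual steps."""
    return np.clip(intensity, 0.0, 1.0) ** (1.0 / gamma)

# Ten evenly spaced target luminances, encoded for a ~2.2-gamma display.
levels = np.linspace(0.0, 1.0, 10)
print(gamma_correct(levels))
```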

When you want to perform the test, you meet the entity right after you finish reading the phenomenal puzzle. The puzzle is calibrated so that it cannot be solved in the time it takes you to connect to the other entity.

When you connect your brain to the other entity’s and become one conscious narrative, the entire entity reads the puzzle to itself. In other words, you state the phenomenal puzzle out loud, clearly pointing to the referents of the puzzle within your own “shared” experience. Then you disconnect the two brains.

While the other entity is trying to solve the puzzle, you distract yourself so that you cannot solve it yourself; ideally, you might even bring your state of consciousness down to very low activity. The other entity has all of its stimuli controlled to guarantee that no information comes in: all the “qualia processing” goes on through approved channels. When the entity claims to have solved the puzzle, you connect your brain back to it.

Does the merged entity know anything about the solution to the puzzle? You search for a memory thread that shows the process of solving the puzzle and the eventual answer. Thanks to the calibration of the puzzle (it has also been given to “merged” entities before), we know you would have needed more time to solve it yourself. You may now find yourself in a position where you realize that if the other entity were a zombie, you would somehow have solved a phenomenal puzzle without using experience at all. If so, where did that information come from?

Through the memory thread you can recall how the other entity arrived at the conclusion; all of the hard work can now be attributed to it. You witness this confirmation as the merged entity, and then you disconnect. You will still hold redundant memories of the period of merging (both brains do, like the hemispheres of split-brain patients). Do you know the answer to the puzzle? You can now check your memory and see that you can reconstruct the answer very quickly; the whole process may even take less time than it would have taken you to solve the puzzle yourself.
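To summarize the protocol’s control flow, here is a minimal sketch. Every name in it (`bridge`, `verifier`, the merging operations) is hypothetical scaffolding for capabilities we do not have, not a real API:

```python
from dataclasses import dataclass

@dataclass
class PhenomenalPuzzle:
    statement: str          # puzzle posed in terms of shared qualia referents
    min_solve_time: float   # calibrated lower bound on solve time (hours)

def run_test(puzzle: PhenomenalPuzzle, bridge, verifier) -> bool:
    """Hypothetical outline of the merging protocol."""
    merged = bridge.connect()              # become one conscious narrative
    merged.state_puzzle(puzzle.statement)  # fix shared private referents
    you, other = bridge.disconnect()

    you.distract()                         # keep yourself from solving it
    other.isolate_stimuli()                # no information in or out
    elapsed = other.wait_until_solved()

    merged = bridge.connect()              # reconnect and inspect memory
    solution = merged.recall_solution_thread()
    you, other = bridge.disconnect()

    # Pass only if the answer checks out and could not plausibly have been
    # produced by luck in less than the calibrated time.
    return verifier.check(solution) and elapsed >= puzzle.min_solve_time
```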

If you know the answer to the puzzle, you can infer that the other entity is capable of manipulating qualia the way you can. You now have information that your mind/brain could only have obtained by exploring a large region of the state-space of consciousness… which takes time. The answer to the puzzle is a verifiable fact about the structure of your conscious experience; it gives you information about your own qualia gamut (think CIELAB). In summary, the other entity figured out a fact about your own conscious experience and explained it to you using your own private referents.
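For a concrete sense of what a verifiable structural fact about color experience looks like, consider CIELAB, a color space designed so that Euclidean distance roughly tracks perceived difference. A minimal sketch, with illustrative coordinates:

```python
import math

def delta_e_cie76(lab1, lab2) -> float:
    """CIE76 color difference: plain Euclidean distance in CIELAB."""
    return math.dist(lab1, lab2)

# Illustrative CIELAB coordinates (L*, a*, b*).
yellow = (97.0, -21.6, 94.5)
orange = (74.9, 23.9, 78.9)
blue = (32.3, 79.2, -107.9)

print(delta_e_cie76(yellow, orange))  # smaller: subjectively closer hues
print(delta_e_cie76(yellow, blue))    # larger: subjectively farther hues
```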

You can then conclude that if the entity solved the phenomenal puzzle for you, it must be capable of manipulating its qualia in a way semantically consistent with how you do it. A positive result reveals that the entity runs conscious algorithms, and perhaps something stronger: that it shares the generalizable computational power of a sapient mind.

Unfortunately, just as with the Turing test, failing this test is no guarantee that the other entity lacks consciousness. What the test does guarantee is high precision: nearly every entity that passes it is conscious. And that, I think, is a milestone.

Do you agree that the problem of other minds is like a wire puzzle?

Now go ahead and brainstorm more phenomenal puzzles!
