State-Space of Background Assumptions

There is a wide variety of transhumanist strains and clusters, and we don’t really understand why. How do we explain the fact that immortality is the number one concern for some, while it is a very minor concern for others, who are more preoccupied with an AI apocalypse or with making everyone animated by gradients of bliss?

A possible interpretation is that our values and objectives are in fact intimately connected to our background assumptions about fundamental matters such as consciousness and personal identity. To test this theory, I developed a questionnaire for transhumanists that examines the relationship between transhumanist goals and background philosophical assumptions. If you wish to contribute, you can find the questionnaire here (it takes ~15 minutes):

Here.

The link will be live until July 30, 2015 (EDIT: I have extended the deadline until August 2nd!). Please complete it as soon as possible. Once the results are out you will be happy you participated.

The very sense we give to words requires an underlying network of background assumptions to support them. Thus, when we don’t share implicit background assumptions, we often interpret what others say in very different ways than they intended. With enough transhumanists answering this questionnaire (about 150) we will be able to develop a better ontology. What would this look like? I don’t know yet, but I can give you an example of the sort of results this work can deliver: State-Space of Drug Effects.

Getting closer to digital LSD

I am very pleased with the recent work on psychedelic replications by communities such as the wonderful Psychonaut Wiki and r/replications. There is a lot of great work in the area, a little too much to discuss at length in one post. Keep up the good work!

A marvelous new source of psychedelic replication techniques has just come onto the scene, and from an unlikely place. Of course, we are talking about Inceptionism applied to deep neural networks.

Someone said DMT?


First of all, who says these pictures are actually trippy? Is there any evidence for that? I intend to fully operationalize the concept of trippiness for the classification of pictures; I believe the question is empirically approachable. In the meantime I will simply point out that a lot of people are talking about the peculiar trippiness of these pictures. To give an example, look at some of the comments on the Google blog post:

Help! We’ve created AIs more powerful than us, and now we need to feed them hallucinogenic drugs to subdue them…. – Urs

Either somebody has been feeding hallucinogens to Google’s image-recognition neural networks, or computer comprehension is alien! Well, actually, I wonder how this compares to visualizations of how the human brain stores images for pattern-matching purposes. – Stephen

Computers are all on drugs. – Matt

And from the Vice article:

“Its incredible how close it looks to an LSD trip, that is normally so hard to describe.” – corners

There are ongoing discussions in a lot of forums about this right now. Somehow, it seems that these new pictures are hitting a particular component of the psychedelic experience that previous replications have missed or at least not fully captured. What is that?

For the purpose of this post I will use a particular classification of the phenomenal effects caused by psychedelics: the one proposed by Psychedelic Information Theory. In order to fully grasp the motivation for this classification I highly recommend reading the control interrupt model of psychedelic action. In summary, it seems that there are natural inhibitory processes that prevent features of our current experience from building up over time. Psychedelics are thought to chemically interrupt these inhibitory control signals from the cortex, which in turn results in a non-linear interaction between the unmitigated features of one’s conscious experience. I will explain in a bit how this model provides a good framework for explaining the way the recent Google Inceptionist (GI) pictures fit into the broader world of visual psychedelic replication.

But first, let’s go over the three classes of hallucinations discussed there:

  1. Entropic hallucinations describe the visual effects of gently pressing on one’s eyes, as well as the amazing interaction between LSD and strobes
  2. Eidetic hallucinations are the result of interpreting ambiguous stimuli using high-level concepts
  3. Erratic hallucinations result from the chaotic binding and over-saturation of sensory modalities, which affect the stability of the global perceptual frame (and probably disrupts the continuity field too)

Zooming into the phenomenology of eidetic hallucinations:

The most commonly reported eidetic hallucinations seen on psychedelics are of people, faces, animals, plants, flowers, spirits, aliens, insects, and other similar archetypes. Eidetic hallucinations can sometimes take the form of entire virtual worlds, spirit dimensions, invisible landscapes, and so on. Eidetics often emerge within a pre-existing entoptic interference pattern that grows in intensity over time to produce more photographic or 3D rendered objects. Eidetics under the influence of psychedelics are most often reported with eyes closed or while sitting motionless in meditative trance. On high doses of psychedelics eidetic hallucinations may materialize with eyes open on any surface, pattern, or texture that’s gazed at for more than a few seconds.

If you surf the internet looking for replications of psychedelic experiences, you will notice that there are great examples of a wide range of effects, but compelling software-generated images of eidetic hallucinations are rare. The challenge is the complexity of creating tools that can highlight high-level features in pre-existing pictures. Amazingly, people can make successful and stunning pictures with eidetic tones by hand, but this requires a lot of dedication and artistic experience. The mighty human artistic effort is unstoppable, though:


Thanks to this 3-fold classification of psychedelic effects we can isolate the quality of experience that both Dalí and the recent GI pics specifically enhance. Of course, the phenomenology of most psychedelic experiences incorporates elements of each of these classes, and the interaction between them is certainly non-trivial. In addition, specific substances may have a larger loading of each type, with signature proportions producing peculiar results.

It is also worth mentioning the existence of other classification systems, within and beyond visual phenomenology. For example, the subjective effect index of Psychonaut Wiki, and even the various circuits proposed in the old Leary and Dass writings, contain very worthwhile observations that may come in useful in one context or another. For the level of resolution discussed here, giving eidetic hallucinations their own class is particularly useful.


How the Inceptionist method and psychedelic experiences work similarly

Here is the core of the explanation for how the Google trippy pictures were made:

In this case we simply feed the network an arbitrary image or photo and let the network analyze the picture. We then pick a layer and ask the network to enhance whatever it detected. Each layer of the network deals with features at a different level of abstraction, so the complexity of features we generate depends on which layer we choose to enhance. For example, lower layers tend to produce strokes or simple ornament-like patterns, because those layers are sensitive to basic features such as edges and their orientations.

In some sense this is basically the same eidetic effect we find in psychedelic experiences. For one reason or another, there are moments during a psychedelic experience in which strong eidetic effects manifest, as if a specific layer (or hierarchy level) of one’s model of reality were chosen to be enhanced and fractally iterated in a scale-free manner. Referring back to the control interrupt model of psychedelic action, we can reason that what is going on involves a reduction in the amount of inhibition that highlighted high-level features receive. Again, this resembles the Inceptionist algorithm:

If we choose higher-level layers, which identify more sophisticated features in images, complex features or even whole objects tend to emerge. Again, we just start with an existing image and give it to our neural net. We ask the network: “Whatever you see there, I want more of it!” This creates a feedback loop: if a cloud looks a little bit like a bird, the network will make it look more like a bird. This in turn will make the network recognize the bird even more strongly on the next pass and so forth, until a highly detailed bird appears, seemingly out of nowhere.
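To make the feedback loop concrete, here is a minimal sketch of the general technique in Python/PyTorch. This is my own illustrative reconstruction, not Google’s actual code: the pretrained GoogLeNet, the layer choice, the step size, and the input file are all assumptions.

    # Minimal DeepDream-style sketch: gradient ascent on the norm of a chosen
    # layer's activations ("whatever you see there, I want more of it").
    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from PIL import Image

    model = models.googlenet(pretrained=True).eval()

    activations = {}
    def hook(module, inp, out):
        activations["target"] = out

    # Lower layers yield strokes and ornament-like patterns; higher layers
    # yield whole objects. inception4c is an arbitrary mid-level choice.
    model.inception4c.register_forward_hook(hook)

    img = Image.open("cloud.jpg").convert("RGB").resize((224, 224))
    img = T.ToTensor()(img).unsqueeze(0).requires_grad_(True)

    for _ in range(20):
        model(img)
        loss = activations["target"].norm()  # amplify whatever the layer detected
        loss.backward()
        with torch.no_grad():
            img += 0.01 * img.grad / img.grad.abs().mean()  # normalized ascent step
            img.grad.zero_()
            img.clamp_(0, 1)  # keep the image displayable

Each pass makes the layer respond more strongly to what it already (faintly) detected, which is exactly the “bird out of nowhere” feedback loop described in the quote above.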

Now, this only really shows a snapshot of a psychedelic experience with a heavy eidetic bent. In actual psychedelic experiences there are other common factors that come into play. First, not only are specific features highlighted; on the whole, there is an increase in the overall amount of sensation experienced at once. The overall amplitude of your experience goes up, if that makes sense. In other words, although this is hard to imagine, the overall amount of experience increases relative to baseline. That cannot be evoked using external stimuli alone, since an actual change in the intensity of your experience requires direct control interruption. The overall information content globally available in the field of awareness of a person tripping increases in a dose-dependent way.

The second hallmark characteristic of psychedelic experiences, which gives them a powerful edge over current digital techniques, is that the state highlights already salient stimuli. High-level psychedelic pattern recognition seems to be based on attention-modulated saliency enhancement. Let me explain:

Our visual system automatically recognizes salient features in our experience. This is not an exclusive property of visual consciousness, by the way. Here we must note that awareness and attention are distinct but related aspects of our mind. Awareness happens effortlessly, and its visual variety arises as soon as we open our eyes (within about 200 milliseconds). Even at the level of awareness we see a fast sorting of perceived features by their overall saliency, which is a function both of their intrinsic properties and of their relation to every other feature in the awareness field. Attention, which is slower and builds on top of the awareness field, enables a variety of high-level cognitive activities to interplay with the features highlighted by awareness. In turn, the overall state of consciousness of a person changes as attention moves the reference point for awareness, bringing forth new salient features. Iteratively, these processes allow a mind to surf through states of consciousness.

In summary, awareness creates the marketplace of salient features that compete for attention. As attention is recentered on a new cluster of features, the field of awareness is modified and the new salient features again have a chance to change the focus of attention.

With psychedelic-induced control interruption, the intensity with which the saliency of features in the field of awareness is highlighted goes up significantly. In turn, the attention-modulated perception of the intensely salient features highlights specific high-level features suggested by the field of awareness. And finally, this conceptual mental state highlighted via attention results in an even higher saliency for conceptually-related features. Hence the eye reality, fish reality, tree reality, abstract concept reality, divine reality, fractal reality, etc. that people discover on LSD.
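As a toy illustration of this loop (entirely my own sketch: the random feature vectors and the gain parameter standing in for control interruption are invented), consider:

    # Toy model of the awareness/attention cycle described above.
    # Saliency is scored over all features at once; attention re-centers on the
    # winner, which reshapes saliency on the next pass. A high "gain" plays the
    # role of psychedelic control interruption, exaggerating salience.
    import numpy as np

    rng = np.random.default_rng(0)
    features = rng.random((10, 8))     # 10 candidate features, 8-dim descriptors
    attention = features.mean(axis=0)  # initial reference point of attention

    def saliency(features, attention, gain=1.0):
        scores = features @ attention          # similarity to current focus
        weights = np.exp(gain * scores)
        return weights / weights.sum()         # normalized salience landscape

    for step in range(5):
        s = saliency(features, attention, gain=3.0)
        winner = int(s.argmax())                              # attention picks the peak
        attention = 0.5 * attention + 0.5 * features[winner]  # re-center awareness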

Although it would be a difficult challenge, I predict that a well-trained, dedicated and mentally healthy psychonaut would be able to paint psychedelic experiences of her own that highlight the same kinds of high-level features as specific Inceptionist works of art. A long meditative practice would probably help in the process, since the specific saliency of various features is attention-modulated, and thus requires inhibiting unrelated salient directions (e.g. deep philosophical questions, personal issues, etc.) and focusing exclusively on, say, dogs.


Who chooses what is salient?

If you already know what class of features you want to highlight, then the Inceptionist method will help you. But what about choosing what to highlight to begin with? This, I believe, is the crux of what makes psychedelic experiences (and minds in general) still unbeatable by neural networks. Once you know what to look for, your cortex and Inceptionist methods (and their future incarnations) might be on the same playing field. But what enables you to decide what is worth looking for?

The key unresolved problem standing in the way of a fully digital psychedelic-experience replication algorithm is what I call the saliency-attention mapping. That is: given a particular conscious experience that is highlighting a set of features, how does attention ultimately find what to focus on? How are the subsequent relevant features to be highlighted? In many cases we choose to ignore all of the immediately salient features in a scene precisely to see more subtle patterns. And during a psychedelic experience, directing your attention to entirely unsuspected places has the effect of switching off previously salient features and activating a new class of them (for example, choosing to focus on the music rather than the visual scene).

Is there any way of modeling the saliency-attention mapping without taking into account all of the information present in the field of awareness at the time? Indeed, an ongoing hypothesis here at Qualia Computing is that consciousness itself is required for this step. The very computational advantage of being conscious seems to be related to the unitary nature of experiences: your choices are not only the result of parallel processing or implicit information integration. They stem from what you choose to pay attention to considering the entirety of your field of awareness, at every point in time. Thus, a sort of instantaneous ontological unity is required to account for a significant step of the information-processing pipeline of the mind. And this may lead to a saliency-attention function whose runtime complexity is impossible to match with digital computers.


The conceivability horizon

Now, this unitary field of awareness also has large downstream effects. In particular, subsets of the phenomenology of an experience can be reinterpreted in very novel ways. Psychedelics are likewise famous for unlocking entirely new conceptual ontologies and points of view that remain with the person long after the acute effects subside. We could call this an extension of the horizon of conceivability. It comes about from considering many of the features of a particular conscious experience at once and identifying a new private referent (such as a concept) whose meaning is derived from the unique combination of those elements.

Without a unitary conscious experience this step would be impossible, and it remains to be seen whether an artificial neural network can accomplish it on its own. For completeness, it is worth mentioning that phenomenal binding also has strong implications for memory. Every time we experience a new situation, a new ‘situational snapshot’ is added to the collection of, and network of relationships between, memories that can be triggered with temporal lobe stimulation. Thus, incorporating a human (or whatever implements phenomenal binding) into the loop may be unavoidable.


The future of psychedelic replications and consciousness engineering

Eidetic art is marvelous, and for a long time we had no idea how to systematize it in software. Now we have some wonderful examples of a fully scalable approach. Inevitably, we will soon have visual editing software that incorporates neural networks.

Deep neural networks applied to replications will allow us to drastically increase the level of realism of simulated trips. This will draw a lot more attention to this fascinating field, and bring engineers, artists and mathematicians on board. They will find a wonderful synergy in this sphere.

But how practical are these techniques? If you want to find fundamentally new patterns in an image, what should you use: neural networks or LSD? The answer is: why do you have to choose only one? Here is where I casually mention that if you were planning on taking a psychedelic sometime in the future, why not tell us what Google’s trippy images look like during the experience? I bet a lot of people would appreciate your input.

Presumably, incorporating a human in the loop could actually empower these networks to recreate remarkably psychedelic progressions of scenes and features (and high-level ideas!). To do so you need to somehow identify what the human finds salient in the picture/video being explored, and how her attention is directed as a consequence. Obvious candidates here are eye-tracking devices and the general class of bio/neurometrics. More speculatively, endocrine measurements of the chemical markers of saliency and attention may be of tremendous value too. What would this look like? A person hooked to a series of tubes that provide fast feedback using a lab-on-a-chip, and a deep neural network with flexible Inceptionist dynamics guided by the person’s measured center of attention. In case you haven’t noticed, I think that this area of exploration is extremely promising. Go ahead and do it!
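For instance, a gaze signal could re-weight which region of a layer’s activation gets amplified during the Inceptionist feedback loop. A minimal sketch of the idea (hypothetical: no particular eye-tracker API is implied, and the Gaussian weighting is just one plausible choice):

    import torch

    def gaze_weighted_loss(activation, gaze_xy, sigma=0.15):
        # activation: (1, C, H, W) tensor captured by a forward hook
        # gaze_xy: gaze position normalized to [0, 1] x [0, 1]
        _, _, H, W = activation.shape
        ys = torch.linspace(0, 1, H).view(H, 1)
        xs = torch.linspace(0, 1, W).view(1, W)
        # A Gaussian bump centered on the gaze point acts as a saliency mask.
        mask = torch.exp(-((xs - gaze_xy[0]) ** 2 + (ys - gaze_xy[1]) ** 2)
                         / (2 * sigma ** 2))
        return (activation * mask).norm()

In the earlier sketch, this objective would simply replace the plain norm of the layer’s activations, so that the network enhances whatever the viewer is currently looking at.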

Now, if you want to figure out a hard technical problem, mild psychedelic experiences are currently more promising than deep neural networks. This, again, is because the attention-modulated saliency enhancement of psychedelics can allow you to discover, explore and reinterpret the features that matter for a particular problem. Assisted digital exploration, however, may someday surpass the effectiveness of psychedelics; or better yet, a smart combination of techniques, chemical, biological and digital, will do for consciousness research what the Galilean revolution did for physics. The hands-on collective exploration that a science of consciousness needs in order to fully thrive is about to arrive. Finally!

Generalized Wada Test and the Total Order of Consciousness

In a Wada test a single hemisphere is sedated with sodium amobarbital. While the sedated hemisphere is unresponsive, a cognitive examination is conducted on the other hemisphere. This test is done to determine whether ablative surgery on a given hemisphere is a viable treatment for epileptic seizures. By using the Wada test, one can avoid creating irreversible damage in areas of the brain crucial for modern life, such as the language production regions.


The Generalized Wada Test

The thought of targeting an isolated brain region for drug therapy is very stimulating. But do we have to sedate it? Sodium amobarbital may have useful properties that make it a good fit for the Wada test, but it is unlikely to be the only substance that can be used. More broadly, there seem to be a variety of compounds suitable for intracarotid drug delivery.

In all likelihood there are a number of psychedelic compounds that could selectively affect brain regions via intracarotid delivery. One thought is to inject 2C-B (or whichever psychedelic has the desired pharmacological properties) into one hemisphere so that a person can compare the two sides of her visual field. This way, she would be able to compare side by side the features and patterns highlighted by the algorithms of her visual system (which would, presumably, differ between the sides). In turn, this would enable us to catalogue more precisely the specific differences in visual experience under the influence of various drugs.

Even more generally, one could also make use of additional brain interventions such as tDCS, ultrasound, optogenetics, etc. For example, imagine using ketamine and tDCS on the right hemisphere while the left receives ultrasound stimulation. We have a combinatorial explosion, and a good one. I call this the Generalized Wada Test (GWT).


Philosophical Applications of the Generalized Wada Test

This technique presents a striking possibility: approaching philosophical problems empirically. More specifically, this technique might be used to:

  1. Test the properties of phenomenal binding, and
  2. Allow “incommensurable” experiences to “experience each other” as the halves of a unitary consciousness

Phenomenal binding can be put under a microscope by using a GWT to infer the chemical properties that brain regions require in order to integrate phenomenal features into unitary experiential wholes. The speed at which binding takes place between the hemispheres could also be quantified. If phenomenal binding turns out to be impossible between two given states of consciousness, that would also be very valuable information for consciousness research.

With regards to the second possibility…


Is there a Total Order of Subjective Preferences?

Take two states of consciousness, A and B. Suppose we use a GWT to make A manifest in the left hemisphere while B does so in the right. The subject as a whole is asked to decide which of the two states of consciousness is subjectively preferable. If A is preferred over B, then a directed edge from B to A is added to a graph, with a weight proportional to the certainty/degree of preference. By adding the corresponding weighted edge between every pair of states of consciousness inducible in a GWT we would map a large portion of the state-space of consciousness available to humans. Let’s call this graph the directed network of subjective preferences.

Now, once we have fully populated such a graph, would it actually be a directed acyclic graph (DAG)? Could we extract a total order? In other words, does the directed network of subjective preferences reveal a proper ordering of experiences from least to most preferred?

Can we make a universal scale of subjective preferability? Is it possible to infer a scale that, as David Pearce would call it, shows us the utility function of the universe?

But what if we find cycles?
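To make the question concrete, here is a small sketch of how one might check the populated graph for a total order. The pairwise comparisons below are invented for illustration; networkx does the graph work:

    import networkx as nx

    G = nx.DiGraph()
    # Edge (B, A, w): state A preferred over state B with confidence w.
    comparisons = [("sober", "flow", 0.8),
                   ("flow", "meditative-bliss", 0.6),
                   ("sober", "meditative-bliss", 0.9)]
    for worse, better, w in comparisons:
        G.add_edge(worse, better, weight=w)

    if nx.is_directed_acyclic_graph(G):
        # A topological sort is an ordering consistent with every recorded
        # preference (a linear extension of the partial order).
        print(list(nx.topological_sort(G)))
    else:
        # Cycles would mean intransitive preferences: no total order exists.
        print(list(nx.simple_cycles(G)))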


Hedonic Tone

Even though there is a very close relationship between bliss and activity in the outer shell of the nucleus accumbens (and various other nearby hedonic hot-spots), it is not yet clear whether all pleasurable, blissful or otherwise subjectively valuable states are triggered by the activation of this area. We know that classic psychedelics, for example, have no direct dopaminergic or opioidergic pharmacological action, and thus do not activate the nucleus accumbens directly. And yet, people do report ecstatic and blissful states of consciousness on LSD…

It is not yet clear whether that bliss is mediated by hedonic hot-spot activity (thankfully, we may soon find out). If psychedelic bliss is fundamentally dissociated from dopaminergic and opioidergic activity, what would that say about the nature of pleasure? Could there be higher levels of bliss that are unrelated to current neurobiological models of subjective reward? What if everyone on acid bliss says that acid bliss is better than heroin bliss, while everyone on heroin bliss says the opposite? What do we make of Dostoevsky’s epileptic bliss?

For several instants I experience a happiness that is impossible in an ordinary state, and of which other people have no conception. I feel full harmony in myself and in the whole world, and the feeling is so strong and sweet that for a few seconds of such bliss one could give up ten years of life, perhaps all of life.

I felt that heaven descended to earth and swallowed me. I really attained god and was imbued with him. All of you healthy people don’t even suspect what happiness is, that happiness that we epileptics experience for a second before an attack.

Nothing short of a Generalized Wada Test would be able to approach these questions.

Why not computing qualia?

Qualia is the given. In our minds, qualia comparisons and interactions are an essential component of our information processing pipeline. However, this is a particular property of the medium: Consciousness.

David Marr, a cognitive scientist and vision researcher, developed an interesting conceptual framework to analyze information processing systems. This is Marr’s three levels of analysis:

The computational level describes the system in terms of the problems that it can and does solve. What does it do? How fast can it do it? What kind of environment does it need to perform the task?

The algorithmic level describes the system in terms of the specific methods it uses to solve the problems specified at the computational level. In many cases, two information processing systems do the exact same thing from an input-output point of view and yet are algorithmically very different. Even when both systems have near-identical time and memory demands, you cannot rule out the possibility that they use very different algorithms. A thorough analysis of the state-space of possible algorithms and their relative implementation demands could rule this out, but that is hard to do.

The implementation level describes the system in terms of the very substrate it uses. Two systems that perform the same exact algorithms can still differ in the substrate used to implement them.

An abacus is a simple information processing system that is easy to describe in terms of Marr’s three levels of analysis. First, the computational level: the abacus performs addition, subtraction and various other arithmetic computations. Then the algorithmic level: the algorithms used to process information involve moving {beads} along {sticks} (I use ‘{}’ to denote that the sense of these words concerns their algorithmic-level abstractions rather than their physical instantiation). And the implementation level: not only can you choose between a metallic and a wooden abacus, you can even get your abacus implemented using people’s minds!

What about the mind itself? The mind is an information processing system. At the computational level, the mind has very general power: it can solve problems never before presented to it, and it can also design and build computers to solve narrow problems more efficiently. At the algorithmic level, we know very little about the human mind, though various fields center on this level. Computational psychology models the algorithmic and computational levels of the mind. Psychophysics, too, attempts to reveal the parameters of the algorithmic component of our minds and their relationship to parameters at the implementation level. When we reason about logical problems, we do so using specific algorithms. Even counting is something that kids do with algorithms they learn.

The implementation level of the mind is a very tricky subject. There is a lot of research on the possible implementation of algorithms that a neural network abstraction of the brain can instantiate. This is an incredibly important part of the puzzle, but it cannot fully describe the implementation of the human mind. This is because some of the algorithms performed by the mind seem to be implemented with phenomenal binding: The simultaneous presence of diverse phenomenologies in a unified experience. When we make decisions we compare how pleasant each option would be. And to reason, we bundle up sensory-symbolic representations within the scope of the attention of our conscious mind. In general, all algorithmic applications of sensations require the use of phenomenal binding in specific ways. The mind is implemented by a network of binding tendencies between sensations.

A full theory of the mind will require a complete account of the computational properties of qualia. To obtain those we will have to bridge the computational and the implementation level descriptions. We can do this from the bottom up:

  1. An account of the dynamics of qualia (to be able to predict the interactions between sensations just as we can currently predict how silicon transistors will behave) is needed to describe the implementation level of the mind.
  2. An account of the algorithms of the mind (how the dynamics of qualia are harnessed to implement algorithms) is needed to describe the algorithmic level of the mind.
  3. And an account of the capabilities of the mind (the computational limits of qualia algorithms) is needed to describe the computational level of the mind.

Why not call it computing qualia? Computational theories of mind suggest that your conscious experience is the result of information processing per se. But information processing cannot account for the substrate that implements it; otherwise you are confusing the first and second levels with the third. Even a computer cannot exist without a substrate to implement it. In the case of the mind, the fact that you experience multiple pieces of information at once is information about the implementation level of your mind, not the algorithmic or computational levels.

Isn’t information processing substrate-independent?

Not when the substrate has properties needed by the specific information processing system at hand. If you go into your brain and replace one neuron at a time with a silicon neuron that is functionally identical, at what point would your consciousness disappear? Would it fade gradually? Or would nothing happen? Functionalists would say that nothing should happen, since all the functions, local and global, are maintained throughout the process. But what if this is not possible?

Consider a quantum computer. Part of its computational horsepower is produced by the very quantum substrate of the universe, which the computer harnesses. Replacing every component, one at a time, with a classical counterpart (a non-quantum chip with similar non-quantum properties) would eventually lead to a crash: at some point the quantum part of the computer breaks down and is no longer useful.

Likewise with the mind. If the substrate of the mind is actually relevant from a computational point of view, then replacing the brain with seemingly functionally identical components could also lead to an inevitable crash. Functionalists would suggest that there is no reason to think there is anything functionally special about the mind. But phenomenal binding seems to be precisely that. Uniting pieces of information in a unitary experience is an integral part of our information processing pipeline, and precisely the functionality we do not know how to conceivably implement without consciousness.


Textures

Implementing something with phenomenal binding, rather than implementing phenomenal binding (which is not possible)


On a related note: if you build a conscious robot and you don’t take phenomenal binding into account, your robot will have a jumbled-up mind. You need to incorporate phenomenal binding into the training pipeline. If you want your conscious robot to have a semantically meaningful interpretation of the sentence “the cat eats,” you need to be able to bind its sensory-symbolic representations of cats and of eating to each other.

A workable solution to the problem of other minds

Deciding whether other entities are also conscious is not an insoluble philosophical problem. It is tricky, though. A good analogy might be a wire puzzle. At first glance, the piece you have to free looks completely locked. And yet a solution does exist; it just requires representing a number of facts and features so large that our working memory is not enough.


Usually, seeing the solution once will not fully satisfy one’s curiosity. It takes some time to develop a personally satisfying account, and to do so we need to unpack how the various components interact with one another. After a while the reason why the free piece is not locked becomes intuitive, and along the way you may also encounter mathematical arguments and principles that complement your understanding.

At first, though, the free piece looks and feels locked.

I think the problem of other minds is perceived similarly to a wire puzzle. At first it looks and feels insoluble. After a while, though, many suspect that the problem can be solved. This essay proposes a protocol that may point in the right direction. It could have some flaws as currently formulated, so I’m open to refinements of any kind. But I believe that it represents a drastic improvement over previous protocols, and that it gets close to being a fully functioning proof of concept.

Starting from the basics: a widely discussed approach is the application of a Turing test. But the Turing test has several serious flaws when used as a test of consciousness. First, many conscious entities can’t pass a Turing test, so we know that it has very poor recall (it misses most conscious entities). This problem is present in every protocol I’m aware of. The major problem is that when an entity passes a Turing test, this counts as probabilistic evidence for a large number of hypotheses, not only for the desired conclusion that “this entity is conscious.” In principle, highly persuasive chatbots could hack your entity-recognition module by presenting hyperstimuli created by analyzing your biases for styles of conversation.

Your brain sees faces everywhere (cartoons, 2D computer screens, even clouds). It also sees entities where there are none. It might be much simpler to *trick* your judgement than to actually create a sentient intelligence. Could the entity given the Turing test be an elaborate chatbot with no phenomenal binding? That seems entirely possible.

Thus passing a Turing test is no guarantee that an entity is conscious. The method has low recall and probably low precision too.

The second approach would be to simply *connect* your brain to the other entity’s brain (that is, of course, if you are not dealing with a disembodied entity). We already have something like this: the corpus callosum seems to provide a bridge that solves the phenomenal binding problem between the hemispheres of a single person. In principle we could create a biologically similar, microfunctionally equivalent neural bridge between two persons.

Assuming physicalism, it seems very likely that there is a way for this to be done. Here, rather than merely observing the other person’s conscious experience, the point of connecting would be to become one entity. Strong, extremely compelling personal identity problems aside (Who are you really? Can you expect to ‘survive’ after the union? If you are the merged entity, does that mean you were always the same consciousness as the one with whom you merged? More on this in later posts), this possibility opens up the opportunity to actually corroborate that another entity is indeed conscious.

Indeed, separate hemispheres can have very different opinions about the nature of reality. Assuming physicalism, why would it be impossible to revert (or instantiate for the first time) the union between brains?

This idea has been proposed before. I think it is a significant improvement over the Turing test, since you are directly addressing the main phenomenon in question (rather than its ripples). That said, the method has problems and epistemic holes. In brief, a big unknown is the effect that interfacing with another conscious experience has on both experiences. Some people (like Eliezer Yudkowsky and Brian Tomasik) have argued that your interaction with the other brain could functionally expand your own mind; the interaction could be interpreted as a large hardware upgrade. Thus it could be that the whole experience of being connected and becoming one with another entity is a fantasy of your recently expanded mind. It can give you the impression that the other brain was already conscious before you connected to it, so you can’t rule out that it was a zombie before and after the connection.

But there is a way out. And this is the stimulating part of the essay. Because I’m about to untangle the wires.

The great idea behind this solution is: phenomenal puzzles. The phenomenal puzzle linked here is about figuring out the appropriate geometry of color: arranging the state-space of color in a Euclidean manifold so that the degrees of subjective difference between colors are proportional to their distances. Doing this requires the ability to compare the various parts of an experience to each other and to remember the comparisons. In turn, this can be iterated to generate a map of subjective differences. This is an instance of what I call qualia computing, where you need to be in touch with the subjective quality of your experience and be capable of comparing sensations.
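For a flavor of what solving this puzzle involves computationally, here is a sketch using multidimensional scaling (MDS), which embeds items so that distances approximate judged dissimilarities. The judgment matrix below is invented for illustration:

    import numpy as np
    from sklearn.manifold import MDS

    colors = ["red", "orange", "yellow", "green", "blue"]
    # Hypothetical subjective dissimilarity judgments (symmetric, 0 on diagonal).
    dissimilarity = np.array([[0, 1, 2, 3, 4],
                              [1, 0, 1, 2, 3],
                              [2, 1, 0, 1, 2],
                              [3, 2, 1, 0, 1],
                              [4, 3, 2, 1, 0]], dtype=float)

    # Find 2D coordinates whose Euclidean distances match the judgments.
    embedding = MDS(n_components=2, dissimilarity="precomputed",
                    random_state=0).fit_transform(dissimilarity)
    for name, (x, y) in zip(colors, embedding):
        print(f"{name}: ({x:.2f}, {y:.2f})")

The point is that producing such a map requires iterated first-person comparisons of qualia; the algorithm only formalizes the bookkeeping.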

In brief, you want to give the other entity a puzzle that can only be solved by a conscious entity via manipulating and comparing qualia. The medium used to deliver the puzzle will be a first-person merging of brains: To share the puzzle you first connect with the entity you want to test.

By sharing the puzzle while you are connected to the other entity, you will come to know its inner referents in terms of qualia. While connected, you can point to a yellow patch and say “this is yellow.” Possibly, both halves will have their own systems of private referents (a natural consequence of having slightly different sense organs, which make variable mappings between physical stimuli and qualia). But as a whole, the merged entity will be able to compare notes with itself about the mapping of stimuli to qualia in both halves. The entity could look at the same object from the point of view of its two heads at the same time and form a unified visual field incorporating the feeds from the two former “personal-sized” visual fields (similarly to how you incorporate sensory stimuli from two eyes; now you’ll see with four). The color appearance of the object could have a slightly different quality when the two visual fields are compared. That’s the fascinating thing about phenomenal binding: the differences between the stimuli-to-qualia mappings of the two former entities can be compared, which means that this difference can be analyzed, reasoned about, and added to both repertoires of hippocampal snapshots of the current experience.

Then, when you disconnect and there are two streams of consciousness going on again, you will both know what “yellow” referred to. This overcomes the age-old problem of communicating private referents and mutually agreeing on names for them. This way, the pieces of the (phenomenal) puzzle will be the same in both minds.

For the test to work, the specific question needs to stay secret until it is revealed briefly before merging.

Imagine that you have a set of standardized phenomenal puzzles. Psychologists and people who have taken the test before agree that the puzzles in the set require you to explore a minimum number of states of consciousness. The puzzles have precise conceptual answers that are extremely difficult to deliver by accident or luck.

The puzzles may require you to use external tools like an image editor or a computer, since computers enable you to program combinations of sensory input in precise ways. This expands the phenomenal gamut you can reach. In turn, one can calibrate the sensory input to have nice properties (e.g., gamma correction). The puzzles will also be selected based on the time sentient beings typically take to solve them.
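As an example of such calibration, gamma correction re-maps programmed intensity values so that they land more evenly on perceived brightness. A minimal sketch (the value 2.2 is just the conventional display gamma):

    import numpy as np

    def gamma_correct(image, gamma=2.2):
        # image: float array with values in [0, 1]; returns encoded values in [0, 1]
        return np.clip(image, 0.0, 1.0) ** (1.0 / gamma)

    ramp = np.linspace(0, 1, 5)   # linear intensity ramp
    print(gamma_correct(ramp))    # perceptually more even steps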

When you want to perform the test, you meet with the entity right after you finish reading the phenomenal puzzle. The puzzle is calibrated so that it cannot be solved in the time it takes you to connect to the other entity.

When you connect your brain to the other entity and become one conscious narrative, the entire entity reads the puzzle to itself. In other words, you state the phenomenal puzzle out loud, clearly pointing to the referents of the puzzle within your own “shared” experience. Then you disconnect the two brains.

While the other entity is trying to solve the puzzle, you distract yourself so as not to solve it on your own. Ideally, you might want to bring your state of consciousness down to very low activity. The other entity will have all of its stimuli controlled to guarantee that there is no incoming information; all the “qualia processing” goes on through approved channels. When the entity claims to have solved the puzzle, you connect your brain back to it.

Does the merged entity know anything about the solution to the puzzle? You search for a memory thread that shows the process of solving the puzzle and the eventual answer. Thanks to the calibration of this puzzle (it has also been given to “merged” entities before), we know you would have needed more time to solve it yourself. Now you may find yourself in a position where you realize that if the other entity were a zombie, you would have somehow solved a phenomenal puzzle without using experience at all. If so, where did that information come from?

With the memory thread you can recall how the other entity arrived at the conclusion; all of the hard work can now be attributed to it. You witness this confirmation as the merged entity, and then you disconnect. You will still hold redundant memories of the period of merging (both brains do, like the hemispheres of split-brain patients). Do you know the answer to the puzzle? You can now check your memory for it and see that you can reconstruct the answer very quickly. The whole process may even take less time than it would have taken you to solve the puzzle yourself.

If you know the answer to the puzzle, you can infer that the other entity is capable of manipulating qualia in the same way that you can. You now have information that your mind/brain could only obtain by exploring a large region of the state-space of consciousness, which takes time. The answer to the puzzle is a verifiable fact about the structure of your conscious experience. It gives you information about your own qualia gamut (think CIELAB). In summary, the other entity figured out a fact about your own conscious experience and explained it to you using your own private referents.

You can then conclude that if the entity solved the phenomenal puzzle for you, it must be capable of manipulating its qualia in a way semantically consistent with how you manipulate yours. A positive result reveals that the entity utilizes conscious algorithms. Perhaps even more strongly: it also shares the generalizable computational power of a sapient mind.

Unfortunately, just as with the Turing test, failing this test is no guarantee that the other entity lacks consciousness. What the test does guarantee is high precision: nearly every entity that passes it is conscious. And that is a milestone, I think.

Do you agree that the problem of other minds is like a wire puzzle?

Now go ahead and brainstorm more phenomenal puzzles!

Manifolds of Consciousness: The emerging geometries of iterated local binding

The qualia manifolds

Have you ever noticed implicit geometries in the structure of the qualia you deal with on a daily basis?

So here is one observation about our experience: visual experience has two major dimensions and one minor one (depth). This sensory modality is experienced as either 2- or 3-dimensional (and ambiguous points in between are also instantiated at times). It also has specific topological features. The edges of the visual field seem to be the edges of a patch in Euclidean space; they are not connected to each other. At first, it might take you by surprise to consider hypothetical visual fields whose edges are actually connected. You could make the visual field a torus by connecting the left and right edges as well as the top and bottom ones, producing a manifold of experience. You may also twist it before connecting it, making a Klein bottle or a projective plane.
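To make the edge-gluing concrete, here is a toy sketch in which a discretized “visual field” is indexed with wrap-around rules: plain modulo wrapping glues opposite edges into a torus, while flipping one axis on each horizontal wrap gives a Klein bottle. This is purely illustrative:

    import numpy as np

    field = np.arange(16).reshape(4, 4)   # a tiny 4x4 "visual field"

    def torus_pixel(field, row, col):
        h, w = field.shape
        return field[row % h, col % w]    # walk off the right edge, reappear left

    def klein_pixel(field, row, col):
        h, w = field.shape
        r = row % h
        if (col // w) % 2:                # each horizontal wrap flips vertically
            r = h - 1 - r
        return field[r, col % w]

    assert torus_pixel(field, 0, 4) == field[0, 0]
    assert klein_pixel(field, 0, 4) == field[3, 0]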


Real Projective Plane. Imagine your visual field connected to itself in this way by twisting and joining the edges. 

A common reaction to this idea is: “it may be impossible to do that; maybe the geometry of our visual field is the only possible one.” Without actually going ahead and interfering with your mind and brain directly, it is unlikely I’ll be able to show conclusively that it is possible. But there is a strong intuition pump available to help you conceive of the possibility.

So, touch your arm. Your wrist, more specifically. Using a finger, trace a circle around the wrist. You end up where you started, and yet you only advanced in one direction.