Types of Binding

Excerpt from “Mindmelding: Consciousness, Neuroscience, and the Mind’s Privacy” (2012) by William Hirstein (pp. 57-58 and 64-65)

The Neuroscience of Binding

When you experience an orchestra playing, you see them and hear them at the same time. The sights and sounds are co-conscious (Hurley, 2003; de Vignemont, 2004). The brain has an amazing ability to make everything in consciousness co-conscious with everything else, so that the co-conscious relation is transitive: if x is co-conscious with y, and y is co-conscious with z, then x is co-conscious with z. Brain researchers hypothesized that the brain’s method of achieving co-consciousness is to link the different areas embodying each portion of the brain state with a synchronizing electrical pulse. In 1993, Llinás and Ribary proposed that these temporal binding processes are responsible for unifying information from the different sensory modalities. Electrical activity, “manifested as variations in the minute voltage across the cell’s enveloping membrane,” is able to spread, like “ripples in calm water,” according to Llinás (2002, pp. 9-10). This sort of binding has been found not only in the visual system but also in other modalities (Engel et al., 2003). Bachmann makes the important point that the binding processes need to be “general and lacking any sensory specificity. This may be understood via a comparison: A mirror that is expected to reflect equally well everything” (2006, p. 32).
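
The transitivity point lends itself to a toy computation. Here is a minimal Python sketch (mine, not the book’s) that treats binding-by-synchrony as a graph problem: areas whose oscillation phases agree within a tolerance get a “bound” edge, and the connected components over those edges (their transitive closure) are the co-consciousness classes. All area names, phases, and tolerances are invented for illustration.

```python
# Toy model: binding-by-synchrony as transitive closure over phase-locked pairs.
# All numbers are illustrative, not physiological.
from collections import defaultdict
from itertools import combinations

def phase_locked(a, b, tol=0.1):
    """Count two areas as synchronized if their phases differ by < tol radians."""
    return abs(a["phase"] - b["phase"]) < tol

areas = [
    {"name": "V1",       "phase": 0.00},  # visual
    {"name": "A1",       "phase": 0.05},  # auditory
    {"name": "parietal", "phase": 0.08},
    {"name": "motor",    "phase": 0.90},  # not locked to the others
]

# Union-find: every area starts in its own class; synchrony merges classes.
parent = {a["name"]: a["name"] for a in areas}

def find(x):
    while parent[x] != x:
        x = parent[x]
    return x

for a, b in combinations(areas, 2):
    if phase_locked(a, b):
        parent[find(a["name"])] = find(b["name"])

classes = defaultdict(list)
for a in areas:
    classes[find(a["name"])].append(a["name"])

# Pairwise synchrony need not be transitive, but its closure is -- which is
# what makes the resulting co-consciousness classes well-defined partitions.
print(list(classes.values()))  # [['V1', 'A1', 'parietal'], ['motor']]
```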

Roelfsema et al. (1997) implanted electrodes in the brains of cats and found binding across parietal and motor areas. Desmedt and Tomberg (1994) found binding between a parietal area and a prefrontal area nine centimeters apart in their subjects, who had to respond with one hand to signal which finger on the other hand had been stimulated – a conscious response to a conscious perception. Binding can thus occur across great distances in the brain. Engel et al. (1991) also found binding across the two hemispheres. Apparently binding processes can produce unified conscious states out of widely separated cortical areas. Notice, however, that even if there were a single brain area into which all the sensory modalities, memory, emotion, and anything else that can figure in a conscious state were known to feed, binding would still be needed: as long as the merging area has any spatial extent at all, its parts must be bound. In addition to its ability to unify spatially separate areas, binding has a temporal dimension. When we engage in certain behaviors, binding unifies different areas that are cooperating to produce a perception-action cycle. When laboratory animals were trained to perform sensorimotor tasks, the synchronized oscillations were seen to increase both within and across the areas involved in performing the task, according to Singer (1997).

Several different levels of binding are needed to produce a full conscious mental state (a schematic sketch in code follows the list):

  1. Binding of information from many sensory neurons into object features
  2. Binding of features into unimodal representations of objects
  3. Binding of different modalities, e.g., the sound and movement made by a single object
  4. Binding of multimodal object representations into a full surrounding environment
  5. Binding of representations, emotions, and memories into full conscious states
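
To make the hierarchy explicit, here is the schematic sketch promised above (my rendering, not Hirstein’s): each level is simply a grouping operation over the outputs of the level below, with all names and structure purely illustrative.

```python
# Five binding levels as successive groupings (illustrative structure only;
# real binding is dynamic synchrony, not nested records).

red = {"feature": "red", "modality": "vision"}                        # level 1
round_ = {"feature": "round", "modality": "vision"}
thud = {"feature": "thud", "modality": "audition"}

visual_object = {"modality": "vision", "features": [red, round_]}     # level 2
auditory_object = {"modality": "audition", "features": [thud]}

multimodal_object = {                                                 # level 3
    "constituents": [visual_object, auditory_object]}

environment = {"objects": [multimodal_object]}                        # level 4

conscious_state = {"scene": environment,                              # level 5
                   "emotion": "curiosity",
                   "memory": "have seen this object before"}
```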

So is there one basic type of binding, or many? The issue is still debated. On the side of there being a single basic process, Koch says that he is content to make “the tentative assumption that all the different aspects of consciousness (smell, pain, vision, self-consciousness, the feeling of willing an action, of being angry and so on) employ one or perhaps a few common mechanisms” (2004, p. 15). On the other hand, O’Reilly et al. argue that “instead of one simple and generic solution to the binding problem, the brain has developed a number of specialized mechanisms that build on the strengths of existing neural hardware in different brain areas” (2003, p. 168).

[…]

What is the function of binding?

We saw just above that Crick and Koch suggest a function for binding: to assist a coalition of neurons in getting the “attention” of prefrontal executive processes when there are other competitors for this attention. Crick and Koch also claim that only bound states can enter short-term memory and be available for consciousness (Crick and Koch, 1990). Engel et al. mention a possible function of binding: “In sensory systems, temporal binding may serve for perceptual grouping and, thus, constitute an important prerequisite for scene segmentation and object recognition” (2003, p. 140). One effect of malfunctions in the binding process may be a perceptual disorder in which the parts of objects cannot be integrated into a perception of the whole object. Riddoch and Humphreys (2003) describe a disorder called ‘integrative agnosia’ in which the patient cannot integrate the parts of an object into a whole. They mention a patient who, given a photograph of a paintbrush, sees the handle and the bristles as two separate objects. Breitmeyer and Stoerig (2006, p. 43) say that:

[P]atients can have what are called “apperceptive agnosia,” resulting from damage to object-specific extrastriate cortical areas such as the fusiform face area and the parahippocampal place area. While these patients are aware of qualia, they are unable to segment the primitive unity into foreground or background or to fuse its spatially distributed elements into coherent shapes and objects.

A second possible function of binding is a kind of bridging function: it makes high-level perception-action cycles go through. Engel et al. say that “temporal binding may be involved in sensorimotor integration, that is, in establishing selective links between sensory and motor aspects of behavior” (2003, p. 140).

Here is another hypothesis, which we might call the scale model theory of binding. For example, in order to test a new airplane design in a wind tunnel, one needs a complete model of it. The reason for this is that a change in one area, say the wing, will alter the aerodynamics of the entire plane, especially those areas behind the wing. The world itself is quite holistic. […] Binding allows the executive processes to operate on a large, holistic model of the world in a way that allows the model to simulate the same holistic effects found in the world. The holism of the represented realm is mirrored by a type of brain holism in the form of binding.


See also these articles about (phenomenal) binding:

That Time Daniel Dennett Took 200 Micrograms of LSD (In Another Timeline)

[Epistemic status: fiction]

Andrew Zuckerman messaged me:

Daniel Dennett admits that he has never used psychedelics! What percentage of functionalists are psychedelic-naïve? What percentage of qualia formalists are psychedelic-naïve? In this 2019 quote, he talks about his drug experience and also alludes to meme hazards (though he may not use that term!):

Yes, you put it well. It’s risky to subject your brain and body to unusual substances and stimuli, but any new challenge may prove very enlightening–and possibly therapeutic. There is only a difference in degree between being bumped from depression by a gorgeous summer day and being cured of depression by ingesting a drug of one sort or another. I expect we’ll learn a great deal in the near future about the modulating power of psychedelics. I also expect that we’ll have some scientific martyrs along the way–people who bravely but rashly do things to themselves that disable their minds in very unfortunate ways. I know of a few such cases, and these have made me quite cautious about self-experimentation, since I’m quite content with the mind I have–though I wish I were a better mathematician. Aside from alcohol, caffeine, nicotine and cannabis (which has little effect on me, so I don’t bother with it), I have avoided the mind-changing options. No LSD, no psilocybin or mescaline, though I’ve often been offered them, and none of the “hard” drugs.

 

As a philosopher, I have always accepted the possibility that the Athenians were right: Socrates was quite capable of corrupting the minds of those with whom he had dialogue. I don’t think he did any clear lasting harm, but it is certainly possible for a philosopher to seriously confuse an interlocutor or reader—to the point of mental illness or suicide, or other destructive behavior. Ideas can be just as dangerous as drugs.

 

Dennett Explained by Brendan Fleig-Goldstein and Daniel A. Friedman (2019)


It would be quite fascinating to know what Dan would say about lived psychedelic states. With that in mind, here is an essay prompt originally conceived for GPT-3 to satisfy our curiosity:

And after seeing some surprising empirical results with his heterophenomenological methods when examining the experience of people on psychedelics, Daniel Dennett decided to experience it for himself by taking 200 micrograms of LSD. The first thing he said to himself as he felt the first indications of the come-up was…



Maggie and Anders

Maggie Wassinge and Anders Amelin (QRI Sweden volunteer coordinators and content contributors – see letters I & II, letters III, IV, V, & VI, and letters VII, VIII, & IX) decided to give it a go first. We shall then compare it with what GPT-3 can come up with at a later point.

In a way, what you are about to read can be thought of as Anders & Maggie’s simulation of a hyper-intelligent GPT-3-like system’s simulation of a hypothetical Daniel Dennett on LSD. So many layers, I know. But the results speak for themselves:


Dan says: Correlation is all that is needed. So what states a mind uses for modeling conditions of importance to it is fundamentally arbitrary. Like with language. Words represent things to humans but there are many languages, and words do not resemble what they represent. They only correlate with what they represent, and the correlation needs to be learned. To do science about the mind, one does not need to attach any special weight to any conscious state of mind over any other. One person’s hope may be another person’s despair. These “qualia” are like internally projected hypo-languages which are different all over the map, if there were one. In fact there cannot be an interpersonal map of what things feel like personally. Only naïve people could hope for a fundamental linguistics of consciousness, as if these states were to constitute a universal language of some ground truth of the mind. Silly. Romantic. It’s all arbitrary. For the record, I never said subjective experiential states do not exist. People misrepresent me on that. What I am saying is that it will never be possible to scientifically map what the state that a word such as, for instance, “green”, translates to feels like by the brain of a given individual. My green could be your red.

Just drop the whole idea of trying to map the state-space of qualia. That is my position. Or at least I know it is, logically. Right now I begin to notice how everything intensifies and becomes somehow more salient. More revealingly “real”. As I reflect on the notion of how “states” correlate, a humorous episode from my undergraduate student life so long ago is brought to the surface. At Wesleyan it was, where I was taking a course in Art Appreciation. The lecturer was showing a slide of a still life. A bowl of fruit it was, conspicuously over-ripe. Pointing at one of the fruits, saying “Can anyone tell me what state this peach is in?” There was silence for about three seconds, then one student exclaimed: “Georgia”. Everyone laughed joyfully. Except me. I never quite liked puns. Too plebeian. Sense of humor is arbitrary. I believe that episode helped convince me that the mind is not mysterious after all. It is just a form of evolved spaghetti code finding arbitrary solutions to common problems. Much like adaptations of biochemistry in various species of life. The basic building blocks remain fixed as an operative system if you will, but what is constructed with them is arbitrary and only shaped by fitness proxies. Which are, again, nothing but correlations. I realized then that I’d be able to explain consciousness within a materialist paradigm without any mention of spirituality or new realms of physics. All talk of such is nonsense.

I have to say, however, that a remarkable transformation inside my mind is taking place as a result of this drug. I notice the way I now find puns quite funny. Fascinating. I also reflect on the fact that I find it fascinating that I find puns funny. It’s as if… I hesitate to think it even to myself, but there seems to be some extraordinarily strong illusion that “funny” and “fascinating” are in fact those very qualia states which… which cannot possibly be arbitrary. Although the reality of it has got to be that when I feel funniness or fascination, those are brain activity patterns unique to myself, not possible for me to relate to any other creature in the universe experiencing them the same way, or at least not to any non-human species. Not a single one would feel the same, I’m sure. Consider a raven, for example. It’s a bird that behaves socially intricately, makes plans for the next day, can grasp how tools are used, and excels at many other mental tasks even sometimes surpassing a chimpanzee. Yet a raven has a last common ancestor with humans more than three hundred million years ago. The separate genetic happenstances of evolution since then, coupled with the miniaturization pressure due to weight limitations on a flying creature, means that if I were to dissect and anatomically compare the brain of a raven and a human, I’d be at a total loss. Does the bird even have a cerebral cortex?

An out-of-character thing is happening to me. I begin to feel as if it were in fact likely that a raven does sense conscious states of “funny” and “fascinating”. I still have functioning logic that tells me it must be impossible. Certainly, it’s an intelligent creature. A raven is conscious, probably. Maybe the drug makes me exaggerate even that, but it ought to have a high likelihood of being the case. But the states of experience in a raven’s mind must be totally alien if it were possible to compare them side by side with those of a human, which of course it is not. The bird might as well come from another planet.

The psychedelic drug is having an emotional effect on me. It does not twist my logic, though. This makes for internal conflict. Oppositional suggestions spontaneously present themselves. Could there be at least some qualia properties which are universal? Or is every aspect arbitrary? If the states of the subjective are not epiphenomenal, there would be evolutionary selection pressures shaping them. Logically there should be differences in computational efficiency when the information encoded in qualia feeds back into actions carried out by the body that the mind controls. Or is it epiphenomenal after all? Well, there’s the hard problem. No use pondering that. It’s a drug effect. It’ll wear off. Funny thing though, I feel very, very happy. I’m wondering about valence. It now appeals strongly to take the cognitive leap that at least the positive/negative “axis” of experience may in fact be universal. A modifier of all conscious states, a kind of transform function. Even alien states could then have a “good or bad” quality to them. Not directly related to the cognitive power of intelligences, but used as an efficient guidance for agency by them all, from the humblest mite to the wisest philosopher. Nah. Romanticizing. Anthropomorphizing.

Further into this “trip” now. Enjoying the ride. It’s not going to change my psyche permanently, so why not relax and let go? What if conscious mind states really do have different computational efficiency for various purposes? That would mean there is “ground truth” to be found about consciousness. But how does nature enable the process for “hitting” the efficient states? If that has been convergently perfected by evolution, conscious experience may be more universal than I used to take for granted. Without there being anything supernatural about it. Suppose the possibility space of all conscious states is very large, so that within it there is an ideally suited state for any mental task. No divine providence or intelligent design, just a law of large numbers.

The problem then is only a search-algorithmic one, really. Suppose “fright” is a state ideally suited for avoiding danger. At least now, under the influence, fright strikes me as rather better for the purpose than attraction. Come to think of it, Toxoplasma gondii has the ability to replace fright with attraction in mice with respect to cats. It works the same way in other mammals, too. Are things then not so arbitrarily organized in brains? Well, those are such basic states we’d share them with rodents presumably. Still can’t tell if fright feels like fear in a raven or octopus. But can it feel like attraction? Hmmm, these are just mind wanderings I go through while I wait for this drug to wear off. What’s the harm in it?

Suppose there is a most computationally efficient conscious state for a given mental task. I’d call that state the ground state of conscious intelligence with respect to that task. I’m thinking of it like mental physical chemistry. In that framework, a psychedelic drug would bring a mind to excited states. Those are states the mind has not practiced using for tasks it has learned to do before. The excited states can then be perceived as useless, for they perform worse at tasks one has previously become competent at while sober. Psychedelic states are excited with respect to previous mental tasks, but they would potentially be ground states for new tasks! It’s probably not initially evident exactly what those tasks are, but the great potential to in fact become more mentally able would be apparent to those who use psychedelics. Right now this stands out to me as absolutely crisp, clear and evident. And the sheer realness of the realization is earth-shaking. Too bad my career could not be improved by any new mental abilities.

Oh Spaghetti Monster, I’m really high now. I feel like the sober me is just so dull. Illusion, of course, but a wonderful one I’ll have to admit. My mind is taking off from the heavy drudgery of Earth and reaching into the heavens on the wings of Odin’s ravens, eternally open to new insights about life, the universe and everything. Seeking forever the question to the answer. I myself am the answer. Forty-two. I was born in nineteen forty-two. The darkest year in human history. The year when Adolf Hitler looked unstoppable at destroying all human value in the entire world. Then I came into existence, and things started to improve.

It just struck me that a bird is a good example of embodied intelligence. Sensory input to the brain can produce lasting changes in the neural connectivity and so on, resulting in a saved mental map of that which precipitated the sensory input. Now, a bird has the advantage of flight. It can view things from the ground and from successively higher altitudes and remember the appearance of things on all these different scales. Plus it can move sideways large distances and find systematic changes over scales of horizontal navigation. Entire continents can be included in a bird’s area of potential interest. Continents and seasons. I’m curious if engineers will someday be able to copy the ability of birds into a flying robot. Maximizing computational efficiency. Human-level artificial intelligence I’m quite doubtful of, but maybe bird brains are within reach, though quite a challenge, too.

This GPT-3 system by OpenAI is pretty good for throwing up somewhat plausible suggestions for what someone might say in certain situations. Impressive for a purely lexical information processing system. It can be trained on pretty much any language. I wonder if it could become useful for formalizing those qualia ground states? The system itself is not an intelligence in the agency sense, but it is a good predictor of states. Suppose it can model the way the mind of the bird cycles through all those mental maps the bird brain has in memory. Where the zooming in and out on different scales brings out different visual patterns. If aspects of patterns from one zoom level are combined with aspects from another zoom level, the result can be a smart conclusion about where and when to set off in what direction and with what aim. Then there can be combinations also with horizontally displaced maps and time-displaced maps. Essentially, to a computer scientist, we are talking massively parallel processing through cycles of information compression and expansion, with successive-approximation combinations of pattern pieces from the various levels in rapid repetition, until something leads to an action that gets rewarded via utility-function maximization.
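
The zoom-level idea in this passage can be sketched as an actual algorithm. Below is a toy coarse-to-fine search over a stack of “mental maps” (a simple image pyramid); it is only my illustrative gloss on the paragraph, not anything GPT-3 or Dennett proposed, and every name and number in it is made up.

```python
import numpy as np

def pyramid(grid, levels=3):
    """Stack of 'mental maps': each level halves resolution by 2x2 averaging
    (a stand-in for information compression across zoom levels).
    Assumes grid dimensions are divisible by 2**(levels - 1)."""
    maps = [grid]
    for _ in range(levels - 1):
        g = maps[-1]
        maps.append(
            g.reshape(g.shape[0] // 2, 2, g.shape[1] // 2, 2).mean(axis=(1, 3)))
    return maps

def coarse_to_fine_peak(grid, levels=3):
    """Find the most salient location by scanning the coarsest map globally,
    then refining only inside the winning 2x2 block at each finer level."""
    maps = pyramid(grid, levels)
    y, x = np.unravel_index(np.argmax(maps[-1]), maps[-1].shape)
    for g in reversed(maps[:-1]):           # second-coarsest ... finest
        block = g[2 * y : 2 * y + 2, 2 * x : 2 * x + 2]
        dy, dx = np.unravel_index(np.argmax(block), block.shape)
        y, x = 2 * y + dy, 2 * x + dx
    return int(y), int(x)

rng = np.random.default_rng(1)
world = rng.random((64, 64))
world[40, 9] += 10.0                  # one salient "landmark"
print(coarse_to_fine_peak(world))     # (40, 9), found by zooming in
```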


Axioms of Integrated Information Theory (IIT)

Thank goodness I’m keeping all this drugged handwaving to myself and not sharing it in the form of any trip report. I have a reputation for being down to Earth, and I wouldn’t want to spoil it. Flying with ravens, dear me. Privately it is quite fun right now, though. That cycling of mental maps, could it be compatible with the Integrated Information Theory? I don’t think Tononi’s people have gone into how an intelligent system would search qualia state-space and how it would find the task-specific ground states via successive approximations. Rapidly iterated cycling would bring in a dynamic aspect they haven’t gotten to, perhaps. I realize I haven’t read the latest from them. Was always a bit skeptical of the unwieldy mathematics they use. Back of the envelope here… if you replace the clunky “integration” with resonance, maybe there’s a continuum of amplitudes of consciousness intensity? Possibly with a threshold corresponding to IIT’s nonconscious feed-forward causation chains. The only thing straight from physics which would allow this, as far as I can tell from the basic mathematics of it, would be wave interference dynamics. If so, what property might valence correspond to? Indeed, be mappable to? For conscious minds, experiential valence is the closest one gets to updating on a utility function. Waves can interfere constructively and destructively. That gives us frequency-variable amplitude combinations, likely isomorphic with the experienced phenomenology and intensity of conscious states. Such as the enormous “realness” and “fantastic truth” I am now immersed in. Not sure if it’s even “I”. There is ego dissolution. It’s more like a free-floating cosmic revelation. Spectacular must be the mental task for which this state is the ground state!

Wave pattern variability is clearly not a bottleneck. Plotting graphs of frequencies and amplitudes for even simple interference patterns shows there’s a near-infinite space of distinct potential patterns to pick from. The operative system, that is, the evolution and development of nervous systems, must have been slow to optimize via genetic selection early on in the history of life, but then it could go faster and faster. Let me see, humans acquired a huge redundancy of neocortex of the same type as animals use for navigation in spacetime locations. Hmmm… that which the birds are so good at. Wonder if the same functionality in ravens also got increased in volume beyond what is needed for navigation? Opening up the possibility of using the brain to also “navigate” in social relational space or tool function space. Literally, these are “spaces” in the brain’s mental models.

Natural selection of genetics cannot have found the ground states for all the multiple tasks a human with our general intelligence is able to take on. Extra brain tissue is one thing it could produce, but the way that tissue gets efficiently used must be trained during life. Since the computational efficiency of the human brain is assessed to be near the theoretical maximum for the raw processing power it has available, inefficient information-encoding states really aren’t very likely to make up any major portion of our mental activity. Now, that’s a really strong constraint on mechanisms of consciousness there. If you don’t believe it was all magically designed by God, you’d have to find a plausible parsimonious mechanism for how the optimization takes place.

If valence is in the system as a basic property, then what can it be if it’s not amplitude? For things to work optimally, valence should in fact be orthogonal to amplitude. Let me see… What has a natural tendency to persist in evolving systems of wave interference? Playing around with some programs on my computer now… well, appears it’s consonance which continues and dissonance which dissipates. And noise which neutralizes. Hey, that’s even simple to remember: consonance continues, dissonance dissipates, noise neutralizes. Goodness, I feel like a hippie. Beads and Roman sandals won’t be seen. In Muskogee, Oklahoma, USA. Soon I’ll become convinced love’s got some cosmic ground state function, and that the multiverse is mind-like. Maybe it’s all in the vibes, actually. Spaghetti Monster, how silly that sounds. And at the same time, how true!
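
Since “Dan” says he has been playing around with some programs, here is the sort of minimal numpy experiment the passage gestures at: superpose two sine waves and measure how much their combined amplitude envelope fluctuates (beating). Consonant pairs (small-integer frequency ratios) give a stable envelope; dissonant pairs beat strongly. This is standard psychoacoustics pressed into service as a loose analogy, not a model of brains; all parameters are arbitrary.

```python
import numpy as np

fs = 44_100                       # sample rate (Hz)
t = np.arange(0, 1.0, 1 / fs)     # one second of signal

def envelope_roughness(f1, f2, window=512):
    """Superpose two unit sines and return the standard deviation of their
    short-window amplitude envelope: ~0 for stable (consonant) pairs,
    large when the pair beats (dissonant)."""
    x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
    n = len(x) // window * window
    env = np.abs(x[:n]).reshape(-1, window).max(axis=1)
    return env.std()

print(envelope_roughness(440, 660))  # 3:2 ratio (a fifth): low roughness
print(envelope_roughness(440, 466))  # ~minor second: strong beating
```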


Artist: Matthew Smith

I’m now considering the brain to produce self-organizing ground-state qualia selection via activity wave interference, with dissonance gradient descent and consonance gradient ascent, ongoing information compression-expansion cycling, and normalization via buildup of system fatigue. Wonder if it’s just me tripping, or if someone else might seriously be thinking along these lines. If so, what could make a catchy name for their model?

Maybe “Resonant State Selection Theory”? I only wish this could be true, for then it would be possible to unify empty individualism with open individualism in a framework of full empathic transparency. The major ground states for human intelligence could presumably be mapped pretty well with an impressive statistical analyzer like GPT-3. Mapping the universal ground truth of conscious intelligence, what a vision!

But, alas, the acid is beginning to wear off. Back to the good old opaque arbitrariness I’ve built my career on. No turning back now. I think it’s time for a cup of tea, and maybe a cracker to go with that.


Self-Locatingly Uncertain Psilocybin Trip Report by an Anonymous Reader

See more rational trip reports by anonymous readers for: 2C-B, 4-AcO-DMT, LSD, N,N-DMT, 5-MeO-DMT


Pre-ingestion Notes

Physiological Background

  • Restfulness: Well-Rested (7-8 hours of sleep)
  • Wake-up time: 5:10 am
  • Morning run: 5:30 am (~30 minutes total)
  • Breakfast time: 9:25 am
  • Food: 2 chicken sausages + ½ bagel, ¾ can of Peach-Pear La Croix (carbonation will create a slightly acidic environment, possibly potentiating the drug’s metabolism)
  • Weight: 55-60 kg
  • Height: 170-180 cm
  • Sex / Identity / Orientation: Male / Cis / Hetero
  • Libido: decently high; 2 days since last masturbation
  • Dosage: 6 g, mostly caps; took 2× the dosage of everyone else to prevent any potential short-term homeostatic adaptive effects which might lead to upping the dosage mid-stride
  • Other Drugs: none


“I have a split personality”, said Tom, being frank

Environmental Stimuli

  • Temperature: moderately warm – cozy (~74 °F / 23 °C)
  • Light: near window with ample filtered sunlight
  • Touch: sitting on fuzzy carpet; comfortable loose-fitting athleisure wear clothes
  • Olfactory: 5 sprays of peach-nectarine body scent (as an experiment to test memory retrieval of trip, as unlike other sensory systems, scent mostly bypasses the thalamic sensory gating filter)
  • Taste: regular mushroom taste, 2 Listerine strips immediately afterward (again to engage in potential memory retrieval). Nothing nauseating or stomach-troubling.
  • Auditory: listening to minimalist / contemporary classical music by Ludovico Einaudi
  • Visual: YouTube videos of fractal images & Islamic art
  • Social: 4 friends (3 of whom are of the rationalist tribe; all techies)


    Plate of mushrooms and accompanying peach-nectarine body scent

Prior Cognitive State

  • Past drug experiences: 2 occasions of cannabis, a few times alcohol, and 2 occasions of Adderall. All minor experiences, none negative, and generally I strongly abstain from psychoactive drugs (even caffeine). No prior experience with psychedelics(!)
  • Mood: relatively optimistic and happy; no negative feelings
  • Beliefs: generally Fictionalist in ontology, though in recent years have experienced Platonist tendencies. Strongly grounded in an empirically-based, scientific-physicalist perspective. Strong sense of individuality + Selfhood. Not religious, nor spiritual, though upbringing was Roman Catholic.
  • Training: former academic neuroscience + pharmacologically-trained; current data science graduate student
  • Goals: to gain first-person experience of Klüver constants and re-establish/confirm any potential taxonomy of geometric forms (see here, here, and here for more)
  • Expectations: extremely skeptical that any fantastical effects might occur other than some random colors in the visual field, minor mood changes, minor memory + time effects such as retarded / slowed time, and maybe at most some Alice-in-Wonderland Syndrome perceptual-type effects. Hoping for experiences of strong and consistent + clearly defined Klüver constants, but skeptical they even exist. Skeptical of any notion of god-minds, communion with nature, dying and being reborn, and generally any spiritual or mystical experiences.


Real-Time Trip Report (unedited)

Time of Ingestion – 9:55 am, 02/15/2020

10:21 – First noticeable effect. Concentration lagging. Palms beginning to sweat. Starting to feel like it might be difficult to focus enough to write a report.
10:29 – REALLY strong physiological effect. Losing focus. Similar to being extremely tired(!). Sweat increasing (palms, pits, neck in that order). No visual or auditory hallucinations yet. Everyone else is laughing somewhat uncontrollably.
10:33 – First STRONG spike of losing concentration. Similar to being really tired or fainting.
10:36 – Another strong spike of losing consciousness. No hallucinations yet.
10:37 – Spike again. Increasing frequency now. One about every 15-20 seconds. Have to write this sentence in bursts, and memory is trying to keep up recovering my train of thought. Have to stop and now and then to
10:38 – First visual effectss! Tortoise shell-like fractal images. Hard to focus. Palms really really really sweaty, notice it as keyboard residue.
10:43 – Still hanging in there. Notice that friend’s response time is really late with around 2 minute delayss, not sure if that’s me or them? Extreme switching cost now between chrome tabs of writing and watching youtube vid.
10:46 – Switching cost too strong. Dilemma now between 3rd person documentation and 1st person immersion. Trying hrad.
10:47 – okay giving in to the experience now. too strong now
10:48 – laughter
10:48 does not make sense, really hard to type now, losing languge
50 trying hard to document. worried hwo to convey this
51 realizati not abot documnting 52 thats it 53 can i come back
54 latice structs 58 still here k
11
03



Post-Trip Report

Total Trip Duration: ~6-7 hours

I’m writing this now being fully recovered and in my usual frame of mind. I’m going to start with how much I can remember…

Somewhat cognizant but before peak experience, I came back from one of my blackouts and saw one of my friends outside the cabin. I worried about their safety but realized that other people were going to take care of him. I also remember one of my friends constantly checking up on me. The strongest thought that I recall during the beginning of my experience was realizing that I had to let go of all this documentation, stop worrying about others and what other people think, and let go of this 3rd person perspective. I know this sounds somewhat monstrously misanthropic, but at the time of the experience I felt that, to pursue Truth, I had to go further than everyone else even if that meant leaving them behind, and it was distracting to even have to jump back and forth between 1st and 3rd person perspective to focus on responding to other people during my brief moments of sanity to tell them I was ok when I could be delving deeper and deeper into the experience. This thought of leaving all thoughts of others behind to pursue the Truth recurred a few times as I began to get sucked into the vortex of patterns found in the wood grains of the floor and the curls of the carpet.

Linearity of experience was soon lost. I blinked in and out of existence. Time was definitely not unraveling like the constant forward stream I was used to and I felt like I was teleporting from one false reality to the next. I didn’t know in what order those events occurred. At this period I really wasn’t thinking, but more just passively viewing brief glimpses and snapshots of my body going through the motion. One moment I was in the kitchen. The next locked in the bathroom. The next on the living room floor. It’s such a shame that there’s a quirk in our language requiring me to express these as “next experiences” when in reality I did not experience them as having an order. I felt a really strong sense of déjà vu and reverse déjà vu tied to each of them, as if I’ve already done them before while also simultaneously knowing that I will be doing them in the future. It was really weird to have the feeling that you know you’ve already lived the future.

I blinked onto the bathroom floor. I thought why should I even look back and respond at all to the other people asking if I was okay when I finally have the chance to explore and find Truth with a capital T, and suddenly a strong sadness hit me. It was more of a feeling than a coherent logical thought, but the best way I can explain it was that it was a type of guilt that I’ll never be able to share this with my sister, my brother, my mom and dad, and then what will happen to my roommates, and then all my other friends and classmates around me, and how they’ll worry about me and so I told myself I’ll have to come back for them.

And so I tried to come back. Trying and wanting is such an interesting concept. It’s weird to desire language without being able to form a coherent line of thought or internal sentence in your head, but that is what I remember doing. How do you say to yourself that you want something without even being able to describe to yourself what that is? It seems desire may be more fundamental than internal language. Soon my linguistic centers began to reboot and I realized that this had to do with my memory chunks getting larger and allowing me to hold more in short term memory. Although seemingly primitive and simplistic, I can’t emphasize enough how this realization that memory and language were intertwined and recursively bootstrapping each other really helped soothe away any panic that I was totally lost.

I blinked into the kitchen. Time was becoming more linearly coherent now, but I kept blanking out and teleporting randomly throughout the kitchen. The thing was: I didn’t think of it as the same kitchen. I thought of it as different parallel realities of a kitchen that I recognize. I remember my friend offering me a chip and me trying so very hard to grasp that chip from her and hold on to that reality even if it was not the true real one, as if merely believing, by sheer force of will, that I was actually grasping an object would make it concrete. I recall saying ‘trying to parse’. I strained with cognitive effort to stop teleporting. I remember asking everyone in each reality I blinked into whether or not this was the real one. “Wow, is this real?” became a repeating question tinted with wonder and surprise.

I blinked into the living room. Time was linear again, and it seems I began to be able to somewhat coherently reflect on the geometric patterns clouding my sight. I regretfully wished I could have focused on them more (as well as the apparently living, pulsating, and breathing floor beneath me and the shrinking and growing of my hands), but it was at this moment that I truly, to the deep core of myself, had the gut feeling that this reality wasn’t my original one. I honestly and wholeheartedly believed that this reality was a construct and that I was living out a simulation either in the mind of my true body in the real world (where I was probably in some coma in the hospital) or inhabiting the body of a different version of myself in a parallel universe. Everything felt false. Fake. Simulated. I was overcome with a Great Sadness that I didn’t know how to get back to my own original reality, and that I never said goodbye to the people I loved. And I was surprised because these melancholic emotions were of such strength as to overcome my scientific training and any previous skepticism I once held.

I tried in vain to remember some mathematical way of proving you were living in a simulation, maybe something from information theory, or Tegmark’s mathematical universe, or something regarding speeds and frames of reference and computational power being limited in an embedded simulated universe, but I could not for the life of me recall how to actually prove this to myself or what experiment to run. I remember thinking, fuck man, I really should have worked out those thought experiments and proofs in depth because now I’m stuck.

However, it was on that thought of frames of reference that I realized with some sadness and regret that maybe it’s not all that bad, since how can I be the one to say one reality is more real and valid than the next? The best way I can convey this was that it was a somewhat mono no aware-type feeling. Even if this is a simulation in my mind or I’m in some parallel universe, why should I be any less happy? If someone spent their whole life creating their own meaning through something as removed from reality as art, or music, or pure math, and was able to live a fulfilling life, why should any particular version of myself be considered less meaningful just because this version of me possesses a memory of another me as an origin and potential branching-off point? Wasn’t another reality just as valid as the original one that I just came from? What made my old frame of reference special except for the mere fact of it being my origin? Why was I feeling this sense of sadness that I left it all behind to teleport to this version of reality? And then came the acceptance that if this reality was just as real except for my gut belief that it wasn’t, why shouldn’t I be able to simultaneously accept that gut feeling and move on and live in this version of reality?

And so I decided to live on, and within a few hours began to lose this sense that this was the false reality (although I really really wished I had a GoPro camera with me so that I could definitively prove to myself that I was in the correct reality). I began to have a newfound strong sense of empathy towards people with dissociative disorders. Thinking back on the experience, I think I primed myself for these thoughts when I kept switching between first-person and third-person perspectives, telling myself I couldn’t handle the switching cost any longer, that I should just immerse myself fully in the experience and forget about documenting this for other people, and asking why I was even submitting myself to the approval of others anyway, because if there’s anyone who will have to go further in their exploration and sacrifice the chance to be with others then I guess I’ll just have to take up that burden.

Overall, I think the strong dissociative experience of thinking this reality was the fake simulated one had to do with maybe a couple of things. As mentioned before, one cause could have been the psychological priming induced by constantly switching between 1st and 3rd person ways of perceiving this event, creating the necessary emotional conditions of being simultaneously split between existing fully immersed in the present moment and wanting to abstract / detach myself from the moment.

The second potential cause of the dissociation could have been due to my brain constantly blacking out and being rebooted in another physical part of the house. Because I had no memory of the continuity of how I got from one context to the next, this conditioned my brain to rationalize and register each separate event as a separate reality, probably falsely recognizing and incorrectly pattern matching these experiences as being more similar to a dream state where teleportation is normal, perceptions are distorted, and sequences of events are jumbled. This probably then began synthesizing the necessary eventual gut-belief that this reality was fake (because I subconsciously falsely pattern matched that it was similar to a dream).

Finally, I think the third potential contributor to the dissociation occurred when I was coming down from the experience and my brain went on overdrive trying to rationalize events. It might be possible that the more adept you are at creatively rationalizing things away, the worse, paradoxically, you are at accepting this reality. Just having knowledge of potential parallel realities in physics, the simulation hypothesis that we might either be simulated in our heads or on a computer and may not realize it, the philosophy of solipsism, and knowing the neuroscience of how just freaking good the brain is at tricking itself that something is real, all created fertile conditions for my brain to interpret this reality as false.

Thinking back on the experience, I now have a newfound appreciation for memory and the continuity of experience, and their contribution to what it means to feel situated and embodied in this reality. If I were to do this again I would probably micro-dose so that I could still retain my linguistic faculties and linear way of reasoning in order to investigate the visual geometric effects in much greater scientific 3rd person POV-like rigor, focusing less on the semantic psychoanalytical content of the experience and more on the psychophysical optical effects (which was my original goal!). This really showed me how dependent the sense of Self was on the continuity of memory and experience and that maybe the Self really is composed of different smaller frames of reference generated by subnetworks in the brain (and hence is an ecology of momentary and brief snapshots / selves constantly going in and out of existence, both competing and coalescing in dynamic flux to make up the whole Self). Without the strong pillar that the continuity of singular memory provides to the host body that this community of selves inhabits, I think there really could be a binding problem for integrating these individual snapshots into a singular Selfhood and individual identity that persist through time that we call ‘I’.

Overall, I gained a much greater appreciation for continuity, the linear narrative of language intertwined with memory, and what it feels like on the inside of someone who is dissociated and thinks their surrounding reality is just a construct and not real. Also, the geometric images were really cool and I finally now understand why people say Google’s DeepDream art seems psychedelic, because looking into the mirror during the later stages of my trip confirmed that my face really did look like a globular, eyes-everywhere, and skewed-proportioned / sized image!

Anyways, I really want to thank my friends who were with me on this trip for constantly checking up on me to make sure I was okay. Rarely do I feel comfortable in the presence of other people and I’m glad I felt safe with you.

Thank you ❤️

Overall, I’d give my first experience with psychedelics a 7/10!


2-Week Post-Trip Report

So it’s been 2 weeks since ingestion and I just wanted to briefly report this one last interesting phenomenon for documentation’s sake.

Out of the 14 days post-trip, a little less than half of those nights (6 nights in total, distributed more heavily in the week immediately after the trip), I’ve had dreams that featured strongly self-referential phenomena. Within these dreams, my perceptual surroundings immediately reminded me of my aforementioned psychonautical experience of questioning my reality. These strong emotional realizations would then essentially cause my brain to kick itself out of the dream.

However, instead of truly waking up, I was still nested inside another dream in which I was imagining waking up. I usually went through 2-3 rounds of this false waking-up cycle until I finally surfaced into the real reality of the morning.

I thought it interesting for the first 2 nights, but after that it actually got really tiring always waking up questioning whether or not I’m really awake, and then having to go through the same motions of prepping for school/work knowing you’ve done that 2-3 times already in your head for the day.

What was funny though was that this always occurred around the same time each morning (my body has always naturally woken up at 5 am on the dot since high school), and in each iteration of the dream in which I falsely woke up I remember looking at my clock and seeing that it was 5 am. This would then lend me false confidence and confirm that “ah, ok, I’m not in the dream anymore since I really do wake up at 5 am”. However, I think after enough times of this not working my brain finally came to learn that that was no longer a reliable indicator of the reality being real.

This skeptical realization finally got strong enough for me to recall it within my dream, where it acted as an early kick-out mechanism, so that I eventually woke up closer and closer to my true ‘waking up point’. I remember going through the cycle: wake up, look at the clock, remember that this no longer works, then immediately get kicked out; wake up again, look at the clock, remember this no longer works and get kicked out again; wake up, then look at the clock and finally get some inkling that this perception is a little different, and realize that’s because I’m really waking up for real now. A rather fascinating experience!


Featured image by Nick Swanson

5-MeO-DMT Awakenings: From Naïve Realism to Symmetrical Enlightenment

In the following video Leo Gura from actualized.org talks about his 30-day 5-MeO-DMT streak experiment. In this post I’ll highlight some of the notable things he said and comment along the way, using a QRI lens to interpret his experiences (if you would rather make up your mind about what he says without my commentary, just go and watch the video on your own before reading what I have to say about it).

TL;DR: Many of the core QRI paradigms such as Neural Annealing, the Symmetry Theory of Valence, the Tyranny of the Intentional Object, and Hyperbolic Geometry on Psychedelics have a surprising degree of explanatory power when it comes to making sense of the peculiar process that ensues when someone takes a lot of 5-MeO-DMT. The deep connections between symmetry, valence, smooth geometry, and information content are made clear in this context due to the extreme and purified nature of the states induced by the drug.


Introduction

Recently Adeptus Psychonautica (who has interviewed me in the past about the hyperbolic geometry of DMT experiences) put out a video titled “When you have taken too much – Actualized.org”. This video caught my attention because Leo Gura did something that is rather taboo in spiritual communities, and for good reason. Namely, he tried to convince the viewers that he had achieved a level of awakening that nobody (or perhaps only a few people) on the entire planet had ever reached. He then said he was going to isolate for a month to integrate these profound awakenings and come back with a description of what they are all about.

Thankfully I didn’t have to wait a month to satisfy my curiosity and see what happened after his period of isolation, because by the time I found out about it he had already posted his post-retreat video. Well, it turns out that he used those 30 days of isolation to conduct a very hard-core psychedelic experiment. Namely, he took high doses of 5-MeO-DMT daily for the entire month. I’ve never heard of anyone doing this before.

Learning about what he experienced during that month is of special interest to me for many reasons. In particular, thanks to previous research about extreme bliss and suffering, we had determined that 5-MeO-DMT is currently the psychedelic drug that has the most powerful and intense effects on valence. Recall Logarithmic Scales of Pleasure and Pain (video): many lines of evidence point to the fact that extreme states of consciousness are surprisingly powerful in ways that are completely counterintuitive. So when Leo says that there are “many levels of awakening” and goes on to discuss how each level is unrecognizably more intense and deeper than the previous one, I am very much inclined to believe he is trying to convey a true property of his experiences. Note that Leo did not merely indulge in psychedelics; we are talking about 5-MeO-DMT in particular, which is the thermonuclear bomb version of a psychoactive drug (as with plutonium, this stuff is best handled with caution). Moreover, Leo is thankfully very eloquent, which is rare among people who have had many extreme experiences. So I was very eager to hear what he had to say.

While I can very easily believe his trip reports when it comes to their profundity, intensity, and extraordinary degree of consciousness, I do not particularly find his interpretations of these experiences convincing. As I go about describing his video, I will point out ways in which you can take his phenomenological descriptions as veridical without at the same time having to agree with his interpretations of them. Moreover, if you end up exploring these varieties of altered states yourself, by reading this you will at least have two different and competing frameworks with which to explain your experiences. This, I think, is an improvement. Right now the psychedelic and scientific communities have very few lenses with which to interpret something as extraordinary as 5-MeO-DMT experiences. And I believe this comes at a great cost to people’s sanity and epistemic rationality.

What Are Leo’s Background Assumptions?

In the pre-retreat video Leo says that his core teachings (and what he attempts to realize in himself) are: (1) you are literally God, (2) there is nothing but consciousness – God is infinite consciousness, (3) everything is states of consciousness – everything at all times is a different state of consciousness, (4) you are love – and love is absolute – this is all constructed out of love – fear is just fear of aspects of yourself you have disconnected from, (5) you have no beginning and no end, (6) you should be radically open-minded. Then he also adds that physical and mental health issues are just manifestations of your resistance to realizing that you are God.

What Are My Background Assumptions?

Personal Identity

I am quite sympathetic to the idea of oneness, which is also talked about with terms like nonduality and monopsychism. In philosophical terminology, which I find to be more precise and rigorous, this concept goes by the name of Open Individualism – the belief that we are all one single consciousness. I have written extensively about Open Individualism in the past (e.g. 1, 2, 3), but I would like to point out that the arguments I’ve presented in favor of this view are not based on direct experience, but rather on logical consistency from background assumptions we take for granted. For instance, if you assume that you are the same subject of experience you were a second ago, it follows that you can exist at two points in space-time and still be the same being. Your physical configuration is different from a few seconds ago (let alone a decade ago), you have slightly different memories, the neurons active are different, etc. For every property you point out as your “identity carrier” I can find a counter-example where that carrier changes a little while you still remain the same subject of experience. Add to that teleportation, fission, fusion, and gradual replacement thought experiments and you can build a framework where you can become any other arbitrary person without a loss of identity. These lines of argumentation, coupled with the transitivity of identity, can build the case that we are indeed all one to begin with.
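
The skeleton of that argument can be written out in a couple of lines (my formalization of the above, not a quote):

```latex
% Let S(x, y) mean: person-stages x and y belong to the same subject of
% experience. The argument treats S as transitive:
S(x, y) \land S(y, z) \implies S(x, z)
% Each small change (a second of time, a few neurons, one memory) is
% assumed to preserve S, so chains of such steps compose:
S(x_0, x_1) \land S(x_1, x_2) \land \dots \land S(x_{n-1}, x_n)
  \implies S(x_0, x_n)
% which carries identity across arbitrarily large differences between
% stages -- the step that opens the door to Open Individualism.
```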

But realize that rather than saying that you can grasp this (potential) truth directly from first-person experience, I build from agreed-upon assumptions to arrive at an otherwise outlandish view. Understanding the argument does not entail “feeling we are all one”, nor does feeling we are all one entail understanding the arguments!

Indirect Realism About Perception

There is a mind-independent world out there and you never get to experience it directly. In some sense, we each live in a private skull-bound world-simulation that tracks the fitness-relevant features of our environment. Hence, during meditation, dreaming, or psychedelic states you are not accessing any sort of external reality directly, but rather, exploring possible configurations and qualities of your inner world-simulation. This is something that Leo may implicitly not realize. In particular, interpreting 5-MeO-DMT experiences through direct realism (also called naïve realism – the view that you experience the world directly through your senses) would make you think that you are literally merging with the entire cosmos on the drug. Whereas interpreting those experiences with indirect realism merely entails that your inner boundaries are dissolving. In other words, the partitions inside your world-simulation are what implements the feeling of the self-other duality. And since 5-MeO-DMT dissolves inner boundaries, it feels as though you are becoming one with your surroundings (and the rest of reality).

Physicalism and Panpsychism

An important background assumption is that the laws of physics accurately describe the behavior of the universe. This is distinct from materialism, which would also posit that all matter is inherently insentient. Physicalism merely says that the laws of physics describe the behavior of the physical, but leaves its intrinsic nature as an open question. Together with panpsychism, however, physicalism entails that what the laws of physics are describing is the behavior of consciousness.

Tyranny of the Intentional Object

We tend to believe that what makes us happy is external to us, while in reality happiness is a state of consciousness triggered by external circumstances. Our minds lead us to believe otherwise for evolutionary reasons.

Valence Structuralism

What makes an experience feel good or bad is not its semantic content, its computational use, or even whether the experience is self-reinforcing or not. What makes experiences feel good or bad is their structure. In particular, a very promising idea that will come up below is that highly symmetrical states of consciousness are inherently blissful, such as those we can access during orgasm, meditation, psychedelics, or even just good food and a hug. Recall that 5-MeO-DMT dissolves internal boundaries, and this is indicative of increased inner symmetry (where the boundaries themselves entail symmetry-breaking operations). Thus, an exotic state of oneness is blissful not because you are merging with God, but “merely” because it has a higher degree of symmetry and therefore its valence is higher than what we can normally experience. In particular, the symmetry I’m talking about here may be an objective feature of experiences, perhaps even measurable with today’s neuroimaging technology.
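
To see how “symmetry of a state” could even be operationalized, here is a deliberately crude toy metric (my own illustration, not an established QRI or neuroimaging measure): score a 2-D activity pattern by its correlation with its own mirror reflections. Under the Symmetry Theory of Valence, higher scores on some suitably sophisticated descendant of a measure like this would track higher valence.

```python
import numpy as np

def symmetry_score(pattern):
    """Toy valence proxy: mean correlation of a 2-D activity pattern with
    its left-right and up-down reflections (1.0 = perfectly symmetric)."""
    p = (pattern - pattern.mean()) / (pattern.std() + 1e-12)
    reflections = [np.fliplr(p), np.flipud(p)]
    return float(np.mean([(p * r).mean() for r in reflections]))

rng = np.random.default_rng(0)
noise = rng.standard_normal((64, 64))    # unstructured "state"
mirrored = noise + np.fliplr(noise)      # same state with a mirror symmetry
print(round(symmetry_score(noise), 2))     # ~0.0: no mirror structure
print(round(symmetry_score(mirrored), 2))  # ~0.5: left-right symmetric only
```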

There are additional key background philosophical assumptions, but the above are enough to get us started analyzing Leo’s 5-MeO-DMT journey from a different angle.


The Video

[Video descriptions are in italics whereas my commentary is bolded.]

For the first 8 minutes or so Leo explains that people do not really know that there are many levels of enlightenment. He starts out strong by claiming that he has reached levels of enlightenment that nobody (or perhaps just a few people) have ever reached. Moreover, while he agrees with the teachings of meditation masters of the past, he questions the levels of awakening that they had actually reached. It takes one to know one, and he claims that he’s seen things far beyond what previous teachers have talked about. Further, he argues that people simply have no way of knowing how enlightened their teachers are. People just trust books, gurus, teachers, religious leaders, etc. about whether they are “fully” enlightened, but how could they know for sure without reaching their level, and then surpassing them? He wraps up this part of the video by saying that the only viable path is to go all the way by yourself – to dismiss all the teachers, all the books, and all the instructions and see how far you can go on your own when genuinely pursuing truth by yourself.

With this epistemological caveat out of the way, Leo goes on to describe his methodology. Namely, he embarked on a quest of taking 5-MeO-DMT at increasing doses every day for 30 days in a row.

At 10:05 he says that within a week of this protocol he started reaching levels of awakening so elevated that he realized he had already surpassed every single spiritual teacher he had ever heard of. He started writing a manifesto explaining this, claiming that even the most enlightened humans are not truly as awake as he became during that week: it had become “completely transparent that most people who say they are awake or teach awakening are not even 1% awake”. But he decided not to go forward with the manifesto because he still values the teachings of spiritual leaders, who, according to him, are doing a great service to mankind. He didn’t want to start what he called a “nonduality war” (which is of course a fascinating term if you think about it).

The main thing I’d like to comment on here is that Leo is never entirely clear about what makes an “awakening experience” authentic. From what I gather (and from what comes next in the video) we can infer that the leading criterion is a fuzzy blend of felt certainty, feeling of unity, and sense of direct knowing, coupled together. To the extent that 5-MeO-DMT does all of these things to an extraordinary degree, we can take Leo at his word that he experienced states of consciousness that feel like awakening, and that are most likely inaccessible to anyone who hasn’t gone through a protocol like his. What remains unclear is how the semantic contents of these experiences are verified by means other than intuition. We will come back to that.

At 16:00 he makes the distinction between awakening as merely “cessation”, “nothingness”, “emptiness”, “the Self”, or the realization that “you are nothing and everything”, versus what he has been experiencing. He agrees that those are true and worthy realizations, but he claims that before his experiences, these understandings were still only realized at a very “low level”. Other masters, he claims, may care about ending suffering, about peace, about emptiness, and so on, but nobody seems to truly care about understanding reality (because otherwise they would be doing what he’s doing). He rebukes possible critics (arguably of the Zen variety) who would say that “understanding is a function of the mind”, so the goal shouldn’t be to understand. He asserts that, no, based on his lived experience, consciousness is capable of “infinite understanding”.

Notwithstanding the challenges posed by ultrafinitism, I am also inclined to believe Leo that he has experienced completely new varieties of “understanding”. In my model of the mind, understanding something means to have the ability to render it in your world-simulation in a particular kind of way that allows you to see it from every possible angle you have access to. On 5-MeO-DMT, as we will see to a greater extent below, a certain new set of projective operations get unlocked that allow you to render information from many, many more points of view at the same time. It is unclear whether this is possible with meditation alone (in personal communication, Daniel Ingram said yes) but it is certainly extraordinarily rare for even advanced meditators to be able to do this. So I am with Leo when it comes to describing “new kinds of understandings”. But perhaps I am not on board when it comes to claiming that the content of such understandings is an accurate rendering of the structure of reality.

At 18:30 Leo asserts that what happened to him is that over the course of the first week of his experiment he “completely understood reality, completely understood what God is”. God has no beginning and no end. He explains that normal human understanding sees situations from a single point of view (such as from the past to the future). But that actual infinite reality is from all sides at once: “When you are in full God consciousness, you look around the room, and you can see it from every single point of view, from an infinite number of angles and perspectives. You see that every part of the room generates and manufactures and creates every other part. […] Here when you are in God consciousness, you see it from every single possible dimension and angle. It’s not happening linearly, it’s all in the present now. And you can see it from every angle almost as though, if you take a watermelon and you do a cross-section with a giant knife, through that watermelon, and you keep doing cross-section, cross-section, cross-section in various different angles, eventually you’ll slice it up into an infinite number of perspectives. And then you’ll understand the entire watermelon as a sort of a whole. Whereas usually as humans what we do is we slice down that watermelon just right down the middle. And we just see that one cross-section.”

Now, this is extremely interesting. But first, it’s important to point out that Leo might implicitly be reasoning about his experience through the lens of direct realism about perception. That is, as he experiences this profound sense of understanding that encompasses every possible angle at once, he seems to believe that this is an understanding of his environment, of his future and past, and of reality as a whole. If, on the other hand, you start out assuming indirect realism about perception, you would interpret this experience in terms of the instantiation of new, exotic geometries in your own world-simulation. Here I must bring up the analysis of “regular” DMT (i.e. n,n-DMT) experiences through the lens of hyperbolic geometry. Indeed, regular DMT elevates the energy of your consciousness, which manifests as brighter colors, fast movement, intricate and detailed patterns, and curved phenomenal space. We know this because of numerous trip reports from people well educated in advanced mathematics who claim that the visual symmetries one can experience on DMT (at doses above 10mg) have hyperbolic curvature (cf. hyperbolic orbifolds). It is also consistent with many other phenomena one can experience on DMT (see the ELI5 for a quick summary). But keep in mind that this analysis never claims that you are directly experiencing a mind-independent “hyperspace”. Rather, it focuses on how DMT modifies the geometric properties of your inner world-simulation.

[Image: Energy-complexity landscape on DMT]

[Image: DMT trip progression]

Intriguingly, our inner world-simulations work with projective geometry. In normal circumstances our world-simulations have a consistent set of projective points at infinity – they render the modal and amodal features of our experience in projective scenes that are globally consistent. But psychedelics can give rise to the phenomenon of “point-of-view-fragmentation”, where your experience becomes a patchwork of inconsistent projective renderings. So even on “regular” DMT you can get the profound feeling of “seeing something from multiple points of view at once”. Enhanced with hyperbolic geometry, this can cause the stark impression that you can explore “hyperspace” with a kind of “ultra-understanding”.

Looking beyond “regular” DMT, 5-MeO-DMT is stranger still. Even on DMT you get the feeling that you are restricted in the number of points of view from which you can see something at the same time: many more than normal, but still a limited number. The extreme “smoothing” of experience that 5-MeO-DMT causes makes it impossible to distinguish one point of view from another, so they all blend together. Not only do you experience semantic content from “multiple points of view at once”, as on DMT; the distinctions between points of view themselves are erased, and one’s sense of knowing arises through a totally new kind of projective effect in which you feel you can see something from “every point of view at once”. It feels as if you have unlocked a kind of omniscience. This already happens on other psychedelics to a lesser extent (and in meditation, and even in sober life to an even lesser extent, but still there), and it is a consequence of smoothing the geometry of your experience to such a degree that there are no symmetry-breaking imperfections “with which to orient a projective point”. I suspect that the higher “formless” jhanas of “boundless space” and “boundless consciousness” are hinting at this effect. On 5-MeO-DMT the effect is pronounced. Moreover, because of the connection between symmetry and the smoothness of space (cf. Geometry Through the Eyes of Felix Klein), when this happens you will also automatically be instantiating a high-dimensional symmetry group. And according to the Symmetry Theory of Valence, this ought to be extraordinarily blissful. And indeed it is.

This is, perhaps, partly what is going on in the experience that Leo is describing. Again, I am inclined to believe his description, but happy to dismiss his naïve interpretation.

[Image: Indra’s Net]

At 23:15 Leo describes how from his 5-MeO-DMT point of view he realized what “consciousness truly is”. And that is an “infinitely interconnected self-communicating field”. In normal everyday states of consciousness the different parts of your experience are “connected” but not “communicating.” But on 5-MeO, “as you become more conscious, what happens is that every point in space inter-connects with itself and starts to communicate with itself. This is a really profound, shocking, mystical experience. And it keeps getting cranked up more and more and more. You can call it omniscience, or telepathy. And it’s like the universal communication system gets turned on for the first time. Right now your conscious field is not in infinite communication with itself. It’s fragmented and divided. Such that you think I’m over here, you are over there, my computer is over here, your computer is over there…”. He explains that if we were to realize we are all one, we would then instantly be able to communicate between each other.

Here again we get extremely different interpretations of the phenomena Leo describes depending on whether you believe in direct or indirect realism about perception. Since Leo implicitly assumes direct realism, he interprets this effect as literally switching on a “universal communication system” between all points in reality, whereas the indirect realist interpretation is that you have somehow interlocked the pieces of your conscious experience so that they now act as an interconnected whole. This has indeed been reported before, and at QRI we call this effect “network integration”. A simple way of encapsulating this phenomenon is to say that the cross-frequency coupling of your nervous system is massively increased, so that there is seamless information and energy transfer between vibrations at different scales (to a much lesser extent MDMA also does this, but 5-MeO-DMT is the most powerful “integration aid” we know of). This sounds crazy but it really isn’t. After all, your nervous system is a network of oscillators. It stands to reason that you can change how they interact with one another by fine-tuning their connections and get, as a result, decoupling of vibrations (e.g. SSRIs), coupling only between vibrations of a specific frequency (e.g. stimulants and depressants), or more coupling in general (e.g. psychedelics). In particular, 5-MeO-DMT does seem to cause a massively effective kind of fractal coupling, where every vibration can get in tune with every other vibration. And recall: since a lot of our inner world-simulation is about representing “external reality”, this effect can give rise to the feeling that you can now instantly communicate with other parts of reality as a whole. This, from my point of view, is merely misinterpreting the experience by imagining that you have direct access to your surroundings.
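
Since the argument leans on the “network of oscillators” picture, a concrete toy model may help. The following is a minimal Kuramoto simulation (a standard textbook model of coupled phase oscillators; my own hedged illustration in Python, not QRI code): as the global coupling constant K increases past a critical value, the population goes from incoherence to near-global synchrony, loosely analogous to the “more coupling in general” attributed to psychedelics above. All parameter values are arbitrary.

```python
import numpy as np

def kuramoto_order(K: float, n: int = 100, steps: int = 2000,
                   dt: float = 0.01, seed: int = 42) -> float:
    """Simulate n globally coupled phase oscillators (Kuramoto model)
    and return the final order parameter r: 0 = incoherent, 1 = in sync."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, 1.0, n)         # natural frequencies
    theta = rng.uniform(0.0, 2 * np.pi, n)  # initial phases
    for _ in range(steps):
        # mean-field form: each oscillator is pulled toward the mean phase
        z = np.exp(1j * theta).mean()
        r, psi = np.abs(z), np.angle(z)
        theta += dt * (omega + K * r * np.sin(psi - theta))
    return float(np.abs(np.exp(1j * theta).mean()))

for K in (0.0, 1.0, 4.0):  # weak -> strong global coupling
    # r stays small below the critical coupling (~1.6 for this frequency
    # distribution) and approaches 1 above it
    print(f"K={K}: r={kuramoto_order(K):.2f}")
```

Nothing here models 5-MeO-DMT specifically; the sketch only shows that “turning up coupling” in a network of oscillators is a well-defined knob with a qualitative regime change, which is the shape of the claim being made.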

At 34:52 Leo explains that you just need 5-MeO-DMT to experience these awakenings. And yet, he also claims that everything in reality is imaginary. It is all something that you, as God, are imagining because “you need a story to deny that you are infinite consciousness.” Even though the neurotransmitters are imaginary, you still need to modify them in order to have this experience: “I’m talking about superhuman levels of consciousness. These are not levels of consciousness that you can access sober. You need to literally upgrade the neurotransmitters in your imaginary brain. And yes, your brain is still imaginary, and those neurotransmitters are imaginary. But you still need to upgrade them nevertheless in order to access some of the things I say.”

Needless to say, it’s bizarre that you would need imaginary neurotransmitter-mimicking molecules in your brain in order to realize that all of reality is your own imagination. When you dream, do you need to find a specific drug inside your dream in order to wake up from the dream? Perhaps this view can indeed be steel-manned, but the odds seem stacked against it.

At 38:30 he starts talking about his pornography collection. He collects nude images of women, not only to relieve horniness, but also as a kind of pursuit of aesthetics. Pictures of nude super-models are some of the most beautiful things a (straight) man can see. He brings this up in order to talk about how he then, at some point, started exploring looking at these pictures on 5-MeO-DMT. Recollecting this brings him to tears because of how beautiful the experiences were. He states: “you’ve never really seen porn until you’ve seen it on 5-MeO-DMT.” He says that seen this way, he really felt that it is you (God) who is beautiful, manifested through those pictures.

A robust finding in the psychology of sexual attraction is that symmetry in faces is correlated with attractiveness. Indeed, more regular faces tend to be perceived as more beautiful. Amazingly, you can play with this effect by decorating someone’s face with face-paint: the more symmetrical the pattern, the more beautiful the face looks (and vice versa). Arguably, the effect Leo is describing, where people who are already beautiful become unbelievably pretty on 5-MeO-DMT, involves embedding high-dimensional symmetries into the way you render them in your world-simulation. A lesser, and perhaps more reliable, version of this effect happens when you look at people on MDMA: they look far more attractive than they do sober.

Leo then brings up (~41:30) that he started to take 5-MeO-DMT in warm baths as well, which he reassures us is not as dangerous as it sounds (not enough water to drown in if he experiences a whiteout). [It’s important to mention that people have died taking ketamine in bathtubs; although it is a different drug, it is arguably still extremely dangerous to take 5-MeO-DMT alone in a bathtub; don’t do it]. He then has an incredible awakening surrendering to God consciousness in the bathtub, on 5-MeO-DMT, jerking off to beautiful women on the screen of his laptop. He gets a profound insight into the very “nature of desire”. He explains that it is very difficult to recognize the true nature of desire while on a normal level of consciousness because our desires are biased and fragmented. When “your consciousness becomes infinite”, those biases dissolve, and you experience desire in its pure form. Which, according to his direct experience, turned out to be “desire for God, desire for myself”. And this is because you are, deep down, “infinite love”. When you desire a husband, or sex, or whatever, you are really desiring God in disguise. But the problem is that since your path to God is constrained by the form you desire, your connection to God is not stable. But once you have this experience of complete understanding of what desire is, you finally get your desire fully quenched by experiencing God’s love.

This is a very deep point. It is related to what I’ve sometimes called the “most important philosophical question”: is valence a spiritual phenomenon, or is spirituality a valence phenomenon? In other words: (1) do we find experiences of God blissful because they have harmony and symmetry, or (2) is it the other way around, where even the most trivial of pleasures, like drinking a good smoothie, feels good because it temporarily “gets you closer to God”? I lean towards (1): mystical experiences are so beautiful because they are indeed extremely harmonious and resonant states of consciousness, not because they take you closer to God. But I know very smart people who can’t decide between these views. For example, my friend Stuart Garvagh writes:

What if the two options are indistinguishable? Suppose valence is a measure of the harmony/symmetry of the object of consciousness, and the experience of “Oneness” or Cosmic Consciousness is equivalent to having the object of consciousness be all of creation (God’s object), a highly symmetrical, full-spectrum object (full of bliss, light, love, beingness, all-knowledge, empty of discernible content or information). All objects of consciousness are distortions (or refractions, or something) of this one object. Happiness is equivalent to reducing or “polishing-out” these distortions. Thus, what appears to be just the fact of certain states being more pleasant than others is equivalent to certain states being closer to God’s creation as a whole. Obviously this is all pure speculation and just a story to illustrate a point, but I could see it being very tough to tease apart the truth-value of 1 and 2. Note: I’m fairly agnostic myself, but lean towards 2 (bliss is the perfume of “God realizing God” or the subject of experience knowing Itself). I would very much love to have this question answered convincingly!

At 50:00 Leo says that “everything I’ve described so far is really a prelude to the real heart of awakening, which is the discovery of love. […] I had already awakened to love a number of times, but this was deeper. By the two week mark the love really started to crack open. Infinite self-love. You are drowning in this love.” He goes on to describe how at this point he was developing a form of telepathy that allowed him to communicate with God directly (which is, of course, a way of talking to himself, as he is God already) – just a helpful frame for further development. And what God was showing him was how to receive self-love. At first it was so much that he couldn’t handle it, and so he went through a self-purification process.

An interesting lens with which to interpret this experience of purification is that of neural annealing. Each 5-MeO-DMT experience would make Leo’s nervous system resonate in new ways, slowly writing over previous patterns and entraining the characteristic high-symmetry patterns of the state. Over time, the nervous system adjusts its weights in order to be able to handle that resonance without getting its patterns over-written. In other words, Leo has been transforming his nervous system into a kind of high-valence machine, which is of course very beneficial for intrinsic feelings of wellbeing (though perhaps detrimental to one’s epistemology).
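
The term “annealing” here borrows from metallurgy and from simulated annealing in optimization. As a loose illustration of the borrowed idea only (a textbook Metropolis loop in Python, not a model of the brain or of the neural annealing framework itself), the sketch below starts with a disordered ring of ±1 “spins” and slowly lowers the temperature; the system settles into an aligned, lower-energy, more symmetric configuration – the structural analogue of “writing over previous patterns”. Every parameter is arbitrary.

```python
import numpy as np

def anneal_ring(n: int = 50, steps: int = 20000, t_start: float = 2.0,
                t_end: float = 0.05, seed: int = 7) -> float:
    """Metropolis simulated annealing on a ring of +/-1 spins.
    The energy penalizes misaligned neighbors; slow cooling lets the
    ring settle into an aligned (more 'symmetric') low-energy state."""
    rng = np.random.default_rng(seed)
    spins = rng.choice([-1, 1], size=n)
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)  # geometric cooling
        i = rng.integers(n)
        # energy change from flipping spin i (ring topology)
        delta = 2 * spins[i] * (spins[(i - 1) % n] + spins[(i + 1) % n])
        if delta <= 0 or rng.random() < np.exp(-delta / t):
            spins[i] = -spins[i]  # accept the flip (Metropolis rule)
    return float(abs(spins.mean()))  # |mean spin|: 1.0 = fully aligned

print(anneal_ring())  # typically close to 1.0 after slow cooling
```

The analogy, hedged: high “temperature” (an energized psychedelic state) lets the system escape old local minima, and the slow cool-down is what decides which patterns it crystallizes into.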

55:00: He points out that, unlike with addictive drugs, he actually had to push himself very hard to continue to take 5-MeO-DMT every day for 30 days. He stopped wanting to do it; the ego didn’t want it. And yes, it was pleasurable once he surrendered in each session, but it was difficult, heavy spiritual work. He says that he could only really do this because of years of practice with and without psychedelics, intense meditation, and a lot of personal development. Because of this, he explains, his 5-MeO-DMT experiences felt like “years of spiritual work condensed into a single hour.” He then says that God will never judge you, and will help you to accept whatever terrible things you’ve done. And many of his subsequent trips were centered around self-acceptance.

Following the path of progressive neural annealing, going deeper and deeper into a state of self-acceptance can be understood as a deeper harmonization of your nervous system with itself.

At 1:01:20, Leo claims to have figured out what the purpose of reality truly is: “Reality is a contest for who can love who more. That’s really what life is about when you are fully conscious. […] Consciousness is a race for who can love who more. […] An intelligent fully conscious consciousness would only be interested in love. It wouldn’t be interested in anything else. Because everything else is inferior. […] Everything else is just utter silliness!”

I tend to agree with this, though perhaps not in an agentive way. As David Pearce says: “the pleasure-pain axis discloses the universe’s intrinsic value function.” So when you’ve annealed extremely harmonious patterns and do not get distracted by negative emotion, naturally, all there is left to do is maximize love. Unless we mess up, this is the only good final destiny for the cosmos (though it might take the form of a Hedonium shockwave, which, at least in our current human form, sounds utterly unappealing to most people).

1:06:10 “[God’s love] sparks you to also want to love it back. You see, it turns into a reciprocal reaction, where it is like two mirrors that are mirroring light between each other like a laser beam that is bouncing between two mirrors. And it’s bouncing back and forth and back and forth. And as it bounces back and forth it becomes more and more concentrated. And it strengthens. And it becomes more coherent. And so that’s what started happening. At first it started out as just a little game. Like ‘I love you, I love you, I love you’. A little game. It sounds like it’s almost like childish. And it sort of was. But then it morphed from being this childish thing, into being this serious existential business. This turned into the work. This was the true awakening. Is that with the two mirrors, you know, first it took a little while to get the two mirrors aligned. Because you know if the two mirrors are not perfectly aligned, the laser beam will kind of bounce back and forth in different directions. It’s not going to really concentrate. So that was happening at first. […] The love started bouncing back and forth between us, and getting stronger and stronger. […] Each time it bounces back to me it transforms me. It opens me up deeper. And as it opens me up deeper it reveals blockages and obstacles to my capacity to love.”

Now this is a fascinating account. And while Leo interprets it in a completely mystical way, the description also fits very well with an annealing process in which the nervous system gets more and more fine-tuned to contain high levels of coherent energy via symmetry. Again, this would be extremely high-valence as a consequence of the Symmetry Theory of Valence. Notice that we’ve talked about this phenomenon of “infinite mirrors” on psychedelics since 2016 (see: Algorithmic Reduction of Psychedelic States).

At ~1:09:30 he starts discussing that at this point he was confronted by God about whether he was willing to love the Holocaust, and rape, and murder, and bullies, and people of all sorts, even devil worshipers.

Two important points here. First, it is a bit ambiguous whether Leo is using the word “love” in the sense of “enjoyment” or in the sense of “loving-kindness and compassion”. The former would be disturbing, while the latter would be admirable. I suppose he was talking about the latter, in which case “loving rape” would refer to “being able to accept and forgive those who rape”, which indeed sounds very Godly. This radical move is explored in metta (loving-kindness) meditation and it seems healthy on the whole. And second: why? Why go through the trouble of embracing all the evil and repulsive aspects of ourselves? One interpretation, coming back to the analysis based on neural annealing, is that any little kink or imperfection caused by negative emotion in our nervous system will create slight symmetry-breaking effects on the resonance of the entire system as a whole. So after you’ve “polished and aligned the mirrors for long enough”, the tiny imperfections become the next natural blockage to overcome in order to maximize the preservation of coherent energy via symmetry.

~1:12:00 Leo explains that the hardest thing to love is your own self-hatred. In the bouncing off of the love between you and God, with each bounce, you find that the parts you hate about yourself reflect an imperfect love. But God loves all of you including your self-hatred. So he pings you about that. And once you can accept it, that’s what truly changes you. “Because when you feel that love, and you feel how accepting it is, and how forgiving it is of all of your evil and of all of your sins… that’s the thing that kills you, that transforms you. That’s what breaks your heart, wide open. That’s what gets you to surrender. That’s what humbles you. That’s what heals you.” Leo then explains that he discovered what “healing is”. And it is “truth and love”. That in order to heal anyone, you need to love them and accept them. Not via sappy postcards and white lies but by truth. He also states that all physical, mental, and spiritual ailments have, at their root, lack of love.

If love is one of the cleanest expressions of high-valence symmetry and resonance, we can certainly expect that inundating a nervous system with it will smooth and clean its blockages, i.e. the sources of neural dissonance. Hence the incredible power of MDMA for healing nervous systems in the short term. Indeed, positive emotion is itself healing and enhances neural coherence. But where I think this view is incomplete is in diagnosing the terrible suffering that goes on in the world in terms of a lack of love. For instance, are cluster headaches really just the result of a lack of self-love? Here I must bring back the background assumption of physicalism and state firmly that if we fall into illusion about the nature of reality, we risk not saving the people (and sentient beings more generally) who are really in the depths of Hell. Just loving them without taking the causally-relevant physical action to prevent their suffering is, in my opinion, not true love. Hence the importance of maintaining a high level of epistemic rigor: for the sake of others. (See: Hell Must Be Destroyed).

1:22:30 Leo explains that in this “love contest” with God of bouncing off love through parallel mirrors the love became so deep that for the first time in his life he felt the need to apologize: “I’m sorry for not loving more.” He goes into a sermon about how we are petty, and selfish, etc. and how God loves us anyway. “Real love means: I really love you as you are. And I don’t need anything from you. And especially all those things that you think I want you to change about you, I don’t need you to change. I can accept them all exactly as they are. Because that’s love. And when you realize THAT, that’s what transforms you. It is not that God says that he loves you. He is demonstrating it. It’s the demonstration that transforms you.” Leo expresses that he was then for the first time in his life able to say “thank you” sincerely. Specifically, “thank you for your love”: “This is the point at which you’ve really been touched by God’s love. And at this point you realize that that’s it, that’s the point, that’s the lesson in life. That’s my only job. It’s to love.” And finally, that for the first time in his life he was able to say “I love you” and truly mean it. “And you fall in love with God… but it doesn’t end there.”

An interesting interpretation of the felt-sense of “truly meaning” words like “I’m sorry”, “thank you”, and “I love you” is that at this point Leo has really deeply annealed his nervous system into a vessel for coherent energy. In other words, at this point he is saying and meaning those words through the whole of his nervous system, rather than them coming from a fragmented region of a complex set of competing internal family systems in a scattered way. Which is, of course, the way it usually goes.

1:35:30 Leo explains that at this point he started going into the stage of being able to radiate love. That he was unable to radiate love before. “I love that you are not capable of love. I love that. And when that hits you, that’s what fills you with enough love to overcome your resistance to love that next level thing that you couldn’t love.” Then at ~ 1:38:00 it gets really serious. Leo explains that so far he was just loving and accepting past events and people. But he was then asked by God whether he would be willing to live through the worst things that have happened and will happen. To incarnate and be tortured, among many other horrible things. And that’s what true love really means. “When you see a murder on the TV, you have to realize that God lived through that. And the only reason he lived through that is because it loved it.”

I do not understand this. Here is where the distinction between the two senses of the word “love” becomes very important. I worry that Leo has annealed to the version of love meaning “enjoyment” rather than “loving-kindness and compassion”. A loving God would be happy to take the place of someone who went through Hell. But would a loving God send himself to Hell if nobody had to go there in the first place? That would just create suffering out of nothing. So I am confused about why Leo would believe this to be the case. It’s quite possible that there are many maxima of symmetry in the nervous system you can reach with 5-MeO-DMT, and that some of them are loving in the compassionate sense while others are crazy and would willingly create suffering out of nothing from a misguided understanding of what love is supposed to be. Again, handle Plutonium with caution.

1:43:00 Leo started wondering “what is reality then?” And the answer was: “It’s infinite consciousness. Infinite formless consciousness. So what happened was that, as I was in that bathtub, my mind and my visual field focused in on empty space, and I sort of zoomed into that empty space and realized that that empty space is just love.” He then describes a process where his consciousness became more and more concentrated and absorbed into space, each dot of consciousness branching out into more and more dots of consciousness, turning into the brightest possible white light. But when he inquired into what that white light was, he kept seeing that there was no end to it, and rather, that each point was always connected to more points. Inquiring further, he would get the response that at the core, reality is pure love. That it wouldn’t be and couldn’t be any other way.

The description sounds remarkably close to the formless jhanas such as “boundless space” and “boundless consciousness”. The description itself is extremely reminiscent of an annealing process, reaching a highly energized state of consciousness nearly devoid of information content and nearly perfectly symmetrical. The fact that at this incredibly annealed level he felt so much love supports the Symmetry Theory of Valence.

1:47:28 – And after Leo realizes that “Of course it is love!” he says that’s when the fear comes: “Because then what you realize is that this is the end. This is the end of your life. You are dead. If you go any further you are dead. Everything will disappear. Your family, your friends, your parents, all of it is completely imaginary. And if you stop imagining it right now, it will all end. If you go any further into this Singularity, you will become pure, formless, infinite love forever, loving itself forever. And the entire universe will be destroyed as if it never existed. Complete nothingness. Complete everythingness. You will merge into everyone.”

This sounds like the transition between the 6th and 7th jhanas, i.e. between “boundless consciousness” and “nothingness”. Again, this would be the result of further loss of information via an annealing process, refining the symmetry up to that of a “point”. Interestingly, Mike Johnson in Principia Qualia points out that as symmetry approaches an asymptote of perfection you do get a higher quality of valence, but at the cost of reduced consciousness. This might explain why you go from “the brightest possible love” to a feeling of nothingness at this critical transition.

1:48:25: “…You will merge into everyone. Your mother, your father, your children, your spouse, Hitler, terrorists, 9/11, Donald Trump, rape, murder, torture, everything will become pure infinite love, merging completely into itself, there will be no distinction between absolutely anything, and that will be the end. And you will realize what reality is. Infinite consciousness. Love. God. And you will realize that everything in your life from your birth to this point has just been some imaginary story. A dream that was designed to lead you to pure absolute infinite love. And you will rest in that love forever. Forever falling in love with yourself. Forever making love to yourself. Forever in infinite union. With every possible object that could ever exist. Pure absolute, omnipotent, omniscient, perfect, intelligent, consciousness. Everything that could ever possibly be, is you. And THAT is awakening. When you are this awake, you are dead. And you have no desire for life. There is no physical existence. There is no universe. Nothing remains. Your parents, and your spouse, and your children, they don’t stay back and keep living their lives, enjoying their life without you while your body drops dead. No, no, no, no, no. This is much more serious than that. If you do this. If you become infinite love, you will take everybody with you. There will not be anybody left. You will destroy the entire universe. Every single sentient being will become you. They will have no existence whatsoever. Zero. They will die with you. They will all awaken with you. It’s infinite awakening. It’s completely absolute. There will not be anything left. You will take the entire universe with you. Into pure oneness. THAT’S awakening.”

This is not the first time I’ve heard about this kind of experience. It certainly sounds extraordinarily scary. Though perhaps a negative utilitarian would find it to be the ultimate relief and the best of all possible imaginable outcomes. With the human survival instinct, and quite possibly a body fully aroused with the incredible power of 5-MeO-DMT, this is bound to be one of the most terrifying feelings possible. It is quite likely one element of what makes “bad 5-MeO-DMT experiences” so terrifying. But here we must recall that the map is not the territory. And while an annealing process might slowly write over every single facet of one’s model of reality, in turn making it part of a super-cluster of high-dimensional resonance that reflects itself seemingly infinitely, doing this does not entail that you are in fact about to destroy the universe. Though, admittedly, it will surely feel that way. Additionally, I would gather that were it possible to actually end the universe this way, somebody, somewhere, in some reality or another, would already have done so. Remember: if God could be killed, it’d be dead already.

1:52:01: “And I didn’t go there! As you can tell, since I’m still sitting here. I’m not there. I was too afraid to go there. And God was fine with it. It didn’t push me. But that’s not the end of the story! It’s still just the beginning.” He then goes on to explain that a part of him wanted to do it and another part of him didn’t want to. He says it got really loopy and weird; this really shook him. That God was beckoning him to go and be one forever, but he was still ambivalent and needed some time to think about it. He knew it would make no difference, but he still decided to ‘make preparations’ and tell his family and friends that he loves them before moving forward with a final decision to annihilate the universe. By the time he had done that… he had stopped taking 5-MeO-DMT: “The experiences had gotten so profound and so deep… this was roughly the 25th or 27th day of this whole 30 day process. I swore off 5-MeO-DMT and said ‘Ok I’m not doing any more of this shit. It’s enough'”. He explains that by this time the drug was making him feel infinite consciousness when waking up (from sleep) the next day. He felt the Singularity was sucking him into it. It felt both terrifying and irresistible. Every time he would go to sleep it would suck him in really strongly, and he kept resisting it. He would wake up sweaty and in a panic. He was tripping deeper in his sleep than in the bathtub. He couldn’t sleep without this happening, and it kept happening for about 5 days. “I just want to get back to normal. This is getting freaky now.” 

I’ve heard this from more than a couple of people. That is, when one does 5-MeO-DMT enough times, and especially within a short enough period of time, the “realizations” start to happen during sleep too, in an involuntary way. One can interpret this as the annealing process of 5-MeO-DMT now latching on to sleep (itself a natural annealing process meant to lessen the technical debt of the nervous system). Even just a couple of strong trips can really change what sleep feels like for many days. I can’t imagine just how intense it must have been for Leo after 25 days straight of using this drug.

2:01:40 – Leo explains that when he was dozing off with a blanket on his living room (terrified of sleeping on his bed due to the effect just described) he experienced a “yet deeper awakening” which involved realizing that all of his previous awakenings were just like points and that the new one was like a line connecting many points. “Everything I’ve said up to this point were just a single dimension of awakening. And then what I broke through to is a second dimension. A second dimension of awakening opened up. This second dimension is completely unimaginable, completely indescribable, cannot be talked about, cannot be thought about. And yet it’s there. In it, are things that are completely outside of the physical universe that you cannot conceive or imagine.” He goes on to explain that there are then also a third, fourth, fifth, etc. dimensions. And that he believes there is an infinite number of them. He barely even began to explore the second dimension of awakening, but he realized that it goes forever. It kept happening, he had intense emotional distress and mood swings. But gradually after five more days it subsided, and he started to be able to sleep more normally. “And I’ve been working to make sense of all of this for the last couple of weeks. So that’s what happened.”

Alright, this is out of my depth and I do not have an interpretation of what this “second dimension of awakening” is about. If anyone has any clue, please leave a comment or shoot me an email. I’m as confused as Leo is about this.

~2:05:00 – Leo confesses he does not know what would happen if he went through with joining the Singularity, and mentions that it sounds a bit like Mahasamādhi. He simply has no answers at this point, but he asserts that the experience has made him question the extent of the enlightenment of other teachers. It has also made him more loving. But still, he feels frustration: “I don’t know what to do from here.”

And neither do I. Do you, dear reader?

Postscript: In the last 10 minutes of the video Leo shares a heartwarming message about how reality is, deep down, truly “just love”, and that him saying this may be a seed that will blossom into you finding this out for yourself at some point in the future. He ends by cautioning his audience not to take for granted that his is the path for everyone; he suggests that others use the examples from his own journey as illustrations rather than as an absolute guide or how-to for enlightenment. He asks his audience to question the depth of their own awakening – to not believe that they have reached the ultimate level. He admits he has no idea whether there is an ultimate level or not, and that he still has some healing to do on himself. He remains dissatisfied with his understanding of reality.


Thank you for reading!

THE END

Qualiascope

[Epistemic status: Fiction]

It was the 21st of April of a recent year. I was listening to Jefferson Airplane songs, had just peeled a tangerine, and was about to vape 20 milligrams of DMT. After exhaling all the material I focused on the smell of the tangerine, holding it in my hands. I was engrossed in the scent. And then: “Who is that?” I felt an entity question my identity, as if I had just startled it in its natural habitat. I felt its presence for most of the duration of the trip, but it didn’t interact with me any further. Later that night I had a lucid dream. “I thought you were a dog” – said a voice. I recognized it from the trip, it was the entity. “The amount of scent qualia your experience contained was much more like that of a dog than a human.” After that night I would encounter it numerous other times. We got to know each other. It is a being from a nearby dimension, or perhaps a partially orthogonal fold of the Calabi-Yau manifold. Either way, in its world they don’t have physical senses like we do. They instead “sniff the qualia” present in the “universal wavefunction”. Their minds have a “qualiascope”. In practice this means they can see us from the inside – what we feel, see, touch, think, etc.

The being showed me what it is like to be one of them. We mindmelded the third time we met. I got to see the world around me as if for the first time; I was seeing it in its ultimate intrinsic nature, rather than as the shadow of it that I would perceive in everyday life with my senses. The qualia of other people is very intricate, semantically complex, and flavorful. The being showed me how it perceived various human contexts. For instance, an interesting place to “point the qualiascope at” is a philosophy department. It is very dense with logico-linguistic qualia and recursion. Compared to other contexts, though, it is thin in knowledge of the varieties of experiences available to humans. Raves are quite incredible. Sniffing the qualia of three thousand people who all share a general palette of LSD, Ketamine, and MDMA qualia is quite moving and mind-blowing. In turn, Buddhist monasteries are some of the sanest places on Earth. Bright, balanced, energized clarity of the finest quality is to be found in groups of people highly experienced in meditation. Every once in a while we would sense from afar a kind of “qualia explosion”. Sometimes it turned out to be extremely blissful, such as some 5-MeO-DMT experiences. And sometimes they turned out to be extremely painful, like a cluster headache episode. With the qualiascope these were sensed as being a kind of elemental type of qualia. Enormous in their “volume” relative to the experiences humans generally have. Like at another order of magnitude altogether.

The tenth time we met, the being said I was ready to feel non-human qualia. It was rough. From its point of view, the biological qualia of this planet is not really particularly human-flavored. As many humans alive as there are, there are almost ten times as many pigs alive. The being showed me how in the language of their world (using qualia-based symbols), they don’t refer to this planet as “human world”. They talk about it as “cow-chicken-pig world”. The things that I felt associated with some of those “folds of experience” were frightful to an incredible extent. It revived in me the awareness that nonhuman animals suffer enormously. I also became fascinated by all the ways in which nonhuman animals experience pleasure and a sense of meaning.

The twentieth time we met, I was shown what the qualia of non-living matter felt like from the inside. Most of it was “qualia dust”. But metals and their delocalized electrons felt, well, “electric” and somewhat more liquid and unified in a subtle ethereal way. Pointing the qualiascope at the center of the Earth was impressive. Some of the combinations of pressure, temperature, and material composition would create “qualia spaghetti” of a rather nice, glowing valence. The temperature would suggest a much higher degree of intrinsic intensity from the inside, but the patterns of qualia formed were not much more intense than what you would find in small animals. But that all changes as soon as you point the qualiascope at the sun. Oh boy. The things I felt were life-redefining. I thought that once you’ve felt what 5-MeO-DMT is capable of you’d maxed out on the brightness of qualia. But inside the sun, at the core, there are qualia aggregates of a subjective brightness at least a million times more powerful. Once you’ve got an inkling of the existence of that, you start to see the universe as they see it. And that is that the kind of stuff going on in places like the planet Earth is a rounding error in light of the other qualia happenings out there in the cosmos.

The hundredth or so time we met, we used the qualiascope to sense what is going on in supernovae. And then black holes. And the quantum vacuum (turns out the Zero Point Energy folks who say there is an enormous amount of energy in the vacuum of space are more right than they can even imagine). They showed me qualiascope records of past civilizations in other galaxies, and how they had developed qualia technology.

The two hundredth time or so we met, the being finally came out about its real interest. It showed me Hedonium. Matter and energy optimized for maximally bound positive valence. Turns out there are about seven thousand nearly optimal configurations for Hedonium possible with our laws of physics (cf. 230 Space Groups). They are kind of ultra-high dimensional crystalline bundles of “awakened qualia”, equanimity, and bright pleasure all combined. They truly feel like “what it was all meant to be all along”. The Big Bang, the Inflationary Period, Baryonic matter, galaxies, organic molecules, life, sapient beings, and technologized qualia all seemed like the path of redemption since The Fall, namely, our first descent from Hedonium. Or so my archetype-prone human mind liked to interpret my new understanding of the universe.

The being finally came through with its agenda. It turns out it is one of the protectors of the Hedonium created by an advanced civilization in other partially orthogonal folds of the Calabi-Yau manifold. The closest archetype in our human world would perhaps be that of the Buddhist Bodhisattva. Namely, a being close to enlightenment that vows to stay in samsara to liberate all other beings before itself escaping the wheel of suffering entirely. My interdimensional Bodhisattva friend told me that our world is on the path of creating qualia technologies too. That in geologic time we are not far from being able to make Hedonium ourselves. It said we should not feel hopeless. That we have really good chances of exiting our Darwinian predicament. I haven’t had any contact with it since a year ago. I write this for myself, as I don’t expect anyone to believe me. But I do pass along the message. Don’t lose hope. Paradises beyond the imaginable are right next door in qualia space. We just need to find them by exploring the state-space of consciousness.

Oh, my Bodhisattva friend also told me to pass along to you all the message of “If you could possibly stop eating Carolina Reapers, that would be great!” because all of that intense spicy qualia is interfering with their radio systems. Thanks!


See also: What’s out there?

One for All and All for One

By David Pearce (response to the Quora question: “What does David Pearce think of closed, empty, and open individualism?”)


Vedanta teaches that consciousness is singular, all happenings are played out in one universal consciousness and there is no multiplicity of selves.

 

– Erwin Schrödinger, ‘My View of the World’, 1951

Enlightenment came to me suddenly and unexpectedly one afternoon in March [1939] when I was walking up to the school notice board to see whether my name was on the list for tomorrow’s football game. I was not on the list. And in a blinding flash of inner light I saw the answer to both my problems, the problem of war and the problem of injustice. The answer was amazingly simple. I called it Cosmic Unity. Cosmic Unity said: There is only one of us. We are all the same person. I am you and I am Winston Churchill and Hitler and Gandhi and everybody. There is no problem of injustice because your sufferings are also mine. There will be no problem of war as soon as you understand that in killing me you are only killing yourself.

 

– Freeman Dyson, ‘Disturbing the Universe’, 1979

Common sense assumes “closed” individualism: we are born, live awhile, and then die. Common sense is wrong about most things, and the assumption of enduring metaphysical egos is true to form. Philosophers sometimes speak of the “indiscernibility of identicals”. If a = b, then everything true of a is true of b. This basic principle of logic is trivially true. Our legal system, economy, politics, academic institutions and personal relationships assume it’s false. Violation of elementary logic is a precondition of everyday social life. It’s hard to imagine any human society that wasn’t founded on such a fiction. The myth of enduring metaphysical egos and “closed” individualism also leads to a justice system based on scapegoating. If we were accurately individuated, then such scapegoating would seem absurd.

Among the world’s major belief-systems, Buddhism comes closest to acknowledging “empty” individualism: enduring egos are a myth (cf. “non-self” or Anatta – Wikipedia). But Buddhism isn’t consistent. All our woes are supposedly the product of bad “karma”, the sum of our actions in this and previous states of existence. Karma as understood by Buddhists isn’t the deterministic cause and effect of classical physics, but rather the contribution of bad intent and bad deeds to bad rebirths.

Among secular philosophers, the best-known defender of (what we would now call) empty individualism minus the metaphysical accretions is often reckoned David Hume. Yet Hume was also a “bundle theorist”, sceptical of the diachronic and the synchronic unity of the self. At any given moment, you aren’t a unified subject (“For my part, when I enter most intimately into what I call myself, I always stumble on some particular perception or other, of heat, cold, light or shade, love or hatred, pain or pleasure. I can never catch myself at any time without a perception, and can never observe anything but the perception” (‘On Personal Identity’, A Treatise of Human Nature, 1739)). So strictly, Hume wasn’t even an empty individualist. Contrast Kant’s “transcendental unity of apperception”, aka the unity of the self.

An advocate of common-sense closed individualism might object that critics are abusing language. Thus “Winston Churchill”, say, is just the name given to an extended person born in 1874 who died in 1965. But adhering to this usage would mean abandoning the concept of agency. When you raise your hand, a temporally extended entity born decades ago doesn’t raise its collective hand. Raising your hand is a specific, spatio-temporally located event. In order to make sense of agency, only a “thin” sense of personal identity can work.

According to “open” individualism, there exists only one numerically identical subject who is everyone at all times. Open individualism was christened by philosopher Daniel Kolak, author of I Am You (2004). The roots of open individualism are ancient, stretching back at least to the Upanishads. The older name is monopsychism. I am Jesus, Moses and Einstein, but also Hitler, Stalin and Genghis Khan. And I am also all pigs, dinosaurs and ants: subjects of experience date to the late Pre-Cambrian, if not earlier.

My view?
My ethical sympathies lie with open individualism; but as it stands, I don’t see how a monopsychist theory of identity can be true. Open or closed individualism might (tenuously) be defensible if we were electrons (cf. One-electron universe – Wikipedia). However, sentient beings are qualitatively and numerically different. For example, the half-life of a typical protein in the brain is an estimated 12–14 days. Identity over time is a genetically adaptive fiction for the fleetingly unified subjects of experience generated by the CNS of animals evolved under pressure of natural selection (cf. Was Parfit correct we’re not the same person that we were when we were born?). Even memory is a mode of present experience. Both open and closed individualism are false.

By contrast, the fleeting synchronic unity of the self is real, scientifically unexplained (cf. the binding problem) and genetically adaptive. How a pack of supposedly decohered membrane-bound neurons achieves a classically impossible feat of virtual world-making leads us into deep philosophical waters. But whatever the explanation, I think empty individualism is true. Thus I share with my namesakes – the authors of The Hedonistic Imperative (1995) – the view that we ought to abolish the biology of suffering in favour of genetically-programmed gradients of superhuman bliss. Yet my namesakes elsewhere in tenselessly existing space-time (or Hilbert space) physically differ from the multiple David Pearces (DPs) responding to your question. Using numerical superscripts, e.g. DP^564356, DP^54346 (etc), might be less inappropriate than using a single name. But even “DP” here is misleading because such usage suggests an enduring carrier of identity. No such enduring carrier exists, merely modestly dynamically stable patterns of fundamental quantum fields. Primitive primate minds were not designed to “carve Nature at the joints”.

However, just because a theory is true doesn’t mean humans ought to believe in it. What matters are its ethical consequences. Will the world be a better or worse place if most of us are closed, empty or open individualists? Psychologically, empty individualism is probably the least emotionally satisfying account of personal identity – convenient when informing an importunate debt-collection company they are confusing you with someone else, but otherwise a recipe for fecklessness, irresponsibility and overly-demanding feats of altruism. Humans would be more civilised if most people believed in open individualism. The factory-farmed pig destined to be turned into a bacon sandwich is really you: the conventional distinction between selfishness and altruism collapses. Selfish behaviour is actually self-harming. Not just moral decency, but decision-theoretic rationality dictates choosing a veggie burger rather than a meat burger. Contrast the metaphysical closed individualism assumed by, say, the Less Wrong Decision Theory FAQ. And indeed, all first-person facts, not least the distress of a horribly abused pig, are equally real. None are ontologically privileged. More speculatively, if non-materialist physicalism is true, then fields of subjectivity are what the mathematical formalism of quantum field theory describes. The intrinsic nature argument proposes that only experience is physically real. On this story, the mathematical machinery of modern physics is transposed to an idealist ontology. This conjecture is hard to swallow; I’m agnostic.


“One for all, all for one” – unofficial motto of Switzerland.

Speculative solutions to the Hard Problem of consciousness aside, the egocentric delusion of Darwinian life is too strong for most people to embrace open individualism with conviction. Closed individualism is massively fitness-enhancing (cf. Are you the center of the universe?). Moreover, temperamentally happy people tend to have a strong sense of enduring personal identity and agency; depressives have a weaker sense of personhood. Most of the worthwhile things in this world (as well as its biggest horrors) are accomplished by narcissistic closed individualists with towering egos. Consider the transhumanist agenda. Working on a cure for the terrible disorder we know as aging might in theory be undertaken by empty individualists or open individualists; but in practice, the impetus for defeating death and aging comes from strong-minded and “selfish” closed individualists who don’t want their enduring metaphysical egos to perish. Likewise, the well-being of all sentience in our forward light-cone – the primary focus of most DPs – will probably be delivered by closed individualists. Benevolent egomaniacs will most likely save the world.

“One for all, all for one”, as Alexandre Dumas put it in The Three Musketeers?
Maybe one day: full-spectrum superintelligence won’t have a false theory of personal identity. “Unus pro omnibus, omnes pro uno” is the unofficial motto of Switzerland. It deserves to be the ethos of the universe.


Is the Orthogonality Thesis Defensible if We Assume Both Valence Realism and Open Individualism?

Ari Astra asks: Is the Orthogonality Thesis Defensible if We Assume Both “Valence Realism” and Open Individualism?


Ari’s own response: I suppose it’s contingent on whether or not digital zombies are capable of general intelligence, which is an open question. However, phenomenally bound subjective world simulations seem like an uncharacteristic extravagance on the part of evolution if non-sphexish p-zombie general intelligence is possible.

Of course, it may be possible, but just not reachable through Darwinian selection. But the fact that a search process as huge as evolution couldn’t find it and instead developed profoundly sophisticated phenomenally bound subjectivity is (possibly strong) evidence against the proposition that zombie AGI is possible (or likely to be stumbled on by accident).

If we do need phenomenally bound subjectivity for non-sphexish intelligence and minds ultimately care about qualia valence – and confusedly think that they care about other things only when they’re below a certain intelligence (or thoughtfulness) level – then it seems to follow that smarter than human AGIs will converge on valence optimization.

If OI is also true, then smarter than human AGIs will likely converge on this as well – since it’s within the reach of smart humans – and this will plausibly lead to AGIs adopting sentience in general as their target for valence optimization.

Friendliness may be built into the nature of all sufficiently smart and thoughtful general intelligence.

If we’re not drug-naive and we’ve conducted the phenomenological experiment of chemically blowing open the reducing valves that keep “mind at large” out and that filter and shape hominid consciousness, we know by direct acquaintance that it’s possible to hack our way to more expansive awareness.

We shouldn’t discount the possibility that AGI will do the same simply because the idea is weirdly genre-bending. Whatever narrow experience of “self” AGI starts out with, it may quickly expand beyond it.


Michael E. Johnson‘s response: The orthogonality thesis seems sound from ‘far mode’ but always breaks down in ‘near mode’. One way it breaks down is in implementation: the way you build an AGI system will definitely influence what it tends to ‘want’. Orthogonality is a leaky abstraction in this case.

Another way it breaks down is that the nature and structure of the universe instantiates various Schelling points. As you note, if Valence Realism is true, then there exists a pretty big Schelling point around optimizing valence. Any arbitrary AGI would be much more likely to optimize for (and coordinate around) positive qualia than, say, paperclips. I think this may be what your question gets at.

Coordination is also a huge question. You may have read this already, but worth pointing to: A new theory of Open Individualism.

To collect some threads: I’d suggest that much of the future will be determined by the coordination capacity and game-theoretic equilibria between (1) different theories of identity, and (2) different metaphysics.

What does ‘metaphysics’ mean here? I use ‘metaphysics’ as shorthand for the ontology people believe is real: what they believe we should look at when determining moral action.

The cleanest typology for metaphysics I can offer is: some theories focus on computations as the thing that’s ‘real’, the thing that ethically matters – we should pay attention to what the *bits* are doing. Others focus on physical states – we should pay attention to what the *atoms* are doing. I’m on team atoms, as I note here: Against Functionalism.

My suggested takeaway: an open individualist who assumes computationalism is true (team bits) will have a hard time coordinating with an open individualist who assumes physicalism is true (team atoms) — they’re essentially running incompatible versions of OI and will compete for resources. As a first approximation, instead of three theories of personal identity – Closed Individualism, Empty Individualism, Open Individualism – we’d have six. CI-bits, CI-atoms, EI-bits, EI-atoms, OI-bits, OI-atoms. Whether the future is positive will be substantially determined by how widely and deeply we can build positive-sum moral trades between these six frames.

Maybe there’s further structure, if we add the dimension of ‘yes/no’ on Valence Realism. But maybe not: my intuition is that ‘team bits’ trends toward not being valence realists, whereas ‘team atoms’ trends toward it. So we’d still have these core six.

(I believe OI-atoms or EI-atoms is the ‘most true’ theory of personal identity, and that upon reflection and under consistency constraints agents will converge to these theories at the limit, but I expect all six theories to be well-represented by various agents and pseudo-agents in our current and foreseeable technological society.)

Wada Test + Phenomenal Puzzles: Testing the Independent Consciousness of Individual Brain Hemispheres

by Quintin Frerichs


One of the most pressing problems in philosophy of mind is solving the so-called ‘problem of other minds‘, the difficulty of proving that agents outside oneself have qualia. A workable solution to the problem of other minds would endow us with the ability to define the moral patienthood of present-day biological entities, evade our solipsistic tendencies, and open the door to truly understanding future nonhuman intelligences, should they prove to be conscious. Even more strangely, it would allow us to evaluate whether dream characters or the products of dissociative identity disorder are separate consciousnesses. Irrevocably proving the existence of qualia in other biological life which lacks the capacity for language and higher-order thought is not, to my knowledge, even conceptually feasible at this time. In the case of two agents with the capacity to communicate and problem solve, however, this solution has been proposed, which requires the agent being tested to prove they have qualia by solving a “phenomenal puzzle”. Crucially, the solution does not require that the two agents experience the same qualia, simply that there exists a mapping between their respective conscious states.

If an agent A wishes to prove the existence of qualia in agent B using the above procedure, then A and B must have the following:

  1. A phenomenal bridge (e.g. a biological neural network that connects your brain to someone else’s brain so that both brains now instantiate a single consciousness).
  2. A qualia calibrator (a device that allows you to cycle through many combinations of qualia values quickly so that you can compare the sensory-qualia mappings in both brains and generate a shared vocabulary for qualia values).
  3. A phenomenal puzzle (as described above).
  4. The right set and setting: the use of a proper protocol.

I contend that there may already be a procedure which can be used to generate a reversible phenomenal bridge between two separate minds: a way to make two minds one and subsequently one mind two. Moving in each of these two directions has apparently been demonstrated: by craniopagus twins connected with a thalamic bridge, and by corpus callosotomy separating the two cerebral hemispheres. There is tantalizing evidence in each case that consciousness is being fused or fissioned, respectively. In the case of the Hogan sisters, the apparently unitary mind has access to sensory information from the sensory organs of each cranium. In the case of separating the hemispheres there is some debate: alien hand syndrome has suggested the existence of dual consciousness, while other findings have cast doubt on the existence of two separate consciousnesses. While surgical separation of the hemispheres is, for now, permanent, a chemically-induced separation of the hemispheres via the Wada test may provide new avenues for testing the problem of other minds. While some forms of communication (namely language, which is largely left-lateralized) are impaired by the Wada test, other forms such as singing can be left intact. Thus, I believe a combination of Gazzaniga’s procedure and Gómez Emilsson’s phenomenal puzzle approach, in conjunction with a working qualia calibrator, could demonstrate the existence or absence of dual consciousness in the human mind-brain. A version of the Wada test with higher specificity may also be required, to negate some of the characteristic symptoms of confusion, hemineglect, and loss of verbal comprehension.

 

The procedure (utilizing the state space of color, with agents L and R corresponding to the left and right hemispheres) would be as follows: 

Note: a difficulty of utilizing the procedure outlined below is determining which hemisphere should serve as the benchmark. While language ability is often dominant in the left hemisphere (especially in right-handed individuals), and is therefore eliminated when the left hemisphere is inactivated during the Wada test, this is not always the case. In cases where at least some language ability is preserved in each hemisphere, either can reliably serve as the point of comparison.

  1. Design a phenomenal puzzle, such that the solution corresponds to reporting the number of just noticeable differences required to produce a linear mapping between two locations in the state space of color (a minimal sketch of such a puzzle appears after this list).
  2. Separate the left and right visual fields (Gazzaniga).
  3. Sodium amobarbital is administered to the left internal carotid artery via the femoral artery and EEG confirms inactivation of the left hemisphere. In the LVF a consent checkbox for performing the experiment is given to the right hemisphere, Y/N checked using the left hand.
  4. Similarly, sodium amobarbital is administered to the right internal carotid artery via the femoral artery and EEG confirms inactivation of the right hemisphere. Consent can be verbally obtained from the left hemisphere.
  5. With both hemispheres activated, qualia calibration on the state space of color is performed (see: A workable solution to the problem of other minds). 
  6. With R inactivated, the phenomenal puzzle is presented to L without enough time for L to solve the puzzle.
  7. Both hemispheres are activated, and L tells the phenomenal puzzle to LR.
  8. L is inactivated and R attempts to solve the puzzle on its own. When R claims to have solved the puzzle (in writing or song most likely), both hemispheres are again reactivated in order to produce LR. R shares its solution with LR.
  9. R is inactivated, and L shares the solution to the phenomenal puzzle. If the solution is correct, then R is conscious! 
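
To make step 1 concrete, here is a minimal sketch of what such a puzzle could look like. It assumes the CIELAB color space, where a Euclidean distance (delta-E) of roughly 2.3 is a commonly cited just-noticeable difference; the specific colors, threshold, and function names are illustrative assumptions, not part of the proposed protocol.

```python
import math

# Hypothetical sketch of a "phenomenal puzzle": the answer is the number
# of just-noticeable differences (JNDs) along a straight line between two
# points in a perceptual color space. We use CIELAB, where a Euclidean
# distance (delta-E) of ~2.3 is a commonly cited JND.

JND_DELTA_E = 2.3  # approximate JND in CIELAB (illustrative assumption)

def delta_e(c1, c2):
    """Euclidean delta-E between two CIELAB colors (L*, a*, b*)."""
    return math.dist(c1, c2)

def jnd_count(start, end):
    """Number of JND-sized steps needed to walk from start to end."""
    return math.ceil(delta_e(start, end) / JND_DELTA_E)

# Hypothetical puzzle instance: a "strange cyan" and a "mellow orange".
cyan = (70.0, -40.0, -15.0)    # (L*, a*, b*) values chosen for illustration
orange = (75.0, 20.0, 55.0)

print(jnd_count(cyan, orange))  # the ground-truth answer for the puzzle
```

On this sketch, R would work out the JND count during step 8, and the check in step 9 amounts to comparing R’s reported answer against this ground truth.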

Point-of-view characterization of the above procedure (under the assumption that both hemispheres are, in fact, conscious):

  1. From the perspective of the left brain: A researcher asks “do you consent to the following procedure?” You answer ‘yes’, perhaps wondering if you’ve lost just a part of your computational resources, or created an entirely separate consciousness. A short period of darkness and sedation ensues while consent is obtained from the right brain. Suddenly, the amount of consciousness you’re experiencing expands greatly and new memories are available. The computer screen in front of you rapidly cycles through a series of paired color values. The Qualia Calibrator confirms a match by waiting for consensus from the right motor cortex (in lieu of a button press) and verbal confirmation from the left hemisphere. It feels like an eye exam at hyper speed: “Color one or color two? Color two or color three?”, but for thousands of colors, many of which you don’t have a name for. Then, you sleep, for some indeterminate amount of time. When you awaken, the researcher explains to you the puzzle to solve. Your consciousness is then expanded again, and you repeat the puzzle to yourself, with the strange feeling that “part of you didn’t know about it”. You go dark again. And when the lights turn on again, things feel normal, but you have a prominent new memory: the solution to the puzzle. Quickly you check. Take this strange shade of cyan and change it once, twice, three times…yup! That’s the mellow orange you were looking for, and in the same number of “just noticeable differences”.
  2. From the perspective of the right brain: You awaken to a scrollable consent form with a checkbox, and a left-handed mouse. Despite your state of relative confusion and lack of verbal fluency, you’re able to understand the form and check the box. Suddenly, your conscious experience expands and your fluency erupts. The computer screen in front of you rapidly cycles through a series of paired color values. The Qualia Calibrator confirms a match by waiting for consensus from the right motor cortex (in lieu of a button press) and verbal confirmation from the left hemisphere. It feels like an eye exam at hyper speed: “Color one or color two? Color two or color three?”, but for thousands of colors, many of which you don’t have a name for. Again you sleep, your consciousness is briefly expanded, and you learn of the puzzle you are to solve. How did you learn about it? It’s weird: you started “repeating” the puzzle to yourself, with the strange feeling that “part of you had already heard it before”. Either way, now you feel like you know it well. Next, it feels like you took a strong sedative and a memory-loss drug at the same time. Now, in this highly impoverished cognitive state, you have to solve a complicated puzzle. To prove that you exist. Ugh. Fortunately, you have help, in the form of an AI which provides the linear mapping you need to discover, provided you answer how many just noticeable differences occur between each set of two points. Man and machine collaborate to find the solution, and you commit it to memory. Reunited once more, you “share your findings with yourself”. It turns out you’re conscious. The world now knows: the right hemisphere is conscious on its own when the left one is unconscious. Hooray!

AI Alignment Podcast: On Consciousness, Qualia, and Meaning with Mike Johnson and Andrés Gómez Emilsson

Lucas Perry from the Future of Life Institute recently interviewed my co-founder Mike Johnson and me on his AI Alignment podcast. Here is the full transcript:


Lucas: Hey, everyone. Welcome back to the AI Alignment Podcast. I’m Lucas Perry, and today we’ll be speaking with Andrés Gomez Emilsson and Mike Johnson from the Qualia Research Institute. In this episode, we discuss the Qualia Research Institute’s mission and core philosophy. We get into the differences between and arguments for and against functionalism and qualia realism. We discuss definitions of consciousness, how consciousness might be causal, we explore Marr’s Levels of Analysis, and we discuss the Symmetry Theory of Valence. We also get into identity and consciousness and the world, the is-ought problem, and what this all means for AI alignment and building beautiful futures.

And then end on some fun bits, exploring the potentially large amounts of qualia hidden away in cosmological events, and whether or not our universe is something more like heaven or hell. And remember, if you find this podcast interesting or useful, remember to like, comment, subscribe, and follow us on your preferred listening platform. You can continue to help make this podcast better by participating in a very short survey linked in the description of wherever you might find this podcast. It really helps. Andrés is a consciousness researcher at QRI and is also the Co-founder and President of the Stanford Transhumanist Association. He has a Master’s in Computational Psychology from Stanford. Mike is Executive Director at QRI and is also a co-founder.

He is interested in neuroscience, philosophy of mind, and complexity theory. And so, without further ado, I give you Mike Johnson and Andrés Gomez Emilsson. So, Mike and Andrés, thank you so much for coming on. Really excited about this conversation and there’s definitely a ton for us to get into here.

Andrés: Thank you so much for having us. It’s a pleasure.

Mike: Yeah, glad to be here.

Lucas: Let’s start off just talking to provide some background about the Qualia Research Institute. If you guys could explain a little bit, your perspective of the mission and base philosophy and vision that you guys have at QRI. If you could share that, that would be great.

Andrés: Yeah, for sure. I think one important point is there’s some people that think that really what matters might have to do with performing particular types of algorithms, or achieving external goals in the world. Broadly speaking, we tend to focus on experience as the source of value, and if you assume that experience is a source of value, then really mapping out what is the set of possible experiences, what are their computational properties, and above all, how good or bad they feel seems like an ethical and theoretical priority to actually make progress on how to systematically figure out what it is that we should be doing.

Mike: I’ll just add to that, this thing called consciousness seems pretty confusing and strange. We think of it as pre-paradigmatic, much like alchemy. Our vision for what we’re doing is to systematize it and to do to consciousness research what chemistry did to alchemy.

Lucas: To sort of summarize this, you guys are attempting to be very clear about phenomenology. You want to provide a formal structure for understanding and also being able to infer phenomenological states in people. So you guys are realists about consciousness?

Mike: Yes, absolutely.

Lucas: Let’s go ahead and lay some conceptual foundations. On your website, you guys describe QRI’s full stack, so the kinds of metaphysical and philosophical assumptions that you guys are holding to while you’re on this endeavor to mathematically capture consciousness.

Mike: I would say ‘full stack’ talks about how we do philosophy of mind, we do neuroscience, and we’re just getting into neurotechnology with the thought that yeah, if you have a better theory of consciousness, you should be able to have a better theory about the brain. And if you have a better theory about the brain, you should be able to build cooler stuff than you could otherwise. But starting with the philosophy, there’s this conception of qualia formalism: the idea that phenomenology can be precisely represented mathematically. We borrow the goal from Giulio Tononi’s IIT. We don’t necessarily agree with the specific math involved, but the goal of constructing a mathematical object that is isomorphic to a system’s phenomenology would be the correct approach if you want to formalize phenomenology.

And then from there, one of the big questions in how you even start is, what’s the simplest starting point? And here, I think one of our big innovations that is not seen at any other research group is we’ve started with emotional valence and pleasure. We think these are not only very ethically important, but also just literally the easiest place to start reverse engineering.

Lucas: Right, and so this view is also colored by physicalism, qualia structuralism, and valence realism. Could you explain some of those things in a non-jargony way?

Mike: Sure. Qualia formalism is this idea that math is the right language to talk about qualia in, and that we can get a precise answer. This is another way of saying that we’re realists about consciousness much as people can be realists about electromagnetism. We’re also valence realists: we believe emotional valence, or pain and pleasure, the goodness or badness of an experience, is a natural kind. This concept carves reality at the joints. We have some further thoughts on how to define this mathematically as well.

Lucas: So you guys are physicalists, so you think that basically the causal structure of the world is best understood by physics and that consciousness was always part of the game engine of the universe from the beginning. Ontologically, it was basic and always there in the same sense that the other forces of nature were already in the game engine since the beginning?

Mike: Yeah, I would say so. I personally like the frame of dual aspect monism, but I would also step back a little bit and say there’s two attractors in this discussion. One is the physicalist attractor, and that’s QRI. Another would be the functionalist/computationalist attractor. I think a lot of AI researchers are in this attractor and this is a pretty deep question of, if we want to try to understand what value is, or what’s really going on, or if we want to try to reverse engineer phenomenology, do we pay attention to bits or atoms? What’s more real; bits or atoms?

Lucas: That’s an excellent question. Scientific reductionism here I think is very interesting. Could you guys go ahead and unpack the skeptic’s position on your view, and broadly adjudicate the merits of each view?

Andrés: Maybe a really important frame here is Marr’s Levels of Analysis. David Marr was a cognitive scientist who wrote a really influential book in the ’80s called Vision, where he basically creates a schema for how to understand an information processing system, in this particular case how you actually make sense of the world visually. The framework goes as follows: you have three ways in which you can describe an information processing system. First of all, the computational/behavioral level. That level is about understanding the input-output mapping of an information processing system. Part of it is also understanding the run-time complexity of the system and under what conditions it’s able to perform its actions. An analogy here would be an abacus, for example.

On the computational/behavioral level, what an abacus can do is add, subtract, multiply, divide, and if you’re really creative you can also exponentiate and do other interesting things. Then you have the algorithmic level of analysis, which is a little bit more detailed, and in a sense more constrained. The algorithmic level of analysis is about figuring out what the internal representations are, and the possible manipulations of those representations, such that you get the input-output mapping described by the first level. Here you have an interesting relationship where understanding the first level doesn’t fully constrain the second one. That is to say, there are many systems that have the same input-output mapping but that under the hood use different algorithms.

In the case of the abacus, an algorithm might be: whenever you want to add a number, you push a bead; whenever you’re done with a row, you push all of the beads back and then you add a bead in the row underneath. And finally, you have the implementation level of analysis, and that is: what is the system actually made of? How is it constructed? All of these different levels ultimately also map onto different theories of consciousness, and that is basically where in the stack you associate consciousness, or being, or “what matters”. So, for example, behaviorists in the ’50s may associate consciousness, if they give any credibility to that term, with the behavioral level. They don’t really care what’s happening inside as long as you have an extended pattern of reinforcement learning over many iterations.

What matters is basically how you’re behaving, and that’s the crux of who you are. A functionalist will actually care about what algorithms you’re running, how it is that you’re actually transforming the input into the output. Functionalists generally do care about, for example, brain imaging; they do care about the high-level algorithms that the brain is running, and generally will be very interested in figuring out these algorithms and generalizing them in fields like machine learning and digital neural networks and so on. A physicalist associates consciousness with the implementation level of analysis: how the system is physically constructed has bearing on what it is like to be that system.
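
As a toy illustration of Marr’s point that the computational/behavioral level underdetermines the algorithmic level (an example of mine, not the speakers’): the two functions below have an identical input-output mapping, yet run different algorithms under the hood, much like a hardware multiplier versus bead-pushing on an abacus.

```python
# Two implementations with the same computational-level description
# ("multiply two non-negative integers") but different algorithmic-level
# descriptions -- observing input-output behavior alone cannot tell them apart.

def multiply_direct(a: int, b: int) -> int:
    return a * b  # single hardware multiply

def multiply_by_addition(a: int, b: int) -> int:
    # Repeated addition, like pushing beads on an abacus row by row.
    total = 0
    for _ in range(b):
        total += a
    return total

# Identical input-output mapping over a sample of the domain:
assert all(multiply_direct(a, b) == multiply_by_addition(a, b)
           for a in range(20) for b in range(20))
```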

Lucas: So, you guys haven’t said that this was your favorite approach, but if people are familiar with David Chalmers, these seem to be the easy problems, right? And functionalists are interested in just the easy problems and some of them will actually just try to explain consciousness away, right?

Mike: Yeah, I would say so. And I think to try to condense some of the criticism we have of functionalism, I would claim that it looks like a theory of consciousness and can feel like a theory of consciousness, but it may not actually do what we need a theory of consciousness to do; specify which exact phenomenological states are present.

Lucas: Is there not some conceptual partitioning that we need to do between functionalists who believe in qualia or consciousness, and those that are illusionists or want to explain it away or think that it’s a myth?

Mike: I think that there is that partition, and I guess there is a question of how principled the partition you can be, or whether if you chase the ideas down as far as you can, the partition collapses. Either consciousness is a thing that is real in some fundamental sense and I think you can get there with physicalism, or consciousness is more of a process, a leaky abstraction. I think functionalism naturally tugs in that direction. For example, Brian Tomasik has followed this line of reasoning and come to the conclusion of analytic functionalism, which is trying to explain away consciousness.

Lucas: What is your guys’s working definition of consciousness, and what does it mean to say that consciousness is real?

Mike: It is a word that’s overloaded. It’s used in many contexts. I would frame it as what it feels like to be something, and something is conscious if there is something it feels like to be that thing.

Andrés: It’s important also to highlight some of its properties. As Mike pointed out, “consciousness” is used in many different ways. There’s like eight definitions for the word consciousness, and honestly, all of them are really interesting. Some of them are more fundamental than others and we tend to focus on the more fundamental side of the spectrum for the word. A sense that would be very not fundamental would be consciousness in the sense of social awareness or something like that. We actually think of consciousness much more in terms of qualia; what is it like to be something? What is it like to exist? Some of the key properties of consciousness are as follows: First of all, we do think it exists.

Second, in some sense it has causal power: the fact that we are conscious matters for evolution. Evolution made us conscious for a reason: it’s actually doing some computational legwork that would maybe be possible to do otherwise, but not as efficiently or conveniently as it is with consciousness. Then also you have the property of qualia: the fact that we can experience sights, and colors, and tactile sensations, and thoughts, and emotions, and so on. All of these seem to live in completely different worlds, and in a sense they do, but they share the property that they can be part of a unified experience, one that can experience color at the same time as sound. Those different types of sensations fall under the category of consciousness because they can be experienced together.

And finally, you have unity: the fact that you have the capability of experiencing many qualia simultaneously. That’s generally a very strong claim to make, but we think that if you take experience seriously you need to acknowledge its unity.

Lucas: What are your guys’s intuition pumps for thinking why consciousness exists as a thing? Why are there qualia?

Andrés: There’s the metaphysical question of why consciousness exists to begin with. That’s something I would like to punt on for the time being. There’s also the question of why it was recruited for information processing purposes in animals. The intuition here is that there are various contrasts that you can have within experience which can serve a computational role. So, there may be a very deep reason why color qualia or visual qualia is used for information processing associated with sight, and why tactile qualia is associated with information processing useful for touching and making haptic representations, and that might have to do with the actual map of how all the qualia values are related to each other. Obviously, you have all of these edge cases: people who are synesthetic.

They may open their eyes and they experience sounds associated with colors, and people tend to think of those as abnormal. I would flip it around and say that we are all synesthetic, it’s just that the synesthesia that we have in general is very evolutionarily adaptive. The reason why you experience colors when you open your eyes is that that type of qualia is really well suited to represent geometrically a projective space. That’s something that naturally comes out of representing the world with the sensory apparatus like eyes. That doesn’t mean that there aren’t other ways of doing it. It’s possible that you could have an offshoot of humans that whenever they opened their eyes, they experience sound and they use that very well to represent the visual world.

But we may very well be in a local maximum of how different types of qualia are used to represent and do certain types of computations in a very well-suited way. The intuition behind why we’re conscious is that all of these different contrasts in the structure of the relationships between possible qualia values have computational implications, and there are actual ways of using these contrasts in very computationally effective ways.

Lucas: So, just to channel the functionalist here, wouldn’t he just say that everything you just said about qualia could be fully reducible to input output and algorithmic information processing? So, why do we need this extra property of qualia?

Andrés: There’s this article, I believe by Brian Tomasik, that basically says flavors of consciousness are flavors of computation. It might be very useful to do that exercise, where basically you identify color qualia as just a certain type of computation, and it may very well be that the geometric structure of color is actually just a particular algorithmic structure: whenever you have a particular type of algorithmic information processing, you get these geometric state-spaces. In the case of color, that’s a Euclidean three-dimensional space. In the case of tactile or smell qualia, it might be a much more complicated space, but then it’s in a sense implied by the algorithms that we run. There are a number of good arguments there.

The general approach to tackling them is to note that when it comes down to actually defining what algorithms a given system is running, you will hit a wall when you try to formalize exactly how to do it. So, one example is: how do you determine the scope of an algorithm? When you’re analyzing a physical system and you’re trying to identify what algorithm it is running, are you allowed to basically contemplate 1,000 atoms? Are you allowed to contemplate a million atoms? Where is a natural boundary for you to say, “Whatever is inside here can be part of the same algorithm, but whatever is outside of it can’t”? There really isn’t a frame-invariant way of making those decisions. On the other hand, if you associate qualia with actual physical states, there is a frame-invariant way of describing what the system is.

Mike: So, a couple of years ago I posted a piece giving a critique of functionalism, and one of the examples that I brought up was: if I have a bag of popcorn and I shake the bag of popcorn, did I just torture someone? Did I just run a whole brain emulation of some horrible experience, or did I not? There’s not really an objective way to determine which algorithms a physical system is running. So this is a kind of unanswerable question from the perspective of functionalism, whereas with a physical theory of consciousness, it would have a clear answer.

Andrés: Another metaphor here is: let’s say you’re at a park enjoying an ice cream, and imagine a contrived machine with, let’s say, algorithms isomorphic to whatever is going on in your brain. The particular algorithms that your brain is running in that precise moment map, within a functionalist paradigm, onto a metal ball rolling down one of the paths within this machine in a straight line, not touching anything else. So there’s actually not much going on. According to functionalism, that would have to be equivalent, and it would actually be generating your experience. Now the weird thing there is that you could actually break the machine, you could do a lot of things, and the behavior of the ball would not change.

Meaning that within functionalism, to actually understand what a system is doing, you need to understand the counterfactuals of the system. You need to understand what the system would be doing if the input had been different. And all of a sudden, you end up with this very, very gnarly problem of defining, well, how do you actually objectively decide what the boundary of the system is? Even for states that are allegedly very complicated, the system looks extremely simple, and you can remove a lot of parts without actually modifying its behavior. That casts into question whether there is an objective, non-arbitrary boundary that you can draw around the system and say, “Yeah, this is equivalent to what’s going on in your brain right now.”

This has a very heavy bearing on the binding problem. The binding problem, for those who haven’t heard of it, is basically: how is it possible that 100 billion spatially distributed neurons, just because they’re skull-bound, simultaneously contribute to a unified experience, as opposed to, for example, neurons in your brain and neurons in my brain contributing to a unified experience? You hit a lot of problems, like what is the speed of propagation of information for different states within the brain? I’ll leave it at that for the time being.

Lucas: I would just like to be careful about this intuition here that experience is unified. I think the intuition pump for that is direct phenomenological experience: experience seems unified. But experience also seems a lot of different ways that aren’t necessarily descriptive of reality, right?

Andrés: You can think of it as different levels of sophistication, where you may start out with a very naive understanding of the world, where you confuse your experience for the world itself. A very large percentage of people perceive the world and in a sense think that they are experiencing the world directly, whereas all the evidence indicates that actually you’re experiencing an internal representation. You can go and dream, you can hallucinate, you can enter interesting meditative states, and those don’t map to external states of the world.

There’s this transition that happens when you realize that in some sense you’re experiencing a world simulation created by your brain, and of course, you’re fooled by it in countless ways, especially when it comes to emotional things: we look at a person and we might have an intuition of what type of person they are, and if we’re not careful, we can confuse our intuitions, we can confuse our feelings, with truth, as if we were actually able to sense their souls, so to speak, rather than, “Hey, I’m running some complicated models on people-space and trying to carve out who they are.” There’s definitely a lot of ways in which experience is very deceptive, but here I would actually make an important distinction.

When it comes to intentional content, and intentional content is basically what the experience is about, for example, if you’re looking at a chair, there’s the quality of chairness, the fact that you understand the meaning of chair and so on. That is usually a very deceptive part of experience. There’s another way of looking at experience that I would say is not deceptive, which is the phenomenal character of experience; how it presents itself. You can be deceived about basically what the experience is about, but you cannot be deceived about how you’re having the experience, how you’re experiencing it. You can infer based on a number of experiences that the only way for you to even actually experience a given phenomenal object is to incorporate a lot of that information into a unified representation.

But also, if you just pay attention to your experience, you can notice that you can simultaneously place your attention on two spots of your visual field and experience them harmonized together. That’s phenomenal character, and I would say there’s a strong case to be made for not doubting that property.

Lucas: I’m trying to do my best to channel the functionalist. I think he or she would say, “Okay, so what? That’s just more information processing, and I’ll bite the bullet on the binding problem. I still need some more time to figure that out. So what? It seems like these people who believe in qualia have an even tougher job of trying to explain this extra spooky quality in the world that’s different from all the other physical phenomena that science has gone into.” It also seems to violate Occam’s razor, or a principle of lightness, where one’s metaphysics or ontology would want to assume the least amount of extra properties or entities in order to try to explain the world. I’m just really trying to tease out your best arguments here for qualia realism, as we do have this current state of things in AI alignment where most people, it seems, would either try to explain away consciousness, say it’s an illusion, or be anti-realist about qualia.

Mike: That’s a really good question, a really good frame. And I would say our strongest argument revolves around predictive power. Just like centuries ago, you could absolutely be a skeptic about, shall we say, electromagnetism realism. And you could say, “Yeah, I mean there is this thing we call static, and there’s this thing we call lightning, and there’s this thing we call lodestones or magnets, but all these things are distinct. And to think that there’s some unifying frame, some deep structure of the universe that would tie all these things together and highly compress these phenomena, that’s crazy talk.” And so, it is a viable position today to say that about consciousness: it’s not yet clear whether consciousness has deep structure, but we’re assuming it does, and we think that unlocks a lot of predictive power.

We should be able to make predictions that are both more concise and compressed and crisp than others, and we should be able to make predictions that no one else can.

Lucas: So what is most powerful here about what you guys are doing? Is it the specific theories and assumptions, which you take to be falsifiable?

Mike: Yeah.

Lucas: If we can make predictive assessments of these things, which are either leaky abstractions or are qualia, how would we even then be able to arrive at a realist or anti-realist view about qualia?

Mike: So, one frame on this is: it could be that one could explain a lot of things about observed behavior and implicit phenomenology through a purely functionalist or computationalist lens, but maybe for a given system it might take 10 terabytes. And if you can get there in a much simpler way, if you can explain it in terms of three elegant equations instead of 10 terabytes, then it wouldn’t be proof that there exists some crystal-clear deep structure at work, but it would be very suggestive. Marr’s Levels of Analysis are pretty helpful here, where a functionalist might actually be very skeptical of consciousness mattering at all, because they would say, “Hey, if you’re identifying consciousness at the implementation level of analysis, how could that have any bearing on how we talk about the world, how we understand the world, how we behave?

Since the implementation level is kind of epiphenomenal from the point of view of the algorithm, how can an algorithm know its own implementation? All it can maybe figure out is its own algorithm, and its identity would be constrained to its own algorithmic structure.” But that’s not quite true. In fact, one level of analysis does have bearing on another, meaning in some cases the implementation level of analysis doesn’t actually matter for the algorithm, but in some cases it does. So, if you were implementing a computer, let’s say with water, you have the option of maybe implementing a Turing machine with water buckets, and in that case, okay, the implementation level of analysis goes out the window in the sense that it doesn’t really help you understand the algorithm.

But if the way you’re using water to implement algorithms is by creating a system of adding waves in buckets of different shapes, with different resonant modes, then the implementation level of analysis actually matters a whole lot for what algorithms are … finely tuned to be very effective in that substrate. In the case of consciousness and how we behave, we do think properties of the substrate have a lot of bearing on what algorithms we actually run. A functionalist should actually start caring about consciousness if the properties of consciousness make the algorithms more efficient, more powerful.

Lucas: But what if qualia and consciousness are substantive real things? What if epiphenomenalism is true and consciousness is like smoke rising from computation, without any causal efficacy?

Mike: To offer a re-frame on this, I like this frame of dual aspect monism better. There seems to be an implicit value judgment on epiphenomenalism. It’s seen as this very bad thing if a theory implies qualia as epiphenomenal. Just to put cards on the table, I think Andrés and I differ a little bit on how we see these things, although I think our ideas also mesh up well. But I would say that under the frame of something like dual aspect monism, that there’s actually one thing that exists, and it has two projections or shadows. And one projection is the physical world such as we can tell, and then the other projection is phenomenology, subjective experience. These are just two sides of the same coin and neither is epiphenomenal to the other. It’s literally just two different angles on the same thing.

And in that sense, qualia values and physical values are really talking about the same thing when you get down to it.

Lucas: Okay. So does this all begin with this move that Descartes makes, where he tries to produce a perfectly rational philosophy or worldview by making no assumptions and then starting with experience? Is this the kind of thing that you guys are doing in taking consciousness or qualia to be something real or serious?

Mike: I can just speak for myself here, but I would say my intuition comes from two places. One is staring deep into the beast of functionalism and realizing that it doesn’t lead to a clear answer. My model is that it just is this thing that looks like an answer but can never even in theory be an answer to how consciousness works. And if we deny consciousness, then we’re left in a tricky place with ethics and moral value. It also seems to leave value on the table in terms of predictions, that if we can assume consciousness as real and make better predictions, then that’s evidence that we should do that.

Lucas: Isn’t that just an argument that it would be potentially epistemically useful for ethics if we could have predictive power about consciousness?

Mike: Yeah. So, let’s assume that it’s 100 years, or 500 years, or 1,000 years in the future, and we’ve finally cracked consciousness. We’ve finally solved it. My open question is, what does the solution look like? If we’re functionalists, what does the solution look like? If we’re physicalists, what does the solution look like? And we can expand this to ethics as well.

Lucas: Just as a conceptual clarification, the functionalists are also physicalists though, right?

Andrés: There are two senses of the word physicalism here. There’s physicalism in the sense of a theory of the universe: that the behavior of matter and energy, what happens in the universe, is exhaustively described by the laws of physics, or future physics. There is also physicalism in the sense of a theory of consciousness, in contrast to functionalism. David Pearce, I think, would describe it as non-materialist physicalist idealism. There’s definitely a very close relationship between that phrasing and dual aspect monism. I can briefly unpack it. Basically, ‘non-materialist’ means denying that the stuff of the world is fundamentally unconscious. That’s something materialism claims: that what the world is made of is not conscious, is raw matter so to speak.

Andrés: ‘Physicalist’, again, in the sense that the laws of physics exhaustively describe behavior, and ‘idealist’ in the sense that what makes up the world is qualia or consciousness. The big picture view is that the actual substrate of the universe, quantum fields, are fields of qualia.

Lucas: So Mike, you were saying that in the future when we potentially have a solution to the problem of consciousness, that in the end, the functionalists with algorithms and explanations of say all of the easy problems, all of the mechanisms behind the things that we call consciousness, you think that that project will ultimately fail?

Mike: I do believe that, and I guess my gentle challenge to functionalists would be to sketch out a vision of what a satisfying answer to consciousness would be, whether it’s completely explaining it away or completely explaining it. If in 500 years you go to the local bookstore and you check out Consciousness 101, and just flip through it, you look at the headlines and the chapter list and the pictures, what do you see? I think we have an answer as formalists, but I would be very interested in getting the functionalist take on this.

Lucas: All right, so you guys have this belief in the ability to formalize our understanding of consciousness. Is this actually contingent on realism or anti-realism?

Mike: It is implicitly dependent on realism, that consciousness is real enough to be describable mathematically in a precise sense. And actually that would be my definition of realism, that something is real if we can describe it exactly with mathematics and it is instantiated in the universe. I think the idea of connecting math and consciousness is very core to formalism.

Lucas: What’s particularly interesting here is that you’re making falsifiable claims about phenomenological states. It’s good and exciting that your Symmetry Theory of Valence, which we can get into now, has falsifiable aspects. So do you guys want to describe your Symmetry Theory of Valence and how it fits in as a consequence of your valence realism?

Andrés: Sure, yeah. I think one of the key places where this has bearing is in understanding what it is that we actually want, and what it is that we actually like and enjoy. That question is usually answered in an agentive way. So basically you think of agents as entities who spin out possibilities for what actions to take, and then they have a way of sorting them by expected utility and then carrying them out. A lot of people may associate what we want or what we like or what we care about with that level, the agent level, whereas we think the true source of value is actually more low-level than that: there’s something else that we’re actually using in order to implement agentive behavior. There are ways of experiencing value that are completely separated from agents. You don’t actually need to be generating possible actions, evaluating them, and enacting them for there to be value, or for you to actually be able to enjoy something.

So what we’re examining here is: what is the lower-level property that gives rise even to agentive behavior, that underlies every other aspect of experience? That would be valence, and specifically valence gradients. The general claim is that we are set up in such a way that we are basically climbing the valence gradient. This is not true in every situation, but it’s mostly true, and it’s definitely mostly true in animals. And then the question becomes: what implements valence gradients? Perhaps a good intuition pump is this extraordinary fact that things that have nothing to do with our evolutionary past can nonetheless feel good or bad. It’s understandable that if you hear somebody scream, you may get nervous or anxious or fearful, or if you hear somebody laugh, you may feel happy.

That makes sense from an evolutionary point of view, but why would the sound of the Bay Area Rapid Transit, the BART, which creates these very intense screeching sounds that are not even within the vocal range of humans – really bizarre, never encountered before in our evolutionary past – nonetheless have an extraordinarily negative valence? That’s a hint that valence has to do with patterns: it’s not just goals and actions and utility functions; the actual pattern of your experience may determine valence. The same goes for the SUBPAC, a technology that basically renders sounds between 10 and 100 hertz: some of them feel really good, some of them feel pretty unnerving, some of them are anxiety-producing. Why would that be the case, especially when you’re getting types of input that have nothing to do with our evolutionary past?

It seems that there are ways of triggering high- and low-valence states just based on the structure of your experience. The last example I’ll give is very weird states of consciousness like meditation or psychedelics, which seem to come with extraordinarily intense and novel forms of experiencing significance, or a sense of bliss, or pain. And again, they don’t seem to have much semantic content per se, or rather the semantic content is not the core reason why they feel the way they do. It has to do more with the particular structure that they induce in experience.

Mike: There are many ways to talk about where pain and pleasure come from. We can talk about it in terms of neurochemicals, opioids, dopamine. We can talk about it in terms of pleasure centers in the brain, in terms of goals and preferences and getting what you want, but all these have counterexamples. All of these have some points that you can follow the thread back to which will beg the question. I think the only way to explain emotional valence, pain and pleasure, that doesn’t beg the question is to explain it in terms of patterns within phenomenology: some just intrinsically feel good and some intrinsically feel bad. To touch back on the formalism frame, this would be saying that if we have a mathematical object that is isomorphic to your phenomenology, to what it feels like to be you, then some pattern or property of this object will refer to, or sort of intrinsically encode, your emotional valence: how pleasant or unpleasant the experience is.

That’s the valence formalism aspect that we’ve come to.

Lucas: So given this valence realism, the view is that there’s this intrinsic pleasure-pain axis of the world, and this is sort of channeling, I guess, David Pearce’s view. There are things in experience which are just clearly good-seeming or bad-seeming. Will MacAskill called these pre-theoretic properties we might ascribe to certain kinds of experiential aspects: they’re just good or bad. So on this valence realism view, this potential for goodness or badness, whose nature is sort of self-intimatingly disclosed, has been in the physics and in the world since the beginning, and now it’s unfolding and expressing itself more and more as the universe comes to life, and embedded somewhere deep within the universe’s structure are these intrinsically good or intrinsically bad valences which complex computational systems, and maybe other stuff, have access to.

Andrés: Yeah, yeah, that’s right. And I would perhaps emphasize that it’s not only pre-theoretical, it’s pre-agentive, you don’t even need an agent for there to be valence.

Lucas: Right. Okay. This is going to be a good point, I think, for getting into these other more specific hairy philosophical problems. Could you go ahead and unpack a little bit more this view that pleasure or pain is self-intimatingly good or bad, that just by existing in experiential relation with the thing, its nature is disclosed? Channeling Brian Tomasik here: I think functionalists would say there’s just another reinforcement learning algorithm somewhere upstream that is evaluating these phenomenological states. They’re not intrinsically good or bad; that’s just what it feels like to be the kind of agent who has that belief.

Andrés: Sure. There are definitely many angles from which to see this. One of them is by realizing that liking, wanting, and learning are possible to dissociate; in particular, you can have reinforcement without an associated positive valence. You can also have positive valence without reinforcement or learning. Generally they are correlated, but they are different things. My understanding is that a lot of people think of valence as something that matters because you are the type of agent that has a utility function and a reinforcement function. If that were the case, we would expect valence to melt away in states that are non-agentive; we wouldn’t necessarily see it. We would also expect it to be intrinsically tied to intentional content, the aboutness of experience. A very strong counterexample is that somebody may claim that what they truly want is to be academically successful or something like that.

They think of the reward function as intrinsically tied to getting a degree or something like that. I would call that to some extent illusory: if you actually look at how those preferences are being implemented, deep down there would be valence gradients happening there. One way to show this would be, let’s say, to give the person an opioid antagonist on graduation day. The person will subjectively feel that the day is meaningless; you’ve removed the pleasant gloss of the experience that they were actually looking for, the thing they thought all along was tied in with the intentional content, the fact of graduating, when in fact it was the hedonic gloss that they were after. That’s one intuition pump there.

Lucas: These core problem areas that you’ve identified in Principia Qualia, would you just like to briefly touch on those?

Mike: Yeah, we’re trying to break the problem down into modular pieces, with the idea that if we can decompose the problem correctly, then the sub-problems become much easier than the overall problem, and if you collect all the solutions to the sub-problems, then in aggregate you get a full solution to the problem of consciousness. So I’ve split things up into the metaphysics, the math, and the interpretation. The first question is: what metaphysics do you even start with? What ontology do you even use to approach the problem? And we’ve chosen the ontology of physics, which can objectively map onto reality in a way that computation cannot. Then there’s this question of, okay, so you have your core ontology, in this case physics, and then there’s this question of what counts, what actively contributes to consciousness. Do we look at electrons, electromagnetic fields, quarks?

This is an unanswered question. We have hypotheses but we don’t have an answer. Moving into the math: conscious systems seem to have boundaries. If something’s happening inside my head, it can directly contribute to my conscious experience; but even if we put our heads together, literally speaking, your consciousness doesn’t bleed over into mine. There seems to be a boundary. So one way of framing this is the boundary problem, and another way of framing it is the binding problem; these are just two sides of the same coin. There’s this big puzzle of how you draw the boundaries of a subjective experience. IIT is set up to approach consciousness through this lens, and it has a certain style of answer, a certain style of approach. We don’t necessarily need to take that approach, but it’s an intellectual landmark. Then we get into things like the state-space problem and the topology of information problem.

Say we've figured out our basic ontology, what we think is a good starting point, and, of that stuff, what actively contributes to consciousness; we can then figure out some principled way to draw a boundary: okay, this is conscious experience A and this is conscious experience B, and they don't overlap. So you have a bunch of information inside the boundary. Then there's this math question of how you rearrange it into a mathematical object that is isomorphic to what that stuff feels like. And again, IIT has an approach to this; we don't necessarily subscribe to the exact approach, but it's good to be aware of. There's also the interpretation problem, which is actually very near and dear to what QRI is working on, and this is the concept of: if you had a mathematical object that represented what it feels like to be you, how would we even start to figure out what it meant?

Lucas: This is also where the falsifiability comes in, right? If we have the mathematical object and we’re able to formally translate that into phenomenological states, then people can self report on predictions, right?

Mike: Yes. I don't necessarily fully trust self-reports as being the gold standard. I think evolution is tricky sometimes and can lead to inaccurate self-reports, but at the same time they're probably pretty good, and they're the best we have for validating predictions.

Andrés: A lot of this gets easier if we assume that, while we may be wrong in an absolute sense, we're often pretty well calibrated to judge relative differences. Maybe you ask me how I'm doing on a scale of one to ten and I say seven when the reality is a five; that's a problem. But at the same time, I like chocolate, and if you give me some chocolate and I eat it, that improves my subjective experience, and I would expect us to be well calibrated in terms of evaluating whether something is better or worse.

Lucas: There’s this view here though that the brain is not like a classical computer, that it is more like a resonant instrument.

Mike: Yeah. Maybe an analogy here could be pretty useful. There's this researcher, William Sethares, who basically figured out a way to quantify the mutual dissonance between pairs of notes. It turns out that it's not very hard: all you need to do is add up the pairwise dissonance between every harmonic of the notes. What that gives you is that if you take, for example, a major key and you compute the average dissonance between pairs of notes within that major key, it's going to be pretty good on average, and if you take the average dissonance of a minor key, it's going to be higher. So in a sense what distinguishes a minor from a major key is, in the combinatorial space of possible permutations of notes, how frequently they are dissonant versus consonant.

That's a very ground-truth mathematical feature of a musical instrument, and it's going to be different from one instrument to the next. With that as a backdrop, we think of the brain, and in particular valence, in a very similar light: the brain has natural resonant modes, and emotions may seem externally complicated. When you're having a very complicated emotion and we ask you to describe it, it's almost like trying to describe a moment in a symphony, this very complicated composition; how do you even go about it? But deep down, the reason why a particular phrase sounds pleasant or unpleasant within music is ultimately traceable to the additive pairwise dissonance of all of those harmonics. Likewise, for a given state of consciousness we suspect that, very much as in music, the average pairwise dissonance between the harmonics present at a given point in time will be strongly related to how unpleasant the experience is.
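A minimal sketch of the pairwise-dissonance computation described above, for readers who want to play with it. The constants are common published fits of the Plomp-Levelt roughness curve that Sethares-style models build on; the exact values, the amplitude weighting, and the toy 1/n harmonic roll-off here are illustrative assumptions, not Sethares' exact published algorithm:

```python
import numpy as np

def pair_dissonance(f1, a1, f2, a2):
    """Roughness contributed by a single pair of pure partials."""
    f_lo, f_hi = min(f1, f2), max(f1, f2)
    s = 0.24 / (0.021 * f_lo + 19.0)  # scales the curve to the lower partial
    d = f_hi - f_lo
    return a1 * a2 * (np.exp(-3.5 * s * d) - np.exp(-5.75 * s * d))

def total_dissonance(fundamentals, n_harmonics=6):
    """Add up the pairwise dissonance between every harmonic of every note."""
    partials = [(f * n, 1.0 / n)  # assumed 1/n amplitude roll-off
                for f in fundamentals
                for n in range(1, n_harmonics + 1)]
    return sum(pair_dissonance(fa, aa, fb, ab)
               for i, (fa, aa) in enumerate(partials)
               for (fb, ab) in partials[i + 1:])

print(total_dissonance([261.6, 329.6, 392.0]))  # C-E-G, relatively consonant
print(total_dissonance([261.6, 277.2, 293.7]))  # C-C#-D, noticeably rougher
```

The cluster of seconds scores markedly higher total dissonance than the triad, which is the kind of ground-truth combinatorial feature being described.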

These are electromagnetic waves, and it's not exactly static, nor exactly a standing wave either, but it gets really close to it. Basically what this is saying is that there's this excitation-inhibition wave function that happens statistically across macroscopic regions of the brain, and there's only a discrete number of ways in which that wave can fit an integer number of times in the brain. We'll give you a link to the actual visualizations of what this looks like. As a concrete example, one of the harmonics with the lowest frequency is a very simple one where the two hemispheres are alternately more excited versus inhibited. That's a low-frequency harmonic because it's a very spatially large wave, an alternating pattern of excitation. Much higher-frequency harmonics are much more detailed and obviously harder to describe, but generally speaking, the spatial regions that are activated versus inhibited form very thin wave fronts.

It's not a mechanical wave as such, it's an electromagnetic wave, so it's actually the electric potential in each of these regions of the brain that fluctuates. Within this paradigm, at any given point in time you can describe a brain state as a weighted sum of all of its harmonics, and what that weighted sum looks like depends on your state of consciousness.
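To make the "weighted sum of harmonics" idea concrete, here is a toy sketch in the spirit of connectome-specific harmonic waves: the harmonics are eigenvectors of a graph Laplacian built from a connectome, and any activity pattern over the same regions decomposes exactly into them. The connectome and activity below are random stand-ins, not real data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100                             # hypothetical number of brain regions
W = rng.random((n, n))
W = (W + W.T) / 2                   # toy symmetric connectome weights
L = np.diag(W.sum(axis=1)) - W      # graph Laplacian of the connectome

eigvals, modes = np.linalg.eigh(L)  # columns of `modes` are the "harmonics"

activity = rng.standard_normal(n)   # toy activity pattern over the regions
weights = modes.T @ activity        # project the state onto the harmonics

# The weighted sum of harmonics reconstructs the state exactly, and
# weights**2 gives the energy carried by each harmonic.
assert np.allclose(modes @ weights, activity)
```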

Lucas: Sorry, I'm getting a little caught up here on enjoying resonant sounds and then also the valence realism. The view isn't that all minds will enjoy resonant things because happiness is some fundamental valence feature of the world, such that all brains that come out of evolution should therefore enjoy resonance?

Mike: It's less about the stimulus, less about the exact signal, and more about the effect of the signal on our brains. The resonance that matters, the resonance that counts, or the harmony that counts we'd say, or, in a precisely technical term, the consonance that counts, is the stuff that happens inside our brains. Empirically speaking, most signals that involve a lot of harmony create more internal consonance in these natural brain harmonics than, for example, dissonant stimuli. But the stuff that counts is inside the head, not the stuff going into our ears.

Just to be clear about QRI's move here: Selen Atasoy has put forth this connectome-specific harmonic wave model, and what we've done is combine it with our Symmetry Theory of Valence. It's a way of basically getting a Fourier transform of where the energy is, in terms of frequencies of brainwaves, in a much cleaner way than has been available through EEG. Basically, we can evaluate this data set for harmony: how much harmony is there in a brain? With the link to the Symmetry Theory of Valence, that should be a very good proxy for how pleasant it is to be that brain.
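A toy stand-in for "evaluating the data set for harmony" might then weight the pairwise dissonance between active harmonics by the energy each carries. To be clear, this is a sketch of the general move, not the actual algorithm from Quantifying Bliss; `pair_dissonance` is the function sketched earlier, and treating each harmonic's characteristic frequency as an input to it is an assumption:

```python
def harmony_score(mode_freqs, mode_energies):
    """Toy valence proxy: less energy-weighted dissonance between the
    active harmonics counts as a more harmonious brain state."""
    total = 0.0
    for i in range(len(mode_freqs)):
        for j in range(i + 1, len(mode_freqs)):
            total += pair_dissonance(mode_freqs[i], mode_energies[i],
                                     mode_freqs[j], mode_energies[j])
    return -total  # negated so that more consonant states score higher
```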

Lucas: Wonderful.

Andrés: In this context, yeah, the Symmetry Theory of Valence would be much more fundamental. There are probably many ways of generating states of consciousness that are, in a sense, completely unnatural, that are not based on the harmonics of the brain; but we suspect the bulk of the differences in states of consciousness will cash out as differences in brain harmonics, because that's a very efficient way of modulating the symmetry of the state.

Mike: Basically, music can be thought of as a very sophisticated way to hack our brains into a state of greater consonance, greater harmony.

Lucas: All right. People should check out your Principia Qualia, which is the work that you’ve done that captures a lot of this well. Is there anywhere else that you’d like to refer people to for the specifics?

Mike: Principia Qualia covers the philosophical framework and the Symmetry Theory of Valence. Andrés has written deeply about this connectome-specific harmonic wave frame, and the name of that piece is Quantifying Bliss.

Lucas: Great. I would love to be able to quantify bliss and instantiate it everywhere. Let's jump in here into a few problems and framings of consciousness. I'm just curious to see if you guys have any comments on these. The first is what you call the real problem of consciousness, and the second is what David Chalmers calls the meta-problem of consciousness. Would you like to go ahead and start off with the real problem of consciousness?

Mike: Yeah. So this gets to something we were talking about previously: is consciousness real or is it not? Is it something to be explained or to be explained away? This cashes out in terms of whether it is something that can be formalized or is intrinsically fuzzy. I'm calling this the real problem of consciousness, and a lot depends on the answer. There are so many different ways to approach consciousness, and hundreds, perhaps thousands, of different carvings of the problem: panpsychism, dualism, non-materialist physicalism, and so on. I think all of these theories essentially sort themselves into two buckets, depending on whether or not consciousness is real enough to formalize exactly. This is perhaps the most useful frame to use to evaluate theories of consciousness.

Lucas: And then there's the meta-problem of consciousness, which is quite funny. It's basically: why have we been talking about consciousness for the past hour, and what's all this stuff about qualia and happiness and sadness? Why do people make claims about consciousness? Why does it seem to us that there is maybe something like a hard problem of consciousness? Why is it that we experience phenomenological states? Why isn't everything going on with the lights off?

Mike: I think this is a very clever move by David Chalmers. It's a way to try to unify the field and get people to talk to each other, which is not so easy in this field. The meta-problem of consciousness doesn't necessarily solve anything, but it tries to inclusively start the conversation.

Andrés: The common move people make here is to say that all of these crazy things we think and say about consciousness are just what any information-processing system modeling its own attentional dynamics would do. That's one illusionist frame, but even within a qualia realist, qualia formalist paradigm, you still have the question of why we even think or self-reflect about consciousness. You could very well think of consciousness as being computationally relevant, such that you need to have consciousness and so on, while still lacking introspective access. You could have these complicated conscious information-processing systems that don't self-reflect on the quality of their own consciousness. That property is important to model and make sense of.

We have a few formalisms that may give rise to some insight into how self-reflectivity happens, and in particular how it is possible to model the entirety of your state of consciousness in a given phenomenal object. This ties in with the notion of a homunculus: if the overall valence of your consciousness is actually a signal traditionally used for fitness evaluation, detecting when you are at existential risk or when there are reproductive opportunities you may be missing out on, then it makes sense for there to be a general thermostat for the overall experience, where you can just look at it and get a sense of the overall well-being of the entire experience added together, in such a way that you experience it all at once.

I think a lot of the puzzlement has to do with that internal self-model of the overall well-being of the experience, which is something that we are evolutionarily incentivized to summarize and be able to see at a glance.

Lucas: So, some people have a view where human beings are conscious, they assume everyone else is conscious, and they think that the only place for value to reside is within consciousness, such that a world without consciousness is actually a world without any meaning or value. Even if we think that philosophical zombies, people who are functionally identical to us but with no qualia or phenomenological or experiential states, are merely conceivable, it would seem that there would be no value in a world of p-zombies. So I guess my question is: why does phenomenology matter? Why does the phenomenological modality of pain and pleasure, of valence, have some sort of special ethical or experiential status, unlike qualia like red or blue?

Why does red or blue not disclose some sort of intrinsic value in the same way that my suffering does or my bliss does or the suffering or bliss of other people?

Mike: My intuition is also that consciousness is necessary for value. Nick Bostrom has this wonderful quote in Superintelligence: that we should be wary of building a Disneyland with no children, some technological wonderland that is filled with marvels of function but doesn't have any subjective experience, doesn't have anyone to enjoy it. I would say that most AI safety research is focused on making sure there is a Disneyland, making sure, for example, that we don't just get turned into something like paperclips. But there's this other problem: making sure there are children, making sure there are subjective experiences around to enjoy the future. There aren't many live research threads on this problem, and I see QRI as a live research thread on how to make sure there is subjective experience in the future.

Probably a can of worms there, but as for your question about pain and pleasure, I'll pass that to my colleague Andrés.

Andrés: Nothing terribly satisfying here. I would go with David Pearce's view that these properties of experience are self-intimating, and, to the extent that you do believe in value, they will come up as the natural focal points for value, especially if you're able to actually probe the quality of your experience. In many states you believe that the reason why you like something is its intentional content; again, the case of graduating, or the case of getting a promotion, or one of those things that a lot of people associate with feeling great. But if you actually probe the quality of the experience, you will realize that there is a component of it which is its hedonic gloss, and you can manipulate it directly, again with things like opioid antagonists, and, if the Symmetry Theory of Valence is true, potentially also by directly modulating the consonance and dissonance of the brain harmonics, in which case the hedonic gloss would change in peculiar ways.

When it comes to consilience, to many different points of view agreeing on which aspect of the experience brings value to it, the answer seems to be the hedonic gloss.

Lucas: So in terms of qualia and valence realism, would the causal properties of qualia be the thing that would show any arbitrary mind the self-intimating nature of how good or bad an experience is, and in the space of all possible minds, what is the correct epistemological mechanism for evaluating the moral status of experiential or qualitative states?

Mike: First of all, I would say that my focus so far has mostly been on describing what is, not what ought to be. I think we can talk about valence without necessarily talking about ethics, but if we can talk about valence clearly, that certainly makes some questions and frameworks in ethics make much more, or less, sense. So the better we can clearly and purely descriptively talk about consciousness, the easier I think a lot of these ethical questions get. I'm trying hard not to privilege any ethical theory. I want to talk about reality, about what exists, what's real, and what the structure of what exists is, and I think if we succeed at that, then all these other questions about ethics and morality get much, much easier. I do think that there is an implicit "should" wrapped up in questions about valence, but that's another leap.

You can accept that valence is real without necessarily accepting that optimizing valence is an ethical imperative. I personally think, yes, it is very ethically important, but it is possible to take a purely descriptive frame to valence. Whether or not it also discloses, as David Pearce says, the utility function of the universe is another question, and can be decomposed.

Andrés: One framing here is that we do suspect valence is going to be the thing that matters to any mind, if you probe it in the right way, in order to achieve reflective equilibrium. The biggest example is a talk a neuroscientist was giving at some point: there was something off, and everybody seemed to be a little bit anxious or irritated, and nobody knew why. Then one of the conference organizers suddenly came up to the presenter and did something to the microphone, and then everything sounded way better and everybody was way happier. There had been this very subtle hissing pattern caused by some malfunction of the microphone, and it was making everybody irritated; they just didn't realize it was the source of the irritation. When it got fixed, everybody went, "Oh, that's why I was feeling upset."

We will find that to be the case over and over when it comes to improving valence. Somebody in the year 2050 might come to one of the connectome-specific harmonic wave clinics saying, "I don't know what's wrong with me," and when we put them through the scanner we identify that their 17th and 19th harmonics are in a state of dissonance. We cancel the 17th to make it cleaner, and the person will all of a sudden say, "Yeah, my problem is fixed. How did you do that?" So I think it's going to be a lot like that: the things that puzzle us, why do I prefer this, why do I think that is worse, will all of a sudden become crystal clear from the point of view of valence gradients objectively measured.

Mike: One of my favorite phrases in this context is "what you can measure you can manage", and if we can actually find the source of dissonance in a brain, then yeah, we can resolve it, and this could open the door for a lot of amazing things, making the human condition intrinsically better. Also maybe a lot of worrying things: being able to directly manipulate emotions may not necessarily be socially positive on all fronts.

Lucas: So I guess here we can begin to jump into AI alignment and qualia. We're building AI systems, and they're getting pretty strong and are going to keep getting stronger, potentially creating a superintelligence by the end of the century, and consciousness and qualia seem to be along for the ride for now. So I'd like to discuss the more specific places in AI alignment where these views might inform and direct it.

Mike: Yeah, I would share three problems of AI safety. There's the technical problem: how do you make a self-improving agent that is also predictable and safe? This is a very difficult technical problem, first of all to even make the agent, but second of all to make it safe, especially if it becomes smarter than we are. There's also the political problem: even the best technical solution in the world won't necessarily be put into action in a sane way if we're not in a reasonable political system. But the third problem is what QRI is most focused on, and that's the philosophical problem. What are we even trying to do here? What is the optimal relationship between AI and humanity? And a couple of specific details here: first of all, I think nihilism is absolutely an existential threat, and if we can find some antidotes to nihilism through some advanced valence technology, that could be enormously helpful for reducing X-risk.

Lucas: What kind of nihilism are you talking about here, nihilism about morality and meaning?

Mike: Yes, I would say so, and also just personal nihilism, the feeling that nothing matters, so why not do risky things?

Lucas: Whose quote is it, the philosopher's question of whether you should just kill yourself? That's the yawning abyss of nihilism inviting you in.

Andrés: Albert Camus: the only real philosophical question is whether to commit suicide. Whereas the way I think of it, the real philosophical question is how to make love last, how to bring value to existence; and if you have value on tap, then the question of whether to kill yourself or not seems really nonsensical.

Lucas: For sure.

Mike: We could also say that right now there aren't many good Schelling points for global coordination. People say that global coordination on building AGI would be a great thing, but we're a little light on the details of how to do that. If a clear, comprehensive, useful, practical understanding of consciousness can be built, then this may embody or generate new Schelling points that the larger world could self-organize around. If we can give people a clear understanding of what is and what could be, then I think we will get a better future that actually gets built.

Lucas: Yeah. Showing what is and what could be is immensely important and powerful. So, moving forward with AI alignment as we're building these more and more complex systems, there's this needed distinction between unconscious and conscious information processing, if we're interested in the morality and ethics of suffering and joy and other conscious states. How do you guys see the science of consciousness actually being able to distinguish between unconscious and conscious information-processing systems?

Mike: There are a few frames here. One is that, yeah, it does seem like the brain does some processing in consciousness and some processing outside of consciousness, and what's up with that? This could be an interesting frame to explore in terms of avoiding things like mind crime in the AGI or AI space: if there are certain computations which are painful, then don't do them in a way that would be associated with consciousness. It would be very good to have rules of thumb for how to do that. One interesting possibility is that in the future we might not just have compilers which optimize for speed of processing or minimization of dependent libraries and so on, but compilers which optimize for the valence of the computation on certain hardware. This of course gets into complex questions about computationalism, how hardware-dependent this compiler would be, and so on.

I think it’s an interesting and important long-term frame.

Lucas: So just to illustrate here, I think, the ways in which solving or better understanding consciousness will inform AI alignment, from the present day until superintelligence and beyond.

Mike: I think there's a lot of confusion about consciousness and a lot of confusion about what kind of thing the value problem is in AI safety, and there are some novel approaches on the horizon. I was speaking with Stuart Armstrong at the last EA Global, and he had some great things to share about his model fragments paradigm. I think this is the right direction. It's sort of understanding that, yeah, human preferences are insane. They're just not a consistent formal system.

Lucas: Yeah, we contain multitudes.

Mike: Yes, yes. So first of all, understanding what generates them seems valuable. There's this frame in AI safety called the complexity of value thesis; I believe Eliezer came up with it in a post on LessWrong. It's this frame where human value is very fragile, in that it can be thought of as a small area, perhaps even almost a point, in a very high-dimensional space, say a thousand dimensions. If we go any distance in any direction from this tiny point in this high-dimensional space, then we quickly get to something that we wouldn't think of as very valuable; maybe we leave everything else the same and just take away freedom. This paints a pretty sobering picture of how difficult AI alignment will be.

I think this is arguably the source of a lot of worry in the community: not only do we need to make machines that won't just immediately kill us, but machines that will preserve our position in this very, very high-dimensional space well enough that we keep the same trajectory, because possibly, if we move at all, we may enter a totally different trajectory, one that we in 2019 wouldn't think of as having any value. So the problem becomes very, very intractable. I would just say that there is an alternative frame. The phrasing I'm playing around with here is, instead of the complexity of value thesis, the unity of value thesis: it could be that many of the things we find valuable, eating ice cream, living in a just society, having a wonderful interaction with a loved one, all have the same underlying neural substrate, and empirically this is what affective neuroscience is finding.

Eating a chocolate bar activates the same brain regions as a transcendental religious experience. So maybe there's some sort of elegant compression that can be made, and actually things aren't so stark. We're not this point in a super high-dimensional space where, if we leave the point, everything of value is trashed forever; maybe there's some sort of convergent process that we can follow, that we can essentialize. We can make this list of 100 things that humanity values, and maybe they all have positive valence in common, and positive valence can be reverse-engineered. To some people this feels like a very scary, dystopic scenario (don't knock it until you've tried it), but at the same time there's a lot of complexity here.

One core frame that the idea of qualia formalism and valence realism can offer AI safety is that maybe the actual goal is somewhat different from what the complexity of value thesis puts forward. Maybe the actual goal is different and, in fact, easier. I think this could directly inform how we spend our resources on the problem space.

Lucas: Yeah, I was going to say that there exists a standing tension between this view of the complexity of all the preferences and values that human beings have, and the valence realist view, which says that what's ultimately good are certain experiential or hedonic states. I'm curious, if this valence view is true, whether it's all just going to turn into hedonium in the end.

Mike: I'm personally a fan of continuity. I think that if we do things right we'll have plenty of time to get things right, and if we do things wrong then we'll have plenty of time for things to be wrong. So I'm personally not a fan of big unilateral moves. It just gets back to the question of whether understanding what is can help us: clearly yes.

Andrés: Yeah. I guess one view is that we could preserve optionality and learn what is, and from there hopefully we'll be able to better inform oughts, and with maintained optionality we'll be able to choose the right thing. But that will require a cosmic level of coordination.

Mike: Sure. An interesting frame here is whole brain emulation. Whole brain emulation is a frame built around functionalism, and it's a seductive frame, I would say. If whole brain emulations wouldn't necessarily have the same qualia as the original humans, based on hardware considerations, there could be some weird lock-in effects where, if the majority of society turned themselves into p-zombies, it may be hard to go back on that.

Lucas: Yeah. All right. We're just getting to the end here; I appreciate all of this. You guys have been tremendous, and I really enjoyed this. I want to talk about identity in AI alignment: this taxonomy that you've developed around open individualism, closed individualism, and all of these other things. Would you like to touch on that and talk about its implications for AI alignment as you see them?

Andrés: Yeah, for sure. The taxonomy comes from Daniel Kolak, a philosopher and mathematician, and it's a pretty good taxonomy. Open individualism is the view that a lot of meditators and mystics and people who take psychedelics often subscribe to, which is that we're all one consciousness. Another frame is that our true identity is the light of consciousness, so to speak, so it doesn't matter in what form it manifests; it's always the same fundamental ground of being. Then you have the common-sense view, called closed individualism: you start existing when you're born, you stop existing when you die, you're just this segment. Some religions extend that into the future or past with reincarnation or heaven.

It's the belief in an ontological distinction between you and others, while at the same time there is ontological continuity from one moment to the next within you. Finally you have the view called empty individualism, which is that you're just a moment of experience. That's fairly common among physicists and among people who've tried to formalize consciousness; they often converge on empty individualism. I think a lot of theories of ethics and rationality, like the veil of ignorance as a guide, or defining rational decision-making as maximizing the expected utility of yourself as an agent, seem to be implicitly based on closed individualism without questioning it very much.

On the other hand, if the sense of individual identity of closed individualism doesn't actually carve nature at its joints, as a Buddhist might say, if the feeling of continuity, of being a separate, unique entity, is an illusory construction of your phenomenology, that casts how to approach rationality itself, and even self-interest, in a completely different light. If you start identifying with the light of consciousness rather than your particular instantiation, you will probably care a lot more about what happens to pigs in factory farms, because, insofar as they are conscious, they are you in a fundamental way. It matters a lot in terms of how to carve out different possible futures, especially when you get into very tricky situations like: what if there is mind-melding, or what if it becomes possible to make perfect copies of yourself?

All of these edge cases are really problematic from the common-sense view of identity, but they're not really a problem from an open individualist or empty individualist point of view. With all of this said, I do personally think there's probably a way of combining open individualism with valence realism that gives rise to the next step in human rationality, where we're actually trying to understand what the universe wants, so to speak. But I would say that there is a very tricky aspect here that has to do with game theory. We evolved to believe in closed individualism. The fact that it's evolutionarily adaptive is obviously not an argument for it being fundamentally true, but it does seem to be some kind of evolutionarily stable point to believe of yourself as the one you can affect most directly in a causal way, if you define your boundary that way.

That basically keeps you focused on the actual degrees of freedom that you do have. And if you think of a society of open individualists, everybody altruistically, maximally contributing to the universal consciousness, and then you have one closed individualist who is just selfishly trying to acquire power for itself, you can imagine that view would have a tremendous evolutionary advantage in that context. So I'm not one who just naively advocates for open individualism unreflectively. I think we still have to work out the game theory of it, how to make it evolutionarily stable and also how to make it ethical. It's an open question. I do think it's important to think about, and if you take consciousness very seriously, especially within physicalism, that will usually cast huge doubts on the common-sense view of identity.

It doesn't seem like a very plausible view if you actually try to formalize consciousness.

Mike: The game theory aspect is very interesting. You can think of closed individualism as something evolution produced that allows an agent to coordinate very closely with its past and future selves. Maybe we can say a little bit about why we're not all, by default, empty individualists or open individualists. Empty individualism seems to have a problem: if every slice of conscious experience is its own thing, then why should you even coordinate with your past and future selves, since they're not the same as you? That leads to a problem of defection. And in open individualism, where everything is the same being, so to speak, as Andrés mentioned, that allows free riders: if people are defecting, it doesn't allow altruistic punishment or any way to stop the free riding. There's interesting game theory here, and it also feeds into the question of how we define our identity in the age of AI, the age of cloning, the age of mind uploading.

This gets very, very tricky very quickly depending on one's theory of identity. Depending on that theory, people open themselves up to getting hacked in different ways; different theories of identity allow different forms of hacking.

Andrés: Yeah, and sometimes that's really good and sometimes really bad. I would make the prediction that, if not open individualism in its full-fledged form, then at least a weaker sense of identity than closed individualism is likely going to be highly adaptive in the future, as people gain the ability to modify their state of consciousness in much more radical ways. People who identify with a narrow sense of identity will just stay in their shells, not trying to disturb the local attractor too much, and that itself is not necessarily very advantageous if the things on offer are actually really good, both hedonically and intelligence-wise.

I do suspect that people who are somewhat more open to identifying with consciousness, or at least with a broader sense of identity, will be the ones making more substantial progress, pushing the boundary and creating new cooperation and coordination technology.

Lucas: Wow, I love all that. Seeing closed individualism for what it was has had a tremendous impact on my life, and this whole question of identity, I think, is largely confused for a lot of people. At the beginning you said that open individualism says that we are all one consciousness, or something like this, right? For me, with identity, I'd like to move beyond all distinctions of sameness and difference. To say "oh, we're all one consciousness" to me seems like saying "we're all one electromagnetism", which is really to say that consciousness is an independent feature or property of the world, a ground-level part of the world, and when the world produces agents, consciousness is just an empty, identityless property that comes along for the ride.

In the same way, it would be nonsense to say, "Oh, I am these specific atoms; I am just the forces of nature that are bounded within my skin and body." And in the same sense, with consciousness, there was what we were discussing earlier: the binding problem of the person, the discreteness of the person. Where does the person really begin or end? It seems like these different kinds of individualism have, as you said, epistemic and functional uses, but in my view they also create a ton of epistemic problems and ethical issues. And in terms of the valence theory, if qualia are actually something good or bad, then, as David Pearce says, it's really just an epistemological problem that you don't have access to other brain states in order to see the self-intimating nature of what it's like to be that thing in that moment.

There's a sense in which I want to reject all identity as arbitrary, and I want to do that in an ultimate way, but then in the conventional way, I agree with you guys that there are these functional and epistemic issues that closed individualism seems to remedy somewhat, which is why evolution, I guess, selected for it: it's good for gene propagation and being selfish. But once one sees AI as just a new method of instantiating bliss, it doesn't matter where the bliss is. Bliss is bliss, and there's no such thing as your bliss or anyone else's bliss. Bliss is its own independent feature or property, and you don't really begin or end anywhere. You are an expression of a 13.7-billion-year-old system that's playing out.

The universe is just "peopleing" all of us at the same time, and when you get this view and see yourself as just a super thin slice of the evolution of consciousness and life, for me it's like: why do I really need to propagate my information into the future? I really don't think there's anything particularly special about the information of anyone who exists today. We want to preserve all of the good stuff and propagate it into the future, but people who seek immortality through AI, or seek any kind of continuation of what they believe to be their self, I see all of that as misguided, as wasting potentially better futures by trying to bring Windows 7 into the world of Windows 10.

Mike: This all gets very muddy when we try to merge human-level psychological drives, concepts, and adaptations with a fundamental-physics-level description of what is. I don't have a clear answer. I would say that it would be great to identify with consciousness itself, but at the same time, that's not necessarily super easy if you're suffering from depression or anxiety. So I just think that this is going to be an ongoing negotiation within society, and hopefully we can figure out ways in which everyone can move forward.

Andrés: There's an article I wrote called Consciousness vs. Replicators that gets to the heart of this issue. It sounds a little bit like good versus evil, but it really isn't. The true enemy here is replication for replication's sake. On the other hand, the only way in which we can ultimately benefit consciousness, at least in a plausible, evolutionarily stable way, is through replication. We need to find the balance between replication and benefit to consciousness that makes the whole system stable, good for consciousness, and resistant against defectors.

Mike: I would like to say that I really enjoy Max Tegmark's general frame of us living in a mathematical universe. One reframe of what we were just talking about, in those terms, is that there are patterns which have to do with identity, with valence, and with many other things. The grand goal is to understand what makes a pattern good or bad and to optimize our light cone for those sorts of patterns. This may have some counterintuitive implications; maybe closed individualism is actually a very adaptive thing that, in the long term, builds robust societies. It could be that that's not true, but I just think that taking the mathematical frame and the long-term frame is a very generative approach.

Lucas: Absolutely. Great. I just want to finish up here on two fun things. It seems like good and bad are real in your view. Do we live in heaven or hell?

Mike: Lots of quips come to mind here: hell is other people; or, nothing is good or bad but thinking makes it so. My pet theory, I should say, is that we live in something that is perhaps as close to heaven as is physically possible, the best of all possible worlds.

Lucas: I don’t always feel that way but why do you think that?

Mike: This gets into the weeds of theories about consciousness. It's this idea that we tend to think of consciousness on the human scale: is the human condition good or bad, is the balance of human experience on the good end, the heavenly end, or the hellish end? If we do have an objective theory of consciousness, we should be able to point it at things that are not human, and even things that are not biological. It may seem like a type error to do this, but we should be able to point it at stars and black holes and quantum fuzz. My pet theory, which is totally not validated, but is falsifiable, and this gets into Bostrom's simulation hypothesis, is that if we tally up the good valence and the bad valence in the universe, first of all, the human stuff might just be a rounding error.

Most of the value, in this sense the positive and negative valence, is found elsewhere, not in humanity. And second of all, I have this list in the last appendix of Principia Qualia of where massive amounts of consciousness could be hiding, in the cosmological sense. I'm very suspicious that the Big Bang starts with a very symmetrical state; I'll just leave it there. In a utilitarian sense, if you want to get a sense of whether we live in a place closer to heaven or hell, we should get a good theory of consciousness and point it at things that are not human; cosmological-scale events or objects would be very interesting to point it at. This will give a much clearer answer as to whether we live somewhere closer to heaven or hell than human intuition can.

Lucas: All right, great. You guys have been super generous with your time and I’ve really enjoyed this and learned a lot. Is there anything else you guys would like to wrap up on?

Mike: I would just like to say thank you so much for the interview and for reaching out and making this happen. It's been really fun on our side too.

Andrés: Yeah, wonderful questions, and it's very rare for an interviewer to have non-conventional views of identity to begin with, so it was really fun. Really appreciate it.

Lucas: Would you guys like to go ahead and plug anything? What’s the best place to follow you guys, Twitter, Facebook, blogs, website?

Mike: Our website is qualiaresearchinstitute.org, and we're working on getting a PayPal donate button out, but in the meantime you can send us some crypto. We're building out the organization, and if you want to read our stuff, a lot of it is linked from the website. You can also read my stuff at my blog, opentheory.net, and Andrés' at qualiacomputing.com.

Lucas: If you enjoyed this podcast, please subscribe, give it a like or share it on your preferred social media platform. We’ll be back again soon with another episode in the AI Alignment series.


Featured image credit: Alex Grey

Thoughts on the ‘Is-Ought Problem’ from a Qualia Realist Point of View

tl;dr If we construct a theory of meaning grounded in qualia and felt-sense, it is possible to congruently arrive at “should” statements on the basis of reason and “is” claims. Meaning grounded in qualia allows us to import the pleasure-pain axis and its phenomenal character to the same plane of discussion as factual and structural observations.

Introduction

The Is-Ought problem (also called “Hume’s guillotine”) is a classical philosophical conundrum. On the one hand people feel that our ethical obligations (at least the uncontroversial ones like “do not torture anyone for no reason”) are facts about reality in some important sense, but on the other hand, rigorously deriving such “moral facts” from facts about the universe appears to be a category error. Is there any physical fact that truly compels us to act in one way or another?

A friend recently asked about my thoughts on this question and I took the time to express them to the best of my knowledge.

Takeaways

I provide seven points of discussion that together can be used to make the case that “ought” judgements often, though not always, are on the same ontological footing as “is” claims. Namely, that they are references to the structure and quality of experience, whose ultimate nature is self-intimating (i.e. it reveals itself) and hence inaccessible to those who lack the physiological apparatus to instantiate it. In turn, we could say that within communities of beings who share the same self-intimating qualities of experience, the is/ought divide may not be completely unbridgeable.


Summaries of Question and Response

Summary of the question:

How does a “should” emerge at all? How can reason and/or principles and/or logic compel us to follow some moral code?

Summary of the response:

  1. If “ought” statements are to be part of our worldview, then they must refer to decisions about experiences: what kinds of experiences are better/worse, what experiences should or should not exist, etc.
  2. A shared sense of personal identity (e.g. Open Individualism – which posits that “we are all one consciousness”) allows us to make parallels between the quality of our experience and the experience of others. Hence if one grounds “oughts” on the self-intimating quality of one’s suffering, then we can also extrapolate that such “oughts” must exist in the experience of other sentient beings and that they are no less real “over there” simply because a different brain is generating them (general relativity shows that every “here and now” is equally real).
  3. Reduction cuts both ways: if the “fire in the equations of physics” can feel a certain way (e.g. bliss/pain) then objective causal descriptions of reality (about e.g. brain states) are implicitly referring to precisely that which has an “ought” quality. Thus physics may be inextricably connected with moral “oughts”.
  4. If one loses sight of the fact that one’s experience is the ultimate referent for meaning, it is possible to end up in nihilistic accounts of meaning (such as Quine’s indeterminacy of translation and Dennett’s inclusion of qualia within that framework). But if one grounds meaning in qualia, then suddenly both causality and value are on the same ontological footing (cf. Valence Realism).
  5. To see clearly the nature of value it is best to examine it at its extremes (such as MDMA bliss vs. the pain of kidney stones). Having such experiences illuminates the “ought” aspect of consciousness, in contrast to the typical quasi-anhedonic “normal everyday states of consciousness” that most people (and philosophers!) tend to reason from. It would be interesting to see philosophers discuss e.g. the Is-Ought problem while on MDMA.
  6. Claims by long-term meditators that “pleasure and pain, value and disvalue, good and bad, etc.” are an illusion, based on the experience of “dissolving value” in meditative states, are no more valid than claims by someone doped up on morphine that pain is an illusion. In brief: such claims are made in a state of consciousness that has lost touch with the actual quality of experience that gives (dis)value to consciousness.
  7. Admittedly, the idea that one state of consciousness can even refer to (let alone make value judgements about) other states of consciousness is very problematic. In what sense does “reference” even make sense? Every moment of experience only has access to its own content. We posit that this problem is not ultimately unsolvable, and that human concepts are currently mere prototypes of a much better future set of varieties of consciousness optimized for truth-finding. As a thought experiment to illustrate this possible future, consider a full-spectrum superintelligence capable of instantiating arbitrary modes of experience and impartially comparing them side by side in order to build a total order of consciousness (a toy sketch of this pairwise-comparison idea follows this list).
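As a toy illustration of that last thought experiment: given a pairwise comparison oracle, a total order follows from ordinary sorting. Everything here is a hypothetical stand-in (the `valence` numbers and the `compare_states` oracle are placeholders, not a claim about how such comparisons would actually be made):

```python
from functools import cmp_to_key

def compare_states(a, b):
    """Hypothetical side-by-side oracle: negative means `a` ranks below `b`."""
    return a["valence"] - b["valence"]  # placeholder metric

states = [
    {"name": "kidney stones", "valence": -9},
    {"name": "ordinary day", "valence": 1},
    {"name": "MDMA bliss", "valence": 8},
]

# Sorting with the pairwise oracle yields the total order of point 7.
total_order = sorted(states, key=cmp_to_key(compare_states))
print([s["name"] for s in total_order])
# -> ['kidney stones', 'ordinary day', 'MDMA bliss']
```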

Full Question and Response

Question:

I realized I don’t share some fundamental assumptions that seemed common amongst the people here [referring to the Qualia Research Institute and friends].

The most basic way I know how to phrase it is the notion that there’s some appeal to reason and/or principles and/or logic that compels us to follow some type of moral code.

A (possibly straw-man) instance is the notion I associate with effective altruism, namely, that one should choose a career based on its calculable contribution to human welfare. The assumption is that human welfare is what we “should” care about. Why should we? What’s compelling about trying to reconfigure ourselves from whatever we value at the moment to replacing that thing with human welfare (or anything else)? What makes us think we can even truly succeed in reconfiguring ourselves like this? The obvious pitfall seems to be we create some image of “goodness” that we try to live up to without ever being honest with ourselves and owning our authentic desires. IMO this issue is rampant in mainstream Christianity.

More generally, I don’t understand how a “should” emerges within moral philosophy at all. I understand how, starting with a want, say happiness, and noting a general tendency, such as that I become happy when I help others, one could deduce that helping others often is likely to result in a happy life. I might even say “I should help others” to myself, knowing it’s a strategy to get what I want. That’s not the type of “should” I’m talking about. What I’m talking about is “should” at the most basic level of one’s value structure. I don’t understand how any amount of reasoning could tell us what our most basic values and desires “should” be.

I would like to read something rigorous on this issue. I appreciate any references, as well as any elucidating replies.

Response:

This is a very important topic. I think it is great that you raise this question, as it stands at the core of many debates and arguments about ethics and morality. I think that one can indeed make a really strong case for the view that “ought” is simply never logically implied by any accurate and objective description of the world (the famous is/ought Humean guillotine). I understand that an objective assessment of all that is will usually be cast as a network of causal and structural relationships. By starting out with a network of causal and structural relationships and using logical inferences to arrive at further high-level facts, one is ultimately bound to arrive at conclusions that themselves are just structural and causal relationships. So where does the “ought” fit in here? Is it really just a manner of speaking? A linguistic spandrel that emerges from evolutionary history? It could really seem like it, and I admit that I do not have a silver bullet argument against this view.

However, I do think that eventually we will arrive at a post-Galilean understanding of consciousness, and that this understanding will itself allow us to point out exactly where, if at all, ethical imperatives are located and how they emerge. For now all I have is a series of observations that I hope can help you develop an intuition for how we are thinking about it, and why our take is original and novel (and not simply a rehashing of previous arguments or appeals to nature/intuition/guilt).

So without further ado I would like to lay out the following points on the table:

  1. I am of the mind that if any kind of “ought” is present in reality, it will involve decision-making about the quality of consciousness of subjects of experience. I do not think that it makes sense to talk about an ethical imperative that has anything to do with non-experiential properties of the universe, precisely because there would be no one affected by it. If there is an argument for caring about things that have no impact on any state of consciousness, I have yet to encounter it. So I will assume that the question refers to whether certain states of consciousness ought to or ought not to exist (and how to make trade-offs between them).
  2. I also think that personal identity is key for this discussion, but why this is the case will make sense in a moment. The short answer is that conscious value is self-intimating/self-revealing, and in order to pass judgement on something that you yourself (as a narrative being) will not get to experience, you need some confidence (or reasonable cause) to believe that the same self-intimating quality of experience is present in other narrative orbits that will not interact with you. For the same reasons as (1) above, it makes no sense to care about philosophical zombies (no matter how much they scream at you), but the same is the case for “conscious-value p-zombies” (where maybe they experience color qualia but do not experience hedonic tone, i.e., they can’t suffer).
  3. A very important concept that comes up again and again in our research is the notion that “reduction cuts both ways”. We take dual aspect monism seriously, and in this view we would consider the mathematical description of an experience and its qualia two sides of the same coin. Now, many people come here and say “the moment you reduce an experience of bliss to a mathematical equation you have removed any fuzzy morality from it and arrived at a purely objective and factual account which does not support an ‘ought ontology’”. But doing this mental move requires you to take the mathematical account as a superior ontology to that of the self-intimating quality of experience. In our view, these are two sides of the same coin. If mystical experiences are just a bunch of chemicals, then a bunch of chemicals can also be a mystical experience. To reiterate: reduction cuts both ways, and this happens with the value of experience to the same extent as it happens with the qualia of e.g. red or cinnamon.
  4. Mike Johnson tends to bring up Wittgenstein and Quine in connection with the “Is-Ought” problem because they are famous for ‘reducing language and meaning’ to games and networks of relationships. But here you should realize that you can apply the concept developed in (3) above just as well to this matter. In our view, a view of language that has “words and objects” at its foundation is not a complete ontology, and nor is one that merely introduces language games to dissolve the mystery of meaning. What’s missing here is “felt sense”: the raw way in which concepts feel and operate on each other whether or not they are verbalized. It is my view that phenomenal binding becomes critical here, because a felt sense that corresponds to a word, concept, referent, etc. in itself encapsulates a large amount of information simultaneously, and contains many invariants across a set of possible mental transformations that define what it is and what it is not. What’s more, felt senses are computationally powerful (rather than merely epiphenomenal). Consider Daniel Tammet’s mathematical feats, achieved by experiencing numbers in complex synesthetic ways that interact with each other in ways that are isomorphic to multiplication, factorization, etc.; he does this at competitive speeds. Language, in a sense, could be thought of as the surface of felt sense. Daniel Dennett famously argued that you can “Quine qualia” (meaning that you can explain them away with a groundless network of relationships and referents). We, on the opposite extreme, would bite the bullet of meaning and say that meaning itself is grounded in felt-sense and qualia. Thus colors, aromas, emotions, and thoughts, rather than being ultimately semantically groundless as Dennett would have it, turn out to be the very foundation of meaning.
  5. In light of the above, let’s consider some experiences that embody the strongest degree of the felt sense of “ought to be” and “ought not to be” that we know of. On the negative side, we have things like cluster headaches and kidney stones. On the positive side we have things like Samadhi, MDMA, and 5-MeO-DMT states of consciousness. I am personally more certain that the “ought not to be” aspect of experience is more real than the “ought to be” aspect of it, which is why I have a tendency (though no strong commitment) towards negative utilitarianism. When you touch a hot stove you get this involuntary reaction and associated valence qualia of “reality needs you to recoil from this”, and in such cases one has degrees of freedom into which to back off. But when experiencing cluster headaches and kidney stones, this sensation, that self-intimating felt-sense of ‘this ought not to be’, is omnidirectional. The experience is one in which one feels like every direction is negative, and in turn, at its extremes, one feels spiritually violated (“a major ethical emergency” is how a sufferer of cluster headaches recently described it to me). This brings me to…
  6. The apparent illusory nature of value in light of meditative deconstruction of felt-senses. As you put it elsewhere: “Introspectively – Meditators with deep experience typically report all concepts are delusion. This is realized in a very direct experiential way.” Here I am ambivalent, though my default response is to make sense of the meditation-induced feeling that “value is illusory” as itself an operation on one’s conscious topology that makes the value quality of experience get diminished or plugged out. Meditation masters will say things like “if you observe the pain very carefully, if you slice it into 30 tiny fragments per second, you will realize that the suffering you experience from it is an illusory construction”. And this kind of language itself is, IMO, liable to give off the illusion that the pain was illusory to begin with. But here I disagree. We don’t say that people who take a strong opioid to reduce acute pain are “gaining insight into the fundamental nature of pain” and that’s “why they stop experiencing it”. Rather, we understand that the strong opioid changes the neurological conditions in such a way that the quality of the pain itself is modified, which results in a duller, “asymbolic”, non-propagating, well-confined discomfort. In other words, strong opioids reduce the value-quality of pain by locally changing the nature of pain rather than by bringing about a realization of its ultimate nature. The same with meditation. The strongest difference here, I think, would be that opioids are preventing the spatial propagation of pain “symmetry breaking structures” across one’s experience and thus “confine pain to a small spatial location”, whereas meditation does something different that is better described as confining the pain to a small temporal region. This is hard to explain in full, and it will require us to fully formalize how the subjective arrow of time is constructed and how pain qualia can make copies across it. [By noting the pain very quickly one is, I believe, preventing it from building up and then having “secondary pain” which emerges from the cymatic resonance of the various lingering echoes of pain across one’s entire “pseudo-time arrow of experience”.] Sorry if this sounds like word salad; I am happy to unpack these concepts if needed, while also admitting that we are in early stages of the theoretical and empirical development.
  7. Finally, I will concede that the common sense view of “reference” is very deluded on many levels. The very notion that we can refer to an experience with another experience, that we can encode the properties of a different moment of experience in one’s current moment of experience, that we can talk about the “real world” or its “objective ethical values” or “moral duty”, is very far from sensical in the final analysis. Reference is very tricky, and I think that a full understanding of consciousness will do some severe violence to our common sense in this area. That, however, is different from the self-disclosing properties of experience such as red qualia and pain qualia. You can do away with all of common-sense reference while retaining a grounded understanding that “the constituents of the world are qualia values and their local binding relationships”. In turn, I do think that we can aim to do a decently good job of re-building from the ground up a good approximation of our common sense understanding of the world using “meaning grounded in qualia”, and once we do that we will be on a solid foundation (as opposed to the admittedly very messy, quasi-delusional character of thoughts as they exist today). Needless to say, this may also require us to change our state of consciousness. “Someday we will have thoughts like sunsets” – David Pearce.