LSD Ego Death: Where Hyperbolic Pseudo-Time Arrows Meet Geometric Fixed Points

Alternative Title: LSD Ego Death – A Play in Three Voices

[Epistemic Status: Academic, Casual, and Fictional Analysis of the phenomenology of LSD Ego Death]

Academic:

In this work we advance several novel interpretative frameworks to make sense of the distinct phenomenology that arises upon ingesting a high dose of LSD-25 (250μg+). It is often noted that LSD (lysergic acid diethylamide) changes in qualitative character as a function of dose, with a number of phase transitions worth discussing.

Casual:

You start reading an abstract of an academic publication on the topic of LSD phenomenology. What are the chances that you will gain any sense, any inkling, the most basic of hints, of what the high-dose LSD state is like by consuming this kind of media? Perhaps not zero, but insofar as the phenomenological paradigms in mainstream use in the 2020s are concerned, we can be reasonably certain that the piece of media won’t even touch the outer edges of the world of LSD-specific qualia. Right now, you can trust the publication to get the core methodological boundary conditions right: the mg/kg used, the standard deviation of people’s responses to questionnaire items, the increase in blood pressure at the peak. But you won’t find a rigorous account of either the phenomenal character (what the experience felt like, in detailed, colorful phenomenology with precise, reproducible parameters) or the semantic content (what the experience was about, the information it allowed you to process, the meaning computed) of the state. For that we need to blend in additional voices to complement the rigidly skeptical vibe and tone of the academic delivery method.

It’s for that reason that we will interweave a casual, matter-of-fact, “really trying to say what I mean in as many ways as I can, even if I sound silly or dumb” voice (namely, this one, duh!). Moreover, in order to address the speculative semantic content on its own terms, we shall also bring a fantastical voice into the mix.

Fantastical:

Fuck, you took too much. In many ways you knew that your new druggie friends weren’t to be trusted. Their MDMA pills were bunk, their weed was cheap, and they even pretended to drink fancier alcohol than they could realistically afford. So it was rather natural for you to assume that their acid tabs would be weak-ass. But alas, they turned out to have a really competent, niche, boutique, high-quality acid dealer. She lived only a few miles away and made her own acid, and dosed each tab at an actual, honest-to-God 120(±10)μg. She also had a lot of cats, for some reason (why this information was relayed to you only once you sobered up was not something you really understood – especially not the part about the cats). Thus, the 2.5 tabs in total you had just taken (well, you took 1/2, then 1, then 1, spaced one hour apart, and you had just taken the last dose, meaning you were still very much coming up, and coming up further by the minute) landed you squarely in the 300μg range. But you didn’t know this at the time. In fact, you suspected that the acid was hitting much more strongly than you anticipated for other reasons. You were expecting a 100-150 microgram trip, assuming each tab would be somewhere between 40 and 60μg. But perhaps you really were quite sleep deprived. Or one of the nootropics you had sampled last week turned out to have a longer half-life than you expected and was synergistic with LSD (coluracetam? schizandrol?). Or perhaps it was the mild phenibut withdrawal you were having (you took 2g 72 hours ago, which isn’t much, but LSD amplifies subtle patterns anyway). It wasn’t until about half an hour later, when the final tab started to kick in, that you realized the intensity of the trip was still climbing further than you expected, and it really, absolutely, had to be that the acid was much, much stronger than you thought possible; most likely over 250 mics, you quickly estimated, and realized the implications.

From experience, you knew that 300 micrograms would cause ego death for sure. Of course, people react differently to psychedelics. But in your case, ego death feelings start at around 150 micrograms, and by 225-250 they become all-consuming for at least some portion of the trip. Actually taking 300 micrograms was, for you, ego death overkill, meaning you were most likely not only going to lose it, but to be out for no less than an hour.

What do I mean by being out? And by losing it? The subjective component of the depersonalization that LSD causes is very difficult to explain. This is what this entire document is about. But we can start by describing what it is like from the outside. 

Academic:

The behavioral markers of high dose LSD intoxication include confusion and delusions, as well as visual distortions of sufficient intensity to overcome, block, and replace sensory-activated phenomena. The depersonalization and derealization characteristic of LSD-induced states of consciousness tend to involve themes concerning religious, mystical, fantastical, and science fiction semantic landscapes. It is currently not possible to deduce the phenomenal character of these states of consciousness from within with our mainstream research tools and without compromising the epistemological integrity of our scientists (having them consume the mind-altering substance would, of course, confound the rigor of the analysis).

Casual:

Look, when you “lose it” or when you “are out”, what happens from the outside is that you become an unpredictable executor of programs that seem completely random to any external observer. One moment you are quietly sitting on the grass, rocking back and forth. The next you stand up and walk around peacefully. You sit again, now for literally half an hour without moving. Then you suddenly jump up and run 100m without stopping. And then you ask whoever is there – no matter whether they are a kid, a grandmother, a cop, a sanitation professional, a sex worker, or a professor – “what do you think about ___?” (where ___ ∈ {consciousness, reality, God, Time, Infinity, Eternity, …}). Of course, here reality bifurcates depending on who it is that you happened to ask. A cop? You might end up arrested, probably via a short visit to a hospital first, and overall not have a great time. A kid? You could be in luck, and the kid might play along without identifying you as a threat, and you most likely continue on your journey without much problem. Or, in one of the bad timelines, you end up fighting the kid. Not good. If it was a grandmother, you might just activate random helpful programs, like helping her cross the street, and she might not have the faintest clue (and I mean not the absolute faintest fucking clue) that you’re depersonalized on LSD, thinking you’re God, and that in a very real, if only phenomenological, sense, it was literal Jesus / Christ Consciousness that helped her cross the street.

Under most conditions, the biggest danger that LSD poses is a bad valence reaction, which usually wears off after a few hours and tends to be educational in some way. But when taken at high doses and unsupervised, LSD states can turn into real hazards for the individual and the people around them. Not so much because of malice, or because it triggers animal-like behaviors (it can, rarely, but that’s not why it’s dangerous). The real problem with high-dose LSD states is that, when unsupervised, you may execute random behaviors without knowing who you are, where you are, or even what it is that you intend to achieve with the actions you are performing. It is therefore *paramount* that if you explore high doses of LSD you do it supervised.

Academic:

What constitutes a small, medium, or large dose of LSD is culture- and time-dependent. In the 60s, the average tab was between 200 and 400 micrograms. The typical LSD experience was one that included elements of death and rebirth, mystical unions, and complete loss of contact with reality for a period of time. In the present, however, tabs are closer to the 50-100μg range.

In “psychonaut” circles, which gather in internet forums like Bluelight, Reddit, and Erowid, a “high dose of LSD” might be considered to be 300 micrograms. But in real-world, less selected, typical contexts of use for psychedelic and empathogen drugs, like dance festivals, a “high dose” might be anything above 150 micrograms. In turn, OG psychonauts like Timothy Leary and Richard Alpert would routinely use doses in the 500-1000μg range as part of their own investigations. In contrast, in TIHKAL, Alexander Shulgin lists LSD’s dose range as 60-200 micrograms. Clearly, there is a wide spread of opinions and practices concerning LSD dosing. It is for this reason that one needs to contextualize, with historical and cultural details, the demographic topos in which one is discussing a “high dose of LSD”.

Fantastical:

Being out, and losing it, in your case, right now, would be disastrous. Why? Because you committed the cardinal sin of psychedelic exploration. You took a high dose of a full psychedelic (e.g. LSD, psilocybin, mescaline, DMT – less so 2C-B or AL-LAD, which have a lower ceiling of depersonalization[1]) without a sitter. Of course you didn’t intend to. You really just wanted to land in the comfortably manageable 100-150 microgram range. But now… now you’re deep into depersonalization-land, and alone. Who knows what you might do? Will you leave your apartment naked? Will someone call the cops? Will you end up in the hospital? You try to visualize future timelines and… something like 40% of them lead to arrest or the hospital or both. Damn it. It’s time to pull out all the stops and minimize the bad timelines.

You go to your drug cabinet and decide to take a GABAergic. Here is an important lesson, and where timelines might start to diverge as well. Dosing sedatives for psychedelic emergencies is a tricky issue. The problem is that sedatives themselves can cause confusion. So there are many stories you can find online of people who take a very large dose of alprazolam (Xanax) or similar (typically a benzo) and then end up both very confused and combative while also tripping really hard. Interestingly, here the added confusion of the sedative and its anxiolytic effect synergize to make you even more unpredictable. On the other hand, it is also quite easy to not take enough, in which case the anxiety and depersonalization of the LSD (or similar) continue to overpower the anxiolysis of the sedative.

You gather up all the “adult in the room” energy you can muster and make an educated guess: 600mg of gabapentin and 1g of phenibut. Yet, this will take a while to kick in, and you might depersonalize anytime and start wandering around. You need a plan in the meanwhile. 

Academic:

In the article The Pseudo-Time Arrow we introduced a model of phenomenal time that takes into account the following three assumptions and works out their implications:

  1. Indirect Realism About Perception
  2. Discrete Moments of Experience
  3. Qualia Structuralism

(1) is about how we live in a world-simulation and don’t access the world around us directly. (2) goes into how each moment of experience is itself a whole, such that whatever feeling of space and time we may have must be encoded in each moment of experience itself. And (3) states that for any given experience there is a mathematical object whose mathematical features are isomorphic to the phenomenology of the experience (first introduced in Principia Qualia by Michael E. Johnson).

Together, these assumptions entail that the feeling of the passage of time must be encoded as a mathematical feature in each moment of experience. In turn, we speculated that this feature is _implicit causality_ in networks of local binding. Of course the hypothesis is highly speculative, but it was supported by the tantalizing idea that a directed graph could represent different variants of phenomenal time (aka. “exotic phenomenal time”). In particular, this model could account for “moments of eternity”, “time loops”, and even the strange “time splitting/branching”.
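To make the directed-graph idea slightly more tangible, here is a toy sketch of how graph topology alone can distinguish an ordinary time arrow from “time loops” and “time splitting/branching”. All node names, graph structures, and helper functions below are hypothetical illustrations for this post, not the model’s actual formalism:

```python
# Sketch: phenomenal time as a directed graph of locally bound "moments".
# Edges encode implicit causality; the global topology of the graph then
# determines the felt "shape" of time. All structures are illustrative toys.

def reachable(graph, start):
    """Set of nodes reachable from `start` by following directed edges."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, []))
    return seen

def has_loop(graph):
    """A node reachable from its own successor marks a phenomenal 'time loop'."""
    return any(node in reachable(graph, succ)
               for node in graph for succ in graph[node])

def branches(graph):
    """Out-degree > 1 marks phenomenal 'time splitting/branching'."""
    return any(len(succs) > 1 for succs in graph.values())

# Ordinary time: a clean linear arrow a -> b -> c.
linear = {"a": ["b"], "b": ["c"], "c": []}

# Exotic time: causality folds back on itself (a "time loop")...
loop = {"a": ["b"], "b": ["c"], "c": ["a"]}

# ...or one moment seeds several co-present futures ("branching").
branching = {"a": ["b1", "b2"], "b1": [], "b2": []}
```

The same local ingredient (a directed edge between two moments) yields radically different global “feels of time” depending only on how the edges compose.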

Casual:

In some ways, for people like me, LSD is like crack. I have what I have come to call “hyperphilosophia”. I am the kind of person who feels like a failure if I don’t come up with a radically new way of seeing reality by the end of each day. I feel deeply vulnerable, but also deeply intimate, with the nature of reality. Nature at its deepest feels like a brother or sister; basement reality feels close, and in some way like a subtle reshuffling of myself. I like trippy ideas; I like to have my thoughts scrambled and then re-annealed in unexpected ways; I delight in combinatorial explosions, emergent effects, unexpected phase transitions, recursive patterns, and the computationally non-trivial. As a 6-year-old I used to say that I wanted to be a “physicist, mathematician, and inventor” (modeling my future career plans on Einstein and Edison). I got deeply depressed for a whole year at the age of 9 when I confronted our mortality head-on; then experienced a fantastic release at 16 during my first ego death (with weed, of all drugs!), when I got a taste of Open Individualism; only to feel depressed again at 20, now about eternal life and the suffering we’re bound to experience for the rest of time; before switching to pragmatic approaches to reducing suffering and achieving paradise à la David Pearce. Of course this is just a “roll of the dice”, and I’m sure I would be telling you about a different philosophical trajectory if we were to sample another timeline. But the point is that all my life I’ve expressed a really intense philosophical temperament. And it feels innate – nobody made me so philosophical – it just happened, as if driven by a force from the deep.

People like us are a certain type for sure, and I know this because out of the thousands of people I’ve met I’ve had the fortune of encountering a couple dozen who are like me in these respects. Whether they turned out to be physicists, artists, or meditators is a matter of personal preference (admittedly, a plurality of them is working on AI these days). And people of this type tend to have a deep interest in psychedelics, for the simple reason that psychedelics give them more of what they like than any other drug does.

Yes, a powerful pleasant body buzz is appreciated (heroin mellow, meth fizz, and the ring of the Rupa Jhanas are all indeed quite pleasant and intrinsically worthwhile states of consciousness – factoring out their long-term consequences [positive for the Jhanas, negative for heroin and meth]). But that’s not what makes life worth living for people who (suffer from / enjoy their condition of) hyperphilosophia. Rather, what drives us is the beauty of completely new perspectives that illuminate our understanding of reality one way or another. And LSD, among other tools, often really hits the nail on the head. That makes all the bad trips and the nerve-wracking anxiety of the state more than worth it in our minds.

One of the striking things about an LSD ego death that is incredibly stimulating from a philosophical perspective is how you handle the feeling of possible futures. Usually the way in which we navigate timelines (this is so seamless that we don’t usually realize how interesting and complex it is) is by imagining that a certain future is real and then “teleporting to it”. We of course don’t teleport to it. But we generate that feeling. And as we plan, we are in a way generating a bunch of wormholes from one future to another (one state of the world to another, chained through a series of actions). But our ability to do this is restricted by our capacity to generate definite, plausible, realistic and achievable chains of future states in our imagination.

On LSD this capacity can become severely impaired. In particular, we often realize that our sense of connection to near futures that we normally feel is in fact not grounded in reality. It’s a kind of mnemonic technique we employ for planning motor actions, but it feels from the inside as if we could control the nearby timelines. On LSD this capacity breaks down and one is forced to instead navigate possible futures via different means. In particular, something that begins to happen above 150 micrograms or so, is that when one imagines a possible future it lingers and refuses to fully collapse. You start experiencing a superposition of possible futures.

For an extreme example, see this quote (from this article) that I found in r/BitcoinMarkets, posted by Reddit user I_DID_LSD_ON_A_PLANE in 2016:

[Trip report of taking a high dose of LSD on an airplane]: So I had what you call “sonder”, a moment of clarity where I realized that I wasn’t the center of the universe, that everyone is just as important as me, everyone has loved ones, stories of lost love etc, they’re the main character in their own movies.

That’s when shit went quantum. All these stories begun sinking in to me. It was as if I was beginning to experience their stories simultaneously. And not just their stories, I began seeing the story of everyone I had ever met in my entire life flash before my eyes. And in this quantum experience, there was a voice that said something about Karma. The voice told me that the plane will crash and that I will be reborn again until the quota of my Karma is at -+0. So, for every ill deed I have done, I would have an ill deed committed to me. For every cheap T-shirt I purchased in my previous life, I would live the life of the poor Asian sweatshop worker sewing that T-shirt. For every hooker I fucked, I would live the life of a fucked hooker.

And it was as if thousands of versions of me was experiencing this moment. It is hard to explain, but in every situation where something could happen, both things happened and I experienced both timelines simultaneously. As I opened my eyes, I noticed how smoke was coming out of the top cabins in the plane. Luggage was falling out. I experienced the airplane crashing a thousand times, and I died and accepted death a thousand times, apologizing to the Karma God for my sins. There was a flash of the brightest white light imagineable and the thousand realities in which I died began fading off. Remaining was only one reality in which the crash didn’t happen. Where I was still sitting in the plane. I could still see the smoke coming out of the plane and as a air stewardess came walking by I asked her if everything was alright. She said “Yes, is everything alright with YOU?”.

Fantastical:

It had been some years since you did the LSD and Quantum Measurement experiment to decide whether the feeling of timelines splitting was in any way real. Two caveats about that experiment. First, it used quantum random number generators in Sydney whose outputs were no less than 100ms old by the time they were displayed on the screen. And second, you didn’t get the phenomenology of time splitting while on acid during the tests anyway. Still, having conducted the experiment at least provided some bounds on the phenomenon: literal superposition of timelines, if real, would need higher doses or fresher quantum random numbers. Either way, it reassured you somewhat that the effect wasn’t so strong that it could be detected easily.

But now you wish you had done the experiment more thoroughly. Because… the freaking feeling of timelines splitting is absolutely raging with intensity right now and you wish you could know if it’s for real or just a hallucination. And of course, even if just a hallucination, this absolutely changes your model of how phenomenal time must be encoded, because damn, if you can experience multiple timelines at once that means that the structure of experience that encodes time is much more malleable than you thought.

Academic:

A phenomenon reported on high dose LSD is the recursive stacking of internal monologues – this also leads to higher order intentionality and the cross-pollination of narrative voices due to their sudden mutual awareness…

Casual:

Huh? Interesting – I can hear a voice all of a sudden. It calls itself “Academic” and just said something about the stacking of narrative voices.

Fantastical:

It’s always fascinating how on LSD you get a kind of juxtaposition of narrative voices. And in this case, you now have an Academic, a Casual, and a Fantastical narrative stream each happening in a semi-parallel way. And at some point they started to become aware of each other. Commenting on each other. Interlacing and interweaving.

Casual:

Importantly, one of the limiting factors of the academic discourse is that it struggles to interweave detailed phenomenology into its analysis. Thankfully, with the LSD-induced narrative juxtaposition we have a chance to correct this.

Academic:

After reviewing in real time the phenomenology of how you are thinking about future timelines, I would like to posit that the phenomenal character of high dose LSD is characterized by a hyperbolic pseudo-time arrow.

This requires combining two paradigms discussed at the Qualia Research Institute: the pseudo-time arrow, which, as we explained, tries to make sense of phenomenal time in terms of a directed graph representing moments of experience; and the algorithmic reductions introduced in The Hyperbolic Geometry of DMT Experiences.

The latter deals with the idea that the geometry of our experience is the result of the balance between various forces. Qualia come up, get locally bound to other qualia, then disappear. Under normal circumstances, the network that emerges out of these brief connections has a standard Euclidean geometry (or rather, works as a projection of a Euclidean space, but I digress). But DMT perturbs the balance, in part by making more qualia appear, making them last longer, making them vibrate, and making them connect more with each other, which results in a network that has a hyperbolic geometry. In turn, the felt sense of being on DMT is one of _being_ a larger phenomenal space, which is hard to put into words, but possible with the right framework.

What we want to propose now is that on LSD in particular, the characteristic feeling of “timeline splitting” and the even more general “multiple timeline superposition” effect is the result of a hyperbolic geometry, not of phenomenal space as with DMT, but of phenomenal time. In turn, this can be summarized as: LSD induces a hyperbolic curvature along the pseudo-time arrow. 
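One toy way to cash this out: in hyperbolic geometry, the number of points within distance n of you grows exponentially with n, whereas in Euclidean geometry it grows only polynomially. If imagined futures linger uncollapsed and each spawns several successors of its own, then the number of co-present timelines n steps ahead grows exponentially – the discrete signature of hyperbolic growth. A minimal sketch (the branching factor is an arbitrary illustrative parameter, not a measured quantity):

```python
# Toy comparison: growth of "reachable futures" under ordinary planning,
# where each imagined future collapses to a single successor, vs. the
# high-dose regime, where futures linger uncollapsed and each spawns
# several successors. Exponential frontier growth is the discrete
# signature of hyperbolic geometry; flat/polynomial growth is Euclidean.

def frontier_sizes(branching_factor, steps):
    """Number of distinct timeline-endpoints n steps into the future."""
    sizes, frontier = [], 1
    for _ in range(steps):
        frontier *= branching_factor
        sizes.append(frontier)
    return sizes

euclidean_like = frontier_sizes(1, 6)   # sober: [1, 1, 1, 1, 1, 1]
hyperbolic_like = frontier_sizes(3, 6)  # [3, 9, 27, 81, 243, 729]
```

Six steps ahead, the sober planner tracks one timeline; the hyperbolic regime is already juggling hundreds – which is at least suggestive of why the state feels like a superposition of futures rather than a single thread.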

Casual:

Indeed, one of the deeply unsettling things about high dose LSD experiences is that you get the feeling that you have knowledge of multiple timelines. In fact, there is a strange sense of uncanny uncertainty about which timeline you are in. And here is where a rather scary trick is often played on us by the state.

The multiverse feels very palpable when the garbage collector of your phenomenal motor-planning scratchpad is broken and you just sort of accumulate plans without collapsing them (a kind of kinesthetic tracer effect).

Fantastical:

Ok, you need to condense your timelines. You can’t let _that_ many fall off the wagon, so to speak. You could depersonalize any moment. You decide that your best bet is to call a friend of yours. He is likely working, but lives in the city right next to yours and could probably get to your place in half an hour if you’re lucky.

> Hello! 

> Hello! I just got out of a meeting. What’s up?

> Er… ok, this is gonna sound strange. I… took too much LSD. And I think I need help.

> Are you ok? LSD is safe, right?

> Yeah, yeah. I think everything will be fine. But I need to collapse the possibility space. This is too much. I can’t deal with all of these timelines. If you come over at least we will be trimming a bunch of them and preventing me from wandering off thinking I’m God.

> Oh, wow. You don’t sound very high? That made sense, haha.

> Duuudde! I’m in a window of lucidity right now. We’re lucky you caught me in one. Please hurry, I don’t know how much longer I can hang in here. I’m about to experience ego death. What happens next is literally up to God, and I don’t know what his plans are.

Your friend says he’ll take an Uber or Lyft and be there as soon as he can. You try to relax. Reality is scolding you. Why did you take this risk? You should know better!

Casual:

One of the unsettling feelings about high dose LSD is that you get to feel how extremely precious and rare a human life is. We tend to imagine that reincarnation would simply work like this: you die and then 40 days later come back as a baby in India or China or the United States or Brazil or wherever, based on priors, and rarely in Iceland or a tiny Caribbean island. But no. Humans are a luxury reincarnation. Animal? Er, yeah, even animals are pretty rare. The more common form is simply the shape of some cosmic process or another, like intergalactic wind or superheated plasma inside a star. Any co-arising process that takes place in this Gigantic Field of Consciousness we find ourselves embedded in is a possible destination, for the simple reason that…

Academic:

The One-Electron Universe posits that there is only one particle in the entire cosmos, going forwards and backwards in time, interfering with itself, interweaving a pattern of path integrals that interlace with each other. If there is only one electron, then the chances of being a “human moment of experience” at a given point in time are vanishingly small. The electron paths whose pattern of superposition paints a moment of experience are but a tiny, vanishing fraction of the four-dimensional mass-density of the one electron in the block universe entailed by quantum mechanics.

Fantastical:

When you realize that you are the one electron in the universe you often experience a complex superposition of emotions. Of course this is limited by your imagination and emotional state. But if you’re clear-headed, curious, and generally open to exploring possibilities, here is where you feel like you are at the middle point of all reality.

You can access all 6 Realms from this central point, and in a way escape the sense of identification with any one of them. Alas, this is not something that one always achieves. It is easy to get caught up in a random stream and end up in, say, the God Realm, completely deluded, thinking you’re God. Or in the Hell Realm, thinking you’re damned forever somehow. Or in the Animal Realm, seeking simple body pleasures and comfort. Or in the Human World, being really puzzled and craving cognitively coherent explanations. Or in the Hungry Ghost dimension, where you are always looking to fill yourself up and perceive yourself as fundamentally empty and flawed. Or in the Titan Realm, which adds a perceptual filter where you feel that everything and everyone is in competition with you, and you derive your main source of satisfaction from pride and winning.

In the ideal case, during an LSD ego death you manage to hang out right at the center of this wheel, without falling into any of the particular realms. This is where the luminous awareness happens. And it is what feels like the central hub for the multiverse of consciousness, except in a positive, empowering way.

Casual:

In many ways we could say that the scariest feeling during LSD ego death is the complete lack of control over your next rebirth.

Because if you, in a way, truly surrender to the “fact” that we’re all one and that it all happens in Eternity at the same time anyway… do you realize the ramifications that this has? Everything Everywhere All At Once is a freaking documentary.

Fantastical:

> Hello? What’s up?

> Yeah, er, are you coming over?

> Yes. I mean, you just called me… 5 minutes ago. Did you expect I’d be there already? I’m walking towards the Uber.

> Time is passing really slowly, and I’m really losing it now. Can you… please… maybe like, remind me who I am every, like, 30 seconds or so?

> Mmmm ok. I guess that’s a clear instruction. I can be helpful, sure.

[for the next 40 minutes, in the Uber headed to your place, your friend kept saying your name every 30 seconds, sometimes also his name, and sometimes reminding you where you are and why you called him – bless his soul]

Casual:

Imagine that you are God. You are walking around in the “Garden of Possibilities”. Except that we’re not talking about static possibilities. Rather, we’re talking about processes. Algorithms, really. You walk around and stumble upon a little set of instructions that, when executed, turns you into a little snowflake shape. Or perhaps turns you into a tree-like shape (cf. L-systems). When you’re lucky, it turns you into a beautiful crystalline flower. In these cases, the time that you spend embodying the process is small. Like a little popcorn reality: you encounter, consume, and move on. But every once in a while you encounter a set of instructions that could take a very long time to execute. Due to principles of computational irreducibility, it is also impossible in most cases for you to determine in advance how long the process will take. So every once in a while you encounter a Busy Beaver and end up taking a very, very, very long time to compute that process.

Busy beaver values for different parameters (source)

But guess what? You are God. You’re eternal. You are forever. You will always come back and continue on your walk. But oh boy, from the point of view of the experience of being what the Busy Beaver executes, you do exist for a very long time. From the point of view of God, no matter how long this process takes, it will still be a blink of an eye in the grand scheme of things. God has been countless times in Busy Beavers and will be countless times there again as well. So enjoy being a flower, or a caterpillar, or a raindrop, or even an electron, because most of the time you’re stuck being ridiculously long processes like the Busy Beaver.
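The Busy Beaver point can be made concrete with a tiny simulation. The champion 2-state, 2-symbol Turing machine halts after 6 steps, leaving 4 ones on the tape, and the corresponding record values grow uncomputably fast as states are added; computational irreducibility means that, in general, the only way to learn how long the process lasts is to run it:

```python
# Simulate the 2-state, 2-symbol Busy Beaver champion Turing machine.
# (state, symbol) -> (symbol to write, head move, next state); "H" halts.
RULES = {
    ("A", 0): (1, +1, "B"),
    ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"),
    ("B", 1): (1, +1, "H"),
}

def run(rules, max_steps=10_000):
    """Run the machine on a blank tape; return (steps taken, ones written)."""
    tape, head, state, steps = {}, 0, "A", 0
    while state != "H" and steps < max_steps:
        write, move, state = rules[(state, tape.get(head, 0))]
        tape[head] = write
        head += move
        steps += 1
    return steps, sum(tape.values())

# The champion halts after 6 steps with 4 ones -- but for machines with
# just a few more states, no shortcut exists: you must run them to know.
```

From the outside, `RULES` is four innocuous lines; from the inside, the only way to find out it takes 6 steps is to be those 6 steps – which is the whole point of the Garden of Possibilities metaphor.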

Academic:

Under the assumption that the hyperbolic pseudo-time arrow idea is on the right track, we can speculate about how this might come about from the configuration of a feedback system. As we’ve seen before, an important aspect of the phenomenal character of psychedelic states of consciousness is captured by the tracer pattern. Moreover, as we discussed in the video about DMT and hyperbolic geometry, one of the ways in which psychedelic states can be modeled is in terms of a feedback system with a certain level of noise. Assume that LSD produces a tracer effect where, approximately 15 times per second, you get a strobe and a replay-effect overlay on top of your current experience. What would this do to your representation of the passage of time and the way you parse possible futures?
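As a cartoon of this tracer-as-feedback idea (the strobe framing and the persistence constant below are illustrative assumptions, not measurements): each new frame of experience is overlaid on a decaying copy of the running buffer, so with enough feedback the past never fully clears out:

```python
# Toy tracer model: at each "strobe" (~15 per second, per the assumption
# in the text), the new sensory frame is overlaid on a decaying copy of
# the running buffer. With zero feedback the past clears instantly; with
# strong feedback old frames linger and stack -- the "replay overlay".

def run_feedback(inputs, persistence):
    """persistence in [0, 1): fraction of the old buffer kept per strobe."""
    buffer, trace = 0.0, []
    for x in inputs:
        buffer = x + persistence * buffer
        trace.append(buffer)
    return trace

impulse = [1.0] + [0.0] * 5            # a single flash of input, then nothing
sober = run_feedback(impulse, 0.0)     # the flash vanishes immediately
tripping = run_feedback(impulse, 0.8)  # the flash decays slowly, frame on frame
```

In the high-persistence regime, any given moment is a weighted stack of all recent moments – a plausible raw material for a representation of time in which past and possible states coexist rather than cleanly succeeding one another.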

FRAKSL video I made to illustrate hyperbolic pseudo-time arrows coming out of a feedback system (notice how change propagates fractally across the layers).

Casual:

I think that LSD’s characteristic “vibrational frequency” is somewhere between that of phenethylamines and that of tryptamines. 2C-B strikes me as in the 10 Hz range for most vibrations, whereas psilocybin is closer to 20 Hz. LSD might be around 15 Hz. And one of the high-level insights that the lens of connectome-specific harmonic modes (or, more recently, geometric eigenmodes) gives us is that functional localization and harmonic modulation might be intertwined. In other words, the reason why a particular part of the brain does what it does may be that it is a great tuning knob for the harmonic modes that critically hinge on that region of the brain. This overall lens was used by Michael E. Johnson in Principia Qualia to speculate that the pleasure centers are responsible for high variance in valence precisely because they are strategically positioned in a place where large-scale harmony can be easily modulated. With this sort of approach in mind (we could even call it a research aesthetic, where for every spatial pattern there is a temporal dynamic and vice versa), I reckon that part of what explains the _epistemological_ effects of LSD at high doses involves the saturation of specific frequencies of conscious compute. What do I mean by this?

Say indeed that a good approximation for a conscious state is a weighted sum of harmonic modes. This does not take into account the non-linearities (how the presence of a harmonic mode affects other ones), but it might be a great 60%-of-the-way-there kind of approximation. If so, I reckon that we use some “frequency bands” to store the kinds of information that are naturally encoded in rhythms of those specific frequencies. In this picture, it turns out that we have a sort of collection of inner clocks that are sampling the environment to pick up on patterns that repeat at different scales. We have a slow clock that samples every hour or so, one that samples every 10 minutes, one that samples every minute, every 10 seconds, every second, and then at 10, 20, 30, 40, and even 50 Hz. All of these inner clocks meet with each other to interlace and interweave a “fabric of subjective time”. When we want to know at a glance how we’re doing, we sample a fragment of this “fabric of subjective time”, and it contains information about how we’re doing right now, how we were doing a minute ago, an hour ago, a day ago, and even longer. Of course, sometimes we need to sample the fabric for a while in order to notice more subtle patterns. But the point is that our sense of reality in time seems to be constructed out of the co-occurrence of many metronomes at different scales.
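The “weighted sum of harmonic modes plus inner clocks” picture can be sketched concretely; every period and weight below is assumed purely for illustration, from a once-per-hour clock down to a 50 Hz mode:

```python
# Minimal sketch (all numbers assumed): a conscious state approximated
# as a weighted sum of harmonic modes, with "inner clocks" at very
# different timescales, from one cycle per hour up to 50 Hz.
import numpy as np

periods_s = [3600, 600, 60, 10, 1] + [1 / f for f in (10, 20, 30, 40, 50)]
weights = np.ones(len(periods_s)) / len(periods_s)

def time_fabric(t):
    """Sample the 'fabric of subjective time' at time t (seconds)."""
    phases = [np.sin(2 * np.pi * t / p) for p in periods_s]
    return float(np.dot(weights, phases))

# A single sample interlaces information from every timescale at once.
samples = [time_fabric(t) for t in np.linspace(0, 2.0, 5)]
print(samples)
```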

I think that, in particular, the spatio-temporal resonant modes that LSD over-excites the most are actually really load-bearing for constructing our sense of context. It’s as if when you energize one of these resonant modes too much, you actually push it into a smaller range of possible configurations (smooth sinusoidal waves rather than intricate textures). By super-saturating the energy in some of these harmonics on LSD, you flip over to a regime where there is really no available space for information to be encoded. You can therefore feel extremely alive and real, and yet when you query the “time fabric” you notice that there are big missing components. The information that you would usually get about who you are, where you are, what you have been doing for the last couple of hours, and so on, is instead replaced by a kind of eternal-seeming feeling of always having existed exactly as you currently are.
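One hedged way to quantify the “no available space for information” intuition: treat the state as a power spectrum over modes and measure its Shannon entropy. With toy numbers, super-saturating a single mode collapses the entropy, leaving little room for the other modes to encode anything:

```python
# Hedged illustration (toy numbers): as one resonant mode soaks up most
# of the energy, the Shannon entropy of the normalized power spectrum
# drops, i.e. less "room" for information in the remaining modes.
import numpy as np

def spectral_entropy(powers):
    p = np.asarray(powers, dtype=float)
    p = p / p.sum()           # normalize to a probability distribution
    p = p[p > 0]              # drop empty modes (0 * log 0 := 0)
    return float(-(p * np.log2(p)).sum())

balanced = [1, 1, 1, 1]       # energy spread across four modes
saturated = [97, 1, 1, 1]     # one mode super-saturated

print(spectral_entropy(balanced))   # 2.0 bits: full capacity
print(spectral_entropy(saturated))  # ≈0.24 bits: little room left
```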

Fantastical:

If it weren’t for your friend helpfully reminding you where you were and who you are, you would certainly have forgotten the nature of your context and for sure wandered off. The scene was shifting wildly, and each phenomenal object or construct was composed of a never-ending stream of gestalts competing for the space to take hold as the canonical representation (and yet, of course, always superseded by yet another “better fit”, constantly updating).

The feeling of the multiverse was crushing. Here is where you remembered how various pieces of media express aspects of the phenomenology of high dose LSD (warning: mild spoilers – for the movies and for reality as a whole):

  • Everything Everywhere All At Once: in the movie one tunes into other timelines in order to learn the skills that one has in those alternative lifepaths. But this comes with a side effect: you remain connected to the timeline from which you’re learning a skill. In other words, you form a bond across timelines that drags you down as the cost of accessing their skill. On high dose LSD you get the feeling that yes, you can learn a lot from visualizing other timelines, but you also incur the cost of loading up your sensory screen with information you can’t get rid of.
  • The Matrix: the connection is both the obvious one and a non-obvious one. First, yes, the reason this is relevant is because being inside a simulation might feel like a plausible hypothesis while on a high dose of LSD. But less intuitively, the Matrix also fits the bill when it comes to the handling of future-past interactions. The “Don’t worry about the vase” scene (which I imagine Zvi named his blog after) highlights that there is an intertwining between future and past that forges destiny. And many of the feelings about how the future and past are connected echo this theme on a high dose of LSD.
  • Rick and Morty (selected episodes):
    • Death Crystal: here the similarity is in how on LSD you feel that you can go to any given future timeline by imagining clearly a given outcome and then using it as a frame of reference to fill in the details backwards.
    • A Rickle in Time: how the timelines split but can in some ways remain aware of and affect each other.
    • Mortynight Run: In the fictional game Roy: A Life Well Lived you get to experience a whole human lifetime in what looks like minutes from the outside in order to test how you do in a different reality. 
  • Tenet: Here the premise is that you can go back in time, but only one second per second and using special gear (reversed air tanks, in their case).

Of these, perhaps the most surprising to people would be Tenet. So let me elaborate. There are two Tenet-like phenomenologies worth commenting on that you experience while your friend is on the way to pick you up:

One, what we could call the “don’t go this way” phenomenology. Here you get the feeling that you make a particular choice. E.g. go to the other room to take more gabapentin and see if that helps (of course it won’t – it’s only been 15 minutes since you took it and it hasn’t even kicked in). Then you visualize briefly what that timeline feels like, and you get the feeling of living through it. Suddenly you snap back into the present moment and decide not to go there. This leaves a taste in your mouth of having gone there, of having been there, of living through the whole thing, just to decide 10 years down the line that you would rather come back and make a different choice.

At the extreme of this phenomenology you find yourself feeling like you’ve lived every possible timeline. And in a way, you “realize” that you’re, in the words of Teafaerie, a deeply jaded God looking for an escape from endless loops. So you “remember” (cf. anamnesis) that you chose to forget on purpose so that you could live as a human in peace, believing others are real, humbly accepting a simple life, lost in a narrative of your own making. The “realization” can be crushing, of course, and is often a gateway to a particular kind of depersonalization/derealization where you walk around claiming you’re God. Alas, this only happens in a sweet spot of intoxication, and since you went above even that, you’ll have a more thorough ego death.

Two, an even more unsettling Tenet-like phenomenology is the feeling that “other timelines are asking for your help – Big Time wants you to volunteer for the Time War!”. Here things go quantum, and completely bonkers. The feeling is the result of having the sense that you can navigate timelines with your mind in a much deeper way than, say, just making choices one at a time. This is a profound feeling, and conveying it in writing is of course a massive stretch. But even the Bering Strait was crossed by hominids once, and this stretch feels also crossable by us with the right ambition.

The multiverse is very large. You see, imagine what it would be like to restart college. One level here is where you start again from day 1. In other timelines you make different friends, read other books, take other classes, have other lovers, major in other disciplines. Now go back even a little further, to when the academic housing committee was making decisions about who goes to which dorm. There the multiverse diversifies, as you see a combinatorial explosion of possible dorm configurations. Go further back still, to when the admissions committee was making its decisions, and you get an even greater expansion of the multiverse where different class configurations are generated.

Now imagine being able to “search” this bulky multiverse. How do you search it? Of course you could go action by action. But due to chaos, within important parameters like the set of people you’re likely to meet, possibilities quickly get scrambled. The worlds where you chose that bike versus that other bike in that particular moment aren’t much more similar to each other than other random ways of partitioning the timelines. Rather, you need to find pivotal decisions, as well as _anchor feelings_. E.g., it really matters if a particular bad technology is discovered and deployed, because that drastically changes the texture of an entire category of timelines. It is better for you to search timelines via general vibes and feelings like that, because that will really segment the multiverse into meaningfully different outcomes. This is the way in which you can move along timelines on high doses of LSD. You generate the feeling of things “having been a certain way” and you try to leave everything else as loose and unconstrained as possible, so that you search through the path integral of superpositions of all possible worlds where the feeling arises, and every once in a while when you “sample” the superposition you get a plausible universe where this is real.

Now, on 150 or 200 micrograms this feels very hypothetical, and the activity can be quite fun. On 300 micrograms, this feels real. It is actually quite spooky, because you feel a lot of responsibility here. As if the way in which you chose to digest cosmic feelings right there could lock in either a positive or negative timeline for you and your loved ones.

Here is where the Time War comes into play. I didn’t choose this. I don’t like this meme. But it is part of the phenomenology, and I think it is better that we address it head-on rather than let it surprise you and screw you up in one way or another.

The sense of realism that high dose LSD produces is unreal. It feels so real that it feels dreamy. But importantly, the sense of future timelines being truly there in a way is often hard to escape. With this you often get a crushing sense of responsibility. And together with the “don’t go this way” you can experience a feeling of a sort of “ping pong with the multiverse of possibilities” where you feel like you go backwards and forwards in countless cycles searching for a viable, good future for yourself and for everyone. 

In some ways, you may feel like you go to the End of Times when you’ve lived all possible lifetimes and reconverge on the Godhead (I’m not making this up; this is a common type of experience for some reason). Importantly, you often feel like there are _powerful_ cosmic forces at play, that the reason for your life is profound, and that you are playing an important role in the development of consciousness. One might even experience corner-case exotic phenomenal time, like states of mind with two arrows of time that are perpendicular to each other (unpacking this would take us an entire new writeup, so we shall save it for another time). And sometimes you can feel like your moral fiber is being tested in incredibly uncomfortable ways by these exotic phenomenal time effects.

Here is an example.

As your sense of “awareness of other timelines” increases, so does your capacity to sense timelines where things are going really well and timelines where things are going really poorly. Like, there are timelines where your friend is also having a heart attack right now, and then those where he crashes on the way to your apartment, and those where there’s a meteorite falling on your city, and so on. Likewise, there’s one where he is about to win the lottery, where you are about to make a profound discovery about reality that stands the test of sober inquiry, where someone just discovered the cure for cancer, and so on. One unsettling feeling you often get on high dose LSD is that because you’re more or less looking at these possibilities “from the point of view of eternity”, in a way you are all of them at once. “Even the bad ones?” – yes, unsettlingly, even the bad ones. So the scary moral-fiber-testing thought that you sometimes get is whether you’d volunteer to be in one of the bad ones so that “a version of you gets to be in the good one”. In other words, if you’re everyone, wouldn’t you be willing to trade places? Oftentimes this is where Open Individualism gets scary and spooky, and where talking to someone else to get confirmation that there are parallel conscious narrative streams around is really helpful.

Casual:

We could say that LSD is like a completely different drug depending on the dose range you hit:

Below 50 micrograms it is like a stimulant with stoning undertones. A bit giggly, a bit dissociating, but pretty normal otherwise.

Between 50 and 150 you have a drug that is generally really entertaining, gentle, and for the most part manageable. You get a significant expansion in the room available to have thoughts and feelings, as if your inner scratch pads got doubled in size. Colors, sounds, and bodily feelings are all significantly intensified, but still feel like amplified versions of normal life.

Between 150 and 250 you get all of the super stereotypical psychedelic effects, with very noticeable drifting, tracers, symmetries, apophenia, recursive processes, and fractal interlocking landscapes. It is also somewhat dissociative and part of your experience might feel dreamy and blurry, while perhaps the majority of your field is sharp, bright, and very alive.

From 250 to 350 it turns into a multiverse travel situation, where you forget where you are and who you are, and at times that you even took a drug. You might be an electron for what feels like millions of years. You might witness a supernova in slow motion. You might spontaneously become absorbed into space (perhaps as a high-energy, high-dimensional version of the 5th Jhana). And you might feel like you hit some kind of God computer that compiles human lifetimes in order to learn about itself. You might also experience the feeling of a massive ball of light colliding with you that turns you into the Rainbow version of the Godhead for a time that might range between seconds and minutes. It’s a very intense experience.

And above? I don’t know, to be honest.

Academic:

The intermittent collapse into “eternity” reported on high dose LSD could perhaps be interpreted as stumbling into fixed points of a feedback system, similar to how pointing a camera directly at its own video feed at the right angle produces a perfectly static image. On the other hand, we might speculate that many of the “time branching” effects are instead the result of a feedback system where each iteration doubles the number of images (akin to using a mirror to cover a portion of the screen and reflect the uncovered part of the screen).
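Both regimes can be caricatured with a toy iteration; the map and its coefficients below are assumptions chosen only to make the contrast visible, not a model of the underlying neuroscience:

```python
# Toy model (assumed dynamics): two regimes of a video-feedback loop.
# 1) Camera aimed at its own feed: each frame is a contraction of the
#    last, so iteration converges to a fixed point (a static image).
# 2) Mirror covering part of the screen: each pass through the loop
#    doubles the number of image copies, a branching "time tree".

def contraction_fixed_point(x0, a=0.5, b=1.0, steps=60):
    """Iterate x -> a*x + b with |a| < 1; converges to b / (1 - a)."""
    x = x0
    for _ in range(steps):
        x = a * x + b
    return x

def image_count(iterations):
    """Mirror regime: copies double on every iteration."""
    return 2 ** iterations

print(contraction_fixed_point(10.0))  # ~2.0, the static "eternity" frame
print(image_count(10))                # 1024 branching timelines
```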

Video I made with FRAKSL in order to illustrate exactly the transition between a hyperbolic pseudo-time arrow and a geometric fixed point in a feedback system. This aims to capture the toggle during LSD ego death between experiencing multiple timelines and collapsing into moments of eternity.

Fantastical

You decide that you do want to keep playing the game. You don’t want to roll the dice. You don’t want to embrace Eternity, and with it, all of the timelines, even the ugly ones. You don’t want to be a volunteer in the Time War. You just want to be a normal person, though of course the knowledge you’ve gained would be tough to lose. So you have to make a choice. Either you forget what you learned, or you quit the game. What are you going to do?

As you start really peaking and the existential choice is presented to you, your friend finally arrives outside of your apartment. The entrance is very cinematic, as you witness it both from your phone as well as in real life, like the convergence of two parallel reality streams collapsing into a single intersubjective hologram via a parallax effect. It was intense.

Casual:

You have to admit, the juxtaposition of narrative streams with different stylistic proclivities really does enrich the human condition. In a way, this is one of the things that makes LSD so valuable: you get to experience simultaneously sets of vibes/stances/emotions/attitudes that would generally never co-exist. This is, at least in part, what might be responsible for increasing your psychological integration after the trip; you experience a kind of multi-context harmonization (cf. gestalt annealing). It’s why it’s hard to “hide from yourself on acid”: the mechanism that usually keeps our incoherent parts compartmentalized breaks down under intense generalized tracers that maintain interweaving, semi-parallel narrative streams. Importantly, the juxtaposition of narrative voices is computationally non-trivial. It expands the experiential base in a way that allows for fruitful cross-pollination between academic ways of thinking and our immediate phenomenology. Perhaps this is important from a scientific point of view.

Fantastical

With your friend in the apartment taking care of you – or rather, more precisely, reducing possibility-space to a manageable narrative smear, and an acceptable degree of leakage into bad timelines – you can finally relax. More so, the sedatives finally kick in, and the psychedelic effects reduce by maybe 20-25% in the span of an hour or so. You end up having an absolutely great time, and choose to keep playing the game. You forget you’re God, and decide to push the question of whether to fall into Nirvana for good till the next trip.


[1] LSD has a rather peculiar dose-response curve. It is not a “light” psychedelic, although it can certainly be used to have light experiences. Drugs like AL-LAD are sometimes described as relatively shallow in that they don’t produce the full depth of richness LSD does. Or 2C-B/2C-I, which tend to come with a more grounded sense of reality relative to the intensity of the sensory amplification. Or DMT, which despite its extreme reality-replacing effects, tends to nonetheless give you a sense of rhythm and timing that keeps the sense of self intact along some dimensions. LSD is a full psychedelic in that at higher doses it really deeply challenges one’s sense of reality. I have never heard of someone taking 2C-B at, say, 30mg and freaking out so badly that they believe that reality is about to end or that they are God and wish they didn’t know it. But on 200-400 micrograms of LSD this is routine. Of course you may not externalize it, but the “egocidal” effects of acid are powerful and hard to miss, and they are in some ways much deeper and more transformative than the colorful show of DMT or the love of MDMA, because LSD is ruthless in its insistence, methodical in its approach, and patient like water (which over decades can carve deep into rocks). As Christopher Bache says in LSD and the Mind of the Universe: “An LSD session grinds slow but it grinds fine. It gives us time to be engaged and changed by the realities we are encountering. I think this polishing influences both the eventual clarity of our perception in these states and what we are able to bring back from them, both in terms of healing and understanding”. There’s a real sense in which part of the power of LSD comes from its capacity to make you see something for long periods of time that under normal circumstances would have us flinch in a snap.

The View From My Topological Pocket: An Introduction to Field Topology for Solving the Boundary Problem

[Epistemic Status: informal and conversational, this piece provides an off-the-cuff discussion around the topological solution to the boundary problem. Please note that this isn’t intended to serve as a bulletproof argument; rather, it’s a guide through an intuitive explanation. While there might be errors, possibly even in reasoning, I believe they won’t fundamentally alter the overarching conceptual solution.]

This post is an informal and intuitive explanation for why we are looking into topology as a tentative solution to the phenomenal binding (or boundary) problem. In particular, this solution identifies moments of experience with topological pockets of fields of physics. We recently published a paper where we dive deeper into this explanation space, and concretely hypothesize that the key macroscopic boundary between subjects of experience is the result of topological segmentation in the electromagnetic field (see explainer video / author’s presentation at the Active Inference Institute).

The short explanation for why this is promising is that topological boundaries are objective and frame-invariant features of “basement reality” that have causal effects and thus can be recruited by natural selection for information-processing tasks. If the fields of physics are fields of qualia, topological boundaries of the fields corresponding to phenomenal boundaries between subjects would be an elegant way for a theory of consciousness to “carve nature at its joints”. This solution is very significant if true, because it entails, among other things, that classical digital computers are incapable of creating causally significant experiences: the experiences that emerge out of them are by default something akin to mind dust, and at best, if significant binding happens, they are epiphenomenal from the “point of view” of the computation being realized.

The route to develop an intuition about this topic that this post takes is to deconstruct the idea of a “point of view” as a “natural kind” and instead advocate for topological pockets being the place where information can non-trivially aggregate. This idea, once seen, is hard to unsee; it reframes how we think about what systems are, and even the nature of information itself.


One of the beautiful things about life is that you sometimes have the opportunity to experience a reality plot twist. We might believe one narrative has always been unfolding, only to realize that the true story was different all along. As they say, the rug can be pulled from under your feet.

The QRI memeplex is full of these reality plot twists. You thought that the “plot” of the universe was a battle between good and evil? Well, it turns out it is the struggle between consciousness and replicators instead. Or that what you want is particular states of the environment? Well, it turns out you’ve been pursuing particular configurations of your world simulation all along. You thought that pleasure and pain follow a linear scale? Well, it turns out the scales are closer to logarithmic in nature, with the ends of the distribution being orders of magnitude more intense than the lower ends. I think that along these lines, grasping how “points of view” and “moments of experience” are connected requires a significant reframe of how you conceptualize reality. Let’s dig in!

One of the motivations for this post is that I recently had a wonderful chat with Nir Lahav, who last year published an article that steelmans the view that consciousness is relativistic (see one of his presentations). I will likely discuss his work in more detail in the future. Importantly, talking to him reminded me that ever since the foundation of QRI, we have taken for granted the view that consciousness is frame-invariant, and worked from there. It felt self-evident to us that if something depends on the frame of reference from which you see it, it doesn’t have inherent existence. Our experiences (in particular, each discrete moment of experience), have inherent existence, and thus cannot be frame-dependent. Every experience is self-intimating, self-disclosing, and absolute. So how could it depend on a frame of reference? Alas, I know this is a rather loaded way of putting it and risks confusing a lot of people (for one, Buddhists might retort that experience is inherently “interdependent” and has no inherent existence, to which I would reply “we are talking about different things here”). So I am motivated to present a more fleshed out, yet intuitive, explanation for why we should expect consciousness to be frame-invariant and how, in our view, our solution to the boundary problem is in fact up to this challenge.

The main idea here is to show how frames of reference cannot bootstrap phenomenal binding. Indeed, “a point of view” that provides a frame of reference is more of a convenient abstraction that relies on us to bind, interpret, and coalesce pieces of information, than something with a solid ontological status that exists out there in the world. Rather, I will try to show how we are borrowing from our very own capacity for having unified information in order to put together the data that creates the construct of a “point of view”; importantly, this unity is not bootstrapped from other “points of view”, but draws from the texture of the fabric of reality itself. Namely, the field topology.


A scientific theory of consciousness must be able to explain the existence of consciousness, the nature and cause for the diverse array of qualia values and varieties (the palette problem), how consciousness is causally efficacious (avoid epiphenomenalism), and explain how the information content of each moment of experience is presented “all at once” (namely, the binding problem). I’ve talked extensively about these constraints in writings, videos, and interviews, but what I want to emphasize here is that these problems need to be addressed head on for a theory of consciousness to work at all. Keep these constraints in mind as we deconstruct the apparent solidity of frames of reference and the difficulty that arises in order to bootstrap causal and computational effects in connection to phenomenal binding out of a relativistic frame.

At a very high level, a fuzzy (but perhaps sufficient) intuition for what’s problematic when a theory of consciousness doesn’t seek frame-invariance is that you are trying to create something concrete with real and non-trivial causal effects and information content, out of fundamentally “fuzzy” parts.

In brief, ask yourself: can something fuzzy “observe” something fuzzy? How can fuzziness be used to bootstrap something non-fuzzy?

In a world of atoms and forces, “systems” or “things” or “objects” or “algorithms” or “experiences” or “computations” don’t exist intrinsically because there are no objective, frame-invariant, and causally significant ways to draw boundaries around them!

I hope to convince you that any sense of unity or coherence that you get from this picture of reality (a relativistic system with atoms and forces) is in fact a projection from your mind, that inhabits your mind, and is not out there in the world. You are looking at the system, and you are making connections between the parts, and indeed you are creating a hierarchy of interlocking gestalts to represent this entire conception of reality. But that is all in your mind! It’s a sort of map and territory confusion to believe that two fuzzy “systems” interacting with each other can somehow bootstrap a non-fuzzy ontological object (i.e., a requirement for a moment of experience).

I reckon that these vague explanations are in fact sufficient for some people to understand where I’m going. But some of you are probably clueless about what the problem is, and for good reason. This is never discussed in detail, and this is largely, I think, because people who think a lot about the problem don’t usually end up with a convincing solution. And in some cases, the result is that thinkers bite the bullet that there are only fuzzy patterns in reality.

How Many Fuzzy Computations Are There in a System?

Indeed, thinking of the universe as being made of particles and forces implies that computational processes are fuzzy (leaky, porous, open to interpretation, etc.). Now imagine thinking that *you* are one of such fuzzy computations. Having this as an unexamined background assumption gives rise to countless intractable paradoxes. The notion of a point of view, or a frame of reference, does not have real meaning here as the way to aggregate information doesn’t ultimately allow you to identify objective boundaries around packets of information (at least not boundaries that are more than merely-conventional in nature).

From this point of view (about points of view!), you realize that indeed there is no principled and objective way to find real individuals. You end up in Brian Tomasik’s fuzzy world of fuzzy individuals, as helpfully illustrated by this diagram:

Source: Fuzzy, Nested Minds Problematize Utilitarian Aggregation by Brian Tomasik

Brian Tomasik indeed identifies the problem of finding real boundaries between individuals as crucial for utilitarian calculations. And then, incredibly, he also admits that his ontological framework gives him no principled way of doing so (cf. Michael E. Johnson’s Against Functionalism for a detailed response). Indeed, according to Brian (from the same essay):

“Eric Schwitzgebel argues that ‘If Materialism Is True, the United States Is Probably Conscious’. But if the USA as a whole is conscious, how about each state? Each city? Each street? Each household? Each family? When a new government department is formed, does this create a new conscious entity? Do corporate mergers reduce the number of conscious entities? These seem like silly questions—and indeed, they are! But they arise when we try to individuate the world into separate, discrete minds. Ultimately, “we are all connected”, as they say. Individuation boundaries are artificial and don’t track anything ontologically or phenomenally fundamental (except maybe at the level of fundamental physical particles and structures). The distinction between an agent and its environment is just an edge that we draw around a clump of physics when it’s convenient to do so for certain purposes.

My own view is that every subsystem of the universe can be seen as conscious to some degree and in some way (functionalist panpsychism). In this case, the question of which systems count as individuals for aggregation becomes maximally problematic, since it seems we might need to count all the subsystems in the universe.”

Are you confused now? I hope so. Otherwise I’d worry about you.

Banana For Scale

A frame of reference is like a “banana for scale” but for both time and space. If you assume that the banana isn’t morphing, you can use how long it takes for waves emitted from different points in the banana to bounce back and return in order to infer the distance and location of physical objects around it. Your technologically equipped banana can play the role of a frame of reference in all but the most extreme of conditions (it probably won’t work as you approach a black hole, for very non-trivial reasons involving severe tidal forces, but it’ll work fine otherwise).
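The echo-timing trick the banana is being used for here is just round-trip ranging. A one-line sketch, assuming sound-like waves at roughly 343 m/s (the helper name and numbers are illustrative, not from the original):

```python
# Hypothetical echo-ranging helper: a wave leaves a point on the banana,
# bounces off an object, and returns after `round_trip_s` seconds.
# The one-way distance is speed * time / 2.
def echo_distance(round_trip_s, wave_speed_mps=343.0):
    return wave_speed_mps * round_trip_s / 2.0

print(echo_distance(0.01))  # an object ~1.715 meters away
```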

Now the question that I want to ask is: how does the banana “know itself”? Seriously, if you are using points in the banana as your frame of reference, you are, in fact, the one who is capable of interpreting the data coming from the banana to paint a picture of your environment. But the banana isn’t doing that. It is you! The banana is merely an instrument that takes measurements. Its unity is assumed rather than demonstrated. 


In fact, for the upper half of the banana to “comprehend” the shape of the other half (as well as its own), it must also rely on a presumed fixed frame of reference. However, it’s important to note that such information truly becomes meaningful only when interpreted by a human mind. In the realm of an atom-and-force-based ontology, the banana doesn’t precisely exist as a tangible entity. Your perception of it as a solid unit, providing direction and scale, is a practical assumption rather than an ontological certainty.

In fact, the moment we try to get a “frame of reference to know itself” you end up in an infinite regress, where smaller and smaller regions of the object are used as frames of reference to measure the rest. And yet, at no point does the information of these frames of reference “come together all at once”, except… of course… in your mind.

Are there ways to bootstrap a *something* that aggregates and simultaneously expresses the information gathered across the banana (used as a frame of reference)? If you build a camera to take a snapshot of, say, the information displayed at each coordinate of the banana, the picture you take will have spatial extension and suffer from the same problem. If you think that the point at the aperture can itself capture all of the information at once, you will encounter two problems. If you are thinking of an idealized point-sized aperture, then we run into the problem that points don’t have parts, and therefore can’t contain multiple pieces of information at once. And if you are talking about a real, physical type of aperture, you will find that it cannot be smaller than the diffraction limit. So now you have the problem of how to integrate all of the information *across the whole area of the aperture* when it cannot shrink further without losing critical information. In either case, you still don’t have anything, anywhere, that is capable of simultaneously expressing all of the information of the frame of reference you chose. Namely, the coordinates you measure using a banana.
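The diffraction limit invoked here can be made concrete with the Abbe criterion, d = λ/(2·NA), the standard optics formula for the smallest feature an aperture of numerical aperture NA can resolve at wavelength λ (the specific numbers below are illustrative, not from the original):

```python
def abbe_limit(wavelength_nm: float, numerical_aperture: float) -> float:
    """Smallest resolvable feature size d = lambda / (2 * NA), the Abbe diffraction limit."""
    return wavelength_nm / (2.0 * numerical_aperture)

# Green light (~550 nm) through an excellent oil-immersion objective (NA ~ 1.4):
print(abbe_limit(550, 1.4))  # ≈ 196 nm — no aperture of this kind resolves below this
```

So a physical aperture bottoms out at a couple hundred nanometers for visible light: far from the extensionless point that would be needed to hold all the information "at once".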

Let’s dig deeper. We are talking of a banana as a frame of reference. But what if we try to internalize the frame of reference? A lot of people like to think of themselves as the frame of reference that matters. But I ask you: what are your boundaries, and how do the parts within those boundaries agree on what is happening?

Let’s say your brain is the frame of reference. Intuitively, one might feel like “this object is real to itself”. But here is where the magic comes. Make the effort to carefully trace how signals or measurements propagate in an object such as the brain. Is it fundamentally different than what happens with a banana? There might be more shortcuts (e.g. long axons) and the wiring could have complex geometry, but neither of these properties can ultimately express information “all at once”. The principle of uniformity says that every part of the universe follows the same universal physical laws. The brain is not an exception. In a way, the brain is itself a possible *expression* of the laws of physics. And in this way, it is no different than a banana.

Sorry, your brain is not going to be a better “ground” for your frame of reference than a banana. And that is because the same infinite recursion that happened with the banana when we tried to use it to ground our frame of reference into something concrete happens with your brain. And also, the same problem happens when we try to “take a snapshot of the state of the brain”, i.e. that the information also doesn’t aggregate in a natural way even in a high-resolution picture of the brain. It still has spatial extension and lacks objective boundaries of any causal significance.

Every single point in your brain has a different view. The universe won’t say “There is a brain here! A self-intimating self-defining object! It is a natural boundary to use to ground a frame of reference!” There is nobody to do that! Are you starting to feel the groundlessness? The bizarre feeling that, hey, there is no rational way to actually set a frame of reference without it falling apart into a gazillion different pieces, all of which have the exact same problem? I’ve been there. For years. But there is a way out. Sort of. Keep reading.

The question that should be bubbling up to the surface right now is: who, or what, is in charge of aggregating points of view? And the answer is: this does not exist, and it is impossible for it to exist, if you start out in an ontology that has relativistic particles and forces as its core building blocks. There is no principled way to aggregate information across space and time that would result in the richness of simultaneous presentation of information that a typical human experience displays. If there is integration of information, and a sort of “all at once” presentation, the only kind of (principled) entity that this ontology would accept is the entire spacetime continuum as a gigantic object! But that’s not what we are. We are definite experiences with specific qualia and binding structures. We are not, as far as I can tell, the entire spacetime continuum all at once. (Or are we?).

If instead we focus on the fine structure of the field, we can look at mathematical features in it that would perhaps draw boundaries that are frame-invariant. Here is where a key insight becomes significant: the topology of a vector field is Lorentz invariant! Meaning, a Lorentz transformation will merely squeeze and shear, but never change topology on its own. Ok, I admit I am not 100% sure that this holds for all of the topological features of the electromagnetic field (Creon Levit recently raised some interesting technical points that might make some EM topological features frame-dependent; I’ve yet to fully understand his argument but look forward to engaging with it). But what we are really pointing at is the explanation space. A moment ago we were desperate to find a way to ground, say, the reality of a banana in order to use it as a frame of reference. We saw that the banana conceptualized as a collection of atoms and forces does not have this capacity. But we didn’t inquire into other possible physical (though perhaps not *atomistic*) features of the banana. Perhaps, and this is sheer speculation, the potassium ions in the banana peel form a tight electromagnetic mesh that creates a protective Faraday cage for this delicious fruit. In that case, well, the boundaries of that protective sheet would, interestingly, be frame invariant. A ground!

The 4th Dimension

There is a bit of a sleight of hand here, because I am not taking into account temporal depth, and so it is not entirely clear how large the banana, as a topological structure defined by the potassium ions’ protective sheet, really is (again, this is totally made up! for illustration purposes only). The trick here is to realize that, at least in so far as experiences go, we also have a temporal boundary. Relativistically, there shouldn’t be a hard distinction between temporal and spatial boundaries of a topological pocket of the field. In practice, of course, one will typically overwhelm the other, unless you approach the brain you are studying at close to the speed of light (not ideal laboratory conditions, I should add). In our paper, and for many years at QRI (iirc an insight by Michael Johnson in 2016 or so), we’ve talked about experiences having “temporal depth”. David Pearce posits that each fleeting macroscopic state of quantum coherence spanning the entire brain (the physical correlate of consciousness in his model) can last as little as a couple of femtoseconds. This does not seem to worry him: there is no reason why the contents of our experience would give us any explicit hint about our real temporal depth. I intuit that each moment of experience lasts much, much longer. I highly doubt that it can last longer than a hundred milliseconds, but I’m willing to entertain “pocket durations” of, say, a few dozen milliseconds. Just long enough for 40 Hz gamma oscillations to bring disparate cortical micropockets into coherence, and importantly, topological union, and have this new emergent object resonate (where waves bounce back and forth) and thus do wave computing worthwhile enough to pay the energetic cost of carefully modulating this binding operation. Now, this is the sort of “physical correlate of consciousness” I tend to entertain the most.
Experiences are fleeting (but not vanishingly so) pockets of the field that come together for computational and causal purposes worthwhile enough to pay the price of making them.
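The timescales just discussed can be checked with back-of-envelope arithmetic (numbers taken from the text; 40 Hz is the gamma band mentioned above): a femtosecond-scale pocket hosts a vanishing fraction of a gamma cycle, while a pocket a few dozen milliseconds long hosts one or more full cycles, which is what resonance would seem to require.

```python
def gamma_cycles(pocket_ms: float, gamma_hz: float = 40.0) -> float:
    """How many full gamma-band cycles fit inside a candidate 'pocket duration'."""
    return gamma_hz * pocket_ms / 1000.0

# Contrast the candidate timescales discussed above:
for label, ms in [("femtosecond coherence (~2 fs)", 2e-12),
                  ("a few dozen ms", 25.0),
                  ("upper bound (~100 ms)", 100.0)]:
    print(f"{label}: {gamma_cycles(ms):.3g} cycles at 40 Hz")
```

On this arithmetic, only the tens-of-milliseconds regime leaves room for waves to "bounce back and forth" even once within the pocket.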

An important clarification here is that now that we have this way of seeing frames of reference we can reconceptualize our previous confusion. We realize that simply labeling parts of reality with coordinates does not magically bring together the information content that can be obtained by integrating the signals read at each of those coordinates. But we suddenly have something that might be way better and more conceptually satisfying. Namely, literal topological objects with boundaries embedded in the spacetime continuum that contribute to the causal unfolding of reality and are absolute in their existence. These are the objective and real frames of reference we’ve been looking for!

What’s So Special About Field Topology?

Two key points:

  1. Topology is frame-invariant
  2. Topology is causally significant

As already mentioned, the Lorentz Transform can squish and distort, but it doesn’t change topology. The topology of the field is absolute, not relativistic.

The Lorentz Transform can squish and distort, but it doesn’t change topology (image source).
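The "squish but don't change topology" claim can be illustrated in 1+1 dimensions (a toy sketch in units where c = 1; it shows interval invariance and invertibility, which is what makes a boost a homeomorphism — not a full proof about field topology):

```python
import math

def boost(t: float, x: float, v: float):
    """Lorentz boost along x with velocity v, in units where c = 1."""
    g = 1.0 / math.sqrt(1.0 - v * v)  # Lorentz factor gamma
    return g * (t - v * x), g * (x - v * t)

def interval(t: float, x: float) -> float:
    """The invariant spacetime interval s^2 = t^2 - x^2 (1+1 dimensions)."""
    return t * t - x * x

t, x, v = 3.0, 1.0, 0.6
tb, xb = boost(t, x, v)
print(interval(t, x), interval(tb, xb))  # both ≈ 8.0: squished coordinates, same interval

# The boost is a smooth, invertible map — boosting back with -v recovers the point —
# i.e. a homeomorphism, which is why it cannot change topology on its own.
tt, xx = boost(tb, xb, -v)
print(round(tt, 12), round(xx, 12))  # → 3.0 1.0
```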

And field topology is also causally significant. There are _many_ examples of this, but let me just mention a very startling one: magnetic reconnection. This happens when the magnetic field lines change how they are connected. I mention this example because when one hears about “topological changes to the fields of physics” one may get the impression that such a thing happens only in extremely carefully controlled situations and at minuscule scales. Similar to the concerns about why quantum coherence is unlikely to play a significant role in the brain, one can get the impression that “the scales are simply off”. Significant quantum coherence typically happens over extremely small distances, for very short periods of time, and involving very few particles at a time, and thus, the argument goes, quantum coherence must be largely inconsequential at scales that could plausibly matter for the brain. But the case of field topology isn’t so delicate. Magnetic reconnection, in particular, takes place at extremely large scales, involving enormous amounts of matter and energy, with extremely consequential effects.

You know about solar flares? Solar flares are the strange phenomenon in the sun in which plasma is heated up to millions of kelvin and charged particles are accelerated to near the speed of light, leading to the emission of gigantic amounts of electromagnetic radiation, which in turn can ionize the lower levels of the Earth’s ionosphere, and thus disrupt radio communication (cf. radio blackouts). These extraordinary events are the result of the release of magnetic energy stored in the Sun’s corona via a topological change to the magnetic field! Namely, magnetic reconnection.

So here we have a real and tangible effect happening at a planetary (and stellar!) scale over the course of minutes to hours, involving enormous amounts of matter and energy, coming about from a non-trivial change to the topology of the fields of physics.

(example of magnetic reconnection; source)

Relatedly, coronal mass ejections (CMEs) also depend on changes to the topology of the EM field. My layman’s understanding of CMEs is that they are caused by the build-up of magnetic stress in the sun’s atmosphere, which can be triggered by a variety of factors, including uneven spinning and plasma convection currents. When this stress becomes too great, it can cause the magnetic field to twist and trap plasma in solar filaments, which can then be released into interplanetary space through magnetic reconnection. These events are truly enormous in scope (trillions of kilograms of mass ejected) and speed (traveling at thousands of kilometers per second).

CME captured by NASA (source)
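The speeds quoted above translate into Sun-to-Earth travel times of hours to days. Here is order-of-magnitude arithmetic (a crude constant-speed estimate, not from the original; real CMEs accelerate or decelerate in the solar wind):

```python
AU_KM = 149_597_870.7  # mean Sun-Earth distance in km (the astronomical unit)

def transit_hours(speed_km_s: float) -> float:
    """Hours for a CME front to cover 1 AU at constant speed."""
    return AU_KM / speed_km_s / 3600.0

for v in (500.0, 1000.0, 3000.0):  # slow, typical, and very fast CME speeds in km/s
    print(f"{v:>6.0f} km/s -> {transit_hours(v):5.1f} h")  # roughly 83, 42, and 14 hours
```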

It’s worth noting that this process is quite complex and not fully understood, and new research findings continue to illuminate its details. But the fact that topological effects are involved is well established. Here’s a video which I thought was… stellar. Personally, I think a program where people get familiar with the electromagnetic changes that happen in the sun by seeing them in simulations and with the sun visualized in many ways, might help us both better predict solar storms, and also help people empathize with the sun (or the topological pockets that it harbors!).

“The model showed differential rotation causes the sun’s magnetic fields to stretch and spread at different rates. The researchers demonstrated this constant process generates enough energy to form stealth coronal mass ejections over the course of roughly two weeks. The sun’s rotation increasingly stresses magnetic field lines over time, eventually warping them into a strained coil of energy. When enough tension builds, the coil expands and pinches off into a massive bubble of twisted magnetic fields — and without warning — the stealth coronal mass ejection quietly leaves the sun.” (source)

Solar flares and CMEs are just two rather spectacular macroscopic phenomena where field topology has non-trivial causal effects. But in fact there is a whole zoo of distinct non-trivial topological effects with causal implications, such as: how the topology of the Möbius strip can constrain optical resonant modes, how twisted topological defects in nematic liquid crystals make some images impossible, how the topology of eddy currents can be recruited for shock absorption (a.k.a. “magnetic braking”), how the Meissner–Ochsenfeld effect and flux pinning enable magnetic levitation, how Skyrmion bundles have potential applications for storing information in spintronic devices, and so on.

(source)

In brief, topological structures in the fields of physics can pave the way for us to identify the natural units that correspond to “moments of experience”. They are frame-invariant and causally significant, and as such they “carve nature at its joints” while being useful from the point of view of natural selection.

Can a Topological Pocket “Know Itself”?

Now the most interesting question arises. How does a topological pocket “know itself”? How can it act as a frame of reference for itself? How can it represent information about its environment if it does not have direct access to it? Well, this is in fact a very interesting area of research. Namely, how do you get the inside of a system with a clear and definite boundary to model its environment despite having only information accessible at its boundary and the resources contained within its boundary? This is a problem that evolution has dealt with for over a billion years (last time I checked). And, fascinatingly, it is also the subject of study of Active Inference and the Free Energy Principle, whose math, I believe, can be imported to the domain of *topological* boundaries in fields (cf. Markov Boundary).

Here is where qualia computing, attention and awareness, non-linear waves, self-organizing principles, and even optics become extremely relevant. Namely, we are talking about how the *interior shape* of a field could be used in the context of life. Of course the cell walls of even primitive cells are functionally (albeit perhaps not ontologically) a kind of objective and causally significant boundary where this applies. It is enormously adaptive for the cell to use its interior, somehow, to represent its environment (or at least relevant features thereof) in order to navigate, find food, avoid danger, and reproduce.

The situation becomes significantly more intricate when considering highly complex and “evolved” animals such as humans, which encompass numerous additional layers. A single moment of experience cannot be directly equated to a cell, as it does not function as a persistent topological boundary tasked with overseeing the replication of the entire organism. Instead, a moment of experience assumes a considerably more specific role. It acts as an exceptionally specialized topological niche within a vast network of transient, interconnected topological niches—often intricately nested and interwoven. Together, they form an immense structure equipped with the capability to replicate itself. Consequently, the Darwinian evolutionary dynamics of experiences operate on multiple levels. At the most fundamental level, experiences must be selected for their ability to competitively thrive in their immediate micro-environment. Simultaneously, at the broadest level, they must contribute valuable information processing functions that ultimately enhance the inclusive fitness of the entire organism. All the while, our experiences must seamlessly align and “fit well” across all the intermediary levels.

Visual metaphor for how myriad topological pockets in the brain could briefly fuse and become a single one, and then dissolve back into a multitude.

The way this is accomplished is by, in a way, “convincing the experience that it is the organism”. I know this sounds crazy. But ask yourself. Are you a person or an experience? Or neither? Think deeply about Empty Individualism and come back to this question. I reckon that you will find that when you identify with a moment of experience, it turns out that you are an experience *shaped* in the form of the necessary survival needs and reproductive opportunities that a very long-lived organism requires. The organism is fleetingly creating *you* for computational purposes. It’s weird, isn’t it?

The situation is complicated by the fact that it seems that the computational properties of topological pockets of qualia involve topological operations, such as fusion, fission, and the use of all kinds of internal boundaries. Moreover, the content of a particular experience leaves an imprint in the organism which can be picked up by the next experience. So what happens here is that when you pay really close attention, and you whisper to your mind, “who am I?”, the direct experiential answer will in fact be a slightly distorted version of the truth. And that is because you (a) are always changing and (b) can only use the shape of the previous experience(s) to fill the intentional content of your current experience. Hence, you cannot, at least not under normal circumstances, *really* turn awareness to itself and *be* a topological pocket that “knows itself”. For one, there is a finite speed of information propagation across the many topological pockets that ultimately feed to the central one. So, at any given point in time, there are regions of your experience of which you are *aware* but which you are not attending to.

This brings us to the special case. Can an experience be shaped in such a way that it attends to itself fully, rather than attend to parts of itself which contain information about the state of predecessor topological pockets? I don’t know, but I have a strong hunch that the answer is yes and that this is what a meditative cessation does. Namely, it is a particular configuration of the field where attention is perfectly, homogeneously, distributed throughout in such a way that absolutely nothing breaks the symmetry and the experience “knows itself fully” but lacks any room left to pass it on to the successor pockets. It is a bittersweet situation, really. But I also think that cessations, and indeed moments of very homogeneously distributed attention, are healing for the organism, and even, shall we say, for the soul. And that is because they are moments of complete relief from the discomfort of symmetry breaking of any sort. They teach you about how our world simulation is put together. And intellectually, they are especially fascinating because they may be the one special case in which the referent of an experience is exactly, directly, itself.

To be continued…


Acknowledgements

I am deeply grateful and extend my thanks to Chris Percy for his remarkable contributions and steadfast dedication to this field. His exceptional work has been instrumental in advancing QRI’s ideas within the academic realm. I also want to express my sincere appreciation to Michael Johnson and David Pearce for our enriching philosophical journey together. Our countless discussions on the causal properties of phenomenal binding and the temporal depth of experience have been truly illuminating. A special shout-out to Cube Flipper, Atai Barkai, Dan Girshovic, Nir Lahav, Creon Levit, and Bijan Fakhri for their recent insightful discussions and collaborative efforts in this area. Hunter, Maggie, Anders (RIP), and Marcin, for your exceptional help. Huge gratitude to our donors. And, of course, a big thank you to the vibrant “qualia community” for your unwavering support, kindness, and encouragement in pursuing this and other crucial research endeavors. Your love and care have been a constant source of motivation. Thank you so much!!!

Digital Computers Will Remain Unconscious Until They Recruit Physical Fields for Holistic Computing Using Well-Defined Topological Boundaries

[Epistemic Status: written off the top of my head, thought about it for over a decade]

What do we desire for a theory of consciousness?

We want it to explain why and how the structure of our experience is computationally relevant. Why would nature bother to wire, not only information per se, but our experiences in richly structured ways that seem to track task-relevant computation (though at times in elusive ways)?

I think we can derive an explanation here. It is both very theoretically satisfying and literally mind-bending. This allows us to rule out vast classes of computing systems as having no more than computationally trivial conscious experiences.

TL;DR: We have richly textured bound experiences precisely because the boundaries that individuate us also allow us to act as individuals in many ways. This individual behavior can reflect features of the state of the entire organism in energy-efficient ways. Evolution can recruit this individual, yet holistic, behavior due to its computational advantages. We think that the boundary might be the result of topological segmentation in physical fields.


Marr’s Levels of Analysis and the Being/Form Boundary

One lens we can use to analyze the possibility of sentience in systems is this conceptual boundary between “being” and “form”. Here “being” refers to the interiority of things: their intrinsic likeness. “Form”, on the other hand, refers to how they appear from the outside. Where you place the being/form boundary influences how you make sense of the world around you. One factor that seems to be at play for where you place the being/form boundary is your implicit background assumptions about consciousness. In particular, how you think of consciousness in relation to Marr’s levels of analysis:

  • If you locate consciousness at the computational (or behavioral) level, then the being/form boundary might be computation/behavior. In other words, sentience simply is the performance of certain functions in certain contexts.
  • If you locate it at the algorithmic level, then the being/form boundary might become algorithm/computation. Meaning that what matters for the inside is the algorithm, whereas the outside (the form) is the function the algorithm produces.
  • And if you locate it at the implementation level, you will find that you identify being with specific physical situations (such as phases of matter and energy) and form as the algorithms that they can instantiate. In turn, the being/form boundary looks like crystals & bubbles & knots of matter and energy vs. how they can be used from the outside to perform functions for each other.

How you approach the question of whether a given chatbot is sentient will drastically depend on where you place the being/form boundary.


Many arguments against the sentience of particular computer systems are based on algorithmic inadequacy. This, for example, takes the form of choosing a current computational theory of mind (e.g. global workspace theory) and checking if the algorithm at play has the bare bones you’d expect a mind to have. This is a meaningful kind of analysis. And if you locate the being/form boundary at the algorithmic level then this is the only kind of analysis that seems to make sense.

What stops people from making successful arguments concerning the implementation level of analysis is confusion about the function of consciousness. Without a clear function, which physical systems are or aren’t conscious seems inevitably to be an epiphenomenalist construct: drawing boundaries around systems with specific functions is an inherently fuzzy activity, and any criteria we choose for whether a system is performing a certain function will be at best a matter of degree (and opinion).

The way of thinking about phenomenal boundaries I’m presenting in this post will escape this trap.

But before we get there, it’s important to point out the usefulness of reasoning about the algorithmic layer:

Algorithmic Structuring as a Constraint

I think that most people who believe that digital sentience is possible will concede that at least in some situations The Chinese Room is not conscious. The extreme example is when the content of the Chinese Room turns out to be literally a lookup table. Here a simple algorithmic concern is sufficient to rule out its sentience: a lookup table does not have an inner state! And what it does, from the point of view of its inner workings, is the same no matter whether you relabel which input goes with which output. Whatever is inscribed in the lookup table (with however many replies and responses as part of the next query) is not something that the lookup table structurally has access to! The lookup table is, in an algorithmic sense, blind to what it is and what it does*. It has no mirror into itself.
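The lookup-table point can be made concrete with a toy contrast (illustrative names only, not from the original): the table’s reply is a pure function of the input string, with nothing inside to consult, while even a minimal stateful agent carries an internal representation forward between calls.

```python
# A lookup-table "mind": the output depends only on the literal input string.
# Relabeling inputs and outputs changes nothing about its inner workings — there are none.
lookup_table = {
    "hello": "hi there",
    "how are you?": "fine, thanks",
}

def table_reply(msg: str) -> str:
    return lookup_table.get(msg, "…")

# A minimal stateful agent: the reply depends on an internal representation
# that the system itself carries forward and can, in principle, inspect.
class StatefulAgent:
    def __init__(self) -> None:
        self.history: list[str] = []  # internal state the lookup table lacks

    def reply(self, msg: str) -> str:
        self.history.append(msg)
        return f"reply #{len(self.history)} to {msg!r}"

agent = StatefulAgent()
print(table_reply("hello"), "|", table_reply("hello"))  # identical: no inner state
print(agent.reply("hello"), "|", agent.reply("hello"))  # differ: state changed between calls
```

Of course, having *some* state is only a necessary condition, not a sufficient one; that is the point of the constraints discussed next.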

Algorithmic considerations are important. To not be a lookup table, we must have at least some internal representations. We must consider constraints on “meaningful experience”, such as probably having at least some of, or something analogous to: a decent number of working memory slots (and types), a good size of visual field, resolution of color in terms of Just Noticeable Differences, and so on. If your algorithm doesn’t even try to “render” its knowledge in some information-rich format, then it may lack the internal representations needed to really “understand”. Put another way: imagine that your experience is like a Holodeck. Ask the question of what is the lower bound on the computational throughput of each sensory modality and their interrelationships. Then see if the algorithm you think can “understand” has internal representations of that kind at all.

Steel-manning algorithmic concerns involves taking a hard look at the number of degrees of freedom of our inner world-simulation (in e.g. free-wheeling hallucinations) and making sure that there are implicit or explicit internal representations with roughly similar computational horsepower as those sensory channels.

I think that this is actually an easy constraint to meet relative to the challenge of actually creating sentient machines. But it’s a bare minimum. You can’t let yourself be fooled by a lookup table.

In practice, the AI researchers will just care about metrics like accuracy, meaning that they will use algorithmic systems with complex internal representations like ours only if it computationally pays off to do so! (Hanson in Age of Em makes the bet that it is worth simulating a whole high-performing human’s experience; Scott points out we’d all be on super-amphetamines). Me? I’m extremely skeptical that our current mindstates are algorithmically (or even thermodynamically!) optimal for maximally efficient work. But even if normal human consciousness or anything remotely like it were such a global optimum that any other big computational task routes around to it as an instrumental goal, I still think we would need to check whether the algorithm does in fact create adequate internal representations before we assign sentience to it.

Thankfully I don’t think we need to go there. I think that the most crucial consideration is that we can rule out a huge class of computing systems ever being conscious by identifying implementation-level constraints for bound experiences. Forget about the algorithmic level altogether for a moment. If your computing system cannot build a bound experience from the bottom up in such a way that it has meaningful holistic behavior, then no matter what you program into it, you will only have “mind dust” at best.

What We Want: Meaningful Boundaries

In order to solve the boundary problem we want to find “natural” boundaries in the world and scaffold off of them. We take on the starting assumption that the universe is a gigantic “field of consciousness”, and the question of how atoms come together to form experiences becomes how this field becomes individuated into experiences like ours. So we need to find out how boundaries arise in this field. But these are not just any boundaries: they are boundaries that are objective, frame-invariant, causally-significant, and computationally-useful. That is, boundaries you can do things with. Boundaries that explain why we are individuals and why creating individual bound experiences was evolutionarily adaptive; not only why it is merely possible but also advantageous.

My claim is that boundaries with such properties are possible, and indeed might explain a wide range of puzzles in psychology and neuroscience. The full conceptually satisfying explanation results from considering two interrelated claims and understanding what they entail together. The two interrelated claims are:

(1) Topological boundaries are frame-invariant and objective features of physics

(2) Such boundaries are causally significant and offer potential computational benefits

I think that these two claims combined have the potential to explain the phenomenal binding/boundary problem (of course assuming you are on board with the universe being a field of consciousness). They also explain why evolution was even capable of recruiting bound experiences for anything. Namely, that the same mechanism that logically entails individuation (topological boundaries) also has mathematical features useful for computation (examples given below). Our individual perspectives on the cosmos are the result of such individuality being a wrinkle in consciousness (so to speak) having non-trivial computational power.

In technical terms, I argue that a satisfactory solution to the boundary problem (1) avoids strong emergence, (2) sidesteps the hard problem of consciousness, (3) prevents the complication of epiphenomenalism, and (4) is compatible with the modern scientific world picture.

And the technical reason why topological segmentation provides the solution is that with it: (1) no strong emergence is required because behavioral holism is only weakly emergent on the laws of physics, (2) we sidestep the hard problem via panpsychism, (3) phenomenal binding is not epiphenomenal because the topological segments have holistic causal effects (such that evolution would have a reason to select for them), and (4) we build on top of the laws of physics rather than introduce new clauses to account for what happens in the nervous system. In this post you’ll get a general walkthrough of the solution. The fully rigorous, step by step, line of argumentation will be presented elsewhere. Please see the video for the detailed breakdown of alternative solutions to the binding/boundary problem and why they don’t work.

Holistic (Field) Computing

A very important move that we can make in order to explore this space is to ask ourselves if the way we think about a concept is overly restrictive. In the case of computation, I would claim that the concept is either applied extremely vaguely, or made rigorous in a way that narrows its application so much that it loses relevance. In the former case we have the tendency for people to equate consciousness with computation at a very abstract level (such as “resource gathering” and “making predictions” and “learning from mistakes”). In the latter we have cases where computation is defined in terms of computable functions. The conceptual mistake to avoid is to think that just because you can compute a function with a Turing machine, that therefore you are creating the same inner (bound or not) physical states along the way. And while yes, it would be possible to approximate the field behavior we will discuss below with a Turing machine, it would be computationally inefficient (as it would need to simulate a massively parallel system) and lack the bound inner states (and their computational speedups) needed for sentience.

The (conceptual engineering) move I’m suggesting we make is to first of all enrich our conception of computation. To notice that we’ve lived with an impoverished notion all along.

I suggest that our conception of computation needs to be broad enough to include bound states as possible meaningful inputs, internal steps and representations, and outputs. This enriched conception of computation would be capable of making sense of computing systems that work with very unusual inputs and outputs. For instance, it has no problem thinking of a computer that takes as input chaotic superfluid helium and returns soap bubble clusters as outputs. The reason to use such an exotic medium is not to add extra steps, but in fact to remove extra steps by letting physics do the hard work for you.


To illustrate just one example of what you can do with this enriched paradigm of computing I am trying to present to you, let’s now consider the hidden computational power of soap films. Say that you want to connect three poles with a wire. And you want to minimize how much wire you use. One option is to use trigonometry and linear algebra, another one is to use numerical simulations. But an elegant alternative is to create a model of the poles between two parallel planes and then submerge the structure in soapy water.

Letting the natural energy-minimizing property of soap bubbles find the shortest connection between three poles is an interesting way of performing a computation. It is uniquely adapted to the problem without needing tweaks or adjustments – the self-organizing principle will work the same (within reason) wherever you place the poles. You are deriving computational power from physics in a very customized way that nonetheless requires no tuning or external memory. And it’s all done simply by each point of the surface wanting to minimize its tension. Any non-minimal configuration will have potential energy, which then gets transformed into kinetic energy and makes it wobble, and as it wobbles it radiates out its excess energy until it reaches a configuration where it doesn’t wobble anymore. So you have to make the solution of your problem precisely a non-wobbly state!
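To make the soap-film trick concrete, here is a minimal numerical sketch (with made-up pole coordinates) of the same energy-minimization logic: a free junction point descends the total-wire-length “energy” the way each patch of film descends its tension, and it settles at the non-wobbly state (the Fermat point, where the three segments meet at 120°). This is, of course, the slow serial version of what the soap film does in parallel and for free.

```python
import math

# Hypothetical illustration: three poles; a free junction point descends the
# total-length "energy" the way a soap film relaxes its surface tension.
poles = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]  # made-up coordinates

def total_length(p):
    # Total wire needed to connect the junction p to all three poles.
    return sum(math.dist(p, q) for q in poles)

def grad(p, eps=1e-6):
    # Numerical gradient of the total-length "energy".
    fx = (total_length((p[0] + eps, p[1])) - total_length((p[0] - eps, p[1]))) / (2 * eps)
    fy = (total_length((p[0], p[1] + eps)) - total_length((p[0], p[1] - eps))) / (2 * eps)
    return fx, fy

p = (1.0, 1.0)  # initial guess for the junction
for _ in range(2000):  # "wobble" downhill toward the non-wobbly (minimal) state
    gx, gy = grad(p)
    p = (p[0] - 0.01 * gx, p[1] - 0.01 * gy)

# At the minimum (the Fermat point), the three segments meet at 120° and the
# gradient (the "net tension" on the junction) vanishes.
```

The soap film reaches the same configuration because any residual gradient is precisely a net pull that keeps the film wobbling.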

In this way of thinking about computation, an intrinsic part of the question about what kind of thing a computation is will depend on what physical processes were utilized to implement it. In essence, we can (and I think should) enrich our very conception of computation to include what kind of internal bound states the system is utilizing, and the extent to which the holistic physical effects of such inner states are computationally trivial or significant.

We can call this paradigm of computing “Holistic Computing”.

From Soap Bubbles to Ising Solvers Meeting Schedulers Implemented with Lasers

Let’s make a huge jump from soap water-based computation. A much more general case that is nonetheless in the same family as using soap bubbles for compute is having a way to efficiently solve the Ising problem. In particular, having an analog physics-based annealing method comes with unique computational benefits: it turns out that non-linear optics can do this very efficiently. You are in a certain way using the universe’s very frustration with the problem (don’t worry, I don’t think it suffers) to get it solved. Here is an amazing recent example: Ising Machines: Non-Von Neumann Computing with Nonlinear Optics – Alireza Marandi – 6/7/2019 (presented at Caltech).

The person who introduces Marandi in the video above is Kwabena Boahen, with whom I had the honor to take his course at Stanford (and play with the neurogrid!). Back in 2012 something like the neurogrid seemed like the obvious path to AGI. Today, ironically, people imagine scaling transformers is all you need. Tomorrow, we’ll recognize the importance of holistic field behavior and the boundary problem.

One way to get there on the computer science front will be by first demonstrating a niche set of applications where e.g. non-linear optics Ising solvers vastly outperform GPUs for energy minimization tasks in random graphs. But as the unique computational benefits become better understood, we will sooner or later switch from thinking about how to solve our particular problem to thinking about how to cast our particular problem as an Ising/energy-minimization problem so that physics solves it for us. It’s like having a powerful computer that only speaks a very specific alien language. If you can translate your problem into its own terms, it’ll solve it at lightning speed. If you can’t, it will be completely useless.
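To make the “translate your problem into the alien language” move concrete, here is a minimal classical sketch (emphatically not Marandi’s optical hardware): we cast max-cut of a toy graph as an Ising energy and let simulated annealing relax toward the ground state. The graph and annealing schedule are made up for illustration.

```python
import math
import random

# Toy illustration: cast max-cut of a small graph as an Ising energy
# E(s) = sum over edges of s_i * s_j (couplings J_ij = +1), then let
# simulated annealing relax the spins toward the ground state.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # a 4-cycle plus a chord
n = 4

nbrs = {i: [] for i in range(n)}
for i, j in edges:
    nbrs[i].append(j)
    nbrs[j].append(i)

def energy(s):
    return sum(s[i] * s[j] for i, j in edges)  # lower energy = larger cut

random.seed(0)
s = [random.choice([-1, 1]) for _ in range(n)]
best = s[:]
T = 2.0
for _ in range(5000):
    i = random.randrange(n)
    dE = -2 * s[i] * sum(s[j] for j in nbrs[i])  # energy change if s[i] flips
    if dE <= 0 or random.random() < math.exp(-dE / T):
        s[i] = -s[i]
        if energy(s) < energy(best):
            best = s[:]
    T *= 0.999  # cool down ("anneal")

# best now encodes the maximum cut: the two spin values are the two sides.
```

The translation step (problem → couplings) is exactly the part that stays hard; once it is done, the “relax to the ground state” part is what an optical Ising machine would do natively.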

Intelligence: Collecting and Applying Self-Organizing Principles

This takes us to the question of whether general intelligence is possible without switching to a Holistic Computing paradigm. Can you have generally intelligent (digital) chatbots? In some senses, yes. In perhaps the most significant sense, no.

Intelligence is a contentious topic (see here David Pearce’s helpful breakdown of 6 of its facets). One particular facet of intelligence that I find enormously fascinating and largely under-explored is the ability to make sense of new modes of consciousness and then recruit them for computational and aesthetic purposes. THC and music production have a long history of synergy, for instance. A composer who successfully uses THC to generate musical ideas others find novel and meaningful is applying this sort of intelligence. THC-induced states of consciousness are largely dysfunctional for a lot of tasks. But someone who utilizes the sort of intelligence (or meta-intelligence) I’m pointing to will pay attention to the features of experience that do have some novel use and lean on those. THC might impair working memory, but it also expands and stretches musical space. It intensifies reverb, softens rough edges in heart notes, increases emotional range, and adds synesthetic brown noise (which can enhance stochastic resonance). With wit and determination (and co-morbid THC/music addiction), musical artists exploit the oddities of THC musicality to great effect, some arguably much more successfully than others.

The kind of reframe that I’d like you to consider is that we are all in fact something akin to these stoner musicians. We were born with this qualia resonator with lots of cavities, kinds of waves, levels of coupling, and so on. And it took years for us to train it to make adaptive representations of the environment. Along the way, we all (typically) develop a huge repertoire of self-organizing principles we deploy to render what we believe is happening out there in the world. The reason why an experience of “meditation on the wetness of water” can be incredibly powerful is not because you are literally tuning into the resonant frequency of the water around you and in you. No, it’s something very different. You are creating the conditions for the self-organizing principle that we already use to render our experiences with water to take over as the primary organizer of our experience. Since this self-organizing principle does not, by its nature, generate a center, full absorption into “water consciousness” also has a no-self quality to it. Same with the other elements. Excitingly, this way of thinking also opens up our mind about how to craft meditations from first principles. Namely, by creating a periodic table of self-organizing principles and then systematically trying combinations until we identify the laws of qualia chemistry.

You have to come to realize that your brain’s relationship with self-organizing principles is like that of a Pokémon trainer and his Pokémon (ideally in a situation where Pokémon play the Glass Bead Game with each other rather than try to hurt each other – more on that later). Or perhaps like that of a mathematician and clever tricks for proofs, or a musician and rhythmic patterns, and so on. Your brain is a highly tamed inner space qualia warp drive usually working at 1% or less. It has stores of finely balanced and calibrated self-organizing principles that will generate the right atmospheric change to your experience at the drop of a hat. We are usually unaware of how many moods, personalities, contexts, and feelings of the passage of time there are – your brain tries to learn them all so it has them in store for whenever needed. All of a sudden: haze and rain, unfathomable wind, mercury resting motionless. What kind of qualia chemistry did your brain just use to try to render those concepts?

We are using features of consciousness (and the self-organizing principles it affords) to solve problems all the time without explicitly modeling this fact. In my conception of sentient intelligence, being able to recruit self-organizing principles of consciousness for meaningful computation is a pillar of any meaningfully intelligent mind. I think that largely this is what we are doing when humans become extremely good at something (from balancing discs to playing chess and empathizing with each other). We are creating very specialized qualia by finding the right self-organizing principles and then purifying/increasing their quality. To do an excellent modern day job that demands constraint satisfaction at multiple levels of analysis at once likely requires us to form something akin to High-Entropy Alloys of Consciousness. That is, we are usually a judiciously chosen mixture of many self-organizing principles balanced just right to produce a particular niche effect.

Meta-Intelligence

David Pearce’s conception of Full-spectrum Superintelligence is inspiring because it takes into account the state-space of consciousness (and what matters) in judging the quality of a certain intelligence in addition to more traditional metrics. Indeed, as another key conceptual engineering move, I suggest that we can and need to enrich our conception of intelligence in addition to our conception of computation.

So here is my attempt at enriching it further and adding another perspective. One way we can think of intelligence is as the ability to map a problem to a self-organizing principle that will “solve it for you” and having the capacity to instantiate that self-organizing principle. In other words, intelligence is, at least partly, about efficiency: you are successful to the extent that you can take a task that would generally require a large number of manual operations (which take time, effort, and are error-prone) and solve it in an “embodied” way.

Ultimately, a complex system like the one we use for empathy mixes both serial and parallel self-organizing principles for computation. Empathy is enormously cognitively demanding rather than merely a personality trait (e.g. agreeableness), as it requires a complex mirroring capacity that stores and processes information in efficient ways. Exploring exotic states of consciousness is even more computationally demanding. Both are error-prone.

Succinctly, I suggest we consider:

One key facet of intelligence is the capacity to solve problems by breaking them down into two distinct subproblems: (1) find a suitable self-organizing principle you can instantiate reliably, and (2) find out how to translate your problem into a format that this self-organizing principle can be pointed at, so that it solves the problem for you.

Here is a concrete example. If you want to disentangle a wire, you can try to first put it into a discrete data structure like a graph, and then get the skeleton of the knot in a way that allows you to simplify it with Reidemeister moves (and get lost in the algorithmic complexity of the task). Or you could simply follow the lead of Yu et al. 2021: make the surfaces repulsive and let this principle solve the problem for you.


These repulsion-based disentanglement algorithms are explained in this video. Importantly, how to do this effectively still needs fine-tuning. The method they ended up using was much faster than the (many) other ones they tried (a Full-Spectrum Superintelligence would, of course, be able to “wiggle” the wires a bit if they got stuck).

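For intuition only, here is a crude sketch of the repulsion principle (this is not the tangent-point energy or the Sobolev-preconditioned flow that Yu et al. 2021 actually use, and all the constants are made up): points along a chain repel each other while springs between neighbors keep it connected, and the chain spreads itself out with no explicit untangling logic whatsoever.

```python
import math

# Crude sketch of "make it repulsive and let physics untangle it":
# all-pairs repulsion spreads the points, neighbor springs keep the chain
# connected. Constants are arbitrary illustration values.
n = 12
pts = [(0.2 * i, 0.01 * (i % 3)) for i in range(n)]  # a crumpled chain

def step(pts, dt=0.001, rest=0.3, k=50.0):
    forces = [[0.0, 0.0] for _ in pts]
    for i in range(n):                       # pairwise repulsion (~1/d)
        for j in range(i + 1, n):
            dx, dy = pts[j][0] - pts[i][0], pts[j][1] - pts[i][1]
            d = math.hypot(dx, dy) + 1e-9
            f = 1.0 / d
            forces[i][0] -= f * dx / d; forces[i][1] -= f * dy / d
            forces[j][0] += f * dx / d; forces[j][1] += f * dy / d
    for i in range(n - 1):                   # springs preserve connectivity
        dx, dy = pts[i + 1][0] - pts[i][0], pts[i + 1][1] - pts[i][1]
        d = math.hypot(dx, dy) + 1e-9
        f = k * (d - rest)
        forces[i][0] += f * dx / d; forces[i][1] += f * dy / d
        forces[i + 1][0] -= f * dx / d; forces[i + 1][1] -= f * dy / d
    return [(p[0] + dt * fx, p[1] + dt * fy) for p, (fx, fy) in zip(pts, forces)]

for _ in range(3000):
    pts = step(pts)
# The chain relaxes toward an evenly spaced, spread-out configuration:
# no point-by-point planning, just a self-organizing principle doing the work.
```

The real method replaces the naive 1/d force with a carefully chosen energy and preconditioner, which is exactly the “fine-tuning” mentioned above.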

This is hopefully giving you new ways of thinking about computation and intelligence. The key point to realize is that these concepts are not set in stone, and to a large extent may limit our thinking about sentience and intelligence. 

Now, I don’t believe that if you simulate a self-organizing principle of this sort you will get a conscious mind. The whole point of using physics to solve your problem is that in some cases you get better performance than algorithmically representing a physical system and then using that simulation to instantiate self-organizing principles. Moreover, physics simulations, to the extent they are implemented in classical computers, will fail to generate the same field boundaries that would be present in the physical system. That said, physics-inspired simulations like Yu et al. 2021 are nonetheless enormously helpful to illustrate how to think of problem-solving with a massively parallel analog system.

Are Neural Cellular Automata Conscious?

The computational success of Neural Cellular Automata is primarily algorithmic. In essence, digitally implemented NCAs are exploring a paradigm of selection and amplification of self-organizing principles, which is indeed a very different way of thinking about computation. But critically, any NCA will still lack sentience. The main reasons are that they (a) don’t use physical fields with weak downward causation, and (b) don’t have a mechanism for binding/boundary making. Digitally-implemented cellular automata may have complex emergent behavior, but they generate no meaningful boundaries (i.e. objective, frame-invariant, causally-significant, and computationally-useful ones). That said, the computational aesthetic of NCAs can be fruitfully imported into the study of Holistic Field Computing, in that the techniques for selecting and amplifying self-organizing principles already solved for NCAs may have analogues in how the brain recruits physical self-organizing principles for computation.

Exotic States of Consciousness

Perhaps one of the most compelling demonstrations of the possible zoo (or jungle) of self-organizing principles, from which your brain recruits but a tiny narrow range, is to pay close attention to a DMT trip.

DMT states of consciousness are computationally non-trivial on many fronts. It is difficult to convey how enriched the set of experiential building blocks becomes in such states. Their scientific significance is hard to overstate. Importantly, the bulk of the computational power on DMT is dedicated to trying to make the experience feel good and not feel bad. The complexity involved in this task is often overwhelming. But one could envision a DMT-like state in which some parameters have been stabilized in order to recruit standardized self-organizing principles available only in a specific region of the energy-information landscape. I think that cataloguing the precise mathematical properties of the dynamics of attention and awareness on DMT will turn out to have enormous computational value. And a lot of this computational value will generally be pointed towards aesthetic goals.

To give you a hint of what I’m talking about: A useful QRI model (indeed, algorithmic reduction) of the phenomenology of DMT is that it (a) activates high-frequency metronomes that shake your experience and energize it with a high-frequency vibe, and (b) a new medium of wave propagation gets generated that allows very disparate parts of one’s experience to interact with one another.

3D Space Group (CEV on low dose DMT)

At a sufficient dose, DMT’s secondary effect also makes your experience feel sort of “wet” and “saturated”. Your whole being can feel mercurial and liquidy (cf: Plasmatis and Jim Jam). A friend speculates that’s what it’s like for an experience to be one where everything is touching everything else (all at once).

There are many Indra’s Net-type experiences in this space. In brief, experiences where “each part reflects every other part” are an energy minimum that also reduces prediction errors. And there is a fascinating non-trivial connection with the Free Energy Principle, where experiences that minimize internal prediction errors may display a lot of self-similarity.

To a first approximation, I posit that the complex geometry of DMT experiences is indeed the non-linearity of the DMT-induced wave-propagation medium becoming apparent when it is sufficiently energized (so that it transitions from the linear to the non-linear regime). In other words, the complex hallucinations are energized patterns of non-linear resonance trying to radiate out their excess energy. Indeed, as you come down you experience the phenomenon of condensation of shapes of qualia.

Now, we currently don’t know what computational problems this uncharted cornucopia of self-organizing principles could solve efficiently. The situation is analogous to that of the Ising solver discussed above: we have an incredibly powerful alien computer that will do wonders if we can speak its language, and nothing useful otherwise. Yes, DMT’s computational power is an alien computer in search of a problem that fits its technical requirements.

Vibe-To-Shape-And-Back

Michael Johnson, Selen Atasoy, and Steven Lehar all have shaped my thinking about resonance in the nervous system. Steven Lehar in particular brought to my attention non-linear resonance as a principle of computation. In essays like The Constructive Aspect of Visual Perception he presents a lot of visual illusions for which non-linear resonance works as a general explanatory principle (and then in The Grand Illusion he reveals how his insights were informed by psychonautic exploration).

One of the cool phenomenological observations Lehar made based on his exploration with DXM was that each phenomenal object has its own resonant frequency. In particular, each object is constructed with waves interfering with each other at a high-enough energy that they bounce off each other (i.e. are non-linear). The relative vibration of the phenomenal objects is a function of the frequencies of resonance of the waves of energy bouncing off each other that are constructing the objects.

In this way, we can start to see how a “vibe” can be attributed to a particular phenomenal object. In essence, long intervals will create lower resonant frequencies. And if you combine this insight with QRI paradigms, you see how the vibe of an experience can modulate the valence (e.g. soft ADSR envelopes and consonance feeling pleasant, for instance). Indeed, on DMT you get to experience the high-dimensional version of music theory, where the valence of a scene is a function of the crazy-complex network of pairwise interactions between phenomenal objects with specific vibratory characteristics. Give thanks to annealing because tuning this manually would be a nightmare.
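As a hint of how a pairwise “vibe interaction” could be quantified, here is a sketch using the standard simplified sensory-dissonance curve from psychoacoustics (Plomp–Levelt as parametrized by Sethares; the constants below are the usual textbook fit, not QRI-derived values). It is only a one-dimensional stand-in for the high-dimensional case described above.

```python
import math

# Simplified sensory-dissonance curve (after Plomp & Levelt / Sethares).
# Constants are the common textbook fit; treat the whole thing as a toy
# stand-in for "how roughly do two vibes interact".
def dissonance(f1, f2, a1=1.0, a2=1.0):
    fmin, fmax = min(f1, f2), max(f1, f2)
    s = 0.24 / (0.021 * fmin + 19.0)   # critical-bandwidth scaling
    x = s * (fmax - fmin)
    return a1 * a2 * (math.exp(-3.5 * x) - math.exp(-5.75 * x))

octave = dissonance(440.0, 880.0)  # widely separated partials: smooth
clash = dissonance(440.0, 466.0)   # a semitone apart: rough, unpleasant
# clash > octave: nearby-but-not-identical frequencies beat against each
# other, which is the simplest model of "vibe mismatch lowering valence".
```

A full valence model would sum such terms over every pair of interacting phenomenal objects, which is precisely why hand-tuning it would be a nightmare and annealing is doing us a favor.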

But then there is the “global” vibe…

Topological Pockets

So far I’ve provided examples of how Holistic Computing enriches our conception of intelligence, computing, and how it even shows up in our experience. But what I’ve yet to do is connect this with meaningful boundaries, as we set ourselves to do. In particular, I haven’t explained why Holistic Computing would arise out of topological boundaries.

For the purpose of this essay I’m defining a topological segment (or pocket) to be a maximal region whose points all locally belong to the same connected space: a region that cannot be expanded any further without that condition becoming false.

The Balloons’ Case

In the case of balloons this cashes out as: a topological segment is one where each point can go to any other point without having to go through connector points/lines/planes. It’s essentially the set of contiguous surfaces.

Now, each of these pockets can have both a rich set of connections to other pockets as well as intricate internal boundaries. The way we could justify Holistic Computing being relevant here is that the topological pockets trap energy, and thus allow the pocket to vibrate in ways that express a lot of holistic information. Each contiguous surface makes a sound that represents its entire shape, and thus behaves as a unit in at least this way.
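A toy version of this definition can be made computational: if we model the balloon surfaces as a graph whose edges mean “locally part of the same contiguous surface”, then the topological pockets are just the connected components. A minimal sketch (the two-balloon graph below is made up for illustration):

```python
from collections import deque

# Toy "topological segmentation": nodes are surface patches, edges mean
# "locally connected to". The pockets fall out as connected components.
def pockets(n, edges):
    adj = {i: [] for i in range(n)}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    seen, segments = set(), []
    for start in range(n):
        if start in seen:
            continue
        comp, queue = [], deque([start])
        seen.add(start)
        while queue:                      # breadth-first flood fill
            v = queue.popleft()
            comp.append(v)
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        segments.append(sorted(comp))
    return segments

# Two balloons sharing no surface point: two pockets, each a unit.
print(pockets(6, [(0, 1), (1, 2), (3, 4), (4, 5)]))  # → [[0, 1, 2], [3, 4, 5]]
```

The physically interesting part, of course, is not finding the components but the claim that each component traps energy and vibrates as a whole.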

The General Case

An important note here is that I am not claiming that (a) all topological boundaries can be used for Holistic Computing, or (b) that to have Holistic Computing you need topological boundaries. Rather, I’m claiming that the topological segmentation responsible for individuating experiences does have applications for Holistic Computing, that this conceptually makes sense, and that it is why evolution bothered to make us conscious. But in the general case, you probably do get quite a bit of Holistic Computing without topological segmentation and vice versa. For example, an LC circuit can be used for Holistic Computing on the basis of its steady analog resonance, but I’m not sure if it creates a topological pocket in the EM fields per se.

At this stage of the research we don’t have a leading candidate for the precise topological feature of fields responsible for this. But the explanation space is promising based on being able to satisfy theoretical constraints that no other theory we know of can.

But I can nonetheless provide a proof of concept for how a topological pocket does come with really impactful holism. Let’s dive in!

Getting Holistic Behavior Out of a Topological Pocket

Creating a topological pocket may be consequential in one of several ways. One option for getting holistic behavior arises if you can “trap” energy in the pocket. As a consequence, you will energize its harmonics. The particular way the whole thing vibrates is a function of the entire shape at once. So from the inside, every patch now has information about the whole (namely, by the vibration it feels!).**
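A minimal worked example of the “vibration carries global information” point, using the idealized vibrating-string formula f_n = n·v/(2L) (the wave speed below is just an assumed constant, and a 1D string is the simplest possible pocket): every point of the string oscillates as a superposition of these modes, and since each f_n depends on the total length L, the local motion at any patch encodes a global property of the whole.

```python
# Idealized string fixed at both ends: mode frequencies f_n = n * v / (2 * L).
# The wave speed v = 343.0 is an assumed illustrative constant.
def mode_frequencies(length, wave_speed=343.0, modes=4):
    return [n * wave_speed / (2 * length) for n in range(1, modes + 1)]

short_pocket = mode_frequencies(0.5)  # a small pocket rings higher
long_pocket = mode_frequencies(2.0)   # a large pocket rings lower

# Every mode frequency depends on the *total* length L: a patch anywhere on
# the string "feels" the global shape through the spectrum it vibrates with.
```

The same logic generalizes from length to full shape: the spectrum of a 2D or 3D pocket is a fingerprint of its entire geometry, which is what lets every patch carry information about the whole.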


One possible overarching self-organizing principle that the entire pocket may implement is valence-gradient ascent. In particular, some configurations of the field are more pleasant than others, and this has to do with the complexity of the global vibe. Essentially, the reason no part of it wants to be in a pocket with certain asymmetries is that those asymmetries make themselves known everywhere within the pocket by how the whole thing vibrates. Therefore, for the same reason a soap bubble can become spherical by each point on the surface trying to locally minimize tension, our experiences can become symmetrical and harmonious by having each “point” in them trying to maximize its local valence.

Self Mirroring

From Lehar’s Cartoon Epistemology

And here we arrive at perhaps one of the craziest but coolest aspects of Holistic Computing I’ve encountered. Essentially, if we go to the non-linear regime, then the whole vibe is not merely the weighted sum of the harmonics of the system. Rather, you might have waves interfere with each other in a concentrated fashion in the various cores/clusters, and in turn these become non-linear structures that will try to radiate out their energy. And to maximize valence there needs to be a harmony between the energy coming in and out of these dense non-linearities. In our phenomenology this may perhaps point to our typical self-consciousness. In brief, we have an internal avatar that “reflects” the state of the whole! We are self-mirroring machines! Now this is really non-trivial (and non-linear) Holistic Computing.

Cut From the Same Fabric

So here is where we get to the crux of the insight. Namely, that weakly emergent topological changes can simultaneously have non-trivial causal/computational effects while also solving the boundary problem. We avoid strong emergence but still get a kind of ontological emergence: since consciousness is being cut out of one huge fabric of consciousness, we don’t ever need strong emergence in the form of “consciousness out of the blue all of a sudden”. What you have instead is a kind of ontological birth of an individual. The boundary legitimately created a new being, even if in a way the total amount of consciousness is the same. This is of course an outrageous claim (that you can get “individuals” by e.g. twisting the electric field in just the right way). But I believe the alternatives are far crazier once you understand what they entail.

In a Nutshell

To summarize, we can rule out that any of the current computational systems implementing AI algorithms has anything but trivial consciousness. If there are topological pockets created by e.g. GPUs/TPUs, they are epiphenomenal – the system is designed so that only the local influences it has hardcoded can affect the behavior at each step.

The reason the brain is different is that it has open avenues for solving the boundary problem. In particular, a topological segmentation of the EM field would be a satisfying option, as it would simultaneously give us both holistic field behavior (computationally useful) and a genuine natural boundary. It extends the kind of model explored by Johnjoe McFadden (Conscious Electromagnetic Information Field) and Susan Pockett (Consciousness Is a Thing, Not a Process). They (rightfully) point out that the EM field can solve the binding problem. But the boundary problem then emerges in its place. With topological boundaries, finally, you can get meaningful boundaries (objective, frame-invariant, causally-significant, and computationally-useful).

This conceptual framework both clarifies what kind of system is at minimum required for sentience, and also opens up a research paradigm for systematically exploring topological features of the fields of physics and their plausible use by the nervous system.


* See the “Self Mirroring” section to contrast the self-blindness of a lookup table and the self-awareness of sentient beings.

** More symmetrical shapes will tend to have more clean resonant modes. So to the extent that symmetry tracks fitness on some level (e.g. ability to shed off entropy), then quickly estimating the spectral complexity of an experience can tell you how far it is from global symmetry and possibly health (explanation inspired by: Johnson’s Symmetry Theory of Homeostatic Regulation).



Many thanks to Michael Johnson, David Pearce, Anders & Maggie, and Steven Lehar for many discussions about the boundary/binding problem. Thanks to Anders & Maggie and to Mike for discussions about valence in this context. And thanks to Mike for offering a steel-man of epiphenomenalism. Many thank yous to all our supporters! Much love!

Infinite bliss!

7 Recent Videos: Consciousness vs. Replicators, High Entropy Alloys of Experience, Sleep Paralysis Stories, Free-Wheeling Hallucinations, Zero Ontology, The Tyranny of the Intentional Object, And A Language for Psychedelic Experiences

[See: Previous 7-video package]

A Universal Plot – Consciousness vs. Pure Replicators: Gene Servants or Blissful Autopoietic Beings? (link)

What is the point of it all? What does it all mean?

In this talk I explain how we can meaningfully address these questions with the frame of “consciousness vs. pure replicators”. This framework allows us to re-interpret and unify all previous “scales of moral/conceptual development”. In turn, it makes solving disagreements in a principled way possible.

“Consciousness vs. Pure Replicators” is what I call “the universal plot of reality”; it is the highest level of narrative that determines what is “relevant to the plot” at any given point in time.

Whether consciousness succeeds at gaining control of its destiny and embarks on a collective journey of self-authorship, or whether we all end up being subservient cogs to a self-replicating mega-system whose one and only utility function is to self-perpetuate, is truly up in the air right now. So what can we do to support the interests of consciousness, then?

To aid consciousness we need more than good intentions (though those are still a key ingredient): I discuss how game theoretical considerations entail that in order for consciousness to succeed we will need to judiciously ally with specific replicator strategies. Being a “cooperatebot” towards anyone who claims to care about consciousness makes you liable to being resource-pumped. You need to verify that something makes sense also from the point of view of game theory; without a way to verify the ultimate values of others, coordinating with them at this level becomes extremely challenging. I suggest that a mature technology of intelligent bliss with objectively verifiable effects would be a game-changer. Once you’ve seen “it” (i.e. optimized bliss consciousness) you join everyone else in self-organizing around it.

If the world is to be taken over by something that cares about the wellbeing of consciousness, what this takeover process looks like may blindside us all. The power of “universal love” conquering all obstacles and creating a paradise for all may not be a New Age fantasy after all. Given the appropriate technology, it may turn out to be a live option…

Topics Covered: Kegan Levels of Development, Spiral Dynamics, Model of Hierarchical Complexity, Meta-Modernism, Qualia Formalism, Valence Structuralism, Pleasure Principle, Open Individualism, Universal Darwinism, Battle Between Good and Evil, Balance Between Good and Evil, Gradients of Wisdom, Consciousness vs. Pure Replicators, Wild Animal Suffering, Mistrusting DMT Entities, Super-Cooperator Cluster, Metta/Lovingkindness, State-Dependent Sexuality, Wireheading, Cooperation Technology, Game-Changing as a Strategy.

~Qualia of the Day: Kala Namak~


High Entropy Alloys of Experience (link)

~Suggestion: Play a music album you like in the background while listening to this talk.~

How do we find the “gems” hidden in the state-space of consciousness?

In this talk I articulate why it is very likely that there is a huge number of undiscovered states of consciousness that are completely unique, irreducible, and holistically “special”.

In metallurgy, a high-entropy alloy (HEA) is a mixture of five or more metals in high proportions, often giving rise to a single phase. Some HEAs have been found to have extremely desirable properties from the point of view of materials science (such as achieving best-in-class yield strength and ultimate tensile strength at the same time). Given the huge space of possible mixtures of metals, finding these carefully balanced mixtures with unique emergent properties is both a science and an art. It calls for intelligent strategies to explore the state-space of possible alloys!

I suggest that in the realm of consciousness there are also states that would be appropriate to describe as “high entropy alloys of experience”. I go into how this framework can help us understand unique scents*. We then explore how the receptor affinity profiles of drugs, drug cocktails, and drug schedules can give rise to unique HEA-like states of mind. I then also discuss how memeplexes have various degrees of total complexity, and how this makes some more receptive to dealing with complexity in the world than others. I offer that I really appreciate the HEA-like memeplexes that get expressed in places like EAGlobal, The Science of Consciousness, and Psychedelic Science conferences. I conclude by reflecting on how a “productive mindset” or mood optimized for a specific intellectual job is likely to be HEA-like because it requires the careful balance between many different facets of the mind.

Topics you will master after seeing this talk for even just one time**: High Entropy Alloys, Bronze and Iron Age, Equiatomic Alloys, People Clusters in Parties, Scents, Sexual Orientation, Gay Fragrances, Memeplexes and Mindsets, Vibe of Groups, Energy Parameter, Frozen Food, Crystallites, Space Groups, The Science of Consciousness, EAGlobal, Psychedelic Science, Search Heuristics, DMT as “Competing Clusters of Synchrony”, Birthday Cake Flavor, Cellular Automata, Optimal Mood for Productivity.

*(HEAs: Le Male by JPG, Bleu de Chanel, Mitsouko by Guerlain. Non-HEAs: Tommy Girl by Tommy Hilfiger, Habit Rouge by Guerlain, Amazing Grace Ballet Rose by Philosophy)

**More like “topics barely touched upon”.

Further Readings:

Heterosexual males and females preferred odours from heterosexual males relative to gay males; gay males preferred odours from other gay males.

Source: Sense of smell is linked to sexual orientation, study reveals

If the goal is to avoid the formation of such phases, simply mixing together five or more elements in near-equiatomic concentrations is unlikely to be a useful approach. Even multi-component alloys that are initially single phase after solidification tend to separate into multiple metallic and intermetallic phases when annealed at intermediate temperatures.

Source: High-entropy Alloys (literature review)

Featured image source: @fractjack


6 Spooky Sleep Paralysis Stories (link)

I estimate that I have experienced between 100 and 200 sleep paralysis episodes, many of which were lucid (meaning that I knew I was experiencing sleep paralysis). In this video I articulate what I have learned from all of these experiences, share some particularly strange stories, and give you tips for how to get out of sleep paralysis if you find yourself trapped in it.

Topics Covered: Hyperbolic curvature in pasta, dream music, phenomenal viscosity, DXM, imperfect sensory gating, “radio is playing” hallucinations, Dredg – Album: El Cielo · Song: Scissor Lock, taking psychedelics while dreaming, lucid dreams, dopaminergics, controlling the powerful vibrations of sleep paralysis, recursive depth, false awakenings, whimpering, noting meditation, and techniques for escaping sleep paralysis.

~Qualia of the Day: Gigli/Campanelle Pasta~

Further Readings:

Niacinamide helps in sleep enhancement as evidenced in a 3-week study of six subjects with normal sleep patterns and two with insomnia using electroencephalograms, electromyograms, and electrooculograms to evaluate sleep patterns at baseline and after niacinamide treatment. There was a significant increase in REM sleep in all normal-sleeping subjects, but the two subjects with moderate to severe insomnia experienced significant increases in REM sleep by the third week; awake time was also significantly decreased (Robinson et al., 1977).

(source)

Free-Wheeling Hallucinations: Be the Free-Willed God of Your Inner World-Simulation (link)

Once you realize that you inhabit a world-simulation sustained by your neuronal soil it is natural to ask: why can’t I control its contents? Why can’t I make myself hallucinate whatever I want?

It can be frustrating to realize one lacks control over something that should be truly “ours” – our raw unmediated experience! We could, and perhaps should, be the rightful masters of our very own conscious experience, yet for the most part we remain powerless to explore its possible states at will.

In this video I discuss the existence of some states of consciousness in which you do own and control the contents of your experience. Think of it as acquiring an “experience editor”: an interface with your experience that enables you to modify it at will while keeping the modifications stable.

A lucid dream would be an example of a somewhat fluid and unreliable free-wheeling hallucination. The free-wheeling hallucinations I describe here are much more general, stable, reliable, intense, and hedonic than lucid dreams. What is more, spinning up free-wheeling hallucinations could amount to far more than just a fun activity. Doing so may come to be an extremely valuable tool for a new paradigm of consciousness research! All of the parameters of experience that remain outside of our control under normal circumstances can be studied (both from a first and third person point of view) while in a free-wheeling hallucination! One can conduct a sort of “qualia chemistry” and repeat experiments to get reliable accounts of the behavior of consciousness under exotic (yet controlled) circumstances. Artifacts such as the valence-symmetry correspondence can be inspected in detail. Ultimately, this paradigm may allow us to chart the state-space of consciousness in terms of “edit distances” or “sequences of symmetry-breaking operations” away from “formless consciousness”.
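The “edit distance” framing can be made concrete with the standard Levenshtein algorithm, which counts the minimum number of insertions, deletions, and substitutions between two sequences. Encoding states of consciousness as sequences of named symmetry-breaking operations is, of course, a purely illustrative assumption on my part:

```python
def edit_distance(a, b):
    """Levenshtein distance: minimum number of insertions, deletions,
    and substitutions needed to turn sequence a into sequence b."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i  # delete everything
    for j in range(n + 1):
        d[0][j] = j  # insert everything
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

# Hypothetical states encoded as sequences of symmetry-breaking operations:
formless = []
state_a = ["break_mirror", "add_vortex"]
state_b = ["break_mirror", "add_vortex", "curl"]
print(edit_distance(formless, state_a))  # 2 operations away from formlessness
print(edit_distance(state_a, state_b))   # 1 operation apart from each other
```

Under this (speculative) encoding, “distance from formless consciousness” simply becomes the length of the shortest operation sequence that reconstructs the state.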

I then go on to explain that “knowing everything about your world-simulation” does not entail that the experience will be boring. Hedonic tone can be dissociated from novelty, but we don’t even need to go that far. It suffices to point out that you can set up the parameters of your world-simulation so that it unfolds in a chaotic way, and thus is impossible to predict. Additionally, you cannot really predict what you yourself will think in the future, so the whole setup can continue to generate novelty almost indefinitely (up to one’s storage capacity/size of the state-space/heat death of the universe).

I conclude by exercising my free will.

Topics Covered: Energy Parameter, Predictive Coding, Free Energy Principle, Kolmogorov Complexity of Experience, Principia Qualia, Super Free Will, Quality Trip Reports, DXM + THC Combo, LSD + Ketamine + THC Combo, “Experience Editors”, Qualia Critters, Fire Kasina, Color Control, Qualia Chemistry, Agenthood, Coumarin, Chamomile Tea.

~Qualia of the Day: You Have to Watch the Video~

Further Readings:

Chamomile consists of several ingredients including coumarin, glycoside, herniarin, flavonoid, farnesol, nerolidol and germacranolide. Despite the presence of coumarin, as chamomile’s effect on the coagulation system has not yet been studied, it is unknown if a clinically significant drug-herb interaction exists with antiplatelet/anticoagulant drugs. However, until more information is available, it is not recommended to use these substances concurrently.

Source: Herbal medication: potential for adverse interactions with analgesic drugs

Why Does Anything Exist? Zero Ontology, Physical Information, and Pure Awareness (link)

Why is there something rather than nothing? In this video I take this question very seriously and approach it with optimism. I say (a) this is a meaningful and valid question, and (b) it has a real and satisfying answer. The overall explanation space I explore is that of David Pearce’s Zero Ontology, which postulates that the multiverse is implied by the preservation of “zero information”.

In order to understand Zero Ontology we need to do some conceptual groundwork. So I walk the listener (you, were you to accept this journey) through several concepts that make the question go from “impossible to answer” or even “meaningless” to something that at least conceivably seems possible to solve.

First, we need to sidestep the common tropes of our habitual modes of thinking, such as expecting answers to come in the form of “causal explanations”. No matter how you look at it, whether the universe extends back forever or not, a causal explanation for the origin of the universe is logically impossible to derive. Instead, we have to think in a radically different way, which is by way of frameworks for implication rather than causation. This opens us up to the possibility that exotic modes of thinking capable of representing what is entailed by “nothing” will show in turn that “something” follows from it. This helps us make sense of Pearce’s argument: the “nothing” we are looking for is not the “common sense” view of the term, but rather a more refined post-theoretical concept that is ill-fitted to the human mind for the time being.

In particular, Pearce focuses on how “no information” may be “what nothing is”. Thus, Zero Ontology attempts to formalize the “fact of inexistence” by reconceptualizing information as “ruling out possibilities”. Based on this alternate concept we see that math, physics, and phenomenology share the common thread of being possible to “construct out of nothing”. In math, the empty set can be used to derive all of arithmetic. In physics the Standard Model is a surprisingly simple theory that can be derived from first principles by imposing the “need for symmetry”. The total energy, charge, momentum, etc. of the universe is zero! And in phenomenology, we encounter a lot of cases where apparently all of the possible flavors of a qualia variety seem to “cancel out” into “pure being” or “raw awareness”. The simplest example is how experiencing “all phenomenal colors at once” (a kind of rainbow effect, but including magenta) seems to be interchangeable with “colorless phenomenal light”.
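The claim that the empty set can be used to derive arithmetic can be made literal with von Neumann ordinals, where 0 is defined as ∅ and n + 1 as n ∪ {n}. A small sketch of “numbers out of nothing”:

```python
# Von Neumann ordinals: 0 is the empty set, and n + 1 is n ∪ {n}.
zero = frozenset()

def succ(n):
    """Successor: the set containing all of n's elements plus n itself."""
    return frozenset(n | {n})

def ordinal(k):
    """Build the ordinal for the natural number k out of nothing but sets."""
    n = zero
    for _ in range(k):
        n = succ(n)
    return n

# Each ordinal contains exactly its predecessors, so len() recovers the number:
print(len(ordinal(3)))           # 3
# And "<" on frozensets is proper-subset order, which matches numeric order:
print(ordinal(2) < ordinal(3))   # True
```

The entire number line is thus generated from the empty set plus one structural rule, which is the spirit of the “construct out of nothing” thread Zero Ontology leans on.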

I tie all of this together and talk about how Zero Ontology allows us to reconceptualize “God/Being” as “unconstrained reality” or “boundarylessness”. I discuss how we could perhaps even probe Zero Ontology empirically in a direct way if we were to train enough physicists, mathematicians, philosophers, computer scientists, etc. to go into high Jhana or 5-MeO-DMT states and then quantify the properties of the fundamental fields implementing these experiences.

I conclude with an analogy to Borges’ Library of Babel (or a quantum version thereof) and why we may be in it. In fact, “be it”.

David Pearce: “A theory that explains everything explains nothing”, protests the critic of Everettian QM. To which we may reply, rather tentatively: yes, precisely.

Topics Covered: The Concept of Nothing, Three Characteristics, Illusion, Limitations of the Medium of Thought, Amusing Ourselves to Death, Redefining Information, Empty Set Arithmetic, Preserved Quantities of Physics, Symmetry and Noether’s Theorem, QFT, Path Integrals, Jhanas, 5-MeO-DMT, Symmetries in Qualia, Quantum Library of Babel, Black Hole Information Paradox.

~Qualia of the Day: Thinking About Nothing~



The Tyranny of the Intentional Object: Universal Addictions, Meaning Abuse, and Denied Self-Insights (link)

What is it that we truly want? Why do so many people believe that meaning is better than happiness?

In this talk I discuss what we call “the tyranny of the intentional object”, which refers to the tendency for the mind to believe that “what it wants” is semantically meaningful experiences. In reality, what we want under the surface is to avoid negative valence and achieve sustainable positive valence states of consciousness.

I explain that evolution has “hooked us” on particular sources of pleasure in such a way that this is not introspectively accessible to us. We often need specific semantic content to work as a “key” for the “lock” of high-valence states of consciousness. I explain how we are all born chronic (endogenous) opioid addicts, and how our reward architecture is so coercive that we often fail to recognize this because thinking about it makes us feel bad (thus ironically confirming the very situation we are trying to stay in denial about!).

I go on to provide my current thoughts on the nature of meaning. Beyond “sense and reference” we find that “felt-sense” is actually what meaning is “made of”. But not any kind of felt-sense. I posit that the felt-senses that we describe as richly meaningful tend to have the following properties: high levels of intention, coherence of attention field lines, a “field syntax”, and a high level of “potential to affect valence”. Valence and meaning are deeply connected but are not identical: we can find corner cases of high-valence but meaningless states of mind and vice versa (though they are rare).

Meaning is no less liable to be “abused” than hard drugs: we often find ourselves scratching the bottom of the barrel of our meaning-making structures when things go wrong. I advise against doing this, and instead endorse the use of equanimity when faced with the absurd and Chapman’s “Meaningness” approach: to think of meaning as a gradient rather than in black and white terms. Do take advantage of opportunities for high levels of meaning, but do not rely on them and think they are universal. Indeed “meaning abuse” is a recipe for broken hearts and misguided idealistic solutions to problems that can be easily addressed pragmatically.

Finally, I steelman the importance of “high-dimensional valence” and explain why, in turn, pursuing meaning is usually indeed much better than pursuing shallow pleasure.

~Qualia of the Day: Clean Air~

Further Readings:

[T]he heroin addict will do anything to get another fix: lie, cheat, steal and worse. Natural selection has stumbled on and harnessed Nature’s own version of heroin. Our endogenous opioid system ensures that biological life behaves in callous but genetically adaptive ways. […] All complex animal life is “paid” in junk: the addictive dribble of opioids in our hedonic hotspots released when we act in ways that tend to maximise the inclusive fitness of our genes in the environment of evolutionary adaptedness (EEA). The pleasure-pain axis is coercive. Barring self-deliverance, we can’t opt out. Our “reward” circuitry hardwires opioid addiction and the complex rationalisations it spawns. Human history confirms we’ll do anything to obtain more opioids to feed our habit. The mesolimbic dopamine system enables us to anticipate our next fix and act accordingly: an insidious interplay of “wanting” and “liking”. We enslave and kill billions of sentient beings from other species to gratify our cravings. We feed the corpses of our victims to our offspring. So the vicious cycle of abuse continues.

David Pearce: Quora Responses

A Language for Psychedelic Experiences: Algorithmic Reductions, Field Operators, and Dimensionality (link)

Psychedelic experiences are notoriously difficult to describe. But are they truly ineffable, or do we simply lack the words, syntax, and grammar to articulate them? Optimistically, groups who take seriously the exploration of exotic states of consciousness could create a common ground of semantic primitives to be independently verified and used as the building blocks of a language for the “psychedelic medium of thought”.

In this video I present some ideas for a possible “psychedelic language” based on QRI paradigms and recent experience reports. I go over the article “Algorithmic Reduction of Psychedelic States” and the value of breaking the psychedelic experience down into a minimal set of “basic effects” whose stacking and composition gives rise to the wild zoo of effects one observes. I point out that algorithmic reductions can have explanatory power even if they do not provide a clear answer about the nature of the substrate of experience. Importantly, since I wrote that article we have developed a far higher-resolution understanding of exotic states of consciousness:

We suggest that a remarkably fruitful strategy for pointing at a whole family of psychedelic effects comes in the form of “field operators” that change the qualitative properties of our experiential fields. I provide a detailed description of what we call the “world-sheet” of experience and how it encodes emotional and semantic content in its very structure. The world-sheet can have tension, relaxation, different types of resonance and buzzing entrainment, twisting, curling, divergence (with vortices and anti-vortices in the attention field-lines), dissonance, consonance, noise, release, curvature, holographic properties, and dimensionality. I explain that in a psychedelic state, you explore higher up regions in the “Hamiltonian of the field”, meaning that you instantiate field configurations with higher levels of energy. There, we observe interesting trade-offs between the hyperbolicity of the field and its dimensionality. It can instantiate fractals of many sorts (in polar, Cartesian, and other coordinate systems) by multi-scale entrainment. Time loops and moments of eternity result from this process iterated over all sensory modalities. The field contains meta-data implicitly encoded in its periphery which you can use for tacit information processing. Semantic content and preferences are encoded in terms of the patterns of attraction and repulsion of the attention-field lines. And so much more (watch the whole video for the entire story).

I conclude by saying that a steady meditation practice can be highly synergistic with psychedelics. Metta/loving-kindness can manifest in the form of smooth, coherent, high-dimensional, and consonant regions of the world-sheet and make the experience way more manageable, wholesome, and enriching. Equanimity, concentration, and sensory clarity are all synergistic with the state, and I posit that using “high-dimensionality” as the annealing target may accelerate the development of these traits in everyday life.

Please consider donating to QRI if you want to see this line of research make waves in academia and expand the Overton Window for the science of consciousness. Funds will allow us to carry out key scientific experiments to validate models and develop technologies to reduce suffering at scale: https://www.qualiaresearchinstitute.org/donate

~Qualia of the Day: The Phenomenal Silence of Each Field Modality~



That’s it for now!

Until next time!

Infinite bliss!

– Andrés

Ways of Thinking

Related to: On the Medium of Thought, John von Neumann, Early Isolation Tank Psychonautics: 1970s Trip Reports, Pseudo-Time Arrow, Thinking in Numbers, High-Entropy Alloys of Experience, A Single 3N-Dimensional Universe: Splitting vs. Decoherence, A New Way to Visualize General Relativity, Visual Quantum Physics, and Feynman’s QED Video Lectures (highly recommended!)


Transcript from the last section of the 1983 BBC interview of Richard Feynman “Fun to Imagine” (excerpt starts at 55:52):

Interviewer presumably asks: What is it like to think about your work?

Well, when I’m actually doing my own things, that I’m working in the high, deep, and esoteric stuff that I worry about, I don’t think I can describe very well what it is like… First of all it is like asking a centipede which leg comes after which. It happens quickly and I am not exactly sure… flashes and stuff goes on in the head. But I know it is a crazy mixture of partial differential equations, partial solving of the equations, then having some sort of picture of what’s happening that the equations are saying is happening, but they are not as well separated as the words that I’m using. And it’s a kind of a nutty thing. It’s very hard to describe and I don’t know that it does any good to describe. And something that struck me, that is very curious: I suspect that what goes on in every man’s head might be very, very different. The actual imagery or semi-imagery which comes is different. And that when we are talking to each other at these high and complicated levels, and we think we are speaking very well and we are communicating… but what we’re really doing is having some kind of big translation scheme going on for translating what this fellow says into our images. Which are very different.

I found that out because at the very lowest level, I won’t go into the details, but I got interested… well, I was doing some experiments. And I was trying to figure out something about our time sense. And so what I would do is, I would count trying to count to a minute. Actually, say I’d count to 48 and it would be one minute. So I’d calibrate myself and I would count a minute by counting to 48 (so it was not seconds what I counted, but close enough), and then it turns out if you repeat that you can do very accurately when you get to 48 or 47 or 49, not far off you are very close to a minute. And I would try to find out what affected that time sense, and whether I could do anything at the same time as I was counting and I found that I could do many things, but couldn’t do other things. I could… For example I had great difficulty doing this: I was in university and I had to get my laundry ready. And I was putting the socks out and I had to make a list of how many socks, something like six or eight pair of socks, and I couldn’t count them. Because the “counting machine” was being used and I couldn’t count them. Until I found out I could put them in a pattern and recognize the number. And so I learned a way after practicing by which I could go down on lines of type and newspapers and see them in groups. Three – three – three – one, that’s a group of ten, three – three – three – one… and so on without saying the numbers, just seeing the groupings and I could therefore count the lines of type (I practiced). In the newspaper, the same time I was counting internally the seconds, so I could do this fantastic trick of saying: “48! That’s one minute, and there are 67 lines of type”, you see? It was quite wonderful. And I discovered many things I could read while I was… I could read while I was counting and get an idea of what it was about. But I couldn’t speak, say anything. Because of course, when I was counting I sort of spoke to myself inside. 
I would say one, two, three… sort of in the head! Well, I went down to get breakfast and there was John Tukey, a mathematician down at Princeton at the same time, and we had many discussions, and I was telling him about these experiments and what I could do. And he says “that’s absurd!”. He says: “I don’t see why you would have any difficulty talking whatsoever, and I can’t possibly believe that you could read.” So he couldn’t believe all this. But we calibrated him, and it was 52 for him to get to 60 seconds or whatever, I don’t remember the numbers now. And then he’d say, “alright, what do you want me to say? Mary Had a Little Lamb… I can speak about anything. Blah, blah, blah, blah… 52!” It’s a minute, he was right. And I couldn’t possibly do that, and he wanted me to read because he couldn’t believe it. And then we compared notes and it turned out that when he thought of counting, what he did inside his head is that when he counted he saw a tape with numbers, that did clink, clink, clink [shows with his hand the turning and passing of a counting tape], and the tape would change with the numbers printed on it, which he could see. Well, since it’s sort of an optical system that he is using, and not voice, he could speak as much as he wanted. But if he wanted to read then he couldn’t look at his clock. Whereas for me it was the other way.

And that’s where I discovered, at least in this very simple operation of counting, the great difference in what goes on in the head when people think they are doing the same thing! And so it struck me therefore, if that’s already true at the most elementary level, that when we learn about mathematics, and the Bessel functions, and the exponentials, and the electric fields, and all these things… that the imagery and method by which we are storing it all and the way we are thinking about it… could be it really if we get into each other’s heads, entirely different? And in fact why somebody has sometimes a great deal of difficulty understanding when you are pointing to something which you see as obvious, and vice versa, it may be because it’s a little hard to translate what you just said into his particular framework and so on. Now I’m talking like a psychologist and you know I know nothing about this.

Suppose that little things behaved very differently than anything that was big. Anything that you are familiar with… because you see, as the animal evolves, and so on, as the brain evolves, it gets used to handling, and the brain is designed, for ordinary circumstances. But if the gut particles in the deep inner workings whereby some other rules and some other character they behave differently, they were very different than anything on a large scale, then there would be some kind of difficulty, you know, understanding and imagining reality. And that is the difficulty we are in. The behavior of things on a small scale is so fantastic, it is so wonderfully different, so marvelously different than anything that behaves on a large scale… say, “electrons act like waves”, no they don’t exactly. “They act like particles”, no they don’t exactly. “They act like a kind of a fog around the nucleus”, no they don’t exactly. And if you would like to get a clear sharp picture of an animal, so that you could tell exactly how it is going to behave correctly, to have a good image, in other words, a really good image of reality I don’t know how to do it!

Because that image has to be mathematical. We have mathematical expressions, strange as mathematics is I don’t understand how it is, but we can write mathematical expressions and calculate what the thing is going to do without actually being able to picture it. It would be something like a computer that you put certain numbers in and you have the formula for what time the car will arrive at different destinations, and the thing does the arithmetic to figure out what time the car arrives at the different destinations but cannot picture the car. It’s just doing the arithmetic! So we know how to do the arithmetic but we cannot picture the car. No, it’s not a hundred percent because for certain approximate situations a certain kind of approximate picture works. That it’s simply a fog around the nucleus that when you squeeze it, it repels you is very good for understanding the stiffness of material. That it’s a wave which does this and that is very good for some other phenomena. So when you are working with certain particular aspect of the behavior of atoms, for instance when I was talking about temperature and so forth, that they are just little balls is good enough and it gives us a very nice picture of temperature. But if you ask more specific questions and you get down to questions like how is it that when you cool helium down, even to absolute zero where there is not supposed to be any motion, it’s a perfect fluid that hasn’t any viscosity, has no resistance, flows perfectly, and isn’t freezing?

Well if you want to get a picture of atoms that has all of that in it, I can’t do it, you see? But I can explain why the helium behaves as it does by taking my equations and showing that the consequences of them is that the helium will behave as it is observed to behave, so we now have the theory right, but we haven’t got the pictures that will go with the theory. And is that because we are limited and haven’t caught on to the right pictures? Or is that because there aren’t any right pictures for people who have to make pictures out of things that are familiar to them? Let’s suppose it’s the last one. That there’s no right pictures in terms of things that are familiar to them. Is it possible then, to develop a familiarity with those things that are not familiar on hand by study? By learning about the properties of atoms and quantum mechanics, and practicing with the equations, until it becomes a kind of second nature, just as it is second nature to know that if two balls came towards each other they’d mash into bits, you don’t say the two balls when they come toward each other turn blue. You know what they do! So the question is whether you can get to know what things do better than we do today. You know as the generations develop, will they invent ways of teaching, so that the new people will learn tricky ways of looking at things and be so well trained that they won’t have our troubles with picturing the atom? There is still a school of thought that cannot believe that the atomic behavior is so different than large-scale behavior. I think that’s a deep prejudice, it’s a prejudice from being so used to large-scale behavior. And they are always seeking to find, to waiting for the day that we discover that underneath the quantum mechanics, there’s some mundane ordinary balls hitting, or particles moving, and so on. I think they’re going to be defeated. I think nature’s imagination is so much greater than man’s, she’s never gonna let us relax.


From the blog Visual Quantum Physics (same as gifs above):

Types of Binding

Excerpt from “Mindmelding: Consciousness, Neuroscience, and the Mind’s Privacy” (2012) by William Hirstein (pgs. 57-58 and 64-65)

The Neuroscience of Binding

When you experience an orchestra playing, you see them and hear them at the same time. The sights and sounds are co-conscious (Hurley, 2003; de Vignemont, 2004). The brain has an amazing ability to make everything in consciousness co-conscious with everything else, so that the co-conscious relation is transitive: That means, if x is co-conscious with y, and y is co-conscious with z, then x is co-conscious with z. Brain researchers hypothesized that the brain’s method of achieving co-consciousness is to link the different areas embodying each portion of the brain state by a synchronizing electrical pulse. In 1993, Llinás and Ribary proposed that these temporal binding processes are responsible for unifying information from the different sensory modalities. Electrical activity, “manifested as variations in the minute voltage across the cell’s enveloping membrane,” is able to spread, like “ripples in calm water” according to Llinás (2002, pp.9-10). This sort of binding has been found not only in the visual system, but also in other modalities (Engel et al., 2003). Bachmann makes the important point that the binding processes need to be “general and lacking any sensory specificity. This may be understood via a comparison: A mirror that is expected to reflect equally well everything” (2006, 32).

Roelfsema et al. (1997) implanted electrodes in the brain of cats and found binding across parietal and motor areas. Desmedt and Tomberg (1994) found binding between a parietal area and a prefrontal area nine centimeters apart in their subjects, who had to respond with one hand, to signal which finger on another hand had been stimulated – a conscious response to a conscious perception. Binding can occur across great distances in the brain. Engel et al. (1991) also found binding across the two hemispheres. Apparently binding processes can produce unified conscious states out of cortical areas widely separated. Notice, however, that even if there is a single area in the brain where all the sensory modalities, memory, and emotion, and anything else that can be in a conscious state were known to feed into, binding would still be needed. As long as there is any spatial extent at all to the merging area, binding is needed. In addition to its ability to unify spatially separate areas, binding has a temporal dimension. When we engage in certain behaviors, binding unifies different areas that are cooperating to produce a perception-action cycle. When laboratory animals were trained to perform sensory-motor tasks, the synchronized oscillations were seen to increase both within the areas involved in performing the task and across those areas, according to Singer (1997).

Several different levels of binding are needed to produce a full conscious mental state:

  1. Binding of information from many sensory neurons into object features
  2. Binding of features into unimodal representations of objects
  3. Binding of different modalities, e.g., the sound and movement made by a single object
  4. Binding of multimodal object representations into a full surrounding environment
  5. Binding of representations, emotions, and memories, into full conscious states.

So is there one basic type of binding, or many? The issue is still debated. On the side of there being a single basic process, Koch says that he is content to make “the tentative assumption that all the different aspects of consciousness (smell, pain, vision, self-consciousness, the feeling of willing an action, of being angry and so on) employ one or perhaps a few common mechanisms” (2004, p15). On the other hand, O’Reilly et al. argue that “instead of one simple and generic solution to the binding problem, the brain has developed a number of specialized mechanisms that build on the strengths of existing neural hardware in different brain areas” (2003, p.168).

[…]

What is the function of binding?

We saw just above that Crick and Koch suggest a function for binding, to assist a coalition of neurons in getting the “attention” of prefrontal executive processes when there are other competitors for this attention. Crick and Koch also claim that only bound states can enter short-term memory and be available for consciousness (Crick and Koch, 1990). Engel et al. mention a possible function of binding: “In sensory systems, temporal binding may serve for perceptual grouping and, thus, constitute an important prerequisite for scene segmentation and object recognition” (2003, 140). One effect of malfunctions in the binding process may be a perceptual disorder in which the parts of objects cannot be integrated into a perception of the whole object. Riddoch and Humphreys (2003) describe a disorder called ‘integrative agnosia’ in which the patient cannot integrate the parts of an object into a whole. They mention a patient who is given a photograph of a paintbrush but sees the handle and the bristles as two separate objects. Breitmeyer and Stoerig (2006, p.43) say that:

[P]atients can have what are called “apperceptive agnosia,” resulting from damage to object-specific extrastriate cortical areas such as the fusiform face area and the parahippocampal place area. While these patients are aware of qualia, they are unable to segment the primitive unity into foreground or background or to fuse its spatially distributed elements into coherent shapes and objects.

A second possible function of binding is a kind of bridging function: it enables high-level perception-action cycles to go through. Engel et al. say that “temporal binding may be involved in sensorimotor integration, that is, in establishing selective links between sensory and motor aspects of behavior” (2003, p. 140).

Here is another hypothesis we might call the scale model theory of binding. For example, in order to test a new airplane design in a wind tunnel, one needs a complete model of it. The reason for this is that a change in one area, say the wing, will alter the aerodynamics of the entire plane, especially those areas behind the wing. The world itself is quite holistic. […] Binding allows the executive processes to operate on a large, holistic model of the world in a way that allows the model to simulate the same holistic effects found in the world. The holism of the represented realm is mirrored by a type of brain holism in the form of binding.


See also these articles about (phenomenal) binding:

10 Ways Perception Distorts Reality

by David Pearce (Quora response)


If the doors of perception were cleansed every thing would appear to man as it is, Infinite.

– William Blake

1. You don’t perceive the environment. There is no public world. Instead, your local environment partially selects your brain states, some of which are experienced as your external surroundings. Mind-independent reality is a speculative metaphysical inference (sadly a strong one, IMO). Contra William Blake (and Aldous Huxley), there are no see-through doors of perception in need of a good wash, just cranial prisons.

2. Whether you are awake or dreaming, your world-simulation is populated by zombies. When you are awake, these zombies are the avatars of sentient beings, but the imposters loom larger than their hypothetical real-world counterparts.

3. Your egocentric world-simulation resembles a grotesque cartoon. Within the cartoon, you are the hub of reality, the most important being in the universe, followed by your close genetic relatives, lovers, friends and allies. On theoretical grounds, you may wonder if this fitness-enhancing hallucination can be trusted. After all, trillions of other sentient beings apparently share an analogous illusion. In practice, the idea of your playing a humble role in the great scheme of things can be hard to take seriously, unless the hub of the universe is psychologically depressed. Wikipedia’s List of Messiah Claimants could be enlarged.

4. Perceptual direct realism spawns a “magical” theory of reference. If direct realism is delusional, then what is the mysterious relationship between thought-episodes internal to your world-simulation and the external world? (cf. What is the current state of affairs in philosophy concerning the symbol grounding problem?).

5. A realistic interpretation of the formalism of quantum physics confirms that not just the Lockean “secondary” properties of material objects are mind-dependent, but also their “primary” properties (cf. Primary/secondary quality distinction). Shades of Bishop Berkeley? (“Esse est percipi” – “to be is to be perceived”) Kant? Not exactly, but classical physics and Copenhagen-style positivism alike are false theories of reality.

6. According to “no-collapse” quantum mechanics (Everett), you have no unique future, and no unique past. You are not the same person as your countless ancestral namesakes nor the countless folk who wake up tomorrow with an approximation of your memories (cf. Was Parfit correct about consciousness and how we’re not the same person that we were when we were born?).

7. You experience the illusion of embodiment. “In-the-body” hallucinations in biological minds pervade the animal kingdom. As out-of-body experiences on dissociative anaesthetics like ketamine reveal, physical bodies as normally conceived are cross-modally-matched illusions generated by the CNS. Or alternatively, dualism is true. Actually, not everyone has the chronic illusion of embodiment. People with negative autoscopy can stare into a virtual mirror in their phenomenal world-simulation and not see themselves. For evolutionary reasons, negative autoscopy is rare.

8. You experience the illusion of four-dimensional space-time, not high-dimensional Hilbert space. This idea is more controversial. Hilbert space is a generalisation of ordinary Euclidean space to an intuitively huge number of dimensions – conventionally infinite, though the holographic entropy bound suggests the dimensionality of what naïve realists call the observable universe is finite. Quantum mechanics may be understood via the mathematical structure of Hilbert space (cf. Nothing happens in the Universe of the Everett Interpretation). Typically, Hilbert space is treated instrumentally as a mere mathematical abstraction, even by Everettians. As David Wallace, a critic, puts it: “Very few people are willing to defend Hilbert-space realism in print.” In the interests of mental health, such self-censorship may be wise.

9. Experienced psychonauts would echo William James, “…our normal waking consciousness, rational consciousness as we call it, is but one special type of consciousness, whilst all about it, parted from it by the flimsiest of screens, there lie potential forms of consciousness entirely different.” Quite so. Our posthuman successors may regard everyday Darwinian consciousness as delusive in ways that transcend the expressive power of a human conceptual scheme.

10. We do not understand reality. Any account of our misperceptions must pass over the unknown unknowns. I fear we’re missing not only details, but the key to the plot.

A Big State-Space of Consciousness

Kenneth Shinozuka of Blank Horizons asks: Andrés, how long do you think it’ll take to fully map out the state space of consciousness? A thousand or a million years?

The state-space of consciousness is unimaginably large (and yet finite)

I think we will discover the core principles of a foundational theory of consciousness within a century or so. That is, we might find plausible solutions to Mike Johnson’s 8 subproblems of consciousness and experimentally verify a specific formal theory of consciousness before 2100. That said, there is a very large distance between proving a certain formal theory of consciousness and having a good grasp of the state-space of consciousness.

Knowing Maxwell’s equations gives you a formal theory of electromagnetism. But even then, photons are hidden as an implication of the formalism; you need to do some work to find them in it. And that’s just the tip of the iceberg; you would also find hidden in the formalism an array of exotic electromagnetic behaviors that arise in unusual physical conditions, such as those produced by metamaterials. The formalism is a first step: it establishes the fundamental constraints on what’s possible. What follows is filling in the space within those limits of physical possibility, which is a truly fantastical enterprise considering the range of possible permutations.

A useful analogy here might be: even though we know all of the basic stable elements and many of their properties, we have only started mapping out the space of possible small molecules (e.g. there are ~10^60 bioactive drugs that have never been tested), and have yet to even begin in earnest the project of understanding what proteins can do. Or consider the number of options there are for making high-entropy alloys (alloys made with five or more metals). Or all the ways in which snowflake-like crystals of various materials can form: even a single material can form crystal structures of an incredibly varied nature. And then take into account the emergence of additional collective properties: physical systems can display a dazzling array of emergent exotic effects, from superconductivity and superradiance to Bose-Einstein condensates and fusion chain reactions. Exploring the state-space of material configurations and their emergent properties entails facing a combinatorial explosion of unexpected phenomena.
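The high-entropy-alloy aside can be made concrete with a quick count. The figure of ~80 metallic elements below is a rough, illustrative assumption; the point is only that choosing which five metals to combine, before even considering composition ratios or processing conditions, already yields tens of millions of options:

```python
import math

# Hypothetical figure: suppose ~80 elements count as usable metals.
# A five-metal high-entropy alloy then starts as an "80 choose 5" problem,
# before any of the (continuous) composition ratios are even considered.
five_metal_palettes = math.comb(80, 5)
print(five_metal_palettes)  # 24040016
```

Each of those ~24 million palettes then fans out further into a continuum of mixing ratios and processing histories, which is the combinatorial explosion the paragraph gestures at.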

And this is the case in physics even though we know for a fact that there are only a bit over a hundred possible building blocks (i.e. the elements).

In the province of the mind, we do not yet have even that level of understanding. When it comes to the state-space of consciousness we do not have a corresponding credible “periodic table of qualia”. The range of possible experiences in normal everyday life is astronomical. Even so, the set of possible human sober experiences is a vanishing fraction of the set of possible DMT trips, which is itself a vanishing fraction of the set of possible DMT + LSD + ketamine + TMS + optogenetics + Generalized Wada Test + brain surgery experiences. Brace yourself for a state-space that grows supergeometrically with each variable you introduce.

If we are to truly grasp the state-space of consciousness, we should also take into account non-human animal qualia. And then further still, due to dual-aspect monism, we will need to go into things like understanding that high-entropy alloys themselves have qualia, and then Jupiter Brains, and Mike’s Fraggers, and Black Holes, and quantum fields in the inflation period, and so on. This entails a combinatorial explosion the likes of which I don’t believe anyone is really grasping at the moment. We are talking about a monumental “monster” state-space far beyond the size of even the wildest dreams of full-time dreamers. So, honestly, I’d say that mapping out the state-space of consciousness is going to take millions of years.

But isn’t the state-space of consciousness infinite, you ask?

Alas, no. There are two core limiting factors here – one is the speed of light (which entails the existence of gravitational collapse and hence limits to how much matter you can arrange in complex ways before a black hole arises) and the second one is quantum (de)coherence. If phenomenal binding requires fundamental physical properties such as quantum coherence, there will be a maximum limit to how much matter you can bind into a unitary “moment of experience”. Who knows what the limit is! But I doubt it’s the size of a galaxy – perhaps it is more like a Jupiter Brain, or maybe just the size of a large building. This greatly reduces the state-space of consciousness; after all, something finite, no matter how large, is infinitely smaller than something infinite!

But what if reality is continuous? Doesn’t that entail an infinite state-space?

I do not think that the discrete/continuous distinction meaningfully impacts the size of the state-space of consciousness. The reason is that at some degree of similarity between experiences you hit “just noticeable differences” (JNDs). True, even the tiniest hint of genuine continuity in consciousness would make the state-space technically infinite. But the vast majority of those differences won’t matter: they can be swept under the rug to an extent because they can’t actually be “distinguished from the inside”. To make a good discrete approximation of the state-space, we would just need to divide the state-space into regions of equal size such that their diameter is a JND.
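A back-of-the-envelope version of this discretization argument: if experiences vary along d independent dimensions, each spanning a range that is r JNDs wide, a discrete map needs roughly r^d cells of one-JND diameter. The numbers below are made up purely for illustration, not estimates of real phenomenological dimensions:

```python
def distinguishable_states(jnds_per_dimension: int, dimensions: int) -> int:
    """Cells in a grid whose cell diameter is one just-noticeable difference."""
    return jnds_per_dimension ** dimensions

# Even modest (invented) parameters explode: 100 JNDs along each of 40
# independent dimensions already matches the ~10^80 atoms often quoted
# for the observable universe.
print(distinguishable_states(100, 40))  # 10**80
```

So the state-space comes out finite but staggeringly large, which is exactly the conclusion of the section.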

Conclusion

In summary, the state-space of consciousness is insanely large but not infinite. While I do think it is possible that the core underlying principles of consciousness (i.e. an empirically-adequate formalism) will be discovered this century or the next, I do not anticipate a substantive map of the state-space of consciousness to be available anytime soon. A truly comprehensive map would, I suspect, be possible only after millions of years of civilizational investment in the task.

AI Alignment Podcast: On Consciousness, Qualia, and Meaning with Mike Johnson and Andrés Gómez Emilsson

Lucas Perry from the Future of Life Institute recently interviewed my co-founder Mike Johnson and me on his AI Alignment podcast. Here is the full transcript:


Lucas: Hey, everyone. Welcome back to the AI Alignment Podcast. I’m Lucas Perry, and today we’ll be speaking with Andrés Gomez Emilsson and Mike Johnson from the Qualia Research Institute. In this episode, we discuss the Qualia Research Institute’s mission and core philosophy. We get into the differences between and arguments for and against functionalism and qualia realism. We discuss definitions of consciousness, how consciousness might be causal, we explore Marr’s Levels of Analysis, we discuss the Symmetry Theory of Valence. We also get into identity and consciousness, and the world, the is-ought problem, what this all means for AI alignment and building beautiful futures.

And then end on some fun bits, exploring the potentially large amounts of qualia hidden away in cosmological events, and whether or not our universe is something more like heaven or hell. And remember, if you find this podcast interesting or useful, remember to like, comment, subscribe, and follow us on your preferred listening platform. You can continue to help make this podcast better by participating in a very short survey linked in the description of wherever you might find this podcast. It really helps. Andrés is a consciousness researcher at QRI and is also the Co-founder and President of the Stanford Transhumanist Association. He has a Master’s in Computational Psychology from Stanford. Mike is Executive Director at QRI and is also a co-founder.

He is interested in neuroscience, philosophy of mind, and complexity theory. And so, without further ado, I give you Mike Johnson and Andrés Gomez Emilsson. So, Mike and Andrés, thank you so much for coming on. Really excited about this conversation and there’s definitely a ton for us to get into here.

Andrés: Thank you so much for having us. It’s a pleasure.

Mike: Yeah, glad to be here.

Lucas: Let’s start off just talking to provide some background about the Qualia Research Institute. If you guys could explain a little bit, your perspective of the mission and base philosophy and vision that you guys have at QRI. If you could share that, that would be great.

Andrés: Yeah, for sure. I think one important point is there’s some people that think that really what matters might have to do with performing particular types of algorithms, or achieving external goals in the world. Broadly speaking, we tend to focus on experience as the source of value, and if you assume that experience is a source of value, then really mapping out what is the set of possible experiences, what are their computational properties, and above all, how good or bad they feel seems like an ethical and theoretical priority to actually make progress on how to systematically figure out what it is that we should be doing.

Mike: I’ll just add to that, this thing called consciousness seems pretty confusing and strange. We think of it as pre-paradigmatic, much like alchemy. Our vision for what we’re doing is to systematize it and to do to consciousness research what chemistry did to alchemy.

Lucas: To sort of summarize this, you guys are attempting to be very clear about phenomenology. You want to provide a formal structure for understanding and also being able to infer phenomenological states in people. So you guys are realists about consciousness?

Mike: Yes, absolutely.

Lucas: Let’s go ahead and lay some conceptual foundations. On your website, you guys describe QRI’s full stack, so the kinds of metaphysical and philosophical assumptions that you guys are holding to while you’re on this endeavor to mathematically capture consciousness.

Mike: I would say ‘full stack’ refers to how we do philosophy of mind, we do neuroscience, and we’re just getting into neurotechnology, with the thought that if you have a better theory of consciousness, you should be able to have a better theory about the brain. And if you have a better theory about the brain, you should be able to build cooler stuff than you could otherwise. But starting with the philosophy, there’s this conception of qualia formalism: the idea that phenomenology can be precisely represented mathematically. We borrow this goal from Giulio Tononi’s IIT. We don’t necessarily agree with the specific math involved, but the goal of constructing a mathematical object that is isomorphic to a system’s phenomenology would be the correct approach if you want to formalize phenomenology.

And then from there, one of the big questions in how you even start is, what’s the simplest starting point? And here, I think one of our big innovations that is not seen at any other research group is we’ve started with emotional valence and pleasure. We think these are not only very ethically important, but also just literally the easiest place to start reverse engineering.

Lucas: Right, and so this view is also colored by physicalism, qualia structuralism, and valence realism. Could you explain some of those things in a non-jargony way?

Mike: Sure. Qualia formalism is this idea that math is the right language to talk about qualia in, and that we can get a precise answer. This is another way of saying that we’re realists about consciousness, much as people can be realists about electromagnetism. We’re also valence realists. This refers to how we believe emotional valence (pain and pleasure, the goodness or badness of an experience) is a natural kind: the concept carves reality at the joints. We have some further thoughts on how to define this mathematically as well.

Lucas: So you guys are physicalists, so you think that basically the causal structure of the world is best understood by physics and that consciousness was always part of the game engine of the universe from the beginning. Ontologically, it was basic and always there in the same sense that the other forces of nature were already in the game engine since the beginning?

Mike: Yeah, I would say so. I personally like the frame of dual aspect monism, but I would also step back a little bit and say there’s two attractors in this discussion. One is the physicalist attractor, and that’s QRI. Another would be the functionalist/computationalist attractor. I think a lot of AI researchers are in this attractor and this is a pretty deep question of, if we want to try to understand what value is, or what’s really going on, or if we want to try to reverse engineer phenomenology, do we pay attention to bits or atoms? What’s more real; bits or atoms?

Lucas: That’s an excellent question. Scientific reductionism here I think is very interesting. Could you guys go ahead and unpack though the skeptics position of your view and broadly adjudicate the merits of each view?

Andrés: Maybe a really important frame here is called Marr’s Levels of Analysis. David Marr was a cognitive scientist who wrote a really influential book in the ’80s called Vision, where he basically creates a schema for how to organize knowledge about, in this particular case, how you actually make sense of the world visually. The framework goes as follows: there are three ways in which you can describe an information processing system. First of all, the computational/behavioral level. What that is about is understanding the input-output mapping of an information processing system. Part of it is also understanding the run-time complexity of the system and under what conditions it’s able to perform its actions. Here an analogy would be with an abacus, for example.

On the computational/behavioral level, what an abacus can do is add, subtract, multiply, divide, and if you’re really creative you can also exponentiate and do other interesting things. Then you have the algorithmic level of analysis, which is a little bit more detailed, and in a sense more constrained. The algorithmic level of analysis is about figuring out what the internal representations are, and the possible manipulations of those representations, such that you get the input-output mapping described by the first layer. Here you have an interesting relationship where understanding the first layer doesn’t fully constrain the second one. That is to say, there are many systems that have the same input-output mapping but that under the hood use different algorithms.

In the case of the abacus, an algorithm might be something like: whenever you want to add a number, you just push a bead. Whenever you’re done with a row, you push all of the beads back and then you add a bead in the row underneath. And finally, you have the implementation level of analysis, and that is: what is the system actually made of? How is it constructed? All of these different levels ultimately also map onto different theories of consciousness, and that is basically where in the stack you associate consciousness, or being, or “what matters”. So, for example, behaviorists in the ’50s may associate consciousness, if they give any credibility to that term, with the behavioral level. They don’t really care what’s happening inside as long as you have an extended pattern of reinforcement learning over many iterations.

What matters is basically how you’re behaving, and that’s the crux of who you are. A functionalist will actually care about what algorithms you’re running, how it is that you’re actually transforming the input into the output. Functionalists generally do care about, for example, brain imaging; they do care about the high-level algorithms that the brain is running, and will generally be very interested in figuring out these algorithms and generalizing them in fields like machine learning and digital neural networks and so on. A physicalist associates consciousness with the implementation level of analysis: how the system is physically constructed has a bearing on what it is like to be that system.
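The abacus example can be made concrete with a toy sketch: two functions with the same computational-level specification (addition) but different algorithmic-level stories. This is only an illustration of Marr's distinction, with made-up function names, not anything from QRI's actual work:

```python
def add_directly(a: int, b: int) -> int:
    """Computational level: only the input-output mapping is specified."""
    return a + b

def add_like_an_abacus(a: int, b: int, base: int = 10) -> int:
    """Algorithmic level: same mapping, but via a bead-pushing representation.
    Push one bead per unit counted; when a row fills up, push its beads
    back and add a bead to the row underneath (the carry)."""
    rows = [0]  # rows[i] holds the beads worth base**i each

    def push(row: int) -> None:
        if row == len(rows):
            rows.append(0)
        rows[row] += 1
        if rows[row] == base:  # row is full: reset it, carry one bead up
            rows[row] = 0
            push(row + 1)

    for _ in range(a + b):
        push(0)
    return sum(beads * base**i for i, beads in enumerate(rows))

print(add_like_an_abacus(37, 85))  # 122
```

Observing only inputs and outputs (the behaviorist's data) cannot distinguish the two functions; you have to look under the hood, which is exactly why the computational level underdetermines the algorithmic one.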

Lucas: So, you guys haven’t said that this was your favorite approach, but if people are familiar with David Chalmers, these seem to be the easy problems, right? And functionalists are interested in just the easy problems and some of them will actually just try to explain consciousness away, right?

Mike: Yeah, I would say so. And I think to try to condense some of the criticism we have of functionalism, I would claim that it looks like a theory of consciousness and can feel like a theory of consciousness, but it may not actually do what we need a theory of consciousness to do; specify which exact phenomenological states are present.

Lucas: Is there not some conceptual partitioning that we need to do between functionalists who believe in qualia or consciousness, and those that are illusionists or want to explain it away or think that it’s a myth?

Mike: I think that there is that partition, and I guess there is a question of how principled the partition can be, or whether, if you chase the ideas down as far as you can, the partition collapses. Either consciousness is a thing that is real in some fundamental sense, and I think you can get there with physicalism, or consciousness is more of a process, a leaky abstraction. I think functionalism naturally tugs in that direction. For example, Brian Tomasik has followed this line of reasoning and come to the conclusion of analytic functionalism, which tries to explain away consciousness.

Lucas: What is your guys’s working definition of consciousness, and what does it mean to say that consciousness is real?

Mike: It is a word that’s overloaded. It’s used in many contexts. I would frame it as what it feels like to be something, and something is conscious if there is something it feels like to be that thing.

Andrés: It’s important also to highlight some of its properties. As Mike pointed out, “consciousness” is used in many different ways. There’s like eight definitions for the word consciousness, and honestly, all of them are really interesting. Some of them are more fundamental than others and we tend to focus on the more fundamental side of the spectrum for the word. A sense that would be very not fundamental would be consciousness in the sense of social awareness or something like that. We actually think of consciousness much more in terms of qualia; what is it like to be something? What is it like to exist? Some of the key properties of consciousness are as follows: First of all, we do think it exists.

Second, in some sense it has causal power: the fact that we are conscious matters for evolution. Evolution made us conscious for a reason; it’s actually doing some computational legwork that would maybe be possible to do otherwise, but just not as efficiently or as conveniently as it is with consciousness. Then also you have the property of qualia: the fact that we can experience sights, and colors, and tactile sensations, and thoughts, and emotions, and so on. All of these seem like completely different worlds, and in a sense they are, but they have the property that they can be part of a unified experience that can experience color at the same time as sound. In that sense, those different types of sensations fall under the category of consciousness, because they can be experienced together.

And finally, you have unity, the fact that you have the capability of experiencing many qualia simultaneously. That’s generally a very strong claim to make, but we think you need to acknowledge and take seriously its unity.

Lucas: What are your guys’s intuition pumps for thinking why consciousness exists as a thing? Why is there a qualia?

Andrés: There’s the metaphysical question of why consciousness exists to begin with. That’s something I would like to punt on for the time being. There’s also the question of why it was recruited for information processing purposes in animals. The intuition here is that there are various contrasts that you can have within experience, which can serve a computational role. So, there may be a very deep reason why color qualia or visual qualia is used for information processing associated with sight, and why tactile qualia is associated with information processing useful for touching and making haptic representations, and that might have to do with the actual map of how all the qualia values are related to each other. Obviously, you have all of these edge cases, such as people who are synesthetic.

They may open their eyes and they experience sounds associated with colors, and people tend to think of those as abnormal. I would flip it around and say that we are all synesthetic, it’s just that the synesthesia that we have in general is very evolutionarily adaptive. The reason why you experience colors when you open your eyes is that that type of qualia is really well suited to represent geometrically a projective space. That’s something that naturally comes out of representing the world with the sensory apparatus like eyes. That doesn’t mean that there aren’t other ways of doing it. It’s possible that you could have an offshoot of humans that whenever they opened their eyes, they experience sound and they use that very well to represent the visual world.

But we may very well be in a local maximum of how different types of qualia are used to represent and do certain types of computations in a very well-suited way. The intuition behind why we’re conscious is that all of these different contrasts in the structure of the relationships of possible qualia values have computational implications, and there are actual ways of using these contrasts in very computationally effective ways.

Lucas: So, just to channel the functionalist here, wouldn’t he just say that everything you just said about qualia could be fully reducible to input output and algorithmic information processing? So, why do we need this extra property of qualia?

Andrés: There’s this article, I believe by Brian Tomasik, that basically says flavors of consciousness are flavors of computation. It might be very useful to do that exercise, where basically you identify color qualia as just a certain type of computation, and it may very well be that the geometric structure of color is actually just a particular algorithmic structure: whenever you have a particular type of algorithmic information processing, you get these geometric state-spaces. In the case of color, that’s a Euclidean three-dimensional space. In the case of tactile or smell qualia, it might be a much more complicated space, but then it’s in a sense implied by the algorithms that we run. There are a number of good arguments there.

The general approach to tackling them is to note that when it comes down to actually defining what algorithms a given system is running, you will hit a wall when you try to formalize exactly how to do it. So, one example is: how do you determine the scope of an algorithm? When you’re analyzing a physical system and you’re trying to identify what algorithm it is running, are you allowed to basically contemplate 1,000 atoms? Are you allowed to contemplate a million atoms? Where is the natural boundary for you to say, “Whatever is inside here can be part of the same algorithm, but whatever is outside of it can’t”? There really isn’t a frame-invariant way of making those decisions. On the other hand, if you associate qualia with actual physical states, there is a frame-invariant way of describing what the system is.

Mike: So, a couple of years ago I posted a piece giving a critique of functionalism and one of the examples that I brought up was, if I have a bag of popcorn and I shake the bag of popcorn, did I just torture someone? Did I just run a whole brain emulation of some horrible experience, or did I not? There’s not really an objective way to determine which algorithms a physical system is objectively running. So this is a kind of an unanswerable question from the perspective of functionalism, whereas with the physical theory of consciousness, it would have a clear answer.

Andrés: Another metaphor here is: let’s say you’re at a park enjoying an ice cream, and I have created a system with, let’s say, algorithms isomorphic to whatever is going on in your brain. Within a functionalist paradigm, the particular algorithms that your brain is running in that precise moment map onto a metal ball rolling down one of the paths within this machine in a straight line, not touching anything else. So there’s actually not much going on. According to functionalism, that would have to be equivalent, and it would actually be generating your experience. Now the weird thing there is that you could actually break the machine, you could do a lot of things, and the behavior of the ball would not change.

Meaning that within functionalism, to actually understand what a system is doing, you need to understand the counterfactuals of the system. You need to understand: what would the system be doing if the input had been different? And all of a sudden, you end up with this very, very gnarly problem of defining, well, how do you actually objectively decide what is the boundary of the system? Even for some of these particular states that allegedly are very complicated, the system looks extremely simple, and you can remove a lot of parts without actually modifying its behavior. That casts into question whether there is an objective, non-arbitrary boundary that you can draw around the system and say, “Yeah, this is equivalent to what’s going on in your brain right now.”

This has a very heavy bearing on the binding problem. The binding problem, for those who haven’t heard of it, is basically: how is it possible that 100 billion spatially distributed neurons, just because they’re bound in the same skull, simultaneously contribute to a unified experience, as opposed to, for example, neurons in your brain and neurons in my brain contributing to a unified experience? You hit a lot of problems, like: what is the speed of propagation of information for different states within the brain? I’ll leave it at that for the time being.

Lucas: I would just like to be careful about this intuition here that experience is unified. I think that the intuition pump for that is direct phenomenological experience: experience seems unified. But experience also seems a lot of different ways that aren’t necessarily descriptive of reality, right?

Andrés: You can think of it as different levels of sophistication, where you may start out with a very naive understanding of the world, where you confuse your experience for the world itself. A very large percentage of people perceive the world and in a sense think that they are experiencing the world directly, whereas all the evidence indicates that actually you’re experiencing an internal representation. You can go and dream, you can hallucinate, you can enter interesting meditative states, and those don’t map to external states of the world.

There’s this transition that happens when you realize that in some sense you’re experiencing a world simulation created by your brain, and of course you’re fooled by it in countless ways, especially when it comes to emotional things. We look at a person and we might have an intuition of what type of person they are, and if we’re not careful, we can confuse our intuition, we can confuse our feelings with truth, as if we were actually able to sense their souls, so to speak, rather than, “Hey, I’m running some complicated models on people-space and trying to carve out who they are.” There’s definitely a lot of ways in which experience is very deceptive, but here I would actually make an important distinction.

When it comes to intentional content, which is basically what the experience is about (for example, if you’re looking at a chair, there’s the quality of chairness, the fact that you understand the meaning of chair and so on), that is usually a very deceptive part of experience. There’s another aspect of experience that I would say is not deceptive, which is the phenomenal character of experience: how it presents itself. You can be deceived about what the experience is about, but you cannot be deceived about how you’re having the experience, how you’re experiencing it. You can infer, based on a number of experiences, that the only way for you to even actually experience a given phenomenal object is to incorporate a lot of that information into a unified representation.

But also, if you just pay attention to your experience, you can notice that you can simultaneously place your attention on two spots of your visual field and make them harmonize. That’s phenomenal character, and I would say that there’s a strong case to be made not to doubt that property.

Lucas: I’m trying to do my best to channel the functionalist. I think he or she would say, “Okay, so what? That’s just more information processing, and I’ll bite the bullet on the binding problem. I still need some more time to figure that out. So what? It seems like these people who believe in qualia have an even tougher job of trying to explain this extra spooky quality in the world that’s different from all the other physical phenomena that science has gone into.” It also seems to violate Occam’s razor, or a principle of lightness where one’s metaphysics or ontology would want to assume the least amount of extra properties or entities in order to explain the world. I’m just really trying to tease out your best arguments here for qualia realism, as we do have this current state of things in AI alignment where most people, it seems, would either try to explain away consciousness, say it’s an illusion, or be anti-realist about qualia.

Mike: That’s a really good question, a really good frame. And I would say our strongest argument revolves around predictive power. Just like centuries ago, you could absolutely be a skeptic about, shall we say, electromagnetism realism. And you could say, “Yeah, I mean there is this thing we call static, and there’s this thing we call lightning, and there’s this thing we call lodestones or magnets, but all these things are distinct. And to think that there’s some unifying frame, some deep structure of the universe that would tie all these things together and highly compress these phenomena, that’s crazy talk.” And so, it’s a viable position today to say the same about consciousness, that it’s not yet clear whether consciousness has deep structure. But we’re assuming it does, and we think that unlocks a lot of predictive power.

We should be able to make predictions that are both more concise and compressed and crisp than others, and we should be able to make predictions that no one else can.

Lucas: So what is the most powerful here about what you guys are doing? Is it the specific theories and assumptions which you take are falsifiable?

Mike: Yeah.

Lucas: If we can make predictive assessments of these things, which are either leaky abstractions or are qualia, how would we even then be able to arrive at a realist or anti-realist view about qualia?

Mike: So, one frame on this is, it could be that one could explain a lot of things about observed behavior and implicit phenomenology through a purely functionalist or computationalist lens, but maybe for a given system it might take 10 terabytes. And if you can get there in a much simpler way, if you can explain it in terms of three elegant equations instead of 10 terabytes, then it wouldn’t be proof that there exists some crystal-clear deep structure at work, but it would be very suggestive. Marr’s levels of analysis are pretty helpful here, where a functionalist might actually be very skeptical of consciousness mattering at all, because they would say, “Hey, if you’re identifying consciousness at the implementation level of analysis, how could that have any bearing on how we talk about the world, how we understand it, how we behave?

Since the implementational level is kind of epiphenomenal from the point of view of the algorithm. How can an algorithm know its own implementation? All it can maybe figure out is its own algorithm, and its identity would be constrained to its own algorithmic structure.” But that’s not quite true. In fact, one level of analysis can have bearing on another: in some cases the implementation level of analysis doesn’t actually matter for the algorithm, but in some cases it does. So, if you were implementing a computer, let’s say with water, you have the option of maybe implementing a Turing machine with water buckets, and in that case, okay, the implementation level of analysis goes out the window in terms of helping you understand the algorithm.

But if the way you’re using water to implement algorithms is by basically creating this system of adding waves in buckets of different shapes, with different resonant modes, then the implementation level of analysis actually matters a whole lot for what algorithms are … finely tuned to be very effective in that substrate. In the case of consciousness and how we behave, we do think properties of the substrate have a lot of bearing on what algorithms we actually run. A functionalist should actually start caring about consciousness if the properties of consciousness make the algorithms more efficient, more powerful.

Lucas: But what if qualia and consciousness are substantive real things? What if epiphenomenalism is true and consciousness is like smoke rising from computation, without any causal efficacy?

Mike: To offer a re-frame on this, I like this frame of dual aspect monism better. There seems to be an implicit value judgment on epiphenomenalism. It’s seen as this very bad thing if a theory implies qualia as epiphenomenal. Just to put cards on the table, I think Andrés and I differ a little bit on how we see these things, although I think our ideas also mesh up well. But I would say that under the frame of something like dual aspect monism, that there’s actually one thing that exists, and it has two projections or shadows. And one projection is the physical world such as we can tell, and then the other projection is phenomenology, subjective experience. These are just two sides of the same coin and neither is epiphenomenal to the other. It’s literally just two different angles on the same thing.

And in that sense, qualia values and physical values are really talking about the same thing when you get down to it.

Lucas: Okay. So does this all begin with this move that Descartes makes, where he tries to produce a perfectly rational philosophy or worldview by making no assumptions and then starting with experience? Is this the kind of thing that you guys are doing in taking consciousness or qualia to be something real or serious?

Mike: I can just speak for myself here, but I would say my intuition comes from two places. One is staring deep into the beast of functionalism and realizing that it doesn’t lead to a clear answer. My model is that it just is this thing that looks like an answer but can never even in theory be an answer to how consciousness works. And if we deny consciousness, then we’re left in a tricky place with ethics and moral value. It also seems to leave value on the table in terms of predictions, that if we can assume consciousness as real and make better predictions, then that’s evidence that we should do that.

Lucas: Isn’t that just an argument that it would be potentially epistemically useful for ethics if we could have predictive power about consciousness?

Mike: Yeah. So, let’s assume that it’s 100 years, or 500 years, or 1,000 years in the future, and we’ve finally cracked consciousness. We’ve finally solved it. My open question is, what does the solution look like? If we’re functionalists, what does the solution look like? If we’re physicalists, what does the solution look like? And we can expand this to ethics as well.

Lucas: Just as a conceptual clarification, the functionalists are also physicalists though, right?

Andrés: There are two senses of the word physicalism here. There’s physicalism in the sense of a theory of the universe, that the behavior of matter and energy, what happens in the universe, is exhaustively described by the laws of physics, or future physics. There is also physicalism in the sense of understanding consciousness, in contrast to functionalism. David Pearce, I think, would describe it as non-materialist physicalist idealism. There’s definitely a very close relationship between that phrasing and dual aspect monism. I can briefly unpack it. Basically, “non-materialist” means rejecting the claim that the stuff of the world is fundamentally unconscious. That’s something that materialism claims: that what the world is made of is not conscious, is raw matter, so to speak.

Andrés: Physicalist, again, in the sense that the laws of physics exhaustively describe behavior, and idealist in the sense that what makes up the world is qualia or consciousness. The big-picture view is that the actual substrate of the universe, quantum fields, are fields of qualia.

Lucas: So Mike, you were saying that in the future when we potentially have a solution to the problem of consciousness, that in the end, the functionalists with algorithms and explanations of say all of the easy problems, all of the mechanisms behind the things that we call consciousness, you think that that project will ultimately fail?

Mike: I do believe that, and I guess my gentle challenge to functionalists would be to sketch out a vision of what a satisfying answer to consciousness would be, whether it’s completely explaining it away or completely explaining it. If in 500 years you go to the local bookstore and you check out Consciousness 101, and just flip through it, you look at the headlines and the chapter list and the pictures, what do you see? I think we have an answer as formalists, but I would be very interested in getting the functionalist take on this.

Lucas: All right, so you guys have this belief in the ability to formalize our understanding of consciousness. Is this actually contingent on realism or anti-realism?

Mike: It is implicitly dependent on realism, that consciousness is real enough to be describable mathematically in a precise sense. And actually that would be my definition of realism, that something is real if we can describe it exactly with mathematics and it is instantiated in the universe. I think the idea of connecting math and consciousness is very core to formalism.

Lucas: What’s particularly interesting here is that you’re making falsifiable claims about phenomenological states. It’s good and exciting that your Symmetry Theory of Valence, which we can get into now, has falsifiable aspects. So do you guys want to describe your Symmetry Theory of Valence and how it fits in as a consequence of your valence realism?

Andrés: Sure, yeah. I think one of the key places where this has bearing is in understanding what it is that we actually want, and what it is that we actually like and enjoy. That question is usually answered at the agent level. So basically you think of agents as entities who spin out possibilities for what actions to take, and then they have a way of sorting them by expected utility and then carrying them out. A lot of people may associate what we want or what we like or what we care about with that level, the agent level, whereas we think the true source of value is more low-level than that. There’s something else that we’re actually using in order to implement agentive behavior. There are ways of experiencing value that are completely separated from agents. You don’t actually need to be generating possible actions and evaluating them and enacting them for there to be value, or for you to actually be able to enjoy something.

So what we’re examining here is actually the lower-level property that gives rise even to agentive behavior, the property that underlies every other aspect of experience. That would be valence, and specifically valence gradients. The general claim is that we are set up in such a way that we are basically climbing the valence gradient. This is not true in every situation, but it’s mostly true, and it’s definitely mostly true in animals. And then the question becomes: what implements valence gradients? One intuition pump here is the extraordinary fact that things that have nothing to do with our evolutionary past can nonetheless feel good or bad. It’s understandable that if you hear somebody scream, you may get nervous or anxious or fearful, or if you hear somebody laugh you may feel happy.

That makes sense from an evolutionary point of view, but why would the sound of the Bay Area Rapid Transit, BART, which creates these very intense screeching sounds that are not even within the vocal range of humans, just really bizarre, never encountered in our evolutionary past, nonetheless have an extraordinarily negative valence? That’s a hint that valence has to do with patterns. It’s not just goals and actions and utility functions; the actual pattern of your experience may determine valence. The same goes for the SubPac, a technology that basically renders sounds between 10 and 100 hertz. Some of them feel really good, some of them feel pretty unnerving, some of them are anxiety-producing, and it’s like, why would that be the case? Especially when you’re getting two types of input that have nothing to do with our evolutionary past.

It seems that there are ways of triggering high- and low-valence states just based on the structure of your experience. The last example I’ll give is very weird states of consciousness like meditation or psychedelics, which seem to come with extraordinarily intense and novel forms of experiencing significance, or a sense of bliss, or pain. And again, they don’t seem to have much semantic content per se, or rather the semantic content is not the core reason why they feel good or bad. It has more to do with the particular structure that they induce in experience.

Mike: There are many ways to talk about where pain and pleasure come from. We can talk about it in terms of neurochemicals, opioids, dopamine. We can talk about it in terms of pleasure centers in the brain, in terms of goals and preferences and getting what you want, but all of these have counterexamples. All of these have some points that you can follow the thread back to which will beg the question. I think the only way to explain emotional valence, pain and pleasure, that doesn’t beg the question is to explain it in terms of some patterns within phenomenology: some just intrinsically feel good and some intrinsically feel bad. To touch back on the formalist frame, this would be saying that if we have a mathematical object that is isomorphic to your phenomenology, to what it feels like to be you, then some pattern or property of this object will refer to, or will sort of intrinsically encode, your emotional valence, how pleasant or unpleasant this experience is.

That’s the valence formalism aspect that we’ve come to.

Lucas: So given the valence realism, the view is this intrinsic pleasure-pain axis of the world, and this is sort of channeling, I guess, David Pearce’s view. There are things in experience which are just clearly good-seeming or bad-seeming. Will MacAskill called these pre-theoretic properties we might ascribe to certain kinds of experiential aspects: they’re just good or bad. So with this valence realism view, this goodness or badness is a potentiality whose nature is sort of self-intimatingly disclosed. It has been in the physics and in the world since the beginning, and now it’s unfolding and expressing itself more, and the universe is sort of coming to life, and embedded somewhere deep within the universe’s structure are these intrinsically good or intrinsically bad valences which complex computational systems, and maybe other stuff, have access to.

Andrés: Yeah, yeah, that’s right. And I would perhaps emphasize that it’s not only pre-theoretical, it’s pre-agentive, you don’t even need an agent for there to be valence.

Lucas: Right. Okay. This is going to be a good point, I think, for getting into these other more specific, hairy philosophical problems. Could you go ahead and unpack a little bit more this view that pleasure or pain is self-intimatingly good or bad, that just by existing in an experiential relation with the thing, its nature is disclosed? Brian Tomasik, and I think functionalists generally, would say there’s just another reinforcement learning algorithm somewhere that is evaluating these phenomenological states. They’re not intrinsically good or bad; that’s just what it feels like to be the kind of agent who has that belief.

Andrés: Sure. There are definitely many angles from which to see this. One of them is by realizing that liking, wanting and learning are possible to dissociate; in particular, you can have reinforcement without an associated positive valence. You can also have positive valence without reinforcement or learning. Generally they are correlated, but they are different things. My understanding is that a lot of people think of valence as something that matters because you are the type of agent that has a utility function and a reinforcement function. If that were the case, we would expect valence to melt away in states that are non-agentive; we wouldn’t necessarily see it there. We would also expect it to be intrinsically tied to intentional content, the aboutness of experience. A very strong counterexample is that somebody may claim that what they truly want is to be academically successful or something like that.

They think of the reward function as intrinsically tied to getting a degree or something like that. I would call that to some extent illusory: if you actually look at how those preferences are being implemented, deep down there would be valence gradients happening there. One way to show this would be, let’s say, on graduation day you give the person an opioid antagonist. The person will subjectively feel that the day is meaningless; you’ve removed the pleasant gloss of the experience that they were actually looking for, that they thought all along was tied in with the intentional content, with the fact of graduating, but in fact it was the hedonic gloss that they were after. That’s one intuition pump there.

Lucas: These core problem areas that you’ve identified in Principia Qualia, would you just like to briefly touch on those?

Mike: Yeah, trying to break the problem down into modular pieces, with the idea that if we can decompose the problem correctly then the sub-problems become much easier than the overall problem, and if you collect all the solutions to the sub-problems then in aggregate you get a full solution to the problem of consciousness. So I’ve split things up into the metaphysics, the math and the interpretation. The first question is: what metaphysics do you even start with? What ontology do you even use to approach the problem? And we’ve chosen the ontology of physics, which can objectively map onto reality in a way that computation cannot. Then, once you have your core ontology, in this case physics, there’s this question of what counts, what actively contributes to consciousness. Do we look at electrons, electromagnetic fields, quarks?

This is an unanswered question. We have hypotheses but we don’t have an answer. Moving into the math: conscious systems seem to have boundaries. If something’s happening inside my head it can directly contribute to my conscious experience, but even if we put our heads together, literally speaking, your consciousness doesn’t bleed over into mine; there seems to be a boundary. So one way of framing this is the boundary problem, and another way of framing it is the binding problem, and these are just two sides of the same coin. There’s this big puzzle of how you draw the boundaries of a subjective experience. IIT is set up to approach consciousness through this lens and has a certain style of answer, a style of approach. We don’t necessarily need to take that approach, but it’s an intellectual landmark. Then we get into things like the state-space problem and the topology of information problem.

Say we’ve figured out our basic ontology, what we think is a good starting point, and, of that stuff, what actively contributes to consciousness. Then we can figure out some principled way to draw a boundary and say, okay, this is conscious experience A and this is conscious experience B, and they don’t overlap. So you have a bunch of information inside the boundary. Then there’s this math question of how you arrange it into a mathematical object that is isomorphic to what that stuff feels like. And again, IIT has an approach to this; we don’t necessarily subscribe to the exact approach, but it’s good to be aware of. There’s also the interpretation problem, which is actually very near and dear to what QRI is working on, and this is the question of: if you had a mathematical object that represented what it feels like to be you, how would we even start to figure out what it meant?

Lucas: This is also where the falsifiability comes in, right? If we have the mathematical object and we’re able to formally translate that into phenomenological states, then people can self report on predictions, right?

Mike: Yes. I don’t necessarily fully trust self reports as being the gold standard. I think maybe evolution is tricky sometimes and can lead to inaccurate self report, but at the same time it’s probably pretty good, and it’s the best we have for validating predictions.

Andrés: A lot of this gets easier if we assume that maybe we can be wrong in an absolute sense, but we’re often pretty well calibrated to judge relative differences. Maybe you ask me how I’m doing on a scale of one to ten and I say seven when the reality is a five; maybe that’s a problem. But at the same time, I like chocolate, and if you give me some chocolate and I eat it, that improves my subjective experience, and I would expect us to be well calibrated in terms of evaluating whether something is better or worse.

Lucas: There’s this view here though that the brain is not like a classical computer, that it is more like a resonant instrument.

Mike: Yeah. Maybe an analogy here could be pretty useful. There’s this researcher, William Sethares, who basically figured out a way to quantify the mutual dissonance between pairs of notes. It turns out that it’s not very hard: all you need to do is add up the pairwise dissonance between every harmonic of the notes. And what that gives you is that if you take, for example, a major key and you compute the average dissonance between pairs of notes within that major key, it’s going to be fairly low on average. And if you take the average dissonance of a minor key, it’s going to be higher. So in a sense, what distinguishes a minor and a major key is, in the combinatorial space of possible permutations of notes, how frequently they are dissonant versus consonant.

That’s a ground-truth mathematical feature of a musical instrument, and it’s going to be different from one instrument to the next. With that as a backdrop, we think of the brain, and in particular valence, in a very similar light: the brain has natural resonant modes, and emotions may seem externally complicated. When you’re having a very complicated emotion and we ask you to describe it, it’s almost like trying to describe a moment in a symphony, this very complicated composition, and how do you even go about it? But deep down, the reason why a particular phrase sounds pleasant or unpleasant within music is ultimately traceable to the additive pairwise dissonance of all of those harmonics. And likewise, for a given state of consciousness, we suspect that, very similarly to music, the average pairwise dissonance between the harmonics present at a given point in time will be strongly related to how unpleasant the experience is.
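The Sethares-style calculation described above can be sketched in a few lines. This is only an illustrative toy: the roughness-curve constants are approximate values associated with Sethares' model, and the 1/k amplitude rolloff of the harmonics is an assumption about the instrument's timbre, not part of his published fit.

```python
import itertools
import math

def dyad_dissonance(f1, f2, a1=1.0, a2=1.0):
    """Roughness of two pure tones, using a Plomp-Levelt-style curve
    as parameterized by Sethares (constants are approximate)."""
    b1, b2, dstar, s1, s2 = 3.51, 5.75, 0.24, 0.0207, 18.96
    s = dstar / (s1 * min(f1, f2) + s2)
    diff = abs(f2 - f1)
    return a1 * a2 * (math.exp(-b1 * s * diff) - math.exp(-b2 * s * diff))

def note_pair_dissonance(base1, base2, n_harmonics=6):
    """Total dissonance of two complex notes: sum the pairwise roughness
    over every pair of partials, with amplitudes decaying as 1/k
    (an illustrative assumption about the timbre)."""
    partials = [(base * k, 1.0 / k)
                for base in (base1, base2)
                for k in range(1, n_harmonics + 1)]
    return sum(dyad_dissonance(f1, f2, a1, a2)
               for (f1, a1), (f2, a2) in itertools.combinations(partials, 2))

# A semitone clash comes out much rougher than an octave:
c4 = 261.63
print(note_pair_dissonance(c4, c4 * 2 ** (1 / 12)))  # minor second
print(note_pair_dissonance(c4, c4 * 2))              # octave
```

Averaging `note_pair_dissonance` over all pairs of notes in a scale gives the key-level comparison described above.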

These are electromagnetic waves, and it’s not exactly static, but it’s not exactly a standing wave either; it gets really close to one. So basically what this is saying is that there’s this excitation-inhibition wave function that happens statistically across macroscopic regions of the brain. There’s only a discrete number of ways in which that wave can fit an integer number of times in the brain. We’ll give you a link to the actual visualizations of what this looks like. As a concrete example, one of the harmonics with the lowest frequency is basically a very simple one where the two hemispheres are alternatingly more excited versus inhibited. That’s a low-frequency harmonic because it’s a very spatially large wave, an alternating pattern of excitation. Much higher-frequency harmonics are much more detailed and obviously hard to describe, but visually, generally speaking, the spatial regions that are activated versus inhibited are very thin wave fronts.

It’s not a mechanical wave as such; it’s an electromagnetic wave, so it’s actually the electric potential in each of these regions of the brain that fluctuates. Within this paradigm, at any given point in time you can describe a brain state as a weighted sum of all of its harmonics, and what that weighted sum looks like depends on your state of consciousness.
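The "brain state as a weighted sum of harmonics" idea can be illustrated with a toy 1-D stand-in. Here the standing-wave modes of a line segment play the role of the harmonics (real connectome harmonics are eigenvectors of the graph Laplacian of the connectome, not sine waves), and the mode weights, noise level, and number of modes are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "brain": its resonant modes are the standing waves that fit an
# integer number of half-wavelengths on a line segment.
N = 256
x = np.linspace(0.0, 1.0, N)
modes = np.stack([np.sin(np.pi * k * x) for k in range(1, 9)])
modes /= np.linalg.norm(modes, axis=1, keepdims=True)  # unit-norm basis

# One instant of activity: mostly mode 1, some mode 5, plus noise.
state = 0.8 * modes[0] + 0.3 * modes[4] + 0.05 * rng.standard_normal(N)

# The "weighted sum" description: project the state onto each harmonic.
weights = modes @ state
print(np.round(weights, 2))
```

The printed weights recover the dominant modes, which is the sense in which a momentary state can be summarized by its harmonic decomposition.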

Lucas: Sorry, I’m getting a little caught up here on enjoying resonant sounds and then also the valence realism. The view isn’t that all minds will enjoy resonant things because happiness is like a fundamental valence thing of the world, and that all brains that come out of evolution should probably enjoy resonance?

Mike: It’s less about the stimulus, it’s less about the exact signal and it’s more about the effect of the signal on our brains. The resonance that matters, the resonance that counts, or the harmony that counts we’d say, or in a precisely technical term, the consonance that counts is the stuff that happens inside our brains. Empirically speaking most signals that involve a lot of harmony create more internal consonance in these natural brain harmonics than for example, dissonant stimuli. But the stuff that counts is inside the head, not the stuff that is going in our ears.

Just to be clear about QRI’s move here: Selen Atasoy has put forth this connectome-specific harmonic wave model, and what we’ve done is combine it with our Symmetry Theory of Valence. This is sort of a way of getting a Fourier transform of where the energy is in terms of frequencies of brainwaves, in a much cleaner way than has been available through EEG. Basically, we can evaluate this data set for harmony: how much harmony is there in a brain? With the link to the Symmetry Theory of Valence, it should then be a very good proxy for how pleasant it is to be that brain.
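One way the "evaluate the data set for harmony" step could be sketched is as an energy-weighted pairwise consonance score over the per-harmonic energy distribution. Everything in this sketch, the audio-range roughness curve, the mode frequencies, and the energy values, is a hypothetical stand-in, not QRI's actual pipeline.

```python
import itertools
import math

def roughness(f1, f2):
    # Plomp-Levelt-style roughness curve with audio-range constants,
    # used here only as a placeholder dissonance measure.
    b1, b2, dstar, s1, s2 = 3.51, 5.75, 0.24, 0.0207, 18.96
    s = dstar / (s1 * min(f1, f2) + s2)
    d = abs(f2 - f1)
    return math.exp(-b1 * s * d) - math.exp(-b2 * s * d)

def spectrum_dissonance(freqs, energies):
    """Energy-weighted average pairwise dissonance of a harmonic spectrum.
    On the Symmetry Theory of Valence reading, lower values would proxy
    for a more pleasant brain state."""
    total, weight = 0.0, 0.0
    for (f1, e1), (f2, e2) in itertools.combinations(zip(freqs, energies), 2):
        total += e1 * e2 * roughness(f1, f2)
        weight += e1 * e2
    return total / weight

# Hypothetical per-harmonic energy distribution for one recording
freqs = [2.0, 4.0, 8.0, 16.0, 32.0, 64.0]   # Hz, made-up mode frequencies
energies = [1.0, 0.7, 0.5, 0.3, 0.2, 0.1]   # made-up energy per mode
print(spectrum_dissonance(freqs, energies))
```

Comparing this score across recordings, rather than reading the absolute number, is the kind of relative judgment the proxy would support.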

Lucas: Wonderful.

Andrés: In this context, yeah, the Symmetry Theory of Valence would be much more fundamental. There’s probably many ways of generating states of consciousness that are in a sense completely unnatural that are not based on the harmonics of the brain, but we suspect the bulk of the differences in states of consciousness would cash out in differences in brain harmonics because that’s a very efficient way of modulating the symmetry of the state.

Mike: Basically, music can be thought of as a very sophisticated way to hack our brains into a state of greater consonance, greater harmony.

Lucas: All right. People should check out your Principia Qualia, which is the work that you’ve done that captures a lot of this well. Is there anywhere else that you’d like to refer people to for the specifics?

Mike: Principia Qualia covers the philosophical framework and the Symmetry Theory of Valence. Andrés has written deeply about this connectome-specific harmonic wave frame, and the name of that piece is Quantifying Bliss.

Lucas: Great. I would love to be able to quantify bliss and instantiate it everywhere. Let’s jump in here into a few problems and framings of consciousness. I’m just curious to see if you guys have any comments on them. The first is what you call the real problem of consciousness, and the second is what David Chalmers calls the meta-problem of consciousness. Would you like to go ahead and start off with the real problem of consciousness?

Mike: Yeah. So this gets to something we were talking about previously: is consciousness real or is it not? Is it something to be explained or to be explained away? This cashes out in terms of: is it something that can be formalized, or is it intrinsically fuzzy? I’m calling this the real problem of consciousness, and a lot depends on the answer. There are so many different ways to approach consciousness, and hundreds, perhaps thousands, of different carvings of the problem: we have panpsychism, we have dualism, we have non-materialist physicalism and so on. I think essentially all of these theories sort themselves into two buckets, and the core distinction is: is consciousness real enough to formalize exactly, or not? This frame is perhaps the most useful one for evaluating theories of consciousness.

Lucas: And then there’s the meta-problem of consciousness, which is quite funny. It’s basically: why have we been talking about consciousness for the past hour, and what’s all this stuff about qualia and happiness and sadness? Why do people make claims about consciousness? Why does it seem to us that there is maybe something like a hard problem of consciousness? Why is it that we experience phenomenological states? Why isn’t everything going on with the lights off?

Mike: I think this is a very clever move by David Chalmers. It’s a way to try to unify the field and get people to talk to each other, which is not so easy in this field. The meta-problem of consciousness doesn’t necessarily solve anything, but it tries to inclusively start the conversation.

Andrés: The common move people make here is to say that all of these crazy things we think and say about consciousness are just an information processing system modeling its own attentional dynamics. That’s one illusionist frame, but even within a qualia realist, qualia formalist paradigm, you still have the question of why we even think or self-reflect about consciousness. You could very well think of consciousness as being computationally relevant, something you need to have, and so on, while still lacking introspective access to it. You could have these complicated conscious information processing systems that don’t necessarily self-reflect on the quality of their own consciousness. That property is important to model and make sense of.

We have a few formalisms that may give rise to some insight into how self-reflectivity happens, and in particular how it is possible to model the entirety of your state of consciousness in a given phenomenal object. This ties in with the notion of a homunculus. If the overall valence of your consciousness is actually a signal traditionally used for fitness evaluation, detecting when you are at existential risk or when there are reproductive opportunities you may be missing out on, then it makes sense for there to be a general thermostat of the overall experience, where you can just look at it and get a sense of the overall well-being of the entire experience added together, in such a way that you experience it all at once.

I think a lot of the puzzlement has to do with that internal self-model of the overall well-being of the experience, which is something we are evolutionarily incentivized to summarize and be able to see at a glance.

Lucas: So, some people have a view where human beings are conscious, they assume everyone else is conscious, and they think that the only place for value to reside is within consciousness, so that a world without consciousness is actually a world without any meaning or value. Even if we think that philosophical zombies, people who are functionally identical to us but with no qualia or phenomenological or experiential states, are merely conceivable, it would seem that there would be no value in a world of p-zombies. So I guess my question is: why does phenomenology matter? Why does the phenomenological modality of pain and pleasure, or valence, have some sort of special ethical or experiential status, unlike qualia like red or blue?

Why does red or blue not disclose some sort of intrinsic value in the same way that my suffering does or my bliss does or the suffering or bliss of other people?

Mike: My intuition is also that consciousness is necessary for value. Nick Bostrom has this wonderful quote in Superintelligence that we should be wary of building a Disneyland with no children: some technological wonderland that is filled with marvels of function but doesn’t have any subjective experience, doesn’t have anyone to enjoy it, basically. I would say that most AI safety research is focused on making sure there is a Disneyland, making sure, for example, that we don’t just get turned into something like paperclips. But there’s this other problem: making sure there are children, making sure there are subjective experiences around to enjoy the future. There aren’t many live research threads on this problem, and I see QRI as a live research thread on how to make sure there is subjective experience in the future.

Probably a can of worms there, but as for your question about pain and pleasure, I may pass that to my colleague Andrés.

Andrés: Nothing terribly satisfying here. I would go with David Pearce’s view that these properties of experience are self-intimating, and to the extent that you do believe in value, they will come up as the natural focal points for value, especially if you’re able to probe the quality of your experience. In many states you believe that the reason you like something is its intentional content. Again, it could be the case of graduating, or getting a promotion, or one of those things that a lot of people associate with feeling great, but if you actually probe the quality of the experience, you will realize that there is this component of it which is its hedonic gloss. You can manipulate it directly with things like opiate antagonists, and if the symmetry theory of valence is true, potentially also by directly modulating the consonance and dissonance of the brain harmonics, in which case the hedonic gloss would change in peculiar ways.

When it comes to consilience, when many different points of view agree on what aspect of the experience brings value to it, it seems to be the hedonic gloss.

Lucas: So in terms of qualia and valence realism, would the causal properties of qualia be the thing that would show any arbitrary mind the self-intimating nature of how good or bad an experience is? And in the space of all possible minds, what is the correct epistemological mechanism for evaluating the moral status of experiential or qualitative states?

Mike: So first of all, I would say that my focus so far has mostly been on describing what is, not what ought. I think that we can talk about valence without necessarily talking about ethics, but if we can talk about valence clearly, that certainly makes some questions and frameworks in ethics make much more or less sense. So the better we can clearly and purely descriptively talk about consciousness, the easier a lot of these ethical questions get. I’m trying hard not to privilege any ethical theory. I want to talk about reality, about what exists and what the structure of what exists is, and I think if we succeed at that, then all these other questions about ethics and morality get much, much easier. I do think that there is an implicit should wrapped up in questions about valence, but that’s another leap.

You can accept that valence is real without necessarily accepting that optimizing valence is an ethical imperative. I personally think, yes, it is very ethically important, but it is possible to take a purely descriptive frame to valence. Whether or not valence also discloses, as David Pearce says, the utility function of the universe is another question, and can be decomposed.

Andrés: One framing here too is that we do suspect valence is going to be the thing that matters to any mind, if you probe it in the right way in order to achieve reflective equilibrium. There’s this example of a talk a neuroscientist was giving at some point: there was something off, and everybody seemed to be a little bit anxious or irritated and nobody knew why, and then one of the conference organizers suddenly came up to the presenter, did something to the microphone, and everything sounded way better and everybody was way happier. There was this subtle hissing pattern caused by some malfunction of the microphone, and it was making everybody irritated; they just didn’t realize that was the source of the irritation. When it got fixed, everybody was like, “Oh, that’s why I was feeling upset.”

We will find that to be the case over and over when it comes to improving valence. So somebody in the year 2050 might come into one of the connectome-specific harmonic wave clinics saying, “I don’t know what’s wrong with me,” but if you put them through the scanner you identify that their 17th and 19th harmonics are in a state of dissonance. You cancel the 17th to make it cleaner, and then the person will say all of a sudden, “Yeah, my problem is fixed. How did you do that?” So I think it’s going to be a lot like that: the things that puzzle us, why do I prefer this, why do I think this is worse, will all of a sudden become crystal clear from the point of view of valence gradients, objectively measured.
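As a purely illustrative sketch of what “canceling a dissonant harmonic” could mean quantitatively, the snippet below uses Sethares’ parameterization of the Plomp–Levelt sensory dissonance curve, borrowed from psychoacoustics. The base frequency and the set of harmonics are hypothetical choices for illustration; this is not QRI’s actual connectome-specific harmonic wave method.

```python
import math

def roughness(f1, f2):
    """Sethares-style approximation of Plomp-Levelt sensory dissonance
    between two pure tones at frequencies f1, f2 (in Hz)."""
    s = 0.24 / (0.021 * min(f1, f2) + 19.0)  # critical-bandwidth scaling
    d = abs(f1 - f2)
    return math.exp(-3.5 * s * d) - math.exp(-5.75 * s * d)

def total_dissonance(freqs):
    """Sum pairwise roughness over all pairs of tones."""
    return sum(roughness(freqs[i], freqs[j])
               for i in range(len(freqs))
               for j in range(i + 1, len(freqs)))

base = 10.0                    # hypothetical base frequency (Hz)
harmonics = [1, 2, 3, 17, 19]  # the 17th and 19th sit close and clash
before = total_dissonance([n * base for n in harmonics])
after = total_dissonance([n * base for n in harmonics if n != 17])
# Removing the 17th harmonic strictly lowers total dissonance,
# since every pairwise roughness term is non-negative.
```

The numerical constants are Sethares’ standard fit to the Plomp–Levelt data; the point of the sketch is only that “cancel one component, total dissonance drops” is a well-defined, measurable operation in this kind of model.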

Mike: One of my favorite phrases in this context is “what you can measure you can manage,” and if we can actually find the source of dissonance in a brain, then yeah, we can resolve it. This could open the door for, honestly, a lot of amazing things, making the human condition just intrinsically better, but also maybe a lot of worrying things: being able to directly manipulate emotions may not necessarily be socially positive on all fronts.

Lucas: So I guess here we can begin to jump into AI alignment and qualia. We’re building AI systems and they’re getting pretty strong, and they’re going to keep getting stronger, potentially creating a superintelligence by the end of the century, and consciousness and qualia seem to be along for the ride, for now. So I’d like to discuss a little bit here about more specific places in AI alignment where these views might inform and direct it.

Mike: Yeah, I would share three problems of AI safety. There’s the technical problem: how do you make a self-improving agent that is also predictable and safe? This is a very difficult technical problem, first of all to even make the agent, but second of all, especially to make it safe, especially if it becomes smarter than we are. There’s also the political problem: even having a sufficiently good technical solution doesn’t mean that it will be put into action in a sane way if we’re not in a reasonable political system. But I would say the third problem is what QRI is most focused on, and that’s the philosophical problem. What are we even trying to do here? What is the optimal relationship between AI and humanity? There are also a couple of specific details here. First of all, I think nihilism is absolutely an existential threat, and if we can find some antidotes to nihilism through some advanced valence technology, that could be enormously helpful for reducing X-risk.

Lucas: What kind of nihilism are you talking about here? Nihilism about morality and meaning?

Mike: Yes, I would say so, and just personal nihilism: it feels like nothing matters, so why not do risky things?

Lucas: Whose quote is it, the philosopher’s question of whether you should just kill yourself? That’s the yawning abyss of nihilism inviting you in.

Andrés: Albert Camus. The only real philosophical question is whether to commit suicide. Whereas how I think of it is that the real philosophical question is how to make love last, bringing value to existence, and if you have value on tap, then the question of whether to kill yourself or not seems really nonsensical.

Lucas: For sure.

Mike: We could also say that right now there aren’t many good Schelling points for global coordination. People talk about how having global coordination in building AGI would be a great thing, but we’re a little light on the details of how to do that. If a clear, comprehensive, useful, practical understanding of consciousness can be built, then this may embody or generate new Schelling points that the larger world could self-organize around. If we can give people a clear understanding of what is and what could be, then I think we will get a better future that actually gets built.

Lucas: Yeah. Showing what is and what could be is immensely important and powerful. So moving forward with AI alignment, as we’re building these more and more complex systems, there’s this needed distinction between unconscious and conscious information processing, if we’re interested in the morality and ethics of suffering and joy and other conscious states. How do you guys see the science of consciousness actually being able to distinguish between unconscious and conscious information processing systems?

Mike: There are a few frames here. One is that, yeah, it does seem like the brain does some processing in consciousness and some processing outside of consciousness, and what’s up with that? This could be an interesting frame to explore in terms of avoiding things like mind crime in the AGI or AI space: if there are certain computations which are painful, then don’t do them in a way that would be associated with consciousness. It would be very good to have rules of thumb for how to do that. One interesting idea is that in the future we might not just have compilers which optimize for speed of processing or minimization of dependent libraries and so on, but compilers that optimize for the valence of the computation on certain hardware. This of course gets into complex questions about computationalism, how hardware-dependent this compiler would be, and so on.

I think it’s an interesting and important long-term frame.

Lucas: So just to illustrate here, I think, the ways in which solving or better understanding consciousness will inform AI alignment, from the present day until superintelligence and beyond.

Mike: I think there’s a lot of confusion about consciousness and a lot of confusion about what kind of thing the value problem is in AI safety, and there are some novel approaches on the horizon. I was speaking with Stuart Armstrong at the last EA Global and he had some great things to share about his model fragments paradigm. I think this is the right direction. It’s understanding that, yeah, human preferences are insane. They’re just not a consistent formal system.

Lucas: Yeah, we contain multitudes.

Mike: Yes, yes. So first of all, understanding what generates them seems valuable. There’s this frame in AI safety called the complexity of value thesis; I believe Eliezer came up with it in a post on LessWrong. It’s this frame where human value is very fragile, in that it can be thought of as a small area, perhaps even almost a point, in a very high-dimensional space, say a thousand dimensions. If we go any distance in any direction from this tiny point in this high-dimensional space, then we quickly get to something that we wouldn’t think of as very valuable: maybe we leave everything the same and just take away freedom. This paints a pretty sobering picture of how difficult AI alignment will be.
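The “tiny point in a thousand-dimensional space” image can be made quantitative. As a hedged illustration only (the uniform measure over a hypercube and the specific radius are my assumptions, not part of the thesis itself), the fraction of configuration space that lies within a fixed distance of a single valuable point collapses super-exponentially as dimension grows:

```python
import math

def value_region_fraction(d, r=0.5):
    """Fraction of the hypercube [-1, 1]^d lying within Euclidean
    distance r of a single 'valuable' point at the origin: the volume
    of a d-ball of radius r divided by the cube's volume 2^d."""
    ball = math.pi ** (d / 2) * r ** d / math.gamma(d / 2 + 1)
    return ball / 2 ** d

# The valuable region's share of the space vanishes as d grows:
for d in (2, 10, 100):
    print(d, value_region_fraction(d))
```

With these toy numbers the fraction is already about 0.2 in 2 dimensions, on the order of 10⁻⁶ in 10, and vanishingly small in 100, which is the fragility intuition in miniature: a random move, or a random optimizer, almost never lands near the point.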

I think this is arguably the source of a lot of worry in the community: not only do we need to make machines that won’t just immediately kill us, but machines that will preserve our position in this very, very high-dimensional space well enough that we keep the same trajectory, because possibly if we move at all, then we may enter a totally different trajectory that we in 2019 wouldn’t think of as having any value. So this problem becomes very, very intractable. I would just say that there is an alternative frame. The phrasing I’m playing around with here is, instead of the complexity of value thesis, the unity of value thesis: it could be that many of the things we find valuable, eating ice cream, living in a just society, having a wonderful interaction with a loved one, all have the same underlying neural substrate, and empirically this is what affective neuroscience is finding.

Eating a chocolate bar activates the same brain regions as a transcendental religious experience. So maybe there’s some sort of elegant compression that can be made, and things aren’t actually so stark. We’re not this point in a super high-dimensional space where, if we leave the point, everything of value is trashed forever; maybe there’s some sort of convergent process that we can follow, that we can essentialize. We can make this list of 100 things that humanity values, and maybe they all have positive valence in common, and positive valence can be reverse-engineered. To some people this feels like a very scary dystopic scenario, don’t knock it until you’ve tried it, but at the same time there’s a lot of complexity here.

One core frame that the idea of qualia formalism and valence realism can offer AI safety is that maybe the actual goal is somewhat different from what the complexity of value thesis puts forward. Maybe the actual goal is different and in fact easier. I think this could directly inform how we spend our resources on the problem space.

Lucas: Yeah, I was going to say that there exists a standing tension between this view of the complexity of all the preferences and values that human beings have, and the valence realist view, which says that what’s ultimately good are certain experiential or hedonic states. I’m interested and curious about whether, if this valence view is true, it’s all just going to turn into hedonium in the end.

Mike: I’m personally a fan of continuity. I think that if we do things right we’ll have plenty of time to get things right, and if we do things wrong then we’ll have plenty of time for things to be wrong. So I’m personally not a fan of big unilateral moves. It just gets back to this question of whether understanding what is can help us: clearly, yes.

Andrés: Yeah. I guess one view is we could say preserve optionality and learn what is, and then from there hopefully we’ll be able to better inform oughts and with maintained optionality we’ll be able to choose the right thing. But that will require a cosmic level of coordination.

Mike: Sure. An interesting frame here is whole brain emulation. Whole brain emulation is a frame built around functionalism, and it’s a seductive frame, I would say. If whole brain emulations wouldn’t necessarily have the same qualia as the original humans, based on hardware considerations, there could be some weird lock-in effects: if the majority of society turned themselves into p-zombies, it may be hard to go back on that.

Lucas: Yeah. All right. We’re just getting to the end here, I appreciate all of this. You guys have been tremendous and I really enjoyed this. I want to talk about identity in AI alignment. This sort of taxonomy that you’ve developed about open individualism and closed individualism and all of these other things. Would you like to touch on that and talk about implications here in AI alignment as you see it?

Andrés: Yeah, for sure. The taxonomy comes from Daniel Kolak, a philosopher and mathematician. It’s a pretty good taxonomy. Basically there’s open individualism, the view that a lot of meditators and mystics and people who take psychedelics often subscribe to, which is that we’re all one consciousness. Another frame for it is that our true identity is the light of consciousness, so to speak, so it doesn’t matter in what form it manifests, it’s always the same fundamental ground of being. Then you have the common-sense view, called closed individualism: you start existing when you’re born, you stop existing when you die, you’re just this segment. Some religions extend that into the future or past with reincarnation, or maybe with heaven.

It’s the belief in an ontological distinction between you and others, while at the same time there is ontological continuity from one moment to the next within you. Finally you have the view called empty individualism, which is that you’re just a moment of experience. That’s fairly common among physicists, and a lot of people who’ve tried to formalize consciousness often converge on empty individualism. I think a lot of theories of ethics and rationality, like the veil of ignorance as a guide, or defining rational decision-making as maximizing the expected utility of yourself as an agent, seem to be implicitly based on closed individualism without questioning it very much.

On the other hand, if the sense of individual identity of closed individualism doesn’t actually carve nature at its joints, as a Buddhist might say, if the feeling of continuity of being a separate unique entity is an illusory construction of your phenomenology, that casts how to approach rationality itself, and even self-interest, in a completely different light, right? If you start identifying with the light of consciousness rather than your particular instantiation, you will probably care a lot more about what happens to pigs in factory farms, because insofar as they are conscious, they are you in a fundamental way. It matters a lot in terms of how to carve out different possible futures, especially when you get into these very tricky situations like: what if there is mind-melding, or what if there is the possibility of making perfect copies of yourself?

All of these edge cases are really problematic from the common-sense view of identity, but they’re not really a problem from an open individualist or empty individualist point of view. With all of this said, I do personally think there’s probably a way of combining open individualism with valence realism that gives rise to the next step in human rationality, where we’re actually trying to really understand what the universe wants, so to speak. But I would say there is a very tricky aspect here that has to do with game theory. We evolved to believe in closed individualism. The fact that it’s evolutionarily adaptive is obviously not an argument for it being fundamentally true, but it does seem to be some kind of evolutionarily stable point to think of yourself as the thing you can affect most directly in a causal way, if you define your boundary that way.

That basically gives you focus on the actual degrees of freedom that you do have. And if you think of a society of open individualists, everybody altruistically, maximally contributing to the universal consciousness, and then you have one closed individualist who is just selfishly trying to acquire power for itself, you can imagine that that one view would have a tremendous evolutionary advantage in that context. So I’m not one who just naively advocates for open individualism unreflectively. I think we still have to work out the game theory of it, how to make it evolutionarily stable and also how to make it ethical. It’s an open question. I do think it’s important to think about, and if you take consciousness very seriously, especially within physicalism, that usually casts huge doubts on the common-sense view of identity.

It doesn’t seem like a very plausible view if you actually try to formalize consciousness.

Mike: The game theory aspect is very interesting. You can think of closed individualism as something evolution produced that allows an agent to coordinate very closely with its past and future selves. Maybe we can say a little bit about why we’re not all empty individualists or open individualists by default. Empty individualism seems to have a problem where, if every slice of conscious experience is its own thing, then why should you even coordinate with your past and future selves, since they’re not the same as you? That leads to a problem of defection. And open individualism, where everything is the same being, so to speak, as Andrés mentioned, allows free riders: if people are defecting, it doesn’t allow altruistic punishment or any way to stop the free riding. There’s interesting game theory here, and it also feeds into the question of how we define our identity in the age of AI, the age of cloning, the age of mind uploading.

This gets very, very tricky very quickly. Depending on one’s theory of identity, people open themselves up to getting hacked in different ways, and so different theories of identity allow different forms of hacking.

Andrés: Yeah, and sometimes that’s really good and sometimes really bad. I would make the prediction that, not necessarily open individualism in its full-fledged form, but a weaker sense of identity than closed individualism, is likely going to be highly adaptive in the future, as people gain the ability to modify their state of consciousness in much more radical ways. People who identify with a narrow sense of identity will just stay in their shells and not try to disturb the local attractor too much, and that itself is not necessarily very advantageous if the things on offer are actually really good, both hedonically and intelligence-wise.

I do suspect that people who are somewhat more open to identifying with consciousness, or at least with a broader sense of identity, will be the people making more substantial progress, pushing the boundary and creating new cooperation and coordination technology.

Lucas: Wow, I love all that. Seeing closed individualism for what it was has had a tremendous impact on my life, and this whole question of identity I think is largely confused for a lot of people. At the beginning you said that open individualism says that we are all one consciousness, or something like this, right? For me, in identity, I’d like to move beyond all distinctions of sameness or difference. To say “oh, we’re all one consciousness” seems to me like saying “we’re all one electromagnetism,” which is really to say that consciousness is an independent feature or property of the world, a ground part of the world, and when the world produces agents, consciousness is just an empty, identityless property that comes along for the ride.

In the same way in which it would be nonsense to say, “Oh, I am these specific atoms; I am just the forces of nature that are bounded within my skin and body,” there is, in what we were discussing with consciousness, the binding problem of the person, the discreteness of the person: where does the person really begin or end? It seems like these different kinds of individualism have, as you said, epistemic and functional uses, but they also, in my view, create a ton of epistemic problems and ethical issues. And in terms of valence theory, if qualia are actually something good or bad, then, as David Pearce says, it’s really just an epistemological problem that you don’t have access to other brain states in order to see the self-intimating nature of what it’s like to be that thing in that moment.

There’s a sense in which I want to reject all identity as arbitrary, and I want to do that in an ultimate way, but then in the conventional way, I agree with you guys that there are these functional and epistemic issues that closed individualism seems to remedy somewhat, which is why evolution, I guess, selected for it: it’s good for gene propagation and being selfish. But once one sees AI as just a new method of instantiating bliss, it doesn’t matter where the bliss is. Bliss is bliss, and there’s no such thing as your bliss or anyone else’s bliss. Bliss is like its own independent feature or property, and you don’t really begin or end anywhere. You are an expression of a 13.7-billion-year-old system that’s playing out.

The universe is just peopleing all of us at the same time, and when you get this view and you see yourself as just a super thin slice of the evolution of consciousness and life, for me it’s like: why do I really need to propagate my information into the future? I really don’t think there’s anything particularly special about the information of anyone who exists today. We want to preserve all of the good stuff and propagate it into the future, but people who seek immortality through AI, or seek any kind of continuation of what they believe to be their self, I just see all that as misguided, and I see it as wasting potentially better futures by trying to bring Windows 7 into the world of Windows 10.

Mike: This all gets very muddy when we try to merge human-level psychological drives, concepts, and adaptations with a fundamental-physics-level description of what is. I don’t have a clear answer. I would say that it would be great to identify with consciousness itself, but at the same time, that’s not necessarily super easy if you’re suffering from depression or anxiety. So I just think that this is going to be an ongoing negotiation within society, and hopefully we can figure out ways in which everyone can move forward.

Andrés: There’s an article I wrote; I just called it Consciousness vs. Replicators. That gets to the heart of this issue. It sounds a little bit like good and evil, but it really isn’t. The true enemy here is replication for replication’s sake. On the other hand, the only way in which we can ultimately benefit consciousness, at least in a plausible, evolutionarily stable way, is through replication. We need to find the balance between replication and benefit to consciousness that makes the whole system stable, good for consciousness, and resistant against defectors.

Mike: I would like to say that I really enjoy Max Tegmark’s general frame of us living in this mathematical universe. One reframe of what we were just talking about, in these terms, is that there are patterns which have to do with identity, with valence, and with many other things. The grand goal is to understand what makes a pattern good or bad and to optimize our light cone for those sorts of patterns. This may have some counterintuitive implications: maybe closed individualism is actually a very adaptive thing that, in the long term, builds robust societies. It could be that that’s not true, but I just think that taking the mathematical frame and the long-term frame is a very generative approach.

Lucas: Absolutely. Great. I just want to finish up here on two fun things. It seems like good and bad are real in your view. Do we live in heaven or hell?

Mike: A lot of quips come to mind here: hell is other people, or nothing is good or bad but thinking makes it so. My pet theory, I should say, is that we live in something that is perhaps as close to heaven as is physically possible. The best of all possible worlds.

Lucas: I don’t always feel that way but why do you think that?

Mike: This gets into the weeds of theories about consciousness. It’s this idea that we tend to think of consciousness on the human scale: is the human condition good or bad, is the balance of human experience on the good end, the heavenly end, or the hellish end? If we do have an objective theory of consciousness, we should be able to point it at things that are not human, and even things that are not biological. It may seem like a type error to do this, but we should be able to point it at stars and black holes and quantum fuzz. My pet theory, which is totally not validated but is falsifiable, and this gets into Bostrom’s simulation hypothesis, is that if we tally up the good valence and the bad valence in the universe, the human stuff might just be a rounding error.

Most of the value, in this case the positive and negative valence, is found elsewhere, not in humanity. And second of all, I have this list in the last appendix of Principia Qualia of where massive amounts of consciousness could be hiding in the cosmological sense. I’m very suspicious that the Big Bang starts with a very symmetrical state; I’ll just leave it there. In a utilitarian sense, if you want to get a sense of whether we live in a place closer to heaven or hell, we should actually get a good theory of consciousness and point it at things that are not human; cosmological-scale events or objects would be very interesting to point it at. This would give a much clearer answer as to whether we live somewhere closer to heaven or hell than human intuition can.

Lucas: All right, great. You guys have been super generous with your time and I’ve really enjoyed this and learned a lot. Is there anything else you guys would like to wrap up on?

Mike: I would just like to say, yeah, thank you so much for the interview and for reaching out and making this happen. It’s been really fun on our side too.

Andrés: Yeah, wonderful questions, and it’s very rare for an interviewer to have unconventional views of identity to begin with, so it was really fun. Really appreciate it.

Lucas: Would you guys like to go ahead and plug anything? What’s the best place to follow you guys, Twitter, Facebook, blogs, website?

Mike: Our website is qualiaresearchinstitute.org. We’re working on getting a PayPal donate button up, but in the meantime you can send us some crypto. We’re building out the organization, and if you want to read our stuff, a lot of it is linked from the website. You can also read my stuff at my blog, opentheory.net, and Andrés’s at qualiacomputing.com.

Lucas: If you enjoyed this podcast, please subscribe, give it a like or share it on your preferred social media platform. We’ll be back again soon with another episode in the AI Alignment series.


Featured image credit: Alex Grey

A Non-Circular Solution to the Measurement Problem: If the Superposition Principle is the Bedrock of Quantum Mechanics Why Do We Experience Definite Outcomes?

Source: Quora question – “Scientifically speaking, how serious is the measurement problem concerning the validity of the various interpretations in quantum mechanics?”


David Pearce responds [emphasis mine]:

It’s serious. Science should be empirically adequate. Quantum mechanics is the bedrock of science. The superposition principle is the bedrock of quantum mechanics. So why don’t we ever experience superpositions? Why do experiments have definite outcomes? “Schrödinger’s cat” isn’t just a thought-experiment. The experiment can be done today. If quantum mechanics is complete, then microscopic superpositions should rapidly be amplified via quantum entanglement into the macroscopic realm of everyday life.

Copenhagenists are explicit. The lesson of quantum mechanics is that we must abandon realism about the micro-world. But Schrödinger’s cat can’t be quarantined. The regress spirals without end. If quantum mechanics is complete, the lesson of Schrödinger’s cat is that if one abandons realism about a micro-world, then one must abandon realism about a macro-world too. The existence of an objective physical realm independent of one’s mind is certainly a useful calculational tool. Yet if all that matters is empirical adequacy, then why invoke such superfluous metaphysical baggage? The upshot of Copenhagen isn’t science, but solipsism.

There are realist alternatives to quantum solipsism. Some physicists propose that we modify the unitary dynamics to prevent macroscopic superpositions. Roger Penrose, for instance, believes that a non-linear correction to the unitary evolution should be introduced to prevent superpositions of macroscopically distinguishable gravitational fields. Experiments to (dis)confirm the Penrose-Hameroff Orch-OR conjecture should be feasible later this century. But if dynamical collapse theories are wrong, and if quantum mechanics is complete (as most physicists believe), then “cat states” should be ubiquitous. This doesn’t seem to be what we experience.

Everettians are realists, in a sense. Unitary-only QM says that there are quasi-classical branches of the universal wavefunction where you open an infernal chamber and see a live cat, other decohered branches where you see a dead cat; branches where you perceive the detection of a spin-up electron that has passed through a Stern–Gerlach device, other branches where you perceive the detector recording a spin-down electron; and so forth. I’ve long been haunted by a horrible suspicion that unitary-only QM is right, though Everettian QM boggles the mind (cf. Universe Splitter). Yet the heart of the measurement problem from the perspective of empirical science is that one doesn’t ever see superpositions of live-and-dead cats, or detect superpositions of spin-up-and-spin-down electrons, but only definite outcomes. So the conjecture that there are other, madly proliferating decohered branches of the universal wavefunction where different versions of you record different definite outcomes doesn’t solve the mystery of why anything anywhere ever seems definite to anyone at all. Therefore, the problem of definite outcomes in QM isn’t “just” a philosophical or interpretational issue, but an empirical challenge for even the most hard-nosed scientific positivist. “Science” that isn’t empirically adequate isn’t science: it’s metaphysics. Some deeply-buried background assumption(s) or presupposition(s) that working physicists are making must be mistaken. But which? To quote the 2016 International Workshop on Quantum Observers organized by the IJQF,

“…the measurement problem in quantum mechanics is essentially the determinate-experience problem. The problem is to explain how the linear quantum dynamics can be compatible with the existence of our definite experience. This means that in order to finally solve the measurement problem it is necessary to analyze the observer who is physically in a superposition of brain states with definite measurement records. Indeed, such quantum observers exist in all main realistic solutions to the measurement problem, including Bohm’s theory, Everett’s theory, and even the dynamical collapse theories. Then, what does it feel like to be a quantum observer?”

Indeed. Here I’ll just state rather than argue my tentative analysis.
Monistic physicalism is true. Quantum mechanics is formally complete. There is no consciousness-induced collapse of the wave function, no “hidden variables”, nor any other modification or supplementation of the unitary Schrödinger dynamics. The wavefunction evolves deterministically according to the Schrödinger equation as a linear superposition of different states. Yet what seems empirically self-evident, namely that measurements always find a physical system in a definite state, is false(!) The received wisdom, repeated in countless textbooks, that measurements always find a physical system in a definite state reflects an erroneous theory of perception, namely perceptual direct realism. As philosophers (e.g. the “two worlds” reading of Kant) and even poets (“The brain is wider than the sky…”) have long realised, the conceptual framework of perceptual direct realism is untenable. Only inferential realism about mind-independent reality is scientifically viable. Rather than assuming that superpositions are never experienced, suspend disbelief and consider the opposite possibility. Only superpositions are ever experienced. “Observations” are superpositions, exactly as unmodified and unsupplemented quantum mechanics says they should be: the wavefunction is a complete representation of the physical state of a system, including biological minds and the pseudo-classical world-simulations they run. Not merely “It is the theory that decides what can be observed” (Einstein); quantum theory decides the very nature of “observation” itself. If so, then the superposition principle underpins one’s subjective experience of definite, well-defined classical outcomes (“observations”), whether, say, a phenomenally-bound live cat, or the detection of a spin-up electron that has passed through a Stern–Gerlach device, or any other subjectively determinate outcome.
If one isn’t dreaming, tripping or psychotic, then within one’s phenomenal world-simulation, the apparent collapse of a quantum state (into one of the eigenstates of the Hermitian operator associated with the relevant observable in accordance with a probability calculated as the squared absolute value of a complex probability amplitude) consists of fleeting uncollapsed neuronal superpositions within one’s CNS. To solve the measurement problem, the neuronal vehicle of observation and its subjective content must be distinguished. The universality of the superposition principle – not its unexplained breakdown upon “observation” – underpins one’s classical-seeming world-simulation. What naïvely seems to be the external world, i.e. one’s egocentric world-simulation, is what linear superpositions of different states feel like “from the inside”: the intrinsic nature of the physical. The otherwise insoluble binding problem in neuroscience and the problem of definite outcomes in QM share a solution.
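The two formal facts Pearce leans on here, namely that the Schrödinger dynamics is linear and unitary (so a superposition never evolves into a single definite state on its own), and that the Born rule assigns each outcome a probability equal to the squared absolute value of its complex amplitude, can be illustrated numerically. The qubit state and Hamiltonian below are arbitrary illustrative choices, not anything specific to the neuronal conjecture in the text:

```python
# Minimal sketch: a two-state system ("qubit") in superposition.
# (1) Born rule: P(outcome) = |amplitude|^2.
# (2) Unitary evolution preserves the norm and keeps the state a
#     superposition; nothing in the linear dynamics selects one outcome.
import numpy as np

# An equal superposition with a relative phase (illustrative values).
alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)
psi = np.array([alpha, beta])

# Born rule: probabilities of the two definite outcomes.
probs = np.abs(psi) ** 2          # -> [0.5, 0.5]
assert np.isclose(probs.sum(), 1.0)

# Build U = exp(-iHt) for a simple Hamiltonian (Pauli-X), via its
# eigendecomposition, and evolve the state.
H = np.array([[0, 1], [1, 0]], dtype=complex)
t = 0.3
eigvals, eigvecs = np.linalg.eigh(H)
U = eigvecs @ np.diag(np.exp(-1j * eigvals * t)) @ eigvecs.conj().T

psi_t = U @ psi
# The evolved state is still a normalised superposition: the amplitudes
# redistribute, but no "collapse" to a definite state ever occurs.
assert np.isclose(np.linalg.norm(psi_t), 1.0)
```

The point of the sketch is simply that collapse has to be added to this formalism from outside; running the linear dynamics forward, for any Hamiltonian, only ever rotates one superposition into another.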

Absurd?
Yes, for sure: this minimum requirement for a successful resolution of the mystery is satisfied (“If at first the idea is not absurd, then there is no hope for it”– Einstein, again). The raw power of environmentally-induced decoherence in a warm environment like the CNS makes the conjecture intuitively flaky. Assuming unitary-only QM, the effective theoretical lifetime of neuronal “cat states” in the CNS is less than femtoseconds. Neuronal superpositions of distributed feature-processors are intuitively just “noise”, not phenomenally-bound perceptual objects. At best, the idea that sub-femtosecond neuronal superpositions could underpin our experience of law-like classicality is implausible. Yet we’re not looking for plausible theories but testable theories. Every second of selection pressure in Zurek’s sense (cf. “Quantum Darwinism”) sculpting one’s neocortical world-simulation is more intense and unremitting than four billion years of evolution as conceived by Darwin. My best guess is that interferometry will disclose a perfect structural match. If the non-classical interference signature doesn’t yield a perfect structural match, then dualism is true.

Is the quantum-theoretic version of the intrinsic nature argument for non-materialist physicalism – more snappily, “Schrödinger’s neurons” – a potential solution to the measurement problem? Or a variant of the “word salad” interpretation of quantum mechanics?
Sadly, I can guess.
But if there were one experiment that I could do, one loophole I’d like to see closed via interferometry, then this would be it.