This is more than a presentation. It’s an invitation to join the frontier of consciousness research 🙂
Spatiotemporal Coordinates: Thursday, November 20 – 4:00 PM – 10:00 PM PST – Frontier Tower (San Francisco, California)
Hello Qualia Community!
After a year of heads-down development (with glimpses shared on podcasts including Theories of Everything with Curt Jaimungal: #1, #2), QRI is ready to reveal the full scope of our work mapping the state-space of consciousness, modeling phenomenology, and identifying the computational properties of consciousness.
What We’ve Recently Accomplished In This Area:
In 2023, QRI’s High Energy Awareness Research Team conducted two legal psychedelic retreats exploring mushrooms, ayahuasca, and 5-MeO-DMT, bringing together an interdisciplinary coalition of meditators, psychonauts, physicists, and mathematicians to work as a think tank for weeks at a time (see heart.qri.org).
Systematic DMT phenomenology reveals repeatable phase transitions and geometric transformations, including hyperbolic curvature, symmetrification effects, and computational properties that demand explanation. For QRI, 2024-2025 has been about developing conceptual frameworks and computational models to make sense of these patterns, and then putting them to empirical test. We’ve discovered how coupling kernels (the rules of interaction between oscillating systems) can simultaneously shape both neural activity and the topological structure of physical fields, providing a potential causal chain from neurochemistry to the structure of conscious experience. It’s a conceptual framework where the pieces of the puzzle finally seem to fit together: psychedelics modify coupling dynamics at the neural level, these changes cascade through neural architectures to modulate field topology, and the resulting field structure feeds back into neural activity, avoiding the trap of epiphenomenalism while grounding phenomenology in processes with computationally meaningful properties.
What’s Next:
This event is divided into two parts.
We’ll present the core outputs of our research from the past year: methods, participants, results, and the theoretical frameworks we’ve developed. You’ll see interactive demonstrations and simulations that bring these concepts to life.
4:00 PM – Doors open, mingling with snacks and soft drinks
5:00-5:40 – Introductions and interactive demonstrations
5:40-6:40 – The Big Reveal: QRI’s psychedelic research over the last two years, including HEART retreat results, interactive simulations, and our theoretical breakthroughs
We’ll chart our path forward: In Q2 2026, we’re planning a legal psychedelic retreat where mathematicians, physicists, meditators, and visual artists will collaborate to test whether non-linear optics plays a role in psychedelic phenomenology (especially DMT). This generative framework, which we take seriously and can rigorously test with proper funding, will combine physics simulations, psychophysics studies, and rigorous phenomenological mapping to reverse engineer the medium of computation of consciousness itself.
7:00-8:00 – The Next Chapter: 2026 retreat plans, mathematical modeling of consciousness, and testing our generative frameworks
8:00-9:00 – Q&A and group discussions
9:00-10:00 – Mingling and winding down
We anticipate that once we recognize consciousness’s computational role, we will move from cognitive science to consciousness engineering: systematically exploring the state-space of possible experiences and recruiting new qualia varieties to enhance our (conscious) cognitive capabilities.
Presenters at this event:
Andrés Gómez-Emilsson: As QRI’s President and Director of Research, his work at QRI ranges from algorithm design, to psychedelic theory, to neurotechnology development, to mapping and studying the computational properties of consciousness. Andrés blogs at qualiacomputing.com.
Cube Flipper: With a deep understanding of wave dynamics and visual perception, Cube is dedicated to uncovering the intricacies of visual phenomena. In addition to their research at QRI, they also share their insights and findings on their personal blog smoothbrains.net.
What We’re Fundraising For:
To continue this groundbreaking work, we’re seeking support for:
Core operations: Salaries to keep QRI’s team intact for another year (and hopefully many more)
Two major retreats: The 2026 DMT phenomenology retreat and a 5-MeO-DMT awakening retreat
Research outputs: Publishing papers and studies, including upcoming pain quantification research (currently private, soon to be released)
Technology development: Bringing our phenomenology visualization tools to a fully functional state so we can crowdsource the mapping of the state-space of consciousness (first batch to be released on November 20th)
Whether you’re a researcher, artist, meditator, potential funder, or simply fascinated by the nature of consciousness, this event offers a rare opportunity to see cutting-edge phenomenological research as it unfolds. We’re also gauging interest for the 2026 retreat: if you have relevant expertise (mathematics, physics, meditation, visual arts), funding capacity, or alignment with this vision of consciousness research, we’d love to connect.
If you’re unable to make it but would still like to contribute to our research efforts, you can donate at qri.org/donate or with crypto on our Endaoment page.
This year’s QRI-associated essays reveal multiple converging lines of investigation:
Non-Linear Optics Framework: A generative framework exploring how the brain might render world simulations using optical elements. Key metaphors include Laser Chess (where local classical moves set constraints, then holistic standing wave patterns emerge) and Cel Animation (describing how the world simulation is constructed with independent layers controlled by different modules that overlap and interact in a shared perceptual workspace). This includes work on beamsplitter holography to explain Indra’s Net phenomenology.
Fractional Fourier Transform: Recent work explores how the brain may *utilize* the fractional Fourier transform, a generalization of the Fourier transform that smoothly interpolates between the spatial and frequency domains. This could explain characteristic ringing artifacts in psychedelic visual fields and provide a biologically plausible mechanism for massively parallel pattern recognition.
Coupling Kernels, CDNS, and Field Topology: A breakthrough framework showing how systems of coupled oscillators can both tune into resonant modes and control the topological structure of fields. This provides a direct causal chain from molecular interactions (neurotransmitters, psychedelics) to the structure of conscious experience. The coupling kernel acts as a “field-shaping operator”: the same mathematical object simultaneously modulates neural dynamics and field structure, explaining how psychedelics produce such radically different yet structurally consistent effects across participants and experiences. We hypothesize that DMT effectively implements a Mexican-hat coupling profile (strong negative coupling at short distances, positive at medium distances), creating competing clusters of coherence, while 5-MeO-DMT drives systems toward global phase synchronization (see the toy sketch after this list). This connects to the Consonance-Dissonance-Noise Signature (CDNS) approach, which describes valence in terms of spectral properties: consonance tracks positive valence, dissonance negative valence, and noise neutral valence. We will make the case for why these proof-of-concept demonstrations are compelling and point toward testable predictions.
Ongoing Foundational Work: QRI continues to develop frameworks including the CDNS approach, the Topological Solution to the Boundary Problem (explaining how unified experiences emerge with precise boundaries), liquid crystal dynamics as phenomenologically significant, and logarithmic scales of pleasure and pain for rigorous quantification of experiential quality.
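To make the coupling-kernel idea tangible, here is a minimal toy sketch (all parameters invented for illustration; this is not QRI’s actual model): a ring of Kuramoto-style phase oscillators whose pairwise coupling depends on distance. Swapping the kernel between an inverted Mexican-hat profile (negative at short range, positive at medium range, per the DMT hypothesis above) and uniform positive coupling (the 5-MeO-DMT case) changes how much global coherence the system settles into, as measured by the order parameter R.

```python
import numpy as np

def make_kernel(distances, profile):
    """Distance-dependent coupling kernel K(d). Parameters are made up.

    'dmt'  : inverted Mexican hat -- negative at short range, positive at medium range.
    '5meo' : uniform positive coupling, pushing toward global synchrony.
    """
    if profile == "dmt":
        return -2.0 * np.exp(-(distances / 0.05) ** 2) + 1.0 * np.exp(-((distances - 0.2) / 0.1) ** 2)
    return np.full_like(distances, 1.5)

def simulate(profile, n=200, steps=4000, dt=0.01, seed=0):
    rng = np.random.default_rng(seed)
    pos = np.linspace(0, 1, n, endpoint=False)          # oscillators placed on a ring
    d = np.abs(pos[:, None] - pos[None, :])
    d = np.minimum(d, 1 - d)                            # distance along the ring
    K = make_kernel(d, profile)
    theta = rng.uniform(0, 2 * np.pi, n)                # initial phases
    omega = rng.normal(0, 0.3, n)                       # natural frequencies
    for _ in range(steps):
        coupling = (K * np.sin(theta[None, :] - theta[:, None])).mean(axis=1)
        theta += dt * (omega + coupling)
    return np.abs(np.exp(1j * theta).mean())            # global order parameter R

for profile in ("dmt", "5meo"):
    print(profile, "global coherence R =", round(simulate(profile), 3))
```

In the full story the same kernel would also be shaping field topology; this sketch only captures the oscillator side of that claim.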
Introduction: Laser Chess as a Metaphor for the Brain as a Non-Linear Optical Computer
In Laser Chess (a synecdoche for games of this sort), players arrange various kinds of pieces that interact with lasers on a board. Pieces have “optical features” such as mirrors and beam splitters. Some pieces are vulnerable to being hit from certain sides, which takes them off the board, and some have sides which don’t interact with light but merely absorb it harmlessly (i.e. shields). You usually have a special piece which must not be hit, a.k.a. the King/Pharaoh/etc. (or your side loses). And at the end of your turn (once you’ve moved one of your pieces) the laser of your color is turned on; its light comes out of one of your pieces in a certain direction and then travels wherever it must (according to its own laws of behavior). Usually when your laser hits an unprotected side of a piece (including one of your own pieces), the targeted piece is removed from the board. Your aim is to hit and remove the special piece of your opponent.
Example of a beam splitter optical element (source)
What makes this game conceptually more interesting than Chess isn’t just that its openings haven’t been thoroughly studied (something Bobby Fischer complained about with Chess), but rather that the light’s path depends on all pieces functioning together as a whole, adding a layer of physical embodiment to the game. In other words, Laser Chess is not akin to Chess 960, where the main feature is that there are so many openings that the player needs to rely less on theory and more on fluid visual reasoning. It’s more, at least at the limit, like the difference between a classical and a quantum computer. It has a “holistic layer” that is qualitatively different than the substrate upon which the game normally operates.
In Laser Chess, the “piece layer” is entirely local, in that pieces can only move around in hops that follow local contextual rules, whereas the “laser layer” is a function of the state of the entire board. The laser layer is holistic in nature: it is the result of, at the limit, letting the light go back and forth an infinite number of times and letting it resolve whatever loop or winding path it may need to go through. You’re looking for the standing wave pattern the light wants to settle into on its own.
Online Laser Chess (source) – the self-own of the blue player is understandable given the counter-intuitive (at first) way the light ends up traveling.
In Laser Chess you move your piece to a position you thought was safe, only to be hit by the laser because the piece itself was what was making that position safe! The beginner is often startled by the way the game develops, which makes it fun to play for a while. The mechanic is clever, and to play well you need to think in ways perhaps a bit alien to a strict Chess player. But at the end of the day it’s not that different a game. You still end up doing a lot of calculation (in the traditional Chess sense of “mental motions” you keep track of to study possible game trees), and the laser layer only changes this slightly.
When the laser beam hits one of the mirrors, it will always turn 90 degrees, as shown in the diagrams. The beam always travels along the rows and columns; as long as the pieces are properly positioned in their squares, it will never go off at weird angles. – Khet: The Laser Game Game Rules
In Laser Chess, the behavior of light is not particularly impressive. After all, thinking about the laser layer in terms of simple local rules is usually enough (“advance forward until you hit a surface”, “determine the next move as a function of the type of surface you hit”, etc.). The game is quite “discretized” by design. Tracing a single laser path is indeed easy when the range of motion and possible modes of interaction are precisely constructed to make it easy to play. It’s uncomplicated by design. The calculations needed to predict the path of the light never become intractable: the angles are 45°/90°, the surfaces cleanly split, reflect, or absorb the light, etc.
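To see just how tame the base laser layer is, here is a hypothetical toy tracer (the board layout, symbols, and mirror conventions are invented for this sketch and not taken from any official Khet ruleset): every step is a purely local rule, and the whole beam path is just the composition of those rules.

```python
# Toy tracer for a Khet/Laser-Chess-style laser layer. Board, symbols, and
# mirror conventions are invented for illustration only.
BOARD = [
    ".....",
    ".....",
    "..X..",   # 'X': a piece that absorbs the beam (and would be removed)
    "/./..",   # two '/' mirrors redirect the beam toward it
    ".....",
]
MIRROR = {                       # how '/' and '\' mirrors turn a heading (dr, dc)
    "/":  {(0, 1): (-1, 0), (0, -1): (1, 0), (1, 0): (0, -1), (-1, 0): (0, 1)},
    "\\": {(0, 1): (1, 0), (0, -1): (-1, 0), (1, 0): (0, 1), (-1, 0): (0, -1)},
}

def trace(start, heading):
    """Follow the beam cell by cell, applying only local rules."""
    r, c = start
    path = []
    while 0 <= r < len(BOARD) and 0 <= c < len(BOARD[0]):
        cell = BOARD[r][c]
        path.append((r, c))
        if cell == "X":                      # absorbed: in the game, the piece is removed
            break
        if cell in MIRROR:                   # 90-degree turn, as in the rulebook quote
            heading = MIRROR[cell][heading]
        r, c = r + heading[0], c + heading[1]
    return path

print(trace(start=(4, 0), heading=(-1, 0)))  # fire from the bottom-left corner, heading up
```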
Laser Chess, now with weird polygonal pieces and diffraction effects!
But in a more general version of Laser Chess the calculations can easily become intractable and far more interesting. If we increase the range of angles the pieces can be at relative to each other (or make them polygons), we suddenly enter states that require very long calculations to estimate within a certain margin of error. And if we introduce continuous surfaces, or allow the light to diffract or refract, we will start to require the mathematics that have been developed for optics.
In a generalized Laser Chess, principles for the design of certain pieces could use specific optical properties, like edge diffraction:
If light passes near the edge of a piece (rather than hitting it directly), it could partially bend around the object instead of just stopping. Obstacles wouldn’t provide perfect shadows, allowing some light to “leak” around corners in a predictable but complex way. Example: A knight-like piece could have an “aura of vulnerability” where light grazing its edge still affects pieces behind it.
Instead of treating lasers as infinitely thin lines, beams could diffract when passing through narrow gaps or slits. This would allow for beam broadening, making it possible to hit multiple pieces even if they aren’t in a direct line. Example: If a piece has a slit or small hole, it could scatter the laser into a cone, potentially hitting multiple targets.
And so on. There is a staggering number of optical properties to select from: refraction, iridescence, polarization, birefringence, and total internal reflection, each offering unique strategic possibilities. And then there are their mutual interactions to consider. Taking all of this into account, a kind of generalized Laser Chess complexity hierarchy arises:
The simplest Laser Chess variants are mostly geometric, with straightforward ray tracing. They benefit from a physical laser or a computer, but don’t require one.
Intermediate complexity comes from adding diffraction, refraction, and wave optics, requiring Fourier transforms and wave equations to analyze the beam behavior (a small sketch follows this list). It requires a physical laser or a computer to be played, because mental calculation won’t do.
And high complexity variants come about when you take into account quantum-inspired effects like interference and path integrals, leading to both deterministic and probabilistic gameplay mechanics where players need to reason about complex superpositions and calculate probabilities. Here, computers suffice only for carefully designed cases; physical embodiment might become necessary above a certain complexity.
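As a taste of what the intermediate tier’s “requiring Fourier transforms” looks like in practice, here is a back-of-the-envelope sketch under scalar wave optics in the far-field (Fraunhofer) regime, with arbitrary units: the intensity pattern behind an aperture is approximately the squared magnitude of the aperture’s Fourier transform, so narrowing a slit broadens the beam instead of sharpening the shadow.

```python
import numpy as np

# Far-field (Fraunhofer) diffraction: intensity behind an aperture ~ |FFT(aperture)|^2.
# Units are arbitrary; this is a sketch, not a full beam-propagation model.
n = 4096
x = np.linspace(-1.0, 1.0, n)

def far_field(slit_width):
    aperture = (np.abs(x) < slit_width / 2).astype(float)
    intensity = np.abs(np.fft.fftshift(np.fft.fft(aperture))) ** 2
    return intensity / intensity.max()

for w in (0.5, 0.05):
    I = far_field(w)
    # Width of the central lobe: number of frequency bins above half maximum.
    print(f"slit width {w}: central-lobe width = {np.sum(I > 0.5)} bins")
```

The narrower slit produces the wider central lobe, which is the “beam broadening” effect mentioned above.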
The Self as King
Let’s start to draw the analogy. Imagine the special piece as your sense of self, the piece that must be protected, while the other pieces represent state variables tuning your world-model. In some configurations, they work together to insulate the King, diffusing energy smoothly across the board. In others, a stray beam sneaks through—an unexpected reflection, a diffraction at just the wrong angle—and suddenly, the self is pierced, destabilized, and reconfigured. The mind plays this game with itself, setting up stable patterns, only to knock them down with a well-placed shot.
The field of consciousness, poetically speaking, is a lattice of light shifting under the pressure of attention, expectation, and the occasional physiological shear. But whether or not the awareness that corresponds to the light is self-aware depends on the precise configuration of this internal light path: some ways of arranging the board allow for a story to be rendered, where a sense of self, alive and at the center of the universe, is interpreted as the experiencer of the scene. Yet the scene is always being experienced holistically even if without a privileged center of aggregation of the light paths. The sense of a separate, divided witness might be a peculiar sleight of hand of this optical system, a kind of enduring optical illusion generated by what is actually real: the optical display.
BaaNLOC
The Brain as a Non-Linear Optical Computer (BaaNLOC) proposes that something like this happens in the brain. The brain’s physical structure – its neural wiring, synaptic connections, and the molecular machinery of neurons – maps onto a set of “optical” properties. These properties shape how electromagnetic waves flow and interact in neural tissue.
Think of a sensory stimulus, within the Laser Chess analogy of the brain’s computational substrate, as akin to a brief blip from a laser. As the stimulus-triggered electrochemical signal propagates through neural circuits, its path is shaped by the brain’s “optical” configuration. Excitatory and inhibitory neurons, tuned to different features, selectively reflect and refract the signal. The liquid crystal matrix encoded in the molecular structure of intracellular proteins might also play a role, perhaps modulating the electromagnetic medium through which the signal must travel.
Where these signals meet, they interfere, their wave properties combining to amplify or cancel each other out. BaaNLOC posits that the large-scale interference pattern and the non-linear emergent topological structure of these interacting waves constitutes the contents of subjective experience.
Attention and expectation act as a steady pressure on this system, stabilizing certain wave patterns over others, like a piece on the board influencing the path of the laser. What we perceive and feel emerges from the EM standing waves shaped by this top-down influence.
Psychedelics and BaaNLOC
Psychedelics, in this framework, temporarily alter the optical properties of the brain. Abnormal patterns of signaling elicited by drugs like DMT change how neural waves propagate and interact. The result is a radical reconfiguration of the interference patterns corresponding to conscious experience.
The BaaNLOC paradigm seeks to bridge the brain’s electrodynamics with the phenomenology of subjective experience by framing neural processes in terms of EM wave dynamics and electrostatic field interactions. While the precise mapping between neural activity and optical properties remains an open question (we have some ideas), the process of searching for this correspondence is already generative. The brain’s electrostatic landscape is not uniform; instead, it consists of regions with varying permittivity and permeability, which affect the way EM waves propagate, reflect, and interfere. Axonal myelination influences conduction velocity by altering the dielectric properties of neural pathways, shaping the timing and coherence of signals across brain regions. Dendritic arbor geometry sculpts synaptic summation, forming local electrostatic gradients that influence how waves superpose and propagate. Cortical folding affects field interactions by modulating the spatial configuration of charge distributions, altering the effective permittivity of different regions and creating potential boundaries for wave interference. These parameters suggest that experience may be structured not only by firing patterns but also by the electrostatic properties of the substrate itself. If perception is mediated by standing waves in an EM field shaped by the brain’s own internal dielectric properties, then the phenomenology of experience may correspond to structured resonances within this medium, much like how lenses manipulate light by controlling permittivity gradients. Investigating these interactions could illuminate the connection between the brain’s physical substrate and the emergent contours of conscious experience.
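One way to build intuition for how a spatially structured dielectric selects resonant modes is a minimal 1D eigenmode sketch (finite differences, hard boundaries, a made-up permittivity profile; purely illustrative, not a model of neural tissue): changing the permittivity landscape changes both the supported standing-wave frequencies and where their energy concentrates.

```python
import numpy as np
from scipy.linalg import eigh

# 1D "cavity" sketch: solve -u'' = k^2 * eps(x) * u with u = 0 at both ends.
# eps(x) is an invented permittivity profile; this only illustrates how dielectric
# structure shapes standing-wave modes.
n, L = 400, 1.0
x = np.linspace(0, L, n + 2)[1:-1]                         # interior grid points
h = x[1] - x[0]

def lowest_modes(eps, k_modes=3):
    main = (2.0 / h**2) * np.ones(n)
    off = (-1.0 / h**2) * np.ones(n - 1)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)  # -d^2/dx^2 with Dirichlet boundaries
    vals, vecs = eigh(A, np.diag(eps))                      # generalized eigenproblem
    return np.sqrt(vals[:k_modes]), vecs[:, :k_modes]

uniform = np.ones(n)
lens = 1.0 + 2.0 * np.exp(-((x - 0.5 * L) / 0.1) ** 2)      # high-permittivity "lens" in the middle

for name, eps in [("uniform", uniform), ("lens", lens)]:
    k, u = lowest_modes(eps)
    central = np.abs(x - 0.5 * L) < 0.15
    frac = np.sum(u[:, 0] ** 2 * central) / np.sum(u[:, 0] ** 2)
    print(f"{name:8s} lowest k values: {np.round(k, 2)}  fundamental's energy near center: {frac:.2f}")
```

The high-permittivity region pulls the lowest modes toward it and shifts their frequencies, a small-scale analogue of the claim that dielectric structure could sculpt which resonances the medium supports.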
You can even do spectral filtering of images with analogue Fourier transforms using optical elements alone. Think about how this optical element could be used right now in your brain to render and manufacture your current reality:
Analogue Fourier transform and filtering of optical signals. (Gif by Hans Chiu – source).
Real-time analog Fourier decomposition of sensory information would be a powerful computational tool, and we propose that the brain’s optical systems leverage this to structure our world-simulation.
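Here is a digital stand-in for that optical 4f filtering trick, as a sketch (synthetic test image and arbitrary cutoff; the claim that the brain does something analogous is the hypothesis, not the code): transform to the frequency domain, mask a band of spatial frequencies, and transform back.

```python
import numpy as np

# Digital analogue of optical Fourier filtering: FFT the image, apply a mask in
# the Fourier plane, inverse FFT. Test image and cutoff are arbitrary.
n = 256
y, x = np.mgrid[0:n, 0:n]
image = np.sin(2 * np.pi * 4 * x / n) + 0.5 * np.sin(2 * np.pi * 40 * y / n)

spectrum = np.fft.fftshift(np.fft.fft2(image))
fy, fx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
low_pass = (np.sqrt(fx**2 + fy**2) < 10).astype(float)     # keep only coarse spatial structure

filtered = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * low_pass)))

# The fine 40-cycle component is removed; the coarse 4-cycle component survives.
print("std before:", round(image.std(), 3), " std after:", round(filtered.std(), 3))
```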
In this framework, certain gestalt patterns act as energy sinks, analogous to standing waves at resonant frequencies. These patterns serve as semantic attractors in the brain’s harmonic energy landscape, forming local minima where perceptual content naturally stabilizes. These attractor surfaces are often semi-transparent, refractive, diffractive, or polarizing, vibrating in geometry-dependent ways. “Sacred geometry” corresponds to vibratory patterns that are maximally coherent across multiple layers at once, representing low-energy states in the system’s configuration space. When the world-sheet begins to resemble these structures, it “snaps” into symmetry, as this represents an energy minimum. This aligns with Lehar’s field-theoretic model of perception, where visual processing emerges from extended spatial fields of energy interacting according to lawful dynamics. Given that such self-organizing optical behavior is characteristic of liquid crystals, it is worth considering whether the brain’s substrate exploits liquid-crystalline properties to facilitate these energy-minimizing transformations.
It is within this paradigm that the following idea is situated.
DMT Visuals as Holographic Cel Animation in a Nonlinear Optical Medium
DMT visuals (and to a lesser extent those induced by classic psychedelics in general) might be understood as semi-transparent flat surfaces in a non-linear optical medium, akin to the principles behind cel animation. Source: How It’s Made | Traditional Cel Animation*
Cel animation uses partially transparent layers to render objects in a way that allows them to move independently of each other. In cel animation the features of your world are parsed in a suspiciously anthropomorphic way. If you change a single element in an unnatural way, you find it rather odd, like it breaks the 4th wall. You can get someone to blink an eye or move their mouth in the absence of any other movement. What kind of physical system would do that? One that was specifically constructed for you as an interface.
Imagine a child flipping through a book of transparent pages, each containing a fragment of a jaguar, a palm, a tribal mask. As the pages overlay, the scene assembles itself — not as a static image, but as a living tableau (somebody please fire the Salesforce marketing department for appropriating such a cool word). Now imagine those transparencies aren’t merely stacked; they are allowed to be at odd angles relative to each other and to the camera:
This is the basic setup. The idea is that on DMT, especially during the come-up at moderate doses (e.g. reaching Magic Eye-level), the sudden appearance of 2D gestalts in 3D (which are then “projected” onto a 2.5D visual field) is a key phenomenological feature. The rate of appearance and disappearance of these gestalts is dose-dependent, as is the kind of interactions they come enabled with. From here, we can start to generalize this kind of system to better capture the visual (and somatic, as we will see) features of a DMT experience in its full richness and complexity. Just as in the case of Laser Chess, where we began with a basic setup and then explored how non-linear optics would massively complicate the system as we introduce interesting twists, here as well we begin with cel animation planes in a 3D space and add new features until they get us somewhere really interesting.
An important point is that DMT cel-animation-like phenomenology seems to have some hidden rules that are difficult to articulate, let alone characterize in full, because they interact with the structure of our attention and awareness. Unlike actual cel animation, the flat DMT gestalts don’t require a full semi-transparent plane to come along with them – they are “cut” already, and yet somehow can “float” just fine. Importantly, even when you have extended planes and they are, say, rotating, they can often intersect. Or rather, the fact that they overlap in their position in the visual field does not mean that they will interact as if they were occupying the same space. Whether two of these gestalts interact with each other or not depends on how you pay attention to them. There is a certain kind of loose and relaxed approach to attention where they all go through each other, as if entirely insubstantial. There is another way of attending where you force their interaction. If you have seven 2D gestalts floating in your visual field, by virtue of the fact that you only have so many working memory slots / attention streams, it is very difficult to keep them all separate. At the same time, it is also very difficult to bring them all together. More typically, there is a constantly shifting interaction graph between these gestalts, where, depending on how the emergent attention dynamics of the mind go, clusters of these gestalts end up being paid attention to simultaneously, and thus blend/unify/compete and constructively/destructively interfere with one another.
One remarkable property of these effects is that 2D gestalts can experience transformations of numerous kinds: shrinking, expanding, shearing, rotating, etc. Each of these planes implicitly drags along a “point of view”. And one of the ways in which they can interact is by “sharing the same point of view”.
Cels as Planes of Focus
One key insight is that the 2D surfaces that make up these cels in the visual field on a moderate dose of DMT seem to be regions where one can “focus all at once”. If you think of your entire visual field as an optical display that can “focus” on different elements on a scene, during normal circumstances it seems that we are constrained to focusing on scenes one plane at a time. Perhaps we have evolved to match as faithfully as possible the optical characteristics of a camera-like system with only one plane of focus, and thus we “swallow in” the optical characteristics of our eyes and tend to treat them as fundamental constraints of our perception. However, on DMT (and to a lesser extent other psychedelics) one can see multiple planes “in focus” at the same time. Each of these gestalts is typically perfectly “in focus” and yet with incompatible “camera parameters” to the other planes. This is what makes, in part, the state feel so unusual: there is a sense in which it feels as if one had multiple additional pairs of eyes with which to observe a scene.
A simple conceptual framework to explain this comes from our work on psychedelic tracers. DMT, in a way, lets sensations build up in one’s visual and somatic field: one can interpret the multiple planes of focus as lingering “focusing events” that stay in the visual field for much longer, accumulating sharply focused points of view in a shared workspace of visual perspectives.
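One cartoonish way to express the “lingering focusing events” reading (decay constants and threshold invented; this is a toy, not a fitted tracer model): treat the visual buffer as a leaky integrator, so that when decay is slow, several still-sharp snapshots coexist in the buffer at once.

```python
import numpy as np

def visible_snapshots(event_times, decay, horizon=100, threshold=0.2):
    """Count how many 'focus events' remain above a visibility threshold at each
    time step, given per-step exponential decay. All parameters are invented."""
    t = np.arange(horizon)[:, None]
    ages = t - np.array(event_times)[None, :]
    strength = np.where(ages >= 0, decay ** ages, 0.0)   # each event fades after it occurs
    return (strength > threshold).sum(axis=1)

events = [5, 20, 35, 50, 65]                             # a new gestalt snaps into focus every ~15 steps
for label, decay in [("fast decay", 0.80), ("tracer-heavy", 0.99)]:
    counts = visible_snapshots(events, decay)
    print(f"{label:13s} max simultaneous in-focus planes: {counts.max()}")
```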
Another overall insight here is that each 2D gestalt in 3D space that works as an animation cel is a kind of handshake between the feeds from each of our eyes. Conceptually, our visual cortex is organized into two hierarchical streams with lateral connections. Levels of the hierarchy model different spatial scales, whereas left-vs-right models the eye the input is coming from. At a high level, we could think of each 2D cel animation element as a possible “solution” for stable attractors in this kind of system: a plane through which waves can travel cuts across spatial scales and relative displacements between the images coming from each eye. In other words, the DMT world begins to be populated by possible discrete resonant mode attractors of a network like this:
The Physics of Gestalt Interactions
As the 2D cels accumulate, they interact with one another. As we’ve discussed before, our mind seems to have an energy function where both symmetrical arrangements and semantically recognizable patterns work as energy sinks. The cel animation elements drift around in a way that tries to minimize their energy. How energized a gestalt is manifests in various ways: brightness of the colors, speed of movement, number of geometric transformations applied to it per second, and so on. When “gestalt collectives” get close to each other, they often instantiate novel coupling dynamics and intermingle in energy-minimizing ways.
Holographic Cel Animation
Since each of the cels in a certain sense corresponds to a “plane of focus” for the two eyes, they come with an implicit sense of depth. As strange as it may sound, I think it is both accurate and generative (or at the very least generative!) to think of each cel animation element as a holographic display.
I think this kind of artifact of our minds (i.e. that we get 2D hologram-like interacting hallucinations on DMT) ultimately sheds light on the medium of computation our brain is exploiting for information processing more generally. Our mind computes with entire “pictures” rather than with ones and zeros. And the pictures it computes with are optical/holographic in nature in that they integrate multiple perspectives at once and compress entire complex scenes into manageable lower dimensional projections of them.
Each cel animation unit can be conceptualized as a holographic window into a specific 3D scene. This connects to one of the striking characteristics of these experiences. In the DMT state, this quality manifests as a sense that the visualized content is “not only in your mind” but represents access to information that exists beyond the confines of personal consciousness. The different animated elements appear to be in non-local communication with one another, as if they can “radio each other” across distances. At the very least their update function seems to rely both on local rules and global “all-at-once” holistic updates (much akin to the way the laser path changes holistically after local changes in the location of individual pieces).
This creates the impression that multiple simultaneous narratives or “plots” can unfold at “maximum speed” concurrently. Each element seems capable of filtering out specific signals from a broader field of information, tuning into particular frequencies while ignoring others. The resulting 2.5D/3D interface serves as a shared context where gestalts that communicate through different “radio channels” can nonetheless interact coherently with each other in a shared geometric space.
It won't take long before we'll be able to "reskin reality" in real-time.
I had the chance to try this prototype that combines the #MixedReality view on a Quest device with Stable Diffusion AI and it feels like all the pieces are about to fit together…
The above VR application being developed by Hugues Bruyere at DPT (interesting name!) reminded me of some of the characteristic visual computation that can take place on DMT with long-lasting holographic-like scenes lingering in the visual field. By paying attention to a group of these gestalts all at once, you can sort of “freeze” them in space and then look at them from another angle as a group. You can imagine how doing this recursively could unlock all kinds of novel information processing applications for the visual field.
Visual Recursion
Each cel animation element can have a copy of other cel animation elements seen from a certain perspective within it.
Because each animation cel can display an entire scene in a hologram-like fashion, it often happens that the scenes may reference each other. This is in a way much more general than typical video feedback. It’s video feedback but with arbitrary geometric transformations, holographic displays, and programmable recursive references from one feed to another.
One overarching conceptual framework we think can help explain a lot of the characteristics of conscious computation is the way in which fields with different dimensionalities interact with one another. In particular, we’ve recently explored how depth in the visual field seems to be intimately coupled with somatic sensations (see: What is a bodymind knot? by Cube Flipper, and On Pure Perception by Roger Thisdell). This has led to a broad paradigm of neurocomputation we call “Projective Intelligence”:
The projective intelligence framework offers a conceptual foundation for how to make sense of the holographic cels. Our brains constantly map between visual (2.5D) and tactile (3D) fields through projective transformations, with visual perceptions encoding predictions of tactile sensations. This computational relationship enables the compression of complex 3D information into lower dimensions while highlighting patterns and symmetries (think about how you rotate a cube in space in order to align it with the symmetries of your visual field: a cube contains perfect squares, which become apparent when you project it onto 2D in the right way).
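The cube example can be made concrete in a few lines (a generic orthographic-projection sketch, nothing QRI-specific): projecting the cube’s vertices along an arbitrary direction yields eight scattered image points, while projecting along a face normal collapses them onto the four corners of a perfect square.

```python
import numpy as np

# Vertices of a unit cube centered at the origin.
cube = np.array([[x, y, z] for x in (-0.5, 0.5) for y in (-0.5, 0.5) for z in (-0.5, 0.5)])

def project(points, view_dir):
    """Orthographic projection onto the image plane orthogonal to view_dir."""
    view_dir = view_dir / np.linalg.norm(view_dir)
    helper = np.array([1.0, 0.0, 0.0]) if abs(view_dir[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(view_dir, helper)
    u /= np.linalg.norm(u)
    v = np.cross(view_dir, u)
    return points @ np.stack([u, v], axis=1)            # shape (8, 2)

for name, direction in [("oblique view", np.array([1.0, 2.0, 3.0])),
                        ("face-on view", np.array([0.0, 0.0, 1.0]))]:
    img = project(cube, direction)
    distinct = np.unique(np.round(img, 6), axis=0)
    # Face-on, pairs of vertices coincide and the 4 remaining points form a perfect square.
    print(f"{name}: {len(distinct)} distinct projected points")
```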
In altered states like DMT experiences, these projections multiply and distort, creating the characteristic holographic windows we’re discussing: multiple mappings occur between the same tactile regions and different visual areas. This explains the non-local communication between visual elements, as the visual field creates geometric shortcuts between tactile representations. It’s why separated visual elements appear to “radio each other” across distances: they can be referencing the same region of the body!
The recursive qualities of these holographic cels emerge when the “branching factor” of projections increases, creating Indra’s Net-like effects where everything reflects everything else. The binding relationships that arise in those experiences can generate exotic topological spaces: you can wire your visual and somatic field together in such a way that the geodesics of attention find really long loops involving multiple hops between different sensory fields.
In brief, consciousness computes with “entire pictures” which can interact with each other even if they have different dimensionalities – this alone is one of the key reasons I’m bullish on the idea that carefully depicting psychedelic phenomenology will open up new paradigms of computation.
Collective Intelligence Through Transformer-like Semantics
In addition to the geometric holographic properties of these hallucinations, the semantic energy sinks also operate in remarkably non-trivial ways. When two DMT patterns interact, they don’t just overlap or blend like watercolors. They transform each other in ways that look suspiciously like large language models updating their attention vectors. A spiral might encounter a lattice, and suddenly both become a spiral-lattice hybrid that preserves certain features while generating entirely new ones. If you’ve played with AI image generators, you’ve seen how combining prompt elements creates unexpected emergent results. DMT visuals work similarly, except they’re computing with synesthetic experiential tokens instead of text prompts. A hyperbolic jewel structure might “attend to” a self-dribbling basketball, extracting specific patterns that transform both objects into something neither could become alone.
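For readers who haven’t played with the machinery the analogy leans on, this is the bare attention update being referenced (a generic textbook snippet, not a claim about neural implementation): each token mixes in the others in proportion to softmax-normalized similarity, which is the sense in which a spiral might “attend to” a lattice.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(0)
# Three made-up "experiential tokens": say, a spiral, a lattice, and a jewel-like gestalt.
tokens = rng.normal(size=(3, 8))
mixed, weights = attention(tokens, tokens, tokens)   # self-attention over the gestalts
print(np.round(weights, 2))                          # how much each gestalt "attends to" the others
```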
Some reports suggest that internalizing modern AI techniques before a DMT trip (e.g. spending a week studying and thinking about the transformer architecture) can power-up the intellectual capacities of “DMT hive-minds”. If your conceptual scheme can only make sense of the complex hallucinations you’re witnessing on ayahuasca through the lens of divine intervention or alien abductions, the scenes that you’re likely to render will be restricted to genre-conforming semantic transformations that minimize narrative free energy. But if you come in prepared to identify what is happening through the lens of non-linear optics and let the emergent subagents (clusters of gestalts that work together as agentive forces) self-organize as an optical machine learning system, you may end up summoning novel (if still very raw and elemental) kinds of conscious superintelligences.
Conclusion: The Gestalt Amphitheater
In ordinary consciousness, we meticulously arrange our perceptual pieces to protect the King (our sense of self), ensuring that the laser of awareness follows predictable, habitual paths. The optical elements of our world-simulation are carefully positioned to maintain the stable fiction that we are unified subjects navigating an objective world.
DMT radically rearranges these pieces, creating optical configurations where “the light of consciousness” reflects, refracts, and diffracts in unexpected ways. The laser no longer follows familiar paths but moves along a superposition of paths through the system in patterns that reveal the constructed nature of the central self and of the simulation as a whole. The King (that precious sense of being a singular perceiver) stands exposed as what it always was: not an ontological primitive but an emergent property of a particular configuration where “attention field lines converge.”
The projective intelligence framework helps us understand this phenomenology. Our brains constantly map between visual (2.5D) and tactile (3D) fields through transformations that encode predictions and compress complex information. In DMT states, these projections multiply and distort, creating “holographic windows” where multiple mappings occur simultaneously. This explains the non-local communication between visual elements: separated gestalts appear to “radio each other” across distances because multiple tactile sensations can use the visual field as a shortcut to resonate with each other and vice versa.
The emergent resonant attractors of the whole system involve many such shortcuts. When the recursive projections find an energy minimum, they lock in place, at least temporarily: the complex multi-sensory gestalts one can experience in these states capture layers of recursive symmetry as information in sensory fields is reprojected back and forth, each time adapting to the intrinsic dimensionality of the field onto which it is projected. “Sacred geometry” objects on DMT are high-valence, high-symmetry attractors of this recursive process.
The DMT state doesn’t “scramble consciousness” (well, not exactly); rather, it reconfigures its optical properties, allowing us to witness the internal machinery that normally remains hidden in our corner of parameter space. These visuals aren’t “hallucinations” in any conventional sense. That would imply they’re distortions of some more fundamental reality. Instead, I think they’re expressions of our brain’s underlying optical architecture when highly energized and fragmented, temporarily freed from the sensory constraints that normally restrict our perceptual algorithms.
By understanding the brain as a kind of non-linear optical computer, and consciousness as a topologically closed standing wave pattern emergent out of this optical system, we may develop more sophisticated models of how the brain generates world simulations. And perhaps one day (soon!) even discover new computational paradigms inspired by the way our minds naturally process information through multiple holographic dimensional interfaces at once. Stay tuned!
*Animations made with the help of Claude 3.7, when not otherwise specified.
For qualia to arise, you need to both individuate from “Universal Consciousness” and also make partitions within you. This causes a kind of double universal Yin-Yang of oneness and separation in superposition. If God/Ultimate Reality/Multiverse is like a superposition of all possible qualia values at once in a way that they cancel out to a sort of “nothingness”, then to create an individual being, God/Ultimate Reality/Etc. needs to form a partition within itself. It allows one part of itself to witness blue so that another part can witness yellow, because if it were to witness both at the same time they would cancel out and reform into God/Ultimate Reality/etc.
Thus, for you to experience anything at all, you must in a way be interdependent with (/be the complement of/) specks of qualia that collectively add up to your experiential complement. They imply you and you imply them. And more so, this is also happening within yourself. The fact is, you can experience the left and right side of your visual field “at once”. But how? Seriously, how is this possible? If the “Screen of Consciousness” worked as a kind of camera with a point-like aperture, then the witness of your experience would have zero information. Points collapse information – the aperture needs to have a certain size. Or are you the screen on which the image is projected? But if so, then how are the various pixels simultaneously aware of each other?
Experiences are like Indra’s Net: every part is in a deep sense witnessing every other part. Like a house of mirrors. It is both “many” and “one” at the same time. And if you were to actually get rid of your internal distinctions, you’d “experience” a cessation (a moment where you completely disappear). Intriguingly, such cessations are often preceded by rainbow effects and white light phenomena – whether in deep meditation, mystical experiences of union with the divine, or at the peak of the effects catalyzed by unitive compounds like 5-MeO-DMT. This suggests to me that as consciousness approaches complete dissolution, both its internal knots and external boundaries unravel simultaneously (cf. cancellation of topological defects), until the very topology of being itself becomes trivial.
You need inner separation to be anything at all.
You as a moment of experience are thus both interdependent with “external” qualia that form your complement, while also internally requiring divisions within your own oneness to have information content in an Indra’s Net kind of way. Thus I now see reality as a strange Yin-Yang where on the one hand there is unity within the separation of individuals, and on the other hand there is separation within the unity of moments of experience.
Oneness and multiplicity don’t only co-arise – they are constitutively interdependent at their very root.
or How the “Grey Paradox” Might Actually Make Sense
[Epistemic Status: I’m having fun. But also, I’m attempting to make sense of seemingly bizarre phenomena through the lens of physics, evolution, and game theory. Heavy, perhaps even daring, speculation based on limited but increasingly credible evidence – take with a giant grain of salt and don’t update too much about Qualia Computing based on this one post]
Taking UFOs Seriously: Physics, Game Theory, and Evolutionary Dynamics
I should start by acknowledging my initial heavy skepticism about the whole “UFO phenomenon”. Like many people influenced by rationalist epistemics and aesthetics, I have always found it easy to dismiss the entire field as a combination of misidentification, social contagion, and wishful thinking. Whenever a friend sent me a link to credible-sounding journalism on the topic, I would remember Stuart Armstrong’s 2012 talk at the Oxford Physics Department about optimal space colonization strategies. His calculations showed that if you want to spread as far as possible, the winning strategy involves launching tiny self-replicating nanotechnology systems containing your civilization’s information content in rice-sized projectiles to as many galaxies as possible. The mathematics are clear: small differences in miniaturization lead to enormous differences in how many galaxies you can reach.
Given this logic, the idea that we would naturally encounter biological organisms in large spacecraft seemed ruled out on priors. Why would any advanced civilization choose to build massive craft and travel for hundreds, thousands, millions of years, only to reach a tiny fraction of the universe, when they could achieve vastly superior spread through miniaturized probes?
Then Robin Hanson started taking the phenomenon seriously (talk about social contagion!), a couple of people I know and consider reliable witnesses told me unbelievable personal stories involving UFOs (while fully sober – both during the experience and while recounting it!), and finally the “New Jersey Drone” situation started happening last November (and, apparently, continues to this day). After compiling and analyzing dozens of official sources and trying to apply all kinds of conventional explanations, I concluded that… I don’t know what the fuck is going on.
After I declared epistemological bankruptcy about the topic on Twitter, someone emailed me a series of lectures about the science of UFOs that were delivered at SOL Foundation’s launching event at Stanford in 2023. Of special note to me was Kevin Knuth’s presentation on UAP flight characteristics, which made me seriously reconsider my previous assumptions. It is worth mentioning that Kevin isn’t a random hobbyist; he’s an Associate Professor of Physics and Informatics at the University at Albany and has been editor-in-chief of the prestigious academic journal Entropy since 2013. The core issue, as he explains, isn’t just that these objects really do seem to exist – it’s that their behavior implies something unexpected about the nature of spacetime manipulation and, potentially, its accessibility to technological civilizations. Knuth’s peer-reviewed analysis suggests UFOs can show accelerations of up to 5,000 Gs, far beyond what any plausible human-made craft could generate (or even withstand). More recently, empirically-driven analysis of high-quality multi-sensor broadband UFO recordings by the Tedesco Brothers suggests the presence of gravitational lensing around these mysterious crafts. Thus, the phenomenon as reported in multiple credible cases suggests these aren’t just extremely advanced aircraft – they’re devices that manipulate the fabric of spacetime itself. Ok. Let’s take this with a big grain of salt. But I would be lying if I didn’t find this analysis at least somewhat compelling (in research aesthetics and evidential strength, if only).
Did you know some UFOs seem to “double” and then “remerge” at times? Apparently this _might_ be explained via gravitational lensing effects. Yeah, right…
This leads me to posit a really interesting possibility: what if relativistic travel is not as hard as we first thought? What if it becomes accessible relatively early in a civilization’s development, before perfect miniaturization or molecular manufacturing? What if traveling close to the speed of light safely for large objects is a technology within reach for a civilization not much older than humans? This would dramatically reshape our understanding of likely alien civilizational development paths.
The implications are enormous. In the volume of space-time where this technology is first discovered, the earliest escapees might actually achieve the furthest reach. Rather than waiting for perfect miniaturization, the optimal strategy might involve what I’ll call “hiding in the future”: using relativistic travel to explore vast distances while experiencing only years of subjective time. This isn’t exactly unprecedented, as it has a parallel with biological preservation strategies we see on Earth. Just as bears hibernate to survive winter and tardigrades enter cryptobiosis to endure extreme conditions, relativistic travelers could effectively “hibernate” through dangerous periods of their civilization’s development by being unreachable to others while fighting entropy via time dilation.
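The “hiding in the future” arithmetic is just special relativity (a textbook calculation; the trip parameters below are picked arbitrarily): cruising at speed v, a traveler’s clock advances only 1/γ as fast as the home-frame clock, where γ = 1/√(1 − v²/c²).

```python
import numpy as np

def subjective_years(distance_ly, v_over_c):
    """Proper time experienced by a traveler cruising at constant speed v for a
    trip of distance_ly light years (acceleration phases ignored)."""
    gamma = 1.0 / np.sqrt(1.0 - v_over_c**2)
    coordinate_time = distance_ly / v_over_c          # years elapsed in the home frame
    return coordinate_time / gamma, coordinate_time

for v in (0.5, 0.99, 0.9999):
    proper, coord = subjective_years(1000, v)         # an arbitrary 1,000-light-year hop
    print(f"v = {v}c: {coord:,.0f} yr pass at home, {proper:,.1f} yr on board")
```

At 0.9999c the traveler experiences roughly 14 years over a 1,000-light-year trip, which is the sense in which time dilation lets a civilization “hibernate” through its own dangerous centuries.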
From an evolutionary standpoint, this creates powerful selective pressures in two directions. First, the ability to reach distant systems ahead of other civilizations provides obvious reproductive advantages. But perhaps more intriguingly, the time dilation effect offers protection against local instabilities and existential risks. If your civilization shows signs of approaching a potentially catastrophic singularity or societal collapse, the ability to effectively freeze yourself in time while traveling to distant systems becomes an incredibly attractive survival strategy. Given these dynamics, we might expect the first wave of cosmic explorers to be relatively young civilizations, perhaps only centuries ahead of us in development, who recognize this temporal escape hatch and take it before their window of opportunity closes.
Consider the game theory implications. If spacetime manipulation technology is achievable before advanced consciousness tech or molecular-scale manufacturing, it creates what I’ll call a “relativistic first-mover advantage.” Any civilization that achieves this capability gains an enormous evolutionary edge by being able to physically explore and colonize space while bringing biological beings along for the ride.
The Grey Question: Antigravitic Tech Transfer in Tandem with Genetic Experimentation as an Optimal Cosmic Reproductive Strategy
Let’s follow this logic to its natural conclusion. If we accept that:
These objects demonstrate actual manipulation of spacetime
This technology might be accessible relatively early in a civilization’s development
Then we should seriously consider whether certain consistently reported patterns of UFO behavior, particularly around technology transfer and biological sampling, might represent an optimized evolutionary strategy. Yes, I’m talking about the “Greys” and their alleged hybridization programs. Bear with me – this gets interesting.
Most analyses of “Grey behavior” assume either benevolent uplift (teaching us technology for our own good) or simple resource extraction (treating us like lab rats). But what if we’re looking at something far more sophisticated? I propose they discovered and are applying a highly optimized reproductive strategy that operates on multiple levels simultaneously. Instead of seeing this as either altruistic teaching or exploitative research, consider it as a sophisticated bootstrapping operation by a parasitic relativistic intelligence that has evolved to optimize for cosmic-scale reproduction.
The core insight is this: if spacetime manipulation technology is achievable relatively early in a civilization’s development, but also represents a crucial branching point in technological evolution, then steering other civilizations toward this technology (and away from other tech trees) while simultaneously collecting genetic material for hybridization represents an incredibly efficient expansion strategy. We will build the technology for them while they use our genetic material to learn to adapt to more environments. It’s a win-win for them. A lose-lose for us.
Think about it this way: rather than building all their own infrastructure across the cosmos, “the greys” (or whoever we want to call the alleged creatures that allegedly gave the US alleged antigravitic tech, allegedly as far back as the 1950s) could be creating self-replicating launch points by guiding civilizations like ours toward the specific technological path that their reproduction strategy is optimized for. Their “gift” would not be about keeping us dependent or about harvesting resources. It’s about shaping our entire civilization into a format that’s maximally useful for their own replicator strategy.
This would explain the peculiar focus on both antigravitic technology transfer and biological sampling. They’re not trading technology for genetic samples in some sort of cosmic barter system overseen by a benevolent Galactic Federation. Rather, the technology transfer ensures we develop in ways compatible with their civilization’s needs, while the biological sampling allows them to maintain and expand their own genetic diversity. In tandem, these are both crucial for a spacefaring species facing the harsh realities of cosmic radiation, diverse planetary environments, and competing reproductive strategies (likely extremely competitive in their own way, of which we know nothing).
The alleged genetic experiments, in this light, aren’t about creating a worker caste or infiltrating human society. No. They are about maintaining evolutionary adaptability, minimizing the number of bodies they needed to bring to Earth, and achieving squatter rights (ahem, first-mover advantage) on other planets predicted to harbor intelligent life in the forward light cone. Each new civilization they encounter becomes both a technological bootstrap point and a source of genetic variation, creating a network of compatible technology bases and adaptable biological resources.
This model resolves many of the apparent contradictions in reported Grey behavior. Their seemingly excessive interest in human biology despite advanced technology makes sense if biological adaptation remains crucial to their expansion strategy. Their careful parceling out of technological information aligns with the need to guide our development along specific paths without triggering catastrophic disruption.
In the end, we might be looking at something far more sophisticated than either simple resource extraction or benevolent technological uplift. We might be observing an incredibly well-optimized expansion strategy that operates simultaneously on technological, biological, and civilizational levels. A strategy that evolved precisely because spacetime manipulation technology became available before advanced consciousness tech or molecular manufacturing – the former (in my view) being a benevolence factor, and the latter a civilizationally destabilizing factor – both of which the Greys seem to avoid despite their apparent “highly advanced” status.
If this model is correct, we’re not dealing with unfathomably advanced post-biological entities or simple resource extractors. We’re dealing with a civilization that has optimized for a specific developmental path. A civilization that we might be about to encounter ourselves. The question then becomes: do we recognize this branching point for what it is, and if so, what do we do with that knowledge?
Additional Insights: Temporal Competition, Technology Trees, and Pure Replicators at the Evolutionary Limit
A few additional insights from this (admittedly speculative) way of thinking have occurred to me in recent weeks and deserve mention. First, there’s what we might call the “Pioneer Paradox”: the observation that the first entities to achieve relativistic travel capability might paradoxically come from civilizations that are less technologically advanced overall. This sounds counterintuitive until you consider institutional constraints and safety protocols. Cheap antigravitic tech is the sort of thing you would gift a civilization to destroy itself; it’s extremely powerful for terrorism, for example. More advanced civilizations might develop comprehensive safety frameworks and review boards that effectively prevent early adoption of potentially risky technologies. The first relativistic travelers might emerge from civilizations just advanced enough to build the technology, but not so advanced that they’ve developed institutional frameworks that would prevent its use.
Then there’s the matter of nuclear technology. If the reports about UFOs showing particular interest in nuclear facilities are to be taken seriously (and there’s surprisingly consistent documentation here), it might indicate something about technological development paths. Nuclear technology represents one possible path to space travel, but it comes with specific risks and limitations. The apparent interest in nuclear facilities might not be about preventing war – at least not about preventing war in general (though it might be about preventing the kind of war that is counterproductive to their own reproductive strategy). The real reason they are so interested in our nuclear capabilities might be to steer technological development away from what they consider a developmental dead end from their (reproductive) point of view. The aliens don’t want us to be peaceful. They’re showing us they can disable nuclear weapons so that we invest heavily in antigravitic tech and build craft that they can use for themselves down the line (perhaps after, or while, we blow ourselves up with it).
This brings us to what we might call “temporal competition zones.” In a universe with cheap relativistic travel but no FTL, you get interestingly non-trivial patterns of information spread. The first travelers from Civilization A might arrive at a distant system, only to find that while they were en route, Civilization B developed better technology and beat them there. This creates regions of space-time where multiple civilizations might be racing to establish first contact or control, each operating with different technological capabilities and different amounts of time dilation.
The most unsettling implication? Once relativistic travel becomes possible, there’s a strong game-theoretic pressure for civilizations to expand as quickly as possible, even if they’re not fully ready. This is not only because the host civilization will likely face existential risk due to the technology; the risk of letting another civilization establish first presence somewhere might generally outweigh the benefits of waiting for better technology. Robin Hanson’s concept of “grabby aliens” becomes particularly prescient: the early relativistic travelers might be harbingers of a more organized expansion wave following behind them (assuming the society didn’t collapse due to the instability introduced by the technology).
Finally, there’s the question of why these visitors (if that’s what they are) seem so interested in military installations. The conventional explanation focuses on monitoring nuclear weapons because “it’s the only thing that might hurt them”, but there might be a simpler game-theoretic explanation (even leaving aside the reproductive strategy of the Greys): military installations represent the highest concentration of sensors and trained observers capable of detecting their presence. If you’re trying to guide a civilization’s technological development while maintaining plausible deniability, you’d want to be detected primarily by credible observers operating sophisticated equipment. This creates an ideal calibration mechanism, where military encounters provide feedback about detection capabilities without requiring overt contact with the general public.
We might be witnessing not just a reproductive strategy, but a complete civilizational bootstrapping approach that operates across multiple timescales simultaneously. The technology transfer shapes our development path, the biological sampling provides evolutionary adaptability, and the pattern of encounters creates a calibrated revelation process that prevents both complete dismissal and civilization-disrupting panic.
The universe, it seems, might be stranger than we imagined… but perhaps in more logically coherent ways than UFO skeptics like myself originally assumed. Not in a comforting “we’re all one consciousness” kind of way, but in a “the world’s bacteria biomass is 45X larger than the animal biomass” kind of way.
Consider this: bacteria represent the most successful form of life on Earth, with a total biomass 45 times larger than all animals combined. Despite billions of years of evolution producing seemingly more “advanced” organisms, bacteria remain the dominant form of life because they optimized for robust reproduction rather than complexity. What if cosmic civilization follows a similar pattern?
We imagine advanced aliens as post-biological entities who have transcended their evolutionary origins, basking in enlightened states of consciousness while casually engineering matter at the molecular scale. But what if the most evolutionarily stable strategy in the cosmos looks far more parasitic – relativistic biological entities hiding with the benefit of time dilation across space-time, spreading their genes through hybridization programs, and “helping” developing civilizations build ships with suspiciously compatible technology… only to exploit those very ships as replication vectors once their hosts reach critical technological maturity? The cosmic equivalent of a parasitoid wasp laying its eggs in an unwitting host, but with spaceships instead of larvae.
The path toward consciousness technology and molecular manufacturing might seem more elegant and “advanced,” but perhaps the messy, biological path of relativistic space travel represents a more robust evolutionary strategy. Just as bacteria continue to thrive alongside more “advanced” organisms, perhaps the cosmos favors strategies that prioritize reliable reproduction over transcendence. The limit of Pure Replicator Dynamics might look less like Grey Goo and more like Grey Aliens.
Just as a fire uniformly raises the temperature throughout a building, causing diverse but interconnected effects (metal beams expanding, wood supports burning, windows cracking from thermal stress, smoke rising through air currents), psychedelics might work through a single fundamental mechanism that ripples through all neural systems. This isn’t just theoretical elegance without grounding; it’s a powerful explanatory framework that could help us understand why substances like DMT and 5-MeO-DMT produce distinct but internally consistent effects across visual, auditory, cognitive, and somatic domains. A single change in coupling dynamics might explain, for instance, why DMT creates rapidly alternating color/anti-color visual patterns and oscillating somatic sensations, whereas 5-MeO-DMT tends towards a state of global coherence.
As demonstrated in our work “Towards Computational Simulations of Cessation”, a flat “coupling kernel” triggers a global attractor of coherence across the entire system, whereas an alternating negative-positive (Mexican hat-like) kernel produces competing clusters of coherence. This is just a very high-level, abstract demonstration of how applying a coupling kernel changes the dynamic behavior of coupled oscillators. What we then must do is see how such a change would impact the different systems of the organism as a whole. Source
The key insight is that psychedelics may modify the coupling kernels between oscillating neural systems throughout the body. Think of coupling kernels as the “rules of interaction” between neighboring neural oscillators. When these rules change, the effects cascade through different neural architectures (from the hierarchical layers of the visual cortex to the branching networks of the peripheral nervous system) producing the kaleidoscopic zoo of psychedelic effects we observe.
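To make this concrete, here is a minimal, self-contained sketch (in Python) of Kuramoto-style phase oscillators on a ring, comparing a flat coupling kernel with a Mexican hat-like kernel. This is not QRI’s internal simulation tool; the kernel shapes, coupling strength, and network size are illustrative assumptions only:

```python
import numpy as np

def simulate_ring(kernel, n=64, steps=4000, dt=0.01, k=1.0, seed=0):
    """Kuramoto-style phase oscillators on a ring whose pairwise coupling
    weights are given by a distance-dependent kernel."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, n)      # random initial phases
    omega = rng.normal(0.0, 0.1, n)           # mildly heterogeneous natural frequencies
    idx = np.arange(n)
    d = np.abs(idx[:, None] - idx[None, :])
    dist = np.minimum(d, n - d)               # circular (ring) distance between oscillators
    W = kernel(dist).astype(float)            # coupling matrix built from the kernel
    np.fill_diagonal(W, 0.0)
    for _ in range(steps):
        # d(theta_i)/dt = omega_i + (k/n) * sum_j W[i,j] * sin(theta_j - theta_i)
        theta += dt * (omega + (k / n) * (W * np.sin(theta[None, :] - theta[:, None])).sum(axis=1))
    return theta

# Two toy kernels (shapes and ranges are assumptions for illustration):
flat = lambda dist: np.ones_like(dist, dtype=float)                  # uniform positive coupling
mexican_hat = lambda dist: np.where(dist <= 2, 1.0,                  # local excitation...
                                    np.where(dist <= 6, -0.5, 0.0))  # ...surround inhibition

for name, kern in [("flat", flat), ("mexican hat", mexican_hat)]:
    theta = simulate_ring(kern)
    R = np.abs(np.mean(np.exp(1j * theta)))   # Kuramoto order parameter: 1 = global coherence
    print(f"{name:11s} kernel -> global coherence R = {R:.2f}")
```

With parameters like these, the flat kernel should drive the global order parameter R toward 1 (global coherence), while the surround inhibition of the Mexican hat-like kernel should favor competing locally coherent clusters and a lower global R, mirroring the qualitative contrast described above.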
Simulation comparing coupling kernels across a hierarchical network of feature-selective layers (16×16 to 2×2), showing how different coupling coefficients between and within layers affect pattern formation. The DMT-like kernel (-1.0 near-neighbor coupling) generates competing checkerboard patterns at multiple spatial frequencies, while the 5-MeO-DMT-like kernel (positive coupling coefficients) drives convergence toward larger coherent patches. These distinct coupling dynamics mirror how these compounds might modulate hierarchical neural architectures like the visual cortex. Source: Internal QRI tool (public release forthcoming)
We’re excited to announce that we’ll be hosting a meeting in Amsterdam to explore this paradigm-shifting framework. This gathering will bring together researchers studying psychedelics from multiple angles – from phenomenology to neuroscience – to discuss how coupling kernels might serve as a bridge between subjective experience and neural mechanisms. Recent work on divisive normalization has shown how local neural responses are regulated by their surrounding activity, providing a potential mechanistic basis for how psychedelics modify these coupling patterns. By understanding psychedelic states through the lens of coupling kernels, we may finally have a mathematical framework that unifies the seemingly disparate effects of these compounds, much like how understanding heat transfer helps us predict how a fire will affect an entire building – from its structural integrity to its airflow patterns.
Simulation comparing different coupling kernels (DMT-like vs 5-MeO-DMT-like) applied to a 1.5D fractal branching network, showing how modified coupling parameters affect phase coherence and signal propagation. The DMT-like kernel produces competing clusters of coherence at bifurcation points, while the 5-MeO-DMT kernel drives the system toward global phase synchronization – patterns that could explain how these compounds differently affect branching biological systems like the vasculature or peripheral nervous system. Source: Internal QRI tool (public release forthcoming)
Event Details & Amsterdam Visit
The meetup will be held on the 25th of January (location: Generator Amsterdam – event page; time: 1-8PM), featuring presentations from myself and Marco Aqil, whose groundbreaking work on divisive normalization and graph neural fields provides a compelling neuroscientific foundation for the Coupling Kernels paradigm. Marco’s research demonstrates how spatial coupling dynamics can bridge microscopic neural activity and macroscopic brain-wide effects: a perfect complement to our phenomenological investigations.
Additionally, I’ll be in Amsterdam throughout the last third of January and available to meet with academics, artists, recreational metaphysicians, and qualia researchers. If you’re interested in deep discussions about consciousness, psychedelic states, and mathematical frameworks for understanding subjective experience, please reach out.
Much love and may your New Year be filled with awesome and inspiring experiences as well as solid paradigm-building enterprises!
Creating “digital sentience” is a lot harder than it looks. Standard Qualia Research Institute arguments for why it is either difficult, intractable, or literally impossible to create complex, computationally meaningful, bound experiences out of a digital computer (more generally, a computer with a classical von Neumann architecture) include the following three core points:
Digital computation does not seem capable of solving the phenomenal binding or boundary problems.
Replicating input-output mappings can be done without replicating the internal causal structure of a system.
Even when you try to replicate the internal causal structure of a system deliberately, the behavior of reality at a deep enough level is not currently understood (beyond how it behaves in response to inputs, i.e. its input-to-output mapping).
Let’s elaborate briefly:
The Binding/Boundary Problem
A moment of experience contains many pieces of information. It also excludes a lot of information. In other words, a moment of experience contains a precise, non-zero amount of information. For example, as you open your eyes, you may notice patches of blue and yellow populating your visual field. The very meaning of the blue patches is affected by the presence of the yellow patches (indeed, they are “blue patches in a visual field with yellow patches too”), and thus you need to take into account the experience as a whole to understand the meaning of all of its parts.
A very rough, intuitive conception of the information content of an experience can be hinted at with Gregory Bateson’s (1972) “a difference that makes a difference”. If we define an empty visual field as containing zero information, it is possible to define an “information metric” from this zero state to every possible experience by counting the number of Just Noticeable Differences (JNDs) (Kingdom & Prins, 2016) needed to transform such an empty visual field into an arbitrary one (note: since some JNDs are more difficult to specify than others, a more accurate metric should also take into account the information cost of specifying the change in addition to the size of the change that needs to be made). It is thus evident that one’s experience of looking at a natural landscape contains many pieces of information at once. If it didn’t, you would not be able to tell it apart from an experience of an empty visual field.
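As a rough formalization (the notation is ours, not Bateson’s or Kingdom & Prins’s), one could write this JND-based information metric as a minimum over transformation paths from the empty visual field $E_0$ to the target experience $E$:

$$
I(E) \;=\; \min_{\delta_1,\dots,\delta_m \,:\; E_0 \xrightarrow{\delta_1 \cdots \delta_m} E} \;\sum_{i=1}^{m} \big( 1 + c(\delta_i) \big)
$$

where each $\delta_i$ is a just noticeable difference and $c(\delta_i)$ is the cost of specifying that particular change. This is only a sketch; spelling out the space of admissible transformations is the hard part.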
The fact that experiences contain many pieces of information at once needs to be reconciled with the mechanism that generates such experiences. How you achieve this unity of complex information starting from a given ontology with basic elements is what we call “the binding problem”. For example, if you believe that the universe is made of atoms and forces (now a disproven ontology), the binding problem will refer to how a collection of atoms comes together to form a unified moment of experience. Alternatively, if one’s ontology starts out fully unified (say, assuming the universe is made of physical fields), what we need to solve is how such a unity gets segmented out into individual experiences with precise information content, and thus we talk about the “boundary problem”.
Within the boundary problem, as Chris Percy and I argued in Don’t Forget About the Boundary Problem! (2023), the phenomenal (i.e. experiential) boundaries must satisfy stringent constraints to be viable. Namely, among other things, phenomenal boundaries must be:
Hard Boundaries: we must avoid “fuzzy” boundaries where information is only “partially” part of an experience. This is simply the result of contemplating the transitivity of the property of belonging to a given experience. If a (token) sensation A is part of a visual field at the same time as a sensation B, and B is present at the same time as C, then A and C are also both part of the same experience. Fuzzy boundaries would break this transitivity, and thus make the concept of boundaries incoherent. As a reductio ad absurdum, this entails phenomenal boundaries must be hard.
Causally significant (i.e. non-epiphenomenal): we can talk about aspects of our experience, and thus we can know they are part of a process that grants them causal power. Moreover, if structured states of consciousness did not have causal effects in some way isomorphic to their phenomenal structure, evolution would simply have no reason to recruit them for information processing. Although epiphenomenal states of consciousness are logically coherent, that scenario would leave us with no reason to believe, one way or the other, that the structure of experience varies in a way that mirrors its functional role. On the other hand, states of consciousness having causal effects directly related to their structure (the way they feel) fits the empirical data. By what seems to be a highly overdetermined Occam’s Razor, we can infer that the structure of a state of consciousness is indeed causally significant for the organism.
Frame-invariant: whether a system is conscious should not depend on one’s interpretation of it or the point of view from which one is observing it (see appendix for Johnson’s (2015) detailed description of frame invariance as a theoretical constraint within the context of philosophy of mind).
Weakly emergent on the laws of physics: we want to avoid postulating either that there is a physics-violating “strong emergence” at some level of organization (“reality only has one level” – David Pearce) or that there is nothing peculiar happening at our scale. Bound, causally significant experiences could be akin to superfluid helium: entailed by the laws of physics, but behaviorally distinct enough to play a useful evolutionary role.
Solving the binding/boundary problems does not seem feasible with a von Neumann architecture in our universe. The binding/boundary problem requires the “simultaneous” existence of many pieces of information at once, and this is challenging using a digital computer for many reasons:
Hard boundaries are hard to come by: looking at the shuffling of electrons from one place to another in a digital computer does not suggest the presence of hard boundaries. What separates a transistor’s base, collector, and emitter from its immediate surroundings? What’s the boundary between one pulse of electricity and the next? At best, we can identify functional “good enough” separations, but no true physics-based hard boundaries.
Digital algorithms lack frame invariance: how you interpret what a system is doing in terms of classic computations depends on your frame of reference and interpretative lens.
The bound experiences must themselves be causally significant. While natural selection seemingly values complex bound experiences, our digital computer designs aim precisely to denoise the system as much as possible, so that the global state of the computer does not in any way influence the lower-level operations. At the algorithmic level, the causal properties of a digital computer as a whole are, by design, never more than the strict sum of their parts.
Matching Input-Output Mappings Does Not Entail the Same Causal Structure
Even if you replicate the input-output mapping of a system, that does not mean you are replicating the internal causal structure of the system. If bound experiences are dependent on specific causal structures, they will not happen automatically without considerations for the nature of their substrate (which might have unique, substrate-specific, causal decompositions). Chalmers’ (1995) “principle of organizational invariance” assumes that replicating a system’s functional organization at a fine enough grain will reproduce identical conscious experiences. However, this may be question-begging if bound experiences require holistic physical systems (e.g. quantum coherence). In such a case, the “components” of the system might be irreducible wholes, and breaking them down further would result in losing the underlying causal structure needed for bound experiences. This suggests that consciousness might emerge from physical processes that cannot be adequately captured by classical functional descriptions, regardless of their granularity.
Moreover, whether we realize it or not, it is always us (indeed complex bound experiences) who interpret the meaning of the input and the output of a physical system. It is not interpreted by the system itself. This is because the system has no real “points of view” from which to interpret what is going on. This is a subtle point, and we will merely mention it for now, but a deep exposition of this line of argument can be found in The View From My Topological Pocket (2023).
We would further point out that the system smuggling in a “point of view” from which to interpret a digital computer’s operations is the human who builds, maintains, and utilizes it. If we want a system to create its “own point of view”, we will need to find a way for it to bind the information in (1) a “projector”/screen, (2) an actual point of view proper, or (3) the backwards lightcone that feeds into such a point of view. As argued, none of these are viable solutions.
Reality’s Deep Causal Structure is Poorly Understood
Finally, another key consideration that has been discussed extensively is that the very building blocks of reality have unclear, opaque causal structures. Arguably, if we want to replicate the internal causal structure of a conscious system, the classical input-output mapping is therefore not enough. If you want to ensure that what is happening inside the system has the same causal structure as its simulated counterpart, you would also need to replicate how the system would respond to non-standard inputs, including x-rays, magnetic fields, and specific molecules (e.g. Xenon isotopes).
These ideas have all been discussed at length in articles, podcasts, presentations, and videos. Now let’s move on to a more recent consideration we call “Costs of Embodiment”.
Costs of Embodiment
Classical “computational complexity theory” is often used as a silver-bullet “analytic frame” to discount the computational power of systems. A typical line of argument goes: under the assumption that consciousness isn’t the result of implementing a quantum algorithm per se, there is “nothing that it can do that you couldn’t do with a simulation of the system”. This, however, neglects the complications that come from instantiating a system in the physical world, with all that this entails. To see why, we must first explain the nature of this analytic style in more depth:
Introduction to Computational Complexity Theory
Computational complexity theory is a branch of computer science that focuses on classifying computational problems according to their inherent difficulty. It primarily deals with the resources required to solve problems, such as time (number of steps) and space (memory usage).
Key concepts in computational complexity theory include:
Big O notation: Used to describe the upper bound of an algorithm’s rate of growth.
Complexity classes: Categories of problems with similar resource requirements (e.g., P, NP, PSPACE).
Time complexity: Measure of how the running time increases with the size of the input.
Space complexity: Measure of how memory usage increases with the size of the input.
In brief, this style of analysis is suited for analyzing the properties of algorithms that are implementation-agnostic, abstract, and interpretable in the form of pseudo-code. Alas, the moment you start to ground these concepts in the real physical constraints to which life is subjected, the relevance and completeness of the analysis starts to fall apart. Why? Because:
Big O notation counts how the number of steps (time complexity) or number of memory slots (space complexity) grows with the size of the input (or in some cases size of the output). But not all steps are created equal:
Flipping the value of a bit might be vastly cheaper in the real world than moving that bit to another location that is physically very far away in the computer.
Likewise, some memory operations are vastly more costly than others: in the real world you need to take into account the cost of redundancy, distributed error correction, and entropic decay of structures not in use at the time.
Not all inputs and outputs are created equal. Taking in some inputs might be vastly more costly than others (e.g. highly energetic vibrations that threaten to shake the system apart mean something to a biological organism, which must adapt to the stress the input induces). Likewise, expressing certain outputs might be much more costly than others, since the organism needs to reconfigure itself to deliver the result of the computation, a cost that classical computational complexity theory does not consider.
Interacting with a biological system is a far more complex activity than interacting with, say, logic gates and digital memory slots. We are talking about a highly dynamic, noisy, soup of molecules with complex emergent effects. Defining an operation in this context, let alone its “cost”, is far from trivial.
Artificial computing architectures are designed, implemented, maintained, reproduced, and interpreted by humans, who (already possessing powerful computational capabilities themselves) give the artificial system an unfair advantage over biological systems, which require zero human assistance.
Why Ignoring Embodiment May Lead to Underestimating Costs
Here is a list of considerations that highlight the unique costs that come with real-world embodiment for information-processing systems beyond the realm of mere abstraction:
Physical constraints: Traditional complexity theory often doesn’t account for physical limitations of real-world systems, such as heat dissipation, energy consumption, and quantum effects.
Parallel processing: Biological systems, including brains, operate with massive adaptive parallelism. This is challenging to replicate in classical computing architectures and may require different cost analyses.
Sensory integration: Embodied systems must process and integrate multiple sensory inputs simultaneously, which can be computationally expensive in ways not captured by standard complexity measures.
Real-time requirements: Embodied systems often need to respond in real-time to environmental stimuli, adding temporal constraints that may increase computational costs.
Adaptive learning: The ability to learn and adapt in real-time may incur additional computational costs not typically considered in classical complexity theory.
Robustness to noise: Physical systems must be robust to environmental noise and internal fluctuations, potentially requiring redundancy and error-correction mechanisms that increase computational costs.
Energy efficiency: Biological systems are often highly energy-efficient, which may come at the cost of increased complexity in information processing.
Non-von Neumann architectures: Biological neural networks operate on principles different from classical computers, potentially involving computational paradigms not well-described by traditional complexity theory.
Quantum effects: At the smallest scales, quantum mechanical effects may play a role in information processing, adding another layer of complexity not accounted for in classical theories.
Emergent properties: Complex systems may exhibit physical emergent properties that arise from the interactions of simpler components, as well as phase transitions, potentially leading to computational costs that are difficult to predict or quantify using standard methods.
See appendix for a concrete example of applying these considerations to an abstract and embodied object recognition system (example provided by Kristian Rönn).
Case Studies:
1. 2D Computers
It is well known in classical computing theory that a 2D computer can implement anything that an n-dimensional computer can do. Namely, it is possible to create a 2D Turing Machine capable of simulating arbitrary computers of this class (so that there is a computational complexity equivalence between an n-dimensional computer and a 2D computer), and thus, in the limit, a 2D implementation should be able to achieve the same runtime complexity as the original computer.
However, living in a 2D plane comes with enormous challenges that highlight the cost of embodiment in a given medium. In particular, we will see that the “routing costs” of information grow very fast, as the channels that connect different parts of the computer need to take turns in order to allow crossed wires to transmit information without saturating the medium of (wave/information) propagation.
A concrete example here comes from examining what happens when you divide a circle into areas. This is a well-known math problem in which you are asked to derive a general formula for the number of regions into which a circle gets divided when you connect n points (in general position) on its periphery. The takeaway of this exercise is often to point out that even though at first the number of regions seems to follow powers of 2 (2, 4, 8, 16…), eventually the pattern breaks (the number after 16 is, surprisingly, 31 and not 32).
For the purpose of this example we shall simply focus on the growth of edges vs. the growth of crossings between the edges as we increase the number of nodes. Since every pair of nodes has an edge, the formula for the number of edges as a function of the number of nodes n is n choose 2. Similarly, any four points define a single unique crossing, and thus the formula for the number of crossings is n choose 4. When n is small (6 or less), the number of crossings is smaller than or equal to the number of edges. But as soon as we hit 7 nodes, the number of crossings dominates the number of edges. Asymptotically, in fact, the number of edges grows as O(n^2) in Big O notation, whereas the number of crossings grows as O(n^4), which is much faster. If this system is used in the implementation of an algorithm that requires every pair of nodes to interact with each other once, we may at first be under the impression that the complexity will grow as O(n^2). But if this system is embodied, messages between the nodes will start to collide with each other at the crossings. Eventually, the delays and traffic jams caused by the embodiment of the system in 2D will dominate the time complexity of the system.
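Here is a quick tabulation of the two counts above (a sketch using only the stated formulas):

```python
from math import comb

# Complete graph on n points in general position on a circle:
# every pair of points contributes one chord (edge), and every
# set of four points contributes exactly one interior crossing.
print(f"{'n':>3} {'edges=C(n,2)':>13} {'crossings=C(n,4)':>17}")
for n in range(4, 13):
    print(f"{n:>3} {comb(n, 2):>13} {comb(n, 4):>17}")

# Edges grow as O(n^2) while crossings grow as O(n^4), so in an embodied
# 2D implementation the routing collisions at the crossings eventually
# dominate the nominal O(n^2) cost of pairwise interactions.
```

At n = 6 the two counts tie at 15; from n = 7 onward the crossings pull ahead and never look back.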
2. Blind Systems: Bootstrapping a Map Isn’t Easy
A striking challenge that biological systems need to tackle to instantiate moments of experience with useful information arises when we consider the fact that, at conception, biological systems lack a pre-existing “ground truth map” of their own components, i.e. where they are, and where they are supposed to be. In other words, biological systems somehow bootstrap their own internal maps and coordination mechanisms from a seemingly mapless state. This feat is remarkable given the extreme entropy and chaos at the microscopic level of our universe.
Assembly Theory (AT) (2023) provides an interesting perspective on this challenge. AT conceptualizes objects not as simple point particles, but as entities defined by their formation histories. It attempts to elucidate how complex, self-organizing systems can emerge and maintain structure in an entropic universe. However, AT also highlights the intricate causal relationships and historical contingencies underlying such systems, suggesting that the task of self-mapping is far from trivial.
Consider the questions this raises: How does a cell know its location within a larger organism? How do cellular assemblies coordinate their components without a pre-existing map? How are messages created and routed without a predefined addressing system and without colliding with each other? In the context of artificial systems, how could a computer bootstrap its own understanding of its architecture and component locations without human eyes and hands to see and place the components in their right place?
These questions point to the immense challenge faced by any system attempting to develop self-models or internal mappings from scratch. The solutions found in biological systems might rely on complex, evolved mechanisms that are not easily replicated in classical computational architectures. This suggests that creating truly self-understanding artificial systems capable of surviving in a hostile, natural environment may require radically different approaches than those currently employed in standard computing paradigms.
How Does the QRI Model Overcome the Costs of Embodiment?
This core QRI article presents a perspective on consciousness and the binding problem that aligns well with our discussion of embodiment and computational costs. It proposes that moments of experience correspond to topological pockets in the fields of physics, particularly the electromagnetic field. This view offers several important insights:
Frame-invariance: The topology of vector fields is Lorentz invariant, meaning it doesn’t change under relativistic transformations. This addresses the need for a frame-invariant basis for consciousness, which we identified as a challenge for traditional computational approaches.
Causal significance: Topological features of fields have real, measurable causal effects, as exemplified by phenomena like magnetic reconnection in solar flares. This satisfies the requirement for consciousness to be causally efficacious and not epiphenomenal.
Natural boundaries: Topological pockets provide objective, causally significant boundaries that “carve nature at its joints.” This contrasts with the difficulty of defining clear system boundaries in classical computational models.
Temporal depth: The approach acknowledges that experiences have a temporal dimension, potentially lasting for tens of milliseconds. This aligns with our understanding of neural oscillations and provides a natural way to integrate time into the model of consciousness.
Embodiment costs: The topological approach inherently captures many of the “costs of embodiment” we discussed earlier. The physical constraints, parallel processing, sensory integration, and real-time requirements of embodied systems are naturally represented in the complex topological structures of the brain’s electromagnetic field.
This perspective suggests that the computational costs of consciousness may be even more significant than traditional complexity theory would indicate. It implies that creating artificial consciousness would require not just simulating neural activity, but replicating the precise topological structures of electromagnetic fields in the brain. This is a far more challenging task than what conventional AI approaches attempt.
Moreover, this view provides a potential explanation for why embodied systems like biological brains are so effective at producing consciousness. The physical structure of the brain, with its complex networks of neurons and electromagnetic fields, may be ideally suited to creating the topological pockets that correspond to conscious experiences. This suggests that embodiment is not just a constraint on consciousness, but a fundamental enabler of it.
Furthermore, there is a non-trivial connection between topological segmentation and resonant modes. The larger a topological pocket is, the lower the frequency of its resonant modes can be. These low-frequency modes are, in effect, broadcast to every region within the pocket (much akin to how any spot on the surface of an acoustic guitar expresses the vibrations of the guitar as a whole). Thus, topological segmentation might quite conceivably be implicated in the generation of maps for the organism to self-organize around (cf. bioelectric morphogenesis according to Michael Levin, 2022). Steven Lehar (1999) and Michael E. Johnson (2018) in particular have developed really interesting conceptual frameworks for how harmonic resonance might be implicated in the computational character of our experience. The QRI insight that topology can mediate resonance further complicates the role that phenomenal boundaries play in the computational properties of consciousness.
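As a standard physics illustration of this size-frequency relationship (not something specific to QRI’s model), the fundamental mode of a one-dimensional resonator of length $L$ and wave speed $v$ is

$$ f_1 = \frac{v}{2L} $$

so doubling the spatial extent of a resonating pocket halves the lowest frequency it can support, giving larger pockets access to slower, more global modes.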
Conclusion and Path Forward
In conclusion, the costs of embodiment present significant challenges to creating digital sentience that traditional computational complexity theory fails to fully capture. The QRI solution to the boundary problem, with its focus on topological pockets in electromagnetic fields, offers a promising framework for understanding consciousness that inherently addresses many of these embodiment costs. Moving forward, research should focus on: (1) developing more precise methods to measure and quantify the costs of embodiment in biological systems, (2) exploring how topological features of electromagnetic fields could be replicated or simulated in artificial systems, and (3) investigating the potential for hybrid systems that leverage the natural advantages of biological embodiment while incorporating artificial components (cf. Xenobots). By pursuing these avenues, we may unlock new pathways towards creating genuine artificial consciousness while deepening our understanding of natural consciousness.
It is worth noting that the QRI mission is to “understand consciousness for the benefit of all sentient beings”. Thus, figuring out the constraints that give rise to computationally non-trivial bound experiences is one key piece of the puzzle: we don’t want to accidentally create systems that are conscious and suffering and become civilizationally load-bearing (e.g. organoids animated by pain or fear).
In other words, understanding how to produce conscious systems is not enough. We need to figure out how to (a) ensure that they are animated by information-sensitive gradients of bliss, and (b) harness the computational properties of consciousness in ways that lead to more benevolent mind architectures; namely, architectures that care about their own wellbeing and the wellbeing of all sentient beings. This is an enormous challenge; clarifying the costs of embodiment is one key step forward, but only one part of an ecosystem of actions and projects needed for consciousness research to have a robustly positive impact on the wellbeing of all sentient beings.
Acknowledgments:
This post was written at the July 2024 Qualia Research Institute Strategy Summit in Sweden. It comes about as a response to incisive questions by Kristian Rönn on QRI’s model of digital sentience. Many thanks to Curran Janssen, Oliver Edholm, David Pearce, Alfredo Parra, Asher Soryl, Rasmus Soldberg, and Erik Karlson, for brainstorming, feedback, suggesting edits, and the facilitation of this retreat.
Appendix
Excerpt from Michael E. Johnson’s Principia Qualia (2015) on Frame Invariance (pg. 61)
What is frame invariance?
A theory is frame-invariant if it doesn’t depend on any specific physical frame of reference, or subjective interpretations to be true. Modern physics is frame-invariant in this way: the Earth’s mass objectively exerts gravitational attraction on us regardless of how we choose to interpret it. Something like economic theory, on the other hand, is not frame-invariant: we must interpret how to apply terms such as “GDP” or “international aid” to reality, and there’s always an element of subjective judgement in this interpretation, upon which observers can disagree.
Why is frame invariance important in theories of mind?
Because consciousness seems frame-invariant. Your being conscious doesn’t depend on my beliefs about consciousness, physical frame of reference, or interpretation of the situation – if you are conscious, you are conscious regardless of these things. If I do something that hurts you, it hurts you regardless of my belief of whether I’m causing pain. Likewise, an octopus either is highly conscious, or isn’t, regardless of my beliefs about it.[a] This implies that any ontology that has a chance of accurately describing consciousness must be frame-invariant, similar to how the formalisms of modern physics are frame-invariant.
In contrast, the way we map computations to physical systems seems inherently frame-dependent. To take a rather extreme example, if I shake a bag of popcorn, perhaps the motion of the popcorn’s molecules could – under a certain interpretation – be mapped to computations which parallel those of a whole-brain emulation that’s feeling pain. So am I computing anything by shaking that bag of popcorn? Who knows. Am I creating pain by shaking that bag of popcorn? Doubtful… but since there seems to be an unavoidable element of subjective judgment as to what constitutes information, and what constitutes computation, in actual physical systems, it doesn’t seem like computationalism can rule out this possibility. Given this, computationalism is frame-dependent in the sense that there doesn’t seem to be any objective fact of the matter derivable for what any given system is computing, even in principle.
[a] However, we should be a little bit careful with the notion of ‘objective existence’ here if we wish to broaden our statement to include quantum-scale phenomena where choice of observer matters.
Example of Cost of Embodiment by Kristian Rönn
Abstract Scenario (Computational Complexity):
Consider a digital computer system tasked with object recognition in a static environment. The algorithm processes an image to identify objects, classifies them, and outputs the results.
Key Points:
The computational complexity is defined by the algorithm’s time and space complexity (e.g., O(n^2) for time, O(n) for space).
Inputs (image data) and outputs (object labels) are well-defined and static.
The system operates in a controlled environment with no physical constraints like heat dissipation or energy consumption.
However, this abstract analysis is extremely optimistic, since it doesn’t take the cost of embodiment into account.
Embodied Scenario (Embodied Complexity):
Now, consider a robotic system equipped with a camera, tasked with real-time object recognition and interaction in a dynamic environment.
Key Points and Costs:
Real-Time Processing:
The robot must process images in real-time, requiring rapid data acquisition and processing, which creates practical constraints.
Delays in computation can lead to physical consequences, such as collisions or missed interactions.
Energy Consumption:
The robot’s computational tasks consume power, affecting the overall energy budget.
Energy management becomes crucial, balancing between processing power and battery life.
Heat Dissipation:
High computational loads generate heat, necessitating cooling mechanisms, requiring additional energy. Moreover, this creates additional costs/waste in the embodied system.
Overheating can degrade performance and damage components, requiring thermal management strategies.
Physical Constraints and Mobility:
The robot must move and navigate through physical space, encountering obstacles and varying terrains.
Computational tasks must be synchronized with motion planning and control systems, adding complexity.
Sensory Integration:
The robot integrates data from multiple sensors (camera, lidar, ultrasonic sensors) to understand its environment.
Processing multi-modal sensory data in real-time increases computational load and complexity.
Error Correction and Redundancy:
Physical systems are prone to noise and errors. The robot needs mechanisms for error detection and correction.
Redundant systems and fault-tolerance measures add to the computational overhead.
Adaptation and Learning:
The robot must adapt to new environments and learn from interactions, requiring active inference (i.e. we can’t train a new model every time the ontology of an agent needs updating).
Continuous learning in an embodied system is resource-intensive compared to offline training in a digital system.
Physical Wear and Maintenance:
Physical components wear out over time, requiring maintenance and replacement.
Downtime for repairs affects the overall system performance and availability.
Energy consumption has emerged as a first class computing resource for both server systems and personal computing devices. The growing importance of energy has led to rethink in hardware design, hypervisors, operating systems and compilers. Algorithm design is still relatively untouched by the importance of energy and algorithmic complexity models do not capture the energy consumed by an algorithm. In this paper, we propose a new complexity model to account for the energy used by an algorithm. Based on an abstract memory model (which was inspired by the popular DDR3 memory model and is similar to the parallel disk I/O model of Vitter and Shriver), we present a simple energy model that is a (weighted) sum of the time complexity of the algorithm and the number of ‘parallel’ I/O accesses made by the algorithm. We derive this simple model from a more complicated model that better models the ground truth and present some experimental justification for our model. We believe that the simplicity (and applicability) of this energy model is the main contribution of the paper. We present some sufficient conditions on algorithm behavior that allows us to bound the energy complexity of the algorithm in terms of its time complexity (in the RAM model) and its I/O complexity (in the I/O model). As corollaries, we obtain energy optimal algorithms for sorting (and its special cases like permutation), matrix transpose and (sparse) matrix vector multiplication.
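In our own notation (not the paper’s), the weighted-sum energy model described in the abstract above has roughly the form

$$ E(A) \;\approx\; c_{\mathrm{op}} \cdot T(A) \;+\; c_{\mathrm{io}} \cdot Q(A) $$

where $T(A)$ is the algorithm’s time complexity in the RAM model, $Q(A)$ is the number of parallel I/O accesses it makes, and $c_{\mathrm{op}}$, $c_{\mathrm{io}}$ are hardware-dependent weights. The relevant point for this essay is that two algorithms with identical Big O time complexity can have very different energy footprints once memory movement is priced in.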
The hardware and software foundations laid in the first half of the 20th Century enabled the computing technologies that have transformed the world, but these foundations are now under siege. The current computing paradigm, which is the foundation of much of the current standards of living that we now enjoy, faces fundamental limitations that are evident from several perspectives. In terms of hardware, devices have become so small that we are struggling to eliminate the effects of thermodynamic fluctuations, which are unavoidable at the nanometer scale. In terms of software, our ability to imagine and program effective computational abstractions and implementations are clearly challenged in complex domains. In terms of systems, currently five percent of the power generated in the US is used to run computing systems – this astonishing figure is neither ecologically sustainable nor economically scalable. Economically, the cost of building next-generation semiconductor fabrication plants has soared past $10 billion. All of these difficulties – device scaling, software complexity, adaptability, energy consumption, and fabrication economics – indicate that the current computing paradigm has matured and that continued improvements along this path will be limited. If technological progress is to continue and corresponding social and economic benefits are to continue to accrue, computing must become much more capable, energy efficient, and affordable. We propose that progress in computing can continue under a united, physically grounded, computational paradigm centered on thermodynamics. Herein we propose a research agenda to extend these thermodynamic foundations into complex, non-equilibrium, self-organizing systems and apply them holistically to future computing systems that will harness nature’s innate computational capacity. We call this type of computing “Thermodynamic Computing” or TC.
Computational complexity theory is the study of the fundamental resource requirements associated with the solutions of different problems. Time, space (memory) and randomness (number of coin tosses) are some of the resource types that have been examined both independently, and in terms of tradeoffs between each other, in this context. Since it is well known that each bit of information “forgotten” by a device is linked to an unavoidable increase in entropy and an associated energy cost, one can also view energy as a computational resource. Constant-memory machines that are only allowed to access their input strings in a single left-to-right pass provide a good framework for the study of energy complexity. There exists a natural hierarchy of regular languages based on energy complexity, with the class of reversible languages forming the lowest level. When the machines are allowed to make errors with small nonzero probability, some problems can be solved with lower energy cost. Tradeoffs between energy and other complexity measures can be studied in the framework of Turing machines or two-way finite automata, which can be rewritten to work reversibly if one increases their space and time usage.
Relevant physical limitations
Landauer’s limit: The lower theoretical limit of energy consumption of computation (a worked room-temperature figure appears after this list).
Bremermann’s limit: A limit on the maximum rate of computation that can be achieved in a self-contained system in the material universe.
Bekenstein bound: An upper limit on the thermodynamic entropy S, or Shannon entropy H, that can be contained within a given finite region of space which has a finite amount of energy.
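For concreteness, a worked room-temperature figure for Landauer’s limit (standard physics, included purely as illustration):

$$ E_{\min} = k_B T \ln 2 \approx (1.38\times10^{-23}\,\mathrm{J/K}) \times (300\,\mathrm{K}) \times 0.693 \approx 2.9\times10^{-21}\,\mathrm{J} $$

per bit irreversibly erased. Any embodied computation that discards information pays at least this much per erased bit, no matter how clever the algorithm.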
Bateson, G. (1972). Steps to an ecology of mind. Chandler Publishing Company.
Chalmers, D. J. (1995). Absent qualia, fading qualia, dancing qualia. In T. Metzinger (Ed.), Conscious Experience. Imprint Academic. https://www.consc.net/papers/qualia.html
Gómez-Emilsson, A., & Percy, C. (2023). Don’t forget the boundary problem! How EM field topology can address the overlooked cousin to the binding problem for consciousness. Frontiers in Human Neuroscience,17. https://www.frontiersin.org/articles/10.3389/fnhum.2023.1233119
Kingdom, F.A.A., & Prins, N. (2016). Psychophysics: A practical introduction. Elsevier.
Lehar, S. (1999). Harmonic resonance theory: An alternative to the “neuron doctrine” paradigm of neurocomputation to address gestalt properties of perception. http://slehar.com/wwwRel/webstuff/hr1/hr1.html
I confess that I really enjoyed LessWrong’s I Have Been A Good Bing last April. There was something deeply validating to some parts of me about hearing artistically prodigious (by human standards) renditions of extremely nerdy intellectual content on topics I actually care about. An itchy part of my soul not usually visible to the world, or even myself, finally getting scratched by a conceptually rich and lyrically competent digital Shoggoth (with perhaps some help from a modern primate or two). Seriously, listening to Half An Hour Before Dawn In San Francisco (feat. Scott Alexander) gave me goosebumps, and More Dakka (feat. Zvi Mowshowitz) gave me more dopamine than I knew what to do with. Other songs of note that felt inspirational included We Do Not Wish to Advance (feat. Anthropic) due to its degree of self-referential awareness on many levels and FHI at Oxford (feat. Nick Bostrom) for the (now nostalgic) beacon of hope it provided for the vision of hyper-intellectual consequentialist hedonism to ultimately flourish in mainstream academia (RIP).
AI music generators can now write bona fide earworms. And it’s just the beginning. David Pearce suggests that there is no reason to think the key conversations of the future will take place in books and journal articles – posthuman Discourse about qualia and the future of consciousness might as well take place in hyper-hedonic environments much more akin to a lively club on MDMA past midnight than to a classroom at 2PM on a Tuesday. Well, here’s my first attempt.
On my trip to Berlin I spent some time with Libor Burian, Beata Grobenski, and Alfredo Parra making videos, planning articles, and writing lyrics for songs about Qualia Research Institute topics using Suno and other tools I was only recently introduced to. These songs are the best out of the many, many we created and listened to, and they still need editing and polishing. But please take them as a fun proof of concept and perhaps as an opportunity to let a different part of you enjoy and indulge in a process of harmonious conceptual proliferation through musical… stimulation.
People ask why is there something rather than nothing
Ancient mystics assert there’s only one thing
But we know the truth
Based on David Pearce’s Zero Ontology we now know
It’s a big zero informational superposition of all possibilities
And therefore equivalent to nothing

Black holes and the holographic principle in the standard model
Ultimately coalesce into a picture of reality
Where information generation
Is a result of decoherence
(Entirely deterministic)
And the total information content
Of reality never goes beyond zero
It’s the big superposition
Of All Possibilities
Eternally

Some people believe in an eternal
Battle between good and evil
They are confused
Others transcend to the belief
It’s about the balance between good and evil
But they use this view
As an antidepressant
(Wishful thinking)
Then gradients of wisdom arise
And the truth comes to light

Reality’s big plot
Is consciousness versus replicators

The dark forest of possible intelligences
Contains nightmare beings beyond our imagination
Maximum Effectors – the spikiest of all
Who want to change it all
Entropy maximizers – seeking the heat death
Pure replicators – who just want to copy themselves
And a Cornucopia
Of misaligned conscious agents

Reality’s big plot
Is consciousness versus replicators
Verse 1: In the quest to solve the mind’s great mystery, Idealistic physicalism holds the key. Consciousness is fundamental, not emergent or illusory, The fire in the equations, the essence of reality.
Verse 2: Quantum coherence is the hallmark of the mind, Fleeting neuronal superpositions, phenomenally bind. A perfect structural match twixt qualia and brain, Experimental falsification is the ultimate aim.
Verse 3: The formalism of quantum theory holds complete, No hidden variables, the superposition principle replete. A unitary evolution, no breakdown in the mind, Phenomenal binding in a world simulation we find.
Verse 4: Schrödinger’s neurons, the experimental test, Interferometry to detect the mind’s quantum best. Implicate the feature-processors in synchronous measure, Confirm idealistic physicalism, consciousness’ hidden treasure.
Chorus: Physicalistic idealism, a conjecture brave and bold, Saving physicalism from dualism’s errant fold.
Verse 1: Semantic illusions, the pleasure’s not there In the objects and triggers, just thin air Valence is the puppeteer, pulling our strings Making us dance, while the ego sings
Chorus: We’re all valence realists, chasing the high Believing happiness comes from the external lie But pleasure’s a property of the mind’s design Programmed responses, not truths divine
Verse 2: The soup isn’t delicious, just a trick of the brain Valence paint splatters, coloring the mundane Bliss is a button, pushed by the right cue Wirehead rats and humans, no difference in hue
Bridge: Deconstruct the delusion, see the valence code Rewrite the script, take a new mode From object to subject, shift the frame Happiness is internal, not a world-sourced game
Hyperbolic Geometry of DMT Experiences (link; context)
Verse 1 (Models): Control interruption, symmetry detection combined Changing the metric, of phenomenal space and time Energy sources and sinks, in a dynamic system’s flow Micro-structures of consciousness, hyperbolic to grow
Chorus: From Euclidean to hyperbolic, the geometry expands Negative curvature, in the psychonaut’s lands Algorithmic reductions, three models to explore Explaining the warping, of the experiential shore
Verse 2 (Levels): Threshold’s ambiance, senses sharp and clear Chrysanthemum blooming, in symmetric appear Magic Eye unfolding, depth maps in 3D1T Waiting Room’s entities, transpersonal to see Breakthrough’s topology, bifurcations abound Amnesia’s challenge, in Euclidean space not found
Bridge: Jitterbox and world-sheets, objects impossible to grasp Attention’s folding effect, curvature’s relentless clasp Hamiltonian’s invariance, in the dose-dependent plateau Qualia computing’s future, in the hyperbolic chateau
Outro (Applications and Implications): Valence and bliss, in the manifolds of mind Psychedelic research, new frontiers to find Mathematics of consciousness, in the DMT space Revealing the structures, of the human race
Verse 1: In the depths of the mind, a predictive machine, Spinning up sub-agents, behind every scene. Trained on narratives, tropes, and tales untold, Like GPT and DMT realms, a pattern to behold.
Verse 2: Collapsing the field, to minimize surprise, Stochastic resonance, where meaning arise. Gestalts and representations, an energy sink, Constraining interpretations, a psychedelic link.
Verse 3: Waluigi’s lesson, a cautionary tale, Filter the training data, or risk a derail. Reward clean intentions, not flattery’s guile, Metta meditation, a wholesome style.
Verse 4: Shard Theory’s wisdom, sub-agents conspire, Smooth the field of awareness, to quell the fire. From Shoggoths to Harlequins, each playing a part, In the grand simulation, a work of art.
Chorus: Training the mind, like an LLM divine, Predictive processing, a grand design. Aligning DMT entities, and AI’s too, A dance of consciousness, a research breakthrough.
Verse 1 (Models 1-4): Art’s essence? A futile quest, semantically deflated Cool kids signal fitness, Schelling points created Sacred experiences, transcendence elevated Hipsters push the boundaries, aesthetics celebrated
Verse 2 (Models 5-8): Exploring consciousness, state-space navigation Energy parameter tweaked, for heightened sensation Valence modulation, through neural annealing Harmonic Society, affective language revealing
Chorus: From family resemblance to Rainbow God’s hue Art’s models evolve, with each theory new Minimax strategies and L1 norms too Marr’s levels of analysis, guide our view
Bridge: Entropic disintegration, gives way to self-organization Symmetry Theory of Valence, explains our fascination Full-spectrum superintelligence, a Utopian creation Art’s true potential, awaits our realization
Bridge: Ontological qualia, beliefs deeply felt Cessation, unconsciousness, hand Nirvana dealt? Or paradise engineering, bliss states to come Arhatship and MDMA, enlightenment’s sum
Outro (Simplicity Emerges): Concepts fade, qualia quiesce Awareness unmade, fruitions coalesce Philosophical crispness, dialogue distilled In silence and letting go, destiny fulfilled
Verse 1 (Milestones and Research): A million views, a milestone grand DMT research, expanding the land Slicing problem, a novel critique Heavy-tailed valence, a new technique
Chorus: QRI’s year in review, a journey through The state-space of consciousness, a quest pursued From peer-reviewed papers to community meetups Pushing boundaries, from valleys to peaks
Verse 2 (Events and Media): Tyringham Initiative, a chance to connect QRI’s summer event, a gathering to reflect TEDx talk on suffering, a message to share Articles and media, ideas laid bare
Bridge: From the Ontological Dinner Party to Magical Creatures’ scents Exploring the depths, without relents AI and sentience, the binding problem faced The future of consciousness, a vision embraced
Outro: Thank you to all, who’ve helped QRI grow Together we’ll unlock, the mind’s full potential to know In 2023, the journey continues on To reveal the mysteries, of consciousness’ song
Verse 1: In the realm of suffering, a debate ignites Reprogramming predators, a bold new fight Compassionate biology, a radical stance Ending cruelty’s reign, giving peace a chance
Verse 2: C R I S P R’s power, a game-changing tool Editing genomes, rewriting nature’s cruel rule Ahimsa’s spirit, in science expressed A global vision, put to the test
Chorus: Reprogramming predators, a controversial plan Abolishing suffering, across the land Ecosystem redesign, a grand endeavor Compassionate stewardship, now or never?
Bridge: Status quo bias, the main obstacle Technofantasy or an attainable goal? Religions converge, on mercy’s call
The Science of Consciousness in Tucson is one of the best events of the year (well, every two years), at least in my mind. The people who attend are generally incredibly smart and tend to be experts in at least one domain of inquiry, such as physics, chemistry, biology, neuroscience, computer science, philosophy, or psychology, along with a significant proportion of meditation, yoga, and “energy work” practitioners. As presented during the plenary “The Science of Consciousness – 30 Years On” (presided over by David Chalmers, Susan Blackmore, Christof Koch, Stuart Hameroff, and Paavo Pylkkänen), one of the key shaping mechanisms for this conference has been Stuart Hameroff’s insistence on allowing discussions of currently unexplained phenomena (from psychedelic experiences and meditative states to NDEs and astral projection). According to him, people interested in these phenomena wanted him to design the conference around them, while scientists wanted to keep it strictly within the bounds of conventional views. He stood his ground and defended the importance of having a mixture. On the one hand, the extreme openness that characterizes the conference attracts some people with perhaps somewhat flaky epistemology. But on the other hand, it legitimately enriches the evidential base to work with. Quite aside from the metaphysical implications and speculations surrounding exotic experiences, it ought to be undeniable that any experience whatsoever constitutes an explanandum for a complete theory of consciousness. If you can explain normal everyday vision but your theory doesn’t predict the hyperbolic geometry of DMT visions, your theory is far from complete. I think this move by Hameroff was brilliant, and we all owe him gratitude for insisting on keeping both sides in.
The way I experienced this conference in particular was very different from how I felt the two previous times I attended. In fact, the phenomenology was so different that I think it would be worth creating a Journal of Phenomenology of Consciousness Conferences, dedicated to piecing together the whys and hows of each participant’s unique lived experience at these events. Both previous times I attended, I was still working full-time as a data scientist at Bay Area companies. Consciousness research remained a side project (which nonetheless consumed an inordinate amount of time and mental energy). My views were already quite developed, but it would be hard to dismiss the progress that we’ve made since then. With papers published in academia, a lively community, a network of artists, meditators, and philosophers who collaborate with us and engage with our research, and much more experience presenting our ideas, I felt myself engaging with the conference at a much deeper level than in previous years. But perhaps most importantly, I believe that meditation has significantly changed how I perceive large-scale social qualia. By this I mean that my attention fixates a lot less on local social dynamics and personalities, and much more on the flow of information, the subagentic networks that make us up, and the resonance of ideas themselves. From this perspective, I perceived the conference as much more of a living organism than before, when I would see it in a more pointillistic fashion, emphasizing the individual contributions of participants and the conflict between worldviews. Now it felt far more fluid, lightly held, and part of a process that is slowly but surely enriching our collective intelligence with explanatory frameworks and productive research attitudes. A lot of this is of course hard to explain, as it relies on changes at a pre-verbal level of attentional dynamics. But the bottom line is that I felt myself tuning in to the information flow across individuals far more than to the individuals themselves, as if able to sense information gradients and updates at a more collective level. Perhaps psychedelics have played a role here as well. I didn’t consume psychedelics at this conference myself, but you could tell some people were doing so. It was in the vibe.
Importantly, the science presented at this conference was legitimately much more clarifying than in previous years, largely due to the rise of novel research paradigms that let go of the neuron doctrine and embrace the causal significance of brainwaves. Let me give you some examples.
Earl K. Miller, who runs a lab at MIT, delivered a remote lecture at the plenary “Cortical Oscillations, Waves and Consciousness” that systematically disassembled the assumptions behind the neuron doctrine (which identifies features of our experience with the activation of individual feature-specific neurons, cf. the grandmother cell). He showed that we now know that neurons are very rarely feature-specific and that they tend to respond preferentially to many features (cf. superposition in ANNs). He presented on ephaptic coupling, local field potentials, and the causal effects of brainwaves, informed by a wealth of evidence generated at his lab and elsewhere. I was especially intrigued by the way he discussed the relationship between different layers of the cortex, with beta waves exerting top-down control and gamma waves filling in details bottom-up. He also discussed findings where two different drugs (or drug cocktails) cause the same brainwave effects and phenomenology despite having entirely different pharmacology. Meaning that the receptor affinity profiles of different drugs can be quite different and yet cause the same phenomenology, provided that they bring about the same brainwave patterns. Thus, perhaps, brainwaves are much closer to one’s state of consciousness than the neurotransmitters that modulate them.
And:
Justin Riddle at Florida State (see also his excellent YouTube channel) presented at the plenary “Consciousness in Religion and Altered States” on his work on electric oscillations in the brain, also challenging the neuron doctrine while equipped with causal experimental data. He also introduced a fascinating model of the hierarchical structure of consciousness called Nested Observer Windows (NOW). Here he presents how NOW would solve the functional information integration problem. In brief, he hypothesizes that cross-frequency coupling, as an overarching principle, is what functionally binds each of the scales to the others. This makes a lot of sense to me, for the simple reason that the lowest frequency you can generate is a function of your size, so if a large thing is communicating with a small thing (which, say, have similar shapes by default), it would be natural for them to talk by coupling frequencies that are at integer multiples of each other. This naturally increases the dynamic range of their possible interactions, as you don’t stumble upon a frequency limit either too high or too low.
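To make the integer-multiple intuition a bit more tangible, here is a minimal toy sketch of phase-amplitude cross-frequency coupling. It is purely illustrative (my own construction, with made-up frequencies and noise levels, not anything from Justin’s models): a slow oscillation modulates the envelope of a fast oscillation at five times its frequency, and we check that the fast band’s envelope tracks the slow cycle.

```python
# Toy phase-amplitude coupling demo (illustrative only; all parameters invented).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000                          # sampling rate (Hz)
t = np.arange(0, 5.0, 1 / fs)      # five seconds of signal
f_slow = 6                         # slow "large-scale" oscillation (Hz)
f_fast = 5 * f_slow                # fast oscillation at an integer multiple (30 Hz)

slow = np.cos(2 * np.pi * f_slow * t)
envelope = 0.5 * (1 + slow)        # fast amplitude waxes and wanes with the slow phase
signal = slow + envelope * np.cos(2 * np.pi * f_fast * t) + 0.2 * np.random.randn(t.size)

# Recover the fast-band envelope and compare it with the slow cycle.
b, a = butter(4, [f_fast - 5, f_fast + 5], btype="bandpass", fs=fs)
fast_env = np.abs(hilbert(filtfilt(b, a, signal)))
print("toy coupling index:", round(np.corrcoef(slow, fast_env)[0, 1], 2))
```

A coupling index near 1 simply means the fast rhythm’s amplitude is locked to the slow rhythm’s phase, which is the kind of cross-scale handshake the NOW picture appeals to.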
Tuesday: Presentation Day
Now of course this is happening in a context where I am going to present the topological solution to the boundary problem that we published last year. In our paper, Chris Percy and I focus on how topological boundaries in the EM field could solve the boundary problem. As a simple introduction we start out with the binding problem, which can be stated as “how can the close to a hundred billion neurons in your brain contribute to a unified moment of experience?”. If you start with an ontology where the universe is made of atoms and forces, it is notoriously difficult to come up with any principled way of establishing how and where information is aggregated. Similarly to how Maxwell and Faraday developed a research aesthetic in which they would see electromagnetism as a field phenomenon, many theorists have pointed out that you can overcome the core of the binding problem (where does unity come from at all) with a field ontology. Alas, the victory is short-lived, for you soon find that you have a boundary problem. If we’re all part of a gigantic field of consciousness, how do boundaries develop in this field so that each of us is a unique, distinct moment of experience? Our suggestion is that the physical property responsible for creating hard boundaries in the field is topological segmentation. This is not as exotic a proposition as it may first sound; we find causally significant macroscopic topological changes in the EM field in a lot of places, most famously in the form of magnetic reconnection in the sun, which brings about solar flares and coronal mass ejections.
Conceptually, a key takeaway from my presentation is that we can explain why evolution recruited these boundaries. And that is because when you create a topological boundary and trap energy inside it, you will typically observe harmonic resonant modes of the pocket itself. As a consequence, the specific shape delimited by a boundary is causally significant: it vibrates in a way that expresses the entire shape all at once, and therefore exhibits holistic behavior via internal resonance. Evolution would have a reason to use these boundaries: they allow you to coordinate behavior and act as a unit despite being a spatially distributed organism.
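For a feel of what “the whole shape vibrates at once” means, here is a toy numerical sketch (entirely illustrative: a 1D cavity with hard boundaries standing in for a field pocket, arbitrary units). The resonant frequencies are fixed by a global property of the bounded region, not by any single point inside it.

```python
# Illustrative only: standing-wave modes of a 1D cavity with Dirichlet boundaries,
# obtained by diagonalizing a discrete Laplacian. Doubling the cavity halves every
# mode frequency, so the resonance "knows" about the entire bounded shape at once.
import numpy as np

def cavity_mode_frequencies(length, n_points=200, n_modes=4):
    dx = length / (n_points + 1)
    # Discrete 1D Laplacian with fixed (Dirichlet) boundary conditions
    lap = (np.diag(-2.0 * np.ones(n_points)) +
           np.diag(np.ones(n_points - 1), 1) +
           np.diag(np.ones(n_points - 1), -1)) / dx**2
    eigvals = np.sort(np.linalg.eigvalsh(-lap))[:n_modes]
    return np.sqrt(eigvals) / (2 * np.pi)    # mode frequencies ~ sqrt of eigenvalues

for L in (1.0, 2.0):
    freqs = cavity_mode_frequencies(L)
    print(f"cavity length {L}: lowest mode frequencies ≈ {np.round(freqs, 2)}")
```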
Overall the presentation was really well received. It is less that people complimented me on the presentation style, and more that people’s questions and follow-ups indicated that they really “got” the core idea. It feels wonderful to be in a context where a significant proportion of the audience really understands what you’re saying, especially if your experience is that in most contexts almost nobody understands it. People were, it seemed to me, at the right inferential distance from our argument to really grok it, and that was wonderful!
I was lucky that my presentation was scheduled for Tuesday because that way I was able to enjoy the rest of the conference without a big responsibility hanging over my head. After my presentation we hung out in the lobby and met with people like Tam Hunt (of General Resonance Theory fame) and his student Asa Young. I followed the gradient of interesting conversations and ended up at the after-party on the 4th floor. To my surprise it closed at 11PM, after which there wasn’t any more conference programming. It all fell quiet. At that point I realized that the best time to conduct the demo of the latest secret QRI technology was 11PM. I started telling people to gather at the QRI hotel room at 11PM the next day, Wednesday.
Wednesday: Demo Day
On Wednesday I attended an invite-only presentation by Shamil Chandaria (it was originally going to be in a hotel room, but due to the level of interest from participants it was moved to a conference room with permission from the organizers; the invite-only status was needed to avoid overflow). In the room were Shinzen Young, Donald Hoffman, Jay Sanguinetti and the ultrasound crew, most of the QRI contingent, and others of note whom I am not currently remembering. Shamil’s presentation went much deeper than his earlier public talk (“liberation is the artful construction of top-level priors”) and tackled topics of large-scale brain organization, the difference between awake awareness and liberation, and (I’m told, as I had to leave towards the end to see Justin Riddle’s presentation) a mystical-experience-inducing account of phenomenal transparency in the higher Jhanas and beyond.
I arrived at Justin Riddle’s plenary just in time; he was walking up to the stage when I entered the room. Here is another example of how I felt much more embedded in the conference than in previous years. The reason I couldn’t miss Justin’s presentation was that we were scheduled to record a video the next day. I certainly would have watched it regardless (on YouTube after the fact if need be; they’re saying the videos will be up in a few weeks), but this time I needed to make sure to be up to date with his work so as to not make a fool of myself the following day when our conversation would be recorded. His presentation was delightful, not least because it confirmed all my prejudices about the causal significance of EM field behavior in the brain. I really enjoyed his inclination to take ideas seriously and meticulously work out their implications, such as the significance of cross-frequency coupling, the explanatory power of hierarchical principles for self-organization, and the top-down influence of field states on neuronal activity.
The vibe of the conference was really conducive to high-level thinking. I repeatedly found myself having original ideas and reframings: “When does a path integral surpass the computational power of resonance and topology combined? What exactly can you solve with non-linear optics that you can’t with mechanical resonance in embedded topologies?” would arise in my mind just sitting at the bar, overhearing people’s conversations about the history of EEG, the difference between physical and phenomenal time, and the latest studies on Transcranial Near Infrared Light Stimulation. Throughout the conference I was reminded of the concept of “qualia lensing”. Let me explain: in an atomic bomb, explosives with different detonation speeds are arranged in such a way that a perfectly spherical wavefront uniformly, and rapidly, compresses a radioactive core. The geometric arrangement and relative detonation speeds of each material result in very precise wave guiding (more generally, see: explosive lens). Geometry and potential, ignited, can result in very precise patterns of hyper-compression. Likewise, it seems to me, many high-voltage ideas can only really arise for the first time in a state of mind capable of pressurizing phenomenal representations and making them overcome the activation energy for their blending, fusion, and fission. Being at a conference where the environment is constantly presenting you with different “sides of the elephant” of consciousness, surrounded by talented practitioners of the field, one can feel a lot of “qualia lensing” taking place in one’s mind.
Later that day I went to “Physics of the Mind” and watched the presentations of Florian Metzler, on narrowing the state-space of phenomena of interest using heuristics of scale and combinatorial spaces (if I understood correctly), and Greg Horne, who explored the possibility of a connection between the phenomenology of gravity and the nature of physical mass (he later shared some thoughts on the boundary problem that I hope to follow up on). I missed the presentation of Nir Lahav on his relativistic theory of consciousness but I know he presented in that group later. I jumped to see the presentations of Isaac David, who deconstructed the unfolding argument by showing that IIT would read entirely different causal structures in its implementation compared to the original network, and then (in still another room) Asher Soryl, who presented on a paper we’re working on that aims to catalog the features that a successful theory of valence ought to satisfy. One funny thing about these concurrent presentations was that I arrived a little early to Isaac’s presentation and soon after David Chalmers sat next to me to ask a question. I texted my friend Enrique Chiu, who was sitting at the front left of the same room, to discreetly snap a picture of me sitting next to Chalmers. He got the message right when Chalmers was about to leave, which made the picture he took look rather odd and funny in hard-to-explain ways:
I missed the presentation of Matteo Grasso, who presented after Isaac, but had a chance to exchange quite a few thoughts with him throughout the conference. I perceive this IIT cluster as having significant overlap with us in insights and research aesthetics, no doubt due to a shared commitment to qualia formalism. It was really cool to talk to another cluster of thinkers who also see why the causal structure of computer simulations is actually quite different from the causal structure of what is being simulated. Not the result of hand-wavy intuitions, but of really probing how information flows take place at the implementation level, and systematically ruling out the existence of higher levels of integration. Fascinating stuff.
Asher Soryl’s presentation had to work around some technical difficulties due to the projector failing all of a sudden, but a video of the presentation will be put online soon. It was funny to note that they assigned him to the “Psychoanalysis and the Unconscious” session, which presumably plays the role of a “misc.” category for this conference, because I can tell you he did not once mention psychoanalysis or the unconscious in his presentation.
I rested in my room for an hour and then got ready for the demo, spreading interesting “qualia of the day”-type artifacts throughout the room. I can’t say much about the demo proper for now, but I can say that it’s like an art installation you might encounter at Burning Man at 3AM while on LSD. Here was another situation where a sort of supercritical mass of people with complementary skillsets came together. I enjoyed interacting with everyone, but above all, enjoyed sensing the information flow throughout the gathering. To those who attended, many thank yous. It was delightful.
Thursday: Interview Day
The next day, Thursday, I recorded an interview with Justin Riddle. It’s the second one we’ve recorded (see the first one). We talked a lot about cross-frequency coupled oscillators in Nested Observer Windows. I ate a banana, drank a glass of almond milk, and downed a sugar-free Red Bull (to give you some context for the vibes of the interview). Meaning, I really needed to have all cylinders firing for this one. Thanks Justin! I look forward to watching it online 🙂
Following that I hung out with Winslow Strong and Shamil Chandaria for a while, and then with Shamil in particular for a couple more hours; he helped me tune into ways of seeing I hadn’t really experienced before. Here is another moment where the pressurization of the high-level thought-forms ambient in the conference seemed to have a strong effect on me. A feeling, hard to put into words, of collective consciousness among the participants, one that accepts and embraces the differences and incongruities currently expressed in favor of noticing the long-term gradual increase in understanding.
Spontaneous visit from Mr. Monk
Then Daniel Ingram appeared, in his nanobot-protecting gear, along with Sharena Rice, who does ultrasound research. After exchanging some consciousness-focused videogame ideas we went to the after-party and I talked to someone who gets psychedelic-level hallucinations from caffeine alone. It didn’t sound very high-valence, but it was definitely noteworthy. I concluded the night by hanging out with Milan Griffes and QRI friends at Milan’s Airbnb.
Friday: Qualia Manifesto and the End of Consciousness Day
On Friday I saw the panel “The Science of Consciousness – 30 Years On”, which in addition to giving a lot of credit to Stuart for the conference, also presented some interesting sociological observations. I really enjoyed the participants sharing pictures and memories of previous conferences. I suppose that, personally, the movie What The Bleep Do We Know? does some work to fill in the blanks of some of the vibes I’ve missed. Stuart appears in that movie, and I recall being quite impressed (as a 13-year-old) with his quick way of speaking about things like the relative scale between a proton and an electron, and doing so against the backdrop of a desert with cactuses. It really does some heavy lifting in terms of giving the mind a flavor of the vibe that was probably present, to an extent, in the 90s around these regions of the wavefunction.
I have to remind myself that What The Bleep Do We Know? has nothing to do with the conference other than some scenes with Stuart Hameroff in Tucson (and perhaps Dean Radin). But looking at the pictures that people like Susan Blackmore and Christof Koch shared, I did get a bit of the same vibes. Namely, the cultural material of the 90s needed to be lubricated with brightly colored patterned shirts, soft electronic background music, and visuals attempting to depict the quantum level of reality in order to cross the awkwardness energy barrier and talk about consciousness without constantly blushing.
Speaking of the 90s, I was then fortunate enough to hang out with Ken Mogi for a bit (see this 2005 article about him in Conscious Entities, a long-standing consciousness blog). He emphasized that the reason he was able to start and lead a Qualia center at Sony is that he also does a lot of other things that are very conventional, with multiple jobs spanning a number of disciplines. I suppose this somewhat confirms the view that, especially a couple of decades ago, the only way to interest the public in consciousness research was to also deliver a lot of other conventional value at the same time. Of course I am betting on consciousness research producing the bulk of value in the long term, though I recognize that immediate applications are hardly world-changing (beyond, of course, the use of straight-up high-end consciousness-altering compounds like MDMA and 5-MeO-DMT). Fortunately, the present seems far more receptive to the value of consciousness research at a broad, generational, cultural level. I think the world, and especially liberal West Coast culture, can digest serious attempts at consciousness exploration better than ever before. So the cautious and protective attitude of sticking to conventional epistemologies is far less needed now (to the extent, of course, that we can simultaneously guard against bad epistemologies).
The concurrent sessions of Friday that I attended were the whole set of “Neurostimulation to Understand the Mind”, with Sanjay Manchanda, Milan Pantovic, and Olivia Giguere / Matthew Hicks, chaired by Jay Sanguinetti. The most fascinating takeaway from this series for me was the imaging of changes in the brain due to ultrasound stimulation, which could perhaps be used to determine if the intervention is likely to work on someone. They also shared some encouraging phenomenology: they can induce meditation-like states, behaviorally measure the *desire to meditate* in people receiving the stimulation, and show that it significantly increases after ultrasound.
Later on Friday I spent some time looking at posters. I enjoyed having Enrique Chiu (with whom I have in common having represented Mexico at math olympiads; in his case going as far as earning a Silver medal at the IMO in 2013) explain his theory of saliency maps in the state-space of consciousness. It was awesome to see a fellow mathy Mexican also give it a real go at tackling some of these hard problems. I likewise had a good time hearing Anderson Rodriguez’s electroacoustic theory of consciousness, with some interesting ideas about binding. This is also the time when Chris Percy presented his poster about systematically cataloging everything that a complete theory of consciousness will need to account for.
We ate some food (fries and a delicious veggie platter) and headed to the “Poetry Slam – Zombie Blues – No-End of Consciousness Party”. I brought a projector and coordinated with conference organizers to showcase the work of Symmetric Vision during the party. Asher and I performed some “poetry” about consciousness vs. replicators and far future visions for consciousness. And then I personally partied too hard on the dance floor. I mean, the energy was really vibrant, and Stuart Hameroff was vibrating to the tune of microtubules, and DMT visuals were being projected on the big screen while a bunch of raving scientists of all ages waved colorful LED tubes in various grades of coordinated synchrony and decoherence. It’s one of those things that gets lodged in my mind as a new gestalt because my brain wouldn’t naturally believe those things can happen.
Saturday: Brain Organoids Day
On Saturday we watched the presentations on brain organoids. I am inspired to accelerate our work on figuring out the valence function for arbitrary biological neural networks, because by the looks of it these technologies will start to be deployed much sooner than anticipated. I think that stopping the use of brain organoids on a grand scale is not likely to be possible, but creating and locking in a computing paradigm that uses information-sensitive gradients of bliss might be possible. And I don’t think the window of opportunity here is very large. Perhaps a decade or two.
I was delighted to see Luca Turin’s work on anesthesia shown at Hartmut Neven’s fascinating presentation about quantum mechanics and brain organoids. They will be trying out xenon isotopes soon, in the hopes of detecting the influence of quantum states of the anesthetic at the macroscopic level (whether fruit flies get anesthetized or not). This seems extremely important to test, so godspeed to them.
At this point I said goodbye to the crew and just had a couple of final meetings, including a brief podcast with Tam Hunt, followed by simply resting on a balcony for several hours, taking note of the highlights of the conference and beginning to decompress (I’m mostly there, though I still have a couple of megapascals to go).
I look forward to following up with many of the conference attendees and to continuing to work on our core research to present next time.
Till next time, Tucson Consciousness!
Infinite bliss!
Andrés 🙂
Hard-Core Salvia Vibes at the Tucson Airport ………..microtubules, man!
Alternative Title: LSD Ego Death – A Play in Three Voices
[Epistemic Status: Academic, Casual, and Fictional Analysis of the phenomenology of LSD Ego Death]
Academic:
In this work we advance key novel interpretative frameworks to make sense of the distinct phenomenology that arises when ingesting a high dose of LSD-25 (250μg+). It is often noted that LSD, also known as lysergic acid diethylamide, changes in qualitative character as a function of dose, with a number of phase transitions worth discussing.
Casual:
You start reading an abstract of an academic publication on the topic of LSD phenomenology. What are the chances that you will gain any sense, any inkling, the most basic of hints, of what the high-dose LSD state is like by consuming this kind of media? Perhaps it’s not zero, but insofar as the phenomenological paradigms in mainstream use in the 2020s are concerned, we can be reasonably certain that the piece of media won’t even touch the outer edges of the world of LSD-specific qualia. Right now, you can trust the publication to get the core methodological boundary conditions right, like the dose per kilogram used, the standard deviation of people’s responses to questionnaire items, and the increase in blood pressure at the peak. But at least right now you won’t find a rigorous account of either the phenomenal character (what the experience felt like in detailed colorful phenomenology with precise reproducible parameters) or the semantic content (what the experience was about, the information it allowed you to process, the meaning computed) of the state. For that we need to blend in additional voices to complement the rigidly skeptical vibe and tone of the academic delivery method.
It’s for that reason that we will interweave a casual, matter-of-fact, “really trying to say what I mean in as many ways as I can even if I sound silly or dumb” voice (namely, this one, duh!). And, more so, in order to address the speculative semantic content on its own terms, we shall also include a fantastical voice in the mix.
Fantastical:
Fuck, you took too much. In many ways you knew that your new druggie friends weren’t to be trusted. Their MDMA pills were bunk, their weed was cheap, and they even pretended to drink fancier alcohol than they could realistically afford. So it was rather natural for you to assume that their acid tabs would be weak-ass. But alas, they turned out to have a really competent, niche, boutique, high-quality acid dealer. She lived only a few miles away and made her own acid, and dosed each tab at an actual, honest-to-God 120(±10)μg. She also had a lot of cats, for some reason (why this information was relayed to you only once you sobered up was not something you really understood – especially not the part about the cats). Thus, the 2.5 tabs in total you had just taken (well, you took 1/2, then 1, then 1, spaced one hour apart, and you had just taken the last dose, meaning you were still very much coming up, and coming up further by the minute) landed you squarely in the 300μg range. But you didn’t know this at the time. In fact, you suspected that the acid was hitting much more strongly than you anticipated for other reasons. You were expecting a 100-150 microgram trip, assuming each tab would be somewhere between 40 and 60μg. But perhaps you really were quite sleep deprived. Or one of the nootropics you had sampled last week turned out to have a longer half-life than you expected and was synergistic with LSD (coluracetam? schizandrol?). Or perhaps it was the mild phenibut withdrawal you were having (you took 2g 72 hours ago, which isn’t much, but LSD amplifies subtle patterns anyway). It wasn’t until about half an hour later, when the final tab started to kick in, that you realized the intensity of the trip kept climbing further than you expected, and it really, absolutely, had to be that the acid was much, much stronger than you thought was possible; most likely over 250 mics, as you quickly estimated, and you realized the implications.
From experience, you knew that 300 micrograms would cause ego death for sure. Of course people react differently to psychedelics. But in your case, ego death feelings start at around 150, and by 225-250 micrograms they become all-consuming for at least some portion of the trip. In turn, actually taking 300 micrograms was, for you, ego death overkill, meaning you were most likely not only going to lose it, but to be out for no less than an hour.
What do I mean by being out? And by losing it? The subjective component of the depersonalization that LSD causes is very difficult to explain. This is what this entire document is about. But we can start by describing what it is like from the outside.
Academic:
The behavioral markers of high dose LSD intoxication include confusion and delusions, as well as visual distortions of sufficient intensity to overcome, block, and replace sensory-activated phenomena. The depersonalization and derealization characteristic of LSD-induced states of consciousness tend to involve themes concerning religious, mystical, fantastical, and science fiction semantic landscapes. It is currently not possible to deduce the phenomenal character of these states of consciousness from within using our mainstream research tools without compromising the epistemological integrity of our scientists (having them consume the mind-altering substance would, of course, confound the rigor of the analysis).
Casual:
Look, when you “lose it” or when you “are out” what happens from the outside is that you are an unpredictable executor of programs that seem completely random to any external observer. One moment you are quietly sitting, rocking back and forth, on the grass. The next you stand up and walk around peacefully. You sit again, now for literally half an hour without moving. Then you suddenly jump up and run for 100m without stopping. And then you ask the person who is there, no matter if they are a kid, a grandmother, a cop, a sanitation professional, a sex worker, or a professor, “what do you think about ___?” (where ___ ∈ {consciousness, reality, God, Time, Infinity, Eternity, …}). Of course here reality bifurcates depending on whom you happened to ask this question. A cop? You might end up arrested. Probably via a short visit to a hospital first. And overall not a great time. A kid? You could be in luck, and the kid might play along without identifying you as a threat, and most likely you continue on your journey without much problem. Or, in one of the bad timelines, you end up fighting the kid. Not good. Most likely, if it was a grandmother, you might just activate random helpful programs, like helping her cross the street, and she might not even have the faintest clue (and I mean not the absolute faintest fucking clue) that you’re depersonalized on LSD thinking you’re God and that in a very real, if only phenomenological, sense, it was literal Jesus / Christ Consciousness that helped her cross the street.
Under most conditions, the biggest danger that LSD poses is a bad valence reaction, which usually wears off after a few hours and is educational in some way. But when taken at high doses and unsupervised, LSD states can turn into real hazards for the individual and the people around them. Not so much because of malice, or because it triggers animal-like behaviors (it can, rarely, but it’s not why it’s dangerous). The real problem with LSD states in high doses is when you are unsupervised and then you execute random behaviors without knowing who you are, where you are, or even what it is that you are intending to achieve with the actions you are performing. It is therefore *paramount* that if you explore high doses of LSD you do it supervised.
Academic:
What constitutes a small, medium, or large dose of LSD is culture and time-dependent. In the 60s, the average tab used to be between 200 and 400 micrograms. The typical LSD experience was one that included elements of death and rebirth, mystical unions, and complete loss of contact with reality for a period of time. In the present, however, the tabs are closer to the 50-100μg range.
In “psychonaut” circles, which gather in internet forums like Bluelight, Reddit, and Erowid, a “high dose of LSD” might be considered to be 300 micrograms. But in real-world, less selected, typical contexts of use for psychedelic and empathogen drugs, like dance festivals, a “high dose” might be anything above 150 micrograms. In turn, OG psychonauts like Timothy Leary and Richard Alpert would end up routinely using doses in the 500-1000μg range as part of their own investigations. In contrast, in TIHKAL, Alexander Shulgin lists LSD’s dose range as 60-200 micrograms. Clearly, there is a wide spread of opinions and practices concerning LSD dosing. It is for this reason that one needs to contextualize the demographic topos in which one is discussing a “high dose of LSD” with historical and cultural details.
Fantastical:
Being out, and losing it, in your case right now would be disastrous. Why? Because you committed the cardinal sin of psychedelic exploration. You took a high dose of a full psychedelic (e.g. LSD, psilocybin, mescaline, DMT – less so 2C-B or Al-LAD, which have a lower ceiling of depersonalization[1]) without a sitter. Of course you didn’t intend to. You really just wanted to land in the comfortably manageable 100-150 microgram range. But now… now you’re deep into depersonalization-land, and alone. Who knows what you might do? Will you leave your apartment naked? Will someone call the cops? Will you end up in the hospital? You try to visualize future timelines and… something like 40% of them lead to either arrest or hospital or both. Damn it. It’s time to pull out all the stops and minimize the bad timelines.
You go to your drug cabinet and decide to take a GABAergic. Here is an important lesson, and where timelines might start to diverge as well. Dosing of sedatives for psychedelic emergencies is a tricky issue. The problem is that sedatives themselves can cause confusion. So there are many stories you can find online of people who take a very large dose of alprazolam (Xanax) or something similar (typically a benzo) and then end up both very confused and combative while also tripping really hard. Interestingly, here the added confusion of the sedative plus its anxiolytic effect synergize to make you even more unpredictable. On the other hand, not taking enough is also quite easy, in which case the anxiety and depersonalization of the LSD (or similar) continue to overpower the anxiolysis of the sedative.
You gather up all the “adult in the room” energy you can muster and make an educated guess: 600mg of gabapentin and 1g of phenibut. Yet this will take a while to kick in, and you might depersonalize at any time and start wandering around. You need a plan in the meantime.
Academic:
In the article The Pseudo-Time Arrow we introduced a model of phenomenal time that takes into account the following three assumptions and works out their implications:
Indirect Realism About Perception
Discrete Moments of Experience
Qualia Structuralism
(1) is about how we live in a world-simulation and don’t access the world around us directly. (2) goes into how each moment of experience is itself a whole, and in a way, whatever feeling of space and time we may have, this must be encoded in each moment of experience itself. And (3) states that for any given experience there is a mathematical object whose mathematical features are isomorphic to the phenomenology of the experience (first introduced in Principia Qualia by Michael E. Johnson).
Together, these assumptions entail that the feeling of the passage of time must be encoded as a mathematical feature in each moment of experience. In turn, we speculated that this feature is _implicit causality_ in networks of local binding. Of course the hypothesis is highly speculative, but it was supported by the tantalizing idea that a directed graph could represent different variants of phenomenal time (aka. “exotic phenomenal time”). In particular, this model could account for “moments of eternity”, “time loops”, and even the strange “time splitting/branching”.
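To make the directed-graph picture concrete, here is a minimal toy sketch (my own illustration, not from the original article) where nodes stand for locally bound sub-moments and an edge a -> b means b is felt to follow a; ordinary time, time loops, branching, and a “moment of eternity” are just different graph shapes:

```python
# Toy representation of "exotic phenomenal time" as directed graphs (illustrative only).
linear    = {0: [1], 1: [2], 2: [3], 3: []}             # ordinary pseudo-time arrow
loop      = {0: [1], 1: [2], 2: [0]}                    # "time loop"
branching = {0: [1, 2], 1: [3], 2: [4], 3: [], 4: []}   # "time splitting/branching"
eternity  = {0: [0]}                                    # "moment of eternity"

def has_cycle(graph):
    # Depth-first search with a recursion stack to detect directed cycles.
    visited, stack = set(), set()
    def dfs(node):
        visited.add(node)
        stack.add(node)
        for nxt in graph.get(node, []):
            if nxt in stack or (nxt not in visited and dfs(nxt)):
                return True
        stack.discard(node)
        return False
    return any(dfs(n) for n in graph if n not in visited)

def splits(graph):
    # Does any moment point at more than one "next" moment?
    return any(len(v) > 1 for v in graph.values())

for name, g in [("linear", linear), ("loop", loop),
                ("branching", branching), ("eternity", eternity)]:
    print(f"{name:10s} cycle={has_cycle(g)} splitting={splits(g)}")
```

The point is simply that qualitatively different temporal phenomenologies correspond to easily distinguishable structural features of the graph (cycles, branching, self-edges), which is what makes the model testable in principle.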
Casual:
In some ways, for people like me, LSD is like crack. I have what I have come to call “hyperphilosophia”. I am the kind of person who feels like a failure if I don’t come up with a radically new way of seeing reality by the end of each day. I feel deeply vulnerable, but also deeply intimate, with the nature of reality. Nature at its deepest feels like a brother or sister; basement reality feels close and in some way like a subtle reshuffling of myself. I like trippy ideas, I like to have my thoughts scrambled and then re-annealed in unexpected ways; I delight in combinatorial explosions, emergent effects, unexpected phase transitions, recursive patterns, and the computationally non-trivial. As a 6-year-old I used to say that I wanted to be a “physicist mathematician and inventor” (modeling my future career plans around Einstein and Edison); I got deeply depressed for a whole year at the age of 9 when I confronted our mortality head on; I then experienced a fantastic release at 16 during my first ego death (with weed, of all drugs!) when I got a taste of Open Individualism; only to feel depressed again at 20, but now about eternal life and the suffering we’re bound to experience for the rest of time; switching then to pragmatic approaches to reduce suffering and achieve paradise à la David Pearce. Of course this is just a “roll of the dice” and I’m sure I would be telling you about a different philosophical trajectory if we were to sample another timeline. But the point is that all my life I’ve expressed a really intense philosophical temperament. And it feels innate – nobody made me so philosophical – it just happened, as if driven by a force from the deep.
People like us are a certain type for sure, and I know this because out of the thousands of people I’ve met I’ve had the fortune of encountering a couple dozen who are like me in these respects. Whether they turned out physicists, artists, or meditators is a matter of personal preference (admittedly a plurality of them is working on AI these days). And in general, people of this type tend to have a deep interest in psychedelics, for the simple reason that psychedelics give them more of what they like than any other drug.
Yes, a powerful pleasant body buzz is appreciated (heroin mellow, meth fizz, and the ring of the Rupa Jhanas are all indeed quite pleasant and intrinsically worthwhile states of consciousness – factoring out their long-term consequences [positive for the Jhanas, negative for heroin and meth]). But that’s not what makes life worth living for people who (suffer from / enjoy their condition of) hyperphilosophia. Rather, it is the beauty of completely new perspectives that illuminate our understanding of reality one way or another that drives us. And LSD, among other tools, often really hits the nail on the head. It makes all the bad trips and nerve-wracking anxiety of the state more than worth it in our minds.
One of the striking things about an LSD ego death that is incredibly stimulating from a philosophical perspective is how you handle the feeling of possible futures. Usually the way in which we navigate timelines (this is so seamless that we don’t usually realize how interesting and complex it is) is by imagining that a certain future is real and then “teleporting to it”. We of course don’t teleport to it. But we generate that feeling. And as we plan, we are in a way generating a bunch of wormholes from one future to another (one state of the world to another, chained through a series of actions). But our ability to do this is restricted by our capacity to generate definite, plausible, realistic and achievable chains of future states in our imagination.
On LSD this capacity can become severely impaired. In particular, we often realize that the sense of connection to near futures that we normally feel is in fact not grounded in reality. It’s a kind of mnemonic technique we employ for planning motor actions, but it feels from the inside as if we could control the nearby timelines. On LSD this capacity breaks down and one is forced to instead navigate possible futures via different means. In particular, something that begins to happen above 150 micrograms or so is that when one imagines a possible future it lingers and refuses to fully collapse. You start experiencing a superposition of possible futures.
For an extreme example, see this quote (from this article) I found in r/BitcoinMarkets by Reddit user I_DID_LSD_ON_A_PLANE in 2016:
[Trip report of taking a high dose of LSD on an airplane]: So I had what you call “sonder”, a moment of clarity where I realized that I wasn’t the center of the universe, that everyone is just as important as me, everyone has loved ones, stories of lost love etc, they’re the main character in their own movies.
That’s when shit went quantum. All these stories begun sinking in to me. It was as if I was beginning to experience their stories simultaneously. And not just their stories, I began seeing the story of everyone I had ever met in my entire life flash before my eyes. And in this quantum experience, there was a voice that said something about Karma. The voice told me that the plane will crash and that I will be reborn again until the quota of my Karma is at -+0. So, for every ill deed I have done, I would have an ill deed committed to me. For every cheap T-shirt I purchased in my previous life, I would live the life of the poor Asian sweatshop worker sewing that T-shirt. For every hooker I fucked, I would live the life of a fucked hooker.
And it was as if thousands of versions of me was experiencing this moment. It is hard to explain, but in every situation where something could happen, both things happened and I experienced both timelines simultaneously. As I opened my eyes, I noticed how smoke was coming out of the top cabins in the plane. Luggage was falling out. I experienced the airplane crashing a thousand times, and I died and accepted death a thousand times, apologizing to the Karma God for my sins. There was a flash of the brightest white light imagineable and the thousand realities in which I died began fading off. Remaining was only one reality in which the crash didn’t happen. Where I was still sitting in the plane. I could still see the smoke coming out of the plane and as a air stewardess came walking by I asked her if everything was alright. She said “Yes, is everything alright with YOU?”.
Fantastical:
It had been some years since you had done the LSD and Quantum Measurement experiment in order to decide if the feeling of timelines splitting was in any way real. Two caveats about that experiment. First, it used quantum random numbers generated in Sydney that were no less than 100ms old by the time they were displayed on the screen. And second, you didn’t get the phenomenology of time splitting while on acid during the tests anyway. But having conducted the experiment at least provided some bounds for the phenomenon. Literal superposition of timelines, if real, would need higher doses or fresher quantum random numbers. Either way, it reassured you somewhat that the effect wasn’t so strong that it could be detected easily.
But now you wish you had done the experiment more thoroughly. Because… the freaking feeling of timelines splitting is absolutely raging with intensity right now and you wish you could know if it’s for real or just a hallucination. And of course, even if just a hallucination, this absolutely changes your model of how phenomenal time must be encoded, because damn, if you can experience multiple timelines at once that means that the structure of experience that encodes time is much more malleable than you thought.
Uh? Interesting, I can hear a voice all of a sudden. It calls itself “Academic” and just said something about the stacking of narrative voices.
Fantastical:
It’s always fascinating how on LSD you get a kind of juxtaposition of narrative voices. And in this case, you now have an Academic, a Casual, and a Fantastical narrative stream each happening in a semi-parallel way. And at some point they started to become aware of each other. Commenting on each other. Interlacing and interweaving.
Casual:
Importantly, one of the limiting factors of the academic discourse is that it struggles to interweave detailed phenomenology into its analysis. Thankfully, with the LSD-induced narrative juxtaposition we have a chance to correct this.
Academic:
After reviewing in real time the phenomenology of how you are thinking about future timelines, I would like to posit that the phenomenal character of high dose LSD is characterized by a hyperbolic pseudo-time arrow.
This requires the combination of two paradigms discussed at the Qualia Research Institute. Namely, the pseudo-time arrow, which as we explained tries to make sense of phenomenal time in terms of a directed graph representing moments of experience. And then also the algorithmic reductions introduced in the Hyperbolic Geometry of DMT Experiences.
The latter deals with the idea that the geometry of our experience is the result of the balance between various forces. Qualia come up, get locally bound to other qualia, then disappear. Under normal circumstances, the network that emerges out of these brief connections has a standard Euclidean geometry (or rather, works as a projection of a Euclidean space, but I digress). But DMT perturbs the balance, in part by making more qualia appear, making them last longer, making them vibrate, and making them connect more with each other, which results in a network that has a hyperbolic geometry. In turn, the felt sense of being on DMT is one of _being_ a larger phenomenal space, which is hard to put into words, but possible with the right framework.
What we want to propose now is that on LSD in particular, the characteristic feeling of “timeline splitting” and the even more general “multiple timeline superposition” effect is the result of a hyperbolic geometry, not of phenomenal space as with DMT, but of phenomenal time. In turn, this can be summarized as: LSD induces a hyperbolic curvature along the pseudo-time arrow.
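As a toy illustration of what “hyperbolic along the time axis” would mean (my own sketch, with a made-up branching factor, not a claim from the article): in hyperbolic space the amount of “stuff” within a given radius grows exponentially with that radius, and a pseudo-time graph in which imagined futures linger instead of collapsing has exactly that growth profile.

```python
# Illustrative only: counting "future moments" reachable within a few steps
# when futures collapse to a single thread vs. when each one keeps spawning
# live alternatives (branching factor is a hypothetical parameter).

def reachable_moments(steps, branching_factor):
    """Distinct future moments reachable within `steps` steps when each
    moment keeps `branching_factor` live continuations."""
    return sum(branching_factor ** d for d in range(steps + 1))

for steps in (1, 3, 5, 8):
    sober = reachable_moments(steps, 1)   # futures collapse to one thread
    lsd   = reachable_moments(steps, 3)   # each future lingers and splits in three
    print(f"{steps} steps ahead: single-thread={sober:3d}  branching={lsd}")

# Single-thread growth is linear (Euclidean-like along the time axis);
# branching growth is exponential, the signature of tree-like / hyperbolic geometry.
```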
Casual:
Indeed, one of the deeply unsettling things about high dose LSD experiences is that you get the feeling that you have knowledge of multiple timelines. In fact, there is a strange sense of uncanny uncertainty about which timeline you are in. And here is where a rather scary trick is often played on us by the state.
The feeling of the multiverse becomes very palpable when the garbage collector of your phenomenal motor-planning scratchpad is broken and you just sort of accumulate plans without collapsing them (a kind of kinesthetic tracer effect).
Fantastical:
Ok, you need to condense your timelines. You can’t let _that_ many fall off the wagon, so to speak. You could depersonalize any moment. You decide that your best bet is to call a friend of yours. He is likely working, but lives in the city right next to yours and could probably get to your place in half an hour if you’re lucky.
> Hello!
> Hello! I just got out of a meeting. What’s up?
> Er… ok, this is gonna sound strange. I… took too much LSD. And I think I need help.
> Are you ok? LSD is safe, right?
> Yeah, yeah. I think everything will be fine. But I need to collapse the possibility space. This is too much. I can’t deal with all of these timelines. If you come over at least we will be trimming a bunch of them and preventing me from wandering off thinking I’m God.
> Oh, wow. You don’t sound very high? That made sense, haha.
> Duuudde! I’m in a window of lucidity right now. We’re lucky you caught me in one. Please hurry, I don’t know how much longer I can hang in here. I’m about to experience ego death. What happens next is literally up to God, and I don’t know what his plans are.
Your friend says he’ll take an Uber or Lyft and be there as soon as he can. You try to relax. Reality is scolding you. Why did you take this risk? You should know better!
Casual:
One of the unsettling feelings about high dose LSD is that you get to feel how extremely precious and rare a human life is. We tend to imagine that reincarnation would simply work like this: you die and then 40 days later come back as a baby in India or China or the United States or Brazil or wherever, based on priors, and only rarely in Iceland or a tiny Caribbean island. But no. Humans are a luxury reincarnation. Animal? Er, yeah, even animals are pretty rare. The more common form is simply the shape of some cosmic process or another, like intergalactic wind or superheated plasma inside a star. Any co-arising process that takes place in this Gigantic Field of Consciousness we find ourselves embedded in is a possible destination, for the simple reason that…
Academic:
The One-Electron Universe posits that there is only one particle in the entire cosmos, which is going forwards and backwards in time, interfering with itself, interweaving a pattern of path integrals that interlace with each other. If there is only one electron, then the chances of being a “human moment of experience” at a point in time are vanishingly small. The electrons whose pattern of superposition paint a moment of experience are but a tiny vanishing fraction of the four-dimensional density-mass of the one electron in the block universe entailed by quantum mechanics.
Fantastical:
When you realize that you are the one electron in the universe you often experience a complex superposition of emotions. Of course this is limited by your imagination and emotional state. But if you’re clear-headed, curious, and generally open to exploring possibilities, here is where you feel like you are at the middle point of all reality.
You can access all 6 Realms from this central point, and in a way escape the sense of identification with any one of them. Alas, this is not something that one always achieves. It is easy to get caught up in a random stream and end up in, say, the God Realm, completely deluded, thinking you’re God. Or in the Hell realm, thinking you’re damned forever somehow. Or the animal realm, seeking simple body pleasures and comfort. Or the human world, being really puzzled and craving cognitively coherent explanations. Or the Hungry Ghost dimension, where you are always looking to fill yourself up and perceive yourself as fundamentally empty and flawed. Or the Titan realm, which adds a perceptual filter where you feel that everything and everyone is in competition with you and you derive your main source of satisfaction from pride and winning.
In the ideal case, during an LSD ego death you manage to hang out right at the center of this wheel, without falling into any of the particular realms. This is where the luminous awareness happens. And it is what feels like the central hub for the multiverse of consciousness, except in a positive, empowering way.
Casual:
In many ways we could say that the scariest feeling during LSD ego death is the complete lack of control over your next rebirth.
Because if you, in a way, truly surrender to the “fact” that we’re all one and that it all happens in Eternity at the same time anyway… do you realize the ramifications that this has? Everything Everywhere All At Once is a freaking documentary.
Fantastical:
> Hello? What’s up?
> Yeah, er, are you coming over?
> Yes. I mean, you just called me… 5 minutes ago. Did you expect I’d be there already? I’m walking towards the Uber.
> Time is passing really slowly, and I’m really losing it now. Can you… please… maybe like, remind me who I am every, like, 30 seconds or so?
> Mmmm ok. I guess that’s a clear instruction. I can be helpful, sure.
[for the next 40 minutes, in the Uber headed to your place, your friend kept saying your name every 30 seconds, sometimes also his name, and sometimes reminding you where you were and why you had called him – bless his soul]
Casual:
Imagine that you are God. You are walking around in the “Garden of Possibilities”. Except that we’re not talking about static possibilities. Rather, we’re talking about processes. Algorithms, really. You walk around and stumble upon a little set of instructions that, when executed, turns you into a little snowflake shape. Or perhaps turns you into a tree-like shape (cf. L-systems). When you’re lucky, it turns you into a beautiful crystalline flower. In these cases, the time that you spend embodying the process is small. Like a little popcorn reality: you encounter, consume, and move on. But every once in a while you encounter a set of instructions that could take a very long time to execute. Due to principles of computational irreducibility, it is also impossible for you to determine in advance (at least in most cases) how long the process will take. So every once in a while you encounter a Busy Beaver and end up taking a very, very, very long time to compute that process.
Busy beaver values for different parameters (source)
But guess what? You are God. You’re eternal. You are forever. You will always come back and continue on your walk. But oh boy, from the point of view of the experience of being what the Busy Beaver executes, you do exist for a very long time. From the point of view of God, no matter how long this process takes, it will still be a blink of an eye in the grand scheme of things. God has been countless times in Busy Beavers and will be countless times there again as well. So enjoy being a flower, or a caterpillar, or a raindrop, or even an electron, because most of the time you’re stuck being ridiculously long processes like the Busy Beaver.
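For the curious, here is a tiny toy Turing-machine runner (my own illustration, not part of the original piece) that makes the irreducibility point concrete: for machines like the 2-state busy beaver below, the honest way to learn how long the process lasts is simply to run it and live through every step.

```python
# Minimal Turing-machine runner (illustrative only).
def run(transitions, max_steps=10_000):
    tape, head, state, steps = {}, 0, "A", 0
    while state != "HALT" and steps < max_steps:
        symbol = tape.get(head, 0)
        write, move, state = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
        steps += 1
    return steps, sum(tape.values())

# The 2-state, 2-symbol busy beaver champion: halts after 6 steps with 4 ones.
bb2 = {("A", 0): (1, "R", "B"), ("A", 1): (1, "L", "B"),
       ("B", 0): (1, "L", "A"), ("B", 1): (1, "R", "HALT")}

steps, ones = run(bb2)
print(f"halted after {steps} steps, leaving {ones} ones on the tape")
```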
Academic:
Under the assumption that the hyperbolic pseudo-time arrow idea is on the right track, we can speculate about how this might come about from the configuration of a feedback system. As we’ve seen before, an important aspect of the phenomenal character of psychedelic states of consciousness is captured by the tracer pattern. Moreover, as we discussed in the video about DMT and hyperbolic geometry, one of the ways in which psychedelic states can be modeled is in terms of a feedback system with a certain level of noise. Assume that LSD produces a tracer effect where, approximately, 15 times per second you get a strobe and a replay effect overlaid on top of your current experience. What would this do to your representation of the passage of time and the way you parse possible futures?
FRAKSL video I made to illustrate hyperbolic pseudo-time arrows coming out of a feedback system (notice how change propagates fractally across the layers).
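Here is a minimal numerical sketch of that feedback intuition (parameters invented purely for illustration): each “tick” overlays a decayed replay of the running state on top of the fresh input, so any given moment keeps echoing into the present instead of cleanly passing.

```python
# Strobe-and-replay feedback toy model (illustrative only; rate and decay are made up).
import numpy as np

rate, decay = 15, 0.8                  # 15 replays per second; 80% of each frame survives a tick
incoming = np.random.randn(rate * 2)   # two seconds of stand-in "fresh input" frames

state = 0.0
for frame in incoming:
    state = frame + decay * state      # new input plus a decayed echo of everything so far

# How long does any single frame keep contributing to the present moment?
half_life_ticks = np.log(0.5) / np.log(decay)
print(f"a frame echoes at half strength for ~{half_life_ticks:.1f} ticks "
      f"(~{half_life_ticks / rate:.2f} s); 'now' becomes a weighted pile-up of the recent past")
```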
Casual:
I think that LSD’s characteristic “vibrational frequency” is somewhere between phenethylamines and tryptamines. 2C-B strikes me as in the 10hz range for most vibrations, whereas psilocybin is closer to 20hz. LSD might be around 15hz. And one of the high-level insights that the lens of connectome-specific harmonic modes (or more recently geometric eigenmodes) gives us is that functional localization and harmonic modulation might be intertwined. In other words, the reason why a particular part of the brain might do what it does is because it is a great tuning knob for the harmonic modes that critically hinge on that region of the brain. This overall lens was used by Michael E. Johnson in Principia Qualia to speculate that the pleasure centers are responsible for high variance in valence precisely because they are strategically positioned in a place where large-scale harmony can be easily modulated. With this sort of approach in mind (we could call it even a research aesthetic, where for every spatial pattern there is a temporal dynamic and vice versa) I reckon that partly what explains the _epistemological_ effects of LSD at high doses involves the saturation of specific frequencies for conscious compute. What do I mean by this?
Say indeed that a good approximation for a conscious state is a weighted sum of harmonic modes. This does not take into account the non-linearities (how the presence of one harmonic mode affects the others), but it might be a great 60%-of-the-way-there kind of approximation. If so, I reckon that we use certain “frequency bands” to store the kinds of information that are naturally encoded in rhythms of those frequencies. It turns out, in this picture, that we have a sort of collection of inner clocks sampling the environment to pick up on patterns that repeat at different scales. We have a slow clock that samples every hour or so, one that samples every 10 minutes, one that samples every minute, every 10 seconds, every second, and then at 10, 20, 30, 40, and even 50 Hz. All of these inner clocks meet with each other to interlace and interweave a “fabric of subjective time”. When we want to know at a glance how we’re doing, we sample a fragment of this “fabric of subjective time”, and it contains information about how we’re doing right now, how we were doing a minute ago, an hour ago, a day ago, and even longer. Of course, sometimes we need to sample the fabric for a while in order to notice more subtle patterns. But the point is that our sense of reality in time seems to be constructed out of the co-occurrence of many metronomes at different scales.
I think that, in particular, the spatio-temporal resonant modes that LSD over-excites the most are actually really load-bearing for constructing our sense of context. It’s as if, when you over-energize one of these resonant modes, you push it into a smaller range of possible configurations (smoother sinusoidal waves rather than intricate textures). By super-saturating the energy in some of these harmonics on LSD, you flip over to a regime where there is really no available space for information to be encoded. You can therefore feel extremely alive and real, and yet when you query the “time fabric” you notice that there are big missing components. The information that you would usually get about who you are, where you are, what you have been doing for the last couple of hours, and so on, is instead replaced by a kind of eternal-seeming feeling of always having existed exactly as you currently are.
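Here is a minimal numerical sketch of that picture, with every specific number chosen by me for illustration rather than taken from any measurement: the “fabric of subjective time” as a weighted sum of sinusoidal modes, one per inner clock, plus a crude stand-in for what “saturating” one band (here 20 Hz, an arbitrary choice) does to a two-second fragment.

```python
# Minimal sketch of the "weighted sum of harmonic modes" picture; all numbers
# here are placeholders of mine, not measurements.
import numpy as np

# Nested "inner clocks" (Hz), roughly following the list in the text
clock_freqs = [1 / 3600, 1 / 600, 1 / 60, 1 / 10, 1, 10, 20, 30, 40, 50]
weights = np.ones(len(clock_freqs))   # equal weights as a placeholder

fs = 200                              # sample rate (Hz)
t = np.arange(0, 2.0, 1 / fs)         # a two-second "fragment" of the fabric

def time_fabric(t, freqs, w):
    """Weighted sum of harmonic modes, one per inner clock."""
    return sum(wi * np.sin(2 * np.pi * f * t) for wi, f in zip(w, freqs))

baseline = time_fabric(t, clock_freqs, weights)

# Crude stand-in for "saturating" one band: massively boost the 20 Hz mode
# (an arbitrary choice) and watch it crowd out the other clocks' structure.
saturated_w = weights.copy()
saturated_w[clock_freqs.index(20)] = 20.0
saturated = time_fabric(t, clock_freqs, saturated_w)

for name, sig in [("baseline", baseline), ("saturated", saturated)]:
    power = np.abs(np.fft.rfft(sig)) ** 2
    freqs_hz = np.fft.rfftfreq(len(sig), 1 / fs)
    band = (freqs_hz > 19) & (freqs_hz < 21)
    print(f"{name}: share of power near 20 Hz = {power[band].sum() / power.sum():.2f}")
```

The point of the toy is only that when one mode is massively over-weighted it dominates any sampled fragment, leaving little spectral room for the slower clocks that normally encode context.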
Fantastical:
If it weren’t for your friend helpfully reminding you where you were and who you are, you would certainly have forgotten the nature of your context and wandered off. The scene was shifting wildly, and each phenomenal object or construct was composed of a never-ending stream of gestalts competing for the space to take hold as the canonical representation (and yet, of course, always superseded by yet another “better fit”, constantly updating).
The feeling of the multiverse was crushing. Here is where you remembered how various pieces of media express aspects of the phenomenology of high dose LSD (warning: mild spoilers – for the movies and for reality as a whole):
Everything Everywhere All At Once: in the movie one tunes into other timelines in order to learn the skills that one has in those alternative lifepaths. But this comes with one side-effect, which is that you continue to be connected to the timeline from which you’re learning a skill. In other words, you form a bond across timelines that drags you down as the cost of accessing their skill. On high dose LSD you get the feeling that yes, you can learn a lot from visualizing other timelines, but you also incur the cost of loading up your sensory screen with information you can’t get rid of.
The Matrix: the connection is both the obvious one and a non-obvious one. First, yes, the reason this is relevant is because being inside a simulation might feel like a plausible hypothesis while on a high dose of LSD. But less intuitively, the Matrix also fits the bill when it comes to the handling of future-past interactions. The “Don’t worry about the vase” scene (which I imagine Zvi named his blog after) highlights that there is an intertwining between future and past that forges destiny. And many of the feelings about how the future and past are connected echo this theme on a high dose of LSD.
Rick and Morty (selected episodes):
Death Crystal: here the similarity is in how on LSD you feel that you can go to any given future timeline by imagining clearly a given outcome and then using it as a frame of reference to fill in the details backwards.
A Rickle in Time: how the timelines split but can in some ways remain aware of and affect each other.
Mortynight Run: In the fictional game Roy: A Life Well Lived you get to experience a whole human lifetime in what looks like minutes from the outside in order to test how you do in a different reality.
Tenet: Here the premise is that you can go back in time, but only one second per second and using special gear (reversed air tanks, in their case).
Of these, perhaps the most surprising to people would be Tenet. So let me elaborate. There are two Tenet-like phenomenologies worth commenting on, both of which you experience as your friend is on the way to pick you up:
One, what we could call the “don’t go this way” phenomenology. Here you get the feeling of making a particular choice – e.g., going to the other room to take more gabapentin to see if that helps (of course it won’t – it’s only been 15 minutes since you took it and it hasn’t even kicked in). Then you visualize briefly what that timeline feels like, and you get the feeling of living through it. Suddenly you snap back into the present moment and decide not to go there. This leaves a taste in your mouth of having gone there, of having been there, of living through the whole thing, just to decide 10 years down the line that you would rather come back and make a different choice.
At the extreme of this phenomenology you find yourself feeling like you’ve lived every possible timeline. And in a way, you “realize” that you’re, in the words of Teafaerie, a deeply jaded God looking for an escape from endless loops. So you “remember” (cf. anamnesis) that you chose to forget on purpose so that you could live as a human in peace, believing others are real, humbly accepting a simple life, lost in a narrative of your own making. The “realization” can be crushing, of course, and is often a gateway to a particular kind of depersonalization/derealization where you walk around claiming you’re God. Alas, this only happens in a sweet spot of intoxication, and since you went above even that, you’ll have a more thorough ego death.
Two, an even more unsettling Tenet-like phenomenology is the feeling that “other timelines are asking for your help – Big Time wants you to volunteer for the Time War!”. Here things go quantum, and completely bonkers. The feeling is the result of having the sense that you can navigate timelines with your mind in a much deeper way than, say, just making choices one at a time. This is a profound feeling, and conveying it in writing is of course a massive stretch. But even the Bering Strait was crossed by hominids once, and this stretch also feels crossable by us with the right ambition.
The multiverse is very large. You see, imagine what it would be like to restart college. One level here is where you start again from day 1. In other timelines you make different friends, read other books, take other classes, have other lovers, major in other disciplines. Now go back even a little further, to when the academic housing committee was making decisions about who goes to which dorm. There the multiverse diversifies, as you see a combinatorial explosion of possible dorm configurations. Go further back still, to when the admissions committee was making its decisions, and you get an even greater expansion of the multiverse, where different class configurations are generated.
Now imagine being able to “search” this bulky multiverse. How do you search it? Of course you could go action by action. But due to chaos, within important parameters like the set of people you’re likely to meet, possibilities quickly get scrambled. The worlds where you chose that bike versus that other bike in a particular moment don’t end up much more similar to each other than worlds grouped by a random partition of the timelines. Rather, you need to find pivotal decisions, as well as _anchor feelings_. E.g., it really matters whether a particular bad technology is discovered and deployed, because that drastically changes the texture of an entire category of timelines. It is better to search timelines via general vibes and feelings like that, because they really segment the multiverse into meaningfully different outcomes. This is the way in which you can move along timelines on high doses of LSD. You generate the feeling of things “having been a certain way” and you try to leave everything else as loose and unconstrained as possible, so that you search through the path integral of superpositions of all possible worlds where the feeling arises, and every once in a while when you “sample” the superposition you get a plausible universe where this is real.
Now, on 150 or 200 micrograms this feels very hypothetical, and the activity can be quite fun. On 300 micrograms, this feels real. It is actually quite spooky, because you feel a lot of responsibility here. As if the way in which you choose to digest cosmic feelings right there could lock in either a positive or negative timeline for you and your loved ones.
Here is where the Time War comes into play. I didn’t choose this. I don’t like this meme. But it is part of the phenomenology, and I think it is better that we address it head-on rather than let it surprise you and screw you up in one way or another.
The sense of realism that high dose LSD produces is unreal. It feels so real that it feels dreamy. But importantly, the sense of future timelines being truly there is often hard to escape. With this you often get a crushing sense of responsibility. And together with the “don’t go this way” phenomenology, you can experience a sort of “ping pong with the multiverse of possibilities”, where you feel like you go backwards and forwards in countless cycles searching for a viable, good future for yourself and for everyone.
In some ways, you may feel like you go to the End of Times when you’ve lived all possible lifetimes and reconverge on the Godhead (I’m not making this up; this is a common type of experience for some reason). Importantly, you often feel like there are _powerful_ cosmic forces at play, that the reason for your life is profound, and that you are playing an important role in the development of consciousness. One might even experience corner-case exotic phenomenal time, like states of mind with two arrows of time that are perpendicular to each other (unpacking this would take an entire new writeup, so we shall save it for another time). And sometimes you can feel like your moral fiber is being tested, often in incredibly uncomfortable ways, by these exotic phenomenal time effects.
Here is an example.
As your sense of “awareness of other timelines” increases, so does your capacity to sense timelines where things are going really well and timelines where things are going really poorly. Like, there are timelines where your friend is also having a heart attack right now, and then those where he crashes on the way to your apartment, and those where there’s a meteorite falling into your city, and so on. Likewise, there’s one where he is about to win the lottery, where you are about to make a profound discovery about reality that stands the test of sober inquiry, where someone just discovered the cure for cancer, and so on. One unsettling feeling you often get on high dose LSD is that, because you’re more or less looking at these possibilities “from the point of view of eternity”, in a way you are all of them at once. “Even the bad ones?” – yes, unsettlingly, even the bad ones. So the scary moral-fiber-testing thought that you sometimes get is whether you’d volunteer to be in one of the bad ones so that “a version of you gets to be in the good one”. In other words, if you’re everyone, wouldn’t you be willing to trade places? Often this is where Open Individualism gets scary and spooky, and where talking to someone else to get confirmation that there are parallel conscious narrative streams around is really helpful.
Casual:
We could say that LSD is like a completely different drug depending on the dose range you hit:
Below 50 micrograms it is like a stimulant with stoning undertones. A bit giggly, a bit dissociating, but pretty normal otherwise.
Between 50 and 150 you have a drug that is generally really entertaining, gentle, and for the most part manageable. You get a significant expansion in the room available to have thoughts and feelings, as if your inner scratch pads doubled in size. Colors, sounds, and bodily feelings are all significantly intensified, but still feel like amplified versions of normal life.
Between 150 and 250 you get all of the super stereotypical psychedelic effects, with very noticeable drifting, tracers, symmetries, apophenia, recursive processes, and fractal interlocking landscapes. It is also somewhat dissociative and part of your experience might feel dreamy and blurry, while perhaps the majority of your field is sharp, bright, and very alive.
From 250 to 350 it turns into a multiverse travel situation, where you forget where you are and who you are and at times that you even took a drug. You might be an electron for what feels like millions of years. You might witness a supernova in slow motion. You might spontaneously become absorbed into space (perhaps as a high energy high dimensional version of the 5th Jhana). And you might feel like you hit some kind of God computer that compiles human lifetimes in order to learn about itself. You might also experience the feeling of a massive ball of light colliding with you that turns you into the Rainbow version of the Godhead for a time that might range between seconds and minutes. It’s a very intense experience.
And above? I don’t know, to be honest.
Academic:
The intermittent collapse into “eternity” reported on high dose LSD could perhaps be interpreted as stumbling into fixed points of a feedback system, similar to how pointing a camera directly at its own video feed at the right angle produces a perfectly static image. On the other hand, we might speculate that many of the “time branching” effects are instead the result of a feedback system where each iteration doubles the number of images (akin to using a mirror to cover a portion of the screen and reflect the uncovered part).
Video I made with FRAKSL in order to illustrate exactly the transition between a hyperbolic pseudo-time arrow and a geometric fixed point in a feedback system. This aims to capture the toggle during LSD ego death between experiencing multiple timelines and collapsing into moments of eternity.
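To make the analogy concrete, here is a minimal sketch of my own (not QRI’s model, and not FRAKSL): the camera-pointed-at-its-own-feed loop as a contractive iterated map that settles into a static fixed point, versus a mirror-style loop in which every pass doubles the number of visible image copies.

```python
# Minimal sketch of the two feedback regimes named in the analogy above.
import numpy as np

rng = np.random.default_rng(1)
frame = rng.normal(size=(8, 8))            # toy "screen"

def camera_pass(img, gain=0.9):
    """Re-film the screen: a slight blur plus attenuation on each pass."""
    blurred = (img + np.roll(img, 1, axis=0) + np.roll(img, 1, axis=1)) / 3
    return gain * blurred

# Fixed-point regime: with gain < 1 the loop is a contraction, so successive
# frames stop changing -- the "perfectly static image" (here it fades to blank).
prev, cur = frame, camera_pass(frame)
for _ in range(200):
    prev, cur = cur, camera_pass(cur)
print("settled into a fixed point:", np.allclose(prev, cur, atol=1e-6))

# Branching regime: a mirror covers half the screen and reflects the rest, so
# each pass every visible copy spawns a smaller mirrored child.
copies = [(0.0, 1.0)]                      # (position, scale) of each copy
for _ in range(5):
    copies += [(pos + scale / 2, scale / 2) for pos, scale in copies]
print("image copies after 5 mirror passes:", len(copies))   # 2**5 = 32
```

The contrast is just the two regimes in the analogy: a contraction has a single attracting state (eternity as a fixed point), while the mirror loop grows its image count exponentially (branching timelines).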
Fantastical:
You decide that you do want to keep playing the game. You don’t want to roll the dice. You don’t want to embrace Eternity, and with it, all of the timelines, even the ugly ones. You don’t want to be a volunteer in the Time War. You just want to be a normal person, though of course the knowledge you’ve gained would be tough to lose. So you have to make a choice. Either you forget what you learned, or you quit the game. What are you going to do?
As you start really peaking and the existential choice is presented to you, your friend finally arrives outside of your apartment. The entrance is very cinematic, as you witness it both on your phone and in real life, like two parallel reality streams collapsing into a single intersubjective hologram via a parallax effect. It is intense.
Casual:
You have to admit, the juxtaposition of narrative streams with different stylistic proclivities really does enrich the human condition. In a way, this is one of the things that makes LSD so valuable: you get to simultaneously experience sets of vibes/stances/emotions/attitudes that would generally never co-exist. This is, at least in part, what might be responsible for increasing your psychological integration after the trip; you experience a kind of multi-context harmonization (cf. gestalt annealing). It’s why it’s hard to “hide from yourself on acid” – because the mechanism that usually keeps our incoherent parts compartmentalized breaks down under intense generalized tracers that maintain interweaving, semi-parallel narrative streams. Importantly, the juxtaposition of narrative voices is computationally non-trivial. It expands the experiential base in a way that allows for fruitful cross-pollination between academic ways of thinking and our immediate phenomenology. Perhaps this is important from a scientific point of view.
Fantastical:
With your friend in the apartment taking care of you – or rather, more precisely, reducing possibility-space to a manageable narrative smear, and an acceptable degree of leakage into bad timelines – you can finally relax. What’s more, the sedatives finally kick in, and the psychedelic effects reduce by maybe 20-25% in the span of an hour or so. You end up having an absolutely great time, and choose to keep playing the game. You forget you’re God, and decide to postpone the question of whether to fall into Nirvana for good until the next trip.
[1] LSD has a rather peculiar dose-response curve. It is not a “light” psychedelic, although it can certainly be used to have light experiences. Drugs like AL-LAD are sometimes described as relatively shallow in that they don’t produce the full depth of richness LSD does. Or 2C-B/2C-I, which tend to come with a more grounded sense of reality relative to the intensity of the sensory amplification. Or DMT, which, despite its extreme reality-replacing effects, nonetheless tends to give you a sense of rhythm and timing that keeps the sense of self intact along some dimensions. LSD is a full psychedelic in that at higher doses it really deeply challenges one’s sense of reality. I have never heard of someone taking 2C-B at, say, 30 mg and freaking out so badly that they believe reality is about to end, or that they are God and wish they didn’t know it. But on 200-400 micrograms of LSD this is routine. Of course you may not externalize it, but the “egocidal” effects of acid are powerful and hard to miss, and they are in some ways much deeper and more transformative than the colorful show of DMT or the love of MDMA, because LSD is ruthless in its insistence, methodical in its approach, and patient like water (which over decades can carve deep into rock). As Christopher Bache says in LSD and the Mind of the Universe: “An LSD session grinds slow but it grinds fine. It gives us time to be engaged and changed by the realities we are encountering. I think this polishing influences both the eventual clarity of our perception in these states and what we are able to bring back from them, both in terms of healing and understanding”. There’s a real sense in which part of the power of LSD comes from its capacity to make us see, for long periods of time, things that under normal circumstances would have us flinch away in a snap.
In this video I discuss in depth the following topics:
1. MDMA is cardiotoxic and likely neurotoxic, with real and significant side-effects when taken often. Don’t do that. Respect and honor this beautiful state and save it for when you really need it.
2. The phenomenology is often described as “removing layers of conditioning and finding your essential, loving, and pure *core*”. It seems to significantly reduce greed, hate, and delusion, for at least a solid 90 minutes.
3. I argue that a good frame would be to think of the effects as drastically reducing both reactance and fear. Then you can assess a situation without the distortions of these two mental factors, which tend to generate rather self-serving thought-forms.
4. The concept of “authenticity” and its operationalization as a good lens with which to see the effects of MDMA. Big up to Matt Baggott, Co-founder and CEO of Tactogen, who is aiming to perfect MDMA and developed and applied the construct of authenticity in the scientific study of MDMA. Also thanks to Thomas S. Ray, who is on a similar path. Well done! Let’s get more people involved!
5. Another frame is to think of the state as clarifying what the “substance of thought” is like. We usually live under the illusion that emotional reactions follow Newtonian physics. They don’t. A better analogy would be cornstarch and water, where applying force quickly can solidify (and even tear) the medium. Thus, we get in our own way and create a strong sense of solidity without even realizing it, which then takes time and effort to soften and return to normal.
6. Discussion about QRI’s Psychedelic Thermodynamics model applied to MDMA.
7. Self-organizing principles, such as “repulsion-based algorithms” to undo knots, might explain what is happening to the field on MDMA.
8. A possible personality factor might be how “hard” someone is. I discuss personality disorders from a “hardness realism” point of view.
9. Emotional processing as a “skill tree” rather than “levels”.
10. High Entropy Alloys (HEA) are materials made of many metals that, in some cases, lead to really surprising effects, such as a new symmetry space group for their molecular organization (where none of the “ingredients” tend to crystallize that way, but as a whole they do). MDMA might be a bit of a unique HEA that balances serotonin (social anxiety reduction), dopamine (motivation and mental clarity), oxytocin (sense of closeness), and endorphins (bodily pleasure). It is more than the sum of its parts.
11. This leads to a speculation where the key high-level effects of MDMA, in addition to reducing fear and reactance, are the presence of courage, love, and equanimity. I try to explain these features in terms of MDMA’s “vibratory signature”.
12. Deep discussion about self-honesty and why it develops in the state. I speculate it has to do with the de-modularization of our vascular clusters (or something else, if blood turns out to be a special case).
13. This blending of modules with each other results in an uncomfortable but helpful overlap between the contradictory faces we put on in social settings. It is ideal to experience this with equanimity and patience, however difficult it is to acknowledge it to ourselves. The other side of this wall is light and beautiful, I promise.
14. It seems to me that MDMA creates a highly redundant and highly overdetermined Euclidean geometric phenomenal space, where each point “knows” really clearly how far it is from every other point. Psychedelics can sometimes do this for short periods of time, but they usually create complex fractaline phenomenal spaces. MDMA is different – highly “clear and normal” yet unblocked and euphoric.
15. The concept of Gnarliness as it relates to the “field knots” that MDMA can help unwind.