From Neural Activity to Field Topology: How Coupling Kernels Shape Consciousness

This post aims to communicate a simple yet powerful idea: if you have a system of coupled oscillators controlled by a coupling kernel, you can use it not only to “tune into” resonant modes of the system, but also as a point of leverage to control the topological structure of the fields interacting with the oscillators.

This might be a way to explain how topological boundaries are mediated by neuronal activity, which in turn can be modulated by drugs and neurotransmitter concentrations, thereby providing a link between neurochemistry and the topological structure of experience. Two things fall out of this. First, we might have the conceptual tools to link neural activity to the creation of global topological boundaries (which at QRI we postulate are what separates a moment of experience from the rest of the universe). Second, we might be able to explain how changes in oscillator/neural activity give rise to differently structured internal topologies, which (together with a way of interpreting the mapping between the topology of a field and its phenomenology) can help us explain things like the phenomenological differences between states of consciousness triggered by drugs as different as DMT and 5-MeO-DMT. In other words, this post points at how we can get topological structure out of oscillatory activity – and thus explain how conscious boundaries (both local and global) are modulated both natively and through neuropharmacological interventions. It’s an algorithmic reduction with potentially very large explanatory power in the realm of consciousness research, one that is only now becoming conceptually accessible thanks to years of research and development at QRI.

Let’s start with a Big Picture Summary of the framework:

QRI aims to develop a holistic theoretical framework for consciousness. This latest iteration aims to integrate electromagnetic field theories of consciousness, connectome-specific harmonic waves, coupling kernels, and field topology in a way that might be capable of providing both explanatory and predictive power in the realm of phenomenology and its connection to biology. While this is an evolving framework, I see a lot of value in sharing the general idea (the “big picture”) we currently have, to start informing the community and collaborators about how we’re thinking about unifying frameworks for understanding consciousness. The core elements of the Big Picture are:

  • Coupling Kernels as Neural-Global Bridge: The coupling kernel serves as a critical bridge between local neural circuitry and global brain-wide behavior. As demonstrated in Marco Aqil’s work, when scaling up from neural microcircuits, the power distribution across different system harmonics can be modulated through coupling kernel parameters. This is something we arrived at independently last year in a very empirical and hands-on way, but Marco’s precise mathematical framework provides a solid theoretical foundation for this connection.
  • Geometric Constraints on Coupling Effects: The underlying geometry of a system fundamentally shapes how coupling kernels manifest their effects: resonant modes accessible through coupling kernels differ significantly between scale-free and geometric networks. Within geometric networks, specific geometries and dimensionalities generate characteristic resonant patterns. Thus, a single “high level” effect like a change in coupling kernel can have a wide range of different effects depending on the type of network/system to which it is applied.
  • Network Geometry Interactions and Projective Intelligence: A fundamental computational principle emerges from the interaction between networks of different geometries/topologies. This underlies “projective intelligence” (or more broadly, mapping/functional intelligence) – as exemplified by the interaction between the 2D visual field and 3D somatic field.
  • Topological Solution to the Boundary Problem: The topological solution to the boundary problem elucidates how physically “hard” boundaries with causal significance and holistic behavior could explain the segmentation of consciousness into discrete experiential moments.
  • Internal Topology and Phenomenology: The internal topological complexity within a globally segmented topological field pocket may determine its phenomenology – specifically, the field’s topological defects might establish the boundary conditions.
  • 5-MeO-DMT and Topological Simplification: 5-MeO-DMT experiences demonstrate phenomenological topological simplification as documented by Cube Flipper and other HEART members.
  • Coupling Kernels and Field Topology: Coupling kernels applied to electric oscillators can modulate field topology (observable in the vortices and anti-vortices of the magnetic field containing the electric oscillators, which you can see in the simulations below).
  • DMT vs 5-MeO-DMT Effects: This framework offers an explanation for the characteristic effects of DMT and 5-MeO-DMT: DMT generates competing coherence clusters and multiple simultaneous observer perspectives – interpretable as topological complexification within the pocket. Conversely, 5-MeO-DMT induces simplification where boundaries mutually cancel, ultimately producing experiences characterized by a single large pinwheel and the dissolution of topological defects (as in cessation states).
  • Paths and Experience: The Path Integral of Perspectives – The final theoretical component suggests that the subjective experience of a topological pocket emerges from “the superposition of all possible paths” within it. The topological simplicity of 5-MeO-DMT states may generate an “all things at once” quality due to the absence of internal boundaries constraining the state. In contrast, DMT’s complex internal topology results in each topological defect functioning as an observer, creating the sensation of multiple entities.

We’re currently developing empirical paradigms to test these frameworks, including psychophysics studies and simulations of brain activity to reconstruct behavior observed through neuroimaging. These ideas are fresh and need a lot of work to be validated and integrated into mainstream science, but we see a path forward and we’re excited to get there.

Now let’s dive into these components and explain them more fully:

0. What’s a Coupling Kernel?

The core concept vis-à-vis QRI was introduced in Cessation states: Computer simulations, phenomenological assessments, and EMF theories (Percy, Gómez-Emilsson, & Fakhri, 2024), where we provided a novel conceptual framework to make sense of meditative cessations (i.e. brief moments at high levels of concentration where “everything disappears”). Coupling kernels were part of the conceptual machinery that allowed us to propose a model for cessations, but it is worth mentioning that the concept stands on its own as a neat tool that bridges low-level connectivity and high-level resonance in systems of coupled oscillators. The core concept is: in a system of coupled oscillators with a distance function for each pair of oscillators, a coupling kernel is a set of parameters that tells you what the coupling coefficient should be as a function of that distance. I independently arrived at this idea (which others have explored in the past to an extent) during the Canada HEART retreat in order to explain a wide range of phenomenological observations derived from meditative and psychedelic states of consciousness. In particular, we wanted a simple algorithmic reduction capable of explaining the divergent effects of DMT and 5-MeO-DMT: the former seems to trigger “competing clusters of coherence” in sensory fields, whereas the latter seems to pull the entire system toward a state of global coherence (in a dose-dependent way). Thinking of systems of coupled oscillators, I hypothesized that perhaps DMT induces a sort of alternating coupling kernel (where immediate neighbors want to be as different as possible from each other, whereas neighbors a little further apart want to be similar), while 5-MeO-DMT might instantiate a general “positive kernel” where oscillators all want to be in phase regardless of relative distance. We are in the process of developing empirical paradigms to validate this framework, so please take this with a grain of salt; the paradigm is currently in early developmental stages, but it is nonetheless worth sharing for the reasons already mentioned (bringing collaborators up to speed and getting the community to start thinking in this new way).
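To make the core concept concrete, here is a minimal sketch (in Python; this is not our internal tool, and every parameter value is a hypothetical choice of mine) of a ring of Kuramoto-style oscillators in which the pairwise coupling coefficient is looked up in a kernel indexed by ring distance:

```python
import numpy as np

def simulate_ring(n=64, steps=2000, dt=0.05, kernel=None, seed=0):
    """Kuramoto-style ring of oscillators; the coupling between each pair
    is read off `kernel` as a function of their ring distance."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, n)     # initial phases
    omega = rng.normal(0.0, 0.1, n)          # natural frequencies
    idx = np.arange(n)
    # Ring distance between every pair of oscillators.
    dist = np.minimum(np.abs(idx[:, None] - idx[None, :]),
                      n - np.abs(idx[:, None] - idx[None, :]))
    K = kernel[np.minimum(dist, len(kernel) - 1)]  # coupling matrix from kernel
    np.fill_diagonal(K, 0.0)
    for _ in range(steps):
        # Each oscillator is pulled toward (positive coupling) or pushed
        # away from (negative coupling) the phases of the others.
        theta += dt * (omega + (K * np.sin(theta[None, :] - theta[:, None])).sum(axis=1) / n)
    return theta

coherence = lambda th: np.abs(np.exp(1j * th).mean())  # Kuramoto order parameter

flat = np.full(8, 0.5)  # uniformly positive kernel ("5-MeO-DMT-like" in this framework)
mexican_hat = np.array([-1.0, -0.5, 0.8, 0.8, 0.3, 0.0, 0.0, 0.0])  # "DMT-like"

print(coherence(simulate_ring(kernel=flat)))         # typically near 1: global coherence
print(coherence(simulate_ring(kernel=mexican_hat)))  # typically much lower: competing clusters
```

This is the sense in which the kernel is just “a set of parameters indexed by distance”: the same simulation code, fed a different kernel array, lands in qualitatively different dynamical regimes.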

As demonstrated in our work “Towards Computational Simulations of Cessation“, a flat coupling kernel triggers a global attractor of coherence across the entire system, whereas an alternating negative-positive (Mexican hat-like) kernel produces competing clusters of coherence. This is a very high-level and abstract demonstration of a change in the dynamic behavior of coupled oscillators through the application of a coupling kernel. What we must then do is see how such a change would impact different systems in the organism as a whole.
Source

It is worth mentioning that in all of our simulations we also add a “small world” lever, constructed as follows: at the start of the simulation, for each oscillator we select two other oscillators at random and wire them to it. The lever controls the coupling constant between each oscillator and the two randomly chosen oscillators assigned to it. In graph theory, this kind of network architecture is often called a “small-world network” because the diameter of the graph quickly collapses as you add more random connections (and in our case, the system synchronizes as you dial in a positive coupling constant for these connections). In practice, while the distance-based coupling kernel tunes into resonant modes (traveling waves, checkerboard patterns, etc., as we will see below), the small-world coupling constant adds a kind of geometric noise (when negative) or a global phase to which all oscillators can easily synchronize (when positive). In effect, we suspect that small-world-like neural wiring might be responsible for things like dysphoric seizures (due to a high level of synchrony coupled with geometric irregularity causing intense dissonance) and the disruption of consonant traveling waves (e.g. as a way to modulate anxiety). The phenomenology of being hungover or of experiencing benzo withdrawal might have something to do with an overactive negative small-world coupling constant.
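As a toy illustration of how such a lever might be implemented (a sketch under the same assumptions as above; the function name and defaults are mine):

```python
import numpy as np

def add_small_world_links(K, k_sw=0.5, links_per_node=2, seed=1):
    """The 'small world' lever: wire each oscillator to a few random others
    and couple those links with one global constant k_sw. Positive k_sw
    pulls the whole system toward a shared phase; negative k_sw injects
    geometric noise. Returns a modified copy of the coupling matrix K."""
    rng = np.random.default_rng(seed)
    K = K.copy()
    n = K.shape[0]
    for i in range(n):
        targets = rng.choice(np.delete(np.arange(n), i),
                             size=links_per_node, replace=False)
        K[i, targets] += k_sw
        K[targets, i] += k_sw  # keep the coupling symmetric
    return K
```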

1. Coupling Kernels as Neural-Global Bridge

One of the early simulations that I coded would analyze in real time the Discrete Cosine Transform (DCT) of a 2D system of oscillators under the effect of different coupling kernels. Intuitively, I knew that the shape of the kernel clearly selected for specific resonant modes of the entire system, but seeing in real time how robust this effect was made me think there was probably a deep mathematical reason behind it. Indeed, as you can see in the animations below, the kernel shape can select checkerboard patterns, traveling waves, and even large pinwheels, all of which have characteristic spatial frequencies that are easily spotted in the DCT of the plate of oscillators.

The animations above show the coupling kernel for a 2D system of coupled oscillators in the top-left quadrant. The top-right quadrant is the Discrete Cosine Transform of the 2D plate of oscillators. The bottom-left is a temporal low-pass filter on the DCT, and the bottom-right is a temporal high-pass filter on the DCT. Source: Internal QRI tool (public release forthcoming)
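For readers who want to reproduce the spirit of this analysis, here is a hedged sketch of the spectral step (assuming scipy is available; the checkerboard below is a synthetic stand-in for actual simulation output):

```python
import numpy as np
from scipy.fft import dctn

def plate_spectrum(theta_2d):
    """2D DCT of the oscillator plate. We transform cos(phase) so the
    spectrum reflects the spatial wave pattern rather than raw angles."""
    return np.abs(dctn(np.cos(theta_2d), norm='ortho'))

# A checkerboard phase pattern concentrates power at high spatial
# frequencies; a single large pinwheel would concentrate it near the origin.
checkerboard = (np.indices((32, 32)).sum(axis=0) % 2) * np.pi
spectrum = plate_spectrum(checkerboard)
print(np.unravel_index(spectrum.argmax(), spectrum.shape))  # high-frequency corner
```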

In November of last year, at a QRI work retreat, we stumbled upon two key frameworks in the research of Marco Aqil that directly address these concepts: CHAOSS (Connectome-Harmonic Analysis Of Spatiotemporal Spectra) and Divisive Normalization. In those works we find how the coupling kernel serves as the critical bridge between local neural activity and global brain-wide behavior. This connection emerges from deep mathematical principles explored in the CHAOSS framework. As we scale up from individual neural circuits to larger networks, the distribution of power across different harmonics of the system becomes accessible through modulation of the coupling kernel. CHAOSS reveals how the eigenmodes (in this case corresponding to “connectome harmonics”) of our structural wiring give rise to global patterns of brain activity. When provided with appropriate coupling parameters, neural systems resonate with specific structural frequencies, producing macroscopic standing waves that unify and reorganize local activation patterns.

The link between molecular mechanisms and coupling kernels becomes particularly clear through divisive normalization. This canonical neural computation principle describes how a neuron’s response to input is modulated by the activity of surrounding neurons through specific molecular pathways. Different receptor systems (like 5-HT2A and 5-HT1A) can alter these normalization circuits in characteristic ways (perhaps ultimately explaining the implementation-level effects discussed in Serotonin and brain function: a tale of two receptors (Carhart-Harris & Nutt, 2017)). When we map this to our coupling kernel framework, we see that changes in divisive normalization directly translate to changes in the coupling kernel’s shape. For instance, 5-HT2A activation might enhance local inhibition while simultaneously strengthening medium-range excitation, creating the alternating positive-negative coupling pattern characteristic of DMT states. Conversely, 5-HT1A activation might promote more uniform positive coupling across distances, explaining 5-MeO-DMT’s tendency toward global coherence. This provides a concrete mechanistic bridge from receptor activation to field topology: receptor binding → altered divisive normalization → modified coupling kernel → changed field topology. It’s a beautiful example of how a relatively simple molecular change can propagate through multiple scales to create profound alterations in consciousness.
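To illustrate the computation itself, here is a sketch of the standard divisive normalization equation in code (the receptor-to-pool mapping in the comments is a hypothesis of this framework, not established pharmacology):

```python
import numpy as np

def divisive_normalization(drive, pool_weights, sigma=1.0, n=2.0):
    """Canonical divisive normalization (cf. Carandini & Heeger): each
    unit's response is its own exponentiated drive divided by a weighted
    sum of the surrounding population's activity."""
    pooled = np.convolve(drive ** n, pool_weights, mode='same')
    return drive ** n / (sigma ** n + pooled)

drive = np.exp(-0.5 * np.linspace(-5, 5, 101) ** 2)  # a bump of input

# Hypothetical receptor-dependent normalization pools: a narrow pool
# (strong local suppression) vs. a wide, shallow pool (diffuse, uniform
# suppression). In this framework's terms, these reshape the effective
# coupling profile in different ways.
narrow_pool = np.ones(5) / 5
wide_pool = np.ones(41) / 41
print(divisive_normalization(drive, narrow_pool).max())
print(divisive_normalization(drive, wide_pool).max())
```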

In the CHAOSS framework, each brain region and pathway is represented as a node and edge on a distance-weighted graph. The framework applies spatiotemporal graph filters that act as coupling kernels, encoding how each node influences and is influenced by its neighbors across multiple time scales. By systematically adjusting parameters for excitatory and inhibitory interactions, we can effectively “scan” the connectome’s harmonic space: certain configurations produce stable resonance, others generate traveling waves or chaotic patterns, and some configurations may induce boundary-dissolving states that might prevent the formation of gestalts, and so on. The point is that it can be rigorously shown that, in a system of coupled oscillators, a spatial (or temporal) coupling kernel can effectively “tune into” global resonant modes of the entire system.
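A minimal sketch of what “scanning the harmonic space” looks like at the level of linear algebra, using the combinatorial graph Laplacian as a simplified stand-in for the spatiotemporal filters of CHAOSS:

```python
import numpy as np

def connectome_harmonics(W, k=5):
    """Lowest-frequency eigenmodes of the graph Laplacian of a
    (distance-)weighted adjacency matrix W. These play the role of the
    global resonant modes a coupling kernel can selectively excite."""
    L = np.diag(W.sum(axis=1)) - W        # combinatorial graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)  # sorted ascending
    return eigvals[:k], eigvecs[:, :k]

# Toy example: a ring graph, whose Laplacian eigenmodes are the discrete
# analogue of standing waves around the ring.
n = 20
W = np.zeros((n, n))
for i in range(n):
    W[i, (i + 1) % n] = W[(i + 1) % n, i] = 1.0
freqs, modes = connectome_harmonics(W)
print(np.round(freqs, 3))  # 0, then pairs of degenerate frequencies
```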

At the very lowest level, Marco’s work on Divisive Normalization suggests that there is a mode of canonical neural computation where the response of a population of neurons to a given input signal is mediated by the surrounding context, via a circuit that involves neurons that respond to different neurotransmitter systems. In particular, here we have a bridge that links very low-level neural circuits to the coupling kernel, which in turn excites specific harmonic resonant modes of the entire system. In other words, the coupling kernel is a sort of intermediate “meso-level” structure that provides system-wide dynamic control of resonance and can be derived as a function of the balance between different neuronal populations that respond to specific neurotransmitters (learn more).

The result of encountering this research is that we now have a crisp conceptual explanation for how coupling kernels might arise from (and be controlled by) low-level circuitry, and also why (in a mathematically rigorous way) such kernels can tune into global resonant modes. It therefore starts to look like there is a potentially highly rigorous link between the insights that come from QRI’s Think Tank “taking phenomenology seriously” approach and the current leading academic theories of how drugs affect perception.

2. Geometric Constraints on Coupling Effects

With the above said, the human organism is really complex, and so it is natural to ask: what exactly does the coupling kernel apply to? As argued recently, we propose that it would be highly parsimonious if the coupling kernel applied to a range of systems at the same time: the visual cortex, the auditory cortex, the somatosensory cortex, the peripheral nervous system, and even the vasculature. Here the conceptual framework would say that a given drug might change the way low-level circuitry results in divisive normalization with specific constants, and that this change is applied to a wide range of systems. When you take LSD you get a characteristic “vibrational pattern” that might be present in, say, both the vascular system and the visual cortex at the same time. The underlying change is very simple, but the resulting effect is system-dependent due to the characteristic geometry and topology of each affected subsystem.

I think that a key insight we ought to work with is that the geometry of the system on which a coupling kernel operates fundamentally determines its high-level effects. A particularly striking example of how geometry shapes coupling kernel effects can be seen in the contrast between the visual cortex and the vasculature system. The visual cortex, organized as a hierarchical geometric network with distinct layers and columnar organization, responds to coupling kernels in ways that reflect its structural hierarchy. When a DMT-like kernel (alternating positive-negative coupling constants) is applied, it generates competing clusters of coherence at different scales of the hierarchy. This manifests phenomenologically as the characteristic layered, fractal-like visual patterns reported in DMT experiences, where similar motifs appear nested at multiple scales. In contrast, a 5-MeO-DMT-like kernel (uniformly positive coupling) drives the hierarchical network toward global synchronization, potentially contributing to the reported dissolution of visual structure in 5-MeO-DMT experiences.

Simulation comparing coupling kernels across a hierarchical network of feature-selective layers (16×16 to 2×2), showing how different coupling coefficients between and within layers affect pattern formation. The DMT-like kernel (-1.0 near-neighbor coupling) generates competing checkerboard patterns at multiple spatial frequencies, while the 5-MeO-DMT-like kernel (positive coupling coefficients) drives convergence toward larger coherent patches. These distinct coupling dynamics mirror how these compounds might modulate hierarchical neural architectures like the visual cortex.
Source: Internal QRI tool (public release forthcoming)

The vasculature system, on the other hand, exemplifies a scale-free network with its branching architecture. Here, the same coupling kernels produce markedly different effects. In the vasculature, a DMT-like kernel would tend to create competing clusters of coherence primarily at bifurcation points, where vessels branch. This could explain some of the characteristic bodily sensations reported during DMT experiences, such as the feeling of energy concentrating at specific points in the body. When a 5-MeO-DMT-like kernel is applied to this scale-free network, it drives the entire system toward global phase synchronization, potentially contributing to the reports of profound bodily dissolution and unity experiences (cf. when you experience a dysphoric 5-MeO-DMT response, this can often be traced to a mostly coherent but slightly off pattern of flow, where “energy” strongly aggregates at a specific point; see Arataki’s Guide to 5-MeO-DMT).

Simulation comparing different coupling kernels (DMT-like vs 5-MeO-DMT-like) applied to a 1.5D fractal branching network, showing how modified coupling parameters affect phase coherence and signal propagation. The DMT-like kernel produces competing clusters of coherence at bifurcation points, while the 5-MeO-DMT kernel drives the system toward global phase synchronization – patterns that could explain how these compounds differently affect branching biological systems like the vasculature or peripheral nervous system.
Source: Internal QRI tool (public release forthcoming)
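For concreteness, here is a toy sketch of the branching geometry itself (a plain binary tree as a stand-in for the 1.5D fractal network in the simulation above; graph distance on this tree can then index a coupling kernel exactly as ring distance did earlier):

```python
import numpy as np

def branching_adjacency(depth=6):
    """Adjacency matrix of a full binary tree: a toy stand-in for the
    branching architecture of vasculature-like systems."""
    n = 2 ** depth - 1                  # nodes in a full binary tree
    W = np.zeros((n, n))
    for parent in range(n // 2):        # internal nodes only
        for child in (2 * parent + 1, 2 * parent + 2):
            W[parent, child] = W[child, parent] = 1.0
    return W

W = branching_adjacency()
degrees = W.sum(axis=1)
# Bifurcation points (degree-3 nodes) are exactly where a Mexican-hat-like
# kernel would frustrate local agreement between branches.
print(int((degrees == 3).sum()))
```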

This framework helps explain how a single pharmacological intervention, by modifying coupling kernels through changes in divisive normalization, can produce such diverse phenomenological effects across different biological systems. The geometry of each system acts as a filter, transforming the same basic change in coupling parameters into system-specific resonant patterns. This provides a unified explanation for how psychedelics can simultaneously affect visual perception, bodily sensation, and cognitive processes, while maintaining characteristic differences between compounds based on their specific coupling kernel signatures.

The notion of a continuous graph-based system dissolves traditional distinctions between regional oscillator networks and global wave phenomena into a single multifaceted gem of coupled states. By shaping coupling kernels, we effectively tune into specific connectome harmonics, instantiating global resonant modes that underlie everything from coherent sensory integration to altered states of consciousness.

3. Network Geometry Interactions and Projective Intelligence

A fundamental computational principle emerges from the interaction between networks of different geometries and topologies. This principle underlies what we might call “projective intelligence” or more broadly, mapping/functional intelligence. The interaction between the 2D visual field and 3D somatic field provides a prime example of this principle in action.

Consider how we understand a complex three-dimensional object like a teapot. Our visual system receives a 2D projection, but we comprehend the object’s full 3D structure through an intricate dance between visual and somatic representations. As we observe the teapot from different angles, our visual system detects various symmetries and patterns in the 2D projections: perhaps the circular rim of the spout, the elliptical body, the handle’s curve. These 2D patterns, processed through the visual cortex’s hierarchical geometric network, generate characteristic resonant modes. Simultaneously, our somatic system maintains a 3D spatial representation where we can “map” these detected symmetries. The brain effectively “paints” the symmetries found in the 2D visual field onto the 3D somatic representation, creating a rich multi-modal representation of the object.

This process involves multiple parallel mappings between sensory fields, each governed by its own coupling kernel. The visual field might have one kernel that helps identify continuous contours, while another kernel in the somatic field maintains spatial relationships. These kernels can synchronize or “meet in resonance” when the mappings between fields align correctly, giving rise to stable multimodal representations. When we grasp the teapot, for instance, the tactile feedback generates somatic resonant modes that match our visually-derived expectations, reinforcing our understanding of the object’s structure (many thanks to Wystan, Roger, Cube Flipper, and Arataki for many discussions on this topic and their original contributions to it – the fact that visual sensations devoid of somatic coupling have a very different quality was a brilliant observation by Roger that sparked a lot of insights in our sphere).

The necessity of interfacing between spaces of different dimensionality (e.g. 3D somatic space and 2.5D visual space) creates interesting constraints. In systems exhibiting resonant modes emergent from coupled oscillator wiring, energy minimization occurs precisely where waves achieve low-energy configurations in both interfacing spaces simultaneously. This requires finding both an optimal projection between spaces and appropriate coupling kernels that allow the resulting space to behave as if it were unified.

Remarkably, this framework suggests that our cognitive ability to understand complex objects and spaces emerges from the brain’s capacity to maintain multiple concurrent mappings between sensory fields of different dimensionalities. Each mapping can be thought of as a kind of “cross-modal resonance bridge,” where coupling kernels in different sensory domains synchronize to create stable, coherent representations. When this level of coherence is achieved, the waves cannot detect the underlying projective dynamic: there simply is no “internal distinction” to be found in an otherwise complex system that typically maintains many differences between the spaces it maps. At the limit, the perfect alignment between the various mappings and coupling kernels of all sensory fields is what we hypothesize explains meditative cessations.

This multiple-mapping approach might explain phenomena like the McGurk effect, where visual and auditory information integrate to create a unified perception, or the rubber hand illusion, where visual and tactile fields can be realigned to incorporate external objects into our body schema. In each case, coupling kernels in different sensory domains synchronize to create new stable configurations that bridge dimensional and modal gaps.

The framework also provides insight into how psychedelics might affect these cross-modal mappings. DMT, for instance, might introduce competing clusters of coherence across different sensory domains, leading to novel and sometimes conflicting cross-modal associations. In contrast, 5-MeO-DMT might drive all mappings toward global synchronization, with characteristic system-wide synchronization effects, potentially explaining the reported dissolution of distinctions between sensory modalities and the experience of unified consciousness.

Understanding consciousness as a system of interacting dimensionally-distinct fields, each with their own coupling kernels that can synchronize and resonate with each other, offers a powerful new way to think about both ordinary perception and altered states. It suggests that our rich experiential world emerges from the brain’s ability to maintain and synchronize multiple parallel mappings between sensory domains of different dimensionalities, creating a unified experience from fundamentally distinct representational spaces.

4. Topological Solution to the Boundary Problem

Here’s where the framework really starts to come together: if we identify fields of physics with fields of qualia (a field-based version of panpsychism), then the boundaries between subjects could be topological in nature. Specifically, where magnetic field lines “loop around” to form closed pockets, we might find individual moments of experience. These pockets aren’t arbitrary or observer-dependent: they’re ontologically real features of the electromagnetic field that naturally segment conscious experience (note: I will leave aside for the time being the discussion about the ontological reality of the EM field, but suffice to say that even if the EM field is an abstraction atop the more fundamental ontology of reality, we believe topological segmentation could then apply to that deeper reality).

This provides a compelling solution to the boundary problem: what stops phenomenal binding from expanding indefinitely? The answer lies in the topology of the field itself. When field lines close into loops, they create genuine physical boundaries that can persist and evolve as unified wholes. These boundaries are frame-invariant (preserving properties under coordinate transformations), support weak emergence without requiring strong emergence, and explain how conscious systems can exert downward causation on their constituent parts through resonance effects.

5. Electromagnetic Field Topology and its Modulation

To demonstrate how coupling kernels create and control these field boundaries, we’ve developed three key simulations showing electric oscillators embedded in magnetic fields. By visualizing the resulting field configurations across different geometries – 2D grids, circular arrangements, and branching structures – we can directly observe how coupling kernels shape field topology.

When we apply a DMT-like kernel (alternating positive-negative coupling constants at different distances), we see an explosion of topological complexity in which multiple vortices and anti-vortices emerge, creating diverse patterns of nested field structures. The same kernel creates characteristic patterns in each geometry, but always tends toward complexification. In contrast, applying a 5-MeO-DMT-like kernel (uniformly positive coupling) causes these complex structures to simplify dramatically, often collapsing into a single large vortex or even completely smooth field lines.

Coupled oscillators in a 2D space, whose phases are interpreted as electric oscillations, are embedded in a magnetic field whose topology is mediated by the coupling kernel. Source: Internal QRI tool (public release forthcoming)

[Note: These are still 2D simulations – a full 3D electromagnetic simulation is in development and will likely reveal even richer topological dynamics. However, even these simplified models provide striking evidence for how coupling kernels can control field topology.]
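For the curious, vortices and anti-vortices in a 2D phase field can be counted with a standard winding-number computation; here is a minimal sketch (applied to a synthetic single-vortex field, not to our simulation output):

```python
import numpy as np

def winding_numbers(theta):
    """Topological charge of each plaquette of a 2D phase field: +1 for a
    vortex, -1 for an anti-vortex, 0 elsewhere. Computed by summing
    wrapped phase differences around each elementary square."""
    wrap = lambda a: (a + np.pi) % (2 * np.pi) - np.pi
    d1 = wrap(theta[:-1, 1:] - theta[:-1, :-1])   # bottom edge
    d2 = wrap(theta[1:, 1:] - theta[:-1, 1:])     # right edge
    d3 = wrap(theta[1:, :-1] - theta[1:, 1:])     # top edge (reversed)
    d4 = wrap(theta[:-1, :-1] - theta[1:, :-1])   # left edge (reversed)
    return np.round((d1 + d2 + d3 + d4) / (2 * np.pi)).astype(int)

# A synthetic single vortex centered in the plate:
y, x = np.mgrid[-16:16, -16:16] + 0.5
theta = np.arctan2(y, x)
print(winding_numbers(theta).sum())  # 1: one net vortex in the field
```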

6. 5-MeO-DMT and Topological Simplification

The remarkable alignment between our theoretical predictions and actual psychedelic experiences becomes clear when we examine 5-MeO-DMT states. As documented in Cube Flipper’s “5-MeO-DMT: A Crash Course in Phenomenal Field Topology” (2024), these experiences frequently involve the systematic disentangling or annihilation of local field perturbations (“topological defects”) over time. Subjects report a progressive dissolution of boundaries and an eventual sense of absolute unity or “oneness.” Significantly, recent EEG analysis of 5-MeO-DMT experiences also reveals remarkable topological properties, which we’re currently trying to derive from a 3D model of the brain in light of altered coupling kernels.

Source: Cube Flipper’s HEART essay on 5-MeO-DMT and field topology.

This phenomenology maps really well onto what our electromagnetic simulations predict: a 5-MeO-DMT-like coupling kernel transforms networks of swirling singularities into simplified field configurations. The effect isn’t limited to any particular neural subsystem: it appears to drive global topological simplification across multiple scales and geometries, explaining both the intensity and the consistency of the experience across subjects. In turn, many of the characteristic phenomenological features of 5-MeO-DMT might find their core generator in the interaction between a very positive coupling kernel and the interesting relationships between different sensory fields as they try to map onto each other to minimize dissonance. At the peak of a breakthrough experience, this typically culminates in what appears to be a global multimodal coherent state, where presumably all the sensory fields have found a mapping to each other such that the waves in each look exactly the same: the recipe for a zero-information state of consciousness. A whiteout.

What’s particularly fascinating is that this framework suggests normal waking consciousness might represent a sweet spot of topological complexity: enough structure to maintain a stable sense of self and world, yet not so little that experience dissolves completely (as in 5-MeO-DMT states). Each topological defect could be thought of as a kind of “perspectival anchor” in the field. As these defects systematically dissolve under 5-MeO-DMT, we would expect exactly what subjects report: a progressive loss of distinct perspectives culminating in a state of pure unity. Perhaps sleep and dreaming could also be interpreted through this lens: during periods of wakefulness we slowly but surely accumulate topological defects; sleep and dreaming might be a process of topological simplification where the topological defects aggregate and cancel out. Notice next time you find yourself in a hypnagogic state what it feels like to “let go of the central grasping to experience” and the subsequent fast “unraveling” of the field of experience. Much more to say about this in the future (a topological simplification theory of sleep).

7. Coupling Kernels and Field Topology

The mechanism by which coupling kernels control field topology reveals something really deep, abstract, and yet applied about consciousness: the same mathematical object (the coupling kernel) can simultaneously modulate both neural dynamics and electromagnetic field structure. This isn’t just correlation: we are talking about a direct causal chain from molecular interaction to conscious experience and back. Precisely the sort of structure we want in order to both ground the topological boundary problem solution in neurophysiology and avoid epiphenomenalism (since the field topology feeds back into neural activity, cf. local field potentials).

Consider how this works: when we apply a coupling kernel to a network of electric oscillators, we’re not just changing their relative phases. We’re also sculpting the magnetic field they generate. Each oscillator contributes to the local magnetic field, and the coupling kernel determines how these contributions interfere. Positive coupling between nearby oscillators tends to align their fields, creating smooth, continuous field lines. Negative coupling creates discontinuities and vortices. The resulting field topology emerges from these collective interactions, yet acts back on the system as a unified whole through electromagnetic induction.
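As a very loose illustration of this field-sculpting intuition (emphatically not an electromagnetic solver; treating each oscillator’s phase as the direction of a local “current” is a toy assumption of mine):

```python
import numpy as np

def toy_bz(theta):
    """Toy stand-in for the out-of-plane magnetic field: interpret each
    oscillator's phase as a local current direction J = (cos t, sin t)
    and take the 2D curl of J, which is nonzero where 'field lines'
    circulate."""
    jx, jy = np.cos(theta), np.sin(theta)
    return np.gradient(jy, axis=1) - np.gradient(jx, axis=0)  # curl_z

aligned = np.zeros((32, 32))            # smooth, aligned phases: curl ~ 0
y, x = np.mgrid[-16:16, -16:16] + 0.5
vortex = np.arctan2(y, x)               # a phase vortex: strong local circulation
print(np.abs(toy_bz(aligned)).max(), np.abs(toy_bz(vortex)).max())
```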

What’s particularly elegant about this mechanism is its scale-invariance. Whether we’re looking at ion channels in a single neuron or large-scale brain networks, the same principles apply. The coupling kernel acts as a kind of “field-shaping operator” that can be applied at any scale where electromagnetic interactions matter. This helps explain why psychedelics, which presumably modify coupling kernels through receptor activation, can have such profound and coherent effects across multiple levels of brain organization.

8. DMT vs 5-MeO-DMT Effects

With this mechanism in hand, we can now understand the radically different effects of DMT and 5-MeO-DMT in a new light. The key insight is that these compounds don’t just change what we experience. They transform the very structure of the field that gives rise to bound experiences.

DMT appears to implement a coupling kernel with a characteristic Mexican-hat profile: strong negative coupling at short distances combined with positive coupling at medium distances. When applied to neural networks, this creates competing clusters of coherence. But more fundamentally, it generates a field topology rich in stable vortices and anti-vortices. Each of these topological features acts as a semi-independent center of field organization – a kind of local “observer” within the larger field.

This helps explain one of the most striking aspects of DMT experiences: the encounter with apparently autonomous entities or beings. If each major topological defect in the field functions as a distinct locus of observation, then the DMT state literally creates multiple valid perspectives within the same field of consciousness. The geometric patterns commonly reported might reflect the larger-scale organization of these topological features – the way they naturally arrange themselves in space according to electromagnetic field dynamics.

The bizarre yet consistent nature of DMT entity encounters takes on new meaning in this framework. These entities often seem to exist in spaces with impossible geometries, yet interact with each other and the observer in systematic ways. This is exactly what we’d expect if they represent stable topological features in a complexified electromagnetic field: they would follow precise mathematical rules while potentially violating our usual intuitions about space and perspective. This even applies to our notion of a central observer and an object of observation: the DMT space has many overlapping “points of view” derived from the complex topology of the field.

These insights stand in stark contrast to 5-MeO-DMT’s effects, but they emerge from the same underlying mechanism. They also suggest new research directions. For instance, we might be able to predict specific patterns of field organization under different compounds by analyzing their receptor binding profiles in terms of their implied coupling kernels. This could eventually allow us to engineer specific consciousness-altering effects by designing molecules (or drug cocktails) that implement particular coupling kernel shapes.

9. Paths and Experience: The Path Integral of Perspectives

Here’s where we get to be both mathematically precise and delightfully speculative: I propose that the mapping between field topology and phenomenology is best understood through the path integral of all possible perspectives within a topological pocket. This isn’t just mathematical fancy – it’s a necessary move once we realize that consciousness doesn’t always have a center.

Think about it: we’re used to consciousness having a kind of “screen” quality, where everything is presented to a singular point of view. But this is just one possible configuration(!). On DMT, for instance, experiencers often report accessing topological extrema instantaneously, as if consciousness could compress or tunnel through its own geometry to find patterns and symmetries. This suggests our usual centered experience might be more of a special case – perhaps we’re too attached (literally, in terms of field topology) to a central vortex that geometrizes experience in a familiar way.

When we consider the full range of possible field topologies, things get wonderfully weird (but also kind of eerie to be honest). The “screen of consciousness” starts looking like just one possible way to organize the field, corresponding to a particular kind of stable vortex configuration. But there are so many other possibilities! The path integral approach lets us understand how a completely “centerless” state could still be conscious – it’s just integrating over all possible perspectives simultaneously, without privileging any particular viewpoint.

This framework helps explain why 5-MeO-DMT can produce states of “pure consciousness” without content – when the field topology simplifies enough, the path integral becomes trivial. There’s literally nothing to distinguish one perspective from another. In a perfectly symmetrical manifold, all points of view are exactly the same. This ultimately ties into the powerful valence effects of 5-MeO-DMT, seen through the lens of a field-theoretic version of the Symmetry Theory of Valence (Johnson, 2016). We’re currently developing valence functions for field topologies, though we don’t yet have concrete results worth showing (a writeup is forthcoming). Conversely, if this framework is accurate, then DMT’s complex topology creates many local extrema, each serving as a kind of perspectival anchor point, leading to the sensation of multiple observers or entities. This would be predicted to have generically highly mixed valence, with at times highly dissonant states and at times highly consonant states, yet always rich in internal divisions and complex symmetries rather than the “point of view collapse” characteristic of 5-MeO-DMT.

Our electromagnetic field visualizations make this particularly concrete. When we observe the magnetic field configurations in our simulations, we’re essentially seeing snapshots of the space over which these path integrals are computed. In the DMT-like states, the field is rich with vortices and anti-vortices – each one representing a potential perspective from which to “view” the field. The path integral must account for all possible paths through this complex topology, including paths that connect different vortices. This creates a kind of “quantum tunneling of perspective” (I know how this sounds, but bear with me) where consciousness can leap between different viewpoints, perhaps explaining the characteristically bizarre spatial experiences reported on DMT. In contrast, when we apply the 5-MeO-DMT-like kernel, we watch these vortices collapse and merge. The topology simplifies until there’s just one global structure – or sometimes none at all. At this point, the path integral becomes trivial because all paths through the field are essentially equivalent. There’s no longer any meaningful distinction between different perspectives because the field has achieved a kind of perfect symmetry.
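As one deliberately toy way to operationalize the “triviality” of the path integral (the metric and all of its details are mine and purely illustrative):

```python
import numpy as np

def perspective_diversity(theta):
    """Toy proxy for how different the field looks from different vantage
    points: re-center the field on a grid of candidate 'perspectives'
    (via circular shifts) and measure the variance across these views.
    A perfectly symmetric field scores ~0: all perspectives are
    equivalent and the 'path integral' is trivial."""
    n = theta.shape[0]
    views = [np.roll(np.roll(np.cos(theta), -i, axis=0), -j, axis=1)
             for i in range(0, n, 4) for j in range(0, n, 4)]
    return np.var(np.stack(views), axis=0).mean()

flat_field = np.zeros((32, 32))             # perfect symmetry
y, x = np.mgrid[-16:16, -16:16] + 0.5
complex_field = np.arctan2(y, x) + 0.3 * x  # a vortex plus a gradient
print(perspective_diversity(flat_field))    # 0.0
print(perspective_diversity(complex_field)) # > 0
```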

Conclusion: A Network of Insights

This theoretical framework – connecting coupling kernels, field topology, and conscious experience – emerged from years of collaborative work and inspiration. While the specific insights about coupling kernels and their effects on field topology are my contributions, they stand atop a mountain of brilliant work by the extended QRI family.

I’m deeply grateful to Chris Percy for his rigorous development of these ideas, particularly in understanding their philosophical implications in the context of the current literature of consciousness studies, Michael Johnson for years of fruitful collaboration (and his great contribution to the field via the Symmetry Theory of Valence and formalization of Neural Annealing), as well as really helpful QRI advisors like Shamil Chandaria, Robin Carhart-Harris, and Luca Turin. Also special thanks to the great long-time doers in QRI like Hunter Meyer, Marcin Kowrygo, Margareta Wassinge and Anders Amelin (RIP). Cube Flipper’s phenomenological investigations of 5-MeO-DMT have been invaluable, as have the insights from Roger Thisdell, Wystan Bryan-Scott, Asher Arataki, and others. Everyone on the HEART team’s dedication to careful exploration has provided crucial empirical grounding for these theoretical developments.

I’m also excited about ongoing work with our academic collaborators (to be announced soon – we’re currently designing studies to test these ideas rigorously). In particular I want to thank Till Holzapfel for his awesome research and collaborations (and help with the QRI Amsterdam meetup!), Taru Hirvonen for her visual intuitions and work, Emil Hall for his amazing programming and conceptual development help, Symmetric Vision for his incredible visual work and intuitions, Ethan Kuntz for his insights on spectral graph theory, Scry for his retreat replications, Marco Aqil for his ground-breaking research (and for giving a presentation at the recent Amsterdam meetup), and many more people who have recently been delightful and helpful for the mission (special shoutout to Alfredo Parra). This emerging research program promises to put these theoretical insights to empirical test, and we’re working as a team to bridge phenomenology and hard neuroscience. It’s happening! 🙂

Also, none of this would have been possible without the broader QRI community and its supporters – a group of fearless consciousness researchers willing to take both mathematical rigor and subjective experience seriously. Together, we’re building a new science of consciousness that respects both the precision of physics and the richness of lived experience.

The path ahead is clear (well, at least in my head): we need to develop more sophisticated simulations of field topology, particularly in three dimensions, and devise clever ways to test these ideas experimentally through psychophysics and microphenomenology. The coupling kernel paradigm offers a concrete mathematical handle on consciousness – one that might let us not just understand but eventually engineer specific states of consciousness. It’s an exciting time to be working on this hard problem!

Thanks for coming along on this wild ride through field topology, psychedelic states, and the mathematics of consciousness. Stay tuned – there’s much more to come!

– Andrés 🙂

Costs of Embodiment

[X-Posted @ The EA Forum]

By Andrés Gómez Emilsson

Digital Sentience

Creating “digital sentience” is a lot harder than it looks. Standard Qualia Research Institute arguments for why it is either difficult, intractable, or literally impossible to create complex, computationally meaningful, bound experiences out of a digital computer (more generally, a computer with a classical von Neumann architecture) include the following three core points:

  1. Digital computation does not seem capable of solving the phenomenal binding or boundary problems.
  2. Replicating input-output mappings can be done without replicating the internal causal structure of a system.
  3. Even when you try to replicate the internal causal structure of a system deliberately, the behavior of reality at a deep enough level is not currently understood (aside from how it behaves in light of inputs-to-outputs).

Let’s elaborate briefly:

The Binding/Boundary Problem

  1. A moment of experience contains many pieces of information. It also excludes a lot of information. This means that a moment of experience contains a precise, non-zero amount of information. For example, as you open your eyes, you may notice patches of blue and yellow populating your visual field. The very meaning of the blue patches is affected by the presence of the yellow patches (indeed, they are “blue patches in a visual field with yellow patches too”), and thus you need to take into account the experience as a whole to understand the meaning of all of its parts.
  2. A very rough, intuitive conception of the information content of an experience can be hinted at with Gregory Bateson’s (1972) “a difference that makes a difference”. If we define an empty visual field as containing zero information, it is possible to define an “information metric” from this zero state to every possible experience by counting the number of Just Noticeable Differences (JNDs) (Kingdom & Prins, 2016) needed to transform such an empty visual field into an arbitrary one (note: since some JNDs are more difficult to specify than others, a more accurate metric should also take into account the information cost of specifying the change in addition to the size of the change that needs to be made; a toy sketch of such a metric appears after this list). It is thus evident that one’s experience of looking at a natural landscape contains many pieces of information at once. If it didn’t, you would not be able to tell it apart from an experience of an empty visual field.
  3. The fact that experiences contain many pieces of information at once needs to be reconciled with the mechanism that generates such experiences. How you achieve this unity of complex information starting from a given ontology with basic elements is what we call “the binding problem”. For example, if you believe that the universe is made of atoms and forces (now a disproven ontology), the binding problem will refer to how a collection of atoms comes together to form a unified moment of experience. Alternatively, if one’s ontology starts out fully unified (say, assuming the universe is made of physical fields), what we need to solve is how such a unity gets segmented out into individual experiences with precise information content, and thus we talk about the “boundary problem”.
  4. Within the boundary problem, as Chris Percy and I argued in Don’t Forget About the Boundary Problem! (2023), the phenomenal (i.e. experiential) boundaries must satisfy stringent constraints to be viable. Namely, among other things, phenomenal boundaries must be:
    1. Hard Boundaries: we must avoid “fuzzy” boundaries where information is only “partially” part of an experience. This is simply the result of contemplating the transitivity of the property of belonging to a given experience. If a (token) sensation A is part of a visual field at the same time as a sensation B, and B is present at the same time as C, then A and C are also both part of the same experience. Fuzzy boundaries would break this transitivity, and thus make the concept of boundaries incoherent. As a reductio ad absurdum, this entails phenomenal boundaries must be hard.
    2. Causally significant (i.e. non-epiphenomenal): we can talk about aspects of our experience, and thus we can know they are part of a process that grants them causal power. Moreover, if structured states of consciousness did not have causal effects in some way isomorphic to their phenomenal structure, evolution would simply have no reason to recruit them for information processing. Although epiphenomenal states of consciousness are logically coherent, they would leave us with no reason to believe, one way or the other, that the structure of experience varies in a way that mirrors its functional role. On the other hand, states of consciousness having causal effects directly related to their structure (the way they feel) fits the empirical data. By what seems to be a highly overdetermined Occam’s Razor, we can infer that the structure of a state of consciousness is indeed causally significant for the organism.
    3. Frame-invariant: whether a system is conscious should not depend on one’s interpretation of it or the point of view from which one is observing it (see appendix for Johnson’s (2015) detailed description of frame invariance as a theoretical constraint within the context of philosophy of mind).
    4. Weakly emergent from the laws of physics: we want to avoid postulating either that there is a physics-violating “strong emergence” at some level of organization (“reality only has one level” – David Pearce) or that there is nothing peculiar happening at our scale. Bound, causally significant experiences could be akin to superfluid helium: entailed by the laws of physics, but behaviorally distinct enough to play a useful evolutionary role.
  5. Solving the binding/boundary problems does not seem feasible with a von Neumann architecture in our universe. The binding/boundary problem requires the “simultaneous” existence of many pieces of information at once, and this is challenging using a digital computer for many reasons:
    1. Hard boundaries are hard to come by: looking at the shuffling of electrons from one place to another in a digital computer does not suggest the presence of hard boundaries. What separates a transistor’s base, collector, and emitter from its immediate surroundings? What’s the boundary between one pulse of electricity and the next? At best, we can identify functional “good enough” separations, but no true physics-based hard boundaries.
    2. Digital algorithms lack frame invariance: how you interpret what a system is doing in terms of classic computations depends on your frame of reference and interpretative lens.
    3. The bound experiences must themselves be causally significant. While natural selection seemingly values complex bound experiences, our digital computer designs aim precisely to denoise the system as much as possible, so that the global state of the computer does not influence the lower-level operations in any way. At the algorithmic level, the causal properties of a digital computer as a whole are, by design, never more than the strict sum of their parts.
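Here is the toy JND-counting sketch promised in point 2 above (the JND threshold and the flat per-change specification cost are both hypothetical simplifications):

```python
import numpy as np

def jnd_information(field, jnd=0.05, spec_cost=1.0):
    """Toy version of the JND-based information metric: count the
    just-noticeable-difference steps needed to transform an empty
    (all-zero) field into `field`, plus a flat specification cost for
    every location that must be touched."""
    steps = np.ceil(np.abs(field) / jnd)   # JND steps per location
    touched = (steps > 0).sum()            # locations needing any change
    return steps.sum() + spec_cost * touched

empty = np.zeros((8, 8))
landscape = np.random.default_rng(0).uniform(0, 1, (8, 8))
print(jnd_information(empty))      # 0.0: the reference state
print(jnd_information(landscape))  # large: many differences that make a difference
```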

Matching Input-Output Mappings Does Not Entail the Same Causal Structure

Even if you replicate the input-output mapping of a system, that does not mean you are replicating the internal causal structure of the system. If bound experiences depend on specific causal structures, they will not happen automatically without consideration of the nature of their substrate (which might have unique, substrate-specific causal decompositions). Chalmers’ (1995) “principle of organizational invariance” assumes that replicating a system’s functional organization at a fine enough grain will reproduce identical conscious experiences. However, this may be question-begging if bound experiences require holistic physical systems (e.g. quantum coherence). In such a case, the “components” of the system might be irreducible wholes, and breaking them down further would result in losing the underlying causal structure needed for bound experiences. This suggests that consciousness might emerge from physical processes that cannot be adequately captured by classical functional descriptions, regardless of their granularity.
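A deliberately trivial illustration of this point: two programs with identical input-output behavior and entirely different internal causal structure:

```python
def xor_computed(a: bool, b: bool) -> bool:
    """Derives the answer through internal logical structure."""
    return (a or b) and not (a and b)

XOR_TABLE = {(False, False): False, (False, True): True,
             (True, False): True, (True, True): False}

def xor_lookup(a: bool, b: bool) -> bool:
    """Produces the same outputs with no internal logic at all."""
    return XOR_TABLE[(a, b)]

# Behaviorally indistinguishable...
assert all(xor_computed(a, b) == xor_lookup(a, b)
           for a in (False, True) for b in (False, True))
# ...yet nothing inside xor_lookup mirrors the causal structure of XOR.
```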

Moreover, whether we realize it or not, it is always us (complex bound experiences ourselves) who interpret the meaning of the input and the output of a physical system; it is not interpreted by the system itself. This is because the system has no real “point of view” from which to interpret what is going on. This is a subtle point, and I will merely mention it for now, but a deep exposition of this line of argument can be found in The View From My Topological Pocket (2023).

We would further point out that the entity smuggling in a “point of view” to interpret a digital computer’s operations is the human who builds, maintains, and utilizes it. If we want a system to create its “own point of view”, we will need to find a way for it to bind information into (1) a “projector”/screen, (2) an actual point of view proper, or (3) the backwards lightcone that feeds into such a point of view. As argued, none of these are viable solutions.

Reality’s Deep Causal Structure is Poorly Understood

Finally, another key consideration that has been discussed extensively is that the very building blocks of reality have unclear, opaque causal structures. Arguably, if we want to replicate the internal causal structure of a conscious system, the classical input-output mapping is not enough. To ensure that what happens inside the system has the same causal structure as its simulated counterpart, you would also need to replicate how the system responds to non-standard inputs, including x-rays, magnetic fields, and specific molecules (e.g. xenon isotopes).

These ideas have all been discussed at length in articles, podcasts, presentations, and videos. Now let’s move on to a more recent consideration we call “Costs of Embodiment”.

Costs of Embodiment

Classical computational complexity theory is often used as a silver-bullet “analytic frame” to discount the computational power of physical systems. Here is a typical line of argument: assuming consciousness isn’t the result of implementing a quantum algorithm per se, there is “nothing that the brain can do that you couldn’t do with a simulation of the system”. This, however, neglects the complications that come from instantiating a system in the physical world with all that that entails. To see why, we must first explain the nature of this analytic style in more depth:

Introduction to Computational Complexity Theory

Computational complexity theory is a branch of computer science that focuses on classifying computational problems according to their inherent difficulty. It primarily deals with the resources required to solve problems, such as time (number of steps) and space (memory usage).

Key concepts in computational complexity theory include:

  1. Big O notation: Used to describe the upper bound of an algorithm’s rate of growth.
  2. Complexity classes: Categories of problems with similar resource requirements (e.g., P, NP, PSPACE).
  3. Time complexity: Measure of how the running time increases with the size of the input.
  4. Space complexity: Measure of how memory usage increases with the size of the input.

In brief, this style of analysis is suited for analyzing the properties of algorithms that are implementation-agnostic, abstract, and interpretable in the form of pseudo-code. Alas, the moment you start to ground these concepts in the real physical constraints to which life is subjected, the relevance and completeness of the analysis starts to fall apart. Why? Because:

  1. Big O notation counts how the number of steps (time complexity) or number of memory slots (space complexity) grows with the size of the input (or in some cases size of the output). But not all steps are created equal:
    1. Flipping the value of a bit might be vastly cheaper in the real world than moving the value of a bit to another location that is physically far away in the computer (see the cost sketch after this list).
    2. Likewise, some memory operations are vastly more costly than others: in the real world you need to take into account the cost of redundancy, distributed error correction, and entropic decay of structures not in use at the time.
  2. Not all inputs and outputs are created equal. Taking in some inputs might be vastly more costly than others (e.g. highly energetic vibrations that shake the system apart mean something to a biological organism as it needs to adapt to the possible stress induced by the nature of the input, expressing certain outputs might be much more costly than others, as the organism needs to reconfigure itself to deliver the result of the computation, a cost that isn’t considered by classical computational complexity theory).
  3. Interacting with a biological system is a far more complex activity than interacting with, say, logic gates and digital memory slots. We are talking about a highly dynamic, noisy, soup of molecules with complex emergent effects. Defining an operation in this context, let alone its “cost”, is far from trivial.
  4. Artificial computing architectures are designed, implemented, maintained, reproduced, and interpreted by humans who, if we are to believe already have powerful computational capabilities, are giving the system an unfair advantage over biological systems (which require zero human assistance).
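
To make point 1 concrete, here is a toy cost model, entirely made up for illustration (the constants are arbitrary), where flipping a bit in place is cheap but moving a bit across the machine is charged by physical distance:

```python
# Toy cost model (all constants are made-up illustrations): an operation is
# either a local bit flip or a move of one bit across the machine. Abstract
# step-counting charges both the same; an embodied model charges by distance.

FLIP_COST = 1.0           # assumed cost of flipping a bit in place
MOVE_COST_PER_UNIT = 0.1  # assumed cost per unit of distance a bit travels

def abstract_cost(ops):
    """Classical accounting: every operation is one step."""
    return len(ops)

def embodied_cost(ops):
    """Same operations, but moves are charged by physical distance."""
    total = 0.0
    for kind, distance in ops:
        total += FLIP_COST if kind == "flip" else MOVE_COST_PER_UNIT * distance
    return total

# A program that mostly shuffles data across a large machine:
ops = [("flip", 0)] * 100 + [("move", 10_000)] * 100
print(abstract_cost(ops))  # 200: geometry is invisible to the step count
print(embodied_cost(ops))  # 100100.0: routing dominates the real budget
```

Both accountings describe the same 200 operations, but only the embodied one notices that the program spends almost all of its budget on routing.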

Why Ignoring Embodiment May Lead to Underestimating Costs

Here is a list of considerations that highlight the unique costs that come with real-world embodiment for information-processing systems beyond the realm of mere abstraction:

  1. Physical constraints: Traditional complexity theory often doesn’t account for physical limitations of real-world systems, such as heat dissipation, energy consumption, and quantum effects.
  2. Parallel processing: Biological systems, including brains, operate with massive adaptive parallelism. This is challenging to replicate in classical computing architectures and may require different cost analyses.
  3. Sensory integration: Embodied systems must process and integrate multiple sensory inputs simultaneously, which can be computationally expensive in ways not captured by standard complexity measures.
  4. Real-time requirements: Embodied systems often need to respond in real-time to environmental stimuli, adding temporal constraints that may increase computational costs.
  5. Adaptive learning: The ability to learn and adapt in real-time may incur additional computational costs not typically considered in classical complexity theory.
  6. Robustness to noise: Physical systems must be robust to environmental noise and internal fluctuations, potentially requiring redundancy and error-correction mechanisms that increase computational costs (see the sketch after this list).
  7. Energy efficiency: Biological systems are often highly energy-efficient, which may come at the cost of increased complexity in information processing.
  8. Non-von Neumann architectures: Biological neural networks operate on principles different from classical computers, potentially involving computational paradigms not well-described by traditional complexity theory.
  9. Quantum effects: At the smallest scales, quantum mechanical effects may play a role in information processing, adding another layer of complexity not accounted for in classical theories.
  10. Emergent properties: Complex systems may exhibit physical emergent properties that arise from the interactions of simpler components, as well as phase transitions, potentially leading to computational costs that are difficult to predict or quantify using standard methods.
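
As a concrete illustration of consideration 6, here is a minimal sketch of the simplest error-correction scheme, a 3× repetition code with majority voting; the noise level and scheme are chosen purely for illustration:

```python
import random

# Minimal sketch of consideration 6: a 3x repetition code with majority vote.
# Robustness to noise is bought with extra storage and extra operations.

def encode(bit):
    """Store each logical bit three times (3x space overhead)."""
    return [bit, bit, bit]

def decode(copies):
    """Majority vote tolerates any single flipped copy (extra work per read)."""
    return 1 if sum(copies) >= 2 else 0

def noisy(copies, p):
    """Flip each stored copy independently with probability p."""
    return [c ^ (random.random() < p) for c in copies]

random.seed(0)
p, trials = 0.05, 100_000
raw = sum(random.random() < p for _ in range(trials))
coded = sum(decode(noisy(encode(1), p)) != 1 for _ in range(trials))
print(raw / trials)    # ~0.05: unprotected bit error rate
print(coded / trials)  # ~0.007: roughly 3p^2 - 2p^3, paid for with 3x storage
```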

See the appendix for a concrete example applying these considerations to an object recognition system in both abstract and embodied form (example provided by Kristian Rönn).

Case Studies:

1.  2D Computers

It is well known in classical computing theory that a 2D computer can implement anything that an n-dimensional computer can do: it is possible to create a 2D Turing Machine capable of simulating arbitrary computers of this class, making an n-dimensional computer and a 2D computer equivalent in what they can compute, with (at the limit, and up to simulation overhead) comparable runtime complexity achievable in 2D.

However, living in a 2D plane comes with enormous challenges that highlight the cost of embodiment in a given medium. In particular, we will see that the *routing costs* of information grow very quickly, as the channels connecting different parts of the computer need to take turns in order for the crossed wires to transmit information without saturating the medium of (wave/information) propagation.

A concrete example comes from examining what happens when you divide a circle into areas. This is a well-known math problem, where you are asked to derive a general formula for the number of regions into which a circle gets divided when you connect n points (in general position) on its periphery. The takeaway of this exercise is often to point out that even though at first the number of regions seems to follow the powers of 2 (2, 4, 8, 16…), the pattern eventually breaks (the number after 16 is, surprisingly, 31 and not 32).

For the purpose of this example we shall simply focus on the growth of edges vs. the growth of crossings between the edges as we increase the number of nodes. Since every pair of nodes has an edge, the formula for the number of edges as a function of the number of nodes n is: n choose 2. Similarly, any four points define a single unique crossing, and thus the formula for the number of crossings is: n choose 4. When n is small (6 or less), the number of crossings is smaller than or equal to the number of edges. But as soon as we hit 7 nodes, the number of crossings dominates the number of edges. Asymptotically, in fact, the number of edges grows as O(n^2) in Big O notation, whereas the number of crossings grows as O(n^4), which is much faster. If this system is used to implement an algorithm that requires every pair of nodes to interact with each other once, we may at first be under the impression that the complexity will grow as O(n^2). But if this system is embodied, messages between the nodes will start to collide with each other at the crossings. Eventually, the delays and traffic jams caused by the embodiment of the system in 2D will dominate the time complexity of the system, as the sketch below illustrates.
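
A minimal sketch, using only the combinatorial formulas quoted above, verifies both the broken powers-of-2 pattern and the crossover point where crossings overtake edges:

```python
from math import comb

# Edges vs. crossings for n points in general position on a circle,
# with every pair of points connected by a chord.
for n in range(1, 11):
    edges = comb(n, 2)      # every pair of nodes defines an edge: O(n^2)
    crossings = comb(n, 4)  # every 4 points define one crossing: O(n^4)
    regions = 1 + edges + crossings  # the classic circle-division formula
    print(n, edges, crossings, regions)

# The regions column runs 1, 2, 4, 8, 16, 31, ... breaking the powers-of-2
# pattern at n = 6. At n = 6, edges and crossings tie (15 each); from n = 7
# onward (21 edges vs. 35 crossings) the crossings dominate, so collisions
# at the crossings, not the pairwise interactions, set the runtime.
```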

2. Blind Systems: Bootstrapping a Map Isn’t Easy

A striking challenge that biological systems need to tackle to instantiate moments of experience with useful information arises when we consider the fact that, at conception, biological systems lack a pre-existing “ground truth map” of their own components, i.e. where they are, and where they are supposed to be. In other words, biological systems somehow bootstrap their own internal maps and coordination mechanisms from a seemingly mapless state. This feat is remarkable given the extreme entropy and chaos at the microscopic level of our universe.

Assembly Theory (AT) (2023) provides an interesting perspective on this challenge. AT conceptualizes objects not as simple point particles, but as entities defined by their formation histories. It attempts to elucidate how complex, self-organizing systems can emerge and maintain structure in an entropic universe. However, AT also highlights the intricate causal relationships and historical contingencies underlying such systems, suggesting that the task of self-mapping is far from trivial.

Consider the questions this raises: How does a cell know its location within a larger organism? How do cellular assemblies coordinate their components without a pre-existing map? How are messages created and routed without a predefined addressing system and without colliding with each other? In the context of artificial systems, how could a computer bootstrap its own understanding of its architecture and component locations without human eyes and hands to see and place its components in the right locations?

These questions point to the immense challenge faced by any system attempting to develop self-models or internal mappings from scratch. The solutions found in biological systems might rely on complex, evolved mechanisms that are not easily replicated in classical computational architectures. This suggests that creating truly self-understanding artificial systems capable of surviving in a hostile natural environment may require radically different approaches than those currently employed in standard computing paradigms.

How Does the QRI Model Overcome the Costs of Embodiment?

This core QRI article presents a perspective on consciousness and the binding problem that aligns well with our discussion of embodiment and computational costs. It proposes that moments of experience correspond to topological pockets in the fields of physics, particularly the electromagnetic field. This view offers several important insights:

  1. Frame-invariance: The topology of vector fields is Lorentz invariant, meaning it doesn’t change under relativistic transformations. This addresses the need for a frame-invariant basis for consciousness, which we identified as a challenge for traditional computational approaches.
  2. Causal significance: Topological features of fields have real, measurable causal effects, as exemplified by phenomena like magnetic reconnection in solar flares. This satisfies the requirement for consciousness to be causally efficacious and not epiphenomenal.
  3. Natural boundaries: Topological pockets provide objective, causally significant boundaries that “carve nature at its joints.” This contrasts with the difficulty of defining clear system boundaries in classical computational models.
  4. Temporal depth: The approach acknowledges that experiences have a temporal dimension, potentially lasting for tens of milliseconds. This aligns with our understanding of neural oscillations and provides a natural way to integrate time into the model of consciousness.
  5. Embodiment costs: The topological approach inherently captures many of the “costs of embodiment” we discussed earlier. The physical constraints, parallel processing, sensory integration, and real-time requirements of embodied systems are naturally represented in the complex topological structures of the brain’s electromagnetic field.

This perspective suggests that the computational costs of consciousness may be even more significant than traditional complexity theory would indicate. It implies that creating artificial consciousness would require not just simulating neural activity, but replicating the precise topological structures of the electromagnetic fields in the brain, a far more challenging task than anything attempted by conventional AI approaches.

Moreover, this view provides a potential explanation for why embodied systems like biological brains are so effective at producing consciousness. The physical structure of the brain, with its complex networks of neurons and electromagnetic fields, may be ideally suited to creating the topological pockets that correspond to conscious experiences. This suggests that embodiment is not just a constraint on consciousness, but a fundamental enabler of it.

Furthermore, there is a non-trivial connection between topological segmentation and resonant modes. The larger a topological pocket is, the lower the frequency of its resonant modes can be. This, effectively, is broadcast to every region within the pocket (much akin to how any spot on the surface of an acoustic guitar expresses the vibrations of the guitar as a whole). Thus, topological segmentation might quite conceivably be implicated in the generation of maps for the organism to self-organize around (cf. bioelectric morphogenesis according to Michael Levin, 2022). Steven Lehar (1999) and Michael E. Johnson (2018) in particular have developed really interesting conceptual frameworks for how harmonic resonance might be implicated in the computational character of our experience. The QRI insight that topology can mediate resonance further enriches the role of phenomenal boundaries in the computational character of consciousness.
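
As a toy illustration of the size-to-frequency relationship (an idealized 1D standing-wave cavity, not a model of the brain; the wave speed is an arbitrary placeholder), the fundamental mode of a cavity of length L with wave speed v sits at f = v / (2L), so larger pockets support lower frequencies:

```python
# Idealized 1D standing-wave model (illustrative only): the fundamental
# resonant frequency of a pocket of length L with wave speed v is v / (2L).

def fundamental_frequency(length_m, wave_speed_m_s):
    """Lowest resonant mode of an idealized 1D cavity."""
    return wave_speed_m_s / (2.0 * length_m)

V = 10.0  # assumed effective wave speed in m/s, a placeholder value
for L in (0.05, 0.10, 0.20):  # pocket sizes in meters
    print(L, fundamental_frequency(L, V))
# 0.05 -> 100.0 Hz, 0.10 -> 50.0 Hz, 0.20 -> 25.0 Hz:
# the larger the pocket, the lower the resonant modes it can support.
```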

Conclusion and Path Forward

In conclusion, the costs of embodiment present significant challenges to creating digital sentience that traditional computational complexity theory fails to fully capture. The QRI solution to the boundary problem, with its focus on topological pockets in electromagnetic fields, offers a promising framework for understanding consciousness that inherently addresses many of these embodiment costs. Moving forward, research should focus on: (1) developing more precise methods to measure and quantify the costs of embodiment in biological systems, (2) exploring how topological features of electromagnetic fields could be replicated or simulated in artificial systems, and (3) investigating the potential for hybrid systems that leverage the natural advantages of biological embodiment while incorporating artificial components (cf. Xenobots). By pursuing these avenues, we may unlock new pathways towards creating genuine artificial consciousness while deepening our understanding of natural consciousness.

It is worth noting that the QRI mission is to “understand consciousness for the benefit of all sentient beings”. Thus, figuring out the constraints that give rise to computationally non-trivial bound experiences is one key piece of the puzzle: we don’t want to accidentally create systems that are conscious and suffering and become civilizationally load-bearing (e.g. organoids animated by pain or fear).

In other words, understanding how to produce conscious systems is not enough. We need to figure out how to (a) ensure that they are animated by information-sensitive gradients of bliss, and (b) leverage the computational properties of consciousness to build more benevolent mind architectures. Namely, architectures that care about their own wellbeing and the wellbeing of all sentient beings. This is an enormous challenge; clarifying the costs of embodiment is one key step forward, but only part of an ecosystem of actions and projects needed for consciousness research to have a robustly positive impact on the wellbeing of all sentient beings.

Acknowledgments:

This post was written at the July 2024 Qualia Research Institute Strategy Summit in Sweden. It comes about as a response to incisive questions by Kristian Rönn on QRI’s model of digital sentience. Many thanks to Curran Janssen, Oliver Edholm, David Pearce, Alfredo Parra, Asher Soryl, Rasmus Soldberg, and Erik Karlson, for brainstorming, feedback, suggesting edits, and the facilitation of this retreat.

Appendix

Excerpt from Michael E. Johnson’s Principia Qualia (2015) on Frame Invariance (pg. 61)

What is frame invariance?

A theory is frame-invariant if it doesn’t depend on any specific physical frame of reference, or subjective interpretations to be true. Modern physics is frame-invariant in this way: the Earth’s mass objectively exerts gravitational attraction on us regardless of how we choose to interpret it. Something like economic theory, on the other hand, is not frame-invariant: we must interpret how to apply terms such as “GDP” or “international aid” to reality, and there’s always an element of subjective judgement in this interpretation, upon which observers can disagree.

Why is frame invariance important in theories of mind?

Because consciousness seems frame-invariant. Your being conscious doesn’t depend on my beliefs about consciousness, physical frame of reference, or interpretation of the situation – if you are conscious, you are conscious regardless of these things. If I do something that hurts you, it hurts you regardless of my belief of whether I’m causing pain. Likewise, an octopus either is highly conscious, or isn’t, regardless of my beliefs about it.[a] This implies that any ontology that has a chance of accurately describing consciousness must be frame-invariant, similar to how the formalisms of modern physics are frame-invariant.

In contrast, the way we map computations to physical systems seems inherently frame-dependent. To take a rather extreme example, if I shake a bag of popcorn, perhaps the motion of the popcorn’s molecules could – under a certain interpretation – be mapped to computations which parallel those of a whole-brain emulation that’s feeling pain. So am I computing anything by shaking that bag of popcorn? Who knows. Am I creating pain by shaking that bag of popcorn? Doubtful… but since there seems to be an unavoidable element of subjective judgment as to what constitutes information, and what constitutes computation, in actual physical systems, it doesn’t seem like computationalism can rule out this possibility. Given this, computationalism is frame-dependent in the sense that there doesn’t seem to be any objective fact of the matter derivable for what any given system is computing, even in principle.

[a] However, we should be a little bit careful with the notion of ‘objective existence’ here if we wish to broaden our statement to include quantum-scale phenomena where choice of observer matters.

Example of Cost of Embodiment by Kristian Rönn

Abstract Scenario (Computational Complexity):

Consider a digital computer system tasked with object recognition in a static environment. The algorithm processes an image to identify objects, classifies them, and outputs the results.

Key Points:

  • The computational complexity is defined by the algorithm’s time and space complexity (e.g., O(n^2) for time, O(n) for space).
  • Inputs (image data) and outputs (object labels) are well-defined and static.
  • The system operates in a controlled environment with no physical constraints like heat dissipation or energy consumption.

However, this abstract analysis is extremely optimistic, since it doesn’t take the cost of embodiment into account.

Embodied Scenario (Embodied Complexity):

Now, consider a robotic system equipped with a camera, tasked with real-time object recognition and interaction in a dynamic environment.

Key Points and Costs:

  1. Real-Time Processing:
    • The robot must process images in real-time, requiring rapid data acquisition and processing, which creates practical constraints.
    • Delays in computation can lead to physical consequences, such as collisions or missed interactions.
  2. Energy Consumption:
    • The robot’s computational tasks consume power, affecting the overall energy budget.
    • Energy management becomes crucial, balancing between processing power and battery life.
  3. Heat Dissipation:
    • High computational loads generate heat, necessitating cooling mechanisms, requiring additional energy. Moreover, this creates additional costs/waste in the embodied system.
    • Overheating can degrade performance and damage components, requiring thermal management strategies.
  4. Physical Constraints and Mobility:
    • The robot must move and navigate through physical space, encountering obstacles and varying terrains.
    • Computational tasks must be synchronized with motion planning and control systems, adding complexity.
  5. Sensory Integration:
    • The robot integrates data from multiple sensors (camera, lidar, ultrasonic sensors) to understand its environment.
    • Processing multi-modal sensory data in real-time increases computational load and complexity.
  6. Error Correction and Redundancy:
    • Physical systems are prone to noise and errors. The robot needs mechanisms for error detection and correction.
    • Redundant systems and fault-tolerance measures add to the computational overhead.
  7. Adaptation and Learning:
    • The robot must adapt to new environments and learn from interactions, requiring active inference (i.e. we can’t train a new model every time the ontology of an agent needs updating).
    • Continuous learning in an embodied system is resource-intensive compared to offline training in a digital system.
  8. Physical Wear and Maintenance:
    • Physical components wear out over time, requiring maintenance and replacement.
    • Downtime for repairs affects the overall system performance and availability.

An Energy Complexity Model for Algorithms

Roy, S., Rudra, A., & Verma, A. (2013). https://doi.org/10.1145/2422436.2422470

Abstract

Energy consumption has emerged as a first class computing resource for both server systems and personal computing devices. The growing importance of energy has led to rethink in hardware design, hypervisors, operating systems and compilers. Algorithm design is still relatively untouched by the importance of energy and algorithmic complexity models do not capture the energy consumed by an algorithm. In this paper, we propose a new complexity model to account for the energy used by an algorithm. Based on an abstract memory model (which was inspired by the popular DDR3 memory model and is similar to the parallel disk I/O model of Vitter and Shriver), we present a simple energy model that is a (weighted) sum of the time complexity of the algorithm and the number of ‘parallel’ I/O accesses made by the algorithm. We derive this simple model from a more complicated model that better models the ground truth and present some experimental justification for our model. We believe that the simplicity (and applicability) of this energy model is the main contribution of the paper. We present some sufficient conditions on algorithm behavior that allows us to bound the energy complexity of the algorithm in terms of its time complexity (in the RAM model) and its I/O complexity (in the I/O model). As corollaries, we obtain energy optimal algorithms for sorting (and its special cases like permutation), matrix transpose and (sparse) matrix vector multiplication.

Thermodynamic Computing

Conte, T. et al. (2019). https://arxiv.org/abs/1911.01968

Abstract

The hardware and software foundations laid in the first half of the 20th Century enabled the computing technologies that have transformed the world, but these foundations are now under siege. The current computing paradigm, which is the foundation of much of the current standards of living that we now enjoy, faces fundamental limitations that are evident from several perspectives. In terms of hardware, devices have become so small that we are struggling to eliminate the effects of thermodynamic fluctuations, which are unavoidable at the nanometer scale. In terms of software, our ability to imagine and program effective computational abstractions and implementations are clearly challenged in complex domains. In terms of systems, currently five percent of the power generated in the US is used to run computing systems – this astonishing figure is neither ecologically sustainable nor economically scalable. Economically, the cost of building next-generation semiconductor fabrication plants has soared past $10 billion. All of these difficulties – device scaling, software complexity, adaptability, energy consumption, and fabrication economics – indicate that the current computing paradigm has matured and that continued improvements along this path will be limited. If technological progress is to continue and corresponding social and economic benefits are to continue to accrue, computing must become much more capable, energy efficient, and affordable. We propose that progress in computing can continue under a united, physically grounded, computational paradigm centered on thermodynamics. Herein we propose a research agenda to extend these thermodynamic foundations into complex, non-equilibrium, self-organizing systems and apply them holistically to future computing systems that will harness nature’s innate computational capacity. We call this type of computing “Thermodynamic Computing” or TC.

Energy Complexity of Computation

Say, A.C.C. (2023). https://doi.org/10.1007/978-3-031-38100-3_1

Abstract

Computational complexity theory is the study of the fundamental resource requirements associated with the solutions of different problems. Time, space (memory) and randomness (number of coin tosses) are some of the resource types that have been examined both independently, and in terms of tradeoffs between each other, in this context. Since it is well known that each bit of information “forgotten” by a device is linked to an unavoidable increase in entropy and an associated energy cost, one can also view energy as a computational resource. Constant-memory machines that are only allowed to access their input strings in a single left-to-right pass provide a good framework for the study of energy complexity. There exists a natural hierarchy of regular languages based on energy complexity, with the class of reversible languages forming the lowest level. When the machines are allowed to make errors with small nonzero probability, some problems can be solved with lower energy cost. Tradeoffs between energy and other complexity measures can be studied in the framework of Turing machines or two-way finite automata, which can be rewritten to work reversibly if one increases their space and time usage.

Relevant physical limitations

  • Landauer’s limit: The lower theoretical limit of energy consumption of computation (computed in the sketch after this list).
  • Bremermann’s limit: A limit on the maximum rate of computation that can be achieved in a self-contained system in the material universe.
  • Bekenstein bound: An upper limit on the thermodynamic entropy S, or Shannon entropy H, that can be contained within a given finite region of space which has a finite amount of energy.
  • Margolus–Levitin theorem: A bound on the maximum computational speed per unit of energy.
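
For a sense of scale, Landauer’s limit can be computed directly: erasing one bit at temperature T dissipates at least k_B * T * ln(2) of energy. A minimal sketch:

```python
import math

# Landauer's limit: erasing one bit at temperature T dissipates at least
# k_B * T * ln(2) of energy.
K_B = 1.380649e-23  # Boltzmann constant in J/K (exact by SI definition)

def landauer_limit_joules(temperature_k):
    return K_B * temperature_k * math.log(2)

e_bit = landauer_limit_joules(300.0)  # room temperature
print(e_bit)        # ~2.87e-21 J per erased bit
print(1.0 / e_bit)  # ~3.5e20 erasures per joule, a bound no real
                    # computer comes close to reaching
```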

References

Bateson, G. (1972). Steps to an ecology of mind. Chandler Publishing Company.

Chalmers, D. J. (1995). Absent qualia, fading qualia, dancing qualia. In T. Metzinger (Ed.), Conscious Experience. Imprint Academic. https://www.consc.net/papers/qualia.html

Gómez-Emilsson, A. (2023). The view from my topological pocket. Qualia Computing. https://qualiacomputing.com/2023/10/26/the-view-from-my-topological-pocket-an-introduction-to-field-topology-for-solving-the-boundary-problem/

Gómez-Emilsson, A., & Percy, C. (2023). Don’t forget the boundary problem! How EM field topology can address the overlooked cousin to the binding problem for consciousness. Frontiers in Human Neuroscience, 17. https://www.frontiersin.org/articles/10.3389/fnhum.2023.1233119

Johnson, M. E. (2015). Principia qualia. Open Theory. https://opentheory.net/PrincipiaQualia.pdf

Johnson, M. E. (2018). A future of neuroscience. Open Theory. https://opentheory.net/2018/08/a-future-for-neuroscience/

Kingdom, F.A.A., & Prins, N. (2016). Psychophysics: A practical introduction. Elsevier.

Lehar, S. (1999). Harmonic resonance theory: An alternative to the “neuron doctrine” paradigm of neurocomputation to address gestalt properties of perception. http://slehar.com/wwwRel/webstuff/hr1/hr1.html

Levin, M. (2022). Bioelectric morphogenesis, cellular motivations, and false binaries with Michael Levin. DemystifySci Podcast. https://demystifysci.com/blog/2022/10/25/kl2d17sphsiw2trldsvkjvr91odjxv

Pearce, D. (2014). Social media unsorted postings. HEDWEB. https://www.hedweb.com/social-media/pre2014.html

Sharma, A. (2023). Assembly theory explains and quantifies selection and evolution. Nature, 622, 321–328. https://www.nature.com/articles/s41586-023-06600-9

Consciousness Isn’t Substrate-Neutral: From Dancing Qualia & Epiphenomena to Topology & Accelerators

In this video I explain why substrate neutrality is so appealing to the modern educated mind. I zoom in on the Dancing Qualia argument presented by Chalmers, which seems to show that if consciousness/qualia requires a specific substrate, then you can build a system where such qualia is epiphenomenal.

In this video I deconstruct this whole line of reasoning from several complementary points of view. In particular, I explain:

1) How substrate-specific hardware accelerators would generate something akin to a mysterious “consciousness discourse” in organisms that have hybrid computational substrates, with the meta-problem of consciousness (partly) explained via the interaction of two very different computational paradigms that struggle to make sense of each other.

2) How the Slicing Problem gives rise to epiphenomenalism for functionalist / computationalist theories of consciousness. This is as big of a problem, from the complete other side, as Dancing Qualia, yet somehow it doesn’t seem to receive much attention. To avoid epiphenomenalism here you require physical substrate properties to correspond to (at least in magnitude) degrees/amounts of qualia.

3) The idea that you can preserve “organizational invariance” by importing the “causal graph” of the system is question-begging. In particular, it assumes that reality breaks down into bit-sized, point-like fundamental interactions between zero-dimensional entities. But this is an interpretation of the physical facts, one called into question precisely by field theories of physics (e.g. electromagnetism) and, at a much deeper level, by things like String Theory, where the substrate of reality is topologically non-trivial.

4) I show that beneath a computationalist frame for consciousness there is an implicit conception of frames of reference that are real from specific “points of view”. But as I explain, it is not possible to bootstrap integrated states out of frames of reference or points of view. Ultimately, any non-trivial integration of information that is happening in these ontologies is a projection of your own mind (you’re borrowing the unity of your consciousness to put together pieces of information that define a frame of reference or point of view!).

And

5) How the mind uses phenomenal binding for information processing can be explained through the lens of self-organizing principles set up in such a way that “following the valence gradient will take you closer to a state that satisfies the constraints of the problem”. Meaning that the very style of problem solving our experience utilizes has an entirely different logic than classical digital algorithms. No wonder it’s so difficult to square our experience with a computationalist frame of reference!

To end, I encourage the listener to enrich his or her conception of computation to include irreducible integrated states as valid inputs, outputs, and intermediate states. This way we put in the same “computational class” things like quantum computers, non-linear optics, soap bubbles, and yes, DMT entity computing systems 🙂 They all use non-trivially integrated bound states as part of their information processing pipeline.

In aggregate, these points explain why the substrate matters for computation in a way that satisfactorily addresses one of the biggest concerns with this view. Namely, Dancing Qualia leading to epiphenomenalism – which gets turned on its head with the Slicing Problem (it turns out computational theories were the epiphenomenalist views all along), self-organizing principles for computation, hybrid computing systems, hardware accelerators, field topology, and the insight that “reality as a causal graph” is question-begging. Reality is, instead, a network of bound states that can interact in topologically non-trivial ways.


Relevant links:

The View From My Topological Pocket: An Introduction to Field Topology for Solving the Boundary Problem

[Epistemic Status: informal and conversational, this piece provides an off-the-cuff discussion around the topological solution to the boundary problem. Please note that this isn’t intended to serve as a bulletproof argument; rather, it’s a guide through an intuitive explanation. While there might be errors, possibly even in reasoning, I believe they won’t fundamentally alter the overarching conceptual solution.]

This post is an informal and intuitive explanation for why we are looking into topology as a tentative solution to the phenomenal binding (or boundary) problem. In particular, this solution identifies moments of experience with topological pockets of the fields of physics. We recently published a paper where we dive deeper into this explanation space, and concretely hypothesize that the key macroscopic boundary between subjects of experience is the result of topological segmentation in the electromagnetic field (see explainer video / author’s presentation at the Active Inference Institute).

The short explanation for why this is promising is that topological boundaries are objective and frame-invariant features of “basement reality” that have causal effects and thus can be recruited by natural selection for information-processing tasks. If the fields of physics are fields of qualia, topological boundaries of the fields corresponding to phenomenal boundaries between subjects would be an elegant way for a theory of consciousness to “carve nature at its joints”. This solution is very significant if true, because it entails, among other things, that classical digital computers are incapable of creating causally significant experiences: the experiences that emerge out of them are by default something akin to mind dust, and at best, if significant binding happens, they are epiphenomenal from the “point of view” of the computation being realized.

The route this post takes to develop an intuition about this topic is to deconstruct the idea of a “point of view” as a “natural kind” and instead advocate for topological pockets as the place where information can non-trivially aggregate. This idea, once seen, is hard to unsee; it reframes how we think about what systems are, and even the nature of information itself.


One of the beautiful things about life is that you sometimes have the opportunity to experience a reality plot twist. We might believe one narrative has always been unfolding, only to realize that the true story was different all along. As they say, the rug can be pulled from under your feet.

The QRI memeplex is full of these reality plot twists. You thought that the “plot” of the universe was a battle between good and evil? Well, it turns out it is the struggle between consciousness and replicators instead. Or that what you want is particular states of the environment? Well, it turns out you’ve been pursuing particular configurations of your world simulation all along. You thought that pleasure and pain follow a linear scale? Well, it turns out the scales are closer to logarithmic in nature, with the ends of the distribution being orders of magnitude more intense than the lower ends. I think that along these lines, grasping how “points of view” and “moments of experience” are connected requires a significant reframe of how you conceptualize reality. Let’s dig in!

One of the motivations for this post is that I recently had a wonderful chat with Nir Lahav, who last year published an article that steelmans the view that consciousness is relativistic (see one of his presentations). I will likely discuss his work in more detail in the future. Importantly, talking to him reminded me that ever since the foundation of QRI, we have taken for granted the view that consciousness is frame-invariant, and worked from there. It felt self-evident to us that if something depends on the frame of reference from which you see it, it doesn’t have inherent existence. Our experiences (in particular, each discrete moment of experience) have inherent existence, and thus cannot be frame-dependent. Every experience is self-intimating, self-disclosing, and absolute. So how could it depend on a frame of reference? Alas, I know this is a rather loaded way of putting it and risks confusing a lot of people (for one, Buddhists might retort that experience is inherently “interdependent” and has no inherent existence, to which I would reply “we are talking about different things here”). So I am motivated to present a more fleshed out, yet intuitive, explanation for why we should expect consciousness to be frame-invariant and how, in our view, our solution to the boundary problem is in fact up to this challenge.

The main idea here is to show how frames of reference cannot bootstrap phenomenal binding. Indeed, “a point of view” that provides a frame of reference is more of a convenient abstraction that relies on us to bind, interpret, and coalesce pieces of information, than something with a solid ontological status that exists out there in the world. Rather, I will try to show how we are borrowing from our very own capacity for having unified information in order to put together the data that creates the construct of a “point of view”; importantly, this unity is not bootstrapped from other “points of view”, but draws from the texture of the fabric of reality itself. Namely, the field topology.


A scientific theory of consciousness must be able to explain the existence of consciousness, the nature and cause of the diverse array of qualia values and varieties (the palette problem), how consciousness is causally efficacious (avoiding epiphenomenalism), and how the information content of each moment of experience is presented “all at once” (namely, the binding problem). I’ve talked extensively about these constraints in writings, videos, and interviews, but what I want to emphasize here is that these problems need to be addressed head on for a theory of consciousness to work at all. Keep these constraints in mind as we deconstruct the apparent solidity of frames of reference and the difficulty of bootstrapping causal and computational effects connected to phenomenal binding out of a relativistic frame.

At a very high level, a fuzzy (but perhaps sufficient) intuition for what’s problematic when a theory of consciousness doesn’t seek frame-invariance is that you are trying to create something concrete with real and non-trivial causal effects and information content, out of fundamentally “fuzzy” parts.

In brief, ask yourself: can something fuzzy “observe” something fuzzy? How can fuzziness be used to bootstrap something non-fuzzy?

In a world of atoms and forces, “systems” or “things” or “objects” or “algorithms” or “experiences” or “computations” don’t exist intrinsically because there are no objective, frame-invariant, and causally significant ways to draw boundaries around them!

I hope to convince you that any sense of unity or coherence that you get from this picture of reality (a relativistic system with atoms and forces) is in fact a projection from your mind, that inhabits your mind, and is not out there in the world. You are looking at the system, and you are making connections between the parts, and indeed you are creating a hierarchy of interlocking gestalts to represent this entire conception of reality. But that is all in your mind! It’s a sort of map and territory confusion to believe that two fuzzy “systems” interacting with each other can somehow bootstrap a non-fuzzy ontological object (aka. a requirement for a moment of experience). 

I reckon that these vague explanations are in fact sufficient for some people to understand where I’m going. But some of you are probably clueless about what the problem is, and for good reason. This is never discussed in detail, and this is largely, I think, because people who think a lot about the problem don’t usually end up with a convincing solution. And in some cases, the result is that thinkers bite the bullet that there are only fuzzy patterns in reality.

How Many Fuzzy Computations Are There in a System?

Indeed, thinking of the universe as being made of particles and forces implies that computational processes are fuzzy (leaky, porous, open to interpretation, etc.). Now imagine thinking that *you* are one such fuzzy computation. Having this as an unexamined background assumption gives rise to countless intractable paradoxes. The notion of a point of view, or a frame of reference, does not have real meaning here, as this way of aggregating information doesn’t ultimately allow you to identify objective boundaries around packets of information (at least not boundaries that are more than merely conventional in nature).

From this point of view (about points of view!), you realize that indeed there is no principled and objective way to find real individuals. You end up in the fuzzy world of Brian Tomasik’s fuzzy individuals, as helpfully illustrated by this diagram:

Source: Fuzzy, Nested Minds Problematize Utilitarian Aggregation by Brian Tomasik

Brian Tomasik indeed identifies the problem of finding real boundaries between individuals as crucial for utilitarian calculations. And then, incredibly, he also admits that his ontological framework gives him no principled way of doing so (cf. Michael E. Johnson’s Against Functionalism for a detailed response). Indeed, according to Brian (from the same essay):

“Eric Schwitzgebel argues that “If Materialism Is True, the United States Is Probably Conscious“. But if the USA as a whole is conscious, how about each state? Each city? Each street? Each household? Each family? When a new government department is formed, does this create a new conscious entity? Do corporate mergers reduce the number of conscious entities? These seem like silly questions—and indeed, they are! But they arise when we try to individuate the world into separate, discrete minds. Ultimately, “we are all connected”, as they say. Individuation boundaries are artificial and don’t track anything ontologically or phenomenally fundamental (except maybe at the level of fundamental physical particles and structures). The distinction between an agent and its environment is just an edge that we draw around a clump of physics when it’s convenient to do so for certain purposes.

My own view is that every subsystem of the universe can be seen as conscious to some degree and in some way (functionalist panpsychism). In this case, the question of which systems count as individuals for aggregation becomes maximally problematic, since it seems we might need to count all the subsystems in the universe.”

Are you confused now? I hope so. Otherwise I’d worry about you.

Banana For Scale

A frame of reference is like a “banana for scale” but for both time and space. If you assume that the banana isn’t morphing, you can use how long it takes for waves emitted from different points in the banana to bounce back and return in order to infer the distance and location of physical objects around it. Your technologically equipped banana can play the role of a frame of reference in all but the most extreme of conditions (it probably won’t work as you approach a black hole, for very non-trivial reasons involving severe tidal forces, but it’ll work fine otherwise).

Now the question that I want to ask is: how does the banana “know itself”? Seriously, if you are using points in the banana as your frame of reference, you are, in fact, the one who is capable of interpreting the data coming from the banana to paint a picture of your environment. But the banana isn’t doing that. It is you! The banana is merely an instrument that takes measurements. Its unity is assumed rather than demonstrated. 


In fact, for the upper half of the banana to “comprehend” the shape of the other half (as well as its own), it must also rely on a presumed fixed frame of reference. However, it’s important to note that such information truly becomes meaningful only when interpreted by a human mind. In the realm of an atom-and-force-based ontology, the banana doesn’t precisely exist as a tangible entity. Your perception of it as a solid unit, providing direction and scale, is a practical assumption rather than an ontological certainty.

In fact, the moment we try to get a “frame of reference to know itself” you end up in an infinite regress, where smaller and smaller regions of the object are used as frames of reference to measure the rest. And yet, at no point does the information of these frames of reference “come together all at once”, except… of course… in your mind.

Are there ways to bootstrap a *something* that aggregates and simultaneously expresses the information gathered across the banana (used as a frame of reference)? If you build a camera to take a snapshot of, say, the information displayed at each coordinate of the banana, the picture you take will have spatial extension and suffer from the same problem. If you think that the point at the aperture can itself capture all of the information at once, you will encounter two problems. If you are thinking of an idealized point-sized aperture, then we run into the problem that points don’t have parts, and therefore can’t contain multiple pieces of information at once. And if you are talking about a real, physical type of aperture, you will find that it cannot be smaller than the diffraction limit. So now you have the problem of how to integrate all of the information *across the whole area of the aperture* when it cannot shrink further without losing critical information. In either case, you still don’t have anything, anywhere, that is capable of simultaneously expressing all of the information of the frame of reference you chose. Namely, the coordinates you measure using a banana.

Let’s dig deeper. We are talking of a banana as a frame of reference. But what if we try to internalize the frame of reference? A lot of people like to think of themselves as the frame of reference that matters. But I ask you: what are your boundaries, and how do the parts within those boundaries agree on what is happening?

Let’s say your brain is the frame of reference. Intuitively, one might feel like “this object is real to itself”. But here is where the magic comes. Make the effort to carefully trace how signals or measurements propagate in an object such as the brain. Is it fundamentally different than what happens with a banana? There might be more shortcuts (e.g. long axons) and the wiring could have complex geometry, but neither of these properties can ultimately express information “all at once”. The principle of uniformity says that every part of the universe follows the same universal physical laws. The brain is not an exception. In a way, the brain is itself a possible *expression* of the laws of physics. And in this way, it is no different than a banana.

Sorry, your brain is not going to be a better “ground” for your frame of reference than a banana. And that is because the same infinite recursion that happened with the banana when we tried to use it to ground our frame of reference into something concrete happens with your brain. The same problem also arises when we try to “take a snapshot of the state of the brain”: the information doesn’t aggregate in a natural way even in a high-resolution picture of the brain. It still has spatial extension and lacks objective boundaries of any causal significance.

Every single point in your brain has a different view. The universe won’t say “There is a brain here! A self-intimating self-defining object! It is a natural boundary to use to ground a frame of reference!” There is nobody to do that! Are you starting to feel the groundlessness? The bizarre feeling that, hey, there is no rational way to actually set a frame of reference without it falling apart into a gazillion different pieces, all of which have the exact same problem? I’ve been there. For years. But there is a way out. Sort of. Keep reading.

The question that should be bubbling up to the surface right now is: who, or what, is in charge of aggregating points of view? And the answer is: no such thing exists, and it is impossible for it to exist, if you start out in an ontology whose core building blocks are relativistic particles and forces. There is no principled way to aggregate information across space and time that would result in the richness of simultaneous presentation of information that a typical human experience displays. If there is integration of information, and a sort of “all at once” presentation, the only kind of (principled) entity that this ontology would accept is the entire spacetime continuum as a gigantic object! But that’s not what we are. We are definite experiences with specific qualia and binding structures. We are not, as far as I can tell, the entire spacetime continuum all at once. (Or are we?)

If instead we focus on the fine structure of the field, we can look for mathematical features in it that would perhaps draw boundaries that are frame-invariant. Here is where a key insight becomes significant: the topology of a vector field is Lorentz invariant! Meaning, a Lorentz transformation will merely squeeze and shear, but never change topology on its own. Ok, I admit I am not 100% sure that this holds for all of the topological features of the electromagnetic field (Creon Levit recently raised some interesting technical points that might make some EM topological features frame-dependent; I’ve yet to fully understand his argument but look forward to engaging with it). But what we are really pointing at is the explanation space. A moment ago we were desperate to find a way to ground, say, the reality of a banana in order to use it as a frame of reference. We saw that the banana conceptualized as a collection of atoms and forces does not have this capacity. But we didn’t inquire into other possible physical (though perhaps not *atomistic*) features of the banana. Perhaps, and this is sheer speculation, the potassium ions in the banana peel form a tight electromagnetic mesh that creates a protective Faraday cage for this delicious fruit. In that case, well, the boundaries of that protective sheet would, interestingly, be frame-invariant. A ground!

The 4th Dimension

There is a bit of a sleight of hand here, because I am not taking into account temporal depth, and so it is not entirely clear how large the banana, as a topological structure defined by the potassium ions’ protective sheet, really is (again, this is totally made up! for illustration purposes only). The trick here is to realize that, at least insofar as experiences go, we also have a temporal boundary. Relativistically, there shouldn’t be a hard distinction between temporal and spatial boundaries of a topological pocket of the field. In practice, of course, one will typically overwhelm the other, unless you approach the brain you are studying at close to the speed of light (not ideal laboratory conditions, I should add). In our paper, and for many years at QRI (iirc an insight by Michael Johnson in 2016 or so), we’ve talked about experiences having “temporal depth”. David Pearce posits that each fleeting macroscopic state of quantum coherence spanning the entire brain (the physical correlate of consciousness in his model) can last as little as a couple of femtoseconds. This does not seem to worry him: there is no reason why the contents of our experience would give us any explicit hint about our real temporal depth. I intuit that each moment of experience lasts much, much longer. I highly doubt that it can last longer than a hundred milliseconds, but I’m willing to entertain “pocket durations” of, say, a few dozen milliseconds. Just long enough for 40 Hz gamma oscillations to bring disparate cortical micropockets into coherence, and importantly, topological union, and to have this new emergent object resonate (where waves bounce back and forth) and thus do wave computing worthwhile enough to pay the energetic cost of carefully modulating this binding operation. Now, this is the sort of “physical correlate of consciousness” I tend to entertain the most. Experiences are fleeting (but not vanishingly so) pockets of the field that come together for computational and causal purposes worthwhile enough to pay the price of making them.

An important clarification here is that now that we have this way of seeing frames of reference, we can reconceptualize our previous confusion. We realize that simply labeling parts of reality with coordinates does not magically bring together the information content that can be obtained by integrating the signals read at each of those coordinates. But we suddenly have something that might be way better and more conceptually satisfying. Namely, literal topological objects with boundaries, embedded in the spacetime continuum, that contribute to the causal unfolding of reality and are absolute in their existence. These are the objective and real frames of reference we’ve been looking for!

What’s So Special About Field Topology?

Two key points:

  1. Topology is frame-invariant
  2. Topology is causally significant

As already mentioned, the Lorentz Transform can squish and distort, but it doesn’t change topology. The topology of the field is absolute, not relativistic.

The Lorentz Transform can squish and distort, but it doesn’t change topology (image source).
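
To see this numerically, here is a toy 1+1-dimensional check (an illustration, not a proof): a Lorentz boost is a continuous, invertible linear map, so it sends closed curves to closed curves and cannot change, for instance, how many times a loop winds around the origin:

```python
import math

# Toy check in 1+1 dimensions: a Lorentz boost is a continuous, invertible
# linear map, so it preserves topological features such as the winding
# number of a closed curve around the origin.

def boost(t, x, v, c=1.0):
    g = 1.0 / math.sqrt(1.0 - (v / c) ** 2)  # Lorentz factor
    return g * (t - v * x / c ** 2), g * (x - v * t)

def winding_number(points):
    """Signed number of turns the curve makes around the origin."""
    total = 0.0
    for i in range(len(points)):
        t1, x1 = points[i]
        t2, x2 = points[(i + 1) % len(points)]
        d = math.atan2(x2, t2) - math.atan2(x1, t1)
        if d > math.pi:
            d -= 2 * math.pi
        elif d < -math.pi:
            d += 2 * math.pi
        total += d
    return round(total / (2 * math.pi))

# A unit circle in the (t, x) plane, winding once around the origin:
loop = [(math.cos(2 * math.pi * k / 200), math.sin(2 * math.pi * k / 200))
        for k in range(200)]
print(winding_number(loop))                                  # 1
print(winding_number([boost(t, x, 0.9) for (t, x) in loop])) # still 1
```

The boosted loop is squashed and sheared into an ellipse-like curve, but it still winds around the origin exactly once: the metric properties change, the topology doesn’t.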

And field topology is also causally significant. There are _many_ examples of this, but let me just mention a very startling one: magnetic reconnection. This happens when the magnetic field lines change how they are connected. I mention this example because when one hears about “topological changes to the fields of physics” one may get the impression that such a thing happens only in extremely carefully controlled situations and at minuscule scales. Similar to the concerns about why quantum coherence is unlikely to play a significant role in the brain, one can get the impression that “the scales are simply off”. Significant quantum coherence typically happens over extremely small distances, for very short periods of time, and involving very few particles at a time, and thus, the argument goes, quantum coherence must be largely inconsequential at scales that could plausibly matter for the brain. But the case of field topology isn’t so delicate. Magnetic reconnection, in particular, takes place at extremely large scales, involving enormous amounts of matter and energy, with extremely consequential effects.

You know about solar flares? Solar flares are the strange phenomenon in the sun in which plasma is heated up to millions of kelvin and charged particles are accelerated to near the speed of light, leading to the emission of gigantic amounts of electromagnetic radiation, which in turn can ionize the lower levels of the Earth’s ionosphere, and thus disrupt radio communication (cf. radio blackouts). These extraordinary events are the result of the release of magnetic energy stored in the Sun’s corona via a topological change to the magnetic field! Namely, magnetic reconnection.

So here we have a real and tangible effect happening at a planetary (and stellar!) scale over the course of minutes to hours, involving enormous amounts of matter and energy, coming about from a non-trivial change to the topology of the fields of physics.

(example of magnetic reconnection; source)

Relatedly, coronal mass ejections (CMEs) also depend on changes to the topology of the EM field. My layman’s understanding of CMEs is that they are caused by the build-up of magnetic stress in the sun’s atmosphere, which can be triggered by a variety of factors, including uneven spinning and plasma convection currents. When this stress becomes too great, it can cause the magnetic field to twist and trap plasma in solar filaments, which can then be released into interplanetary space through magnetic reconnection. These events are truly enormous in scope (trillions of kilograms of mass ejected) and speed (traveling at thousands of kilometers per second).

CME captured by NASA (source)

It’s worth noting that this process is quite complex and not fully understood, and new research findings continue to illuminate its details. But the fact that topological effects are involved is well established. Here’s a video which I thought was… stellar. Personally, I think a program where people get familiar with the electromagnetic changes that happen in the sun, by seeing them in simulations and with the sun visualized in many ways, might help us both better predict solar storms and help people empathize with the sun (or the topological pockets that it harbors!).

“The model showed differential rotation causes the sun’s magnetic fields to stretch and spread at different rates. The researchers demonstrated this constant process generates enough energy to form stealth coronal mass ejections over the course of roughly two weeks. The sun’s rotation increasingly stresses magnetic field lines over time, eventually warping them into a strained coil of energy. When enough tension builds, the coil expands and pinches off into a massive bubble of twisted magnetic fields — and without warning — the stealth coronal mass ejection quietly leaves the sun.” (source)

Solar flares and CMEs are just two rather spectacular macroscopic phenomena where field topology has non-trivial causal effects. But in fact there is a whole zoo of distinct non-trivial topological effects with causal implications, such as: how the topology of the Möbius strip can constrain optical resonant modes, how twisted topological defects in nematic liquid crystals make some images impossible, how the topology of eddy currents can be recruited for shock absorption (aka “magnetic braking”), the Meissner–Ochsenfeld effect and flux pinning enabling magnetic levitation, Skyrmion bundles having potential applications for storing information in spintronic devices, and so on.


In brief, topological structures in the fields of physics can pave the way for us to identify the natural units that correspond to “moments of experience”. They are frame-invariant and causally significant, and as such they “carve nature at its joints” while being useful from the point of view of natural selection.

Can a Topological Pocket “Know Itself”?

Now the most interesting question arises. How does a topological pocket “know itself”? How can it act as a frame of reference for itself? How can it represent information about its environment if it does not have direct access to it? Well, this is in fact a very interesting area of research. Namely, how do you get the inside of a system with a clear and definite boundary to model its environment despite having only the information accessible at its boundary and the resources contained within it? This is a problem that evolution has dealt with for over a billion years (last time I checked). And, fascinatingly, it is also the subject of study of Active Inference and the Free Energy Principle, whose math, I believe, can be imported to the domain of *topological* boundaries in fields (cf. Markov Boundary).

Here is where qualia computing, attention and awareness, non-linear waves, self-organizing principles, and even optics become extremely relevant. Namely, we are talking about how the *interior shape* of a field could be used in the context of life. Of course the cell walls of even primitive cells are functionally (albeit perhaps not ontologically) a kind of objective and causally significant boundary where this applies. It is enormously adaptive for the cell to use its interior, somehow, to represent its environment (or at least relevant features thereof) in order to navigate, find food, avoid danger, and reproduce.

The situation becomes significantly more intricate when considering highly complex and “evolved” animals such as humans, which encompass numerous additional layers. A single moment of experience cannot be directly equated to a cell, as it does not function as a persistent topological boundary tasked with overseeing the replication of the entire organism. Instead, a moment of experience assumes a considerably more specific role. It acts as an exceptionally specialized topological niche within a vast network of transient, interconnected topological niches—often intricately nested and interwoven. Together, they form an immense structure equipped with the capability to replicate itself. Consequently, the Darwinian evolutionary dynamics of experiences operate on multiple levels. At the most fundamental level, experiences must be selected for their ability to competitively thrive in their immediate micro-environment. Simultaneously, at the broadest level, they must contribute valuable information processing functions that ultimately enhance the inclusive fitness of the entire organism. All the while, our experiences must seamlessly align and “fit well” across all the intermediary levels.

Visual metaphor for how myriad topological pockets in the brain could briefly fuse and become a single one, and then dissolve back into a multitude.

The way this is accomplished is by, in a way, “convincing the experience that it is the organism”. I know this sounds crazy. But ask yourself. Are you a person or an experience? Or neither? Think deeply about Empty Individualism and come back to this question. I reckon that you will find that when you identify with a moment of experience, it turns out that you are an experience *shaped* in the form of the necessary survival needs and reproductive opportunities that a very long-lived organism requires. The organism is fleetingly creating *you* for computational purposes. It’s weird, isn’t it?

The situation is complicated by the fact that the computational properties of topological pockets of qualia seem to involve topological operations, such as fusion, fission, and the use of all kinds of internal boundaries. Moreover, the content of a particular experience leaves an imprint in the organism which can be picked up by the next experience. So what happens here is that when you pay really close attention and whisper to your mind, “who am I?”, the direct experiential answer will in fact be a slightly distorted version of the truth. And that is because you (a) are always changing and (b) can only use the shape of the previous experience(s) to fill the intentional content of your current experience. Hence, you cannot, at least not under normal circumstances, *really* turn awareness onto itself and *be* a topological pocket that “knows itself”. For one, there is a finite speed of information propagation across the many topological pockets that ultimately feed into the central one. So, at any given point in time, there are regions of your experience of which you are *aware* but which you are not attending to.

This brings us to the special case. Can an experience be shaped in such a way that it attends to itself fully, rather than attending to parts of itself which contain information about the state of predecessor topological pockets? I don’t know, but I have a strong hunch that the answer is yes and that this is what a meditative cessation does. Namely, it is a particular configuration of the field where attention is perfectly, homogeneously distributed throughout, in such a way that absolutely nothing breaks the symmetry and the experience “knows itself fully” but lacks any room left to pass it on to the successor pockets. It is a bittersweet situation, really. But I also think that cessations, and indeed moments of very homogeneously distributed attention, are healing for the organism, and even, shall we say, for the soul. And that is because they are moments of complete relief from the discomfort of symmetry breaking of any sort. They teach you about how our world simulation is put together. And intellectually, they are especially fascinating because they may be the one special case in which the referent of an experience is exactly, directly, itself.

To be continued…


Acknowledgements

I am deeply grateful and extend my thanks to Chris Percy for his remarkable contributions and steadfast dedication to this field. His exceptional work has been instrumental in advancing QRI’s ideas within the academic realm. I also want to express my sincere appreciation to Michael Johnson and David Pearce for our enriching philosophical journey together. Our countless discussions on the causal properties of phenomenal binding and the temporal depth of experience have been truly illuminating. A special shout-out to Cube Flipper, Atai Barkai, Dan Girshovic, Nir Lahav, Creon Levit, and Bijan Fakhri for their recent insightful discussions and collaborative efforts in this area. Hunter, Maggie, Anders (RIP), and Marcin, for your exceptional help. Huge gratitude to our donors. And, of course, a big thank you to the vibrant “qualia community” for your unwavering support, kindness, and encouragement in pursuing this and other crucial research endeavors. Your love and care have been a constant source of motivation. Thank you so much!!!

Digital Computers Will Remain Unconscious Until They Recruit Physical Fields for Holistic Computing Using Well-Defined Topological Boundaries

[Epistemic Status: written off the top of my head, thought about it for over a decade]

What do we desire for a theory of consciousness?

We want it to explain why and how the structure of our experience is computationally relevant. Why would nature bother to wire not only information per se, but our experiences, in richly structured ways that seem to track task-relevant computation (though at times in elusive ways)?

I think we can derive an explanation here. It is both very theoretically satisfying and literally mind-bending. This allows us to rule out vast classes of computing systems as having no more than computationally trivial conscious experiences.

TL;DR: We have richly textured bound experiences precisely because the boundaries that individuate us also allow us to act as individuals in many ways. This individual behavior can reflect features of the state of the entire organism in energy-efficient ways. Evolution can recruit this individual, yet holistic, behavior due to its computational advantages. We think that the boundary might be the result of topological segmentation in physical fields.


Marr’s Levels of Analysis and the Being/Form Boundary

One lens we can use to analyze the possibility of sentience in systems is the conceptual boundary between “being” and “form”. Here “being” refers to the interiority of things, their intrinsic likeness. “Form”, on the other hand, refers to how they appear from the outside. Where you place the being/form boundary influences how you make sense of the world around you. One factor that seems to be at play in where you place the being/form boundary is your implicit set of background assumptions about consciousness. In particular, how you think of consciousness in relation to Marr’s levels of analysis:

  • If you locate consciousness at the computational (or behavioral) level, then the being/form boundary might be computation/behavior. In other words, sentience simply is the performance of certain functions in certain contexts.
  • If you locate it at the algorithmic level, then the being/form boundary might become algorithm/computation. Meaning that what matters for the inside is the algorithm, whereas the outside (the form) is the function the algorithm produces.
  • And if you locate it at the implementation level, you will find that you identify being with specific physical situations (such as phases of matter and energy) and form as the algorithms that they can instantiate. In turn, the being/form boundary looks like crystals & bubbles & knots of matter and energy vs. how they can be used from the outside to perform functions for each other.

How you approach the question of whether a given chatbot is sentient will drastically depend on where you place the being/form boundary.


Many arguments against the sentience of particular computer systems are based on algorithmic inadequacy. This, for example, takes the form of choosing a current computational theory of mind (e.g. global workspace theory) and checking if the algorithm at play has the bare bones you’d expect a mind to have. This is a meaningful kind of analysis. And if you locate the being/form boundary at the algorithmic level then this is the only kind of analysis that seems to make sense.

What stops people from making successful arguments concerning the implementation level of analysis is confusion about the function of consciousness. Without a handle on what consciousness does, which physical systems are or aren’t conscious seems to be inevitably an epiphenomenalist construct. That is, drawing boundaries around systems with specific functions is an inherently fuzzy activity, and any criteria we choose for whether a system is performing a certain function will be at best a matter of degree (and opinion).

The way of thinking about phenomenal boundaries I’m presenting in this post will escape this trap.

But before we get there, it’s important to point out the usefulness of reasoning about the algorithmic layer:

Algorithmic Structuring as a Constraint

I think that most people who believe that digital sentience is possible will concede that at least in some situations the Chinese Room is not conscious. The extreme example is when the content of the Chinese Room turns out to be literally a lookup table. Here a simple algorithmic concern is sufficient to rule out its sentience: a lookup table does not have an inner state! From the point of view of its inner workings, what it does is the same even if you relabel which input goes with which output. Whatever is inscribed in the lookup table (with however many replies and responses as part of the next query) is not something that the lookup table structurally has access to! The lookup table is, in an algorithmic sense, blind to what it is and what it does*. It has no mirror into itself.
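To make the contrast vivid, here is a toy sketch in Python (all names and behaviors invented for illustration, not any real chatbot): the lookup-table bot has no inner state its own processing can consult, while the stateful bot’s replies depend on a variable that only its own history determines.

```python
# Toy illustration (hypothetical): a lookup table maps inputs to outputs
# with no inner state to consult along the way.
lookup_table = {"hello": "hi there", "how are you?": "fine, thanks"}

def lookup_bot(message: str) -> str:
    # Relabeling which input goes with which output changes nothing
    # "from the inside"; there is no inside.
    return lookup_table.get(message, "...")

class StatefulBot:
    """Minimal contrast: replies depend on an inner variable."""
    def __init__(self) -> None:
        self.mood = 0.0  # inner state the bot's own dynamics can access

    def reply(self, message: str) -> str:
        self.mood += 1.0 if "thanks" in message else -0.1
        return "glad to help!" if self.mood > 0 else "..."
```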

Algorithmic considerations are important. To not be a lookup table, a system must have at least some internal representations. We must consider constraints on “meaningful experience”, such as probably having at least some of, or something analogous to: a decent number of working memory slots (and types), a good-sized visual field, resolution of color in terms of Just Noticeable Differences, and so on. If your algorithm doesn’t even try to “render” its knowledge in some information-rich format, then it may lack the internal representations needed to really “understand”. Put another way: imagine that your experience is like a Holodeck. Ask what the lower bound is on the computational throughput of each sensory modality and their interrelationships. Then see if the algorithm you think can “understand” has internal representations of that kind at all.

Steel-manning algorithmic concerns involves taking a hard look at the number of degrees of freedom of our inner world-simulation (in e.g. free-wheeling hallucinations) and making sure that there are implicit or explicit internal representations with roughly similar computational horsepower as those sensory channels.

I think that this is actually an easy constraint to meet relative to the challenge of actually creating sentient machines. But it’s a bare minimum. You can’t let yourself be fooled by a lookup table.

In practice, AI researchers will just care about metrics like accuracy, meaning that they will use algorithmic systems with complex internal representations like ours only if it computationally pays off to do so! (Hanson in Age of Em bets that it is worth simulating a whole high-performing human’s experience; Scott points out we’d all be on super-amphetamines.) Me? I’m extremely skeptical that our current mindstates are algorithmically (or even thermodynamically!) optimal for maximally efficient work. But even if normal human consciousness, or anything remotely like it, were such a global optimum that any other big computational task routes around to it as an instrumental goal, I still think we would need to check if the algorithm does in fact create adequate internal representations before we assign sentience to it.

Thankfully I don’t think we need to go there. I think that the most crucial consideration is that we can rule out a huge class of computing systems ever being conscious by identifying implementation-level constraints for bound experiences. Forget about the algorithmic level altogether for a moment. If your computing system cannot build a bound experience from the bottom up in such a way that it has meaningful holistic behavior, then no matter what you program into it, you will only have “mind dust” at best.

What We Want: Meaningful Boundaries

In order to solve the boundary problem we want to find “natural” boundaries in the world and scaffold off of those. We take on the starting assumption that the universe is a gigantic “field of consciousness”, and the question of how atoms come together to form experiences becomes the question of how this field becomes individuated into experiences like ours. So we need to find out how boundaries arise in this field. But these are not just any boundaries: they are boundaries that are objective, frame-invariant, causally significant, and computationally useful. That is, boundaries you can do things with. Boundaries that explain why we are individuals and why creating individual bound experiences was evolutionarily adaptive; not only why it is merely possible but also advantageous.

My claim is that boundaries with such properties are possible, and indeed might explain a wide range of puzzles in psychology and neuroscience. The full conceptually satisfying explanation results from considering two interrelated claims and understanding what they entail together. The two interrelated claims are:

(1) Topological boundaries are frame-invariant and objective features of physics

(2) Such boundaries are causally significant and offer potential computational benefits

I think that these two claims combined have the potential to explain the phenomenal binding/boundary problem (of course assuming you are on board with the universe being a field of consciousness). They also explain why evolution was even capable of recruiting bound experiences for anything. Namely, that the same mechanism that logically entails individuation (topological boundaries) also has mathematical features useful for computation (examples given below). Our individual perspectives on the cosmos are the result of such individuality being a wrinkle in consciousness (so to speak) having non-trivial computational power.

In technical terms, I argue that a satisfactory solution to the boundary problem (1) avoids strong emergence, (2) sidesteps the hard problem of consciousness, (3) prevents the complication of epiphenomenalism, and (4) is compatible with the modern scientific world picture.

And the technical reason why topological segmentation provides the solution is that with it: (1) no strong emergence is required because behavioral holism is only weakly emergent on the laws of physics, (2) we sidestep the hard problem via panpsychism, (3) phenomenal binding is not epiphenomenal because the topological segments have holistic causal effects (such that evolution would have a reason to select for them), and (4) we build on top of the laws of physics rather than introduce new clauses to account for what happens in the nervous system. In this post you’ll get a general walkthrough of the solution. The fully rigorous, step by step, line of argumentation will be presented elsewhere. Please see the video for the detailed breakdown of alternative solutions to the binding/boundary problem and why they don’t work.

Holistic (Field) Computing

A very important move that we can make in order to explore this space is to ask ourselves if the way we think about a concept is overly restrictive. In the case of computation, I would claim that the concept is either applied extremely vaguely or that making it rigorous makes its application so narrow that it loses relevance. In the former case we have the tendency for people to equate consciousness with computation at a very abstract level (such as “resource gathering” and “making predictions” and “learning from mistakes”). In the latter we have cases where computation is defined in terms of computable functions. The conceptual mistake to avoid is thinking that just because you can compute a function with a Turing machine, you are therefore creating the same inner (bound or not) physical states along the way. And while, yes, it would be possible to approximate the field behavior we will discuss below with a Turing machine, it would be computationally inefficient (as it would need to simulate a massively parallel system) and it would lack the bound inner states (and their computational speedups) needed for sentience.

The (conceptual engineering) move I’m suggesting we make is to first of all enrich our conception of computation. To notice that we’ve lived with an impoverished notion all along.

I suggest that our conception of computation needs to be broad enough to include bound states as possible meaningful inputs, internal steps and representations, and outputs. This enriched conception of computation would be capable of making sense of computing systems that work with very unusual inputs and outputs. For instance, it has no problem thinking of a computer that takes as input chaotic superfluid helium and returns soap bubble clusters as outputs. The reason to use such exotic medium is not to add extra steps, but in fact to remove extra steps by letting physics do the hard work for you.


To illustrate just one example of what you can do with this enriched paradigm of computing I am trying to present to you, let’s now consider the hidden computational power of soap films. Say that you want to connect three poles with a wire. And you want to minimize how much wire you use. One option is to use trigonometry and linear algebra, another one is to use numerical simulations. But an elegant alternative is to create a model of the poles between two parallel planes and then submerge the structure in soapy water.

Letting the natural energy-minimizing property of soap bubbles find the shortest connection between three poles is an interesting way of performing a computation. It is uniquely adapted to the problem without needing tweaks or adjustments – the self-organizing principle will work the same (within reason) wherever you place the poles. You are deriving computational power from physics in a very customized way that nonetheless requires no tuning or external memory. And it’s all done simply by each point of the surface wanting to minimize its tension. Any non-minimal configuration will have potential energy, which then gets transformed into kinetic energy and makes it wobble, and as it wobbles it radiates out its excess energy until it reaches a configuration where it doesn’t wobble anymore. So you have to make the solution of your problem precisely a non-wobbly state!
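To make the soap-film trick concrete, here is a minimal numerical sketch in Python (pole positions made up). Gradient descent on total wire length plays the role of surface tension relaxing, and it converges to the same junction (the Fermat point) that the film finds physically:

```python
import numpy as np

# Minimal sketch: find the junction point connecting three poles with the
# least total wire, the quantity a soap film minimizes via surface tension.
poles = np.array([[0.0, 0.0], [4.0, 0.0], [1.0, 3.0]])  # made-up positions

def total_length(p):
    return sum(np.linalg.norm(p - pole) for pole in poles)

p = poles.mean(axis=0)  # start at the centroid
for _ in range(2000):
    # Gradient of summed distances: one unit vector per pole.
    grads = [(p - pole) / np.linalg.norm(p - pole) for pole in poles]
    p -= 0.01 * sum(grads)  # relax "downhill", like tension equalizing

print(p, total_length(p))  # junction (Fermat point) and minimal length
```

At the optimum the three unit vectors cancel out, which is exactly the 120° angle condition the physical film satisfies on its own.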

In this way of thinking about computation, an intrinsic part of the question about what kind of thing a computation is will depend on what physical processes were utilized to implement it. In essence, we can (and I think should) enrich our very conception of computation to include what kind of internal bound states the system is utilizing, and the extent to which the holistic physical effects of such inner states are computationally trivial or significant.

We can call this paradigm of computing “Holistic Computing”.

From Soap Bubbles to Ising Solvers to Meeting Schedulers Implemented with Lasers

Let’s make a huge jump from soap-water-based computation. A much more general case that is nonetheless in the same family as using soap bubbles for compute is having a way to efficiently solve the Ising problem. In particular, having an analog physics-based annealing method in this case comes with unique computational benefits: it turns out that non-linear optics can do this very efficiently. You are, in a certain way, using the universe’s very frustration with the problem (don’t worry, I don’t think it suffers) to get it solved. Here is an amazing recent example: Ising Machines: Non-Von Neumann Computing with Nonlinear Optics – Alireza Marandi – 6/7/2019 (presented at Caltech).

The person who introduces Marandi in the video above is Kwabena Boahen, whose course at Stanford I had the honor of taking (and I got to play with the Neurogrid!). Back in 2012 something like the Neurogrid seemed like the obvious path to AGI. Today, ironically, people imagine scaling transformers is all you need. Tomorrow, we’ll recognize the importance of holistic field behavior and the boundary problem.

One way to get there on the computer science front will be by first demonstrating a niche set of applications where, e.g., non-linear optical Ising solvers vastly outperform GPUs for energy minimization tasks in random graphs. But as the unique computational benefits become better understood, we will sooner or later switch from thinking about how to solve our particular problem to thinking about how we can cast our particular problem as an Ising/energy-minimization problem so that physics solves it for us. It’s like having a powerful computer that only speaks a very specific alien language. If you can translate your problem into its own terms, it’ll solve it at lightning speed. If you can’t, it will be completely useless.
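For readers who want to play with the idea, here is a minimal digital imitation of what an Ising machine does physically: a simulated-annealing sketch in Python with random couplings and a made-up cooling schedule. The optical hardware in Marandi’s talk relaxes toward low-energy configurations as physics; the code below merely imitates that relaxation one spin flip at a time.

```python
import numpy as np

# Minimal sketch: digitally imitate what an Ising machine does physically.
rng = np.random.default_rng(0)
n = 50
J = rng.normal(size=(n, n))
J = (J + J.T) / 2          # symmetric couplings
np.fill_diagonal(J, 0)
spins = rng.choice([-1, 1], size=n)

def energy(s):
    return -0.5 * s @ J @ s

T = 2.0                    # "temperature" of the anneal
for _ in range(20000):
    i = rng.integers(n)
    dE = 2 * spins[i] * (J[i] @ spins)      # energy change if spin i flips
    if dE < 0 or rng.random() < np.exp(-dE / T):
        spins[i] *= -1                      # accept downhill (or lucky) flips
    T *= 0.9997                             # cool slowly toward a minimum

print(energy(spins))       # a (near-)minimal energy configuration
```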

Intelligence: Collecting and Applying Self-Organizing Principles

This takes us to the question of whether general intelligence is possible without switching to a Holistic Computing paradigm. Can you have generally intelligent (digital) chatbots? In some senses, yes. In perhaps the most significant sense, no.

Intelligence is a contentious topic (see here David Pearce’s helpful breakdown of 6 of its facets). One particular facet of intelligence that I find enormously fascinating and largely under-explored is the ability to make sense of new modes of consciousness and then recruit them for computational and aesthetic purposes. THC and music production have a long history of synergy, for instance. A composer who successfully uses THC to generate musical ideas others find novel and meaningful is applying this sort of intelligence. THC-induced states of consciousness are largely dysfunctional for a lot of tasks. But someone who utilizes the sort of intelligence (or meta-intelligence) I’m pointing to will pay attention to the features of experience that do have some novel use and lean on those. THC might impair working memory, but it also expands and stretches musical space: it intensifies reverb, softens rough edges in heart notes, increases emotional range, and adds synesthetic brown noise (which can enhance stochastic resonance). With wit and determination (and co-morbid THC/music addiction), musical artists exploit the oddities of THC musicality to great effect, arguably some much more successfully than others.

The kind of reframe that I’d like you to consider is that we are all in fact something akin to these stoner musicians. We were born with this qualia resonator with lots of cavities, kinds of waves, levels of coupling, and so on. And it took years for us to train it to make adaptive representations of the environment. Along the way, we all (typically) develop a huge repertoire of self-organizing principles we deploy to render what we believe is happening out there in the world. The reason why an experience of “meditation on the wetness of water” can be incredibly powerful is not because you are literally tuning into the resonant frequency of the water around you and in you. No, it’s something very different. You are creating the conditions for the self-organizing principle that we already use to render our experiences with water to take over as the primary organizer of our experience. Since this self-organizing principle does not, by its nature, generate a center, full absorption into “water consciousness” also has a no-self quality to it. Same with the other elements. Excitingly, this way of thinking also opens up our mind about how to craft meditations from first principles. Namely, by creating a periodic table of self-organizing principles and then systematically trying combinations until we identify the laws of qualia chemistry.

You have to come to realize that your brain’s relationship with self-organizing principles is like that of a Pokémon trainer and his Pokémon (ideally in a situation where Pokémon play the Glass Bead Game with each other rather than try to hurt each other– more on that later). Or perhaps like that of a mathematician and clever tricks for proofs, or a musician and rhythmic patterns, and so on. Your brain is a highly tamed inner space qualia warp drive usually working at 1% or less. It has stores of finely balanced and calibrated self-organizing principles that will generate the right atmospheric change to your experience at the drop of a hat. We are usually unaware of how many moods, personalities, contexts, and feelings of the passage of time there are – your brain tries to learn them all so it has them in store for whenever needed. All of a sudden: haze and rain, unfathomable wind, mercury resting motionless. What kind of qualia chemistry did your brain just use to try to render those concepts?

We are using features of consciousness (and the self-organizing principles they afford) to solve problems all the time without explicitly modeling this fact. In my conception of sentient intelligence, being able to recruit self-organizing principles of consciousness for meaningful computation is a pillar of any meaningfully intelligent mind. I think that largely this is what we are doing when humans become extremely good at something (from balancing discs to playing chess and empathizing with each other). We are creating very specialized qualia by finding the right self-organizing principles and then purifying/increasing their quality. To do an excellent modern-day job that demands constraint satisfaction at multiple levels of analysis at once likely requires us to form something akin to High-Entropy Alloys of Consciousness. That is, we are usually a judiciously chosen mixture of many self-organizing principles balanced just right to produce a particular niche effect.

Meta-Intelligence

David Pearce’s conception of Full-spectrum Superintelligence is inspiring because it takes into account the state-space of consciousness (and what matters) in judging the quality of a certain intelligence in addition to more traditional metrics. Indeed, as another key conceptual engineering move, I suggest that we can and need to enrich our conception of intelligence in addition to our conception of computation.

So here is my attempt at enriching it further and adding another perspective. One way we can think of intelligence is as the ability to map a problem to a self-organizing principle that will “solve it for you” and having the capacity to instantiate that self-organizing principle. In other words, intelligence is, at least partly, about efficiency: you are successful to the extent that you can take a task that would generally require a large number of manual operations (which take time, effort, and are error-prone) and solve it in an “embodied” way.

Ultimately, a complex system like the one we use for empathy mixes both serial and parallel self-organizing principles for computation. Empathy is enormously cognitively demanding rather than merely a personality trait (e.g. agreeableness), as it requires a complex mirroring capacity that stores and processes information in efficient ways. Exploring exotic states of consciousness is even more computationally demanding. Both are error-prone.

Succinctly, I suggest we consider:

One key facet of intelligence is the capacity to solve problems by breaking them down into two distinct subproblems: (1) find a suitable self-organizing principle you can instantiate reliably, and (2) find out how to translate your problem to a format that our self-organizing principle can be pointed at so that it solves it for us.

Here is a concrete example. If you want to disentangle a wire, you can try to first put it into a discrete data structure like a graph, and then get the skeleton of the knot in a way that allows you to simplify it with Reidemeister moves (and get lost in the algorithmic complexity of the task). Or you could simply follow the lead of Yu et al. 2021 and make the surfaces repulsive, letting this principle solve the problem for you.


These repulsion-based disentanglement algorithms are explained in this video. Importantly, how to do this effectively still needs fine-tuning. The method they ended up using was much faster than the (many) other ones they tried (a Full-Spectrum Superintelligence would be able to “wiggle” the wires a bit if they got stuck, of course).
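To convey the flavor of this approach, here is a toy Python sketch (emphatically not the authors’ algorithm, and with all the constants made up): points on two chains repel each other while springs keep each chain connected, and the tangle relaxes apart with no global planner.

```python
import numpy as np

# Toy sketch of repulsion-driven untangling (inspired by, not reproducing,
# Yu et al. 2021): points on two chains repel each other while springs keep
# each chain connected; the "tangle" relaxes apart with no global planner.
rng = np.random.default_rng(1)
a = rng.normal(scale=0.5, size=(20, 3))  # two intertwined point chains
b = rng.normal(scale=0.5, size=(20, 3))

def springs(chain):
    f = np.zeros_like(chain)
    d = np.diff(chain, axis=0)
    f[:-1] += d            # pull each point toward its next neighbor
    f[1:] -= d             # and toward its previous neighbor
    return f

for _ in range(500):
    diff = a[:, None, :] - b[None, :, :]
    dist = np.linalg.norm(diff, axis=-1, keepdims=True) + 1e-6
    rep = np.clip(diff / dist**3, -10, 10)  # inverse-square repulsion, clamped
    a += 0.01 * (rep.sum(axis=1) + springs(a))
    b += 0.01 * (-rep.sum(axis=0) + springs(b))
```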


This is hopefully giving you new ways of thinking about computation and intelligence. The key point to realize is that these concepts are not set in stone, and our current versions of them may, to a large extent, limit our thinking about sentience and intelligence.

Now, I don’t believe that if you simulate a self-organizing principle of this sort you will get a conscious mind. The whole point of using physics to solve your problem is that in some cases you get better performance than by algorithmically representing a physical system and then using that simulation to instantiate self-organizing principles. Moreover, physics simulations, to the extent they are implemented in classical computers, will fail to generate the same field boundaries that would arise in the physical system. To note, physics-inspired simulations like Yu et al. 2021 are nonetheless enormously helpful to illustrate how to think of problem-solving with a massively parallel analog system.

Are Neural Cellular Automata Conscious?

The computational success of Neural Cellular Automata is primarily algorithmic. In essence, digitally implemented NCA are exploring a paradigm of selection and amplification of self-organizing principles, which is indeed a very different way of thinking about computation. But, critically, any digitally implemented NCA will still lack sentience. The main reasons are that they (a) don’t use physical fields with weak downward causation, and (b) don’t have a mechanism for binding/boundary-making. Digitally implemented cellular automata may have complex emergent behavior, but they generate no meaningful boundaries (i.e. objective, frame-invariant, causally significant, and computationally useful). That said, the computational aesthetic of NCA can be fruitfully imported into the study of Holistic Field Computing, in that the techniques for selecting and amplifying self-organizing principles already solved for NCA may have analogues in how the brain recruits physical self-organizing principles for computation.
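For concreteness, here is a sketch of a single NCA update step in Python, loosely following the architecture of Mordvintsev et al.’s “Growing Neural Cellular Automata”. The weights here are random and untrained, and the paper’s small per-cell MLP is collapsed into a single layer, so this is only the skeleton of the idea:

```python
import numpy as np
from scipy.signal import convolve2d

# Sketch of one NCA update step (weights random and untrained; the paper's
# per-cell MLP is collapsed into a single layer for brevity).
C = 8                                        # channels per cell
state = np.zeros((32, 32, C))
state[16, 16] = 1.0                          # a single "seed" cell
identity = np.zeros((3, 3)); identity[1, 1] = 1.0
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]) / 8.0
W = np.random.default_rng(0).normal(scale=0.1, size=(3 * C, C))

def nca_step(s):
    # Each cell perceives its own channels plus their local gradients.
    feats = [convolve2d(s[..., c], k, mode="same")
             for k in (identity, sobel_x, sobel_x.T) for c in range(C)]
    perception = np.stack(feats, axis=-1)    # shape (32, 32, 3 * C)
    return s + 0.1 * np.maximum(perception @ W, 0)  # residual per-cell update

state = nca_step(state)
```

Note that every update is a local, hardcoded information flow on a grid of numbers; nothing here creates the kind of field boundary discussed above.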

Exotic States of Consciousness

Perhaps one of the most compelling demonstrations that your brain recruits but a tiny, narrow range out of a possible zoo (or jungle) of self-organizing principles is to pay close attention to a DMT trip.

DMT states of consciousness are computationally non-trivial on many fronts. It is difficult to convey how enriched the set of experiential building blocks becomes in such states. Their scientific significance is hard to overstate. Importantly, the bulk of the computational power on DMT is dedicated to trying to make the experience feel good and not feel bad. The complexity involved in this task is often overwhelming. But one could envision a DMT-like state in which some parameters have been stabilized in order to recruit standardized self-organizing principles available only in a specific region of the energy-information landscape. I think that cataloguing the precise mathematical properties of the dynamics of attention and awareness on DMT will turn out to have enormous _computational_ value. And a lot of this computational value will generally be pointed towards aesthetic goals.

To give you a hint of what I’m talking about: a useful QRI model (indeed, algorithmic reduction) of the phenomenology of DMT is that it (a) activates high-frequency metronomes that shake your experience and energize it with a high-frequency vibe, and (b) generates a new medium of wave propagation that allows very disparate parts of one’s experience to interact with one another.

3D Space Group (CEV on low dose DMT)

At a sufficient dose, DMT’s secondary effect also makes your experience feel sort of “wet” and “saturated”. Your whole being can feel mercurial and liquidy (cf. Plasmatis and Jim Jam). A friend speculates that this is what it’s like for an experience to be one where everything is touching everything else (all at once).

There are many Indra’s Net-type experiences in this space. In brief, experiences where “each part reflects every other part” are an energy minimum that also reduces prediction errors. And there is a fascinating non-trivial connection with the Free Energy Principle, where experiences that minimize internal prediction errors may display a lot of self-similarity.

To a first approximation, I posit that the complex geometry of DMT experiences is indeed the non-linearities of the DMT-induced wave propagation medium, which appear when it is sufficiently energized (so that it transitions from the linear to the non-linear regime). In other words, the complex hallucinations are energized patterns of non-linear resonance trying to radiate out their excess energy. Indeed, as you come down you experience the phenomenon of condensation of shapes of qualia.

Now, we currently don’t know what computational problems this uncharted cornucopia of self-organizing principles could solve efficiently. The situation is analogous to that of the ISING Solver discussed above: we have an incredibly powerful alien computer that will do wonders if we can speak its language, and nothing useful otherwise. Yes, DMT’s computational power is an alien computer in search of a problem that will fit its technical requirements.

Vibe-To-Shape-And-Back

Michael Johnson, Selen Atasoy, and Steven Lehar all have shaped my thinking about resonance in the nervous system. Steven Lehar in particular brought to my attention non-linear resonance as a principle of computation. In essays like The Constructive Aspect of Visual Perception he presents a lot of visual illusions for which non-linear resonance works as a general explanatory principle (and then in The Grand Illusion he reveals how his insights were informed by psychonautic exploration).

One of the cool phenomenological observations Lehar made based on his exploration with DXM was that each phenomenal object has its own resonant frequency. In particular, each object is constructed with waves interfering with each other at a high-enough energy that they bounce off each other (i.e. are non-linear). The relative vibration of the phenomenal objects is a function of the frequencies of resonance of the waves of energy bouncing off each other that are constructing the objects.

In this way, we can start to see how a “vibe” can be attributed to a particular phenomenal object. In essence, long intervals will create lower resonant frequencies. And if you combine this insight with QRI paradigms, you see how the vibe of an experience can modulate its valence (soft ADSR envelopes and consonance feeling pleasant, for instance). Indeed, on DMT you get to experience the high-dimensional version of music theory, where the valence of a scene is a function of the crazy-complex network of pairwise interactions between phenomenal objects with specific vibratory characteristics. Give thanks to annealing, because tuning this manually would be a nightmare.
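As a toy quantification of this idea, here is a Python sketch that scores a set of phenomenal-object frequencies by their summed pairwise roughness, using a Plomp–Levelt-style dissonance curve (the constants follow Sethares’ common parameterization). Treating the “net vibe” as negative total dissonance is my simplifying assumption here, not an established QRI formula:

```python
import numpy as np
from itertools import combinations

# Toy sketch: score a set of frequencies by summed pairwise roughness using
# a Plomp-Levelt-style dissonance curve (Sethares' usual constants).
# "Net vibe" = negative total dissonance is a simplifying assumption.
def dissonance(f1, f2):
    fmin, d = min(f1, f2), abs(f1 - f2)
    s = 0.24 / (0.021 * fmin + 19)        # critical-bandwidth scaling
    return np.exp(-3.5 * s * d) - np.exp(-5.75 * s * d)

def net_vibe(freqs):
    return -sum(dissonance(f1, f2) for f1, f2 in combinations(freqs, 2))

print(net_vibe([220, 330, 440]))   # wide, consonant spacing: mild roughness
print(net_vibe([220, 233, 440]))   # a close clash: rougher, lower score
```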

But then there is the “global” vibe…

Topological Pockets

So far I’ve provided examples of how Holistic Computing enriches our conception of intelligence, computing, and how it even shows up in our experience. But what I’ve yet to do is connect this with meaningful boundaries, as we set out to do. In particular, I haven’t explained why Holistic Computing would arise out of topological boundaries.

For the purpose of this essay I’m defining a topological segment (or pocket) to be a region that can’t be expanded further without the following property becoming false: every point in the region locally belongs to the same connected space.

The Balloons’ Case

In the case of balloons this cashes out as: a topological segment is one where each point can go to any other point without having to go through connector points/lines/planes. It’s essentially the set of contiguous surfaces.
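As a crude computational stand-in for this definition, here is a Python sketch that treats “pockets” as connected components of a thresholded random field. Real topological segmentation of a physical field would be much subtler than a threshold-and-label pass, but the flavor is the same:

```python
import numpy as np
from scipy import ndimage

# Crude sketch: treat "pockets" as connected components of a thresholded
# random field, a stand-in for topological segmentation of a real field.
rng = np.random.default_rng(2)
field = ndimage.gaussian_filter(rng.normal(size=(64, 64)), sigma=4)
pockets, n = ndimage.label(field > 0.05)  # contiguous super-threshold regions
print(f"{n} pockets found")               # each label: one candidate segment
```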

Now, each of these pockets can have both a rich set of connections to other pockets and intricate internal boundaries. The way we could justify Computational Holism being relevant here is that the topological pockets trap energy, and thus allow the pocket to vibrate in ways that express a lot of holistic information. Each contiguous surface makes a sound that represents its entire shape, and thus behaves as a unit in at least this way.

The General Case

An important note here is that I am not claiming that (a) all topological boundaries can be used for Holistic Computing, or (b) that to have Holistic Computing you need topological boundaries. Rather, I’m claiming that the topological segmentation responsible for individuating experiences does have applications for Holistic Computing, that this conceptually makes sense, and that it is why evolution bothered to make us conscious. In the general case, you probably do get quite a bit of Holistic Computing without topological segmentation, and vice versa. For example, an LC circuit can be used for Holistic Computing on the basis of its steady analog resonance, but I’m not sure it creates a topological pocket in the EM field per se.

At this stage of the research we don’t have a leading candidate for the precise topological feature of fields responsible for this. But the explanation space is promising, given its ability to satisfy theoretical constraints that no other theory we know of can.

But I can nonetheless provide a proof of concept for how a topological pocket does come with really impactful holism. Let’s dive in!

Getting Holistic Behavior Out of a Topological Pocket

Creating a topological pocket may be consequential in one of several ways. One option for getting holistic behavior arises if you can “trap” energy in the pocket. As a consequence, you will energize its harmonics. The particular way the whole thing vibrates is a function of the entire shape at once. So from the inside, every patch now has information about the whole (namely, by the vibration it feels!).**
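Here is a proof-of-concept sketch in Python of the claim that the vibration encodes the whole shape: we build the graph Laplacian of two differently shaped regions and compare their lowest eigenmodes (their “harmonics”). The spectra differ because each mode is a function of the entire region at once, which is the sense of holism at stake:

```python
import numpy as np

# Proof-of-concept sketch: the low "harmonics" of a region are a function of
# its entire shape. Build the graph Laplacian of two pockets and compare.
def spectrum(mask, k=6):
    pts = np.argwhere(mask)
    index = {tuple(p): i for i, p in enumerate(pts)}
    L = np.zeros((len(pts), len(pts)))
    for (x, y), i in index.items():
        for nb in ((x + 1, y), (x, y + 1)):   # right and down neighbors
            j = index.get(nb)
            if j is not None:
                L[i, i] += 1; L[j, j] += 1    # degree terms
                L[i, j] -= 1; L[j, i] -= 1    # adjacency terms
    return np.linalg.eigvalsh(L)[:k]          # lowest vibrational modes

yy, xx = np.mgrid[-16:16, -16:16]
disk = xx**2 + yy**2 < 14**2                  # a round pocket
bar = np.zeros((32, 32), dtype=bool)
bar[14:18, 2:30] = True                       # an elongated pocket
print(spectrum(disk))                         # distinct spectra: every patch
print(spectrum(bar))                          # "feels" the shape of the whole
```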


One possible overarching self-organizing principle that the entire pocket may implement is valence-gradient ascent. In particular, some configurations of the field are more pleasant than others, and this has to do with the complexity of the global vibe. Essentially, the reason no part of it wants to be in a pocket with certain asymmetries is that those asymmetries make themselves known everywhere within the pocket by how the whole thing vibrates. Therefore, for the same reason a soap bubble can become spherical by each point on the surface trying to locally minimize tension, our experiences can become symmetrical and harmonious by having each “point” in them trying to maximize its local valence.

Self Mirroring

From Lehar’s Cartoon Epistemology

And here we arrive at perhaps one of the craziest but coolest aspects of Holistic Computing I’ve encountered. Essentially, if we go to the non-linear regime, then the whole vibe is not merely the weighted sum of the harmonics of the system. Rather, you might have waves interfere with each other in a concentrated fashion in the various cores/clusters, and in turn these become non-linear structures that will try to radiate out their energy. And to maximize valence there needs to be a harmony between the energy coming in and out of these dense non-linearities. In our phenomenology this may perhaps point to our typical self-consciousness. In brief, we have an internal avatar that “reflects” the state of the whole! We are self-mirroring machines! Now this is really non-trivial (and non-linear) Holistic Computing.

Cut From the Same Fabric

So here is where we get to the crux of the insight. Namely, that weakly emergent topological changes can simultaneously have non-trivial causal/computational effects while also solving the boundary problem. We avoid strong emergence but still get a kind of ontological emergence: since consciousness is being cut out of one huge fabric of consciousness, we don’t ever need strong emergence in the form of “consciousness out of the blue all of a sudden”. What you have instead is a kind of ontological birth of an individual. The boundary legitimately creates a new being, even if in a way the total amount of consciousness stays the same. This is of course an outrageous claim (that you can get “individuals” by e.g. twisting the electric field in just the right way). But I believe the alternatives are far crazier once you understand what they entail.

In a Nutshell

To summarize, we can rule out that any of the current computational systems implementing AI algorithms have anything but trivial consciousness. If there are topological pockets created by e.g. GPUs/TPUs, they are epiphenomenal: the system is designed so that only the local influences it has hardcoded can affect the behavior at each step.

The reason the brain is different is that it has open avenues for solving the boundary problem. In particular, a topological segmentation of the EM field would be a satisfying option, as it would simultaneously give us both holistic field behavior (computationally useful) and a genuine natural boundary. This extends the kind of model explored by Johnjoe McFadden (Conscious Electromagnetic Information Field) and Susan Pockett (Consciousness Is a Thing, Not a Process). They (rightfully) point out that the EM field can solve the binding problem. But the boundary problem then emerges in its place. With topological boundaries, finally, you can get meaningful boundaries (objective, frame-invariant, causally significant, and computationally useful).

This conceptual framework both clarifies what kind of system is at minimum required for sentience, and also opens up a research paradigm for systematically exploring topological features of the fields of physics and their plausible use by the nervous system.


* See the “Self Mirroring” section to contrast the self-blindness of a lookup table and the self-awareness of sentient beings.

** More symmetrical shapes will tend to have cleaner resonant modes. So to the extent that symmetry tracks fitness on some level (e.g. the ability to shed entropy), quickly estimating the spectral complexity of an experience can tell you how far it is from global symmetry, and possibly from health (explanation inspired by Johnson’s Symmetry Theory of Homeostatic Regulation).



Many thanks to Michael Johnson, David Pearce, Anders & Maggie, and Steven Lehar for many discussions about the boundary/binding problem. Thanks to Anders & Maggie and to Mike for discussions about valence in this context. And thanks to Mike for offering a steel-man of epiphenomenalism. Many thank yous to all our supporters! Much love!

Infinite bliss!

7 Recent Videos: Cognitive Sovereignty, Phenomenology of Scent, Solution to the Problem of Other Minds, Novel Qualia Research Methods, Higher Dimensions, Solution to the Binding Problem, and Qualia Computing

[Context: 4th in a series of 7-video packages. See the previous three packages: 1st, 2nd, and 3rd]


Genuinely new thoughts are actually very rare. Why is that? And how can we incentivize the good side of smart people to focus their energies on having genuinely new thoughts for the benefit of all? In order to create the conditions for that we need to strike the right balance between many complementary forces.

I offer a new ideal we call “Cognitive Sovereignty”. This ideal consists of three principles working together in synergy: (1) Freedom of Thought and Feeling, (2) Idea Ownership, and (3) Information Responsibility.

(1) Freedom of Thought and Feeling is the cultivation of a child-like wonder and positive attitude towards the ideas of one another. A “Yes And” approach to idea sharing.

As QRI advisors Anders Amelin and Margareta “Maggie” Wassinge write on the topic:

“On the topic of liberty of mind, we may reflect that inhibitory mechanisms are typically strong within groups of people. As is the case within minds of individuals. In minds it’s this tip of the iceberg which gets rendered as qualia and is the end result of unexperienced hierarchies of powerfully constraining filters. It’s really practical for life forms to function this way and for teams made up of life forms to function similarly, but for making grand improvements to the very foundations of life itself, you need maximum creativity instead of the default self-organizing consensus emergence.

“There is creativity-limiting pressure to conform to ‘correctness’ everywhere. Paradigmatic correctness in science, corporate correctness in business, social correctness, political correctness, and so on. As antidotes to chaos these can serve a purpose but for exceptional intellectual work to blossom they are quite counterproductive. There is something to be said for Elon Musk’s assertion that ‘excellence is the only passing grade’.

“The difference to the future wellbeing of sentient entities between the QRI becoming something pretty much overall OK-ish, and the QRI becoming something of great excellence, is probably bigger than between the corresponding outcomes for Tesla Motors.

“The creativity of the team is down to this exact thing: The qualia computing of the gut feeling getting to enjoy a haven of liberty all too rare elsewhere.”

On (2) we can say that “being the adult in the room” is equally important. As Michael Johnson puts it, “it’s important to keep track of the metadata of ideas.” One cannot incentivize smart people to share ideas if they don’t feel like others will recognize who came up with them. While not everyone pays close attention to who says what in conversation, we think that a reasonable level of attention on this is necessary to align incentives. Obviously, too much emphasis on Idea Ownership can be stifling and generate excessive overhead. So having open conversations about (failed) attribution, while assuming the best from others, is also a key practice to make Idea Ownership good for everyone.

And finally, (3) is the principle of “Information Responsibility”. This is the “wise old person” energy and attitude that deeply cares about the effects that information has on the world. Simple heuristics like “information wants to be free” and the ideal of a fully “open science” are pleasant to think about, but in practice they may lead to disasters on a grand scale. From gain-of-function research in virology to analysis of water pipes in cities, cutting-edge research can at times encounter novel ways of causing great harm. It’s imperative that one resists the urge to share them with the world for the sake of signaling how smart one is (which is the default path for the vast majority of people and institutions!). One needs to cultivate the wisdom to consider the long-term vision and only share ideas one knows are safe for the world. Here, of course, we need a balance: too much emphasis on information security can be a tactic to thwart others’ work and may be unduly onerous and stifling. Striking the right balance is the goal.

The full synergy between these three principles of Cognitive Sovereignty, I think, is what allows people to think new thoughts.

I also cover two new key ideas: (a) Canceling Paradise and (b) Multi-level Selection and how it interacts with Organizational Freedom.

~Qualia of the Day: Long Walks on the Beach~



In this talk we analyze the perfume category called “Aromatic Fougère” in order to illustrate the aesthetic of “Qualiacore” in its myriad manifestations.

Definition: The Qualiacore Aesthetic is the practice and aspiration to describe experiences in new, meaningful, and non-trivial ways that are illuminating for our understanding of the nature of consciousness.

At a high level, we must note that the classic ways of describing the phenomenology of scents tend to “miss the target”. Learning about the history, cultural import, associations, and similarities between perfumes can be fun, but it does not convey an accurate phenomenological impression of what it is that we are talking about. And while reading about the “perfume notes” of a composition can place it in a certain location relative to other perfumes, such note descriptions usually give you a false sense of understanding and familiarity, far removed from the complex subtleties of the state-space of scent. So how can we say new, meaningful, and non-trivial things about a smell?

Note-wise, Aromatic Fougères are typically described as the combination of herbs and spices (the aromatic part) with the core Fougère accord of oak moss, lavender/bergamot, geranium, and coumarin. In this video I offer a qualiacore-style analysis of how these “notes” interact with one another in order to form emergent gestalts. Here we will focus on the phenomenal character of these effects with an emphasis on bringing analogies from dynamic system behavior and energy-management techniques within the purview of the Symmetry Theory of Valence.

In the end, we arrive at a phenomenological fingerprint that cashes out in a comparison to the psychoactive effect of “Calvin Klein” (cocaine + ketamine*), which blends both stimulation and dissociation at the same time – a rather interesting effect that can be used to help you overcome awkwardness barriers in everyday life. “Smooth out the awkwardness landscape with Drakkar Noir!”

I also discuss the art of perfumery in light of QRI’s 8 models of art:

  1. Art as family resemblance (Semantic Deflation)
  2. Art as Signaling (Cool Kid Theory)
  3. Art as Schelling-point creation (a few Hipster-theoretical considerations)
  4. Art as cultivating sacred experiences (self-transcendence and highest values)
  5. Art as exploring the state-space of consciousness (ϡ☀♘🏳️‍🌈♬♠ヅ)
  6. Art as something that messes with the energy parameter of your mind (ꙮ)
  7. Art as puzzling valence effects (emotional salience and annealing as key ingredients)
  8. Art as a system of affective communication: a protolanguage to communicate information about worthwhile qualia (which culminates in Harmonic Society).

~Qualia of the Day: Aromatic Fougères~

* Extremely ill-advised.



How do you know for sure that other people (and non-human animals) are conscious?

The so-called “problem of other minds” asks us to consider whether we truly have any solid basis for believing that “we are not alone”. In this talk I provide a new, meaningful, and non-trivial solution to the problem of other minds using a combination of mindmelding and phenomenal puzzles in the right sequence such that one can gain confidence that others are indeed “solving problems with qualia computing” and in turn infer that they are independently conscious.

This explanatory style contrasts with typical “solutions” to the problem of other minds that focus on either historical, behavioral, or algorithmic similarities between oneself and others (e.g. “passing a Turing test”). Here we explore what the space of possible solutions looks like and show that qualia formalism can be a key to unlock new kinds of understanding currently out of reach within the prevailing paradigms in philosophy of mind. But even with qualia formalism, the radical skeptic solipsist will not be convinced. Direct experience and “proof” are necessary to convince a hardcore solipsist, since intellectual “inferential” arguments can always be mere “figments of one’s own imagination”. We thus explore how mindmelding can greatly increase our certainty of others’ consciousness. However, skeptical worries may still linger: how do you know that the source of consciousness during mindmelding is not your brain alone? How do you know that the other brain is conscious while you are not connected to it? We thus introduce “phenomenal puzzles” into the picture: these are puzzles that require the use of “qualia comparisons” to be solved. In conjunction with a specific mindmelding information-sharing protocol, such phenomenal puzzles can, we argue, fully address the problem of other minds in ways even strong skeptics will be satisfied with. You be the judge! 🙂

~Qualia of the Day: Wire Puzzles~

Many thanks to: Everyone who has encouraged the development of the field of qualia research over the years. David Pearce for encouraging me to actually write out my thoughts and share them online, Michael Johnson for our multi-year deep collaboration at QRI, and Murphy-Shigematsu for pushing me over the edge to start working on “what I had been putting off” back in 2014 (which was the trigger to actually write the first Qualia Computing post). In addition, I’d like to thank everyone at the Stanford Transhumanist Association for encouraging me so much over the years (Faust, Karl, Juan-Carlos, Blue, Todor, Keetan, Alan, etc.). Duncan Wilson for the beautiful times discussing these matters. Romeo Stevens for the amazing vibes and high-level thoughts. And of course everyone at QRI, especially Quintin Frerichs, Andrew Zuckerman, Anders and Maggie, and the list goes on (Mackenzie, Sean, Hunter, Elin, Wendi, etc.). Likewise, everyone at Qualia Computing Networking (the closed facebook group where we discuss a lot of these ideas), our advisors, donors, readers, and of course those watching these videos. Much love to all of you!


“Tout comprendre, c’est tout pardonner” – To understand all is to forgive all.


“New scientific paradigms essentially begin life as conspiracy theories, noticing the inconsistencies the previous paradigm is suppressing. Early adopters undergo a process that Kuhn likens to religious deconversion.” – Romeo Stevens

The field of consciousness research lacks a credible synthesis of what we already know about the mind. One key thing that is holding back the science of consciousness is that it’s currently missing an adequate set of methods to “take seriously” the implications of exotic states of consciousness. Imagine a physicist saying that “there is nothing about water that we can learn from studying ice”. Silly as it may be, the truth is that this is the typical attitude about exotic consciousness in modern neuroscience. And even with the ongoing resurgence of scientific interest in psychedelics, outside of QRI and Ingram’s EPRC there is no real serious attempt at mapping the state-space of consciousness in detail. This is to a large extent because we lack the vocabulary, tools, concepts, and focus at a paradigmatic level to do so. But a new paradigm is arriving, and the following 8 new research methods and others in the works will help bring it about:

  1. Taking Exotic States of Consciousness Seriously (e.g. when a world-class phenomenologist says that 3D-printed Poincaré projections of hyperbolic honeycombs make the visual system “glitch” when on DMT the rational response is to listen and ask questions rather than ignore and ridicule).
  2. High-Quality Phenomenology: Precise descriptions of the phenomenal character of experience. Core strategy: useful taxonomies of experience, a language to describe generalized synesthesia (multi-modal coherence), and a rich vocabulary to convey the statistical regularities of textures of qualia (cf. generalizing the concept of “mongrels” in the neuroscience of visual perception to all other modalities).
  3. Phenomenology Club: Critical mass of smart and rational psychonauts.
  4. Psychedelic Turk for Psychophysics: Real-time psychedelic task completion.
  5. Generalized Wada Test: What happens when half of your brain is on LSD and the other half is on ketamine?
  6. Resonance-Based Hedonic Mapping: You are a network of coupled oscillators. Act like it!
  7. Pair Qualia Cartography: Like pair programming but for exploring the state-space of consciousness with non-invasive neurostimulation.
  8. Cognitive Sovereignty: Furthering a culture that has a “Yes &” approach to creativity, keeps track of meta-data, and takes responsibility for the information it puts out.

~Qualia of the Day: Being Taken Seriously~

Relevant links:


Many people report experiencing “higher dimensions” during deep meditation and/or psychedelic experiences. Vaporized DMT in particular reliably produces this effect in a large percentage of users. But is this an illusion? Is there anything meaningful to it? What could possibly be going on?

In this video we provide a steel man (or titanium man?) of the idea that higher dimensions are *real* in a new, meaningful, and non-trivial sense. 

We must emphasize that most people who believe that DMT experiences are “higher dimensional” interpret their experiences within a direct realist framework. That is, they think they are “tuning in” to other dimensions, that some secret sense organ capable of perceiving the etheric realm was “activated”, that awareness of divine realms became available to their soul, or something along those lines. In brief, such interpretations operate under the notion that we can somehow perceive the world directly. In this video we instead work under the premise that we live in a compact world-simulation generated by our nervous system. If DMT gives rise to “higher dimensional experiences”, then such dimensions will be phenomenological in nature.

We thus try to articulate how it is possible for an *experience* to acquire higher dimensions. An important idea here is that there is a trade-off between degrees of freedom and geometric dimensions. We present a model where degrees of freedom can become interlocked in such a way that they functionally emulate the behavior of a *virtual* higher dimension. As exemplified by the “harmonograph”, one can indeed couple and interlock multiple oscillators in such a way that a point traces paths in a space of higher dimension than the space inhabited by any of the oscillators on their own. Moreover, with a long qualia decay, one can use this technique to “paint” entire images on a *virtual* higher-dimensional canvas!
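As a minimal sketch of this idea (a toy harmonograph, not a model of the DMT state; the frequencies and decay constant are arbitrary illustrative choices): four damped oscillators, two per axis, jointly steer a single point, so the 2D trace is parameterized by a four-dimensional joint state.

```python
# Toy harmonograph: four damped 1-D oscillators (two per axis) steer one point.
# The pen lives on a 2-D canvas, but its path is parameterized by the joint
# state of all four oscillators -- a *virtual* higher-dimensional canvas.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 60, 20000)

def damped(freq, phase, amp=1.0, decay=0.02):
    """One damped sinusoid; the slow decay plays the role of a long qualia decay."""
    return amp * np.sin(2 * np.pi * freq * t + phase) * np.exp(-decay * t)

# Near-rational frequency ratios "interlock" the degrees of freedom.
x = damped(2.0, 0.0) + damped(3.0, np.pi / 2)
y = damped(2.0, np.pi / 4) + damped(3.0, 0.0)

plt.plot(x, y, linewidth=0.3)
plt.axis("equal")
plt.title("Four interlocked degrees of freedom painting a 2-D trace")
plt.show()
```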

High-quality detailed phenomenology of DMT by rational psychonauts strongly suggests that higher virtual dimensions are widely present in the state. Also, the unique valence properties of the state seem to follow what we could call a “generalized music theory” where the “vibe” of the space is the net consonance between all of the metronomes in it. We indeed see a duality between spatial symmetry and temporal synchrony with modality-specific symmetries (equivariance maps) constraining the dynamic behavior.
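To make “net consonance” a bit more concrete, here is a toy scoring function under the assumption (ours, purely for illustration) that the “vibe” of a set of metronomes can be indexed by their summed pairwise dissonance, using Sethares’ fit to the Plomp–Levelt roughness curve:

```python
# Toy "net consonance": sum pairwise dissonance over all frequencies present,
# using Sethares' (1993) parameterization of the Plomp-Levelt curve.
import itertools
import numpy as np

def pair_dissonance(f1, f2, a1=1.0, a2=1.0):
    """Plomp-Levelt dissonance of two partials with amplitudes a1, a2."""
    f_min, df = min(f1, f2), abs(f1 - f2)
    s = 0.24 / (0.021 * f_min + 19.0)
    return a1 * a2 * (np.exp(-3.5 * s * df) - np.exp(-5.75 * s * df))

def net_dissonance(freqs):
    return sum(pair_dissonance(f1, f2) for f1, f2 in itertools.combinations(freqs, 2))

print(net_dissonance([440, 660, 880]))   # consonant: simple 2:3:4 ratios
print(net_dissonance([440, 466, 493]))   # dissonant: a tight cluster of partials
```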

This, together with the Symmetry Theory of Valence (Johnson), makes the search for “special divine numbers” suddenly meaningful: numerological correspondences can illuminate the underlying makeup of “heaven worlds” and other hedonically-loaded states of mind!

I conclude with a discussion about the nature of “highly-meaningful experiences”. In light of all of these frameworks, meaning can be understood as a valence effect that arises when you have strong consonance between abstract (narrative and symbolic), emotional, and sensory fields all at once. A key turning point in your life combined with the right emotion and the right “sacred space” can thus give rise to “peak meaning”. The key to infinite bliss!

~Qualia of the Day: Numerology~

Relevant links:

Thumbnail Image Source: Petri G., Expert P., Turkheimer F., Carhart-Harris R., Nutt D., Hellyer P. J. and Vaccarino F. (2014). Homological scaffolds of brain functional networks. J. R. Soc. Interface 11: 20140873 – https://royalsocietypublishing.org/doi/full/10.1098/rsif.2014.0873


How can a bundle of atoms form a unified mind? This is far from a trivial question, and it demands an answer.

The phenomenal binding problem asks us to consider exactly that. How can spatially and temporally distributed patterns of neural activity contribute to the contents of a unified experience? How can various cognitive modules interlock to produce coherent mental activity that stands as a whole?

To address this problem we first need to break down “the hard problem of consciousness” into manageable subcomponents. In particular, we follow Pearce’s breakdown of the problem, where we posit that any scientific theory of consciousness must answer: (1) why consciousness exists at all, (2) what is the set of qualia varieties and values, and what is the nature of their interrelationships, (3) the binding problem, i.e. why are we not “mind dust”?, and (4) what are the causal properties of consciousness (how could natural selection recruit experience for information-processing purposes, and why is it that we can talk about it). We discuss how trying to “solve consciousness” without addressing each of these subproblems is like trying to go to the Moon without taking into account air drag, or the Moon’s own gravitational field, or the fact that most of outer space is a vacuum. Illusionism, in particular, seems to claim “the Moon is an optical illusion” (which would be true for rainbows – but not for the Moon, or consciousness).

Zooming in on (3), we suggest that any solution to the binding problem must: (a) avoid strong emergence, (b) side-step the hard problem of consciousness, (c) circumvent epiphenomenalism, and (d) be compatible with the modern scientific world picture, namely the Standard Model of physics (or whichever future version achieves full causal closure).

Given this background, we then explain that “the binding problem” as stated is in fact conceptually insoluble. Rather, we ought to reformulate it as the “boundary problem”: reality starts out unified, and the real question is how it develops objective and frame-invariant boundaries. Additionally, we explain that “classic vs. quantum” is a false dichotomy, at least insofar as “classical explanations” are assumed to involve particles and forces. Field behavior is in fact ubiquitous in conscious experience, and it need not be quantum to be computationally relevant! In fact, we argue that nothing in experience makes sense except in light of holistic field behavior.

We then articulate exactly why all of the previously proposed solutions to the binding problem fail to meet the criteria we outlined. Among them, we cover:

  1. Cellular Automata
  2. Complexity
  3. Synchrony
  4. Integrated Information
  5. Causality
  6. Spatial Proximity
  7. Behavioral Coherence
  8. Mach Principle
  9. Resonance

Finally, we present what we believe is an actual plausible solution to the phenomenal binding problem that satisfies all of the necessary key constraints:

10. Topological segmentation

The case for (10) is far from trivial, which is why it warrants a detailed explanation. It results from realizing that topological segmentation allows us to simultaneously obtain holistic field behavior useful for computation and new and natural regions of fields that we could call “emergent separate beings”. This presents a completely new paradigm, which is testable using elements of the cohomology of electromagnetic fields.
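As a toy illustration of what “topological segmentation” of a field can mean (a deliberately simple stand-in for the full cohomology story): threshold a scalar field and count its connected components. Whether the field decomposes into one region or two depends on a global parameter, not on any local piece of the field:

```python
# Toy topological segmentation: connected components of a thresholded 2-D field.
# Illustrative reading: each component is a candidate "natural boundary";
# sweeping the threshold merges or splits the segments holistically.
import numpy as np
from scipy.ndimage import label

x, y = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
# One field made of two Gaussian bumps centered at x = -1 and x = +1.
field = np.exp(-((x - 1) ** 2 + y ** 2)) + np.exp(-((x + 1) ** 2 + y ** 2))

for thresh in (0.5, 0.9):
    _, n_segments = label(field > thresh)
    print(f"threshold={thresh}: {n_segments} segment(s)")  # 0.5 -> 1, 0.9 -> 2
```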

We conclude by speculating about the nature of multiple personality disorder and extreme meditation and psychedelic states of consciousness in light of a topological solution to the boundary problem. Finally, we articulate the fact that, unlike many other theories, this explanation space is in principle completely testable.

~Qualia of the Day: Acqua di Gio by Giorgio Armani and Ambroxan~

Relevant links:


Why are we conscious?

The short answer is that bound moments of experience have useful causal and computational properties that can speed up information processing in a nervous system.

But what are these properties, exactly? And how do we know? In this video I unpack this answer in order to explain (or at least provide a proof-of-concept explanation for) how bound conscious states accomplish non-trivial speedups in computational problems (such as the problem of visual reification).

In order to tackle this question we first need to (a) enrich our very conception of computation, and (b) also enrich our conception of intelligence.

(a) Computation: We must realize that the Church-Turing Thesis conception of computation only cares about computation in terms of functions, i.e. how inputs get mapped to outputs. But a much more general conception of computation also considers how the substrate allows for computational speed-ups via interacting inner states with intrinsic information. Moreover, if reality is made of “monads” that have non-zero intrinsic information and interact with one another, then our conception of “computation” must also consider monad networks. And in particular, the “output” of a computation may in fact be an inner bound state rather than just a sequence of discrete outputs (!).

(b) Intelligence: currently this is a folk concept poorly formalized by the instruments with which we measure it (primarily in terms of sequential logico-linguistic processing). But, alas, intelligence is a function of one’s entire world-simulation: even the shading of the texture of the table in front of you is contributing to the way you “see the world” and thus reason about it. So an enriched conception of intelligence must also take into account: (1) binding, (2) the presence of a self, (3) perspective-taking, (4) distinguishing between the trivial and the significant, and (5) navigation of the state-space of consciousness.

Now that we have these enriched conceptions, we are ready to make sense of the computational role of consciousness: in a way, the whole point of “intelligence” is to avoid brute force solutions by instead recruiting an adequate “self-organizing principle” that can run on the universe’s inherent massively parallel nature. Hence, the “clever” way in which our world-simulation is used: as shown by visual illusions, meditative states, psychedelic experiences, and psychophysics, perception is the result of a balance of field forces that is “just right”. Case in point: our nervous system utilizes the holistic behavior of the field of awareness in order to quickly find symmetry elements (cf. Reverse Grassfire Algorithm).
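The classical grassfire transform gives a flavor of this: a distance map whose ridges trace a shape’s medial axis, i.e. its local symmetry elements. The sketch below uses that classical transform as an illustrative stand-in; it is not QRI’s actual Reverse Grassfire Algorithm:

```python
# Classical grassfire transform: "fire" burns inward from a shape's boundary;
# the points that burn last (the distance maxima) lie on the symmetry core.
import numpy as np
from scipy.ndimage import distance_transform_edt

shape = np.zeros((101, 101), dtype=bool)
shape[30:70, 20:80] = True               # a solid rectangle

dist = distance_transform_edt(shape)     # time-to-burn for every interior point
core = np.argwhere(dist == dist.max())   # deepest quench points
print(core)                              # points along the rectangle's midline
```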

As a concrete example, I articulate the theoretical synthesis QRI has championed that combines Friston’s Free Energy Principle, Atasoy’s Connectome-Specific Harmonic Waves, Carhart-Harris’ Entropic Disintegration, and QRI’s Symmetry Theory of Valence and Neural Annealing to show that the nervous system is recruiting the self-organizing principle of annealing to solve a wide range of computational problems. Other principles will be discussed at a later time.
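For readers who want the annealing intuition in runnable form, here is a minimal simulated-annealing toy (a generic spin system with random couplings, not a brain model): heating corresponds to entropic disintegration, and the slow cooling schedule lets the system settle into a low-energy configuration.

```python
# Minimal simulated annealing on a toy energy landscape: heat, then cool slowly
# so the system settles into a low-energy ("high-harmony") configuration.
import numpy as np

rng = np.random.default_rng(42)
n = 40
J = rng.normal(size=(n, n))
J = (J + J.T) / 2                          # symmetric random couplings
np.fill_diagonal(J, 0)
state = rng.choice([-1, 1], size=n)

def energy(s):
    return -0.5 * s @ J @ s

T = 5.0                                    # "hot" entropic-disintegration phase
for _ in range(20000):
    i = rng.integers(n)
    dE = 2 * state[i] * (J[i] @ state)     # energy change if spin i flips
    if dE < 0 or rng.random() < np.exp(-dE / T):
        state[i] *= -1                     # downhill moves always; uphill at high T
    T = max(0.01, T * 0.9995)              # slow cooling schedule

print("final energy:", energy(state))
```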

To summarize: the reason we are conscious is that being conscious allows you to recruit self-organizing principles that can run in a massively parallel fashion in order to find solutions to problems at [wave propagation] speed. Importantly, this predicts that it’s possible to use e.g. a visual field on DMT in order to quickly find the “energy minima” of a physical state that has been properly calibrated to correspond to the dynamics of a worldsheet in that state. This is falsifiable and exciting.

I conclude with a description of the Goldilocks Zone of Oneness and why you might want to experience it.

~Qualia of the Day: Dior’s Eau Sauvage (EDT)~

Relevant links:

7 Recent Videos: Rational Analysis of 5-MeO-DMT, Utility Monsters, Neroli, Phenomenal Time, Benzo Withdrawal, Scale-Specific Network Geometry, and Why DMT Feels So Real

5-MeO-DMT: A Rational Analysis at Last (link)

Topics covered: Non-Duality, Symmetry, Valence, Neural Annealing, and Topological Segmentation.

See also:


Befriending Utility Monsters: Being the Adult in the Room When Talking About the Hedonic Extremes (link)

In this episode I connect a broad variety of topics with the following common thread: “What does it mean to be the adult in the room when dealing with extremely valenced states of consciousness?” Essentially, a talk on Utility Monsters.

Concretely, what does it mean to be responsible and sensible when confronted with the fact that pain and pleasure follow a long-tail distribution? When discussing ultra-painful or ultra-blissful experiences one needs to take off the glasses we use to reason about “room temperature consciousness” and put on glasses that treat these states with the seriousness they deserve.

Topics discussed include: The partial 5HT3 antagonism of ginger juice, kidney stones from vitamin C supplementation, 2C-E nausea, phenibut withdrawal, akathisia as a remarkably common side effect of psychiatric medication (neuroleptics, benzos, and SSRIs), negative 5-MeO-DMT trips, the book “LSD and the Mind of the Universe”, turbulence and laminar flow in the “energy body”, being a “mom” at a festival, and more.

Further readings on these topics:


Mapping State-Spaces of Consciousness: The Neroli Neighborhood (link)

What would it be like to have a scent-based medium of thought, with grammar, generative syntax, clauses, subordinate clauses, field geometry, and intentionality? How do we go about exploring the full state-space of scents (or any other qualia variety)?

Topics Covered in this Video: The State-space of Consciousness, Mapping State-Spaces, David Pearce at Oxford, Qualia Enrichment Kits, Character Impact vs. Flavors, Linalool Variants, Clusters of Neroli Scents, Neroli in Perfumes, Neroli vs. Orange Blossom vs. Petitgrain vs. Orange/Mandarin/Lemon/Lime, High-Entropy Alloys of Scent, Musks as Reverb and Brown Noise, “Neroli Reconstructions” (synthetic), Semi-synthetic Mixtures, Winner-Takes-All Dynamics in Qualia Spaces, Multi-Phasic Scents, and Non-Euclidean State-Spaces.

Neroli Reconstruction Example:

4 – Linalool
3 – Linalyl Acetate
3 – Valencene
3 – Beta Pinene
2 – Nerolione
2 – Nerolidol
2 – Geraniol Coeur
2 – Hedione
2 – Farnesene
1 – D-Limonene
1 – Nerol
1 – Ambercore
1 – Linalool Oxyde
70 – Ethanol
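As a quick worked example, the parts above sum to 97, so the formula converts into percentages of the total mix in a few lines:

```python
# Convert the parts-based neroli reconstruction above into percentages.
parts = {
    "Linalool": 4, "Linalyl Acetate": 3, "Valencene": 3, "Beta Pinene": 3,
    "Nerolione": 2, "Nerolidol": 2, "Geraniol Coeur": 2, "Hedione": 2,
    "Farnesene": 2, "D-Limonene": 1, "Nerol": 1, "Ambercore": 1,
    "Linalool Oxyde": 1, "Ethanol": 70,
}
total = sum(parts.values())  # 97 parts in all
for name, p in parts.items():
    print(f"{name}: {100 * p / total:.1f}%")  # e.g. Linalool ~4.1%, Ethanol ~72.2%
```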

Further readings:


What is Time? Explaining Time-Loops, Moments of Eternity, Time Branching, Time Reversal, and More… (link)

What is (phenomenal) time?

The feeling of time passing is not the same as physical time.

Albert Einstein discovered that “Newtonian time” was a special case of physical time, since gravity, relativity, and the constancy of the speed of light entail that space, time, mass, and gravity are intimately connected. He, in a sense, discovered a generalization of our common-sense notion of physical time; a generalization which accounts for the effects of moving and accelerating frames of reference on the relative passage of time between observers. Physical time, it turns out, could manifest in many more (exotic) ways than was previously thought.

Likewise, we find that our everyday phenomenal time (i.e. the feeling of time passing) is a special case of a far more general set of possible time-like qualities of experience. In particular, in this video I discuss “exotic phenomenal time” experiences, which include oddities such as time-loops, moments of eternity, time branching, and time reversal. I then go on to explain these exotic phenomenal time experiences with a model we call the “pseudo-time arrow”, which involves implicit causality in the network of sensations we experience on each “moment of experience”. Thus we realize that phenomenal time is an incredibly general property! It turns out that we haven’t even scratched the surface of what’s possible here… it’s about time we do so.
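A toy formalization of the pseudo-time arrow (our illustrative reading, not the full model): treat a moment of experience as a directed graph of implied causal links among sensations. An acyclic graph then reads as ordinary flowing time, while a cycle reads as a time-loop:

```python
# Toy pseudo-time arrow: phenomenal time as implicit causality among sensations.
import networkx as nx

def classify_time(links):
    g = nx.DiGraph(links)
    if nx.is_directed_acyclic_graph(g):
        order = list(nx.topological_sort(g))
        return f"ordinary phenomenal time; pseudo-time order: {order}"
    return "exotic phenomenal time: the causal graph contains a time-loop"

print(classify_time([("sound", "echo"), ("echo", "fading")]))   # time flows
print(classify_time([("a", "b"), ("b", "c"), ("c", "a")]))      # time-loop
```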

Further readings on this topic:


Benzos: Why the Withdrawal is Worse than the High is Good (+ Flumazenil/NAD+ Anti-Tolerance Action) (link)

Most people have low-resolution models of how drug tolerance works. Folk theories that “what goes up must come down” and theories in the medical establishment about how you can “stabilize a patient on a dose” and expect optimal effects long term get in the way of actually looking at how tolerance works.

In this video I explain why benzo withdrawal is far worse than the high they give you is good.

Core arguments presented:

  1. Benzos can treat anxiety, insomnia, palpitations, seizures, hallucinations, etc. If you use them to treat one of these symptoms, the rebound will nonetheless involve all of them.
  2. Kindling – How long-term use leads to neural annealing of the “withdrawal neural patterns”.
  3. Amnesia effects prevent you from remembering the good parts/only remembering the bad parts.
  4. Neurotoxicity from long-term benzo use makes it harder for your brain to heal.
  5. Arousal as a multiplier of consciousness: on benzos the “high” is low arousal and the withdrawal is high arousal (compared to stimulants where you at least will “sleep through the withdrawal”).
  6. Tolerance still builds up even when you don’t have a “psychoactive dose” in your body – the extremely long half-life of clonazepam and diazepam and their metabolites (50h+) means that you still develop long-term tolerance even with weekly or biweekly use (see the quick calculation below)!
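A minimal back-of-the-envelope check of point 6 (assuming simple first-order elimination, and ignoring active metabolites and receptor dynamics):

```python
# How much of a dose with a ~50 h half-life is still on board at the next dose?
half_life_h = 50.0

for interval_h in (7 * 24, 14 * 24):           # weekly and biweekly dosing
    remaining = 0.5 ** (interval_h / half_life_h)
    accumulation = 1 / (1 - remaining)          # steady-state accumulation factor
    print(f"every {interval_h} h: {remaining:.1%} remains -> accumulation x{accumulation:.2f}")
```

Even weekly dosing leaves roughly 10% of the previous dose in the body, so receptor exposure never actually returns to zero between doses.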

I then go into how the (empirically false) common-sense view of drug tolerance is delaying promising research avenues, such as “anti-tolerance drugs” (see links below). In particular, NAD+ IV and Flumazenil seem to have large effect sizes for treating benzo withdrawals. I AM NOT CONFIDENT THAT THEY WORK, but I think it is silly to not look into them with our best science at this point. Clinical trials for NAD+ IV therapy for drug withdrawal are underway, and the research to date on flumazenil seems extremely promising. Please let me know if you have any experience using either of these two tools and whether you had success with them or not.

Note: These treatments may also generalize to other GABAergic drugs like gabapentin, alcohol, and phenibut (which also have horrible withdrawals, but are far shorter than benzo withdrawal).

Further readings:

“Epileptic patients who have become tolerant to the anti-seizure effects of the benzodiazepine clonazepam became seizure-free for several days after treatment with 1.5 mg of flumazenil.[14] Similarly, patients who were dependent on high doses of benzodiazepines […] were able to be stabilised on a low dose of clonazepam after 7–8 days of treatment with flumazenil.[15]”

Flumazenil has been tested against placebo in benzo-dependent subjects. Results showed that typical benzodiazepine withdrawal effects were reversed with few to no symptoms.[16] Flumazenil was also shown to produce significantly fewer withdrawal symptoms than saline in a randomized, placebo-controlled study with benzodiazepine-dependent subjects. Additionally, relapse rates were much lower during subsequent follow-up.[17]

Source: Flumazenil: Treatment for benzodiazepine dependence & tolerance

Scale-Specific Network Geometry (link)

Is it possible for the “natural growth” of a pandemic to be slower than exponential no matter where it starts? What are ways in which we can leverage the graphical properties of the “contact network” of humanity in order to control contagious diseases? In this video I offer a novel way of analyzing and designing networks that may allow us to easily prevent the exponential growth of future pandemics.

Topics covered: The difference between the aesthetic of pure math vs. applied statistics when it comes to making sense of graphs. Applications of graph analysis. Identifying people with a high centrality in social networks. Klout scores. Graphlets. Kinds of graphs: geometric, small world, scale-free, empirical (galactic core + “whiskers”). Pandemics being difficult to control due to exponential growth. Using a sort of “pandemic Klout score” to prioritize who to quarantine, who to vaccinate first. The network properties that made the plague spread so slowly in the Middle Ages. Toroidal planets as having linear pandemic growth after a certain threshold number of infections. Non-integer graph dimensionality. Dimensional chokes. And… kitchen sponges.
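Here is a minimal sketch of the core claim (with BFS shells as a crude stand-in for per-generation infections; the graph choices are illustrative): on a 2-D lattice the number of newly reachable nodes grows roughly linearly per step, while on a scale-free graph it explodes through the hubs.

```python
# Contagion growth depends on network geometry: compare BFS "infection shells"
# on a geometric 2-D lattice vs. a scale-free (hub-dominated) graph.
from collections import Counter
import networkx as nx

def shell_sizes(g, source, steps=6):
    """Number of nodes first reached at each BFS distance from the source."""
    dist = nx.single_source_shortest_path_length(g, source, cutoff=steps)
    counts = Counter(dist.values())
    return [counts.get(d, 0) for d in range(steps + 1)]

lattice = nx.grid_2d_graph(60, 60)                    # geometric network
scale_free = nx.barabasi_albert_graph(3600, 3, seed=1)

print(shell_sizes(lattice, source=(30, 30)))          # ~linear: 1, 4, 8, 12, ...
print(shell_sizes(scale_free, source=0))              # explosive growth via hubs
```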

Readings either referenced in the video or useful to learn more about this topic:

Leskovec’s paper (the last link above):

Main Empirical Findings: Our results suggest a rather detailed and somewhat counterintuitive picture of the community structure in large networks. Several qualitative properties of community structure are nearly universal:

• Up to a size scale, which empirically is roughly 100 nodes, there not only exist well-separated communities, but also the slope of the network community profile plot is generally sloping downward. (See Fig. 1(a).) This latter point suggests, and empirically we often observe, that smaller communities can be combined into meaningful larger communities.

• At size scale of 100 nodes, we often observe the global minimum of the network community profile plot. (Although these are the “best” communities in the entire graph, they are usually connected to the remainder of the network by just a single edge.)

• Above the size scale of roughly 100 nodes, the network community profile plot gradually increases, and thus there is a nearly inverse relationship between community size and community quality. This upward slope suggests, and empirically we often observe, that as a function of increasing size, the best possible communities as they grow become more and more “blended into” the remainder of the network.

We have also examined in detail the structure of our social and information networks. We have observed that a ‘jellyfish’ or ‘octopus’ model [33, 7] provides a rough first approximation to the structure of many of the networks we have examined.

P.S. I forgot to explain the sponge’s relevance: the scale-specific network geometry of a sponge is roughly hyperbolic at a small scale. Then the material is cubic at medium scale. And at the scale where you look at it as flat (being a sheet with finite thickness) it is two-dimensional.


Why Does DMT Feel So Real? Multi-modal Coherence, High Temperature Parameter, Tactile Hallucinations (link)

Why does DMT feel so “real”? Why does it feel like you experience genuine mind-independent realities on DMT?

In this video I explain that we all implicitly rely on a model of which signals are trustworthy and which ones are not. In particular, in order to avoid losing one’s mind during an intense exotic experience (such as those catalyzed by psychedelics, dissociatives, or meditation) one needs to (a) know that one is altered, (b) have a good model of what that alteration entails, and (c) not be altered so strongly that either (a) or (b) breaks down. So drugs that make you forget you are under the influence, or that you don’t know how to model (or have a mistaken model of), can deeply disrupt your “web of trusted beliefs”.

I argue that one cannot really import the models one learned from other psychedelics about “what psychedelics do” to DMT; DMT alters you in a far broader way. For example, most people on LSD may mistrust what they see, but they will not mistrust what they touch (touch remains a “trusted signal” on LSD). But on DMT you can experience tactile hallucinations that are coherent with your visions! “Crossing the veil” on DMT is not a visual experience: it’s a multi-modal experience, like entering a cave hidden behind a waterfall.

Some of the signals that DMT messes with that often convince people that what they experienced was mind-independent include:

  1. Hyperbolic geometry and mathematical complexity; experiencing “impossible objects”.
  2. Incredibly high-resolution multi-modal integration: hallucinations are “coherent” across senses.
  3. Philosophical qualia enhancement: it alters not only your senses and emotions, but also “the way you organize models of reality”.
  4. More “energized” experiences feel inherently more real, and DMT can increase the energy parameter to an extreme degree.
  5. Highly valenced experiences also feel more real – the bliss and the horror are interpreted as “belonging to the vibe of a reality” rather than being just a property of your experience.
  6. DMT can give you powerful hallucinations in every modality: not only visual hallucinations, but also tactile, auditory, scent, taste, and proprioception.
  7. Novel and exotic feelings of “electromagnetism”.
  8. Sense of “wisdom”.
  9. Knowledge of your feelings: the entities know more about you than you yourself know about yourself.

With all of these signals being liable to chaotic alterations on DMT, it makes sense that even very bright and rational people may experience a “shift” in their beliefs about reality. The trusted signals will have altered their consilience point. And since each point of consilience between trusted signals entails a worldview, people who believe in the independent reality of the realms disclosed by DMT share trust in some signals most people don’t even know exist. We can expect some pushback on this analysis from people who trust any of the signals altered by DMT listed above. Which is fine! But… if we want to create a rational Super-Shulgin Academy to really make some serious progress in mapping out the state-space of consciousness, we will need to prevent epistemological mishaps. I.e., we have to model insanity so that we ourselves can stay sane.

[Skip to 4:20 if you don’t care about the scent of rose – the Qualia of the Day today]

Further readings:

“The most common descriptive labels for the entity were being, guide, spirit, alien, and helper. […] Most respondents endorsed that the entity had the attributes of being conscious, intelligent, and benevolent, existed in some real but different dimension of reality, and continued to exist after the encounter.”

Source: Survey of entity encounter experiences occasioned by inhaled N,N-dimethyltryptamine: Phenomenology, interpretation, and enduring effects

That’s it for now!

Please feel free to suggest topics for future videos!

Infinite bliss!

– Andrés

Types of Binding

Excerpt from “Mindmelding: Consciousness, Neuroscience, and the Mind’s Privacy” (2012) by William Hirstein (pgs. 57-58 and 64-65)

The Neuroscience of Binding

When you experience an orchestra playing, you see them and hear them at the same time. The sights and sounds are co-conscious (Hurley, 2003; de Vignemont, 2004). The brain has an amazing ability to make everything in consciousness co-conscious with everything else, so that the co-conscious relation is transitive: that means, if x is co-conscious with y, and y is co-conscious with z, then x is co-conscious with z. Brain researchers hypothesized that the brain’s method of achieving co-consciousness is to link the different areas embodying each portion of the brain state by a synchronizing electrical pulse. In 1993, Llinás and Ribary proposed that these temporal binding processes are responsible for unifying information from the different sensory modalities. Electrical activity, “manifested as variations in the minute voltage across the cell’s enveloping membrane,” is able to spread, like “ripples in calm water” according to Llinás (2002, pp. 9-10). This sort of binding has been found not only in the visual system, but also in other modalities (Engel et al., 2003). Bachmann makes the important point that the binding processes need to be “general and lacking any sensory specificity. This may be understood via a comparison: A mirror that is expected to reflect equally well everything” (2006, 32).

Roelfsema et al. (1997) implanted electrodes in the brain of cats and found binding across parietal and motor areas. Desmedt and Tomberg (1994) found binding between a parietal area and a prefrontal area nine centimeters apart in their subjects, who had to respond with one hand, to signal which finger on another hand had been stimulated – a conscious response to a conscious perception. Binding can occur across great distances in the brain. Engel et al. (1991) also found binding across the two hemispheres. Apparently binding processes can produce unified conscious states out of cortical areas widely separated. Notice, however, that even if there is a single area in the brain where all the sensory modalities, memory, and emotion, and anything else that can be in a conscious state were known to feed into, binding would still be needed. As long as there is any spatial extent at all to the merging area, binding is needed. In addition to its ability to unify spatially separate areas, binding has a temporal dimension. When we engage in certain behaviors, binding unifies different areas that are cooperating to produce a perception-action cycle. When laboratory animals were trained to perform sensory-motor tasks, the synchronized oscillations were seen to increase both within the areas involved in performing the task and across those areas, according to Singer (1997).

Several different levels of binding are needed to produce a full conscious mental state:

  1. Binding of information from many sensory neurons into object features
  2. Binding of features into unimodal representations of objects
  3. Binding of different modalities, e.g., the sound and movement made by a single object
  4. Binding of multimodal object representations into a full surrounding environment
  5. Binding of representations, emotions, and memories, into full conscious states.

So is there one basic type of binding, or many? The issue is still debated. On the side of there being a single basic process, Koch says that he is content to make “the tentative assumption that all the different aspects of consciousness (smell, pain, vision, self-consciousness, the feeling of willing an action, of being angry and so on) employ one or perhaps a few common mechanisms” (2004, p15). On the other hand, O’Reilly et al. argue that “instead of one simple and generic solution to the binding problem, the brain has developed a number of specialized mechanisms that build on the strengths of existing neural hardware in different brain areas” (2003, p.168).

[…]

What is the function of binding?

We saw just above that Crick and Koch suggest a function for binding, to assist a coalition of neurons in getting the “attention” of prefrontal executive processes when there are other competitors for this attention. Crick and Koch also claim that only bound states can enter short-term memory and be available for consciousness (Crick and Koch, 1990). Engel et al. mention a possible function of binding: “In sensory systems, temporal binding may serve for perceptual grouping and, thus, constitute an important prerequisite for scene segmentation and object recognition” (2003, 140). One effect of malfunctions in the binding process may be a perceptual disorder in which the parts of objects cannot be integrated into a perception of the whole object. Riddoch and Humphreys (2003) describe a disorder called ‘integrative agnosia’ in which the patient cannot integrate the parts of an object into a whole. They mention a patient who is given a photograph of a paintbrush but sees the handle and the bristles as two separate objects. Breitmeyer and Stoerig (2006, p.43) say that:

[P]atients can have what are called “apperceptive agnosia,” resulting from damage to object-specific extrastriate cortical areas such as the fusiform face area and the parahippocampal place area. While these patients are aware of qualia, they are unable to segment the primitive unity into foreground or background or to fuse its spatially distributed elements into coherent shapes and objects.

A second possible function of binding is a kind of bridging function: it makes high-level perception-action cycles go through. Engel et al. say that, “temporal binding may be involved in sensorimotor integration, that is, in establishing selective links between sensory and motor aspects of behavior” (2003, p.140).

Here is another hypothesis we might call the scale model theory of binding. For example, in order to test a new airplane design in a wind tunnel, one needs a complete model of it. The reason for this is that a change in one area, say the wing, will alter the aerodynamics of the entire plane, especially those areas behind the wing. The world itself is quite holistic. […] Binding allows the executive processes to operate on a large, holistic model of the world in a way that allows the model to simulate the same holistic effects found in the world. The holism of the represented realm is mirrored by a type of brain holism in the form of binding.
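The binding-by-synchrony hypothesis discussed in the excerpt above is easy to play with computationally. Below is a minimal Kuramoto model (a standard toy in the coupled-oscillator literature, not Hirstein’s or Singer’s actual proposal): oscillators with scattered natural frequencies are pulled toward a common phase by coupling, and the order parameter r indexes how “bound” the ensemble is.

```python
# Mean-field Kuramoto model: coupling K pulls N oscillators into synchrony.
# The order parameter r (0 = incoherent, 1 = fully phase-locked) serves as a
# toy index of how "bound" the ensemble is.
import numpy as np

rng = np.random.default_rng(7)
n, dt, K = 100, 0.01, 2.0
omega = rng.normal(0.0, 1.0, n)            # natural frequencies
theta = rng.uniform(0, 2 * np.pi, n)       # initial phases

for _ in range(5000):
    mean_field = np.mean(np.exp(1j * theta))
    r, psi = np.abs(mean_field), np.angle(mean_field)
    theta += dt * (omega + K * r * np.sin(psi - theta))

print(f"order parameter r = {r:.2f}")      # higher K -> higher r (more binding)
```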


See also these articles about (phenomenal) binding:

Qualia Computing at: TSC 2020, IPS 2020, unSCruz 2020, and Ephemerisle 2020

[March 12 2020 update: Both TSC and IPS are being postponed due to the coronavirus situation. At the moment we don’t know if the other two events will go ahead. I’ll update this entry when there is a confirmation either way. May 6 2020 update: unSCruz was canceled this year as well. Moreover, as an organization, QRI has chosen not to attend Ephemerisle this year, whether or not it ends up being canceled. Dear readers: I’m sure we’ll have future opportunities to meet in person.]


These are the 2020 events lined up for me at the moment (though more are likely to pop up):

  • I will be attending The Science of Consciousness 2020 from the 13th to the 17th of April representing the Qualia Research Institute (QRI). I will present a novel approach to solving the combination problem for panpsychism. The core idea is to use the concept of topological segmentation to explain how the universal wavefunction can develop boundaries with causal power (and thus capable of being recruited by natural selection for information-processing purposes), which might also be responsible for the creation of discrete moments of experience. I am including the abstract in this post (see below).
  • I will then fly out to Boston for the Intercollegiate Psychedelics Summit (IPS) from the 18th to the 20th of April (though I will probably stay for a few more days in order to meet people in the area). Here I will be presenting on intelligent strategies for exploring the state-space of consciousness.
  • At the end of April I will be attending the 2020 Santa Cruz Burning Man Regional (“unSCruz“) with a small contingent of members and friends of QRI. We will be showcasing some of our neurotech prototypes and conducting smell tests (article about this coming soon).
  • And from the 20th to the 27th of July I will be at Ephemerisle 2020 alongside other members of QRI. We will be staying on the “Consciousness Boat” and showcasing some interesting demos. In particular, expect to see new colors, have fully-sober stroboscopic hallucinations, and explore the state-space of visual textures.

I am booking some time in advance to meet with Qualia Computing readers, people interested in the works of the Qualia Research Institute, and potential interns and visiting scholars. Please message me if you are attending any of these events and would like to meet up.


Here is the abstract I submitted to TSC 2020:

Title – Topological Segmentation: How Dynamic Stability Can Solve the Combination Problem for Panpsychism

Primary Topic Area – Mental Causation and the Function of Consciousness

Secondary Topic Area – Panpsychism and Cosmopsychism

Abstract – The combination problem complicates panpsychist solutions to the hard problem of consciousness (Chalmers 2013). A satisfactory solution would (1) avoid strong emergence, (2) sidestep the hard problem of consciousness, (3) prevent the complications of epiphenomenalism, and (4) be compatible with the modern scientific world picture. We posit that topological approaches to the combination problem of consciousness could achieve this. We start by assuming a version of panpsychism in which quantum fields are fields of qualia, as is implied by the intrinsic nature argument for panpsychism (Strawson 2008) in conjunction with wavefunction realism (Ney 2013). We take inspiration from quantum chemistry, where the observed dynamic stability of the orbitals of complex molecules requires taking the entire system into account at once. The scientific history of models for chemical bonds starts with simple building blocks (e.g. Lewis structures), and each step involves updating the model to account for holistic behavior (e.g. resonance, molecular orbital theory, and the Hartree-Fock method). Thus the causal properties of a molecule are identified with the fixed points of dynamic stability for the entire atomic system. The formalization of chemical holism physically explains why molecular shapes that create novel orbital structures have a weak downward causation effect on the world without needing to invoke strong emergence. For molecules to be “natural units” rather than just conventional units, we can introduce the idea that topological segmentation of the wavefunction is responsible for the creation of new beings. In other words, if dynamical stability entails the topological segmentation of the wavefunction, we get a story where physically-driven behavioral holism is accompanied by the ontological creation of new beings. Applying this insight to solve the combination problem for panpsychism, each moment of experience might be identified with a topologically distinct segment of the universal wavefunction. This topological approach makes phenomenal binding weakly causally emergent while entailing the generation of new beings. The account satisfies the set of desiderata we started with: (1) no strong emergence is required because behavioral holism is implied by dynamic stability (itself only weakly emergent on the laws of physics), (2) we sidestep the hard problem via panpsychism, (3) phenomenal binding is not epiphenomenal because the topological segments have holistic causal effects (such that evolution would have a reason to select for them), and (4) we build on top of the laws of physics rather than introduce new clauses to account for what happens in the nervous system. This approach to the binding problem does not itself identify the properties responsible for the topological segmentation of the universal wavefunction that creates distinct moments of experience. But it does tell us where to look. In particular, we posit that both quantum coherence and entanglement networks may have the precise desirable properties of dynamical stability accompanied by topological segmentation. Hence experimental paradigms such as probing the CNS at femtosecond timescales to find a structural match between quantum coherence and local binding (Pearce 2014) could empirically validate our solution to the combination problem for panpsychism.



See Also:

Binding Quiddities

Excerpt from The Combination Problem for Panpsychism (2013) by David Chalmers


[Some] versions of identity panpsychism are holistic in that they invoke fundamental physical entities that are not atomic or localized. One such view combines identity panpsychism with the monistic view that the universe itself is the most fundamental physical entity. The result is identity cosmopsychism, on which the whole universe is conscious and on which we are identical to it. (Some idealist views in both Eastern and Western traditions appear to say something like this.) Obvious worries for this view are that it seems to entail that there is only one conscious subject, and that each of us is identical to each other and has the same experiences. There is also a structural mismatch worry: it is hard to see how the universe’s experiences (especially given a Russellian view on which these correspond to the universe’s physical properties) should have anything like the localized idiosyncratic structure of my experiences. Perhaps there are sophisticated versions of this view on which a single universal consciousness is differentiated into multiple strands of midlevel macroconsciousness, where much of the universal consciousness is somehow hidden from each of us. Still, this seems to move us away from identity cosmopsychism toward an autonomous cosmopsychist view in which each of us is a distinct constituent of a universal consciousness. As before, the resulting decomposition problem seems just as hard as the combination problem.

Perhaps the most important version of identity panpsychism is quantum holism. This view starts from the insight that on the most common understandings of quantum mechanics, the fundamental entities need not be localized entities such as particles. Multiple particles can get entangled with each other, and when this happens it is the whole entangled system that is treated as fundamental and that has fundamental quantum-mechanical properties (such as wave functions) ascribed to it. A panpsychist might speculate that such an entangled system, perhaps at the level of the brain or one of its subsystems, has microphenomenal properties. On the quantum holism version of identity panpsychism, macrosubjects such as ourselves are identical to these fundamental holistic entities, and our macrophenomenal properties are identical to its microphenomenal properties.

This view has more attractions than the earlier views, but there are also worries. Some worries are empirical: it does not seem that there is the sort of stable brain-level entanglement that would be needed for this view to work. Some related worries are theoretical: on some interpretations of quantum mechanics the locus of entanglement is the whole universe (leading us back to cosmopsychism), on others there is no entanglement at all, and on still others there are regular collapses that tend to destroy this sort of entanglement. But perhaps the biggest worry is once again a structural mismatch worry. The structure of the quantum state of brain-level systems is quite different from the structure of our experience. Given a Russellian view on which microphenomenal properties correspond directly to the fundamental microphysical properties of these entangled systems, it is hard to see how they could have the familiar structure of our macroexperience.

The identity panpsychist (of all three sorts) might try to remove some of these worries by rejecting Russellian panpsychism, so that microphenomenal properties are less closely tied to microphysical structure. The cost of this move is that it becomes much less clear how these phenomenal properties can play a causal role. On the face of it they will be either epiphenomenal, or they will make a difference to physics. The latter view will in effect require a radically revised physics with something akin to our macrophenomenal structure present at the basic level. Then phenomenal properties will in effect be playing the role of quiddities within this revised physics, and the resulting view will be a sort of revisionary Russellian identity panpsychism.