From Neural Activity to Field Topology: How Coupling Kernels Shape Consciousness

This post aims to communicate a simple yet powerful idea: if you have a system of coupled oscillators controlled by a coupling kernel, you can use the kernel not only to “tune into” resonant modes of the system, but also as a point of leverage to control the topological structure of the fields interacting with the oscillators.

This might be a way to explain how topological boundaries are mediated by neuronal activity, which in turn can be modulated by drugs and neurotransmitter concentrations, and thereby provide a link between neurochemistry and the topological structure of experience. Two things fall out of this. First, we gain the conceptual tools to link neural activity to the creation of global topological boundaries (which at QRI we postulate are what separates a moment of experience from the rest of the universe). Second, we gain a way to explain how changes in oscillator/neural activity give rise to differently structured internal topologies, which (together with a way of interpreting the mapping between the topology of a field and its phenomenology) can help us explain things like the phenomenological differences between states of consciousness triggered by drugs as different as DMT and 5-MeO-DMT. In other words, this post points at how we can get topological structure out of oscillatory activity – and thus explain how conscious boundaries (both local and global) are modulated both natively and through neuropharmacological interventions. It’s an algorithmic reduction with potentially very large explanatory power in consciousness research, one that is only now becoming conceptually accessible thanks to years of research and development at QRI.

Let’s start with a Big Picture Summary of the framework:

QRI aims to develop a holistic theoretical framework for consciousness. This latest iteration aims to integrate electromagnetic field theories of consciousness, connectome-specific harmonic waves, coupling kernels, and field topology in a way that might be capable of providing both explanatory and predictive power in the realm of phenomenology and its connection to biology. While this is an evolving framework, I see a lot of value in sharing the general idea (the “big picture”) we have at the moment, to start informing the community and collaborators about how we’re thinking about unifying frameworks for understanding consciousness. The core elements of the Big Picture are:

  • Coupling Kernels as Neural-Global Bridge: The coupling kernel serves as a critical bridge between local neural circuitry and global brain-wide behavior. As demonstrated in Marco Aqil’s work, when scaling up from neural microcircuits, the power distribution across different system harmonics can be modulated through coupling kernel parameters. This is something we arrived at independently last year in a very empirical and hands-on way, but Marco’s precise mathematical framework provides a solid theoretical foundation for this connection.
  • Geometric Constraints on Coupling Effects: The underlying geometry of a system fundamentally shapes how coupling kernels manifest their effects: resonant modes accessible through coupling kernels differ significantly between scale-free and geometric networks. Within geometric networks, specific geometries and dimensionalities generate characteristic resonant patterns. Thus, a single “high level” effect like a change in coupling kernel can have a wide range of different effects depending on the type of network/system to which it is applied.
  • Network Geometry Interactions and Projective Intelligence: A fundamental computational principle emerges from the interaction between networks of different geometries/topologies. This underlies “projective intelligence” (or more broadly, mapping/functional intelligence) – as exemplified by the interaction between the 2D visual field and 3D somatic field.
  • Topological Solution to the Boundary Problem: The topological solution to the boundary problem elucidates how physically “hard” boundaries with causal significance and holistic behavior could explain the segmentation of consciousness into discrete experiential moments.
  • Internal Topology and Phenomenology: The internal topological complexity within a globally segmented topological field pocket may determine its phenomenology – specifically, the field’s topological defects might establish the boundary conditions.
  • 5-MeO-DMT and Topological Simplification: 5-MeO-DMT experiences demonstrate phenomenological topological simplification as documented by Cube Flipper and other HEART members.
  • Coupling Kernels and Field Topology: Coupling kernels applied to electric oscillators can modulate field topology (observable in the vortices and anti-vortices of the magnetic field containing the electric oscillators, which you can see in the simulations below).
  • DMT vs 5-MeO-DMT Effects: This framework offers an explanation for the characteristic effects of DMT and 5-MeO-DMT: DMT generates competing coherence clusters and multiple simultaneous observer perspectives – interpretable as topological complexification within the pocket. Conversely, 5-MeO-DMT induces simplification where boundaries mutually cancel, ultimately producing experiences characterized by a single large pinwheel and the dissolution of topological defects (as in cessation states).
  • Paths and Experience: The Path Integral of Perspectives – The final theoretical component suggests that the subjective experience of a topological pocket emerges from “the superposition of all possible paths” within it. The topological simplicity of 5-MeO-DMT states may generate an “all things at once” quality due to the absence of internal boundaries constraining the state. In contrast, DMT’s complex internal topology results in each topological defect functioning as an observer, creating the sensation of multiple entities.

We’re currently developing empirical paradigms to test these frameworks, including psychophysics studies and simulations of brain activity to reconstruct behavior observed through neuroimaging. These ideas are fresh and need a lot of work to be validated and integrated into mainstream science, but we see a path forward and we’re excited to get there.

Now let’s dive into these components and explain them more fully:

0. What’s a Coupling Kernel?

The core concept vis-à-vis QRI was introduced in Cessation states: Computer simulations, phenomenological assessments, and EMF theories (Percy, Gómez-Emilsson, & Fakhri, 2024), where we provided a novel conceptual framework to make sense of meditative cessations (i.e. brief moments at high levels of concentration where “everything disappears”). Coupling kernels were part of the conceptual machinery that allowed us to propose a model for cessations, but it is worth mentioning that they stand on their own as a neat tool that bridges low-level connectivity and high-level resonance in systems of coupled oscillators. The idea is simple: in a system of coupled oscillators with a distance function defined for each pair of oscillators, a coupling kernel is a set of parameters that tells you what the coupling coefficient should be as a function of that distance. I independently arrived at this idea (which others have explored in the past to an extent) during the Canada HEART retreat in order to explain a wide range of phenomenological observations derived from meditative and psychedelic states of consciousness. In particular, we wanted to have a simple algorithmic reduction to be able to explain the divergent effects of DMT and 5-MeO-DMT: the former seems to trigger “competing clusters of coherence” in sensory fields, whereas the latter seems to pull the entire system to a state of global coherence (in a dose-dependent way). Thinking of systems of coupled oscillators, I hypothesized that perhaps DMT induces a sort of alternative coupling kernel (where immediate neighbors want to be as different as possible from each other, whereas neighbors a little further apart want to be similar) while 5-MeO-DMT might instantiate a general “positive kernel” where oscillators all want to be in phase regardless of relative distance.
We are in the process of developing empirical paradigms to validate this framework, so please take this with a grain of salt; the paradigm is currently in early developmental stages, but it is nonetheless worth sharing for the reasons I mentioned already (bringing collaborators up to speed and getting the community to start thinking in this new way).
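To make the definition concrete, here is a minimal toy sketch of the idea (not QRI’s actual simulation code): a ring of phase oscillators whose pairwise coupling coefficient is read off a distance-indexed kernel. The kernel arrays, coupling strengths, and step counts below are illustrative assumptions, not measured values.

```python
import numpy as np

def ring_distance(n):
    """Pairwise circular distances between n oscillators on a ring."""
    idx = np.arange(n)
    d = np.abs(idx[:, None] - idx[None, :])
    return np.minimum(d, n - d)

def kernel_to_coupling(dist, kernel):
    """Map each pairwise distance to a coupling coefficient.
    kernel[d] is the coupling at distance d; farther pairs get zero."""
    K = np.zeros(dist.shape)
    mask = dist < len(kernel)
    K[mask] = np.asarray(kernel, dtype=float)[dist[mask]]
    np.fill_diagonal(K, 0.0)  # no self-coupling
    return K

def simulate(K, steps=2000, dt=0.05, seed=0):
    """Kuramoto-style dynamics: dtheta_i/dt = sum_j K_ij * sin(theta_j - theta_i).
    Initial phases are drawn from a half-circle so that the positive kernel's
    synchronization basin is guaranteed."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, np.pi, K.shape[0])
    for _ in range(steps):
        theta = theta + dt * (K * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    return theta

def coherence(theta):
    """Kuramoto order parameter |r| in [0, 1]; 1 means global phase lock."""
    return float(np.abs(np.exp(1j * theta).mean()))

n = 60
dist = ring_distance(n)
# "5-MeO-like" flat positive kernel vs "DMT-like" Mexican-hat-like kernel
flat = kernel_to_coupling(dist, [0.0, 0.5, 0.5, 0.5, 0.5])
hat = kernel_to_coupling(dist, [0.0, -0.5, -0.5, 0.5, 0.5])
r_flat, r_hat = coherence(simulate(flat)), coherence(simulate(hat))
print(f"flat-kernel coherence: {r_flat:.2f}, Mexican-hat coherence: {r_hat:.2f}")
```

The flat positive kernel drives the ring toward a single global phase (high coherence), while the alternating kernel destabilizes global synchrony and settles into a spatial pattern – a miniature version of the 5-MeO-DMT vs DMT contrast described above.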

As demonstrated in our work “Towards Computational Simulations of Cessation”, a flat coupling kernel triggers a global attractor of coherence across the entire system, whereas an alternating negative-positive (Mexican hat-like) kernel produces competing clusters of coherence. This is just a very high-level and abstract demonstration of a change in the dynamic behavior of coupled oscillators by applying a coupling kernel. What we then must do is see how such a change would impact different systems in the organism as a whole.
Source

It is worth mentioning that in all of our simulations we also add a “small world” lever. This is constructed as follows: at the start of the simulation, for each oscillator we select two other oscillators at random and wire them to it. The lever controls the coupling constant between each oscillator and the two randomly chosen oscillators assigned to it. In graph theory, this kind of network architecture is often called a “small-world network” because the diameter of the graph quickly collapses as you add more random connections (and in our case, the system synchronizes as you add a positive coupling constant for these connections). In practice, while the distance-based coupling kernel tunes into resonant modes (traveling waves, checkerboard patterns, etc. as we will see below), the small-world coupling constant adds a kind of geometric noise (when negative) and a global phase to which all oscillators can easily synchronize (when positive). In effect, we suspect that small-world network-like neural wiring might be responsible for things like dysphoric seizures (due to a high level of synchrony coupled with geometric irregularity causing intense dissonance) and disruption of consonant traveling waves (e.g. as a way to modulate anxiety). The phenomenology of being hungover or of experiencing benzo withdrawal might have something to do with an overactive negative small-world network coupling constant.
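The “small world” lever described above can be caricatured in a few lines: a ring of oscillators gets two random long-range links per node, whose shared strength is the lever. The diameter collapse is easy to verify directly; all constants below are illustrative assumptions.

```python
import numpy as np
from collections import deque

def ring_plus_random(n, local_k, sw_k, n_random=2, seed=1):
    """Nearest-neighbour ring coupling plus n_random random long-range links
    per oscillator, whose shared strength sw_k is the "small-world lever"."""
    rng = np.random.default_rng(seed)
    K = np.zeros((n, n))
    for i in range(n):
        K[i, (i - 1) % n] = K[i, (i + 1) % n] = local_k
        others = np.delete(np.arange(n), i)
        for j in rng.choice(others, size=n_random, replace=False):
            K[i, j] += sw_k
            K[j, i] += sw_k  # keep the coupling symmetric
    return K

def diameter(K):
    """Graph diameter over the non-zero couplings (BFS from every node)."""
    n = K.shape[0]
    adj = [np.flatnonzero(K[i]) for i in range(n)]
    best = 0
    for s in range(n):
        dist = [-1] * n
        dist[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if dist[v] < 0:
                    dist[v] = dist[u] + 1
                    q.append(v)
        best = max(best, max(dist))
    return best

def final_coherence(K, steps=2500, dt=0.05, seed=0):
    rng = np.random.default_rng(seed)
    th = rng.uniform(0, np.pi, K.shape[0])  # half-circle initial phases
    for _ in range(steps):
        th = th + dt * (K * np.sin(th[None, :] - th[:, None])).sum(axis=1)
    return float(np.abs(np.exp(1j * th).mean()))

n = 80
ring = ring_plus_random(n, 0.4, 0.0)      # lever off: plain ring
sw_pos = ring_plus_random(n, 0.4, +0.4)   # positive lever
sw_neg = ring_plus_random(n, 0.4, -0.4)   # negative lever ("geometric noise")
d_ring, d_sw = diameter(ring), diameter(sw_pos)
r_pos, r_neg = final_coherence(sw_pos), final_coherence(sw_neg)
print(f"diameter {d_ring} -> {d_sw}; coherence +lever {r_pos:.2f}, -lever {r_neg:.2f}")
```

The random links shrink the graph diameter dramatically, a positive lever yields easy global synchronization, and a negative lever frustrates the smooth modes – the “geometric noise” effect mentioned above.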

1. Coupling Kernels as Neural-Global Bridge

One of the early simulations that I coded would analyze, in real time, the Discrete Cosine Transform of a 2D system of oscillators as different coupling kernels were applied to it. Intuitively, I knew that the shape of the kernel clearly selected for specific resonant modes of the entire system, but seeing in real time how robust this effect was made me think there probably was a deep mathematical reason behind it. Indeed, as you can see in the animations below, the kernel shape can select checkerboard patterns, traveling waves, and even large pinwheels, all of which have characteristic spatial frequencies that are easily noted in the DCT of the plate of oscillators.

The animations above show the coupling kernel for a 2D system of coupled oscillators (top-left quadrant), the Discrete Cosine Transform of the 2D plate of oscillators (top-right), a temporal low-pass filter on the DCT (bottom-left), and a temporal high-pass filter on the DCT (bottom-right). Source: Internal QRI tool (public release forthcoming)
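A stripped-down version of this analysis can be sketched as follows. For brevity the FFT is used as a stand-in for the DCT, and the lattice size, coupling values, and step counts are illustrative assumptions: a negative nearest-neighbour kernel selects the checkerboard (Nyquist) mode, while a positive one selects the smooth DC-adjacent regime.

```python
import numpy as np

def evolve_plate(n=16, k_nn=0.5, spread=np.pi, steps=3000, dt=0.05, seed=0):
    """2D lattice of phase oscillators with nearest-neighbour coupling k_nn
    and periodic boundaries. Initial phases are drawn from [0, spread)."""
    rng = np.random.default_rng(seed)
    th = rng.uniform(0, spread, (n, n))
    for _ in range(steps):
        force = np.zeros_like(th)
        for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            force += k_nn * np.sin(np.roll(th, shift, axis=(0, 1)) - th)
        th = th + dt * force
    return th

def dominant_spatial_freq(th):
    """Spatial frequency with peak spectral power of cos(theta), DC excluded.
    (FFT used here as a simple stand-in for the DCT of the tool above.)"""
    spec = np.abs(np.fft.fft2(np.cos(th))) ** 2
    spec[0, 0] = 0.0
    i, j = np.unravel_index(np.argmax(spec), spec.shape)
    f = np.fft.fftfreq(th.shape[0])
    return f[i], f[j]

smooth = evolve_plate(k_nn=+0.5, spread=np.pi)        # half-circle init -> syncs
checker = evolve_plate(k_nn=-0.5, spread=2 * np.pi)   # anti-phase checkerboard
fx, fy = dominant_spatial_freq(checker)
r_smooth = float(np.abs(np.exp(1j * smooth).mean()))
print(f"smooth-plate coherence {r_smooth:.2f}; checker peak at ({fx:.2f}, {fy:.2f})")
```

The point of the sketch is the same as the animations: the kernel's sign structure directly picks out which spatial frequency of the plate carries the power.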

In November of last year at a QRI work retreat we stumbled upon two key frameworks that directly address these concepts in the research of Marco Aqil. Namely, CHAOSS (Connectome-Harmonic Analysis Of Spatiotemporal Spectra) and Divisive Normalization. In those works we find how the coupling kernel serves as the critical bridge between local neural activity and global brain-wide behavior. This connection emerges from deep mathematical principles explored in the CHAOSS framework. As we scale up from individual neural circuits to larger networks, the distribution of power across different harmonics of the system becomes accessible through modulation of the coupling kernel. CHAOSS reveals how the eigenmodes (in this case corresponding to “connectome harmonics”) of our structural wiring give rise to global patterns of brain activity. When provided appropriate coupling parameters, neural systems resonate with specific structural frequencies, producing macroscopic standing waves that unify and reorganize local activation patterns.

The link between molecular mechanisms and coupling kernels becomes particularly clear through divisive normalization. This canonical neural computation principle describes how a neuron’s response to input is modulated by the activity of surrounding neurons through specific molecular pathways. Different receptor systems (like 5-HT2A and 5-HT1A) can alter these normalization circuits in characteristic ways (perhaps ultimately explaining the implementation-level effects discussed in Serotonin and brain function: a tale of two receptors (Carhart-Harris & Nutt, 2017)). When we map this to our coupling kernel framework, we see that changes in divisive normalization directly translate to changes in the coupling kernel’s shape. For instance, 5-HT2A activation might enhance local inhibition while simultaneously strengthening medium-range excitation, creating the alternating positive-negative coupling pattern characteristic of DMT states. Conversely, 5-HT1A activation might promote more uniform positive coupling across distances, explaining 5-MeO-DMT’s tendency toward global coherence. This provides a concrete mechanistic bridge from receptor activation to field topology: receptor binding → altered divisive normalization → modified coupling kernel → changed field topology. It’s a beautiful example of how a relatively simple molecular change can propagate through multiple scales to create profound alterations in consciousness.
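The proposed chain “altered divisive normalization → modified coupling kernel” can be caricatured with a difference-of-Gaussians kernel whose inhibitory gain stands in for the normalization change downstream of receptor activation. The parametrisation below is entirely hypothetical; it only shows how turning one gain knob flips the kernel between “uniformly positive” and “Mexican-hat” regimes.

```python
import numpy as np

def effective_kernel(d, exc_gain=1.0, exc_width=2.0, inh_gain=0.6, inh_width=1.5):
    """Toy difference-of-Gaussians coupling kernel (hypothetical parametrisation).

    inh_gain stands in for local inhibition strength (e.g. as modified by
    divisive-normalization changes); exc_gain/exc_width for medium-range excitation.
    """
    return (exc_gain * np.exp(-(d / exc_width) ** 2)
            - inh_gain * np.exp(-(d / inh_width) ** 2))

d = np.arange(6)
uniform = effective_kernel(d, inh_gain=0.0)   # "5-MeO-like": positive at all distances
mexican = effective_kernel(d, inh_gain=1.5)   # "DMT-like": negative centre, positive ring
print("uniform:", np.round(uniform, 2))
print("mexican:", np.round(mexican, 2))
```

With zero inhibitory gain every distance couples positively; with strong inhibitory gain, near neighbours repel while medium-range neighbours still attract – the two kernel shapes this post associates with 5-MeO-DMT and DMT respectively.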

In the CHAOSS framework, each brain region and pathway is represented as a node and edge on a distance-weighted graph. The framework applies spatiotemporal graph filters that act as coupling kernels, encoding how each node influences and is influenced by its neighbors across multiple time scales. By systematically adjusting parameters for excitatory and inhibitory interactions, we can effectively “scan” the connectome’s harmonic space: certain configurations produce stable resonance, others generate traveling waves or chaotic patterns, and some configurations may induce boundary-dissolving states that might prevent the formation of gestalts, and so on. The point being that it can be rigorously shown that in a system of coupled oscillators, a spatial (or temporal) coupling kernel can effectively “tune into” global resonant modes of the entire system.
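The claim that kernels “tune into” global resonant modes can be grounded in the standard machinery of graph Laplacian eigenmodes (the mathematical object behind connectome harmonics). Here is a minimal sketch on a toy “connectome” whose graph and weights are invented purely for illustration:

```python
import numpy as np

def laplacian_modes(W):
    """Eigenmodes of the graph Laplacian L = D - W ("connectome harmonics"
    in the CHAOSS sense). Low eigenvalues correspond to smooth, global modes;
    high eigenvalues to fine-grained, rapidly alternating patterns."""
    L = np.diag(W.sum(axis=1)) - W
    evals, evecs = np.linalg.eigh(L)  # eigh returns eigenvalues in ascending order
    return evals, evecs

# Toy "connectome": a weighted ring of 8 regions with one long-range shortcut.
n = 8
W = np.zeros((n, n))
for i in range(n):
    W[i, (i + 1) % n] = W[(i + 1) % n, i] = 1.0
W[0, 4] = W[4, 0] = 0.5  # shortcut

evals, evecs = laplacian_modes(W)
print("harmonic frequencies (eigenvalues):", np.round(evals, 3))
```

A coupling kernel that amplifies the low-eigenvalue harmonics projects activity onto smooth, system-wide modes, while one favouring high eigenvalues selects fine-grained alternating patterns – which is the sense in which "scanning" kernel parameters scans the connectome's harmonic space.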

At the very lowest level, Marco’s work on Divisive Normalization suggests that there is a canonical mode of neural computation in which the response of a population of neurons to a given input signal is mediated by the surrounding context, via a circuit that involves neurons responding to different neurotransmitter systems. In particular, here we have a bridge that links the very low-level neural circuits to the coupling kernels, which in turn excite specific harmonic resonant modes of the entire system. In other words, the coupling kernel is a sort of intermediate “meso-level” structure that provides system-wide dynamic control of resonance and can be derived as a function of the balance between different neuronal populations that respond to specific neurotransmitters (learn more).

The result of encountering this research is that we now have a crisp conceptual explanation for how coupling kernels might arise from (and be controlled by) low-level circuitry, and also why (in a mathematically rigorous way) such kernels can tune into global resonant modes. It therefore starts to look like there is a potentially highly rigorous link between the insights that come from QRI’s Think Tank “taking phenomenology seriously” approach and the current leading academic theories of how drugs affect perception.

2. Geometric Constraints on Coupling Effects

With the above said, the human organism is really complex, and so it is natural to ask: what exactly does the coupling kernel apply to? As recently argued, we propose that it would be highly parsimonious if the coupling kernel applied to a range of systems at the same time: the visual cortex, the auditory cortex, the somatosensory cortex, the peripheral nervous system, and even the vasculature. Here the conceptual framework would say that a given drug might change the way low-level circuitry results in divisive normalization with specific constants, and that this change is applied to a wide range of systems. When you take LSD you get a characteristic “vibrational pattern” that might be present in, say, both the vascular system and the visual cortex at the same time. The underlying change is very simple, but the resulting effect is system-dependent due to the characteristic geometry and topology of each subsystem that is affected.

I think that a key insight we ought to work with is that the geometry of the system on which a coupling kernel operates fundamentally determines its high-level effects. A particularly striking example of how geometry shapes coupling kernel effects can be seen in the contrast between the visual cortex and the vasculature system. The visual cortex, organized as a hierarchical geometric network with distinct layers and columnar organization, responds to coupling kernels in ways that reflect its structural hierarchy. When a DMT-like kernel (alternating positive-negative coupling constants) is applied, it generates competing clusters of coherence at different scales of the hierarchy. This manifests phenomenologically as the characteristic layered, fractal-like visual patterns reported in DMT experiences, where similar motifs appear nested at multiple scales. In contrast, a 5-MeO-DMT-like kernel (uniformly positive coupling) drives the hierarchical network toward global synchronization, potentially contributing to the reported dissolution of visual structure in 5-MeO-DMT experiences.

Simulation comparing coupling kernels across a hierarchical network of feature-selective layers (16×16 to 2×2), showing how different coupling coefficients between and within layers affect pattern formation. The DMT-like kernel (-1.0 near-neighbor coupling) generates competing checkerboard patterns at multiple spatial frequencies, while the 5-MeO-DMT-like kernel (positive coupling coefficients) drives convergence toward larger coherent patches. These distinct coupling dynamics mirror how these compounds might modulate hierarchical neural architectures like the visual cortex.
Source: Internal QRI tool (public release forthcoming)

The vasculature system, on the other hand, exemplifies a scale-free network with its branching architecture. Here, the same coupling kernels produce markedly different effects. In the vasculature, a DMT-like kernel would tend to create competing clusters of coherence primarily at bifurcation points, where vessels branch. This could explain some of the characteristic bodily sensations reported during DMT experiences, such as the feeling of energy concentrating at specific points in the body. When a 5-MeO-DMT-like kernel is applied to this scale-free network, it drives the entire system toward global phase synchronization, potentially contributing to the reports of profound bodily dissolution and unity experiences (when you experience a dysphoric 5-MeO-DMT response, oftentimes this can be traced to a mostly coherent but slightly off pattern of flow, where “energy” strongly aggregates at a specific point; cf. Arataki’s Guide to 5-MeO-DMT).

Simulation comparing different coupling kernels (DMT-like vs 5-MeO-DMT-like) applied to a 1.5D fractal branching network, showing how modified coupling parameters affect phase coherence and signal propagation. The DMT-like kernel produces competing clusters of coherence at bifurcation points, while the 5-MeO-DMT kernel drives the system toward global phase synchronization – patterns that could explain how these compounds differently affect branching biological systems like the vasculature or peripheral nervous system.
Source: Internal QRI tool (public release forthcoming)
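For readers who want to play with the branching-network intuition, here is a toy reconstruction (not the internal QRI tool): a full binary tree of phase oscillators with graph-distance-indexed kernels. The kernel values are illustrative assumptions, chosen so that the “DMT-like” one destabilizes global synchrony on the tree while the “5-MeO-DMT-like” one drives phase lock.

```python
import numpy as np
from collections import deque

def binary_tree_edges(depth):
    """Edges of a full binary tree (a toy stand-in for branching vasculature)."""
    n = 2 ** (depth + 1) - 1
    edges = [((child - 1) // 2, child) for child in range(1, n)]
    return n, edges

def graph_distances(n, edges):
    """All-pairs shortest-path distances via BFS."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    D = np.full((n, n), -1)
    for s in range(n):
        D[s, s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if D[s, v] < 0:
                    D[s, v] = D[s, u] + 1
                    q.append(v)
    return D

def run(kernel, D, steps=3000, dt=0.05, seed=0):
    """Apply a distance-indexed kernel to the tree and relax the phases."""
    K = np.zeros(D.shape)
    mask = D < len(kernel)
    K[mask] = np.asarray(kernel, dtype=float)[D[mask]]
    np.fill_diagonal(K, 0.0)
    rng = np.random.default_rng(seed)
    th = rng.uniform(0, np.pi, D.shape[0])  # half-circle initial phases
    for _ in range(steps):
        th = th + dt * (K * np.sin(th[None, :] - th[:, None])).sum(axis=1)
    return float(np.abs(np.exp(1j * th).mean()))

n, edges = binary_tree_edges(depth=5)     # 63 nodes
D = graph_distances(n, edges)
r_5meo = run([0.0, 0.6, 0.3], D)          # uniformly positive kernel
r_dmt = run([0.0, -0.6, 0.3], D)          # negative nearest-neighbour kernel
print(f"5-MeO-like coherence: {r_5meo:.2f}, DMT-like: {r_dmt:.2f}")
```

On the tree, the uniformly positive kernel pulls all branches into one phase, while the sign-alternating kernel breaks synchrony between adjacent branching levels – a crude analogue of "competing clusters at bifurcation points".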

This framework helps explain how a single pharmacological intervention, by modifying coupling kernels through changes in divisive normalization, can produce such diverse phenomenological effects across different biological systems. The geometry of each system acts as a filter, transforming the same basic change in coupling parameters into system-specific resonant patterns. This provides a unified explanation for how psychedelics can simultaneously affect visual perception, bodily sensation, and cognitive processes, while maintaining characteristic differences between compounds based on their specific coupling kernel signatures.

The notion of a continuous graph-based system dissolves traditional distinctions between regional oscillator networks and global wave phenomena into a single multifaceted gem of coupled states. By shaping coupling kernels, we effectively tune into specific connectome harmonics, instantiating global resonant modes that underlie everything from coherent sensory integration to altered states of consciousness.

3. Network Geometry Interactions and Projective Intelligence

A fundamental computational principle emerges from the interaction between networks of different geometries and topologies. This principle underlies what we might call “projective intelligence” or more broadly, mapping/functional intelligence. The interaction between the 2D visual field and 3D somatic field provides a prime example of this principle in action.

Consider how we understand a complex three-dimensional object like a teapot. Our visual system receives a 2D projection, but we comprehend the object’s full 3D structure through an intricate dance between visual and somatic representations. As we observe the teapot from different angles, our visual system detects various symmetries and patterns in the 2D projections: perhaps the circular rim of the spout, the elliptical body, the handle’s curve. These 2D patterns, processed through the visual cortex’s hierarchical geometric network, generate characteristic resonant modes. Simultaneously, our somatic system maintains a 3D spatial representation where we can “map” these detected symmetries. The brain effectively “paints” the symmetries found in the 2D visual field onto the 3D somatic representation, creating a rich multi-modal representation of the object.

This process involves multiple parallel mappings between sensory fields, each governed by its own coupling kernel. The visual field might have one kernel that helps identify continuous contours, while another kernel in the somatic field maintains spatial relationships. These kernels can synchronize or “meet in resonance” when the mappings between fields align correctly, giving rise to stable multimodal representations. When we grasp the teapot, for instance, the tactile feedback generates somatic resonant modes that match our visually-derived expectations, reinforcing our understanding of the object’s structure (many thanks to Wystan, Roger, Cube Flipper, and Arataki for many discussions on this topic and their original contributions – the fact that visual sensations devoid of somatic coupling have a very different quality was a brilliant observation by Roger that sparked a lot of insights in our sphere).

The necessity of interfacing between spaces of different dimensionality (e.g. 3D somatic space and 2.5D visual space) creates interesting constraints. In systems exhibiting resonant modes emergent from coupled oscillator wiring, energy minimization occurs precisely where waves achieve low-energy configurations in both interfacing spaces simultaneously. This requires finding both an optimal projection between spaces and appropriate coupling kernels that allow the resulting space to behave as if it were unified.

Remarkably, this framework suggests that our cognitive ability to understand complex objects and spaces emerges from the brain’s capacity to maintain multiple concurrent mappings between sensory fields of different dimensionalities. Each mapping can be thought of as a kind of “cross-modal resonance bridge,” where coupling kernels in different sensory domains synchronize to create stable, coherent representations. When this level of coherence is achieved, the waves cannot detect the underlying projective dynamic: there simply is no “internal distinction” to be found in an otherwise complex system that typically maintains many differences between the spaces it maps. At the limit, the perfect alignment between the various mappings and coupling kernels of all sensory fields is what we hypothesize explains meditative cessations.

This multiple-mapping approach might explain phenomena like the McGurk effect, where visual and auditory information integrate to create a unified perception, or the rubber hand illusion, where visual and tactile fields can be realigned to incorporate external objects into our body schema. In each case, coupling kernels in different sensory domains synchronize to create new stable configurations that bridge dimensional and modal gaps.

The framework also provides insight into how psychedelics might affect these cross-modal mappings. DMT, for instance, might introduce competing clusters of coherence across different sensory domains, leading to novel and sometimes conflicting cross-modal associations. In contrast, 5-MeO-DMT might drive all mappings toward global synchronization, manifesting characteristic system-wide synchronization effects and potentially explaining the reported dissolution of distinctions between sensory modalities and the experience of unified consciousness.

Understanding consciousness as a system of interacting dimensionally-distinct fields, each with their own coupling kernels that can synchronize and resonate with each other, offers a powerful new way to think about both ordinary perception and altered states. It suggests that our rich experiential world emerges from the brain’s ability to maintain and synchronize multiple parallel mappings between sensory domains of different dimensionalities, creating a unified experience from fundamentally distinct representational spaces.

4. Topological Solution to the Boundary Problem

Here’s where the framework really starts to come together: if we identify fields of physics with fields of qualia (a field-based version of panpsychism), then the boundaries between subjects could be topological in nature. Specifically, where magnetic field lines “loop around” to form closed pockets, we might find individual moments of experience. These pockets aren’t arbitrary or observer-dependent: they’re ontologically real features of the electromagnetic field that naturally segment conscious experience (note: I will leave aside for the time being the discussion about the ontological reality of the EM field, but suffice to say that even if the EM field is an abstraction atop the more fundamental ontology of reality, we believe topological segmentation could then apply to that deeper reality).

This provides a compelling solution to the boundary problem: what stops phenomenal binding from expanding indefinitely? The answer lies in the topology of the field itself. When field lines close into loops, they create genuine physical boundaries that can persist and evolve as unified wholes. These boundaries are frame-invariant (preserving properties under coordinate transformations), support weak emergence without requiring strong emergence, and explain how conscious systems can exert downward causation on their constituent parts through resonance effects.

5. Electromagnetic Field Topology and its Modulation

To demonstrate how coupling kernels create and control these field boundaries, we’ve developed three key simulations showing electric oscillators embedded in magnetic fields. By visualizing the resulting field configurations across different geometries – 2D grids, circular arrangements, and branching structures – we can directly observe how coupling kernels shape field topology.

When we apply a DMT-like kernel (alternating positive-negative coupling constants at different distances), we see an explosion of topological complexity in which multiple vortices and anti-vortices emerge, creating diverse patterns of nested field structures. The same kernel creates characteristic patterns in each geometry, but always tends toward complexification. In contrast, applying a 5-MeO-DMT-like kernel (uniformly positive coupling) causes these complex structures to simplify dramatically, often collapsing into a single large vortex or even completely smooth field lines.

Coupled oscillators in a 2D space, whose phases are interpreted as electric oscillations, embedded in a magnetic field whose topology becomes mediated by the coupling kernel. Source: Internal QRI tool (public release forthcoming)

[Note: These are still 2D simulations – a full 3D electromagnetic simulation is in development and will likely reveal even richer topological dynamics. However, even these simplified models provide striking evidence for how coupling kernels can control field topology.]
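Vortices and anti-vortices like the ones in these simulations can be detected numerically by computing the winding number of the phase field around each lattice plaquette. A minimal detector (the hand-built test field below is a standard textbook construction, not QRI data):

```python
import numpy as np

def wrap(a):
    """Wrap angle differences into (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def count_defects(theta):
    """Count vortices (+1) and anti-vortices (-1) in a 2D phase field by
    summing wrapped phase differences around each lattice plaquette."""
    w = (wrap(theta[1:, :-1] - theta[:-1, :-1])
         + wrap(theta[1:, 1:] - theta[1:, :-1])
         + wrap(theta[:-1, 1:] - theta[1:, 1:])
         + wrap(theta[:-1, :-1] - theta[:-1, 1:]))
    charge = np.round(w / (2 * np.pi)).astype(int)
    return int((charge > 0).sum()), int((charge < 0).sum())

# A hand-built single vortex: the phase is the polar angle around a point
# placed between lattice sites, so exactly one plaquette encloses the core.
n = 20
y, x = np.mgrid[0:n, 0:n] - (n - 1) / 2
vortex_field = np.arctan2(y, x)
uniform_field = np.zeros((n, n))
pv, nv = count_defects(vortex_field)
print("defects in vortex field:", pv + nv, "| in uniform field:", sum(count_defects(uniform_field)))
```

The same counter, applied to the simulated fields above, is how one would quantify “topological complexification” under a DMT-like kernel versus simplification under a 5-MeO-DMT-like one.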

6. 5-MeO-DMT and Topological Simplification

The remarkable alignment between our theoretical predictions and actual psychedelic experiences becomes clear when we examine 5-MeO-DMT states. As documented in Cube Flipper’s “5-MeO-DMT: A Crash Course in Phenomenal Field Topology” (2024), these experiences frequently involve the systematic disentangling or annihilation of local field perturbations (“topological defects”) over time. Subjects report a progressive dissolution of boundaries and an eventual sense of absolute unity or “oneness.” Significantly, recent EEG analysis of 5-MeO-DMT experiences also reveals remarkable topological properties, which we’re currently trying to derive from a 3D model of the brain in light of altered coupling kernels.

Source: Cube Flipper’s HEART essay on 5-MeO-DMT and field topology.

This phenomenology maps really well onto what our electromagnetic simulations predict: a 5-MeO-DMT-like coupling kernel transforms networks of swirling singularities into simplified field configurations. The effect isn’t limited to any particular neural subsystem: it appears to drive global topological simplification across multiple scales and geometries, explaining both the intensity and the consistency of the experience across subjects. In turn, a lot of the characteristic phenomenological features of 5-MeO-DMT might find their core generator in the interaction between a very positive coupling kernel and the interesting relationships between different sensory fields as they try to map onto each other to minimize dissonance. At the peak of a breakthrough experience, this typically culminates in what appears as a global multimodal coherent state, where presumably all the sensory fields have found a mapping to each other such that the waves in each look exactly the same: the recipe for a zero-informational state of consciousness. A whiteout.

What’s particularly fascinating is that this framework suggests normal waking consciousness might represent a sweet spot of topological complexity: it carries enough structure to maintain a stable sense of self and world, without dissolving completely (as in 5-MeO-DMT states). Each topological defect could be thought of as a kind of “perspectival anchor” in the field. As these defects systematically dissolve under 5-MeO-DMT, we would expect exactly what subjects report: a progressive loss of distinct perspectives culminating in a state of pure unity. Perhaps sleep and dreaming could also be interpreted through this lens: during periods of wakefulness we slowly but surely accumulate topological defects; sleep and dreaming might be a process of topological simplification where the topological defects aggregate and cancel out. Notice next time you find yourself in a hypnagogic state how it feels to “let go of the central grasping to experience” and the subsequent fast “unraveling” of the field of experience. Much more to say about this in the future (a topological simplification theory of sleep).

7. Coupling Kernels and Field Topology

The mechanism by which coupling kernels control field topology reveals something really deep, abstract, and yet applied about consciousness: the same mathematical object (the coupling kernel) can simultaneously modulate both neural dynamics and electromagnetic field structure. This isn’t just correlation: we are talking about a direct causal chain from molecular interaction to conscious experience and back. Precisely the sort of structure we want in order to both ground the topological boundary problem solution in neurophysiology and avoid epiphenomenalism (since the field topology feeds back into neural activity, cf. local field potentials).

Consider how this works: when we apply a coupling kernel to a network of electric oscillators, we’re not just changing their relative phases. We’re also sculpting the magnetic field they generate. Each oscillator contributes to the local magnetic field, and the coupling kernel determines how these contributions interfere. Positive coupling between nearby oscillators tends to align their fields, creating smooth, continuous field lines. Negative coupling creates discontinuities and vortices. The resulting field topology emerges from these collective interactions, yet acts back on the system as a unified whole through electromagnetic induction.
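To make the oscillator half of this story concrete, here is a minimal sketch (phases only, no field computation, and not QRI’s actual simulation code) of a Kuramoto-style network where a distance-dependent coupling kernel determines how oscillators interact. With a uniformly positive kernel, the phases lock together, the regime described above as producing smooth, aligned fields:

```python
import numpy as np

def evolve_kuramoto(theta, omega, kernel, dt=0.05, steps=500):
    """Euler-integrate phase oscillators coupled through a distance-dependent kernel.

    theta: initial phases; omega: natural frequencies; kernel: maps pairwise
    distances (here, positions on a ring of oscillators) to coupling strengths.
    """
    n = len(theta)
    idx = np.arange(n)
    raw = np.abs(idx[:, None] - idx[None, :])
    d = np.minimum(raw, n - raw)          # ring distance between oscillators
    K = kernel(d.astype(float))
    np.fill_diagonal(K, 0.0)              # no self-coupling
    for _ in range(steps):
        phase_diff = theta[None, :] - theta[:, None]  # [i, j] = theta_j - theta_i
        theta = theta + dt * (omega + (K * np.sin(phase_diff)).sum(axis=1) / n)
    return theta % (2 * np.pi)

def order_parameter(theta):
    """Kuramoto order parameter R in [0, 1]; R near 1 means global phase alignment."""
    return np.abs(np.exp(1j * theta).mean())

rng = np.random.default_rng(0)
theta0 = rng.uniform(0, 2 * np.pi, 64)
omega = np.zeros(64)  # identical natural frequencies, for simplicity

# A uniformly positive kernel: the "smooth, aligned fields" regime.
theta_sync = evolve_kuramoto(theta0, omega, lambda d: np.ones_like(d))
print(order_parameter(theta0), order_parameter(theta_sync))
```

Substituting a kernel that is negative at short range would instead be the regime the text associates with discontinuities and vortices; the parameters above (64 oscillators, identical frequencies, unit coupling) are illustrative choices only.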

What’s particularly elegant about this mechanism is its scale-invariance. Whether we’re looking at ion channels in a single neuron or large-scale brain networks, the same principles apply. The coupling kernel acts as a kind of “field-shaping operator” that can be applied at any scale where electromagnetic interactions matter. This helps explain why psychedelics, which presumably modify coupling kernels through receptor activation, can have such profound and coherent effects across multiple levels of brain organization.

8. DMT vs 5-MeO-DMT Effects

With this mechanism in hand, we can now understand the radically different effects of DMT and 5-MeO-DMT in a new light. The key insight is that these compounds don’t just change what we experience. They transform the very structure of the field that gives rise to bound experiences.

DMT appears to implement a coupling kernel with a characteristic Mexican-hat profile: strong negative coupling at short distances combined with positive coupling at medium distances. When applied to neural networks, this creates competing clusters of coherence. But more fundamentally, it generates a field topology rich in stable vortices and anti-vortices. Each of these topological features acts as a semi-independent center of field organization – a kind of local “observer” within the larger field.
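The Mexican-hat profile described above can be sketched as a difference of Gaussians. The amplitudes and widths below are illustrative assumptions (not fitted to any receptor data), chosen only so that coupling is negative at short distances, positive at medium distances, and negligible at long range:

```python
import numpy as np

def dmt_like_kernel(d, a_wide=1.0, s_wide=6.0, a_narrow=1.5, s_narrow=2.0):
    """Difference-of-Gaussians coupling profile: negative at short distances,
    positive at medium distances, decaying toward zero at long range.
    All parameter values are illustrative, not empirically derived."""
    wide = a_wide * np.exp(-d**2 / (2 * s_wide**2))
    narrow = a_narrow * np.exp(-d**2 / (2 * s_narrow**2))
    return wide - narrow

d = np.array([0.0, 4.0, 30.0])
print(dmt_like_kernel(d))  # negative at d=0, positive at d=4, ~zero at d=30
```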

This helps explain one of the most striking aspects of DMT experiences: the encounter with apparently autonomous entities or beings. If each major topological defect in the field functions as a distinct locus of observation, then the DMT state literally creates multiple valid perspectives within the same field of consciousness. The geometric patterns commonly reported might reflect the larger-scale organization of these topological features – the way they naturally arrange themselves in space according to electromagnetic field dynamics.

The bizarre yet consistent nature of DMT entity encounters takes on new meaning in this framework. These entities often seem to exist in spaces with impossible geometries, yet interact with each other and the observer in systematic ways. This is exactly what we’d expect if they represent stable topological features in a complexified electromagnetic field: they would follow precise mathematical rules while potentially violating our usual intuitions about space and perspective. Even our notion of a central observer and a single object of observation breaks down: the DMT space has many overlapping “points of view” derived from the complex topology of the field.

These insights stand in stark contrast to 5-MeO-DMT’s effects, but they emerge from the same underlying mechanism. They also suggest new research directions. For instance, we might be able to predict specific patterns of field organization under different compounds by analyzing their receptor binding profiles in terms of their implied coupling kernels. This could eventually allow us to engineer specific consciousness-altering effects by designing molecules (or drug cocktails) that implement particular coupling kernel shapes.

9. Paths and Experience: The Path Integral of Perspectives

Here’s where we get to be both mathematically precise and delightfully speculative: I propose that the mapping between field topology and phenomenology is best understood through the path integral of all possible perspectives within a topological pocket. This isn’t just mathematical fancy – it’s a necessary move once we realize that consciousness doesn’t always have a center.

Think about it: we’re used to consciousness having a kind of “screen” quality, where everything is presented to a singular point of view. But this is just one possible configuration(!). On DMT, for instance, experiencers often report accessing topological extrema instantaneously, as if consciousness could compress or tunnel through its own geometry to find patterns and symmetries. This suggests our usual centered experience might be more of a special case – perhaps we’re too attached (literally, in terms of field topology) to a central vortex that geometrizes experience in a familiar way.

When we consider the full range of possible field topologies, things get wonderfully weird (but also kind of eerie to be honest). The “screen of consciousness” starts looking like just one possible way to organize the field, corresponding to a particular kind of stable vortex configuration. But there are so many other possibilities! The path integral approach lets us understand how a completely “centerless” state could still be conscious – it’s just integrating over all possible perspectives simultaneously, without privileging any particular viewpoint.

This framework helps explain why 5-MeO-DMT can produce states of “pure consciousness” without content – when the field topology simplifies enough, the path integral becomes trivial. There’s literally nothing to distinguish one perspective from another. In a perfectly symmetrical manifold, all points of view are exactly the same. This ultimately ties into the powerful valence effects of 5-MeO-DMT, seen through the lens of a field-theoretic version of the Symmetry Theory of Valence (Johnson 2016). We’re currently developing valence functions for field topologies, though we don’t yet have concrete results worth showing (a writeup about this is forthcoming). Conversely, if this framework is accurate, then DMT’s complex topology creates many local extrema, each serving as a kind of perspectival anchor point, leading to the sensation of multiple observers or entities. This would be predicted to have generically highly mixed valence, with at times highly dissonant states and at times highly consonant states, yet always rich in internal divisions and complex symmetries rather than the “point of view collapse” characteristic of 5-MeO-DMT.

Our electromagnetic field visualizations make this particularly concrete. When we observe the magnetic field configurations in our simulations, we’re essentially seeing snapshots of the space over which these path integrals are computed. In the DMT-like states, the field is rich with vortices and anti-vortices – each one representing a potential perspective from which to “view” the field. The path integral must account for all possible paths through this complex topology, including paths that connect different vortices. This creates a kind of “quantum tunneling of perspective” (I know how this sounds, but bear with me) where consciousness can leap between different viewpoints, perhaps explaining the characteristically bizarre spatial experiences reported on DMT. In contrast, when we apply the 5-MeO-DMT-like kernel, we watch these vortices collapse and merge. The topology simplifies until there’s just one global structure – or sometimes none at all. At this point, the path integral becomes trivial because all paths through the field are essentially equivalent. There’s no longer any meaningful distinction between different perspectives because the field has achieved a kind of perfect symmetry.

Conclusion: A Network of Insights

This theoretical framework – connecting coupling kernels, field topology, and conscious experience – emerged from years of collaborative work and inspiration. While the specific insights about coupling kernels and their effects on field topology are my contributions, they stand atop a mountain of brilliant work by the extended QRI family.

I’m deeply grateful to Chris Percy for his rigorous development of these ideas, particularly in understanding their philosophical implications in the context of the current literature of consciousness studies, Michael Johnson for years of fruitful collaboration (and his great contribution to the field via the Symmetry Theory of Valence and formalization of Neural Annealing), as well as really helpful QRI advisors like Shamil Chandaria, Robin Carhart-Harris, and Luca Turin. Also special thanks to the great long-time doers in QRI like Hunter Meyer, Marcin Kowrygo, Margareta Wassinge and Anders Amelin (RIP). Cube Flipper’s phenomenological investigations of 5-MeO-DMT have been invaluable, as have the insights from Roger Thisdell, Wystan Bryan-Scott, Asher Arataki, and others. The dedication of everyone on the HEART team to careful exploration has provided crucial empirical grounding for these theoretical developments.

I’m also excited about ongoing work with our academic collaborators (to be announced soon – we’re currently designing studies to test these ideas rigorously). In particular I want to thank Till Holzapfel for his awesome research and collaborations (and help with the QRI Amsterdam meetup!), Taru Hirvonen for her visual intuitions and work, Emil Hall for his amazing programming and conceptual development help, Symmetric Vision for his incredible visual work and intuitions, Ethan Kuntz for his insights on spectral graph theory, Scry for his retreat replications, and Marco Aqil for his ground-breaking research (and for giving a presentation at the recent Amsterdam meetup), and many more people who have recently been delightful and helpful for the mission (special shoutout to Alfredo Parra). This emerging research program promises to put these theoretical insights to empirical test, and we’re working as a team to bridge phenomenology and hard neuroscience. It’s happening! 🙂

Also, none of this would have been possible without the broader QRI community and its supporters – a group of fearless consciousness researchers willing to take both mathematical rigor and subjective experience seriously. Together, we’re building a new science of consciousness that respects both the precision of physics and the richness of lived experience.

The path ahead is clear (well, at least in my head): we need to develop more sophisticated simulations of field topology, particularly in three dimensions, and devise clever ways to test these ideas experimentally through psychophysics and microphenomenology. The coupling kernel paradigm offers a concrete mathematical handle on consciousness – one that might let us not just understand but eventually engineer specific states of consciousness. It’s an exciting time to be working on this hard problem!

Thanks for coming along on this wild ride through field topology, psychedelic states, and the mathematics of consciousness. Stay tuned – there’s much more to come!

– Andrés 🙂

Costs of Embodiment

[X-Posted @ The EA Forum]

By Andrés Gómez Emilsson

Digital Sentience

Creating “digital sentience” is a lot harder than it looks. Standard Qualia Research Institute arguments for why it is either difficult, intractable, or literally impossible to create complex, computationally meaningful, bound experiences out of a digital computer (more generally, a computer with a classical von Neumann architecture) include the following three core points:

  1. Digital computation does not seem capable of solving the phenomenal binding or boundary problems.
  2. Replicating input-output mappings can be done without replicating the internal causal structure of a system.
  3. Even when you deliberately try to replicate the internal causal structure of a system, the behavior of reality at a deep enough level is not currently understood (beyond how it maps inputs to outputs).

Let’s elaborate briefly:

The Binding/Boundary Problem

  1. A moment of experience contains many pieces of information. It also excludes a lot of information. Meaning that a moment of experience contains a precise, non-zero amount of information. For example, as you open your eyes, you may notice patches of blue and yellow populating your visual field. The very meaning of the blue patches is affected by the presence of the yellow patches (indeed, they are “blue patches in a visual field with yellow patches too”) and thus you need to take into account the experience as a whole to understand the meaning of all of its parts.
  2. A very rough, intuitive conception of the information content of an experience can be hinted at with Gregory Bateson’s (1972) “a difference that makes a difference”. If we define an empty visual field as containing zero information, it is possible to define an “information metric” from this zero state to every possible experience by counting the number of Just Noticeable Differences (JNDs) (Kingdom & Prins, 2016) needed to transform such an empty visual field into an arbitrary one (note: since some JNDs are more difficult to specify than others, a more accurate metric should also take into account the information cost of specifying the change in addition to the size of the change that needs to be made). It is thus easy to see that one’s experience of looking at a natural landscape contains many pieces of information at once. If it didn’t, you would not be able to tell it apart from an experience of an empty visual field.
  3. The fact that experiences contain many pieces of information at once needs to be reconciled with the mechanism that generates such experiences. How you achieve this unity of complex information starting from a given ontology with basic elements is what we call “the binding problem”. For example, if you believe that the universe is made of atoms and forces (now a disproven ontology), the binding problem will refer to how a collection of atoms comes together to form a unified moment of experience. Alternatively, if one’s ontology starts out fully unified (say, assuming the universe is made of physical fields), what we need to solve is how such a unity gets segmented out into individual experiences with precise information content, and thus we talk about the “boundary problem”.
  4. Within the boundary problem, as Chris Percy and I argued in Don’t Forget About the Boundary Problem! (2023), the phenomenal (i.e. experiential) boundaries must satisfy stringent constraints to be viable. Namely, among other things, phenomenal boundaries must be:
    1. Hard Boundaries: we must avoid “fuzzy” boundaries where information is only “partially” part of an experience. This is simply the result of contemplating the transitivity of the property of belonging to a given experience. If a (token) sensation A is part of a visual field at the same time as a sensation B, and B is present at the same time as C, then A and C are also both part of the same experience. Fuzzy boundaries would break this transitivity, and thus make the concept of boundaries incoherent. As a reductio ad absurdum, this entails phenomenal boundaries must be hard.
    2. Causally significant (i.e. non-epiphenomenal): we can talk about aspects of our experience, and thus we can know they are part of a process that grants them causal power. Moreover, if structured states of consciousness did not have causal effects in some way isomorphic to their phenomenal structure, evolution would simply have no reason to recruit them for information processing. Although epiphenomenal states of consciousness are logically coherent, that situation would leave us with no reason to believe, one way or the other, that the structure of experience varies in a way that mirrors its functional role. On the other hand, states of consciousness having causal effects directly related to their structure (the way they feel) fits the empirical data. By what seems to be a highly overdetermined Occam’s razor, we can infer that the structure of a state of consciousness is indeed causally significant for the organism.
    3. Frame-invariant: whether a system is conscious should not depend on one’s interpretation of it or the point of view from which one is observing it (see appendix for Johnson’s (2015) detailed description of frame invariance as a theoretical constraint within the context of philosophy of mind).
    4. Weakly emergent on the laws of physics: we want to avoid postulating either that there is a physics-violating “strong emergence” at some level of organization (“reality only has one level” – David Pearce) or that there is nothing peculiar happening at our scale. Bound, causally significant experiences could be akin to superfluid helium: entailed by the laws of physics, but behaviorally distinct enough to play a useful evolutionary role.
  5. Solving the binding/boundary problems does not seem feasible with a von Neumann architecture in our universe. The binding/boundary problem requires the “simultaneous” existence of many pieces of information at once, and this is challenging using a digital computer for many reasons:
    1. Hard boundaries are hard to come by: looking at the shuffling of electrons from one place to another in a digital computer does not suggest the presence of hard boundaries. What separates a transistor’s base, collector, and emitter from its immediate surroundings? What’s the boundary between one pulse of electricity and the next? At best, we can identify functional “good enough” separations, but no true physics-based hard boundaries.
    2. Digital algorithms lack frame invariance: how you interpret what a system is doing in terms of classic computations depends on your frame of reference and interpretative lens.
    3. The bound experiences must themselves be causally significant. While natural selection seemingly values complex bound experiences, our digital computer designs precisely aim to denoise the system as much as possible so that the global state of the computer does not influence in any way the lower-level operations. At the algorithmic level, the causal properties of a digital computer as a whole, by design, are never more than the strict sum of their parts.
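Before moving on, the JND-counting information metric from point 2 of the list at the start of this section can be illustrated with a toy sketch. Everything here is a hypothetical simplification: a small 1-D luminance field, a fixed Weber fraction, and the assumption that each JND multiplies intensity by a constant factor:

```python
import math

def jnd_count(target, floor=1.0, weber=0.05):
    """Number of just-noticeable steps from a floor intensity up to `target`,
    assuming (Weber/Fechner-style) that each JND multiplies intensity by (1 + weber)."""
    if target <= floor:
        return 0
    return math.ceil(math.log(target / floor) / math.log(1 + weber))

def field_information(field, floor=1.0, weber=0.05):
    """Toy 'information metric': total JNDs needed to build `field` from an
    empty (floor-intensity) field. As noted in the text, a better metric would
    also charge for specifying *which* location changes."""
    return sum(jnd_count(x, floor, weber) for x in field)

empty = [1.0] * 4
landscape = [1.0, 8.0, 64.0, 512.0]  # hypothetical luminance values
print(field_information(empty), field_information(landscape))
```

The empty field sits at zero on this metric, while any field that is noticeably different from it has strictly positive information, matching the intuition in the text.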

Matching Input-Output Mappings Does Not Entail Same Causal Structure

Even if you replicate the input-output mapping of a system, that does not mean you are replicating the internal causal structure of the system. If bound experiences are dependent on specific causal structures, they will not happen automatically without considerations for the nature of their substrate (which might have unique, substrate-specific, causal decompositions). Chalmers’ (1995) “principle of organizational invariance” assumes that replicating a system’s functional organization at a fine enough grain will reproduce identical conscious experiences. However, this may be question-begging if bound experiences require holistic physical systems (e.g. quantum coherence). In such a case, the “components” of the system might be irreducible wholes, and breaking them down further would result in losing the underlying causal structure needed for bound experiences. This suggests that consciousness might emerge from physical processes that cannot be adequately captured by classical functional descriptions, regardless of their granularity.

Moreover, whether we realize it or not, it is always us (indeed, complex bound experiences) who interpret the meaning of the input and the output of a physical system. It is not interpreted by the system itself, because the system has no real “points of view” from which to interpret what is going on. This is a subtle point, and we will merely mention it for now; a deeper exposition of this line of argument can be found in The View From My Topological Pocket (2023).

We would moreover point out that what smuggles in a “point of view” with which to interpret a digital computer’s operations is the human who builds, maintains, and utilizes it. If we want a system to create its own “point of view”, we will need to find a way for it to bind information into (1) a “projector”/screen, (2) an actual point of view proper, or (3) the backwards lightcone that feeds into such a point of view. As argued, none of these are viable solutions.

Reality’s Deep Causal Structure is Poorly Understood

Finally, another key consideration that has been discussed extensively is that the very building blocks of reality have unclear, opaque causal structures. If we want to replicate the internal causal structure of a conscious system, the classical input-output mapping is therefore not enough: you would also need to replicate how the system responds to non-standard inputs, including X-rays, magnetic fields, and specific molecules (e.g. xenon isotopes).

These ideas have all been discussed at length in articles, podcasts, presentations, and videos. Now let’s move on to a more recent consideration we call “Costs of Embodiment”.

Costs of Embodiment

Classical “computational complexity theory” is often used as a silver-bullet “analytic frame” to discount the computational power of physical systems. A typical line of argument goes: assuming consciousness isn’t the result of implementing a quantum algorithm per se, there is “nothing the brain can do that you couldn’t do with a simulation of the system”. This, however, neglects the complications that come from instantiating a system in the physical world with all that it entails. To see why, we must first explain the nature of this analytic style in more depth:

Introduction to Computational Complexity Theory

Computational complexity theory is a branch of computer science that focuses on classifying computational problems according to their inherent difficulty. It primarily deals with the resources required to solve problems, such as time (number of steps) and space (memory usage).

Key concepts in computational complexity theory include:

  1. Big O notation: Used to describe the upper bound of an algorithm’s rate of growth.
  2. Complexity classes: Categories of problems with similar resource requirements (e.g., P, NP, PSPACE).
  3. Time complexity: Measure of how the running time increases with the size of the input.
  4. Space complexity: Measure of how memory usage increases with the size of the input.
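The kind of implementation-agnostic step counting these concepts formalize can be illustrated with a quick sketch (a toy comparison counter for a textbook quadratic algorithm, not tied to any particular source):

```python
def bubble_sort_comparisons(xs):
    """Sort a list with bubble sort while counting comparisons.
    The comparison count is Theta(n^2): doubling n roughly quadruples it."""
    xs = list(xs)
    comparisons = 0
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            comparisons += 1
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return comparisons

for n in (10, 20, 40):
    print(n, bubble_sort_comparisons(range(n, 0, -1)))
```

Note how the analysis says nothing about what a “comparison” physically costs, which is exactly the gap the rest of this section is about.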

In brief, this style of analysis is suited for analyzing the properties of algorithms that are implementation-agnostic, abstract, and interpretable in the form of pseudo-code. Alas, the moment you start to ground these concepts in the real physical constraints to which life is subjected, the relevance and completeness of the analysis start to fall apart. Why? Because:

  1. Big O notation counts how the number of steps (time complexity) or number of memory slots (space complexity) grows with the size of the input (or in some cases size of the output). But not all steps are created equal:
    1. Flipping the value of a bit might be vastly cheaper in the real world than moving the value of a bit to another location that is physically very far away in the computer.
    2. Likewise, some memory operations are vastly more costly than others: in the real world you need to take into account the cost of redundancy, distributed error correction, and entropic decay of structures not in use at the time.
  2. Not all inputs and outputs are created equal. Taking in some inputs might be vastly more costly than others (e.g. highly energetic vibrations that shake the system apart mean something to a biological organism, which needs to adapt to the stress induced by the nature of the input). Likewise, expressing certain outputs might be much more costly than others, as the organism needs to reconfigure itself to deliver the result of the computation, a cost that isn’t considered by classical computational complexity theory.
  3. Interacting with a biological system is a far more complex activity than interacting with, say, logic gates and digital memory slots. We are talking about a highly dynamic, noisy, soup of molecules with complex emergent effects. Defining an operation in this context, let alone its “cost”, is far from trivial.
  4. Artificial computing architectures are designed, implemented, maintained, reproduced, and interpreted by humans, who (we must grant) already have powerful computational capabilities; this gives such systems an unfair advantage over biological systems, which require zero human assistance.

Why Embodiment May Lead to Underestimating Costs

Here is a list of considerations that highlight the unique costs that come with real-world embodiment for information-processing systems beyond the realm of mere abstraction:

  1. Physical constraints: Traditional complexity theory often doesn’t account for physical limitations of real-world systems, such as heat dissipation, energy consumption, and quantum effects.
  2. Parallel processing: Biological systems, including brains, operate with massive adaptive parallelism. This is challenging to replicate in classical computing architectures and may require different cost analyses.
  3. Sensory integration: Embodied systems must process and integrate multiple sensory inputs simultaneously, which can be computationally expensive in ways not captured by standard complexity measures.
  4. Real-time requirements: Embodied systems often need to respond in real-time to environmental stimuli, adding temporal constraints that may increase computational costs.
  5. Adaptive learning: The ability to learn and adapt in real-time may incur additional computational costs not typically considered in classical complexity theory.
  6. Robustness to noise: Physical systems must be robust to environmental noise and internal fluctuations, potentially requiring redundancy and error-correction mechanisms that increase computational costs.
  7. Energy efficiency: Biological systems are often highly energy-efficient, which may come at the cost of increased complexity in information processing.
  8. Non-von Neumann architectures: Biological neural networks operate on principles different from classical computers, potentially involving computational paradigms not well-described by traditional complexity theory.
  9. Quantum effects: At the smallest scales, quantum mechanical effects may play a role in information processing, adding another layer of complexity not accounted for in classical theories.
  10. Emergent properties: Complex systems may exhibit physical emergent properties that arise from the interactions of simpler components, as well as from phase transitions, potentially leading to computational costs that are difficult to predict or quantify using standard methods.

See appendix for a concrete example of applying these considerations to an abstract and embodied object recognition system (example provided by Kristian Rönn).

Case Studies:

1. 2D Computers

It is well known in classical computing theory that a 2D computer can implement anything that an n-dimensional computer can do. Because it is possible to construct a 2D Turing machine capable of simulating arbitrary computers of this class, there is a computational complexity equivalence between an n-dimensional computer and a 2D one: in the limit, essentially the same runtime complexity as the original computer should be achievable in 2D.

However, living in a 2D plane comes with enormous challenges that highlight the cost of embodiment in a given medium. In particular, we will see that the *routing costs* of information grow very fast, as the channels that connect different parts of the computer need to take turns so that crossed wires can transmit information without saturating the medium of (wave/information) propagation.

A concrete example here comes from examining what happens when you divide a circle into regions. Indeed, this is a well-known math problem, where you are asked to derive a general formula for the number of regions into which a circle is divided when you connect n points in general position on its boundary. The takeaway of this exercise is often to point out that even though at first the number of regions seems to follow the powers of 2 (2, 4, 8, 16…), eventually the pattern is broken (the number after 16 is, surprisingly, 31 and not 32).
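The circle-division puzzle mentioned above (often called Moser’s circle problem) has a known closed form that we can check directly:

```python
from math import comb

def circle_regions(n):
    """Regions formed inside a circle by all chords between n points in
    general position on its boundary (Moser's circle problem)."""
    return comb(n, 4) + comb(n, 2) + 1

print([circle_regions(n) for n in range(1, 7)])  # [1, 2, 4, 8, 16, 31]
```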

For the purpose of this example we shall simply focus on the growth of edges vs. the growth of crossings between the edges as we increase the number of nodes. Since every pair of nodes has an edge, the formula for the number of edges as a function of the number of nodes n is: n choose 2. Similarly, any four points define a single unique crossing, and thus the formula for the number of crossings is: n choose 4. When n is small (6 or less), the number of crossings is smaller than or equal to the number of edges. But as soon as we hit 7 nodes, the number of crossings dominates over the number of edges. Asymptotically, in fact, the growth of edges is O(n^2) in Big O notation, whereas the number of crossings ends up being O(n^4), which grows much faster. If this system is used in the implementation of an algorithm that requires every pair of nodes to interact with each other once, we may at first be under the impression that the complexity will grow as O(n^2). But if this system is embodied, messages between the nodes will start to collide with each other at the crossings. Eventually, the number of delays and traffic jams caused by the embodiment of the system in 2D will dominate the time complexity of the system.
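The edge and crossing counts in the argument above can be tabulated directly; note the crossover at n = 7:

```python
from math import comb

def edges(n):
    return comb(n, 2)      # one edge per pair of nodes: Theta(n^2)

def crossings(n):
    return comb(n, 4)      # one crossing per 4-subset of nodes: Theta(n^4)

for n in (5, 6, 7, 10, 20):
    print(n, edges(n), crossings(n))
```

At n = 6 the two counts tie (15 each); from n = 7 onward the crossings, and hence the routing collisions they stand for, dominate.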

2. Blind Systems: Bootstrapping a Map Isn’t Easy

A striking challenge that biological systems need to tackle to instantiate moments of experience with useful information arises when we consider the fact that, at conception, biological systems lack a pre-existing “ground truth map” of their own components, i.e. where they are, and where they are supposed to be. In other words, biological systems somehow bootstrap their own internal maps and coordination mechanisms from a seemingly mapless state. This feat is remarkable given the extreme entropy and chaos at the microscopic level of our universe.

Assembly Theory (AT) (2023) provides an interesting perspective on this challenge. AT conceptualizes objects not as simple point particles, but as entities defined by their formation histories. It attempts to elucidate how complex, self-organizing systems can emerge and maintain structure in an entropic universe. However, AT also highlights the intricate causal relationships and historical contingencies underlying such systems, suggesting that the task of self-mapping is far from trivial.

Consider the questions this raises: How does a cell know its location within a larger organism? How do cellular assemblies coordinate their components without a pre-existing map? How are messages created and routed without a predefined addressing system and without colliding with each other? In the context of artificial systems, how could a computer bootstrap its own understanding of its architecture and component locations without human eyes and hands to see and place the components in their right place?

These questions point to the immense challenge faced by any system attempting to develop self-models or internal mappings from scratch. The solutions found in biological systems might potentially rely on complex, evolved mechanisms that are not easily replicated in classical computational architectures. This suggests that creating truly self-understanding artificial systems capable of surviving in a hostile, natural environment, may require radically different approaches than those currently employed in standard computing paradigms.

How Does the QRI Model Overcome the Costs of Embodiment?

This core QRI article presents a perspective on consciousness and the binding problem that aligns well with our discussion of embodiment and computational costs. It proposes that moments of experience correspond to topological pockets in the fields of physics, particularly the electromagnetic field. This view offers several important insights:

  1. Frame-invariance: The topology of vector fields is Lorentz invariant, meaning it doesn’t change under relativistic transformations. This addresses the need for a frame-invariant basis for consciousness, which we identified as a challenge for traditional computational approaches.
  2. Causal significance: Topological features of fields have real, measurable causal effects, as exemplified by phenomena like magnetic reconnection in solar flares. This satisfies the requirement for consciousness to be causally efficacious and not epiphenomenal.
  3. Natural boundaries: Topological pockets provide objective, causally significant boundaries that “carve nature at its joints.” This contrasts with the difficulty of defining clear system boundaries in classical computational models.
  4. Temporal depth: The approach acknowledges that experiences have a temporal dimension, potentially lasting for tens of milliseconds. This aligns with our understanding of neural oscillations and provides a natural way to integrate time into the model of consciousness.
  5. Embodiment costs: The topological approach inherently captures many of the “costs of embodiment” we discussed earlier. The physical constraints, parallel processing, sensory integration, and real-time requirements of embodied systems are naturally represented in the complex topological structures of the brain’s electromagnetic field.

This perspective suggests that the computational costs of consciousness may be even more significant than traditional complexity theory would indicate. It implies that creating artificial consciousness would require not just simulating neural activity, but replicating the precise topological structures of electromagnetic fields in the brain. This is a far more challenging task than anything conventional AI approaches attempt.

Moreover, this view provides a potential explanation for why embodied systems like biological brains are so effective at producing consciousness. The physical structure of the brain, with its complex networks of neurons and electromagnetic fields, may be ideally suited to creating the topological pockets that correspond to conscious experiences. This suggests that embodiment is not just a constraint on consciousness, but a fundamental enabler of it.

Furthermore, there is a non-trivial connection between topological segmentation and resonant modes. The larger a topological pocket is, the lower the frequency of its resonant modes can be. This, in effect, is broadcast to every region within the pocket (much akin to how any spot on the surface of an acoustic guitar expresses the vibrations of the guitar as a whole). Thus, topological segmentation might quite conceivably be implicated in the generation of maps for the organism to self-organize around (cf. bioelectric morphogenesis according to Michael Levin, 2022). Steven Lehar (1999) and Michael E. Johnson (2018) in particular have developed very interesting conceptual frameworks for how harmonic resonance might be implicated in the computational character of our experience. The QRI insight that topology can mediate resonance further complicates the role of phenomenal boundaries in the computational role of consciousness.
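As a toy illustration of the size-frequency relationship (this is my own simplification, not a QRI model): if we idealize a pocket as a one-dimensional resonant cavity of length L with wave speed v, its fundamental frequency is v / (2L), so a larger pocket supports a lower lowest mode:

```python
def fundamental_frequency(length: float, wave_speed: float) -> float:
    """Lowest resonant mode of an idealized 1D cavity: f1 = v / (2 * L)."""
    return wave_speed / (2.0 * length)

v = 343.0  # speed of sound in air (m/s), for a guitar-like example
print(fundamental_frequency(0.1, v))  # 0.1 m cavity -> 1715.0 Hz
print(fundamental_frequency(1.0, v))  # 10x larger cavity -> 171.5 Hz
```

The inverse relationship is the only point being made here; real pockets would of course be three-dimensional and far messier.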

Conclusion and Path Forward

In conclusion, the costs of embodiment present significant challenges to creating digital sentience that traditional computational complexity theory fails to fully capture. The QRI solution to the boundary problem, with its focus on topological pockets in electromagnetic fields, offers a promising framework for understanding consciousness that inherently addresses many of these embodiment costs. Moving forward, research should focus on: (1) developing more precise methods to measure and quantify the costs of embodiment in biological systems, (2) exploring how topological features of electromagnetic fields could be replicated or simulated in artificial systems, and (3) investigating the potential for hybrid systems that leverage the natural advantages of biological embodiment while incorporating artificial components (cf. Xenobots). By pursuing these avenues, we may unlock new pathways towards creating genuine artificial consciousness while deepening our understanding of natural consciousness.

It is worth noting that the QRI mission is to “understand consciousness for the benefit of all sentient beings”. Thus, figuring out the constraints that give rise to computationally non-trivial bound experiences is one key piece of the puzzle: we don’t want to accidentally create systems that are conscious and suffering and become civilizationally load-bearing (e.g. organoids animated by pain or fear).

In other words, understanding how to produce conscious systems is not enough. We also need to figure out (a) how to ensure that they are animated by information-sensitive gradients of bliss, and (b) how to leverage the computational properties of consciousness to build more benevolent mind architectures, namely, architectures that care about their own wellbeing and the wellbeing of all sentient beings. This is an enormous challenge; clarifying the costs of embodiment is one key step forward, but only one part of an ecosystem of actions and projects needed for consciousness research to have a robust positive impact on the wellbeing of all sentient beings.

Acknowledgments:

This post was written at the July 2024 Qualia Research Institute Strategy Summit in Sweden. It comes about as a response to incisive questions by Kristian Rönn on QRI’s model of digital sentience. Many thanks to Curran Janssen, Oliver Edholm, David Pearce, Alfredo Parra, Asher Soryl, Rasmus Soldberg, and Erik Karlson, for brainstorming, feedback, suggesting edits, and the facilitation of this retreat.

Appendix

Excerpt from Michael E. Johnson’s Principia Qualia (2015) on Frame Invariance (pg. 61)

What is frame invariance?

A theory is frame-invariant if it doesn’t depend on any specific physical frame of reference, or subjective interpretations to be true. Modern physics is frame-invariant in this way: the Earth’s mass objectively exerts gravitational attraction on us regardless of how we choose to interpret it. Something like economic theory, on the other hand, is not frame-invariant: we must interpret how to apply terms such as “GDP” or “international aid” to reality, and there’s always an element of subjective judgement in this interpretation, upon which observers can disagree.

Why is frame invariance important in theories of mind?

Because consciousness seems frame-invariant. Your being conscious doesn’t depend on my beliefs about consciousness, physical frame of reference, or interpretation of the situation – if you are conscious, you are conscious regardless of these things. If I do something that hurts you, it hurts you regardless of my belief of whether I’m causing pain. Likewise, an octopus either is highly conscious, or isn’t, regardless of my beliefs about it.[a] This implies that any ontology that has a chance of accurately describing consciousness must be frame-invariant, similar to how the formalisms of modern physics are frame-invariant.

In contrast, the way we map computations to physical systems seems inherently frame-dependent. To take a rather extreme example, if I shake a bag of popcorn, perhaps the motion of the popcorn’s molecules could – under a certain interpretation – be mapped to computations which parallel those of a whole-brain emulation that’s feeling pain. So am I computing anything by shaking that bag of popcorn? Who knows. Am I creating pain by shaking that bag of popcorn? Doubtful… but since there seems to be an unavoidable element of subjective judgment as to what constitutes information, and what constitutes computation, in actual physical systems, it doesn’t seem like computationalism can rule out this possibility. Given this, computationalism is frame-dependent in the sense that there doesn’t seem to be any objective fact of the matter derivable for what any given system is computing, even in principle.

[a] However, we should be a little bit careful with the notion of ‘objective existence’ here if we wish to broaden our statement to include quantum-scale phenomena where choice of observer matters.

Example of Cost of Embodiment by Kristian Rönn

Abstract Scenario (Computational Complexity):

Consider a digital computer system tasked with object recognition in a static environment. The algorithm processes an image to identify objects, classifies them, and outputs the results.

Key Points:

  • The computational complexity is defined by the algorithm’s time and space complexity (e.g., O(n^2) for time, O(n) for space).
  • Inputs (image data) and outputs (object labels) are well-defined and static.
  • The system operates in a controlled environment with no physical constraints like heat dissipation or energy consumption.

However, this abstract analysis is extremely optimistic, since it doesn’t take the cost of embodiment into account.

Embodied Scenario (Embodied Complexity):

Now, consider a robotic system equipped with a camera, tasked with real-time object recognition and interaction in a dynamic environment.

Key Points and Costs:

  1. Real-Time Processing:
    • The robot must process images in real-time, requiring rapid data acquisition and processing, which creates practical constraints.
    • Delays in computation can lead to physical consequences, such as collisions or missed interactions.
  2. Energy Consumption:
    • The robot’s computational tasks consume power, affecting the overall energy budget.
    • Energy management becomes crucial, balancing between processing power and battery life.
  3. Heat Dissipation:
    • High computational loads generate heat, necessitating cooling mechanisms that consume additional energy and add further cost/waste to the embodied system.
    • Overheating can degrade performance and damage components, requiring thermal management strategies.
  4. Physical Constraints and Mobility:
    • The robot must move and navigate through physical space, encountering obstacles and varying terrains.
    • Computational tasks must be synchronized with motion planning and control systems, adding complexity.
  5. Sensory Integration:
    • The robot integrates data from multiple sensors (camera, lidar, ultrasonic sensors) to understand its environment.
    • Processing multi-modal sensory data in real-time increases computational load and complexity.
  6. Error Correction and Redundancy:
    • Physical systems are prone to noise and errors. The robot needs mechanisms for error detection and correction.
    • Redundant systems and fault-tolerance measures add to the computational overhead.
  7. Adaptation and Learning:
    • The robot must adapt to new environments and learn from interactions, requiring active inference (i.e. we can’t train a new model every time the ontology of an agent needs updating).
    • Continuous learning in an embodied system is resource-intensive compared to offline training in a digital system.
  8. Physical Wear and Maintenance:
    • Physical components wear out over time, requiring maintenance and replacement.
    • Downtime for repairs affects the overall system performance and availability.

An Energy Complexity Model for Algorithms

Roy, S., Rudra, A., & Verma, A. (2013). https://doi.org/10.1145/2422436.2422470

Abstract

Energy consumption has emerged as a first class computing resource for both server systems and personal computing devices. The growing importance of energy has led to rethink in hardware design, hypervisors, operating systems and compilers. Algorithm design is still relatively untouched by the importance of energy and algorithmic complexity models do not capture the energy consumed by an algorithm. In this paper, we propose a new complexity model to account for the energy used by an algorithm. Based on an abstract memory model (which was inspired by the popular DDR3 memory model and is similar to the parallel disk I/O model of Vitter and Shriver), we present a simple energy model that is a (weighted) sum of the time complexity of the algorithm and the number of ‘parallel’ I/O accesses made by the algorithm. We derive this simple model from a more complicated model that better models the ground truth and present some experimental justification for our model. We believe that the simplicity (and applicability) of this energy model is the main contribution of the paper. We present some sufficient conditions on algorithm behavior that allows us to bound the energy complexity of the algorithm in terms of its time complexity (in the RAM model) and its I/O complexity (in the I/O model). As corollaries, we obtain energy optimal algorithms for sorting (and its special cases like permutation), matrix transpose and (sparse) matrix vector multiplication.
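As I read the abstract, the headline model is a weighted sum of the algorithm's time complexity and its number of parallel I/O accesses. A sketch with invented weights (the paper derives its own constants, which I am not reproducing here):

```python
def energy_estimate(time_steps: int, parallel_ios: int,
                    c_time: float = 1.0, c_io: float = 100.0) -> float:
    """Energy ~ weighted sum of RAM-model time and parallel I/O accesses.
    The weights c_time and c_io are placeholders, not the paper's values;
    I/O is weighted heavily to reflect that memory accesses dominate."""
    return c_time * time_steps + c_io * parallel_ios

# Two algorithms with equal time complexity can differ sharply in energy
# if one is more I/O-heavy:
print(energy_estimate(10_000, 10))   # I/O-light  -> 11000.0
print(energy_estimate(10_000, 500))  # I/O-heavy  -> 60000.0
```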

Thermodynamic Computing

Conte, T. et al. (2019). https://arxiv.org/abs/1911.01968

Abstract

The hardware and software foundations laid in the first half of the 20th Century enabled the computing technologies that have transformed the world, but these foundations are now under siege. The current computing paradigm, which is the foundation of much of the current standards of living that we now enjoy, faces fundamental limitations that are evident from several perspectives. In terms of hardware, devices have become so small that we are struggling to eliminate the effects of thermodynamic fluctuations, which are unavoidable at the nanometer scale. In terms of software, our ability to imagine and program effective computational abstractions and implementations are clearly challenged in complex domains. In terms of systems, currently five percent of the power generated in the US is used to run computing systems – this astonishing figure is neither ecologically sustainable nor economically scalable. Economically, the cost of building next-generation semiconductor fabrication plants has soared past $10 billion. All of these difficulties – device scaling, software complexity, adaptability, energy consumption, and fabrication economics – indicate that the current computing paradigm has matured and that continued improvements along this path will be limited. If technological progress is to continue and corresponding social and economic benefits are to continue to accrue, computing must become much more capable, energy efficient, and affordable. We propose that progress in computing can continue under a united, physically grounded, computational paradigm centered on thermodynamics. Herein we propose a research agenda to extend these thermodynamic foundations into complex, non-equilibrium, self-organizing systems and apply them holistically to future computing systems that will harness nature’s innate computational capacity. We call this type of computing “Thermodynamic Computing” or TC.

Energy Complexity of Computation

Say, A.C.C. (2023). https://doi.org/10.1007/978-3-031-38100-3_1

Abstract

Computational complexity theory is the study of the fundamental resource requirements associated with the solutions of different problems. Time, space (memory) and randomness (number of coin tosses) are some of the resource types that have been examined both independently, and in terms of tradeoffs between each other, in this context. Since it is well known that each bit of information “forgotten” by a device is linked to an unavoidable increase in entropy and an associated energy cost, one can also view energy as a computational resource. Constant-memory machines that are only allowed to access their input strings in a single left-to-right pass provide a good framework for the study of energy complexity. There exists a natural hierarchy of regular languages based on energy complexity, with the class of reversible languages forming the lowest level. When the machines are allowed to make errors with small nonzero probability, some problems can be solved with lower energy cost. Tradeoffs between energy and other complexity measures can be studied in the framework of Turing machines or two-way finite automata, which can be rewritten to work reversibly if one increases their space and time usage.

Relevant physical limitations

  • Landauer’s limit: The lower theoretical limit of energy consumption of computation.
  • Bremermann’s limit: A limit on the maximum rate of computation that can be achieved in a self-contained system in the material universe.
  • Bekenstein bound: An upper limit on the thermodynamic entropy S, or Shannon entropy H, that can be contained within a given finite region of space which has a finite amount of energy.
  • Margolus–Levitin theorem: A bound on the maximum computational speed per unit of energy.
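For a sense of scale, the first of these limits can be computed directly; the sketch below uses the exact SI value of Boltzmann's constant:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact under the 2019 SI)

def landauer_energy_per_bit(temperature_kelvin: float) -> float:
    """Minimum energy dissipated by erasing one bit: k_B * T * ln(2)."""
    return K_B * temperature_kelvin * math.log(2)

# At room temperature (~300 K) this comes to roughly 3e-21 joules per
# bit -- many orders of magnitude below what current hardware dissipates.
print(landauer_energy_per_bit(300.0))
```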

References

Bateson, G. (1972). Steps to an ecology of mind. Chandler Publishing Company.

Chalmers, D. J. (1995). Absent qualia, fading qualia, dancing qualia. In T. Metzinger (Ed.), Conscious Experience. Imprint Academic. https://www.consc.net/papers/qualia.html

Gómez-Emilsson, A. (2023). The view from my topological pocket. Qualia Computing. https://qualiacomputing.com/2023/10/26/the-view-from-my-topological-pocket-an-introduction-to-field-topology-for-solving-the-boundary-problem/

Gómez-Emilsson, A., & Percy, C. (2023). Don’t forget the boundary problem! How EM field topology can address the overlooked cousin to the binding problem for consciousness. Frontiers in Human Neuroscience, 17. https://www.frontiersin.org/articles/10.3389/fnhum.2023.1233119

Johnson, M. E. (2015). Principia qualia. Open Theory. https://opentheory.net/PrincipiaQualia.pdf

Johnson, M. E. (2018). A future of neuroscience. Open Theory. https://opentheory.net/2018/08/a-future-for-neuroscience/

Kingdom, F.A.A., & Prins, N. (2016). Psychophysics: A practical introduction. Elsevier.

Lehar, S. (1999). Harmonic resonance theory: An alternative to the “neuron doctrine” paradigm of neurocomputation to address gestalt properties of perception. http://slehar.com/wwwRel/webstuff/hr1/hr1.html

Levin, M. (2022). Bioelectric morphogenesis, cellular motivations, and false binaries with Michael Levin. DemystifySci Podcast. https://demystifysci.com/blog/2022/10/25/kl2d17sphsiw2trldsvkjvr91odjxv

Pearce, D. (2014). Social media unsorted postings. HEDWEB. https://www.hedweb.com/social-media/pre2014.html

Sharma, A. (2023). Assembly theory explains and quantifies selection and evolution. Nature, 622, 321–328. https://www.nature.com/articles/s41586-023-06600-9

Consciousness Isn’t Substrate-Neutral: From Dancing Qualia & Epiphenomena to Topology & Accelerators

In this video I explain why substrate neutrality is so appealing to the modern educated mind. I zoom in on the Dancing Qualia argument presented by Chalmers which seems to show that if consciousness/qualia requires a specific substrate, then you can build a system where such qualia is epiphenomenal.

In this video I deconstruct this whole line of reasoning from several complementary points of view. In particular, I explain:

1) How substrate-specific hardware accelerators would generate something akin to a mysterious “consciousness discourse” in organisms that have hybrid computational substrates, with the meta-problem of consciousness (partly) explained via the interaction of two very different computational paradigms that struggle to make sense of each other.

2) How the Slicing Problem gives rise to epiphenomenalism for functionalist / computationalist theories of consciousness. This is as big of a problem, from the complete other side, as Dancing Qualia, yet somehow it doesn’t seem to receive much attention. To avoid epiphenomenalism here you require physical substrate properties to correspond to (at least in magnitude) degrees/amounts of qualia.

3) The idea that you can preserve “organizational invariance” by importing the “causal graph” of the system is question-begging. In particular, it assumes that reality breaks down into bit-sized point-like fundamental interactions between zero-dimensional entities. But this is an interpretation of physical facts, which is put into question by precisely things like field theories of physics (e.g. electromagnetism) and at a much deeper level, things like String Theory, where the substrate of reality is topologically non-trivial.

4) I show that beneath a computationalist frame for consciousness there is an implicit conception of frames of reference that are real from specific “points of view”. But as I explain, it is not possible to bootstrap integrated states out of frames of reference or points of view. Ultimately, any non-trivial integration of information that is happening in these ontologies is a projection of your own mind (you’re borrowing the unity of your consciousness to put together pieces of information that define a frame of reference or point of view!).

And

5) How the mind uses phenomenal binding for information processing can be explained with the lens of self-organizing principles set up in such a way that “following the valence gradient will take you closer to a state that satisfies the constraints of the problem”. Meaning that the very style of problem solving our experience utilizes has an entirely different logic than classical digital algorithms. No wonder it’s so difficult to square our experience with a computationalist frame of reference!

To end, I encourage the listener to enrich his or her conception of computation to include irreducible integrated states as valid inputs, outputs, and intermediate states. This way we put on the same “computational class” things like quantum computers, non-linear optics, soap bubbles, and yes, DMT entity computing systems 🙂 They all use non-trivially integrated bound states as part of their information processing pipeline.

In aggregate, these points explain why the substrate matters for computation in a way that satisfactorily addresses one of the biggest concerns with this view: namely, Dancing Qualia leading to epiphenomenalism – which gets turned on its head by the Slicing Problem (it turns out computational theories were the epiphenomenalist views all along), self-organizing principles for computation, hybrid computing systems, hardware accelerators, field topology, and the insight that “reality as a causal graph” is question-begging. Reality is, instead, a network of bound states that can interact in topologically non-trivial ways.


Relevant links:

The View From My Topological Pocket: An Introduction to Field Topology for Solving the Boundary Problem

[Epistemic Status: informal and conversational, this piece provides an off-the-cuff discussion around the topological solution to the boundary problem. Please note that this isn’t intended to serve as a bulletproof argument; rather, it’s a guide through an intuitive explanation. While there might be errors, possibly even in reasoning, I believe they won’t fundamentally alter the overarching conceptual solution.]

This post is an informal and intuitive explanation for why we are looking into topology as a tentative solution to the phenomenal binding (or boundary) problem. In particular, this solution identifies moments of experience with topological pockets of fields of physics. We recently published a paper where we dive deeper into this explanation space, and concretely hypothesize that the key macroscopic boundary between subjects of experience is the result of topological segmentation in the electromagnetic field (see explainer video / author’s presentation at the Active Inference Institute).

The short explanation for why this is promising is that topological boundaries are objective and frame-invariant features of “basement reality” that have causal effects and thus can be recruited by natural selection for information-processing tasks. If the fields of physics are fields of qualia, topological boundaries of the fields corresponding to phenomenal boundaries between subjects would be an elegant way for a theory of consciousness to “carve nature at its joints”. This solution is very significant if true, because it entails, among other things, that classical digital computers are incapable of creating causally significant experiences: the experiences that emerge out of them are by default something akin to mind dust, and at best, if significant binding happens, they are epiphenomenal from the “point of view” of the computation being realized.

The route to develop an intuition about this topic that this post takes is to deconstruct the idea of a “point of view” as a “natural kind” and instead advocate for topological pockets being the place where information can non-trivially aggregate. This idea, once seen, is hard to unsee; it reframes how we think about what systems are, and even the nature of information itself.


One of the beautiful things about life is that you sometimes have the opportunity to experience a reality plot twist. We might believe one narrative has always been unfolding, only to realize that the true story was different all along. As they say, the rug can be pulled from under your feet.

The QRI memeplex is full of these reality plot twists. You thought that the “plot” of the universe was a battle between good and evil? Well, it turns out it is the struggle between consciousness and replicators instead. Or that what you want is particular states of the environment? Well, it turns out you’ve been pursuing particular configurations of your world simulation all along. You thought that pleasure and pain follow a linear scale? Well, it turns out the scales are closer to logarithmic in nature, with the ends of the distribution being orders of magnitude more intense than the lower ends. I think that along these lines, grasping how “points of view” and “moments of experience” are connected requires a significant reframe of how you conceptualize reality. Let’s dig in!

One of the motivations for this post is that I recently had a wonderful chat with Nir Lahav, who last year published an article that steelmans the view that consciousness is relativistic (see one of his presentations). I will likely discuss his work in more detail in the future. Importantly, talking to him reminded me that ever since the foundation of QRI, we have taken for granted the view that consciousness is frame-invariant, and worked from there. It felt self-evident to us that if something depends on the frame of reference from which you see it, it doesn’t have inherent existence. Our experiences (in particular, each discrete moment of experience) have inherent existence, and thus cannot be frame-dependent. Every experience is self-intimating, self-disclosing, and absolute. So how could it depend on a frame of reference? Alas, I know this is a rather loaded way of putting it and risks confusing a lot of people (for one, Buddhists might retort that experience is inherently “interdependent” and has no inherent existence, to which I would reply “we are talking about different things here”). So I am motivated to present a more fleshed out, yet intuitive, explanation for why we should expect consciousness to be frame-invariant and how, in our view, our solution to the boundary problem is in fact up to this challenge.

The main idea here is to show how frames of reference cannot bootstrap phenomenal binding. Indeed, “a point of view” that provides a frame of reference is more of a convenient abstraction that relies on us to bind, interpret, and coalesce pieces of information, than something with a solid ontological status that exists out there in the world. Rather, I will try to show how we are borrowing from our very own capacity for having unified information in order to put together the data that creates the construct of a “point of view”; importantly, this unity is not bootstrapped from other “points of view”, but draws from the texture of the fabric of reality itself. Namely, the field topology.


A scientific theory of consciousness must be able to explain the existence of consciousness, the nature and cause for the diverse array of qualia values and varieties (the palette problem), how consciousness is causally efficacious (avoid epiphenomenalism), and explain how the information content of each moment of experience is presented “all at once” (namely, the binding problem). I’ve talked extensively about these constraints in writings, videos, and interviews, but what I want to emphasize here is that these problems need to be addressed head on for a theory of consciousness to work at all. Keep these constraints in mind as we deconstruct the apparent solidity of frames of reference and the difficulty that arises in order to bootstrap causal and computational effects in connection to phenomenal binding out of a relativistic frame.

At a very high level, a fuzzy (but perhaps sufficient) intuition for what’s problematic when a theory of consciousness doesn’t seek frame-invariance is that you are trying to create something concrete with real and non-trivial causal effects and information content, out of fundamentally “fuzzy” parts.

In brief, ask yourself: can something fuzzy “observe” something fuzzy? How can fuzziness be used to bootstrap something non-fuzzy?

In a world of atoms and forces, “systems” or “things” or “objects” or “algorithms” or “experiences” or “computations” don’t exist intrinsically because there are no objective, frame-invariant, and causally significant ways to draw boundaries around them!

I hope to convince you that any sense of unity or coherence that you get from this picture of reality (a relativistic system with atoms and forces) is in fact a projection from your mind, that inhabits your mind, and is not out there in the world. You are looking at the system, and you are making connections between the parts, and indeed you are creating a hierarchy of interlocking gestalts to represent this entire conception of reality. But that is all in your mind! It’s a sort of map-and-territory confusion to believe that two fuzzy “systems” interacting with each other can somehow bootstrap a non-fuzzy ontological object (i.e., a requirement for a moment of experience).

I reckon that these vague explanations are in fact sufficient for some people to understand where I’m going. But some of you are probably clueless about what the problem is, and for good reason. This is never discussed in detail, and this is largely, I think, because people who think a lot about the problem don’t usually end up with a convincing solution. And in some cases, the result is that thinkers bite the bullet that there are only fuzzy patterns in reality.

How Many Fuzzy Computations Are There in a System?

Indeed, thinking of the universe as being made of particles and forces implies that computational processes are fuzzy (leaky, porous, open to interpretation, etc.). Now imagine thinking that *you* are one such fuzzy computation. Having this as an unexamined background assumption gives rise to countless intractable paradoxes. The notion of a point of view, or a frame of reference, does not have real meaning here, as this way of aggregating information doesn’t ultimately allow you to identify objective boundaries around packets of information (at least not boundaries that are more than merely conventional in nature).

From this point of view (about points of view!), you realize that indeed there is no principled and objective way to find real individuals. You end up in the fuzzy world of fuzzy individuals of Brian Tomasik, as helpfully illustrated by this diagram:

Source: Fuzzy, Nested Minds Problematize Utilitarian Aggregation by Brian Tomasik

Brian Tomasik indeed identifies the problem of finding real boundaries between individuals as crucial for utilitarian calculations. And then, incredibly, he also admits that his ontological framework gives him no principled way of doing so (cf. Michael E. Johnson’s Against Functionalism for a detailed response). Indeed, according to Brian (from the same essay):

“Eric Schwitzgebel argues that ‘If Materialism Is True, the United States Is Probably Conscious’. But if the USA as a whole is conscious, how about each state? Each city? Each street? Each household? Each family? When a new government department is formed, does this create a new conscious entity? Do corporate mergers reduce the number of conscious entities? These seem like silly questions—and indeed, they are! But they arise when we try to individuate the world into separate, discrete minds. Ultimately, ‘we are all connected’, as they say. Individuation boundaries are artificial and don’t track anything ontologically or phenomenally fundamental (except maybe at the level of fundamental physical particles and structures). The distinction between an agent and its environment is just an edge that we draw around a clump of physics when it’s convenient to do so for certain purposes.

“My own view is that every subsystem of the universe can be seen as conscious to some degree and in some way (functionalist panpsychism). In this case, the question of which systems count as individuals for aggregation becomes maximally problematic, since it seems we might need to count all the subsystems in the universe.”

Are you confused now? I hope so. Otherwise I’d worry about you.

Banana For Scale

A frame of reference is like a “banana for scale” but for both time and space. If you assume that the banana isn’t morphing, you can use how long it takes for waves emitted from different points in the banana to bounce back and return in order to infer the distance and location of physical objects around it. Your technologically equipped banana can play the role of a frame of reference in all but the most extreme of conditions (it probably won’t work as you approach a black hole, for very non-trivial reasons involving severe tidal forces, but it’ll work fine otherwise).

Now the question that I want to ask is: how does the banana “know itself”? Seriously, if you are using points in the banana as your frame of reference, you are, in fact, the one who is capable of interpreting the data coming from the banana to paint a picture of your environment. But the banana isn’t doing that. It is you! The banana is merely an instrument that takes measurements. Its unity is assumed rather than demonstrated. 


In fact, for the upper half of the banana to “comprehend” the shape of the other half (as well as its own), it must also rely on a presumed fixed frame of reference. However, it’s important to note that such information truly becomes meaningful only when interpreted by a human mind. In the realm of an atom-and-force-based ontology, the banana doesn’t precisely exist as a tangible entity. Your perception of it as a solid unit, providing direction and scale, is a practical assumption rather than an ontological certainty.

In fact, the moment we try to get a “frame of reference to know itself” you end up in an infinite regress, where smaller and smaller regions of the object are used as frames of reference to measure the rest. And yet, at no point does the information of these frames of reference “come together all at once”, except… of course… in your mind.

Are there ways to bootstrap a *something* that aggregates and simultaneously expresses the information gathered across the banana (used as a frame of reference)? If you build a camera to take a snapshot of, say, the information displayed at each coordinate of the banana, the picture you take will have spatial extension and suffer from the same problem. If you think that the point at the aperture can itself capture all of the information at once, you will encounter two problems. If you are thinking of an idealized point-sized aperture, then we run into the problem that points don’t have parts, and therefore can’t contain multiple pieces of information at once. And if you are talking about a real, physical type of aperture, you will find that it cannot be smaller than the diffraction limit. So now you have the problem of how to integrate all of the information *across the whole area of the aperture* when it cannot shrink further without losing critical information. In either case, you still don’t have anything, anywhere, that is capable of simultaneously expressing all of the information of the frame of reference you chose. Namely, the coordinates you measure using a banana.
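To put a rough number on that diffraction-limit point: the standard Abbe formula bounds how small a resolvable spot can get at roughly half the wavelength of the light involved. The values below are stock textbook numbers chosen for illustration, not anything specific to the argument above:

```python
# Abbe diffraction limit: d = wavelength / (2 * NA).
# No physical aperture can resolve features much smaller than this,
# which is why the "shrink the aperture to a point" move fails.
def abbe_limit_nm(wavelength_nm: float, numerical_aperture: float) -> float:
    return wavelength_nm / (2.0 * numerical_aperture)

# Green light (~550 nm) with an idealized numerical aperture of 1.0:
d = abbe_limit_nm(550.0, 1.0)
print(f"Minimum resolvable spot: ~{d:.0f} nm")  # hundreds of nanometers, not a point
```

So even an idealized optical "snapshot" device bottoms out at a spatially extended region, never a true point.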

Let’s dig deeper. We have been talking of a banana as a frame of reference. But what if we try to internalize the frame of reference? A lot of people like to think of themselves as the frame of reference that matters. But I ask you: what are your boundaries, and how do the parts within those boundaries agree on what is happening?

Let’s say your brain is the frame of reference. Intuitively, one might feel like “this object is real to itself”. But here is where the magic comes. Make the effort to carefully trace how signals or measurements propagate in an object such as the brain. Is it fundamentally different than what happens with a banana? There might be more shortcuts (e.g. long axons) and the wiring could have complex geometry, but neither of these properties can ultimately express information “all at once”. The principle of uniformity says that every part of the universe follows the same universal physical laws. The brain is not an exception. In a way, the brain is itself a possible *expression* of the laws of physics. And in this way, it is no different than a banana.

Sorry, your brain is not going to be a better “ground” for your frame of reference than a banana. And that is because the same infinite recursion that happened with the banana when we tried to use it to ground our frame of reference into something concrete happens with your brain. And also, the same problem happens when we try to “take a snapshot of the state of the brain”, i.e. that the information also doesn’t aggregate in a natural way even in a high-resolution picture of the brain. It still has spatial extension and lacks objective boundaries of any causal significance.

Every single point in your brain has a different view. The universe won’t say “There is a brain here! A self-intimating self-defining object! It is a natural boundary to use to ground a frame of reference!” There is nobody to do that! Are you starting to feel the groundlessness? The bizarre feeling that, hey, there is no rational way to actually set a frame of reference without it falling apart into a gazillion different pieces, all of which have the exact same problem? I’ve been there. For years. But there is a way out. Sort of. Keep reading.

The question that should be bubbling up to the surface right now is: who, or what, is in charge of aggregating points of view? And the answer is: this does not exist, and it is impossible for it to exist, if you start out in an ontology that has as its core building blocks relativistic particles and forces. There is no principled way to aggregate information across space and time that would result in the richness of simultaneous presentation of information that a typical human experience displays. If there is integration of information, and a sort of “all at once” presentation, the only kind of (principled) entity that this ontology would accept is the entire spacetime continuum as a gigantic object! But that’s not what we are. We are definite experiences with specific qualia and binding structures. We are not, as far as I can tell, the entire spacetime continuum all at once. (Or are we?).

If instead we focus on the fine structure of the field, we can look at mathematical features in it that would perhaps draw boundaries that are frame-invariant. Here is where a key insight becomes significant: the topology of a vector field is Lorentz invariant! Meaning, a Lorentz transformation will merely squeeze and shear, but never change topology on its own. OK, I admit I am not 100% sure that this holds for all of the topological features of the electromagnetic field (Creon Levit recently raised some interesting technical points that might make some EM topological features frame-dependent; I’ve yet to fully understand his argument but look forward to engaging with it). But what we are really pointing at is the explanation space. A moment ago we were desperate to find a way to ground, say, the reality of a banana in order to use it as a frame of reference. We saw that the banana conceptualized as a collection of atoms and forces does not have this capacity. But we didn’t inquire into other possible physical (though perhaps not *atomistic*) features of the banana. Perhaps, and this is sheer speculation, the potassium ions in the banana peel form a tight electromagnetic mesh that creates a protective Faraday cage for this delicious fruit. In that case, well, the boundaries of that protective sheet would, interestingly, be frame-invariant. A ground!

The 4th Dimension

There is a bit of a sleight of hand here, because I am not taking into account temporal depth, and so it is not entirely clear how large the banana, as a topological structure defined by the potassium ions’ protective sheet, really is (again, this is totally made up! for illustration purposes only). The trick here is to realize that, at least insofar as experiences go, we also have a temporal boundary. Relativistically, there shouldn’t be a hard distinction between temporal and spatial boundaries of a topological pocket of the field. In practice, of course, one will typically overwhelm the other, unless you approach the brain you are studying at close to the speed of light (not ideal laboratory conditions, I should add). In our paper, and for many years at QRI (iirc an insight by Michael Johnson in 2016 or so), we’ve talked about experiences having “temporal depth”. David Pearce posits that each fleeting macroscopic state of quantum coherence spanning the entire brain (the physical correlate of consciousness in his model) can last as little as a couple of femtoseconds. This does not seem to worry him: there is no reason why the contents of our experience would give us any explicit hint about our real temporal depth. I intuit that each moment of experience lasts much, much longer. I highly doubt that it can last longer than a hundred milliseconds, but I’m willing to entertain “pocket durations” of, say, a few dozen milliseconds. Just long enough for 40 Hz gamma oscillations to bring disparate cortical micropockets into coherence, and importantly, topological union, and have this new emergent object resonate (where waves bounce back and forth) and thus do wave computing worthwhile enough to pay the energetic cost of carefully modulating this binding operation. Now, this is the sort of “physical correlate of consciousness” I tend to entertain the most. 
Experiences are fleeting (but not vanishingly so) pockets of the field that come together for computational and causal purposes worthwhile enough to pay the price of making them.
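As a sanity check on the timescales above, here is the back-of-the-envelope arithmetic: a 40 Hz gamma cycle lasts 25 ms, so a "pocket" lasting a few dozen milliseconds fits only one or two full gamma cycles, while a hundred-millisecond ceiling would fit four. The pocket durations below are just the illustrative values from the text:

```python
# Period of a gamma oscillation and how many full cycles fit inside
# candidate "pocket durations" for a moment of experience.
gamma_hz = 40.0
period_ms = 1000.0 / gamma_hz  # 25 ms per gamma cycle

for pocket_ms in (25, 50, 100):
    cycles = pocket_ms / period_ms
    print(f"{pocket_ms} ms pocket -> {cycles:g} full gamma cycle(s)")
```

On this picture, a pocket needs at least one full cycle for the "waves bouncing back and forth" resonance story to apply, which is consistent with durations in the tens-of-milliseconds range.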

An important clarification here is that now that we have this way of seeing frames of reference we can reconceptualize our previous confusion. We realize that simply labeling parts of reality with coordinates does not magically bring together the information content that can be obtained by integrating the signals read at each of those coordinates. But we suddenly have something that might be way better and more conceptually satisfying. Namely, literal topological objects with boundaries embedded in the spacetime continuum that contribute to the causal unfolding of reality and are absolute in their existence. These are the objective and real frames of reference we’ve been looking for!

What’s So Special About Field Topology?

Two key points:

  1. Topology is frame-invariant
  2. Topology is causally significant

As already mentioned, the Lorentz Transform can squish and distort, but it doesn’t change topology. The topology of the field is absolute, not relativistic.

The Lorentz Transform can squish and distort, but it doesn’t change topology (image source).
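To see why point (1) holds, here is the standard special-relativity sketch, stated informally. A Lorentz boost in 1+1 dimensions is the linear map

```latex
\Lambda(\beta) =
\begin{pmatrix}
\gamma & -\gamma\beta \\
-\gamma\beta & \gamma
\end{pmatrix},
\qquad
\gamma = \frac{1}{\sqrt{1-\beta^{2}}},
\qquad |\beta| < 1,
\qquad
\det \Lambda = \gamma^{2}\,(1-\beta^{2}) = 1 .
```

Since its determinant is 1, the boost is an invertible, continuous linear map with a continuous inverse (indeed \(\Lambda(\beta)^{-1} = \Lambda(-\beta)\)): a homeomorphism of spacetime onto itself. Homeomorphisms preserve topological invariants — connectedness, number of holes, linking — which is why boosted observers agree on the topology of a field configuration even while disagreeing about lengths and durations.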

And field topology is also causally significant. There are _many_ examples of this, but let me just mention a very startling one: magnetic reconnection. This happens when the magnetic field lines change how they are connected. I mention this example because when one hears about “topological changes to the fields of physics” one may get the impression that such a thing happens only in extremely carefully controlled situations and at minuscule scales. Similar to the concerns about why quantum coherence is unlikely to play a significant role in the brain, one can get the impression that “the scales are simply off”. Significant quantum coherence typically happens over extremely small distances, for very short periods of time, and involving very few particles at a time, and thus, the argument goes, quantum coherence must be largely inconsequential at scales that could plausibly matter for the brain. But the case of field topology isn’t so delicate. Magnetic reconnection, in particular, takes place at extremely large scales, involving enormous amounts of matter and energy, with extremely consequential effects.

You know about solar flares? Solar flares are the strange phenomenon in the sun in which plasma is heated up to millions of kelvin and charged particles are accelerated to near the speed of light, leading to the emission of gigantic amounts of electromagnetic radiation, which in turn can ionize the lower levels of the Earth’s ionosphere, and thus disrupt radio communication (cf. radio blackouts). These extraordinary events are the result of the release of magnetic energy stored in the Sun’s corona via a topological change to the magnetic field! Namely, magnetic reconnection.

So here we have a real and tangible effect happening at a planetary (and stellar!) scale over the course of minutes to hours, involving enormous amounts of matter and energy, coming about from a non-trivial change to the topology of the fields of physics.
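To make “field lines change how they are connected” a bit more concrete, here is a toy (and entirely illustrative — not a solar model) 2D X-point, the classic geometry where reconnection happens. Field lines are level sets of a flux function ψ, and the separatrices y = ±x partition the plane into topologically distinct flux domains; reconnection is precisely the transfer of flux across those separatrices:

```python
# Toy 2D magnetic X-point: take B proportional to (y, x). Field lines lie
# along level sets of the flux function psi(x, y) = (x^2 - y^2) / 2
# (check: B . grad(psi) = y*x + x*(-y) = 0). The separatrices y = +/-x
# divide the plane into four flux lobes; without reconnection, a field
# line can never leave its lobe.
def psi(x: float, y: float) -> float:
    return (x * x - y * y) / 2.0

def domain(x: float, y: float) -> str:
    """Label which topological flux domain a point belongs to."""
    value = psi(x, y)
    if value > 0:
        return "left/right lobes (psi > 0)"
    if value < 0:
        return "top/bottom lobes (psi < 0)"
    return "on a separatrix (psi = 0)"

print(domain(2.0, 0.5))  # a point in one lobe
print(domain(0.5, 2.0))  # a point in a topologically distinct lobe
print(domain(1.0, 1.0))  # right on the separatrix, where reconnection acts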

(example of magnetic reconnection; source)

Relatedly, coronal mass ejections (CMEs) also depend on changes to the topology of the EM field. My layman understanding of CMEs is that they are caused by the build-up of magnetic stress in the sun’s atmosphere, which can be triggered by a variety of factors, including uneven spinning and plasma convection currents. When this stress becomes too great, it can cause the magnetic field to twist and trap plasma in solar filaments, which can then be released into interplanetary space through magnetic reconnection. These events are truly enormous in scope (trillions of kilograms of mass ejected) and speed (traveling at thousands of kilometers per second).

CME captured by NASA (source)

It’s worth noting that this process is quite complex and not fully understood, and new research findings continue to illuminate its details. But the fact that topological effects are involved is well established. Here’s a video which I thought was… stellar. Personally, I think a program where people get familiar with the electromagnetic changes that happen in the sun by seeing them in simulations, with the sun visualized in many ways, might help us both better predict solar storms and help people empathize with the sun (or the topological pockets that it harbors!).

“The model showed differential rotation causes the sun’s magnetic fields to stretch and spread at different rates. The researchers demonstrated this constant process generates enough energy to form stealth coronal mass ejections over the course of roughly two weeks. The sun’s rotation increasingly stresses magnetic field lines over time, eventually warping them into a strained coil of energy. When enough tension builds, the coil expands and pinches off into a massive bubble of twisted magnetic fields — and without warning — the stealth coronal mass ejection quietly leaves the sun.” (source)

Solar flares and CMEs are just two rather spectacular macroscopic phenomena where field topology has non-trivial causal effects. But in fact there is a whole zoo of distinct non-trivial topological effects with causal implications, such as: how the topology of the Möbius strip can constrain optical resonant modes, twisted topological defects in nematic liquid crystals make some images impossible, the topology of eddy currents can be recruited for shock absorption (aka “magnetic braking”), the Meissner–Ochsenfeld effect and flux pinning enabling magnetic levitation, Skyrmion bundles having potential applications for storing information in spintronic devices, and so on.

(source)

In brief, topological structures in the fields of physics can pave the way for us to identify the natural units that correspond to “moments of experience”. They are frame-invariant and causally significant, and as such they “carve nature at its joints” while being useful from the point of view of natural selection.

Can a Topological Pocket “Know Itself”?

Now the most interesting question arises. How does a topological pocket “know itself”? How can it act as a frame of reference for itself? How can it represent information about its environment if it does not have direct access to it? Well, this is in fact a very interesting area of research. Namely, how do you get the inside of a system with a clear and definite boundary to model its environment despite having only information accessible at its boundary and the resources contained within its boundary? This is a problem that evolution has dealt with for over a billion years (last time I checked). And, fascinatingly, it is also the subject of study of Active Inference and the Free Energy Principle, whose math, I believe, can be imported to the domain of *topological* boundaries in fields (cf. Markov Boundary).
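The core formal idea being borrowed here — a Markov boundary — can be illustrated with a toy Gaussian model (a sketch of the standard construction from the Active Inference literature, not QRI's own math; the coupling numbers are arbitrary). Internal states couple to external states only *through* the blanket, which in a Gaussian graphical model shows up as a zero in the precision (inverse covariance) matrix:

```python
import numpy as np

# Three groups of states: external (e) -- blanket (b) -- internal (i).
# A zero entry in the precision matrix between e and i means they are
# conditionally independent given b: the blanket screens them off.
precision = np.array([
    [2.0, 0.8, 0.0],   # external couples only to the blanket
    [0.8, 2.0, 0.8],   # blanket couples to both sides
    [0.0, 0.8, 2.0],   # internal couples only to the blanket
])

cov = np.linalg.inv(precision)

# Marginally, external and internal ARE correlated (via the blanket)...
print(f"marginal cov(e, i): {cov[0, 2]:.3f}")
# ...but the zero precision entry says that, given the blanket's state,
# the internal states carry no further information about the external ones.
print(f"precision(e, i):    {precision[0, 2]:.3f}")
```

This is what makes the "model the environment from information at your boundary" problem well-posed: everything the inside can learn about the outside is mediated by the boundary's state.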

Here is where qualia computing, attention and awareness, non-linear waves, self-organizing principles, and even optics become extremely relevant. Namely, we are talking about how the *interior shape* of a field could be used in the context of life. Of course the cell walls of even primitive cells are functionally (albeit perhaps not ontologically) a kind of objective and causally significant boundary where this applies. It is enormously adaptive for the cell to use its interior, somehow, to represent its environment (or at least relevant features thereof) in order to navigate, find food, avoid danger, and reproduce.

The situation becomes significantly more intricate when considering highly complex and “evolved” animals such as humans, which encompass numerous additional layers. A single moment of experience cannot be directly equated to a cell, as it does not function as a persistent topological boundary tasked with overseeing the replication of the entire organism. Instead, a moment of experience assumes a considerably more specific role. It acts as an exceptionally specialized topological niche within a vast network of transient, interconnected topological niches—often intricately nested and interwoven. Together, they form an immense structure equipped with the capability to replicate itself. Consequently, the Darwinian evolutionary dynamics of experiences operate on multiple levels. At the most fundamental level, experiences must be selected for their ability to competitively thrive in their immediate micro-environment. Simultaneously, at the broadest level, they must contribute valuable information processing functions that ultimately enhance the inclusive fitness of the entire organism. All the while, our experiences must seamlessly align and “fit well” across all the intermediary levels.

Visual metaphor for how myriad topological pockets in the brain could briefly fuse and become a single one, and then dissolve back into a multitude.

The way this is accomplished is by, in a way, “convincing the experience that it is the organism”. I know this sounds crazy. But ask yourself. Are you a person or an experience? Or neither? Think deeply about Empty Individualism and come back to this question. I reckon that you will find that when you identify with a moment of experience, it turns out that you are an experience *shaped* in the form of the necessary survival needs and reproductive opportunities that a very long-lived organism requires. The organism is fleetingly creating *you* for computational purposes. It’s weird, isn’t it?

The situation is complicated by the fact that it seems that the computational properties of topological pockets of qualia involve topological operations, such as fusion, fission, and the use of all kinds of internal boundaries. Moreover, the content of a particular experience leaves an imprint in the organism which can be picked up by the next experience. So what happens here is that when you pay really close attention, and you whisper to your mind, “who am I?”, the direct experiential answer will in fact be a slightly distorted version of the truth. And that is because you (a) are always changing and (b) can only use the shape of the previous experience(s) to fill the intentional content of your current experience. Hence, you cannot, at least not under normal circumstances, *really* turn awareness to itself and *be* a topological pocket that “knows itself”. For one, there is a finite speed of information propagation across the many topological pockets that ultimately feed to the central one. So, at any given point in time, there are regions of your experience of which you are *aware* but which you are not attending to.

This brings us to the special case. Can an experience be shaped in such a way that it attends to itself fully, rather than attend to parts of itself which contain information about the state of predecessor topological pockets? I don’t know, but I have a strong hunch that the answer is yes and that this is what a meditative cessation does. Namely, it is a particular configuration of the field where attention is perfectly, homogeneously, distributed throughout in such a way that absolutely nothing breaks the symmetry and the experience “knows itself fully” but lacks any room left to pass it on to the successor pockets. It is a bittersweet situation, really. But I also think that cessations, and indeed moments of very homogeneously distributed attention, are healing for the organism, and even, shall we say, for the soul. And that is because they are moments of complete relief from the discomfort of symmetry breaking of any sort. They teach you about how our world simulation is put together. And intellectually, they are especially fascinating because they may be the one special case in which the referent of an experience is exactly, directly, itself.

To be continued…


Acknowledgements

I am deeply grateful and extend my thanks to Chris Percy for his remarkable contributions and steadfast dedication to this field. His exceptional work has been instrumental in advancing QRI’s ideas within the academic realm. I also want to express my sincere appreciation to Michael Johnson and David Pearce for our enriching philosophical journey together. Our countless discussions on the causal properties of phenomenal binding and the temporal depth of experience have been truly illuminating. A special shout-out to Cube Flipper, Atai Barkai, Dan Girshovic, Nir Lahav, Creon Levit, and Bijan Fakhri for their recent insightful discussions and collaborative efforts in this area. Hunter, Maggie, Anders (RIP), and Marcin, for your exceptional help. Huge gratitude to our donors. And, of course, a big thank you to the vibrant “qualia community” for your unwavering support, kindness, and encouragement in pursuing this and other crucial research endeavors. Your love and care have been a constant source of motivation. Thank you so much!!!

7 Recent Videos: Cognitive Sovereignty, Phenomenology of Scent, Solution to the Problem of Other Minds, Novel Qualia Research Methods, Higher Dimensions, Solution to the Binding Problem, and Qualia Computing

[Context: 4th in a series of 7-video packages. See the previous three packages: 1st, 2nd, and 3rd]


Genuinely new thoughts are actually very rare. Why is that? And how can we incentivize the good side of smart people to focus their energies on having genuinely new thoughts for the benefit of all? In order to create the conditions for that we need to strike the right balance between many complementary forces.

I offer a new ideal we call “Cognitive Sovereignty”. This ideal consists of three principles working together in synergy: (1) Freedom of Thought and Feeling, (2) Idea Ownership, and (3) Information Responsibility.

(1) Freedom of Thought and Feeling is the cultivation of a child-like wonder and positive attitude towards the ideas of one another. A “Yes And” approach to idea sharing.

As QRI advisors Anders Amelin and Margareta “Maggie” Wassinge write on the topic:

“On the topic of liberty of mind, we may reflect that inhibitory mechanisms are typically strong within groups of people. As is the case within minds of individuals. In minds it’s this tip of the iceberg which gets rendered as qualia and is the end result of unexperienced hierarchies of powerfully constraining filters. It’s really practical for life forms to function this way and for teams made up of life forms to function similarly, but for making grand improvements to the very foundations of life itself, you need maximum creativity instead of the default self-organizing consensus emergence.

“There is creativity-limiting pressure to conform to ‘correctness’ everywhere. Paradigmatic correctness in science, corporate correctness in business, social correctness, political correctness, and so on. As antidotes to chaos these can serve a purpose but for exceptional intellectual work to blossom they are quite counterproductive. There is something to be said for Elon Musk’s assertion that ‘excellence is the only passing grade’.

“The difference to the future wellbeing of sentient entities between the QRI becoming something pretty much overall OK-ish, and the QRI becoming something of great excellence, is probably bigger than between the corresponding outcomes for Tesla Motors.

“The creativity of the team is down to this exact thing: The qualia computing of the gut feeling getting to enjoy a haven of liberty all too rare elsewhere.”

On (2) we can say that to “be the adult in the room” is equally important. As Michael Johnson puts it, “it’s important to keep track of the metadata of ideas.” One cannot incentivize smart people to share ideas if they don’t feel like others will recognize who came up with them. While not everyone pays close attention to who says what in conversation, we think that a reasonable level of attention on this is necessary to align incentives. Obviously too much emphasis on Idea Ownership can be stifling and generate excessive overhead. So having open conversations about (failed) attribution while assuming the best from others is also a key practice to make Idea Ownership good for everyone.

And finally, (3) is the principle of “Information Responsibility”. This is the “wise old person” energy and attitude that deeply cares about the effects that information has on the world. Simple heuristics like “information wants to be free” and the ideal of a fully “open science” are pleasant to think about, but in practice they may lead to disasters on a grand scale. From gain of function research in virology to analysis of water pipes in cities, cutting-edge research can at times encounter novel ways of causing great harm. It’s imperative that one resists the urge to share them with the world for the sake of signaling how smart one is (which is the default path for the vast majority of people and institutions!). One needs to cultivate the wisdom to consider the long-term vision and only share ideas one knows are safe for the world. Here, of course, we need a balance: too much emphasis on information security can be a tactic to thwart others’ work and may be unduly onerous and stifling. Striking the right balance is the goal.

The full synergy between these three principles of Cognitive Sovereignty, I think, is what allows people to think new thoughts.

I also cover two new key ideas: (a) Canceling Paradise and (b) Multi-level Selection and how it interacts with Organizational Freedom.

~Qualia of the Day: Long Walks on the Beach~

Relevant links:


In this talk we analyze the perfume category called “Aromatic Fougère” in order to illustrate the aesthetic of “Qualiacore” in its myriad manifestations.

Definition: The Qualiacore Aesthetic is the practice and aspiration to describe experiences in new, meaningful, and non-trivial ways that are illuminating for our understanding of the nature of consciousness.

At a high-level, we must note that the classic ways of describing the phenomenology of scents tend to “miss the target”. Learning about the history, cultural imports, associations, and similarities between perfumes can be fun to do but it does not advance an accurate phenomenological impression of what it is that we are talking about. And while reading about the “perfume notes” of a composition can place it in a certain location relative to other perfumes, such note descriptions usually give you a false sense of understanding and familiarity far removed from the complex subtleties of the state-space of scent. So how can we say new, meaningful, and non-trivial things about a smell?

Note-wise, Aromatic Fougères are typically described as the combination of herbs and spices (the aromatic part) with the core Fougère accord of oak moss, lavender/bergamot, geranium, and coumarin. In this video I offer a qualiacore-style analysis of how these “notes” interact with one another in order to form emergent gestalts. Here we will focus on the phenomenal character of these effects with an emphasis on bringing analogies from dynamic system behavior and energy-management techniques within the purview of the Symmetry Theory of Valence.

In the end, we arrive at a phenomenological fingerprint that cashes out in a comparison to the psychoactive effect of “Calvin Klein” (cocaine + ketamine*), which blends both stimulation and dissociation at the same time – a rather interesting effect that can be used to help you overcome awkwardness barriers in everyday life. “Smooth out the awkwardness landscape with Drakkar Noir!”

I also discuss the art of perfumery in light of QRI’s 8 models of art:

  1. Art as family resemblance (Semantic Deflation)
  2. Art as Signaling (Cool Kid Theory)
  3. Art as Schelling-point creation (a few Hipster-theoretical considerations)
  4. Art as cultivating sacred experiences (self-transcendence and highest values)
  5. Art as exploring the state-space of consciousness (ϡ☀♘🏳️‍🌈♬♠ヅ)
  6. Art as something that messes with the energy parameter of your mind (ꙮ)
  7. Art as puzzling valence effects (emotional salience and annealing as key ingredients)
  8. Art as a system of affective communication: a protolanguage to communicate information about worthwhile qualia (which culminates in Harmonic Society).

~Qualia of the Day: Aromatic Fougères~

* Extremely ill-advised.

Relevant links:


How do you know for sure that other people (and non-human animals) are conscious?

The so-called “problem of other minds” asks us to consider whether we truly have any solid basis for believing that “we are not alone”. In this talk I provide a new, meaningful, and non-trivial solution to the problem of other minds using a combination of mindmelding and phenomenal puzzles in the right sequence such that one can gain confidence that others are indeed “solving problems with qualia computing” and in turn infer that they are independently conscious.

This explanatory style contrasts with typical “solutions” to the problem of other minds that focus on either historical, behavioral, or algorithmic similarities between oneself and others (e.g. “passing a Turing test”). Here we explore what the space of possible solutions looks like and show that qualia formalism can be a key to unlock new kinds of understanding currently out of reach within the prevailing paradigms in philosophy of mind. But even with qualia formalism, the radical skeptic solipsist will not be convinced. Direct experience and “proof” are necessary to convince a hardcore solipsist since intellectual “inferential” arguments can always be mere “figments of one’s own imagination”. We thus explore how mindmelding can greatly increase our certainty of others’ consciousness. However, skeptical worries may still linger: how do you know that the source of consciousness during mindmelding is not your brain alone? How do you know that the other brain is conscious while you are not connected to it? We thus introduce “phenomenal puzzles” into the picture: these are puzzles that require the use of “qualia comparisons” to be solved. In conjunction with a specific mindmelding information sharing protocol, such phenomenal puzzles can, we argue, actually fully address the problem of other minds in ways even strong skeptics will be satisfied with. You be the judge! 🙂

~Qualia of the Day: Wire Puzzles~

Many thanks to: Everyone who has encouraged the development of the field of qualia research over the years. David Pearce for encouraging me to actually write out my thoughts and share them online, Michael Johnson for our multi-year deep collaboration at QRI, and Murphy-Shigematsu for pushing me over the edge to start working on “what I had been putting off” back in 2014 (which was the trigger to actually write the first Qualia Computing post). In addition, I’d like to thank everyone at the Stanford Transhumanist Association for encouraging me so much over the years (Faust, Karl, Juan-Carlos, Blue, Todor, Keetan, Alan, etc.). Duncan Wilson for the beautiful times discussing these matters. Romeo Stevens for the amazing vibes and high-level thoughts. And of course everyone at QRI, especially Quintin Frerichs, Andrew Zuckerman, Anders and Maggie, and the list goes on (Mackenzie, Sean, Hunter, Elin, Wendi, etc.). Likewise, everyone at Qualia Computing Networking (the closed facebook group where we discuss a lot of these ideas), our advisors, donors, readers, and of course those watching these videos. Much love to all of you!

Relevant links:

“Tout comprendre, c’est tout pardonner” – To understand all is to forgive all.


“New scientific paradigms essentially begin life as conspiracy theories, noticing the inconsistencies the previous paradigm is suppressing. Early adopters undergo a process that Kuhn likens to religious deconversion.” – Romeo Stevens

The field of consciousness research lacks a credible synthesis of what we already know about the mind. One key thing that is holding back the science of consciousness is that it’s currently missing an adequate set of methods to “take seriously” the implications of exotic states of consciousness. Imagine a physicist saying that “there is nothing about water that we can learn from studying ice”. Silly as it may be, the truth is that this is the typical attitude about exotic consciousness in modern neuroscience. And even with the ongoing resurgence of scientific interest in psychedelics, outside of QRI and Ingram’s EPRC there is no real serious attempt at mapping the state-space of consciousness in detail. This is to a large extent because we lack the vocabulary, tools, concepts, and focus at a paradigmatic level to do so. But a new paradigm is arriving, and the following 8 new research methods and others in the works will help bring it about:

  1. Taking Exotic States of Consciousness Seriously (e.g. when a world-class phenomenologist says that 3D-printed Poincaré projections of hyperbolic honeycombs make the visual system “glitch” when on DMT the rational response is to listen and ask questions rather than ignore and ridicule).
  2. High-Quality Phenomenology: Precise descriptions of the phenomenal character of experience. Core strategy: useful taxonomies of experience, a language to describe generalized synesthesia (multi-modal coherence), and a rich vocabulary to convey the statistical regularities of textures of qualia (cf. generalizing the concept of “mongrels” in the neuroscience of visual perception to all other modalities).
  3. Phenomenology Club: Critical mass of smart and rational psychonauts.
  4. Psychedelic Turk for Psychophysics: Real-time psychedelic task completion.
  5. Generalized Wada Test: What happens when half of your brain is on LSD and the other half is on ketamine?
  6. Resonance-Based Hedonic Mapping: You are a network of coupled oscillators. Act like it!
  7. Pair Qualia Cartography: Like pair programming but for exploring the state-space of consciousness with non-invasive neurostimulation.
  8. Cognitive Sovereignty: Furthering a culture that has a “Yes &” approach to creativity, keeps track of meta-data, and takes responsibility for the information it puts out.

~Qualia of the Day: Being Taken Seriously~



Many people report experiencing “higher dimensions” during deep meditation and/or psychedelic experiences. Vaporized DMT in particular reliably produces this effect in a large percentage of users. But is this an illusion? Is there anything meaningful to it? What could possibly be going on?

In this video we provide a steel man (or titanium man?) of the idea that higher dimensions are *real* in a new, meaningful, and non-trivial sense. 

We must emphasize that most people who believe that DMT experiences are “higher dimensional” interpret their experiences within a direct realist framework. That is, they think they are “tuning in” to other dimensions, that some secret sense organ capable of perceiving the etheric realm was “activated”, that awareness of divine realms became available to their soul, or something along those lines. In brief, such interpretations operate under the notion that we can perceive the world directly somehow. In this video, we instead work under the premise that we live in a compact world-simulation generated by our nervous system. If DMT gives rise to “higher dimensional experiences”, then such dimensions will be phenomenological in nature.

We thus try to articulate how it can be possible for an *experience* to acquire higher dimensions. An important idea here is that there is a trade-off between degrees of freedom and geometric dimensions. We present a model where degrees of freedom can become interlocked in such a way that they functionally emulate the behavior of a *virtual* higher dimension. As exemplified by the “harmonograph”, one can indeed couple and interlock multiple oscillators in such a way that they generate the path of a point in a space of higher dimension than the space inhabited by any of the oscillators on their own. Moreover, with a long qualia decay, one can use this technique to “paint” entire images on a *virtual* higher-dimensional canvas!
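The harmonograph analogy can be made concrete with a small simulation. The following is a minimal sketch (all parameter values are illustrative, not taken from the original post): two damped sinusoidal oscillators drive each axis of a 2-D “pen”, so the drawn curve reflects four coupled degrees of freedom – a virtual 4-D trajectory rendered on a 2-D canvas. Lower damping plays the role of a longer “qualia decay”, letting the trace persist long enough to accumulate into a full image.

```python
import math

def harmonograph(t, oscillators):
    """Sum of damped sinusoids; each (A, f, p, d) tuple is one 1-D oscillator
    with amplitude A, frequency f, phase p, and damping d."""
    return sum(A * math.sin(2 * math.pi * f * t + p) * math.exp(-d * t)
               for A, f, p, d in oscillators)

# Two oscillators per axis: the pen's 2-D trace encodes 4 interlocked
# degrees of freedom. Small damping values = long "decay" of the trace.
X = [(1.0, 2.0, 0.0, 0.02), (0.8, 3.0, math.pi / 4, 0.01)]
Y = [(1.0, 3.0, math.pi / 2, 0.02), (0.8, 2.0, 0.0, 0.01)]

path = [(harmonograph(t / 100, X), harmonograph(t / 100, Y))
        for t in range(5000)]
```

Plotting `path` yields the familiar Lissajous-like harmonograph figures; the point is that the 2-D curve is a projection of a higher-dimensional joint state of the coupled oscillators.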

High-quality detailed phenomenology of DMT by rational psychonauts strongly suggests that higher virtual dimensions are widely present in the state. Also, the unique valence properties of the state seem to follow what we could call a “generalized music theory” where the “vibe” of the space is the net consonance between all of the metronomes in it. We indeed see a duality between spatial symmetry and temporal synchrony with modality-specific symmetries (equivariance maps) constraining the dynamic behavior.
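A toy version of this “net consonance” idea can be written down directly. The sketch below is an illustrative model only (not QRI’s actual valence formalism): each pair of metronome frequencies is scored by the simplicity of its frequency ratio, and the “vibe” of the space is the average pairwise score.

```python
from fractions import Fraction
from itertools import combinations

def pair_consonance(f1, f2, max_denominator=16):
    """Toy consonance score: simpler frequency ratios score higher
    (e.g. 4:5 scores 1/9, while complex ratios score near zero)."""
    ratio = Fraction(f1, f2).limit_denominator(max_denominator)
    return 1.0 / (ratio.numerator + ratio.denominator)

def net_consonance(freqs):
    """Net 'vibe' of the space: mean consonance over all metronome pairs."""
    pairs = list(combinations(freqs, 2))
    return sum(pair_consonance(a, b) for a, b in pairs) / len(pairs)

triad = net_consonance([200, 250, 300])    # just-intonation 4:5:6 ratios
cluster = net_consonance([200, 207, 305])  # messier ratios, lower score
```

Under this model a just-intonation triad scores higher than a detuned cluster, mirroring the claim that the net consonance between all the “metronomes” determines the hedonic tone of the state.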

This, together with the Symmetry Theory of Valence (Johnson), makes the search for “special divine numbers” suddenly meaningful: numerological correspondences can illuminate the underlying makeup of “heaven worlds” and other hedonically-loaded states of mind!

I conclude with a discussion about the nature of “highly-meaningful experiences”. In light of all of these frameworks, meaning can be understood as a valence effect that arises when you have strong consonance between abstract (narrative and symbolic), emotional, and sensory fields all at once. A key turning point in your life combined with the right emotion and the right “sacred space” can thus give rise to “peak meaning”. The key to infinite bliss!

~Qualia of the Day: Numerology~

Relevant links:

Thumbnail Image Source: Petri G., Expert P., Turkheimer F., Carhart-Harris R., Nutt D., Hellyer P. J. and Vaccarino F. (2014). Homological scaffolds of brain functional networks. J. R. Soc. Interface 11: 20140873 – https://royalsocietypublishing.org/doi/full/10.1098/rsif.2014.0873


How can a bundle of atoms form a unified mind? This is far from a trivial question, and it demands an answer.

The phenomenal binding problem asks us to consider exactly that. How can spatially and temporally distributed patterns of neural activity contribute to the contents of a unified experience? How can various cognitive modules interlock to produce coherent mental activity that stands as a whole?

To address this problem we first need to break down “the hard problem of consciousness” into manageable subcomponents. In particular, we follow Pearce’s breakdown of the problem where we posit that any scientific theory of consciousness must answer: (1) why consciousness exists at all, (2) what the varieties and values of qualia are, and what the nature of their interrelationships is, (3) the binding problem, i.e. why are we not “mind dust”?, and (4) what are the causal properties of consciousness (how could natural selection recruit experience for information processing purposes, and why is it that we can talk about it). We discuss how trying to “solve consciousness” without addressing each of these subproblems is like trying to go to the Moon without taking into account air drag, or the Moon’s own gravitational field, or the fact that most of outer space is a vacuum. Illusionism, in particular, seems to claim “the Moon is an optical illusion” (which would be true for rainbows – but not for the Moon, or consciousness).

Zooming in on (3), we suggest that any solution to the binding problem must: (a) avoid strong emergence, (b) side-step the hard problem of consciousness, (c) circumvent epiphenomenalism, and (d) be compatible with the modern scientific world picture, namely the Standard Model of physics (or whichever future version achieves full causal closure).

Given this background, we then explain that “the binding problem” as stated is in fact conceptually insoluble. Rather, we ought to reformulate it as the “boundary problem”: reality starts out unified, and the real question is how it develops objective and frame invariant boundaries. Additionally, we explain that “classic vs. quantum” is a false dichotomy, at least in so far as “classical explanations” are assumed to involve particles and forces. Field behavior is in fact ubiquitous in conscious experience, and it need not be quantum to be computationally relevant! In fact, we argue that nothing in experience makes sense except in light of holistic field behavior.

We then articulate exactly why all of the previously proposed solutions to the binding problem fail to meet the criteria we outlined. Among them, we cover:

  1. Cellular Automata
  2. Complexity
  3. Synchrony
  4. Integrated Information
  5. Causality
  6. Spatial Proximity
  7. Behavioral Coherence
  8. Mach Principle
  9. Resonance

Finally, we present what we believe is an actual plausible solution to the phenomenal binding problem that satisfies all of the necessary key constraints:

10. Topological segmentation

The case for (10) is far from trivial, which is why it warrants a detailed explanation. It results from realizing that topological segmentation allows us to simultaneously obtain holistic field behavior useful for computation and new and natural regions of fields that we could call “emergent separate beings”. This presents a completely new paradigm, which is testable using elements of the cohomology of electromagnetic fields.

We conclude by speculating about the nature of multiple personality disorder and extreme meditation and psychedelic states of consciousness in light of a topological solution to the boundary problem. Finally, we articulate the fact that, unlike many other theories, this explanation space is in principle completely testable.

~Qualia of the Day: Acqua di Gio by Giorgio Armani and Ambroxan~



Why are we conscious?

The short answer is that bound moments of experience have useful causal and computational properties that can speed up information processing in a nervous system.

But what are these properties, exactly? And how do we know? In this video I unpack this answer in order to explain (or at least provide a proof of concept explanation for) how bound conscious states accomplish non-trivial speedups in computational problems (e.g. such as the problem of visual reification).

In order to tackle this question we first need to (a) enrich our very conception of computation, and (b) also enrich our conception of intelligence.

(a) Computation: We must realize that the Church-Turing Thesis conception of computation only cares about computing in terms of functions. That is, how inputs get mapped to outputs. But a much more general conception of computation also considers how the substrate allows for computational speed-ups via interacting inner states with intrinsic information. Moreover, if reality is made of “monads” that have non-zero intrinsic information and interact with one another, then our conception of “computation” must also consider monad networks. And in particular, the “output” of a computation may in fact be an inner bound state rather than just a sequence of discrete outputs (!).

(b) Intelligence: currently this is a folk concept poorly formalized by the instruments with which we measure it (primarily in terms of sequential logico-linguistic processing). But, alas, intelligence is a function of one’s entire world-simulation: even the shading of the texture of the table in front of you is contributing to the way you “see the world” and thus reason about it. So, an enriched conception of intelligence must also take into account: (1) binding, (2) the presence of a self, (3) perspective-taking, (4) distinguishing between the trivial and significant, and (5) state-space of consciousness navigation.

Now that we have these enriched conceptions, we are ready to make sense of the computational role of consciousness: in a way, the whole point of “intelligence” is to avoid brute force solutions by instead recruiting an adequate “self-organizing principle” that can run on the universe’s inherent massively parallel nature. Hence, the “clever” way in which our world-simulation is used: as shown by visual illusions, meditative states, psychedelic experiences, and psychophysics, perception is the result of a balance of field forces that is “just right”. Case in point: our nervous system utilizes the holistic behavior of the field of awareness in order to quickly find symmetry elements (cf. Reverse Grassfire Algorithm).
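The grassfire (medial-axis) transform alluded to here is a standard image-processing idea: wavefronts ignited at a shape’s boundary propagate inward at uniform speed and “collide” along the shape’s symmetry skeleton. The discrete sketch below illustrates the principle with a BFS distance transform on a toy grid (the field-based version the post has in mind would run on continuous wave dynamics, not BFS):

```python
from collections import deque

def grassfire(shape):
    """Distance-from-boundary transform: 'fire' starts at every background
    cell and burns inward; cells reached last lie on the symmetry skeleton."""
    rows, cols = len(shape), len(shape[0])
    dist = [[None] * cols for _ in range(rows)]
    q = deque()
    for r in range(rows):
        for c in range(cols):
            if shape[r][c] == 0:          # background: ignition points
                dist[r][c] = 0
                q.append((r, c))
    while q:                              # breadth-first wavefront propagation
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    return dist

# A 5x7 solid rectangle (1 = figure, 0 = background): the cells the fire
# reaches last trace the rectangle's horizontal symmetry axis.
rect = [[0] * 9] + [[0] + [1] * 7 + [0] for _ in range(5)] + [[0] * 9]
d = grassfire(rect)
peak = max(max(row) for row in d)
skeleton = [(r, c) for r in range(7) for c in range(9) if d[r][c] == peak]
```

For the rectangle, `skeleton` comes out as the three central cells of the middle row – exactly the symmetry element the inward-collapsing wavefronts reveal.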

As a concrete example, I articulate the theoretical synthesis QRI has championed that combines Friston’s Free Energy Principle, Atasoy’s Connectome-Specific Harmonic Waves, Carhart-Harris’ Entropic Disintegration, and QRI’s Symmetry Theory of Valence and Neural Annealing to show that the nervous system is recruiting the self-organizing principle of annealing to solve a wide range of computational problems. Other principles will be discussed at a later time.
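The annealing piece of this synthesis borrows its logic from simulated annealing in optimization. The toy instance below is illustrative only (neural annealing is an analogy to this process in the harmonic domain, not this literal algorithm): high “temperature” – the analogue of entropic disintegration – lets the system escape local minima, and gradual cooling settles it into a deep, low-energy configuration.

```python
import math
import random

def simulated_annealing(energy, x0, steps=20000, t0=2.0):
    """Toy annealing: random proposals are always accepted downhill, and
    accepted uphill with probability exp(-dE/T); T cools linearly to ~0."""
    random.seed(0)                            # deterministic for illustration
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    for i in range(steps):
        t = t0 * (1 - i / steps) + 1e-6       # linear cooling schedule
        cand = x + random.uniform(-0.5, 0.5)  # local random proposal
        ce = energy(cand)
        if ce < e or random.random() < math.exp((e - ce) / t):
            x, e = cand, ce
            if e < best_e:
                best_x, best_e = x, e
    return best_x, best_e

# Rugged 1-D landscape: many local traps, global minimum near x ~ -0.3
rugged = lambda x: x * x + 2 * math.sin(5 * x)
x, e = simulated_annealing(rugged, x0=4.0)
```

Starting far from the optimum (`x0=4.0`, energy ≈ 17.8), the annealed system reliably finds a negative-energy basin that simple greedy descent would miss – the same “heat, then cool” trick the synthesis attributes to the nervous system.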

To summarize: the reason we are conscious is that being conscious allows you to recruit self-organizing principles that can run in a massively parallel fashion in order to find solutions to problems at [wave propagation] speed. Importantly, this predicts it’s possible to use e.g. a visual field on DMT in order to quickly find the “energy minima” of a physical state that has been properly calibrated to correspond to the dynamics of a worldsheet in that state. This is falsifiable and exciting.

I conclude with a description of the Goldilocks Zone of Oneness and why it is worth experiencing.

~Qualia of the Day: Dior’s Eau Sauvage (EDT)~


Types of Binding

Excerpt from “Mindmelding: Consciousness, Neuroscience, and the Mind’s Privacy” (2012) by William Hirstein (pgs. 57-58 and 64-65)

The Neuroscience of Binding

When you experience an orchestra playing, you see them and hear them at the same time. The sights and sounds are co-conscious (Hurley, 2003; de Vignemont, 2004). The brain has an amazing ability to make everything in consciousness co-conscious with everything else, so that the co-conscious relation is transitive: That means, if x is co-conscious with y, and y is co-conscious with z, then x is co-conscious with z. Brain researchers hypothesized that the brain’s method of achieving co-consciousness is to link the different areas embodying each portion of the brain state by a synchronizing electrical pulse. In 1993, Llinás and Ribary proposed that these temporal binding processes are responsible for unifying information from the different sensory modalities. Electrical activity, “manifested as variations in the minute voltage across the cell’s enveloping membrane,” is able to spread, like “ripples in calm water” according to Llinás (2002, pp.9-10). This sort of binding has been found not only in the visual system, but also in other modalities (Engel et al., 2003). Bachmann makes the important point that the binding processes need to be “general and lacking any sensory specificity. This may be understood via a comparison: A mirror that is expected to reflect equally well everything” (2006, 32).

Roelfsema et al. (1997) implanted electrodes in the brain of cats and found binding across parietal and motor areas. Desmedt and Tomberg (1994) found binding between a parietal area and a prefrontal area nine centimeters apart in their subjects, who had to respond with one hand, to signal which finger on another hand had been stimulated – a conscious response to a conscious perception. Binding can occur across great distances in the brain. Engel et al. (1991) also found binding across the two hemispheres. Apparently binding processes can produce unified conscious states out of cortical areas widely separated. Notice, however, that even if there is a single area in the brain where all the sensory modalities, memory, and emotion, and anything else that can be in a conscious state were known to feed into, binding would still be needed. As long as there is any spatial extent at all to the merging area, binding is needed. In addition to its ability to unify spatially separate areas, binding has a temporal dimension. When we engage in certain behaviors, binding unifies different areas that are cooperating to produce a perception-action cycle. When laboratory animals were trained to perform sensory-motor tasks, the synchronized oscillations were seen to increase both within the areas involved in performing the task and across those areas, according to Singer (1997).

Several different levels of binding are needed to produce a full conscious mental state:

  1. Binding of information from many sensory neurons into object features
  2. Binding of features into unimodal representations of objects
  3. Binding of different modalities, e.g., the sound and movement made by a single object
  4. Binding of multimodal object representations into a full surrounding environment
  5. Binding of representations, emotions, and memories, into full conscious states.

So is there one basic type of binding, or many? The issue is still debated. On the side of there being a single basic process, Koch says that he is content to make “the tentative assumption that all the different aspects of consciousness (smell, pain, vision, self-consciousness, the feeling of willing an action, of being angry and so on) employ one or perhaps a few common mechanisms” (2004, p15). On the other hand, O’Reilly et al. argue that “instead of one simple and generic solution to the binding problem, the brain has developed a number of specialized mechanisms that build on the strengths of existing neural hardware in different brain areas” (2003, p.168).

[…]

What is the function of binding?

We saw just above that Crick and Koch suggest a function for binding, to assist a coalition of neurons in getting the “attention” of prefrontal executive processes when there are other competitors for this attention. Crick and Koch also claim that only bound states can enter short-term memory and be available for consciousness (Crick and Koch, 1990). Engel et al. mention a possible function of binding: “In sensory systems, temporal binding may serve for perceptual grouping and, thus, constitute an important prerequisite for scene segmentation and object recognition” (2003, 140). One effect of malfunctions in the binding process may be a perceptual disorder in which the parts of objects cannot be integrated into a perception of the whole object. Riddoch and Humphreys (2003) describe a disorder called ‘integrative agnosia’ in which the patient cannot integrate the parts of an object into a whole. They mention a patient who is given a photograph of a paintbrush but sees the handle and the bristles as two separate objects. Breitmeyer and Stoerig (2006, p.43) say that:

[P]atients can have what are called “apperceptive agnosia,” resulting from damage to object-specific extrastriate cortical areas such as the fusiform face area and the parahippocampal place area. While these patients are aware of qualia, they are unable to segment the primitive unity into foreground or background or to fuse its spatially distributed elements into coherent shapes and objects.

A second possible function of binding is a kind of bridging function, it makes high-level perception-action cycles go through. Engel et al. say that, “temporal binding may be involved in sensorimotor integration, that is, in establishing selective links between sensory and motor aspects of behavior” (2003, p.140).

Here is another hypothesis we might call the scale model theory of binding. For example, in order to test a new airplane design in a wind tunnel, one needs a complete model of it. The reason for this is that a change in one area, say the wing, will alter the aerodynamics of the entire plane, especially those areas behind the wing. The world itself is quite holistic. […] Binding allows the executive processes to operate on a large, holistic model of the world in a way that allows the model to simulate the same holistic effects found in the world. The holism of the represented realm is mirrored by a type of brain holism in the form of binding.


See also these articles about (phenomenal) binding:

Qualia Computing at: TSC 2020, IPS 2020, unSCruz 2020, and Ephemerisle 2020

[March 12 2020 update: Both TSC and IPS are being postponed due to the coronavirus situation. At the moment we don’t know if the other two events will go ahead. I’ll update this entry when there is a confirmation either way. May 6 2020 update: unSCruz was canceled this year as well. Moreover, as an organization, QRI has chosen not to attend Ephemerisle this year, whether or not it ends up being canceled. Dear readers: I’m sure we’ll have future opportunities to meet in person].


These are the 2020 events lined up for me at the moment (though more are likely to pop up):

  • I will be attending The Science of Consciousness 2020 from the 13th to the 17th of April representing the Qualia Research Institute (QRI). I will present about a novel approach for solving the combination problem for panpsychism. The core idea is to use the concept of topological segmentation in order to explain how the universal wavefunction can develop boundaries with causal power (and thus capable of being recruited by natural selection for information-processing purposes) which might also be responsible for the creation of discrete moments of experience. I am including the abstract in this post (see below).
  • I will then fly out to Boston for the Intercollegiate Psychedelics Summit (IPS) from the 18th to the 20th of April (though I will probably stay for a few more days in order to meet people in the area). Here I will be presenting about intelligent strategies for exploring the state-space of consciousness.
  • At the end of April I will be attending the 2020 Santa Cruz Burning Man Regional (“unSCruz”) with a small contingent of members and friends of QRI. We will be showcasing some of our neurotech prototypes and conducting smell tests (article about this coming soon).
  • And from the 20th to the 27th of July I will be at Ephemerisle 2020 alongside other members of QRI. We will be staying on the “Consciousness Boat” and showcasing some interesting demos. In particular, expect to see new colors, have fully-sober stroboscopic hallucinations, and explore the state-space of visual textures.

I am booking some time in advance to meet with Qualia Computing readers, people interested in the works of the Qualia Research Institute, and potential interns and visiting scholars. Please message me if you are attending any of these events and would like to meet up.


Here is the abstract I submitted to TSC 2020:

Title – Topological Segmentation: How Dynamic Stability Can Solve the Combination Problem for Panpsychism

Primary Topic Area – Mental Causation and the Function of Consciousness

Secondary Topic Area – Panpsychism and Cosmopsychism

Abstract – The combination problem complicates panpsychist solutions to the hard problem of consciousness (Chalmers 2013). A satisfactory solution would (1) avoid strong emergence, (2) sidestep the hard problem of consciousness, (3) prevent the complications of epiphenomenalism, and (4) be compatible with the modern scientific world picture. We posit that topological approaches to the combination problem of consciousness could achieve this. We start by assuming a version of panpsychism in which quantum fields are fields of qualia, as is implied by the intrinsic nature argument for panpsychism (Strawson 2008) in conjunction with wavefunction realism (Ney 2013). We take inspiration from quantum chemistry, where the observed dynamic stability of the orbitals of complex molecules requires taking the entire system into account at once. The scientific history of models for chemical bonds starts with simple building blocks (e.g. Lewis structures), and each step involves updating the model to account for holistic behavior (e.g. resonance, molecular orbital theory, and the Hartree-Fock method). Thus the causal properties of a molecule are identified with the fixed points of dynamic stability for the entire atomic system. The formalization of chemical holism physically explains why molecular shapes that create novel orbital structures have a weak downward causation effect on the world without needing to invoke strong emergence. For molecules to be “natural units” rather than just conventional units, we can introduce the idea that topological segmentation of the wavefunction is responsible for the creation of new beings. In other words, if dynamical stability entails the topological segmentation of the wavefunction, we get a story where physically-driven behavioral holism is accompanied by the ontological creation of new beings.
Applying this insight to solve the combination problem for panpsychism, each moment of experience might be identified with a topologically distinct segment of the universal wavefunction. This topological approach makes phenomenal binding weakly causally emergent along with entailing the generation of new beings. The account satisfies the set of desiderata we started with: (1) no strong emergence is required because behavioral holism is implied by dynamic stability (itself only weakly emergent on the laws of physics), (2) we sidestep the hard problem via panpsychism, (3) phenomenal binding is not epiphenomenal because the topological segments have holistic causal effects (such that evolution would have a reason to select for them), and (4) we build on top of the laws of physics rather than introduce new clauses to account for what happens in the nervous system. This approach to the binding problem does not itself identify the properties responsible for the topological segmentation of the universal wavefunction that creates distinct moments of experience. But it does tell us where to look. In particular, we posit that both quantum coherence and entanglement networks may have the precise desirable properties of dynamical stability accompanied with topological segmentation. Hence experimental paradigms such as probing the CNS at femtosecond timescales to find a structural match between quantum coherence and local binding (Pearce 2014) could empirically validate our solution to the combination problem for panpsychism.



See Also:

The Binding Problem

[Our] subjective conscious experience exhibits a unitary and integrated nature that seems fundamentally at odds with the fragmented architecture identified neurophysiologically, an issue which has come to be known as the binding problem. For the objects of perception appear to us not as an assembly of independent features, as might be suggested by a feature based representation, but as an integrated whole, with every component feature appearing in experience in the proper spatial relation to every other feature. This binding occurs across the visual modalities of color, motion, form, and stereoscopic depth, and a similar integration also occurs across the perceptual modalities of vision, hearing, and touch. The question is what kind of neurophysiological explanation could possibly offer a satisfactory account of the phenomenon of binding in perception?
One solution is to propose explicit binding connections, i.e. neurons connected across visual or sensory modalities, whose state of activation encodes the fact that the areas that they connect are currently bound in subjective experience. However this solution merely compounds the problem, for it represents two distinct entities as bound together by adding a third distinct entity. It is a declarative solution, i.e. the binding between elements is supposedly achieved by attaching a label to them that declares that those elements are now bound, instead of actually binding them in some meaningful way.
Von der Malsburg proposes that perceptual binding between cortical neurons is signalled by way of synchronous spiking, the temporal correlation hypothesis (von der Malsburg & Schneider 1986). This concept has found considerable neurophysiological support (Eckhorn et al. 1988, Engel et al. 1990, 1991a, 1991b, Gray et al. 1989, 1990, 1992, Gray & Singer 1989, Stryker 1989). However although these findings are suggestive of some significant computational function in the brain, the temporal correlation hypothesis as proposed, is little different from the binding label solution, the only difference being that the label is defined by a new channel of communication, i.e. by way of synchrony. In information theoretic terms, this is no different than saying that connected neurons possess two separate channels of communication, one to transmit feature detection, and the other to transmit binding information. The fact that one of these channels uses a synchrony code instead of a rate code sheds no light on the essence of the binding problem. Furthermore, as Shadlen & Movshon (1999) observe, the temporal binding hypothesis is not a theory about how binding is computed, but only how binding is signaled, a solution that leaves the most difficult aspect of the problem unresolved.
I propose that the only meaningful solution to the binding problem must involve a real binding, as implied by the metaphorical name. A glue that is supposed to bind two objects together would be most unsatisfactory if it merely labeled the objects as bound. The significant function of glue is to ensure that a force applied to one of the bound objects will automatically act on the other one also, to ensure that the bound objects move together through the world even when one, or both of them are being acted on by forces. In the context of visual perception, this suggests that the perceptual information represented in cortical maps must be coupled to each other with bi-directional functional connections in such a way that perceptual relations detected in one map due to one visual modality will have an immediate effect on the other maps that encode other visual modalities. The one-directional axonal transmission inherent in the concept of the neuron doctrine appears inconsistent with the immediate bi-directional relation required for perceptual binding. Even the feedback pathways between cortical areas are problematic for this function due to the time delay inherent in the concept of spike train integration across the chemical synapse, which would seem to limit the reciprocal coupling between cortical areas to those within a small number of synaptic connections. The time delays across the chemical synapse would seem to preclude the kind of integration apparent in the binding of perception and consciousness across all sensory modalities, which suggests that the entire cortex is functionally coupled to act as a single integrated unit.
— Section 5 of “Harmonic Resonance Theory: An Alternative to the ‘Neuron Doctrine’ Paradigm of Neurocomputation to Address Gestalt properties of perception” by Steven Lehar

Just the fate of our forward light-cone

Implicit in the picture is that the Hedonium Ball is on the verge of becoming critical (turning into super-critical hedonium at around 17 kg, which leads to a runaway re-coherence of every wavefunction reachable, i.e. of our entire forward light-cone). The only reason the ball hasn’t gone critical is that the friendly AI is currently preventing it from doing so. But the AI is at full capacity. If it had a bit more power, the AI would completely annihilate the hedonium, since it is a threat to the Coherent Extrapolated Volition (CEV) of the particular human values that led to its creation. What’s more, the friendly AI would then go ahead and erase the memory of anyone who has ever thought of making hedonium, and change them slightly so that they belong to a society of other people who have been brainwashed to know nothing about philosophical hedonism. They would have deeply fulfilling lives, but would never know of the existence of hyper-valuable states of consciousness.

Only you can sort out this stalemate. The ball and the AI are in such a delicate balance that throwing a trolley at either one will make the other win forever.


The Super-Shulgin Academy: A Singularity I Can Believe In

Imagine that the year is 2050. A lot of AI applications are now a normal part of life. Cars drive themselves, homes clean themselves (and they do so more cheaply than maids possibly could) and even doctors have now been partially replaced with neural networks. But the so-called Kurzweilian Singularity never took off. You can now talk with a chatbot for ten rounds of conversation without being able to tell whether it is a real person or not. The bots anticipate your questions by analyzing your facial expressions and matching them against a vast library of pre-existing human-machine conversations in order to maximize their level of Turing success (i.e. success at convincing humans that the algorithm is a human).

But people have yet to believe that computers can actually feel and experience the world. The question of computer sentience now divides the world. It used to be the case that only people really interested in science fiction, philosophy, mathematics, etc. ever took seriously the idea that computers might some day experience the world like we do. But today the debate is universally recognized as valid and on-point. There are people who, largely for religious and spiritual reasons, argue that machines will never have a human soul: that there is something special, unique, and metaphysically distinct required for intelligence, something over and beyond the physical world. And on the other side you have the materialists, who argue that everything that could possibly exist in our world has to be made of matter (or dark matter, for that matter). Nothing suggests that our brains are special, that they somehow violate the physical laws. On the contrary, decades of searching have returned nothing: the brain was made of atoms last century, and it is still made of nothing but atoms this century. Even though super-computers in 2050 are already as powerful as human brains, real human-level intelligence has yet to be seen anywhere. So people continue to argue about philosophy of mind.

One philosophical view became more popular over time. This view states that consciousness is the bedrock of reality. There are, of course, spiritual perspectives that have been saying this for thousands of years. But none of them can be truly reconciled with physicalism as it stands today, except for the view called Strawsonian physicalism. This view states that the inside of the quantum wavefunctions that compose reality is made of consciousness. In other words, consciousness is the fundamental make-up of reality. Unfortunately, this view cannot in and of itself solve the phenomenal binding problem: why we are not just “mind dust.” For that you also need to claim that there is some mechanism of action that achieves phenomenal binding, for instance quantum coherence. With such a mechanism of action proposed, we can then try to work out the details.

One organization at the time decided to take up this challenge and make researching consciousness its raison d’être: the League of Super-Shulgins. On their website, they have the following “23 key points to read before choosing to study consciousness:”

(1) Phenomenal binding is not a classical phenomenon. It is not what you first think it is.

(2) Consciousness is doing computationally valuable legwork, not just hanging out.

(3) The brain’s microstructure implements a general constraint satisfaction solver (CSS).

(4) In order to instantiate a general CSS the brain uses the unique information processing properties of consciousness.

(5) The relevant information-processing properties of consciousness are: local binding constraints, global binding constraints, and the possibility of instantiating contingent and sensory-driven constraints.*

(6) The computational properties of consciousness make it an ideal substrate to implement a world-simulation with in-game degrees of freedom that match real-world decision trees.

(7) Intelligence is implemented using a mixture of learning algorithms, efficient feature-based sensory signal processing, the encoding and decoding of gestalts, and so on. General intelligence, as far as we know, requires a rather large minimum set of brain systems in order to exist. For example, a person who starts with a high IQ but then becomes severely schizophrenic is not likely to be able to solve many more problems. One can experience melancholia, anhedonia, depression, mania, psychosis, panic, neglect, derealization, depersonalization, dissociation, hyper-realization, delusions of reference, etc. just by slightly tweaking cortical and limbic structures.

(8) A simple deficit in any one of the functions we need for general intelligence (e.g. working memory, attention, affect, motivation, etc.) can impair or prevent intelligence altogether. Thus it is easy to lose general intelligence.

(9) One of these functions is phenomenal binding. When it is disrupted or takes place differently, severe computational problems arise. See: simultanagnosia.

(10) The qualia varieties we know and experience on a daily basis happen to be a great local maximum for computational efficiency. They can instantiate the serial logico-linguistic narrative human society is built upon. If one wants to instead optimize for, say, artistic appreciation, then psychedelic qualia are probably a much better alternative than normal-everyday-consciousness. It is true that commonplace consciousness does not represent its own ignorance about the nature of consciousness in general. Absent mental illness, normal-everyday-consciousness has access to a marvelously well-sealed state-space of possible thoughts and beliefs. This space is not very self-reflective, and lacks philosophical depth, but what it misses in the sublime it compensates for in the practical: you can use this kind of mind to talk about celebrity gossip and solve SAT questions. You cannot use it to fruitfully question the nature of consciousness.

(11) In spite of its limitations, the instrumental value of our everyday state of consciousness far exceeds what any other state on offer can provide. Thus, commonplace consciousness is not to be regarded as mundane, or to be made fun of. Its labor is to be appreciated. We are thankful for the computational generality that it affords us. For giving us a robust platform we can come back to whenever things get too crazy. We mindfully acknowledge that for deep existential questions, a consensus-between-states-of-consciousness is vastly more desirable than just the opinion of everyday-consciousness. Everyday-consciousness will be more than willing to see other states of consciousness as mere oddities to be collected. Shallow consciousness will classify alternatives under the guise of “biochemical cosmic stamps of qualia”… yes, they are cosmic, but they are stamps for a collection and nothing else. The hyper-ordered super-intense peak experience consciousness would, instead, think of the whole of reality as a fantastic work of art whose meaning can only be directly grasped in the present moment. We cannot reason from first principles what different states of consciousness will feel like.

(12) There are whole experiential worlds out there that have as their underlying premises concepts, tenets, ideas, ontologies, that we have never ever conceived of.** This is “that which you require to assume even before you start existing, and that without which nothing in this experiential world can be made sense of.” In our case this is time, space, sense-of-self, naïve realism (which then gives way to philosophical skepticism, semantic nihilism, etc.) and several other things like an implicit belief in causality. Believe it or not, there are vast Hell and Heaven*** realms out there that share close to nothing with everyday-consciousness, let alone early psychedelic exploration.

(13) Improving particular functionalities of a given intelligence (such as going from 50% recall to perfect semantic memory) shows clear diminishing returns after some point. One cannot increase intelligence arbitrarily just by improving, piecemeal, each functionality that gives rise to it. When you reach diminishing returns, you will need to invent a new network of functionalities altogether.

(14) We are non-dogmatic Open Individualists. We believe that, to borrow an expression from Saint William Melvin Hicks, “we are all one consciousness experiencing itself subjectively” (which happens to be true, as opposed to other things he said, like claiming that “there is no such thing as death, life is only a dream, and we are the imagination of ourselves”). Or as someone else put it: “You will only begin to understand reality once you assume that God is real and you aren’t.” We recognize that there are arguments in favor of Closed and Empty Individualism, but given the evidential stalemate they happen to be at, we choose to pragmatically adopt an Open Individualist point of view.

Our founder once said:

I experience immense joy when I learn about others’ happiness and bliss. My love for all sentient beings is not only a “like” sort of love. It is a “care deeply about and want the best for” sort of love. This sort of love implies many things. It forces me to investigate reality sincerely, so that I can carefully count and multiply, so that I can actually have the largest effect and help as many sentient beings as possible. I’m therefore very concerned about the quality of life of sentient entities in the far future. The present is obviously a lot more certain, so helping present-dwellers is not irrational from a utilitarian point of view. It all depends on the trade-offs in place. The possibility of a Singleton that will swallow all of our resources for the ages to come, however, tends to inform the method I use to assess priorities.


As a kid I was able to conceive of a benevolent God, but it had no real power over me; I did not believe in it for lack of evidence. As a teenager I experienced the phenomenal certainty of universal compassion, and was thus able to access the phenomenology of mysticism. This, without also believing that I had special powers, was very useful for working on my philosophy of mind. The entity I experienced was neither female nor male, and it was universally loving, universally caring, and universally curious. It was even universally funny****. It was not the power, the level of knowledge, or the causal wattage of the entity/being/principle that captivated me. What really captivated me was how “if everyone had access to this experience, we would all be motivated to work as if we were all the same being.” These experiences had a distinctly low-information, simple, and uncompromising love as their guiding principle. All the forms and all the particulars would be mere details of an underlying plot: the universal, unceasing, uncaused, unconditional, eternal love.


Causally, a God like the one I imagined would influence the universe very deeply if given the power to do so. It would be a curious super-intelligence with super-benevolent constraints that seeks the wellbeing of every being. Since we exist in a Darwinian universe with no such being in sight, we may have to conclude that the chances of finding an already-existing, already-capable-of-influencing-the-universe benevolent God somewhere are very slim. If such a God exists, it has to be powerless against the suffering in the multiverse. The compassion God, in a metaphorical sense, knows about the horrors of Darwinian life and wants to get rid of them wherever it finds them. If God created this universe, it now wishes it had thought through the fact that by summoning large-scale evolutionary systems, it was also summoning Moloch through the backdoor. The perils of inclusive-fitness maximization were not viscerally anticipated by this God before it broke itself apart into many qualia strings and kick-started the Strawsonian-physicalist universe we now live in.


What’s done is done. And now we are all stuck together in here, in this weird, physicalist, panpsychist, metaphysically unstable Darwinian multiverse with replicators always trying to steal the show. With Moloch preying at every level of our society, our ecosystems, our mental lives, our genetic code, our quantum substrate. Yeah, even quantum replicators try to steal the show sometimes. And I can’t be confident they will not ultimately succeed.


But the compassion God can keep us together. It can motivate us to construct a benevolent experiential God out of the materials we have. Thankfully, with consciousness technologies we can go beyond previous religions. It isn’t that “the compassion God will slap you in the face if you don’t cooperate.” It also isn’t that “the compassion God will make people want to enforce compassion on each other” and hence “use memetic slaves to slap in the face those who are not acting compassionately.” Neither of these mechanisms of action is the game-changing aspect of compassionate mystical phenomenology. What really is a game-changer is the fact that universal compassion is a powerful source of coherence, motivation and phenomenal meaning. It is an unrivaled mental organizing principle: the moment you vow to help all sentient beings, your brain is deeply affected. Your entire motivational architecture can be turned upside down with Open Individualism and compassion.


So here is the deal. We will all dedicate our mornings to the Compassion God. He does not exist outside of us. He is an aspect of consciousness, a hypothetical super-intelligent thought-form. He is a dormant cosmic force. One of the few forces that can genuinely oppose Moloch. And until we implement such a being in biological or synthetic (or cyborg) form, we will nonetheless act as if he existed already. We will praise memes that sabotage Moloch. We will always question: “What would happen if this process is not regulated and a Malthusian trap is allowed to develop?”


The Compassion God is a source of aligned goals. It pays rent by providing a fruitful, causally effective mental scheme to grow from at the core of one’s mind. Religions of the past have been epistemologically impairing. The God of Compassion isn’t: it does not require you to believe in anything outside of yourself. It just compels you to eliminate suffering and gift super-happiness to your descendants. The God of Compassion brings about feelings of encouragement and open-ended inquiry. Once you have developed a well-formed God of Compassion tulpa, your mind opens to limitless possibilities. Your compassion fuels your imagination; the universe is perceived as a place in which solutions to suffering are like puzzles. We are God bootstrapping itself out of the Molochian remnants in the organization of society. Compassion and curiosity can coexist and synergize. They power each other up.


Then, the phenomenology of universal oneness works as a motivational glue. You can certainly feel that you are only really connected to your past and future selves. Everyone else is a different ontological being. But this view is no more provable than, say, the view that we are all fundamentally the same cosmic being. Let beliefs pay rent, and when beliefs open up new varieties of qualia without penalizing you with reduced epistemic capabilities… you are certainly warranted to go and explore the new qualia.


All of this is to say: go forth and explore the state-space of consciousness. But do so knowing about the many traps of Moloch. Go and explore, but be aware of the problem of local maxima. Beware of the fact that any criterion you use to gauge “how good a given outcome is” can backfire by selecting edge cases that go against the spirit of the exploration. Go and explore, but be sure to add everything to your log, to transfer your experiences to the wiki-consciousness main module we have at the center of the Institute. Go and explore. Do it because we know that if you are here, you are doing this out of compassion, because we only admit people who would sacrifice themselves in order to prevent the arising of a Singleton. Go and explore, and do so knowing that your work, your research, may someday help us defeat Moloch once and for all.

(15) The most important function that consciousness contributes to the many operations of the mind is to embed high-level abstractions in phenomenal fields. In other words, consciousness works as the interface between a mereological nihilist Platonic world of ideas (all possible qualia varieties, including conceptual qualia) and the fluid Heraclitean world of approximate forms and shifting ontologies.

(16) We will recruit what we learn from exploring the state-space of possible conscious experiences in order to amplify our intellectual and exploratory capabilities.

(17) And with increased capabilities our ability to explore the state-space of qualia will also increase and become more efficient.

(18) Thus we may actually experience an intelligence explosion. As we become better at identifying new qualia varieties, we will also become better at recruiting them for information-processing tasks and in turn improving our very search capabilities. This loop may go foom.

(19) The loop in (18) can go foom under some special conditions. These conditions include: (a) uncoupling the experimental methods for exploring the state-space of consciousness from actions taken by entities not actively exploring consciousness (i.e. a researcher’s mind can change its state of consciousness at will, without the need for other people’s consent or participation), and (b) streamlining the process from the discovery of new qualia varieties (and their implicit constraint properties) to their recruitment for new information-processing tasks.

(20) We hence postulate a conceptual model for a super-intelligence that would (metaphorically) take the following form. This advanced super-intelligence is made of thousands of individual brain modules arranged in an N×N×N cubic matrix. The entire brain can be described as a three-dimensional grid of “brains in vats” where each brain is connected to six other brains (top, bottom, left, right, front and back). The brains at the edges and corners are special, though, and are connected to fewer brains. The connection between these brains is not just functional: it is an inter-thalamic bridge that allows the connected brains to “solve the phenomenal binding problem” and provide the physical conditions for the instantiation of “one mind.” Thus, for any set X of brains in the grid such that these X brains form a connected graph (there is a path between any two brains), you can have a “being that is made of these X brains working together and being phenomenally bound into one consciousness.” This mega-structure could then explore state-spaces of qualia in the following way. It would assign the following responsibilities to specialized brains: catalogue the known qualia varieties, characterize the structure of the qualia state-space for each qualia variety, determine which qualia varieties can be locally bound to each other, experiment with making thinking more efficient by substituting newly discovered qualia for the naturally evolved qualia recruited for a given task, and so on. The exploration of the state-space of possible conscious experiences would then be carried out by selectively erasing the memory of certain brains in the network, preparing them to express a particular phenomenology, and then adding them to teams that record from within (and also from outside) how binding certain brains together influences the corresponding qualia in each.
Since our current intelligence is the product of naturally-selected qualia varieties barely cooperating with each other within our minds, it stands to reason that our minds are very suboptimal qualia computers. The super-intelligences of the future will instead be implemented with carefully investigated qualia varieties that process information more efficiently, more reliably and, well, with a much more open mind.
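
The grid model in (20) is concrete enough to sketch in code. Below is a toy illustration (nothing QRI has implemented; the function names are made up for this post) of the two structural facts the point relies on: interior brains have six neighbors while edge and corner brains have fewer, and a set X of brains counts as a candidate “one mind” only if it induces a connected subgraph of the grid:

```python
from collections import deque


def neighbors(cell, n):
    """Yield the six-connected neighbors of a brain at integer coordinates
    (x, y, z) in an n x n x n grid; edge and corner brains get fewer."""
    x, y, z = cell
    for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                       (0, -1, 0), (0, 0, 1), (0, 0, -1)):
        nx, ny, nz = x + dx, y + dy, z + dz
        if 0 <= nx < n and 0 <= ny < n and 0 <= nz < n:
            yield (nx, ny, nz)


def is_bindable(brains, n):
    """A set of brains can form one phenomenally bound mind (in the story's
    sense) iff it induces a connected subgraph of the grid. Checked by BFS."""
    brains = set(brains)
    if not brains:
        return False
    start = next(iter(brains))
    seen, frontier = {start}, deque([start])
    while frontier:
        cell = frontier.popleft()
        for nb in neighbors(cell, n):
            if nb in brains and nb not in seen:
                seen.add(nb)
                frontier.append(nb)
    return seen == brains
```

For example, in a 3×3×3 grid the corner brain `(0, 0, 0)` has only three neighbors while the central brain `(1, 1, 1)` has six, and `{(0, 0, 0), (2, 2, 2)}` is not bindable because there is no path between the two brains within the set.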

(21) We always end at 21. Yes, this sounds weird, but that’s the law of the place. All of us working here for the abolition of suffering, the solution to the hard problem of consciousness, and as a favor to our super-blissful descendants are required by law to leave the building at 9 PM. What’s more, no artificial or natural mind is allowed to work on theoretically relevant problems outside of the 9 AM to 9 PM window. Nothing screams “I’m Moloch and I’ll eat you all” as loudly as “you can all work for as long as you want; we will judge based on the results.”

(22) Finally: every mind we create must be above hedonic zero. In order to explore any state-space that is not intrinsically blissful, you need a special permit. The need for such a permit is non-negotiable. You cannot, I repeat, you cannot just create any mind for “research.” The mind you create has to be the sort of mind that (a) does not want to die, and (b) has no conceivable malicious desire. Every mind you create – so as to avoid Moloch scenarios – has to be a hedonistic negative utilitarian. Period. I know some of you will blame this system for being “already the result of a memetic Moloch uprising.” But the system in place prevents any of the Moloch outcomes that intentionally and consistently produce suffering as part of their natural order of business.

(23) Ask your local consciousness regulation agency about scholarship opportunities at our Institute. You may have what it takes to help us figure out how to achieve lasting world-peace.

Sincerely,

The League of Super-Shulgins, 2054


Qualia field calibration psychophysics – with love, Andrés

* We navigate a sensory-triggered, qualia-based world-simulation that blends together local binding constraints, global binding constraints, and state-dependent learned constraints. Consciousness is useful to the organism insofar as it helps it solve the constraint satisfaction problems represented in the world-simulation.

What are these terms? Local binding constraints are constraints intrinsic to specific qualia varieties. For example, CIELAB reveals that it is not possible to experience both blue and yellow as part of a unitary smooth color. It is possible to see a sea of gray with many dots of blue and dots of yellow, but that is not the same as seeing a uniform color. This sort of constraint arises in all qualia varieties with multiple values.
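
The blue/yellow exclusion can be made vivid with CIELAB’s own coordinates: the b* axis encodes the blue(−)/yellow(+) opponent channel (and a* the green(−)/red(+) channel), so a single color point can lean bluish or yellowish but never both at once. A minimal sketch of this (the function name and decomposition are mine, for illustration only):

```python
def opponent_components(a_star: float, b_star: float) -> dict:
    """Decompose a CIELAB chromaticity into its four opponent poles.
    In CIELAB, a* encodes green(-)/red(+) and b* encodes blue(-)/yellow(+);
    each axis can contribute to at most one of its two poles, which is the
    'local binding constraint' of the color qualia variety as arithmetic."""
    return {
        "red": max(a_star, 0.0),
        "green": max(-a_star, 0.0),
        "yellow": max(b_star, 0.0),
        "blue": max(-b_star, 0.0),
    }
```

For any (a*, b*) pair, at most one of `blue` and `yellow` is nonzero, so no point in the space represents a uniform bluish-yellow, exactly the impossibility the footnote describes.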

The global binding constraints are more difficult to explain, and may not even exist. But, hypothetically, it may be the case that certain qualia varieties cannot coexist as part of the same conscious experience. For instance, experiencing a certain mood may ultimately come down to a particular resonant structure in our globally-binding qualia strings (let’s just say). Then maybe you can’t experience both X and Y moods simultaneously because they always become dissonant with each other and experience significant mutual cancellation. [This may explain why people can’t seem to ever find the right way to provoke a smooth blend of Salvia and DMT consciousness.]

Finally, the learned constraints are contingent and sensory-driven. What are these? They include both our current sensory stimuli, which constrain the state of our consciousness, and whatever memories, recollections, and general neurological barriers we happen to be activating right now.
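
If one takes the “general constraint satisfaction solver” framing of points (3)–(5) literally, the skeleton is just backtracking search over assignments, with local, global, and learned constraints all entering as predicates. A deliberately tiny sketch (illustrative only; this is generic CSP code, not a model of the brain):

```python
def solve(variables, domains, constraints, assignment=None):
    """Minimal backtracking constraint-satisfaction solver.
    `constraints` is a list of predicates over a (possibly partial)
    assignment dict; each predicate must return True whenever it cannot
    yet be evaluated. Returns a complete consistent assignment, or None."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment  # every variable assigned and all checks passed
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        candidate = {**assignment, var: value}
        if all(check(candidate) for check in constraints):
            result = solve(variables, domains, constraints, candidate)
            if result is not None:
                return result
    return None  # no consistent assignment under these constraints
```

In the analogy, a local binding constraint would be a predicate forbidding an impossible blend within one variable (one patch can’t be assigned “bluish-yellow”), while learned constraints would be predicates swapped in and out as stimuli and memories change.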


CIELAB (1976)

** As an example of something where this happens, imagine that my friend Fred were suddenly able to talk to space itself. Space asks him: “Hey, my friend, what is this thing I’ve been hearing about called ‘the here and now’?” My friend tried to say something that came out like this: “The here and now is the location in space-time from which this very statement, these very words, are being conceived and then physically delivered to you.” Space became very confused; she did not understand half of the words she was receiving. Space said, “I guess maybe I can’t reason about space in the same way as you can. I can nonetheless tell you anything you want about the ‘inverted semantic omniism’ that we entities of Space love to talk about.” Alright, what’s that? “That’s when the concepts that make up your reality, concepts of a qualia-order no larger than the qualia-order of the conceptual fields in which they are embedded, conspire together and circumvent low-level constraints by imagining a new topology for the self-other temporal membrane.” And, “where does this happen?” my friend inquired. Space responded: “As far as I can tell, this usually happens in the conceptual space that denies mereological nihilism.” Alright, let’s “pack and leave,” said my friend, and deep down, I agreed entirely with him. I entirely get why he would be scared so badly by a disincarnate entity that comes from a reality with different basement ontologies and fundamentals. I, too, am afraid of ontological revolutions. This is why I try to anticipate them as far in advance as possible: so that the shock is less shattering to my psychology.

*** Inasmuch as experience is real, Hells and Heavens are just as real, as long as they have been instantiated somewhere in the multiverse. John C. Lilly and bad luck may be culprits for the existence of a very specific and time-bound experiential hell (see “The Center of the Cyclone,” the chapter called “A Guided Tour of Hell”).

**** Universally funny means: you can get, and interact with, any phenomenal joke. Human jokes are a very specific kind of conscious humor; our evolutionary legacy guarantees that they, too, are related to our survival. General jokes, on the other hand, exist in a much larger space of possibilities. There are funny phenomenologies with conceptual content. Then there are those with sensory content. And then there are funny phenomenological applications of ontological qualia. Nothing is safe. Everything can be humorously twisted.