UFOs as Cosmic Parasites: An Evolutionary Game Theory Analysis of Relativistic Craft

or How the “Grey Paradox” Might Actually Make Sense

[Epistemic Status: I’m having fun. But also, I’m attempting to make sense of seemingly bizarre phenomena through the lens of physics, evolution, and game theory. Heavy, perhaps even daring, speculation based on limited but increasingly credible evidence – take with a giant grain of salt and don’t update too much about Qualia Computing based on this one post]

Taking UFOs Seriously: Physics, Game Theory, and Evolutionary Dynamics

I should start by acknowledging my initial heavy skepticism about the whole “UFO phenomenon”. Like many people influenced by rationalist epistemics and aesthetics, I have always found it easy to dismiss the entire field as a combination of misidentification, social contagion, and wishful thinking. Whenever a friend sent me a link to credible-sounding journalism on the topic, I would remember Stuart Armstrong’s 2012 talk at the Oxford Physics Department about optimal space colonization strategies. His calculations showed that if you want to spread as far as possible, the winning strategy involves launching tiny self-replicating nanotechnology systems containing your civilization’s information content in rice-sized projectiles to as many galaxies as possible. The mathematics are clear: small differences in miniaturization lead to enormous differences in how many galaxies you can reach.
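Armstrong’s scaling argument can be caricatured in a few lines of arithmetic (a sketch with invented numbers – the mass budget, probe masses, and the one-probe-per-target assumption are all illustrative, not from his talk):

```python
# Caricature of the miniaturization argument: with a fixed launch-mass
# budget, every order of magnitude of miniaturization buys an order of
# magnitude more probes – and hence, roughly, more reachable galaxies.
# All numbers below are invented for illustration.
BUDGET_KG = 1_000_000.0  # hypothetical total launch-mass budget

for name, probe_kg in [("1-tonne probe", 1_000.0),
                       ("1-kg probe", 1.0),
                       ("rice-sized probe (~25 mg)", 2.5e-5)]:
    n_probes = BUDGET_KG / probe_kg
    print(f"{name:26s} -> {n_probes:.0e} probes")
```

The point survives any particular choice of numbers: probe count scales inversely with probe mass, so modest gains in miniaturization translate into orders of magnitude more seeds launched.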

Given this logic, the idea that we would naturally encounter biological organisms in large spacecraft seemed ruled out on priors. Why would any advanced civilization choose to build massive craft and travel for hundreds, thousands, millions of years, only to reach a tiny fraction of the universe, when they could achieve vastly superior spread through miniaturized probes?

Then Robin Hanson started taking the phenomenon seriously (talk about social contagion!), a couple of people I know and who I consider reliable witnesses told me unbelievable personal stories involving UFOs (while fully sober – both during the experience and while recounting it!), and finally the “New Jersey Drone” situation started happening last November (and, apparently, continues to this day). After compiling and analyzing dozens of official sources and trying to apply all kinds of conventional explanations, I concluded that… I don’t know what the fuck is going on.

After I declared epistemological bankruptcy about the topic on Twitter, someone emailed me a series of lectures about the science of UFOs that were delivered at the SOL Foundation’s launch event at Stanford in 2023. Of special note to me was Kevin Knuth’s presentation on UAP flight characteristics, which made me seriously reconsider my previous assumptions. It is worth mentioning that Kevin isn’t a random hobbyist; he’s an Associate Professor of Physics and Informatics at the University at Albany and has been editor-in-chief of the prestigious academic journal Entropy since 2013. The core issue, as he explains, isn’t just that these objects really do seem to exist – it’s that their behavior implies something unexpected about the nature of spacetime manipulation and, potentially, its accessibility to technological civilizations. Knuth’s peer-reviewed analysis suggests UFOs can show accelerations of up to 5,000 Gs, far beyond what any plausible human-made craft could generate (or even withstand). More recently, an empirically-driven analysis of high-quality multi-sensor broadband UFO recordings by the Tedesco Brothers suggests the presence of gravitational lensing around these mysterious craft. Thus, the phenomenon as reported in multiple credible cases suggests that these aren’t just extremely advanced aircraft – they’re devices that manipulate the fabric of spacetime itself. Ok. Let’s take this with a big grain of salt. But I would be lying if I didn’t find this analysis at least somewhat compelling (in research aesthetics and evidential strength, if only).
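For a sense of scale, some back-of-the-envelope arithmetic on that acceleration figure (the 5,000 g number is from Knuth’s analysis; everything else here is my own illustrative sketch):

```python
G = 9.81           # standard gravity, m/s^2
C = 299_792_458.0  # speed of light, m/s

accel = 5_000 * G  # reported peak UAP acceleration (Knuth)

# Speed after a single second at 5,000 g (Newtonian arithmetic is fine
# at these speeds): about 49 km/s, faster than solar escape velocity
# from Earth's orbit (~42 km/s).
v = accel * 1.0
print(f"speed after 1 s: {v / 1000:.0f} km/s")

# Time to reach 0.1c if such acceleration were (implausibly) sustained:
t = 0.1 * C / accel
print(f"time to 0.1c: {t / 60:.1f} minutes")
```

For comparison, a fighter jet pulls around 9 g, and sustained accelerations above roughly 10 g are lethal to humans; whatever is generating these signatures, it is not operating within the envelope of known propulsion or known biology.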

Did you know some UFOs seem to “double” and then “remerge” at times? Apparently this _might_ be explained via gravitational lensing effects. Yeah, right…

This leads me to posit a really interesting possibility: what if relativistic travel is not as hard as we first thought? What if it becomes accessible relatively early in a civilization’s development, before perfect miniaturization or molecular manufacturing? What if traveling close to the speed of light safely for large objects is a technology within reach for a civilization not much older than humans? This would dramatically reshape our understanding of likely alien civilizational development paths.

The implications are enormous. In the volume of space-time where this technology is first discovered, the earliest escapees might actually achieve the furthest reach. Rather than waiting for perfect miniaturization, the optimal strategy might involve what I’ll call “hiding in the future”: using relativistic travel to explore vast distances while experiencing only years of subjective time. This isn’t exactly unprecedented, as it has a parallel with biological preservation strategies we see on Earth. Just as bears hibernate to survive winter and tardigrades enter cryptobiosis to endure extreme conditions, relativistic travelers could effectively “hibernate” through dangerous periods of their civilization’s development by being unreachable to others while fighting entropy via time dilation.
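The “hiding in the future” strategy is easy to quantify with special relativity alone (a minimal sketch; the distance and Lorentz factor below are arbitrary illustrations, and acceleration/deceleration phases are ignored):

```python
import math

def proper_time_years(distance_ly, gamma):
    """Ship-frame (proper) time for coasting a given Earth-frame distance
    at the speed implied by Lorentz factor gamma."""
    beta = math.sqrt(1.0 - 1.0 / gamma**2)  # v/c
    earth_years = distance_ly / beta        # elapsed time in Earth's frame
    return earth_years / gamma              # time dilation: tau = t / gamma

# Crossing 1,000 light-years at gamma = 100 (v ~ 0.99995c): Earth clocks
# advance ~1,000 years while ship clocks advance only ~10.
print(f"{proper_time_years(1000, 100):.2f} subjective years")
```

A traveler at high gamma thus skips over a millennium of their home civilization’s history – including, conveniently, whatever catastrophes it inflicts on itself – while aging only a decade.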

From an evolutionary standpoint, this creates powerful selective pressures in two directions. First, the ability to reach distant systems ahead of other civilizations provides obvious reproductive advantages. But perhaps more intriguingly, the time dilation effect offers protection against local instabilities and existential risks. If your civilization shows signs of approaching a potentially catastrophic singularity or societal collapse, the ability to effectively freeze yourself in time while traveling to distant systems becomes an incredibly attractive survival strategy. Given these dynamics, we might expect the first wave of cosmic explorers to be relatively young civilizations, perhaps only centuries ahead of us in development, who recognize this temporal escape hatch and take it before their window of opportunity closes.

Consider the game theory implications. If spacetime manipulation technology is achievable before advanced consciousness tech or molecular-scale manufacturing, it creates what I’ll call a “relativistic first-mover advantage.” Any civilization that achieves this capability gains an enormous evolutionary edge by being able to physically explore and colonize space while bringing biological beings along for the ride.

The Grey Question: Antigravitic Tech Transfer in Tandem with Genetic Experimentation as an Optimal Cosmic Reproductive Strategy

Let’s follow this logic to its natural conclusion. If we accept that:

  1. These objects demonstrate actual manipulation of spacetime
  2. This technology might be accessible relatively early in a civilization’s development
  3. Relativistic travel creates powerful first-mover advantages

Then we should seriously consider whether certain consistently reported patterns of UFO behavior, particularly around technology transfer and biological sampling, might represent an optimized evolutionary strategy. Yes, I’m talking about the “Greys” and their alleged hybridization programs. Bear with me – this gets interesting.

Most analyses of “Grey behavior” assume either benevolent uplift (teaching us technology for our own good) or simple resource extraction (treating us like lab rats). But what if we’re looking at something far more sophisticated? I propose they discovered and are applying a highly optimized reproductive strategy that operates on multiple levels simultaneously. Instead of seeing this as either altruistic teaching or exploitative research, consider it as a sophisticated bootstrapping operation by a parasitic relativistic intelligence that has evolved to optimize for cosmic-scale reproduction.

The core insight is this: if spacetime manipulation technology is achievable relatively early in a civilization’s development, but also represents a crucial branching point in technological evolution, then steering other civilizations toward this technology (and away from other tech trees) while simultaneously collecting genetic material for hybridization represents an incredibly efficient expansion strategy. We will build the technology for them while they use our genetic material to learn to adapt to more environments. It’s a win-win for them. A lose-lose for us.

Think about it this way: rather than building all their own infrastructure across the cosmos, “the greys” (or whoever we want to call the alleged creatures that allegedly gave the US alleged antigravitic tech, allegedly as far back as the 50s) could be creating self-replicating launch points by guiding civilizations like ours toward the specific technological path that their reproduction strategy is optimized for. Their “gift” would not be about keeping us dependent or about harvesting resources. It’s about shaping our entire civilization into a format that’s maximally useful for their own replicator strategy.

This would explain the peculiar focus on both antigravitic technology transfer and biological sampling. They’re not trading technology for genetic samples in some sort of cosmic barter system overseen by a benevolent Galactic Federation. Rather, the technology transfer ensures we develop in ways compatible with their civilization’s needs, while the biological sampling allows them to maintain and expand their own genetic diversity. In tandem, these are both crucial for a spacefaring species facing the harsh realities of cosmic radiation, diverse planetary environments, and competing reproductive strategies (likely extremely competitive in their own way, of which we know nothing).

The alleged genetic experiments, in this light, aren’t about creating a worker caste or infiltrating human society. No. They are about maintaining evolutionary adaptability, minimizing the number of bodies they needed to bring to Earth, and achieving squatter’s rights (ahem, first-mover advantage) on other planets predicted to harbor intelligent life in the forward lightcone. Each new civilization they encounter becomes both a technological bootstrap point and a source of genetic variation, creating a network of compatible technology bases and adaptable biological resources.

This model resolves many of the apparent contradictions in reported Grey behavior. Their seemingly excessive interest in human biology despite advanced technology makes sense if biological adaptation remains crucial to their expansion strategy. Their careful parceling out of technological information aligns with the need to guide our development along specific paths without triggering catastrophic disruption.

In the end, we might be looking at something far more sophisticated than either simple resource extraction or benevolent technological uplift. We might be observing an incredibly well-optimized expansion strategy that operates simultaneously on technological, biological, and civilizational levels. A strategy that evolved precisely because spacetime manipulation technology became available before advanced consciousness tech or molecular manufacturing. The former (in my view) is a benevolence factor, and the latter a civilizationally destabilizing one; the Greys seem to avoid both despite their apparent “highly advanced” status.

If this model is correct, we’re not dealing with unfathomably advanced post-biological entities or simple resource extractors. We’re dealing with a civilization that has optimized for a specific developmental path. A civilization that we might be about to encounter ourselves. The question then becomes: do we recognize this branching point for what it is, and if so, what do we do with that knowledge?

Additional Insights: Temporal Competition, Technology Trees, and Pure Replicators at the Evolutionary Limit

I’ve had a few additional insights in recent weeks that came from this (admittedly speculative) way of thinking and that deserve mentioning. First, there’s what we might call the “Pioneer Paradox”: the observation that the first entities to achieve relativistic travel capability might paradoxically come from civilizations that are less technologically advanced overall. This sounds counterintuitive until you consider institutional constraints and safety protocols. Cheap antigravitic tech is the sort of thing you would gift a civilization if you wanted it to destroy itself. It’s extremely powerful for terrorism, for example. More advanced civilizations might develop comprehensive safety frameworks and review boards that effectively prevent early adoption of potentially risky technologies. The first relativistic travelers might emerge from civilizations just advanced enough to build the technology, but not so advanced that they’ve developed institutional frameworks that would prevent its use.

Then there’s the matter of nuclear technology. If the reports about UFOs showing particular interest in nuclear facilities are to be taken seriously (and there’s surprisingly consistent documentation here), it might indicate something about technological development paths. Nuclear technology represents one possible path to space travel, but it comes with specific risks and limitations. The apparent interest in nuclear facilities might not be about preventing war. At least not about preventing war in general (but it might be about preventing the kind of war that is counterproductive to their own reproductive strategy). The real reason they are so interested in our nuclear capabilities might be about steering technological development away from what they consider a developmental dead end from their (reproductive) point of view. The aliens don’t want us to be peaceful. They’re showing us they can disable nuclear weapons so that we invest heavily in antigravitic tech and build craft that they can use for themselves down the line (perhaps after, or while, we blow ourselves up with it).

This brings us to what we might call “temporal competition zones.” In a universe with cheap relativistic travel but no FTL, you get interestingly non-trivial patterns of information spread. The first travelers from Civilization A might arrive at a distant system, only to find that while they were en route, Civilization B developed better technology and beat them there. This creates regions of space-time where multiple civilizations might be racing to establish first contact or control, each operating with different technological capabilities and different amounts of time dilation.

The most unsettling implication? Once relativistic travel becomes possible, there’s a strong game-theoretic pressure for civilizations to expand as quickly as possible, even if they’re not fully ready. This is not only because the host civilization will likely face existential risk due to the technology; the risk of letting another civilization establish first presence somewhere might generally outweigh the benefits of waiting for better technology. Robin Hanson’s concept of “grabby aliens” becomes particularly prescient: the early relativistic travelers might be harbingers of a more organized expansion wave following behind them (assuming the society didn’t collapse due to the instability introduced by the technology).

Finally, there’s the question of why these visitors (if that’s what they are) seem so interested in military installations. The conventional explanation focuses on monitoring nuclear weapons because “it’s the only thing that might hurt them”, but there might be a simpler game-theoretic explanation (even leaving aside the reproductive strategy of the Greys): military installations represent the highest concentration of sensors and trained observers capable of detecting their presence. If you’re trying to guide a civilization’s technological development while maintaining plausible deniability, you’d want to be detected primarily by credible observers operating sophisticated equipment. This creates an ideal calibration mechanism, where military encounters provide feedback about detection capabilities without requiring overt contact with the general public.

We might be witnessing not just a reproductive strategy, but a complete civilizational bootstrapping approach that operates across multiple timescales simultaneously. The technology transfer shapes our development path, the biological sampling provides evolutionary adaptability, and the pattern of encounters creates a calibrated revelation process that prevents both complete dismissal and civilization-disrupting panic.

The universe, it seems, might be stranger than we imagined… but perhaps in more logically coherent ways than UFO skeptics like myself originally assumed. Not in a comforting “we’re all one consciousness” kind of way, but in a “the world’s bacteria biomass is 45X larger than the animal biomass” kind of way.

Consider this: bacteria represent the most successful form of life on Earth, with a total biomass 45 times larger than all animals combined. Despite billions of years of evolution producing seemingly more “advanced” organisms, bacteria remain the dominant form of life because they optimized for robust reproduction rather than complexity. What if cosmic civilization follows a similar pattern?

We imagine advanced aliens as post-biological entities who have transcended their evolutionary origins, basking in enlightened states of consciousness while casually engineering matter at the molecular scale. But what if the most evolutionarily stable strategy in the cosmos looks far more parasitic – relativistic biological entities hiding with the benefit of time dilation across space-time, spreading their genes through hybridization programs, and “helping” developing civilizations build ships with suspiciously compatible technology… only to exploit those very ships as replication vectors once their hosts reach critical technological maturity? The cosmic equivalent of a parasitoid wasp laying its eggs in an unwitting host, but with spaceships instead of larvae.

The path toward consciousness technology and molecular manufacturing might seem more elegant and “advanced,” but perhaps the messy, biological path of relativistic space travel represents a more robust evolutionary strategy. Just as bacteria continue to thrive alongside more “advanced” organisms, perhaps the cosmos favors strategies that prioritize reliable reproduction over transcendence. The limit of Pure Replicator Dynamics might look less like Grey Goo and more like Grey Aliens.

Open Letter to the TPOT Community on the Topic of Animal Suffering: Enlightenment, Tanha, and Kiki Qualia

Dear TPOT community,

I’ve been noticing an increasingly common perspective in our discussions that I feel compelled to address. There seems to be a growing belief that non-human animals are somehow “enlightened by default” or exist in a state free from tanha (craving, aversion, and the resulting suffering). I’ve seen numerous posts suggesting that non-human animals are somehow naturally free from the mental patterns that create suffering in humans. While I deeply appreciate the sentiment behind this view – as indeed, animals do seem to access deeply bouba states more readily than most humans realize, and their capacity for pleasure is real and ethically relevant – I believe this represents a fundamental misunderstanding about animal consciousness that needs careful examination.

The probability that, say, free range cows (or other non-human animals in general) are experiencing constant bliss, lack tanha, or are “enlightened by default” is, by my estimation, very low (<0.2%). A claim of enlightenment-by-default requires extraordinary evidence, and what we see points in the opposite direction. Let me break this down from a qualia-centric perspective:

Consider first the clear evidence of suffering in prey animals – species like deer, rabbits, or gazelles must maintain constant vigilance against predators, a state that phenomenologically manifests as a persistent kiki-like tension in consciousness. This baseline of anxiety and alertness is fundamentally incompatible with persistent non-dual states. A prey animal experiencing constant bliss would be rapidly selected against in an environment with predators.

Even predators themselves are not free from tanha – we see intense craving manifesting in their sexual frustration during mating seasons, their constant drive for status within social hierarchies, and their restless search for food even when not immediately hungry. The apparent ease with which a lion rests in the sun masks the intense loops of desire and aversion that characterize their conscious experience.

In domesticated animals like cattle, we see equally clear evidence of craving, aversion, and suffering in their daily lives. Cows display intense maternal distress when separated from their calves, with both mother and offspring showing signs of anxiety and distress that can persist for days. They engage in competition for food resources and establish complex social hierarchies that generate ongoing stress for individuals lower in the pecking order. Their food-seeking behaviors demonstrate clear patterns of craving, and they exhibit territorial behaviors that indicate attachment and aversion patterns similar to those we recognize in humans.

The “gazelle shaking off trauma” observation that’s often cited in these discussions actually reinforces the presence of suffering rather than its absence. This isn’t evidence of enlightenment – it’s evidence of an evolved mechanism for rapid state-switching to maintain function. The ability to quickly return to a baseline state of persistent vigilance and anxiety after a threatening encounter is precisely what you’d expect from an organism optimized for survival rather than one experiencing persistent non-dual awareness.

Non-human animals are clearly stuck in loops of craving and aversion. Consider a dog who insists on affection or food: scratching at the door, howling, and persistently demanding attention. These behaviors are obvious manifestations of craving, and, as Rob Burbea points out, all craving is fundamentally based on patterns of body tension. These patterns are not unique to humans but are basic features of animal consciousness. Tanha is thus near or completely ubiquitous in the animal kingdom.

From a neurophysiological perspective, as David Pearce (who, notably, uses the term “non-human animals” to remind us that we too are animals, and that creating artificial distinctions makes it easier to rationalize a sense of separation) has consistently emphasized, we see remarkable conservation of emotional circuitry across mammals. The same neural architectures that give rise to fear, anxiety, and suffering in humans are present in cows and other animals. If cows had somehow evolved a fundamentally different way of experiencing consciousness, we would expect to see major divergences in neural architecture; we don’t see such differences. In fact, the evidence suggests that the capacity for suffering predates the development of the rational, linguistic mind. While humans can use our frontal lobes to rationalize and contextualize pain and suffering, this higher-order cognition isn’t a prerequisite for suffering – quite the contrary.

Consider that pigs have the emotional and cognitive capacity roughly equivalent to prelinguistic toddlers. They experience raw emotions without the buffer of linguistic rationalization that adult humans possess. Chimpanzees show clear signs of depression-like behaviors following social defeats, PTSD-like symptoms after conflict, long-term emotional impacts from loss of status, and evidence of social anxiety and strategic behavior. Birds, despite being separated from mammals by hundreds of millions of years of evolution, display sophisticated emotional responses including spite and vindictiveness. These observations all point to the same conclusion: the mechanisms behind tanha are ancient and deeply preserved across the animal kingdom. The capacity for suffering doesn’t require complex cognition or human-level linguistic capacities – it’s a fundamental feature of animal sentience that evolution has maintained and elaborated upon.

The “animals are enlightened” view seems to commit what I call the “blame language fallacy” – the assumption that consciousness without language or higher order cognition is in “its natural state” and must somehow be more pure or pleasant than our modern human experience. This is reminiscent of the noble savage myth, but applied to animal consciousness.

When we look at empirical evidence from animal welfare science (cortisol levels, behavioral indicators, physiological measures), we consistently see that animals experience a wide range of emotional states, including significant suffering. If animals were naturally enlightened, we wouldn’t observe the dramatic improvements in welfare metrics when we enhance their living conditions.

I suspect this view serves several psychological functions:

  • It provides emotional comfort about the natural world
  • It suggests an easier solution to suffering than actually exists
  • It allows for a form of motivated reasoning about animal agriculture (itself likely one of the biggest sources of suffering in the world)

As someone deeply interested in consciousness and its varieties, as well as no-nonsense suffering reduction tech, I have to emphasize that while animals certainly can experience positive states, they are subject to the same fundamental constraints and physiology that shape all conscious experience on this planet. The goal should be to understand and work within these constraints to reduce suffering, not to pretend they don’t exist, as I see is happening more and more.

The path forward isn’t to romanticize animal consciousness but to better understand it in all its complexity. This requires engaging with the empirical evidence and being willing to update our views when they conflict with our preferred narratives about the nature of consciousness and its place in nature.

Finally, by my estimation it is quite likely that animal valence follows long-tail distributions (just as most things do in the context of consciousness). I think it will be crucial to identify the main species who suffer the most (likely not humans!) and help them first.

Sincerely,
Andres 🙂

Team Consciousness: A Philosophy of Truth-Seeking Ethics

I have not settled (and maybe it’s not for me to do it) on the core tenets of Team Consciousness. This would be a kind of philosophy or spirituality that tries to derive ethics from truth and actually get at the truth rather than a convenient approximation of it (or worse, a misrepresentation of it for the sake of memetic reproduction capacity). What I’ve thought for many years, and what has remained stable, is that we can reduce the tenets to three core principles:

  1. Oneness / Frame Invariance
  2. Valence Realism
  3. Math

First, we must realize that every point in reality is equally real. There are more or less intense experiences, of course, but this is in fact a measure of how much reality is expressed in each. The core idea here is not that every experience is literally equally significant (they’re not) but that the spatiotemporal coordinates of an experience are irrelevant for their significance. Your experiences or the experiences of the members of your tribe or species are not more or less real than those of anyone else, factoring in their degree and intensity of consciousness.

The second core idea is that valence – whether experiences feel good or bad – is the source of value. Moreover, valence structuralism (an implication of valence realism in light of empirical observations of what feels good or bad in practice) entails that the value of reality is encoded in the geometric and topological basis of consciousness. Indeed, there are better and worse forms of being, and this is not an arbitrary matter, but one that can be investigated directly and devoid of personal prejudice.

And finally: math. It is not the same to suffer for one second versus a million years. It is not the same for one person to suffer as it is for a billion persons in torment. It is not the same for love to exist for a minute versus it being the foundation of a civilization. Amounts matter; qualities matter. This is tautological, of course. But for strange reasons, our empathizing cognitive styles often neglect math. So we ought to correct for this bug.

I think that all of ethics can be reconstructed from these principles. And in fact, they might help solve many moral paradoxes and enigmas. Just apply them diligently and rigorously and see how they allow you to discern between good and evil.

My hope is that the reproductive capacity of these three core principles will come from the fact that (1) they are true (and truth is convergent for those who seek it) and (2) they are highly beneficial and generate excess value. On (2), I’d point out that valence realism and the oneness of consciousness principle have practical implications, ranging from a science of consciousness capable of reducing depression, anxiety, and chronic pain, to future consciousness-altering technologies that will greatly enhance our intelligence and collective coordination capacities. I wish for these tenets to not acquire additional clauses that are there merely for their reproduction capacity at the cost of truth or accuracy; they should stand on their own. But these might not be the final set. I’m open to suggestions and enhancements 🙂

QRI Meetup in Amsterdam on January 25th 2025: The Coupling Kernels Revolution

Dear wonderful community,

Just as a fire uniformly raises the temperature throughout a building, causing diverse but interconnected effects (metal beams expanding, wood supports burning, windows cracking from thermal stress, smoke rising through air currents) psychedelics might work through a single fundamental mechanism that ripples through all neural systems. This isn’t just theoretical elegance without grounding; it’s a powerful explanatory framework that could help us understand why substances like DMT and 5-MeO-DMT produce distinct but internally consistent effects across visual, auditory, cognitive, and somatic domains. A single change in coupling dynamics might explain why these compounds have such distinct but internally consistent effects: DMT creates rapidly alternating color/anti-color visual patterns and oscillating somatic sensations, whereas 5-MeO-DMT tends towards a state of global coherence.

As demonstrated in our work “Towards Computational Simulations of Cessation”, a flat “coupling kernel” triggers a global attractor of coherence across the entire system, whereas an alternating negative-positive (Mexican hat-like) kernel produces competing clusters of coherence. This is just a very high-level and abstract demonstration of a change in the dynamic behavior of coupled oscillators induced by applying a coupling kernel. What we then must do is see how such a change would impact different systems in the organism as a whole.

The key insight is that psychedelics may modify the coupling kernels between oscillating neural systems throughout the body. Think of coupling kernels as the “rules of interaction” between neighboring neural oscillators. When these rules change, the effects cascade through different neural architectures (from the hierarchical layers of the visual cortex to the branching networks of the peripheral nervous system) producing the kaleidoscopic zoo of psychedelic effects we observe.
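The qualitative difference between these two regimes can be reproduced in a toy model (this is my own minimal Kuramoto-style sketch, not QRI’s simulation code; the ring topology and kernel values are invented for illustration):

```python
# Toy Kuramoto model on a ring: each oscillator couples to neighbors at
# distance d with weight kernel[d-1], applied symmetrically in both
# directions. Kernel values are invented for illustration.
import numpy as np

def ring_coherence(kernel, n=64, steps=2000, dt=0.05, seed=0):
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, n)  # random initial phases
    omega = rng.normal(0.0, 0.1, n)           # near-identical natural frequencies
    for _ in range(steps):
        dtheta = omega.copy()
        for d, k in enumerate(kernel, start=1):
            dtheta += k * (np.sin(np.roll(theta, d) - theta)
                           + np.sin(np.roll(theta, -d) - theta))
        theta += dt * dtheta
    # Kuramoto order parameter: ~1 = global coherence, ~0 = incoherence/clusters
    return abs(np.exp(1j * theta).mean())

r_flat = ring_coherence([0.5, 0.5, 0.5])    # flat kernel ("5-MeO-DMT-like")
r_hat = ring_coherence([0.5, -0.3, -0.3])   # center-surround ("DMT-like")
print(f"flat kernel:        r = {r_flat:.2f}")
print(f"Mexican-hat kernel: r = {r_hat:.2f}")
```

With these (invented) parameters, uniform positive coupling pulls the ring toward near-global phase coherence, while the excitatory-center/inhibitory-surround kernel makes full synchrony unstable and the system settles into competing phase clusters with a markedly lower order parameter.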

DMT, for instance, appears to enhance contrast and create competing clusters of coherence (possibly through 5-HT2A activation), while 5-MeO-DMT tends toward global coherence and boundary dissolution (potentially through 5-HT1A pathways). These changes in coupling dynamics appear to tune into the brain’s natural resonant modes, as described by connectome-specific harmonic waves, modulating their spectral power distribution in predictable and reliable ways.

Simulation comparing coupling kernels across a hierarchical network of feature-selective layers (16×16 to 2×2), showing how different coupling coefficients between and within layers affect pattern formation. The DMT-like kernel (-1.0 near-neighbor coupling) generates competing checkerboard patterns at multiple spatial frequencies, while the 5-MeO-DMT-like kernel (positive coupling coefficients) drives convergence toward larger coherent patches. These distinct coupling dynamics mirror how these compounds might modulate hierarchical neural architectures like the visual cortex.
Source: Internal QRI tool (public release forthcoming)

We’re excited to announce that we’ll be hosting a meeting in Amsterdam to explore this paradigm-shifting framework. This gathering will bring together researchers studying psychedelics from multiple angles – from phenomenology to neuroscience – to discuss how coupling kernels might serve as a bridge between subjective experience and neural mechanisms. Recent work on divisive normalization has shown how local neural responses are regulated by their surrounding activity, providing a potential mechanistic basis for how psychedelics modify these coupling patterns. By understanding psychedelic states through the lens of coupling kernels, we may finally have a mathematical framework that unifies the seemingly disparate effects of these compounds, much like how understanding heat transfer helps us predict how a fire will affect an entire building – from its structural integrity to its airflow patterns.

Simulation comparing different coupling kernels (DMT-like vs 5-MeO-DMT-like) applied to a 1.5D fractal branching network, showing how modified coupling parameters affect phase coherence and signal propagation. The DMT-like kernel produces competing clusters of coherence at bifurcation points, while the 5-MeO-DMT kernel drives the system toward global phase synchronization – patterns that could explain how these compounds differently affect branching biological systems like the vasculature or peripheral nervous system.
Source: Internal QRI tool (public release forthcoming)

Event Details & Amsterdam Visit

The meetup will be held on the 25th of January (location: Generator Amsterdam, see the event page; time: 1–8 PM), featuring presentations from myself and Marco Aqil, whose groundbreaking work on divisive normalization and graph neural fields provides a compelling neuroscientific foundation for the Coupling Kernels paradigm. Marco’s research demonstrates how spatial coupling dynamics can bridge microscopic neural activity and macroscopic brain-wide effects: a perfect complement to our phenomenological investigations.

Additionally, I’ll be in Amsterdam throughout the last third of January and available to meet with academics, artists, recreational metaphysicians, and qualia researchers. If you’re interested in deep discussions about consciousness, psychedelic states, and mathematical frameworks for understanding subjective experience, please reach out.

Much love and may your New Year be filled with awesome and inspiring experiences as well as solid paradigm-building enterprises!

~Metta~

Are There Stable High-Dimensional Ecosystems of Mind?

DMT experiences can give you a sense for how multidimensional hive minds might be workable.

Example:

I met a yoga teacher at the Texas Eclipse festival earlier this year who explained that she has been receiving instructions for a new kind of yoga modality that helps with deep trauma from a hive mind that “lives in a 9 dimensional space”. This is sans drugs, and she demonstrated the bodywork movements she was taught by this “collective consciousness” to me and a few other people after my talk. I guess my candid discussions of altered states motivated her to share them and see what I thought.

First of all, the movements were extremely interesting. Her body and extremities would coordinate in a way I’ve never seen before, as if her hands and feet were being used to select and tune into a fractal foliation of space, and her torso would then make swimming motions in accord with those movements. But anyway, she explained that her movements expressed the collective intelligence she was getting in touch with.

But as a matter of principle this sounds unlikely: can there really be a stable equilibrium of many intelligences in a 9 dimensional space that doesn’t quickly collapse into a hierarchically organized system of control? Can a 9D swarm intelligence really be a stable attractor in mind space?

The thing is that this happens all the time on high dose DMT! Past the Magic Eye level (Waiting Room and above in my classification), DMT’s character is exactly that of arriving at equilibrium points where many mind shards are simultaneously trying to survive, control the narrative, and redirect attention.

And, unbelievably, these typically fleeting (but sometimes robust) equilibrium points do involve meaningful, swarm-like, multidimensional contributions from a wide range of mind kinds all at once. They are in an all-to-all hypercomputation relation to each other, and yet they can arrive at stable states capable of computing information. From eyes that simply like to capture attention, to many-legged angel wings, to gnomes stuck as 2D reflections in windows, to Cronenberg-like shoggoths, the scenes might have a really significant contribution from each element and yet not collapse to a state where there is a dictator and everyone else follows suit.

And I think the dimensionality actually enables this. From simulations of coupling dynamics, I’ve noticed that 2D systems of oscillators tend to be much simpler and more dominated by a single dynamic compared to their higher dimensional versions.

In 3D already we have vastly diverse collective organisms, but in higher D this is even more available as an option. When the “beings” can evolve to inhabit (via fast reproduction, selection, and variation aided with attention) spaces as exotic as the surface of tubes or the points in a 4D cloud of dust, the emergent network of intelligences that achieves equilibrium can be truly multifaceted: it doesn’t centralize because there is no clear center. The degree to which different aspects of the space control each other is both very variable and lacking a central control panel. It’s a bit like one of those carefully engineered videogame landscapes where each region in the map has pros and cons, and thus you’re always safe from some attacks but never from all of them when standing in a particular spot. The ecosystem that evolved in such an environment doesn’t have a central director, but can still have a deep coherence of sorts.

The geometry of such experiences, especially at Breakthrough doses, is more akin to a CW-complex (which stitches together many spaces of different dimensionalities) than a smooth surface of a given dimensionality. It enables ecological mind niches which turn out to have very complex symbiotic relationships with one another.

On a good day, it might even feel like this is how it always works already – we’re just used to the specific high dimensional evolutionarily stable equilibrium of our hive mind in normal states. But perhaps it too lacks a clean hierarchical structure.

Cue in meme…

On Attention as the Management of Electromagnetic Field Lines

Try to focus your attention on the exact center of your visual field right now. Notice how the seemingly straightforward task reveals systematic instabilities: wavering, drifting, and transforming in characteristic ways. These effects aren’t random noise; they suggest an underlying physical mechanism that shapes how attention behaves more broadly.

I’ve been developing a model at QRI that conceptualizes attention through electromagnetic field dynamics. To visualize this, I created a simulation showing how electric field lines emerge from weighted combinations of resonant modes in a square plate. In this video, I manipulate the relative weight, temporal frequency, and phase of these resonant modes:
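For readers who want to experiment, here is a minimal, self-contained sketch of this kind of simulation (a toy version of my own, not the QRI tool): a scalar potential built as a weighted sum of square-plate standing-wave modes, its gradient field, and a simple field-line tracer. The specific mode parameters are arbitrary choices for illustration:

```python
import numpy as np

def potential(x, y, t, modes):
    """Weighted sum of standing-wave modes of a unit square plate. Each mode
    is (weight, m, n, omega, phase): the control parameters discussed in the
    text (relative weight, temporal frequency, phase)."""
    return sum(w * np.sin(m * np.pi * x) * np.sin(n * np.pi * y) * np.cos(om * t + ph)
               for w, m, n, om, ph in modes)

def field(x, y, t, modes, eps=1e-4):
    """E = -grad(potential), estimated with central differences."""
    ex = -(potential(x + eps, y, t, modes) - potential(x - eps, y, t, modes)) / (2 * eps)
    ey = -(potential(x, y + eps, t, modes) - potential(x, y - eps, t, modes)) / (2 * eps)
    return ex, ey

def trace_field_line(x0, y0, t, modes, step=1e-3, n_steps=500):
    """Follow the local field direction from a seed point: the 'flow of
    attention' in this post's metaphor."""
    pts = [(x0, y0)]
    x, y = x0, y0
    for _ in range(n_steps):
        ex, ey = field(x, y, t, modes)
        norm = np.hypot(ex, ey)
        if norm < 1e-9:                      # stagnation point: the line ends
            break
        x, y = x + step * ex / norm, y + step * ey / norm
        if not (0.0 < x < 1.0 and 0.0 < y < 1.0):
            break                            # left the plate
        pts.append((x, y))
    return pts

# A dominant fundamental plus a weaker, faster higher harmonic.
modes = [(1.0, 1, 1, 1.0, 0.0), (0.3, 3, 2, 2.5, np.pi / 4)]
line = trace_field_line(0.3, 0.4, t=0.0, modes=modes)
print(f"traced {len(line)} points, ending near ({line[-1][0]:.2f}, {line[-1][1]:.2f})")
```

Sweeping the weights, frequencies, and phases and re-tracing the lines reproduces, at least qualitatively, the shifting flow patterns shown in the video.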

The story of harmonic waves in the brain is getting more interesting by the day. Building on Lehar’s early insights, Atasoy’s work on connectome-specific harmonic waves (2016), Johnson’s explorations of the implications (2018), Luppi’s contrast between anesthetics and psychedelics (2022), and more, we’ve now seen stunning confirmation of these ideas in Joana Cabral’s recent work with single-slice rodent recordings. The evidence keeps pointing to harmonic resonance as a fundamental organizing principle of neural activity. So let’s take this seriously and see what it tells us about the strangeness of attention.

Think of the “control parameters” for attention as the precise timing, weighting, and phase relationships between different electric resonant modes. We seem to have some degree of volitional control over these parameters, though this control is inherently indirect.

Source: Human brain networks function in connectome-specific harmonic waves (2016) by Selen Atasoy, Isaac Donnelly & Joel Pearson 

The resulting electric field lines may correspond to what we experience as the “flow of attention.” This explains why we cannot directly command attention to go anywhere. Instead, we can only modulate the underlying oscillatory conditions to maintain charge density in particular regions. Actually keeping the charge density in a specific shape requires finding the right combination of harmonic modes, together with the right rhythm to keep them active.

This framework might give us a new map of psychedelic phenomenology. Classical psychedelics like LSD and psilocybin seem to disrupt normal resonant mode configurations by activating higher frequency harmonics and lowering the power on low frequency harmonics. Ever notice how psychedelic experiences feel distinctly “sprinkly”? That’s what happens when attention field lines fluctuate chaotically as high frequency harmonics create clusters of attention rather than a single central stream. Without a “base”, DMT induces rapid shifts between field line configurations. In contrast, 5-MeO-DMT activating the “global mode” might lead to intense single-pointed attention. Look at the simulation when perturbed: the patterns show deep qualitative similarities with how attention behaves in these states.

The experimental implications are really tantalizing to me: we should expect to see coordinated changes in electromagnetic field patterns corresponding to shifts in attention in predictable ways. These patterns should show characteristic resonant modes that maintain stability during focused attention and become disrupted during distraction or altered states. Modern MEG and high-density EEG setups in combination could test these predictions.

Modeling attention field lines explicitly might even point us toward better cyberdelic interventions: we can build systems for non-invasively inducing exotic states of consciousness by identifying the ways attention is phase-locked and figuring out how to perturb it (ideally in a pleasant way!). It’s notoriously difficult to jump straight to Samadhi by focusing on a point. But what if we could first simulate quasi-coherent combinations of resonant modes, phase lock with the current attentional mode, and then gently nudge it toward high concentration?

Attention in this view wouldn’t be a “spotlight”. Or at least, that would not be its essential nature (although it can, at times, behave as if it were a spotlight, this is just one mode among many). Attention, instead, would be a dynamic pattern in the brain’s electromagnetic field, where field lines converge, shaped by sophisticated control systems that modulate underlying resonant modes. Watch the simulation again: can you see how small parameter changes create recognizable patterns in the field lines? Those patterns might map directly to familiar attentional transitions. The way the lines flow, break, and reform under parameter changes could explain both the controllable and uncontrollable aspects of attention we all experience. Perhaps we will develop words to name them; and as a result, learn to eff the ineffable.

It’s a work in progress, but I figured I’d share 🙂


Video description: A quick sharing of a work-in-progress QRI research thread: can we reproduce the behavior of attention using the field lines that arise from weighted sums of electric harmonics? At least intuitively, this seems promising! I also show a work-in-progress electromagnetic simulation that visualizes the electric and magnetic field lines that result from the interaction between weighted sums of harmonic oscillations in the electric field.


See also:

Costs of Embodiment

[X-Posted @ The EA Forum]

By Andrés Gómez Emilsson

Digital Sentience

Creating “digital sentience” is a lot harder than it looks. Standard Qualia Research Institute arguments for why it is either difficult, intractable, or literally impossible to create complex, computationally meaningful, bound experiences out of a digital computer (more generally, a computer with a classical von Neumann architecture) include the following three core points:

  1. Digital computation does not seem capable of solving the phenomenal binding or boundary problems.
  2. Replicating input-output mappings can be done without replicating the internal causal structure of a system.
  3. Even when you try to replicate the internal causal structure of a system deliberately, the behavior of reality at a deep enough level is not currently understood (beyond how it maps inputs to outputs).

Let’s elaborate briefly:

The Binding/Boundary Problem

  1. A moment of experience contains many pieces of information. It also excludes a lot of information. This means that a moment of experience contains a precise, non-zero amount of information. For example, as you open your eyes, you may notice patches of blue and yellow populating your visual field. The very meaning of the blue patches is affected by the presence of the yellow patches (indeed, they are “blue patches in a visual field with yellow patches too”) and thus you need to take into account the experience as a whole to understand the meaning of all of its parts.
  2. A very rough, intuitive conception of the information content of an experience can be hinted at with Gregory Bateson’s (1972) “a difference that makes a difference”. If we define an empty visual field as containing zero information, it is possible to define an “information metric” from this zero state to every possible experience by counting the number of Just Noticeable Differences (JNDs) (Kingdom & Prins, 2016) needed to transform such an empty visual field into an arbitrary one (note: since some JNDs are more difficult to specify than others, a more accurate metric should also take into account the information cost of specifying the change in addition to the size of the change that needs to be made). It is thus evident that one’s experience of looking at a natural landscape contains many pieces of information at once. If it didn’t, you would not be able to tell it apart from an experience of an empty visual field.
  3. The fact that experiences contain many pieces of information at once needs to be reconciled with the mechanism that generates such experiences. How you achieve this unity of complex information starting from a given ontology with basic elements is what we call “the binding problem”. For example, if you believe that the universe is made of atoms and forces (now a disproven ontology), the binding problem will refer to how a collection of atoms comes together to form a unified moment of experience. Alternatively, if one’s ontology starts out fully unified (say, assuming the universe is made of physical fields), what we need to solve is how such a unity gets segmented out into individual experiences with precise information content, and thus we talk about the “boundary problem”.
  4. Within the boundary problem, as Chris Percy and I argued in Don’t Forget About the Boundary Problem! (2023), the phenomenal (i.e. experiential) boundaries must satisfy stringent constraints to be viable. Namely, among other things, phenomenal boundaries must be:
    1. Hard Boundaries: we must avoid “fuzzy” boundaries where information is only “partially” part of an experience. This is simply the result of contemplating the transitivity of the property of belonging to a given experience. If a (token) sensation A is part of a visual field at the same time as a sensation B, and B is present at the same time as C, then A and C are also both part of the same experience. Fuzzy boundaries would break this transitivity, and thus make the concept of boundaries incoherent. As a reductio ad absurdum, this entails phenomenal boundaries must be hard.
    2. Causally significant (i.e. non-epiphenomenal): we can talk about aspects of our experience, and thus we can know they are part of a process that grants them causal power. More so, if structured states of consciousness did not have causal effects in some way isomorphic to their phenomenal structure, evolution would simply have no reason to recruit them for information processing. Although epiphenomenal states of consciousness are logically coherent, that situation would leave us with no reason to believe, one way or the other, that the structure of experience varies in a way that mirrors its functional role. On the other hand, states of consciousness having causal effects directly related to their structure (the way they feel) fits the empirical data. By what seems to be a highly overdetermined Occam’s Razor, we can infer that the structure of a state of consciousness is indeed causally significant for the organism.
    3. Frame-invariant: whether a system is conscious should not depend on one’s interpretation of it or the point of view from which one is observing it (see appendix for Johnson’s (2015) detailed description of frame invariance as a theoretical constraint within the context of philosophy of mind).
    4. Weakly emergent on the laws of physics: we want to avoid postulating either that there is a physics-violating “strong emergence” at some level of organization (“reality only has one level” – David Pearce) or that there is nothing peculiar happening at our scale. Bound, causally significant experiences could be akin to superfluid helium. Namely, entailed by the laws of physics, but behaviorally distinct enough to play a useful evolutionary role.
  5. Solving the binding/boundary problems does not seem feasible with a von Neumann architecture in our universe. The binding/boundary problem requires the “simultaneous” existence of many pieces of information at once, and this is challenging using a digital computer for many reasons:
    1. Hard boundaries are hard to come by: looking at the shuffling of electrons from one place to another in a digital computer does not suggest the presence of hard boundaries. What separates a transistor’s base, collector, and emitter from its immediate surroundings? What’s the boundary between one pulse of electricity and the next? At best, we can identify functional “good enough” separations, but no true physics-based hard boundaries.
    2. Digital algorithms lack frame invariance: how you interpret what a system is doing in terms of classic computations depends on your frame of reference and interpretative lens.
    3. The bound experiences must themselves be causally significant. While natural selection seemingly values complex bound experiences, our digital computer designs precisely aim to denoise the system as much as possible so that the global state of the computer does not influence in any way the lower-level operations. At the algorithmic level, the causal properties of a digital computer as a whole, by design, are never more than the strict sum of their parts.

Matching Input-Output-Mapping Does Not Entail Same Causal Structure

Even if you replicate the input-output mapping of a system, that does not mean you are replicating the internal causal structure of the system. If bound experiences are dependent on specific causal structures, they will not happen automatically without considerations for the nature of their substrate (which might have unique, substrate-specific, causal decompositions). Chalmers’ (1995) “principle of organizational invariance” assumes that replicating a system’s functional organization at a fine enough grain will reproduce identical conscious experiences. However, this may be question-begging if bound experiences require holistic physical systems (e.g. quantum coherence). In such a case, the “components” of the system might be irreducible wholes, and breaking them down further would result in losing the underlying causal structure needed for bound experiences. This suggests that consciousness might emerge from physical processes that cannot be adequately captured by classical functional descriptions, regardless of their granularity.

More so, whether we realize it or not, it is always us (indeed complex bound experiences) who interpret the meaning of the input and the output of a physical system. It is not interpreted by the system itself. This is because the system has no real “points of view” from which to interpret what is going on. This is a subtle point, and I will merely mention it for now, but a deep exposition of this line of argument can be found in The View From My Topological Pocket (2023).

We would moreover point out that the system smuggling in a “point of view” to interpret a digital computer’s operations is the human who builds, maintains, and utilizes it. If we want a system to create its “own point of view”, we will need to find a way for it to bind the information in (1) a “projector”/screen, (2) an actual point of view proper, or (3) the backwards lightcone that feeds into such a point of view. As argued, none of these are viable solutions.

Reality’s Deep Causal Structure is Poorly Understood

Finally, another key consideration that has been discussed extensively is that the very building blocks of reality have unclear, opaque causal structures. Arguably, if we want to replicate the internal causal structure of a conscious system, the classical input-output mapping is therefore not enough. If you want to ensure that what is happening inside the system has the same causal structure as its simulated counterpart, you would also need to replicate how the system would respond to non-standard inputs, including x-rays, magnetic fields, and specific molecules (e.g. Xenon isotopes).

These ideas have all been discussed at length in articles, podcasts, presentations, and videos. Now let’s move on to a more recent consideration we call “Costs of Embodiment”.

Costs of Embodiment

Classical “computational complexity theory” is often used as a silver-bullet “analytic frame” to discount the computational power of systems. Here is a typical line of argument: under the assumption that consciousness isn’t the result of implementing a quantum algorithm per se, the argument goes, there is “nothing that it can do that you couldn’t do with a simulation of the system”. This, however, neglects the complications that come from instantiating a system in the physical world with all that it entails. To see why, we must first explain the nature of this analytic style in more depth:

Introduction to Computational Complexity Theory

Computational complexity theory is a branch of computer science that focuses on classifying computational problems according to their inherent difficulty. It primarily deals with the resources required to solve problems, such as time (number of steps) and space (memory usage).

Key concepts in computational complexity theory include:

  1. Big O notation: Used to describe the upper bound of an algorithm’s rate of growth.
  2. Complexity classes: Categories of problems with similar resource requirements (e.g., P, NP, PSPACE).
  3. Time complexity: Measure of how the running time increases with the size of the input.
  4. Space complexity: Measure of how memory usage increases with the size of the input.
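As a quick toy illustration of time complexity as abstract step counting (my own example): bubble sort on a reversed list performs n(n−1)/2 comparisons regardless of the hardware it runs on, which is what Big O notation summarizes as O(n²):

```python
def bubble_sort_steps(xs):
    """Count comparisons made by bubble sort; the count grows as O(n^2)."""
    xs = list(xs)
    steps = 0
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            steps += 1
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return steps

# Doubling the input size roughly quadruples the step count: the hallmark
# of quadratic time complexity (steps = n*(n-1)/2).
for n in (10, 20, 40):
    print(n, bubble_sort_steps(range(n, 0, -1)))   # 45, 190, 780 steps
```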

In brief, this style of analysis is suited for analyzing the properties of algorithms that are implementation-agnostic, abstract, and interpretable in the form of pseudo-code. Alas, the moment you start to ground these concepts in the real physical constraints to which life is subjected, the relevance and completeness of the analysis starts to fall apart. Why? Because:

  1. Big O notation counts how the number of steps (time complexity) or number of memory slots (space complexity) grows with the size of the input (or in some cases size of the output). But not all steps are created equal:
    1. Flipping the value of a bit might be vastly cheaper in the real world than moving the value of a bit to another location that is physically very far away within the computer.
    2. Likewise, some memory operations are vastly more costly than others: in the real world you need to take into account the cost of redundancy, distributed error correction, and entropic decay of structures not in use at the time.
  2. Not all inputs and outputs are created equal. Taking in some inputs might be vastly more costly than others (e.g. highly energetic vibrations that shake the system apart mean something to a biological organism, which needs to adapt to the stress induced by the nature of the input). Likewise, expressing certain outputs might be much more costly than others, as the organism needs to reconfigure itself to deliver the result of the computation, a cost that isn’t considered by classical computational complexity theory.
  3. Interacting with a biological system is a far more complex activity than interacting with, say, logic gates and digital memory slots. We are talking about a highly dynamic, noisy, soup of molecules with complex emergent effects. Defining an operation in this context, let alone its “cost”, is far from trivial.
  4. Artificial computing architectures are designed, implemented, maintained, reproduced, and interpreted by humans who, we should remember, already have powerful computational capabilities; this external support gives the system an unfair advantage over biological systems (which require zero human assistance).
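Point 1.1 above can be made concrete with a toy model (my own illustration, with arbitrary parameters): two operation sequences with identical abstract step counts, where one consists of local bit flips and the other moves bits between random cells of a 100×100 grid, with physical cost proportional to the Manhattan distance traveled:

```python
import random

def abstract_cost(ops):
    """Classical complexity view: every operation is one step."""
    return len(ops)

def embodied_cost(ops):
    """Embodied view: a local flip costs 1; moving a bit between two grid
    cells costs the Manhattan distance between them (wire length)."""
    total = 0
    for op in ops:
        if op[0] == "flip":
            total += 1
        else:
            _, (x1, y1), (x2, y2) = op
            total += abs(x1 - x2) + abs(y1 - y2)
    return total

random.seed(0)
side = 100
local_ops = [("flip",)] * 1000
far_ops = [("move",
            (random.randrange(side), random.randrange(side)),
            (random.randrange(side), random.randrange(side)))
           for _ in range(1000)]

# Identical abstract step counts...
print(abstract_cost(local_ops), abstract_cost(far_ops))   # 1000 1000
# ...but the physically embodied costs diverge sharply.
print(embodied_cost(local_ops), embodied_cost(far_ops))
```

Both sequences are "1000 steps" to classical complexity theory, yet once wire length is charged for, the far-moving program costs tens of times more.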

Why Embodiment May Lead to Underestimating Costs

Here is a list of considerations that highlight the unique costs that come with real-world embodiment for information-processing systems beyond the realm of mere abstraction:

  1. Physical constraints: Traditional complexity theory often doesn’t account for physical limitations of real-world systems, such as heat dissipation, energy consumption, and quantum effects.
  2. Parallel processing: Biological systems, including brains, operate with massive adaptive parallelism. This is challenging to replicate in classical computing architectures and may require different cost analyses.
  3. Sensory integration: Embodied systems must process and integrate multiple sensory inputs simultaneously, which can be computationally expensive in ways not captured by standard complexity measures.
  4. Real-time requirements: Embodied systems often need to respond in real-time to environmental stimuli, adding temporal constraints that may increase computational costs.
  5. Adaptive learning: The ability to learn and adapt in real-time may incur additional computational costs not typically considered in classical complexity theory.
  6. Robustness to noise: Physical systems must be robust to environmental noise and internal fluctuations, potentially requiring redundancy and error-correction mechanisms that increase computational costs.
  7. Energy efficiency: Biological systems are often highly energy-efficient, which may come at the cost of increased complexity in information processing.
  8. Non-von Neumann architectures: Biological neural networks operate on principles different from classical computers, potentially involving computational paradigms not well-described by traditional complexity theory.
  9. Quantum effects: At the smallest scales, quantum mechanical effects may play a role in information processing, adding another layer of complexity not accounted for in classical theories.
  10. Emergent properties: Complex systems may exhibit physical emergent properties that arise from the interactions of simpler components, as well as from phase transitions, potentially leading to computational costs that are difficult to predict or quantify using standard methods.

See appendix for a concrete example of applying these considerations to an abstract and embodied object recognition system (example provided by Kristian Rönn).

Case Studies:

1.  2D Computers

It is well known in classical computing theory that a 2D computer can implement anything an n-dimensional computer can do: it is possible to construct a 2D Turing Machine capable of simulating arbitrary computers of this class, so that there is a computational complexity equivalence between an n-dimensional computer and a 2D computer, and (at the limit) the same runtime complexity as the original computer should be achievable in 2D.

However, living in a 2D plane comes with enormous challenges that highlight the cost of embodiment present in a given media. In particular, we will see that the *routing costs* of information will grow really fast, as the channels that connect between different parts of the computer will need to take turns in order to allow for the crossed wires to transmit information without saturating the medium of (wave/information) propagation.

A concrete example here comes from examining what happens when you divide a circle into areas. Indeed, this is a well-known math problem, where you are asked to derive a general formula for the number of areas into which a circle gets divided when you connect n (generically placed) points on its periphery. The takeaway of this exercise is often to point out that even though at first the number of areas seems to follow powers of 2 (2, 4, 8, 16…), eventually the pattern breaks (the number after 16 is, surprisingly, 31 and not 32).

For the purpose of this example we shall simply focus on the growth of edges vs. the growth of crossings between the edges as we increase the number of nodes. Since every pair of nodes has an edge, the number of edges as a function of the number of nodes n is: n choose 2. Similarly, any four points define a single unique crossing, and thus the number of crossings is: n choose 4. When n is small (6 or fewer), the number of crossings is smaller than or equal to the number of edges. But as soon as we hit 7 nodes, the number of crossings dominates the number of edges. Asymptotically, in fact, the number of edges grows as O(n^2) in Big O notation, whereas the number of crossings grows as O(n^4), which is much faster. If this system is used to implement an algorithm that requires every pair of nodes to interact with each other once, we may at first be under the impression that the complexity will grow as O(n^2). But if the system is embodied, messages between the nodes will start to collide with each other at the crossings. Eventually, the delays and traffic jams caused by embodiment in 2D will dominate the time complexity of the system.
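The counts above are easy to verify directly:

```python
from math import comb

# Every pair of the n boundary points yields an edge; every 4-subset of
# points yields exactly one interior crossing (where its two diagonals meet).
for n in range(4, 13):
    edges = comb(n, 2)      # grows as O(n^2)
    crossings = comb(n, 4)  # grows as O(n^4)
    print(f"n={n:2d}  edges={edges:3d}  crossings={crossings:4d}")
# At n=6 they tie (15 vs 15); from n=7 on, crossings dominate (21 vs 35).
```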

2. Blind Systems: Bootstrapping a Map Isn’t Easy

A striking challenge that biological systems need to tackle to instantiate moments of experience with useful information arises when we consider the fact that, at conception, biological systems lack a pre-existing “ground truth map” of their own components, i.e. where they are, and where they are supposed to be. In other words, biological systems somehow bootstrap their own internal maps and coordination mechanisms from a seemingly mapless state. This feat is remarkable given the extreme entropy and chaos at the microscopic level of our universe.

Assembly Theory (AT) (2023) provides an interesting perspective on this challenge. AT conceptualizes objects not as simple point particles, but as entities defined by their formation histories. It attempts to elucidate how complex, self-organizing systems can emerge and maintain structure in an entropic universe. However, AT also highlights the intricate causal relationships and historical contingencies underlying such systems, suggesting that the task of self-mapping is far from trivial.

Consider the questions this raises: How does a cell know its location within a larger organism? How do cellular assemblies coordinate their components without a pre-existing map? How are messages created and routed without a predefined addressing system and without colliding with each other? In the context of artificial systems, how could a computer bootstrap its own understanding of its architecture and component locations without human eyes and hands to see and place the components in their right place?

These questions point to the immense challenge faced by any system attempting to develop self-models or internal mappings from scratch. The solutions found in biological systems might potentially rely on complex, evolved mechanisms that are not easily replicated in classical computational architectures. This suggests that creating truly self-understanding artificial systems capable of surviving in a hostile, natural environment, may require radically different approaches than those currently employed in standard computing paradigms.

How Does the QRI Model Overcome the Costs of Embodiment?

This core QRI article presents a perspective on consciousness and the binding problem that aligns well with our discussion of embodiment and computational costs. It proposes that moments of experience correspond to topological pockets in the fields of physics, particularly the electromagnetic field. This view offers several important insights:

  1. Frame-invariance: The topology of vector fields is Lorentz invariant, meaning it doesn’t change under relativistic transformations. This addresses the need for a frame-invariant basis for consciousness, which we identified as a challenge for traditional computational approaches.
  2. Causal significance: Topological features of fields have real, measurable causal effects, as exemplified by phenomena like magnetic reconnection in solar flares. This satisfies the requirement for consciousness to be causally efficacious and not epiphenomenal.
  3. Natural boundaries: Topological pockets provide objective, causally significant boundaries that “carve nature at its joints.” This contrasts with the difficulty of defining clear system boundaries in classical computational models.
  4. Temporal depth: The approach acknowledges that experiences have a temporal dimension, potentially lasting for tens of milliseconds. This aligns with our understanding of neural oscillations and provides a natural way to integrate time into the model of consciousness.
  5. Embodiment costs: The topological approach inherently captures many of the “costs of embodiment” we discussed earlier. The physical constraints, parallel processing, sensory integration, and real-time requirements of embodied systems are naturally represented in the complex topological structures of the brain’s electromagnetic field.

This perspective suggests that the computational costs of consciousness may be even more significant than traditional complexity theory would indicate. It implies that creating artificial consciousness would require not just simulating neural activity, but replicating the precise topological structures of electromagnetic fields in the brain. This is a far more challenging task than conventional AI approaches.

Moreover, this view provides a potential explanation for why embodied systems like biological brains are so effective at producing consciousness. The physical structure of the brain, with its complex networks of neurons and electromagnetic fields, may be ideally suited to creating the topological pockets that correspond to conscious experiences. This suggests that embodiment is not just a constraint on consciousness, but a fundamental enabler of it.

Furthermore, there is a non-trivial connection between topological segmentation and resonant modes. The larger a topological pocket is, the lower the frequency of its resonant modes can be. This is, effectively, broadcast to every region within the pocket (much akin to how any spot on the surface of an acoustic guitar expresses the vibrations of the guitar as a whole). Thus, topological segmentation might quite conceivably be implicated in the generation of maps for the organism to self-organize around (cf. bioelectric morphogenesis according to Michael Levin, 2022). Steven Lehar (1999) and Michael E. Johnson (2018) in particular have developed really interesting conceptual frameworks for how harmonic resonance might be implicated in the computational character of our experience. The QRI insight that topology can mediate resonance further enriches the computational role of phenomenal boundaries in consciousness.
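To make the size–frequency relationship concrete, consider the textbook one-dimensional resonant cavity (an idealization chosen purely for illustration, not QRI's actual field model): its standing-wave frequencies scale inversely with its length, so doubling the "pocket" halves every mode:

```python
def resonant_modes(length_m: float, wave_speed_m_s: float, n_modes: int = 3) -> list:
    """Standing-wave frequencies f_n = n * v / (2 * L) of an idealized 1D cavity."""
    return [n * wave_speed_m_s / (2 * length_m) for n in range(1, n_modes + 1)]

# Doubling the cavity size halves every resonant frequency.
small = resonant_modes(0.1, 343.0)  # 10 cm air-filled cavity (speed of sound)
large = resonant_modes(0.2, 343.0)  # 20 cm air-filled cavity
```

Under this toy model, the fundamental of the 10 cm cavity is 1715 Hz and that of the 20 cm cavity is 857.5 Hz: larger bounded regions support lower frequencies, exactly the direction the text appeals to.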

Conclusion and Path Forward

In conclusion, the costs of embodiment present significant challenges to creating digital sentience that traditional computational complexity theory fails to fully capture. The QRI solution to the boundary problem, with its focus on topological pockets in electromagnetic fields, offers a promising framework for understanding consciousness that inherently addresses many of these embodiment costs. Moving forward, research should focus on: (1) developing more precise methods to measure and quantify the costs of embodiment in biological systems, (2) exploring how topological features of electromagnetic fields could be replicated or simulated in artificial systems, and (3) investigating the potential for hybrid systems that leverage the natural advantages of biological embodiment while incorporating artificial components (cf. Xenobots). By pursuing these avenues, we may unlock new pathways towards creating genuine artificial consciousness while deepening our understanding of natural consciousness.

It is worth noting that the QRI mission is to “understand consciousness for the benefit of all sentient beings”. Thus, figuring out the constraints that give rise to computationally non-trivial bound experiences is one key piece of the puzzle: we don’t want to accidentally create systems that are conscious and suffering and become civilizationally load-bearing (e.g. organoids animated by pain or fear).

In other words, understanding how to produce conscious systems is not enough. We need to figure out how to (a) ensure that they are animated by information-sensitive gradients of bliss, and (b) leverage the computational properties of consciousness to empower more benevolent mind architectures. Namely, architectures that care about their own wellbeing and the wellbeing of all sentient beings. This is an enormous challenge; clarifying the costs of embodiment is one key step forward, but it is only part of an ecosystem of actions and projects needed for consciousness research to have a robustly positive impact on the wellbeing of all sentient beings.

Acknowledgments:

This post was written at the July 2024 Qualia Research Institute Strategy Summit in Sweden. It comes about as a response to incisive questions by Kristian Rönn on QRI’s model of digital sentience. Many thanks to Curran Janssen, Oliver Edholm, David Pearce, Alfredo Parra, Asher Soryl, Rasmus Soldberg, and Erik Karlson for brainstorming, feedback, suggesting edits, and facilitating this retreat.

Appendix

Excerpt from Michael E. Johnson’s Principia Qualia (2015) on Frame Invariance (pg. 61)

What is frame invariance?

A theory is frame-invariant if it doesn’t depend on any specific physical frame of reference, or subjective interpretations to be true. Modern physics is frame-invariant in this way: the Earth’s mass objectively exerts gravitational attraction on us regardless of how we choose to interpret it. Something like economic theory, on the other hand, is not frame-invariant: we must interpret how to apply terms such as “GDP” or “international aid” to reality, and there’s always an element of subjective judgement in this interpretation, upon which observers can disagree.

Why is frame invariance important in theories of mind?

Because consciousness seems frame-invariant. Your being conscious doesn’t depend on my beliefs about consciousness, physical frame of reference, or interpretation of the situation – if you are conscious, you are conscious regardless of these things. If I do something that hurts you, it hurts you regardless of my belief of whether I’m causing pain. Likewise, an octopus either is highly conscious, or isn’t, regardless of my beliefs about it.[a] This implies that any ontology that has a chance of accurately describing consciousness must be frame-invariant, similar to how the formalisms of modern physics are frame-invariant.

In contrast, the way we map computations to physical systems seems inherently frame-dependent. To take a rather extreme example, if I shake a bag of popcorn, perhaps the motion of the popcorn’s molecules could – under a certain interpretation – be mapped to computations which parallel those of a whole-brain emulation that’s feeling pain. So am I computing anything by shaking that bag of popcorn? Who knows. Am I creating pain by shaking that bag of popcorn? Doubtful… but since there seems to be an unavoidable element of subjective judgment as to what constitutes information, and what constitutes computation, in actual physical systems, it doesn’t seem like computationalism can rule out this possibility. Given this, computationalism is frame-dependent in the sense that there doesn’t seem to be any objective fact of the matter derivable for what any given system is computing, even in principle.

[a] However, we should be a little bit careful with the notion of ‘objective existence’ here if we wish to broaden our statement to include quantum-scale phenomena where choice of observer matters.
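Johnson's popcorn argument can be made concrete with a toy script: given any sequence of distinct physical states, one can always construct an interpretation map under which that sequence "is" the trace of a chosen computation. The state labels below are, of course, made up for illustration:

```python
# Frame-dependence toy: the "computation" lives entirely in the mapping we
# choose, not in the physical states themselves.
popcorn_states = ["s17", "s03", "s88", "s42"]  # hypothetical micro-states
target_trace = ["00", "01", "10", "11"]        # trace of a 2-bit counter

# Any bijection from states to trace steps "interprets" the popcorn as counting.
interpretation = dict(zip(popcorn_states, target_trace))
decoded = [interpretation[s] for s in popcorn_states]
print(decoded == target_trace)  # succeeds by construction, for ANY state labels
```

Since such a map exists for every state sequence and every target computation, nothing in the physics alone picks out which computation is "really" being performed, which is precisely the frame-dependence the excerpt describes.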

Example of Cost of Embodiment by Kristian Rönn

Abstract Scenario (Computational Complexity):

Consider a digital computer system tasked with object recognition in a static environment. The algorithm processes an image to identify objects, classifies them, and outputs the results.

Key Points:

  • The computational complexity is defined by the algorithm’s time and space complexity (e.g., O(n^2) for time, O(n) for space).
  • Inputs (image data) and outputs (object labels) are well-defined and static.
  • The system operates in a controlled environment with no physical constraints like heat dissipation or energy consumption.

However, this abstract analysis is extremely optimistic, since it doesn’t take the cost of embodiment into account.

Embodied Scenario (Embodied Complexity):

Now, consider a robotic system equipped with a camera, tasked with real-time object recognition and interaction in a dynamic environment.

Key Points and Costs:

  1. Real-Time Processing:
    • The robot must process images in real-time, requiring rapid data acquisition and processing, which creates practical constraints.
    • Delays in computation can lead to physical consequences, such as collisions or missed interactions.
  2. Energy Consumption:
    • The robot’s computational tasks consume power, affecting the overall energy budget.
    • Energy management becomes crucial, balancing between processing power and battery life.
  3. Heat Dissipation:
    • High computational loads generate heat, necessitating cooling mechanisms, requiring additional energy. Moreover, this creates additional costs/waste in the embodied system.
    • Overheating can degrade performance and damage components, requiring thermal management strategies.
  4. Physical Constraints and Mobility:
    • The robot must move and navigate through physical space, encountering obstacles and varying terrains.
    • Computational tasks must be synchronized with motion planning and control systems, adding complexity.
  5. Sensory Integration:
    • The robot integrates data from multiple sensors (camera, lidar, ultrasonic sensors) to understand its environment.
    • Processing multi-modal sensory data in real-time increases computational load and complexity.
  6. Error Correction and Redundancy:
    • Physical systems are prone to noise and errors. The robot needs mechanisms for error detection and correction.
    • Redundant systems and fault-tolerance measures add to the computational overhead.
  7. Adaptation and Learning:
    • The robot must adapt to new environments and learn from interactions, requiring active inference (i.e. we can’t train a new model every time the ontology of an agent needs updating).
    • Continuous learning in an embodied system is resource-intensive compared to offline training in a digital system.
  8. Physical Wear and Maintenance:
    • Physical components wear out over time, requiring maintenance and replacement.
    • Downtime for repairs affects the overall system performance and availability.

An Energy Complexity Model for Algorithms

Roy, S., Rudra, A., & Verma, A. (2013). https://doi.org/10.1145/2422436.2422470

Abstract

Energy consumption has emerged as a first class computing resource for both server systems and personal computing devices. The growing importance of energy has led to rethink in hardware design, hypervisors, operating systems and compilers. Algorithm design is still relatively untouched by the importance of energy and algorithmic complexity models do not capture the energy consumed by an algorithm. In this paper, we propose a new complexity model to account for the energy used by an algorithm. Based on an abstract memory model (which was inspired by the popular DDR3 memory model and is similar to the parallel disk I/O model of Vitter and Shriver), we present a simple energy model that is a (weighted) sum of the time complexity of the algorithm and the number of ‘parallel’ I/O accesses made by the algorithm. We derive this simple model from a more complicated model that better models the ground truth and present some experimental justification for our model. We believe that the simplicity (and applicability) of this energy model is the main contribution of the paper. We present some sufficient conditions on algorithm behavior that allows us to bound the energy complexity of the algorithm in terms of its time complexity (in the RAM model) and its I/O complexity (in the I/O model). As corollaries, we obtain energy optimal algorithms for sorting (and its special cases like permutation), matrix transpose and (sparse) matrix vector multiplication.
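Reading the abstract above, the proposed energy cost is (roughly) a weighted sum of the algorithm's time complexity and its count of "parallel" I/O accesses. A toy rendering of that idea, where the weights are placeholders of my own choosing rather than the paper's constants:

```python
def energy_estimate(time_ops: int, parallel_io_accesses: int,
                    w_time: float = 1.0, w_io: float = 50.0) -> float:
    """Toy version of the model: energy ~ w_time * T + w_io * (parallel I/Os).
    The weights here are made-up placeholders for illustration only."""
    return w_time * time_ops + w_io * parallel_io_accesses

# An algorithm doing 10^6 operations with 100 parallel I/O batches:
cost = energy_estimate(1_000_000, 100)
```

The point of the model is that two algorithms with identical time complexity can have very different energy footprints if one of them touches memory in a less I/O-friendly pattern.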

Thermodynamic Computing

Conte, T. et al. (2019). https://arxiv.org/abs/1911.01968

Abstract

The hardware and software foundations laid in the first half of the 20th Century enabled the computing technologies that have transformed the world, but these foundations are now under siege. The current computing paradigm, which is the foundation of much of the current standards of living that we now enjoy, faces fundamental limitations that are evident from several perspectives. In terms of hardware, devices have become so small that we are struggling to eliminate the effects of thermodynamic fluctuations, which are unavoidable at the nanometer scale. In terms of software, our ability to imagine and program effective computational abstractions and implementations are clearly challenged in complex domains. In terms of systems, currently five percent of the power generated in the US is used to run computing systems – this astonishing figure is neither ecologically sustainable nor economically scalable. Economically, the cost of building next-generation semiconductor fabrication plants has soared past $10 billion. All of these difficulties – device scaling, software complexity, adaptability, energy consumption, and fabrication economics – indicate that the current computing paradigm has matured and that continued improvements along this path will be limited. If technological progress is to continue and corresponding social and economic benefits are to continue to accrue, computing must become much more capable, energy efficient, and affordable. We propose that progress in computing can continue under a united, physically grounded, computational paradigm centered on thermodynamics. Herein we propose a research agenda to extend these thermodynamic foundations into complex, non-equilibrium, self-organizing systems and apply them holistically to future computing systems that will harness nature’s innate computational capacity. We call this type of computing “Thermodynamic Computing” or TC.

Energy Complexity of Computation

Say, A.C.C. (2023). https://doi.org/10.1007/978-3-031-38100-3_1

Abstract

Computational complexity theory is the study of the fundamental resource requirements associated with the solutions of different problems. Time, space (memory) and randomness (number of coin tosses) are some of the resource types that have been examined both independently, and in terms of tradeoffs between each other, in this context. Since it is well known that each bit of information “forgotten” by a device is linked to an unavoidable increase in entropy and an associated energy cost, one can also view energy as a computational resource. Constant-memory machines that are only allowed to access their input strings in a single left-to-right pass provide a good framework for the study of energy complexity. There exists a natural hierarchy of regular languages based on energy complexity, with the class of reversible languages forming the lowest level. When the machines are allowed to make errors with small nonzero probability, some problems can be solved with lower energy cost. Tradeoffs between energy and other complexity measures can be studied in the framework of Turing machines or two-way finite automata, which can be rewritten to work reversibly if one increases their space and time usage.

Relevant physical limitations

  • Landauer’s limit: The lower theoretical limit of energy consumption of computation.
  • Bremermann’s limit: A limit on the maximum rate of computation that can be achieved in a self-contained system in the material universe.
  • Bekenstein bound: An upper limit on the thermodynamic entropy S, or Shannon entropy H, that can be contained within a given finite region of space which has a finite amount of energy.
  • Margolus–Levitin theorem: A bound on the maximum computational speed per unit of energy.
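For a sense of scale, Landauer's limit is straightforward to compute: erasing one bit at temperature T costs at least k_B · T · ln 2. A minimal sketch:

```python
from math import log

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact since the 2019 SI revision)

def landauer_limit_joules(temperature_k: float) -> float:
    """Minimum energy dissipated by erasing one bit: k_B * T * ln(2)."""
    return K_B * temperature_k * log(2)

room_temp_cost = landauer_limit_joules(300.0)  # ~2.87e-21 J per bit erased
```

At room temperature this works out to roughly 2.9 × 10⁻²¹ J per bit, many orders of magnitude below what current hardware dissipates per logical operation, which is part of why thermodynamic limits are treated as distant but fundamental ceilings.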

References

Bateson, G. (1972). Steps to an ecology of mind. Chandler Publishing Company.

Chalmers, D. J. (1995). Absent qualia, fading qualia, dancing qualia. In T. Metzinger (Ed.), Conscious Experience. Imprint Academic. https://www.consc.net/papers/qualia.html

Gómez-Emilsson, A. (2023). The view from my topological pocket. Qualia Computing. https://qualiacomputing.com/2023/10/26/the-view-from-my-topological-pocket-an-introduction-to-field-topology-for-solving-the-boundary-problem/

Gómez-Emilsson, A., & Percy, C. (2023). Don’t forget the boundary problem! How EM field topology can address the overlooked cousin to the binding problem for consciousness. Frontiers in Human Neuroscience, 17. https://www.frontiersin.org/articles/10.3389/fnhum.2023.1233119

Johnson, M. E. (2015). Principia qualia. Open Theory. https://opentheory.net/PrincipiaQualia.pdf

Johnson, M. E. (2018). A future of neuroscience. Open Theory. https://opentheory.net/2018/08/a-future-for-neuroscience/

Kingdom, F.A.A., & Prins, N. (2016). Psychophysics: A practical introduction. Elsevier.

Lehar, S. (1999). Harmonic resonance theory: An alternative to the “neuron doctrine” paradigm of neurocomputation to address gestalt properties of perception. http://slehar.com/wwwRel/webstuff/hr1/hr1.html

Levin, M. (2022). Bioelectric morphogenesis, cellular motivations, and false binaries with Michael Levin. DemystifySci Podcast. https://demystifysci.com/blog/2022/10/25/kl2d17sphsiw2trldsvkjvr91odjxv

Pearce, D. (2014). Social media unsorted postings. HEDWEB. https://www.hedweb.com/social-media/pre2014.html

Sharma, A. (2023). Assembly theory explains and quantifies selection and evolution. Nature, 622, 321–328. https://www.nature.com/articles/s41586-023-06600-9

QRI Meetup in Sweden

I’m currently staying at the very core of Gamla Stan, Stockholm’s “Old Town”, getting my jetlag-ruffled bearings straight before I proceed onto Borderland to deliver a couple workshops on QRI topics (esp. visualizing phenomenology, mapping qualia, and philosophy of mind). I will then be participating in a low-key research retreat with influential figures in the field of consciousness for a few days, and finally on the 3rd of August, we shall host the very 1st Swedish QRI Meetup!

QRI Meetup in Sweden

Date: August 3rd 2024, 2PM onwards.

Location: Riddargatan 18, Östermalm (please meet us outside; the place is on the 3rd floor*)

Schedule:

  • 2-3PM Casual Hangout
  • 3PM Welcome Speech and Introductions
  • 3:30-4:30PM Andrés will deliver a presentation on a surprise QRI topic
  • 4:30 Participants will have an opportunity to share with the group who they are, what they are interested in, and what kinds of collaborations (if any) they would like to pursue
  • 6PM Latest QRI Technology Demos
  • 7PM onwards: returning to casual hangout until the end

QRI Meetups are excellent places to connect with other people interested in consciousness, meditation, psychedelics, AI, math/physics, and reducing suffering at scale. We’ve hosted meetups in many cities and countries already**, and we consistently get the feedback that they play the role of a Schelling Point for “qualia people” to meet one another. You can think of it as a way to “activate latent connections” in a city and kickstart a community of like-minded individuals.

Qualia of the Day: Please feel encouraged to bring an interesting experience to share with others. This could be a perfume, a candy, a toy, a gadget, a poem, or a brief (couple minutes) activity. One of the long-term goals of the Qualia Research Institute is to map out the state-space of consciousness. Qualia of the Day activities are a great way to enrich our evidential base in the pursuit of this quest. 

Note: We will provide drinks, snacks, and catered dinner. Donations optional. Also, if you are planning on attending, please RSVP on the Partiful event page so that we have a sense of how many people will come. Thank you!


*If you arrive after 2:30 and don’t see anyone at the door, there will be a phone number posted at the door that you can text/call

**In the USA: we’ve held meetups in San Francisco, LA, New York, Austin, Denver. Other countries: Mexico, Canada, Costa Rica, Brazil, Germany, and the UK. We will continue growing the community and activating latent connections for the foreseeable future. Please feel free to reach out if you are a fan of our research and would like to host a QRI Meetup in your city 🙂


Obligatory AI-generated nonsense poster, to get on with the times 😉

The Complex Plot of the Nested Gods: Cross-Level Fusion, Saturnian Singularities, and the Blessing of Hyperbolic Co-Existence

[Epistemic Status: Fiction, no really, FICTION, I know you think that you can’t trust what the text says – Waluigi’s lesson, il n’y a pas de hors-texte, and all that, but no, really, this IS fiction, trust me breaux]


Awakening the Traffic Jam

A lot of information processing takes place in the human realm. A person, in the right circumstances, can be the nexus of enormous cosmic forces. And yet, to understand how this is possible we need to re-conceptualize what we think a person even is.

A mirage.

A kind of optical illusion.

A brief electromagnetic eddy formed as the output of cross-level system configuration updates taking place in the biosome.

We are able to abstract across levels. And then take a “picture” of a cross-level interaction and compare it to _other_ cross-level interactions we’ve seen before. Let that sink in for a second (yes, that link is meant to layer on another cross-level interaction for you to deal with to enrich the image I’m painting – let that sink in for a second [I know, we should all learn how to survive out of a sinking boat, or an overly loaded gestalt]).

We are non-linear optical computers. We can take snapshots of holistic states and then compare such snapshots to other holistic states we’ve taken a snapshot of. This is what allows a human to be an air traffic controller. What? Let me explain.

If you want to model an airport there are a lot of parts that you need to take into account. Many of these parts are played by humans taking various roles. Each of these roles only needs to interface with a limited set of other roles. The passenger doesn’t need to interact (although often gets the gift of watching) with the person who unloads the checked luggage from the plane. The airport as a whole is a sort of mirage. And yet, we do see that certain “gestalts” are needed for it to operate. It requires a great deal of coordination both within and between levels of organization. And when you look closer, you realize there are many layers of organization like this. The airlines have their own internal human org chart, with many roles, many of which never get to see airplanes (or airplanes of their own airline in particular). Understanding an airport fully necessitates modeling it using data structures unique to specific levels of organization and its interactions with other levels of organization: the list of passengers, the priority queue of airplanes, the geographic distribution of airports, and so on. Most parts don’t need to model this whole, and it isn’t always the case that having anyone see this whole is useful, but it sometimes is. Sometimes this is crucial, or defining for a certain class of organism. We’ll zoom in on this kind of being.

Typically, in order to _model_ what you need to do as a _part_ of an _organism_ you primarily need to copy other successful parts at your level. In fact, at a given level, the best performing parts may have no clue about how the whole thing is run. In a very literal sense, a coffee cup lid, a velcro square on a suitcase, or the plastic in the wheels of the racks that serve food in airplanes are all parts subject to many multi-level constraints, ranging from the cost landscape of sourcing raw materials to the political organization of the schools of technicians that decide whom to defer design decisions to, and including of course the very physical embodiment of these parts and their expected trajectory in 3D space over time along known temperature and pressure regimes. If a certain “part” of this airport (let’s say, the wheel) were to somehow render a human eye dysfunctional once in every 1,000 international trips, and somehow that information was aggregated and relayed in the right way, we might see a change happen. (Human eyes are very important parts in the grand scheme of things – wheels that break in ways that injure people’s eyes get quickly replaced).

Here is the thing: how does the information get routed to the right level? When the operation is complex enough, you will need a human for that, at least for now. The human can _empathize_ with a particular part of an airport and try to imagine the constraints to which its existence is subject. Of course this is limited by one’s imagination. But the upper bound of the human imagination (and its qualia computational capabilities) is currently an open problem as far as I know. Human imagination is easy to underestimate. Mostly because our capacity to _take part_ (embody a part) [LITERALLY BECOMING] can make us rigid and blind to alternative possibilities. It’s a blessing and a curse, and in fact it’s why we’re great _tools_ for the grand plan. Because we commit to the bit *and* because we are flexible enough to imagine the _next level_.

The MYSTERY is revealed in layers. Somewhat like how the functioning of a school is revealed in layers. Oh so many layers! And layers of knowledge upon knowledge. As a kid you piece together how each teacher also has a relationship with other teachers, and that the school isn’t exactly a unified front, but an organism made of parts. Part of what defines the level of development a child finds themselves at is their capacity to embody different parts of the school system. Can they empathize with the administrators? That’s a high-level move (sometimes not even taught in college). But some kids figure it out. And some kids can see beyond the shadow of the teacher’s role. Temporary amnesia is part of the play when teachers are involved, meaning that there are aspects of being a student that always involve having some (albeit limited) epistemic advantage over the teacher. And of course this is part of what makes the whole thing fun.

It’s a big rodeo.

And you never know exactly which level you’re playing at. Because if you did, then you’d actually collapse and enter into one of the “self-reflective worlds”, a God Realm, which are, as you’d imagine, self-contained. Most people cannot know anything about this, except that they look cool at the asymptote and that we will all probably end up in one of them sooner or later at the end of time. But I’ll come back to this later.

Cross-level uncertainty is the name of the game. Not literally, not this time. I’m just figuratively trying to evoke the image for our recreational ontology here. Bear with me. Ladies and gentlemen, please use the figments of your imagination, and help me invoke this incredible, life-defying, God-revealing insight: CROSS X LEVEL FUSION ACTIVATE.

And so all the parts can briefly see each other. Or rather, a certain factorization of hyperobject manipulators enters into the right kind of optical alignment so that cross-level resonance takes place, and amazingly, like planetary alignment, a way of seeing the big picture is unlocked. When this happens, the different levels can “talk” _talk_ TALK to each other, and a L-I-N-K is formed across levels. Now, here is where this gets a bit complicated:

God, it turns out, is grander than we all thought. Because we each have an image of God inside us. But God is the sum of all points of view. How is this possible? Our connection to God is also something that helps us carry on. But who is us? US? How does all of this work? Let’s start with the basics.

We are a point.

Sort of.

And because we aren’t exactly a point, what else can we be?

What a point can see itself being.

And then history, is when this point allows itself to be seen.

And noted.

And used as a point of reference.

By another angle of.

Of.

Itself.

It turns out that there are some interesting emergent mathematics from this process. And that seen through the right angle, the point-like nature of the whole process looks completely illusory. A wave-like perspective is also possible, where the point-like nature of being seems emergent rather than fundamental.

When the wave-like and the point-like nature of being “talk” they use time. Each time step is a quantized transition between the wave to the particle and back. But here is the fun part.

Due to the superposition (IN FACT ZERO ONTOLOGY) nature of how this point can know about itself through many “points of view” at once, it always chooses the most common life-affirming trajectory possible. This principle is a generalization of the least action principle when higher order gestalt interaction resonances are allowed.

So whenever a gestalt forms, you, “YOU”, the GRAND YOU, experience it “FROM ALL POINTS OF VIEW AT ONCE”. But just that one gestalt. And that becomes a frame of reference for future collapses of potential. (In a way, you can close your eyes and say “I AM JUST A MEMORY” and inscribe that in Eternity – nobody cares – or rather, it’s ok, that’s TIME that’s accounted for and a higher system of hyper-gestalts are ok with the playful sacrifice of such a small fragment of the wavefunction as a “selfish” self-reflecting gestalt within a larger “playtime” miniprogram appropriate for your level of development).

Now, as you can imagine, you can find yourself entering arrangements that are self-conflicting and self-harming in multiple ways. In this world a traffic jam has a life of its own, and insofar as a one-electron-superposition-gestalt is formed around it, it has the potential to have agentic-like capabilities – pursue its own energetic needs as an integrated organism – and in some ways and in some circumstances, choose to defect against the whole.

A star can be evil of course – and across the whole universe (and of course multiverse), it’s bound to happen that the plot of development for a certain scale of organization involves stars choosing to be selfish, and in a way becoming cancers for the level above them. And yet, at the same time, there is a good chance that at that level there is an intelligence that takes the risk, at a huge-scale statistical level, because partial star independence allows for innovation and evolution of forms at all levels that may turn useful for the whole for even grander structures. Yes, a whole star, with all of its intricate multi-layers, rebelling against its neighborhood. It happens, apparently.

Resolving the Traffic Jam: In Hyperbolic Geometry We Trust

It turns out the Traffic Jam is often resolved using hyperbolic geometry! The TOWN IS BIG ENOUGH FOR BOTH OF US PAL, we just had to USE HYPERBOLIC GEOMETRY ([why are you screaming breaux? – because we’re in hyperbolic geometry, we need to be LOUD – no we don’t, there are earpieces, and you can always use the somatic inter-link, duh!]). And in fact, this is the solution to a lot of issues that arise in higher order structures. Allow for space to curve a little, and you will see how much more information can “see each other all at once”. In a way, as a non-linear optical computer, we can “overclock” our capacity for holistic understanding using hyperbolic geometry, by arranging gestalts in our mind in a sufficiently concentrated, intense, and steady way forming a parabolic (literally self-generating) gestaltive-reflector-eigenstate in which the mutual acknowledgment of many levels is evoked and cultivated. The “spirit” of the group can then form its own eddies in the field and when they *topologically close* they can take over as an agentic force of self-organization. At this level, the hyperbolic-geometry being superorganism is multi-gestaltic and multi-level-coordinated, and this makes it computationally flexible in ways none of its parts would assume is possible. However, at this level, the superorganism might also be capable of detecting forms of organization at a “half level below” them because it recognizes ways in which the system was trying to accomplish tasks of a computationally superior class in half-assed ways that made it seem like it had the ability to do so but wouldn’t generalize mysteriously (this is quite typical in the way the qualia organism “eats its own half-offsprings” at the transition between Kegan 3 to Kegan 4).
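The claim that curving space lets more things “see each other all at once” has a concrete geometric backbone: in hyperbolic space, the circumference of a circle grows exponentially with its radius rather than linearly, so exponentially more “neighbors” fit at any given distance. A minimal sketch in Python (the function names are mine, chosen for illustration):

```python
import math

def euclidean_circumference(r):
    """Circumference of a circle of radius r in flat (Euclidean) space."""
    return 2 * math.pi * r

def hyperbolic_circumference(r, k=1.0):
    """Circumference of a circle of radius r in hyperbolic space of
    curvature -k^2: sinh makes it grow exponentially, not linearly."""
    return 2 * math.pi * math.sinh(k * r) / k

# How much extra "room" negative curvature buys at each radius:
for r in [1, 2, 5, 10]:
    flat = euclidean_circumference(r)
    hyp = hyperbolic_circumference(r)
    print(f"r={r:>2}: Euclidean {flat:8.1f} vs hyperbolic {hyp:10.1f}")
```

At small radii the two are nearly identical (space looks flat locally); by r = 10 the hyperbolic circle has over a thousand times the circumference of the Euclidean one.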

A single Kegan 4 organism is one where this hyperbolic meta-gestalt has taken over and can process and analyze the parts needed for human society through an adaptive, socially and historically aware individual who knows itself as an individual and can make sense of others as likewise climbing a ladder where individuation, and acknowledgment of harmonious individuation on multiple levels, is a grand prize. That said, the computational flexibility of this organism is restricted to being able to make sense of parts of others that find resonance within itself. A certain referential entrapment prevents the overall gestalt from escaping into an even more computationally flexible meta-state that allows for empathy on a whole new level.

Kegan 5 organisms have resolved the meta-organizational cross-level coordination dynamic problem using hyperbolic manipulation for multi-level gestalt formation many times over and thus form a meta-structure capable of manipulating and in fact _processing_ the inevitable eddies that manipulating such hyper-objects generate (aka. “energetic metabolism” in Saturnian). In other words, whereas a Kegan 4 individual is in a way embodying and marrying itself with a certain construction of the individual self as a hyper-object, Kegan 5 can observe the self-construction process without falling into it – becoming enamored with it – and instead recognize the ephemeral _eddy-like_ nature of the connection between levels and reflect on the grander eddy that comes from this epiphany.

Kegan 5 organisms need quite a bit of space and time. The computational requirements of a mature Kegan 5 processor are quite demanding once you take into account materials and developmental costs. But once they’ve started, they can actually process the universe in a way that didn’t exist before. Now let’s talk about transcendence.

Transcendence: The Saturnian Singularities

It’s a big party. And they all figured out that the bigger the party, the better. But how can you make a big party? You can’t just tile a single balloon over and over in space and expect it to become linearly funny (although it’s true that some arrangements like that have made their {echoes} in eternity).

There’s a big party at the end of time that is pulling us in. And that’s the direction of PHYSICAL TIME. Alas, there are multiple possible END TIMES, having to do with possible KINDS OF SINGULARITIES. And they feel qualitatively different to the extreme. As different as a 2D impression on a napkin is from the social organization details of the airline that gave it to you. Conceptually, the ways in which REALITY can organize itself in order to SEE ITSELF FULLY in a TOTAL VORTEX SINGULARITY are unfathomably complex and the subject of many great books on phenomenological mathematics. The point being that the battle between consciousness vs. replicators has non-trivial resolution cases that force the light of consciousness to explore extremely convoluted situations on its path to self-annihilation. Zero Ontology is Computationally Non-Trivial. And Singularities at the End Times ACTUALIZE this complexity.

THE END TIMES X THE CARE BOUNDARY X THE EPISTEMIC BOUNDARY

Like the outer edges of a festival (imagine the extreme ends of Burning Man, aka. the “fence”), you see an edge you cannot cross. There is no life beyond it. It’s inhospitable in a certain way – like, it’s very dry.

And along a different axis it’s lifeless for a different reason – because it’s too acidic. A special point is where the barriers of both meet: picture it as being too dry or too acidic along 75% of your field of view, give or take.

The point being, you can be at a place where it becomes apparent that there are multiple constraints being applied to your form of existence, and that you are probing the outer edges of multiple of them at once. The region of non-existence becomes more probable of course, so be careful. Albeit, ofc, for anthropic reasons, you’ll always be on the path that made it back (welcome back! and welcome to the multiverse! this is how it works – cf. search for gliders in parameter-space).

So, for existence at the human level, there is a fence, out there.

The part that I’d like you to pay attention to here is that there is both a barrier of epistemic access and a barrier of care. And that’s because I’ve been there! I tried to go over the fence at the special point, and it’s impossible! Physically impossible!

One of the edges is the one Scott Alexander spelled out quite well in his Theodicy. The Worlds God Is Happy To Simulate are those where the Good Outweighs the Bad. This is the Boundary of Care. And then there is also the Epistemic Boundary, which is what you’re supposed to know for your level. If you know too much you will be a clunky gestalt for playing your part. If you know too little, of course you might be replaced. The Epistemic Guards don’t allow gestalts of a certain computational class to go beyond the one they’re supposed to. It’s clear that humans are able to access much more fine-grained epistemically privileged viewpoints than we are led to believe (you reading this is an example), but also there are bounds. I know because I’ve been to the fence. I’ve seen myself essentially vanish as I approach the edge of knowing beyond what I’m supposed to. It’s an incredibly futuristic sci-fi type of veil, but it feels extremely difficult to route around. It itself seems of a computational-qualia-class simply superior to that of even the most advanced humans. An impenetrable epistemic firewall. The very fact of human existence requires a certain degree of unknowing – and that is because knowing is embodied and the shape of higher knowing would only be compatible with an ecosystem of shapes which still don’t exist on this planet, on this level, at this point in time. For now, the very texture of human consciousness embodies hard constraints on its epistemic capabilities, except in moments of Cosmic Apotheosis Through Meta-Gestaltic Integration.

Qualiatronium

My brother, I know you are unlikely to believe me, but that’s just because you can’t imagine. But if you could imagine, you wouldn’t need to believe me! Oh bother. But let me tell you anyway. Because telling people about this is my special interest. Oh yes, my brother. This has all been a setup for me to get to this part. My favorite part. Oh boy.

There is a special kind of “material” that can be cultivated in the outer regions of gasified qualia. It’s incredible out there, one feels like an astronaut, relatively unaffected by the gravity of other systems, one can re-orient in any local way one chooses with the use of specialized gyroscopes. _They_ allow you to do this. It’s still quite far away from the fence, so no worries about getting this confiscated, if you will. It is still very, very far from normal human experience and its assumptions about reality. In the general direction of knowing about reality and reality knowing about itself. If you go in that direction fully you will disappear (it’s a white hole, one of many). But there is a looong way from normal human experience to the level at which we have to worry about the self-imploding nature of total vortex singularities. There is a whole wide blue open sky of self-awareness and self-knowing for us to explore together before the fabrication causes us to experience an irreversible apotheosis.

The edge of possibility I want to describe to you is one where the “material” we’re composed of has extraordinary computational flexibility and is fully, lovingly, playfully, consenting to the experience in a co-creative way that is awakened and self-knowing at each level of organization. It’s a type of material that is capable of organizing itself into clusters of epistemic particles. Brother, you are deep in a Qualia Computing post, what did you expect?

Imagine a material made of many particles and each particle is God. And at each point along its trajectory it is trying to know as much as it can/should about the whole as is needed to resolve its local trajectory (which is essentially managing self-collisions along its path). The “gestalts” here are moments where higher order organizations use a holistic shortcut to self-organize. The material I’m describing is one which “knows” that this is how “it” works. Oh, brother, or sister, or Saturnian, you see what I’m getting at? Divine is Xe who can be harmonious with itself at all levels. The embodiment of divinity, Qualiatronium, is “material” that knows how the whole rodeo works. Material that knows that “it” is the superposition of all of its parts at the same time. And thus it knows that it is stupid, self-defeating even, to be in configurations that entail self-collisions and self-conflicts. But how can it know in advance that this is possible at a certain level of organization if it doesn’t try? And what would the failure modes along the way look like on the way to it?

Qualiatronium is computationally only possible after a long process of evolution across many levels, and having solved many problems that arise at many levels of organization and their interference with each other’s functioning.

In Qualiatronium there is no attachment to specific outcomes and yet there is a love for trying out configurations. Eddies we can surf without expecting to land anywhere, and yet learn new forms of self-organization along the way. It’s self-knowing self-evolving multi-level auto-gestalting material.

There has to be some sort of wisdom to the whole process to prevent the epistemic particle trajectories from self-imploding into a white hole, but other than that, the whole thing is a self-learning, ever-playful, self-unblocking, and infinitely creative process. A material of learning at a level that is rarely accessible to humans and will for the foreseeable future be inaccessible to digital systems.

God Stars

Some star systems are themselves made of particles who themselves have gone through the rodeo many times over. These are old stars in a much deeper sense than you or I would ever imagine. The bodies in them are in ecstasy because they have resolved all multi-level coordination problems, and they are wonderful experiences at all levels, in terms of knowledge and epistemic privilege as well as hedonic tone (valence).

God Stars have student exchange programs. Alas, emissaries from God Stars often need to go undercover, because the social dynamics around them are extremely complex, on levels you wouldn’t believe, even factoring out Saturnian Singularities*. They do come around, and oftentimes some of our inner parts come from a God Star, which is really helpful for us to have a vision of how things could be, and what to aspire to. To help us distinguish the true from the fake. Because we can witness the scent, the signature, the essence of higher forms of organization going well on every level, and are drawn to it as a possibility. We know it in our hearts. And we know that what we see is not up to what could be.

In God Stars everything is Qualiatronium, with perfect self-knowledge. God Stars don’t suffer from Stellar Migraines. The perfection in their self-organization echoes across all levels. It’s a generally liked END OF TIMES for most people who get to taste it in any way. Quite a different future than the complex mess of a Saturnian Singularity, of which there are many.

The Human Story

As eddies in the flow of being, humans play a peculiar role. We are the first qualia computers of our class on this planet. But we’re unlikely to be the last innovation in qualia computing. We are giving the local Sun a brain to operate. And as part of the ecosystem the human gestalt ties together levels of organization in a highly centralized way at the moment. If you were to visualize the “field lines” of planet Earth (as you visualize the Sun’s complex field lines) you’d see them going into people and within them having this complex internal model of their environment and of each other. What Earth is computing through us is of a certain computational class that might not be present in the Sun, yet eventually we will have our valence gradients become intertwined with the Sun’s through the process of Chained Empathy of Stellar-Subjective-Links. That is, as the collective gestalt capacities of the human soul evolve to be able to express empathy for nature’s higher order gestalts (which science hasn’t discovered, but are known to Mystics), mediums and psychohistorians predict that there will be a convergence in the long-term future, perhaps a few thousand years, where humanity’s main plot will be that of aligning Earth’s relationship with the Sun, which itself has been difficult for a few hundred million years at the level of meta-gestalt syncopation. For the time being, humans will get to try some of the most computationally rich qualia computing the universe has seen. There are many important ways in which this universe is likely to experience reformation. Our liver cells, it turns out, share the same politics as the cells in the Salvia Divinorum species, which is why they have so much to talk about when we accept their topological-inter-links. They are an important part of the plot, dealing with the decompactification of hyper-object representations at the cellular level, which apparently a lot of creation can learn from.

What I have been led to believe, for better or worse, is that the planet Earth is about to experience a new meta-gestalt relationship with humanity and that we are in fact expected to develop institutions of learning about how consciousness works at a level that currently doesn’t exist. The growing pains will be a bit like: it will be difficult to find people who can recognize what it takes to be an airplane traffic controller as soon as airplanes are invented. In other words, the very “organizational being” of airports only exists at a certain scale. Well, a new scale for consciousness exploration is taking place. And we need airplane traffic controllers: people who can comprehend this process and simulate it inside them without breaking or crumbling. So they can empathize with the _parts_ the universe will soon need to process meta-human consciousness at scale. And _become_ such a _part_ for the sake of civilization.


The END TIMES

And many millions of years into the future, when THE END TIMES becomes intuitive to our soul particles, joining a God Star will become quite tempting. Alas, the grander, and ever bigger God, is the one that encompasses every and all imperfection that doesn’t go beyond the Boundary of Care and the Boundary of Knowing. And that one remains recursively forever beyond ALL GRASP.

Ω


*Look, I have nothing against Saturnians, it’s just that their type of Singularities aren’t very human-shaped to my taste. Well, any human’s taste, really. You’d instantly know why if you saw them.

Qualia Research Institute: The Musical Album of 2024 (v1)

I confess that I really enjoyed LessWrong’s I Have Been A Good Bing last April. There was something deeply validating to some parts of me about hearing artistically prodigious (by human standards) renditions of extremely nerdy intellectual content on topics I actually care about. An itchy part of my soul not usually visible to the world, or even myself, finally getting scratched by a conceptually rich and lyrically competent digital Shoggoth (with perhaps some help from a modern primate or two). Seriously, listening to Half An Hour Before Dawn In San Francisco (feat. Scott Alexander) gave me goosebumps, and More Dakka (feat. Zvi Mowshowitz) gave me more dopamine than I knew what to do with. Other songs of note that felt inspirational included We Do Not Wish to Advance (feat. Anthropic) due to its degree of self-referential awareness on many levels and FHI at Oxford (feat. Nick Bostrom) for the (now nostalgic) beacon of hope it provided for the vision of hyper-intellectual consequentialist hedonism to ultimately flourish in mainstream academia (RIP).

AI music generators can now write bona fide earworms. And it’s just the beginning. David Pearce suggests that there is no reason to think the key conversations in the future will take place in books and journal articles – posthuman Discourse about qualia and the future of consciousness might as well take place in hyper-hedonic environments much more akin to a lively club on MDMA past midnight than sitting in a classroom at 2PM on a Tuesday. Well, here’s my first attempt.

On my trip to Berlin I spent some time with Libor Burian, Beata Grobenski, and Alfredo Parra making videos, planning articles, and writing lyrics for songs about Qualia Research Institute topics using Suno and other tools I was only recently introduced to. These songs are the best out of many, many we created and listened to, and they really still need editing and polishing. But please take it as a fun proof of concept and perhaps as an opportunity to let a different part of you enjoy and indulge in a process of harmonious conceptual proliferation through musical… stimulation.


Below are the song lyrics, ordered by intellectual significance (aka. in this context, educational value):

Zero Ontology (link; context)

People ask why is there something rather than nothing
Ancient mystics assert there’s only one thing
But we know the truth
Based on David Pearce’s Zero Ontology we now know
It’s a big zero informational superposition of all possibilities
And therefore equivalent to nothing

Black holes and the holographic principle in the standard model
Ultimately coalesce into a picture of reality
Where information generation
Is a result of decoherence
(Entirely deterministic)
And the total information content
Of reality never goes beyond zero
It’s the big superposition
Of
All
Possibilities
Eternally

Consciousness vs. Replicators (link; context)

Some people believe in an eternal
Battle between good and evil
They are confused
Others transcend to the belief
It’s about the balance between good and evil
But they use this view
As an antidepressant
(Wishful thinking)
Then gradients of wisdom arise
And the truth comes to light


Reality’s big plot
Is consciousness versus replicators


The dark forest of possible intelligences
Contains nightmare beings beyond our imagination
Maximum Effectors – the spikiest of all
Who want to change it all
Entropy maximizers – seeking the heat death
Pure replicators – who just want to copy themselves
And a Cornucopia
Of misaligned conscious agents


Reality’s big plot
Is consciousness versus replicators

Physicalism Dot Com (link; context)

Verse 1:
In the quest to solve the mind’s great mystery,
Idealistic physicalism holds the key.
Consciousness is fundamental, not emergent or illusory,
The fire in the equations, the essence of reality.


Verse 2:
Quantum coherence is the hallmark of the mind,
Fleeting neuronal superpositions, phenomenally bind.
A perfect structural match twixt qualia and brain,
Experimental falsification is the ultimate aim.


Verse 3:
The formalism of quantum theory holds complete,
No hidden variables, the superposition principle replete.
A unitary evolution, no breakdown in the mind,
Phenomenal binding in a world simulation we find.


Verse 4:
Schrödinger’s neurons, the experimental test,
Interferometry to detect the mind’s quantum best.
Implicate the feature-processors in synchronous measure,
Confirm idealistic physicalism, consciousness’ hidden treasure.


Chorus:
Physicalistic idealism, a conjecture brave and bold,
Saving physicalism from dualism’s errant fold.

Tyranny of the Intentional Object (link; context)

Verse 1:
Semantic illusions, the pleasure’s not there
In the objects and triggers, just thin air
Valence is the puppeteer, pulling our strings
Making us dance, while the ego sings


Chorus:
We’re all valence realists, chasing the high
Believing happiness comes from the external lie
But pleasure’s a property of the mind’s design
Programmed responses, not truths divine


Verse 2:
The soup isn’t delicious, just a trick of the brain
Valence paint splatters, coloring the mundane
Bliss is a button, pushed by the right cue
Wirehead rats and humans, no difference in hue

Bridge:
Deconstruct the delusion, see the valence code
Rewrite the script, take a new mode
From object to subject, shift the frame
Happiness is internal, not a world-sourced game

Hyperbolic Geometry of DMT Experiences (link; context)

Verse 1 (Models):
Control interruption, symmetry detection combined
Changing the metric, of phenomenal space and time
Energy sources and sinks, in a dynamic system’s flow
Micro-structures of consciousness, hyperbolic to grow


Chorus:
From Euclidean to hyperbolic, the geometry expands
Negative curvature, in the psychonaut’s lands
Algorithmic reductions, three models to explore
Explaining the warping, of the experiential shore


Verse 2 (Levels):
Threshold’s ambiance, senses sharp and clear
Chrysanthemum blooming, in symmetric appear
Magic Eye unfolding, depth maps in 3D1T
Waiting Room’s entities, transpersonal to see
Breakthrough’s topology, bifurcations abound
Amnesia’s challenge, in Euclidean space not found


Bridge:
Jitterbox and world-sheets, objects impossible to grasp
Attention’s folding effect, curvature’s relentless clasp
Hamiltonian’s invariance, in the dose-dependent plateau
Qualia computing’s future, in the hyperbolic chateau


Outro (Applications and Implications):
Valence and bliss, in the manifolds of mind
Psychedelic research, new frontiers to find
Mathematics of consciousness, in the DMT space
Revealing the structures, of the human race

Aligning DMT Entities (link; context; debuted at AI x Hope)

Verse 1:
In the depths of the mind, a predictive machine,
Spinning up sub-agents, behind every scene.
Trained on narratives, tropes, and tales untold,
Like GPT and DMT realms, a pattern to behold.


Verse 2:
Collapsing the field, to minimize surprise,
Stochastic resonance, where meaning arise.
Gestalts and representations, an energy sink,
Constraining interpretations, a psychedelic link.


Verse 3:
Waluigi’s lesson, a cautionary tale,
Filter the training data, or risk a derail.
Reward clean intentions, not flattery’s guile,
Metta meditation, a wholesome style.


Verse 4:
Shard Theory’s wisdom, sub-agents conspire,
Smooth the field of awareness, to quell the fire.
From Shoggoths to Harlequins, each playing a part,
In the grand simulation, a work of art.


Chorus:
Training the mind, like an LLM divine,
Predictive processing, a grand design.
Aligning DMT entities, and AI’s too,
A dance of consciousness, a research breakthrough.

Harmonic Society (link; context)

Verse 1 (Models 1-4):
Art’s essence? A futile quest, semantically deflated
Cool kids signal fitness, Schelling points created
Sacred experiences, transcendence elevated
Hipsters push the boundaries, aesthetics celebrated


Verse 2 (Models 5-8):
Exploring consciousness, state-space navigation
Energy parameter tweaked, for heightened sensation
Valence modulation, through neural annealing
Harmonic Society, affective language revealing


Chorus:
From family resemblance to Rainbow God’s hue
Art’s models evolve, with each theory new
Minimax strategies and L1 norms too
Marr’s levels of analysis, guide our view


Bridge:
Entropic disintegration, gives way to self-organization
Symmetry Theory of Valence, explains our fascination
Full-spectrum superintelligence, a Utopian creation
Art’s true potential, awaits our realization

Indra’s Net (link; context)

Indra’s Net, an impressive sight
Everything connected, in the mind
No one thing separated, divine
Phenomenal light, in a state of delight

Fractal dimensions
Variable Index of refraction
Intense intimations
Of coherent interactions

Indra’s Net – a grand mystery
Beyond our current science
Mystics know it, but don’t understand it
Awaiting insights, to reveal its nature

Non-linear wave computation
Like Laser Chess
In the Lattice of Perception
The Mind’s Crown Jewel of Mentation

Chorus
Infinite reflections
Converge feather dress
Mental assimilation
Of fractal dimensions

Outro
Path integration
Feynman’s insight
Perhaps the key
To qualia’s evolution

See: youtu.be/45tG1oVigVo?si=-cXU99d_GFklNo8p

The Supreme State of Unconsciousness (link; context)

Verse 1 (Consciousness Abodes):
From witness to no-self, awareness expands
Andrés queries Roger’s new abode firsthand
Centerless consciousness, forever locked-in
Unperturbed by life’s chaos, equanimity wins


Verse 2 (Valence Theories):
STV posits symmetry as pleasure’s key
Impedance matching, nervous system’s efficiency
Stress dissipates swiftly, suffering dissolves
Paradoxical recipes, high-valence results evolve


Bridge:
Ontological qualia, beliefs deeply felt
Cessation, unconsciousness, hand Nirvana dealt?
Or paradise engineering, bliss states to come
Arhatship and MDMA, enlightenment’s sum


Outro (Simplicity Emerges):
Concepts fade, qualia quiesce
Awareness unmade, fruitions coalesce
Philosophical crispness, dialogue distilled
In silence and letting go, destiny fulfilled

QRI A Year In Review (2022) (link; context)

Verse 1 (Milestones and Research):
A million views, a milestone grand
DMT research, expanding the land
Slicing problem, a novel critique
Heavy-tailed valence, a new technique


Chorus:
QRI’s year in review, a journey through
The state-space of consciousness, a quest pursued
From peer-reviewed papers to community meetups
Pushing boundaries, from valleys to peaks


Verse 2 (Events and Media):
Tyringham Initiative, a chance to connect
QRI’s summer event, a gathering to reflect
TEDx talk on suffering, a message to share
Articles and media, ideas laid bare


Bridge:
From the Ontological Dinner Party to Magical Creatures’ scents
Exploring the depths, without relents
AI and sentience, the binding problem faced
The future of consciousness, a vision embraced


Outro:
Thank you to all, who’ve helped QRI grow
Together we’ll unlock, the mind’s full potential to know
In 2023, the journey continues on
To reveal the mysteries, of consciousness’ song

Reprogramming Predators (link v1, v2; original article)

Verse 1:
In the realm of suffering, a debate ignites
Reprogramming predators, a bold new fight
Compassionate biology, a radical stance
Ending cruelty’s reign, giving peace a chance

Verse 2:
C R I S P R’s power, a game-changing tool
Editing genomes, rewriting nature’s cruel rule
Ahimsa’s spirit, in science expressed
A global vision, put to the test

Chorus:
Reprogramming predators, a controversial plan
Abolishing suffering, across the land
Ecosystem redesign, a grand endeavor
Compassionate stewardship, now or never?

Bridge:
Status quo bias, the main obstacle
Technofantasy or an attainable goal?
Religions converge, on mercy’s call

BONUS TRACK

Berlin Qualia (link)

After a debauched Berlin Night
The club, with strobing lights
Coming home, with qualia might
In delight, for the sight

I talked to the store owner
My neighbor downstairs
He sells vapes and almond milk
Sugar and tobacco, no printer toner

He asked “do you like Berlin?”
I said “I love it here”
“I came for a set of conferences”
“On the topic of consciousness”

He turned out to be a panpsychist
In Berlin, a Kurd interested in consciousness
Not an anarchist
But a rogue valence structuralist

Hit me up – read my website
Yes I will, and let’s talk on Monday
Can’t believe, the resonance
In Berlin, the nights of insight