The View From My Topological Pocket: An Introduction to Field Topology for Solving the Boundary Problem

[Epistemic Status: informal and conversational, this piece provides an off-the-cuff discussion of the topological solution to the boundary problem. Please note that this isn’t intended to serve as a bulletproof argument; rather, it’s a guide through an intuitive explanation. While there might be errors, possibly even in reasoning, I believe they won’t fundamentally alter the overarching conceptual solution.]

This post is an informal and intuitive explanation for why we are looking into topology as a tentative solution to the phenomenal binding (or boundary) problem. In particular, this solution identifies moments of experience with topological pockets of the fields of physics. We recently published a paper where we dive deeper into this explanation space and concretely hypothesize that the key macroscopic boundary between subjects of experience is the result of topological segmentation in the electromagnetic field (see explainer video / author’s presentation at the Active Inference Institute).

The short explanation for why this is promising is that topological boundaries are objective and frame-invariant features of “basement reality” that have causal effects and thus can be recruited by natural selection for information-processing tasks. If the fields of physics are fields of qualia, topological boundaries of the fields corresponding to phenomenal boundaries between subjects would be an elegant way for a theory of consciousness to “carve nature at its joints”. This solution is very significant if true, because it entails, among other things, that classical digital computers are incapable of creating causally significant experiences: the experiences that emerge out of them are by default something akin to mind dust, and at best, if significant binding happens, they are epiphenomenal from the “point of view” of the computation being realized.

The route to develop an intuition about this topic that this post takes is to deconstruct the idea of a “point of view” as a “natural kind” and instead advocate for topological pockets being the place where information can non-trivially aggregate. This idea, once seen, is hard to unsee; it reframes how we think about what systems are, and even the nature of information itself.


One of the beautiful things about life is that you sometimes have the opportunity to experience a reality plot twist. We might believe one narrative has always been unfolding, only to realize that the true story was different all along. As they say, the rug can be pulled from under your feet.

The QRI memeplex is full of these reality plot twists. You thought that the “plot” of the universe was a battle between good and evil? Well, it turns out it is the struggle between consciousness and replicators instead. Or that what you want is particular states of the environment? Well, it turns out you’ve been pursuing particular configurations of your world simulation all along. You thought that pleasure and pain follow a linear scale? Well, it turns out the scales are closer to logarithmic in nature, with the upper end of the distribution being orders of magnitude more intense than the lower end. I think that along these lines, grasping how “points of view” and “moments of experience” are connected requires a significant reframe of how you conceptualize reality. Let’s dig in!

One of the motivations for this post is that I recently had a wonderful chat with Nir Lahav, who last year published an article that steelmans the view that consciousness is relativistic (see one of his presentations). I will likely discuss his work in more detail in the future. Importantly, talking to him reminded me that ever since the foundation of QRI, we have taken for granted the view that consciousness is frame-invariant, and worked from there. It felt self-evident to us that if something depends on the frame of reference from which you see it, it doesn’t have inherent existence. Our experiences (in particular, each discrete moment of experience) have inherent existence, and thus cannot be frame-dependent. Every experience is self-intimating, self-disclosing, and absolute. So how could it depend on a frame of reference? Alas, I know this is a rather loaded way of putting it and risks confusing a lot of people (for one, Buddhists might retort that experience is inherently “interdependent” and has no inherent existence, to which I would reply “we are talking about different things here”). So I am motivated to present a more fleshed-out, yet intuitive, explanation for why we should expect consciousness to be frame-invariant and how, in our view, our solution to the boundary problem is in fact up to this challenge.

The main idea here is to show how frames of reference cannot bootstrap phenomenal binding. Indeed, “a point of view” that provides a frame of reference is more of a convenient abstraction that relies on us to bind, interpret, and coalesce pieces of information than something with a solid ontological status that exists out there in the world. Rather, I will try to show how we are borrowing from our very own capacity for having unified information in order to put together the data that creates the construct of a “point of view”; importantly, this unity is not bootstrapped from other “points of view”, but draws from the texture of the fabric of reality itself. Namely, the field topology.


A scientific theory of consciousness must be able to explain the existence of consciousness, the nature and cause of the diverse array of qualia values and varieties (the palette problem), how consciousness is causally efficacious (avoiding epiphenomenalism), and how the information content of each moment of experience is presented “all at once” (namely, the binding problem). I’ve talked extensively about these constraints in writings, videos, and interviews, but what I want to emphasize here is that these problems need to be addressed head-on for a theory of consciousness to work at all. Keep these constraints in mind as we deconstruct the apparent solidity of frames of reference and the difficulty that arises when trying to bootstrap causal and computational effects connected to phenomenal binding out of a relativistic frame.

At a very high level, a fuzzy (but perhaps sufficient) intuition for what’s problematic when a theory of consciousness doesn’t seek frame-invariance is that you are trying to create something concrete with real and non-trivial causal effects and information content, out of fundamentally “fuzzy” parts.

In brief, ask yourself: can something fuzzy “observe” something fuzzy? How can fuzziness be used to bootstrap something non-fuzzy?

In a world of atoms and forces, “systems” or “things” or “objects” or “algorithms” or “experiences” or “computations” don’t exist intrinsically because there are no objective, frame-invariant, and causally significant ways to draw boundaries around them!

I hope to convince you that any sense of unity or coherence that you get from this picture of reality (a relativistic system with atoms and forces) is in fact a projection from your mind, that inhabits your mind, and is not out there in the world. You are looking at the system, and you are making connections between the parts, and indeed you are creating a hierarchy of interlocking gestalts to represent this entire conception of reality. But that is all in your mind! It’s a sort of map-and-territory confusion to believe that two fuzzy “systems” interacting with each other can somehow bootstrap a non-fuzzy ontological object (i.e., a requirement for a moment of experience).

I reckon that these vague explanations are in fact sufficient for some people to understand where I’m going. But some of you are probably clueless about what the problem is, and for good reason. This is never discussed in detail, and this is largely, I think, because people who think a lot about the problem don’t usually end up with a convincing solution. And in some cases, the result is that thinkers bite the bullet that there are only fuzzy patterns in reality.

How Many Fuzzy Computations Are There in a System?

Indeed, thinking of the universe as being made of particles and forces implies that computational processes are fuzzy (leaky, porous, open to interpretation, etc.). Now imagine thinking that *you* are one such fuzzy computation. Having this as an unexamined background assumption gives rise to countless intractable paradoxes. The notion of a point of view, or a frame of reference, does not have real meaning here, as the way to aggregate information doesn’t ultimately allow you to identify objective boundaries around packets of information (at least not boundaries that are more than merely conventional in nature).

From this point of view (about points of view!), you realize that indeed there is no principled and objective way to find real individuals. You end up in the fuzzy world of fuzzy individuals of Brian Tomasik, as helpfully illustrated by this diagram:

Source: Fuzzy, Nested Minds Problematize Utilitarian Aggregation by Brian Tomasik

Brian Tomasik indeed identifies the problem of finding real boundaries between individuals as crucial for utilitarian calculations. And then, incredibly, he also admits that his ontological framework gives him no principled way of doing so (cf. Michael E. Johnson’s Against Functionalism for a detailed response). Indeed, according to Brian (from the same essay):

“Eric Schwitzgebel argues that “If Materialism Is True, the United States Is Probably Conscious“. But if the USA as a whole is conscious, how about each state? Each city? Each street? Each household? Each family? When a new government department is formed, does this create a new conscious entity? Do corporate mergers reduce the number of conscious entities? These seem like silly questions—and indeed, they are! But they arise when we try to individuate the world into separate, discrete minds. Ultimately, “we are all connected”, as they say. Individuation boundaries are artificial and don’t track anything ontologically or phenomenally fundamental (except maybe at the level of fundamental physical particles and structures). The distinction between an agent and its environment is just an edge that we draw around a clump of physics when it’s convenient to do so for certain purposes.

My own view is that every subsystem of the universe can be seen as conscious to some degree and in some way (functionalist panpsychism). In this case, the question of which systems count as individuals for aggregation becomes maximally problematic, since it seems we might need to count all the subsystems in the universe.”

Are you confused now? I hope so. Otherwise I’d worry about you.

Banana For Scale

A frame of reference is like a “banana for scale” but for both time and space. If you assume that the banana isn’t morphing, you can use how long it takes for waves emitted from different points in the banana to bounce back and return in order to infer the distance and location of physical objects around it. Your technologically equipped banana can play the role of a frame of reference in all but the most extreme of conditions (it probably won’t work as you approach a black hole, for very non-trivial reasons involving severe tidal forces, but it’ll work fine otherwise).

Now the question that I want to ask is: how does the banana “know itself”? Seriously, if you are using points in the banana as your frame of reference, you are, in fact, the one who is capable of interpreting the data coming from the banana to paint a picture of your environment. But the banana isn’t doing that. It is you! The banana is merely an instrument that takes measurements. Its unity is assumed rather than demonstrated. 


In fact, for the upper half of the banana to “comprehend” the shape of the other half (as well as its own), it must also rely on a presumed fixed frame of reference. However, it’s important to note that such information truly becomes meaningful only when interpreted by a human mind. In the realm of an atom-and-force-based ontology, the banana doesn’t precisely exist as a tangible entity. Your perception of it as a solid unit, providing direction and scale, is a practical assumption rather than an ontological certainty.

In fact, the moment we try to get a “frame of reference to know itself” you end up in an infinite regress, where smaller and smaller regions of the object are used as frames of reference to measure the rest. And yet, at no point does the information of these frames of reference “come together all at once”, except… of course… in your mind.

Are there ways to bootstrap a *something* that aggregates and simultaneously expresses the information gathered across the banana (used as a frame of reference)? If you build a camera to take a snapshot of, say, the information displayed at each coordinate of the banana, the picture you take will have spatial extension and suffer from the same problem. If you think that the point at the aperture can itself capture all of the information at once, you will encounter two problems. If you are thinking of an idealized point-sized aperture, then we run into the problem that points don’t have parts, and therefore can’t contain multiple pieces of information at once. And if you are talking about a real, physical type of aperture, you will find that it cannot be smaller than the diffraction limit. So now you have the problem of how to integrate all of the information *across the whole area of the aperture* when it cannot shrink further without losing critical information. In either case, you still don’t have anything, anywhere, that is capable of simultaneously expressing all of the information of the frame of reference you chose. Namely, the coordinates you measure using a banana.

Let’s dig deeper. We are talking of a banana as a frame of reference. But what if we try to internalize the frame of reference? A lot of people like to think of themselves as the frame of reference that matters. But I ask you: what are your boundaries, and how do the parts within those boundaries agree on what is happening?

Let’s say your brain is the frame of reference. Intuitively, one might feel like “this object is real to itself”. But here is where the magic comes. Make the effort to carefully trace how signals or measurements propagate in an object such as the brain. Is it fundamentally different than what happens with a banana? There might be more shortcuts (e.g. long axons) and the wiring could have complex geometry, but neither of these properties can ultimately express information “all at once”. The principle of uniformity says that every part of the universe follows the same universal physical laws. The brain is not an exception. In a way, the brain is itself a possible *expression* of the laws of physics. And in this way, it is no different than a banana.

Sorry, your brain is not going to be a better “ground” for your frame of reference than a banana. And that is because the same infinite recursion that happened with the banana when we tried to use it to ground our frame of reference into something concrete happens with your brain. And also, the same problem happens when we try to “take a snapshot of the state of the brain”, i.e. that the information also doesn’t aggregate in a natural way even in a high-resolution picture of the brain. It still has spatial extension and lacks objective boundaries of any causal significance.

Every single point in your brain has a different view. The universe won’t say “There is a brain here! A self-intimating self-defining object! It is a natural boundary to use to ground a frame of reference!” There is nobody to do that! Are you starting to feel the groundlessness? The bizarre feeling that, hey, there is no rational way to actually set a frame of reference without it falling apart into a gazillion different pieces, all of which have the exact same problem? I’ve been there. For years. But there is a way out. Sort of. Keep reading.

The question that should be bubbling up to the surface right now is: who, or what, is in charge of aggregating points of view? And the answer is: this does not exist, and it is impossible for it to exist if you start out in an ontology whose core building blocks are relativistic particles and forces. There is no principled way to aggregate information across space and time that would result in the richness of simultaneous presentation of information that a typical human experience displays. If there is integration of information, and a sort of “all at once” presentation, the only kind of (principled) entity that this ontology would accept is the entire spacetime continuum as a gigantic object! But that’s not what we are. We are definite experiences with specific qualia and binding structures. We are not, as far as I can tell, the entire spacetime continuum all at once. (Or are we?).

If instead we focus on the fine structure of the field, we can look at mathematical features in it that would perhaps draw boundaries that are frame-invariant. Here is where a key insight becomes significant: the topology of a vector field is Lorentz invariant! Meaning, a Lorentz transformation will merely squeeze and shear, but never change topology on its own. Ok, I admit I am not 100% sure that this holds for all of the topological features of the electromagnetic field (Creon Levit recently raised some interesting technical points that might make some EM topological features frame-dependent; I’ve yet to fully understand his argument but look forward to engaging with it). But what we are really pointing at is the explanation space. A moment ago we were desperate to find a way to ground, say, the reality of a banana in order to use it as a frame of reference. We saw that the banana conceptualized as a collection of atoms and forces does not have this capacity. But we didn’t inquire into other possible physical (though perhaps not *atomistic*) features of the banana. Perhaps, and this is sheer speculation, the potassium ions in the banana peel form a tight electromagnetic mesh that creates a protective Faraday cage for this delicious fruit. In that case, well, the boundaries of that protective sheet would, interestingly, be frame-invariant. A ground!

The 4th Dimension

There is a bit of a sleight of hand here, because I am not taking into account temporal depth, and so it is not entirely clear how large the banana, as a topological structure defined by the potassium ions’ protective sheet, really is (again, this is totally made up! for illustration purposes only). The trick here is to realize that, at least insofar as experiences go, we also have a temporal boundary. Relativistically, there shouldn’t be a hard distinction between temporal and spatial boundaries of a topological pocket of the field. In practice, of course, one will typically overwhelm the other, unless you approach the brain you are studying at close to the speed of light (not ideal laboratory conditions, I should add). In our paper, and for many years at QRI (iirc an insight by Michael Johnson in 2016 or so), we’ve talked about experiences having “temporal depth”. David Pearce posits that each fleeting macroscopic state of quantum coherence spanning the entire brain (the physical correlate of consciousness in his model) can last as little as a couple of femtoseconds. This does not seem to worry him: there is no reason why the contents of our experience would give us any explicit hint about our real temporal depth. I intuit that each moment of experience lasts much, much longer. I highly doubt that it can last longer than a hundred milliseconds, but I’m willing to entertain “pocket durations” of, say, a few dozen milliseconds. Just long enough for 40 Hz gamma oscillations to bring disparate cortical micropockets into coherence, and importantly, topological union, and to have this new emergent object resonate (where waves bounce back and forth) and thus do wave computing worthwhile enough to pay the energetic cost of carefully modulating this binding operation. Now, this is the sort of “physical correlate of consciousness” I tend to entertain the most.
Experiences are fleeting (but not vanishingly so) pockets of the field that come together for computational and causal purposes worthwhile enough to pay the price of making them.
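As a back-of-envelope check on these timescales, here is a minimal sketch in Python. The numbers are illustrative round values drawn from the discussion above (40 Hz gamma, a hypothetical 35 ms pocket, a ~2 fs coherence window), not measurements:

```python
import math

# Back-of-envelope timescales for the "temporal depth" candidates above.
# All durations are illustrative round numbers, not measurements.
gamma_hz = 40.0
gamma_period_ms = 1000.0 / gamma_hz        # one gamma cycle lasts 25 ms

pocket_ms = 35.0                           # "a few dozen milliseconds"
cycles_per_pocket = pocket_ms / gamma_period_ms

femto_window_s = 2e-15                     # Pearce-style femtosecond coherence window
orders_apart = math.floor(math.log10((pocket_ms / 1000.0) / femto_window_s))

print(f"gamma period: {gamma_period_ms:.0f} ms")            # gamma period: 25 ms
print(f"gamma cycles per pocket: {cycles_per_pocket:.1f}")  # 1.4
print(f"orders of magnitude vs femtosecond window: {orders_apart}")  # 13
```

A pocket of a few dozen milliseconds fits one or two full gamma cycles, and sits about thirteen orders of magnitude away from a femtosecond-scale window, which is why the two proposals differ so much in kind.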

An important clarification here is that now that we have this way of seeing frames of reference, we can reconceptualize our previous confusion. We realize that simply labeling parts of reality with coordinates does not magically bring together the information content that can be obtained by integrating the signals read at each of those coordinates. But we suddenly have something that might be way better and more conceptually satisfying. Namely, literal topological objects with boundaries embedded in the spacetime continuum that contribute to the causal unfolding of reality and are absolute in their existence. These are the objective and real frames of reference we’ve been looking for!

What’s So Special About Field Topology?

Two key points:

  1. Topology is frame-invariant
  2. Topology is causally significant

As already mentioned, the Lorentz Transform can squish and distort, but it doesn’t change topology. The topology of the field is absolute, not relativistic.
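To make the invariance claim concrete, here is a minimal sketch in Python (illustrative only; a real treatment would work with the full electromagnetic field in spacetime). It computes the winding number of a planar vector field around a closed loop, then applies a boost-like linear shear, a 2D stand-in for a Lorentz transformation, and recomputes. Because the map is continuous and invertible, it squishes and shears the field but cannot change this simple topological invariant:

```python
import math

def winding_number(vectors):
    """Winding number of 2D vectors sampled along a closed loop:
    total signed angle swept, divided by 2*pi."""
    total = 0.0
    n = len(vectors)
    for i in range(n):
        x1, y1 = vectors[i]
        x2, y2 = vectors[(i + 1) % n]
        d = math.atan2(y2, x2) - math.atan2(y1, x1)
        # wrap each angle difference into (-pi, pi]
        while d <= -math.pi:
            d += 2 * math.pi
        while d > math.pi:
            d -= 2 * math.pi
        total += d
    return round(total / (2 * math.pi))

# Radial field v(x, y) = (x, y) sampled on the unit circle: winding number 1.
loop = [(math.cos(2 * math.pi * k / 200), math.sin(2 * math.pi * k / 200))
        for k in range(200)]
field = loop  # the field vector at each loop point equals the point itself

# A boost-like linear shear (rapidity 1.2): it squeezes and shears the
# vectors, but being continuous and invertible it preserves the winding.
ch, sh = math.cosh(1.2), math.sinh(1.2)
boosted = [(ch * x + sh * y, sh * x + ch * y) for (x, y) in field]

print(winding_number(field))    # 1
print(winding_number(boosted))  # 1
```

The same logic is why a Lorentz transformation, which is a smooth invertible deformation of the field, leaves topological features like linking and winding intact even as metric quantities change.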

The Lorentz Transform can squish and distort, but it doesn’t change topology (image source).

And field topology is also causally significant. There are _many_ examples of this, but let me just mention a very startling one: magnetic reconnection. This happens when the magnetic field lines change how they are connected. I mention this example because when one hears about “topological changes to the fields of physics” one may get the impression that such a thing happens only in extremely carefully controlled situations and at minuscule scales. Similar to the concerns for why quantum coherence is unlikely to play a significant role in the brain, one can get the impression that “the scales are simply off”. Significant quantum coherence typically happens over extremely small distances, for very short periods of time, and involving very few particles at a time, and thus, the argument goes, quantum coherence must be largely inconsequential at scales that could plausibly matter for the brain. But the case of field topology isn’t so delicate. Magnetic reconnection, in particular, takes place at extremely large scales, involving enormous amounts of matter and energy, with extremely consequential effects.

You know about solar flares? Solar flares are the strange phenomenon in the sun in which plasma is heated up to millions of kelvins and charged particles are accelerated to near the speed of light, leading to the emission of gigantic amounts of electromagnetic radiation, which in turn can ionize the lower levels of the Earth’s ionosphere and thus disrupt radio communication (cf. radio blackouts). These extraordinary events are the result of the release of magnetic energy stored in the Sun’s corona via a topological change to the magnetic field! Namely, magnetic reconnection.

So here we have a real and tangible effect happening at a planetary (and stellar!) scale over the course of minutes to hours, involving enormous amounts of matter and energy, coming about from a non-trivial change to the topology of the fields of physics.

(example of magnetic reconnection; source)

Relatedly, coronal mass ejections (CMEs) also depend on changes to the topology of the EM field. My layman’s understanding of CMEs is that they are caused by the build-up of magnetic stress in the sun’s atmosphere, which can be triggered by a variety of factors, including uneven spinning and plasma convection currents. When this stress becomes too great, it can cause the magnetic field to twist and trap plasma in solar filaments, which can then be released into interplanetary space through magnetic reconnection. These events are truly enormous in scope (trillions of kilograms of mass ejected) and speed (traveling at thousands of kilometers per second).

CME captured by NASA (source)

It’s worth noting that this process is quite complex and not fully understood, and new research findings continue to illuminate its details. But the fact that topological effects are involved is well established. Here’s a video which I thought was… stellar. Personally, I think a program where people get familiar with the electromagnetic changes that happen in the sun by seeing them in simulations, with the sun visualized in many ways, might help us both better predict solar storms and help people empathize with the sun (or the topological pockets that it harbors!).

“The model showed differential rotation causes the sun’s magnetic fields to stretch and spread at different rates. The researchers demonstrated this constant process generates enough energy to form stealth coronal mass ejections over the course of roughly two weeks. The sun’s rotation increasingly stresses magnetic field lines over time, eventually warping them into a strained coil of energy. When enough tension builds, the coil expands and pinches off into a massive bubble of twisted magnetic fields — and without warning — the stealth coronal mass ejection quietly leaves the sun.” (source)

Solar flares and CMEs are just two rather spectacular macroscopic phenomena where field topology has non-trivial causal effects. But in fact there is a whole zoo of distinct non-trivial topological effects with causal implications, such as: how the topology of the Möbius strip can constrain optical resonant modes, twisted topological defects in nematic liquid crystals making some images impossible, the topology of eddy currents being recruited for shock absorption (aka “magnetic braking”), the Meissner–Ochsenfeld effect and flux pinning enabling magnetic levitation, Skyrmion bundles having potential applications for storing information in spintronic devices, and so on.

(source)

In brief, topological structures in the fields of physics can pave the way for us to identify the natural units that correspond to “moments of experience”. They are frame-invariant and causally significant, and as such they “carve nature at its joints” while being useful from the point of view of natural selection.

Can a Topological Pocket “Know Itself”?

Now the most interesting question arises. How does a topological pocket “know itself”? How can it act as a frame of reference for itself? How can it represent information about its environment if it does not have direct access to it? Well, this is in fact a very interesting area of research. Namely, how do you get the inside of a system with a clear and definite boundary to model its environment despite having only the information accessible at its boundary and the resources contained within it? This is a problem that evolution has dealt with for over a billion years (last time I checked). And fascinatingly, it is also the subject of study of Active Inference and the Free Energy Principle, whose math, I believe, can be imported to the domain of *topological* boundaries in fields (cf. Markov Boundary).
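The screening-off property that a Markov boundary provides can be illustrated with a toy chain in Python (a sketch with made-up numbers; this is the standard conditional-independence property from probability theory, not the actual math of our paper). External states influence internal states only through blanket states, so conditioning on the blanket renders interior and exterior statistically independent:

```python
import itertools

# Toy joint distribution over (external, blanket, internal), each binary,
# structured as a chain e -> b -> i. The probabilities are made up purely
# for illustration.
p_e = {0: 0.6, 1: 0.4}
p_b_given_e = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.3, 1: 0.7}}
p_i_given_b = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.25, 1: 0.75}}

def joint(e, b, i):
    return p_e[e] * p_b_given_e[e][b] * p_i_given_b[b][i]

def cond(i, e, b):
    """P(internal=i, external=e | blanket=b) by brute-force enumeration."""
    p_b = sum(joint(e2, b, i2) for e2, i2 in itertools.product((0, 1), repeat=2))
    return joint(e, b, i) / p_b

def marg_i(i, b):  # P(internal=i | blanket=b)
    return sum(cond(i, e, b) for e in (0, 1))

def marg_e(e, b):  # P(external=e | blanket=b)
    return sum(cond(i, e, b) for i in (0, 1))

# Check: given the blanket state, interior and exterior factorize.
for e, b, i in itertools.product((0, 1), repeat=3):
    assert abs(cond(i, e, b) - marg_i(i, b) * marg_e(e, b)) < 1e-12
print("interior independent of exterior, given the blanket: holds for all states")
```

Everything the interior can “know” about the exterior is mediated by the boundary states, which is precisely the structure the Free Energy Principle exploits, and which the topological-pocket picture would relocate to boundaries in fields.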

Here is where qualia computing, attention and awareness, non-linear waves, self-organizing principles, and even optics become extremely relevant. Namely, we are talking about how the *interior shape* of a field could be used in the context of life. Of course the cell walls of even primitive cells are functionally (albeit perhaps not ontologically) a kind of objective and causally significant boundary where this applies. It is enormously adaptive for the cell to use its interior, somehow, to represent its environment (or at least relevant features thereof) in order to navigate, find food, avoid danger, and reproduce.

The situation becomes significantly more intricate when considering highly complex and “evolved” animals such as humans, which encompass numerous additional layers. A single moment of experience cannot be directly equated to a cell, as it does not function as a persistent topological boundary tasked with overseeing the replication of the entire organism. Instead, a moment of experience assumes a considerably more specific role. It acts as an exceptionally specialized topological niche within a vast network of transient, interconnected topological niches—often intricately nested and interwoven. Together, they form an immense structure equipped with the capability to replicate itself. Consequently, the Darwinian evolutionary dynamics of experiences operate on multiple levels. At the most fundamental level, experiences must be selected for their ability to competitively thrive in their immediate micro-environment. Simultaneously, at the broadest level, they must contribute valuable information processing functions that ultimately enhance the inclusive fitness of the entire organism. All the while, our experiences must seamlessly align and “fit well” across all the intermediary levels.

Visual metaphor for how myriad topological pockets in the brain could briefly fuse and become a single one, and then dissolve back into a multitude.

The way this is accomplished is by, in a way, “convincing the experience that it is the organism”. I know this sounds crazy. But ask yourself. Are you a person or an experience? Or neither? Think deeply about Empty Individualism and come back to this question. I reckon that you will find that when you identify with a moment of experience, it turns out that you are an experience *shaped* in the form of the necessary survival needs and reproductive opportunities that a very long-lived organism requires. The organism is fleetingly creating *you* for computational purposes. It’s weird, isn’t it?

The situation is complicated by the fact that it seems that the computational properties of topological pockets of qualia involve topological operations, such as fusion, fission, and the use of all kinds of internal boundaries. Moreover, the content of a particular experience leaves an imprint in the organism which can be picked up by the next experience. So what happens here is that when you pay really close attention, and you whisper to your mind, “who am I?”, the direct experiential answer will in fact be a slightly distorted version of the truth. And that is because you (a) are always changing and (b) can only use the shape of the previous experience(s) to fill the intentional content of your current experience. Hence, you cannot, at least not under normal circumstances, *really* turn awareness to itself and *be* a topological pocket that “knows itself”. For one, there is a finite speed of information propagation across the many topological pockets that ultimately feed into the central one. So, at any given point in time, there are regions of your experience of which you are *aware* but which you are not attending to.

This brings us to the special case. Can an experience be shaped in such a way that it attends to itself fully, rather than attend to parts of itself which contain information about the state of predecessor topological pockets? I don’t know, but I have a strong hunch that the answer is yes and that this is what a meditative cessation does. Namely, it is a particular configuration of the field where attention is perfectly, homogeneously, distributed throughout in such a way that absolutely nothing breaks the symmetry and the experience “knows itself fully” but lacks any room left to pass it on to the successor pockets. It is a bittersweet situation, really. But I also think that cessations, and indeed moments of very homogeneously distributed attention, are healing for the organism, and even, shall we say, for the soul. And that is because they are moments of complete relief from the discomfort of symmetry breaking of any sort. They teach you about how our world simulation is put together. And intellectually, they are especially fascinating because they may be the one special case in which the referent of an experience is exactly, directly, itself.

To be continued…


Acknowledgements

I am deeply grateful and extend my thanks to Chris Percy for his remarkable contributions and steadfast dedication to this field. His exceptional work has been instrumental in advancing QRI’s ideas within the academic realm. I also want to express my sincere appreciation to Michael Johnson and David Pearce for our enriching philosophical journey together. Our countless discussions on the causal properties of phenomenal binding and the temporal depth of experience have been truly illuminating. A special shout-out to Cube Flipper, Atai Barkai, Dan Girshovic, Nir Lahav, Creon Levit, and Bijan Fakhri for their recent insightful discussions and collaborative efforts in this area. Hunter, Maggie, Anders (RIP), and Marcin, for your exceptional help. Huge gratitude to our donors. And, of course, a big thank you to the vibrant “qualia community” for your unwavering support, kindness, and encouragement in pursuing this and other crucial research endeavors. Your love and care have been a constant source of motivation. Thank you so much!!!

Digital Computers Will Remain Unconscious Until They Recruit Physical Fields for Holistic Computing Using Well-Defined Topological Boundaries

[Epistemic Status: written off the top of my head, thought about it for over a decade]

What do we desire for a theory of consciousness?

We want it to explain why and how the structure of our experience is computationally relevant. Why would nature bother to wire, not only information per se, but our experiences in richly structured ways that seem to track task-relevant computation (though at times in elusive ways)?

I think we can derive an explanation here that is both theoretically satisfying and literally mind-bending. It allows us to rule out vast classes of computing systems as having no more than computationally trivial conscious experiences.

TL;DR: We have richly textured bound experiences precisely because the boundaries that individuate us also allow us to act as individuals in many ways. This individual behavior can reflect features of the state of the entire organism in energy-efficient ways. Evolution can recruit this individual, yet holistic, behavior due to its computational advantages. We think that the boundary might be the result of topological segmentation in physical fields.


Marr’s Levels of Analysis and the Being/Form Boundary

One lens we can use to analyze the possibility of sentience in systems is the conceptual boundary between “being” and “form”. Here “being” refers to the interiority of things: their intrinsic likeness. “Form”, on the other hand, refers to how they appear from the outside. Where you place the being/form boundary influences how you make sense of the world around you. One factor that seems to be at play in where you place the being/form boundary is your implicit background assumptions about consciousness. In particular, how you think of consciousness in relation to Marr’s levels of analysis:

  • If you locate consciousness at the computational (or behavioral) level, then the being/form boundary might be computation/behavior. In other words, sentience simply is the performance of certain functions in certain contexts.
  • If you locate it at the algorithmic level, then the being/form boundary might become algorithm/computation. Meaning that what matters for the inside is the algorithm, whereas the outside (the form) is the function the algorithm produces.
  • And if you locate it at the implementation level, you will find that you identify being with specific physical situations (such as phases of matter and energy) and form as the algorithms that they can instantiate. In turn, the being/form boundary looks like crystals & bubbles & knots of matter and energy vs. how they can be used from the outside to perform functions for each other.

How you approach the question of whether a given chatbot is sentient will drastically depend on where you place the being/form boundary.


Many arguments against the sentience of particular computer systems are based on algorithmic inadequacy. This, for example, takes the form of choosing a current computational theory of mind (e.g. global workspace theory) and checking if the algorithm at play has the bare bones you’d expect a mind to have. This is a meaningful kind of analysis. And if you locate the being/form boundary at the algorithmic level then this is the only kind of analysis that seems to make sense.

What stops people from making successful arguments at the implementation level of analysis is confusion about the function of consciousness. Absent a clear function, which physical systems are or aren’t conscious seems to be inevitably an epiphenomenalist construct: drawing boundaries around systems with specific functions is an inherently fuzzy activity, and any criteria we choose for whether a system is performing a certain function will be at best a matter of degree (and opinion).

The way of thinking about phenomenal boundaries I’m presenting in this post will escape this trap.

But before we get there, it’s important to point out the usefulness of reasoning about the algorithmic layer:

Algorithmic Structuring as a Constraint

I think that most people who believe that digital sentience is possible will concede that at least in some situations The Chinese Room is not conscious. The extreme example is when the content of the Chinese Room turns out to be literally a lookup table. Here a simple algorithmic concern is sufficient to rule out its sentience: a lookup table does not have an inner state! And what you do, from the point of view of its inner workings, is the same no matter if you relabel which input goes with what output. Whatever is inscribed in the lookup table (with however many replies and responses as part of the next query) is not something that the lookup table structurally has access to! The lookup table is, in an algorithmic sense, blind to what it is and what it does*. It has no mirror into itself.
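The structural point about lookup tables can be made concrete with a toy sketch (the table contents and names here are purely illustrative, not from any real system): answering a query changes nothing inside the table, and consistently relabeling its symbols leaves its single internal operation, a key lookup, untouched.

```python
# A lookup-table "mind": its entire behavior is a frozen input -> output map.
# (The table contents are made up for illustration.)
def reply(table, query):
    return table.get(query, "...")

table = {"who are you?": "a mind", "what is 2+2?": "4"}

snapshot = dict(table)
answer = reply(table, "who are you?")
assert table == snapshot  # no inner state changed by "experiencing" the query

# Relabel every input via an arbitrary bijection: the inner operation (one key
# lookup) is identical. The table is structurally blind to what its symbols mean.
relabel = {q: f"token_{i}" for i, q in enumerate(table)}
relabeled = {relabel[q]: a for q, a in table.items()}
assert reply(relabeled, relabel["what is 2+2?"]) == "4"
```

Nothing in the table’s operation distinguishes meaningful symbols from arbitrary tokens, which is precisely the sense in which it “has no mirror into itself”.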

Algorithmic considerations are important. To not be a lookup table, a system must have at least some internal representations. We must consider constraints on “meaningful experience”, such as probably having at least some of, or something analogous to: a decent number of working memory slots (and types), a reasonably sized visual field, resolution of color in terms of Just Noticeable Differences, and so on. If your algorithm doesn’t even try to “render” its knowledge in some information-rich format, then it may lack the internal representations needed to really “understand”. Put another way: imagine that your experience is like a Holodeck. Ask what the lower bound is on the computational throughput of each sensory modality and their interrelationships. Then see if the algorithm you think can “understand” has internal representations of that kind at all.

Steel-manning algorithmic concerns involves taking a hard look at the number of degrees of freedom of our inner world-simulation (in e.g. free-wheeling hallucinations) and making sure that there are implicit or explicit internal representations with roughly similar computational horsepower as those sensory channels.

I think that this is actually an easy constraint to meet relative to the challenge of actually creating sentient machines. But it’s a bare minimum. You can’t let yourself be fooled by a lookup table.

In practice, AI researchers will just care about metrics like accuracy, meaning that they will use algorithmic systems with complex internal representations like ours only if it computationally pays off to do so! (Hanson in Age of Em makes the bet that it is worth simulating a whole high-performing human’s experience; Scott points out we’d all be on super-amphetamines). Me? I’m extremely skeptical that our current mindstates are algorithmically (or even thermodynamically!) optimal for maximally efficient work. But even if normal human consciousness, or anything remotely like it, were such a global optimum that any other big computational task routes through it as an instrumental goal, I still think we would need to check whether the algorithm does in fact create adequate internal representations before we assign sentience to it.

Thankfully I don’t think we need to go there. I think that the most crucial consideration is that we can rule out a huge class of computing systems ever being conscious by identifying implementation-level constraints for bound experiences. Forget about the algorithmic level altogether for a moment. If your computing system cannot build a bound experience from the bottom up in such a way that it has meaningful holistic behavior, then no matter what you program into it, you will only have “mind dust” at best.

What We Want: Meaningful Boundaries

In order to solve the boundary problem we want to find “natural” boundaries in the world to scaffold off of. We take on the starting assumption that the universe is a gigantic “field of consciousness”, and the question of how atoms come together to form experiences becomes the question of how this field becomes individuated into experiences like ours. So we need to find out how boundaries arise in this field. But these are not just any boundaries: they must be objective, frame-invariant, causally significant, and computationally useful. That is, boundaries you can do things with. Boundaries that explain why we are individuals and why creating individual bound experiences was evolutionarily adaptive; not only why it is merely possible but also why it is advantageous.

My claim is that boundaries with such properties are possible, and indeed might explain a wide range of puzzles in psychology and neuroscience. The full conceptually satisfying explanation results from considering two interrelated claims and understanding what they entail together. The two interrelated claims are:

(1) Topological boundaries are frame-invariant and objective features of physics

(2) Such boundaries are causally significant and offer potential computational benefits

I think that these two claims combined have the potential to explain the phenomenal binding/boundary problem (of course assuming you are on board with the universe being a field of consciousness). They also explain why evolution was even capable of recruiting bound experiences for anything. Namely, that the same mechanism that logically entails individuation (topological boundaries) also has mathematical features useful for computation (examples given below). Our individual perspectives on the cosmos are the result of such individuality being a wrinkle in consciousness (so to speak) having non-trivial computational power.

In technical terms, I argue that a satisfactory solution to the boundary problem (1) avoids strong emergence, (2) sidesteps the hard problem of consciousness, (3) prevents the complication of epiphenomenalism, and (4) is compatible with the modern scientific world picture.

And the technical reason why topological segmentation provides the solution is that with it: (1) no strong emergence is required because behavioral holism is only weakly emergent on the laws of physics, (2) we sidestep the hard problem via panpsychism, (3) phenomenal binding is not epiphenomenal because the topological segments have holistic causal effects (such that evolution would have a reason to select for them), and (4) we build on top of the laws of physics rather than introduce new clauses to account for what happens in the nervous system. In this post you’ll get a general walkthrough of the solution. The fully rigorous, step by step, line of argumentation will be presented elsewhere. Please see the video for the detailed breakdown of alternative solutions to the binding/boundary problem and why they don’t work.

Holistic (Field) Computing

A very important move that we can make in order to explore this space is to ask ourselves if the way we think about a concept is overly restrictive. In the case of computation, I would claim that the concept is either applied extremely vaguely or that making it rigorous makes its application so narrow that it loses relevance. In the former case we have the tendency for people to equate consciousness with computation at a very abstract level (such as “resource gathering” and “making predictions” and “learning from mistakes”). In the latter we have cases where computation is defined in terms of computable functions. The conceptual mistake to avoid is to think that just because you can compute a function with a Turing machine, you are therefore creating the same inner (bound or not) physical states along the way. And while yes, it would be possible to approximate the field behavior we will discuss below with a Turing machine, it would be computationally inefficient (as it would need to simulate a massively parallel system) and would lack the bound inner states (and their computational speedups) needed for sentience.

The (conceptual engineering) move I’m suggesting we make is to first of all enrich our conception of computation. To notice that we’ve lived with an impoverished notion all along.

I suggest that our conception of computation needs to be broad enough to include bound states as possible meaningful inputs, internal steps and representations, and outputs. This enriched conception of computation would be capable of making sense of computing systems that work with very unusual inputs and outputs. For instance, it has no problem thinking of a computer that takes as input chaotic superfluid helium and returns soap bubble clusters as outputs. The reason to use such exotic medium is not to add extra steps, but in fact to remove extra steps by letting physics do the hard work for you.

(source)

To illustrate just one example of what you can do with this enriched paradigm of computing I am trying to present to you, let’s now consider the hidden computational power of soap films. Say that you want to connect three poles with a wire. And you want to minimize how much wire you use. One option is to use trigonometry and linear algebra, another one is to use numerical simulations. But an elegant alternative is to create a model of the poles between two parallel planes and then submerge the structure in soapy water.

Letting the natural energy-minimizing property of soap bubbles find the shortest connection between three poles is an interesting way of performing a computation. It is uniquely adapted to the problem without needing tweaks or adjustments – the self-organizing principle will work the same (within reason) wherever you place the poles. You are deriving computational power from physics in a very customized way that nonetheless requires no tuning or external memory. And it’s all done simply by each point of the surface wanting to minimize its tension. Any non-minimal configuration will have potential energy, which then gets transformed into kinetic energy and makes it wobble, and as it wobbles it radiates out its excess energy until it reaches a configuration where it doesn’t wobble anymore. So you have to make the solution of your problem precisely a non-wobbly state!
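A minimal sketch of this “let physics minimize the energy for you” idea, with plain gradient descent standing in for the soap film’s relaxation (the pole positions, starting point, and step sizes are arbitrary illustrative choices):

```python
import math

# Three "poles" arranged as an equilateral triangle. The soap film finds the
# point minimizing total wire length (the Fermat point) purely through local
# surface-tension minimization; gradient descent imitates that relaxation.
poles = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]

def total_length(x, y):
    return sum(math.hypot(x - px, y - py) for px, py in poles)

x, y = 0.2, 0.1  # arbitrary starting guess
for _ in range(5000):
    eps, step = 1e-6, 1e-3
    # Numerical gradient of the total wire length
    gx = (total_length(x + eps, y) - total_length(x - eps, y)) / (2 * eps)
    gy = (total_length(x, y + eps) - total_length(x, y - eps)) / (2 * eps)
    x, y = x - step * gx, y - step * gy

# For an equilateral triangle the Fermat point coincides with the centroid.
fermat = (0.5, math.sqrt(3) / 6)
```

The film does all of this “in parallel and for free”: every surface point reduces its local tension, and the global minimum-length configuration falls out.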

In this way of thinking about computation, an intrinsic part of the question about what kind of thing a computation is will depend on what physical processes were utilized to implement it. In essence, we can (and I think should) enrich our very conception of computation to include what kind of internal bound states the system is utilizing, and the extent to which the holistic physical effects of such inner states are computationally trivial or significant.

We can call this paradigm of computing “Holistic Computing”.

From Soap Bubbles to Ising-Solvers Meeting Schedulers Implemented with Lasers

Let’s make a huge jump from soap-water-based computation. A much more general case that is nonetheless in the same family as using soap bubbles for compute is having a way to efficiently solve the Ising problem. In particular, an analog physics-based annealing method comes with unique computational benefits here: it turns out that non-linear optics can do this very efficiently. You are, in a certain way, using the universe’s very frustration with the problem (don’t worry, I don’t think it suffers) to get it solved. Here is an amazing recent example: Ising Machines: Non-Von Neumann Computing with Nonlinear Optics – Alireza Marandi – 6/7/2019 (presented at Caltech).
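To make the Ising framing concrete, here is a toy software annealer (the couplings, temperatures, and schedule are arbitrary illustrative choices). A real optical Ising machine relaxes physically rather than step by step, but the sketch shows the kind of energy-minimization problem being handed to physics:

```python
import math
import random

# Toy Ising annealer on a 16-spin ring with energy E = -sum_{<ij>} J_ij s_i s_j.
# A physical annealer (e.g. nonlinear optics) relaxes to low-energy states "for
# free"; Metropolis updates with a cooling schedule imitate that relaxation.
random.seed(0)
n = 16
J = {(i, (i + 1) % n): 1.0 for i in range(n)}  # ferromagnetic ring couplings

def energy(s):
    return -sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())

spins = [random.choice([-1, 1]) for _ in range(n)]
E, T = energy(spins), 2.0
for _ in range(4000):
    i = random.randrange(n)
    spins[i] *= -1                      # propose a single-spin flip
    E_new = energy(spins)
    if E_new <= E or random.random() < math.exp(-(E_new - E) / T):
        E = E_new                       # accept the flip
    else:
        spins[i] *= -1                  # reject: undo the flip
    T = max(0.01, T * 0.999)            # cool down gradually

# Ground state: all spins aligned, E = -16. Annealing should land near it.
```

The interesting engineering question then becomes how to encode your actual problem (scheduling, graph cuts, etc.) into the couplings `J` so that the physical relaxation does the solving.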

The person who introduces Marandi in the video above is Kwabena Boahen, with whom I had the honor to take his course at Stanford (and play with the neurogrid!). Back in 2012 something like the neurogrid seemed like the obvious path to AGI. Today, ironically, people imagine scaling transformers is all you need. Tomorrow, we’ll recognize the importance of holistic field behavior and the boundary problem.

One way to get there on the computer science front will be by first demonstrating a niche set of applications where, e.g., non-linear-optics Ising solvers vastly outperform GPUs for energy minimization tasks in random graphs. But as the unique computational benefits become better understood, we will sooner or later switch from thinking about how to solve our particular problem, to thinking about how we can cast our particular problem as an Ising/energy-minimization problem so that physics solves it for us. It’s like having a powerful computer that only speaks a very specific alien language. If you can translate your problem into its own terms, it’ll solve it at lightning speed. If you can’t, it will be completely useless.

Intelligence: Collecting and Applying Self-Organizing Principles

This takes us to the question of whether general intelligence is possible without switching to a Holistic Computing paradigm. Can you have generally intelligent (digital) chatbots? In some senses, yes. In perhaps the most significant sense, no.

Intelligence is a contentious topic (see here David Pearce’s helpful breakdown of 6 of its facets). One particular facet of intelligence that I find enormously fascinating and largely under-explored is the ability to make sense of new modes of consciousness and then recruit them for computational and aesthetic purposes. THC and music production have a long history of synergy, for instance. A composer who successfully uses THC to generate musical ideas others find novel and meaningful is applying this sort of intelligence. THC-induced states of consciousness are largely dysfunctional for a lot of tasks. But someone who utilizes the sort of intelligence (or meta-intelligence) I’m pointing to will pay attention to the features of experience that do have some novel use and lean on those. THC might impair working memory, but it also expands and stretches musical space. Intensifies reverb, softens rough edges in heart notes, increases emotional range, and adds synesthetic brown noise (which can enhance stochastic resonance). With wit and determination (and co-morbid THC/music addiction), musical artists exploit the oddities of THC musicality to great effect, arguably some much more successfully than others.

The kind of reframe that I’d like you to consider is that we are all in fact something akin to these stoner musicians. We were born with this qualia resonator with lots of cavities, kinds of waves, levels of coupling, and so on. And it took years for us to train it to make adaptive representations of the environment. Along the way, we all (typically) develop a huge repertoire of self-organizing principles we deploy to render what we believe is happening out there in the world. The reason why an experience of “meditation on the wetness of water” can be incredibly powerful is not because you are literally tuning into the resonant frequency of the water around you and in you. No, it’s something very different. You are creating the conditions for the self-organizing principle that we already use to render our experiences with water to take over as the primary organizer of our experience. Since this self-organizing principle does not, by its nature, generate a center, full absorption into “water consciousness” also has a no-self quality to it. Same with the other elements. Excitingly, this way of thinking also opens up our mind about how to craft meditations from first principles. Namely, by creating a periodic table of self-organizing principles and then systematically trying combinations until we identify the laws of qualia chemistry.

You have to come to realize that your brain’s relationship with self-organizing principles is like that of a Pokémon trainer and his Pokémon (ideally in a situation where Pokémon play the Glass Bead Game with each other rather than try to hurt each other– more on that later). Or perhaps like that of a mathematician and clever tricks for proofs, or a musician and rhythmic patterns, and so on. Your brain is a highly tamed inner space qualia warp drive usually working at 1% or less. It has stores of finely balanced and calibrated self-organizing principles that will generate the right atmospheric change to your experience at the drop of a hat. We are usually unaware of how many moods, personalities, contexts, and feelings of the passage of time there are – your brain tries to learn them all so it has them in store for whenever needed. All of a sudden: haze and rain, unfathomable wind, mercury resting motionless. What kind of qualia chemistry did your brain just use to try to render those concepts?

We are using features of consciousness -and the self-organizing principles it affords- to solve problems all the time without explicitly modeling this fact. In my conception of sentient intelligence, being able to recruit self-organizing principles of consciousness for meaningful computation is a pillar of any meaningfully intelligent mind. I think that largely this is what we are doing when humans become extremely good at something (from balancing discs to playing chess and empathizing with each other). We are creating very specialized qualia by finding the right self-organizing principles and then purifying/increasing their quality. To do an excellent modern day job that demands constraint satisfaction at multiple levels of analysis at once likely requires us to form something akin to High-Entropy Alloys of Consciousness. That is, we are usually a judiciously chosen mixture of many self-organizing principles balanced just right to produce a particular niche effect.

Meta-Intelligence

David Pearce’s conception of Full-spectrum Superintelligence is inspiring because it takes into account the state-space of consciousness (and what matters) in judging the quality of a certain intelligence in addition to more traditional metrics. Indeed, as another key conceptual engineering move, I suggest that we can and need to enrich our conception of intelligence in addition to our conception of computation.

So here is my attempt at enriching it further and adding another perspective. One way we can think of intelligence is as the ability to map a problem to a self-organizing principle that will “solve it for you” and having the capacity to instantiate that self-organizing principle. In other words, intelligence is, at least partly, about efficiency: you are successful to the extent that you can take a task that would generally require a large number of manual operations (which take time, effort, and are error-prone) and solve it in an “embodied” way.

Ultimately, a complex system like the one we use for empathy mixes both serial and parallel self-organizing principles for computation. Empathy is enormously cognitively demanding rather than merely a personality trait (e.g. agreeableness), as it requires a complex mirroring capacity that stores and processes information in efficient ways. Exploring exotic states of consciousness is even more computationally demanding. Both are error-prone.

Succinctly, I suggest we consider:

One key facet of intelligence is the capacity to solve problems by breaking them down into two distinct subproblems: (1) find a suitable self-organizing principle you can instantiate reliably, and (2) find out how to translate your problem into a format that this self-organizing principle can be pointed at, so that it solves it for you.

Here is a concrete example. If you want to disentangle a wire, you can try to first put it into a discrete data structure like a graph, and then get the skeleton of the knot in a way that allows you to simplify it with Reidemeister moves (and get lost in the algorithmic complexity of the task). Or you could simply follow the lead of Yu et al. 2021 and make the surfaces repulsive, letting this principle solve the problem for you.

(source)

These repulsion-based disentanglement algorithms are explained in this video. Importantly, how to do this effectively still needs fine-tuning. The method they ended up using was much faster than the (many) other ones they tried (a Full-Spectrum Superintelligence would be able to “wiggle” the wires a bit if they got stuck, of course):

(source)
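As a toy illustration of the repulsion principle (not the actual method of Yu et al. 2021, which operates on curve segments with careful contact handling and time-stepping), here is a sketch where points that are too close simply push each other apart until a clearance threshold is met; all constants are arbitrary illustrative choices:

```python
import math

# Points closer than CLEARANCE repel via a capped inverse-square push;
# everything else stays put. The "solution" is whatever static configuration
# remains once no pair is in conflict -- the non-wobbly state.
pts = [(0.0, 0.0), (0.05, 0.0), (0.0, 0.05), (1.0, 1.0)]
CLEARANCE = 0.5

def min_gap(points):
    return min(math.dist(p, q)
               for i, p in enumerate(points) for q in points[i + 1:])

for _ in range(200):
    forces = [[0.0, 0.0] for _ in pts]
    for i, (xi, yi) in enumerate(pts):
        for j, (xj, yj) in enumerate(pts):
            if i == j:
                continue
            dx, dy = xi - xj, yi - yj
            d = math.hypot(dx, dy)
            if 0 < d < CLEARANCE:           # short-range repulsion only
                f = min(0.01 / d**2, 0.05)  # capped inverse-square push
                forces[i][0] += f * dx / d
                forces[i][1] += f * dy / d
    pts = [(x + fx, y + fy) for (x, y), (fx, fy) in zip(pts, forces)]
```

The clustered points spread until every pairwise gap exceeds the threshold, while the far-away point never feels a force; the equilibrium is found by the dynamics rather than by explicit search.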

This is hopefully giving you new ways of thinking about computation and intelligence. The key point to realize is that these concepts are not set in stone, and that our current versions of them may to a large extent limit our thinking about sentience and intelligence.

Now, I don’t believe that simulating a self-organizing principle of this sort will get you a conscious mind. The whole point of using physics to solve your problem is that in some cases you get better performance than algorithmically representing a physical system and then using that simulation to instantiate self-organizing principles. Moreover, physics simulations, to the extent that they are implemented in classical computers, will fail to generate the same field boundaries that would arise in the physical system. To note, physics-inspired simulations like Yu et al. 2021 are nonetheless enormously helpful to illustrate how to think of problem-solving with a massively parallel analog system.

Are Neural Cellular Automata Conscious?

The computational success of Neural Cellular Automata is primarily algorithmic. In essence, digitally implemented NCA are exploring a paradigm of selection and amplification of self-organizing principles, which is indeed a very different way of thinking about computation. But critically any NCA will still lack sentience. The main reasons are that they (a) don’t use physical fields with weak downward causation, and (b) don’t have a mechanism for binding/boundary making. Digitally-implemented cellular automata may have complex emergent behavior, but they generate no meaningful boundaries (i.e. objective, frame-invariant, causally-significant, and computationally-useful). That said, the computational aesthetic of NCA can be fruitfully imported to the study of Holistic Field Computing, in that the techniques for selecting and amplifying self-organizing principles already solved for NCAs may have analogues in how the brain recruits physical self-organizing principles for computation.

Exotic States of Consciousness

Perhaps one of the most compelling demonstrations of the possible zoo (or jungle) of self-organizing principles out of which your brain is recruiting but a tiny narrow range is to pay close attention to a DMT trip.

DMT states of consciousness are computationally non-trivial on many fronts. It is difficult to convey how enriched the set of experiential building blocks becomes in such states. Their scientific significance is hard to overstate. Importantly, the bulk of the computational power on DMT is dedicated to trying to make the experience feel good and not feel bad. The complexity involved in this task is often overwhelming. But one could envision a DMT-like state in which some parameters have been stabilized in order to recruit standardized self-organizing principles available only in a specific region of the energy-information landscape. I think that cataloguing the precise mathematical properties of the dynamics of attention and awareness on DMT will turn out to have enormous *computational* value. And a lot of this computational value will generally be pointed towards aesthetic goals.

To give you a hint of what I’m talking about: A useful QRI model (indeed, algorithmic reduction) of the phenomenology of DMT is that it (a) activates high-frequency metronomes that shake your experience and energize it with a high-frequency vibe, and (b) generates a new medium of wave propagation that allows very disparate parts of one’s experience to interact with one another.

3D Space Group (CEV on low dose DMT)

At a sufficient dose, DMT’s secondary effect also makes your experience feel sort of “wet” and “saturated”. Your whole being can feel mercurial and liquidy (cf: Plasmatis and Jim Jam). A friend speculates that’s what it’s like for an experience to be one where everything is touching everything else (all at once).

There are many Indra’s Net-type experiences in this space. In brief, experiences where “each part reflects every other part” are an energy minimum that also reduces prediction errors. And there is a fascinating non-trivial connection with the Free Energy Principle, where experiences that minimize internal prediction errors may display a lot of self-similarity.

To a first approximation, I posit that the complex geometry of DMT experiences is indeed the set of non-linearities of the DMT-induced wave-propagation medium that appear when it is sufficiently energized (so that it transitions from the linear to the non-linear regime). In other words, the complex hallucinations are energized patterns of non-linear resonance trying to radiate out their excess energy. Indeed, as you come down you experience the phenomenon of condensation of shapes of qualia.

Now, we currently don’t know what computational problems this uncharted cornucopia of self-organizing principles could solve efficiently. The situation is analogous to that of the Ising solver discussed above: we have an incredibly powerful alien computer that will do wonders if we can speak its language, and nothing useful otherwise. Yes, DMT is an alien computer in search of a problem that fits its technical requirements.

Vibe-To-Shape-And-Back

Michael Johnson, Selen Atasoy, and Steven Lehar all have shaped my thinking about resonance in the nervous system. Steven Lehar in particular brought to my attention non-linear resonance as a principle of computation. In essays like The Constructive Aspect of Visual Perception he presents a lot of visual illusions for which non-linear resonance works as a general explanatory principle (and then in The Grand Illusion he reveals how his insights were informed by psychonautic exploration).

One of the cool phenomenological observations Lehar made based on his exploration with DXM was that each phenomenal object has its own resonant frequency. In particular, each object is constructed with waves interfering with each other at a high-enough energy that they bounce off each other (i.e. are non-linear). The relative vibration of the phenomenal objects is a function of the frequencies of resonance of the waves of energy bouncing off each other that are constructing the objects.

In this way, we can start to see how a “vibe” can be attributed to a particular phenomenal object. In essence, long intervals will create lower resonant frequencies. And if you combine this insight with QRI paradigms, you see how the vibe of an experience can modulate the valence (e.g. soft ADSR envelopes and consonance feeling pleasant, for instance). Indeed, on DMT you get to experience the high-dimensional version of music theory, where the valence of a scene is a function of the crazy-complex network of pairwise interactions between phenomenal objects with specific vibratory characteristics. Give thanks to annealing because tuning this manually would be a nightmare.

But then there is the “global” vibe…

Topological Pockets

So far I’ve provided examples of how Holistic Computing enriches our conception of intelligence and computing, and how it even shows up in our experience. But what I’ve yet to do is connect this with meaningful boundaries, as we set out to do. In particular, I haven’t explained why Holistic Computing would arise out of topological boundaries.

For the purpose of this essay I’m defining a topological segment (or pocket) as a region that can’t be expanded further without the following property becoming false: every point in the region locally belongs to the same connected space.
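To make this definition concrete with a toy model (my own sketch, not from the QRI paper): if we discretize a field into grid cells and mark which cells are “in” the region, the pockets are just the connected components, i.e. maximal sets of cells that can reach one another without leaving the region.

```python
# Toy model: topological "pockets" as connected components of a 2D grid.
# All names here are illustrative, not from the paper.

def pockets(grid):
    """Return the connected components (4-neighborhood) of truthy cells."""
    rows, cols = len(grid), len(grid[0])
    seen, components = set(), []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and (r, c) not in seen:
                # Flood fill: grow the region until it can't be expanded
                # further -- the defining property of a pocket.
                stack, comp = [(r, c)], set()
                while stack:
                    y, x = stack.pop()
                    if (y, x) in seen:
                        continue
                    seen.add((y, x))
                    comp.add((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols and grid[ny][nx]:
                            stack.append((ny, nx))
                components.append(comp)
    return components

# Two blobs that touch only diagonally form two separate pockets:
grid = [[1, 1, 0],
        [1, 0, 0],
        [0, 0, 1]]
print(len(pockets(grid)))  # 2
```

In practice a library routine such as `scipy.ndimage.label` does the same job; the point of spelling it out is that pocket membership is an objective, global fact about the region, not a matter of labeling convention.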

The Balloons’ Case

In the case of balloons this cashes out as: a topological segment is one where each point can go to any other point without having to go through connector points/lines/planes. It’s essentially the set of contiguous surfaces.

Now, each of these pockets can have both a rich set of connections to other pockets as well as intricate internal boundaries. The way we can justify Holistic Computing being relevant here is that topological pockets trap energy, and thus allow the pocket to vibrate in ways that express a lot of holistic information. Each contiguous surface makes a sound that represents its entire shape, and thus behaves as a unit in at least this way.
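The claim that each contiguous surface “makes a sound that represents its entire shape” can be given a minimal numerical illustration (my own toy sketch, with all names my own): discretize a region into grid cells, build its graph Laplacian, and note that the resulting spectrum of vibrational modes depends on the global shape of the region, not on any local patch alone.

```python
import numpy as np

def laplacian_spectrum(cells):
    """Vibrational modes (graph Laplacian eigenvalues) of a set of grid
    cells. Every eigenvalue is a global property: deform the region
    anywhere and, generically, the entire spectrum shifts."""
    cells = sorted(cells)
    index = {c: i for i, c in enumerate(cells)}
    L = np.zeros((len(cells), len(cells)))
    for (y, x) in cells:
        i = index[(y, x)]
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            j = index.get((y + dy, x + dx))
            if j is not None:
                L[i, i] += 1   # degree term
                L[i, j] -= 1   # adjacency term
    return np.linalg.eigvalsh(L)  # ascending; lowest mode is always 0

# Same number of cells, different shapes -> different "sounds":
bar = {(0, x) for x in range(4)}                        # 1x4 strip
square = {(y, x) for y in range(2) for x in range(2)}   # 2x2 block
print(np.allclose(laplacian_spectrum(bar), laplacian_spectrum(square)))  # False
```

This is the discrete analogue of “hearing the shape of a drum”: the spectrum acts as a holistic signature of the pocket, which is the sense in which it behaves as a unit.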

The General Case

An important note here is that I am not claiming that (a) all topological boundaries can be used for Holistic Computing, or that (b) to have Holistic Computing you need topological boundaries. Rather, I’m claiming that the topological segmentation responsible for individuating experiences does have applications for Holistic Computing, that this makes conceptual sense, and that it is why evolution bothered to make us conscious. In the general case, though, you probably get quite a bit of both: Holistic Computing without topological segmentation, and vice versa. For example, an LC circuit can be used for Holistic Computing on the basis of its steady analog resonance, but I’m not sure it creates a topological pocket in the EM field per se.

At this stage of the research we don’t have a leading candidate for the precise topological feature of fields responsible for this. But the explanation space is promising because it can satisfy theoretical constraints that no other theory we know of can.

But I can nonetheless provide a proof of concept for how a topological pocket does come with really impactful holism. Let’s dive in!

Getting Holistic Behavior Out of a Topological Pocket

Creating a topological pocket may be consequential in one of several ways. One option for getting holistic behavior arises if you can “trap” energy in the pocket. As a consequence, you will energize its harmonics. The particular way the whole thing vibrates is a function of the entire shape at once. So from the inside, every patch now has information about the whole (namely, by the vibration it feels!).**


One possible overarching self-organizing principle that the entire pocket may implement is valence-gradient ascent. In particular, some configurations of the field are more pleasant than others, and this has to do with the complexity of the global vibe. Essentially, the reason no part of the field wants to be in a pocket with certain asymmetries is that those asymmetries make themselves known everywhere within the pocket through how the whole thing vibrates. Therefore, for the same reason a soap bubble can become spherical by each point on its surface locally minimizing tension, our experiences can become symmetrical and harmonious by each “point” in them trying to maximize its local valence.
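The soap-bubble analogy can be sketched numerically (my own toy model, not anything from the paper): a jagged closed loop in which every point follows a purely local rule, nudging itself toward the midpoint of its two neighbors, nonetheless relaxes toward a globally regular shape.

```python
import numpy as np

rng = np.random.default_rng(0)

# A closed loop of 40 points with a jagged, asymmetric starting shape.
angles = np.linspace(0, 2 * np.pi, 40, endpoint=False)
radii = 1.0 + 0.3 * rng.standard_normal(40)
pts = np.stack([radii * np.cos(angles), radii * np.sin(angles)], axis=1)

def roughness(p):
    """Relative spread of edge lengths: 0 for a perfectly regular polygon."""
    edges = np.linalg.norm(np.roll(p, -1, axis=0) - p, axis=1)
    return edges.std() / edges.mean()

before = roughness(pts)
for _ in range(200):
    # Purely local rule: each point nudges toward the midpoint of its two
    # neighbors (a discrete tension minimizer, like points on a soap film).
    midpoints = 0.5 * (np.roll(pts, 1, axis=0) + np.roll(pts, -1, axis=0))
    pts += 0.1 * (midpoints - pts)
after = roughness(pts)
print(after < before)  # True: local moves produced global regularity
```

The analogy is loose, of course; the point is only that a local rule whose cost is felt globally can produce global symmetry without any central controller.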

Self Mirroring

From Lehar’s Cartoon Epistemology

And here we arrive at perhaps one of the craziest but coolest aspects of Holistic Computing I’ve encountered. Essentially, if we go to the non-linear regime, then the whole vibe is not merely the weighted sum of the harmonics of the system. Rather, you might have waves interfere with each other in a concentrated fashion in the various cores/clusters, which in turn become non-linear structures that try to radiate out their energy. And to maximize valence there needs to be harmony between the energy coming in and out of these dense non-linearities. In our phenomenology this may point to our typical self-consciousness. In brief, we have an internal avatar that “reflects” the state of the whole! We are self-mirroring machines! Now this is really non-trivial (and non-linear) Holistic Computing.

Cut From the Same Fabric

So here is where we get to the crux of the insight. Namely, that weakly emergent topological changes can simultaneously have non-trivial causal/computational effects while also solving the boundary problem. We avoid strong emergence but still get a kind of ontological emergence: since consciousness is being cut out of one huge fabric of consciousness, we don’t ever need strong emergence in the form of “consciousness out of the blue all of a sudden”. What you have instead is a kind of ontological birth of an individual. The boundary legitimately created a new being, even if in a way the total amount of consciousness is the same. This is of course an outrageous claim (that you can get “individuals” by e.g. twisting the electric field in just the right way). But I believe the alternatives are far crazier once you understand what they entail.

In a Nutshell

To summarize, we can rule out any of the current computational systems implementing AI algorithms having anything but trivial consciousness. If there are topological pockets created by e.g. GPUs/TPUs, they are epiphenomenal – the system is designed so that only the local influences it has hardcoded can affect the behavior at each step.

The reason the brain is different is that it has open avenues for solving the boundary problem. In particular, a topological segmentation of the EM field would be a satisfying option, as it would simultaneously give us both holistic field behavior (computationally useful) and a genuine natural boundary. This extends the kind of model explored by Johnjoe McFadden (Conscious Electromagnetic Information Field) and Susan Pockett (Consciousness Is a Thing, Not a Process). They (rightfully) point out that the EM field can solve the binding problem; the boundary problem, however, then emerges. With topological boundaries, finally, you can get meaningful boundaries (objective, frame-invariant, causally significant, and computationally useful).

This conceptual framework both clarifies what kind of system is at minimum required for sentience, and also opens up a research paradigm for systematically exploring topological features of the fields of physics and their plausible use by the nervous system.


* See the “Self Mirroring” section to contrast the self-blindness of a lookup table and the self-awareness of sentient beings.

** More symmetrical shapes will tend to have more clean resonant modes. So to the extent that symmetry tracks fitness on some level (e.g. ability to shed off entropy), then quickly estimating the spectral complexity of an experience can tell you how far it is from global symmetry and possibly health (explanation inspired by: Johnson’s Symmetry Theory of Homeostatic Regulation).


See also:


Many thanks to Michael Johnson, David Pearce, Anders & Maggie, and Steven Lehar for many discussions about the boundary/binding problem. Thanks to Anders & Maggie and to Mike for discussions about valence in this context. And thanks to Mike for offering a steel-man of epiphenomenalism. Many thank yous to all our supporters! Much love!

Infinite bliss!

That Time Daniel Dennett Took 200 Micrograms of LSD (In Another Timeline)

[Epistemic status: fiction]

Andrew Zuckerman messaged me:

Daniel Dennett admits that he has never used psychedelics! What percentage of functionalists are psychedelic-naïve? What percentage of qualia formalists are psychedelic-naïve? In this 2019 quote, he talks about his drug experience and also alludes to meme hazards (though he may not use that term!):

Yes, you put it well. It’s risky to subject your brain and body to unusual substances and stimuli, but any new challenge may prove very enlightening–and possibly therapeutic. There is only a difference in degree between being bumped from depression by a gorgeous summer day and being cured of depression by ingesting a drug of one sort or another. I expect we’ll learn a great deal in the near future about the modulating power of psychedelics. I also expect that we’ll have some scientific martyrs along the way–people who bravely but rashly do things to themselves that disable their minds in very unfortunate ways. I know of a few such cases, and these have made me quite cautious about self-experimentation, since I’m quite content with the mind I have–though I wish I were a better mathematician. Aside from alcohol, caffeine, nicotine and cannabis (which has little effect on me, so I don’t bother with it), I have avoided the mind-changing options. No LSD, no psilocybin or mescaline, though I’ve often been offered them, and none of the “hard” drugs.

 

As a philosopher, I have always accepted the possibility that the Athenians were right: Socrates was quite capable of corrupting the minds of those with whom he had dialogue. I don’t think he did any clear lasting harm, but it is certainly possible for a philosopher to seriously confuse an interlocutor or reader—to the point of mental illness or suicide, or other destructive behavior. Ideas can be just as dangerous as drugs.

 

Dennett Explained by Brendan Fleig-Goldstein and Daniel A. Friedman (2019)


It would be quite fascinating to know what Dan would say about lived psychedelic states. With that in mind, here is an essay prompt originally conceived for GPT-3 to satisfy our curiosity:

And after seeing some surprising empirical results with his heterophenomenological methods when examining the experience of people on psychedelics, Daniel Dennett decided to experience it for himself by taking 200 micrograms of LSD. The first thing he said to himself as he felt the first indications of the come-up was…



Maggie and Anders

Maggie Wassinge and Anders Amelin (QRI Sweden volunteer coordinators and content contributors – see letters I & II, letters III, IV, V, & VI, and letters VII, VIII, & IX) decided to give it a go first. We shall then compare it with what GPT-3 can come up with at a later point.

In a way, what you are about to read can be thought of as Anders & Maggie’s simulation of a hyper-intelligent GPT-3-like system’s simulation of a hypothetical Daniel Dennett on LSD. So many layers, I know. But the results speak for themselves:


Dan says: Correlation is all that is needed. So what states a mind uses for modeling conditions of importance to it, is fundamentally arbitrary. Like with language. Words represent things to humans but there are many languages, and words do not resemble what they represent. They only correlate with what they represent, and the correlation needs to be learned. To do science about the mind, one does not need to attach any special weight to any conscious state of mind over any other. One person’s hope may be another person’s despair. These “qualia” are like internally projected hypo-languages which are different all over the map, if there were one. In fact there cannot be an interpersonal map of what things feel like personally. Only naïve people could hope for a fundamental linguistics of consciousness, as if these states were to constitute a universal language of some ground truth of the mind. Silly. Romantic. It’s all arbitrary. For the record, I never said subjective experiential states do not exist. People misrepresent me on that. What I am saying is that it will never be possible to scientifically map what the state that a word such as, for instance, “green”, translates to feels like by the brain of a given individual. My green could be your red.

Just drop the whole idea of trying to map the state-space of qualia. That is my position. Or at least I know it is, logically. Right now I begin to notice how everything intensifies and becomes somehow more salient. More revealingly “real”. As I reflect on the notion of how “states” correlate, a humorous episode from my undergraduate student life so long ago, is brought to the surface. At Wesleyan it was, where I was taking a course in Art Appreciation. The lecturer was showing a slide of a still life. A bowl of fruit it was, conspicuously over-ripe. Pointing at one of the fruits, saying “Can anyone tell me what state this peach is in?” There was silence for about three seconds, then one student exclaimed: “Georgia”. Everyone laughed joyfully. Except me. I never quite liked puns. Too plebeian. Sense of humor is arbitrary. I believe that episode helped convince me that the mind is not mysterious after all. It is just a form of evolved spaghetti code finding arbitrary solutions to common problems. Much like adaptations of biochemistry in various species of life. The basic building blocks remain fixed as an operative system if you will, but what is constructed with it is arbitrary and only shaped by fitness proxies. Which are, again, nothing but correlations. I realized then that I’d be able to explain consciousness within a materialist paradigm without any mention of spirituality or new realms of physics. All talk of such is nonsense.

I have to say, however, that a remarkable transformation inside my mind is taking place as a result of this drug. I notice the way I now find puns quite funny. Fascinating. I also reflect on the fact that I find it fascinating that I find puns funny. It’s as if… I hesitate to think it even to myself, but there seems to be some extraordinarily strong illusion that “funny” and “fascinating” are in fact those very qualia states which… which cannot possibly be arbitrary. Although the reality of it has got to be that when I feel funniness or fascination, those are brain activity patterns unique to myself, not possible for me to relate to any other creature in the universe experiencing them the same way, or at least not to any non-human species. Not a single one would feel the same, I’m sure. Consider a raven, for example. It’s a bird that behaves socially intricately, makes plans for the next day, can grasp how tools are used, and excels at many other mental tasks even sometimes surpassing a chimpanzee. Yet a raven has a last common ancestor with humans more than three hundred million years ago. The separate genetic happenstances of evolution since then, coupled with the miniaturization pressure due to weight limitations on a flying creature, means that if I were to dissect and anatomically compare the brain of a raven and a human, I’d be at a total loss. Does the bird even have a cerebral cortex?

An out of character thing is happening to me. I begin to feel as if it were in fact likely that a raven does sense conscious states of “funny” and “fascinating”. I still have functioning logic that tells me it must be impossible. Certainly, it’s an intelligent creature. A raven is conscious, probably. Maybe the drug makes me exaggerate even that, but it ought to have a high likelihood of being the case. But the states of experience in a raven’s mind must be totally alien if it were possible to compare them side by side with those of a human, which of course it is not. The bird might as well come from another planet.

The psychedelic drug is having an emotional effect on me. It does not twist my logic, though. This makes for internal conflict. Oppositional suggestions spontaneously present themselves. Could there be at least some qualia properties which are universal? Or is every aspect arbitrary? If the states of the subjective are not epiphenomenal, there would be evolutionary selection pressures shaping them. Logically there should be differences in computational efficiency when the information encoded in qualia feeds back into actions carried out by the body that the mind controls. Or is it epiphenomenal after all? Well, there’s the hard problem. No use pondering that. It’s a drug effect. It’ll wear off. Funny thing though, I feel very, very happy. I’m wondering about valence. It now appeals strongly to take the cognitive leap that at least the positive/negative “axis” of experience may in fact be universal. A modifier of all conscious states, a kind of transform function. Even alien states could then have a “good or bad” quality to them. Not directly related to the cognitive power of intelligences, but used as an efficient guidance for agency by them all, from the humblest mite to the wisest philosopher. Nah. Romanticizing. Anthropomorphizing.

Further into this “trip” now. Enjoying the ride. It’s not going to change my psyche permanently, so why not relax and let go? What if conscious mind states really do have different computational efficiency for various purposes? That would mean there is “ground truth” to be found about consciousness. But how does nature enable the process for “hitting” the efficient states? If that has been convergently perfected by evolution, conscious experience may be more universal than I used to take for granted. Without there being anything supernatural about it. Suppose the possibility space of all conscious states is very large, so that within it there is an ideally suited state for any mental task. No divine providence or intelligent design, just a law of large numbers.

The problem then is only a search algorithmic one, really. Suppose “fright” is a state ideally suited for avoiding danger. At least now, under the influence, fright strikes me as rather better for the purpose than attraction. Come to think of it, Toxoplasma Gondii has the ability to replace fright with attraction in mice with respect to cats. It works the same way in other mammals, too. Are things then not so arbitrarily organized in brains? Well, those are such basic states we’d share them with rodents presumably. Still can’t tell if fright feels like fear in a raven or octopus. But can it feel like attraction? Hmmm, these are just mind wanderings I go through while I wait for this drug to wear off. What’s the harm in it?

Suppose there is a most computationally efficient conscious state for a given mental task. I’d call that state the ground state of conscious intelligence with respect to that task. I’m thinking of it like mental physical chemistry. In that framework, a psychedelic drug would bring a mind to excited states. Those are states the mind has not practiced using for tasks it has learned to do before. The excited states can then be perceived as useless, for they perform worse at tasks one has previously become competent at while sober. Psychedelic states are excited with respect to previous mental tasks, but they would potentially be ground states for new tasks! It’s probably not initially evident exactly what those tasks are, but the great potential to in fact become more mentally able would be apparent to those who use psychedelics. Right now this stands out to me as absolutely crisp, clear and evident. And the sheer realness of the realization is earth-shaking. Too bad my career could not be improved by any new mental abilities.

Oh Spaghetti Monster, I’m really high now. I feel like the sober me is just so dull. Illusion, of course, but a wonderful one I’ll have to admit. My mind is taking off from the heavy drudgery of Earth and reaching into the heavens on the wings of Odin’s ravens, eternally open to new insights about life, the universe and everything. Seeking forever the question to the answer. I myself am the answer. Forty-two. I was born in nineteen forty two. The darkest year in human history. The year when Adolf Hitler looked unstoppable at destroying all human value in the entire world. Then I came into existence, and things started to improve.

It just struck me that a bird is a good example of embodied intelligence. Sensory input to the brain can produce lasting changes in the neural connectivity and so on, resulting in a saved mental map of that which precipitated the sensory input. Now, a bird has the advantage of flight. It can view things from the ground and from successively higher altitudes and remember the appearance of things on all these different scales. Plus it can move sideways large distances and find systematic changes over scales of horizontal navigation. Entire continents can be included in a bird’s area of potential interest. Continents and seasons. I’m curious if engineers will someday be able to copy the ability of birds into a flying robot. Maximizing computational efficiency. Human-level artificial intelligence I’m quite doubtful of, but maybe bird brains are within reach, though quite a challenge, too.

This GPT-3 system by OpenAI is pretty good for throwing up somewhat plausible suggestions for what someone might say in certain situations. Impressive for a purely lexical information processing system. It can be trained on pretty much any language. I wonder if it could become useful for formalizing those qualia ground states? The system itself is not an intelligence in the agency sense but it is a good predictor of states. Suppose it can model the way the mind of the bird cycles through all those mental maps the bird brain has in memory. Where the zooming in and out on different scales brings out different visual patterns. If aspects of patterns from one zoom level is combined with aspect from another zoom level, the result can be a smart conclusion about where and when to set off in what direction and with what aim. Then there can be combinations also with horizontally displaced maps and time-displaced maps. Essentially, to a computer scientist we are talking massively parallel processing through cycles of information compression and expansion with successive approximation combinations of pattern pieces from the various levels in rapid repetition until something leads to an action which becomes rewarded via a utility function maximization.


Axioms of Integrated Information Theory (IIT)

Thank goodness I’m keeping all this drugged handwaving to myself and not sharing it in the form of any trip report. I have a reputation for being down to Earth, and I wouldn’t want to spoil it. Flying with ravens, dear me. Privately it is quite fun right now, though. That cycling of mental maps, could it be compatible with the Integrated Information Theory? I don’t think Tononi’s people have gone into how an intelligent system would search qualia state-space and how it would find the task-specific ground states via successive approximations. Rapidly iterated cycling would bring in a dynamic aspect they haven’t gotten to, perhaps. I realize I haven’t read the latest from them. Was always a bit skeptical of the unwieldy mathematics they use. Back of the envelope here… if you replace the clunky “integration” with resonance, maybe there’s a continuum of amplitudes of consciousness intensity? Possibly with a threshold corresponding to IIT’s nonconscious feed-forward causation chains. The only thing straight from physics which would allow this, as far as I can tell from the basic mathematics of it, would be wave interference dynamics. If so, what property might valence correspond to? Indeed, be mappable to? For conscious minds, experiential valence is the closest one gets to updating on a utility function. Waves can interfere constructively and destructively. That gives us frequency-variable amplitude combinations, likely isomorphic with the experienced phenomenology and intensity of conscious states. Such as the enormous “realness” and “fantastic truth” I am now immersed in. Not sure if it’s even “I”. There is ego dissolution. It’s more like a free-floating cosmic revelation. Spectacular must be the mental task for which this state is the ground state!

Wave pattern variability is clearly not a bottleneck. Plotting graphs of frequencies and amplitudes for even simple interference patterns shows there’s a near-infinite space of distinct potential patterns to pick from. The operative system, that is evolution and development of nervous systems, must have been slow going to optimize by evolution via genetic selection early on in the history of life, but then it could go faster and faster. Let me see, humans acquired a huge redundancy of neocortex of the same type as animals use for navigation in spacetime locations. Hmmm…, that which the birds are so good at. Wonder if the same functionality in ravens also got increased in volume beyond what is needed for navigation? Opening up the possibility of using the brain to also “navigate” in social relational space or tool function space. Literally, these are “spaces” in the brain’s mental models.

Natural selection of genetics cannot have found the ground states for all the multiple tasks a human with our general intelligence is able to take on. Extra brain tissue is one thing it could produce, but the way that tissue gets efficiently used must be trained during life. Since the computational efficiency of the human brain is assessed to be near the theoretical maximum for the raw processing power it has available, inefficient information-encoding states really aren’t very likely to make up any major portion of our mental activity. Now, that’s a really strong constraint on mechanisms of consciousness there. If you don’t believe it was all magically designed by God, you’d have to find a plausible parsimonious mechanism for how the optimization takes place.

If valence is in the system as a basic property, then what can it be if it’s not amplitude? For things to work optimally, valence should in fact be orthogonal to amplitude. Let me see… What has a natural tendency to persist in evolving systems of wave interference? Playing around with some programs on my computer now… well, appears it’s consonance which continues and dissonance which dissipates. And noise which neutralizes. Hey, that’s even simple to remember: consonance continues, dissonance dissipates, noise neutralizes. Goodness, I feel like a hippie. Beads and Roman sandals won’t be seen. In Muskogee, Oklahoma, USA. Soon I’ll become convinced love’s got some cosmic ground state function, and that the multiverse is mind-like. Maybe it’s all in the vibes, actually. Spaghetti Monster, how silly that sounds. And at the same time, how true!


Artist: Matthew Smith

I’m now considering the brain to produce self-organizing ground state qualia selection via activity wave interference with dissonance gradient descent and consonance gradient ascent with ongoing information compression-expansion cycling and normalization via buildup of system fatigue. Wonder if it’s just me tripping, or if someone else might seriously be thinking along these lines. If so, what could make a catchy name for their model?

Maybe “Resonant State Selection Theory”? I only wish this could be true, for then it would be possible to unify empty individualism with open individualism in a framework of full empathic transparency. The major ground states for human intelligence could presumably be mapped pretty well with an impressive statistical analyzer like GPT-3. Mapping the universal ground truth of conscious intelligence, what a vision!

But, alas, the acid is beginning to wear off. Back to the good old opaque arbitrariness I’ve built my career on. No turning back now. I think it’s time for a cup of tea, and maybe a cracker to go with that.


QRI’s FAQ

These are the answers to the most Frequently Asked Questions about the Qualia Research Institute. (See also: the glossary).


(Organizational) Questions About the Qualia Research Institute

  • What type of organization is QRI?

    • QRI is a nonprofit research group studying consciousness based in San Francisco, California. We are a registered 501(c)(3) organization.

  • What is the relationship between QRI, Qualia Computing, and Opentheory?

    • Qualia Computing and Opentheory are the personal blogs of QRI co-founders Andrés Gómez Emilsson and Michael Johnson, respectively. While QRI was in its early stages, all original QRI research was initially published on these two platforms. However, from August 2020 onward, this is shifting to a unified pipeline centered on QRI’s website.

  • Is QRI affiliated with an academic institution or university?

    • Although QRI does collaborate regularly with university researchers and laboratories, we are an independent research organization. Put simply, QRI is independent because we didn’t believe we could build the organization we wanted and needed to build within the very real constraints of academia. These constraints include institutional pressure to work on conventional projects, to optimize for publication metrics, and to clear various byzantine bureaucratic hurdles. It also includes professional and social pressure to maintain continuity with old research paradigms, to do research within an academic silo, and to pretend to be personally ignorant of altered states of consciousness. It’s not that good research cannot happen under these conditions, but we believe good consciousness research happens despite the conditions in academia, not because of them, and the best use of resources is to build something better outside of them.

  • How does QRI align with the values of EA?

    • Effective Altruism (EA) is a movement that uses evidence and reason to figure out how to do the most good. QRI believes this aesthetic is necessary and important for creating a good future. We also believe that if we want to do the most good, foundational research on the nature of the good is of critical importance. Two frames we offer are Qualia Formalism and Sentientism. Qualia Formalism is the claim that experience has a precise mathematical description, that a formal account of experience should be the goal of consciousness research. Sentientism is the claim that value and disvalue are entirely expressed in the nature and quality of conscious experiences. We believe EA is enriched by both Qualia Formalism and Sentientism.

  • What would QRI do with $10 billion?

    • Currently, QRI is a geographically distributed organization with access to commercial-grade neuroimaging equipment. The first thing we’d do with $10 billion is set up a physical headquarters for QRI and buy professional-grade neuroimaging devices (fMRI, MEG, PET, etc.) and neurostimulation equipment. We’d also hire teams of full-time physicists, mathematicians, electrical engineers, computer scientists, neuroscientists, chemists, philosophers, and artists. We’ve accomplished a great deal on a shoestring budget, but it would be hard to overestimate how significant being able to build deep technical teams and related infrastructure around core research threads would be for us (and, we believe, for the growing field of consciousness research). Scaling is always a process and we estimate our ‘room for funding’ over the next year is roughly ~$10 million. However, if we had sufficiently deep long-term commitments, we believe we could successfully scale both our organization and research paradigm into a first-principles approach for decisively diagnosing and curing most forms of mental illness. We would continue to run studies and experiments, collect interesting data about exotic and altered states of consciousness, pioneer new technologies that help eliminate involuntary suffering, and develop novel ways to enable conscious beings to safely explore the state-space of consciousness.

Questions About Our Research Approach

  • What differentiates QRI from other research groups studying consciousness?

    • The first major difference is that QRI breaks down “solving consciousness” into discrete subtasks; we’re clear about what we’re trying to do, which ontologies are relevant for this task, and what a proper solution will look like. This may sound like a small thing, but an enormous amount of energy is wasted in philosophy by not being clear about these things. This lets us “actually get to work.”

    • Second, our focus on valence is rare in the field of consciousness studies. A core bottleneck in understanding consciousness is determining what its ‘natural kinds’ are: terms which carve reality at the joints. We believe emotional valence (the pleasantness/unpleasantness of an experience) is one such natural kind, and this gives us a huge amount of information about phenomenology. It also offers a clean bridge for interfacing with (and improving upon) the best neuroscience.

    • Third, QRI takes exotic states of consciousness extremely seriously, whereas most research groups do not. An analogy we make here is that ignoring exotic states of consciousness is similar to people before the scientific enlightenment thinking they could understand the nature of energy, matter, and the physical world just by studying it at room temperature, while completely ignoring extreme states such as those found in the sun, black holes, plasmas, or superfluid helium. QRI considers exotic states of consciousness extremely important datapoints for reverse-engineering the underlying formalism for consciousness.

    • Lastly, we focus on precise, empirically testable predictions, which is rare in philosophy of mind. Any good theory of consciousness should contribute to advancements in neuroscience; likewise, any good theory of neuroscience should yield novel, bold, falsifiable predictions, and blueprints for useful things, such as new forms of therapy. A full-stack approach to consciousness that does both of these things is thus an important marker that “something interesting is going on here,” and is simply very useful for testing and improving theory.

  • What methodologies are you using? How do you actually do research? 

    • QRI has three core areas of research: philosophy, neuroscience, and neurotechnology 

      • Philosophy: Our philosophy research is grounded in the eight problems of consciousness. This divide-and-conquer approach lets us explore each subproblem independently, while being confident that when all piecemeal solutions are added back together, they will constitute a full solution to consciousness.

      • Neuroscience: We’ve done original synthesis work on combining several cutting-edge theories of neuroscience (the free energy principle, the entropic brain, and connectome-specific harmonic waves) into a unified theory of Bayesian emotional updating; we’ve also built the world’s first first-principles method for quantifying emotional valence from fMRI. More generally, we focus on collecting high valence neuroimaging datasets and developing algorithms to analyze, quantify, and visualize them. We also do extensive psychophysics research, focusing on both the fine-grained cognitive-emotional effects of altered states, and how different types of sounds, pictures, body vibrations, and forms of stimulation correspond with low and high valence states of consciousness.

      • Neurotechnology: We engage in both experimentation-driven exploration, tracking the phenomenological effects of various interventions, as well as theory-driven development. In particular, we’re prototyping a line of neurofeedback tools to help treat mental health disorders.

  • What does QRI hope to do over the next 5 years? Next 20 years?

    • Over the next five years, we intend to further our neurotechnology to the point that we can treat PTSD (post-traumatic stress disorder), especially treatment-resistant PTSD. We intend to empirically verify or falsify the symmetry theory of valence. If it is falsified, we will search for a new theory that ties together all of the empirical evidence we have discovered. We aim to create an Effective Altruist cause area regarding the reduction of intense suffering as well as the study of very high valence states of consciousness.

    • Over the next 20 years, we intend to become a world-class research center where we can put the discipline of “paradise engineering” (as described by philosopher David Pearce) on firm academic grounds.

Questions About Our Mission

  • How can understanding the science of consciousness make the world a better place?

    • Understanding consciousness would improve the world in a tremendous number of ways. One obvious outcome would be the ability to better predict what types of beings are conscious—from locked-in patients to animals to pre-linguistic humans—and what their experiences might be like.

    • We also think it’s useful to break down the benefits of understanding consciousness in three ways: reducing the amount of extreme suffering in the world, increasing the baseline well-being of conscious beings, and achieving new heights for what conscious states are possible to experience.

    • Without a good theory of valence, many neurological disorders will remain completely intractable. Disorders such as fibromyalgia, complex regional pain syndrome (CRPS), migraines, and cluster headaches are all currently medical puzzles, yet they have incredibly negative effects on people’s lives. We think that a mathematical theory of valence will explain why these things feel so bad and what the shortest path to getting rid of them looks like. Besides valence-related disorders, nearly all mental health disorders, from clinical depression and PTSD to schizophrenia and anxiety disorders, will become better understood as we discover the structure of conscious experience.

    • We also believe that many (though not all) of the zero-sum games people play are the products of inner states of dissatisfaction and suffering. Broadly speaking, people who have a surplus of cognitive and emotional energy tend to play more positive-sum games, and are both more interested in cooperation and more motivated to pursue it. We think that studying states such as those induced by MDMA, which combine high valence with a prosocial mindset, could radically alter the game-theoretical landscape of the world for the better.

  • What is the end goal of QRI? What does QRI’s perfect world look like?

    • In QRI’s perfect future:

      • There is no involuntary suffering and all sentient beings are animated by gradients of bliss,

      • Research on qualia and consciousness is done at a very large scale for the purpose of mapping out the state-space of consciousness and understanding its computational and intrinsic properties (we think that we’ve barely scratched the surface of knowledge about consciousness),

      • We have figured out the game-theoretical subtleties needed to make that world dynamic yet stable: radically positive, without making it fully homogeneous or stuck in a local maximum.

Questions About Getting Involved

  • How can I follow QRI’s work?

    • You can start by signing up for our newsletter! This is by far our most important communication channel. We also have a Facebook page, Twitter account, and LinkedIn page. Lastly, we share some exclusive tidbits of ideas and thoughts with our supporters on Patreon.

  • How can I get involved with QRI?

    • The best ways to help QRI are to:

      • Donate to help support our work.

      • Read and engage with our research. We love critical responses to our ideas and encourage you to reach out if you have an interesting thought!

      • Spread the word to friends, potential donors, and people that you think would make great collaborators with QRI.

      • Check out our volunteer page to find more detailed ways that you can contribute to our mission, from independent research projects to QRI content creation.

Questions About Consciousness

  • What assumptions about consciousness does QRI have? What theory of consciousness does QRI support?

    • The most important assumption that QRI is committed to is Qualia Formalism, the hypothesis that the internal structure of our subjective experience can be represented precisely by mathematics. We are also Valence Realists: we believe valence (how good or bad an experience feels) is a real and well-defined property of conscious states. Besides these positions, we are fairly agnostic and everything else is an educated guess useful for pragmatic purposes.

  • What does QRI think of functionalism?

    • QRI thinks that functionalism takes many high-quality insights about how systems work and combines them in a way that both creates confusion and denies the possibility of progress. In its raw, unvarnished form, functionalism is simply skepticism about the possibility of Qualia Formalism: a statement that “there is nothing here to be formalized; consciousness is like élan vital, confusion to be explained away.” It’s not actually a theory of consciousness; it’s an anti-theory. This is problematic in at least two ways:

      • 1. By assuming consciousness has formal structure, we’re able to make novel predictions that functionalism cannot (see e.g. QRI’s Symmetry Theory of Valence, and Quantifying Bliss). A few hundred years ago, there were many people who doubted that electromagnetism had a unified, elegant, formal structure, and this was a reasonable position at the time. However, in the age of the iPhone, skepticism that electricity is a “real thing” that can be formalized is no longer reasonable. Likewise, everything interesting and useful QRI builds using the foundation of Qualia Formalism stretches functionalism’s credibility thinner and thinner.

      • 2. Insofar as functionalism is skeptical about the formal existence of consciousness, it’s skeptical about the formal existence of suffering and all sentience-based morality. In other words, functionalism is a deeply amoral theory, which if taken seriously dissolves all sentience-based ethical claims. This is due to there being an infinite number of functional interpretations of a system: there’s no ground-truth fact of the matter about what algorithm a physical system is performing, about what information-processing it’s doing. And if there’s no ground-truth about which computations or functions are present, but consciousness arises from these computations or functions, then there’s no ground-truth about consciousness, or about things associated with consciousness, like suffering. This is a strange and subtle point, but it’s very important. This point alone is not sufficient to reject functionalism: if the universe is amoral, we shouldn’t hold a false theory of consciousness in order to try to force reality into some ethical framework. But in debates about consciousness, functionalists should be up-front that functionalism and radical moral anti-realism are a package deal: inherent in functionalism is the counter-intuitive claim that just as we can reinterpret which functions a physical system is instantiating, we can reinterpret what qualia it’s experiencing and whether it’s suffering.

    • For an extended argument, see Against Functionalism.
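The claim above, that there is no ground-truth about which function a physical system computes, can be made concrete with a toy sketch of our own (an illustration, not taken from Against Functionalism): the same physical device, read under two different voltage encodings, implements two different logical functions.

```python
# Toy illustration (ours, not QRI's): one physical device, two "computations".
# The device outputs a HIGH voltage only when both of its inputs are HIGH.

def physical_gate(v1: bool, v2: bool) -> bool:
    """Physical behavior: output HIGH iff both input voltages are HIGH."""
    return v1 and v2

for a in (0, 1):
    for b in (0, 1):
        # Interpretation A: HIGH voltage means logical 1 -> the device is an AND gate.
        assert int(physical_gate(bool(a), bool(b))) == (a & b)
        # Interpretation B: HIGH voltage means logical 0 -> encode inputs inverted,
        # decode the output inverted. The *same* device is now an OR gate.
        assert int(not physical_gate(not bool(a), not bool(b))) == (a | b)

print("One device, two incompatible functional interpretations (AND vs. OR).")
```

Nothing about the voltages themselves picks out AND rather than OR; a function only appears once an observer fixes an encoding, which is the sense in which there is no observer-independent fact about which computation a system performs.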

  • What does QRI think of panpsychism?

    • At QRI, we hold a position that is close to dual-aspect monism or neutral monism, which states that the universe is composed of one kind of thing that is neutral, and that both the mental and physical are two features of this same substance. One of the motivating factors for holding this view is that if there is deep structure in the physical, then there should be a corresponding deep structure to phenomenal experience. And we can tie this together with physicalism in the sense that the laws of physics ultimately describe fields of qualia. While there are some minor disagreements between dual-aspect monism and panpsychism, we believe that our position mostly fits well with a panpsychist view—that phenomenal properties are a fundamental feature of the world and aren’t spontaneously created only when a certain computation is being performed.

    • However, even with this view, there still are very important questions, such as: what makes a unified conscious experience? Where does one experience end and another begin? Without considering these problems in the light of Qualia Formalism, it is easy to tie animism into panpsychism and believe that inanimate objects like rocks, sculptures, and pieces of wood have spirits or complex subjective experiences. At QRI, we disagree with this and think that these types of objects might have extremely small pockets of unified conscious experience, but will mostly be masses of micro-qualia that are not phenomenally bound into some larger experience.

  • What does QRI think of IIT (Integrated Information Theory)?

    • QRI is very grateful for IIT because it is the first mainstream theory of consciousness that satisfies a Qualia Formalist account of experience. IIT says (and introduced the idea!) that for every conscious experience, there is a corresponding mathematical object such that the mathematical features of that object are isomorphic to the properties of the experience. QRI believes that without this idea, we cannot solve consciousness in a meaningful way, and we consider the work of Giulio Tononi to be one of our core research lineages. That said, we are not in complete agreement with the specific mathematical and ontological choices of IIT, and we think it may be trying to ‘have its cake and eat it too’ with regard to functionalism vs physicalism. For more, see Sections III-V of Principia Qualia.

    • We make no claim that some future version of IIT, particularly one more directly compatible with physics, couldn’t cleanly address our objections; we see a lot of plausible directions and promise in this space.

  • What does QRI think of the free energy principle and predictive coding?

    • On our research lineages page, we list the work of Karl Friston as one of QRI’s core research lineages. We consider the free energy principle (FEP), as well as related research such as predictive coding, active inference, the Bayesian brain, and cybernetic regulation, as an incredibly elegant and predictive story of how brains work. Friston’s idea also forms a key part of the foundation for QRI’s theory of brain self-organization and emotional updating, Neural Annealing.

    • However, we don’t think that the free energy principle is itself a theory of consciousness, as it suffers from many of the shortcomings of functionalism: we can tell the story about how the brain minimizes free energy, but we don’t have a way of pointing at the brain and saying *there* is the free energy! The FEP is an amazing logical model, but it’s not directly connected to any physical mechanism. It is a story that “this sort of abstract thing is going on in the brain” without a clear method of mapping this abstract story to reality.

    • Friston has supported this functionalist interpretation of his work, noting that he sees consciousness as a process of inference, not a thing. That said, we are very interested in his work on calculating the information geometry of Markov blankets, as this could provide a tacit foundation for a formalist account of qualia under the FEP. Regardless of this, though, we believe Friston’s work will play a significant role in a future science of mind.

  • What does QRI think of global workspace theory?

    • The global workspace theory (GWT) is a cluster of empirical observations that seem very important for understanding which systems in the brain contribute to a reportable experience at a given point in time. It is a very important clue for answering questions of what philosophers call Access Consciousness: the aspects of our experience on which we can report.

    • However, QRI does not consider the global workspace theory to be a full theory of consciousness. Parts of the brain that are not immediately contributing to the global workspace may be composed of micro qualia, or tiny clusters of experience. They’re obviously impossible to report on, but they are still relevant to the study of consciousness. In other words, just because a part of your brain wasn’t included in the instantaneous global workspace, doesn’t mean that it can’t suffer or it can’t experience happiness. We value global workspace research because questions of Access Consciousness are still very critical for a full theory of consciousness.

  • What does QRI think of higher-order theories of consciousness?

    • QRI is generally opposed to theories that equate consciousness with higher-order reflective thought and cognition. Some of the most intense conscious experiences are pre-reflective or unreflective, such as blind panic, religious ecstasy, experiences of 5-MeO-DMT, and cluster headaches. In these examples there is not much reflectivity or cognition going on, yet they are intensely conscious. Therefore, we largely reject any attempt to define consciousness in terms of a higher-order theory.

  • What is the relationship between evolution and consciousness?

    • The relationship between evolution and consciousness is very intricate and subtle. An eliminativist approach arrives at the simple idea that information processing of a certain type is evolutionarily advantageous, and perhaps we can call this consciousness. However, with a Qualia Formalist approach, it seems instead that the very properties of the mathematical object isomorphic to consciousness can play key roles (either causal or in terms of information processing) that make it advantageous for organisms to recruit consciousness.

    • If you don’t realize that consciousness maps onto a mathematical object with properties, you may think that you understand why consciousness was recruited by natural selection, but your understanding of the topic would be incomplete. In other words, to fully understand why evolution recruited consciousness, you need to understand what advantages the mathematical object confers. One very important feature of consciousness is its capacity for binding. For example, the unitary nature of experience—the fact that we can experience a lot of qualia simultaneously—may be a key feature of consciousness that accelerates the process of finding solutions to constraint satisfaction problems. Evolution would thus have a reason to recruit states of consciousness for computation. So rather than thinking of consciousness as identical with the computation going on in the brain, we can think of it as a resource with unique computational benefits, powerful and dynamic enough to make organisms that use it more adaptable to their environments.

  • Does QRI think that animals are conscious?

    • QRI thinks there is a very high probability that every animal with a nervous system is conscious. We are agnostic about unified consciousness in insects, but we consider it very likely. We believe research on animal consciousness is relevant to treating animals ethically. Additionally, we think that the ethical importance of consciousness has more to do with the pleasure-pain axis (valence) than with cognitive ability. In that sense, the suffering of non-human animals may be just as morally relevant as, if not more relevant than, that of humans. The cortex seems to play a largely inhibitory role for emotions, such that the larger the cortex, the better we’re able to manage and suppress our emotions. Consequently, animals whose cortices are less developed than ours may experience pleasure and pain in a more intense and uncontrollable way, like a pre-linguistic toddler.

  • Does QRI think that plants are conscious?

    • We think it’s very unlikely that plants are conscious. The main reason is that they lack an evolutionary reason to recruit consciousness. Large-scale phenomenally bound experience may be very energetically expensive, and plants don’t have much energy to spare. Additionally, plants have thick cellulose walls that separate individual cells, making it very unlikely that plants can solve the binding problem and therefore create unified moments of experience.

  • Why do some people seek out pain?

    • This is a very multifaceted question. As a whole, we postulate that in the vast majority of cases where somebody is nominally pursuing pain or suffering, they’re actually trying to reduce internal dissonance in pursuit of consonance, or they’re failing to predict how the pain will actually feel. For example, when a person listens to very harsh music or enjoys extremely spicy food, this can be explained in terms of either masking other unpleasant sensations or raising the energy parameter of experience; the latter can lead to neural annealing, a very pleasant experience that manifests as consonance in the moment.

  • I sometimes like being sad. Is QRI trying to take that away from me?

    • Before we try to ‘fix’ something, it’s important to understand what it’s trying to do for us. Sometimes suffering leads to growth; sometimes creating valuable things involves suffering. Sometimes, ‘being sad’ feels strangely good. Insofar as suffering is doing good things for us, or for the world, QRI advocates a light touch (see Chesterton’s fence). However, we also suggest two things:

      • 1. Most kinds of melancholic or mixed states of sadness are usually pursued for reasons that cash out as some sort of pleasure. Bittersweet experiences are far preferable to intense agony or deep depression. If you enjoy sadness, it’s probably because there’s an aspect of your experience that is enjoyable. If it were possible to remove the sad part of your experience while maintaining the enjoyable part, you might be surprised to find that you prefer this modified experience to the original one.

      • 2. There are kinds of sadness and suffering that are just bad, that degrade us as humans, and would be better to never feel. QRI doesn’t believe in forcibly taking away voluntary suffering, or pushing bliss on people. But we would like to live in a world where people can choose to avoid such negative states, and on the margin, we believe it would be better for humanity for more people to be joyful, filled with a deep sense of well-being.

  • If dissonance is so negative, why is dissonance so important in music?

    • When you listen to very consonant music or tones, you quickly adapt to these sounds and get bored of them. This has nothing to do with consonance itself being unpleasant and everything to do with learning in the brain. Whenever you experience the same stimulus repeatedly, the brain triggers a boredom mechanism and adds dissonance of its own in order to make you enjoy the stimulus less, or simply inhibits it so that you don’t experience it at all. Semantic satiation is a classic example of this, where repeating the same word over and over makes it lose its meaning. For this reason, to trigger many high-valence states of consciousness consecutively, you need contrast. In particular, music works with gradients of consonance and dissonance, and in most cases it is moving towards consonance that feels good, rather than the absolute value of consonance. Music tends to feel best when a high absolute value of consonance is combined with a very strong sense of moving towards an even higher value. Playing some dissonance during a song later enhances the enjoyment of the more consonant parts, such as the chorus, which is reported to be the most euphoric part of a song and is typically extremely consonant.
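The gradient idea above can be sketched with a deliberately simplistic toy model of our own (the weights and functional form are illustrative assumptions, not QRI's formal theory): score each moment by both the level of consonance and its rate of change, so that resolving dissonance into consonance outscores sitting at constant high consonance.

```python
# Toy sketch (illustrative assumption, not QRI's formal model):
# valence tracks both the level of consonance and its rate of change.

def valence(consonance, w_level=0.3, w_gradient=0.7):
    """Score moment-to-moment valence from a consonance time series in [0, 1]."""
    out = []
    for t in range(1, len(consonance)):
        level = consonance[t]
        gradient = consonance[t] - consonance[t - 1]  # "moving towards consonance"
        out.append(w_level * level + w_gradient * gradient)
    return out

flat_high = [0.9, 0.9, 0.9, 0.9]  # constant high consonance
rising    = [0.4, 0.6, 0.8, 0.9]  # dissonance resolving into consonance

# Moving toward consonance peaks higher than sitting at high consonance:
print(max(valence(rising)) > max(valence(flat_high)))  # True
```

With these (arbitrary) weights, the resolving passage peaks at 0.38 while the flat passage never exceeds 0.27, mirroring the claim that contrast and movement toward consonance, not absolute consonance alone, drive musical enjoyment.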

  • What is QRI’s perspective on AI and AI safety research?

    • QRI thinks that consciousness research is critical for addressing AI safety. Without a precise way of quantifying an action’s impact on conscious experiences, we won’t be able to guarantee that an AI system has been programmed to act benevolently. Also, certain types of physical systems that perform computational tasks may be experiencing negative valence without any outside observer being aware of it. We need a theory of what produces unpleasant experiences to avoid inadvertently creating superintelligences that suffer intensely in the process of solving important problems or accidentally inflict large-scale suffering.

    • Additionally, we think that a very large percentage of what will make powerful AI dangerous is that the humans programming these machines and using these machines may be reasoning from states of loneliness, resentment, envy, or anger. By discovering ways to help humans transition away from these states, we can reduce the risks of AI by creating humans that are more ethical and aligned with consciousness more broadly. In short: an antidote for nihilism could lead to a substantial reduction in existential risk.

    • One way to think about QRI and AI safety is that the world is building AI, but doesn’t really have a clear, positive vision of what to do with AI. Lacking this, the default objective becomes “take over the world.” We think a good theory of consciousness could and will offer new visions of what kind of futures are worth building—new Schelling points that humanity (and AI researchers) could self-organize around.

  • Can digital computers implementing AI algorithms be conscious?

    • QRI is agnostic about this question. We have reasons to believe that digital computers in their current form cannot solve the phenomenal binding problem. Most of the activity in digital computers can be explained in a stepwise fashion in terms of localized processing of bits of information. Because of this, we believe that current digital computers could be creating fragments of qualia, but are unlikely to be creating strongly globally bound experiences. So, we consider the consciousness of digital computers unlikely, although given our current uncertainty over the Binding Problem (or, alternatively framed, the Boundary Problem), this assumption is lightly held. In the previous question, when we write that “certain types of physical systems that perform computational tasks may be experiencing negative valence”, we assume that these hypothetical computers have some type of unified conscious experience as a result of having solved the phenomenal binding problem. For more on this topic, see “What’s Out There?”

  • How much mainstream recognition has QRI’s work received, either for this line of research or others? Has it published in peer-reviewed journals, received any grants, or garnered positive reviews from other academics?

    • We are collaborating with researchers from Johns Hopkins University and Stanford University on several studies involving the analysis of neuroimaging data of high-valence states of consciousness. Additionally, we are currently preparing two publications for peer-reviewed journals on topics from our core research areas. Michael Johnson will be presenting at this year’s MCS seminar series, along with Karl Friston, Anil Seth, Selen Atasoy, Nao Tsuchiya, and others; Michael Johnson, Andrés Gómez Emilsson, and Quintin Frerichs have also given invited talks at various east-coast colleges (Harvard, MIT, Princeton, and Dartmouth).

    • Some well-known researchers and intellectuals who are familiar with and think positively about our work include: Robin Carhart-Harris, Scott Alexander, David Pearce, Steven Lehar, Daniel Ingram, and more. Scott Alexander acknowledged that QRI had put together the paradigms contributing to Friston’s integrative model of how psychedelics work before that research was published. Our track record so far has been to foreshadow, by several years, key discoveries later proposed and accepted in mainstream academia. Given our current research findings, we expect this trend to continue in the years to come.

Miscellaneous

  • How does QRI know what is best for other people/animals? What about cultural relativism?

    • We think that, to a large extent, people and animals work under the illusion that they are pursuing intentional objects, states of the external environment, or relationships with the external environment. However, when you examine these situations closely, you realize that what we actually pursue are states of high valence triggered by external circumstances. There may be evolutionary and cultural selection pressures that push us toward self-deception about how we actually function. We consider it harmful that these selection pressures make us less self-aware, because they often focus our energy on unpleasant, destructive, or fruitless strategies. QRI hopes to support people in fostering more self-awareness, which can come through experiments with one’s own consciousness, like meditation, as well as through a deeper theoretical understanding of what it is that we actually want.

  • How central is David Pearce’s work to the work of the QRI?

    • We consider David Pearce to be one of our core lineages. We particularly value his contribution to valence realism: the insistence that states of consciousness come with an overall valence, and that this is very morally relevant. We also consider David Pearce to be very influential in philosophy of mind; he coined, for instance, the phrase ‘tyranny of the intentional object’, which gave its name to a core QRI piece. We have been inspired by Pearce’s descriptions of what any scientific theory of consciousness should be able to explain, as well as his particular emphasis on the binding problem. David’s vision of a world animated by ‘gradients of bliss’ has also been very generative as a normative thought experiment that integrates human and non-human well-being. We do not necessarily agree with all of David Pearce’s work, but we respect him as an insightful and vivid thinker who has been brave enough to actually take a swing at describing utopia, and who we believe is far ahead of his time.

  • What does QRI think of negative utilitarianism?

    • There’s general agreement within QRI that intense suffering is an extreme moral priority, and we’ve done substantial work on finding simple ways of getting rid of extreme suffering (with our research inspiring at least one unaffiliated startup to date). However, we find it premature to strongly endorse any pre-packaged ethical theory, especially because none of them are based on any formalism, but rather on an ungrounded concept of ‘utility’. The value of information here seems enormous, and we hope to get to a point where the ‘correct’ ethical theory may simply ‘pop out of the equations’ of reality. It’s also important to highlight that common academic formulations of utilitarianism seem blind to many subtleties concerning valence. For example, they do not distinguish between mixed states of consciousness (where extreme pleasure is combined with extreme suffering in such a way that you judge the experience to be neither entirely suffering nor entirely happiness) and states of complete neutrality, such as extreme white noise. Because most formulations of utilitarianism do not make this distinction, we are generally suspicious of the idea that philosophers of ethics have considered all of the attributes of consciousness relevant to making accurate judgments about morality.

  • What does QRI think of philosophy of mind departments?

    • We believe that the problems philosophy of mind departments address tend to be very disconnected from what truly matters from an ethical, moral, and philosophical point of view. For example, there is little appreciation of the value of bringing mathematical formalisms into discussions about the mind, or of what that might look like in practice. Likewise, there is close to no interest in preventing extreme suffering or understanding its nature, and there is usually a disregard for extreme states of positive valence, and for strange or exotic experiences in general. There may be worthwhile things happening in the departments and classes creating and studying this literature, but we find them characterized by processes unlikely to produce progress on their nominal purpose: creating a science of mind.

    • In particular, in academic philosophy of mind, we’ve seen very little regard for producing empirically testable predictions. Millions of pages have been written about philosophy of mind, but vanishingly few of them provide precise, empirically testable predictions.

  • What therapies does QRI recommend for depression, anxiety, and chronic pain?

    • At QRI, we do not make specific recommendations to individuals, but rather point to areas of research that we consider to be extremely important, tractable, and neglected, such as anti-tolerance drugs, neural annealing techniques, frequency specific microcurrent for kidney stone pain, and N,N-DMT and other tryptamines for cluster headaches and migraines.

  • Why does QRI think it’s so important to focus on ending extreme suffering? 

    • QRI thinks ending extreme suffering is important, tractable, and neglected. It’s important because of the logarithmic scales of pleasure and pain—the fact that extreme suffering is far worse by orders of magnitude than what people intuitively believe. It’s tractable because many types of extreme suffering either have fairly trivial existing solutions or at least a viable path to being solved with moderately funded research programs. And it’s neglected mostly because people are unaware of the existence of these states, though not necessarily because of their rarity. For example, 10% of the population experiences kidney stones at some point in their lives, but for reasons having to do with trauma, PTSD, and the state-dependence of memory, even people who have suffered from kidney stones do not typically dedicate their time or resources toward eradicating them.

    • It’s also likely that if we can meaningfully improve the absolute worst experiences, much of the knowledge we’ll gain in that process will translate into other contexts. In particular, we should expect to figure out how to make moderately depressed people happier, fix more mild forms of pain, improve the human hedonic baseline, and safely reach extremely great peak states. Mood research is not a zero-sum game. It’s a web of synergies.



Many thanks to Andrew Zuckerman, Mackenzie Dion, and Mike Johnson for their collaboration in putting this together. Featured image is QRI’s logo – animated by Hunter Meyer.

The QRI Ecosystem: Friends, Collaborators, Blogs, Media, and Adjacent Communities

The Qualia Research Institute has the vision of a world free from involuntary suffering in which conscious agents are empowered to have full control over their lived experiences. It pursues this objective by combining foundational research on consciousness with a focus on explaining the mathematical properties of pleasure and pain, aiming at a full, formal account of valence.

By relating our mission to existing memeplexes, we could perhaps accurately describe the ethos of QRI as “Qualia Formalist Sentientist Effective Altruism”. That’s a mouthful. Let’s break it down:

  • Qualia Formalism refers to the notion that experience has a precise mathematical description that ties it with physics (for a more detailed breakdown see the Formalism section of the glossary).
  • Sentientism refers to the claim that value and disvalue are entirely expressed in the nature and quality of conscious experiences. In other words, that the only reason why states of affairs matter is because of the way in which they impact experiences.
  • Effective Altruism refers to the view that we can aspire to do the most good we can rather than settle for less. If you examine the actual extent to which different interventions cash out in terms of reduction in suffering throughout the world, you will notice that they follow a long-tail distribution. Thus, research on how to prioritize interventions really pays off. Focusing on the top interventions (and being willing to spend extra time digging for even better ones) can multiply your positive impact by orders of magnitude.

We could thus say that people and organizations are more or less aligned with QRI to the extent that they are aligned with each of these notions and their combinations. Moreover, QRI also values the practice of rational psychonautics and the study of one’s own mind with meditation – hence we also include lists of rational psychonauts and great dharma teachers.

Find below the list of people and organizations that have a significant degree of alignment with QRI on each front. We also include a list of blogs and websites from readers of our work, which is meant to incentivize community-building around the aforementioned core ideas.

Format:

Name of Person/Organization – Blog/Website/Media [if any] (Representative Post by the Author – Sometimes Not from Their Primary Site [if any])


QRI Canon

Qualia Research Institute – QRI (Glossary)

Michael Edward Johnson – Open Theory (Neural Annealing)

Andrés Gómez Emilsson – Qualia Computing (Wireheading Done Right)

Current and Former QRI Employees and Collaborators Who Write About QRI Topics

Romeo Stevens – Neurotic Gradient Descent (Core Transformation)

Quintin Frerichs – The Youtopia Project (Wada Test + Phenomenal Puzzles)

Andrew “Zuck” Zuckerman – andzuck.com (Super Free Will)

Kenneth Shinozuka – Blank Horizons (A Future for Humanity)

Wendi Yan – wendiyan.com (The Psychedelic Club)

Jeremy Hadfield – jeremyhadfield.com (How to Steal a Vibe)

Elin Ahlstrand – Mind Nomad (Floating Through First Fears)

Margareta Wassinge and Anders Amelin – Qualia Productions (When AI Means Advanced Incompetence)

List of current and former QRI collaborators and volunteers not listed above (in no particular order): Patrick Taylor, Hunter Meyer, Sean McGowan, Alex Zhao, Boian Etropolski, Robin Goins, Bence Vass, Brian Westerman, Jacob Shwartz-Lucas.


People and Organizations that Advocate for Sentientism and the Elimination of Suffering

David Pearce – Hedweb.com (The Hedonistic Imperative)

Manu Herrán – manuherran.com (Psychological Biases that Impede the Success in the Reduction of Intense Suffering Movement)

Jonathan Leighton – jonathanleighton.org (Why Access to Morphine is a Human Right)

Magnus Vinding – magnusvinding.com (Suffering-Focused Ethics: Defense and Implications)

Robert Daoust – robert.algosphere.org (Review of Precursor Works)

Jacob Shwartz-Lucas – Invincible Wellbeing (Pleasure in the Brain)

Algosphere Alliance – algosphere.org (Vision)

Organization for the Prevention of Intense Suffering (OPIS) – preventsuffering.org (Cluster Headaches and Potential Therapies)

Sentience Research – sentience-research.org (Algonomy)

People and Organizations Aligned with Qualia Formalism

Giulio Tononi – integratedinformationtheory.org (Phi: A Voyage from the Brain to the Soul)

Steven Lehar – slehar.com (Harmonic Resonance Theory)

Jonathan W. D. Mason – jwmason.net (Quasi-Conscious Multivariate System)

Johannes Kleiner – jkleiner.de (Mathematical Consciousness Science)

Dan Lloyd – Labyrinth of Consciousness (The Music of Consciousness)

Luca Turin – A Spectroscopic Mechanism for Primary Olfactory Reception (The Science of Scent)

William Marshall – Google Scholar (PyPhi)

Larissa Albantakis – Google Scholar (Causal Composition)

Models of Consciousness Conference – models-of-consciousness.org (YouTube channel)

People and Organizations Aligned with Effective Altruism

Nick Bostrom – nickbostrom.com (What is a Singleton?)

Anders Sandberg – aleph.se (Uriel’s Stacking Problem)

Toby Ord – tobyord.com (The Precipice)

80000 Hours – 80000hours.org (We Could Feed All 8 Billion People Through a Nuclear Winter)

Future of Humanity Institute – fhi.ox.ac.uk (Publications)

Future of Life Institute – futureoflife.org (AI Alignment Podcast: Identity and the AI Revolution with David Pearce and Andrés Gómez Emilsson)

Center on Long-Term Risk – longtermrisk.org (The Case for Suffering-Focused Ethics)

Rethink Priorities – rethinkpriorities.org (Invertebrate Welfare Cause Profile)

Happier Lives Institute – happierlivesinstitute.org (Cause Profile: Mental Health)

Effective Altruism Forum – forum.effectivealtruism.org (Logarithmic Scales of Pleasure and Pain)


Rational Psychonautics

Steven Lehar – slehar.com (The Grand Illusion)

James Kent – psychedelic-information-theory.com (The Control Interrupt Model of Psychedelic Action)

Alexander Shulgin – Shulgin Research Institute (Phenethylamines I Have Known And Loved)

Thomas S. Ray – Breadth and Depth (Psychedelics and the Human Receptorome)

Matthew Baggott – Beyond Fear: MDMA and Emotion (MDA and Contour Integration)

Psychonaut Wiki – psychonautwiki.org (Visual Effects)

Psychedelic Replications – reddit.com/r/replications (Best of All Time Replications; specific floor tile example)

Great Dharma Teachers

Daniel M. Ingram – Integrated Daniel (No-Self vs. True Self)

Leigh Brasington – leighb.com (Right Concentration)

Shinzen Young – shinzen.org (The Science of Enlightenment)

Culadasa – culadasa.com (Joy and Meditation)


QRI Friends and Supporters

Ryan Ferris and James Ormrod – The Good Timeline  (5-MeO-DMT, Paradise Engineering)

Adrian Nelson – Origins of Consciousness (Consciousness Blindness in Science Fiction)

Alex K. Chen – Quora (What are the long term effects of Adderall, Dexedrine, or Ritalin use?)

Andy Vargas – Neologos (Praxis for Open Individualism; Purpose Statement)

Tyger Gruber – tygergruber.com (The Show)

Jacob Lyles – Jacob ex machina (Building a Better Anti-Capitalism)

Adjacent Communities, Organizations, and Allies

Scott Alexander – Slate Star Codex (Relaxed Beliefs Under Psychedelics and the Anarchic Brain)

Geoffrey Miller – Primal Poly (The Mating Mind: How sexual choice shaped the evolution of human nature)

Zvi Mowshowitz – Don’t Worry About the Vase (More Dakka)

Sarah Constantin – Multiple websites: 1, 2, 3 (More Dakka in Medicine)

Scott Aaronson – scottaaronson.com/blog/ (Why I Am Not An Integrated Information Theorist)

Gwern – gwern.net (Iodine and IQ Meta-Analysis)

Venkatesh Rao – Ribbonfarm (Why We Slouch)

David Chapman – meaningness.com (Romantic Rebellion)

Atman Retreat – atmanretreat.com (FAQ)

Foresight Institute – foresight.org (YouTube Channel)

Convergence Analysis – convergenceanalysis.org (List of Works)

Simulation Series – About (YouTube Channel)

Consciousness Hacking – cohack.org (blog posts)

HeartMath Institute – heartmath.org (Chapter on Coherence)

The Wider World of People Who are Friends and Acquaintances of the QRI Ecosystem

Note: I asked (on social media) our readers to share their blogs and personal sites with us. Some of these links are very aligned with QRI and some are not. That said, together they represent a good sample of the memetic ecosystem that surrounds QRI. Namely, these links can be taken as a whole to be suggestive of “the memetic ground upon which QRI is founded”. Please feel free to share your blog or personal site in the comment section of this post.

Jack Foust – Welcome to the Symbolic Domain

Scott Jackisch – Oakland Futurist (Art as a Superweapon)

Maurits Luyben – Energy and Structure

Anonymous – deluks917 (What does ‘Actually Trying’ look like?)

Sameer Halai – sameerhalai.com (Toilet Paper Shortage is Not Caused by Hoarding)

Yohan John – neurologism.com (Some Wild Speculation On Goodhart’s Law And Its Manifestations In The Brain)

Jamie Joyce – The Society Library (Deconstructing the Logic of “Plandemic”)

João Mirage – YouTube Channel (The Mirror of the Spirit)

Natália Mendonça – Axiomatic Doubts (What Truths are Worth Seeking?)

Dustin Ali Francis Janatpour – Tales From Samarkand (The Inspector and the Crow)

Zarathustra Amadeus Goertzel – zarathustra.gitlab.io (Garden of Minds)

Duncan Sabien – Human Parts (In Defense of Punch Bug)

Brenda Esquivel – Abanico de Historias (La Reina Tamar y el Pájaro Condenado)

Vishnu Bachani – vishnubachani.com (Latent Possibilities of the Tonal System)

Martin Utheraptor Duřt – utheraptor.art (Psychedelic Series)

Qiaochu Yuan – Thicket Forte (Monist Nihilism)

Jedediah Logan – Medium Account (Coping with Death During the COVID-19 Crisis)

Eliezer da Silva – eliezersilva.blog (Prior Specification via Prior Predictive Matching)

Cassandra McClure – Lexicaldoll (On Save States)

Gaige Clark – mad.science.blog / Querky Science (The Phoenix Effect)

Ben Finn – optima.blog (Too much to do? Plan your day with Hopscotch [longer])

Michael Dello-Iacovo – michaeldello.com (How I Renounced Christianity and Became Atheist)

Robin Hanson – Overcoming Bias (What Can Money Buy Directly?)

Katja Grace – meteuphoric.com, Worldly Positions, AI Impacts

Mundy Otto Reimer – mundyreimer.github.io (On Thermodynamics, Agency, and Living Systems)

Khuyen Bui – Medium Account (Beyond Ambition)

Jessica Watson Miller – Autotranslucence (Art as the Starting Point; Becoming a Magician)

Aella – knowingless.com (The Trauma Narrative)

Jacob Falkovich – Put a Number on It (The Scent of Bad Psychology)

Javi Otero – iawaketechnologies.com (Fractal Entrainment: A New Psychoacoustic Technology Inspired by Nature)

José Luis Ricón – Nintil

Eliot Redelman – BearLamp

Tee Barnett – teebarnett.com (Are you a job search drone?)

Juan Fernandez Zaragoza – filosofiadelfuturo.com (Pandemia de Ideas)

Eric Layne – (The Antidote to a Global Crisis)

Kazi Adi Shakti – holo-poiesis.com (Beyond Affirmation and Negation)

Pushan Kumar Datta – kaiserpush1 (Ramayana and Cognition of Self)

Yan Liu – Inflection Point (Seeing a World Unshackled from Neoclassical Economics)

Joseph Kelly – (Entrepreneurship is Metaphysical Labor)

Logan Thrasher Collins – logancollinsblog.com

Malcolm Ocean – malcolmocean.com (Transcending Regrets, Problems, and Mistakes)

Jesse Parent – (Why ‘Be Yourself’ is Still Excellent Relationship Advice)

Milan Griffes – Flight From Perfection (Contemplative Practices, Optimal Stopping, Explore/Exploit)

Cody Kuiack – cosmeffect.com (The Holomorphic Self – Meditations)

Daniel Eth – thinkingofutils.com (Quantum Computing for Morons)

Brian P. Ellis – brianpellis.net (Refuting Dr. Erickson and Dr. Massihi)

John Greer – johncgreer.com (The Three Buckets)


Finally: List of Other Relevant Lists

Effective Altruism Blogs – eablogs.net

LessWrong Wiki – List of Rationalist Diaspora Blogs

Effective Altruism Hub – effectivealtruism.org (Resources)

Open Individualism Readings – r/OpenIndividualism (Wiki Reading List)

Phenomenal Binding Resources – binding-problem.com

Physicalist Hotlinks – physicalism.com/physicalist-hotlinks

Qualia Productions Presents: When AI Equals Advanced Incompetence

By Maggie and Anders Amelin

Letter I: Introduction

We are Maggie & Anders. A mostly harmless Swedish old-timer couple only now beginning to discover the advanced incompetence that is the proto-science — or “alchemy” — of consciousness research. A few centuries ago a philosopher of chemistry could have claimed with a straight face to be quite certain that a substance with negative mass had to be invoked to explain the phenomenon of combustion. Another could have been equally convinced that the chemistry of life involves a special force of nature absent from all non-living matter. A physicist of today may recognize that the study of consciousness has even less experimental foundation than alchemy did, yet be confident that at least it cannot feel like something to be a black hole. Since, obviously, black holes are simple objects and consciousness is a phenomenon which only emerges from “complexity” as high as that of a human brain.

Is there some ultimate substrate, basic to reality and which has properties intrinsic to itself? If so, is elementary sentience one of those properties? Or is it “turtles all the way down” in a long regress where all of reality can be modeled as patterns within patterns within patterns ending in Turing-style “bits”? Or parsimoniously never ending?

Will it turn out to be patterns all the way down, or sentience all the way up? Should people who believe themselves to perhaps be in an ancestor simulation take for granted that consciousness exists for biologically-based people in base-level reality? David Chalmers does. So at least that must be one assumption it is safe to make, isn’t it? And the one about no sentience existing in a black hole. And the one about phlogiston. And the four chemical elements.

This really is good material for silly comedy or artistic satire. To view a modest attempt by us in that direction, please feel encouraged to enjoy this youtube video we made with QRI in mind:

When ignorance is near complete, it is vital to think outside the proverbial box if progress is to be made. However, spontaneous creative speculation is more context-constrained than it feels like, and it rarely correlates all that beautifully with anything useful. Any science has to work via the baby steps of testable predictions. The integrated information theory (IIT) does just that, and has produced encouraging early results. IIT could turn out to be a good starting point for eventually mapping and modeling all of experiential phenomenology. For a perspective, IIT 3.0 may be comparable to how Einstein’s modeling of the photoelectric effect stands in relation to a full-blown theory of quantum gravity. There is a fair bit of ground to cover. We have not been able to find any group more likely than the QRI to speed up the process whereby humanity eventually manages to cover that ground. That is, if they get a whole lot of help in the form of outreach, fundraising and technological development. Early pioneers have big hurdles to overcome, but the difference they can make for the future is enormous.

For those who feel inspired, a nice start is to go through all that is on or linked via the QRI website. Indulge in Principia Qualia. If that leaves you confused on a higher level, you are in good company. With us. We are halfway senile and are not information theorists, neuroscientists or physicists. All we have is a nerdy sense of humor and work experience in areas like marketing and planetary geochemistry. One thing we think we can do is help bridge the gap between “experts” and “lay people”. Instead of “explain it like I am five”, we offer the even greater challenge of explaining it like we are Maggie & Anders. Manage that, and you will definitely be wiser afterwards!

– Maggie & Anders


Letter II: State-Space of Matter and State-Space of Consciousness

A core aspect of science is the mapping out of distributions, spectra, and state-spaces of the building blocks of reality. Naturally occurring states of things can be spontaneously discovered. To gain more information about them, one can experimentally alter such states to produce novel ones, and then analyze them in a systematic way.

The full state-space of matter is multidimensional and vast. Zoom in anywhere in it and there will be a number of characteristic physics phenomena appearing there. Within a model of the state-space you can follow independent directions as you move towards regions and points. As an example, you can hold steady at one particular simple chemical configuration. Diamond, say. The stable region of diamond and its emergent properties, like high hardness, extends certain distances in other parameter directions such as temperature and pressure. The diamond region has neighboring regions with differently structured carbon, such as graphite. Diamond and graphite make for an interesting case since the property of hardness emerges very differently in the two regions. (In the pure carbon state-space, the dimensions denoting amounts of all other elements can be said to be there but set to zero.)

Material properties like hardness can be modeled as static phenomena. According to IIT, however, consciousness cannot. It is still an emergent property of matter, though, so just stay in the matter state-space and add a time dimension to it. Then open chains and closed loops of causation emerge as a sort of fundamental level of what matter “does”. Each elementary step of causation may be regarded as producing, or as intrinsically being, some iota of proto-experience. In feedback loops this self-amplifies into states of feeling like something. Many or perhaps most forms of matter can “do” these basic things at various regions of various combinations of parameter settings. Closed causal loops require more delicate fine-tuning in parameter space, so the state-space of nonconscious causation structure is larger than that of conscious structure. The famous “hard problem” has to do with the fact that both an experientially very weak and a very strong state can emerge from the same matter (shown to be the case so far only within brains).
A bit like the huge difference in mechanical hardness of diamond and graphite both emerging from the same pure carbon substrate (a word play on “hard” to make it sticky).

By the logic of IIT it should be possible to model (in arbitrarily coarse or fine detail) the state-space of all conscious experience whose substrate is all possible physical states of pure carbon. Or at room temperature in any material. And so on. If future advanced versions of IIT turn out to be a success, then we may guess there will be significant overlap, allowing for a certain “substrate invariance” for hardware that can support intelligence with human-recognizable consciousness. Outside of that there will be a gargantuan additional novel space to explore. It ought to contain maxima of (intrinsic) attractiveness, none of which need to reside within what a biological nervous system can host. Biological evolution has only been able to search through certain parts of the state-space of matter. One thing it has not worked with on Earth is pure carbon. Diamond tooth enamel or carbon nanotube tendons would be useful, but no animal has them. What about conscious states? Has biology come close to hitting upon any of the optima in those? If all of human sentience is like planet Earth, and all of Terrestrial biologically-based sentience is like the whole Solar System, that leaves an entire extrasolar galaxy out there to explore. (Boarding call: Space X Flight 42 bound for Nanedi Settlement, Mars. Sentinauts please go to the Neuralink check-in terminal.)

Of course we don’t currently know how IIT is going to stand up, but thankfully it does make testable predictions. There is, therefore, a beginning of something to be hoped for with it. In a hopeful scenario IIT turns out to be like special relativity, and what QRI is reaching for is like quantum gravity. It will be a process of taking baby steps, for sure. But each step is likely to bring benefits in many ways.

Is any of this making you curious? Then you may enjoy reading “Principia Qualia” and other QRI articles.

– Maggie & Anders

Breaking Down the Problem of Consciousness

Below you will find three different breakdowns for what a scientific theory of consciousness must be able to account for, formulated in slightly different ways.

First, David Pearce posits these four fundamental questions (the simplicity of this breakdown comes with the advantage that it might be the easiest to remember):

  1. The existence of consciousness
  2. The causal and computational properties of experience (including why we can even talk about consciousness to begin with, why consciousness evolved in animals, etc.)
  3. The nature and interrelationship between all the qualia varieties and values (why does scent exist? and in exactly what way is it related to color qualia?)
  4. The binding problem (why are we not “mind dust” if we are made of atoms?)


David Pearce’s Four Questions Any Scientific Theory of Consciousness Must Be Able to Answer


Second, we have Giulio Tononi‘s IIT:

  1. The existence of consciousness
  2. The composition of consciousness (colors, shapes, etc.)
  3. Its information content (the fact each experience is “distinct”)
  4. The unity of consciousness (why does seeing the color blue not only change a part of your visual field but, in some sense, change the experience as a whole?)
  5. The borders of experience (also called ‘exclusion principle’; that each experience excludes everything not in it; presence of x implies negation of ~x)


Giulio Tononi’s 5 Axioms of Consciousness


Finally, Michael Johnson breaks the problem down into what he sees as a set of ultimately tractable problems. As a whole, the problem of consciousness may be conceptually daunting and scientifically puzzling, but this framework seeks to paint a picture of what a solution should look like. These are:

  1. Reality mapping problem (what is the formal ontology that can map reality to consciousness?)
  2. Substrate problem (in such an ontology, which objects and processes contribute to consciousness?)
  3. Boundary problem (akin to the binding problem, but reformulated to be agnostic about an atomistic ontology of systems)
  4. Scale problem (how to connect the scale of our physical ontology with the spatio-temporal scale at which experiences happen?)
  5. Topology of information problem (how do we translate the physical information inside the boundary into the adequate mathematical object used in our formalism?)
  6. State-space problem (what mathematical features does each qualia variety, value, and binding architecture correspond to?)
  7. Translation problem (starting with the mathematical object corresponding to a specific experience within the correct formalism, how do you derive the phenomenal character of the experience?)
  8. Vocabulary problem (how can we improve language to talk directly about natural kinds?)


Michael Johnson’s 8 Problems of Consciousness

Each of these breakdowns has advantages and disadvantages, but I think that they are all very helpful and capable of improving the way we understand consciousness. While pondering the “hard problem of consciousness” can lead to fascinating and strange psychological effects (much akin to asking the question “why is there something rather than nothing?”), addressing the problem space at a finer level of granularity almost always delivers better results. In other words, posing the “hard problem” is less useful than decomposing the question into actually addressable problems. By doing so, one is in some sense actually trying to understand rather than merely restating one’s own confusion.

Do you know of any other such breakdown of the problem space?

